22 December 2010

hitting the ground running

I've mentioned creating and injecting a root ssh key into a new instance. The images we run are deployed with ssh enabled and not TCP wrappered; we COULD wrapper them and use the 'add an exception for ssh from a single IP' tool in the PMman web control interface, but we have found the support load fallout from people just getting started is too high. Once they have deployed and hardened a couple of boxes, they 'get it' and can use the 'lock from all' web tool, and then add a single IP, if their taste runs to web tools

The very next step I take with each machine I administer is to run a hardening script. While I have published an outline here, I use a script rather than reading, scraping, and pasting from that outline. This step is done through a script not because I think I WILL forget something, but because I know the script will NOT forget anything, and is written to perform the hardening process in an idempotent fashion -- that is, producing the same end result, time after time. One path to better host security is to have good processes, consciously designed, systematically applied, and continuously improved
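The idempotency idea can be sketched in miniature. Scratch files and a hypothetical `deploy` helper stand in for the real script's curated configs; the actual run of the real script is shown below:

```shell
# idempotent deploy, in miniature: copy only when the content differs,
# so every run converges on the same end state and a second run is a
# no-op. Scratch files stand in for a curated hosts.allow and its
# /etc target; both names and the netblock here are hypothetical.
src=$(mktemp) ; dst=$(mktemp)
echo 'sshd: 203.0.113.0/24' > "$src"
deploy() { cmp -s "$1" "$2" || { cp "$1" "$2" ; echo "updated $2" ; } ; }
deploy "$src" "$dst"      # first run: target differs, so it is updated
deploy "$src" "$dst"      # second run: nothing to do
cmp -s "$src" "$dst" && echo converged
```

A real hardening script applies the same test-then-act shape to each file it pushes and each service it restarts.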

[herrold@centos-5 admin]$ ./hardening.sh hostname.pmman.net
The authenticity of host 'hostname.pmman.net (198.178.231.xyz)' can't be established.
RSA key fingerprint is 86:6e:84:e0:27:57:dd:4d:1f:88:82:fc:42:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hostname.pmman.net,198.178.231.xyz' (RSA) to the list of known hosts.
hosts.allow 100% 488 0.5KB/s 00:00
hosts.deny 100% 390 0.4KB/s 00:00
iptables 100% 1337 1.3KB/s 00:00
sshd_config 100% 3325 3.3KB/s 00:00
README 100% 897 0.9KB/s 00:00
rollup.pem 100% 0 0.0KB/s 00:00
openssl.cnf 100% 9682 9.5KB/s 00:00
arm-pmman.sh 100% 363 0.4KB/s 00:00
sa-update-local-NOTES 100% 877 0.9KB/s 00:00
sa-update-local 100% 117 0.1KB/s 00:00
logwatch.conf 100% 80 0.1KB/s 00:00
rollup.pem 100% 0 0.0KB/s 00:00
Package sendmail-8.13.8-8.el5.x86_64 already installed and latest version
Package 1:make-3.81-3.el5.x86_64 already installed and latest version
Package m4-1.4.15-2orc.x86_64 already installed and latest version
Package iputils-20020927-46.el5.x86_64 already installed and latest version
logwatch.conf 100% 80 0.1KB/s 00:00
Stopping crond: cannot stop crond: crond is not running.[FAILED]
Starting crond: [ OK ]
Shutting down sendmail: [FAILED]
Starting sendmail: [ OK ]
Starting sm-client: [ OK ]
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ]
Stopping auditd: [FAILED]
Starting auditd: [ OK ]
/etc/aliases: 76 aliases, longest 10 bytes, 765 bytes total

info: inspecting /etc/aliases for a root email forwarder off the box
# Person who should get root's mail
#root: marc

info: 1. do you want fail2ban -- if so, run: ./fix-fail2ban.sh hostname.pmman.net

info: 2. updates are not run by this script: consider running:
ssh -l root hostname.pmman.net yum -y -q upgrade --enablerepo=pmman-mail

info: 3. verify that root's email is properly handled

info: 4. now: ssh -l root hostname.pmman.net
cd /root/hardening/

and do some patching and service restarting ...
[herrold@centos-5 admin]$

Note: the IP is obscured, and the host name and ssh host key altered. The edit to add an off-box alias entry for root's email is to centralize all the miscellaneous cron and asynchronous notifications off the box, to a centrally monitored point
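The alias edit itself is a one-liner; a sketch against a scratch copy (the real file is /etc/aliases, and the destination address here is hypothetical):

```shell
# uncomment the root alias and point it off-box; on the real machine,
# edit /etc/aliases itself and then run 'newaliases' to rebuild the
# alias database
aliases=$(mktemp)
printf '# Person who should get root%ss mail\n#root:\t\tmarc\n' "'" > "$aliases"
sed -i -e 's/^#root:.*/root:\t\tadmin@example.com/' "$aliases"
grep '^root:' "$aliases"
```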

Then as noted before, this is a stock CentOS 5 image, and so needs some further tightening done and updates run. I have long since scripted that process:

[herrold@centos-5 admin]$ ./fix-fail2ban.sh hostname.pmman.net
local-fb-fix.sh 100% 256 0.3KB/s 00:00
Stopping fail2ban: [FAILED]
Starting fail2ban: [ OK ]
[herrold@centos-5 admin]$ ssh -l root hostname.pmman.net yum -y -q upgrade --enablerepo=pmman-mail
[herrold@centos-5 admin]$

Then, as suggested, the edits on the remote machine

[root@vm175551137 hardening]# netstat -pant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3641/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2627/sendmail: acce
tcp 0 240 198.178.231.xyz:22 76.242.0.abc:41936 ESTABLISHED 3593/0
[root@vm175551137 hardening]# history
1 cd /etc/
2 joe aliases
3 yum install ipsec-tools
4 rm *~
5 newaliases
6 cd /root/hardening/
7 ls
8 joe /etc/ssh/sshd_config
9 /sbin/service sshd restart
10 ls
11 cp hosts.allow hosts.deny /etc
12 joe iptables /etc/sysconfig/iptables
13 /sbin/service iptables restart
14 netstat -pant
15 history
[root@vm175551137 hardening]# reboot

Broadcast message from root (pts/0) (Wed Dec 22 14:47:10 2010):

The system is going down for reboot NOW!
[root@vm175551137 hardening]# Connection to hostname.pmman.net closed by remote host.
Connection to hostname.pmman.net closed.
[herrold@centos-5 admin]$

I added the ipsec-tools in support of an application this particular unit will be running. The reboot at the end is for good measure, to apply any new kernel and libraries through a clean boot. The 'true' hostname will also be picked up from the DNS PTR records, once those are in place, after a reboot

two minutes on keyed ssh access

On a Linux box, on BSD or a derivative such as OS X, or under Windows with PuTTY, a person can generate a LOCAL keypair to be used for keyed SSH access to such Virtual Machine instances. I have completely moved away from password based external access for all new instances, as carefully managed keys are demonstrably safer

So you know: There is some heat, but not a lot of thoughtful light, about permitting and using root ssh access. Some not well thought out security policies have a phobic avoidance of such. I'll address the matter in a later post, discussing hardening generally, sshd config file hardening, remote syslogs, TCP wrappers, iptables, and dynamic dictionary attack response

For the time being, let's put that security design choice rant to one side, and accept that, at least initially, a PMman instance has already made a short term choice for setting up access and management. This applies to units running CentOS or other distributions following the Red Hat approach to instance management [i.e., not SuSE/SLES, the Debian family, nor the BSDs]

Under such Linux, this looks like this:

$ # create a high strength passphrase
$ # I have written of gen-pw.sh before
$ gen-pw.sh -a
a2Wa4aSaLWkRac
$ cd ~
$ cd .ssh
$ ssh-keygen -t dsa -f hostname.pmman.net.dsa

$ # there is a passphrase prompt here and
$ # we use that: a2Wa4aSaLWkRac
$ # -- also make a record of it in a safe place
$ # -- if one maintains multiple keys per box, it can be a
$ # chore to manage this -- but see: man ssh-agent

$ # this generated ~userid/.ssh/hostname.pmman.net.dsa
$ # [the private part] ... and
$ # ~userid/.ssh/hostname.pmman.net.dsa.pub
$ # [the PUBLIC part]

$ cat ~userid/.ssh/hostname.pmman.net.dsa.pub

and scrape it into your mouse pastebuffer, then proceed to the web interface. The -f file name 'hostname.pmman.net.dsa' is arbitrary, but chosen to be mnemonic

Then add a new stanza to: ~userid/.ssh/config like this:

#
Host hostname32.pmman.net hostname64.pmman.net
# optionally one can make a note of the passphrase here, but
# at the risk of exposing such if a local disk read compromise
# is experienced, or a backup of such falls into untrusted hands
IdentityFile /home/userid/.ssh/hostname.pmman.net.dsa
PasswordAuthentication no
Protocol 2
PubkeyAuthentication yes
#

Note here that a key may be used on more than one host; that is, we can add the same public key into /root/.ssh/authorized_keys on more than one unit -- here, both a 32 bit and a 64 bit instance with similar hostnames. But I get ahead of myself ...

Using a secure means, we need to transfer that public key to the remote instance, and to add it to the right file. Here, we use the SSL protected web interface of the PMman machine management interface for a given machine, in the [more] menu, first item. When the public part is placed into the web form box, the management backend at the datacenter validates it as to form, and then places that public key into root's /root/.ssh/authorized_keys file

At that point, one can then ssh to that remote box as root, accept the host key, and take steps for hardening, adding a working userid, and so forth

13 December 2010

loop -de- loop

As I count it at the moment, I am building and using content from more than eight loop mounted ISOs on a principal NFS, TFTP 'next-server', and FTP server on the internal network

Red Hat has updates for 4.9, and 5.6 in beta; CentOS is in QA on an initial '6' release; I am doing private builds for a Fortune 50 on some backports out of RawHide and from some local packaging; and I am working on a 'back of the envelope' design and test to try to get control of the huge bloat in Red Hat ISO space for installs, to see if I can get a trimmed minimal installer for i386, x86_64, ppc[64] and s390x down to a single piece of CD sized ISO media. Then there is my favorite, the minimal wire install image, which again I package up into an ISO

Going forward, we will see more encrypted filesystems across loop devices, and that will also add load here. It may be time for the kernel folks to consider bumping that limit to 16

As such I regularly crest over the stock eight provided loop devices. To address this without a reboot, one simply has to:

# shutdown all uses of loop devices, so we can remove the module
/sbin/rmmod loop
echo "options loop max_loop=255" > /etc/modprobe.d/loop.local.conf
/sbin/depmod -a
/sbin/modprobe loop

Note: that 'depmod' may not be strictly required, but will in any event be harmless, so I do it -- heck, I still type sync ; sync before rebooting a box, and I KNOW that is not needed any more. The force of habit ...

I add the suffix .conf on that file, because I was scolded by a Debian box a couple of weeks ago on the topic; it seems that they are deprecating the sourcing of files in /etc/modprobe.d/ lacking such. Since when did Linux start using file name suffixes to determine a file's content? -- at least it is not required to be in 8.3 format

Another approach is doing it with hard-coded values at boot time, through a kernel command line option or such


Tip of the hat to Paul Howarth on the SELinux rant I went off on last week. The interaction of loop mounted ISOs and mounts in the FTP space of a filesystem can also be addressed with options to the mount command, and in /etc/fstab with context= choices. He writes and points out:

I use context mounts to avoid it, e.g. in fstab:

/path/to/CentOS-5.5-x86_64-bin-DVD-1of2.iso
   /path/to/dvd1 iso9660
   _netdev,ro,loop,fscontext=system_u:object_r:public_content_t:s0
   0 0

... sorry about the funky line wrapping, but there is just no good way to display really long /etc/fstab entries

10 December 2010

later, bye

CentOS is not for some people -- I get it and there is no sense getting agitated about it

17:20 rictec> regret to informe that i will have to un-install Centos as soon as i find the un-installer
17:21 orc_emac> rictec: we do not publish one, but it is this:
   as root:
   dd if=/dev/zero of=/dev/sda bs=512 count=1
17:21 orc_emac> as it is so easy to remember, we don't bother publishing one

'later, bye'


10 Dec 2010: reformatted due to some rendering issue reports

ripping out the safeties

There have been endless bull session threads on the CentOS main mailing list, nominally on the subjects of SELinux and IPv6, the last couple of weeks. I am just not of a mind to tolerate cr*ppy content on mailing lists anymore. On one such list, a 'regular' whom I identify as '...' had the misfortune of being the 'designated bad example' of the day

a bull jumping the shark

On Fri, 10 Dec 2010, ... wrote:


Make sure iptables is off. Is selinux on and enforcing? If so, reverse that too (disabled).

and ... followed up with yet another ready description on how to disable the safeties

you know -- this is just a poor response all around ...

I understand that the poster produces a lot of content. I know that he is an old bull in the pen there. I also see that he races to post first, and I feel that others do not answer as a result. I see that he carries lots of URLs in all his posts so that Google's spiders crawling through the mailing list archive will widely index his business because it seems to have a lot of links

spider in a world wide web

But quantity is not quality

The response SHOULD have been to sharpen the issue if it was unclear, and to describe a diagnostic flow so that one can BOTH have iptables security and SELinux protections, AND a working system

The question, trimmed to its essence:

I get this error:
(13)Permission denied: make_sock: could not bind to address [::]:8091

(13)Permission denied: make_sock: could not bind to address 0.0.0.0:8091

no listening sockets available, shutting down

Unable to open log

So the first question to know the answer to is:

did the process start and persist?

Check for this thus:

# netstat -pant | grep 8091 

If that command, which looks for a listening socket, returns some content, the process is running and its name is shown

If that command returns nothing, the process in question did not start, and we need to try to manually start it
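That first branch point can be scripted; a sketch, using the port from the question:

```shell
# is anything LISTENing on the port? If not, the failure happened at
# process start, and iptables is not the place to look
PORT=8091
if netstat -pant 2>/dev/null | grep -q ":${PORT} .*LISTEN" ; then
    verdict="port ${PORT} is listening: the process started and persisted"
else
    verdict="nothing on ${PORT}: start the service by hand and read its own logs"
fi
echo "$verdict"
```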

IN NO CASE does iptables PREVENT a process from starting: iptables runs in the kernel, blocking parts of the network stack from passing packets

I note in passing that it is common local folk wisdom in this project that one does not bind to ALL interfaces [0.0.0.0] -- I do not know if that is the case or in play here, but note in passing that the usual practice ** here ** is to bind to a specific interface, or more commonly a specific IP

Such constraints usually indicate a problem in the underlying application [ISO layer 7] not being sufficiently mature to reply to the IP that it received a service request from, and to let the routing tables manage routing at the proper ISO layer

Turning to how to amend iptables for socket based services:

If one needs to have 8091/tcp open (or 8091/udp), OPEN IT PROPERLY

Add to /etc/sysconfig/iptables the following entry:

[herrold@elided iptables]$ diff -u iptables-ORIG iptables
--- iptables-ORIG 2010-12-10 10:15:19.000000000 -0500
+++ iptables 2010-12-10 10:15:48.000000000 -0500
@@ -14,6 +14,8 @@
-A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
+-A RH-Firewall-1-INPUT -p udp -m udp --dport 8091 -j ACCEPT
+-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 8091 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 37 -j ACCEPT
[herrold@elided iptables]$

and then as root run:

# /sbin/service iptables restart 

The rules are of a common form across all Linux variants, and exhaustively documented. There is not any valid reason NOT to manage iptables rules generation and maintenance 'right'
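The same edit can be scripted rather than hand-applied; a sketch against a scratch copy of the rules file (the real path is /etc/sysconfig/iptables, followed by a service iptables restart):

```shell
# insert the two new ACCEPT rules just before the ESTABLISHED,RELATED
# line, mirroring the diff shown above
rules=$(mktemp)
printf -- '-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT\n' > "$rules"
sed -i -e '/ESTABLISHED,RELATED/i\
-A RH-Firewall-1-INPUT -p udp -m udp --dport 8091 -j ACCEPT\
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 8091 -j ACCEPT' "$rules"
grep -c 8091 "$rules"
```

Order matters: rules placed after the terminal REJECT would never match.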


As to SELinux, there is full logging when auditd and restorecond are running. One can identify denials and add rules on the fly, progressively adding 'permit rules' for all SELinux based 'intercepts'

Here is a sample script of general applicability, under the GPLv3+:

[root@elided bin]# cat selinux-fixup.sh
#!/bin/sh
#
# selinux-fixup.sh
# Copyright (c) 2010 R P herrold <info@owlriver.com>
# License: GPLv3+
#
# Additively build SELinux rule sets to investigate what
# a new application needs
#
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
#
# make sure we have all our tools, which may not install
# in a stock CentOS 5 minimal installation
rpm -q diffutils 2> /dev/null || yum -y -q install diffutils
rpm -q audit 2> /dev/null || yum -y -q install audit
rpm -q policycoreutils 2> /dev/null || yum -y -q install policycoreutils
#
/sbin/chkconfig auditd on
/sbin/service auditd restart
/sbin/chkconfig restorecond on
/sbin/service restorecond restart
#
cd /root/bin/
#
/bin/echo "A"
/bin/touch oldlog denial-log
/usr/bin/audit2allow -i denial-log > oldlog
#
# /bin/grep ftp /var/log/audit/audit.log* > /root/bin/ftp_audit.log
/bin/grep "avc: denied" /var/log/audit/audit.log* > /root/bin/denial-log
#
# echo A
# audit2allow -a -M ftpmirror
#
/bin/echo "B"
/usr/bin/audit2allow -i denial-log -M deniallog
#
/bin/echo "C"
/usr/sbin/semodule -i deniallog.pp
#
/bin/echo "D"
/usr/bin/audit2allow -i denial-log > newlog
/usr/bin/diff -u oldlog newlog
#
/bin/echo "E"
/bin/cat deniallog.te
#
/bin/echo "F"
#

which is used iteratively: run the application, and when the candidate under test stops for 'mysterious' reasons (more about 'mysterious' later), re-run the script to see if some new SELinux denial has occurred -- the 'diff' is designed to make perfectly clear what the new denial was, and the script adds and applies the needed allow rule
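The oldlog/newlog comparison at the heart of the script can be seen in miniature with scratch files and hypothetical rule text:

```shell
# any denial that appeared since the last run shows up as a '+' line
# in the unified diff, making the newly needed allow rule obvious
old=$(mktemp) ; new=$(mktemp)
printf 'allow ftpd_t iso9660_t:dir { read getattr search };\n' > "$old"
cp "$old" "$new"
printf 'allow ftpd_t iso9660_t:file { read lock getattr };\n' >> "$new"
diff -u "$old" "$new" | grep '^+allow'
```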

We don't yet KNOW IF there was a SELinux denial, nor, if so, what the denial was, as the sharpening question was not asked; but to wrap matters up

To make a set of local allow rules permanent, see:

$ man 8 semanage 

for the methodology for making such permanent and persistent once the full set are known

But sometimes (here in my example case !! ) the fix NEEDS to be upstreamed so that others using FOSS gain from the fix -- let's examine that next


If the problem occurs in a package from an upstream, one can 'file bugs' against the 'selinux' component, and that group is quite attentive to addressing such

A run of that script looks like this [and in point of fact, at one customer's premises, there ARE some fixes needed with Red Hat's DHCP client and VSFTPD when loop mounted ISOs are used, for which I need to file a couple of bugs]

[root@elided bin]# ./selinux-fixup.sh
diffutils-2.8.1-15.2.3.el5
audit-1.7.17-3.el5
policycoreutils-1.33.12-14.8.el5
Stopping auditd: [ OK ]
Starting auditd: [ OK ]
Shutting down restorecond: [ OK ]
Starting restorecond: [ OK ]
A
B
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i deniallog.pp

C
D
E

module deniallog 1.0;

require {
type iso9660_t;
type ftpd_t;
type iptables_t;
type initrc_t;
class unix_stream_socket { read write };
class lnk_file getattr;
class unix_dgram_socket { read write };
class dir { read getattr search };
class file { read lock getattr };
}

#============= ftpd_t ==============
allow ftpd_t iso9660_t:dir { read getattr search };
allow ftpd_t iso9660_t:file { read lock getattr };
allow ftpd_t iso9660_t:lnk_file getattr;

#============= iptables_t ==============
allow iptables_t initrc_t:unix_dgram_socket { read write };
allow iptables_t initrc_t:unix_stream_socket { read write };
F
[root@elided bin]#

These rules are saying that iptables and ftp need help:

[root@elided bin]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -i eth1 -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p 50 -j ACCEPT
-A RH-Firewall-1-INPUT -p 51 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s a.b.c.0/24 --dport 21 -j ACCEPT
# permit the north data center
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s a.b.c.0/24 --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
[root@elided bin]#

pretty bog standard, but for the:

-A RH-Firewall-1-INPUT -i eth1 -j ACCEPT 

general allow rule on a backside RFC-1918 network on that interface

Note: a.b.c.0/24 is a replacement I did for privacy purposes, as the specific values do not matter

and the loop mounted ISO images:

[root@elided bin]# cat /etc/mtab
/dev/sda1 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-i386-boot.iso
/var/ftp/pub/mirror/redhat/rhel/loop/1 iso9660
ro,loop=/dev/loop0 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-i386-dvd.iso
/var/ftp/pub/mirror/redhat/rhel/loop/2 iso9660
ro,loop=/dev/loop1 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-s390x-dvd.iso
/var/ftp/pub/mirror/redhat/rhel/loop/3 iso9660
ro,loop=/dev/loop2 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-source-dvd1.iso
/var/ftp/pub/mirror/redhat/rhel/loop/4 iso9660
ro,loop=/dev/loop3 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-source-dvd2.iso
/var/ftp/pub/mirror/redhat/rhel/loop/5 iso9660
ro,loop=/dev/loop4 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-x86_64-boot.iso
/var/ftp/pub/mirror/redhat/rhel/loop/6 iso9660
ro,loop=/dev/loop5 0 0
/var/ftp/pub/mirror/redhat/rhel/ISOS/rhel-server-6.0-x86_64-dvd.iso
/var/ftp/pub/mirror/redhat/rhel/loop/7 iso9660
ro,loop=/dev/loop6 0 0
[root@elided bin]#

As a side note, I do not quite understand (it is a mystery to me) how Red Hat would have been able to test 'nightlies' for anaconda based FTP wire installs of loop mounted ISOs without that rule, but perhaps that case was not in their test coverage plan


SELinux has been around for eight years now, I am told, and iptables longer (looking back to the ancestor packet filtering approaches (ipchains !!) under the 2.2 kernel and before) -- no business would run a box all 777 on permissions, or on a root account with no password

This is not Dark Arts or Black Magic, and one does not solve such problems with voodoo methods like shaking a rubber chicken at them. Old dogs need to learn new tricks. Simply ripping out such protections is irresponsible. It is NOT proper sysadmin practice, nor proper development

old dog, old tricks

I see that composing this piece took over two hours, but it is durable content and accurate on the topics of iptables and SELinux for this project. THAT is a better answer than snapping out a quick:

turn off all the safeties

reply, I submit

03 December 2010

Those who cannot remember the past are condemned to repeat it

A member of the trade press, formerly at the Linux Foundation, has speculated at length as to the release date of a CentOS 6. I and at least one other member of the CentOS core group were approached for comment on this topic, coming into the US Thanksgiving holiday. We were discussing how to respond, but we had not issued a formal reply. That writer went to press with a piece that expresses a date not of any formal CentOS origin or estimate. His words, his choice, his opinion, and nothing more

Here is a statement which is perhaps more accurate:

CentOS really doesn't do pre-release interviews as to release dates and process, other than what anyone may read in and infer from the 'centos-devel' mailing list. Any CentOS 6 series will ship when it is ready and will be available when it is announced

CentOS is the successor in part by merger of Tao Linux ('Hi, David'). This quote comes from the Tao

Those who know, do not speak; those who speak, do not know

Coping with xz under the RPM tools in CentOS 5

So there I am, minding my own business, building a SRPM from Red Hat's 'rawhide' archive, and it fails. They are cutting over to 'xz' compression for the tarballs they ship. Their archive, and so their call, and not the end of the world

The symptom shows up when rpmbuild goes to uncompress such:

...
+ rm -rf clamav-0.96.4
+ tar -xf /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar.xz
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: Read 6508 bytes from /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar.xz
tar: Error exit delayed from previous errors
error: Bad exit status from /var/tmp/rpm-tmp.36477 (%prep)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.36477 (%prep)
[herrold@centos-5

The error messages could be better, but the 'magic numbers' for the newer compression format are not yet known to the 'file' program available to the rpmbuild that ships with CentOS 5. Progress is like that, and so until and unless Red Hat backports support into its RHEL sources, CentOS will not pick up the fix in its version 5 mainline
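The 'magic number' point is easy to see first-hand; a small demonstration (the byte values come from the gzip and xz format specifications):

```shell
# the first bytes of a compressed stream identify its format: gzip
# begins 1f 8b, while xz begins fd 37 7a 58 5a ("7zXZ"). A magic
# database that predates xz simply cannot classify the newer stream
printf 'demo\n' | gzip -c | od -An -tx1 -N2
```

The two bytes printed are 1f 8b, the gzip signature.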

One perfectly suitable response is to use the RPM5 branch of the package manager, which DOES know. But some people cannot relax that constraint for various non-technical reasons

This issue is rather like the old cutover from md5sums to shasums which RPM did a while ago, and that I wrote about

The fix is straightforward:

  1. Install the compressed tarball, spec file and any patches with rpm in the usual fashion
  2. Uncompress from the unknown compression format and re-compress with a known one
  3. Amend the spec file; here, I use grep to look, and as there is just one edit, sed to edit
  4. Rebuild using the '-ba' option from the revised .spec file with the tools of the target environment (here, CentOS 5)
  5. The resulting SRPM will be portable and as a result of the second step, uses a known compression

Let's look:

[herrold@centos-5 clamav]$ unxz /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar.xz
[herrold@centos-5 clamav]$ gzip /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar
gzip: /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar.gz already exists; do you wish to overwrite (y or n)? y

[herrold@centos-5 clamav]$

That question about over-writes happened because it appears the sources from a prior build of clamav-0.96.4 were not re-rolled into a 0.96.5 tarball by the upstream packager at RawHide, but may have been patched instead. I've not expressly looked

[herrold@centos-5 clamav]$ cp ~/rpmbuild/SPECS/clamav.spec .
[herrold@centos-5 clamav]$ rpmbuild -ba clamav.spec
error: File /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar.xz: No such file or directory
...

As we have not yet fixed the .spec file, this was expected, but is shown here so the diagnosis path is clear

[herrold@centos-5 clamav]$ grep xz clamav.spec
Source0: %name-%version%{?prerelease}-norar.tar.xz
[herrold@centos-5 clamav]$ sed -i -e 's@xz@gz@g' clamav.spec

And now the .spec file is ready as well

[herrold@centos-5 clamav]$ rpmbuild -ba clamav.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.47404
+ umask 022
+ cd /home/herrold/rpmbuild/BUILD
+ LANG=C
+ export LANG
+ unset DISPLAY
+ cd /home/herrold/rpmbuild/BUILD
+ rm -rf clamav-0.96.4
+ /bin/gzip -dc /home/herrold/rpmbuild/SOURCES/clamav/clamav-0.96.4-norar.tar.gz
+ tar -xf -
...
checking host system type... x86_64-redhat-linux-gnu
checking target system type... Invalid configuration `noarch-redhat-linux-gnu': machine `noarch-redhat' not recognized
configure: error: /bin/sh config/config.sub noarch-redhat-linux-gnu failed
error: Bad exit status from /var/tmp/rpm-tmp.86669 (%build)
...

The build fails for other reasons out of scope for this post, in that a new configure 'target' is emitted. This is similar to the compression format addition, but a different problem, solved elsewhere. Such a change is another part of distribution and brand management matters on Red Hat's part. I'll note the solution (putting to one side seriously amending the rpm build environment macros, which is a 'one way' path into later versions) in a later post

Once all the changes are done, and the 'scratch' test builds and installs cleanly, I go in with an editor, manually bump the Release value by one, and add a note in the changelog stanza. Then I repeat the build 'for record', signing, and distribution. The Release 'bump' is needed so the NEVR (Name, Epoch, Version, and Release) comparison, which librpm does and which yum calls through librpm, can detect in due course that a later, updated version is in an updates repository
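The manual Release bump can also be scripted; a sketch against a hypothetical spec fragment:

```shell
# raise Release by one so the NEVR comparison ranks the rebuild as
# newer than the package already in the repositories
spec=$(mktemp)
printf 'Name: clamav\nVersion: 0.96.4\nRelease: 1%%{?dist}\n' > "$spec"
rel=$(sed -n 's/^Release:[[:space:]]*\([0-9]*\).*$/\1/p' "$spec")
sed -i -e "s/^Release:[[:space:]]*[0-9]*/Release: $((rel + 1))/" "$spec"
grep '^Release:' "$spec"
```

On a real spec one would also add the matching %changelog entry, as noted above.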

All done

14 October 2010

arrogance, personified

I see this in the overnights from the OpenJDK front:
IBM joins the OpenJDK community, will help unify open source Java efforts, with the salient 'pull quote'

It became clear to us that first Sun and then Oracle were never planning to make the important test and certification tests for Java, the Java SE TCK, available to Apache

This dovetails with my prior post

It's official -- Oracle is a profit maximizer, and does not care a hoot about FOSS except to dictate what is best for it. But then you already knew that, right?

Not to sound arrogant, but we know how to deal with the Linux community
... naw, that does not sound arrogant, at all, Wim

08 October 2010

checklist RO rsync server

Setting up a new RO RSYNC server setup

The primary usage case we describe is how to deploy a read-only RSYNC server with no end user accounts, to be used for distribution of content (here, to move a builder result archive that is intentionally NOT 'visible' from the internet to a more capable transfer server). From there, the content is integrated into an internal archiving server, and after that into a publicly accessible binary archive, accessible through ftp, rpm, or yum

As before, we start with a freshly deployed and hardened PMman instance. At all times, we will strive to follow proper sysadmin 'best practices' discipline under SELinux, wrappers, and iptables

Install and enable rsync, the package holding the stock rsync daemon. As rsync supports wrappers, we also need xinetd, the package holding the stock inetd replacement in recent Red Hat derived distributions. Let's get started:

  1. yum can do the install trivially

    yum install rsync xinetd
  2. Then enable the needed services:

    /sbin/chkconfig rsync on
    /sbin/chkconfig xinetd on
  3. We need to do some configuration for the rsync daemon as to permissions and directories to serve:

    [root@trap64 etc]# cd /etc
    [root@trap64 etc]# cat rsyncd.conf
    # motd file = /etc/rsyncd.motd
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    lock file = /var/run/rsync.lock

    [trap64]
    path = /var/ftp/pub/local
    comment = x86_64 fruit
    uid = nobody
    gid = nobody
    read only = yes
    list = yes
    # auth users = username
    # secrets file = /etc/rsyncd.scrt
    hosts allow = 10.0.0.0/24 127.0.0.0/24
    hosts deny = 0.0.0.0/0

    [root@trap64 etc]#
  4. Set up the iptables -- I do not recall the rsync daemon port off the top of my head, so I look it up:

    [root@trap64 etc]# grep rsync /etc/services | head -2
    rsync 873/tcp # rsync
    rsync 873/udp # rsync
    [root@trap64 etc]#

    ... so the port is 873

    # localhost can do all ...
    -A RH-Firewall-1-INPUT -i lo -j ACCEPT
    # ...
    # rsync daemon
    -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/8 --dport 873 -j ACCEPT
    # ...
  5. Open the wrappers

    #
    ALL: ALL@127.0.0.1
    #
    # ...
    #
    rsync: ALL@10.0.0.0/255.0.0.0
    #
  6. Restart the wrappers enforcing daemon

    [root@trap64 sysconfig]# /sbin/service xinetd restart
  7. Test it:

    [root@trap64 sysconfig]# rsync localhost::
    trap64 x86_64 fruit
    [root@trap64 sysconfig]#
  8. To put it into production on a client, we can use something like this:

    #!/bin/sh
    #
    # this file: /root/bin/update-archive.sh
    # Copyright (c) 2010 R P Herrold
    # License: GPLv3+
    #
    # ln -s /root/bin/update-archive.sh /etc/cron.hourly/
    #
    export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
    umask 022
    # -- non local content goes into the mirror constellation
    [ ! -e /var/ftp/pub/mirror/pmman/RPMS/x86_64/ ] && \
    mkdir -p /var/ftp/pub/mirror/pmman/RPMS/x86_64/
    #
    # export VERBOSE="-v "
    export QUIET="-q "
    #
    /usr/bin/rsync -a ${VERBOSE} ${QUIET} --exclude=working \
    trap64.darkside.lan::trap64/pmman/RPMS/x86_64/. /var/ftp/pub/mirror/pmman/RPMS/x86_64/.
    chown -R root.root /var/ftp/pub/mirror/pmman/RPMS/x86_64
    #
  9. All done


Earlier in this series:

hardening: http://www.pmman.com/usage/hardening/
lftp: http://orcorc.blogspot.com/2010/08/mirroring-upstream-master-with-lftp-to.html
RO vsftpd: http://orcorc.blogspot.com/2010/07/checklist-ro-ftp-server-setup.html
RO NFS: http://orcorc.blogspot.com/2010/08/nfs-aide-to-memory.html

06 October 2010

too funny

I mentioned in my last blog post:

More importantly, it seems that the sending email account webteam (at) bhphotovideo.com is unmonitored, although one has to assume an e-commerce vendor DOES have a 'webteam'. How curious

Having an unmonitored sending role account is fine, of course; driving responses into a webbish workflow is fine as well; but why not use an obviously unmonitored account name, like noreply@ ... or unmonitored@ ..., instead?

After I submitted feedback through their web workflow form, I received a confirmation email containing this:

If for any reason you have any additional questions, thoughts or comments, please feel free to email us at webmaster (at) bhphotovideo.com, as we would be happy to hear from you.

So ... the right hand does not know what the left is doing?

Lost password #FAIL

The file with my old saved password (a strong one: see a prior post on the topic) for an e-commerce site was inadvertently deleted. There was no particular reason to chase it out of the backups, as the site had a 'lost password' mailer. And so, I had occasion to use the 'lost password' link of that site today

Date: Wed, 6 Oct 2010 12:13:36 -0400 (EDT)
From: webteam (at) bhphotovideo.com
To: herrold (at) ...
Subject: Your Password from bhphotovideo.com
----------------------------------------

Dear Russell Herrold

Thank you for your inquiry. Here's your password:

t3f38RbMMweRhg

We look forward to your next visit to our site. Please feel free
to let us know if there's any other way we may assist you.

Thank you,
The B&H Web Team
www.bhphotovideo.com
NNN Ninth Avenue
New York, NY 10001, USA
800-606-asdf
212-444-qwer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is an automated email response and cannot be replied to.

A couple of threshold matters first: I changed the password value they sent me to something similar. More importantly, it seems that the sending email account webteam (at) bhphotovideo.com is unmonitored, although one has to assume an e-commerce vendor DOES have a 'webteam'. How curious

Having an unmonitored sending role account is fine, of course; driving responses into a webbish workflow is fine as well; but why not use an obviously unmonitored account name, like noreply@ ... or unmonitored@ ..., instead?

Back to the topic at hand. That is: the 'lost password' mailer sent me my prior password, unhashed and in plaintext, and the site did NOT require an immediate change of credential when I used it to log in

There was a 'feedback form' on their site, and so I sent along this:

email is inherently insecure as it cannot be protected from being read by people 'along the way' on the transfer (such as the ISP of the server that received the email)

Sending a 'reset your password' one-time link, and noting a credential change in a permanent part of an account history, is pretty basic

Not having this in your user account management interface, and sending a prior password in plaintext, are big red warning signs. I am left to wonder: would they also disregard the credit card data security [CISP, now PCI] 'hashing' and 'no saved plaintext credentials' restrictions? ;(

There is an old saying: A chain is only as strong as its weakest link ... I think we found a weak one here
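A minimal sketch of the one-time reset-token approach described above, in shell. The URL, token lifetime, and storage comments here are illustrative assumptions on my part, not anything the site in question does:

```shell
#!/bin/sh
# sketch: mint a single-use password-reset token (all names illustrative)
TOKEN=$(openssl rand -hex 16)            # 128 bits of randomness
EXPIRY=$(( $(date +%s) + 3600 ))         # token valid for one hour
# a sane server stores only a HASH of the token alongside the expiry,
# mails the link below, and invalidates the token on first use --
# the prior password is never recoverable, let alone mailed in the clear
echo "https://shop.example.com/reset?token=${TOKEN}"
```

Nothing about the flow requires keeping a plaintext credential anywhere.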

27 September 2010

Unit test shepherds

I read with interest over the weekend this unit testing and TDD blog post from Douglas Hubler. I met him in real life a few weeks ago, up in Chicago at the annual ClueCon, and was very impressed

I tracked down his email address and started to write a private email, but then as I re-read my draft and his piece, I noticed that it was a 'talking draft' by him. As such, I decided to surface my thoughts here


Hi -- Russ herrold (ex CentOS) here -- we met at cluecon

You put your finger on the problem well here:

"Project Maintainers" were always in fear of holding the bag on contributions that introduced bugs while not advancing their employer's goals

which is the well known 'capture by the employer problem' in FOSS. I am not saying (and would never suggest) that employer sponsorship of an interested 'Project Maintainer' is undesirable -- just the opposite, as it funds getting SOME motion in some cases (i.e., when it suits the employer's goals, or is not a clear 'CLM' -- career limiting move). Of course this path leads to 'freeze ups' similar to what we see in Debian Stable, where nothing short of dynamite (or a working remote exploit) seems to work to pry some forward progress into the main trunk

I put on my 'agile' thinking cap, to scope out the implications of your post

To work, the "Unit Test Shepherds" need a global mandate to commit at least unit tests, via a Version Control System, and there needs to be a working Continuous Integration server. If a committed test 'breaks the build', either the test is wrong, or the code is wrong. As the first response to 'breaking the build', the CI server has to revert the test and file an exception report, to be owned by the UTS in the first instance, with a CC to the PM

This gets a 'heads up' in front of the PM, and a careful UTS will at a minimum either: 1) acknowledge that the test was ill-considered, withdraw it, and close the bug; 2) amend the test code to correct the misunderstanding that resulted in a broken build and re-attempt the commit [closing the bug, with the possibility of a 're-entry' of a new bug on the revised test]; or 3) add documentation to the bug filing that indicates why the test is right [perhaps something as simple as pointing to a release target milestone, or part of the Requirements document], in preparation for handing the bug off to the PM's queue (staying on the bug as a CC)

One problem is that when there is only a single PM, there is also only a single point of blockage, and 'real life' intervening, or a work-plan to do a substantial refactoring (perhaps even already partially working in a private tree), or even a non-public agenda on the part of one's employer, may prevent the PM from ** wanting ** to respond 'just now' when a well-formed test and bug report gets dropped on them ...

... but if the unit test is 'right', usually it is proper to add it to the test suite. I put to one side whether one should run all unit tests every pass; tests do rot, and one may well need to trim obsolete tests away, or refactor old ones to match code reorganizations. Clearly one answer when the suite gets 'too big' is to start prioritizing, adding stochastic selection to generally omit tests related to rarely encountered failure modes, and so forth

But a well written test never fully 'goes away' by default. At some predictable interval, of course, the 'full boat' of ALL tests, as well as more rigorous end to end functional tests are needed. Beck's TDD book glosses over this to some extent as his focus was development, but 'testing' means much more than 'unit testing'

One additional avenue toward a solution would be to convert the single PM 'person' into a trellised PM 'role' or 'team' containing two or more non-affiliated project members

By and large, FOSS works better when there is a consensus approach to management of a resource. It is basic group dynamics that achieving consensus is easier in a small team, able to consult in the 'stand up' five minute meeting, and to come to a tactical 'what is the simplest thing that we can do' to conform to a well-formed test, write (or adopt) the unit test, apply it, and move on ;) With only two people in the PM trellis, or a senior and a junior relationship, the group dynamics may result in impasse, which is visible to the 'outsider' UTS only as 'nothing is happening on this bug'

Lots of inter-person political approaches exist here, but ultimately, in most projects, there is an agreed-to Release Manager team with global commit rights, which has to be willing and able to 'take up the reins', intervene to intentionally 'break the build' in HEAD when an impasse continues 'too long' [I assume here a model of a stable release, and a developmental HEAD], and force the PM to respond (perhaps by relinquishing participation as a co-PM)

I don't have an obvious candidate solution to suggest here, as many approaches are possible, and I've seen the issue as a project lead, as well as a mere participant, and sometimes simply as a concerned onlooker

22 September 2010

lost in the bowels of Google Groups

A post I made earlier today to a mailing list seems to have been held up for an hour, even though I am a subscriber to the mailing list in question, and have proper, meticulously preened DNS A, PTR, MX, and even TXT records, publishing SPF details properly because of prior problems with Google's mail service erroneously marking some pieces as 'spammy' in the past ...

Received: by 10.90.14.22 with SMTP id 22mr127029agn.36.1285171616911;
Wed, 22 Sep 2010 09:06:56 -0700 (PDT)
X-BeenThere: puppet-users@googlegroups.com
Received: by 10.91.83.8 with SMTP id k8ls391483agl.0.p; Wed, 22 Sep 2010
09:06:54 -0700 (PDT)
Received: by 10.150.51.21 with SMTP id y21mr255924yby.58.1285171614696;
Wed, 22 Sep 2010 09:06:54 -0700 (PDT)
Received: by 10.229.192.137 with SMTP id dq9mr33711qcb.14.1285167411800;
Wed, 22 Sep 2010 07:56:51 -0700 (PDT)
Received: by 10.229.192.137 with SMTP id dq9mr33709qcb.14.1285167411749;
Wed, 22 Sep 2010 07:56:51 -0700 (PDT)
Received: from bronson.owlriver.com (bronson.owlriver.com [198.49.244.50])
by gmr-mx.google.com with ESMTP id
c41si5677929qcs.12.2010.09.22.07.56.51;
Wed, 22 Sep 2010 07:56:51 -0700 (PDT)
Received-SPF: pass (google.com: best guess record for domain of
herrold@owlriver.com designates 198.49.244.50 as permitted sender)
client-ip=198.49.244.50;
Received: from localhost (localhost.localdomain [127.0.0.1])
by bronson.owlriver.com (8.13.8/8.13.8) with ESMTP id o8MEumOR020433
(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
Wed, 22 Sep 2010 10:56:49 -0400
Date: Wed, 22 Sep 2010 10:56:48 -0400 (EDT)

... anti-spam measures, one assumes. I understand taking such measures, but sure wish the scoring 'downticks' Google was marking were published and findable (compare, to the good: AOL's current practices)
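For reference, the SPF details mentioned above are published as a DNS TXT record. A hedged, illustrative zone-file fragment (the names and address are examples, not my actual records) might look like:

```
example.com.    IN TXT  "v=spf1 mx a:mail.example.com ip4:198.51.100.7 -all"
```

The `-all` at the end tells receivers to reject mail from any host the record does not designate.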

But then, I am told from time to time that my world view and some of my approaches are 'too utopian'. Humph -- a little bit 'utopian' is all right, but one can overdo it? Who knew?

Change control in operations

This crossed the puppet-users mailing list earlier today:

We have an engineering environment of around 200 CentOS servers, plus a production environment of roughly the same size. Currently, when we roll out a new server, we do a 'yum update' so the new server has the latest packages; however this means that just about every server has a different set of package versions - a system rolled out today will have different versions from one rolled out last month, and that will have different versions from one rolled out last year.

...

Has anybody else been faced with this problem, and if so, how did you resolve it?

Let's consider just the problem of 'package version skew' in operations, and come up with a solution for it. [The questioner is also 'starting' with a couple of deployment targets that vary over time because of a poorly considered 'start image' creation. An obvious approach there is to have a couple of stable base deployment images, and a set of defined transforms to produce a basic engineering workstation or server to specification; that part is largely uninteresting here]

  1. Set up a local mirror of the CentOS external mirrors, and call it 'incoming'
  2. Optionally, set up a sub-mirror of 'incoming' called 'vault', mirrored in a fashion that does NOT delete old content no longer present on 'incoming'
  3. Set up a third mirror called 'testing', which 'picks and chooses' selected packages to test, and their dependencies (see the package yum-utils for tools to confirm that one has 'closure' of those dependencies)
  4. Test on your pre-deployment 'bench' against 'testing' until you have a change-set you wish to deploy throughout the universe of your boxes under management. Obviously, several 'testing' mirrors can be set up, for differing classes of machines
  5. FINALLY, have a master distribution mirror called 'rtm' that has a change-set from a 'testing' mirror deployed to it. Remove the stock repository specification files from
            /etc/yum.repos.d/ 
    and deploy local variants to taste, that point at 'rtm'. Again, several 'rtm' mirrors can be set up, for differing classes of machines

Something like this, to ensure coherency of an enterprise-wide deployment, is usually mandated by a Change Control Board (explicitly, or implicitly). Obviously, other aspects of an IT policy document will attend to getting the various mirrors properly recoverable in one's backup strategy. [There, the 'testing' mirrors are often NOT covered, as they are ephemeral as to their usefulness, and recoverable out of 'vault' (top down) or from an 'rtm' (bottom up)]
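The 'rtm' cut-over in step 5 amounts to dropping a local repository definition into /etc/yum.repos.d/ in place of the stock files. A minimal sketch, assuming a hypothetical internal mirror host named rtm.example.lan (the hostname, paths, and repo id are illustrative, not from any real deployment):

```shell
#!/bin/sh
# sketch: stage a repo definition pointing at the 'rtm' mirror.
# We stage under /tmp here so the sketch is safe to run; on the
# managed hosts the file belongs in /etc/yum.repos.d/
REPODIR="${REPODIR:-/tmp/staged-repos}"
mkdir -p "$REPODIR"
cat > "$REPODIR/local-rtm.repo" <<'EOF'
[local-rtm]
name=Local rtm mirror -- approved change-sets only
baseurl=http://rtm.example.lan/centos/5/os/$basearch/
enabled=1
gpgcheck=1
EOF
```

With the stock repo files removed and only this one present, every box draws the same approved change-set, and the version skew disappears.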

21 September 2010

sitting in great connectivity ...

... sure makes a difference, seemingly

I do daily checkouts from the FreeSwitch project, and run the same build script on a CentOS box inside our local network (which is nominally down a data link that is 3 x T-1 wide), and another that is up at a data center, and has the ability to sustain a 3.5 GByte/sec transfer rate indefinitely (it has been the disaster failover site for the periodic 'Victoria's Secret' soft pr0n 'strut their stuff' webcast)

I synchronized builds on the two boxes yesterday, so they happened to be at the exact same checkout level from upstream's version control system. Today, I opened a couple of consoles, and fired off the build commands within a second of one another. The first part of that script is to check out current to HEAD, and then it heads off into the builds. I've marked the two units in alternating colors so the comparisons stand out better

Unit A:

Unpacking objects: 100% (38/38), done.
From git://git.freeswitch.org/freeswitch
184f395..f7d16ec master -> origin/master
Updating 184f395..f7d16ec
Fast-forward
libs/freetdm/src/include/private/ftdm_types.h | 2 +-
src/mod/applications/mod_spandsp/mod_spandsp_fax.c | 6 +-
src/mod/codecs/mod_codec2/Makefile | 14 ++
src/mod/codecs/mod_codec2/mod_codec2.c | 161 ++++++++++++++++++++
src/mod/endpoints/mod_sofia/mod_sofia.c | 23 +++
src/mod/endpoints/mod_sofia/mod_sofia.h | 1 +
src/mod/endpoints/mod_sofia/sofia_glue.c | 21 +++
src/switch_ivr.c | 4 +-
8 files changed, 226 insertions(+), 6 deletions(-)
create mode 100644 src/mod/codecs/mod_codec2/Makefile
create mode 100644 src/mod/codecs/mod_codec2/mod_codec2.c

real 0m1.105s
user 0m0.425s
sys 0m0.090s
/home/herrold/vcs/git/freeswitch

Unit B:

Unpacking objects: 100% (38/38), done.
From git://git.freeswitch.org/freeswitch
184f395..f7d16ec master -> origin/master
Updating 184f395..f7d16ec
Fast-forward
libs/freetdm/src/include/private/ftdm_types.h | 2 +-
src/mod/applications/mod_spandsp/mod_spandsp_fax.c | 6 +-
src/mod/codecs/mod_codec2/Makefile | 14 ++
src/mod/codecs/mod_codec2/mod_codec2.c | 161 ++++++++++++++++++++
src/mod/endpoints/mod_sofia/mod_sofia.c | 23 +++
src/mod/endpoints/mod_sofia/mod_sofia.h | 1 +
src/mod/endpoints/mod_sofia/sofia_glue.c | 21 +++
src/switch_ivr.c | 4 +-
8 files changed, 226 insertions(+), 6 deletions(-)
create mode 100644 src/mod/codecs/mod_codec2/Makefile
create mode 100644 src/mod/codecs/mod_codec2/mod_codec2.c

real 0m15.607s
user 0m0.168s
sys 0m0.096s
/home/herrold/vcs/git/freeswitch

One box is running an i386 (PAE) kernel, and the other x86_64; memory is somewhat smaller on the x86_64 box. The 'horsepower' of each is roughly the same

Unit A:

[herrold@centos-5 ~]$ ssh freeswitch.pmman.com uname -a
Linux freeswitch.pmman.com 2.6.18-194.11.3.el5PAE #1 SMP Mon Aug 30 17:02:48 EDT 2010 i686 i686 i386 GNU/Linux
[herrold@centos-5 ~]$ ssh freeswitch.pmman.com free
total used free shared buffers cached
Mem: 6226068 4427212 1798856 0 303156 3936312

Unit B:

[herrold@centos-5 ~]$ uname -a
Linux centos-5.first.lan 2.6.18-194.11.3.el5xen #1 SMP Mon Aug 30 16:55:32 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[herrold@centos-5 ~]$ free
total used free shared buffers cached
Mem: 3072000 3036352 35648 0 291852 1790652

Unit A:

[herrold@centos-5 ~]$  ssh freeswitch.pmman.com dmesg \| grep -i bogo
Calibrating delay loop (skipped), value calculated using timer frequency.. 3990.15 BogoMIPS (lpj=1995079)
Calibrating delay using timer specific routine.. 3990.04 BogoMIPS (lpj=1995020)
Total of 2 processors activated (7980.19 BogoMIPS).

Unit B:

[herrold@centos-5 ~]$ dmesg | grep -i bogo
Calibrating delay using timer specific routine.. 6652.60 BogoMIPS (lpj=13305207)

Unit A:

[herrold@centos-5 ~]$ ssh freeswitch.pmman.com cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5130 @ 2.00GHz
stepping : 6
cpu MHz : 1995.224
cache size : 4096 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl vmx tm2 ssse3 cx16 xtpr lahf_lm
bogomips : 3990.44

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU 5130 @ 2.00GHz
stepping : 6
cpu MHz : 1995.224
cache size : 4096 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl vmx tm2 ssse3 cx16 xtpr lahf_lm
bogomips : 3990.02

Unit B:

[herrold@centos-5 ~]$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 CPU 6700 @ 2.66GHz
stepping : 6
cpu MHz : 2660.050
cache size : 4096 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu tsc msr pae cx8 apic mtrr cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc pni est ssse3 cx16 lahf_lm
bogomips : 6652.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 CPU 6700 @ 2.66GHz
stepping : 6
cpu MHz : 2660.050
cache size : 4096 KB
physical id : 1
siblings : 1
core id : 0
cpu cores : 1
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu tsc msr pae cx8 apic mtrr cmov pat clflush acpi mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc pni est ssse3 cx16 lahf_lm
bogomips : 6652.60
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

But ...

Unit A:

Wrote: /home/herrold/rpmbuild/RPMS/i386/freeswitch-sounds-0.0.20100921.git-1.i386.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.3898
+ umask 022
+ cd /home/herrold/rpmbuild/BUILD
+ cd freeswitch-20100921
+ '[' /var/tmp/freeswitch-0.0.20100921.git.root '!=' / ']'
+ rm -rf /var/tmp/freeswitch-0.0.20100921.git.root
+ exit 0

real 17m56.699s
user 13m18.982s
sys 3m10.880s

real 24m50.468s
user 18m2.521s
sys 4m56.827s
[herrold@freeswitch freeswitch]$

Unit B:

Wrote: /home/herrold/rpmbuild/RPMS/x86_64/freeswitch-sounds-0.0.20100921.git-1.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.90424
+ umask 022
+ cd /home/herrold/rpmbuild/BUILD
+ cd freeswitch-20100921
+ '[' /var/tmp/freeswitch-0.0.20100921.git.root '!=' / ']'
+ rm -rf /var/tmp/freeswitch-0.0.20100921.git.root
+ exit 0

real 27m27.666s
user 8m27.160s
sys 3m25.909s

real 48m25.064s
user 11m34.027s
sys 5m15.264s
[herrold@centos-5 freeswitch]$

That is, the older, 2 GHz Xeon is running away from the newer 2.66 GHz Core 2 Duo. Quite the discrepancy there, but the numbers don't lie. Perhaps it is due to the local load of being an X desktop on 'centos-5' [no local xen domU are presently running on it], versus NOT running X on the remote server. Interesting 'food for thought' of a problem to research, as to the whys and wherefores of causation

12 September 2010

What do you discuss?

Great minds discuss ideas
Average minds discuss events
Small minds discuss people
  -- Eleanor Roosevelt

09 September 2010

office background noise

A question in IRC: Do you listen to music online?

17:07 =orc_orc> xmms is playing: Peshay / Pacific atm
17:08 =orc_orc> the library has more to 'rip' than I will ever be able to grow
tired of, for free
17:08 =orc_orc> NFS makes the OGG files available freely, throughout the LAN
17:08 =orc_orc> (through an RO export)

As I recall, I used 'grip', built under CentOS 4, to populate that music archive, which xmms randomly wanders through

07 September 2010

an interesting forgery

It is quite common for an online service provider to suggest that an end user add the provider's 'email sending address' to an address book or whitelist, so that spam filters let pieces from known senders through

This piece came in. Here are the headers:

Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
bronson.owlriver.com
X-Spam-Level:
X-Spam-Status: No, score=-87.1 required=4.0 tests=BAYES_05,
HTML_IMAGE_ONLY_24,
HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_BL_SPAMCOP_NET,RCVD_IN_PSBL,
SPF_HELO_PASS,
T_SURBL_MULTI1,T_SURBL_MULTI2,T_SURBL_MULTI3,T_URIBL_BLACK_OVERLAP,
URIBL_BLACK,URIBL_DBL_SPAM,URIBL_JP_SURBL,URIBL_OB_SURBL,URIBL_SC_SURBL,
URIBL_WS_SURBL,USER_IN_WHITELIST autolearn=no version=3.3.1
Received: from shadow.apd.hu (shadow.apd.hu [195.70.36.72])
by bronson.owlriver.com (8.13.8/8.13.8) with SMTP id o8224mbp009823
for <rpm@owlriver.com>; Wed, 1 Sep 2010 22:04:50 -0400
Date: Thu, 2 Sep 2010 04:04:49 +0000
From: Twitter <twitter-notification-rpm=owlriver.com@postmaster.twitter.com>
Reply-To: noreply@postmaster.twitter.com
To: rpm@owlriver.com
Message-Id: <6aba5bca4c284_51e06cbd75096ceb8@mx001.twitter.com.tmail>
Subject: You have 5 unread direct messages from Twitter!
Mime-Version: 1.0
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: Quoted-printable
Content-Disposition: inline
X-Campaignid: twitter20100902312977
Errors-To: Twitter
<twitter-notification-rpm=owlriver.com@postmaster.twitter.com>
Bounces-To: Twitter
<twitter-notification-rpm=owlriver.com@postmaster.twitter.com>
X-Envelope-To: rpm@owlriver.com
X-Munge: added X-Envelope-To
X-Orig-Subject: You have 5 unread direct messages from Twitter!
X-Loop: herrold@owlriver.com
X-ORC: antiloop

The body is heavily obfuscated HTML, but the clear text is:

HI, RPM.

You have 5 unread direct messages from Twitter!
http://twitter.com/account/messages/rpm/RKQYA-KU4GO-417167
[medicinete.info]

The Twitter Team

If you received this message in error and did not sign up for a
Twitter account, click not my account [medicinete.info].

Please do not reply to this message; it was sent from an unmonitored
email address. This message is a service email related to your use of
Twitter. For general inquiries or to request support with your
Twitter account, please visit us at Twitter Support
[medicinete.info].

Clever enough -- the "[medicinete.info]" is added by my MUA (Mail reading User Agent), alpine, and so the link to a forged site is obvious. But the use of the forged sender address, and the fact that I have a global 'whitelist' pass rule on that mail server, rather than per-user pass rules for the custom spamassassin on this CentOS 5 box, means that the forgery was treated as though it was from a trusted sender and favorably scored 100 points

Of course there IS no such user 'rpm' here sending email; that address was scraped off a web page in the domain, and so it draws content from hopeful spammers

05 September 2010

"Okay, not a problem"

It drives me nuts, in a store or when contacting telephone support somewhere, when the clerk or call center denizen replies to my social courtesy of thanking them for some service with:

Okay, not a problem

D*mn it -- in such a circumstance, I have usually just made a purchase, or have previously paid good money to get their firm's attention. I could not care less whether they were pleased not to have had to work hard doing their appointed tasks. I know darn well they are drawing some salary to boot

I rather feel that I am entitled, instead, to:

Thank you

or perhaps,

You are welcome and it was a pleasure

as the back and forth of the interaction suggests

Oh, yes, and "No worries" usually works just about as well with me, except when used as an affirmation that all is well

26 August 2010

living at HEAD

Mark Shuttleworth gave a keynote a few OLS conferences ago, proposing that some folks need to 'live at HEAD' and file the bleeding-edge bugs early, so nasty stuff gets fixed early, and good stuff flows down

The assumption is that the need for patches, except for branding and the like, will fade away, and the character of Open Source code improves with 'a rising tide that floats all boats'. Apple is pretty far down this road already with Clang/LLVM

I already build much from VCS nightlies anyway (including clang), and it is simple enough to first solve a self-hosting ISO builder, then continue to populate leaf nodes to taste: auto-reinstall daily, preserving configs, or detecting breakages and writing idempotent config migrators as needed

File bugs ruthlessly, or at least relentlessly

Wire in valgrind and llvm; drill in unit tests and end-to-end functional tests, buildbots, LSB conformance testers, and more; and file more bugs off the daily exception reports after new commits appear upstream. Whitelist acknowledged bugs for a while (with timeouts, to keep the upstream honest)

Keep the machines busy at night rather than letting them play cards with one another


This email was drafted in 5 minutes, but I've ranted this rant enough times that it is familiar. How is that for a real-world and useful goal? Certainly better than playing 'bikeshed' politics

25 August 2010

What not to wear

Me, quoting me from a private IRC conversation

14:29 =orc_orc> disabling selinux is like
   having perms of 777
   or no root password at all
   or no wrappers
   or no iptables
14:29 =orc_orc> only weak minds should still be doing these things

Actually, one should set the root password to a unique and hard value for each box, and only use it for recovery [our practice, per the 'blue ring notebook' of procedure at one site I ran: the per-machine passwords were kept in a bound journal in the CIO's office safe; usage was logged, resets after use were noted in said journal, and audits were performed]. Non-local, root password based login should not be enabled; rather, one should rely on pass-phrase protected, keyed SSH access the rest of the time
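In sshd_config terms, that policy might look like the following fragment. This is a sketch of one way to express it on the OpenSSH of that era, where 'without-password' means root may log in with a key but never with a password:

```
# /etc/ssh/sshd_config -- root recovery at the console only;
# everyone authenticates with pass-phrase protected keys
PermitRootLogin without-password
PasswordAuthentication no
PubkeyAuthentication yes
```

Remember to restart sshd from a session you keep open, and to verify a fresh keyed login, before logging out.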

Rock and Roll never forgets ...

I came across this quote [pdf] today, from 2006 doing some research. From TFA:

Q: Why isn't Oracle a member of the OSDL?
A. ... We basically know where to go. We have a good relationship directly with people in the Linux community. We have all our partners. So there is no immediate advantage to being a member for us.
Not to sound arrogant, but we know how to deal with the Linux community

Gotcha, Wim ... 'does not play well with others'

... in a related vein, I cannot remember the last time I heard an Oracle representative on the weekly LSB conference call

18 August 2010

TTYtter moves to OAuth

One of the command line tools of interest for lightweight micro-notifications is TTYtter. On the CentOS-devel mailing list a few months ago, I described using it to issue completion and build closure notices during a long running, unattended distribution build for an IBM s390x architecture

Twitter has announced, and has been testing, a move to using OAuth. Rather than repeat details here as to why this is a compelling solution to federated authentication, and how it remains secure, please take a look at http://oauth.net/, and the Twitter writeup on that topic

Cutting over to the latest TTYtter (which has the needed code to use OAuth) is as easy as doing a download, setting the file permission executable, and updating a test symlink. Some minor edits to the 'rc' file were needed -- lynx seems to lack the needed crypto hooks, so we edit the file to cut over to using curl. The tool then leads one through generating, and injecting locally, the needed OAuth keying

[herrold@centos-5 ttytter]$ ./ttytter
-- using SSL for default URLs.
trying to find cURL ... /usr/bin/curl
-- checking version at http://www.floodgap.com/software/ttytter/01current.txt
-- your version of TTYtter is up to date (1.1.3)
** warning: -user is ignored when -authtype=oauth (default)

++-------------------------------------------------------------------++
|| WELCOME TO TTYtter: let's get you set up with an OAuth keyfile! ||
++-------------------------------------------------------------------++
Twitter now requires all applications authenticating to it use OAuth, a
more complex authentication system that uses tokens and keys instead of
screen names and passwords. To use TTYtter with this Twitter account,
you will need your own app key and access token. This requires a browser.

The app key/secret and user access token/secret go into a keyfile and
act as your credentials; instead of using -user, you use -keyf. THIS
KEYFILE NEVER EXPIRES. YOU ONLY NEED TO DO THIS ONCE FOR EACH ACCOUNT.

If you DON'T want to use OAuth with TTYtter, PRESS CTRL-C now. Restart
TTYtter with -authtype=basic to use a username and password. THIS IS
WHAT YOU WANT FOR STATUSNET, BUT WON'T WORK WITH TWITTER AFTER AUGUST 2010.
If you need help with this, talk to @ttytter or E-mail ckaiser@floodgap.com.

Otherwise, press RETURN/ENTER now to start the process.

Start your browser.
1. Log in to https://twitter.com/ with your desired account.
2. Go to this URL (all one line). You must be logged into Twitter FIRST!

http://dev.twitter.com/apps/key_exchange?oauth_consumer_key=credentialelided12345

3. Twitter will confirm. Click Authorize, and accept the terms of service.
4. Copy the entire string you get back.

ck=BbEgsckKyR1234567890fw&
cs=QhuEHoZoh1234567890rg5oZjCmaddogk
kjhFnaYE&at1234567890n3lqqFjredbullX
pyTM3iQH6I1234567890k8Wilz&
ats=BVrFP1234567890ggODKHmTChME1234567890PCo9Y

-- Paste it into this terminal, then hit ENTER and CTRL-D to write it ---------
ck=BbEgsckKyR1234567890fw&
cs=QhuEHoZoh1234567890rg5oZjCmaddogk
kjhFnaYE&at1234567890n3lqqFjredbullX
pyTM3iQH6I1234567890k8Wilz&
ats=BVrFP1234567890ggODKHmTChME1234567890PCo9Y

-- EOF ------------------------------------------------------------------------
Written new key file /home/herrold/.ttytterkey
Now, restart TTYtter to use this keyfile -- it will use this one by default.
(For multiple key files with multiple accounts, write them to separate
filenames, and tell TTYtter where the key is using -keyf=... .)

Easy enough. The 'RC' file, after those changes, looks like this:

[herrold@centos-5 ~]$ cat ~/.ttytterrc
#
# user=herrold:oldpass3.2.7172.14159word
keyf=/home/herrold/.ttytterkey
hold=1
ssl=1
# lynx=1
curl=1
# url=https://twitter.com/statuses/public_timeline.json
vcheck=1
#
[herrold@centos-5 ttytter]$

and let's fire it up again and test for function:

[herrold@centos-5 ttytter]$ ./ttytter
-- using SSL for default URLs.
trying to find cURL ... /usr/bin/curl
-- checking version at http://www.floodgap.com/software/ttytter/01current.txt
-- your version of TTYtter is up to date (1.1.3)
(checking credentials) test-login SUCCEEDED!
-- processing credentials: logged in as herrold
######################################################
...

After showing recent and direct posts, it ends up at a prompt, ready for content

...
-- notification: API rate limit is currently 350 req/hr
-- your version of TTYtter is up to date (1.1.3)
-- you are logged in as herrold

TTYtter> hello world and @ttytter using OAuth credentials
TTYtter> c9> hello world and @ttytter using OAuth credentials

Let's look a bit at the files with an eye to privacy. Oops

[herrold@centos-5 ~]$ cd ~ ; ls -al .ttytt*
-rw-rw-r-- 1 herrold herrold 174 Aug 18 09:43 .ttytterkey
-rwx------ 1 herrold herrold 161 Aug 18 09:48 .ttytterrc
[herrold@centos-5 ~]$

I fix that thus, of course:

[herrold@centos-5 ~]$ chmod 600 .ttytterkey
[herrold@centos-5 ~]$

I'll send a bug report to the author, suggesting use of a umask 077 before creating that key file. A quick restart of the client indicates it is fine with that set of permissions
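The substance of that bug report can be sketched in a couple of lines of shell (the demo filename is mine, chosen so as not to clobber a real key):

```shell
#!/bin/sh
# sketch: set a restrictive umask BEFORE the keyfile is written,
# so it is never world-readable at any point
KEYFILE="${HOME}/.ttytterkey.demo"   # demo name, not the real keyfile
umask 077                            # new files then arrive mode 600
rm -f "$KEYFILE"
touch "$KEYFILE"                     # stands in for the client writing the key
ls -l "$KEYFILE"
```

No chmod after the fact is needed; the file simply starts out private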

All done

17 August 2010

Stupid RPM tricks: SOURCE files
No. 2 in a series

The #rpm channel has this question this morning:

how can I simple include a file placed in SOURCES dir in a pkg?

Locally on a development box, I maintain a huge archive of unpacked .spec files, either written from scratch, or with automated tools locally produced in house, or extracted from SRPMs (from a mirror collection of somewhat over a quarter-million such SRPMs)

This gives me a huge reservoir to look through for a clean example. Folks in a Unix ™ like environment are fortunate to live in a culture that favors portability by rebuilding from sources, and avoiding binary data stores in favor of plain text ones. This has been a part of the environment since the earliest days of that culture. The tool grep can act as one's eyes, to scout out examples

I know from experience reading and then writing .spec files, that a file living in the ./SOURCES/ directory might be listed, and then referred to by using the identifier: SOURCE. So, I look with grep and find the following

[herrold@centos-5 ~]$ cd rpmbuild/SPECS/
[herrold@centos-5 SPECS]$ grep SOURCE *spec | wc
623 2876 44222
[herrold@centos-5 SPECS]$ grep SOURCE *spec | less

As there were LOTS of matches, I used less to scan the results a screen at a time, looking for a well known and readily available package to pull an example from. The ISC bind nameserver is a well known and pervasively available one

[herrold@centos-5 SPECS]$ grep -i SOURCE bind*spec | grep 29
Source29: named.conf.sample
cp -fp %{SOURCE29} sample/etc/named.conf
[herrold@centos-5 SPECS]$ grep -i sample bind*spec
Source29: named.conf.sample
Source30: named.rfc1912.zones.sample
# sample bind configuration files for %doc:
mkdir -p sample/etc sample/var/named/{data,slaves}
cp -fp %{SOURCE29} sample/etc/named.conf
cp -fp %{SOURCE30} sample/etc/named.rfc1912.zones
cp -fp %{SOURCE31} sample/etc/
cp -fp %{SOURCE17} %{SOURCE18} %{SOURCE19} %{SOURCE20} %{SOURCE21} %{SOURCE22} %{SOURCE23} sample/var/named
ns localhost.' > sample/var/named/$f;
/usr/bin/tail -n '+'`/bin/egrep -n '\\$Id: rndc.conf,v' bin/rndc/rndc.conf | sed 's/:.*$/+1/' | bc` bin/rndc/rndc.conf | sed '/Sample rndc configuration file./{p;i\
;d}' > sample/etc/rndc.conf;
%doc sample/
...

So we see that the process is:

  1. List that file, living in ./SOURCES/ in the .spec file
  2. Use the cp command (with options to preserve timestamps, etc.) to copy it to the install staging point, and
  3. include the destination directory or the specific file in the list down in the %files stanza of the .spec file
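Boiled down, the minimal shape of that in a .spec file is something like this (package and file names here are hypothetical, not drawn from the bind example, which instead shipped its files under %doc):

```
Source1: myapp.conf

%install
mkdir -p %{buildroot}%{_sysconfdir}
cp -fp %{SOURCE1} %{buildroot}%{_sysconfdir}/myapp.conf

%files
%config(noreplace) %{_sysconfdir}/myapp.conf
```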

By reading prior examples, a person can find both good and bad examples, of course, and so cross checking is sensible. This particular example was chosen as it is a nice clean one


This post is part of a continuing series; its content may be indexed out with the tag: rpm. It addresses usage questions and issues raised in the #rpm IRC channel on freenode. That channel is populated by folks who are part of the RPM branch represented at: http://rpm5.org/. The author of this content formerly served as the editor of the RPM website for many years, until the domain owner 'pulled' it back one day. The older content formerly at the old rpm.org site is now archived at: http://www.oldrpm.org/

16 August 2010

Enterprise distributions ...

Where old software goes, to live out its declining days

see this bug: https://bugzilla.redhat.com/attachment.cgi?id=439002

Perhaps the motto is: 'we shall backport no patch before its time'

Mirroring an upstream master with lftp to make a local yum repository

I've been assembling parts for an automated builder, over on the ia64 (Intel Itanium architecture). It has been a while since CentOS had this in the active part of the updates rotation, and I've been working on builders again, as much for relaxation as for anything else. The old binary RPMs aged away to the archive system under the vault.centos.org hostname, and I don't want to be repeatedly hitting and loading that link for local purposes

As such, I set about establishing a local mirror, to provide FTP access to that mirrored content for the local ia64 to draw from in populating a build chroot. I deployed a fresh host, gave it some drive space, and hardened it. Then I installed and turned up vsftpd. And I installed lftp, which I use for mirroring remote FTP or WWW sites. It is more lightweight than rsync, and as this is anonymous mirroring, there is no security issue

Setting up the control file to drive the mirror could not be easier. Note: we control permitted shell level access via wrappers and iptables, and use 'keyed' ssh access, with a measure of ~/.ssh/config , ssh-agent , ssh-add , and well named config files to manage complexity, so that reaching out to a host away in a data center feels the same as reaching one inside the local trusted network. This is not a transitive trust, of course, and remote machines cannot reach in
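The ~/.ssh/config measure mentioned might look like this stanza (the host alias and key filename here are illustrative only):

```
Host mirror
    HostName ia64-c44-mirror.pmman.net
    User root
    IdentityFile ~/.ssh/id_rsa_build
```

With ssh-agent running and the key loaded via ssh-add, a plain 'ssh mirror' then reaches the right host with the right key, no passphrase prompt in the way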

[herrold@centos-5 localcopy]$ scp root@198.178.231.209:/root/*conf .
... snip ...
[herrold@centos-5 localcopy]$ cat c55-ia64.conf
#
mirror -c -e \
http://vault.centos.org/4.4/os/ia64/ \
/var/ftp/pub/mirror/centos/4/4.4/os/ia64
#
[herrold@centos-5 localcopy]$

and running it is just as easy:

[root@ia64-c44-mirror ~]# lftp -f c55-ia64.conf

I took a look to make sure it got a full measure of content:

[root@ia64-c44-mirror ~]# du /var/ftp/pub/mirror/centos/4/4.4/os/ia64
10076 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/images/pxeboot
59680 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/images
2048812 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS/RPMS
165296 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS/base
2214116 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS
7952 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/repodata
25080 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/headers
2307452 /var/ftp/pub/mirror/centos/4/4.4/os/ia64
[root@ia64-c44-mirror ~]#

... and then configured a custom set of yum repository files to point to that archive

[root@ia64-builder hardening]# cd /etc/yum.repos.d/
[root@ia64-builder yum.repos.d]# grep -v ^# pmman-CentOS-Base.repo | grep -v ^$
[base]
name=CentOS-$releasever - Base
baseurl=ftp://ia64-c44-mirror.pmman.net/pub/mirror/centos/4/4.4/os/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4
priority=1
protect=1
[root@ia64-builder yum.repos.d]#

The proof of the pudding is in the eating, of course, so on that ia64, let's remove and then reinstall a leaf node package (here, the joe text editor)

[root@ia64-builder ~]# yum -d 0 -e 0 clean all
[root@ia64-builder ~]# rpm -e joe
[root@ia64-builder ~]# rpm -q joe
package joe is not installed
[root@ia64-builder ~]# yum -d 0 -e 0 -y install joe
[root@ia64-builder ~]# rpm -q joe
joe-3.1-7.rhel4
[root@ia64-builder ~]#

Seems fine

Stupid RPM tricks, No. 1 in a new series

In the #rpm channel on freenode, a new person has wandered in with what seems like a basic question when writing up a new .spec file. I looked into the back archive for this blog, and it seems there is a way to get a list of just the items I have tagged with "rpm"

When I add:
%config %{_builddir}/ossec-hids-%version/etc/ossec.conf
.. where the rpm will put this file when i install that package?

Kind of a strange question, as a line beginning: %config ... is usually placed in the %files stanza, to mark the character of a file as a configuration file. Those files usually live down in /etc/ or perhaps /etc/packagename/. The author will find that this construct is not likely to place content where a general system application, striving to be conformant with the Filesystem Hierarchy Standard, would place such
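A conventional placement, for contrast, looks like this (the path is hypothetical):

```
%files
%config(noreplace) %{_sysconfdir}/packagename/packagename.conf
```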

Putting that to one side, let's 'solve' where it ends up when rpmbuild processes that .spec file stanza:


[herrold@centos-5 ~]$ rpm --showrc | grep _builddir
RPM_BUILD_DIR="%{u2p:%{_builddir}}"
cd %{u2p:%{_builddir}}
/usr/lib/rpm/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"
-14: __mono_provides /usr/lib/rpm/mono-find-provides %{_builddir}/%{?buildsubdir} %{buildroot} %{_libdir}
-14: __mono_requires /usr/lib/rpm/mono-find-requires %{_builddir}/%{?buildsubdir} %{buildroot} %{_libdir}
-14: _builddir %{_topdir}/BUILD
RPM_BUILD_DIR="%{_builddir}"
[herrold@centos-5 ~]$

Obviously there is a need to run another query to examine '_topdir' to fully answer that question:


[herrold@centos-5 ~]$ rpm --showrc | grep _topdir
-14: _builddir %{_topdir}/BUILD
-14: _rpmdir %{_topdir}/RPMS
-14: _sourcedir %{_topdir}/SOURCES/%{name}
-14: _specdir %{_topdir}/SPECS
-14: _srcrpmdir %{_topdir}/SRPMS
-14: _topdir /home/herrold/rpmbuild
[herrold@centos-5 ~]$

So that file will end up at: %{_topdir}/BUILD/ossec-hids-%version/etc/ossec.conf , or more precisely: /home/herrold/rpmbuild/BUILD/ossec-hids-%version/etc/ossec.conf

The 'takeaway' here is that the "_builddir" variable is not customarily used down in the %files stanza of a .spec file

13 August 2010

stirring the pot with the same old spoon -- rpm options

The same old question came up in an IRC channel yet again today, by a person who will not read, nor Google. Once again, with feeling, here is a quick script to study, as to stripping out excessive content on a Red Hat, CentOS, or Fedora derived 'multi-arch' box

[root@centos-5 bin]# cat ./strip-non-x86_64.sh
#!/bin/sh
#
# strip-non-x86_64.sh
#
# strip all non x86_64 content (and also leave behind
# noarch stuff) for more build dependency friendly
# environment
#
# Copyright (c) 2010 R P Herrold
# GPLv3+
# tossed together for education purposes for a blog post
#
# optionally run over and over, after enabling the removal,
# or in one pass, if you uncomment two places
#
# not designed to be the more efficient one pass solution
# which is possible
#
# NODEPS="--nodeps "
#
# see: man rpm, see '--qf' or: queryformat
# or --querytags
#
for i in `rpm -qa --qf '%{name}.%{arch}\n' | sort | grep -v ^kernel | \
grep -v noarch$ | grep -v "86_64" | grep -v ^gpg-pubkey `; do
echo "${i}"
# rpm -e ${i} ${NODEPS}
done
#
[root@centos-5 bin]# ./strip-non-x86_64.sh
[root@centos-5 bin]#

I am so bone tired of people who want to be fed from the same old dirty spoon

For extra credit see: man yum.conf and add the needed multilib_policy = best line to /etc/yum.conf
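That is, the relevant stanza in /etc/yum.conf ends up reading (a minimal sketch):

```
[main]
multilib_policy=best
```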

'your winnings, sir'

From the movie: Casablanca

Captain Renault: I'm shocked, shocked to find that gambling is going on in here!
[a croupier hands Renault a pile of money]
Croupier: Your winnings, sir. ...

A person who was, I guess, looking to stir up something, was "shocked, shocked" in a CentOS mailing list to learn I have a direct commercial interest in the success of CentOS, and do not hide it, based on this post: one; and my reply

As a bit of 'inside baseball', behind the scenes, I have not had access to certain parts of back side CentOS facilities for a while, and formally floated my request for getting this access right cleaned up, after informal efforts were not advanced by the party needing to take the needed steps. I was met with this reply in my mail spool yesterday:

Subject: Giving Russ wider access to machines etc
I have 2 concerns when it comes to giving Russ wider machine level access to [those facilities]
1) He has a direct commercial interest in CentOS. ...

This is easy -- I co-founded this project long since, and have in no wise ever engaged in any act to damage it. The author proposes to exclude me as I have a 'direct commercial interest'. A slippery slope, not well thought through, and not a fight that makes ANY sense in an enterprise distribution. Form follows function

Either the GPL means what it says, and so also similar licenses such as BSD, or they do not. I commercialize Linux and other Open Source, and have for many years long before CentOS existed, and don't hide it

If some people don't like that they can say so, and I'll surface the discussions

In other news:

  1. water is wet
  2. CentOS still has no corporate existence to protect its members, as part of an unincorporated association, from having to individually defend suits such as Oracle's today

I am not going to hang around exposed, without a corporate form and protection, until people decide they want to move forward. It's been a year since the Lance letter and the time for 'all deliberate speed' is over

If it turns out that only one person calls the shots these days in CentOS and to my exclusion, so be it. I'll not be a servant to another, and I'll not be improperly excluded

Chickens, coming home to roost

I see in the overnight news that the trades have picked up on the fact that Oracle (now owner of the former Sun's intellectual property in relevant part) has filed suit, on patent grounds against Google

Big Deal, right? Doesn't affect me, right?

Wrong

Two years ago, the Linux Standard Base was talking about requiring Java functionalities in a 'conformant to the LSB standard' distribution. I spoke strongly against this, as did Alan Cox; but I am just one person, and Red Hat is largely uninterested in the LSB other than conforming to it to satisfy the 'checkbox' on an evaluation matrix, and so a 'trial use' of Java appeared in LSB 4.0

I asked the question and noted my response:

So, exposing people to an NDA, and potentially unlimited liability, and defense costs, in order to be able to test and demonstrate their distribution is LSB compliant is fine?

I think not. I think the LSB cannot in good conscience place
distributions in the line of fire, until and unless the
testing tool is not a 'spring gun'.

The pain comes in part from this:

(f) you agree to defend and indemnify Sun and its licensors from and against any damages, costs, liabilities, settlement amounts and/or expenses (including attorneys' fees) incurred in connection with any claim, lawsuit or action by any third party that arises or results from

(i) the use or distribution of your Operating
System, or any part thereof, in any manner

... and it did not take a rocket scientist to see that patent threats were in the minefield:

LSB needs to decide how much and what kinds of risk [contractual NDA, contractual indemnification, trademark infringement, patent, copyright, more?] it is willing to ask distributions to expose themselves to, to become 'LSB compliant'.

The rubber has just hit the road; the chickens are on the glide path, coming home to roost. The hypotheticals just got instantiated, and the danger is real

12 August 2010

NFS aide to memory

I had occasion to add NFS Read Only exports from several servers holding several generations of source RPMs to build, to be mounted on a build client. I always have to look at an existing setup, and replicate the configuration files ( /etc/exports and /etc/fstab ). Some RPM packages are needed as well

On the server side, we need to install the package: nfs-utils as a 'keystone' that pulls in other dependencies it needs


# yum install nfs-utils

Do an edit in /etc/exports


#
/path/to/exported/directory 10.85.0.0/16(ro)
#

And finally enable the services, and start them:

# /sbin/chkconfig portmap on
# /sbin/chkconfig nfs on
# /sbin/service portmap start
# /sbin/service nfs start

Turning to the client side, we need a running portmap and netfs


# yum install nfs-utils

Make any needed mountpoints:


# mkdir -p /mnt/nfs/1
# mkdir -p /mnt/nfs/2
# mkdir -p /mnt/nfs/3
# mkdir -p /mnt/nfs/4

Add needed entries in the /etc/fstab [Note: I spread the content over two lines for each entry for presentation purposes]


#
10.85.85.232:/var/ftp /mnt/nfs/1 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
10.85.85.253:/var/ftp /mnt/nfs/2 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
10.85.85.154:/var/ftp /mnt/nfs/3 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
10.85.85.133:/var/ftp /mnt/nfs/4 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
#

And finally enable the services, and start them:

# /sbin/chkconfig portmap on
# /sbin/chkconfig netfs on
# /sbin/service portmap start
# /sbin/service netfs start

Do the mounts:

# mount -a

Test using df -h and ls down in the mounts

All done

29 July 2010

line noise and random numbers

"Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin"
  -- John von Neumann

Stipulated, but I am beset by closer devils, and I've tinkered with a mild solution I like. Let me tell you more

I've been badgered as some web sites have moved to JavaScript evaluation routines for site passwords. Some require mixed case; others punctuation; no doubled letters; minimum length. Contrariwise, some limit the character set to prohibit what others require

As such, for the last year or so I have been playing with a quick script on my CentOS 5 box, to generate a unique password per website, and keep a master index of the userid, email address used, and password used. This of course limits my ability to connect to those sites when away from that list. Open-ID roll-in seems to be coming however, and I have a rather clever device, backed by Verisign, and using an inexpensive OTP -- one time password -- hardware device as part of the authentication process

The little generator I wrote is based on simple shell tools -- md5sum, df, date, ps, cut, tr and so forth. It gathers a bit of entropy from a few sources on a few systems around the office, which should be non-correlated from a theoretical basis in the time frames at issue. It does some hashing to get good dispersion. Then it expands into 3 or 4 character vectors, each 16 characters wide, using the hexadecimal digits that md5sum emits, as translated by tr; the first three are letters, upper, lower, and digits; the fourth character set is selected specials and punctuation, excluding some shell metas. Depending on the -a option, the result may be limited to the alpha-numerics only, or also stir in the specials

That 'deck of characters' is handed off to a 'repeated cut of the deck' shuffler, and returned mixed once more just for good measure

I then add a 'bumper' of a letter or digit at each end [one site prohibited starting with a special], and a second character of 'bang' to prevent a mouse slip from dropping a password into the bash history in the case of a panel slip

The results are assembled, trimmed to an optionally specified length, and displayed, where I harvest them as mentioned above
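The shape of such a generator can be sketched thus; this is a toy, not the actual gen-pw.sh, which is unpublished, so every name and detail here is my own guess:

```shell
#!/bin/sh
# gen-pw-sketch.sh -- a rough sketch of the scheme described above
LEN=${1:-14}
# gather a little local entropy and hash it for dispersion
SEED=$( { date +%s%N ; df ; ps ax ; } 2>/dev/null | md5sum | cut -d' ' -f1 )
# expand the 32 hex digits into further 'decks' of characters via tr
UPPER=$( echo "$SEED" | tr '0-9a-f' 'A-P' )               # upper case letters
DIGIT=$( echo "$SEED" | tr 'a-f' '0-5' )                  # digits only
SPEC=$( echo "$SEED" | tr '0-9a-f' '@#%^*(){}<>:~+,.' )   # selected specials
# fold the decks together, one character per line, and shuffle
DECK=$( printf '%s%s%s%s' "$UPPER" "$SEED" "$DIGIT" "$SPEC" | \
    fold -w 1 | shuf | tr -d '\n' )
# bumper: an alpha-numeric at each end, and a 'bang' second, to keep a
# mouse slip from landing a password in the shell history
PW="$( printf '%.1s' "$DIGIT" )!$( printf "%.$(( LEN - 3 ))s" "$DECK" )$( printf '%.1s' "$UPPER" )"
echo "$PW"
```

The -a behavior is not modeled here; it would simply leave the SPEC deck out of the fold, and the 'repeated cut of the deck' shuffler is stood in for by shuf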

Really, passwords need to die, die, die, but that is for another post

[herrold@centos-5 bin]$ for i in `seq 1 10 `; do ./gen-pw.sh ; done
e!~YJAJ{e:sU[4
2!R5K*U#)LoH~2
c!T)T7A10RjS}7
1!cGJ5T@]YjW>4
5!Q+#)K8:@rT]2
8!^)S~FF:5lV<4
b!dJ:TcKK{tQ)9
2!1dEa:fe~mR{4
3!cD1:eH^6wO*d
d!U*5(UEFWsI:e
[herrold@centos-5 bin]$ for i in `seq 1 10 `; do ./gen-pw.sh -a ; done
5ec280RSY5wIfd
0ddQ31EdJGmIdb
7eb52645U1tH06
0bfb401eG1jUa5
c2cT85QY22pS2d
ba8EALA9RRtR1f
35f59JRD6KpN04
7ed956UbA9pV59
402H3YLLR8hR3e
f2a0Aa9J0JrPde
[herrold@centos-5 bin]$

Completely un-memorizable of course, so really only suitable to a protected physical environment where one may write them down

Random number sin, of course, but the cyber-ninja can more readily put my thumb between pliers jaws than predict the pseudo-random source values I used; faced with the pliers, I'll readily spill the secret to access my LOL CATS site account. ... I still have to get around to building a few non-correlated hardware random number generators -- diode based, lava lamp based, a dice tumbling machine for serious entertaining, I guess