26 August 2010

living at HEAD

Mark Shuttleworth gave a keynote a few OLSes ago, proposing that some folks need to 'live at HEAD' and file the bleeding edge bugs early, so nasty stuff gets fixed early, and good stuff flows down

The assumption is that the need for patches, except for branding, etc., will fade away, and that the character of Open Source code improves with 'a rising tide that floats all boats'. Apple is pretty far down this road already with Clang/LLVM

I already build much from nightlies out of VCS anyway (including clang), and it is simple enough to first solve a self-hosting ISO builder, and then continue to populate leaf nodes to taste: auto-reinstall daily, preserving configs, or detecting breakages and writing idempotent config migrators as needed
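The daily cadence above can be sketched as a single cron entry; every path and script name here is hypothetical, a stand-in for whatever the local build harness is called:

```
# nightly (hypothetical paths): pull HEAD, rebuild, reinstall a test
# box, and log everything for the morning bug-filing pass
30 2 * * * cd /srv/head-build && ./pull-build-reinstall.sh >> /var/log/head-build.log 2>&1
```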

File bugs ruthlessly, or at least relentlessly

Wire in valgrind, llvm, drill in unit tests and end to end functional tests, buildbots, LSB conformance testers, more, and file more bugs off the daily exceptions reports after new commits appear upstream. Whitelist acknowledged bugs for a while (with timeouts to keep the upstream honest)

Keep the machines busy at night rather than letting them play cards with one another


This email was drafted in 5 minutes, but I've ranted the rant enough times that it is familiar. How is that for a real world and useful goal? Certainly better than playing 'bikeshed' politics

25 August 2010

What not to wear

Me, quoting me from a private IRC conversation

14:29 =orc_orc> disabling selinux is like
   having perms of 777
   or no root password at all
   or no wrappers
   or no iptables
14:29 =orc_orc> only weak minds should still be doing these things

Actually, one should set the root password to a unique and hard one for each box, and only use it for recovery [our practice per the 'blue ring notebook' of procedure at one site I ran. The 'per machine' passwords were 'remembered' and kept in a bound book journal in the CIO's office safe; usage logged, resets after use noted in said journal, audits performed]. Non-local root password based login should not be enabled. Rather, one should rely on pass-phrase protected, keyed SSH access the rest of the time
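A minimal sketch of the matching server-side sshd_config settings; the option names are standard OpenSSH, though exact defaults vary by version:

```
# /etc/ssh/sshd_config (fragment): root may log in with a key, never
# with a password; all password authentication is off
PermitRootLogin without-password
PasswordAuthentication no
PubkeyAuthentication yes
```

Generate the pass-phrase protected key with ssh-keygen, and load it into ssh-agent with ssh-add for the day's work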

Rock and Roll never forgets ...

I came across this quote [pdf] today, from 2006 doing some research. From TFA:

Q: Why isn't Oracle a member of the OSDL?
A. ... We basically know where to go. We have a good relationship directly with people in the Linux community. We have all our partners. So there is no immediate advantage to being a member for us.
Not to sound arrogant, but we know how to deal with the Linux community

Gotcha, Wim ... 'does not play well with others'

... in a related vein, I cannot remember the last time I heard an Oracle representative on the weekly LSB conference call

18 August 2010

TTYtter move to OAuth

One of the command line tools of interest for lightweight micro-notifications is TTYtter. On the CentOS-devel mailing list a few months ago, I described using it to issue completion and build closure notices during a long running, unattended distribution build for an IBM s390x architecture

Twitter has announced, and has been testing, a move to using OAuth -- rather than repeat details here as to why this is a compelling solution to federated authentication, and how it is still secure, please take a look at: http://oauth.net/, and the Twitter writeup on that topic

Cutting over to the latest TTYtter (which has the needed code to use OAuth) is as easy as doing a download, setting the file permission to executable, and updating a test symlink. Some minor edits to the 'rc' file were needed -- lynx seems to lack the needed crypto hooks, and so we edit to cut over to using curl. The tool then leads one through generating, and injecting locally, the needed OAuth keying

[herrold@centos-5 ttytter]$ ./ttytter
-- using SSL for default URLs.
trying to find cURL ... /usr/bin/curl
-- checking version at http://www.floodgap.com/software/ttytter/01current.txt
-- your version of TTYtter is up to date (1.1.3)
** warning: -user is ignored when -authtype=oauth (default)

++-------------------------------------------------------------------++
|| WELCOME TO TTYtter: let's get you set up with an OAuth keyfile! ||
++-------------------------------------------------------------------++
Twitter now requires all applications authenticating to it use OAuth, a
more complex authentication system that uses tokens and keys instead of
screen names and passwords. To use TTYtter with this Twitter account,
you will need your own app key and access token. This requires a browser.

The app key/secret and user access token/secret go into a keyfile and
act as your credentials; instead of using -user, you use -keyf. THIS
KEYFILE NEVER EXPIRES. YOU ONLY NEED TO DO THIS ONCE FOR EACH ACCOUNT.

If you DON'T want to use OAuth with TTYtter, PRESS CTRL-C now. Restart
TTYtter with -authtype=basic to use a username and password. THIS IS
WHAT YOU WANT FOR STATUSNET, BUT WON'T WORK WITH TWITTER AFTER AUGUST 2010.
If you need help with this, talk to @ttytter or E-mail ckaiser@floodgap.com.

Otherwise, press RETURN/ENTER now to start the process.

Start your browser.
1. Log in to https://twitter.com/ with your desired account.
2. Go to this URL (all one line). You must be logged into Twitter FIRST!

http://dev.twitter.com/apps/key_exchange?oauth_consumer_key=credentialelided12345

3. Twitter will confirm. Click Authorize, and accept the terms of service.
4. Copy the entire string you get back.

ck=BbEgsckKyR1234567890fw&
cs=QhuEHoZoh1234567890rg5oZjCmaddogk
kjhFnaYE&at1234567890n3lqqFjredbullX
pyTM3iQH6I1234567890k8Wilz&
ats=BVrFP1234567890ggODKHmTChME1234567890PCo9Y

-- Paste it into this terminal, then hit ENTER and CTRL-D to write it ---------
ck=BbEgsckKyR1234567890fw&
cs=QhuEHoZoh1234567890rg5oZjCmaddogk
kjhFnaYE&at1234567890n3lqqFjredbullX
pyTM3iQH6I1234567890k8Wilz&
ats=BVrFP1234567890ggODKHmTChME1234567890PCo9Y

-- EOF ------------------------------------------------------------------------
Written new key file /home/herrold/.ttytterkey
Now, restart TTYtter to use this keyfile -- it will use this one by default.
(For multiple key files with multiple accounts, write them to separate
filenames, and tell TTYtter where the key is using -keyf=... .)

Easy enough. The edits to the 'rc' file, after the changes, are shown thus:

[herrold@centos-5 ~]$ cat ~/.ttytterrc
#
# user=herrold:oldpass3.2.7172.14159word
keyf=/home/herrold/.ttytterkey
hold=1
ssl=1
# lynx=1
curl=1
# url=https://twitter.com/statuses/public_timeline.json
vcheck=1
#
[herrold@centos-5 ttytter]$

and let's fire it up again and test for function:

[herrold@centos-5 ttytter]$ ./ttytter
-- using SSL for default URLs.
trying to find cURL ... /usr/bin/curl
-- checking version at http://www.floodgap.com/software/ttytter/01current.txt
-- your version of TTYtter is up to date (1.1.3)
(checking credentials) test-login SUCCEEDED!
-- processing credentials: logged in as herrold
######################################################
...

After showing recent and direct posts, it ends up at a prompt ready for content

...
-- notification: API rate limit is currently 350 req/hr
-- your version of TTYtter is up to date (1.1.3)
-- you are logged in as herrold

TTYtter> hello world and @ttytter using OAuth credentials
TTYtter> c9> hello world and @ttytter using OAuth credentials

Let's look a bit at the files with an eye to privacy. Oops

[herrold@centos-5 ~]$ cd ~ ; ls -al .ttytt*
-rw-rw-r-- 1 herrold herrold 174 Aug 18 09:43 .ttytterkey
-rwx------ 1 herrold herrold 161 Aug 18 09:48 .ttytterrc
[herrold@centos-5 ~]$

I fix that thus, of course:

[herrold@centos-5 ~]$ chmod 600 .ttytterkey
[herrold@centos-5 ~]$

I'll send a bug report to the author, suggesting use of a umask 077 before creating that keying file. A quick restart of the client indicates it is fine with that set of permissions
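The suggestion reduces to a couple of lines of shell; the filename below is a stand-in for demonstration, not the real keyfile:

```shell
# with umask 077 the file is created mode 600 -- never world-readable,
# even for an instant
umask 077
touch /tmp/ttytterkey.demo
ls -l /tmp/ttytterkey.demo    # shows -rw-------
rm -f /tmp/ttytterkey.demo
```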

All done

17 August 2010

Stupid RPM tricks: SOURCE files
No. 2 in a series

The #rpm channel has this question this morning:

how can I simple include a file placed in SOURCES dir in a pkg?

Locally on a development box, I maintain a huge archive of unpacked .spec files, either written from scratch, or with automated tools locally produced in house, or extracted from SRPMs (from a mirror collection of somewhat over a quarter-million such SRPMs)

This gives me a huge reservoir to look through for a clean example. Folks in a Unix ™ like environment are fortunate to live in a culture that favors portability, by rebuilding from sources and avoiding binary data stores in favor of plain text ones. This has been a part of the environment since the earliest days of that culture. The tool grep can act as one's eyes, to scout out examples

I know from experience, reading and then writing .spec files, that a file living in the ./SOURCES/ directory might be listed, and then referred to, by using the identifier: SOURCE. So, I look with grep and find the following

[herrold@centos-5 ~]$ cd rpmbuild/SPECS/
[herrold@centos-5 SPECS]$ grep SOURCE *spec | wc
623 2876 44222
[herrold@centos-5 SPECS]$ grep SOURCE *spec | less

As there were LOTS of matches, I used less to scan the results a screen at a time, looking for a well known, and readily available, package to pull an example from. The ISC bind nameserver is a well known and pervasively available one

[herrold@centos-5 SPECS]$ grep -i SOURCE bind*spec | grep 29
Source29: named.conf.sample
cp -fp %{SOURCE29} sample/etc/named.conf
[herrold@centos-5 SPECS]$ grep -i sample bind*spec
Source29: named.conf.sample
Source30: named.rfc1912.zones.sample
# sample bind configuration files for %doc:
mkdir -p sample/etc sample/var/named/{data,slaves}
cp -fp %{SOURCE29} sample/etc/named.conf
cp -fp %{SOURCE30} sample/etc/named.rfc1912.zones
cp -fp %{SOURCE31} sample/etc/
cp -fp %{SOURCE17} %{SOURCE18} %{SOURCE19} %{SOURCE20} %{SOURCE21} %{SOURCE22} %{SOURCE23} sample/var/named
ns localhost.' > sample/var/named/$f;
/usr/bin/tail -n '+'`/bin/egrep -n '\\$Id: rndc.conf,v' bin/rndc/rndc.conf | sed 's/:.*$/+1/' | bc` bin/rndc/rndc.conf | sed '/Sample rndc configuration file./{p;i\
;d}' > sample/etc/rndc.conf;
%doc sample/
...

So we see that the process is:

  1. List that file, living in ./SOURCES/, in the .spec file
  2. Use the cp command (with some options preserving timestamps, etc.) to copy it to the install staging point, and
  3. Include the destination directory, or the specific file, in the list down in the %files stanza of the .spec file
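Pulled together into one place, a minimal sketch of those three steps, using a hypothetical 'example' package (names and paths illustrative, not from any real package):

```
Source1: example.conf

%install
# step 2: copy the listed SOURCE to the install staging point
mkdir -p %{buildroot}%{_sysconfdir}
cp -fp %{SOURCE1} %{buildroot}%{_sysconfdir}/example.conf

%files
# step 3: list the file (or its directory) in the %files stanza
%config(noreplace) %{_sysconfdir}/example.conf
```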

By reading prior examples, a person can find both good and bad examples, of course, and so cross checking is sensible. This particular example was chosen as it is a nice clean one


This post is part of a continuing series. The content of this series may be indexed out with the tag: rpm. It addresses usage questions and issues raised in the #rpm IRC channel on freenode. That channel is populated by folks who are part of the RPM branch represented at: http://rpm5.org/. The author of this content formerly served as editor of the RPM website for many years, until the domain owner 'pulled' it back one day. The older content formerly at the old rpm.org site is now archived at: http://www.oldrpm.org/

16 August 2010

Enterprise distributions ...

Where old software goes, to live out its declining days

see this bug: https://bugzilla.redhat.com/attachment.cgi?id=439002

Perhaps the motto is: 'we shall backport no patch before its time'

Mirroring an upstream master with lftp to make a local yum repository

I've been assembling parts for an automated builder, over on the ia64 (Intel Itanium architecture). It has been a while since CentOS had this in the active part of the updates rotation, and I've been working on builders again, as much for relaxation as for anything else. The old binary RPMs aged away to the archive system under the vault.centos.org hostname, and I don't want to be repeatedly hitting and loading that link for local purposes

As such, I set about establishing a local mirror, to provide FTP access to that mirrored content for the local ia64 to draw from in populating a build chroot. I deployed a fresh host, gave it some drive space, and hardened it. Then I installed and turned up vsftpd. And I installed lftp, which I use for mirroring remote FTP or WWW sites. It is more lightweight than rsync, and as this is anonymous mirroring, there is no security issue

Setting up the control file to drive the mirror could not be easier. Note: we control permitted shell level access via wrappers and iptables, and use 'keyed' ssh access, with a measure of ~/.ssh/config , ssh-agent , ssh-add , and well named config files to manage complexity, so reaching out to a host away in a data center feels the same as reaching one inside the local trusted network. This is not a transitive trust, of course, and remote machines cannot reach in
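A hedged sketch of the ~/.ssh/config piece of that arrangement; the host alias and key filename here are illustrative:

```
# ~/.ssh/config (fragment)
Host ia64-mirror
    HostName 198.178.231.209
    User root
    IdentityFile ~/.ssh/id_rsa-mirror
```

With an entry like that in place (and the key loaded via ssh-agent and ssh-add), the remote host answers to a short, memorable name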

[herrold@centos-5 localcopy]$ scp root@198.178.231.209:/root/*conf .
... snip ...
[herrold@centos-5 localcopy]$ cat c55-ia64.conf
#
mirror -c -e \
http://vault.centos.org/4.4/os/ia64/ \
/var/ftp/pub/mirror/centos/4/4.4/os/ia64
#
[herrold@centos-5 localcopy]$

and running it is just as easy:

[root@ia64-c44-mirror ~]# lftp -f c55-ia64.conf

I took a look to make sure it got a full measure of content:

[root@ia64-c44-mirror ~]# du /var/ftp/pub/mirror/centos/4/4.4/os/ia64
10076 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/images/pxeboot
59680 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/images
2048812 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS/RPMS
165296 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS/base
2214116 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/CentOS
7952 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/repodata
25080 /var/ftp/pub/mirror/centos/4/4.4/os/ia64/headers
2307452 /var/ftp/pub/mirror/centos/4/4.4/os/ia64
[root@ia64-c44-mirror ~]#

... and then configured a custom set of yum repository files to point to that archive

[root@ia64-builder hardening]# cd /etc/yum.repos.d/
[root@ia64-builder yum.repos.d]# grep -v ^# pmman-CentOS-Base.repo | grep -v ^$
[base]
name=CentOS-$releasever - Base
baseurl=ftp://ia64-c44-mirror.pmman.net/pub/mirror/centos/4/4.4/os/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4
priority=1
protect=1
[root@ia64-builder yum.repos.d]#

The proof of the pudding is in the eating, of course, so on that ia64, let's remove and then reinstall a leaf node package (here, the joe text editor)

[root@ia64-builder ~]# yum -d 0 -e 0 clean all
[root@ia64-builder ~]# rpm -e joe
[root@ia64-builder ~]# rpm -q joe
package joe is not installed
[root@ia64-builder ~]# yum -d 0 -e 0 -y install joe
[root@ia64-builder ~]# rpm -q joe
joe-3.1-7.rhel4
[root@ia64-builder ~]#

Seems fine

Stupid RPM tricks, No. 1 in a new series

In the #rpm channel on freenode, a new person has wandered in with what seems like a basic question when writing up a new .spec file. I looked into the back archive for this blog, and it seems there is a way to get a list of just the items I have tagged with "rpm"

When I add:
%config %{_builddir}/ossec-hids-%version/etc/ossec.conf
.. where the rpm will put this file when i install that package?

Kind of a strange question, as a line beginning: %config ... is usually placed in the %files stanza, to mark the character of a file as a configuration file. Those files usually live down in /etc/ or perhaps /etc/packagename/ . The author will find that this construct is not likely to place content where a general system application, striving to be conformant with the Filesystem Hierarchy Standard, would place such
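What the asker likely wants instead is sketched below; the paths are illustrative guesses at an FHS-conformant layout, not taken from the actual ossec-hids package:

```
%install
mkdir -p %{buildroot}%{_sysconfdir}/ossec
cp -fp etc/ossec.conf %{buildroot}%{_sysconfdir}/ossec/

%files
%config(noreplace) %{_sysconfdir}/ossec/ossec.conf
```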

Putting that to one side, let's 'solve' where it ends up when rpmbuild processes that .spec file stanza:


[herrold@centos-5 ~]$ rpm --showrc | grep _builddir
RPM_BUILD_DIR="%{u2p:%{_builddir}}"
cd %{u2p:%{_builddir}}
/usr/lib/rpm/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}"
-14: __mono_provides /usr/lib/rpm/mono-find-provides %{_builddir}/%{?buildsubdir} %{buildroot} %{_libdir}
-14: __mono_requires /usr/lib/rpm/mono-find-requires %{_builddir}/%{?buildsubdir} %{buildroot} %{_libdir}
-14: _builddir %{_topdir}/BUILD
RPM_BUILD_DIR="%{_builddir}"
[herrold@centos-5 ~]$

Obviously there is a need to run another query, examining '_topdir', to fully answer that question:


[herrold@centos-5 ~]$ rpm --showrc | grep _topdir
-14: _builddir %{_topdir}/BUILD
-14: _rpmdir %{_topdir}/RPMS
-14: _sourcedir %{_topdir}/SOURCES/%{name}
-14: _specdir %{_topdir}/SPECS
-14: _srcrpmdir %{_topdir}/SRPMS
-14: _topdir /home/herrold/rpmbuild
[herrold@centos-5 ~]$

So that file will end up at: %{_topdir}/BUILD/ossec-hids-%version/etc/ossec.conf , or more precisely: /home/herrold/rpmbuild/BUILD/ossec-hids-%version/etc/ossec.conf

The 'takeaway' here is that the "_builddir" variable is not customarily used down in the %files stanza of a .spec file

13 August 2010

stirring the pot with the same old spoon -- rpm options

The same old question came up in an IRC channel yet again today, from a person who will not read, nor Google. Once again, with feeling, here is a quick script to study, for stripping out excess content on a Red Hat, CentOS, or Fedora derived 'multi-arch' box

[root@centos-5 bin]# cat ./strip-non-x86_64.sh
#!/bin/sh
#
# strip-non-x86_64.sh
#
# strip all non-x86_64 content (and also leave behind
# noarch stuff) for a more build-dependency friendly
# environment
#
# Copyright (c) 2010 R P Herrold
# GPLv3+
# tossed together for education purposes for a blog post
#
# optionally run it over and over, after enabling the removal;
# or in one pass, if you uncomment two places
#
# not designed to be the more efficient one pass solution
# which is possible
#
# NODEPS="--nodeps "
#
# see: man rpm, see '--qf' or: queryformat
# or --querytags
#
for i in `rpm -qa --qf '%{name}.%{arch}\n' | sort | grep -v ^kernel | \
grep -v noarch$ | grep -v "86_64" | grep -v ^gpg-pubkey `; do
echo "${i}"
# rpm -e ${i} ${NODEPS}
done
#
[root@centos-5 bin]# ./strip-non-x86_64.sh
[root@centos-5 bin]#

I am so bone tired of people who want to be fed from the same old dirty spoon

For extra credit, see: man yum.conf and add the needed multilib_policy = best line to /etc/yum.conf
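The extra-credit edit, sketched as a fragment:

```
# /etc/yum.conf (fragment) -- see: man yum.conf
[main]
multilib_policy=best
```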

'your winnings, sir'

From the movie: Casablanca

Captain Renault: I'm shocked, shocked to find that gambling is going on in here!
[a croupier hands Renault a pile of money]
Croupier: Your winnings, sir. ...

A person who was, I guess, looking to stir something up, was "shocked, shocked" in a CentOS mailing list to learn that I have a direct commercial interest in the success of CentOS, and do not hide it, based on this post: one; and my reply

As a bit of 'inside baseball': behind the scenes, I have not had access to certain parts of the back-side CentOS facilities for a while, and I formally floated a request to get this access cleaned up, after informal efforts were not advanced by the party needing to take the needed steps. I was met with this reply in my mail spool yesterday:

Subject: Giving Russ wider access to machines etc
I have 2 concerns when it comes to giving Russ wider machine level access to [those facilities]
1) He has a direct commercial interest in CentOS. ...

This is easy -- I co-founded this project long since, and have in no wise ever engaged in any act to damage it. The author proposes to exclude me as I have a 'direct commercial interest'. A slippery slope, not well thought through, and not a fight that makes ANY sense in an enterprise distribution. Form follows function

Either the GPL means what it says, and so also similar licenses such as BSD, or they do not. I commercialize Linux and other Open Source, and have for many years long before CentOS existed, and don't hide it

If some people don't like that they can say so, and I'll surface the discussions

In other news:

  1. water is wet
  2. CentOS still has no corporate existence to protect its members, as part of an unincorporated association, from having to individually defend suits such as Oracle's today

I am not going to hang around exposed, without a corporate form and protection, until people decide they want to move forward. It's been a year since the Lance letter and the time for 'all deliberate speed' is over

If it turns out that only one person calls the shots these days in CentOS and to my exclusion, so be it. I'll not be a servant to another, and I'll not be improperly excluded

Chickens, coming home to roost

I see in the overnight news that the trades have picked up on the fact that Oracle (now owner of the former Sun's intellectual property in relevant part) has filed suit, on patent grounds against Google

Big Deal, right? Doesn't affect me, right?

Wrong

Two years ago, the Linux Standard Base was talking about requiring Java functionality in a 'conformant to the LSB standard' distribution. I spoke strongly against this, as did Alan Cox; but I am just one person, and Red Hat is largely uninterested in the LSB, other than conforming to it to satisfy the 'checkbox' on an evaluation matrix. And so a 'trial use' of Java appeared in LSB 4.0

I asked the question and noted my response:

So, exposing people to an NDA, and potentially unlimited liability, and defense costs, in order to be able to test and demonstrate their distribution is LSB compliant, is fine?

I think not. I think the LSB cannot in good conscience place
distributions in the line of fire, until and unless the
testing tool is not a 'spring gun'.

The pain comes in part from this:

(f) you agree to defend and indemnify Sun and its licensors from and against any damages, costs, liabilities, settlement amounts and/or expenses (including attorneys' fees) incurred in connection with any claim, lawsuit or action by any third party that arises or results from

(i) the use or distribution of your Operating
System, or any part thereof, in any manner

... and it did not take a rocket scientist to see that patent threats were in the minefield:

LSB needs to decide how much and what kinds of risk [contractual NDA, contractual indemnification, trademark infringement, patent, copyright, more?] it is willing to ask distributions to expose themselves to, to become 'LSB compliant'.

The rubber has just hit the road; the chickens are on the glide path, coming home to roost. The hypotheticals just got instantiated, and the danger is real

12 August 2010

NFS aide to memory

I had occasion to add NFS read-only exports from several servers, holding several generations of source RPMs to build, to be mounted on a build client. I always have to look at an existing setup and replicate the configuration files ( /etc/exports and /etc/fstab ). Some RPM packages are needed as well

On the server side, we need to install the package: nfs-utils as a 'keystone' that pulls in other dependencies it needs


# yum install nfs-utils

Do an edit in /etc/exports


#
/path/to/exported/directory 10.85.0.0/16(ro)
#

And finally enable the services, and start them:

# /sbin/chkconfig portmap on
# /sbin/chkconfig nfs on
# /sbin/service portmap start
# /sbin/service nfs start

Turning to the client side, we need a running portmap and netfs


# yum install nfs-utils

Make any needed mountpoints:


# mkdir -p /mnt/nfs/1
# mkdir -p /mnt/nfs/2
# mkdir -p /mnt/nfs/3
# mkdir -p /mnt/nfs/4

Add needed entries in the /etc/fstab [Note: I spread the content over two lines for each entry for presentation purposes]


#
10.85.85.232:/var/ftp /mnt/nfs/1 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
10.85.85.253:/var/ftp /mnt/nfs/2 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
10.85.85.154:/var/ftp /mnt/nfs/3 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
10.85.85.133:/var/ftp /mnt/nfs/4 nfs    rsize=32768,wsize=32768,soft,nolock 0 0
#

And finally enable the services, and start them:

# /sbin/chkconfig portmap on
# /sbin/chkconfig netfs on
# /sbin/service portmap start
# /sbin/service netfs start

Do the mounts:

# mount -a

Test using df -h and ls down in the mounts
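A quick sketch of that check, looping over the four mountpoints from the fstab above; the mountpoint(1) utility is commonly available, though df alone would also serve:

```shell
# confirm each NFS mount is actually live before relying on it
for m in /mnt/nfs/1 /mnt/nfs/2 /mnt/nfs/3 /mnt/nfs/4; do
    if mountpoint -q "$m"; then
        df -h "$m" | tail -n 1
    else
        echo "NOT MOUNTED: $m" >&2
    fi
done
```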

All done