27 June 2010

lost memories

From time to time, "we" 'clean house' and find the black trash bags. It is no surprise to me, of course, for an earlier me did the work of carefully tying the packaging closed and caching treasures up in the attic; from time to time, I am instructed to 'get rid of that clutter' as the now-grown kids 'will never use those again'. I am slow to act on this injunction.

The Brio trains, the metal Erector set, the cast lead soldiers and molds, the Duplo blocks, the stuffed animals, the Lincoln Logs, the McGuffey readers, the arrow and ax heads collected in the fields: all have come to the head of the queue for disposition over time. The stuffed animals were in the dock this past weekend. At that point, I usually nod silently, carefully re-tie the sack, and set it to one side for a moment. Then my new task is to find a new hiding place for the bag in question after her attention turns to other matters.

But a grandchild's mother and the child were delighted with the animal figures my preservation efforts had saved, even if the spouse was not so well pleased to see 'those old things' again.

A few weeks ago, the Brio train set, set aside in a cardboard box up in the dark to rest almost two decades ago, came out. It moved in with a grandson infatuated with rolling stock and was 'new' again. The Erector set, the melting pot and molds, are all gone (not to return under current-day safety rules: the nuts and bolts are a choking hazard, the lead casting gives off heavy-metal fumes). I am on the lookout for a replacement McGuffey reader set (that friend of books that taught me to read upstairs in a quiet room as the adults 'talked' downstairs), so I can 'seed' a room for young visitors.

The flints and shaped stones? I was not attuned to their disposition as it occurred; a 'sharpie' sweet-talked a sale for a pittance out of an elderly family member while 'cleaning up' prior to closing down a house before sale. That lot of childhood treasures also carried out the door the Minié balls I dug from the earth at Gettysburg.

Entropy won a round that time; I know we'll battle again.


[An earlier version of this appeared at Victor Niederhoffer's Daily Speculations, a site I recommend.]

24 June 2010

Debian mkfs is working again

It's been a long June. I noticed early on that an update in Debian testing had moved mke2fs from one package to another without getting all the library dependencies right. As such, I spent June unable to lay down a filesystem on a new partition with the 'proper' tool. Part of my series on logfile reading includes a task to review the 'percent full' figure for each partition (and to relocate or clean out fat ones) to avoid running out of room in a self-inflicted denial-of-service attack.
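The check itself is nothing exotic; a minimal sketch of the sort of daily pass I mean (the 90 percent threshold is arbitrary):

# flag any filesystem running more than 90 percent full
df -hP | awk 'NR > 1 && $5+0 > 90 {print $6 " is " $5 " full"}'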

I tried the obvious fallback for building a new filesystem, busybox, but the version found in Debian Testing lacked a needed build-time switch. I filed the bug and considered a local patch, or perhaps rebuilding part of the chain mkfs relies on for a bit, but my need for space to reorganize a host's files was neither great nor urgent. It was just pesky to see each day.
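For the record, the stopgap I had hoped to lean on was along these lines (it assumes the ext2 applet was compiled into the busybox binary, which in this case it was not; the applet name also varies by build):

# only useful when busybox was built with its ext2 mkfs applet
busybox mke2fs /dev/sda12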

I knew from reading the bug reports that the fix had been committed and was 'ageing', in the Debian fashion, on its way from an Unstable 'nightly' through a mildly tested (or at least not black-balled) state and promotion into Testing.
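Watching for that promotion is simple enough; one way, with the stock testing sources in place, is:

# show the installed version of e2fsprogs and what each repo offers
apt-cache policy e2fsprogs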

nfs2:~# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
ksysguard libdevmapper1.02.1
The following packages will be upgraded:
bsdutils e2fslibs e2fsprogs iptables iso-codes libblkid1 libcomerr2
libenchant1c2a libffcall1 libmime-tools-perl libnetpbm10 libss2 libuuid1
lockfile-progs mount mutt netpbm shared-desktop-ontologies util-linux
19 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
Need to get 9,841kB of archives.
After this operation, 115kB disk space will be freed.
Do you want to continue [Y/n]? y
...
nfs2:~#

I've been running repository data update operations daily ... the Debian approach is more measured in its pace than what we use with CentOS, and I think we may have something to learn there. It is a rare package update that cannot wait for a daily repo data update, push, and overnight mirror in our space, and a similar cadence would spare casual sysadmins much confusion.
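The daily refresh itself is a one-liner under cron; a sketch, with an illustrative script name:

#!/bin/sh
# /etc/cron.daily/refresh-pkg-data (illustrative name)
# refresh package metadata quietly; actual upgrades wait for review
apt-get -qq update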

The e2fsprogs and util-linux entries in that clutch of upgrades look promising ...

nfs2:~# mkfs /dev/sda12
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
237568 inodes, 949835 blocks
47491 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=973078528
29 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
nfs2:~# date
Thu Jun 24 10:13:17 EDT 2010
nfs2:~#

Lovely; I'm back in business.

19 June 2010

Reading the logs, part 3 -- Run your updates

It looks like I'll be writing these for a while as I clean up logfile noise. The earlier pieces are here and here. I say 'noise' here because these entries are not false positives, but neither are they material; they are more of a nuisance.


One of the things every admin who reads log files sees is automated scanners looking for exploits in 'canned' packages that were installed but never updated, whether because the admin of a given machine has neglected to run updates, because the hole is not yet publicly known, or because the upstream has not yet addressed the matter.

A pattern has emerged with our PMman product, sitting in a data center with large contiguous swaths of IP space (hosts scattered in assignment across that relatively compact range, each reporting to me centrally). The hostile exploit scanners are not even trying to be subtle any more -- they simply march sequentially through IP ranges and take inventory of whether a given weakness is present on every host to which they connect.

Today, I focus on one sample report stanza:

--------------------- httpd Begin ------------------------

Requests with error response codes
400 Bad Request
HTTP/1.1: 1 Time(s)
403 Forbidden
/index.html: 1 Time(s)
404 Not Found
/cms/e107_files/e107.css: 1 Time(s)
/db/e107_files/e107.css: 1 Time(s)
/e107/e107_files/e107.css: 1 Time(s)
/e107_files/e107.css: 1 Time(s)
/forum/e107_files/e107.css: 1 Time(s)
/index.php: 1 Time(s)
/manager/html: 1 Time(s)
/portal/e107_files/e107.css: 1 Time(s)
/site/e107_files/e107.css: 1 Time(s)
/web/e107_files/e107.css: 1 Time(s)

---------------------- httpd End -------------------------

and apache can handle this trivially:

#
# file: noexploit.conf
#
# send scanners off to see the wizard
#
Redirect permanent /cms http://127.0.0.1/
Redirect permanent /db http://127.0.0.1/
Redirect permanent /e107 http://127.0.0.1/
Redirect permanent /forum http://127.0.0.1/
Redirect permanent /manager http://127.0.0.1/
Redirect permanent /mysql http://127.0.0.1/
Redirect permanent /phpmyadmin http://127.0.0.1/
Redirect permanent /phpMyAdmin http://127.0.0.1/
Redirect permanent /portal http://127.0.0.1/
Redirect permanent /site http://127.0.0.1/
Redirect permanent /user http://127.0.0.1/
Redirect permanent /users http://127.0.0.1/
Redirect permanent /web http://127.0.0.1/
#

The obvious next step is to package these deployment hardenings and add them to a local RPM repository, so that simply running updates, as with yum, will get the current best approach to hardening, en masse, onto all the servers.
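Mechanically that is little work; a rough sketch, assuming the conf file above is wrapped in a hypothetical noarch package called httpd-noexploit and the local repository lives at /srv/repo:

# build the noarch package from its (hypothetical) spec file
rpmbuild -bb httpd-noexploit.spec
# copy the result into the local repository (output path varies by rpmbuild setup)
cp ~/rpmbuild/RPMS/noarch/httpd-noexploit-*.noarch.rpm /srv/repo/
# regenerate the repository metadata so clients see the new package
createrepo /srv/repo/
# from there, the routine 'yum update' on each host carries it out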

08 June 2010

Reading the logs ...

I see the following from logwatch in the overnight log file review:

 --------------------- httpd Begin ------------------------

Requests with error response codes
404 Not Found
/crossdomain.xml: 1 Time(s)

---------------------- httpd End -------------------------

and so I go digging:

[root@centos-5 httpd]# cat error_log
[Sun Jun 06 04:02:04 2010] [notice] Digest: generating secret for digest authentication ...
[Sun Jun 06 04:02:04 2010] [notice] Digest: done
[Sun Jun 06 04:02:05 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
[Mon Jun 07 14:20:39 2010] [error] [client 127.0.0.2] File does not exist: /var/www/html/crossdomain.xml

Sure enough. It looks as though some piece of Flash code is hoping to 'leverage' a cross-domain permission to include something I never intended to allow.

See the note at: http://kb2.adobe.com/cps/142/tn_14213.html

For the sake of argument, assume you HAD to browse the web as root, say with an operating system that requires you to use a browser front end to fetch system updates. Assume also that you improvidently viewed a 'seeder' of bad things that WROTE a hostile crossdomain.xml for a second piece of hostile Flash to 'reap' later.

Oops ... game over.
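On my own servers, which have no business serving a Flash policy file at all, redirecting the probe off to localhost keeps it out of the 404 noise; a sketch of the sort of line that could join a noexploit.conf:

# send Flash policy probes the same way as the other scanner noise
Redirect permanent /crossdomain.xml http://127.0.0.1/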