tag:blogger.com,1999:blog-44323255141092842042024-02-21T02:06:12.745+00:00orc_orc's sharp edge... out of the sheath, the blade cuts more easily. Musings on FOSS culture, economics and politicsR P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comBlogger132125tag:blogger.com,1999:blog-4432325514109284204.post-18228282959098809492014-07-11T14:20:00.000+01:002015-11-18T19:18:54.724+00:00The problem with older packaging managers in two lines ...There was a post from the NetBSD oriented '<a href="http://www.netbsd.org/docs/software/packages.html" target="_blank">pkgsrc</a>' project into the Scientific-Linux mailing list earlier this week. Lots of skilled geeks following their muse ... but
From the #pkgsrc IRC channel ('nick' changed to protect the poster)
<blockquote>09:11 wxyz> Ugh. I think I should have chosen the X from pkgsrc.<br>
09:11 wxyz> The one I have now lacks fonts that applications need.</blockquote>
Yeah. Package 'Managers' that unroll tarballs, but lack a strong Requirements / Dependency system, are like that
R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-70029540708155252992014-06-19T11:11:00.002+01:002014-06-19T11:11:52.147+01:00Naming Names and 'Shaming'<p>I customarily run all the updates for the Windows 7 (64 bit) box at home on Sunday, but was traveling then, and so just ran some. Adobe's Flash Player (each no doubt asserted as trademarks) had an update notice in the bunch. I refer to those Marks in this paragraph for identification and not for other purposes. Disparagement comes later </p>
<p>So I attained Administrator (the Windows role enabled for installing software in the customary security model), and went through the process of starting their updater. This is a small intermediate program which retrieves the larger update, and then hands off execution to that latter payload. AT NO TIME have I, or any other person with the knowledge needed to reach Admin rights, ever consented to the (seemingly always 'defaulted on') "Also install some third party 'enhancement'". It seems McAfee and its Security Suite are the current firm co-marketing their products with Adobe. Again, I refer to those Marks in this paragraph for identification and not for other purposes. Disparagement comes later </p>
<p>Notwithstanding no consent, the McAfee kit was part of the payload that the Adobe updater retrieved ... and installed ... with no consent to such by me </p>
<p>Let's be frank. Adobe's Flash Player is a gaping security hole, with a well documented track record of <a href="http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=Flash+Player" target="Blank">poor coding</a>. There are 356 hits using that search link. I am all for cleaning up issues and not hiding them, but this is just a whorehouse of fail, Adobe </p>
<p>The McAfee record for 'Security Suite' is less clear, as it seems to me that the naming of their products is designed to confuse, and segment purchases, rather than to provide long term clarity of what the names, used as Marks, mean. The CVE count at <a href="http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=McAfee" target="_blank">MITRE</a> is 137 by that search. But I see from a more general search engine query that McAfee seems to have licensed its product for distribution and re-naming: 'AT&T Internet Security Suite - powered by McAfee' for instance, so one cannot rely on the Marks to reliably identify offenders. AOL seems to have licensed it as well </p>
<p>McAfee seems to have <a href="http://blog.osvdb.org/2014/05/07/the-scraping-problem-and-ethics/" target="_blank">other lapses</a> as well </p>
<p>I am just not sure what the remedy is: </p>
<ul><li>Stop using Windows? This is pretty much done with the maturity of the Open Source ecosphere -- we have NONE at the office facing the internet, and only ONE left at home or among the family members I support. For commercial software use, Apple's OS X and hardware have been remarkably complete and their use uneventful, without need to load them down with defensive 'crapware' </li>
<li>Stop using Flash? Is the pain worth the short remaining life Adobe's Flash seems destined to have? I recall there was an effort to re-implement to the published Flash specification, and Adobe (to their credit) is better than most commercial vendors at publishing file format documentation -- think: PSF and friends. Time to investigate what the FOSS replacement is </li>
<li>And of course: 'Shame' bundlers, and providers of 'ride-along' 'crapware'. So: Adobe, McAfee, and your products: Flash Player, and Security Suite -- consider yourself identified as bad actors, worthy of avoidance </li></ul>
<p>Well, a start then </p>
R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-17416191833242207612013-10-02T20:28:00.001+01:002013-10-02T20:28:32.162+01:00About those doughnuts ...In a recent <a href="http://www.orcorc.blogspot.com/2013/06/phone-call-ive-got-this-sick-machine.html" target="_blank">post</a>, I closed:<br /><code>Dollars to doughnuts, there is 'more than one roach' lurking.<br />I'll cover a bet that there are not tested backups in that shop <br /></code>I had occasion to speak with that person again the other day. No backups, no thought to plan for such by that person's predecessor, and so by definition, no Disaster Recovery Plan <br /><a href="http://gallery.herrold.com/stauff-crop.png" target="_blank"><img alt="Staufs storefront" border="0" src="http://gallery.herrold.com/stauff-crop.png" /></a><br />I am looking forward to getting a box of doughnuts -- from the DK would be nice. Please make sure two Apple Fritters are in there, as the other coffee vultures always grab the one I wanted. The <a href="http://events.nbc4i.com/columbus_oh/venues/show/289915-donut-kitchen" target="_blank">Doughnut Kitchen</a> is close to <a href="http://www.staufs.com/" target="_blank">Staufs</a>
R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-68133712130993922172013-09-20T15:27:00.000+01:002013-09-20T20:10:44.365+01:00win win habitsPMman was designed for
long lived VM's (compare contra: ephemeral AWS or OpenShift type instances), and so our communication needs vary from other cloud
instance control interfaces. This applies for the client tenant side views, but also for the sysadmin / devop side view<br />
<br />
One of the convenience features of the PMman cloud product we run is the ease of communicating with 'tenants' on a given dom0. When I came in this morning, I had this notice waiting for me:<br />
<blockquote>
From: mdadm monitoring &lt;root@kvm-nNNN.pmman.com&gt;<br />To: root@kvm-nNNN.pmman.com<br />Subject: DegradedArray event on /dev/md0:kvm-nNNN.pmman.com<br /><br />This is an automatically generated mail message from mdadm<br />running on kvm-nNNN.pmman.com<br /><br />A DegradedArray event had been detected on md device /dev/md0.<br /><br />Faithfully yours, etc.</blockquote>
It is not the end of the world -- there are four members in that raid array; all mutable data is backed up nightly, and it is easy enough to fix with a hot drive swap. BUT, we did a drive swap for a different raid member on that same unit within the last month. I want to totally remove that unit from production, and put it into 'on the bench' mode, so I can see if there is some deeper hardware issue<br />
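The swap itself is routine. A dry-run sketch of the usual member-swap sequence follows; <tt>/dev/md0</tt> and <tt>/dev/sdc1</tt> are placeholder names, so the commands are printed here rather than executed:

```shell
# Dry-run: the device names below are placeholders, so we only print the
# sequence an operator would run for a hot member swap.
STEPS='cat /proc/mdstat                                  # which member dropped?
mdadm --detail /dev/md0                                  # identify the failed disk
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1       # detach the bad member
mdadm /dev/md0 --add /dev/sdc1                           # after the swap: rebuild starts'
printf '%s\n' "$STEPS"
```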
<br />
The neat part from my point of view is that part of our design included a way for contacting the tenant VM owners on JUST that box alone. It is as easy as clicking a couple buttons and typing<br />
<blockquote>
The following message will be sent to the list of users [on box: kvm-nNNN.pmman.com]. Here is the opportunity to fine tune the list.<br />
<br />
From: support@pmman.com<br />
Subject: Raid array member maintenance needed<br />
Notice Level: N<br />
Message: We have had some raid array member failures on the underlying dom0 of a machine you run at PMman. We replaced a drive hot three weeks ago on this same unit, and now have a new failure.<br />
<br />
This may be a portent of a failing drive control subsystem, rather than the drive (although the previously removed drive tested bad and has been RMA'ed)<br />
<br />
One new feature of PMMan of which you may not be aware is the 'no extra charge' weekly reboot and Level 0 backup. This is called a 'cyclic' backup with a 7 day repetition interval. To the extent you have NOT enabled this feature, I strongly suggest you test it and take advantage of this enhanced functionality of the interface<br />
<br />
It is accessible off the 'Backups' button of the VM control web interface. Feedback is welcome of course. A nice side benefit from OUR point of view of offering this to customers is that it enables us to do invisible migrations from one dom0 to another as the backup and reboot are occurring, and so 'clear out' a dom0 host of running client instance VMs<br />
<br />
Thanks<br />
<br />
-- Russ herrold<br />
User list: ... [elided] ...</blockquote>
Emails go out, and a history item gets dropped into each VM's logfile as well as into our admin side logs. Optionally I can have the tool turn on an 'attention needed' flag in the end tenant's console, that will persist until they acknowledge it. We already do that as to 'too long between updates' and 'too long between reboots' and such<br />
<br />
We can of course do invisible 'hot' migrations of machines around, but even safer is to encourage the good habit of tenant VM owners taking (and our automatically testing) Level zero backups<br />
<br />
Win-winR P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-60377307325227079942013-06-19T20:03:00.001+01:002013-06-19T20:03:54.294+01:00I am not Harry TrumanI received an email from a customer, followed by a phone call, to the effect they had received a huge number of email 'return' bounces to a general intake email address. He and I have had this discussion before<br />
<br />
I have written about <a href="http://www.owlriver.com/tips/sendmail-tip/" target="_blank">email sender forgery</a> (There is probably NOT an email account: <b>godzilla@microsoft.com</b>) and its <a href="http://www.owlriver.com/tips/smtp-rfc/" target="_blank">fallout</a> ("Customer: My cousin says that his email to me is not going through") before. So let's take the time to think it through yet again<br />
<br />
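Why is the forgery trivial? Because nothing in the SMTP protocol validates sender headers -- they are just text the sender writes. A hand-assembled piece, as it could be offered to any open relay (addresses are the joke examples from above):

```shell
# Every header here was typed by the 'sender'; no part of SMTP checks them.
MSG=$(cat <<'EOF'
From: godzilla@microsoft.com
To: someone@example.com
Subject: entirely legitimate mail
Message-ID: <made-up-on-the-spot@nowhere.example>

Every header above was written by the sender, unchecked by anyone.
EOF
)
printf '%s\n' "$MSG"
```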
Takeaway: He wanted me to stop such pieces from cluttering his email box, but he is unwilling to have 'heavy' spam filtering<br />
<br />
As a personal matter, and also wearing my sysadmin hat, I would like to stop seeing this cruft as well<br />
<br />
But as a technical matter, it seems that it cannot easily be done without constant 'tuning' of rejection rules or some other rather serious matching of 'Message-ID' of pieces sent against return pieces offered. An attempt to do so through filtering tools with no prior knowledge of Message-IDs sent is to always 'play defense' against the spammers, without the ability ever to score a 'win'. The effort to match Message-IDs in offered return pieces is perhaps more promising<br />
<br />
But, so far, no-one has been sufficiently vexed by it in the FOSS community to publish such a tool and to commit to doing the ongoing 'tuning' of message parsers needed. Perhaps we can design around it with existing tools, amending our outgoing pieces by adding a certification that a given candidate email is truly from us<br /><br />
As a design matter, building a milter, writing some procmail rules, and parsing sendmail logs, probably into a database backend, is my first thought as to how I would approach the matter. The database constraint is troubling, though. I have other work that I need to attend to first, but I went through the thought process. I memorialize that process in part in case someone is interested. Even more, I will provide webspace, mailing list support, and a VCS <i>gratis</i>, if someone 'feels the itch'. It would be useful to have, but is not urgent to attain -- Seven Habits Quadrant 2 or 4 stuff. Absent such a volunteer effort or a paying customer, for me, Quadrant 4<br />
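A flat-file sketch of that Message-ID matching idea (the log line format is a simplified stand-in for real sendmail logs, and the filenames are invented): record the IDs of what we actually sent, then admit a bounce for human eyes only if it quotes one of them.

```shell
# Harvest Message-IDs from a (simplified, invented-format) outbound mail log.
cat > sent.log <<'EOF'
Jun 19 10:00:01 mx sendmail[123]: r5JE01: msgid=<abc123@owlriver.com>, stat=Sent
EOF
sed -n 's/.*msgid=<\([^>]*\)>.*/\1/p' sent.log > sent-ids.txt

bounce_is_ours() {   # $1 = file holding a candidate return piece
    # extract any quoted Message-ID and look for an exact match among sent IDs
    sed -n 's/.*[Mm]essage-[Ii][Dd]: *<\([^>]*\)>.*/\1/p' "$1" \
        | grep -qxFf sent-ids.txt
}

printf 'Delivery failed\nMessage-ID: <abc123@owlriver.com>\n' > bounce.txt
bounce_is_ours bounce.txt && echo "ours: pass along" || echo "discard"
# prints "ours: pass along"
```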
<br />
Or, version two, a trusted cohort of outbound mailservers could build a <a href="http://en.wikipedia.org/wiki/Message_authentication_code" target="_blank">MAC</a> MIME <a href="http://www.ietf.org/rfc/rfc1521.txt" target="_blank">Multipart</a> attachment for each outbound message, and also a second MIME attachment that is a cryptographically validated 'clearsign' of that MAC part. Possibly bundle this up into a <a href="http://tools.ietf.org/html/rfc2387" target="_blank">Multipart Related </a> set of structured
attachments. Add these two new MIME attachments to all messages on every
outbound piece. The first part -- the MAC part -- would be based on a hash of the message body, <b>plus</b> a timestamp of seconds since Epoch or such, and other optional entropy, to avoid forgery and replay attacks<br />
<br />
Later then, when a putative return is offered, only accept for further processing those returns that carry a validating pair of MIME attachments, produced by re-hashing the message body in chief together with that MAC section's timestamp, and previously clearsigned. Discard stale stuff, and non-validating content. This gets rid of the need for the database and simplifies the procmail rules. A well-formed candidate return piece carries around all that is needed to decide if one will pass a mail return message along to human eyes<br />
<br />
Not free, as it will burn up compute cycles on every send, and a few more at return time, but complete and controllable locally, and so resistant to spammers. Avoids the database requirement, so it can scale out. Most of the needed tools already exist as FOSS. hmmm<br />
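A toy of just the MAC part, with <tt>openssl</tt> (assumed to be in the PATH; in the real design the secret would be shared only among the trusted cohort of outbound servers, and the clearsign step would wrap this):

```shell
SECRET='site-local secret'          # shared only among the trusted cohort
TS=$(date +%s)                      # entropy/staleness marker
BODY='the message body in chief'

# build the MAC 'attachment': timestamp plus keyed hash of body+timestamp
MAC=$(printf '%s%s' "$BODY" "$TS" \
      | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
MACPART="$TS:$MAC"                  # would ride along as the new MIME part

# later, on a putative return carrying the same body and MAC part:
CHECK=$(printf '%s%s' "$BODY" "${MACPART%%:*}" \
        | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
[ "$CHECK" = "${MACPART#*:}" ] && echo "validates" || echo "discard"
# prints "validates"
```

Staleness is then just a comparison of the embedded timestamp against the acceptance window.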
<br />
<br />
The protocols governing what constitutes: <b>email</b> permit a sender to enter whatever 'return address' and 'sender address' they wish on a piece of email. It is trivial to find an 'open' relay to accept email to send to any third party. Consider the analogy:<ul>
<li>All the while being careful to not leave a fingerprint or other biometric, I use cash to purchase a post card at the corner store, along with a stamp </li>
<li>I address it to someone of tender sensibilities, and assert that I noticed that their car was parked outside the local 'adult entertainment' establishment </li>
<li>I sign it: Harry S Truman </li>
<li>I enter a 'return address' of:</li>
</ul>
<br />
<blockquote class="tr_bq">
Harry S Truman<br />
President Emeritus<br />
1600 Pennsylvania Avenue<br />
Washington DC 20500 </blockquote>
<ul><li>I mail it</li></ul>
<br /><br />
The recipient is outraged to find such a libelous assertion, visible for their letter carrier to see, and demands that the person who did so be identified, and stopped. Also, for good measure, they want the Postal Service Inspectors to get on the matter so that such heartbreaking assertions never happen again<br />
<br />
About all the Postal Service will offer to do in the usual case is to return the piece to its nominal sender. And he no longer receives mail at that address <br />
<br />
(I note parenthetically that the Postal Service DOES seem to <a href="http://www.thesmokinggun.com/documents/woman-arrested-for-obama-bloomberg-ricin-letters-687435" target="_blank">scan images of ALL paper mail</a> passing through their system)<br />
<br />
<br />
Stopping spam (here: bounce backsplatter and <a href="http://en.wikipedia.org/wiki/Joe_job" target="_blank">'joe jobs'</a>) is just not going to turn out to have a durable, easy, and comprehensive solution without re-thinking what <i>we send</i> looks like. Spammers and legitimate receivers are in an 'arms race', and today's fix will rot if senders can re-engineer around the fixes. If this state of affairs distresses a person greatly, then until I can get that MIME solution going to test my hypothesis: stop reading email; hire a full time, 24x7 secretary to pre-read all email and toss the junk; turn up the filtering and accept the false positives; grow a thick skin<br />
<br />
Or, of course, start coding and beat me to itR P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-16645914330353231332013-06-13T18:41:00.001+01:002013-06-13T18:41:50.081+01:00Phone call: 'I've got this sick machine ...'me: well, why is it sick?<br />
<br />
them: yum complains about a missing signing key<br />
<br />
me: so install the key; it is down in <tt>/etc/pki/rpm-gpg/</tt>, and <tt>rpm --import ... </tt> will do the trick<br />
<br />
them: that directory is not there<br />
<br />
me: who set up the machine?<br />
<br />
them: well, I was handed it, and ...<br />
<br />
me: so, take a level zero backup and then clean up the machine before trying to work on it, or deploy a new one<br />
<br />
them: well, I can't<br />
<br />
I just got off that call from a friend in a new employment situation<br />
<br />
The technical fix was outlined by me <a href="http://www.owlriver.com/tips/broken-system/" target="_blank">long ago</a>, and I sent an email with the link along to the person calling <br />
<br />
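In outline, the repair is short; this sketch assumes a CentOS 6 box, so the key path below is the EL convention and would need adjusting for the actual release:

```shell
# Re-import the distribution signing key so rpm/yum can verify packages.
# Path is the CentOS 6 convention -- an assumption about the caller's box.
KEYFILE=/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
if [ -f "$KEYFILE" ]; then
    rpm --import "$KEYFILE"     # yum can verify signatures again
else
    # on the caller's machine the whole directory was gone, so restore it first
    echo "reinstall the release package: yum reinstall centos-release"
fi
```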
BUT: Fixing the mindset inside the caller's head: <b>do not try to work in an undefined (here: broken) environment</b> is harder<br />
<br />
But the caller has a problem in their work-flow process; a fix has to be done; sooner is probably better than later; a broken machine in production is 'technical debt', pure and simple. Fundamental expectations are not met; binary partitioning will not work well to isolate problems, as more than one piece is probably broken. It will break again, and a perception may well form that the caller is the problem, rather than the broken environment they were handed<br />
<br />
Be sure to make a note to yourself to also address the broken process that permitted that machine to escape into production. Dollars to doughnuts, there is 'more than one roach' lurking. I'll cover a bet that there are not tested backups in that shop<br />
<br />R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-22399456246948937832013-01-04T21:47:00.002+00:002013-01-04T21:47:56.959+00:00Another pet died across the holidaysI wrote before about un-maintained and <a href="http://orcorc.blogspot.com/2012/09/feeding-pet.html" target="_blank">orphaned WordPress sites</a> being exploited. That same frantic user from two months ago called again. The TL;DR summary is:<br />
<ul>
<li>cPanel administration with multiple accounts in a single host without protections </li>
<li>OS Updates not being run</li>
<li>WordPress updates not being run</li>
<li>Random add-ons being used without an awareness of security issues</li>
<li>No SELinux (disabled)</li>
</ul>
An exploit un-gzip-ping a hostile payload from cache was used, and the machine was taken over<br />
<br />
The absence of good sysadmin skills, well packaged content, and updates 'for the loss' ...R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-48699766436504603102012-10-30T19:14:00.001+00:002012-10-30T19:14:28.738+00:00disable IPv6 DNS resultsWe had an end user appear in the main #centos IRC channel the other day with an IPv6 problem. That person had leased a VPS somewhere, and their provider had included and enabled IPv6, at least partially. Something was wrong in the network fabric, so that while some IPv6 services worked, others did not; DNS queries returned AAAA records, but then the VM hoster was not transiting port 80 TCP traffic. Very curious, and frustrating to the end user who just wanted <tt>yum</tt> to work so they could install updates and packages on their instance <br />
<br />
The culprit is the grafting in of IPv6 readiness in <tt>man 2 getaddrinfo</tt>. This is the way of the future, so there is no fighting it on a long term basis, but tactically, having a means to be IPv4 only is appealing for people just wanting to work in the older address space. The <b>TL;DR</b> upshot of that man page is that in a properly functioning system, name resolution answers under IPv6 are preferred, and only if not available does one fall back to the older IPv4. But this places a premium on IPv6 <em>actually working</em> when present. We've shipped a full native IPv6 setup for customers at <a href="http://www.pmman.com/" target="_blank">PMman</a> for a couple of years now, but I assure you that we had some head-scratching as we rolled it out, and found customers using tunnels from <a href="https://www.tunnelbroker.net/account.php" target="_blank">HE</a> or <a href="http://www.sixxs.net/faq/account/?faq=10steps" target="_blank">SixXs</a> were also leaking advertisements to other VM's. We added rules to filter out the false traffic after a bit of tracing with <tt>tcpdump</tt> <br />
<br />
I have blogged about it before when IPv6 link-local address resolution (the <tt>^FE</tt> family) was confusing <a href="http://orcorc.blogspot.com/2009/03/ipv6-eats-kittens-and-distcc-on-debian.html" target="_blank"><tt>distcc</tt></a> under Debian a couple of years ago. There are links in the CentOS wiki for approaches on disabling IPv6 traffic, which vary between <a href="http://wiki.centos.org/FAQ/CentOS5?highlight=%28ipv6%29#head-47912ebdae3b5ac10ff76053ef057c366b421dc4" target="_blank">C5</a> and <a href="http://wiki.centos.org/FAQ/CentOS6#head-d47139912868bcb9d754441ecb6a8a10d41781df" target="_blank">C6</a> <br />
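Before going as far as reconfiguring the resolver, there are per-lookup and per-tool dodges worth knowing while the IPv6 fabric is suspect; a small sketch (<tt>getent</tt> ships with glibc, so it is present on any EL-family box):

```shell
# Ask the resolver for A records only, skipping AAAA answers entirely.
A_ONLY=$(getent ahostsv4 localhost)
printf '%s\n' "$A_ONLY"
# curl and wget carry equivalent per-run switches:  curl -4 ...   wget -4 ...
```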
<br />
That last mentioned article has an outlink to a <a href="https://bugzilla.redhat.com/show_bug.cgi?id=641836" target="_blank">bugzilla</a> ticket that offers food for thought. It mentions in passing that one can direct a nameserver to NOT deliver IPv6 results with a fairly simple tweak <br />
<blockquote>
Another option is to add this to <tt>/etc/sysconfig/named</tt>:<br /><blockquote>
OPTIONS="-4"</blockquote>
</blockquote>
<br />
... so, ... it should be possible to set up a local caching nameserver on the <tt>localhost</tt>, configured to NOT return IPv6 content, and so work around the issue. This smells sort of 'hackish', but it would have the benefit of being a single method that should work in the general case and not be tied to any particular kernel version, or other variable <br />
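A minimal sketch of that workaround, assuming a stock BIND on an EL box and following the bugzilla suggestion quoted above (file paths are the usual CentOS conventions; untested here):

```
# /etc/sysconfig/named -- run named IPv4-only, per the bugzilla suggestion
OPTIONS="-4"

# /etc/resolv.conf -- point the box at the local caching instance
nameserver 127.0.0.1
```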
<br />R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-6696777981887786542012-09-27T08:32:00.002+01:002012-09-27T08:32:34.777+01:00Feeding the petWe had a frantic call from a sometimes customer today. Their self-administered WordPress-based website had a Trojan in it, and it was saturating their website traffic allocation. "THE SITE WAS DOWN!!" They had signed up at a CPanel mediated, shared hosting firm, and a plug-in they had installed turned out to contain a well-known trojan<br />
<br />
We spent a couple of hours looking into it. And then a couple more looking into the WordPress security notification system. Perhaps I should say: non-notification system, since there appears to be no formal notification mailing list from the WordPress folks proper to which one can subscribe<br />
<br />
The WordPress model seems to be: treat your WordPress site as though it is a <b>pet that needs daily feeding</b>. And to be <a href="http://www.wpsecuritylock.com/how-i-got-hacked-by-dan-morris/" target="_blank">'put down'</a> when you lose interest in it, move on, or forget about it -- Oops. Log in daily as an administrator, and look for a notification <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.wpsecuritylock.com/images/wordpress-3-4-2-security-update.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://www.wpsecuritylock.com/images/wordpress-3-4-2-security-update.png" /></a></div>
that you need to apply the 'latest and greatest' update. Run the update process manually whenever it appears. Oh yeah, did you remember to take a backup FIRST, and test that you can roll back to it if the 'update' breaks anything? Oops<br />
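That 'backup FIRST' step need not be exotic; a hedged sketch, where the site path, database name, and destination are all assumptions about a typical small hosting layout:

```shell
# Snapshot files and database before touching the WordPress update button.
# SITE, DEST and the database name 'wordpress' are assumptions -- adjust.
SITE="${SITE:-/var/www/html/wordpress}"
DEST="${DEST:-$HOME/wp-backups}"
STAMP=$(date +%Y%m%d-%H%M)
mkdir -p "$DEST"
tar czf "$DEST/wp-files-$STAMP.tar.gz" -C "$(dirname "$SITE")" \
    "$(basename "$SITE")" 2>/dev/null || echo "no site at $SITE -- set SITE"
command -v mysqldump >/dev/null 2>&1 \
    && mysqldump wordpress > "$DEST/wp-db-$STAMP.sql" 2>/dev/null \
    || echo "mysqldump not found or dump failed -- database not saved"
```

And then actually test that a restore from those files works, before trusting them.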
<br />
This of course RULES OUT using a packaged approach to managing such sites, as the lag for stabilizing a new RPM package, accounting for potential database changes, and the like 'take too long'. Just unroll a tarball, and trust that it will not break any local customizations<br />
<br />
I see fourteen tabs still open in my browser panel, related to trying to track down a central and formal notification feed to which I (or any person seeking 'push' notification) might subscribe, containing only 'Security' notifications. Weeding through the tabs, ...<br />
<br />
<ul>
<li>The <a href="http://codex.wordpress.org/Installing_WordPress" target="_blank">'Famous 5-Minute Install'</a> for WordPress -- Nope, no useful outlink for hardening, nor to subscribe to notifications, beyond a pointer to a third-party Ubuntu appliance with an 'automatic security updates'. That appliance's page has pointers to a tool to enable taking database backups, adding PHPMyAdmin, and Webmin. Not good choices for a person caring about security</li>
<li>Perhaps <a href="http://wordpress.org/news/category/security/" target="_blank">FAQ items tagged with: Security</a> -- Nope, clearly incomplete, as for example a Google search turns up <a href="http://wpmu.org/infected-wordpress-sites-connected-to-trojans-on-approx-700000-macs/" target="_blank">this third-party alert</a> for version 3.3.2, but the <a href="http://wordpress.org/news/2012/04/wordpress-3-3-2/" target="_blank">Release Notice</a> does not get titled with: Security</li>
<li>This <a href="http://core.trac.wordpress.org/ticket/10253" target="_blank">bug (#10253)</a> lingered for three years with a Security tag in their Trac issue tracker as to the <a href="http://core.trac.wordpress.org/query?status=closed&resolution=fixed&milestone=3.4&group=resolution&desc=1&order=severity" target="_blank">current release series (3.4)</a>, and was amended ten days ago; but the <a href="http://wordpress.org/news/category/releases/" target="_blank">latest release (for 3.4.2)</a> was twenty days ago when this is written. Should an update have been released? Who knows?</li>
<li>Perhaps their <a href="https://codex.wordpress.org/FAQ_Security" target="_blank">FAQ Security</a> -- Nope, no push notification link suggested there, but lots of clutter as to copyright infringement notification handling, and miscellaneous topics</li>
<li>Perhaps watch the <a href="http://wordpress.org/news/category/releases/" target="_blank">Releases News</a> in an RSS reader - Oops, no sub-tag feed offered, and there has not been an "Important" Security release since December 2010, if one used that approach</li>
<li>Run a Google search daily, and look for third-party commentary - Nope; although nuggets may be found, it is not viable: <a href="http://www.wpsecuritylock.com/wordpress-security-alert-3-4-2/" target="_blank">Not Authoritative</a>, irregular and <a href="http://www.wpsecuritylock.com/category/blog/" target="_blank">partial as to updates</a>, and wading through search engine hits, or RSS feed clutter, will kill your productivity</li>
</ul>
Clearly, one MUST configure the webserver to NOT permit off-site access to the credentials and configuration file <tt>wp-config.php</tt>, but I'll be darned if I can see instructions on the WordPress site showing a novice administrator how to do this. In a shared hosting environment without 'root' level control, it is probably not even doable. There is no hint of this rather elementary precaution in the <a href="http://codex.wordpress.org/Editing_wp-config.php" target="_blank">official write-up</a> concerning editing the file<br />
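For completeness, where one does control the webserver, the usual recipe is a short Apache stanza; this is my sketch for the httpd 2.2-era syntax of the period, not anything taken from the WordPress docs:

```
# in the vhost, or in an .htaccess where AllowOverride permits it:
<Files wp-config.php>
    Order allow,deny
    Deny from all
</Files>
```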
<br />
A quick Google search for: turns up lots of vulnerable candidate installations, and a <a href="http://www.markflint.net/parsing-wordpress-wp-config-php-file/" target="_blank">handy, dandy code fragment</a> for parsing information out of potential victims so found, to automate take-overs. No criticism of the author of that code publishing his work; a knife can heal (as a scalpel), prepare dinner, or injure, depending on the intent of its holder<br />
<br />
I see an official <a href="https://codex.wordpress.org/FAQ_My_site_was_hacked" target="_blank">recovery outline</a> suggestion, anywayR P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-55263419066153539062012-09-26T21:10:00.000+01:002012-09-26T22:04:54.809+01:00Worth repeating; Trust and Open SourceI first encountered Mark Shuttleworth in person at an Ottawa Linux Symposium a few years ago, and passed along a reply from Dag, responding to some controversial comment Shuttleworth had made at the time. I choose not to use Ubuntu or Debian as my primary X desktop, but that said, there are 6 machines running one of those two distributions powered on in my office at the moment, so I am not a stranger there, either <br />
He was being 'up front' about the fact that <a href="http://www.markshuttleworth.com/archives/1182" target="_blank">Amazon search results</a> are being trialled for an upcoming Ubuntu version <br />
He points out, and it bears repeating, the following:
<br />
<blockquote>
[Question: ] <b>Why are you telling Amazon what I am searching for?</b><br />
[Answer: ] We are not telling Amazon what you are searching for. Your anonymity is preserved because we handle the query on your behalf. <b>Don’t trust us? Erm, we have root. You do trust us with your data already. You trust us not to screw up on your machine with every update. You trust Debian, and you trust a large swathe of the open source community. And most importantly, you trust us to address it when, being human, we err.</b></blockquote>
<br />
The <b>boldface</b> passages are important, but I carry the context here as well. When you use any computer operating system, you, in the role of <tt>user</tt>, are implicitly placing trust in the decisions and the commitment of those who put it together to 'do the right thing', or to make it right when things go awry <br />
Do you trust your vendors? Your actions may be pointing out a dissonance, if you said: <tt>no</tt> R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-9184157786458560492012-09-18T20:01:00.000+01:002012-09-18T20:10:34.374+01:00More chickens, returning home to roostI referred to the <a href="http://orcorc.blogspot.com/2012/09/on-my-radar-java-to-ios-objective-c.html" target="_blank">roosting chickens</a> theme earlier today as to Oracle, and did not expect to be back to that topic for a while. But the news does not wait<br />
One of my long time <a href="http://orcorc.blogspot.com/2010/08/chickens-coming-home-to-roost.html" target="_blank">concerns</a>, as a Cassandra, is that FOSS geeks firmly believe that ignorance, <a href="http://orcorc.blogspot.com/2009/03/promoting-ignorance.html" target="_blank">unworldly innocence</a>, and a naive sense that 'just conduct' while participating in FOSS matters offers protection, will somehow insulate the 'good folks' from the world<br />
To my experience, it does not work <a href="http://orcorc.blogspot.com/2010/08/your-winnings-sir.html" target="_blank">that way</a>. One needs to clearly disassociate from risky conduct. A person needs to read EULAs and look for traps, like <a href="https://lsbbugs.linuxfoundation.org/show_bug.cgi?id=3643#c1" target="_blank">indemnification</a> clauses <br />
I see in the overnight news that RackSpace and Github face some <a href="http://techcrunch.com/2012/09/18/patent-complaint-filed-against-rackspace-for-hosting-github/" target="_blank">hot water</a>. Drilling down, as to how this is likely to play out, I am aware that Github has an <a href="https://help.github.com/articles/github-terms-of-service" target="_blank">EULA</a>, that provides in part: <br />
<blockquote>
Section F 3 <br />
You shall defend GitHub against any claim, demand, suit or proceeding
made or brought against GitHub by a third party alleging that Your
Content, or Your use of the Service in violation of this Agreement,
infringes or misappropriates the intellectual property rights of a third
party or violates applicable law, and shall indemnify GitHub for any
damages finally awarded against, and for reasonable attorney’s fees
incurred by, GitHub in connection with any such claim, demand, suit or
proceeding; provided, that GitHub (a) promptly gives You written notice
of the claim, demand, suit or proceeding; (b) gives You sole control of
the defense and settlement of the claim, demand, suit or proceeding
(provided that You may not settle any claim, demand, suit or proceeding
unless the settlement unconditionally releases GitHub of all liability);
and (c) provides to You all reasonable assistance, at Your expense. </blockquote>
Guess who is going to be added as a Defendant in that lawsuit next; guess whose pocketbook will ultimately be looked to, to pay for Github 'lawyering up.' Github and RackSpace have every incentive in the world to 'point out' the poster of claimed infringing content to the Plaintiff and its lawyers, and so demonstrate 'good faith', a lack of intent to infringe, and their desire to mitigate any asserted damages R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-22354948513176927162012-09-18T09:59:00.000+01:002012-09-18T09:59:12.487+01:00On my RADAR: Java to iOS Objective CThis crossed my RSS newsreader feed:<blockquote>
J2ObjC is an open-source command-line tool from Google that translates Java code to Objective-C for the <a href="https://developer.apple.com/technologies/ios/" rel="nofollow">iOS</a>
(iPhone/iPad) platform. This tool enables Java code to be part of an
iOS application's build, as no editing of the generated files is
necessary. The goal is to write an app's non-UI code (such as data
access, or application logic) in Java, which is then shared by web apps
(using <a href="https://developers.google.com/web-toolkit/" rel="nofollow">GWT</a>), <a href="https://www.android.com/" rel="nofollow">Android</a> apps, and iOS app </blockquote>
<blockquote>
J2ObjC
supports most Java language and runtime features required by
client-side application developers, including exceptions, inner and
anonymous classes, generic types, threads and reflection. JUnit test
translation and execution is also supported </blockquote>
<br />
The project homepage is <a href="https://code.google.com/p/j2objc/" target="_blank">here</a>, and along with Google's Web Toolkit, seems to be under an acceptable FOSS license <br />
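For a first taste of the tool itself, an invocation might look like the sketch below. Nothing here comes from the announcement quoted above; the file name and output directory are placeholders of mine, and the flag usage is my reading of the project's documented command-line shape, guarded so the sketch is harmless on a machine without the tool installed:

```shell
# Hypothetical j2objc run; 'Hello.java' and 'objc_out' are placeholders.
# The guard keeps this from erroring where j2objc is not on the PATH.
if command -v j2objc >/dev/null 2>&1; then
    # translate one Java source into Objective-C (-d names the output dir)
    j2objc -d objc_out Hello.java
    result="translated"
else
    result="j2objc not on PATH"
    echo "$result"
fi
```

The translated `.h`/`.m` pair then builds as ordinary Objective-C in an Xcode project, which is the "shared non-UI code" workflow the announcement describes.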
It does not strive to be a full replacement for all things Java, and as it notes: <blockquote>
when new projects first start working with it, they usually find new
bugs to be fixed. Apparently every Java developer has a slightly
different way of using Java, and the tool hasn't translated all
possible paths yet</blockquote>
<br />
Long time readers of this blog will recall that I have been a critic of Java in FOSS projects, particularly in the <a href="http://orcorc.blogspot.com/2008/12/only-you-can-prevent-forest-fires.html" target="_blank">LSB standards</a> context, and as I predicted, the Oracle anti-FOSS Java lawsuit [against Google it turns out, because they are a juicy target] was <a href="http://orcorc.blogspot.com/2010/08/chickens-coming-home-to-roost.html" target="_blank">filed</a> a couple years later. <a href="http://orcorc.blogspot.com/2010/10/arrogance-personified.html" target="_blank">People working for nasty corporate masters</a> earn their futures <br />
<br />This is not to say that translation layers are without warts and flaws, but at least this provides a partial roadway away from a Java dependency into a code generator that may emit freely licensable code. I'll be tracking this with interest. I have a sample conversion in mind ... <br />
R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-20842407315090080612012-09-12T22:31:00.000+01:002012-09-12T22:31:07.850+01:00about this week's GoDaddy outage<p>It is not clear if a cabal of Anonymous hackers, or simple network administration issues, caused the GoDaddy outage of Monday past. I guess it does not really matter </p><p>What <em>really</em> would have hurt is if the root domain server constellation had been compromised, to well and truly take down the internet. A Domain Registrar sends along updates to those root servers periodically, and GoDaddy's outage, to the extent of our involvement with them, simply impaired our ability to renew domains, and set new nameservers (NS records). As we had no urgent renewals pending, that is to say, it did not affect us at all </p><p>We do not rely on GoDaddy for DNS services, and really, never have relied on them for production purposes. For <a href="http://www.pmman.com/" target="_blank">PMman</a> and for our ISP and COLO services, we run three geographically diverse nameservers for most of our purposes. We also run a few others for customers' needs (PTR records for a couple of datacenters we are in, testing, demonstration units) </p><p>The true 'masters' of our externally visible DNS servers are simply not accessible from the public internet. We push out updates to our public nameservers by cryptographically protected <a href="https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-bind-rndc.html" target="_blank"><tt>rndc</tt></a> transactions. Those transactions are logged, and the information causing a given <tt>rndc</tt> transaction is created by queries into a local database with a custom written LAMP control interface based on the FOSS tools that are in a stock <a href="http://www.centos.org/" target="_blank">CentOS</a> install. 
Compared to manually editing zone files, checking variants in and out of a version control system, and so forth, this more readily provides us with scalability, traceability and auditability. Why, I caught a piece of lint in a zone file just last week, reading the overnight error report emails </p><p>We also retrieve the state of the generated zone files at the client public nameservers, and check them for consistency and coherency, essentially after each update, to prevent errors from propagating. ACLs, transaction logging and other checks provide more traceability, and we closed the mouse hole that that 'lint' crept in through in short order </p><p>As a result of the GoDaddy outage, a couple of our 'alumni' tech support folks who have moved on in their careers to other employment, gave us a call Tuesday, because they remembered how paranoid I am about making sure DNS is available. I appreciate the calls, and we've some new customers as a result </p><p>People have strong opinions about GoDaddy, sometimes for reasons of political correctness; I like them, by and large, because they provide a workmanlike product for a price that is hard to beat. They sure beat the heck out of the old Network Solutions rates. I have something like 500 domains that I administer and renew and most are there, although some are at other registrars for both historical and other reasons </p><p>And while Danica Patrick is not my cup of tea, she is not hard on the eyes, either </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-57758166813192595092012-09-07T15:28:00.000+01:002012-09-07T15:28:22.471+01:00RPM and YUM trick: synchronize packages installed on two hosts<p>The question was asked in IRC today:<br><blockquote>hello folks, is there any way to install packages from a list written by yum list installed? 
I've two CentOS 6.3 hosts and I like to get them with the same packages installed (also versions) </blockquote></p><p>Here is a quick (and accurate) answer:<ol><li>Take and test backups to taste</li><li>Run: <br><blockquote><tt>rpm -qa --qf '%{name} \n' | sort > MANIFEST </tt></blockquote><em>Note: that is a <tt> backslash n </tt> -- the html markup makes it hard to see the distinction</em></li><li>Then copy that MANIFEST to the second unit and run: <br><blockquote><tt>yum -y install `cat MANIFEST` </tt></blockquote><em>Note: and here, <tt>backtick</tt> around the <tt>cat</tt> to get a sub-shell</em></li><li>Finish by running: <br><blockquote><tt>yum -y update </tt></blockquote>on each unit</li></ol></p><p>For extra credit, re-run the MANIFEST creator on each unit, and use <tt>diff</tt> to find any variances</p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-28384136341382602532012-08-01T22:25:00.002+01:002012-08-02T23:06:21.554+01:00Thinking about Secure Boot<p>The Open Source community has a corner turn coming, in the upcoming roll out of UEFI enabled 'Secure Boot' hardware. And the 'Build Your Own' hobby computer builders are in for a rude shock as well, as they are going to have to start paying per each re-install of a commercial operating system, because UEFI provides a mechanism to reliably control (read: prevent free unlimited) reinstalls of Microsoft operating systems, as well as end user Application software </p>
<p>An integrated computer hardware or motherboard Manufacturer (Dell, HP, SuperMicro, etc) will have no choice but to conform to the new 'latest is greatest' approach that Windows 8 will bring to the market if it wants to keep selling new hardware. ... which means being able to run whatever Microsoft's latest is. And I don't want to be coy about this: it is in their economic interest as well to gain a way to limit the life of hardware, as it builds a ready market for re-selling to customers with older hardware </p>
<p>I simplify here, but once in full release, Windows 8 (and going forward, I think I can safely predict, high ticket proprietary Applications) will ONLY install on a system that has a secure chain of signed binaries and RELATED counter-signed Variables. That secure chain will be established by verifying a chain of checksums ('hashes') duly counter-signed by a Key Exchange Key (KEK) public/private key pair, on back to a database itself counter-signed and verifiable by a Platform Key (PK). The PK may, but is not required to, be wipeable by an end consumer. For the reasons below, I think there WILL be an ability to reset the NVRAM in the general case </p>
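That sign-then-verify pattern is easier to grasp scaled down to a single link of the chain. The sketch below uses openssl as a stand-in for the UEFI machinery; none of the file names or commands come from the UEFI specification, they only illustrate the pattern of a vendor key counter-signing a payload's hash:

```shell
# One link of the chain, in miniature. 'kek.pem' and 'shim.bin' are
# illustrative placeholders, not UEFI artifacts.
openssl genpkey -algorithm RSA -out kek.pem 2>/dev/null     # private side, held by the vendor
openssl pkey -in kek.pem -pubout -out kek.pub 2>/dev/null   # public side, as lodged in firmware
echo 'boot loader payload' > shim.bin
openssl dgst -sha256 -sign kek.pem -out shim.sig shim.bin   # vendor counter-signs the hash
openssl dgst -sha256 -verify kek.pub -signature shim.sig shim.bin
# prints: Verified OK
```

Flip one byte in `shim.bin` and the verify step reports a verification failure, which is the moment at which a Secure Boot firmware would refuse to hand off execution.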
<p>But perhaps not -- in the past applying some firmware updates required cracking open a case and moving a jumper. So I feel pretty comfortable that a mechanical 'jumper' option will appear as motherboards are designed, to prevent purely electronic (via executable code actions) key-wipes; it just makes sense in a corporate IT environment to prevent end employees from tampering with machines. And in some use cases, the jumper disappears altogether </p>
<p>Fresh from the fabrication plant, no PK exists in a UEFI 'bios' and it is said to be in a 'Setup' state. In the usual case, a PK will initially be generated and 'injected' into the NVRAM keystore at the time of manufacture (those cases where the end customer expects to receive a 'ready to run' computer). That is, while there may be provision for generating and injecting a PK keypair (either at initial receipt, or on a unit needing a 'wipe and reinstall'), in the general case the PK keypair injection will have already been done by the Manufacturer. And the Manufacturer is NOT going to readily supply the private side of that key with each chassis it ships ... the support load is too great, and an end user will be told: they are out of luck as it is not available </p>
<p>Those Variables I mentioned will almost certainly contain one or more 'unique per unit' UUID type hash, and as I say, these variables will be countersigned and held in the motherboard's non-volatile configuration ram. We have seen such approaches before -- old SGI hardware used NVRAM to hold a MAC address, that may have been signed as well, and SGI install disks looked for values in a range assigned to SGI </p>
<p>Part of an OS install process will be generating and authorizing a unique UUID, signed by a controlled KEK keypair chain, that chain running unbroken through a specific PK. The Manufacturers will generate a keypair, good for a given number of units and report sales to their Licensor. No more pesky COA stickers. If a KEK pair is compromised, they report it as such to their Licensor (here, Microsoft), and the public side is added to a blacklist database update. Computers relying on that trust chain simply stop booting, or stop running some node-locked Application, and instruct their owners to call technical support </p>
<p>Eventually Tech support says that this unit is out of warranty and end of life, and the Manufacturer or its competitor gets a new sale. Life is good for the Licensor and the Manufacturers as a group </p>
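The blacklist mechanics described above are, at bottom, a membership test against a revocation database before a key is trusted. A toy model, with entirely invented fingerprints standing in for the firmware's revocation list:

```shell
# Toy model of a revocation-list check; all data here is invented
printf 'ab:cd:ef\n12:34:56\n' > dbx.txt     # fingerprints of revoked keys
kek_fp='12:34:56'                           # fingerprint of the KEK in our trust chain
if grep -qx "$kek_fp" dbx.txt; then         # whole-line match against the list
    echo "key revoked: refusing to boot"
else
    echo "chain trusted"
fi
# prints: key revoked: refusing to boot
```

The real mechanism signs and distributes that list as firmware Variable updates, but the economics are exactly as described: one database push, and every machine relying on the revoked key goes dark.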
<p>As I read the UEFI specification, however, nothing requires being able to 'read back' those hashes, so that one could back them up, or have them at hand to be able to re-inject them. Indeed, the usual model is to NOT be able to read back enough information from a secure key store to be ABLE to back up and restore it. To do so would destroy the metering and regulation value of the KEK signed hashes and Variables </p>
<p>I assume the Open Source community will solve how to transition most hardware back into Setup state; I see that some of the kernel hackers have already solved wiping and then injecting new keys into the PK. I know that UEFI supports, but does not mandate, use of X.509 certificates, so it is just a matter of time until StartSSL or some other Open Source friendly Certificate Authority documents and issues signing keys for both formal distributions and end users compiling local kernel modules needing signing </p>
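At the small end, 'issuing a signing key' for locally built modules could be as simple as generating a self-signed X.509 certificate. A sketch with openssl; the file names and subject are placeholders of mine, not anything from a real Certificate Authority's process:

```shell
# Generate a self-signed X.509 certificate of the sort a local admin
# might enroll for signing locally built kernel modules
# (file names and subject are illustrative placeholders)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout modsign.key -out modsign.crt \
    -subj '/CN=Local module signing key' 2>/dev/null
# inspect the certificate subject
openssl x509 -in modsign.crt -noout -subject
```

The open question is not generating such a certificate, which is trivial, but getting its public half enrolled into the firmware's trusted database on a given motherboard.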
<p>Two futures are around the corner, here: Either way, BYO hobby users of commercial operating systems and applications lose a lot of their freedom to experiment. When it is too hard to 'play' around, they will need a new space to experiment and customize in. It is Open Source for the win as such a venue </p>
<p>And for the Open Source folks: 1. If the hardware permits an ability to re-set a UEFI 'bios' to Setup state and to re-inject an initial PK, Open Source wins, as it picks up new users when a person is presented with an estimate for re-purchasing all that they thought they had bought, but that need new activation keys, and asks: what else can I do? 2. If the hardware does not, this is readily apparent, and the manufacturer is shunned and their hardware avoided by folks in the know </p>
<p>I know it is the present fashion in the Open Source community to damn UEFI and all, but the outlook is not all darkness. Just a turn around the corner to a new future </p>
<hr>
<p>Updated: The interface at blogger.com (wasn't this formerly blogspot.com?) has changed, and I missed that the final copy would 'lay out' so poorly; it is also eating random markup that formerly worked ... fixed</p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-21904984196900256822012-07-19T20:08:00.000+01:002012-07-19T20:08:00.683+01:00Right, Left, down the middle<p>A couple weeks ago, a 'Derecho' blew through Columbus, on its way to the metro DC area. Amazon had some failures that cascaded through to people who did not have site redundancy. People know that the East Coast was hit hard, but as we are out in 'fly-over' country they did not perhaps realize that we had several hundred thousand people around here without electricity for a couple weeks as well </p><p>I've mentioned <a href="http://orcorc.blogspot.com/2012/04/lopsa-at-pmman-dc.html" target="_blank">before</a> that the primary datacenter that we run our <a href="http://www.pmman.com/" target="_blank">PMman</a> product out of is at the Tier IV level -- multiply redundant cooling, power grid, power backup, fiber entrances, carriers. The owner, a friend, is just a fiend about making sure he does not HAVE outages </p><p>Me, too. In our after-event review, I see that one of our secondary sites here in town fell back to its generators, but the rest were all fine. But all sites we use are well covered, all fiber, all multi-homed. Planning for failure was in our deployment planning checklist; we pay for (and we charge for) that coverage; and I consider it worth it </p><p>A national footprint customer based in Canada agrees. And their lead technical person reports that our connectivity is faster than their datacenter eighty miles from their home office. 
Not surprising, as our main DC is on a 'main line' fiber route between Chicago, NYC, DC, and Atlanta -- financial markets and federal government presences can help, that way </p><p>If the availability of your online presence matters to you, feel free to ask for a quote </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-80450188780473557192012-04-16T16:40:00.006+01:002012-04-16T16:54:27.013+01:00Yeah, it's a Monday ...<p>Last week, I made a trip up to the local computer store, the mothership of MicroCenter, and finally broke down and bought a USB/VGA four port KVM switch for my lead workstation .... not DVI or HDMI on the video, but still VGA. The PS/2 mouse and keyboards were scheduled to be phased out, and a move to USB devices slated for this week </p><p>Well, the components to be affected .. the old KVM, the panel monitor, and so on must have held a pow-wow across the weekend, because when I came in this morning the panel monitor's backlight (a Westinghouse L2210NW, 1680x1050, datecode of April 2008) seems to be completely dead. This is of course the absolutely MOST inconvenient part of the display chain to die, because I need to run custom 'modeline' detail under X to squeeze the maximum resolution and sharpness out of the display. 
I had also purchased the three year 'no questions' replacement warranty on that four year old unit, so no help there </p><p>I grabbed a 'retired to the front bench' NEC AccuSync LCD223wxm, also nominally 1680x1050, off one of the benches and have been fiddling with the modeline settings to have a backup to limp through the day, but the horizontal height is wrong, the pixels pudgy, and the video muddled </p><p>No doubt I _could_ get it to stand up and dance, but the NEC has a datecode from early 2007 so that is a sucker's game </p><p>Yeah, it's a Monday </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-91410593855969160522012-04-13T13:47:00.007+01:002012-04-13T14:14:41.494+01:00LOPSA at the PMman DC<p>I went up to a meeting at our North datacenter for <a href="http://www.pmman.com/" target="_blank">PMMan</a>, where a local group of system admins held a meeting, starting up a local LOPSA chapter. Food and soda were provided by the DC operator, along with salad ... since when did sysadmins start eating healthier food, rather than a diet of high sugar, high caffeine junk food? </p><p>The presentation slide deck was fine, and the presenter (a 'long timer' at a local credit-card clearance operation) ran through his bullet list of what to look for in the 'build vs. buy (lease space)' decision, and then a number of siting concerns. 
</p><p>Now I am familiar with his firm's site from prior visits, and it is adjacent to a major highway with regular closures for accidents; adjacent to a major rail yard where chemical spills have caused evacuations; and sole serviced into the power company grid </p><p>Our North site was chosen after a survey of all offerings within a radius we were willing to drive to for 'end of the world' 'hands on' intervention; it is jacked into two independent power grids along with the on-site generators, and is a premier demonstration location for the former Liebert (now owned by Emerson) power and site conditioning </p><p>I happen to drink coffee daily at Staufs with Liebert's representative here in town. My evaluation team suggested the location as a finalist, and when I checked, it turned out that I already knew the owner / developer from long, long ago telephony days, and when I have time, I'll go up and 'shoot the bull' with him on Saturday mornings at the DC </p><p>We have had a grand total of ZERO power related outages, and only one network connectivity issue in the last three years, that lasting less than 15 minutes, and that, due to human error in not handling a BGP fail-across migration properly [the cut-over protocol was changed, as I noticed the drop from my monitoring and called the owner's cell phone at once ;) ]. Well suited to our 'enterprise' customers </p><p>It is 'carrier neutral' and hugely connected -- multiple entrances of up to 88 x 100 G fiber spread across six or seven principal carriers. Native IPv6 to all drops we run through multiple carriers, along with the IPv4. I helped with the IPv6 design and cut-over some 18 months ago, and it has been seamless. The facility, and our services, just do not have outages except those that human error causes </p><p>It is not the cheapest in town ... 
but it is fairly priced for the value we have received </p><p>I had not sat down and reflected on how satisfied I was with that shift of our center of operations to the DC, but as I think about it, I am well pleased </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-54556547659120129952012-03-30T15:46:00.002+01:002012-03-30T15:48:07.869+01:00It is always time to get back into the swing of a good habitWhen I crushed my ankle, I really could not think straight for a couple months with the painkillers they had me on, and so I lost the habit<br /><br />It's time to start blogging regularly againR P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-69089386974170400882011-06-09T21:51:00.008+01:002011-06-09T22:42:20.814+01:00Happy IPv6 test day, part 2<p>In my <a href="http://orcorc.blogspot.com/2011/06/happy-ipv6-test-day.html" target="_blank">first post in this small series</a>, I closed without addressing matters of securing an IPv6 connection, and matters of debugging where a connection failure is occurring. Thinking about it, the diagnostic post needs to come first, because tightening down a connection can cause hard to diagnose symptoms. So, on to diagnosis ... </p><p>We examined the interface results last time. 
Looking at just the routing related to ipv6 is straightforward as well:<br><br />Some familiar tools: <br><pre>/sbin/ifconfig eth0<br />/sbin/ifconfig sit1<br />/sbin/route -A inet6</pre></p><p>So using those tools:<br><pre>[herrold@hostname ~]$ /sbin/ifconfig sit1<br />sit1 Link encap:IPv6-in-IPv4<br /> inet6 addr: 2604:aa:bb:cc::2/64 Scope:Global<br /> inet6 addr: fe80::4cf2:1c/128 Scope:Link<br /> UP POINTOPOINT RUNNING NOARP MTU:1480 Metric:1<br /> RX packets:1691 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:1693 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:0<br /> RX bytes:177195 (173.0 KiB) TX bytes:210468 (205.5 KiB)<br /><br />[herrold@hostname ~]$ /sbin/route -n -A inet6 | grep 2604<br />2604:aa:bb:cc::/64 :: U 256 1658 0 sit1<br />2604:aa:bb:cc::/128 :: U 0 0 2 lo<br />2604:aa:bb:cc::2/128 :: U 0 1691 1 lo<br />[herrold@hostname ~]$ </pre></p><p><br />That is a pretty ordinary routing table for a non-gateway endpoint. Off-box traffic (to the '/64' netmask) is handed to the <tt>sit</tt> interface, and local traffic (to the '/128') retained on the local <tt>lo</tt> interface </p><p>We use an unfamiliar tool: <tt>ping6</tt> -- The common '127.0.0.1' localhost has a new form under ipv6: <br><blockquote>ping6 ::1 -c 2 </blockquote></p><p>and testing <br><pre>[root@hostname ~]# ping6 ::1 -c 2<br />PING ::1(::1) 56 data bytes<br />64 bytes from ::1: icmp_seq=0 ttl=64 time=0.157 ms<br />64 bytes from ::1: icmp_seq=1 ttl=64 time=0.132 ms<br /><br />--- ::1 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1000ms<br />rtt min/avg/max/mdev = 0.132/0.144/0.157/0.017 ms, pipe 2<br />[root@hostname ~]# </pre></p><p>And we can ping by IP on the 'Global' link, both on the local end, and remotely with differing transit times for the packets: <br><br />Looking at the network connections, we examine the tunnelling interface: <br><pre>[root@hostname ~]# /sbin/ifconfig sit1<br />sit1 Link encap:IPv6-in-IPv4<br /> 
inet6 addr: 2604:aa:bb:cc::2/64 Scope:Global<br /> inet6 addr: fe80::4cf2:1c/128 Scope:Link<br /> UP POINTOPOINT RUNNING NOARP MTU:1480 Metric:1<br /> RX packets:1714 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:1731 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:0<br /> RX bytes:179587 (175.3 KiB) TX bytes:215180 (210.1 KiB)<br /><br />[root@hostname ~]# ping6 2604:aa:bb:cc::2 -c 2<br />PING 2604:aa:bb:cc::2(2604:aa:bb:cc::2) 56 data bytes<br />64 bytes from 2604:aa:bb:cc::2: icmp_seq=0 ttl=64 time=0.135 ms<br />64 bytes from 2604:aa:bb:cc::2: icmp_seq=1 ttl=64 time=0.137 ms<br /><br />--- 2604:aa:bb:cc::2 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 999ms<br />rtt min/avg/max/mdev = 0.135/0.136/0.137/0.001 ms, pipe 2<br />[root@hostname ~]# ping6 2604:aa:bb:cc::1 -c 2<br />PING 2604:aa:bb:cc::1(2604:aa:bb:cc::1) 56 data bytes<br />64 bytes from 2604:aa:bb:cc::1: icmp_seq=0 ttl=64 time=55.1 ms<br />64 bytes from 2604:aa:bb:cc::1: icmp_seq=1 ttl=64 time=53.7 ms<br /><br />--- 2604:aa:bb:cc::1 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1000ms<br />rtt min/avg/max/mdev = 53.781/54.473/55.165/0.692 ms, pipe 2<br />[root@hostname ~]# </pre></p><p>Turning to DNS and name resolution, it is quite familiar. One does not need an IPv6 link to query nameservers and receive back results, as they will answer questions _about_ ipv6 hostnames ('AAAA' records) to any authorized inquirer. 
Try these: <br><br /><pre>dig +trace www.ipv6.sixxs.net<br />dig www.kame.net aaaa </pre><br></p><p>I get answers like this:<br><br /><pre>[herrold@centos-5 ~]$ dig www.kame.net aaaa<br /><br />; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> www.kame.net aaaa<br />;; global options: printcmd<br />;; Got answer:<br />;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45595<br />;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 3<br /><br />;; QUESTION SECTION:<br />;www.kame.net. IN AAAA<br /><br />;; ANSWER SECTION:<br />www.kame.net. 85164 IN CNAME orange.kame.net.<br />orange.kame.net. 85164 IN AAAA 2001:200:dff:fff1:216:3eff:feb1:44d7<br /><br />;; AUTHORITY SECTION:<br />kame.net. 85164 IN NS mango.itojun.org.<br />kame.net. 85164 IN NS orange.kame.net.<br /><br />;; ADDITIONAL SECTION:<br />mango.itojun.org. 2364 IN A 210.155.141.200<br />mango.itojun.org. 2364 IN AAAA 2001:2f0:0:8800:206:5bff:fe8d:940<br />mango.itojun.org. 2364 IN AAAA 2001:2f0:0:8800::1:1<br /><br />;; Query time: 1 msec<br />;; SERVER: 10.16.1.112#53(10.16.1.112)<br />;; WHEN: Thu Jun 9 17:17:20 2011<br />;; MSG SIZE rcvd: 195<br /><br />[herrold@centos-5 ~]$</pre></p><p>which is certainly a mess to read -- let's trim out the interesting parts: <br><pre>[herrold@centos-5 ~]$ dig www.kame.net aaaa<br />;; ANSWER SECTION:<br />www.kame.net. 85164 IN CNAME orange.kame.net.<br />orange.kame.net. 85164 IN AAAA 2001:200:dff:fff1:216:3eff:feb1:44d7 <br /></pre></p><p>Which is the familiar information: a CNAME record is pointed in fact at an AAAA record at an ipv6-type IP. 
We can ping (<tt>ping6</tt>) it by IP:<br><br /><pre>[root@hostname ~]# ping6 2001:200:dff:fff1:216:3eff:feb1:44d7 -c 2<br />PING 2001:200:dff:fff1:216:3eff:feb1:44d7(2001:200:dff:fff1:216:3eff:feb1:44d7) 56 data bytes<br />64 bytes from 2001:200:dff:fff1:216:3eff:feb1:44d7: icmp_seq=0 ttl=52 time=246 ms<br />64 bytes from 2001:200:dff:fff1:216:3eff:feb1:44d7: icmp_seq=1 ttl=52 time=256 ms<br /><br />--- 2001:200:dff:fff1:216:3eff:feb1:44d7 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 999ms<br />rtt min/avg/max/mdev = 246.833/251.705/256.577/4.872 ms, pipe 2<br />[root@hostname ~]#</pre></p><p>or ping it by name, as DNS is working: <br><br /><pre>[root@hostname ~]# ping6 www.kame.net -c 2<br />PING www.kame.net(2001:200:dff:fff1:216:3eff:feb1:44d7) 56 data bytes<br />64 bytes from 2001:200:dff:fff1:216:3eff:feb1:44d7: icmp_seq=0 ttl=52 time=227 ms<br />64 bytes from 2001:200:dff:fff1:216:3eff:feb1:44d7: icmp_seq=1 ttl=52 time=244 ms<br /><br />--- www.kame.net ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1008ms<br />rtt min/avg/max/mdev = 227.291/235.678/244.066/8.401 ms, pipe 2<br />[root@hostname ~]# </pre></p><p>Some other examples to try as 'beacons' might include: <br><pre>ping6 2604:8800:100:9a::1 -c 2<br />ping6 2001:200:0:8002:203:47ff:fea5:3085 -c 2<br />ping6 ftp.ipv6.uni-muenster.de -c 2<br />ping6 -I eth0 ipv6.google.com -c 2 </pre></p><br /><p>So the familiar diagnostic methods of examining interfaces, checking routing, testing connectivity by IP, and connectivity after resolution by name are all present</p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-78091838937070748152011-06-08T15:59:00.021+01:002011-06-09T15:14:09.668+01:00Happy IPv6 test day<p>There is an 'ipv6 readiness testing day' today, <strike>April</strike> June 8, 2011, and so it seems appropriate to post my personal checklist for 
putting a <a href="http://www.centos.org/" target="_blank">CentOS</a> box up on that network fabric </p><ul><li><a href="http://www.sixxs.net/faq/account/?faq=10steps" target="_blank">Apply</a> for an account with SixXs. Their reply takes a couple of days, as it is a volunteer run organization </li><li>Have a deployed, updated, and <a href="http://www.pmman.com/usage/hardening/" target="_blank">hardened</a> unit at a routable static IPv4 address </li><li>Amend <tt>/etc/sysconfig/iptables </tt> to include a line passing the tunnelling protocol. I place the entry after the IPSEC protocol entries in a stock setup. Restart iptables <br><pre>... <br />-A RH-Firewall-1-INPUT -p 50 -j ACCEPT<br />-A RH-Firewall-1-INPUT -p 51 -j ACCEPT<br /># vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv<br />-A RH-Firewall-1-INPUT -p ipv6 -j ACCEPT<br /># ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br />... </pre></li><li>Strip out any previous efforts at disabling the <tt>ipv6 </tt> / <tt>net-pf-10 </tt> kernel modules from loading in <tt>/etc/modules.conf</tt>, and in the files sourced in <tt>/etc/modprobe.d/ </tt>. 
Then rebuild the modules dependency table: <tt>/sbin/depmod -a </tt> </li><li>Amend <tt>/etc/sysconfig/network </tt> to carry the following lines: <br><pre>#<br />NETWORKING_IPV6=yes<br />IPV6INIT=yes<br />IPV6FORWARDING=yes<br />IPV6_DEFAULTDEV=sit1<br />#</pre>Which anticipates that the configuration details for the ipv6 tunnel will live in a file: <tt>/etc/sysconfig/network-scripts/ifcfg-sit1 </tt></li><li> and add that mentioned file: <tt>/etc/sysconfig/network-scripts/ifcfg-sit1 </tt> -- I have elided site-specific details as to IP addresses with: <tt>aa.bb.cc </tt> and <tt>aa:bb:cc </tt> placeholders <br><pre>#<br />DEVICE=sit1<br />BOOTPROTO=none<br />ONBOOT=yes<br />IPV6INIT=yes<br />IPV6_TUNNELNAME="SixXS"<br />#<br />IPV6_AUTOTUNNEL=yes<br />PHYSDEV=eth0<br />IPV6_ROUTER=yes<br />#<br />IPV6TUNNELIPV4="38.229.76.3"<br /># 38.229.76.3 is the remote end of the tunnel at the tunnel broker<br />IPV6TUNNELIPV4LOCAL="198.aa.bb.cc"<br /># 198.aa.bb.cc is the local ipv4 static IP<br />IPV6ADDR="2604:aa:bb:cc::2/64"<br /># 2604:aa:bb:cc::2/64 shows both the local gateway IP, and netmask<br /># the remote end gateway IP is by convention, the :1 <br />IPV6_MTU="1280"<br />TYPE=sit<br /># </pre></li></ul><p>At this point, simply restarting networking should bring up the ipv6 link, and properly route it -- so: <tt>/sbin/service network restart </tt> </p><p>The interfaces will look something like this: <br><pre>[herrold@nostname ~]$ /sbin/ifconfig eth0<br />eth0 Link encap:Ethernet HWaddr 00:01:02:aa:bb:cc<br /> inet addr:76.aa.bb.cc Bcast:76.aa.bb.dd Mask:255.255.255.248<br /> inet6 addr: fe80::201:aaff:bb05:cc16/64 Scope:Link<br /> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br /> RX packets:11088057 errors:0 dropped:0 overruns:1 frame:0<br /> TX packets:10668738 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:1000<br /> RX bytes:1726307345 (1.6 GiB) TX bytes:3178496052 (2.9 GiB)<br /> Interrupt:3 Base address:0x6f80<br /><br 
/>[herrold@hostname ~]$ /sbin/ifconfig sit1<br />sit1 Link encap:IPv6-in-IPv4<br /> inet6 addr: 2604:aa:bb:cc::2/64 Scope:Global<br /> inet6 addr: fe80::bbf2:cc1c/128 Scope:Link<br /> UP POINTOPOINT RUNNING NOARP MTU:1480 Metric:1<br /> RX packets:500 errors:0 dropped:0 overruns:0 frame:0<br /> TX packets:502 errors:0 dropped:0 overruns:0 carrier:0<br /> collisions:0 txqueuelen:0<br /> RX bytes:53331 (52.0 KiB) TX bytes:62784 (61.3 KiB)<br /><br />[herrold@hostname ~]$ </pre></p><p>To wrap this up: ipv6 hardening and connection debugging are worthy topics, and it may well be that a cautious sysadmin wants to lock down <tt>/etc/sysconfig/ip6tables </tt> and examine how one has hardened <tt>/etc/hosts.deny </tt> ... But rather than rush out content (I have a couple of mailing list posts I need to re-work), I'll leave these for later posts, while you, gentle reader, go apply for an account at a tunnel broker </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-33582163331516378612011-05-22T22:59:00.009+01:002011-05-22T23:30:48.913+01:00What not to buy: Dynex 1.3MP WebcamI've spent time across the weekend tinkering with a USB webcam -- particularly a Dynex 1.3MP Webcam (USB ID: 0x19ff:0x0102 ). As I recall, Dynex is a BestBuy house brand. The Linux USB device driver support <a href="http://www.qbik.ch/usb/devices/showdev.php?id=4550" target="_blank">table</a> indicates that the device is supported under _some_ Linux variant<br /><br />The need was occasioned because some small animal, probably a ground squirrel, had been digging in the garden of the missus, and she wanted confirmation on what to go after. The local cat, Malaki, heard it and darted to the door, but I was too late letting him out to track down the intruder ... this time<br /><br />My laptop at home has been my primary compute platform there since I crushed my ankle late last December.
I still need to post a page with all the gory x-ray details, to go along with the twitter pictures I sent along the way with recovery. The medical bill cost was staggering too, and I'll sanitize and post details of that as well. Back to the laptop -- it runs a reasonably stock CentOS 5 most of the time, except when I've been trialling rebuilds of parts of Red Hat's '6' series SRPMs<br /><br />The seemingly needed 'uvcvideo' video driver was present, and I forced it to load, at the cost of the machine locking up in short order thereafter. I had to power cycle the unit to recover use of it. Hmmm ...<br /><br />So I went looking for an application to pull content off of the newly present <tt>/dev/video0</tt>, and turned to the native 'ekiga' that CentOS 5 carries. It refused to acknowledge anything useful at that device, and so ... I had to power cycle the unit to recover use of it. Hmmm ...<br /><br />Perhaps it was 'ekiga'. So I set out to solve the needed packaging to attain a current 'zoneminder' ...
a bit more complex chain:<br /><br /><pre>06:42:44 PM libgcrypt-devel-1.4.4-5.el5<br />06:42:44 PM libgpg-error-devel-1.4-2<br />06:42:46 PM gnutls-devel-1.4.1-3.el5_4.8<br />06:42:47 PM pcre-devel-6.6-6.el5_6.1<br />06:44:49 PM perl-MIME-Types-1.19-2orc<br />06:46:55 PM perl-TimeDate-1.16-5.el5<br />06:47:02 PM perl-MailTools-1.74-1orc<br />06:47:38 PM perl-DateManip-5.44-1.2.1<br />06:47:59 PM perl-DBD-MySQL-3.0007-2.el5<br />07:19:55 PM perl-PHP-Serialization-0.27-4orc<br />07:20:50 PM perl-MIME-Lite-3.01-5orc<br />07:23:33 PM perl-IO-Stringy-2.108-3.orc<br />07:23:54 PM perl-MIME-tools-5.411a-12orc<br />07:34:34 PM perl-IO-Zlib-1.10-1orc<br />07:43:10 PM perl-Compress-Raw-Zlib-2.027-1orc<br />07:47:54 PM perl-Archive-Zip-1.16-1.2.1<br />07:48:05 PM perl-Archive-Tar-1.39.1-1.el5_5.2<br />07:49:54 PM php-pdo-5.1.6-27.el5_5.3<br />07:49:55 PM php-mysql-5.1.6-27.el5_5.3<br />07:50:28 PM perl-Module-Load-0.10-3orc<br />07:51:38 PM perl-Device-SerialPort-1.002-3orc<br />07:51:49 PM zoneminder-1.23.3-2orc<br /></pre><br /><br />and went through the very nicely done configuration. Oops -- it wants a mysql database server running to save state:<br /><br /><pre><br />07:59:18 PM mysql-server-5.0.77-4.el5_6.6<br />08:03:20 PM mysql-test-5.0.77-4.el5_6.6<br /></pre><br /><br />Zoneminder was willing to admit it could read <tt>/dev/video0</tt> but all it returned was a black image. Grrr. ... and then after a few minutes, the laptop locked up again, and I had to power cycle the unit to recover use of it. Hmmm ...<br /><br />So I spent a few minutes with Google doing some research, and found what looks like a rather nice little application for USB frame grabbing called: gideo -- see: <a href="http://mxhaard.free.fr/index.html" target="_blank">A GTK video grabber designed with spca5xx components</a>.
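<p>As an aside: before chasing applications, it can save grief to confirm the device identity against that support table first. A hedged sketch -- <tt>find_usb_id</tt> is a hypothetical helper name of mine, and it merely filters <tt>lsusb</tt> style output for a vendor:product pair: </p>

```shell
# find_usb_id: filter lsusb-style lines on stdin for a vendor:product pair
# (hypothetical helper -- the name and message are mine, not from any tool)
find_usb_id() {
    # $1 is the "vendor:product" pair, e.g. "19ff:0102"
    grep -i "ID $1" || { echo "no device matching $1 found" >&2; return 1; }
}

# typical use:
#   lsusb | find_usb_id 19ff:0102
```

<p>If the pair does not show up at all, no amount of module-forcing will conjure a working <tt>/dev/video0</tt> </p>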
Building it dragged in the Gnome / GTK development environment of thirty or so packages, and I only had to fix up a dependency's .spec file to handle Red Hat's multilib conventions <br /><br /><pre><br />05:27:56 PM libtiff-devel-3.8.2-7.el5_6.7<br />05:29:52 PM gideo-0.1-1orc<br />05:43:18 PM SDL_image-1.2.10-2orc<br />05:43:18 PM SDL_image-devel-1.2.10-2orc<br /></pre><br /><br />But now, either 'gideo' is unwilling to see, or loading the module is unwilling to produce, a live <tt>/dev/video0</tt>, and ... you guessed it: The laptop locked up again, and I had to power cycle the unit to recover use of it<br /><br />I think perhaps I'll try a different video cameraR P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-12221647827965487982010-12-22T19:35:00.008+00:002010-12-22T21:53:25.346+00:00hitting the ground running<p>I've <a href="http://orcorc.blogspot.com/2010/12/two-minutes-on-keyed-ssh-access.html" target="_blank">mentioned</a> creating and injecting a root ssh key into a new instance. The images we run are deployed with ssh enabled and not TCP wrappered; we COULD wrapper them and use the 'add an exception for ssh from a single IP' tool which the <a href="http://www.pmman.net/" target="_blank">PMman</a> web control interface has, but we have found the support load fallout from people just getting started is too high. Once they have deployed and hardened a couple of boxes, they 'get it' and can use the 'lock from all' web tool, and then add a single IP if their taste runs to web tools </p><p>The very next step I take as to each machine I administer is to run a hardening script. While I have <a href="http://www.pmman.com/usage/hardening/" target="_blank">published an outline here</a>, I use a script rather than reading and scraping and pasting from that outline.
This step is done through a script, not because I think I WILL forget something, but because I know the script will NOT forget anything, and is written to perform the hardening process in an idempotent fashion -- that is, producing the same end result, time after time. One path to get to better host security is to have good processes, consciously designed, systematically applied, and continuously improved </p><pre><code>[herrold@centos-5 admin]$ ./hardening.sh hostname.pmman.net<br />The authenticity of host 'hostname.pmman.net (198.178.231.xyz)' can't be established.<br />RSA key fingerprint is 86:6e:84:e0:27:57:dd:4d:1f:88:82:fc:42:1d.<br />Are you sure you want to continue connecting (yes/no)? yes<br />Warning: Permanently added 'hostname.pmman.net,198.178.231.xyz' (RSA) to the list of known hosts.<br />hosts.allow 100% 488 0.5KB/s 00:00<br />hosts.deny 100% 390 0.4KB/s 00:00<br />iptables 100% 1337 1.3KB/s 00:00<br />sshd_config 100% 3325 3.3KB/s 00:00<br />README 100% 897 0.9KB/s 00:00<br />rollup.pem 100% 0 0.0KB/s 00:00<br />openssl.cnf 100% 9682 9.5KB/s 00:00<br />arm-pmman.sh 100% 363 0.4KB/s 00:00<br />sa-update-local-NOTES 100% 877 0.9KB/s 00:00<br />sa-update-local 100% 117 0.1KB/s 00:00<br />logwatch.conf 100% 80 0.1KB/s 00:00<br />rollup.pem 100% 0 0.0KB/s 00:00<br />Package sendmail-8.13.8-8.el5.x86_64 already installed and latest version<br />Package 1:make-3.81-3.el5.x86_64 already installed and latest version<br />Package m4-1.4.15-2orc.x86_64 already installed and latest version<br />Package iputils-20020927-46.el5.x86_64 already installed and latest version<br />logwatch.conf 100% 80 0.1KB/s 00:00<br />Stopping crond: cannot stop crond: crond is not running.[FAILED]<br />Starting crond: [ OK ]<br />Shutting down sendmail: [FAILED]<br />Starting sendmail: [ OK ]<br />Starting sm-client: [ OK ]<br />Flushing firewall rules: [ OK ]<br />Setting chains to policy ACCEPT: filter [ OK ]<br />Unloading iptables modules: [ OK ]<br />Applying
iptables firewall rules: [ OK ]<br />Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ]<br />Stopping auditd: [FAILED]<br />Starting auditd: [ OK ]<br />/etc/aliases: 76 aliases, longest 10 bytes, 765 bytes total<br /><br />info: inspecting /etc/aliases for a root email forwarder off the box<br /># Person who should get root's mail<br />#root: marc<br /><br />info: 1. do you want fail2ban -- if so, run: ./fix-fail2ban.sh hostname.pmman.net<br /><br />info: 2. updates are not run by this script: consider running:<br /> ssh -l root hostname.pmman.net yum -y -q upgrade --enablerepo=pmman-mail<br /><br />info: 3. verify that root's email is properly handled<br /><br />info: 4. now: ssh -l root hostname.pmman.net<br /> cd /root/hardening/<br /><br /> and do some patching and service restarting ...<br />[herrold@centos-5 admin]$ </code></pre><p>Note: the IP is obscured, and the host name and ssh host key altered. The edit to add an off-box alias entry for <tt>root</tt>'s email is to centralize all the miscellaneous cron and asynchronous notifications off the box, to a centrally monitored point </p><p>Then as noted before, this is a stock <a href="http://www.centos.org/" target="_blank">CentOS 5</a> image, and so needs some further tightening done and updates run.
I have long since scripted that process: </p><pre><code>[herrold@centos-5 admin]$ ./fix-fail2ban.sh hostname.pmman.net<br />local-fb-fix.sh 100% 256 0.3KB/s 00:00<br />Stopping fail2ban: [FAILED]<br />Starting fail2ban: [ OK ]<br />[herrold@centos-5 admin]$ ssh -l root hostname.pmman.net yum -y -q upgrade --enablerepo=pmman-mail<br />[herrold@centos-5 admin]$ </code></pre><p>Then, as suggested, the edits on the remote machine </p><pre><code>[root@vm175551137 hardening]# netstat -pant<br />Active Internet connections (servers and established)<br />Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name<br />tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3641/sshd<br />tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2627/sendmail: acce<br />tcp 0 240 198.178.231.xyz:22 76.242.0.abc:41936 ESTABLISHED 3593/0<br />[root@vm175551137 hardening]# history<br /> 1 cd /etc/<br /> 2 joe aliases<br /> 3 yum install ipsec-tools<br /> 4 rm *~<br /> 5 newaliases<br /> 6 cd /root/hardening/<br /> 7 ls<br /> 8 joe /etc/ssh/sshd_config<br /> 9 /sbin/service sshd restart<br /> 10 ls<br /> 11 cp hosts.allow hosts.deny /etc<br /> 12 joe iptables /etc/sysconfig/iptables<br /> 13 /sbin/service iptables restart<br /> 14 netstat -pant<br /> 15 history<br />[root@vm175551137 hardening]# reboot <br /><br />Broadcast message from root (pts/0) (Wed Dec 22 14:47:10 2010):<br /><br />The system is going down for reboot NOW!<br />[root@vm175551137 hardening]# Connection to hostname.pmman.net closed by remote host.<br />Connection to hostname.pmman.net closed.<br />[herrold@centos-5 admin]$ </code></pre> <p>I added the <tt>ipsec-tools</tt> in support of an application this particular unit will be performing. The reboot at the end is for good measure to apply any new kernel and libraries through a clean boot. 
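<p>The idempotent style mentioned above can be sketched in miniature. A hedged example -- <tt>ensure_line</tt> is a hypothetical helper of mine, not a piece of the real hardening script -- which appends a configuration line only when it is absent, so re-runs leave the file unchanged: </p>

```shell
# ensure_line: append a line to a file only if it is not already present,
# so the edit is idempotent (hypothetical helper, illustrating the style)
ensure_line() {
    line="$1"; file="$2"
    touch "$file"
    # -x matches the whole line, -F takes it literally
    grep -qxF "$line" "$file" || printf '%s\n' "$line" >> "$file"
}

# e.g. ensure_line 'PermitRootLogin without-password' /etc/ssh/sshd_config
```

<p>Run it twice, and the file still carries a single copy of the line -- which is the whole point </p>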
The 'true' hostname will be picked up from the DNS PTR records after a reboot, once those records are in place </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-89495991086414717492010-12-22T18:38:00.014+00:002010-12-22T20:12:44.553+00:00two minutes on keyed ssh access<p>On a Linux box, on BSD or a derivative such as OS X, or under Windows with PuTTY, a person can generate a LOCAL keypair which is used for keyed SSH access to such Virtual Machine instances. I have completely moved away from password based external access for all new instances, as keys carefully managed are demonstrably safer </p><p>So you know: There is some heat, but not a lot of thoughtful light about permitting and using root ssh access. Some not well thought out security policies have a phobic avoidance of such. I'll address the matter in a later post, discussing <a href="http://www.pmman.com/usage/hardening/" target="_blank">hardening generally</a>, sshd config file hardening, remote syslogs, TCP wrappers, iptables, and dynamic dictionary attack response </p><p>For the time being, let's put that security design choice rant to one side, and accept that at least initially, a <a href="http://www.pmman.com/" target="_blank">PMman instance</a> has already made a short term choice for setting up access and for management of such units which are running <a href="http://www.centos.org/" target="_blank">CentOS</a> or others following the Red Hat approach for management of instances [i.e., not SuSE/SLES, Debian family, nor the BSDs]</p><p>Under such Linux, this looks like this: </p><pre><code>$ # create a high strength passphrase<br />$ # I have written of gen-pw.sh <a href="http://orcorc.blogspot.com/2010/07/line-noise-and-random-numbers.html" target="_blank">before </a><br />$ gen-pw.sh -a<br />a2Wa4aSaLWkRac<br />$ cd ~<br />$ cd .ssh<br />$ ssh-keygen -t dsa -f hostname.pmman.net.dsa<br /><br />$ # 
there is a passphrase prompt here and <br />$ # we use that: a2Wa4aSaLWkRac <br />$ # -- also make a record of it in a safe place<br />$ # -- if one maintains multiple keys per box, it can be a <br />$ # chore to manage this -- but see: man ssh-agent<br /><br />$ # this generated ~userid/.ssh/hostname.pmman.net.dsa <br />$ # [the private part] ... and <br />$ # ~userid/.ssh/hostname.pmman.net.dsa.pub <br />$ # [the PUBLIC part]<br /><br />$ cat ~userid/.ssh/hostname.pmman.net.dsa.pub</code></pre><p>and scrape and place it in your mouse pastebuffer, and proceed to the web interface. The -f file's name 'hostname.pmman.net.dsa' is arbitrary, but chosen to be mnemonic </p><p>Then add a new stanza to: <tt>~userid/.ssh/config </tt> like this: </p><pre><code>#<br />Host hostname32.pmman.net hostname64.pmman.net<br /># optionally one can make a note of the passphrase here, but <br /># at the risk of exposing such if a local disk read compromise<br /># is experienced, or a backup of such falls into untrusted hands<br />IdentityFile /home/userid/.ssh/hostname.pmman.net.dsa<br />PasswordAuthentication no<br />Protocol 2<br />PubkeyAuthentication yes<br />#</code></pre><p>Note here a key may be used on more than one host; that is, we can add the same public key into <tt>/root/.ssh/authorized_keys </tt> of more than one unit -- here, both a 32 bit and a 64 bit instance with similar hostnames. But I get ahead of myself ... </p><p>Using a secure means, we need to transfer that public key to a remote instance, and to add it to the right file; here, we use the SSL protected web interface under the PMman machine management interface for a given machine, in the [more] menu, first item.
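<p>For completeness: on a unit one can already reach, the same placement can be done by hand, minding the permissions sshd insists upon. A hedged sketch -- <tt>install_pubkey</tt> is a hypothetical helper of mine, standing in for what the web form's backend does server-side: </p>

```shell
# install_pubkey: append a public key to an authorized_keys file, creating
# the .ssh directory with the modes sshd requires; re-runs are harmless
# (hypothetical helper -- not a standard tool)
install_pubkey() {
    pub="$1"; home="$2"
    mkdir -p "$home/.ssh"
    chmod 700 "$home/.ssh"
    touch "$home/.ssh/authorized_keys"
    # append only when the key is not already present
    grep -qxF "$(cat "$pub")" "$home/.ssh/authorized_keys" ||
        cat "$pub" >> "$home/.ssh/authorized_keys"
    chmod 600 "$home/.ssh/authorized_keys"
}

# e.g. install_pubkey ~/.ssh/hostname.pmman.net.dsa.pub /root
```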
Once the public part is placed into the web form box, the management backend at the datacenter will validate it as to form, and then place that public key into ROOT's <tt>/root/.ssh/authorized_keys </tt> file </p><p>At that point, one can then ssh to that remote box as root, accept the host key, and take steps for hardening, adding a working userid, and so forth </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.comtag:blogger.com,1999:blog-4432325514109284204.post-28789420821718343042010-12-13T15:48:00.006+00:002010-12-13T16:26:12.160+00:00loop -de- loop<p>As I count it at the moment, I am building and using content from more than eight loop mounted ISOs on a principal NFS, TFTP and 'next', and FTP server in the internal network </p><p>Red Hat has updates for 4.9, and 5.6 in beta; CentOS is in QA on an initial '6' release; I am doing private builds for a Fortune 50 on some backports out of RawHide and from some local packaging; and I am working on a 'back of the envelope' design and test to try to get control of the huge bloat on Red Hat ISO space for installs, to see if I can get a trimmed minimal installer for i386, x86_64, ppc[64] and s390x down to a single piece of CD sized ISO media. Then there is my favorite of the minimal wire install image, again which I package up into an ISO </p><p>Going forward, we will see more encrypted filesystems across loop devices, and that will add load here as well </p><p>As such I regularly crest over the stock eight provided loop devices. It may be time for the kernel folks to consider bumping that limit to 16.
To address this without a reboot, one simply has to: </p> <blockquote><pre># shutdown all uses of loop devices, so we can remove the module<br />/sbin/rmmod loop<br />echo "options loop max_loop=255" > /etc/modprobe.d/loop.local.conf<br />/sbin/depmod -a<br />/sbin/modprobe loop </pre></blockquote><p>Note: that 'depmod' may not be strictly required, but will in any event be harmless, so I do it -- heck, I still type <tt>sync ; sync</tt> before rebooting a box, and I KNOW that is not needed any more. The force of habit ... </p><p>I add the suffix <tt>.conf</tt> on that file, because I was scolded by a Debian box a couple weeks ago on the topic; it seems that they are deprecating sourcing files in <tt>/etc/modprobe.d/</tt> lacking such. Since when did Linux start using file name suffixes to determine a file's content? -- at least it is not required to be in 8.3 format </p><p>Another approach is doing it with hard-coded values at boot time, via a <tt>max_loop=255 </tt> kernel command line parameter (for a built-in loop driver) or such </p><hr><p>Tip of the hat to Paul Howarth on the <a href="http://orcorc.blogspot.com/2010/12/ripping-out-safeties.html" target="_blank">SELinux rant I went off on last week</a>; the interaction of loop mounted ISOs, and mounts in the FTP space of a filesystem can also be addressed with options to the <tt>mount </tt> command, and in the <tt>/etc/fstab</tt> with <tt>context=</tt> choices. He writes and points out: </p><blockquote><p>I use context mounts to avoid it, e.g. in fstab: </p><code>/path/to/CentOS-5.5-x86_64-bin-DVD-1of2.iso <br /> /path/to/dvd1 iso9660 <br /> _netdev,ro,loop,fscontext=system_u:object_r:public_content_t:s0 <br /> 0 0 </code></blockquote><p>... sorry about the funky line wrapping, but there is just no good way to display really long <tt>/etc/fstab</tt> entries </p>R P Herroldhttp://www.blogger.com/profile/01424404882640619006noreply@blogger.com
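<p>Postscript: those long context-mount entries are painful to hand-type, so one might generate them instead. A hedged sketch -- <tt>iso_fstab_line</tt> is a hypothetical helper of mine, reusing the <tt>public_content_t</tt> context from Paul's example above: </p>

```shell
# iso_fstab_line: emit a one-line fstab entry for a context-mounted ISO
# (hypothetical helper; the fscontext value mirrors the example above)
iso_fstab_line() {
    # $1 = path to the ISO, $2 = mount point
    printf '%s %s iso9660 _netdev,ro,loop,fscontext=system_u:object_r:public_content_t:s0 0 0\n' \
        "$1" "$2"
}

# e.g. iso_fstab_line /srv/iso/CentOS-5.5-x86_64-bin-DVD-1of2.iso /path/to/dvd1 >> /etc/fstab
```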