31 March 2009

Who is in charge, here?


"When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean - neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master - that's all."
   -- Through the Looking Glass

A U.S. Supreme Court memorandum order (called a 'Slip Opinion' here) today said:

PER CURIAM.
The writ of certiorari is dismissed as improvidently granted.
It is so ordered.


Some Latin in there. 'PER CURIAM' is: By the Court as a complete panel and entity, without specific attribution of the action to any particular Justice. A 'writ of certiorari' is: a publicly stated intent of the Court to receive a case for presentation and argument, and possibly (usually) decision.

That a matter is characterized as "improvidently granted" is not Latin, but is a 'term of art' -- basically: It turns out we (as a decision-making body) cannot, will not, or should not decide after all, and we decline to consider, for the present, the particular aspect of the case we initially thought we should hear

It happens -- maybe with one member of the Court ill, the Court decided it needed to lighten its load; perhaps some conflict came to light in the investment portfolio of some Justices, such that the remaining (non-recusing) panel members felt they could not now fairly hear and decide the matter, and declined, to avoid the appearance of making a biased, short-handed, or improper decision

The functional effect of such a terse statement is to leave the next prior lower court's ruling intact and in effect

While a lawyer's craft is well depicted in television crime procedurals, the more cerebral work of helping a client avoid the issues which make one interesting to television is the decidedly more valuable service a lawyer provides to the general society. A lawyer's opinion can help the client see the areas to avoid, and how to structure its affairs. The Court speaks in a completely understood fashion here, to communicate just what it intends to say, and nothing else

The recent 'mob rule' in Congress of vilifying, and proposing to punishingly tax, the folks at AIG who had clear contracts for payment of a 'retention' and performance bonus, in exchange for staying on as the ship of AIG seemed to be sinking (and who indeed stayed and did their contractual duty), shows how fragile civilization is. All the hollow words about 'fairness' and populist anger cannot mask the fact that Congress successfully pointed the finger of blame away from itself and the government, and toward a tiny minority of our society. If a Constitution, contracts, and rule of law are so readily cast aside, no one is safe. Paris 1789 and following, all over again. Whom shall the mob turn to next?


The use of words is how we explain to another, and sometimes to ourselves, what we are thinking, and why we believe what we believe; words permit us to reflect and find weak points in our thought processes. Structured words -- Court decisions, Constitutions, laws, opinion letters from lawyers -- are part of an ongoing societal dialog

We are all diminished when cheap talk trumps reason

OMG, Round Two

ring girl, round two

I wrote a bit back about a gratuitous change in Red Hat's RPM variant breaking backward SRPM readability, in a fashion which stranded users of the earlier Red Hat Enterprise release products (and rebuilds such as CentOS) away from the Raw Hide pool of developmental edge packages.
Mothra attacks

The fix I outlined: to build and freeze (in time, and against updates) a RawHide domU instance, and to use that domU and an NFS mount back into the earlier dom0 to unpack SRPMs. This works fine for the present.

The full size screen shot is a bit large, but is down that link. Once it is set up, it takes just a couple of seconds to set up a new unpacking destination, and to do the rebuild.
moving SRPMs from RawHide
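The unpack step itself is unremarkable; here is a minimal sketch of the kind of thing that runs inside the frozen RawHide domU (the paths, mount point, and package name are illustrative assumptions, not my actual layout):

```shell
# Hypothetical sketch: unpack a new-format SRPM from inside the
# RawHide domU, writing into an NFS mount shared back with the
# older dom0. Paths and package names here are illustrative only.
unpack_srpm() {
    srpm=$1
    dest=$2
    [ -f "$srpm" ] || { echo "missing: $srpm" >&2; return 1; }
    mkdir -p "$dest" || return 1
    # the RawHide rpm2cpio understands the newer payload format,
    # which the older dom0 tooling cannot read
    ( cd "$dest" && rpm2cpio "$srpm" | cpio -idm )
}

# usage, e.g.:
#   unpack_srpm /mnt/dom0/SRPMS/foo-1.0-1.src.rpm /mnt/dom0/unpack/foo
```

The dom0 then sees the unpacked sources on the shared mount, and the older toolchain can work with them normally.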

27 March 2009

Promoting ignorance

Schultz knows nothing
There is a good reason lawyers should not give advice in a public forum, and are really uncomfortable having a client publicly discuss advice they have given

This crossed a mailing list today:

Subject: [fedora-d-rh] Re: question about patent

Without reading or looking at the patent at all, it is almost always really bad to discuss patents in public, especially on email.

Patents & patent trolls are so pervasive that you can help feed patent trolls by bringing up the possibility of infringement in these forums (even when they are marginal claims).

I have always been given guidance that engineers should never, ever do patent searches and never discuss the specifics of IP issues in email.

Amazing takeaway. The poster missed the obvious extension that really NOTHING in the way of litigation awareness and preparations should be discussed

A quick Google search using: willful ignorance of a patent yields this in a pull quote:

Courts have used terms such as 'intentional blindness',[15] 'blind disregard of the peril it faced'[16] and 'willful ignorance'[17] to describe the accused infringer who did not conduct a search prior to adopting a mark

[later] ... With the ease of accessing information, it is likely that courts will increasingly find that an accused infringer's failure to conduct an appropriate search before adopting its proposed mark is a clear indicator of bad faith.

The article's author 'threads the needle' nicely, between providing general information, and not giving express advice. But he DOES assume the reader will CONSIDER the implications of what is being said. Silly lawyer
camel in the eye of a needle

Down at the bottom of that information article, we find:

The information contained in this alert is provided for informational purposes only and does not represent legal advice. Neither the APLF nor the author intends to create an attorney client relationship by providing this information to you through this message.

Time to stretch the legs, and walk down to Stauff's for a coffee

26 March 2009

IPv6 eats kittens (and distcc) on Debian Testing

Flikr domo and kitten

This can only end badly

I spent a good 5 hours this week tracking down a problem with distcc hanging up in our Debian Testing build farm. We use distcc to speed up compilation of the c++ sources in the development of the trading shim. Interestingly, our end user community forced this decision of developing on Debian Testing upon us, as they are using later gcc versions than we were on CentOS, and it was useful to be able to see their errors, BEFORE they reported them to us

On the new compile farm, sometimes we would get a compile in, say, 44 seconds; other times it would drag out for several minutes. This is a problem as we had just slotted a new unit into harness, and expected better results

In checking the logs on the client doing the distribution of compilation tasks, we were seeing a symptom of 'segfaults' in that client's process; other times, the client would stall, seemingly blocked waiting for a compilation result from a remote buildfarm peer that never came back. Checking on the remote build unit, one of the distccd children would die for mysterious reasons, leaving a message in the dmesg record. Once that failed build timed out, the needed file would be built locally, and the build would proceed. Checking the log files, nothing obvious jumped out

The obvious debugging technique is to get a minimal reproducer, and then to partition the problem into smaller and smaller possible causes using that reproducer tool. The issue will manifest on one setup but not the other, and so one can rule out more and more issues, until the answer is left, staring you in the face

Looking at my Debian helper tool, it had rotted, and was in sore need of removal of some constraints: It did not use distcc when available; it did not use proper -j parallel compiles; it did not use -O3 optimization in the compiles. My test tool was not set up to see what I needed to see
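A minimal sketch of the shape the helper needed to take -- compose the build command from explicit knobs, so each timed run is comparable (the flags and names are assumptions for illustration, not my actual tool):

```shell
# Hypothetical sketch of a build-timing harness: make the distcc
# usage, parallelism, and optimization explicit parameters, so a
# timed run with and without distcc is an apples-to-apples compare.
build_cmd() {
    use_distcc=$1    # "yes" or "no"
    jobs=$2          # parallelism passed to make -j
    if [ "$use_distcc" = "yes" ]; then
        echo "make -j$jobs CXX='distcc g++' CXXFLAGS='-O3'"
    else
        echo "make -j$jobs CXX='g++' CXXFLAGS='-O3'"
    fi
}

# then time each configuration the same way, e.g.:
#   DISTCC_HOSTS='pippin nfs2 localhost' \
#       sh -c "time $(build_cmd yes 8)"
```

With that in hand, the kind of elapsed-time table below falls out of a simple loop over host lists.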

Time to pay down some 'technical debt' (if you've not read Martin Fowler's piece, and viewed Ward Cunningham's video, stop now, and do so). And so I made some payment there. After testing, I got these results:

Master   Clients             Elapsed time (real)
pippin   nfs2                0m23.281s
nfs2     pippin, localhost   0m23.702s
pippin   nfs2, localhost     0m22.551s

My first thought looking at this: Well, that pretty conclusively rules out machine specific errors, or network path issues. It must be something different in the setup of the user provoking the issue, which my tool does not duplicate. NOTE: This is wrong-headed, of course, as 'an absence of evidence is not evidence of absence' of a problem, but was an easy trap to fall into

For every complex problem, there is a solution that is simple, neat, and wrong.

  — H. L. Mencken

For every problem there is a solution which is simple, obvious, and wrong.

  — Albert Einstein

I tossed my results at that user for their thoughts on the results, and went back to work on another issue

Later in the day, doing some thought experiments with the user, we could not pin down where to look yet. But as a team, I had him provoke the issue with his setup, while I watched the logs on the various machines through several consoles. And the error appeared, and then jumped out and tickled my eyeballs. I was watching nothing in particular, until I saw the failure on process 29673, and then traced that back up. A successful and a failed session looked like this, respectively:

distccd[29673] (dcc_check_client) connection from ::ffff:
distccd[29673] (dcc_r_file_timed) 909179 bytes received in 0.078651s, rate 11289 kB/s
distccd[29627] (dcc_collect_child) cc times: user 1.132070s, system 0.144009s, 23039 minflt, 0 majflt
distccd[29673] (dcc_collect_child) cc times: user 1.092068s, system 0.104006s, 22481 minflt, 0 majflt
distccd[29673] (dcc_check_client) connection from ::ffff:
distccd[29673] (dcc_r_file_timed) 818437 bytes received in 0.071648s, rate 11155 kB/s
distccd[31248] (dcc_check_client) connection from ::ffff:
distccd[31248] (dcc_r_file_timed) 886761 bytes received in 0.076688s, rate 11292 kB/s
distccd[29627] (dcc_collect_child) cc times: user 1.068066s, system 0.112007s, 23890 minflt, 0 majflt
distccd[29673] (dcc_collect_child) cc times: user 1.108069s, system 0.112007s, 22012 minflt, 0 majflt
distccd[29673] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 15868 bytes; retrying 344332 @15868
distccd[1995] (dcc_log_child_exited) ERROR: child 29673: signal 11 (no core)

A-ha! Now we know what to look for:

dhcp-231:/var/log# grep dcc_pump_sendfile distccd-transition-log
distccd[29673] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 15868 bytes; retrying 344332 @15868
distccd[31248] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 15868 bytes; retrying 586732 @15868
distccd[30262] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 15868 bytes; retrying 4655916 @15868
distccd[2005] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 16384 bytes; retrying 74824 @16384
distccd[2128] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 16384 bytes; retrying 286560 @16384
distccd[2170] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 16384 bytes; retrying 97440 @16384
distccd[2129] (dcc_pump_sendfile) Notice: sendfile: partial transmission of 16384 bytes; retrying 301000 @16384

The TCP process of shuttling code to compile, and the binary results of such compiles, is failing the same way, over and over again: partial transmission of 15868 bytes is present every time. Looking at the log entry again, the form of the connecting hosts is unusual: ::ffff: and ::ffff: Why, that is IPv6 notation! And I reach back to my logs, as I remember I had an issue like this a year or so ago on a Debian box

And so, Google with the search argument: debian ipv6 distcc confirms as its first result: 1. #481951 - distcc: zeroconf support broken wrt IPv6 - Debian Bug ... ... and the bug is still open. Killing off IPv6 is the obvious next step, and so, back to Google with: debian disable IPv6 to find: Disabling IPv6 under a 2.6 kernel. Reading the post, there is some back and forth, and the answer seems to be, there is not an 'official Debian answer', but this is what people are doing. Back to Google with: site:debian.org debian disable IPv6 seems to confirm that there is not a single well documented answer which has floated up in Google's searching

Compare: CentOS addresses the matter directly, and as the first Google hit with: site:centos.org disable IPv6
7. How do I disable IPv6?

* Edit /etc/sysconfig/network and set "NETWORKING_IPV6" to "no"
* Add the following to /etc/modprobe.conf :

alias ipv6 off
alias net-pf-10 off

* Run chkconfig ip6tables off to disable the IPv6 firewall
* Reboot the system

Alternative (which might be easier and works on any release with /etc/modprobe.d):
echo "install ipv6 /bin/true" > /etc/modprobe.d/disable-ipv6
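For the Debian side, the closest equivalent I have pieced together from those threads -- an assumption of 'what people are doing', not an official Debian answer -- looks like:

```shell
# Unofficial Debian approach, circa 2.6 kernels: keep the ipv6
# module from ever loading (run as root, then reboot).
# File names are illustrative; any file in /etc/modprobe.d works.
echo 'blacklist ipv6' >> /etc/modprobe.d/blacklist-ipv6
# or, stronger, stub out the module load entirely:
echo 'install ipv6 /bin/true' >> /etc/modprobe.d/disable-ipv6
# after the reboot, confirm the module is absent:
#   lsmod | grep ipv6
```

Use with care; these are system configuration changes, and distcc was merely the first casualty I noticed.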

Sadly, there is something else in play on Debian Testing as well, and it is not just an IPv6 issue (although turning off IPv6 has drastically reduced the frequency of the issue), as I see when I look in today to make sure the 'fix' is working:

[74988.951989] distccd[8671]: segfault at 1 ip 7fdd2250e030 sp 7fff2b025da8 error 4 in libc-2.7.so[7fdd22493000+14a000]
[74989.017836] distccd[8651]: segfault at 1 ip 7fdd2250e030 sp 7fff2b025da8 error 4 in libc-2.7.so[7fdd22493000+14a000]
[74989.518050] distccd[8664]: segfault at 1 ip 7fdd2250e030 sp 7fff2b025da8 error 4 in libc-2.7.so[7fdd22493000+14a000]
[74994.152461] distccd[8659]: segfault at 1 ip 7fdd2250e030 sp 7fff2b025da8 error 4 in libc-2.7.so[7fdd22493000+14a000]

Where is that coffee cup? I knew this would not end well
domo eating a kitten

"It's different, this time"

Winston Smith

The American born, later British investment manager, Sir John Templeton, is attributed the following as to his craft:

The four most dangerous words in investing are 'This time it's different.'

I suspect the quip is over-constrained in limiting it to just investing. But I am meditating about another Briton's work

At last night's COLUG meeting, the presenter addressed the emergence of the latest round of internet based 'social networking' applications: twitter, facebook, blogging, multi-feature personal information devices (cell phones, Blackberries, iPod Touches, digital cameras and the like). I say latest round, because the assertion was made that: "Terrorists have never used photo reconnaissance", and, contrarian that I am, I suggested that the people of Dresden might have a different point of view

The takeaway from the matter had to be that a thoughtful person needs to be mindful of the obvious and non-obvious implications of these new technologies

The ability to build a 'mosaic' image of a person, from their public 'internet persona', is only getting easier, and more accessible to a wider audience of potential prying eyes. What once required the resources of a government or major multi-national corporation to 'dig out' is perhaps now thoughtlessly revealed with all good intention. See, e.g., the 'Sarah' PSA: ("Online Sexual Exploitation - Everyone Knows Your Name"), which ends with the outline: "... so think before you post"

But the information leakage is much broader than that already, and at this point not controllable by any individual. When a member of a 'private' or 'backwater' mailing list uses GMail to subscribe, every poster suddenly is added to Google's indexing corpus; when someone at a local meeting snaps a cell phone picture and posts it publicly, it feeds the automated identification algorithms, publicly known (Google's Photo) and otherwise (think: the Tampa Bay Super Bowl photo identification effort of the crowd). Note the date of the Register article just cited: 7th February 2001. This was no Bush-ian crypto-fascist over-reaction to the 9/11 hijackings

During the presentation last night, the first advert link offered was for anti-aging patent drugs, alongside the meeting photo (full of several grey haired and bald male persons); the second link was of 'Valerie Bertinelli -- Bikini Babe!' and had a weight loss advert in the 'doubleclick' advert box on the top right; but our presenter is interested in and follows a television show, 'The Biggest Loser', and is browsing weight control related sites and mailing lists. A third, rather personal example from the presenter's prior experience completed the circle to make it clear that Google's advert engine is reading every word we read or write

The first time is an occurrence; the second a co-incidence; after the third, one has to stop shaving with Occam's razor, as the blade has gone dull

blank advert

I took a screenshot (full-size image) of what I am offered as to Valerie, and you'll notice that the upper right panel is blank. This is because some years ago, I amended the DNS records which computers using my DNS servers are provided, to return '' for all of 'doubleclick.net'

[root@xps400 conf]# grep -i doubleclick *.conf
NULLROUTE.conf: ad.doubleclick.net.
[root@xps400 conf]#
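The mechanics, for the curious, amount to declaring one's own nameserver authoritative for the advertiser's domain. A minimal sketch, assuming BIND (the file name and exact layout are illustrative; my actual NULLROUTE.conf differs):

```
// named.conf fragment (hypothetical): claim authority for the
// advert domain, and answer all queries from a local zone file
// that returns nothing useful to the advert-fetching client
zone "doubleclick.net" IN {
        type master;
        file "null-route.zone";
};
```

Every computer using that resolver then simply never finds the advert servers, and the panel stays blank.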

Adding that value (which causes the request for an advert to never reach the central advert monitoring and image feeding servers), and several more, was part of a campaign for a corporate client I was consulting for at the time. The Windows 98 desktop computers which were issued to the staff did not have effective software installation access controls to prevent the addition of random malware and time wasters. Memos and meetings had not stopped the practice of a staffer downloading, say, Yahoo! Instant Messenger, and showing all her friends in that department how to do the same. Bandwidth exhaustion was becoming an issue; I assume that management also had some thoughts about lost productivity

As a technical fix the IS department was asked to remove it when found (done, but not persistent without effective access controls), and asked again. I was escalated in, and went to work with tcpdump

It turns out that the software designers at Yahoo knew their craft well. From memory, it first tried the universal Firewall Traversal Protocol (http), and then secure http and FTP

I blocked each new approach in turn. It fell back to nntp, and as I recall ntp. I do not recall that it tried to use dns content tunneling, but I certainly would have. The eventual solution had both port blocking and domain blacklisting
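The shape of that eventual solution, as a hedged reconstruction (the actual rules lived on the client's gateway of the day; ports shown are the well-known ones, not a record of the deployment):

```shell
# Hypothetical sketch of the port-blocking half: reject the
# fallback ports the messenger client kept probing, and leave the
# DNS blacklisting to handle the named advert and login domains
iptables -A FORWARD -p tcp --dport 5050 -j REJECT   # YMSG native port
iptables -A FORWARD -p tcp --dport 119  -j REJECT   # nntp fallback
iptables -A FORWARD -p udp --dport 123  -j REJECT   # ntp fallback
```

Neither half suffices alone: port blocking misses traffic tunneled over http, and domain blacklisting misses connections made straight to an IP.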

There is nothing new, nor indeed to my thinking wrong, in the owner of an asset seeking to maximize profit from it. But I think my thoughts and my words are my property, and on occasion, on a 'think piece', I'll add the copyright reminder tag

.-- -... ---.. ... -.- -.--
Copyright (C) 2009 R P Herrold
My words are not deathless prose,
but they are mine.
Number 6

I also hold to the quaint notion that I am not a number, but an individual and the property of no one but my God. Silly, I know, but there you are

edit: typo fix

25 March 2009

People do go both ways

Scarecrow: people do go both ways
There is a scene depicted in the movie 'Battle of the Bulge' (1965), about the 1944 attempted German breakout offensive through the Ardennes, where German commandos are tasked with, and shown, changing road signs to confuse Allied troops

When I started this blog, it was in response to a desire to make the CentOS internals a little more transparent to interested observers. We at the project do get the questions, and I think a thoughtful reader can pull connections from the little stories and examples I choose from the full breadth of the blog. While I might 'tag' something specifically 'CentOS', real life has no such natural boundaries, and these are just guide markers in the channel of life.
confused highway sign

I added the blog into the CentOS aggregator at planet.centos.org, and set to writing. I cribbed the configs from an example of another CentOS member. I tried then to restrict the feed to the 'CentOS' label, but following the documentation just did not work. I settled for the default full feed, and resolved to revisit the matter later

My friend toracat gently reminded me this morning of the need to finish the job. Sigh ... back to wrestling markup

The example follows [there are annoying line breaks in the blog layout as rendered, and indeed in the doco upstream that need to be pasted back together, mentally]. Can you spot the error?

Full site feed:
  • Atom 1.0: http://blogname.blogspot.com/feeds/posts/default
  • RSS 2.0: http://blogname.blogspot.com/feed/post/default?alt=rss
Label-specific site feed:
  • Atom 1.0: http://blogname.blogspot.com/feeds/comments/default/-/labelname
  • RSS 2.0: http://blogname.blogspot.com/feeds/comments/default?alt=rss/-/labelname
Individual post comment feed:
  • Atom 1.0: http://blogname.blogspot.com/feeds/postId/comments/default
  • RSS 2.0: http://blogname.blogspot.com/feeds/postId/comments/default?alt=rss

There is the obvious need to s/comments/posts/g, but more is needed. I am accustomed to 'magic CGI directories' that accept variables. I use them myself. See, e.g., the expanded URL to the thumbnail of Mothra which is not just an image, but the filename, and a link to the full size one. No express CGI script is called out, as the index file for that directory is actually a smart CGI script


Enough clues, and on to the answer. I put a bit of text around the answer so your eyes do not pick it out. The text at the fourth bullet above is malformed ... the part following alt=rss needed to precede the question mark marker that identifies the start of variables to the CGI script. We move the part /-/labelname before it, and add the desired label. Now a custom subfeed chosen by label is properly specified
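Spelled out with a hypothetical blog and label name (both placeholders, as in the doco above), the before and after look like:

```
broken, per the doco (the label segment lands after the '?' variables):
  http://blogname.blogspot.com/feeds/posts/default?alt=rss/-/labelname

corrected (label path segment first, then the '?alt=rss' variable):
  http://blogname.blogspot.com/feeds/posts/default/-/labelname?alt=rss
```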

But there are no road signs on the Blogger provided doco page to permit easily reporting errors, so that they might be fixed

Nuts
   -- Anthony McAuliffe

23 March 2009

No relation

Separated at birth?
We get questions, asking about the use of the 'orc' moniker, in IRC and at this blog.
  • Tolkien inspired?
  • World of Warcraft?
  • some older mythology?
  • None of the above
Nothing so derivative.

When first using IRC, the Freenode 'nickserv' wanted a userid not in current use, and the 'usual suspects' I prefer to use were long locked up; so: orc_orc, and the related variants I use. The 'Blogger' software later added a constraint to the DNS character set, causing the drop of the underscore to form a valid domain name.

This pair of latecomers pictured above may be related to one another, but I have to disclaim any connection.

20 March 2009

Every step you take ...

a completely trackable and traceable survey tool
I received the above email [which I converted to a masked image], with embedded web link, seeking market research data. I have masked the full URL, to prevent 'ballot box stuffing' and to protect my privacy

Now in doing good statistical sampling, customarily one assures the recipient / respondent that the responses are aggregated, and that no personally identifying information is available to the researcher. This is done to foster truthfulness and frankness from people responding to the survey, by reassuring them that no information leaks back, say, to the entity covered by the survey, which could tie particular positive or negative 'pull comments' to a specific person

Other survey research techniques use 'calibration' questions, repeated in slightly varying form a couple of times in the survey, to make sure the respondent is actually reading the questions, is answering consistently, matches the 'shaped sample' desired demographic, and similar concerns

Here, I am solemnly (or perhaps, cheerfully) assured:
We will also gladly share the aggregate results of the survey with you, as it may be of interest to you.

All responses will remain anonymous and confidential.

What it does not say is that the author is not planning to use the data to sell 'individual drill down' detail by respondent

The sender is sort of aware of this, or perhaps it is just a boilerplate footer from SurveyMonkey:

This link is uniquely tied to this survey and your email address, please do not forward this message.

I think I will pass on this one. Time for more coffee

Revised to lay better in the top table, 20 March 2009

18 March 2009

Caveats and Disclaimers

fine print
This is a bit of housekeeping about this blog -- the boilerplate, so to speak. I mentioned the need to do it, so here it is

I am an economist, duly trained both in academia, and in that broader school of life. I am a 'rough around the edges' statistician. I have been coding since before formal exposure to either of those disciplines. I am a mathematician. None of these pursuits carry formal certifications relevant here.

I am a lawyer, trained at a top ten school, long ago and far away, it seems.

---------------start disclaimer-------------------
I_A_AL, but not your lawyer. I offer legal advice and formal
opinion only within the confines of a previously established
and explicit attorney-client relationship where privilege may
be had; and NEVER on a public list server.
----------------end disclaimers ------------------

I may own positions from time to time in entities mentioned, and while I will try to flag such, obviously times and holdings change, and I'll not be updating such enumerations. I am NOT your investment adviser, not licensed as such, offer merely opinion which I may or may not advocate (an economist and lawyer can and should be ready to argue any side of an issue; sort of like 'high school debate club', but no holds barred) and do not render any advice or recommendation as to such matters

And this fun one:
"This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering legal, tax, accounting, or other professional service. If legal advice or other expert assistance is required, the services of a competent professional person should be sought."

-- from a Declaration of Principles jointly adopted by
a Committee of the American Bar Association and a
Committee of Publishers and Associations.
Generally, the appearance of trademarks or registered trademarks
within this blog is done as a nominative and factual matter, as
and for description and identification.
See, generally, 15 USC 1115(b)(4).

I am in no wise interested in any implied trademark
infringement or counterfeiting (15 USC 1114(1)); false
designation or unfair competition (15 USC 1125(a));
dilution (15 USC 1125(c)); common law infringement or
unfair competition, or dilution; violation of business
practice law or regulation as to use of marks.

No patents are knowingly infringed, nor 'trade secret' or NDA matter disclosed

The photos used are Creative Commons licensed, or otherwise under a copyright I have proper access for reproduction.

Please respect my copyright

No electrons are harmed permanently in the production of my blog content, although several get quite annoyed

17 March 2009

I saw mommy, kissing Santa ...

Santa and mommy
I can see her lying back in a satin dress
In a room where you do what you don't confess
I could picture every move that a man could make
Getting lost in her loving is your first mistake

   -- Sundown, Gordon Lightfoot

It is always kind of a sad moment, watching a younger idealist encounter something that tears asunder their old mental model, and puts them on the path to being a battered, old, steel eyed mercenary. But with that loss of innocence, new doors open

One useful paradigm to look at the consumers of Enterprise *nix software is to break them into a partition of three major types:
  1. Those that Have to have the 'Real McCoy', possibly for 'CYA' purposes, or because an upstream vendor says that they need the 'real' one as part of the 'silo' they will support without extra charge (if at all) to meet a performance SLA
  2. Those who do not have a strong mandate, but are generally willing to pay the minimal incremental cost such a subscription adds to their bundle of functions, and
  3. Those who will simply not pay for 'free' software: No how, no way; no, sir
The commercial enterprise Linuxes have been generally successful in 'cannibal conversions' of enterprise consumers of 'olde skoole' proprietary Unix -- the morning's news has rumor that IBM is sniffing around JAVA. We covered the topic, and Ted Ts'o's proto-quant thought piece on this [Ted being on leave of absence from IBM to the Linux Foundation, as I recall] some months back, in the context of the future for software freedom

All the young idealists from the BSD side of the FOSS house saw their holdings of SUNW erode away in recent years, through the progressive shifts away from hardware, away from ksh v. csh language debates, into the tangled place of license issues and re-inventions of approaches on scaling, as their firm flailed with JAVA [v. rather than use the one true type safe modern OO language, c++ // sorry, could not resist], into databases with a product that will NEVER be Oracle DB, no matter how hard it tries

JAVA felt it had to move past Berkeley DB, and darn it, all the cool kids use SQL. ORCL is the only credible lead player in database space (IBM and DB2 are there of course, but databases are a rounding error in IBM's financial statement). JAVA never could articulate the unique value proposition that picking up MySQL AB brought to the table, and let the acquisition languish, perhaps hoping that the database's engine in the 'LAMP' stack would pull in tier 2 conversion sales (see the next part, infra). I think they have pretty well demonstrated that "hope" is not a business strategy to follow

Then there is that second tier -- FOSS *nix in through the side door, and without formal support contracts at first. "Under the Radar", so to speak. [Note: The linked article is a bit 'snarky' about Bob's new venture: Lulu, but I find it a wonderful and reliable service, to convert 'print pre-flighted' PDF's to bound books, for cheap, fast and reliably. Highly recommended.]

Just as I might choose to burn up a laser printer to print a manual, and do home-brew binding, Lulu has found a value proposition that makes me 'buy' their service, rather than 'build' it myself. They have convinced me that outsourcing my printing to tier 1 is the 'right' decision. Bob has converted me to producing wonderful documents from TeX, with his business handling the ink to paper, binding and delivery parts. It seems Bob is also 'whiteboxing' short run, 'just in time' printing of conference manuals, and continuing education materials. A nice niche, but low barriers to entry

And then there are the 'No how, no way' school in tier three. This recent post in the CentOS forums, "leasing CentOS5 from DataCenter", caught my eye:
Recently we had a customer come to us asking how much we lease out CentOS for.
I thought this was an odd question - since CentOS is ... FREE

When in dialogue with them I learned they have a number of servers with a different provider that charges them $5.00 per month for the Operating system.

I thought this was a bit strange - and wondered - Is it even legal?

How can a datacenter lease out something that is free?
I could understand perhaps charging a setup fee based upon a customers requirements - this is a service --- but
for a datacenter to live off of the backs of someone else by charging for something that is free -

it just bugs me and rubbed me the wrong way -

Any thoughts - ?

Not sure why it bugged me so much - perhaps its because we write a ton of opensource software and could not imagine someone charging for the software itself.

Support / Installation / Service yes - but the software ... i thought thats what GPL protected folks from

This poster has missed the point of the GPL so widely, it is painful.

The GPL is perfectly fine with charging for software; it requires only that the software be accompanied by an offer of access to the sources it was built from. This is what builds markets, and indeed, what makes CentOS possible in part. CentOS is fine with redistribution and commercialization, so long as our marks and brand are not mis-represented. [Advert: The CentOS project would put a 'tithe' of that rental to good use -- money, machines, bandwidth, and so forth, but it is not mandatory.] A better question might be: Is the data center that employs that poster itself providing the GPL required offer of access to sources, and meeting its duty to provide them, when it provides binaries under 'lease'?

Someone may well come along and undercut a person selling GPL and related FOSS-licensed software. I wrote a post encouraging people who 'cannot wait' for the CentOS 5.3 respin, or for the updates which get stacked up waiting for that stabilization process to end, to 'outcompete' CentOS. I am fine with that. I know it won't happen generally [Scientific Linux is the closest credible 'fellow traveller' remaining on this highway; Hi, Connie and Troy] any time soon, as it is non-trivial to ship and support the full line product

The protection of the Four Freedoms under the GPL makes it inevitable that someone will make a run at commercializing FOSS; this is a 'Good Thing'. But then the trick is to provide value; that is, to also provide design services, consulting, 'service after the sale', or a support infrastructure, to make it safe to entrust one's most valuable assets to that software. I feel CentOS meets that test in the 95% case for tier 2; others may dial that number up or down, and act according to their risk tolerance

And with that, we are back to my post sending people with an external factor 'beating on them' about SLAs to: Go Buy from CentOS' Upstream

Disclaimer: I hold direct positions in JAVA (minimal, just to keep skin in the game and to remind me to follow it) and ORCL, and have held IBM in the past; I regularly quote against IBM as to providing third-party *nix support services. I probably need to write a Caveats and Disclaimers post

12 March 2009

Embarrassingly parallel

Bruce Schneier, in his 'Crypto-Gram' summary this month, has an outlink to a story in The Register on a purported desire of the US NSA to crack Skype's call crypto

But this misses the point -- the needed technology and infrastructure are out there already, fielded, and ready to go, pretty much everywhere. Let's take a hypothetical country -- call it 'Glassware' ("US", "China" and "Elbonia" were taken)

The country of Glassware has a population of M * 10 ^ N people

Of those M * 10 ^ N people, the average family size is three, and each family averages two cell phones and one television (the latest -- digital)

There is a broadcast infrastructure suitable for distributing portions of a problem sample -- say, the header block -- sufficiently long that one can detect when a 'good' private key has been found; that is, one sufficient to decode something encoded with an asymmetric public key.

That target information is distributed over the airwaves, in the vertical blanking interval or a sub-carrier side layer, itself encoded with a private key, readily decodeable with one of several 'factory included' public keys

The power supply switches in the television sets do not actually place the sets into a 'no power drawn' mode -- just into a lower-power 'sleep' stand-by mode. When tickled with the right signal, and not otherwise engaged in presenting content to possessors of the unit (who might complain about glitches if the video graphics display processor did not fully paint their screen), it is possible to wake them up to do some ciphering. Good for them -- recycles the electrons, and so forth

The television has a handy feature -- it will accept and display caller ID information from nearby affiliated cellular phones, over Bluetooth -- it can be configured to display ONLY wanted cell phones, but it will receive and collate data from all phones ringing near it.

So when Mrs Glassware has her girlfriends over, and the babysitter calls during the home sales party, the TV will pop up an alert for them of the call over the din of the fun.

The TV also sends back, over duly encoded and encrypted SMS messages, the logfiles to a series of central collation points -- Father Glassware can see when the oldest son is over at the home of the girl from the wrong side of the tracks. The benefits are as broad as the imagination can see. Who could be against protecting the children?

Those cell phones, as it turns out, are really not using very much of all that processing power they have in their CUDA-style chips to draw those dinky screens, and are really off most of the time as well.

Let's not waste their graphics processor chips either, while they sit on the charger. This is great, as it simplifies the math.

Perhaps Glassware has an even better infrastructure -- say a national conversion to high-definition digital media signaling, and a mature broadband or cable modem backbone. All the better for shuttling information around digitally.

A friend who deals with quants tells me the quants are all hot and bothered to get 4 x quad-head graphics cards in Dell Precision units -- 16 GPUs -- because each can deliver a 10,000x (10 ^ 4) speedup over the simple general-purpose processors the chassis carry. All for under $10k a unit. They are doing the math and think they can have a huge HPC farm, just in the normal overhead their traders and developers have to have anyway to do their day jobs.

M is 3 in the US (we'll round to 4 to make the math prettier), and perhaps 10 in China, and N is 8 (a hundred million). Feel free to pick a value for your local Glassware

So, properly harnessed, we have at least M * (10 ^ N) * (10 ^ 4) compute engines available to us -- and we should be able to crank out at least 100,000 (10 ^ 5) samples a second. In cough numbers -- sufficiently accurate for our 'back of the envelope' purposes here -- 10 is roughly equal to 2 ^ 3. Base 2 is useful, as key strength to solve is measured in bits. There are 8.6 * 10 ^ 4 seconds in a day -- call it 2 ^ 16

so: M * 2 ^ (3 + 3 + 3 + N + 4 + 5+ 16)

US: 2 ^ 43 key trials per day;
China: 2 ^ 44 key trials per day.

The old DES cipher had a 2 ^ 56 keyspace -- worst-case time to solution is 2 ^ 13 days, and always getting better as the build-out scales in, without even beginning to bring to bear pre-processing tricks, one-time-pad reuse, identifying imperfect implementations, planting known cribs, and the rest.
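Taking the post's own round figures at face value (2 ^ 43 trials per day for the US case, against the 2 ^ 56 DES keyspace), the worst case works out in a shell one-liner; this is just an arithmetic check of those figures, not a claim about real hardware:

```shell
# 2^56 DES keys divided by the post's 2^43 trials/day (US case);
# shell arithmetic is 64-bit here, so the shifts are safe.
days=$(( (1 << 56) / (1 << 43) ))
echo "worst case: ${days} days"   # prints: worst case: 8192 days (2^13)
```

Call it 8192 days, a bit over twenty-two years, for a single exhaustive pass at the stated rate.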

And it is Free, free, free -- or better yet, paid for by others. What was that old saw about people living in glass houses?

09 March 2009

No sparrow falls, but that ....

Letters, we get letters ...

> I saw a blog posting recently where you
> commented that RHEL 2.1 was based on RHL
> 8.0. My memory /understanding is that
> RHEL 2.1 is based on RHL 7.1... or maybe
> 7.2... but not 8.0.

memory fades ... let me dig a bit

Let's see ... I was in Red Hat's 'tester-list' external beta tester program (under NDA)

The watershed email (which put my memories in the RHL 8 timeframe) was:

Date: Thu, 17 Jul 2003 14:06:07
From: Michael K. Johnson
To: testers-list@redhat.com
Subject: Heads-up -- change coming [Red Hat Confidential]

This mail contains only Red Hat Confidential material. ...

But there had been rumblings ever since RHAS 2.1 issued, which made me want the 'insurance policy' of a community RPM-based distribution

Which previously led to me setting off with Greg Kurtzer to get cAos going. Greg and I discussed an enterprise product from the earliest cAos meetings (the earliest I have a record of is 29 Apr 2003). Greg, Rocky McGaugh and I pitched this product concept and extension of the 'two Linux' policy to an IBM VP at SuperComputing 03 in November 2003 in Phoenix, AZ. I remember flying down from Las Vegas for the meeting

And it matured quickly. Parallel efforts were underway by others -- Tao Linux, Xos, and more. There was a mailing list: rhel-rebuild list <rhel-rebuild-l@uibk.ac.at> but I've not seen a post on that list for eons

Date: Tue, 9 Dec 2003 09:30:23 -0600 (CST)
From: Rocky McGaugh
Reply-To: caos@caosity.org
To: caos@caosity.org
Subject: [cAos] Announce: centos

The cAos-EL projects have been officially renamed to:
centos (Community ENTerprise Operating System)

Sadly, Rocky has died, too young. Greg (sensibly, for his needs) stayed with the cAos part. And I, an old tired dog, am left on the CentOS part ... Does anyone not die too young, while still young of heart?

so thus my recollection of late 2003 ...

RHL 8 general release Sept 30 2002
RHL 9 general release Mar 31 2003
RHL 7.3 GA May 6, 2002

RH AS 2.1 General Availability: May 17, 2002

negative rebuild report under 7.2: April 2003

So ... it seems true that RHAS 2.1 was on a pre-RHL 8 base. But RHL 7.3 was so good, and RHAS 2.1 ... wasn't. Perhaps that led me to conflate the fork point

A 'tip of the hat' to sharp-eyed blog reader Scott Dowdle, who noticed the matter and called it to my attention

aid to memory -- 5.3 QA archive .repo file

I mentioned needing to add the local install archive in a test candidate, in directory /etc/yum.repos.d/, so that updates happen properly

[herrold@centos-5 yum.repos.d]$ cat CentOS-RPH.repo
name=CentOS-$releasever - 53
[herrold@centos-5 yum.repos.d]$

Without it, as noted in the twitter feed, we get some security updates from 5.2 [avahi, nss, avahi-compat-libdns_sd and nss-tools], and an unsolvable update set.
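For readers who have never hand-built one, a complete yum repo stanza is short. The sketch below writes one into the current directory; the repo id, description, and baseurl here are hypothetical placeholders, not the actual (non-public) QA archive paths:

```shell
# Sketch of a minimal local-archive .repo stanza; the repo id and
# baseurl are hypothetical placeholders, NOT the real QA archive paths.
cat > CentOS-local-qa.repo <<'EOF'
[local-qa-53]
name=CentOS-$releasever - local 5.3 QA archive
baseurl=file:///srv/mirror/centos-qa/5.3/os/i386/
enabled=1
gpgcheck=0
EOF
# then, as root: cp CentOS-local-qa.repo /etc/yum.repos.d/
```

With a stanza like that in place, yum resolves the test candidate's updates from the local archive rather than falling back to the 5.2 mirrors.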

CentOS 5.3 qa - fresh installs testing on i386

centos-5.3 QA fresh installs
KB released a twitter note:
ah! the smell of a freshly baked distro.
earlier today, reflecting the presence of updated binaries and wire-install ready images. Yummy, and I noted them in my twitter feed.

I completed the rsync of the new candidate archive, and have knocked out both GUI and text mode domU wire installs of the i386 candidate. As you may recall, I had done updates testing before, without incident.

The screenshot is a fullscreen of 1440x1050, and I am not about to booger the PCO feed again -- the direct link is: http://www.herrold.com/images/blog/c53.-qa.jpg

The test running (side by side in two xen domU test candidates) is a rather hard one, which builds graphviz on the way to the trading shim -- the test script can be retrieved from:
for those who want to play along.

Something is slightly wrong in the install of the graphviz binary produced, and I'll get that fixed. Also there is some LaTeX issue which I need to run down in the current doco build, generated from a SPOT -- a single point of truth. The -f option to shim-builder.sh may be enough to get past it, regenerating the .dot files, and thus the included .ps art.

The GUI-based installation succeeded. We have some slight warning-level noise in building today's shim, but it is just C++ being picky (hooray for type checking). The TUI test lags, as I had to build the TUI instance separately, and get the local install archive set up in /etc/yum.repos.d/

This is ready to go as to my needs, but I'll test media installs, and on x86_64, next.

06 March 2009

Wrestling blog markup, Round Three

Lovely ... the large image in the prior post boogered the CentOS aggregator. Posting this to push it further down the page.

Where is that coffee cup?

Wrestling blog markup, Round Two

Perhaps I should have thought through what I was saying a bit more in preparing that posting, to avoid getting bitten

This should look better in the CentOS consolidator:

ring girl, round two

PCO nine thumb | OSE nine thumb -- side-by-side thumbnails. Stuck between a rock and a hard place here, it seems. Notice the ocean of whitespace to the left of the picture of the pregnant lady in the left image

than this mess:
PCO eats markup

PCO eats markup differently than Blogger. Point taken.

But, dang it: 'wrangling' through blogging software quirks by adding tables with:
<td>&nbsp;<img src="http://www.herrold.com/images/blog/spacer.gif" width="0" height="0">&nbsp;</td>
blocks littered through it sure feels nasty -- a lot like fighting browser quirks in a bygone era. This should be WYSIWYG

Where did my coffee cup get to?

Wrestling blog markup

Stuck between a rock and a hard place here, it seems. Notice the ocean of whitespace to the left of the picture of the pregnant lady on the left image

I had previously noticed that the CentOS feed was exposing the <hr> 'separator rules' in my markup, where Blogger was not, on my OMG post earlier this week. Not a big deal, and I did a quick re-edit and repost, but I dislike doing so. It exposes a problem in the Blogger preview engine as well, I guess

But darn it -- I should not have to set up a table to box image placement, at this late date. I had to do so to get the 'clown' picture at the bottom of Nine pregnant gals to lay right at the bottom in the Blogger client. Then I see these side-by-sides once it hits the CentOS consolidator:


Grrr ....

News recap for Rip van Winkle

Rip van Winkle
This posted, in light of a question seen today:

CentOS' upstream published an announcement, reminding users that its 2.1 line reaches its scheduled end of life soon.

Just a quick 'shout out' and 'thank you', by the way, to John Newbigin, for being its shepherd for the CentOS project.

From memory, substantive bug fixes were turned off, say, a year and a half ago. John has been there for CentOS 2.1 (based on a product itself based on the largely lame and unloved RHL 8). RHL 8 was a sick puppy, but also a gateway to the upstream's re-entry into the Linux market with the 'Enterprise' model, and so to CentOS itself

Reviewing this page, I see: tzdata enhancement update 2008-11-05, but that scarcely counts. Before that, libxml2 bug fix update 2008-09-02 (but that carries security overtones)

Before that, the first 'real' bugfix seems to be: dump bug fix update 2007-02-02 -- So, two years now since a material (non-security) bug fix.

Time to get that conversion to CentOS 3 series done.

Nine pregnant gals in a queue

a pregnant lady
I wrote a bit ago on the fact that the CentOS 5.3 update release was proceeding apace. I may have been insufficiently direct.
If you feel you need the facilities provided by the CentOS project sooner than they are provided, or that you need deterministic release schedules or support: please go buy such from our upstream, or from a third-party vendor who can sell you the expedited subset of services truly needed.

Almost all of the CentOS team have active consulting practices, or had such before their present $DAYJOB. They have demonstrated the ability to handle the matter.

Similarly, CentOS' upstream has a unit cost for a three-year JBoss-plus-enterprise-distribution product of US $297 -- WITH non-metered support. This is unbeatable to point your pointy-haired boss at when she is badgering you about 'CentOS is late'.

We are not late of course; we will issue the 5.3 respin when it is right, and not before. There was some loose talk speculating that there were insufficient resources, or that QA testing had not begun. Neither is correct.

Consider the well-known lesson as to the futility of trying to parallelize a task at a sequential constraint chokepoint, pointed out by Fred Brooks in The Mythical Man-Month:
Adding manpower to a late software project makes it later.

Brooks confines his example to late tasks, but it turns out this is a broader rule than that, with general applicability.

A quick example: consider the time it takes to produce one new human baby. For one woman, the time is nine months. No matter how hard they try, nine women working in parallel cannot produce that one baby any quicker -- adding eight more pregnant women to the queue does not help.

CentOS has some goals in its build process which the upstream does not -- we strive to produce the packages we release on a 'self-hosting' basis, so that anyone who works at it can replicate our work freely. Upstream has never had that goal in their RHL nor now in their Enterprise product. We have to identify failures with tools such as the ones KB talked about recently.

Also, build sequence matters a lot in bootstrapping into a next point release; there are hidden build order dependencies which need to be solved -- sort of like packing a station wagon with furniture and household goods, when moving. The big stuff HAS to go in first, and the little stuff later placed in 'found' gaps. This cannot be well parallelized.

We have the QA team up and primed [there is a non-public webpage, and I see 29 members by a quick count]; the needed ACLs to get at the candidates are in place and tested [I pushed an update earlier this week for one member]; some QA has occurred. I also updated, on a coordinating mailing list, some results I had announced a couple of weeks ago in the QA notes on this release.

This is a little too wide a parallelizing fanout, in terms of coordination of testing, but it happens to be how it turned out this time. Future rounds, with CentOS 4.8 and 6.0 coming, will probably run a bit smaller. The QA master (and indeed I hope, each of the 29 individually) will review the participation at the end of this cycle; some on the list will get dropped from a QA role, and slots freed up for new members to be invited in.

Experience has shown that there is no sense adding 'community' that does not put its shoulder to the wheel and work. We on the CentOS team see the laments from people wanting to join; it seems to me, from watching, that they really want to consume, and not to carry the load of, CentOS. Note that 'users' of CentOS are not our target 'community'; they are welcomed, but really, how does having lots of users HELP the project? The main IRC channel #centos is consciously limited to CentOS-specific issues, and structured as a learning environment, for a reason.

Want to be asked onto CentOS QA? Go for it; there are no barriers to demonstrating competence and interest in any of the following venues: file good bugs; comment on bugs with reproducers or, better, fixes; participate on a sustained basis, in a knowledgeable fashion, on any of the mailing list, wiki, forums, and IRC

The CentOS team watches all these venues all the time -- the CentOS 'community' is a meritocracy, and merit will be welcomed in -- but also know that this means there is an implicit 'Bozo filter' as well.

clowns

04 March 2009

Oh ... my ... goodness ...

I've been mirroring, rebuilding, patching, filing bugs, and so on against bleeding-edge source package RPMs (SRPMs) out of 'RawHide' for at least a decade. It is one means I have used to push features from later Red Hat releases and hotfixes back into previous releases, for customers in our consulting practice (and on my own account).

We pushed post-RHL support, and commercial RPM-based support for side architectures including NetWinders [ARM], PA-RISC, PPC, Sparc hardware, and Alphas, from RawHide. We have built FOSS-based 'latest and greatest' LTSP forks of reduced package sets for commercial applications from RawHide

A largely unheralded change to a new RPM package file format for RawHide SRPMs coming out to the builders, arriving with Red Hat's rpm-4.6, breaks all that, for the first time in at least a decade ... Jeff Johnson, a former lead developer and maintainer of rpm at Red Hat preceding the current incumbent, bent over backward and jumped through hoops to make rpm a lingua franca. Jeff was followed by a short-time incumbent, Paul Nasrat, who similarly did no harm to rpm

[herrold@centos-5 ctrlproxy]$ cp /mnt/nfs/var/ftp/pub/mirror/redhat/rawhide/SRPMS/ctrlproxy-3.0.8-2.fc11.src.rpm .
[herrold@centos-5 ctrlproxy]$ ls
ctrlproxy-3.0.8-2.fc11.src.rpm
[herrold@centos-5 ctrlproxy]$ rpmbuild --rebuild \
	ctrlproxy-3.0.8-2.fc11.src.rpm
Installing ctrlproxy-3.0.8-2.fc11.src.rpm
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
error: unpacking of archive failed on file /home/herrold/rpmbuild/SOURCES/ctrlproxy-3.0.8.tar.gz;49aeb249: cpio: MD5 sum mismatch
error: ctrlproxy-3.0.8-2.fc11.src.rpm cannot be installed
[herrold@centos-5 ctrlproxy]$ rpm -Vp ctrlproxy-3.0.8-2.fc11.src.rpm
Unsatisfied dependencies for ctrlproxy-3.0.8-2.fc11.src: rpmlib(FileDigests) <= 4.6.0-1
missing ctrlproxy-3.0.8.tar.gz
missing ctrlproxy.config
missing ctrlproxy.init
missing ctrlproxy.spec
[herrold@centos-5 ctrlproxy]$

Amid all that, the important part is that a new rpmlib at level 4.6.0-1 is needed (marked in red), and that without it, rpm produces a rather unhelpful cpio MD5 sum mismatch error message (marked in blue)
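One can also ask a package directly which rpmlib features it demands, rather than waiting for the cpio error. The query below is a sketch of that approach (the file-existence guard is just so it is harmless where the package is absent):

```shell
# List the dependencies a package declares and keep only the rpmlib()
# tracking dependencies; a FileDigests entry flags the new format.
pkg=ctrlproxy-3.0.8-2.fc11.src.rpm
if [ -f "$pkg" ]; then
    rpm -qp --qf '[%{REQUIRENAME} %{REQUIREFLAGS:depflags} %{REQUIREVERSION}\n]' \
        "$pkg" | grep '^rpmlib('
fi
```

On the package above, that should surface the same rpmlib(FileDigests) <= 4.6.0-1 line that rpm -Vp complained about.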

For the short term, until I sort the matter out better, I'll set up a Raw Hide xen domU instance (which has the later rpmlib, and so can manipulate the package); that domU will be upgraded just enough to handle rpmlib(FileDigests) <= 4.6.0-1, then frozen against other breakage from later updates

Further, I'll grant that domU RW access to the NFS export that contains my build tree (/home/herrold/rpmbuild/), so that I can position an SRPM into that tree and unpack it with: rpm -U packagename.src.rpm

With that unpacked set -- the tarball, patches, and such in the SOURCES subdirectory, and the .spec file in the SPECS subdirectory -- I can then (hopefully) switch to an earlier rpm variant on an older unit, and rebuild the package to write a new SRPM in the older rpmlib form
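If memory serves, rpm 4.6 also grew macros to select the file digest algorithm at build time; assuming those macros exist and are honored (I have not verified this end to end), re-emitting the SRPM on the Raw Hide box with MD5 digests might avoid the domU dance entirely:

```shell
# Hypothetical workaround, assuming rpm-4.6 honors these macros;
# digest algorithm 1 is MD5 in the PGP hash-algorithm numbering.
# Guarded so the snippet is a no-op where rpmbuild or the spec is absent.
if command -v rpmbuild >/dev/null && [ -f ~/rpmbuild/SPECS/ctrlproxy.spec ]; then
    rpmbuild -bs \
        --define '_source_filedigest_algorithm 1' \
        --define '_binary_filedigest_algorithm 1' \
        ~/rpmbuild/SPECS/ctrlproxy.spec
fi
```

Treat that as a sketch to test, not a confirmed recipe.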

We'll see. Change requires that the caterpillar moult and break out of the chrysalis. It does not require that an angry Mothra result, destroying the surrounding city in concert with Godzilla.

It might be asserted that this is some sort of performance or speed optimization.
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Donald Knuth
rpm has had the instrumentation capabilities in place to see where performance penalties lie, and has had them for some time, so that one can easily test where the load is. This is not rocket science; similarly, we drilled micro-second instrumentation and time-stamping into the shim, as we are accustomed to looking for code bottlenecks, and people using our software and trading in the financial markets really care about where the lags are. I wrote a bit about this last month in the New Future Always Coming piece.

As proof of the pudding, take a moment and run rpm 4.6.0 with --stats on a large package built under the old format, then rebuild under the new 4.6.0 variant and repeat the test. Please feel free to get back to me at timetrials at owlriver dot com as to where YOU find the true performance issues to be, comparing a package built with SHA2 versus MD5. I have my preliminary stats for a later post, but welcome more data

This sure seems like a gratuitous and thoughtless format breakage to me, with no backward compatibility path announced. RHAS 2.1 is about to go out of support, of course (a product which dates back to a foundation including a fix for the as-shipped RHL 8 rpm database locking issues), but RHEL 3, 4 and 5 just lost the ability to use Raw Hide SRPMs, so far as I can currently see

I may be missing something obvious by way of a workaround, and would be glad to be corrected, but ...

I really feel that the current incumbent, Panu Matilainen, has NOT clearly articulated this on the mailing list and in the bugs at Red Hat, which I read very closely

This WILL cause, at a minimum, confusion, and also compatibility problems down the road with bi-directional interoperability at the SRPM level with both the LSB and Red Hat's major enterprise Linux distribution competitor, Novell, and its SLES line [seemingly being branded SUSE Linux Enterprise, presently]. Oracle's UBL and its consumers are sort of off in a 'market niche' world of their own here, and the impact will be less pronounced. It will probably cause such SRPMs to 'just NOT work' in the SuSE buildservice, but I have not tested this yet

Heaven help the users of side distributions -- cAos, Mandriva, PLD, and the non-English-fluent users of the RPM file format -- who try to rebuild and use a random found SRPM. It really seems that Red Hat marginalizes other formats (or forces one to choose a camp to join) with this move. Perhaps that was their intent. Fedora has been used by Red Hat as a 'wedge' for this purpose in the past [consider the studied dis-interest in inter-archive compatibility], and it may be just more of the same

Grrr ...

Disclaimer: These days, I am aligned with the RPM5 insurgents, and served as the long-time maintainer of the old RPM website, which Red Hat has since re-claimed

SportsCenter for Quants

SportsCenter is familiar, has a friendly user interface, and is indeed omnipresent -- we ate dinner last night at a sports bar, feasting on pub grub with my mother, my daughter, and her one-year-old son (and thus my grandson), the baby. That young man is just getting walking-fast down pat, preparatory to learning to run.

But he was also craning his head from side to side, sitting in the booster chair, to see past the adults' heads for a good view of a panel television, following the fast-paced clips

but ... What's a 'quant'?

A quant is a specialized kind of geek, focused on studying, modeling, and seeking to understand financial markets. They do this with an aim of knowing just a little bit more, just a little bit sooner, than the other quants and other market counterparty traders against whom they compete, so as to arbitrage a profit from fleeting asset mis-pricings in a market, buying the one that is 'too cheap' and selling the one 'too dear'. Profit!!

Part of my morning routine is to check the overnight results of the markets in summary, and to get a preview of the 'scheduled events' coming up for the trading day; of course, today's unscheduled exogenous events are not previewed. I still need that time machine.

This morning, Jim Chanos of Kynikos Associates was on CNBC's Squawk Box [6 to 9 am, US ET], coming into the 8:15 release of the ADP jobs data. For those of you who may not follow the Financial Layer, Chanos was the fellow who early on clearly pointed out, in a quarterly conference call, that the Enron empire and its emperors had no clothes. He was called a rude name, but Enron could not thereafter hide from the truth of its frauds, and toppled in short order.

Chanos had the courage of his conviction (that Enron was overvalued) to sell Enron stock 'short', and later to 'cover' and close the position for cents on the dollars he had received in the initial sale.

Today, anchor Joe Kernen and wingman Carl Quintanilla had a good ten minutes of give and take with Chanos. Good questions, and a very thoughtful and reasoned set of replies from Chanos.

Chanos asks the very sensible question of WHY an arbitrary minimal capitalization requirement is 'set in stone' at 2.5 percent, rather than, say, 0.5 [under a convention called Basel II]. Economics 101 covers fractional reserve banking and the multiplier effect; governmental (social) 'guarantees' and the moral hazard of 'too big to fail' in the US [compare, contra, the last year in New Zealand] have removed market discipline by re-incentivizing leverage in the last few years.

I say re-incentivized: note that we went through all this on a smaller scale in the so-called S and L crisis only twenty years ago -- did no one in government remember?

I was raised in a family that invests, reads, and thinks about applying what we see in the WSJ and the related financial press [I currently favor IBD], mailing lists, newsletters, and the historical literature about markets. Add training and practice as an economist and a lawyer, developing quant tools, and having high-power computing readily available all my life [currently, on a CentOS platform]. And so I am comfortable with finance issues, and can work through what the 'correct' answers should be, by and large.

There is a large body of well-written and often entertaining economic history -- I'll cover a bit in future posts -- which a geek will enjoy reading

Some geek friends who hang out in the side #centos IRC channels seem to feel helpless before market events. Treating CNBC as SportsCenter, following the stars, and learning to recognize the rogues is one way to gain confidence and financial literacy

Seven layer ISO stack, plus two

The seven-layer OSI model stack -- an ISO standard, palindromically -- characterizes how computer and data network (and thus internet) applications communicate between separate computers in a defined fashion. From the bottom up: Physical, Data Link, Network, Transport, Session, Presentation, and Application layers

Computer networking geeks use the seven-layer stack to specify and to analyze matters running from what a given voltage level means at the bottom (Physical layer), to how an email client retrieves email from a mailstore at the top (Application layer).

In preparing this piece, I found reference to some mnemonic forms, which I did not know before, for getting the sequence of the Layers correct:
The mnemonics "People Design Networks To Send Packets Accurately", "Please Do Not Throw Sausage Pizza Away", and "All People Seem To Need Data Processing" may help you remember the layers.
BambooWeb article

Some wags extend the definition thus:
Of course, above those seven layers are two more: the Financial, and the Political Layers, which also need to be functional.

The dysfunction and friction of office politics, or being unable to get funding for a more reliable and functional network infrastructure rather than for a Bigger, Newer, Better computer to sit unused on a pointy-headed boss' desk, come to mind as obvious examples. In wider scopes, Financial may encompass Economics and Markets; Political expands to social interaction, policy, and polity

At least one restatement also adds an explicit 'Religious' layer. I think one can perhaps subsume Religious into Political, but I can see the genesis of the historical tension: drawing a difference between those asserting dominion over a person's earthly presence, and those over the spiritual corpus.
Network technicians will sometimes refer euphemistically to "layer-eight problems," meaning problems with an end user and not with the network.

Carl Malamud, in his [1991] book "Stacks," defines layers 8, 9, and 10 as "Money", "Politics", and "Religion". The "Religion layer" is used to describe non-rational behavior and/or decision-making that cannot be accounted for within the lower nine levels. (For example, a manager who insists on migrating all systems to a Microsoft platform "because everyone else is doing it" is said to be operating in Layer 10.)
BambooWeb article

I see also that people have adapted use of the Political and Financial Layers [pdf]. No model is so good that it cannot be twisted, abused, and extended, it seems

Anyway, I lay this foundation to provide a kick-off target to point back to in future discussions of the Political and Financial layers of the stack.