08 June 2009

Phat pipes

Check the top row, right entry ... peaking at 44 megabytes per second, with a lesser rate sustained over 8 hours; all relevant filtering bridges and servers in the transfer at our end are running ... CentOS.
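A quick back-of-envelope check on the volume that graph implies. The 44 MB/s peak is from the screenshot; the sustained rate below is an assumption for illustration, not a figure from the actual graph:

```python
# Rough transfer-volume estimate from the bandwidth graph.
# peak_rate_mb_s is the observed peak; sustained_rate_mb_s is an
# assumed value for the "lesser rate" held over the 8-hour window.
peak_rate_mb_s = 44
sustained_rate_mb_s = 30   # assumption, for illustration only
hours = 8

total_gb = sustained_rate_mb_s * 3600 * hours / 1024
print(f"~{total_gb:.0f} GB moved in {hours} hours at {sustained_rate_mb_s} MB/s")
```

At anything near that sustained rate, an 8-hour run lands comfortably in multi-hundred-gigabyte territory, which squares with the pull described below.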

We've spent the last couple of months building out our (new) presence in the North data center. We have sites in the central city, on the Dublin fiber ring, and through the north-end AT&T switching center, but each has had its faults over time. The downtown 'carrier hotel' was offline for four hours during last September's multi-day power outage, due to a lack of redundancy in its generators; the Dublin fiber ring peering exchange point had issues as well, for longer. Our multi-site strategy saved the day: none of our customers lost inbound data or went dark in their web presence, and uplinks were not affected, as we handle them over different routes. In the last couple of weeks, AT&T's congestion issues have reappeared at their plant as well, while we were 'babysitting' a large CAD/FEA file transfer ... again multi-gig.

The new data center is pricey -- but in addition to the care at the physical layer, it is BGP multi-homed and has really fat pipes. The screenshot up top shows the inbound consumption in green. Initially we had a hard cap on our switch limiting inbound to 10 megabytes/sec -- but we were doing a large (multi-hundred-gigabyte) pull, and dropped the cap once it was clear all was working well.
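The actual cap lived on the switch, whose vendor and syntax aren't given here. For what it's worth, a similar inbound cap could be approximated on the CentOS hosts themselves with `tc` ingress policing -- a hypothetical sketch only, with the interface name assumed:

```shell
# Hypothetical sketch: approximate a ~10 MB/s (80 Mbit/s) inbound cap
# on a Linux host using tc ingress policing. The cap described in the
# post was on the switch, not the servers; IFACE is an assumption.
IFACE=eth0

# Attach an ingress qdisc, then police all IP traffic above 80 Mbit/s
tc qdisc add dev "$IFACE" handle ffff: ingress
tc filter add dev "$IFACE" parent ffff: protocol ip prio 1 u32 \
   match u32 0 0 police rate 80mbit burst 1m drop flowid :1

# Dropping the cap later is just removing the ingress qdisc:
# tc qdisc del dev "$IFACE" ingress
```

Policing on ingress simply drops packets over the rate rather than queueing them, so TCP transfers back off to roughly the configured rate on their own.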

We are in the paperwork phase at the moment with ARIN, clearing up some 'lint' on our ASN, but with any luck we'll have completed the cutover by the end of the month.