A nuclear Google may be a very good thing

Update: April fools, courtesy of Arrington, and I got taken in big time. Ugh! Leaving the original post up as an ode to my naivete.

52 years ago, in August 1958, the United States was so confident in its ability to provide clean nuclear energy that it put one hundred and sixteen men in a tin can called the USS Nautilus and sent them, along with a nuclear reactor, under the North Pole. Many of the crew from that voyage remain alive and well today.

What puzzles me is why the USA isn’t the undisputed world leader in nuclear power. Perhaps the titles of undisputed leader in nuclear weaponry and leader in nuclear power are mutually exclusive. So it’s France that produces most of its power from nuclear reactors.

Google getting into the business of nuclear power is the most exciting development in nuclear power in this country since the Nautilus. Google’s data centers are massive power consumers, but power transmission also consumes a lot of resources: it eats up land and steel, and a lot of power is lost along the way.

If you could put a nuclear reactor on a 320ft submarine more than 50 years ago, you can build a clean nuclear reactor in 2010 with enough power to supply a local data center and the local town it provides employment for. My hope is that this is Google’s nuclear vision.

Microsoft Buzzquotes

“My machine overnight could process my in-box, analyze which ones were probably the most important, but it could go a step further,” he said. “It could interpret some of them, it could look at whether I’ve ever corresponded with these people, it could determine the semantic context, it could draft three possible replies. And when I came in in the morning, it would say, hey, I looked at these messages, these are the ones you probably care about, you probably want to do this for these guys, and just click yes and I’ll finish the appointment.” ~Craig Mundie from Microsoft in today’s NY Times

Sounds like Microsoft is working on a Positronic Brain rather than writing software for multi-core processors.

Server Downtime == Police Barricades and Angry World Series Fans

Paciolan is managing ticket sales for the Colorado Rockies. Their servers were hit with over 1,500 requests per second, which took down not only the Rockies’ ticket sales infrastructure but all of Paciolan’s other customers too.

They claim to have been hit by a DDoS attack, but that’s hard to prove or disprove: corporate firewalls and AOL proxies funnel many requests through a single IP, which looks just like a DDoS attack even when it isn’t.
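To see why naive per-IP counting can’t settle the question, here’s a toy sketch (with made-up example IPs and a hypothetical request log): a corporate NAT gateway and a single attacking bot look identical to a request counter, though counting distinct user agents or cookies per IP can help tell them apart.

```python
from collections import Counter

# Hypothetical request log of (source_ip, user_agent) pairs.
# 198.51.100.7 is a NAT gateway fronting 500 distinct legitimate users;
# 203.0.113.9 is one bot hammering the site 500 times.
requests = (
    [("198.51.100.7", f"Mozilla/5.0 (user {i})") for i in range(500)]
    + [("203.0.113.9", "badbot/1.0")] * 500
)

# A naive per-IP threshold flags BOTH sources as attackers.
by_ip = Counter(ip for ip, _ in requests)
flagged = {ip for ip, count in by_ip.items() if count > 100}

# Counting distinct user agents per IP separates the two cases:
# many agents behind one IP suggests a proxy/NAT, not a bot.
agents_per_ip = {}
for ip, ua in requests:
    agents_per_ip.setdefault(ip, set()).add(ua)
```

The NAT gateway shows 500 distinct user agents behind one address while the bot shows only one, which is the kind of signal you’d need to call something a DDoS with any confidence.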

Is 1,500 requests per second a lot? No. Feedjit (my site) peaks at 140 requests per second on just two servers – and the data it serves is dynamic.
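The back-of-envelope math from those numbers is straightforward:

```python
# If two servers comfortably peak at 140 req/s, that's 70 req/s
# per server. Scaling that linearly (a rough assumption) to
# Paciolan's reported load:
per_server = 140 / 2            # 70 req/s each
servers_needed = 1500 / per_server
print(round(servers_needed))    # ~21 servers, before any headroom
```

That lands squarely inside the 10-to-30-server cluster estimate below.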

So a cluster of 10 to 30 servers should easily handle the load they’ve described – especially if all it’s doing is queueing visitors and letting only a handful through at a time, which is what Paciolan’s ticketing software does.
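The queue-and-admit idea is simple enough to sketch. This is a minimal toy version of the general technique – a FIFO waiting room that caps concurrent buyers – not Paciolan’s actual implementation, whose internals I don’t know.

```python
import collections

class WaitingRoom:
    """Toy admission queue: only `capacity` visitors hold a
    ticket-buying slot at once; everyone else waits in FIFO order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = set()                 # visitors currently buying
        self.queue = collections.deque()    # everyone else, in order

    def arrive(self, visitor):
        if len(self.active) < self.capacity:
            self.active.add(visitor)
            return "admitted"
        self.queue.append(visitor)
        return "queued"

    def leave(self, visitor):
        # When a buyer finishes, promote the next person in line.
        self.active.discard(visitor)
        if self.queue:
            nxt = self.queue.popleft()
            self.active.add(nxt)
            return nxt
        return None
```

The point is that the expensive work (the actual purchase flow) is bounded by `capacity`, so the frontend only has to do the cheap work of parking everyone else – which is exactly why 1,500 req/s shouldn’t sink a properly sized cluster.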

The result? Police are erecting barricades around Coors Field. Here’s a quote from CNET:

“…many fans are apparently converging near Coors Field in hopes that the team will sell tickets in person through the box office; so many in fact that the police have closed streets around the ballpark and are erecting barricades, the paper reported.”

Ticketmaster is trying to buy Paciolan – the deal is currently under government review. Ticketmaster runs mod_perl (and so does Feedjit), and some very smart people who know a lot about scalability (and whom I used to work with) work for Ticketmaster. So hopefully the deal will go through and mod_perl will come to the rescue.

btw, I’m giving a short talk in 2 days on how to scale your web servers fast, based on my experience scaling Feedjit.

Facebook’s getting out of hand

Once upon a time I was a Facebook addict. It was an awesome way to reach out to people I hadn’t been in contact with for years, share photos, update my status 80 times a day, etc. But Facebook apps are getting a little out of hand…

…and I’ve always hated that friend detail feature. </end rant>

The Naked Truth Party

I just got back from the Naked Truth panel and party in Seattle. It was loads of fun. I met John Cook for the first time in the flesh – he’s interviewed me about 3 times and we’d never actually met. I also met Michael Arrington briefly.

The panel was so-so. I think the general consensus was that we didn’t learn a hell of a lot that was new, but it made a great excuse for the party afterward. There was some playful banter on the panel between the Seattle P-I (John Cook) and the Seattle Times (Tricia Duryee) that turned into a bit of a circulation comparison.

Michael Arrington was hilarious on the panel, openly poking fun at the WSJ and Fred Vogelstein from Wired. I’ve never been a fan of Wired and am glad to see I’m not alone.

Looking forward to the next one!!

Rob Malda vs Alexa vs Slashdot vs Digg

Rob Malda (aka cmdrtaco), the founder of Slashdot.org, has written a rather schizophrenic piece on Slashdot about Alexa. He spends most of the article beating up Alexa, but makes sure to include 5 links to the site in the article – two of them specifically asking people to install the Alexa toolbar.

A while ago Digg passed Slashdot in traffic (I’ve written about this before). An article covering the phenomenon got Dugg, and thousands of Digg users clicked through to Alexa and installed the Alexa toolbar. Notice the weird spike where the graphs meet – it skewed the Alexa results even further in Digg’s favor.

So now Slashdot looks even worse to journalists – most of whom are writing about Digg and calling Rob for background. Which is why Rob can’t help asking you to pretty please install the Alexa toolbar, to make Slashdot look good to journalists again.