This is a brilliant short talk by Tina Seelig asking students, “If you had $5 and 2 hours, what would you do to make as much money as possible?” Her point that capital can simply be a distraction is a view I’ve held for a long time – especially in the context of cheap-to-start-and-run consumer web startups.
-
Sunshine with clouds – Ubuntu's game-changing release
I’m going to use the term “Cloud” in this post, which I despise for its nebulosity. The press has bandied the term around so much that it means everything from the Net as a whole to Google Apps to virtualization. My “cloud” means a cluster of virtual machines.
I’ve been a huge fan of Mark Shuttleworth for a long time. Besides the fact that his parents have great taste in first names, he’s taken his success with Thawte and ploughed it right back into the community whose shoulders he stood on when he was getting started. And he’s a fellow South African. Ubuntu is doing for online business what the Clipper ship builders did for the tea trade in the 1800s. [Incidentally, the Clipper ship is still the fastest commercial sailing vessel ever built.]
Today the huge distributed team that is Ubuntu released Karmic, or Ubuntu 9.10. Karmic Server Edition includes the Ubuntu Enterprise Cloud (UEC), which lets you take a group of physical machines, turn one of them into a controller, run multiple virtual machines on the others, and manage them all from a single console.
This is a game changer because it brings an open source cloud deployment and management system into the mainstream. I’ve been opposed to using Amazon’s EC2 or any other proprietary cloud system because of the vendor lock-in. To deploy effectively in the cloud you need to invest a lot of time building your system for the cloud. And if it’s proprietary, you are removing yourself from the hosting free market. A year down the road your vendor can charge you whatever they like because the cost to leave is so much greater. And god help you if they go under.
It’s also been much more cost-effective to buy your own hardware and amortize it over 3 or 4 years if your cash flow can support doing that – rather than leasing. As of today you can both own the physical machines and run your own robust private cloud on them with a very well supported open source Linux distro.
The UEC is also compatible with Amazon’s EC2 API, which lets you (in theory) move between EC2 and your private cloud with ease.
The advantages of building a cloud are clear. Assuming the performance cost of virtualization is low (it is), it lets you use your hardware more effectively. For example, your mail server, source repository and proxy server can all run on their own virtual machines, sharing the hardware, and you can track each machine’s performance separately and move one off to a different physical box if it starts hogging resources.
But what I love most about virtualization is the impact it has on Dev and QA. You can duplicate your entire production web cluster on two or three physical machines for Dev and do it again for QA.
To get started with Ubuntu UEC, read this overview, then this rather managerial guide to deploying and then this more practical guide to actually getting the job done.
-
No-latency SSH sessions on a 5GHz WiFi router with a 250mW radio
Disclaimer: You may brick your fancy new Linksys router by following the advice in this blog entry. A large number of folks have installed this software successfully, including me. But consider yourself warned in case you’re the unlucky one.
I use SSH a lot. My wife and nephew love streaming video like Hulu instead of regular cable. For the last few years there’s been a cold war simmering. I’m working late, they start streaming, and my SSH session to my server gets higher latency. So every time I hit a keystroke it takes 0.3 seconds to appear instead of 0.01. Try hitting 10,000 keystrokes in an evening and you’ll begin to understand why this sucks.
I’ve tried screwing with the QoS settings on my Linksys routers but it doesn’t help at all. I ran across a bunch of articles explaining how it’s useless to try to use QoS because it only modifies your outgoing bandwidth and can’t change the speed at which routers on the Internet send you traffic.
Well that’s all bullshit. Here’s how you fix it:
Upgrade the firmware on your router to DD-WRT. Here’s the list of supported devices. I have a Linksys WRT320N router. It’s a newer router that has both 2.4GHz and 5GHz radios. Many routers that look new and claim to support “N” actually just have 2.4GHz radios in them.
The DD-WRT firmware for the WRT320N router is very very new, but it works perfectly. Here’s how you upgrade:
Read Eko’s (DD-WRT author) announcement about WRT320N support here. The standard DD-WRT installation instructions are here so you may want to reference them too. Here’s how I upgraded without bricking my router:
- Download the ‘mini’ DD-WRT here.
- Open all the links in this blog entry in other browser windows in case you need to refer to them for troubleshooting. You’re about to lose your Internet access.
- Visit your router’s web interface and take note of all settings – not just your wireless SSID and keys but also the current MAC address on your Internet interface. I had to clone this once DD-WRT started up because my ISP hard-codes MAC addresses on their side and filters out any unauthorized MACs. I’d suggest printing the settings directly from your web browser.
- Use the web interface (visit http://192.168.1.1/ usually) and reset your router to factory default settings.
- You’ll need to log into your router again. For Linksys the default login is a blank username and the password ‘admin’.
- Use Internet Explorer to upgrade the firmware using your router’s web interface. Apparently Firefox has a bug on some Linksys routers so don’t use that.
- Wait for the router to reboot.
- Hit http://192.168.1.1/ with your web browser and change your router’s default username and password.
- Go to the Clone MAC address option and set it to your old Internet MAC address.
- Set up your wireless with the old SSID and key.
- Confirm you can connect to the router via WiFi and have Internet Access.
Now the fun part:
- Go to Wireless, Advanced Settings, and scroll down to TX Power. You can boost your transmit signal all the way to 251mW. Boosting it by about 70mW should be safe according to the help. I’ve actually left mine as is to extend my radio’s life, but it’s nice to know I have the option.
- Go to the NAT/QoS menu and hit the QoS tab on the right. Enable QoS. Add your machine’s MAC address. Set the priority to Premium (not Exempt because that does nothing). Hit Apply Settings. Every other machine now has a default priority of Standard and your traffic will be expedited.
- For Linux geeks: Click the Services tab and enable SSHd. Then ssh to your router’s IP, usually 192.168.1.1. Log in as root with whatever password you chose for your router. I actually changed my username to ‘admin’, but the username seems to stay root for ssh.
You can use a lot of standard Linux commands over SSH – it’s BusyBox Linux. Type:
cat /proc/net/ip_conntrack | grep <YourIPAddress>
Close to the end of each line you’ll see a mark= field. For your IP address it should have mark=10 for all your connections. Everyone else should be mark=0. The values mean:
- Exempt: 100
- Premium: 10
- Express: 20
- Standard: 30
- Bulk: 40
- (no QoS matched): 0
Remember, if no QoS rule is matched, the traffic gets Standard priority as long as QoS is enabled on the router. So you are Premium and everyone else is Standard. Much more detail is available on the DD-WRT QoS Wiki here.
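If you’d rather summarize those marks than eyeball them, here’s a rough sketch of tallying the mark value per source IP from another Linux box over ssh – just an illustration (the router’s BusyBox shell doesn’t ship with Perl, and the address and login here are hypothetical; adjust for your network):

use strict;
use warnings;

# Pull the connection-tracking table from the router over ssh
# (hypothetical address and login; adjust for your network).
open my $ct, '-|', 'ssh', 'root@192.168.1.1', 'cat /proc/net/ip_conntrack'
    or die "ssh failed: $!";

my %marks;
while (<$ct>) {
    my ($src)  = /src=(\S+)/;     # first src= is the originating IP
    my ($mark) = /mark=(\d+)/;    # QoS mark near the end of the line
    $marks{$src}{$mark}++ if defined $src && defined $mark;
}
close $ct;

# Print a per-IP summary of how many connections carry each mark.
for my $ip (sort keys %marks) {
    print "$ip: ",
        join(', ', map { "mark=$_ x$marks{$ip}{$_}" } sort keys %{ $marks{$ip} }),
        "\n";
}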
The Linux distro is quite amazing. There are over 1,000 packages available for DD-WRT, including Perl, PHP and MySQL, in case you’d like to write a blogging platform for your Linksys router. To use this you’re going to have to upgrade your firmware to the ‘big’ version of the WRT320N binary. Don’t upgrade directly from the Linksys firmware to the ‘big’ DD-WRT – Eko recommends upgrading to ‘mini’ first and then upgrading to ‘big’. Also note I haven’t tried running ‘big’ on the WRT320N because I’m quite happy with QoS and a more powerful radio.
There are detailed instructions on the Wiki on how to get Optware up and running once you’re running ‘big’. It includes info on how to install a throttling HTTP server, Samba2 for Windows networking, and a torrent client.
If you’d like to run your WRT320N at 5GHz, the DD-WRT forums suggest switching the wireless network mode to ‘NA-only’, but that didn’t work for my Snow Leopard OS X machine. When I was running the Linksys firmware I had to use 802.11a to make 5GHz work for my MacBook, and likewise on this firmware I run A-only. You can confirm you’re at 5GHz by holding down the ‘option’ key on your MacBook and clicking the WiFi icon at the top right.
I prefer 5GHz because the spectrum is quieter, but 5GHz doesn’t travel as far through the air as 2.4GHz does. So boosting your TX power will give you the same range with a clear spectrum while all your neighbors fight over the 2.4GHz band.
-
What the Web Sockets Protocol means for web startups
Ian Hickson’s latest draft of the Web Sockets Protocol (WSP) is up for your reading pleasure. It got me thinking about the tangible benefits the protocol is going to offer over the long polling that my company and others have been using for our real-time products.
The protocol works as follows:
Your browser accesses a web page and loads, let’s say, a JavaScript application. Then the JavaScript application decides it needs a constant flow of data to and from its web server. So it sends an HTTP request that looks like this:
GET /demo HTTP/1.1
Upgrade: WebSocket
Connection: Upgrade
Host: example.com
Origin: http://example.com
WebSocket-Protocol: sample
The server responds with an HTTP response that looks like this:
HTTP/1.1 101 Web Socket Protocol Handshake
Upgrade: WebSocket
Connection: Upgrade
WebSocket-Origin: http://example.com
WebSocket-Location: ws://example.com/demo
WebSocket-Protocol: sample
Now data can flow between the browser and server without having to send HTTP headers until the connection is broken down again.
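Just to make it concrete, here’s a rough sketch of doing the client half of that handshake by hand over a raw TCP socket – purely an illustration (it assumes something at example.com port 80 actually speaks this draft), but it shows that once the 101 response comes back you’re simply holding an ordinary TCP connection:

use strict;
use warnings;
use IO::Socket::INET;

# Open a plain TCP connection to the (hypothetical) server.
my $sock = IO::Socket::INET->new(
    PeerAddr => 'example.com',
    PeerPort => 80,
    Proto    => 'tcp',
) or die "connect failed: $!";

# Send the client half of the handshake, exactly as shown above.
print $sock join("\r\n",
    'GET /demo HTTP/1.1',
    'Upgrade: WebSocket',
    'Connection: Upgrade',
    'Host: example.com',
    'Origin: http://example.com',
    'WebSocket-Protocol: sample',
    '', '');

# Read the server's 101 handshake headers. After the blank line,
# frames flow in both directions on this same socket with no
# further HTTP headers.
while (my $line = <$sock>) {
    last if $line eq "\r\n";
    print $line;
}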
Remember that at this point, the connection has been established on top of a standard TCP connection. The TCP protocol provides a reliable delivery mechanism so the WSP doesn’t have to worry about that. It can just send or receive data and rest assured the very best attempt will be made to deliver it – and if delivery fails it means the connection has broken and WSP will be notified accordingly. WSP is not limited to any frame size because TCP takes care of that by negotiating an MSS (maximum segment size) when it establishes the connection. WSP is just riding on top of TCP and can shove as much data in each frame as it likes and TCP will take care of breaking that up into packets that will fit on the network.
The WSP sends data using very lightweight frames. There are two ways the frames can be structured. The first frame type starts with a 0x00 byte (a zero byte), followed by the UTF-8 text payload, and ends with a 0xFF byte.
The second WSP frame type starts with a byte in the range 0x80 to 0xFF, meaning the byte has the high bit (the left-most binary bit) set to 1. Then comes a series of bytes that all have the high bit set, whose seven right-most bits encode the data length, followed by a final length byte that doesn’t have the high bit set. The data of the specified length follows. This second WSP frame type is presumably for binary data and is designed to provide some future-proofing.
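Based purely on that description, here’s a small sketch of what building the two frame types might look like – an illustration rather than a tested implementation of the draft:

use strict;
use warnings;

# Frame type 1: UTF-8 text between a 0x00 sentinel and a 0xFF sentinel.
# Assumes $utf8_text is already UTF-8 encoded bytes (which never contain 0xFF).
sub text_frame {
    my ($utf8_text) = @_;
    return "\x00" . $utf8_text . "\xff";
}

# Frame type 2: a type byte with the high bit set (0x80-0xFF), then the
# payload length as base-128 digits, seven bits per byte, most significant
# first; every length byte except the last has its high bit set.
sub length_prefixed_frame {
    my ($payload) = @_;
    my $len = length $payload;
    my @digits;
    do {
        unshift @digits, $len & 0x7f;
        $len >>= 7;
    } while ($len > 0);
    $digits[$_] |= 0x80 for 0 .. $#digits - 1;   # continuation bit
    return "\x80" . join('', map { chr } @digits) . $payload;
}

# Example: a 200-byte payload gets a two-byte length prefix (0x81, 0x48).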
If you’re still with me, here’s what this all means. Let’s say you have a web application that has a real-time component. Perhaps it’s a chat application, perhaps it’s Google Wave, perhaps it’s something like my Feedjit Live that is hopefully showing a lot of visitors arriving here in real time. Let’s say you have 100,000 people using your application concurrently.
The application has been built to be as efficient as possible using the current HTTP specification. So your browser connects and the server holds the connection open and doesn’t send the response until there is data available. That’s called long polling, and it avoids the old situation of your browser reconnecting every few seconds and getting told there’s no data yet, with a full load of HTTP headers moving back and forth each time.
Let’s assume that every 10 seconds the server or client has some new data to send to the other. Each time, a full set of client and server headers is exchanged. They look like this:
GET / HTTP/1.1
User-Agent: ...some long user agent string...
Host: markmaunder.com
Accept: */*

HTTP/1.1 200 OK
Date: Sun, 25 Oct 2009 17:32:19 GMT
Server: Apache
X-Powered-By: PHP/5.2.3
X-Pingback: https://markmaunder.com/xmlrpc.php
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
That’s 373 bytes of data. Some simple math tells us that 100,000 people generating 373 bytes of data every 10 seconds gives us a network throughput of 29,840,000 bits per second or roughly 30 Megabits per second.
That’s 30 Mbps just for HTTP headers.
With the WSP every frame only has 2 bytes of packaging. 100,000 people X 2 bytes = 200,000 bytes per 10 seconds or 160 Kilobits per second.
So WSP takes 30 Mbps down to 160 Kbps for 100,000 concurrent users of your application. And that’s what Hickson and the WSP team are trying to do for us.
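If you want to sanity-check those numbers, the arithmetic is trivial to script – a throwaway sketch using the byte counts from the examples above:

use strict;
use warnings;

my $users    = 100_000;   # concurrent users
my $interval = 10;        # seconds between data exchanges

# 373 bytes of HTTP headers per exchange vs. 2 bytes of WSP framing.
for my $case ([ 'HTTP long polling' => 373 ], [ 'WSP frames' => 2 ]) {
    my ($label, $overhead) = @$case;
    my $bps = $users * $overhead * 8 / $interval;
    printf "%-18s %12d bits/sec\n", $label, $bps;
}
# Prints roughly 29,840,000 bits/sec for the headers and 160,000 for WSP.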
Google would be the single biggest winner if the WSP became standard in browsers and browser APIs like JavaScript’s. Google’s goal is to turn the browser into an operating system and give their applications the ability to run on any machine that has a browser. Operating systems have two advantages over browsers: they have direct access to the network and they have local file system storage. If you solve the network problem you also solve the storage problem, because you can store files over the network.
Hickson is also working on the HTML 5 specification for Google, but the current date the recommendation is expected to be ratified is 2022. WSP is also going to take time to be ratified and then incorporated into JavaScript (and other) APIs. But it is so strategically important for Google that I expect to see it in Chrome and in Google’s proprietary web servers in the near future.
-
SSL Network problem follow-up
It’s now exactly a week since I blogged about my SSL issues over our network. To summarize: when fetching documents on the web via HTTPS from my servers, the connection would just hang halfway through until it timed out. I had confirmed that it wasn’t the infamous PMTU ICMP issue that is common when fetching documents via HTTPS from a misconfigured web server. It was being caused by inbound HTTPS data packets getting dropped, and when the retransmit occurred, the retransmitted packets would also get dropped. Exactly the same packet would get dropped every time.
Last night we solved it. We’ve been working with Cisco for the last week and have been through several of their engineers with no progress. I was seeing packets arriving on my provider’s switch (we have a great working relationship and share a lot of data like sniffer logs) – but the packet was not arriving on my switch. We had isolated it to the Layer 2 infrastructure.
Last night we decided to throw a Hail Mary, and my provider changed the switch module my two HSRP uplinks were connected to from one 24-port module to another. And holy crap, it fixed the problem. We then reconfigured routes and everything else so that the only thing that had changed was the 24-port module. And it was still fixed.
This is the strangest thing I’ve seen, and the Cisco guys we were working with echoed that. It’s extremely rare for Layer 2 infrastructure, which is fairly brain-dead, to cause errors specific to packets that share a higher-level protocol like HTTPS. These devices examine the Layer 2 header with the MAC address and either forward the entire packet or not. The one thing we did notice is that the packets getting dropped were the last data packet in a PDU (protocol data unit) and were therefore slightly shorter, by about 100 bytes, than the other packets in the PDU that were stuffed full of data.
But we’ve exorcised the network ghosts and data is flowing smoothly again.
-
How to integrate PHP, Perl and other languages on Apache
I have this module that a great group of guys in Malaysia have put together. But their language of choice is PHP and mine is Perl. I need to modify it slightly to integrate it. For example, I need to add my own session code so that their code knows if my user is logged in or not and who they are.
I started writing PHP but quickly started duplicating code I’d already written in Perl. Fetch the session from the database, de-serialize the session data, that sort of thing. I also ran into issues trying to recreate my Perl decryption routines in PHP. [I use non-mainstream ciphers]
Then I found ways to run Perl inside PHP and vice-versa. But I quickly realized that’s a very bad idea. Not only are you creating a new Perl or PHP interpreter for every request, but you’re still duplicating code, and you’re using a lot more memory to run interpreters in addition to what mod_php and mod_perl already run.
Eventually I settled on creating a very lightweight wrapper function in PHP called doPerl. It looks like this:
$associativeArrayResult = doPerl(functionName, associativeArrayWithParameters);
function doPerl($func, $arrayData) {
    $ch = curl_init();
    $ip = '127.0.0.1';

    // Serialize the parameters as JSON and include the shared-secret password.
    $postData = array(
        'json' => json_encode($arrayData),
        'auth' => 'myPassword',
    );

    // POST to the local mod_perl web service; the function name is part of the URL.
    curl_setopt($ch, CURLOPT_POST, TRUE);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $postData);
    curl_setopt($ch, CURLOPT_URL, "http://" . $ip . "/webService/" . $func . "/");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    $output = curl_exec($ch);
    curl_close($ch);

    // Deserialize the JSON response into an associative array.
    $data = json_decode($output, TRUE);
    return $data;
}
On the other side I have a very fast mod_perl handler that only allows connections from 127.0.0.1 (the local machine). I deserialise the incoming JSON data using Perl’s JSON::from_json(). I use eval() to execute the function name that is, as you can see above, part of the URL. I reserialize the result using Perl’s JSON::to_json($result) and send it back to the PHP app as the HTML body.
This is very very fast because all PHP and Perl code that executes is already in memory under mod_perl or mod_php. The only overhead is the connection creation, sending of packet data across the network connection and connection breakdown. Some of this is handled by your server’s hardware. [And of course the serialization/deserialization of the JSON data on both ends.]
The connection creation is a three-way handshake, but because there’s no latency on the link it’s almost instantaneous. Transferring the data is faster than over a real network because the MTU on your lo interface (the 127.0.0.1 interface) is 16436 bytes instead of the normal 1500 bytes. That means the entire request or response fits inside a single packet. And connection termination is again just two packets from each side, and because of the zero latency it’s super fast.
I use JSON because it’s less bulky than XML and on average it’s faster to parse across all languages. Both PHP and Perl’s JSON routines are ridiculously fast.
My final implementation on the PHP side is a set of wrapper classes that use the doPerl() function to do their work. Inside the classes I use caching liberally, either in instance variables, or if the data needs to persist across requests I use PHP’s excellent APC cache to store the data in shared memory.
Update: On request I’ve posted the Perl web service handler for this here. The Perl code lets you send parameters via POST either as a parameter called ‘json’ containing escaped JSON, which gets deserialized and passed to your function, or as regular POST-style name/value pairs, which are passed to your function as a hashref. I’ve included one test function called hello() in the code. Please note this web service module lets you execute arbitrary Perl code in the module’s namespace and doesn’t filter out double colons, so really you can just do whatever the hell you want. That’s why I’ve included two very simple security mechanisms that I strongly recommend you don’t remove: it only allows requests from localhost, and you must include an ‘auth’ POST parameter containing a password (currently set to ‘password’). You’re going to have to implement the MM::Util::getIP() routine to make this work, and it’s really just a one-liner:
sub getIP {
    my $r = shift @_;
    return $r->headers_in->{'X-Forwarded-For'}
        ? $r->headers_in->{'X-Forwarded-For'}
        : $r->connection->get_remote_host();
}
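The posted handler isn’t reproduced here, but to give a feel for its shape, here’s a minimal sketch along the same lines – assumptions: Apache2/mod_perl2, CGI.pm for POST parsing, the MM::Util::getIP() routine above, and the ‘myPassword’ secret from the doPerl() example; the real code differs in its details:

package MM::WebService;
# Rough sketch of the kind of mod_perl web service handler described
# above - not the author's posted code.
use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK FORBIDDEN);
use CGI ();
use JSON ();
use MM::Util ();   # provides the getIP() routine shown above

sub handler {
    my $r = shift;

    # Security check 1: only the local machine may call the web service.
    return Apache2::Const::FORBIDDEN
        unless MM::Util::getIP($r) eq '127.0.0.1';

    my $q = CGI->new($r);

    # Security check 2: shared-secret 'auth' POST parameter
    # (must match whatever doPerl() sends).
    return Apache2::Const::FORBIDDEN
        unless ($q->param('auth') || '') eq 'myPassword';

    # The function name is the last path segment of the URL, e.g.
    # /webService/hello/ calls hello(). This sketch restricts it to
    # word characters, which is stricter than the posted version.
    my ($func) = $r->uri =~ m{/webService/(\w+)/?$};
    return Apache2::Const::FORBIDDEN unless $func;

    # Deserialize the 'json' POST parameter, call the named function in
    # this namespace, and serialize the result back as the response body.
    my $args   = JSON::from_json($q->param('json') || '{}');
    my $result = eval "${func}(\$args)";

    $r->content_type('application/json');
    $r->print(JSON::to_json($result));
    return Apache2::Const::OK;
}

# Example function callable as /webService/hello/
sub hello {
    my ($args) = @_;
    return { greeting => "hello, " . ($args->{name} || 'world') };
}

1;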
-
Routers treat HTTPS and HTTP traffic differently
Well, the title says it all. Internet routers live at Layer 3 [the Network Layer] of the OSI model. HTTP and HTTPS live at Layer 7 (the Application Layer), although some may argue HTTPS lives at Layer 6.
So how is it that Layer 3 devices like routers treat HTTPS traffic differently?
Because HTTPS servers set the DF or Do Not Fragment IP flag on packets and regular HTTP servers do not.
This matters because HTTP and HTTPS usually transfer a lot of data. That means that the packets are usually quite large and are often the maximum allowed size.
So if a server sends out a very big HTTP packet and it goes through a route on the network that does not allow packets that size, then the router in question simply breaks the packet up.
But if a server sends out a big HTTPS packet and it hits a route that doesn’t allow packets that size, the routers on that route can’t break the packet up. So they drop the packet and send back an ICMP message telling the machine that sent the big packet to adjust its MTU (maximum transmission unit) and resend the packet. This is called Path MTU Discovery.
This can create some interesting problems that don’t exist with plain HTTP. For example, if your ops team has gotten a little overzealous with security and decided to filter out all ICMP traffic, your web server won’t receive any of those ICMP messages I’ve described above telling it to break up its packets and resend them. So the large packets that are typically sent partway through a secure HTTPS connection will just be dropped, and visitors to your website whose network paths need packets broken up into smaller pieces will see half-loaded pages from the secure part of your site.
If you have the problem I’ve described above, there are two solutions: If you’re a webmaster, make sure your web server can receive ICMP messages [you need to allow ICMP type 3, code 4: “Fragmentation needed and DF bit set”]. If you’re a web surfer (client) and are trying to access a secure site that has ICMP disabled, adjust your network card’s MTU to be smaller than the default (usually 1500 for Ethernet).
But the bottom line is that if everything else is working fine and you are having a problem sending or receiving HTTPS traffic, know that the big difference with HTTPS traffic over regular web traffic is that the packets can’t be broken up.
-
China's influence in Africa
As an African American, or rather, an American African (I’m white and African born), I hear a constant flow of stories about China’s increasing influence in Africa. They’ve clearly taken a long term view on Africa, perhaps motivated by their projected energy and natural resources needs. If you subscribe to the US view that free trade is good, then this is a good thing. [You can’t have it both ways folks!]
Whether or not you think it’s good for the continent, the data is surprising:
- The China National Petroleum Corporation (CNPC) is the single largest shareholder (40 percent) in the Greater Nile Petroleum Operating Company, which controls Sudan’s oil fields and has invested $3 billion in refinery and pipeline construction in Sudan since 1999. Sudan now supplies 7% of China’s total oil.
- In March 2004, Beijing extended a $2 billion loan to Angola in exchange for a contract to supply 10,000 barrels of crude oil per day.
- In July 2005, PetroChina concluded an $800 million deal with the Nigerian National Petroleum Corporation to purchase 30,000 barrels of oil per day for one year.
- In January 2006, China National Offshore Oil Corporation (CNOOC), after failing to acquire American-owned Unocal, purchased a 45 percent stake in a Nigerian offshore oil and gas field for $2.27 billion and promised to invest an additional $2.25 billion in field development.
- In April 2003, approximately 175 People’s Liberation Army (PLA) soldiers and a 42-man medical team were deployed to the Democratic Republic of Congo on a peacekeeping mission.
- In December 2003, 550 peacekeeping troops, equipped with nearly 200 military vehicles and water-supply trucks, were sent to Liberia.
- China has also deployed about 4,000 PLA troops to southern Sudan to guard an oil pipeline and reaffirmed its intention to strengthen military collaboration and exchanges with Ethiopia, Liberia, Nigeria, and Sudan.