Category: Linux

  • Sunshine with clouds – Ubuntu's game-changing release

    I’m going to use the term “Cloud” in this post, which I despise for its nebulosity. The press has bandied the term around so much that it means everything from the Net as a whole to Google Apps to virtualization. My “cloud” means a cluster of virtual machines.

    I’ve been a huge fan of Mark Shuttleworth for a long time. Besides the fact that his parents have great taste in first names, he’s taken his success with Thawte and ploughed it right back into the community whose shoulders he stood on when he was getting started. And he’s a fellow South African. Ubuntu is doing for online business what the Clipper ship builders did for the tea trade in the 1800s. [Incidentally, the Clipper ship is still the fastest commercial sailing vessel ever built.]

    Today the huge distributed team that is Ubuntu released Karmic, or Ubuntu 9.10. Karmic Server Edition includes the Ubuntu Enterprise Cloud (UEC), which lets you take a group of physical machines, turn one of them into a controller, run multiple virtual machines on the others, and manage them all from a single console.

    The reason this is a game changer is that it brings an open source cloud deployment and management system into the mainstream. I’ve been opposed to using Amazon’s EC2 or any other proprietary cloud system because of the vendor lock-in. To deploy effectively in the cloud you need to invest a lot of time building your system for it. And if it’s proprietary, you are removing yourself from the hosting free market. A year down the road your vendor can charge you whatever they like, because the cost of leaving is so much greater. And god help you if they go under.

    It’s also been much more cost-effective to buy your own hardware and amortize it over 3 or 4 years if your cash flow can support doing that – rather than leasing. As of today you can both own the physical machines and run your own robust private cloud on them with a very well supported open source Linux distro.

    The UEC is also compatible with Amazon EC2 which lets you (in theory) move between EC2 and your private cloud with ease.

    The advantages of building a cloud are clear. Assuming the performance cost of virtualization is low (it is), it lets you use your hardware more effectively. For example, your mail server, source repository, and proxy server can all run on their own virtual machines, sharing the hardware, and you can track each machine’s performance separately and move one off to a different physical box if it starts hogging resources.

    But what I love most about virtualization is the impact it has on Dev and QA. You can duplicate your entire production web cluster on two or three physical machines for Dev and do it again for QA.

    To get started with Ubuntu UEC, read this overview, then this rather managerial guide to deploying and then this more practical guide to actually getting the job done.

  • Routers treat HTTPS and HTTP traffic differently

    OSI Network Model

    Well the title says it all. Internet routers live at Layer 3 [the Network Layer] of the OSI model which I’ve included to the left. HTTP and HTTPS live at Layer 7 (Application layer) of the OSI model, although some may argue HTTPS lives at Layer 6.

    So how is it that Layer 3 devices like routers treat HTTPS traffic differently?

    Because HTTPS servers set the DF or Do Not Fragment IP flag on packets and regular HTTP servers do not.

    This matters because HTTP and HTTPS usually transfer a lot of data. That means that the packets are usually quite large and are often the maximum allowed size.

    So if a server sends out a very big HTTP packet and it goes through a route on the network that does not allow packets that size, then the router in question simply breaks the packet up.

    But if a server sends out a big HTTPS packet and it hits a route that doesn’t allow packets that size, the routers on that route can’t break the packet up. So they drop the packet and send back an ICMP message telling the machine that sent it to adjust its MTU (maximum transmission unit) and resend the packet. This is called Path MTU Discovery.

    This can create some interesting problems that don’t exist with plain HTTP. For example, if your ops team has gotten a little overzealous with security and decided to filter out all ICMP traffic, your web server won’t receive any of those ICMP messages I’ve described above telling it to break up its packets and resend them. So large packets sent partway through a secure HTTPS connection will simply be dropped, and visitors whose network paths require smaller packets will see half-loaded pages from the secure part of your site.

    If you have the problem I’ve described above, there are two solutions: If you’re a webmaster, make sure your web server can receive ICMP messages [you need to allow ICMP type 3, code 4: “Fragmentation needed and DF bit set”]. If you’re a web surfer (client) and are trying to access a secure site that has ICMP disabled, adjust your network card’s MTU to be smaller than the default (usually 1500 for Ethernet).
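    Both fixes boil down to roughly one command each. Here’s a sketch – the interface name and MTU value are example assumptions, and the iptables rule assumes a default-deny INPUT policy somewhere in your ruleset:

    ```shell
    # Server side: accept "fragmentation needed" ICMP (type 3, code 4) so
    # Path MTU Discovery replies can reach the web server.
    iptables -A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT

    # Client side: drop the interface MTU below the Ethernet default of 1500.
    # "eth0" and 1400 are example values; pick a size that fits your path.
    ifconfig eth0 mtu 1400
    ```

    Both commands need root, and the iptables rule belongs wherever the rest of your firewall configuration lives so it survives a reload.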

    But the bottom line is that if everything else is working fine and you are having a problem sending or receiving HTTPS traffic, know that the big difference with HTTPS traffic over regular web traffic is that the packets can’t be broken up.
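    If you suspect this is what’s happening, you can probe the path by hand with ping, which can set the DF bit itself (Linux iputils syntax; the host and sizes are examples):

    ```shell
    # -M do sets the DF bit; -s sets the ICMP payload size.
    # 1472 bytes of payload + 8 bytes of ICMP header + 20 bytes of IP header
    # = a 1500-byte packet. If this fails while smaller payloads succeed,
    # something on the path is dropping the "fragmentation needed" replies.
    ping -c 3 -M do -s 1472 example.com
    ```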

  • SSL Timeouts and layer 3 infrastructure

    I’ve spent the last 5 days agonizing over a very hard problem on my network. Using curl, LWP::UserAgent, openssl, wget or any other SSL client, I’d see connections either timeout or hang halfway through the transfer. Everything else works fine including secure protocols like SSH and TLS. In fact inbound SSL connections work great too. It’s just when I connect to an external SSL host that it hiccups.

    If you remember your OSI model, SSL is well above layer 3 (IP addresses and routers) and layer 2 (LAN traffic routed via MAC addresses). So the last place I planned to look was the network infrastructure.

    I eliminated specific clients by trying others, and I eliminated the OS by spinning up virtual machines running other versions of Linux. I eliminated my physical hardware by reproducing it on a non-Dell server and having one of the ops guys repro it on his OS X MacBook.

    And just to prove it was the network, which is all that was left, I set up a VPN from one of my machines that tunnelled all traffic over the VPN to a machine on an external network that acted as the router, thereby encapsulating the layer 2 and 3 traffic in a layer 4 and 5 VPN. And the problem went away. So I knew it was the network.

    A few minutes ago my colo provider took down my local router and I gracefully failed over to the redundant router, and lo and behold, the problem has gone away.

    I still don’t know what it is, but what I do know is that a big chunk of layer 3 infrastructure has been changed and it’s fixed a layer 5 problem. What’s weird is that TCP connections (which is what SSL rides on top of) have delivery confirmation. So if the problem was packet loss, TCP would just request the packet again. So it’s something else and something that only affects SSL – and only connections bound from my servers out to the Internet.

    The reason I’m posting this is because during the hours I spent Googling this issue this week (and finding nothing) I saw a lot of complaints about SSL timeouts and no solutions. So if you’re getting timeouts like this, check your underlying infrastructure and you might just be surprised. To verify that it’s a network problem, set up a VPN using PPTP. Set up NAT on the external Linux machine that is your VPN server. Then disable the default gateway on the machine having the issue (the VPN client) and verify that all traffic is routing via your VPN. Then try to reproduce the SSL timeout; if it doesn’t occur, it’s probably your layer 2 or 3 infrastructure.
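    As a rough sketch, routing everything through a PPTP tunnel looks something like this. It assumes the pptp client package is installed, a peer named “myvpn” is already configured in /etc/ppp/peers, and the tunnel comes up as ppp0 – all of those names are assumptions for illustration:

    ```shell
    pon myvpn                   # bring up the configured PPTP connection
    route del default           # remove the normal default gateway
    route add default dev ppp0  # send all traffic via the tunnel
    # Now retry the failing SSL transfer. If it succeeds over the tunnel,
    # suspect your layer 2 or 3 infrastructure.
    ```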

  • How to mirror someone else's web server with iptables

    It took me a while to find this – I needed it for testing purposes, nothing malicious. If you’d like your web server somewhere on the web to pretend to be any other web server, even a secure one, you can do the following. x.x.x.x is your own server and y.y.y.y is the IP of the server you’re trying to mirror. I’m also assuming you only have one network card in the machine and it’s called eth0. The following will mirror a secure web server. If you’d like to mirror a regular web server, replace 443 with 80.

    iptables -t nat -A PREROUTING -p tcp -i eth0 -d x.x.x.x --dport 443 -j DNAT --to y.y.y.y
    iptables -t nat -A POSTROUTING -p tcp -o eth0 -d y.y.y.y --dport 443 -j MASQUERADE

    If this doesn’t work you probably have to enable packet forwarding like this:

    echo 1 > /proc/sys/net/ipv4/ip_forward
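    That setting doesn’t survive a reboot. To make it persistent you can add it to /etc/sysctl.conf instead (a sketch, assuming the stock sysctl setup):

    ```shell
    # Persist IPv4 forwarding across reboots, then reload sysctl settings.
    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    sysctl -p
    ```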
  • Super fast & easy virtual server setup on Ubuntu (Jaunty)

    While I upgrade to Karmic, here’s a quick setup to get a virtual ubuntu server running on a real ubuntu server:

    As root:

    ubuntu-vm-builder kvm jaunty --hostname dev2 --addpkg openssh-server --addpkg vim -d /usr/local/vms/dev2 --mem 256 --libvirt qemu:///system

    This will create a Jaunty Jackalope Ubuntu virtual server using the KVM hypervisor. The hostname will be dev2. It will add the openssh-server package as well as vim (note that --addpkg takes one package per flag). It will put it in the /usr/local/vms/dev2 directory. It’ll allocate 256 megs of memory for the machine. The --libvirt option automatically registers your new machine with the local qemu:///system libvirt instance.

    Once you’re done you can run:

    virsh

    In the virsh shell type:

    list --all

    You should see your new machine listed.

    To set up networking type ‘edit dev2’.

    Change (or add) the following:

    <interface type='bridge'>
      <source bridge='br0'/>
      <target dev='vnet0'/>
    </interface>

    Leave out anything about a MAC address because virsh will automatically add that for you.

    Now the hard part. You want to create a linux bridge.

    If you have only one network interface on the box you’re going to need physical access. I’m going to assume that’s the case. [If you have a second, just leave it up and make sure you’re ssh’ing in via that port]


    ifconfig eth0 down
    ifconfig eth0 0.0.0.0
    brctl addbr br0
    brctl addif br0 eth0
    ifconfig br0 up

    At this point your bridge is up and your virtual machine can use it, but the host OS doesn’t have an IP of its own. So:

    ifconfig br0 192.168.123.123 netmask 255.255.255.0

    Now add a default gateway to your host:

    route add default gw 192.168.123.1
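    If you want the bridge to come back after a reboot, the same settings can live in the host’s /etc/network/interfaces. Here’s a sketch using the example addresses above (the bridge_ports stanza requires the bridge-utils package):

    ```
    auto br0
    iface br0 inet static
        address 192.168.123.123
        netmask 255.255.255.0
        gateway 192.168.123.1
        bridge_ports eth0
    ```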

    Now comes another tricky part. If you’re running all this on a machine with a GUI, life is easy. I’m going to assume that you, like me, run Ubuntu Server. You need to launch your new virtual machine and connect to it using VNC. Let’s say you have a MacBook and want to run the VNC client on that. Here’s what you do:

    On the MacBook launch a terminal. Go to root with: sudo su -

    Run:

    ssh -f -N -L 5900:127.0.0.1:88 root@your_host_machines_ip

    On the host machine run:

    ssh -f -N -L 88:localhost:5900 root@your_host_machines_ip
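    Incidentally, the two commands above chain the forward through port 88. If the guest’s console listens on the host’s loopback at port 5900 (the usual default for the first VNC display), a single tunnel from the MacBook is a simpler alternative:

    ```shell
    # Run on the MacBook: forward local port 5900 straight to the host's 5900.
    ssh -f -N -L 5900:localhost:5900 root@your_host_machines_ip
    ```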

    Now go and download Chicken of the VNC for your Mac.

    Now on the host operating system run:

    virsh start dev2

    Then launch Chicken of the VNC and just connect to localhost. Bang, you should have a console!

    Now edit your network settings:

    vim /etc/network/interfaces

    Just configure your network as per normal as if the machine was on your physical network. Something like:

    auto eth0
    iface eth0 inet static
    address 10.1.0.13
    netmask 255.255.255.0
    gateway 10.1.0.1

    Then do

    /etc/init.d/networking restart

    And … unless I’ve forgotten a step, which is quite likely … you should be up and running. Make sure the SSH server is running on your new server and try to ssh to your virtual server’s IP from the host machine.

    If you can’t ping the default gateway make sure your firewall software (if you have any) isn’t interfering. If you run shorewall you want to change the following:

    Edit the /etc/shorewall/interfaces file and change ‘eth0’ to ‘br0’

    Also add routeback,bridge to br0 so it looks something like this:

    net     br0  detect  routeback,bridge,tcpflags,norfc1918,routefilter,nosmurfs,logmartians

    Restart shorewall and give it a try.

    Now if you want to upgrade your new virtual Jaunty machine to Karmic, simply do a:

    apt-get install update-manager-core
    do-release-upgrade -d

    I’ll try to include the settings in the host /etc/network/interfaces for br0 soon.

    If you’re still stuck, here are some great links:

    Introduction to Linux bridging.

    Info on libvirt.

    Setting up a bridge.

    ubuntu-vm-builder short guide.