Category: Ubuntu

  • Sunshine with clouds – Ubuntu's game-changing release

    I’m going to use the term “cloud” in this post, a term I despise for its nebulosity. The press has bandied it around so much that it means everything from the Net as a whole to Google Apps to virtualization. My “cloud” means a cluster of virtual machines.

    I’ve been a huge fan of Mark Shuttleworth for a long time. Besides the fact that his parents have great taste in first names, he’s taken his success with Thawte and ploughed it right back into the community whose shoulders he stood on when he was getting started. And he’s a fellow South African. Ubuntu is doing for online business what the Clipper ship builders did for the tea trade in the 1800s. [Incidentally, the Clipper ship is still the fastest commercial sailing vessel ever built.]

    Today the huge distributed team that is Ubuntu released Karmic, a.k.a. Ubuntu 9.10. Karmic Server Edition includes the Ubuntu Enterprise Cloud (UEC), which lets you take a group of physical machines, turn one of them into a controller, run multiple virtual machines on the others, and manage them all from a single console.

    The reason this is a game changer is that it brings an open source cloud deployment and management system into the mainstream. I’ve been opposed to using Amazon’s EC2 or any other proprietary cloud system because of the vendor lock-in. To deploy effectively in the cloud you need to invest a lot of time building your system for it. And if it’s proprietary you are removing yourself from the hosting free market. A year down the road your vendor can charge you whatever they like because the cost to leave is so much greater. And God help you if they go under.

    It’s also been much more cost-effective to buy your own hardware and amortize it over three or four years, rather than leasing, if your cash flow can support that. As of today you can both own the physical machines and run your own robust private cloud on them with a very well supported open source Linux distro.

    The UEC is also compatible with Amazon EC2, which lets you (in theory) move between EC2 and your private cloud with ease.

    The advantages of building a cloud are clear. Assuming the performance cost of virtualization is low (it is), it lets you use your hardware more effectively. For example, your mail server, source repository and proxy server can each run on their own virtual machine, sharing the hardware, and you can track each machine’s performance separately and move one to a different physical box if it starts hogging resources.

    But what I love most about virtualization is the impact it has on Dev and QA. You can duplicate your entire production web cluster on two or three physical machines for Dev and do it again for QA.

    To get started with Ubuntu UEC, read this overview, then this rather managerial guide to deploying and then this more practical guide to actually getting the job done.

  • Super fast & easy virtual server setup on Ubuntu (Jaunty)

    While I upgrade to Karmic, here’s a quick setup to get a virtual Ubuntu server running on a real Ubuntu server:

    As root:

    ubuntu-vm-builder kvm jaunty --hostname dev2 --addpkg openssh-server --addpkg vim -d /usr/local/vms/dev2 --mem 256 --libvirt qemu:///system

    This will create a Jaunty Jackalope Ubuntu virtual server using the KVM hypervisor. The hostname will be dev2. It will add the openssh-server and vim packages. It will put the image in the /usr/local/vms/dev2 directory and allocate 256 MB of memory for the machine. The --libvirt option automatically registers your new machine with libvirt under the qemu:///system connection.

    Once you’re done you can run:

    virsh

    In the virsh shell type:

    list --all

    You should see your new machine listed.
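
    The output will look roughly like this (the guest shows up as “shut off” until you start it):

     Id Name                 State
    ----------------------------------
      - dev2                 shut off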

    To set up networking, type ‘edit dev2’.

    Change (or add) the following:

    <interface type='bridge'>
      <source bridge='br0'/>
      <target dev='vnet0'/>
    </interface>

    Leave out anything about a MAC address because virsh will automatically add that for you.
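
    If you want to confirm the change took, and see the MAC address virsh generated for you, you can dump the domain definition from the shell. Something like this (just a quick grep, nothing fancy):

    virsh dumpxml dev2 | grep -A 3 '<interface'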

    Now the hard part. You want to create a Linux bridge.

    If you have only one network interface on the box, you’re going to need physical access. I’m going to assume that’s the case. [If you have a second interface, just leave it up and make sure you’re ssh’ing in via that one.]


    ifconfig eth0 down
    ifconfig eth0 0.0.0.0
    brctl addbr br0
    brctl addif br0 eth0
    ifconfig br0 up

    At this point your bridge is up and your virtual machine can use it, but the host OS doesn’t have an IP of its own. So:

    ifconfig br0 192.168.123.123 netmask 255.255.255.0

    Now add a default gateway to your host:

    route add default gw 192.168.123.1
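
    Before going further it’s worth sanity-checking the bridge. Something like this (assuming 192.168.123.1 really is your gateway) should show eth0 attached to br0 and confirm the host can still reach the network:

    brctl show br0
    ping -c 3 192.168.123.1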

    Now comes another tricky part. If you’re running all this on a machine with a GUI, life is easy. I’m going to assume you, like me, run Ubuntu Server. You need to launch your new virtual machine and connect to it using VNC. Let’s say you have a MacBook and want to run the VNC client on that. Here’s what you do:

    On the MacBook launch a terminal. Go to root with: sudo su -

    Run:

    ssh -f -N -L 5900:127.0.0.1:88 root@your_host_machines_ip

    On the host machine run:

    ssh -f -N -L 88:localhost:5900 root@your_host_machines_ip
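
    The net effect is a two-hop tunnel: port 5900 on the MacBook forwards to port 88 on the host, which in turn forwards to the VNC console listening on the host’s own port 5900. If you want to check that both listeners are actually up, something along these lines works on either machine:

    sudo lsof -i :5900   # on the MacBook
    sudo lsof -i :88     # on the host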

    Now go and download Chicken of the VNC for your Mac.

    Now on the host operating system run:

    virsh start dev2

    Then launch Chicken of the VNC and just connect to localhost. Bang, you should have a console!
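
    If the connection is refused, check which display KVM actually gave the guest. libvirt hands displays out incrementally, so the first running guest is usually :0 (port 5900), the second :1 (port 5901), and so on:

    virsh vncdisplay dev2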

    Now edit your network settings:

    vim /etc/network/interfaces

    Just configure your network as normal, as if the machine were on your physical network. Something like:

    auto eth0
    iface eth0 inet static
        address 10.1.0.13
        netmask 255.255.255.0
        gateway 10.1.0.1

    Then do

    /etc/init.d/networking restart

    And … unless I’ve forgotten a step, which is quite likely … you should be up and running. Make sure the ssh server is running on your new server and try to ssh to your virtual server’s IP from the host machine.
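
    As a concrete check from the host (substituting the IP and whatever user you set up on the guest; I’m using the example address from above):

    ping -c 3 10.1.0.13
    ssh youruser@10.1.0.13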

    If you can’t ping the default gateway, make sure your firewall software (if you have any) isn’t interfering. If you run shorewall, you want to change the following:

    Edit the /etc/shorewall/interfaces file and change ‘eth0’ to ‘br0’

    Also add routeback,bridge to br0 so it looks something like this:

    net     br0  detect  routeback,bridge,tcpflags,norfc1918,routefilter,nosmurfs,logmartians

    Restart shorewall and give it a try.
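
    On Jaunty that’s usually just:

    /etc/init.d/shorewall restart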

    Now if you want to upgrade your new virtual Jaunty machine to Karmic, simply run:

    apt-get install update-manager-core
    do-release-upgrade -d

    I’ll try to include the settings in the host /etc/network/interfaces for br0 soon.
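
    In the meantime, here’s a rough sketch of what that config usually looks like using the bridge-utils hooks, with the same example address as above (adjust the addresses and the bridged interface for your network):

    auto br0
    iface br0 inet static
        address 192.168.123.123
        netmask 255.255.255.0
        gateway 192.168.123.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0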

    If you’re still stuck, here are some great links:

    Introduction to Linux bridging.

    Info on libvirt.

    Setting up a bridge.

    ubuntu-vm-builder short guide.