Ubuntu as your hypervisor

Ubuntu is a free server operating system that is easy to maintain and build on. I’m a big fan, and recently I’ve been using it to run our development environments at the office. If, like us, you’re looking to build a low-cost environment for non-production work, this article may be a useful start.

Note that many production environments run on KVM, and the setup I describe below would need some tweaking, especially on the hardware side, before it would be ready for that. While I do talk about production versus staging considerations throughout the article, there are some fundamental aspects that I don’t cover and which should at the very least be touched upon for production: redundant compute nodes, redundant gigabit switches separate from your LAN switches, enabling jumbo frames, disabling multicast, an iSCSI SAN with snapshotting and replication capability, not to mention your hardware’s scalability. So please be mindful that, while this article is sufficient for development and testing, you should consider it incomplete for a production environment. Ye be warned.


Getting Ubuntu up and running
  • To get started, you’ll need Ubuntu Server. If you’re planning to use the server for production, download the latest LTS; otherwise, you can just get the latest version.
  • You’ll need the hardware to run the hypervisor, of course. Make sure that the machine you use has ample CPUs (64-bit processors with hardware virtualization extensions, i.e. Intel VT-x or AMD-V), memory, and storage space (RAID-1 15K SAS should be sufficient for a production environment as long as you’ve got a SAN for storage; if your physical host is also supposed to be storing the VMs, I assume this is a test or dev environment, in which case I’d recommend at least two SATA drives in RAID-0). If you have spare hardware, I would definitely recommend setting up one machine as an iSCSI or NFS SAN instead.
  • Run the Ubuntu Server installation on your compute node. I won’t walk you through the installation, as this is not a KB article on setting up Ubuntu. However, do make sure you at least consider the following:
    • You may wish to set up your storage partitions as LVM so that you can add disks later (that is, if you’re using your compute node as a storage device as well)
    • When prompted for the services to install, you should at least set up the OpenSSH server and Virtual Machine host services.
  • Once your system is installed, you may wish to set up your public-key authentication. You can find information on how to do this in PuTTY here: http://www.ualberta.ca/CNS/RESEARCH/LinuxClusters/pka-putty.html
  • Make sure that you have all the necessary libraries: apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils openssh-server virt-manager convirt
  • Set up your PuTTY server profile (skip this for Linux users):
    • Specify the host address
    • Specify the auto-login username in Connections > Data
    • Enable X11 forwarding in Connections > SSH > X11
    • Specify the private key file to be used in Connections > Auth (if applicable)
    • Make sure you have Xming installed. It needs to be running when you run PuTTY.
  • If you’re using Linux, when you connect via SSH be sure to specify the X11 parameters and public-key parameters like so: ssh <host> -X -i <private key file>
    Private keys can be generated using ssh-keygen as described here:
  • At this point, your host should be ready to be used. You can create the VM in two ways:
    • run virt-manager and create the machine using the GUI
    • run ubuntu-vm-builder with syntax like this: sudo ubuntu-vm-builder kvm hardy --addpkg vim --mem 256 --libvirt qemu:///system
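The key-generation step mentioned above can be sketched like this; the key location and host name are my assumptions, not part of the original setup:

```shell
# Generate a passphrase-less RSA keypair for the lab environment
# (the temp directory is just for illustration; use ~/.ssh in practice):
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -q -f "$KEYDIR/hypervisor_key"

# The public key is what goes into ~/.ssh/authorized_keys on the
# compute node (ssh-copy-id can do this for you):
cat "$KEYDIR/"

# You can then connect with X11 forwarding enabled:
#   ssh <host> -X -i "$KEYDIR/hypervisor_key"
```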
The tools below are for Windows clients only — they are not needed on Linux, as the functionality is built in:
PuTTY — a Windows SSH client with some nifty advantages: you can create server profiles, set them up to use public-key authentication, and enable X11 forwarding and TCP tunneling, all from a GUI.
Xming — a Windows X server that allows you to run Linux graphical applications remotely over SSH. Used with PuTTY, you can run apps such as ghex or gedit from your Windows machine.
The tools below are for managing the hypervisor. They are Linux applications, which is why you need the above tools if you’re running Windows.
virt-manager — GUI interface for creating, starting, stopping, or moving VMs.
virsh — Command-line equivalent of virt-manager. Practical when you just want to start or shutdown a VM.
Useful commands:
virsh list --all → lists all machines defined on the host, running or not
virsh start <machine name> → starts the machine
virsh shutdown <machine name> → attempts to gracefully shut down a VM
virsh suspend <machine name> → pauses the VM
virsh destroy <machine name> → forces the VM off
virsh can be used to migrate machines live from one host to another. Use this syntax:
virsh migrate --live <name of the machine> qemu+ssh://<destination physical host name>/system
convirt — similar to virt-manager, this GUI tool purportedly allows you to drag & drop VM’s from one server to another. Still under evaluation.
Next Steps
Here are a few next steps that you may wish to consider for enhancing your hypervised environment:

Set up NFS4 shares, so that you can share VMs and migrate them from one compute node to another:  https://help.ubuntu.com/community/NFSv4Howto
Set up a bridge so that your VM’s can use the LAN: https://help.ubuntu.com/8.04/serverguide/C/libvirt.html
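As a rough sketch of the NFS share idea, the export path and subnet below are assumptions; adjust them to your environment:

```
# /etc/exports on the storage node (with NFSv4 you may also need an
# fsid=0 pseudo-root; see the howto linked above):
/var/lib/libvirt/images,no_subtree_check,no_root_squash)

# /etc/fstab on each compute node, mounting the shared image store:
storage:/var/lib/libvirt/images  /var/lib/libvirt/images  nfs  defaults  0  0
```

After editing /etc/exports, run ‘sudo exportfs -ra’ on the storage node to publish the share.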

NOTE FOR LINUX MACHINES: when cloning a Linux machine, don’t forget to delete /etc/udev/rules.d/70-persistent-net.rules: http://muffinresearch.co.uk/archives/2008/07/13/vmware-siocsifaddr-no-such-device-eth0-after-cloning/

>”My VMWare log partition is full!” – problem, cause, mitigation

>Hello folks 🙂  Been a while since I’ve last posted. I keep making vows that I will post regularly, and do so for about a month — and then, things get hectic again and I forget this site’s very existence. My solution is for me to quit whinging about how irregularly I post and continue to post relevant shite. No use posting for the purpose of posting, methinks. Fair enough?

Anyway, I finally got something off my plate today. It’s something that I’ve been meaning to write about, namely because the reason for its occurrence is unintuitive, it’s a silly problem to encounter in a production environment, and it’s relatively easy to resolve:

The problem

I first encountered this issue a few months ago; we’d been knee-deep in virtualizing a dozen servers for a client when, suddenly, the ESX machines stopped being able to start VMs. We thought “OK, that’s weird”, and poked around the vCenter logs. Cue a puzzling message: “No space left on device”. That couldn’t be right: the SAN we were using was brand new and practically empty. Since nothing else was working, we restarted the servers.

You can probably guess what happened next: physical servers come back up, and now none of the VM’s will start. Luv’ly.

Fortunately, we did finally decide to open up an SSH session in order to check out the logs there to see if there were any additional clues… and discovered that the /var/log directory (which has its own partition) was chock full of logs.

The cause

VMWare’s KB article explains this problem in detail, and actually provides a decent resolution. But here’s why I think this is unintuitive: although these ESX (and ESXi) boxes are *nix servers, absolutely everything is administered via the vSphere client, so checking partition usage from a shell is about the last thing that comes to an admin’s mind.

The offensive security perks

Want to mess with the sysadmin? Flood his/her ESX box’s syslog file! That’s right, folks — by virtue of flooding the syslog file, the admin won’t be able to start a VM, use vMotion, etc etc…

A solution

One possible way to prevent this kind of issue is to rotate your logs; there’s a good explanation of how this is done here. Setup is rather simple; as a matter of fact, you’ll find that many distros have log rotation implemented out-of-the-box… So why hasn’t VMWare? I’m speculating, but I would imagine that since the only purpose of ESX is to run other machines, VMWare decided that 1) the volume of logs was low enough that they could do away with it, 2) they actually wanted to keep logs from being overwritten for debugging purposes and 3) they figured that in the worst case scenario it would be a way for administrators to be tipped off that something was wrong in the first place. Since this is pure speculation, I won’t go into how bad an idea this was or how a more elegant solution could have been found.

Nevertheless, if you are not ecstatic about losing valuable log information due to rotation, you could possibly set up your ESX boxes to log to a centralized rsyslog server over TLS. This is something that you should consider doing anyway – log consolidation’s a pretty hot topic nowadays.
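For plain (non-TLS) forwarding, the classic incantation is a one-liner in the service console’s syslog configuration; “loghost” is a placeholder for your collector, and the TLS transport would need an rsyslog-specific setup on top that I’m not showing here:

```
# /etc/syslog.conf on the ESX host: forward all messages to the
# central collector ("loghost" is a placeholder name):
*.* @loghost
```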

On my side, I’ve written a very simple bash script which you can set to run as a cron job. It checks how much disk space is used on the log partitions and sends a message to syslog if it’s above 97% – you can then configure syslog to log to another server or set up swatch to e-mail you if the message ever shows up in your syslog:

diskcheck=`df -h | grep /var/log | grep '9[789]%'`
test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"

Silly, innit? But it works. Note, however, that if your log fills up really really fast, you might not get the message before it’s too late.
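One more caveat with the 9[789]% pattern: it matches 97 through 99 percent, but not 100%, so a partition that fills up completely stops triggering the alert. A sketch of a numeric comparison instead follows; the threshold and the df parsing are my assumptions, not part of the original script:

```shell
#!/bin/sh
THRESHOLD=97

# Given a POSIX df output line ("fs blocks used avail use% mountpoint"),
# succeed if the use% field is at or above the threshold:
check_log_disk() {
  pcent=$(echo "$1" | awk '{ gsub(/%/, "", $5); print $5 }')
  [ -n "$pcent" ] && [ "$pcent" -ge "$THRESHOLD" ]
}

# Feed it the df line for the log partition, if there is one:
line=$(df -P | grep /var/log || true)
if [ -n "$line" ] && check_log_disk "$line"; then
  logger "Log disk is getting low on space: $line"
fi
```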

Well, that’s me for now. Back to work!

ADDENDUM: I’ve modded my script so that it can run as a service. The script below should be saved as /bin/vmwareDiskCheck.sh …


#!/bin/sh
doservice () {
  while true; do
    diskcheck=`df -h | grep /var/log | grep '9[789]%'`
    test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"
    sleep 10
  done
}

doservice &

… and this script should be saved as /etc/init.d/diskCheck:

#!/bin/sh
# Init file for VMWare Log partition check
# chkconfig: 2345 55 25
# description: Routinely checks that /var/log isn't too full.
# processname: diskcheck

# source function library
. /etc/rc.d/init.d/functions

path=/bin/vmwareDiskCheck.sh
RETVAL=0

start() {
  $path &
}

stop() {
  # use pgrep to determine the forked process, then kill it
  proc=`pgrep vmwareDiskCheck`
  kill $proc
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo $"Usage: $0 {start|stop|restart}"
    RETVAL=1
    ;;
esac
exit $RETVAL

Comments or improvements welcome!

ADDENDUM 2: If you prefer a cron job, you can drop a script in your /etc/cron.hourly/ directory with the following code (don’t forget to make the script executable!):

#!/bin/sh
diskcheck=`df -h | grep /var/log | grep '9[789]%'`
test -n "$diskcheck" && logger "Log disk is getting low on space: $diskcheck"

>FreeBSD + VirtualBox + RoR = nice, easy development environment :-D


So the new St. Noble security intern, Matt, has a hard-on for FreeBSD and has been trying to convince me to use it. Though I like to give him flak about it, I do actually enjoy the simplicity that it has to offer, and the delightful lack of a GUI – call me old-fashioned 🙂 [no, really, I do love a nice simple text interface, especially when I’m coding. Visual Studio, Eclipse, all those IDEs are nice with their integrated file management, library browsers and auto-completion modules, but when I code Rails, I like using a simple text editor with nothing but the most basic syntax highlighting]
I’ve found that, amongst other things, FreeBSD is great to use as a lightweight Ruby on Rails development environment, especially when combined with VirtualBox. If you own an IronKey, you can set up PortableVirtualBox and get it up and running quite easily.
Anyway, this is a simple guide to setting FreeBSD as your development environment. It’s not very in-depth, but I hope it will provide you enough tips and tricks to set yourself up without committing harakiri or throwing your machine out the window…
When you’re asked to do the partitioning, start by using auto-partition; take note of the different sizes, then adjust your swap space so that you can do a bit more swapping – I would recommend 1 GB instead of the default. Choose a development installation, and install the ports collection. Apart from that, follow the wizard. Pretty simple, huh?
I would recommend setting up sudo, but that’s not 100% necessary since your box is effectively sectioned off from all other computers. If you want to do that, then run ‘pkg_add -r sudo’. Remember to edit your /etc/sudoers file and add your regular username, and optionally prevent root from logging in via SSH by editing your /etc/ssh/sshd_config file.
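For reference, the two edits look roughly like this; “matt” is a placeholder username, and the exact sudoers path may differ on your system:

```
# /etc/sudoers (edit with visudo; when sudo comes from packages on
# FreeBSD, the file may live under /usr/local/etc instead):
matt ALL=(ALL) ALL

# /etc/ssh/sshd_config, to keep root from logging in over SSH:
PermitRootLogin no
```

Restart sshd afterwards (‘/etc/rc.d/sshd restart’) for the sshd_config change to take effect.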
VirtualBox setup:

Use a basic FreeBSD VM template, with two virtual NIC’s: set your first NIC up to be on the host-only network, and your second NIC to be on the NAT network. This allows you (and only you) to connect to your dev environment via SSH or web whilst providing the VM with a means to access the ‘net (especially practical if you’re consuming web services).

SSH and bash

By default, FreeBSD uses csh. I’ve tried using tcsh instead; it doesn’t seem to work for me. Bash, on the other hand, works fine. To install it, execute ‘pkg_add -r bash’ and wait until it’s installed. You can run it manually thereafter by executing ‘bash’, whether you’re at the console or remotely logged in via SSH. Ideally, though, you’d probably want your shell to be bash as soon as you log in, right? Here’s the command to do that: ‘chsh -s /usr/local/bin/bash username’, where /usr/local/bin/bash is the path to bash (so before executing the command, be sure to run ‘which bash’ to double-check the path) and username is your actual username. You must then edit your /etc/passwd file and substitute your shell for /usr/local/bin/bash – it’s the last field on the line with your username on it. You must both execute the chsh command and edit the passwd file in order for the switch to work! Kudos to vivek for the nice, easy tutorial that I didn’t read completely the first time, like an idiot. If you follow it to the letter, this will work for your console and SSH, and will allow you to use SCP (there’s another way to set up bash over SSH, and that’s to use the ForceCommand directive in sshd_config – but that messes with your ability to use SCP).

Ruby, Rubygems, and Rails

No way around it – you’ve gotta compile the sucker; everything you need should be accessible via http://rubyonrails.org/download. Shouldn’t be too hard though: uncompress the tarball, run the usual ‘./configure; make; make install’. Download the rubygems tarball and install it using ‘ruby setup.rb’. Finally, run ‘gem install rails’.

Nano: syntax highlighting

A final note: if you are a fan of nano like me and you would like syntax highlighting, you can create a .nanorc file in your home folder and use the following example as a starting point: http://code.google.com/p/nanosyntax/source/browse/trunk/syntax-nanorc/ruby.nanorc

>Setting up a bridge for your headless VirtualBox machine


Last week, I wrote an article on how to set up a bridge for QEMU, which is quite practical for when you want to set up servers quickly and easily. QEMU has its drawbacks, however, when it comes to using graphic interfaces, so I tend to prefer using VirtualBox for my day-to-day virtualization needs.
I’ve been working on a virtual machine for teaching purposes lately, and have determined that a headless VirtualBox VM is the way to go. Here are a few notes that I’ve taken on setting up a headless VM on a bridged network – this allows the person running the VM to start the machine without starting up a console, and to hit the VM’s services from the host machine.
Before I go on, though, here are the URLs I use as a reference:
Good howtoforge by Falko Timme:
Setting up a bridge, according to the VirtualBox wiki:
These cover (with a fair amount of detail, I might add) the topics of setting up VirtualBox, creating a machine, and creating a fully functional bridge with DHCP etc etc. That’s something I’m not going to cover here – namely because it would be a pale copy of someone else’s work.  I’m writing about setting yourself up with something that you can run in a classroom or as a sandbox for short-term activities.  Hope this helps.

Setting up VirtualBox 2.1 (or later)
At the time of writing, Ubuntu Hardy Heron is the current LTS and VirtualBox 2.1 is the latest version. I will therefore be writing under the assumption that you are using these versions — please remember to change the commands according to your distro / version of VirtualBox!
First, you have to add VirtualBox’s repository and public key to your APT sources. Add the following line to your /etc/apt/sources.list file — you can tack it on to the end:
deb http://download.virtualbox.org/virtualbox/debian hardy non-free
You’ll also have to download and set up the key. You can do this using wget and apt-key:
wget http://download.virtualbox.org/virtualbox/debian/sun_vbox.asc
sudo apt-key add sun_vbox.asc
You can then retrieve virtualbox straight from apt-get:
apt-get install virtualbox-2.1
Unlike the VirtualBox open source edition (which is run using the command ‘virtualbox’), VirtualBox 2.1 is run using ‘VirtualBox’ (case-sensitive, of course). I simply set up my VM using the GUI.

Setting up the bridge
As with QEMU, you have to set yourself up with a virtual network interface (tap0, for instance), give it an IP address, and set up IP forwarding on your host machine. I used the first of the scripts below to set myself up. I then needed to run VirtualBox once again to modify the settings: I added a Host Interface NIC to my machine’s configuration, which pointed to tap0. I then ran my machine, and tested my config by having my guest ping my website, then my host, and vice-versa.
I shut down my guest, and tore down my virtual network using the second script below.


# Script to set up bridging for your virtualbox machines. When setting up your VM, add an extra network interface of type "Host Network", called tap0. You can use this script as the network "startup script".

# Create a TAP interface, tap0, to be used for bridging; set the owner of that interface to the current user (hence the whoami command):
sudo tunctl -t tap0 -u `whoami`

# Create a bridge, br0, and add the tap interface to it. DO NOT ADD THE PHYSICAL INTERFACE: you will kill your network connection if you do that 🙂
sudo brctl addbr br0
sudo brctl addif br0 tap0

# Bring up the bridge and tap interfaces:
sudo ifconfig br0 up
sudo ifconfig tap0 up

# Turn on IP forwarding:
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

# Add a rule to forward traffic over to eth0:
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Script to tear down bridging for your virtualbox machines. You can use this script as the network "shutdown script".

# Flush the traffic forwarding rules:
sudo iptables -t nat -F

# Disable IP forwarding
sudo sh -c 'echo 0 > /proc/sys/net/ipv4/ip_forward'

# bring down the bridge and tap interfaces
sudo ifconfig br0 down
sudo ifconfig tap0 down

# kill the bridge
sudo brctl delbr br0

# kill the tap interface
sudo tunctl -d tap0

Spreadin’ the love

Once the guest machine was configured and connected, I powered it down.  At this point, the guest is ready to transfer to a DVD or to a tarball.  I simply copied the machine’s config folder (~/.VirtualBox/Machines/<machine name>) and Virtual Disk (~/.VirtualBox/VDI/<machine name>.vdi) to a DVD. Before using them, of course, one needs to copy them to the correct locations on one’s disk. The VDI file will need to be registered using the Virtual Disk Manager of VirtualBox (or the equivalent VBoxManage command) and the machine will need to be registered using the following command:
VBoxManage registervm Machines/<machine name>/<machine name>.xml
I also copied the scripts to the DVD; I tacked on the following line at the end of the startup script:
VBoxManage startvm <machine name> -type vrdp
And this line at the beginning of the shutdown script:
VBoxManage controlvm <machine name> poweroff

>Adding a new virtual hard drive to a Ubuntu guest in VirtualBox

>The following is a simple procedure allowing you to create a new virtual hard drive for your Ubuntu guest OS and assign it to a mount point.

As you might already know, VDIs are a pain in the ass to resize — not impossible, but certainly a pain. With an Ubuntu guest OS, the simplest thing to do is to create a new VDI and mount it!

  1. Create the disk and attribute it to your Virtual Machine
    1. Fire up Virtual Box and open the Virtual Media Manager (File > Virtual Media Manager)
    2. Follow the instructions to create a new VDI file
    3. Open the settings of the virtual machine to which you’ll add your new drive.  Under the Hard Disks tab, add the drive that you’ve just created.
  2. Fire up your virtual machine
  3. Format and mount your new hard drive
    1. Once logged into the operating system, make sure that you have gparted installed (sudo apt-get install gparted if you don’t)
      1. Open up gparted (‘sudo gparted’ from a command line)
      2. The system should detect that you have a new drive (probably /dev/sdb). Select that drive and create an msdos partition table on it (when prompted for the label type, select msdos)
      3. Create a new partition of type ext3 (New…, then select primary partition and, when prompted, select ext3)
      4. Hit Apply, and gparted will take care of formatting your drive
    2. You should now be able to use your hard drive simply by mounting it (run ‘mkdir /media/my_new_drive; mount /dev/sdb1 /media/my_new_drive’ as root).  However, for a more permanent setup, you’ll need to modify your /etc/fstab file
      1. Open /etc/fstab with your favorite editor (be sure to run the editor as root)
      2. Create a new line (I usually copy the line for my root partition and modify as necessary).  Make sure to specify your new hard drive as the device (probably /dev/sdb1) and some empty directory as the mount point (such as /opt).
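Such an fstab line might look like this; the device, mount point, and options are assumptions, so match them to your own setup:

```
# /etc/fstab entry for the new drive:
/dev/sdb1  /opt  ext3  defaults  0  2
```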

>QEMU: Accessing the Internet and making the guest pingable from your host.

QEMU is a nice, fast virtualization tool that allows you to create guest machines. It works much like VMWare or VirtualBox; I won’t go into the merits and drawbacks of using one over the other (I use all three, selecting the most appropriate for the situation). I’ve found that qemu is best used for sandboxing, proofs of concept, and tutorials where you need a quick, disposable machine to be set up in very little time.

The following article is nothing new. It’s simply a rehash of the qemu documentation, merged with the following ubuntu post: http://ubuntuforums.org/showthread.php?t=179472

In the past, I’ve found that reading several articles on the same topic can be useful because it gives the reader several perspectives. This is my own “recipe”, hope it will be of use to someone out there…

The procedure in a nutshell:
1) Create a TAP network interface for communicating between your guest and host
2) Set your host up for NAT so that the guest can access the internet
3) Manually configure an IP address and name server on your guest OS.


– Creating the TAP interface –
You need to double-check that TAP is available on your host. To do this, simply type “ls /dev/net/tun” to check whether the device exists. By default, the Ubuntu kernel supports TAP. If your kernel doesn’t, google “Ubuntu tap interface”.
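That existence check is easy to script; here’s a minimal sketch (the message wording is mine, not from the qemu docs):

```shell
#!/bin/sh
# Report whether the kernel exposes the TUN/TAP device node
tap_status() {
  if [ -e /dev/net/tun ]; then
    echo "TAP support available"
  else
    echo "TAP support missing: try 'sudo modprobe tun'"
  fi
}
tap_status
```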

With qemu, this is not particularly complicated. Simply append the “-net nic” and “-net tap” flags to your qemu command. For instance:

qemu <name of your image> -net nic -net tap

Double-check that a tap interface has indeed been created by running ifconfig.
– Setting up NAT – 

You’ll need to enable IP forwarding on your host and set up iptables to forward traffic from your tap interface to your regular interface. I assume that the interface that you use to connect to the internet from your host is eth0 in the following lines. I also assume that your host connects to a router, and not directly to the internet.

To enable IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward

To set up iptables: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE


– Configuring the IP address and name server –
Check out the IP address attributed to your host’s TAP interface and use it as a reference in your /etc/network/interfaces file. Assuming that your guest machine’s network card is eth0, your host IP is and subnet mask is

auto eth0

iface eth0 inet static




You’ll need to check your host’s /etc/resolv.conf file; using the same nameserver setting as your host is the easiest thing to do. In other words, if your host’s /etc/resolv.conf file indicates the nameserver is then set up your guest’s /etc/resolv.conf file to use as well.
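As a concrete illustration, suppose the host’s tap0 interface came up as every address below is a made-up example, not a value from this post, so substitute your own:

```
# /etc/network/interfaces on the guest (example addresses only):
auto eth0
iface eth0 inet static

# /etc/resolv.conf on the guest: mirror the host's nameserver
nameserver
```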