Virtual Machines for Increased Cupboard Space

Or: What I Did on My Holidays.

My previous home networking setup consisted of a routing firewall, a main file server, a DMZ’d web server machine (which I’d recently replaced), and a desktop machine.

I’ve spent the last week, for no very good reason, ripping out most of these physical servers and replacing them with virtual servers running under the Xen virtual machine monitor on my main server.

Much of this time was spent pulling my hair out and struggling over various configuration issues, so I’ve made some notes here about the process I went through, and how to avoid making the same mistakes again.

This may be useful to anybody else planning to set up a similar Xen/Debian Linux configuration. You’ll probably need to do some background reading first, though.

Why do it?

Well, mainly because the web-server machine (the one running this site) was horribly underpowered. Also it seemed a dreadful waste of resources to have lots of different boxes always on and always consuming power, especially when my main server machine has enough horsepower to run everything comfortably by itself. And my router/firewall was running short of disk space and needed upgrading at some point anyway.

So it seemed to make sense to have everything running as virtual machines on my main server.

Getting Xen running

Right, so the first step was to get Xen running. Xen seemed the most attractive virtualisation option (compared to VMware or VirtualBox), since it’s:

  • Efficient, with virtual machines having close to native hardware performance;
  • Free Software;
  • Pretty mature;
  • Popular and well-supported.

I ended up compiling Xen from source*.

With the latest version of the Xen source release (3.1), and following the instructions in the user manual, I couldn’t get the Linux kernel to boot on top of Xen: it wouldn’t recognise the SATA disks at boot-up. I tried recompiling the Linux kernel. I tried creating an initrd image. Nothing seemed to work. (In retrospect, this may have been confusion between the different versions of kernels floating around. I’m not sure. It was a dark time. I try not to think about it too much.)

So I took a deep breath, and took a step back, and did what I should have done in the first place: Xen wanted to run Linux kernel version 2.6.18, so I first compiled that version of the Linux kernel from source myself, and got it running on the bare hardware without Xen. Then I took the ‘.config’ file and used it as a basis for compiling my virtualised kernels under Xen.

Note that the Xen source release downloads its preferred kernel (currently 2.6.18), patches it with appropriate Xen patches, then tries to install it. Once it has patched the kernel source, you can (re)compile the patched kernel yourself, fiddle with .config files, and generally do anything you could do to a normal Linux kernel source tree. Specifically (from within the patched source tree):

make menuconfig
Takes your existing ‘.config’ file (or creates one if it doesn’t exist) and lets you include or exclude kernel features via a fairly friendly menu-based system.
make oldconfig
Assumes that the ‘.config’ is for an older kernel version, or for a kernel without the current set of patches, and asks you questions about any features which your ‘.config’ file doesn’t cover. You don’t need to do this. You can just use ‘make menuconfig’ and set them yourself.
make
Compiles the kernel and modules.
make install
Copies the compiled kernel, the config file, and a System.map file (whatever the hell that does) to your /boot directory.
make modules_install
Copies all the loadable kernel modules to /lib/modules/<versionnumber>
make clean
Erases all the object code so you can recompile from scratch, e.g. to start again after copying a new ‘.config’ file across.
make mrproper
Does ‘make clean’, but also removes the ‘.config’ file, so you’re back to a blank slate. (I never found that I needed this one.)
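
Put together, a typical build-and-install session inside the patched source tree looks something like this. (A sketch, not gospel: it assumes your known-good config from the bare-hardware boot was installed as /boot/config-2.6.18, and the version suffixes will vary.)

```shell
# From within the Xen-patched Linux kernel source tree.
# Start from the known-good config saved by a previous 'make install':
cp /boot/config-2.6.18 .config

make oldconfig        # answer questions for any new (Xen) options
make menuconfig       # optional: tweak features by hand
make                  # compile the kernel and modules
make modules_install  # modules go to /lib/modules/<versionnumber>
make install          # kernel, config and System.map go to /boot
```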

You can compile the tools and suchlike independently within the Xen tree (‘make tools’, ‘make install-tools’). Look in the Xen Makefile to see the options available.

Note: I had to set the Xen hypervisor to be built without ‘pae’ (in order to match my non-pae kernel).
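
If I remember the Xen 3.x build system correctly, the PAE setting lives in Config.mk at the top of the Xen tree and can be overridden on the make command line. Something like this — the variable name is from memory, so check your own Config.mk before trusting it:

```shell
# Build the hypervisor without PAE support, to match a non-PAE dom0 kernel.
# (XEN_TARGET_X86_PAE is the Config.mk variable, if memory serves.)
make xen XEN_TARGET_X86_PAE=n
make install-xen
```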

GRUB boot loader

Not sure which boot loader I had on my server before, because I think I nuked it as part of my desperately trying to get Xen to boot (see above).†

You’ll be wanting to make sure you have GRUB installed (as opposed to LILO or another boot loader) to get Xen working anyway.

Also, things I have learned about GRUB, and /boot, and booting Linux:

  • GRUB reads the menu.lst file when it boots, so you don’t need to reinstall GRUB each time you make a change to it.
  • When you install a kernel in /boot, the installer puts a copy of the kernel ‘.config’ file in there too, with a matching suffix. Handy if you later want to recompile a kernel with slightly different options.

The Xen command-line tools & daemons

You cannot run the Xen 3.1 tools (like ‘xend’ or ‘xentop’) on Debian Sarge, since its TLS library is the wrong kind. (You cannot just ‘mv /lib/tls /lib/tls.disabled’ as suggested in the docs, because the Xen libraries themselves use it!) You need to get yourself a Xen-compatible version of libc6 (glibc, which contains the TLS library). The easiest way of doing this seemed to be to upgrade from Sarge to Etch and ‘apt-get install libc6-xen’.
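
For the record, the upgrade boiled down to something like this (a sketch; it assumes a straightforward sources.list with ‘sarge’ in it, and the usual caveats about dist-upgrades apply):

```shell
# Point apt at Etch instead of Sarge, then upgrade the whole system.
sed -i 's/sarge/etch/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade

# Install the Xen-friendly glibc variant.
apt-get install libc6-xen
```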

Upgrading to Etch apparently nuked my Xen binaries, as installed by ‘make install-tools’. (Is this because Etch has its own Xen package?) At one point, out of frustration from not being able to get the source distribution working, I tried installing the Debian Xen package. As a result, my system has the Debian ‘xen-utils-3.0’ package installed. There seem to be no ill effects with it coexisting with my home-compiled Xen binaries and kernel.

As advertised, if you put your virtual machine config files in /etc/xen/auto, they will be started by the xendomains daemon on system startup.

What’s nice is that you can tell your virtual machines to shut down cleanly via the xm command on the dom0 machine, without logging in to the domU itself. (Simply ‘xm shutdown machine.domain’.) I don’t know how that works. It must be magic.

When the xendomains daemon stops, it cleanly shuts down all the virtual machines which it started. So you should be able to shut down your (dom0) server, and have it cleanly shut down all its (domU) virtual machines automatically.
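
Day-to-day management, then, is a handful of ‘xm’ commands on dom0. A quick sketch (‘webserver’ here is a made-up domain name; substitute your own):

```shell
# List running domains and their resource usage.
xm list

# Cleanly shut down a domU from dom0 (no need to log in to it).
xm shutdown webserver

# Have a domain start automatically at boot:
# symlink its config file into /etc/xen/auto.
ln -s /etc/xen/webserver.cfg /etc/xen/auto/webserver.cfg
```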

Logical Volume Management (LVM)

LVM is a dream. It Just Worked (once I’d ensured that the ‘block device mapper’ was compiled into the kernel).

LVM is a way of ‘soft partitioning’ a disk, or several disks, to create pools of disk space which can be divided between different volumes dynamically. This allows you to a) create, resize and delete partitions easily (ideal when creating new virtual machines), and b) create partitions which span several physical disks.

This was one of the components which I feared would eat up a lot of my time to get working, but just didn’t. I’m so impressed with LVM. To create my LVM volumes, I just followed the quick guide in the Xen user manual.
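
The whole LVM setup boils down to three commands: turn a spare partition into a ‘physical volume’, group it into a ‘volume group’, then carve out ‘logical volumes’ for each virtual machine. A sketch (the device and volume names are mine, not gospel):

```shell
# Make a spare partition available to LVM.
pvcreate /dev/sda7

# Create a volume group called 'vg0' from it.
vgcreate vg0 /dev/sda7

# Carve out a root disk and swap for a new virtual machine.
lvcreate -L 10G -n webserver-disk vg0
lvcreate -L 512M -n webserver-swap vg0

# Later, growing a volume is painless:
# lvextend -L +5G /dev/vg0/webserver-disk
```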

Networking setup

I use the Shoreline Firewall (Shorewall)‡ for my routing/firewalling needs.

The Xen docs talk about configuring ‘/etc/xen/xend-config.sxp’ for different networking scenarios, including changing from a ‘bridged’ networking mode to ‘routed’ networking. A routed configuration has got to be good, right? Also, an article by the Shorewall people about a virtual Xen router seemed to indicate that that was the way to go.

Quick summary (of hours of pain and anguish): I couldn’t get it to work. It was too complicated for me. I know the basics of TCP/IP networking and routing, but this was a real brain-fuck, particularly the routing rules when connecting to the vifs from dom0 directly.

So I decided to go with a configuration where I had a simple 3-port firewall running in one of my unprivileged domains, talking to the LAN and the (virtual) DMZ over a couple of bridges (in this context, a ‘bridge’ being a virtual network switch). The virtual firewall machine would have one physical network card, talking to the outside world, and two virtual network cards, talking to the LAN and DMZ. The configuration⁑ looks like this:

[Xen network diagram]

It was easier for me to understand than the routed version, because it’s a direct analogue of the previous physical setup. The vifs and bridges correspond to patch cables and switches (with the added bonus that they’re all free! You can have as many of them as you like!) There is routing going on inside the virtual firewall, of course, but the connections between the virtual machines—the network plumbing—is all bridging.

So how did I get my two bridges (‘dmzbr’ and ‘lanbr’) set up?

  1. Change the line in /etc/xen/xend-config.sxp from “(network-script network-bridge)” to “(network-script network-custom)”.
  2. Create a file /etc/xen/scripts/network-custom, consisting of:
    #!/bin/sh
    dir=$(dirname "$0")
    . "$dir/xen-script-common.sh"
    . "$dir/xen-network-common.sh"
    create_bridge 'lanbr'
    create_bridge 'dmzbr'
    "$dir/network-bridge" "$@" bridge=lanbr

The default ‘network-bridge’ script behaviour is to create a single bridge, called ‘xenbr0’, and attach your dom0’s virtual network interface, eth0, to it. (It renames the physical network interface from ‘eth0’ to ‘peth0’.) This modified version above creates two bridges, lanbr and dmzbr, and attaches the dom0 network interface eth0 to lanbr.

You can extend this to create as many bridges as you need. The ‘vif’ line in your virtual machine configuration files can specify the bridge to which each virtual network interface is connected.
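
For example, the firewall domU’s config file might contain something like this (the ordering and the PCI address are from my setup; treat it as illustrative):

```python
# Xen domU config fragment: two virtual NICs, one per bridge.
# Each entry attaches one vif to the named bridge.
vif = [ 'bridge=lanbr', 'bridge=dmzbr' ]

# Plus the physical NIC handed through from dom0 (see below).
pci = [ '0000:07:00.0' ]
```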

Getting the virtual firewall machine to take ownership of the physical second Ethernet card turned out to be a lot easier than I’d feared. Basically I followed what other people said on the Internet and it seemed to work:

  1. Find out the PCI address of the network card in question (in my case ‘0000:07:00.0’). The utility ‘lspci’ is your friend here.
  2. Tell dom0 to ignore it. In my case, because I’d compiled pciback into my dom0 kernel, I had to add the parameter ‘pciback.hide=(0000:07:00.0)’ to my kernel boot parameters. So the appropriate stanza in GRUB is this:
    title Xen 3.1 / XenLinux 2.6
    root (hd0,1)
    kernel /xen-3.1.gz console=vga dom0_mem=512M noreboot
    module /vmlinuz-2.6.18-xen root=/dev/sda5 ro console=tty0 pciback.hide=(0000:07:00.0)

    (The ‘noreboot’ means that, if Xen fails to boot the kernel, it won’t automatically reboot the machine. This gives you a chance to actually read the damn error message.)

  3. Tell Xen to assign the hardware to the appropriate virtual machine. You add a ‘pci’ line to the config file for the domU in question. In my case the appropriate line is:

    pci = [ '0000:07:00.0' ]

  4. Tell the domU machine to use this card as a particular ethernet port. I wanted it to be ‘eth2’, with ‘eth0’ and ‘eth1’ being virtual cards. I added a file “/etc/modprobe.d/network_cards” on the domU’s filesystem:

    alias eth0 xennet
    alias eth1 xennet
    alias eth2 r8169

  (‘r8169’ being the driver for the hardware NIC; ‘xennet’ being the driver for the Xen virtual network cards.)

That’s it. I had to ensure that my domU had the driver for the hardware, but it worked flawlessly. Note that the current version of the Xen docs omits to tell you how to assign hardware to a virtual machine. (The Xen Wiki does document the case where ‘pciback’ is compiled as a module, though.)
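
For completeness, the pciback-as-a-module case goes roughly like this — a sketch of the sysfs interface from memory, so treat the exact paths with suspicion and check the Xen Wiki:

```shell
# Hide the device from its normal driver and hand it to pciback,
# without needing the pciback.hide= boot parameter.
modprobe pciback

# Detach the card from its current driver ('r8169' in my case)...
echo -n "0000:07:00.0" > /sys/bus/pci/drivers/r8169/unbind

# ...then tell pciback about the slot, and bind it.
echo -n "0000:07:00.0" > /sys/bus/pci/drivers/pciback/new_slot
echo -n "0000:07:00.0" > /sys/bus/pci/drivers/pciback/bind
```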

I followed the brilliant Shorewall documentation to get my virtual three-interface firewall up and running.


The Xen documentation is very good at what it covers, but there are big gaps, and it’s not easy to find, either. (It took me a bit of Googling to find the user manual each time I needed it.) The various tutorials on the web are for different versions of Xen, with different host operating systems. It’s all a bit of a trial.

I’m really happy with the finished result, though. My server cupboard now has one lone server in it, plus a DSL modem, and that’s got to be an improvement. I feel 3 times more environmentally friendly than before.

My (virtual) servers are easier to get to and easier to maintain. Plus they all take disk space and memory and processor resources from a common pool, so, for example, giving my web server more (or less) memory is a case of changing a value in a configuration file; I can add a disk to the server, and, via the magic of LVM, distribute that space to all the virtual machines.

However, if you’re planning to do the same thing, do set aside a bit of time to get it working.


Links

  • Xen user documentation (for the source release). Not the easiest to find.
  • Installing Xen source on Debian (not as much use as I hoped, but I referred to it).
  • Xen-tools (to create guest VMs easily).
  • Configuring a three-interface firewall with Shorewall. Leads you by the hand. Very well written.

* In retrospect I should have started by upgrading my server to the latest version of Debian (4.0, “Etch”), which includes a relatively recent version of Xen (3.0) built-in… but I didn’t know that at the time.

† As a result I seem to have killed the Dell rescue partition… but I’ve never had to use it anyway. Didn’t seem to be worth the time and research to resurrect it.

‡ My previous, physical, router/firewall ran IPCop. It’s brilliant, and has a nice web control panel. However, with Shorewall, I can use the same firewalling software on all my (virtual) machines. Plus, I can configure it from the command line.

⁑ The disadvantage of this setup is that my virtual firewall has to be running for me to have any Internet access whatsoever. When initially setting up Xen (and continually rebooting the server), I had my desktop machine conveniently connected to the ’Net so I could look stuff up. I wouldn’t be able to do that now. If I take down my server for serious maintenance again, I’ll need to arrange alternative Internet access. Another issue is that the Internet is not accessible in the boot sequence before the firewall domU comes up… though xend and xendomains are started quite early on in the boot sequence (before NTP tries to sync the clock over the Internet, for example), so this might not be a problem.
