I installed XenServer 6.2 on an old Dell 2950, with a pair of dual-core Xeons, 8GB RAM, and 6x15k SAS drives in a RAID10 configuration. As an aside, I get garbage output at the boot prompt when I boot the host...a problem happily solved by mashing the Enter key in frustration until XenServer began booting.
As mentioned above, I installed a FreeBSD 9.1-RELEASE amd64 guest without much difficulty. I then wanted to see how its performance stacked up against our existing ESXi 5.0 infrastructure. As a crude benchmark, I ran time portsnap extract on the XenServer guest, on another FreeBSD guest on a similarly spec'd 1950 running ESXi, and on a 2950 with FreeBSD installed natively (the exact invocation is sketched after the list below). The wall times (mm:ss) were as follows.
- XenServer FreeBSD DomU: 12:20
- ESXi FreeBSD guest: 6:30
- Raw hardware: 5:28
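For anyone wanting to reproduce the numbers, the benchmark is just the stock portsnap tooling; a snapshot has to be fetched before it can be extracted, which keeps the timed step mostly disk- and CPU-bound:

```
# fetch the ports snapshot first, then time only the extract step
portsnap fetch
time portsnap extract
```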
I was rather disappointed to see XenServer fare so poorly against VMware. Not all was lost, though, because my XenServer guest was running in HVM mode; I expected to see some performance improvement by using the paravirtualized drivers available in FreeBSD. To summarize the FreeBSD wiki: full PV support is only available on i386, but amd64 guests can use the PV block and network devices. I tried building a PV image and shipping it over to the XenServer host, without success. I was unable to get XenServer to even attempt to boot my image.
I went back to my HVM DomU and installed the 9.1-p7 XENHVM kernel (the build is sketched below). On reboot, the guest hangs immediately after detecting the CDROM drive: for several minutes it periodically prints a timeout message referencing xenbusb_nop_confighook_cb, then nothing. Some googling suggests that this is a known issue, with a workaround of removing the virtual CD device, as indicated in this thread. I removed the CDROM device by following these instructions (the xe commands are sketched below), and the guest now boots happily.

With the PV drivers, FreeBSD names your virtual disk device "ad0", and the network interface becomes "xn0". Because of these changes, you will want to update your guest's /etc/fstab, and probably the network configuration in /etc/rc.conf; an example follows below. Running the portsnap benchmark on the updated guest yields a time of 8:37...a 43% speedup over the full HVM DomU, but still lagging behind VMware.
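For reference, XENHVM is a stock kernel config in FreeBSD 9.1, so building it is the usual source-tree dance. A sketch, assuming /usr/src is populated with the matching release sources:

```
# build and install the Xen-aware HVM kernel from the stock
# XENHVM config shipped with FreeBSD 9.1 sources
cd /usr/src
make buildkernel KERNCONF=XENHVM
make installkernel KERNCONF=XENHVM
reboot
```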
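The CD removal itself happens on the XenServer host via the xe CLI. A minimal sketch, assuming the guest is shut down first; the VM name "freebsd91" is a placeholder for your own VM's name-label:

```
# list the guest's virtual CD drive and note its UUID
xe vbd-list vm-name-label=freebsd91 type=CD params=uuid

# destroy that VBD so the PV kernel never sees the CD device
xe vbd-destroy uuid=<uuid-from-the-previous-command>
```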
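And the device-rename cleanup inside the guest, again as a sketch: it assumes the HVM install previously used the emulated ada0 disk and an em0 NIC, so adjust the names to match what your guest actually had:

```
# /etc/fstab: the root disk moves from the emulated ada0 to the
# PV ad0 (partition suffixes like p2 carry over unchanged)
sed -i '' 's|/dev/ada0|/dev/ad0|g' /etc/fstab

# /etc/rc.conf: the NIC becomes xn0; replace the old emulated
# interface entry, e.g.
#   old: ifconfig_em0="DHCP"
#   new: ifconfig_xn0="DHCP"
```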
More experimentation is required to tell whether more performance can be squeezed out of XenServer, or whether its live migration features justify the performance gap.