Re: [Hampshire] Converting real servers to virtualised ones

Author: Hugo Mills
Date: Fri, 29 Feb 2008 13:04:11 GMT
To: Hampshire LUG Discussion List
Subject: Re: [Hampshire] Converting real servers to virtualised ones

On Fri, Feb 29, 2008 at 12:39:23PM +0000, Tony Whitmore wrote:
> On Fri, 29 Feb 2008 12:10:11 +0000, Hugo Mills <hugo@???> wrote:
> > On Fri, Feb 29, 2008 at 11:21:40AM +0000, Tony Whitmore wrote:
> >    Well, trivially -- bring the machine down, boot into your favourite
> > live CD, dd the disk over the network onto the VM host's
> > storage... all done.

>
> I had thought about that. I wasn't sure how well Windows would cope with
> the change in host environment etc. Presumably the virtualised hardware is
> significantly different from the random server hardware it is currently on
> and this can sometimes throw Windows into a BSOD.


Mmm. True. You may have to uninstall the disk drivers before you
take the copy -- Windows will then drop back to the excruciatingly
slow generic BIOS mode, which will work on anything. You can then
install the new drivers in your new environment.
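The dd-over-the-network copy could look something like the sketch
below. It uses a small test file standing in for /dev/sda so the copy
can be verified locally; the ssh form in the comment (host name and
image path are made up) is what you'd actually run from the live CD:

```shell
# From the live CD you'd pipe the raw disk straight to the VM host,
# e.g. (hypothetical host and path):
#   dd if=/dev/sda bs=4M | ssh vmhost 'dd of=/var/lib/vms/winserver.img bs=4M'

# Local stand-in: a 4 MB "disk" instead of /dev/sda.
dd if=/dev/zero of=disk.src bs=1M count=4 2>/dev/null

# The same dd-to-dd pipe, locally instead of over ssh.
dd if=disk.src bs=1M 2>/dev/null | dd of=disk.img bs=1M 2>/dev/null

# Verify the copy is bit-for-bit identical.
cmp -s disk.src disk.img && echo "copy OK"
```

Piping through ssh gives you encryption for free; on a trusted LAN,
netcat with a large block size is usually faster.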

> >    If you want to shrink the overall size of the disk used by the VM,
> > then you'll probably have to play games with a suitable
> > resizing/partitioning tool as well. Finally, you haven't decided on
> > the storage format for your VMs -- this will depend on your choice of
> > VM host environment, and on how you want to manage storage.

>
> Currently each Xen domU is a LVM logical volume because I understood
> performance to be better than using disk images on a filesystem. I'd like
> to consider moving to a SAN for our data and VM images in the future
> though.


You could also consider using NBDs -- ship a block dev [LV] from
your storage host to the VM host, and thence into the VM guest as a
block device.
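The NBD arrangement might be set up roughly as follows -- a sketch
only, not something to paste in: host names, the port, and the LV path
are all assumptions, and nbd-server's invocation details vary between
versions:

```shell
# On the storage host: export the LV (path assumed) over NBD on port 10809.
nbd-server 10809 /dev/vg0/winserver

# On the VM host: load the client module and attach the export
# as a local block device.
modprobe nbd
nbd-client storagehost.example.com 10809 /dev/nbd0

# /dev/nbd0 can now be handed to the guest like any other block
# device, e.g. in a Xen domU config:
#   disk = [ 'phy:/dev/nbd0,xvda,w' ]
```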

> I used VNC for a test install of Windows on a VT host under Xen. I'm still
> unsure whether to continue down the Xen route in production or consider
> using Qemu with kvm etc. under Linux. I like the hypervisor model but
> perhaps it doesn't really produce any significant benefits.


The problem with the hypervisor model is that it's a nice idea if
you have tightly-controlled and well-understood hardware to run it on,
because then the hypervisor only needs to understand that limited set
of hardware. This is a good model for traditional big iron systems,
because the OS and hypervisor only run on one hardware platform, and
the variations are minimal.

For crap commodity hardware like PCs, the hypervisor needs to
understand all of the variants of that hardware to do a good job. This
is particularly visible in the area of ACPI support, where even a
full-blown OS like Linux has trouble getting it right all the time.
Effectively, Xen will end up having to reimplement the entirety of the
Linux ACPI layer to work properly -- at which point you might as well
run a full OS as your host system, and run VMs in that, which is what
everything other than Xen does. This was one of the reasons I stopped
trying to use Xen and moved to qemu/kvm. (There were several other
reasons, too).
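For comparison, booting the same LV under qemu/kvm needs no
hypervisor-specific plumbing -- the host kernel handles the hardware,
and the guest is just a process. A minimal invocation might be
(memory size, LV path and VNC display are illustrative, not a
recommendation):

```shell
# Boot the copied Windows guest from its LV under KVM.
# -m 2048: 2 GB of guest RAM.
# -drive:  the raw LV as the guest's disk; cache=none avoids
#          double-caching in the host page cache.
# -vnc :1: serve the guest console over VNC on TCP port 5901.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/dev/vg0/winserver,format=raw,cache=none \
    -vnc :1
```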

Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
    --- You can't expect a boy to be depraved until he's gone to ---     
                             a good school.