XenServer Pool Backup Script

There are a couple of good scripts on the internet for backing up running XenServer hosts, but I don’t think they are that elegant – they rely on temp files, there is no logging, and there is no folder maintenance. I’ve written this one script that can be run on one member of a pool (best to run it on the master) and it will back up all the virtual machines in the pool, their metadata, the pool database and the hosts. It also provides logging, clears its own logs, and clears old backups. It doesn’t create temp files, using variables where others have used temp files. I’m not a developer, so this is maybe still a bit hacky, but it works great for me!
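For example, where other scripts write the VM list out to a temp file and read it back in, this script just captures the command output in a variable. A minimal sketch of the idea, using a stand-in for the real xe call (which you can only run on a XenServer host):

```shell
# Stand-in for: xe vm-list is-control-domain=false params=uuid --minimal
# (the real call returns a comma-separated list of UUIDs; the UUIDs
# below are made up for illustration)
UUIDS="$(printf 'aaaa-1111,bbbb-2222,cccc-3333' | tr ',' '\n')"

# No temp file needed - just loop over the variable
for VMUUID in $UUIDS; do
	echo "processing $VMUUID"
done
```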

There is absolutely no support for this script – use it at your own risk. It works perfectly for me, but please test it in your environment before you start using it in production.


#!/bin/bash

# Written By: Matt Artley
# Created date: 22/06/2015
# Version: 2
# Visit: http://www.mattartley.com

# Define all variables
# NOTE: the paths, hostnames and NFS share below are example placeholders - set them for your environment

DATE=$(date +%y-%m-%d)
LOGPATH="/var/log/poolbackup"
LOGFILE="$LOGPATH/backup-$DATE.log"
MOUNTTYPE="-t nfs"
MOUNT="nas.example.local:/backups"
MOUNTPOINT="/mnt/backup"
BACKUPPATH="$MOUNTPOINT/$DATE"
MASTER="Host1"	# hostname of the pool master
UUIDS="$(xe vm-list is-control-domain=false is-a-snapshot=false params=uuid --minimal | tr ',' '\n')"
VMHOSTNAMES=( Host1 Host2 Host3 Host4 Host5 )

# Check for LOGPATH and create if needed

if [ ! -d "$LOGPATH" ]; then
	echo "Creating Log Directory"
	mkdir -p "$LOGPATH"
else
	echo "Log Directory exists - moving on"
fi

# Send all script output to the log file, keeping the original stdout on fd 3

exec 3>&1 1>>${LOGFILE} 2>&1

# Check for MOUNTPOINT and create if necessary

echo "Check Mountpoint"

if [ ! -d "$MOUNTPOINT" ]; then
	echo "Creating MOUNTPOINT Directory"
	mkdir -p "$MOUNTPOINT"
else
	echo "MOUNTPOINT Directory exists - moving on"
fi

echo "Check Mountpoint....OK"
# Mount the remote NFS share on the backup NAS

echo "Connect to Remote File System (NFS)"

if grep -qs "$MOUNT" /proc/mounts; then
	echo "File system already Mounted"
else
	echo "File system is not mounted, preparing to mount"
	mount $MOUNTTYPE "$MOUNT" "$MOUNTPOINT"
	if [ $? -eq 0 ]; then
		echo "Mount success!"
	else
		echo "Something went wrong with the mount...oops"
		exit 1
	fi
fi

echo "Connect to Remote File System (NFS)....OK"

# Create Backup Directory

echo "Create Backup Directory on Remote Storage"

mkdir -p "$BACKUPPATH"

echo "Create Backup Directory on Remote Storage....OK"

# VMHost backups

if [ ! -d "$BACKUPPATH/VMHOSTS" ]; then
	echo "Creating VMHOSTS Directory"
	mkdir -p "$BACKUPPATH/VMHOSTS"
else
	echo "VMHOSTS Directory exists - moving on"
fi

for POOLMEMBER in "${VMHOSTNAMES[@]}"; do
	echo "Starting host-backup $POOLMEMBER"
	xe host-backup file-name="$BACKUPPATH/VMHOSTS/$POOLMEMBER-$DATE.bak" host="$POOLMEMBER"
	if [ $? -eq 0 ]; then
		echo "host-backup $POOLMEMBER....OK"
	else
		echo "host-backup $POOLMEMBER....Failed"
	fi
done

# Pool database dump (only to be run on Master)

if [ "$HOSTNAME" = "$MASTER" ]; then
	echo "Server is MASTER - Starting pool-dump-database"
	xe pool-dump-database file-name="$BACKUPPATH/VMHOSTS/pool-DB-$DATE.bak"
	if [ $? -eq 0 ]; then
		echo "pool-dump-database....OK"
	else
		echo "pool-dump-database....Failed"
	fi
fi

# Actual export and backup of VMs happens here

for VMUUID in $UUIDS; do
	VMNAME=`xe vm-list uuid=$VMUUID | grep name-label | cut -d":" -f2 | sed 's/^ *//g'`
	echo "Now moving to $VMNAME"
	SNAPUUID=`xe vm-snapshot uuid=$VMUUID new-name-label="SNAPSHOT-$VMUUID-$DATE"`
	echo "Creating Snapshot of $VMNAME"
	xe template-param-set is-a-template=false ha-always-run=false uuid=$SNAPUUID
	if [ $? -eq 0 ]; then
		echo "Creating Snapshot of $VMNAME....OK"
	else
		echo "Creating Snapshot of $VMNAME....Failed"
	fi
	echo "Exporting .XVA of $VMNAME"
	xe vm-export vm=$SNAPUUID filename="$BACKUPPATH/$VMNAME-$DATE.xva"
	if [ $? -eq 0 ]; then
		echo "Exporting .XVA of $VMNAME....OK"
	else
		echo "Exporting .XVA of $VMNAME....Failed"
	fi
	echo "Export vm metadata"
	xe vm-export uuid=$VMUUID filename="$BACKUPPATH/$VMNAME-$DATE" metadata=true
	if [ $? -eq 0 ]; then
		echo "Export vm metadata....OK"
	else
		echo "Export vm metadata....Failed"
	fi
	echo "Removing created snapshot of $VMNAME"
	xe vm-uninstall uuid=$SNAPUUID force=true
	if [ $? -eq 0 ]; then
		echo "Removing created snapshot of $VMNAME....OK"
	else
		echo "Removing created snapshot of $VMNAME....Failed"
	fi
done

# Clear old vm backups 

find $MOUNTPOINT/VMHOSTS/* -mtime +2 -exec rm -rf {} \;

# unmount file system

umount "$MOUNTPOINT"
if [ $? -eq 0 ]; then
	echo "Filesystem unmounted"
else
	echo "Problem unmounting the filesystem"
fi

# Clear Log Directory of logs older than 3 weeks

find $LOGPATH/ -mtime +21 -exec rm {} \;
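To run the backup automatically, a crontab entry on the pool master does the job. The path below is a placeholder – use wherever you saved the script:

```shell
# crontab entry: run the pool backup every night at 02:00
# (/root/poolbackup.sh is an example path - adjust to suit)
0 2 * * * /root/poolbackup.sh
```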

Install Foreman on Ubuntu 14.04

The information out there is patchy, and I had a lot of failed installs before I got Foreman up and running on Ubuntu 14.04. This is the guide I made while doing it, including the mistakes I was making along the way:

How to install Foreman on Ubuntu 14.04 – follow this guide, the internet is full of lies and deceit!

Create an Ubuntu 14.04 virtual machine. DON’T create a user called foreman – call it user, sysadmin, anything like that, but not foreman. If you create a user “foreman”, the Foreman install will fail and it won’t tell you why. Call the hostname foreman, or something similar. Once the server is created and up and running, stick an address reservation in DHCP and add a static record to DNS (as it’s a Linux box it won’t register itself in the AD DNS).

1 – Change the hostname and the FQDN to match the DNS record

# sudo -s (if you don't like toggling your root access, just sudo each of these commands)
# vi /etc/hosts

Once your hosts file opens, make changes so it looks like this:

root@foreman:~# cat /etc/hosts
127.0.0.1       localhost
<your-server-ip>  example.host.local  foreman

To save and quit vi: ESC, :x, Enter.

IMPORTANT – don’t put any capital letters in any of the host names. Even though in DNS the host name is sometimes rendered as example.HOST.LOCAL, if you put it in the hosts file like that the installation will fail, and it won’t tell you why.
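If you’re copying names out of DNS, it’s worth forcing them to lower case before they go anywhere near /etc/hosts. A quick tr one-liner handles it:

```shell
# DNS sometimes hands you mixed case - normalise before editing /etc/hosts
HOSTFQDN="example.HOST.LOCAL"
HOSTFQDN="$(echo "$HOSTFQDN" | tr '[:upper:]' '[:lower:]')"
echo "$HOSTFQDN"    # example.host.local
```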

Check the hostname of your host now

# hostname -f

Hopefully it will echo back example.host.local.

Reboot your host (not essential I don’t think, but worth it):

# reboot

2 – Add puppet repos, install the puppet agent and puppetmaster. Enable the puppetmaster and then restart it

# sudo -s (if you don't like toggling your root access, just sudo each of these commands)
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
# dpkg -i puppetlabs-release-trusty.deb
# apt-get update
# apt-get install -y puppetmaster puppet
# sed -i s/START=no/START=yes/g /etc/default/puppetmaster
# service puppetmaster restart

3 – Add foreman repos. Install Apache2 and Foreman-installer, then install Foreman

# echo "deb http://deb.theforeman.org/ trusty stable" > /etc/apt/sources.list.d/foreman.list
# echo "deb http://deb.theforeman.org/ plugins stable" >> /etc/apt/sources.list.d/foreman.list
# wget -q http://deb.theforeman.org/pubkey.gpg -O- | apt-key add -
# apt-get update
# apt-get install -y apache2 foreman-installer
# foreman-installer --enable-foreman-proxy > ./foreman.log

Hopefully you will not have any errors.

4 – If you have no errors, continue to configure the puppet agent on the puppetmaster server (inception much?)

# sed -i '/\/var\/log\/puppet/a \server=example.host.local' /etc/puppet/puppet.conf
# service puppet restart
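That sed one-liner appends the server= line directly after the line containing /var/log/puppet in puppet.conf. Here is the same idiom run against a stand-in config fragment (a made-up file in /tmp, not the real puppet.conf), so you can see exactly what it does:

```shell
# Build a stand-in puppet.conf fragment for demonstration
cat > /tmp/puppet-demo.conf <<'EOF'
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
EOF

# Append the server line after the logdir line, as in step 4
sed -i '/\/var\/log\/puppet/a \server=example.host.local' /tmp/puppet-demo.conf

# The server= line now sits directly under logdir=/var/log/puppet
cat /tmp/puppet-demo.conf
```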

5 – Now you can check out your Foreman install – the default credentials etc. can be found in the log you made during installation

# cat foreman.log
* Foreman is running at https://example.host.local
Initial credentials are admin / SOMEPASSWORD
* Foreman Proxy is running at https://example.host.local:8443
* Puppetmaster is running at port 8140
The full log is at /var/log/foreman-installer/foreman-installer.log

6 – Open your browser and go to https://example.host.local – login with the credentials provided by the install log file

Foreman + Puppet

Foreman + puppet is a pretty awesome combination for automating your IT infrastructure.

I followed a great guide here for setting up a puppet master with foreman web gui.


Don’t have capital letters in your hostname – host.example.local not host.EXAMPLE.LOCAL – else it will have problems setting up the proxy.

Don’t name the user on your server “foreman” – it will make the installation fail, and you won’t know why. The installer creates a user foreman and does some stuff in the home directory.

Also, for my setup to install correctly (Foreman + Ubuntu 14.04) I had to change this command:

foreman-installer > ./foreman.log

to:

foreman-installer --enable-foreman-proxy > ./foreman.log

else I had errors like:

/Stage[main]/Foreman_proxy::Register/Foreman_smartproxy  Could not evaluate: Could not load data from