Backup and Restore


Backups are a requirement for any sensible IT deployment, and Proxmox VE provides a fully integrated solution, using the capabilities of each storage and each guest system type. This allows the system administrator to fine-tune, via the mode option, the trade-off between backup consistency and guest system downtime.

Proxmox VE backups are always full backups - containing the VM/CT configuration and all data. Backups can be started via the GUI or via the vzdump command line tool.

Backup Storage

Before a backup can run, a backup storage must be defined. Refer to the Storage documentation on how to add a storage. A backup storage must be a file level storage, as backups are stored as regular files. In most situations, an NFS server is a good way to store backups. You can later save those backups to a tape drive for off-site archiving.
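For example, an NFS backup storage can be added from the command line with pvesm (the storage ID, server address and export path below are placeholders):

# pvesm add nfs my_backup_storage --server 192.168.1.10 --export /export/backups --content backup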

Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically on specific days and times, for selectable nodes and guest systems. Configuration of scheduled backups is done at the Datacenter level in the GUI, which will generate a cron entry in /etc/cron.d/vzdump.
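The generated entry is managed by the GUI and should not be edited by hand. As an illustration only, a line in /etc/cron.d/vzdump could look roughly like this (guest IDs, schedule and storage name are examples):

0 2 * * 6 root vzdump 101 102 --quiet 1 --mode snapshot --mailto root --storage my_backup_storage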

Backup modes

There are several ways to provide consistency (option mode), depending on the guest type.

Backup modes for VMs:
stop mode

This mode provides the highest consistency of the backup, at the cost of a short downtime in the VM operation. It works by executing an orderly shutdown of the VM, and then runs a background Qemu process to back up the VM data. Once the backup has started, the VM returns to full operation if it was previously running. Consistency is guaranteed by using the live backup feature.

suspend mode

This mode is provided for compatibility reasons, and suspends the VM before calling the snapshot mode. Since suspending the VM results in a longer downtime and does not necessarily improve data consistency, the use of snapshot mode is recommended instead.

snapshot mode

This mode provides the lowest operation downtime, at the cost of a small inconsistency risk. It works by performing a Proxmox VE live backup, in which data blocks are copied while the VM is running. If the guest agent is enabled (agent: 1) and running, it calls guest-fsfreeze-freeze and guest-fsfreeze-thaw to improve consistency.
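For example, assuming the guest agent is installed inside VM 109, the agent option can be enabled before running a snapshot mode backup (the VMID and storage name are examples):

# qm set 109 --agent 1
# vzdump 109 --mode snapshot --storage my_backup_storage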

A technical overview of the Proxmox VE live backup for QemuServer can be found online.

Note: Proxmox VE live backup provides snapshot-like semantics on any storage type. It does not require that the underlying storage supports snapshots. Also note that, since the backups are done by a background Qemu process, a stopped VM will appear as running for a short amount of time while the VM disks are being read by Qemu. However, the VM itself is not booted; only its disk(s) are read.
Backup modes for Containers:
stop mode

Stop the container for the duration of the backup. This potentially results in a very long downtime.

suspend mode

This mode uses rsync to copy the container data to a temporary location (see option --tmpdir). Then the container is suspended and a second rsync copies changed files. After that, the container is started (resumed) again. This results in minimal downtime, but needs additional space to hold the container copy.

When the container is on a local file system and the target storage of the backup is an NFS/CIFS server, you should set --tmpdir to reside on a local file system too, as this can improve performance many times over. A local tmpdir is also required if you want to back up a local container using ACLs in suspend mode and the backup storage is an NFS server.
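For example, to back up container 777 in suspend mode to an NFS storage while keeping the temporary copy on a local disk (the storage ID and path are examples):

# vzdump 777 --mode suspend --storage my_nfs_storage --tmpdir /mnt/fast_local_disk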

snapshot mode

This mode uses the snapshotting facilities of the underlying storage. First, the container will be suspended to ensure data consistency. A temporary snapshot of the container’s volumes will be made and the snapshot content will be archived in a tar file. Finally, the temporary snapshot is deleted again.

Note: snapshot mode requires that all backed-up volumes are on a storage that supports snapshots. Using the backup=no mount point option, individual volumes can be excluded from the backup (and thus from this requirement).
Note: By default, additional mount points besides the Root Disk mount point are not included in backups. For volume mount points, you can set the Backup option to include the mount point in the backup. Device and bind mounts are never backed up, as their content is managed outside the Proxmox VE storage library.
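For example, the Backup option of a volume mount point can be set with pct (the container ID, volume name and mount path are placeholders):

# pct set 777 -mp0 local:777/vm-777-disk-1.raw,mp=/data,backup=1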

Backup File Names

Newer versions of vzdump encode the guest type and the backup time into the filename, for example

vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. The parameter maxfiles can be used to specify the maximum number of backups to keep.
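For example, to keep at most three backups per guest (the storage ID is an example):

# vzdump 777 --storage my_backup_storage --maxfiles 3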

Restore

A backup archive can be restored through the Proxmox VE web GUI or through the following CLI tools:

pct restore

Container restore utility

qmrestore

Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit

Restoring one or more big backups may need a lot of resources, especially storage bandwidth, both for reading from the backup storage and for writing to the target storage. This can negatively affect other virtual guests, as access to the storage can become congested.

To avoid this, you can set bandwidth limits for a backup job. Proxmox VE implements two kinds of limits for restoring from an archive:

  • per-restore limit: denotes the maximal amount of bandwidth for reading from a backup archive

  • per-storage write limit: denotes the maximal amount of bandwidth used for writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more than we read. A smaller per-job limit will override a bigger per-storage limit. A bigger per-job limit will only override the per-storage limit if you have Data.Allocate permissions on the affected storage.

You can use the --bwlimit <integer> option of the restore CLI commands to set up a restore-job-specific bandwidth limit. KiB/s is used as the unit for the limit; this means passing 10240 will limit the read speed of the backup to 10 MiB/s, ensuring that the rest of the available storage bandwidth stays free for the already running virtual guests, so the restore does not impact their operation.
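For example, to restore a container with the read speed limited to 10 MiB/s (the archive path and CT ID are examples):

# pct restore 600 /mnt/backup/vzdump-lxc-777.tar --bwlimit 10240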

Note: You can use 0 for the bwlimit parameter to disable all limits for a specific restore job. This can be helpful if you need to restore a very important virtual guest as fast as possible. (This requires Data.Allocate permissions on the storage.)

Most of the time, the generally available bandwidth of your storage stays the same, so Proxmox VE also provides a way to set a default bandwidth limit per configured storage. This can be done with:

# pvesm set STORAGEID --bwlimit KIBs
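Following that template, a default limit of 50 MiB/s (51200 KiB/s) on a storage named my_backup_storage (a placeholder) would be set with:

# pvesm set my_backup_storage --bwlimit 51200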

Configuration

Global configuration is stored in /etc/vzdump.conf. The file uses a simple colon separated key/value format. Each line has the following format:

OPTION: value

Blank lines in the file are ignored, and lines starting with a # character are treated as comments and are also ignored. Values from this file are used as defaults and can be overridden on the command line.
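For example, a bwlimit default set in /etc/vzdump.conf can be overridden for a single run:

# vzdump 777 --bwlimit 20000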

We currently support the following options:

bwlimit: <integer> (0 - N) (default = 0)

Limit I/O bandwidth (KBytes per second).

compress: <0 | 1 | gzip | lzo> (default = 0)

Compress dump file.

dumpdir: <string>

Store resulting files to specified directory.

exclude-path: <string>

Exclude certain files/directories (shell globs).

ionice: <integer> (0 - 8) (default = 7)

Set CFQ ionice priority.

lockwait: <integer> (0 - N) (default = 180)

Maximal time to wait for the global lock (minutes).

mailnotification: <always | failure> (default = always)

Specify when to send an email.

mailto: <string>

Comma-separated list of email addresses that should receive email notifications.

maxfiles: <integer> (1 - N) (default = 1)

Maximal number of backup files per guest system.

mode: <snapshot | stop | suspend> (default = snapshot)

Backup mode.

pigz: <integer> (default = 0)

Use pigz instead of gzip when N>0. N=1 uses half of cores, N>1 uses N as thread count.

pool: <string>

Backup all known guest systems included in the specified pool.

remove: <boolean> (default = 1)

Remove old backup files if there are more than maxfiles backup files.

script: <string>

Use specified hook script.

stdexcludes: <boolean> (default = 1)

Exclude temporary files and logs.

stopwait: <integer> (0 - N) (default = 10)

Maximal time to wait until a guest system is stopped (minutes).

storage: <string>

Store resulting file to this storage.

tmpdir: <string>

Store temporary files to specified directory.

Example vzdump.conf Configuration
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000

Hook Scripts

You can specify a hook script with option --script. This script is called at various phases of the backup process, with parameters accordingly set. You can find an example in the documentation directory (vzdump-hook-script.pl).
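As a rough sketch only (the shipped example is a Perl script), a hook script could look like the following; the phase names, positional arguments and the TARFILE environment variable are assumptions modeled on vzdump-hook-script.pl, which remains the authoritative reference:

#!/bin/bash
# Hypothetical vzdump hook script sketch.
# vzdump passes the phase as $1; for per-guest phases, mode and
# guest ID are assumed to follow as $2 and $3.
phase="$1"
case "$phase" in
    backup-start)
        echo "hook: starting backup of guest $3 (mode: $2)"
        ;;
    backup-end)
        # TARFILE is assumed to point at the finished archive.
        echo "hook: archive written to ${TARFILE:-unknown}"
        ;;
esac
exit 0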

File Exclusions

Note: this option is only available for container backups.

vzdump skips the following files by default (disable with the option --stdexcludes 0):

/tmp/?*
/var/tmp/?*
/var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

# vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

(this excludes everything under /tmp/ and any path matching /var/foo*)

Configuration files are also stored inside the backup archive (in ./etc/vzdump/) and will be correctly restored.

Examples

Simply dump guest 777 - no snapshot, just archive the guest private area and configuration files to the default dump directory (usually /var/lib/vz/dump/).

# vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

# vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

# vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and non-default dump directory.

# vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

# vzdump 101 102 103 --mailto root

Back up all guests, excluding 101 and 102.

# vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600

# pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601

# qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4 GB root file system, using pipes.

# vzdump 101 --stdout | pct restore --rootfs 4 300 -
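A target storage for the restored disks can also be given explicitly (the storage ID is an example; see man qmrestore):

# qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --storage local-lvm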