How to Migrate Virtual Machines Between Proxmox Servers

This article provides guidance on migrating a virtual machine from one Proxmox node to another. Common reasons for migrating a VM between Proxmox servers include decommissioning old servers and rebalancing VM workloads across hypervisor nodes. A virtual machine can also be migrated in disaster recovery scenarios.

The tool used in this article is vzdump, a utility available in every Proxmox server installation that creates consistent backups of running virtual machines and containers (a VMA archive for QEMU guests, a tar archive for containers). The archive also contains the guest's configuration files. We will demonstrate how to use the vzdump command-line tool to export a virtual machine as a backup archive, move the generated file to a different PVE host, and import and run the instance on the new host.

Back up a Proxmox Virtual Machine using vzdump

The most basic usage of the vzdump utility is a simple dump of the guest system without a snapshot. This archives the guest's private data and configuration files to the default dump directory, typically /var/lib/vz/dump/.

Command usage documentation can be checked using man:

man vzdump

The command usage syntax is:

vzdump <VMID> [OPTIONS]

Some of the options that can be used with the utility:

  • <vmid> The ID of the guest system you want to back up.
  • --compress <0 | 1 | gzip | lzo | zstd> Compress the dump file.
  • --dumpdir <string> Store resulting files in the specified directory.

List the virtual machines on host 1:

# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       101 AD-Server-2022       stopped    8192             100.00 0
       102 pfsense              stopped    4096              32.00 0
       104 Ubuntu-22-Desktop    running    16384             50.00 1231

For containers:

pct list

Back up the virtual machine or container by executing one of the following commands:

vzdump <VMID> --compress 1
vzdump <ContainerID> --compress 1

During the backup process, the file can be compressed with one of the following algorithms: lzo, gzip, or zstd. If you enable compression without naming an algorithm (--compress 1), lzo is used, as the sample output below shows.

vzdump <VMID> --compress gzip

Resulting file extensions from compression

  • .zst: For Zstandard (zstd) compression
  • .gz or .tgz: For gzip compression
  • .lzo: For lzo compression
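
For instance, a zstd-compressed backup (not shown in the examples above) would produce a .vma.zst archive. A quick sketch using the example VM ID 104:

vzdump 104 --compress zstd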

You can also specify the directory where the backups are to be saved.

vzdump <VMID> --compress gzip --dumpdir /home/backups

For a container:

vzdump <ContainerID> --compress gzip --dumpdir /home/backups
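
If /home/backups does not already exist, create it first, since vzdump does not create the dump directory for you:

mkdir -p /home/backups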

Sample output from the backup command:

INFO: starting new backup job: vzdump 104 --compress 1
INFO: Starting Backup of VM 104 (qemu)
INFO: Backup started at 2024-12-06 21:28:43
INFO: status = running
INFO: VM Name: Ubuntu-22-Desktop
INFO: include disk 'scsi0' 'local:104/vm-104-disk-0.qcow2' 50G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo'
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
INFO: started backup task '179beeea-4d05-4c7b-bab6-0896f2a4a35c'
INFO: resuming VM again
INFO:   7% (3.9 GiB of 50.0 GiB) in 3s, read: 1.3 GiB/s, write: 437.1 MiB/s
INFO:  10% (5.4 GiB of 50.0 GiB) in 6s, read: 508.7 MiB/s, write: 491.1 MiB/s
INFO:  15% (7.8 GiB of 50.0 GiB) in 9s, read: 806.0 MiB/s, write: 786.2 MiB/s
INFO:  19% (9.9 GiB of 50.0 GiB) in 12s, read: 728.6 MiB/s, write: 687.3 MiB/s
INFO:  23% (11.6 GiB of 50.0 GiB) in 15s, read: 593.0 MiB/s, write: 577.3 MiB/s
INFO:  27% (14.0 GiB of 50.0 GiB) in 18s, read: 802.6 MiB/s, write: 769.8 MiB/s
INFO:  32% (16.4 GiB of 50.0 GiB) in 21s, read: 837.4 MiB/s, write: 820.7 MiB/s
INFO:  36% (18.4 GiB of 50.0 GiB) in 24s, read: 681.5 MiB/s, write: 632.9 MiB/s
INFO:  40% (20.3 GiB of 50.0 GiB) in 27s, read: 625.9 MiB/s, write: 597.3 MiB/s
INFO:  45% (22.6 GiB of 50.0 GiB) in 30s, read: 801.1 MiB/s, write: 765.0 MiB/s
INFO:  49% (25.0 GiB of 50.0 GiB) in 33s, read: 802.3 MiB/s, write: 776.4 MiB/s
INFO:  54% (27.0 GiB of 50.0 GiB) in 36s, read: 700.5 MiB/s, write: 682.9 MiB/s
INFO:  56% (28.0 GiB of 50.0 GiB) in 39s, read: 348.4 MiB/s, write: 337.0 MiB/s
INFO:  60% (30.4 GiB of 50.0 GiB) in 42s, read: 814.4 MiB/s, write: 765.6 MiB/s
INFO:  64% (32.2 GiB of 50.0 GiB) in 45s, read: 611.4 MiB/s, write: 579.4 MiB/s
INFO:  68% (34.3 GiB of 50.0 GiB) in 48s, read: 725.0 MiB/s, write: 683.0 MiB/s
INFO:  71% (35.8 GiB of 50.0 GiB) in 51s, read: 495.4 MiB/s, write: 475.7 MiB/s
INFO:  74% (37.3 GiB of 50.0 GiB) in 54s, read: 529.2 MiB/s, write: 502.4 MiB/s
INFO:  79% (39.7 GiB of 50.0 GiB) in 57s, read: 799.9 MiB/s, write: 766.0 MiB/s
INFO:  82% (41.2 GiB of 50.0 GiB) in 1m, read: 519.6 MiB/s, write: 498.8 MiB/s
INFO:  85% (42.8 GiB of 50.0 GiB) in 1m 3s, read: 536.5 MiB/s, write: 495.1 MiB/s
INFO:  89% (44.5 GiB of 50.0 GiB) in 1m 6s, read: 594.5 MiB/s, write: 590.0 MiB/s
INFO:  93% (46.7 GiB of 50.0 GiB) in 1m 9s, read: 729.8 MiB/s, write: 669.5 MiB/s
INFO:  96% (48.1 GiB of 50.0 GiB) in 1m 12s, read: 480.9 MiB/s, write: 472.1 MiB/s
INFO: 100% (50.0 GiB of 50.0 GiB) in 1m 15s, read: 662.3 MiB/s, write: 579.6 MiB/s
INFO: backup is sparse: 4.77 GiB (9%) total zero data
INFO: transferred 50.00 GiB in 75 seconds (682.7 MiB/s)
INFO: archive file size: 22.76GB
INFO: Finished Backup of VM 104 (00:01:18)
INFO: Backup finished at 2024-12-06 21:30:01
INFO: Backup job finished successfully

The backup file will be located in the default backup directory:

# ls /var/lib/vz/dump/
vzdump-qemu-104-2024_12_06-21_28_43.log  vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo

Move the generated VM backup archive

Use the du command to confirm the size of your backup.

# du -sh /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo
23G	/var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo

For a containerized workload:

# du -sh /var/lib/vz/dump/vzdump-lxc-101-2024_01_17-00_29_42.tar
6.7G	/var/lib/vz/dump/vzdump-lxc-101-2024_01_17-00_29_42.tar

After the backup file is generated, use scp or rsync to copy it to the destination host for restoration.

scp /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo root@PVE2IP:/var/lib/vz/dump/

Where:

  • /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo is the path to the VM backup
  • PVE2IP is to be replaced with the actual IP address of the destination Proxmox server
  • The backup is copied to the /var/lib/vz/dump/ directory on the destination server
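
rsync is a useful alternative for large archives, since it shows progress and can resume interrupted transfers. A sketch using the same illustrative paths:

# copy with archive mode, progress display, and partial-transfer support
rsync -avP /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo root@PVE2IP:/var/lib/vz/dump/

# optional: run on both hosts and compare the output to verify the transfer
sha256sum /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo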

Restoring VM from Backup archive on Proxmox Server

Use the qmrestore command to restore the virtual machine from a QemuServer vzdump backup.

qmrestore <archive> <vmid> [OPTIONS]

       Restore QemuServer vzdump backups.

       <archive>: <string>
           The backup file. You can pass - to read from standard input.

       <vmid>: <integer> (100 - 999999999)
           The (unique) ID of the VM.

       --bwlimit <number> (0 - N)
           Override I/O bandwidth limit (in KiB/s).

       --force <boolean>
           Allow to overwrite existing VM.

       --live-restore <boolean>
           Start the VM immediately from the backup and restore in background. PBS only.

       --pool <string>
           Add the VM to the specified pool.

       --storage <string>
           Default storage.

       --unique <boolean>
           Assign a unique random ethernet address.
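
For example, if the source VM will keep running for some time after the copy, restoring with --unique avoids putting a duplicate MAC address on the network. A sketch (the target VM ID 601 is illustrative):

qmrestore /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo 601 --unique 1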

If the VM being migrated is still running on the source host, stop it to avoid conflicts:

qm stop <VMID>

To stop a container, use:

pct stop <ContainerID>

As an example, to restore our backup file as VM ID 601, we would run:

qmrestore /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo 601

To get a list of available storage pools, run the following command:

root@pve02:~# pvesm status
Name              Type     Status           Total            Used       Available        %
local              dir     active       100597760        11813608        88784152   11.74%
local-lvm      lvmthin     active       365760512         7790698       357969813    2.13%

A storage pool can be specified using --storage.

Example of restoring to a local-zfs storage pool instead of local-lvm (assuming such a pool exists on the destination host):

qmrestore --storage local-zfs /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo 601

Example of restoring an LXC container:

pct restore <NEWID> vzdump-lxc-104-2024_01_17-00_38_15.tar.gz

# Specifying storage pool
pct restore <NEWID> vzdump-lxc-104-2024_01_17-00_38_15.tar.gz --storage local-zfs

Below is sample command output from the restoration process.

restore vma archive: lzop -d -c /var/lib/vz/dump/vzdump-qemu-104-2024_12_06-21_28_43.vma.lzo | vma extract -v -r /var/tmp/vzdumptmp1055630.fifo - /var/tmp/vzdumptmp1055630
CFG: size: 450 name: qemu-server.conf
DEV: dev_id=1 size: 53687091200 devname: drive-scsi0
CTIME: Wed Dec  6 21:28:46 2024
Formatting '/var/lib/vz/images/601/vm-601-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=53687091200 lazy_refcounts=off refcount_bits=16
new volume ID is 'local:601/vm-601-disk-0.qcow2'
map 'drive-scsi0' to '/var/lib/vz/images/601/vm-601-disk-0.qcow2' (write zeros = 0)
progress 1% (read 536870912 bytes, duration 0 sec)
progress 2% (read 1073741824 bytes, duration 0 sec)
progress 3% (read 1610612736 bytes, duration 0 sec)
progress 4% (read 2147483648 bytes, duration 0 sec)
progress 5% (read 2684354560 bytes, duration 1 sec)
progress 6% (read 3221225472 bytes, duration 1 sec)
progress 7% (read 3758096384 bytes, duration 3 sec)
progress 8% (read 4294967296 bytes, duration 4 sec)
progress 9% (read 4831838208 bytes, duration 5 sec)
progress 10% (read 5368709120 bytes, duration 7 sec)
progress 11% (read 5905580032 bytes, duration 8 sec)
progress 12% (read 6442450944 bytes, duration 9 sec)
progress 13% (read 6979321856 bytes, duration 10 sec)
progress 14% (read 7516192768 bytes, duration 11 sec)
progress 15% (read 8053063680 bytes, duration 12 sec)
progress 16% (read 8589934592 bytes, duration 13 sec)
progress 17% (read 9126805504 bytes, duration 14 sec)
progress 18% (read 9663676416 bytes, duration 15 sec)
progress 19% (read 10200547328 bytes, duration 16 sec)
progress 20% (read 10737418240 bytes, duration 17 sec)
progress 21% (read 11274289152 bytes, duration 19 sec)
progress 22% (read 11811160064 bytes, duration 20 sec)
progress 23% (read 12348030976 bytes, duration 21 sec)
progress 24% (read 12884901888 bytes, duration 22 sec)
progress 25% (read 13421772800 bytes, duration 23 sec)
progress 26% (read 13958643712 bytes, duration 24 sec)
progress 27% (read 14495514624 bytes, duration 25 sec)
progress 28% (read 15032385536 bytes, duration 26 sec)
progress 29% (read 15569256448 bytes, duration 27 sec)
progress 30% (read 16106127360 bytes, duration 28 sec)
progress 31% (read 16642998272 bytes, duration 29 sec)
progress 32% (read 17179869184 bytes, duration 30 sec)
progress 33% (read 17716740096 bytes, duration 31 sec)
progress 34% (read 18253611008 bytes, duration 32 sec)
progress 35% (read 18790481920 bytes, duration 33 sec)
progress 36% (read 19327352832 bytes, duration 34 sec)
progress 37% (read 19864223744 bytes, duration 36 sec)
progress 38% (read 20401094656 bytes, duration 37 sec)
progress 39% (read 20937965568 bytes, duration 38 sec)
progress 40% (read 21474836480 bytes, duration 39 sec)
progress 41% (read 22011707392 bytes, duration 40 sec)
progress 42% (read 22548578304 bytes, duration 41 sec)
progress 43% (read 23085449216 bytes, duration 42 sec)
progress 44% (read 23622320128 bytes, duration 43 sec)
progress 45% (read 24159191040 bytes, duration 45 sec)
progress 46% (read 24696061952 bytes, duration 46 sec)
progress 47% (read 25232932864 bytes, duration 47 sec)
progress 48% (read 25769803776 bytes, duration 48 sec)
progress 49% (read 26306674688 bytes, duration 49 sec)
progress 50% (read 26843545600 bytes, duration 50 sec)
progress 51% (read 27380416512 bytes, duration 51 sec)
progress 52% (read 27917287424 bytes, duration 52 sec)
progress 53% (read 28454158336 bytes, duration 53 sec)
progress 54% (read 28991029248 bytes, duration 54 sec)
progress 55% (read 29527900160 bytes, duration 56 sec)
progress 56% (read 30064771072 bytes, duration 57 sec)
progress 57% (read 30601641984 bytes, duration 58 sec)
progress 58% (read 31138512896 bytes, duration 59 sec)
progress 59% (read 31675383808 bytes, duration 60 sec)
progress 60% (read 32212254720 bytes, duration 61 sec)
progress 61% (read 32749125632 bytes, duration 62 sec)
progress 62% (read 33285996544 bytes, duration 63 sec)
progress 63% (read 33822867456 bytes, duration 65 sec)
progress 64% (read 34359738368 bytes, duration 66 sec)
progress 65% (read 34896609280 bytes, duration 67 sec)
progress 66% (read 35433480192 bytes, duration 68 sec)
progress 67% (read 35970351104 bytes, duration 70 sec)
progress 68% (read 36507222016 bytes, duration 71 sec)
progress 69% (read 37044092928 bytes, duration 72 sec)
progress 70% (read 37580963840 bytes, duration 73 sec)
progress 71% (read 38117834752 bytes, duration 75 sec)
progress 72% (read 38654705664 bytes, duration 76 sec)
progress 73% (read 39191576576 bytes, duration 77 sec)
progress 74% (read 39728447488 bytes, duration 78 sec)
progress 75% (read 40265318400 bytes, duration 79 sec)
progress 76% (read 40802189312 bytes, duration 81 sec)
progress 77% (read 41339060224 bytes, duration 82 sec)
progress 78% (read 41875931136 bytes, duration 82 sec)
progress 79% (read 42412802048 bytes, duration 84 sec)
progress 80% (read 42949672960 bytes, duration 85 sec)
progress 81% (read 43486543872 bytes, duration 86 sec)
progress 82% (read 44023414784 bytes, duration 88 sec)
progress 83% (read 44560285696 bytes, duration 89 sec)
progress 84% (read 45097156608 bytes, duration 90 sec)
progress 85% (read 45634027520 bytes, duration 91 sec)
progress 86% (read 46170898432 bytes, duration 92 sec)
progress 87% (read 46707769344 bytes, duration 94 sec)
progress 88% (read 47244640256 bytes, duration 95 sec)
progress 89% (read 47781511168 bytes, duration 96 sec)
progress 90% (read 48318382080 bytes, duration 97 sec)
progress 91% (read 48855252992 bytes, duration 98 sec)
progress 92% (read 49392123904 bytes, duration 99 sec)
progress 93% (read 49928994816 bytes, duration 100 sec)
progress 94% (read 50465865728 bytes, duration 101 sec)
progress 95% (read 51002736640 bytes, duration 103 sec)
progress 96% (read 51539607552 bytes, duration 104 sec)
progress 97% (read 52076478464 bytes, duration 106 sec)
progress 98% (read 52613349376 bytes, duration 106 sec)
progress 99% (read 53150220288 bytes, duration 107 sec)
progress 100% (read 53687091200 bytes, duration 108 sec)
total bytes read 53687091200, sparse bytes 5123014656 (9.54%)
space reduction due to 4K zero blocks 0.96%
rescan volumes...

Use the qm command to list virtual machines on the destination server.

# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 pfsense              running    16384             32.00 3638583
       102 Fedora-39            running    2048              32.00 460944
       103 WinServer-2019       running    8192              50.00 3226348
       104 WinServer-2022       running    8192              50.00 3229915
       198 Ubuntu-Bionic        running    8192              30.00 1248387
       199 workstation          running    8192              30.00 3671877
       201 k8smaster1           running    8192              50.00 20651
       211 k8sworker1           running    16384             30.00 21307
       212 k8sworker2           running    16384             30.00 21357
       213 k8sworker3           running    16384             30.00 21428
       601 Ubuntu-22-Desktop    stopped    16384             50.00 0

Start the VM once the migration is complete:

qm start <VMID>
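
You can then confirm its state with qm status:

qm status <VMID>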

If your virtual machine had a static IP address configured and the network configuration differs on the new host, you'll need to log into the instance and assign an appropriate IP address.
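
For example, on a Linux guest you can inspect and temporarily set addressing from the Proxmox console; the interface name ens18 and the addresses below are purely illustrative:

# inside the guest: inspect current addresses
ip addr show

# temporarily assign an address and default route (illustrative values)
ip addr add 192.168.10.50/24 dev ens18
ip route add default via 192.168.10.1

Make the change persistent through the guest's normal network configuration tooling (for example, netplan on Ubuntu).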
