[PVE-User] online migration fails with processor type "KVM64"

Thomas Naumann thomas.naumann at ovgu.de
Wed Oct 12 11:32:34 CEST 2016


Hi all,

Cluster setup:

- 3 nodes / proxmox 4.3-1/e7cdc165

- CPU on all nodes is 8x Quad-Core AMD Opteron 2356 (2 sockets)

- RAM: node 1 = 16 GB, nodes 2 and 3 = 32 GB each

- storage for VMs = DRBD (drbd-utils version 8.9.8-1, drbdmanage version 
0.97.3-1)


Issue:

If the processor type of a KVM-based VM is set to "KVM64" (the default), 
online migration of the VM is only possible between node 1 and node 2 (in 
both directions). Online migration from 1 to 3 and from 2 to 3 does not 
work; it aborts with the following error message:

"task started by HA resource agent
Oct 10 13:59:17 starting migration of VM 100 to node '3'
Oct 10 13:59:17 copying disk images
Oct 10 13:59:18 starting VM 100 on remote node '3'
Oct 10 13:59:25 start failed: command '/usr/bin/kvm -id 100 -chardev 
'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 
'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/100.pid 
-daemonize -smbios 'type=1,uuid=f702c987-cb8f-4f39-ab62-cc3952b73811' 
-name OPN -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' 
-vga qxl -vnc unix:/var/run/qemu-server/100.vnc,x509,password -cpu 
qemu64,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -object 
'memory-backend-ram,id=ram-node0,size=2048M' -numa 
'node,nodeid=0,cpus=0-1,memdev=ram-node0' -k de -device 
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 
'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -spice 
'tls-port=61001,addr=localhost,tls-ciphers=DES-CBC3-SHA,seamless-migration=on' 
-device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 
'spicevmc,id=vdagent,name=vdagent' -device 
'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -device 
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 
'initiator-name=iqn.1993-08.org.debian:01:4f3adc7457e0' -drive 
'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 
'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 
'file=/dev/drbd/by-res/vm-100-disk-1/0,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' 
-device 
'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' 
-netdev 
'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' 
-device 
'virtio-net-pci,mac=8A:78:12:6D:B0:D7,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' 
-netdev 
'type=tap,id=net1,ifname=tap100i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' 
-device 
'virtio-net-pci,mac=16:82:20:00:FC:3F,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301' 
-netdev 
'type=tap,id=net2,ifname=tap100i2,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' 
-device 
'virtio-net-pci,mac=A6:50:31:4E:BF:01,netdev=net2,bus=pci.0,addr=0x14,id=net2,bootindex=302' 
-machine 'type=pc-i440fx-2.6' -incoming 
unix:/run/qemu-server/100.migrate -S' failed: exit code 1
Oct 10 13:59:25 ERROR: online migrate failure - command '/usr/bin/ssh -o 
'BatchMode=yes' root@ qm start 100 --skiplock --migratedfrom 2 
--stateuri unix --machine pc-i440fx-2.6' failed: exit code 255
Oct 10 13:59:25 aborting phase 2 - cleanup resources
Oct 10 13:59:25 migrate_cancel
Oct 10 13:59:27 ERROR: migration finished with problems (duration 00:00:10)
TASK ERROR: migration problems"


If the VM is shut down, offline migration between all 3 nodes works. 
Trying to start the VM on node 3 afterwards fails with the following 
error message:

"warning: host doesn't support requested feature: 
CPUID.80000001H:EDX.nx|xd [bit 20]
kvm: Host doesn't support requested features"
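
For completeness, this is roughly how the flag named in the warning can be 
checked on each node (assuming it shows up as "nx" in /proc/cpuinfo, which 
is its usual name under Linux):

# run on every node: does the CPU report the nx/xd flag?
grep -m1 -o -w nx /proc/cpuinfo || echo "nx not reported on this node"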


If the processor type of the VM is set to "host", online migration between 
all three nodes works fine, as expected.
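
For reference, switching the processor type from the CLI should be roughly 
the following (besides doing it in the GUI):

qm set 100 --cpu host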


Does anyone know the reason for this behavior and how to resolve it?


-- 

best regards
Thomas



