[pve-devel] [PVE-User] online migration broken in latest updates - "unknown command 'mtunnel'"

Yannis Milios yannis.milios at gmail.com
Fri Nov 11 13:45:16 CET 2016


Just tested it with pve-qemu-kvm 2.7.0-6 and it works fine, thanks!

On Fri, Nov 11, 2016 at 12:28 PM, Wolfgang Bumiller <w.bumiller at proxmox.com> wrote:

> Any chance you could compare pve-qemu-kvm 2.7.0-5 and this test build:
> <http://download2.proxmox.com/temp/pve/pve-qemu-kvm_2.7.0-6_amd64.deb> ?
>
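> (For anyone wanting to run the same comparison - a sketch of how the
> test build could be installed, assuming the node already runs
> pve-qemu-kvm 2.7.0-5 so the dependencies are in place:
>
> wget http://download2.proxmox.com/temp/pve/pve-qemu-kvm_2.7.0-6_amd64.deb
> dpkg -i pve-qemu-kvm_2.7.0-6_amd64.deb
>
> Running VMs keep using the old binary until they are stopped and
> started again, or migrated.)
>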
> On Fri, Nov 11, 2016 at 12:11:27PM +0000, Yannis Milios wrote:
> > Not sure if it's related, but after upgrading yesterday to the latest
> > updates, Ceph snapshots take a very long time to complete and finally
> > they fail.
> > This happens only if the VM is running and I check the 'include RAM'
> > box in the snapshot window. All 3 pve/ceph nodes are upgraded to the
> > latest updates.
> >
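> > (For the record, the same RAM-including snapshot can also be
> > triggered from the CLI - a sketch, with 100 as a placeholder VMID and
> > 'test1' as a placeholder snapshot name:
> >
> > qm snapshot 100 test1 --vmstate 1
> >
> > where --vmstate 1 corresponds to the 'include RAM' checkbox.)
> >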
> > I have 3 pve nodes with the ceph storage role on them. Below follows
> > some more info:
> >
> > proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
> > pve-manager: 4.3-10 (running version: 4.3-10/7230e60f)
> > pve-kernel-4.4.21-1-pve: 4.4.21-71
> > pve-kernel-4.4.19-1-pve: 4.4.19-66
> > lvm2: 2.02.116-pve3
> > corosync-pve: 2.4.0-1
> > libqb0: 1.0-1
> > pve-cluster: 4.0-47
> > qemu-server: 4.0-94
> > pve-firmware: 1.1-10
> > libpve-common-perl: 4.0-80
> > libpve-access-control: 4.0-19
> > libpve-storage-perl: 4.0-68
> > pve-libspice-server1: 0.12.8-1
> > vncterm: 1.2-1
> > pve-docs: 4.3-14
> > pve-qemu-kvm: 2.7.0-6
> > pve-container: 1.0-81
> > pve-firewall: 2.0-31
> > pve-ha-manager: 1.0-35
> > ksm-control-daemon: 1.2-1
> > glusterfs-client: 3.5.2-2+deb8u2
> > lxc-pve: 2.0.5-1
> > lxcfs: 2.0.4-pve2
> > criu: 1.6.0-1
> > novnc-pve: 0.5-8
> > smartmontools: 6.5+svn4324-1~pve80
> > zfsutils: 0.6.5.8-pve13~bpo80
> > openvswitch-switch: 2.5.0-1
> > ceph: 0.94.9-1~bpo80+1
> >
> > ceph status
> >     cluster 32d19f44-fcef-4863-ad94-cb8d738fe179
> >      health HEALTH_OK
> >      monmap e3: 3 mons at {0=192.168.148.65:6789/0,1=192.168.149.95:6789/0,2=192.168.149.115:6789/0}
> >             election epoch 260, quorum 0,1,2 0,1,2
> >      osdmap e740: 6 osds: 6 up, 6 in
> >       pgmap v2319446: 120 pgs, 1 pools, 198 GB data, 51642 objects
> >             393 GB used, 2183 GB / 2576 GB avail
> >                  120 active+clean
> >   client io 4973 B/s rd, 115 kB/s wr, 35 op/s
> >
> >
> >
> > On Fri, Nov 11, 2016 at 7:05 AM, Thomas Lamprecht <t.lamprecht at proxmox.com> wrote:
> >
> > > On 11/10/2016 10:35 PM, Lindsay Mathieson wrote:
> > >
> > >> On 11/11/2016 7:11 AM, Thomas Lamprecht wrote:
> > >>
> > >>> Are you sure you upgraded everything, i.e. used:
> > >>> apt update
> > >>> apt full-upgrade
> > >>>
> > >>
> > >> Resolved it thanks Thomas - I hadn't updated the *destination* server.
> > >>
> > >>
> > >
> > > makes sense - this should have occurred to me a few days ago, it
> > > would not have been too hard to catch :/
> > >
> > > anyway, for anyone reading this:
> > > When upgrading qemu-server to version 4.0-93 or newer you should
> > > also upgrade the pve-cluster package on all other nodes to version
> > > 4.0-47 or newer, otherwise migrations to those nodes will not work -
> > > we use a new command to detect whether we should send the traffic
> > > over a separate migration network.
> > >
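> > > (A quick way to check this before migrating - a sketch, with
> > > node1..node3 as placeholder hostnames for your cluster nodes:
> > >
> > > for n in node1 node2 node3; do
> > >     echo "== $n =="
> > >     ssh "$n" "pveversion -v | grep -E 'qemu-server|pve-cluster'"
> > > done
> > >
> > > Every node should report pve-cluster 4.0-47 or newer before you
> > > migrate to it from a node running qemu-server 4.0-93 or newer.)
> > >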
> > > cheers,
> > > Thomas


