[pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info

Stefan Priebe s.priebe at profihost.ag
Thu Dec 27 19:37:59 CET 2012


Hi,

yes it works really fine with qemu 1.4:

Same VM, 2GB in use out of 4GB:

Dec 27 19:33:15 starting migration of VM 105 to node 'cloud1-1203' (10.255.0.22)
Dec 27 19:33:15 copying disk images
Dec 27 19:33:15 starting VM 105 on remote node 'cloud1-1203'
Dec 27 19:33:19 starting migration tunnel
Dec 27 19:33:20 starting online/live migration on port 60000
Dec 27 19:33:20 migrate_set_speed: 8589934592
Dec 27 19:33:20 migrate_set_downtime: 1
Dec 27 19:33:22 migration status: active (transferred 545515913, remaining 1537454080, total 2156396544, expected downtime 0)
Dec 27 19:33:24 migration status: active (transferred 993620520, remaining 1089703936, total 2156396544, expected downtime 0)
Dec 27 19:33:26 migration status: active (transferred 1460008672, remaining 623603712, total 2156396544, expected downtime 0)
Dec 27 19:33:28 migration status: active (transferred 1927652198, remaining 156454912, total 2156396544, expected downtime 0)
Dec 27 19:33:29 migration speed: 227.56 MB/s - downtime 652 ms
Dec 27 19:33:29 migration status: completed
Dec 27 19:33:33 migration finished successfully (duration 00:00:18)
TASK OK
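
Side note for anyone replaying this by hand: the migrate_set_speed / migrate_set_downtime lines above are the QMP commands PVE issues right before the migration starts. A minimal Python sketch of doing the same directly over the monitor socket - the socket path and the qmp() helper are my own assumptions, not PVE code:

import json, socket

SOCK = '/var/run/qemu-server/105.qmp'       # hypothetical QMP socket path

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(SOCK)
r = s.makefile()                            # QMP replies are one JSON object per line

def qmp(cmd, **args):
    # send a command and return its reply, skipping async events
    s.sendall((json.dumps({'execute': cmd, 'arguments': args}) + '\n').encode())
    while True:
        resp = json.loads(r.readline())
        if 'return' in resp or 'error' in resp:
            return resp

json.loads(r.readline())                    # consume the QMP greeting banner
qmp('qmp_capabilities')                     # leave capability negotiation mode
qmp('migrate_set_speed', value=8589934592)  # bytes/s - 8 GiB/s, effectively unlimited
qmp('migrate_set_downtime', value=1.0)      # seconds - matches "migrate_set_downtime: 1"

Note the units: migrate_set_speed takes bytes per second, migrate_set_downtime takes seconds.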

Stefan
On 27.12.2012 15:40, Alexandre DERUMIER wrote:
> you are too fast ;) here is my package:
> http://odisoweb1.odiso.net/pve-qemu-kvm_1.3-10_amd64.deb
>
> series (patches attached in the mail)
> ------
> fr-ca-keymap-corrections.diff
> fairsched.diff
> pve-auth.patch
> vencrypt-auth-plain.patch
> enable-kvm-by-default.patch
> virtio-balloon-drop-old-stats-code.patch
> virtio-balloon-re-enable-balloon-stats.patch
> virtio-balloon-document-stats.patch
> virtio-balloon-fix-query.patch
>
>
> Does it work for you with migrate_downtime 1?
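> (migrate_downtime here is the per-VM option in qemu-server; I set it with
> something like "qm set 99999 -migrate_downtime 1" - syntax from memory,
> so treat it as a sketch.)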
>
>
> here are my logs:
>
> migration logs: small memory workload (default 0.030s migrate_downtime): downtime 29 ms!
>
> Dec 27 15:20:29 starting migration of VM 99999 to node 'kvmtest2' (10.3.94.47)
> Dec 27 15:20:29 copying disk images
> Dec 27 15:20:29 starting VM 99999 on remote node 'kvmtest2'
> Dec 27 15:20:31 starting migration tunnel
> Dec 27 15:20:32 starting online/live migration on port 60000
> Dec 27 15:20:32 migrate_set_speed: 8589934592
> Dec 27 15:20:34 migration status: active (transferred 76262747, remaining 8305565696, total 8397586432, expected downtime 0)
> Dec 27 15:20:36 migration status: active (transferred 170298066, remaining 8170553344, total 8397586432, expected downtime 0)
> Dec 27 15:20:38 migration status: active (transferred 178034093, remaining 4819476480, total 8397586432, expected downtime 0)
> Dec 27 15:20:40 migration status: active (transferred 205755801, remaining 4589043712, total 8397586432, expected downtime 0)
> Dec 27 15:20:42 migration status: active (transferred 231493719, remaining 4186116096, total 8397586432, expected downtime 0)
> Dec 27 15:20:44 migration status: active (transferred 294439405, remaining 3542519808, total 8397586432, expected downtime 0)
> Dec 27 15:20:46 migration status: active (transferred 301252962, remaining 441729024, total 8397586432, expected downtime 0)
> Dec 27 15:20:48 migration speed: 500.00 MB/s - downtime 29 ms
> Dec 27 15:20:48 migration status: completed
> Dec 27 15:20:52 migration finished successfully (duration 00:00:24)
> TASK OK
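>
> If you want to watch those numbers outside of the PVE task log, you can
> poll query-migrate over QMP yourself. A rough sketch, reusing the qmp()
> helper from the snippet above (the hyphenated field names are my reading
> of the qemu 1.4 schema, so treat them as assumptions):
>
> import time
>
> info = qmp('query-migrate')['return']
> while info.get('status') == 'active':
>     ram = info['ram']
>     # 'expected-downtime' is the value this patch series surfaces in the log
>     print(ram['transferred'], ram['remaining'], ram['total'],
>           info.get('expected-downtime'))
>     time.sleep(2)
>     info = qmp('query-migrate')['return']
> print('status:', info.get('status'), '- downtime (ms):', info.get('downtime'))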
>
>
>
> migration logs: playing HD video in VLC: downtime 600 ms (migrate_downtime set to 1 s because of a never-ending migration)
>
>
> Dec 27 15:34:37 starting migration of VM 99999 to node 'kvmtest2' (10.3.94.47)
> Dec 27 15:34:37 copying disk images
> Dec 27 15:34:37 starting VM 99999 on remote node 'kvmtest2'
> Dec 27 15:34:39 starting migration tunnel
> Dec 27 15:34:40 starting online/live migration on port 60000
> Dec 27 15:34:40 migrate_set_speed: 8589934592
> Dec 27 15:34:42 migration status: active (transferred 96367979, remaining 8285630464, total 8397586432, expected downtime 0)
> Dec 27 15:34:44 migration status: active (transferred 170482023, remaining 8142753792, total 8397586432, expected downtime 0)
> Dec 27 15:34:46 migration status: active (transferred 198946937, remaining 6157733888, total 8397586432, expected downtime 0)
> Dec 27 15:34:48 migration status: active (transferred 239722016, remaining 5490028544, total 8397586432, expected downtime 0)
> Dec 27 15:34:50 migration status: active (transferred 298664987, remaining 4960985088, total 8397586432, expected downtime 0)
> Dec 27 15:34:52 migration status: active (transferred 374755031, remaining 4403380224, total 8397586432, expected downtime 0)
> Dec 27 15:34:54 migration status: active (transferred 438843200, remaining 4119465984, total 8397586432, expected downtime 0)
> Dec 27 15:34:57 migration status: active (transferred 462321818, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:34:57 migration status: active (transferred 500708093, remaining 187273216, total 8397586432, expected downtime 0)
> Dec 27 15:35:01 migration status: active (transferred 525596322, remaining 66678784, total 8397586432, expected downtime 0)
> Dec 27 15:35:01 migration status: active (transferred 547180175, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:02 migration status: active (transferred 581027192, remaining 125509632, total 8397586432, expected downtime 0)
> Dec 27 15:35:03 migration status: active (transferred 603663775, remaining 40046592, total 8397586432, expected downtime 0)
> Dec 27 15:35:03 migration status: active (transferred 618874620, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:04 migration status: active (transferred 660269388, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:05 migration status: active (transferred 691792794, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:06 migration status: active (transferred 724227279, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:07 migration status: active (transferred 756320817, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:08 migration status: active (transferred 788061555, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:09 migration status: active (transferred 817596465, remaining 42708992, total 8397586432, expected downtime 0)
> Dec 27 15:35:10 migration status: active (transferred 827831246, remaining 14131200, total 8397586432, expected downtime 0)
> Dec 27 15:35:10 migration status: active (transferred 836576473, remaining 0, total 8397586432, expected downtime 0)
> Dec 27 15:35:10 migration status: active (transferred 868789540, remaining 32485376, total 8397586432, expected downtime 0)
> Dec 27 15:36:10 migration status: active (transferred 2892113024, remaining 50008064, total 8397586432, expected downtime 0)
> Dec 27 15:36:10 migration status: active (transferred 2899587577, remaining 32702464, total 8397586432, expected downtime 0)
> Dec 27 15:36:11 migration status: active (transferred 2928403594, remaining 13905920, total 8397586432, expected downtime 0)
> Dec 27 15:36:12 migration speed: 86.96 MB/s - downtime 603 ms
> Dec 27 15:36:12 migration status: completed
> Dec 27 15:36:16 migration finished successfully (duration 00:01:39)
> TASK OK
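>
> Rough arithmetic on why the downtime bump matters here: qemu only stops
> the guest once the remaining dirty memory can be sent within the allowed
> downtime. At the ~87 MB/s seen above, migrate_downtime 1 gives a threshold
> of roughly 87 MB/s x 1 s = 87 MB; the default 0.030 s allows only about
> 87 MB/s x 0.030 s = 2.6 MB, and VLC dirties pages faster than that, hence
> the never-ending migration. The log agrees: it completes once remaining
> drops to ~14 MB, with an actual downtime of 603 ms.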
>
> ----- Original Message -----
>
> From: "Stefan Priebe" <s.priebe at profihost.ag>
> To: "Alexandre DERUMIER" <aderumier at odiso.com>
> Cc: pve-devel at pve.proxmox.com, "Dietmar Maurer" <dietmar at proxmox.com>
> Sent: Thursday, 27 December 2012 15:22:45
> Subject: Re: [pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info
>
> Sorry, my fault ;-)
>
> Stefan
>
> On 27.12.2012 15:19, Stefan Priebe wrote:
>> Strangely, the status of the VM is always paused after migration.
>>
>> Stefan
>> On 27.12.2012 15:18, Stefan Priebe wrote:
>>> Hi,
>>>
>>> have now done the same.
>>>
>>> With current qemu git, migration is really fast with 1.6GB of memory in use:
>>> Dec 27 15:17:45 starting migration of VM 105 to node 'cloud1-1202' (10.255.0.20)
>>> Dec 27 15:17:45 copying disk images
>>> Dec 27 15:17:45 starting VM 105 on remote node 'cloud1-1202'
>>> Dec 27 15:17:48 starting online/live migration on tcp:10.255.0.20:60000
>>> Dec 27 15:17:48 migrate_set_speed: 8589934592
>>> Dec 27 15:17:48 migrate_set_downtime: 0.05
>>> Dec 27 15:17:52 migration speed: 512.00 MB/s - downtime 174 ms
>>> Dec 27 15:17:52 migration status: completed
>>> Dec 27 15:17:53 migration finished successfully (duration 00:00:09)
>>> TASK OK
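>>>
>>> For completeness: starting the migration itself is just the QMP migrate
>>> command with the tcp uri from this log. A one-line sketch with the qmp()
>>> helper from above:
>>>
>>> qmp('migrate', uri='tcp:10.255.0.20:60000')  # returns immediately; poll query-migrate for progress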
>>>
>>> It's so fast that I can't even check whether I see the issue while migrating.
>>>
>>> Greets,
>>> Stefan
>>>
>>> On 27.12.2012 14:26, Alexandre DERUMIER wrote:
>>>>>> The problem is that in the LATEST qemu git code they've changed a LOT of
>>>>>> include file locations, so nearly NO PVE patch applies...
>>>>
>>>> I'm currently building a pve-qemu-kvm based on qemu 1.4, with the basic patches:
>>>>
>>>> fr-ca-keymap-corrections.diff
>>>> fairsched.diff
>>>> pve-auth.patch
>>>> vencrypt-auth-plain.patch
>>>> enable-kvm-by-default.patch
>>>>
>>>> That should be enough to connect with VNC and test migration.
>>>>
>>>> I'll keep you posted.
>>>>
>>>>
>>>>
>>>> ----- Original Message -----
>>>>
>>>> From: "Stefan Priebe" <s.priebe at profihost.ag>
>>>> To: "Alexandre DERUMIER" <aderumier at odiso.com>
>>>> Cc: pve-devel at pve.proxmox.com, "Dietmar Maurer" <dietmar at proxmox.com>
>>>> Sent: Thursday, 27 December 2012 13:40:24
>>>> Subject: Re: [pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info
>>>>
>>>> Hi,
>>>> On 27.12.2012 13:39, Alexandre DERUMIER wrote:
>>>>>>> not right now - but I tested this yesterday and didn't see a
>>>>>>> difference, so I moved back to 3.6.11.
>>>>> I'll do a test with a 3.6 kernel too, to see if I get a difference
>>>>
>>>> Thanks! Will retest with pve kernel too.
>>>>
>>>>> Have you tried with the latest qemu 1.4 git?
>>>>> Because I'm looking into the code, and the change in the migration code
>>>>> is really huge.
>>>>> That way we could know whether it's a qemu migration code problem or not...
>>>>
>>>> The problem is that in the LATEST qemu git code they've changed a LOT of
>>>> include file locations, so nearly NO PVE patch applies...
>>>>
>>>> Stefan


