[pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info

Alexandre DERUMIER aderumier at odiso.com
Sat Dec 29 14:51:41 CET 2012


Hi Stefan,
Great to know that you finally found it! (Have you replied to Paolo Bonzini? The last response I saw was that it was not working.)


>>Alexandre, might you try my rebased patches on top of pve-qemu 1.3? 
So, do you want to apply the first patch only (to fix your bug), or the whole patch set (equal to the current git code)?
Not sure that Dietmar wants to apply the big patch set ;)


I'll retest the whole patch set, because I was having migration problems (VM paused with an error) with the latest qemu git.

----- Mail original ----- 

De: "Stefan Priebe" <s.priebe at profihost.ag> 
À: "Alexandre DERUMIER" <aderumier at odiso.com> 
Cc: pve-devel at pve.proxmox.com, "Dietmar Maurer" <dietmar at proxmox.com> 
Envoyé: Samedi 29 Décembre 2012 14:31:04 
Objet: Re: [pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info 

Hi, 

I could fix my problem by applying the first 22 migration patches from 
this series: 
http://www.mail-archive.com/qemu-commits@nongnu.org/msg02028.html 

Alexandre, might you try my rebased patches on top of pve-qemu 1.3? 

Attached is a bunch of patches to apply on top of the qemu-kvm-src from 
PVE. Please also apply the internal async patch; see the separate patch. 

Lastly, you need to remove the always-update-expected-downtime.patch 
from the debian series file. 

Thanks! 

Stefan 
Am 28.12.2012 09:19, schrieb Alexandre DERUMIER: 
> I'm looking into the qemu source code, 
> in arch_init.c -> ram_save_iterate(). 
> 
> It should return false, to retry the transfer iteration for the remaining memory, until the last step: ram_save_complete(). 
> 
> It seems that for Stefan, it returns 0 directly with migrate_set_downtime = 1. 
> 
> I think the interesting part is here: 
> 
> Maybe we can add some logs? (BTW, is it possible to log qemu's STDOUT to a file somewhere?) 
> 
> 
> 
> bwidth = qemu_get_clock_ns(rt_clock) - bwidth; 
> bwidth = (bytes_transferred - bytes_transferred_last) / bwidth; 
> 
> /* if we haven't transferred anything this round, force 
>  * expected_downtime to a very high value, but without 
>  * crashing */ 
> if (bwidth == 0) { 
>     bwidth = 0.000001; 
> } 
> 
> qemu_put_be64(f, RAM_SAVE_FLAG_EOS); 
> 
> expected_downtime = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth; 
> DPRINTF("ram_save_live: expected(%" PRIu64 ") <= max(%" PRIu64 ")?\n", 
>         expected_downtime, migrate_max_downtime()); 
> 
> if (expected_downtime <= migrate_max_downtime()) { 
>     migration_bitmap_sync(); 
>     expected_downtime = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth; 
>     s->expected_downtime = expected_downtime / 1000000; /* ns -> ms */ 
> 
>     return expected_downtime <= migrate_max_downtime(); 
> } 
> return 0; 
> 
> 
> 
> 
> 
> 
> ----- Mail original ----- 
> 
> De: "Dietmar Maurer" <dietmar at proxmox.com> 
> À: "Stefan Priebe - Profihost AG" <s.priebe at profihost.ag> 
> Cc: "Alexandre DERUMIER" <aderumier at odiso.com>, pve-devel at pve.proxmox.com 
> Envoyé: Vendredi 28 Décembre 2012 07:39:50 
> Objet: RE: [pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info 
> 
>>>> Am 27.12.2012 16:21, schrieb Alexandre DERUMIER: 
>>>>> But if it's work fine for you with 1s migrate_downtime, we need to 
>>>>> find where the problem is in the current qemu 1.3 code ... (maybe 
>>>>> qemu mailing can help) 
>>>> Nobody answered my last mails... 
>>> 
>>> What information do you miss (what last mails?)? 
>> My last mails to the qemu mailing list, regarding my migration problems. 
> 
> Ah, yes. I will do further tests today to reproduce the bug here. 
> 


