[pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info

Stefan Priebe s.priebe at profihost.ag
Sat Dec 29 14:31:04 CET 2012


Hi,

I could fix my problem by applying the first 22 migration patches of
this series:
http://www.mail-archive.com/qemu-commits@nongnu.org/msg02028.html

Alexandre, could you try my rebased patches on top of pve-qemu 1.3?
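
Regarding your question further down about logging: as far as I can
tell the DPRINTF calls in arch_init.c are compiled out by default, so
there is nothing to capture even if you redirect stdout. If I read the
file correctly, defining the debug macro near the top of arch_init.c
enables them (please double-check the macro name in your tree; the
messages go to the qemu process stdout, which you still have to
capture somewhere):

    /* arch_init.c: define this above the DPRINTF macro so the
     * migration debug messages are compiled in; they are written
     * to stdout */
    #define DEBUG_ARCH_INIT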

Attached is a set of patches to apply on top of the qemu-kvm-src from
PVE. Please also adjust the internal async patch; see the separate patch.

Finally, you need to remove always-update-expected-downtime.patch from
the debian series file.

Thanks!

Stefan
On 28.12.2012 09:19, Alexandre DERUMIER wrote:
> I'm looking into the qemu source code,
> in arch_init.c -> ram_save_iterate()
>
> It should return false to retry the transfer iteration for the remaining memory, until the last step: ram_save_complete()
>
> It seems that for Stefan it returns 0 directly with migrate_set_downtime = 1.
>
> I think the interesting part is here:
>
> Maybe we can add some logging? (BTW, is it possible to log qemu's STDOUT to a file somewhere?)
>
>
>
>      bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
>      bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;
>
>      /* if we haven't transferred anything this round, force
>       * expected_downtime to a very high value, but without
>       * crashing */
>      if (bwidth == 0) {
>          bwidth = 0.000001;
>      }
>
>      qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>
>      expected_downtime = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
>      DPRINTF("ram_save_live: expected(%" PRIu64 ") <= max(%" PRIu64 ")?\n",
>              expected_downtime, migrate_max_downtime());
>
>      if (expected_downtime <= migrate_max_downtime()) {
>          migration_bitmap_sync();
>          expected_downtime = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
>          s->expected_downtime = expected_downtime / 1000000; /* ns -> ms */
>
>          return expected_downtime <= migrate_max_downtime();
>      }
>      return 0;
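
(Side note on the units in that snippet: bwidth ends up as bytes per
nanosecond, so with, say, 100000 remaining 4 KiB pages and ~100 MB/s
(~0.1 bytes/ns) you get expected_downtime = 409600000 / 0.1 = ~4.1e9 ns,
about 4100 ms. That is well above a 1 s migrate_downtime, so the
function should keep returning 0 and iterate again. The numbers are
just an example.)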
>
> ----- Original Message -----
>
> From: "Dietmar Maurer" <dietmar at proxmox.com>
> To: "Stefan Priebe - Profihost AG" <s.priebe at profihost.ag>
> Cc: "Alexandre DERUMIER" <aderumier at odiso.com>, pve-devel at pve.proxmox.com
> Sent: Friday, 28 December 2012 07:39:50
> Subject: RE: [pve-devel] [PATCH 3/3] add downtime && expected_downtime query-migrate info
>
>>>> On 27.12.2012 16:21, Alexandre DERUMIER wrote:
>>>>> But if it works fine for you with a 1s migrate_downtime, we need to
>>>>> find where the problem is in the current qemu 1.3 code ... (maybe the
>>>>> qemu mailing list can help)
>>>> Nobody answered my last mails...
>>>
>>> What information are you missing (which last mails?)?
>> My last mails to the qemu mailing list. They were about my migration problems.
>
> Ah, yes. I will do further tests today to reproduce the bug here.
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: qemu-migration.tar.gz
Type: application/gzip
Size: 22248 bytes
Desc: not available
URL: <http://lists.proxmox.com/pipermail/pve-devel/attachments/20121229/16a3d749/attachment.gz>
-------------- next part --------------
commit 446219a884f32853ad178e7f53fba6f07f2dafa8
Author: Stefan Priebe <s.priebe at profihost.ag>
Date:   Sat Dec 29 14:21:45 2012 +0100

    internal-snapshot-async.patch: change buffered_file.h include to qemu-file.h

diff --git a/debian/patches/internal-snapshot-async.patch b/debian/patches/internal-snapshot-async.patch
index 6c86de3..3847409 100644
--- a/debian/patches/internal-snapshot-async.patch
+++ b/debian/patches/internal-snapshot-async.patch
@@ -245,7 +245,7 @@ Index: new/savevm-async.c
 +#include "qmp-commands.h"
 +#include "blockdev.h"
 +#include "qemu/qom-qobject.h"
-+#include "buffered_file.h"
++#include "qemu-file.h"
 +#include "migration.h"
 +
 +/* #define DEBUG_SAVEVM_STATE */

