[pve-devel] [Patch guest-common] set last_sync_snapname after a job restored.

Thomas Lamprecht t.lamprecht at proxmox.com
Thu May 24 09:07:48 CEST 2018


On 5/22/18 3:30 PM, Wolfgang Link wrote:
> The replication needs the last_sync_snapname to clean up
> the last snapshot after the replication run is done.
> 
> If this is not set correctly, the snapshot will exist until the next run.
> ---
>  PVE/Replication.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/PVE/Replication.pm b/PVE/Replication.pm
> index 493b77d..0d05547 100644
> --- a/PVE/Replication.pm
> +++ b/PVE/Replication.pm
> @@ -63,6 +63,7 @@ sub find_common_replication_snapshot {

The foreach loop iterating the remote snapshots below is nested inside this outer foreach loop:

foreach my $volid (@$volumes) {
	[...]
}

so $last_sync_snapname gets set to the remote snapshot of the last $volid iterated.
Couldn't that be a problem when multiple volumes get replicated?

>  		foreach my $remote_snap (@desc_sorted_snap) {
>  		    if (defined($last_snapshots->{$volid}->{$remote_snap})) {
>  			$base_snapshots->{$volid} = $remote_snap;
> +			$last_sync_snapname = $remote_snap;
>  			last;
>  		    }
>  		}
> 
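
Putting the outer loop and the patched hunk together, the nesting roughly
looks like this (a minimal sketch of the structure in
find_common_replication_snapshot, not the exact code - variable setup and
the other base-snapshot candidates are elided):

foreach my $volid (@$volumes) {
    # [...] last-sync and parent snapshot candidates are checked first
    foreach my $remote_snap (@desc_sorted_snap) {
        if (defined($last_snapshots->{$volid}->{$remote_snap})) {
            $base_snapshots->{$volid} = $remote_snap;
            $last_sync_snapname = $remote_snap; # line added by the patch
            last;
        }
    }
}
# => $last_sync_snapname ends up holding whatever the last iterated $volid
#    matched, overwriting the value found for earlier volumes.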

Then, the if above only hits if $remote_snap is in $last_snapshots, which,
like $last_sync_snapname, is generated from $last_sync and $job, so this
seems redundant/weird?

After some off-list discussion with Wolfgang B. we determined that it
could make more sense to actually calculate $last_sync in
find_common_replication_snapshot, i.e. go through both snapshot lists and
take the newest timestamp they have in common - where all volumes were
replicated successfully. Then we always have a correct $last_sync and the
prepare method would take care of stale snapshots.
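
A rough sketch of that idea (hypothetical helper, not actual code; it
assumes the usual __replicate_<jobid>_<timestamp>__ snapshot naming and
would still need to be wired into prepare/find_common_replication_snapshot):

sub find_newest_common_sync {
    my ($local_snapshots, $remote_snapshots, $volumes, $jobid) = @_;

    # timestamp => { volid => 1 } for snapshots present on both sides
    my $common = {};
    foreach my $volid (@$volumes) {
        foreach my $snap (keys %{$remote_snapshots->{$volid} // {}}) {
            next if !defined($local_snapshots->{$volid}->{$snap});
            if ($snap =~ /^__replicate_\Q$jobid\E_(\d+)_/) {
                $common->{$1}->{$volid} = 1;
            }
        }
    }

    # newest timestamp where *all* volumes were replicated successfully
    foreach my $ts (sort { $b <=> $a } keys %$common) {
        return $ts if scalar(keys %{$common->{$ts}}) == scalar(@$volumes);
    }

    return 0; # no common sync point -> would mean a full sync
}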



