multifd: bugfix for incorrect migration data with QPL compression
author Yuan Liu <yuan1.liu@intel.com>
Wed, 18 Dec 2024 09:14:12 +0000 (17:14 +0800)
committer Fabiano Rosas <farosas@suse.de>
Thu, 9 Jan 2025 20:40:21 +0000 (17:40 -0300)
When QPL compression is enabled on the migration channel and a dirty
page changes from a normal page to a zero page during the iterative
memory copy, the page is not updated to a zero page on the target side,
leaving the memory contents of the source and target inconsistent.

The root cause is that the target side does not record received normal
pages in receivedmap.
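
For context, the destination-side zero-page handling only clears a page
when receivedmap shows that the page has already been written by this
migration; otherwise it assumes the page still holds its initial zero
contents and skips it. A simplified sketch of that consumer logic
(modeled on multifd_recv_zero_page_process() in
migration/multifd-zero-page.c; details may differ) shows why a normal
page that was never recorded keeps its stale, non-zero contents:

    /*
     * Simplified sketch of the destination-side zero-page consumer.
     * If an earlier normal page at the same offset was never marked in
     * receivedmap, the memset below is skipped and the stale data stays.
     */
    static void zero_page_process_sketch(MultiFDRecvParams *p)
    {
        for (int i = 0; i < p->zero_num; i++) {
            void *page = p->host + p->zero[i];

            if (ramblock_recv_bitmap_test_byte_offset(p->block, p->zero[i])) {
                /* The page was received before: it must be cleared now. */
                memset(page, 0, multifd_ram_page_size());
            } else {
                /* Never received: assume it is still zero, only record it. */
                ramblock_recv_bitmap_set_offset(p->block, p->zero[i]);
            }
        }
    }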

The solution is to call ramblock_recv_bitmap_set_offset() on the target
side to record the normal pages in receivedmap.

Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-3-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
migration/multifd-qpl.c

index bbe466617f1512fe0d7284b1c02c0327e9f63290..88e2344af2f9a53f9bc1a79988795e9af3c784da 100644 (file)
@@ -679,6 +679,7 @@ static int multifd_qpl_recv(MultiFDRecvParams *p, Error **errp)
         qpl->zlen[i] = be32_to_cpu(qpl->zlen[i]);
         assert(qpl->zlen[i] <= multifd_ram_page_size());
         zbuf_len += qpl->zlen[i];
+        ramblock_recv_bitmap_set_offset(p->block, p->normal[i]);
     }
 
     /* read compressed pages */