physmem: Fix migration dirty bitmap coherency with TCG memory access
author     Nicholas Piggin <npiggin@gmail.com>
           Tue, 12 Mar 2024 20:14:58 +0000 (21:14 +0100)
committer  Peter Xu <peterx@redhat.com>
           Tue, 12 Mar 2024 21:39:40 +0000 (17:39 -0400)
The fastpath in cpu_physical_memory_sync_dirty_bitmap() to test large
aligned ranges forgot to bring the TCG TLB up to date after clearing
some of the dirty memory bitmap bits. This can result in stores through
the TCG TLB not setting the dirty memory bitmap, ultimately causing
memory corruption / lost updates during migration from a TCG host.

Fix this by calling cpu_physical_memory_dirty_bits_cleared() when
dirty bits have been cleared.
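
For reference, a rough sketch of what the helper introduced in part 1/2 of
this series is assumed to do: when running under TCG, it resets the
dirty-tracking state of vCPU TLB entries covering the range, so the next
store to those pages takes the slow path and sets the dirty memory bitmap
again. This is an approximation, not the verbatim upstream definition:

    /* Hedged sketch of cpu_physical_memory_dirty_bits_cleared(); the
     * real definition was added by part 1/2 of this series. */
    void cpu_physical_memory_dirty_bits_cleared(ram_addr_t start,
                                                ram_addr_t length)
    {
        if (tcg_enabled()) {
            /* Re-arm dirty tracking in the TCG TLBs for this range so
             * later stores go through notdirty handling and mark the
             * dirty memory bitmap again. */
            tlb_reset_dirty_range_all(start, length);
        }
    }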

Fixes: aa8dc044772 ("migration: synchronize memory bitmap 64bits at a time")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240219061731.232570-1-npiggin@gmail.com>
[PMD: Split patch in 2: part 2/2, slightly adapt description]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240312201458.79532-4-philmd@linaro.org
Signed-off-by: Peter Xu <peterx@redhat.com>

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index b060ea91760c7559b20cf8edc9d63d8733c9ebe0..de45ba7bc9680870d45f0702cc58c7e4ed458a99 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -513,6 +513,9 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                 idx++;
             }
         }
+        if (num_dirty) {
+            cpu_physical_memory_dirty_bits_cleared(start, length);
+        }
 
         if (rb->clear_bmap) {
             /*
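
To illustrate the mechanism being fixed, below is a self-contained toy model
of the word-at-a-time transfer the fast path performs: dirty bits are
atomically grabbed and cleared from a global bitmap word and merged into a
migration bitmap word. None of the names are QEMU code; it only shows why
clearing the source bits must be followed, on a TCG host, by a notification
such as cpu_physical_memory_dirty_bits_cleared():

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: move dirty bits for 64 pages (one 64-bit word) from a
     * "global" bitmap word into a "migration" bitmap word, returning how
     * many pages became newly dirty for the destination. */
    static unsigned sync_word(_Atomic uint64_t *src, uint64_t *dest)
    {
        uint64_t bits = atomic_exchange(src, 0);   /* grab and clear */
        uint64_t new_dirty = bits & ~*dest;        /* newly dirty bits */

        *dest |= bits;
        return (unsigned)__builtin_popcountll(new_dirty);
    }

    int main(void)
    {
        _Atomic uint64_t global = 0x00f0;   /* pages 4-7 dirty globally */
        uint64_t migration = 0x0030;        /* pages 4-5 already pending */

        printf("newly dirty pages: %u\n", sync_word(&global, &migration));
        /* The global word is now 0: without telling TCG about it, later
         * guest stores would not set those bits again. */
        return 0;
    }

In the real function the notification is issued once after the whole loop,
so a single call covers the entire synced range rather than one call per
64-page word.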