Unfortunately, commit f7b9dcfbcf44 broke populate_read_range(): the
loop end condition compares the absolute offset against the range size
instead of against the end of the range, so the function does not
populate the full range. Let's fix that.
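
For illustration, with hypothetical values (not taken from an actual
caller) and a 4 KiB page size:

  populate_read_range(block, 0x200000, 0x100000);

  /*
   * Old condition "offset < size": 0x200000 < 0x100000 is false on the
   * very first check, so not a single page gets populated.
   * New condition "offset < end" with end = 0x200000 + 0x100000: the
   * loop reads one byte per page across the full 0x100000 bytes.
   */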
Fixes: f7b9dcfbcf44 ("migration/ram: Factor out populating pages readable in ram_block_populate_pages()")
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
 static inline void populate_read_range(RAMBlock *block, ram_addr_t offset,
                                         ram_addr_t size)
 {
+    const ram_addr_t end = offset + size;
+
     /*
      * We read one byte of each page; this will preallocate page tables if
      * required and populate the shared zeropage on MAP_PRIVATE anonymous memory
      * where no page was populated yet. This might require adaption when
      * supporting other mappings, like shmem.
      */
-    for (; offset < size; offset += block->page_size) {
+    for (; offset < end; offset += block->page_size) {
         char tmp = *((char *)block->host + offset);
         /* Don't optimize the read out */