migration: Fix rate limiting issue on RDMA migration
authorLidong Chen <jemmy858585@gmail.com>
Sat, 10 Mar 2018 14:32:58 +0000 (22:32 +0800)
committerDr. David Alan Gilbert <dgilbert@redhat.com>
Fri, 23 Mar 2018 16:37:15 +0000 (16:37 +0000)
RDMA migration implements the save_page hook for QEMUFile, but
ram_control_save_page does not increase bytes_xfer. So when doing
RDMA migration, the rate limit is never reached and migration
consumes the whole bandwidth.

Signed-off-by: Lidong Chen <lidongchen@tencent.com>
Message-Id: <1520692378-1835-1-git-send-email-lidongchen@tencent.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
migration/qemu-file.c

index e85f501f86e1f319112279a50610955fc81e7ea1..bb63c779cc9eea14358a41e4886eb448c5828284 100644 (file)
@@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
     if (f->hooks && f->hooks->save_page) {
         int ret = f->hooks->save_page(f, f->opaque, block_offset,
                                       offset, size, bytes_sent);
-
+        f->bytes_xfer += size;
         if (ret != RAM_SAVE_CONTROL_DELAYED) {
             if (bytes_sent && *bytes_sent > 0) {
                 qemu_update_position(f, *bytes_sent);