KVM: x86/mmu: Leverage vcpu->last_used_slot for rmap_add and rmap_recycle
author	David Matlack <dmatlack@google.com>
Wed, 4 Aug 2021 22:28:42 +0000 (22:28 +0000)
committer	Paolo Bonzini <pbonzini@redhat.com>
Fri, 6 Aug 2021 11:52:29 +0000 (07:52 -0400)
commit	601f8af01e5ae535b45cdb91234887c9fd861ad4
tree	b4cba5a72f0b37dcd6ec552c5cf47b3a4018ddbb
parent	081de470f1e6e83f9f460ba5ae8f57ff07f37692

rmap_add() and rmap_recycle() both run in the context of the vCPU, so
they can use kvm_vcpu_gfn_to_memslot() to look up the memslot. This
enables both functions to take advantage of vcpu->last_used_slot and
avoid the expensive search through all of the VM's memslots.
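
As an illustrative sketch (not the literal diff; the local variable
"slot" is named here only for illustration), the lookup in both helpers
switches from the kvm-wide helper to the vCPU-scoped one:

  struct kvm_memory_slot *slot;

  /* Old: kvm-wide lookup, searches the memslot array on every call. */
  slot = gfn_to_memslot(vcpu->kvm, gfn);

  /* New: vCPU-scoped lookup, which tries vcpu->last_used_slot first. */
  slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);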

This change improves the performance of "Populate memory time" in
dirty_log_perf_test with tdp_mmu=N. Beyond the raw speedup, "Populate
memory time" no longer scales with the number of memslots in the VM.

Command                         | Before           | After
------------------------------- | ---------------- | -------------
./dirty_log_perf_test -v64 -x1  | 15.18001570s     | 14.99469366s
./dirty_log_perf_test -v64 -x64 | 18.71336392s     | 14.98675076s
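
The flat scaling comes from the lookup itself: with the cache added by
the parent commit, kvm_vcpu_gfn_to_memslot() can usually return the slot
indexed by vcpu->last_used_slot without walking the memslot array. A
hand-drawn sketch of that idea (field layout and the slow-path
bookkeeping are simplified, not copied from the tree):

  struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu,
                                                  gfn_t gfn)
  {
          struct kvm_memslots *slots = kvm_vcpu_memslots(vcpu);
          struct kvm_memory_slot *slot;

          /* Fast path: the slot that served the last lookup on this vCPU. */
          slot = &slots->memslots[vcpu->last_used_slot];
          if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
                  return slot;

          /*
           * Slow path: fall back to the regular memslot search and
           * (in the real code) refresh vcpu->last_used_slot.
           */
          return __gfn_to_memslot(slots, gfn);
  }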

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210804222844.1419481-6-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/mmu.c