KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_SET_PMU_EVENT_FILTER)
author Michal Luczaj <mhal@rbox.co>
Sat, 7 Jan 2023 00:12:51 +0000 (01:12 +0100)
committer Sean Christopherson <seanjc@google.com>
Fri, 3 Feb 2023 23:19:22 +0000 (15:19 -0800)
commit 95744a90db18437410aa94620b8a330311bd9cf6
tree 45b81654bc9fc3e804da4f37f1fc832905fab0d0
parent 096691e0d2a1a97ce5a0d68cffdc84ce97bce304
KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_SET_PMU_EVENT_FILTER)

Reduce the time spent holding kvm->lock: unlock the mutex before calling
synchronize_srcu_expedited().  There is no need to hold kvm->lock until
all vCPUs have been kicked; KVM only needs to guarantee that all vCPUs
will switch to the new filter before exiting to userspace.  Protecting
the write to __reprogram_pmi is also unnecessary, as a vCPU may process
a set bit before receiving the final KVM_REQ_PMU, but the per-vCPU writes
are guaranteed to occur after all vCPUs have switched to the new filter.
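
As an illustration, a minimal sketch of the resulting flow, assuming a
simplified fragment of kvm_vm_ioctl_set_pmu_event_filter() in
arch/x86/kvm/pmu.c (the helper and field names follow the upstream code,
but this snippet is illustrative only and is not the actual diff):

	/* Publish the new filter while holding kvm->lock... */
	mutex_lock(&kvm->lock);
	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
				     mutex_is_locked(&kvm->lock));
	/* ...then drop the lock before the expensive SRCU synchronization. */
	mutex_unlock(&kvm->lock);

	/*
	 * Wait for in-flight SRCU readers so that no vCPU can still observe
	 * the old filter, then mark all counters for reprogramming and kick
	 * every vCPU.  The per-vCPU writes below need no kvm->lock
	 * protection; they only need to happen after the SRCU sync.
	 */
	synchronize_srcu_expedited(&kvm->srcu);

	kvm_for_each_vcpu(i, vcpu, kvm)
		atomic64_set(&vcpu_to_pmu(vcpu)->__reprogram_pmi, -1ull);

	kvm_make_all_cpus_request(kvm, KVM_REQ_PMU);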

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-2-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/pmu.c