KVM: nVMX: stop abusing need_vmcs12_to_shadow_sync for eVMCS mapping
author    Vitaly Kuznetsov <vkuznets@redhat.com>
          Mon, 9 Mar 2020 15:52:12 +0000 (16:52 +0100)
committer Paolo Bonzini <pbonzini@redhat.com>
          Mon, 16 Mar 2020 17:19:29 +0000 (18:19 +0100)

When vmx_set_nested_state() happens, we may not have all the required
data to map the enlightened VMCS: e.g. the HV_X64_MSR_VP_ASSIST_PAGE MSR
may not have been restored yet, so the action needs to be postponed.
Currently, we (ab)use need_vmcs12_to_shadow_sync/
nested_sync_vmcs12_to_shadow() for that, but this is not ideal:
- We may not need to sync anything if L2 is running
- It is hard to propagate errors from nested_sync_vmcs12_to_shadow(),
 as we call it from vmx_prepare_switch_to_guest(), which happens just
 before VMLAUNCH; the code there is not ready to handle errors.

Move the eVMCS mapping to nested_get_vmcs12_pages() and request
KVM_REQ_GET_VMCS12_PAGES; this seems less abusive in nature. It would
probably be possible to introduce a specialized KVM_REQ_EVMCS_MAP, but
it is undesirable to propagate eVMCS specifics all the way up to x86.c.
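
For reference, the request is consumed on the next guest entry, roughly
as in the sketch below (illustration only; the exact kvm_x86_ops
callback name in x86.c is assumed, not introduced by this patch):

	/* vcpu_enter_guest(), x86.c -- sketch of request consumption */
	if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu))
		kvm_x86_ops->get_vmcs12_pages(vcpu);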

Note, we don't strictly need to request KVM_REQ_GET_VMCS12_PAGES from
vmx_set_nested_state() directly, as nested_vmx_enter_non_root_mode()
already does that. Requesting KVM_REQ_GET_VMCS12_PAGES explicitly
documents the (non-obvious) side effect and keeps the code future proof.
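
A minimal sketch of that existing behavior (assumed shape, not part of
this patch): when nested_vmx_enter_non_root_mode() is reached from state
restore rather than from an actual VMLAUNCH/VMRESUME, it defers the page
mapping via the same request:

	/* nested_vmx_enter_non_root_mode(), nested.c -- sketch */
	if (!from_vmentry)
		kvm_make_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);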

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 98b82ccdf5f0a15961107e120e9394df536148e7..424d9a77af86ef29f16b440fcafade4cff77b8f6 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1996,14 +1996,6 @@ void nested_sync_vmcs12_to_shadow(struct kvm_vcpu *vcpu)
 {
        struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-       /*
-        * hv_evmcs may end up being not mapped after migration (when
-        * L2 was running), map it here to make sure vmcs12 changes are
-        * properly reflected.
-        */
-       if (vmx->nested.enlightened_vmcs_enabled && !vmx->nested.hv_evmcs)
-               nested_vmx_handle_enlightened_vmptrld(vcpu, false);
-
        if (vmx->nested.hv_evmcs) {
                copy_vmcs12_to_enlightened(vmx);
                /* All fields are clean */
@@ -3062,6 +3054,14 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
        struct page *page;
        u64 hpa;
 
+       /*
+        * hv_evmcs may end up being not mapped after migration (when
+        * L2 was running), map it here to make sure vmcs12 changes are
+        * properly reflected.
+        */
+       if (vmx->nested.enlightened_vmcs_enabled && !vmx->nested.hv_evmcs)
+               nested_vmx_handle_enlightened_vmptrld(vcpu, false);
+
        if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
                /*
                 * Translate L1 physical address to host physical
@@ -5904,10 +5904,12 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
                set_current_vmptr(vmx, kvm_state->hdr.vmx.vmcs12_pa);
        } else if (kvm_state->flags & KVM_STATE_NESTED_EVMCS) {
                /*
-                * Sync eVMCS upon entry as we may not have
-                * HV_X64_MSR_VP_ASSIST_PAGE set up yet.
+                * nested_vmx_handle_enlightened_vmptrld() cannot be called
+                * directly from here as HV_X64_MSR_VP_ASSIST_PAGE may not be
+                * restored yet. EVMCS will be mapped from
+                * nested_get_vmcs12_pages().
                 */
-               vmx->nested.need_vmcs12_to_shadow_sync = true;
+               kvm_make_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);
        } else {
                return -EINVAL;
        }