x86/mm/pat: Don't flush cache if hardware enforces cache coherency across encryption...
author Krish Sadhukhan <krish.sadhukhan@oracle.com>
Thu, 17 Sep 2020 21:20:37 +0000 (21:20 +0000)
committer Borislav Petkov <bp@suse.de>
Fri, 18 Sep 2020 08:47:00 +0000 (10:47 +0200)
In some hardware implementations, coherency between the encrypted and
unencrypted mappings of the same physical page is enforced. On such a
system, software does not need to flush the page from all CPU caches in
the system prior to changing the value of the C-bit for the page. So
check the SME coherency feature bit (X86_FEATURE_SME_COHERENT) and skip
the cache flush when the hardware already guarantees coherency.

 [ bp: Massage commit message. ]

Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200917212038.5090-3-krish.sadhukhan@oracle.com
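
For context, a minimal sketch of how the flush logic in __set_memory_enc_dec()
reads with this change applied. This is a simplified excerpt, not the full
function: the cpa_data setup and error handling are elided, and the second
flush shown after __change_page_attr_set_clr() reflects the surrounding
mainline code at the time rather than anything introduced by this patch.

	static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
	{
		struct cpa_data cpa;
		int ret;

		/* ... cpa is populated with addr, numpages and the C-bit masks ... */

		/*
		 * Flush the caches before changing the encryption attribute,
		 * but only when the hardware does not already guarantee
		 * coherency between the encrypted and unencrypted mappings
		 * of the page.
		 */
		cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));

		ret = __change_page_attr_set_clr(&cpa, 1);

		/*
		 * After the C-bit change the TLBs still have to be flushed;
		 * no additional cache flush is needed at this point.
		 */
		cpa_flush(&cpa, 0);

		return ret;
	}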
arch/x86/mm/pat/set_memory.c

index d1b2a889f035d8f0fab14398b6163080b4f4fa5b..40baa90e74f4caa0cbdb9a7abca60aede4fd5859 100644 (file)
@@ -1999,7 +1999,7 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
        /*
         * Before changing the encryption attribute, we need to flush caches.
         */
-       cpa_flush(&cpa, 1);
+       cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
 
        ret = __change_page_attr_set_clr(&cpa, 1);