x86/mm/tlb: Do not make is_lazy dirty for no reason
author Nadav Amit <namit@vmware.com>
Sat, 20 Feb 2021 23:17:09 +0000 (15:17 -0800)
committer Ingo Molnar <mingo@kernel.org>
Sat, 6 Mar 2021 11:59:10 +0000 (12:59 +0100)
Blindly writing to is_lazy, even when the written value is identical
to the old one, dirties the cacheline for no reason. Skip the write
when the value is unchanged to avoid the needless cache coherency
traffic.
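
Below is a minimal user-space sketch of the pattern this patch applies:
check the current value and skip the store when nothing changes, so an
unchanged flag never dirties its cacheline. The names lazy_flag and
set_lazy are illustrative stand-ins for cpu_tlbstate_shared.is_lazy and
the store in switch_mm_irqs_off(); they are not kernel code.

	#include <stdbool.h>
	#include <stdio.h>

	static bool lazy_flag;	/* stands in for cpu_tlbstate_shared.is_lazy */

	/*
	 * Only store when the value actually changes; storing an
	 * identical value would still mark the cacheline dirty and
	 * trigger coherency traffic when other CPUs read the flag.
	 */
	static void set_lazy(bool val)
	{
		if (lazy_flag != val)
			lazy_flag = val;
	}

	int main(void)
	{
		set_lazy(false);	/* already false: store skipped, line stays clean */
		set_lazy(true);		/* value changes: store performed */
		printf("lazy_flag = %d\n", lazy_flag);
		return 0;
	}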

Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/r/20210220231712.2475218-7-namit@vmware.com
arch/x86/mm/tlb.c

index 345a0aff5de4fcbead3f1cc799f60156cf172461..17ec4bfeee67a3d8961d131fad73a5a12a2902f0 100644
@@ -469,7 +469,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
                __flush_tlb_all();
        }
 #endif
-       this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
+       if (was_lazy)
+               this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
 
        /*
         * The membarrier system call requires a full memory barrier and