From: Guo Ren
Date: Wed, 12 Jul 2023 14:03:20 +0000 (-0400)
Subject: csky: pgtable: Invalidate stale I-cache lines in update_mmu_cache
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=1362d15ffb59db65b2df354b548b7915686cb05c;p=linux.git

csky: pgtable: Invalidate stale I-cache lines in update_mmu_cache

The final icache_flush was in update_mmu_cache, and update_mmu_cache
runs after set_pte_at. Thus, when CPU0 sets the pte, the other CPUs may
see it before the icache_flush broadcast happens, and their I-caches may
hold stale VIPT-indexed cache lines. Once address translation is ready
for the new mapping, they will use the stale data from the I-cache, not
the fresh data from the D-cache.

The csky instruction cache is VIPT, so it needs the original virtual
address to invalidate the virtual-address-indexed entries of the cache
ways. The current implementation uses a temporary mapping mechanism,
kmap_atomic, which returns a new virtual address for invalidation. But
the cache lines indexed by the original virtual address may still be in
the I-cache. So force I-cache invalidation in update_mmu_cache using the
original virtual address, and drop the VM_EXEC-conditional invalidation
of the temporary mapping.

This bug was detected on a 4*c860 SMP system, and this patch passes the
stress test there.

Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---

diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 9923cd24db583..500eb8f693971 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -27,11 +27,9 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
 
 	addr = (unsigned long) kmap_atomic(page);
 
+	icache_inv_range(address, address + PAGE_SIZE);
 	dcache_wb_range(addr, addr + PAGE_SIZE);
 
-	if (vma->vm_flags & VM_EXEC)
-		icache_inv_range(addr, addr + PAGE_SIZE);
-
 	kunmap_atomic((void *) addr);
 }