mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()
author	Kefeng Wang <wangkefeng.wang@huawei.com>
	Tue, 1 Aug 2023 02:31:44 +0000 (10:31 +0800)
committer	Andrew Morton <akpm@linux-foundation.org>
	Mon, 21 Aug 2023 20:37:40 +0000 (13:37 -0700)
Architectures may need to do special things when flushing hugepage TLB
entries, so use the more applicable flush_hugetlb_tlb_range() instead of
flush_tlb_range().

Link: https://lkml.kernel.org/r/20230801023145.17026-2-wangkefeng.wang@huawei.com
Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 26e87d6cc92f9361fbae493cbd3ac737fed01a6e..102f83bd3a9f4a09e65f7ff3a1bfc6c2cb6ab5b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5279,9 +5279,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
        }
 
        if (shared_pmd)
-               flush_tlb_range(vma, range.start, range.end);
+               flush_hugetlb_tlb_range(vma, range.start, range.end);
        else
-               flush_tlb_range(vma, old_end - len, old_end);
+               flush_hugetlb_tlb_range(vma, old_end - len, old_end);
        mmu_notifier_invalidate_range_end(&range);
        i_mmap_unlock_write(mapping);
        hugetlb_vma_unlock_write(vma);