mm/userfaultfd: page_add_file_rmap() -> folio_add_file_rmap_pte()
author:    David Hildenbrand <david@redhat.com>
           Wed, 20 Dec 2023 22:44:35 +0000 (23:44 +0100)
committer: Andrew Morton <akpm@linux-foundation.org>
           Fri, 29 Dec 2023 19:58:49 +0000 (11:58 -0800)
Let's convert mfill_atomic_install_pte().

Link: https://lkml.kernel.org/r/20231220224504.646757-12-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 203cda9192c2946a8e57e920dc88ac73f06d598c..5e718014e671329e1228d9a8a0dd2e2dd2259960 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -114,7 +114,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
                /* Usually, cache pages are already added to LRU */
                if (newly_allocated)
                        folio_add_lru(folio);
-               page_add_file_rmap(page, dst_vma, false);
+               folio_add_file_rmap_pte(folio, page, dst_vma);
        } else {
                folio_add_new_anon_rmap(folio, dst_vma, dst_addr);
                folio_add_lru_vma(folio, dst_vma);