mm: memory: add vm_normal_folio_pmd()
author Kefeng Wang <wangkefeng.wang@huawei.com>
Thu, 21 Sep 2023 07:44:12 +0000 (15:44 +0800)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 16 Oct 2023 22:44:37 +0000 (15:44 -0700)
Patch series "mm: convert numa balancing functions to use a folio", v2.

do_numa_page() only handles non-compound pages, and only PMD-mapped THPs
are handled in do_huge_pmd_numa_page().  But large, PTE-mapped folios
will be supported, so let's convert more numa balancing functions to
use/take a folio in preparation for that; no functional change intended
for now.

This patch (of 6):

The new vm_normal_folio_pmd() wrapper is similar to vm_normal_folio(),
and allows callers to completely replace their struct page variables
with struct folio variables.
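The wrapper pattern can be sketched standalone, outside the kernel tree.
In the sketch below, struct page, struct folio, page_folio(),
vm_normal_page_pmd() and the pmd_t/vm_area_struct types are simplified
stand-ins, not the real kernel definitions; only the shape of the new
wrapper matches the patch.

```c
#include <stddef.h>

typedef unsigned long pmd_t;            /* stand-in for the arch pmd_t */
struct vm_area_struct { int dummy; };   /* stand-in VMA */

struct folio { int order; };
struct page  { struct folio *head; };   /* real kernel derives this via compound_head */

/* stand-in for page_folio(): map a page to its containing folio */
static struct folio *page_folio(struct page *page)
{
	return page->head;
}

/* stand-in for vm_normal_page_pmd(): treat a zero pmd as "no normal page" */
static struct page *vm_normal_page_pmd(struct vm_area_struct *vma,
				       unsigned long addr, pmd_t pmd)
{
	static struct folio f;
	static struct page p = { .head = &f };

	(void)vma;
	(void)addr;
	return pmd ? &p : NULL;
}

/* the wrapper itself, shaped like the one the patch adds */
static struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
					 unsigned long addr, pmd_t pmd)
{
	struct page *page = vm_normal_page_pmd(vma, addr, pmd);

	if (page)
		return page_folio(page);
	return NULL;
}
```

A caller that previously did `page = vm_normal_page_pmd(...)` followed by
`page_folio(page)` can call the wrapper once and keep only the folio
variable, which is what the rest of the series relies on.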

Link: https://lkml.kernel.org/r/20230921074417.24004-1-wangkefeng.wang@huawei.com
Link: https://lkml.kernel.org/r/20230921074417.24004-2-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
include/linux/mm.h
mm/memory.c

index 126b54b4544235d11a5eaa48bab680419e7da0c1..52c40b3d08136899dbf205beee6b5e75a832e144 100644
@@ -2327,6 +2327,8 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
                             pte_t pte);
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
                             pte_t pte);
+struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
+                                 unsigned long addr, pmd_t pmd);
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
                                pmd_t pmd);
 
index d956b231e835abe26f1c92fc301ca99aeb094d79..311e862c6404aa58498af8e42209b4c2c3b1f91d 100644
@@ -689,6 +689,16 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 out:
        return pfn_to_page(pfn);
 }
+
+struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
+                                 unsigned long addr, pmd_t pmd)
+{
+       struct page *page = vm_normal_page_pmd(vma, addr, pmd);
+
+       if (page)
+               return page_folio(page);
+       return NULL;
+}
 #endif
 
 static void restore_exclusive_pte(struct vm_area_struct *vma,