mm/memory-failure: check the mapcount of the precise page
author Matthew Wilcox (Oracle) <willy@infradead.org>
Mon, 18 Dec 2023 13:58:36 +0000 (13:58 +0000)
committer Andrew Morton <akpm@linux-foundation.org>
Wed, 20 Dec 2023 21:46:19 +0000 (13:46 -0800)
A process may map only some of the pages in a folio, so it might be missed
if it maps the poisoned page but not the head page, or it might be
unnecessarily hit if it maps the head page but not the poisoned page.
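
The distinction can be modelled in plain C.  The sketch below is only an
illustration of the situation described above, not kernel code: the
folio_model structure and its helper functions are made-up names, and the
real check is the page_mapped()/page_mapcount() call on the precise
struct page shown in the diff below.

/*
 * Minimal userspace sketch of the scenario above: a process maps only
 * one tail page of a four-page folio, so a check against the head page
 * gives the wrong answer for the poisoned page.  "struct folio_model",
 * head_is_mapped() and precise_is_mapped() are illustrative names, not
 * kernel APIs.
 */
#include <stdbool.h>
#include <stdio.h>

#define FOLIO_NR_PAGES 4

struct folio_model {
	/* per-page mapcounts for a single hypothetical process */
	int mapcount[FOLIO_NR_PAGES];
};

/* Old-style check: look only at the head page (index 0). */
static bool head_is_mapped(const struct folio_model *f)
{
	return f->mapcount[0] > 0;
}

/* New-style check: look at the precise (poisoned) page. */
static bool precise_is_mapped(const struct folio_model *f, int idx)
{
	return f->mapcount[idx] > 0;
}

int main(void)
{
	/* The process maps page 2 of the folio, but not the head page. */
	struct folio_model f = { .mapcount = { 0, 0, 1, 0 } };
	int poisoned = 2;

	printf("head page check:    mapped=%d\n", head_is_mapped(&f));
	printf("precise page check: mapped=%d\n",
	       precise_is_mapped(&f, poisoned));
	return 0;
}

Here the head-page check reports the page as unmapped and the process
would be missed, while the precise-page check correctly reports the
poisoned page as mapped.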

Link: https://lkml.kernel.org/r/20231218135837.3310403-3-willy@infradead.org
Fixes: 7af446a841a2 ("HWPOISON, hugetlb: enable error handling path for hugepage")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memory-failure.c

index 6953bda11e6ed0fcefa73f580e1a05834c14daf0..82e15baabb4827b078c254e24d91b18c61a00218 100644
@@ -1570,7 +1570,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
         * This check implies we don't kill processes if their pages
         * are in the swap cache early. Those are always late kills.
         */
-       if (!page_mapped(hpage))
+       if (!page_mapped(p))
                return true;
 
        if (PageSwapCache(p)) {
@@ -1621,10 +1621,10 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
                try_to_unmap(folio, ttu);
        }
 
-       unmap_success = !page_mapped(hpage);
+       unmap_success = !page_mapped(p);
        if (!unmap_success)
                pr_err("%#lx: failed to unmap page (mapcount=%d)\n",
-                      pfn, page_mapcount(hpage));
+                      pfn, page_mapcount(p));
 
        /*
         * try_to_unmap() might put mlocked page in lru cache, so call