mm, hwpoison: avoid unneeded page_mapped_in_vma() overhead in collect_procs_anon()
author	Miaohe Lin <linmiaohe@huawei.com>	Tue, 30 Aug 2022 12:36:02 +0000 (20:36 +0800)
committer	Andrew Morton <akpm@linux-foundation.org>	Tue, 27 Sep 2022 02:46:04 +0000 (19:46 -0700)
If vma->vm_mm != t->mm, there is no need to call page_mapped_in_vma(), as
add_to_kill() will not be called in that case. Move the mm check up to
avoid a possibly unneeded call to page_mapped_in_vma().

Link: https://lkml.kernel.org/r/20220830123604.25763-5-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memory-failure.c

index 01ce87f5706aa8c66f79aadc154d9ef1a19725e0..cca8264dda1bbb68c3a02556e37184fe635dc7a4 100644 (file)
@@ -521,11 +521,11 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
                anon_vma_interval_tree_foreach(vmac, &av->rb_root,
                                               pgoff, pgoff) {
                        vma = vmac->vma;
+                       if (vma->vm_mm != t->mm)
+                               continue;
                        if (!page_mapped_in_vma(page, vma))
                                continue;
-                       if (vma->vm_mm == t->mm)
-                               add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma,
-                                           to_kill);
+                       add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma, to_kill);
                }
        }
        read_unlock(&tasklist_lock);