From: Matthew Wilcox (Oracle)
Date: Tue, 13 Oct 2020 23:51:28 +0000 (-0700)
Subject: proc: optimise smaps for shmem entries
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=8cf886463ecc621688a9c81c387d0f9ed32e45ea;p=linux.git

proc: optimise smaps for shmem entries

Avoid bumping the refcount on pages when we're only interested in the
swap entries.

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
Acked-by: Johannes Weiner
Cc: Alexey Dobriyan
Cc: Chris Wilson
Cc: Huang Ying
Cc: Hugh Dickins
Cc: Jani Nikula
Cc: Matthew Auld
Cc: William Kucharski
Link: https://lkml.kernel.org/r/20200910183318.20139-5-willy@infradead.org
Signed-off-by: Linus Torvalds
---

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 35172a91148e5..a1be198f755c3 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -520,16 +520,10 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			page = device_private_entry_to_page(swpent);
 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
 							&& pte_none(*pte))) {
-		page = find_get_entry(vma->vm_file->f_mapping,
+		page = xa_load(&vma->vm_file->f_mapping->i_pages,
 						linear_page_index(vma, addr));
-		if (!page)
-			return;
-
 		if (xa_is_value(page))
 			mss->swap += PAGE_SIZE;
-		else
-			put_page(page);
-
-		return;
 	}
 
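
For reference, here is a sketch of how the shmem branch of smaps_pte_entry()
reads after this change, reconstructed from the hunk above (kernel code, not
buildable on its own).  xa_load() returns the cached entry without taking a
page reference, unlike find_get_entry(), so the NULL check and the put_page()
in the old code are no longer needed:

	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
							&& pte_none(*pte))) {
		/*
		 * Peek at the page cache slot for this offset.  xa_load()
		 * does not elevate the page refcount, so there is nothing
		 * to put back afterwards.
		 */
		page = xa_load(&vma->vm_file->f_mapping->i_pages,
						linear_page_index(vma, addr));
		/* Only swap (value) entries are of interest here. */
		if (xa_is_value(page))
			mss->swap += PAGE_SIZE;
		return;
	}

Because the branch returns without ever dereferencing a real page pointer,
only the xa_is_value() check is needed; skipping the get/put pair is what
avoids the refcount bumping described in the changelog.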