mm/khugepaged: remove unneeded return value of khugepaged_add_pte_mapped_thp()
author	Miaohe Lin <linmiaohe@huawei.com>
Sat, 25 Jun 2022 09:28:15 +0000 (17:28 +0800)
committer	akpm <akpm@linux-foundation.org>
Mon, 4 Jul 2022 01:08:51 +0000 (18:08 -0700)
The return value of khugepaged_add_pte_mapped_thp() is always 0 and is
ignored by its callers.  Remove it to clean up the code.
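
For context, a minimal sketch of the call pattern this relies on.  The
caller below is hypothetical (it is not part of this patch); it only
illustrates that the constant-zero return value was never checked, so
making the function void leaves every call site unchanged:

/*
 * Hypothetical caller, shown for illustration only -- not taken from
 * this patch.  The function is invoked as a statement and the constant
 * 0 it used to return was silently discarded, so the call reads the
 * same after the return type becomes void.
 */
static void example_caller(struct mm_struct *mm, unsigned long haddr)
{
	/* Before: "return 0" was ignored here; after: nothing to ignore. */
	khugepaged_add_pte_mapped_thp(mm, haddr);
}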

Link: https://lkml.kernel.org/r/20220625092816.4856-7-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: NeilBrown <neilb@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/khugepaged.c

index 6a969c0633a96bf7369e959d968fd60bf584d23e..08e885f28def15791ce9a594ad1235ee3edcf02f 100644
@@ -1371,8 +1371,8 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
  */
-static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
-                                        unsigned long addr)
+static void khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
+                                         unsigned long addr)
 {
        struct mm_slot *mm_slot;
 
@@ -1383,7 +1383,6 @@ static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
        if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP))
                mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
        spin_unlock(&khugepaged_mm_lock);
-       return 0;
 }
 
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,