mm/page_alloc: remove unnecessary check in break_down_buddy_pages
Author:     Kemeng Shi <shikemeng@huaweicloud.com>
AuthorDate: Wed, 27 Sep 2023 10:35:13 +0000 (18:35 +0800)
Commit:     Andrew Morton <akpm@linux-foundation.org>
CommitDate: Wed, 18 Oct 2023 21:34:15 +0000 (14:34 -0700)
Patch series "Two minor cleanups to break_down_buddy_pages", v2.

Two minor cleanups to break_down_buddy_pages.

This patch (of 2):

1. The target page always lies in the range that starts at next_page,
   while current_buddy starts the range that remains fully free.

2. The size of the last split range is 1 << low, and low is >= 0, so
   size >= 1.  Therefore page + size != page always holds (because
   size > 0).  In summary, current_buddy can never equal the target
   page, so the check is unnecessary; a simplified model of the split
   loop is shown below.
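
To make the argument concrete, here is a minimal user-space sketch of
the split loop's shape, assuming pages are modelled as plain indices;
the kernel helpers set_page_guard(), add_to_free_list() and
set_buddy_order() are deliberately left out, and split_model() is just
a made-up name for the sketch, so this is only an illustration of the
invariant, not the kernel code itself:

    /*
     * Toy model of the split loop in break_down_buddy_pages().
     * The half that contains the target index always becomes the new
     * "page", so current_buddy is the other, disjoint half and can
     * never equal target.
     */
    #include <assert.h>
    #include <stdio.h>

    static void split_model(unsigned long page, unsigned long target,
                            int low, int high)
    {
            unsigned long size = 1UL << high;

            while (high > low) {
                    unsigned long current_buddy;

                    high--;
                    size >>= 1;     /* size stays >= 1 since high >= low >= 0 */

                    if (target >= page + size) {
                            current_buddy = page;   /* lower half stays free */
                            page += size;           /* keep splitting upper half */
                    } else {
                            current_buddy = page + size; /* upper half stays free */
                            /* page unchanged: keep splitting lower half */
                    }

                    /* target always lives inside [page, page + size) ... */
                    assert(target >= page && target < page + size);
                    /* ... so the freed buddy half can never contain it. */
                    assert(current_buddy != target);

                    printf("order %d: buddy at %lu stays free, continue at %lu\n",
                           high, current_buddy, page);
            }
    }

    int main(void)
    {
            /* Split an order-4 block at index 0 down to the order-0
             * page at index 5. */
            split_model(0, 5, 0, 4);
            return 0;
    }

Every iteration keeps the target inside the half that is split further,
which is exactly why the removed "current_buddy != target" check could
never be false.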

Link: https://lkml.kernel.org/r/20230927103514.98281-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20230927103514.98281-2-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 85741403948f55261f953deeb7e30de716533caf..fdb68b7c8240de3e1722bca9acb639c5b6ae416d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6480,10 +6480,8 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
                if (set_page_guard(zone, current_buddy, high, migratetype))
                        continue;
 
-               if (current_buddy != target) {
-                       add_to_free_list(current_buddy, zone, high, migratetype);
-                       set_buddy_order(current_buddy, high);
-               }
+               add_to_free_list(current_buddy, zone, high, migratetype);
+               set_buddy_order(current_buddy, high);
        }
 }