mm/slub, percpu: correct the calculation of early percpu allocation size
author Baoquan He <bhe@redhat.com>
Mon, 24 Oct 2022 08:14:35 +0000 (16:14 +0800)
committer Vlastimil Babka <vbabka@suse.cz>
Mon, 21 Nov 2022 09:19:46 +0000 (10:19 +0100)
The SLUB allocator relies on the percpu allocator to initialize its
->cpu_slab during early boot. For that, the dynamic percpu chunk which
serves early allocations needs to be large enough to satisfy the
creation of the kmalloc caches.

However, the current BUILD_BUG_ON() in alloc_kmem_cache_cpus() doesn't
take the NR_KMALLOC_TYPES dimension of the kmalloc caches array into
account. Fix that with the correct calculation.
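
For illustration, a minimal userspace sketch of the size check the patch
enforces follows. The constant values are assumptions chosen only for the
example; the real values of NR_KMALLOC_TYPES, KMALLOC_SHIFT_HIGH,
sizeof(struct kmem_cache_cpu) and PERCPU_DYNAMIC_EARLY_SIZE depend on the
kernel configuration and architecture.

	#include <assert.h>
	#include <stdio.h>

	/* Illustrative stand-ins for the kernel constants (assumed values). */
	#define NR_KMALLOC_TYPES	  4		/* e.g. NORMAL, DMA, RECLAIM, CGROUP */
	#define KMALLOC_SHIFT_HIGH	  13		/* highest kmalloc size shift */
	#define SIZEOF_KMEM_CACHE_CPU	  64		/* assumed sizeof(struct kmem_cache_cpu) */
	#define PERCPU_DYNAMIC_EARLY_SIZE (12 << 10)	/* assumed early chunk size, 12 KiB */

	int main(void)
	{
		/* Old, undersized requirement: only one kmalloc type counted. */
		unsigned long old_need = KMALLOC_SHIFT_HIGH * SIZEOF_KMEM_CACHE_CPU;
		/* Corrected requirement: each kmalloc type needs its own kmem_cache_cpu. */
		unsigned long new_need = NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH *
					 SIZEOF_KMEM_CACHE_CPU;

		printf("old requirement: %lu bytes\n", old_need);
		printf("new requirement: %lu bytes\n", new_need);

		/* Mirrors the corrected BUILD_BUG_ON() in alloc_kmem_cache_cpus(). */
		assert(PERCPU_DYNAMIC_EARLY_SIZE >= new_need);
		return 0;
	}

With these assumed numbers, the old check only demanded 832 bytes while
the actual demand across all kmalloc types is 3328 bytes, which is why
the missing NR_KMALLOC_TYPES factor matters.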

Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Acked-by: Dennis Zhou <dennis@kernel.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
mm/slub.c

index 5eea9e446672f6b97bd73b6cf1c10755363a77f6..52b8995a03d13e98b7a0d62a6e38f1f722402118 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4005,7 +4005,8 @@ init_kmem_cache_node(struct kmem_cache_node *n)
 static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
 {
        BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
-                       KMALLOC_SHIFT_HIGH * sizeof(struct kmem_cache_cpu));
+                       NR_KMALLOC_TYPES * KMALLOC_SHIFT_HIGH *
+                       sizeof(struct kmem_cache_cpu));
 
        /*
         * Must align to double word boundary for the double cmpxchg