From 9cf7a111836552d159d443491a38b3bc2cc8a174 Mon Sep 17 00:00:00 2001
From: Abel Wu
Date: Tue, 13 Oct 2020 16:48:47 -0700
Subject: [PATCH] mm/slub: make add_full() condition more explicit

The commit below is incomplete, as it didn't handle the add_full()
part.

  commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before
  remove_full()")

This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(),
since that should be the only context in which we need the list_lock
for add_full().

Signed-off-by: Abel Wu
Signed-off-by: Andrew Morton
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Liu Xiang
Link: https://lkml.kernel.org/r/20200811020240.1231-1-wuyun.wu@huawei.com
Signed-off-by: Linus Torvalds
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7728a0b71d633..f05900186c0bd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2245,7 +2245,8 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+		if ((s->flags & SLAB_STORE_USER) && !lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2254,6 +2255,7 @@ redo:
 			 */
 			spin_lock(&n->list_lock);
 		}
+#endif
 	}
 
 	if (l != m) {
--
2.30.2
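
A note for context on the changelog's reasoning: the helpers it refers
to look roughly like this. This is a simplified paraphrase of mm/slub.c
and mm/slab.h from this period, trimmed for illustration, not the
verbatim source:

/* mm/slab.h (simplified): the flags kmem_cache_debug() tests for */
#define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)

/* mm/slub.c (simplified): true for ANY debug flag set on the cache,
 * not just SLAB_STORE_USER */
static inline int kmem_cache_debug(struct kmem_cache *s)
{
	return unlikely(s->flags & SLAB_DEBUG_FLAGS);
}

/* mm/slub.c: add_full() is a no-op unless SLAB_STORE_USER is set,
 * so n->list_lock is only required in that one case */
static void add_full(struct kmem_cache *s,
		     struct kmem_cache_node *n, struct page *page)
{
	if (!(s->flags & SLAB_STORE_USER))
		return;

	lockdep_assert_held(&n->list_lock);
	list_add(&page->slab_list, &n->full);
}

Guarding on kmem_cache_debug() therefore took the lock even for caches
that enable only SLAB_RED_ZONE or SLAB_POISON, where add_full() does
nothing; the new SLAB_STORE_USER test matches add_full()'s own early
return exactly.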