mm/slub: optimize free fast path code layout
author	Vlastimil Babka <vbabka@suse.cz>
	Fri, 27 Oct 2023 10:34:18 +0000 (12:34 +0200)
committer	Vlastimil Babka <vbabka@suse.cz>
	Wed, 6 Dec 2023 10:57:22 +0000 (11:57 +0100)
commit	ecf9a253ce120082ce0a8aff806c4de4865cfcc5
tree	24023a233080709ff86c75a26dd9ce7901788f79
parent	3450a0e5a6fc4cdbd70853f12c0c332dd24c1349
mm/slub: optimize free fast path code layout

Inspection of the kmem_cache_free() disassembly showed that we could
make the fast path smaller by providing a few more hints to the
compiler, and by splitting memcg_slab_free_hook() into an inline part
that only checks whether there is work to do, and an out-of-line part
that does the actual uncharge.
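
For illustration only, a minimal userspace sketch of the pattern (not
the mm/slub.c code itself; the names free_hook, __free_hook_slow and
struct cache are made up for the example). __builtin_expect() stands
in for the kernel's likely()/unlikely() hints:

	#include <stdio.h>
	#include <stdlib.h>

	struct cache { const char *name; };

	/* Out-of-line part: the rare uncharge work lives here, so
	 * the caller's fast path stays small. */
	static __attribute__((noinline))
	void __free_hook_slow(struct cache *c, void **p, int objects)
	{
		for (int i = 0; i < objects; i++)
			printf("%s: uncharging %p\n", c->name, p[i]);
	}

	/* Inline part: only a cheap "is there work?" check is
	 * emitted on the fast path. */
	static inline __attribute__((always_inline))
	void free_hook(struct cache *c, void **p, int objects, int charged)
	{
		/* Hint that "no work" is the expected case. */
		if (__builtin_expect(!charged, 1))
			return;
		__free_hook_slow(c, p, objects);
	}

	int main(void)
	{
		struct cache c = { "demo" };
		void *objs[2] = { malloc(8), malloc(8) };

		free_hook(&c, objs, 2, 0);	/* common case: no call */
		free_hook(&c, objs, 2, 1);	/* rare case: out-of-line call */

		free(objs[0]);
		free(objs[1]);
		return 0;
	}

With optimization enabled, the compiler can reduce the inlined fast
path to a test and a conditional branch, while the cold loop is kept
out of line in a separate function.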

bloat-o-meter results:
add/remove: 2/0 grow/shrink: 0/3 up/down: 286/-554 (-268)
Function                                     old     new   delta
__memcg_slab_free_hook                         -     270    +270
__pfx___memcg_slab_free_hook                   -      16     +16
kfree                                        828     665    -163
kmem_cache_free                             1116     948    -168
kmem_cache_free_bulk.part                   1701    1478    -223

Checking the kmem_cache_free() disassembly now shows that the
non-fastpath cases are handled out of line, which should reduce
instruction cache usage.

Acked-by: David Rientjes <rientjes@google.com>
Tested-by: David Rientjes <rientjes@google.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
mm/slub.c