ring-buffer: Avoid softlockup in ring_buffer_resize()
Author:     Zheng Yejian <zhengyejian1@huawei.com>
AuthorDate: Wed, 6 Sep 2023 08:19:30 +0000 (16:19 +0800)
Commit:     Steven Rostedt (Google) <rostedt@goodmis.org>
CommitDate: Thu, 7 Sep 2023 20:38:54 +0000 (16:38 -0400)
When a user resizes all trace ring buffers through the file 'buffer_size_kb',
ring_buffer_resize() allocates buffer pages for each cpu in a loop.

If the kernel preemption model is PREEMPT_NONE, and there are many cpus and
many buffer pages to allocate, the loop may not give up the cpu for a long
time and eventually cause a softlockup.

To avoid this, call cond_resched() after each per-cpu buffer allocation.

Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyejian1@huawei.com
Cc: <mhiramat@kernel.org>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
kernel/trace/ring_buffer.c

index 78502d4c7214e8b97e2cc11c2072de47f91899cc..72ccf75defd0f02d20b1032a172a2dc628836da1 100644
@@ -2198,6 +2198,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
                                err = -ENOMEM;
                                goto out_err;
                        }
+
+                       cond_resched();
                }
 
                cpus_read_lock();
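
For context, a simplified sketch of the loop shape after this change. This is
not the verbatim ring buffer code; alloc_cpu_buffer_pages() below is a
hypothetical stand-in for the real per-cpu page allocation:

	/*
	 * Sketch only: allocate pages for every cpu's buffer.  Each
	 * iteration may allocate many pages, so on PREEMPT_NONE kernels
	 * the loop can hog the cpu long enough to trip the softlockup
	 * watchdog unless it voluntarily reschedules.
	 */
	for_each_online_cpu(cpu) {
		if (alloc_cpu_buffer_pages(buffer, cpu, nr_pages)) {
			err = -ENOMEM;
			goto out_err;
		}

		/* Give other tasks a chance to run between cpus. */
		cond_resched();
	}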