rcu-tasks: Ensure RCU Tasks Trace loops have quiescent states
author: Paul E. McKenney <paulmck@kernel.org>
Mon, 18 Jul 2022 17:57:26 +0000 (10:57 -0700)
committer: Paul E. McKenney <paulmck@kernel.org>
Wed, 31 Aug 2022 12:10:55 +0000 (05:10 -0700)
The RCU Tasks Trace grace-period kthread loops across all CPUs, and
there can be quite a few CPUs, with some commercially available systems
sporting well over a thousand of them.  Some of these loops can feature
IPIs, which can take some time.  This commit therefore places a call to
cond_resched_tasks_rcu_qs() in each such loop.

Link: https://docs.google.com/document/d/1V0YnG1HTWMt9WHJjroiJL9lf-hMrud4v8Fn3fhyY0cI/edit?usp=sharing
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
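
The idea is a standard cooperative-yield pattern: inside a potentially long loop, periodically report a quiescent state so the rest of the system is not held up. As a rough userspace sketch of that pattern (not kernel code — process_items() and BATCH are hypothetical names chosen here for illustration, and POSIX sched_yield() stands in for the kernel's cond_resched_tasks_rcu_qs()):

```c
/* Userspace analogy of the pattern this patch applies: a long loop over
 * many work items periodically yields the CPU so other threads can make
 * progress, just as the grace-period kthread's per-CPU/per-task loops
 * periodically report a quiescent state.
 *
 * process_items() and BATCH are hypothetical; sched_yield() is only a
 * stand-in for cond_resched_tasks_rcu_qs().
 */
#include <sched.h>

#define BATCH 32	/* yield after every BATCH items (arbitrary choice) */

long process_items(long n)
{
	long done = 0;

	for (long i = 0; i < n; i++) {
		done++;			/* stand-in for the per-CPU/per-task work */
		if ((i + 1) % BATCH == 0)
			sched_yield();	/* analogous to cond_resched_tasks_rcu_qs() */
	}
	return done;
}
```

The yield costs almost nothing when there is no contention, which is why the patch can place the call unconditionally inside each loop iteration rather than gating it on loop length.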
kernel/rcu/tasks.h

index 469bf2a3b505e265d5e2681cff78d43f37f53a72..f5bf6fb430dabf9ec6e2d34cb521dab03e75b2b5 100644
@@ -1500,6 +1500,7 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
                if (rcu_tasks_trace_pertask_prep(t, true))
                        trc_add_holdout(t, hop);
                rcu_read_unlock();
+               cond_resched_tasks_rcu_qs();
        }
 
        // Only after all running tasks have been accounted for is it
@@ -1520,6 +1521,7 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
                        raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
                }
                raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
+               cond_resched_tasks_rcu_qs();
        }
 
        // Re-enable CPU hotplug now that the holdout list is populated.
@@ -1619,6 +1621,7 @@ static void check_all_holdout_tasks_trace(struct list_head *hop,
                        trc_del_holdout(t);
                else if (needreport)
                        show_stalled_task_trace(t, firstreport);
+               cond_resched_tasks_rcu_qs();
        }
 
        // Re-enable CPU hotplug now that the holdout list scan has completed.