sched/fair: Cleanup newidle_balance
author Vincent Guittot <vincent.guittot@linaro.org>
Tue, 19 Oct 2021 12:35:37 +0000 (14:35 +0200)
committer Peter Zijlstra <peterz@infradead.org>
Sun, 31 Oct 2021 10:11:38 +0000 (11:11 +0100)
update_next_balance() uses sd->last_balance, which is not modified by
load_balance(), so the two calls can be merged into one place.

No functional change

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lore.kernel.org/r/20211019123537.17146-6-vincent.guittot@linaro.org
kernel/sched/fair.c

index 57eae0ebc492e80a86ea21dddd85e84cec576d62..13950beb01a251bfb65bf5b75952e3dea9cbc62b 100644 (file)
@@ -10916,10 +10916,10 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
                int continue_balancing = 1;
                u64 domain_cost;
 
-               if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost) {
-                       update_next_balance(sd, &next_balance);
+               update_next_balance(sd, &next_balance);
+
+               if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost)
                        break;
-               }
 
                if (sd->flags & SD_BALANCE_NEWIDLE) {
 
@@ -10935,8 +10935,6 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
                        t0 = t1;
                }
 
-               update_next_balance(sd, &next_balance);
-
                /*
                 * Stop searching for tasks to pull if there are
                 * now runnable tasks on this rq.