sched/fair: Avoid double search on same cpu
authorAbel Wu <wuyun.abel@bytedance.com>
Wed, 7 Sep 2022 11:19:57 +0000 (19:19 +0800)
committerPeter Zijlstra <peterz@infradead.org>
Wed, 7 Sep 2022 19:53:46 +0000 (21:53 +0200)
The prev cpu is already checked at the beginning of SIS, and it is
unlikely to have become idle again by the time of the second check
in select_idle_smt(). So skip it and focus on its SMT siblings.

Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Josh Don <joshdon@google.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Link: https://lore.kernel.org/r/20220907112000.1854-3-wuyun.abel@bytedance.com
kernel/sched/fair.c

index 9657c7de5f576a17c4f7c878dde383bef03d8938..1ad79aaaaf936280ec8c93bdfd2e0f68f8d5ed51 100644 (file)
@@ -6355,6 +6355,8 @@ static int select_idle_smt(struct task_struct *p, int target)
        int cpu;
 
        for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
+               if (cpu == target)
+                       continue;
                if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
                        return cpu;
        }