sched/topology: Optimize topology_span_sane()
author Kyle Meyer <kyle.meyer@hpe.com>
Wed, 10 Apr 2024 21:33:11 +0000 (16:33 -0500)
committer Yury Norov <yury.norov@gmail.com>
Thu, 9 May 2024 16:25:08 +0000 (09:25 -0700)
Optimize topology_span_sane() by removing duplicate comparisons.

Since topology_span_sane() is called once per CPU from within
for_each_cpu(), every lower-numbered CPU has already been compared
against all other CPUs by the time the current CPU is checked. The
current CPU therefore only needs to be compared against higher-numbered
CPUs.

The total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 on each non-NUMA scheduling domain level.
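
As a rough illustration (a standalone userspace sketch, not kernel
code; the plain index loops below only mimic the shape of the change),
counting the comparisons performed by the old and new loop forms:

  #include <stdio.h>

  int main(void)
  {
          const int n = 8;        /* example CPU count */
          int old_cmp = 0, new_cmp = 0;

          /* Old: every CPU compared against every other CPU. */
          for (int cpu = 0; cpu < n; cpu++)
                  for (int i = 0; i < n; i++)
                          if (i != cpu)
                                  old_cmp++;

          /* New: only compare against higher-numbered CPUs. */
          for (int cpu = 0; cpu < n; cpu++)
                  for (int i = cpu + 1; i < n; i++)
                          new_cmp++;

          /* Prints "56 28" for n = 8: N*(N-1) vs N*(N-1)/2. */
          printf("%d %d\n", old_cmp, new_cmp);
          return 0;
  }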

Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
kernel/sched/topology.c

index 99ea5986038ce44997627fee1e01f6e36bef1b26..b6bcafc09969b66dd7db2b864ffd936731032536 100644
@@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
 static bool topology_span_sane(struct sched_domain_topology_level *tl,
                              const struct cpumask *cpu_map, int cpu)
 {
-       int i;
+       int i = cpu + 1;
 
        /* NUMA levels are allowed to overlap */
        if (tl->flags & SDTL_OVERLAP)
@@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
         * breaking the sched_group lists - i.e. a later get_group() pass
         * breaks the linking done for an earlier span.
         */
-       for_each_cpu(i, cpu_map) {
-               if (i == cpu)
-                       continue;
+       for_each_cpu_from(i, cpu_map) {
                /*
                 * We should 'and' all those masks with 'cpu_map' to exactly
                 * match the topology we're about to build, but that can only
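
Note: for_each_cpu_from(i, cpu_map) iterates over the CPUs set in
cpu_map starting from the current value of i rather than from the
beginning of the mask, so initializing i to cpu + 1 above restricts the
walk to higher-numbered CPUs.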