sched/eevdf: Curb wakeup-preemption
author	Peter Zijlstra <peterz@infradead.org>
	Wed, 16 Aug 2023 13:40:59 +0000 (15:40 +0200)
committer	Peter Zijlstra <peterz@infradead.org>
	Thu, 17 Aug 2023 15:07:07 +0000 (17:07 +0200)
commit	63304558ba5dcaaff9e052ee43cfdcc7f9c29e85
tree	3a8ca75f5776357159eed23696024bc5a4f448b7
parent	7170509cadbb76e5fa7d7b090d2cbdb93d56a2de

Mike and others noticed that EEVDF likes to over-schedule quite a
bit, which hurts the performance of a number of benchmarks /
workloads.

In particular, what seems to cause the over-scheduling is that when a
waking task's lag is of the same order as (or larger than) its request
/ slice, placement will not only put the task left of current, but
also give it a smaller (earlier) deadline than current, which causes
immediate preemption.

[ notably, lag bounds are relative to HZ ]

Mike suggested we stick to picking 'current' for as long as it's
eligible to run, giving it uninterrupted runtime until it reaches
parity with the pack.

Augment Mike's suggestion by only allowing it to exhaust its initial
request.

One random data point:

echo NO_RUN_TO_PARITY > /debug/sched/features
perf stat -a -e context-switches --repeat 10 -- perf bench sched messaging -g 20 -t -l 5000

3,723,554        context-switches      ( +-  0.56% )
9.5136 +- 0.0394 seconds time elapsed  ( +-  0.41% )

echo RUN_TO_PARITY > /debug/sched/features
perf stat -a -e context-switches --repeat 10 -- perf bench sched messaging -g 20 -t -l 5000

2,556,535        context-switches      ( +-  0.51% )
9.2427 +- 0.0302 seconds time elapsed  ( +-  0.33% )

Suggested-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230816134059.GC982867@hirez.programming.kicks-ass.net
kernel/sched/fair.c
kernel/sched/features.h