Merge branch 'sch_fq-improvements'
author    David S. Miller <davem@davemloft.net>  Sun, 1 Oct 2023 12:20:36 +0000 (13:20 +0100)
committer David S. Miller <davem@davemloft.net>  Sun, 1 Oct 2023 12:20:36 +0000 (13:20 +0100)
commit    b49a948568dcbb5f38cbf5356ea0fb9c9c6f6953
tree      b6a000dec2cee2f62ef3a94484e1c80e77171356
parent    66ac08a7385fa8a5d3312ca96d1399670cfca0eb
parent    8f6c4ff9e0522da9313fbff5295ae208af679fed

Eric Dumazet says:

====================
net_sched: sch_fq: round of improvements

For FQ's tenth anniversary, it was time to make it faster.

The FQ part (as in Fair Queue) is rather expensive, because
we have to classify packets and store them in a per-flow structure,
and add this per-flow structure to a hash table. The RR lists
also add cache line misses.

Most fq qdiscs are almost idle. Trying to share NIC bandwidth has
no benefit, so the qdisc can behave like a FIFO.
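The fast-path idea above can be sketched as follows. This is an
illustrative standalone sketch, not the actual net/sched/sch_fq.c code:
the struct, its fields, and the fq_fastpath_ok() helper are assumed
names chosen for the example.

```c
#include <stdbool.h>

/* Illustrative sketch only: names are assumptions, not kernel code. */
struct fq_sketch {
	unsigned int qlen;	/* packets currently queued in the qdisc */
};

/* When the qdisc is idle and the packet needs no pacing, the costly
 * flow classification, hash lookup, and RR-list work buy nothing,
 * so the packet can go straight to an internal FIFO. */
static bool fq_fastpath_ok(const struct fq_sketch *q, bool needs_pacing)
{
	return q->qlen == 0 && !needs_pacing;
}
```

Note the const first parameter: mirroring this in the real series is
why the cover letter mentions a "constify qdisc_priv()" patch, so the
fast-path check can take a const qdisc private area.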

This series brings a 5% throughput increase in an intensive
tcp_rr workload, and a 13% increase for (unpaced) UDP packets.

v2: removed an extra label (build bot).
    Fixed an accidental increase of the stat_internal_packets counter
    in the fast path.
    Added a "constify qdisc_priv()" patch to allow the first parameter
    of fq_fastpath_check() to be const.
    Fixed a typo in 'eligible' (Willem)
====================

Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>