xsk: Introduce padding between more ring pointers
author     Magnus Karlsson <magnus.karlsson@intel.com>
           Mon, 16 Nov 2020 11:12:45 +0000 (12:12 +0100)
committer  Daniel Borkmann <daniel@iogearbox.net>
           Tue, 17 Nov 2020 21:07:40 +0000 (22:07 +0100)
Introduce one cache line's worth of padding between the consumer
pointer and the flags field, as well as between the flags field and
the start of the descriptors, in all the lockless rings. This is so
that the x86 HW adjacent cache line prefetcher will not prefetch the
adjacent pointer/field when only one pointer/field is going to be
used. This improves throughput of the l2fwd sample app by 1% on my
machine with HW prefetching turned on in the BIOS.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/1605525167-14450-4-git-send-email-magnus.karlsson@gmail.com
net/xdp/xsk_queue.h

index cdb9cf3cd1361f9d15c36d081abe0408cf93ab55..74fac802cce125bcd8052b4dacee58c6c6792c87 100644 (file)
@@ -18,9 +18,11 @@ struct xdp_ring {
        /* Hinder the adjacent cache prefetcher to prefetch the consumer
         * pointer if the producer pointer is touched and vice versa.
         */
-       u32 pad ____cacheline_aligned_in_smp;
+       u32 pad1 ____cacheline_aligned_in_smp;
        u32 consumer ____cacheline_aligned_in_smp;
+       u32 pad2 ____cacheline_aligned_in_smp;
        u32 flags;
+       u32 pad3 ____cacheline_aligned_in_smp;
 };
 
 /* Used for the RX and TX queues for packets */
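
For illustration, here is a minimal standalone C sketch of the same
padding idea applied to a generic single-producer/single-consumer ring.
It assumes 64-byte cache lines paired into 128-byte adjacent-prefetch
blocks on x86; the names (spsc_ring, CACHELINE) are hypothetical and not
taken from the kernel sources.

#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE 64

struct spsc_ring {
	/* Written by the producer, read by the consumer. */
	alignas(CACHELINE) uint32_t producer;
	/* One full empty cache line so the producer's 128-byte prefetch
	 * pair never contains the consumer pointer.
	 */
	alignas(CACHELINE) uint32_t pad1;
	/* Written by the consumer, read by the producer. */
	alignas(CACHELINE) uint32_t consumer;
	alignas(CACHELINE) uint32_t pad2;
	/* Slow-path flags, likewise isolated from their neighbours. */
	alignas(CACHELINE) uint32_t flags;
	alignas(CACHELINE) uint32_t pad3;
	/* Descriptors start on a fresh prefetch pair. */
	alignas(CACHELINE) uint64_t desc[512];
};

_Static_assert(offsetof(struct spsc_ring, consumer) -
	       offsetof(struct spsc_ring, producer) >= 2 * CACHELINE,
	       "producer and consumer must sit in different 128B prefetch pairs");

With this layout the producer pointer, consumer pointer, flags and
descriptors each occupy their own 128-byte prefetch pair, so a core that
touches only one of them never has the neighbouring field's cache line
pulled in by the adjacent cache line prefetcher.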