bpf: Fold smp_mb__before_atomic() into atomic_set_release()
author     Paul E. McKenney <paulmck@kernel.org>
           Wed, 18 Oct 2023 22:28:32 +0000 (15:28 -0700)
committer  Daniel Borkmann <daniel@iogearbox.net>
           Tue, 24 Oct 2023 12:26:07 +0000 (14:26 +0200)
The bpf_user_ringbuf_drain() BPF_CALL function uses an atomic_set()
immediately preceded by smp_mb__before_atomic() so as to order storing
of ring-buffer consumer and producer positions prior to the atomic_set()
call's clearing of the ->busy flag, as follows:

        smp_mb__before_atomic();
        atomic_set(&rb->busy, 0);

Although this works on current architectures and implementations, given
that it only needs to order prior writes against a later write, it does
so only by accident: smp_mb__before_atomic() is guaranteed to work only
with read-modify-write atomic operations, and not at all with things
like atomic_set() and atomic_read().
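
Note that smp_mb__before_atomic() does provide full ordering when it
immediately precedes a non-value-returning read-modify-write atomic
operation. A minimal sketch of the supported pattern, with "a" and
"cnt" standing in for real variables:

        WRITE_ONCE(a, 1);
        smp_mb__before_atomic();
        atomic_inc(&cnt);  /* RMW: the write to "a" is ordered before this. */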

Note especially that smp_mb__before_atomic() will not, repeat *not*,
order the prior write to "a" before the subsequent non-read-modify-write
atomic read from "b", even on strongly ordered systems such as x86:

        WRITE_ONCE(a, 1);
        smp_mb__before_atomic();
        r1 = atomic_read(&b);
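
One way to actually get that ordering would be a full smp_mb(). Purely
as an illustration (the ring-buffer code does not need this), again
with "a", "b", and "r1" as stand-ins:

        WRITE_ONCE(a, 1);
        smp_mb();              /* Full barrier: write to "a" ordered first. */
        r1 = atomic_read(&b);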

Therefore, replace the smp_mb__before_atomic() and atomic_set() with
atomic_set_release() as follows:

        atomic_set_release(&rb->busy, 0);

This is no slower (and sometimes is faster) than the original, and also
provides a formal guarantee of ordering that the original lacks.
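
For reference, absent an architecture-specific definition, the generic
fallback implements atomic_set_release() as a release store, roughly
(paraphrased, not the exact kernel source):

        static inline void atomic_set_release(atomic_t *v, int i)
        {
                smp_store_release(&v->counter, i);
        }

A release store orders all prior loads and stores before the store
itself, which is exactly the guarantee the comment in the code below
asks for.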

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/ec86d38e-cfb4-44aa-8fdb-6c925922d93c@paulmck-laptop
kernel/bpf/ringbuf.c

index f045fde632e5f1ef1d3235ec07ed2f67e29a18b6..0ee653a936ea00f5e143cfb3d953831e8a202ce6 100644
@@ -770,8 +770,7 @@ schedule_work_return:
        /* Prevent the clearing of the busy-bit from being reordered before the
         * storing of any rb consumer or producer positions.
         */
-       smp_mb__before_atomic();
-       atomic_set(&rb->busy, 0);
+       atomic_set_release(&rb->busy, 0);
 
        if (flags & BPF_RB_FORCE_WAKEUP)
                irq_work_queue(&rb->work);