udp: relax atomic operation on sk->sk_rmem_alloc
author Eric Dumazet <edumazet@google.com>
Thu, 28 Mar 2024 14:40:30 +0000 (14:40 +0000)
committer Jakub Kicinski <kuba@kernel.org>
Fri, 29 Mar 2024 22:03:10 +0000 (15:03 -0700)
atomic_add_return() is more expensive than atomic_add()
and seems overkill in the UDP rx fast path.
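
As a rough userspace illustration (my own sketch, not part of the patch;
the counter and helper names below are made up): an atomic add whose
result is consumed has to be a returning read-modify-write (LOCK XADD on
x86, and fully ordered on e.g. arm64 under the kernel's atomic_t rules),
while an add whose result is discarded can be a plain LOCK ADD with no
ordering implied, which is what the fast path now relies on.

  /* Sketch only (not kernel code): mirrors the old vs. new charging pattern. */
  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_uint rmem_alloc;

  /* old pattern: charge and test the new total with one returning RMW */
  static bool charge_and_check(unsigned int size, unsigned int rcvbuf)
  {
          unsigned int rmem = atomic_fetch_add(&rmem_alloc, size) + size;

          return rmem > size + rcvbuf;    /* caller would uncharge and drop */
  }

  /* new pattern: just charge; the returned old value is never read */
  static void charge(unsigned int size)
  {
          atomic_fetch_add_explicit(&rmem_alloc, size, memory_order_relaxed);
  }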

Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240328144032.1864988-3-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net/ipv4/udp.c

index 6a39e7fa0616706ed6a8e3e0931ff66fe06af0ef..19d7db4563acd02222aead860e0fa8f22c8ada2e 100644
@@ -1516,12 +1516,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
        size = skb->truesize;
        udp_set_dev_scratch(skb);
 
-       /* we drop only if the receive buf is full and the receive
-        * queue contains some other skb
-        */
-       rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
-       if (rmem > (size + (unsigned int)sk->sk_rcvbuf))
-               goto uncharge_drop;
+       atomic_add(size, &sk->sk_rmem_alloc);
 
        spin_lock(&list->lock);
        err = udp_rmem_schedule(sk, size);