net: enqueue_to_backlog() change vs not running device
author: Eric Dumazet <edumazet@google.com>
Fri, 29 Mar 2024 15:42:20 +0000 (15:42 +0000)
committer: David S. Miller <davem@davemloft.net>
Mon, 1 Apr 2024 10:28:31 +0000 (11:28 +0100)
If the device attached to the packet given to enqueue_to_backlog()
is not running, we drop the packet.

But we accidentally increase sd->dropped, giving false signals
to admins: sd->dropped should be reserved for cpu backlog pressure,
not for temporary glitches while a device is being dismantled.

While we are at it, perform the netif_running() test before
acquiring the RPS lock, and use the SKB_DROP_REASON_DEV_READY
drop reason instead of NOT_SPECIFIED.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/core/dev.c

index c136e80dea6182faac153f0cc4149bf8698b6676..4ad7836365e68f700b26dba2c50515a8c18329cf 100644 (file)
@@ -4801,12 +4801,13 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
        unsigned long flags;
        unsigned int qlen;
 
-       reason = SKB_DROP_REASON_NOT_SPECIFIED;
+       reason = SKB_DROP_REASON_DEV_READY;
+       if (!netif_running(skb->dev))
+               goto bad_dev;
+
        sd = &per_cpu(softnet_data, cpu);
 
        backlog_lock_irq_save(sd, &flags);
-       if (!netif_running(skb->dev))
-               goto drop;
        qlen = skb_queue_len(&sd->input_pkt_queue);
        if (qlen <= READ_ONCE(net_hotdata.max_backlog) &&
            !skb_flow_limit(skb, qlen)) {
@@ -4827,10 +4828,10 @@ enqueue:
        }
        reason = SKB_DROP_REASON_CPU_BACKLOG;
 
-drop:
        sd->dropped++;
        backlog_unlock_irq_restore(sd, &flags);
 
+bad_dev:
        dev_core_stats_rx_dropped_inc(skb->dev);
        kfree_skb_reason(skb, reason);
        return NET_RX_DROP;