net: mana: Batch ringing RX queue doorbell on receiving packets
author Long Li <longli@microsoft.com>
Mon, 17 Jul 2023 19:35:38 +0000 (12:35 -0700)
committer Jakub Kicinski <kuba@kernel.org>
Wed, 19 Jul 2023 01:00:13 +0000 (18:00 -0700)
It's inefficient to ring the doorbell page every time a WQE is posted to
the receive queue. Excessive MMIO writes cause the CPU to spend more
time waiting on LOCK instructions (atomic operations), resulting in poor
scaling performance.

Move the code that rings the doorbell page so that it runs only after
all WQEs have been posted to the receive queue during a callback from
napi_poll().
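
The batched pattern is sketched below as a hypothetical helper:
mana_refill_rxq() does not exist in the driver, and the indexing is
simplified, but the types and the two GDMA calls are the ones visible
in the diff at the end of this message.

	static void mana_refill_rxq(struct mana_rxq *rxq, int count)
	{
		int i, posted = 0;
		int err;

		for (i = 0; i < count; i++) {
			struct mana_recv_buf_oob *oob = &rxq->rx_oobs[i];

			/* Post the WQE without ringing the doorbell */
			err = mana_gd_post_work_request(rxq->gdma_rq,
							&oob->wqe_req,
							&oob->wqe_inf);
			if (err)
				break;
			posted++;
		}

		if (posted > 0) {
			struct gdma_context *gc =
				rxq->gdma_rq->gdma_dev->gdma_context;

			/* One MMIO write covers the whole batch */
			mana_gd_wq_ring_doorbell(gc, rxq->gdma_rq);
		}
	}

This amortizes a single MMIO doorbell write across all packets
processed in one NAPI poll, instead of paying that cost per packet.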

With this change, tests showed an improvement from 120Gbps to 160Gbps
on a 200Gbps physical link, with 16 or 32 hardware queues.

Tests showed no regression in network latency benchmarks on a single
connection.

Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Long Li <longli@microsoft.com>
Link: https://lore.kernel.org/r/1689622539-5334-2-git-send-email-longli@linuxonhyperv.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
drivers/net/ethernet/microsoft/mana/mana_en.c

index a499e460594b6598bdd501cf5d27084af5c15b73..ac2acc9aca9d0c603028377c22aa9d3822fe97b9 100644
@@ -1386,8 +1386,8 @@ static void mana_post_pkt_rxq(struct mana_rxq *rxq)
 
        recv_buf_oob = &rxq->rx_oobs[curr_index];
 
-       err = mana_gd_post_and_ring(rxq->gdma_rq, &recv_buf_oob->wqe_req,
-                                   &recv_buf_oob->wqe_inf);
+       err = mana_gd_post_work_request(rxq->gdma_rq, &recv_buf_oob->wqe_req,
+                                       &recv_buf_oob->wqe_inf);
        if (WARN_ON_ONCE(err))
                return;
 
@@ -1657,6 +1657,12 @@ static void mana_poll_rx_cq(struct mana_cq *cq)
                mana_process_rx_cqe(rxq, cq, &comp[i]);
        }
 
+       if (comp_read > 0) {
+               struct gdma_context *gc = rxq->gdma_rq->gdma_dev->gdma_context;
+
+               mana_gd_wq_ring_doorbell(gc, rxq->gdma_rq);
+       }
+
        if (rxq->xdp_flush)
                xdp_do_flush();
 }