net/mlx5e: RX, Hook NAPIs to page pools
author Dragos Tatulea <dtatulea@nvidia.com>
Thu, 13 Apr 2023 14:14:05 +0000 (17:14 +0300)
committer Saeed Mahameed <saeedm@nvidia.com>
Fri, 21 Apr 2023 01:35:50 +0000 (18:35 -0700)
Link the NAPI to the rq's page_pool to improve page_pool cache
usage during skb recycling.
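
For context, here is a minimal sketch of the pattern this patch enables.
The helper name is hypothetical and the flags are illustrative; the
page_pool_params fields and the page_pool_create() call follow the
in-tree page_pool API of this kernel:

    #include <net/page_pool.h>

    /*
     * Illustrative sketch, not the actual mlx5e code: create a page pool
     * whose .napi field points at the rq's NAPI instance. When an skb
     * built from this pool is freed in the softirq context of that same
     * NAPI, page_pool can recycle the page into its lockless per-pool
     * cache instead of going through the slower ptr_ring.
     */
    static struct page_pool *mlx5e_rq_pool_sketch(struct mlx5e_rq *rq,
                                                  u32 pool_size, int node)
    {
            struct page_pool_params pp_params = {
                    .order     = 0,
                    .flags     = PP_FLAG_DMA_MAP, /* illustrative */
                    .pool_size = pool_size,
                    .nid       = node,
                    .dev       = rq->pdev,
                    .napi      = rq->cq.napi, /* the linkage added here */
                    .dma_dir   = rq->buff.map_dir,
                    .max_len   = PAGE_SIZE,
            };

            return page_pool_create(&pp_params); /* ERR_PTR() on failure */
    }

This matters because page_pool's NAPI-aware recycling lets pages freed
from the same NAPI context that allocated them bypass the ptr_ring and
return straight to the pool's lockless cache, which is what the numbers
below measure.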

Here are the observed improvements for an iperf single-stream
test case:

- For 1500 MTU and legacy rq, a 20% improvement in cache usage.

- For 9K MTU, a 33-40% improvement in page_pool cache usage for
both striding and legacy rq (depending on whether the application runs
on the same core as the rq or not).

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
drivers/net/ethernet/mellanox/mlx5/core/en_main.c

index 7eb1eeb115ca51e352b9d612b3eb76cf4c173ff1..f5504b699fcf26ca071f4c861c9d53906f2d1496 100644
@@ -857,6 +857,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
                pp_params.pool_size = pool_size;
                pp_params.nid       = node;
                pp_params.dev       = rq->pdev;
+               pp_params.napi      = rq->cq.napi;
                pp_params.dma_dir   = rq->buff.map_dir;
                pp_params.max_len   = PAGE_SIZE;