From: Yevgeny Kliteynik <kliteyn@nvidia.com>
Date: Mon, 14 Nov 2022 22:11:38 +0000 (+0200)
Subject: net/mlx5: DR, Fix QP continuous allocation
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=17dc71c336aac381f59ba541cf85fb0c192d1c1c;p=linux.git

net/mlx5: DR, Fix QP continuous allocation

When allocating a QP we allocate an RQ and an SQ; the RQ is stored first
in memory, followed by the SQ.
This allocation is not physically contiguous - it may span across different
physical pages. SW Steering code always writes in pairs: 1 BB write + 1 BB
read, or 2 contiguous BBs of a GTA WQE.
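
As a rough sketch of the WQ buffer layout (assuming a 4K page size and
64 B basic blocks, MLX5_SEND_WQE_BB), the SQ's placement within a page
is dictated entirely by the RQ size:

	page 0: [ RQ = wqe_cnt * stride bytes ][ SQ BB 0 ][ SQ BB 1 ] ...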

This led to an issue where the RQ allocation was 4x16, which is equal to
1 WQE BB, causing a 1 BB offset in the page and splitting the GTA WQE
between different physical pages.
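
In numbers, assuming a 16 B RQ stride, 64 B BBs and 4K pages:

	RQ size = 4 * 16 B = 64 B = 1 BB
	SQ BB i starts at byte 64 + i * 64 of the WQ buffer

so a 2-BB post whose first BB lands in the last BB slot of a page spills
its second BB into the next, physically unrelated, page.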

The solution was to create the RQ with an even number of BBs and to have
the RQ aligned to a page.
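
With the fix, under the same assumptions:

	RQ size = 256 * 16 B = 4096 B = 1 page (64 BBs, an even count)

so the SQ starts on a page boundary and a 2-BB GTA WQE can no longer be
split across pages.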

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
index d052d469d4dfe..4a5ae86e2b623 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
@@ -267,7 +267,7 @@ static struct mlx5dr_qp *dr_create_rc_qp(struct mlx5_core_dev *mdev,
 
 	dr_qp->rq.pc = 0;
 	dr_qp->rq.cc = 0;
-	dr_qp->rq.wqe_cnt = 4;
+	dr_qp->rq.wqe_cnt = 256;
 	dr_qp->sq.pc = 0;
 	dr_qp->sq.cc = 0;
 	dr_qp->sq.head = 0;