net/mlx5: Lag, decouple FDB selection and shared FDB
author	Mark Bloch <mbloch@nvidia.com>
	Tue, 24 May 2022 09:08:10 +0000 (12:08 +0300)
committer	Saeed Mahameed <saeedm@nvidia.com>
	Wed, 6 Jul 2022 23:11:54 +0000 (16:11 -0700)
commit	4892bd9830c363420f00d90186630e7acbed5c9e
tree	4405835a295c9277881dacc653f1928b99812458
parent	d6c13d74b5c06bef75febf1f351de3c4c255f149
net/mlx5: Lag, decouple FDB selection and shared FDB

Multiport eswitch mode requires native FDB selection instead of affinity.
This was achieved by passing the shared_fdb flag down the HW lag creation
path. While that did accomplish the goal of setting the FDB selection mode
to native, it had the side effect of also creating a shared FDB
configuration.

This created a few issues:
- TC rules are inserted into a non-active FDB, which means traffic isn't
  offloaded, as all traffic reaches only a single FDB.
- All wire traffic is treated as if a single physical port received it; while
  this is true for a bond configuration, it shouldn't be the case for
  multiport eswitch.

Create a new flag, MLX5_LAG_MODE_FLAG_FDB_SEL_MODE_NATIVE, to indicate
which FDB selection mode should be used, independent of whether a shared
FDB is created.
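
As a rough sketch of how the decoupling can look (assumed wiring, not the
verbatim patch: the enum position, the function signature and the exact
fields written are illustrative), the new flag would live in the existing
lag mode flags bitmask and the CREATE_LAG context's fdb_selection_mode
would be derived from it instead of from shared_fdb:

  enum {
          MLX5_LAG_MODE_FLAG_HASH_BASED,
          MLX5_LAG_MODE_FLAG_SHARED_FDB,
          MLX5_LAG_MODE_FLAG_FDB_SEL_MODE_NATIVE, /* new: native FDB selection */
  };

  static int mlx5_cmd_create_lag(struct mlx5_core_dev *dev, u8 *ports,
                                 unsigned long flags)
  {
          u32 in[MLX5_ST_SZ_DW(create_lag_in)] = {};
          void *lag_ctx = MLX5_ADDR_OF(create_lag_in, in, ctx);
          /* Native FDB selection is requested explicitly via the new flag,
           * independent of whether a shared FDB is being created.
           */
          bool fdb_sel_mode = test_bit(MLX5_LAG_MODE_FLAG_FDB_SEL_MODE_NATIVE,
                                       &flags);

          MLX5_SET(create_lag_in, in, opcode, MLX5_CMD_OP_CREATE_LAG);
          MLX5_SET(lagc, lag_ctx, fdb_selection_mode, fdb_sel_mode);
          MLX5_SET(lagc, lag_ctx, tx_remap_affinity_1, ports[0]);
          MLX5_SET(lagc, lag_ctx, tx_remap_affinity_2, ports[1]);

          return mlx5_cmd_exec_in(dev, create_lag, in);
  }

With a split like this, multiport eswitch can request only the native
selection flag, while a bond can still request a shared FDB when needed,
which is what decouples the two concepts.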

Fixes: 94db33177819 ("net/mlx5: Support multiport eswitch mode")
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
drivers/net/ethernet/mellanox/mlx5/core/lag/debugfs.c
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c