RDMA/ipoib: Distribute cq completion vector better
author Jack Wang <jinpu.wang@cloud.ionos.com>
Tue, 13 Oct 2020 07:43:42 +0000 (09:43 +0200)
committer Jason Gunthorpe <jgg@nvidia.com>
Fri, 20 Nov 2020 20:18:59 +0000 (16:18 -0400)
Currently ipoib chooses the CQ completion vector based on the port number.
When the HCA has only one port, all interface receive queue completions are
bound to CQ completion vector 0.

To distribute the load better, use the same method as __ib_alloc_cq_any to
choose the completion vector: a global atomic counter taken modulo the
number of completion vectors. With this change, each interface now uses a
different completion vector.
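
For reference, the vector selection in __ib_alloc_cq_any (in
drivers/infiniband/core/cq.c) looks roughly like this; a paraphrased sketch,
not the verbatim upstream source:

        static atomic_t counter;
        int comp_vector = 0;

        /* Round-robin across the device's completion vectors, so that
         * successive CQ allocations land on different vectors (and thus
         * usually on different interrupt CPUs).
         */
        if (dev->num_comp_vectors > 1)
                comp_vector = atomic_inc_return(&counter) %
                              min_t(int, dev->num_comp_vectors,
                                    num_online_cpus());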

Link: https://lore.kernel.org/r/20201013074342.15867-1-jinpu.wang@cloud.ionos.com
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Gioh Kim <gi-oh.kim@cloud.ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/ulp/ipoib/ipoib_verbs.c

index 587252fd6f57dd9618e236143c061fbb47e0dd77..5a150a080ac217514ed7e5748c3189495105a019 100644 (file)
@@ -158,6 +158,7 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
 
        int ret, size, req_vec;
        int i;
+       static atomic_t counter;
 
        size = ipoib_recvq_size + 1;
        ret = ipoib_cm_dev_init(dev);
@@ -171,8 +172,7 @@ int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca)
                if (ret != -EOPNOTSUPP)
                        return ret;
 
-       req_vec = (priv->port - 1) * 2;
-
+       req_vec = atomic_inc_return(&counter) * 2;
        cq_attr.cqe = size;
        cq_attr.comp_vector = req_vec % priv->ca->num_comp_vectors;
        priv->recv_cq = ib_create_cq(priv->ca, ipoib_ib_rx_completion, NULL,
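
The stride of two in req_vec leaves the adjacent vector free for the send
CQ: later in ipoib_transport_dev_init (outside this hunk; recalled from the
surrounding code, so treat it as an assumption) the send CQ is created with

        cq_attr.comp_vector = (req_vec + 1) % priv->ca->num_comp_vectors;

so each interface's receive and send completions land on neighbouring
vectors.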