RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
author Jason Gunthorpe <jgg@nvidia.com>
Fri, 4 Sep 2020 22:41:42 +0000 (19:41 -0300)
committer Jason Gunthorpe <jgg@nvidia.com>
Wed, 9 Sep 2020 18:33:17 +0000 (15:33 -0300)
It is possible for a single SGL to span an aligned boundary, e.g. if the
SGL is

  61440 -> 90112

Then the length is 28672, which currently limits the block size to
32K. With a 32K page size, the two covering blocks will be:

  32768 -> 65536 and 65536 -> 98304

However, the correct answer is a 128K block size, which spans the whole
28672 bytes in a single block: the first byte (61440 = 0xF000) and the
last byte (90111 = 0x15FFF) differ only in bits 16 and below, so all
higher IOVA bits are unchanged and one naturally aligned 2^17 = 128K
block covers the entire range.

Instead of limiting based on length, figure out which high IOVA bits
don't change between the start and last addresses. The lowest such
unchanging bit gives the highest useful page size.
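
A minimal userspace sketch of the mask computation, using the numbers
from the example above. GENMASK() and bits_per() are simplified
stand-ins for the kernel helpers, and the all-ones pgsz_bitmap is an
assumption standing in for a device that supports every power-of-two
page size:

  #include <stdio.h>

  #define BITS_PER_LONG 64
  /* Stand-in for the kernel's GENMASK(): set bits h..l inclusive */
  #define GENMASK(h, l) \
          ((~0UL << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

  /* Simplified stand-in for the kernel's bits_per(): number of bits
   * needed to represent n (assumes n != 0 here). */
  static unsigned int bits_per(unsigned long n)
  {
          unsigned int bits = 0;

          while (n) {
                  bits++;
                  n >>= 1;
          }
          return bits;
  }

  int main(void)
  {
          unsigned long virt = 61440;       /* 0xF000, SGL start */
          unsigned long length = 28672;     /* SGL ends at 90112 */
          unsigned long pgsz_bitmap = ~0UL; /* assumed: all sizes */
          unsigned long mask;

          /* Bits that differ between the first and last byte must
           * fall inside a single block; all higher bits don't change
           * and so don't constrain the block size. */
          mask = pgsz_bitmap &
                 GENMASK(BITS_PER_LONG - 1,
                         bits_per((length - 1 + virt) ^ virt));

          /* Lowest set bit of the mask: prints 0x20000, i.e. 128K */
          printf("smallest single-block size: 0x%lx\n", mask & -mask);
          return 0;
  }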

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/core/umem.c

index 831bff8d52e547834e9e04064127fbb280595126..09539dd764ec056623cb3418c33d03d52b1d290b 100644
@@ -156,8 +156,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
                return 0;
 
        va = virt;
-       /* max page size not to exceed MR length */
-       mask = roundup_pow_of_two(umem->length);
+       /* The best result is the smallest page size that results in the minimum
+        * number of required pages. Compute the largest page size that could
+        * work based on VA address bits that don't change.
+        */
+       mask = pgsz_bitmap &
+              GENMASK(BITS_PER_LONG - 1,
+                      bits_per((umem->length - 1 + virt) ^ virt));
        /* offset into first SGL */
        pgoff = umem->address & ~PAGE_MASK;
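
For context, a hypothetical driver-side caller might look like the
sketch below. The function signature and the return-0-on-failure
convention come from the hunk above; the particular page-size set and
error handling are illustrative only:

  /* Hypothetical caller: pick the best page size for this umem from
   * an illustrative set of device-supported sizes. */
  unsigned long pgsz;

  pgsz = ib_umem_find_best_pgsz(umem, SZ_4K | SZ_64K | SZ_1M, virt);
  if (!pgsz)
          return -EINVAL; /* no supported page size can map this umem */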