RDMA/mlx5: Use rdma_umem_for_each_dma_block()
author	Jason Gunthorpe <jgg@nvidia.com>
Mon, 13 Feb 2023 18:14:11 +0000 (14:14 -0400)
committer	Leon Romanovsky <leon@kernel.org>
Wed, 15 Feb 2023 11:21:47 +0000 (13:21 +0200)
Replace an open-coded version of rdma_umem_for_each_dma_block() with the
proper helper.

Fixes: b3d47ebd4908 ("RDMA/mlx5: Use mlx5_umr_post_send_wait() to update MR pas")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/0-v1-c13a5b88359b+556d0-mlx5_umem_block_jgg@nvidia.com
Reviewed-by: Devesh Sharma <devesh.s.sharma@oracle.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
drivers/infiniband/hw/mlx5/umr.c

index 029e9536ec28f2ffc252890b8ef1ae5c2b4b91c6..55f4e048d9474377814ebf440112b816b3b558f7 100644
@@ -636,9 +636,7 @@ int mlx5r_umr_update_mr_pas(struct mlx5_ib_mr *mr, unsigned int flags)
        mlx5r_umr_set_update_xlt_data_seg(&wqe.data_seg, &sg);
 
        cur_mtt = mtt;
-       rdma_for_each_block(mr->umem->sgt_append.sgt.sgl, &biter,
-                           mr->umem->sgt_append.sgt.nents,
-                           BIT(mr->page_shift)) {
+       rdma_umem_for_each_dma_block(mr->umem, &biter, BIT(mr->page_shift)) {
                if (cur_mtt == (void *)mtt + sg.length) {
                        dma_sync_single_for_device(ddev, sg.addr, sg.length,
                                                   DMA_TO_DEVICE);
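
For context, a hedged sketch of how the two forms relate: rdma_umem_for_each_dma_block() is a thin umem-aware wrapper that feeds the umem's own scatter list and entry count into the generic block iterator, which is exactly what the removed lines did by hand. The helper and field names below follow the removed hunk and the in-kernel iterator API; the function walk_umem_blocks() is a hypothetical illustration, not code from this patch, and the exact macro expansion of the helper should be taken from include/rdma/ib_umem.h rather than from this sketch.

	/* Illustrative sketch (kernel context, not a standalone program):
	 * how the open-coded loop removed by this patch lines up with the
	 * helper it is replaced by.
	 */
	#include <rdma/ib_umem.h>   /* struct ib_umem, rdma_umem_for_each_dma_block() */
	#include <rdma/ib_verbs.h>  /* struct ib_block_iter, rdma_for_each_block() */

	static void walk_umem_blocks(struct ib_umem *umem, unsigned int page_shift)
	{
		struct ib_block_iter biter;
		dma_addr_t addr;

		/* Open-coded form, as in the removed hunk: pass the umem's
		 * scatter list and entry count to the generic iterator by hand.
		 */
		rdma_for_each_block(umem->sgt_append.sgt.sgl, &biter,
				    umem->sgt_append.sgt.nents, BIT(page_shift)) {
			addr = rdma_block_iter_dma_address(&biter);
			/* ... consume one page_shift-sized DMA block at addr ... */
		}

		/* Equivalent form used after this patch: the umem-aware wrapper
		 * supplies the same sgl/nents internally.
		 */
		rdma_umem_for_each_dma_block(umem, &biter, BIT(page_shift)) {
			addr = rdma_block_iter_dma_address(&biter);
			/* ... same per-block work ... */
		}
	}

Both loops visit the same page_shift-sized DMA blocks; the patch simply drops the duplicated plumbing in mlx5r_umr_update_mr_pas() in favour of the wrapper.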