RDMA/rxe: Fix mr->map double free
authorLi Zhijian <lizhijian@fujitsu.com>
Sun, 30 Oct 2022 03:04:33 +0000 (03:04 +0000)
committerJason Gunthorpe <jgg@nvidia.com>
Sat, 19 Nov 2022 00:15:51 +0000 (20:15 -0400)
When rxe_mr_init_user() fails, rxe_mr_cleanup() is still called and tries
to free mr->map a second time, producing the report below (a simplified
sketch of the flow follows the trace):

   CPU: 0 PID: 4917 Comm: rdma_flush_serv Kdump: loaded Not tainted 6.1.0-rc1-roce-flush+ #25
   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
   Call Trace:
    <TASK>
    dump_stack_lvl+0x45/0x5d
    panic+0x19e/0x349
    end_report.part.0+0x54/0x7c
    kasan_report.cold+0xa/0xf
    rxe_mr_cleanup+0x9d/0xf0 [rdma_rxe]
    __rxe_cleanup+0x10a/0x1e0 [rdma_rxe]
    rxe_reg_user_mr+0xb7/0xd0 [rdma_rxe]
    ib_uverbs_reg_mr+0x26a/0x480 [ib_uverbs]
    ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x1a2/0x250 [ib_uverbs]
    ib_uverbs_cmd_verbs+0x1397/0x15a0 [ib_uverbs]
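
The flow that produces this report boils down to the pattern below. This is
a simplified userspace model, not the actual rxe code: malloc()/free() stand
in for kmalloc()/kfree(), and struct mr_model, init_fails() and cleanup()
are illustrative names only. Compiled as an ordinary C program, it aborts
with a double-free error:

#include <stdlib.h>

/* Illustrative stand-in for struct rxe_mr; only the map pointer matters. */
struct mr_model {
	void **map;
};

/* Models the rxe_mr_alloc()/rxe_mr_init_user() error path: the map is
 * freed, but the stale pointer is left behind in the object.
 */
static int init_fails(struct mr_model *mr)
{
	mr->map = malloc(8 * sizeof(*mr->map));
	if (!mr->map)
		return -1;

	/* ... a later step fails ... */
	free(mr->map);		/* first free; mr->map now dangles */
	return -1;
}

/* Models rxe_mr_cleanup(), which runs unconditionally at object teardown. */
static void cleanup(struct mr_model *mr)
{
	free(mr->map);		/* second free of the same pointer */
}

int main(void)
{
	struct mr_model mr = { 0 };

	if (init_fails(&mr))
		cleanup(&mr);	/* teardown after the failed init double frees */
	return 0;
}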

This issue was first exposed by commit b18c7da63fcb ("RDMA/rxe: Fix
memory leak in error path code"). It was then fixed by commit
8ff5f5d9d8cf ("RDMA/rxe: Prevent double freeing rxe_map_set()"), but that
fix was later undone by commit 1e75550648da (Revert "RDMA/rxe: Create
duplicate mapping tables for FMRs").

Simply let rxe_mr_cleanup() always handle freeing mr->map once it has
been successfully allocated.
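
With the change applied, the same simplified model becomes harmless: the
error path clears the stale pointer, and both free(NULL) and kfree(NULL)
are defined to be no-ops, so the later unconditional cleanup has nothing
left to do. Again a sketch with illustrative names, not the actual rxe code:

#include <stdlib.h>

struct mr_model {
	void **map;
};

static int init_fails(struct mr_model *mr)
{
	mr->map = malloc(8 * sizeof(*mr->map));
	if (!mr->map)
		return -1;

	/* ... a later step fails ... */
	free(mr->map);
	mr->map = NULL;		/* the fix: drop the stale pointer */
	return -1;
}

static void cleanup(struct mr_model *mr)
{
	free(mr->map);		/* free(NULL) is a no-op, like kfree(NULL) */
}

int main(void)
{
	struct mr_model mr = { 0 };

	if (init_fails(&mr))
		cleanup(&mr);	/* now harmless */
	return 0;
}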

Fixes: 1e75550648da ("Revert "RDMA/rxe: Create duplicate mapping tables for FMRs"")
Link: https://lore.kernel.org/r/1667099073-2-1-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/sw/rxe/rxe_mr.c

index cd846cf82a84c0d660c38144a0049db0268e089d..b1423000e4bcdad218cc947306b3dfdc2c47fa36 100644 (file)
@@ -97,6 +97,7 @@ err2:
                kfree(mr->map[i]);
 
        kfree(mr->map);
+       mr->map = NULL;
 err1:
        return -ENOMEM;
 }
@@ -120,7 +121,6 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
        int                     num_buf;
        void                    *vaddr;
        int err;
-       int i;
 
        umem = ib_umem_get(&rxe->ib_dev, start, length, access);
        if (IS_ERR(umem)) {
@@ -159,9 +159,8 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
                        if (!vaddr) {
                                rxe_dbg_mr(mr, "Unable to get virtual address\n");
                                err = -ENOMEM;
-                               goto err_cleanup_map;
+                               goto err_release_umem;
                        }
-
                        buf->addr = (uintptr_t)vaddr;
                        buf->size = PAGE_SIZE;
                        num_buf++;
@@ -178,10 +177,6 @@ int rxe_mr_init_user(struct rxe_dev *rxe, u64 start, u64 length, u64 iova,
 
        return 0;
 
-err_cleanup_map:
-       for (i = 0; i < mr->num_map; i++)
-               kfree(mr->map[i]);
-       kfree(mr->map);
 err_release_umem:
        ib_umem_release(umem);
 err_out: