From: Shiraz Saleem
Date: Thu, 28 Mar 2019 16:49:47 +0000 (-0500)
Subject: RDMA/rdmavt: Use correct sizing on buffers holding page DMA addresses
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=629e6f9db6bf4c5702212dd77da534b838f14859;p=linux.git

RDMA/rdmavt: Use correct sizing on buffers holding page DMA addresses

The buffer that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out of bound accesses on the PBL array when
iterating the umem DMA-mapped SGL. This is because if umem pages are
combined, umem->nmap can be much lower than the number of system pages
in umem.

Use ib_umem_num_pages() to size this buffer.

Cc: Dennis Dalessandro
Cc: Mike Marciniszyn
Cc: Michael J. Ruhl
Signed-off-by: Shiraz Saleem
Signed-off-by: Jason Gunthorpe
---

diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 7287950434969..e8b03ae54914d 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -392,7 +392,7 @@ struct ib_mr *rvt_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	if (IS_ERR(umem))
 		return (void *)umem;
 
-	n = umem->nmap;
+	n = ib_umem_num_pages(umem);
 	mr = __rvt_alloc_mr(n, pd);
 	if (IS_ERR(mr)) {