habanalabs: fix mapping with page size bigger than 4KB
author    Omer Shpigelman <oshpigelman@habana.ai>
Thu, 14 Mar 2019 14:54:45 +0000 (16:54 +0200)
committer Oded Gabbay <oded.gabbay@gmail.com>
Thu, 14 Mar 2019 14:54:45 +0000 (16:54 +0200)
This patch fixes the mapping of virtual addresses to physical addresses on
architectures where PAGE_SIZE is bigger than 4KB.
The breakdown into the device page size was done only for the virtual
address, while it should have been done for the physical address as well.
As a result, virtual addresses were mapped to wrong physical addresses.
The fix is to apply the breakdown to the physical addresses as well in
order to get correct mappings.

Signed-off-by: Omer Shpigelman <oshpigelman@habana.ai>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
drivers/misc/habanalabs/mmu.c

index 2f2e99cb27439433bd4527350b2347a6856cab5d..3a5a2cec83051b08c1b838372aaf29c0f1b99e13 100644 (file)
@@ -832,7 +832,7 @@ err:
 int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, u32 page_size)
 {
        struct hl_device *hdev = ctx->hdev;
-       u64 real_virt_addr;
+       u64 real_virt_addr, real_phys_addr;
        u32 real_page_size, npages;
        int i, rc, mapped_cnt = 0;
 
@@ -857,14 +857,16 @@ int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, u32 page_size)
 
        npages = page_size / real_page_size;
        real_virt_addr = virt_addr;
+       real_phys_addr = phys_addr;
 
        for (i = 0 ; i < npages ; i++) {
-               rc = _hl_mmu_map(ctx, real_virt_addr, phys_addr,
+               rc = _hl_mmu_map(ctx, real_virt_addr, real_phys_addr,
                                real_page_size);
                if (rc)
                        goto err;
 
                real_virt_addr += real_page_size;
+               real_phys_addr += real_page_size;
                mapped_cnt++;
        }
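
To illustrate the effect of the fix, below is a minimal stand-alone sketch
(not driver code) of the breakdown loop after the change, assuming a 64KB
host PAGE_SIZE and a 4KB device page. The map_one_page() helper,
HOST_PAGE_SIZE, DEVICE_PAGE_SIZE and the example addresses are
hypothetical stand-ins; in the real driver _hl_mmu_map() programs the
device MMU.

#include <stdio.h>
#include <stdint.h>

#define HOST_PAGE_SIZE   (64 * 1024)   /* example: PAGE_SIZE > 4KB */
#define DEVICE_PAGE_SIZE (4 * 1024)    /* device MMU page size */

/* Hypothetical stand-in for _hl_mmu_map(): map one device-sized page. */
static int map_one_page(uint64_t virt, uint64_t phys, uint32_t size)
{
	printf("map virt 0x%llx -> phys 0x%llx (%u bytes)\n",
	       (unsigned long long)virt, (unsigned long long)phys, size);
	return 0;
}

int main(void)
{
	uint64_t virt_addr = 0x1000000;        /* example addresses only */
	uint64_t phys_addr = 0x8000000;
	uint64_t real_virt_addr = virt_addr;
	uint64_t real_phys_addr = phys_addr;   /* added by the fix */
	uint32_t npages = HOST_PAGE_SIZE / DEVICE_PAGE_SIZE;
	uint32_t i;

	for (i = 0; i < npages; i++) {
		if (map_one_page(real_virt_addr, real_phys_addr,
				 DEVICE_PAGE_SIZE))
			return 1;

		real_virt_addr += DEVICE_PAGE_SIZE;
		/*
		 * Before the fix this increment was missing, so all 16
		 * device pages were mapped to the same physical address.
		 */
		real_phys_addr += DEVICE_PAGE_SIZE;
	}

	return 0;
}

With these example sizes, npages is 16 and both the virtual and the
physical address now advance in 4KB steps, matching the behavior of the
patched hl_mmu_map().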