swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()
author Will Deacon <will@kernel.org>
Fri, 8 Mar 2024 15:28:26 +0000 (15:28 +0000)
committer Christoph Hellwig <hch@lst.de>
Wed, 13 Mar 2024 18:39:24 +0000 (11:39 -0700)
core-api/dma-api-howto.rst states the following properties of
dma_alloc_coherent():

  | The CPU virtual address and the DMA address are both guaranteed to
  | be aligned to the smallest PAGE_SIZE order which is greater than or
  | equal to the requested size.

However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
parameter of swiotlb_find_slots() and so this property is not upheld.
Instead, allocations larger than a page are aligned only to PAGE_SIZE.

Calculate the mask corresponding to the page order suitable for holding
the allocation and pass that to swiotlb_find_slots().
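
For illustration, a minimal user-space sketch (not kernel code; PAGE_SHIFT
is assumed to be 12 and get_order() is re-implemented locally purely for
this example) of how the resulting mask tracks the smallest page order
that can hold the allocation:

  #include <stdio.h>

  #define PAGE_SHIFT	12
  #define PAGE_SIZE	(1UL << PAGE_SHIFT)

  /* Smallest order such that (PAGE_SIZE << order) >= size */
  static unsigned int get_order(size_t size)
  {
  	unsigned int order = 0;

  	while ((PAGE_SIZE << order) < size)
  		order++;
  	return order;
  }

  int main(void)
  {
  	size_t sizes[] = { 4096, 8192, 65536 };

  	for (unsigned int i = 0; i < 3; i++) {
  		size_t size = sizes[i];
  		unsigned long align =
  			(1UL << (get_order(size) + PAGE_SHIFT)) - 1;

  		/* e.g. size 8192 -> order 1 -> mask 0x1fff (8 KiB) */
  		printf("size %6zu -> align mask 0x%lx (%lu KiB alignment)\n",
  		       size, align, (align + 1) >> 10);
  	}
  	return 0;
  }

With 4 KiB pages this prints 4 KiB, 8 KiB and 64 KiB alignment masks
respectively, matching the "smallest PAGE_SIZE order greater than or
equal to the requested size" wording quoted above.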

Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
kernel/dma/swiotlb.c

index 88114433f1e69e7720215f9cb5575db0543d6e03..a3645a9ae68e83b863fbe9bf15fca776603836c7 100644
@@ -1679,12 +1679,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
        struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
        struct io_tlb_pool *pool;
        phys_addr_t tlb_addr;
+       unsigned int align;
        int index;
 
        if (!mem)
                return NULL;
 
-       index = swiotlb_find_slots(dev, 0, size, 0, &pool);
+       align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
+       index = swiotlb_find_slots(dev, 0, size, align, &pool);
        if (index == -1)
                return NULL;