dma-direct: relax addressability checks in dma_direct_supported
author		Christoph Hellwig <hch@lst.de>
		Mon, 3 Feb 2020 17:11:10 +0000 (18:11 +0100)
committer	Christoph Hellwig <hch@lst.de>
		Wed, 5 Feb 2020 17:50:55 +0000 (18:50 +0100)
commit		91ef26f914171cf753330f13724fd9142b5b1640
tree		67c2f70a79ebbe6ba1fcc802d13f4d24b06daf43
parent		8c8c5a4994a306c217fd061cbfc5903399fd4c1c
dma-direct: relax addressability checks in dma_direct_supported

dma_direct_supported tries to find the minimum addressable bitmask
based on the end pfn and optional magic that architectures can use
to communicate the size of the magic ZONE_DMA that can be used
for bounce buffering.  But between the DMA offsets that can change
per device (or sometimes even per region), the fact that ZONE_DMA
isn't even guaranteed to cover the lowest addresses, and the lack of
proper interfaces to the MM code, this check fails for at least one
arm subarchitecture.
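
For reference, a rough sketch of the check being described, assuming
the kernel/dma/direct.c helpers of that era (zone_dma_bits, max_pfn
and __phys_to_dma()); the exact code may differ in detail:

	int dma_direct_supported(struct device *dev, u64 mask)
	{
		u64 min_mask;

		/* smallest mask the low "DMA" zone is assumed to cover */
		if (IS_ENABLED(CONFIG_ZONE_DMA))
			min_mask = DMA_BIT_MASK(zone_dma_bits);
		else
			min_mask = DMA_BIT_MASK(32);

		/* clamp to the highest physical address actually present */
		min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT);

		/* compare the device view, without the SME encryption bit */
		return mask >= __phys_to_dma(dev, min_mask);
	}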

As all the legacy DMA implementations have supported 32-bit DMA
masks, and 32-bit masks are guaranteed to always work by the API
contract (using bounce buffers if needed), we can short-circuit the
complicated check and always return true without breaking existing
assumptions.  Hopefully we can properly clean up the interaction
with the arch-defined zones and the bootmem allocator eventually.
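
A minimal sketch of the relaxed check, under the same assumptions as
above: 32-bit and wider masks are now accepted unconditionally, and
only narrower masks still go through the zone-based comparison:

	int dma_direct_supported(struct device *dev, u64 mask)
	{
		u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;

		/*
		 * 32-bit masks are guaranteed to work by the API contract
		 * (bounce buffering if needed), so accept them outright.
		 */
		if (mask >= DMA_BIT_MASK(32))
			return 1;

		/* narrower masks still need the ZONE_DMA based check */
		if (IS_ENABLED(CONFIG_ZONE_DMA))
			min_mask = min_t(u64, min_mask,
					 DMA_BIT_MASK(zone_dma_bits));
		return mask >= __phys_to_dma(dev, min_mask);
	}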

Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
kernel/dma/direct.c