From: Mauro Carvalho Chehab Date: Fri, 1 May 2020 15:37:45 +0000 (+0200) Subject: docs: move DMA kAPI to Documentation/core-api X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=728c1471b54499e618fb8586852ac5e15a2c95ee;p=linux.git docs: move DMA kAPI to Documentation/core-api Move those files to the core-api, where they belong, renaming them to ReST and adding to the core API index file. Signed-off-by: Mauro Carvalho Chehab Link: https://lore.kernel.org/r/a1517185418cb9d987f566ef85a5dd5c7c99f34e.1588345503.git.mchehab+huawei@kernel.org Signed-off-by: Jonathan Corbet --- diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt deleted file mode 100644 index 358d495456d1b..0000000000000 --- a/Documentation/DMA-API-HOWTO.txt +++ /dev/null @@ -1,929 +0,0 @@ -========================= -Dynamic DMA mapping Guide -========================= - -:Author: David S. Miller -:Author: Richard Henderson -:Author: Jakub Jelinek - -This is a guide to device driver writers on how to use the DMA API -with example pseudo-code. For a concise description of the API, see -DMA-API.txt. - -CPU and DMA addresses -===================== - -There are several kinds of addresses involved in the DMA API, and it's -important to understand the differences. - -The kernel normally uses virtual addresses. Any address returned by -kmalloc(), vmalloc(), and similar interfaces is a virtual address and can -be stored in a ``void *``. - -The virtual memory system (TLB, page tables, etc.) translates virtual -addresses to CPU physical addresses, which are stored as "phys_addr_t" or -"resource_size_t". The kernel manages device resources like registers as -physical addresses. These are the addresses in /proc/iomem. The physical -address is not directly useful to a driver; it must use ioremap() to map -the space and produce a virtual address. - -I/O devices use a third kind of address: a "bus address". If a device has -registers at an MMIO address, or if it performs DMA to read or write system -memory, the addresses used by the device are bus addresses. In some -systems, bus addresses are identical to CPU physical addresses, but in -general they are not. IOMMUs and host bridges can produce arbitrary -mappings between physical and bus addresses. - -From a device's point of view, DMA uses the bus address space, but it may -be restricted to a subset of that space. For example, even if a system -supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU -so devices only need to use 32-bit DMA addresses. - -Here's a picture and some examples:: - - CPU CPU Bus - Virtual Physical Address - Address Address Space - Space Space - - +-------+ +------+ +------+ - | | |MMIO | Offset | | - | | Virtual |Space | applied | | - C +-------+ --------> B +------+ ----------> +------+ A - | | mapping | | by host | | - +-----+ | | | | bridge | | +--------+ - | | | | +------+ | | | | - | CPU | | | | RAM | | | | Device | - | | | | | | | | | | - +-----+ +-------+ +------+ +------+ +--------+ - | | Virtual |Buffer| Mapping | | - X +-------+ --------> Y +------+ <---------- +------+ Z - | | mapping | RAM | by IOMMU - | | | | - | | | | - +-------+ +------+ - -During the enumeration process, the kernel learns about I/O devices and -their MMIO space and the host bridges that connect them to the system. For -example, if a PCI device has a BAR, the kernel reads the bus address (A) -from the BAR and converts it to a CPU physical address (B). The address B -is stored in a struct resource and usually exposed via /proc/iomem. 
When a
-driver claims a device, it typically uses ioremap() to map physical address
-B at a virtual address (C). It can then use, e.g., ioread32(C), to access
-the device registers at bus address A.
-
-If the device supports DMA, the driver sets up a buffer using kmalloc() or
-a similar interface, which returns a virtual address (X). The virtual
-memory system maps X to a physical address (Y) in system RAM. The driver
-can use virtual address X to access the buffer, but the device itself
-cannot because DMA doesn't go through the CPU virtual memory system.
-
-In some simple systems, the device can do DMA directly to physical address
-Y. But in many others, there is IOMMU hardware that translates DMA
-addresses to physical addresses, e.g., it translates Z to Y. This is part
-of the reason for the DMA API: the driver can give a virtual address X to
-an interface like dma_map_single(), which sets up any required IOMMU
-mapping and returns the DMA address Z. The driver then tells the device to
-do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
-RAM.
-
-For Linux to use dynamic DMA mapping, it needs some help from the drivers:
-DMA addresses should be mapped only for the time they are actually used
-and unmapped after the DMA transfer.
-
-The following API will, of course, work even on platforms where no such
-hardware exists.
-
-Note that the DMA API works with any bus independent of the underlying
-microprocessor architecture. You should use the DMA API rather than the
-bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
-pci_map_*() interfaces.
-
-First of all, you should make sure::
-
-	#include <linux/dma-mapping.h>
-
-is in your driver, which provides the definition of dma_addr_t. This type
-can hold any valid DMA address for the platform and should be used
-everywhere you hold a DMA address returned from the DMA mapping functions.
-
-What memory is DMA'able?
-========================
-
-The first piece of information you must know is what kernel memory can
-be used with the DMA mapping facilities. There has been an unwritten
-set of rules regarding this, and this text is an attempt to finally
-write them down.
-
-If you acquired your memory via the page allocator
-(i.e. __get_free_page*()) or the generic memory allocators
-(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
-that memory using the addresses returned from those routines.
-
-This means specifically that you may _not_ use the memory/addresses
-returned from vmalloc() for DMA. It is possible to DMA to the
-_underlying_ memory mapped into a vmalloc() area, but this requires
-walking page tables to get the physical addresses, and then
-translating each of those pages back to a kernel address using
-something like __va(). [ EDIT: Update this when we integrate
-Gerd Knorr's generic code which does this. ]
-
-This rule also means that you may use neither kernel image addresses
-(items in data/text/bss segments), nor module image addresses, nor
-stack addresses for DMA. These could all be mapped somewhere entirely
-different from the rest of physical memory. Even if those classes of
-memory could physically work with DMA, you'd need to ensure the I/O
-buffers were cacheline-aligned. Without that, you'd see cacheline
-sharing problems (data corruption) on CPUs with DMA-incoherent caches.
-(The CPU could write to one word, DMA would write to a different one
-in the same cache line, and one of them could be overwritten.)
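
As a quick illustration of these rules, here is a minimal sketch; the
``dev`` pointer and the buffer size are hypothetical, and dma_map_single()
is described later in this document::

	#include <linux/slab.h>
	#include <linux/vmalloc.h>
	#include <linux/dma-mapping.h>

	/* OK for DMA: kmalloc() memory is physically contiguous. */
	void *buf = kmalloc(4096, GFP_KERNEL);

	/* NOT OK for DMA: vmalloc() memory is only virtually
	 * contiguous; do not hand this to the mapping functions. */
	void *vbuf = vmalloc(4096);

	dma_addr_t handle = dma_map_single(dev, buf, 4096, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		goto map_error_handling;	/* see "Handling Errors" below */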
-
-Also, this means that you cannot take the return of a kmap()
-call and DMA to/from that. This is similar to vmalloc().
-
-What about block I/O and networking buffers? The block I/O and
-networking subsystems make sure that the buffers they use are valid
-for you to DMA from/to.
-
-DMA addressing capabilities
-===========================
-
-By default, the kernel assumes that your device can address 32 bits of DMA
-address space. For a 64-bit capable device, this needs to be increased, and
-for a device with limitations, it needs to be decreased.
-
-Special note about PCI: the PCI-X specification requires PCI-X devices to
-support 64-bit addressing (DAC) for all transactions. And at least one
-platform (SGI SN2) requires 64-bit consistent allocations to operate
-correctly when the IO bus is in PCI-X mode.
-
-For correct operation, you must set the DMA mask to inform the kernel about
-your device's DMA addressing capabilities.
-
-This is performed via a call to dma_set_mask_and_coherent()::
-
-	int dma_set_mask_and_coherent(struct device *dev, u64 mask);
-
-which will set the mask for both streaming and coherent APIs together. If you
-have some special requirements, then the following two separate calls can be
-used instead:
-
-	The setup for streaming mappings is performed via a call to
-	dma_set_mask()::
-
-		int dma_set_mask(struct device *dev, u64 mask);
-
-	The setup for consistent allocations is performed via a call
-	to dma_set_coherent_mask()::
-
-		int dma_set_coherent_mask(struct device *dev, u64 mask);
-
-Here, dev is a pointer to the device struct of your device, and mask is a bit
-mask describing which bits of an address your device supports. Often the
-device struct of your device is embedded in the bus-specific device struct of
-your device. For example, &pdev->dev is a pointer to the device struct of a
-PCI device (pdev is a pointer to the PCI device struct of your device).
-
-These calls usually return zero to indicate that your device can perform DMA
-properly on the machine given the address mask you provided, but they might
-return an error if the mask is too small to be supportable on the given
-system. If a call returns non-zero, your device cannot perform DMA properly
-on this platform, and attempting to do so will result in undefined behavior.
-You must not use DMA on this device unless the dma_set_mask family of
-functions has returned success.
-
-This means that in the failure case, you have two options:
-
-1) Use some non-DMA mode for data transfer, if possible.
-2) Ignore this device and do not initialize it.
-
-It is recommended that your driver print a KERN_WARNING message when
-setting the DMA mask fails. In this manner, if a user of your driver reports
-that performance is bad or that the device is not even detected, you can ask
-them for the kernel messages to find out exactly why.
-
-A standard 64-bit addressing device would do something like this::
-
-	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
-		dev_warn(dev, "mydev: No suitable DMA available\n");
-		goto ignore_this_device;
-	}
-
-If the device only supports 32-bit addressing for descriptors in the
-coherent allocations, but supports full 64 bits for streaming mappings,
-it would look like this::
-
-	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
-		dev_warn(dev, "mydev: No suitable DMA available\n");
-		goto ignore_this_device;
-	}
-
-Setting the coherent mask to the same or a smaller value than the streaming
-mask will always succeed.
However for the rare case that a device driver only -uses consistent allocations, one would have to check the return value from -dma_set_coherent_mask(). - -Finally, if your device can only drive the low 24-bits of -address you might do something like:: - - if (dma_set_mask(dev, DMA_BIT_MASK(24))) { - dev_warn(dev, "mydev: 24-bit DMA addressing not available\n"); - goto ignore_this_device; - } - -When dma_set_mask() or dma_set_mask_and_coherent() is successful, and -returns zero, the kernel saves away this mask you have provided. The -kernel will use this information later when you make DMA mappings. - -There is a case which we are aware of at this time, which is worth -mentioning in this documentation. If your device supports multiple -functions (for example a sound card provides playback and record -functions) and the various different functions have _different_ -DMA addressing limitations, you may wish to probe each mask and -only provide the functionality which the machine can handle. It -is important that the last call to dma_set_mask() be for the -most specific mask. - -Here is pseudo-code showing how this might be done:: - - #define PLAYBACK_ADDRESS_BITS DMA_BIT_MASK(32) - #define RECORD_ADDRESS_BITS DMA_BIT_MASK(24) - - struct my_sound_card *card; - struct device *dev; - - ... - if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) { - card->playback_enabled = 1; - } else { - card->playback_enabled = 0; - dev_warn(dev, "%s: Playback disabled due to DMA limitations\n", - card->name); - } - if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) { - card->record_enabled = 1; - } else { - card->record_enabled = 0; - dev_warn(dev, "%s: Record disabled due to DMA limitations\n", - card->name); - } - -A sound card was used as an example here because this genre of PCI -devices seems to be littered with ISA chips given a PCI front end, -and thus retaining the 16MB DMA addressing limitations of ISA. - -Types of DMA mappings -===================== - -There are two types of DMA mappings: - -- Consistent DMA mappings which are usually mapped at driver - initialization, unmapped at the end and for which the hardware should - guarantee that the device and the CPU can access the data - in parallel and will see updates made by each other without any - explicit software flushing. - - Think of "consistent" as "synchronous" or "coherent". - - The current default is to return consistent memory in the low 32 - bits of the DMA space. However, for future compatibility you should - set the consistent mask even if this default is fine for your - driver. - - Good examples of what to use consistent mappings for are: - - - Network card DMA ring descriptors. - - SCSI adapter mailbox command data structures. - - Device firmware microcode executed out of - main memory. - - The invariant these examples all require is that any CPU store - to memory is immediately visible to the device, and vice - versa. Consistent mappings guarantee this. - - .. important:: - - Consistent DMA memory does not preclude the usage of - proper memory barriers. The CPU may reorder stores to - consistent memory just as it may normal memory. Example: - if it is important for the device to see the first word - of a descriptor updated before the second, you must do - something like:: - - desc->word0 = address; - wmb(); - desc->word1 = DESC_VALID; - - in order to get correct behavior on all platforms. 
- - Also, on some platforms your driver may need to flush CPU write - buffers in much the same way as it needs to flush write buffers - found in PCI bridges (such as by reading a register's value - after writing it). - -- Streaming DMA mappings which are usually mapped for one DMA - transfer, unmapped right after it (unless you use dma_sync_* below) - and for which hardware can optimize for sequential accesses. - - Think of "streaming" as "asynchronous" or "outside the coherency - domain". - - Good examples of what to use streaming mappings for are: - - - Networking buffers transmitted/received by a device. - - Filesystem buffers written/read by a SCSI device. - - The interfaces for using this type of mapping were designed in - such a way that an implementation can make whatever performance - optimizations the hardware allows. To this end, when using - such mappings you must be explicit about what you want to happen. - -Neither type of DMA mapping has alignment restrictions that come from -the underlying bus, although some devices may have such restrictions. -Also, systems with caches that aren't DMA-coherent will work better -when the underlying buffers don't share cache lines with other data. - - -Using Consistent DMA mappings -============================= - -To allocate and map large (PAGE_SIZE or so) consistent DMA regions, -you should do:: - - dma_addr_t dma_handle; - - cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp); - -where device is a ``struct device *``. This may be called in interrupt -context with the GFP_ATOMIC flag. - -Size is the length of the region you want to allocate, in bytes. - -This routine will allocate RAM for that region, so it acts similarly to -__get_free_pages() (but takes size instead of a page order). If your -driver needs regions sized smaller than a page, you may prefer using -the dma_pool interface, described below. - -The consistent DMA mapping interfaces, will by default return a DMA address -which is 32-bit addressable. Even if the device indicates (via the DMA mask) -that it may address the upper 32-bits, consistent allocation will only -return > 32-bit addresses for DMA if the consistent DMA mask has been -explicitly changed via dma_set_coherent_mask(). This is true of the -dma_pool interface as well. - -dma_alloc_coherent() returns two values: the virtual address which you -can use to access it from the CPU and dma_handle which you pass to the -card. - -The CPU virtual address and the DMA address are both -guaranteed to be aligned to the smallest PAGE_SIZE order which -is greater than or equal to the requested size. This invariant -exists (for example) to guarantee that if you allocate a chunk -which is smaller than or equal to 64 kilobytes, the extent of the -buffer you receive will not cross a 64K boundary. - -To unmap and free such a DMA region, you call:: - - dma_free_coherent(dev, size, cpu_addr, dma_handle); - -where dev, size are the same as in the above call and cpu_addr and -dma_handle are the values dma_alloc_coherent() returned to you. -This function may not be called in interrupt context. - -If your driver needs lots of smaller memory regions, you can write -custom code to subdivide pages returned by dma_alloc_coherent(), -or you can use the dma_pool API to do that. A dma_pool is like -a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages(). -Also, it understands common hardware constraints for alignment, -like queue heads needing to be aligned on N byte boundaries. 
- -Create a dma_pool like this:: - - struct dma_pool *pool; - - pool = dma_pool_create(name, dev, size, align, boundary); - -The "name" is for diagnostics (like a kmem_cache name); dev and size -are as above. The device's hardware alignment requirement for this -type of data is "align" (which is expressed in bytes, and must be a -power of two). If your device has no boundary crossing restrictions, -pass 0 for boundary; passing 4096 says memory allocated from this pool -must not cross 4KByte boundaries (but at that time it may be better to -use dma_alloc_coherent() directly instead). - -Allocate memory from a DMA pool like this:: - - cpu_addr = dma_pool_alloc(pool, flags, &dma_handle); - -flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor -holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(), -this returns two values, cpu_addr and dma_handle. - -Free memory that was allocated from a dma_pool like this:: - - dma_pool_free(pool, cpu_addr, dma_handle); - -where pool is what you passed to dma_pool_alloc(), and cpu_addr and -dma_handle are the values dma_pool_alloc() returned. This function -may be called in interrupt context. - -Destroy a dma_pool by calling:: - - dma_pool_destroy(pool); - -Make sure you've called dma_pool_free() for all memory allocated -from a pool before you destroy the pool. This function may not -be called in interrupt context. - -DMA Direction -============= - -The interfaces described in subsequent portions of this document -take a DMA direction argument, which is an integer and takes on -one of the following values:: - - DMA_BIDIRECTIONAL - DMA_TO_DEVICE - DMA_FROM_DEVICE - DMA_NONE - -You should provide the exact DMA direction if you know it. - -DMA_TO_DEVICE means "from main memory to the device" -DMA_FROM_DEVICE means "from the device to main memory" -It is the direction in which the data moves during the DMA -transfer. - -You are _strongly_ encouraged to specify this as precisely -as you possibly can. - -If you absolutely cannot know the direction of the DMA transfer, -specify DMA_BIDIRECTIONAL. It means that the DMA can go in -either direction. The platform guarantees that you may legally -specify this, and that it will work, but this may be at the -cost of performance for example. - -The value DMA_NONE is to be used for debugging. One can -hold this in a data structure before you come to know the -precise direction, and this will help catch cases where your -direction tracking logic has failed to set things up properly. - -Another advantage of specifying this value precisely (outside of -potential platform-specific optimizations of such) is for debugging. -Some platforms actually have a write permission boolean which DMA -mappings can be marked with, much like page protections in the user -program address space. Such platforms can and do report errors in the -kernel logs when the DMA controller hardware detects violation of the -permission setting. - -Only streaming mappings specify a direction, consistent mappings -implicitly have a direction attribute setting of -DMA_BIDIRECTIONAL. - -The SCSI subsystem tells you the direction to use in the -'sc_data_direction' member of the SCSI command your driver is -working on. - -For Networking drivers, it's a rather simple affair. For transmit -packets, map/unmap them with the DMA_TO_DEVICE direction -specifier. For receive packets, just the opposite, map/unmap them -with the DMA_FROM_DEVICE direction specifier. 
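
As a sketch of the networking case just described, a hypothetical driver's
transmit and receive paths might map buffers like this, using the
dma_map_single() interface covered in the next section; ``priv->dev``, the
buffers and the lengths are illustrative only::

	dma_addr_t tx_dma, rx_dma;

	/* Transmit: the CPU wrote the packet, the device will read it. */
	tx_dma = dma_map_single(priv->dev, skb->data, skb->len,
				DMA_TO_DEVICE);

	/* Receive: the device will write the packet, the CPU will read it. */
	rx_dma = dma_map_single(priv->dev, rx_buf, rx_len,
				DMA_FROM_DEVICE);

The dma_mapping_error() check that must follow each mapping call is shown
in the next section.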
- -Using Streaming DMA mappings -============================ - -The streaming DMA mapping routines can be called from interrupt -context. There are two versions of each map/unmap, one which will -map/unmap a single memory region, and one which will map/unmap a -scatterlist. - -To map a single region, you do:: - - struct device *dev = &my_dev->dev; - dma_addr_t dma_handle; - void *addr = buffer->ptr; - size_t size = buffer->len; - - dma_handle = dma_map_single(dev, addr, size, direction); - if (dma_mapping_error(dev, dma_handle)) { - /* - * reduce current DMA mapping usage, - * delay and try again later or - * reset driver. - */ - goto map_error_handling; - } - -and to unmap it:: - - dma_unmap_single(dev, dma_handle, size, direction); - -You should call dma_mapping_error() as dma_map_single() could fail and return -error. Doing so will ensure that the mapping code will work correctly on all -DMA implementations without any dependency on the specifics of the underlying -implementation. Using the returned address without checking for errors could -result in failures ranging from panics to silent data corruption. The same -applies to dma_map_page() as well. - -You should call dma_unmap_single() when the DMA activity is finished, e.g., -from the interrupt which told you that the DMA transfer is done. - -Using CPU pointers like this for single mappings has a disadvantage: -you cannot reference HIGHMEM memory in this way. Thus, there is a -map/unmap interface pair akin to dma_{map,unmap}_single(). These -interfaces deal with page/offset pairs instead of CPU pointers. -Specifically:: - - struct device *dev = &my_dev->dev; - dma_addr_t dma_handle; - struct page *page = buffer->page; - unsigned long offset = buffer->offset; - size_t size = buffer->len; - - dma_handle = dma_map_page(dev, page, offset, size, direction); - if (dma_mapping_error(dev, dma_handle)) { - /* - * reduce current DMA mapping usage, - * delay and try again later or - * reset driver. - */ - goto map_error_handling; - } - - ... - - dma_unmap_page(dev, dma_handle, size, direction); - -Here, "offset" means byte offset within the given page. - -You should call dma_mapping_error() as dma_map_page() could fail and return -error as outlined under the dma_map_single() discussion. - -You should call dma_unmap_page() when the DMA activity is finished, e.g., -from the interrupt which told you that the DMA transfer is done. - -With scatterlists, you map a region gathered from several regions by:: - - int i, count = dma_map_sg(dev, sglist, nents, direction); - struct scatterlist *sg; - - for_each_sg(sglist, sg, count, i) { - hw_address[i] = sg_dma_address(sg); - hw_len[i] = sg_dma_len(sg); - } - -where nents is the number of entries in the sglist. - -The implementation is free to merge several consecutive sglist entries -into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any -consecutive sglist entries can be merged into one provided the first one -ends and the second one starts on a page boundary - in fact this is a huge -advantage for cards which either cannot do scatter-gather or have very -limited number of scatter-gather entries) and returns the actual number -of sg entries it mapped them to. On failure 0 is returned. - -Then you should loop count times (note: this can be less than nents times) -and use sg_dma_address() and sg_dma_len() macros where you previously -accessed sg->address and sg->length as shown above. 
- -To unmap a scatterlist, just call:: - - dma_unmap_sg(dev, sglist, nents, direction); - -Again, make sure DMA activity has already finished. - -.. note:: - - The 'nents' argument to the dma_unmap_sg call must be - the _same_ one you passed into the dma_map_sg call, - it should _NOT_ be the 'count' value _returned_ from the - dma_map_sg call. - -Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}() -counterpart, because the DMA address space is a shared resource and -you could render the machine unusable by consuming all DMA addresses. - -If you need to use the same streaming DMA region multiple times and touch -the data in between the DMA transfers, the buffer needs to be synced -properly in order for the CPU and device to see the most up-to-date and -correct copy of the DMA buffer. - -So, firstly, just map it with dma_map_{single,sg}(), and after each DMA -transfer call either:: - - dma_sync_single_for_cpu(dev, dma_handle, size, direction); - -or:: - - dma_sync_sg_for_cpu(dev, sglist, nents, direction); - -as appropriate. - -Then, if you wish to let the device get at the DMA area again, -finish accessing the data with the CPU, and then before actually -giving the buffer to the hardware call either:: - - dma_sync_single_for_device(dev, dma_handle, size, direction); - -or:: - - dma_sync_sg_for_device(dev, sglist, nents, direction); - -as appropriate. - -.. note:: - - The 'nents' argument to dma_sync_sg_for_cpu() and - dma_sync_sg_for_device() must be the same passed to - dma_map_sg(). It is _NOT_ the count returned by - dma_map_sg(). - -After the last DMA transfer call one of the DMA unmap routines -dma_unmap_{single,sg}(). If you don't touch the data from the first -dma_map_*() call till dma_unmap_*(), then you don't have to call the -dma_sync_*() routines at all. - -Here is pseudo code which shows a situation in which you would need -to use the dma_sync_*() interfaces:: - - my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len) - { - dma_addr_t mapping; - - mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE); - if (dma_mapping_error(cp->dev, mapping)) { - /* - * reduce current DMA mapping usage, - * delay and try again later or - * reset driver. - */ - goto map_error_handling; - } - - cp->rx_buf = buffer; - cp->rx_len = len; - cp->rx_dma = mapping; - - give_rx_buf_to_card(cp); - } - - ... - - my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs) - { - struct my_card *cp = devid; - - ... - if (read_card_status(cp) == RX_BUF_TRANSFERRED) { - struct my_card_header *hp; - - /* Examine the header to see if we wish - * to accept the data. But synchronize - * the DMA transfer with the CPU first - * so that we see updated contents. - */ - dma_sync_single_for_cpu(&cp->dev, cp->rx_dma, - cp->rx_len, - DMA_FROM_DEVICE); - - /* Now it is safe to examine the buffer. */ - hp = (struct my_card_header *) cp->rx_buf; - if (header_is_ok(hp)) { - dma_unmap_single(&cp->dev, cp->rx_dma, cp->rx_len, - DMA_FROM_DEVICE); - pass_to_upper_layers(cp->rx_buf); - make_and_setup_new_rx_buf(cp); - } else { - /* CPU should not write to - * DMA_FROM_DEVICE-mapped area, - * so dma_sync_single_for_device() is - * not needed here. It would be required - * for DMA_BIDIRECTIONAL mapping if - * the memory was modified. - */ - give_rx_buf_to_card(cp); - } - } - } - -Drivers converted fully to this interface should not use virt_to_bus() any -longer, nor should they use bus_to_virt(). 
Some drivers have to be changed a
-little bit, because there is no longer an equivalent to bus_to_virt() in the
-dynamic DMA mapping scheme - you have to always store the DMA addresses
-returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
-calls (dma_map_sg() stores them in the scatterlist itself if the platform
-supports dynamic DMA mapping in hardware) in your driver structures and/or
-in the card registers.
-
-All drivers should be using these interfaces with no exceptions. It
-is planned to completely remove virt_to_bus() and bus_to_virt() as
-they are entirely deprecated. Some ports already do not provide these
-as it is impossible to correctly support them.
-
-Handling Errors
-===============
-
-DMA address space is limited on some architectures and an allocation
-failure can be determined by:
-
-- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0
-
-- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
-  by using dma_mapping_error()::
-
-	dma_addr_t dma_handle;
-
-	dma_handle = dma_map_single(dev, addr, size, direction);
-	if (dma_mapping_error(dev, dma_handle)) {
-		/*
-		 * reduce current DMA mapping usage,
-		 * delay and try again later or
-		 * reset driver.
-		 */
-		goto map_error_handling;
-	}
-
-- unmapping pages that are already mapped, when a mapping error occurs in
-  the middle of a multiple page mapping attempt. These examples are
-  applicable to dma_map_page() as well.
-
-Example 1::
-
-	dma_addr_t dma_handle1;
-	dma_addr_t dma_handle2;
-
-	dma_handle1 = dma_map_single(dev, addr, size, direction);
-	if (dma_mapping_error(dev, dma_handle1)) {
-		/*
-		 * reduce current DMA mapping usage,
-		 * delay and try again later or
-		 * reset driver.
-		 */
-		goto map_error_handling1;
-	}
-	dma_handle2 = dma_map_single(dev, addr, size, direction);
-	if (dma_mapping_error(dev, dma_handle2)) {
-		/*
-		 * reduce current DMA mapping usage,
-		 * delay and try again later or
-		 * reset driver.
-		 */
-		goto map_error_handling2;
-	}
-
-	...
-
-	map_error_handling2:
-	dma_unmap_single(dev, dma_handle1, size, direction);
-	map_error_handling1:
-
-Example 2::
-
-	/*
-	 * if buffers are allocated in a loop, unmap all mapped buffers when
-	 * mapping error is detected in the middle
-	 */
-
-	dma_addr_t dma_addr;
-	dma_addr_t array[DMA_BUFFERS];
-	int save_index = 0;
-
-	for (i = 0; i < DMA_BUFFERS; i++) {
-
-		...
-
-		dma_addr = dma_map_single(dev, addr, size, direction);
-		if (dma_mapping_error(dev, dma_addr)) {
-			/*
-			 * reduce current DMA mapping usage,
-			 * delay and try again later or
-			 * reset driver.
-			 */
-			goto map_error_handling;
-		}
-		array[i] = dma_addr;
-		save_index++;
-	}
-
-	...
-
-	map_error_handling:
-
-	for (i = 0; i < save_index; i++) {
-
-		...
-
-		dma_unmap_single(dev, array[i], size, direction);
-	}
-
-Networking drivers must call dev_kfree_skb() to free the socket buffer
-and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
-(ndo_start_xmit). This means that the socket buffer is just dropped in
-the failure case.
-
-SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
-fails in the queuecommand hook. This means that the SCSI subsystem
-passes the command to the driver again later.
-
-Optimizing Unmap State Space Consumption
-========================================
-
-On many platforms, dma_unmap_{single,page}() is simply a nop.
-Therefore, keeping track of the mapping address and length is a waste
-of space.
Instead of filling your drivers up with ifdefs and the like -to "work around" this (which would defeat the whole purpose of a -portable API) the following facilities are provided. - -Actually, instead of describing the macros one by one, we'll -transform some example code. - -1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures. - Example, before:: - - struct ring_state { - struct sk_buff *skb; - dma_addr_t mapping; - __u32 len; - }; - - after:: - - struct ring_state { - struct sk_buff *skb; - DEFINE_DMA_UNMAP_ADDR(mapping); - DEFINE_DMA_UNMAP_LEN(len); - }; - -2) Use dma_unmap_{addr,len}_set() to set these values. - Example, before:: - - ringp->mapping = FOO; - ringp->len = BAR; - - after:: - - dma_unmap_addr_set(ringp, mapping, FOO); - dma_unmap_len_set(ringp, len, BAR); - -3) Use dma_unmap_{addr,len}() to access these values. - Example, before:: - - dma_unmap_single(dev, ringp->mapping, ringp->len, - DMA_FROM_DEVICE); - - after:: - - dma_unmap_single(dev, - dma_unmap_addr(ringp, mapping), - dma_unmap_len(ringp, len), - DMA_FROM_DEVICE); - -It really should be self-explanatory. We treat the ADDR and LEN -separately, because it is possible for an implementation to only -need the address in order to perform the unmap operation. - -Platform Issues -=============== - -If you are just writing drivers for Linux and do not maintain -an architecture port for the kernel, you can safely skip down -to "Closing". - -1) Struct scatterlist requirements. - - You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture - supports IOMMUs (including software IOMMU). - -2) ARCH_DMA_MINALIGN - - Architectures must ensure that kmalloc'ed buffer is - DMA-safe. Drivers and subsystems depend on it. If an architecture - isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in - the CPU cache is identical to data in main memory), - ARCH_DMA_MINALIGN must be set so that the memory allocator - makes sure that kmalloc'ed buffer doesn't share a cache line with - the others. See arch/arm/include/asm/cache.h as an example. - - Note that ARCH_DMA_MINALIGN is about DMA memory alignment - constraints. You don't need to worry about the architecture data - alignment constraints (e.g. the alignment constraints about 64-bit - objects). - -Closing -======= - -This document, and the API itself, would not be in its current -form without the feedback and suggestions from numerous individuals. -We would like to specifically mention, in no particular order, the -following people:: - - Russell King - Leo Dagum - Ralf Baechle - Grant Grundler - Jay Estabrook - Thomas Sailer - Andrea Arcangeli - Jens Axboe - David Mosberger-Tang diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt deleted file mode 100644 index 2d8d2fed73172..0000000000000 --- a/Documentation/DMA-API.txt +++ /dev/null @@ -1,745 +0,0 @@ -============================================ -Dynamic DMA mapping using the generic device -============================================ - -:Author: James E.J. Bottomley - -This document describes the DMA API. For a more gentle introduction -of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt. - -This API is split into two pieces. Part I describes the basic API. -Part II describes extensions for supporting non-consistent memory -machines. Unless you know that your driver absolutely has to support -non-consistent platforms (this is usually only legacy platforms) you -should only use the API described in part I. 
-
-Part I - dma_API
-----------------
-
-To get the dma_API, you must #include <linux/dma-mapping.h>. This
-provides dma_addr_t and the interfaces described below.
-
-A dma_addr_t can hold any valid DMA address for the platform. It can be
-given to a device to use as a DMA source or target. A CPU cannot reference
-a dma_addr_t directly because there may be translation between its physical
-address space and the DMA address space.
-
-Part Ia - Using large DMA-coherent buffers
-------------------------------------------
-
-::
-
-	void *
-	dma_alloc_coherent(struct device *dev, size_t size,
-			   dma_addr_t *dma_handle, gfp_t flag)
-
-Consistent memory is memory for which a write by either the device or
-the processor can immediately be read by the processor or device
-without having to worry about caching effects. (You may however need
-to make sure to flush the processor's write buffers before telling
-devices to read that memory.)
-
-This routine allocates a region of ``size`` bytes of consistent memory.
-
-It returns a pointer to the allocated region (in the processor's virtual
-address space) or NULL if the allocation failed.
-
-It also returns a ``dma_handle`` which may be cast to an unsigned integer
-the same width as the bus and given to the device as the DMA address base
-of the region.
-
-Note: consistent memory can be expensive on some platforms, and the
-minimum allocation length may be as big as a page, so you should
-consolidate your requests for consistent memory as much as possible.
-The simplest way to do that is to use the dma_pool calls (see below).
-
-The flag parameter (dma_alloc_coherent() only) allows the caller to
-specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
-implementation may choose to ignore flags that affect the location of
-the returned memory, like GFP_DMA).
-
-::
-
-	void
-	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
-			  dma_addr_t dma_handle)
-
-Free a region of consistent memory you previously allocated. dev,
-size and dma_handle must all be the same as those passed into
-dma_alloc_coherent(). cpu_addr must be the virtual address returned by
-dma_alloc_coherent().
-
-Note that unlike their sibling allocation calls, these routines
-may only be called with IRQs enabled.
-
-
-Part Ib - Using small DMA-coherent buffers
-------------------------------------------
-
-To get this part of the dma_API, you must #include <linux/dmapool.h>
-
-Many drivers need lots of small DMA-coherent memory regions for DMA
-descriptors or I/O buffers. Rather than allocating in units of a page
-or more using dma_alloc_coherent(), you can use DMA pools. These work
-much like a struct kmem_cache, except that they use the DMA-coherent
-allocator, not __get_free_pages(). Also, they understand common hardware
-constraints for alignment, like queue heads needing to be aligned on
-N-byte boundaries.
-
-
-::
-
-	struct dma_pool *
-	dma_pool_create(const char *name, struct device *dev,
-			size_t size, size_t align, size_t alloc);
-
-dma_pool_create() initializes a pool of DMA-coherent buffers
-for use with a given device. It must be called in a context which
-can sleep.
-
-The "name" is for diagnostics (like a struct kmem_cache name); dev and size
-are like what you'd pass to dma_alloc_coherent(). The device's hardware
-alignment requirement for this type of data is "align" (which is expressed
-in bytes, and must be a power of two). If your device has no boundary
-crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
-from this pool must not cross 4KByte boundaries.
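
Tying the pool calls together, a driver managing (say) 64-byte hardware
descriptors might do something like the following sketch, using the
allocation and free calls documented just below; the pool name, sizes and
error handling are illustrative::

	struct dma_pool *pool;
	dma_addr_t dma;
	void *vaddr;

	/* 64-byte objects, 64-byte aligned, no boundary restriction. */
	pool = dma_pool_create("mydev_desc", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma);
	if (!vaddr) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... hand 'dma' to the device, touch the descriptor via 'vaddr' ... */

	dma_pool_free(pool, vaddr, dma);
	dma_pool_destroy(pool);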
-
-::
-
-	void *
-	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
-			dma_addr_t *handle)
-
-Wraps dma_pool_alloc() and also zeroes the returned memory if the
-allocation attempt succeeded.
-
-
-::
-
-	void *
-	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
-		       dma_addr_t *dma_handle);
-
-This allocates memory from the pool; the returned memory will meet the
-size and alignment requirements specified at creation time. Pass
-GFP_ATOMIC to prevent blocking, or if it's permitted (not
-in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
-blocking. Like dma_alloc_coherent(), this returns two values: an
-address usable by the CPU, and the DMA address usable by the pool's
-device.
-
-::
-
-	void
-	dma_pool_free(struct dma_pool *pool, void *vaddr,
-		      dma_addr_t addr);
-
-This puts memory back into the pool. The pool is what was passed to
-dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
-were returned when that routine allocated the memory being freed.
-
-::
-
-	void
-	dma_pool_destroy(struct dma_pool *pool);
-
-dma_pool_destroy() frees the resources of the pool. It must be
-called in a context which can sleep. Make sure you've freed all allocated
-memory back to the pool before you destroy it.
-
-
-Part Ic - DMA addressing limitations
-------------------------------------
-
-::
-
-	int
-	dma_set_mask_and_coherent(struct device *dev, u64 mask)
-
-Checks to see if the mask is possible and updates the device
-streaming and coherent DMA mask parameters if it is.
-
-Returns: 0 if successful and a negative error if not.
-
-::
-
-	int
-	dma_set_mask(struct device *dev, u64 mask)
-
-Checks to see if the mask is possible and updates the device
-parameters if it is.
-
-Returns: 0 if successful and a negative error if not.
-
-::
-
-	int
-	dma_set_coherent_mask(struct device *dev, u64 mask)
-
-Checks to see if the mask is possible and updates the device
-parameters if it is.
-
-Returns: 0 if successful and a negative error if not.
-
-::
-
-	u64
-	dma_get_required_mask(struct device *dev)
-
-This API returns the mask that the platform requires to
-operate efficiently. Usually this means the returned mask
-is the minimum required to cover all of memory. Examining the
-required mask gives drivers with variable descriptor sizes the
-opportunity to use smaller descriptors as necessary.
-
-Requesting the required mask does not alter the current mask. If you
-wish to take advantage of it, you should issue a dma_set_mask()
-call to set the mask to the value returned.
-
-::
-
-	size_t
-	dma_max_mapping_size(struct device *dev);
-
-Returns the maximum size of a mapping for the device. The size parameter
-of the mapping functions like dma_map_single(), dma_map_page() and
-others should not be larger than the returned value.
-
-::
-
-	unsigned long
-	dma_get_merge_boundary(struct device *dev);
-
-Returns the DMA merge boundary. If the device cannot merge any DMA address
-segments, the function returns 0.
-
-Part Id - Streaming DMA mappings
---------------------------------
-
-::
-
-	dma_addr_t
-	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
-		       enum dma_data_direction direction)
-
-Maps a piece of processor virtual memory so it can be accessed by the
-device and returns the DMA address of the memory.
-
-The direction for both the map and unmap APIs may be converted freely by
-casting.
-However the dma_API uses a strongly typed enumerator for its -direction: - -======================= ============================================= -DMA_NONE no direction (used for debugging) -DMA_TO_DEVICE data is going from the memory to the device -DMA_FROM_DEVICE data is coming from the device to the memory -DMA_BIDIRECTIONAL direction isn't known -======================= ============================================= - -.. note:: - - Not all memory regions in a machine can be mapped by this API. - Further, contiguous kernel virtual space may not be contiguous as - physical memory. Since this API does not provide any scatter/gather - capability, it will fail if the user tries to map a non-physically - contiguous piece of memory. For this reason, memory to be mapped by - this API should be obtained from sources which guarantee it to be - physically contiguous (like kmalloc). - - Further, the DMA address of the memory must be within the - dma_mask of the device (the dma_mask is a bit mask of the - addressable region for the device, i.e., if the DMA address of - the memory ANDed with the dma_mask is still equal to the DMA - address, then the device can perform DMA to the memory). To - ensure that the memory allocated by kmalloc is within the dma_mask, - the driver may specify various platform-dependent flags to restrict - the DMA address range of the allocation (e.g., on x86, GFP_DMA - guarantees to be within the first 16MB of available DMA addresses, - as required by ISA devices). - - Note also that the above constraints on physical contiguity and - dma_mask may not apply if the platform has an IOMMU (a device which - maps an I/O DMA address to a physical memory address). However, to be - portable, device driver writers may *not* assume that such an IOMMU - exists. - -.. warning:: - - Memory coherency operates at a granularity called the cache - line width. In order for memory mapped by this API to operate - correctly, the mapped region must begin exactly on a cache line - boundary and end exactly on one (to prevent two separately mapped - regions from sharing a single cache line). Since the cache line size - may not be known at compile time, the API will not enforce this - requirement. Therefore, it is recommended that driver writers who - don't take special care to determine the cache line size at run time - only map virtual regions that begin and end on page boundaries (which - are guaranteed also to be cache line boundaries). - - DMA_TO_DEVICE synchronisation must be done after the last modification - of the memory region by the software and before it is handed off to - the device. Once this primitive is used, memory covered by this - primitive should be treated as read-only by the device. If the device - may write to it at any point, it should be DMA_BIDIRECTIONAL (see - below). - - DMA_FROM_DEVICE synchronisation must be done before the driver - accesses data that may be changed by the device. This memory should - be treated as read-only by the driver. If the driver needs to write - to it at any point, it should be DMA_BIDIRECTIONAL (see below). - - DMA_BIDIRECTIONAL requires special handling: it means that the driver - isn't sure if the memory was modified before being handed off to the - device and also isn't sure if the device will also modify it. 
Thus,
-    you must always sync bidirectional memory twice: once before the
-    memory is handed off to the device (to make sure all memory changes
-    are flushed from the processor) and once before the data may be
-    accessed after being used by the device (to make sure any processor
-    cache lines are updated with data that the device may have changed).
-
-::
-
-	void
-	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
-			 enum dma_data_direction direction)
-
-Unmaps the region previously mapped. All the parameters passed in
-must be identical to those passed in (and returned) by the mapping
-API.
-
-::
-
-	dma_addr_t
-	dma_map_page(struct device *dev, struct page *page,
-		     unsigned long offset, size_t size,
-		     enum dma_data_direction direction)
-
-	void
-	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
-		       enum dma_data_direction direction)
-
-API for mapping and unmapping for pages. All the notes and warnings
-for the other mapping APIs apply here. Also, although the ``offset``
-and ``size`` parameters are provided to do partial page mapping, it is
-recommended that you never use these unless you really know what the
-cache width is.
-
-::
-
-	dma_addr_t
-	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
-			 enum dma_data_direction dir, unsigned long attrs)
-
-	void
-	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
-			   enum dma_data_direction dir, unsigned long attrs)
-
-API for mapping and unmapping for MMIO resources. All the notes and
-warnings for the other mapping APIs apply here. The API should only be
-used to map device MMIO resources; mapping of RAM is not permitted.
-
-::
-
-	int
-	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-
-In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
-will fail to create a mapping. A driver can check for these errors by testing
-the returned DMA address with dma_mapping_error(). A non-zero return value
-means the mapping could not be created and the driver should take appropriate
-action (e.g. reduce current DMA mapping usage or delay and try again later).
-
-::
-
-	int
-	dma_map_sg(struct device *dev, struct scatterlist *sg,
-		   int nents, enum dma_data_direction direction)
-
-Returns: the number of DMA address segments mapped (this may be shorter
-than ``nents`` passed in if some elements of the scatter/gather list are
-physically or virtually adjacent and an IOMMU maps them with a single
-entry).
-
-Please note that the sg cannot be mapped again if it has been mapped once.
-The mapping process is allowed to destroy information in the sg.
-
-As with the other mapping interfaces, dma_map_sg() can fail. When it
-does, 0 is returned and a driver must take appropriate action. It is
-critical that the driver do something; in the case of a block driver,
-aborting the request or even oopsing is better than doing nothing and
-corrupting the filesystem.
-
-With scatterlists, you use the resulting mapping like this::
-
-	int i, count = dma_map_sg(dev, sglist, nents, direction);
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, count, i) {
-		hw_address[i] = sg_dma_address(sg);
-		hw_len[i] = sg_dma_len(sg);
-	}
-
-where nents is the number of entries in the sglist.
-
-The implementation is free to merge several consecutive sglist entries
-into one (e.g. with an IOMMU, or if several pages just happen to be
-physically contiguous) and returns the actual number of sg entries it
-mapped them to. On failure, 0 is returned.
-
-Then you should loop count times (note: this can be less than nents times)
-and use sg_dma_address() and sg_dma_len() macros where you previously
-accessed sg->address and sg->length as shown above.
-
-::
-
-	void
-	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-		     int nents, enum dma_data_direction direction)
-
-Unmap the previously mapped scatter/gather list. All the parameters
-must be the same as those passed in to the scatter/gather mapping
-API.
-
-Note: ``nents`` must be the number you passed in, *not* the number of
-DMA address entries returned.
-
-::
-
-	void
-	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-				size_t size,
-				enum dma_data_direction direction)
-
-	void
-	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
-				   size_t size,
-				   enum dma_data_direction direction)
-
-	void
-	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-			    int nents,
-			    enum dma_data_direction direction)
-
-	void
-	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-			       int nents,
-			       enum dma_data_direction direction)
-
-Synchronise a single contiguous or scatter/gather mapping for the CPU
-and device. With the sync_sg API, all the parameters must be the same
-as those passed into the single mapping API. With the sync_single API,
-you can use dma_handle and size parameters that aren't identical to
-those passed into the single mapping API to do a partial sync.
-
-
-.. note::
-
-    You must do this:
-
-    - Before reading values that have been written by DMA from the device
-      (use the DMA_FROM_DEVICE direction)
-    - After writing values that will be written to the device using DMA
-      (use the DMA_TO_DEVICE direction)
-    - Before *and* after handing memory to the device if the memory is
-      DMA_BIDIRECTIONAL
-
-See also dma_map_single().
-
-::
-
-	dma_addr_t
-	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
-			     enum dma_data_direction dir,
-			     unsigned long attrs)
-
-	void
-	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
-			       size_t size, enum dma_data_direction dir,
-			       unsigned long attrs)
-
-	int
-	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
-			 int nents, enum dma_data_direction dir,
-			 unsigned long attrs)
-
-	void
-	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
-			   int nents, enum dma_data_direction dir,
-			   unsigned long attrs)
-
-The four functions above are just like the counterpart functions
-without the _attrs suffixes, except that they pass an optional
-dma_attrs.
-
-The interpretation of DMA attributes is architecture-specific, and
-each attribute should be documented in Documentation/DMA-attributes.txt.
-
-If dma_attrs are 0, the semantics of each of these functions
-is identical to those of the corresponding function
-without the _attrs suffix. As a result dma_map_single_attrs()
-can generally replace dma_map_single(), etc.
-
-As an example of the use of the ``*_attrs`` functions, here's how
-you could pass an attribute DMA_ATTR_FOO when mapping memory
-for DMA::
-
-	#include <linux/dma-mapping.h>
-	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
-	 * documented in Documentation/DMA-attributes.txt */
-	...
-
-	unsigned long attr = 0;
-	attr |= DMA_ATTR_FOO;
-	....
-	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
-	....
- -Architectures that care about DMA_ATTR_FOO would check for its -presence in their implementations of the mapping and unmapping -routines, e.g.::: - - void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) - { - .... - if (attrs & DMA_ATTR_FOO) - /* twizzle the frobnozzle */ - .... - } - - -Part II - Advanced dma usage ----------------------------- - -Warning: These pieces of the DMA API should not be used in the -majority of cases, since they cater for unlikely corner cases that -don't belong in usual drivers. - -If you don't understand how cache line coherency works between a -processor and an I/O device, you should not be using this part of the -API at all. - -:: - - void * - dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle, - gfp_t flag, unsigned long attrs) - -Identical to dma_alloc_coherent() except that when the -DMA_ATTR_NON_CONSISTENT flags is passed in the attrs argument, the -platform will choose to return either consistent or non-consistent memory -as it sees fit. By using this API, you are guaranteeing to the platform -that you have all the correct and necessary sync points for this memory -in the driver should it choose to return non-consistent memory. - -Note: where the platform can return consistent memory, it will -guarantee that the sync points become nops. - -Warning: Handling non-consistent memory is a real pain. You should -only use this API if you positively know your driver will be -required to work on one of the rare (usually non-PCI) architectures -that simply cannot make consistent memory. - -:: - - void - dma_free_attrs(struct device *dev, size_t size, void *cpu_addr, - dma_addr_t dma_handle, unsigned long attrs) - -Free memory allocated by the dma_alloc_attrs(). All common -parameters must be identical to those otherwise passed to dma_free_coherent, -and the attrs argument must be identical to the attrs passed to -dma_alloc_attrs(). - -:: - - int - dma_get_cache_alignment(void) - -Returns the processor cache alignment. This is the absolute minimum -alignment *and* width that you must observe when either mapping -memory or doing partial flushes. - -.. note:: - - This API may return a number *larger* than the actual cache - line, but it will guarantee that one or more cache lines fit exactly - into the width returned by this call. It will also always be a power - of two for easy alignment. - -:: - - void - dma_cache_sync(struct device *dev, void *vaddr, size_t size, - enum dma_data_direction direction) - -Do a partial sync of memory that was allocated by dma_alloc_attrs() with -the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and -continuing on for size. Again, you *must* observe the cache line -boundaries when doing this. - -:: - - int - dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, - dma_addr_t device_addr, size_t size); - -Declare region of memory to be handed out by dma_alloc_coherent() when -it's asked for coherent memory for this device. - -phys_addr is the CPU physical address to which the memory is currently -assigned (this will be ioremapped so the CPU can access the region). - -device_addr is the DMA address the device needs to be programmed -with to actually address this memory (this will be handed out as the -dma_addr_t in dma_alloc_coherent()). - -size is the size of the area (must be multiples of PAGE_SIZE). 
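
For illustration, a device with a 1 MiB block of device-addressable SRAM
might declare it as follows; this is only a sketch, and the addresses are
invented - in a real driver they would come from the device's datasheet or
firmware::

	#define MYDEV_SRAM_PHYS	0x90000000	/* CPU physical address */
	#define MYDEV_SRAM_BUS	0x00000000	/* address the device must use */

	if (dma_declare_coherent_memory(dev, MYDEV_SRAM_PHYS,
					MYDEV_SRAM_BUS, SZ_1M))
		dev_warn(dev, "no dedicated coherent area, using system RAM\n");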
- -As a simplification for the platforms, only *one* such region of -memory may be declared per device. - -For reasons of efficiency, most platforms choose to track the declared -region only at the granularity of a page. For smaller allocations, -you should use the dma_pool() API. - -Part III - Debug drivers use of the DMA-API -------------------------------------------- - -The DMA-API as described above has some constraints. DMA addresses must be -released with the corresponding function with the same size for example. With -the advent of hardware IOMMUs it becomes more and more important that drivers -do not violate those constraints. In the worst case such a violation can -result in data corruption up to destroyed filesystems. - -To debug drivers and find bugs in the usage of the DMA-API checking code can -be compiled into the kernel which will tell the developer about those -violations. If your architecture supports it you can select the "Enable -debugging of DMA-API usage" option in your kernel configuration. Enabling this -option has a performance impact. Do not enable it in production kernels. - -If you boot the resulting kernel will contain code which does some bookkeeping -about what DMA memory was allocated for which device. If this code detects an -error it prints a warning message with some details into your kernel log. An -example warning message may look like this:: - - WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448 - check_unmap+0x203/0x490() - Hardware name: - forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong - function [device address=0x00000000640444be] [size=66 bytes] [mapped as - single] [unmapped as page] - Modules linked in: nfsd exportfs bridge stp llc r8169 - Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1 - Call Trace: - [] warn_slowpath+0xf2/0x130 - [] _spin_unlock+0x10/0x30 - [] usb_hcd_link_urb_to_ep+0x75/0xc0 - [] _spin_unlock_irqrestore+0x12/0x40 - [] ohci_urb_enqueue+0x19f/0x7c0 - [] queue_work+0x56/0x60 - [] enqueue_task_fair+0x20/0x50 - [] usb_hcd_submit_urb+0x379/0xbc0 - [] cpumask_next_and+0x23/0x40 - [] find_busiest_group+0x207/0x8a0 - [] _spin_lock_irqsave+0x1f/0x50 - [] check_unmap+0x203/0x490 - [] debug_dma_unmap_page+0x49/0x50 - [] nv_tx_done_optimized+0xc6/0x2c0 - [] nv_nic_irq_optimized+0x73/0x2b0 - [] handle_IRQ_event+0x34/0x70 - [] handle_edge_irq+0xc9/0x150 - [] do_IRQ+0xcb/0x1c0 - [] ret_from_intr+0x0/0xa - <4>---[ end trace f6435a98e2a38c0e ]--- - -The driver developer can find the driver and the device including a stacktrace -of the DMA-API call which caused this warning. - -Per default only the first error will result in a warning message. All other -errors will only silently counted. This limitation exist to prevent the code -from flooding your kernel log. To support debugging a device driver this can -be disabled via debugfs. See the debugfs interface documentation below for -details. - -The debugfs directory for the DMA-API debugging code is called dma-api/. In -this directory the following files can currently be found: - -=============================== =============================================== -dma-api/all_errors This file contains a numeric value. If this - value is not equal to zero the debugging code - will print a warning for every error it finds - into the kernel log. Be careful with this - option, as it can easily flood your logs. - -dma-api/disabled This read-only file contains the character 'Y' - if the debugging code is disabled. 
This can
-				happen when it runs out of memory or if it was
-				disabled at boot time.
-
-dma-api/dump			This read-only file contains current DMA
-				mappings.
-
-dma-api/error_count		This file is read-only and shows the total
-				number of errors found.
-
-dma-api/num_errors		The number in this file shows how many
-				warnings will be printed to the kernel log
-				before it stops. This number is initialized to
-				one at system boot and can be set by writing
-				into this file.
-
-dma-api/min_free_entries	This read-only file can be read to get the
-				minimum number of free dma_debug_entries the
-				allocator has ever seen. If this value goes
-				down to zero the code will attempt to increase
-				nr_total_entries to compensate.
-
-dma-api/num_free_entries	The current number of free dma_debug_entries
-				in the allocator.
-
-dma-api/nr_total_entries	The total number of dma_debug_entries in the
-				allocator, both free and used.
-
-dma-api/driver_filter		You can write a name of a driver into this file
-				to limit the debug output to requests from that
-				particular driver. Write an empty string to
-				that file to disable the filter and see
-				all errors again.
-=============================== ===============================================
-
-If you have this code compiled into your kernel it will be enabled by default.
-If you want to boot without the bookkeeping anyway you can provide
-'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
-Notice that you cannot enable it again at runtime.  You have to reboot to do
-so.
-
-If you want to see debug messages only for a specific device driver you can
-specify the dma_debug_driver=<drivername> parameter.  This will enable the
-driver filter at boot time.  The debug code will only print errors for that
-driver afterwards.  This filter can be disabled or changed later using debugfs.
-
-When the code disables itself at runtime, this is most likely because it ran
-out of dma_debug_entries and was unable to allocate more on demand.  65536
-entries are preallocated at boot - if this is too low for you, boot with
-'dma_debug_entries=<your_desired_number>' to override the default.  Note
-that the code allocates entries in batches, so the exact number of
-preallocated entries may be greater than the actual number requested.  The
-code will print to the kernel log each time it has dynamically allocated
-as many entries as were initially preallocated.  This is to indicate that a
-larger preallocation size may be appropriate, or, if it happens continually,
-that a driver may be leaking mappings.
-
-::
-
-	void
-	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
-
-debug_dma_mapping_error() is a dma-debug interface for debugging drivers
-that fail to check for DMA mapping errors on addresses returned by the
-dma_map_single() and dma_map_page() interfaces.  This interface clears a
-flag set by debug_dma_map_page() to indicate that dma_mapping_error() has
-been called by the driver.  When the driver unmaps, debug_dma_unmap() checks
-the flag and, if it is still set, prints a warning message that includes the
-call trace leading up to the unmap.  This interface can be called from
-dma_mapping_error() routines to enable DMA mapping error check debugging.
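-
-A minimal sketch of the pattern the debugging code expects, with every
-mapping checked by dma_mapping_error() before use and released by the
-matching unmap function with the same size and direction (the mydev
-structure and its fields here are hypothetical)::
-
-	static int mydev_map_tx(struct mydev *md, void *buf, size_t len)
-	{
-		md->tx_dma = dma_map_single(md->dev, buf, len, DMA_TO_DEVICE);
-		if (dma_mapping_error(md->dev, md->tx_dma))
-			return -ENOMEM;	/* this check clears the dma-debug flag */
-		md->tx_len = len;
-		return 0;
-	}
-
-	static void mydev_unmap_tx(struct mydev *md)
-	{
-		/* same size and direction as the dma_map_single() above */
-		dma_unmap_single(md->dev, md->tx_dma, md->tx_len, DMA_TO_DEVICE);
-	}
-
-A driver written this way satisfies both checks described above: the mapping
-error is tested before the address is used, and the unmap matches the map.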
diff --git a/Documentation/DMA-ISA-LPC.txt b/Documentation/DMA-ISA-LPC.txt
deleted file mode 100644
index b1ec7b16c21ff..0000000000000
--- a/Documentation/DMA-ISA-LPC.txt
+++ /dev/null
@@ -1,152 +0,0 @@
-============================
-DMA with ISA and LPC devices
-============================
-
-:Author: Pierre Ossman
-
-This document describes how to do DMA transfers using the old ISA DMA
-controller. Even though ISA is more or less dead today, the LPC bus
-uses the same DMA system so it will be around for quite some time.
-
-Headers and dependencies
-------------------------
-
-To do ISA style DMA you need to include two headers::
-
-	#include <linux/dma-mapping.h>
-	#include <asm/dma.h>
-
-The first is the generic DMA API used to convert virtual addresses to
-bus addresses (see Documentation/DMA-API.txt for details).
-
-The second contains the routines specific to ISA DMA transfers. Since
-this is not present on all platforms, make sure you construct your
-Kconfig to be dependent on ISA_DMA_API (not ISA) so that nobody tries
-to build your driver on unsupported platforms.
-
-Buffer allocation
------------------
-
-The ISA DMA controller has some very strict requirements on which
-memory it can access, so extra care must be taken when allocating
-buffers.
-
-(You usually need a special buffer for DMA transfers instead of
-transferring directly to and from your normal data structures.)
-
-The DMA-able address space is the lowest 16 MB of _physical_ memory.
-Also the transfer block may not cross page boundaries (which are 64
-or 128 KiB depending on which channel you use).
-
-In order to allocate a piece of memory that satisfies all these
-requirements you pass the flag GFP_DMA to kmalloc().
-
-Unfortunately the memory available for ISA DMA is scarce, so unless you
-allocate the memory during boot-up it's a good idea to also pass
-__GFP_RETRY_MAYFAIL and __GFP_NOWARN to make the allocator try a bit harder.
-
-(This scarcity also means that you should allocate the buffer as
-early as possible and not release it until the driver is unloaded.)
-
-Address translation
--------------------
-
-To translate the virtual address to a bus address, use the normal DMA
-API. Do _not_ use isa_virt_to_bus() even though it does the same
-thing. The reason for this is that the function isa_virt_to_bus()
-will require a Kconfig dependency on ISA, not just ISA_DMA_API, which
-is really all you need. Remember that even though the DMA controller
-has its origins in ISA it is used elsewhere.
-
-Note: x86_64 had a broken DMA API when it came to ISA but it has since
-been fixed. If your arch has problems then fix the DMA API instead of
-reverting to the ISA functions.
-
-Channels
---------
-
-A normal ISA DMA controller has 8 channels. The lower four are for
-8-bit transfers and the upper four are for 16-bit transfers.
-
-(Actually the DMA controller is really two separate controllers;
-channel 4 is used to cascade the first controller (channels 0-3) into
-the second. This means that of the four 16-bit channels, only three
-are usable.)
-
-You allocate these in a similar fashion as all basic resources::
-
-	extern int request_dma(unsigned int dmanr, const char * device_id);
-	extern void free_dma(unsigned int dmanr);
-
-The ability to use 16-bit or 8-bit transfers is _not_ up to you as a
-driver author but depends on what the hardware supports. Check your
-specs or test different channels.
-
-Transfer data
--------------
-
-Now for the good stuff, the actual DMA transfer. :)
-
-Before you use any ISA DMA routines you need to claim the DMA lock
-using claim_dma_lock().
The reason is that some DMA operations are
-not atomic, so only one driver may fiddle with the registers at a
-time.
-
-The first time you use the DMA controller you should call
-clear_dma_ff(). This clears an internal register in the DMA
-controller that is used for the non-atomic operations. As long as you
-(and everyone else) use the locking functions then you only need to
-reset this once.
-
-Next, you tell the controller in which direction you intend to do the
-transfer using set_dma_mode(). Currently you have the options
-DMA_MODE_READ and DMA_MODE_WRITE.
-
-Set the address from where the transfer should start (this needs to
-be 16-bit aligned for 16-bit transfers) and how many bytes to
-transfer. Note that it's _bytes_. The DMA routines will do all the
-required translation to values that the DMA controller understands.
-
-The final step is enabling the DMA channel and releasing the DMA
-lock.
-
-Once the DMA transfer is finished (or timed out) you should disable
-the channel again. You should also check get_dma_residue() to make
-sure that all data has been transferred.
-
-Example::
-
-	unsigned long flags;
-	int residue;
-
-	flags = claim_dma_lock();
-
-	clear_dma_ff(channel);
-
-	set_dma_mode(channel, DMA_MODE_WRITE);
-	set_dma_addr(channel, phys_addr);
-	set_dma_count(channel, num_bytes);
-
-	enable_dma(channel);
-
-	release_dma_lock(flags);
-
-	while (!device_done());
-
-	flags = claim_dma_lock();
-
-	disable_dma(channel);
-
-	residue = get_dma_residue(channel);
-	if (residue != 0)
-		printk(KERN_ERR "driver: Incomplete DMA transfer!"
-		       " %d bytes left!\n", residue);
-
-	release_dma_lock(flags);
-
-Suspend/resume
---------------
-
-It is the driver's responsibility to make sure that the machine isn't
-suspended while a DMA transfer is in progress. Also, all DMA settings
-are lost when the system suspends, so if your driver relies on the DMA
-controller being in a certain state then you have to restore these
-registers upon resume.
diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
deleted file mode 100644
index 29dcbe8826e85..0000000000000
--- a/Documentation/DMA-attributes.txt
+++ /dev/null
@@ -1,140 +0,0 @@
-==============
-DMA attributes
-==============
-
-This document describes the semantics of the DMA attributes that are
-defined in linux/dma-mapping.h.
-
-DMA_ATTR_WEAK_ORDERING
-----------------------
-
-DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
-may be weakly ordered, that is, reads and writes may pass each other.
-
-Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
-those that do not will simply ignore the attribute and exhibit default
-behavior.
-
-DMA_ATTR_WRITE_COMBINE
-----------------------
-
-DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
-buffered to improve performance.
-
-Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
-those that do not will simply ignore the attribute and exhibit default
-behavior.
-
-DMA_ATTR_NON_CONSISTENT
------------------------
-
-DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
-consistent or non-consistent memory as it sees fit.  By using this API,
-you are guaranteeing to the platform that you have all the correct and
-necessary sync points for this memory in the driver.
-
-DMA_ATTR_NO_KERNEL_MAPPING
---------------------------
-
-DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
-virtual mapping for the allocated buffer.
On some architectures creating
-such a mapping is a non-trivial task and consumes very limited resources
-(like kernel virtual address space or DMA consistent address space).
-Buffers allocated with this attribute can only be passed to user space
-by calling dma_mmap_attrs().  By using this API, you are guaranteeing
-that you won't dereference the pointer returned by dma_alloc_attrs().  You
-can treat it as a cookie that must be passed to dma_mmap_attrs() and
-dma_free_attrs().  Make sure that both of these also get this attribute
-set on each call.
-
-Since it is optional for platforms to implement
-DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
-attribute and exhibit default behavior.
-
-DMA_ATTR_SKIP_CPU_SYNC
-----------------------
-
-By default the dma_map_{single,page,sg} family of functions transfers a
-given buffer from the CPU domain to the device domain.  Some advanced use
-cases might require sharing a buffer between more than one device.  This
-requires having a mapping created separately for each device and is usually
-performed by calling the dma_map_{single,page,sg} functions more than once
-for the given buffer, with the device pointer of each device taking part in
-the buffer sharing.  The first call transfers the buffer from the 'CPU'
-domain to the 'device' domain, which synchronizes CPU caches for the given
-region (usually it means that the cache has been flushed or invalidated,
-depending on the DMA direction).  However, subsequent calls to
-dma_map_{single,page,sg}() for other devices will perform exactly the
-same synchronization operation on the CPU cache.  CPU cache synchronization
-might be a time-consuming operation, especially if the buffers are
-large, so it is highly recommended to avoid it if possible.
-DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
-the CPU cache for the given buffer, assuming that it has already been
-transferred to the 'device' domain.  This attribute can also be used with
-the dma_unmap_{single,page,sg} family of functions to force the buffer to
-stay in the device domain after releasing a mapping for it.  Use this
-attribute with care!
-
-DMA_ATTR_FORCE_CONTIGUOUS
--------------------------
-
-By default the DMA-mapping subsystem is allowed to assemble the buffer
-allocated by the dma_alloc_attrs() function from individual pages if it can
-be mapped as a contiguous chunk into the device's DMA address space.  By
-specifying this attribute the allocated buffer is forced to be contiguous
-in physical memory as well.
-
-DMA_ATTR_ALLOC_SINGLE_PAGES
----------------------------
-
-This is a hint to the DMA-mapping subsystem that it's probably not worth
-the time to try to allocate memory in a way that gives better TLB
-efficiency (AKA it's not worth trying to build the mapping out of larger
-pages).  You might want to specify this if:
-
-- You know that the accesses to this memory won't thrash the TLB.
-  You might know that the accesses are likely to be sequential or
-  that they aren't sequential but it's unlikely you'll ping-pong
-  between many addresses that are likely to be in different physical
-  pages.
-- You know that the penalty of TLB misses while accessing the
-  memory will be small enough to be inconsequential.  If you are
-  doing a heavy operation like decryption or decompression this
-  might be the case.
-- You know that the DMA mapping is fairly transitory.  If you expect
-  the mapping to have a short lifetime then it may be worth it to
-  optimize allocation (avoid coming up with large pages) instead of
-  getting the slight performance win of larger pages.
- -Setting this hint doesn't guarantee that you won't get huge pages, but it -means that we won't try quite as hard to get them. - -.. note:: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM, - though ARM64 patches will likely be posted soon. - -DMA_ATTR_NO_WARN ----------------- - -This tells the DMA-mapping subsystem to suppress allocation failure reports -(similarly to __GFP_NOWARN). - -On some architectures allocation failures are reported with error messages -to the system logs. Although this can help to identify and debug problems, -drivers which handle failures (eg, retry later) have no problems with them, -and can actually flood the system logs with error messages that aren't any -problem at all, depending on the implementation of the retry mechanism. - -So, this provides a way for drivers to avoid those error messages on calls -where allocation failures are not a problem, and shouldn't bother the logs. - -.. note:: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC. - -DMA_ATTR_PRIVILEGED -------------------- - -Some advanced peripherals such as remote processors and GPUs perform -accesses to DMA buffers in both privileged "supervisor" and unprivileged -"user" modes. This attribute is used to indicate to the DMA-mapping -subsystem that the buffer is fully accessible at the elevated privilege -level (and ideally inaccessible or at least read-only at the -lesser-privileged levels). diff --git a/Documentation/core-api/dma-api-howto.rst b/Documentation/core-api/dma-api-howto.rst new file mode 100644 index 0000000000000..358d495456d1b --- /dev/null +++ b/Documentation/core-api/dma-api-howto.rst @@ -0,0 +1,929 @@ +========================= +Dynamic DMA mapping Guide +========================= + +:Author: David S. Miller +:Author: Richard Henderson +:Author: Jakub Jelinek + +This is a guide to device driver writers on how to use the DMA API +with example pseudo-code. For a concise description of the API, see +DMA-API.txt. + +CPU and DMA addresses +===================== + +There are several kinds of addresses involved in the DMA API, and it's +important to understand the differences. + +The kernel normally uses virtual addresses. Any address returned by +kmalloc(), vmalloc(), and similar interfaces is a virtual address and can +be stored in a ``void *``. + +The virtual memory system (TLB, page tables, etc.) translates virtual +addresses to CPU physical addresses, which are stored as "phys_addr_t" or +"resource_size_t". The kernel manages device resources like registers as +physical addresses. These are the addresses in /proc/iomem. The physical +address is not directly useful to a driver; it must use ioremap() to map +the space and produce a virtual address. + +I/O devices use a third kind of address: a "bus address". If a device has +registers at an MMIO address, or if it performs DMA to read or write system +memory, the addresses used by the device are bus addresses. In some +systems, bus addresses are identical to CPU physical addresses, but in +general they are not. IOMMUs and host bridges can produce arbitrary +mappings between physical and bus addresses. + +From a device's point of view, DMA uses the bus address space, but it may +be restricted to a subset of that space. For example, even if a system +supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU +so devices only need to use 32-bit DMA addresses. 
+ +Here's a picture and some examples:: + + CPU CPU Bus + Virtual Physical Address + Address Address Space + Space Space + + +-------+ +------+ +------+ + | | |MMIO | Offset | | + | | Virtual |Space | applied | | + C +-------+ --------> B +------+ ----------> +------+ A + | | mapping | | by host | | + +-----+ | | | | bridge | | +--------+ + | | | | +------+ | | | | + | CPU | | | | RAM | | | | Device | + | | | | | | | | | | + +-----+ +-------+ +------+ +------+ +--------+ + | | Virtual |Buffer| Mapping | | + X +-------+ --------> Y +------+ <---------- +------+ Z + | | mapping | RAM | by IOMMU + | | | | + | | | | + +-------+ +------+ + +During the enumeration process, the kernel learns about I/O devices and +their MMIO space and the host bridges that connect them to the system. For +example, if a PCI device has a BAR, the kernel reads the bus address (A) +from the BAR and converts it to a CPU physical address (B). The address B +is stored in a struct resource and usually exposed via /proc/iomem. When a +driver claims a device, it typically uses ioremap() to map physical address +B at a virtual address (C). It can then use, e.g., ioread32(C), to access +the device registers at bus address A. + +If the device supports DMA, the driver sets up a buffer using kmalloc() or +a similar interface, which returns a virtual address (X). The virtual +memory system maps X to a physical address (Y) in system RAM. The driver +can use virtual address X to access the buffer, but the device itself +cannot because DMA doesn't go through the CPU virtual memory system. + +In some simple systems, the device can do DMA directly to physical address +Y. But in many others, there is IOMMU hardware that translates DMA +addresses to physical addresses, e.g., it translates Z to Y. This is part +of the reason for the DMA API: the driver can give a virtual address X to +an interface like dma_map_single(), which sets up any required IOMMU +mapping and returns the DMA address Z. The driver then tells the device to +do DMA to Z, and the IOMMU maps it to the buffer at address Y in system +RAM. + +So that Linux can use the dynamic DMA mapping, it needs some help from the +drivers, namely it has to take into account that DMA addresses should be +mapped only for the time they are actually used and unmapped after the DMA +transfer. + +The following API will work of course even on platforms where no such +hardware exists. + +Note that the DMA API works with any bus independent of the underlying +microprocessor architecture. You should use the DMA API rather than the +bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the +pci_map_*() interfaces. + +First of all, you should make sure:: + + #include + +is in your driver, which provides the definition of dma_addr_t. This type +can hold any valid DMA address for the platform and should be used +everywhere you hold a DMA address returned from the DMA mapping functions. + +What memory is DMA'able? +======================== + +The first piece of information you must know is what kernel memory can +be used with the DMA mapping facilities. There has been an unwritten +set of rules regarding this, and this text is an attempt to finally +write them down. + +If you acquired your memory via the page allocator +(i.e. __get_free_page*()) or the generic memory allocators +(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from +that memory using the addresses returned from those routines. 
+
+This means specifically that you may _not_ use the memory/addresses
+returned from vmalloc() for DMA.  It is possible to DMA to the
+_underlying_ memory mapped into a vmalloc() area, but this requires
+walking page tables to get the physical addresses, and then
+translating each of those pages back to a kernel address using
+something like __va().  [ EDIT: Update this when we integrate
+Gerd Knorr's generic code which does this. ]
+
+This rule also means that you may use neither kernel image addresses
+(items in data/text/bss segments), nor module image addresses, nor
+stack addresses for DMA.  These could all be mapped somewhere entirely
+different than the rest of physical memory.  Even if those classes of
+memory could physically work with DMA, you'd need to ensure the I/O
+buffers were cacheline-aligned.  Without that, you'd see cacheline
+sharing problems (data corruption) on CPUs with DMA-incoherent caches.
+(The CPU could write to one word, DMA would write to a different one
+in the same cache line, and one of them could be overwritten.)
+
+Also, this means that you cannot take the return of a kmap()
+call and DMA to/from that.  This is similar to vmalloc().
+
+What about block I/O and networking buffers?  The block I/O and
+networking subsystems make sure that the buffers they use are valid
+for you to DMA from/to.
+
+DMA addressing capabilities
+===========================
+
+By default, the kernel assumes that your device can address 32 bits of DMA
+address space.  For a 64-bit capable device, this needs to be increased, and
+for a device with limitations, it needs to be decreased.
+
+Special note about PCI: PCI-X specification requires PCI-X devices to support
+64-bit addressing (DAC) for all transactions.  And at least one platform (SGI
+SN2) requires 64-bit consistent allocations to operate correctly when the IO
+bus is in PCI-X mode.
+
+For correct operation, you must set the DMA mask to inform the kernel about
+your device's DMA addressing capabilities.
+
+This is performed via a call to dma_set_mask_and_coherent()::
+
+	int dma_set_mask_and_coherent(struct device *dev, u64 mask);
+
+which will set the mask for both streaming and coherent APIs together.  If you
+have some special requirements, then the following two separate calls can be
+used instead:
+
+	The setup for streaming mappings is performed via a call to
+	dma_set_mask()::
+
+		int dma_set_mask(struct device *dev, u64 mask);
+
+	The setup for consistent allocations is performed via a call
+	to dma_set_coherent_mask()::
+
+		int dma_set_coherent_mask(struct device *dev, u64 mask);
+
+Here, dev is a pointer to the device struct of your device, and mask is a bit
+mask describing which bits of an address your device supports.  Often the
+device struct of your device is embedded in the bus-specific device struct of
+your device.  For example, &pdev->dev is a pointer to the device struct of a
+PCI device (pdev is a pointer to the PCI device struct of your device).
+
+These calls usually return zero to indicate that your device can perform DMA
+properly on the machine given the address mask you provided, but they might
+return an error if the mask is too small to be supportable on the given
+system.  If it returns non-zero, your device cannot perform DMA properly on
+this platform, and attempting to do so will result in undefined behavior.
+You must not use DMA on this device unless the dma_set_mask family of
+functions has returned success.
+
+This means that in the failure case, you have two options:
+
+1) Use some non-DMA mode for data transfer, if possible.
+2) Ignore this device and do not initialize it.
+
+It is recommended that your driver print a KERN_WARNING message when
+setting the DMA mask fails.  In this manner, if a user of your driver reports
+that performance is bad or that the device is not even detected, you can ask
+them for the kernel messages to find out exactly why.
+
+The standard 64-bit addressing device would do something like this::
+
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
+		dev_warn(dev, "mydev: No suitable DMA available\n");
+		goto ignore_this_device;
+	}
+
+If the device only supports 32-bit addressing for descriptors in the
+coherent allocations, but supports full 64-bits for streaming mappings,
+it would look like this::
+
+	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
+		dev_warn(dev, "mydev: No suitable DMA available\n");
+		goto ignore_this_device;
+	}
+
+The coherent mask can always be set to the same or a smaller mask than
+the streaming mask.  However, for the rare case that a device driver only
+uses consistent allocations, one would have to check the return value from
+dma_set_coherent_mask().
+
+Finally, if your device can only drive the low 24 bits of
+address you might do something like::
+
+	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
+		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
+		goto ignore_this_device;
+	}
+
+When dma_set_mask() or dma_set_mask_and_coherent() is successful and
+returns zero, the kernel saves away this mask you have provided.  The
+kernel will use this information later when you make DMA mappings.
+
+There is a case which we are aware of at this time, which is worth
+mentioning in this documentation.  If your device supports multiple
+functions (for example a sound card provides playback and record
+functions) and the various different functions have _different_
+DMA addressing limitations, you may wish to probe each mask and
+only provide the functionality which the machine can handle.  It
+is important that the last call to dma_set_mask() be for the
+most specific mask.
+
+Here is pseudo-code showing how this might be done::
+
+	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
+	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)
+
+	struct my_sound_card *card;
+	struct device *dev;
+
+	...
+	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
+		card->playback_enabled = 1;
+	} else {
+		card->playback_enabled = 0;
+		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
+			 card->name);
+	}
+	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
+		card->record_enabled = 1;
+	} else {
+		card->record_enabled = 0;
+		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
+			 card->name);
+	}
+
+A sound card was used as an example here because this genre of PCI
+devices seems to be littered with ISA chips given a PCI front end,
+and thus retaining the 16MB DMA addressing limitations of ISA.
+
+Types of DMA mappings
+=====================
+
+There are two types of DMA mappings:
+
+- Consistent DMA mappings which are usually mapped at driver
+  initialization, unmapped at the end and for which the hardware should
+  guarantee that the device and the CPU can access the data
+  in parallel and will see updates made by each other without any
+  explicit software flushing.
+
+  Think of "consistent" as "synchronous" or "coherent".
+
+  The current default is to return consistent memory in the low 32
+  bits of the DMA space.
However, for future compatibility you should
+  set the consistent mask even if this default is fine for your
+  driver.
+
+  Good examples of what to use consistent mappings for are:
+
+	- Network card DMA ring descriptors.
+	- SCSI adapter mailbox command data structures.
+	- Device firmware microcode executed out of
+	  main memory.
+
+  The invariant these examples all require is that any CPU store
+  to memory is immediately visible to the device, and vice
+  versa.  Consistent mappings guarantee this.
+
+  .. important::
+
+	     Consistent DMA memory does not preclude the usage of
+	     proper memory barriers.  The CPU may reorder stores to
+	     consistent memory just as it may for normal memory.  Example:
+	     if it is important for the device to see the first word
+	     of a descriptor updated before the second, you must do
+	     something like::
+
+		desc->word0 = address;
+		wmb();
+		desc->word1 = DESC_VALID;
+
+	     in order to get correct behavior on all platforms.
+
+	     Also, on some platforms your driver may need to flush CPU write
+	     buffers in much the same way as it needs to flush write buffers
+	     found in PCI bridges (such as by reading a register's value
+	     after writing it).
+
+- Streaming DMA mappings which are usually mapped for one DMA
+  transfer, unmapped right after it (unless you use dma_sync_* below)
+  and for which hardware can optimize for sequential accesses.
+
+  Think of "streaming" as "asynchronous" or "outside the coherency
+  domain".
+
+  Good examples of what to use streaming mappings for are:
+
+	- Networking buffers transmitted/received by a device.
+	- Filesystem buffers written/read by a SCSI device.
+
+  The interfaces for using this type of mapping were designed in
+  such a way that an implementation can make whatever performance
+  optimizations the hardware allows.  To this end, when using
+  such mappings you must be explicit about what you want to happen.
+
+Neither type of DMA mapping has alignment restrictions that come from
+the underlying bus, although some devices may have such restrictions.
+Also, systems with caches that aren't DMA-coherent will work better
+when the underlying buffers don't share cache lines with other data.
+
+
+Using Consistent DMA mappings
+=============================
+
+To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
+you should do::
+
+	dma_addr_t dma_handle;
+
+	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);
+
+where dev is a ``struct device *``.  This may be called in interrupt
+context with the GFP_ATOMIC flag.
+
+Size is the length of the region you want to allocate, in bytes.
+
+This routine will allocate RAM for that region, so it acts similarly to
+__get_free_pages() (but takes size instead of a page order).  If your
+driver needs regions sized smaller than a page, you may prefer using
+the dma_pool interface, described below.
+
+The consistent DMA mapping interfaces will by default return a DMA address
+which is 32-bit addressable.  Even if the device indicates (via the DMA mask)
+that it may address the upper 32 bits, consistent allocation will only
+return > 32-bit addresses for DMA if the consistent DMA mask has been
+explicitly changed via dma_set_coherent_mask().  This is true of the
+dma_pool interface as well.
+
+dma_alloc_coherent() returns two values: the virtual address which you
+can use to access it from the CPU and dma_handle which you pass to the
+card.
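+
+For instance, a driver might allocate a descriptor ring this way, program
+the device with the DMA address, and use the CPU pointer itself (the
+register offset and the md structure below are hypothetical)::
+
+	md->ring = dma_alloc_coherent(dev, RING_BYTES, &md->ring_dma,
+				      GFP_KERNEL);
+	if (!md->ring)
+		return -ENOMEM;
+
+	/* the device is given the DMA address ... */
+	writel(lower_32_bits(md->ring_dma), md->mmio + MYDEV_RING_BASE);
+
+	/* ... while the CPU uses the virtual address */
+	memset(md->ring, 0, RING_BYTES);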
+ +The CPU virtual address and the DMA address are both +guaranteed to be aligned to the smallest PAGE_SIZE order which +is greater than or equal to the requested size. This invariant +exists (for example) to guarantee that if you allocate a chunk +which is smaller than or equal to 64 kilobytes, the extent of the +buffer you receive will not cross a 64K boundary. + +To unmap and free such a DMA region, you call:: + + dma_free_coherent(dev, size, cpu_addr, dma_handle); + +where dev, size are the same as in the above call and cpu_addr and +dma_handle are the values dma_alloc_coherent() returned to you. +This function may not be called in interrupt context. + +If your driver needs lots of smaller memory regions, you can write +custom code to subdivide pages returned by dma_alloc_coherent(), +or you can use the dma_pool API to do that. A dma_pool is like +a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages(). +Also, it understands common hardware constraints for alignment, +like queue heads needing to be aligned on N byte boundaries. + +Create a dma_pool like this:: + + struct dma_pool *pool; + + pool = dma_pool_create(name, dev, size, align, boundary); + +The "name" is for diagnostics (like a kmem_cache name); dev and size +are as above. The device's hardware alignment requirement for this +type of data is "align" (which is expressed in bytes, and must be a +power of two). If your device has no boundary crossing restrictions, +pass 0 for boundary; passing 4096 says memory allocated from this pool +must not cross 4KByte boundaries (but at that time it may be better to +use dma_alloc_coherent() directly instead). + +Allocate memory from a DMA pool like this:: + + cpu_addr = dma_pool_alloc(pool, flags, &dma_handle); + +flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor +holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(), +this returns two values, cpu_addr and dma_handle. + +Free memory that was allocated from a dma_pool like this:: + + dma_pool_free(pool, cpu_addr, dma_handle); + +where pool is what you passed to dma_pool_alloc(), and cpu_addr and +dma_handle are the values dma_pool_alloc() returned. This function +may be called in interrupt context. + +Destroy a dma_pool by calling:: + + dma_pool_destroy(pool); + +Make sure you've called dma_pool_free() for all memory allocated +from a pool before you destroy the pool. This function may not +be called in interrupt context. + +DMA Direction +============= + +The interfaces described in subsequent portions of this document +take a DMA direction argument, which is an integer and takes on +one of the following values:: + + DMA_BIDIRECTIONAL + DMA_TO_DEVICE + DMA_FROM_DEVICE + DMA_NONE + +You should provide the exact DMA direction if you know it. + +DMA_TO_DEVICE means "from main memory to the device" +DMA_FROM_DEVICE means "from the device to main memory" +It is the direction in which the data moves during the DMA +transfer. + +You are _strongly_ encouraged to specify this as precisely +as you possibly can. + +If you absolutely cannot know the direction of the DMA transfer, +specify DMA_BIDIRECTIONAL. It means that the DMA can go in +either direction. The platform guarantees that you may legally +specify this, and that it will work, but this may be at the +cost of performance for example. + +The value DMA_NONE is to be used for debugging. 
One can +hold this in a data structure before you come to know the +precise direction, and this will help catch cases where your +direction tracking logic has failed to set things up properly. + +Another advantage of specifying this value precisely (outside of +potential platform-specific optimizations of such) is for debugging. +Some platforms actually have a write permission boolean which DMA +mappings can be marked with, much like page protections in the user +program address space. Such platforms can and do report errors in the +kernel logs when the DMA controller hardware detects violation of the +permission setting. + +Only streaming mappings specify a direction, consistent mappings +implicitly have a direction attribute setting of +DMA_BIDIRECTIONAL. + +The SCSI subsystem tells you the direction to use in the +'sc_data_direction' member of the SCSI command your driver is +working on. + +For Networking drivers, it's a rather simple affair. For transmit +packets, map/unmap them with the DMA_TO_DEVICE direction +specifier. For receive packets, just the opposite, map/unmap them +with the DMA_FROM_DEVICE direction specifier. + +Using Streaming DMA mappings +============================ + +The streaming DMA mapping routines can be called from interrupt +context. There are two versions of each map/unmap, one which will +map/unmap a single memory region, and one which will map/unmap a +scatterlist. + +To map a single region, you do:: + + struct device *dev = &my_dev->dev; + dma_addr_t dma_handle; + void *addr = buffer->ptr; + size_t size = buffer->len; + + dma_handle = dma_map_single(dev, addr, size, direction); + if (dma_mapping_error(dev, dma_handle)) { + /* + * reduce current DMA mapping usage, + * delay and try again later or + * reset driver. + */ + goto map_error_handling; + } + +and to unmap it:: + + dma_unmap_single(dev, dma_handle, size, direction); + +You should call dma_mapping_error() as dma_map_single() could fail and return +error. Doing so will ensure that the mapping code will work correctly on all +DMA implementations without any dependency on the specifics of the underlying +implementation. Using the returned address without checking for errors could +result in failures ranging from panics to silent data corruption. The same +applies to dma_map_page() as well. + +You should call dma_unmap_single() when the DMA activity is finished, e.g., +from the interrupt which told you that the DMA transfer is done. + +Using CPU pointers like this for single mappings has a disadvantage: +you cannot reference HIGHMEM memory in this way. Thus, there is a +map/unmap interface pair akin to dma_{map,unmap}_single(). These +interfaces deal with page/offset pairs instead of CPU pointers. +Specifically:: + + struct device *dev = &my_dev->dev; + dma_addr_t dma_handle; + struct page *page = buffer->page; + unsigned long offset = buffer->offset; + size_t size = buffer->len; + + dma_handle = dma_map_page(dev, page, offset, size, direction); + if (dma_mapping_error(dev, dma_handle)) { + /* + * reduce current DMA mapping usage, + * delay and try again later or + * reset driver. + */ + goto map_error_handling; + } + + ... + + dma_unmap_page(dev, dma_handle, size, direction); + +Here, "offset" means byte offset within the given page. + +You should call dma_mapping_error() as dma_map_page() could fail and return +error as outlined under the dma_map_single() discussion. 
+ +You should call dma_unmap_page() when the DMA activity is finished, e.g., +from the interrupt which told you that the DMA transfer is done. + +With scatterlists, you map a region gathered from several regions by:: + + int i, count = dma_map_sg(dev, sglist, nents, direction); + struct scatterlist *sg; + + for_each_sg(sglist, sg, count, i) { + hw_address[i] = sg_dma_address(sg); + hw_len[i] = sg_dma_len(sg); + } + +where nents is the number of entries in the sglist. + +The implementation is free to merge several consecutive sglist entries +into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any +consecutive sglist entries can be merged into one provided the first one +ends and the second one starts on a page boundary - in fact this is a huge +advantage for cards which either cannot do scatter-gather or have very +limited number of scatter-gather entries) and returns the actual number +of sg entries it mapped them to. On failure 0 is returned. + +Then you should loop count times (note: this can be less than nents times) +and use sg_dma_address() and sg_dma_len() macros where you previously +accessed sg->address and sg->length as shown above. + +To unmap a scatterlist, just call:: + + dma_unmap_sg(dev, sglist, nents, direction); + +Again, make sure DMA activity has already finished. + +.. note:: + + The 'nents' argument to the dma_unmap_sg call must be + the _same_ one you passed into the dma_map_sg call, + it should _NOT_ be the 'count' value _returned_ from the + dma_map_sg call. + +Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}() +counterpart, because the DMA address space is a shared resource and +you could render the machine unusable by consuming all DMA addresses. + +If you need to use the same streaming DMA region multiple times and touch +the data in between the DMA transfers, the buffer needs to be synced +properly in order for the CPU and device to see the most up-to-date and +correct copy of the DMA buffer. + +So, firstly, just map it with dma_map_{single,sg}(), and after each DMA +transfer call either:: + + dma_sync_single_for_cpu(dev, dma_handle, size, direction); + +or:: + + dma_sync_sg_for_cpu(dev, sglist, nents, direction); + +as appropriate. + +Then, if you wish to let the device get at the DMA area again, +finish accessing the data with the CPU, and then before actually +giving the buffer to the hardware call either:: + + dma_sync_single_for_device(dev, dma_handle, size, direction); + +or:: + + dma_sync_sg_for_device(dev, sglist, nents, direction); + +as appropriate. + +.. note:: + + The 'nents' argument to dma_sync_sg_for_cpu() and + dma_sync_sg_for_device() must be the same passed to + dma_map_sg(). It is _NOT_ the count returned by + dma_map_sg(). + +After the last DMA transfer call one of the DMA unmap routines +dma_unmap_{single,sg}(). If you don't touch the data from the first +dma_map_*() call till dma_unmap_*(), then you don't have to call the +dma_sync_*() routines at all. + +Here is pseudo code which shows a situation in which you would need +to use the dma_sync_*() interfaces:: + + my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len) + { + dma_addr_t mapping; + + mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE); + if (dma_mapping_error(cp->dev, mapping)) { + /* + * reduce current DMA mapping usage, + * delay and try again later or + * reset driver. + */ + goto map_error_handling; + } + + cp->rx_buf = buffer; + cp->rx_len = len; + cp->rx_dma = mapping; + + give_rx_buf_to_card(cp); + } + + ... 
+
+	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
+	{
+		struct my_card *cp = devid;
+
+		...
+		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
+			struct my_card_header *hp;
+
+			/* Examine the header to see if we wish
+			 * to accept the data.  But synchronize
+			 * the DMA transfer with the CPU first
+			 * so that we see updated contents.
+			 */
+			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
+						cp->rx_len,
+						DMA_FROM_DEVICE);
+
+			/* Now it is safe to examine the buffer. */
+			hp = (struct my_card_header *) cp->rx_buf;
+			if (header_is_ok(hp)) {
+				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
+						 DMA_FROM_DEVICE);
+				pass_to_upper_layers(cp->rx_buf);
+				make_and_setup_new_rx_buf(cp);
+			} else {
+				/* CPU should not write to
+				 * DMA_FROM_DEVICE-mapped area,
+				 * so dma_sync_single_for_device() is
+				 * not needed here. It would be required
+				 * for DMA_BIDIRECTIONAL mapping if
+				 * the memory was modified.
+				 */
+				give_rx_buf_to_card(cp);
+			}
+		}
+	}
+
+Drivers converted fully to this interface should not use virt_to_bus() any
+longer, nor should they use bus_to_virt().  Some drivers have to be changed a
+little bit, because there is no longer an equivalent to bus_to_virt() in the
+dynamic DMA mapping scheme - you have to always store the DMA addresses
+returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
+calls (dma_map_sg() stores them in the scatterlist itself if the platform
+supports dynamic DMA mapping in hardware) in your driver structures and/or
+in the card registers.
+
+All drivers should be using these interfaces with no exceptions.  It
+is planned to completely remove virt_to_bus() and bus_to_virt() as
+they are entirely deprecated.  Some ports already do not provide these
+as it is impossible to correctly support them.
+
+Handling Errors
+===============
+
+DMA address space is limited on some architectures and an allocation
+failure can be determined by:
+
+- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0
+
+- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
+  by using dma_mapping_error()::
+
+	dma_addr_t dma_handle;
+
+	dma_handle = dma_map_single(dev, addr, size, direction);
+	if (dma_mapping_error(dev, dma_handle)) {
+		/*
+		 * reduce current DMA mapping usage,
+		 * delay and try again later or
+		 * reset driver.
+		 */
+		goto map_error_handling;
+	}
+
+- unmapping pages that are already mapped, when a mapping error occurs in
+  the middle of a multiple page mapping attempt.  These examples are
+  applicable to dma_map_page() as well.
+
+Example 1::
+
+	dma_addr_t dma_handle1;
+	dma_addr_t dma_handle2;
+
+	dma_handle1 = dma_map_single(dev, addr, size, direction);
+	if (dma_mapping_error(dev, dma_handle1)) {
+		/*
+		 * reduce current DMA mapping usage,
+		 * delay and try again later or
+		 * reset driver.
+		 */
+		goto map_error_handling1;
+	}
+	dma_handle2 = dma_map_single(dev, addr, size, direction);
+	if (dma_mapping_error(dev, dma_handle2)) {
+		/*
+		 * reduce current DMA mapping usage,
+		 * delay and try again later or
+		 * reset driver.
+		 */
+		goto map_error_handling2;
+	}
+
+	...
+
+	map_error_handling2:
+	dma_unmap_single(dev, dma_handle1, size, direction);
+	map_error_handling1:
+
+Example 2::
+
+	/*
+	 * if buffers are allocated in a loop, unmap all mapped buffers when
+	 * mapping error is detected in the middle
+	 */
+
+	dma_addr_t dma_addr;
+	dma_addr_t array[DMA_BUFFERS];
+	int save_index = 0;
+
+	for (i = 0; i < DMA_BUFFERS; i++) {
+
+		...
+
+		dma_addr = dma_map_single(dev, addr, size, direction);
+		if (dma_mapping_error(dev, dma_addr)) {
+			/*
+			 * reduce current DMA mapping usage,
+			 * delay and try again later or
+			 * reset driver.
+			 */
+			goto map_error_handling;
+		}
+		array[i] = dma_addr;
+		save_index++;
+	}
+
+	...
+
+	map_error_handling:
+
+	for (i = 0; i < save_index; i++) {
+
+		...
+
+		dma_unmap_single(dev, array[i], size, direction);
+	}
+
+Networking drivers must call dev_kfree_skb() to free the socket buffer
+and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
+(ndo_start_xmit).  This means that the socket buffer is just dropped in
+the failure case.
+
+SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
+fails in the queuecommand hook.  This means that the SCSI subsystem
+passes the command to the driver again later.
+
+Optimizing Unmap State Space Consumption
+========================================
+
+On many platforms, dma_unmap_{single,page}() is simply a nop.
+Therefore, keeping track of the mapping address and length is a waste
+of space.  Instead of filling your drivers up with ifdefs and the like
+to "work around" this (which would defeat the whole purpose of a
+portable API) the following facilities are provided.
+
+Actually, instead of describing the macros one by one, we'll
+transform some example code.
+
+1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
+   Example, before::
+
+	struct ring_state {
+		struct sk_buff *skb;
+		dma_addr_t mapping;
+		__u32 len;
+	};
+
+   after::
+
+	struct ring_state {
+		struct sk_buff *skb;
+		DEFINE_DMA_UNMAP_ADDR(mapping);
+		DEFINE_DMA_UNMAP_LEN(len);
+	};
+
+2) Use dma_unmap_{addr,len}_set() to set these values.
+   Example, before::
+
+	ringp->mapping = FOO;
+	ringp->len = BAR;
+
+   after::
+
+	dma_unmap_addr_set(ringp, mapping, FOO);
+	dma_unmap_len_set(ringp, len, BAR);
+
+3) Use dma_unmap_{addr,len}() to access these values.
+   Example, before::
+
+	dma_unmap_single(dev, ringp->mapping, ringp->len,
+			 DMA_FROM_DEVICE);
+
+   after::
+
+	dma_unmap_single(dev,
+			 dma_unmap_addr(ringp, mapping),
+			 dma_unmap_len(ringp, len),
+			 DMA_FROM_DEVICE);
+
+It really should be self-explanatory.  We treat the ADDR and LEN
+separately, because it is possible for an implementation to only
+need the address in order to perform the unmap operation.
+
+Platform Issues
+===============
+
+If you are just writing drivers for Linux and do not maintain
+an architecture port for the kernel, you can safely skip down
+to "Closing".
+
+1) Struct scatterlist requirements.
+
+   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
+   supports IOMMUs (including software IOMMU).
+
+2) ARCH_DMA_MINALIGN
+
+   Architectures must ensure that kmalloc'ed buffer is
+   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
+   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
+   the CPU cache is identical to data in main memory),
+   ARCH_DMA_MINALIGN must be set so that the memory allocator
+   makes sure that kmalloc'ed buffer doesn't share a cache line with
+   the others.  See arch/arm/include/asm/cache.h as an example.
+
+   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
+   constraints.  You don't need to worry about the architecture data
+   alignment constraints (e.g. the alignment constraints about 64-bit
+   objects).
+
+Closing
+=======
+
+This document, and the API itself, would not be in its current
+form without the feedback and suggestions from numerous individuals.
+
+We would like to specifically mention, in no particular order, the
+following people::
+
+	Russell King
+	Leo Dagum
+	Ralf Baechle
+	Grant Grundler
+	Jay Estabrook
+	Thomas Sailer
+	Andrea Arcangeli
+	Jens Axboe
+	David Mosberger-Tang
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
new file mode 100644
index 0000000000000..2d8d2fed73172
--- /dev/null
+++ b/Documentation/core-api/dma-api.rst
@@ -0,0 +1,745 @@
+============================================
+Dynamic DMA mapping using the generic device
+============================================
+
+:Author: James E.J. Bottomley
+
+This document describes the DMA API.  For a more gentle introduction
+to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
+
+This API is split into two pieces.  Part I describes the basic API.
+Part II describes extensions for supporting non-consistent memory
+machines.  Unless you know that your driver absolutely has to support
+non-consistent platforms (this is usually only legacy platforms) you
+should only use the API described in part I.
+
+Part I - dma_API
+----------------
+
+To get the dma_API, you must #include <linux/dma-mapping.h>.  This
+provides dma_addr_t and the interfaces described below.
+
+A dma_addr_t can hold any valid DMA address for the platform.  It can be
+given to a device to use as a DMA source or target.  A CPU cannot reference
+a dma_addr_t directly because there may be translation between its physical
+address space and the DMA address space.
+
+Part Ia - Using large DMA-coherent buffers
+------------------------------------------
+
+::
+
+	void *
+	dma_alloc_coherent(struct device *dev, size_t size,
+			   dma_addr_t *dma_handle, gfp_t flag)
+
+Consistent memory is memory for which a write by either the device or
+the processor can immediately be read by the processor or device
+without having to worry about caching effects.  (You may however need
+to make sure to flush the processor's write buffers before telling
+devices to read that memory.)
+
+This routine allocates a region of <size> bytes of consistent memory.
+
+It returns a pointer to the allocated region (in the processor's virtual
+address space) or NULL if the allocation failed.
+
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
+
+Note: consistent memory can be expensive on some platforms, and the
+minimum allocation length may be as big as a page, so you should
+consolidate your requests for consistent memory as much as possible.
+The simplest way to do that is to use the dma_pool calls (see below).
+
+The flag parameter (dma_alloc_coherent() only) allows the caller to
+specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
+implementation may choose to ignore flags that affect the location of
+the returned memory, like GFP_DMA).
+
+::
+
+	void
+	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
+			  dma_addr_t dma_handle)
+
+Free a region of consistent memory you previously allocated.  dev,
+size and dma_handle must all be the same as those passed into
+dma_alloc_coherent().  cpu_addr must be the virtual address returned by
+dma_alloc_coherent().
+
+Note that unlike their sibling allocation calls, these routines
+may only be called with IRQs enabled.
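+
+As a sketch of the pairing these rules imply (the priv structure and
+DESC_BYTES constant are hypothetical), a driver typically stores both
+returned values at allocation time and hands them back verbatim when
+freeing::
+
+	/* probe path */
+	priv->desc = dma_alloc_coherent(dev, DESC_BYTES, &priv->desc_dma,
+					GFP_KERNEL);
+	if (!priv->desc)
+		return -ENOMEM;
+
+	/* remove path: same dev, same size, both addresses, IRQs enabled */
+	dma_free_coherent(dev, DESC_BYTES, priv->desc, priv->desc_dma);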
+
+
+Part Ib - Using small DMA-coherent buffers
+------------------------------------------
+
+To get this part of the dma_API, you must #include <linux/dmapool.h>
+
+Many drivers need lots of small DMA-coherent memory regions for DMA
+descriptors or I/O buffers.  Rather than allocating in units of a page
+or more using dma_alloc_coherent(), you can use DMA pools.  These work
+much like a struct kmem_cache, except that they use the DMA-coherent allocator,
+not __get_free_pages().  Also, they understand common hardware constraints
+for alignment, like queue heads needing to be aligned on N-byte boundaries.
+
+
+::
+
+	struct dma_pool *
+	dma_pool_create(const char *name, struct device *dev,
+			size_t size, size_t align, size_t boundary);
+
+dma_pool_create() initializes a pool of DMA-coherent buffers
+for use with a given device.  It must be called in a context which
+can sleep.
+
+The "name" is for diagnostics (like a struct kmem_cache name); dev and size
+are like what you'd pass to dma_alloc_coherent().  The device's hardware
+alignment requirement for this type of data is "align" (which is expressed
+in bytes, and must be a power of two).  If your device has no boundary
+crossing restrictions, pass 0 for boundary; passing 4096 says memory
+allocated from this pool must not cross 4KByte boundaries.
+
+::
+
+	void *
+	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
+			dma_addr_t *handle)
+
+Wraps dma_pool_alloc() and also zeroes the returned memory if the
+allocation attempt succeeded.
+
+
+::
+
+	void *
+	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
+		       dma_addr_t *dma_handle);
+
+This allocates memory from the pool; the returned memory will meet the
+size and alignment requirements specified at creation time.  Pass
+GFP_ATOMIC to prevent blocking, or if it's permitted (not
+in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
+blocking.  Like dma_alloc_coherent(), this returns two values: an
+address usable by the CPU, and the DMA address usable by the pool's
+device.
+
+::
+
+	void
+	dma_pool_free(struct dma_pool *pool, void *vaddr,
+		      dma_addr_t addr);
+
+This puts memory back into the pool.  The pool is what was passed to
+dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
+were returned when that routine allocated the memory being freed.
+
+::
+
+	void
+	dma_pool_destroy(struct dma_pool *pool);
+
+dma_pool_destroy() frees the resources of the pool.  It must be
+called in a context which can sleep.  Make sure you've freed all allocated
+memory back to the pool before you destroy it.
+
+
+Part Ic - DMA addressing limitations
+------------------------------------
+
+::
+
+	int
+	dma_set_mask_and_coherent(struct device *dev, u64 mask)
+
+Checks to see if the mask is possible and updates the device
+streaming and coherent DMA mask parameters if it is.
+
+Returns: 0 if successful and a negative error if not.
+
+::
+
+	int
+	dma_set_mask(struct device *dev, u64 mask)
+
+Checks to see if the mask is possible and updates the device
+parameters if it is.
+
+Returns: 0 if successful and a negative error if not.
+
+::
+
+	int
+	dma_set_coherent_mask(struct device *dev, u64 mask)
+
+Checks to see if the mask is possible and updates the device
+parameters if it is.
+
+Returns: 0 if successful and a negative error if not.
+
+::
+
+	u64
+	dma_get_required_mask(struct device *dev)
+
+This API returns the mask that the platform requires to
+operate efficiently.  Usually this means the returned mask
+is the minimum required to cover all of memory.
Examining the
+required mask gives drivers with variable descriptor sizes the
+opportunity to use smaller descriptors as necessary.
+
+Requesting the required mask does not alter the current mask.  If you
+wish to take advantage of it, you should issue a dma_set_mask()
+call to set the mask to the value returned.
+
+::
+
+	size_t
+	dma_max_mapping_size(struct device *dev);
+
+Returns the maximum size of a mapping for the device.  The size parameter
+of the mapping functions like dma_map_single(), dma_map_page() and
+others should not be larger than the returned value.
+
+::
+
+	unsigned long
+	dma_get_merge_boundary(struct device *dev);
+
+Returns the DMA merge boundary.  If the device cannot merge any of the DMA
+address segments, the function returns 0.
+
+Part Id - Streaming DMA mappings
+--------------------------------
+
+::
+
+	dma_addr_t
+	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+		       enum dma_data_direction direction)
+
+Maps a piece of processor virtual memory so it can be accessed by the
+device and returns the DMA address of the memory.
+
+The direction argument may be converted freely by casting.
+However the dma_API uses a strongly typed enumerator for its
+direction:
+
+======================= =============================================
+DMA_NONE		no direction (used for debugging)
+DMA_TO_DEVICE		data is going from the memory to the device
+DMA_FROM_DEVICE		data is coming from the device to the memory
+DMA_BIDIRECTIONAL	direction isn't known
+======================= =============================================
+
+.. note::
+
+	Not all memory regions in a machine can be mapped by this API.
+	Further, contiguous kernel virtual space may not be contiguous in
+	physical memory.  Since this API does not provide any scatter/gather
+	capability, it will fail if the user tries to map a non-physically
+	contiguous piece of memory.  For this reason, memory to be mapped by
+	this API should be obtained from sources which guarantee it to be
+	physically contiguous (like kmalloc).
+
+	Further, the DMA address of the memory must be within the
+	dma_mask of the device (the dma_mask is a bit mask of the
+	addressable region for the device, i.e., if the DMA address of
+	the memory ANDed with the dma_mask is still equal to the DMA
+	address, then the device can perform DMA to the memory).  To
+	ensure that the memory allocated by kmalloc is within the dma_mask,
+	the driver may specify various platform-dependent flags to restrict
+	the DMA address range of the allocation (e.g., on x86, GFP_DMA
+	guarantees to be within the first 16MB of available DMA addresses,
+	as required by ISA devices).
+
+	Note also that the above constraints on physical contiguity and
+	dma_mask may not apply if the platform has an IOMMU (a device which
+	maps an I/O DMA address to a physical memory address).  However, to be
+	portable, device driver writers may *not* assume that such an IOMMU
+	exists.
+
+.. warning::
+
+	Memory coherency operates at a granularity called the cache
+	line width.  In order for memory mapped by this API to operate
+	correctly, the mapped region must begin exactly on a cache line
+	boundary and end exactly on one (to prevent two separately mapped
+	regions from sharing a single cache line).  Since the cache line size
+	may not be known at compile time, the API will not enforce this
+	requirement.
+
+Part Id - Streaming DMA mappings
+--------------------------------
+
+::
+
+	dma_addr_t
+	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+		       enum dma_data_direction direction)
+
+Maps a piece of processor virtual memory so it can be accessed by the
+device and returns the DMA address of the memory.
+
+The direction for both APIs (the dma_ and the legacy pci_ interfaces)
+may be converted freely by casting. However the dma_API uses a
+strongly typed enumerator for its direction:
+
+======================= =============================================
+DMA_NONE                no direction (used for debugging)
+DMA_TO_DEVICE           data is going from the memory to the device
+DMA_FROM_DEVICE         data is coming from the device to the memory
+DMA_BIDIRECTIONAL       direction isn't known
+======================= =============================================
+
+.. note::
+
+	Not all memory regions in a machine can be mapped by this API.
+	Further, contiguous kernel virtual space may not be contiguous in
+	physical memory. Since this API does not provide any scatter/gather
+	capability, it will fail if the user tries to map a non-physically
+	contiguous piece of memory. For this reason, memory to be mapped by
+	this API should be obtained from sources which guarantee it to be
+	physically contiguous (like kmalloc).
+
+	Further, the DMA address of the memory must be within the
+	dma_mask of the device (the dma_mask is a bit mask of the
+	addressable region for the device, i.e., if the DMA address of
+	the memory ANDed with the dma_mask is still equal to the DMA
+	address, then the device can perform DMA to the memory). To
+	ensure that the memory allocated by kmalloc is within the dma_mask,
+	the driver may specify various platform-dependent flags to restrict
+	the DMA address range of the allocation (e.g., on x86, GFP_DMA
+	guarantees to be within the first 16MB of available DMA addresses,
+	as required by ISA devices).
+
+	Note also that the above constraints on physical contiguity and
+	dma_mask may not apply if the platform has an IOMMU (a device which
+	maps an I/O DMA address to a physical memory address). However, to be
+	portable, device driver writers may *not* assume that such an IOMMU
+	exists.
+
+.. warning::
+
+	Memory coherency operates at a granularity called the cache
+	line width. In order for memory mapped by this API to operate
+	correctly, the mapped region must begin exactly on a cache line
+	boundary and end exactly on one (to prevent two separately mapped
+	regions from sharing a single cache line). Since the cache line size
+	may not be known at compile time, the API will not enforce this
+	requirement. Therefore, it is recommended that driver writers who
+	don't take special care to determine the cache line size at run time
+	only map virtual regions that begin and end on page boundaries (which
+	are guaranteed also to be cache line boundaries).
+
+	DMA_TO_DEVICE synchronisation must be done after the last modification
+	of the memory region by the software and before it is handed off to
+	the device. Once this primitive is used, memory covered by this
+	primitive should be treated as read-only by the device. If the device
+	may write to it at any point, it should be DMA_BIDIRECTIONAL (see
+	below).
+
+	DMA_FROM_DEVICE synchronisation must be done before the driver
+	accesses data that may be changed by the device. This memory should
+	be treated as read-only by the driver. If the driver needs to write
+	to it at any point, it should be DMA_BIDIRECTIONAL (see below).
+
+	DMA_BIDIRECTIONAL requires special handling: it means that the driver
+	isn't sure if the memory was modified before being handed off to the
+	device and also isn't sure if the device will also modify it. Thus,
+	you must always sync bidirectional memory twice: once before the
+	memory is handed off to the device (to make sure all memory changes
+	are flushed from the processor) and once before the data may be
+	accessed after being used by the device (to make sure any processor
+	cache lines are updated with data that the device may have changed).
+
+::
+
+	void
+	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
+			 enum dma_data_direction direction)
+
+Unmaps the region previously mapped. All the parameters passed in
+must be identical to those passed in (and returned) by the mapping
+API.
+
+::
+
+	dma_addr_t
+	dma_map_page(struct device *dev, struct page *page,
+		     unsigned long offset, size_t size,
+		     enum dma_data_direction direction)
+
+	void
+	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
+		       enum dma_data_direction direction)
+
+API for mapping and unmapping for pages. All the notes and warnings
+for the other mapping APIs apply here. Also, although the <offset>
+and <size> parameters are provided to do partial page mapping, it is
+recommended that you never use these unless you really know what the
+cache width is.
+
+::
+
+	dma_addr_t
+	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
+			 enum dma_data_direction dir, unsigned long attrs)
+
+	void
+	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
+			   enum dma_data_direction dir, unsigned long attrs)
+
+API for mapping and unmapping for MMIO resources. All the notes and
+warnings for the other mapping APIs apply here. The API should only be
+used to map device MMIO resources; mapping of RAM is not permitted.
+
+::
+
+	int
+	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+
+In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
+will fail to create a mapping. A driver can check for these errors by testing
+the returned DMA address with dma_mapping_error(). A non-zero return value
+means the mapping could not be created and the driver should take appropriate
+action (e.g. reduce current DMA mapping usage or delay and try again later).
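+
+Putting the pieces together, here is a minimal transmit-path sketch.
+The buffer origin, its size "buf_size" and the error policy are this
+example's assumptions; "dev" is the usual struct device pointer::
+
+	void *buf = kmalloc(buf_size, GFP_KERNEL); /* physically contiguous */
+	dma_addr_t dma_handle;
+
+	if (!buf)
+		return -ENOMEM;
+	/* ... fill buf with the data for the device ... */
+
+	dma_handle = dma_map_single(dev, buf, buf_size, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, dma_handle)) {
+		kfree(buf);
+		return -EIO;	/* or reduce usage / retry later */
+	}
+
+	/* ... hand dma_handle to the device and wait for completion ... */
+
+	dma_unmap_single(dev, dma_handle, buf_size, DMA_TO_DEVICE);
+	kfree(buf);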
+
+::
+
+	int
+	dma_map_sg(struct device *dev, struct scatterlist *sg,
+		   int nents, enum dma_data_direction direction)
+
+Returns: the number of DMA address segments mapped (this may be shorter
+than <nents> passed in if some elements of the scatter/gather list are
+physically or virtually adjacent and an IOMMU maps them with a single
+entry).
+
+Please note that the sg cannot be mapped again if it has been mapped once.
+The mapping process is allowed to destroy information in the sg.
+
+As with the other mapping interfaces, dma_map_sg() can fail. When it
+does, 0 is returned and a driver must take appropriate action. It is
+critical that the driver do something: in the case of a block driver,
+aborting the request or even oopsing is better than doing nothing and
+corrupting the filesystem.
+
+With scatterlists, you use the resulting mapping like this::
+
+	int i, count = dma_map_sg(dev, sglist, nents, direction);
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, count, i) {
+		hw_address[i] = sg_dma_address(sg);
+		hw_len[i] = sg_dma_len(sg);
+	}
+
+where nents is the number of entries in the sglist.
+
+The implementation is free to merge several consecutive sglist entries
+into one (e.g. with an IOMMU, or if several pages just happen to be
+physically contiguous) and returns the actual number of sg entries it
+mapped them to. On failure, 0 is returned.
+
+Then you should loop count times (note: this can be less than nents times)
+and use sg_dma_address() and sg_dma_len() macros where you previously
+accessed sg->address and sg->length as shown above.
+
+::
+
+	void
+	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+		     int nents, enum dma_data_direction direction)
+
+Unmap the previously mapped scatter/gather list. All the parameters
+must be the same as those passed in to the scatter/gather mapping
+API.
+
+Note: <nents> must be the number you passed in, *not* the number of
+DMA address entries returned.
+
+::
+
+	void
+	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
+				size_t size,
+				enum dma_data_direction direction)
+
+	void
+	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
+				   size_t size,
+				   enum dma_data_direction direction)
+
+	void
+	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+			    int nents,
+			    enum dma_data_direction direction)
+
+	void
+	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+			       int nents,
+			       enum dma_data_direction direction)
+
+Synchronise a single contiguous or scatter/gather mapping for the CPU
+and device. With the sync_sg API, all the parameters must be the same
+as those passed into the scatter/gather mapping API. With the sync_single
+API, you can use dma_handle and size parameters that aren't identical to
+those passed into the single mapping API to do a partial sync.
+
+
+.. note::
+
+	You must do this:
+
+	- Before reading values that have been written by DMA from the device
+	  (use the DMA_FROM_DEVICE direction)
+	- After writing values that will be written to the device using DMA
+	  (use the DMA_TO_DEVICE direction)
+	- Before *and* after handing memory to the device if the memory is
+	  DMA_BIDIRECTIONAL
+
+See also dma_map_single().
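+
+For a receive buffer that stays mapped while the CPU inspects it
+between transfers, the pairing looks like this (dma_handle and size
+are assumed to come from an earlier dma_map_single() call)::
+
+	/* the device has finished a transfer into the buffer */
+	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);
+	/* ... the CPU may now safely read the received data ... */
+
+	dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);
+	/* ... the device may now DMA into the buffer again ... */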
+
+::
+
+	dma_addr_t
+	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
+			     enum dma_data_direction dir,
+			     unsigned long attrs)
+
+	void
+	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
+			       size_t size, enum dma_data_direction dir,
+			       unsigned long attrs)
+
+	int
+	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
+			 int nents, enum dma_data_direction dir,
+			 unsigned long attrs)
+
+	void
+	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
+			   int nents, enum dma_data_direction dir,
+			   unsigned long attrs)
+
+The four functions above are just like the counterpart functions
+without the _attrs suffixes, except that they pass an optional
+dma_attrs.
+
+The interpretation of DMA attributes is architecture-specific, and
+each attribute should be documented in
+Documentation/core-api/dma-attributes.rst.
+
+If attrs is 0, the semantics of each of these functions
+are identical to those of the corresponding function
+without the _attrs suffix. As a result dma_map_single_attrs()
+can generally replace dma_map_single(), etc.
+
+As an example of the use of the ``*_attrs`` functions, here's how
+you could pass an attribute DMA_ATTR_FOO when mapping memory
+for DMA::
+
+	#include <linux/dma-mapping.h>
+	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
+	 * documented in Documentation/core-api/dma-attributes.rst */
+	...
+
+	unsigned long attr = 0;
+
+	attr |= DMA_ATTR_FOO;
+	....
+	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
+	....
+
+Architectures that care about DMA_ATTR_FOO would check for its
+presence in their implementations of the mapping and unmapping
+routines, e.g.::
+
+	void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
+				     int nents, enum dma_data_direction dir,
+				     unsigned long attrs)
+	{
+		....
+		if (attrs & DMA_ATTR_FOO)
+			/* twizzle the frobnozzle */
+		....
+	}
+
+
+Part II - Advanced dma usage
+----------------------------
+
+Warning: These pieces of the DMA API should not be used in the
+majority of cases, since they cater for unlikely corner cases that
+don't belong in usual drivers.
+
+If you don't understand how cache line coherency works between a
+processor and an I/O device, you should not be using this part of the
+API at all.
+
+::
+
+	void *
+	dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
+			gfp_t flag, unsigned long attrs)
+
+Identical to dma_alloc_coherent() except that when the
+DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
+platform will choose to return either consistent or non-consistent memory
+as it sees fit. By using this API, you are guaranteeing to the platform
+that you have all the correct and necessary sync points for this memory
+in the driver should it choose to return non-consistent memory.
+
+Note: where the platform can return consistent memory, it will
+guarantee that the sync points become nops.
+
+Warning: Handling non-consistent memory is a real pain. You should
+only use this API if you positively know your driver will be
+required to work on one of the rare (usually non-PCI) architectures
+that simply cannot make consistent memory.
+
+::
+
+	void
+	dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
+		       dma_addr_t dma_handle, unsigned long attrs)
+
+Free memory allocated by dma_alloc_attrs(). All common
+parameters must be identical to those passed to dma_free_coherent(),
+and the attrs argument must be identical to the attrs passed to
+dma_alloc_attrs().
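+
+An illustrative sketch of the allocate/free pairing ("dev" and "size"
+are assumed; whether non-consistent memory is actually returned is
+entirely up to the platform)::
+
+	void *vaddr;
+	dma_addr_t dma_handle;
+
+	vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
+				DMA_ATTR_NON_CONSISTENT);
+	if (!vaddr)
+		return -ENOMEM;
+
+	/* If non-consistent memory was returned, the driver owns the
+	 * sync points (see dma_cache_sync() below). */
+
+	dma_free_attrs(dev, size, vaddr, dma_handle,
+		       DMA_ATTR_NON_CONSISTENT);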
+
+::
+
+	int
+	dma_get_cache_alignment(void)
+
+Returns the processor cache alignment. This is the absolute minimum
+alignment *and* width that you must observe when either mapping
+memory or doing partial flushes.
+
+.. note::
+
+	This API may return a number *larger* than the actual cache
+	line, but it will guarantee that one or more cache lines fit exactly
+	into the width returned by this call. It will also always be a power
+	of two for easy alignment.
+
+::
+
+	void
+	dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+		       enum dma_data_direction direction)
+
+Do a partial sync of memory that was allocated by dma_alloc_attrs() with
+the DMA_ATTR_NON_CONSISTENT flag, starting at virtual address vaddr and
+continuing on for size. Again, you *must* observe the cache line
+boundaries when doing this.
+
+::
+
+	int
+	dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+				    dma_addr_t device_addr, size_t size);
+
+Declare a region of memory to be handed out by dma_alloc_coherent() when
+it's asked for coherent memory for this device.
+
+phys_addr is the CPU physical address to which the memory is currently
+assigned (this will be ioremapped so the CPU can access the region).
+
+device_addr is the DMA address the device needs to be programmed
+with to actually address this memory (this will be handed out as the
+dma_addr_t in dma_alloc_coherent()).
+
+size is the size of the area (must be a multiple of PAGE_SIZE).
+
+As a simplification for the platforms, only *one* such region of
+memory may be declared per device.
+
+For reasons of efficiency, most platforms choose to track the declared
+region only at the granularity of a page. For smaller allocations,
+you should use the dma_pool API.
+
+Part III - Debugging driver use of the DMA-API
+----------------------------------------------
+
+The DMA-API as described above has some constraints. DMA addresses must,
+for example, be released with the corresponding function and with the
+same size. With the advent of hardware IOMMUs it becomes more and more
+important that drivers do not violate those constraints. In the worst
+case such a violation can result in data corruption up to destroyed
+filesystems.
+
+To debug drivers and find bugs in the usage of the DMA-API, checking
+code can be compiled into the kernel which will tell the developer about
+those violations. If your architecture supports it, you can select the
+"Enable debugging of DMA-API usage" option in your kernel configuration.
+Enabling this option has a performance impact. Do not enable it in
+production kernels.
+
+If you boot the resulting kernel, it will contain code which does some
+bookkeeping about what DMA memory was allocated for which device. If
+this code detects an error it prints a warning message with some details
+into your kernel log.
+An example warning message may look like this::
+
+	WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
+		check_unmap+0x203/0x490()
+	Hardware name:
+	forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with
+		wrong function [device address=0x00000000640444be] [size=66 bytes]
+		[mapped as single] [unmapped as page]
+	Modules linked in: nfsd exportfs bridge stp llc r8169
+	Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
+	Call Trace:
+	[] warn_slowpath+0xf2/0x130
+	[] _spin_unlock+0x10/0x30
+	[] usb_hcd_link_urb_to_ep+0x75/0xc0
+	[] _spin_unlock_irqrestore+0x12/0x40
+	[] ohci_urb_enqueue+0x19f/0x7c0
+	[] queue_work+0x56/0x60
+	[] enqueue_task_fair+0x20/0x50
+	[] usb_hcd_submit_urb+0x379/0xbc0
+	[] cpumask_next_and+0x23/0x40
+	[] find_busiest_group+0x207/0x8a0
+	[] _spin_lock_irqsave+0x1f/0x50
+	[] check_unmap+0x203/0x490
+	[] debug_dma_unmap_page+0x49/0x50
+	[] nv_tx_done_optimized+0xc6/0x2c0
+	[] nv_nic_irq_optimized+0x73/0x2b0
+	[] handle_IRQ_event+0x34/0x70
+	[] handle_edge_irq+0xc9/0x150
+	[] do_IRQ+0xcb/0x1c0
+	[] ret_from_intr+0x0/0xa
+	<4>---[ end trace f6435a98e2a38c0e ]---
+
+The driver developer can find the driver and the device, including a
+stacktrace of the DMA-API call which caused this warning.
+
+By default only the first error will result in a warning message. All
+other errors will only be counted silently. This limitation exists to
+prevent the code from flooding your kernel log. To support debugging a
+device driver, this can be disabled via debugfs. See the debugfs
+interface documentation below for details.
+
+The debugfs directory for the DMA-API debugging code is called dma-api/. In
+this directory the following files can currently be found:
+
+=============================== ===============================================
+dma-api/all_errors              This file contains a numeric value. If this
+                                value is not equal to zero the debugging code
+                                will print a warning for every error it finds
+                                into the kernel log. Be careful with this
+                                option, as it can easily flood your logs.
+
+dma-api/disabled                This read-only file contains the character 'Y'
+                                if the debugging code is disabled. This can
+                                happen when it runs out of memory or if it was
+                                disabled at boot time.
+
+dma-api/dump                    This read-only file contains current DMA
+                                mappings.
+
+dma-api/error_count             This file is read-only and shows the total
+                                number of errors found.
+
+dma-api/num_errors              The number in this file shows how many
+                                warnings will be printed to the kernel log
+                                before it stops. This number is initialized to
+                                one at system boot and can be set by writing
+                                into this file.
+
+dma-api/min_free_entries        This read-only file can be read to get the
+                                minimum number of free dma_debug_entries the
+                                allocator has ever seen. If this value goes
+                                down to zero the code will attempt to increase
+                                nr_total_entries to compensate.
+
+dma-api/num_free_entries        The current number of free dma_debug_entries
+                                in the allocator.
+
+dma-api/nr_total_entries        The total number of dma_debug_entries in the
+                                allocator, both free and used.
+
+dma-api/driver_filter           You can write the name of a driver into this
+                                file to limit the debug output to requests
+                                from that particular driver. Write an empty
+                                string to that file to disable the filter and
+                                see all errors again.
+=============================== ===============================================
+
+If you have this code compiled into your kernel it will be enabled by default.
+
+If you want to boot without the bookkeeping anyway you can provide
+'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
+Note that you cannot enable it again at runtime. You have to reboot to do
+so.
+
+If you want to see debug messages only for a specific device driver you can
+specify the dma_debug_driver=<drivername> parameter. This will enable the
+driver filter at boot time. The debug code will only print errors for that
+driver afterwards. This filter can be disabled or changed later using debugfs.
+
+When the code disables itself at runtime this is most likely because it ran
+out of dma_debug_entries and was unable to allocate more on-demand. 65536
+entries are preallocated at boot - if this is too low for you, boot with
+'dma_debug_entries=<your_desired_number>' to overwrite the default. Note
+that the code allocates entries in batches, so the exact number of
+preallocated entries may be greater than the actual number requested. The
+code will print to the kernel log each time it has dynamically allocated
+as many entries as were initially preallocated. This is to indicate that a
+larger preallocation size may be appropriate, or if it happens continually
+that a driver may be leaking mappings.
+
+::
+
+	void
+	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
+
+The dma-debug interface debug_dma_mapping_error() helps debug drivers
+that fail to check for DMA mapping errors on addresses returned by the
+dma_map_single() and dma_map_page() interfaces. This interface clears a
+flag set by debug_dma_map_page() to indicate that dma_mapping_error()
+has been called by the driver. When the driver does the unmap,
+debug_dma_unmap() checks the flag and, if it is still set, prints a
+warning message that includes the call trace leading up to the unmap.
+This interface can be called from dma_mapping_error() routines to enable
+DMA mapping error check debugging.
diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
new file mode 100644
index 0000000000000..29dcbe8826e85
--- /dev/null
+++ b/Documentation/core-api/dma-attributes.rst
@@ -0,0 +1,140 @@
+==============
+DMA attributes
+==============
+
+This document describes the semantics of the DMA attributes that are
+defined in linux/dma-mapping.h.
+
+DMA_ATTR_WEAK_ORDERING
+----------------------
+
+DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
+may be weakly ordered, that is, reads and writes may pass each other.
+
+Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
+those that do not will simply ignore the attribute and exhibit default
+behavior.
+
+DMA_ATTR_WRITE_COMBINE
+----------------------
+
+DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
+buffered to improve performance.
+
+Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
+those that do not will simply ignore the attribute and exhibit default
+behavior.
+
+DMA_ATTR_NON_CONSISTENT
+-----------------------
+
+DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
+consistent or non-consistent memory as it sees fit. By using this API,
+you are guaranteeing to the platform that you have all the correct and
+necessary sync points for this memory in the driver.
+
+DMA_ATTR_NO_KERNEL_MAPPING
+--------------------------
+
+DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
+virtual mapping for the allocated buffer.
+On some architectures creating
+such a mapping is a non-trivial task and consumes very limited resources
+(like kernel virtual address space or dma consistent address space).
+Buffers allocated with this attribute can only be passed to user space
+by calling dma_mmap_attrs(). By using this API, you are guaranteeing
+that you won't dereference the pointer returned by dma_alloc_attrs(). You
+can treat it as a cookie that must be passed to dma_mmap_attrs() and
+dma_free_attrs(). Make sure that both of these also get this attribute
+set on each call.
+
+Since it is optional for platforms to implement
+DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
+attribute and exhibit default behavior.
+
+DMA_ATTR_SKIP_CPU_SYNC
+----------------------
+
+By default the dma_map_{single,page,sg} family of functions transfers a
+given buffer from the CPU domain to the device domain. Some advanced use
+cases might require sharing a buffer between more than one device. This
+requires having a mapping created separately for each device and is
+usually performed by calling the dma_map_{single,page,sg} function more
+than once for the given buffer, with the device pointer of each device
+taking part in the buffer sharing. The first call transfers the buffer
+from the 'CPU' domain to the 'device' domain, which synchronizes the CPU
+cache for the given region (usually it means that the cache has been
+flushed or invalidated, depending on the DMA direction). However,
+subsequent calls to dma_map_{single,page,sg}() for other devices will
+perform exactly the same synchronization operation on the CPU cache. CPU
+cache synchronization might be a time-consuming operation, especially if
+the buffers are large, so it is highly recommended to avoid it if
+possible. DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip
+synchronization of the CPU cache for the given buffer, assuming that it
+has already been transferred to the 'device' domain. This attribute can
+also be used with the dma_unmap_{single,page,sg} family of functions to
+force the buffer to stay in the device domain after releasing a mapping
+for it. Use this attribute with care!
+
+DMA_ATTR_FORCE_CONTIGUOUS
+-------------------------
+
+By default the DMA-mapping subsystem is allowed to assemble the buffer
+allocated by the dma_alloc_attrs() function from individual pages if it
+can be mapped as a contiguous chunk into the device's DMA address space.
+By specifying this attribute the allocated buffer is forced to be
+contiguous in physical memory as well.
+
+DMA_ATTR_ALLOC_SINGLE_PAGES
+---------------------------
+
+This is a hint to the DMA-mapping subsystem that it's probably not worth
+the time to try to allocate memory in a way that gives better TLB
+efficiency (AKA it's not worth trying to build the mapping out of larger
+pages). You might want to specify this if:
+
+- You know that the accesses to this memory won't thrash the TLB.
+  You might know that the accesses are likely to be sequential or
+  that they aren't sequential but it's unlikely you'll ping-pong
+  between many addresses that are likely to be in different physical
+  pages.
+- You know that the penalty of TLB misses while accessing the
+  memory will be small enough to be inconsequential. If you are
+  doing a heavy operation like decryption or decompression this
+  might be the case.
+- You know that the DMA mapping is fairly transitory. If you expect
+  the mapping to have a short lifetime then it may be worth it to
+  optimize allocation (avoid coming up with large pages) instead of
+  getting the slight performance win of larger pages.
+
+Setting this hint doesn't guarantee that you won't get huge pages, but it
+means that we won't try quite as hard to get them.
+
+.. note:: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
+	  though ARM64 patches will likely be posted soon.
+
+DMA_ATTR_NO_WARN
+----------------
+
+This tells the DMA-mapping subsystem to suppress allocation failure reports
+(similarly to __GFP_NOWARN).
+
+On some architectures allocation failures are reported with error messages
+to the system logs. Although this can help to identify and debug problems,
+drivers which handle failures (e.g., by retrying later) have no need for
+them, and depending on the implementation of the retry mechanism such
+messages can actually flood the system logs with reports that indicate no
+real problem at all.
+
+So, this provides a way for drivers to avoid those error messages on calls
+where allocation failures are not a problem, and shouldn't bother the logs.
+
+.. note:: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.
+
+DMA_ATTR_PRIVILEGED
+-------------------
+
+Some advanced peripherals such as remote processors and GPUs perform
+accesses to DMA buffers in both privileged "supervisor" and unprivileged
+"user" modes. This attribute is used to indicate to the DMA-mapping
+subsystem that the buffer is fully accessible at the elevated privilege
+level (and ideally inaccessible or at least read-only at the
+lesser-privileged levels).
diff --git a/Documentation/core-api/dma-isa-lpc.rst b/Documentation/core-api/dma-isa-lpc.rst
new file mode 100644
index 0000000000000..b1ec7b16c21ff
--- /dev/null
+++ b/Documentation/core-api/dma-isa-lpc.rst
@@ -0,0 +1,152 @@
+============================
+DMA with ISA and LPC devices
+============================
+
+:Author: Pierre Ossman
+
+This document describes how to do DMA transfers using the old ISA DMA
+controller. Even though ISA is more or less dead today, the LPC bus
+uses the same DMA system so it will be around for quite some time.
+
+Headers and dependencies
+------------------------
+
+To do ISA style DMA you need to include two headers::
+
+	#include <linux/dma-mapping.h>
+	#include <asm/dma.h>
+
+The first is the generic DMA API used to convert virtual addresses to
+bus addresses (see Documentation/core-api/dma-api.rst for details).
+
+The second contains the routines specific to ISA DMA transfers. Since
+this is not present on all platforms, make sure you construct your
+Kconfig to be dependent on ISA_DMA_API (not ISA) so that nobody tries
+to build your driver on unsupported platforms.
+
+Buffer allocation
+-----------------
+
+The ISA DMA controller has some very strict requirements on which
+memory it can access, so extra care must be taken when allocating
+buffers.
+
+(You usually need a special buffer for DMA transfers instead of
+transferring directly to and from your normal data structures.)
+
+The DMA-able address space is the lowest 16 MB of _physical_ memory.
+Also the transfer block may not cross page boundaries (which are 64
+or 128 KiB depending on which channel you use).
+
+In order to allocate a piece of memory that satisfies all these
+requirements you pass the flag GFP_DMA to kmalloc.
+
+Unfortunately the memory available for ISA DMA is scarce, so unless you
+allocate the memory during boot-up it's a good idea to also pass
+__GFP_RETRY_MAYFAIL and __GFP_NOWARN to make the allocator try a bit harder.
+
+(This scarcity also means that you should allocate the buffer as
+early as possible and not release it until the driver is unloaded.)
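+
+For example (the 4 KiB buffer size and the error handling policy here
+are this example's choices, not requirements)::
+
+	/* a bounce buffer in the ISA DMA zone, allocated at probe time */
+	void *buf = kmalloc(4096,
+			    GFP_DMA | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+	if (!buf)
+		return -ENOMEM;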
+
+Address translation
+-------------------
+
+To translate the virtual address to a bus address, use the normal DMA
+API. Do _not_ use isa_virt_to_bus() even though it does the same
+thing. The reason for this is that the function isa_virt_to_bus()
+will require a Kconfig dependency on ISA, not just ISA_DMA_API, which
+is really all you need. Remember that even though the DMA controller
+has its origins in ISA it is used elsewhere.
+
+Note: x86_64 had a broken DMA API when it came to ISA but has since
+been fixed. If your arch has problems then fix the DMA API instead of
+reverting to the ISA functions.
+
+Channels
+--------
+
+A normal ISA DMA controller has 8 channels. The lower four are for
+8-bit transfers and the upper four are for 16-bit transfers.
+
+(Actually the DMA controller is really two separate controllers, where
+channel 4 is used to cascade the first controller (channels 0-3) into
+the second. This means that of the four 16-bit channels, only three
+are usable.)
+
+You allocate these in a similar fashion as all basic resources::
+
+	extern int request_dma(unsigned int dmanr, const char *device_id);
+	extern void free_dma(unsigned int dmanr);
+
+The ability to use 16-bit or 8-bit transfers is _not_ up to you as a
+driver author but depends on what the hardware supports. Check your
+specs or test different channels.
+
+Transfer data
+-------------
+
+Now for the good stuff, the actual DMA transfer. :)
+
+Before you use any ISA DMA routines you need to claim the DMA lock
+using claim_dma_lock(). The reason is that some DMA operations are
+not atomic, so only one driver may fiddle with the registers at a
+time.
+
+The first time you use the DMA controller you should call
+clear_dma_ff(). This clears an internal register in the DMA
+controller that is used for the non-atomic operations. As long as you
+(and everyone else) uses the locking functions then you only need to
+reset this once.
+
+Next, you tell the controller in which direction you intend to do the
+transfer using set_dma_mode(). Currently you have the options
+DMA_MODE_READ and DMA_MODE_WRITE.
+
+Set the address from where the transfer should start (this needs to
+be 16-bit aligned for 16-bit transfers) and how many bytes to
+transfer. Note that it's _bytes_. The DMA routines will do all the
+required translation to values that the DMA controller understands.
+
+The final step is enabling the DMA channel and releasing the DMA
+lock.
+
+Once the DMA transfer is finished (or timed out) you should disable
+the channel again. You should also check get_dma_residue() to make
+sure that all data has been transferred.
+
+Example::
+
+	unsigned long flags;
+	int residue;
+
+	flags = claim_dma_lock();
+
+	clear_dma_ff(channel);
+
+	set_dma_mode(channel, DMA_MODE_WRITE);
+	set_dma_addr(channel, phys_addr);
+	set_dma_count(channel, num_bytes);
+
+	enable_dma(channel);
+
+	release_dma_lock(flags);
+
+	while (!device_done());
+
+	flags = claim_dma_lock();
+
+	disable_dma(channel);
+
+	residue = get_dma_residue(channel);
+	if (residue != 0)
+		printk(KERN_ERR "driver: Incomplete DMA transfer!"
+		       " %d bytes left!\n", residue);
+
+	release_dma_lock(flags);
+
+Suspend/resume
+--------------
+
+It is the driver's responsibility to make sure that the machine isn't
+suspended while a DMA transfer is in progress. Also, all DMA settings
+are lost when the system suspends, so if your driver relies on the DMA
+controller being in a certain state then you have to restore these
+registers upon resume.
+
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index b29c4a07beda8..c00aef8433415 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -80,6 +80,10 @@ more memory-management documentation in :doc:`/vm/index`.
    :maxdepth: 1
 
    memory-allocation
+   dma-api
+   dma-api-howto
+   dma-attributes
+   dma-isa-lpc
    mm-api
    genalloc
    pin_user_pages