DMA attributes
==============

This document describes the semantics of the DMA attributes that are
defined in linux/dma-mapping.h.

DMA_ATTR_WRITE_BARRIER
----------------------

DMA_ATTR_WRITE_BARRIER is a (write) barrier attribute for DMA.  DMA
to a memory region with the DMA_ATTR_WRITE_BARRIER attribute forces
all pending DMA writes to complete, and thus provides a mechanism to
strictly order DMA from a device across all intervening busses and
bridges.  This barrier is not specific to a particular type of
interconnect, it applies to the system as a whole, and so its
implementation must account for the idiosyncrasies of the system all
the way from the DMA device to memory.

As an example of a situation where DMA_ATTR_WRITE_BARRIER would be
useful, suppose that a device does a DMA write to indicate that data is
ready and available in memory.  The DMA of the "completion indication"
could race with data DMA.  Mapping the memory used for completion
indications with DMA_ATTR_WRITE_BARRIER would prevent the race.

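A minimal sketch of that example ('dev' and the completion buffer
'status' are hypothetical driver state, and whether the barrier is
actually implemented is platform-specific):

  dma_addr_t status_handle;

  /* DMA writes to the status buffer must not overtake the data
   * writes that precede them, so map it with the barrier attribute.
   */
  status_handle = dma_map_single_attrs(dev, status, sizeof(*status),
                                       DMA_FROM_DEVICE,
                                       DMA_ATTR_WRITE_BARRIER);
  if (dma_mapping_error(dev, status_handle))
          return -ENOMEM;
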
DMA_ATTR_WEAK_ORDERING
----------------------

DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
may be weakly ordered, that is, reads and writes may pass each other.

Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
those that do not will simply ignore the attribute and exhibit default
behavior.

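For illustration only (a sketch; 'dev', 'ring' and 'ring_size' are
hypothetical, and the attribute may have no effect at all on a given
platform), a driver that does not rely on the device observing
accesses in order might map a descriptor ring as:

  dma_addr_t ring_handle;

  /* The device's reads and writes to this ring may pass each other;
   * the driver orders the hand-off itself (e.g. a doorbell write).
   */
  ring_handle = dma_map_single_attrs(dev, ring, ring_size,
                                     DMA_BIDIRECTIONAL,
                                     DMA_ATTR_WEAK_ORDERING);
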
DMA_ATTR_WRITE_COMBINE
----------------------

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
buffered to improve performance.

Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
those that do not will simply ignore the attribute and exhibit default
behavior.

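A typical candidate is memory the CPU only ever writes sequentially,
such as a frame buffer.  A sketch ('dev', 'fb_size' and 'fb_handle'
are hypothetical):

  void *fb;

  /* Sequential CPU writes may be buffered and combined; the CPU
   * never reads this memory back, so the weaker semantics are safe.
   */
  fb = dma_alloc_attrs(dev, fb_size, &fb_handle, GFP_KERNEL,
                       DMA_ATTR_WRITE_COMBINE);
  if (!fb)
          return -ENOMEM;
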
DMA_ATTR_NON_CONSISTENT
-----------------------

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
consistent or non-consistent memory as it sees fit.  By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.

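A sketch of what that guarantee means in practice (assuming
dma_cache_sync() as the sync point, as with dma_alloc_noncoherent();
'dev', 'size' and the descriptor-filling helper are hypothetical):

  void *vaddr;
  dma_addr_t handle;

  vaddr = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                          DMA_ATTR_NON_CONSISTENT);
  if (!vaddr)
          return -ENOMEM;

  /* The CPU fills the buffer... */
  fill_descriptors(vaddr);              /* hypothetical helper */

  /* ...and the driver itself provides the sync point before the
   * device is told to look at the memory.
   */
  dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);
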
DMA_ATTR_NO_KERNEL_MAPPING
--------------------------

DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
virtual mapping for the allocated buffer.  On some architectures creating
such a mapping is a non-trivial task and consumes very limited resources
(like kernel virtual address space or DMA consistent address space).
Buffers allocated with this attribute can only be passed to user space
by calling dma_mmap_attrs().  By using this API, you are guaranteeing
that you won't dereference the pointer returned by dma_alloc_attrs().  You
can treat it as a cookie that must be passed to dma_mmap_attrs() and
dma_free_attrs().  Make sure that both of these also get this attribute
set on each call.

Since it is optional for platforms to implement
DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
attribute and exhibit default behavior.

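A sketch of that lifecycle ('dev', 'size' and 'vma' are assumed driver
context, with error handling trimmed):

  void *cookie;
  dma_addr_t handle;
  int ret;

  cookie = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                           DMA_ATTR_NO_KERNEL_MAPPING);
  if (!cookie)
          return -ENOMEM;

  /* In the driver's mmap handler: never dereference 'cookie',
   * only pass it back to the API, with the same attribute set.
   */
  ret = dma_mmap_attrs(dev, vma, cookie, handle, size,
                       DMA_ATTR_NO_KERNEL_MAPPING);

  /* On teardown, again with the same attribute set. */
  dma_free_attrs(dev, size, cookie, handle,
                 DMA_ATTR_NO_KERNEL_MAPPING);
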
DMA_ATTR_SKIP_CPU_SYNC
----------------------

By default the dma_map_{single,page,sg} family of functions transfers a
given buffer from the CPU domain to the device domain.  Some advanced
use cases might require sharing a buffer between more than one device.
This requires having a mapping created separately for each device and
is usually performed by calling the dma_map_{single,page,sg} function
more than once for the given buffer, with a device pointer to each
device taking part in the buffer sharing.  The first call transfers the
buffer from the 'CPU' domain to the 'device' domain, which synchronizes
CPU caches for the given region (usually it means that the cache has
been flushed or invalidated, depending on the DMA direction).  However,
subsequent calls to dma_map_{single,page,sg}() for other devices will
perform exactly the same synchronization operation on the CPU cache.
CPU cache synchronization might be a time-consuming operation,
especially if the buffers are large, so it is highly recommended to
avoid it if possible.

DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
the CPU cache for the given buffer, assuming that it has already been
transferred to the 'device' domain.  This attribute can also be used
with the dma_unmap_{single,page,sg} family of functions to force the
buffer to stay in the device domain after releasing a mapping for it.
Use this attribute with care!

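For example (a sketch; 'dev_a' and 'dev_b' are the two devices sharing
the hypothetical buffer 'buf'):

  dma_addr_t addr_a, addr_b;

  /* The first mapping moves the buffer to the 'device' domain and
   * performs the CPU cache synchronization.
   */
  addr_a = dma_map_single_attrs(dev_a, buf, size, DMA_TO_DEVICE, 0);

  /* The cache work is already done, so skip the redundant sync
   * when creating the second device's mapping.
   */
  addr_b = dma_map_single_attrs(dev_b, buf, size, DMA_TO_DEVICE,
                                DMA_ATTR_SKIP_CPU_SYNC);
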
DMA_ATTR_FORCE_CONTIGUOUS
-------------------------

By default the DMA-mapping subsystem is allowed to assemble the buffer
allocated by the dma_alloc_attrs() function from individual pages, as
long as it can be mapped as a contiguous chunk into the device's DMA
address space.  By specifying this attribute the allocated buffer is
forced to be contiguous in physical memory as well.

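For example (a hypothetical scenario: the buffer is also handed to an
agent that cannot scatter-gather, so it must be physically contiguous;
'dev', 'size' and 'handle' are assumed driver state):

  void *vaddr;

  /* Contiguous in physical memory, not merely in the device's
   * DMA address space.
   */
  vaddr = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                          DMA_ATTR_FORCE_CONTIGUOUS);
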
DMA_ATTR_ALLOC_SINGLE_PAGES
---------------------------

This is a hint to the DMA-mapping subsystem that it's probably not worth
the time to try to allocate memory in a way that gives better TLB
efficiency (AKA it's not worth trying to build the mapping out of larger
pages).  You might want to specify this if:

- You know that the accesses to this memory won't thrash the TLB.
  You might know that the accesses are likely to be sequential, or
  that they aren't sequential but it's unlikely you'll ping-pong
  between many addresses that are likely to be in different physical
  pages.

- You know that the penalty of TLB misses while accessing the
  memory will be small enough to be inconsequential.  If you are
  doing a heavy operation like decryption or decompression this
  might be the case.

- You know that the DMA mapping is fairly transitory.  If you expect
  the mapping to have a short lifetime then it may be worth it to
  optimize allocation (avoid coming up with large pages) instead of
  getting the slight performance win of larger pages.

Setting this hint doesn't guarantee that you won't get huge pages, but it
means that we won't try quite as hard to get them.

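Passing the hint is just one more attribute bit (a sketch; 'dev',
'size' and 'handle' are hypothetical):

  void *vaddr;

  /* Short-lived, sequentially accessed buffer: don't spend time
   * hunting for larger pages.
   */
  vaddr = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL,
                          DMA_ATTR_ALLOC_SINGLE_PAGES);
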
NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
though ARM64 patches will likely be posted soon.