This patch adds support for the dma_get_sgtable() function, which is
required to let drivers share buffers allocated by the DMA-mapping
subsystem. The generic implementation based on virt_to_page() is not
suitable for the ARM dma-mapping subsystem.
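As a rough illustration of the driver-side usage (the helper name
my_export_buffer and its error handling are hypothetical, not part of
this patch):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Describe an already-allocated coherent buffer as a scatterlist table
 * so that another driver can import it; the caller releases the table
 * with sg_free_table() when it is done. */
static int my_export_buffer(struct device *dev, void *cpu_addr,
                            dma_addr_t dma_addr, size_t size,
                            struct sg_table *sgt)
{
        return dma_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
}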
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch fixes the usage of uninitialized variables in the dmabounce
code introduced by commit a227fb92 ('ARM: dma-mapping: remove offset
parameter to prepare for generic dma_ops'):
arch/arm/common/dmabounce.c: In function ‘dmabounce_sync_for_device’:
arch/arm/common/dmabounce.c:409: warning: ‘off’ may be used uninitialized in this function
arch/arm/common/dmabounce.c:407: note: ‘off’ was declared here
arch/arm/common/dmabounce.c: In function ‘dmabounce_sync_for_cpu’:
arch/arm/common/dmabounce.c:369: warning: ‘off’ may be used uninitialized in this function
arch/arm/common/dmabounce.c:367: note: ‘off’ was declared here
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
This patch converts the dma_alloc/free/mmap_{coherent,writecombine}
functions to use the generic alloc/free/mmap methods from the
dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has
been introduced to implement the writecombine methods.
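A sketch of how a writecombine wrapper can be built on top of the
attrs-based interface (the wrapper name is hypothetical and the attrs
helpers shown reflect the dma_attrs API of this era):

#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>

static inline void *my_alloc_writecombine(struct device *dev, size_t size,
                                          dma_addr_t *handle, gfp_t gfp)
{
        DEFINE_DMA_ATTRS(attrs);

        /* Ask for a write-combining mapping instead of a fully coherent one. */
        dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
        return dma_alloc_attrs(dev, size, handle, gfp, &attrs);
}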
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch removes the dma bounce hooks from the common dma mapping
implementation on the ARM architecture and creates a separate set of
dma_map_ops for dma bounce devices.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
This patch removes the need for the offset parameter in the dma bounce
functions. This is required to let the dma-mapping framework on the ARM
architecture use the common, generic dma_map_ops based dma-mapping
helpers.
Background and more detailed explanation:
The dma_*_range_* functions have been available since the early days
of the DMA mapping API. They are the correct way of doing partial syncs
on a buffer (usually used by network device drivers); a sketch of this
usage follows below. This patch changes only the internal
implementation of the dma bounce functions to let them tunnel through
the dma_map_ops structure. The driver API stays unchanged, so drivers
are still obliged to call the dma_*_range_* functions to keep the code
clean and easy to understand.
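For reference, a partial sync in a hypothetical receive path looks
roughly like this (buffer size and sync length are made up):

dma_addr_t dma = dma_map_single(dev, rx_buf, 1536, DMA_FROM_DEVICE);
/* ... the device DMAs a 64-byte descriptor into the start of rx_buf ... */
dma_sync_single_range_for_cpu(dev, dma, 0, 64, DMA_FROM_DEVICE);
/* the CPU may now safely inspect the first 64 bytes of rx_buf */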
The only drawback of this patch is reduced detection of DMA API abuse.
Consider the following code:
dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);
Without the patch such code fails, because the dma bounce code is
unable to find the bounce buffer for the given DMA address. With the
patch applied, the above sync call is equivalent to:
dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);
which succeeds.
I don't consider this a real problem, because DMA API abuse should be
caught by the debug_dma_* function family. This patch lets us simplify
the internal low-level implementation without changing the
driver-visible API.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code
easier to read.
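An illustrative before/after (the error-handling context is
hypothetical):

if (dma_addr == ~0)              /* before */
        return -ENOMEM;

if (dma_addr == DMA_ERROR_CODE)  /* after */
        return -ENOMEM;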
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Pass the device-type-specific needs_bounce function in at dmabounce
register time, avoiding the need for a platform-specific global
function to do this.
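A hypothetical registration, assuming the dmabounce_register_dev()
signature introduced by this change (pool sizes and the bounce
condition are made up):

#include <linux/dma-mapping.h>

/* Bounce any buffer that ends above the device's 64MB DMA window. */
static int my_needs_bounce(struct device *dev, dma_addr_t addr, size_t size)
{
        return (addr + size) > 0x04000000;      /* 64MB */
}

static int my_platform_init(struct device *dev)
{
        /* small buffers from a 2KB pool, large buffers from a 4KB pool */
        return dmabounce_register_dev(dev, 2048, 4096, my_needs_bounce);
}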
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
DMA addresses should not be cast to void * for printing. Fix that to
be consistent with the rest of the file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the decision whether to bounce into __dma_map_page(), before
the check for high pages. This avoids triggering the high page
check for devices which aren't using dmabounce. Fix the unmap path
to cope too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the decision to perform DMA bouncing out of map_single() into its
own stand-alone function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use dma_map_page()/dma_unmap_page() internals to handle dma_map_single()
and dma_unmap_single().
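Conceptually (a simplified sketch, not the exact code in the patch),
the single-buffer variant reduces to the page-based one:

static dma_addr_t my_map_single(struct device *dev, void *cpu_addr,
                                size_t size, enum dma_data_direction dir)
{
        /* Lowmem kernel addresses translate directly to their page. */
        return dma_map_page(dev, virt_to_page(cpu_addr),
                            offset_in_page(cpu_addr), size, dir);
}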
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When map_single() is unable to obtain a safe buffer, we must return
the dma_addr_t error value, which is ~0 rather than 0.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add ARM support for the DMA debug infrastructure, which allows the
DMA API usage to be debugged.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.
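In driver-visible terms the ownership rule is (illustrative fragment):

dma_addr_t d = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
/* the device owns 'buf': the CPU must not touch it here */
/* ... wait for the device to complete its transfer ... */
dma_unmap_single(dev, d, len, DMA_FROM_DEVICE);
/* ownership is back with the CPU: it may now read the received data */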
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
Commit f74f7e57ae (ARM: use flush_kernel_dcache_area() for dmabounce)
broke the dmabounce build:
CC arch/arm/common/dmabounce.o
arch/arm/common/dmabounce.c: In function 'unmap_single':
arch/arm/common/dmabounce.c:315: error: implicit declaration of function '__cpuc_flush_kernel_dcache_area'
make[2]: *** [arch/arm/common/dmabounce.o] Error 1
Fix it.
Signed-off-by: Mike Rapoport <mike@compulab.co.il>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
After copying data from the bounce buffer to the real buffer, use
flush_kernel_dcache_page() to ensure that the data is written back in a
manner coherent with future userspace mappings.
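The copy-back step thus becomes, roughly (a simplified sketch assuming
a single-page buffer with hypothetical variable names; the real unmap
path iterates over the pages of the buffer):

/* copy the DMA data out of the bounce (safe) buffer ... */
memcpy(ptr, safe_ptr, size);
/* ... and make it visible to any future userspace mapping of the page */
flush_kernel_dcache_page(virt_to_page(ptr));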
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We will need to treat dma_unmap_page() differently from
dma_unmap_single().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Jamie Iles <jamie@jamieiles.com>
If a machine class has a custom __virt_to_bus() implementation, then
it must also provide a __arch_page_to_dma() implementation which is
_not_ based on page_address() in order to support highmem.
This patch fixes the existing __arch_page_to_dma() implementations and
provides a default implementation otherwise. The default implementation
for highmem is based on __pfn_to_bus(), which is defined only when no
custom __virt_to_bus() is provided by the machine class.
That leaves only ebsa110 and footbridge, which cannot support highmem
until they provide their own __arch_page_to_dma() implementation.
But highmem support on those legacy platforms with limited memory is
certainly not a priority.
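The default highmem-safe form is essentially (a sketch; the exact
macro in the tree may differ):

/* Translate via the PFN; page_address() would be useless for highmem. */
#define __arch_page_to_dma(dev, page)   \
        ((dma_addr_t)__pfn_to_bus(page_to_pfn(page)))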
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Validate the direction argument like x86 does. In addition,
validate the dma_unmap_* parameters against those passed to
dma_map_* when using the DMA bounce code.
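The sort of misuse the added checks are meant to catch (illustrative):

d = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
/* BUG: direction (and possibly size) disagree with the map call */
dma_unmap_single(dev, d, len, DMA_TO_DEVICE);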
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The dmabounce dma_sync_xxx() implementations have been broken for
quite some time; they all copy data between the DMA buffer and the
CPU-visible buffer irrespective of the direction of the change of
ownership. (IOW, a DMA_FROM_DEVICE mapping copies data from the DMA
buffer to the CPU buffer during a call to
dma_sync_single_for_device().)
Fix it by getting rid of sync_single(), moving the contents into
the recently created dmabounce_sync_for_xxx() functions and adjusting
appropriately.
This also makes it possible to properly support the DMA range sync
functions.
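With the fix, the copy direction follows the ownership transfer; in
driver-visible terms (illustrative):

/* CPU is about to look at data the device wrote: copy from the
 * bounce (DMA) buffer back to the CPU-visible buffer. */
dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

/* Device is about to read data the CPU prepared: copy from the
 * CPU-visible buffer out to the bounce (DMA) buffer. */
dma_sync_single_for_device(dev, handle, len, DMA_TO_DEVICE);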
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We can translate a struct page directly to a DMA address using
page_to_dma(). No need to use page_address() followed by
virt_to_dma().
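Illustrative before/after:

dma = virt_to_dma(dev, page_address(page) + offset);   /* before */
dma = page_to_dma(dev, page) + offset;                 /* after  */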
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert the existing dma_sync_single_for_* APIs to the new range-based
APIs, and make the dma_sync_single_for_* APIs a superset of them.
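In other words, after the conversion the non-range call is just the
degenerate case of the range call (illustrative):

/* dma_sync_single_for_cpu(dev, handle, size, dir) now behaves like: */
dma_sync_single_range_for_cpu(dev, handle, 0, size, dir);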
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
OMAP at least gets the return type(s) for the DMA translation functions
wrong, which can lead to subtle errors. Avoid this by moving the DMA
translation functions to asm/dma-mapping.h, and converting them to
inline functions.
Fix the OMAP DMA translation macros to use the correct argument and
result types.
Also, remove the unnecessary casts in dmabounce.c.
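A sketch of the typed inline form (the body assumes the default
__virt_to_bus() translation):

static inline dma_addr_t virt_to_dma(struct device *dev, void *addr)
{
        /* A real dma_addr_t return type lets the compiler catch the kind
         * of return-type mistakes the OMAP macros suffered from. */
        return (dma_addr_t)__virt_to_bus((unsigned long)addr);
}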
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER
architecture does:
This enables us to cleanly fix the Calgary IOMMU issue that some devices
are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
I think that per-device dma_mapping_ops support would also be helpful
for the KVM people to support PCI passthrough, but Andi thinks that
this makes it difficult to support PCI passthrough (see the above
thread). So I CC'ed this to the KVM camp. Comments are appreciated.
A pointer to dma_mapping_ops is added to struct dev_archdata. If the
pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it. If
it's NULL, the system-wide dma_ops pointer is used as before.
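The dispatch then looks roughly like this (a simplified sketch of the
idea, not the exact x86 code):

static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
{
        /* Per-device ops win; otherwise fall back to the global dma_ops. */
        if (dev && dev->archdata.dma_ops)
                return dev->archdata.dma_ops;
        return dma_ops;
}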
If it's useful for the KVM people, I plan to implement a mechanism to
register a hook that is called when a new PCI (or DMA-capable) device
is created (it works with hot plugging). It enables IOMMUs to set up an
appropriate dma_mapping_ops per device.
The major obstacle is that dma_mapping_error doesn't take a pointer to
the device, unlike the other DMA operations, so x86 can't have
dma_mapping_ops per device. Note that all the POWER IOMMUs use the same
dma_mapping_error function, so this is not a problem for POWER, but x86
IOMMUs use different dma_mapping_error functions.
The first patch adds the device argument to dma_mapping_error. The
patch is trivial but large, since it touches lots of drivers and
dma-mapping.h in all the architectures.
This patch:
dma_mapping_error() doesn't take a pointer to the device, unlike the
other DMA operations, so we can't have dma_mapping_ops per device.
Note that POWER already has dma_mapping_ops per device, but all the
POWER IOMMUs use the same dma_mapping_error function. x86 IOMMUs use
the device argument.
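So the driver-side calling convention becomes (illustrative):

dma_addr_t handle = dma_map_single(dev, ptr, size, DMA_TO_DEVICE);
if (dma_mapping_error(dev, handle))     /* device argument added */
        return -EIO;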
[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We have the dev_printk() variants for this kind of thing; use them
instead of directly trying to access the bus_id field of struct device.
This is done in order to remove bus_id entirely.
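An illustrative conversion (the message text is made up):

dev_err(dev, "unable to map DMA buffer\n");
/* instead of: printk(KERN_ERR "%s: unable to map DMA buffer\n", dev->bus_id); */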
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Noticed by Martin Michlmayr, this missing export prevents IEEE1394
from building with:
ERROR: "dma_sync_sg_for_device" [drivers/ieee1394/ieee1394.ko] undefined!
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/common/dmabounce.c: In function 'dma_map_sg':
arch/arm/common/dmabounce.c:445: error: implicit declaration of function 'sg_page'
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
consistent_sync() is used to handle the cache maintenance issues with
DMA operations. Since we've now removed the misuse of this function
from the two MTD drivers, rename it to prevent future misuse.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than printk'ing the dmabounce statistics occasionally to
the kernel log, provide a sysfs file to allow this information
to be periodically read.
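A sketch of such an attribute (the attribute name, counters and their
storage are illustrative, not the actual dmabounce code):

static struct {
        unsigned long total_allocs;
        unsigned long bounce_allocs;
} my_stats;

static ssize_t dmabounce_stats_show(struct device *dev,
                                    struct device_attribute *attr, char *buf)
{
        return sprintf(buf, "%lu %lu\n",
                       my_stats.total_allocs, my_stats.bounce_allocs);
}
static DEVICE_ATTR(dmabounce_stats, 0444, dmabounce_stats_show, NULL);
/* registered with device_create_file(dev, &dev_attr_dmabounce_stats) */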
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
dmabounce keeps a per-device structure, and finds the correct
structure by walking a list. Since architectures can now add fields to
struct device, we can attach this structure directly to struct device,
thereby eliminating the code to search the list.
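In effect (illustrative; the archdata field name and the old list name
are assumptions):

/* before: list_add(&device_info->node, &dmabounce_devs); plus a lookup walk */
dev->archdata.dmabounce = device_info;      /* attach ...              */
device_info = dev->archdata.dmabounce;      /* ... and find it in O(1) */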
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The DMA cache handling functions take virtual addresses, but in the
form of unsigned long arguments. This leads to a little confusion
about what exactly they take. So, convert them to take const void *
instead.
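Illustrative prototype change (parameter names are illustrative):

/* before */
void consistent_sync(unsigned long start, size_t size, int direction);
/* after */
void consistent_sync(const void *start, size_t size, int direction);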
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The outer cache can be L2, as on the RealView/EB MPCore platform, or
even L3 or further out on ARMv7 cores. This patch adds generic support
for flushing the outer cache in the DMA operations.
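Conceptually, the DMA maintenance path gains an outer-cache step such
as the following sketch ('vaddr'/'paddr' stand for the virtual and
physical addresses of the buffer):

/* inner (L1) cache maintenance on the virtual range ... */
dmac_flush_range(vaddr, vaddr + size);
/* ... followed by the corresponding outer (L2/L3) cache operation */
outer_flush_range(paddr, paddr + size);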
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Memory allocated by the coherent memory allocators will be marked
uncacheable, which means it is pointless to call consistent_sync() to
perform cache maintenance on this memory; it's just a waste of CPU
cycles.
Moreover, with the (subsequent) merge of outer cache support, it
actually breaks things to call consistent_sync() on anything but
direct-mapped memory.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
dma_sync_single is no more (and is to be removed in 2.7), so this
export should be dma_sync_single_for_cpu. Also export
dma_sync_single_for_device.
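That is, roughly:

EXPORT_SYMBOL(dma_sync_single_for_cpu);
EXPORT_SYMBOL(dma_sync_single_for_device);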
Signed-off-by: Kevin Hilman <khilman@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Kevin Hilman
Previous locking changes to dmabounce incorrectly returned non-NULL
even when the buffer was not found. Fix it up.
Signed-off-by: Kevin Hilman <khilman@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Kevin Hilman
This time with IRQ versions of the locks.
The rework also enables compatibility with the realtime-preemption
patch. With the current locking via interrupt disabling, potentially
sleeping functions can, under RT, be called with interrupts disabled.
Signed-off-by: Kevin Hilman <khilman@mvista.com>
Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Erik Hovland
I found a typo and what seems to be a run-on sentence in
arch/arm/common/dmabounce.c.
This patch corrects both.
Signed-off-by: Erik Hovland <erik@hovland.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Encapsulate pool data into dmabounce_pool. Only account successful
allocations. Use dma_mapping_error().
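A sketch of the encapsulation (field names are illustrative):

struct dmabounce_pool {
        unsigned long   size;           /* buffer size this pool serves */
        struct dma_pool *pool;          /* backing DMA pool             */
#ifdef STATS
        unsigned long   allocs;         /* successful allocations only  */
#endif
};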
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>