This patch adds a new version of the PPC440SPe ADMA driver.
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The size of the requested and ioremapped memory is off by one.
Use resource_size() to get the correct value.
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
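For illustration, the inclusive-range arithmetic behind resource_size(),
as a standalone sketch (modeled on the kernel's struct resource; not
code from the patch):

#include <stdio.h>

/* Modeled on the kernel's struct resource: [start, end] is inclusive. */
struct resource {
    unsigned long start;
    unsigned long end;
};

/* Equivalent of the kernel's resource_size() helper. */
static unsigned long resource_size(const struct resource *res)
{
    return res->end - res->start + 1;
}

int main(void)
{
    struct resource r = { .start = 0x1000, .end = 0x1fff };

    /* wrong: end - start     = 0x0fff, one byte short  */
    /* right: end - start + 1 = 0x1000, the full 4 KiB  */
    printf("size = %#lx\n", resource_size(&r));
    return 0;
}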
Transfers and the test buffer have to be at least align bytes long.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The SH DMA driver uses 32-byte transfers by default; in this mode,
buffers and sizes have to be 32-byte aligned. Specifying this
requirement also fixes Oopses with dmatest.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
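For illustration, the kind of check such an alignment requirement
implies, as a standalone sketch assuming a power-of-two align (not
code from the patch):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* With a 32-byte transfer unit, source, destination and length must
 * all be multiples of 'align' (a power of two), and a transfer
 * shorter than 'align' cannot be expressed at all. */
static bool transfer_aligned(uintptr_t src, uintptr_t dst, size_t len,
                             size_t align)
{
    if (len < align)
        return false;
    return ((src | dst | len) & (align - 1)) == 0;
}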
1/ Error handling code following a kzalloc should free the allocated data.
2/ Report an error when no platform data is detected
Both problems are fixed by moving the platform data check before the
allocation, which also allows a goto to be killed.
Reported-by: Julia Lawall <julia@diku.dk>
Acked-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch adds support for the ST-Ericsson COH 901 318 DMA block,
found in the U300 series platforms. It registers a DMA slave for
device I/O, and also a memcpy channel for memory-to-memory copies.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The completion of a pq operation is notified with a null descriptor
appended to the end of the chain. This descriptor needs to be visible
to dma clients otherwise the client is precluded from ensuring all
operations are quiesced before freeing channel resources, i.e. due to
descriptor polling it may get the completion notification ahead of the
interrupt delivered by the null descriptor.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
ioat3.2 does not support asynchronous error notifications which makes
the driver experience latencies when non-zero pq validate results are
expected. Provide a mechanism for turning off async_xor_val and
async_syndrome_val via Kconfig. This approach is generally useful for
any driver that specifies ASYNC_TX_DISABLE_CHANNEL_SWITCH and would like
to force the async_tx api to fall back to the synchronous path for
certain operations.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Error interrupts and error completions may cause channel hangs, so
poll the channel status register after a timeout.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
RAID operations cause a system hang on platforms with DCA
(Direct-Cache-Access) enabled. So turn off RAID capabilities in this
case.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Add at91sam9g45 dependency to drivers/dma/Kconfig
Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There are cases where we can use this_cpu_ptr(), and as a result
we no longer need to determine the currently executing cpu.
In those places no get/put_cpu combination is needed anymore.
The local cpu variable can be eliminated.
Preemption still needs to be disabled and enabled since the
modifications of the per cpu variables are not atomic. There may
be multiple per cpu variables modified and those must all
be from the same processor.
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Tejun Heo <tj@kernel.org>
cc: Eric Biederman <ebiederm@aristanetworks.com>
cc: Stephen Hemminger <shemminger@vyatta.com>
cc: David L Stevens <dlstevens@us.ibm.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
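For illustration, the shape of the conversion described above, with a
hypothetical per-cpu counter (names are not from the patch):

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/smp.h>

struct stats {
    unsigned long events;
};
static DEFINE_PER_CPU(struct stats, stats);

/* Before: an explicit cpu variable and a get/put_cpu pair. */
static void count_event_old(void)
{
    int cpu = get_cpu();        /* disables preemption */
    per_cpu(stats, cpu).events++;
    put_cpu();
}

/* After: this_cpu_ptr() needs no cpu variable, but preemption is
 * still disabled around the non-atomic update. */
static void count_event_new(void)
{
    preempt_disable();
    this_cpu_ptr(&stats)->events++;
    preempt_enable();
}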
drivers/dma/ioat/dma_v3.c: In function 'ioat3_prep_memset_lock':
drivers/dma/ioat/dma_v3.c:439: warning: 'fill' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c:437: warning: 'desc' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c: In function '__ioat3_prep_xor_lock':
drivers/dma/ioat/dma_v3.c:489: warning: 'xor' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c:486: warning: 'desc' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c: In function '__ioat3_prep_pq_lock':
drivers/dma/ioat/dma_v3.c:631: warning: 'pq' may be used uninitialized in this function
drivers/dma/ioat/dma_v3.c:628: warning: 'desc' may be used uninitialized in this function
gcc-4.0, unlike gcc-4.3, does not see that these variables are
initialized before use. Convert the descriptor loops to do-while to
make this initialization apparent.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
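For illustration, the shape of the conversion (hypothetical names, not
the driver's actual code; valid because these paths always produce at
least one descriptor):

/* To an older compiler the loop body looks conditional: */
struct desc *desc;    /* gcc-4.0: "may be used uninitialized" */
int i;

for (i = 0; i < num_descs; i++) {
    desc = next_desc(chan);
    /* ... fill in the descriptor ... */
}
use(desc);

/* A do-while executes its body at least once, so the assignment
 * is visibly unconditional: */
i = 0;
do {
    desc = next_desc(chan);
    /* ... fill in the descriptor ... */
} while (++i < num_descs);
use(desc);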
drivers/dma/ioat/dma_v2.c: In function 'ioat2_dma_prep_memcpy_lock':
drivers/dma/ioat/dma_v2.c:680: warning: 'hw' may be used uninitialized in this function
drivers/dma/ioat/dma_v2.c:681: warning: 'desc' may be used uninitialized in this function
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
With the addition of ioat_max_alloc_order it is not clear what the
maximum allocation order is, so document that in the modinfo. Also take
an opportunity to kill a stray semicolon.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch reworks platform driver power management code
for at_hdmac from legacy late/early callbacks to dev_pm_ops.
The callbacks are converted for CONFIG_SUSPEND like this:
suspend_late() -> suspend_noirq()
resume_early() -> resume_noirq()
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
A new ring implementation and the addition of raid functionality
constitute a bump in the driver major version number.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch enables DCA support on multiple-IOH/multiple-IIO architectures.
It modifies the dca module by replacing the single dca_providers list
with a dca_domains list, each domain containing a separate list of
providers. This approach lets the dca driver manage multiple domains,
i.e. sets of providers and requesters mapped back to the same PCI root
complex device.
The driver takes care to register each requester to a provider
from the same domain.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
This restriction prevented ASYNC_TX_DMA from being enabled on platform
configurations where DMA address conversion could not be performed in
place on the stack. Since commit 04ce9ab3 ("async_xor: permit callers
to pass in a 'dma/page scribble' region") the async_tx api now either
uses a caller provided 'scribble' buffer, or performs the conversion in
place when sizeof(dma_addr_t) <= sizeof(struct page *).
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This supports all DMA channels, and it was tested on SH7722,
SH7780, SH7785 and SH7763.
It cannot be used together with the SH DMA API.
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Reviewed-by: Matt Fleming <matt@console-pimps.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Dan Williams wrote:
... DMA-slave clients request specific channels and know the hardware
details at a low level, so it should not be too high an expectation to
push dma mapping responsibility to the client.
This patch also includes DMA_COMPL_{SRC,DEST}_UNMAP_SINGLE support for
the dw_dmac driver.
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the DMA_SLAVE capability of the DMAEngine API to copy to/from a
scatterlist and an arbitrary list of hardware address/length pairs.
This allows a single DMA transaction to copy data from several different
devices into a scatterlist at the same time.
This also adds support to enable some controller-specific features such as
external start and external pause for a DMA transaction.
[dan.j.williams@intel.com: rebased on tx_list movement]
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Acked-by: Li Yang <leoli@freescale.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When using the Freescale DMA controller in external control mode, both the
request count and external pause bits need to be set up correctly. This
was being done by a single function.
The 83xx controller lacks the external pause feature, but has a similar
feature called external start. This feature requires that the request count
bits be setup correctly.
Split the function into two parts, to make it possible to use the external
start feature on the 83xx controller.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
All the necessary fields for handling an ioat2,3 ring entry can fit into
one cacheline. Move ->len prior to ->txd in struct ioat_ring_ent, and
move allocation of these entries to a hw-cache-aligned kmem cache to
reduce the number of cachelines dirtied for descriptor management.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
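For illustration, a hw-cache-aligned kmem cache of the sort described
(a sketch, not the patch's exact code; assumes struct ioat_ring_ent is
in scope):

#include <linux/errno.h>
#include <linux/slab.h>

/* SLAB_HWCACHE_ALIGN makes every object start on a cacheline
 * boundary, so one ring entry does not straddle, or share,
 * cachelines with its neighbors. */
static struct kmem_cache *ring_ent_cache;

static int ring_ent_cache_init(void)
{
    ring_ent_cache = kmem_cache_create("ioat_ring_ent",
                                       sizeof(struct ioat_ring_ent),
                                       0, SLAB_HWCACHE_ALIGN, NULL);
    return ring_ent_cache ? 0 : -ENOMEM;
}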
The tx_list attribute of struct dma_async_tx_descriptor is common to
most, but not all dma driver implementations. None of the upper level
code (dmaengine/async_tx) uses it, so allow drivers to implement it
locally if they need it. This saves sizeof(struct list_head) bytes for
drivers that do not manage descriptors with a linked list (e.g.: ioatdma
v2,3).
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop txx9dmac's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop at_hdmac's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop mv_xor's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop ioatdma's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop iop-adma's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop fsldma's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Li Yang <leoli@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Drop dw_dmac's use of tx_list from struct dma_async_tx_descriptor in
preparation for removal of this field.
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Trivial cleanup to make the PCI ID table easier to read.
[dan.j.williams@intel.com: extended to v3.2 devices]
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The ioatdma module is missing aliases for the PCI devices it supports,
so it is not autoloaded on boot. Add a MODULE_DEVICE_TABLE() to get
these aliases.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The cleanup routine for the raid cases imposes extra checks for handling
raid descriptors and extended descriptors. If the channel does not
support raid it can avoid this extra overhead by using the ioat2 cleanup
path.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Jasper Forest introduces raid offload support via ioat3.2. When
raid offload is enabled two (out of 8 channels) will report raid5/raid6
offload capabilities. The remaining channels will only report ioat3.0
capabilities (memcpy).
Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The async_tx api uses the DMA_INTERRUPT operation type to terminate a
chain of issued operations with a callback routine.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
If a platform advertises pq capabilities, but not xor, then use
ioat3_prep_pqxor and ioat3_prep_pqxor_val to simulate xor support.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
ioat3.2 adds support for raid6 syndrome generation (xor sum of galois
field multiplication products) using up to 8 sources. It can also
perform a pq-zero-sum operation to validate whether the syndrome for a
given set of sources matches a previously computed syndrome.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This adds a hardware specific self test to be called from ioat_probe.
In the ioat3 case we will have tests for all the different raid
operations, while ioat1 and ioat2 will continue to just test memcpy.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
ioat3.2 adds xor offload support for up to 8 sources. It can also
perform an xor-zero-sum operation to validate whether all given sources
sum to zero, without writing to a destination. Xor descriptors differ
from memcpy in that one operation may require multiple descriptors
depending on the number of sources. When the number of sources exceeds
5 an extended descriptor is needed. These descriptors need to be
accounted for when updating the DMA_COUNT register.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Tag completion writes for direct cache access to reduce the latency of
checking for descriptor completions.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Export driver attributes for diagnostic purposes:
'ring_size': total number of descriptors available to the engine
'ring_active': number of descriptors in-flight
'capabilities': supported operation types for this channel
'version': Intel(R) QuickData specification revision
This also allows some chattiness to be removed from the driver startup
as this information is now available via sysfs.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Up until this point the driver code for Intel(R) QuickData Technology
engines, specification versions 2 and 3, was mostly identical save for
a few quirks. Version 3.2 hardware adds many new capabilities (like
raid offload support) requiring some infrastructure that is not
relevant for v2. For better code organization of the new functionality move v3
and v3.2 support to its own file dma_v3.c, and export some routines from
the base files (dma.c and dma_v2.c) that can be reused directly.
The first new capability included in this code reorganization is support
for v3.2 memset operations.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
ioat3.2 adds raid5 and raid6 offload capabilities.
Signed-off-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In preparation for adding more operation types to the ioat3 path the
driver needs to honor the DMA_PREP_FENCE flag. For example the async_tx api
will hand xor->memcpy->xor chains to the driver with the 'fence' flag set on
the first xor and the memcpy operation. This flag in turn sets the 'fence'
flag in the descriptor control field telling the hardware that future
descriptors in the chain depend on the result of the current descriptor,
so it must wait for all writes to complete before starting the next
operation.
Note that ioat1 does not prefetch the descriptor chain, so does not
require/support fenced operations.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Some engines have transfer size and address alignment restrictions. Add
a per-operation alignment property to struct dma_device that the async
routines and dmatest can use to check alignment capabilities.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Channel switching is problematic for some dmaengine drivers as the
architecture precludes separating the ->prep from ->submit. In these
cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
the async_tx allocator to only return channels that support all of the
required asynchronous operations.
For example MD_RAID456=y selects support for asynchronous xor, xor
validate, pq, pq validate, and memcpy. When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these
capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to
quickly locate compatible channels with the guarantee that dependency
chains will remain on one channel. When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select
channels that lead to operation chains that need to cross channel
boundaries using the async_tx channel switch capability.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Handle descriptor allocation failures by polling for a descriptor. The
driver will force forward progress when polled. In the best case this
polling interval will be the time it takes for one dma memcpy
transaction to complete. In the worst case, channel hang, we will need
to wait 100ms for the cleanup watchdog to fire (ioatdma driver).
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Increment the allocation order of the descriptor ring every time we run
out of descriptors up to a maximum of allocation order specified by the
module parameter 'ioat_max_alloc_order'. After each idle period
decrement the allocation order to a minimum order of
'ioat_ring_alloc_order' (i.e. the default ring size, tunable as a module
parameter).
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In order to support dynamic resizing of the descriptor ring or polling
for a descriptor in the presence of a hung channel the reset handler
needs to make progress while in a non-preemptible context. The current
workqueue implementation precludes polling channel reset completion
under spin_lock().
This conversion also allows us to return to opportunistic cleanup in the
ioat2 case as the timer implementation guarantees at least one cleanup
after every descriptor is submitted. This means the worst case
completion latency becomes the timer frequency (for exceptional
circumstances), but with the benefit of avoiding busy waiting when the
lock is contended.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Save 4 bytes per software descriptor by transmitting tx_cnt in an unused
portion of the hardware descriptor.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Mark all single use initialization routines with __devinit.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The register write in ioat_dma_cleanup_tasklet is unfortunate in two
ways:
1/ It clears the extra 'enable' bits that we set at alloc_chan_resources time
2/ It gives the impression that it disables interrupts when it is in
fact re-arming interrupts
[ Impact: fix, persist the value of the chanctrl register when re-arming ]
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Don't trust that the reserved bits are always zero, also sanity check
the returned value.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The cleanup path makes an effort to only perform an atomic read of the
64-bit completion address. However in the 32-bit case it does not
matter if we read the upper-32 and lower-32 non-atomically because the
upper-32 will always be zero.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Provide some output for debugging the driver.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The unified ioat1/ioat2 ioat_dma_unmap() implementation derives the
source and dest addresses from the unmap descriptor. There is no longer
a need to track this information in struct ioat_desc_sw.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Replace the current linked list munged into a ring with a native ring
buffer implementation. The benefit of this approach is reduced overhead
as many parameters can be derived from ring position with simple pointer
comparisons and descriptor allocation/freeing becomes just a
manipulation of head/tail pointers.
It requires a contiguous allocation for the software descriptor
information.
Since this arrangement is significantly different from the ioat1 chain,
move ioat2,3 support into its own file and header. Common routines are
exported from drivers/dma/ioat/dma.[ch].
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
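For illustration, the head/tail arithmetic a native power-of-two ring
makes possible, as a standalone model (not the driver's code):

#include <assert.h>
#include <stdint.h>

/* Everything is derived from two free-running counters and a mask;
 * no linked-list walking is needed. */
struct ring {
    uint16_t head;    /* next slot to allocate */
    uint16_t tail;    /* oldest slot not yet cleaned up */
    uint16_t order;   /* ring holds 1 << order entries */
};

static inline uint32_t ring_size(const struct ring *r)
{
    return 1 << r->order;
}

static inline uint32_t ring_active(const struct ring *r)
{
    return (uint16_t)(r->head - r->tail);    /* wraps correctly */
}

static inline uint32_t ring_space(const struct ring *r)
{
    return ring_size(r) - ring_active(r);
}

/* Allocation is a head increment; the slot index is head & mask. */
static inline uint32_t ring_alloc(struct ring *r)
{
    assert(ring_space(r) >= 1);
    return r->head++ & (ring_size(r) - 1);
}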
Prepare the code for the conversion of the ioat2 linked-list-ring into a
native ring buffer. After this conversion ioat2 channels will share
less of the ioat1 infrastructure, but there will still be places where
sharing is possible. struct ioat_chan_common is created to house the
channel attributes that will remain common between ioat1 and ioat2
channels.
For every routine that accesses both common and hardware specific fields
the old unified 'ioat_chan' pointer is split into an 'ioat' and a
'chan' pointer, where 'chan' references the common fields and 'ioat'
the hardware/version-specific ones.
[ Impact: pure structure member movement/variable renames, no logic changes ]
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
If a callback is to be attached to a descriptor the channel needs to
know at ->prep time so it can set the interrupt enable bit. This is in
preparation for moving ioat2 descriptor preparation from
->submit to ->prep.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The async_tx api assumes that after a successful ->prep a subsequent
->submit will not fail due to a lack of resources.
This also fixes a bug in the allocation failure case. Previously the
descriptors allocated prior to the allocation failure would not be
returned to the free list.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This cleans up a mess of and'ing and or'ing bit definitions, and allows
simple assignments from the specified dma_ctrl_flags parameter.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
->dmacount tracks the sequence number of active descriptors. It is
written to the DMACOUNT register to update the channel's view of pending
descriptors in the chain. The register is 16-bits so ->dmacount should
be unsigned and 16-bit as well. Also modify ->desccount to maintain
alignment.
This was never a problem in practice because we never compared dmacount
values, but this is a bug waiting to happen.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
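For illustration, why the 16-bit unsigned type matters (standalone):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t dmacount = 0xfffe;

    /* The hardware register is 16 bits wide, so the software
     * counter must wrap exactly the same way: */
    dmacount += 4;    /* 0xfffe + 4 == 0x0002 */

    /* With a wider or signed type the same arithmetic would not
     * wrap, and old-vs-new comparisons would go wrong. */
    printf("dmacount = %#x\n", dmacount);
    return 0;
}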
Towards the removal of ioatdma_device.version split the initialization
path into distinct versions. This conversion:
1/ moves version specific probe code to version specific routines
2/ removes the need for ioat_device
3/ turns off the ioat1 msi quirk if the device is reinitialized for intx
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The only .c files that utilize these protected prototypes depend on
CONFIG_INTEL_IOATDMA=y, so there is no value gained in providing empty
prototypes.
[ Impact: pure cleanup ]
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* reduce device->common. to dma-> in ioat_dma_{probe,remove,selftest}
* ioat_lookup_chan_by_index to ioat_chan_by_index
* multi-line function definitions
* ioat_desc_sw.async_tx to ioat_desc_sw.txd
* desc->txd. to tx-> in cleanup routine
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The driver currently duplicates much of what these routines offer, so
just use the common code. For example ->irq_mode tracks what interrupt
mode was initialized, which duplicates the ->msix_enabled and
->msi_enabled handling in pcim_release.
This also adds a check to the return value of dma_async_device_register,
which can fail.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Some of these defines may be useful outside of dma.c and the header is
private so there are no namespace pollution concerns.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Even though the intent is to extend dmatest with P+Q tests there is
still value in having an always-on sanity check to prevent an
unintentionally broken driver from registering.
This depends on raid6_pq.ko for verification, the side effect being that
PQ capable channels will fail to register when raid6 is disabled.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
iop33x support is not included because that engine is a bit more awkward
to handle in that it can either be in xor mode or pq mode. The
dmaengine/async_tx layers currently only comprehend static capabilities.
Note iop13xx does not support hardware PQ continuation so the driver
must handle the DMA_PREP_CONTINUE flag for operations across > 16
sources. From the comment for dma_maxpq:
/* When an engine does not support native continuation we need 3 extra
* source slots to reuse P and Q with the following coefficients:
* 1/ {00} * P : remove P from Q', but use it as a source for P'
* 2/ {01} * Q : use Q to continue Q' calculation
* 3/ {00} * Q : subtract Q from P' to cancel (2)
*/
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
lockdep correctly identifies a potential recursive locking case for
iop_chan->lock, but in the dependency submission case we expect that the same
class will be acquired for both the parent dependency and the child channel.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Test raid6 p+q operations with a simple "always multiply by 1" q
calculation to fit into dmatest's current destination verification
scheme.
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
[ Based on an original patch by Yuri Tikhonov ]
This adds support for doing asynchronous GF multiplication by adding
two additional functions to the async_tx API:
async_gen_syndrome() does simultaneous XOR and Galois field
multiplication of sources.
async_syndrome_val() validates the given source buffers against known P
and Q values.
When a request is made to run async_pq against more than the hardware
maximum number of supported sources, we need to reuse the previously
generated P and Q values as sources for the next operation. Care must
be taken to remove Q from P' and P from Q'. For example to perform a 5
source pq op with hardware that only supports 4 sources at a time the
following approach is taken:
p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))
p' = p + q + q + src4 = p + src4
q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4
Note: 4 is the minimum acceptable maxpq, otherwise we punt to the
synchronous software path.
The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
sources (in the above manner) and fill the remaining slots up to maxpq
with the new sources/coefficients.
Note1: Some devices have native support for P+Q continuation and can skip
this extra work. Devices with this capability can advertise it with
dma_set_maxpq. It is up to each driver how to handle the
DMA_PREP_CONTINUE flag.
Note2: The api supports disabling the generation of P when generating Q,
this is ignored by the synchronous path but is implemented by some dma
devices to save unnecessary writes. In this case the continuation
algorithm is simplified to only reuse Q as a source.
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
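For illustration, the continuation arithmetic above can be checked in
GF(2^8) with one byte per source (a standalone sketch using the raid6
field polynomial 0x11d; not code from the patch):

#include <stdint.h>
#include <stdio.h>

/* GF(2^8) multiply over the raid6 field polynomial 0x11d. */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;

    while (b) {
        if (b & 1)
            p ^= a;
        b >>= 1;
        a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
    }
    return p;
}

int main(void)
{
    /* One byte per source disk; the values are arbitrary. */
    uint8_t s[5] = { 0xde, 0xad, 0xbe, 0xef, 0x42 };
    uint8_t p, q, p2, q2, pref = 0, qref = 0;
    int i;

    /* First pass: 4 sources, coefficients {01},{02},{04},{08}. */
    p = s[0] ^ s[1] ^ s[2] ^ s[3];
    q = gf_mul(s[0], 0x01) ^ gf_mul(s[1], 0x02) ^
        gf_mul(s[2], 0x04) ^ gf_mul(s[3], 0x08);

    /* Continuation: PQ(p, q, q, src4, COEF({00},{01},{00},{10})).
     * The doubled q cancels out of p' and the {00} terms drop. */
    p2 = p ^ q ^ q ^ s[4];
    q2 = gf_mul(q, 0x01) ^ gf_mul(s[4], 0x10);

    /* Reference: the 5-source syndrome computed directly. */
    for (i = 0; i < 5; i++) {
        pref ^= s[i];
        qref ^= gf_mul(s[i], 1 << i);
    }

    printf("p' %s, q' %s\n", p2 == pref ? "ok" : "BAD",
           q2 == qref ? "ok" : "BAD");
    return 0;
}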
We currently walk the parent chain when waiting for a given tx to
complete; however, this walk may race with the driver cleanup routine.
The routines in async_raid6_recov.c may fall back to the synchronous
path at any point so we need to be prepared to call async_tx_quiesce()
(which calls dma_wait_for_async_tx). To remove the ->parent walk we
guarantee that every time a dependency is attached ->issue_pending() is
invoked, then we can simply poll the initial descriptor until
completion.
This also allows for a lighter weight 'issue pending' implementation as
there is no longer a requirement to iterate through all the channels'
->issue_pending() routines as long as operations have been submitted in
an ordered chain. async_tx_issue_pending() is added for this case.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
dmaengine: at_hdmac: add DMA slave transfers
dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller
dmaengine: dmatest: correct thread_count while using multiple threads per channel
dmaengine: dmatest: add a maximum number of test iterations
drivers/dma: Remove unnecessary semicolons
drivers/dma/fsldma.c: Remove unnecessary semicolons
dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
fsldma: do not clear bandwidth control bits on the 83xx controller
fsldma: enable external start for the 83xx controller
fsldma: use PCI Read Multiple command
When first created the ioat driver was the only inhabitant of
drivers/dma/. Now, it is the only multi-file (more than a .c and a .h)
driver in the directory. Moving it to an ioat/ subdirectory allows the
naming convention to be cleaned up, and allows for future splitting of
the source files by hardware version (v1, v2, and v3).
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch for at_hdmac adds slave transfer capability to the Atmel DMA
controller available on some AT91 SoCs. This allows peripheral-to-memory
and memory-to-peripheral transfers with hardware handshaking.
The slave structure for controller-specific information is passed
through the channel private data. This at_dma_slave structure is defined
in the at_hdmac.h header file and the related hardware definitions are
moved to this file from at_hdmac_regs.h. Doing this allows channel
configuration from platform definition code.
This work is heavily based on dw_dmac and several slave implementations.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This AHB DMA Controller (aka HDMA or DMAC on AT91 systems) is available
on the at91sam9rl chip. It will be used on other products in the future.
This first release covers only the memory-to-memory transfer type, the
only transfer type supported by this chip. On other products it will
also be used for peripheral DMA transfers (slave API support to come).
I used the dmatest client without problems in different configurations
to test it.
Full documentation for this controller can be found in the SAM9RL datasheet:
http://www.atmel.com/dyn/products/product_card.asp?part_id=4243
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
It seems that thread_count is not properly calculated in dmatest.
In fact the thread count returned from dmatest_add_threads() is not
correctly added to thread_count and thus not properly printed.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
dmatest normally runs until its kthreads are killed. This patch adds
a parameter that sets a maximum number of test iterations.
This feature is quite interesting for debugging when you set a lot of
traces in your dmaengine controller driver.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch reworks platform driver power management code
for txx9dmac from legacy late/early callbacks to dev_pm_ops.
The callbacks are converted for CONFIG_SUSPEND like this:
suspend_late() -> suspend_noirq()
resume_early() -> resume_noirq()
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
This patch reworks platform driver power management code
for dw_dmac from legacy late/early callbacks to dev_pm_ops.
The callbacks are converted for CONFIG_SUSPEND like this:
suspend_late() -> suspend_noirq()
resume_early() -> resume_noirq()
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
On HIGHMEM64G systems dma_addr_t is known to be larger than (void *)
which precludes async_xor from performing dma address conversions by
reusing the input parameter address list. However, other parts of the
dmaengine infrastructure do not suffer this constraint, so the
HIGHMEM64G restriction can be down-levelled.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch does not change actual behaviour since dma_unmap_page is just
an alias of dma_unmap_single on MIPS.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch adds support for the integrated DMAC of the TXx9 family.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The 83xx controller does not support the external pause feature. The bit
in the mode register that controls external pause on the 85xx controller
happens to be part of the bandwidth control settings for the 83xx
controller.
This patch fixes the driver so that it only clears the external pause bit
if the hardware is the 85xx controller. When driving the 83xx controller,
the bit is left untouched. This follows the existing convention that mode
registers settings are not touched unless necessary.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The 83xx controller has external start capability, but lacks external pause
capability. Hook up the external start function pointer for the 83xx
controller.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
By default, the Freescale 83xx DMA controller uses the PCI Read Line
command when reading data over the PCI bus. Setting the controller to use
the PCI Read Multiple command instead allows the controller to read much
larger bursts of data, which provides a drastic speed increase.
The slowdown due to using PCI Read Line was only observed when a PCI-to-PCI
bridge was between the devices trying to communicate.
A simple test driver showed an increase from 4MB/sec to 116MB/sec when
performing DMA over the PCI bus. Using DMA to transfer between blocks of
local SDRAM showed no change in performance with this patch. The dmatest
driver was also used to verify the correctness of the transfers, and showed
no errors.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Acked-by: Timur Tabi <timur@freescale.com>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_xor() needs space to perform dma and page address conversions. In
most cases the code can simply reuse the struct page * array because the
size of the native pointer matches the size of a dma/page address. In
order to support archs where sizeof(dma_addr_t) is larger than
sizeof(struct page *), or to preserve the input parameters, we utilize a
memory region passed in by the caller.
Since the code is now prepared to handle the case where it cannot
perform address conversions on the stack, we no longer need the
!HIGHMEM64G dependency in drivers/dma/Kconfig.
[ Impact: don't clobber input buffers for address conversions ]
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Testing the i7300_idle driver on i5000-series hardware required
an edit to i7300_idle.h to "#define SUPPORT_I5000 1" and a re-build
of both i7300_idle and ioat_dma.
Replace that build-time scheme with a load-time module parameter:
"7300_idle.forceload=1" to make it easier to test the driver
on hardware that while not officially validated, works fine
and is much more commonly available.
By default (no modparam) the driver will continue to load
only on the i7300.
Note that ioat_dma runs a copy of i7300_idle's probe routine
to know to reserve an IOAT channel for i7300_idle.
This change makes ioat_dma do that always on the i5000,
just like it does on the i7300.
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Andrew Henroid <andrew.d.henroid@intel.com>
When we build with dma_addr_t as a 64-bit quantity we get:
drivers/dma/fsldma.c: In function 'fsl_chan_xfer_ld_queue':
drivers/dma/fsldma.c:625: warning: cast to pointer from integer of different size
drivers/dma/fsldma.c: In function 'fsl_dma_chan_do_interrupt':
drivers/dma/fsldma.c:737: warning: cast to pointer from integer of different size
drivers/dma/fsldma.c:737: warning: cast to pointer from integer of different size
drivers/dma/fsldma.c: In function 'of_fsl_dma_probe':
drivers/dma/fsldma.c:927: warning: cast to pointer from integer of different size
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When preparing a memcpy operation, if the kernel fails to allocate memory
for a link descriptor after the first link descriptor has already been
allocated, then some memory will never be released. Fix the problem by
walking the list of allocated descriptors backwards, and freeing the
allocated descriptors back into the DMA pool.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
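The unwind described above might look roughly like this (an
illustrative sketch against the fsldma structures, not the patch's
exact code):

#include <linux/dmapool.h>
#include <linux/list.h>

/* On allocation failure, walk the chain built so far in reverse and
 * return each link descriptor to the DMA pool. */
static void unwind_descs(struct fsl_dma_chan *chan,
                         struct list_head *list)
{
    struct fsl_desc_sw *desc, *_desc;

    list_for_each_entry_safe_reverse(desc, _desc, list, node) {
        list_del(&desc->node);
        dma_pool_free(chan->desc_pool, desc, desc->async_tx.phys);
    }
}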
On the 83xx controller, snooping is necessary for the DMA controller to
ensure cache coherence with the CPU when transferring to/from RAM.
The last descriptor in a chain will always have the End-of-Chain interrupt
bit set, so we can set the snoop bit while adding the End-of-Chain
interrupt bit.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
When creating a DMA transaction with multiple descriptors, the async_tx
cookie is set to 0 for each descriptor in the chain, excluding the last
descriptor, whose cookie is set to -EBUSY.
When fsl_dma_tx_submit() is run, it only assigns a cookie to the first
descriptor. All of the remaining descriptors keep their original value,
including the last descriptor, which is set to -EBUSY.
After the DMA completes, the driver will update the last completed cookie
to be -EBUSY, which is an error code instead of a valid cookie. This causes
dma_async_is_complete() to always return DMA_IN_PROGRESS.
This causes the fsldma driver to never clean up the queue of link
descriptors, and the driver will re-run the DMA transaction on the hardware
each time it receives the End-of-Chain interrupt. This causes an infinite
loop.
With this patch, fsl_dma_tx_submit() is changed to assign a cookie to every
descriptor in the chain. The rest of the code then works without problems.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
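The fix amounts to walking the whole chain at submit time, roughly
like this (a simplified sketch of the fsldma logic, assuming tx_list
holds every descriptor of the chain):

static dma_cookie_t tx_submit(struct dma_async_tx_descriptor *tx)
{
    struct fsl_dma_chan *chan = to_fsl_chan(tx->chan);
    struct fsl_desc_sw *desc;
    dma_cookie_t cookie = chan->common.cookie;

    /* Every descriptor gets a real cookie, so the last completed
     * cookie can never be the -EBUSY placeholder. */
    list_for_each_entry(desc, &tx->tx_list, node) {
        if (++cookie < 0)
            cookie = 1;    /* skip the reserved values */
        desc->async_tx.cookie = cookie;
    }
    chan->common.cookie = cookie;
    return cookie;
}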
When using the DMA controller from multiple threads at the same time, it is
possible to get lots of "DMA halt timeout!" errors printed to the kernel
log.
This occurs due to a race between fsl_dma_memcpy_issue_pending() and the
interrupt handler, fsl_dma_chan_do_interrupt(). Both call the
fsl_chan_xfer_ld_queue() function, which does not protect against
concurrent accesses to dma_halt() and dma_start().
The existing spinlock is moved to cover the dma_halt() and dma_start()
functions. Testing shows that the "DMA halt timeout!" errors disappear.
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Li Yang <leoli@freescale.com>
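The resulting locking looks roughly like this (a simplified sketch,
not the patch's exact code):

static void fsl_chan_xfer_ld_queue(struct fsl_dma_chan *chan)
{
    unsigned long flags;

    /* One lock now covers the halt/reprogram/start sequence, so
     * issue_pending and the interrupt handler cannot interleave. */
    spin_lock_irqsave(&chan->desc_lock, flags);
    if (list_empty(&chan->ld_queue))
        goto out;

    dma_halt(chan);
    /* ... program the next descriptor chain ... */
    dma_start(chan);
out:
    spin_unlock_irqrestore(&chan->desc_lock, flags);
}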
Fix the check of potential array overflow when using corrupted channel
device tree nodes.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Li Yang <leoli@freescale.com>
This also fixes the case of a single queued buffer, for example, when taking a
single frame snapshot with the mx3_camera driver.
Reported-by: Agustin Ferrin Pozuelo <gatoguan-os@yahoo.com>
Tested-by: Agustin Ferrin Pozuelo <gatoguan-os@yahoo.com>
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
As reported by Alexander Beregalov <a.beregalov@gmail.com>:
ioatdma 0000:00:08.0: DMA-API: device driver frees DMA memory with
wrong function [device address=0x000000007f76f800] [size=2000 bytes]
[mapped as single] [unmapped as page]
The ioatdma driver was unmapping all regions (whether mapped as page
or as single) using unmap_page.
This patch lets the dma driver recognize whether unmap_single or
unmap_page should be used. It introduces two new dma control flags:
DMA_COMPL_SRC_UNMAP_SINGLE and DMA_COMPL_DEST_UNMAP_SINGLE.
They should be set to tell the dma driver to do the dma-unmapping as
single (the first one for the source, the latter for the destination).
If the respective flag is not set, the driver assumes dma-unmapping as
page.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Tested-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
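On the driver side the decision reduces to something like this (an
illustrative sketch for the destination case; the source case is
symmetric):

#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>

static void unmap_dst(struct device *dev, dma_addr_t addr, size_t len,
                      enum dma_ctrl_flags flags)
{
    /* The client tells us how it mapped the region, and we pick
     * the matching unmap routine. */
    if (flags & DMA_COMPL_DEST_UNMAP_SINGLE)
        dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);
    else
        dma_unmap_page(dev, addr, len, DMA_FROM_DEVICE);
}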
disable_irq() should wait for all running handlers to complete
before returning. As such, if it's used to disable an interrupt
from that interrupt's handler it will deadlock. This replaces
the dangerous instances with the _nosync() variant which doesn't
have this problem.
Note the 2 handlers in question are only used #ifdef DEBUG so
I imagine these code paths don't get hit often.
Signed-off-by: Ben Nizette <bn@niasdigital.com>
Acked-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
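For illustration, the deadlock and the fix (hypothetical handler name):

#include <linux/interrupt.h>

static irqreturn_t debug_isr(int irq, void *dev_id)
{
    /*
     * disable_irq(irq) here would deadlock: it waits for all
     * running handlers of 'irq' to finish, and we are one.
     */
    disable_irq_nosync(irq);    /* returns immediately */
    return IRQ_HANDLED;
}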
The check for reaching max_channels is short circuited by 'continuing'
after successfully adding a channel.
[ Impact: make the 'max_channels' module parameter actually have an effect ]
Cc: <stable@kernel.org>
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
'zero_sum' does not properly describe the operation of generating parity
and checking that it validates against an existing buffer. Change the
name of the operation to 'val' (for 'validate'). This is in
anticipation of the p+q case where it is a requirement to identify the
target parity buffers separately from the source buffers, because the
target parity buffers will not have corresponding pq coefficients.
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)
Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Replace all DMA_64BIT_MASK macro with DMA_BIT_MASK(64)
Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
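For reference, the replacement macro, essentially as defined in
linux/dma-mapping.h:

/* A mask with the low n bits set; 64 is special-cased because
 * shifting a 64-bit value by 64 is undefined behaviour in C. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* So DMA_32BIT_MASK == DMA_BIT_MASK(32) == 0x00000000ffffffffULL
 * and DMA_64BIT_MASK == DMA_BIT_MASK(64) == 0xffffffffffffffffULL */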
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
dma: Add SoF and EoF debugging to ipu_idmac.c, minor cleanup
dw_dmac: add cyclic API to DW DMA driver
dmaengine: Add privatecnt to revert DMA_PRIVATE property
dmatest: add dma interrupts and callbacks
dmatest: add xor test
dmaengine: allow dma support for async_tx to be toggled
async_tx: provide __async_inline for HAS_DMA=n archs
dmaengine: kill some unused headers
dmaengine: initialize tx_list in dma_async_tx_descriptor_init
dma: i.MX31 IPU DMA robustness improvements
dma: improve section assignment in i.MX31 IPU DMA driver
dma: ipu_idmac driver cosmetic clean-up
dmaengine: fail device registration if channel registration fails
Add Start-of-Frame and End-of-Frame debugging to ipu_idmac.c, in the
future it might also be needed for the actual video processing in
mx3-camera, at which point, the ISRs will have to be transferred to
mx3_camera.c, for which ipu_irq_map() and ipu_irq_unmap() functions will
have to be exported.
Also simplify a couple of pointer-dereferences.
Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch adds a cyclic DMA interface to the DW DMA driver. This is
very useful if you want to use the DMA controller in combination with a
sound device which uses cyclic buffers.
Using a DMA channel for cyclic DMA prevents it from being used as a
normal DMA engine until the user calls the cyclic free function on the
DMA channel. Also, a cyclic DMA list cannot be prepared if the channel
is already active.
Signed-off-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Currently dma_request_channel() sets the DMA_PRIVATE capability but
never clears it. So if a public channel was once grabbed by
dma_request_channel(), the device stays PRIVATE forever. Add a
privatecnt member to dma_device to correctly revert it.
[lg@denx.de: fix bad usage of 'chan' in dma_async_device_register]
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
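The accounting reduces to something like this (a simplified sketch of
the grab/release pairing, not the patch's exact code):

#include <linux/dmaengine.h>

/* dma_request_channel() side: mark private and count the grab. */
static void private_get(struct dma_device *device)
{
    dma_cap_set(DMA_PRIVATE, device->cap_mask);
    device->privatecnt++;
}

/* dma_release_channel() side: dropping the last private reference
 * restores the device's public state. */
static void private_put(struct dma_device *device)
{
    if (--device->privatecnt == 0)
        dma_cap_clear(DMA_PRIVATE, device->cap_mask);
}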
Use the callback infrastructure to report driver/hardware hangs or
missed interrupts. Since this makes the test threads much more
aggressive (from: explicit 1ms sleep to: wait_for_completion) we set the
nice value to 10 so as to not swamp legitimate tasks.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Centralize this common initialization (and one case where ipu_idmac is
duplicating ->chan initialization).
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Add DMA error handling to the ISR, move common code fragments to
functions, fix scatter-gather element queuing in the ISR, and survive
channel freeing and re-allocation in quick succession.
Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The i.MX31 IPU DMA driver is a platform driver, but doesn't need hotplug, so we
can use __init and __exit function attributes.
Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Atsushi points out:
"If alloc_percpu or kzalloc failed, chan_id does not match with its
position in device->channels list.
And above "continue" looks buggy anyway. Keeping incomplete channels
in device->channels list looks very dangerous..."
Also, fix up leakage of idr_ref in the idr_pre_get() and channel init
fail cases.
Reported-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This patch adds clkdev support for i.MX31. This is done in a way
similar to what was previously done for i.MX27.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
dmatest: fix use after free in dmatest_exit
ipu_idmac: fix spinlock type
iop-adma, mv_xor: fix mem leak on self-test setup failure
fsldma: fix off by one in dma_halt
I/OAT: fail self-test if callback test reaches timeout
I/OAT: update driver version and copyright dates
I/OAT: list usage cleanup
I/OAT: set tcp_dma_copybreak to 256k for I/OAT ver.3
I/OAT: cancel watchdog before dma remove
I/OAT: fail initialization on zero channels detection
I/OAT: do not set DCACTRL_CMPL_WRITE_ENABLE for I/OAT ver.3
I/OAT: add verification for proper APICID_TAG_MAP setting by BIOS
dmaengine: update kerneldoc
dmatest_cleanup_channel will free dtc, so grab ->chan before it goes
away and use it to do the release.
Reported-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Fix a reference operator that was probably accidentally dropped in a
spin_unlock_irqrestore() call on an ipu lock.
Signed-off-by: Luotao Fu <l.fu@pengutronix.de>
Cc: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
iop_adma_zero_sum_self_test has the brackets in the wrong place for the
setup failure deallocation path. This error was duplicated in
mv_xor_xor_self_test.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Prevent dev_err from firing even if we successfully detected 'dma-idle'
before the full 1ms timeout has elapsed.
Acked-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
If we miss interrupts in the self test then fail registration of this
channel as it is unsuitable for use as a public channel.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Along with the new fixes, update the driver version
and extend the copyright date ranges.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Trivial cleanup: list_del(); list_add_tail() is equivalent
to list_move_tail(). A semantic patch for coccinelle can be
found at www.cccmz.de/~snakebyte/list_move_tail.spatch
Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Upcoming server platforms from Intel based on the Nehalem architecture
have significantly improved CPU-based copy performance.
However, the DMA engine can still be effective at larger I/O sizes
for TCP traffic, and at this time copybreak
should be set to 256k for TCP traffic only.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Channel watchdog should be canceled before the rest of dma remove stuff.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
On some systems with I/OAT ver.2, when DCA is disabled in the BIOS,
zero DMA channels have been observed to be detected instead of four.
To avoid a kernel panic the driver should fail gracefully with an
appropriate message.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Flag DCACTRL_CMPL_WRITE_ENABLE is valid only for I/OAT ver.2
so it should not be set for I/OAT ver.3.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
BIOS versions have been found for systems with I/OAT ver.2
which fail to program APICID_TAG_MAP for DCA.
The ioatdma driver should recognize an incorrectly set APICID_TAG_MAP
and disable DCA in that case.
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
`iop_adma_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv_xor_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_unmap_regs' referenced in section `.devinit.text' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`mv64xxx_i2c_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`orion_nand_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
`pxafb_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.devexit.text' of drivers/built-in.o
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The conversion of atmel-mci to dma_request_channel missed the
initialization of the channel dma_slave information. The filter_fn passed
to dma_request_channel is responsible for initializing the channel's
private data. This implementation has the additional benefit of enabling
a generic client-channel data passing mechanism.
Reviewed-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
i.MX3x SoCs contain an Image Processing Unit, consisting of a Control
Module (CM), Display Interface (DI), Synchronous Display Controller (SDC),
Asynchronous Display Controller (ADC), Image Converter (IC), Post-Filter
(PF), Camera Sensor Interface (CSI), and an Image DMA Controller (IDMAC).
CM contains, among other blocks, an Interrupt Generator (IG) and a Clock
and Reset Control Unit (CRCU). This driver serves IDMAC and IG. They are
supported over dmaengine and irq-chip APIs respectively.
IDMAC is a specialised DMA controller; its DMA channels cannot be used
for general-purpose operations, even though it might be possible to
configure a memory-to-memory channel for memcpy operation. This driver
will not work with generic dmaengine clients; clients wishing to use it
must use the respective wrapper structures, and they must also specify
which channels they require, as channels are hard-wired to specific IPU
functions.
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
dma_find_channel and dma_issue_pending_all are good places to warn about
improper api usage. However, warning correctly means synchronizing with
dma_list_mutex, i.e. too much overhead for these fast-path calls.
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The mpc83xx variant uses a shared IRQ for all channels, so the individual
channel nodes don't have an interrupt property. Fix the code to print the
controller IRQ instead if there isn't any for the channel.
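A minimal sketch of the intended fallback; the field names
(new_fsl_chan->irq, fdev->irq) are assumptions about the driver's
structures, not the verbatim patch:

    /* fall back to the controller IRQ when the channel has none */
    int irq = new_fsl_chan->irq != NO_IRQ ? new_fsl_chan->irq : fdev->irq;

    dev_info(fdev->dev, "#%d, irq %d\n", new_fsl_chan->id, irq);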
Acked-by: Timur Tabi <timur@freescale.com>
Acked-by: Li Yang <leoli@freescale.com>
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
There's no per-channel IRQ on mpc83xx, so only call free_irq if we have one.
Acked-by: Timur Tabi <timur@freescale.com>
Acked-by: Li Yang <leoli@freescale.com>
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The dmatest driver should use DMA_BIDIRECTIONAL on the destination buffer
to ensure that the poison values are written to RAM and not just written
to cache and discarded.
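As a rough illustration (device, buffer, and length names below are
placeholders, not the actual dmatest symbols):

    /* map the destination bidirectionally so the CPU-written poison
     * pattern reaches RAM before the transfer, and the DMA result is
     * visible to the CPU after the unmap */
    dma_addr_t dest = dma_map_single(dev, dest_buf, buf_size,
                                     DMA_BIDIRECTIONAL);

    /* ... run the memcpy transfer and wait for completion ... */

    dma_unmap_single(dev, dest, buf_size, DMA_BIDIRECTIONAL);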
Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The dmaengine sysfs implementation was fixed to support proper
lifetime rules which means that the current:
new_fsl_chan->dev = &new_fsl_chan->common.dev->device;
...retrieves a NULL pointer because new_fsl_chan->common.dev has not
been allocated at this point. So, set new_fsl_chan->dev to a valid
device.
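One plausible shape of the fix, assuming fdev is the parent Freescale DMA
device (the naming is illustrative):

    /* point at the already-valid parent device rather than the
     * not-yet-allocated sysfs device */
    new_fsl_chan->dev = fdev->dev;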
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Reported-by: Ira Snyder <iws@ovro.caltech.edu>
Tested-by: Ira Snyder <iws@ovro.caltech.edu>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In dmaengine we track the dependencies between the descriptors
using the 'next' pointers of the structure. These pointers are
set to NULL as soon as the corresponding descriptor has been
submitted to the channel (in dma_run_dependencies()).
But the first 'next' in the chain remains set even though tx->next has
already been submitted. This may lead to multiple submissions of the same
descriptor. This patch fixes this.
Actually, some previous implementations of the xxx_run_dependencies()
function already had this fix in place, but the fdb..0eaf3 commit, alongside
its correct changes, broke it.
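The essence of the fix, sketched against a generic run-dependencies
routine rather than the exact patch:

    struct dma_async_tx_descriptor *dep = tx->next;

    if (!dep)
        return;
    /* we are about to submit tx->next, so break the link first; this
     * prevents the same descriptor from ever being submitted twice */
    tx->next = NULL;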
Cc: <stable@kernel.org>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In the multiple device case we need to re-arm the completion and protect
against concurrent self-tests. The printk from the test callback is
removed as it can arbitrarily delay completion of the test.
Cc: <stable@kernel.org>
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
There are dmaengine users that would like to register dma devices at
subsys_initcall time to ensure channels are available by device_initcall
time.
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Allow dma_filter_fn routines to disambiguate multiple channels on a device
rather than assuming that all channels on a device are equal.
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Reported-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This brings some predictability to dma device numbers, i.e. an rmmod/insmod
cycle may now result in /sys/class/dma/dma0chan0 being restored rather than
/sys/class/dma/dma1chan0 appearing.
Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Resolves:
WARNING: at drivers/base/core.c:122 device_release+0x4d/0x52()
Device 'dma0chan0' does not have a release() function, it is broken and must be fixed.
The dma_chan_dev object is introduced to gear-match sysfs kobject and
dmaengine channel lifetimes. When a channel is removed, accesses to the
sysfs entries return -ENODEV until the kobject can be released.
The bulk of the change is updates to existing code to handle the extra
layer of indirection between a dma_chan and its struct device.
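Roughly, the new indirection looks like the sketch below (fields beyond
these two are omitted):

    /* gear-matching object: the embedded struct device gives sysfs a
     * kobject whose lifetime is decoupled from the underlying channel */
    struct dma_chan_dev {
        struct dma_chan *chan;  /* cleared when the channel is removed */
        struct device device;   /* freed from its release() method */
    };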
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Unregistering services should only happen at "remove" time. This prevents
the device from being unregistered while dmaengine clients are still
active. Also, the comment on ioat_remove is stale since removal is prevented
while a channel may be in use.
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This BUG_ON caught problems in early development but now it gets in the
way, as it falsely triggers when trying to remove the module.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
DMA_NAK is now useless. We can just use a bool instead.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reference counting is done at the module level so clients need not worry
that a channel will leave while they are actively using dmaengine.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
All users have been converted to either the general-purpose allocator,
dma_find_channel, or dma_request_channel.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Now that clients no longer need to be notified of channel arrival
dma_async_client_register can simply increment the dmaengine_ref_count.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
dma_request_channel provides an exclusive channel, so we no longer need to
pass slave data through dmaengine.
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Replace the client registration infrastructure with a custom loop to
poll for channels. Once dma_request_channel returns NULL stop asking
for channels. A userspace side effect of this change is that loading
the dmatest module before loading a dma driver will result in no
channels being found; previously dmatest would get a callback. To
facilitate testing in the built-in case dmatest_init is marked as a
late_initcall. Another side effect is that channels under test cannot
be used for any other purpose.
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
This interface is primarily for device-to-memory clients which need to
search for dma channels with platform-specific characteristics. The
prototype is:
    struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                         dma_filter_fn filter_fn,
                                         void *filter_param);
When the optional 'filter_fn' parameter is set to NULL
dma_request_channel simply returns the first channel that satisfies the
capability mask. Otherwise, when the mask parameter is insufficient for
specifying the necessary channel, the filter_fn routine can be used to
disposition the available channels in the system. The filter_fn routine
is called once for each free channel in the system. Upon seeing a
suitable channel filter_fn returns DMA_ACK which flags that channel to
be the return value from dma_request_channel. A channel allocated via
this interface is exclusive to the caller, until dma_release_channel()
is called.
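For illustration, a minimal caller might look like the sketch below. It is
written against the later bool-returning form of dma_filter_fn (the
DMA_ACK/DMA_NAK scheme described above was subsequently replaced by a plain
bool, as noted earlier in this log), and my_filter/my_request are invented
names:

    #include <linux/dmaengine.h>

    static bool my_filter(struct dma_chan *chan, void *param)
    {
        /* accept only channels hosted by the device we care about */
        return chan->device->dev == param;
    }

    static struct dma_chan *my_request(struct device *dev)
    {
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);
        /* the channel is exclusively ours until dma_release_channel() */
        return dma_request_channel(mask, my_filter, dev);
    }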
To ensure that all channels are not consumed by the general-purpose
allocator the DMA_PRIVATE capability is provided to exclude a dma_device
from general-purpose (memory-to-memory) consideration.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_tx and net_dma each have open-coded versions of issue_pending_all,
so provide a common routine in dmaengine.
The implementation needs to walk the global device list, so use RCU to
allow dma_issue_pending_all to run lockless. Clients protect
themselves from channel removal events by holding a dmaengine reference.
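The lockless walk is roughly the following (a sketch of the pattern, not
the verbatim patch):

    void dma_issue_pending_all(void)
    {
        struct dma_device *device;
        struct dma_chan *chan;

        rcu_read_lock();
        list_for_each_entry_rcu(device, &dma_device_list, global_node) {
            if (dma_has_cap(DMA_PRIVATE, device->cap_mask))
                continue;
            list_for_each_entry(chan, &device->channels, device_node)
                if (chan->client_count)
                    device->device_issue_pending(chan);
        }
        rcu_read_unlock();
    }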
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Allowing multiple clients to each define their own channel allocation
scheme quickly leads to a pathological situation. For memory-to-memory
offload all clients can share a central allocator.
This simply moves the existing async_tx allocator to dmaengine with
minimal fixups:
* async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
* async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
* split out common code from async_tx.c:__async_tx_find_channel -->
dma_find_channel
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Simply, if a client wants any dmaengine channel then prevent all dmaengine
modules from being removed. Once the clients are done re-enable module
removal.
Why? Beyond reducing complication:
1/ Tracking reference counts per-transaction in an efficient manner, as
is currently done, requires a complicated scheme to avoid cache-line
bouncing effects.
2/ Per-transaction ref-counting gives the false impression that a
dma-driver can be gracefully removed ahead of its user (net, md, or
dma-slave)
3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
if such an engine were built one day we still would not need to notify
clients of remove events. The driver can simply return NULL to a
->prep() request, something that is much easier for a client to handle.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_tx.ko is a consumer of dma channels. A circular dependency arises
if modules in drivers/dma rely on common code in async_tx.ko. It
prevents either module from being unloaded.
Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o
where they should have been from the beginning.
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Mapping the destination multiple times is a misuse of the dma-api.
Since the destination may be reused as a source, ensure that it is only
mapped once and that it is mapped bidirectionally. This appears to add
ugliness on the unmap side in that it always reads back the destination
address from the descriptor, but gcc can determine that dma_unmap is a
nop and not emit the code that calculates its arguments.
Cc: <stable@kernel.org>
Cc: Saeed Bishara <saeed@marvell.com>
Acked-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
It is possible for two devices to be registered with the same id.
Cc: <stable@kernel.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
As part of the ioat_dma self-test it performs a printk from a completion
callback. Depending on the system console configuration this output can
take longer than a millisecond causing the self-test to fail. Introduce a
completion with a generous timeout to mitigate this failure.
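The mitigation pattern, sketched (tx, chan, and dev come from the
surrounding self-test code; self_test_callback is an invented name that
simply completes &cmp):

    DECLARE_COMPLETION_ONSTACK(cmp);
    unsigned long tmo = msecs_to_jiffies(3000);  /* generous timeout */

    tx->callback = self_test_callback;
    tx->callback_param = &cmp;
    tx->tx_submit(tx);
    chan->device->device_issue_pending(chan);

    if (!wait_for_completion_timeout(&cmp, tmo)) {
        dev_err(dev, "self-test copy timed out\n");
        err = -ENODEV;
    }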
Cc: <stable@kernel.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Now that the critical read back to flush the next descriptor address is
fixed we can downgrade some BUG_ONs that need only be enabled when testing
changes to the driver.
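One way to make such assertions compile-time optional is a paranoia
wrapper along these lines (macro names illustrative):

    #ifdef DEBUG
    #define ADMA_PARANOIA 1
    #else
    #define ADMA_PARANOIA 0
    #endif

    /* compiles away entirely when DEBUG is not set */
    #define adma_paranoia(x) BUG_ON(ADMA_PARANOIA && (x))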
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The current dummy read references the wrong address allowing the next
descriptor address update to linger in the store buffer and get passed
by an 'append' event.
This issue was uncovered by the change from strongly-ordered to device
memory for the adma registers.
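The required ordering, sketched with illustrative register and variable
names:

    /* post the next descriptor address... */
    writel(next_desc_phys, chan_base + NEXT_DESC_REG);
    /* ...and read back the SAME register so the write is flushed out
     * of the store buffer before the append doorbell can overtake it */
    readl(chan_base + NEXT_DESC_REG);
    writel(APPEND, chan_base + ACTIVATE_REG);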
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
async_tx.callback should be checked for the first,
not the last, descriptor in the chain.
Cc: <stable@kernel.org>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Error handling needs to be modified in dma_pin_iovec_pages().
It should return NULL instead of ERR_PTR
(pinned_list is checked for NULL in tcp_recvmsg() to determine
if iovec pages have been successfully pinned down).
In case of error for the first iovec,
local_list->nr_iovecs needs to be initialized.
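In outline, the corrected error paths look like this (a sketch with the
pinning loop elided, not the verbatim diff):

    struct dma_pinned_list *dma_pin_iovec_pages(struct iovec *iov, size_t len)
    {
        struct dma_pinned_list *local_list;

        local_list = kmalloc(sizeof(*local_list), GFP_KERNEL);
        if (!local_list)
            goto out;
        /* initialize before any pinning so that a failure on the very
         * first iovec can still be unpinned safely */
        local_list->nr_iovecs = 0;

        /* ... pin each iovec, goto unpin on failure ... */

        return local_list;
    unpin:
        dma_unpin_iovec_pages(local_list);
    out:
        return NULL;    /* callers test for NULL, not IS_ERR() */
    }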
Cc: <stable@kernel.org>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the ioatdma driver is loaded but not used it does not allocate descriptors.
Before it frees channel resources it should first be sure
that they have been previously allocated.
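The guard amounts to something like the following (desccount matches the
ioat driver of that era, but treat the names as approximate):

    static void ioat_dma_free_chan_resources(struct dma_chan *chan)
    {
        struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan);

        /* nothing was allocated if the channel was never used */
        if (ioat_chan->desccount == 0)
            return;

        /* ... free descriptors, pools and completion area ... */
    }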
Cc: <stable@kernel.org>
Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Tested-by: Tom Picard <tom.s.picard@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the I7300_idle driver is not configured, there is a compile-time
warning about IDLE_IOAT_CHANNEL not being defined. Fix it.
Reported-by: Suresh Siddha <suresh.b.siddha@intel.com>
Reported-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Based on input from Andi Kleen:
share the platform detection code with ioat_dma and disable the channel in
the dma engine only for specific platforms.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
The Intel 7300 Memory Controller supports dynamic throttling of memory,
which can be used to save power when the system is idle. This driver
performs the memory throttling when all CPUs are idle on such a system.
Refer to "Intel 7300 Memory Controller Hub (MCH)" datasheet
for the config space description.
Signed-off-by: Andy Henroid <andrew.d.henroid@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
fsldma: allow Freescale Elo DMA driver to be compiled as a module
fsldma: remove internal self-test from Freescale Elo DMA driver
drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL
dmatest: properly handle duplicate DMA channels
drivers/dma/ioat_dma.c: drop code after return
async_tx: make async_tx_run_dependencies() easier to read
The tasklet checks RAW.BLOCK twice, and does not check RAW.XFER. This is
obviously wrong, and could theoretically cause the driver to hang.
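The corrected tasklet reads each raw status register exactly once, along
these lines (a sketch against the dw_dmac register macros):

    u32 status_block, status_xfer;
    int i;

    status_block = dma_readl(dw, RAW.BLOCK);
    status_xfer = dma_readl(dw, RAW.XFER);

    for (i = 0; i < dw->dma.chancnt; i++) {
        struct dw_dma_chan *dwc = &dw->chan[i];

        /* scan on either a block-complete or a transfer-complete event */
        if ((status_block | status_xfer) & (1 << i))
            dwc_scan_descriptors(dw, dwc);
    }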
Reported-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Modify the Freescale Elo / Elo Plus DMA driver so that it can be compiled as
a module.
The primary change is to stop treating the DMA controller as a bus, and the
DMA channels as devices on the bus. This is because the Open Firmware (OF)
kernel code does not allow busses to be removed, so although we can call
of_platform_bus_probe() to probe the DMA channels, there is no
of_platform_bus_remove(). Instead, the DMA channels are manually probed,
similar to what fsl_elbc_nand.c does.
Cc: Scott Wood <scottwood@freescale.com>
Acked-by: Li Yang <leoli@freescale.com>
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The Freescale Elo DMA driver runs an internal self-test before registering
the channels with the DMA engine. This self-test has a fundamental flaw in
that it calls the DMA engine's callback functions directly before the
registration. However, the registration initializes some variables that the
callback functions use, namely the device struct.
The code works today because there are two device structs: the one created
by the DMA engine, and one created by the Open Firmware (OF) subsystem. The
self-test currently uses the device struct created by OF. However, in the
future, some of the device structs created by OF will be eliminated.
This means that the self-test will only have access to the device struct
created by the DMA engine. But this device struct isn't initialized when
the self-test runs, and this causes a kernel panic.
Since there is already a DMA test module (dmatest), the internal self-test
code is not useful anyway. It is extremely unlikely that the test will fail
in normal usage. It may have been helpful during development, but not any more.
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Li Yang <leoli@freescale.com>
Cc: Scott Wood <scottwood@freescale.com>
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>