Handle errors when trying to map the IRQ for the DMA channels.
The main motivation here is to be able to handle probe deferral. E.g. when
using DT overlays it is possible that the DMA controller is probed before
the interrupt controller, depending on the order in the DT.
In order to support this, switch from irq_of_parse_and_map() to
of_irq_get(), which internally does the same, but returns -EPROBE_DEFER
when the interrupt controller is not yet available.
As a result, other errors, such as an invalid IRQ specification or a
missing IRQ, are also handled properly.
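A minimal sketch of the resulting IRQ setup in the channel probe path; the
helper name and the request_irq() arguments shown here are illustrative, not
the exact driver code:

static int xilinx_dma_chan_setup_irq(struct device_node *node,
				     struct xilinx_dma_chan *chan)
{
	/* of_irq_get() maps the interrupt like irq_of_parse_and_map(),
	 * but reports failures, including -EPROBE_DEFER when the
	 * interrupt controller has not been probed yet.
	 */
	chan->irq = of_irq_get(node, 0);
	if (chan->irq < 0)
		return chan->irq;

	return request_irq(chan->irq, xilinx_dma_irq_handler, IRQF_SHARED,
			   "xilinx-dma-controller", chan);
}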
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/20211208114212.234130-1-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
`dmaengine_desc_callback_invoke()`. This makes sure that both types of
callbacks are handled correctly.
The xilinx_dma driver currently doesn't do this for cyclic descriptors and
only handles the `callback` type callback. If the client uses the
`callback_result` type callback, it will never be invoked.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
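A sketch of the cyclic completion path after the switch, assuming the helpers
from drivers/dma/dmaengine.h and the driver's existing locking around
callback invocation:

static void xilinx_dma_chan_handle_cyclic(struct xilinx_dma_chan *chan,
					  struct xilinx_dma_tx_descriptor *desc,
					  unsigned long *flags)
{
	struct dmaengine_desc_callback cb;

	/* Capture whichever callback type the client provided. */
	dmaengine_desc_get_callback(&desc->async_tx, &cb);
	if (dmaengine_desc_callback_valid(&cb)) {
		spin_unlock_irqrestore(&chan->lock, *flags);
		/* Handles both `callback` and `callback_result` clients. */
		dmaengine_desc_callback_invoke(&cb, NULL);
		spin_lock_irqsave(&chan->lock, *flags);
	}
}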
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-2-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Rename the prototype in the kernel-doc description from
xilinx_dma_tx_descriptor to xilinx_dma_alloc_tx_descriptor and from
xilinx_dma_channel_set_config to xilinx_vdma_channel_set_config to
fix the kernel-doc warnings below.
drivers/dma/xilinx/xilinx_dma.c:800: warning: expecting
prototype for xilinx_dma_tx_descriptor(). Prototype was
for xilinx_dma_alloc_tx_descriptor() instead.
drivers/dma/xilinx/xilinx_dma.c:2471: warning: expecting
prototype for xilinx_dma_channel_set_config(). Prototype
was for xilinx_vdma_channel_set_config() instead.
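For reference, kernel-doc expects the comment to name the function it
documents. An abbreviated sketch of the corrected header (the body shown
here is simplified for illustration):

/**
 * xilinx_dma_alloc_tx_descriptor - Allocate transaction descriptor
 * @chan: Driver specific DMA channel
 *
 * Return: The allocated descriptor on success and NULL on failure.
 */
static struct xilinx_dma_tx_descriptor *
xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan)
{
	/* The name in the comment now matches the prototype, so
	 * kernel-doc no longer warns about a prototype mismatch.
	 */
	return kzalloc(sizeof(struct xilinx_dma_tx_descriptor), GFP_NOWAIT);
}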
Signed-off-by: Shravya Kumbham <shravya.kumbham@xilinx.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1631525316-2323-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The Xilinx DMA driver uses consistent allocations, so for correct
operation also set the DMA mask for the coherent APIs. This fixes the
kernel crash below with the dmatest client when the DMA IP is configured
with a 64-bit address width and Linux is booted from high (>4GB) memory.
Call trace:
[ 489.531257] dma_alloc_from_pool+0x8c/0x1c0
[ 489.535431] dma_direct_alloc+0x284/0x330
[ 489.539432] dma_alloc_attrs+0x80/0xf0
[ 489.543174] dma_pool_alloc+0x160/0x2c0
[ 489.547003] xilinx_cdma_prep_memcpy+0xa4/0x180
[ 489.551524] dmatest_func+0x3cc/0x114c
[ 489.555266] kthread+0x124/0x130
[ 489.558486] ret_from_fork+0x10/0x3c
[ 489.562051] ---[ end trace 248625b2d596a90a ]---
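A sketch of the probe-time fix, assuming addr_width holds the configured DMA
address width and using a hypothetical wrapper for illustration:

#include <linux/dma-mapping.h>

static int xilinx_dma_set_masks(struct platform_device *pdev, u32 addr_width)
{
	/* Set the streaming and the coherent DMA mask together, so that
	 * descriptor allocations made through dma_pool/dma_alloc_coherent()
	 * can also be placed above 4 GiB.
	 */
	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
}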
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Reviewed-by: Harini Katakam <harini.katakam@xilinx.com>
Link: https://lore.kernel.org/r/1629363528-30347-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
When a user calls dmaengine_terminate_sync(), the driver will clean up any
remaining descriptors for all the pending or active transfers that had
previously been submitted. However, this might happen whilst the tasklet is
invoking the DMA callback for the last finished transfer, so by the time it
returns and takes over the channel's spinlock, the list of completed
descriptors it was traversing is no longer valid. This leads to a
read-after-free situation.
Fix it by signalling whether a user-triggered termination has happened by
means of a boolean variable.
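A condensed sketch of the idea (not the exact driver diff): the descriptor
cleanup loop rechecks a per-channel `terminating` flag after re-acquiring the
lock around each callback invocation, and terminate sets that flag before
freeing the descriptor lists:

	spin_lock_irqsave(&chan->lock, flags);
	list_for_each_entry_safe(desc, next, &chan->done_list, node) {
		struct dmaengine_desc_callback cb;

		dmaengine_desc_get_callback(&desc->async_tx, &cb);
		spin_unlock_irqrestore(&chan->lock, flags);
		dmaengine_desc_callback_invoke(&cb, NULL);
		spin_lock_irqsave(&chan->lock, flags);

		/* terminate_all() may have freed the remaining descriptors
		 * while the lock was dropped; stop walking the stale list.
		 */
		if (chan->terminating)
			break;
	}
	spin_unlock_irqrestore(&chan->lock, flags);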
Signed-off-by: Adrian Larumbe <adrian.martinezlarumbe@imgtec.com>
Link: https://lore.kernel.org/r/20210706234338.7696-3-adrian.martinezlarumbe@imgtec.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The Xilinx dmaengine driver uses a tasklet to process completed
descriptors and execute their callbacks.
Currently, consumers of the DMA channel have no way of synchronizing
against this tasklet when using the Xilinx dmaengine drivers. This can lead
to race conditions when the consumer frees resources that are accessed in
the callback before the tasklet has finished running.
It is not enough to just call dmaengine_terminate_all(), since on a
multi-processor system the tasklet can run concurrently with it and might
call the callback after dmaengine_terminate_all() has already finished.
To mitigate this issue implement the synchronize() callback for the driver,
which will wait until the tasklet has finished.
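A sketch of the callback, which here amounts to killing the per-channel
tasklet; the registration line is shown for context and its exact placement
in probe is assumed:

static void xilinx_dma_synchronize(struct dma_chan *dchan)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);

	/* Wait for a running tasklet to finish and prevent it from being
	 * scheduled again, so no callback can run after this returns.
	 */
	tasklet_kill(&chan->tasklet);
}

	/* in probe: */
	xdev->common.device_synchronize = xilinx_dma_synchronize;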
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20210313125311.4823-1-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The DMA transfer might finish just after checking the state with
dma_cookie_status, but before the lock is acquired. Not checking
for an empty list in xilinx_dma_tx_status may result in reading
random data or data corruption when desc is written to. This can
be reliably triggered by using dma_sync_wait to wait for DMA
completion.
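A sketch of the fix in the status callback, with the residue computation
abbreviated into a hypothetical helper:

static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
					    dma_cookie_t cookie,
					    struct dma_tx_state *txstate)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
	struct xilinx_dma_tx_descriptor *desc;
	enum dma_status ret;
	unsigned long flags;
	u32 residue = 0;

	ret = dma_cookie_status(dchan, cookie, txstate);
	if (ret == DMA_COMPLETE || !txstate)
		return ret;

	spin_lock_irqsave(&chan->lock, flags);
	/* The transfer may have completed between dma_cookie_status() and
	 * taking the lock; only touch the tail descriptor if the active
	 * list is still populated.
	 */
	if (!list_empty(&chan->active_list)) {
		desc = list_last_entry(&chan->active_list,
				       struct xilinx_dma_tx_descriptor, node);
		residue = xilinx_dma_get_residue(chan, desc); /* illustrative helper */
	}
	spin_unlock_irqrestore(&chan->lock, flags);

	dma_set_residue(txstate, residue);
	return ret;
}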
Signed-off-by: Sebastian von Ohr <vonohr@smaract.com>
Tested-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/20200303130518.333-1-vonohr@smaract.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In an overlay application we noticed that the DMA channel node probe order
is inverted, i.e. the s2mm channel is probed first, followed by the mm2s
channel. The reason for this inversion is the fdtoverlay utility, which uses
a function called fdt_add_subnode(). It stores subnodes after the
properties, which has the effect of inserting each new subnode before any
others; the end result is a reversal.
Because of this inverted probe order, the node probed first is assigned
index '0', whereas by convention the channel ID should be '0' for tx and
'1' for rx. A dmatest client that relies on this DT convention then fails
its DMA transfers because the channels are swapped.
To fix this behavior and make the channel index assignment independent of
probe order, always assign the mm2s channel index '0' and the s2mm channel
an IP-specific fixed offset derived from the max_channels count.
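An abbreviated sketch of the channel probe after the change (only one
compatible string per direction is shown; the counters are assumed to be
initialized so that s2mm_chan_id starts at max_channels / 2):

	if (of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel")) {
		chan->direction = DMA_MEM_TO_DEV;
		chan->id = xdev->mm2s_chan_id++;	/* always 0, 1, ... */
	} else if (of_device_is_compatible(node, "xlnx,axi-dma-s2mm-channel")) {
		chan->direction = DMA_DEV_TO_MEM;
		/* The fixed offset keeps the rx index independent of which
		 * channel node happened to be probed first.
		 */
		chan->id = xdev->s2mm_chan_id++;
	}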
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1580388865-9960-3-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reset the DMA channel after stop to ensure that pending transfers and FIFOs
in the datapath are flushed or completed. Also clean up the terminate path
and remove the stop for cyclic mode, as after the reset the stop is no
longer required. This fixes intermittent data verification failures when
the Xilinx DMA test client is stressed and loaded/unloaded multiple times.
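A condensed sketch of the reworked terminate path; error handling and the
cyclic-mode register cleanup are omitted and the exact placement is
illustrative:

static int xilinx_dma_terminate_all(struct dma_chan *dchan)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);

	/* For cyclic mode the stop is skipped: the reset below already
	 * halts the channel and flushes the datapath.
	 */
	if (!chan->cyclic)
		chan->stop_transfer(chan);

	/* Reset to flush pending transfers and datapath FIFOs. */
	xilinx_dma_chan_reset(chan);

	/* Remove and free all of the descriptors in the lists. */
	xilinx_dma_free_descriptors(chan);
	chan->idle = true;

	return 0;
}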
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1580283909-32678-1-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add support for AXI Multichannel Direct Memory Access (AXI MCDMA)
core, which is a soft Xilinx IP core that provides high-bandwidth
direct memory access between memory and AXI4-Stream target peripherals.
The AXI MCDMA core provides a scatter-gather interface with multiple
independent transmit and receive channels. The driver supports the
device_prep_slave_sg slave transfer mode.
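For context, a minimal client-side sketch of driving one MCDMA channel
through the generic dmaengine slave API; the channel name, scatterlist and
completion handler are illustrative assumptions:

	struct dma_chan *chan;
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;

	chan = dma_request_chan(dev, "tx_chan0");	/* name from the client DT node */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!desc)
		return -ENOMEM;

	desc->callback = my_xfer_done;		/* hypothetical completion handler */
	desc->callback_param = my_ctx;
	cookie = dmaengine_submit(desc);
	dma_async_issue_pending(chan);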
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571763622-29281-7-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Whenever we reset the channel, we need to clear desc_pendingcount
along with desc_submitcount. Otherwise, when a new transaction is
submitted, the IRQ coalescing level could be programmed to an incorrect
value in the AXI DMA case.
This behavior can be observed when terminating pending transactions
with xilinx_dma_terminate_all() and then submitting new transactions
without releasing and requesting the channel.
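An abbreviated sketch of the reset path after the fix; the hardware reset
sequence itself is omitted and its exact placement in the driver is assumed:

static int xilinx_dma_chan_reset(struct xilinx_dma_chan *chan)
{
	/* ... hardware reset sequence omitted ... */

	chan->err = false;
	chan->idle = true;
	/* Clear both counters so a stale desc_pendingcount is never used
	 * to program the IRQ coalescing level for the next transaction
	 * on AXI DMA.
	 */
	chan->desc_submitcount = 0;
	chan->desc_pendingcount = 0;

	return 0;
}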
Signed-off-by: Nicholas Graumann <nick.graumann@gmail.com>
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/1571150904-3988-8-git-send-email-radhey.shyam.pandey@xilinx.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Pull dmaengine updates from Vinod Koul:
- Add support in dmaengine core to do device node checks for DT devices
and update a bunch of drivers to use that and remove open coding from
drivers
- New driver/driver support for new hardware, namely:
- MediaTek UART APDMA
- Freescale i.mx7ulp edma2
- Synopsys eDMA IP core version 0
- Allwinner H6 DMA
- Updates to axi-dma and support for interleaved cyclic transfers
- Greg's debugfs return value check removals on drivers
- Updates to stm32-dma, hsu, dw, pl330, tegra drivers
* tag 'dmaengine-5.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (68 commits)
dmaengine: Revert "dmaengine: fsl-edma: add i.mx7ulp edma2 version support"
dmaengine: at_xdmac: check for non-empty xfers_list before invoking callback
Documentation: dmaengine: clean up description of dmatest usage
dmaengine: tegra210-adma: remove PM_CLK dependency
dmaengine: fsl-edma: add i.mx7ulp edma2 version support
dt-bindings: dma: fsl-edma: add new i.mx7ulp-edma
dmaengine: fsl-edma-common: version check for v2 instead
dmaengine: fsl-edma-common: move dmamux register to another single function
dmaengine: fsl-edma: add drvdata for fsl-edma
dmaengine: Revert "dmaengine: fsl-edma: support little endian for edma driver"
dmaengine: rcar-dmac: Reject zero-length slave DMA requests
dmaengine: dw: Enable iDMA 32-bit on Intel Elkhart Lake
dmaengine: dw-edma: fix semicolon.cocci warnings
dmaengine: sh: usb-dmac: Use [] to denote a flexible array member
dmaengine: dmatest: timeout value of -1 should specify infinite wait
dmaengine: dw: Distinguish ->remove() between DW and iDMA 32-bit
dmaengine: fsl-edma: support little endian for edma driver
dmaengine: hsu: Revert "set HSU_CH_MTSR to memory width"
dmagengine: pl330: add code to get reset property
dt-bindings: pl330: document the optional resets property
...
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We get a compiler warning about variable ‘tail_desc’ being set but not used:
drivers/dma/xilinx/xilinx_dma.c:1102:42: warning:
variable ‘tail_desc’ set but not used [-Wunused-but-set-variable]
struct xilinx_dma_tx_descriptor *desc, *tail_desc;
So remove it.
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Pull dmaengine updates from Vinod Koul:
- dmatest updates for modularizing common struct and code
- remove SG support for VDMA xilinx IP and updates to driver
- Update to dw driver to support Intel iDMA controllers multi-block
support
- tegra updates for proper reporting of residue
- Add Snow Ridge ioatdma device id and support for IOATDMA v3.4
- struct_size() usage and useless LIST_HEAD cleanups in subsystem.
- qDMA controller driver for Layerscape SoCs
- stm32-dma PM Runtime support
- And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
bcm2835, qcom_hidma etc
* tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (81 commits)
dmaengine: imx-sdma: fix consistent dma test failures
dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
dmaengine: imx-sdma: add clock ratio 1:1 check
dmaengine: dmatest: move test data alloc & free into functions
dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()
dmaengine: dmatest: wrap src & dst data into a struct
dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
dmaengine: ioatdma: Add Snow Ridge ioatdma device id
dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier
dmaengine: mv_xor: Use correct device for DMA API
Documentation :dmaengine: clarify DMA desc. pointer after submission
Documentation: dmaengine: fix dmatest.rst warning
dmaengine: k3dma: Add support for dma-channel-mask
dmaengine: k3dma: Delete axi_config
dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
Documentation: bindings: dma: Add binding for dma-channel-mask
Documentation: bindings: k3dma: Extend the k3dma driver binding to support hisi-asp
...
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/dma/xilinx/xilinx_dma.c: In function 'xilinx_vdma_start_transfer':
drivers/dma/xilinx/xilinx_dma.c:1104:33: warning:
variable 'tail_segment' set but not used [-Wunused-but-set-variable]
It has not been used since commit b8349172b4 ("dmaengine: xilinx_dma: Drop
SG support for VDMA IP").
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
We already need to zero out memory for dma_alloc_coherent(), as such
using dma_zalloc_coherent() is superfluous. Phase it out.
This change was generated with the following Coccinelle SmPL patch:
@ replace_dma_zalloc_coherent @
expression dev, size, data, handle, flags;
@@
-dma_zalloc_coherent(dev, size, handle, flags)
+dma_alloc_coherent(dev, size, handle, flags)
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
[hch: re-ran the script on the latest tree]
Signed-off-by: Christoph Hellwig <hch@lst.de>
xilinx_vdma_start_transfer() is used only for the VDMA IP, yet it still
contains code conditional on the has_sg variable. has_sg is set only when
the HW supports SG mode, which is never the case for the VDMA IP.
This patch drops the never-taken branches.
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The AXIDMA and CDMA HW can come in either a direct-access or a
scatter-gather version. These are SW incompatible.
The driver can handle both versions: a DT property was used to
tell the driver whether to assume the HW is in scatter-gather mode.
This patch makes the driver autodetect this information. The DT
property is not required anymore.
No changes for VDMA.
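A sketch of the autodetection, assuming the DMASR status register exposes an
"SG included" bit; the register offset, mask name and accessor follow the
driver's conventions but are shown here as an illustration:

#define XILINX_DMA_REG_DMASR		0x0004
#define XILINX_DMA_DMASR_SG_MASK	BIT(3)

	/* Read back whether the core was synthesized with the
	 * scatter-gather engine instead of relying on a DT property.
	 */
	chan->has_sg = dma_ctrl_read(chan, XILINX_DMA_REG_DMASR) &
		       XILINX_DMA_DMASR_SG_MASK;
	dev_dbg(chan->dev, "ch %d: SG %s\n", chan->id,
		chan->has_sg ? "enabled" : "disabled");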
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Whenever a single or cyclic transaction is prepared, the driver may end up
splitting it over several SG descriptors in order to deal with the HW
maximum transfer length.
This can result in DMA operations starting from a misaligned address,
which appears to be fatal for the HW if DRE (Data Realignment Engine)
is not enabled.
This patch adjusts the transfer size, where necessary, in order to make
sure all operations start from an aligned address.
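A sketch of the adjusted copy-size calculation: when DRE is not available
the driver advertises a copy_align, and intermediate chunks are rounded down
so that the next chunk starts on an aligned address:

static size_t xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
				       int size, int done)
{
	size_t copy;

	copy = min_t(size_t, size - done,
		     chan->xdev->max_buffer_len);

	if ((copy + done < size) &&
	    chan->xdev->common.copy_align) {
		/* Not the last chunk: trim it so the following chunk
		 * begins aligned.
		 */
		copy = rounddown(copy,
				 (1 << chan->xdev->common.copy_align));
	}

	return copy;
}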
Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This patch removes a bit of duplicated code by introducing a new
function that implements the DMA copy size calculation, and it
prepares for changes to that calculation in the following patches.
Suggested-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>