The DMA_SG is still a type of memory copy operation that should
conform to the hardware restrictions. So this patch just applies the
copy_align to DMA_SG as well.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When CONFIG_PM_SLEEP is disabled, we get a build error in
the cppi41 dmaengine driver, since the runtime-pm functions
are hidden within the wrong #ifdef:
drivers/dma/cppi41.c:1158:21: error: 'cppi41_runtime_suspend' undeclared here (not in a function)
This removes the #ifdef and instead uses __maybe_unused
annotations that cannot have this problem.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: fdea2d09b9 ("dmaengine: cppi41: Add basic PM runtime support")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
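A minimal sketch of the pattern described above (not the actual driver code; the resume callback and pm_ops names are assumed):

    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    static int __maybe_unused cppi41_runtime_suspend(struct device *dev)
    {
            /* ... quiesce the hardware ... */
            return 0;
    }

    /* The callbacks are referenced only through the SET_RUNTIME_PM_OPS()
     * macro, so the __maybe_unused annotation silences the unused-function
     * warning when runtime PM is disabled, without any #ifdef. */
    static const struct dev_pm_ops cppi41_pm_ops = {
            SET_RUNTIME_PM_OPS(cppi41_runtime_suspend,
                               cppi41_runtime_resume, NULL)
    };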
The newly added k3_dma_prep_dma_cyclic function has some debug output
that uses incorrect typecasts, some of which cause a warning like:
drivers/dma/k3dma.c: In function 'k3_dma_prep_dma_cyclic':
drivers/dma/k3dma.c:589:671: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
In general, we have to print 'dma_addr_t' values using special
'%pad' format to get the correct behavior on kernels that have
a 64-bit dma_addr_t type but 32-bit pointers.
Similarly, printing size_t values should be done using the %z
modifier to get the correct behavior on 64-bit kernels.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: a7e08fa6cc ("k3dma: Add cyclic mode for audio")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
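For example, a debug print of a DMA address and a transfer length is written like this (variable names are made up):

    dma_addr_t dma_src;
    size_t len;

    /* %pad takes a pointer to the dma_addr_t; %zu matches size_t */
    dev_dbg(dev, "src=%pad len=%zu\n", &dma_src, len);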
A workaround for a warning introduced a use of the NO_IRQ
macro that should have been gone for a long time.
It is clear from the code that the value cannot actually
be used, but apparently there was a configuration at
some point that caused a warning, so instead of just
reverting that patch, this rearranges the code in a way that
the warning cannot reappear.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 6ef41cf6f7 ("dmaengine :ipu: change ipu_irq_handler() to remove compile warning")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
irq_of_parse_and_map() returns 0 on error, not NO_IRQ, so the
failure condition can never be met.
This changes the comparison to check for zero instead.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The mxs_chan->chan_irq variable is guaranteed to never be NO_IRQ,
as it gets assigned the result of platform_get_irq() that returns
either a valid positive interrupt number, or a negative failure
code that leads to the channel not being used.
This removes the redundant check, eliminating one more instance
of NO_IRQ.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The use of NO_IRQ is incorrect here and should never have been there,
as irq_of_parse_and_map() returns '0' on failure, not NO_IRQ.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since we have the nice macro IRQ_RETVAL(), use it to convert the 'handled'
flag of the interrupt from int to irqreturn_t.
The rationale for doing this is:
a) it implicitly marks hsu_dma_do_irq() as an auxiliary function that
can't be used as an interrupt handler directly, and
b) it aligns with the serial driver, which uses serial8250_handle_irq()
that returns a plain int by design.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
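The generic shape of the pattern, sketched with hypothetical names (only IRQ_RETVAL(), which expands to '(x) ? IRQ_HANDLED : IRQ_NONE', and the irqreturn_t types come from <linux/interrupt.h>):

    #include <linux/interrupt.h>

    /* auxiliary helper: returns non-zero when it handled something */
    static int foo_do_irq(struct foo_chip *chip);

    static irqreturn_t foo_irq_handler(int irq, void *dev_id)
    {
            struct foo_chip *chip = dev_id;

            return IRQ_RETVAL(foo_do_irq(chip));
    }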
Dmatest currently includes the compare and fill time in the
calculated performance numbers. This does not reflect the HW
capability, and the results vary based on the CPU speed instead of
the HW speed.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The HIDMA driver is capable of error detection. However, the error was
not being passed back to the client when the tx_status API is called.
Change the error handling behavior to follow this order:
1. dmaengine asserts the error interrupt
2. Driver receives it and marks the txn as an error
3. Driver completes the txn and notifies the client. No further
submissions. Drop the locks before calling the callback, as subsequent
processing by the client may be in the callback thread.
4. Client invokes tx_status and the error is returned
5. On error, the client calls terminate_all. The channel can be reset and
all descriptors in the active, pending and completed lists freed
6. Client prepares a new txn, and so on.
As part of this work, the reset in the interrupt handler was removed: when
an error happens the HW is put into the disabled state, and the only way
to recover is for the client to terminate the channel.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pass the DMA errors to the client by passing a result argument. The HW only
reports a generic error when something goes wrong, which is why
DMA_TRANS_ABORTED is used for every failure.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
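Roughly, the provider-side completion then looks like this sketch (the descriptor field and error flag are illustrative; struct dmaengine_result and the DMA_TRANS_* codes are the dmaengine definitions):

    struct dmaengine_result result;

    result.result = desc_failed ? DMA_TRANS_ABORTED : DMA_TRANS_NOERROR;
    result.residue = 0;
    /* helper from drivers/dma/dmaengine.h */
    dmaengine_desc_get_callback_invoke(&mdesc->desc, &result);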
There is a race condition between data transfer callback and descriptor
free code. The callback routine may decide to clear the resources even
though the descriptor has not yet been freed.
Instead of calling the callback first and then releasing the memory,
this code changes the order: return the descriptor back to the free pool
first and then call the user-provided callback.
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Let's keep the device enabled between cppi41_dma_issue_pending()
and dmaengine_desc_get_callback_invoke() and rely on the PM runtime
autoidle timeout elsewhere.
As the PM runtime is for whole device, not for each channel,
we need to queue pending transfers if the device is PM runtime
suspended. Then we start the pending transfers in PM runtime
resume.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The calculation of the DMA transaction residue supports only fixed-size
data transfers. This implementation does not cover all operations
(e.g. data receiving) where we need to know the exact number of bytes
transferred.
The loop channel handling was changed to clear the buffer descriptor
errors and to use bd->mode.count to calculate the residue.
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Reviewed-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Having the SDMA driver use a tasklet for running the clients'
callbacks introduces some issues:
- a chance of delivering desynchronized data, because of the race
created by retrieving the DMA transaction status only when the
callback is executed, leaving plenty of time for the transaction
status to get altered.
- inter-transfer latency which can leave channels idle.
Move the callback execution, for cyclic channels, to the SDMA
interrupt (as advised in `Documentation/dmaengine/provider.txt`)
to (a) reduce the inter-transfer latency and (b) eliminate the
race where the DMA transaction status might change by the time it
is read.
The responsibility for the SDMA interrupt latency is moved to the
SDMA clients, which should, case by case, defer the work to
bottom halves when needed.
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Reviewed-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There are at least two known devices, e.g. the DMA controller found on the
ARC AXS101 SDP board, that have an LLP register but no multi-block transfer
support at the same time.
Override the autodetection with user-provided data.
Reported-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Reviewed-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Intel Quark UART uses the DesignWare DMA IP. However, the DMA IP is
connected in such a way that the handshake interface uses inverted polarity.
We have to provide a way to configure this in the DMA driver when setting up
a channel.
Introduce a new member of the custom slave configuration called 'hs_polarity'
and set active-low polarity when this value is 'true'.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
It seems we need to extend the custom slave configuration by one more member
to support the Intel Quark UART. It becomes a burden to manage all members of
struct dw_dma_slave one-by-one.
Replace the set of fields by embedding struct dw_dma_slave into struct
dw_dma_chan.
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Let's just move code from cppi41_dma_issue_pending() to
push_desc_queue() as that's the only call to push_desc_queue().
We want to do this for PM runtime as we need to call push_desc_queue()
also for pending queued transfers from PM runtime resume.
No functional changes, just moves code around.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This allows the k3dma driver to be selected on HiKey via the ARCH_HISI
dependency.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Jingoo Han <jg1.han@samsung.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the k3dma driver doesn't offer the cyclic mode
necessary for handling audio.
This patch adds it.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline, removed a few
bits of logic that didn't seem to have much effect]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With cyclic mode, the shared virt-dma logic doesn't actually
manage the descriptor state, nor the calling of the descriptor
free callback. This results in leaking a desc structure every
time we start an audio transfer.
Thus we must manage it ourselves. The k3dma driver already keeps
track of the active and finished descriptors via the ds_run and ds_done
pointers, so clean up how we handle those two values and, when we
tear down everything in terminate_all, call free_desc on the ds_run
and ds_done pointers if they are not null.
NOTE: HiKey doesn't use the non-cyclic dma modes, so I haven't been
able to test those modes. But with this patch we no longer leak
the desc structures.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
After lots of debugging on an occasional DMA ERR issue, I realized
that the desc structures to which we point the dma hardware are being
allocated out of regular memory. This means that when we fill the desc
structures, the data doesn't always get flushed out to memory by
the time we start the dma transfer, so the dma engine may pick up
null values, resulting in a DMA ERR on the first irq.
Thus, this patch adopts a mechanism similar to the zx296702_dma driver of
allocating the desc structures from a dma pool, so the memory caching
rules are properly set to avoid this issue.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
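Roughly, the dma_pool pattern adopted here looks like the following sketch (pool name, sizes and variable names are illustrative, not taken from the driver):

    #include <linux/dmapool.h>

    /* descriptor memory comes from a coherent pool, visible to the engine */
    pool = dma_pool_create("k3dma_lli", dev, lli_size, __alignof__(u64), 0);

    desc_hw = dma_pool_alloc(pool, GFP_NOWAIT, &desc_hw_phys);
    if (!desc_hw)
            return NULL;
    /* ... fill desc_hw, hand desc_hw_phys to the hardware ... */
    dma_pool_free(pool, desc_hw, desc_hw_phys);
    dma_pool_destroy(pool);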
As it was before, as soon as the DMAC IP felt there was an error
it would return IRQ_NONE, since no actual transfer had completed.
After spinning on that for 100K interrupts, Linux yanks the IRQ with
a "nobody cared" error.
This patch lets the driver handle the interrupt and keep the IRQ alive.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The offsets for ERR1 and ERR2 are actually wrong, which is why an error
can never be cleared.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Max burst len is a 4-bit field, but at the moment it's clipped with
a 5-bit constant; reduce it to what the field can actually express.
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Andy Green <andy@warmcat.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Andy Green <andy.green@linaro.org>
[jstultz: Forward ported to mainline]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Allow i.MX7 to work with the imx-sdma driver.
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Tested-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
of_match_device() could return NULL, and so cause a NULL pointer
dereference later at line 850:
mdma->soc = match->data;
To fix this problem, use of_device_get_match_data(); this also simplifies
the code a little by using a standard function for getting the match data.
This was reported by coverity (CID 1324134).
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
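The shape of the change, as a sketch (mdma->soc is the field quoted above; the error handling is illustrative):

    #include <linux/of_device.h>

    mdma->soc = of_device_get_match_data(&pdev->dev);
    if (!mdma->soc)
            return -EINVAL;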
Commit 498b5fdd40 ("PM / clk: Add support for adding a specific clock
from device-tree") add a new helper function for adding a clock from
device-tree to a device. Update the ADMA driver to use this new function
to simplify the driver.
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Acked-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
free_irq() expects the same device identity that was passed to
corresponding request_irq(), otherwise the IRQ is not freed.
Fixes: e1f7c9eee7 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver")
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
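For reference, the rule being fixed, in sketch form (the handler and cookie names are placeholders):

    ret = request_irq(irq, foo_interrupt, 0, "foo_dma", fdev);
    ...
    /* dev_id must be the same pointer, or the handler is not removed */
    free_irq(irq, fdev);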
When terminating for_each_compatible_node() iteration with
break or return, of_node_put() should be used to prevent
stale device node references from being left behind.
Found by Coccinelle.
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
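The pattern in question, sketched with a made-up compatible string (node_matches() is a placeholder):

    struct device_node *np;

    for_each_compatible_node(np, NULL, "vendor,example-dma") {
            if (node_matches(np)) {
                    /* the iterator holds a reference on np; drop it on early exit */
                    of_node_put(np);
                    break;
            }
    }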
In a very tight timeframe, the debug message in the transfer completion
handler can be misleading, as the completion test report can change just
after the message, and the code flow cannot be deduced from the debug
message.
This is just a cleanup to make debugging easier.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the case where a descriptor is chained on a running channel, and as
explained in the comment in the code 10 lines above, the success of the
chaining is ensured either if :
- the DMA is still running
- or if the chained transfer is completed
Unfortunately the transfer completeness test was done on the descriptor
to which the transfer was chained, and not on the transfer being chained at
the end, i.e. hot-chained.
This corner case is extremely hard to trigger, as usually the DMA chain
is still running, and the first case takes care of returning success of
the hot-chaining. It was seen by hot-chaining several "small transfers"
to a running "big transfer", not in a real-life usecase but by testing
the robustness of the driver.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
sDMA in OMAP3630 or newer SoCs has support for LinkedList transfer. When
the LinkedList or Descriptor load feature is present we can create the
descriptors for each SG element and program sDMA to walk through the list of
descriptors instead of the current way of sDMA stop, sDMA reconfiguration
and sDMA start after each SG transfer.
By using LinkedList transfer in sDMA the number of DMA interrupts
decreases dramatically.
Booting up the board with filesystem on SD card for example:
W/o LinkedList support:
27: 4436 0 WUGEN 13 Level omap-dma-engine
Same board/filesystem with this patch:
27: 1027 0 WUGEN 13 Level omap-dma-engine
Or copying files from the SD card to eMMC:
2.1G /usr/
232001
W/o LinkedList we see ~761069 DMA interrupts.
With LinkedList support it is down to ~269314 DMA interrupts.
With the decreased DMA interrupt number the CPU load is dropping
significantly as well.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of accessing the array via index, take the pointer first and use
it to set up the omap_sg struct.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Print the same information about the sDMA channel that the driver prints
when allocating the channel resources.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On OMAP1 platforms we do not have 32 channels available. Allocate the
lch_map based on the available channels. This way we will not have more
visible channels than are available on the platform.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Flatten the indentation level of the function, which gives a better view
of the cases handled here.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We can drop the (sg)idx parameter for the omap_dma_start_sg() function and
increment the sgidx inside of the same function.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The USB-DMAC's interrupt can happen even if CHCR.DE is not set to 1,
because CHCR.NULLE is set to 1. So, this driver should call
usb_dmac_isr_transfer_end() only if the DE bit is set to 1. Otherwise,
desc may be NULL in usb_dmac_isr_transfer_end().
Fixes: 0c1c8ff32f ("dmaengine: usb-dmac: Add Renesas USB DMA Controller (USB-DMAC) driver")
Cc: <stable@vger.kernel.org> # v4.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware, however we should make the code consistent to not cause
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Cc: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware, however we should make the code consistent to not cause
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware, however we should make the code consistent to not cause
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware, however we should make the code consistent to not cause
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Cc: Xuelin Shi <xuelin.shi@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Completion callback should happen after dma_descriptor_unmap() has
happened. This allows the cache invalidate to happen and ensures that
the data accessed by the upper layer is in memory that came from DMA
rather than stale data. On some architectures this is done by the
hardware, however we should make the code consistent to not cause
confusion.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Provide a mechanism to translate CHANERR bits to English strings in order
to allow the user to report more concise errors.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Adding error handling to the ioatdma driver so that when a
read/write error occurs the error results are reported back and
all the remaining descriptors are aborted. This utilizes the new
dmaengine callback function that allows reporting of results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Adding a new callback that will provide the error result for a transaction.
The result is allocated on the stack and the callback should create a copy
if it wishes to retain the information after exiting. The result parameter
is now defined and takes over the dummy void pointer we placed in the
helper functions previously. dmaengine drivers should start converting
to the new "callback_result" callback in order to receive transaction
results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
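A sketch of what a client-side consumer might look like once providers report results (the context structure is made up; the callback_result prototype and dmaengine_result fields follow the description above):

    static void my_xfer_done(void *param, const struct dmaengine_result *res)
    {
            struct my_ctx *ctx = param;

            /* copy what we need; 'res' lives on the provider's stack */
            ctx->error = (res->result != DMA_TRANS_NOERROR);
            ctx->residue = res->residue;
            complete(&ctx->done);
    }

    ...
    desc->callback_result = my_xfer_done;
    desc->callback_param = ctx;
    dmaengine_submit(desc);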
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Li Yang <leoli@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Viresh Kumar <vireshk@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is in preparation for moving to a callback that provides results to the
callback for the transaction. The conversion will maintain current behavior
and the driver must convert to new callback mechanism at a later time in
order to receive results.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dmaengine does not provide a way to pass back the result of a DMA
transaction through the callback function. We are adding dmaengine
helper functions in order to prepare for a mechanism that allows result
status and other information to be passed through the callback. The initial
conversion will make the existing drivers use these new helper functions but
retain the original behavior of the code. However, the helper functions pave
the way towards adding the result parameter to the callback.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
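In sketch form, a provider's completion path ends up using the new helpers like this (the locking and the descriptor field names are illustrative):

    struct dmaengine_desc_callback cb;

    spin_lock_irqsave(&c->lock, flags);
    /* snapshot the callback pointers while holding the channel lock */
    dmaengine_desc_get_callback(&vd->tx, &cb);
    spin_unlock_irqrestore(&c->lock, flags);

    /* a NULL result keeps the original plain-callback behavior */
    dmaengine_desc_callback_invoke(&cb, NULL);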
Static analysis showed that an uninitialized array is being used for a
compare. At line 850, when a dma_mapping_error() occurs, the code jumps to
dma_unmap. At this point, dma_srcs has not been initialized, yet the code
after the dma_unmap label checks dma_srcs in a comparison and thus compares
against random garbage in the array. Given that the mapping of dest_dma is
the first instance of mapping DMA memory and it failed, there is really
nothing to be cleaned up, so the code should jump to the free_resources
label instead.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Merge tag 'dmaengine-4.8-rc1' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine updates from Vinod Koul:
"This time we have bit of largish changes: two new drivers, bunch of
updates and cleanups to existing set. Nothing super exciting though.
New drivers:
- Xilinx zynqmp dma engine driver
- Marvell xor2 driver
Updates:
- dmatest sg support
- updates and enhancements to Xilinx drivers, adding of cyclic mode
- clock handling fixes across drivers
- removal of OOM messages on kzalloc across subsystem
- interleaved transfers support in omap driver
- runtime pm support in qcom bam dma
- tasklet kill freeup across drivers
- irq cleanup on remove across drivers"
* tag 'dmaengine-4.8-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (94 commits)
dmaengine: k3dma: add missing clk_disable_unprepare() on error in k3_dma_probe()
dmaengine: zynqmp_dma: add missing MODULE_LICENSE
dmaengine: qcom_hidma: use for_each_matching_node() macro
dmaengine: zynqmp_dma: Fix static checker warning
dmaengine: omap-dma: Support for interleaved transfer
dmaengine: ioat: statify symbol
dmaengine: pxa_dma: implement device_synchronize
dmaengine: imx-sdma: remove assignment never used
dmaengine: imx-sdma: remove dummy assignment
dmaengine: cppi: remove unused and bogus check
dmaengine: qcom_hidma_lli: kill the tasklets upon exit
dmaengine: pxa_dma: remove owner assignment
dmaengine: fsl_raid: remove owner assignment
dmaengine: coh901318: remove owner assignment
dmaengine: qcom_hidma: kill the tasklets upon exit
dmaengine: txx9dmac: explicitly freeup irq
dmaengine: sirf-dma: kill the tasklets upon exit
dmaengine: s3c24xx: kill the tasklets upon exit
dmaengine: s3c24xx: explicitly freeup irq
dmaengine: pl330: explicitly freeup irq
...
Add the missing clk_disable_unprepare() before returning
from k3_dma_probe() in the error handling case.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We get a warning about the missing MODULE_LICENSE tag for this newly
added driver module:
WARNING: modpost: missing MODULE_LICENSE() in drivers/dma/xilinx/zynqmp_dma.o
see include/linux/module.h for more information
This adds a "GPL" license, matching the "version 2 or later" information in
the comment at the start of the file.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use for_each_matching_node() macro instead of open coding it.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Initial support for interleaved transfer with sDMA.
The implementation only supports DMA_MEM_TO_MEM and frame_size must be 1.
sDMA needs to be configured for double indexing when ICG is needed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Sparse warns:
drivers/dma/ioat/init.c:1215:6: warning: symbol 'ioat_resume' was not declared. Should it be static?
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Implement the function which waits until a dma channel is stopped to have
a synchronization point.
This also protects pxad_remove() from races, such as spurious
interrupts while removing the driver, because:
- as long as there is one dma channel requested, i.e. dma_chan_get() but
no dma_chan_put(), the try_module_get() of dma_chan_get() prevents
the remove() routine from running
- when the last channel is released, i.e. the last dma_chan_put() is
called, if there is a running DMA, pxad_synchronize() is called
- pxad_synchronize() waits for the channel to stop, which in turn
ensures on the pxa architecture that the interrupt cannot be fired anymore
Reported-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
David reported:
[drivers/dma/imx-sdma.c:769]: (style) Variable 'emi_2_emi' is assigned a value that is never used
Since emi_2_emi is never used afterwards, remove this as well.
Reported-by: David Binderman <dcb314@hotmail.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
David reported:
drivers/dma/imx-sdma.c:1003]: (style) Same expression on both sides of '|='
ORing a value with itself yields the same result, so remove this.
Reported-by: David Binderman <dcb314@hotmail.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In cppi41_dma_prep_slave_sg() the variable num is initialized to zero but
never updated, and a BUG_ON checks whether it is greater than zero, which
will always be false.
Remove the bogus check and this variable.
Reported-by: David Binderman <linuxdev.baldrick@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Sinan Kaya <okaya@codeaurora.org>
debugfs file operations owner is set by core, so remove
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Sinan Kaya <okaya@codeaurora.org>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
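A sketch of the resulting remove path (structure and field names are placeholders, not the actual driver code):

    static int foo_dma_remove(struct platform_device *pdev)
    {
            struct foo_dma_dev *fd = platform_get_drvdata(pdev);

            /* quiesce the interrupt before the rest of the teardown */
            devm_free_irq(&pdev->dev, fd->irq, fd);
            tasklet_kill(&fd->tasklet);
            dma_async_device_unregister(&fd->ddev);
            return 0;
    }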
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Barry Song <Baohua.Song@csr.com>
drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jassi Brar <jassisinghbrar@gmail.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Mario Six <mario.six@gdsys.cc>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Guennadi Liakhovetski <g.liakhovetski+renesas@gmail.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Sparse complains:
drivers/dma/mmp_tdma.c:407:22: warning: symbol 'mmp_tdma_alloc_descriptor' was not declared. Should it be static?
drivers/dma/mmp_tdma.c:595:17: warning: symbol 'mmp_tdma_xlate' was not declared. Should it be static?
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Qiao Zhou <zhouqiao@marvell.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jingchang Lu <b35083@freescale.com>
Cc: Peter Griffin <peter.griffin@linaro.org>
drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
size_t should be printed with %zu, not %lu as the driver did, so fix these
warnings by making that change:
drivers/dma/fsl_raid.c: In function 'fsl_re_prep_dma_genq':
drivers/dma/fsl_raid.c:341:4: warning: format '%lu' expects argument of type
'long unsigned int', but argument 3 has type 'size_t' [-Wformat=]
len, FSL_RE_MAX_DATA_LEN);
^
drivers/dma/fsl_raid.c: In function 'fsl_re_prep_dma_pq':
drivers/dma/fsl_raid.c:428:4: warning: format '%lu' expects argument of type
'long unsigned int', but argument 3 has type 'size_t' [-Wformat=]
len, FSL_RE_MAX_DATA_LEN);
^
drivers/dma/fsl_raid.c: In function 'fsl_re_prep_dma_memcpy':
drivers/dma/fsl_raid.c:549:4: warning: format '%lu' expects argument of type
'long unsigned int', but argument 3 has type 'size_t' [-Wformat=]
len, FSL_RE_MAX_DATA_LEN);
^
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Xuelin Shi <xuelin.shi@freescale.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: Jingchang Lu <b35083@freescale.com>
Cc: Peter Griffin <peter.griffin@linaro.org>
drivers should ensure that tasklets are killed, so that they can't be
executed after driver remove has run.
This driver uses vchan tasklets, so those need to be killed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Sparse complains:
drivers/dma/coh901318.c:269:30: warning: symbol 'chan_config' was not declared. Should it be static?
drivers/dma/coh901318.c:2806:12: warning: symbol 'coh901318_init' was not declared. Should it be static?
drivers/dma/coh901318.c:2812:13: warning: symbol 'coh901318_exit' was not declared. Should it be static?
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
drivers should ensure that tasklets are killed, so that they can't be
run after driver remove is executed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still ON when the device's remove is executed and the irq should be
quiesced before remove is completed.
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
This patch updates the dmatest client to
support scatter-gather dma mode.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Currently the handler ignores the channel 0 interrupt and thus doesn't ack
it properly. This is done in order to allow sdma_run_channel0() to poll
on the irq status bit, as this function may be called in atomic context,
but needs to know when the channel has finished.
This works mostly, as the polling happens under a spinlock, disabling IRQs
on the local CPU, leaving only a very slight race window for a spurious
IRQ to happen if the handler is executed on another CPU in an SMP system.
Still this is clearly suboptimal.
This behavior turns into a real problem on an RT system, where the spinlock
doesn't disable IRQs on the local CPU. Not acking the IRQ in the handler
in such a setup is very likely to drown the CPU in an IRQ storm, leaving
it unable to make any progress in the polling loop, leading to the IRQ
never being acked.
Fix this by properly acknowledging the channel 0 IRQ in the handler.
As the IRQ status bit can no longer be used to poll for the channel
completion, switch over to using the SDMA_H_STATSTOP register for this
purpose, where bit 0 is cleared by the hardware when the channel is done.
Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
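A rough sketch of the new polling approach (SDMA_H_STATSTOP comes from the message above; the poll helper and timeout values are illustrative, not the exact driver code):

    #include <linux/iopoll.h>

    u32 reg;
    int ret;

    /* bit 0 of SDMA_H_STATSTOP is cleared by the hardware when
     * channel 0 is done; poll it instead of the IRQ status bit */
    ret = readl_relaxed_poll_timeout_atomic(sdma->regs + SDMA_H_STATSTOP,
                                            reg, !(reg & BIT(0)), 1, 500);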
In case of error, the function platform_device_register_full()
returns ERR_PTR() and never returns NULL. The NULL test in the
return value check should be replaced with IS_ERR().
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
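The corrected check, in sketch form:

    pdev = platform_device_register_full(&pdevinfo);
    if (IS_ERR(pdev))
            return PTR_ERR(pdev);   /* never NULL on failure, always ERR_PTR() */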
The new mv_xor_v2 driver supports the XOR engines found in the 64-bit
ARM SoCs from Marvell of the Armada 7K and Armada 8K families. This XOR
engine is a completely new hardware block, entirely different from the
one used on previous Marvell Armada platforms, which use the existing
mv_xor driver.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The newly added zynqmp_dma driver produces a warning on 32-bit architectures
when dma_addr_t is 64-bit wide:
drivers/dma/xilinx/zynqmp_dma.c: In function 'zynqmp_dma_config_sg_ll_desc':
drivers/dma/xilinx/zynqmp_dma.c:321:9: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
^
drivers/dma/xilinx/zynqmp_dma.c:321:29: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
((dma_addr_t)sdesc - (dma_addr_t)chan->desc_pool_v);
This changes the cast to the more appropriate uintptr_t.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
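That is, the offset arithmetic becomes something like this sketch (field names taken from the quoted warning):

    /* uintptr_t always matches the pointer width, unlike dma_addr_t */
    offset = (uintptr_t)sdesc - (uintptr_t)chan->desc_pool_v;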
In cyclic DMA mode we need to link the tail bd segment
with the head bd segment to process the bds cyclically.
The current driver does this only for the tx channel;
the same needs to be done for the rx channel case as well.
This patch fixes that.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add the driver for the zynqmp dma engine used in the Zynq
UltraScale+ MPSoC. This dma controller supports memory-to-memory
and I/O-to-I/O buffer transfers.
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>