Commit Graph

1603 Commits

Tomasz Figa
48924e4224 dmaengine: PL08x: Move LLI dumping code into separate function
This patch refactors the debugging code that dumps LLI entries by moving
it into a separate function, which is stubbed out when VERBOSE_DEBUG is
not selected. This allows us to get rid of the ugly ifdef in the body of
pl08x_fill_llis_for_desc().

No functional change is introduced by this patch.

Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-09-02 11:49:56 +05:30
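
The stub-when-disabled pattern described above can be sketched roughly as
follows; the function name, signature and dump format are illustrative
assumptions, not the driver's actual code:

#include <linux/device.h>
#include <linux/types.h>

#ifdef VERBOSE_DEBUG
static void pl08x_dump_lli(struct device *dev, const u32 *lli, int num_words)
{
	int i;

	/* Dump each 32-bit word of the LLI when verbose debugging is on. */
	for (i = 0; i < num_words; i++)
		dev_vdbg(dev, "lli word %d: 0x%08x\n", i, lli[i]);
}
#else
/* Empty inline stub: callers need no #ifdef of their own. */
static inline void pl08x_dump_lli(struct device *dev, const u32 *lli,
				  int num_words)
{
}
#endif
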
Tomasz Figa
ba6785ffc8 dmaengine: PL08x: Rework LLI handling to be less fragile
Currently, memory allocated for LLIs is cast to an array of structs,
which is fragile and also limits the driver to a single, predefined LLI
layout, while some variants of PL08x (namely PL080S with its extra
CCTL2) have more fields in their LLIs.

This patch makes LLIs a sequence of 32-bit words, which is simply filled
with the appropriate values in the appropriate order and padded with the
required number of dummy words (currently zero, but PL080S will make
better use of this).

Suggested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-09-02 11:49:56 +05:30
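
A rough sketch of what word-based LLI filling can look like; the word
offsets and helper below are assumptions for illustration (PL080S would
simply use a larger lli_words value and an extra CCTL2 slot):

#include <linux/string.h>
#include <linux/types.h>

/* Word positions inside one PL080 LLI. */
enum {
	LLI_WORD_SRC  = 0,
	LLI_WORD_DST  = 1,
	LLI_WORD_LLI  = 2,
	LLI_WORD_CCTL = 3,
};

/* Fill one LLI given as a run of 32-bit words (32-bit bus addresses
 * assumed here) and zero-pad any remaining words. */
static void pl08x_write_lli(u32 *lli, int lli_words,
			    u32 src, u32 dst, u32 next_lli, u32 cctl)
{
	lli[LLI_WORD_SRC]  = src;
	lli[LLI_WORD_DST]  = dst;
	lli[LLI_WORD_LLI]  = next_lli;
	lli[LLI_WORD_CCTL] = cctl;

	memset(&lli[LLI_WORD_CCTL + 1], 0,
	       (lli_words - (LLI_WORD_CCTL + 1)) * sizeof(u32));
}
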
Tomasz Figa
d86ccea794 dmaengine: PL08x: Add support for different offset of CONFIG register
Some variants of PL08x (namely PL080S, found in Samsung S3C64xx SoCs)
have the CONFIG register at a different offset. This patch makes the
driver use the offset from the vendor data struct.

Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-09-02 11:49:56 +05:30
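
In practice such a per-variant offset is carried in a vendor data struct
and applied at register access time; a hypothetical sketch (struct layout
assumed, not the driver's real one):

#include <linux/io.h>
#include <linux/types.h>

struct vendor_data {
	u8 config_offset;	/* offset of the channel CONFIG register */
};

/* Access CONFIG through the vendor data instead of a hard-coded offset. */
static u32 pl08x_read_config(void __iomem *ch_base,
			     const struct vendor_data *vd)
{
	return readl(ch_base + vd->config_offset);
}
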
Tomasz Figa
68a7faa200 dmaengine: PL08x: Refactor pl08x_getbytes_chan() to lower indentation
A further patch will introduce support for PL080S, which requires some
things to be done conditionally, increasing the indentation level of
some functions even more.

This patch reduces the indentation level of pl08x_getbytes_chan() by
inverting several conditions and returning from the function wherever
possible.

Signed-off-by: Tomasz Figa <tomasz.figa@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-09-02 11:49:56 +05:30
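
The refactoring technique (invert conditions, return early), shown on a
made-up function rather than the driver's real one:

#include <linux/types.h>

struct foo_desc { size_t remaining; };
struct foo_chan { int running; struct foo_desc *at; };

/* Before: every check adds a level of indentation. */
static size_t foo_getbytes_nested(struct foo_chan *c)
{
	size_t bytes = 0;

	if (c->running) {
		if (c->at)
			bytes = c->at->remaining;
	}

	return bytes;
}

/* After: inverted conditions and early returns keep the body flat. */
static size_t foo_getbytes_flat(struct foo_chan *c)
{
	if (!c->running)
		return 0;
	if (!c->at)
		return 0;

	return c->at->remaining;
}
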
Lars-Peter Clausen
39ff86130a dma: pl330: Fix handling of TERMINATE_ALL while processing completed descriptors
The pl330 DMA driver is broken with regard to handling a terminate-all
request while it is processing the list of completed descriptors. This is
most visible when calling dmaengine_terminate_all() from within the
descriptor callback for cyclic transfers. In this case the TERMINATE_ALL
operation will clear the work_list and stop the transfer. But after all
callbacks for the completed descriptors have been handled, the
descriptors will be re-enqueued into the (now empty) work_list. So the
next time dma_async_issue_pending() is called for the channel, these
descriptors will be transferred again, which will cause data corruption.
Similar issues can occur if dmaengine_terminate_all() is not called from
within the descriptor callback but runs on a different CPU at the same
time as the completed descriptor list is processed.

This patch introduces a new per-channel list which holds the completed
descriptors. While processing the list, the channel's lock is held to
avoid racing against dmaengine_terminate_all(). The lock is released
when calling a descriptor's callback, though. Since the list of
completed descriptors might be modified (e.g. by calling
dmaengine_terminate_all() from the callback), we can not use the normal
list iterator macros. Instead we need to check on each loop iteration
whether there are still items in the list. The driver's TERMINATE_ALL
implementation is updated to move descriptors from both the work_list
and the new completed_list back to the descriptor pool. This makes sure
that none of the descriptors finds its way back into the work list and
also that we do not call any further completion callbacks after
dmaengine_terminate_all() has been called.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-28 11:27:17 +05:30
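
The iteration pattern described above — take descriptors off a per-channel
completed list one at a time under the lock, dropping the lock only around
the callback — can be sketched roughly like this (types and names are
illustrative, not the pl330 driver's actual ones):

#include <linux/dmaengine.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct foo_desc {
	struct list_head node;
	struct dma_async_tx_descriptor txd;
};

struct foo_chan {
	spinlock_t lock;
	struct list_head completed_list;
	struct list_head desc_pool;
};

static void foo_process_completed(struct foo_chan *pch)
{
	unsigned long flags;

	spin_lock_irqsave(&pch->lock, flags);

	/* Re-check the list on every iteration instead of using a list
	 * iterator macro: the callback may call dmaengine_terminate_all(),
	 * which empties completed_list behind our back. */
	while (!list_empty(&pch->completed_list)) {
		struct foo_desc *desc;
		dma_async_tx_callback cb;
		void *cb_param;

		desc = list_first_entry(&pch->completed_list,
					struct foo_desc, node);
		list_del(&desc->node);

		cb = desc->txd.callback;
		cb_param = desc->txd.callback_param;

		/* Drop the lock only around the client callback. */
		spin_unlock_irqrestore(&pch->lock, flags);
		if (cb)
			cb(cb_param);
		spin_lock_irqsave(&pch->lock, flags);

		/* Return the descriptor to the pool. */
		list_add_tail(&desc->node, &pch->desc_pool);
	}

	spin_unlock_irqrestore(&pch->lock, flags);
}
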
Zhangfei Gao
8e6152bc66 dmaengine: Add hisilicon k3 DMA engine driver
Add a dmaengine driver for the Hisilicon K3 platform, based on virt_dma.

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Tested-by: Kai Yang <jean.yangkai@huawei.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-28 11:23:40 +05:30
Sascha Hauer
dcfec3c098 dma: imx-sdma: Add ROM script addresses to driver
This adds the ROM script addresses for i.MX25, i.MX5x and i.MX6 to the
SDMA driver; they are needed for the driver to work without additional
firmware.

The ROM script addresses are SoC specific and in some cases even tapeout
specific. This patch adds the ROM script addresses only for SoCs which
do not have a tapeout specific SDMA ROM, because currently it's unclear
how this case should be handled.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-26 20:47:16 +05:30
Sascha Hauer
17bba72f8f dma: imx-sdma: Use struct for driver data
Use a struct type instead of an enum type for distinguishing between
different versions. This makes it simpler to handle multiple differences
without cluttering the code with comparisons for certain devtypes.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-26 20:47:16 +05:30
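
The general shape of keying per-variant behaviour off a driver data struct
attached to the match table; values, field names and the compatible string
below are only an assumed example, not the sdma driver's actual data:

#include <linux/of_device.h>
#include <linux/platform_device.h>

struct sdma_driver_data {
	int chnenbl0;		/* example per-variant register offset */
	int num_events;		/* example per-variant property */
};

static const struct sdma_driver_data sdma_example_imx = {
	.chnenbl0	= 0x80,
	.num_events	= 32,
};

static const struct of_device_id sdma_dt_ids[] = {
	{ .compatible = "fsl,example-sdma", .data = &sdma_example_imx },
	{ /* sentinel */ }
};

/* In probe(): fetch the variant data from the OF match instead of
 * switching on an enum devtype all over the driver. */
static const struct sdma_driver_data *
sdma_get_drvdata(struct platform_device *pdev)
{
	const struct of_device_id *of_id =
		of_match_device(sdma_dt_ids, &pdev->dev);

	return of_id ? of_id->data : NULL;
}
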
Daniel Mack
023bf55f1c dma: mmp_pdma: set DMA_PRIVATE
As the driver now has its own xlate function and makes use of
dma_get_slave_channel(), we need to manually set the DMA_PRIVATE flag.

Drivers which rely on of_dma_simple_xlate() implicitly do the same by
going through __dma_request_channel().

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:53 +05:30
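
A minimal probe-time sketch of what setting that flag looks like for a
driver that hands out channels through its own xlate; the structure and
names are hypothetical and channel setup is omitted:

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct foo_dma_dev {
	struct dma_device device;
};

static int foo_dma_probe(struct platform_device *op)
{
	struct foo_dma_dev *fdev;

	fdev = devm_kzalloc(&op->dev, sizeof(*fdev), GFP_KERNEL);
	if (!fdev)
		return -ENOMEM;

	fdev->device.dev = &op->dev;
	dma_cap_set(DMA_SLAVE, fdev->device.cap_mask);
	/* Channels are handed out via a custom OF xlate using
	 * dma_get_slave_channel(), which bypasses __dma_request_channel(),
	 * so DMA_PRIVATE has to be set explicitly here. */
	dma_cap_set(DMA_PRIVATE, fdev->device.cap_mask);

	/* ... channel and callback setup omitted in this sketch ... */
	return dma_async_device_register(&fdev->device);
}
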
Daniel Mack
50440d74aa dma: mmp_pdma: add support for cyclic DMA descriptors
Provide a callback to prepare cyclic DMA transfers.
This is for instance needed for audio channel transport.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:52 +05:30
Daniel Mack
0cd6156177 dma: mmp_pdma: don't clear DCMD_ENDIRQEN at end of pending chain
In order to fully support multiple transactions per channel, we need to
ensure we get an interrupt for each completed transaction. That flag bit
is also our only way to tell at which descriptor a transaction ends.

So, remove the manual clearing of that bit, and then inline the only
remaining command left in append_pending_queue() for better
readability.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:52 +05:30
Daniel Mack
b721f9e800 dma: mmp_pdma: only complete one transaction from dma_do_tasklet()
Currently, when an interrupt has occurred for a channel, the tasklet
worker code will only look at the very last entry in the running list,
complete its cookie, and then dispose of the entire running chain.
Hence, the first transaction's cookie will never complete.

In fact, the interrupt we should handle will be the one related to the
first descriptor in the chain with the ENDIRQEN bit set, so complete
the second transaction that is in fact still running.

As a result, the driver can't currently handle multiple transactions on
one channel, and it's likely that no drivers exist that rely on this
feature.

Fix this by walking the running_chain and looking for the first
descriptor that has the interrupt-enable bit set. Only queue descriptors
up to that point for completion handling, while leaving the rest intact.
Also, only make the channel idle if the list is completely empty after
such a cycle.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 22:04:52 +05:30
Andy Shevchenko
b4d6d33676 acpi-dma: remove ugly conversion
In the case of a big-endian CPU we have to either convert all fields in
the structure or leave this job to ACPICA. The second choice seems best.

So, let's remove the ugly conversion, which is not fully comprehensive
anyway.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 16:43:45 +05:30
Dan Carpenter
5be2190af4 dmaengine: ste_dma40: off by one in d40_of_probe()
If "num_disabled" is equal to STEDMA40_MAX_PHYS (32) then we would write
one space beyond the end of the pdata->disable_channels[] array.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 16:23:32 +05:30
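
The fix is the classic inclusive-bound check on an array index; a minimal
standalone illustration (not the driver's actual code):

#include <linux/errno.h>

#define STEDMA40_MAX_PHYS 32

static int disabled_channels[STEDMA40_MAX_PHYS];

static int record_disabled_channel(int num_disabled, int chan)
{
	/* A "> STEDMA40_MAX_PHYS" check would still let num_disabled == 32
	 * through and write one element past the end of the array;
	 * ">=" rejects it. */
	if (num_disabled >= STEDMA40_MAX_PHYS)
		return -EINVAL;

	disabled_channels[num_disabled] = chan;
	return 0;
}
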
Fabio Estevam
3a919d5b43 dma: ste_dma: Fix warning when CONFIG_ARM_LPAE=y
When CONFIG_ARM_LPAE=y, the following build warnings are generated:

drivers/dma/ste_dma40.c:3228:2: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3582:3: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3582:3: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'resource_size_t' [-Wformat]
drivers/dma/ste_dma40.c:3593:5: warning: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'resource_size_t' [-Wformat]

According to Documentation/printk-formats.txt, '%pa' can be used to
properly print 'resource_size_t'.

Also, for printing a memory region, '%pr' is more convenient.

Reported-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 16:17:27 +05:30
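
For reference, a small sketch of the recommended format specifiers in use
(the function itself is hypothetical):

#include <linux/device.h>
#include <linux/ioport.h>

static void foo_report_region(struct device *dev, struct resource *res)
{
	resource_size_t size = resource_size(res);

	/* resource_size_t may be 64 bits wide with CONFIG_ARM_LPAE=y, so
	 * pass the values by reference and let %pa pick the width. */
	dev_info(dev, "regs at %pa, size %pa\n", &res->start, &size);

	/* Or print the whole struct resource in one go. */
	dev_info(dev, "memory region: %pr\n", res);
}
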
Jingoo Han
4d1b80bf7a dma: ipu: remove unnecessary platform_set_drvdata()
The driver core clears the driver data to NULL after device_release or
on probe failure. Thus, there is no need to manually clear the driver
data to NULL.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-25 14:37:13 +05:30
Shawn Guo
b1baec525e dma: mxs-dma: remove code left from generic DMA binding conversion
With all mxs-dma clients moved to use the generic DMA helper, the code
left over from the generic DMA binding conversion can now be removed.

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-19 14:20:21 +05:30
Huang Shijie
e8690fc2bc dma: imx-sdma: remove the unused completion
After the patch: "2ccaef0 dma: imx-sdma: make channel0 operations atomic",
the "done" completion is not used any more.

Just remove it.

Signed-off-by: Huang Shijie <b32955@freescale.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-19 14:16:19 +05:30
Julia Lawall
f2d04c3209 dma: mmp: simplify use of devm_ioremap_resource
Remove unneeded error handling on the result of a call to
platform_get_resource when the value is passed to devm_ioremap_resource.

A simplified version of the semantic patch that makes this change is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression pdev,res,n,e,e1;
expression ret != 0;
identifier l;
@@

- res = platform_get_resource(pdev, IORESOURCE_MEM, n);
  ... when != res
- if (res == NULL) { ... \(goto l;\|return ret;\) }
  ... when != res
+ res = platform_get_resource(pdev, IORESOURCE_MEM, n);
  e = devm_ioremap_resource(e1, res);
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 14:51:28 +05:30
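
After such a conversion a probe function typically reduces to the
following shape, sketched here for a hypothetical driver:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	/* devm_ioremap_resource() validates res itself and returns an
	 * ERR_PTR on failure, so no separate NULL check on the result of
	 * platform_get_resource() is needed. */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);

	/* ... rest of probe ... */
	return 0;
}
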
Daniel Mack
6fc4573c4e dma: mmp_pdma: add support for byte-aligned transfers
The PXA DMA controller has a DALGN register which allows for
byte-aligned DMA transfers. Use it in case any of the transfer
descriptors is not aligned to a mask of ~0x7.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:16 +05:30
Daniel Mack
8fd6aac3a8 dma: mmp_pdma: remove duplicate assignment
The DMA_SLAVE capability is currently set twice.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:16 +05:30
Daniel Mack
419d1f126b dma: mmp_pdma: print the number of channels at probe time
That helps check the provided runtime information.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:16 +05:30
Daniel Mack
a9a7cf08bd dma: mmp_pdma: make the controller a DMA provider
This patch makes the mmp_pdma controller able to provide DMA resources
in DT environments by providing a DMA xlate function.

of_dma_simple_xlate() isn't used here, because it fails to handle
multiple different DMA engines or several instances of the same
controller. Instead, a private implementation is provided that makes use
of the newly introduced dma_get_slave_channel() call.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack
13b3006b8e dma: mmp_pdma: add filter function
PXA peripherals need to obtain specific DMA request ids which will
eventually be stored in the DRCMR register.

Currently, clients are expected to store that number inside the slave
config block as slave_id, which is unfortunately incompatible with the
way DMA resources are handled in DT environments.

This patch adds a filter function which stores the filter parameter
passed in by of-dma.c into the channel's drcmr register.

For backward compatibility, cfg->slave_id is still used if it is set to
a non-zero value.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack
1ac0e845c1 dma: mmp_pdma: fix maximum transfer length
There's no reason for limiting the maximum transfer length to 0x1000.
Take the actual bit mask instead; the PDMA is able to transfer chunks of
up to SZ_8K - 1 bytes.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack
638a542cc4 dma: mmp_pdma: refactor unlocking path in lookup_phy()
As suggested by Ezequiel García, release the spinlock at the end of the
function only, and use a goto for the control flow.

Just a minor cleanup.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
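
The resulting control flow — one exit path, one unlock — in generic form
(all types and names here are made up for the sketch):

#include <linux/spinlock.h>

struct foo_phy { int prio; void *owner; };

struct foo_dev {
	spinlock_t phy_lock;
	int nr_phys;
	int max_prio;
	struct foo_phy *phys;
};

static struct foo_phy *lookup_phy(struct foo_dev *fdev, void *owner)
{
	struct foo_phy *phy, *found = NULL;
	unsigned long flags;
	int prio, i;

	spin_lock_irqsave(&fdev->phy_lock, flags);

	for (prio = 0; prio <= fdev->max_prio; prio++) {
		for (i = 0; i < fdev->nr_phys; i++) {
			phy = &fdev->phys[i];
			if (phy->prio != prio || phy->owner)
				continue;
			phy->owner = owner;
			found = phy;
			goto out_unlock;	/* one exit, one unlock */
		}
	}

out_unlock:
	spin_unlock_irqrestore(&fdev->phy_lock, flags);
	return found;
}
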
Daniel Mack
8b298ded90 dma: mmp_pdma: factor out DRCMR register calculation
The exact same calculation is done twice, so let's factor it out into a
macro.

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Vinod Koul
aab81e47f5 Merge branch 'topic/of' into for-linus 2013-08-14 13:55:04 +05:30
Chanho Park
52a9d17910 dma: pl330: split off common code to give back descriptors
This patch adds __pl330_giveback_descs(), which gives back descriptors
when descriptor allocation fails. It is required to eliminate
duplication in pl330_prep_dma_sg(), which will be added later.

Signed-off-by: Chanho Park <chanho61.park@samsung.com>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 17:39:45 +05:30
Barry Song
2a76689bca dmaengine: sirf: add PM entries for sleep and runtime
This patch adds PM ops entries to the sirf-dma driver, so that the
driver can support suspend/resume, hibernation and runtime PM.

While suspending, sirf-dma loses all of its registers, so we save them
at suspend and restore them at resume for active channels.

Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Rongjun Ying <Rongjun.Ying@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 17:01:01 +05:30
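
The wiring for such PM entries usually looks roughly like this; the
callback bodies and names below are placeholders, not the actual sirf-dma
code:

#include <linux/platform_device.h>
#include <linux/pm.h>

static int foo_dma_suspend(struct device *dev)
{
	/* Save the registers of active channels here. */
	return 0;
}

static int foo_dma_resume(struct device *dev)
{
	/* Restore the saved channel registers here. */
	return 0;
}

static int foo_dma_runtime_suspend(struct device *dev)
{
	/* Gate the controller clock here. */
	return 0;
}

static int foo_dma_runtime_resume(struct device *dev)
{
	/* Ungate the clock here. */
	return 0;
}

static const struct dev_pm_ops foo_dma_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_dma_runtime_suspend,
			   foo_dma_runtime_resume, NULL)
	SET_SYSTEM_SLEEP_PM_OPS(foo_dma_suspend, foo_dma_resume)
};

static struct platform_driver foo_dma_driver = {
	.driver = {
		.name	= "foo-dma",
		.pm	= &foo_dma_pm_ops,
	},
};
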
Jingoo Han
d4adcc0160 dma: use dev_get_platdata()
Use the wrapper function for retrieving the platform data instead of
accessing dev->platform_data directly.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:56:41 +05:30
Jingoo Han
696b4ff8b2 dma: sirf: use NULL instead of 0
sirfsoc_dma_prep_cyclic() returns a pointer, thus NULL should be used
instead of 0 in order to fix the following sparse warning:

drivers/dma/sirf-dma.c:598:24: warning: Using plain integer as NULL pointer

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:54:42 +05:30
Jingoo Han
4c14372590 dma: mv_xor: use NULL instead of 0
%p is used, thus NULL should be used instead of 0
in order to fix the following sparse warning:

drivers/dma/mv_xor.c:648:9: warning: Using plain integer as NULL pointer

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:54:42 +05:30
Jingoo Han
69c9f0ae1d dma: mmp_pdma: Staticize mmp_pdma_alloc_descriptor()
mmp_pdma_alloc_descriptor() is used only in this file.
Fix the following sparse warning:

drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static?

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:54:42 +05:30
Zhangfei Gao
7bb587f4ee dmaengine: add interface of dma_get_slave_channel
Suggested by Arnd, add the dma_get_slave_channel() interface. A DMA
host driver can then hand out the specific channel selected by the
request line, rather than relying on a filter function.

host example:
static struct dma_chan *xx_of_dma_simple_xlate(struct of_phandle_args *dma_spec,
		struct of_dma *ofdma)
{
	struct xx_dma_dev *d = ofdma->of_dma_data;
	unsigned int request = dma_spec->args[0];

	if (request > d->dma_requests)
		return NULL;

	return dma_get_slave_channel(&(d->chans[request].vc.chan));
}

probe:
of_dma_controller_register((&op->dev)->of_node, xx_of_dma_simple_xlate, d);

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:32:59 +05:30
Xiang Wang
26a2dfdeb9 dma: mmp_pdma: clear DRCMR when free a phy channel
In mmp pdma, phy channels are allocated/freed dynamically. The mapping
from DMA request to DMA channel number in DRCMR should be cleared when a
phy channel is freed. Otherwise conflicts will happen when:
1. A is using channel 2 and frees it when finished, but A still maps to
channel 2 in its DRCMR.
2. Now another device B gets channel 2, so B maps to channel 2 too in
its DRCMR.
The datasheet states: "Do not map two active requests to the same
channel since it produces unpredictable results", and we can observe
that during testing.

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:27 +05:30
Xiang Wang
027f28b7bb dma: mmp_pdma: add protect when alloc/free phy channels
In mmp pdma, phy channels are allocated and freed dynamically and
frequently, but no proper protection is in place. Conflicts will happen
when multiple users request phy channels at the same time. Use a
spinlock to protect the allocation and freeing.

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
Andy Shevchenko
effd5cf6fe dma: dw: return DMA_PAUSED only if cookie status is DMA_IN_PROGRESS
To follow the usual practice, let's return the DMA_PAUSED status only
if dma_cookie_status() returned DMA_IN_PROGRESS.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
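
As a sketch, a tx_status callback following that convention; the channel
struct is hypothetical, and dma_cookie_status() is the helper from the
drivers/dma local "dmaengine.h":

#include <linux/dmaengine.h>
#include <linux/kernel.h>

struct foo_chan {
	struct dma_chan chan;
	bool paused;
};

static enum dma_status foo_tx_status(struct dma_chan *chan,
				     dma_cookie_t cookie,
				     struct dma_tx_state *txstate)
{
	struct foo_chan *fc = container_of(chan, struct foo_chan, chan);
	enum dma_status ret;

	ret = dma_cookie_status(chan, cookie, txstate);

	/* Only a transfer that is still in flight can be reported as
	 * paused; completed cookies keep their completed status. */
	if (ret == DMA_IN_PROGRESS && fc->paused)
		ret = DMA_PAUSED;

	return ret;
}
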
Andy Shevchenko
12381dc0c7 dma: dw: return DMA_SUCCESS immediately from device_tx_status()
There is no point in going through the rest of the function if the
first call to dma_cookie_status() returned DMA_SUCCESS.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
Andy Shevchenko
3783cef876 dma: dw: allow shared interrupts
In the PC world it is quite possible that devices share the same
interrupt line. This patch prepares the dw_dmac driver for such cases.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
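
Supporting a shared line boils down to passing IRQF_SHARED when
requesting the IRQ and returning IRQ_NONE from the handler when the
interrupt is not ours; a hypothetical sketch (register offsets and struct
are made up):

#include <linux/interrupt.h>
#include <linux/io.h>

#define FOO_STATUS	0x00
#define FOO_STATUS_CLR	0x04

struct foo_dma {
	void __iomem *regs;
	struct tasklet_struct tasklet;
};

static irqreturn_t foo_dma_interrupt(int irq, void *dev_id)
{
	struct foo_dma *fd = dev_id;
	u32 status = readl(fd->regs + FOO_STATUS);

	/* On a shared line the interrupt may belong to another device. */
	if (!status)
		return IRQ_NONE;

	writel(status, fd->regs + FOO_STATUS_CLR);	/* ack what we saw */
	tasklet_schedule(&fd->tasklet);
	return IRQ_HANDLED;
}

/*
 * Registered in probe() with something like:
 *   devm_request_irq(dev, irq, foo_dma_interrupt, IRQF_SHARED,
 *                    dev_name(dev), fd);
 */
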
Andy Shevchenko
78f3c9d2e0 dma: dw: improve comparison with ~0
In general, ~0 does not fit in some integer types. Let's add a helper
to make the comparison with that constant properly.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
Andy Shevchenko
be480dcbb5 dma: dw: append MODULE_DEVICE_TABLE for ACPI case
In rare cases (mostly for testing purposes) the dw_dmac driver might be
compiled as a module, as might the other LPSS device drivers (I2C, SPI,
HSUART). When udev handles the event of those devices appearing, the
dw_dmac module is missing. This patch fixes that.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
Andy Shevchenko
4aed508fc5 acpi-dma: fix sparse warning
This patch fixes sparse warning:
	drivers/dma/acpi-dma.c:76:21: sparse: cast to restricted __le32

Since everything in all ACPI tables is little-endian by definition, the
types used in practice are uXX. Thus, we have to force __leXX if we want
to convert them to CPU byte order.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
985a0cb970 txx9dmac: return DMA_SUCCESS immediately from device_tx_status()
There is no point in going through the rest of the function if the
first call to dma_cookie_status() returned DMA_SUCCESS.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
c14d2bc470 mmp_tdma: set cookies as well when asked for tx status
dma_set_residue() sets only the residue value, so the user can't rely
on the returned cookie values. This patch standardizes the behaviour.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
c6c732cf9f ipu_idmac: re-use dma_cookie_status()
It's better to use the generic dma_cookie_status(), which allows the
user to get the standard possible return codes independently of the
DMAC driver in charge.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
0a0aee203c tegra20-apb-dma: remove useless use of lock
According to the dma_cookie_status() description, locking is not required.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: linux-tegra@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
da0a908ed9 pch_dma: remove useless use of lock
According to the dma_cookie_status() description, locking is not required.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
108fae842a mpc512x_dma: remove useless use of lock
According to the dma_cookie_status() description, locking is not required.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:25 +05:30
Andy Shevchenko
4aa9fe0a1f mmp_pdma: remove useless use of lock
According to the dma_cookie_status() description, locking is not required.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:24 +05:30