dmaengine updates for 4.4-rc1

This time we have a very typical update: mostly fixes and updates to
 drivers, with no new drivers.
 
 - The biggest change comes from Peter's edma cleanup, which even caused
   some last-minute regressions; things seem settled now
 - idma64 and dw updates
 - ioatdma updates
 - module autoload fixes for various drivers
 - scatter gather support for hdmac
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJWP1vYAAoJEHwUBw8lI4NHKg8QAKwJ/iPxZDMfWwCLAoZoeaDp
 GgOmxIZ3LWrFQHIziI1IUcxSAU28Z+N6GBziaXycw49oQic4J/ukHGhgxwI2GM/W
 JGbFzHalnwdGqKzVZyfWGV0njT3KRvwKl7qqb66ZhikF5lD5imga66QGGxof8wBK
 uoXra7yycrE8nGpyY5Gdd4H59aWVyAK1fW6+0cxa2kMfWo8vLkgj/olbzYZydhQz
 DFCEvx2hbTyP/8VjGybVhtd+oeLjX43tGJOm6weyCb/8z3/soD73CChZIP53Mt6i
 g+LKPfurRZ3VGILgdemkiMB+5pXi4jhmziw5pCSI6BluUGVZAMyg8hC1zapFUr/2
 HaLfVmtxYdsBlceuMss1f3vyN2d+kRdDPKlOb9Td0PC2TEpSa5QayuMscvVfau+Q
 epSdIxgjlHqAVDVZJ0kX55FLdmItx36tat6zJL5cUcZmrDvEY3TQmvcXyR2dIltY
 U9WB2lRaETOG4Js0gAItXDjijNvLs0F4q10Zk6RdH4BcWbuqvONAhaCpWKeOWNsf
 NgQkkg+DqHqol6+C0pI0uX6dvEfwJuZ4bqq98wl4SryuxWJNra/g55/oFEFDbJCU
 oRMPYguMUgV8tzCpAurmDhxcrsZu/dvaSSW+flVz2QP81ecvFjRnZ8hhXbCr6iWi
 VVoAISmQBQPCXoqPr7ZC
 =tIzO
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we have a very typical update which is mostly fixes and
  updates to drivers and no new drivers.

   - the biggest change comes from Peter's edma cleanup, which even
     caused some last-minute regressions; things seem settled now
   - idma64 and dw updates
   - ioatdma updates
   - module autoload fixes for various drivers
   - scatter gather support for hdmac"

* tag 'dmaengine-4.4-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (77 commits)
  dmaengine: edma: Add dummy driver skeleton for edma3-tptc
  Revert "ARM: DTS: am33xx: Use the new DT bindings for the eDMA3"
  Revert "ARM: DTS: am437x: Use the new DT bindings for the eDMA3"
  dmaengine: dw: some Intel devices has no memcpy support
  dmaengine: dw: platform: provide platform data for Intel
  dmaengine: dw: don't override platform data with autocfg
  dmaengine: hdmac: Add scatter-gathered memset support
  dmaengine: hdmac: factorise memset descriptor allocation
  dmaengine: virt-dma: Fix kernel-doc annotations
  ARM: DTS: am437x: Use the new DT bindings for the eDMA3
  ARM: DTS: am33xx: Use the new DT bindings for the eDMA3
  dmaengine: edma: New device tree binding
  dmaengine: Kconfig: edma: Select TI_DMA_CROSSBAR in case of ARCH_OMAP
  dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx
  dmaengine: edma: Merge the of parsing functions
  dmaengine: edma: Do not allocate memory for edma_rsv_info in case of DT boot
  dmaengine: edma: Refactor the dma device and channel struct initialization
  dmaengine: edma: Get qDMA channel information from HW also
  dmaengine: edma: Merge map_dmach_to_queue into assign_channel_eventq
  dmaengine: edma: Correct PaRAM access function names (_parm_ to _param_)
  ...
Linus Torvalds, 2015-11-10 10:05:17 -08:00
commit 041c79514a
46 changed files with 2555 additions and 2706 deletions


@ -2,9 +2,10 @@ Texas Instruments DMA Crossbar (DMA request router)
Required properties:
- compatible: "ti,dra7-dma-crossbar" for DRA7xx DMA crossbar
"ti,am335x-edma-crossbar" for AM335x and AM437x
- reg: Memory map for accessing module
- #dma-cells: Should be set to <1>.
Clients should use the crossbar request number (input)
- #dma-cells: Should be set to match the DMA controller's dma-cells
for ti,dra7-dma-crossbar and <3> for ti,am335x-edma-crossbar.
- dma-requests: Number of DMA requests the crossbar can receive
- dma-masters: phandle pointing to the DMA controller
@ -14,6 +15,15 @@ The DMA controller node needs to have the following properties:
Optional properties:
- ti,dma-safe-map: Safe routing value for unused request lines
Notes:
When requesting a channel via ti,dra7-dma-crossbar, the DMA client must request
the DMA event number as the crossbar ID (input to the DMA crossbar).
For ti,am335x-edma-crossbar, the meaning of the dmas parameters for clients:
dmas = <&edma_xbar 12 0 1>; where <12> is the DMA request number, <0> is the TC
the event should be assigned to, and <1> is the mux selection within the crossbar.
When mux 0 is used, the DMA channel can be requested directly from the edma node.
Example:
/* DMA controller */
@ -47,6 +57,7 @@ uart1: serial@4806a000 {
ti,hwmods = "uart1";
clock-frequency = <48000000>;
status = "disabled";
/* Requesting crossbar input 49 and 50 */
dmas = <&sdma_xbar 49>, <&sdma_xbar 50>;
dma-names = "tx", "rx";
};
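
On the consumer side, a channel routed through the crossbar is obtained via the
regular dmaengine API. A minimal hedged C sketch (the device pointer, channel
name and error handling are illustrative, not part of this commit):

#include <linux/dmaengine.h>

/* Request the "tx" channel declared in the uart1 node above; the dmaengine
 * core resolves dma-names -> dmas, and the crossbar's route_allocate
 * callback programs the mux before the underlying channel is returned. */
static struct dma_chan *request_uart_tx_chan(struct device *dev)
{
	struct dma_chan *chan = dma_request_slave_channel(dev, "tx");

	if (!chan)
		dev_err(dev, "no DMA channel routed for tx\n");
	return chan;
}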


@ -1,4 +1,119 @@
TI EDMA
Texas Instruments eDMA
The eDMA3 consists of two components: Channel controller (CC) and Transfer
Controller(s) (TC). The CC is the main entry point for DMA users since it is
responsible for DMA channel handling, while the TCs are responsible for
executing the actual DMA transfer.
------------------------------------------------------------------------------
eDMA3 Channel Controller
Required properties:
- compatible: "ti,edma3-tpcc" for the channel controller(s)
- #dma-cells: Should be set to <2>. The first number is the DMA request
number and the second is the TC the channel is serviced on.
- reg: Memory map of eDMA CC
- reg-names: "edma3_cc"
- interrupts: Interrupt lines for CCINT, MPERR and CCERRINT.
- interrupt-names: "edma3_ccint", "emda3_mperr" and "edma3_ccerrint"
- ti,tptcs: List of TPTCs associated with the eDMA in the following form:
<&tptc_phandle TC_priority_number>. The highest priority is 0.
Optional properties:
- ti,hwmods: Name of the hwmods associated to the eDMA CC
- ti,edma-memcpy-channels: List of channels allocated to be used for memcpy; in
other words, these channels will be SW-triggered channels. The list must
contain 16-bit numbers, see example.
- ti,edma-reserved-slot-ranges: PaRAM slot ranges which should not be used by
the driver; they are allocated for use by, for example, the
DSP. See example.
------------------------------------------------------------------------------
eDMA3 Transfer Controller
Required properties:
- compatible: "ti,edma3-tptc" for the transfer controller(s)
- reg: Memory map of eDMA TC
- interrupts: Interrupt number for TCerrint.
Optional properties:
- ti,hwmods: Name of the hwmods associated to the given eDMA TC
- interrupt-names: "edma3_tcerrint"
------------------------------------------------------------------------------
Example:
edma: edma@49000000 {
compatible = "ti,edma3-tpcc";
ti,hwmods = "tpcc";
reg = <0x49000000 0x10000>;
reg-names = "edma3_cc";
interrupts = <12 13 14>;
interrupt-names = "edma3_ccint", "emda3_mperr", "edma3_ccerrint";
dma-requests = <64>;
#dma-cells = <2>;
ti,tptcs = <&edma_tptc0 7>, <&edma_tptc1 7>, <&edma_tptc2 0>;
/* Channels 20 and 21 are allocated for memcpy */
ti,edma-memcpy-channels = /bits/ 16 <20 21>;
/* The following PaRAM slots are reserved: 35-45 and 100-110 */
ti,edma-reserved-slot-ranges = /bits/ 16 <35 10>,
/bits/ 16 <100 10>;
};
edma_tptc0: tptc@49800000 {
compatible = "ti,edma3-tptc";
ti,hwmods = "tptc0";
reg = <0x49800000 0x100000>;
interrupts = <112>;
interrupt-names = "edm3_tcerrint";
};
edma_tptc1: tptc@49900000 {
compatible = "ti,edma3-tptc";
ti,hwmods = "tptc1";
reg = <0x49900000 0x100000>;
interrupts = <113>;
interrupt-names = "edm3_tcerrint";
};
edma_tptc2: tptc@49a00000 {
compatible = "ti,edma3-tptc";
ti,hwmods = "tptc2";
reg = <0x49a00000 0x100000>;
interrupts = <114>;
interrupt-names = "edm3_tcerrint";
};
sham: sham@53100000 {
compatible = "ti,omap4-sham";
ti,hwmods = "sham";
reg = <0x53100000 0x200>;
interrupts = <109>;
/* DMA channel 36 executed on eDMA TC0 - low priority queue */
dmas = <&edma 36 0>;
dma-names = "rx";
};
mcasp0: mcasp@48038000 {
compatible = "ti,am33xx-mcasp-audio";
ti,hwmods = "mcasp0";
reg = <0x48038000 0x2000>,
<0x46000000 0x400000>;
reg-names = "mpu", "dat";
interrupts = <80>, <81>;
interrupt-names = "tx", "rx";
status = "disabled";
/* DMA channels 8 and 9 executed on eDMA TC2 - high priority queue */
dmas = <&edma 8 2>,
<&edma 9 2>;
dma-names = "tx", "rx";
};
------------------------------------------------------------------------------
DEPRECATED binding, new DTS files must use the ti,edma3-tpcc/ti,edma3-tptc
binding.
Required properties:
- compatible : "ti,edma3"


@ -737,7 +737,6 @@ config ARCH_DAVINCI
select GENERIC_CLOCKEVENTS
select GENERIC_IRQ_CHIP
select HAVE_IDE
select TI_PRIV_EDMA
select USE_OF
select ZONE_DMA
help


@ -17,6 +17,3 @@ config SHARP_PARAM
config SHARP_SCOOP
bool
config TI_PRIV_EDMA
bool


@ -15,6 +15,5 @@ obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
CFLAGS_REMOVE_mcpm_entry.o = -pg
AFLAGS_mcpm_head.o := -march=armv7-a
AFLAGS_vlock.o := -march=armv7-a
obj-$(CONFIG_TI_PRIV_EDMA) += edma.o
obj-$(CONFIG_BL_SWITCHER) += bL_switcher.o
obj-$(CONFIG_BL_SWITCHER_DUMMY_IF) += bL_switcher_dummy_if.o

File diff suppressed because it is too large


@ -147,150 +147,118 @@ static s8 da850_queue_priority_mapping[][2] = {
{-1, -1}
};
static struct edma_soc_info da830_edma_cc0_info = {
static struct edma_soc_info da8xx_edma0_pdata = {
.queue_priority_mapping = da8xx_queue_priority_mapping,
.default_queue = EVENTQ_1,
};
static struct edma_soc_info *da830_edma_info[EDMA_MAX_CC] = {
&da830_edma_cc0_info,
static struct edma_soc_info da850_edma1_pdata = {
.queue_priority_mapping = da850_queue_priority_mapping,
.default_queue = EVENTQ_0,
};
static struct edma_soc_info da850_edma_cc_info[] = {
static struct resource da8xx_edma0_resources[] = {
{
.queue_priority_mapping = da8xx_queue_priority_mapping,
.default_queue = EVENTQ_1,
},
{
.queue_priority_mapping = da850_queue_priority_mapping,
.default_queue = EVENTQ_0,
},
};
static struct edma_soc_info *da850_edma_info[EDMA_MAX_CC] = {
&da850_edma_cc_info[0],
&da850_edma_cc_info[1],
};
static struct resource da830_edma_resources[] = {
{
.name = "edma_cc0",
.name = "edma3_cc",
.start = DA8XX_TPCC_BASE,
.end = DA8XX_TPCC_BASE + SZ_32K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc0",
.name = "edma3_tc0",
.start = DA8XX_TPTC0_BASE,
.end = DA8XX_TPTC0_BASE + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc1",
.name = "edma3_tc1",
.start = DA8XX_TPTC1_BASE,
.end = DA8XX_TPTC1_BASE + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma0",
.name = "edma3_ccint",
.start = IRQ_DA8XX_CCINT0,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma0_err",
.name = "edma3_ccerrint",
.start = IRQ_DA8XX_CCERRINT,
.flags = IORESOURCE_IRQ,
},
};
static struct resource da850_edma_resources[] = {
static struct resource da850_edma1_resources[] = {
{
.name = "edma_cc0",
.start = DA8XX_TPCC_BASE,
.end = DA8XX_TPCC_BASE + SZ_32K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc0",
.start = DA8XX_TPTC0_BASE,
.end = DA8XX_TPTC0_BASE + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc1",
.start = DA8XX_TPTC1_BASE,
.end = DA8XX_TPTC1_BASE + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_cc1",
.name = "edma3_cc",
.start = DA850_TPCC1_BASE,
.end = DA850_TPCC1_BASE + SZ_32K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc2",
.name = "edma3_tc0",
.start = DA850_TPTC2_BASE,
.end = DA850_TPTC2_BASE + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma0",
.start = IRQ_DA8XX_CCINT0,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma0_err",
.start = IRQ_DA8XX_CCERRINT,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma1",
.name = "edma3_ccint",
.start = IRQ_DA850_CCINT1,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma1_err",
.name = "edma3_ccerrint",
.start = IRQ_DA850_CCERRINT1,
.flags = IORESOURCE_IRQ,
},
};
static struct platform_device da830_edma_device = {
static const struct platform_device_info da8xx_edma0_device __initconst = {
.name = "edma",
.id = -1,
.dev = {
.platform_data = da830_edma_info,
},
.num_resources = ARRAY_SIZE(da830_edma_resources),
.resource = da830_edma_resources,
.id = 0,
.dma_mask = DMA_BIT_MASK(32),
.res = da8xx_edma0_resources,
.num_res = ARRAY_SIZE(da8xx_edma0_resources),
.data = &da8xx_edma0_pdata,
.size_data = sizeof(da8xx_edma0_pdata),
};
static struct platform_device da850_edma_device = {
static const struct platform_device_info da850_edma1_device __initconst = {
.name = "edma",
.id = -1,
.dev = {
.platform_data = da850_edma_info,
},
.num_resources = ARRAY_SIZE(da850_edma_resources),
.resource = da850_edma_resources,
.id = 1,
.dma_mask = DMA_BIT_MASK(32),
.res = da850_edma1_resources,
.num_res = ARRAY_SIZE(da850_edma1_resources),
.data = &da850_edma1_pdata,
.size_data = sizeof(da850_edma1_pdata),
};
int __init da830_register_edma(struct edma_rsv_info *rsv)
{
da830_edma_cc0_info.rsv = rsv;
struct platform_device *edma_pdev;
return platform_device_register(&da830_edma_device);
da8xx_edma0_pdata.rsv = rsv;
edma_pdev = platform_device_register_full(&da8xx_edma0_device);
return IS_ERR(edma_pdev) ? PTR_ERR(edma_pdev) : 0;
}
int __init da850_register_edma(struct edma_rsv_info *rsv[2])
{
struct platform_device *edma_pdev;
if (rsv) {
da850_edma_cc_info[0].rsv = rsv[0];
da850_edma_cc_info[1].rsv = rsv[1];
da8xx_edma0_pdata.rsv = rsv[0];
da850_edma1_pdata.rsv = rsv[1];
}
return platform_device_register(&da850_edma_device);
edma_pdev = platform_device_register_full(&da8xx_edma0_device);
if (IS_ERR(edma_pdev)) {
pr_warn("%s: Failed to register eDMA0\n", __func__);
return PTR_ERR(edma_pdev);
}
edma_pdev = platform_device_register_full(&da850_edma1_device);
return IS_ERR(edma_pdev) ? PTR_ERR(edma_pdev) : 0;
}
static struct resource da8xx_i2c_resources0[] = {


@ -569,61 +569,58 @@ static u8 dm355_default_priorities[DAVINCI_N_AINTC_IRQ] = {
/*----------------------------------------------------------------------*/
static s8
queue_priority_mapping[][2] = {
static s8 queue_priority_mapping[][2] = {
/* {event queue no, Priority} */
{0, 3},
{1, 7},
{-1, -1},
};
static struct edma_soc_info edma_cc0_info = {
static struct edma_soc_info dm355_edma_pdata = {
.queue_priority_mapping = queue_priority_mapping,
.default_queue = EVENTQ_1,
};
static struct edma_soc_info *dm355_edma_info[EDMA_MAX_CC] = {
&edma_cc0_info,
};
static struct resource edma_resources[] = {
{
.name = "edma_cc0",
.name = "edma3_cc",
.start = 0x01c00000,
.end = 0x01c00000 + SZ_64K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc0",
.name = "edma3_tc0",
.start = 0x01c10000,
.end = 0x01c10000 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc1",
.name = "edma3_tc1",
.start = 0x01c10400,
.end = 0x01c10400 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma0",
.name = "edma3_ccint",
.start = IRQ_CCINT0,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma0_err",
.name = "edma3_ccerrint",
.start = IRQ_CCERRINT,
.flags = IORESOURCE_IRQ,
},
/* not using (or muxing) TC*_ERR */
};
static struct platform_device dm355_edma_device = {
.name = "edma",
.id = 0,
.dev.platform_data = dm355_edma_info,
.num_resources = ARRAY_SIZE(edma_resources),
.resource = edma_resources,
static const struct platform_device_info dm355_edma_device __initconst = {
.name = "edma",
.id = 0,
.dma_mask = DMA_BIT_MASK(32),
.res = edma_resources,
.num_res = ARRAY_SIZE(edma_resources),
.data = &dm355_edma_pdata,
.size_data = sizeof(dm355_edma_pdata),
};
static struct resource dm355_asp1_resources[] = {
@ -1062,13 +1059,18 @@ int __init dm355_init_video(struct vpfe_config *vpfe_cfg,
static int __init dm355_init_devices(void)
{
struct platform_device *edma_pdev;
int ret = 0;
if (!cpu_is_davinci_dm355())
return 0;
davinci_cfg_reg(DM355_INT_EDMA_CC);
platform_device_register(&dm355_edma_device);
edma_pdev = platform_device_register_full(&dm355_edma_device);
if (IS_ERR(edma_pdev)) {
pr_warn("%s: Failed to register eDMA\n", __func__);
return PTR_ERR(edma_pdev);
}
ret = davinci_init_wdt();
if (ret)


@ -853,8 +853,7 @@ static u8 dm365_default_priorities[DAVINCI_N_AINTC_IRQ] = {
};
/* Four Transfer Controllers on DM365 */
static s8
dm365_queue_priority_mapping[][2] = {
static s8 dm365_queue_priority_mapping[][2] = {
/* {event queue no, Priority} */
{0, 7},
{1, 7},
@ -863,53 +862,49 @@ dm365_queue_priority_mapping[][2] = {
{-1, -1},
};
static struct edma_soc_info edma_cc0_info = {
static struct edma_soc_info dm365_edma_pdata = {
.queue_priority_mapping = dm365_queue_priority_mapping,
.default_queue = EVENTQ_3,
};
static struct edma_soc_info *dm365_edma_info[EDMA_MAX_CC] = {
&edma_cc0_info,
};
static struct resource edma_resources[] = {
{
.name = "edma_cc0",
.name = "edma3_cc",
.start = 0x01c00000,
.end = 0x01c00000 + SZ_64K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc0",
.name = "edma3_tc0",
.start = 0x01c10000,
.end = 0x01c10000 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc1",
.name = "edma3_tc1",
.start = 0x01c10400,
.end = 0x01c10400 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc2",
.name = "edma3_tc2",
.start = 0x01c10800,
.end = 0x01c10800 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc3",
.name = "edma3_tc3",
.start = 0x01c10c00,
.end = 0x01c10c00 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma0",
.name = "edma3_ccint",
.start = IRQ_CCINT0,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma0_err",
.name = "edma3_ccerrint",
.start = IRQ_CCERRINT,
.flags = IORESOURCE_IRQ,
},
@ -919,7 +914,7 @@ static struct resource edma_resources[] = {
static struct platform_device dm365_edma_device = {
.name = "edma",
.id = 0,
.dev.platform_data = dm365_edma_info,
.dev.platform_data = &dm365_edma_pdata,
.num_resources = ARRAY_SIZE(edma_resources),
.resource = edma_resources,
};


@ -498,61 +498,58 @@ static u8 dm644x_default_priorities[DAVINCI_N_AINTC_IRQ] = {
/*----------------------------------------------------------------------*/
static s8
queue_priority_mapping[][2] = {
static s8 queue_priority_mapping[][2] = {
/* {event queue no, Priority} */
{0, 3},
{1, 7},
{-1, -1},
};
static struct edma_soc_info edma_cc0_info = {
static struct edma_soc_info dm644x_edma_pdata = {
.queue_priority_mapping = queue_priority_mapping,
.default_queue = EVENTQ_1,
};
static struct edma_soc_info *dm644x_edma_info[EDMA_MAX_CC] = {
&edma_cc0_info,
};
static struct resource edma_resources[] = {
{
.name = "edma_cc0",
.name = "edma3_cc",
.start = 0x01c00000,
.end = 0x01c00000 + SZ_64K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc0",
.name = "edma3_tc0",
.start = 0x01c10000,
.end = 0x01c10000 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc1",
.name = "edma3_tc1",
.start = 0x01c10400,
.end = 0x01c10400 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma0",
.name = "edma3_ccint",
.start = IRQ_CCINT0,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma0_err",
.name = "edma3_ccerrint",
.start = IRQ_CCERRINT,
.flags = IORESOURCE_IRQ,
},
/* not using TC*_ERR */
};
static struct platform_device dm644x_edma_device = {
.name = "edma",
.id = 0,
.dev.platform_data = dm644x_edma_info,
.num_resources = ARRAY_SIZE(edma_resources),
.resource = edma_resources,
static const struct platform_device_info dm644x_edma_device __initconst = {
.name = "edma",
.id = 0,
.dma_mask = DMA_BIT_MASK(32),
.res = edma_resources,
.num_res = ARRAY_SIZE(edma_resources),
.data = &dm644x_edma_pdata,
.size_data = sizeof(dm644x_edma_pdata),
};
/* DM6446 EVM uses ASP0; line-out is a pair of RCA jacks */
@ -950,12 +947,17 @@ int __init dm644x_init_video(struct vpfe_config *vpfe_cfg,
static int __init dm644x_init_devices(void)
{
struct platform_device *edma_pdev;
int ret = 0;
if (!cpu_is_davinci_dm644x())
return 0;
platform_device_register(&dm644x_edma_device);
edma_pdev = platform_device_register_full(&dm644x_edma_device);
if (IS_ERR(edma_pdev)) {
pr_warn("%s: Failed to register eDMA\n", __func__);
return PTR_ERR(edma_pdev);
}
platform_device_register(&dm644x_mdio_device);
platform_device_register(&dm644x_emac_device);


@ -531,8 +531,7 @@ static u8 dm646x_default_priorities[DAVINCI_N_AINTC_IRQ] = {
/*----------------------------------------------------------------------*/
/* Four Transfer Controllers on DM646x */
static s8
dm646x_queue_priority_mapping[][2] = {
static s8 dm646x_queue_priority_mapping[][2] = {
/* {event queue no, Priority} */
{0, 4},
{1, 0},
@ -541,65 +540,63 @@ dm646x_queue_priority_mapping[][2] = {
{-1, -1},
};
static struct edma_soc_info edma_cc0_info = {
static struct edma_soc_info dm646x_edma_pdata = {
.queue_priority_mapping = dm646x_queue_priority_mapping,
.default_queue = EVENTQ_1,
};
static struct edma_soc_info *dm646x_edma_info[EDMA_MAX_CC] = {
&edma_cc0_info,
};
static struct resource edma_resources[] = {
{
.name = "edma_cc0",
.name = "edma3_cc",
.start = 0x01c00000,
.end = 0x01c00000 + SZ_64K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc0",
.name = "edma3_tc0",
.start = 0x01c10000,
.end = 0x01c10000 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc1",
.name = "edma3_tc1",
.start = 0x01c10400,
.end = 0x01c10400 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc2",
.name = "edma3_tc2",
.start = 0x01c10800,
.end = 0x01c10800 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma_tc3",
.name = "edma3_tc3",
.start = 0x01c10c00,
.end = 0x01c10c00 + SZ_1K - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "edma0",
.name = "edma3_ccint",
.start = IRQ_CCINT0,
.flags = IORESOURCE_IRQ,
},
{
.name = "edma0_err",
.name = "edma3_ccerrint",
.start = IRQ_CCERRINT,
.flags = IORESOURCE_IRQ,
},
/* not using TC*_ERR */
};
static struct platform_device dm646x_edma_device = {
.name = "edma",
.id = 0,
.dev.platform_data = dm646x_edma_info,
.num_resources = ARRAY_SIZE(edma_resources),
.resource = edma_resources,
static const struct platform_device_info dm646x_edma_device __initconst = {
.name = "edma",
.id = 0,
.dma_mask = DMA_BIT_MASK(32),
.res = edma_resources,
.num_res = ARRAY_SIZE(edma_resources),
.data = &dm646x_edma_pdata,
.size_data = sizeof(dm646x_edma_pdata),
};
static struct resource dm646x_mcasp0_resources[] = {
@ -936,9 +933,12 @@ void dm646x_setup_vpif(struct vpif_display_config *display_config,
int __init dm646x_init_edma(struct edma_rsv_info *rsv)
{
edma_cc0_info.rsv = rsv;
struct platform_device *edma_pdev;
return platform_device_register(&dm646x_edma_device);
dm646x_edma_pdata.rsv = rsv;
edma_pdev = platform_device_register_full(&dm646x_edma_device);
return IS_ERR(edma_pdev) ? PTR_ERR(edma_pdev) : 0;
}
void __init dm646x_init(void)

View File

@ -96,7 +96,6 @@ config ARCH_OMAP2PLUS
select OMAP_GPMC
select PINCTRL
select SOC_BUS
select TI_PRIV_EDMA
select OMAP_IRQCHIP
help
Systems based on OMAP2, OMAP3, OMAP4 or OMAP5


@ -603,18 +603,11 @@ static void __init genclk_init_parent(struct clk *clk)
clk->parent = parent;
}
static struct dw_dma_platform_data dw_dmac0_data = {
.nr_channels = 3,
.block_size = 4095U,
.nr_masters = 2,
.data_width = { 2, 2 },
};
static struct resource dw_dmac0_resource[] = {
PBMEM(0xff200000),
IRQ(2),
};
DEFINE_DEV_DATA(dw_dmac, 0);
DEFINE_DEV(dw_dmac, 0);
DEV_CLK(hclk, dw_dmac0, hsb, 10);
/* --------------------------------------------------------------------


@ -229,7 +229,7 @@ config IMX_SDMA
Support the i.MX SDMA engine. This engine is integrated into
Freescale i.MX25/31/35/51/53/6 chips.
config IDMA64
config INTEL_IDMA64
tristate "Intel integrated DMA 64-bit support"
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
@ -486,7 +486,7 @@ config TI_EDMA
depends on ARCH_DAVINCI || ARCH_OMAP || ARCH_KEYSTONE
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
select TI_PRIV_EDMA
select TI_DMA_CROSSBAR if ARCH_OMAP
default n
help
Enable support for the TI EDMA controller. This DMA


@ -34,7 +34,7 @@ obj-$(CONFIG_HSU_DMA) += hsu/
obj-$(CONFIG_IMG_MDC_DMA) += img-mdc-dma.o
obj-$(CONFIG_IMX_DMA) += imx-dma.o
obj-$(CONFIG_IMX_SDMA) += imx-sdma.o
obj-$(CONFIG_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IDMA64) += idma64.o
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o


@ -161,10 +161,8 @@ int acpi_dma_controller_register(struct device *dev,
return -EINVAL;
/* Check if the device was enumerated by ACPI */
if (!ACPI_HANDLE(dev))
return -EINVAL;
if (acpi_bus_get_device(ACPI_HANDLE(dev), &adev))
adev = ACPI_COMPANION(dev);
if (!adev)
return -EINVAL;
adma = kzalloc(sizeof(*adma), GFP_KERNEL);
@ -359,10 +357,11 @@ struct dma_chan *acpi_dma_request_slave_chan_by_index(struct device *dev,
int found;
/* Check if the device was enumerated by ACPI */
if (!dev || !ACPI_HANDLE(dev))
if (!dev)
return ERR_PTR(-ENODEV);
if (acpi_bus_get_device(ACPI_HANDLE(dev), &adev))
adev = ACPI_COMPANION(dev);
if (!adev)
return ERR_PTR(-ENODEV);
memset(&pdata, 0, sizeof(pdata));


@ -458,10 +458,10 @@ atc_chain_complete(struct at_dma_chan *atchan, struct at_desc *desc)
dma_cookie_complete(txd);
/* If the transfer was a memset, free our temporary buffer */
if (desc->memset) {
if (desc->memset_buffer) {
dma_pool_free(atdma->memset_pool, desc->memset_vaddr,
desc->memset_paddr);
desc->memset = false;
desc->memset_buffer = false;
}
/* move children to free_list */
@ -881,6 +881,46 @@ err_desc_get:
return NULL;
}
static struct at_desc *atc_create_memset_desc(struct dma_chan *chan,
dma_addr_t psrc,
dma_addr_t pdst,
size_t len)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_desc *desc;
size_t xfer_count;
u32 ctrla = ATC_SRC_WIDTH(2) | ATC_DST_WIDTH(2);
u32 ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN |
ATC_SRC_ADDR_MODE_FIXED |
ATC_DST_ADDR_MODE_INCR |
ATC_FC_MEM2MEM;
xfer_count = len >> 2;
if (xfer_count > ATC_BTSIZE_MAX) {
dev_err(chan2dev(chan), "%s: buffer is too big\n",
__func__);
return NULL;
}
desc = atc_desc_get(atchan);
if (!desc) {
dev_err(chan2dev(chan), "%s: can't get a descriptor\n",
__func__);
return NULL;
}
desc->lli.saddr = psrc;
desc->lli.daddr = pdst;
desc->lli.ctrla = ctrla | xfer_count;
desc->lli.ctrlb = ctrlb;
desc->txd.cookie = 0;
desc->len = len;
return desc;
}
/**
* atc_prep_dma_memset - prepare a memcpy operation
* @chan: the channel to prepare operation on
@ -893,12 +933,10 @@ static struct dma_async_tx_descriptor *
atc_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
size_t len, unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
struct at_desc *desc = NULL;
size_t xfer_count;
u32 ctrla;
u32 ctrlb;
struct at_desc *desc;
void __iomem *vaddr;
dma_addr_t paddr;
dev_vdbg(chan2dev(chan), "%s: d0x%x v0x%x l0x%zx f0x%lx\n", __func__,
dest, value, len, flags);
@ -914,46 +952,26 @@ atc_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
return NULL;
}
xfer_count = len >> 2;
if (xfer_count > ATC_BTSIZE_MAX) {
dev_err(chan2dev(chan), "%s: buffer is too big\n",
__func__);
return NULL;
}
ctrlb = ATC_DEFAULT_CTRLB | ATC_IEN
| ATC_SRC_ADDR_MODE_FIXED
| ATC_DST_ADDR_MODE_INCR
| ATC_FC_MEM2MEM;
ctrla = ATC_SRC_WIDTH(2) |
ATC_DST_WIDTH(2);
desc = atc_desc_get(atchan);
if (!desc) {
dev_err(chan2dev(chan), "%s: can't get a descriptor\n",
__func__);
return NULL;
}
desc->memset_vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC,
&desc->memset_paddr);
if (!desc->memset_vaddr) {
vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC, &paddr);
if (!vaddr) {
dev_err(chan2dev(chan), "%s: couldn't allocate buffer\n",
__func__);
goto err_put_desc;
return NULL;
}
*(u32*)vaddr = value;
desc = atc_create_memset_desc(chan, paddr, dest, len);
if (!desc) {
dev_err(chan2dev(chan), "%s: couldn't get a descriptor\n",
__func__);
goto err_free_buffer;
}
*desc->memset_vaddr = value;
desc->memset = true;
desc->lli.saddr = desc->memset_paddr;
desc->lli.daddr = dest;
desc->lli.ctrla = ctrla | xfer_count;
desc->lli.ctrlb = ctrlb;
desc->memset_paddr = paddr;
desc->memset_vaddr = vaddr;
desc->memset_buffer = true;
desc->txd.cookie = -EBUSY;
desc->len = len;
desc->total_len = len;
/* set end-of-link on the descriptor */
@ -963,11 +981,87 @@ atc_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
return &desc->txd;
err_put_desc:
atc_desc_put(atchan, desc);
err_free_buffer:
dma_pool_free(atdma->memset_pool, vaddr, paddr);
return NULL;
}
static struct dma_async_tx_descriptor *
atc_prep_dma_memset_sg(struct dma_chan *chan,
struct scatterlist *sgl,
unsigned int sg_len, int value,
unsigned long flags)
{
struct at_dma_chan *atchan = to_at_dma_chan(chan);
struct at_dma *atdma = to_at_dma(chan->device);
struct at_desc *desc = NULL, *first = NULL, *prev = NULL;
struct scatterlist *sg;
void __iomem *vaddr;
dma_addr_t paddr;
size_t total_len = 0;
int i;
dev_vdbg(chan2dev(chan), "%s: v0x%x l0x%zx f0x%lx\n", __func__,
value, sg_len, flags);
if (unlikely(!sgl || !sg_len)) {
dev_dbg(chan2dev(chan), "%s: scatterlist is empty!\n",
__func__);
return NULL;
}
vaddr = dma_pool_alloc(atdma->memset_pool, GFP_ATOMIC, &paddr);
if (!vaddr) {
dev_err(chan2dev(chan), "%s: couldn't allocate buffer\n",
__func__);
return NULL;
}
*(u32*)vaddr = value;
for_each_sg(sgl, sg, sg_len, i) {
dma_addr_t dest = sg_dma_address(sg);
size_t len = sg_dma_len(sg);
dev_vdbg(chan2dev(chan), "%s: d0x%08x, l0x%zx\n",
__func__, dest, len);
if (!is_dma_fill_aligned(chan->device, dest, 0, len)) {
dev_err(chan2dev(chan), "%s: buffer is not aligned\n",
__func__);
goto err_put_desc;
}
desc = atc_create_memset_desc(chan, paddr, dest, len);
if (!desc)
goto err_put_desc;
atc_desc_chain(&first, &prev, desc);
total_len += len;
}
/*
* Only set the buffer pointers on the last descriptor to
* avoid free'ing while we have our transfer still going
*/
desc->memset_paddr = paddr;
desc->memset_vaddr = vaddr;
desc->memset_buffer = true;
first->txd.cookie = -EBUSY;
first->total_len = total_len;
/* set end-of-link on the descriptor */
set_desc_eol(desc);
first->txd.flags = flags;
return &first->txd;
err_put_desc:
atc_desc_put(atchan, first);
return NULL;
}
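
A hedged usage sketch for the new scatter-gathered memset from a client's point
of view (the channel, scatterlist and fill pattern are assumptions for
illustration; only channels advertising DMA_MEMSET_SG wire up this hook, as
at_dma_probe() does below):

static int memset_sg_example(struct dma_chan *chan,
			     struct scatterlist *sgl, unsigned int nents)
{
	struct dma_async_tx_descriptor *tx;

	/* Fill every segment of sgl with the 32-bit pattern 0xdeadbeef */
	tx = chan->device->device_prep_dma_memset_sg(chan, sgl, nents,
						     0xdeadbeef,
						     DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}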
/**
* atc_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
@ -1851,6 +1945,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_INTERLEAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMCPY, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMSET, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_MEMSET_SG, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_PRIVATE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SLAVE, at91sam9g45_config.cap_mask);
dma_cap_set(DMA_SG, at91sam9g45_config.cap_mask);
@ -1972,6 +2067,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
if (dma_has_cap(DMA_MEMSET, atdma->dma_common.cap_mask)) {
atdma->dma_common.device_prep_dma_memset = atc_prep_dma_memset;
atdma->dma_common.device_prep_dma_memset_sg = atc_prep_dma_memset_sg;
atdma->dma_common.fill_align = DMAENGINE_ALIGN_4_BYTES;
}


@ -202,7 +202,7 @@ struct at_desc {
size_t src_hole;
/* Memset temporary buffer */
bool memset;
bool memset_buffer;
dma_addr_t memset_paddr;
int *memset_vaddr;
};


@ -938,13 +938,19 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
{
struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
struct at_xdmac_desc *prev = NULL, *first = NULL;
struct data_chunk *chunk, *prev_chunk = NULL;
dma_addr_t dst_addr, src_addr;
size_t dst_skip, src_skip, len = 0;
size_t prev_dst_icg = 0, prev_src_icg = 0;
size_t src_skip = 0, dst_skip = 0, len = 0;
struct data_chunk *chunk;
int i;
if (!xt || (xt->numf != 1) || (xt->dir != DMA_MEM_TO_MEM))
if (!xt || !xt->numf || (xt->dir != DMA_MEM_TO_MEM))
return NULL;
/*
* TODO: Handle the case where we have to repeat a chain of
* descriptors...
*/
if ((xt->numf > 1) && (xt->frame_size > 1))
return NULL;
dev_dbg(chan2dev(chan), "%s: src=0x%08x, dest=0x%08x, numf=%d, frame_size=%d, flags=0x%lx\n",
@ -954,66 +960,60 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
src_addr = xt->src_start;
dst_addr = xt->dst_start;
for (i = 0; i < xt->frame_size; i++) {
struct at_xdmac_desc *desc;
size_t src_icg, dst_icg;
chunk = xt->sgl + i;
dst_icg = dmaengine_get_dst_icg(xt, chunk);
src_icg = dmaengine_get_src_icg(xt, chunk);
src_skip = chunk->size + src_icg;
dst_skip = chunk->size + dst_icg;
dev_dbg(chan2dev(chan),
"%s: chunk size=%d, src icg=%d, dst icg=%d\n",
__func__, chunk->size, src_icg, dst_icg);
/*
* Handle the case where we just have the same
* transfer to setup, we can just increase the
* block number and reuse the same descriptor.
*/
if (prev_chunk && prev &&
(prev_chunk->size == chunk->size) &&
(prev_src_icg == src_icg) &&
(prev_dst_icg == dst_icg)) {
dev_dbg(chan2dev(chan),
"%s: same configuration that the previous chunk, merging the descriptors...\n",
__func__);
at_xdmac_increment_block_count(chan, prev);
continue;
}
desc = at_xdmac_interleaved_queue_desc(chan, atchan,
prev,
src_addr, dst_addr,
xt, chunk);
if (!desc) {
list_splice_init(&first->descs_list,
&atchan->free_descs_list);
return NULL;
}
if (!first)
first = desc;
if (xt->numf > 1) {
first = at_xdmac_interleaved_queue_desc(chan, atchan,
NULL,
src_addr, dst_addr,
xt, xt->sgl);
for (i = 0; i < xt->numf; i++)
at_xdmac_increment_block_count(chan, first);
dev_dbg(chan2dev(chan), "%s: add desc 0x%p to descs_list 0x%p\n",
__func__, desc, first);
list_add_tail(&desc->desc_node, &first->descs_list);
__func__, first, first);
list_add_tail(&first->desc_node, &first->descs_list);
} else {
for (i = 0; i < xt->frame_size; i++) {
size_t src_icg = 0, dst_icg = 0;
struct at_xdmac_desc *desc;
if (xt->src_sgl)
src_addr += src_skip;
chunk = xt->sgl + i;
if (xt->dst_sgl)
dst_addr += dst_skip;
dst_icg = dmaengine_get_dst_icg(xt, chunk);
src_icg = dmaengine_get_src_icg(xt, chunk);
len += chunk->size;
prev_chunk = chunk;
prev_dst_icg = dst_icg;
prev_src_icg = src_icg;
prev = desc;
src_skip = chunk->size + src_icg;
dst_skip = chunk->size + dst_icg;
dev_dbg(chan2dev(chan),
"%s: chunk size=%d, src icg=%d, dst icg=%d\n",
__func__, chunk->size, src_icg, dst_icg);
desc = at_xdmac_interleaved_queue_desc(chan, atchan,
prev,
src_addr, dst_addr,
xt, chunk);
if (!desc) {
list_splice_init(&first->descs_list,
&atchan->free_descs_list);
return NULL;
}
if (!first)
first = desc;
dev_dbg(chan2dev(chan), "%s: add desc 0x%p to descs_list 0x%p\n",
__func__, desc, first);
list_add_tail(&desc->desc_node, &first->descs_list);
if (xt->src_sgl)
src_addr += src_skip;
if (xt->dst_sgl)
dst_addr += dst_skip;
len += chunk->size;
prev = desc;
}
}
first->tx_dma_desc.cookie = -EBUSY;


@ -1074,11 +1074,9 @@ static void dmaengine_destroy_unmap_pool(void)
for (i = 0; i < ARRAY_SIZE(unmap_pool); i++) {
struct dmaengine_unmap_pool *p = &unmap_pool[i];
if (p->pool)
mempool_destroy(p->pool);
mempool_destroy(p->pool);
p->pool = NULL;
if (p->cache)
kmem_cache_destroy(p->cache);
kmem_cache_destroy(p->cache);
p->cache = NULL;
}
}


@ -163,7 +163,7 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
/*----------------------------------------------------------------------*/
static inline unsigned int dwc_fast_fls(unsigned long long v)
static inline unsigned int dwc_fast_ffs(unsigned long long v)
{
/*
* We can be a lot more clever here, but this should take care
@ -712,7 +712,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
dw->data_width[dwc->dst_master]);
src_width = dst_width = min_t(unsigned int, data_width,
dwc_fast_fls(src | dest | len));
dwc_fast_ffs(src | dest | len));
ctllo = DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_DST_WIDTH(dst_width)
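
The rename is the point of this hunk: the usable transfer width is bounded by
the least aligned of source, destination and length, i.e. by the lowest set bit
of (src | dest | len), which is an ffs-style operation, not fls. A hedged
sketch of the idea behind the helper (the in-tree version caps the result at 3,
i.e. 8-byte transfers; width encoding: 0 = 1 byte, 1 = 2, 2 = 4, 3 = 8):

/* e.g. src=0x1000, dst=0x2004, len=0x104 -> OR = 0x3104, lowest set
 * bit is bit 2 -> at most 4-byte wide transfers are safe. */
static unsigned int max_xfer_width(unsigned long long v)
{
	if (!(v & 7))
		return 3;
	if (!(v & 3))
		return 2;
	if (!(v & 1))
		return 1;
	return 0;
}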
@ -791,7 +791,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
switch (direction) {
case DMA_MEM_TO_DEV:
reg_width = __fls(sconfig->dst_addr_width);
reg_width = __ffs(sconfig->dst_addr_width);
reg = sconfig->dst_addr;
ctllo = (DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_DST_WIDTH(reg_width)
@ -811,7 +811,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
len = sg_dma_len(sg);
mem_width = min_t(unsigned int,
data_width, dwc_fast_fls(mem | len));
data_width, dwc_fast_ffs(mem | len));
slave_sg_todev_fill_desc:
desc = dwc_desc_get(dwc);
@ -848,7 +848,7 @@ slave_sg_todev_fill_desc:
}
break;
case DMA_DEV_TO_MEM:
reg_width = __fls(sconfig->src_addr_width);
reg_width = __ffs(sconfig->src_addr_width);
reg = sconfig->src_addr;
ctllo = (DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_SRC_WIDTH(reg_width)
@ -868,7 +868,7 @@ slave_sg_todev_fill_desc:
len = sg_dma_len(sg);
mem_width = min_t(unsigned int,
data_width, dwc_fast_fls(mem | len));
data_width, dwc_fast_ffs(mem | len));
slave_sg_fromdev_fill_desc:
desc = dwc_desc_get(dwc);
@ -1499,9 +1499,8 @@ EXPORT_SYMBOL(dw_dma_cyclic_free);
int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
{
struct dw_dma *dw;
bool autocfg;
bool autocfg = false;
unsigned int dw_params;
unsigned int nr_channels;
unsigned int max_blk_size = 0;
int err;
int i;
@ -1515,33 +1514,42 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
pm_runtime_get_sync(chip->dev);
dw_params = dma_read_byaddr(chip->regs, DW_PARAMS);
autocfg = dw_params >> DW_PARAMS_EN & 0x1;
if (!pdata) {
dw_params = dma_read_byaddr(chip->regs, DW_PARAMS);
dev_dbg(chip->dev, "DW_PARAMS: 0x%08x\n", dw_params);
dev_dbg(chip->dev, "DW_PARAMS: 0x%08x\n", dw_params);
autocfg = dw_params >> DW_PARAMS_EN & 1;
if (!autocfg) {
err = -EINVAL;
goto err_pdata;
}
if (!pdata && autocfg) {
pdata = devm_kzalloc(chip->dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) {
err = -ENOMEM;
goto err_pdata;
}
/* Get hardware configuration parameters */
pdata->nr_channels = (dw_params >> DW_PARAMS_NR_CHAN & 7) + 1;
pdata->nr_masters = (dw_params >> DW_PARAMS_NR_MASTER & 3) + 1;
for (i = 0; i < pdata->nr_masters; i++) {
pdata->data_width[i] =
(dw_params >> DW_PARAMS_DATA_WIDTH(i) & 3) + 2;
}
max_blk_size = dma_readl(dw, MAX_BLK_SIZE);
/* Fill platform data with the default values */
pdata->is_private = true;
pdata->is_memcpy = true;
pdata->chan_allocation_order = CHAN_ALLOCATION_ASCENDING;
pdata->chan_priority = CHAN_PRIORITY_ASCENDING;
} else if (!pdata || pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) {
} else if (pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) {
err = -EINVAL;
goto err_pdata;
}
if (autocfg)
nr_channels = (dw_params >> DW_PARAMS_NR_CHAN & 0x7) + 1;
else
nr_channels = pdata->nr_channels;
dw->chan = devm_kcalloc(chip->dev, nr_channels, sizeof(*dw->chan),
dw->chan = devm_kcalloc(chip->dev, pdata->nr_channels, sizeof(*dw->chan),
GFP_KERNEL);
if (!dw->chan) {
err = -ENOMEM;
@ -1549,22 +1557,12 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
}
/* Get hardware configuration parameters */
if (autocfg) {
max_blk_size = dma_readl(dw, MAX_BLK_SIZE);
dw->nr_masters = (dw_params >> DW_PARAMS_NR_MASTER & 3) + 1;
for (i = 0; i < dw->nr_masters; i++) {
dw->data_width[i] =
(dw_params >> DW_PARAMS_DATA_WIDTH(i) & 3) + 2;
}
} else {
dw->nr_masters = pdata->nr_masters;
for (i = 0; i < dw->nr_masters; i++)
dw->data_width[i] = pdata->data_width[i];
}
dw->nr_masters = pdata->nr_masters;
for (i = 0; i < dw->nr_masters; i++)
dw->data_width[i] = pdata->data_width[i];
/* Calculate all channel mask before DMA setup */
dw->all_chan_mask = (1 << nr_channels) - 1;
dw->all_chan_mask = (1 << pdata->nr_channels) - 1;
/* Force dma off, just in case */
dw_dma_off(dw);
@ -1589,7 +1587,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
goto err_pdata;
INIT_LIST_HEAD(&dw->dma.channels);
for (i = 0; i < nr_channels; i++) {
for (i = 0; i < pdata->nr_channels; i++) {
struct dw_dma_chan *dwc = &dw->chan[i];
dwc->chan.device = &dw->dma;
@ -1602,7 +1600,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
/* 7 is highest priority & 0 is lowest. */
if (pdata->chan_priority == CHAN_PRIORITY_ASCENDING)
dwc->priority = nr_channels - i - 1;
dwc->priority = pdata->nr_channels - i - 1;
else
dwc->priority = i;
@ -1656,10 +1654,13 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dma_writel(dw, CLEAR.DST_TRAN, dw->all_chan_mask);
dma_writel(dw, CLEAR.ERROR, dw->all_chan_mask);
dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
/* Set capabilities */
dma_cap_set(DMA_SLAVE, dw->dma.cap_mask);
if (pdata->is_private)
dma_cap_set(DMA_PRIVATE, dw->dma.cap_mask);
if (pdata->is_memcpy)
dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
dw->dma.dev = chip->dev;
dw->dma.device_alloc_chan_resources = dwc_alloc_chan_resources;
dw->dma.device_free_chan_resources = dwc_free_chan_resources;
@ -1687,7 +1688,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
goto err_dma_register;
dev_info(chip->dev, "DesignWare DMA Controller, %d channels\n",
nr_channels);
pdata->nr_channels);
pm_runtime_put_sync_suspend(chip->dev);


@ -15,12 +15,6 @@
#include "internal.h"
static struct dw_dma_platform_data dw_pci_pdata = {
.is_private = 1,
.chan_allocation_order = CHAN_ALLOCATION_ASCENDING,
.chan_priority = CHAN_PRIORITY_ASCENDING,
};
static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
{
struct dw_dma_chip *chip;
@ -101,19 +95,19 @@ static const struct dev_pm_ops dw_pci_dev_pm_ops = {
static const struct pci_device_id dw_pci_id_table[] = {
/* Medfield */
{ PCI_VDEVICE(INTEL, 0x0827), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x0830), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x0827) },
{ PCI_VDEVICE(INTEL, 0x0830) },
/* BayTrail */
{ PCI_VDEVICE(INTEL, 0x0f06), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x0f40), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x0f06) },
{ PCI_VDEVICE(INTEL, 0x0f40) },
/* Braswell */
{ PCI_VDEVICE(INTEL, 0x2286), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x22c0), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x2286) },
{ PCI_VDEVICE(INTEL, 0x22c0) },
/* Haswell */
{ PCI_VDEVICE(INTEL, 0x9c60), (kernel_ulong_t)&dw_pci_pdata },
{ PCI_VDEVICE(INTEL, 0x9c60) },
{ }
};
MODULE_DEVICE_TABLE(pci, dw_pci_id_table);


@ -155,6 +155,7 @@ static int dw_probe(struct platform_device *pdev)
struct dw_dma_chip *chip;
struct device *dev = &pdev->dev;
struct resource *mem;
const struct acpi_device_id *id;
struct dw_dma_platform_data *pdata;
int err;
@ -178,6 +179,11 @@ static int dw_probe(struct platform_device *pdev)
pdata = dev_get_platdata(dev);
if (!pdata)
pdata = dw_dma_parse_dt(pdev);
if (!pdata && has_acpi_companion(dev)) {
id = acpi_match_device(dev->driver->acpi_match_table, dev);
if (id)
pdata = (struct dw_dma_platform_data *)id->driver_data;
}
chip->dev = dev;
@ -246,8 +252,17 @@ MODULE_DEVICE_TABLE(of, dw_dma_of_id_table);
#endif
#ifdef CONFIG_ACPI
static struct dw_dma_platform_data dw_dma_acpi_pdata = {
.nr_channels = 8,
.is_private = true,
.chan_allocation_order = CHAN_ALLOCATION_ASCENDING,
.chan_priority = CHAN_PRIORITY_ASCENDING,
.block_size = 4095,
.nr_masters = 2,
};
static const struct acpi_device_id dw_dma_acpi_id_table[] = {
{ "INTL9C60", 0 },
{ "INTL9C60", (kernel_ulong_t)&dw_dma_acpi_pdata },
{ }
};
MODULE_DEVICE_TABLE(acpi, dw_dma_acpi_id_table);

File diff suppressed because it is too large


@ -1512,6 +1512,7 @@ static const struct of_device_id fsldma_of_ids[] = {
{ .compatible = "fsl,elo-dma", },
{}
};
MODULE_DEVICE_TABLE(of, fsldma_of_ids);
static struct platform_driver fsldma_of_driver = {
.driver = {


@ -65,9 +65,6 @@ static void idma64_chan_init(struct idma64 *idma64, struct idma64_chan *idma64c)
u32 cfghi = IDMA64C_CFGH_SRC_PER(1) | IDMA64C_CFGH_DST_PER(0);
u32 cfglo = 0;
/* Enforce FIFO drain when channel is suspended */
cfglo |= IDMA64C_CFGL_CH_DRAIN;
/* Set default burst alignment */
cfglo |= IDMA64C_CFGL_DST_BURST_ALIGN | IDMA64C_CFGL_SRC_BURST_ALIGN;
@ -257,15 +254,15 @@ static u64 idma64_hw_desc_fill(struct idma64_hw_desc *hw,
dar = config->dst_addr;
ctllo |= IDMA64C_CTLL_DST_FIX | IDMA64C_CTLL_SRC_INC |
IDMA64C_CTLL_FC_M2P;
src_width = min_t(u32, 2, __fls(sar | hw->len));
dst_width = __fls(config->dst_addr_width);
src_width = __ffs(sar | hw->len | 4);
dst_width = __ffs(config->dst_addr_width);
} else { /* DMA_DEV_TO_MEM */
sar = config->src_addr;
dar = hw->phys;
ctllo |= IDMA64C_CTLL_DST_INC | IDMA64C_CTLL_SRC_FIX |
IDMA64C_CTLL_FC_P2M;
src_width = __fls(config->src_addr_width);
dst_width = min_t(u32, 2, __fls(dar | hw->len));
src_width = __ffs(config->src_addr_width);
dst_width = __ffs(dar | hw->len | 4);
}
lli->sar = sar;
@ -428,12 +425,17 @@ static int idma64_slave_config(struct dma_chan *chan,
return 0;
}
static void idma64_chan_deactivate(struct idma64_chan *idma64c)
static void idma64_chan_deactivate(struct idma64_chan *idma64c, bool drain)
{
unsigned short count = 100;
u32 cfglo;
cfglo = channel_readl(idma64c, CFG_LO);
if (drain)
cfglo |= IDMA64C_CFGL_CH_DRAIN;
else
cfglo &= ~IDMA64C_CFGL_CH_DRAIN;
channel_writel(idma64c, CFG_LO, cfglo | IDMA64C_CFGL_CH_SUSP);
do {
udelay(1);
@ -456,7 +458,7 @@ static int idma64_pause(struct dma_chan *chan)
spin_lock_irqsave(&idma64c->vchan.lock, flags);
if (idma64c->desc && idma64c->desc->status == DMA_IN_PROGRESS) {
idma64_chan_deactivate(idma64c);
idma64_chan_deactivate(idma64c, false);
idma64c->desc->status = DMA_PAUSED;
}
spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
@ -486,7 +488,7 @@ static int idma64_terminate_all(struct dma_chan *chan)
LIST_HEAD(head);
spin_lock_irqsave(&idma64c->vchan.lock, flags);
idma64_chan_deactivate(idma64c);
idma64_chan_deactivate(idma64c, true);
idma64_stop_transfer(idma64c);
if (idma64c->desc) {
idma64_vdesc_free(&idma64c->desc->vdesc);


@ -16,6 +16,8 @@
#include <linux/spinlock.h>
#include <linux/types.h>
#include <asm-generic/io-64-nonatomic-lo-hi.h>
#include "virt-dma.h"
/* Channel registers */
@ -166,19 +168,13 @@ static inline void idma64c_writel(struct idma64_chan *idma64c, int offset,
static inline u64 idma64c_readq(struct idma64_chan *idma64c, int offset)
{
u64 l, h;
l = idma64c_readl(idma64c, offset);
h = idma64c_readl(idma64c, offset + 4);
return l | (h << 32);
return lo_hi_readq(idma64c->regs + offset);
}
static inline void idma64c_writeq(struct idma64_chan *idma64c, int offset,
u64 value)
{
idma64c_writel(idma64c, offset, value);
idma64c_writel(idma64c, offset + 4, value >> 32);
lo_hi_writeq(value, idma64c->regs + offset);
}
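
For context: on 32-bit platforms the asm-generic lo-hi helpers expand to
exactly the split access being deleted here, so this is an equivalent cleanup
rather than a behavior change. A sketch of lo_hi_readq(), following the
definitions in asm-generic/io-64-nonatomic-lo-hi.h:

/* Non-atomic 64-bit MMIO read: low dword first, then high dword */
static inline u64 lo_hi_readq_sketch(const void __iomem *addr)
{
	u32 low = readl(addr);
	u32 high = readl(addr + 4);

	return low + ((u64)high << 32);
}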
#define channel_readq(idma64c, reg) \
@ -217,7 +213,7 @@ static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
idma64_writel(idma64, IDMA64_##reg, (value))
/**
* struct idma64_chip - representation of DesignWare DMA controller hardware
* struct idma64_chip - representation of iDMA 64-bit controller hardware
* @dev: struct device of the DMA controller
* @irq: irq line
* @regs: memory mapped I/O space


@ -1478,7 +1478,7 @@ static int __init sdma_event_remap(struct sdma_engine *sdma)
event_remap = of_find_property(np, propname, NULL);
num_map = event_remap ? (event_remap->length / sizeof(u32)) : 0;
if (!num_map) {
dev_warn(sdma->dev, "no event needs to be remapped\n");
dev_dbg(sdma->dev, "no event needs to be remapped\n");
goto out;
} else if (num_map % EVENT_REMAP_CELLS) {
dev_err(sdma->dev, "the property %s must modulo %d\n",
@ -1826,8 +1826,6 @@ static int sdma_probe(struct platform_device *pdev)
of_node_put(spba_bus);
}
dev_info(sdma->dev, "initialized\n");
return 0;
err_register:
@ -1852,7 +1850,6 @@ static int sdma_remove(struct platform_device *pdev)
}
platform_set_drvdata(pdev, NULL);
dev_info(&pdev->dev, "Removed...\n");
return 0;
}


@ -197,7 +197,8 @@ static void __ioat_start_null_desc(struct ioatdma_chan *ioat_chan)
void ioat_start_null_desc(struct ioatdma_chan *ioat_chan)
{
spin_lock_bh(&ioat_chan->prep_lock);
__ioat_start_null_desc(ioat_chan);
if (!test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
__ioat_start_null_desc(ioat_chan);
spin_unlock_bh(&ioat_chan->prep_lock);
}


@ -82,8 +82,9 @@ struct ioatdma_device {
struct dma_pool *sed_hw_pool[MAX_SED_POOLS];
struct dma_device dma_dev;
u8 version;
struct msix_entry msix_entries[4];
struct ioatdma_chan *idx[4];
#define IOAT_MAX_CHANS 4
struct msix_entry msix_entries[IOAT_MAX_CHANS];
struct ioatdma_chan *idx[IOAT_MAX_CHANS];
struct dca_provider *dca;
enum ioat_irq_mode irq_mode;
u32 cap;
@ -95,6 +96,7 @@ struct ioatdma_chan {
dma_addr_t last_completion;
spinlock_t cleanup_lock;
unsigned long state;
#define IOAT_CHAN_DOWN 0
#define IOAT_COMPLETION_ACK 1
#define IOAT_RESET_PENDING 2
#define IOAT_KOBJ_INIT_FAIL 3


@ -27,6 +27,7 @@
#include <linux/workqueue.h>
#include <linux/prefetch.h>
#include <linux/dca.h>
#include <linux/aer.h>
#include "dma.h"
#include "registers.h"
#include "hw.h"
@ -1186,13 +1187,116 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
return 0;
}
static void ioat_shutdown(struct pci_dev *pdev)
{
struct ioatdma_device *ioat_dma = pci_get_drvdata(pdev);
struct ioatdma_chan *ioat_chan;
int i;
if (!ioat_dma)
return;
for (i = 0; i < IOAT_MAX_CHANS; i++) {
ioat_chan = ioat_dma->idx[i];
if (!ioat_chan)
continue;
spin_lock_bh(&ioat_chan->prep_lock);
set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
del_timer_sync(&ioat_chan->timer);
spin_unlock_bh(&ioat_chan->prep_lock);
/* this should quiesce then reset */
ioat_reset_hw(ioat_chan);
}
ioat_disable_interrupts(ioat_dma);
}
void ioat_resume(struct ioatdma_device *ioat_dma)
{
struct ioatdma_chan *ioat_chan;
u32 chanerr;
int i;
for (i = 0; i < IOAT_MAX_CHANS; i++) {
ioat_chan = ioat_dma->idx[i];
if (!ioat_chan)
continue;
spin_lock_bh(&ioat_chan->prep_lock);
clear_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
spin_unlock_bh(&ioat_chan->prep_lock);
chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
writel(chanerr, ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
/* no need to reset as shutdown already did that */
}
}
#define DRV_NAME "ioatdma"
static pci_ers_result_t ioat_pcie_error_detected(struct pci_dev *pdev,
enum pci_channel_state error)
{
dev_dbg(&pdev->dev, "%s: PCIe AER error %d\n", DRV_NAME, error);
/* quiesce and block I/O */
ioat_shutdown(pdev);
return PCI_ERS_RESULT_NEED_RESET;
}
static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev)
{
pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED;
int err;
dev_dbg(&pdev->dev, "%s post reset handling\n", DRV_NAME);
if (pci_enable_device_mem(pdev) < 0) {
dev_err(&pdev->dev,
"Failed to enable PCIe device after reset.\n");
result = PCI_ERS_RESULT_DISCONNECT;
} else {
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
pci_wake_from_d3(pdev, false);
}
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
dev_err(&pdev->dev,
"AER uncorrect error status clear failed: %#x\n", err);
}
return result;
}
static void ioat_pcie_error_resume(struct pci_dev *pdev)
{
struct ioatdma_device *ioat_dma = pci_get_drvdata(pdev);
dev_dbg(&pdev->dev, "%s: AER handling resuming\n", DRV_NAME);
/* initialize and bring everything back */
ioat_resume(ioat_dma);
}
static const struct pci_error_handlers ioat_err_handler = {
.error_detected = ioat_pcie_error_detected,
.slot_reset = ioat_pcie_error_slot_reset,
.resume = ioat_pcie_error_resume,
};
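
For readers unfamiliar with AER, the callbacks wired up above follow the
standard PCI error-recovery sequence (a summary of the generic flow, not
driver-specific behavior):

/*
 * 1. .error_detected - ioat_pcie_error_detected() quiesces all channels
 *    via ioat_shutdown() and asks the core for a link reset.
 * 2. .slot_reset     - after the reset, re-enable memory access and
 *    restore/save config space.
 * 3. .resume         - ioat_resume() clears CHANERR and lifts
 *    IOAT_CHAN_DOWN so the prep callbacks accept work again.
 */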
static struct pci_driver ioat_pci_driver = {
.name = DRV_NAME,
.id_table = ioat_pci_tbl,
.probe = ioat_pci_probe,
.remove = ioat_remove,
.shutdown = ioat_shutdown,
.err_handler = &ioat_err_handler,
};
static struct ioatdma_device *
@ -1245,13 +1349,17 @@ static int ioat_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
pci_set_drvdata(pdev, device);
device->version = readb(device->reg_base + IOAT_VER_OFFSET);
if (device->version >= IOAT_VER_3_0)
if (device->version >= IOAT_VER_3_0) {
err = ioat3_dma_probe(device, ioat_dca_enabled);
else
if (device->version >= IOAT_VER_3_3)
pci_enable_pcie_error_reporting(pdev);
} else
return -ENODEV;
if (err) {
dev_err(dev, "Intel(R) I/OAT DMA Engine init failed\n");
pci_disable_pcie_error_reporting(pdev);
return -ENODEV;
}
@ -1271,6 +1379,8 @@ static void ioat_remove(struct pci_dev *pdev)
free_dca_provider(device->dca);
device->dca = NULL;
}
pci_disable_pcie_error_reporting(pdev);
ioat_dma_remove(device);
}


@ -121,6 +121,9 @@ ioat_dma_prep_memcpy_lock(struct dma_chan *c, dma_addr_t dma_dest,
size_t total_len = len;
int num_descs, idx, i;
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
num_descs = ioat_xferlen_to_descs(ioat_chan, len);
if (likely(num_descs) &&
ioat_check_space_lock(ioat_chan, num_descs) == 0)
@ -254,6 +257,11 @@ struct dma_async_tx_descriptor *
ioat_prep_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
unsigned int src_cnt, size_t len, unsigned long flags)
{
struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
return __ioat_prep_xor_lock(chan, NULL, dest, src, src_cnt, len, flags);
}
@ -262,6 +270,11 @@ ioat_prep_xor_val(struct dma_chan *chan, dma_addr_t *src,
unsigned int src_cnt, size_t len,
enum sum_check_flags *result, unsigned long flags)
{
struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
/* the cleanup routine only sets bits on validate failure, it
* does not clear bits on validate success... so clear it here
*/
@ -574,6 +587,11 @@ ioat_prep_pq(struct dma_chan *chan, dma_addr_t *dst, dma_addr_t *src,
unsigned int src_cnt, const unsigned char *scf, size_t len,
unsigned long flags)
{
struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
/* specify valid address for disabled result */
if (flags & DMA_PREP_PQ_DISABLE_P)
dst[0] = dst[1];
@ -614,6 +632,11 @@ ioat_prep_pq_val(struct dma_chan *chan, dma_addr_t *pq, dma_addr_t *src,
unsigned int src_cnt, const unsigned char *scf, size_t len,
enum sum_check_flags *pqres, unsigned long flags)
{
struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
/* specify valid address for disabled result */
if (flags & DMA_PREP_PQ_DISABLE_P)
pq[0] = pq[1];
@ -638,6 +661,10 @@ ioat_prep_pqxor(struct dma_chan *chan, dma_addr_t dst, dma_addr_t *src,
{
unsigned char scf[MAX_SCF];
dma_addr_t pq[2];
struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
if (src_cnt > MAX_SCF)
return NULL;
@ -661,6 +688,10 @@ ioat_prep_pqxor_val(struct dma_chan *chan, dma_addr_t *src,
{
unsigned char scf[MAX_SCF];
dma_addr_t pq[2];
struct ioatdma_chan *ioat_chan = to_ioat_chan(chan);
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
if (src_cnt > MAX_SCF)
return NULL;
@ -689,6 +720,9 @@ ioat_prep_interrupt_lock(struct dma_chan *c, unsigned long flags)
struct ioat_ring_ent *desc;
struct ioat_dma_descriptor *hw;
if (test_bit(IOAT_CHAN_DOWN, &ioat_chan->state))
return NULL;
if (ioat_check_space_lock(ioat_chan, 1) == 0)
desc = ioat_get_ring_ent(ioat_chan, ioat_chan->head);
else


@ -652,6 +652,7 @@ static const struct of_device_id moxart_dma_match[] = {
{ .compatible = "moxa,moxart-dma" },
{ }
};
MODULE_DEVICE_TABLE(of, moxart_dma_match);
static struct platform_driver moxart_driver = {
.probe = moxart_probe,


@ -1073,6 +1073,7 @@ static const struct of_device_id mpc_dma_match[] = {
{ .compatible = "fsl,mpc8308-dma", },
{},
};
MODULE_DEVICE_TABLE(of, mpc_dma_match);
static struct platform_driver mpc_dma_driver = {
.probe = mpc_dma_probe,


@ -935,8 +935,12 @@ static struct dma_async_tx_descriptor *omap_dma_prep_dma_cyclic(
else
d->ccr |= CCR_SYNC_ELEMENT;
if (dir == DMA_DEV_TO_MEM)
if (dir == DMA_DEV_TO_MEM) {
d->ccr |= CCR_TRIGGER_SRC;
d->csdp |= CSDP_DST_PACKED;
} else {
d->csdp |= CSDP_SRC_PACKED;
}
d->cicr |= CICR_MISALIGNED_ERR_IE | CICR_TRANS_ERR_IE;


@ -1149,6 +1149,7 @@ static const struct of_device_id sirfsoc_dma_match[] = {
{ .compatible = "sirf,atlas7-dmac-v2", .data = &sirfsoc_dmadata_a7v2,},
{},
};
MODULE_DEVICE_TABLE(of, sirfsoc_dma_match);
static struct platform_driver sirfsoc_dma_driver = {
.probe = sirfsoc_dma_probe,


@ -2907,7 +2907,7 @@ static int __init d40_dmaengine_init(struct d40_base *base,
if (err) {
d40_err(base->dev,
"Failed to regsiter memcpy only channels\n");
"Failed to register memcpy only channels\n");
goto failure2;
}


@ -908,6 +908,7 @@ static const struct of_device_id sun6i_dma_match[] = {
{ .compatible = "allwinner,sun8i-h3-dma", .data = &sun8i_h3_dma_cfg },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, sun6i_dma_match);
static int sun6i_dma_probe(struct platform_device *pdev)
{


@ -17,13 +17,184 @@
#include <linux/of_device.h>
#include <linux/of_dma.h>
#define TI_XBAR_OUTPUTS 127
#define TI_XBAR_INPUTS 256
#define TI_XBAR_DRA7 0
#define TI_XBAR_AM335X 1
static const struct of_device_id ti_dma_xbar_match[] = {
{
.compatible = "ti,dra7-dma-crossbar",
.data = (void *)TI_XBAR_DRA7,
},
{
.compatible = "ti,am335x-edma-crossbar",
.data = (void *)TI_XBAR_AM335X,
},
{},
};
/* Crossbar on AM335x/AM437x family */
#define TI_AM335X_XBAR_LINES 64
struct ti_am335x_xbar_data {
void __iomem *iomem;
struct dma_router dmarouter;
u32 xbar_events; /* maximum number of events to select in xbar */
u32 dma_requests; /* number of DMA requests on eDMA */
};
struct ti_am335x_xbar_map {
u16 dma_line;
u16 mux_val;
};
static inline void ti_am335x_xbar_write(void __iomem *iomem, int event, u16 val)
{
writeb_relaxed(val & 0x1f, iomem + event);
}
static void ti_am335x_xbar_free(struct device *dev, void *route_data)
{
struct ti_am335x_xbar_data *xbar = dev_get_drvdata(dev);
struct ti_am335x_xbar_map *map = route_data;
dev_dbg(dev, "Unmapping XBAR event %u on channel %u\n",
map->mux_val, map->dma_line);
ti_am335x_xbar_write(xbar->iomem, map->dma_line, 0);
kfree(map);
}
static void *ti_am335x_xbar_route_allocate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
struct ti_am335x_xbar_data *xbar = platform_get_drvdata(pdev);
struct ti_am335x_xbar_map *map;
if (dma_spec->args_count != 3)
return ERR_PTR(-EINVAL);
if (dma_spec->args[2] >= xbar->xbar_events) {
dev_err(&pdev->dev, "Invalid XBAR event number: %d\n",
dma_spec->args[2]);
return ERR_PTR(-EINVAL);
}
if (dma_spec->args[0] >= xbar->dma_requests) {
dev_err(&pdev->dev, "Invalid DMA request line number: %d\n",
dma_spec->args[0]);
return ERR_PTR(-EINVAL);
}
/* The of_node_put() will be done in the core for the node */
dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
if (!dma_spec->np) {
dev_err(&pdev->dev, "Can't get DMA master\n");
return ERR_PTR(-EINVAL);
}
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (!map) {
of_node_put(dma_spec->np);
return ERR_PTR(-ENOMEM);
}
map->dma_line = (u16)dma_spec->args[0];
map->mux_val = (u16)dma_spec->args[2];
dma_spec->args[2] = 0;
dma_spec->args_count = 2;
dev_dbg(&pdev->dev, "Mapping XBAR event%u to DMA%u\n",
map->mux_val, map->dma_line);
ti_am335x_xbar_write(xbar->iomem, map->dma_line, map->mux_val);
return map;
}
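
To make the rewrite above concrete: an AM335x client hands the router a
three-cell spec (eDMA request line, transfer-controller queue, crossbar
event). The router programs the event onto that line's mux byte, then
shrinks the spec to the two cells the eDMA controller expects. A worked
example with invented numbers:

	client:  dmas = <&edma_xbar 12 0 33>;  /* line 12, queue 0, event 33 */
	router:  ti_am335x_xbar_write(iomem, 12, 33);
	         args[2] = 0; args_count = 2;
	eDMA:    sees the equivalent of dmas = <&edma 12 0>;
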
static const struct of_device_id ti_am335x_master_match[] = {
{ .compatible = "ti,edma3-tpcc", },
{},
};
static int ti_am335x_xbar_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
const struct of_device_id *match;
struct device_node *dma_node;
struct ti_am335x_xbar_data *xbar;
struct resource *res;
void __iomem *iomem;
int i, ret;
if (!node)
return -ENODEV;
xbar = devm_kzalloc(&pdev->dev, sizeof(*xbar), GFP_KERNEL);
if (!xbar)
return -ENOMEM;
dma_node = of_parse_phandle(node, "dma-masters", 0);
if (!dma_node) {
dev_err(&pdev->dev, "Can't get DMA master node\n");
return -ENODEV;
}
match = of_match_node(ti_am335x_master_match, dma_node);
if (!match) {
dev_err(&pdev->dev, "DMA master is not supported\n");
return -EINVAL;
}
if (of_property_read_u32(dma_node, "dma-requests",
&xbar->dma_requests)) {
dev_info(&pdev->dev,
"Missing XBAR output information, using %u.\n",
TI_AM335X_XBAR_LINES);
xbar->dma_requests = TI_AM335X_XBAR_LINES;
}
of_node_put(dma_node);
if (of_property_read_u32(node, "dma-requests", &xbar->xbar_events)) {
dev_info(&pdev->dev,
"Missing XBAR input information, using %u.\n",
TI_AM335X_XBAR_LINES);
xbar->xbar_events = TI_AM335X_XBAR_LINES;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
iomem = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(iomem))
return PTR_ERR(iomem);
xbar->iomem = iomem;
xbar->dmarouter.dev = &pdev->dev;
xbar->dmarouter.route_free = ti_am335x_xbar_free;
platform_set_drvdata(pdev, xbar);
/* Reset the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
ti_am335x_xbar_write(xbar->iomem, i, 0);
ret = of_dma_router_register(node, ti_am335x_xbar_route_allocate,
&xbar->dmarouter);
return ret;
}
/* Crossbar on DRA7xx family */
#define TI_DRA7_XBAR_OUTPUTS 127
#define TI_DRA7_XBAR_INPUTS 256
#define TI_XBAR_EDMA_OFFSET 0
#define TI_XBAR_SDMA_OFFSET 1
-struct ti_dma_xbar_data {
+struct ti_dra7_xbar_data {
void __iomem *iomem;
struct dma_router dmarouter;
@ -35,35 +206,35 @@ struct ti_dma_xbar_data {
u32 dma_offset;
};
-struct ti_dma_xbar_map {
+struct ti_dra7_xbar_map {
u16 xbar_in;
int xbar_out;
};
-static inline void ti_dma_xbar_write(void __iomem *iomem, int xbar, u16 val)
+static inline void ti_dra7_xbar_write(void __iomem *iomem, int xbar, u16 val)
{
writew_relaxed(val, iomem + (xbar * 2));
}
-static void ti_dma_xbar_free(struct device *dev, void *route_data)
+static void ti_dra7_xbar_free(struct device *dev, void *route_data)
{
-	struct ti_dma_xbar_data *xbar = dev_get_drvdata(dev);
-	struct ti_dma_xbar_map *map = route_data;
+	struct ti_dra7_xbar_data *xbar = dev_get_drvdata(dev);
+	struct ti_dra7_xbar_map *map = route_data;
dev_dbg(dev, "Unmapping XBAR%u (was routed to %d)\n",
map->xbar_in, map->xbar_out);
-	ti_dma_xbar_write(xbar->iomem, map->xbar_out, xbar->safe_val);
+	ti_dra7_xbar_write(xbar->iomem, map->xbar_out, xbar->safe_val);
idr_remove(&xbar->map_idr, map->xbar_out);
kfree(map);
}
-static void *ti_dma_xbar_route_allocate(struct of_phandle_args *dma_spec,
-					struct of_dma *ofdma)
+static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
+					 struct of_dma *ofdma)
{
struct platform_device *pdev = of_find_device_by_node(ofdma->of_node);
-	struct ti_dma_xbar_data *xbar = platform_get_drvdata(pdev);
-	struct ti_dma_xbar_map *map;
+	struct ti_dra7_xbar_data *xbar = platform_get_drvdata(pdev);
+	struct ti_dra7_xbar_map *map;
if (dma_spec->args[0] >= xbar->xbar_requests) {
dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
@ -93,12 +264,12 @@ static void *ti_dma_xbar_route_allocate(struct of_phandle_args *dma_spec,
dev_dbg(&pdev->dev, "Mapping XBAR%u to DMA%d\n",
map->xbar_in, map->xbar_out);
-	ti_dma_xbar_write(xbar->iomem, map->xbar_out, map->xbar_in);
+	ti_dra7_xbar_write(xbar->iomem, map->xbar_out, map->xbar_in);
return map;
}
-static const struct of_device_id ti_dma_master_match[] = {
+static const struct of_device_id ti_dra7_master_match[] = {
{
.compatible = "ti,omap4430-sdma",
.data = (void *)TI_XBAR_SDMA_OFFSET,
@ -110,12 +281,12 @@ static const struct of_device_id ti_dma_master_match[] = {
{},
};
-static int ti_dma_xbar_probe(struct platform_device *pdev)
+static int ti_dra7_xbar_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
const struct of_device_id *match;
struct device_node *dma_node;
-	struct ti_dma_xbar_data *xbar;
+	struct ti_dra7_xbar_data *xbar;
struct resource *res;
u32 safe_val;
void __iomem *iomem;
@ -136,7 +307,7 @@ static int ti_dma_xbar_probe(struct platform_device *pdev)
return -ENODEV;
}
-	match = of_match_node(ti_dma_master_match, dma_node);
+	match = of_match_node(ti_dra7_master_match, dma_node);
if (!match) {
dev_err(&pdev->dev, "DMA master is not supported\n");
return -EINVAL;
@ -146,16 +317,16 @@ static int ti_dma_xbar_probe(struct platform_device *pdev)
&xbar->dma_requests)) {
dev_info(&pdev->dev,
"Missing XBAR output information, using %u.\n",
-			 TI_XBAR_OUTPUTS);
-		xbar->dma_requests = TI_XBAR_OUTPUTS;
+			 TI_DRA7_XBAR_OUTPUTS);
+		xbar->dma_requests = TI_DRA7_XBAR_OUTPUTS;
}
of_node_put(dma_node);
if (of_property_read_u32(node, "dma-requests", &xbar->xbar_requests)) {
dev_info(&pdev->dev,
"Missing XBAR input information, using %u.\n",
-			 TI_XBAR_INPUTS);
-		xbar->xbar_requests = TI_XBAR_INPUTS;
+			 TI_DRA7_XBAR_INPUTS);
+		xbar->xbar_requests = TI_DRA7_XBAR_INPUTS;
}
if (!of_property_read_u32(node, "ti,dma-safe-map", &safe_val))
@ -169,30 +340,50 @@ static int ti_dma_xbar_probe(struct platform_device *pdev)
xbar->iomem = iomem;
xbar->dmarouter.dev = &pdev->dev;
-	xbar->dmarouter.route_free = ti_dma_xbar_free;
+	xbar->dmarouter.route_free = ti_dra7_xbar_free;
xbar->dma_offset = (u32)match->data;
platform_set_drvdata(pdev, xbar);
/* Reset the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
-		ti_dma_xbar_write(xbar->iomem, i, xbar->safe_val);
+		ti_dra7_xbar_write(xbar->iomem, i, xbar->safe_val);
-	ret = of_dma_router_register(node, ti_dma_xbar_route_allocate,
+	ret = of_dma_router_register(node, ti_dra7_xbar_route_allocate,
&xbar->dmarouter);
if (ret) {
/* Restore the defaults for the crossbar */
for (i = 0; i < xbar->dma_requests; i++)
-			ti_dma_xbar_write(xbar->iomem, i, i);
+			ti_dra7_xbar_write(xbar->iomem, i, i);
}
return ret;
}
static const struct of_device_id ti_dma_xbar_match[] = {
{ .compatible = "ti,dra7-dma-crossbar" },
{},
};
static int ti_dma_xbar_probe(struct platform_device *pdev)
{
const struct of_device_id *match;
int ret;
match = of_match_node(ti_dma_xbar_match, pdev->dev.of_node);
if (unlikely(!match))
return -EINVAL;
switch ((u32)match->data) {
case TI_XBAR_DRA7:
ret = ti_dra7_xbar_probe(pdev);
break;
case TI_XBAR_AM335X:
ret = ti_am335x_xbar_probe(pdev);
break;
default:
dev_err(&pdev->dev, "Unsupported crossbar\n");
ret = -ENODEV;
break;
}
return ret;
}
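
The wrapper probe above is a stock pattern for one driver serving
several compatibles: the variant is stashed in of_device_id.data and
recovered with of_match_node() at probe time. A stripped-down sketch
with hypothetical names:

enum { FOO_V1, FOO_V2 };

static int foo_v1_probe(struct platform_device *pdev) { return 0; }
static int foo_v2_probe(struct platform_device *pdev) { return 0; }

static const struct of_device_id foo_match[] = {
	{ .compatible = "acme,foo-v1", .data = (void *)FOO_V1 },
	{ .compatible = "acme,foo-v2", .data = (void *)FOO_V2 },
	{},
};

static int foo_probe(struct platform_device *pdev)
{
	const struct of_device_id *match;

	match = of_match_node(foo_match, pdev->dev.of_node);
	if (!match)
		return -EINVAL;

	/* Casting through uintptr_t keeps 32/64-bit builds warning-free;
	 * the driver above uses a plain (u32) cast. */
	switch ((uintptr_t)match->data) {
	case FOO_V1:
		return foo_v1_probe(pdev);
	case FOO_V2:
		return foo_v2_probe(pdev);
	default:
		return -ENODEV;
	}
}
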
static struct platform_driver ti_dma_xbar_driver = {
.driver = {


@ -47,9 +47,9 @@ struct virt_dma_desc *vchan_find_desc(struct virt_dma_chan *, dma_cookie_t);
/**
* vchan_tx_prep - prepare a descriptor
- * vc: virtual channel allocating this descriptor
- * vd: virtual descriptor to prepare
- * tx_flags: flags argument passed in to prepare function
+ * @vc: virtual channel allocating this descriptor
+ * @vd: virtual descriptor to prepare
+ * @tx_flags: flags argument passed in to prepare function
*/
static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan *vc,
struct virt_dma_desc *vd, unsigned long tx_flags)
@ -65,7 +65,7 @@ static inline struct dma_async_tx_descriptor *vchan_tx_prep(struct virt_dma_chan
/**
* vchan_issue_pending - move submitted descriptors to issued list
- * vc: virtual channel to update
+ * @vc: virtual channel to update
*
* vc.lock must be held by caller
*/
@ -77,7 +77,7 @@ static inline bool vchan_issue_pending(struct virt_dma_chan *vc)
/**
* vchan_cookie_complete - report completion of a descriptor
- * vd: virtual descriptor to update
+ * @vd: virtual descriptor to update
*
* vc.lock must be held by caller
*/
@ -97,7 +97,7 @@ static inline void vchan_cookie_complete(struct virt_dma_desc *vd)
/**
* vchan_cyclic_callback - report the completion of a period
- * vd: virtual descriptor
+ * @vd: virtual descriptor
*/
static inline void vchan_cyclic_callback(struct virt_dma_desc *vd)
{
@ -109,7 +109,7 @@ static inline void vchan_cyclic_callback(struct virt_dma_desc *vd)
/**
* vchan_next_desc - peek at the next descriptor to be processed
- * vc: virtual channel to obtain descriptor from
+ * @vc: virtual channel to obtain descriptor from
*
* vc.lock must be held by caller
*/
@ -123,8 +123,8 @@ static inline struct virt_dma_desc *vchan_next_desc(struct virt_dma_chan *vc)
/**
* vchan_get_all_descriptors - obtain all submitted and issued descriptors
- * vc: virtual channel to get descriptors from
- * head: list of descriptors found
+ * @vc: virtual channel to get descriptors from
+ * @head: list of descriptors found
*
* vc.lock must be held by caller
*

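The virt-dma hunks above change only comment syntax, but it matters:
kernel-doc recognizes a parameter description only when the name is
prefixed with @, so the old vc:/vd: lines produced warnings and dropped
text from the generated documentation. The canonical shape (example_fn
is hypothetical):

/**
 * example_fn - one-line summary of the helper
 * @vc: virtual channel the operation applies to
 * @flags: behaviour flags passed through to the core
 *
 * Locking requirements such as "vc.lock must be held by caller" belong
 * in this body section, as in the comments above.
 */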

@ -547,14 +547,12 @@ static struct xgene_dma_desc_sw *xgene_dma_alloc_descriptor(
struct xgene_dma_desc_sw *desc;
dma_addr_t phys;
-	desc = dma_pool_alloc(chan->desc_pool, GFP_NOWAIT, &phys);
+	desc = dma_pool_zalloc(chan->desc_pool, GFP_NOWAIT, &phys);
if (!desc) {
chan_err(chan, "Failed to allocate LDs\n");
return NULL;
}
-	memset(desc, 0, sizeof(*desc));
INIT_LIST_HEAD(&desc->tx_list);
desc->tx.phys = phys;
desc->tx.tx_submit = xgene_dma_tx_submit;
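
dma_pool_zalloc() lets the driver drop the open-coded memset() that
used to follow the allocation. A hedged sketch of the helper's
semantics; the real definition lives in include/linux/dmapool.h and may
differ in detail:

static inline void *dma_pool_zalloc_sketch(struct dma_pool *pool,
					   gfp_t mem_flags,
					   dma_addr_t *handle)
{
	/* __GFP_ZERO makes the pool hand back already-cleared memory. */
	return dma_pool_alloc(pool, mem_flags | __GFP_ZERO, handle);
}
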
@ -894,60 +892,6 @@ static void xgene_dma_free_chan_resources(struct dma_chan *dchan)
chan->desc_pool = NULL;
}
static struct dma_async_tx_descriptor *xgene_dma_prep_memcpy(
struct dma_chan *dchan, dma_addr_t dst, dma_addr_t src,
size_t len, unsigned long flags)
{
struct xgene_dma_desc_sw *first = NULL, *new;
struct xgene_dma_chan *chan;
size_t copy;
if (unlikely(!dchan || !len))
return NULL;
chan = to_dma_chan(dchan);
do {
/* Allocate the link descriptor from DMA pool */
new = xgene_dma_alloc_descriptor(chan);
if (!new)
goto fail;
/* Create the largest transaction possible */
copy = min_t(size_t, len, XGENE_DMA_MAX_64B_DESC_BYTE_CNT);
/* Prepare DMA descriptor */
xgene_dma_prep_cpy_desc(chan, new, dst, src, copy);
if (!first)
first = new;
new->tx.cookie = 0;
async_tx_ack(&new->tx);
/* Update metadata */
len -= copy;
dst += copy;
src += copy;
/* Insert the link descriptor to the LD ring */
list_add_tail(&new->node, &first->tx_list);
} while (len);
new->tx.flags = flags; /* client is in control of this ack */
new->tx.cookie = -EBUSY;
list_splice(&first->tx_list, &new->tx_list);
return &new->tx;
fail:
if (!first)
return NULL;
xgene_dma_free_desc_list(chan, &first->tx_list);
return NULL;
}
static struct dma_async_tx_descriptor *xgene_dma_prep_sg(
struct dma_chan *dchan, struct scatterlist *dst_sg,
u32 dst_nents, struct scatterlist *src_sg,
@ -1707,7 +1651,6 @@ static void xgene_dma_set_caps(struct xgene_dma_chan *chan,
dma_cap_zero(dma_dev->cap_mask);
/* Set DMA device capability */
-	dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
dma_cap_set(DMA_SG, dma_dev->cap_mask);
/* Basically here, the X-Gene SoC DMA engine channel 0 supports XOR
@ -1734,7 +1677,6 @@ static void xgene_dma_set_caps(struct xgene_dma_chan *chan,
dma_dev->device_free_chan_resources = xgene_dma_free_chan_resources;
dma_dev->device_issue_pending = xgene_dma_issue_pending;
dma_dev->device_tx_status = xgene_dma_tx_status;
-	dma_dev->device_prep_dma_memcpy = xgene_dma_prep_memcpy;
dma_dev->device_prep_dma_sg = xgene_dma_prep_sg;
if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) {
@ -1787,8 +1729,7 @@ static int xgene_dma_async_register(struct xgene_dma *pdma, int id)
/* DMA capability info */
dev_info(pdma->dev,
"%s: CAPABILITY ( %s%s%s%s)\n", dma_chan_name(&chan->dma_chan),
dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask) ? "MEMCPY " : "",
"%s: CAPABILITY ( %s%s%s)\n", dma_chan_name(&chan->dma_chan),
dma_has_cap(DMA_SG, dma_dev->cap_mask) ? "SGCPY " : "",
dma_has_cap(DMA_XOR, dma_dev->cap_mask) ? "XOR " : "",
dma_has_cap(DMA_PQ, dma_dev->cap_mask) ? "PQ " : "");


@ -1349,6 +1349,7 @@ static const struct of_device_id xilinx_vdma_of_ids[] = {
{ .compatible = "xlnx,axi-vdma-1.00.a",},
{}
};
MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
static struct platform_driver xilinx_vdma_driver = {
.driver = {


@ -441,7 +441,7 @@ static struct zx_dma_desc_sw *zx_alloc_desc_resource(int num,
kfree(ds);
return NULL;
}
-	memset(ds->desc_hw, sizeof(struct zx_desc_hw) * num, 0);
+	memset(ds->desc_hw, 0, sizeof(struct zx_desc_hw) * num);
ds->desc_num = num;
return ds;
}
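
The zx296702 fix above corrects transposed memset() arguments: the
signature is memset(ptr, value, length), so the old call asked for zero
bytes to be written and left the descriptors uninitialized. A
self-contained demonstration:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

	memset(buf, sizeof(buf), 0);	/* buggy: length 0, no effect */
	printf("%d\n", buf[0]);		/* prints 1 */

	memset(buf, 0, sizeof(buf));	/* fixed: clears all 8 bytes */
	printf("%d\n", buf[0]);		/* prints 0 */
	return 0;
}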


@ -34,7 +34,7 @@ struct of_dma_filter_info {
dma_filter_fn filter_fn;
};
-#ifdef CONFIG_OF
+#ifdef CONFIG_DMA_OF
extern int of_dma_controller_register(struct device_node *np,
struct dma_chan *(*of_dma_xlate)
(struct of_phandle_args *, struct of_dma *),


@ -37,6 +37,7 @@ struct dw_dma_slave {
* @nr_channels: Number of channels supported by hardware (max 8)
* @is_private: The device channels should be marked as private and not for
* by the general purpose DMA channel allocator.
* @is_memcpy: The device channels do support memory-to-memory transfers.
* @chan_allocation_order: Allocate channels starting from 0 or 7
* @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.
* @block_size: Maximum block size supported by the controller
@ -47,6 +48,7 @@ struct dw_dma_slave {
struct dw_dma_platform_data {
unsigned int nr_channels;
bool is_private;
bool is_memcpy;
#define CHAN_ALLOCATION_ASCENDING 0 /* zero to seven */
#define CHAN_ALLOCATION_DESCENDING 1 /* seven to zero */
unsigned char chan_allocation_order;

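With the new is_memcpy flag, platform code states outright whether the
controller may serve memory-to-memory transfers, instead of the driver
inferring that from hardware autoconfiguration. A hedged example of a
platform data instance using it; the values are invented:

static struct dw_dma_platform_data acme_dw_pdata = {
	.nr_channels		= 8,
	.is_private		= true,
	.is_memcpy		= true,	/* advertise DMA_MEMCPY capability */
	.chan_allocation_order	= CHAN_ALLOCATION_ASCENDING,
};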

@ -41,51 +41,6 @@
#ifndef EDMA_H_
#define EDMA_H_
/* PaRAM slots are laid out like this */
struct edmacc_param {
u32 opt;
u32 src;
u32 a_b_cnt;
u32 dst;
u32 src_dst_bidx;
u32 link_bcntrld;
u32 src_dst_cidx;
u32 ccnt;
} __packed;
/* fields in edmacc_param.opt */
#define SAM BIT(0)
#define DAM BIT(1)
#define SYNCDIM BIT(2)
#define STATIC BIT(3)
#define EDMA_FWID (0x07 << 8)
#define TCCMODE BIT(11)
#define EDMA_TCC(t) ((t) << 12)
#define TCINTEN BIT(20)
#define ITCINTEN BIT(21)
#define TCCHEN BIT(22)
#define ITCCHEN BIT(23)
/*ch_status paramater of callback function possible values*/
#define EDMA_DMA_COMPLETE 1
#define EDMA_DMA_CC_ERROR 2
#define EDMA_DMA_TC1_ERROR 3
#define EDMA_DMA_TC2_ERROR 4
enum address_mode {
INCR = 0,
FIFO = 1
};
enum fifo_width {
W8BIT = 0,
W16BIT = 1,
W32BIT = 2,
W64BIT = 3,
W128BIT = 4,
W256BIT = 5
};
enum dma_event_q {
EVENTQ_0 = 0,
EVENTQ_1 = 1,
@ -94,64 +49,10 @@ enum dma_event_q {
EVENTQ_DEFAULT = -1
};
enum sync_dimension {
ASYNC = 0,
ABSYNC = 1
};
#define EDMA_CTLR_CHAN(ctlr, chan) (((ctlr) << 16) | (chan))
#define EDMA_CTLR(i) ((i) >> 16)
#define EDMA_CHAN_SLOT(i) ((i) & 0xffff)
#define EDMA_CHANNEL_ANY -1 /* for edma_alloc_channel() */
#define EDMA_SLOT_ANY -1 /* for edma_alloc_slot() */
#define EDMA_CONT_PARAMS_ANY 1001
#define EDMA_CONT_PARAMS_FIXED_EXACT 1002
#define EDMA_CONT_PARAMS_FIXED_NOT_EXACT 1003
#define EDMA_MAX_CC 2
/* alloc/free DMA channels and their dedicated parameter RAM slots */
int edma_alloc_channel(int channel,
void (*callback)(unsigned channel, u16 ch_status, void *data),
void *data, enum dma_event_q);
void edma_free_channel(unsigned channel);
/* alloc/free parameter RAM slots */
int edma_alloc_slot(unsigned ctlr, int slot);
void edma_free_slot(unsigned slot);
/* alloc/free a set of contiguous parameter RAM slots */
int edma_alloc_cont_slots(unsigned ctlr, unsigned int id, int slot, int count);
int edma_free_cont_slots(unsigned slot, int count);
/* calls that operate on part of a parameter RAM slot */
void edma_set_src(unsigned slot, dma_addr_t src_port,
enum address_mode mode, enum fifo_width);
void edma_set_dest(unsigned slot, dma_addr_t dest_port,
enum address_mode mode, enum fifo_width);
dma_addr_t edma_get_position(unsigned slot, bool dst);
void edma_set_src_index(unsigned slot, s16 src_bidx, s16 src_cidx);
void edma_set_dest_index(unsigned slot, s16 dest_bidx, s16 dest_cidx);
void edma_set_transfer_params(unsigned slot, u16 acnt, u16 bcnt, u16 ccnt,
u16 bcnt_rld, enum sync_dimension sync_mode);
void edma_link(unsigned from, unsigned to);
void edma_unlink(unsigned from);
/* calls that operate on an entire parameter RAM slot */
void edma_write_slot(unsigned slot, const struct edmacc_param *params);
void edma_read_slot(unsigned slot, struct edmacc_param *params);
/* channel control operations */
int edma_start(unsigned channel);
void edma_stop(unsigned channel);
void edma_clean_channel(unsigned channel);
void edma_clear_event(unsigned channel);
void edma_pause(unsigned channel);
void edma_resume(unsigned channel);
void edma_assign_channel_eventq(unsigned channel, enum dma_event_q eventq_no);
struct edma_rsv_info {
const s16 (*rsv_chans)[2];
@ -170,10 +71,11 @@ struct edma_soc_info {
/* Resource reservation for other cores */
struct edma_rsv_info *rsv;
/* List of channels allocated for memcpy, terminated with -1 */
s16 *memcpy_channels;
s8 (*queue_priority_mapping)[2];
const s16 (*xbar_chans)[2];
};
int edma_trigger_channel(unsigned);
#endif
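
Of the fields that survive this header cleanup, memcpy_channels is the
new one: boards list the eDMA channels set aside for memcpy use,
terminated with -1. A hedged example with invented channel numbers:

static s16 board_memcpy_channels[] = { 20, 21, -1 };

static struct edma_soc_info board_edma_info = {
	/* ...other fields as above... */
	.memcpy_channels = board_memcpy_channels,
};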