spi: Updates for v6.8

A moderately busy release for SPI, the main core update was the merging
 of support for multiple chip selects, used in some flash configurations.
 There were also big overhauls for the AXI SPI Engine and PL022 drivers,
 plus some new device support for ST.
 
 There's a few patches for other trees, API updates to allow the
 multiple chip select support and one of the naming modernisations
 touched a controller embedded in the USB code.
 
  - Support for multiple chip selects.
  - A big overhaul for the AXI SPI engine driver, modernising it and
    adding a bunch of new features.
  - Modernisation of the PL022 driver, fixing some issues with submitting
    messages while in atomic context in the process.
  - Many drivers were converted to use new APIs which avoid outdated
    terminology for devices and controllers.
  - Support for ST Microelectronics STM32F7 and STM32MP25, and Renesas
    RZ/Five.
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmWbHmMACgkQJNaLcl1U
 h9CpSwf+O981469g1twyEpq5PJlNgdXmrKUpezcC18X4DLXmlf5hoCsHUFIU2DuX
 oBZuUQVp1KaEzJ4LX1giAOTuhfwPAItGR+/JMs6VxT/V0MMCHaNYcU5zLHXacDFL
 URU7hyyhxUp9PzGNI/IEQH2DPv3QVX8Z1CVQQNQpnTsvbpBEF/osxB3SdWg65Y4J
 B9nEW5hnyDsjxQVzjwCMFsy1vJeaBkP++zdPhPGE4RaNcweX+hksVRWVJ3DqUuJC
 u4IyO5Hmduqmyjyc7MEV6lekecnyHc72WIzFXJpy0FOW0CstOQD59D5Fnbdvbb9i
 mm3IJ1Vh/oepZBNPAmHCPqMEAqr5ZQ==
 =49Kh
 -----END PGP SIGNATURE-----

Merge tag 'spi-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
 "A moderately busy release for SPI, the main core update was the
  merging of support for multiple chip selects, used in some flash
  configurations. There were also big overhauls for the AXI SPI Engine
  and PL022 drivers, plus some new device support for ST.

  There's a few patches for other trees, API updates to allow the
  multiple chip select support and one of the naming modernisations
  touched a controller embedded in the USB code.

   - Support for multiple chip selects.

   - A big overhaul for the AXI SPI engine driver, modernising it and
     adding a bunch of new features.

   - Modernisation of the PL022 driver, fixing some issues with
     submitting messages while in atomic context in the process.

   - Many drivers were converted to use new APIs which avoid outdated
     terminology for devices and controllers.

   - Support for ST Microelectronics STM32F7 and STM32MP25, and Renesas
     RZ/Five"

* tag 'spi-v6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (83 commits)
  spi: stm32: add st,stm32mp25-spi compatible supporting STM32MP25 soc
  dt-bindings: spi: stm32: add st,stm32mp25-spi compatible
  spi: stm32: use dma_get_slave_caps prior to configuring dma channel
  spi: axi-spi-engine: fix struct member doc warnings
  spi: pl022: update description of internal_cs_control()
  spi: pl022: delete description of cur_msg
  spi: dw: Remove Intel Thunder Bay SOC support
  spi: dw: Remove Intel Thunder Bay SOC support
  spi: sh-msiof: Enforce fixed DTDL for R-Car H3
  spi: ljca: switch to use devm_spi_alloc_host()
  spi: cs42l43: switch to use devm_spi_alloc_host()
  spi: zynqmp-gqspi: switch to use modern name
  spi: zynq-qspi: switch to use modern name
  spi: xtensa-xtfpga: switch to use modern name
  spi: xlp: switch to use modern name
  spi: xilinx: switch to use modern name
  spi: xcomm: switch to use modern name
  spi: uniphier: switch to use modern name
  spi: topcliff-pch: switch to use modern name
  spi: wpcm-fiu: switch to use devm_spi_alloc_host()
  ...
Linus Torvalds 2024-01-09 15:02:12 -08:00
commit 301940020a
59 changed files with 2148 additions and 1714 deletions

View File

@ -1,31 +0,0 @@
Analog Devices AXI SPI Engine controller Device Tree Bindings
Required properties:
- compatible : Must be "adi,axi-spi-engine-1.00.a"
- reg : Physical base address and size of the register map.
- interrupts : Property with a value describing the interrupt
number.
- clock-names : List of input clock names - "s_axi_aclk", "spi_clk"
- clocks : Clock phandles and specifiers (See clock bindings for
details on clock-names and clocks).
- #address-cells : Must be <1>
- #size-cells : Must be <0>
Optional subnodes:
Subnodes are use to represent the SPI slave devices connected to the SPI
master. They follow the generic SPI bindings as outlined in spi-bus.txt.
Example:
spi@44a00000 {
compatible = "adi,axi-spi-engine-1.00.a";
reg = <0x44a00000 0x1000>;
interrupts = <0 56 4>;
clocks = <&clkc 15 &clkc 15>;
clock-names = "s_axi_aclk", "spi_clk";
#address-cells = <1>;
#size-cells = <0>;
/* SPI devices */
};

View File

@ -0,0 +1,66 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/spi/adi,axi-spi-engine.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Analog Devices AXI SPI Engine Controller
description: |
The AXI SPI Engine controller is part of the SPI Engine framework[1] and
allows memory mapped access to the SPI Engine control bus. This allows it
to be used as a general purpose software driven SPI controller as well as
some optional advanced acceleration and offloading capabilities.
[1] https://wiki.analog.com/resources/fpga/peripherals/spi_engine
maintainers:
- Michael Hennerich <Michael.Hennerich@analog.com>
- Nuno Sá <nuno.sa@analog.com>
allOf:
- $ref: /schemas/spi/spi-controller.yaml#
properties:
compatible:
const: adi,axi-spi-engine-1.00.a
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
items:
- description: The AXI interconnect clock.
- description: The SPI controller clock.
clock-names:
items:
- const: s_axi_aclk
- const: spi_clk
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
unevaluatedProperties: false
examples:
- |
spi@44a00000 {
compatible = "adi,axi-spi-engine-1.00.a";
reg = <0x44a00000 0x1000>;
interrupts = <0 56 4>;
clocks = <&clkc 15>, <&clkc 15>;
clock-names = "s_axi_aclk", "spi_clk";
#address-cells = <1>;
#size-cells = <0>;
/* SPI devices */
};

View File

@ -21,7 +21,7 @@ properties:
- enum:
- renesas,rspi-r7s72100 # RZ/A1H
- renesas,rspi-r7s9210 # RZ/A2
- renesas,r9a07g043-rspi # RZ/G2UL
- renesas,r9a07g043-rspi # RZ/G2UL and RZ/Five
- renesas,r9a07g044-rspi # RZ/G2{L,LC}
- renesas,r9a07g054-rspi # RZ/V2L
- const: renesas,rspi-rz

View File

@ -72,8 +72,6 @@ properties:
- const: snps,dw-apb-ssi
- description: Intel Keem Bay SPI Controller
const: intel,keembay-ssi
- description: Intel Thunder Bay SPI Controller
const: intel,thunderbay-ssi
- description: Intel Mount Evans Integrated Management Complex SPI Controller
const: intel,mountevans-imc-ssi
- description: AMD Pensando Elba SoC SPI Controller

View File

@ -23,7 +23,9 @@ properties:
compatible:
enum:
- st,stm32f4-spi
- st,stm32f7-spi
- st,stm32h7-spi
- st,stm32mp25-spi
reg:
maxItems: 1

View File

@ -3,13 +3,13 @@ PXA2xx SPI on SSP driver HOWTO
==============================
This a mini HOWTO on the pxa2xx_spi driver. The driver turns a PXA2xx
synchronous serial port into an SPI master controller
synchronous serial port into an SPI host controller
(see Documentation/spi/spi-summary.rst). The driver has the following features
- Support for any PXA2xx and compatible SSP.
- SSP PIO and SSP DMA data transfers.
- External and Internal (SSPFRM) chip selects.
- Per slave device (chip) configuration.
- Per peripheral device (chip) configuration.
- Full suspend, freeze, resume support.
The driver is built around a &struct spi_message FIFO serviced by kernel
@ -17,10 +17,10 @@ thread. The kernel thread, spi_pump_messages(), drives message FIFO and
is responsible for queuing SPI transactions and setting up and launching
the DMA or interrupt driven transfers.
Declaring PXA2xx Master Controllers
-----------------------------------
Typically, for a legacy platform, an SPI master is defined in the
arch/.../mach-*/board-*.c as a "platform device". The master configuration
Declaring PXA2xx host controllers
---------------------------------
Typically, for a legacy platform, an SPI host controller is defined in the
arch/.../mach-*/board-*.c as a "platform device". The host controller configuration
is passed to the driver via a table found in include/linux/spi/pxa2xx_spi.h::
struct pxa2xx_spi_controller {
@ -30,7 +30,7 @@ is passed to the driver via a table found in include/linux/spi/pxa2xx_spi.h::
};
The "pxa2xx_spi_controller.num_chipselect" field is used to determine the number of
slave device (chips) attached to this SPI master.
peripheral devices (chips) attached to this SPI host controller.
The "pxa2xx_spi_controller.enable_dma" field informs the driver that SSP DMA should
be used. This caused the driver to acquire two DMA channels: Rx channel and
@ -40,8 +40,8 @@ See the "PXA2xx Developer Manual" section "DMA Controller".
For the new platforms the description of the controller and peripheral devices
comes from Device Tree or ACPI.
NSSP MASTER SAMPLE
------------------
NSSP HOST SAMPLE
----------------
Below is a sample configuration using the PXA255 NSSP for a legacy platform::
static struct resource pxa_spi_nssp_resources[] = {
@ -57,7 +57,7 @@ Below is a sample configuration using the PXA255 NSSP for a legacy platform::
},
};
static struct pxa2xx_spi_controller pxa_nssp_master_info = {
static struct pxa2xx_spi_controller pxa_nssp_controller_info = {
.num_chipselect = 1, /* Matches the number of chips attached to NSSP */
.enable_dma = 1, /* Enables NSSP DMA */
};
@ -68,7 +68,7 @@ Below is a sample configuration using the PXA255 NSSP for a legacy platform::
.resource = pxa_spi_nssp_resources,
.num_resources = ARRAY_SIZE(pxa_spi_nssp_resources),
.dev = {
.platform_data = &pxa_nssp_master_info, /* Passed to driver */
.platform_data = &pxa_nssp_controller_info, /* Passed to driver */
},
};
@ -81,17 +81,17 @@ Below is a sample configuration using the PXA255 NSSP for a legacy platform::
(void)platform_add_device(devices, ARRAY_SIZE(devices));
}
Declaring Slave Devices
-----------------------
Typically, for a legacy platform, each SPI slave (chip) is defined in the
Declaring peripheral devices
----------------------------
Typically, for a legacy platform, each SPI peripheral device (chip) is defined in the
arch/.../mach-*/board-*.c using the "spi_board_info" structure found in
"linux/spi/spi.h". See "Documentation/spi/spi-summary.rst" for additional
information.
Each slave device attached to the PXA must provide slave specific configuration
Each peripheral device (chip) attached to the PXA2xx must provide specific chip configuration
information via the structure "pxa2xx_spi_chip" found in
"include/linux/spi/pxa2xx_spi.h". The pxa2xx_spi master controller driver
will uses the configuration whenever the driver communicates with the slave
"include/linux/spi/pxa2xx_spi.h". The PXA2xx host controller driver will use
the configuration whenever the driver communicates with the peripheral
device. All fields are optional.
::
@ -123,7 +123,7 @@ dma_burst_size == 0.
The "pxa2xx_spi_chip.timeout" fields is used to efficiently handle
trailing bytes in the SSP receiver FIFO. The correct value for this field is
dependent on the SPI bus speed ("spi_board_info.max_speed_hz") and the specific
slave device. Please note that the PXA2xx SSP 1 does not support trailing byte
peripheral device. Please note that the PXA2xx SSP 1 does not support trailing byte
timeouts and must busy-wait any trailing bytes.
NOTE: the SPI driver cannot control the chip select if SSPFRM is used, so the
@ -132,8 +132,8 @@ asserted around the complete message. Use SSPFRM as a GPIO (through a descriptor
to accommodate these chips.
NSSP SLAVE SAMPLE
-----------------
NSSP PERIPHERAL SAMPLE
----------------------
For a legacy platform or in some other cases, the pxa2xx_spi_chip structure
is passed to the pxa2xx_spi driver in the "spi_board_info.controller_data"
field. Below is a sample configuration using the PXA255 NSSP.
@ -161,16 +161,16 @@ field. Below is a sample configuration using the PXA255 NSSP.
.bus_num = 2, /* Framework bus number */
.chip_select = 0, /* Framework chip select */
.platform_data = NULL; /* No spi_driver specific config */
.controller_data = &cs8415a_chip_info, /* Master chip config */
.irq = STREETRACER_APCI_IRQ, /* Slave device interrupt */
.controller_data = &cs8415a_chip_info, /* Host controller config */
.irq = STREETRACER_APCI_IRQ, /* Peripheral device interrupt */
},
{
.modalias = "cs8405a", /* Name of spi_driver for this device */
.max_speed_hz = 3686400, /* Run SSP as fast a possible */
.bus_num = 2, /* Framework bus number */
.chip_select = 1, /* Framework chip select */
.controller_data = &cs8405a_chip_info, /* Master chip config */
.irq = STREETRACER_APCI_IRQ, /* Slave device interrupt */
.controller_data = &cs8405a_chip_info, /* Host controller config */
.irq = STREETRACER_APCI_IRQ, /* Peripheral device interrupt */
},
};
@ -193,17 +193,14 @@ mode supports both coherent and stream based DMA mappings.
The following logic is used to determine the type of I/O to be used on
a per "spi_transfer" basis::
if !enable_dma then
always use PIO transfers
if spi_message.len > 65536 then
if spi_message.is_dma_mapped or rx_dma_buf != 0 or tx_dma_buf != 0 then
reject premapped transfers
if spi_message.len > 8191 then
print "rate limited" warning
use PIO transfers
if spi_message.is_dma_mapped and rx_dma_buf != 0 and tx_dma_buf != 0 then
use coherent DMA mode
if rx_buf and tx_buf are aligned on 8 byte boundary then
if enable_dma and the size is in the range [DMA burst size..65536] then
use streaming DMA mode
otherwise

View File

@ -3408,6 +3408,16 @@ W: https://ez.analog.com/linux-software-drivers
F: Documentation/devicetree/bindings/hwmon/adi,axi-fan-control.yaml
F: drivers/hwmon/axi-fan-control.c
AXI SPI ENGINE
M: Michael Hennerich <michael.hennerich@analog.com>
M: Nuno Sá <nuno.sa@analog.com>
R: David Lechner <dlechner@baylibre.com>
L: linux-spi@vger.kernel.org
S: Supported
W: https://ez.analog.com/linux-software-drivers
F: Documentation/devicetree/bindings/spi/adi,axi-spi-engine.yaml
F: drivers/spi/spi-axi-spi-engine.c
AXXIA I2C CONTROLLER
M: Krzysztof Adamski <krzysztof.adamski@nokia.com>
L: linux-i2c@vger.kernel.org

View File

@ -375,7 +375,7 @@ static int rmi_spi_probe(struct spi_device *spi)
struct rmi_device_platform_data *spi_pdata = spi->dev.platform_data;
int error;
if (spi->master->flags & SPI_MASTER_HALF_DUPLEX)
if (spi->master->flags & SPI_CONTROLLER_HALF_DUPLEX)
return -EINVAL;
rmi_spi = devm_kzalloc(&spi->dev, sizeof(struct rmi_spi_xport),

View File

@ -98,7 +98,7 @@ static int tps6594_spi_probe(struct spi_device *spi)
spi_set_drvdata(spi, tps);
tps->dev = dev;
tps->reg = spi->chip_select;
tps->reg = spi_get_chipselect(spi, 0);
tps->irq = spi->irq;
tps->regmap = devm_regmap_init(dev, NULL, spi, &tps6594_spi_regmap_config);
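
The tps6594 hunk above is typical of the chip-select accessor conversions in this series: peripheral drivers stop reading spi->chip_select directly and use spi_get_chipselect() instead, which is what allows the core to turn the field into a per-device array for the multiple chip select support. A minimal sketch of the pattern, using a hypothetical driver-private struct (the accessor itself is the one visible in the diff):

#include <linux/device.h>
#include <linux/spi/spi.h>

struct my_chip {
	u8 cs;		/* hypothetical driver state */
};

static int my_chip_probe(struct spi_device *spi)
{
	struct my_chip *chip;

	chip = devm_kzalloc(&spi->dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;

	/* old style was: chip->cs = spi->chip_select; */
	chip->cs = spi_get_chipselect(spi, 0);	/* logical CS index 0 */

	spi_set_drvdata(spi, chip);
	return 0;
}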

View File

@ -1322,7 +1322,7 @@ static int mmc_spi_probe(struct spi_device *spi)
/* We rely on full duplex transfers, mostly to reduce
* per-transfer overheads (by making fewer transfers).
*/
if (spi->master->flags & SPI_MASTER_HALF_DUPLEX)
if (spi->master->flags & SPI_CONTROLLER_HALF_DUPLEX)
return -EINVAL;
/* MMC and SD specs only seem to care that sampling is on the

View File

@ -974,7 +974,7 @@ static int spinand_manufacturer_match(struct spinand_device *spinand,
spinand->manufacturer = manufacturer;
return 0;
}
return -ENOTSUPP;
return -EOPNOTSUPP;
}
static int spinand_id_detect(struct spinand_device *spinand)

View File

@ -3146,7 +3146,7 @@ int spi_nor_set_4byte_addr_mode(struct spi_nor *nor, bool enable)
int ret;
ret = params->set_4byte_addr_mode(nor, enable);
if (ret && ret != -ENOTSUPP)
if (ret && ret != -EOPNOTSUPP)
return ret;
if (enable) {
@ -3237,7 +3237,8 @@ static void spi_nor_soft_reset(struct spi_nor *nor)
ret = spi_mem_exec_op(nor->spimem, &op);
if (ret) {
dev_warn(nor->dev, "Software reset failed: %d\n", ret);
if (ret != -EOPNOTSUPP)
dev_warn(nor->dev, "Software reset failed: %d\n", ret);
return;
}

View File

@ -156,7 +156,7 @@ static void ks8851_rdreg(struct ks8851_net *ks, unsigned int op,
txb[0] = cpu_to_le16(op | KS_SPIOP_RD);
if (kss->spidev->master->flags & SPI_MASTER_HALF_DUPLEX) {
if (kss->spidev->master->flags & SPI_CONTROLLER_HALF_DUPLEX) {
msg = &kss->spi_msg2;
xfer = kss->spi_xfer2;
@ -180,7 +180,7 @@ static void ks8851_rdreg(struct ks8851_net *ks, unsigned int op,
ret = spi_sync(kss->spidev, msg);
if (ret < 0)
netdev_err(ks->netdev, "read: spi_sync() failed\n");
else if (kss->spidev->master->flags & SPI_MASTER_HALF_DUPLEX)
else if (kss->spidev->master->flags & SPI_CONTROLLER_HALF_DUPLEX)
memcpy(rxb, trx, rxl);
else
memcpy(rxb, trx + 2, rxl);

View File

@ -1177,9 +1177,10 @@ config SPI_ZYNQ_QSPI
config SPI_ZYNQMP_GQSPI
tristate "Xilinx ZynqMP GQSPI controller"
depends on (SPI_MASTER && HAS_DMA) || COMPILE_TEST
depends on (SPI_MEM && HAS_DMA) || COMPILE_TEST
help
Enables Xilinx GQSPI controller driver for Zynq UltraScale+ MPSoC.
This controller only supports SPI memory interface.
config SPI_AMD
tristate "AMD SPI controller"

View File

@ -272,7 +272,7 @@ static int atmel_qspi_find_mode(const struct spi_mem_op *op)
if (atmel_qspi_is_compatible(op, &atmel_qspi_modes[i]))
return i;
return -ENOTSUPP;
return -EOPNOTSUPP;
}
static bool atmel_qspi_supports_op(struct spi_mem *mem,

View File

@ -146,7 +146,7 @@ static int ath79_exec_mem_op(struct spi_mem *mem,
/* Only use for fast-read op. */
if (op->cmd.opcode != 0x0b || op->data.dir != SPI_MEM_DATA_IN ||
op->addr.nbytes != 3 || op->dummy.nbytes != 1)
return -ENOTSUPP;
return -EOPNOTSUPP;
/* disable GPIO mode */
ath79_spi_wr(sp, AR71XX_SPI_REG_FS, 0);

View File

@ -6,12 +6,14 @@
*/
#include <linux/clk.h>
#include <linux/idr.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>
#include <linux/timer.h>
#define SPI_ENGINE_VERSION_MAJOR(x) ((x >> 16) & 0xff)
#define SPI_ENGINE_VERSION_MINOR(x) ((x >> 8) & 0xff)
@ -52,6 +54,7 @@
#define SPI_ENGINE_CMD_REG_CLK_DIV 0x0
#define SPI_ENGINE_CMD_REG_CONFIG 0x1
#define SPI_ENGINE_CMD_REG_XFER_BITS 0x2
#define SPI_ENGINE_MISC_SYNC 0x0
#define SPI_ENGINE_MISC_SLEEP 0x1
@ -78,6 +81,32 @@ struct spi_engine_program {
uint16_t instructions[];
};
/**
* struct spi_engine_message_state - SPI engine per-message state
*/
struct spi_engine_message_state {
/** @p: Instructions for executing this message. */
struct spi_engine_program *p;
/** @cmd_length: Number of elements in cmd_buf array. */
unsigned cmd_length;
/** @cmd_buf: Array of commands not yet written to CMD FIFO. */
const uint16_t *cmd_buf;
/** @tx_xfer: Next xfer with tx_buf not yet fully written to TX FIFO. */
struct spi_transfer *tx_xfer;
/** @tx_length: Size of tx_buf in bytes. */
unsigned int tx_length;
/** @tx_buf: Bytes not yet written to TX FIFO. */
const uint8_t *tx_buf;
/** @rx_xfer: Next xfer with rx_buf not yet fully written to RX FIFO. */
struct spi_transfer *rx_xfer;
/** @rx_length: Size of tx_buf in bytes. */
unsigned int rx_length;
/** @rx_buf: Bytes not yet written to the RX FIFO. */
uint8_t *rx_buf;
/** @sync_id: ID to correlate SYNC interrupts with this message. */
u8 sync_id;
};
struct spi_engine {
struct clk *clk;
struct clk *ref_clk;
@ -85,22 +114,9 @@ struct spi_engine {
spinlock_t lock;
void __iomem *base;
struct spi_message *msg;
struct spi_engine_program *p;
unsigned cmd_length;
const uint16_t *cmd_buf;
struct spi_transfer *tx_xfer;
unsigned int tx_length;
const uint8_t *tx_buf;
struct spi_transfer *rx_xfer;
unsigned int rx_length;
uint8_t *rx_buf;
unsigned int sync_id;
unsigned int completed_id;
struct ida sync_ida;
struct timer_list watchdog_timer;
struct spi_controller *controller;
unsigned int int_enable;
};
@ -127,25 +143,17 @@ static unsigned int spi_engine_get_config(struct spi_device *spi)
return config;
}
static unsigned int spi_engine_get_clk_div(struct spi_engine *spi_engine,
struct spi_device *spi, struct spi_transfer *xfer)
{
unsigned int clk_div;
clk_div = DIV_ROUND_UP(clk_get_rate(spi_engine->ref_clk),
xfer->speed_hz * 2);
if (clk_div > 255)
clk_div = 255;
else if (clk_div > 0)
clk_div -= 1;
return clk_div;
}
static void spi_engine_gen_xfer(struct spi_engine_program *p, bool dry,
struct spi_transfer *xfer)
{
unsigned int len = xfer->len;
unsigned int len;
if (xfer->bits_per_word <= 8)
len = xfer->len;
else if (xfer->bits_per_word <= 16)
len = xfer->len / 2;
else
len = xfer->len / 4;
while (len) {
unsigned int n = min(len, 256U);
@ -163,22 +171,16 @@ static void spi_engine_gen_xfer(struct spi_engine_program *p, bool dry,
}
static void spi_engine_gen_sleep(struct spi_engine_program *p, bool dry,
struct spi_engine *spi_engine, unsigned int clk_div,
struct spi_transfer *xfer)
int delay_ns, u32 sclk_hz)
{
unsigned int spi_clk = clk_get_rate(spi_engine->ref_clk);
unsigned int t;
int delay;
delay = spi_delay_to_ns(&xfer->delay, xfer);
if (delay < 0)
return;
delay /= 1000;
if (delay == 0)
/* negative delay indicates error, e.g. from spi_delay_to_ns() */
if (delay_ns <= 0)
return;
t = DIV_ROUND_UP(delay * spi_clk, (clk_div + 1) * 2);
/* rounding down since executing the instruction adds a couple of ticks delay */
t = DIV_ROUND_DOWN_ULL((u64)delay_ns * sclk_hz, NSEC_PER_SEC);
while (t) {
unsigned int n = min(t, 256U);
@ -195,53 +197,105 @@ static void spi_engine_gen_cs(struct spi_engine_program *p, bool dry,
if (assert)
mask ^= BIT(spi_get_chipselect(spi, 0));
spi_engine_program_add_cmd(p, dry, SPI_ENGINE_CMD_ASSERT(1, mask));
spi_engine_program_add_cmd(p, dry, SPI_ENGINE_CMD_ASSERT(0, mask));
}
static int spi_engine_compile_message(struct spi_engine *spi_engine,
struct spi_message *msg, bool dry, struct spi_engine_program *p)
/*
* Performs precompile steps on the message.
*
* The SPI core does most of the message/transfer validation and filling in
* fields for us via __spi_validate(). This fixes up anything remaining not
* done there.
*
* NB: This is separate from spi_engine_compile_message() because the latter
* is called twice and would otherwise result in double-evaluation.
*/
static void spi_engine_precompile_message(struct spi_message *msg)
{
unsigned int clk_div, max_hz = msg->spi->controller->max_speed_hz;
struct spi_transfer *xfer;
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
clk_div = DIV_ROUND_UP(max_hz, xfer->speed_hz);
xfer->effective_speed_hz = max_hz / min(clk_div, 256U);
}
}
static void spi_engine_compile_message(struct spi_message *msg, bool dry,
struct spi_engine_program *p)
{
struct spi_device *spi = msg->spi;
struct spi_controller *host = spi->controller;
struct spi_transfer *xfer;
int clk_div, new_clk_div;
bool cs_change = true;
bool keep_cs = false;
u8 bits_per_word = 0;
clk_div = -1;
clk_div = 1;
spi_engine_program_add_cmd(p, dry,
SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_CONFIG,
spi_engine_get_config(spi)));
xfer = list_first_entry(&msg->transfers, struct spi_transfer, transfer_list);
spi_engine_gen_cs(p, dry, spi, !xfer->cs_off);
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
new_clk_div = spi_engine_get_clk_div(spi_engine, spi, xfer);
new_clk_div = host->max_speed_hz / xfer->effective_speed_hz;
if (new_clk_div != clk_div) {
clk_div = new_clk_div;
/* actual divider used is register value + 1 */
spi_engine_program_add_cmd(p, dry,
SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_CLK_DIV,
clk_div));
clk_div - 1));
}
if (cs_change)
spi_engine_gen_cs(p, dry, spi, true);
if (bits_per_word != xfer->bits_per_word) {
bits_per_word = xfer->bits_per_word;
spi_engine_program_add_cmd(p, dry,
SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_XFER_BITS,
bits_per_word));
}
spi_engine_gen_xfer(p, dry, xfer);
spi_engine_gen_sleep(p, dry, spi_engine, clk_div, xfer);
spi_engine_gen_sleep(p, dry, spi_delay_to_ns(&xfer->delay, xfer),
xfer->effective_speed_hz);
cs_change = xfer->cs_change;
if (list_is_last(&xfer->transfer_list, &msg->transfers))
cs_change = !cs_change;
if (xfer->cs_change) {
if (list_is_last(&xfer->transfer_list, &msg->transfers)) {
keep_cs = true;
} else {
if (!xfer->cs_off)
spi_engine_gen_cs(p, dry, spi, false);
if (cs_change)
spi_engine_gen_cs(p, dry, spi, false);
spi_engine_gen_sleep(p, dry, spi_delay_to_ns(
&xfer->cs_change_delay, xfer),
xfer->effective_speed_hz);
if (!list_next_entry(xfer, transfer_list)->cs_off)
spi_engine_gen_cs(p, dry, spi, true);
}
} else if (!list_is_last(&xfer->transfer_list, &msg->transfers) &&
xfer->cs_off != list_next_entry(xfer, transfer_list)->cs_off) {
spi_engine_gen_cs(p, dry, spi, xfer->cs_off);
}
}
return 0;
if (!keep_cs)
spi_engine_gen_cs(p, dry, spi, false);
/*
* Restore clockdiv to default so that future gen_sleep commands don't
* have to be aware of the current register state.
*/
if (clk_div != 1)
spi_engine_program_add_cmd(p, dry,
SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_CLK_DIV, 0));
}
static void spi_engine_xfer_next(struct spi_engine *spi_engine,
static void spi_engine_xfer_next(struct spi_message *msg,
struct spi_transfer **_xfer)
{
struct spi_message *msg = spi_engine->msg;
struct spi_transfer *xfer = *_xfer;
if (!xfer) {
@ -256,147 +310,192 @@ static void spi_engine_xfer_next(struct spi_engine *spi_engine,
*_xfer = xfer;
}
static void spi_engine_tx_next(struct spi_engine *spi_engine)
static void spi_engine_tx_next(struct spi_message *msg)
{
struct spi_transfer *xfer = spi_engine->tx_xfer;
struct spi_engine_message_state *st = msg->state;
struct spi_transfer *xfer = st->tx_xfer;
do {
spi_engine_xfer_next(spi_engine, &xfer);
spi_engine_xfer_next(msg, &xfer);
} while (xfer && !xfer->tx_buf);
spi_engine->tx_xfer = xfer;
st->tx_xfer = xfer;
if (xfer) {
spi_engine->tx_length = xfer->len;
spi_engine->tx_buf = xfer->tx_buf;
st->tx_length = xfer->len;
st->tx_buf = xfer->tx_buf;
} else {
spi_engine->tx_buf = NULL;
st->tx_buf = NULL;
}
}
static void spi_engine_rx_next(struct spi_engine *spi_engine)
static void spi_engine_rx_next(struct spi_message *msg)
{
struct spi_transfer *xfer = spi_engine->rx_xfer;
struct spi_engine_message_state *st = msg->state;
struct spi_transfer *xfer = st->rx_xfer;
do {
spi_engine_xfer_next(spi_engine, &xfer);
spi_engine_xfer_next(msg, &xfer);
} while (xfer && !xfer->rx_buf);
spi_engine->rx_xfer = xfer;
st->rx_xfer = xfer;
if (xfer) {
spi_engine->rx_length = xfer->len;
spi_engine->rx_buf = xfer->rx_buf;
st->rx_length = xfer->len;
st->rx_buf = xfer->rx_buf;
} else {
spi_engine->rx_buf = NULL;
st->rx_buf = NULL;
}
}
static bool spi_engine_write_cmd_fifo(struct spi_engine *spi_engine)
static bool spi_engine_write_cmd_fifo(struct spi_engine *spi_engine,
struct spi_message *msg)
{
void __iomem *addr = spi_engine->base + SPI_ENGINE_REG_CMD_FIFO;
struct spi_engine_message_state *st = msg->state;
unsigned int n, m, i;
const uint16_t *buf;
n = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_CMD_FIFO_ROOM);
while (n && spi_engine->cmd_length) {
m = min(n, spi_engine->cmd_length);
buf = spi_engine->cmd_buf;
while (n && st->cmd_length) {
m = min(n, st->cmd_length);
buf = st->cmd_buf;
for (i = 0; i < m; i++)
writel_relaxed(buf[i], addr);
spi_engine->cmd_buf += m;
spi_engine->cmd_length -= m;
st->cmd_buf += m;
st->cmd_length -= m;
n -= m;
}
return spi_engine->cmd_length != 0;
return st->cmd_length != 0;
}
static bool spi_engine_write_tx_fifo(struct spi_engine *spi_engine)
static bool spi_engine_write_tx_fifo(struct spi_engine *spi_engine,
struct spi_message *msg)
{
void __iomem *addr = spi_engine->base + SPI_ENGINE_REG_SDO_DATA_FIFO;
struct spi_engine_message_state *st = msg->state;
unsigned int n, m, i;
const uint8_t *buf;
n = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_SDO_FIFO_ROOM);
while (n && spi_engine->tx_length) {
m = min(n, spi_engine->tx_length);
buf = spi_engine->tx_buf;
for (i = 0; i < m; i++)
writel_relaxed(buf[i], addr);
spi_engine->tx_buf += m;
spi_engine->tx_length -= m;
while (n && st->tx_length) {
if (st->tx_xfer->bits_per_word <= 8) {
const u8 *buf = st->tx_buf;
m = min(n, st->tx_length);
for (i = 0; i < m; i++)
writel_relaxed(buf[i], addr);
st->tx_buf += m;
st->tx_length -= m;
} else if (st->tx_xfer->bits_per_word <= 16) {
const u16 *buf = (const u16 *)st->tx_buf;
m = min(n, st->tx_length / 2);
for (i = 0; i < m; i++)
writel_relaxed(buf[i], addr);
st->tx_buf += m * 2;
st->tx_length -= m * 2;
} else {
const u32 *buf = (const u32 *)st->tx_buf;
m = min(n, st->tx_length / 4);
for (i = 0; i < m; i++)
writel_relaxed(buf[i], addr);
st->tx_buf += m * 4;
st->tx_length -= m * 4;
}
n -= m;
if (spi_engine->tx_length == 0)
spi_engine_tx_next(spi_engine);
if (st->tx_length == 0)
spi_engine_tx_next(msg);
}
return spi_engine->tx_length != 0;
return st->tx_length != 0;
}
static bool spi_engine_read_rx_fifo(struct spi_engine *spi_engine)
static bool spi_engine_read_rx_fifo(struct spi_engine *spi_engine,
struct spi_message *msg)
{
void __iomem *addr = spi_engine->base + SPI_ENGINE_REG_SDI_DATA_FIFO;
struct spi_engine_message_state *st = msg->state;
unsigned int n, m, i;
uint8_t *buf;
n = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_SDI_FIFO_LEVEL);
while (n && spi_engine->rx_length) {
m = min(n, spi_engine->rx_length);
buf = spi_engine->rx_buf;
for (i = 0; i < m; i++)
buf[i] = readl_relaxed(addr);
spi_engine->rx_buf += m;
spi_engine->rx_length -= m;
while (n && st->rx_length) {
if (st->rx_xfer->bits_per_word <= 8) {
u8 *buf = st->rx_buf;
m = min(n, st->rx_length);
for (i = 0; i < m; i++)
buf[i] = readl_relaxed(addr);
st->rx_buf += m;
st->rx_length -= m;
} else if (st->rx_xfer->bits_per_word <= 16) {
u16 *buf = (u16 *)st->rx_buf;
m = min(n, st->rx_length / 2);
for (i = 0; i < m; i++)
buf[i] = readl_relaxed(addr);
st->rx_buf += m * 2;
st->rx_length -= m * 2;
} else {
u32 *buf = (u32 *)st->rx_buf;
m = min(n, st->rx_length / 4);
for (i = 0; i < m; i++)
buf[i] = readl_relaxed(addr);
st->rx_buf += m * 4;
st->rx_length -= m * 4;
}
n -= m;
if (spi_engine->rx_length == 0)
spi_engine_rx_next(spi_engine);
if (st->rx_length == 0)
spi_engine_rx_next(msg);
}
return spi_engine->rx_length != 0;
return st->rx_length != 0;
}
static irqreturn_t spi_engine_irq(int irq, void *devid)
{
struct spi_controller *host = devid;
struct spi_message *msg = host->cur_msg;
struct spi_engine *spi_engine = spi_controller_get_devdata(host);
unsigned int disable_int = 0;
unsigned int pending;
int completed_id = -1;
pending = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_INT_PENDING);
if (pending & SPI_ENGINE_INT_SYNC) {
writel_relaxed(SPI_ENGINE_INT_SYNC,
spi_engine->base + SPI_ENGINE_REG_INT_PENDING);
spi_engine->completed_id = readl_relaxed(
completed_id = readl_relaxed(
spi_engine->base + SPI_ENGINE_REG_SYNC_ID);
}
spin_lock(&spi_engine->lock);
if (pending & SPI_ENGINE_INT_CMD_ALMOST_EMPTY) {
if (!spi_engine_write_cmd_fifo(spi_engine))
if (!spi_engine_write_cmd_fifo(spi_engine, msg))
disable_int |= SPI_ENGINE_INT_CMD_ALMOST_EMPTY;
}
if (pending & SPI_ENGINE_INT_SDO_ALMOST_EMPTY) {
if (!spi_engine_write_tx_fifo(spi_engine))
if (!spi_engine_write_tx_fifo(spi_engine, msg))
disable_int |= SPI_ENGINE_INT_SDO_ALMOST_EMPTY;
}
if (pending & (SPI_ENGINE_INT_SDI_ALMOST_FULL | SPI_ENGINE_INT_SYNC)) {
if (!spi_engine_read_rx_fifo(spi_engine))
if (!spi_engine_read_rx_fifo(spi_engine, msg))
disable_int |= SPI_ENGINE_INT_SDI_ALMOST_FULL;
}
if (pending & SPI_ENGINE_INT_SYNC) {
if (spi_engine->msg &&
spi_engine->completed_id == spi_engine->sync_id) {
struct spi_message *msg = spi_engine->msg;
if (pending & SPI_ENGINE_INT_SYNC && msg) {
struct spi_engine_message_state *st = msg->state;
kfree(spi_engine->p);
msg->status = 0;
msg->actual_length = msg->frame_length;
spi_engine->msg = NULL;
spi_finalize_current_message(host);
if (completed_id == st->sync_id) {
if (timer_delete_sync(&spi_engine->watchdog_timer)) {
msg->status = 0;
msg->actual_length = msg->frame_length;
spi_finalize_current_message(host);
}
disable_int |= SPI_ENGINE_INT_SYNC;
}
}
@ -412,43 +511,86 @@ static irqreturn_t spi_engine_irq(int irq, void *devid)
return IRQ_HANDLED;
}
static int spi_engine_transfer_one_message(struct spi_controller *host,
struct spi_message *msg)
static int spi_engine_prepare_message(struct spi_controller *host,
struct spi_message *msg)
{
struct spi_engine_program p_dry, *p;
struct spi_engine *spi_engine = spi_controller_get_devdata(host);
unsigned int int_enable = 0;
unsigned long flags;
struct spi_engine_message_state *st;
size_t size;
int ret;
st = kzalloc(sizeof(*st), GFP_KERNEL);
if (!st)
return -ENOMEM;
spi_engine_precompile_message(msg);
p_dry.length = 0;
spi_engine_compile_message(spi_engine, msg, true, &p_dry);
spi_engine_compile_message(msg, true, &p_dry);
size = sizeof(*p->instructions) * (p_dry.length + 1);
p = kzalloc(sizeof(*p) + size, GFP_KERNEL);
if (!p)
if (!p) {
kfree(st);
return -ENOMEM;
spi_engine_compile_message(spi_engine, msg, false, p);
}
ret = ida_alloc_range(&spi_engine->sync_ida, 0, U8_MAX, GFP_KERNEL);
if (ret < 0) {
kfree(p);
kfree(st);
return ret;
}
st->sync_id = ret;
spi_engine_compile_message(msg, false, p);
spi_engine_program_add_cmd(p, false, SPI_ENGINE_CMD_SYNC(st->sync_id));
st->p = p;
st->cmd_buf = p->instructions;
st->cmd_length = p->length;
msg->state = st;
return 0;
}
static int spi_engine_unprepare_message(struct spi_controller *host,
struct spi_message *msg)
{
struct spi_engine *spi_engine = spi_controller_get_devdata(host);
struct spi_engine_message_state *st = msg->state;
ida_free(&spi_engine->sync_ida, st->sync_id);
kfree(st->p);
kfree(st);
return 0;
}
static int spi_engine_transfer_one_message(struct spi_controller *host,
struct spi_message *msg)
{
struct spi_engine *spi_engine = spi_controller_get_devdata(host);
struct spi_engine_message_state *st = msg->state;
unsigned int int_enable = 0;
unsigned long flags;
mod_timer(&spi_engine->watchdog_timer, jiffies + msecs_to_jiffies(5000));
spin_lock_irqsave(&spi_engine->lock, flags);
spi_engine->sync_id = (spi_engine->sync_id + 1) & 0xff;
spi_engine_program_add_cmd(p, false,
SPI_ENGINE_CMD_SYNC(spi_engine->sync_id));
spi_engine->msg = msg;
spi_engine->p = p;
spi_engine->cmd_buf = p->instructions;
spi_engine->cmd_length = p->length;
if (spi_engine_write_cmd_fifo(spi_engine))
if (spi_engine_write_cmd_fifo(spi_engine, msg))
int_enable |= SPI_ENGINE_INT_CMD_ALMOST_EMPTY;
spi_engine_tx_next(spi_engine);
if (spi_engine_write_tx_fifo(spi_engine))
spi_engine_tx_next(msg);
if (spi_engine_write_tx_fifo(spi_engine, msg))
int_enable |= SPI_ENGINE_INT_SDO_ALMOST_EMPTY;
spi_engine_rx_next(spi_engine);
if (spi_engine->rx_length != 0)
spi_engine_rx_next(msg);
if (st->rx_length != 0)
int_enable |= SPI_ENGINE_INT_SDI_ALMOST_FULL;
int_enable |= SPI_ENGINE_INT_SYNC;
@ -461,6 +603,29 @@ static int spi_engine_transfer_one_message(struct spi_controller *host,
return 0;
}
static void spi_engine_timeout(struct timer_list *timer)
{
struct spi_engine *spi_engine = from_timer(spi_engine, timer, watchdog_timer);
struct spi_controller *host = spi_engine->controller;
if (WARN_ON(!host->cur_msg))
return;
dev_err(&host->dev,
"Timeout occurred while waiting for transfer to complete. Hardware is probably broken.\n");
host->cur_msg->status = -ETIMEDOUT;
spi_finalize_current_message(host);
}
static void spi_engine_release_hw(void *p)
{
struct spi_engine *spi_engine = p;
writel_relaxed(0xff, spi_engine->base + SPI_ENGINE_REG_INT_PENDING);
writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_INT_ENABLE);
writel_relaxed(0x01, spi_engine->base + SPI_ENGINE_REG_RESET);
}
static int spi_engine_probe(struct platform_device *pdev)
{
struct spi_engine *spi_engine;
@ -473,35 +638,28 @@ static int spi_engine_probe(struct platform_device *pdev)
if (irq < 0)
return irq;
spi_engine = devm_kzalloc(&pdev->dev, sizeof(*spi_engine), GFP_KERNEL);
if (!spi_engine)
return -ENOMEM;
host = spi_alloc_host(&pdev->dev, 0);
host = devm_spi_alloc_host(&pdev->dev, sizeof(*spi_engine));
if (!host)
return -ENOMEM;
spi_controller_set_devdata(host, spi_engine);
spi_engine = spi_controller_get_devdata(host);
spin_lock_init(&spi_engine->lock);
ida_init(&spi_engine->sync_ida);
timer_setup(&spi_engine->watchdog_timer, spi_engine_timeout, TIMER_IRQSAFE);
spi_engine->controller = host;
spi_engine->clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
if (IS_ERR(spi_engine->clk)) {
ret = PTR_ERR(spi_engine->clk);
goto err_put_host;
}
if (IS_ERR(spi_engine->clk))
return PTR_ERR(spi_engine->clk);
spi_engine->ref_clk = devm_clk_get_enabled(&pdev->dev, "spi_clk");
if (IS_ERR(spi_engine->ref_clk)) {
ret = PTR_ERR(spi_engine->ref_clk);
goto err_put_host;
}
if (IS_ERR(spi_engine->ref_clk))
return PTR_ERR(spi_engine->ref_clk);
spi_engine->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(spi_engine->base)) {
ret = PTR_ERR(spi_engine->base);
goto err_put_host;
}
if (IS_ERR(spi_engine->base))
return PTR_ERR(spi_engine->base);
version = readl(spi_engine->base + SPI_ENGINE_REG_VERSION);
if (SPI_ENGINE_VERSION_MAJOR(version) != 1) {
@ -509,54 +667,42 @@ static int spi_engine_probe(struct platform_device *pdev)
SPI_ENGINE_VERSION_MAJOR(version),
SPI_ENGINE_VERSION_MINOR(version),
SPI_ENGINE_VERSION_PATCH(version));
ret = -ENODEV;
goto err_put_host;
return -ENODEV;
}
writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_RESET);
writel_relaxed(0xff, spi_engine->base + SPI_ENGINE_REG_INT_PENDING);
writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_INT_ENABLE);
ret = request_irq(irq, spi_engine_irq, 0, pdev->name, host);
ret = devm_add_action_or_reset(&pdev->dev, spi_engine_release_hw,
spi_engine);
if (ret)
goto err_put_host;
return ret;
ret = devm_request_irq(&pdev->dev, irq, spi_engine_irq, 0, pdev->name,
host);
if (ret)
return ret;
host->dev.of_node = pdev->dev.of_node;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_3WIRE;
host->bits_per_word_mask = SPI_BPW_MASK(8);
host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
host->max_speed_hz = clk_get_rate(spi_engine->ref_clk) / 2;
host->transfer_one_message = spi_engine_transfer_one_message;
host->prepare_message = spi_engine_prepare_message;
host->unprepare_message = spi_engine_unprepare_message;
host->num_chipselect = 8;
ret = spi_register_controller(host);
if (host->max_speed_hz == 0)
return dev_err_probe(&pdev->dev, -EINVAL, "spi_clk rate is 0");
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret)
goto err_free_irq;
return ret;
platform_set_drvdata(pdev, host);
return 0;
err_free_irq:
free_irq(irq, host);
err_put_host:
spi_controller_put(host);
return ret;
}
static void spi_engine_remove(struct platform_device *pdev)
{
struct spi_controller *host = spi_controller_get(platform_get_drvdata(pdev));
struct spi_engine *spi_engine = spi_controller_get_devdata(host);
int irq = platform_get_irq(pdev, 0);
spi_unregister_controller(host);
free_irq(irq, host);
spi_controller_put(host);
writel_relaxed(0xff, spi_engine->base + SPI_ENGINE_REG_INT_PENDING);
writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_INT_ENABLE);
writel_relaxed(0x01, spi_engine->base + SPI_ENGINE_REG_RESET);
}
static const struct of_device_id spi_engine_match_table[] = {
@ -567,7 +713,6 @@ MODULE_DEVICE_TABLE(of, spi_engine_match_table);
static struct platform_driver spi_engine_driver = {
.probe = spi_engine_probe,
.remove_new = spi_engine_remove,
.driver = {
.name = "spi-engine",
.of_match_table = spi_engine_match_table,
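
One detail of the AXI SPI Engine rework above is that the clock divider and the sleep (delay) instructions are now derived from each transfer's effective speed, computed once in spi_engine_precompile_message(). The arithmetic is small enough to check by hand; the standalone sketch below mirrors the computations visible in the diff (the example numbers are illustrative only, not taken from real hardware):

/*
 * Userspace sketch of the divider/delay math from the reworked driver.
 * It mirrors spi_engine_precompile_message(), the CLK_DIV register write
 * in spi_engine_compile_message() and spi_engine_gen_sleep(); it is not
 * the kernel code itself.
 */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

static unsigned int div_round_up(unsigned int n, unsigned int d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	unsigned int max_hz = 50000000;		/* ref clock / 2, example */
	unsigned int speed_hz = 1000000;	/* requested transfer speed */

	/* precompile: clamp the divider to 256 and record the real speed */
	unsigned int clk_div = div_round_up(max_hz, speed_hz);
	if (clk_div > 256)
		clk_div = 256;
	unsigned int effective_hz = max_hz / clk_div;

	/* compile: the CLK_DIV register takes "divider - 1" */
	unsigned int reg_val = max_hz / effective_hz - 1;

	/* gen_sleep: delay expressed in SCLK ticks, rounded down */
	uint64_t delay_ns = 10000;		/* e.g. a 10 us cs_change delay */
	uint64_t ticks = delay_ns * effective_hz / NSEC_PER_SEC;

	printf("div=%u, effective=%u Hz, reg=%u, sleep=%llu ticks\n",
	       clk_div, effective_hz, reg_val, (unsigned long long)ticks);
	return 0;
}

Clamping the divider at 256 keeps the register value within eight bits, which is why the diff writes "clk_div - 1" alongside the comment that the actual divider is the register value plus one.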

View File

@ -1199,7 +1199,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
if (!op->data.nbytes || !op->addr.nbytes || op->addr.nbytes > 4 ||
op->data.dir != SPI_MEM_DATA_IN)
return -ENOTSUPP;
return -EOPNOTSUPP;
buf = op->data.buf.in;
addr = op->addr.val;

View File

@ -1840,7 +1840,7 @@ static int cqspi_probe(struct platform_device *pdev)
if (ddata->jh7110_clk_init) {
ret = cqspi_jh7110_clk_init(pdev, cqspi);
if (ret)
goto probe_clk_failed;
goto probe_reset_failed;
}
if (of_device_is_compatible(pdev->dev.of_node,
@ -1901,6 +1901,8 @@ static int cqspi_probe(struct platform_device *pdev)
probe_setup_failed:
cqspi_controller_enable(cqspi, 0);
probe_reset_failed:
if (cqspi->is_jh7110)
cqspi_jh7110_disable_clk(pdev, cqspi);
clk_disable_unprepare(cqspi->clk);
probe_clk_failed:
return ret;

View File

@ -619,7 +619,6 @@ MODULE_DEVICE_TABLE(of, cdns_xspi_of_match);
static struct platform_driver cdns_xspi_platform_driver = {
.probe = cdns_xspi_probe,
.remove = NULL,
.driver = {
.name = CDNS_XSPI_NAME,
.of_match_table = cdns_xspi_of_match,

View File

@ -213,7 +213,7 @@ static int cs42l43_spi_probe(struct platform_device *pdev)
if (!priv)
return -ENOMEM;
priv->ctlr = devm_spi_alloc_master(&pdev->dev, sizeof(*priv->ctlr));
priv->ctlr = devm_spi_alloc_host(&pdev->dev, sizeof(*priv->ctlr));
if (!priv->ctlr)
return -ENOMEM;
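
Conversions such as the cs42l43 and ljca ones are mechanical: devm_spi_alloc_master() becomes devm_spi_alloc_host() and the surrounding code follows the host/target terminology. A minimal probe skeleton in the modern naming, using a hypothetical platform driver (my_spi_probe() and struct my_spi are made up; the API calls are the ones used throughout the diffs):

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>

struct my_spi {
	void __iomem *base;	/* hypothetical controller state */
};

static int my_spi_probe(struct platform_device *pdev)
{
	struct spi_controller *host;
	struct my_spi *priv;

	/* allocates the controller and its driver data in one go */
	host = devm_spi_alloc_host(&pdev->dev, sizeof(*priv));
	if (!host)
		return -ENOMEM;

	priv = spi_controller_get_devdata(host);
	priv->base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(priv->base))
		return PTR_ERR(priv->base);

	host->dev.of_node = pdev->dev.of_node;
	host->num_chipselect = 1;

	/* devm unregistration pairs with the devm allocation above */
	return devm_spi_register_controller(&pdev->dev, host);
}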

View File

@ -411,7 +411,6 @@ static const struct of_device_id dw_spi_mmio_of_match[] = {
{ .compatible = "renesas,rzn1-spi", .data = dw_spi_pssi_init},
{ .compatible = "snps,dwc-ssi-1.01a", .data = dw_spi_hssi_init},
{ .compatible = "intel,keembay-ssi", .data = dw_spi_intel_init},
{ .compatible = "intel,thunderbay-ssi", .data = dw_spi_intel_init},
{
.compatible = "intel,mountevans-imc-ssi",
.data = dw_spi_mountevans_imc_init,

View File

@ -145,10 +145,10 @@ static int get_spi_clk_cfg(unsigned int speed_hz,
return ret;
}
static void handle_se_timeout(struct spi_master *spi,
struct spi_message *msg)
static void handle_se_timeout(struct spi_controller *spi,
struct spi_message *msg)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
unsigned long time_left;
struct geni_se *se = &mas->se;
const struct spi_transfer *xfer;
@ -160,9 +160,9 @@ static void handle_se_timeout(struct spi_master *spi,
xfer = mas->cur_xfer;
mas->cur_xfer = NULL;
if (spi->slave) {
if (spi->target) {
/*
* skip CMD Cancel sequnece since spi slave
* skip CMD Cancel sequnece since spi target
* doesn`t support CMD Cancel sequnece
*/
spin_unlock_irq(&mas->lock);
@ -225,17 +225,17 @@ reset_if_dma:
}
}
static void handle_gpi_timeout(struct spi_master *spi, struct spi_message *msg)
static void handle_gpi_timeout(struct spi_controller *spi, struct spi_message *msg)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
dmaengine_terminate_sync(mas->tx);
dmaengine_terminate_sync(mas->rx);
}
static void spi_geni_handle_err(struct spi_master *spi, struct spi_message *msg)
static void spi_geni_handle_err(struct spi_controller *spi, struct spi_message *msg)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
switch (mas->cur_xfer_mode) {
case GENI_SE_FIFO:
@ -286,8 +286,8 @@ static bool spi_geni_is_abort_still_pending(struct spi_geni_master *mas)
static void spi_geni_set_cs(struct spi_device *slv, bool set_flag)
{
struct spi_geni_master *mas = spi_master_get_devdata(slv->master);
struct spi_master *spi = dev_get_drvdata(mas->dev);
struct spi_geni_master *mas = spi_controller_get_devdata(slv->controller);
struct spi_controller *spi = dev_get_drvdata(mas->dev);
struct geni_se *se = &mas->se;
unsigned long time_left;
@ -395,9 +395,9 @@ static int geni_spi_set_clock_and_bw(struct spi_geni_master *mas,
}
static int setup_fifo_params(struct spi_device *spi_slv,
struct spi_master *spi)
struct spi_controller *spi)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
struct geni_se *se = &mas->se;
u32 loopback_cfg = 0, cpol = 0, cpha = 0, demux_output_inv = 0;
u32 demux_sel;
@ -434,7 +434,7 @@ static int setup_fifo_params(struct spi_device *spi_slv,
static void
spi_gsi_callback_result(void *cb, const struct dmaengine_result *result)
{
struct spi_master *spi = cb;
struct spi_controller *spi = cb;
spi->cur_msg->status = -EIO;
if (result->result != DMA_TRANS_NOERROR) {
@ -454,7 +454,7 @@ spi_gsi_callback_result(void *cb, const struct dmaengine_result *result)
}
static int setup_gsi_xfer(struct spi_transfer *xfer, struct spi_geni_master *mas,
struct spi_device *spi_slv, struct spi_master *spi)
struct spi_device *spi_slv, struct spi_controller *spi)
{
unsigned long flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK;
struct dma_slave_config config = {};
@ -560,14 +560,14 @@ static u32 get_xfer_len_in_words(struct spi_transfer *xfer,
static bool geni_can_dma(struct spi_controller *ctlr,
struct spi_device *slv, struct spi_transfer *xfer)
{
struct spi_geni_master *mas = spi_master_get_devdata(slv->master);
struct spi_geni_master *mas = spi_controller_get_devdata(slv->controller);
u32 len, fifo_size;
if (mas->cur_xfer_mode == GENI_GPI_DMA)
return true;
/* Set SE DMA mode for SPI slave. */
if (ctlr->slave)
/* Set SE DMA mode for SPI target. */
if (ctlr->target)
return true;
len = get_xfer_len_in_words(xfer, mas);
@ -579,10 +579,10 @@ static bool geni_can_dma(struct spi_controller *ctlr,
return false;
}
static int spi_geni_prepare_message(struct spi_master *spi,
struct spi_message *spi_msg)
static int spi_geni_prepare_message(struct spi_controller *spi,
struct spi_message *spi_msg)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
int ret;
switch (mas->cur_xfer_mode) {
@ -657,7 +657,7 @@ static int spi_geni_init(struct spi_geni_master *mas)
proto = geni_se_read_proto(se);
if (spi->slave) {
if (spi->target) {
if (proto != GENI_SE_SPI_SLAVE) {
dev_err(mas->dev, "Invalid proto %d\n", proto);
goto out_pm;
@ -715,7 +715,7 @@ static int spi_geni_init(struct spi_geni_master *mas)
}
/* We always control CS manually */
if (!spi->slave) {
if (!spi->target) {
spi_tx_cfg = readl(se->base + SE_SPI_TRANS_CFG);
spi_tx_cfg &= ~CS_TOGGLE;
writel(spi_tx_cfg, se->base + SE_SPI_TRANS_CFG);
@ -824,7 +824,7 @@ static void geni_spi_handle_rx(struct spi_geni_master *mas)
static int setup_se_xfer(struct spi_transfer *xfer,
struct spi_geni_master *mas,
u16 mode, struct spi_master *spi)
u16 mode, struct spi_controller *spi)
{
u32 m_cmd = 0;
u32 len;
@ -913,11 +913,11 @@ static int setup_se_xfer(struct spi_transfer *xfer,
return ret;
}
static int spi_geni_transfer_one(struct spi_master *spi,
struct spi_device *slv,
struct spi_transfer *xfer)
static int spi_geni_transfer_one(struct spi_controller *spi,
struct spi_device *slv,
struct spi_transfer *xfer)
{
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
int ret;
if (spi_geni_is_abort_still_pending(mas))
@ -939,8 +939,8 @@ static int spi_geni_transfer_one(struct spi_master *spi,
static irqreturn_t geni_spi_isr(int irq, void *data)
{
struct spi_master *spi = data;
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_controller *spi = data;
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
struct geni_se *se = &mas->se;
u32 m_irq;
@ -1042,7 +1042,7 @@ static irqreturn_t geni_spi_isr(int irq, void *data)
static int spi_geni_probe(struct platform_device *pdev)
{
int ret, irq;
struct spi_master *spi;
struct spi_controller *spi;
struct spi_geni_master *mas;
void __iomem *base;
struct clk *clk;
@ -1064,12 +1064,12 @@ static int spi_geni_probe(struct platform_device *pdev)
if (IS_ERR(clk))
return PTR_ERR(clk);
spi = devm_spi_alloc_master(dev, sizeof(*mas));
spi = devm_spi_alloc_host(dev, sizeof(*mas));
if (!spi)
return -ENOMEM;
platform_set_drvdata(pdev, spi);
mas = spi_master_get_devdata(spi);
mas = spi_controller_get_devdata(spi);
mas->irq = irq;
mas->dev = dev;
mas->se.dev = dev;
@ -1113,7 +1113,7 @@ static int spi_geni_probe(struct platform_device *pdev)
pm_runtime_enable(dev);
if (device_property_read_bool(&pdev->dev, "spi-slave"))
spi->slave = true;
spi->target = true;
ret = geni_icc_get(&mas->se, NULL);
if (ret)
@ -1135,7 +1135,7 @@ static int spi_geni_probe(struct platform_device *pdev)
* for dma (gsi) mode, the gsi will set cs based on params passed in
* TRE
*/
if (!spi->slave && mas->cur_xfer_mode == GENI_SE_FIFO)
if (!spi->target && mas->cur_xfer_mode == GENI_SE_FIFO)
spi->set_cs = spi_geni_set_cs;
/*
@ -1148,7 +1148,7 @@ static int spi_geni_probe(struct platform_device *pdev)
if (ret)
goto spi_geni_release_dma;
ret = spi_register_master(spi);
ret = spi_register_controller(spi);
if (ret)
goto spi_geni_probe_free_irq;
@ -1164,11 +1164,11 @@ spi_geni_probe_runtime_disable:
static void spi_geni_remove(struct platform_device *pdev)
{
struct spi_master *spi = platform_get_drvdata(pdev);
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_controller *spi = platform_get_drvdata(pdev);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
/* Unregister _before_ disabling pm_runtime() so we stop transfers */
spi_unregister_master(spi);
spi_unregister_controller(spi);
spi_geni_release_dma_chan(mas);
@ -1178,8 +1178,8 @@ static void spi_geni_remove(struct platform_device *pdev)
static int __maybe_unused spi_geni_runtime_suspend(struct device *dev)
{
struct spi_master *spi = dev_get_drvdata(dev);
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_controller *spi = dev_get_drvdata(dev);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
int ret;
/* Drop the performance state vote */
@ -1194,8 +1194,8 @@ static int __maybe_unused spi_geni_runtime_suspend(struct device *dev)
static int __maybe_unused spi_geni_runtime_resume(struct device *dev)
{
struct spi_master *spi = dev_get_drvdata(dev);
struct spi_geni_master *mas = spi_master_get_devdata(spi);
struct spi_controller *spi = dev_get_drvdata(dev);
struct spi_geni_master *mas = spi_controller_get_devdata(spi);
int ret;
ret = geni_icc_enable(&mas->se);
@ -1211,30 +1211,30 @@ static int __maybe_unused spi_geni_runtime_resume(struct device *dev)
static int __maybe_unused spi_geni_suspend(struct device *dev)
{
struct spi_master *spi = dev_get_drvdata(dev);
struct spi_controller *spi = dev_get_drvdata(dev);
int ret;
ret = spi_master_suspend(spi);
ret = spi_controller_suspend(spi);
if (ret)
return ret;
ret = pm_runtime_force_suspend(dev);
if (ret)
spi_master_resume(spi);
spi_controller_resume(spi);
return ret;
}
static int __maybe_unused spi_geni_resume(struct device *dev)
{
struct spi_master *spi = dev_get_drvdata(dev);
struct spi_controller *spi = dev_get_drvdata(dev);
int ret;
ret = pm_runtime_force_resume(dev);
if (ret)
return ret;
ret = spi_master_resume(spi);
ret = spi_controller_resume(spi);
if (ret)
pm_runtime_force_suspend(dev);

View File

@ -346,14 +346,17 @@ static bool spi_ingenic_can_dma(struct spi_controller *ctlr,
static int spi_ingenic_request_dma(struct spi_controller *ctlr,
struct device *dev)
{
ctlr->dma_tx = dma_request_slave_channel(dev, "tx");
if (!ctlr->dma_tx)
return -ENODEV;
struct dma_chan *chan;
ctlr->dma_rx = dma_request_slave_channel(dev, "rx");
chan = dma_request_chan(dev, "tx");
if (IS_ERR(chan))
return PTR_ERR(chan);
ctlr->dma_tx = chan;
if (!ctlr->dma_rx)
return -ENODEV;
chan = dma_request_chan(dev, "rx");
if (IS_ERR(chan))
return PTR_ERR(chan);
ctlr->dma_rx = chan;
ctlr->can_dma = spi_ingenic_can_dma;
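
The ingenic change above also shows the move from dma_request_slave_channel(), which returns NULL on any failure, to dma_request_chan(), which returns an ERR_PTR and so lets real error codes (including -EPROBE_DEFER) propagate out of probe instead of being flattened to -ENODEV. A hedged sketch of the resulting pattern (my_request_dma() is hypothetical; channel cleanup on failure is left to the caller, as in the driver):

#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/spi/spi.h>

static int my_request_dma(struct spi_controller *ctlr, struct device *dev)
{
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "tx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);	/* may be -EPROBE_DEFER */
	ctlr->dma_tx = chan;

	chan = dma_request_chan(dev, "rx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);
	ctlr->dma_rx = chan;

	return 0;
}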

View File

@ -711,8 +711,7 @@ static bool intel_spi_cmp_mem_op(const struct intel_spi_mem_op *iop,
{
if (iop->mem_op.cmd.nbytes != op->cmd.nbytes ||
iop->mem_op.cmd.buswidth != op->cmd.buswidth ||
iop->mem_op.cmd.dtr != op->cmd.dtr ||
iop->mem_op.cmd.opcode != op->cmd.opcode)
iop->mem_op.cmd.dtr != op->cmd.dtr)
return false;
if (iop->mem_op.addr.nbytes != op->addr.nbytes ||
@ -737,11 +736,12 @@ intel_spi_match_mem_op(struct intel_spi *ispi, const struct spi_mem_op *op)
const struct intel_spi_mem_op *iop;
for (iop = ispi->mem_ops; iop->mem_op.cmd.opcode; iop++) {
if (intel_spi_cmp_mem_op(iop, op))
break;
if (iop->mem_op.cmd.opcode == op->cmd.opcode &&
intel_spi_cmp_mem_op(iop, op))
return iop;
}
return iop->mem_op.cmd.opcode ? iop : NULL;
return NULL;
}
static bool intel_spi_supports_mem_op(struct spi_mem *mem,

View File

@ -223,7 +223,7 @@ static int ljca_spi_probe(struct auxiliary_device *auxdev,
struct ljca_spi_dev *ljca_spi;
int ret;
controller = devm_spi_alloc_master(&auxdev->dev, sizeof(*ljca_spi));
controller = devm_spi_alloc_host(&auxdev->dev, sizeof(*ljca_spi));
if (!controller)
return -ENOMEM;

View File

@ -323,7 +323,7 @@ int spi_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
return ret;
if (!spi_mem_internal_supports_op(mem, op))
return -ENOTSUPP;
return -EOPNOTSUPP;
if (ctlr->mem_ops && ctlr->mem_ops->exec_op && !spi_get_csgpiod(mem->spi, 0)) {
ret = spi_mem_access_start(mem);
@ -339,7 +339,7 @@ int spi_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
* read path) and expect the core to use the regular SPI
* interface in other cases.
*/
if (!ret || ret != -ENOTSUPP)
if (!ret || ret != -ENOTSUPP || ret != -EOPNOTSUPP)
return ret;
}
@ -559,7 +559,7 @@ spi_mem_dirmap_create(struct spi_mem *mem,
if (ret) {
desc->nodirmap = true;
if (!spi_mem_supports_op(desc->mem, &desc->info.op_tmpl))
ret = -ENOTSUPP;
ret = -EOPNOTSUPP;
else
ret = 0;
}
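
The ENOTSUPP to EOPNOTSUPP conversions in spi-mem, SPI NAND, SPI NOR and several controller drivers standardize on the error code that is part of the UAPI. As the comment in the hunk above notes, a controller's exec_op() returning this code is meant to tell the spi-mem core to fall back to regular SPI transfers. A minimal sketch of such a callback (my_exec_op() and the four-byte address limit are invented for the example):

#include <linux/errno.h>
#include <linux/spi/spi-mem.h>

static int my_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
{
	/* not handled here: let the core fall back to plain transfers */
	if (op->addr.nbytes > 4)
		return -EOPNOTSUPP;

	/* ... issue the operation on the controller ... */
	return 0;
}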

View File

@ -22,6 +22,7 @@
#include <linux/slab.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
#include <asm/time.h>
#include <asm/mpc52xx.h>

View File

@ -556,7 +556,7 @@ static int npcm_fiu_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
op->data.nbytes);
if (fiu->spix_mode || op->addr.nbytes > 4)
return -ENOTSUPP;
return -EOPNOTSUPP;
if (fiu->clkrate != chip->clkrate) {
ret = clk_set_rate(fiu->clk, chip->clkrate);

View File

@ -338,14 +338,8 @@ struct vendor_data {
* @clk: outgoing clock "SPICLK" for the SPI bus
* @host: SPI framework hookup
* @host_info: controller-specific data from machine setup
* @pump_transfers: Tasklet used in Interrupt Transfer mode
* @cur_msg: Pointer to current spi_message being processed
* @cur_transfer: Pointer to current spi_transfer
* @cur_chip: pointer to current clients chip(assigned from controller_state)
* @next_msg_cs_active: the next message in the queue has been examined
* and it was found that it uses the same chip select as the previous
* message, so we left it active after the previous transfer, and it's
* active already.
* @tx: current position in TX buffer to be read
* @tx_end: end position in TX buffer to be read
* @rx: current position in RX buffer to be written
@ -362,7 +356,6 @@ struct vendor_data {
* @dummypage: a dummy page used for driving data on the bus with DMA
* @dma_running: indicates whether DMA is in operation
* @cur_cs: current chip select index
* @cur_gpiod: current chip select GPIO descriptor
*/
struct pl022 {
struct amba_device *adev;
@ -372,12 +365,8 @@ struct pl022 {
struct clk *clk;
struct spi_controller *host;
struct pl022_ssp_controller *host_info;
/* Message per-transfer pump */
struct tasklet_struct pump_transfers;
struct spi_message *cur_msg;
struct spi_transfer *cur_transfer;
struct chip_data *cur_chip;
bool next_msg_cs_active;
void *tx;
void *tx_end;
void *rx;
@ -397,7 +386,6 @@ struct pl022 {
bool dma_running;
#endif
int cur_cs;
struct gpio_desc *cur_gpiod;
};
/**
@ -431,99 +419,29 @@ struct chip_data {
/**
* internal_cs_control - Control chip select signals via SSP_CSR.
* @pl022: SSP driver private data structure
* @command: select/delect the chip
* @enable: select/delect the chip
*
* Used on controller with internal chip select control via SSP_CSR register
* (vendor extension). Each of the 5 LSB in the register controls one chip
* select signal.
*/
static void internal_cs_control(struct pl022 *pl022, u32 command)
static void internal_cs_control(struct pl022 *pl022, bool enable)
{
u32 tmp;
tmp = readw(SSP_CSR(pl022->virtbase));
if (command == SSP_CHIP_SELECT)
if (enable)
tmp &= ~BIT(pl022->cur_cs);
else
tmp |= BIT(pl022->cur_cs);
writew(tmp, SSP_CSR(pl022->virtbase));
}
static void pl022_cs_control(struct pl022 *pl022, u32 command)
static void pl022_cs_control(struct spi_device *spi, bool enable)
{
struct pl022 *pl022 = spi_controller_get_devdata(spi->controller);
if (pl022->vendor->internal_cs_ctrl)
internal_cs_control(pl022, command);
else if (pl022->cur_gpiod)
/*
* This needs to be inverted since with GPIOLIB in
* control, the inversion will be handled by
* GPIOLIB's active low handling. The "command"
* passed into this function will be SSP_CHIP_SELECT
* which is enum:ed to 0, so we need the inverse
* (1) to activate chip select.
*/
gpiod_set_value(pl022->cur_gpiod, !command);
}
/**
* giveback - current spi_message is over, schedule next message and call
* callback of this message. Assumes that caller already
* set message->status; dma and pio irqs are blocked
* @pl022: SSP driver private data structure
*/
static void giveback(struct pl022 *pl022)
{
struct spi_transfer *last_transfer;
pl022->next_msg_cs_active = false;
last_transfer = list_last_entry(&pl022->cur_msg->transfers,
struct spi_transfer, transfer_list);
/* Delay if requested before any change in chip select */
/*
* FIXME: This runs in interrupt context.
* Is this really smart?
*/
spi_transfer_delay_exec(last_transfer);
if (!last_transfer->cs_change) {
struct spi_message *next_msg;
/*
* cs_change was not set. We can keep the chip select
* enabled if there is message in the queue and it is
* for the same spi device.
*
* We cannot postpone this until pump_messages, because
* after calling msg->complete (below) the driver that
* sent the current message could be unloaded, which
* could invalidate the cs_control() callback...
*/
/* get a pointer to the next message, if any */
next_msg = spi_get_next_queued_message(pl022->host);
/*
* see if the next and current messages point
* to the same spi device.
*/
if (next_msg && next_msg->spi != pl022->cur_msg->spi)
next_msg = NULL;
if (!next_msg || pl022->cur_msg->state == STATE_ERROR)
pl022_cs_control(pl022, SSP_CHIP_DESELECT);
else
pl022->next_msg_cs_active = true;
}
pl022->cur_msg = NULL;
pl022->cur_transfer = NULL;
pl022->cur_chip = NULL;
/* disable the SPI/SSP operation */
writew((readw(SSP_CR1(pl022->virtbase)) &
(~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase));
spi_finalize_current_message(pl022->host);
internal_cs_control(pl022, enable);
}
/**
@ -757,30 +675,6 @@ static void readwriter(struct pl022 *pl022)
*/
}
/**
* next_transfer - Move to the Next transfer in the current spi message
* @pl022: SSP driver private data structure
*
* This function moves through the linked list of spi transfers in the
* current spi message and returns with the state of the current spi
* message, i.e. whether its last transfer is done (STATE_DONE) or the
* next transfer is ready (STATE_RUNNING).
*/
static void *next_transfer(struct pl022 *pl022)
{
struct spi_message *msg = pl022->cur_msg;
struct spi_transfer *trans = pl022->cur_transfer;
/* Move to next transfer */
if (trans->transfer_list.next != &msg->transfers) {
pl022->cur_transfer =
list_entry(trans->transfer_list.next,
struct spi_transfer, transfer_list);
return STATE_RUNNING;
}
return STATE_DONE;
}
/*
* This DMA functionality is only compiled in if we have
* access to the generic DMA devices/DMA engine.
@ -800,7 +694,6 @@ static void unmap_free_dma_scatter(struct pl022 *pl022)
static void dma_callback(void *data)
{
struct pl022 *pl022 = data;
struct spi_message *msg = pl022->cur_msg;
BUG_ON(!pl022->sgt_rx.sgl);
@ -845,13 +738,7 @@ static void dma_callback(void *data)
unmap_free_dma_scatter(pl022);
/* Update total bytes transferred */
msg->actual_length += pl022->cur_transfer->len;
/* Move to next transfer */
msg->state = next_transfer(pl022);
if (msg->state != STATE_DONE && pl022->cur_transfer->cs_change)
pl022_cs_control(pl022, SSP_CHIP_DESELECT);
tasklet_schedule(&pl022->pump_transfers);
spi_finalize_current_transfer(pl022->host);
}
static void setup_dma_scatter(struct pl022 *pl022,
@ -1189,6 +1076,9 @@ err_no_rxchan:
static void terminate_dma(struct pl022 *pl022)
{
if (!pl022->dma_running)
return;
struct dma_chan *rxchan = pl022->dma_rx_channel;
struct dma_chan *txchan = pl022->dma_tx_channel;
@ -1200,8 +1090,7 @@ static void terminate_dma(struct pl022 *pl022)
static void pl022_dma_remove(struct pl022 *pl022)
{
if (pl022->dma_running)
terminate_dma(pl022);
terminate_dma(pl022);
if (pl022->dma_tx_channel)
dma_release_channel(pl022->dma_tx_channel);
if (pl022->dma_rx_channel)
@ -1225,6 +1114,10 @@ static inline int pl022_dma_probe(struct pl022 *pl022)
return 0;
}
static inline void terminate_dma(struct pl022 *pl022)
{
}
static inline void pl022_dma_remove(struct pl022 *pl022)
{
}
@ -1246,16 +1139,7 @@ static inline void pl022_dma_remove(struct pl022 *pl022)
static irqreturn_t pl022_interrupt_handler(int irq, void *dev_id)
{
struct pl022 *pl022 = dev_id;
struct spi_message *msg = pl022->cur_msg;
u16 irq_status = 0;
if (unlikely(!msg)) {
dev_err(&pl022->adev->dev,
"bad message state in interrupt handler");
/* Never fail */
return IRQ_HANDLED;
}
/* Read the Interrupt Status Register */
irq_status = readw(SSP_MIS(pl022->virtbase));
@ -1287,10 +1171,8 @@ static irqreturn_t pl022_interrupt_handler(int irq, void *dev_id)
writew(CLEAR_ALL_INTERRUPTS, SSP_ICR(pl022->virtbase));
writew((readw(SSP_CR1(pl022->virtbase)) &
(~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase));
msg->state = STATE_ERROR;
/* Schedule message queue handler */
tasklet_schedule(&pl022->pump_transfers);
pl022->cur_transfer->error |= SPI_TRANS_FAIL_IO;
spi_finalize_current_transfer(pl022->host);
return IRQ_HANDLED;
}
@ -1318,13 +1200,7 @@ static irqreturn_t pl022_interrupt_handler(int irq, void *dev_id)
"number of bytes on a 16bit bus?)\n",
(u32) (pl022->rx - pl022->rx_end));
}
/* Update total bytes transferred */
msg->actual_length += pl022->cur_transfer->len;
/* Move to next transfer */
msg->state = next_transfer(pl022);
if (msg->state != STATE_DONE && pl022->cur_transfer->cs_change)
pl022_cs_control(pl022, SSP_CHIP_DESELECT);
tasklet_schedule(&pl022->pump_transfers);
spi_finalize_current_transfer(pl022->host);
return IRQ_HANDLED;
}
@ -1361,98 +1237,20 @@ static int set_up_next_transfer(struct pl022 *pl022,
return 0;
}
/**
* pump_transfers - Tasklet function which schedules next transfer
* when running in interrupt or DMA transfer mode.
* @data: SSP driver private data structure
*
*/
static void pump_transfers(unsigned long data)
static int do_interrupt_dma_transfer(struct pl022 *pl022)
{
struct pl022 *pl022 = (struct pl022 *) data;
struct spi_message *message = NULL;
struct spi_transfer *transfer = NULL;
struct spi_transfer *previous = NULL;
int ret;
/* Get current state information */
message = pl022->cur_msg;
transfer = pl022->cur_transfer;
/* Handle for abort */
if (message->state == STATE_ERROR) {
message->status = -EIO;
giveback(pl022);
return;
}
/* Handle end of message */
if (message->state == STATE_DONE) {
message->status = 0;
giveback(pl022);
return;
}
/* Delay if requested at end of transfer before CS change */
if (message->state == STATE_RUNNING) {
previous = list_entry(transfer->transfer_list.prev,
struct spi_transfer,
transfer_list);
/*
* FIXME: This runs in interrupt context.
* Is this really smart?
*/
spi_transfer_delay_exec(previous);
/* Reselect chip select only if cs_change was requested */
if (previous->cs_change)
pl022_cs_control(pl022, SSP_CHIP_SELECT);
} else {
/* STATE_START */
message->state = STATE_RUNNING;
}
if (set_up_next_transfer(pl022, transfer)) {
message->state = STATE_ERROR;
message->status = -EIO;
giveback(pl022);
return;
}
/* Flush the FIFOs and let's go! */
flush(pl022);
if (pl022->cur_chip->enable_dma) {
if (configure_dma(pl022)) {
dev_dbg(&pl022->adev->dev,
"configuration of DMA failed, fall back to interrupt mode\n");
goto err_config_dma;
}
return;
}
err_config_dma:
/* enable all interrupts except RX */
writew(ENABLE_ALL_INTERRUPTS & ~SSP_IMSC_MASK_RXIM, SSP_IMSC(pl022->virtbase));
}
static void do_interrupt_dma_transfer(struct pl022 *pl022)
{
/*
* Default is to enable all interrupts except RX -
* this will be enabled once TX is complete
*/
u32 irqflags = (u32)(ENABLE_ALL_INTERRUPTS & ~SSP_IMSC_MASK_RXIM);
/* Enable target chip, if not already active */
if (!pl022->next_msg_cs_active)
pl022_cs_control(pl022, SSP_CHIP_SELECT);
ret = set_up_next_transfer(pl022, pl022->cur_transfer);
if (ret)
return ret;
if (set_up_next_transfer(pl022, pl022->cur_transfer)) {
/* Error path */
pl022->cur_msg->state = STATE_ERROR;
pl022->cur_msg->status = -EIO;
giveback(pl022);
return;
}
/* If we're using DMA, set up DMA here */
if (pl022->cur_chip->enable_dma) {
/* Configure DMA transfer */
@ -1469,6 +1267,7 @@ err_config_dma:
writew((readw(SSP_CR1(pl022->virtbase)) | SSP_CR1_MASK_SSE),
SSP_CR1(pl022->virtbase));
writew(irqflags, SSP_IMSC(pl022->virtbase));
return 1;
}
static void print_current_status(struct pl022 *pl022)
@ -1495,111 +1294,65 @@ static void print_current_status(struct pl022 *pl022)
}
static void do_polling_transfer(struct pl022 *pl022)
static int do_polling_transfer(struct pl022 *pl022)
{
struct spi_message *message = NULL;
struct spi_transfer *transfer = NULL;
struct spi_transfer *previous = NULL;
int ret;
unsigned long time, timeout;
message = pl022->cur_msg;
/* Configuration Changing Per Transfer */
ret = set_up_next_transfer(pl022, pl022->cur_transfer);
if (ret)
return ret;
/* Flush FIFOs and enable SSP */
flush(pl022);
writew((readw(SSP_CR1(pl022->virtbase)) | SSP_CR1_MASK_SSE),
SSP_CR1(pl022->virtbase));
while (message->state != STATE_DONE) {
/* Handle for abort */
if (message->state == STATE_ERROR)
break;
transfer = pl022->cur_transfer;
dev_dbg(&pl022->adev->dev, "polling transfer ongoing ...\n");
/* Delay if requested at end of transfer */
if (message->state == STATE_RUNNING) {
previous =
list_entry(transfer->transfer_list.prev,
struct spi_transfer, transfer_list);
spi_transfer_delay_exec(previous);
if (previous->cs_change)
pl022_cs_control(pl022, SSP_CHIP_SELECT);
} else {
/* STATE_START */
message->state = STATE_RUNNING;
if (!pl022->next_msg_cs_active)
pl022_cs_control(pl022, SSP_CHIP_SELECT);
timeout = jiffies + msecs_to_jiffies(SPI_POLLING_TIMEOUT);
while (pl022->tx < pl022->tx_end || pl022->rx < pl022->rx_end) {
time = jiffies;
readwriter(pl022);
if (time_after(time, timeout)) {
dev_warn(&pl022->adev->dev,
"%s: timeout!\n", __func__);
print_current_status(pl022);
return -ETIMEDOUT;
}
/* Configuration Changing Per Transfer */
if (set_up_next_transfer(pl022, transfer)) {
/* Error path */
message->state = STATE_ERROR;
break;
}
/* Flush FIFOs and enable SSP */
flush(pl022);
writew((readw(SSP_CR1(pl022->virtbase)) | SSP_CR1_MASK_SSE),
SSP_CR1(pl022->virtbase));
dev_dbg(&pl022->adev->dev, "polling transfer ongoing ...\n");
timeout = jiffies + msecs_to_jiffies(SPI_POLLING_TIMEOUT);
while (pl022->tx < pl022->tx_end || pl022->rx < pl022->rx_end) {
time = jiffies;
readwriter(pl022);
if (time_after(time, timeout)) {
dev_warn(&pl022->adev->dev,
"%s: timeout!\n", __func__);
message->state = STATE_TIMEOUT;
print_current_status(pl022);
goto out;
}
cpu_relax();
}
/* Update total byte transferred */
message->actual_length += pl022->cur_transfer->len;
/* Move to next transfer */
message->state = next_transfer(pl022);
if (message->state != STATE_DONE
&& pl022->cur_transfer->cs_change)
pl022_cs_control(pl022, SSP_CHIP_DESELECT);
cpu_relax();
}
out:
/* Handle end of message */
if (message->state == STATE_DONE)
message->status = 0;
else if (message->state == STATE_TIMEOUT)
message->status = -EAGAIN;
else
message->status = -EIO;
giveback(pl022);
return;
return 0;
}
static int pl022_transfer_one_message(struct spi_controller *host,
struct spi_message *msg)
static int pl022_transfer_one(struct spi_controller *host, struct spi_device *spi,
struct spi_transfer *transfer)
{
struct pl022 *pl022 = spi_controller_get_devdata(host);
/* Initial message state */
pl022->cur_msg = msg;
msg->state = STATE_START;
pl022->cur_transfer = list_entry(msg->transfers.next,
struct spi_transfer, transfer_list);
pl022->cur_transfer = transfer;
/* Setup the SPI using the per chip configuration */
pl022->cur_chip = spi_get_ctldata(msg->spi);
pl022->cur_cs = spi_get_chipselect(msg->spi, 0);
/* This is always available but may be set to -ENOENT */
pl022->cur_gpiod = spi_get_csgpiod(msg->spi, 0);
pl022->cur_chip = spi_get_ctldata(spi);
pl022->cur_cs = spi_get_chipselect(spi, 0);
restore_state(pl022);
flush(pl022);
if (pl022->cur_chip->xfer_type == POLLING_TRANSFER)
do_polling_transfer(pl022);
return do_polling_transfer(pl022);
else
do_interrupt_dma_transfer(pl022);
return do_interrupt_dma_transfer(pl022);
}
return 0;
static void pl022_handle_err(struct spi_controller *ctlr, struct spi_message *message)
{
struct pl022 *pl022 = spi_controller_get_devdata(ctlr);
terminate_dma(pl022);
writew(DISABLE_ALL_INTERRUPTS, SSP_IMSC(pl022->virtbase));
writew(CLEAR_ALL_INTERRUPTS, SSP_ICR(pl022->virtbase));
}
static int pl022_unprepare_transfer_hardware(struct spi_controller *host)
@ -2138,7 +1891,9 @@ static int pl022_probe(struct amba_device *adev, const struct amba_id *id)
host->cleanup = pl022_cleanup;
host->setup = pl022_setup;
host->auto_runtime_pm = true;
host->transfer_one_message = pl022_transfer_one_message;
host->transfer_one = pl022_transfer_one;
host->set_cs = pl022_cs_control;
host->handle_err = pl022_handle_err;
host->unprepare_transfer_hardware = pl022_unprepare_transfer_hardware;
host->rt = platform_info->rt;
host->dev.of_node = dev->of_node;
@ -2175,10 +1930,6 @@ static int pl022_probe(struct amba_device *adev, const struct amba_id *id)
goto err_no_clk;
}
/* Initialize transfer pump */
tasklet_init(&pl022->pump_transfers, pump_transfers,
(unsigned long)pl022);
/* Disable SSP */
writew((readw(SSP_CR1(pl022->virtbase)) & (~SSP_CR1_MASK_SSE)),
SSP_CR1(pl022->virtbase));
@ -2261,7 +2012,6 @@ pl022_remove(struct amba_device *adev)
pl022_dma_remove(pl022);
amba_release_regions(adev);
tasklet_disable(&pl022->pump_transfers);
}
#ifdef CONFIG_PM_SLEEP
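The PL022 hunks above replace the driver's private message pump (transfer_one_message() plus the pump_transfers tasklet and the cur_msg/cur_transfer bookkeeping) with the SPI core's transfer_one(), set_cs() and handle_err() hooks. As a rough orientation aid, here is a minimal sketch of that pattern; the mydrv_* names and hardware helpers are hypothetical and stand in for the PL022 specifics:

static void mydrv_set_cs(struct spi_device *spi, bool enable)
{
        struct mydrv *priv = spi_controller_get_devdata(spi->controller);

        /* Drive the native chip select; GPIO chip selects are handled by the core. */
        mydrv_hw_set_cs(priv, spi_get_chipselect(spi, 0), enable);
}

static int mydrv_transfer_one(struct spi_controller *host,
                              struct spi_device *spi,
                              struct spi_transfer *xfer)
{
        struct mydrv *priv = spi_controller_get_devdata(host);

        mydrv_hw_start(priv, xfer);
        /*
         * A positive return tells the core the transfer completes
         * asynchronously; the interrupt (or DMA) path then calls
         * spi_finalize_current_transfer(host). Returning 0 means the
         * transfer already finished here, a negative value is an error.
         */
        return 1;
}

static irqreturn_t mydrv_irq(int irq, void *dev_id)
{
        struct mydrv *priv = dev_id;

        if (mydrv_hw_done(priv))
                spi_finalize_current_transfer(priv->host);
        return IRQ_HANDLED;
}

With these hooks the core owns the message queue, per-transfer delays, cs_change handling and error recovery (via handle_err()), which is what allows the tasklet, giveback() and the STATE_* machine to be deleted above.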


@ -29,12 +29,15 @@
#include <asm/unaligned.h>
#define SH_MSIOF_FLAG_FIXED_DTDL_200 BIT(0)
struct sh_msiof_chipdata {
u32 bits_per_word_mask;
u16 tx_fifo_size;
u16 rx_fifo_size;
u16 ctlr_flags;
u16 min_div_pow;
u32 flags;
};
struct sh_msiof_spi_priv {
@ -1072,6 +1075,16 @@ static const struct sh_msiof_chipdata rcar_gen3_data = {
.min_div_pow = 1,
};
static const struct sh_msiof_chipdata rcar_r8a7795_data = {
.bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16) |
SPI_BPW_MASK(24) | SPI_BPW_MASK(32),
.tx_fifo_size = 64,
.rx_fifo_size = 64,
.ctlr_flags = SPI_CONTROLLER_MUST_TX,
.min_div_pow = 1,
.flags = SH_MSIOF_FLAG_FIXED_DTDL_200,
};
static const struct of_device_id sh_msiof_match[] __maybe_unused = {
{ .compatible = "renesas,sh-mobile-msiof", .data = &sh_data },
{ .compatible = "renesas,msiof-r8a7743", .data = &rcar_gen2_data },
@ -1082,6 +1095,7 @@ static const struct of_device_id sh_msiof_match[] __maybe_unused = {
{ .compatible = "renesas,msiof-r8a7793", .data = &rcar_gen2_data },
{ .compatible = "renesas,msiof-r8a7794", .data = &rcar_gen2_data },
{ .compatible = "renesas,rcar-gen2-msiof", .data = &rcar_gen2_data },
{ .compatible = "renesas,msiof-r8a7795", .data = &rcar_r8a7795_data },
{ .compatible = "renesas,msiof-r8a7796", .data = &rcar_gen3_data },
{ .compatible = "renesas,rcar-gen3-msiof", .data = &rcar_gen3_data },
{ .compatible = "renesas,rcar-gen4-msiof", .data = &rcar_gen3_data },
@ -1279,6 +1293,9 @@ static int sh_msiof_spi_probe(struct platform_device *pdev)
return -ENXIO;
}
if (chipdata->flags & SH_MSIOF_FLAG_FIXED_DTDL_200)
info->dtdl = 200;
if (info->mode == MSIOF_SPI_TARGET)
ctlr = spi_alloc_target(&pdev->dev,
sizeof(struct sh_msiof_spi_priv));


@ -138,8 +138,7 @@ struct sprd_adi_data {
u32 slave_offset;
u32 slave_addr_size;
int (*read_check)(u32 val, u32 reg);
int (*restart)(struct notifier_block *this,
unsigned long mode, void *cmd);
int (*restart)(struct sys_off_data *data);
void (*wdg_rst)(void *p);
};
@ -150,7 +149,6 @@ struct sprd_adi {
struct hwspinlock *hwlock;
unsigned long slave_vbase;
unsigned long slave_pbase;
struct notifier_block restart_handler;
const struct sprd_adi_data *data;
};
@ -370,11 +368,9 @@ static void sprd_adi_set_wdt_rst_mode(void *p)
#endif
}
static int sprd_adi_restart(struct notifier_block *this, unsigned long mode,
void *cmd, struct sprd_adi_wdg *wdg)
static int sprd_adi_restart(struct sprd_adi *sadi, unsigned long mode,
const char *cmd, struct sprd_adi_wdg *wdg)
{
struct sprd_adi *sadi = container_of(this, struct sprd_adi,
restart_handler);
u32 val, reboot_mode = 0;
if (!cmd)
@ -448,8 +444,7 @@ static int sprd_adi_restart(struct notifier_block *this, unsigned long mode,
return NOTIFY_DONE;
}
static int sprd_adi_restart_sc9860(struct notifier_block *this,
unsigned long mode, void *cmd)
static int sprd_adi_restart_sc9860(struct sys_off_data *data)
{
struct sprd_adi_wdg wdg = {
.base = PMIC_WDG_BASE,
@ -458,7 +453,7 @@ static int sprd_adi_restart_sc9860(struct notifier_block *this,
.wdg_clk = PMIC_CLK_EN,
};
return sprd_adi_restart(this, mode, cmd, &wdg);
return sprd_adi_restart(data->cb_data, data->mode, data->cmd, &wdg);
}
static void sprd_adi_hw_init(struct sprd_adi *sadi)
@ -533,7 +528,7 @@ static int sprd_adi_probe(struct platform_device *pdev)
pdev->id = of_alias_get_id(np, "spi");
num_chipselect = of_get_child_count(np);
ctlr = spi_alloc_master(&pdev->dev, sizeof(struct sprd_adi));
ctlr = spi_alloc_host(&pdev->dev, sizeof(struct sprd_adi));
if (!ctlr)
return -ENOMEM;
@ -590,9 +585,9 @@ static int sprd_adi_probe(struct platform_device *pdev)
}
if (sadi->data->restart) {
sadi->restart_handler.notifier_call = sadi->data->restart;
sadi->restart_handler.priority = 128;
ret = register_restart_handler(&sadi->restart_handler);
ret = devm_register_restart_handler(&pdev->dev,
sadi->data->restart,
sadi);
if (ret) {
dev_err(&pdev->dev, "can not register restart handler\n");
goto put_ctlr;
@ -606,14 +601,6 @@ put_ctlr:
return ret;
}
static void sprd_adi_remove(struct platform_device *pdev)
{
struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev);
struct sprd_adi *sadi = spi_controller_get_devdata(ctlr);
unregister_restart_handler(&sadi->restart_handler);
}
static struct sprd_adi_data sc9860_data = {
.slave_offset = ADI_10BIT_SLAVE_OFFSET,
.slave_addr_size = ADI_10BIT_SLAVE_ADDR_SIZE,
@ -657,7 +644,6 @@ static struct platform_driver sprd_adi_driver = {
.of_match_table = sprd_adi_of_match,
},
.probe = sprd_adi_probe,
.remove_new = sprd_adi_remove,
};
module_platform_driver(sprd_adi_driver);
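The sprd-adi hunks above drop the struct notifier_block plumbing in favour of the device-managed restart handler API, whose callback receives a struct sys_off_data. A minimal sketch of that pattern, with hypothetical mydev_* names, looks like this:

static int mydev_restart(struct sys_off_data *data)
{
        struct mydev *priv = data->cb_data;     /* third argument below */

        /* data->cmd may be NULL; data->mode carries the reboot mode. */
        mydev_hw_trigger_reset(priv, data->cmd);
        return NOTIFY_DONE;
}

static int mydev_probe(struct platform_device *pdev)
{
        struct mydev *priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);

        if (!priv)
                return -ENOMEM;
        /* Unregistered automatically when the device goes away. */
        return devm_register_restart_handler(&pdev->dev, mydev_restart, priv);
}

Because the registration is device-managed, the explicit unregister_restart_handler() call and the whole sprd_adi_remove()/remove_new pair become unnecessary, as the hunks above show.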


@ -578,7 +578,7 @@ static void sprd_spi_dma_release(struct sprd_spi *ss)
static int sprd_spi_dma_txrx_bufs(struct spi_device *sdev,
struct spi_transfer *t)
{
struct sprd_spi *ss = spi_master_get_devdata(sdev->master);
struct sprd_spi *ss = spi_controller_get_devdata(sdev->controller);
u32 trans_len = ss->trans_len;
int ret, write_size = 0;
@ -923,7 +923,7 @@ static int sprd_spi_probe(struct platform_device *pdev)
int ret;
pdev->id = of_alias_get_id(pdev->dev.of_node, "spi");
sctlr = spi_alloc_master(&pdev->dev, sizeof(*ss));
sctlr = spi_alloc_host(&pdev->dev, sizeof(*ss));
if (!sctlr)
return -ENOMEM;


@ -6,7 +6,7 @@
* Patrice Chotard <patrice.chotard@st.com>
* Lee Jones <lee.jones@linaro.org>
*
* SPI master mode controller driver, used in STMicroelectronics devices.
* SPI host mode controller driver, used in STMicroelectronics devices.
*/
#include <linux/clk.h>
@ -115,10 +115,10 @@ static void ssc_read_rx_fifo(struct spi_st *spi_st)
spi_st->words_remaining -= count;
}
static int spi_st_transfer_one(struct spi_master *master,
static int spi_st_transfer_one(struct spi_controller *host,
struct spi_device *spi, struct spi_transfer *t)
{
struct spi_st *spi_st = spi_master_get_devdata(master);
struct spi_st *spi_st = spi_controller_get_devdata(host);
uint32_t ctl = 0;
/* Setup transfer */
@ -165,7 +165,7 @@ static int spi_st_transfer_one(struct spi_master *master,
if (ctl)
writel_relaxed(ctl, spi_st->base + SSC_CTL);
spi_finalize_current_transfer(spi->master);
spi_finalize_current_transfer(spi->controller);
return t->len;
}
@ -174,7 +174,7 @@ static int spi_st_transfer_one(struct spi_master *master,
#define MODEBITS (SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST | SPI_LOOP | SPI_CS_HIGH)
static int spi_st_setup(struct spi_device *spi)
{
struct spi_st *spi_st = spi_master_get_devdata(spi->master);
struct spi_st *spi_st = spi_controller_get_devdata(spi->controller);
u32 spi_st_clk, sscbrg, var;
u32 hz = spi->max_speed_hz;
@ -274,35 +274,35 @@ static irqreturn_t spi_st_irq(int irq, void *dev_id)
static int spi_st_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct spi_master *master;
struct spi_controller *host;
struct spi_st *spi_st;
int irq, ret = 0;
u32 var;
master = spi_alloc_master(&pdev->dev, sizeof(*spi_st));
if (!master)
host = spi_alloc_host(&pdev->dev, sizeof(*spi_st));
if (!host)
return -ENOMEM;
master->dev.of_node = np;
master->mode_bits = MODEBITS;
master->setup = spi_st_setup;
master->transfer_one = spi_st_transfer_one;
master->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16);
master->auto_runtime_pm = true;
master->bus_num = pdev->id;
master->use_gpio_descriptors = true;
spi_st = spi_master_get_devdata(master);
host->dev.of_node = np;
host->mode_bits = MODEBITS;
host->setup = spi_st_setup;
host->transfer_one = spi_st_transfer_one;
host->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16);
host->auto_runtime_pm = true;
host->bus_num = pdev->id;
host->use_gpio_descriptors = true;
spi_st = spi_controller_get_devdata(host);
spi_st->clk = devm_clk_get(&pdev->dev, "ssc");
if (IS_ERR(spi_st->clk)) {
dev_err(&pdev->dev, "Unable to request clock\n");
ret = PTR_ERR(spi_st->clk);
goto put_master;
goto put_host;
}
ret = clk_prepare_enable(spi_st->clk);
if (ret)
goto put_master;
goto put_host;
init_completion(&spi_st->done);
@ -324,7 +324,7 @@ static int spi_st_probe(struct platform_device *pdev)
var &= ~SSC_CTL_SR;
writel_relaxed(var, spi_st->base + SSC_CTL);
/* Set SSC into slave mode before reconfiguring PIO pins */
/* Set SSC into target mode before reconfiguring PIO pins */
var = readl_relaxed(spi_st->base + SSC_CTL);
var &= ~SSC_CTL_MS;
writel_relaxed(var, spi_st->base + SSC_CTL);
@ -347,11 +347,11 @@ static int spi_st_probe(struct platform_device *pdev)
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
platform_set_drvdata(pdev, master);
platform_set_drvdata(pdev, host);
ret = devm_spi_register_master(&pdev->dev, master);
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret) {
dev_err(&pdev->dev, "Failed to register master\n");
dev_err(&pdev->dev, "Failed to register host\n");
goto rpm_disable;
}
@ -361,15 +361,15 @@ rpm_disable:
pm_runtime_disable(&pdev->dev);
clk_disable:
clk_disable_unprepare(spi_st->clk);
put_master:
spi_master_put(master);
put_host:
spi_controller_put(host);
return ret;
}
static void spi_st_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct spi_st *spi_st = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct spi_st *spi_st = spi_controller_get_devdata(host);
pm_runtime_disable(&pdev->dev);
@ -381,8 +381,8 @@ static void spi_st_remove(struct platform_device *pdev)
#ifdef CONFIG_PM
static int spi_st_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_st *spi_st = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct spi_st *spi_st = spi_controller_get_devdata(host);
writel_relaxed(0, spi_st->base + SSC_IEN);
pinctrl_pm_select_sleep_state(dev);
@ -394,8 +394,8 @@ static int spi_st_runtime_suspend(struct device *dev)
static int spi_st_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_st *spi_st = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct spi_st *spi_st = spi_controller_get_devdata(host);
int ret;
ret = clk_prepare_enable(spi_st->clk);
@ -408,10 +408,10 @@ static int spi_st_runtime_resume(struct device *dev)
#ifdef CONFIG_PM_SLEEP
static int spi_st_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_controller *host = dev_get_drvdata(dev);
int ret;
ret = spi_master_suspend(master);
ret = spi_controller_suspend(host);
if (ret)
return ret;
@ -420,10 +420,10 @@ static int spi_st_suspend(struct device *dev)
static int spi_st_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_controller *host = dev_get_drvdata(dev);
int ret;
ret = spi_master_resume(master);
ret = spi_controller_resume(host);
if (ret)
return ret;


@ -357,7 +357,7 @@ static int stm32_qspi_get_mode(u8 buswidth)
static int stm32_qspi_send(struct spi_device *spi, const struct spi_mem_op *op)
{
struct stm32_qspi *qspi = spi_controller_get_devdata(spi->master);
struct stm32_qspi *qspi = spi_controller_get_devdata(spi->controller);
struct stm32_qspi_flash *flash = &qspi->flash[spi_get_chipselect(spi, 0)];
u32 ccr, cr;
int timeout, err = 0, err_poll_status = 0;
@ -448,7 +448,7 @@ static int stm32_qspi_poll_status(struct spi_mem *mem, const struct spi_mem_op *
unsigned long polling_rate_us,
unsigned long timeout_ms)
{
struct stm32_qspi *qspi = spi_controller_get_devdata(mem->spi->master);
struct stm32_qspi *qspi = spi_controller_get_devdata(mem->spi->controller);
int ret;
if (!spi_mem_supports_op(mem, op))
@ -476,7 +476,7 @@ static int stm32_qspi_poll_status(struct spi_mem *mem, const struct spi_mem_op *
static int stm32_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
{
struct stm32_qspi *qspi = spi_controller_get_devdata(mem->spi->master);
struct stm32_qspi *qspi = spi_controller_get_devdata(mem->spi->controller);
int ret;
ret = pm_runtime_resume_and_get(qspi->dev);
@ -500,7 +500,7 @@ static int stm32_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
static int stm32_qspi_dirmap_create(struct spi_mem_dirmap_desc *desc)
{
struct stm32_qspi *qspi = spi_controller_get_devdata(desc->mem->spi->master);
struct stm32_qspi *qspi = spi_controller_get_devdata(desc->mem->spi->controller);
if (desc->info.op_tmpl.data.dir == SPI_MEM_DATA_OUT)
return -EOPNOTSUPP;
@ -518,7 +518,7 @@ static int stm32_qspi_dirmap_create(struct spi_mem_dirmap_desc *desc)
static ssize_t stm32_qspi_dirmap_read(struct spi_mem_dirmap_desc *desc,
u64 offs, size_t len, void *buf)
{
struct stm32_qspi *qspi = spi_controller_get_devdata(desc->mem->spi->master);
struct stm32_qspi *qspi = spi_controller_get_devdata(desc->mem->spi->controller);
struct spi_mem_op op;
u32 addr_max;
int ret;
@ -640,7 +640,7 @@ end_of_transfer:
static int stm32_qspi_setup(struct spi_device *spi)
{
struct spi_controller *ctrl = spi->master;
struct spi_controller *ctrl = spi->controller;
struct stm32_qspi *qspi = spi_controller_get_devdata(ctrl);
struct stm32_qspi_flash *flash;
u32 presc, mode;
@ -775,7 +775,7 @@ static int stm32_qspi_probe(struct platform_device *pdev)
struct resource *res;
int ret, irq;
ctrl = devm_spi_alloc_master(dev, sizeof(*qspi));
ctrl = devm_spi_alloc_host(dev, sizeof(*qspi));
if (!ctrl)
return -ENOMEM;
@ -861,7 +861,7 @@ static int stm32_qspi_probe(struct platform_device *pdev)
pm_runtime_enable(dev);
pm_runtime_get_noresume(dev);
ret = spi_register_master(ctrl);
ret = spi_register_controller(ctrl);
if (ret)
goto err_pm_runtime_free;
@ -892,7 +892,7 @@ static void stm32_qspi_remove(struct platform_device *pdev)
struct stm32_qspi *qspi = platform_get_drvdata(pdev);
pm_runtime_get_sync(qspi->dev);
spi_unregister_master(qspi->ctrl);
spi_unregister_controller(qspi->ctrl);
/* disable qspi */
writel_relaxed(0, qspi->io_base + QSPI_CR);
stm32_qspi_dma_free(qspi);

(diff for one file suppressed: too large to display)

@ -75,7 +75,7 @@
#define SUN4I_FIFO_STA_TF_CNT_BITS 16
struct sun4i_spi {
struct spi_master *master;
struct spi_controller *host;
void __iomem *base_addr;
struct clk *hclk;
struct clk *mclk;
@ -161,7 +161,7 @@ static inline void sun4i_spi_fill_fifo(struct sun4i_spi *sspi, int len)
static void sun4i_spi_set_cs(struct spi_device *spi, bool enable)
{
struct sun4i_spi *sspi = spi_master_get_devdata(spi->master);
struct sun4i_spi *sspi = spi_controller_get_devdata(spi->controller);
u32 reg;
reg = sun4i_spi_read(sspi, SUN4I_CTL_REG);
@ -201,11 +201,11 @@ static size_t sun4i_spi_max_transfer_size(struct spi_device *spi)
return SUN4I_MAX_XFER_SIZE - 1;
}
static int sun4i_spi_transfer_one(struct spi_master *master,
static int sun4i_spi_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *tfr)
{
struct sun4i_spi *sspi = spi_master_get_devdata(master);
struct sun4i_spi *sspi = spi_controller_get_devdata(host);
unsigned int mclk_rate, div, timeout;
unsigned int start, end, tx_time;
unsigned int tx_len = 0;
@ -331,7 +331,7 @@ static int sun4i_spi_transfer_one(struct spi_master *master,
msecs_to_jiffies(tx_time));
end = jiffies;
if (!timeout) {
dev_warn(&master->dev,
dev_warn(&host->dev,
"%s: timeout transferring %u bytes@%iHz for %i(%i)ms",
dev_name(&spi->dev), tfr->len, tfr->speed_hz,
jiffies_to_msecs(end - start), tx_time);
@ -386,8 +386,8 @@ static irqreturn_t sun4i_spi_handler(int irq, void *dev_id)
static int sun4i_spi_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct sun4i_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct sun4i_spi *sspi = spi_controller_get_devdata(host);
int ret;
ret = clk_prepare_enable(sspi->hclk);
@ -415,8 +415,8 @@ out:
static int sun4i_spi_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct sun4i_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct sun4i_spi *sspi = spi_controller_get_devdata(host);
clk_disable_unprepare(sspi->mclk);
clk_disable_unprepare(sspi->hclk);
@ -426,62 +426,62 @@ static int sun4i_spi_runtime_suspend(struct device *dev)
static int sun4i_spi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct sun4i_spi *sspi;
int ret = 0, irq;
master = spi_alloc_master(&pdev->dev, sizeof(struct sun4i_spi));
if (!master) {
dev_err(&pdev->dev, "Unable to allocate SPI Master\n");
host = spi_alloc_host(&pdev->dev, sizeof(struct sun4i_spi));
if (!host) {
dev_err(&pdev->dev, "Unable to allocate SPI Host\n");
return -ENOMEM;
}
platform_set_drvdata(pdev, master);
sspi = spi_master_get_devdata(master);
platform_set_drvdata(pdev, host);
sspi = spi_controller_get_devdata(host);
sspi->base_addr = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(sspi->base_addr)) {
ret = PTR_ERR(sspi->base_addr);
goto err_free_master;
goto err_free_host;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
ret = -ENXIO;
goto err_free_master;
goto err_free_host;
}
ret = devm_request_irq(&pdev->dev, irq, sun4i_spi_handler,
0, "sun4i-spi", sspi);
if (ret) {
dev_err(&pdev->dev, "Cannot request IRQ\n");
goto err_free_master;
goto err_free_host;
}
sspi->master = master;
master->max_speed_hz = 100 * 1000 * 1000;
master->min_speed_hz = 3 * 1000;
master->set_cs = sun4i_spi_set_cs;
master->transfer_one = sun4i_spi_transfer_one;
master->num_chipselect = 4;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
master->bits_per_word_mask = SPI_BPW_MASK(8);
master->dev.of_node = pdev->dev.of_node;
master->auto_runtime_pm = true;
master->max_transfer_size = sun4i_spi_max_transfer_size;
sspi->host = host;
host->max_speed_hz = 100 * 1000 * 1000;
host->min_speed_hz = 3 * 1000;
host->set_cs = sun4i_spi_set_cs;
host->transfer_one = sun4i_spi_transfer_one;
host->num_chipselect = 4;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
host->bits_per_word_mask = SPI_BPW_MASK(8);
host->dev.of_node = pdev->dev.of_node;
host->auto_runtime_pm = true;
host->max_transfer_size = sun4i_spi_max_transfer_size;
sspi->hclk = devm_clk_get(&pdev->dev, "ahb");
if (IS_ERR(sspi->hclk)) {
dev_err(&pdev->dev, "Unable to acquire AHB clock\n");
ret = PTR_ERR(sspi->hclk);
goto err_free_master;
goto err_free_host;
}
sspi->mclk = devm_clk_get(&pdev->dev, "mod");
if (IS_ERR(sspi->mclk)) {
dev_err(&pdev->dev, "Unable to acquire module clock\n");
ret = PTR_ERR(sspi->mclk);
goto err_free_master;
goto err_free_host;
}
init_completion(&sspi->done);
@ -493,16 +493,16 @@ static int sun4i_spi_probe(struct platform_device *pdev)
ret = sun4i_spi_runtime_resume(&pdev->dev);
if (ret) {
dev_err(&pdev->dev, "Couldn't resume the device\n");
goto err_free_master;
goto err_free_host;
}
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
pm_runtime_idle(&pdev->dev);
ret = devm_spi_register_master(&pdev->dev, master);
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret) {
dev_err(&pdev->dev, "cannot register SPI master\n");
dev_err(&pdev->dev, "cannot register SPI host\n");
goto err_pm_disable;
}
@ -511,8 +511,8 @@ static int sun4i_spi_probe(struct platform_device *pdev)
err_pm_disable:
pm_runtime_disable(&pdev->dev);
sun4i_spi_runtime_suspend(&pdev->dev);
err_free_master:
spi_master_put(master);
err_free_host:
spi_controller_put(host);
return ret;
}


@ -97,7 +97,7 @@ struct sun6i_spi_cfg {
};
struct sun6i_spi {
struct spi_master *master;
struct spi_controller *host;
void __iomem *base_addr;
dma_addr_t dma_addr_rx;
dma_addr_t dma_addr_tx;
@ -181,7 +181,7 @@ static inline void sun6i_spi_fill_fifo(struct sun6i_spi *sspi)
static void sun6i_spi_set_cs(struct spi_device *spi, bool enable)
{
struct sun6i_spi *sspi = spi_master_get_devdata(spi->master);
struct sun6i_spi *sspi = spi_controller_get_devdata(spi->controller);
u32 reg;
reg = sun6i_spi_read(sspi, SUN6I_TFR_CTL_REG);
@ -212,7 +212,7 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
struct spi_transfer *tfr)
{
struct dma_async_tx_descriptor *rxdesc, *txdesc;
struct spi_master *master = sspi->master;
struct spi_controller *host = sspi->host;
rxdesc = NULL;
if (tfr->rx_buf) {
@ -223,9 +223,9 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
.src_maxburst = 8,
};
dmaengine_slave_config(master->dma_rx, &rxconf);
dmaengine_slave_config(host->dma_rx, &rxconf);
rxdesc = dmaengine_prep_slave_sg(master->dma_rx,
rxdesc = dmaengine_prep_slave_sg(host->dma_rx,
tfr->rx_sg.sgl,
tfr->rx_sg.nents,
DMA_DEV_TO_MEM,
@ -245,38 +245,38 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
.dst_maxburst = 8,
};
dmaengine_slave_config(master->dma_tx, &txconf);
dmaengine_slave_config(host->dma_tx, &txconf);
txdesc = dmaengine_prep_slave_sg(master->dma_tx,
txdesc = dmaengine_prep_slave_sg(host->dma_tx,
tfr->tx_sg.sgl,
tfr->tx_sg.nents,
DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT);
if (!txdesc) {
if (rxdesc)
dmaengine_terminate_sync(master->dma_rx);
dmaengine_terminate_sync(host->dma_rx);
return -EINVAL;
}
}
if (tfr->rx_buf) {
dmaengine_submit(rxdesc);
dma_async_issue_pending(master->dma_rx);
dma_async_issue_pending(host->dma_rx);
}
if (tfr->tx_buf) {
dmaengine_submit(txdesc);
dma_async_issue_pending(master->dma_tx);
dma_async_issue_pending(host->dma_tx);
}
return 0;
}
static int sun6i_spi_transfer_one(struct spi_master *master,
static int sun6i_spi_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *tfr)
{
struct sun6i_spi *sspi = spi_master_get_devdata(master);
struct sun6i_spi *sspi = spi_controller_get_devdata(host);
unsigned int div, div_cdr1, div_cdr2, timeout;
unsigned int start, end, tx_time;
unsigned int trig_level;
@ -293,7 +293,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
sspi->tx_buf = tfr->tx_buf;
sspi->rx_buf = tfr->rx_buf;
sspi->len = tfr->len;
use_dma = master->can_dma ? master->can_dma(master, spi, tfr) : false;
use_dma = host->can_dma ? host->can_dma(host, spi, tfr) : false;
/* Clear pending interrupts */
sun6i_spi_write(sspi, SUN6I_INT_STA_REG, ~0);
@ -463,7 +463,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
} else {
ret = sun6i_spi_prepare_dma(sspi, tfr);
if (ret) {
dev_warn(&master->dev,
dev_warn(&host->dev,
"%s: prepare DMA failed, ret=%d",
dev_name(&spi->dev), ret);
return ret;
@ -486,7 +486,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
reg = sun6i_spi_read(sspi, SUN6I_TFR_CTL_REG);
sun6i_spi_write(sspi, SUN6I_TFR_CTL_REG, reg | SUN6I_TFR_CTL_XCH);
tx_time = spi_controller_xfer_timeout(master, tfr);
tx_time = spi_controller_xfer_timeout(host, tfr);
start = jiffies;
timeout = wait_for_completion_timeout(&sspi->done,
msecs_to_jiffies(tx_time));
@ -502,13 +502,13 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
timeout = wait_for_completion_timeout(&sspi->dma_rx_done,
timeout);
if (!timeout)
dev_warn(&master->dev, "RX DMA timeout\n");
dev_warn(&host->dev, "RX DMA timeout\n");
}
}
end = jiffies;
if (!timeout) {
dev_warn(&master->dev,
dev_warn(&host->dev,
"%s: timeout transferring %u bytes@%iHz for %i(%i)ms",
dev_name(&spi->dev), tfr->len, tfr->speed_hz,
jiffies_to_msecs(end - start), tx_time);
@ -518,8 +518,8 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
sun6i_spi_write(sspi, SUN6I_INT_CTL_REG, 0);
if (ret && use_dma) {
dmaengine_terminate_sync(master->dma_rx);
dmaengine_terminate_sync(master->dma_tx);
dmaengine_terminate_sync(host->dma_rx);
dmaengine_terminate_sync(host->dma_tx);
}
return ret;
@ -564,8 +564,8 @@ static irqreturn_t sun6i_spi_handler(int irq, void *dev_id)
static int sun6i_spi_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct sun6i_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct sun6i_spi *sspi = spi_controller_get_devdata(host);
int ret;
ret = clk_prepare_enable(sspi->hclk);
@ -601,8 +601,8 @@ out:
static int sun6i_spi_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct sun6i_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct sun6i_spi *sspi = spi_controller_get_devdata(host);
reset_control_assert(sspi->rstc);
clk_disable_unprepare(sspi->mclk);
@ -611,11 +611,11 @@ static int sun6i_spi_runtime_suspend(struct device *dev)
return 0;
}
static bool sun6i_spi_can_dma(struct spi_master *master,
static bool sun6i_spi_can_dma(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct sun6i_spi *sspi = spi_master_get_devdata(master);
struct sun6i_spi *sspi = spi_controller_get_devdata(host);
/*
* If the number of spi words to transfer is less or equal than
@ -627,67 +627,67 @@ static bool sun6i_spi_can_dma(struct spi_master *master,
static int sun6i_spi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct sun6i_spi *sspi;
struct resource *mem;
int ret = 0, irq;
master = spi_alloc_master(&pdev->dev, sizeof(struct sun6i_spi));
if (!master) {
dev_err(&pdev->dev, "Unable to allocate SPI Master\n");
host = spi_alloc_host(&pdev->dev, sizeof(struct sun6i_spi));
if (!host) {
dev_err(&pdev->dev, "Unable to allocate SPI Host\n");
return -ENOMEM;
}
platform_set_drvdata(pdev, master);
sspi = spi_master_get_devdata(master);
platform_set_drvdata(pdev, host);
sspi = spi_controller_get_devdata(host);
sspi->base_addr = devm_platform_get_and_ioremap_resource(pdev, 0, &mem);
if (IS_ERR(sspi->base_addr)) {
ret = PTR_ERR(sspi->base_addr);
goto err_free_master;
goto err_free_host;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
ret = -ENXIO;
goto err_free_master;
goto err_free_host;
}
ret = devm_request_irq(&pdev->dev, irq, sun6i_spi_handler,
0, "sun6i-spi", sspi);
if (ret) {
dev_err(&pdev->dev, "Cannot request IRQ\n");
goto err_free_master;
goto err_free_host;
}
sspi->master = master;
sspi->host = host;
sspi->cfg = of_device_get_match_data(&pdev->dev);
master->max_speed_hz = 100 * 1000 * 1000;
master->min_speed_hz = 3 * 1000;
master->use_gpio_descriptors = true;
master->set_cs = sun6i_spi_set_cs;
master->transfer_one = sun6i_spi_transfer_one;
master->num_chipselect = 4;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST |
sspi->cfg->mode_bits;
master->bits_per_word_mask = SPI_BPW_MASK(8);
master->dev.of_node = pdev->dev.of_node;
master->auto_runtime_pm = true;
master->max_transfer_size = sun6i_spi_max_transfer_size;
host->max_speed_hz = 100 * 1000 * 1000;
host->min_speed_hz = 3 * 1000;
host->use_gpio_descriptors = true;
host->set_cs = sun6i_spi_set_cs;
host->transfer_one = sun6i_spi_transfer_one;
host->num_chipselect = 4;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST |
sspi->cfg->mode_bits;
host->bits_per_word_mask = SPI_BPW_MASK(8);
host->dev.of_node = pdev->dev.of_node;
host->auto_runtime_pm = true;
host->max_transfer_size = sun6i_spi_max_transfer_size;
sspi->hclk = devm_clk_get(&pdev->dev, "ahb");
if (IS_ERR(sspi->hclk)) {
dev_err(&pdev->dev, "Unable to acquire AHB clock\n");
ret = PTR_ERR(sspi->hclk);
goto err_free_master;
goto err_free_host;
}
sspi->mclk = devm_clk_get(&pdev->dev, "mod");
if (IS_ERR(sspi->mclk)) {
dev_err(&pdev->dev, "Unable to acquire module clock\n");
ret = PTR_ERR(sspi->mclk);
goto err_free_master;
goto err_free_host;
}
init_completion(&sspi->done);
@ -697,34 +697,34 @@ static int sun6i_spi_probe(struct platform_device *pdev)
if (IS_ERR(sspi->rstc)) {
dev_err(&pdev->dev, "Couldn't get reset controller\n");
ret = PTR_ERR(sspi->rstc);
goto err_free_master;
goto err_free_host;
}
master->dma_tx = dma_request_chan(&pdev->dev, "tx");
if (IS_ERR(master->dma_tx)) {
host->dma_tx = dma_request_chan(&pdev->dev, "tx");
if (IS_ERR(host->dma_tx)) {
/* Check tx to see if we need defer probing driver */
if (PTR_ERR(master->dma_tx) == -EPROBE_DEFER) {
if (PTR_ERR(host->dma_tx) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto err_free_master;
goto err_free_host;
}
dev_warn(&pdev->dev, "Failed to request TX DMA channel\n");
master->dma_tx = NULL;
host->dma_tx = NULL;
}
master->dma_rx = dma_request_chan(&pdev->dev, "rx");
if (IS_ERR(master->dma_rx)) {
if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) {
host->dma_rx = dma_request_chan(&pdev->dev, "rx");
if (IS_ERR(host->dma_rx)) {
if (PTR_ERR(host->dma_rx) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto err_free_dma_tx;
}
dev_warn(&pdev->dev, "Failed to request RX DMA channel\n");
master->dma_rx = NULL;
host->dma_rx = NULL;
}
if (master->dma_tx && master->dma_rx) {
if (host->dma_tx && host->dma_rx) {
sspi->dma_addr_tx = mem->start + SUN6I_TXDATA_REG;
sspi->dma_addr_rx = mem->start + SUN6I_RXDATA_REG;
master->can_dma = sun6i_spi_can_dma;
host->can_dma = sun6i_spi_can_dma;
}
/*
@ -742,9 +742,9 @@ static int sun6i_spi_probe(struct platform_device *pdev)
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);
ret = devm_spi_register_master(&pdev->dev, master);
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret) {
dev_err(&pdev->dev, "cannot register SPI master\n");
dev_err(&pdev->dev, "cannot register SPI host\n");
goto err_pm_disable;
}
@ -754,26 +754,26 @@ err_pm_disable:
pm_runtime_disable(&pdev->dev);
sun6i_spi_runtime_suspend(&pdev->dev);
err_free_dma_rx:
if (master->dma_rx)
dma_release_channel(master->dma_rx);
if (host->dma_rx)
dma_release_channel(host->dma_rx);
err_free_dma_tx:
if (master->dma_tx)
dma_release_channel(master->dma_tx);
err_free_master:
spi_master_put(master);
if (host->dma_tx)
dma_release_channel(host->dma_tx);
err_free_host:
spi_controller_put(host);
return ret;
}
static void sun6i_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct spi_controller *host = platform_get_drvdata(pdev);
pm_runtime_force_suspend(&pdev->dev);
if (master->dma_tx)
dma_release_channel(master->dma_tx);
if (master->dma_rx)
dma_release_channel(master->dma_rx);
if (host->dma_tx)
dma_release_channel(host->dma_tx);
if (host->dma_rx)
dma_release_channel(host->dma_rx);
}
static const struct sun6i_spi_cfg sun6i_a31_spi_cfg = {


@ -70,8 +70,8 @@
#define SP7021_FIFO_DATA_LEN (16)
enum {
SP7021_MASTER_MODE = 0,
SP7021_SLAVE_MODE = 1,
SP7021_HOST_MODE = 0,
SP7021_TARGET_MODE = 1,
};
struct sp7021_spi_ctlr {
@ -88,7 +88,7 @@ struct sp7021_spi_ctlr {
// data xfer lock
struct mutex buf_lock;
struct completion isr_done;
struct completion slave_isr;
struct completion target_isr;
unsigned int rx_cur_len;
unsigned int tx_cur_len;
unsigned int data_unit;
@ -96,7 +96,7 @@ struct sp7021_spi_ctlr {
u8 *rx_buf;
};
static irqreturn_t sp7021_spi_slave_irq(int irq, void *dev)
static irqreturn_t sp7021_spi_target_irq(int irq, void *dev)
{
struct sp7021_spi_ctlr *pspim = dev;
unsigned int data_status;
@ -104,25 +104,25 @@ static irqreturn_t sp7021_spi_slave_irq(int irq, void *dev)
data_status = readl(pspim->s_base + SP7021_DATA_RDY_REG);
data_status |= SP7021_SLAVE_CLR_INT;
writel(data_status , pspim->s_base + SP7021_DATA_RDY_REG);
complete(&pspim->slave_isr);
complete(&pspim->target_isr);
return IRQ_HANDLED;
}
static int sp7021_spi_slave_abort(struct spi_controller *ctlr)
static int sp7021_spi_target_abort(struct spi_controller *ctlr)
{
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
complete(&pspim->slave_isr);
complete(&pspim->target_isr);
complete(&pspim->isr_done);
return 0;
}
static int sp7021_spi_slave_tx(struct spi_device *spi, struct spi_transfer *xfer)
static int sp7021_spi_target_tx(struct spi_device *spi, struct spi_transfer *xfer)
{
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(spi->controller);
u32 value;
reinit_completion(&pspim->slave_isr);
reinit_completion(&pspim->target_isr);
value = SP7021_SLAVE_DMA_EN | SP7021_SLAVE_DMA_RW | FIELD_PREP(SP7021_SLAVE_DMA_CMD, 3);
writel(value, pspim->s_base + SP7021_SLAVE_DMA_CTRL_REG);
writel(xfer->len, pspim->s_base + SP7021_SLAVE_DMA_LENGTH_REG);
@ -137,7 +137,7 @@ static int sp7021_spi_slave_tx(struct spi_device *spi, struct spi_transfer *xfer
return 0;
}
static int sp7021_spi_slave_rx(struct spi_device *spi, struct spi_transfer *xfer)
static int sp7021_spi_target_rx(struct spi_device *spi, struct spi_transfer *xfer)
{
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(spi->controller);
u32 value;
@ -155,7 +155,7 @@ static int sp7021_spi_slave_rx(struct spi_device *spi, struct spi_transfer *xfer
return 0;
}
static void sp7021_spi_master_rb(struct sp7021_spi_ctlr *pspim, unsigned int len)
static void sp7021_spi_host_rb(struct sp7021_spi_ctlr *pspim, unsigned int len)
{
int i;
@ -166,7 +166,7 @@ static void sp7021_spi_master_rb(struct sp7021_spi_ctlr *pspim, unsigned int len
}
}
static void sp7021_spi_master_wb(struct sp7021_spi_ctlr *pspim, unsigned int len)
static void sp7021_spi_host_wb(struct sp7021_spi_ctlr *pspim, unsigned int len)
{
int i;
@ -177,7 +177,7 @@ static void sp7021_spi_master_wb(struct sp7021_spi_ctlr *pspim, unsigned int len
}
}
static irqreturn_t sp7021_spi_master_irq(int irq, void *dev)
static irqreturn_t sp7021_spi_host_irq(int irq, void *dev)
{
struct sp7021_spi_ctlr *pspim = dev;
unsigned int tx_cnt, total_len;
@ -206,9 +206,9 @@ static irqreturn_t sp7021_spi_master_irq(int irq, void *dev)
fd_status, rx_cnt, tx_cnt, tx_len);
if (rx_cnt > 0)
sp7021_spi_master_rb(pspim, rx_cnt);
sp7021_spi_host_rb(pspim, rx_cnt);
if (tx_cnt > 0)
sp7021_spi_master_wb(pspim, tx_cnt);
sp7021_spi_host_wb(pspim, tx_cnt);
fd_status = readl(pspim->m_base + SP7021_SPI_STATUS_REG);
tx_len = FIELD_GET(SP7021_TX_LEN_MASK, fd_status);
@ -224,7 +224,7 @@ static irqreturn_t sp7021_spi_master_irq(int irq, void *dev)
rx_cnt = FIELD_GET(SP7021_RX_CNT_MASK, fd_status);
if (rx_cnt > 0)
sp7021_spi_master_rb(pspim, rx_cnt);
sp7021_spi_host_rb(pspim, rx_cnt);
}
value = readl(pspim->m_base + SP7021_INT_BUSY_REG);
value |= SP7021_CLR_MASTER_INT;
@ -240,7 +240,7 @@ static irqreturn_t sp7021_spi_master_irq(int irq, void *dev)
static void sp7021_prep_transfer(struct spi_controller *ctlr, struct spi_device *spi)
{
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
pspim->tx_cur_len = 0;
pspim->rx_cur_len = 0;
@ -251,7 +251,7 @@ static void sp7021_prep_transfer(struct spi_controller *ctlr, struct spi_device
static int sp7021_spi_controller_prepare_message(struct spi_controller *ctlr,
struct spi_message *msg)
{
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
struct spi_device *s = msg->spi;
u32 valus, rs = 0;
@ -283,7 +283,7 @@ static int sp7021_spi_controller_prepare_message(struct spi_controller *ctlr,
static void sp7021_spi_setup_clk(struct spi_controller *ctlr, struct spi_transfer *xfer)
{
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
u32 clk_rate, clk_sel, div;
clk_rate = clk_get_rate(pspim->spi_clk);
@ -295,10 +295,10 @@ static void sp7021_spi_setup_clk(struct spi_controller *ctlr, struct spi_transfe
writel(pspim->xfer_conf, pspim->m_base + SP7021_SPI_CONFIG_REG);
}
static int sp7021_spi_master_transfer_one(struct spi_controller *ctlr, struct spi_device *spi,
static int sp7021_spi_host_transfer_one(struct spi_controller *ctlr, struct spi_device *spi,
struct spi_transfer *xfer)
{
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
unsigned long timeout = msecs_to_jiffies(1000);
unsigned int xfer_cnt, xfer_len, last_len;
unsigned int i, len_temp;
@ -323,7 +323,7 @@ static int sp7021_spi_master_transfer_one(struct spi_controller *ctlr, struct sp
if (pspim->tx_cur_len < xfer_len) {
len_temp = min(pspim->data_unit, xfer_len);
sp7021_spi_master_wb(pspim, len_temp);
sp7021_spi_host_wb(pspim, len_temp);
}
reg_temp = readl(pspim->m_base + SP7021_SPI_CONFIG_REG);
reg_temp &= ~SP7021_CLEAN_RW_BYTE;
@ -359,10 +359,10 @@ static int sp7021_spi_master_transfer_one(struct spi_controller *ctlr, struct sp
return 0;
}
static int sp7021_spi_slave_transfer_one(struct spi_controller *ctlr, struct spi_device *spi,
static int sp7021_spi_target_transfer_one(struct spi_controller *ctlr, struct spi_device *spi,
struct spi_transfer *xfer)
{
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
struct device *dev = pspim->dev;
int ret;
@ -371,14 +371,14 @@ static int sp7021_spi_slave_transfer_one(struct spi_controller *ctlr, struct spi
xfer->len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, xfer->tx_dma))
return -ENOMEM;
ret = sp7021_spi_slave_tx(spi, xfer);
ret = sp7021_spi_target_tx(spi, xfer);
dma_unmap_single(dev, xfer->tx_dma, xfer->len, DMA_TO_DEVICE);
} else if (xfer->rx_buf && !xfer->tx_buf) {
xfer->rx_dma = dma_map_single(dev, xfer->rx_buf, xfer->len,
DMA_FROM_DEVICE);
if (dma_mapping_error(dev, xfer->rx_dma))
return -ENOMEM;
ret = sp7021_spi_slave_rx(spi, xfer);
ret = sp7021_spi_target_rx(spi, xfer);
dma_unmap_single(dev, xfer->rx_dma, xfer->len, DMA_FROM_DEVICE);
} else {
dev_dbg(&ctlr->dev, "%s() wrong command\n", __func__);
@ -409,14 +409,14 @@ static int sp7021_spi_controller_probe(struct platform_device *pdev)
pdev->id = of_alias_get_id(pdev->dev.of_node, "sp_spi");
if (device_property_read_bool(dev, "spi-slave"))
mode = SP7021_SLAVE_MODE;
mode = SP7021_TARGET_MODE;
else
mode = SP7021_MASTER_MODE;
mode = SP7021_HOST_MODE;
if (mode == SP7021_SLAVE_MODE)
ctlr = devm_spi_alloc_slave(dev, sizeof(*pspim));
if (mode == SP7021_TARGET_MODE)
ctlr = devm_spi_alloc_target(dev, sizeof(*pspim));
else
ctlr = devm_spi_alloc_master(dev, sizeof(*pspim));
ctlr = devm_spi_alloc_host(dev, sizeof(*pspim));
if (!ctlr)
return -ENOMEM;
device_set_node(&ctlr->dev, dev_fwnode(dev));
@ -424,9 +424,9 @@ static int sp7021_spi_controller_probe(struct platform_device *pdev)
ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
ctlr->auto_runtime_pm = true;
ctlr->prepare_message = sp7021_spi_controller_prepare_message;
if (mode == SP7021_SLAVE_MODE) {
ctlr->transfer_one = sp7021_spi_slave_transfer_one;
ctlr->slave_abort = sp7021_spi_slave_abort;
if (mode == SP7021_TARGET_MODE) {
ctlr->transfer_one = sp7021_spi_target_transfer_one;
ctlr->target_abort = sp7021_spi_target_abort;
ctlr->flags = SPI_CONTROLLER_HALF_DUPLEX;
} else {
ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
@ -434,7 +434,7 @@ static int sp7021_spi_controller_probe(struct platform_device *pdev)
ctlr->max_speed_hz = 25000000;
ctlr->use_gpio_descriptors = true;
ctlr->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX;
ctlr->transfer_one = sp7021_spi_master_transfer_one;
ctlr->transfer_one = sp7021_spi_host_transfer_one;
}
platform_set_drvdata(pdev, ctlr);
pspim = spi_controller_get_devdata(ctlr);
@ -443,7 +443,7 @@ static int sp7021_spi_controller_probe(struct platform_device *pdev)
pspim->dev = dev;
mutex_init(&pspim->buf_lock);
init_completion(&pspim->isr_done);
init_completion(&pspim->slave_isr);
init_completion(&pspim->target_isr);
pspim->m_base = devm_platform_ioremap_resource_byname(pdev, "master");
if (IS_ERR(pspim->m_base))
@ -485,12 +485,12 @@ static int sp7021_spi_controller_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = devm_request_irq(dev, pspim->m_irq, sp7021_spi_master_irq,
ret = devm_request_irq(dev, pspim->m_irq, sp7021_spi_host_irq,
IRQF_TRIGGER_RISING, pdev->name, pspim);
if (ret)
return ret;
ret = devm_request_irq(dev, pspim->s_irq, sp7021_spi_slave_irq,
ret = devm_request_irq(dev, pspim->s_irq, sp7021_spi_target_irq,
IRQF_TRIGGER_RISING, pdev->name, pspim);
if (ret)
return ret;
@ -499,7 +499,7 @@ static int sp7021_spi_controller_probe(struct platform_device *pdev)
ret = spi_register_controller(ctlr);
if (ret) {
pm_runtime_disable(dev);
return dev_err_probe(dev, ret, "spi_register_master fail\n");
return dev_err_probe(dev, ret, "spi_register_controller fail\n");
}
return 0;
}
@ -516,7 +516,7 @@ static void sp7021_spi_controller_remove(struct platform_device *pdev)
static int __maybe_unused sp7021_spi_controller_suspend(struct device *dev)
{
struct spi_controller *ctlr = dev_get_drvdata(dev);
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
return reset_control_assert(pspim->rstc);
}
@ -524,7 +524,7 @@ static int __maybe_unused sp7021_spi_controller_suspend(struct device *dev)
static int __maybe_unused sp7021_spi_controller_resume(struct device *dev)
{
struct spi_controller *ctlr = dev_get_drvdata(dev);
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
reset_control_deassert(pspim->rstc);
return clk_prepare_enable(pspim->spi_clk);
@ -534,7 +534,7 @@ static int __maybe_unused sp7021_spi_controller_resume(struct device *dev)
static int sp7021_spi_runtime_suspend(struct device *dev)
{
struct spi_controller *ctlr = dev_get_drvdata(dev);
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
return reset_control_assert(pspim->rstc);
}
@ -542,7 +542,7 @@ static int sp7021_spi_runtime_suspend(struct device *dev)
static int sp7021_spi_runtime_resume(struct device *dev)
{
struct spi_controller *ctlr = dev_get_drvdata(dev);
struct sp7021_spi_ctlr *pspim = spi_master_get_devdata(ctlr);
struct sp7021_spi_ctlr *pspim = spi_controller_get_devdata(ctlr);
return reset_control_deassert(pspim->rstc);
}
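The sp7021 hunks are typical of the tree-wide naming modernisation in this release: each master/slave helper has a controller/host/target counterpart with identical behaviour, so the conversion is mechanical. A hedged sketch of the probe-time choice, using a made-up mydrv driver, with the old names noted in comments:

static int mydrv_probe(struct platform_device *pdev)
{
        struct spi_controller *ctlr;
        struct mydrv *priv;

        if (device_property_read_bool(&pdev->dev, "spi-slave"))
                /* formerly devm_spi_alloc_slave() */
                ctlr = devm_spi_alloc_target(&pdev->dev, sizeof(*priv));
        else
                /* formerly devm_spi_alloc_master() */
                ctlr = devm_spi_alloc_host(&pdev->dev, sizeof(*priv));
        if (!ctlr)
                return -ENOMEM;

        /* formerly spi_master_get_devdata() */
        priv = spi_controller_get_devdata(ctlr);

        if (spi_controller_is_target(ctlr))
                /* formerly ->slave_abort; mydrv_target_abort is hypothetical */
                ctlr->target_abort = mydrv_target_abort;

        /* formerly devm_spi_register_master() */
        return devm_spi_register_controller(&pdev->dev, ctlr);
}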


@ -225,11 +225,11 @@ static int write_fifo(struct synquacer_spi *sspi)
return 0;
}
static int synquacer_spi_config(struct spi_master *master,
static int synquacer_spi_config(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct synquacer_spi *sspi = spi_master_get_devdata(master);
struct synquacer_spi *sspi = spi_controller_get_devdata(host);
unsigned int speed, mode, bpw, cs, bus_width, transfer_mode;
u32 rate, val, div;
@ -263,7 +263,7 @@ static int synquacer_spi_config(struct spi_master *master,
}
sspi->transfer_mode = transfer_mode;
rate = master->max_speed_hz;
rate = host->max_speed_hz;
div = DIV_ROUND_UP(rate, speed);
if (div > 254) {
@ -350,11 +350,11 @@ static int synquacer_spi_config(struct spi_master *master,
return 0;
}
static int synquacer_spi_transfer_one(struct spi_master *master,
static int synquacer_spi_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct synquacer_spi *sspi = spi_master_get_devdata(master);
struct synquacer_spi *sspi = spi_controller_get_devdata(host);
int ret;
int status = 0;
u32 words;
@ -378,7 +378,7 @@ static int synquacer_spi_transfer_one(struct spi_master *master,
if (bpw == 8 && !(xfer->len % 4) && !(spi->mode & SPI_LSB_FIRST))
xfer->bits_per_word = 32;
ret = synquacer_spi_config(master, spi, xfer);
ret = synquacer_spi_config(host, spi, xfer);
/* restore */
xfer->bits_per_word = bpw;
@ -482,7 +482,7 @@ static int synquacer_spi_transfer_one(struct spi_master *master,
static void synquacer_spi_set_cs(struct spi_device *spi, bool enable)
{
struct synquacer_spi *sspi = spi_master_get_devdata(spi->master);
struct synquacer_spi *sspi = spi_controller_get_devdata(spi->controller);
u32 val;
val = readl(sspi->regs + SYNQUACER_HSSPI_REG_DMSTART);
@ -517,11 +517,11 @@ static int synquacer_spi_wait_status_update(struct synquacer_spi *sspi,
return -EBUSY;
}
static int synquacer_spi_enable(struct spi_master *master)
static int synquacer_spi_enable(struct spi_controller *host)
{
u32 val;
int status;
struct synquacer_spi *sspi = spi_master_get_devdata(master);
struct synquacer_spi *sspi = spi_controller_get_devdata(host);
/* Disable module */
writel(0, sspi->regs + SYNQUACER_HSSPI_REG_MCTRL);
@ -601,18 +601,18 @@ static irqreturn_t sq_spi_tx_handler(int irq, void *priv)
static int synquacer_spi_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct spi_master *master;
struct spi_controller *host;
struct synquacer_spi *sspi;
int ret;
int rx_irq, tx_irq;
master = spi_alloc_master(&pdev->dev, sizeof(*sspi));
if (!master)
host = spi_alloc_host(&pdev->dev, sizeof(*sspi));
if (!host)
return -ENOMEM;
platform_set_drvdata(pdev, master);
platform_set_drvdata(pdev, host);
sspi = spi_master_get_devdata(master);
sspi = spi_controller_get_devdata(host);
sspi->dev = &pdev->dev;
init_completion(&sspi->transfer_done);
@ -625,7 +625,7 @@ static int synquacer_spi_probe(struct platform_device *pdev)
sspi->clk_src_type = SYNQUACER_HSSPI_CLOCK_SRC_IHCLK; /* Default */
device_property_read_u32(&pdev->dev, "socionext,ihclk-rate",
&master->max_speed_hz); /* for ACPI */
&host->max_speed_hz); /* for ACPI */
if (dev_of_node(&pdev->dev)) {
if (device_property_match_string(&pdev->dev,
@ -655,21 +655,21 @@ static int synquacer_spi_probe(struct platform_device *pdev)
goto put_spi;
}
master->max_speed_hz = clk_get_rate(sspi->clk);
host->max_speed_hz = clk_get_rate(sspi->clk);
}
if (!master->max_speed_hz) {
if (!host->max_speed_hz) {
dev_err(&pdev->dev, "missing clock source\n");
ret = -EINVAL;
goto disable_clk;
}
master->min_speed_hz = master->max_speed_hz / 254;
host->min_speed_hz = host->max_speed_hz / 254;
sspi->aces = device_property_read_bool(&pdev->dev,
"socionext,set-aces");
sspi->rtm = device_property_read_bool(&pdev->dev, "socionext,use-rtm");
master->num_chipselect = SYNQUACER_HSSPI_NUM_CHIP_SELECT;
host->num_chipselect = SYNQUACER_HSSPI_NUM_CHIP_SELECT;
rx_irq = platform_get_irq(pdev, 0);
if (rx_irq <= 0) {
@ -699,27 +699,27 @@ static int synquacer_spi_probe(struct platform_device *pdev)
goto disable_clk;
}
master->dev.of_node = np;
master->dev.fwnode = pdev->dev.fwnode;
master->auto_runtime_pm = true;
master->bus_num = pdev->id;
host->dev.of_node = np;
host->dev.fwnode = pdev->dev.fwnode;
host->auto_runtime_pm = true;
host->bus_num = pdev->id;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_TX_DUAL | SPI_RX_DUAL |
SPI_TX_QUAD | SPI_RX_QUAD;
master->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(24) |
SPI_BPW_MASK(16) | SPI_BPW_MASK(8);
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_TX_DUAL | SPI_RX_DUAL |
SPI_TX_QUAD | SPI_RX_QUAD;
host->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(24) |
SPI_BPW_MASK(16) | SPI_BPW_MASK(8);
master->set_cs = synquacer_spi_set_cs;
master->transfer_one = synquacer_spi_transfer_one;
host->set_cs = synquacer_spi_set_cs;
host->transfer_one = synquacer_spi_transfer_one;
ret = synquacer_spi_enable(master);
ret = synquacer_spi_enable(host);
if (ret)
goto disable_clk;
pm_runtime_set_active(sspi->dev);
pm_runtime_enable(sspi->dev);
ret = devm_spi_register_master(sspi->dev, master);
ret = devm_spi_register_controller(sspi->dev, host);
if (ret)
goto disable_pm;
@ -730,15 +730,15 @@ disable_pm:
disable_clk:
clk_disable_unprepare(sspi->clk);
put_spi:
spi_master_put(master);
spi_controller_put(host);
return ret;
}
static void synquacer_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct synquacer_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct synquacer_spi *sspi = spi_controller_get_devdata(host);
pm_runtime_disable(sspi->dev);
@ -747,11 +747,11 @@ static void synquacer_spi_remove(struct platform_device *pdev)
static int __maybe_unused synquacer_spi_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct synquacer_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct synquacer_spi *sspi = spi_controller_get_devdata(host);
int ret;
ret = spi_master_suspend(master);
ret = spi_controller_suspend(host);
if (ret)
return ret;
@ -763,8 +763,8 @@ static int __maybe_unused synquacer_spi_suspend(struct device *dev)
static int __maybe_unused synquacer_spi_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct synquacer_spi *sspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct synquacer_spi *sspi = spi_controller_get_devdata(host);
int ret;
if (!pm_runtime_suspended(dev)) {
@ -778,7 +778,7 @@ static int __maybe_unused synquacer_spi_resume(struct device *dev)
return ret;
}
ret = synquacer_spi_enable(master);
ret = synquacer_spi_enable(host);
if (ret) {
clk_disable_unprepare(sspi->clk);
dev_err(dev, "failed to enable spi (%d)\n", ret);
@ -786,7 +786,7 @@ static int __maybe_unused synquacer_spi_resume(struct device *dev)
}
}
ret = spi_master_resume(master);
ret = spi_controller_resume(host);
if (ret < 0)
clk_disable_unprepare(sspi->clk);
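
On the probe side the conversion is just as mechanical: spi_alloc_master() becomes spi_alloc_host(), devm_spi_register_master() becomes devm_spi_register_controller(), and the error path drops its reference with spi_controller_put(). A condensed sketch of that flow, with hypothetical foo_* names standing in for the SynQuacer specifics:

#include <linux/platform_device.h>
#include <linux/spi/spi.h>

struct foo_spi_priv {
        struct device *dev;
};

static int foo_spi_probe(struct platform_device *pdev)
{
        struct spi_controller *host;
        struct foo_spi_priv *priv;
        int ret;

        host = spi_alloc_host(&pdev->dev, sizeof(*priv));
        if (!host)
                return -ENOMEM;

        platform_set_drvdata(pdev, host);
        priv = spi_controller_get_devdata(host);
        priv->dev = &pdev->dev;

        host->dev.of_node = pdev->dev.of_node;
        host->mode_bits = SPI_CPOL | SPI_CPHA;
        host->bits_per_word_mask = SPI_BPW_MASK(8);
        /* set_cs, transfer_one, clocks and IRQs would be set up here */

        ret = devm_spi_register_controller(&pdev->dev, host);
        if (ret)
                goto put_host;

        return 0;

put_host:
        spi_controller_put(host);
        return ret;
}

Because registration here is device-managed, unregistration happens automatically on unbind; only driver-specific resources still need explicit teardown in remove.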


@ -164,7 +164,7 @@ struct tegra_spi_client_data {
struct tegra_spi_data {
struct device *dev;
struct spi_master *master;
struct spi_controller *host;
spinlock_t lock;
struct clk *clk;
@ -718,7 +718,7 @@ static void tegra_spi_deinit_dma_param(struct tegra_spi_data *tspi,
static int tegra_spi_set_hw_cs_timing(struct spi_device *spi)
{
struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_spi_data *tspi = spi_controller_get_devdata(spi->controller);
struct spi_delay *setup = &spi->cs_setup;
struct spi_delay *hold = &spi->cs_hold;
struct spi_delay *inactive = &spi->cs_inactive;
@ -772,7 +772,7 @@ static u32 tegra_spi_setup_transfer_one(struct spi_device *spi,
bool is_first_of_msg,
bool is_single_xfer)
{
struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_spi_data *tspi = spi_controller_get_devdata(spi->controller);
struct tegra_spi_client_data *cdata = spi->controller_data;
u32 speed = t->speed_hz;
u8 bits_per_word = t->bits_per_word;
@ -865,7 +865,7 @@ static u32 tegra_spi_setup_transfer_one(struct spi_device *spi,
static int tegra_spi_start_transfer_one(struct spi_device *spi,
struct spi_transfer *t, u32 command1)
{
struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_spi_data *tspi = spi_controller_get_devdata(spi->controller);
unsigned total_fifo_words;
int ret;
@ -912,10 +912,10 @@ static struct tegra_spi_client_data
*tegra_spi_parse_cdata_dt(struct spi_device *spi)
{
struct tegra_spi_client_data *cdata;
struct device_node *slave_np;
struct device_node *target_np;
slave_np = spi->dev.of_node;
if (!slave_np) {
target_np = spi->dev.of_node;
if (!target_np) {
dev_dbg(&spi->dev, "device node not found\n");
return NULL;
}
@ -924,9 +924,9 @@ static struct tegra_spi_client_data
if (!cdata)
return NULL;
of_property_read_u32(slave_np, "nvidia,tx-clk-tap-delay",
of_property_read_u32(target_np, "nvidia,tx-clk-tap-delay",
&cdata->tx_clk_tap_delay);
of_property_read_u32(slave_np, "nvidia,rx-clk-tap-delay",
of_property_read_u32(target_np, "nvidia,rx-clk-tap-delay",
&cdata->rx_clk_tap_delay);
return cdata;
}
@ -942,7 +942,7 @@ static void tegra_spi_cleanup(struct spi_device *spi)
static int tegra_spi_setup(struct spi_device *spi)
{
struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_spi_data *tspi = spi_controller_get_devdata(spi->controller);
struct tegra_spi_client_data *cdata = spi->controller_data;
u32 val;
unsigned long flags;
@ -993,7 +993,7 @@ static int tegra_spi_setup(struct spi_device *spi)
static void tegra_spi_transfer_end(struct spi_device *spi)
{
struct tegra_spi_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_spi_data *tspi = spi_controller_get_devdata(spi->controller);
int cs_val = (spi->mode & SPI_CS_HIGH) ? 0 : 1;
/* GPIO based chip select control */
@ -1025,11 +1025,11 @@ static void tegra_spi_dump_regs(struct tegra_spi_data *tspi)
tegra_spi_readl(tspi, SPI_FIFO_STATUS));
}
static int tegra_spi_transfer_one_message(struct spi_master *master,
static int tegra_spi_transfer_one_message(struct spi_controller *host,
struct spi_message *msg)
{
bool is_first_msg = true;
struct tegra_spi_data *tspi = spi_master_get_devdata(master);
struct tegra_spi_data *tspi = spi_controller_get_devdata(host);
struct spi_transfer *xfer;
struct spi_device *spi = msg->spi;
int ret;
@ -1078,7 +1078,7 @@ static int tegra_spi_transfer_one_message(struct spi_master *master,
reset_control_assert(tspi->rst);
udelay(2);
reset_control_deassert(tspi->rst);
tspi->last_used_cs = master->num_chipselect + 1;
tspi->last_used_cs = host->num_chipselect + 1;
goto complete_xfer;
}
@ -1112,7 +1112,7 @@ complete_xfer:
ret = 0;
exit:
msg->status = ret;
spi_finalize_current_message(master);
spi_finalize_current_message(host);
return ret;
}
@ -1293,40 +1293,40 @@ MODULE_DEVICE_TABLE(of, tegra_spi_of_match);
static int tegra_spi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct tegra_spi_data *tspi;
struct resource *r;
int ret, spi_irq;
int bus_num;
master = spi_alloc_master(&pdev->dev, sizeof(*tspi));
if (!master) {
dev_err(&pdev->dev, "master allocation failed\n");
host = spi_alloc_host(&pdev->dev, sizeof(*tspi));
if (!host) {
dev_err(&pdev->dev, "host allocation failed\n");
return -ENOMEM;
}
platform_set_drvdata(pdev, master);
tspi = spi_master_get_devdata(master);
platform_set_drvdata(pdev, host);
tspi = spi_controller_get_devdata(host);
if (of_property_read_u32(pdev->dev.of_node, "spi-max-frequency",
&master->max_speed_hz))
master->max_speed_hz = 25000000; /* 25MHz */
&host->max_speed_hz))
host->max_speed_hz = 25000000; /* 25MHz */
/* the spi->mode bits understood by this driver: */
master->use_gpio_descriptors = true;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST |
SPI_TX_DUAL | SPI_RX_DUAL | SPI_3WIRE;
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
master->setup = tegra_spi_setup;
master->cleanup = tegra_spi_cleanup;
master->transfer_one_message = tegra_spi_transfer_one_message;
master->set_cs_timing = tegra_spi_set_hw_cs_timing;
master->num_chipselect = MAX_CHIP_SELECT;
master->auto_runtime_pm = true;
host->use_gpio_descriptors = true;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST |
SPI_TX_DUAL | SPI_RX_DUAL | SPI_3WIRE;
host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
host->setup = tegra_spi_setup;
host->cleanup = tegra_spi_cleanup;
host->transfer_one_message = tegra_spi_transfer_one_message;
host->set_cs_timing = tegra_spi_set_hw_cs_timing;
host->num_chipselect = MAX_CHIP_SELECT;
host->auto_runtime_pm = true;
bus_num = of_alias_get_id(pdev->dev.of_node, "spi");
if (bus_num >= 0)
master->bus_num = bus_num;
host->bus_num = bus_num;
tspi->master = master;
tspi->host = host;
tspi->dev = &pdev->dev;
spin_lock_init(&tspi->lock);
@ -1334,20 +1334,20 @@ static int tegra_spi_probe(struct platform_device *pdev)
if (!tspi->soc_data) {
dev_err(&pdev->dev, "unsupported tegra\n");
ret = -ENODEV;
goto exit_free_master;
goto exit_free_host;
}
tspi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &r);
if (IS_ERR(tspi->base)) {
ret = PTR_ERR(tspi->base);
goto exit_free_master;
goto exit_free_host;
}
tspi->phys = r->start;
spi_irq = platform_get_irq(pdev, 0);
if (spi_irq < 0) {
ret = spi_irq;
goto exit_free_master;
goto exit_free_host;
}
tspi->irq = spi_irq;
@ -1355,14 +1355,14 @@ static int tegra_spi_probe(struct platform_device *pdev)
if (IS_ERR(tspi->clk)) {
dev_err(&pdev->dev, "can not get clock\n");
ret = PTR_ERR(tspi->clk);
goto exit_free_master;
goto exit_free_host;
}
tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
if (IS_ERR(tspi->rst)) {
dev_err(&pdev->dev, "can not get reset\n");
ret = PTR_ERR(tspi->rst);
goto exit_free_master;
goto exit_free_host;
}
tspi->max_buf_size = SPI_FIFO_DEPTH << 2;
@ -1370,7 +1370,7 @@ static int tegra_spi_probe(struct platform_device *pdev)
ret = tegra_spi_init_dma_param(tspi, true);
if (ret < 0)
goto exit_free_master;
goto exit_free_host;
ret = tegra_spi_init_dma_param(tspi, false);
if (ret < 0)
goto exit_rx_dma_free;
@ -1401,7 +1401,7 @@ static int tegra_spi_probe(struct platform_device *pdev)
tspi->spi_cs_timing1 = tegra_spi_readl(tspi, SPI_CS_TIMING1);
tspi->spi_cs_timing2 = tegra_spi_readl(tspi, SPI_CS_TIMING2);
tspi->def_command2_reg = tegra_spi_readl(tspi, SPI_COMMAND2);
tspi->last_used_cs = master->num_chipselect + 1;
tspi->last_used_cs = host->num_chipselect + 1;
pm_runtime_put(&pdev->dev);
ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
tegra_spi_isr_thread, IRQF_ONESHOT,
@ -1412,10 +1412,10 @@ static int tegra_spi_probe(struct platform_device *pdev)
goto exit_pm_disable;
}
master->dev.of_node = pdev->dev.of_node;
ret = devm_spi_register_master(&pdev->dev, master);
host->dev.of_node = pdev->dev.of_node;
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret < 0) {
dev_err(&pdev->dev, "can not register to master err %d\n", ret);
dev_err(&pdev->dev, "can not register to host err %d\n", ret);
goto exit_free_irq;
}
return ret;
@ -1429,15 +1429,15 @@ exit_pm_disable:
tegra_spi_deinit_dma_param(tspi, false);
exit_rx_dma_free:
tegra_spi_deinit_dma_param(tspi, true);
exit_free_master:
spi_master_put(master);
exit_free_host:
spi_controller_put(host);
return ret;
}
static void tegra_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct tegra_spi_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct tegra_spi_data *tspi = spi_controller_get_devdata(host);
free_irq(tspi->irq, tspi);
@ -1455,15 +1455,15 @@ static void tegra_spi_remove(struct platform_device *pdev)
#ifdef CONFIG_PM_SLEEP
static int tegra_spi_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_controller *host = dev_get_drvdata(dev);
return spi_master_suspend(master);
return spi_controller_suspend(host);
}
static int tegra_spi_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_spi_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_spi_data *tspi = spi_controller_get_devdata(host);
int ret;
ret = pm_runtime_resume_and_get(dev);
@ -1473,17 +1473,17 @@ static int tegra_spi_resume(struct device *dev)
}
tegra_spi_writel(tspi, tspi->command1_reg, SPI_COMMAND1);
tegra_spi_writel(tspi, tspi->def_command2_reg, SPI_COMMAND2);
tspi->last_used_cs = master->num_chipselect + 1;
tspi->last_used_cs = host->num_chipselect + 1;
pm_runtime_put(dev);
return spi_master_resume(master);
return spi_controller_resume(host);
}
#endif
static int tegra_spi_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_spi_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_spi_data *tspi = spi_controller_get_devdata(host);
/* Flush all write which are in PPSB queue by reading back */
tegra_spi_readl(tspi, SPI_COMMAND1);
@ -1494,8 +1494,8 @@ static int tegra_spi_runtime_suspend(struct device *dev)
static int tegra_spi_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_spi_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_spi_data *tspi = spi_controller_get_devdata(host);
int ret;
ret = clk_prepare_enable(tspi->clk);
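
System sleep support follows suit: spi_master_suspend()/spi_master_resume() become spi_controller_suspend()/spi_controller_resume(), again keyed off the controller stored in drvdata. A minimal sketch (hypothetical foo_* names, not the Tegra code):

#include <linux/spi/spi.h>

static int foo_spi_suspend(struct device *dev)
{
        struct spi_controller *host = dev_get_drvdata(dev);

        /* Quiesce the message queue before the hardware loses state. */
        return spi_controller_suspend(host);
}

static int foo_spi_resume(struct device *dev)
{
        struct spi_controller *host = dev_get_drvdata(dev);

        /* Restart queue processing once register state has been restored. */
        return spi_controller_resume(host);
}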


@ -102,7 +102,7 @@
struct tegra_sflash_data {
struct device *dev;
struct spi_master *master;
struct spi_controller *host;
spinlock_t lock;
struct clk *clk;
@ -251,7 +251,7 @@ static int tegra_sflash_start_transfer_one(struct spi_device *spi,
struct spi_transfer *t, bool is_first_of_msg,
bool is_single_xfer)
{
struct tegra_sflash_data *tsd = spi_master_get_devdata(spi->master);
struct tegra_sflash_data *tsd = spi_controller_get_devdata(spi->controller);
u32 speed;
u32 command;
@ -303,12 +303,12 @@ static int tegra_sflash_start_transfer_one(struct spi_device *spi,
return tegra_sflash_start_cpu_based_transfer(tsd, t);
}
static int tegra_sflash_transfer_one_message(struct spi_master *master,
static int tegra_sflash_transfer_one_message(struct spi_controller *host,
struct spi_message *msg)
{
bool is_first_msg = true;
int single_xfer;
struct tegra_sflash_data *tsd = spi_master_get_devdata(master);
struct tegra_sflash_data *tsd = spi_controller_get_devdata(host);
struct spi_transfer *xfer;
struct spi_device *spi = msg->spi;
int ret;
@ -351,7 +351,7 @@ static int tegra_sflash_transfer_one_message(struct spi_master *master,
exit:
tegra_sflash_writel(tsd, tsd->def_command_reg, SPI_COMMAND);
msg->status = ret;
spi_finalize_current_message(master);
spi_finalize_current_message(host);
return ret;
}
@ -416,7 +416,7 @@ MODULE_DEVICE_TABLE(of, tegra_sflash_of_match);
static int tegra_sflash_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct tegra_sflash_data *tsd;
int ret;
const struct of_device_id *match;
@ -427,37 +427,37 @@ static int tegra_sflash_probe(struct platform_device *pdev)
return -ENODEV;
}
master = spi_alloc_master(&pdev->dev, sizeof(*tsd));
if (!master) {
dev_err(&pdev->dev, "master allocation failed\n");
host = spi_alloc_host(&pdev->dev, sizeof(*tsd));
if (!host) {
dev_err(&pdev->dev, "host allocation failed\n");
return -ENOMEM;
}
/* the spi->mode bits understood by this driver: */
master->mode_bits = SPI_CPOL | SPI_CPHA;
master->transfer_one_message = tegra_sflash_transfer_one_message;
master->auto_runtime_pm = true;
master->num_chipselect = MAX_CHIP_SELECT;
host->mode_bits = SPI_CPOL | SPI_CPHA;
host->transfer_one_message = tegra_sflash_transfer_one_message;
host->auto_runtime_pm = true;
host->num_chipselect = MAX_CHIP_SELECT;
platform_set_drvdata(pdev, master);
tsd = spi_master_get_devdata(master);
tsd->master = master;
platform_set_drvdata(pdev, host);
tsd = spi_controller_get_devdata(host);
tsd->host = host;
tsd->dev = &pdev->dev;
spin_lock_init(&tsd->lock);
if (of_property_read_u32(tsd->dev->of_node, "spi-max-frequency",
&master->max_speed_hz))
master->max_speed_hz = 25000000; /* 25MHz */
&host->max_speed_hz))
host->max_speed_hz = 25000000; /* 25MHz */
tsd->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(tsd->base)) {
ret = PTR_ERR(tsd->base);
goto exit_free_master;
goto exit_free_host;
}
ret = platform_get_irq(pdev, 0);
if (ret < 0)
goto exit_free_master;
goto exit_free_host;
tsd->irq = ret;
ret = request_irq(tsd->irq, tegra_sflash_isr, 0,
@ -465,7 +465,7 @@ static int tegra_sflash_probe(struct platform_device *pdev)
if (ret < 0) {
dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
tsd->irq);
goto exit_free_master;
goto exit_free_host;
}
tsd->clk = devm_clk_get(&pdev->dev, NULL);
@ -505,10 +505,10 @@ static int tegra_sflash_probe(struct platform_device *pdev)
tegra_sflash_writel(tsd, tsd->def_command_reg, SPI_COMMAND);
pm_runtime_put(&pdev->dev);
master->dev.of_node = pdev->dev.of_node;
ret = devm_spi_register_master(&pdev->dev, master);
host->dev.of_node = pdev->dev.of_node;
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret < 0) {
dev_err(&pdev->dev, "can not register to master err %d\n", ret);
dev_err(&pdev->dev, "can not register to host err %d\n", ret);
goto exit_pm_disable;
}
return ret;
@ -519,15 +519,15 @@ exit_pm_disable:
tegra_sflash_runtime_suspend(&pdev->dev);
exit_free_irq:
free_irq(tsd->irq, tsd);
exit_free_master:
spi_master_put(master);
exit_free_host:
spi_controller_put(host);
return ret;
}
static void tegra_sflash_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct tegra_sflash_data *tsd = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct tegra_sflash_data *tsd = spi_controller_get_devdata(host);
free_irq(tsd->irq, tsd);
@ -539,15 +539,15 @@ static void tegra_sflash_remove(struct platform_device *pdev)
#ifdef CONFIG_PM_SLEEP
static int tegra_sflash_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_controller *host = dev_get_drvdata(dev);
return spi_master_suspend(master);
return spi_controller_suspend(host);
}
static int tegra_sflash_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_sflash_data *tsd = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_sflash_data *tsd = spi_controller_get_devdata(host);
int ret;
ret = pm_runtime_resume_and_get(dev);
@ -558,14 +558,14 @@ static int tegra_sflash_resume(struct device *dev)
tegra_sflash_writel(tsd, tsd->command_reg, SPI_COMMAND);
pm_runtime_put(dev);
return spi_master_resume(master);
return spi_controller_resume(host);
}
#endif
static int tegra_sflash_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_sflash_data *tsd = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_sflash_data *tsd = spi_controller_get_devdata(host);
/* Flush all write which are in PPSB queue by reading back */
tegra_sflash_readl(tsd, SPI_COMMAND);
@ -576,8 +576,8 @@ static int tegra_sflash_runtime_suspend(struct device *dev)
static int tegra_sflash_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_sflash_data *tsd = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_sflash_data *tsd = spi_controller_get_devdata(host);
int ret;
ret = clk_prepare_enable(tsd->clk);
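
The queue-facing callbacks only swap the type of their first argument from struct spi_master * to struct spi_controller *, and spi_finalize_current_message() is handed the renamed pointer. A sketch of a transfer_one_message() implementation in the new style; foo_do_one_xfer() and the priv structure are placeholders, not the Tegra SFLASH code:

#include <linux/spi/spi.h>

struct foo_spi_priv {
        void __iomem *regs;
};

static int foo_do_one_xfer(struct foo_spi_priv *priv, struct spi_device *spi,
                           struct spi_transfer *xfer)
{
        /* Program the hardware for a single transfer; elided in this sketch. */
        return 0;
}

static int foo_transfer_one_message(struct spi_controller *host,
                                    struct spi_message *msg)
{
        struct foo_spi_priv *priv = spi_controller_get_devdata(host);
        struct spi_transfer *xfer;
        int ret = 0;

        list_for_each_entry(xfer, &msg->transfers, transfer_list) {
                ret = foo_do_one_xfer(priv, msg->spi, xfer);
                if (ret)
                        break;
                msg->actual_length += xfer->len;
        }

        msg->status = ret;
        spi_finalize_current_message(host);
        return ret;
}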


@ -152,7 +152,7 @@ struct tegra_slink_chip_data {
struct tegra_slink_data {
struct device *dev;
struct spi_master *master;
struct spi_controller *host;
const struct tegra_slink_chip_data *chip_data;
spinlock_t lock;
@ -671,7 +671,7 @@ static void tegra_slink_deinit_dma_param(struct tegra_slink_data *tspi,
static int tegra_slink_start_transfer_one(struct spi_device *spi,
struct spi_transfer *t)
{
struct tegra_slink_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_slink_data *tspi = spi_controller_get_devdata(spi->controller);
u32 speed;
u8 bits_per_word;
unsigned total_fifo_words;
@ -737,7 +737,7 @@ static int tegra_slink_setup(struct spi_device *spi)
SLINK_CS_POLARITY3,
};
struct tegra_slink_data *tspi = spi_master_get_devdata(spi->master);
struct tegra_slink_data *tspi = spi_controller_get_devdata(spi->controller);
u32 val;
unsigned long flags;
int ret;
@ -768,10 +768,10 @@ static int tegra_slink_setup(struct spi_device *spi)
return 0;
}
static int tegra_slink_prepare_message(struct spi_master *master,
static int tegra_slink_prepare_message(struct spi_controller *host,
struct spi_message *msg)
{
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
struct spi_device *spi = msg->spi;
tegra_slink_clear_status(tspi);
@ -794,11 +794,11 @@ static int tegra_slink_prepare_message(struct spi_master *master,
return 0;
}
static int tegra_slink_transfer_one(struct spi_master *master,
static int tegra_slink_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *xfer)
{
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
int ret;
reinit_completion(&tspi->xfer_completion);
@ -825,10 +825,10 @@ static int tegra_slink_transfer_one(struct spi_master *master,
return 0;
}
static int tegra_slink_unprepare_message(struct spi_master *master,
static int tegra_slink_unprepare_message(struct spi_controller *host,
struct spi_message *msg)
{
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
tegra_slink_writel(tspi, tspi->def_command_reg, SLINK_COMMAND);
tegra_slink_writel(tspi, tspi->def_command2_reg, SLINK_COMMAND2);
@ -999,7 +999,7 @@ MODULE_DEVICE_TABLE(of, tegra_slink_of_match);
static int tegra_slink_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct tegra_slink_data *tspi;
struct resource *r;
int ret, spi_irq;
@ -1007,36 +1007,36 @@ static int tegra_slink_probe(struct platform_device *pdev)
cdata = of_device_get_match_data(&pdev->dev);
master = spi_alloc_master(&pdev->dev, sizeof(*tspi));
if (!master) {
dev_err(&pdev->dev, "master allocation failed\n");
host = spi_alloc_host(&pdev->dev, sizeof(*tspi));
if (!host) {
dev_err(&pdev->dev, "host allocation failed\n");
return -ENOMEM;
}
/* the spi->mode bits understood by this driver: */
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
master->setup = tegra_slink_setup;
master->prepare_message = tegra_slink_prepare_message;
master->transfer_one = tegra_slink_transfer_one;
master->unprepare_message = tegra_slink_unprepare_message;
master->auto_runtime_pm = true;
master->num_chipselect = MAX_CHIP_SELECT;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
host->setup = tegra_slink_setup;
host->prepare_message = tegra_slink_prepare_message;
host->transfer_one = tegra_slink_transfer_one;
host->unprepare_message = tegra_slink_unprepare_message;
host->auto_runtime_pm = true;
host->num_chipselect = MAX_CHIP_SELECT;
platform_set_drvdata(pdev, master);
tspi = spi_master_get_devdata(master);
tspi->master = master;
platform_set_drvdata(pdev, host);
tspi = spi_controller_get_devdata(host);
tspi->host = host;
tspi->dev = &pdev->dev;
tspi->chip_data = cdata;
spin_lock_init(&tspi->lock);
if (of_property_read_u32(tspi->dev->of_node, "spi-max-frequency",
&master->max_speed_hz))
master->max_speed_hz = 25000000; /* 25MHz */
&host->max_speed_hz))
host->max_speed_hz = 25000000; /* 25MHz */
tspi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &r);
if (IS_ERR(tspi->base)) {
ret = PTR_ERR(tspi->base);
goto exit_free_master;
goto exit_free_host;
}
tspi->phys = r->start;
@ -1045,26 +1045,26 @@ static int tegra_slink_probe(struct platform_device *pdev)
if (IS_ERR(tspi->clk)) {
ret = PTR_ERR(tspi->clk);
dev_err(&pdev->dev, "Can not get clock %d\n", ret);
goto exit_free_master;
goto exit_free_host;
}
tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
if (IS_ERR(tspi->rst)) {
dev_err(&pdev->dev, "can not get reset\n");
ret = PTR_ERR(tspi->rst);
goto exit_free_master;
goto exit_free_host;
}
ret = devm_tegra_core_dev_init_opp_table_common(&pdev->dev);
if (ret)
goto exit_free_master;
goto exit_free_host;
tspi->max_buf_size = SLINK_FIFO_DEPTH << 2;
tspi->dma_buf_size = DEFAULT_SPI_DMA_BUF_LEN;
ret = tegra_slink_init_dma_param(tspi, true);
if (ret < 0)
goto exit_free_master;
goto exit_free_host;
ret = tegra_slink_init_dma_param(tspi, false);
if (ret < 0)
goto exit_rx_dma_free;
@ -1103,10 +1103,10 @@ static int tegra_slink_probe(struct platform_device *pdev)
tegra_slink_writel(tspi, tspi->def_command_reg, SLINK_COMMAND);
tegra_slink_writel(tspi, tspi->def_command2_reg, SLINK_COMMAND2);
master->dev.of_node = pdev->dev.of_node;
ret = spi_register_master(master);
host->dev.of_node = pdev->dev.of_node;
ret = spi_register_controller(host);
if (ret < 0) {
dev_err(&pdev->dev, "can not register to master err %d\n", ret);
dev_err(&pdev->dev, "can not register to host err %d\n", ret);
goto exit_free_irq;
}
@ -1124,17 +1124,17 @@ exit_pm_disable:
tegra_slink_deinit_dma_param(tspi, false);
exit_rx_dma_free:
tegra_slink_deinit_dma_param(tspi, true);
exit_free_master:
spi_master_put(master);
exit_free_host:
spi_controller_put(host);
return ret;
}
static void tegra_slink_remove(struct platform_device *pdev)
{
struct spi_master *master = spi_master_get(platform_get_drvdata(pdev));
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = spi_controller_get(platform_get_drvdata(pdev));
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
spi_unregister_master(master);
spi_unregister_controller(host);
free_irq(tspi->irq, tspi);
@ -1146,21 +1146,21 @@ static void tegra_slink_remove(struct platform_device *pdev)
if (tspi->rx_dma_chan)
tegra_slink_deinit_dma_param(tspi, true);
spi_master_put(master);
spi_controller_put(host);
}
#ifdef CONFIG_PM_SLEEP
static int tegra_slink_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_controller *host = dev_get_drvdata(dev);
return spi_master_suspend(master);
return spi_controller_suspend(host);
}
static int tegra_slink_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
int ret;
ret = pm_runtime_resume_and_get(dev);
@ -1172,14 +1172,14 @@ static int tegra_slink_resume(struct device *dev)
tegra_slink_writel(tspi, tspi->command2_reg, SLINK_COMMAND2);
pm_runtime_put(dev);
return spi_master_resume(master);
return spi_controller_resume(host);
}
#endif
static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
/* Flush all write which are in PPSB queue by reading back */
tegra_slink_readl(tspi, SLINK_MAS_DATA);
@ -1190,8 +1190,8 @@ static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev)
static int __maybe_unused tegra_slink_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_slink_data *tspi = spi_controller_get_devdata(host);
int ret;
ret = clk_prepare_enable(tspi->clk);
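
Where the controller is registered without devm (as the SLINK driver above does), the remove path takes an extra reference before unregistering and drops it once teardown is finished, mirroring the old spi_master_get()/spi_master_put() pairing. A sketch with hypothetical names:

#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>

struct foo_spi_priv {
        int irq;
};

static void foo_spi_remove(struct platform_device *pdev)
{
        struct spi_controller *host =
                spi_controller_get(platform_get_drvdata(pdev));
        struct foo_spi_priv *priv = spi_controller_get_devdata(host);

        /* Stop the core from queueing new messages before tearing down. */
        spi_unregister_controller(host);

        free_irq(priv->irq, priv);

        /* Drop the reference taken above now that priv is no longer used. */
        spi_controller_put(host);
}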


@ -175,7 +175,7 @@ struct tegra_qspi_client_data {
struct tegra_qspi {
struct device *dev;
struct spi_master *master;
struct spi_controller *host;
/* lock to protect data accessed by irq */
spinlock_t lock;
@ -809,7 +809,7 @@ err_out:
static u32 tegra_qspi_setup_transfer_one(struct spi_device *spi, struct spi_transfer *t,
bool is_first_of_msg)
{
struct tegra_qspi *tqspi = spi_master_get_devdata(spi->master);
struct tegra_qspi *tqspi = spi_controller_get_devdata(spi->controller);
struct tegra_qspi_client_data *cdata = spi->controller_data;
u32 command1, command2, speed = t->speed_hz;
u8 bits_per_word = t->bits_per_word;
@ -870,7 +870,7 @@ static u32 tegra_qspi_setup_transfer_one(struct spi_device *spi, struct spi_tran
static int tegra_qspi_start_transfer_one(struct spi_device *spi,
struct spi_transfer *t, u32 command1)
{
struct tegra_qspi *tqspi = spi_master_get_devdata(spi->master);
struct tegra_qspi *tqspi = spi_controller_get_devdata(spi->controller);
unsigned int total_fifo_words;
u8 bus_width = 0;
int ret;
@ -925,7 +925,7 @@ static int tegra_qspi_start_transfer_one(struct spi_device *spi,
static struct tegra_qspi_client_data *tegra_qspi_parse_cdata_dt(struct spi_device *spi)
{
struct tegra_qspi_client_data *cdata;
struct tegra_qspi *tqspi = spi_master_get_devdata(spi->master);
struct tegra_qspi *tqspi = spi_controller_get_devdata(spi->controller);
cdata = devm_kzalloc(tqspi->dev, sizeof(*cdata), GFP_KERNEL);
if (!cdata)
@ -941,7 +941,7 @@ static struct tegra_qspi_client_data *tegra_qspi_parse_cdata_dt(struct spi_devic
static int tegra_qspi_setup(struct spi_device *spi)
{
struct tegra_qspi *tqspi = spi_master_get_devdata(spi->master);
struct tegra_qspi *tqspi = spi_controller_get_devdata(spi->controller);
struct tegra_qspi_client_data *cdata = spi->controller_data;
unsigned long flags;
u32 val;
@ -1005,7 +1005,7 @@ static void tegra_qspi_handle_error(struct tegra_qspi *tqspi)
static void tegra_qspi_transfer_end(struct spi_device *spi)
{
struct tegra_qspi *tqspi = spi_master_get_devdata(spi->master);
struct tegra_qspi *tqspi = spi_controller_get_devdata(spi->controller);
int cs_val = (spi->mode & SPI_CS_HIGH) ? 0 : 1;
if (cs_val)
@ -1316,10 +1316,10 @@ static bool tegra_qspi_validate_cmb_seq(struct tegra_qspi *tqspi,
return true;
}
static int tegra_qspi_transfer_one_message(struct spi_master *master,
static int tegra_qspi_transfer_one_message(struct spi_controller *host,
struct spi_message *msg)
{
struct tegra_qspi *tqspi = spi_master_get_devdata(master);
struct tegra_qspi *tqspi = spi_controller_get_devdata(host);
int ret;
if (tegra_qspi_validate_cmb_seq(tqspi, msg))
@ -1327,7 +1327,7 @@ static int tegra_qspi_transfer_one_message(struct spi_master *master,
else
ret = tegra_qspi_non_combined_seq_xfer(tqspi, msg);
spi_finalize_current_message(master);
spi_finalize_current_message(host);
return ret;
}
@ -1533,38 +1533,38 @@ MODULE_DEVICE_TABLE(acpi, tegra_qspi_acpi_match);
static int tegra_qspi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct tegra_qspi *tqspi;
struct resource *r;
int ret, qspi_irq;
int bus_num;
master = devm_spi_alloc_master(&pdev->dev, sizeof(*tqspi));
if (!master)
host = devm_spi_alloc_host(&pdev->dev, sizeof(*tqspi));
if (!host)
return -ENOMEM;
platform_set_drvdata(pdev, master);
tqspi = spi_master_get_devdata(master);
platform_set_drvdata(pdev, host);
tqspi = spi_controller_get_devdata(host);
master->mode_bits = SPI_MODE_0 | SPI_MODE_3 | SPI_CS_HIGH |
SPI_TX_DUAL | SPI_RX_DUAL | SPI_TX_QUAD | SPI_RX_QUAD;
master->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(16) | SPI_BPW_MASK(8);
master->flags = SPI_CONTROLLER_HALF_DUPLEX;
master->setup = tegra_qspi_setup;
master->transfer_one_message = tegra_qspi_transfer_one_message;
master->num_chipselect = 1;
master->auto_runtime_pm = true;
host->mode_bits = SPI_MODE_0 | SPI_MODE_3 | SPI_CS_HIGH |
SPI_TX_DUAL | SPI_RX_DUAL | SPI_TX_QUAD | SPI_RX_QUAD;
host->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(16) | SPI_BPW_MASK(8);
host->flags = SPI_CONTROLLER_HALF_DUPLEX;
host->setup = tegra_qspi_setup;
host->transfer_one_message = tegra_qspi_transfer_one_message;
host->num_chipselect = 1;
host->auto_runtime_pm = true;
bus_num = of_alias_get_id(pdev->dev.of_node, "spi");
if (bus_num >= 0)
master->bus_num = bus_num;
host->bus_num = bus_num;
tqspi->master = master;
tqspi->host = host;
tqspi->dev = &pdev->dev;
spin_lock_init(&tqspi->lock);
tqspi->soc_data = device_get_match_data(&pdev->dev);
master->num_chipselect = tqspi->soc_data->cs_count;
host->num_chipselect = tqspi->soc_data->cs_count;
tqspi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &r);
if (IS_ERR(tqspi->base))
return PTR_ERR(tqspi->base);
@ -1625,10 +1625,10 @@ static int tegra_qspi_probe(struct platform_device *pdev)
goto exit_pm_disable;
}
master->dev.of_node = pdev->dev.of_node;
ret = spi_register_master(master);
host->dev.of_node = pdev->dev.of_node;
ret = spi_register_controller(host);
if (ret < 0) {
dev_err(&pdev->dev, "failed to register master: %d\n", ret);
dev_err(&pdev->dev, "failed to register host: %d\n", ret);
goto exit_free_irq;
}
@ -1644,10 +1644,10 @@ exit_pm_disable:
static void tegra_qspi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct tegra_qspi *tqspi = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct tegra_qspi *tqspi = spi_controller_get_devdata(host);
spi_unregister_master(master);
spi_unregister_controller(host);
free_irq(tqspi->irq, tqspi);
pm_runtime_force_suspend(&pdev->dev);
tegra_qspi_deinit_dma(tqspi);
@ -1655,15 +1655,15 @@ static void tegra_qspi_remove(struct platform_device *pdev)
static int __maybe_unused tegra_qspi_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct spi_controller *host = dev_get_drvdata(dev);
return spi_master_suspend(master);
return spi_controller_suspend(host);
}
static int __maybe_unused tegra_qspi_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_qspi *tqspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_qspi *tqspi = spi_controller_get_devdata(host);
int ret;
ret = pm_runtime_resume_and_get(dev);
@ -1676,13 +1676,13 @@ static int __maybe_unused tegra_qspi_resume(struct device *dev)
tegra_qspi_writel(tqspi, tqspi->def_command2_reg, QSPI_COMMAND2);
pm_runtime_put(dev);
return spi_master_resume(master);
return spi_controller_resume(host);
}
static int __maybe_unused tegra_qspi_runtime_suspend(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_qspi *tqspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_qspi *tqspi = spi_controller_get_devdata(host);
/* Runtime pm disabled with ACPI */
if (has_acpi_companion(tqspi->dev))
@ -1697,8 +1697,8 @@ static int __maybe_unused tegra_qspi_runtime_suspend(struct device *dev)
static int __maybe_unused tegra_qspi_runtime_resume(struct device *dev)
{
struct spi_master *master = dev_get_drvdata(dev);
struct tegra_qspi *tqspi = spi_master_get_devdata(master);
struct spi_controller *host = dev_get_drvdata(dev);
struct tegra_qspi *tqspi = spi_controller_get_devdata(host);
int ret;
/* Runtime pm disabled with ACPI */
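
The QSPI driver uses the managed allocator, which is the other common variant: with devm_spi_alloc_host() the allocation is tied to the device's lifetime, so probe error paths no longer need an explicit spi_controller_put(). A short sketch (hypothetical names; the real driver sets many more fields):

#include <linux/platform_device.h>
#include <linux/spi/spi.h>

struct foo_qspi_priv {
        struct device *dev;
};

static int foo_qspi_probe(struct platform_device *pdev)
{
        struct spi_controller *host;
        struct foo_qspi_priv *priv;

        /* Freed automatically if probe fails or the device is unbound. */
        host = devm_spi_alloc_host(&pdev->dev, sizeof(*priv));
        if (!host)
                return -ENOMEM;

        platform_set_drvdata(pdev, host);
        priv = spi_controller_get_devdata(host);
        priv->dev = &pdev->dev;

        host->flags = SPI_CONTROLLER_HALF_DUPLEX;
        host->num_chipselect = 1;

        /* Registered without devm here, so remove must still call
         * spi_unregister_controller(). */
        return spi_register_controller(host);
}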


@ -40,7 +40,7 @@ struct ti_qspi {
/* list synchronization */
struct mutex list_lock;
struct spi_master *master;
struct spi_controller *host;
void __iomem *base;
void __iomem *mmap_base;
size_t mmap_size;
@ -137,20 +137,20 @@ static inline void ti_qspi_write(struct ti_qspi *qspi,
static int ti_qspi_setup(struct spi_device *spi)
{
struct ti_qspi *qspi = spi_master_get_devdata(spi->master);
struct ti_qspi *qspi = spi_controller_get_devdata(spi->controller);
int ret;
if (spi->master->busy) {
dev_dbg(qspi->dev, "master busy doing other transfers\n");
if (spi->controller->busy) {
dev_dbg(qspi->dev, "host busy doing other transfers\n");
return -EBUSY;
}
if (!qspi->master->max_speed_hz) {
if (!qspi->host->max_speed_hz) {
dev_err(qspi->dev, "spi max frequency not defined\n");
return -EINVAL;
}
spi->max_speed_hz = min(spi->max_speed_hz, qspi->master->max_speed_hz);
spi->max_speed_hz = min(spi->max_speed_hz, qspi->host->max_speed_hz);
ret = pm_runtime_resume_and_get(qspi->dev);
if (ret < 0) {
@ -526,7 +526,7 @@ static int ti_qspi_dma_xfer_sg(struct ti_qspi *qspi, struct sg_table rx_sg,
static void ti_qspi_enable_memory_map(struct spi_device *spi)
{
struct ti_qspi *qspi = spi_master_get_devdata(spi->master);
struct ti_qspi *qspi = spi_controller_get_devdata(spi->controller);
ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG);
if (qspi->ctrl_base) {
@ -540,7 +540,7 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi)
static void ti_qspi_disable_memory_map(struct spi_device *spi)
{
struct ti_qspi *qspi = spi_master_get_devdata(spi->master);
struct ti_qspi *qspi = spi_controller_get_devdata(spi->controller);
ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG);
if (qspi->ctrl_base)
@ -554,7 +554,7 @@ static void ti_qspi_setup_mmap_read(struct spi_device *spi, u8 opcode,
u8 data_nbits, u8 addr_width,
u8 dummy_bytes)
{
struct ti_qspi *qspi = spi_master_get_devdata(spi->master);
struct ti_qspi *qspi = spi_controller_get_devdata(spi->controller);
u32 memval = opcode;
switch (data_nbits) {
@ -576,7 +576,7 @@ static void ti_qspi_setup_mmap_read(struct spi_device *spi, u8 opcode,
static int ti_qspi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
{
struct ti_qspi *qspi = spi_controller_get_devdata(mem->spi->master);
struct ti_qspi *qspi = spi_controller_get_devdata(mem->spi->controller);
size_t max_len;
if (op->data.dir == SPI_MEM_DATA_IN) {
@ -606,19 +606,19 @@ static int ti_qspi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
static int ti_qspi_exec_mem_op(struct spi_mem *mem,
const struct spi_mem_op *op)
{
struct ti_qspi *qspi = spi_master_get_devdata(mem->spi->master);
struct ti_qspi *qspi = spi_controller_get_devdata(mem->spi->controller);
u32 from = 0;
int ret = 0;
/* Only optimize read path. */
if (!op->data.nbytes || op->data.dir != SPI_MEM_DATA_IN ||
!op->addr.nbytes || op->addr.nbytes > 4)
return -ENOTSUPP;
return -EOPNOTSUPP;
/* Address exceeds MMIO window size, fall back to regular mode. */
from = op->addr.val;
if (from + op->data.nbytes > qspi->mmap_size)
return -ENOTSUPP;
return -EOPNOTSUPP;
mutex_lock(&qspi->list_lock);
@ -633,10 +633,10 @@ static int ti_qspi_exec_mem_op(struct spi_mem *mem,
struct sg_table sgt;
if (virt_addr_valid(op->data.buf.in) &&
!spi_controller_dma_map_mem_op_data(mem->spi->master, op,
!spi_controller_dma_map_mem_op_data(mem->spi->controller, op,
&sgt)) {
ret = ti_qspi_dma_xfer_sg(qspi, sgt, from);
spi_controller_dma_unmap_mem_op_data(mem->spi->master,
spi_controller_dma_unmap_mem_op_data(mem->spi->controller,
op, &sgt);
} else {
ret = ti_qspi_dma_bounce_buffer(qspi, from,
@ -658,10 +658,10 @@ static const struct spi_controller_mem_ops ti_qspi_mem_ops = {
.adjust_op_size = ti_qspi_adjust_op_size,
};
static int ti_qspi_start_transfer_one(struct spi_master *master,
static int ti_qspi_start_transfer_one(struct spi_controller *host,
struct spi_message *m)
{
struct ti_qspi *qspi = spi_master_get_devdata(master);
struct ti_qspi *qspi = spi_controller_get_devdata(host);
struct spi_device *spi = m->spi;
struct spi_transfer *t;
int status = 0, ret;
@ -720,7 +720,7 @@ static int ti_qspi_start_transfer_one(struct spi_master *master,
ti_qspi_write(qspi, qspi->cmd | QSPI_INVAL, QSPI_SPI_CMD_REG);
m->status = status;
spi_finalize_current_message(master);
spi_finalize_current_message(host);
return status;
}
@ -756,33 +756,33 @@ MODULE_DEVICE_TABLE(of, ti_qspi_match);
static int ti_qspi_probe(struct platform_device *pdev)
{
struct ti_qspi *qspi;
struct spi_master *master;
struct spi_controller *host;
struct resource *r, *res_mmap;
struct device_node *np = pdev->dev.of_node;
u32 max_freq;
int ret = 0, num_cs, irq;
dma_cap_mask_t mask;
master = spi_alloc_master(&pdev->dev, sizeof(*qspi));
if (!master)
host = spi_alloc_host(&pdev->dev, sizeof(*qspi));
if (!host)
return -ENOMEM;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD;
master->flags = SPI_CONTROLLER_HALF_DUPLEX;
master->setup = ti_qspi_setup;
master->auto_runtime_pm = true;
master->transfer_one_message = ti_qspi_start_transfer_one;
master->dev.of_node = pdev->dev.of_node;
master->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(16) |
SPI_BPW_MASK(8);
master->mem_ops = &ti_qspi_mem_ops;
host->flags = SPI_CONTROLLER_HALF_DUPLEX;
host->setup = ti_qspi_setup;
host->auto_runtime_pm = true;
host->transfer_one_message = ti_qspi_start_transfer_one;
host->dev.of_node = pdev->dev.of_node;
host->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(16) |
SPI_BPW_MASK(8);
host->mem_ops = &ti_qspi_mem_ops;
if (!of_property_read_u32(np, "num-cs", &num_cs))
master->num_chipselect = num_cs;
host->num_chipselect = num_cs;
qspi = spi_master_get_devdata(master);
qspi->master = master;
qspi = spi_controller_get_devdata(host);
qspi->host = host;
qspi->dev = &pdev->dev;
platform_set_drvdata(pdev, qspi);
@ -792,7 +792,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
if (r == NULL) {
dev_err(&pdev->dev, "missing platform data\n");
ret = -ENODEV;
goto free_master;
goto free_host;
}
}
@ -812,7 +812,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
ret = irq;
goto free_master;
goto free_host;
}
mutex_init(&qspi->list_lock);
@ -820,7 +820,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
qspi->base = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(qspi->base)) {
ret = PTR_ERR(qspi->base);
goto free_master;
goto free_host;
}
@ -830,7 +830,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
"syscon-chipselects");
if (IS_ERR(qspi->ctrl_base)) {
ret = PTR_ERR(qspi->ctrl_base);
goto free_master;
goto free_host;
}
ret = of_property_read_u32_index(np,
"syscon-chipselects",
@ -838,7 +838,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev,
"couldn't get ctrl_mod reg index\n");
goto free_master;
goto free_host;
}
}
@ -853,7 +853,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
pm_runtime_enable(&pdev->dev);
if (!of_property_read_u32(np, "spi-max-frequency", &max_freq))
master->max_speed_hz = max_freq;
host->max_speed_hz = max_freq;
dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
@ -876,7 +876,7 @@ static int ti_qspi_probe(struct platform_device *pdev)
dma_release_channel(qspi->rx_chan);
goto no_dma;
}
master->dma_rx = qspi->rx_chan;
host->dma_rx = qspi->rx_chan;
init_completion(&qspi->transfer_complete);
if (res_mmap)
qspi->mmap_phys_base = (dma_addr_t)res_mmap->start;
@ -889,39 +889,40 @@ no_dma:
"mmap failed with error %ld using PIO mode\n",
PTR_ERR(qspi->mmap_base));
qspi->mmap_base = NULL;
master->mem_ops = NULL;
host->mem_ops = NULL;
}
}
qspi->mmap_enabled = false;
qspi->current_cs = -1;
ret = devm_spi_register_master(&pdev->dev, master);
ret = devm_spi_register_controller(&pdev->dev, host);
if (!ret)
return 0;
ti_qspi_dma_cleanup(qspi);
pm_runtime_disable(&pdev->dev);
free_master:
spi_master_put(master);
free_host:
spi_controller_put(host);
return ret;
}
static int ti_qspi_remove(struct platform_device *pdev)
static void ti_qspi_remove(struct platform_device *pdev)
{
struct ti_qspi *qspi = platform_get_drvdata(pdev);
int rc;
rc = spi_master_suspend(qspi->master);
if (rc)
return rc;
rc = spi_controller_suspend(qspi->host);
if (rc) {
dev_alert(&pdev->dev, "spi_controller_suspend() failed (%pe)\n",
ERR_PTR(rc));
return;
}
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
ti_qspi_dma_cleanup(qspi);
return 0;
}
static const struct dev_pm_ops ti_qspi_pm_ops = {
@ -930,7 +931,7 @@ static const struct dev_pm_ops ti_qspi_pm_ops = {
static struct platform_driver ti_qspi_driver = {
.probe = ti_qspi_probe,
.remove = ti_qspi_remove,
.remove_new = ti_qspi_remove,
.driver = {
.name = "ti-qspi",
.pm = &ti_qspi_pm_ops,
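
Beyond the renames, the TI QSPI changes above carry two smaller cleanups: the spi-mem exec_mem_op path now signals "fall back to plain transfers" with the standard -EOPNOTSUPP instead of -ENOTSUPP, and the platform remove callback is converted to the void-returning variant wired up through .remove_new. A sketch of such a remove callback (hypothetical foo_* names, not the TI code):

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>

struct foo_qspi_priv {
        struct spi_controller *host;
};

/* Hooked up via .remove_new: it returns void, so failures can only be
 * reported, never propagated back to the driver core. */
static void foo_qspi_remove(struct platform_device *pdev)
{
        struct foo_qspi_priv *priv = platform_get_drvdata(pdev);
        int ret;

        ret = spi_controller_suspend(priv->host);
        if (ret)
                dev_alert(&pdev->dev, "controller suspend failed (%pe)\n",
                          ERR_PTR(ret));

        /* Remaining teardown (runtime PM, DMA channels) continues anyway. */
}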


@ -124,7 +124,7 @@ struct pch_spi_dma_ctrl {
* struct pch_spi_data - Holds the SPI channel specific details
* @io_remap_addr: The remapped PCI base address
* @io_base_addr: Base address
* @master: Pointer to the SPI master structure
* @host: Pointer to the SPI controller structure
* @work: Reference to work queue handler
* @wait: Wait queue for waking up upon receiving an
* interrupt.
@ -161,7 +161,7 @@ struct pch_spi_dma_ctrl {
struct pch_spi_data {
void __iomem *io_remap_addr;
unsigned long io_base_addr;
struct spi_master *master;
struct spi_controller *host;
struct work_struct work;
wait_queue_head_t wait;
u8 transfer_complete;
@ -216,48 +216,48 @@ static const struct pci_device_id pch_spi_pcidev_id[] = {
/**
* pch_spi_writereg() - Performs register writes
* @master: Pointer to struct spi_master.
* @host: Pointer to struct spi_controller.
* @idx: Register offset.
* @val: Value to be written to register.
*/
static inline void pch_spi_writereg(struct spi_master *master, int idx, u32 val)
static inline void pch_spi_writereg(struct spi_controller *host, int idx, u32 val)
{
struct pch_spi_data *data = spi_master_get_devdata(master);
struct pch_spi_data *data = spi_controller_get_devdata(host);
iowrite32(val, (data->io_remap_addr + idx));
}
/**
* pch_spi_readreg() - Performs register reads
* @master: Pointer to struct spi_master.
* @host: Pointer to struct spi_controller.
* @idx: Register offset.
*/
static inline u32 pch_spi_readreg(struct spi_master *master, int idx)
static inline u32 pch_spi_readreg(struct spi_controller *host, int idx)
{
struct pch_spi_data *data = spi_master_get_devdata(master);
struct pch_spi_data *data = spi_controller_get_devdata(host);
return ioread32(data->io_remap_addr + idx);
}
static inline void pch_spi_setclr_reg(struct spi_master *master, int idx,
static inline void pch_spi_setclr_reg(struct spi_controller *host, int idx,
u32 set, u32 clr)
{
u32 tmp = pch_spi_readreg(master, idx);
u32 tmp = pch_spi_readreg(host, idx);
tmp = (tmp & ~clr) | set;
pch_spi_writereg(master, idx, tmp);
pch_spi_writereg(host, idx, tmp);
}
static void pch_spi_set_master_mode(struct spi_master *master)
static void pch_spi_set_host_mode(struct spi_controller *host)
{
pch_spi_setclr_reg(master, PCH_SPCR, SPCR_MSTR_BIT, 0);
pch_spi_setclr_reg(host, PCH_SPCR, SPCR_MSTR_BIT, 0);
}
/**
* pch_spi_clear_fifo() - Clears the Transmit and Receive FIFOs
* @master: Pointer to struct spi_master.
* @host: Pointer to struct spi_controller.
*/
static void pch_spi_clear_fifo(struct spi_master *master)
static void pch_spi_clear_fifo(struct spi_controller *host)
{
pch_spi_setclr_reg(master, PCH_SPCR, SPCR_FICLR_BIT, 0);
pch_spi_setclr_reg(master, PCH_SPCR, 0, SPCR_FICLR_BIT);
pch_spi_setclr_reg(host, PCH_SPCR, SPCR_FICLR_BIT, 0);
pch_spi_setclr_reg(host, PCH_SPCR, 0, SPCR_FICLR_BIT);
}
static void pch_spi_handler_sub(struct pch_spi_data *data, u32 reg_spsr_val,
@ -312,7 +312,7 @@ static void pch_spi_handler_sub(struct pch_spi_data *data, u32 reg_spsr_val,
if (reg_spsr_val & SPSR_FI_BIT) {
if ((tx_index == bpw_len) && (rx_index == tx_index)) {
/* disable interrupts */
pch_spi_setclr_reg(data->master, PCH_SPCR, 0,
pch_spi_setclr_reg(data->host, PCH_SPCR, 0,
PCH_ALL);
/* transfer is completed;
@ -321,7 +321,7 @@ static void pch_spi_handler_sub(struct pch_spi_data *data, u32 reg_spsr_val,
data->transfer_active = false;
wake_up(&data->wait);
} else {
dev_vdbg(&data->master->dev,
dev_vdbg(&data->host->dev,
"%s : Transfer is not completed",
__func__);
}
@ -383,10 +383,10 @@ static irqreturn_t pch_spi_handler(int irq, void *dev_id)
/**
* pch_spi_set_baud_rate() - Sets SPBR field in SPBRR
* @master: Pointer to struct spi_master.
* @host: Pointer to struct spi_controller.
* @speed_hz: Baud rate.
*/
static void pch_spi_set_baud_rate(struct spi_master *master, u32 speed_hz)
static void pch_spi_set_baud_rate(struct spi_controller *host, u32 speed_hz)
{
u32 n_spbr = PCH_CLOCK_HZ / (speed_hz * 2);
@ -394,21 +394,21 @@ static void pch_spi_set_baud_rate(struct spi_master *master, u32 speed_hz)
if (n_spbr > PCH_MAX_SPBR)
n_spbr = PCH_MAX_SPBR;
pch_spi_setclr_reg(master, PCH_SPBRR, n_spbr, MASK_SPBRR_SPBR_BITS);
pch_spi_setclr_reg(host, PCH_SPBRR, n_spbr, MASK_SPBRR_SPBR_BITS);
}
/**
* pch_spi_set_bits_per_word() - Sets SIZE field in SPBRR
* @master: Pointer to struct spi_master.
* @host: Pointer to struct spi_controller.
* @bits_per_word: Bits per word for SPI transfer.
*/
static void pch_spi_set_bits_per_word(struct spi_master *master,
static void pch_spi_set_bits_per_word(struct spi_controller *host,
u8 bits_per_word)
{
if (bits_per_word == 8)
pch_spi_setclr_reg(master, PCH_SPBRR, 0, SPBRR_SIZE_BIT);
pch_spi_setclr_reg(host, PCH_SPBRR, 0, SPBRR_SIZE_BIT);
else
pch_spi_setclr_reg(master, PCH_SPBRR, SPBRR_SIZE_BIT, 0);
pch_spi_setclr_reg(host, PCH_SPBRR, SPBRR_SIZE_BIT, 0);
}
/**
@ -420,12 +420,12 @@ static void pch_spi_setup_transfer(struct spi_device *spi)
u32 flags = 0;
dev_dbg(&spi->dev, "%s SPBRR content =%x setting baud rate=%d\n",
__func__, pch_spi_readreg(spi->master, PCH_SPBRR),
__func__, pch_spi_readreg(spi->controller, PCH_SPBRR),
spi->max_speed_hz);
pch_spi_set_baud_rate(spi->master, spi->max_speed_hz);
pch_spi_set_baud_rate(spi->controller, spi->max_speed_hz);
/* set bits per word */
pch_spi_set_bits_per_word(spi->master, spi->bits_per_word);
pch_spi_set_bits_per_word(spi->controller, spi->bits_per_word);
if (!(spi->mode & SPI_LSB_FIRST))
flags |= SPCR_LSBF_BIT;
@ -433,29 +433,29 @@ static void pch_spi_setup_transfer(struct spi_device *spi)
flags |= SPCR_CPOL_BIT;
if (spi->mode & SPI_CPHA)
flags |= SPCR_CPHA_BIT;
pch_spi_setclr_reg(spi->master, PCH_SPCR, flags,
pch_spi_setclr_reg(spi->controller, PCH_SPCR, flags,
(SPCR_LSBF_BIT | SPCR_CPOL_BIT | SPCR_CPHA_BIT));
/* Clear the FIFO by toggling FICLR to 1 and back to 0 */
pch_spi_clear_fifo(spi->master);
pch_spi_clear_fifo(spi->controller);
}
/**
* pch_spi_reset() - Clears SPI registers
* @master: Pointer to struct spi_master.
* @host: Pointer to struct spi_controller.
*/
static void pch_spi_reset(struct spi_master *master)
static void pch_spi_reset(struct spi_controller *host)
{
/* write 1 to reset SPI */
pch_spi_writereg(master, PCH_SRST, 0x1);
pch_spi_writereg(host, PCH_SRST, 0x1);
/* clear reset */
pch_spi_writereg(master, PCH_SRST, 0x0);
pch_spi_writereg(host, PCH_SRST, 0x0);
}
static int pch_spi_transfer(struct spi_device *pspi, struct spi_message *pmsg)
{
struct pch_spi_data *data = spi_master_get_devdata(pspi->master);
struct pch_spi_data *data = spi_controller_get_devdata(pspi->controller);
int retval;
unsigned long flags;
@ -524,15 +524,15 @@ static void pch_spi_set_tx(struct pch_spi_data *data, int *bpw)
/* set baud rate if needed */
if (data->cur_trans->speed_hz) {
dev_dbg(&data->master->dev, "%s:setting baud rate\n", __func__);
pch_spi_set_baud_rate(data->master, data->cur_trans->speed_hz);
dev_dbg(&data->host->dev, "%s:setting baud rate\n", __func__);
pch_spi_set_baud_rate(data->host, data->cur_trans->speed_hz);
}
/* set bits per word if needed */
if (data->cur_trans->bits_per_word &&
(data->current_msg->spi->bits_per_word != data->cur_trans->bits_per_word)) {
dev_dbg(&data->master->dev, "%s:set bits per word\n", __func__);
pch_spi_set_bits_per_word(data->master,
dev_dbg(&data->host->dev, "%s:set bits per word\n", __func__);
pch_spi_set_bits_per_word(data->host,
data->cur_trans->bits_per_word);
*bpw = data->cur_trans->bits_per_word;
} else {
@ -590,13 +590,13 @@ static void pch_spi_set_tx(struct pch_spi_data *data, int *bpw)
if (n_writes > PCH_MAX_FIFO_DEPTH)
n_writes = PCH_MAX_FIFO_DEPTH;
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"\n%s:Pulling down SSN low - writing 0x2 to SSNXCR\n",
__func__);
pch_spi_writereg(data->master, PCH_SSNXCR, SSN_LOW);
pch_spi_writereg(data->host, PCH_SSNXCR, SSN_LOW);
for (j = 0; j < n_writes; j++)
pch_spi_writereg(data->master, PCH_SPDWR, data->pkt_tx_buff[j]);
pch_spi_writereg(data->host, PCH_SPDWR, data->pkt_tx_buff[j]);
/* update tx_index */
data->tx_index = j;
@ -609,13 +609,13 @@ static void pch_spi_set_tx(struct pch_spi_data *data, int *bpw)
static void pch_spi_nomore_transfer(struct pch_spi_data *data)
{
struct spi_message *pmsg, *tmp;
dev_dbg(&data->master->dev, "%s called\n", __func__);
dev_dbg(&data->host->dev, "%s called\n", __func__);
/* Invoke complete callback
* [To the spi core..indicating end of transfer] */
data->current_msg->status = 0;
if (data->current_msg->complete) {
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s:Invoking callback of SPI core\n", __func__);
data->current_msg->complete(data->current_msg->context);
}
@ -623,7 +623,7 @@ static void pch_spi_nomore_transfer(struct pch_spi_data *data)
/* update status in global variable */
data->bcurrent_msg_processing = false;
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s:data->bcurrent_msg_processing = false\n", __func__);
data->current_msg = NULL;
@ -638,11 +638,11 @@ static void pch_spi_nomore_transfer(struct pch_spi_data *data)
* bpw;sfer requests in the current message or there are
*more messages)
*/
dev_dbg(&data->master->dev, "%s:Invoke queue_work\n", __func__);
dev_dbg(&data->host->dev, "%s:Invoke queue_work\n", __func__);
schedule_work(&data->work);
} else if (data->board_dat->suspend_sts ||
data->status == STATUS_EXITING) {
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s suspend/remove initiated, flushing queue\n",
__func__);
list_for_each_entry_safe(pmsg, tmp, data->queue.next, queue) {
@ -662,14 +662,14 @@ static void pch_spi_set_ir(struct pch_spi_data *data)
/* enable interrupts, set threshold, enable SPI */
if ((data->bpw_len) > PCH_MAX_FIFO_DEPTH)
/* set receive threshold to PCH_RX_THOLD */
pch_spi_setclr_reg(data->master, PCH_SPCR,
pch_spi_setclr_reg(data->host, PCH_SPCR,
PCH_RX_THOLD << SPCR_RFIC_FIELD |
SPCR_FIE_BIT | SPCR_RFIE_BIT |
SPCR_ORIE_BIT | SPCR_SPE_BIT,
MASK_RFIC_SPCR_BITS | PCH_ALL);
else
/* set receive threshold to maximum */
pch_spi_setclr_reg(data->master, PCH_SPCR,
pch_spi_setclr_reg(data->host, PCH_SPCR,
PCH_RX_THOLD_MAX << SPCR_RFIC_FIELD |
SPCR_FIE_BIT | SPCR_ORIE_BIT |
SPCR_SPE_BIT,
@ -677,18 +677,18 @@ static void pch_spi_set_ir(struct pch_spi_data *data)
/* Wait until the transfer completes; go to sleep after
initiating the transfer. */
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s:waiting for transfer to get over\n", __func__);
wait_event_interruptible(data->wait, data->transfer_complete);
/* clear all interrupts */
pch_spi_writereg(data->master, PCH_SPSR,
pch_spi_readreg(data->master, PCH_SPSR));
pch_spi_writereg(data->host, PCH_SPSR,
pch_spi_readreg(data->host, PCH_SPSR));
/* Disable interrupts and SPI transfer */
pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL | SPCR_SPE_BIT);
pch_spi_setclr_reg(data->host, PCH_SPCR, 0, PCH_ALL | SPCR_SPE_BIT);
/* clear FIFO */
pch_spi_clear_fifo(data->master);
pch_spi_clear_fifo(data->host);
}
static void pch_spi_copy_rx_data(struct pch_spi_data *data, int bpw)
@ -750,25 +750,25 @@ static int pch_spi_start_transfer(struct pch_spi_data *data)
spin_lock_irqsave(&data->lock, flags);
/* disable interrupts, SPI set enable */
pch_spi_setclr_reg(data->master, PCH_SPCR, SPCR_SPE_BIT, PCH_ALL);
pch_spi_setclr_reg(data->host, PCH_SPCR, SPCR_SPE_BIT, PCH_ALL);
spin_unlock_irqrestore(&data->lock, flags);
/* Wait until the transfer completes; go to sleep after
initiating the transfer. */
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s:waiting for transfer to get over\n", __func__);
rtn = wait_event_interruptible_timeout(data->wait,
data->transfer_complete,
msecs_to_jiffies(2 * HZ));
if (!rtn)
dev_err(&data->master->dev,
dev_err(&data->host->dev,
"%s wait-event timeout\n", __func__);
dma_sync_sg_for_cpu(&data->master->dev, dma->sg_rx_p, dma->nent,
dma_sync_sg_for_cpu(&data->host->dev, dma->sg_rx_p, dma->nent,
DMA_FROM_DEVICE);
dma_sync_sg_for_cpu(&data->master->dev, dma->sg_tx_p, dma->nent,
dma_sync_sg_for_cpu(&data->host->dev, dma->sg_tx_p, dma->nent,
DMA_FROM_DEVICE);
memset(data->dma.tx_buf_virt, 0, PAGE_SIZE);
@ -780,14 +780,14 @@ static int pch_spi_start_transfer(struct pch_spi_data *data)
spin_lock_irqsave(&data->lock, flags);
/* clear fifo threshold, disable interrupts, disable SPI transfer */
pch_spi_setclr_reg(data->master, PCH_SPCR, 0,
pch_spi_setclr_reg(data->host, PCH_SPCR, 0,
MASK_RFIC_SPCR_BITS | MASK_TFIC_SPCR_BITS | PCH_ALL |
SPCR_SPE_BIT);
/* clear all interrupts */
pch_spi_writereg(data->master, PCH_SPSR,
pch_spi_readreg(data->master, PCH_SPSR));
pch_spi_writereg(data->host, PCH_SPSR,
pch_spi_readreg(data->host, PCH_SPSR));
/* clear FIFO */
pch_spi_clear_fifo(data->master);
pch_spi_clear_fifo(data->host);
spin_unlock_irqrestore(&data->lock, flags);
@ -846,7 +846,7 @@ static void pch_spi_request_dma(struct pch_spi_data *data, int bpw)
param->width = width;
chan = dma_request_channel(mask, pch_spi_filter, param);
if (!chan) {
dev_err(&data->master->dev,
dev_err(&data->host->dev,
"ERROR: dma_request_channel FAILS(Tx)\n");
goto out;
}
@ -860,7 +860,7 @@ static void pch_spi_request_dma(struct pch_spi_data *data, int bpw)
param->width = width;
chan = dma_request_channel(mask, pch_spi_filter, param);
if (!chan) {
dev_err(&data->master->dev,
dev_err(&data->host->dev,
"ERROR: dma_request_channel FAILS(Rx)\n");
dma_release_channel(dma->chan_tx);
dma->chan_tx = NULL;
@ -913,9 +913,9 @@ static void pch_spi_handle_dma(struct pch_spi_data *data, int *bpw)
/* set baud rate if needed */
if (data->cur_trans->speed_hz) {
dev_dbg(&data->master->dev, "%s:setting baud rate\n", __func__);
dev_dbg(&data->host->dev, "%s:setting baud rate\n", __func__);
spin_lock_irqsave(&data->lock, flags);
pch_spi_set_baud_rate(data->master, data->cur_trans->speed_hz);
pch_spi_set_baud_rate(data->host, data->cur_trans->speed_hz);
spin_unlock_irqrestore(&data->lock, flags);
}
@ -923,9 +923,9 @@ static void pch_spi_handle_dma(struct pch_spi_data *data, int *bpw)
if (data->cur_trans->bits_per_word &&
(data->current_msg->spi->bits_per_word !=
data->cur_trans->bits_per_word)) {
dev_dbg(&data->master->dev, "%s:set bits per word\n", __func__);
dev_dbg(&data->host->dev, "%s:set bits per word\n", __func__);
spin_lock_irqsave(&data->lock, flags);
pch_spi_set_bits_per_word(data->master,
pch_spi_set_bits_per_word(data->host,
data->cur_trans->bits_per_word);
spin_unlock_irqrestore(&data->lock, flags);
*bpw = data->cur_trans->bits_per_word;
@ -969,12 +969,12 @@ static void pch_spi_handle_dma(struct pch_spi_data *data, int *bpw)
size = data->bpw_len;
rem = data->bpw_len;
}
dev_dbg(&data->master->dev, "%s num=%d size=%d rem=%d\n",
dev_dbg(&data->host->dev, "%s num=%d size=%d rem=%d\n",
__func__, num, size, rem);
spin_lock_irqsave(&data->lock, flags);
/* set receive fifo threshold and transmit fifo threshold */
pch_spi_setclr_reg(data->master, PCH_SPCR,
pch_spi_setclr_reg(data->host, PCH_SPCR,
((size - 1) << SPCR_RFIC_FIELD) |
(PCH_TX_THOLD << SPCR_TFIC_FIELD),
MASK_RFIC_SPCR_BITS | MASK_TFIC_SPCR_BITS);
@ -1016,11 +1016,11 @@ static void pch_spi_handle_dma(struct pch_spi_data *data, int *bpw)
num, DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_rx) {
dev_err(&data->master->dev,
dev_err(&data->host->dev,
"%s:dmaengine_prep_slave_sg Failed\n", __func__);
return;
}
dma_sync_sg_for_device(&data->master->dev, sg, num, DMA_FROM_DEVICE);
dma_sync_sg_for_device(&data->host->dev, sg, num, DMA_FROM_DEVICE);
desc_rx->callback = pch_dma_rx_complete;
desc_rx->callback_param = data;
dma->nent = num;
@ -1078,20 +1078,20 @@ static void pch_spi_handle_dma(struct pch_spi_data *data, int *bpw)
sg, num, DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_tx) {
dev_err(&data->master->dev,
dev_err(&data->host->dev,
"%s:dmaengine_prep_slave_sg Failed\n", __func__);
return;
}
dma_sync_sg_for_device(&data->master->dev, sg, num, DMA_TO_DEVICE);
dma_sync_sg_for_device(&data->host->dev, sg, num, DMA_TO_DEVICE);
desc_tx->callback = NULL;
desc_tx->callback_param = data;
dma->nent = num;
dma->desc_tx = desc_tx;
dev_dbg(&data->master->dev, "%s:Pulling down SSN low - writing 0x2 to SSNXCR\n", __func__);
dev_dbg(&data->host->dev, "%s:Pulling down SSN low - writing 0x2 to SSNXCR\n", __func__);
spin_lock_irqsave(&data->lock, flags);
pch_spi_writereg(data->master, PCH_SSNXCR, SSN_LOW);
pch_spi_writereg(data->host, PCH_SSNXCR, SSN_LOW);
desc_rx->tx_submit(desc_rx);
desc_tx->tx_submit(desc_tx);
spin_unlock_irqrestore(&data->lock, flags);
@ -1107,12 +1107,12 @@ static void pch_spi_process_messages(struct work_struct *pwork)
int bpw;
data = container_of(pwork, struct pch_spi_data, work);
dev_dbg(&data->master->dev, "%s data initialized\n", __func__);
dev_dbg(&data->host->dev, "%s data initialized\n", __func__);
spin_lock(&data->lock);
/* check if suspend has been initiated;if yes flush queue */
if (data->board_dat->suspend_sts || (data->status == STATUS_EXITING)) {
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s suspend/remove initiated, flushing queue\n", __func__);
list_for_each_entry_safe(pmsg, tmp, data->queue.next, queue) {
pmsg->status = -EIO;
@ -1132,7 +1132,7 @@ static void pch_spi_process_messages(struct work_struct *pwork)
}
data->bcurrent_msg_processing = true;
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s Set data->bcurrent_msg_processing= true\n", __func__);
/* Get the message from the queue and delete it from there. */
@ -1150,7 +1150,7 @@ static void pch_spi_process_messages(struct work_struct *pwork)
if (data->use_dma)
pch_spi_request_dma(data,
data->current_msg->spi->bits_per_word);
pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
pch_spi_writereg(data->host, PCH_SSNXCR, SSN_NO_CONTROL);
do {
int cnt;
/* If we are already processing a message get the next
@ -1161,14 +1161,14 @@ static void pch_spi_process_messages(struct work_struct *pwork)
data->cur_trans =
list_entry(data->current_msg->transfers.next,
struct spi_transfer, transfer_list);
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s :Getting 1st transfer message\n",
__func__);
} else {
data->cur_trans =
list_entry(data->cur_trans->transfer_list.next,
struct spi_transfer, transfer_list);
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s :Getting next transfer message\n",
__func__);
}
@ -1210,7 +1210,7 @@ static void pch_spi_process_messages(struct work_struct *pwork)
data->cur_trans->len = data->save_total_len;
data->current_msg->actual_length += data->cur_trans->len;
dev_dbg(&data->master->dev,
dev_dbg(&data->host->dev,
"%s:data->current_msg->actual_length=%d\n",
__func__, data->current_msg->actual_length);
@ -1229,7 +1229,7 @@ static void pch_spi_process_messages(struct work_struct *pwork)
} while (data->cur_trans != NULL);
out:
pch_spi_writereg(data->master, PCH_SSNXCR, SSN_HIGH);
pch_spi_writereg(data->host, PCH_SSNXCR, SSN_HIGH);
if (data->use_dma)
pch_spi_release_dma(data);
}
@ -1248,7 +1248,7 @@ static int pch_spi_get_resources(struct pch_spi_board_data *board_dat,
dev_dbg(&board_dat->pdev->dev, "%s ENTRY\n", __func__);
/* reset PCH SPI h/w */
pch_spi_reset(data->master);
pch_spi_reset(data->host);
dev_dbg(&board_dat->pdev->dev,
"%s pch_spi_reset invoked successfully\n", __func__);
@ -1297,22 +1297,22 @@ static int pch_alloc_dma_buf(struct pch_spi_board_data *board_dat,
static int pch_spi_pd_probe(struct platform_device *plat_dev)
{
int ret;
struct spi_master *master;
struct spi_controller *host;
struct pch_spi_board_data *board_dat = dev_get_platdata(&plat_dev->dev);
struct pch_spi_data *data;
dev_dbg(&plat_dev->dev, "%s:debug\n", __func__);
master = spi_alloc_master(&board_dat->pdev->dev,
host = spi_alloc_host(&board_dat->pdev->dev,
sizeof(struct pch_spi_data));
if (!master) {
dev_err(&plat_dev->dev, "spi_alloc_master[%d] failed.\n",
if (!host) {
dev_err(&plat_dev->dev, "spi_alloc_host[%d] failed.\n",
plat_dev->id);
return -ENOMEM;
}
data = spi_master_get_devdata(master);
data->master = master;
data = spi_controller_get_devdata(host);
data->host = host;
platform_set_drvdata(plat_dev, data);
@ -1330,13 +1330,13 @@ static int pch_spi_pd_probe(struct platform_device *plat_dev)
dev_dbg(&plat_dev->dev, "[ch%d] remap_addr=%p\n",
plat_dev->id, data->io_remap_addr);
/* initialize members of SPI master */
master->num_chipselect = PCH_MAX_CS;
master->transfer = pch_spi_transfer;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST;
master->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16);
master->max_speed_hz = PCH_MAX_BAUDRATE;
master->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX;
/* initialize members of SPI host */
host->num_chipselect = PCH_MAX_CS;
host->transfer = pch_spi_transfer;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST;
host->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16);
host->max_speed_hz = PCH_MAX_BAUDRATE;
host->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX;
data->board_dat = board_dat;
data->plat_dev = plat_dev;
@ -1365,25 +1365,25 @@ static int pch_spi_pd_probe(struct platform_device *plat_dev)
}
data->irq_reg_sts = true;
pch_spi_set_master_mode(master);
pch_spi_set_host_mode(host);
if (use_dma) {
dev_info(&plat_dev->dev, "Use DMA for data transfers\n");
ret = pch_alloc_dma_buf(board_dat, data);
if (ret)
goto err_spi_register_master;
goto err_spi_register_controller;
}
ret = spi_register_master(master);
ret = spi_register_controller(host);
if (ret != 0) {
dev_err(&plat_dev->dev,
"%s spi_register_master FAILED\n", __func__);
goto err_spi_register_master;
"%s spi_register_controller FAILED\n", __func__);
goto err_spi_register_controller;
}
return 0;
err_spi_register_master:
err_spi_register_controller:
pch_free_dma_buf(board_dat, data);
free_irq(board_dat->pdev->irq, data);
err_request_irq:
@ -1391,7 +1391,7 @@ err_request_irq:
err_spi_get_resources:
pci_iounmap(board_dat->pdev, data->io_remap_addr);
err_pci_iomap:
spi_master_put(master);
spi_controller_put(host);
return ret;
}
@ -1427,13 +1427,13 @@ static void pch_spi_pd_remove(struct platform_device *plat_dev)
/* disable interrupts & free IRQ */
if (data->irq_reg_sts) {
/* disable interrupts */
pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL);
pch_spi_setclr_reg(data->host, PCH_SPCR, 0, PCH_ALL);
data->irq_reg_sts = false;
free_irq(board_dat->pdev->irq, data);
}
pci_iounmap(board_dat->pdev, data->io_remap_addr);
spi_unregister_master(data->master);
spi_unregister_controller(data->host);
}
#ifdef CONFIG_PM
static int pch_spi_pd_suspend(struct platform_device *pd_dev,
@ -1463,8 +1463,8 @@ static int pch_spi_pd_suspend(struct platform_device *pd_dev,
/* Free IRQ */
if (data->irq_reg_sts) {
/* disable all interrupts */
pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL);
pch_spi_reset(data->master);
pch_spi_setclr_reg(data->host, PCH_SPCR, 0, PCH_ALL);
pch_spi_reset(data->host);
free_irq(board_dat->pdev->irq, data);
data->irq_reg_sts = false;
@ -1498,8 +1498,8 @@ static int pch_spi_pd_resume(struct platform_device *pd_dev)
}
/* reset PCH SPI h/w */
pch_spi_reset(data->master);
pch_spi_set_master_mode(data->master);
pch_spi_reset(data->host);
pch_spi_set_host_mode(data->host);
data->irq_reg_sts = true;
}
return 0;


@ -26,7 +26,7 @@ struct uniphier_spi_priv {
void __iomem *base;
dma_addr_t base_dma_addr;
struct clk *clk;
struct spi_master *master;
struct spi_controller *host;
struct completion xfer_done;
int error;
@ -127,7 +127,7 @@ static inline void uniphier_spi_irq_disable(struct uniphier_spi_priv *priv,
static void uniphier_spi_set_mode(struct spi_device *spi)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(spi->master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(spi->controller);
u32 val1, val2;
/*
@ -180,7 +180,7 @@ static void uniphier_spi_set_mode(struct spi_device *spi)
static void uniphier_spi_set_transfer_size(struct spi_device *spi, int size)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(spi->master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(spi->controller);
u32 val;
val = readl(priv->base + SSI_TXWDS);
@ -198,7 +198,7 @@ static void uniphier_spi_set_transfer_size(struct spi_device *spi, int size)
static void uniphier_spi_set_baudrate(struct spi_device *spi,
unsigned int speed)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(spi->master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(spi->controller);
u32 val, ckdiv;
/*
@ -217,7 +217,7 @@ static void uniphier_spi_set_baudrate(struct spi_device *spi,
static void uniphier_spi_setup_transfer(struct spi_device *spi,
struct spi_transfer *t)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(spi->master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(spi->controller);
u32 val;
priv->error = 0;
@ -333,7 +333,7 @@ static void uniphier_spi_fill_tx_fifo(struct uniphier_spi_priv *priv)
static void uniphier_spi_set_cs(struct spi_device *spi, bool enable)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(spi->master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(spi->controller);
u32 val;
val = readl(priv->base + SSI_FPS);
@ -346,16 +346,16 @@ static void uniphier_spi_set_cs(struct spi_device *spi, bool enable)
writel(val, priv->base + SSI_FPS);
}
static bool uniphier_spi_can_dma(struct spi_master *master,
static bool uniphier_spi_can_dma(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
unsigned int bpw = bytes_per_word(priv->bits_per_word);
if ((!master->dma_tx && !master->dma_rx)
|| (!master->dma_tx && t->tx_buf)
|| (!master->dma_rx && t->rx_buf))
if ((!host->dma_tx && !host->dma_rx)
|| (!host->dma_tx && t->tx_buf)
|| (!host->dma_rx && t->rx_buf))
return false;
return DIV_ROUND_UP(t->len, bpw) > SSI_FIFO_DEPTH;
@ -363,33 +363,33 @@ static bool uniphier_spi_can_dma(struct spi_master *master,
static void uniphier_spi_dma_rxcb(void *data)
{
struct spi_master *master = data;
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct spi_controller *host = data;
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
int state = atomic_fetch_andnot(SSI_DMA_RX_BUSY, &priv->dma_busy);
uniphier_spi_irq_disable(priv, SSI_IE_RXRE);
if (!(state & SSI_DMA_TX_BUSY))
spi_finalize_current_transfer(master);
spi_finalize_current_transfer(host);
}
static void uniphier_spi_dma_txcb(void *data)
{
struct spi_master *master = data;
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct spi_controller *host = data;
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
int state = atomic_fetch_andnot(SSI_DMA_TX_BUSY, &priv->dma_busy);
uniphier_spi_irq_disable(priv, SSI_IE_TXRE);
if (!(state & SSI_DMA_RX_BUSY))
spi_finalize_current_transfer(master);
spi_finalize_current_transfer(host);
}
static int uniphier_spi_transfer_one_dma(struct spi_master *master,
static int uniphier_spi_transfer_one_dma(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
struct dma_async_tx_descriptor *rxdesc = NULL, *txdesc = NULL;
int buswidth;
@ -412,23 +412,23 @@ static int uniphier_spi_transfer_one_dma(struct spi_master *master,
.src_maxburst = SSI_FIFO_BURST_NUM,
};
dmaengine_slave_config(master->dma_rx, &rxconf);
dmaengine_slave_config(host->dma_rx, &rxconf);
rxdesc = dmaengine_prep_slave_sg(
master->dma_rx,
host->dma_rx,
t->rx_sg.sgl, t->rx_sg.nents,
DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!rxdesc)
goto out_err_prep;
rxdesc->callback = uniphier_spi_dma_rxcb;
rxdesc->callback_param = master;
rxdesc->callback_param = host;
uniphier_spi_irq_enable(priv, SSI_IE_RXRE);
atomic_or(SSI_DMA_RX_BUSY, &priv->dma_busy);
dmaengine_submit(rxdesc);
dma_async_issue_pending(master->dma_rx);
dma_async_issue_pending(host->dma_rx);
}
if (priv->tx_buf) {
@ -439,23 +439,23 @@ static int uniphier_spi_transfer_one_dma(struct spi_master *master,
.dst_maxburst = SSI_FIFO_BURST_NUM,
};
dmaengine_slave_config(master->dma_tx, &txconf);
dmaengine_slave_config(host->dma_tx, &txconf);
txdesc = dmaengine_prep_slave_sg(
master->dma_tx,
host->dma_tx,
t->tx_sg.sgl, t->tx_sg.nents,
DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!txdesc)
goto out_err_prep;
txdesc->callback = uniphier_spi_dma_txcb;
txdesc->callback_param = master;
txdesc->callback_param = host;
uniphier_spi_irq_enable(priv, SSI_IE_TXRE);
atomic_or(SSI_DMA_TX_BUSY, &priv->dma_busy);
dmaengine_submit(txdesc);
dma_async_issue_pending(master->dma_tx);
dma_async_issue_pending(host->dma_tx);
}
/* signal that we need to wait for completion */
@ -463,17 +463,17 @@ static int uniphier_spi_transfer_one_dma(struct spi_master *master,
out_err_prep:
if (rxdesc)
dmaengine_terminate_sync(master->dma_rx);
dmaengine_terminate_sync(host->dma_rx);
return -EINVAL;
}
static int uniphier_spi_transfer_one_irq(struct spi_master *master,
static int uniphier_spi_transfer_one_irq(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct device *dev = master->dev.parent;
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
struct device *dev = host->dev.parent;
unsigned long time_left;
reinit_completion(&priv->xfer_done);
@ -495,11 +495,11 @@ static int uniphier_spi_transfer_one_irq(struct spi_master *master,
return priv->error;
}
static int uniphier_spi_transfer_one_poll(struct spi_master *master,
static int uniphier_spi_transfer_one_poll(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
int loop = SSI_POLL_TIMEOUT_US * 10;
while (priv->tx_bytes) {
@ -520,14 +520,14 @@ static int uniphier_spi_transfer_one_poll(struct spi_master *master,
return 0;
irq_transfer:
return uniphier_spi_transfer_one_irq(master, spi, t);
return uniphier_spi_transfer_one_irq(host, spi, t);
}
static int uniphier_spi_transfer_one(struct spi_master *master,
static int uniphier_spi_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
unsigned long threshold;
bool use_dma;
@ -537,9 +537,9 @@ static int uniphier_spi_transfer_one(struct spi_master *master,
uniphier_spi_setup_transfer(spi, t);
use_dma = master->can_dma ? master->can_dma(master, spi, t) : false;
use_dma = host->can_dma ? host->can_dma(host, spi, t) : false;
if (use_dma)
return uniphier_spi_transfer_one_dma(master, spi, t);
return uniphier_spi_transfer_one_dma(host, spi, t);
/*
* If the transfer operation will take longer than
@ -548,33 +548,33 @@ static int uniphier_spi_transfer_one(struct spi_master *master,
threshold = DIV_ROUND_UP(SSI_POLL_TIMEOUT_US * priv->speed_hz,
USEC_PER_SEC * BITS_PER_BYTE);
if (t->len > threshold)
return uniphier_spi_transfer_one_irq(master, spi, t);
return uniphier_spi_transfer_one_irq(host, spi, t);
else
return uniphier_spi_transfer_one_poll(master, spi, t);
return uniphier_spi_transfer_one_poll(host, spi, t);
}
static int uniphier_spi_prepare_transfer_hardware(struct spi_master *master)
static int uniphier_spi_prepare_transfer_hardware(struct spi_controller *host)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
writel(SSI_CTL_EN, priv->base + SSI_CTL);
return 0;
}
static int uniphier_spi_unprepare_transfer_hardware(struct spi_master *master)
static int uniphier_spi_unprepare_transfer_hardware(struct spi_controller *host)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
writel(0, priv->base + SSI_CTL);
return 0;
}
static void uniphier_spi_handle_err(struct spi_master *master,
static void uniphier_spi_handle_err(struct spi_controller *host,
struct spi_message *msg)
{
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
u32 val;
/* stop running spi transfer */
@ -587,12 +587,12 @@ static void uniphier_spi_handle_err(struct spi_master *master,
uniphier_spi_irq_disable(priv, SSI_IE_ALL_MASK);
if (atomic_read(&priv->dma_busy) & SSI_DMA_TX_BUSY) {
dmaengine_terminate_async(master->dma_tx);
dmaengine_terminate_async(host->dma_tx);
atomic_andnot(SSI_DMA_TX_BUSY, &priv->dma_busy);
}
if (atomic_read(&priv->dma_busy) & SSI_DMA_RX_BUSY) {
dmaengine_terminate_async(master->dma_rx);
dmaengine_terminate_async(host->dma_rx);
atomic_andnot(SSI_DMA_RX_BUSY, &priv->dma_busy);
}
}
@ -641,7 +641,7 @@ done:
static int uniphier_spi_probe(struct platform_device *pdev)
{
struct uniphier_spi_priv *priv;
struct spi_master *master;
struct spi_controller *host;
struct resource *res;
struct dma_slave_caps caps;
u32 dma_tx_burst = 0, dma_rx_burst = 0;
@ -649,20 +649,20 @@ static int uniphier_spi_probe(struct platform_device *pdev)
int irq;
int ret;
master = spi_alloc_master(&pdev->dev, sizeof(*priv));
if (!master)
host = spi_alloc_host(&pdev->dev, sizeof(*priv));
if (!host)
return -ENOMEM;
platform_set_drvdata(pdev, master);
platform_set_drvdata(pdev, host);
priv = spi_master_get_devdata(master);
priv->master = master;
priv = spi_controller_get_devdata(host);
priv->host = host;
priv->is_save_param = false;
priv->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
if (IS_ERR(priv->base)) {
ret = PTR_ERR(priv->base);
goto out_master_put;
goto out_host_put;
}
priv->base_dma_addr = res->start;
@ -670,12 +670,12 @@ static int uniphier_spi_probe(struct platform_device *pdev)
if (IS_ERR(priv->clk)) {
dev_err(&pdev->dev, "failed to get clock\n");
ret = PTR_ERR(priv->clk);
goto out_master_put;
goto out_host_put;
}
ret = clk_prepare_enable(priv->clk);
if (ret)
goto out_master_put;
goto out_host_put;
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
@ -694,35 +694,35 @@ static int uniphier_spi_probe(struct platform_device *pdev)
clk_rate = clk_get_rate(priv->clk);
master->max_speed_hz = DIV_ROUND_UP(clk_rate, SSI_MIN_CLK_DIVIDER);
master->min_speed_hz = DIV_ROUND_UP(clk_rate, SSI_MAX_CLK_DIVIDER);
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
master->dev.of_node = pdev->dev.of_node;
master->bus_num = pdev->id;
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
host->max_speed_hz = DIV_ROUND_UP(clk_rate, SSI_MIN_CLK_DIVIDER);
host->min_speed_hz = DIV_ROUND_UP(clk_rate, SSI_MAX_CLK_DIVIDER);
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST;
host->dev.of_node = pdev->dev.of_node;
host->bus_num = pdev->id;
host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
master->set_cs = uniphier_spi_set_cs;
master->transfer_one = uniphier_spi_transfer_one;
master->prepare_transfer_hardware
host->set_cs = uniphier_spi_set_cs;
host->transfer_one = uniphier_spi_transfer_one;
host->prepare_transfer_hardware
= uniphier_spi_prepare_transfer_hardware;
master->unprepare_transfer_hardware
host->unprepare_transfer_hardware
= uniphier_spi_unprepare_transfer_hardware;
master->handle_err = uniphier_spi_handle_err;
master->can_dma = uniphier_spi_can_dma;
host->handle_err = uniphier_spi_handle_err;
host->can_dma = uniphier_spi_can_dma;
master->num_chipselect = 1;
master->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX;
host->num_chipselect = 1;
host->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX;
master->dma_tx = dma_request_chan(&pdev->dev, "tx");
if (IS_ERR_OR_NULL(master->dma_tx)) {
if (PTR_ERR(master->dma_tx) == -EPROBE_DEFER) {
host->dma_tx = dma_request_chan(&pdev->dev, "tx");
if (IS_ERR_OR_NULL(host->dma_tx)) {
if (PTR_ERR(host->dma_tx) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto out_disable_clk;
}
master->dma_tx = NULL;
host->dma_tx = NULL;
dma_tx_burst = INT_MAX;
} else {
ret = dma_get_slave_caps(master->dma_tx, &caps);
ret = dma_get_slave_caps(host->dma_tx, &caps);
if (ret) {
dev_err(&pdev->dev, "failed to get TX DMA capacities: %d\n",
ret);
@ -731,16 +731,16 @@ static int uniphier_spi_probe(struct platform_device *pdev)
dma_tx_burst = caps.max_burst;
}
master->dma_rx = dma_request_chan(&pdev->dev, "rx");
if (IS_ERR_OR_NULL(master->dma_rx)) {
if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) {
host->dma_rx = dma_request_chan(&pdev->dev, "rx");
if (IS_ERR_OR_NULL(host->dma_rx)) {
if (PTR_ERR(host->dma_rx) == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto out_release_dma;
}
master->dma_rx = NULL;
host->dma_rx = NULL;
dma_rx_burst = INT_MAX;
} else {
ret = dma_get_slave_caps(master->dma_rx, &caps);
ret = dma_get_slave_caps(host->dma_rx, &caps);
if (ret) {
dev_err(&pdev->dev, "failed to get RX DMA capacities: %d\n",
ret);
@ -749,41 +749,41 @@ static int uniphier_spi_probe(struct platform_device *pdev)
dma_rx_burst = caps.max_burst;
}
master->max_dma_len = min(dma_tx_burst, dma_rx_burst);
host->max_dma_len = min(dma_tx_burst, dma_rx_burst);
ret = devm_spi_register_master(&pdev->dev, master);
ret = devm_spi_register_controller(&pdev->dev, host);
if (ret)
goto out_release_dma;
return 0;
out_release_dma:
if (!IS_ERR_OR_NULL(master->dma_rx)) {
dma_release_channel(master->dma_rx);
master->dma_rx = NULL;
if (!IS_ERR_OR_NULL(host->dma_rx)) {
dma_release_channel(host->dma_rx);
host->dma_rx = NULL;
}
if (!IS_ERR_OR_NULL(master->dma_tx)) {
dma_release_channel(master->dma_tx);
master->dma_tx = NULL;
if (!IS_ERR_OR_NULL(host->dma_tx)) {
dma_release_channel(host->dma_tx);
host->dma_tx = NULL;
}
out_disable_clk:
clk_disable_unprepare(priv->clk);
out_master_put:
spi_master_put(master);
out_host_put:
spi_controller_put(host);
return ret;
}
static void uniphier_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct uniphier_spi_priv *priv = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct uniphier_spi_priv *priv = spi_controller_get_devdata(host);
if (master->dma_tx)
dma_release_channel(master->dma_tx);
if (master->dma_rx)
dma_release_channel(master->dma_rx);
if (host->dma_tx)
dma_release_channel(host->dma_tx);
if (host->dma_rx)
dma_release_channel(host->dma_rx);
clk_disable_unprepare(priv->clk);
}


@ -361,7 +361,7 @@ static int wpcm_fiu_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
wpcm_fiu_stall_host(fiu, false);
return -ENOTSUPP;
return -EOPNOTSUPP;
}
static int wpcm_fiu_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
@ -441,7 +441,7 @@ static int wpcm_fiu_probe(struct platform_device *pdev)
struct wpcm_fiu_spi *fiu;
struct resource *res;
ctrl = devm_spi_alloc_master(dev, sizeof(*fiu));
ctrl = devm_spi_alloc_host(dev, sizeof(*fiu));
if (!ctrl)
return -ENOMEM;


@ -132,10 +132,10 @@ static int spi_xcomm_txrx_bufs(struct spi_xcomm *spi_xcomm,
return t->len;
}
static int spi_xcomm_transfer_one(struct spi_master *master,
static int spi_xcomm_transfer_one(struct spi_controller *host,
struct spi_message *msg)
{
struct spi_xcomm *spi_xcomm = spi_master_get_devdata(master);
struct spi_xcomm *spi_xcomm = spi_controller_get_devdata(host);
unsigned int settings = spi_xcomm->settings;
struct spi_device *spi = msg->spi;
unsigned cs_change = 0;
@ -197,7 +197,7 @@ static int spi_xcomm_transfer_one(struct spi_master *master,
spi_xcomm_chipselect(spi_xcomm, spi, false);
msg->status = status;
spi_finalize_current_message(master);
spi_finalize_current_message(host);
return status;
}
@ -205,27 +205,27 @@ static int spi_xcomm_transfer_one(struct spi_master *master,
static int spi_xcomm_probe(struct i2c_client *i2c)
{
struct spi_xcomm *spi_xcomm;
struct spi_master *master;
struct spi_controller *host;
int ret;
master = spi_alloc_master(&i2c->dev, sizeof(*spi_xcomm));
if (!master)
host = spi_alloc_host(&i2c->dev, sizeof(*spi_xcomm));
if (!host)
return -ENOMEM;
spi_xcomm = spi_master_get_devdata(master);
spi_xcomm = spi_controller_get_devdata(host);
spi_xcomm->i2c = i2c;
master->num_chipselect = 16;
master->mode_bits = SPI_CPHA | SPI_CPOL | SPI_3WIRE;
master->bits_per_word_mask = SPI_BPW_MASK(8);
master->flags = SPI_CONTROLLER_HALF_DUPLEX;
master->transfer_one_message = spi_xcomm_transfer_one;
master->dev.of_node = i2c->dev.of_node;
i2c_set_clientdata(i2c, master);
host->num_chipselect = 16;
host->mode_bits = SPI_CPHA | SPI_CPOL | SPI_3WIRE;
host->bits_per_word_mask = SPI_BPW_MASK(8);
host->flags = SPI_CONTROLLER_HALF_DUPLEX;
host->transfer_one_message = spi_xcomm_transfer_one;
host->dev.of_node = i2c->dev.of_node;
i2c_set_clientdata(i2c, host);
ret = devm_spi_register_master(&i2c->dev, master);
ret = devm_spi_register_controller(&i2c->dev, host);
if (ret < 0)
spi_master_put(master);
spi_controller_put(host);
return ret;
}


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Xilinx SPI controller driver (master mode only)
* Xilinx SPI controller driver (host mode only)
*
* Author: MontaVista Software, Inc.
* source@mvista.com
@ -83,7 +83,7 @@ struct xilinx_spi {
void __iomem *regs; /* virt. address of the control registers */
int irq;
bool force_irq; /* force irq to setup master inhibit */
bool force_irq; /* force irq to setup host inhibit */
u8 *rx_ptr; /* pointer in the Rx buffer */
const u8 *tx_ptr; /* pointer in the Tx buffer */
u8 bytes_per_word;
@ -174,10 +174,10 @@ static void xspi_init_hw(struct xilinx_spi *xspi)
regs_base + XIPIF_V123B_IIER_OFFSET);
/* Disable the global IPIF interrupt */
xspi->write_fn(0, regs_base + XIPIF_V123B_DGIER_OFFSET);
/* Deselect the slave on the SPI bus */
/* Deselect the Target on the SPI bus */
xspi->write_fn(0xffff, regs_base + XSPI_SSR_OFFSET);
/* Disable the transmitter, enable Manual Slave Select Assertion,
* put SPI controller into master mode, and enable it */
/* Disable the transmitter, enable Manual Target Select Assertion,
* put SPI controller into host mode, and enable it */
xspi->write_fn(XSPI_CR_MANUAL_SSELECT | XSPI_CR_MASTER_MODE |
XSPI_CR_ENABLE | XSPI_CR_TXFIFO_RESET | XSPI_CR_RXFIFO_RESET,
regs_base + XSPI_CR_OFFSET);
@ -185,12 +185,12 @@ static void xspi_init_hw(struct xilinx_spi *xspi)
static void xilinx_spi_chipselect(struct spi_device *spi, int is_on)
{
struct xilinx_spi *xspi = spi_master_get_devdata(spi->master);
struct xilinx_spi *xspi = spi_controller_get_devdata(spi->controller);
u16 cr;
u32 cs;
if (is_on == BITBANG_CS_INACTIVE) {
/* Deselect the slave on the SPI bus */
/* Deselect the target on the SPI bus */
xspi->write_fn(xspi->cs_inactive, xspi->regs + XSPI_SSR_OFFSET);
return;
}
@ -225,7 +225,7 @@ static void xilinx_spi_chipselect(struct spi_device *spi, int is_on)
static int xilinx_spi_setup_transfer(struct spi_device *spi,
struct spi_transfer *t)
{
struct xilinx_spi *xspi = spi_master_get_devdata(spi->master);
struct xilinx_spi *xspi = spi_controller_get_devdata(spi->controller);
if (spi->mode & SPI_CS_HIGH)
xspi->cs_inactive &= ~BIT(spi_get_chipselect(spi, 0));
@ -237,7 +237,7 @@ static int xilinx_spi_setup_transfer(struct spi_device *spi,
static int xilinx_spi_txrx_bufs(struct spi_device *spi, struct spi_transfer *t)
{
struct xilinx_spi *xspi = spi_master_get_devdata(spi->master);
struct xilinx_spi *xspi = spi_controller_get_devdata(spi->controller);
int remaining_words; /* the number of words left to transfer */
bool use_irq = false;
u16 cr = 0;
@ -335,9 +335,9 @@ static int xilinx_spi_txrx_bufs(struct spi_device *spi, struct spi_transfer *t)
}
/* This driver supports single master mode only. Hence Tx FIFO Empty
/* This driver supports single host mode only. Hence Tx FIFO Empty
* is the only interrupt we care about.
* Receive FIFO Overrun, Transmit FIFO Underrun, Mode Fault, and Slave Mode
* Receive FIFO Overrun, Transmit FIFO Underrun, Mode Fault, and Target Mode
* Fault are not to happen.
*/
static irqreturn_t xilinx_spi_irq(int irq, void *dev_id)
@ -393,7 +393,7 @@ static int xilinx_spi_probe(struct platform_device *pdev)
struct xspi_platform_data *pdata;
struct resource *res;
int ret, num_cs = 0, bits_per_word;
struct spi_master *master;
struct spi_controller *host;
bool force_irq = false;
u32 tmp;
u8 i;
@ -415,26 +415,26 @@ static int xilinx_spi_probe(struct platform_device *pdev)
if (!num_cs) {
dev_err(&pdev->dev,
"Missing slave select configuration data\n");
"Missing target select configuration data\n");
return -EINVAL;
}
if (num_cs > XILINX_SPI_MAX_CS) {
dev_err(&pdev->dev, "Invalid number of spi slaves\n");
dev_err(&pdev->dev, "Invalid number of spi targets\n");
return -EINVAL;
}
master = devm_spi_alloc_master(&pdev->dev, sizeof(struct xilinx_spi));
if (!master)
host = devm_spi_alloc_host(&pdev->dev, sizeof(struct xilinx_spi));
if (!host)
return -ENODEV;
/* the spi->mode bits understood by this driver: */
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST | SPI_LOOP |
SPI_CS_HIGH;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST | SPI_LOOP |
SPI_CS_HIGH;
xspi = spi_master_get_devdata(master);
xspi = spi_controller_get_devdata(host);
xspi->cs_inactive = 0xffffffff;
xspi->bitbang.master = master;
xspi->bitbang.master = host;
xspi->bitbang.chipselect = xilinx_spi_chipselect;
xspi->bitbang.setup_transfer = xilinx_spi_setup_transfer;
xspi->bitbang.txrx_bufs = xilinx_spi_txrx_bufs;
@ -444,9 +444,9 @@ static int xilinx_spi_probe(struct platform_device *pdev)
if (IS_ERR(xspi->regs))
return PTR_ERR(xspi->regs);
master->bus_num = pdev->id;
master->num_chipselect = num_cs;
master->dev.of_node = pdev->dev.of_node;
host->bus_num = pdev->id;
host->num_chipselect = num_cs;
host->dev.of_node = pdev->dev.of_node;
/*
* Detect endianess on the IP via loop bit in CR. Detection
@ -466,7 +466,7 @@ static int xilinx_spi_probe(struct platform_device *pdev)
xspi->write_fn = xspi_write32_be;
}
master->bits_per_word_mask = SPI_BPW_MASK(bits_per_word);
host->bits_per_word_mask = SPI_BPW_MASK(bits_per_word);
xspi->bytes_per_word = bits_per_word / 8;
xspi->buffer_size = xilinx_spi_find_buffer_size(xspi);
@ -496,17 +496,17 @@ static int xilinx_spi_probe(struct platform_device *pdev)
if (pdata) {
for (i = 0; i < pdata->num_devices; i++)
spi_new_device(master, pdata->devices + i);
spi_new_device(host, pdata->devices + i);
}
platform_set_drvdata(pdev, master);
platform_set_drvdata(pdev, host);
return 0;
}
static void xilinx_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct xilinx_spi *xspi = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct xilinx_spi *xspi = spi_controller_get_devdata(host);
void __iomem *regs_base = xspi->regs;
spi_bitbang_stop(&xspi->bitbang);
@ -516,7 +516,7 @@ static void xilinx_spi_remove(struct platform_device *pdev)
/* Disable the global IPIF interrupt */
xspi->write_fn(0, regs_base + XIPIF_V123B_DGIER_OFFSET);
spi_master_put(xspi->bitbang.master);
spi_controller_put(xspi->bitbang.master);
}
/* work with hotplug and coldplug */


@ -95,7 +95,7 @@ struct xlp_spi_priv {
int rx_len; /* rx xfer length */
int txerrors; /* TXFIFO underflow count */
int rxerrors; /* RXFIFO overflow count */
int cs; /* slave device chip select */
int cs; /* target device chip select */
u32 spi_clk; /* spi clock frequency */
bool cmd_cont; /* cs active */
struct completion done; /* completion notification */
@ -138,7 +138,7 @@ static int xlp_spi_setup(struct spi_device *spi)
u32 fdiv, cfg;
int cs;
xspi = spi_master_get_devdata(spi->master);
xspi = spi_controller_get_devdata(spi->controller);
cs = spi_get_chipselect(spi, 0);
/*
* The value of fdiv must be between 4 and 65535.
@ -343,17 +343,17 @@ static int xlp_spi_txrx_bufs(struct xlp_spi_priv *xs, struct spi_transfer *t)
return bytesleft;
}
static int xlp_spi_transfer_one(struct spi_master *master,
static int xlp_spi_transfer_one(struct spi_controller *host,
struct spi_device *spi,
struct spi_transfer *t)
{
struct xlp_spi_priv *xspi = spi_master_get_devdata(master);
struct xlp_spi_priv *xspi = spi_controller_get_devdata(host);
int ret = 0;
xspi->cs = spi_get_chipselect(spi, 0);
xspi->dev = spi->dev;
if (spi_transfer_is_last(master, t))
if (spi_transfer_is_last(host, t))
xspi->cmd_cont = 0;
else
xspi->cmd_cont = 1;
@ -361,13 +361,13 @@ static int xlp_spi_transfer_one(struct spi_master *master,
if (xlp_spi_txrx_bufs(xspi, t))
ret = -EIO;
spi_finalize_current_transfer(master);
spi_finalize_current_transfer(host);
return ret;
}
static int xlp_spi_probe(struct platform_device *pdev)
{
struct spi_master *master;
struct spi_controller *host;
struct xlp_spi_priv *xspi;
struct clk *clk;
int irq, err;
@ -398,28 +398,28 @@ static int xlp_spi_probe(struct platform_device *pdev)
xspi->spi_clk = clk_get_rate(clk);
master = spi_alloc_master(&pdev->dev, 0);
if (!master) {
dev_err(&pdev->dev, "could not alloc master\n");
host = spi_alloc_host(&pdev->dev, 0);
if (!host) {
dev_err(&pdev->dev, "could not alloc host\n");
return -ENOMEM;
}
master->bus_num = 0;
master->num_chipselect = XLP_SPI_MAX_CS;
master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
master->setup = xlp_spi_setup;
master->transfer_one = xlp_spi_transfer_one;
master->dev.of_node = pdev->dev.of_node;
host->bus_num = 0;
host->num_chipselect = XLP_SPI_MAX_CS;
host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
host->setup = xlp_spi_setup;
host->transfer_one = xlp_spi_transfer_one;
host->dev.of_node = pdev->dev.of_node;
init_completion(&xspi->done);
spi_master_set_devdata(master, xspi);
spi_controller_set_devdata(host, xspi);
xlp_spi_sysctl_setup(xspi);
/* register spi controller */
err = devm_spi_register_master(&pdev->dev, master);
err = devm_spi_register_controller(&pdev->dev, host);
if (err) {
dev_err(&pdev->dev, "spi register master failed!\n");
spi_master_put(master);
dev_err(&pdev->dev, "spi register host failed!\n");
spi_controller_put(host);
return err;
}


@ -53,7 +53,7 @@ static inline void xtfpga_spi_wait_busy(struct xtfpga_spi *xspi)
static u32 xtfpga_spi_txrx_word(struct spi_device *spi, unsigned nsecs,
u32 v, u8 bits, unsigned flags)
{
struct xtfpga_spi *xspi = spi_master_get_devdata(spi->master);
struct xtfpga_spi *xspi = spi_controller_get_devdata(spi->controller);
xspi->data = (xspi->data << bits) | (v & GENMASK(bits - 1, 0));
xspi->data_sz += bits;
@ -71,7 +71,7 @@ static u32 xtfpga_spi_txrx_word(struct spi_device *spi, unsigned nsecs,
static void xtfpga_spi_chipselect(struct spi_device *spi, int is_on)
{
struct xtfpga_spi *xspi = spi_master_get_devdata(spi->master);
struct xtfpga_spi *xspi = spi_controller_get_devdata(spi->controller);
WARN_ON(xspi->data_sz != 0);
xspi->data_sz = 0;
@ -81,19 +81,19 @@ static int xtfpga_spi_probe(struct platform_device *pdev)
{
struct xtfpga_spi *xspi;
int ret;
struct spi_master *master;
struct spi_controller *host;
master = devm_spi_alloc_master(&pdev->dev, sizeof(struct xtfpga_spi));
if (!master)
host = devm_spi_alloc_host(&pdev->dev, sizeof(struct xtfpga_spi));
if (!host)
return -ENOMEM;
master->flags = SPI_CONTROLLER_NO_RX;
master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 16);
master->bus_num = pdev->dev.id;
master->dev.of_node = pdev->dev.of_node;
host->flags = SPI_CONTROLLER_NO_RX;
host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 16);
host->bus_num = pdev->dev.id;
host->dev.of_node = pdev->dev.of_node;
xspi = spi_master_get_devdata(master);
xspi->bitbang.master = master;
xspi = spi_controller_get_devdata(host);
xspi->bitbang.master = host;
xspi->bitbang.chipselect = xtfpga_spi_chipselect;
xspi->bitbang.txrx_word[SPI_MODE_0] = xtfpga_spi_txrx_word;
xspi->regs = devm_platform_ioremap_resource(pdev, 0);
@ -113,17 +113,17 @@ static int xtfpga_spi_probe(struct platform_device *pdev)
return ret;
}
platform_set_drvdata(pdev, master);
platform_set_drvdata(pdev, host);
return 0;
}
static void xtfpga_spi_remove(struct platform_device *pdev)
{
struct spi_master *master = platform_get_drvdata(pdev);
struct xtfpga_spi *xspi = spi_master_get_devdata(master);
struct spi_controller *host = platform_get_drvdata(pdev);
struct xtfpga_spi *xspi = spi_controller_get_devdata(host);
spi_bitbang_stop(&xspi->bitbang);
spi_master_put(master);
spi_controller_put(host);
}
MODULE_ALIAS("platform:" XTFPGA_SPI_NAME);


@ -54,10 +54,10 @@
#define ZYNQ_QSPI_CONFIG_MSTREN_MASK BIT(0) /* Master Mode */
/*
* QSPI Configuration Register - Baud rate and slave select
* QSPI Configuration Register - Baud rate and target select
*
* These are the values used in the calculation of baud rate divisor and
* setting the slave select.
* setting the target select.
*/
#define ZYNQ_QSPI_CONFIG_BAUD_DIV_MAX GENMASK(2, 0) /* Baud rate maximum */
#define ZYNQ_QSPI_CONFIG_BAUD_DIV_SHIFT 3 /* Baud rate divisor shift */
@ -164,14 +164,14 @@ static inline void zynq_qspi_write(struct zynq_qspi *xqspi, u32 offset,
*
* The default settings of the QSPI controller's configurable parameters on
* reset are
* - Master mode
* - Host mode
* - Baud rate divisor is set to 2
* - Tx threshold set to 1, Rx threshold set to 32
* - Flash memory interface mode enabled
* - Size of the word to be transferred as 8 bit
* This function performs the following actions
* - Disable and clear all the interrupts
* - Enable manual slave select
* - Enable manual target select
* - Enable manual start
* - Deselect all the chip select lines
* - Set the size of the word to be transferred as 32 bit
@ -289,7 +289,7 @@ static void zynq_qspi_txfifo_op(struct zynq_qspi *xqspi, unsigned int size)
*/
static void zynq_qspi_chipselect(struct spi_device *spi, bool assert)
{
struct spi_controller *ctlr = spi->master;
struct spi_controller *ctlr = spi->controller;
struct zynq_qspi *xqspi = spi_controller_get_devdata(ctlr);
u32 config_reg;
@ -377,7 +377,7 @@ static int zynq_qspi_config_op(struct zynq_qspi *xqspi, struct spi_device *spi)
*/
static int zynq_qspi_setup_op(struct spi_device *spi)
{
struct spi_controller *ctlr = spi->master;
struct spi_controller *ctlr = spi->controller;
struct zynq_qspi *qspi = spi_controller_get_devdata(ctlr);
if (ctlr->busy)
@ -525,7 +525,7 @@ static irqreturn_t zynq_qspi_irq(int irq, void *dev_id)
static int zynq_qspi_exec_mem_op(struct spi_mem *mem,
const struct spi_mem_op *op)
{
struct zynq_qspi *xqspi = spi_controller_get_devdata(mem->spi->master);
struct zynq_qspi *xqspi = spi_controller_get_devdata(mem->spi->controller);
int err = 0, i;
u8 *tmpbuf;
@ -637,7 +637,7 @@ static int zynq_qspi_probe(struct platform_device *pdev)
struct zynq_qspi *xqspi;
u32 num_cs;
ctlr = spi_alloc_master(&pdev->dev, sizeof(*xqspi));
ctlr = spi_alloc_host(&pdev->dev, sizeof(*xqspi));
if (!ctlr)
return -ENOMEM;
@ -647,14 +647,14 @@ static int zynq_qspi_probe(struct platform_device *pdev)
xqspi->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(xqspi->regs)) {
ret = PTR_ERR(xqspi->regs);
goto remove_master;
goto remove_ctlr;
}
xqspi->pclk = devm_clk_get(&pdev->dev, "pclk");
if (IS_ERR(xqspi->pclk)) {
dev_err(&pdev->dev, "pclk clock not found.\n");
ret = PTR_ERR(xqspi->pclk);
goto remove_master;
goto remove_ctlr;
}
init_completion(&xqspi->data_completion);
@ -663,13 +663,13 @@ static int zynq_qspi_probe(struct platform_device *pdev)
if (IS_ERR(xqspi->refclk)) {
dev_err(&pdev->dev, "ref_clk clock not found.\n");
ret = PTR_ERR(xqspi->refclk);
goto remove_master;
goto remove_ctlr;
}
ret = clk_prepare_enable(xqspi->pclk);
if (ret) {
dev_err(&pdev->dev, "Unable to enable APB clock.\n");
goto remove_master;
goto remove_ctlr;
}
ret = clk_prepare_enable(xqspi->refclk);
@ -715,7 +715,7 @@ static int zynq_qspi_probe(struct platform_device *pdev)
ret = devm_spi_register_controller(&pdev->dev, ctlr);
if (ret) {
dev_err(&pdev->dev, "spi_register_master failed\n");
dev_err(&pdev->dev, "devm_spi_register_controller failed\n");
goto clk_dis_all;
}
@ -725,7 +725,7 @@ clk_dis_all:
clk_disable_unprepare(xqspi->refclk);
clk_dis_pclk:
clk_disable_unprepare(xqspi->pclk);
remove_master:
remove_ctlr:
spi_controller_put(ctlr);
return ret;


@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Xilinx Zynq UltraScale+ MPSoC Quad-SPI (QSPI) controller driver
* (master mode only)
* (host mode only)
*
* Copyright (C) 2009 - 2015 Xilinx, Inc.
*/
@ -235,21 +235,21 @@ static inline void zynqmp_gqspi_write(struct zynqmp_qspi *xqspi, u32 offset,
}
/**
* zynqmp_gqspi_selectslave - For selection of slave device
* zynqmp_gqspi_selecttarget - For selection of target device
* @instanceptr: Pointer to the zynqmp_qspi structure
* @slavecs: For chip select
* @slavebus: To check which bus is selected- upper or lower
* @targetcs: For chip select
* @targetbus: To check which bus is selected- upper or lower
*/
static void zynqmp_gqspi_selectslave(struct zynqmp_qspi *instanceptr,
u8 slavecs, u8 slavebus)
static void zynqmp_gqspi_selecttarget(struct zynqmp_qspi *instanceptr,
u8 targetcs, u8 targetbus)
{
/*
* Bus and CS lines selected here will be updated in the instance and
* used for subsequent GENFIFO entries during transfer.
*/
/* Choose slave select line */
switch (slavecs) {
/* Choose target select line */
switch (targetcs) {
case GQSPI_SELECT_FLASH_CS_BOTH:
instanceptr->genfifocs = GQSPI_GENFIFO_CS_LOWER |
GQSPI_GENFIFO_CS_UPPER;
@ -261,11 +261,11 @@ static void zynqmp_gqspi_selectslave(struct zynqmp_qspi *instanceptr,
instanceptr->genfifocs = GQSPI_GENFIFO_CS_LOWER;
break;
default:
dev_warn(instanceptr->dev, "Invalid slave select\n");
dev_warn(instanceptr->dev, "Invalid target select\n");
}
/* Choose the bus */
switch (slavebus) {
switch (targetbus) {
case GQSPI_SELECT_FLASH_BUS_BOTH:
instanceptr->genfifobus = GQSPI_GENFIFO_BUS_LOWER |
GQSPI_GENFIFO_BUS_UPPER;
@ -277,7 +277,7 @@ static void zynqmp_gqspi_selectslave(struct zynqmp_qspi *instanceptr,
instanceptr->genfifobus = GQSPI_GENFIFO_BUS_LOWER;
break;
default:
dev_warn(instanceptr->dev, "Invalid slave bus\n");
dev_warn(instanceptr->dev, "Invalid target bus\n");
}
}
@ -337,13 +337,13 @@ static void zynqmp_qspi_set_tapdelay(struct zynqmp_qspi *xqspi, u32 baudrateval)
*
* The default settings of the QSPI controller's configurable parameters on
* reset are
* - Master mode
* - Host mode
* - TX threshold set to 1
* - RX threshold set to 1
* - Flash memory interface mode enabled
* This function performs the following actions
* - Disable and clear all the interrupts
* - Enable manual slave select
* - Enable manual target select
* - Enable manual start
* - Deselect all the chip select lines
* - Set the little endian mode of TX FIFO
@ -426,9 +426,9 @@ static void zynqmp_qspi_init_hw(struct zynqmp_qspi *xqspi)
GQSPI_RX_FIFO_THRESHOLD);
zynqmp_gqspi_write(xqspi, GQSPI_GF_THRESHOLD_OFST,
GQSPI_GEN_FIFO_THRESHOLD_RESET_VAL);
zynqmp_gqspi_selectslave(xqspi,
GQSPI_SELECT_FLASH_CS_LOWER,
GQSPI_SELECT_FLASH_BUS_LOWER);
zynqmp_gqspi_selecttarget(xqspi,
GQSPI_SELECT_FLASH_CS_LOWER,
GQSPI_SELECT_FLASH_BUS_LOWER);
/* Initialize DMA */
zynqmp_gqspi_write(xqspi,
GQSPI_QSPIDMA_DST_CTRL_OFST,
@ -459,7 +459,7 @@ static void zynqmp_qspi_copy_read_data(struct zynqmp_qspi *xqspi,
*/
static void zynqmp_qspi_chipselect(struct spi_device *qspi, bool is_high)
{
struct zynqmp_qspi *xqspi = spi_master_get_devdata(qspi->master);
struct zynqmp_qspi *xqspi = spi_controller_get_devdata(qspi->controller);
ulong timeout;
u32 genfifoentry = 0, statusreg;
@ -594,7 +594,7 @@ static int zynqmp_qspi_config_op(struct zynqmp_qspi *xqspi,
*/
static int zynqmp_qspi_setup_op(struct spi_device *qspi)
{
struct spi_controller *ctlr = qspi->master;
struct spi_controller *ctlr = qspi->controller;
struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
if (ctlr->busy)
@ -1048,7 +1048,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
const struct spi_mem_op *op)
{
struct zynqmp_qspi *xqspi = spi_controller_get_devdata
(mem->spi->master);
(mem->spi->controller);
int err = 0, i;
u32 genfifoentry = 0;
u16 opcode = op->cmd.opcode;
@ -1224,7 +1224,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
u32 num_cs;
const struct qspi_platform_data *p_data;
ctlr = spi_alloc_master(&pdev->dev, sizeof(*xqspi));
ctlr = spi_alloc_host(&pdev->dev, sizeof(*xqspi));
if (!ctlr)
return -ENOMEM;
@ -1240,27 +1240,27 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
xqspi->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(xqspi->regs)) {
ret = PTR_ERR(xqspi->regs);
goto remove_master;
goto remove_ctlr;
}
xqspi->pclk = devm_clk_get(&pdev->dev, "pclk");
if (IS_ERR(xqspi->pclk)) {
dev_err(dev, "pclk clock not found.\n");
ret = PTR_ERR(xqspi->pclk);
goto remove_master;
goto remove_ctlr;
}
xqspi->refclk = devm_clk_get(&pdev->dev, "ref_clk");
if (IS_ERR(xqspi->refclk)) {
dev_err(dev, "ref_clk clock not found.\n");
ret = PTR_ERR(xqspi->refclk);
goto remove_master;
goto remove_ctlr;
}
ret = clk_prepare_enable(xqspi->pclk);
if (ret) {
dev_err(dev, "Unable to enable APB clock.\n");
goto remove_master;
goto remove_ctlr;
}
ret = clk_prepare_enable(xqspi->refclk);
@ -1346,7 +1346,7 @@ clk_dis_all:
clk_disable_unprepare(xqspi->refclk);
clk_dis_pclk:
clk_disable_unprepare(xqspi->pclk);
remove_master:
remove_ctlr:
spi_controller_put(ctlr);
return ret;


@ -612,10 +612,21 @@ static int spi_dev_check(struct device *dev, void *data)
{
struct spi_device *spi = to_spi_device(dev);
struct spi_device *new_spi = data;
int idx, nw_idx;
u8 cs, cs_nw;
if (spi->controller == new_spi->controller &&
spi_get_chipselect(spi, 0) == spi_get_chipselect(new_spi, 0))
return -EBUSY;
if (spi->controller == new_spi->controller) {
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
cs = spi_get_chipselect(spi, idx);
for (nw_idx = 0; nw_idx < SPI_CS_CNT_MAX; nw_idx++) {
cs_nw = spi_get_chipselect(new_spi, nw_idx);
if (cs != 0xFF && cs_nw != 0xFF && cs == cs_nw) {
dev_err(dev, "chipselect %d already in use\n", cs_nw);
return -EBUSY;
}
}
}
}
return 0;
}
@ -629,13 +640,32 @@ static int __spi_add_device(struct spi_device *spi)
{
struct spi_controller *ctlr = spi->controller;
struct device *dev = ctlr->dev.parent;
int status;
int status, idx, nw_idx;
u8 cs, nw_cs;
/* Chipselects are numbered 0..max; validate. */
if (spi_get_chipselect(spi, 0) >= ctlr->num_chipselect) {
dev_err(dev, "cs%d >= max %d\n", spi_get_chipselect(spi, 0),
ctlr->num_chipselect);
return -EINVAL;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
/* Chipselects are numbered 0..max; validate. */
cs = spi_get_chipselect(spi, idx);
if (cs != 0xFF && cs >= ctlr->num_chipselect) {
dev_err(dev, "cs%d >= max %d\n", spi_get_chipselect(spi, idx),
ctlr->num_chipselect);
return -EINVAL;
}
}
/*
* Make sure that multiple logical CS doesn't map to the same physical CS.
* For example, spi->chip_select[0] != spi->chip_select[1] and so on.
*/
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
cs = spi_get_chipselect(spi, idx);
for (nw_idx = idx + 1; nw_idx < SPI_CS_CNT_MAX; nw_idx++) {
nw_cs = spi_get_chipselect(spi, nw_idx);
if (cs != 0xFF && nw_cs != 0xFF && cs == nw_cs) {
dev_err(dev, "chipselect %d already in use\n", nw_cs);
return -EBUSY;
}
}
}
/* Set the bus ID string */
@ -647,11 +677,8 @@ static int __spi_add_device(struct spi_device *spi)
* its configuration.
*/
status = bus_for_each_dev(&spi_bus_type, NULL, spi, spi_dev_check);
if (status) {
dev_err(dev, "chipselect %d already in use\n",
spi_get_chipselect(spi, 0));
if (status)
return status;
}
/* Controller may unregister concurrently */
if (IS_ENABLED(CONFIG_SPI_DYNAMIC) &&
@ -659,8 +686,15 @@ static int __spi_add_device(struct spi_device *spi)
return -ENODEV;
}
if (ctlr->cs_gpiods)
spi_set_csgpiod(spi, 0, ctlr->cs_gpiods[spi_get_chipselect(spi, 0)]);
if (ctlr->cs_gpiods) {
u8 cs;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
cs = spi_get_chipselect(spi, idx);
if (cs != 0xFF)
spi_set_csgpiod(spi, idx, ctlr->cs_gpiods[cs]);
}
}
/*
* Drivers may modify this initial i/o setup, but will
@ -701,6 +735,9 @@ int spi_add_device(struct spi_device *spi)
struct spi_controller *ctlr = spi->controller;
int status;
/* Set the bus ID string */
spi_dev_set_name(spi);
mutex_lock(&ctlr->add_lock);
status = __spi_add_device(spi);
mutex_unlock(&ctlr->add_lock);
@ -727,6 +764,7 @@ struct spi_device *spi_new_device(struct spi_controller *ctlr,
{
struct spi_device *proxy;
int status;
u8 idx;
/*
* NOTE: caller did any chip->bus_num checks necessary.
@ -742,6 +780,18 @@ struct spi_device *spi_new_device(struct spi_controller *ctlr,
WARN_ON(strlen(chip->modalias) >= sizeof(proxy->modalias));
/*
* Zero(0) is a valid physical CS value and can be located at any
* logical CS in the spi->chip_select[]. If all the physical CS
* are initialized to 0 then it would be difficult to differentiate
* between a valid physical CS 0 & an unused logical CS whose physical
* CS can be 0. As a solution to this issue initialize all the CS to 0xFF.
* Now all the unused logical CS will have 0xFF physical CS value & can be
* ignored while performing physical CS validity checks.
*/
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
spi_set_chipselect(proxy, idx, 0xFF);
spi_set_chipselect(proxy, 0, chip->chip_select);
proxy->max_speed_hz = chip->max_speed_hz;
proxy->mode = chip->mode;
@ -750,6 +800,15 @@ struct spi_device *spi_new_device(struct spi_controller *ctlr,
proxy->dev.platform_data = (void *) chip->platform_data;
proxy->controller_data = chip->controller_data;
proxy->controller_state = NULL;
/*
* spi->chip_select[i] gives the corresponding physical CS for logical CS i.
* A logical CS is selected by setting the ith bit in spi->cs_index_mask.
* So, for example, if spi->cs_index_mask = 0x01 then logical CS number 0 is used and
* spi->chip_select[0] will give the physical CS.
* By default spi->chip_select[0] holds the physical CS number, so set
* spi->cs_index_mask to 0x01.
*/
proxy->cs_index_mask = 0x01;
if (chip->swnode) {
status = device_add_software_node(&proxy->dev, chip->swnode);
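For readers new to the multiple chip select scheme, the two comments above boil down to this: spi->chip_select[] holds physical CS numbers with 0xFF marking an unused logical slot, and spi->cs_index_mask says which logical slots take part in a transfer. Below is a minimal sketch of a consumer of that convention; it assumes the SPI_CS_CNT_MAX definition added elsewhere in this series, and hyp_apply_cs() is a made-up stand-in for however a controller drives one physical select line, not a real kernel API.

    /*
     * Illustrative sketch only: walk a device's chip-select table the way
     * the comments above describe.  Slots left at the 0xFF sentinel, or not
     * set in cs_index_mask, are skipped.
     */
    static void hyp_assert_active_cs(const u8 *chip_select, u32 cs_index_mask,
                                     void (*hyp_apply_cs)(u8 physical_cs))
    {
            int idx;

            for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
                    if (!((cs_index_mask >> idx) & 0x01))
                            continue;       /* logical slot not selected */
                    if (chip_select[idx] == 0xFF)
                            continue;       /* slot was never assigned */
                    hyp_apply_cs(chip_select[idx]);
            }
    }

In the patch itself this pattern shows up in spi_set_cs(), which walks cs_index_mask before toggling each chip-select GPIO descriptor.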
@ -942,32 +1001,51 @@ static void spi_res_release(struct spi_controller *ctlr, struct spi_message *mes
}
/*-------------------------------------------------------------------------*/
static inline bool spi_is_last_cs(struct spi_device *spi)
{
u8 idx;
bool last = false;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
if ((spi->cs_index_mask >> idx) & 0x01) {
if (spi->controller->last_cs[idx] == spi_get_chipselect(spi, idx))
last = true;
}
}
return last;
}
static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
{
bool activate = enable;
u8 idx;
/*
* Avoid calling into the driver (or doing delays) if the chip select
* isn't actually changing from the last time this was called.
*/
if (!force && ((enable && spi->controller->last_cs == spi_get_chipselect(spi, 0)) ||
(!enable && spi->controller->last_cs != spi_get_chipselect(spi, 0))) &&
if (!force && ((enable && spi->controller->last_cs_index_mask == spi->cs_index_mask &&
spi_is_last_cs(spi)) ||
(!enable && spi->controller->last_cs_index_mask == spi->cs_index_mask &&
!spi_is_last_cs(spi))) &&
(spi->controller->last_cs_mode_high == (spi->mode & SPI_CS_HIGH)))
return;
trace_spi_set_cs(spi, activate);
spi->controller->last_cs = enable ? spi_get_chipselect(spi, 0) : -1;
spi->controller->last_cs_index_mask = spi->cs_index_mask;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
spi->controller->last_cs[idx] = enable ? spi_get_chipselect(spi, 0) : -1;
spi->controller->last_cs_mode_high = spi->mode & SPI_CS_HIGH;
if ((spi_get_csgpiod(spi, 0) || !spi->controller->set_cs_timing) && !activate)
spi_delay_exec(&spi->cs_hold, NULL);
if (spi->mode & SPI_CS_HIGH)
enable = !enable;
if (spi_get_csgpiod(spi, 0)) {
if (spi_is_csgpiod(spi)) {
if (!spi->controller->set_cs_timing && !activate)
spi_delay_exec(&spi->cs_hold, NULL);
if (!(spi->mode & SPI_NO_CS)) {
/*
* Historically ACPI has no means of the GPIO polarity and
@ -979,26 +1057,38 @@ static void spi_set_cs(struct spi_device *spi, bool enable, bool force)
* ambiguity. That's why we use enable, that takes SPI_CS_HIGH
* into account.
*/
if (has_acpi_companion(&spi->dev))
gpiod_set_value_cansleep(spi_get_csgpiod(spi, 0), !enable);
else
/* Polarity handled by GPIO library */
gpiod_set_value_cansleep(spi_get_csgpiod(spi, 0), activate);
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
if (((spi->cs_index_mask >> idx) & 0x01) &&
spi_get_csgpiod(spi, idx)) {
if (has_acpi_companion(&spi->dev))
gpiod_set_value_cansleep(spi_get_csgpiod(spi, idx),
!enable);
else
/* Polarity handled by GPIO library */
gpiod_set_value_cansleep(spi_get_csgpiod(spi, idx),
activate);
if (activate)
spi_delay_exec(&spi->cs_setup, NULL);
else
spi_delay_exec(&spi->cs_inactive, NULL);
}
}
}
/* Some SPI masters need both GPIO CS & slave_select */
if ((spi->controller->flags & SPI_CONTROLLER_GPIO_SS) &&
spi->controller->set_cs)
spi->controller->set_cs(spi, !enable);
if (!spi->controller->set_cs_timing) {
if (activate)
spi_delay_exec(&spi->cs_setup, NULL);
else
spi_delay_exec(&spi->cs_inactive, NULL);
}
} else if (spi->controller->set_cs) {
spi->controller->set_cs(spi, !enable);
}
if (spi_get_csgpiod(spi, 0) || !spi->controller->set_cs_timing) {
if (activate)
spi_delay_exec(&spi->cs_setup, NULL);
else
spi_delay_exec(&spi->cs_inactive, NULL);
}
}
#ifdef CONFIG_HAS_DMA
@@ -1361,6 +1451,9 @@ static int spi_transfer_wait(struct spi_controller *ctlr,
"SPI transfer timed out\n");
return -ETIMEDOUT;
}
if (xfer->error & SPI_TRANS_FAIL_IO)
return -EIO;
}
return 0;
@@ -2222,8 +2315,8 @@ static void of_spi_parse_dt_cs_delay(struct device_node *nc,
static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi,
struct device_node *nc)
{
u32 value;
int rc;
u32 value, cs[SPI_CS_CNT_MAX];
int rc, idx;
/* Mode (clock phase/polarity/etc.) */
if (of_property_read_bool(nc, "spi-cpha"))
@@ -2295,14 +2388,53 @@ static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi,
return 0;
}
if (ctlr->num_chipselect > SPI_CS_CNT_MAX) {
dev_err(&ctlr->dev, "No. of CS is more than max. no. of supported CS\n");
return -EINVAL;
}
/*
* Zero(0) is a valid physical CS value and can be located at any
* logical CS in spi->chip_select[]. If all the physical CS were
* initialized to 0 it would be difficult to differentiate between
* a valid physical CS 0 and an unused logical CS whose physical CS
* could also be 0. To avoid this, initialize all the CS to 0xFF.
* Unused logical CS then carry the 0xFF physical CS value and can be
* ignored when performing physical CS validity checks.
*/
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
spi_set_chipselect(spi, idx, 0xFF);
/* Device address */
rc = of_property_read_u32(nc, "reg", &value);
if (rc) {
rc = of_property_read_variable_u32_array(nc, "reg", &cs[0], 1,
SPI_CS_CNT_MAX);
if (rc < 0) {
dev_err(&ctlr->dev, "%pOF has no valid 'reg' property (%d)\n",
nc, rc);
return rc;
}
spi_set_chipselect(spi, 0, value);
if (rc > ctlr->num_chipselect) {
dev_err(&ctlr->dev, "%pOF has number of CS > ctlr->num_chipselect (%d)\n",
nc, rc);
return rc;
}
if ((of_property_read_bool(nc, "parallel-memories")) &&
(!(ctlr->flags & SPI_CONTROLLER_MULTI_CS))) {
dev_err(&ctlr->dev, "SPI controller doesn't support multi CS\n");
return -EINVAL;
}
for (idx = 0; idx < rc; idx++)
spi_set_chipselect(spi, idx, cs[idx]);
/*
* spi->chip_select[i] gives the corresponding physical CS for logical CS i.
* The logical CS number is represented by setting the ith bit in spi->cs_index_mask.
* So, for example, if spi->cs_index_mask = 0x01, the logical CS number is 0
* and spi->chip_select[0] gives the physical CS.
* By default spi->chip_select[0] holds the physical CS number, so set
* spi->cs_index_mask to 0x01.
*/
spi->cs_index_mask = 0x01;
/* Device speed */
if (!of_property_read_u32(nc, "spi-max-frequency", &value))
@@ -2408,6 +2540,7 @@ struct spi_device *spi_new_ancillary_device(struct spi_device *spi,
struct spi_controller *ctlr = spi->controller;
struct spi_device *ancillary;
int rc = 0;
u8 idx;
/* Alloc an spi_device */
ancillary = spi_alloc_device(ctlr);
@@ -2418,12 +2551,33 @@ struct spi_device *spi_new_ancillary_device(struct spi_device *spi,
strscpy(ancillary->modalias, "dummy", sizeof(ancillary->modalias));
/*
* Zero(0) is a valid physical CS value and can be located at any
* logical CS in spi->chip_select[]. If all the physical CS were
* initialized to 0 it would be difficult to differentiate between
* a valid physical CS 0 and an unused logical CS whose physical CS
* could also be 0. To avoid this, initialize all the CS to 0xFF.
* Unused logical CS then carry the 0xFF physical CS value and can be
* ignored when performing physical CS validity checks.
*/
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
spi_set_chipselect(ancillary, idx, 0xFF);
/* Use provided chip-select for ancillary device */
spi_set_chipselect(ancillary, 0, chip_select);
/* Take over SPI mode/speed from SPI main device */
ancillary->max_speed_hz = spi->max_speed_hz;
ancillary->mode = spi->mode;
/*
* spi->chip_select[i] gives the corresponding physical CS for logical CS i.
* The logical CS number is represented by setting the ith bit in spi->cs_index_mask.
* So, for example, if spi->cs_index_mask = 0x01, the logical CS number is 0
* and spi->chip_select[0] gives the physical CS.
* By default spi->chip_select[0] holds the physical CS number, so set
* spi->cs_index_mask to 0x01.
*/
ancillary->cs_index_mask = 0x01;
WARN_ON(!mutex_is_locked(&ctlr->add_lock));
@@ -2626,6 +2780,7 @@ struct spi_device *acpi_spi_device_alloc(struct spi_controller *ctlr,
struct acpi_spi_lookup lookup = {};
struct spi_device *spi;
int ret;
u8 idx;
if (!ctlr && index == -1)
return ERR_PTR(-EINVAL);
@@ -2661,12 +2816,33 @@ struct spi_device *acpi_spi_device_alloc(struct spi_controller *ctlr,
return ERR_PTR(-ENOMEM);
}
/*
* Zero(0) is a valid physical CS value and can be located at any
* logical CS in spi->chip_select[]. If all the physical CS were
* initialized to 0 it would be difficult to differentiate between
* a valid physical CS 0 and an unused logical CS whose physical CS
* could also be 0. To avoid this, initialize all the CS to 0xFF.
* Unused logical CS then carry the 0xFF physical CS value and can be
* ignored when performing physical CS validity checks.
*/
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
spi_set_chipselect(spi, idx, 0xFF);
ACPI_COMPANION_SET(&spi->dev, adev);
spi->max_speed_hz = lookup.max_speed_hz;
spi->mode |= lookup.mode;
spi->irq = lookup.irq;
spi->bits_per_word = lookup.bits_per_word;
spi_set_chipselect(spi, 0, lookup.chip_select);
/*
* spi->chip_select[i] gives the corresponding physical CS for logical CS i.
* The logical CS number is represented by setting the ith bit in spi->cs_index_mask.
* So, for example, if spi->cs_index_mask = 0x01, the logical CS number is 0
* and spi->chip_select[0] gives the physical CS.
* By default spi->chip_select[0] holds the physical CS number, so set
* spi->cs_index_mask to 0x01.
*/
spi->cs_index_mask = 0x01;
return spi;
}
@@ -3100,6 +3276,7 @@ int spi_register_controller(struct spi_controller *ctlr)
struct boardinfo *bi;
int first_dynamic;
int status;
int idx;
if (!dev)
return -ENODEV;
@@ -3164,7 +3341,8 @@ int spi_register_controller(struct spi_controller *ctlr)
}
/* Setting last_cs to -1 means no chip selected */
ctlr->last_cs = -1;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
ctlr->last_cs[idx] = -1;
status = device_add(&ctlr->dev);
if (status < 0)
@@ -3889,7 +4067,7 @@ static int __spi_validate(struct spi_device *spi, struct spi_message *message)
* cs_change is set for each transfer.
*/
if ((spi->mode & SPI_CS_WORD) && (!(ctlr->mode_bits & SPI_CS_WORD) ||
spi_get_csgpiod(spi, 0))) {
spi_is_csgpiod(spi))) {
size_t maxsize = BITS_TO_BYTES(spi->bits_per_word);
int ret;
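
Taken together, the core changes above give each spi_device an array of physical chip selects plus a cs_index_mask naming which logical selects to assert; of_spi_parse_dt() fills the array from "reg" and defaults the mask to bit 0. The sketch below shows the resulting data model for a two-memory "parallel-memories" node; the helper name is invented and the final mask widening only illustrates what a multi-CS user would do, it is not code from this series:

/*
 * Illustration only: data model for a node with reg = <0 1> and
 * "parallel-memories" on a SPI_CONTROLLER_MULTI_CS capable controller.
 */
static void example_parallel_cs_setup(struct spi_device *spi)
{
	u8 idx;

	/* Core default: unused logical CS slots carry the 0xFF marker */
	for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
		spi_set_chipselect(spi, idx, 0xFF);

	spi_set_chipselect(spi, 0, 0);	/* logical CS 0 -> physical CS 0 */
	spi_set_chipselect(spi, 1, 1);	/* logical CS 1 -> physical CS 1 */

	/*
	 * The core defaults cs_index_mask to 0x01; a multi-CS user widens
	 * it so both physical selects toggle on every transfer.
	 */
	spi->cs_index_mask = BIT(0) | BIT(1);
}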

drivers/usb/gadget/udc/max3420_udc.c

@@ -1201,7 +1201,7 @@ static int max3420_probe(struct spi_device *spi)
int err, irq;
u8 reg[8];
if (spi->master->flags & SPI_MASTER_HALF_DUPLEX) {
if (spi->master->flags & SPI_CONTROLLER_HALF_DUPLEX) {
dev_err(&spi->dev, "UDC needs full duplex to work\n");
return -EINVAL;
}
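
The max3420 fix above only swaps the flag name; the spi->master pointer it reads is itself part of the compatibility layer. Purely as an illustration (not something this series changes), the fully modernised form of the same check would be:

	if (spi->controller->flags & SPI_CONTROLLER_HALF_DUPLEX) {
		dev_err(&spi->dev, "UDC needs full duplex to work\n");
		return -EINVAL;
	}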

include/linux/spi/spi-mem.h

@@ -233,6 +233,8 @@ static inline void *spi_mem_get_drvdata(struct spi_mem *mem)
* limitations)
* @supports_op: check if an operation is supported by the controller
* @exec_op: execute a SPI memory operation
* Not all drivers provide supports_op(), so exec_op() can return
* -EOPNOTSUPP if the op is not supported by the driver/controller.
* @get_name: get a custom name for the SPI mem device from the controller.
* This might be needed if the controller driver has been ported
* to use the SPI mem layer and a custom name is used to keep
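
The kernel-doc addition above notes that exec_op() may itself reject an operation when a controller provides no supports_op(). A minimal sketch of such a callback; the driver name and the length limit are invented purely to show the -EOPNOTSUPP path:

#include <linux/spi/spi-mem.h>

static int example_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
{
	/*
	 * This hypothetical controller has no supports_op(), so operations
	 * it cannot perform are rejected here, as the updated documentation
	 * permits.
	 */
	if (op->data.dir == SPI_MEM_DATA_OUT && op->data.nbytes > 256)
		return -EOPNOTSUPP;

	/* ... program the controller and run the operation ... */
	return 0;
}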

include/linux/spi/spi.h

@@ -20,6 +20,9 @@
#include <uapi/linux/spi/spi.h>
/* Max no. of CS supported per spi device */
#define SPI_CS_CNT_MAX 4
struct dma_chan;
struct software_node;
struct ptp_system_timestamp;
@@ -132,7 +135,8 @@ extern void spi_transfer_cs_change_delay_exec(struct spi_message *msg,
* @max_speed_hz: Maximum clock rate to be used with this chip
* (on this board); may be changed by the device's driver.
* The spi_transfer.speed_hz can override this for each transfer.
* @chip_select: Chipselect, distinguishing chips handled by @controller.
* @chip_select: Array of physical chipselects; spi->chip_select[i] gives
* the corresponding physical CS for logical CS i.
* @mode: The spi mode defines how data is clocked out and in.
* This may be changed by the device's driver.
* The "active low" default for chipselect mode can be overridden
@@ -157,8 +161,8 @@ extern void spi_transfer_cs_change_delay_exec(struct spi_message *msg,
* the device will bind to the named driver and only the named driver.
* Do not set directly, because core frees it; use driver_set_override() to
* set or clear it.
* @cs_gpiod: GPIO descriptor of the chipselect line (optional, NULL when
* not using a GPIO line)
* @cs_gpiod: Array of GPIO descriptors of the corresponding chipselect lines
* (optional, NULL when not using a GPIO line)
* @word_delay: delay to be inserted between consecutive
* words of a transfer
* @cs_setup: delay to be introduced by the controller after CS is asserted
@@ -167,6 +171,7 @@ extern void spi_transfer_cs_change_delay_exec(struct spi_message *msg,
* deasserted. If @cs_change_delay is used from @spi_transfer, then the
* two delays will be added up.
* @pcpu_statistics: statistics for the spi_device
* @cs_index_mask: Bit mask of the active chipselect(s) in the chipselect array
*
* A @spi_device is used to interchange data between an SPI slave
* (usually a discrete chip) and CPU memory.
@@ -182,7 +187,7 @@ struct spi_device {
struct spi_controller *controller;
struct spi_controller *master; /* Compatibility layer */
u32 max_speed_hz;
u8 chip_select;
u8 chip_select[SPI_CS_CNT_MAX];
u8 bits_per_word;
bool rt;
#define SPI_NO_TX BIT(31) /* No transmit wire */
@@ -213,7 +218,7 @@ struct spi_device {
void *controller_data;
char modalias[SPI_NAME_SIZE];
const char *driver_override;
struct gpio_desc *cs_gpiod; /* Chip select GPIO descriptor */
struct gpio_desc *cs_gpiod[SPI_CS_CNT_MAX]; /* Chip select gpio desc */
struct spi_delay word_delay; /* Inter-word delay */
/* CS delays */
struct spi_delay cs_setup;
@@ -223,6 +228,13 @@ struct spi_device {
/* The statistics */
struct spi_statistics __percpu *pcpu_statistics;
/* Bit mask of the chipselect(s) that the driver needs to use from
* the chipselect array. When the controller is capable of handling
* multiple chip selects and memories are connected in parallel,
* more than one bit needs to be set in cs_index_mask.
*/
u32 cs_index_mask : SPI_CS_CNT_MAX;
/*
* Likely need more hooks for more protocol options affecting how
* the controller talks to each chip, like:
@@ -279,22 +291,33 @@ static inline void *spi_get_drvdata(const struct spi_device *spi)
static inline u8 spi_get_chipselect(const struct spi_device *spi, u8 idx)
{
return spi->chip_select;
return spi->chip_select[idx];
}
static inline void spi_set_chipselect(struct spi_device *spi, u8 idx, u8 chipselect)
{
spi->chip_select = chipselect;
spi->chip_select[idx] = chipselect;
}
static inline struct gpio_desc *spi_get_csgpiod(const struct spi_device *spi, u8 idx)
{
return spi->cs_gpiod;
return spi->cs_gpiod[idx];
}
static inline void spi_set_csgpiod(struct spi_device *spi, u8 idx, struct gpio_desc *csgpiod)
{
spi->cs_gpiod = csgpiod;
spi->cs_gpiod[idx] = csgpiod;
}
static inline bool spi_is_csgpiod(struct spi_device *spi)
{
u8 idx;
for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
if (spi_get_csgpiod(spi, idx))
return true;
}
return false;
}
/**
@@ -399,6 +422,8 @@ extern struct spi_device *spi_new_ancillary_device(struct spi_device *spi, u8 ch
* @bus_lock_spinlock: spinlock for SPI bus locking
* @bus_lock_mutex: mutex for exclusion of multiple callers
* @bus_lock_flag: indicates that the SPI bus is locked for exclusive use
* @multi_cs_cap: indicates that the SPI Controller can assert/de-assert
* more than one chip select at once.
* @setup: updates the device mode and clocking records used by a
* device's SPI controller; protocol code may call this. This
* must fail if an unrecognized or unsupported mode is requested.
@@ -461,10 +486,13 @@ extern struct spi_device *spi_new_ancillary_device(struct spi_device *spi, u8 ch
* - return 1 if the transfer is still in progress. When
* the driver is finished with this transfer it must
* call spi_finalize_current_transfer() so the subsystem
* can issue the next transfer. Note: transfer_one and
* transfer_one_message are mutually exclusive; when both
* are set, the generic subsystem does not call your
* transfer_one callback.
* can issue the next transfer. If the transfer fails, the
* driver must set the flag SPI_TRANS_FAIL_IO to
* spi_transfer->error first, before calling
* spi_finalize_current_transfer().
* Note: transfer_one and transfer_one_message are mutually
* exclusive; when both are set, the generic subsystem does
* not call your transfer_one callback.
* @handle_err: the subsystem calls the driver to handle an error that occurs
* in the generic implementation of transfer_one_message().
* @mem_ops: optimized/dedicated operations for interactions with SPI memory.
@@ -567,6 +595,11 @@ struct spi_controller {
#define SPI_CONTROLLER_MUST_TX BIT(4) /* Requires tx */
#define SPI_CONTROLLER_GPIO_SS BIT(5) /* GPIO CS must select slave */
#define SPI_CONTROLLER_SUSPENDED BIT(6) /* Currently suspended */
/*
* The SPI controller has multi-chip-select capability and can
* assert/de-assert more than one chip select at once.
*/
#define SPI_CONTROLLER_MULTI_CS BIT(7)
/* Flag indicating if the allocation of this struct is devres-managed */
bool devm_allocated;
@@ -677,7 +710,8 @@ struct spi_controller {
bool rt;
bool auto_runtime_pm;
bool cur_msg_mapped;
char last_cs;
char last_cs[SPI_CS_CNT_MAX];
char last_cs_index_mask;
bool last_cs_mode_high;
bool fallback;
struct completion xfer_completion;
@@ -1040,6 +1074,7 @@ struct spi_transfer {
unsigned len;
#define SPI_TRANS_FAIL_NO_START BIT(0)
#define SPI_TRANS_FAIL_IO BIT(1)
u16 error;
dma_addr_t tx_dma;
@@ -1638,8 +1673,6 @@ spi_transfer_is_last(struct spi_controller *ctlr, struct spi_transfer *xfer)
/* Compatibility layer */
#define spi_master spi_controller
#define SPI_MASTER_HALF_DUPLEX SPI_CONTROLLER_HALF_DUPLEX
#define spi_master_get_devdata(_ctlr) spi_controller_get_devdata(_ctlr)
#define spi_master_set_devdata(_ctlr, _data) \
spi_controller_set_devdata(_ctlr, _data)
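
For the new SPI_TRANS_FAIL_IO flag documented above, an interrupt-driven controller driver is expected to mark the failing transfer before finalizing it, so that spi_transfer_wait() returns -EIO. A rough sketch of that completion path; the wrapper name and its arguments are invented:

/* Completion path of a hypothetical interrupt-driven controller driver */
static void example_complete_xfer(struct spi_controller *ctlr,
				  struct spi_transfer *xfer, bool io_failed)
{
	if (io_failed)
		xfer->error |= SPI_TRANS_FAIL_IO;	/* spi_transfer_wait() -> -EIO */

	/* Set the error flag first, then finalize, per the kernel-doc above */
	spi_finalize_current_transfer(ctlr);
}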

sound/pci/hda/cs35l56_hda_spi.c

@@ -33,7 +33,7 @@ static int cs35l56_hda_spi_probe(struct spi_device *spi)
return ret;
}
ret = cs35l56_hda_common_probe(cs35l56, spi->chip_select);
ret = cs35l56_hda_common_probe(cs35l56, spi_get_chipselect(spi, 0));
if (ret)
return ret;
ret = cs35l56_irq_request(&cs35l56->base, spi->irq);
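
With chip_select now an array, callers can no longer read the field directly; client drivers such as cs35l56 above go through spi_get_chipselect() with an explicit logical index. A trivial before/after sketch with an invented probe function:

static int example_probe(struct spi_device *spi)
{
	/*
	 * Old form, no longer valid once chip_select became an array:
	 *	u8 cs = spi->chip_select;
	 */
	u8 cs = spi_get_chipselect(spi, 0);	/* physical CS of logical CS 0 */

	dev_info(&spi->dev, "probed on chip select %u\n", cs);
	return 0;
}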