omap: redo the driver from legacy to mailbox api
zynqmp: enable bufferless IPI
arm: add mhu-v3 driver
common: convert from tasklet to BH workqueue
qcom: MSM8974 APCS compatible

Merge tag 'mailbox-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox

Pull mailbox updates from Jassi Brar:

 - redo the omap driver from legacy to mailbox api

 - enable bufferless IPI for zynqmp

 - add mhu-v3 driver

 - convert from tasklet to BH workqueue

 - add qcom MSM8974 APCS compatible IDs

* tag 'mailbox-v6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox: (24 commits)
  dt-bindings: mailbox: qcom-ipcc: Document the SDX75 IPCC
  dt-bindings: mailbox: qcom: Add MSM8974 APCS compatible
  mailbox: Convert from tasklet to BH workqueue
  mailbox: mtk-cmdq: Fix pm_runtime_get_sync() warning in mbox shutdown
  mailbox: mtk-cmdq-mailbox: fix module autoloading
  mailbox: zynqmp: handle SGI for shared IPI
  mailbox: arm_mhuv3: Add driver
  dt-bindings: mailbox: arm,mhuv3: Add bindings
  mailbox: omap: Remove kernel FIFO message queuing
  mailbox: omap: Reverse FIFO busy check logic
  mailbox: omap: Remove mbox_chan_to_omap_mbox()
  mailbox: omap: Use mbox_controller channel list directly
  mailbox: omap: Use function local struct mbox_controller
  mailbox: omap: Merge mailbox child node setup loops
  mailbox: omap: Use devm_pm_runtime_enable() helper
  mailbox: omap: Remove device class
  mailbox: omap: Remove unneeded header omap-mailbox.h
  mailbox: omap: Move fifo size check to point of use
  mailbox: omap: Move omap_mbox_irq_t into driver
  mailbox: omap: Remove unused omap_mbox_request_channel() function
  ...
Linus Torvalds 2024-05-21 10:40:06 -07:00
commit 34dcc46610
14 changed files with 1855 additions and 501 deletions


@ -0,0 +1,224 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mailbox/arm,mhuv3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: ARM MHUv3 Mailbox Controller
maintainers:
- Sudeep Holla <sudeep.holla@arm.com>
- Cristian Marussi <cristian.marussi@arm.com>
description: |
The Arm Message Handling Unit (MHU) Version 3 is a mailbox controller that
enables unidirectional communications with remote processors through various
possible transport protocols.
The controller can optionally support a varying number of extensions that, in
turn, enable different kinds of transport to be used for communication.
Number, type and characteristics of each supported extension can be discovered
dynamically at runtime.
Given the unidirectional nature of the controller, an MHUv3 mailbox controller
is composed of a MHU Sender (MHUS) containing a PostBox (PBX) block and a MHU
Receiver (MHUR) containing a MailBox (MBX) block, where
PBX is used to
- Configure the MHU
- Send Transfers to the Receiver
- Optionally receive acknowledgment of a Transfer from the Receiver
MBX is used to
- Configure the MHU
- Receive Transfers from the Sender
- Optionally acknowledge Transfers sent by the Sender
Both PBX and MBX need to be present and defined in the DT description if you
need to establish a bidirectional communication, since you will have to
acquire two distinct unidirectional channels, one for each block.
As a consequence, both blocks need to be represented separately and specified
as distinct DT nodes in order to properly describe their resources.
Note, though, that thanks to runtime discoverability there is no need to
identify the type of block with distinct compatibles.
Following are the MHUv3 possible extensions.
- Doorbell Extension (DBE): DBE defines a type of channel called a Doorbell
Channel (DBCH). DBCH enables a single bit Transfer to be sent from the
Sender to Receiver. The Transfer indicates that an event has occurred.
When DBE is implemented, the number of DBCHs that an implementation of the
MHU can support is between 1 and 128, numbered starting from 0 in ascending
order and discoverable at run-time.
Each DBCH contains 32 individual fields, referred to as flags, each of which
can be used independently. It is possible for the Sender to send multiple
Transfers at once using a single DBCH, so long as each Transfer uses
a different flag in the DBCH.
Optionally, data may be transmitted through an out-of-band shared memory
region, wherein the MHU Doorbell is used strictly as an interrupt generation
mechanism, but this is out of the scope of these bindings.
- FastChannel Extension (FCE): FCE defines a type of channel called a Fast
Channel (FCH). FCH is intended for lower overhead communication between
Sender and Receiver at the expense of determinism. An FCH allows the Sender
to update the channel value at any time, regardless of whether the previous
value has been seen by the Receiver. When the Receiver reads the channel's
content it gets the last value written to the channel.
FCH is considered lossy in nature, which means that the Sender has no way of
knowing if, or when, the Receiver will act on the Transfer.
FCHs are expected to behave as RAM which generates interrupts when writes
occur to the locations within the RAM.
When FCE is implemented, the number of FCHs that an implementation of the
MHU can support is between 1-1024, if the FastChannel word-size is 32-bits,
or between 1-512, when the FastChannel word-size is 64-bits.
FCHs are numbered from 0 in ascending order.
Note that the number of FCHs and the word-size are implementation defined,
not configurable but discoverable at run-time.
Optionally, data may be transmitted through an out-of-band shared memory
region, wherein the MHU FastChannel is used as an interrupt generation
mechanism which carries also a pointer to such out-of-band data, but this
is out of the scope of these bindings.
- FIFO Extension (FE): FE defines a Channel type called a FIFO Channel (FFCH).
FFCH allows a Sender to send
- Multiple Transfers to the Receiver without having to wait for the
previous Transfer to be acknowledged by the Receiver, as long as the
FIFO has room for the Transfer.
- Transfers which require the Receiver to provide acknowledgment.
- Transfers which have in-band payload.
In all cases, the data is guaranteed to be observed by the Receiver in the
same order in which the Sender sent it.
When FE is implemented, the number of FFCHs that an implementation of the
MHU can support is between 1 and 64, numbered starting from 0 in ascending
order. The number of FFCHs, their depth (same for all implemented FFCHs) and
the access-granularity are implementation defined, not configurable but
discoverable at run-time.
Optionally, additional data may be transmitted through an out-of-band shared
memory region, wherein the MHU FIFO is used to transmit, in order, a small
part of the payload (like a header) and a reference to the shared memory
area holding the remaining, bigger, chunk of the payload, but this is out of
the scope of these bindings.
properties:
compatible:
const: arm,mhuv3
reg:
maxItems: 1
interrupts:
minItems: 1
maxItems: 74
interrupt-names:
description: |
The MHUv3 controller generates a number of events some of which are used
to generate interrupts; as a consequence it can expose a varying number of
optional PBX/MBX interrupts, representing the events generated during the
operation of the various transport protocols associated with different
extensions. All interrupts of the MHU are level-sensitive.
Some of these optional interrupts are defined per-channel, where the
number of channels effectively available is implementation defined and
run-time discoverable.
In the following, names are enumerated using patterns, with per-channel
interrupts implicitly capped at the maximum number of channels allowed by the
specification for each extension type.
For the sake of simplicity, maxItems is capped at a plausible number, assuming
far fewer channels would be implemented than are actually possible.
The only mandatory interrupts on the MHU are:
- combined
- mbx-fch-xfer-<N> but only if mbx-fcgrp-xfer-<N> is not implemented.
minItems: 1
maxItems: 74
items:
oneOf:
- const: combined
description: PBX/MBX Combined interrupt
- const: combined-ffch
description: PBX/MBX FIFO Combined interrupt
- pattern: '^ffch-low-tide-[0-9]+$'
description: PBX/MBX FIFO Channel <N> Low Tide interrupt
- pattern: '^ffch-high-tide-[0-9]+$'
description: PBX/MBX FIFO Channel <N> High Tide interrupt
- pattern: '^ffch-flush-[0-9]+$'
description: PBX/MBX FIFO Channel <N> Flush interrupt
- pattern: '^mbx-dbch-xfer-[0-9]+$'
description: MBX Doorbell Channel <N> Transfer interrupt
- pattern: '^mbx-fch-xfer-[0-9]+$'
description: MBX FastChannel <N> Transfer interrupt
- pattern: '^mbx-fchgrp-xfer-[0-9]+$'
description: MBX FastChannel <N> Group Transfer interrupt
- pattern: '^mbx-ffch-xfer-[0-9]+$'
description: MBX FIFO Channel <N> Transfer interrupt
- pattern: '^pbx-dbch-xfer-ack-[0-9]+$'
description: PBX Doorbell Channel <N> Transfer Ack interrupt
- pattern: '^pbx-ffch-xfer-ack-[0-9]+$'
description: PBX FIFO Channel <N> Transfer Ack interrupt
'#mbox-cells':
description: |
The first argument in the consumer's 'mboxes' property represents the
extension type, the second is the channel number, and the third depends
on the extension type.
Extension types constants are defined in <dt-bindings/arm/mhuv3-dt.h>.
Extension type for DBE is DBE_EXT and the third parameter represents the
doorbell flag number to use.
Extension type for FCE is FCE_EXT, third parameter unused.
Extension type for FE is FE_EXT, third parameter unused.
mboxes = <&mhu DBE_EXT 0 5>; // DBE, Doorbell Channel Window 0, doorbell 5.
mboxes = <&mhu DBE_EXT 7>; // DBE, Doorbell Channel Window 1, doorbell 7.
mboxes = <&mhu FCE_EXT 0 0>; // FCE, FastChannel Window 0.
mboxes = <&mhu FCE_EXT 3 0>; // FCE, FastChannel Window 3.
mboxes = <&mhu FE_EXT 1 0>; // FE, FIFO Channel Window 1.
mboxes = <&mhu FE_EXT 7 0>; // FE, FIFO Channel Window 7.
const: 3
clocks:
maxItems: 1
required:
- compatible
- reg
- interrupts
- interrupt-names
- '#mbox-cells'
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
mailbox@2aaa0000 {
compatible = "arm,mhuv3";
#mbox-cells = <3>;
reg = <0 0x2aaa0000 0 0x10000>;
clocks = <&clock 0>;
interrupt-names = "combined", "pbx-dbch-xfer-ack-1",
"ffch-high-tide-0";
interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
};
mailbox@2ab00000 {
compatible = "arm,mhuv3";
#mbox-cells = <3>;
reg = <0 0x2aab0000 0 0x10000>;
clocks = <&clock 0>;
interrupt-names = "combined", "mbx-dbch-xfer-1", "ffch-low-tide-0";
interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
};
};
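
For orientation, consumers of this controller go through the generic mailbox framework rather than any MHUv3-specific API. Below is a minimal sketch of a hypothetical kernel client ringing a doorbell channel; the function name, the channel index and the NULL payload are illustrative assumptions, while mbox_request_channel(), mbox_send_message() and mbox_free_channel() are the standard mailbox client calls.

/*
 * Hedged sketch: request the first 'mboxes' entry of this device,
 * e.g. mboxes = <&mhu DBE_EXT 0 5>, and ring it once.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/mailbox_client.h>

static int my_ring_doorbell(struct device *dev)
{
	struct mbox_client cl = { };	/* normally embedded in driver data */
	struct mbox_chan *chan;
	int ret;

	cl.dev = dev;
	cl.tx_block = true;		/* wait for the Transfer to complete */
	cl.tx_tout = 500;		/* timeout in ms */

	chan = mbox_request_channel(&cl, 0);
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	/*
	 * The payload a channel expects is controller specific; NULL is
	 * used here only as an assumption for a plain doorbell Transfer
	 * that carries no in-band data.
	 */
	ret = mbox_send_message(chan, NULL);
	mbox_free_channel(chan);

	return ret < 0 ? ret : 0;
}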


@ -30,6 +30,7 @@ properties:
- const: syscon
- items:
- enum:
- qcom,msm8974-apcs-kpss-global
- qcom,msm8976-apcs-kpss-global
- const: qcom,msm8994-apcs-kpss-global
- const: syscon


@ -28,6 +28,7 @@ properties:
- qcom,sa8775p-ipcc
- qcom,sc7280-ipcc
- qcom,sc8280xp-ipcc
- qcom,sdx75-ipcc
- qcom,sm6350-ipcc
- qcom,sm6375-ipcc
- qcom,sm8250-ipcc


@ -13195,6 +13195,15 @@ F: Documentation/devicetree/bindings/mailbox/arm,mhuv2.yaml
F: drivers/mailbox/arm_mhuv2.c
F: include/linux/mailbox/arm_mhuv2_message.h
MAILBOX ARM MHUv3
M: Sudeep Holla <sudeep.holla@arm.com>
M: Cristian Marussi <cristian.marussi@arm.com>
L: linux-kernel@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/mailbox/arm,mhuv3.yaml
F: drivers/mailbox/arm_mhuv3.c
MAN-PAGES: MANUAL PAGES FOR LINUX -- Sections 2, 3, 4, 5, and 7
M: Alejandro Colomar <alx@kernel.org>
L: linux-man@vger.kernel.org


@ -23,6 +23,18 @@ config ARM_MHU_V2
Say Y here if you want to build the ARM MHUv2 controller driver,
which provides unidirectional mailboxes between processing elements.
config ARM_MHU_V3
tristate "ARM MHUv3 Mailbox"
depends on HAS_IOMEM || COMPILE_TEST
depends on OF
help
Say Y here if you want to build the ARM MHUv3 controller driver,
which provides unidirectional mailboxes between processing elements.
ARM MHUv3 controllers can implement a varying number of extensions
that provide different means of transport: supported extensions
will be discovered and possibly managed at probe-time.
config IMX_MBOX
tristate "i.MX Mailbox"
depends on ARCH_MXC || COMPILE_TEST
@ -68,15 +80,6 @@ config OMAP2PLUS_MBOX
OMAP2/3; or IPU, IVA HD and DSP in OMAP4/5. Say Y here if you
want to use OMAP2+ Mailbox framework support.
config OMAP_MBOX_KFIFO_SIZE
int "Mailbox kfifo default buffer size (bytes)"
depends on OMAP2PLUS_MBOX
default 256
help
Specify the default size of mailbox's kfifo buffers (bytes).
This can also be changed at runtime (via the mbox_kfifo_size
module parameter).
config ROCKCHIP_MBOX
bool "Rockchip Soc Integrated Mailbox Support"
depends on ARCH_ROCKCHIP || COMPILE_TEST


@ -9,6 +9,8 @@ obj-$(CONFIG_ARM_MHU) += arm_mhu.o arm_mhu_db.o
obj-$(CONFIG_ARM_MHU_V2) += arm_mhuv2.o
obj-$(CONFIG_ARM_MHU_V3) += arm_mhuv3.o
obj-$(CONFIG_IMX_MBOX) += imx-mailbox.o
obj-$(CONFIG_ARMADA_37XX_RWTM_MBOX) += armada-37xx-rwtm-mailbox.o

drivers/mailbox/arm_mhuv3.c: new file, 1103 lines (diff suppressed because it is too large)


@ -43,6 +43,7 @@
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/workqueue.h>
#define PDC_SUCCESS 0
@ -293,8 +294,8 @@ struct pdc_state {
unsigned int pdc_irq;
/* tasklet for deferred processing after DMA rx interrupt */
struct tasklet_struct rx_tasklet;
/* work for deferred processing after DMA rx interrupt */
struct work_struct rx_work;
/* Number of bytes of receive status prior to each rx frame */
u32 rx_status_len;
@ -952,18 +953,18 @@ static irqreturn_t pdc_irq_handler(int irq, void *data)
iowrite32(intstatus, pdcs->pdc_reg_vbase + PDC_INTSTATUS_OFFSET);
/* Wakeup IRQ thread */
tasklet_schedule(&pdcs->rx_tasklet);
queue_work(system_bh_wq, &pdcs->rx_work);
return IRQ_HANDLED;
}
/**
* pdc_tasklet_cb() - Tasklet callback that runs the deferred processing after
* pdc_work_cb() - Work callback that runs the deferred processing after
* a DMA receive interrupt. Reenables the receive interrupt.
* @t: Pointer to the Altera sSGDMA channel structure
*/
static void pdc_tasklet_cb(struct tasklet_struct *t)
static void pdc_work_cb(struct work_struct *t)
{
struct pdc_state *pdcs = from_tasklet(pdcs, t, rx_tasklet);
struct pdc_state *pdcs = from_work(pdcs, t, rx_work);
pdc_receive(pdcs);
@ -1577,8 +1578,8 @@ static int pdc_probe(struct platform_device *pdev)
pdc_hw_init(pdcs);
/* Init tasklet for deferred DMA rx processing */
tasklet_setup(&pdcs->rx_tasklet, pdc_tasklet_cb);
/* Init work for deferred DMA rx processing */
INIT_WORK(&pdcs->rx_work, pdc_work_cb);
err = pdc_interrupts_init(pdcs);
if (err)
@ -1595,7 +1596,7 @@ static int pdc_probe(struct platform_device *pdev)
return PDC_SUCCESS;
cleanup_buf_pool:
tasklet_kill(&pdcs->rx_tasklet);
cancel_work_sync(&pdcs->rx_work);
dma_pool_destroy(pdcs->rx_buf_pool);
cleanup_ring_pool:
@ -1611,7 +1612,7 @@ static void pdc_remove(struct platform_device *pdev)
pdc_free_debugfs();
tasklet_kill(&pdcs->rx_tasklet);
cancel_work_sync(&pdcs->rx_work);
pdc_hw_disable(pdcs);


@ -21,6 +21,7 @@
#include <linux/pm_runtime.h>
#include <linux/suspend.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include "mailbox.h"
@ -80,7 +81,7 @@ struct imx_mu_con_priv {
char irq_desc[IMX_MU_CHAN_NAME_SIZE];
enum imx_mu_chan_type type;
struct mbox_chan *chan;
struct tasklet_struct txdb_tasklet;
struct work_struct txdb_work;
};
struct imx_mu_priv {
@ -232,7 +233,7 @@ static int imx_mu_generic_tx(struct imx_mu_priv *priv,
break;
case IMX_MU_TYPE_TXDB:
imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
tasklet_schedule(&cp->txdb_tasklet);
queue_work(system_bh_wq, &cp->txdb_work);
break;
case IMX_MU_TYPE_TXDB_V2:
imx_mu_xcr_rmw(priv, IMX_MU_GCR, IMX_MU_xCR_GIRn(priv->dcfg->type, cp->idx), 0);
@ -420,7 +421,7 @@ static int imx_mu_seco_tx(struct imx_mu_priv *priv, struct imx_mu_con_priv *cp,
}
/* Simulate hack for mbox framework */
tasklet_schedule(&cp->txdb_tasklet);
queue_work(system_bh_wq, &cp->txdb_work);
break;
default:
@ -484,9 +485,9 @@ exit:
return err;
}
static void imx_mu_txdb_tasklet(unsigned long data)
static void imx_mu_txdb_work(struct work_struct *t)
{
struct imx_mu_con_priv *cp = (struct imx_mu_con_priv *)data;
struct imx_mu_con_priv *cp = from_work(cp, t, txdb_work);
mbox_chan_txdone(cp->chan, 0);
}
@ -570,8 +571,7 @@ static int imx_mu_startup(struct mbox_chan *chan)
if (cp->type == IMX_MU_TYPE_TXDB) {
/* Tx doorbell don't have ACK support */
tasklet_init(&cp->txdb_tasklet, imx_mu_txdb_tasklet,
(unsigned long)cp);
INIT_WORK(&cp->txdb_work, imx_mu_txdb_work);
return 0;
}
@ -615,7 +615,7 @@ static void imx_mu_shutdown(struct mbox_chan *chan)
}
if (cp->type == IMX_MU_TYPE_TXDB) {
tasklet_kill(&cp->txdb_tasklet);
cancel_work_sync(&cp->txdb_work);
pm_runtime_put_sync(priv->dev);
return;
}
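
Both conversions above follow the same mechanical pattern: tasklet_setup()/tasklet_schedule()/tasklet_kill() become INIT_WORK()/queue_work(system_bh_wq, ...)/cancel_work_sync(), and from_tasklet() becomes from_work(). A condensed, self-contained sketch of that pattern, with a made-up my_state structure standing in for the real driver state:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct my_state {
	struct work_struct rx_work;	/* was: struct tasklet_struct rx_tasklet */
};

static void my_rx_work(struct work_struct *t)
{
	/* was: from_tasklet(st, t, rx_tasklet) */
	struct my_state *st = from_work(st, t, rx_work);

	/* deferred RX processing runs here, still in BH context */
	(void)st;
}

static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_state *st = data;

	/* was: tasklet_schedule(&st->rx_tasklet) */
	queue_work(system_bh_wq, &st->rx_work);
	return IRQ_HANDLED;
}

static void my_init(struct my_state *st)
{
	/* was: tasklet_setup(&st->rx_tasklet, my_rx_cb) */
	INIT_WORK(&st->rx_work, my_rx_work);
}

static void my_teardown(struct my_state *st)
{
	/* was: tasklet_kill(&st->rx_tasklet) */
	cancel_work_sync(&st->rx_work);
}

Because system_bh_wq items still execute in softirq context, the deferred handler's existing locking assumptions carry over unchanged.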


@ -465,7 +465,7 @@ static void cmdq_mbox_shutdown(struct mbox_chan *chan)
struct cmdq_task *task, *tmp;
unsigned long flags;
WARN_ON(pm_runtime_get_sync(cmdq->mbox.dev));
WARN_ON(pm_runtime_get_sync(cmdq->mbox.dev) < 0);
spin_lock_irqsave(&thread->chan->lock, flags);
if (list_empty(&thread->task_busy_list))
@ -765,6 +765,7 @@ static const struct of_device_id cmdq_of_ids[] = {
{.compatible = "mediatek,mt8195-gce", .data = (void *)&gce_plat_mt8195},
{}
};
MODULE_DEVICE_TABLE(of, cmdq_of_ids);
static struct platform_driver cmdq_drv = {
.probe = cmdq_probe,


@ -19,7 +19,6 @@
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/omap-mailbox.h>
#include <linux/mailbox_controller.h>
#include <linux/mailbox_client.h>
@ -51,6 +50,11 @@
#define MBOX_INTR_CFG_TYPE1 0
#define MBOX_INTR_CFG_TYPE2 1
typedef enum {
IRQ_TX = 1,
IRQ_RX = 2,
} omap_mbox_irq_t;
struct omap_mbox_fifo {
unsigned long msg;
unsigned long fifo_stat;
@ -61,14 +65,6 @@ struct omap_mbox_fifo {
u32 intr_bit;
};
struct omap_mbox_queue {
spinlock_t lock;
struct kfifo fifo;
struct work_struct work;
struct omap_mbox *mbox;
bool full;
};
struct omap_mbox_match_data {
u32 intr_type;
};
@ -81,29 +77,11 @@ struct omap_mbox_device {
u32 num_users;
u32 num_fifos;
u32 intr_type;
struct omap_mbox **mboxes;
struct mbox_controller controller;
struct list_head elem;
};
struct omap_mbox_fifo_info {
int tx_id;
int tx_usr;
int tx_irq;
int rx_id;
int rx_usr;
int rx_irq;
const char *name;
bool send_no_irq;
};
struct omap_mbox {
const char *name;
int irq;
struct omap_mbox_queue *rxq;
struct device *dev;
struct omap_mbox_device *parent;
struct omap_mbox_fifo tx_fifo;
struct omap_mbox_fifo rx_fifo;
@ -112,22 +90,6 @@ struct omap_mbox {
bool send_no_irq;
};
/* global variables for the mailbox devices */
static DEFINE_MUTEX(omap_mbox_devices_lock);
static LIST_HEAD(omap_mbox_devices);
static unsigned int mbox_kfifo_size = CONFIG_OMAP_MBOX_KFIFO_SIZE;
module_param(mbox_kfifo_size, uint, S_IRUGO);
MODULE_PARM_DESC(mbox_kfifo_size, "Size of omap's mailbox kfifo (bytes)");
static struct omap_mbox *mbox_chan_to_omap_mbox(struct mbox_chan *chan)
{
if (!chan || !chan->con_priv)
return NULL;
return (struct omap_mbox *)chan->con_priv;
}
static inline
unsigned int mbox_read_reg(struct omap_mbox_device *mdev, size_t ofs)
{
@ -197,7 +159,7 @@ static int is_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
return (int)(enable & status & bit);
}
static void _omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
static void omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
{
u32 l;
struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ?
@ -210,7 +172,7 @@ static void _omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
mbox_write_reg(mbox->parent, l, irqenable);
}
static void _omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
static void omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
{
struct omap_mbox_fifo *fifo = (irq == IRQ_TX) ?
&mbox->tx_fifo : &mbox->rx_fifo;
@ -227,87 +189,27 @@ static void _omap_mbox_disable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
mbox_write_reg(mbox->parent, bit, irqdisable);
}
void omap_mbox_enable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
if (WARN_ON(!mbox))
return;
_omap_mbox_enable_irq(mbox, irq);
}
EXPORT_SYMBOL(omap_mbox_enable_irq);
void omap_mbox_disable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
if (WARN_ON(!mbox))
return;
_omap_mbox_disable_irq(mbox, irq);
}
EXPORT_SYMBOL(omap_mbox_disable_irq);
/*
* Message receiver(workqueue)
*/
static void mbox_rx_work(struct work_struct *work)
{
struct omap_mbox_queue *mq =
container_of(work, struct omap_mbox_queue, work);
mbox_msg_t data;
u32 msg;
int len;
while (kfifo_len(&mq->fifo) >= sizeof(msg)) {
len = kfifo_out(&mq->fifo, (unsigned char *)&msg, sizeof(msg));
WARN_ON(len != sizeof(msg));
data = msg;
mbox_chan_received_data(mq->mbox->chan, (void *)data);
spin_lock_irq(&mq->lock);
if (mq->full) {
mq->full = false;
_omap_mbox_enable_irq(mq->mbox, IRQ_RX);
}
spin_unlock_irq(&mq->lock);
}
}
/*
* Mailbox interrupt handler
*/
static void __mbox_tx_interrupt(struct omap_mbox *mbox)
{
_omap_mbox_disable_irq(mbox, IRQ_TX);
omap_mbox_disable_irq(mbox, IRQ_TX);
ack_mbox_irq(mbox, IRQ_TX);
mbox_chan_txdone(mbox->chan, 0);
}
static void __mbox_rx_interrupt(struct omap_mbox *mbox)
{
struct omap_mbox_queue *mq = mbox->rxq;
u32 msg;
int len;
while (!mbox_fifo_empty(mbox)) {
if (unlikely(kfifo_avail(&mq->fifo) < sizeof(msg))) {
_omap_mbox_disable_irq(mbox, IRQ_RX);
mq->full = true;
goto nomem;
}
msg = mbox_fifo_read(mbox);
len = kfifo_in(&mq->fifo, (unsigned char *)&msg, sizeof(msg));
WARN_ON(len != sizeof(msg));
mbox_chan_received_data(mbox->chan, (void *)(uintptr_t)msg);
}
/* no more messages in the fifo. clear IRQ source. */
/* clear IRQ source. */
ack_mbox_irq(mbox, IRQ_RX);
nomem:
schedule_work(&mbox->rxq->work);
}
static irqreturn_t mbox_interrupt(int irq, void *p)
@ -323,188 +225,34 @@ static irqreturn_t mbox_interrupt(int irq, void *p)
return IRQ_HANDLED;
}
static struct omap_mbox_queue *mbox_queue_alloc(struct omap_mbox *mbox,
void (*work)(struct work_struct *))
{
struct omap_mbox_queue *mq;
if (!work)
return NULL;
mq = kzalloc(sizeof(*mq), GFP_KERNEL);
if (!mq)
return NULL;
spin_lock_init(&mq->lock);
if (kfifo_alloc(&mq->fifo, mbox_kfifo_size, GFP_KERNEL))
goto error;
INIT_WORK(&mq->work, work);
return mq;
error:
kfree(mq);
return NULL;
}
static void mbox_queue_free(struct omap_mbox_queue *q)
{
kfifo_free(&q->fifo);
kfree(q);
}
static int omap_mbox_startup(struct omap_mbox *mbox)
{
int ret = 0;
struct omap_mbox_queue *mq;
mq = mbox_queue_alloc(mbox, mbox_rx_work);
if (!mq)
return -ENOMEM;
mbox->rxq = mq;
mq->mbox = mbox;
ret = request_irq(mbox->irq, mbox_interrupt, IRQF_SHARED,
mbox->name, mbox);
ret = request_threaded_irq(mbox->irq, NULL, mbox_interrupt,
IRQF_ONESHOT, mbox->name, mbox);
if (unlikely(ret)) {
pr_err("failed to register mailbox interrupt:%d\n", ret);
goto fail_request_irq;
return ret;
}
if (mbox->send_no_irq)
mbox->chan->txdone_method = TXDONE_BY_ACK;
_omap_mbox_enable_irq(mbox, IRQ_RX);
omap_mbox_enable_irq(mbox, IRQ_RX);
return 0;
fail_request_irq:
mbox_queue_free(mbox->rxq);
return ret;
}
static void omap_mbox_fini(struct omap_mbox *mbox)
{
_omap_mbox_disable_irq(mbox, IRQ_RX);
omap_mbox_disable_irq(mbox, IRQ_RX);
free_irq(mbox->irq, mbox);
flush_work(&mbox->rxq->work);
mbox_queue_free(mbox->rxq);
}
static struct omap_mbox *omap_mbox_device_find(struct omap_mbox_device *mdev,
const char *mbox_name)
{
struct omap_mbox *_mbox, *mbox = NULL;
struct omap_mbox **mboxes = mdev->mboxes;
int i;
if (!mboxes)
return NULL;
for (i = 0; (_mbox = mboxes[i]); i++) {
if (!strcmp(_mbox->name, mbox_name)) {
mbox = _mbox;
break;
}
}
return mbox;
}
struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl,
const char *chan_name)
{
struct device *dev = cl->dev;
struct omap_mbox *mbox = NULL;
struct omap_mbox_device *mdev;
int ret;
if (!dev)
return ERR_PTR(-ENODEV);
if (dev->of_node) {
pr_err("%s: please use mbox_request_channel(), this API is supported only for OMAP non-DT usage\n",
__func__);
return ERR_PTR(-ENODEV);
}
mutex_lock(&omap_mbox_devices_lock);
list_for_each_entry(mdev, &omap_mbox_devices, elem) {
mbox = omap_mbox_device_find(mdev, chan_name);
if (mbox)
break;
}
mutex_unlock(&omap_mbox_devices_lock);
if (!mbox || !mbox->chan)
return ERR_PTR(-ENOENT);
ret = mbox_bind_client(mbox->chan, cl);
if (ret)
return ERR_PTR(ret);
return mbox->chan;
}
EXPORT_SYMBOL(omap_mbox_request_channel);
static struct class omap_mbox_class = { .name = "mbox", };
static int omap_mbox_register(struct omap_mbox_device *mdev)
{
int ret;
int i;
struct omap_mbox **mboxes;
if (!mdev || !mdev->mboxes)
return -EINVAL;
mboxes = mdev->mboxes;
for (i = 0; mboxes[i]; i++) {
struct omap_mbox *mbox = mboxes[i];
mbox->dev = device_create(&omap_mbox_class, mdev->dev,
0, mbox, "%s", mbox->name);
if (IS_ERR(mbox->dev)) {
ret = PTR_ERR(mbox->dev);
goto err_out;
}
}
mutex_lock(&omap_mbox_devices_lock);
list_add(&mdev->elem, &omap_mbox_devices);
mutex_unlock(&omap_mbox_devices_lock);
ret = devm_mbox_controller_register(mdev->dev, &mdev->controller);
err_out:
if (ret) {
while (i--)
device_unregister(mboxes[i]->dev);
}
return ret;
}
static int omap_mbox_unregister(struct omap_mbox_device *mdev)
{
int i;
struct omap_mbox **mboxes;
if (!mdev || !mdev->mboxes)
return -EINVAL;
mutex_lock(&omap_mbox_devices_lock);
list_del(&mdev->elem);
mutex_unlock(&omap_mbox_devices_lock);
mboxes = mdev->mboxes;
for (i = 0; mboxes[i]; i++)
device_unregister(mboxes[i]->dev);
return 0;
}
static int omap_mbox_chan_startup(struct mbox_chan *chan)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
struct omap_mbox *mbox = chan->con_priv;
struct omap_mbox_device *mdev = mbox->parent;
int ret = 0;
@ -519,7 +267,7 @@ static int omap_mbox_chan_startup(struct mbox_chan *chan)
static void omap_mbox_chan_shutdown(struct mbox_chan *chan)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
struct omap_mbox *mbox = chan->con_priv;
struct omap_mbox_device *mdev = mbox->parent;
mutex_lock(&mdev->cfg_lock);
@ -530,41 +278,40 @@ static void omap_mbox_chan_shutdown(struct mbox_chan *chan)
static int omap_mbox_chan_send_noirq(struct omap_mbox *mbox, u32 msg)
{
int ret = -EBUSY;
if (mbox_fifo_full(mbox))
return -EBUSY;
if (!mbox_fifo_full(mbox)) {
_omap_mbox_enable_irq(mbox, IRQ_RX);
mbox_fifo_write(mbox, msg);
ret = 0;
_omap_mbox_disable_irq(mbox, IRQ_RX);
omap_mbox_enable_irq(mbox, IRQ_RX);
mbox_fifo_write(mbox, msg);
omap_mbox_disable_irq(mbox, IRQ_RX);
/* we must read and ack the interrupt directly from here */
mbox_fifo_read(mbox);
ack_mbox_irq(mbox, IRQ_RX);
}
/* we must read and ack the interrupt directly from here */
mbox_fifo_read(mbox);
ack_mbox_irq(mbox, IRQ_RX);
return ret;
return 0;
}
static int omap_mbox_chan_send(struct omap_mbox *mbox, u32 msg)
{
int ret = -EBUSY;
if (!mbox_fifo_full(mbox)) {
mbox_fifo_write(mbox, msg);
ret = 0;
if (mbox_fifo_full(mbox)) {
/* always enable the interrupt */
omap_mbox_enable_irq(mbox, IRQ_TX);
return -EBUSY;
}
mbox_fifo_write(mbox, msg);
/* always enable the interrupt */
_omap_mbox_enable_irq(mbox, IRQ_TX);
return ret;
omap_mbox_enable_irq(mbox, IRQ_TX);
return 0;
}
static int omap_mbox_chan_send_data(struct mbox_chan *chan, void *data)
{
struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
struct omap_mbox *mbox = chan->con_priv;
int ret;
u32 msg = omap_mbox_message(data);
u32 msg = (u32)(uintptr_t)(data);
if (!mbox)
return -EINVAL;
@ -666,8 +413,9 @@ static struct mbox_chan *omap_mbox_of_xlate(struct mbox_controller *controller,
struct device_node *node;
struct omap_mbox_device *mdev;
struct omap_mbox *mbox;
int i;
mdev = container_of(controller, struct omap_mbox_device, controller);
mdev = dev_get_drvdata(controller->dev);
if (WARN_ON(!mdev))
return ERR_PTR(-EINVAL);
@ -678,22 +426,29 @@ static struct mbox_chan *omap_mbox_of_xlate(struct mbox_controller *controller,
return ERR_PTR(-ENODEV);
}
mbox = omap_mbox_device_find(mdev, node->name);
for (i = 0; i < controller->num_chans; i++) {
mbox = controller->chans[i].con_priv;
if (!strcmp(mbox->name, node->name)) {
of_node_put(node);
return &controller->chans[i];
}
}
of_node_put(node);
return mbox ? mbox->chan : ERR_PTR(-ENOENT);
return ERR_PTR(-ENOENT);
}
static int omap_mbox_probe(struct platform_device *pdev)
{
int ret;
struct mbox_chan *chnls;
struct omap_mbox **list, *mbox, *mboxblk;
struct omap_mbox_fifo_info *finfo, *finfoblk;
struct omap_mbox *mbox;
struct omap_mbox_device *mdev;
struct omap_mbox_fifo *fifo;
struct device_node *node = pdev->dev.of_node;
struct device_node *child;
const struct omap_mbox_match_data *match_data;
struct mbox_controller *controller;
u32 intr_type, info_count;
u32 num_users, num_fifos;
u32 tmp[3];
@ -722,40 +477,6 @@ static int omap_mbox_probe(struct platform_device *pdev)
return -ENODEV;
}
finfoblk = devm_kcalloc(&pdev->dev, info_count, sizeof(*finfoblk),
GFP_KERNEL);
if (!finfoblk)
return -ENOMEM;
finfo = finfoblk;
child = NULL;
for (i = 0; i < info_count; i++, finfo++) {
child = of_get_next_available_child(node, child);
ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
finfo->tx_id = tmp[0];
finfo->tx_irq = tmp[1];
finfo->tx_usr = tmp[2];
ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
finfo->rx_id = tmp[0];
finfo->rx_irq = tmp[1];
finfo->rx_usr = tmp[2];
finfo->name = child->name;
finfo->send_no_irq = of_property_read_bool(child, "ti,mbox-send-noirq");
if (finfo->tx_id >= num_fifos || finfo->rx_id >= num_fifos ||
finfo->tx_usr >= num_users || finfo->rx_usr >= num_users)
return -EINVAL;
}
mdev = devm_kzalloc(&pdev->dev, sizeof(*mdev), GFP_KERNEL);
if (!mdev)
return -ENOMEM;
@ -769,52 +490,67 @@ static int omap_mbox_probe(struct platform_device *pdev)
if (!mdev->irq_ctx)
return -ENOMEM;
/* allocate one extra for marking end of list */
list = devm_kcalloc(&pdev->dev, info_count + 1, sizeof(*list),
GFP_KERNEL);
if (!list)
return -ENOMEM;
chnls = devm_kcalloc(&pdev->dev, info_count + 1, sizeof(*chnls),
GFP_KERNEL);
if (!chnls)
return -ENOMEM;
mboxblk = devm_kcalloc(&pdev->dev, info_count, sizeof(*mbox),
GFP_KERNEL);
if (!mboxblk)
return -ENOMEM;
child = NULL;
for (i = 0; i < info_count; i++) {
int tx_id, tx_irq, tx_usr;
int rx_id, rx_usr;
mbox = devm_kzalloc(&pdev->dev, sizeof(*mbox), GFP_KERNEL);
if (!mbox)
return -ENOMEM;
child = of_get_next_available_child(node, child);
ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
tx_id = tmp[0];
tx_irq = tmp[1];
tx_usr = tmp[2];
ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp,
ARRAY_SIZE(tmp));
if (ret)
return ret;
rx_id = tmp[0];
/* rx_irq = tmp[1]; */
rx_usr = tmp[2];
if (tx_id >= num_fifos || rx_id >= num_fifos ||
tx_usr >= num_users || rx_usr >= num_users)
return -EINVAL;
mbox = mboxblk;
finfo = finfoblk;
for (i = 0; i < info_count; i++, finfo++) {
fifo = &mbox->tx_fifo;
fifo->msg = MAILBOX_MESSAGE(finfo->tx_id);
fifo->fifo_stat = MAILBOX_FIFOSTATUS(finfo->tx_id);
fifo->intr_bit = MAILBOX_IRQ_NOTFULL(finfo->tx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, finfo->tx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, finfo->tx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, finfo->tx_usr);
fifo->msg = MAILBOX_MESSAGE(tx_id);
fifo->fifo_stat = MAILBOX_FIFOSTATUS(tx_id);
fifo->intr_bit = MAILBOX_IRQ_NOTFULL(tx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, tx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, tx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, tx_usr);
fifo = &mbox->rx_fifo;
fifo->msg = MAILBOX_MESSAGE(finfo->rx_id);
fifo->msg_stat = MAILBOX_MSGSTATUS(finfo->rx_id);
fifo->intr_bit = MAILBOX_IRQ_NEWMSG(finfo->rx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, finfo->rx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, finfo->rx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, finfo->rx_usr);
fifo->msg = MAILBOX_MESSAGE(rx_id);
fifo->msg_stat = MAILBOX_MSGSTATUS(rx_id);
fifo->intr_bit = MAILBOX_IRQ_NEWMSG(rx_id);
fifo->irqenable = MAILBOX_IRQENABLE(intr_type, rx_usr);
fifo->irqstatus = MAILBOX_IRQSTATUS(intr_type, rx_usr);
fifo->irqdisable = MAILBOX_IRQDISABLE(intr_type, rx_usr);
mbox->send_no_irq = finfo->send_no_irq;
mbox->send_no_irq = of_property_read_bool(child, "ti,mbox-send-noirq");
mbox->intr_type = intr_type;
mbox->parent = mdev;
mbox->name = finfo->name;
mbox->irq = platform_get_irq(pdev, finfo->tx_irq);
mbox->name = child->name;
mbox->irq = platform_get_irq(pdev, tx_irq);
if (mbox->irq < 0)
return mbox->irq;
mbox->chan = &chnls[i];
chnls[i].con_priv = mbox;
list[i] = mbox++;
}
mutex_init(&mdev->cfg_lock);
@ -822,28 +558,30 @@ static int omap_mbox_probe(struct platform_device *pdev)
mdev->num_users = num_users;
mdev->num_fifos = num_fifos;
mdev->intr_type = intr_type;
mdev->mboxes = list;
controller = devm_kzalloc(&pdev->dev, sizeof(*controller), GFP_KERNEL);
if (!controller)
return -ENOMEM;
/*
* OMAP/K3 Mailbox IP does not have a Tx-Done IRQ, but rather a Tx-Ready
* IRQ and is needed to run the Tx state machine
*/
mdev->controller.txdone_irq = true;
mdev->controller.dev = mdev->dev;
mdev->controller.ops = &omap_mbox_chan_ops;
mdev->controller.chans = chnls;
mdev->controller.num_chans = info_count;
mdev->controller.of_xlate = omap_mbox_of_xlate;
ret = omap_mbox_register(mdev);
controller->txdone_irq = true;
controller->dev = mdev->dev;
controller->ops = &omap_mbox_chan_ops;
controller->chans = chnls;
controller->num_chans = info_count;
controller->of_xlate = omap_mbox_of_xlate;
ret = devm_mbox_controller_register(mdev->dev, controller);
if (ret)
return ret;
platform_set_drvdata(pdev, mdev);
pm_runtime_enable(mdev->dev);
devm_pm_runtime_enable(mdev->dev);
ret = pm_runtime_resume_and_get(mdev->dev);
if (ret < 0)
goto unregister;
return ret;
/*
* just print the raw revision register, the format is not
@ -854,61 +592,20 @@ static int omap_mbox_probe(struct platform_device *pdev)
ret = pm_runtime_put_sync(mdev->dev);
if (ret < 0 && ret != -ENOSYS)
goto unregister;
return ret;
devm_kfree(&pdev->dev, finfoblk);
return 0;
unregister:
pm_runtime_disable(mdev->dev);
omap_mbox_unregister(mdev);
return ret;
}
static void omap_mbox_remove(struct platform_device *pdev)
{
struct omap_mbox_device *mdev = platform_get_drvdata(pdev);
pm_runtime_disable(mdev->dev);
omap_mbox_unregister(mdev);
}
static struct platform_driver omap_mbox_driver = {
.probe = omap_mbox_probe,
.remove_new = omap_mbox_remove,
.driver = {
.name = "omap-mailbox",
.pm = &omap_mbox_pm_ops,
.of_match_table = of_match_ptr(omap_mailbox_of_match),
},
};
static int __init omap_mbox_init(void)
{
int err;
err = class_register(&omap_mbox_class);
if (err)
return err;
/* kfifo size sanity check: alignment and minimal size */
mbox_kfifo_size = ALIGN(mbox_kfifo_size, sizeof(u32));
mbox_kfifo_size = max_t(unsigned int, mbox_kfifo_size, sizeof(u32));
err = platform_driver_register(&omap_mbox_driver);
if (err)
class_unregister(&omap_mbox_class);
return err;
}
subsys_initcall(omap_mbox_init);
static void __exit omap_mbox_exit(void)
{
platform_driver_unregister(&omap_mbox_driver);
class_unregister(&omap_mbox_class);
}
module_exit(omap_mbox_exit);
module_platform_driver(omap_mbox_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("omap mailbox: interrupt driven messaging");
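
With omap_mbox_request_channel() and the OMAP-specific IRQ helpers gone, clients of this driver are expected to use the generic mailbox client API. The sketch below is a hypothetical client, not taken from the tree; only the mbox_* calls and the 32-bit-word message convention come from the driver above.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/mailbox_client.h>
#include <linux/types.h>

struct my_ipc {
	struct mbox_client client;
	struct mbox_chan *mbox;
};

static void my_ipc_rx(struct mbox_client *cl, void *data)
{
	/* OMAP mailbox payloads are single 32-bit words passed as a pointer */
	u32 msg = (u32)(uintptr_t)data;

	dev_dbg(cl->dev, "received mailbox message 0x%x\n", msg);
}

static int my_ipc_attach(struct my_ipc *ipc, struct device *dev)
{
	int ret;

	ipc->client.dev = dev;
	ipc->client.rx_callback = my_ipc_rx;
	ipc->client.tx_block = false;

	/* First entry of this device's 'mboxes' property */
	ipc->mbox = mbox_request_channel(&ipc->client, 0);
	if (IS_ERR(ipc->mbox))
		return PTR_ERR(ipc->mbox);

	/* Send one word; 0x12345678 is an arbitrary example value */
	ret = mbox_send_message(ipc->mbox, (void *)(uintptr_t)0x12345678);

	return ret < 0 ? ret : 0;
}

mbox_request_channel_byname() can be used instead when the client names its sub-mailboxes through 'mbox-names'.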


@ -6,9 +6,11 @@
*/
#include <linux/arm-smccc.h>
#include <linux/cpuhotplug.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/mailbox_controller.h>
@ -16,6 +18,7 @@
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
/* IPI agent ID any */
@ -52,6 +55,15 @@
#define IPI_MB_CHNL_TX 0 /* IPI mailbox TX channel */
#define IPI_MB_CHNL_RX 1 /* IPI mailbox RX channel */
/* IPI Message Buffer Information */
#define RESP_OFFSET 0x20U
#define DEST_OFFSET 0x40U
#define IPI_BUF_SIZE 0x20U
#define DST_BIT_POS 9U
#define SRC_BITMASK GENMASK(11, 8)
#define MAX_SGI 16
/**
* struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel
* @is_opened: indicate if the IPI channel is opened
@ -72,6 +84,10 @@ struct zynqmp_ipi_mchan {
unsigned int chan_type;
};
struct zynqmp_ipi_mbox;
typedef int (*setup_ipi_fn)(struct zynqmp_ipi_mbox *ipi_mbox, struct device_node *node);
/**
* struct zynqmp_ipi_mbox - Description of a ZynqMP IPI mailbox
* platform data.
@ -81,6 +97,7 @@ struct zynqmp_ipi_mchan {
* @remote_id: remote IPI agent ID
* @mbox: mailbox Controller
* @mchans: array for channels, tx channel and rx channel.
* @setup_ipi_fn: Function Pointer to set up IPI Channels
*/
struct zynqmp_ipi_mbox {
struct zynqmp_ipi_pdata *pdata;
@ -88,6 +105,7 @@ struct zynqmp_ipi_mbox {
u32 remote_id;
struct mbox_controller mbox;
struct zynqmp_ipi_mchan mchans[2];
setup_ipi_fn setup_ipi_fn;
};
/**
@ -98,6 +116,7 @@ struct zynqmp_ipi_mbox {
* @irq: IPI agent interrupt ID
* @method: IPI SMC or HVC is going to be used
* @local_id: local IPI agent ID
* @virq_sgi: IRQ number mapped to SGI
* @num_mboxes: number of mailboxes of this IPI agent
* @ipi_mboxes: IPI mailboxes of this IPI agent
*/
@ -106,10 +125,13 @@ struct zynqmp_ipi_pdata {
int irq;
unsigned int method;
u32 local_id;
int virq_sgi;
int num_mboxes;
struct zynqmp_ipi_mbox ipi_mboxes[] __counted_by(num_mboxes);
};
static DEFINE_PER_CPU(struct zynqmp_ipi_pdata *, per_cpu_pdata);
static struct device_driver zynqmp_ipi_mbox_driver = {
.owner = THIS_MODULE,
.name = "zynqmp-ipi-mbox",
@ -163,9 +185,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) {
if (mchan->is_opened) {
msg = mchan->rx_buf;
msg->len = mchan->req_buf_size;
memcpy_fromio(msg->data, mchan->req_buf,
msg->len);
if (msg) {
msg->len = mchan->req_buf_size;
memcpy_fromio(msg->data, mchan->req_buf,
msg->len);
}
mbox_chan_received_data(chan, (void *)msg);
status = IRQ_HANDLED;
}
@ -174,6 +198,14 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
return status;
}
static irqreturn_t zynqmp_sgi_interrupt(int irq, void *data)
{
struct zynqmp_ipi_pdata **pdata_ptr = data;
struct zynqmp_ipi_pdata *pdata = *pdata_ptr;
return zynqmp_ipi_interrupt(irq, pdata);
}
/**
* zynqmp_ipi_peek_data - Peek to see if there are any rx messages.
*
@ -275,26 +307,26 @@ static int zynqmp_ipi_send_data(struct mbox_chan *chan, void *data)
if (mchan->chan_type == IPI_MB_CHNL_TX) {
/* Send request message */
if (msg && msg->len > mchan->req_buf_size) {
if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) {
dev_err(dev, "channel %d message length %u > max %lu\n",
mchan->chan_type, (unsigned int)msg->len,
mchan->req_buf_size);
return -EINVAL;
}
if (msg && msg->len)
if (msg && msg->len && mchan->req_buf)
memcpy_toio(mchan->req_buf, msg->data, msg->len);
/* Kick IPI mailbox to send message */
arg0 = SMC_IPI_MAILBOX_NOTIFY;
zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res);
} else {
/* Send response message */
if (msg && msg->len > mchan->resp_buf_size) {
if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) {
dev_err(dev, "channel %d message length %u > max %lu\n",
mchan->chan_type, (unsigned int)msg->len,
mchan->resp_buf_size);
return -EINVAL;
}
if (msg && msg->len)
if (msg && msg->len && mchan->resp_buf)
memcpy_toio(mchan->resp_buf, msg->data, msg->len);
arg0 = SMC_IPI_MAILBOX_ACK;
zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK,
@ -415,12 +447,6 @@ static struct mbox_chan *zynqmp_ipi_of_xlate(struct mbox_controller *mbox,
return chan;
}
static const struct of_device_id zynqmp_ipi_of_match[] = {
{ .compatible = "xlnx,zynqmp-ipi-mailbox" },
{},
};
MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
/**
* zynqmp_ipi_mbox_get_buf_res - Get buffer resource from the IPI dev node
*
@ -470,12 +496,9 @@ static void zynqmp_ipi_mbox_dev_release(struct device *dev)
static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
struct device_node *node)
{
struct zynqmp_ipi_mchan *mchan;
struct mbox_chan *chans;
struct mbox_controller *mbox;
struct resource res;
struct device *dev, *mdev;
const char *name;
int ret;
dev = ipi_mbox->pdata->dev;
@ -495,6 +518,73 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
}
mdev = &ipi_mbox->dev;
/* Get the IPI remote agent ID */
ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
if (ret < 0) {
dev_err(dev, "No IPI remote ID is specified.\n");
return ret;
}
ret = ipi_mbox->setup_ipi_fn(ipi_mbox, node);
if (ret) {
dev_err(dev, "Failed to set up IPI Buffers.\n");
return ret;
}
mbox = &ipi_mbox->mbox;
mbox->dev = mdev;
mbox->ops = &zynqmp_ipi_chan_ops;
mbox->num_chans = 2;
mbox->txdone_irq = false;
mbox->txdone_poll = true;
mbox->txpoll_period = 5;
mbox->of_xlate = zynqmp_ipi_of_xlate;
chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
if (!chans)
return -ENOMEM;
mbox->chans = chans;
chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
ret = devm_mbox_controller_register(mdev, mbox);
if (ret)
dev_err(mdev,
"Failed to register mbox_controller(%d)\n", ret);
else
dev_info(mdev,
"Registered ZynqMP IPI mbox with TX/RX channels.\n");
return ret;
}
/**
* zynqmp_ipi_setup - set up IPI Buffers for classic flow
*
* @ipi_mbox: pointer to IPI mailbox private data structure
* @node: IPI mailbox device node
*
* This will be used to set up IPI Buffers for ZynqMP SOC if user
* wishes to use classic driver usage model on new SOC's with only
* buffered IPIs.
*
* Note that bufferless IPIs and mixed usage of buffered and bufferless
* IPIs are not supported with this flow.
*
* This will be invoked with compatible string "xlnx,zynqmp-ipi-mailbox".
*
* Return: 0 for success, negative value for failure
*/
static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
struct device_node *node)
{
struct zynqmp_ipi_mchan *mchan;
struct device *mdev;
struct resource res;
const char *name;
int ret;
mdev = &ipi_mbox->dev;
mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
name = "local_request_region";
ret = zynqmp_ipi_mbox_get_buf_res(node, name, &res);
@ -569,39 +659,218 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
if (!mchan->rx_buf)
return -ENOMEM;
/* Get the IPI remote agent ID */
ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
if (ret < 0) {
dev_err(dev, "No IPI remote ID is specified.\n");
return 0;
}
/**
* versal_ipi_setup - Set up IPIs to support mixed usage of
* Buffered and Bufferless IPIs.
*
* @ipi_mbox: pointer to IPI mailbox private data structure
* @node: IPI mailbox device node
*
* Return: 0 for success, negative value for failure
*/
static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
struct device_node *node)
{
struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan;
struct resource host_res, remote_res;
struct device_node *parent_node;
int host_idx, remote_idx;
struct device *mdev;
tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
parent_node = of_get_parent(node);
mdev = &ipi_mbox->dev;
host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res);
remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res);
/*
* Only set up buffers if both sides claim to have msg buffers.
* This is because each buffered IPI's corresponding msg buffers
* are reserved for use by other buffered IPI's.
*/
if (!host_idx && !remote_idx) {
u32 host_src, host_dst, remote_src, remote_dst;
u32 buff_sz;
buff_sz = resource_size(&host_res);
host_src = host_res.start & SRC_BITMASK;
remote_src = remote_res.start & SRC_BITMASK;
host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET;
remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET;
/* Validate that IPI IDs is within IPI Message buffer space. */
if (host_dst >= buff_sz || remote_dst >= buff_sz) {
dev_err(mdev,
"Invalid IPI Message buffer values: %x %x\n",
host_dst, remote_dst);
return -EINVAL;
}
tx_mchan->req_buf = devm_ioremap(mdev,
host_res.start | remote_dst,
IPI_BUF_SIZE);
if (!tx_mchan->req_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
tx_mchan->resp_buf = devm_ioremap(mdev,
(remote_res.start | host_dst) +
RESP_OFFSET, IPI_BUF_SIZE);
if (!tx_mchan->resp_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
rx_mchan->req_buf = devm_ioremap(mdev,
remote_res.start | host_dst,
IPI_BUF_SIZE);
if (!rx_mchan->req_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
rx_mchan->resp_buf = devm_ioremap(mdev,
(host_res.start | remote_dst) +
RESP_OFFSET, IPI_BUF_SIZE);
if (!rx_mchan->resp_buf) {
dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
return -ENOMEM;
}
tx_mchan->resp_buf_size = IPI_BUF_SIZE;
tx_mchan->req_buf_size = IPI_BUF_SIZE;
tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
sizeof(struct zynqmp_ipi_message),
GFP_KERNEL);
if (!tx_mchan->rx_buf)
return -ENOMEM;
rx_mchan->resp_buf_size = IPI_BUF_SIZE;
rx_mchan->req_buf_size = IPI_BUF_SIZE;
rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
sizeof(struct zynqmp_ipi_message),
GFP_KERNEL);
if (!rx_mchan->rx_buf)
return -ENOMEM;
}
return 0;
}
static int xlnx_mbox_cpuhp_start(unsigned int cpu)
{
struct zynqmp_ipi_pdata *pdata;
pdata = get_cpu_var(per_cpu_pdata);
put_cpu_var(per_cpu_pdata);
enable_percpu_irq(pdata->virq_sgi, IRQ_TYPE_NONE);
return 0;
}
static int xlnx_mbox_cpuhp_down(unsigned int cpu)
{
struct zynqmp_ipi_pdata *pdata;
pdata = get_cpu_var(per_cpu_pdata);
put_cpu_var(per_cpu_pdata);
disable_percpu_irq(pdata->virq_sgi);
return 0;
}
static void xlnx_disable_percpu_irq(void *data)
{
struct zynqmp_ipi_pdata *pdata;
pdata = *this_cpu_ptr(&per_cpu_pdata);
disable_percpu_irq(pdata->virq_sgi);
}
static int xlnx_mbox_init_sgi(struct platform_device *pdev,
int sgi_num,
struct zynqmp_ipi_pdata *pdata)
{
int ret = 0;
int cpu;
/*
* IRQ related structures are used for the following:
* for each SGI interrupt ensure its mapped by GIC IRQ domain
* and that each corresponding linux IRQ for the HW IRQ has
* a handler for when receiving an interrupt from the remote
* processor.
*/
struct irq_domain *domain;
struct irq_fwspec sgi_fwspec;
struct device_node *interrupt_parent = NULL;
struct device *dev = &pdev->dev;
/* Find GIC controller to map SGIs. */
interrupt_parent = of_irq_find_parent(dev->of_node);
if (!interrupt_parent) {
dev_err(&pdev->dev, "Failed to find property for Interrupt parent\n");
return -EINVAL;
}
/* Each SGI needs to be associated with GIC's IRQ domain. */
domain = irq_find_host(interrupt_parent);
of_node_put(interrupt_parent);
/* Each mapping needs GIC domain when finding IRQ mapping. */
sgi_fwspec.fwnode = domain->fwnode;
/*
* When irq domain looks at mapping each arg is as follows:
* 3 args for: interrupt type (SGI), interrupt # (set later), type
*/
sgi_fwspec.param_count = 1;
/* Set SGI's hwirq */
sgi_fwspec.param[0] = sgi_num;
pdata->virq_sgi = irq_create_fwspec_mapping(&sgi_fwspec);
for_each_possible_cpu(cpu)
per_cpu(per_cpu_pdata, cpu) = pdata;
ret = request_percpu_irq(pdata->virq_sgi, zynqmp_sgi_interrupt, pdev->name,
&per_cpu_pdata);
WARN_ON(ret);
if (ret) {
irq_dispose_mapping(pdata->virq_sgi);
return ret;
}
mbox = &ipi_mbox->mbox;
mbox->dev = mdev;
mbox->ops = &zynqmp_ipi_chan_ops;
mbox->num_chans = 2;
mbox->txdone_irq = false;
mbox->txdone_poll = true;
mbox->txpoll_period = 5;
mbox->of_xlate = zynqmp_ipi_of_xlate;
chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
if (!chans)
return -ENOMEM;
mbox->chans = chans;
chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
ret = devm_mbox_controller_register(mdev, mbox);
if (ret)
dev_err(mdev,
"Failed to register mbox_controller(%d)\n", ret);
else
dev_info(mdev,
"Registered ZynqMP IPI mbox with TX/RX channels.\n");
irq_to_desc(pdata->virq_sgi);
irq_set_status_flags(pdata->virq_sgi, IRQ_PER_CPU);
/* Setup function for the CPU hot-plug cases */
cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mailbox/sgi:starting",
xlnx_mbox_cpuhp_start, xlnx_mbox_cpuhp_down);
return ret;
}
static void xlnx_mbox_cleanup_sgi(struct zynqmp_ipi_pdata *pdata)
{
cpuhp_remove_state(CPUHP_AP_ONLINE_DYN);
on_each_cpu(xlnx_disable_percpu_irq, NULL, 1);
irq_clear_status_flags(pdata->virq_sgi, IRQ_PER_CPU);
free_percpu_irq(pdata->virq_sgi, &per_cpu_pdata);
irq_dispose_mapping(pdata->virq_sgi);
}
/**
* zynqmp_ipi_free_mboxes - Free IPI mailboxes devices
*
@ -612,6 +881,9 @@ static void zynqmp_ipi_free_mboxes(struct zynqmp_ipi_pdata *pdata)
struct zynqmp_ipi_mbox *ipi_mbox;
int i;
if (pdata->irq < MAX_SGI)
xlnx_mbox_cleanup_sgi(pdata);
i = pdata->num_mboxes;
for (; i >= 0; i--) {
ipi_mbox = &pdata->ipi_mboxes[i];
@ -627,9 +899,11 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *nc, *np = pdev->dev.of_node;
struct zynqmp_ipi_pdata *pdata;
struct zynqmp_ipi_pdata __percpu *pdata;
struct of_phandle_args out_irq;
struct zynqmp_ipi_mbox *mbox;
int num_mboxes, ret = -EINVAL;
setup_ipi_fn ipi_fn;
num_mboxes = of_get_available_child_count(np);
if (num_mboxes == 0) {
@ -650,9 +924,18 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
return ret;
}
ipi_fn = (setup_ipi_fn)device_get_match_data(&pdev->dev);
if (!ipi_fn) {
dev_err(dev,
"Mbox Compatible String is missing IPI Setup fn.\n");
return -ENODEV;
}
pdata->num_mboxes = num_mboxes;
mbox = pdata->ipi_mboxes;
mbox->setup_ipi_fn = ipi_fn;
for_each_available_child_of_node(np, nc) {
mbox->pdata = pdata;
ret = zynqmp_ipi_mbox_probe(mbox, nc);
@ -665,14 +948,32 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
mbox++;
}
/* IPI IRQ */
ret = platform_get_irq(pdev, 0);
if (ret < 0)
ret = of_irq_parse_one(dev_of_node(dev), 0, &out_irq);
if (ret < 0) {
dev_err(dev, "failed to parse interrupts\n");
goto free_mbox_dev;
}
ret = out_irq.args[1];
/*
* If Interrupt number is in SGI range, then request SGI else request
* IPI system IRQ.
*/
if (ret < MAX_SGI) {
pdata->irq = ret;
ret = xlnx_mbox_init_sgi(pdev, pdata->irq, pdata);
if (ret)
goto free_mbox_dev;
} else {
ret = platform_get_irq(pdev, 0);
if (ret < 0)
goto free_mbox_dev;
pdata->irq = ret;
ret = devm_request_irq(dev, pdata->irq, zynqmp_ipi_interrupt,
IRQF_SHARED, dev_name(dev), pdata);
}
pdata->irq = ret;
ret = devm_request_irq(dev, pdata->irq, zynqmp_ipi_interrupt,
IRQF_SHARED, dev_name(dev), pdata);
if (ret) {
dev_err(dev, "IRQ %d is not requested successfully.\n",
pdata->irq);
@ -695,6 +996,17 @@ static void zynqmp_ipi_remove(struct platform_device *pdev)
zynqmp_ipi_free_mboxes(pdata);
}
static const struct of_device_id zynqmp_ipi_of_match[] = {
{ .compatible = "xlnx,zynqmp-ipi-mailbox",
.data = &zynqmp_ipi_setup,
},
{ .compatible = "xlnx,versal-ipi-mailbox",
.data = &versal_ipi_setup,
},
{},
};
MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
static struct platform_driver zynqmp_ipi_driver = {
.probe = zynqmp_ipi_probe,
.remove_new = zynqmp_ipi_remove,
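
The message-buffer address math in versal_ipi_setup() above can be sanity-checked with a small standalone calculation. The two base addresses below are hypothetical; the driver only assumes that the per-agent destination offset is derived from bits [11:8] of each 'msg' region's start address.

/*
 * Standalone illustration of the Versal IPI buffer address math; compile
 * with any C compiler. Base addresses are made-up example values.
 */
#include <stdint.h>
#include <stdio.h>

#define SRC_BITMASK	0xF00u	/* GENMASK(11, 8) */
#define DST_BIT_POS	9u
#define DEST_OFFSET	0x40u
#define RESP_OFFSET	0x20u

int main(void)
{
	uint32_t host_start = 0xFF3F0200;   /* hypothetical host "msg" region */
	uint32_t remote_start = 0xFF3F0600; /* hypothetical remote "msg" region */

	uint32_t host_dst = ((host_start & SRC_BITMASK) >> DST_BIT_POS) * DEST_OFFSET;     /* 0x40 */
	uint32_t remote_dst = ((remote_start & SRC_BITMASK) >> DST_BIT_POS) * DEST_OFFSET; /* 0xC0 */

	printf("TX request  @ 0x%08x\n", host_start | remote_dst);                  /* 0xFF3F02C0 */
	printf("TX response @ 0x%08x\n", (remote_start | host_dst) + RESP_OFFSET);  /* 0xFF3F0660 */
	printf("RX request  @ 0x%08x\n", remote_start | host_dst);                  /* 0xFF3F0640 */
	printf("RX response @ 0x%08x\n", (host_start | remote_dst) + RESP_OFFSET);  /* 0xFF3F02E0 */
	return 0;
}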


@ -0,0 +1,13 @@
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
/*
* This header provides constants for the defined MHUv3 types.
*/
#ifndef _DT_BINDINGS_ARM_MHUV3_DT_H
#define _DT_BINDINGS_ARM_MHUV3_DT_H
#define DBE_EXT 0
#define FCE_EXT 1
#define FE_EXT 2
#endif /* _DT_BINDINGS_ARM_MHUV3_DT_H */


@ -10,17 +10,4 @@ typedef uintptr_t mbox_msg_t;
#define omap_mbox_message(data) (u32)(mbox_msg_t)(data)
typedef int __bitwise omap_mbox_irq_t;
#define IRQ_TX ((__force omap_mbox_irq_t) 1)
#define IRQ_RX ((__force omap_mbox_irq_t) 2)
struct mbox_chan;
struct mbox_client;
struct mbox_chan *omap_mbox_request_channel(struct mbox_client *cl,
const char *chan_name);
void omap_mbox_enable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq);
void omap_mbox_disable_irq(struct mbox_chan *chan, omap_mbox_irq_t irq);
#endif /* OMAP_MAILBOX_H */