ARM: SoC drivers for v5.13

Updates for SoC specific drivers include a few subsystems that
 have their own maintainers but send their changes through the soc tree:
 
 TEE/OP-TEE:
  -  Add tracepoints around calls to secure world
 
 Memory controller drivers:
  - Minor fixes for Renesas, Exynos, Mediatek and Tegra platforms
  - Add debug statistics to Tegra20 memory controller
  - Update Tegra bindings and convert to dtschema
 
 ARM SCMI Firmware:
  - Support for modular SCMI protocols and vendor specific extensions
  - New SCMI IIO driver
  - Per-cpu DVFS
 
 The other driver changes are all from the platform maintainers
 directly and reflect the drivers that don't fit into any other
 subsystem as well as treewide changes for a particular platform.
 
 SoCFPGA:
  - Various cleanups contributed by Krzysztof Kozlowski
 
 Mediatek:
  - add MT8183 support to mutex driver
  - MMSYS: use a per-SoC array to describe the possible routing
  - add MMSYS support for MT8183 and MT8167
  - add support for PMIC wrapper with integrated arbiter
  - add support for MT8192/MT6873
 
 Tegra:
  - Bug fixes to PMC and clock drivers
 
 NXP/i.MX:
  - Update SCU power domain driver to keep console domain power on.
  - Add missing ADC1 power domain to SCU power domain driver.
  - Update comments for single global power domain in SCU power domain
    driver.
  - Add i.MX51/i.MX53 unique id support to i.MX SoC driver.
 
 NXP/FSL SoC drivers:
  - Add ACPI support for the RCPM driver
  - Use the generic io{read,write} accessors for QE drivers now that they
    are performance-optimized for PowerPC
  - Fix QBMAN probe to clean up HW state correctly for kexec
  - Various cleanups and style fixes for the QBMAN/QE/GUTS drivers
 
 OMAP:
  - Preparation to use devicetree for genpd
  - ti-sysc needs iorange check improved when the interconnect target module
    has no control registers listed
  - ti-sysc needs to probe l4_wkup and l4_cfg interconnects first to avoid
    issues with missing resources and unnecessary deferred probe
  - ti-sysc debug option can now detect more devices
  - ti-sysc now warns if old, incomplete devicetree data is found, as we
    now rely on it being complete for am3 and am4
  - soc init code needs to check for prcm and prm nodes for omap4/5 and dra7
  - omap-prm driver needs to enable autoidle retention support for omap4
  - omap5 clocks are missing gpmc and ocmc clock registers
  - pci-dra7xx now needs to use builtin_platform_driver instead of using
    builtin_platform_driver_probe for deferred probe to work
 
 Raspberry Pi:
  - Fix up all RPi firmware drivers so that unbind happens in an
    orderly fashion
  - Support for the RPi PoE HAT PWM bus
 
 Qualcomm:
  - Improved detection of SCM calling conventions
  - Support for an OEM-specific wifi firmware path
  - Added drivers for SC7280/SM8350: RPMH, LLCC, AOSS QMP
 
 Signed-off-by: Arnd Bergmann <arnd@arndb.de>
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEo6/YBQwIrVS28WGKmmx57+YAGNkFAmCC2JwACgkQmmx57+YA
 GNkgRg//cBtq2NyDbjiNABxFSkmGCfcc0w0C2wjVzr4cfg6BLTbuvvlpZxI912pu
 P1G2sbsdfQJ8sSeIyZos+PilWK0zHrqlaGZfKI19US45dMjpteDBgsPd7wNZwBjQ
 jbops3YLjztZK1HpY4dIdvMnfxt7yRqhBWaTbPuCwQ35c5KsOM8NHB3cP3BUINWK
 x1uuBCv9svppzwdDiPxneV93WKEzabOUo+WBMPyh5vnyvmW17Iif4BA/VKQxzymm
 mWUi8HHpKBpvntJOKwAD2hnLAdpR3SwX20SLOpyLhnJMotbzNUEqq3LdRxDNPdHk
 ry+rarJ78JGlYfpcfegf2bLf5ITNMfOyRGkjtzeYpcZIXPjufOg9DA9YtAy37k0u
 L0T/9gQ+tQ01WGMca77OyUtIqJKdblZrQMfuH/yGlR99bqFQMV7rNc7GNlX1MXp/
 zw4aOYrRWGtGEeAjx5JJWcYydvMSJpCrqxTz3YhgeJECHB2iA6YkV3NROR4TLW//
 tfxaKqxR/KmSqE6hoVOAuuQ0BLXNlql/+4EE6MKsAOBiKPJclvmJg4CyuY8G21ev
 9Su0zJnXMzai7gNu32v1pizGj26+AOhxCEgAG0mGgk2jlQSn24CKgm5e7kCUewcF
 j/1XksNPT95v/K8MsLpXe5xGvF3jhA1BlFfvjJNZOrcZywBXRxg=
 =iidq
 -----END PGP SIGNATURE-----

Merge tag 'arm-drivers-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
 "Updates for SoC specific drivers include a few subsystems that have
  their own maintainers but send their changes through the soc tree:

  TEE/OP-TEE:
   - Add tracepoints around calls to secure world

  Memory controller drivers:
   - Minor fixes for Renesas, Exynos, Mediatek and Tegra platforms
   - Add debug statistics to Tegra20 memory controller
   - Update Tegra bindings and convert to dtschema

  ARM SCMI Firmware:
   - Support for modular SCMI protocols and vendor specific extensions
   - New SCMI IIO driver
   - Per-cpu DVFS

  The other driver changes are all from the platform maintainers
  directly and reflect the drivers that don't fit into any other
  subsystem as well as treewide changes for a particular platform.

  SoCFPGA:
   - Various cleanups contributed by Krzysztof Kozlowski

  Mediatek:
   - add MT8183 support to mutex driver
   - MMSYS: use a per-SoC array to describe the possible routing
   - add MMSYS support for MT8183 and MT8167
   - add support for PMIC wrapper with integrated arbiter
   - add support for MT8192/MT6873

  Tegra:
   - Bug fixes to PMC and clock drivers

  NXP/i.MX:
   - Update SCU power domain driver to keep console domain power on.
   - Add missing ADC1 power domain to SCU power domain driver.
   - Update comments for single global power domain in SCU power domain
     driver.
   - Add i.MX51/i.MX53 unique id support to i.MX SoC driver.

  NXP/FSL SoC drivers:
   - Add ACPI support for the RCPM driver
   - Use the generic io{read,write} accessors for QE drivers now that
     they are performance-optimized for PowerPC
   - Fix QBMAN probe to clean up HW state correctly for kexec
   - Various cleanups and style fixes for the QBMAN/QE/GUTS drivers

  OMAP:
   - Preparation to use devicetree for genpd
   - ti-sysc needs iorange check improved when the interconnect target
     module has no control registers listed
   - ti-sysc needs to probe l4_wkup and l4_cfg interconnects first to
     avoid issues with missing resources and unnecessary deferred probe
   - ti-sysc debug option can now detect more devices
   - ti-sysc now warns if old, incomplete devicetree data is found, as
     we now rely on it being complete for am3 and am4
   - soc init code needs to check for prcm and prm nodes for omap4/5 and
     dra7
   - omap-prm driver needs to enable autoidle retention support for
     omap4
   - omap5 clocks are missing gpmc and ocmc clock registers
   - pci-dra7xx now needs to use builtin_platform_driver instead of
     using builtin_platform_driver_probe for deferred probe to work

  Raspberry Pi:
   - Fix up all RPi firmware drivers so that unbind happens in an
     orderly fashion
   - Support for the RPi PoE HAT PWM bus

  Qualcomm:
   - Improved detection of SCM calling conventions
   - Support for an OEM-specific wifi firmware path
   - Added drivers for SC7280/SM8350: RPMH, LLCC, AOSS QMP"

* tag 'arm-drivers-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (165 commits)
  soc: aspeed: fix a ternary sign expansion bug
  memory: mtk-smi: Add device-link between smi-larb and smi-common
  memory: samsung: exynos5422-dmc: handle clk_set_parent() failure
  memory: renesas-rpc-if: fix possible NULL pointer dereference of resource
  clk: socfpga: fix iomem pointer cast on 64-bit
  soc: aspeed: Adapt to new LPC device tree layout
  pinctrl: aspeed-g5: Adapt to new LPC device tree layout
  ipmi: kcs: aspeed: Adapt to new LPC DTS layout
  ARM: dts: Remove LPC BMC and Host partitions
  dt-bindings: aspeed-lpc: Remove LPC partitioning
  soc: fsl: enable acpi support in RCPM driver
  soc: qcom: mdt_loader: Detect truncated read of segments
  soc: qcom: mdt_loader: Validate that p_filesz < p_memsz
  soc: qcom: pdr: Fix error return code in pdr_register_listener
  firmware: qcom_scm: Fix kernel-doc function names to match
  firmware: qcom_scm: Suppress sysfs bind attributes
  firmware: qcom_scm: Workaround lack of "is available" call on SC7180
  firmware: qcom_scm: Reduce locking section for __get_convention()
  firmware: qcom_scm: Make __qcom_scm_is_call_available() return bool
  Revert "soc: fsl: qe: introduce qe_io{read,write}* wrappers"
  ...
commit 37f00ab4a0 (Linus Torvalds, 2021-04-26 12:11:52 -07:00)
158 files changed, 4767 insertions(+), 2130 deletions(-)

View File

@ -64,6 +64,21 @@ properties:
- compatible
- "#reset-cells"
pwm:
type: object
properties:
compatible:
const: raspberrypi,firmware-poe-pwm
"#pwm-cells":
# See pwm.yaml in this directory for a description of the cells format.
const: 2
required:
- compatible
- "#pwm-cells"
additionalProperties: false
required:
@ -87,5 +102,10 @@ examples:
compatible = "raspberrypi,firmware-reset";
#reset-cells = <1>;
};
pwm: pwm {
compatible = "raspberrypi,firmware-poe-pwm";
#pwm-cells = <2>;
};
};
...
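
The new pwm node above exposes the firmware-controlled PoE HAT PWM as an
ordinary PWM provider. Purely as an illustration (not part of the binding), a
consumer such as a fan driver could claim and program it through the generic
PWM API, assuming a "pwms" phandle pointing at this node; the 50% duty cycle
below is an arbitrary example value:

    /* Sketch of a PWM consumer for the firmware PoE PWM (names illustrative). */
    struct pwm_device *pwm;
    struct pwm_state state;

    pwm = devm_pwm_get(dev, NULL);
    if (IS_ERR(pwm))
        return PTR_ERR(pwm);

    pwm_init_state(pwm, &state);
    state.duty_cycle = state.period / 2;    /* arbitrary 50% duty cycle */
    state.enabled = true;

    return pwm_apply_state(pwm, &state);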

View File

@ -13,6 +13,7 @@ Required Properties:
- "mediatek,mt6779-mmsys", "syscon"
- "mediatek,mt6797-mmsys", "syscon"
- "mediatek,mt7623-mmsys", "mediatek,mt2701-mmsys", "syscon"
- "mediatek,mt8167-mmsys", "syscon"
- "mediatek,mt8173-mmsys", "syscon"
- "mediatek,mt8183-mmsys", "syscon"
- #clock-cells: Must be 1

View File

@ -22,6 +22,7 @@ properties:
compatible:
enum:
- qcom,sc7180-llcc
- qcom,sc7280-llcc
- qcom,sdm845-llcc
- qcom,sm8150-llcc
- qcom,sm8250-llcc

View File

@ -20,6 +20,7 @@ Required properties:
* "qcom,scm-msm8996"
* "qcom,scm-msm8998"
* "qcom,scm-sc7180"
* "qcom,scm-sc7280"
* "qcom,scm-sdm845"
* "qcom,scm-sm8150"
* "qcom,scm-sm8250"

View File

@ -37,9 +37,10 @@ properties:
description:
phandle of the memory controller node
core-supply:
power-domains:
maxItems: 1
description:
Phandle of voltage regulator of the SoC "core" power domain.
Phandle of the SoC "core" power domain.
operating-points-v2:
description:
@ -370,7 +371,7 @@ examples:
nvidia,memory-controller = <&mc>;
operating-points-v2 = <&dvfs_opp_table>;
core-supply = <&vdd_core>;
power-domains = <&domain>;
#interconnect-cells = <0>;

View File

@ -23,7 +23,7 @@ For each opp entry in 'operating-points-v2' table:
matches, the OPP gets enabled.
Optional properties:
- core-supply: Phandle of voltage regulator of the SoC "core" power domain.
- power-domains: Phandle of the SoC "core" power domain.
Child device nodes describe the memory settings for different configurations and clock rates.
@ -48,7 +48,7 @@ Example:
interrupts = <0 78 0x04>;
clocks = <&tegra_car TEGRA20_CLK_EMC>;
nvidia,memory-controller = <&mc>;
core-supply = <&core_vdd_reg>;
power-domains = <&domain>;
operating-points-v2 = <&opp_table>;
}

View File

@ -1,40 +0,0 @@
NVIDIA Tegra20 MC(Memory Controller)
Required properties:
- compatible : "nvidia,tegra20-mc-gart"
- reg : Should contain 2 register ranges: physical base address and length of
the controller's registers and the GART aperture respectively.
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- mc: the module's clock input
- interrupts : Should contain MC General interrupt.
- #reset-cells : Should be 1. This cell represents memory client module ID.
The assignments may be found in header file <dt-bindings/memory/tegra20-mc.h>
or in the TRM documentation.
- #iommu-cells: Should be 0. This cell represents the number of cells in an
IOMMU specifier needed to encode an address. GART supports only a single
address space that is shared by all devices, therefore no additional
information needed for the address encoding.
- #interconnect-cells : Should be 1. This cell represents memory client.
The assignments may be found in header file <dt-bindings/memory/tegra20-mc.h>.
Example:
mc: memory-controller@7000f000 {
compatible = "nvidia,tegra20-mc-gart";
reg = <0x7000f000 0x400 /* controller registers */
0x58000000 0x02000000>; /* GART aperture */
clocks = <&tegra_car TEGRA20_CLK_MC>;
clock-names = "mc";
interrupts = <GIC_SPI 77 0x04>;
#reset-cells = <1>;
#iommu-cells = <0>;
#interconnect-cells = <1>;
};
video-codec@6001a000 {
compatible = "nvidia,tegra20-vde";
...
resets = <&mc TEGRA20_MC_RESET_VDE>;
iommus = <&mc>;
};

View File

@ -0,0 +1,79 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra20-mc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NVIDIA Tegra20 SoC Memory Controller
maintainers:
- Dmitry Osipenko <digetx@gmail.com>
- Jon Hunter <jonathanh@nvidia.com>
- Thierry Reding <thierry.reding@gmail.com>
description: |
The Tegra20 Memory Controller merges request streams from various client
interfaces into request stream(s) for the various memory target devices,
and returns response data to the various clients. The Memory Controller
has a configurable arbitration algorithm to allow the user to fine-tune
performance among the various clients.
Tegra20 Memory Controller includes the GART (Graphics Address Relocation
Table) which allows Memory Controller to provide a linear view of a
fragmented memory pages.
properties:
compatible:
const: nvidia,tegra20-mc-gart
reg:
items:
- description: controller registers
- description: GART registers
clocks:
maxItems: 1
clock-names:
items:
- const: mc
interrupts:
maxItems: 1
"#reset-cells":
const: 1
"#iommu-cells":
const: 0
"#interconnect-cells":
const: 1
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
- "#reset-cells"
- "#iommu-cells"
- "#interconnect-cells"
additionalProperties: false
examples:
- |
memory-controller@7000f000 {
compatible = "nvidia,tegra20-mc-gart";
reg = <0x7000f000 0x400>, /* Controller registers */
<0x58000000 0x02000000>; /* GART aperture */
clocks = <&clock_controller 32>;
clock-names = "mc";
interrupts = <0 77 4>;
#iommu-cells = <0>;
#reset-cells = <1>;
#interconnect-cells = <1>;
};

View File

@ -39,9 +39,10 @@ properties:
description:
Phandle of the Memory Controller node.
core-supply:
power-domains:
maxItems: 1
description:
Phandle of voltage regulator of the SoC "core" power domain.
Phandle of the SoC "core" power domain.
operating-points-v2:
description:
@ -241,7 +242,7 @@ examples:
nvidia,memory-controller = <&mc>;
operating-points-v2 = <&dvfs_opp_table>;
core-supply = <&vdd_core>;
power-domains = <&domain>;
#interconnect-cells = <0>;

View File

@ -9,13 +9,7 @@ primary use case of the Aspeed LPC controller is as a slave on the bus
conditions it can also take the role of bus master.
The LPC controller is represented as a multi-function device to account for the
mix of functionality it provides. The principle split is between the register
layout at the start of the I/O space which is, to quote the Aspeed datasheet,
"basically compatible with the [LPC registers from the] popular BMC controller
H8S/2168[1]", and everything else, where everything else is an eclectic
collection of functions with a esoteric register layout. "Everything else",
here labeled the "host" portion of the controller, includes, but is not limited
to:
mix of functionality, which includes, but is not limited to:
* An IPMI Block Transfer[2] Controller
@ -44,80 +38,36 @@ Required properties
===================
- compatible: One of:
"aspeed,ast2400-lpc", "simple-mfd"
"aspeed,ast2500-lpc", "simple-mfd"
"aspeed,ast2600-lpc", "simple-mfd"
"aspeed,ast2400-lpc-v2", "simple-mfd", "syscon"
"aspeed,ast2500-lpc-v2", "simple-mfd", "syscon"
"aspeed,ast2600-lpc-v2", "simple-mfd", "syscon"
- reg: contains the physical address and length values of the Aspeed
LPC memory region.
- #address-cells: <1>
- #size-cells: <1>
- ranges: Maps 0 to the physical address and length of the LPC memory
region
Required LPC Child nodes
========================
BMC Node
--------
- compatible: One of:
"aspeed,ast2400-lpc-bmc"
"aspeed,ast2500-lpc-bmc"
"aspeed,ast2600-lpc-bmc"
- reg: contains the physical address and length values of the
H8S/2168-compatible LPC controller memory region
Host Node
---------
- compatible: One of:
"aspeed,ast2400-lpc-host", "simple-mfd", "syscon"
"aspeed,ast2500-lpc-host", "simple-mfd", "syscon"
"aspeed,ast2600-lpc-host", "simple-mfd", "syscon"
- reg: contains the address and length values of the host-related
register space for the Aspeed LPC controller
- #address-cells: <1>
- #size-cells: <1>
- ranges: Maps 0 to the address and length of the host-related LPC memory
- ranges: Maps 0 to the physical address and length of the LPC memory
region
Example:
lpc: lpc@1e789000 {
compatible = "aspeed,ast2500-lpc", "simple-mfd";
compatible = "aspeed,ast2500-lpc-v2", "simple-mfd", "syscon";
reg = <0x1e789000 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x1e789000 0x1000>;
lpc_bmc: lpc-bmc@0 {
compatible = "aspeed,ast2500-lpc-bmc";
lpc_snoop: lpc-snoop@0 {
compatible = "aspeed,ast2600-lpc-snoop";
reg = <0x0 0x80>;
};
lpc_host: lpc-host@80 {
compatible = "aspeed,ast2500-lpc-host", "simple-mfd", "syscon";
reg = <0x80 0x1e0>;
reg-io-width = <4>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x80 0x1e0>;
interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
snoop-ports = <0x80>;
};
};
BMC Node Children
==================
Host Node Children
==================
LPC Host Interface Controller
-------------------
@ -149,14 +99,12 @@ Optional properties:
Example:
lpc-host@80 {
lpc_ctrl: lpc-ctrl@0 {
compatible = "aspeed,ast2500-lpc-ctrl";
reg = <0x0 0x80>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
memory-region = <&flash_memory>;
flash = <&spi>;
};
lpc_ctrl: lpc-ctrl@80 {
compatible = "aspeed,ast2500-lpc-ctrl";
reg = <0x80 0x80>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
memory-region = <&flash_memory>;
flash = <&spi>;
};
LPC Host Controller
@ -179,9 +127,9 @@ Required properties:
Example:
lhc: lhc@20 {
lhc: lhc@a0 {
compatible = "aspeed,ast2500-lhc";
reg = <0x20 0x24 0x48 0x8>;
reg = <0xa0 0x24 0xc8 0x8>;
};
LPC reset control
@ -192,16 +140,18 @@ state of the LPC bus. Some systems may chose to modify this configuration.
Required properties:
- compatible: "aspeed,ast2600-lpc-reset" or
"aspeed,ast2500-lpc-reset"
"aspeed,ast2400-lpc-reset"
- compatible: One of:
"aspeed,ast2600-lpc-reset";
"aspeed,ast2500-lpc-reset";
"aspeed,ast2400-lpc-reset";
- reg: offset and length of the IP in the LHC memory region
- #reset-controller indicates the number of reset cells expected
Example:
lpc_reset: reset-controller@18 {
lpc_reset: reset-controller@98 {
compatible = "aspeed,ast2500-lpc-reset";
reg = <0x18 0x4>;
reg = <0x98 0x4>;
#reset-cells = <1>;
};

View File

@ -16,6 +16,7 @@ properties:
compatible:
enum:
- brcm,bcm4908-pmb
- brcm,bcm63138-pmb
reg:
description: register space of one or more buses

View File

@ -25,10 +25,12 @@ properties:
- qcom,qcs404-rpmpd
- qcom,sdm660-rpmpd
- qcom,sc7180-rpmhpd
- qcom,sc7280-rpmhpd
- qcom,sdm845-rpmhpd
- qcom,sdx55-rpmhpd
- qcom,sm8150-rpmhpd
- qcom,sm8250-rpmhpd
- qcom,sm8350-rpmhpd
'#power-domain-cells':
const: 1

View File

@ -22,6 +22,7 @@ Required properties in pwrap device node.
"mediatek,mt6765-pwrap" for MT6765 SoCs
"mediatek,mt6779-pwrap" for MT6779 SoCs
"mediatek,mt6797-pwrap" for MT6797 SoCs
"mediatek,mt6873-pwrap" for MT6873/8192 SoCs
"mediatek,mt7622-pwrap" for MT7622 SoCs
"mediatek,mt8135-pwrap" for MT8135 SoCs
"mediatek,mt8173-pwrap" for MT8173 SoCs

View File

@ -17,6 +17,7 @@ power-domains.
Value type: <string>
Definition: must be one of:
"qcom,sc7180-aoss-qmp"
"qcom,sc7280-aoss-qmp"
"qcom,sdm845-aoss-qmp"
"qcom,sm8150-aoss-qmp"
"qcom,sm8250-aoss-qmp"

View File

@ -24,6 +24,13 @@ block and a BT, WiFi and FM radio block, all using SMD as command channels.
"qcom,riva",
"qcom,pronto"
- firmware-name:
Usage: optional
Value type: <string>
Definition: specifies the relative firmware image path for the WLAN NV
blob. Defaults to "wlan/prima/WCNSS_qcom_wlan_nv.bin" if
not specified.
= SUBNODES
The subnodes of the wcnss node are optional and describe the individual blocks in
the WCNSS.

View File

@ -16,35 +16,8 @@ components running across different processing clusters on a chip or
device to communicate with a power management controller (PMC) on a
device to issue or respond to power management requests.
EEMI ops is a structure containing all eemi APIs supported by Zynq MPSoC.
The zynqmp-firmware driver maintain all EEMI APIs in zynqmp_eemi_ops
structure. Any driver who want to communicate with PMC using EEMI APIs
can call zynqmp_pm_get_eemi_ops().
Example of EEMI ops::
/* zynqmp-firmware driver maintain all EEMI APIs */
struct zynqmp_eemi_ops {
int (*get_api_version)(u32 *version);
int (*query_data)(struct zynqmp_pm_query_data qdata, u32 *out);
};
static const struct zynqmp_eemi_ops eemi_ops = {
.get_api_version = zynqmp_pm_get_api_version,
.query_data = zynqmp_pm_query_data,
};
Example of EEMI ops usage::
static const struct zynqmp_eemi_ops *eemi_ops;
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
eemi_ops = zynqmp_pm_get_eemi_ops();
if (IS_ERR(eemi_ops))
return PTR_ERR(eemi_ops);
ret = eemi_ops->query_data(qdata, ret_payload);
Any driver who wants to communicate with PMC using EEMI APIs use the
functions provided for each function.
IOCTL
------

View File

@ -2316,6 +2316,7 @@ F: drivers/tty/serial/msm_serial.c
F: drivers/usb/dwc3/dwc3-qcom.c
F: include/dt-bindings/*/qcom*
F: include/linux/*/qcom*
F: include/linux/soc/qcom/
ARM/RADISYS ENP2611 MACHINE SUPPORT
M: Lennert Buytenhek <kernel@wantstofly.org>

View File

@ -1327,7 +1327,7 @@ config ARM_PSCI
# selected platforms.
config ARCH_NR_GPIO
int
default 2048 if ARCH_SOCFPGA
default 2048 if ARCH_INTEL_SOCFPGA
default 1024 if ARCH_BRCMSTB || ARCH_RENESAS || ARCH_TEGRA || \
ARCH_ZYNQ || ARCH_ASPEED
default 512 if ARCH_EXYNOS || ARCH_KEYSTONE || SOC_OMAP5 || \

View File

@ -1087,7 +1087,7 @@ choice
on SD5203 UART.
config DEBUG_SOCFPGA_UART0
depends on ARCH_SOCFPGA
depends on ARCH_INTEL_SOCFPGA
bool "Use SOCFPGA UART0 for low-level debug"
select DEBUG_UART_8250
help
@ -1095,7 +1095,7 @@ choice
on SOCFPGA(Cyclone 5 and Arria 5) based platforms.
config DEBUG_SOCFPGA_ARRIA10_UART1
depends on ARCH_SOCFPGA
depends on ARCH_INTEL_SOCFPGA
bool "Use SOCFPGA Arria10 UART1 for low-level debug"
select DEBUG_UART_8250
help
@ -1103,7 +1103,7 @@ choice
on SOCFPGA(Arria 10) based platforms.
config DEBUG_SOCFPGA_CYCLONE5_UART1
depends on ARCH_SOCFPGA
depends on ARCH_INTEL_SOCFPGA
bool "Use SOCFPGA Cyclone 5 UART1 for low-level debug"
select DEBUG_UART_8250
help

View File

@ -209,7 +209,7 @@ machine-$(CONFIG_PLAT_SAMSUNG) += s3c
machine-$(CONFIG_ARCH_S5PV210) += s5pv210
machine-$(CONFIG_ARCH_SA1100) += sa1100
machine-$(CONFIG_ARCH_RENESAS) += shmobile
machine-$(CONFIG_ARCH_SOCFPGA) += socfpga
machine-$(CONFIG_ARCH_INTEL_SOCFPGA) += socfpga
machine-$(CONFIG_ARCH_STI) += sti
machine-$(CONFIG_ARCH_STM32) += stm32
machine-$(CONFIG_ARCH_SUNXI) += sunxi

View File

@ -1033,7 +1033,7 @@ dtb-$(CONFIG_ARCH_S5PV210) += \
s5pv210-smdkc110.dtb \
s5pv210-smdkv210.dtb \
s5pv210-torbreck.dtb
dtb-$(CONFIG_ARCH_SOCFPGA) += \
dtb-$(CONFIG_ARCH_INTEL_SOCFPGA) += \
socfpga_arria5_socdk.dtb \
socfpga_arria10_socdk_nand.dtb \
socfpga_arria10_socdk_qspi.dtb \

View File

@ -343,59 +343,45 @@
};
lpc: lpc@1e789000 {
compatible = "aspeed,ast2400-lpc", "simple-mfd";
compatible = "aspeed,ast2400-lpc-v2", "simple-mfd", "syscon";
reg = <0x1e789000 0x1000>;
reg-io-width = <4>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x1e789000 0x1000>;
lpc_bmc: lpc-bmc@0 {
compatible = "aspeed,ast2400-lpc-bmc";
reg = <0x0 0x80>;
lpc_ctrl: lpc-ctrl@80 {
compatible = "aspeed,ast2400-lpc-ctrl";
reg = <0x80 0x10>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_host: lpc-host@80 {
compatible = "aspeed,ast2400-lpc-host", "simple-mfd", "syscon";
reg = <0x80 0x1e0>;
reg-io-width = <4>;
lpc_snoop: lpc-snoop@90 {
compatible = "aspeed,ast2400-lpc-snoop";
reg = <0x90 0x8>;
interrupts = <8>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x80 0x1e0>;
lhc: lhc@a0 {
compatible = "aspeed,ast2400-lhc";
reg = <0xa0 0x24 0xc8 0x8>;
};
lpc_ctrl: lpc-ctrl@0 {
compatible = "aspeed,ast2400-lpc-ctrl";
reg = <0x0 0x10>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_reset: reset-controller@98 {
compatible = "aspeed,ast2400-lpc-reset";
reg = <0x98 0x4>;
#reset-cells = <1>;
};
lpc_snoop: lpc-snoop@10 {
compatible = "aspeed,ast2400-lpc-snoop";
reg = <0x10 0x8>;
interrupts = <8>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lhc: lhc@20 {
compatible = "aspeed,ast2400-lhc";
reg = <0x20 0x24 0x48 0x8>;
};
lpc_reset: reset-controller@18 {
compatible = "aspeed,ast2400-lpc-reset";
reg = <0x18 0x4>;
#reset-cells = <1>;
};
ibt: ibt@c0 {
compatible = "aspeed,ast2400-ibt-bmc";
reg = <0xc0 0x18>;
interrupts = <8>;
status = "disabled";
};
ibt: ibt@140 {
compatible = "aspeed,ast2400-ibt-bmc";
reg = <0x140 0x18>;
interrupts = <8>;
status = "disabled";
};
};

View File

@ -434,91 +434,74 @@
};
lpc: lpc@1e789000 {
compatible = "aspeed,ast2500-lpc", "simple-mfd";
compatible = "aspeed,ast2500-lpc-v2", "simple-mfd", "syscon";
reg = <0x1e789000 0x1000>;
reg-io-width = <4>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x1e789000 0x1000>;
lpc_bmc: lpc-bmc@0 {
compatible = "aspeed,ast2500-lpc-bmc", "simple-mfd", "syscon";
reg = <0x0 0x80>;
reg-io-width = <4>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x0 0x80>;
kcs1: kcs@24 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
interrupts = <8>;
status = "disabled";
};
kcs2: kcs@28 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
interrupts = <8>;
status = "disabled";
};
kcs3: kcs@2c {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
interrupts = <8>;
status = "disabled";
};
kcs1: kcs@24 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
interrupts = <8>;
status = "disabled";
};
lpc_host: lpc-host@80 {
compatible = "aspeed,ast2500-lpc-host", "simple-mfd", "syscon";
reg = <0x80 0x1e0>;
reg-io-width = <4>;
kcs2: kcs@28 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
interrupts = <8>;
status = "disabled";
};
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x80 0x1e0>;
kcs3: kcs@2c {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
interrupts = <8>;
status = "disabled";
};
kcs4: kcs@94 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
interrupts = <8>;
status = "disabled";
};
kcs4: kcs@114 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x114 0x1>, <0x118 0x1>, <0x11c 0x1>;
interrupts = <8>;
status = "disabled";
};
lpc_ctrl: lpc-ctrl@0 {
compatible = "aspeed,ast2500-lpc-ctrl";
reg = <0x0 0x10>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_ctrl: lpc-ctrl@80 {
compatible = "aspeed,ast2500-lpc-ctrl";
reg = <0x80 0x10>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_snoop: lpc-snoop@10 {
compatible = "aspeed,ast2500-lpc-snoop";
reg = <0x10 0x8>;
interrupts = <8>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_snoop: lpc-snoop@90 {
compatible = "aspeed,ast2500-lpc-snoop";
reg = <0x90 0x8>;
interrupts = <8>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_reset: reset-controller@18 {
compatible = "aspeed,ast2500-lpc-reset";
reg = <0x18 0x4>;
#reset-cells = <1>;
};
lpc_reset: reset-controller@98 {
compatible = "aspeed,ast2500-lpc-reset";
reg = <0x98 0x4>;
#reset-cells = <1>;
};
lhc: lhc@20 {
compatible = "aspeed,ast2500-lhc";
reg = <0x20 0x24 0x48 0x8>;
};
lhc: lhc@a0 {
compatible = "aspeed,ast2500-lhc";
reg = <0xa0 0x24 0xc8 0x8>;
};
ibt: ibt@c0 {
compatible = "aspeed,ast2500-ibt-bmc";
reg = <0xc0 0x18>;
interrupts = <8>;
status = "disabled";
};
ibt: ibt@140 {
compatible = "aspeed,ast2500-ibt-bmc";
reg = <0x140 0x18>;
interrupts = <8>;
status = "disabled";
};
};

View File

@ -460,91 +460,74 @@
};
lpc: lpc@1e789000 {
compatible = "aspeed,ast2600-lpc", "simple-mfd";
compatible = "aspeed,ast2600-lpc-v2", "simple-mfd", "syscon";
reg = <0x1e789000 0x1000>;
reg-io-width = <4>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x1e789000 0x1000>;
lpc_bmc: lpc-bmc@0 {
compatible = "aspeed,ast2600-lpc-bmc", "simple-mfd", "syscon";
reg = <0x0 0x80>;
reg-io-width = <4>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x0 0x80>;
kcs1: kcs@24 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
kcs_chan = <1>;
status = "disabled";
};
kcs2: kcs@28 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
kcs3: kcs@2c {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
kcs1: kcs@24 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x24 0x1>, <0x30 0x1>, <0x3c 0x1>;
interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
kcs_chan = <1>;
status = "disabled";
};
lpc_host: lpc-host@80 {
compatible = "aspeed,ast2600-lpc-host", "simple-mfd", "syscon";
reg = <0x80 0x1e0>;
reg-io-width = <4>;
kcs2: kcs@28 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x28 0x1>, <0x34 0x1>, <0x40 0x1>;
interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x80 0x1e0>;
kcs3: kcs@2c {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x2c 0x1>, <0x38 0x1>, <0x44 0x1>;
interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
kcs4: kcs@94 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x94 0x1>, <0x98 0x1>, <0x9c 0x1>;
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
kcs4: kcs@114 {
compatible = "aspeed,ast2500-kcs-bmc-v2";
reg = <0x114 0x1>, <0x118 0x1>, <0x11c 0x1>;
interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
lpc_ctrl: lpc-ctrl@0 {
compatible = "aspeed,ast2600-lpc-ctrl";
reg = <0x0 0x80>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_ctrl: lpc-ctrl@80 {
compatible = "aspeed,ast2600-lpc-ctrl";
reg = <0x80 0x80>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_snoop: lpc-snoop@0 {
compatible = "aspeed,ast2600-lpc-snoop";
reg = <0x0 0x80>;
interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lpc_snoop: lpc-snoop@80 {
compatible = "aspeed,ast2600-lpc-snoop";
reg = <0x80 0x80>;
interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&syscon ASPEED_CLK_GATE_LCLK>;
status = "disabled";
};
lhc: lhc@20 {
compatible = "aspeed,ast2600-lhc";
reg = <0x20 0x24 0x48 0x8>;
};
lhc: lhc@a0 {
compatible = "aspeed,ast2600-lhc";
reg = <0xa0 0x24 0xc8 0x8>;
};
lpc_reset: reset-controller@18 {
compatible = "aspeed,ast2600-lpc-reset";
reg = <0x18 0x4>;
#reset-cells = <1>;
};
lpc_reset: reset-controller@98 {
compatible = "aspeed,ast2600-lpc-reset";
reg = <0x98 0x4>;
#reset-cells = <1>;
};
ibt: ibt@c0 {
compatible = "aspeed,ast2600-ibt-bmc";
reg = <0xc0 0x18>;
interrupts = <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
ibt: ibt@140 {
compatible = "aspeed,ast2600-ibt-bmc";
reg = <0x140 0x18>;
interrupts = <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
};

View File

@ -79,7 +79,7 @@ CONFIG_ARCH_MSM8960=y
CONFIG_ARCH_MSM8974=y
CONFIG_ARCH_ROCKCHIP=y
CONFIG_ARCH_RENESAS=y
CONFIG_ARCH_SOCFPGA=y
CONFIG_ARCH_INTEL_SOCFPGA=y
CONFIG_PLAT_SPEAR=y
CONFIG_ARCH_SPEAR13XX=y
CONFIG_MACH_SPEAR1310=y

View File

@ -9,7 +9,7 @@ CONFIG_NAMESPACES=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_EMBEDDED=y
CONFIG_PROFILING=y
CONFIG_ARCH_SOCFPGA=y
CONFIG_ARCH_INTEL_SOCFPGA=y
CONFIG_ARM_THUMBEE=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2

View File

@ -574,7 +574,7 @@ static const char * const pdata_quirks_init_nodes[] = {
"prm",
};
void __init
static void __init
pdata_quirks_init_clocks(const struct of_device_id *omap_dt_match_table)
{
struct device_node *np;

View File

@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
menuconfig ARCH_SOCFPGA
menuconfig ARCH_INTEL_SOCFPGA
bool "Altera SOCFPGA family"
depends on ARCH_MULTI_V7
select ARCH_SUPPORTS_BIG_ENDIAN
@ -19,7 +19,7 @@ menuconfig ARCH_SOCFPGA
select PL310_ERRATA_753970 if PL310
select PL310_ERRATA_769419
if ARCH_SOCFPGA
if ARCH_INTEL_SOCFPGA
config SOCFPGA_SUSPEND
bool "Suspend to RAM on SOCFPGA"
help

View File

@ -8,16 +8,6 @@ config ARCH_ACTIONS
help
This enables support for the Actions Semiconductor S900 SoC family.
config ARCH_AGILEX
bool "Intel's Agilex SoCFPGA Family"
help
This enables support for Intel's Agilex SoCFPGA Family.
config ARCH_N5X
bool "Intel's eASIC N5X SoCFPGA Family"
help
This enables support for Intel's eASIC N5X SoCFPGA Family.
config ARCH_SUNXI
bool "Allwinner sunxi 64-bit SoC Family"
select ARCH_HAS_RESET_CONTROLLER
@ -254,10 +244,11 @@ config ARCH_SEATTLE
help
This enables support for AMD Seattle SOC Family
config ARCH_STRATIX10
bool "Altera's Stratix 10 SoCFPGA Family"
config ARCH_INTEL_SOCFPGA
bool "Intel's SoCFPGA ARMv8 Families"
help
This enables support for Altera's Stratix 10 SoCFPGA Family.
This enables support for Intel's SoCFPGA ARMv8 families:
Stratix 10 (ex. Altera), Agilex and eASIC N5X.
config ARCH_SYNQUACER
bool "Socionext SynQuacer SoC Family"

View File

@ -1,3 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only
dtb-$(CONFIG_ARCH_STRATIX10) += socfpga_stratix10_socdk.dtb \
dtb-$(CONFIG_ARCH_INTEL_SOCFPGA) += socfpga_stratix10_socdk.dtb \
socfpga_stratix10_socdk_nand.dtb

View File

@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
dtb-$(CONFIG_ARCH_AGILEX) += socfpga_agilex_socdk.dtb \
socfpga_agilex_socdk_nand.dtb
dtb-$(CONFIG_ARCH_INTEL_SOCFPGA) += socfpga_agilex_socdk.dtb \
socfpga_agilex_socdk_nand.dtb \
socfpga_n5x_socdk.dtb
dtb-$(CONFIG_ARCH_KEEMBAY) += keembay-evm.dtb
dtb-$(CONFIG_ARCH_N5X) += socfpga_n5x_socdk.dtb

View File

@ -52,7 +52,7 @@ CONFIG_ARCH_RENESAS=y
CONFIG_ARCH_ROCKCHIP=y
CONFIG_ARCH_S32=y
CONFIG_ARCH_SEATTLE=y
CONFIG_ARCH_STRATIX10=y
CONFIG_ARCH_INTEL_SOCFPGA=y
CONFIG_ARCH_SYNQUACER=y
CONFIG_ARCH_TEGRA=y
CONFIG_ARCH_SPRD=y

View File

@ -353,8 +353,10 @@ static int qcom_ebi2_probe(struct platform_device *pdev)
/* Figure out the chipselect */
ret = of_property_read_u32(child, "reg", &csindex);
if (ret)
if (ret) {
of_node_put(child);
return ret;
}
if (csindex > 5) {
dev_err(dev,
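
The change above fixes a device_node reference leak in qcom_ebi2_probe(): the
child iterator takes a reference that must be dropped when the loop exits
early. A minimal sketch of the corrected pattern (function name and loop body
are illustrative, not the full driver):

    #include <linux/device.h>
    #include <linux/of.h>

    static int ebi2_setup_chipselects(struct device *dev)
    {
        struct device_node *child;
        u32 csindex;
        int ret;

        for_each_available_child_of_node(dev->of_node, child) {
            ret = of_property_read_u32(child, "reg", &csindex);
            if (ret) {
                /* the iterator holds a reference to 'child'; drop it */
                of_node_put(child);
                return ret;
            }
            /* ... program the chipselect ... */
        }

        return 0;
    }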

View File

@ -288,7 +288,7 @@ static int sysc_add_named_clock_from_child(struct sysc *ddata,
* limit for clk_get(). If cl ever needs to be freed, it should be done
* with clkdev_drop().
*/
cl = kcalloc(1, sizeof(*cl), GFP_KERNEL);
cl = kzalloc(sizeof(*cl), GFP_KERNEL);
if (!cl)
return -ENOMEM;
@ -1648,7 +1648,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
case SOC_UNKNOWN:
default:
return 0;
};
}
/* Remap the whole module range to be able to reset dispc outputs */
devm_iounmap(ddata->dev, ddata->module_va);
@ -2905,7 +2905,7 @@ static int sysc_init_soc(struct sysc *ddata)
break;
default:
break;
};
}
}
match = soc_device_match(sysc_soc_feat_match);

View File

@ -27,7 +27,6 @@
#define KCS_CHANNEL_MAX 4
/* mapped to lpc-bmc@0 IO space */
#define LPC_HICR0 0x000
#define LPC_HICR0_LPC3E BIT(7)
#define LPC_HICR0_LPC2E BIT(6)
@ -52,15 +51,13 @@
#define LPC_STR1 0x03C
#define LPC_STR2 0x040
#define LPC_STR3 0x044
/* mapped to lpc-host@80 IO space */
#define LPC_HICRB 0x080
#define LPC_HICRB 0x100
#define LPC_HICRB_IBFIF4 BIT(1)
#define LPC_HICRB_LPC4E BIT(0)
#define LPC_LADR4 0x090
#define LPC_IDR4 0x094
#define LPC_ODR4 0x098
#define LPC_STR4 0x09C
#define LPC_LADR4 0x110
#define LPC_IDR4 0x114
#define LPC_ODR4 0x118
#define LPC_STR4 0x11C
struct aspeed_kcs_bmc {
struct regmap *map;
@ -348,12 +345,20 @@ static int aspeed_kcs_probe(struct platform_device *pdev)
struct device_node *np;
int rc;
np = pdev->dev.of_node;
np = dev->of_node->parent;
if (!of_device_is_compatible(np, "aspeed,ast2400-lpc-v2") &&
!of_device_is_compatible(np, "aspeed,ast2500-lpc-v2") &&
!of_device_is_compatible(np, "aspeed,ast2600-lpc-v2")) {
dev_err(dev, "unsupported LPC device binding\n");
return -ENODEV;
}
np = dev->of_node;
if (of_device_is_compatible(np, "aspeed,ast2400-kcs-bmc") ||
of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc"))
of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc"))
kcs_bmc = aspeed_kcs_probe_of_v1(pdev);
else if (of_device_is_compatible(np, "aspeed,ast2400-kcs-bmc-v2") ||
of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc-v2"))
of_device_is_compatible(np, "aspeed,ast2500-kcs-bmc-v2"))
kcs_bmc = aspeed_kcs_probe_of_v2(pdev);
else
return -EINVAL;

View File

@ -394,6 +394,7 @@ source "drivers/clk/renesas/Kconfig"
source "drivers/clk/rockchip/Kconfig"
source "drivers/clk/samsung/Kconfig"
source "drivers/clk/sifive/Kconfig"
source "drivers/clk/socfpga/Kconfig"
source "drivers/clk/sprd/Kconfig"
source "drivers/clk/sunxi/Kconfig"
source "drivers/clk/sunxi-ng/Kconfig"

View File

@ -104,9 +104,7 @@ obj-y += renesas/
obj-$(CONFIG_ARCH_ROCKCHIP) += rockchip/
obj-$(CONFIG_COMMON_CLK_SAMSUNG) += samsung/
obj-$(CONFIG_CLK_SIFIVE) += sifive/
obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/
obj-$(CONFIG_ARCH_AGILEX) += socfpga/
obj-$(CONFIG_ARCH_STRATIX10) += socfpga/
obj-y += socfpga/
obj-$(CONFIG_PLAT_SPEAR) += spear/
obj-y += sprd/
obj-$(CONFIG_ARCH_STI) += st/

View File

@ -314,7 +314,7 @@ static int raspberrypi_clk_probe(struct platform_device *pdev)
return -ENOENT;
}
firmware = rpi_firmware_get(firmware_node);
firmware = devm_rpi_firmware_get(&pdev->dev, firmware_node);
of_node_put(firmware_node);
if (!firmware)
return -EPROBE_DEFER;
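
This hunk is part of the Raspberry Pi unbind rework mentioned in the merge
description: consumers now take a device-managed reference to the firmware
interface, so it is released automatically when the consumer unbinds. A hedged
sketch of the consumer-side pattern (how the firmware node is looked up varies
per driver; the parent-node lookup here is just one option):

    struct device_node *fw_node;
    struct rpi_firmware *fw;

    fw_node = of_get_parent(dev->of_node);      /* or a firmware phandle */
    if (!fw_node)
        return -ENOENT;

    /* The reference is dropped automatically when this device unbinds. */
    fw = devm_rpi_firmware_get(dev, fw_node);
    of_node_put(fw_node);
    if (!fw)
        return -EPROBE_DEFER;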

View File

@ -2,7 +2,7 @@
/*
* System Control and Power Interface (SCMI) Protocol based clock driver
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#include <linux/clk-provider.h>
@ -13,11 +13,13 @@
#include <linux/scmi_protocol.h>
#include <asm/div64.h>
static const struct scmi_clk_proto_ops *scmi_proto_clk_ops;
struct scmi_clk {
u32 id;
struct clk_hw hw;
const struct scmi_clock_info *info;
const struct scmi_handle *handle;
const struct scmi_protocol_handle *ph;
};
#define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw)
@ -29,7 +31,7 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
u64 rate;
struct scmi_clk *clk = to_scmi_clk(hw);
ret = clk->handle->clk_ops->rate_get(clk->handle, clk->id, &rate);
ret = scmi_proto_clk_ops->rate_get(clk->ph, clk->id, &rate);
if (ret)
return 0;
return rate;
@ -69,21 +71,21 @@ static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
{
struct scmi_clk *clk = to_scmi_clk(hw);
return clk->handle->clk_ops->rate_set(clk->handle, clk->id, rate);
return scmi_proto_clk_ops->rate_set(clk->ph, clk->id, rate);
}
static int scmi_clk_enable(struct clk_hw *hw)
{
struct scmi_clk *clk = to_scmi_clk(hw);
return clk->handle->clk_ops->enable(clk->handle, clk->id);
return scmi_proto_clk_ops->enable(clk->ph, clk->id);
}
static void scmi_clk_disable(struct clk_hw *hw)
{
struct scmi_clk *clk = to_scmi_clk(hw);
clk->handle->clk_ops->disable(clk->handle, clk->id);
scmi_proto_clk_ops->disable(clk->ph, clk->id);
}
static const struct clk_ops scmi_clk_ops = {
@ -142,11 +144,17 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
struct device *dev = &sdev->dev;
struct device_node *np = dev->of_node;
const struct scmi_handle *handle = sdev->handle;
struct scmi_protocol_handle *ph;
if (!handle || !handle->clk_ops)
if (!handle)
return -ENODEV;
count = handle->clk_ops->count_get(handle);
scmi_proto_clk_ops =
handle->devm_protocol_get(sdev, SCMI_PROTOCOL_CLOCK, &ph);
if (IS_ERR(scmi_proto_clk_ops))
return PTR_ERR(scmi_proto_clk_ops);
count = scmi_proto_clk_ops->count_get(ph);
if (count < 0) {
dev_err(dev, "%pOFn: invalid clock output count\n", np);
return -EINVAL;
@ -167,14 +175,14 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
if (!sclk)
return -ENOMEM;
sclk->info = handle->clk_ops->info_get(handle, idx);
sclk->info = scmi_proto_clk_ops->info_get(ph, idx);
if (!sclk->info) {
dev_dbg(dev, "invalid clock info for idx %d\n", idx);
continue;
}
sclk->id = idx;
sclk->handle = handle;
sclk->ph = ph;
err = scmi_clk_ops_init(dev, sclk);
if (err) {
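
The hunk above follows the modular SCMI protocol model noted in the merge
description: instead of reaching into handle->clk_ops, an SCMI driver now
acquires per-protocol operations and a protocol handle at probe time. A
condensed sketch of that probe pattern (the function name is a placeholder):

    #include <linux/scmi_protocol.h>

    static const struct scmi_clk_proto_ops *clk_ops;

    static int example_scmi_clk_probe(struct scmi_device *sdev)
    {
        const struct scmi_handle *handle = sdev->handle;
        struct scmi_protocol_handle *ph;
        int count;

        if (!handle)
            return -ENODEV;

        /* Devres-managed: the ops remain valid for the lifetime of sdev. */
        clk_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_CLOCK, &ph);
        if (IS_ERR(clk_ops))
            return PTR_ERR(clk_ops);

        /* Subsequent calls take the protocol handle, not the SCMI handle. */
        count = clk_ops->count_get(ph);
        return count < 0 ? -EINVAL : 0;
    }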

View File

@ -0,0 +1,19 @@
# SPDX-License-Identifier: GPL-2.0
config CLK_INTEL_SOCFPGA
bool "Intel SoCFPGA family clock support" if COMPILE_TEST && !ARCH_INTEL_SOCFPGA
default ARCH_INTEL_SOCFPGA
help
Support for the clock controllers present on Intel SoCFPGA and eASIC
devices like Aria, Cyclone, Stratix 10, Agilex and N5X eASIC.
if CLK_INTEL_SOCFPGA
config CLK_INTEL_SOCFPGA32
bool "Intel Aria / Cyclone clock controller support" if COMPILE_TEST && (!ARM || !ARCH_INTEL_SOCFPGA)
default ARM && ARCH_INTEL_SOCFPGA
config CLK_INTEL_SOCFPGA64
bool "Intel Stratix / Agilex / N5X clock controller support" if COMPILE_TEST && (!ARM64 || !ARCH_INTEL_SOCFPGA)
default ARM64 && ARCH_INTEL_SOCFPGA
endif # CLK_INTEL_SOCFPGA

View File

@ -1,7 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_ARCH_SOCFPGA) += clk.o clk-gate.o clk-pll.o clk-periph.o
obj-$(CONFIG_ARCH_SOCFPGA) += clk-pll-a10.o clk-periph-a10.o clk-gate-a10.o
obj-$(CONFIG_ARCH_STRATIX10) += clk-s10.o
obj-$(CONFIG_ARCH_STRATIX10) += clk-pll-s10.o clk-periph-s10.o clk-gate-s10.o
obj-$(CONFIG_ARCH_AGILEX) += clk-agilex.o
obj-$(CONFIG_ARCH_AGILEX) += clk-pll-s10.o clk-periph-s10.o clk-gate-s10.o
obj-$(CONFIG_CLK_INTEL_SOCFPGA32) += clk.o clk-gate.o clk-pll.o clk-periph.o \
clk-pll-a10.o clk-periph-a10.o clk-gate-a10.o
obj-$(CONFIG_CLK_INTEL_SOCFPGA64) += clk-s10.o \
clk-pll-s10.o clk-periph-s10.o clk-gate-s10.o \
clk-agilex.o

View File

@ -2515,18 +2515,6 @@ static int clk_plle_tegra210_enable(struct clk_hw *hw)
pll_writel(val, PLLE_SS_CTRL, pll);
udelay(1);
val = pll_readl_misc(pll);
val &= ~PLLE_MISC_IDDQ_SW_CTRL;
pll_writel_misc(val, pll);
val = pll_readl(pll->params->aux_reg, pll);
val |= (PLLE_AUX_USE_LOCKDET | PLLE_AUX_SS_SEQ_INCLUDE);
val &= ~(PLLE_AUX_ENABLE_SWCTL | PLLE_AUX_SS_SWCTL);
pll_writel(val, pll->params->aux_reg, pll);
udelay(1);
val |= PLLE_AUX_SEQ_ENABLE;
pll_writel(val, pll->params->aux_reg, pll);
out:
if (pll->lock)
spin_unlock_irqrestore(pll->lock, flags);

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2012-2014 NVIDIA CORPORATION. All rights reserved.
* Copyright (c) 2012-2020 NVIDIA CORPORATION. All rights reserved.
*/
#include <linux/io.h>
@ -403,6 +403,14 @@ static unsigned long tegra210_input_freq[] = {
#define PLLRE_BASE_DEFAULT_MASK 0x1c000000
#define PLLRE_MISC0_WRITE_MASK 0x67ffffff
/* PLLE */
#define PLLE_MISC_IDDQ_SW_CTRL (1 << 14)
#define PLLE_AUX_USE_LOCKDET (1 << 3)
#define PLLE_AUX_SS_SEQ_INCLUDE (1 << 31)
#define PLLE_AUX_ENABLE_SWCTL (1 << 4)
#define PLLE_AUX_SS_SWCTL (1 << 6)
#define PLLE_AUX_SEQ_ENABLE (1 << 24)
/* PLLX */
#define PLLX_USE_DYN_RAMP 1
#define PLLX_BASE_LOCK (1 << 27)
@ -489,6 +497,49 @@ static unsigned long tegra210_input_freq[] = {
#define PLLU_MISC0_WRITE_MASK 0xbfffffff
#define PLLU_MISC1_WRITE_MASK 0x00000007
bool tegra210_plle_hw_sequence_is_enabled(void)
{
u32 value;
value = readl_relaxed(clk_base + PLLE_AUX);
if (value & PLLE_AUX_SEQ_ENABLE)
return true;
return false;
}
EXPORT_SYMBOL_GPL(tegra210_plle_hw_sequence_is_enabled);
int tegra210_plle_hw_sequence_start(void)
{
u32 value;
if (tegra210_plle_hw_sequence_is_enabled())
return 0;
/* skip if PLLE is not enabled yet */
value = readl_relaxed(clk_base + PLLE_MISC0);
if (!(value & PLLE_MISC_LOCK))
return -EIO;
value &= ~PLLE_MISC_IDDQ_SW_CTRL;
writel_relaxed(value, clk_base + PLLE_MISC0);
value = readl_relaxed(clk_base + PLLE_AUX);
value |= (PLLE_AUX_USE_LOCKDET | PLLE_AUX_SS_SEQ_INCLUDE);
value &= ~(PLLE_AUX_ENABLE_SWCTL | PLLE_AUX_SS_SWCTL);
writel_relaxed(value, clk_base + PLLE_AUX);
fence_udelay(1, clk_base);
value |= PLLE_AUX_SEQ_ENABLE;
writel_relaxed(value, clk_base + PLLE_AUX);
fence_udelay(1, clk_base);
return 0;
}
EXPORT_SYMBOL_GPL(tegra210_plle_hw_sequence_start);
void tegra210_xusb_pll_hw_control_enable(void)
{
u32 val;
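
tegra210_plle_hw_sequence_is_enabled() and tegra210_plle_hw_sequence_start()
are exported above so that another driver (the XUSB pad controller is the
intended user) can hand PLLE power sequencing over to hardware. A hedged
usage sketch (headers and the calling driver omitted):

    static int enable_plle_hw_sequencer(struct device *dev)
    {
        int err;

        if (tegra210_plle_hw_sequence_is_enabled())
            return 0;       /* already under hardware control */

        err = tegra210_plle_hw_sequence_start();
        if (err)
            dev_err(dev, "failed to start PLLE HW sequencer: %d\n", err);

        return err;
    }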

View File

@ -2,7 +2,7 @@
/*
* System Control and Power Interface (SCMI) based CPUFreq Interface driver
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
* Sudeep Holla <sudeep.holla@arm.com>
*/
@ -25,17 +25,17 @@ struct scmi_data {
struct device *cpu_dev;
};
static const struct scmi_handle *handle;
static struct scmi_protocol_handle *ph;
static const struct scmi_perf_proto_ops *perf_ops;
static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
{
struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
const struct scmi_perf_ops *perf_ops = handle->perf_ops;
struct scmi_data *priv = policy->driver_data;
unsigned long rate;
int ret;
ret = perf_ops->freq_get(handle, priv->domain_id, &rate, false);
ret = perf_ops->freq_get(ph, priv->domain_id, &rate, false);
if (ret)
return 0;
return rate / 1000;
@ -50,19 +50,17 @@ static int
scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
{
struct scmi_data *priv = policy->driver_data;
const struct scmi_perf_ops *perf_ops = handle->perf_ops;
u64 freq = policy->freq_table[index].frequency;
return perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
return perf_ops->freq_set(ph, priv->domain_id, freq * 1000, false);
}
static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
struct scmi_data *priv = policy->driver_data;
const struct scmi_perf_ops *perf_ops = handle->perf_ops;
if (!perf_ops->freq_set(handle, priv->domain_id,
if (!perf_ops->freq_set(ph, priv->domain_id,
target_freq * 1000, true))
return target_freq;
@ -75,7 +73,7 @@ scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
int cpu, domain, tdomain;
struct device *tcpu_dev;
domain = handle->perf_ops->device_domain_id(cpu_dev);
domain = perf_ops->device_domain_id(cpu_dev);
if (domain < 0)
return domain;
@ -87,7 +85,7 @@ scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
if (!tcpu_dev)
continue;
tdomain = handle->perf_ops->device_domain_id(tcpu_dev);
tdomain = perf_ops->device_domain_id(tcpu_dev);
if (tdomain == domain)
cpumask_set_cpu(cpu, cpumask);
}
@ -102,13 +100,13 @@ scmi_get_cpu_power(unsigned long *power, unsigned long *KHz,
unsigned long Hz;
int ret, domain;
domain = handle->perf_ops->device_domain_id(cpu_dev);
domain = perf_ops->device_domain_id(cpu_dev);
if (domain < 0)
return domain;
/* Get the power cost of the performance domain. */
Hz = *KHz * 1000;
ret = handle->perf_ops->est_power_get(handle, domain, &Hz, power);
ret = perf_ops->est_power_get(ph, domain, &Hz, power);
if (ret)
return ret;
@ -126,6 +124,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
struct scmi_data *priv;
struct cpufreq_frequency_table *freq_table;
struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
cpumask_var_t opp_shared_cpus;
bool power_scale_mw;
cpu_dev = get_cpu_device(policy->cpu);
@ -134,30 +133,64 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
return -ENODEV;
}
ret = handle->perf_ops->device_opps_add(handle, cpu_dev);
if (ret) {
dev_warn(cpu_dev, "failed to add opps to the device\n");
return ret;
}
if (!zalloc_cpumask_var(&opp_shared_cpus, GFP_KERNEL))
ret = -ENOMEM;
/* Obtain CPUs that share SCMI performance controls */
ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus);
if (ret) {
dev_warn(cpu_dev, "failed to get sharing cpumask\n");
return ret;
goto out_free_cpumask;
}
ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
if (ret) {
dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
__func__, ret);
return ret;
/*
* Obtain CPUs that share performance levels.
* The OPP 'sharing cpus' info may come from DT through an empty opp
* table and opp-shared.
*/
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, opp_shared_cpus);
if (ret || !cpumask_weight(opp_shared_cpus)) {
/*
* Either opp-table is not set or no opp-shared was found.
* Use the CPU mask from SCMI to designate CPUs sharing an OPP
* table.
*/
cpumask_copy(opp_shared_cpus, policy->cpus);
}
/*
* A previous CPU may have marked OPPs as shared for a few CPUs, based on
* what OPP core provided. If the current CPU is part of those few, then
* there is no need to add OPPs again.
*/
nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
if (nr_opp <= 0) {
dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
ret = -EPROBE_DEFER;
goto out_free_opp;
ret = perf_ops->device_opps_add(ph, cpu_dev);
if (ret) {
dev_warn(cpu_dev, "failed to add opps to the device\n");
goto out_free_cpumask;
}
nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
if (nr_opp <= 0) {
dev_err(cpu_dev, "%s: No OPPs for this device: %d\n",
__func__, ret);
ret = -ENODEV;
goto out_free_opp;
}
ret = dev_pm_opp_set_sharing_cpus(cpu_dev, opp_shared_cpus);
if (ret) {
dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
__func__, ret);
goto out_free_opp;
}
power_scale_mw = perf_ops->power_scale_mw_get(ph);
em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb,
opp_shared_cpus, power_scale_mw);
}
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
@ -173,7 +206,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
}
priv->cpu_dev = cpu_dev;
priv->domain_id = handle->perf_ops->device_domain_id(cpu_dev);
priv->domain_id = perf_ops->device_domain_id(cpu_dev);
policy->driver_data = priv;
policy->freq_table = freq_table;
@ -181,26 +214,27 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
/* SCMI allows DVFS request for any domain from any CPU */
policy->dvfs_possible_from_any_cpu = true;
latency = handle->perf_ops->transition_latency_get(handle, cpu_dev);
latency = perf_ops->transition_latency_get(ph, cpu_dev);
if (!latency)
latency = CPUFREQ_ETERNAL;
policy->cpuinfo.transition_latency = latency;
policy->fast_switch_possible =
handle->perf_ops->fast_switch_possible(handle, cpu_dev);
power_scale_mw = handle->perf_ops->power_scale_mw_get(handle);
em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
power_scale_mw);
perf_ops->fast_switch_possible(ph, cpu_dev);
free_cpumask_var(opp_shared_cpus);
return 0;
out_free_priv:
kfree(priv);
out_free_opp:
dev_pm_opp_remove_all_dynamic(cpu_dev);
out_free_cpumask:
free_cpumask_var(opp_shared_cpus);
return ret;
}
@ -233,12 +267,17 @@ static int scmi_cpufreq_probe(struct scmi_device *sdev)
{
int ret;
struct device *dev = &sdev->dev;
const struct scmi_handle *handle;
handle = sdev->handle;
if (!handle || !handle->perf_ops)
if (!handle)
return -ENODEV;
perf_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_PERF, &ph);
if (IS_ERR(perf_ops))
return PTR_ERR(perf_ops);
#ifdef CONFIG_COMMON_CLK
/* dummy clock provider as needed by OPP if clocks property is used */
if (of_find_property(dev->of_node, "#clock-cells", NULL))
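
The per-CPU DVFS rework above changes how the set of CPUs sharing an OPP table
is determined: an explicit empty opp table with opp-shared in the devicetree
is preferred, and the SCMI performance-domain mask is the fallback. A
condensed sketch of just that fallback logic (surrounding OPP and energy-model
registration elided):

    cpumask_var_t opp_shared_cpus;
    int ret;

    if (!zalloc_cpumask_var(&opp_shared_cpus, GFP_KERNEL))
        return -ENOMEM;

    /* Prefer an explicit opp-shared description from the devicetree. */
    ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, opp_shared_cpus);
    if (ret || !cpumask_weight(opp_shared_cpus))
        /* Fall back to the CPUs in the same SCMI performance domain. */
        cpumask_copy(opp_shared_cpus, policy->cpus);

    /* ... add OPPs and register the energy model for opp_shared_cpus ... */

    free_cpumask_var(opp_shared_cpus);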

View File

@ -100,7 +100,7 @@ config AT_XDMAC
config AXI_DMAC
tristate "Analog Devices AXI-DMAC DMA support"
depends on MICROBLAZE || NIOS2 || ARCH_ZYNQ || ARCH_ZYNQMP || ARCH_SOCFPGA || COMPILE_TEST
depends on MICROBLAZE || NIOS2 || ARCH_ZYNQ || ARCH_ZYNQMP || ARCH_INTEL_SOCFPGA || COMPILE_TEST
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
select REGMAP_MMIO

View File

@ -396,7 +396,7 @@ config EDAC_THUNDERX
config EDAC_ALTERA
bool "Altera SOCFPGA ECC"
depends on EDAC=y && (ARCH_SOCFPGA || ARCH_STRATIX10)
depends on EDAC=y && ARCH_INTEL_SOCFPGA
help
Support for error detection and correction on the
Altera SOCs. This is the global enable for the

View File

@ -1501,8 +1501,13 @@ static int altr_portb_setup(struct altr_edac_device_dev *device)
dci->mod_name = ecc_name;
dci->dev_name = ecc_name;
/* Update the PortB IRQs - A10 has 4, S10 has 2, Index accordingly */
#ifdef CONFIG_ARCH_STRATIX10
/*
* Update the PortB IRQs - A10 has 4, S10 has 2, Index accordingly
*
* FIXME: Instead of ifdefs with different architectures the driver
* should properly use compatibles.
*/
#ifdef CONFIG_64BIT
altdev->sb_irq = irq_of_parse_and_map(np, 1);
#else
altdev->sb_irq = irq_of_parse_and_map(np, 2);
@ -1521,7 +1526,7 @@ static int altr_portb_setup(struct altr_edac_device_dev *device)
goto err_release_group_1;
}
#ifdef CONFIG_ARCH_STRATIX10
#ifdef CONFIG_64BIT
/* Use IRQ to determine SError origin instead of assigning IRQ */
rc = of_property_read_u32_index(np, "interrupts", 1, &altdev->db_irq);
if (rc) {
@ -1931,7 +1936,7 @@ static int altr_edac_a10_device_add(struct altr_arria10_edac *edac,
goto err_release_group1;
}
#ifdef CONFIG_ARCH_STRATIX10
#ifdef CONFIG_64BIT
/* Use IRQ to determine SError origin instead of assigning IRQ */
rc = of_property_read_u32_index(np, "interrupts", 0, &altdev->db_irq);
if (rc) {
@ -2016,7 +2021,7 @@ static const struct irq_domain_ops a10_eccmgr_ic_ops = {
/************** Stratix 10 EDAC Double Bit Error Handler ************/
#define to_a10edac(p, m) container_of(p, struct altr_arria10_edac, m)
#ifdef CONFIG_ARCH_STRATIX10
#ifdef CONFIG_64BIT
/* panic routine issues reboot on non-zero panic_timeout */
extern int panic_timeout;
@ -2109,7 +2114,7 @@ static int altr_edac_a10_probe(struct platform_device *pdev)
altr_edac_a10_irq_handler,
edac);
#ifdef CONFIG_ARCH_STRATIX10
#ifdef CONFIG_64BIT
{
int dberror, err_addr;
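
The FIXME added in this hunk suggests keying behaviour off the compatible
string instead of build-time #ifdefs. Purely as an illustration of that idea
(the structure, its fields and the per-SoC values are hypothetical, not taken
from the driver):

    struct altr_edac_variant {
        unsigned int portb_sb_irq_index;  /* A10 and S10 index PortB IRQs differently */
        bool db_irq_from_dt;              /* S10 reads the DBE IRQ from "interrupts" */
    };

    static const struct altr_edac_variant a10_variant = {
        .portb_sb_irq_index = 2,
        .db_irq_from_dt = false,
    };

    static const struct altr_edac_variant s10_variant = {
        .portb_sb_irq_index = 1,
        .db_irq_from_dt = true,
    };

    /* In probe, matched via the .data pointer of the driver's of_device_id
     * table, instead of #ifdef CONFIG_64BIT:
     */
    const struct altr_edac_variant *variant =
            of_device_get_match_data(&pdev->dev);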

View File

@ -206,7 +206,7 @@ config FW_CFG_SYSFS_CMDLINE
config INTEL_STRATIX10_SERVICE
tristate "Intel Stratix10 Service Layer"
depends on (ARCH_STRATIX10 || ARCH_AGILEX) && HAVE_ARM_SMCCC
depends on ARCH_INTEL_SOCFPGA && ARM64 && HAVE_ARM_SMCCC
default n
help
Intel Stratix10 service layer runs at privileged exception level,

View File

@ -2,11 +2,12 @@
/*
* System Control and Management Interface (SCMI) Base Protocol
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#define pr_fmt(fmt) "SCMI Notifications BASE - " fmt
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include "common.h"
@ -50,30 +51,30 @@ struct scmi_base_error_notify_payld {
* scmi_base_attributes_get() - gets the implementation details
* that are associated with the base protocol.
*
* @handle: SCMI entity handle
* @ph: SCMI protocol handle
*
* Return: 0 on success, else appropriate SCMI error.
*/
static int scmi_base_attributes_get(const struct scmi_handle *handle)
static int scmi_base_attributes_get(const struct scmi_protocol_handle *ph)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_base_attributes *attr_info;
struct scmi_revision_info *rev = handle->version;
struct scmi_revision_info *rev = ph->get_priv(ph);
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
0, sizeof(*attr_info), &t);
if (ret)
return ret;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
attr_info = t->rx.buf;
rev->num_protocols = attr_info->num_protocols;
rev->num_agents = attr_info->num_agents;
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -81,19 +82,20 @@ static int scmi_base_attributes_get(const struct scmi_handle *handle)
/**
* scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string.
*
* @handle: SCMI entity handle
* @ph: SCMI protocol handle
* @sub_vendor: specify true if sub-vendor ID is needed
*
* Return: 0 on success, else appropriate SCMI error.
*/
static int
scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
scmi_base_vendor_id_get(const struct scmi_protocol_handle *ph, bool sub_vendor)
{
u8 cmd;
int ret, size;
char *vendor_id;
struct scmi_xfer *t;
struct scmi_revision_info *rev = handle->version;
struct scmi_revision_info *rev = ph->get_priv(ph);
if (sub_vendor) {
cmd = BASE_DISCOVER_SUB_VENDOR;
@ -105,15 +107,15 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
size = ARRAY_SIZE(rev->vendor_id);
}
ret = scmi_xfer_get_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
ret = ph->xops->xfer_get_init(ph, cmd, 0, size, &t);
if (ret)
return ret;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
memcpy(vendor_id, t->rx.buf, size);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -123,30 +125,30 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
* implementation 32-bit version. The format of the version number is
* vendor-specific
*
* @handle: SCMI entity handle
* @ph: SCMI protocol handle
*
* Return: 0 on success, else appropriate SCMI error.
*/
static int
scmi_base_implementation_version_get(const struct scmi_handle *handle)
scmi_base_implementation_version_get(const struct scmi_protocol_handle *ph)
{
int ret;
__le32 *impl_ver;
struct scmi_xfer *t;
struct scmi_revision_info *rev = handle->version;
struct scmi_revision_info *rev = ph->get_priv(ph);
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_IMPLEMENT_VERSION,
0, sizeof(*impl_ver), &t);
if (ret)
return ret;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
impl_ver = t->rx.buf;
rev->impl_ver = le32_to_cpu(*impl_ver);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -155,23 +157,24 @@ scmi_base_implementation_version_get(const struct scmi_handle *handle)
* scmi_base_implementation_list_get() - gets the list of protocols it is
* OSPM is allowed to access
*
* @handle: SCMI entity handle
* @ph: SCMI protocol handle
* @protocols_imp: pointer to hold the list of protocol identifiers
*
* Return: 0 on success, else appropriate SCMI error.
*/
static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
u8 *protocols_imp)
static int
scmi_base_implementation_list_get(const struct scmi_protocol_handle *ph,
u8 *protocols_imp)
{
u8 *list;
int ret, loop;
struct scmi_xfer *t;
__le32 *num_skip, *num_ret;
u32 tot_num_ret = 0, loop_num_ret;
struct device *dev = handle->dev;
struct device *dev = ph->dev;
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_LIST_PROTOCOLS,
sizeof(*num_skip), 0, &t);
if (ret)
return ret;
@ -183,7 +186,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
/* Set the number of protocols to be skipped/already read */
*num_skip = cpu_to_le32(tot_num_ret);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (ret)
break;
@ -198,10 +201,10 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
tot_num_ret += loop_num_ret;
scmi_reset_rx_to_maxsz(handle, t);
ph->xops->reset_rx_to_maxsz(ph, t);
} while (loop_num_ret);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -209,7 +212,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
/**
* scmi_base_discover_agent_get() - discover the name of an agent
*
* @handle: SCMI entity handle
* @ph: SCMI protocol handle
* @id: Agent identifier
* @name: Agent identifier ASCII string
*
@ -218,63 +221,63 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
*
* Return: 0 on success, else appropriate SCMI error.
*/
static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
static int scmi_base_discover_agent_get(const struct scmi_protocol_handle *ph,
int id, char *name)
{
int ret;
struct scmi_xfer *t;
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_AGENT,
SCMI_PROTOCOL_BASE, sizeof(__le32),
SCMI_MAX_STR_SIZE, &t);
ret = ph->xops->xfer_get_init(ph, BASE_DISCOVER_AGENT,
sizeof(__le32), SCMI_MAX_STR_SIZE, &t);
if (ret)
return ret;
put_unaligned_le32(id, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
strlcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_base_error_notify(const struct scmi_handle *handle, bool enable)
static int scmi_base_error_notify(const struct scmi_protocol_handle *ph,
bool enable)
{
int ret;
u32 evt_cntl = enable ? BASE_TP_NOTIFY_ALL : 0;
struct scmi_xfer *t;
struct scmi_msg_base_error_notify *cfg;
ret = scmi_xfer_get_init(handle, BASE_NOTIFY_ERRORS,
SCMI_PROTOCOL_BASE, sizeof(*cfg), 0, &t);
ret = ph->xops->xfer_get_init(ph, BASE_NOTIFY_ERRORS,
sizeof(*cfg), 0, &t);
if (ret)
return ret;
cfg = t->tx.buf;
cfg->event_control = cpu_to_le32(evt_cntl);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_base_set_notify_enabled(const struct scmi_handle *handle,
static int scmi_base_set_notify_enabled(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enable)
{
int ret;
ret = scmi_base_error_notify(handle, enable);
ret = scmi_base_error_notify(ph, enable);
if (ret)
pr_debug("FAIL_ENABLED - evt[%X] ret:%d\n", evt_id, ret);
return ret;
}
static void *scmi_base_fill_custom_report(const struct scmi_handle *handle,
static void *scmi_base_fill_custom_report(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
@ -318,17 +321,24 @@ static const struct scmi_event_ops base_event_ops = {
.fill_custom_report = scmi_base_fill_custom_report,
};
int scmi_base_protocol_init(struct scmi_handle *h)
static const struct scmi_protocol_events base_protocol_events = {
.queue_sz = 4 * SCMI_PROTO_QUEUE_SZ,
.ops = &base_event_ops,
.evts = base_events,
.num_events = ARRAY_SIZE(base_events),
.num_sources = SCMI_BASE_NUM_SOURCES,
};
static int scmi_base_protocol_init(const struct scmi_protocol_handle *ph)
{
int id, ret;
u8 *prot_imp;
u32 version;
char name[SCMI_MAX_STR_SIZE];
const struct scmi_handle *handle = h;
struct device *dev = handle->dev;
struct scmi_revision_info *rev = handle->version;
struct device *dev = ph->dev;
struct scmi_revision_info *rev = scmi_revision_area_get(ph);
ret = scmi_version_get(handle, SCMI_PROTOCOL_BASE, &version);
ret = ph->xops->version_get(ph, &version);
if (ret)
return ret;
@ -338,13 +348,15 @@ int scmi_base_protocol_init(struct scmi_handle *h)
rev->major_ver = PROTOCOL_REV_MAJOR(version),
rev->minor_ver = PROTOCOL_REV_MINOR(version);
ph->set_priv(ph, rev);
scmi_base_attributes_get(handle);
scmi_base_vendor_id_get(handle, false);
scmi_base_vendor_id_get(handle, true);
scmi_base_implementation_version_get(handle);
scmi_base_implementation_list_get(handle, prot_imp);
scmi_setup_protocol_implemented(handle, prot_imp);
scmi_base_attributes_get(ph);
scmi_base_vendor_id_get(ph, false);
scmi_base_vendor_id_get(ph, true);
scmi_base_implementation_version_get(ph);
scmi_base_implementation_list_get(ph, prot_imp);
scmi_setup_protocol_implemented(ph, prot_imp);
dev_info(dev, "SCMI Protocol v%d.%d '%s:%s' Firmware version 0x%x\n",
rev->major_ver, rev->minor_ver, rev->vendor_id,
@ -352,16 +364,20 @@ int scmi_base_protocol_init(struct scmi_handle *h)
dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols,
rev->num_agents);
scmi_register_protocol_events(handle, SCMI_PROTOCOL_BASE,
(4 * SCMI_PROTO_QUEUE_SZ),
&base_event_ops, base_events,
ARRAY_SIZE(base_events),
SCMI_BASE_NUM_SOURCES);
for (id = 0; id < rev->num_agents; id++) {
scmi_base_discover_agent_get(handle, id, name);
scmi_base_discover_agent_get(ph, id, name);
dev_dbg(dev, "Agent %d: %s\n", id, name);
}
return 0;
}
static const struct scmi_protocol scmi_base = {
.id = SCMI_PROTOCOL_BASE,
.owner = NULL,
.instance_init = &scmi_base_protocol_init,
.ops = NULL,
.events = &base_protocol_events,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(base, scmi_base)

drivers/firmware/arm_scmi/bus.c:

@ -2,7 +2,7 @@
/*
* System Control and Management Interface (SCMI) Message Protocol bus layer
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@ -51,18 +51,53 @@ static int scmi_dev_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle)
static int scmi_match_by_id_table(struct device *dev, void *data)
{
scmi_prot_init_fn_t fn = idr_find(&scmi_protocols, protocol_id);
struct scmi_device *sdev = to_scmi_dev(dev);
struct scmi_device_id *id_table = data;
if (unlikely(!fn))
return -EINVAL;
return fn(handle);
return sdev->protocol_id == id_table->protocol_id &&
!strcmp(sdev->name, id_table->name);
}
static int scmi_protocol_dummy_init(struct scmi_handle *handle)
struct scmi_device *scmi_child_dev_find(struct device *parent,
int prot_id, const char *name)
{
return 0;
struct scmi_device_id id_table;
struct device *dev;
id_table.protocol_id = prot_id;
id_table.name = name;
dev = device_find_child(parent, &id_table, scmi_match_by_id_table);
if (!dev)
return NULL;
return to_scmi_dev(dev);
}
const struct scmi_protocol *scmi_protocol_get(int protocol_id)
{
const struct scmi_protocol *proto;
proto = idr_find(&scmi_protocols, protocol_id);
if (!proto || !try_module_get(proto->owner)) {
pr_warn("SCMI Protocol 0x%x not found!\n", protocol_id);
return NULL;
}
pr_debug("Found SCMI Protocol 0x%x\n", protocol_id);
return proto;
}
void scmi_protocol_put(int protocol_id)
{
const struct scmi_protocol *proto;
proto = idr_find(&scmi_protocols, protocol_id);
if (proto)
module_put(proto->owner);
}
static int scmi_dev_probe(struct device *dev)
@ -70,7 +105,6 @@ static int scmi_dev_probe(struct device *dev)
struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
struct scmi_device *scmi_dev = to_scmi_dev(dev);
const struct scmi_device_id *id;
int ret;
id = scmi_dev_match_id(scmi_dev, scmi_drv);
if (!id)
@ -79,14 +113,6 @@ static int scmi_dev_probe(struct device *dev)
if (!scmi_dev->handle)
return -EPROBE_DEFER;
ret = scmi_protocol_init(scmi_dev->protocol_id, scmi_dev->handle);
if (ret)
return ret;
/* Skip protocol initialisation for additional devices */
idr_replace(&scmi_protocols, &scmi_protocol_dummy_init,
scmi_dev->protocol_id);
return scmi_drv->probe(scmi_dev);
}
@ -113,6 +139,10 @@ int scmi_driver_register(struct scmi_driver *driver, struct module *owner,
{
int retval;
retval = scmi_protocol_device_request(driver->id_table);
if (retval)
return retval;
driver->driver.bus = &scmi_bus_type;
driver->driver.name = driver->name;
driver->driver.owner = owner;
@ -129,6 +159,7 @@ EXPORT_SYMBOL_GPL(scmi_driver_register);
void scmi_driver_unregister(struct scmi_driver *driver)
{
driver_unregister(&driver->driver);
scmi_protocol_device_unrequest(driver->id_table);
}
EXPORT_SYMBOL_GPL(scmi_driver_unregister);
@ -194,26 +225,45 @@ void scmi_set_handle(struct scmi_device *scmi_dev)
scmi_dev->handle = scmi_handle_get(&scmi_dev->dev);
}
int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn)
int scmi_protocol_register(const struct scmi_protocol *proto)
{
int ret;
spin_lock(&protocol_lock);
ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1,
GFP_ATOMIC);
spin_unlock(&protocol_lock);
if (ret != protocol_id)
pr_err("unable to allocate SCMI idr slot, err %d\n", ret);
if (!proto) {
pr_err("invalid protocol\n");
return -EINVAL;
}
return ret;
if (!proto->instance_init) {
pr_err("missing init for protocol 0x%x\n", proto->id);
return -EINVAL;
}
spin_lock(&protocol_lock);
ret = idr_alloc(&scmi_protocols, (void *)proto,
proto->id, proto->id + 1, GFP_ATOMIC);
spin_unlock(&protocol_lock);
if (ret != proto->id) {
pr_err("unable to allocate SCMI idr slot for 0x%x - err %d\n",
proto->id, ret);
return ret;
}
pr_debug("Registered SCMI Protocol 0x%x\n", proto->id);
return 0;
}
EXPORT_SYMBOL_GPL(scmi_protocol_register);
void scmi_protocol_unregister(int protocol_id)
void scmi_protocol_unregister(const struct scmi_protocol *proto)
{
spin_lock(&protocol_lock);
idr_remove(&scmi_protocols, protocol_id);
idr_remove(&scmi_protocols, proto->id);
spin_unlock(&protocol_lock);
pr_debug("Unregistered SCMI Protocol 0x%x\n", proto->id);
return;
}
EXPORT_SYMBOL_GPL(scmi_protocol_unregister);

drivers/firmware/arm_scmi/clock.c:

@ -2,9 +2,10 @@
/*
* System Control and Management Interface (SCMI) Clock Protocol
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#include <linux/module.h>
#include <linux/sort.h>
#include "common.h"
@ -74,52 +75,53 @@ struct clock_info {
struct scmi_clock_info *clk;
};
static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
struct clock_info *ci)
static int
scmi_clock_protocol_attributes_get(const struct scmi_protocol_handle *ph,
struct clock_info *ci)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_clock_protocol_attributes *attr;
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
0, sizeof(*attr), &t);
if (ret)
return ret;
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
ci->num_clocks = le16_to_cpu(attr->num_clocks);
ci->max_async_req = attr->max_async_req;
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_clock_attributes_get(const struct scmi_handle *handle,
static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph,
u32 clk_id, struct scmi_clock_info *clk)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_clock_attributes *attr;
ret = scmi_xfer_get_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
sizeof(clk_id), sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, CLOCK_ATTRIBUTES,
sizeof(clk_id), sizeof(*attr), &t);
if (ret)
return ret;
put_unaligned_le32(clk_id, t->tx.buf);
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE);
else
clk->name[0] = '\0';
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -136,7 +138,7 @@ static int rate_cmp_func(const void *_r1, const void *_r2)
}
static int
scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id,
struct scmi_clock_info *clk)
{
u64 *rate = NULL;
@ -148,8 +150,8 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
struct scmi_msg_clock_describe_rates *clk_desc;
struct scmi_msg_resp_clock_describe_rates *rlist;
ret = scmi_xfer_get_init(handle, CLOCK_DESCRIBE_RATES,
SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t);
ret = ph->xops->xfer_get_init(ph, CLOCK_DESCRIBE_RATES,
sizeof(*clk_desc), 0, &t);
if (ret)
return ret;
@ -161,7 +163,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
/* Set the number of rates to be skipped/already read */
clk_desc->rate_index = cpu_to_le32(tot_rate_cnt);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (ret)
goto err;
@ -171,7 +173,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
num_returned = NUM_RETURNED(rates_flag);
if (tot_rate_cnt + num_returned > SCMI_MAX_NUM_RATES) {
dev_err(handle->dev, "No. of rates > MAX_NUM_RATES");
dev_err(ph->dev, "No. of rates > MAX_NUM_RATES");
break;
}
@ -179,7 +181,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
clk->range.min_rate = RATE_TO_U64(rlist->rate[0]);
clk->range.max_rate = RATE_TO_U64(rlist->rate[1]);
clk->range.step_size = RATE_TO_U64(rlist->rate[2]);
dev_dbg(handle->dev, "Min %llu Max %llu Step %llu Hz\n",
dev_dbg(ph->dev, "Min %llu Max %llu Step %llu Hz\n",
clk->range.min_rate, clk->range.max_rate,
clk->range.step_size);
break;
@ -188,12 +190,12 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
rate = &clk->list.rates[tot_rate_cnt];
for (cnt = 0; cnt < num_returned; cnt++, rate++) {
*rate = RATE_TO_U64(rlist->rate[cnt]);
dev_dbg(handle->dev, "Rate %llu Hz\n", *rate);
dev_dbg(ph->dev, "Rate %llu Hz\n", *rate);
}
tot_rate_cnt += num_returned;
scmi_reset_rx_to_maxsz(handle, t);
ph->xops->reset_rx_to_maxsz(ph, t);
/*
* check for both returned and remaining to avoid infinite
* loop due to buggy firmware
@ -208,42 +210,42 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
clk->rate_discrete = rate_discrete;
err:
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
scmi_clock_rate_get(const struct scmi_protocol_handle *ph,
u32 clk_id, u64 *value)
{
int ret;
struct scmi_xfer *t;
ret = scmi_xfer_get_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
sizeof(__le32), sizeof(u64), &t);
ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_GET,
sizeof(__le32), sizeof(u64), &t);
if (ret)
return ret;
put_unaligned_le32(clk_id, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
*value = get_unaligned_le64(t->rx.buf);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
u64 rate)
static int scmi_clock_rate_set(const struct scmi_protocol_handle *ph,
u32 clk_id, u64 rate)
{
int ret;
u32 flags = 0;
struct scmi_xfer *t;
struct scmi_clock_set_rate *cfg;
struct clock_info *ci = handle->clk_priv;
struct clock_info *ci = ph->get_priv(ph);
ret = scmi_xfer_get_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
sizeof(*cfg), 0, &t);
ret = ph->xops->xfer_get_init(ph, CLOCK_RATE_SET, sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -258,26 +260,27 @@ static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
cfg->value_high = cpu_to_le32(rate >> 32);
if (flags & CLOCK_SET_ASYNC)
ret = scmi_do_xfer_with_response(handle, t);
ret = ph->xops->do_xfer_with_response(ph, t);
else
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (ci->max_async_req)
atomic_dec(&ci->cur_async_req);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id,
u32 config)
{
int ret;
struct scmi_xfer *t;
struct scmi_clock_set_config *cfg;
ret = scmi_xfer_get_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
sizeof(*cfg), 0, &t);
ret = ph->xops->xfer_get_init(ph, CLOCK_CONFIG_SET,
sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -285,33 +288,33 @@ scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
cfg->id = cpu_to_le32(clk_id);
cfg->attributes = cpu_to_le32(config);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_clock_enable(const struct scmi_handle *handle, u32 clk_id)
static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id)
{
return scmi_clock_config_set(handle, clk_id, CLOCK_ENABLE);
return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE);
}
static int scmi_clock_disable(const struct scmi_handle *handle, u32 clk_id)
static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id)
{
return scmi_clock_config_set(handle, clk_id, 0);
return scmi_clock_config_set(ph, clk_id, 0);
}
static int scmi_clock_count_get(const struct scmi_handle *handle)
static int scmi_clock_count_get(const struct scmi_protocol_handle *ph)
{
struct clock_info *ci = handle->clk_priv;
struct clock_info *ci = ph->get_priv(ph);
return ci->num_clocks;
}
static const struct scmi_clock_info *
scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
scmi_clock_info_get(const struct scmi_protocol_handle *ph, u32 clk_id)
{
struct clock_info *ci = handle->clk_priv;
struct clock_info *ci = ph->get_priv(ph);
struct scmi_clock_info *clk = ci->clk + clk_id;
if (!clk->name[0])
@ -320,7 +323,7 @@ scmi_clock_info_get(const struct scmi_handle *handle, u32 clk_id)
return clk;
}
static const struct scmi_clk_ops clk_ops = {
static const struct scmi_clk_proto_ops clk_proto_ops = {
.count_get = scmi_clock_count_get,
.info_get = scmi_clock_info_get,
.rate_get = scmi_clock_rate_get,
@ -329,24 +332,24 @@ static const struct scmi_clk_ops clk_ops = {
.disable = scmi_clock_disable,
};
static int scmi_clock_protocol_init(struct scmi_handle *handle)
static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph)
{
u32 version;
int clkid, ret;
struct clock_info *cinfo;
scmi_version_get(handle, SCMI_PROTOCOL_CLOCK, &version);
ph->xops->version_get(ph, &version);
dev_dbg(handle->dev, "Clock Version %d.%d\n",
dev_dbg(ph->dev, "Clock Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
cinfo = devm_kzalloc(handle->dev, sizeof(*cinfo), GFP_KERNEL);
cinfo = devm_kzalloc(ph->dev, sizeof(*cinfo), GFP_KERNEL);
if (!cinfo)
return -ENOMEM;
scmi_clock_protocol_attributes_get(handle, cinfo);
scmi_clock_protocol_attributes_get(ph, cinfo);
cinfo->clk = devm_kcalloc(handle->dev, cinfo->num_clocks,
cinfo->clk = devm_kcalloc(ph->dev, cinfo->num_clocks,
sizeof(*cinfo->clk), GFP_KERNEL);
if (!cinfo->clk)
return -ENOMEM;
@ -354,16 +357,20 @@ static int scmi_clock_protocol_init(struct scmi_handle *handle)
for (clkid = 0; clkid < cinfo->num_clocks; clkid++) {
struct scmi_clock_info *clk = cinfo->clk + clkid;
ret = scmi_clock_attributes_get(handle, clkid, clk);
ret = scmi_clock_attributes_get(ph, clkid, clk);
if (!ret)
scmi_clock_describe_rates_get(handle, clkid, clk);
scmi_clock_describe_rates_get(ph, clkid, clk);
}
cinfo->version = version;
handle->clk_ops = &clk_ops;
handle->clk_priv = cinfo;
return 0;
return ph->set_priv(ph, cinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_CLOCK, clock)
static const struct scmi_protocol scmi_clock = {
.id = SCMI_PROTOCOL_CLOCK,
.owner = THIS_MODULE,
.instance_init = &scmi_clock_protocol_init,
.ops = &clk_proto_ops,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(clock, scmi_clock)

drivers/firmware/arm_scmi/common.h:

@ -4,7 +4,7 @@
* driver common header file containing some definitions, structures
* and function prototypes used in all the different SCMI protocols.
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#ifndef _SCMI_COMMON_H
#define _SCMI_COMMON_H
@ -14,11 +14,14 @@
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include <linux/types.h>
#include <asm/unaligned.h>
#include "notify.h"
#define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0)
#define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16)
#define PROTOCOL_REV_MAJOR(x) (u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x)))
@ -141,22 +144,92 @@ struct scmi_xfer {
struct completion *async_done;
};
void scmi_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
int scmi_do_xfer_with_response(const struct scmi_handle *h,
struct scmi_xfer *xfer);
int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
size_t tx_size, size_t rx_size, struct scmi_xfer **p);
void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle,
struct scmi_xfer *xfer);
struct scmi_xfer_ops;
/**
* struct scmi_protocol_handle - Reference to an initialized protocol instance
*
* @dev: A reference to the associated SCMI instance device (handle->dev).
* @xops: A reference to a struct holding refs to the core xfer operations that
* can be used by the protocol implementation to generate SCMI messages.
* @set_priv: A method to set protocol private data for this instance.
* @get_priv: A method to get protocol private data previously set.
*
* This structure represents a protocol initialized against a specific SCMI
* instance and is used as follows:
* - as a parameter fed from the core to the protocol initialization code so
* that it can access the core xfer operations to build and generate SCMI
* messages exclusively for the specific underlying protocol instance.
* - as an opaque handle fed by an SCMI driver user when it tries to access
* this protocol through its own protocol operations.
* In this case this handle will be returned as an opaque object together
* with the related protocol operations when the SCMI driver tries to access
* the protocol.
*/
struct scmi_protocol_handle {
struct device *dev;
const struct scmi_xfer_ops *xops;
int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv);
void *(*get_priv)(const struct scmi_protocol_handle *ph);
};
/**
* struct scmi_xfer_ops - References to the core SCMI xfer operations.
* @version_get: Get the version of this protocol.
* @xfer_get_init: Initialize one struct xfer if any xfer slot is free.
* @reset_rx_to_maxsz: Reset rx size to max transport size.
* @do_xfer: Do the SCMI transfer.
* @do_xfer_with_response: Do the SCMI transfer waiting for a response.
* @xfer_put: Free the xfer slot.
*
* Note that all these operations expect a protocol handle as their first
* parameter; they then use it internally to infer the underlying protocol
* number, so it is not possible for a protocol implementation to forge
* messages for another protocol.
*/
struct scmi_xfer_ops {
int (*version_get)(const struct scmi_protocol_handle *ph, u32 *version);
int (*xfer_get_init)(const struct scmi_protocol_handle *ph, u8 msg_id,
size_t tx_size, size_t rx_size,
struct scmi_xfer **p);
void (*reset_rx_to_maxsz)(const struct scmi_protocol_handle *ph,
struct scmi_xfer *xfer);
int (*do_xfer)(const struct scmi_protocol_handle *ph,
struct scmi_xfer *xfer);
int (*do_xfer_with_response)(const struct scmi_protocol_handle *ph,
struct scmi_xfer *xfer);
void (*xfer_put)(const struct scmi_protocol_handle *ph,
struct scmi_xfer *xfer);
};
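Taken together, the protocol handle and these xfer operations replace the old pattern of passing the global scmi_handle plus an explicit protocol number on every call. As a rough sketch only (not part of the patch; EXAMPLE_GET_INFO and example_info_get are invented for illustration), a protocol-side query under the new model looks like:

/* Illustrative sketch, not part of the patch. */
#define EXAMPLE_GET_INFO	0x3	/* hypothetical message id */

static int example_info_get(const struct scmi_protocol_handle *ph, u32 *info)
{
	int ret;
	struct scmi_xfer *t;

	/* No protocol id argument: it is implied by the protocol handle */
	ret = ph->xops->xfer_get_init(ph, EXAMPLE_GET_INFO,
				      0, sizeof(__le32), &t);
	if (ret)
		return ret;

	ret = ph->xops->do_xfer(ph, t);
	if (!ret)
		*info = get_unaligned_le32(t->rx.buf);

	ph->xops->xfer_put(ph, t);
	return ret;
}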
struct scmi_revision_info *
scmi_revision_area_get(const struct scmi_protocol_handle *ph);
int scmi_handle_put(const struct scmi_handle *handle);
struct scmi_handle *scmi_handle_get(struct device *dev);
void scmi_set_handle(struct scmi_device *scmi_dev);
int scmi_version_get(const struct scmi_handle *h, u8 protocol, u32 *version);
void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph,
u8 *prot_imp);
int scmi_base_protocol_init(struct scmi_handle *h);
typedef int (*scmi_prot_init_ph_fn_t)(const struct scmi_protocol_handle *);
/**
* struct scmi_protocol - Protocol descriptor
* @id: Protocol ID.
* @owner: Module reference if any.
* @instance_init: Mandatory protocol initialization function.
* @instance_deinit: Optional protocol de-initialization function.
* @ops: Optional reference to the operations provided by the protocol and
* exposed in scmi_protocol.h.
* @events: An optional reference to the events supported by this protocol.
*/
struct scmi_protocol {
const u8 id;
struct module *owner;
const scmi_prot_init_ph_fn_t instance_init;
const scmi_prot_init_ph_fn_t instance_deinit;
const void *ops;
const struct scmi_protocol_events *events;
};
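For a loadable protocol the descriptor also records the owning module and may supply the optional instance_deinit hook, which is not exercised in the hunks shown here. A hedged sketch of a hypothetical vendor protocol descriptor (all names invented, not part of the patch):

/* Illustrative sketch, not part of the patch. */
static int scmi_example_protocol_init(const struct scmi_protocol_handle *ph)
{
	/* discover attributes, allocate private data, ph->set_priv(ph, ...) */
	return 0;
}

static int scmi_example_protocol_deinit(const struct scmi_protocol_handle *ph)
{
	/* undo whatever instance_init did that is not devres-managed */
	return 0;
}

static const struct scmi_protocol scmi_example = {
	.id = 0x81,			/* hypothetical vendor protocol id */
	.owner = THIS_MODULE,
	.instance_init = &scmi_example_protocol_init,
	.instance_deinit = &scmi_example_protocol_deinit,
	.ops = NULL,			/* or a protocol-specific ops table */
	.events = NULL,
};

A module built around such a descriptor would call scmi_protocol_register(&scmi_example) from its module init and scmi_protocol_unregister(&scmi_example) on exit; built-in protocols instead use the DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER() helper defined further down.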
int __init scmi_bus_init(void);
void __exit scmi_bus_exit(void);
@ -164,6 +237,7 @@ void __exit scmi_bus_exit(void);
#define DECLARE_SCMI_REGISTER_UNREGISTER(func) \
int __init scmi_##func##_register(void); \
void __exit scmi_##func##_unregister(void)
DECLARE_SCMI_REGISTER_UNREGISTER(base);
DECLARE_SCMI_REGISTER_UNREGISTER(clock);
DECLARE_SCMI_REGISTER_UNREGISTER(perf);
DECLARE_SCMI_REGISTER_UNREGISTER(power);
@ -172,17 +246,25 @@ DECLARE_SCMI_REGISTER_UNREGISTER(sensors);
DECLARE_SCMI_REGISTER_UNREGISTER(voltage);
DECLARE_SCMI_REGISTER_UNREGISTER(system);
#define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(id, name) \
int __init scmi_##name##_register(void) \
{ \
return scmi_protocol_register((id), &scmi_##name##_protocol_init); \
} \
\
void __exit scmi_##name##_unregister(void) \
{ \
scmi_protocol_unregister((id)); \
#define DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(name, proto) \
static const struct scmi_protocol *__this_proto = &(proto); \
\
int __init scmi_##name##_register(void) \
{ \
return scmi_protocol_register(__this_proto); \
} \
\
void __exit scmi_##name##_unregister(void) \
{ \
scmi_protocol_unregister(__this_proto); \
}
const struct scmi_protocol *scmi_protocol_get(int protocol_id);
void scmi_protocol_put(int protocol_id);
int scmi_protocol_acquire(const struct scmi_handle *handle, u8 protocol_id);
void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
/* SCMI Transport */
/**
* struct scmi_chan_info - Structure representing a SCMI channel information
@ -227,6 +309,11 @@ struct scmi_transport_ops {
bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
};
int scmi_protocol_device_request(const struct scmi_device_id *id_table);
void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table);
struct scmi_device *scmi_child_dev_find(struct device *parent,
int prot_id, const char *name);
/**
* struct scmi_desc - Description of SoC integration
*
@ -265,4 +352,8 @@ void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer);
void scmi_notification_instance_data_set(const struct scmi_handle *handle,
void *priv);
void *scmi_notification_instance_data_get(const struct scmi_handle *handle);
#endif /* _SCMI_COMMON_H */

(The diff for one file is omitted here: it is too large to display.)

drivers/firmware/arm_scmi/notify.c:

@ -2,7 +2,7 @@
/*
* System Control and Management Interface (SCMI) Notification support
*
* Copyright (C) 2020 ARM Ltd.
* Copyright (C) 2020-2021 ARM Ltd.
*/
/**
* DOC: Theory of operation
@ -91,6 +91,7 @@
#include <linux/types.h>
#include <linux/workqueue.h>
#include "common.h"
#include "notify.h"
#define SCMI_MAX_PROTO 256
@ -177,7 +178,7 @@
#define REVT_NOTIFY_SET_STATUS(revt, eid, sid, state) \
({ \
typeof(revt) r = revt; \
r->proto->ops->set_notify_enabled(r->proto->ni->handle, \
r->proto->ops->set_notify_enabled(r->proto->ph, \
(eid), (sid), (state)); \
})
@ -190,7 +191,7 @@
#define REVT_FILL_REPORT(revt, ...) \
({ \
typeof(revt) r = revt; \
r->proto->ops->fill_custom_report(r->proto->ni->handle, \
r->proto->ops->fill_custom_report(r->proto->ph, \
__VA_ARGS__); \
})
@ -278,6 +279,7 @@ struct scmi_registered_event;
* events' descriptors, whose fixed-size is determined at
* compile time.
* @registered_mtx: A mutex to protect @registered_events_handlers
* @ph: SCMI protocol handle reference
* @registered_events_handlers: A hashtable containing all events' handlers
* descriptors registered for this protocol
*
@ -302,6 +304,7 @@ struct scmi_registered_events_desc {
struct scmi_registered_event **registered_events;
/* mutex to protect registered_events_handlers */
struct mutex registered_mtx;
const struct scmi_protocol_handle *ph;
DECLARE_HASHTABLE(registered_events_handlers, SCMI_REGISTERED_HASH_SZ);
};
@ -368,7 +371,7 @@ static struct scmi_event_handler *
scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
static void scmi_put_active_handler(struct scmi_notify_instance *ni,
struct scmi_event_handler *hndl);
static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
static bool scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
struct scmi_event_handler *hndl);
/**
@ -579,11 +582,9 @@ int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
struct scmi_event_header eh;
struct scmi_notify_instance *ni;
/* Ensure notify_priv is updated */
smp_rmb();
if (!handle->notify_priv)
ni = scmi_notification_instance_data_get(handle);
if (!ni)
return 0;
ni = handle->notify_priv;
r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
if (!r_evt)
@ -732,14 +733,10 @@ scmi_allocate_registered_events_desc(struct scmi_notify_instance *ni,
/**
* scmi_register_protocol_events() - Register Protocol Events with the core
* @handle: The handle identifying the platform instance against which the
* the protocol's events are registered
* protocol's events are registered
* @proto_id: Protocol ID
* @queue_sz: Size in bytes of the associated queue to be allocated
* @ops: Protocol specific event-related operations
* @evt: Event descriptor array
* @num_events: Number of events in @evt array
* @num_sources: Number of possible sources for this protocol on this
* platform.
* @ph: SCMI protocol handle.
* @ee: A structure describing the events supported by this protocol.
*
* Used by SCMI Protocols initialization code to register with the notification
* core the list of supported events and their descriptors: takes care to
@ -748,60 +745,69 @@ scmi_allocate_registered_events_desc(struct scmi_notify_instance *ni,
*
* Return: 0 on Success
*/
int scmi_register_protocol_events(const struct scmi_handle *handle,
u8 proto_id, size_t queue_sz,
const struct scmi_event_ops *ops,
const struct scmi_event *evt, int num_events,
int num_sources)
int scmi_register_protocol_events(const struct scmi_handle *handle, u8 proto_id,
const struct scmi_protocol_handle *ph,
const struct scmi_protocol_events *ee)
{
int i;
unsigned int num_sources;
size_t payld_sz = 0;
struct scmi_registered_events_desc *pd;
struct scmi_notify_instance *ni;
const struct scmi_event *evt;
if (!ops || !evt)
if (!ee || !ee->ops || !ee->evts || !ph ||
(!ee->num_sources && !ee->ops->get_num_sources))
return -EINVAL;
/* Ensure notify_priv is updated */
smp_rmb();
if (!handle->notify_priv)
return -ENOMEM;
ni = handle->notify_priv;
/* Attach to the notification main devres group */
if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
ni = scmi_notification_instance_data_get(handle);
if (!ni)
return -ENOMEM;
for (i = 0; i < num_events; i++)
/* num_sources cannot be <= 0 */
if (ee->num_sources) {
num_sources = ee->num_sources;
} else {
int nsrc = ee->ops->get_num_sources(ph);
if (nsrc <= 0)
return -EINVAL;
num_sources = nsrc;
}
evt = ee->evts;
for (i = 0; i < ee->num_events; i++)
payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
payld_sz += sizeof(struct scmi_event_header);
pd = scmi_allocate_registered_events_desc(ni, proto_id, queue_sz,
payld_sz, num_events, ops);
pd = scmi_allocate_registered_events_desc(ni, proto_id, ee->queue_sz,
payld_sz, ee->num_events,
ee->ops);
if (IS_ERR(pd))
goto err;
return PTR_ERR(pd);
for (i = 0; i < num_events; i++, evt++) {
pd->ph = ph;
for (i = 0; i < ee->num_events; i++, evt++) {
struct scmi_registered_event *r_evt;
r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
GFP_KERNEL);
if (!r_evt)
goto err;
return -ENOMEM;
r_evt->proto = pd;
r_evt->evt = evt;
r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources,
sizeof(refcount_t), GFP_KERNEL);
if (!r_evt->sources)
goto err;
return -ENOMEM;
r_evt->num_sources = num_sources;
mutex_init(&r_evt->sources_mtx);
r_evt->report = devm_kzalloc(ni->handle->dev,
evt->max_report_sz, GFP_KERNEL);
if (!r_evt->report)
goto err;
return -ENOMEM;
pd->registered_events[i] = r_evt;
/* Ensure events are updated */
@ -815,8 +821,6 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
/* Ensure protocols are updated */
smp_wmb();
devres_close_group(ni->handle->dev, ni->gid);
/*
* Finalize any pending events' handler which could have been waiting
* for this protocol's events registration.
@ -824,13 +828,33 @@ int scmi_register_protocol_events(const struct scmi_handle *handle,
schedule_work(&ni->init_work);
return 0;
}
err:
dev_warn(handle->dev, "Proto:%X - Registration Failed !\n", proto_id);
/* A failing protocol registration does not trigger full failure */
devres_close_group(ni->handle->dev, ni->gid);
/**
* scmi_deregister_protocol_events - Deregister protocol events with the core
* @handle: The handle identifying the platform instance against which the
* protocol's events are registered
* @proto_id: Protocol ID
*/
void scmi_deregister_protocol_events(const struct scmi_handle *handle,
u8 proto_id)
{
struct scmi_notify_instance *ni;
struct scmi_registered_events_desc *pd;
return -ENOMEM;
ni = scmi_notification_instance_data_get(handle);
if (!ni)
return;
pd = ni->registered_protocols[proto_id];
if (!pd)
return;
ni->registered_protocols[proto_id] = NULL;
/* Ensure protocols are updated */
smp_wmb();
cancel_work_sync(&pd->equeue.notify_work);
}
/**
@ -900,9 +924,21 @@ static inline int scmi_bind_event_handler(struct scmi_notify_instance *ni,
if (!r_evt)
return -EINVAL;
/* Remove from pending and insert into registered */
/*
* Remove from pending and insert into registered while getting hold
* of the protocol instance.
*/
hash_del(&hndl->hash);
/*
* Acquire the protocol only for non-pending handlers, so as not to trigger
* protocol initialization when a notifier is registered against a protocol
* that is not yet registered: it would make little sense to force-init a
* protocol for which no SCMI driver user exists yet, since it would not
* emit any event until some SCMI driver starts using it.
*/
scmi_protocol_acquire(ni->handle, KEY_XTRACT_PROTO_ID(hndl->key));
hndl->r_evt = r_evt;
mutex_lock(&r_evt->proto->registered_mtx);
hash_add(r_evt->proto->registered_events_handlers,
&hndl->hash, hndl->key);
@ -1193,41 +1229,65 @@ static int scmi_disable_events(struct scmi_event_handler *hndl)
* * unregister and free the handler itself
*
* Context: Assumes all the proper locking has been managed by the caller.
*
* Return: True if handler was freed (users dropped to zero)
*/
static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
static bool scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
struct scmi_event_handler *hndl)
{
bool freed = false;
if (refcount_dec_and_test(&hndl->users)) {
if (!IS_HNDL_PENDING(hndl))
scmi_disable_events(hndl);
scmi_free_event_handler(hndl);
freed = true;
}
return freed;
}
static void scmi_put_handler(struct scmi_notify_instance *ni,
struct scmi_event_handler *hndl)
{
bool freed;
u8 protocol_id;
struct scmi_registered_event *r_evt = hndl->r_evt;
mutex_lock(&ni->pending_mtx);
if (r_evt)
if (r_evt) {
protocol_id = r_evt->proto->id;
mutex_lock(&r_evt->proto->registered_mtx);
}
scmi_put_handler_unlocked(ni, hndl);
freed = scmi_put_handler_unlocked(ni, hndl);
if (r_evt)
if (r_evt) {
mutex_unlock(&r_evt->proto->registered_mtx);
/*
* Only registered handler acquired protocol; must be here
* released only AFTER unlocking registered_mtx, since
* releasing a protocol can trigger its de-initialization
* (ie. including r_evt and registered_mtx)
*/
if (freed)
scmi_protocol_release(ni->handle, protocol_id);
}
mutex_unlock(&ni->pending_mtx);
}
static void scmi_put_active_handler(struct scmi_notify_instance *ni,
struct scmi_event_handler *hndl)
{
bool freed;
struct scmi_registered_event *r_evt = hndl->r_evt;
u8 protocol_id = r_evt->proto->id;
mutex_lock(&r_evt->proto->registered_mtx);
scmi_put_handler_unlocked(ni, hndl);
freed = scmi_put_handler_unlocked(ni, hndl);
mutex_unlock(&r_evt->proto->registered_mtx);
if (freed)
scmi_protocol_release(ni->handle, protocol_id);
}
/**
@ -1247,7 +1307,7 @@ static int scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
}
/**
* scmi_register_notifier() - Register a notifier_block for an event
* scmi_notifier_register() - Register a notifier_block for an event
* @handle: The handle identifying the platform instance against which the
* callback is registered
* @proto_id: Protocol ID
@ -1279,8 +1339,8 @@ static int scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
*
* Return: 0 on Success
*/
static int scmi_register_notifier(const struct scmi_handle *handle,
u8 proto_id, u8 evt_id, u32 *src_id,
static int scmi_notifier_register(const struct scmi_handle *handle,
u8 proto_id, u8 evt_id, const u32 *src_id,
struct notifier_block *nb)
{
int ret = 0;
@ -1288,11 +1348,9 @@ static int scmi_register_notifier(const struct scmi_handle *handle,
struct scmi_event_handler *hndl;
struct scmi_notify_instance *ni;
/* Ensure notify_priv is updated */
smp_rmb();
if (!handle->notify_priv)
ni = scmi_notification_instance_data_get(handle);
if (!ni)
return -ENODEV;
ni = handle->notify_priv;
evt_key = MAKE_HASH_KEY(proto_id, evt_id,
src_id ? *src_id : SRC_ID_MASK);
@ -1313,7 +1371,7 @@ static int scmi_register_notifier(const struct scmi_handle *handle,
}
/**
* scmi_unregister_notifier() - Unregister a notifier_block for an event
* scmi_notifier_unregister() - Unregister a notifier_block for an event
* @handle: The handle identifying the platform instance against which the
* callback is unregistered
* @proto_id: Protocol ID
@ -1328,19 +1386,17 @@ static int scmi_register_notifier(const struct scmi_handle *handle,
*
* Return: 0 on Success
*/
static int scmi_unregister_notifier(const struct scmi_handle *handle,
u8 proto_id, u8 evt_id, u32 *src_id,
static int scmi_notifier_unregister(const struct scmi_handle *handle,
u8 proto_id, u8 evt_id, const u32 *src_id,
struct notifier_block *nb)
{
u32 evt_key;
struct scmi_event_handler *hndl;
struct scmi_notify_instance *ni;
/* Ensure notify_priv is updated */
smp_rmb();
if (!handle->notify_priv)
ni = scmi_notification_instance_data_get(handle);
if (!ni)
return -ENODEV;
ni = handle->notify_priv;
evt_key = MAKE_HASH_KEY(proto_id, evt_id,
src_id ? *src_id : SRC_ID_MASK);
@ -1356,7 +1412,7 @@ static int scmi_unregister_notifier(const struct scmi_handle *handle,
scmi_put_handler(ni, hndl);
/*
* This balances the initial get issued in @scmi_register_notifier.
* This balances the initial get issued in @scmi_notifier_register.
* If this notifier_block happened to be the last known user callback
* for this event, the handler is here freed and the event's generation
* stopped.
@ -1371,6 +1427,129 @@ static int scmi_unregister_notifier(const struct scmi_handle *handle,
return 0;
}
struct scmi_notifier_devres {
const struct scmi_handle *handle;
u8 proto_id;
u8 evt_id;
u32 __src_id;
u32 *src_id;
struct notifier_block *nb;
};
static void scmi_devm_release_notifier(struct device *dev, void *res)
{
struct scmi_notifier_devres *dres = res;
scmi_notifier_unregister(dres->handle, dres->proto_id, dres->evt_id,
dres->src_id, dres->nb);
}
/**
* scmi_devm_notifier_register() - Managed registration of a notifier_block
* for an event
* @sdev: A reference to an scmi_device whose embedded struct device is to
* be used for devres accounting.
* @proto_id: Protocol ID
* @evt_id: Event ID
* @src_id: Source ID; when NULL, register for events coming from ALL possible
* sources
* @nb: A standard notifier block to register for the specified event
*
* Generic devres managed helper to register a notifier_block against a
* protocol event.
*/
static int scmi_devm_notifier_register(struct scmi_device *sdev,
u8 proto_id, u8 evt_id,
const u32 *src_id,
struct notifier_block *nb)
{
int ret;
struct scmi_notifier_devres *dres;
dres = devres_alloc(scmi_devm_release_notifier,
sizeof(*dres), GFP_KERNEL);
if (!dres)
return -ENOMEM;
ret = scmi_notifier_register(sdev->handle, proto_id,
evt_id, src_id, nb);
if (ret) {
devres_free(dres);
return ret;
}
dres->handle = sdev->handle;
dres->proto_id = proto_id;
dres->evt_id = evt_id;
dres->nb = nb;
if (src_id) {
dres->__src_id = *src_id;
dres->src_id = &dres->__src_id;
} else {
dres->src_id = NULL;
}
devres_add(&sdev->dev, dres);
return ret;
}
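From an SCMI driver's point of view, the devres-managed variant removes the need for explicit unregistration on the remove and error paths. A minimal sketch of a probe routine using it (illustrative only, not part of the patch; the callback and the protocol/event identifiers are merely examples, and <linux/notifier.h> plus <linux/scmi_protocol.h> are assumed):

/* Illustrative sketch, not part of the patch. */
static int example_notify_cb(struct notifier_block *nb, unsigned long event,
			     void *data)
{
	/* @data carries the report built by the protocol's fill_custom_report() */
	return NOTIFY_OK;
}

static struct notifier_block example_nb = {
	.notifier_call = example_notify_cb,
};

static int example_scmi_probe(struct scmi_device *sdev)
{
	const struct scmi_handle *handle = sdev->handle;

	if (!handle)
		return -EPROBE_DEFER;

	/* Automatically unregistered when sdev's devres resources are released */
	return handle->notify_ops->devm_event_notifier_register(sdev,
				SCMI_PROTOCOL_PERF,
				SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED,
				NULL /* all sources */, &example_nb);
}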
static int scmi_devm_notifier_match(struct device *dev, void *res, void *data)
{
struct scmi_notifier_devres *dres = res;
struct scmi_notifier_devres *xres = data;
if (WARN_ON(!dres || !xres))
return 0;
return dres->proto_id == xres->proto_id &&
dres->evt_id == xres->evt_id &&
dres->nb == xres->nb &&
((!dres->src_id && !xres->src_id) ||
(dres->src_id && xres->src_id &&
dres->__src_id == xres->__src_id));
}
/**
* scmi_devm_notifier_unregister() - Managed un-registration of a
* notifier_block for an event
* @sdev: A reference to an scmi_device whose embedded struct device is to
* be used for devres accounting.
* @proto_id: Protocol ID
* @evt_id: Event ID
* @src_id: Source ID; when NULL, register for events coming from ALL possible
* sources
* @nb: A standard notifier block to register for the specified event
*
* Generic devres managed helper to explicitly un-register a notifier_block
* against a protocol event, which was previously registered using the above
* @scmi_devm_notifier_register.
*/
static int scmi_devm_notifier_unregister(struct scmi_device *sdev,
u8 proto_id, u8 evt_id,
const u32 *src_id,
struct notifier_block *nb)
{
int ret;
struct scmi_notifier_devres dres;
dres.handle = sdev->handle;
dres.proto_id = proto_id;
dres.evt_id = evt_id;
if (src_id) {
dres.__src_id = *src_id;
dres.src_id = &dres.__src_id;
} else {
dres.src_id = NULL;
}
ret = devres_release(&sdev->dev, scmi_devm_release_notifier,
scmi_devm_notifier_match, &dres);
WARN_ON(ret);
return ret;
}
/**
* scmi_protocols_late_init() - Worker for late initialization
* @work: The work item associated with the proper SCMI instance
@ -1428,8 +1607,10 @@ static void scmi_protocols_late_init(struct work_struct *work)
* directly from an scmi_driver to register its own notifiers.
*/
static const struct scmi_notify_ops notify_ops = {
.register_event_notifier = scmi_register_notifier,
.unregister_event_notifier = scmi_unregister_notifier,
.devm_event_notifier_register = scmi_devm_notifier_register,
.devm_event_notifier_unregister = scmi_devm_notifier_unregister,
.event_notifier_register = scmi_notifier_register,
.event_notifier_unregister = scmi_notifier_unregister,
};
/**
@ -1490,8 +1671,8 @@ int scmi_notification_init(struct scmi_handle *handle)
INIT_WORK(&ni->init_work, scmi_protocols_late_init);
scmi_notification_instance_data_set(handle, ni);
handle->notify_ops = &notify_ops;
handle->notify_priv = ni;
/* Ensure handle is up to date */
smp_wmb();
@ -1503,7 +1684,7 @@ int scmi_notification_init(struct scmi_handle *handle)
err:
dev_warn(handle->dev, "Initialization Failed.\n");
devres_release_group(handle->dev, NULL);
devres_release_group(handle->dev, gid);
return -ENOMEM;
}
@ -1515,15 +1696,10 @@ void scmi_notification_exit(struct scmi_handle *handle)
{
struct scmi_notify_instance *ni;
/* Ensure notify_priv is updated */
smp_rmb();
if (!handle->notify_priv)
ni = scmi_notification_instance_data_get(handle);
if (!ni)
return;
ni = handle->notify_priv;
handle->notify_priv = NULL;
/* Ensure handle is up to date */
smp_wmb();
scmi_notification_instance_data_set(handle, NULL);
/* Destroy while letting pending work complete */
destroy_workqueue(ni->notify_wq);

drivers/firmware/arm_scmi/notify.h:

@ -4,7 +4,7 @@
* notification header file containing some definitions, structures
* and function prototypes related to SCMI Notification handling.
*
* Copyright (C) 2020 ARM Ltd.
* Copyright (C) 2020-2021 ARM Ltd.
*/
#ifndef _SCMI_NOTIFY_H
#define _SCMI_NOTIFY_H
@ -31,8 +31,12 @@ struct scmi_event {
size_t max_report_sz;
};
struct scmi_protocol_handle;
/**
* struct scmi_event_ops - Protocol helpers called by the notification core.
* @get_num_sources: Returns the number of possible events' sources for this
* protocol
* @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
* using the proper custom protocol commands.
* Return 0 on Success
@ -46,22 +50,42 @@ struct scmi_event {
* process context.
*/
struct scmi_event_ops {
int (*set_notify_enabled)(const struct scmi_handle *handle,
int (*get_num_sources)(const struct scmi_protocol_handle *ph);
int (*set_notify_enabled)(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enabled);
void *(*fill_custom_report)(const struct scmi_handle *handle,
void *(*fill_custom_report)(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id);
};
/**
* struct scmi_protocol_events - Per-protocol description of available events
* @queue_sz: Size in bytes of the per-protocol queue to use.
* @ops: Array of protocol-specific events operations.
* @evts: Array of supported protocol's events.
* @num_events: Number of supported protocol's events described in @evts.
* @num_sources: Number of protocol's sources, should be greater than 0; if not
* available at compile time, it will be provided at run-time via
* @get_num_sources.
*/
struct scmi_protocol_events {
size_t queue_sz;
const struct scmi_event_ops *ops;
const struct scmi_event *evts;
unsigned int num_events;
unsigned int num_sources;
};
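When the number of event sources is only known at run time, a protocol leaves @num_sources at zero and provides @get_num_sources instead; the core then queries it at registration time. A hedged sketch with invented names (not part of the patch):

/* Illustrative sketch, not part of the patch. */
struct example_info {
	unsigned int num_domains;	/* discovered during instance_init */
};

static int example_get_num_sources(const struct scmi_protocol_handle *ph)
{
	const struct example_info *info = ph->get_priv(ph);

	return info->num_domains;	/* e.g. one event source per domain */
}

/*
 * The protocol's events description then simply omits num_sources:
 *
 *	static const struct scmi_protocol_events example_protocol_events = {
 *		.queue_sz	= SCMI_PROTO_QUEUE_SZ,
 *		.ops		= &example_event_ops,	// hypothetical
 *		.evts		= example_events,	// hypothetical
 *		.num_events	= ARRAY_SIZE(example_events),
 *		// .num_sources left at 0: resolved via get_num_sources()
 *	};
 */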
int scmi_notification_init(struct scmi_handle *handle);
void scmi_notification_exit(struct scmi_handle *handle);
int scmi_register_protocol_events(const struct scmi_handle *handle,
u8 proto_id, size_t queue_sz,
const struct scmi_event_ops *ops,
const struct scmi_event *evt, int num_events,
int num_sources);
struct scmi_protocol_handle;
int scmi_register_protocol_events(const struct scmi_handle *handle, u8 proto_id,
const struct scmi_protocol_handle *ph,
const struct scmi_protocol_events *ee);
void scmi_deregister_protocol_events(const struct scmi_handle *handle,
u8 proto_id);
int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
const void *buf, size_t len, ktime_t ts);

drivers/firmware/arm_scmi/perf.c:

@ -2,7 +2,7 @@
/*
* System Control and Management Interface (SCMI) Performance Protocol
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#define pr_fmt(fmt) "SCMI Notifications PERF - " fmt
@ -11,6 +11,7 @@
#include <linux/of.h>
#include <linux/io.h>
#include <linux/io-64-nonatomic-hi-lo.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/scmi_protocol.h>
@ -175,21 +176,21 @@ static enum scmi_performance_protocol_cmd evt_2_cmd[] = {
PERF_NOTIFY_LEVEL,
};
static int scmi_perf_attributes_get(const struct scmi_handle *handle,
static int scmi_perf_attributes_get(const struct scmi_protocol_handle *ph,
struct scmi_perf_info *pi)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_perf_attributes *attr;
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0,
sizeof(*attr), &t);
if (ret)
return ret;
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
u16 flags = le16_to_cpu(attr->flags);
@ -200,28 +201,27 @@ static int scmi_perf_attributes_get(const struct scmi_handle *handle,
pi->stats_size = le32_to_cpu(attr->stats_size);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
struct perf_dom_info *dom_info)
scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph,
u32 domain, struct perf_dom_info *dom_info)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_perf_domain_attributes *attr;
ret = scmi_xfer_get_init(handle, PERF_DOMAIN_ATTRIBUTES,
SCMI_PROTOCOL_PERF, sizeof(domain),
sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, PERF_DOMAIN_ATTRIBUTES,
sizeof(domain), sizeof(*attr), &t);
if (ret)
return ret;
put_unaligned_le32(domain, t->tx.buf);
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
u32 flags = le32_to_cpu(attr->flags);
@ -245,7 +245,7 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -257,7 +257,7 @@ static int opp_cmp_func(const void *opp1, const void *opp2)
}
static int
scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph, u32 domain,
struct perf_dom_info *perf_dom)
{
int ret, cnt;
@ -268,8 +268,8 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
struct scmi_msg_perf_describe_levels *dom_info;
struct scmi_msg_resp_perf_describe_levels *level_info;
ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_LEVELS,
SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t);
ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_LEVELS,
sizeof(*dom_info), 0, &t);
if (ret)
return ret;
@ -281,14 +281,14 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
/* Set the number of OPPs to be skipped/already read */
dom_info->level_index = cpu_to_le32(tot_opp_cnt);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (ret)
break;
num_returned = le16_to_cpu(level_info->num_returned);
num_remaining = le16_to_cpu(level_info->num_remaining);
if (tot_opp_cnt + num_returned > MAX_OPPS) {
dev_err(handle->dev, "No. of OPPs exceeded MAX_OPPS");
dev_err(ph->dev, "No. of OPPs exceeded MAX_OPPS");
break;
}
@ -299,13 +299,13 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
opp->trans_latency_us = le16_to_cpu
(level_info->opp[cnt].transition_latency_us);
dev_dbg(handle->dev, "Level %d Power %d Latency %dus\n",
dev_dbg(ph->dev, "Level %d Power %d Latency %dus\n",
opp->perf, opp->power, opp->trans_latency_us);
}
tot_opp_cnt += num_returned;
scmi_reset_rx_to_maxsz(handle, t);
ph->xops->reset_rx_to_maxsz(ph, t);
/*
* check for both returned and remaining to avoid infinite
* loop due to buggy firmware
@ -313,7 +313,7 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
} while (num_returned && num_remaining);
perf_dom->opp_count = tot_opp_cnt;
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL);
return ret;
@ -353,15 +353,15 @@ static void scmi_perf_fc_ring_db(struct scmi_fc_db_info *db)
#endif
}
static int scmi_perf_mb_limits_set(const struct scmi_handle *handle, u32 domain,
u32 max_perf, u32 min_perf)
static int scmi_perf_mb_limits_set(const struct scmi_protocol_handle *ph,
u32 domain, u32 max_perf, u32 min_perf)
{
int ret;
struct scmi_xfer *t;
struct scmi_perf_set_limits *limits;
ret = scmi_xfer_get_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
sizeof(*limits), 0, &t);
ret = ph->xops->xfer_get_init(ph, PERF_LIMITS_SET,
sizeof(*limits), 0, &t);
if (ret)
return ret;
@ -370,16 +370,16 @@ static int scmi_perf_mb_limits_set(const struct scmi_handle *handle, u32 domain,
limits->max_level = cpu_to_le32(max_perf);
limits->min_level = cpu_to_le32(min_perf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
u32 max_perf, u32 min_perf)
static int scmi_perf_limits_set(const struct scmi_protocol_handle *ph,
u32 domain, u32 max_perf, u32 min_perf)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom = pi->dom_info + domain;
if (dom->fc_info && dom->fc_info->limit_set_addr) {
@ -389,24 +389,24 @@ static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
return 0;
}
return scmi_perf_mb_limits_set(handle, domain, max_perf, min_perf);
return scmi_perf_mb_limits_set(ph, domain, max_perf, min_perf);
}
static int scmi_perf_mb_limits_get(const struct scmi_handle *handle, u32 domain,
u32 *max_perf, u32 *min_perf)
static int scmi_perf_mb_limits_get(const struct scmi_protocol_handle *ph,
u32 domain, u32 *max_perf, u32 *min_perf)
{
int ret;
struct scmi_xfer *t;
struct scmi_perf_get_limits *limits;
ret = scmi_xfer_get_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
sizeof(__le32), 0, &t);
ret = ph->xops->xfer_get_init(ph, PERF_LIMITS_GET,
sizeof(__le32), 0, &t);
if (ret)
return ret;
put_unaligned_le32(domain, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
limits = t->rx.buf;
@ -414,14 +414,14 @@ static int scmi_perf_mb_limits_get(const struct scmi_handle *handle, u32 domain,
*min_perf = le32_to_cpu(limits->min_level);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
u32 *max_perf, u32 *min_perf)
static int scmi_perf_limits_get(const struct scmi_protocol_handle *ph,
u32 domain, u32 *max_perf, u32 *min_perf)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom = pi->dom_info + domain;
if (dom->fc_info && dom->fc_info->limit_get_addr) {
@ -430,18 +430,17 @@ static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
return 0;
}
return scmi_perf_mb_limits_get(handle, domain, max_perf, min_perf);
return scmi_perf_mb_limits_get(ph, domain, max_perf, min_perf);
}
static int scmi_perf_mb_level_set(const struct scmi_handle *handle, u32 domain,
u32 level, bool poll)
static int scmi_perf_mb_level_set(const struct scmi_protocol_handle *ph,
u32 domain, u32 level, bool poll)
{
int ret;
struct scmi_xfer *t;
struct scmi_perf_set_level *lvl;
ret = scmi_xfer_get_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
sizeof(*lvl), 0, &t);
ret = ph->xops->xfer_get_init(ph, PERF_LEVEL_SET, sizeof(*lvl), 0, &t);
if (ret)
return ret;
@ -450,16 +449,16 @@ static int scmi_perf_mb_level_set(const struct scmi_handle *handle, u32 domain,
lvl->domain = cpu_to_le32(domain);
lvl->level = cpu_to_le32(level);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
u32 level, bool poll)
static int scmi_perf_level_set(const struct scmi_protocol_handle *ph,
u32 domain, u32 level, bool poll)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom = pi->dom_info + domain;
if (dom->fc_info && dom->fc_info->level_set_addr) {
@ -468,35 +467,35 @@ static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
return 0;
}
return scmi_perf_mb_level_set(handle, domain, level, poll);
return scmi_perf_mb_level_set(ph, domain, level, poll);
}
static int scmi_perf_mb_level_get(const struct scmi_handle *handle, u32 domain,
u32 *level, bool poll)
static int scmi_perf_mb_level_get(const struct scmi_protocol_handle *ph,
u32 domain, u32 *level, bool poll)
{
int ret;
struct scmi_xfer *t;
ret = scmi_xfer_get_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
sizeof(u32), sizeof(u32), &t);
ret = ph->xops->xfer_get_init(ph, PERF_LEVEL_GET,
sizeof(u32), sizeof(u32), &t);
if (ret)
return ret;
t->hdr.poll_completion = poll;
put_unaligned_le32(domain, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
*level = get_unaligned_le32(t->rx.buf);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
u32 *level, bool poll)
static int scmi_perf_level_get(const struct scmi_protocol_handle *ph,
u32 domain, u32 *level, bool poll)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom = pi->dom_info + domain;
if (dom->fc_info && dom->fc_info->level_get_addr) {
@ -504,10 +503,10 @@ static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
return 0;
}
return scmi_perf_mb_level_get(handle, domain, level, poll);
return scmi_perf_mb_level_get(ph, domain, level, poll);
}
static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
static int scmi_perf_level_limits_notify(const struct scmi_protocol_handle *ph,
u32 domain, int message_id,
bool enable)
{
@ -515,8 +514,7 @@ static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_perf_notify_level_or_limits *notify;
ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_PERF,
sizeof(*notify), 0, &t);
ret = ph->xops->xfer_get_init(ph, message_id, sizeof(*notify), 0, &t);
if (ret)
return ret;
@ -524,9 +522,9 @@ static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
notify->domain = cpu_to_le32(domain);
notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -540,7 +538,7 @@ static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size)
}
static void
scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
scmi_perf_domain_desc_fc(const struct scmi_protocol_handle *ph, u32 domain,
u32 message_id, void __iomem **p_addr,
struct scmi_fc_db_info **p_db)
{
@ -557,9 +555,8 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
if (!p_addr)
return;
ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_FASTCHANNEL,
SCMI_PROTOCOL_PERF,
sizeof(*info), sizeof(*resp), &t);
ret = ph->xops->xfer_get_init(ph, PERF_DESCRIBE_FASTCHANNEL,
sizeof(*info), sizeof(*resp), &t);
if (ret)
return;
@ -567,7 +564,7 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
info->domain = cpu_to_le32(domain);
info->message_id = cpu_to_le32(message_id);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (ret)
goto err_xfer;
@ -579,20 +576,20 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
phys_addr = le32_to_cpu(resp->chan_addr_low);
phys_addr |= (u64)le32_to_cpu(resp->chan_addr_high) << 32;
addr = devm_ioremap(handle->dev, phys_addr, size);
addr = devm_ioremap(ph->dev, phys_addr, size);
if (!addr)
goto err_xfer;
*p_addr = addr;
if (p_db && SUPPORTS_DOORBELL(flags)) {
db = devm_kzalloc(handle->dev, sizeof(*db), GFP_KERNEL);
db = devm_kzalloc(ph->dev, sizeof(*db), GFP_KERNEL);
if (!db)
goto err_xfer;
size = 1 << DOORBELL_REG_WIDTH(flags);
phys_addr = le32_to_cpu(resp->db_addr_low);
phys_addr |= (u64)le32_to_cpu(resp->db_addr_high) << 32;
addr = devm_ioremap(handle->dev, phys_addr, size);
addr = devm_ioremap(ph->dev, phys_addr, size);
if (!addr)
goto err_xfer;
@ -605,25 +602,25 @@ scmi_perf_domain_desc_fc(const struct scmi_handle *handle, u32 domain,
*p_db = db;
}
err_xfer:
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
}
static void scmi_perf_domain_init_fc(const struct scmi_handle *handle,
static void scmi_perf_domain_init_fc(const struct scmi_protocol_handle *ph,
u32 domain, struct scmi_fc_info **p_fc)
{
struct scmi_fc_info *fc;
fc = devm_kzalloc(handle->dev, sizeof(*fc), GFP_KERNEL);
fc = devm_kzalloc(ph->dev, sizeof(*fc), GFP_KERNEL);
if (!fc)
return;
scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_SET,
scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_SET,
&fc->level_set_addr, &fc->level_set_db);
scmi_perf_domain_desc_fc(handle, domain, PERF_LEVEL_GET,
scmi_perf_domain_desc_fc(ph, domain, PERF_LEVEL_GET,
&fc->level_get_addr, NULL);
scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_SET,
scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_SET,
&fc->limit_set_addr, &fc->limit_set_db);
scmi_perf_domain_desc_fc(handle, domain, PERF_LIMITS_GET,
scmi_perf_domain_desc_fc(ph, domain, PERF_LIMITS_GET,
&fc->limit_get_addr, NULL);
*p_fc = fc;
}
@ -640,14 +637,14 @@ static int scmi_dev_domain_id(struct device *dev)
return clkspec.args[0];
}
static int scmi_dvfs_device_opps_add(const struct scmi_handle *handle,
static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
struct device *dev)
{
int idx, ret, domain;
unsigned long freq;
struct scmi_opp *opp;
struct perf_dom_info *dom;
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
domain = scmi_dev_domain_id(dev);
if (domain < 0)
@ -672,11 +669,12 @@ static int scmi_dvfs_device_opps_add(const struct scmi_handle *handle,
return 0;
}
static int scmi_dvfs_transition_latency_get(const struct scmi_handle *handle,
struct device *dev)
static int
scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph,
struct device *dev)
{
struct perf_dom_info *dom;
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
int domain = scmi_dev_domain_id(dev);
if (domain < 0)
@ -687,35 +685,35 @@ static int scmi_dvfs_transition_latency_get(const struct scmi_handle *handle,
return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
}
static int scmi_dvfs_freq_set(const struct scmi_handle *handle, u32 domain,
static int scmi_dvfs_freq_set(const struct scmi_protocol_handle *ph, u32 domain,
unsigned long freq, bool poll)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom = pi->dom_info + domain;
return scmi_perf_level_set(handle, domain, freq / dom->mult_factor,
poll);
return scmi_perf_level_set(ph, domain, freq / dom->mult_factor, poll);
}
static int scmi_dvfs_freq_get(const struct scmi_handle *handle, u32 domain,
static int scmi_dvfs_freq_get(const struct scmi_protocol_handle *ph, u32 domain,
unsigned long *freq, bool poll)
{
int ret;
u32 level;
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom = pi->dom_info + domain;
ret = scmi_perf_level_get(handle, domain, &level, poll);
ret = scmi_perf_level_get(ph, domain, &level, poll);
if (!ret)
*freq = level * dom->mult_factor;
return ret;
}
static int scmi_dvfs_est_power_get(const struct scmi_handle *handle, u32 domain,
unsigned long *freq, unsigned long *power)
static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph,
u32 domain, unsigned long *freq,
unsigned long *power)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
struct perf_dom_info *dom;
unsigned long opp_freq;
int idx, ret = -EINVAL;
@ -739,25 +737,25 @@ static int scmi_dvfs_est_power_get(const struct scmi_handle *handle, u32 domain,
return ret;
}
static bool scmi_fast_switch_possible(const struct scmi_handle *handle,
static bool scmi_fast_switch_possible(const struct scmi_protocol_handle *ph,
struct device *dev)
{
struct perf_dom_info *dom;
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
dom = pi->dom_info + scmi_dev_domain_id(dev);
return dom->fc_info && dom->fc_info->level_set_addr;
}
static bool scmi_power_scale_mw_get(const struct scmi_handle *handle)
static bool scmi_power_scale_mw_get(const struct scmi_protocol_handle *ph)
{
struct scmi_perf_info *pi = handle->perf_priv;
struct scmi_perf_info *pi = ph->get_priv(ph);
return pi->power_scale_mw;
}
static const struct scmi_perf_ops perf_ops = {
static const struct scmi_perf_proto_ops perf_proto_ops = {
.limits_set = scmi_perf_limits_set,
.limits_get = scmi_perf_limits_get,
.level_set = scmi_perf_level_set,
@ -772,7 +770,7 @@ static const struct scmi_perf_ops perf_ops = {
.power_scale_mw_get = scmi_power_scale_mw_get,
};
static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
static int scmi_perf_set_notify_enabled(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enable)
{
int ret, cmd_id;
@ -781,7 +779,7 @@ static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
return -EINVAL;
cmd_id = evt_2_cmd[evt_id];
ret = scmi_perf_level_limits_notify(handle, src_id, cmd_id, enable);
ret = scmi_perf_level_limits_notify(ph, src_id, cmd_id, enable);
if (ret)
pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
evt_id, src_id, ret);
@ -789,7 +787,7 @@ static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
return ret;
}
static void *scmi_perf_fill_custom_report(const struct scmi_handle *handle,
static void *scmi_perf_fill_custom_report(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
@ -837,6 +835,16 @@ static void *scmi_perf_fill_custom_report(const struct scmi_handle *handle,
return rep;
}
static int scmi_perf_get_num_sources(const struct scmi_protocol_handle *ph)
{
struct scmi_perf_info *pi = ph->get_priv(ph);
if (!pi)
return -EINVAL;
return pi->num_domains;
}
static const struct scmi_event perf_events[] = {
{
.id = SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
@ -851,28 +859,36 @@ static const struct scmi_event perf_events[] = {
};
static const struct scmi_event_ops perf_event_ops = {
.get_num_sources = scmi_perf_get_num_sources,
.set_notify_enabled = scmi_perf_set_notify_enabled,
.fill_custom_report = scmi_perf_fill_custom_report,
};
static int scmi_perf_protocol_init(struct scmi_handle *handle)
static const struct scmi_protocol_events perf_protocol_events = {
.queue_sz = SCMI_PROTO_QUEUE_SZ,
.ops = &perf_event_ops,
.evts = perf_events,
.num_events = ARRAY_SIZE(perf_events),
};
static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph)
{
int domain;
u32 version;
struct scmi_perf_info *pinfo;
scmi_version_get(handle, SCMI_PROTOCOL_PERF, &version);
ph->xops->version_get(ph, &version);
dev_dbg(handle->dev, "Performance Version %d.%d\n",
dev_dbg(ph->dev, "Performance Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
if (!pinfo)
return -ENOMEM;
scmi_perf_attributes_get(handle, pinfo);
scmi_perf_attributes_get(ph, pinfo);
pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
sizeof(*pinfo->dom_info), GFP_KERNEL);
if (!pinfo->dom_info)
return -ENOMEM;
@ -880,24 +896,24 @@ static int scmi_perf_protocol_init(struct scmi_handle *handle)
for (domain = 0; domain < pinfo->num_domains; domain++) {
struct perf_dom_info *dom = pinfo->dom_info + domain;
scmi_perf_domain_attributes_get(handle, domain, dom);
scmi_perf_describe_levels_get(handle, domain, dom);
scmi_perf_domain_attributes_get(ph, domain, dom);
scmi_perf_describe_levels_get(ph, domain, dom);
if (dom->perf_fastchannels)
scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
scmi_perf_domain_init_fc(ph, domain, &dom->fc_info);
}
scmi_register_protocol_events(handle,
SCMI_PROTOCOL_PERF, SCMI_PROTO_QUEUE_SZ,
&perf_event_ops, perf_events,
ARRAY_SIZE(perf_events),
pinfo->num_domains);
pinfo->version = version;
handle->perf_ops = &perf_ops;
handle->perf_priv = pinfo;
return 0;
return ph->set_priv(ph, pinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_PERF, perf)
static const struct scmi_protocol scmi_perf = {
.id = SCMI_PROTOCOL_PERF,
.owner = THIS_MODULE,
.instance_init = &scmi_perf_protocol_init,
.ops = &perf_proto_ops,
.events = &perf_protocol_events,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(perf, scmi_perf)
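The hunks above complete the perf protocol's move off the global scmi_handle (handle->perf_ops / handle->perf_priv) and onto a self-registering, per-instance protocol bound to a scmi_protocol_handle. As a rough guide to the new shape, the sketch below condenses that registration pattern for a hypothetical protocol; SCMI_PROTOCOL_FOO, struct scmi_foo_info, foo_proto_ops and foo_protocol_events are placeholder names, not part of this series.

/*
 * Minimal sketch of the new self-registration pattern; all "foo" names
 * are hypothetical placeholders.
 */
struct scmi_foo_info {
	u32 version;
	int num_domains;
};

static int scmi_foo_protocol_init(const struct scmi_protocol_handle *ph)
{
	u32 version;
	struct scmi_foo_info *finfo;

	ph->xops->version_get(ph, &version);

	finfo = devm_kzalloc(ph->dev, sizeof(*finfo), GFP_KERNEL);
	if (!finfo)
		return -ENOMEM;
	finfo->version = version;

	/* protocol-private data now lives with the instance, not the handle */
	return ph->set_priv(ph, finfo);
}

static const struct scmi_protocol scmi_foo = {
	.id = SCMI_PROTOCOL_FOO,
	.owner = THIS_MODULE,
	.instance_init = &scmi_foo_protocol_init,
	.ops = &foo_proto_ops,		/* protocol-specific ops table, as above */
	.events = &foo_protocol_events,	/* optional, see the events sketch below */
};

DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(foo, scmi_foo)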


@ -2,11 +2,12 @@
/*
* System Control and Management Interface (SCMI) Power Protocol
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#define pr_fmt(fmt) "SCMI Notifications POWER - " fmt
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include "common.h"
@ -68,21 +69,21 @@ struct scmi_power_info {
struct power_dom_info *dom_info;
};
static int scmi_power_attributes_get(const struct scmi_handle *handle,
static int scmi_power_attributes_get(const struct scmi_protocol_handle *ph,
struct scmi_power_info *pi)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_power_attributes *attr;
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
0, sizeof(*attr), &t);
if (ret)
return ret;
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
pi->num_domains = le16_to_cpu(attr->num_domains);
pi->stats_addr = le32_to_cpu(attr->stats_addr_low) |
@ -90,28 +91,27 @@ static int scmi_power_attributes_get(const struct scmi_handle *handle,
pi->stats_size = le32_to_cpu(attr->stats_size);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
struct power_dom_info *dom_info)
scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph,
u32 domain, struct power_dom_info *dom_info)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_power_domain_attributes *attr;
ret = scmi_xfer_get_init(handle, POWER_DOMAIN_ATTRIBUTES,
SCMI_PROTOCOL_POWER, sizeof(domain),
sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, POWER_DOMAIN_ATTRIBUTES,
sizeof(domain), sizeof(*attr), &t);
if (ret)
return ret;
put_unaligned_le32(domain, t->tx.buf);
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
u32 flags = le32_to_cpu(attr->flags);
@ -121,19 +121,18 @@ scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
static int scmi_power_state_set(const struct scmi_protocol_handle *ph,
u32 domain, u32 state)
{
int ret;
struct scmi_xfer *t;
struct scmi_power_set_state *st;
ret = scmi_xfer_get_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
sizeof(*st), 0, &t);
ret = ph->xops->xfer_get_init(ph, POWER_STATE_SET, sizeof(*st), 0, &t);
if (ret)
return ret;
@ -142,64 +141,64 @@ scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
st->domain = cpu_to_le32(domain);
st->state = cpu_to_le32(state);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
static int scmi_power_state_get(const struct scmi_protocol_handle *ph,
u32 domain, u32 *state)
{
int ret;
struct scmi_xfer *t;
ret = scmi_xfer_get_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
sizeof(u32), sizeof(u32), &t);
ret = ph->xops->xfer_get_init(ph, POWER_STATE_GET, sizeof(u32), sizeof(u32), &t);
if (ret)
return ret;
put_unaligned_le32(domain, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
*state = get_unaligned_le32(t->rx.buf);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_power_num_domains_get(const struct scmi_handle *handle)
static int scmi_power_num_domains_get(const struct scmi_protocol_handle *ph)
{
struct scmi_power_info *pi = handle->power_priv;
struct scmi_power_info *pi = ph->get_priv(ph);
return pi->num_domains;
}
static char *scmi_power_name_get(const struct scmi_handle *handle, u32 domain)
static char *scmi_power_name_get(const struct scmi_protocol_handle *ph,
u32 domain)
{
struct scmi_power_info *pi = handle->power_priv;
struct scmi_power_info *pi = ph->get_priv(ph);
struct power_dom_info *dom = pi->dom_info + domain;
return dom->name;
}
static const struct scmi_power_ops power_ops = {
static const struct scmi_power_proto_ops power_proto_ops = {
.num_domains_get = scmi_power_num_domains_get,
.name_get = scmi_power_name_get,
.state_set = scmi_power_state_set,
.state_get = scmi_power_state_get,
};
static int scmi_power_request_notify(const struct scmi_handle *handle,
static int scmi_power_request_notify(const struct scmi_protocol_handle *ph,
u32 domain, bool enable)
{
int ret;
struct scmi_xfer *t;
struct scmi_power_state_notify *notify;
ret = scmi_xfer_get_init(handle, POWER_STATE_NOTIFY,
SCMI_PROTOCOL_POWER, sizeof(*notify), 0, &t);
ret = ph->xops->xfer_get_init(ph, POWER_STATE_NOTIFY,
sizeof(*notify), 0, &t);
if (ret)
return ret;
@ -207,18 +206,18 @@ static int scmi_power_request_notify(const struct scmi_handle *handle,
notify->domain = cpu_to_le32(domain);
notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_power_set_notify_enabled(const struct scmi_handle *handle,
static int scmi_power_set_notify_enabled(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enable)
{
int ret;
ret = scmi_power_request_notify(handle, src_id, enable);
ret = scmi_power_request_notify(ph, src_id, enable);
if (ret)
pr_debug("FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
evt_id, src_id, ret);
@ -226,10 +225,11 @@ static int scmi_power_set_notify_enabled(const struct scmi_handle *handle,
return ret;
}
static void *scmi_power_fill_custom_report(const struct scmi_handle *handle,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
static void *
scmi_power_fill_custom_report(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
{
const struct scmi_power_state_notify_payld *p = payld;
struct scmi_power_state_changed_report *r = report;
@ -246,6 +246,16 @@ static void *scmi_power_fill_custom_report(const struct scmi_handle *handle,
return r;
}
static int scmi_power_get_num_sources(const struct scmi_protocol_handle *ph)
{
struct scmi_power_info *pinfo = ph->get_priv(ph);
if (!pinfo)
return -EINVAL;
return pinfo->num_domains;
}
static const struct scmi_event power_events[] = {
{
.id = SCMI_EVENT_POWER_STATE_CHANGED,
@ -256,28 +266,36 @@ static const struct scmi_event power_events[] = {
};
static const struct scmi_event_ops power_event_ops = {
.get_num_sources = scmi_power_get_num_sources,
.set_notify_enabled = scmi_power_set_notify_enabled,
.fill_custom_report = scmi_power_fill_custom_report,
};
static int scmi_power_protocol_init(struct scmi_handle *handle)
static const struct scmi_protocol_events power_protocol_events = {
.queue_sz = SCMI_PROTO_QUEUE_SZ,
.ops = &power_event_ops,
.evts = power_events,
.num_events = ARRAY_SIZE(power_events),
};
static int scmi_power_protocol_init(const struct scmi_protocol_handle *ph)
{
int domain;
u32 version;
struct scmi_power_info *pinfo;
scmi_version_get(handle, SCMI_PROTOCOL_POWER, &version);
ph->xops->version_get(ph, &version);
dev_dbg(handle->dev, "Power Version %d.%d\n",
dev_dbg(ph->dev, "Power Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
if (!pinfo)
return -ENOMEM;
scmi_power_attributes_get(handle, pinfo);
scmi_power_attributes_get(ph, pinfo);
pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
sizeof(*pinfo->dom_info), GFP_KERNEL);
if (!pinfo->dom_info)
return -ENOMEM;
@ -285,20 +303,20 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
for (domain = 0; domain < pinfo->num_domains; domain++) {
struct power_dom_info *dom = pinfo->dom_info + domain;
scmi_power_domain_attributes_get(handle, domain, dom);
scmi_power_domain_attributes_get(ph, domain, dom);
}
scmi_register_protocol_events(handle,
SCMI_PROTOCOL_POWER, SCMI_PROTO_QUEUE_SZ,
&power_event_ops, power_events,
ARRAY_SIZE(power_events),
pinfo->num_domains);
pinfo->version = version;
handle->power_ops = &power_ops;
handle->power_priv = pinfo;
return 0;
return ph->set_priv(ph, pinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_POWER, power)
static const struct scmi_protocol scmi_power = {
.id = SCMI_PROTOCOL_POWER,
.owner = THIS_MODULE,
.instance_init = &scmi_power_protocol_init,
.ops = &power_proto_ops,
.events = &power_protocol_events,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(power, scmi_power)
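The power protocol follows the same conversion and, like perf above, moves its notification plumbing from a scmi_register_protocol_events() call at init time to a static descriptor the core picks up on its own. A sketch of that descriptor under the same hypothetical "foo" naming is below; note that the number of event sources is now resolved lazily through get_priv() rather than being passed at registration time.

/* Hypothetical "foo" event plumbing, mirroring the perf/power conversion */
static int scmi_foo_get_num_sources(const struct scmi_protocol_handle *ph)
{
	struct scmi_foo_info *finfo = ph->get_priv(ph);

	/* resolved lazily from protocol-private data */
	return finfo ? finfo->num_domains : -EINVAL;
}

static const struct scmi_event_ops foo_event_ops = {
	.get_num_sources = scmi_foo_get_num_sources,
	.set_notify_enabled = scmi_foo_set_notify_enabled,	/* as before */
	.fill_custom_report = scmi_foo_fill_custom_report,	/* as before */
};

static const struct scmi_protocol_events foo_protocol_events = {
	.queue_sz = SCMI_PROTO_QUEUE_SZ,
	.ops = &foo_event_ops,
	.evts = foo_events,			/* unchanged event table */
	.num_events = ARRAY_SIZE(foo_events),
};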


@ -2,11 +2,12 @@
/*
* System Control and Management Interface (SCMI) Reset Protocol
*
* Copyright (C) 2019 ARM Ltd.
* Copyright (C) 2019-2021 ARM Ltd.
*/
#define pr_fmt(fmt) "SCMI Notifications RESET - " fmt
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include "common.h"
@ -64,46 +65,45 @@ struct scmi_reset_info {
struct reset_dom_info *dom_info;
};
static int scmi_reset_attributes_get(const struct scmi_handle *handle,
static int scmi_reset_attributes_get(const struct scmi_protocol_handle *ph,
struct scmi_reset_info *pi)
{
int ret;
struct scmi_xfer *t;
u32 attr;
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_RESET, 0, sizeof(attr), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
0, sizeof(attr), &t);
if (ret)
return ret;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
attr = get_unaligned_le32(t->rx.buf);
pi->num_domains = attr & NUM_RESET_DOMAIN_MASK;
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int
scmi_reset_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
struct reset_dom_info *dom_info)
scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph,
u32 domain, struct reset_dom_info *dom_info)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_reset_domain_attributes *attr;
ret = scmi_xfer_get_init(handle, RESET_DOMAIN_ATTRIBUTES,
SCMI_PROTOCOL_RESET, sizeof(domain),
sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, RESET_DOMAIN_ATTRIBUTES,
sizeof(domain), sizeof(*attr), &t);
if (ret)
return ret;
put_unaligned_le32(domain, t->tx.buf);
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
u32 attributes = le32_to_cpu(attr->attributes);
@ -115,47 +115,49 @@ scmi_reset_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_reset_num_domains_get(const struct scmi_handle *handle)
static int scmi_reset_num_domains_get(const struct scmi_protocol_handle *ph)
{
struct scmi_reset_info *pi = handle->reset_priv;
struct scmi_reset_info *pi = ph->get_priv(ph);
return pi->num_domains;
}
static char *scmi_reset_name_get(const struct scmi_handle *handle, u32 domain)
static char *scmi_reset_name_get(const struct scmi_protocol_handle *ph,
u32 domain)
{
struct scmi_reset_info *pi = handle->reset_priv;
struct scmi_reset_info *pi = ph->get_priv(ph);
struct reset_dom_info *dom = pi->dom_info + domain;
return dom->name;
}
static int scmi_reset_latency_get(const struct scmi_handle *handle, u32 domain)
static int scmi_reset_latency_get(const struct scmi_protocol_handle *ph,
u32 domain)
{
struct scmi_reset_info *pi = handle->reset_priv;
struct scmi_reset_info *pi = ph->get_priv(ph);
struct reset_dom_info *dom = pi->dom_info + domain;
return dom->latency_us;
}
static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain,
static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain,
u32 flags, u32 state)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_reset_domain_reset *dom;
struct scmi_reset_info *pi = handle->reset_priv;
struct scmi_reset_info *pi = ph->get_priv(ph);
struct reset_dom_info *rdom = pi->dom_info + domain;
if (rdom->async_reset)
flags |= ASYNCHRONOUS_RESET;
ret = scmi_xfer_get_init(handle, RESET, SCMI_PROTOCOL_RESET,
sizeof(*dom), 0, &t);
ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t);
if (ret)
return ret;
@ -165,34 +167,35 @@ static int scmi_domain_reset(const struct scmi_handle *handle, u32 domain,
dom->reset_state = cpu_to_le32(state);
if (rdom->async_reset)
ret = scmi_do_xfer_with_response(handle, t);
ret = ph->xops->do_xfer_with_response(ph, t);
else
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_reset_domain_reset(const struct scmi_handle *handle, u32 domain)
static int scmi_reset_domain_reset(const struct scmi_protocol_handle *ph,
u32 domain)
{
return scmi_domain_reset(handle, domain, AUTONOMOUS_RESET,
return scmi_domain_reset(ph, domain, AUTONOMOUS_RESET,
ARCH_COLD_RESET);
}
static int
scmi_reset_domain_assert(const struct scmi_handle *handle, u32 domain)
scmi_reset_domain_assert(const struct scmi_protocol_handle *ph, u32 domain)
{
return scmi_domain_reset(handle, domain, EXPLICIT_RESET_ASSERT,
return scmi_domain_reset(ph, domain, EXPLICIT_RESET_ASSERT,
ARCH_COLD_RESET);
}
static int
scmi_reset_domain_deassert(const struct scmi_handle *handle, u32 domain)
scmi_reset_domain_deassert(const struct scmi_protocol_handle *ph, u32 domain)
{
return scmi_domain_reset(handle, domain, 0, ARCH_COLD_RESET);
return scmi_domain_reset(ph, domain, 0, ARCH_COLD_RESET);
}
static const struct scmi_reset_ops reset_ops = {
static const struct scmi_reset_proto_ops reset_proto_ops = {
.num_domains_get = scmi_reset_num_domains_get,
.name_get = scmi_reset_name_get,
.latency_get = scmi_reset_latency_get,
@ -201,16 +204,15 @@ static const struct scmi_reset_ops reset_ops = {
.deassert = scmi_reset_domain_deassert,
};
static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
bool enable)
static int scmi_reset_notify(const struct scmi_protocol_handle *ph,
u32 domain_id, bool enable)
{
int ret;
u32 evt_cntl = enable ? RESET_TP_NOTIFY_ALL : 0;
struct scmi_xfer *t;
struct scmi_msg_reset_notify *cfg;
ret = scmi_xfer_get_init(handle, RESET_NOTIFY,
SCMI_PROTOCOL_RESET, sizeof(*cfg), 0, &t);
ret = ph->xops->xfer_get_init(ph, RESET_NOTIFY, sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -218,18 +220,18 @@ static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
cfg->id = cpu_to_le32(domain_id);
cfg->event_control = cpu_to_le32(evt_cntl);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
static int scmi_reset_set_notify_enabled(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enable)
{
int ret;
ret = scmi_reset_notify(handle, src_id, enable);
ret = scmi_reset_notify(ph, src_id, enable);
if (ret)
pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
evt_id, src_id, ret);
@ -237,10 +239,11 @@ static int scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
return ret;
}
static void *scmi_reset_fill_custom_report(const struct scmi_handle *handle,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
static void *
scmi_reset_fill_custom_report(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
{
const struct scmi_reset_issued_notify_payld *p = payld;
struct scmi_reset_issued_report *r = report;
@ -257,6 +260,16 @@ static void *scmi_reset_fill_custom_report(const struct scmi_handle *handle,
return r;
}
static int scmi_reset_get_num_sources(const struct scmi_protocol_handle *ph)
{
struct scmi_reset_info *pinfo = ph->get_priv(ph);
if (!pinfo)
return -EINVAL;
return pinfo->num_domains;
}
static const struct scmi_event reset_events[] = {
{
.id = SCMI_EVENT_RESET_ISSUED,
@ -266,28 +279,36 @@ static const struct scmi_event reset_events[] = {
};
static const struct scmi_event_ops reset_event_ops = {
.get_num_sources = scmi_reset_get_num_sources,
.set_notify_enabled = scmi_reset_set_notify_enabled,
.fill_custom_report = scmi_reset_fill_custom_report,
};
static int scmi_reset_protocol_init(struct scmi_handle *handle)
static const struct scmi_protocol_events reset_protocol_events = {
.queue_sz = SCMI_PROTO_QUEUE_SZ,
.ops = &reset_event_ops,
.evts = reset_events,
.num_events = ARRAY_SIZE(reset_events),
};
static int scmi_reset_protocol_init(const struct scmi_protocol_handle *ph)
{
int domain;
u32 version;
struct scmi_reset_info *pinfo;
scmi_version_get(handle, SCMI_PROTOCOL_RESET, &version);
ph->xops->version_get(ph, &version);
dev_dbg(handle->dev, "Reset Version %d.%d\n",
dev_dbg(ph->dev, "Reset Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
if (!pinfo)
return -ENOMEM;
scmi_reset_attributes_get(handle, pinfo);
scmi_reset_attributes_get(ph, pinfo);
pinfo->dom_info = devm_kcalloc(handle->dev, pinfo->num_domains,
pinfo->dom_info = devm_kcalloc(ph->dev, pinfo->num_domains,
sizeof(*pinfo->dom_info), GFP_KERNEL);
if (!pinfo->dom_info)
return -ENOMEM;
@ -295,20 +316,19 @@ static int scmi_reset_protocol_init(struct scmi_handle *handle)
for (domain = 0; domain < pinfo->num_domains; domain++) {
struct reset_dom_info *dom = pinfo->dom_info + domain;
scmi_reset_domain_attributes_get(handle, domain, dom);
scmi_reset_domain_attributes_get(ph, domain, dom);
}
scmi_register_protocol_events(handle,
SCMI_PROTOCOL_RESET, SCMI_PROTO_QUEUE_SZ,
&reset_event_ops, reset_events,
ARRAY_SIZE(reset_events),
pinfo->num_domains);
pinfo->version = version;
handle->reset_ops = &reset_ops;
handle->reset_priv = pinfo;
return 0;
return ph->set_priv(ph, pinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_RESET, reset)
static const struct scmi_protocol scmi_reset = {
.id = SCMI_PROTOCOL_RESET,
.owner = THIS_MODULE,
.instance_init = &scmi_reset_protocol_init,
.ops = &reset_proto_ops,
.events = &reset_protocol_events,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(reset, scmi_reset)
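Every command helper in the reset protocol (and in the protocols above) shrinks the same way: xfer_get_init() drops the explicit SCMI_PROTOCOL_* argument because the scmi_protocol_handle is already bound to one protocol instance, and do_xfer()/xfer_put() are reached through ph->xops. A converted helper roughly follows the sketch below; FOO_DOMAIN_NAME_GET and the "foo" names are hypothetical.

/*
 * Sketch of a converted command helper; the message ID and response
 * layout are placeholders.
 */
static int scmi_foo_name_get(const struct scmi_protocol_handle *ph,
			     u32 domain, char *name)
{
	int ret;
	struct scmi_xfer *t;

	/* no protocol ID argument: ph is already bound to the protocol */
	ret = ph->xops->xfer_get_init(ph, FOO_DOMAIN_NAME_GET,
				      sizeof(__le32), SCMI_MAX_STR_SIZE, &t);
	if (ret)
		return ret;

	put_unaligned_le32(domain, t->tx.buf);

	ret = ph->xops->do_xfer(ph, t);
	if (!ret)
		strlcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);

	ph->xops->xfer_put(ph, t);

	return ret;
}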


@ -2,7 +2,7 @@
/*
* SCMI Generic power domain support.
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#include <linux/err.h>
@ -11,9 +11,11 @@
#include <linux/pm_domain.h>
#include <linux/scmi_protocol.h>
static const struct scmi_power_proto_ops *power_ops;
struct scmi_pm_domain {
struct generic_pm_domain genpd;
const struct scmi_handle *handle;
const struct scmi_protocol_handle *ph;
const char *name;
u32 domain;
};
@ -25,16 +27,15 @@ static int scmi_pd_power(struct generic_pm_domain *domain, bool power_on)
int ret;
u32 state, ret_state;
struct scmi_pm_domain *pd = to_scmi_pd(domain);
const struct scmi_power_ops *ops = pd->handle->power_ops;
if (power_on)
state = SCMI_POWER_STATE_GENERIC_ON;
else
state = SCMI_POWER_STATE_GENERIC_OFF;
ret = ops->state_set(pd->handle, pd->domain, state);
ret = power_ops->state_set(pd->ph, pd->domain, state);
if (!ret)
ret = ops->state_get(pd->handle, pd->domain, &ret_state);
ret = power_ops->state_get(pd->ph, pd->domain, &ret_state);
if (!ret && state != ret_state)
return -EIO;
@ -60,11 +61,16 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
struct genpd_onecell_data *scmi_pd_data;
struct generic_pm_domain **domains;
const struct scmi_handle *handle = sdev->handle;
struct scmi_protocol_handle *ph;
if (!handle || !handle->power_ops)
if (!handle)
return -ENODEV;
num_domains = handle->power_ops->num_domains_get(handle);
power_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_POWER, &ph);
if (IS_ERR(power_ops))
return PTR_ERR(power_ops);
num_domains = power_ops->num_domains_get(ph);
if (num_domains < 0) {
dev_err(dev, "number of domains not found\n");
return num_domains;
@ -85,14 +91,14 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
for (i = 0; i < num_domains; i++, scmi_pd++) {
u32 state;
if (handle->power_ops->state_get(handle, i, &state)) {
if (power_ops->state_get(ph, i, &state)) {
dev_warn(dev, "failed to get state for domain %d\n", i);
continue;
}
scmi_pd->domain = i;
scmi_pd->handle = handle;
scmi_pd->name = handle->power_ops->name_get(handle, i);
scmi_pd->ph = ph;
scmi_pd->name = power_ops->name_get(ph, i);
scmi_pd->genpd.name = scmi_pd->name;
scmi_pd->genpd.power_off = scmi_pd_power_off;
scmi_pd->genpd.power_on = scmi_pd_power_on;
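The scmi_pm_domain.c hunks above show the consumer side of the rework: instead of dereferencing handle->power_ops directly, an SCMI driver now asks the handle for the protocol it needs and receives both the ops table and a protocol handle that must be passed into every subsequent call. Condensed, with error paths and the rest of the probe omitted, the pattern looks like the sketch below; example_probe and its surrounding driver are made up for illustration.

/* Consumer-side sketch of the new devm_protocol_get() flow */
static const struct scmi_power_proto_ops *power_ops;

static int example_probe(struct scmi_device *sdev)
{
	int num;
	struct scmi_protocol_handle *ph;

	power_ops = sdev->handle->devm_protocol_get(sdev, SCMI_PROTOCOL_POWER,
						    &ph);
	if (IS_ERR(power_ops))
		return PTR_ERR(power_ops);

	/* every call now takes the protocol handle, not the scmi_handle */
	num = power_ops->num_domains_get(ph);

	return num < 0 ? num : 0;
}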


@ -2,12 +2,13 @@
/*
* System Control and Management Interface (SCMI) Sensor Protocol
*
* Copyright (C) 2018-2020 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
*/
#define pr_fmt(fmt) "SCMI Notifications SENSOR - " fmt
#include <linux/bitfield.h>
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include "common.h"
@ -201,21 +202,21 @@ struct sensors_info {
struct scmi_sensor_info *sensors;
};
static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
static int scmi_sensor_attributes_get(const struct scmi_protocol_handle *ph,
struct sensors_info *si)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_resp_sensor_attributes *attr;
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES,
0, sizeof(*attr), &t);
if (ret)
return ret;
attr = t->rx.buf;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
si->num_sensors = le16_to_cpu(attr->num_sensors);
si->max_requests = attr->max_requests;
@ -224,7 +225,7 @@ static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
si->reg_size = le32_to_cpu(attr->reg_size);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -235,7 +236,7 @@ static inline void scmi_parse_range_attrs(struct scmi_range_attrs *out,
out->max_range = get_unaligned_le64((void *)&in->max_range_low);
}
static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
static int scmi_sensor_update_intervals(const struct scmi_protocol_handle *ph,
struct scmi_sensor_info *s)
{
int ret, cnt;
@ -245,8 +246,8 @@ static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
struct scmi_msg_resp_sensor_list_update_intervals *buf;
struct scmi_msg_sensor_list_update_intervals *msg;
ret = scmi_xfer_get_init(handle, SENSOR_LIST_UPDATE_INTERVALS,
SCMI_PROTOCOL_SENSOR, sizeof(*msg), 0, &ti);
ret = ph->xops->xfer_get_init(ph, SENSOR_LIST_UPDATE_INTERVALS,
sizeof(*msg), 0, &ti);
if (ret)
return ret;
@ -259,7 +260,7 @@ static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
msg->id = cpu_to_le32(s->id);
msg->index = cpu_to_le32(desc_index);
ret = scmi_do_xfer(handle, ti);
ret = ph->xops->do_xfer(ph, ti);
if (ret)
break;
@ -277,7 +278,7 @@ static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
/* segmented intervals are reported in one triplet */
if (s->intervals.segmented &&
(num_remaining || num_returned != 3)) {
dev_err(handle->dev,
dev_err(ph->dev,
"Sensor ID:%d advertises an invalid segmented interval (%d)\n",
s->id, s->intervals.count);
s->intervals.segmented = false;
@ -288,7 +289,7 @@ static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
/* Direct allocation when exceeding pre-allocated */
if (s->intervals.count >= SCMI_MAX_PREALLOC_POOL) {
s->intervals.desc =
devm_kcalloc(handle->dev,
devm_kcalloc(ph->dev,
s->intervals.count,
sizeof(*s->intervals.desc),
GFP_KERNEL);
@ -300,7 +301,7 @@ static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
}
}
} else if (desc_index + num_returned > s->intervals.count) {
dev_err(handle->dev,
dev_err(ph->dev,
"No. of update intervals can't exceed %d\n",
s->intervals.count);
ret = -EINVAL;
@ -313,18 +314,18 @@ static int scmi_sensor_update_intervals(const struct scmi_handle *handle,
desc_index += num_returned;
scmi_reset_rx_to_maxsz(handle, ti);
ph->xops->reset_rx_to_maxsz(ph, ti);
/*
* check for both returned and remaining to avoid infinite
* loop due to buggy firmware
*/
} while (num_returned && num_remaining);
scmi_xfer_put(handle, ti);
ph->xops->xfer_put(ph, ti);
return ret;
}
static int scmi_sensor_axis_description(const struct scmi_handle *handle,
static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph,
struct scmi_sensor_info *s)
{
int ret, cnt;
@ -334,13 +335,13 @@ static int scmi_sensor_axis_description(const struct scmi_handle *handle,
struct scmi_msg_resp_sensor_axis_description *buf;
struct scmi_msg_sensor_axis_description_get *msg;
s->axis = devm_kcalloc(handle->dev, s->num_axis,
s->axis = devm_kcalloc(ph->dev, s->num_axis,
sizeof(*s->axis), GFP_KERNEL);
if (!s->axis)
return -ENOMEM;
ret = scmi_xfer_get_init(handle, SENSOR_AXIS_DESCRIPTION_GET,
SCMI_PROTOCOL_SENSOR, sizeof(*msg), 0, &te);
ret = ph->xops->xfer_get_init(ph, SENSOR_AXIS_DESCRIPTION_GET,
sizeof(*msg), 0, &te);
if (ret)
return ret;
@ -354,7 +355,7 @@ static int scmi_sensor_axis_description(const struct scmi_handle *handle,
msg->id = cpu_to_le32(s->id);
msg->axis_desc_index = cpu_to_le32(desc_index);
ret = scmi_do_xfer(handle, te);
ret = ph->xops->do_xfer(ph, te);
if (ret)
break;
@ -363,7 +364,7 @@ static int scmi_sensor_axis_description(const struct scmi_handle *handle,
num_remaining = NUM_AXIS_REMAINING(flags);
if (desc_index + num_returned > s->num_axis) {
dev_err(handle->dev, "No. of axis can't exceed %d\n",
dev_err(ph->dev, "No. of axis can't exceed %d\n",
s->num_axis);
break;
}
@ -405,18 +406,18 @@ static int scmi_sensor_axis_description(const struct scmi_handle *handle,
desc_index += num_returned;
scmi_reset_rx_to_maxsz(handle, te);
ph->xops->reset_rx_to_maxsz(ph, te);
/*
* check for both returned and remaining to avoid infinite
* loop due to buggy firmware
*/
} while (num_returned && num_remaining);
scmi_xfer_put(handle, te);
ph->xops->xfer_put(ph, te);
return ret;
}
static int scmi_sensor_description_get(const struct scmi_handle *handle,
static int scmi_sensor_description_get(const struct scmi_protocol_handle *ph,
struct sensors_info *si)
{
int ret, cnt;
@ -425,8 +426,8 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_sensor_description *buf;
ret = scmi_xfer_get_init(handle, SENSOR_DESCRIPTION_GET,
SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t);
ret = ph->xops->xfer_get_init(ph, SENSOR_DESCRIPTION_GET,
sizeof(__le32), 0, &t);
if (ret)
return ret;
@ -437,7 +438,8 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
/* Set the number of sensors to be skipped/already read */
put_unaligned_le32(desc_index, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (ret)
break;
@ -445,7 +447,7 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
num_remaining = le16_to_cpu(buf->num_remaining);
if (desc_index + num_returned > si->num_sensors) {
dev_err(handle->dev, "No. of sensors can't exceed %d",
dev_err(ph->dev, "No. of sensors can't exceed %d",
si->num_sensors);
break;
}
@ -500,8 +502,8 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
* Since the command is optional, on error carry
* on without any update interval.
*/
if (scmi_sensor_update_intervals(handle, s))
dev_dbg(handle->dev,
if (scmi_sensor_update_intervals(ph, s))
dev_dbg(ph->dev,
"Update Intervals not available for sensor ID:%d\n",
s->id);
}
@ -535,7 +537,7 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
}
}
if (s->num_axis > 0) {
ret = scmi_sensor_axis_description(handle, s);
ret = scmi_sensor_axis_description(ph, s);
if (ret)
goto out;
}
@ -545,7 +547,7 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
desc_index += num_returned;
scmi_reset_rx_to_maxsz(handle, t);
ph->xops->reset_rx_to_maxsz(ph, t);
/*
* check for both returned and remaining to avoid infinite
* loop due to buggy firmware
@ -553,12 +555,12 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
} while (num_returned && num_remaining);
out:
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static inline int
scmi_sensor_request_notify(const struct scmi_handle *handle, u32 sensor_id,
scmi_sensor_request_notify(const struct scmi_protocol_handle *ph, u32 sensor_id,
u8 message_id, bool enable)
{
int ret;
@ -566,8 +568,7 @@ scmi_sensor_request_notify(const struct scmi_handle *handle, u32 sensor_id,
struct scmi_xfer *t;
struct scmi_msg_sensor_request_notify *cfg;
ret = scmi_xfer_get_init(handle, message_id,
SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t);
ret = ph->xops->xfer_get_init(ph, message_id, sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -575,40 +576,40 @@ scmi_sensor_request_notify(const struct scmi_handle *handle, u32 sensor_id,
cfg->id = cpu_to_le32(sensor_id);
cfg->event_control = cpu_to_le32(evt_cntl);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_sensor_trip_point_notify(const struct scmi_handle *handle,
static int scmi_sensor_trip_point_notify(const struct scmi_protocol_handle *ph,
u32 sensor_id, bool enable)
{
return scmi_sensor_request_notify(handle, sensor_id,
return scmi_sensor_request_notify(ph, sensor_id,
SENSOR_TRIP_POINT_NOTIFY,
enable);
}
static int
scmi_sensor_continuous_update_notify(const struct scmi_handle *handle,
scmi_sensor_continuous_update_notify(const struct scmi_protocol_handle *ph,
u32 sensor_id, bool enable)
{
return scmi_sensor_request_notify(handle, sensor_id,
return scmi_sensor_request_notify(ph, sensor_id,
SENSOR_CONTINUOUS_UPDATE_NOTIFY,
enable);
}
static int
scmi_sensor_trip_point_config(const struct scmi_handle *handle, u32 sensor_id,
u8 trip_id, u64 trip_value)
scmi_sensor_trip_point_config(const struct scmi_protocol_handle *ph,
u32 sensor_id, u8 trip_id, u64 trip_value)
{
int ret;
u32 evt_cntl = SENSOR_TP_BOTH;
struct scmi_xfer *t;
struct scmi_msg_set_sensor_trip_point *trip;
ret = scmi_xfer_get_init(handle, SENSOR_TRIP_POINT_CONFIG,
SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t);
ret = ph->xops->xfer_get_init(ph, SENSOR_TRIP_POINT_CONFIG,
sizeof(*trip), 0, &t);
if (ret)
return ret;
@ -618,47 +619,46 @@ scmi_sensor_trip_point_config(const struct scmi_handle *handle, u32 sensor_id,
trip->value_low = cpu_to_le32(trip_value & 0xffffffff);
trip->value_high = cpu_to_le32(trip_value >> 32);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_sensor_config_get(const struct scmi_handle *handle,
static int scmi_sensor_config_get(const struct scmi_protocol_handle *ph,
u32 sensor_id, u32 *sensor_config)
{
int ret;
struct scmi_xfer *t;
ret = scmi_xfer_get_init(handle, SENSOR_CONFIG_GET,
SCMI_PROTOCOL_SENSOR, sizeof(__le32),
sizeof(__le32), &t);
ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_GET,
sizeof(__le32), sizeof(__le32), &t);
if (ret)
return ret;
put_unaligned_le32(cpu_to_le32(sensor_id), t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
struct sensors_info *si = handle->sensor_priv;
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
*sensor_config = get_unaligned_le64(t->rx.buf);
s->sensor_config = *sensor_config;
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_sensor_config_set(const struct scmi_handle *handle,
static int scmi_sensor_config_set(const struct scmi_protocol_handle *ph,
u32 sensor_id, u32 sensor_config)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_sensor_config_set *msg;
ret = scmi_xfer_get_init(handle, SENSOR_CONFIG_SET,
SCMI_PROTOCOL_SENSOR, sizeof(*msg), 0, &t);
ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_SET,
sizeof(*msg), 0, &t);
if (ret)
return ret;
@ -666,21 +666,21 @@ static int scmi_sensor_config_set(const struct scmi_handle *handle,
msg->id = cpu_to_le32(sensor_id);
msg->sensor_config = cpu_to_le32(sensor_config);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
struct sensors_info *si = handle->sensor_priv;
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
s->sensor_config = sensor_config;
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
/**
* scmi_sensor_reading_get - Read scalar sensor value
* @handle: Platform handle
* @ph: Protocol handle
* @sensor_id: Sensor ID
* @value: The 64bit value sensor reading
*
@ -693,17 +693,17 @@ static int scmi_sensor_config_set(const struct scmi_handle *handle,
*
* Return: 0 on Success
*/
static int scmi_sensor_reading_get(const struct scmi_handle *handle,
static int scmi_sensor_reading_get(const struct scmi_protocol_handle *ph,
u32 sensor_id, u64 *value)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_sensor_reading_get *sensor;
struct sensors_info *si = handle->sensor_priv;
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
ret = scmi_xfer_get_init(handle, SENSOR_READING_GET,
SCMI_PROTOCOL_SENSOR, sizeof(*sensor), 0, &t);
ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
sizeof(*sensor), 0, &t);
if (ret)
return ret;
@ -711,7 +711,7 @@ static int scmi_sensor_reading_get(const struct scmi_handle *handle,
sensor->id = cpu_to_le32(sensor_id);
if (s->async) {
sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
ret = scmi_do_xfer_with_response(handle, t);
ret = ph->xops->do_xfer_with_response(ph, t);
if (!ret) {
struct scmi_resp_sensor_reading_complete *resp;
@ -723,12 +723,12 @@ static int scmi_sensor_reading_get(const struct scmi_handle *handle,
}
} else {
sensor->flags = cpu_to_le32(0);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
*value = get_unaligned_le64(t->rx.buf);
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -742,7 +742,7 @@ scmi_parse_sensor_readings(struct scmi_sensor_reading *out,
/**
* scmi_sensor_reading_get_timestamped - Read multiple-axis timestamped values
* @handle: Platform handle
* @ph: Protocol handle
* @sensor_id: Sensor ID
* @count: The length of the provided @readings array
* @readings: An array of elements each representing a timestamped per-axis
@ -755,22 +755,22 @@ scmi_parse_sensor_readings(struct scmi_sensor_reading *out,
* Return: 0 on Success
*/
static int
scmi_sensor_reading_get_timestamped(const struct scmi_handle *handle,
scmi_sensor_reading_get_timestamped(const struct scmi_protocol_handle *ph,
u32 sensor_id, u8 count,
struct scmi_sensor_reading *readings)
{
int ret;
struct scmi_xfer *t;
struct scmi_msg_sensor_reading_get *sensor;
struct sensors_info *si = handle->sensor_priv;
struct sensors_info *si = ph->get_priv(ph);
struct scmi_sensor_info *s = si->sensors + sensor_id;
if (!count || !readings ||
(!s->num_axis && count > 1) || (s->num_axis && count > s->num_axis))
return -EINVAL;
ret = scmi_xfer_get_init(handle, SENSOR_READING_GET,
SCMI_PROTOCOL_SENSOR, sizeof(*sensor), 0, &t);
ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
sizeof(*sensor), 0, &t);
if (ret)
return ret;
@ -778,7 +778,7 @@ scmi_sensor_reading_get_timestamped(const struct scmi_handle *handle,
sensor->id = cpu_to_le32(sensor_id);
if (s->async) {
sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
ret = scmi_do_xfer_with_response(handle, t);
ret = ph->xops->do_xfer_with_response(ph, t);
if (!ret) {
int i;
struct scmi_resp_sensor_reading_complete_v3 *resp;
@ -794,7 +794,7 @@ scmi_sensor_reading_get_timestamped(const struct scmi_handle *handle,
}
} else {
sensor->flags = cpu_to_le32(0);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret) {
int i;
struct scmi_sensor_reading_resp *resp_readings;
@ -806,26 +806,26 @@ scmi_sensor_reading_get_timestamped(const struct scmi_handle *handle,
}
}
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static const struct scmi_sensor_info *
scmi_sensor_info_get(const struct scmi_handle *handle, u32 sensor_id)
scmi_sensor_info_get(const struct scmi_protocol_handle *ph, u32 sensor_id)
{
struct sensors_info *si = handle->sensor_priv;
struct sensors_info *si = ph->get_priv(ph);
return si->sensors + sensor_id;
}
static int scmi_sensor_count_get(const struct scmi_handle *handle)
static int scmi_sensor_count_get(const struct scmi_protocol_handle *ph)
{
struct sensors_info *si = handle->sensor_priv;
struct sensors_info *si = ph->get_priv(ph);
return si->num_sensors;
}
static const struct scmi_sensor_ops sensor_ops = {
static const struct scmi_sensor_proto_ops sensor_proto_ops = {
.count_get = scmi_sensor_count_get,
.info_get = scmi_sensor_info_get,
.trip_point_config = scmi_sensor_trip_point_config,
@ -835,18 +835,17 @@ static const struct scmi_sensor_ops sensor_ops = {
.config_set = scmi_sensor_config_set,
};
static int scmi_sensor_set_notify_enabled(const struct scmi_handle *handle,
static int scmi_sensor_set_notify_enabled(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enable)
{
int ret;
switch (evt_id) {
case SCMI_EVENT_SENSOR_TRIP_POINT_EVENT:
ret = scmi_sensor_trip_point_notify(handle, src_id, enable);
ret = scmi_sensor_trip_point_notify(ph, src_id, enable);
break;
case SCMI_EVENT_SENSOR_UPDATE:
ret = scmi_sensor_continuous_update_notify(handle, src_id,
enable);
ret = scmi_sensor_continuous_update_notify(ph, src_id, enable);
break;
default:
ret = -EINVAL;
@ -860,10 +859,11 @@ static int scmi_sensor_set_notify_enabled(const struct scmi_handle *handle,
return ret;
}
static void *scmi_sensor_fill_custom_report(const struct scmi_handle *handle,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
static void *
scmi_sensor_fill_custom_report(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
{
void *rep = NULL;
@ -890,7 +890,7 @@ static void *scmi_sensor_fill_custom_report(const struct scmi_handle *handle,
struct scmi_sensor_info *s;
const struct scmi_sensor_update_notify_payld *p = payld;
struct scmi_sensor_update_report *r = report;
struct sensors_info *sinfo = handle->sensor_priv;
struct sensors_info *sinfo = ph->get_priv(ph);
/* payld_sz is variable for this event */
r->sensor_id = le32_to_cpu(p->sensor_id);
@ -920,6 +920,13 @@ static void *scmi_sensor_fill_custom_report(const struct scmi_handle *handle,
return rep;
}
static int scmi_sensor_get_num_sources(const struct scmi_protocol_handle *ph)
{
struct sensors_info *si = ph->get_priv(ph);
return si->num_sensors;
}
static const struct scmi_event sensor_events[] = {
{
.id = SCMI_EVENT_SENSOR_TRIP_POINT_EVENT,
@ -939,48 +946,55 @@ static const struct scmi_event sensor_events[] = {
};
static const struct scmi_event_ops sensor_event_ops = {
.get_num_sources = scmi_sensor_get_num_sources,
.set_notify_enabled = scmi_sensor_set_notify_enabled,
.fill_custom_report = scmi_sensor_fill_custom_report,
};
static int scmi_sensors_protocol_init(struct scmi_handle *handle)
static const struct scmi_protocol_events sensor_protocol_events = {
.queue_sz = SCMI_PROTO_QUEUE_SZ,
.ops = &sensor_event_ops,
.evts = sensor_events,
.num_events = ARRAY_SIZE(sensor_events),
};
static int scmi_sensors_protocol_init(const struct scmi_protocol_handle *ph)
{
u32 version;
int ret;
struct sensors_info *sinfo;
scmi_version_get(handle, SCMI_PROTOCOL_SENSOR, &version);
ph->xops->version_get(ph, &version);
dev_dbg(handle->dev, "Sensor Version %d.%d\n",
dev_dbg(ph->dev, "Sensor Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
sinfo = devm_kzalloc(handle->dev, sizeof(*sinfo), GFP_KERNEL);
sinfo = devm_kzalloc(ph->dev, sizeof(*sinfo), GFP_KERNEL);
if (!sinfo)
return -ENOMEM;
sinfo->version = version;
ret = scmi_sensor_attributes_get(handle, sinfo);
ret = scmi_sensor_attributes_get(ph, sinfo);
if (ret)
return ret;
sinfo->sensors = devm_kcalloc(handle->dev, sinfo->num_sensors,
sinfo->sensors = devm_kcalloc(ph->dev, sinfo->num_sensors,
sizeof(*sinfo->sensors), GFP_KERNEL);
if (!sinfo->sensors)
return -ENOMEM;
ret = scmi_sensor_description_get(handle, sinfo);
ret = scmi_sensor_description_get(ph, sinfo);
if (ret)
return ret;
scmi_register_protocol_events(handle,
SCMI_PROTOCOL_SENSOR, SCMI_PROTO_QUEUE_SZ,
&sensor_event_ops, sensor_events,
ARRAY_SIZE(sensor_events),
sinfo->num_sensors);
handle->sensor_priv = sinfo;
handle->sensor_ops = &sensor_ops;
return 0;
return ph->set_priv(ph, sinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_SENSOR, sensors)
static const struct scmi_protocol scmi_sensors = {
.id = SCMI_PROTOCOL_SENSOR,
.owner = THIS_MODULE,
.instance_init = &scmi_sensors_protocol_init,
.ops = &sensor_proto_ops,
.events = &sensor_protocol_events,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(sensors, scmi_sensors)
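The sensor protocol also shows what the multi-part "describe"-style commands look like after the conversion: the loop keeps the returned/remaining guard against buggy firmware and now restores the receive buffer size through ph->xops->reset_rx_to_maxsz() between iterations. A skeleton of such a loop is sketched below; FOO_DESCRIPTION_GET and struct scmi_msg_resp_foo_description are hypothetical and assumed to carry little-endian num_returned/num_remaining counters.

/* Skeleton of a multi-part query under the new xops (placeholder names) */
static int scmi_foo_describe_all(const struct scmi_protocol_handle *ph,
				 struct scmi_foo_info *finfo)
{
	int ret, desc_index = 0;
	u16 num_returned, num_remaining;
	struct scmi_xfer *t;
	struct scmi_msg_resp_foo_description *buf;

	ret = ph->xops->xfer_get_init(ph, FOO_DESCRIPTION_GET,
				      sizeof(__le32), 0, &t);
	if (ret)
		return ret;

	buf = t->rx.buf;
	do {
		/* number of descriptors already read, to be skipped */
		put_unaligned_le32(desc_index, t->tx.buf);

		ret = ph->xops->do_xfer(ph, t);
		if (ret)
			break;

		num_returned = le16_to_cpu(buf->num_returned);
		num_remaining = le16_to_cpu(buf->num_remaining);

		/* ... parse num_returned descriptors into finfo ... */
		desc_index += num_returned;

		/* restore the rx size shrunk by the previous response */
		ph->xops->reset_rx_to_maxsz(ph, t);

		/* check both counters to avoid looping on buggy firmware */
	} while (num_returned && num_remaining);

	ph->xops->xfer_put(ph, t);

	return ret;
}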


@ -2,11 +2,12 @@
/*
* System Control and Management Interface (SCMI) System Power Protocol
*
* Copyright (C) 2020 ARM Ltd.
* Copyright (C) 2020-2021 ARM Ltd.
*/
#define pr_fmt(fmt) "SCMI Notifications SYSTEM - " fmt
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include "common.h"
@ -32,43 +33,44 @@ struct scmi_system_info {
u32 version;
};
static int scmi_system_request_notify(const struct scmi_handle *handle,
static int scmi_system_request_notify(const struct scmi_protocol_handle *ph,
bool enable)
{
int ret;
struct scmi_xfer *t;
struct scmi_system_power_state_notify *notify;
ret = scmi_xfer_get_init(handle, SYSTEM_POWER_STATE_NOTIFY,
SCMI_PROTOCOL_SYSTEM, sizeof(*notify), 0, &t);
ret = ph->xops->xfer_get_init(ph, SYSTEM_POWER_STATE_NOTIFY,
sizeof(*notify), 0, &t);
if (ret)
return ret;
notify = t->tx.buf;
notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_system_set_notify_enabled(const struct scmi_handle *handle,
static int scmi_system_set_notify_enabled(const struct scmi_protocol_handle *ph,
u8 evt_id, u32 src_id, bool enable)
{
int ret;
ret = scmi_system_request_notify(handle, enable);
ret = scmi_system_request_notify(ph, enable);
if (ret)
pr_debug("FAIL_ENABLE - evt[%X] - ret:%d\n", evt_id, ret);
return ret;
}
static void *scmi_system_fill_custom_report(const struct scmi_handle *handle,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
static void *
scmi_system_fill_custom_report(const struct scmi_protocol_handle *ph,
u8 evt_id, ktime_t timestamp,
const void *payld, size_t payld_sz,
void *report, u32 *src_id)
{
const struct scmi_system_power_state_notifier_payld *p = payld;
struct scmi_system_power_state_notifier_report *r = report;
@ -101,31 +103,38 @@ static const struct scmi_event_ops system_event_ops = {
.fill_custom_report = scmi_system_fill_custom_report,
};
static int scmi_system_protocol_init(struct scmi_handle *handle)
static const struct scmi_protocol_events system_protocol_events = {
.queue_sz = SCMI_PROTO_QUEUE_SZ,
.ops = &system_event_ops,
.evts = system_events,
.num_events = ARRAY_SIZE(system_events),
.num_sources = SCMI_SYSTEM_NUM_SOURCES,
};
static int scmi_system_protocol_init(const struct scmi_protocol_handle *ph)
{
u32 version;
struct scmi_system_info *pinfo;
scmi_version_get(handle, SCMI_PROTOCOL_SYSTEM, &version);
ph->xops->version_get(ph, &version);
dev_dbg(handle->dev, "System Power Version %d.%d\n",
dev_dbg(ph->dev, "System Power Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
pinfo = devm_kzalloc(handle->dev, sizeof(*pinfo), GFP_KERNEL);
pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
if (!pinfo)
return -ENOMEM;
scmi_register_protocol_events(handle,
SCMI_PROTOCOL_SYSTEM, SCMI_PROTO_QUEUE_SZ,
&system_event_ops,
system_events,
ARRAY_SIZE(system_events),
SCMI_SYSTEM_NUM_SOURCES);
pinfo->version = version;
handle->system_priv = pinfo;
return 0;
return ph->set_priv(ph, pinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_SYSTEM, system)
static const struct scmi_protocol scmi_system = {
.id = SCMI_PROTOCOL_SYSTEM,
.owner = THIS_MODULE,
.instance_init = &scmi_system_protocol_init,
.ops = NULL,
.events = &system_protocol_events,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(system, scmi_system)
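The system-power protocol is notification-only, so its scmi_protocol entry keeps .ops = NULL and, unlike the runtime get_num_sources() callback sketched earlier, declares a fixed source count directly in the events descriptor. Reusing the hypothetical "foo" descriptor, that static-count variant looks like:

/* Events-only variant: fixed source count instead of a callback */
static const struct scmi_protocol_events foo_protocol_events = {
	.queue_sz = SCMI_PROTO_QUEUE_SZ,
	.ops = &foo_event_ops,
	.evts = foo_events,
	.num_events = ARRAY_SIZE(foo_events),
	.num_sources = 1,	/* known at build time, no .get_num_sources */
};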


@ -2,9 +2,10 @@
/*
* System Control and Management Interface (SCMI) Voltage Protocol
*
* Copyright (C) 2020 ARM Ltd.
* Copyright (C) 2020-2021 ARM Ltd.
*/
#include <linux/module.h>
#include <linux/scmi_protocol.h>
#include "common.h"
@ -59,23 +60,23 @@ struct voltage_info {
struct scmi_voltage_info *domains;
};
static int scmi_protocol_attributes_get(const struct scmi_handle *handle,
static int scmi_protocol_attributes_get(const struct scmi_protocol_handle *ph,
struct voltage_info *vinfo)
{
int ret;
struct scmi_xfer *t;
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_VOLTAGE, 0, sizeof(__le32), &t);
ret = ph->xops->xfer_get_init(ph, PROTOCOL_ATTRIBUTES, 0,
sizeof(__le32), &t);
if (ret)
return ret;
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
vinfo->num_domains =
NUM_VOLTAGE_DOMAINS(get_unaligned_le32(t->rx.buf));
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
@ -109,24 +110,23 @@ static int scmi_init_voltage_levels(struct device *dev,
return 0;
}
static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
static int scmi_voltage_descriptors_get(const struct scmi_protocol_handle *ph,
struct voltage_info *vinfo)
{
int ret, dom;
struct scmi_xfer *td, *tl;
struct device *dev = handle->dev;
struct device *dev = ph->dev;
struct scmi_msg_resp_domain_attributes *resp_dom;
struct scmi_msg_resp_describe_levels *resp_levels;
ret = scmi_xfer_get_init(handle, VOLTAGE_DOMAIN_ATTRIBUTES,
SCMI_PROTOCOL_VOLTAGE, sizeof(__le32),
sizeof(*resp_dom), &td);
ret = ph->xops->xfer_get_init(ph, VOLTAGE_DOMAIN_ATTRIBUTES,
sizeof(__le32), sizeof(*resp_dom), &td);
if (ret)
return ret;
resp_dom = td->rx.buf;
ret = scmi_xfer_get_init(handle, VOLTAGE_DESCRIBE_LEVELS,
SCMI_PROTOCOL_VOLTAGE, sizeof(__le64), 0, &tl);
ret = ph->xops->xfer_get_init(ph, VOLTAGE_DESCRIBE_LEVELS,
sizeof(__le64), 0, &tl);
if (ret)
goto outd;
resp_levels = tl->rx.buf;
@ -139,7 +139,7 @@ static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
/* Retrieve domain attributes at first ... */
put_unaligned_le32(dom, td->tx.buf);
ret = scmi_do_xfer(handle, td);
ret = ph->xops->do_xfer(ph, td);
/* Skip domain on comms error */
if (ret)
continue;
@ -157,7 +157,7 @@ static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
cmd->domain_id = cpu_to_le32(v->id);
cmd->level_index = desc_index;
ret = scmi_do_xfer(handle, tl);
ret = ph->xops->do_xfer(ph, tl);
if (ret)
break;
@ -176,7 +176,7 @@ static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
}
if (desc_index + num_returned > v->num_levels) {
dev_err(handle->dev,
dev_err(ph->dev,
"No. of voltage levels can't exceed %d\n",
v->num_levels);
ret = -EINVAL;
@ -195,7 +195,7 @@ static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
desc_index += num_returned;
scmi_reset_rx_to_maxsz(handle, tl);
ph->xops->reset_rx_to_maxsz(ph, tl);
/* check both to avoid infinite loop due to buggy fw */
} while (num_returned && num_remaining);
@ -204,55 +204,52 @@ static int scmi_voltage_descriptors_get(const struct scmi_handle *handle,
devm_kfree(dev, v->levels_uv);
}
scmi_reset_rx_to_maxsz(handle, td);
ph->xops->reset_rx_to_maxsz(ph, td);
}
scmi_xfer_put(handle, tl);
ph->xops->xfer_put(ph, tl);
outd:
scmi_xfer_put(handle, td);
ph->xops->xfer_put(ph, td);
return ret;
}
static int __scmi_voltage_get_u32(const struct scmi_handle *handle,
static int __scmi_voltage_get_u32(const struct scmi_protocol_handle *ph,
u8 cmd_id, u32 domain_id, u32 *value)
{
int ret;
struct scmi_xfer *t;
struct voltage_info *vinfo = handle->voltage_priv;
struct voltage_info *vinfo = ph->get_priv(ph);
if (domain_id >= vinfo->num_domains)
return -EINVAL;
ret = scmi_xfer_get_init(handle, cmd_id,
SCMI_PROTOCOL_VOLTAGE,
sizeof(__le32), 0, &t);
ret = ph->xops->xfer_get_init(ph, cmd_id, sizeof(__le32), 0, &t);
if (ret)
return ret;
put_unaligned_le32(domain_id, t->tx.buf);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
if (!ret)
*value = get_unaligned_le32(t->rx.buf);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_voltage_config_set(const struct scmi_handle *handle,
static int scmi_voltage_config_set(const struct scmi_protocol_handle *ph,
u32 domain_id, u32 config)
{
int ret;
struct scmi_xfer *t;
struct voltage_info *vinfo = handle->voltage_priv;
struct voltage_info *vinfo = ph->get_priv(ph);
struct scmi_msg_cmd_config_set *cmd;
if (domain_id >= vinfo->num_domains)
return -EINVAL;
ret = scmi_xfer_get_init(handle, VOLTAGE_CONFIG_SET,
SCMI_PROTOCOL_VOLTAGE,
sizeof(*cmd), 0, &t);
ret = ph->xops->xfer_get_init(ph, VOLTAGE_CONFIG_SET,
sizeof(*cmd), 0, &t);
if (ret)
return ret;
@ -260,33 +257,32 @@ static int scmi_voltage_config_set(const struct scmi_handle *handle,
cmd->domain_id = cpu_to_le32(domain_id);
cmd->config = cpu_to_le32(config & GENMASK(3, 0));
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_voltage_config_get(const struct scmi_handle *handle,
static int scmi_voltage_config_get(const struct scmi_protocol_handle *ph,
u32 domain_id, u32 *config)
{
return __scmi_voltage_get_u32(handle, VOLTAGE_CONFIG_GET,
return __scmi_voltage_get_u32(ph, VOLTAGE_CONFIG_GET,
domain_id, config);
}
static int scmi_voltage_level_set(const struct scmi_handle *handle,
static int scmi_voltage_level_set(const struct scmi_protocol_handle *ph,
u32 domain_id, u32 flags, s32 volt_uV)
{
int ret;
struct scmi_xfer *t;
struct voltage_info *vinfo = handle->voltage_priv;
struct voltage_info *vinfo = ph->get_priv(ph);
struct scmi_msg_cmd_level_set *cmd;
if (domain_id >= vinfo->num_domains)
return -EINVAL;
ret = scmi_xfer_get_init(handle, VOLTAGE_LEVEL_SET,
SCMI_PROTOCOL_VOLTAGE,
sizeof(*cmd), 0, &t);
ret = ph->xops->xfer_get_init(ph, VOLTAGE_LEVEL_SET,
sizeof(*cmd), 0, &t);
if (ret)
return ret;
@ -295,23 +291,23 @@ static int scmi_voltage_level_set(const struct scmi_handle *handle,
cmd->flags = cpu_to_le32(flags);
cmd->voltage_level = cpu_to_le32(volt_uV);
ret = scmi_do_xfer(handle, t);
ret = ph->xops->do_xfer(ph, t);
scmi_xfer_put(handle, t);
ph->xops->xfer_put(ph, t);
return ret;
}
static int scmi_voltage_level_get(const struct scmi_handle *handle,
static int scmi_voltage_level_get(const struct scmi_protocol_handle *ph,
u32 domain_id, s32 *volt_uV)
{
return __scmi_voltage_get_u32(handle, VOLTAGE_LEVEL_GET,
return __scmi_voltage_get_u32(ph, VOLTAGE_LEVEL_GET,
domain_id, (u32 *)volt_uV);
}
static const struct scmi_voltage_info * __must_check
scmi_voltage_info_get(const struct scmi_handle *handle, u32 domain_id)
scmi_voltage_info_get(const struct scmi_protocol_handle *ph, u32 domain_id)
{
struct voltage_info *vinfo = handle->voltage_priv;
struct voltage_info *vinfo = ph->get_priv(ph);
if (domain_id >= vinfo->num_domains ||
!vinfo->domains[domain_id].num_levels)
@ -320,14 +316,14 @@ scmi_voltage_info_get(const struct scmi_handle *handle, u32 domain_id)
return vinfo->domains + domain_id;
}
static int scmi_voltage_domains_num_get(const struct scmi_handle *handle)
static int scmi_voltage_domains_num_get(const struct scmi_protocol_handle *ph)
{
struct voltage_info *vinfo = handle->voltage_priv;
struct voltage_info *vinfo = ph->get_priv(ph);
return vinfo->num_domains;
}
static struct scmi_voltage_ops voltage_ops = {
static struct scmi_voltage_proto_ops voltage_proto_ops = {
.num_domains_get = scmi_voltage_domains_num_get,
.info_get = scmi_voltage_info_get,
.config_set = scmi_voltage_config_set,
@ -336,45 +332,49 @@ static struct scmi_voltage_ops voltage_ops = {
.level_get = scmi_voltage_level_get,
};
static int scmi_voltage_protocol_init(struct scmi_handle *handle)
static int scmi_voltage_protocol_init(const struct scmi_protocol_handle *ph)
{
int ret;
u32 version;
struct voltage_info *vinfo;
ret = scmi_version_get(handle, SCMI_PROTOCOL_VOLTAGE, &version);
ret = ph->xops->version_get(ph, &version);
if (ret)
return ret;
dev_dbg(handle->dev, "Voltage Version %d.%d\n",
dev_dbg(ph->dev, "Voltage Version %d.%d\n",
PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
vinfo = devm_kzalloc(handle->dev, sizeof(*vinfo), GFP_KERNEL);
vinfo = devm_kzalloc(ph->dev, sizeof(*vinfo), GFP_KERNEL);
if (!vinfo)
return -ENOMEM;
vinfo->version = version;
ret = scmi_protocol_attributes_get(handle, vinfo);
ret = scmi_protocol_attributes_get(ph, vinfo);
if (ret)
return ret;
if (vinfo->num_domains) {
vinfo->domains = devm_kcalloc(handle->dev, vinfo->num_domains,
vinfo->domains = devm_kcalloc(ph->dev, vinfo->num_domains,
sizeof(*vinfo->domains),
GFP_KERNEL);
if (!vinfo->domains)
return -ENOMEM;
ret = scmi_voltage_descriptors_get(handle, vinfo);
ret = scmi_voltage_descriptors_get(ph, vinfo);
if (ret)
return ret;
} else {
dev_warn(handle->dev, "No Voltage domains found.\n");
dev_warn(ph->dev, "No Voltage domains found.\n");
}
handle->voltage_ops = &voltage_ops;
handle->voltage_priv = vinfo;
return 0;
return ph->set_priv(ph, vinfo);
}
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(SCMI_PROTOCOL_VOLTAGE, voltage)
static const struct scmi_protocol scmi_voltage = {
.id = SCMI_PROTOCOL_VOLTAGE,
.owner = THIS_MODULE,
.instance_init = &scmi_voltage_protocol_init,
.ops = &voltage_proto_ops,
};
DEFINE_SCMI_PROTOCOL_REGISTER_UNREGISTER(voltage, scmi_voltage)


@ -29,6 +29,10 @@
* The framework needs some proper extension to support multi power
* domain cases.
*
* Update: Genpd assigns the ->of_node for the virtual device before it
* invokes ->attach_dev() callback, hence parsing for device resources via
* DT should work fine.
*
* 2. It also breaks most of current drivers as the driver probe sequence
* behavior changed if removing ->power_on|off() callback and use
* ->start() and ->stop() instead. genpd_dev_pm_attach will only power
@ -39,8 +43,11 @@
* domain enabled will trigger a HW access error. That means we need fix
* most drivers probe sequence with proper runtime pm.
*
* In summary, we need fix above two issue before being able to switch to
* the "single global power domain" way.
* Update: Runtime PM support isn't necessary. Instead, this can easily be
* fixed in drivers by adding a call to dev_pm_domain_start() during probe.
*
* In summary, the second part needs to be addressed via minor updates to the
* relevant drivers, before the "single global power domain" model can be used.
*
*/
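The dev_pm_domain_start() path suggested in the comment above is a small change on the consumer side. A minimal sketch, assuming a hypothetical foo platform driver attached to the single global power domain (not code from this series):

#include <linux/platform_device.h>
#include <linux/pm_domain.h>

static int foo_probe(struct platform_device *pdev)
{
	/* Power up the attached PM domain before touching device registers. */
	dev_pm_domain_start(&pdev->dev);

	/* ... normal register setup continues here ... */
	return 0;
}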
@ -86,6 +93,8 @@ struct imx_sc_pd_soc {
u8 num_ranges;
};
static int imx_con_rsrc;
static const struct imx_sc_pd_range imx8qxp_scu_pd_ranges[] = {
/* LSIO SS */
{ "pwm", IMX_SC_R_PWM_0, 8, true, 0 },
@ -134,7 +143,7 @@ static const struct imx_sc_pd_range imx8qxp_scu_pd_ranges[] = {
{ "can", IMX_SC_R_CAN_0, 3, true, 0 },
{ "ftm", IMX_SC_R_FTM_0, 2, true, 0 },
{ "lpi2c", IMX_SC_R_I2C_0, 4, true, 0 },
{ "adc", IMX_SC_R_ADC_0, 1, true, 0 },
{ "adc", IMX_SC_R_ADC_0, 2, true, 0 },
{ "lcd", IMX_SC_R_LCD_0, 1, true, 0 },
{ "lcd0-pwm", IMX_SC_R_LCD_0_PWM_0, 1, true, 0 },
{ "lpuart", IMX_SC_R_UART_0, 4, true, 0 },
@ -207,6 +216,23 @@ to_imx_sc_pd(struct generic_pm_domain *genpd)
return container_of(genpd, struct imx_sc_pm_domain, pd);
}
static void imx_sc_pd_get_console_rsrc(void)
{
struct of_phandle_args specs;
int ret;
if (!of_stdout)
return;
ret = of_parse_phandle_with_args(of_stdout, "power-domains",
"#power-domain-cells",
0, &specs);
if (ret)
return;
imx_con_rsrc = specs.args[0];
}
static int imx_sc_pd_power(struct generic_pm_domain *domain, bool power_on)
{
struct imx_sc_msg_req_set_resource_power_mode msg;
@ -267,6 +293,7 @@ imx_scu_add_pm_domain(struct device *dev, int idx,
const struct imx_sc_pd_range *pd_ranges)
{
struct imx_sc_pm_domain *sc_pd;
bool is_off = true;
int ret;
if (!imx_sc_rm_is_resource_owned(pm_ipc_handle, pd_ranges->rsrc + idx))
@ -288,6 +315,10 @@ imx_scu_add_pm_domain(struct device *dev, int idx,
"%s", pd_ranges->name);
sc_pd->pd.name = sc_pd->name;
if (imx_con_rsrc == sc_pd->rsrc) {
sc_pd->pd.flags = GENPD_FLAG_RPM_ALWAYS_ON;
is_off = false;
}
if (sc_pd->rsrc >= IMX_SC_R_LAST) {
dev_warn(dev, "invalid pd %s rsrc id %d found",
@ -297,7 +328,7 @@ imx_scu_add_pm_domain(struct device *dev, int idx,
return NULL;
}
ret = pm_genpd_init(&sc_pd->pd, NULL, true);
ret = pm_genpd_init(&sc_pd->pd, NULL, is_off);
if (ret) {
dev_warn(dev, "failed to init pd %s rsrc id %d",
sc_pd->name, sc_pd->rsrc);
@ -363,6 +394,8 @@ static int imx_sc_pd_probe(struct platform_device *pdev)
if (!pd_soc)
return -ENODEV;
imx_sc_pd_get_console_rsrc();
return imx_scu_init_pm_domains(&pdev->dev, pd_soc);
}


@ -118,7 +118,7 @@ static void __scm_legacy_do(const struct arm_smccc_args *smc,
}
/**
* qcom_scm_call() - Sends a command to the SCM and waits for the command to
* scm_legacy_call() - Sends a command to the SCM and waits for the command to
* finish processing.
*
* A note on cache maintenance:
@ -209,7 +209,7 @@ out:
(n & 0xf))
/**
* qcom_scm_call_atomic() - Send an atomic SCM command with up to 5 arguments
* scm_legacy_call_atomic() - Send an atomic SCM command with up to 5 arguments
* and 3 return values
* @desc: SCM call descriptor containing arguments
* @res: SCM call return values


@ -77,8 +77,10 @@ static void __scm_smc_do(const struct arm_smccc_args *smc,
} while (res->a0 == QCOM_SCM_V2_EBUSY);
}
int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
struct qcom_scm_res *res, bool atomic)
int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
enum qcom_scm_convention qcom_convention,
struct qcom_scm_res *res, bool atomic)
{
int arglen = desc->arginfo & 0xf;
int i;
@ -87,9 +89,8 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
size_t alloc_len;
gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL;
u32 qcom_smccc_convention =
(qcom_scm_convention == SMC_CONVENTION_ARM_32) ?
ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
u32 qcom_smccc_convention = (qcom_convention == SMC_CONVENTION_ARM_32) ?
ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
struct arm_smccc_res smc_res;
struct arm_smccc_args smc = {0};
@ -148,4 +149,5 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
}
return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0;
}


@ -113,14 +113,10 @@ static void qcom_scm_clk_disable(void)
clk_disable_unprepare(__scm->bus_clk);
}
static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
u32 cmd_id);
enum qcom_scm_convention qcom_scm_convention = SMC_CONVENTION_UNKNOWN;
static DEFINE_SPINLOCK(scm_query_lock);
enum qcom_scm_convention qcom_scm_convention;
static bool has_queried __read_mostly;
static DEFINE_SPINLOCK(query_lock);
static void __query_convention(void)
static enum qcom_scm_convention __get_convention(void)
{
unsigned long flags;
struct qcom_scm_desc desc = {
@ -133,36 +129,50 @@ static void __query_convention(void)
.owner = ARM_SMCCC_OWNER_SIP,
};
struct qcom_scm_res res;
enum qcom_scm_convention probed_convention;
int ret;
bool forced = false;
spin_lock_irqsave(&query_lock, flags);
if (has_queried)
goto out;
if (likely(qcom_scm_convention != SMC_CONVENTION_UNKNOWN))
return qcom_scm_convention;
qcom_scm_convention = SMC_CONVENTION_ARM_64;
// Device isn't required as there is only one argument - no device
// needed to dma_map_single to secure world
ret = scm_smc_call(NULL, &desc, &res, true);
/*
* Device isn't required as there is only one argument - no device
* needed to dma_map_single to secure world
*/
probed_convention = SMC_CONVENTION_ARM_64;
ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
if (!ret && res.result[0] == 1)
goto out;
goto found;
qcom_scm_convention = SMC_CONVENTION_ARM_32;
ret = scm_smc_call(NULL, &desc, &res, true);
/*
* Some SC7180 firmwares didn't implement the
* QCOM_SCM_INFO_IS_CALL_AVAIL call, so we fallback to forcing ARM_64
* calling conventions on these firmwares. Luckily we don't make any
* early calls into the firmware on these SoCs so the device pointer
* will be valid here to check if the compatible matches.
*/
if (of_device_is_compatible(__scm ? __scm->dev->of_node : NULL, "qcom,scm-sc7180")) {
forced = true;
goto found;
}
probed_convention = SMC_CONVENTION_ARM_32;
ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
if (!ret && res.result[0] == 1)
goto out;
goto found;
qcom_scm_convention = SMC_CONVENTION_LEGACY;
out:
has_queried = true;
spin_unlock_irqrestore(&query_lock, flags);
pr_info("qcom_scm: convention: %s\n",
qcom_scm_convention_names[qcom_scm_convention]);
}
probed_convention = SMC_CONVENTION_LEGACY;
found:
spin_lock_irqsave(&scm_query_lock, flags);
if (probed_convention != qcom_scm_convention) {
qcom_scm_convention = probed_convention;
pr_info("qcom_scm: convention: %s%s\n",
qcom_scm_convention_names[qcom_scm_convention],
forced ? " (forced)" : "");
}
spin_unlock_irqrestore(&scm_query_lock, flags);
static inline enum qcom_scm_convention __get_convention(void)
{
if (unlikely(!has_queried))
__query_convention();
return qcom_scm_convention;
}
@ -219,8 +229,8 @@ static int qcom_scm_call_atomic(struct device *dev,
}
}
static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
u32 cmd_id)
static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
u32 cmd_id)
{
int ret;
struct qcom_scm_desc desc = {
@ -247,7 +257,7 @@ static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
ret = qcom_scm_call(dev, &desc, &res);
return ret ? : res.result[0];
return ret ? false : !!res.result[0];
}
/**
@ -585,9 +595,8 @@ bool qcom_scm_pas_supported(u32 peripheral)
};
struct qcom_scm_res res;
ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PIL_PAS_IS_SUPPORTED);
if (ret <= 0)
if (!__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PIL_PAS_IS_SUPPORTED))
return false;
ret = qcom_scm_call(__scm->dev, &desc, &res);
@ -1060,17 +1069,18 @@ EXPORT_SYMBOL(qcom_scm_ice_set_key);
*/
bool qcom_scm_hdcp_available(void)
{
bool avail;
int ret = qcom_scm_clk_enable();
if (ret)
return ret;
ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
avail = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
QCOM_SCM_HDCP_INVOKE);
qcom_scm_clk_disable();
return ret > 0;
return avail;
}
EXPORT_SYMBOL(qcom_scm_hdcp_available);
@ -1242,7 +1252,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
__scm = scm;
__scm->dev = &pdev->dev;
__query_convention();
__get_convention();
/*
* If requested enable "download mode", from this point on warmboot
@ -1291,6 +1301,7 @@ static struct platform_driver qcom_scm_driver = {
.driver = {
.name = "qcom_scm",
.of_match_table = qcom_scm_dt_match,
.suppress_bind_attrs = true,
},
.probe = qcom_scm_probe,
.shutdown = qcom_scm_shutdown,


@ -61,8 +61,11 @@ struct qcom_scm_res {
};
#define SCM_SMC_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF))
extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
struct qcom_scm_res *res, bool atomic);
extern int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
enum qcom_scm_convention qcom_convention,
struct qcom_scm_res *res, bool atomic);
#define scm_smc_call(dev, desc, res, atomic) \
__scm_smc_call((dev), (desc), qcom_scm_convention, (res), (atomic))
#define SCM_LEGACY_FNID(s, c) (((s) << 10) | ((c) & 0x3ff))
extern int scm_legacy_call_atomic(struct device *dev,


@ -7,6 +7,7 @@
*/
#include <linux/dma-mapping.h>
#include <linux/kref.h>
#include <linux/mailbox_client.h>
#include <linux/module.h>
#include <linux/of_platform.h>
@ -27,6 +28,8 @@ struct rpi_firmware {
struct mbox_chan *chan; /* The property channel. */
struct completion c;
u32 enabled;
struct kref consumers;
};
static DEFINE_MUTEX(transaction_lock);
@ -225,12 +228,38 @@ static void rpi_register_clk_driver(struct device *dev)
-1, NULL, 0);
}
static void rpi_firmware_delete(struct kref *kref)
{
struct rpi_firmware *fw = container_of(kref, struct rpi_firmware,
consumers);
mbox_free_channel(fw->chan);
kfree(fw);
}
void rpi_firmware_put(struct rpi_firmware *fw)
{
kref_put(&fw->consumers, rpi_firmware_delete);
}
EXPORT_SYMBOL_GPL(rpi_firmware_put);
static void devm_rpi_firmware_put(void *data)
{
struct rpi_firmware *fw = data;
rpi_firmware_put(fw);
}
static int rpi_firmware_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rpi_firmware *fw;
fw = devm_kzalloc(dev, sizeof(*fw), GFP_KERNEL);
/*
* Memory will be freed by rpi_firmware_delete() once all users have
* released their firmware handles. Don't use devm_kzalloc() here.
*/
fw = kzalloc(sizeof(*fw), GFP_KERNEL);
if (!fw)
return -ENOMEM;
@ -247,6 +276,7 @@ static int rpi_firmware_probe(struct platform_device *pdev)
}
init_completion(&fw->c);
kref_init(&fw->consumers);
platform_set_drvdata(pdev, fw);
@ -275,7 +305,8 @@ static int rpi_firmware_remove(struct platform_device *pdev)
rpi_hwmon = NULL;
platform_device_unregister(rpi_clk);
rpi_clk = NULL;
mbox_free_channel(fw->chan);
rpi_firmware_put(fw);
return 0;
}
@ -284,19 +315,51 @@ static int rpi_firmware_remove(struct platform_device *pdev)
* rpi_firmware_get - Get pointer to rpi_firmware structure.
* @firmware_node: Pointer to the firmware Device Tree node.
*
* The reference to rpi_firmware has to be released with rpi_firmware_put().
*
* Returns NULL if the firmware device is not ready.
*/
struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
{
struct platform_device *pdev = of_find_device_by_node(firmware_node);
struct rpi_firmware *fw;
if (!pdev)
return NULL;
return platform_get_drvdata(pdev);
fw = platform_get_drvdata(pdev);
if (!fw)
return NULL;
if (!kref_get_unless_zero(&fw->consumers))
return NULL;
return fw;
}
EXPORT_SYMBOL_GPL(rpi_firmware_get);
/**
* devm_rpi_firmware_get - Get pointer to rpi_firmware structure.
* @firmware_node: Pointer to the firmware Device Tree node.
*
* Returns NULL if the firmware device is not ready.
*/
struct rpi_firmware *devm_rpi_firmware_get(struct device *dev,
struct device_node *firmware_node)
{
struct rpi_firmware *fw;
fw = rpi_firmware_get(firmware_node);
if (!fw)
return NULL;
if (devm_add_action_or_reset(dev, devm_rpi_firmware_put, fw))
return NULL;
return fw;
}
EXPORT_SYMBOL_GPL(devm_rpi_firmware_get);
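For consumers the contract is now: rpi_firmware_get() takes a reference that must be balanced with rpi_firmware_put(), while devm_rpi_firmware_get() drops that reference automatically on unbind. An illustrative consumer probe, assuming the consumer node sits under the firmware node (the bar_* names are hypothetical):

static int bar_probe(struct platform_device *pdev)
{
	struct device_node *fw_node;
	struct rpi_firmware *fw;

	fw_node = of_get_parent(pdev->dev.of_node);
	if (!fw_node)
		return -ENOENT;

	/* Reference is released automatically when the driver unbinds. */
	fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
	of_node_put(fw_node);
	if (!fw)
		return -EPROBE_DEFER;

	/* ... rpi_firmware_property() calls as before ... */
	return 0;
}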
static const struct of_device_id rpi_firmware_of_match[] = {
{ .compatible = "raspberrypi,bcm2835-firmware", },
{},


@ -2,7 +2,7 @@
/*
* Xilinx Zynq MPSoC Firmware layer
*
* Copyright (C) 2014-2020 Xilinx, Inc.
* Copyright (C) 2014-2021 Xilinx, Inc.
*
* Michal Simek <michal.simek@xilinx.com>
* Davorin Mista <davorin.mista@aggios.com>
@ -1280,12 +1280,13 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
static int zynqmp_firmware_remove(struct platform_device *pdev)
{
struct pm_api_feature_data *feature_data;
struct hlist_node *tmp;
int i;
mfd_remove_devices(&pdev->dev);
zynqmp_pm_api_debugfs_exit();
hash_for_each(pm_api_features_map, i, feature_data, hentry) {
hash_for_each_safe(pm_api_features_map, i, tmp, feature_data, hentry) {
hash_del(&feature_data->hentry);
kfree(feature_data);
}
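The switch to hash_for_each_safe() matters because the loop body deletes and frees the entry it is visiting; the plain hash_for_each() would fetch the next node from memory that has already been freed.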


@ -14,13 +14,13 @@ if FPGA
config FPGA_MGR_SOCFPGA
tristate "Altera SOCFPGA FPGA Manager"
depends on ARCH_SOCFPGA || COMPILE_TEST
depends on ARCH_INTEL_SOCFPGA || COMPILE_TEST
help
FPGA manager driver support for Altera SOCFPGA.
config FPGA_MGR_SOCFPGA_A10
tristate "Altera SoCFPGA Arria10"
depends on ARCH_SOCFPGA || COMPILE_TEST
depends on ARCH_INTEL_SOCFPGA || COMPILE_TEST
select REGMAP_MMIO
help
FPGA manager driver support for Altera Arria10 SoCFPGA.
@ -60,7 +60,7 @@ config FPGA_MGR_ZYNQ_FPGA
config FPGA_MGR_STRATIX10_SOC
tristate "Intel Stratix10 SoC FPGA Manager"
depends on (ARCH_STRATIX10 && INTEL_STRATIX10_SERVICE)
depends on (ARCH_INTEL_SOCFPGA && INTEL_STRATIX10_SERVICE)
help
FPGA manager driver support for the Intel Stratix10 SoC.
@ -99,7 +99,7 @@ config FPGA_BRIDGE
config SOCFPGA_FPGA_BRIDGE
tristate "Altera SoCFPGA FPGA Bridges"
depends on ARCH_SOCFPGA && FPGA_BRIDGE
depends on ARCH_INTEL_SOCFPGA && FPGA_BRIDGE
help
Say Y to enable drivers for FPGA bridges for Altera SOCFPGA
devices.


@ -208,7 +208,7 @@ static int rpi_exp_gpio_probe(struct platform_device *pdev)
return -ENOENT;
}
fw = rpi_firmware_get(fw_node);
fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
of_node_put(fw_node);
if (!fw)
return -EPROBE_DEFER;


@ -2,7 +2,7 @@
/*
* System Control and Management Interface(SCMI) based hwmon sensor driver
*
* Copyright (C) 2018 ARM Ltd.
* Copyright (C) 2018-2021 ARM Ltd.
* Sudeep Holla <sudeep.holla@arm.com>
*/
@ -13,8 +13,10 @@
#include <linux/sysfs.h>
#include <linux/thermal.h>
static const struct scmi_sensor_proto_ops *sensor_ops;
struct scmi_sensors {
const struct scmi_handle *handle;
const struct scmi_protocol_handle *ph;
const struct scmi_sensor_info **info[hwmon_max];
};
@ -69,10 +71,9 @@ static int scmi_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
u64 value;
const struct scmi_sensor_info *sensor;
struct scmi_sensors *scmi_sensors = dev_get_drvdata(dev);
const struct scmi_handle *h = scmi_sensors->handle;
sensor = *(scmi_sensors->info[type] + channel);
ret = h->sensor_ops->reading_get(h, sensor->id, &value);
ret = sensor_ops->reading_get(scmi_sensors->ph, sensor->id, &value);
if (ret)
return ret;
@ -169,11 +170,16 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
struct hwmon_channel_info *scmi_hwmon_chan;
const struct hwmon_channel_info **ptr_scmi_ci;
const struct scmi_handle *handle = sdev->handle;
struct scmi_protocol_handle *ph;
if (!handle || !handle->sensor_ops)
if (!handle)
return -ENODEV;
nr_sensors = handle->sensor_ops->count_get(handle);
sensor_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_SENSOR, &ph);
if (IS_ERR(sensor_ops))
return PTR_ERR(sensor_ops);
nr_sensors = sensor_ops->count_get(ph);
if (!nr_sensors)
return -EIO;
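This is the consumer-facing half of the SCMI rework: instead of dereferencing handle->sensor_ops directly, a driver now asks the handle for the protocol's ops plus a protocol handle. A condensed sketch of the pattern, with hypothetical foo_* names:

static int foo_scmi_probe(struct scmi_device *sdev)
{
	const struct scmi_sensor_proto_ops *ops;
	struct scmi_protocol_handle *ph;

	if (!sdev->handle)
		return -ENODEV;

	/* Acquire the protocol for the lifetime of this scmi device. */
	ops = sdev->handle->devm_protocol_get(sdev, SCMI_PROTOCOL_SENSOR, &ph);
	if (IS_ERR(ops))
		return PTR_ERR(ops);

	return ops->count_get(ph) ? 0 : -ENODEV;
}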
@ -181,10 +187,10 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
if (!scmi_sensors)
return -ENOMEM;
scmi_sensors->handle = handle;
scmi_sensors->ph = ph;
for (i = 0; i < nr_sensors; i++) {
sensor = handle->sensor_ops->info_get(handle, i);
sensor = sensor_ops->info_get(ph, i);
if (!sensor)
return -EINVAL;
@ -236,7 +242,7 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
}
for (i = nr_sensors - 1; i >= 0 ; i--) {
sensor = handle->sensor_ops->info_get(handle, i);
sensor = sensor_ops->info_get(ph, i);
if (!sensor)
continue;


@ -369,7 +369,7 @@ comment "I2C system bus drivers (mostly embedded / system-on-chip)"
config I2C_ALTERA
tristate "Altera Soft IP I2C"
depends on ARCH_SOCFPGA || NIOS2 || COMPILE_TEST
depends on ARCH_INTEL_SOCFPGA || NIOS2 || COMPILE_TEST
depends on OF
help
If you say yes to this option, support will be included for the


@ -22,7 +22,8 @@
#define SCMI_IIO_NUM_OF_AXIS 3
struct scmi_iio_priv {
struct scmi_handle *handle;
const struct scmi_sensor_proto_ops *sensor_ops;
struct scmi_protocol_handle *ph;
const struct scmi_sensor_info *sensor_info;
struct iio_dev *indio_dev;
/* adding one additional channel for timestamp */
@ -82,7 +83,6 @@ static int scmi_iio_sensor_update_cb(struct notifier_block *nb,
static int scmi_iio_buffer_preenable(struct iio_dev *iio_dev)
{
struct scmi_iio_priv *sensor = iio_priv(iio_dev);
u32 sensor_id = sensor->sensor_info->id;
u32 sensor_config = 0;
int err;
@ -92,27 +92,12 @@ static int scmi_iio_buffer_preenable(struct iio_dev *iio_dev)
sensor_config |= FIELD_PREP(SCMI_SENS_CFG_SENSOR_ENABLED_MASK,
SCMI_SENS_CFG_SENSOR_ENABLE);
err = sensor->handle->notify_ops->register_event_notifier(sensor->handle,
SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
&sensor_id, &sensor->sensor_update_nb);
if (err) {
dev_err(&iio_dev->dev,
"Error in registering sensor update notifier for sensor %s err %d",
sensor->sensor_info->name, err);
return err;
}
err = sensor->handle->sensor_ops->config_set(sensor->handle,
sensor->sensor_info->id, sensor_config);
if (err) {
sensor->handle->notify_ops->unregister_event_notifier(sensor->handle,
SCMI_PROTOCOL_SENSOR,
SCMI_EVENT_SENSOR_UPDATE, &sensor_id,
&sensor->sensor_update_nb);
err = sensor->sensor_ops->config_set(sensor->ph,
sensor->sensor_info->id,
sensor_config);
if (err)
dev_err(&iio_dev->dev, "Error in enabling sensor %s err %d",
sensor->sensor_info->name, err);
}
return err;
}
@ -120,25 +105,14 @@ static int scmi_iio_buffer_preenable(struct iio_dev *iio_dev)
static int scmi_iio_buffer_postdisable(struct iio_dev *iio_dev)
{
struct scmi_iio_priv *sensor = iio_priv(iio_dev);
u32 sensor_id = sensor->sensor_info->id;
u32 sensor_config = 0;
int err;
sensor_config |= FIELD_PREP(SCMI_SENS_CFG_SENSOR_ENABLED_MASK,
SCMI_SENS_CFG_SENSOR_DISABLE);
err = sensor->handle->notify_ops->unregister_event_notifier(sensor->handle,
SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
&sensor_id, &sensor->sensor_update_nb);
if (err) {
dev_err(&iio_dev->dev,
"Error in unregistering sensor update notifier for sensor %s err %d",
sensor->sensor_info->name, err);
return err;
}
err = sensor->handle->sensor_ops->config_set(sensor->handle, sensor_id,
sensor_config);
err = sensor->sensor_ops->config_set(sensor->ph,
sensor->sensor_info->id,
sensor_config);
if (err) {
dev_err(&iio_dev->dev,
"Error in disabling sensor %s with err %d",
@ -161,8 +135,9 @@ static int scmi_iio_set_odr_val(struct iio_dev *iio_dev, int val, int val2)
u32 sensor_config;
char buf[32];
int err = sensor->handle->sensor_ops->config_get(sensor->handle,
sensor->sensor_info->id, &sensor_config);
int err = sensor->sensor_ops->config_get(sensor->ph,
sensor->sensor_info->id,
&sensor_config);
if (err) {
dev_err(&iio_dev->dev,
"Error in getting sensor config for sensor %s err %d",
@ -208,8 +183,9 @@ static int scmi_iio_set_odr_val(struct iio_dev *iio_dev, int val, int val2)
sensor_config |=
FIELD_PREP(SCMI_SENS_CFG_ROUND_MASK, SCMI_SENS_CFG_ROUND_AUTO);
err = sensor->handle->sensor_ops->config_set(sensor->handle,
sensor->sensor_info->id, sensor_config);
err = sensor->sensor_ops->config_set(sensor->ph,
sensor->sensor_info->id,
sensor_config);
if (err)
dev_err(&iio_dev->dev,
"Error in setting sensor update interval for sensor %s value %u err %d",
@ -274,8 +250,9 @@ static int scmi_iio_get_odr_val(struct iio_dev *iio_dev, int *val, int *val2)
u32 sensor_config;
int mult;
int err = sensor->handle->sensor_ops->config_get(sensor->handle,
sensor->sensor_info->id, &sensor_config);
int err = sensor->sensor_ops->config_get(sensor->ph,
sensor->sensor_info->id,
&sensor_config);
if (err) {
dev_err(&iio_dev->dev,
"Error in getting sensor config for sensor %s err %d",
@ -528,15 +505,19 @@ static int scmi_iio_set_sampling_freq_avail(struct iio_dev *iio_dev)
return 0;
}
static struct iio_dev *scmi_alloc_iiodev(struct device *dev,
struct scmi_handle *handle,
const struct scmi_sensor_info *sensor_info)
static struct iio_dev *
scmi_alloc_iiodev(struct scmi_device *sdev,
const struct scmi_sensor_proto_ops *ops,
struct scmi_protocol_handle *ph,
const struct scmi_sensor_info *sensor_info)
{
struct iio_chan_spec *iio_channels;
struct scmi_iio_priv *sensor;
enum iio_modifier modifier;
enum iio_chan_type type;
struct iio_dev *iiodev;
struct device *dev = &sdev->dev;
const struct scmi_handle *handle = sdev->handle;
int i, ret;
iiodev = devm_iio_device_alloc(dev, sizeof(*sensor));
@ -546,7 +527,8 @@ static struct iio_dev *scmi_alloc_iiodev(struct device *dev,
iiodev->modes = INDIO_DIRECT_MODE;
iiodev->dev.parent = dev;
sensor = iio_priv(iiodev);
sensor->handle = handle;
sensor->sensor_ops = ops;
sensor->ph = ph;
sensor->sensor_info = sensor_info;
sensor->sensor_update_nb.notifier_call = scmi_iio_sensor_update_cb;
sensor->indio_dev = iiodev;
@ -581,6 +563,17 @@ static struct iio_dev *scmi_alloc_iiodev(struct device *dev,
sensor_info->axis[i].id);
}
ret = handle->notify_ops->devm_event_notifier_register(sdev,
SCMI_PROTOCOL_SENSOR, SCMI_EVENT_SENSOR_UPDATE,
&sensor->sensor_info->id,
&sensor->sensor_update_nb);
if (ret) {
dev_err(&iiodev->dev,
"Error in registering sensor update notifier for sensor %s err %d",
sensor->sensor_info->name, ret);
return ERR_PTR(ret);
}
scmi_iio_set_timestamp_channel(&iio_channels[i], i);
iiodev->channels = iio_channels;
return iiodev;
@ -590,24 +583,30 @@ static int scmi_iio_dev_probe(struct scmi_device *sdev)
{
const struct scmi_sensor_info *sensor_info;
struct scmi_handle *handle = sdev->handle;
const struct scmi_sensor_proto_ops *sensor_ops;
struct scmi_protocol_handle *ph;
struct device *dev = &sdev->dev;
struct iio_dev *scmi_iio_dev;
u16 nr_sensors;
int err = -ENODEV, i;
if (!handle || !handle->sensor_ops) {
if (!handle)
return -ENODEV;
sensor_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_SENSOR, &ph);
if (IS_ERR(sensor_ops)) {
dev_err(dev, "SCMI device has no sensor interface\n");
return -EINVAL;
return PTR_ERR(sensor_ops);
}
nr_sensors = handle->sensor_ops->count_get(handle);
nr_sensors = sensor_ops->count_get(ph);
if (!nr_sensors) {
dev_dbg(dev, "0 sensors found via SCMI bus\n");
return -ENODEV;
}
for (i = 0; i < nr_sensors; i++) {
sensor_info = handle->sensor_ops->info_get(handle, i);
sensor_info = sensor_ops->info_get(ph, i);
if (!sensor_info) {
dev_err(dev, "SCMI sensor %d has missing info\n", i);
return -EINVAL;
@ -622,7 +621,8 @@ static int scmi_iio_dev_probe(struct scmi_device *sdev)
sensor_info->axis[0].type != RADIANS_SEC)
continue;
scmi_iio_dev = scmi_alloc_iiodev(dev, handle, sensor_info);
scmi_iio_dev = scmi_alloc_iiodev(sdev, sensor_ops, ph,
sensor_info);
if (IS_ERR(scmi_iio_dev)) {
dev_err(dev,
"failed to allocate IIO device for sensor %s: %ld\n",


@ -160,7 +160,7 @@ static int rpi_ts_probe(struct platform_device *pdev)
touchbuf = (u32)ts->fw_regs_phys;
error = rpi_firmware_property(fw, RPI_FIRMWARE_FRAMEBUFFER_SET_TOUCHBUF,
&touchbuf, sizeof(touchbuf));
rpi_firmware_put(fw);
if (error || touchbuf != 0) {
dev_warn(dev, "Failed to set touchbuf, %d\n", error);
return error;


@ -192,10 +192,8 @@ static int ccf_probe(struct platform_device *pdev)
}
ccf->regs = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(ccf->regs)) {
dev_err(&pdev->dev, "%s: can't map mem resource\n", __func__);
if (IS_ERR(ccf->regs))
return PTR_ERR(ccf->regs);
}
ccf->dev = &pdev->dev;
ccf->info = match->data;


@ -319,6 +319,7 @@ static int mtk_smi_larb_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *smi_node;
struct platform_device *smi_pdev;
struct device_link *link;
larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL);
if (!larb)
@ -358,6 +359,12 @@ static int mtk_smi_larb_probe(struct platform_device *pdev)
if (!platform_get_drvdata(smi_pdev))
return -EPROBE_DEFER;
larb->smi_common_dev = &smi_pdev->dev;
link = device_link_add(dev, larb->smi_common_dev,
DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
if (!link) {
dev_err(dev, "Unable to link smi-common dev\n");
return -ENODEV;
}
} else {
dev_err(dev, "Failed to get the smi_common device\n");
return -EINVAL;
@ -370,6 +377,9 @@ static int mtk_smi_larb_probe(struct platform_device *pdev)
static int mtk_smi_larb_remove(struct platform_device *pdev)
{
struct mtk_smi_larb *larb = platform_get_drvdata(pdev);
device_link_remove(&pdev->dev, larb->smi_common_dev);
pm_runtime_disable(&pdev->dev);
component_del(&pdev->dev, &mtk_smi_larb_component_ops);
return 0;
@ -381,17 +391,9 @@ static int __maybe_unused mtk_smi_larb_resume(struct device *dev)
const struct mtk_smi_larb_gen *larb_gen = larb->larb_gen;
int ret;
/* Power on smi-common. */
ret = pm_runtime_resume_and_get(larb->smi_common_dev);
if (ret < 0) {
dev_err(dev, "Failed to pm get for smi-common(%d).\n", ret);
return ret;
}
ret = mtk_smi_clk_enable(&larb->smi);
if (ret < 0) {
dev_err(dev, "Failed to enable clock(%d).\n", ret);
pm_runtime_put_sync(larb->smi_common_dev);
return ret;
}
@ -406,7 +408,6 @@ static int __maybe_unused mtk_smi_larb_suspend(struct device *dev)
struct mtk_smi_larb *larb = dev_get_drvdata(dev);
mtk_smi_clk_disable(&larb->smi);
pm_runtime_put_sync(larb->smi_common_dev);
return 0;
}
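With the stateless PM-runtime device link added at probe time, runtime-resuming a larb now implicitly resumes its smi-common supplier, which is why the explicit pm_runtime_resume_and_get()/pm_runtime_put_sync() calls can be dropped from the resume and suspend paths above.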


@ -1009,8 +1009,8 @@ EXPORT_SYMBOL(gpmc_cs_request);
void gpmc_cs_free(int cs)
{
struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
struct resource *res = &gpmc->mem;
struct gpmc_cs_data *gpmc;
struct resource *res;
spin_lock(&gpmc_mem_lock);
if (cs >= gpmc_cs_num || cs < 0 || !gpmc_cs_reserved(cs)) {
@ -1018,6 +1018,9 @@ void gpmc_cs_free(int cs)
spin_unlock(&gpmc_mem_lock);
return;
}
gpmc = &gpmc_cs[cs];
res = &gpmc->mem;
gpmc_cs_disable_mem(cs);
if (res->flags)
release_resource(res);


@ -63,7 +63,7 @@
/* ECC memory config register specific constants */
#define PL353_SMC_ECC_MEMCFG_MODE_MASK 0xC
#define PL353_SMC_ECC_MEMCFG_MODE_SHIFT 2
#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0xC
#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK 0x3
#define PL353_SMC_DC_UPT_NAND_REGS ((4 << 23) | /* CS: NAND chip */ \
(2 << 21)) /* UpdateRegs operation */


@ -192,10 +192,10 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
rpc->size = resource_size(res);
rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(rpc->dirmap))
rpc->dirmap = NULL;
rpc->size = resource_size(res);
rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);


@ -1298,7 +1298,9 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
dmc->curr_volt = target_volt;
clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
ret = clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
if (ret)
return ret;
clk_prepare_enable(dmc->fout_bpll);
clk_prepare_enable(dmc->mout_bpll);


@ -827,6 +827,15 @@ static int tegra_mc_probe(struct platform_device *pdev)
return err;
}
mc->debugfs.root = debugfs_create_dir("mc", NULL);
if (mc->soc->init) {
err = mc->soc->init(mc);
if (err < 0)
dev_err(&pdev->dev, "failed to initialize SoC driver: %d\n",
err);
}
err = tegra_mc_reset_setup(mc);
if (err < 0)
dev_err(&pdev->dev, "failed to register reset controller: %d\n",

View File

@ -92,12 +92,12 @@ icc_provider_to_tegra_mc(struct icc_provider *provider)
return container_of(provider, struct tegra_mc, provider);
}
static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset)
static inline u32 mc_readl(const struct tegra_mc *mc, unsigned long offset)
{
return readl_relaxed(mc->regs + offset);
}
static inline void mc_writel(struct tegra_mc *mc, u32 value,
static inline void mc_writel(const struct tegra_mc *mc, u32 value,
unsigned long offset)
{
writel_relaxed(value, mc->regs + offset);


@ -905,7 +905,7 @@ static int emc_init(struct tegra_emc *emc)
else
emc->dram_bus_width = 32;
dev_info(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width);
dev_info_once(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width);
emc->dram_type &= EMC_FBIO_CFG5_DRAM_TYPE_MASK;
emc->dram_type >>= EMC_FBIO_CFG5_DRAM_TYPE_SHIFT;
@ -1204,7 +1204,7 @@ static int tegra_emc_debug_min_rate_set(void *data, u64 rate)
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
DEFINE_DEBUGFS_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
tegra_emc_debug_min_rate_get,
tegra_emc_debug_min_rate_set, "%llu\n");
@ -1234,7 +1234,7 @@ static int tegra_emc_debug_max_rate_set(void *data, u64 rate)
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
DEFINE_DEBUGFS_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
tegra_emc_debug_max_rate_get,
tegra_emc_debug_max_rate_set, "%llu\n");
@ -1419,8 +1419,8 @@ static int tegra_emc_opp_table_init(struct tegra_emc *emc)
goto put_hw_table;
}
dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
/* first dummy rate-set initializes voltage state */
err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));
@ -1475,9 +1475,9 @@ static int tegra_emc_probe(struct platform_device *pdev)
if (err)
return err;
} else {
dev_info(&pdev->dev,
"no memory timings for RAM code %u found in DT\n",
ram_code);
dev_info_once(&pdev->dev,
"no memory timings for RAM code %u found in DT\n",
ram_code);
}
err = emc_init(emc);


@ -411,12 +411,12 @@ static int tegra_emc_load_timings_from_dt(struct tegra_emc *emc,
sort(emc->timings, emc->num_timings, sizeof(*timing), cmp_timings,
NULL);
dev_info(emc->dev,
"got %u timings for RAM code %u (min %luMHz max %luMHz)\n",
emc->num_timings,
tegra_read_ram_code(),
emc->timings[0].rate / 1000000,
emc->timings[emc->num_timings - 1].rate / 1000000);
dev_info_once(emc->dev,
"got %u timings for RAM code %u (min %luMHz max %luMHz)\n",
emc->num_timings,
tegra_read_ram_code(),
emc->timings[0].rate / 1000000,
emc->timings[emc->num_timings - 1].rate / 1000000);
return 0;
}
@ -429,7 +429,7 @@ tegra_emc_find_node_by_ram_code(struct device *dev)
int err;
if (of_get_child_count(dev->of_node) == 0) {
dev_info(dev, "device-tree doesn't have memory timings\n");
dev_info_once(dev, "device-tree doesn't have memory timings\n");
return NULL;
}
@ -496,7 +496,7 @@ static int emc_setup_hw(struct tegra_emc *emc)
else
emc->dram_bus_width = 32;
dev_info(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width);
dev_info_once(emc->dev, "%ubit DRAM bus\n", emc->dram_bus_width);
return 0;
}
@ -931,8 +931,8 @@ static int tegra_emc_opp_table_init(struct tegra_emc *emc)
goto put_hw_table;
}
dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
/* first dummy rate-set initializes voltage state */
err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));


@ -3,6 +3,9 @@
* Copyright (C) 2012 NVIDIA CORPORATION. All rights reserved.
*/
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/of_device.h>
#include <linux/slab.h>
#include <linux/string.h>
@ -11,6 +14,79 @@
#include "mc.h"
#define MC_STAT_CONTROL 0x90
#define MC_STAT_EMC_CLOCK_LIMIT 0xa0
#define MC_STAT_EMC_CLOCKS 0xa4
#define MC_STAT_EMC_CONTROL_0 0xa8
#define MC_STAT_EMC_CONTROL_1 0xac
#define MC_STAT_EMC_COUNT_0 0xb8
#define MC_STAT_EMC_COUNT_1 0xbc
#define MC_STAT_CONTROL_CLIENT_ID GENMASK(13, 8)
#define MC_STAT_CONTROL_EVENT GENMASK(23, 16)
#define MC_STAT_CONTROL_PRI_EVENT GENMASK(25, 24)
#define MC_STAT_CONTROL_FILTER_CLIENT_ENABLE GENMASK(26, 26)
#define MC_STAT_CONTROL_FILTER_PRI GENMASK(29, 28)
#define MC_STAT_CONTROL_PRI_EVENT_HP 0
#define MC_STAT_CONTROL_PRI_EVENT_TM 1
#define MC_STAT_CONTROL_PRI_EVENT_BW 2
#define MC_STAT_CONTROL_FILTER_PRI_DISABLE 0
#define MC_STAT_CONTROL_FILTER_PRI_NO 1
#define MC_STAT_CONTROL_FILTER_PRI_YES 2
#define MC_STAT_CONTROL_EVENT_QUALIFIED 0
#define MC_STAT_CONTROL_EVENT_ANY_READ 1
#define MC_STAT_CONTROL_EVENT_ANY_WRITE 2
#define MC_STAT_CONTROL_EVENT_RD_WR_CHANGE 3
#define MC_STAT_CONTROL_EVENT_SUCCESSIVE 4
#define MC_STAT_CONTROL_EVENT_ARB_BANK_AA 5
#define MC_STAT_CONTROL_EVENT_ARB_BANK_BB 6
#define MC_STAT_CONTROL_EVENT_PAGE_MISS 7
#define MC_STAT_CONTROL_EVENT_AUTO_PRECHARGE 8
#define EMC_GATHER_RST (0 << 8)
#define EMC_GATHER_CLEAR (1 << 8)
#define EMC_GATHER_DISABLE (2 << 8)
#define EMC_GATHER_ENABLE (3 << 8)
#define MC_STAT_SAMPLE_TIME_USEC 16000
/* we store collected statistics as fixed-point values */
#define MC_FX_FRAC_SCALE 100
static DEFINE_MUTEX(tegra20_mc_stat_lock);
struct tegra20_mc_stat_gather {
unsigned int pri_filter;
unsigned int pri_event;
unsigned int result;
unsigned int client;
unsigned int event;
bool client_enb;
};
struct tegra20_mc_stat {
struct tegra20_mc_stat_gather gather0;
struct tegra20_mc_stat_gather gather1;
unsigned int sample_time_usec;
const struct tegra_mc *mc;
};
struct tegra20_mc_client_stat {
unsigned int events;
unsigned int arb_high_prio;
unsigned int arb_timeout;
unsigned int arb_bandwidth;
unsigned int rd_wr_change;
unsigned int successive;
unsigned int page_miss;
unsigned int auto_precharge;
unsigned int arb_bank_aa;
unsigned int arb_bank_bb;
};
static const struct tegra_mc_client tegra20_mc_clients[] = {
{
.id = 0x00,
@ -356,6 +432,261 @@ static const struct tegra_mc_icc_ops tegra20_mc_icc_ops = {
.set = tegra20_mc_icc_set,
};
static u32 tegra20_mc_stat_gather_control(const struct tegra20_mc_stat_gather *g)
{
u32 control;
control = FIELD_PREP(MC_STAT_CONTROL_EVENT, g->event);
control |= FIELD_PREP(MC_STAT_CONTROL_CLIENT_ID, g->client);
control |= FIELD_PREP(MC_STAT_CONTROL_PRI_EVENT, g->pri_event);
control |= FIELD_PREP(MC_STAT_CONTROL_FILTER_PRI, g->pri_filter);
control |= FIELD_PREP(MC_STAT_CONTROL_FILTER_CLIENT_ENABLE, g->client_enb);
return control;
}
static void tegra20_mc_stat_gather(struct tegra20_mc_stat *stat)
{
u32 clocks, count0, count1, control_0, control_1;
const struct tegra_mc *mc = stat->mc;
control_0 = tegra20_mc_stat_gather_control(&stat->gather0);
control_1 = tegra20_mc_stat_gather_control(&stat->gather1);
/*
* Reset the statistics gathering state, select the statistics collection mode
* and set clocks counter saturation limit to maximum.
*/
mc_writel(mc, 0x00000000, MC_STAT_CONTROL);
mc_writel(mc, control_0, MC_STAT_EMC_CONTROL_0);
mc_writel(mc, control_1, MC_STAT_EMC_CONTROL_1);
mc_writel(mc, 0xffffffff, MC_STAT_EMC_CLOCK_LIMIT);
mc_writel(mc, EMC_GATHER_ENABLE, MC_STAT_CONTROL);
fsleep(stat->sample_time_usec);
mc_writel(mc, EMC_GATHER_DISABLE, MC_STAT_CONTROL);
count0 = mc_readl(mc, MC_STAT_EMC_COUNT_0);
count1 = mc_readl(mc, MC_STAT_EMC_COUNT_1);
clocks = mc_readl(mc, MC_STAT_EMC_CLOCKS);
clocks = max(clocks / 100 / MC_FX_FRAC_SCALE, 1u);
stat->gather0.result = DIV_ROUND_UP(count0, clocks);
stat->gather1.result = DIV_ROUND_UP(count1, clocks);
}
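The scaling is what turns the raw counters into fixed-point percentages: the EMC clocks counter is divided by 100 * MC_FX_FRAC_SCALE (10000), so DIV_ROUND_UP(count, clocks) is roughly count * 10000 / clocks, i.e. utilization in units of 0.01%, which tegra20_mc_printf_percents() later renders as "NN.NN%".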
static void tegra20_mc_stat_events(const struct tegra_mc *mc,
const struct tegra_mc_client *client0,
const struct tegra_mc_client *client1,
unsigned int pri_filter,
unsigned int pri_event,
unsigned int event,
unsigned int *result0,
unsigned int *result1)
{
struct tegra20_mc_stat stat = {};
stat.gather0.client = client0 ? client0->id : 0;
stat.gather0.pri_filter = pri_filter;
stat.gather0.client_enb = !!client0;
stat.gather0.pri_event = pri_event;
stat.gather0.event = event;
stat.gather1.client = client1 ? client1->id : 0;
stat.gather1.pri_filter = pri_filter;
stat.gather1.client_enb = !!client1;
stat.gather1.pri_event = pri_event;
stat.gather1.event = event;
stat.sample_time_usec = MC_STAT_SAMPLE_TIME_USEC;
stat.mc = mc;
tegra20_mc_stat_gather(&stat);
*result0 = stat.gather0.result;
*result1 = stat.gather1.result;
}
static void tegra20_mc_collect_stats(const struct tegra_mc *mc,
struct tegra20_mc_client_stat *stats)
{
const struct tegra_mc_client *client0, *client1;
unsigned int i;
/* collect memory controller utilization percent for each client */
for (i = 0; i < mc->soc->num_clients; i += 2) {
client0 = &mc->soc->clients[i];
client1 = &mc->soc->clients[i + 1];
if (i + 1 == mc->soc->num_clients)
client1 = NULL;
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_DISABLE,
MC_STAT_CONTROL_PRI_EVENT_HP,
MC_STAT_CONTROL_EVENT_QUALIFIED,
&stats[i + 0].events,
&stats[i + 1].events);
}
/* collect more info from active clients */
for (i = 0; i < mc->soc->num_clients; i++) {
unsigned int clienta, clientb = mc->soc->num_clients;
for (client0 = NULL; i < mc->soc->num_clients; i++) {
if (stats[i].events) {
client0 = &mc->soc->clients[i];
clienta = i++;
break;
}
}
for (client1 = NULL; i < mc->soc->num_clients; i++) {
if (stats[i].events) {
client1 = &mc->soc->clients[i];
clientb = i;
break;
}
}
if (!client0 && !client1)
break;
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_YES,
MC_STAT_CONTROL_PRI_EVENT_HP,
MC_STAT_CONTROL_EVENT_QUALIFIED,
&stats[clienta].arb_high_prio,
&stats[clientb].arb_high_prio);
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_YES,
MC_STAT_CONTROL_PRI_EVENT_TM,
MC_STAT_CONTROL_EVENT_QUALIFIED,
&stats[clienta].arb_timeout,
&stats[clientb].arb_timeout);
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_YES,
MC_STAT_CONTROL_PRI_EVENT_BW,
MC_STAT_CONTROL_EVENT_QUALIFIED,
&stats[clienta].arb_bandwidth,
&stats[clientb].arb_bandwidth);
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_DISABLE,
MC_STAT_CONTROL_PRI_EVENT_HP,
MC_STAT_CONTROL_EVENT_RD_WR_CHANGE,
&stats[clienta].rd_wr_change,
&stats[clientb].rd_wr_change);
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_DISABLE,
MC_STAT_CONTROL_PRI_EVENT_HP,
MC_STAT_CONTROL_EVENT_SUCCESSIVE,
&stats[clienta].successive,
&stats[clientb].successive);
tegra20_mc_stat_events(mc, client0, client1,
MC_STAT_CONTROL_FILTER_PRI_DISABLE,
MC_STAT_CONTROL_PRI_EVENT_HP,
MC_STAT_CONTROL_EVENT_PAGE_MISS,
&stats[clienta].page_miss,
&stats[clientb].page_miss);
}
}
static void tegra20_mc_printf_percents(struct seq_file *s,
const char *fmt,
unsigned int percents_fx)
{
char percents_str[8];
snprintf(percents_str, ARRAY_SIZE(percents_str), "%3u.%02u%%",
percents_fx / MC_FX_FRAC_SCALE, percents_fx % MC_FX_FRAC_SCALE);
seq_printf(s, fmt, percents_str);
}
static int tegra20_mc_stats_show(struct seq_file *s, void *unused)
{
const struct tegra_mc *mc = dev_get_drvdata(s->private);
struct tegra20_mc_client_stat *stats;
unsigned int i;
stats = kcalloc(mc->soc->num_clients + 1, sizeof(*stats), GFP_KERNEL);
if (!stats)
return -ENOMEM;
mutex_lock(&tegra20_mc_stat_lock);
tegra20_mc_collect_stats(mc, stats);
mutex_unlock(&tegra20_mc_stat_lock);
seq_puts(s, "Memory client Events Timeout High priority Bandwidth ARB RW change Successive Page miss\n");
seq_puts(s, "-----------------------------------------------------------------------------------------------------\n");
for (i = 0; i < mc->soc->num_clients; i++) {
seq_printf(s, "%-14s ", mc->soc->clients[i].name);
/* An event is generated when client performs R/W request. */
tegra20_mc_printf_percents(s, "%-9s", stats[i].events);
/*
* An event is generated based on the timeout (TM) signal
* accompanying a request for arbitration.
*/
tegra20_mc_printf_percents(s, "%-10s", stats[i].arb_timeout);
/*
* An event is generated based on the high-priority (HP) signal
* accompanying a request for arbitration.
*/
tegra20_mc_printf_percents(s, "%-16s", stats[i].arb_high_prio);
/*
* An event is generated based on the bandwidth (BW) signal
* accompanying a request for arbitration.
*/
tegra20_mc_printf_percents(s, "%-16s", stats[i].arb_bandwidth);
/*
* An event is generated when the memory controller switches
* between making a read request to making a write request.
*/
tegra20_mc_printf_percents(s, "%-12s", stats[i].rd_wr_change);
/*
* An event is generated when the chosen client wins arbitration
* and it was also the winner at the previous request. If a
* client makes N requests in a row that are honored, SUCCESSIVE
* will be counted (N-1) times. Large values for this event
* imply that if we were patient enough, all of those requests
* could have been coalesced.
*/
tegra20_mc_printf_percents(s, "%-13s", stats[i].successive);
/*
* An event is generated when the memory controller detects a
* page miss for the current request.
*/
tegra20_mc_printf_percents(s, "%-12s\n", stats[i].page_miss);
}
kfree(stats);
return 0;
}
static int tegra20_mc_init(struct tegra_mc *mc)
{
debugfs_create_devm_seqfile(mc->dev, "stats", mc->debugfs.root,
tegra20_mc_stats_show);
return 0;
}
const struct tegra_mc_soc tegra20_mc_soc = {
.clients = tegra20_mc_clients,
.num_clients = ARRAY_SIZE(tegra20_mc_clients),
@ -367,4 +698,5 @@ const struct tegra_mc_soc tegra20_mc_soc = {
.resets = tegra20_mc_resets,
.num_resets = ARRAY_SIZE(tegra20_mc_resets),
.icc_ops = &tegra20_mc_icc_ops,
.init = tegra20_mc_init,
};


@ -998,12 +998,12 @@ static int emc_load_timings_from_dt(struct tegra_emc *emc,
if (err)
return err;
dev_info(emc->dev,
"got %u timings for RAM code %u (min %luMHz max %luMHz)\n",
emc->num_timings,
tegra_read_ram_code(),
emc->timings[0].rate / 1000000,
emc->timings[emc->num_timings - 1].rate / 1000000);
dev_info_once(emc->dev,
"got %u timings for RAM code %u (min %luMHz max %luMHz)\n",
emc->num_timings,
tegra_read_ram_code(),
emc->timings[0].rate / 1000000,
emc->timings[emc->num_timings - 1].rate / 1000000);
return 0;
}
@ -1015,7 +1015,7 @@ static struct device_node *emc_find_node_by_ram_code(struct device *dev)
int err;
if (of_get_child_count(dev->of_node) == 0) {
dev_info(dev, "device-tree doesn't have memory timings\n");
dev_info_once(dev, "device-tree doesn't have memory timings\n");
return NULL;
}
@ -1503,8 +1503,8 @@ static int tegra_emc_opp_table_init(struct tegra_emc *emc)
goto put_hw_table;
}
dev_info(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
/* first dummy rate-set initializes voltage state */
err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));


@ -21,7 +21,7 @@ config MFD_CS5535
config MFD_ALTERA_A10SR
bool "Altera Arria10 DevKit System Resource chip"
depends on ARCH_SOCFPGA && SPI_MASTER=y && OF
depends on ARCH_INTEL_SOCFPGA && SPI_MASTER=y && OF
select REGMAP_SPI
select MFD_CORE
help
@ -32,7 +32,7 @@ config MFD_ALTERA_A10SR
config MFD_ALTERA_SYSMGR
bool "Altera SOCFPGA System Manager"
depends on (ARCH_SOCFPGA || ARCH_STRATIX10) && OF
depends on ARCH_INTEL_SOCFPGA && OF
select MFD_SYSCON
help
Select this to get System Manager support for all Altera branded


@ -140,8 +140,8 @@ config DWMAC_ROCKCHIP
config DWMAC_SOCFPGA
tristate "SOCFPGA dwmac support"
default (ARCH_SOCFPGA || ARCH_STRATIX10)
depends on OF && (ARCH_SOCFPGA || ARCH_STRATIX10 || COMPILE_TEST)
default ARCH_INTEL_SOCFPGA
depends on OF && (ARCH_INTEL_SOCFPGA || COMPILE_TEST)
select MFD_SYSCON
help
Support for ethernet controller on Altera SOCFPGA


@ -60,7 +60,7 @@
#define COND2 { ASPEED_IP_SCU, SCU94, GENMASK(1, 0), 0, 0 }
/* LHCR0 is offset from the end of the H8S/2168-compatible registers */
#define LHCR0 0x20
#define LHCR0 0xa0
#define GFX064 0x64
#define B14 0
@ -2648,14 +2648,19 @@ static struct regmap *aspeed_g5_acquire_regmap(struct aspeed_pinmux_data *ctx,
}
if (ip == ASPEED_IP_LPC) {
struct device_node *node;
struct device_node *np;
struct regmap *map;
node = of_parse_phandle(ctx->dev->of_node,
np = of_parse_phandle(ctx->dev->of_node,
"aspeed,external-nodes", 1);
if (node) {
map = syscon_node_to_regmap(node->parent);
of_node_put(node);
if (np) {
if (!of_device_is_compatible(np->parent, "aspeed,ast2400-lpc-v2") &&
!of_device_is_compatible(np->parent, "aspeed,ast2500-lpc-v2") &&
!of_device_is_compatible(np->parent, "aspeed,ast2600-lpc-v2"))
return ERR_PTR(-ENODEV);
map = syscon_node_to_regmap(np->parent);
of_node_put(np);
if (IS_ERR(map))
return map;
} else


@ -423,6 +423,15 @@ config PWM_PXA
To compile this driver as a module, choose M here: the module
will be called pwm-pxa.
config PWM_RASPBERRYPI_POE
tristate "Raspberry Pi Firwmware PoE Hat PWM support"
# Make sure not 'y' when RASPBERRYPI_FIRMWARE is 'm'. This can only
# happen when COMPILE_TEST=y, hence the added !RASPBERRYPI_FIRMWARE.
depends on RASPBERRYPI_FIRMWARE || (COMPILE_TEST && !RASPBERRYPI_FIRMWARE)
help
Enable Raspberry Pi firmware controller PWM bus used to control the
official RPI PoE hat
config PWM_RCAR
tristate "Renesas R-Car PWM support"
depends on ARCH_RENESAS || COMPILE_TEST


@ -38,6 +38,7 @@ obj-$(CONFIG_PWM_MXS) += pwm-mxs.o
obj-$(CONFIG_PWM_OMAP_DMTIMER) += pwm-omap-dmtimer.o
obj-$(CONFIG_PWM_PCA9685) += pwm-pca9685.o
obj-$(CONFIG_PWM_PXA) += pwm-pxa.o
obj-$(CONFIG_PWM_RASPBERRYPI_POE) += pwm-raspberrypi-poe.o
obj-$(CONFIG_PWM_RCAR) += pwm-rcar.o
obj-$(CONFIG_PWM_RENESAS_TPU) += pwm-renesas-tpu.o
obj-$(CONFIG_PWM_ROCKCHIP) += pwm-rockchip.o


@ -0,0 +1,206 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright 2021 Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
* For more information on Raspberry Pi's PoE hat see:
* https://www.raspberrypi.org/products/poe-hat/
*
* Limitations:
* - No disable bit, so a disabled PWM is simulated by duty_cycle 0
* - Only normal polarity
* - Fixed 12.5 kHz period
*
* The current period is completed when HW is reconfigured.
*/
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pwm.h>
#include <soc/bcm2835/raspberrypi-firmware.h>
#include <dt-bindings/pwm/raspberrypi,firmware-poe-pwm.h>
#define RPI_PWM_MAX_DUTY 255
#define RPI_PWM_PERIOD_NS 80000 /* 12.5 kHz */
#define RPI_PWM_CUR_DUTY_REG 0x0
struct raspberrypi_pwm {
struct rpi_firmware *firmware;
struct pwm_chip chip;
unsigned int duty_cycle;
};
struct raspberrypi_pwm_prop {
__le32 reg;
__le32 val;
__le32 ret;
} __packed;
static inline
struct raspberrypi_pwm *raspberrypi_pwm_from_chip(struct pwm_chip *chip)
{
return container_of(chip, struct raspberrypi_pwm, chip);
}
static int raspberrypi_pwm_set_property(struct rpi_firmware *firmware,
u32 reg, u32 val)
{
struct raspberrypi_pwm_prop msg = {
.reg = cpu_to_le32(reg),
.val = cpu_to_le32(val),
};
int ret;
ret = rpi_firmware_property(firmware, RPI_FIRMWARE_SET_POE_HAT_VAL,
&msg, sizeof(msg));
if (ret)
return ret;
if (msg.ret)
return -EIO;
return 0;
}
static int raspberrypi_pwm_get_property(struct rpi_firmware *firmware,
u32 reg, u32 *val)
{
struct raspberrypi_pwm_prop msg = {
.reg = reg
};
int ret;
ret = rpi_firmware_property(firmware, RPI_FIRMWARE_GET_POE_HAT_VAL,
&msg, sizeof(msg));
if (ret)
return ret;
if (msg.ret)
return -EIO;
*val = le32_to_cpu(msg.val);
return 0;
}
static void raspberrypi_pwm_get_state(struct pwm_chip *chip,
struct pwm_device *pwm,
struct pwm_state *state)
{
struct raspberrypi_pwm *rpipwm = raspberrypi_pwm_from_chip(chip);
state->period = RPI_PWM_PERIOD_NS;
state->duty_cycle = DIV_ROUND_UP(rpipwm->duty_cycle * RPI_PWM_PERIOD_NS,
RPI_PWM_MAX_DUTY);
state->enabled = !!(rpipwm->duty_cycle);
state->polarity = PWM_POLARITY_NORMAL;
}
static int raspberrypi_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
const struct pwm_state *state)
{
struct raspberrypi_pwm *rpipwm = raspberrypi_pwm_from_chip(chip);
unsigned int duty_cycle;
int ret;
if (state->period < RPI_PWM_PERIOD_NS ||
state->polarity != PWM_POLARITY_NORMAL)
return -EINVAL;
if (!state->enabled)
duty_cycle = 0;
else if (state->duty_cycle < RPI_PWM_PERIOD_NS)
duty_cycle = DIV_ROUND_DOWN_ULL(state->duty_cycle * RPI_PWM_MAX_DUTY,
RPI_PWM_PERIOD_NS);
else
duty_cycle = RPI_PWM_MAX_DUTY;
if (duty_cycle == rpipwm->duty_cycle)
return 0;
ret = raspberrypi_pwm_set_property(rpipwm->firmware, RPI_PWM_CUR_DUTY_REG,
duty_cycle);
if (ret) {
dev_err(chip->dev, "Failed to set duty cycle: %pe\n",
ERR_PTR(ret));
return ret;
}
rpipwm->duty_cycle = duty_cycle;
return 0;
}
static const struct pwm_ops raspberrypi_pwm_ops = {
.get_state = raspberrypi_pwm_get_state,
.apply = raspberrypi_pwm_apply,
.owner = THIS_MODULE,
};
static int raspberrypi_pwm_probe(struct platform_device *pdev)
{
struct device_node *firmware_node;
struct device *dev = &pdev->dev;
struct rpi_firmware *firmware;
struct raspberrypi_pwm *rpipwm;
int ret;
firmware_node = of_get_parent(dev->of_node);
if (!firmware_node) {
dev_err(dev, "Missing firmware node\n");
return -ENOENT;
}
firmware = devm_rpi_firmware_get(&pdev->dev, firmware_node);
of_node_put(firmware_node);
if (!firmware)
return dev_err_probe(dev, -EPROBE_DEFER,
"Failed to get firmware handle\n");
rpipwm = devm_kzalloc(&pdev->dev, sizeof(*rpipwm), GFP_KERNEL);
if (!rpipwm)
return -ENOMEM;
rpipwm->firmware = firmware;
rpipwm->chip.dev = dev;
rpipwm->chip.ops = &raspberrypi_pwm_ops;
rpipwm->chip.base = -1;
rpipwm->chip.npwm = RASPBERRYPI_FIRMWARE_PWM_NUM;
platform_set_drvdata(pdev, rpipwm);
ret = raspberrypi_pwm_get_property(rpipwm->firmware, RPI_PWM_CUR_DUTY_REG,
&rpipwm->duty_cycle);
if (ret) {
dev_err(dev, "Failed to get duty cycle: %pe\n", ERR_PTR(ret));
return ret;
}
return pwmchip_add(&rpipwm->chip);
}
static int raspberrypi_pwm_remove(struct platform_device *pdev)
{
struct raspberrypi_pwm *rpipwm = platform_get_drvdata(pdev);
return pwmchip_remove(&rpipwm->chip);
}
static const struct of_device_id raspberrypi_pwm_of_match[] = {
{ .compatible = "raspberrypi,firmware-poe-pwm", },
{ }
};
MODULE_DEVICE_TABLE(of, raspberrypi_pwm_of_match);
static struct platform_driver raspberrypi_pwm_driver = {
.driver = {
.name = "raspberrypi-poe-pwm",
.of_match_table = raspberrypi_pwm_of_match,
},
.probe = raspberrypi_pwm_probe,
.remove = raspberrypi_pwm_remove,
};
module_platform_driver(raspberrypi_pwm_driver);
MODULE_AUTHOR("Nicolas Saenz Julienne <nsaenzjulienne@suse.de>");
MODULE_DESCRIPTION("Raspberry Pi Firmware Based PWM Bus Driver");
MODULE_LICENSE("GPL v2");

View File

@ -2,7 +2,7 @@
//
// System Control and Management Interface (SCMI) based regulator driver
//
// Copyright (C) 2020 ARM Ltd.
// Copyright (C) 2020-2021 ARM Ltd.
//
// Implements a regulator driver on top of the SCMI Voltage Protocol.
//
@ -33,9 +33,12 @@
#include <linux/slab.h>
#include <linux/types.h>
static const struct scmi_voltage_proto_ops *voltage_ops;
struct scmi_regulator {
u32 id;
struct scmi_device *sdev;
struct scmi_protocol_handle *ph;
struct regulator_dev *rdev;
struct device_node *of_node;
struct regulator_desc desc;
@ -50,19 +53,17 @@ struct scmi_regulator_info {
static int scmi_reg_enable(struct regulator_dev *rdev)
{
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
-const struct scmi_handle *handle = sreg->sdev->handle;
-return handle->voltage_ops->config_set(handle, sreg->id,
-SCMI_VOLTAGE_ARCH_STATE_ON);
+return voltage_ops->config_set(sreg->ph, sreg->id,
+SCMI_VOLTAGE_ARCH_STATE_ON);
}
static int scmi_reg_disable(struct regulator_dev *rdev)
{
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
-const struct scmi_handle *handle = sreg->sdev->handle;
-return handle->voltage_ops->config_set(handle, sreg->id,
-SCMI_VOLTAGE_ARCH_STATE_OFF);
+return voltage_ops->config_set(sreg->ph, sreg->id,
+SCMI_VOLTAGE_ARCH_STATE_OFF);
}
static int scmi_reg_is_enabled(struct regulator_dev *rdev)
@ -70,10 +71,8 @@ static int scmi_reg_is_enabled(struct regulator_dev *rdev)
int ret;
u32 config;
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
-const struct scmi_handle *handle = sreg->sdev->handle;
-ret = handle->voltage_ops->config_get(handle, sreg->id,
-&config);
+ret = voltage_ops->config_get(sreg->ph, sreg->id, &config);
if (ret) {
dev_err(&sreg->sdev->dev,
"Error %d reading regulator %s status.\n",
@ -89,9 +88,8 @@ static int scmi_reg_get_voltage_sel(struct regulator_dev *rdev)
int ret;
s32 volt_uV;
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
-const struct scmi_handle *handle = sreg->sdev->handle;
-ret = handle->voltage_ops->level_get(handle, sreg->id, &volt_uV);
+ret = voltage_ops->level_get(sreg->ph, sreg->id, &volt_uV);
if (ret)
return ret;
@ -103,13 +101,12 @@ static int scmi_reg_set_voltage_sel(struct regulator_dev *rdev,
{
s32 volt_uV;
struct scmi_regulator *sreg = rdev_get_drvdata(rdev);
-const struct scmi_handle *handle = sreg->sdev->handle;
volt_uV = sreg->desc.ops->list_voltage(rdev, selector);
if (volt_uV <= 0)
return -EINVAL;
-return handle->voltage_ops->level_set(handle, sreg->id, 0x0, volt_uV);
+return voltage_ops->level_set(sreg->ph, sreg->id, 0x0, volt_uV);
}
static const struct regulator_ops scmi_reg_fixed_ops = {
@ -204,11 +201,10 @@ scmi_config_discrete_regulator_mappings(struct scmi_regulator *sreg,
static int scmi_regulator_common_init(struct scmi_regulator *sreg)
{
int ret;
-const struct scmi_handle *handle = sreg->sdev->handle;
struct device *dev = &sreg->sdev->dev;
const struct scmi_voltage_info *vinfo;
-vinfo = handle->voltage_ops->info_get(handle, sreg->id);
+vinfo = voltage_ops->info_get(sreg->ph, sreg->id);
if (!vinfo) {
dev_warn(dev, "Failure to get voltage domain %d\n",
sreg->id);
@ -257,6 +253,7 @@ static int scmi_regulator_common_init(struct scmi_regulator *sreg)
}
static int process_scmi_regulator_of_node(struct scmi_device *sdev,
struct scmi_protocol_handle *ph,
struct device_node *np,
struct scmi_regulator_info *rinfo)
{
@ -284,6 +281,7 @@ static int process_scmi_regulator_of_node(struct scmi_device *sdev,
rinfo->sregv[dom]->id = dom;
rinfo->sregv[dom]->sdev = sdev;
rinfo->sregv[dom]->ph = ph;
/* get hold of good nodes */
of_node_get(np);
@ -302,11 +300,17 @@ static int scmi_regulator_probe(struct scmi_device *sdev)
struct device_node *np, *child;
const struct scmi_handle *handle = sdev->handle;
struct scmi_regulator_info *rinfo;
struct scmi_protocol_handle *ph;
-if (!handle || !handle->voltage_ops)
+if (!handle)
return -ENODEV;
-num_doms = handle->voltage_ops->num_domains_get(handle);
+voltage_ops = handle->devm_protocol_get(sdev,
+SCMI_PROTOCOL_VOLTAGE, &ph);
+if (IS_ERR(voltage_ops))
+return PTR_ERR(voltage_ops);
+num_doms = voltage_ops->num_domains_get(ph);
if (num_doms <= 0) {
if (!num_doms) {
dev_err(&sdev->dev,
@ -341,7 +345,7 @@ static int scmi_regulator_probe(struct scmi_device *sdev)
*/
np = of_find_node_by_name(handle->dev->of_node, "regulators");
for_each_child_of_node(np, child) {
-ret = process_scmi_regulator_of_node(sdev, child, rinfo);
+ret = process_scmi_regulator_of_node(sdev, ph, child, rinfo);
/* abort on any mem issue */
if (ret == -ENOMEM)
return ret;
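Both this regulator driver and the reset driver further down follow the same v5.13 SCMI rework: protocol operations are no longer reached through scmi_handle, they are requested per protocol via devm_protocol_get(), and every call then takes the returned protocol handle. A minimal sketch of the pattern, with a hypothetical example_scmi_probe() standing in for a real client:

	#include <linux/err.h>
	#include <linux/module.h>
	#include <linux/scmi_protocol.h>

	static const struct scmi_voltage_proto_ops *voltage_ops;

	static int example_scmi_probe(struct scmi_device *sdev)
	{
		const struct scmi_handle *handle = sdev->handle;
		struct scmi_protocol_handle *ph;

		if (!handle)
			return -ENODEV;

		/* Per-protocol ops; the reference is released automatically on unbind. */
		voltage_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_VOLTAGE, &ph);
		if (IS_ERR(voltage_ops))
			return PTR_ERR(voltage_ops);

		/* All operations now take the protocol handle, not the scmi_handle. */
		return voltage_ops->num_domains_get(ph) > 0 ? 0 : -ENODEV;
	}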

View File

@ -183,7 +183,7 @@ config RESET_SCMI
config RESET_SIMPLE
bool "Simple Reset Controller Driver" if COMPILE_TEST
-default ARCH_AGILEX || ARCH_ASPEED || ARCH_BCM4908 || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARC
+default ARCH_ASPEED || ARCH_BCM4908 || ARCH_BITMAIN || ARCH_REALTEK || ARCH_STM32 || (ARCH_INTEL_SOCFPGA && ARM64) || ARCH_SUNXI || ARC
help
This enables a simple reset controller driver for reset lines that
can be asserted and deasserted by toggling bits in a contiguous,
@ -205,8 +205,8 @@ config RESET_STM32MP157
This enables the RCC reset controller driver for STM32 MPUs.
config RESET_SOCFPGA
bool "SoCFPGA Reset Driver" if COMPILE_TEST && !ARCH_SOCFPGA
default ARCH_SOCFPGA
bool "SoCFPGA Reset Driver" if COMPILE_TEST && (!ARM || !ARCH_INTEL_SOCFPGA)
default ARM && ARCH_INTEL_SOCFPGA
select RESET_SIMPLE
help
This enables the reset driver for the SoCFPGA ARMv7 platforms. This

View File

@ -82,7 +82,7 @@ static int rpi_reset_probe(struct platform_device *pdev)
return -ENOENT;
}
-fw = rpi_firmware_get(np);
+fw = devm_rpi_firmware_get(&pdev->dev, np);
of_node_put(np);
if (!fw)
return -EPROBE_DEFER;

View File

@ -2,7 +2,7 @@
/*
* ARM System Control and Management Interface (ARM SCMI) reset driver
*
* Copyright (C) 2019 ARM Ltd.
* Copyright (C) 2019-2021 ARM Ltd.
*/
#include <linux/module.h>
@ -11,18 +11,20 @@
#include <linux/reset-controller.h>
#include <linux/scmi_protocol.h>
static const struct scmi_reset_proto_ops *reset_ops;
/**
* struct scmi_reset_data - reset controller information structure
* @rcdev: reset controller entity
-* @handle: ARM SCMI handle used for communication with system controller
+* @ph: ARM SCMI protocol handle used for communication with system controller
*/
struct scmi_reset_data {
struct reset_controller_dev rcdev;
-const struct scmi_handle *handle;
+const struct scmi_protocol_handle *ph;
};
#define to_scmi_reset_data(p) container_of((p), struct scmi_reset_data, rcdev)
-#define to_scmi_handle(p) (to_scmi_reset_data(p)->handle)
+#define to_scmi_handle(p) (to_scmi_reset_data(p)->ph)
/**
* scmi_reset_assert() - assert device reset
@ -37,9 +39,9 @@ struct scmi_reset_data {
static int
scmi_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
{
-const struct scmi_handle *handle = to_scmi_handle(rcdev);
+const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev);
-return handle->reset_ops->assert(handle, id);
+return reset_ops->assert(ph, id);
}
/**
@ -55,9 +57,9 @@ scmi_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
static int
scmi_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
{
-const struct scmi_handle *handle = to_scmi_handle(rcdev);
+const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev);
-return handle->reset_ops->deassert(handle, id);
+return reset_ops->deassert(ph, id);
}
/**
@ -73,9 +75,9 @@ scmi_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
static int
scmi_reset_reset(struct reset_controller_dev *rcdev, unsigned long id)
{
-const struct scmi_handle *handle = to_scmi_handle(rcdev);
+const struct scmi_protocol_handle *ph = to_scmi_handle(rcdev);
-return handle->reset_ops->reset(handle, id);
+return reset_ops->reset(ph, id);
}
static const struct reset_control_ops scmi_reset_ops = {
@ -90,10 +92,15 @@ static int scmi_reset_probe(struct scmi_device *sdev)
struct device *dev = &sdev->dev;
struct device_node *np = dev->of_node;
const struct scmi_handle *handle = sdev->handle;
+struct scmi_protocol_handle *ph;
-if (!handle || !handle->reset_ops)
+if (!handle)
return -ENODEV;
+reset_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_RESET, &ph);
+if (IS_ERR(reset_ops))
+return PTR_ERR(reset_ops);
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
@ -101,8 +108,8 @@ static int scmi_reset_probe(struct scmi_device *sdev)
data->rcdev.ops = &scmi_reset_ops;
data->rcdev.owner = THIS_MODULE;
data->rcdev.of_node = np;
-data->rcdev.nr_resets = handle->reset_ops->num_domains_get(handle);
-data->handle = handle;
+data->rcdev.nr_resets = reset_ops->num_domains_get(ph);
+data->ph = ph;
return devm_reset_controller_register(dev, &data->rcdev);
}
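Consumers are unaffected by this rework and keep using the generic reset framework. A minimal sketch, assuming a hypothetical platform driver whose DT node has a "resets" phandle pointing at the SCMI reset cell:

	#include <linux/err.h>
	#include <linux/platform_device.h>
	#include <linux/reset.h>

	static int example_probe(struct platform_device *pdev)
	{
		struct reset_control *rst;

		rst = devm_reset_control_get_exclusive(&pdev->dev, NULL);
		if (IS_ERR(rst))
			return PTR_ERR(rst);

		/* Ends up in scmi_reset_reset() -> reset_ops->reset(ph, id) above. */
		return reset_control_reset(rst);
	}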

View File

@ -18,15 +18,15 @@
#define DEVICE_NAME "aspeed-lpc-ctrl"
-#define HICR5 0x0
+#define HICR5 0x80
#define HICR5_ENL2H BIT(8)
#define HICR5_ENFWH BIT(10)
-#define HICR6 0x4
+#define HICR6 0x84
#define SW_FWH2AHB BIT(17)
-#define HICR7 0x8
-#define HICR8 0xc
+#define HICR7 0x88
+#define HICR8 0x8c
struct aspeed_lpc_ctrl {
struct miscdevice miscdev;
@ -215,6 +215,7 @@ static int aspeed_lpc_ctrl_probe(struct platform_device *pdev)
struct device_node *node;
struct resource resm;
struct device *dev;
struct device_node *np;
int rc;
dev = &pdev->dev;
@ -270,8 +271,15 @@ static int aspeed_lpc_ctrl_probe(struct platform_device *pdev)
}
}
-lpc_ctrl->regmap = syscon_node_to_regmap(
-pdev->dev.parent->of_node);
+np = pdev->dev.parent->of_node;
+if (!of_device_is_compatible(np, "aspeed,ast2400-lpc-v2") &&
+!of_device_is_compatible(np, "aspeed,ast2500-lpc-v2") &&
+!of_device_is_compatible(np, "aspeed,ast2600-lpc-v2")) {
+dev_err(dev, "unsupported LPC device binding\n");
+return -ENODEV;
+}
+lpc_ctrl->regmap = syscon_node_to_regmap(np);
if (IS_ERR(lpc_ctrl->regmap)) {
dev_err(dev, "Couldn't get regmap\n");
return -ENODEV;

View File

@ -29,26 +29,25 @@
#define NUM_SNOOP_CHANNELS 2
#define SNOOP_FIFO_SIZE 2048
-#define HICR5 0x0
+#define HICR5 0x80
#define HICR5_EN_SNP0W BIT(0)
#define HICR5_ENINT_SNP0W BIT(1)
#define HICR5_EN_SNP1W BIT(2)
#define HICR5_ENINT_SNP1W BIT(3)
-#define HICR6 0x4
+#define HICR6 0x84
#define HICR6_STR_SNP0W BIT(0)
#define HICR6_STR_SNP1W BIT(1)
-#define SNPWADR 0x10
+#define SNPWADR 0x90
#define SNPWADR_CH0_MASK GENMASK(15, 0)
#define SNPWADR_CH0_SHIFT 0
#define SNPWADR_CH1_MASK GENMASK(31, 16)
#define SNPWADR_CH1_SHIFT 16
-#define SNPWDR 0x14
+#define SNPWDR 0x94
#define SNPWDR_CH0_MASK GENMASK(7, 0)
#define SNPWDR_CH0_SHIFT 0
#define SNPWDR_CH1_MASK GENMASK(15, 8)
#define SNPWDR_CH1_SHIFT 8
-#define HICRB 0x80
+#define HICRB 0x100
#define HICRB_ENSNP0D BIT(14)
#define HICRB_ENSNP1D BIT(15)
@ -95,8 +94,10 @@ static ssize_t snoop_file_read(struct file *file, char __user *buffer,
return -EINTR;
}
ret = kfifo_to_user(&chan->fifo, buffer, count, &copied);
+if (ret)
+return ret;
-return ret ? ret : copied;
+return copied;
}
static __poll_t snoop_file_poll(struct file *file,
@ -260,6 +261,7 @@ static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
{
struct aspeed_lpc_snoop *lpc_snoop;
struct device *dev;
struct device_node *np;
u32 port;
int rc;
@ -269,8 +271,15 @@ static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
if (!lpc_snoop)
return -ENOMEM;
-lpc_snoop->regmap = syscon_node_to_regmap(
-pdev->dev.parent->of_node);
+np = pdev->dev.parent->of_node;
+if (!of_device_is_compatible(np, "aspeed,ast2400-lpc-v2") &&
+!of_device_is_compatible(np, "aspeed,ast2500-lpc-v2") &&
+!of_device_is_compatible(np, "aspeed,ast2600-lpc-v2")) {
+dev_err(dev, "unsupported LPC device binding\n");
+return -ENODEV;
+}
+lpc_snoop->regmap = syscon_node_to_regmap(np);
if (IS_ERR(lpc_snoop->regmap)) {
dev_err(dev, "Couldn't get regmap\n");
return -ENODEV;
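The snoop_file_read() change above fixes a ternary sign-expansion bug: ret is an int and copied is unsigned, so the old return statement promoted a negative error code to a large positive value before widening it to ssize_t. A standalone illustration (userspace C, assuming a 64-bit build):

	#include <stdio.h>
	#include <sys/types.h>

	int main(void)
	{
		int ret = -4;			/* e.g. an -EINTR style error */
		unsigned int copied = 0;

		ssize_t bad = ret ? ret : copied;	/* ternary is unsigned: 4294967292 */
		ssize_t good = ret;			/* keeps the sign: -4 */

		printf("bad=%zd good=%zd\n", bad, good);
		return 0;
	}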

View File

@ -209,6 +209,28 @@ static int bcm_pmb_power_on_device(struct bcm_pmb *pmb, int bus, u8 device)
return err;
}
static int bcm_pmb_power_on_sata(struct bcm_pmb *pmb, int bus, u8 device)
{
int err;
err = bcm_pmb_power_on_zone(pmb, bus, device, 0);
if (err)
return err;
/* Does not apply to the BCM963158 */
err = bcm_pmb_bpcm_write(pmb, bus, device, BPCM_MISC_CONTROL, 0);
if (err)
return err;
err = bcm_pmb_bpcm_write(pmb, bus, device, BPCM_SR_CONTROL, 0xffffffff);
if (err)
return err;
err = bcm_pmb_bpcm_write(pmb, bus, device, BPCM_SR_CONTROL, 0);
return err;
}
static int bcm_pmb_power_on(struct generic_pm_domain *genpd)
{
struct bcm_pmb_pm_domain *pd = container_of(genpd, struct bcm_pmb_pm_domain, genpd);
@ -222,6 +244,8 @@ static int bcm_pmb_power_on(struct generic_pm_domain *genpd)
return bcm_pmb_power_on_zone(pmb, data->bus, data->device, 0);
case BCM_PMB_HOST_USB:
return bcm_pmb_power_on_device(pmb, data->bus, data->device);
case BCM_PMB_SATA:
return bcm_pmb_power_on_sata(pmb, data->bus, data->device);
default:
dev_err(pmb->dev, "unsupported device id: %d\n", data->id);
return -EINVAL;
@ -317,8 +341,14 @@ static const struct bcm_pmb_pd_data bcm_pmb_bcm4908_data[] = {
{ },
};
static const struct bcm_pmb_pd_data bcm_pmb_bcm63138_data[] = {
{ .name = "sata", .id = BCM_PMB_SATA, .bus = 0, .device = 3, },
{ },
};
static const struct of_device_id bcm_pmb_of_match[] = {
{ .compatible = "brcm,bcm4908-pmb", .data = &bcm_pmb_bcm4908_data, },
{ .compatible = "brcm,bcm63138-pmb", .data = &bcm_pmb_bcm63138_data, },
{ },
};
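Nothing special is needed on the consumer side to use the new SATA domain: a device whose DT node carries a power-domains phandle to the PMB is attached to the genpd automatically, and runtime PM then ends up in bcm_pmb_power_on_sata(). A minimal sketch with a hypothetical example_sata_probe():

	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>

	static int example_sata_probe(struct platform_device *pdev)
	{
		int ret;

		pm_runtime_enable(&pdev->dev);
		ret = pm_runtime_resume_and_get(&pdev->dev);	/* powers up the PMB zone */
		if (ret < 0) {
			pm_runtime_disable(&pdev->dev);
			return ret;
		}

		return 0;
	}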
