ARM: Drivers for 5.14


Merge tag 'arm-drivers-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM driver updates from Olof Johansson:

 - Reset controllers: Adding support for Microchip Sparx5 Switch.

 - Memory controllers: ARM Primecell PL35x SMC memory controller driver
   cleanups and improvements.

 - i.MX SoC drivers: Power domain support for i.MX8MM and i.MX8MN.

 - Rockchip: RK3568 power domains support + DT binding updates,
   cleanups.

 - Qualcomm SoC drivers: Amend socinfo with more SoC/PMIC details,
   including support for MSM8226, MDM9607, SM6125 and SC8180X.

 - ARM FFA driver: "Firmware Framework for ARMv8-A", defining management
   interfaces and communication (including bus model) between partitions
   both in Normal and Secure Worlds.

 - Tegra Memory controller changes, including major rework to deal with
   identity mappings at boot and integration with ARM SMMU pieces.

* tag 'arm-drivers-5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (120 commits)
  firmware: turris-mox-rwtm: add marvell,armada-3700-rwtm-firmware compatible string
  firmware: turris-mox-rwtm: show message about HWRNG registration
  firmware: turris-mox-rwtm: fail probing when firmware does not support hwrng
  firmware: turris-mox-rwtm: report failures better
  firmware: turris-mox-rwtm: fix reply status decoding function
  soc: imx: gpcv2: add support for i.MX8MN power domains
  dt-bindings: add defines for i.MX8MN power domains
  firmware: tegra: bpmp: Fix Tegra234-only builds
  iommu/arm-smmu: Use Tegra implementation on Tegra186
  iommu/arm-smmu: tegra: Implement SID override programming
  iommu/arm-smmu: tegra: Detect number of instances at runtime
  dt-bindings: arm-smmu: Add Tegra186 compatible string
  firmware: qcom_scm: Add MDM9607 compatible
  soc: qcom: rpmpd: Add MDM9607 RPM Power Domains
  soc: renesas: Add support to read LSI DEVID register of RZ/G2{L,LC} SoC's
  soc: renesas: Add ARCH_R9A07G044 for the new RZ/G2L SoC's
  dt-bindings: soc: rockchip: drop unnecessary #phy-cells from grf.yaml
  memory: emif: remove unused frequency and voltage notifiers
  memory: fsl_ifc: fix leak of private memory on probe failure
  memory: fsl_ifc: fix leak of IO mapping on probe failure
  ...
Committed by Linus Torvalds on 2021-07-10 09:46:20 -07:00, commit 071e5aceeb
97 changed files with 8776 additions and 5522 deletions

@ -1,16 +0,0 @@
Rockchip power-management-unit:
-------------------------------
The pmu is used to turn off and on different power domains of the SoCs
This includes the power to the CPU cores.
Required node properties:
- compatible value : = "rockchip,rk3066-pmu";
- reg : physical base address and the size of the registers window
Example:
pmu@20004000 {
compatible = "rockchip,rk3066-pmu";
reg = <0x20004000 0x100>;
};


@ -0,0 +1,55 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/arm/rockchip/pmu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip Power Management Unit (PMU)
maintainers:
- Elaine Zhang <zhangqing@rock-chips.com>
- Heiko Stuebner <heiko@sntech.de>
description: |
The PMU is used to turn on and off different power domains of the SoCs.
This includes the power to the CPU cores.
select:
properties:
compatible:
contains:
enum:
- rockchip,px30-pmu
- rockchip,rk3066-pmu
- rockchip,rk3288-pmu
- rockchip,rk3399-pmu
required:
- compatible
properties:
compatible:
items:
- enum:
- rockchip,px30-pmu
- rockchip,rk3066-pmu
- rockchip,rk3288-pmu
- rockchip,rk3399-pmu
- const: syscon
- const: simple-mfd
reg:
maxItems: 1
required:
- compatible
- reg
additionalProperties: true
examples:
- |
pmu@20004000 {
compatible = "rockchip,rk3066-pmu", "syscon", "simple-mfd";
reg = <0x20004000 0x100>;
};


@ -12,6 +12,7 @@ Required properties:
* "qcom,scm-ipq4019"
* "qcom,scm-ipq806x"
* "qcom,scm-ipq8074"
* "qcom,scm-mdm9607"
* "qcom,scm-msm8660"
* "qcom,scm-msm8916"
* "qcom,scm-msm8960"


@ -54,8 +54,14 @@ properties:
- const: arm,mmu-500
- description: NVIDIA SoCs that program two ARM MMU-500s identically
items:
- description: NVIDIA SoCs that require memory controller interaction
and may program multiple ARM MMU-500s identically with the memory
controller interleaving translations between multiple instances
for improved performance.
items:
- enum:
- nvidia,tegra194-smmu
- const: nvidia,tegra194-smmu
- const: nvidia,tegra186-smmu
- const: nvidia,smmu-500
- items:
- const: arm,mmu-500
@ -165,10 +171,11 @@ allOf:
contains:
enum:
- nvidia,tegra194-smmu
- nvidia,tegra186-smmu
then:
properties:
reg:
minItems: 2
minItems: 1
maxItems: 2
else:
properties:


@ -30,9 +30,6 @@ properties:
"#clock-cells":
const: 0
"#phy-cells":
const: 0
clocks:
maxItems: 1
@ -120,7 +117,6 @@ required:
- reg
- clock-output-names
- "#clock-cells"
- "#phy-cells"
- host-port
- otg-port
@ -131,26 +127,25 @@ examples:
#include <dt-bindings/clock/rk3399-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
u2phy0: usb2-phy@e450 {
u2phy0: usb2phy@e450 {
compatible = "rockchip,rk3399-usb2phy";
reg = <0xe450 0x10>;
clocks = <&cru SCLK_USB2PHY0_REF>;
clock-names = "phyclk";
clock-output-names = "clk_usbphy0_480m";
#clock-cells = <0>;
#phy-cells = <0>;
u2phy0_host: host-port {
#phy-cells = <0>;
interrupts = <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH 0>;
interrupt-names = "linestate";
#phy-cells = <0>;
};
u2phy0_otg: otg-port {
#phy-cells = <0>;
interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH 0>;
interrupt-names = "otg-bvalid", "otg-id", "linestate";
#phy-cells = <0>;
};
};


@ -25,7 +25,9 @@ properties:
compatible:
enum:
- fsl,imx7d-gpc
- fsl,imx8mn-gpc
- fsl,imx8mq-gpc
- fsl,imx8mm-gpc
reg:
maxItems: 1
@ -54,6 +56,7 @@ properties:
Power domain index. Valid values are defined in
include/dt-bindings/power/imx7-power.h for fsl,imx7d-gpc and
include/dt-bindings/power/imx8m-power.h for fsl,imx8mq-gpc
include/dt-bindings/power/imx8mm-power.h for fsl,imx8mm-gpc
maxItems: 1
clocks:


@ -16,6 +16,7 @@ description:
properties:
compatible:
enum:
- qcom,mdm9607-rpmpd
- qcom,msm8916-rpmpd
- qcom,msm8939-rpmpd
- qcom,msm8976-rpmpd
@ -26,6 +27,7 @@ properties:
- qcom,sdm660-rpmpd
- qcom,sc7180-rpmhpd
- qcom,sc7280-rpmhpd
- qcom,sc8180x-rpmhpd
- qcom,sdm845-rpmhpd
- qcom,sdx55-rpmhpd
- qcom,sm8150-rpmhpd


@ -0,0 +1,248 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/power/rockchip,power-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip Power Domains
maintainers:
- Elaine Zhang <zhangqing@rock-chips.com>
- Heiko Stuebner <heiko@sntech.de>
description: |
Rockchip processors include support for multiple power domains
which can be powered up/down by software based on different
application scenarios to save power.
Power domains contained within power-controller node are
generic power domain providers documented in
Documentation/devicetree/bindings/power/power-domain.yaml.
IP cores belonging to a power domain should contain a
"power-domains" property that is a phandle for the
power domain node representing the domain.
properties:
$nodename:
const: power-controller
compatible:
enum:
- rockchip,px30-power-controller
- rockchip,rk3036-power-controller
- rockchip,rk3066-power-controller
- rockchip,rk3128-power-controller
- rockchip,rk3188-power-controller
- rockchip,rk3228-power-controller
- rockchip,rk3288-power-controller
- rockchip,rk3328-power-controller
- rockchip,rk3366-power-controller
- rockchip,rk3368-power-controller
- rockchip,rk3399-power-controller
- rockchip,rk3568-power-controller
"#power-domain-cells":
const: 1
"#address-cells":
const: 1
"#size-cells":
const: 0
required:
- compatible
- "#power-domain-cells"
additionalProperties: false
patternProperties:
"^power-domain@[0-9a-f]+$":
$ref: "#/$defs/pd-node"
unevaluatedProperties: false
properties:
"#address-cells":
const: 1
"#size-cells":
const: 0
patternProperties:
"^power-domain@[0-9a-f]+$":
$ref: "#/$defs/pd-node"
unevaluatedProperties: false
properties:
"#address-cells":
const: 1
"#size-cells":
const: 0
patternProperties:
"^power-domain@[0-9a-f]+$":
$ref: "#/$defs/pd-node"
unevaluatedProperties: false
properties:
"#power-domain-cells":
const: 0
$defs:
pd-node:
type: object
description: |
Represents the power domains within the power controller node.
properties:
reg:
maxItems: 1
description: |
Power domain index. Valid values are defined in
"include/dt-bindings/power/px30-power.h"
"include/dt-bindings/power/rk3036-power.h"
"include/dt-bindings/power/rk3066-power.h"
"include/dt-bindings/power/rk3128-power.h"
"include/dt-bindings/power/rk3188-power.h"
"include/dt-bindings/power/rk3228-power.h"
"include/dt-bindings/power/rk3288-power.h"
"include/dt-bindings/power/rk3328-power.h"
"include/dt-bindings/power/rk3366-power.h"
"include/dt-bindings/power/rk3368-power.h"
"include/dt-bindings/power/rk3399-power.h"
"include/dt-bindings/power/rk3568-power.h"
clocks:
minItems: 1
maxItems: 30
description: |
A number of phandles to clocks that need to be enabled
while power domain switches state.
pm_qos:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: |
A number of phandles to qos blocks which need to be saved and restored
while power domain switches state.
"#power-domain-cells":
enum: [0, 1]
description:
Must be 0 for nodes representing a single PM domain and 1 for nodes
providing multiple PM domains.
required:
- reg
- "#power-domain-cells"
examples:
- |
#include <dt-bindings/clock/rk3399-cru.h>
#include <dt-bindings/power/rk3399-power.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
qos_hdcp: qos@ffa90000 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffa90000 0x0 0x20>;
};
qos_iep: qos@ffa98000 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffa98000 0x0 0x20>;
};
qos_rga_r: qos@ffab0000 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffab0000 0x0 0x20>;
};
qos_rga_w: qos@ffab0080 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffab0080 0x0 0x20>;
};
qos_video_m0: qos@ffab8000 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffab8000 0x0 0x20>;
};
qos_video_m1_r: qos@ffac0000 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffac0000 0x0 0x20>;
};
qos_video_m1_w: qos@ffac0080 {
compatible = "rockchip,rk3399-qos", "syscon";
reg = <0x0 0xffac0080 0x0 0x20>;
};
power-management@ff310000 {
compatible = "rockchip,rk3399-pmu", "syscon", "simple-mfd";
reg = <0x0 0xff310000 0x0 0x1000>;
power-controller {
compatible = "rockchip,rk3399-power-controller";
#power-domain-cells = <1>;
#address-cells = <1>;
#size-cells = <0>;
/* These power domains are grouped by VD_CENTER */
power-domain@RK3399_PD_IEP {
reg = <RK3399_PD_IEP>;
clocks = <&cru ACLK_IEP>,
<&cru HCLK_IEP>;
pm_qos = <&qos_iep>;
#power-domain-cells = <0>;
};
power-domain@RK3399_PD_RGA {
reg = <RK3399_PD_RGA>;
clocks = <&cru ACLK_RGA>,
<&cru HCLK_RGA>;
pm_qos = <&qos_rga_r>,
<&qos_rga_w>;
#power-domain-cells = <0>;
};
power-domain@RK3399_PD_VCODEC {
reg = <RK3399_PD_VCODEC>;
clocks = <&cru ACLK_VCODEC>,
<&cru HCLK_VCODEC>;
pm_qos = <&qos_video_m0>;
#power-domain-cells = <0>;
};
power-domain@RK3399_PD_VDU {
reg = <RK3399_PD_VDU>;
clocks = <&cru ACLK_VDU>,
<&cru HCLK_VDU>;
pm_qos = <&qos_video_m1_r>,
<&qos_video_m1_w>;
#power-domain-cells = <0>;
};
power-domain@RK3399_PD_VIO {
reg = <RK3399_PD_VIO>;
#power-domain-cells = <1>;
#address-cells = <1>;
#size-cells = <0>;
power-domain@RK3399_PD_HDCP {
reg = <RK3399_PD_HDCP>;
clocks = <&cru ACLK_HDCP>,
<&cru HCLK_HDCP>,
<&cru PCLK_HDCP>;
pm_qos = <&qos_hdcp>;
#power-domain-cells = <0>;
};
};
};
};
};


@ -0,0 +1,58 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/reset/microchip,rst.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Microchip Sparx5 Switch Reset Controller
maintainers:
- Steen Hegelund <steen.hegelund@microchip.com>
- Lars Povlsen <lars.povlsen@microchip.com>
description: |
The Microchip Sparx5 Switch provides reset control and implements the following
functions
- One Time Switch Core Reset (Soft Reset)
properties:
$nodename:
pattern: "^reset-controller@[0-9a-f]+$"
compatible:
const: microchip,sparx5-switch-reset
reg:
items:
- description: global control block registers
reg-names:
items:
- const: gcb
"#reset-cells":
const: 1
cpu-syscon:
$ref: "/schemas/types.yaml#/definitions/phandle"
description: syscon used to access CPU reset
required:
- compatible
- reg
- reg-names
- "#reset-cells"
- cpu-syscon
additionalProperties: false
examples:
- |
reset: reset-controller@11010008 {
compatible = "microchip,sparx5-switch-reset";
reg = <0x11010008 0x4>;
reg-names = "gcb";
#reset-cells = <1>;
cpu-syscon = <&cpu_ctrl>;
};
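
For orientation, here is a hedged sketch of how a peripheral driver would consume a reset line provided by a controller bound as above, using the generic Linux reset API (devm_reset_control_get_exclusive() and reset_control_reset()). The driver and function names are hypothetical and nothing below is taken from the Sparx5 code itself.

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

/* Hypothetical consumer: pulse its reset line before touching the hardware */
static int demo_switch_probe(struct platform_device *pdev)
{
        struct reset_control *rst;

        /* Resolve the 'resets' phandle + specifier from this device's DT node */
        rst = devm_reset_control_get_exclusive(&pdev->dev, NULL);
        if (IS_ERR(rst))
                return PTR_ERR(rst);

        /* Assert and deassert the line once */
        return reset_control_reset(rst);
}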


@ -27,6 +27,7 @@ Required properties in pwrap device node.
"mediatek,mt8135-pwrap" for MT8135 SoCs
"mediatek,mt8173-pwrap" for MT8173 SoCs
"mediatek,mt8183-pwrap" for MT8183 SoCs
"mediatek,mt8195-pwrap" for MT8195 SoCs
"mediatek,mt8516-pwrap" for MT8516 SoCs
- interrupts: IRQ for pwrap in SOC
- reg-names: Must include the following entries:


@ -32,12 +32,14 @@ properties:
enum:
- qcom,rpm-apq8084
- qcom,rpm-ipq6018
- qcom,rpm-msm8226
- qcom,rpm-msm8916
- qcom,rpm-msm8974
- qcom,rpm-msm8976
- qcom,rpm-msm8996
- qcom,rpm-msm8998
- qcom,rpm-sdm660
- qcom,rpm-sm6125
- qcom,rpm-qcs404
qcom,smd-channels:


@ -1,61 +0,0 @@
* Rockchip General Register Files (GRF)
The general register file will be used to do static set by software, which
is composed of many registers for system control.
From RK3368 SoCs, the GRF is divided into two sections,
- GRF, used for general non-secure system,
- SGRF, used for general secure system,
- PMUGRF, used for always on system
On RK3328 SoCs, the GRF adds a section for USB2PHYGRF,
ON RK3308 SoC, the GRF is divided into four sections:
- GRF, used for general non-secure system,
- SGRF, used for general secure system,
- DETECTGRF, used for audio codec system,
- COREGRF, used for pvtm,
Required Properties:
- compatible: GRF should be one of the following:
- "rockchip,px30-grf", "syscon": for px30
- "rockchip,rk3036-grf", "syscon": for rk3036
- "rockchip,rk3066-grf", "syscon": for rk3066
- "rockchip,rk3188-grf", "syscon": for rk3188
- "rockchip,rk3228-grf", "syscon": for rk3228
- "rockchip,rk3288-grf", "syscon": for rk3288
- "rockchip,rk3308-grf", "syscon": for rk3308
- "rockchip,rk3328-grf", "syscon": for rk3328
- "rockchip,rk3368-grf", "syscon": for rk3368
- "rockchip,rk3399-grf", "syscon": for rk3399
- "rockchip,rv1108-grf", "syscon": for rv1108
- compatible: DETECTGRF should be one of the following:
- "rockchip,rk3308-detect-grf", "syscon": for rk3308
- compatilbe: COREGRF should be one of the following:
- "rockchip,rk3308-core-grf", "syscon": for rk3308
- compatible: PMUGRF should be one of the following:
- "rockchip,px30-pmugrf", "syscon": for px30
- "rockchip,rk3368-pmugrf", "syscon": for rk3368
- "rockchip,rk3399-pmugrf", "syscon": for rk3399
- compatible: SGRF should be one of the following:
- "rockchip,rk3288-sgrf", "syscon": for rk3288
- compatible: USB2PHYGRF should be one of the following:
- "rockchip,px30-usb2phy-grf", "syscon": for px30
- "rockchip,rk3328-usb2phy-grf", "syscon": for rk3328
- compatible: USBGRF should be one of the following:
- "rockchip,rv1108-usbgrf", "syscon": for rv1108
- reg: physical base address of the controller and length of memory mapped
region.
Example: GRF and PMUGRF of RK3399 SoCs
pmugrf: syscon@ff320000 {
compatible = "rockchip,rk3399-pmugrf", "syscon";
reg = <0x0 0xff320000 0x0 0x1000>;
};
grf: syscon@ff770000 {
compatible = "rockchip,rk3399-grf", "syscon";
reg = <0x0 0xff770000 0x0 0x10000>;
};


@ -0,0 +1,261 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/soc/rockchip/grf.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Rockchip General Register Files (GRF)
maintainers:
- Heiko Stuebner <heiko@sntech.de>
properties:
compatible:
oneOf:
- items:
- enum:
- rockchip,rk3288-sgrf
- rockchip,rv1108-pmugrf
- rockchip,rv1108-usbgrf
- const: syscon
- items:
- enum:
- rockchip,px30-grf
- rockchip,px30-pmugrf
- rockchip,px30-usb2phy-grf
- rockchip,rk3036-grf
- rockchip,rk3066-grf
- rockchip,rk3188-grf
- rockchip,rk3228-grf
- rockchip,rk3288-grf
- rockchip,rk3308-core-grf
- rockchip,rk3308-detect-grf
- rockchip,rk3308-grf
- rockchip,rk3308-usb2phy-grf
- rockchip,rk3328-grf
- rockchip,rk3328-usb2phy-grf
- rockchip,rk3368-grf
- rockchip,rk3368-pmugrf
- rockchip,rk3399-grf
- rockchip,rk3399-pmugrf
- rockchip,rk3568-grf
- rockchip,rk3568-pmugrf
- rockchip,rv1108-grf
- const: syscon
- const: simple-mfd
reg:
maxItems: 1
"#address-cells":
const: 1
"#size-cells":
const: 1
required:
- compatible
- reg
additionalProperties:
type: object
allOf:
- if:
properties:
compatible:
contains:
const: rockchip,px30-grf
then:
properties:
lvds:
description:
Documentation/devicetree/bindings/display/rockchip/rockchip-lvds.txt
- if:
properties:
compatible:
contains:
const: rockchip,rk3288-grf
then:
properties:
edp-phy:
description:
Documentation/devicetree/bindings/phy/rockchip-dp-phy.txt
- if:
properties:
compatible:
contains:
enum:
- rockchip,rk3066-grf
- rockchip,rk3188-grf
- rockchip,rk3288-grf
then:
properties:
usbphy:
type: object
$ref: "/schemas/phy/rockchip-usb-phy.yaml#"
unevaluatedProperties: false
- if:
properties:
compatible:
contains:
const: rockchip,rk3328-grf
then:
properties:
gpio:
type: object
$ref: "/schemas/gpio/rockchip,rk3328-grf-gpio.yaml#"
unevaluatedProperties: false
power-controller:
type: object
$ref: "/schemas/power/rockchip,power-controller.yaml#"
unevaluatedProperties: false
- if:
properties:
compatible:
contains:
const: rockchip,rk3399-grf
then:
properties:
mipi-dphy-rx0:
type: object
$ref: "/schemas/phy/rockchip-mipi-dphy-rx0.yaml#"
unevaluatedProperties: false
pcie-phy:
description:
Documentation/devicetree/bindings/phy/rockchip-pcie-phy.txt
patternProperties:
"phy@[0-9a-f]+$":
description:
Documentation/devicetree/bindings/phy/rockchip-emmc-phy.txt
- if:
properties:
compatible:
contains:
enum:
- rockchip,px30-pmugrf
- rockchip,rk3036-grf
- rockchip,rk3308-grf
- rockchip,rk3368-pmugrf
then:
properties:
reboot-mode:
type: object
$ref: "/schemas/power/reset/syscon-reboot-mode.yaml#"
unevaluatedProperties: false
- if:
properties:
compatible:
contains:
enum:
- rockchip,px30-usb2phy-grf
- rockchip,rk3228-grf
- rockchip,rk3308-usb2phy-grf
- rockchip,rk3328-usb2phy-grf
- rockchip,rk3399-grf
- rockchip,rv1108-grf
then:
required:
- "#address-cells"
- "#size-cells"
patternProperties:
"usb2phy@[0-9a-f]+$":
type: object
$ref: "/schemas/phy/phy-rockchip-inno-usb2.yaml#"
unevaluatedProperties: false
- if:
properties:
compatible:
contains:
enum:
- rockchip,px30-pmugrf
- rockchip,px30-grf
- rockchip,rk3228-grf
- rockchip,rk3288-grf
- rockchip,rk3328-grf
- rockchip,rk3368-pmugrf
- rockchip,rk3368-grf
- rockchip,rk3399-pmugrf
- rockchip,rk3399-grf
then:
properties:
io-domains:
description:
Documentation/devicetree/bindings/power/rockchip-io-domain.txt
examples:
- |
#include <dt-bindings/clock/rk3399-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/power/rk3399-power.h>
grf: syscon@ff770000 {
compatible = "rockchip,rk3399-grf", "syscon", "simple-mfd";
reg = <0xff770000 0x10000>;
#address-cells = <1>;
#size-cells = <1>;
mipi_dphy_rx0: mipi-dphy-rx0 {
compatible = "rockchip,rk3399-mipi-dphy-rx0";
clocks = <&cru SCLK_MIPIDPHY_REF>,
<&cru SCLK_DPHY_RX0_CFG>,
<&cru PCLK_VIO_GRF>;
clock-names = "dphy-ref", "dphy-cfg", "grf";
power-domains = <&power RK3399_PD_VIO>;
#phy-cells = <0>;
};
u2phy0: usb2phy@e450 {
compatible = "rockchip,rk3399-usb2phy";
reg = <0xe450 0x10>;
clocks = <&cru SCLK_USB2PHY0_REF>;
clock-names = "phyclk";
#clock-cells = <0>;
clock-output-names = "clk_usbphy0_480m";
u2phy0_host: host-port {
#phy-cells = <0>;
interrupts = <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH 0>;
interrupt-names = "linestate";
};
u2phy0_otg: otg-port {
#phy-cells = <0>;
interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH 0>;
interrupt-names = "otg-bvalid", "otg-id",
"linestate";
};
};
};


@ -1,136 +0,0 @@
* Rockchip Power Domains
Rockchip processors include support for multiple power domains which can be
powered up/down by software based on different application scenes to save power.
Required properties for power domain controller:
- compatible: Should be one of the following.
"rockchip,px30-power-controller" - for PX30 SoCs.
"rockchip,rk3036-power-controller" - for RK3036 SoCs.
"rockchip,rk3066-power-controller" - for RK3066 SoCs.
"rockchip,rk3128-power-controller" - for RK3128 SoCs.
"rockchip,rk3188-power-controller" - for RK3188 SoCs.
"rockchip,rk3228-power-controller" - for RK3228 SoCs.
"rockchip,rk3288-power-controller" - for RK3288 SoCs.
"rockchip,rk3328-power-controller" - for RK3328 SoCs.
"rockchip,rk3366-power-controller" - for RK3366 SoCs.
"rockchip,rk3368-power-controller" - for RK3368 SoCs.
"rockchip,rk3399-power-controller" - for RK3399 SoCs.
- #power-domain-cells: Number of cells in a power-domain specifier.
Should be 1 for multiple PM domains.
- #address-cells: Should be 1.
- #size-cells: Should be 0.
Required properties for power domain sub nodes:
- reg: index of the power domain, should use macros in:
"include/dt-bindings/power/px30-power.h" - for PX30 type power domain.
"include/dt-bindings/power/rk3036-power.h" - for RK3036 type power domain.
"include/dt-bindings/power/rk3066-power.h" - for RK3066 type power domain.
"include/dt-bindings/power/rk3128-power.h" - for RK3128 type power domain.
"include/dt-bindings/power/rk3188-power.h" - for RK3188 type power domain.
"include/dt-bindings/power/rk3228-power.h" - for RK3228 type power domain.
"include/dt-bindings/power/rk3288-power.h" - for RK3288 type power domain.
"include/dt-bindings/power/rk3328-power.h" - for RK3328 type power domain.
"include/dt-bindings/power/rk3366-power.h" - for RK3366 type power domain.
"include/dt-bindings/power/rk3368-power.h" - for RK3368 type power domain.
"include/dt-bindings/power/rk3399-power.h" - for RK3399 type power domain.
- clocks (optional): phandles to clocks which need to be enabled while power domain
switches state.
- pm_qos (optional): phandles to qos blocks which need to be saved and restored
while power domain switches state.
Qos Example:
qos_gpu: qos_gpu@ffaf0000 {
compatible ="syscon";
reg = <0x0 0xffaf0000 0x0 0x20>;
};
Example:
power: power-controller {
compatible = "rockchip,rk3288-power-controller";
#power-domain-cells = <1>;
#address-cells = <1>;
#size-cells = <0>;
pd_gpu {
reg = <RK3288_PD_GPU>;
clocks = <&cru ACLK_GPU>;
pm_qos = <&qos_gpu>;
};
};
power: power-controller {
compatible = "rockchip,rk3368-power-controller";
#power-domain-cells = <1>;
#address-cells = <1>;
#size-cells = <0>;
pd_gpu_1 {
reg = <RK3368_PD_GPU_1>;
clocks = <&cru ACLK_GPU_CFG>;
};
};
Example 2:
power: power-controller {
compatible = "rockchip,rk3399-power-controller";
#power-domain-cells = <1>;
#address-cells = <1>;
#size-cells = <0>;
pd_vio {
#address-cells = <1>;
#size-cells = <0>;
reg = <RK3399_PD_VIO>;
pd_vo {
#address-cells = <1>;
#size-cells = <0>;
reg = <RK3399_PD_VO>;
pd_vopb {
reg = <RK3399_PD_VOPB>;
};
pd_vopl {
reg = <RK3399_PD_VOPL>;
};
};
};
};
Node of a device using power domains must have a power-domains property,
containing a phandle to the power device node and an index specifying which
power domain to use.
The index should use macros in:
"include/dt-bindings/power/px30-power.h" - for px30 type power domain.
"include/dt-bindings/power/rk3036-power.h" - for rk3036 type power domain.
"include/dt-bindings/power/rk3128-power.h" - for rk3128 type power domain.
"include/dt-bindings/power/rk3128-power.h" - for rk3228 type power domain.
"include/dt-bindings/power/rk3288-power.h" - for rk3288 type power domain.
"include/dt-bindings/power/rk3328-power.h" - for rk3328 type power domain.
"include/dt-bindings/power/rk3366-power.h" - for rk3366 type power domain.
"include/dt-bindings/power/rk3368-power.h" - for rk3368 type power domain.
"include/dt-bindings/power/rk3399-power.h" - for rk3399 type power domain.
Example of the node using power domain:
node {
/* ... */
power-domains = <&power RK3288_PD_GPU>;
/* ... */
};
node {
/* ... */
power-domains = <&power RK3368_PD_GPU_1>;
/* ... */
};
node {
/* ... */
power-domains = <&power RK3399_PD_VOPB>;
/* ... */
};


@ -7182,6 +7182,13 @@ F: include/linux/firewire.h
F: include/uapi/linux/firewire*.h
F: tools/firewire/
FIRMWARE FRAMEWORK FOR ARMV8-A
M: Sudeep Holla <sudeep.holla@arm.com>
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: drivers/firmware/arm_ffa/
F: include/linux/arm_ffa.h
FIRMWARE LOADER (request_firmware)
M: Luis Chamberlain <mcgrof@kernel.org>
L: linux-kernel@vger.kernel.org
@ -11947,6 +11954,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-mem-ctrl.git
F: Documentation/devicetree/bindings/memory-controllers/
F: drivers/memory/
F: include/dt-bindings/memory/
F: include/memory/
MEMORY FREQUENCY SCALING DRIVERS FOR NVIDIA TEGRA
M: Dmitry Osipenko <digetx@gmail.com>


@ -102,8 +102,8 @@
/**
* struct cs_data - struct with info on a chipselect setting
* @enable_mask: mask to enable the chipselect in the EBI2 config
* @slow_cfg0: offset to XMEMC slow CS config
* @fast_cfg1: offset to XMEMC fast CS config
* @slow_cfg: offset to XMEMC slow CS config
* @fast_cfg: offset to XMEMC fast CS config
*/
struct cs_data {
u32 enable_mask;


@ -9,7 +9,7 @@ menu "Firmware Drivers"
config ARM_SCMI_PROTOCOL
tristate "ARM System Control and Management Interface (SCMI) Message Protocol"
depends on ARM || ARM64 || COMPILE_TEST
depends on MAILBOX
depends on MAILBOX || HAVE_ARM_SMCCC_DISCOVERY
help
ARM System Control and Management Interface (SCMI) protocol is a
set of operating system-independent software interfaces that are
@ -296,6 +296,7 @@ config TURRIS_MOX_RWTM
other manufacturing data and also utilize the Entropy Bit Generator
for hardware random number generation.
source "drivers/firmware/arm_ffa/Kconfig"
source "drivers/firmware/broadcom/Kconfig"
source "drivers/firmware/google/Kconfig"
source "drivers/firmware/efi/Kconfig"


@ -22,6 +22,7 @@ obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o
obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o
obj-$(CONFIG_TURRIS_MOX_RWTM) += turris-mox-rwtm.o
obj-y += arm_ffa/
obj-y += arm_scmi/
obj-y += broadcom/
obj-y += meson/


@ -0,0 +1,21 @@
# SPDX-License-Identifier: GPL-2.0-only
config ARM_FFA_TRANSPORT
tristate "Arm Firmware Framework for Armv8-A"
depends on OF
depends on ARM64
default n
help
This Firmware Framework (FF-A) for Arm A-profile processors describes
interfaces that standardize communication between the various
software images, including communication between images in
the Secure world and the Normal world. It also leverages the
virtualization extension to isolate software images provided
by an ecosystem of vendors from each other.
This driver provides the interface for client drivers that make
use of the features offered by Arm FF-A.
config ARM_FFA_SMCCC
bool
default ARM_FFA_TRANSPORT
depends on ARM64 && HAVE_ARM_SMCCC_DISCOVERY


@ -0,0 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
ffa-bus-y = bus.o
ffa-driver-y = driver.o
ffa-transport-$(CONFIG_ARM_FFA_SMCCC) += smccc.o
ffa-module-objs := $(ffa-bus-y) $(ffa-driver-y) $(ffa-transport-y)
obj-$(CONFIG_ARM_FFA_TRANSPORT) = ffa-module.o


@ -0,0 +1,210 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2021 ARM Ltd.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/arm_ffa.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/types.h>
#include "common.h"
static int ffa_device_match(struct device *dev, struct device_driver *drv)
{
const struct ffa_device_id *id_table;
struct ffa_device *ffa_dev;
id_table = to_ffa_driver(drv)->id_table;
ffa_dev = to_ffa_dev(dev);
while (!uuid_is_null(&id_table->uuid)) {
/*
* FF-A v1.0 doesn't provide discovery of UUIDs, just the
* partition IDs, so fetch the partition IDs for this
* id_table UUID and assign the UUID to the device if the
* partition ID matches
*/
if (uuid_is_null(&ffa_dev->uuid))
ffa_device_match_uuid(ffa_dev, &id_table->uuid);
if (uuid_equal(&ffa_dev->uuid, &id_table->uuid))
return 1;
id_table++;
}
return 0;
}
static int ffa_device_probe(struct device *dev)
{
struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
struct ffa_device *ffa_dev = to_ffa_dev(dev);
if (!ffa_device_match(dev, dev->driver))
return -ENODEV;
return ffa_drv->probe(ffa_dev);
}
static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
return add_uevent_var(env, "MODALIAS=arm_ffa:%04x:%pUb",
ffa_dev->vm_id, &ffa_dev->uuid);
}
static ssize_t partition_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
return sprintf(buf, "0x%04x\n", ffa_dev->vm_id);
}
static DEVICE_ATTR_RO(partition_id);
static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
return sprintf(buf, "%pUb\n", &ffa_dev->uuid);
}
static DEVICE_ATTR_RO(uuid);
static struct attribute *ffa_device_attributes_attrs[] = {
&dev_attr_partition_id.attr,
&dev_attr_uuid.attr,
NULL,
};
ATTRIBUTE_GROUPS(ffa_device_attributes);
struct bus_type ffa_bus_type = {
.name = "arm_ffa",
.match = ffa_device_match,
.probe = ffa_device_probe,
.uevent = ffa_device_uevent,
.dev_groups = ffa_device_attributes_groups,
};
EXPORT_SYMBOL_GPL(ffa_bus_type);
int ffa_driver_register(struct ffa_driver *driver, struct module *owner,
const char *mod_name)
{
int ret;
driver->driver.bus = &ffa_bus_type;
driver->driver.name = driver->name;
driver->driver.owner = owner;
driver->driver.mod_name = mod_name;
ret = driver_register(&driver->driver);
if (!ret)
pr_debug("registered new ffa driver %s\n", driver->name);
return ret;
}
EXPORT_SYMBOL_GPL(ffa_driver_register);
void ffa_driver_unregister(struct ffa_driver *driver)
{
driver_unregister(&driver->driver);
}
EXPORT_SYMBOL_GPL(ffa_driver_unregister);
static void ffa_release_device(struct device *dev)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
kfree(ffa_dev);
}
static int __ffa_devices_unregister(struct device *dev, void *data)
{
ffa_release_device(dev);
return 0;
}
static void ffa_devices_unregister(void)
{
bus_for_each_dev(&ffa_bus_type, NULL, NULL,
__ffa_devices_unregister);
}
bool ffa_device_is_valid(struct ffa_device *ffa_dev)
{
bool valid = false;
struct device *dev = NULL;
struct ffa_device *tmp_dev;
do {
dev = bus_find_next_device(&ffa_bus_type, dev);
tmp_dev = to_ffa_dev(dev);
if (tmp_dev == ffa_dev) {
valid = true;
break;
}
put_device(dev);
} while (dev);
put_device(dev);
return valid;
}
struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id)
{
int ret;
struct device *dev;
struct ffa_device *ffa_dev;
ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL);
if (!ffa_dev)
return NULL;
dev = &ffa_dev->dev;
dev->bus = &ffa_bus_type;
dev->release = ffa_release_device;
dev_set_name(&ffa_dev->dev, "arm-ffa-%04x", vm_id);
ffa_dev->vm_id = vm_id;
uuid_copy(&ffa_dev->uuid, uuid);
ret = device_register(&ffa_dev->dev);
if (ret) {
dev_err(dev, "unable to register device %s err=%d\n",
dev_name(dev), ret);
put_device(dev);
return NULL;
}
return ffa_dev;
}
EXPORT_SYMBOL_GPL(ffa_device_register);
void ffa_device_unregister(struct ffa_device *ffa_dev)
{
if (!ffa_dev)
return;
device_unregister(&ffa_dev->dev);
}
EXPORT_SYMBOL_GPL(ffa_device_unregister);
int arm_ffa_bus_init(void)
{
return bus_register(&ffa_bus_type);
}
void arm_ffa_bus_exit(void)
{
ffa_devices_unregister();
bus_unregister(&ffa_bus_type);
}
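
To make the bus interface above concrete, here is a hedged sketch of a minimal FF-A client driver registering against it. The driver name and UUID are invented, and the struct ffa_driver / struct ffa_device_id layouts are assumed from the matching and registration code in this file (an id-table entry carries a uuid_t and the table is terminated by an empty entry).

#include <linux/arm_ffa.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/uuid.h>

/* Hypothetical FF-A client driver; it relies only on the interfaces
 * visible in this file: uuid-based matching, ffa_driver_register() and
 * ffa_driver_unregister().
 */
static int demo_ffa_probe(struct ffa_device *ffa_dev)
{
        /* Called once a partition advertising our UUID shows up on the bus */
        dev_info(&ffa_dev->dev, "bound to partition 0x%04x\n", ffa_dev->vm_id);
        return 0;
}

static const struct ffa_device_id demo_ffa_id_table[] = {
        /* UUID of the (made-up) secure partition this driver talks to */
        { UUID_INIT(0x12345678, 0x9abc, 0xdef0,
                    0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0) },
        {}
};

static struct ffa_driver demo_ffa_driver = {
        .name     = "demo-ffa-client",
        .probe    = demo_ffa_probe,
        .id_table = demo_ffa_id_table,
};

static int __init demo_ffa_client_init(void)
{
        return ffa_driver_register(&demo_ffa_driver, THIS_MODULE, KBUILD_MODNAME);
}
module_init(demo_ffa_client_init);

static void __exit demo_ffa_client_exit(void)
{
        ffa_driver_unregister(&demo_ffa_driver);
}
module_exit(demo_ffa_client_exit);

MODULE_LICENSE("GPL v2");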


@ -0,0 +1,31 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2021 ARM Ltd.
*/
#ifndef _FFA_COMMON_H
#define _FFA_COMMON_H
#include <linux/arm_ffa.h>
#include <linux/arm-smccc.h>
#include <linux/err.h>
typedef struct arm_smccc_1_2_regs ffa_value_t;
typedef void (ffa_fn)(ffa_value_t, ffa_value_t *);
int arm_ffa_bus_init(void);
void arm_ffa_bus_exit(void);
bool ffa_device_is_valid(struct ffa_device *ffa_dev);
void ffa_device_match_uuid(struct ffa_device *ffa_dev, const uuid_t *uuid);
#ifdef CONFIG_ARM_FFA_SMCCC
int __init ffa_transport_init(ffa_fn **invoke_ffa_fn);
#else
static inline int __init ffa_transport_init(ffa_fn **invoke_ffa_fn)
{
return -EOPNOTSUPP;
}
#endif
#endif /* _FFA_COMMON_H */


@ -0,0 +1,731 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Arm Firmware Framework for ARMv8-A(FFA) interface driver
*
* The Arm FFA specification[1] describes a software architecture to
* leverages the virtualization extension to isolate software images
* provided by an ecosystem of vendors from each other and describes
* interfaces that standardize communication between the various software
* images including communication between images in the Secure world and
* Normal world. Any Hypervisor could use the FFA interfaces to enable
* communication between VMs it manages.
*
* The Hypervisor, a.k.a. the Partition Manager in FFA terminology, can assign
* system resources (memory regions, devices, CPU cycles) to the partitions
* and manage isolation amongst them.
*
* [1] https://developer.arm.com/docs/den0077/latest
*
* Copyright (C) 2021 ARM Ltd.
*/
#define DRIVER_NAME "ARM FF-A"
#define pr_fmt(fmt) DRIVER_NAME ": " fmt
#include <linux/arm_ffa.h>
#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include "common.h"
#define FFA_DRIVER_VERSION FFA_VERSION_1_0
#define FFA_SMC(calling_convention, func_num) \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, (calling_convention), \
ARM_SMCCC_OWNER_STANDARD, (func_num))
#define FFA_SMC_32(func_num) FFA_SMC(ARM_SMCCC_SMC_32, (func_num))
#define FFA_SMC_64(func_num) FFA_SMC(ARM_SMCCC_SMC_64, (func_num))
#define FFA_ERROR FFA_SMC_32(0x60)
#define FFA_SUCCESS FFA_SMC_32(0x61)
#define FFA_INTERRUPT FFA_SMC_32(0x62)
#define FFA_VERSION FFA_SMC_32(0x63)
#define FFA_FEATURES FFA_SMC_32(0x64)
#define FFA_RX_RELEASE FFA_SMC_32(0x65)
#define FFA_RXTX_MAP FFA_SMC_32(0x66)
#define FFA_FN64_RXTX_MAP FFA_SMC_64(0x66)
#define FFA_RXTX_UNMAP FFA_SMC_32(0x67)
#define FFA_PARTITION_INFO_GET FFA_SMC_32(0x68)
#define FFA_ID_GET FFA_SMC_32(0x69)
#define FFA_MSG_POLL FFA_SMC_32(0x6A)
#define FFA_MSG_WAIT FFA_SMC_32(0x6B)
#define FFA_YIELD FFA_SMC_32(0x6C)
#define FFA_RUN FFA_SMC_32(0x6D)
#define FFA_MSG_SEND FFA_SMC_32(0x6E)
#define FFA_MSG_SEND_DIRECT_REQ FFA_SMC_32(0x6F)
#define FFA_FN64_MSG_SEND_DIRECT_REQ FFA_SMC_64(0x6F)
#define FFA_MSG_SEND_DIRECT_RESP FFA_SMC_32(0x70)
#define FFA_FN64_MSG_SEND_DIRECT_RESP FFA_SMC_64(0x70)
#define FFA_MEM_DONATE FFA_SMC_32(0x71)
#define FFA_FN64_MEM_DONATE FFA_SMC_64(0x71)
#define FFA_MEM_LEND FFA_SMC_32(0x72)
#define FFA_FN64_MEM_LEND FFA_SMC_64(0x72)
#define FFA_MEM_SHARE FFA_SMC_32(0x73)
#define FFA_FN64_MEM_SHARE FFA_SMC_64(0x73)
#define FFA_MEM_RETRIEVE_REQ FFA_SMC_32(0x74)
#define FFA_FN64_MEM_RETRIEVE_REQ FFA_SMC_64(0x74)
#define FFA_MEM_RETRIEVE_RESP FFA_SMC_32(0x75)
#define FFA_MEM_RELINQUISH FFA_SMC_32(0x76)
#define FFA_MEM_RECLAIM FFA_SMC_32(0x77)
#define FFA_MEM_OP_PAUSE FFA_SMC_32(0x78)
#define FFA_MEM_OP_RESUME FFA_SMC_32(0x79)
#define FFA_MEM_FRAG_RX FFA_SMC_32(0x7A)
#define FFA_MEM_FRAG_TX FFA_SMC_32(0x7B)
#define FFA_NORMAL_WORLD_RESUME FFA_SMC_32(0x7C)
/*
* For some calls it is necessary to use SMC64 to pass or return 64-bit values.
* For such calls FFA_FN_NATIVE(name) will choose the appropriate
* (native-width) function ID.
*/
#ifdef CONFIG_64BIT
#define FFA_FN_NATIVE(name) FFA_FN64_##name
#else
#define FFA_FN_NATIVE(name) FFA_##name
#endif
/* FFA error codes. */
#define FFA_RET_SUCCESS (0)
#define FFA_RET_NOT_SUPPORTED (-1)
#define FFA_RET_INVALID_PARAMETERS (-2)
#define FFA_RET_NO_MEMORY (-3)
#define FFA_RET_BUSY (-4)
#define FFA_RET_INTERRUPTED (-5)
#define FFA_RET_DENIED (-6)
#define FFA_RET_RETRY (-7)
#define FFA_RET_ABORTED (-8)
#define MAJOR_VERSION_MASK GENMASK(30, 16)
#define MINOR_VERSION_MASK GENMASK(15, 0)
#define MAJOR_VERSION(x) ((u16)(FIELD_GET(MAJOR_VERSION_MASK, (x))))
#define MINOR_VERSION(x) ((u16)(FIELD_GET(MINOR_VERSION_MASK, (x))))
#define PACK_VERSION_INFO(major, minor) \
(FIELD_PREP(MAJOR_VERSION_MASK, (major)) | \
FIELD_PREP(MINOR_VERSION_MASK, (minor)))
#define FFA_VERSION_1_0 PACK_VERSION_INFO(1, 0)
#define FFA_MIN_VERSION FFA_VERSION_1_0
#define SENDER_ID_MASK GENMASK(31, 16)
#define RECEIVER_ID_MASK GENMASK(15, 0)
#define SENDER_ID(x) ((u16)(FIELD_GET(SENDER_ID_MASK, (x))))
#define RECEIVER_ID(x) ((u16)(FIELD_GET(RECEIVER_ID_MASK, (x))))
#define PACK_TARGET_INFO(s, r) \
(FIELD_PREP(SENDER_ID_MASK, (s)) | FIELD_PREP(RECEIVER_ID_MASK, (r)))
/*
* The FF-A specification explicitly refers to '4K pages'. This should
* not be confused with the kernel PAGE_SIZE, which is the translation
* granule the kernel is configured with and may be one of 4K, 16K or 64K.
*/
#define FFA_PAGE_SIZE SZ_4K
/*
* Keep the RX/TX buffer size at 4K for now; 64K may be preferred to
* keep it at least one page in a 64K PAGE_SIZE configuration.
*/
#define RXTX_BUFFER_SIZE SZ_4K
static ffa_fn *invoke_ffa_fn;
static const int ffa_linux_errmap[] = {
/* better than switch case as long as return value is continuous */
0, /* FFA_RET_SUCCESS */
-EOPNOTSUPP, /* FFA_RET_NOT_SUPPORTED */
-EINVAL, /* FFA_RET_INVALID_PARAMETERS */
-ENOMEM, /* FFA_RET_NO_MEMORY */
-EBUSY, /* FFA_RET_BUSY */
-EINTR, /* FFA_RET_INTERRUPTED */
-EACCES, /* FFA_RET_DENIED */
-EAGAIN, /* FFA_RET_RETRY */
-ECANCELED, /* FFA_RET_ABORTED */
};
static inline int ffa_to_linux_errno(int errno)
{
if (errno < FFA_RET_SUCCESS && errno >= -ARRAY_SIZE(ffa_linux_errmap))
return ffa_linux_errmap[-errno];
return -EINVAL;
}
struct ffa_drv_info {
u32 version;
u16 vm_id;
struct mutex rx_lock; /* lock to protect Rx buffer */
struct mutex tx_lock; /* lock to protect Tx buffer */
void *rx_buffer;
void *tx_buffer;
};
static struct ffa_drv_info *drv_info;
static int ffa_version_check(u32 *version)
{
ffa_value_t ver;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_VERSION, .a1 = FFA_DRIVER_VERSION,
}, &ver);
if (ver.a0 == FFA_RET_NOT_SUPPORTED) {
pr_info("FFA_VERSION returned not supported\n");
return -EOPNOTSUPP;
}
if (ver.a0 < FFA_MIN_VERSION || ver.a0 > FFA_DRIVER_VERSION) {
pr_err("Incompatible version %d.%d found\n",
MAJOR_VERSION(ver.a0), MINOR_VERSION(ver.a0));
return -EINVAL;
}
*version = ver.a0;
pr_info("Version %d.%d found\n", MAJOR_VERSION(ver.a0),
MINOR_VERSION(ver.a0));
return 0;
}
static int ffa_rx_release(void)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_RX_RELEASE,
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
/* check for ret.a0 == FFA_RX_RELEASE ? */
return 0;
}
static int ffa_rxtx_map(phys_addr_t tx_buf, phys_addr_t rx_buf, u32 pg_cnt)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_FN_NATIVE(RXTX_MAP),
.a1 = tx_buf, .a2 = rx_buf, .a3 = pg_cnt,
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
return 0;
}
static int ffa_rxtx_unmap(u16 vm_id)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_RXTX_UNMAP, .a1 = PACK_TARGET_INFO(vm_id, 0),
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
return 0;
}
/* buffer must be sizeof(struct ffa_partition_info) * num_partitions */
static int
__ffa_partition_info_get(u32 uuid0, u32 uuid1, u32 uuid2, u32 uuid3,
struct ffa_partition_info *buffer, int num_partitions)
{
int count;
ffa_value_t partition_info;
mutex_lock(&drv_info->rx_lock);
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_PARTITION_INFO_GET,
.a1 = uuid0, .a2 = uuid1, .a3 = uuid2, .a4 = uuid3,
}, &partition_info);
if (partition_info.a0 == FFA_ERROR) {
mutex_unlock(&drv_info->rx_lock);
return ffa_to_linux_errno((int)partition_info.a2);
}
count = partition_info.a2;
if (buffer && count <= num_partitions)
memcpy(buffer, drv_info->rx_buffer, sizeof(*buffer) * count);
ffa_rx_release();
mutex_unlock(&drv_info->rx_lock);
return count;
}
/* buffer is allocated and caller must free the same if returned count > 0 */
static int
ffa_partition_probe(const uuid_t *uuid, struct ffa_partition_info **buffer)
{
int count;
u32 uuid0_4[4];
struct ffa_partition_info *pbuf;
export_uuid((u8 *)uuid0_4, uuid);
count = __ffa_partition_info_get(uuid0_4[0], uuid0_4[1], uuid0_4[2],
uuid0_4[3], NULL, 0);
if (count <= 0)
return count;
pbuf = kcalloc(count, sizeof(*pbuf), GFP_KERNEL);
if (!pbuf)
return -ENOMEM;
count = __ffa_partition_info_get(uuid0_4[0], uuid0_4[1], uuid0_4[2],
uuid0_4[3], pbuf, count);
if (count <= 0)
kfree(pbuf);
else
*buffer = pbuf;
return count;
}
#define VM_ID_MASK GENMASK(15, 0)
static int ffa_id_get(u16 *vm_id)
{
ffa_value_t id;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_ID_GET,
}, &id);
if (id.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)id.a2);
*vm_id = FIELD_GET(VM_ID_MASK, (id.a2));
return 0;
}
static int ffa_msg_send_direct_req(u16 src_id, u16 dst_id, bool mode_32bit,
struct ffa_send_direct_data *data)
{
u32 req_id, resp_id, src_dst_ids = PACK_TARGET_INFO(src_id, dst_id);
ffa_value_t ret;
if (mode_32bit) {
req_id = FFA_MSG_SEND_DIRECT_REQ;
resp_id = FFA_MSG_SEND_DIRECT_RESP;
} else {
req_id = FFA_FN_NATIVE(MSG_SEND_DIRECT_REQ);
resp_id = FFA_FN_NATIVE(MSG_SEND_DIRECT_RESP);
}
invoke_ffa_fn((ffa_value_t){
.a0 = req_id, .a1 = src_dst_ids, .a2 = 0,
.a3 = data->data0, .a4 = data->data1, .a5 = data->data2,
.a6 = data->data3, .a7 = data->data4,
}, &ret);
while (ret.a0 == FFA_INTERRUPT)
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_RUN, .a1 = ret.a1,
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
if (ret.a0 == resp_id) {
data->data0 = ret.a3;
data->data1 = ret.a4;
data->data2 = ret.a5;
data->data3 = ret.a6;
data->data4 = ret.a7;
return 0;
}
return -EINVAL;
}
static int ffa_mem_first_frag(u32 func_id, phys_addr_t buf, u32 buf_sz,
u32 frag_len, u32 len, u64 *handle)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = func_id, .a1 = len, .a2 = frag_len,
.a3 = buf, .a4 = buf_sz,
}, &ret);
while (ret.a0 == FFA_MEM_OP_PAUSE)
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_MEM_OP_RESUME,
.a1 = ret.a1, .a2 = ret.a2,
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
if (ret.a0 != FFA_SUCCESS)
return -EOPNOTSUPP;
if (handle)
*handle = PACK_HANDLE(ret.a2, ret.a3);
return frag_len;
}
static int ffa_mem_next_frag(u64 handle, u32 frag_len)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_MEM_FRAG_TX,
.a1 = HANDLE_LOW(handle), .a2 = HANDLE_HIGH(handle),
.a3 = frag_len,
}, &ret);
while (ret.a0 == FFA_MEM_OP_PAUSE)
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_MEM_OP_RESUME,
.a1 = ret.a1, .a2 = ret.a2,
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
if (ret.a0 != FFA_MEM_FRAG_RX)
return -EOPNOTSUPP;
return ret.a3;
}
static int
ffa_transmit_fragment(u32 func_id, phys_addr_t buf, u32 buf_sz, u32 frag_len,
u32 len, u64 *handle, bool first)
{
if (!first)
return ffa_mem_next_frag(*handle, frag_len);
return ffa_mem_first_frag(func_id, buf, buf_sz, frag_len, len, handle);
}
static u32 ffa_get_num_pages_sg(struct scatterlist *sg)
{
u32 num_pages = 0;
do {
num_pages += sg->length / FFA_PAGE_SIZE;
} while ((sg = sg_next(sg)));
return num_pages;
}
static int
ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
struct ffa_mem_ops_args *args)
{
int rc = 0;
bool first = true;
phys_addr_t addr = 0;
struct ffa_composite_mem_region *composite;
struct ffa_mem_region_addr_range *constituents;
struct ffa_mem_region_attributes *ep_mem_access;
struct ffa_mem_region *mem_region = buffer;
u32 idx, frag_len, length, buf_sz = 0, num_entries = sg_nents(args->sg);
mem_region->tag = args->tag;
mem_region->flags = args->flags;
mem_region->sender_id = drv_info->vm_id;
mem_region->attributes = FFA_MEM_NORMAL | FFA_MEM_WRITE_BACK |
FFA_MEM_INNER_SHAREABLE;
ep_mem_access = &mem_region->ep_mem_access[0];
for (idx = 0; idx < args->nattrs; idx++, ep_mem_access++) {
ep_mem_access->receiver = args->attrs[idx].receiver;
ep_mem_access->attrs = args->attrs[idx].attrs;
ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs);
}
mem_region->ep_count = args->nattrs;
composite = buffer + COMPOSITE_OFFSET(args->nattrs);
composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg);
composite->addr_range_cnt = num_entries;
length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries);
frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0);
if (frag_len > max_fragsize)
return -ENXIO;
if (!args->use_txbuf) {
addr = virt_to_phys(buffer);
buf_sz = max_fragsize / FFA_PAGE_SIZE;
}
constituents = buffer + frag_len;
idx = 0;
do {
if (frag_len == max_fragsize) {
rc = ffa_transmit_fragment(func_id, addr, buf_sz,
frag_len, length,
&args->g_handle, first);
if (rc < 0)
return -ENXIO;
first = false;
idx = 0;
frag_len = 0;
constituents = buffer;
}
if ((void *)constituents - buffer > max_fragsize) {
pr_err("Memory Region Fragment > Tx Buffer size\n");
return -EFAULT;
}
constituents->address = sg_phys(args->sg);
constituents->pg_cnt = args->sg->length / FFA_PAGE_SIZE;
constituents++;
frag_len += sizeof(struct ffa_mem_region_addr_range);
} while ((args->sg = sg_next(args->sg)));
return ffa_transmit_fragment(func_id, addr, buf_sz, frag_len,
length, &args->g_handle, first);
}
static int ffa_memory_ops(u32 func_id, struct ffa_mem_ops_args *args)
{
int ret;
void *buffer;
if (!args->use_txbuf) {
buffer = alloc_pages_exact(RXTX_BUFFER_SIZE, GFP_KERNEL);
if (!buffer)
return -ENOMEM;
} else {
buffer = drv_info->tx_buffer;
mutex_lock(&drv_info->tx_lock);
}
ret = ffa_setup_and_transmit(func_id, buffer, RXTX_BUFFER_SIZE, args);
if (args->use_txbuf)
mutex_unlock(&drv_info->tx_lock);
else
free_pages_exact(buffer, RXTX_BUFFER_SIZE);
return ret < 0 ? ret : 0;
}
static int ffa_memory_reclaim(u64 g_handle, u32 flags)
{
ffa_value_t ret;
invoke_ffa_fn((ffa_value_t){
.a0 = FFA_MEM_RECLAIM,
.a1 = HANDLE_LOW(g_handle), .a2 = HANDLE_HIGH(g_handle),
.a3 = flags,
}, &ret);
if (ret.a0 == FFA_ERROR)
return ffa_to_linux_errno((int)ret.a2);
return 0;
}
static u32 ffa_api_version_get(void)
{
return drv_info->version;
}
static int ffa_partition_info_get(const char *uuid_str,
struct ffa_partition_info *buffer)
{
int count;
uuid_t uuid;
struct ffa_partition_info *pbuf;
if (uuid_parse(uuid_str, &uuid)) {
pr_err("invalid uuid (%s)\n", uuid_str);
return -ENODEV;
}
count = ffa_partition_probe(&uuid_null, &pbuf);
if (count <= 0)
return -ENOENT;
memcpy(buffer, pbuf, sizeof(*pbuf) * count);
kfree(pbuf);
return 0;
}
static void ffa_mode_32bit_set(struct ffa_device *dev)
{
dev->mode_32bit = true;
}
static int ffa_sync_send_receive(struct ffa_device *dev,
struct ffa_send_direct_data *data)
{
return ffa_msg_send_direct_req(drv_info->vm_id, dev->vm_id,
dev->mode_32bit, data);
}
static int
ffa_memory_share(struct ffa_device *dev, struct ffa_mem_ops_args *args)
{
if (dev->mode_32bit)
return ffa_memory_ops(FFA_MEM_SHARE, args);
return ffa_memory_ops(FFA_FN_NATIVE(MEM_SHARE), args);
}
static const struct ffa_dev_ops ffa_ops = {
.api_version_get = ffa_api_version_get,
.partition_info_get = ffa_partition_info_get,
.mode_32bit_set = ffa_mode_32bit_set,
.sync_send_receive = ffa_sync_send_receive,
.memory_reclaim = ffa_memory_reclaim,
.memory_share = ffa_memory_share,
};
const struct ffa_dev_ops *ffa_dev_ops_get(struct ffa_device *dev)
{
if (ffa_device_is_valid(dev))
return &ffa_ops;
return NULL;
}
EXPORT_SYMBOL_GPL(ffa_dev_ops_get);
void ffa_device_match_uuid(struct ffa_device *ffa_dev, const uuid_t *uuid)
{
int count, idx;
struct ffa_partition_info *pbuf, *tpbuf;
count = ffa_partition_probe(uuid, &pbuf);
if (count <= 0)
return;
for (idx = 0, tpbuf = pbuf; idx < count; idx++, tpbuf++)
if (tpbuf->id == ffa_dev->vm_id)
uuid_copy(&ffa_dev->uuid, uuid);
kfree(pbuf);
}
static void ffa_setup_partitions(void)
{
int count, idx;
struct ffa_device *ffa_dev;
struct ffa_partition_info *pbuf, *tpbuf;
count = ffa_partition_probe(&uuid_null, &pbuf);
if (count <= 0) {
pr_info("%s: No partitions found, error %d\n", __func__, count);
return;
}
for (idx = 0, tpbuf = pbuf; idx < count; idx++, tpbuf++) {
/* Note that the &uuid_null parameter will require
* ffa_device_match() to find the UUID of this partition id
* with help of ffa_device_match_uuid(). Once the FF-A spec
* is updated to provide correct UUID here for each partition
* as part of the discovery API, we need to pass the
* discovered UUID here instead.
*/
ffa_dev = ffa_device_register(&uuid_null, tpbuf->id);
if (!ffa_dev) {
pr_err("%s: failed to register partition ID 0x%x\n",
__func__, tpbuf->id);
continue;
}
ffa_dev_set_drvdata(ffa_dev, drv_info);
}
kfree(pbuf);
}
static int __init ffa_init(void)
{
int ret;
ret = ffa_transport_init(&invoke_ffa_fn);
if (ret)
return ret;
ret = arm_ffa_bus_init();
if (ret)
return ret;
drv_info = kzalloc(sizeof(*drv_info), GFP_KERNEL);
if (!drv_info) {
ret = -ENOMEM;
goto ffa_bus_exit;
}
ret = ffa_version_check(&drv_info->version);
if (ret)
goto free_drv_info;
if (ffa_id_get(&drv_info->vm_id)) {
pr_err("failed to obtain VM id for self\n");
ret = -ENODEV;
goto free_drv_info;
}
drv_info->rx_buffer = alloc_pages_exact(RXTX_BUFFER_SIZE, GFP_KERNEL);
if (!drv_info->rx_buffer) {
ret = -ENOMEM;
goto free_pages;
}
drv_info->tx_buffer = alloc_pages_exact(RXTX_BUFFER_SIZE, GFP_KERNEL);
if (!drv_info->tx_buffer) {
ret = -ENOMEM;
goto free_pages;
}
ret = ffa_rxtx_map(virt_to_phys(drv_info->tx_buffer),
virt_to_phys(drv_info->rx_buffer),
RXTX_BUFFER_SIZE / FFA_PAGE_SIZE);
if (ret) {
pr_err("failed to register FFA RxTx buffers\n");
goto free_pages;
}
mutex_init(&drv_info->rx_lock);
mutex_init(&drv_info->tx_lock);
ffa_setup_partitions();
return 0;
free_pages:
if (drv_info->tx_buffer)
free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE);
free_pages_exact(drv_info->rx_buffer, RXTX_BUFFER_SIZE);
free_drv_info:
kfree(drv_info);
ffa_bus_exit:
arm_ffa_bus_exit();
return ret;
}
subsys_initcall(ffa_init);
static void __exit ffa_exit(void)
{
ffa_rxtx_unmap(drv_info->vm_id);
free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE);
free_pages_exact(drv_info->rx_buffer, RXTX_BUFFER_SIZE);
kfree(drv_info);
arm_ffa_bus_exit();
}
module_exit(ffa_exit);
MODULE_ALIAS("arm-ffa");
MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
MODULE_DESCRIPTION("Arm FF-A interface driver");
MODULE_LICENSE("GPL v2");
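
And a hedged sketch of how a client, once probed, might use the ffa_dev_ops exported by this file to exchange a direct message with its partition. Only ffa_dev_ops_get(), mode_32bit_set() and sync_send_receive() from the code above are assumed; the helper function and its payload are invented for illustration.

#include <linux/arm_ffa.h>
#include <linux/device.h>
#include <linux/errno.h>

/* Hypothetical helper a client could call from its probe routine */
static int demo_ffa_direct_msg(struct ffa_device *ffa_dev)
{
        const struct ffa_dev_ops *ops;
        struct ffa_send_direct_data data = {
                .data0 = 0x1,   /* arbitrary request payload */
                .data1 = 0x2,
        };
        int ret;

        ops = ffa_dev_ops_get(ffa_dev);
        if (!ops)
                return -ENODEV;

        /* Use the 32-bit direct request ABI for this partition */
        ops->mode_32bit_set(ffa_dev);

        /* Blocking direct request/response exchange with the partition */
        ret = ops->sync_send_receive(ffa_dev, &data);
        if (ret)
                return ret;

        /* On success the reply is written back into 'data' by the core */
        dev_info(&ffa_dev->dev, "reply data0=0x%lx\n", (unsigned long)data.data0);
        return 0;
}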


@ -0,0 +1,39 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2021 ARM Ltd.
*/
#include <linux/printk.h>
#include "common.h"
static void __arm_ffa_fn_smc(ffa_value_t args, ffa_value_t *res)
{
arm_smccc_1_2_smc(&args, res);
}
static void __arm_ffa_fn_hvc(ffa_value_t args, ffa_value_t *res)
{
arm_smccc_1_2_hvc(&args, res);
}
int __init ffa_transport_init(ffa_fn **invoke_ffa_fn)
{
enum arm_smccc_conduit conduit;
if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
return -EOPNOTSUPP;
conduit = arm_smccc_1_1_get_conduit();
if (conduit == SMCCC_CONDUIT_NONE) {
pr_err("%s: invalid SMCCC conduit\n", __func__);
return -EOPNOTSUPP;
}
if (conduit == SMCCC_CONDUIT_SMC)
*invoke_ffa_fn = __arm_ffa_fn_smc;
else
*invoke_ffa_fn = __arm_ffa_fn_hvc;
return 0;
}


@ -331,7 +331,7 @@ struct scmi_desc {
};
extern const struct scmi_desc scmi_mailbox_desc;
#ifdef CONFIG_HAVE_ARM_SMCCC
#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
extern const struct scmi_desc scmi_smc_desc;
#endif


@ -241,7 +241,6 @@ static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
xfer = &minfo->xfer_block[xfer_id];
xfer->hdr.seq = xfer_id;
reinit_completion(&xfer->done);
xfer->transfer_id = atomic_inc_return(&transfer_last_id);
return xfer;
@ -335,6 +334,10 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo,
return;
}
/* rx.len could be shrunk in the sync do_xfer, so reset to maxsz */
if (msg_type == MSG_TYPE_DELAYED_RESP)
xfer->rx.len = info->desc->max_msg_size;
scmi_dump_header_dbg(dev, &xfer->hdr);
info->desc->ops->fetch_response(cinfo, xfer);
@ -429,11 +432,12 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
struct scmi_chan_info *cinfo;
/*
* Re-instate protocol id here from protocol handle so that cannot be
* Initialise protocol id now from protocol handle to avoid it being
* overridden by mistake (or malice) by the protocol code mangling with
* the scmi_xfer structure.
* the scmi_xfer structure prior to this.
*/
xfer->hdr.protocol_id = pi->proto->id;
reinit_completion(&xfer->done);
cinfo = idr_find(&info->tx_idr, xfer->hdr.protocol_id);
if (unlikely(!cinfo))
@ -505,16 +509,17 @@ static int do_xfer_with_response(const struct scmi_protocol_handle *ph,
struct scmi_xfer *xfer)
{
int ret, timeout = msecs_to_jiffies(SCMI_MAX_RESPONSE_TIMEOUT);
const struct scmi_protocol_instance *pi = ph_to_pi(ph);
DECLARE_COMPLETION_ONSTACK(async_response);
xfer->hdr.protocol_id = pi->proto->id;
xfer->async_done = &async_response;
ret = do_xfer(ph, xfer);
if (!ret && !wait_for_completion_timeout(xfer->async_done, timeout))
ret = -ETIMEDOUT;
if (!ret) {
if (!wait_for_completion_timeout(xfer->async_done, timeout))
ret = -ETIMEDOUT;
else if (xfer->hdr.status)
ret = scmi_to_linux_errno(xfer->hdr.status);
}
xfer->async_done = NULL;
return ret;
@ -561,7 +566,6 @@ static int xfer_get_init(const struct scmi_protocol_handle *ph,
xfer->tx.len = tx_size;
xfer->rx.len = rx_size ? : info->desc->max_msg_size;
xfer->hdr.id = msg_id;
xfer->hdr.protocol_id = pi->proto->id;
xfer->hdr.poll_completion = false;
*p = xfer;
@ -1567,7 +1571,9 @@ ATTRIBUTE_GROUPS(versions);
/* Each compatible listed below must have descriptor associated with it */
static const struct of_device_id scmi_of_match[] = {
#ifdef CONFIG_MAILBOX
{ .compatible = "arm,scmi", .data = &scmi_mailbox_desc },
#endif
#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
#endif


@ -69,6 +69,9 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
return -ENOMEM;
shmem = of_parse_phandle(cdev->of_node, "shmem", idx);
if (!of_device_is_compatible(shmem, "arm,scmi-shmem"))
return -ENXIO;
ret = of_address_to_resource(shmem, 0, &res);
of_node_put(shmem);
if (ret) {


@ -8,6 +8,7 @@
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/pm_clock.h>
#include <linux/pm_domain.h>
#include <linux/scmi_protocol.h>
@ -52,6 +53,27 @@ static int scmi_pd_power_off(struct generic_pm_domain *domain)
return scmi_pd_power(domain, false);
}
static int scmi_pd_attach_dev(struct generic_pm_domain *pd, struct device *dev)
{
int ret;
ret = pm_clk_create(dev);
if (ret)
return ret;
ret = of_pm_clk_add_clks(dev);
if (ret >= 0)
return 0;
pm_clk_destroy(dev);
return ret;
}
static void scmi_pd_detach_dev(struct generic_pm_domain *pd, struct device *dev)
{
pm_clk_destroy(dev);
}
static int scmi_pm_domain_probe(struct scmi_device *sdev)
{
int num_domains, i;
@ -102,6 +124,10 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
scmi_pd->genpd.name = scmi_pd->name;
scmi_pd->genpd.power_off = scmi_pd_power_off;
scmi_pd->genpd.power_on = scmi_pd_power_on;
scmi_pd->genpd.attach_dev = scmi_pd_attach_dev;
scmi_pd->genpd.detach_dev = scmi_pd_detach_dev;
scmi_pd->genpd.flags = GENPD_FLAG_PM_CLK |
GENPD_FLAG_ACTIVE_WAKEUP;
pm_genpd_init(&scmi_pd->genpd, NULL,
state == SCMI_POWER_STATE_GENERIC_OFF);
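
With GENPD_FLAG_PM_CLK and the attach/detach callbacks above, the genpd core gates the clocks that scmi_pd_attach_dev() collected from the consumer's "clocks" property whenever the device runtime-suspends or resumes. A hedged consumer-side sketch (foo_probe and its device are hypothetical):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	int ret;

	pm_runtime_enable(&pdev->dev);

	/* powers up the SCMI domain; the genpd core also resumes the
	 * pm_clk list built by scmi_pd_attach_dev() for this device
	 */
	ret = pm_runtime_resume_and_get(&pdev->dev);
	if (ret < 0) {
		pm_runtime_disable(&pdev->dev);
		return ret;
	}

	/* ... hardware setup ... */

	pm_runtime_put(&pdev->dev);
	return 0;
}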


@ -76,6 +76,9 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
return -ENOMEM;
np = of_parse_phandle(cdev->of_node, "shmem", 0);
if (!of_device_is_compatible(np, "arm,scmi-shmem"))
return -ENXIO;
ret = of_address_to_resource(np, 0, &res);
of_node_put(np);
if (ret) {


@ -899,6 +899,14 @@ static const struct of_device_id legacy_scpi_of_match[] = {
{},
};
static const struct of_device_id shmem_of_match[] __maybe_unused = {
{ .compatible = "amlogic,meson-gxbb-scp-shmem", },
{ .compatible = "amlogic,meson-axg-scp-shmem", },
{ .compatible = "arm,juno-scp-shmem", },
{ .compatible = "arm,scp-shmem", },
{ }
};
static int scpi_probe(struct platform_device *pdev)
{
int count, idx, ret;
@ -935,6 +943,9 @@ static int scpi_probe(struct platform_device *pdev)
struct mbox_client *cl = &pchan->cl;
struct device_node *shmem = of_parse_phandle(np, "shmem", idx);
if (!of_match_node(shmem_of_match, shmem))
return -ENXIO;
ret = of_address_to_resource(shmem, 0, &res);
of_node_put(shmem);
if (ret) {


@ -1281,6 +1281,9 @@ static const struct of_device_id qcom_scm_dt_match[] = {
SCM_HAS_BUS_CLK)
},
{ .compatible = "qcom,scm-ipq4019" },
{ .compatible = "qcom,scm-mdm9607", .data = (void *)(SCM_HAS_CORE_CLK |
SCM_HAS_IFACE_CLK |
SCM_HAS_BUS_CLK) },
{ .compatible = "qcom,scm-msm8660", .data = (void *) SCM_HAS_CORE_CLK },
{ .compatible = "qcom,scm-msm8960", .data = (void *) SCM_HAS_CORE_CLK },
{ .compatible = "qcom,scm-msm8916", .data = (void *)(SCM_HAS_CORE_CLK |


@ -3,6 +3,7 @@ tegra-bpmp-y = bpmp.o
tegra-bpmp-$(CONFIG_ARCH_TEGRA_210_SOC) += bpmp-tegra210.o
tegra-bpmp-$(CONFIG_ARCH_TEGRA_186_SOC) += bpmp-tegra186.o
tegra-bpmp-$(CONFIG_ARCH_TEGRA_194_SOC) += bpmp-tegra186.o
tegra-bpmp-$(CONFIG_ARCH_TEGRA_234_SOC) += bpmp-tegra186.o
tegra-bpmp-$(CONFIG_DEBUG_FS) += bpmp-debugfs.o
obj-$(CONFIG_TEGRA_BPMP) += tegra-bpmp.o
obj-$(CONFIG_TEGRA_IVC) += ivc.o


@ -24,7 +24,8 @@ struct tegra_bpmp_ops {
};
#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
extern const struct tegra_bpmp_ops tegra186_bpmp_ops;
#endif
#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)


@ -210,7 +210,7 @@ static int tegra210_bpmp_init(struct tegra_bpmp *bpmp)
priv->tx_irq_data = irq_get_irq_data(err);
if (!priv->tx_irq_data) {
dev_err(&pdev->dev, "failed to get IRQ data for TX IRQ\n");
return err;
return -ENOENT;
}
err = platform_get_irq_byname(pdev, "rx");


@ -809,7 +809,8 @@ static const struct dev_pm_ops tegra_bpmp_pm_ops = {
};
#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC) || \
IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC)
IS_ENABLED(CONFIG_ARCH_TEGRA_194_SOC) || \
IS_ENABLED(CONFIG_ARCH_TEGRA_234_SOC)
static const struct tegra_bpmp_soc tegra186_soc = {
.channels = {
.cpu_tx = {


@ -147,11 +147,14 @@ MOX_ATTR_RO(pubkey, "%s\n", pubkey);
static int mox_get_status(enum mbox_cmd cmd, u32 retval)
{
if (MBOX_STS_CMD(retval) != cmd ||
MBOX_STS_ERROR(retval) != MBOX_STS_SUCCESS)
if (MBOX_STS_CMD(retval) != cmd)
return -EIO;
else if (MBOX_STS_ERROR(retval) == MBOX_STS_FAIL)
return -(int)MBOX_STS_VALUE(retval);
else if (MBOX_STS_ERROR(retval) == MBOX_STS_BADCMD)
return -ENOSYS;
else if (MBOX_STS_ERROR(retval) != MBOX_STS_SUCCESS)
return -EIO;
else
return MBOX_STS_VALUE(retval);
}
@ -201,11 +204,14 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
return ret;
ret = mox_get_status(MBOX_CMD_BOARD_INFO, reply->retval);
if (ret < 0 && ret != -ENODATA) {
return ret;
} else if (ret == -ENODATA) {
if (ret == -ENODATA) {
dev_warn(rwtm->dev,
"Board does not have manufacturing information burned!\n");
} else if (ret == -ENOSYS) {
dev_notice(rwtm->dev,
"Firmware does not support the BOARD_INFO command\n");
} else if (ret < 0) {
return ret;
} else {
rwtm->serial_number = reply->status[1];
rwtm->serial_number <<= 32;
@ -234,10 +240,13 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
return ret;
ret = mox_get_status(MBOX_CMD_ECDSA_PUB_KEY, reply->retval);
if (ret < 0 && ret != -ENODATA) {
return ret;
} else if (ret == -ENODATA) {
if (ret == -ENODATA) {
dev_warn(rwtm->dev, "Board has no public key burned!\n");
} else if (ret == -ENOSYS) {
dev_notice(rwtm->dev,
"Firmware does not support the ECDSA_PUB_KEY command\n");
} else if (ret < 0) {
return ret;
} else {
u32 *s = reply->status;
@ -251,6 +260,27 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
return 0;
}
static int check_get_random_support(struct mox_rwtm *rwtm)
{
struct armada_37xx_rwtm_tx_msg msg;
int ret;
msg.command = MBOX_CMD_GET_RANDOM;
msg.args[0] = 1;
msg.args[1] = rwtm->buf_phys;
msg.args[2] = 4;
ret = mbox_send_message(rwtm->mbox, &msg);
if (ret < 0)
return ret;
ret = wait_for_completion_timeout(&rwtm->cmd_done, HZ / 2);
if (ret < 0)
return ret;
return mox_get_status(MBOX_CMD_GET_RANDOM, rwtm->reply.retval);
}
static int mox_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct mox_rwtm *rwtm = (struct mox_rwtm *) rng->priv;
@ -488,6 +518,13 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
if (ret < 0)
dev_warn(dev, "Cannot read board information: %i\n", ret);
ret = check_get_random_support(rwtm);
if (ret < 0) {
dev_notice(dev,
"Firmware does not support the GET_RANDOM command\n");
goto free_channel;
}
rwtm->hwrng.name = DRIVER_NAME "_hwrng";
rwtm->hwrng.read = mox_hwrng_read;
rwtm->hwrng.priv = (unsigned long) rwtm;
@ -505,6 +542,8 @@ static int turris_mox_rwtm_probe(struct platform_device *pdev)
goto free_channel;
}
dev_info(dev, "HWRNG successfully registered\n");
return 0;
free_channel:
@ -530,6 +569,7 @@ static int turris_mox_rwtm_remove(struct platform_device *pdev)
static const struct of_device_id turris_mox_rwtm_match[] = {
{ .compatible = "cznic,turris-mox-rwtm", },
{ .compatible = "marvell,armada-3700-rwtm-firmware", },
{ },
};
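
For context, the HWRNG hookup that the new GET_RANDOM capability check gates boils down to filling in a struct hwrng and registering it. devm_hwrng_register() and the field names come from the hwrng core, mox_hwrng_read() is the driver callback visible above; whether the driver uses the devm_ variant is not shown in this hunk, so treat this as a sketch only:

#include <linux/hw_random.h>

static int register_mox_hwrng(struct device *dev, struct mox_rwtm *rwtm)
{
	rwtm->hwrng.name = "turris-mox-rwtm_hwrng";
	rwtm->hwrng.read = mox_hwrng_read;
	rwtm->hwrng.priv = (unsigned long)rwtm;

	/* the hwrng core calls .read and feeds the kernel entropy pool */
	return devm_hwrng_register(dev, &rwtm->hwrng);
}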


@ -211,7 +211,8 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
if (of_property_read_bool(np, "calxeda,smmu-secure-config-access"))
smmu->impl = &calxeda_impl;
if (of_device_is_compatible(np, "nvidia,tegra194-smmu"))
if (of_device_is_compatible(np, "nvidia,tegra194-smmu") ||
of_device_is_compatible(np, "nvidia,tegra186-smmu"))
return nvidia_smmu_impl_init(smmu);
smmu = qcom_smmu_impl_init(smmu);


@ -7,6 +7,8 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <soc/tegra/mc.h>
#include "arm-smmu.h"
/*
@ -15,18 +17,32 @@
* interleaved IOVA accesses across them and translates accesses from
* non-isochronous HW devices.
* Third one is used for translating accesses from isochronous HW devices.
*
* In addition, the SMMU driver needs to coordinate with the memory controller
* driver to ensure that the right SID override is programmed for any given
* memory client. This is necessary to allow for use-case such as seamlessly
* handing over the display controller configuration from the firmware to the
* kernel.
*
* This implementation supports programming of the two instances that must
* be programmed identically.
* The third instance usage is through standard arm-smmu driver itself and
* is out of scope of this implementation.
* be programmed identically and takes care of invoking the memory controller
* driver for SID override programming after devices have been attached to an
* SMMU instance.
*/
#define NUM_SMMU_INSTANCES 2
#define MAX_SMMU_INSTANCES 2
struct nvidia_smmu {
struct arm_smmu_device smmu;
void __iomem *bases[NUM_SMMU_INSTANCES];
struct arm_smmu_device smmu;
void __iomem *bases[MAX_SMMU_INSTANCES];
unsigned int num_instances;
struct tegra_mc *mc;
};
static inline struct nvidia_smmu *to_nvidia_smmu(struct arm_smmu_device *smmu)
{
return container_of(smmu, struct nvidia_smmu, smmu);
}
static inline void __iomem *nvidia_smmu_page(struct arm_smmu_device *smmu,
unsigned int inst, int page)
{
@ -47,9 +63,10 @@ static u32 nvidia_smmu_read_reg(struct arm_smmu_device *smmu,
static void nvidia_smmu_write_reg(struct arm_smmu_device *smmu,
int page, int offset, u32 val)
{
struct nvidia_smmu *nvidia = to_nvidia_smmu(smmu);
unsigned int i;
for (i = 0; i < NUM_SMMU_INSTANCES; i++) {
for (i = 0; i < nvidia->num_instances; i++) {
void __iomem *reg = nvidia_smmu_page(smmu, i, page) + offset;
writel_relaxed(val, reg);
@ -67,9 +84,10 @@ static u64 nvidia_smmu_read_reg64(struct arm_smmu_device *smmu,
static void nvidia_smmu_write_reg64(struct arm_smmu_device *smmu,
int page, int offset, u64 val)
{
struct nvidia_smmu *nvidia = to_nvidia_smmu(smmu);
unsigned int i;
for (i = 0; i < NUM_SMMU_INSTANCES; i++) {
for (i = 0; i < nvidia->num_instances; i++) {
void __iomem *reg = nvidia_smmu_page(smmu, i, page) + offset;
writeq_relaxed(val, reg);
@ -79,6 +97,7 @@ static void nvidia_smmu_write_reg64(struct arm_smmu_device *smmu,
static void nvidia_smmu_tlb_sync(struct arm_smmu_device *smmu, int page,
int sync, int status)
{
struct nvidia_smmu *nvidia = to_nvidia_smmu(smmu);
unsigned int delay;
arm_smmu_writel(smmu, page, sync, 0);
@ -90,7 +109,7 @@ static void nvidia_smmu_tlb_sync(struct arm_smmu_device *smmu, int page,
u32 val = 0;
unsigned int i;
for (i = 0; i < NUM_SMMU_INSTANCES; i++) {
for (i = 0; i < nvidia->num_instances; i++) {
void __iomem *reg;
reg = nvidia_smmu_page(smmu, i, page) + status;
@ -112,9 +131,10 @@ static void nvidia_smmu_tlb_sync(struct arm_smmu_device *smmu, int page,
static int nvidia_smmu_reset(struct arm_smmu_device *smmu)
{
struct nvidia_smmu *nvidia = to_nvidia_smmu(smmu);
unsigned int i;
for (i = 0; i < NUM_SMMU_INSTANCES; i++) {
for (i = 0; i < nvidia->num_instances; i++) {
u32 val;
void __iomem *reg = nvidia_smmu_page(smmu, i, ARM_SMMU_GR0) +
ARM_SMMU_GR0_sGFSR;
@ -157,8 +177,9 @@ static irqreturn_t nvidia_smmu_global_fault(int irq, void *dev)
unsigned int inst;
irqreturn_t ret = IRQ_NONE;
struct arm_smmu_device *smmu = dev;
struct nvidia_smmu *nvidia = to_nvidia_smmu(smmu);
for (inst = 0; inst < NUM_SMMU_INSTANCES; inst++) {
for (inst = 0; inst < nvidia->num_instances; inst++) {
irqreturn_t irq_ret;
irq_ret = nvidia_smmu_global_fault_inst(irq, smmu, inst);
@ -202,11 +223,13 @@ static irqreturn_t nvidia_smmu_context_fault(int irq, void *dev)
struct arm_smmu_device *smmu;
struct iommu_domain *domain = dev;
struct arm_smmu_domain *smmu_domain;
struct nvidia_smmu *nvidia;
smmu_domain = container_of(domain, struct arm_smmu_domain, domain);
smmu = smmu_domain->smmu;
nvidia = to_nvidia_smmu(smmu);
for (inst = 0; inst < NUM_SMMU_INSTANCES; inst++) {
for (inst = 0; inst < nvidia->num_instances; inst++) {
irqreturn_t irq_ret;
/*
@ -224,6 +247,17 @@ static irqreturn_t nvidia_smmu_context_fault(int irq, void *dev)
return ret;
}
static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct device *dev)
{
struct nvidia_smmu *nvidia = to_nvidia_smmu(smmu);
int err;
err = tegra_mc_probe_device(nvidia->mc, dev);
if (err < 0)
dev_err(smmu->dev, "memory controller probe failed for %s: %d\n",
dev_name(dev), err);
}
static const struct arm_smmu_impl nvidia_smmu_impl = {
.read_reg = nvidia_smmu_read_reg,
.write_reg = nvidia_smmu_write_reg,
@ -233,6 +267,11 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
.tlb_sync = nvidia_smmu_tlb_sync,
.global_fault = nvidia_smmu_global_fault,
.context_fault = nvidia_smmu_context_fault,
.probe_finalize = nvidia_smmu_probe_finalize,
};
static const struct arm_smmu_impl nvidia_smmu_single_impl = {
.probe_finalize = nvidia_smmu_probe_finalize,
};
struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
@ -241,23 +280,36 @@ struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
struct device *dev = smmu->dev;
struct nvidia_smmu *nvidia_smmu;
struct platform_device *pdev = to_platform_device(dev);
unsigned int i;
nvidia_smmu = devm_krealloc(dev, smmu, sizeof(*nvidia_smmu), GFP_KERNEL);
if (!nvidia_smmu)
return ERR_PTR(-ENOMEM);
nvidia_smmu->mc = devm_tegra_memory_controller_get(dev);
if (IS_ERR(nvidia_smmu->mc))
return ERR_CAST(nvidia_smmu->mc);
/* Instance 0 is ioremapped by arm-smmu.c. */
nvidia_smmu->bases[0] = smmu->base;
nvidia_smmu->num_instances++;
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res)
return ERR_PTR(-ENODEV);
for (i = 1; i < MAX_SMMU_INSTANCES; i++) {
res = platform_get_resource(pdev, IORESOURCE_MEM, i);
if (!res)
break;
nvidia_smmu->bases[1] = devm_ioremap_resource(dev, res);
if (IS_ERR(nvidia_smmu->bases[1]))
return ERR_CAST(nvidia_smmu->bases[1]);
nvidia_smmu->bases[i] = devm_ioremap_resource(dev, res);
if (IS_ERR(nvidia_smmu->bases[i]))
return ERR_CAST(nvidia_smmu->bases[i]);
nvidia_smmu->smmu.impl = &nvidia_smmu_impl;
nvidia_smmu->num_instances++;
}
if (nvidia_smmu->num_instances == 1)
nvidia_smmu->smmu.impl = &nvidia_smmu_single_impl;
else
nvidia_smmu->smmu.impl = &nvidia_smmu_impl;
return &nvidia_smmu->smmu;
}
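
The rework above leans on the devm_krealloc() subclassing idiom: the generic arm_smmu_device is grown in place into a wrapper, so the pointer the core driver already holds stays valid while the implementation keeps its extra state alongside and recovers it later with container_of(). A hedged sketch with made-up type names:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

struct base_obj {
	struct device *dev;
	/* ... */
};

struct wrapped_obj {
	struct base_obj base;		/* must stay the first member */
	unsigned int num_instances;
};

static struct base_obj *wrap_base(struct base_obj *obj)
{
	struct wrapped_obj *w;

	/* grow the existing allocation; obj must itself be devm-allocated */
	w = devm_krealloc(obj->dev, obj, sizeof(*w), GFP_KERNEL);
	if (!w)
		return ERR_PTR(-ENOMEM);

	w->num_instances = 0;

	/* callers keep using the base pointer; container_of() gets the
	 * wrapper back, as to_nvidia_smmu() does above
	 */
	return &w->base;
}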


@ -376,9 +376,9 @@ static void tegra_smmu_enable(struct tegra_smmu *smmu, unsigned int swgroup,
if (client->swgroup != swgroup)
continue;
value = smmu_readl(smmu, client->smmu.reg);
value |= BIT(client->smmu.bit);
smmu_writel(smmu, value, client->smmu.reg);
value = smmu_readl(smmu, client->regs.smmu.reg);
value |= BIT(client->regs.smmu.bit);
smmu_writel(smmu, value, client->regs.smmu.reg);
}
}
@ -404,9 +404,9 @@ static void tegra_smmu_disable(struct tegra_smmu *smmu, unsigned int swgroup,
if (client->swgroup != swgroup)
continue;
value = smmu_readl(smmu, client->smmu.reg);
value &= ~BIT(client->smmu.bit);
smmu_writel(smmu, value, client->smmu.reg);
value = smmu_readl(smmu, client->regs.smmu.reg);
value &= ~BIT(client->regs.smmu.bit);
smmu_writel(smmu, value, client->regs.smmu.reg);
}
}
@ -1042,9 +1042,9 @@ static int tegra_smmu_clients_show(struct seq_file *s, void *data)
const struct tegra_mc_client *client = &smmu->soc->clients[i];
const char *status;
value = smmu_readl(smmu, client->smmu.reg);
value = smmu_readl(smmu, client->regs.smmu.reg);
if (value & BIT(client->smmu.bit))
if (value & BIT(client->regs.smmu.bit))
status = "yes";
else
status = "no";


@ -600,8 +600,10 @@ static int atmel_ebi_probe(struct platform_device *pdev)
child);
ret = atmel_ebi_dev_disable(ebi, child);
if (ret)
if (ret) {
of_node_put(child);
return ret;
}
}
}
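
This fix, like the stm32-fmc2 hunks further down, applies the usual rule for the for_each_*_child_of_node() iterators: the loop holds a reference on the current child, so any early return or break has to drop it explicitly. A minimal sketch of the pattern (handle_child() is hypothetical):

#include <linux/of.h>

static int visit_children(struct device_node *parent)
{
	struct device_node *child;
	int ret;

	for_each_available_child_of_node(parent, child) {
		ret = handle_child(child);	/* hypothetical per-child work */
		if (ret) {
			/* the iterator holds a reference on 'child' */
			of_node_put(child);
			return ret;
		}
	}

	/* on normal termination the iterator has already dropped it */
	return 0;
}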


@ -41,7 +41,6 @@
* @node: node in the device list
* @base: base address of memory-mapped IO registers.
* @dev: device pointer.
* @addressing table with addressing information from the spec
* @regs_cache: An array of 'struct emif_regs' that stores
* calculated register values for different
* frequencies, to avoid re-calculating them on
@ -61,7 +60,6 @@ struct emif_data {
unsigned long irq_state;
void __iomem *base;
struct device *dev;
const struct lpddr2_addressing *addressing;
struct emif_regs *regs_cache[EMIF_MAX_NUM_FREQUENCIES];
struct emif_regs *curr_regs;
struct emif_platform_data *plat_data;
@ -72,7 +70,6 @@ struct emif_data {
static struct emif_data *emif1;
static DEFINE_SPINLOCK(emif_lock);
static unsigned long irq_state;
static u32 t_ck; /* DDR clock period in ps */
static LIST_HEAD(device_list);
#ifdef CONFIG_DEBUG_FS
@ -169,15 +166,6 @@ static inline void __exit emif_debugfs_exit(struct emif_data *emif)
}
#endif
/*
* Calculate the period of DDR clock from frequency value
*/
static void set_ddr_clk_period(u32 freq)
{
/* Divide 10^12 by frequency to get period in ps */
t_ck = (u32)DIV_ROUND_UP_ULL(1000000000000ull, freq);
}
/*
* Get bus width used by EMIF. Note that this may be different from the
* bus width of the DDR devices used. For instance two 16-bit DDR devices
@ -196,19 +184,6 @@ static u32 get_emif_bus_width(struct emif_data *emif)
return width;
}
/*
* Get the CL from SDRAM_CONFIG register
*/
static u32 get_cl(struct emif_data *emif)
{
u32 cl;
void __iomem *base = emif->base;
cl = (readl(base + EMIF_SDRAM_CONFIG) & CL_MASK) >> CL_SHIFT;
return cl;
}
static void set_lpmode(struct emif_data *emif, u8 lpmode)
{
u32 temp;
@ -328,203 +303,6 @@ static const struct lpddr2_addressing *get_addressing_table(
return &lpddr2_jedec_addressing_table[index];
}
/*
* Find the the right timing table from the array of timing
* tables of the device using DDR clock frequency
*/
static const struct lpddr2_timings *get_timings_table(struct emif_data *emif,
u32 freq)
{
u32 i, min, max, freq_nearest;
const struct lpddr2_timings *timings = NULL;
const struct lpddr2_timings *timings_arr = emif->plat_data->timings;
struct device *dev = emif->dev;
/* Start with a very high frequency - 1GHz */
freq_nearest = 1000000000;
/*
* Find the timings table such that:
* 1. the frequency range covers the required frequency(safe) AND
* 2. the max_freq is closest to the required frequency(optimal)
*/
for (i = 0; i < emif->plat_data->timings_arr_size; i++) {
max = timings_arr[i].max_freq;
min = timings_arr[i].min_freq;
if ((freq >= min) && (freq <= max) && (max < freq_nearest)) {
freq_nearest = max;
timings = &timings_arr[i];
}
}
if (!timings)
dev_err(dev, "%s: couldn't find timings for - %dHz\n",
__func__, freq);
dev_dbg(dev, "%s: timings table: freq %d, speed bin freq %d\n",
__func__, freq, freq_nearest);
return timings;
}
static u32 get_sdram_ref_ctrl_shdw(u32 freq,
const struct lpddr2_addressing *addressing)
{
u32 ref_ctrl_shdw = 0, val = 0, freq_khz, t_refi;
/* Scale down frequency and t_refi to avoid overflow */
freq_khz = freq / 1000;
t_refi = addressing->tREFI_ns / 100;
/*
* refresh rate to be set is 'tREFI(in us) * freq in MHz
* division by 10000 to account for change in units
*/
val = t_refi * freq_khz / 10000;
ref_ctrl_shdw |= val << REFRESH_RATE_SHIFT;
return ref_ctrl_shdw;
}
static u32 get_sdram_tim_1_shdw(const struct lpddr2_timings *timings,
const struct lpddr2_min_tck *min_tck,
const struct lpddr2_addressing *addressing)
{
u32 tim1 = 0, val = 0;
val = max(min_tck->tWTR, DIV_ROUND_UP(timings->tWTR, t_ck)) - 1;
tim1 |= val << T_WTR_SHIFT;
if (addressing->num_banks == B8)
val = DIV_ROUND_UP(timings->tFAW, t_ck*4);
else
val = max(min_tck->tRRD, DIV_ROUND_UP(timings->tRRD, t_ck));
tim1 |= (val - 1) << T_RRD_SHIFT;
val = DIV_ROUND_UP(timings->tRAS_min + timings->tRPab, t_ck) - 1;
tim1 |= val << T_RC_SHIFT;
val = max(min_tck->tRASmin, DIV_ROUND_UP(timings->tRAS_min, t_ck));
tim1 |= (val - 1) << T_RAS_SHIFT;
val = max(min_tck->tWR, DIV_ROUND_UP(timings->tWR, t_ck)) - 1;
tim1 |= val << T_WR_SHIFT;
val = max(min_tck->tRCD, DIV_ROUND_UP(timings->tRCD, t_ck)) - 1;
tim1 |= val << T_RCD_SHIFT;
val = max(min_tck->tRPab, DIV_ROUND_UP(timings->tRPab, t_ck)) - 1;
tim1 |= val << T_RP_SHIFT;
return tim1;
}
static u32 get_sdram_tim_1_shdw_derated(const struct lpddr2_timings *timings,
const struct lpddr2_min_tck *min_tck,
const struct lpddr2_addressing *addressing)
{
u32 tim1 = 0, val = 0;
val = max(min_tck->tWTR, DIV_ROUND_UP(timings->tWTR, t_ck)) - 1;
tim1 = val << T_WTR_SHIFT;
/*
* tFAW is approximately 4 times tRRD. So add 1875*4 = 7500ps
* to tFAW for de-rating
*/
if (addressing->num_banks == B8) {
val = DIV_ROUND_UP(timings->tFAW + 7500, 4 * t_ck) - 1;
} else {
val = DIV_ROUND_UP(timings->tRRD + 1875, t_ck);
val = max(min_tck->tRRD, val) - 1;
}
tim1 |= val << T_RRD_SHIFT;
val = DIV_ROUND_UP(timings->tRAS_min + timings->tRPab + 1875, t_ck);
tim1 |= (val - 1) << T_RC_SHIFT;
val = DIV_ROUND_UP(timings->tRAS_min + 1875, t_ck);
val = max(min_tck->tRASmin, val) - 1;
tim1 |= val << T_RAS_SHIFT;
val = max(min_tck->tWR, DIV_ROUND_UP(timings->tWR, t_ck)) - 1;
tim1 |= val << T_WR_SHIFT;
val = max(min_tck->tRCD, DIV_ROUND_UP(timings->tRCD + 1875, t_ck));
tim1 |= (val - 1) << T_RCD_SHIFT;
val = max(min_tck->tRPab, DIV_ROUND_UP(timings->tRPab + 1875, t_ck));
tim1 |= (val - 1) << T_RP_SHIFT;
return tim1;
}
static u32 get_sdram_tim_2_shdw(const struct lpddr2_timings *timings,
const struct lpddr2_min_tck *min_tck,
const struct lpddr2_addressing *addressing,
u32 type)
{
u32 tim2 = 0, val = 0;
val = min_tck->tCKE - 1;
tim2 |= val << T_CKE_SHIFT;
val = max(min_tck->tRTP, DIV_ROUND_UP(timings->tRTP, t_ck)) - 1;
tim2 |= val << T_RTP_SHIFT;
/* tXSNR = tRFCab_ps + 10 ns(tRFCab_ps for LPDDR2). */
val = DIV_ROUND_UP(addressing->tRFCab_ps + 10000, t_ck) - 1;
tim2 |= val << T_XSNR_SHIFT;
/* XSRD same as XSNR for LPDDR2 */
tim2 |= val << T_XSRD_SHIFT;
val = max(min_tck->tXP, DIV_ROUND_UP(timings->tXP, t_ck)) - 1;
tim2 |= val << T_XP_SHIFT;
return tim2;
}
static u32 get_sdram_tim_3_shdw(const struct lpddr2_timings *timings,
const struct lpddr2_min_tck *min_tck,
const struct lpddr2_addressing *addressing,
u32 type, u32 ip_rev, u32 derated)
{
u32 tim3 = 0, val = 0, t_dqsck;
val = timings->tRAS_max_ns / addressing->tREFI_ns - 1;
val = val > 0xF ? 0xF : val;
tim3 |= val << T_RAS_MAX_SHIFT;
val = DIV_ROUND_UP(addressing->tRFCab_ps, t_ck) - 1;
tim3 |= val << T_RFC_SHIFT;
t_dqsck = (derated == EMIF_DERATED_TIMINGS) ?
timings->tDQSCK_max_derated : timings->tDQSCK_max;
if (ip_rev == EMIF_4D5)
val = DIV_ROUND_UP(t_dqsck + 1000, t_ck) - 1;
else
val = DIV_ROUND_UP(t_dqsck, t_ck) - 1;
tim3 |= val << T_TDQSCKMAX_SHIFT;
val = DIV_ROUND_UP(timings->tZQCS, t_ck) - 1;
tim3 |= val << ZQ_ZQCS_SHIFT;
val = DIV_ROUND_UP(timings->tCKESR, t_ck);
val = max(min_tck->tCKESR, val) - 1;
tim3 |= val << T_CKESR_SHIFT;
if (ip_rev == EMIF_4D5) {
tim3 |= (EMIF_T_CSTA - 1) << T_CSTA_SHIFT;
val = DIV_ROUND_UP(EMIF_T_PDLL_UL, 128) - 1;
tim3 |= val << T_PDLL_UL_SHIFT;
}
return tim3;
}
static u32 get_zq_config_reg(const struct lpddr2_addressing *addressing,
bool cs1_used, bool cal_resistors_per_cs)
{
@ -589,117 +367,6 @@ static u32 get_temp_alert_config(const struct lpddr2_addressing *addressing,
return alert;
}
static u32 get_read_idle_ctrl_shdw(u8 volt_ramp)
{
u32 idle = 0, val = 0;
/*
* Maximum value in normal conditions and increased frequency
* when voltage is ramping
*/
if (volt_ramp)
val = READ_IDLE_INTERVAL_DVFS / t_ck / 64 - 1;
else
val = 0x1FF;
/*
* READ_IDLE_CTRL register in EMIF4D has same offset and fields
* as DLL_CALIB_CTRL in EMIF4D5, so use the same shifts
*/
idle |= val << DLL_CALIB_INTERVAL_SHIFT;
idle |= EMIF_READ_IDLE_LEN_VAL << ACK_WAIT_SHIFT;
return idle;
}
static u32 get_dll_calib_ctrl_shdw(u8 volt_ramp)
{
u32 calib = 0, val = 0;
if (volt_ramp == DDR_VOLTAGE_RAMPING)
val = DLL_CALIB_INTERVAL_DVFS / t_ck / 16 - 1;
else
val = 0; /* Disabled when voltage is stable */
calib |= val << DLL_CALIB_INTERVAL_SHIFT;
calib |= DLL_CALIB_ACK_WAIT_VAL << ACK_WAIT_SHIFT;
return calib;
}
static u32 get_ddr_phy_ctrl_1_attilaphy_4d(const struct lpddr2_timings *timings,
u32 freq, u8 RL)
{
u32 phy = EMIF_DDR_PHY_CTRL_1_BASE_VAL_ATTILAPHY, val = 0;
val = RL + DIV_ROUND_UP(timings->tDQSCK_max, t_ck) - 1;
phy |= val << READ_LATENCY_SHIFT_4D;
if (freq <= 100000000)
val = EMIF_DLL_SLAVE_DLY_CTRL_100_MHZ_AND_LESS_ATTILAPHY;
else if (freq <= 200000000)
val = EMIF_DLL_SLAVE_DLY_CTRL_200_MHZ_ATTILAPHY;
else
val = EMIF_DLL_SLAVE_DLY_CTRL_400_MHZ_ATTILAPHY;
phy |= val << DLL_SLAVE_DLY_CTRL_SHIFT_4D;
return phy;
}
static u32 get_phy_ctrl_1_intelliphy_4d5(u32 freq, u8 cl)
{
u32 phy = EMIF_DDR_PHY_CTRL_1_BASE_VAL_INTELLIPHY, half_delay;
/*
* DLL operates at 266 MHz. If DDR frequency is near 266 MHz,
* half-delay is not needed else set half-delay
*/
if (freq >= 265000000 && freq < 267000000)
half_delay = 0;
else
half_delay = 1;
phy |= half_delay << DLL_HALF_DELAY_SHIFT_4D5;
phy |= ((cl + DIV_ROUND_UP(EMIF_PHY_TOTAL_READ_LATENCY_INTELLIPHY_PS,
t_ck) - 1) << READ_LATENCY_SHIFT_4D5);
return phy;
}
static u32 get_ext_phy_ctrl_2_intelliphy_4d5(void)
{
u32 fifo_we_slave_ratio;
fifo_we_slave_ratio = DIV_ROUND_CLOSEST(
EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256, t_ck);
return fifo_we_slave_ratio | fifo_we_slave_ratio << 11 |
fifo_we_slave_ratio << 22;
}
static u32 get_ext_phy_ctrl_3_intelliphy_4d5(void)
{
u32 fifo_we_slave_ratio;
fifo_we_slave_ratio = DIV_ROUND_CLOSEST(
EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256, t_ck);
return fifo_we_slave_ratio >> 10 | fifo_we_slave_ratio << 1 |
fifo_we_slave_ratio << 12 | fifo_we_slave_ratio << 23;
}
static u32 get_ext_phy_ctrl_4_intelliphy_4d5(void)
{
u32 fifo_we_slave_ratio;
fifo_we_slave_ratio = DIV_ROUND_CLOSEST(
EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256, t_ck);
return fifo_we_slave_ratio >> 9 | fifo_we_slave_ratio << 2 |
fifo_we_slave_ratio << 13;
}
static u32 get_pwr_mgmt_ctrl(u32 freq, struct emif_data *emif, u32 ip_rev)
{
u32 pwr_mgmt_ctrl = 0, timeout;
@ -821,51 +488,6 @@ static void get_temperature_level(struct emif_data *emif)
emif->temperature_level = temperature_level;
}
/*
* Program EMIF shadow registers that are not dependent on temperature
* or voltage
*/
static void setup_registers(struct emif_data *emif, struct emif_regs *regs)
{
void __iomem *base = emif->base;
writel(regs->sdram_tim2_shdw, base + EMIF_SDRAM_TIMING_2_SHDW);
writel(regs->phy_ctrl_1_shdw, base + EMIF_DDR_PHY_CTRL_1_SHDW);
writel(regs->pwr_mgmt_ctrl_shdw,
base + EMIF_POWER_MANAGEMENT_CTRL_SHDW);
/* Settings specific for EMIF4D5 */
if (emif->plat_data->ip_rev != EMIF_4D5)
return;
writel(regs->ext_phy_ctrl_2_shdw, base + EMIF_EXT_PHY_CTRL_2_SHDW);
writel(regs->ext_phy_ctrl_3_shdw, base + EMIF_EXT_PHY_CTRL_3_SHDW);
writel(regs->ext_phy_ctrl_4_shdw, base + EMIF_EXT_PHY_CTRL_4_SHDW);
}
/*
* When voltage ramps dll calibration and forced read idle should
* happen more often
*/
static void setup_volt_sensitive_regs(struct emif_data *emif,
struct emif_regs *regs, u32 volt_state)
{
u32 calib_ctrl;
void __iomem *base = emif->base;
/*
* EMIF_READ_IDLE_CTRL in EMIF4D refers to the same register as
* EMIF_DLL_CALIB_CTRL in EMIF4D5 and dll_calib_ctrl_shadow_*
* is an alias of the respective read_idle_ctrl_shdw_* (members of
* a union). So, the below code takes care of both cases
*/
if (volt_state == DDR_VOLTAGE_RAMPING)
calib_ctrl = regs->dll_calib_ctrl_shdw_volt_ramp;
else
calib_ctrl = regs->dll_calib_ctrl_shdw_normal;
writel(calib_ctrl, base + EMIF_DLL_CALIB_CTRL_SHDW);
}
/*
* setup_temperature_sensitive_regs() - set the timings for temperature
* sensitive registers. This happens once at initialisation time based
@ -1508,7 +1130,6 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
}
list_add(&emif->node, &device_list);
emif->addressing = get_addressing_table(emif->plat_data->device_info);
/* Save pointers to each other in emif and device structures */
emif->dev = &pdev->dev;
@ -1563,305 +1184,6 @@ static void emif_shutdown(struct platform_device *pdev)
disable_and_clear_all_interrupts(emif);
}
static int get_emif_reg_values(struct emif_data *emif, u32 freq,
struct emif_regs *regs)
{
u32 ip_rev, phy_type;
u32 cl, type;
const struct lpddr2_timings *timings;
const struct lpddr2_min_tck *min_tck;
const struct ddr_device_info *device_info;
const struct lpddr2_addressing *addressing;
struct emif_data *emif_for_calc;
struct device *dev;
dev = emif->dev;
/*
* If the devices on this EMIF instance is duplicate of EMIF1,
* use EMIF1 details for the calculation
*/
emif_for_calc = emif->duplicate ? emif1 : emif;
timings = get_timings_table(emif_for_calc, freq);
addressing = emif_for_calc->addressing;
if (!timings || !addressing) {
dev_err(dev, "%s: not enough data available for %dHz",
__func__, freq);
return -1;
}
device_info = emif_for_calc->plat_data->device_info;
type = device_info->type;
ip_rev = emif_for_calc->plat_data->ip_rev;
phy_type = emif_for_calc->plat_data->phy_type;
min_tck = emif_for_calc->plat_data->min_tck;
set_ddr_clk_period(freq);
regs->ref_ctrl_shdw = get_sdram_ref_ctrl_shdw(freq, addressing);
regs->sdram_tim1_shdw = get_sdram_tim_1_shdw(timings, min_tck,
addressing);
regs->sdram_tim2_shdw = get_sdram_tim_2_shdw(timings, min_tck,
addressing, type);
regs->sdram_tim3_shdw = get_sdram_tim_3_shdw(timings, min_tck,
addressing, type, ip_rev, EMIF_NORMAL_TIMINGS);
cl = get_cl(emif);
if (phy_type == EMIF_PHY_TYPE_ATTILAPHY && ip_rev == EMIF_4D) {
regs->phy_ctrl_1_shdw = get_ddr_phy_ctrl_1_attilaphy_4d(
timings, freq, cl);
} else if (phy_type == EMIF_PHY_TYPE_INTELLIPHY && ip_rev == EMIF_4D5) {
regs->phy_ctrl_1_shdw = get_phy_ctrl_1_intelliphy_4d5(freq, cl);
regs->ext_phy_ctrl_2_shdw = get_ext_phy_ctrl_2_intelliphy_4d5();
regs->ext_phy_ctrl_3_shdw = get_ext_phy_ctrl_3_intelliphy_4d5();
regs->ext_phy_ctrl_4_shdw = get_ext_phy_ctrl_4_intelliphy_4d5();
} else {
return -1;
}
/* Only timeout values in pwr_mgmt_ctrl_shdw register */
regs->pwr_mgmt_ctrl_shdw =
get_pwr_mgmt_ctrl(freq, emif_for_calc, ip_rev) &
(CS_TIM_MASK | SR_TIM_MASK | PD_TIM_MASK);
if (ip_rev & EMIF_4D) {
regs->read_idle_ctrl_shdw_normal =
get_read_idle_ctrl_shdw(DDR_VOLTAGE_STABLE);
regs->read_idle_ctrl_shdw_volt_ramp =
get_read_idle_ctrl_shdw(DDR_VOLTAGE_RAMPING);
} else if (ip_rev & EMIF_4D5) {
regs->dll_calib_ctrl_shdw_normal =
get_dll_calib_ctrl_shdw(DDR_VOLTAGE_STABLE);
regs->dll_calib_ctrl_shdw_volt_ramp =
get_dll_calib_ctrl_shdw(DDR_VOLTAGE_RAMPING);
}
if (type == DDR_TYPE_LPDDR2_S2 || type == DDR_TYPE_LPDDR2_S4) {
regs->ref_ctrl_shdw_derated = get_sdram_ref_ctrl_shdw(freq / 4,
addressing);
regs->sdram_tim1_shdw_derated =
get_sdram_tim_1_shdw_derated(timings, min_tck,
addressing);
regs->sdram_tim3_shdw_derated = get_sdram_tim_3_shdw(timings,
min_tck, addressing, type, ip_rev,
EMIF_DERATED_TIMINGS);
}
regs->freq = freq;
return 0;
}
/*
* get_regs() - gets the cached emif_regs structure for a given EMIF instance
* given frequency(freq):
*
* As an optimisation, every EMIF instance other than EMIF1 shares the
* register cache with EMIF1 if the devices connected on this instance
* are same as that on EMIF1(indicated by the duplicate flag)
*
* If we do not have an entry corresponding to the frequency given, we
* allocate a new entry and calculate the values
*
* Upon finding the right reg dump, save it in curr_regs. It can be
* directly used for thermal de-rating and voltage ramping changes.
*/
static struct emif_regs *get_regs(struct emif_data *emif, u32 freq)
{
int i;
struct emif_regs **regs_cache;
struct emif_regs *regs = NULL;
struct device *dev;
dev = emif->dev;
if (emif->curr_regs && emif->curr_regs->freq == freq) {
dev_dbg(dev, "%s: using curr_regs - %u Hz", __func__, freq);
return emif->curr_regs;
}
if (emif->duplicate)
regs_cache = emif1->regs_cache;
else
regs_cache = emif->regs_cache;
for (i = 0; i < EMIF_MAX_NUM_FREQUENCIES && regs_cache[i]; i++) {
if (regs_cache[i]->freq == freq) {
regs = regs_cache[i];
dev_dbg(dev,
"%s: reg dump found in reg cache for %u Hz\n",
__func__, freq);
break;
}
}
/*
* If we don't have an entry for this frequency in the cache create one
* and calculate the values
*/
if (!regs) {
regs = devm_kzalloc(emif->dev, sizeof(*regs), GFP_ATOMIC);
if (!regs)
return NULL;
if (get_emif_reg_values(emif, freq, regs)) {
devm_kfree(emif->dev, regs);
return NULL;
}
/*
* Now look for an un-used entry in the cache and save the
* newly created struct. If there are no free entries
* over-write the last entry
*/
for (i = 0; i < EMIF_MAX_NUM_FREQUENCIES && regs_cache[i]; i++)
;
if (i >= EMIF_MAX_NUM_FREQUENCIES) {
dev_warn(dev, "%s: regs_cache full - reusing a slot!!\n",
__func__);
i = EMIF_MAX_NUM_FREQUENCIES - 1;
devm_kfree(emif->dev, regs_cache[i]);
}
regs_cache[i] = regs;
}
return regs;
}
static void do_volt_notify_handling(struct emif_data *emif, u32 volt_state)
{
dev_dbg(emif->dev, "%s: voltage notification : %d", __func__,
volt_state);
if (!emif->curr_regs) {
dev_err(emif->dev,
"%s: volt-notify before registers are ready: %d\n",
__func__, volt_state);
return;
}
setup_volt_sensitive_regs(emif, emif->curr_regs, volt_state);
}
/*
* TODO: voltage notify handling should be hooked up to
* regulator framework as soon as the necessary support
* is available in mainline kernel. This function is un-used
* right now.
*/
static void __attribute__((unused)) volt_notify_handling(u32 volt_state)
{
struct emif_data *emif;
spin_lock_irqsave(&emif_lock, irq_state);
list_for_each_entry(emif, &device_list, node)
do_volt_notify_handling(emif, volt_state);
do_freq_update();
spin_unlock_irqrestore(&emif_lock, irq_state);
}
static void do_freq_pre_notify_handling(struct emif_data *emif, u32 new_freq)
{
struct emif_regs *regs;
regs = get_regs(emif, new_freq);
if (!regs)
return;
emif->curr_regs = regs;
/*
* Update the shadow registers:
* Temperature and voltage-ramp sensitive settings are also configured
* in terms of DDR cycles. So, we need to update them too when there
* is a freq change
*/
dev_dbg(emif->dev, "%s: setting up shadow registers for %uHz",
__func__, new_freq);
setup_registers(emif, regs);
setup_temperature_sensitive_regs(emif, regs);
setup_volt_sensitive_regs(emif, regs, DDR_VOLTAGE_STABLE);
/*
* Part of workaround for errata i728. See do_freq_update()
* for more details
*/
if (emif->lpmode == EMIF_LP_MODE_SELF_REFRESH)
set_lpmode(emif, EMIF_LP_MODE_DISABLE);
}
/*
* TODO: frequency notify handling should be hooked up to
* clock framework as soon as the necessary support is
* available in mainline kernel. This function is un-used
* right now.
*/
static void __attribute__((unused)) freq_pre_notify_handling(u32 new_freq)
{
struct emif_data *emif;
/*
* NOTE: we are taking the spin-lock here and releases it
* only in post-notifier. This doesn't look good and
* Sparse complains about it, but this seems to be
* un-avoidable. We need to lock a sequence of events
* that is split between EMIF and clock framework.
*
* 1. EMIF driver updates EMIF timings in shadow registers in the
* frequency pre-notify callback from clock framework
* 2. clock framework sets up the registers for the new frequency
* 3. clock framework initiates a hw-sequence that updates
* the frequency EMIF timings synchronously.
*
* All these 3 steps should be performed as an atomic operation
* vis-a-vis similar sequence in the EMIF interrupt handler
* for temperature events. Otherwise, there could be race
* conditions that could result in incorrect EMIF timings for
* a given frequency
*/
spin_lock_irqsave(&emif_lock, irq_state);
list_for_each_entry(emif, &device_list, node)
do_freq_pre_notify_handling(emif, new_freq);
}
static void do_freq_post_notify_handling(struct emif_data *emif)
{
/*
* Part of workaround for errata i728. See do_freq_update()
* for more details
*/
if (emif->lpmode == EMIF_LP_MODE_SELF_REFRESH)
set_lpmode(emif, EMIF_LP_MODE_SELF_REFRESH);
}
/*
* TODO: frequency notify handling should be hooked up to
* clock framework as soon as the necessary support is
* available in mainline kernel. This function is un-used
* right now.
*/
static void __attribute__((unused)) freq_post_notify_handling(void)
{
struct emif_data *emif;
list_for_each_entry(emif, &device_list, node)
do_freq_post_notify_handling(emif);
/*
* Lock is done in pre-notify handler. See freq_pre_notify_handling()
* for more details
*/
spin_unlock_irqrestore(&emif_lock, irq_state);
}
#if defined(CONFIG_OF)
static const struct of_device_id emif_of_match[] = {
{ .compatible = "ti,emif-4d" },


@ -97,7 +97,6 @@ static int fsl_ifc_ctrl_remove(struct platform_device *dev)
iounmap(ctrl->gregs);
dev_set_drvdata(&dev->dev, NULL);
kfree(ctrl);
return 0;
}
@ -209,7 +208,8 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
dev_info(&dev->dev, "Freescale Integrated Flash Controller\n");
fsl_ifc_ctrl_dev = kzalloc(sizeof(*fsl_ifc_ctrl_dev), GFP_KERNEL);
fsl_ifc_ctrl_dev = devm_kzalloc(&dev->dev, sizeof(*fsl_ifc_ctrl_dev),
GFP_KERNEL);
if (!fsl_ifc_ctrl_dev)
return -ENOMEM;
@ -219,8 +219,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
fsl_ifc_ctrl_dev->gregs = of_iomap(dev->dev.of_node, 0);
if (!fsl_ifc_ctrl_dev->gregs) {
dev_err(&dev->dev, "failed to get memory region\n");
ret = -ENODEV;
goto err;
return -ENODEV;
}
if (of_property_read_bool(dev->dev.of_node, "little-endian")) {
@ -295,6 +294,7 @@ err_irq:
free_irq(fsl_ifc_ctrl_dev->irq, fsl_ifc_ctrl_dev);
irq_dispose_mapping(fsl_ifc_ctrl_dev->irq);
err:
iounmap(fsl_ifc_ctrl_dev->gregs);
return ret;
}


@ -116,6 +116,7 @@ static int pl353_smc_probe(struct amba_device *adev, const struct amba_id *id)
break;
}
if (!match) {
err = -ENODEV;
dev_err(&adev->dev, "no matching children\n");
goto disable_mem_clk;
}


@ -1048,16 +1048,19 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
if (ret) {
dev_err(dev, "could not retrieve reg property: %d\n",
ret);
of_node_put(child);
return ret;
}
if (bank >= FMC2_MAX_BANKS) {
dev_err(dev, "invalid reg value: %d\n", bank);
of_node_put(child);
return -EINVAL;
}
if (ebi->bank_assigned & BIT(bank)) {
dev_err(dev, "bank already assigned: %d\n", bank);
of_node_put(child);
return -EINVAL;
}
@ -1066,6 +1069,7 @@ static int stm32_fmc2_ebi_parse_dt(struct stm32_fmc2_ebi *ebi)
if (ret) {
dev_err(dev, "setup chip select %d failed: %d\n",
bank, ret);
of_node_put(child);
return ret;
}
}


@ -2,16 +2,18 @@
config TEGRA_MC
bool "NVIDIA Tegra Memory Controller support"
default y
depends on ARCH_TEGRA
depends on ARCH_TEGRA || (COMPILE_TEST && COMMON_CLK)
select INTERCONNECT
help
This driver supports the Memory Controller (MC) hardware found on
NVIDIA Tegra SoCs.
if TEGRA_MC
config TEGRA20_EMC
tristate "NVIDIA Tegra20 External Memory Controller driver"
default y
depends on TEGRA_MC && ARCH_TEGRA_2x_SOC
depends on ARCH_TEGRA_2x_SOC || COMPILE_TEST
select DEVFREQ_GOV_SIMPLE_ONDEMAND
select PM_DEVFREQ
help
@ -23,7 +25,7 @@ config TEGRA20_EMC
config TEGRA30_EMC
tristate "NVIDIA Tegra30 External Memory Controller driver"
default y
depends on TEGRA_MC && ARCH_TEGRA_3x_SOC
depends on ARCH_TEGRA_3x_SOC || COMPILE_TEST
select PM_OPP
help
This driver is for the External Memory Controller (EMC) found on
@ -34,8 +36,8 @@ config TEGRA30_EMC
config TEGRA124_EMC
tristate "NVIDIA Tegra124 External Memory Controller driver"
default y
depends on TEGRA_MC && ARCH_TEGRA_124_SOC
select TEGRA124_CLK_EMC
depends on ARCH_TEGRA_124_SOC || COMPILE_TEST
select TEGRA124_CLK_EMC if ARCH_TEGRA
select PM_OPP
help
This driver is for the External Memory Controller (EMC) found on
@ -45,14 +47,16 @@ config TEGRA124_EMC
config TEGRA210_EMC_TABLE
bool
depends on ARCH_TEGRA_210_SOC
depends on ARCH_TEGRA_210_SOC || COMPILE_TEST
config TEGRA210_EMC
tristate "NVIDIA Tegra210 External Memory Controller driver"
depends on TEGRA_MC && ARCH_TEGRA_210_SOC
depends on ARCH_TEGRA_210_SOC || COMPILE_TEST
select TEGRA210_EMC_TABLE
help
This driver is for the External Memory Controller (EMC) found on
Tegra210 chips. The EMC controls the external DRAM on the board.
This driver is required to change memory timings / clock rate for
external memory.
endif


@ -7,6 +7,8 @@ tegra-mc-$(CONFIG_ARCH_TEGRA_114_SOC) += tegra114.o
tegra-mc-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124.o
tegra-mc-$(CONFIG_ARCH_TEGRA_132_SOC) += tegra124.o
tegra-mc-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o
tegra-mc-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o
tegra-mc-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186.o tegra194.o
obj-$(CONFIG_TEGRA_MC) += tegra-mc.o
@ -15,7 +17,7 @@ obj-$(CONFIG_TEGRA30_EMC) += tegra30-emc.o
obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o
obj-$(CONFIG_TEGRA210_EMC_TABLE) += tegra210-emc-table.o
obj-$(CONFIG_TEGRA210_EMC) += tegra210-emc.o
obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o tegra186-emc.o
obj-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186.o tegra186-emc.o
obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186-emc.o
obj-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186-emc.o
tegra210-emc-y := tegra210-emc-core.o tegra210-emc-cc-r21021.o


@ -39,7 +39,13 @@ static const struct of_device_id tegra_mc_of_match[] = {
#ifdef CONFIG_ARCH_TEGRA_210_SOC
{ .compatible = "nvidia,tegra210-mc", .data = &tegra210_mc_soc },
#endif
{ }
#ifdef CONFIG_ARCH_TEGRA_186_SOC
{ .compatible = "nvidia,tegra186-mc", .data = &tegra186_mc_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_194_SOC
{ .compatible = "nvidia,tegra194-mc", .data = &tegra194_mc_soc },
#endif
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, tegra_mc_of_match);
@ -91,6 +97,15 @@ struct tegra_mc *devm_tegra_memory_controller_get(struct device *dev)
}
EXPORT_SYMBOL_GPL(devm_tegra_memory_controller_get);
int tegra_mc_probe_device(struct tegra_mc *mc, struct device *dev)
{
if (mc->soc->ops && mc->soc->ops->probe_device)
return mc->soc->ops->probe_device(mc, dev);
return 0;
}
EXPORT_SYMBOL_GPL(tegra_mc_probe_device);
static int tegra_mc_block_dma_common(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
@ -299,38 +314,6 @@ static int tegra_mc_reset_setup(struct tegra_mc *mc)
return 0;
}
static int tegra_mc_setup_latency_allowance(struct tegra_mc *mc)
{
unsigned long long tick;
unsigned int i;
u32 value;
/* compute the number of MC clock cycles per tick */
tick = (unsigned long long)mc->tick * clk_get_rate(mc->clk);
do_div(tick, NSEC_PER_SEC);
value = mc_readl(mc, MC_EMEM_ARB_CFG);
value &= ~MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE_MASK;
value |= MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE(tick);
mc_writel(mc, value, MC_EMEM_ARB_CFG);
/* write latency allowance defaults */
for (i = 0; i < mc->soc->num_clients; i++) {
const struct tegra_mc_la *la = &mc->soc->clients[i].la;
u32 value;
value = mc_readl(mc, la->reg);
value &= ~(la->mask << la->shift);
value |= (la->def & la->mask) << la->shift;
mc_writel(mc, value, la->reg);
}
/* latch new values */
mc_writel(mc, MC_TIMING_UPDATE, MC_TIMING_CONTROL);
return 0;
}
int tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate)
{
unsigned int i;
@ -368,6 +351,43 @@ unsigned int tegra_mc_get_emem_device_count(struct tegra_mc *mc)
}
EXPORT_SYMBOL_GPL(tegra_mc_get_emem_device_count);
#if defined(CONFIG_ARCH_TEGRA_3x_SOC) || \
defined(CONFIG_ARCH_TEGRA_114_SOC) || \
defined(CONFIG_ARCH_TEGRA_124_SOC) || \
defined(CONFIG_ARCH_TEGRA_132_SOC) || \
defined(CONFIG_ARCH_TEGRA_210_SOC)
static int tegra_mc_setup_latency_allowance(struct tegra_mc *mc)
{
unsigned long long tick;
unsigned int i;
u32 value;
/* compute the number of MC clock cycles per tick */
tick = (unsigned long long)mc->tick * clk_get_rate(mc->clk);
do_div(tick, NSEC_PER_SEC);
value = mc_readl(mc, MC_EMEM_ARB_CFG);
value &= ~MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE_MASK;
value |= MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE(tick);
mc_writel(mc, value, MC_EMEM_ARB_CFG);
/* write latency allowance defaults */
for (i = 0; i < mc->soc->num_clients; i++) {
const struct tegra_mc_client *client = &mc->soc->clients[i];
u32 value;
value = mc_readl(mc, client->regs.la.reg);
value &= ~(client->regs.la.mask << client->regs.la.shift);
value |= (client->regs.la.def & client->regs.la.mask) << client->regs.la.shift;
mc_writel(mc, value, client->regs.la.reg);
}
/* latch new values */
mc_writel(mc, MC_TIMING_UPDATE, MC_TIMING_CONTROL);
return 0;
}
static int load_one_timing(struct tegra_mc *mc,
struct tegra_mc_timing *timing,
struct device_node *node)
@ -459,27 +479,35 @@ static int tegra_mc_setup_timings(struct tegra_mc *mc)
return 0;
}
static const char *const status_names[32] = {
[ 1] = "External interrupt",
[ 6] = "EMEM address decode error",
[ 7] = "GART page fault",
[ 8] = "Security violation",
[ 9] = "EMEM arbitration error",
[10] = "Page fault",
[11] = "Invalid APB ASID update",
[12] = "VPR violation",
[13] = "Secure carveout violation",
[16] = "MTS carveout violation",
};
int tegra30_mc_probe(struct tegra_mc *mc)
{
int err;
static const char *const error_names[8] = {
[2] = "EMEM decode error",
[3] = "TrustZone violation",
[4] = "Carveout violation",
[6] = "SMMU translation error",
};
mc->clk = devm_clk_get_optional(mc->dev, "mc");
if (IS_ERR(mc->clk)) {
dev_err(mc->dev, "failed to get MC clock: %ld\n", PTR_ERR(mc->clk));
return PTR_ERR(mc->clk);
}
static irqreturn_t tegra_mc_irq(int irq, void *data)
/* ensure that debug features are disabled */
mc_writel(mc, 0x00000000, MC_TIMING_CONTROL_DBG);
err = tegra_mc_setup_latency_allowance(mc);
if (err < 0) {
dev_err(mc->dev, "failed to setup latency allowance: %d\n", err);
return err;
}
err = tegra_mc_setup_timings(mc);
if (err < 0) {
dev_err(mc->dev, "failed to setup timings: %d\n", err);
return err;
}
return 0;
}
static irqreturn_t tegra30_mc_handle_irq(int irq, void *data)
{
struct tegra_mc *mc = data;
unsigned long status;
@ -491,7 +519,7 @@ static irqreturn_t tegra_mc_irq(int irq, void *data)
return IRQ_NONE;
for_each_set_bit(bit, &status, 32) {
const char *error = status_names[bit] ?: "unknown";
const char *error = tegra_mc_status_names[bit] ?: "unknown";
const char *client = "unknown", *desc;
const char *direction, *secure;
phys_addr_t addr = 0;
@ -531,7 +559,7 @@ static irqreturn_t tegra_mc_irq(int irq, void *data)
type = (value & MC_ERR_STATUS_TYPE_MASK) >>
MC_ERR_STATUS_TYPE_SHIFT;
desc = error_names[type];
desc = tegra_mc_error_names[type];
switch (value & MC_ERR_STATUS_TYPE_MASK) {
case MC_ERR_STATUS_TYPE_INVALID_SMMU_PAGE:
@ -576,78 +604,31 @@ static irqreturn_t tegra_mc_irq(int irq, void *data)
return IRQ_HANDLED;
}
static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
{
struct tegra_mc *mc = data;
unsigned long status;
unsigned int bit;
const struct tegra_mc_ops tegra30_mc_ops = {
.probe = tegra30_mc_probe,
.handle_irq = tegra30_mc_handle_irq,
};
#endif
/* mask all interrupts to avoid flooding */
status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask;
if (!status)
return IRQ_NONE;
const char *const tegra_mc_status_names[32] = {
[ 1] = "External interrupt",
[ 6] = "EMEM address decode error",
[ 7] = "GART page fault",
[ 8] = "Security violation",
[ 9] = "EMEM arbitration error",
[10] = "Page fault",
[11] = "Invalid APB ASID update",
[12] = "VPR violation",
[13] = "Secure carveout violation",
[16] = "MTS carveout violation",
};
for_each_set_bit(bit, &status, 32) {
const char *direction = "read", *secure = "";
const char *error = status_names[bit];
const char *client, *desc;
phys_addr_t addr;
u32 value, reg;
u8 id, type;
switch (BIT(bit)) {
case MC_INT_DECERR_EMEM:
reg = MC_DECERR_EMEM_OTHERS_STATUS;
value = mc_readl(mc, reg);
id = value & mc->soc->client_id_mask;
desc = error_names[2];
if (value & BIT(31))
direction = "write";
break;
case MC_INT_INVALID_GART_PAGE:
reg = MC_GART_ERROR_REQ;
value = mc_readl(mc, reg);
id = (value >> 1) & mc->soc->client_id_mask;
desc = error_names[2];
if (value & BIT(0))
direction = "write";
break;
case MC_INT_SECURITY_VIOLATION:
reg = MC_SECURITY_VIOLATION_STATUS;
value = mc_readl(mc, reg);
id = value & mc->soc->client_id_mask;
type = (value & BIT(30)) ? 4 : 3;
desc = error_names[type];
secure = "secure ";
if (value & BIT(31))
direction = "write";
break;
default:
continue;
}
client = mc->soc->clients[id].name;
addr = mc_readl(mc, reg + sizeof(u32));
dev_err_ratelimited(mc->dev, "%s: %s%s @%pa: %s (%s)\n",
client, secure, direction, &addr, error,
desc);
}
/* clear interrupts */
mc_writel(mc, status, MC_INTSTATUS);
return IRQ_HANDLED;
}
const char *const tegra_mc_error_names[8] = {
[2] = "EMEM decode error",
[3] = "TrustZone violation",
[4] = "Carveout violation",
[6] = "SMMU translation error",
};
/*
* Memory Controller (MC) has few Memory Clients that are issuing memory
@ -748,7 +729,6 @@ static int tegra_mc_probe(struct platform_device *pdev)
{
struct resource *res;
struct tegra_mc *mc;
void *isr;
u64 mask;
int err;
@ -777,69 +757,37 @@ static int tegra_mc_probe(struct platform_device *pdev)
if (IS_ERR(mc->regs))
return PTR_ERR(mc->regs);
mc->clk = devm_clk_get(&pdev->dev, "mc");
if (IS_ERR(mc->clk)) {
dev_err(&pdev->dev, "failed to get MC clock: %ld\n",
PTR_ERR(mc->clk));
return PTR_ERR(mc->clk);
}
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
if (mc->soc == &tegra20_mc_soc) {
isr = tegra20_mc_irq;
} else
#endif
{
/* ensure that debug features are disabled */
mc_writel(mc, 0x00000000, MC_TIMING_CONTROL_DBG);
err = tegra_mc_setup_latency_allowance(mc);
if (err < 0) {
dev_err(&pdev->dev,
"failed to setup latency allowance: %d\n",
err);
return err;
}
isr = tegra_mc_irq;
err = tegra_mc_setup_timings(mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to setup timings: %d\n",
err);
return err;
}
}
mc->irq = platform_get_irq(pdev, 0);
if (mc->irq < 0)
return mc->irq;
WARN(!mc->soc->client_id_mask, "missing client ID mask for this SoC\n");
mc_writel(mc, mc->soc->intmask, MC_INTMASK);
err = devm_request_irq(&pdev->dev, mc->irq, isr, 0,
dev_name(&pdev->dev), mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n", mc->irq,
err);
return err;
}
mc->debugfs.root = debugfs_create_dir("mc", NULL);
if (mc->soc->init) {
err = mc->soc->init(mc);
if (mc->soc->ops && mc->soc->ops->probe) {
err = mc->soc->ops->probe(mc);
if (err < 0)
dev_err(&pdev->dev, "failed to initialize SoC driver: %d\n",
err);
return err;
}
err = tegra_mc_reset_setup(mc);
if (err < 0)
dev_err(&pdev->dev, "failed to register reset controller: %d\n",
err);
if (mc->soc->ops && mc->soc->ops->handle_irq) {
mc->irq = platform_get_irq(pdev, 0);
if (mc->irq < 0)
return mc->irq;
WARN(!mc->soc->client_id_mask, "missing client ID mask for this SoC\n");
mc_writel(mc, mc->soc->intmask, MC_INTMASK);
err = devm_request_irq(&pdev->dev, mc->irq, mc->soc->ops->handle_irq, 0,
dev_name(&pdev->dev), mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n", mc->irq,
err);
return err;
}
}
if (mc->soc->reset_ops) {
err = tegra_mc_reset_setup(mc);
if (err < 0)
dev_err(&pdev->dev, "failed to register reset controller: %d\n", err);
}
err = tegra_mc_interconnect_setup(mc);
if (err < 0)
@ -867,37 +815,28 @@ static int tegra_mc_probe(struct platform_device *pdev)
return 0;
}
static int tegra_mc_suspend(struct device *dev)
static int __maybe_unused tegra_mc_suspend(struct device *dev)
{
struct tegra_mc *mc = dev_get_drvdata(dev);
int err;
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && mc->gart) {
err = tegra_gart_suspend(mc->gart);
if (err)
return err;
}
if (mc->soc->ops && mc->soc->ops->suspend)
return mc->soc->ops->suspend(mc);
return 0;
}
static int tegra_mc_resume(struct device *dev)
static int __maybe_unused tegra_mc_resume(struct device *dev)
{
struct tegra_mc *mc = dev_get_drvdata(dev);
int err;
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && mc->gart) {
err = tegra_gart_resume(mc->gart);
if (err)
return err;
}
if (mc->soc->ops && mc->soc->ops->resume)
return mc->soc->ops->resume(mc);
return 0;
}
static const struct dev_pm_ops tegra_mc_pm_ops = {
.suspend = tegra_mc_suspend,
.resume = tegra_mc_resume,
SET_SYSTEM_SLEEP_PM_OPS(tegra_mc_suspend, tegra_mc_resume)
};
static struct platform_driver tegra_mc_driver = {


@ -129,6 +129,31 @@ extern const struct tegra_mc_soc tegra132_mc_soc;
extern const struct tegra_mc_soc tegra210_mc_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_186_SOC
extern const struct tegra_mc_soc tegra186_mc_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_194_SOC
extern const struct tegra_mc_soc tegra194_mc_soc;
#endif
#if defined(CONFIG_ARCH_TEGRA_3x_SOC) || \
defined(CONFIG_ARCH_TEGRA_114_SOC) || \
defined(CONFIG_ARCH_TEGRA_124_SOC) || \
defined(CONFIG_ARCH_TEGRA_132_SOC) || \
defined(CONFIG_ARCH_TEGRA_210_SOC)
int tegra30_mc_probe(struct tegra_mc *mc);
extern const struct tegra_mc_ops tegra30_mc_ops;
#endif
#if defined(CONFIG_ARCH_TEGRA_186_SOC) || \
defined(CONFIG_ARCH_TEGRA_194_SOC)
extern const struct tegra_mc_ops tegra186_mc_ops;
#endif
extern const char * const tegra_mc_status_names[32];
extern const char * const tegra_mc_error_names[8];
/*
* These IDs are for internal use of Tegra ICC drivers. The ID numbers are
* chosen such that they don't conflict with the device-tree ICC node IDs.
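
Taken together, the mc.c and mc.h changes above replace the old per-SoC init hook and the single hard-wired interrupt handler with a tegra_mc_ops table that the common probe dispatches through. A sketch of how a hypothetical SoC backend would be wired up against the reworked interface, assuming the in-tree mc.h declarations (all foo_* names are made up):

static int foo_mc_probe(struct tegra_mc *mc)
{
	/* clock setup, latency allowance, timings, ... */
	return 0;
}

static irqreturn_t foo_mc_handle_irq(int irq, void *data)
{
	/* 'data' is the struct tegra_mc *; decode MC_INTSTATUS and report
	 * errors via tegra_mc_status_names[] / tegra_mc_error_names[]
	 */
	return IRQ_HANDLED;
}

static const struct tegra_mc_ops foo_mc_ops = {
	.probe = foo_mc_probe,			/* replaces the old soc->init() */
	.handle_irq = foo_mc_handle_irq,	/* IRQ only requested if set */
	/* .suspend, .resume and .probe_device are optional as well */
};

static const struct tegra_mc_soc foo_mc_soc = {
	/* .clients, .num_clients, .intmask, .client_id_mask, ... */
	.ops = &foo_mc_ops,
};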

File diff suppressed because it is too large.


@ -272,8 +272,8 @@
#define EMC_PUTERM_ADJ 0x574
#define DRAM_DEV_SEL_ALL 0
#define DRAM_DEV_SEL_0 (2 << 30)
#define DRAM_DEV_SEL_1 (1 << 30)
#define DRAM_DEV_SEL_0 BIT(31)
#define DRAM_DEV_SEL_1 BIT(30)
#define EMC_CFG_POWER_FEATURES_MASK \
(EMC_CFG_DYN_SREF | EMC_CFG_DRAM_ACPD | EMC_CFG_DRAM_CLKSTOP_SR | \
@ -1269,10 +1269,6 @@ static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc)
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(dev, "failed to create debugfs directory\n");
return;
}
debugfs_create_file("available_rates", 0444, emc->debugfs.root, emc,
&tegra_emc_debug_available_rates_fops);

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -776,10 +776,6 @@ static void tegra_emc_debugfs_init(struct tegra_emc *emc)
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(emc->dev, "failed to create debugfs directory\n");
return;
}
debugfs_create_file("available_rates", 0444, emc->debugfs.root,
emc, &tegra_emc_debug_available_rates_fops);
@ -908,49 +904,6 @@ err_msg:
return err;
}
static int tegra_emc_opp_table_init(struct tegra_emc *emc)
{
u32 hw_version = BIT(tegra_sku_info.soc_process_id);
struct opp_table *hw_opp_table;
int err;
hw_opp_table = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1);
err = PTR_ERR_OR_ZERO(hw_opp_table);
if (err) {
dev_err(emc->dev, "failed to set OPP supported HW: %d\n", err);
return err;
}
err = dev_pm_opp_of_add_table(emc->dev);
if (err) {
if (err == -ENODEV)
dev_err(emc->dev, "OPP table not found, please update your device tree\n");
else
dev_err(emc->dev, "failed to add OPP table: %d\n", err);
goto put_hw_table;
}
dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
/* first dummy rate-set initializes voltage state */
err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));
if (err) {
dev_err(emc->dev, "failed to initialize OPP clock: %d\n", err);
goto remove_table;
}
return 0;
remove_table:
dev_pm_opp_of_remove_table(emc->dev);
put_hw_table:
dev_pm_opp_put_supported_hw(hw_opp_table);
return err;
}
static void devm_tegra_emc_unset_callback(void *data)
{
tegra20_clk_set_emc_round_callback(NULL, NULL);
@ -1077,6 +1030,7 @@ static int tegra_emc_devfreq_init(struct tegra_emc *emc)
static int tegra_emc_probe(struct platform_device *pdev)
{
struct tegra_core_opp_params opp_params = {};
struct device_node *np;
struct tegra_emc *emc;
int irq, err;
@ -1122,7 +1076,9 @@ static int tegra_emc_probe(struct platform_device *pdev)
if (err)
return err;
err = tegra_emc_opp_table_init(emc);
opp_params.init_state = true;
err = devm_tegra_core_dev_init_opp_table(&pdev->dev, &opp_params);
if (err)
return err;


@ -679,7 +679,7 @@ static int tegra20_mc_stats_show(struct seq_file *s, void *unused)
return 0;
}
static int tegra20_mc_init(struct tegra_mc *mc)
static int tegra20_mc_probe(struct tegra_mc *mc)
{
debugfs_create_devm_seqfile(mc->dev, "stats", mc->debugfs.root,
tegra20_mc_stats_show);
@ -687,6 +687,112 @@ static int tegra20_mc_init(struct tegra_mc *mc)
return 0;
}
static int tegra20_mc_suspend(struct tegra_mc *mc)
{
int err;
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && mc->gart) {
err = tegra_gart_suspend(mc->gart);
if (err < 0)
return err;
}
return 0;
}
static int tegra20_mc_resume(struct tegra_mc *mc)
{
int err;
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_GART) && mc->gart) {
err = tegra_gart_resume(mc->gart);
if (err < 0)
return err;
}
return 0;
}
static irqreturn_t tegra20_mc_handle_irq(int irq, void *data)
{
struct tegra_mc *mc = data;
unsigned long status;
unsigned int bit;
/* mask all interrupts to avoid flooding */
status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask;
if (!status)
return IRQ_NONE;
for_each_set_bit(bit, &status, 32) {
const char *error = tegra_mc_status_names[bit];
const char *direction = "read", *secure = "";
const char *client, *desc;
phys_addr_t addr;
u32 value, reg;
u8 id, type;
switch (BIT(bit)) {
case MC_INT_DECERR_EMEM:
reg = MC_DECERR_EMEM_OTHERS_STATUS;
value = mc_readl(mc, reg);
id = value & mc->soc->client_id_mask;
desc = tegra_mc_error_names[2];
if (value & BIT(31))
direction = "write";
break;
case MC_INT_INVALID_GART_PAGE:
reg = MC_GART_ERROR_REQ;
value = mc_readl(mc, reg);
id = (value >> 1) & mc->soc->client_id_mask;
desc = tegra_mc_error_names[2];
if (value & BIT(0))
direction = "write";
break;
case MC_INT_SECURITY_VIOLATION:
reg = MC_SECURITY_VIOLATION_STATUS;
value = mc_readl(mc, reg);
id = value & mc->soc->client_id_mask;
type = (value & BIT(30)) ? 4 : 3;
desc = tegra_mc_error_names[type];
secure = "secure ";
if (value & BIT(31))
direction = "write";
break;
default:
continue;
}
client = mc->soc->clients[id].name;
addr = mc_readl(mc, reg + sizeof(u32));
dev_err_ratelimited(mc->dev, "%s: %s%s @%pa: %s (%s)\n",
client, secure, direction, &addr, error,
desc);
}
/* clear interrupts */
mc_writel(mc, status, MC_INTSTATUS);
return IRQ_HANDLED;
}
static const struct tegra_mc_ops tegra20_mc_ops = {
.probe = tegra20_mc_probe,
.suspend = tegra20_mc_suspend,
.resume = tegra20_mc_resume,
.handle_irq = tegra20_mc_handle_irq,
};
const struct tegra_mc_soc tegra20_mc_soc = {
.clients = tegra20_mc_clients,
.num_clients = ARRAY_SIZE(tegra20_mc_clients),
@ -698,5 +804,5 @@ const struct tegra_mc_soc tegra20_mc_soc = {
.resets = tegra20_mc_resets,
.num_resets = ARRAY_SIZE(tegra20_mc_resets),
.icc_ops = &tegra20_mc_icc_ops,
.init = tegra20_mc_init,
.ops = &tegra20_mc_ops,
};


@ -1759,10 +1759,6 @@ static void tegra210_emc_debugfs_init(struct tegra210_emc *emc)
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(dev, "failed to create debugfs directory\n");
return;
}
debugfs_create_file("available_rates", 0444, emc->debugfs.root, emc,
&tegra210_emc_debug_available_rates_fops);

(file diff suppressed because it is too large)


@ -150,8 +150,8 @@
#define EMC_SELF_REF_CMD_ENABLED BIT(0)
#define DRAM_DEV_SEL_ALL (0 << 30)
#define DRAM_DEV_SEL_0 (2 << 30)
#define DRAM_DEV_SEL_1 (1 << 30)
#define DRAM_DEV_SEL_0 BIT(31)
#define DRAM_DEV_SEL_1 BIT(30)
#define DRAM_BROADCAST(num) \
((num) > 1 ? DRAM_DEV_SEL_ALL : DRAM_DEV_SEL_0)
@ -1354,10 +1354,6 @@ static void tegra_emc_debugfs_init(struct tegra_emc *emc)
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(emc->dev, "failed to create debugfs directory\n");
return;
}
debugfs_create_file("available_rates", 0444, emc->debugfs.root,
emc, &tegra_emc_debug_available_rates_fops);
@ -1480,49 +1476,6 @@ err_msg:
return err;
}
static int tegra_emc_opp_table_init(struct tegra_emc *emc)
{
u32 hw_version = BIT(tegra_sku_info.soc_speedo_id);
struct opp_table *hw_opp_table;
int err;
hw_opp_table = dev_pm_opp_set_supported_hw(emc->dev, &hw_version, 1);
err = PTR_ERR_OR_ZERO(hw_opp_table);
if (err) {
dev_err(emc->dev, "failed to set OPP supported HW: %d\n", err);
return err;
}
err = dev_pm_opp_of_add_table(emc->dev);
if (err) {
if (err == -ENODEV)
dev_err(emc->dev, "OPP table not found, please update your device tree\n");
else
dev_err(emc->dev, "failed to add OPP table: %d\n", err);
goto put_hw_table;
}
dev_info_once(emc->dev, "OPP HW ver. 0x%x, current clock rate %lu MHz\n",
hw_version, clk_get_rate(emc->clk) / 1000000);
/* first dummy rate-set initializes voltage state */
err = dev_pm_opp_set_rate(emc->dev, clk_get_rate(emc->clk));
if (err) {
dev_err(emc->dev, "failed to initialize OPP clock: %d\n", err);
goto remove_table;
}
return 0;
remove_table:
dev_pm_opp_of_remove_table(emc->dev);
put_hw_table:
dev_pm_opp_put_supported_hw(hw_opp_table);
return err;
}
static void devm_tegra_emc_unset_callback(void *data)
{
tegra20_clk_set_emc_round_callback(NULL, NULL);
@ -1568,6 +1521,7 @@ static int tegra_emc_init_clk(struct tegra_emc *emc)
static int tegra_emc_probe(struct platform_device *pdev)
{
struct tegra_core_opp_params opp_params = {};
struct device_node *np;
struct tegra_emc *emc;
int err;
@ -1617,7 +1571,9 @@ static int tegra_emc_probe(struct platform_device *pdev)
if (err)
return err;
err = tegra_emc_opp_table_init(emc);
opp_params.init_state = true;
err = devm_tegra_core_dev_init_opp_table(&pdev->dev, &opp_params);
if (err)
return err;

(file diff suppressed because it is too large)


@ -43,8 +43,9 @@ config RESET_BCM6345
This enables the reset controller driver for BCM6345 SoCs.
config RESET_BERLIN
bool "Berlin Reset Driver" if COMPILE_TEST
default ARCH_BERLIN
tristate "Berlin Reset Driver"
depends on ARCH_BERLIN || COMPILE_TEST
default m if ARCH_BERLIN
help
This enables the reset controller driver for Marvell Berlin SoCs.
@ -59,7 +60,8 @@ config RESET_BRCMSTB
config RESET_BRCMSTB_RESCAL
bool "Broadcom STB RESCAL reset controller"
depends on HAS_IOMEM
default ARCH_BRCMSTB || COMPILE_TEST
depends on ARCH_BRCMSTB || COMPILE_TEST
default ARCH_BRCMSTB
help
This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
BCM7216.
@ -82,6 +84,7 @@ config RESET_IMX7
config RESET_INTEL_GW
bool "Intel Reset Controller Driver"
depends on X86 || COMPILE_TEST
depends on OF && HAS_IOMEM
select REGMAP_MMIO
help
@ -111,6 +114,14 @@ config RESET_LPC18XX
help
This enables the reset controller driver for NXP LPC18xx/43xx SoCs.
config RESET_MCHP_SPARX5
bool "Microchip Sparx5 reset driver"
depends on HAS_IOMEM || COMPILE_TEST
default y if SPARX5_SWITCH
select MFD_SYSCON
help
This driver supports switch core reset for the Microchip Sparx5 SoC.
config RESET_MESON
tristate "Meson Reset Driver"
depends on ARCH_MESON || COMPILE_TEST


@ -16,6 +16,7 @@ obj-$(CONFIG_RESET_INTEL_GW) += reset-intel-gw.o
obj-$(CONFIG_RESET_K210) += reset-k210.o
obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o
obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o
obj-$(CONFIG_RESET_MCHP_SPARX5) += reset-microchip-sparx5.o
obj-$(CONFIG_RESET_MESON) += reset-meson.o
obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o
obj-$(CONFIG_RESET_NPCM) += reset-npcm.o


@ -84,7 +84,7 @@ static const char *rcdev_name(struct reset_controller_dev *rcdev)
* without gaps.
*/
static int of_reset_simple_xlate(struct reset_controller_dev *rcdev,
const struct of_phandle_args *reset_spec)
const struct of_phandle_args *reset_spec)
{
if (reset_spec->args[0] >= rcdev->nr_resets)
return -EINVAL;
@ -744,9 +744,9 @@ void reset_control_bulk_release(int num_rstcs,
}
EXPORT_SYMBOL_GPL(reset_control_bulk_release);
static struct reset_control *__reset_control_get_internal(
struct reset_controller_dev *rcdev,
unsigned int index, bool shared, bool acquired)
static struct reset_control *
__reset_control_get_internal(struct reset_controller_dev *rcdev,
unsigned int index, bool shared, bool acquired)
{
struct reset_control *rstc;
@ -774,7 +774,10 @@ static struct reset_control *__reset_control_get_internal(
if (!rstc)
return ERR_PTR(-ENOMEM);
try_module_get(rcdev->owner);
if (!try_module_get(rcdev->owner)) {
kfree(rstc);
return ERR_PTR(-ENODEV);
}
rstc->rcdev = rcdev;
list_add(&rstc->list, &rcdev->reset_control_head);
@ -806,9 +809,9 @@ static void __reset_control_put_internal(struct reset_control *rstc)
kref_put(&rstc->refcnt, __reset_control_release);
}
struct reset_control *__of_reset_control_get(struct device_node *node,
const char *id, int index, bool shared,
bool optional, bool acquired)
struct reset_control *
__of_reset_control_get(struct device_node *node, const char *id, int index,
bool shared, bool optional, bool acquired)
{
struct reset_control *rstc;
struct reset_controller_dev *r, *rcdev;
@ -1027,9 +1030,9 @@ static void devm_reset_control_release(struct device *dev, void *res)
reset_control_put(*(struct reset_control **)res);
}
struct reset_control *__devm_reset_control_get(struct device *dev,
const char *id, int index, bool shared,
bool optional, bool acquired)
struct reset_control *
__devm_reset_control_get(struct device *dev, const char *id, int index,
bool shared, bool optional, bool acquired)
{
struct reset_control **ptr, *rstc;


@ -3,7 +3,7 @@
* Hisilicon Hi6220 reset controller driver
*
* Copyright (c) 2016 Linaro Limited.
* Copyright (c) 2015-2016 Hisilicon Limited.
* Copyright (c) 2015-2016 HiSilicon Limited.
*
* Author: Feng Chen <puck.chen@hisilicon.com>
*/


@ -118,6 +118,7 @@ static struct platform_driver a10sr_reset_driver = {
.probe = a10sr_reset_probe,
.driver = {
.name = "altr_a10sr_reset",
.of_match_table = a10sr_reset_of_match,
},
};
module_platform_driver(a10sr_reset_driver);


@ -86,7 +86,7 @@ static int bcm6345_reset_status(struct reset_controller_dev *rcdev,
return !(__raw_readl(bcm6345_reset->base) & BIT(id));
}
static struct reset_control_ops bcm6345_reset_ops = {
static const struct reset_control_ops bcm6345_reset_ops = {
.assert = bcm6345_reset_assert,
.deassert = bcm6345_reset_deassert,
.reset = bcm6345_reset_reset,


@ -14,7 +14,7 @@
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
@ -55,7 +55,7 @@ static const struct reset_control_ops berlin_reset_ops = {
static int berlin_reset_xlate(struct reset_controller_dev *rcdev,
const struct of_phandle_args *reset_spec)
{
unsigned offset, bit;
unsigned int offset, bit;
offset = reset_spec->args[0];
bit = reset_spec->args[1];
@ -93,6 +93,7 @@ static const struct of_device_id berlin_reset_dt_match[] = {
{ .compatible = "marvell,berlin2-reset" },
{ },
};
MODULE_DEVICE_TABLE(of, berlin_reset_dt_match);
static struct platform_driver berlin_reset_driver = {
.probe = berlin2_reset_probe,
@ -101,4 +102,9 @@ static struct platform_driver berlin_reset_driver = {
.of_match_table = berlin_reset_dt_match,
},
};
builtin_platform_driver(berlin_reset_driver);
module_platform_driver(berlin_reset_driver);
MODULE_AUTHOR("Antoine Tenart <antoine.tenart@free-electrons.com>");
MODULE_AUTHOR("Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>");
MODULE_DESCRIPTION("Synaptics Berlin reset controller");
MODULE_LICENSE("GPL");


@ -111,6 +111,7 @@ static const struct of_device_id brcmstb_reset_of_match[] = {
{ .compatible = "brcm,brcmstb-reset" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, brcmstb_reset_of_match);
static struct platform_driver brcmstb_reset_driver = {
.probe = brcmstb_reset_probe,


@ -186,7 +186,7 @@ static int lantiq_rcu_reset_probe(struct platform_device *pdev)
priv->rcdev.of_xlate = lantiq_rcu_reset_xlate;
priv->rcdev.of_reset_n_cells = 2;
return reset_controller_register(&priv->rcdev);
return devm_reset_controller_register(&pdev->dev, &priv->rcdev);
}
static const struct of_device_id lantiq_rcu_reset_dt_ids[] = {


@ -0,0 +1,146 @@
// SPDX-License-Identifier: GPL-2.0+
/* Microchip Sparx5 Switch Reset driver
*
* Copyright (c) 2020 Microchip Technology Inc. and its subsidiaries.
*
* The Sparx5 Chip Register Model can be browsed at this location:
* https://github.com/microchip-ung/sparx-5_reginfo
*/
#include <linux/mfd/syscon.h>
#include <linux/of_device.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
#define PROTECT_REG 0x84
#define PROTECT_BIT BIT(10)
#define SOFT_RESET_REG 0x00
#define SOFT_RESET_BIT BIT(1)
struct mchp_reset_context {
struct regmap *cpu_ctrl;
struct regmap *gcb_ctrl;
struct reset_controller_dev rcdev;
};
static struct regmap_config sparx5_reset_regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
};
static int sparx5_switch_reset(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct mchp_reset_context *ctx =
container_of(rcdev, struct mchp_reset_context, rcdev);
u32 val;
/* Make sure the core is PROTECTED from reset */
regmap_update_bits(ctx->cpu_ctrl, PROTECT_REG, PROTECT_BIT, PROTECT_BIT);
/* Start soft reset */
regmap_write(ctx->gcb_ctrl, SOFT_RESET_REG, SOFT_RESET_BIT);
/* Wait for soft reset done */
return regmap_read_poll_timeout(ctx->gcb_ctrl, SOFT_RESET_REG, val,
(val & SOFT_RESET_BIT) == 0,
1, 100);
}
static const struct reset_control_ops sparx5_reset_ops = {
.reset = sparx5_switch_reset,
};
static int mchp_sparx5_map_syscon(struct platform_device *pdev, char *name,
struct regmap **target)
{
struct device_node *syscon_np;
struct regmap *regmap;
int err;
syscon_np = of_parse_phandle(pdev->dev.of_node, name, 0);
if (!syscon_np)
return -ENODEV;
regmap = syscon_node_to_regmap(syscon_np);
of_node_put(syscon_np);
if (IS_ERR(regmap)) {
err = PTR_ERR(regmap);
dev_err(&pdev->dev, "No '%s' map: %d\n", name, err);
return err;
}
*target = regmap;
return 0;
}
static int mchp_sparx5_map_io(struct platform_device *pdev, int index,
struct regmap **target)
{
struct resource *res;
struct regmap *map;
void __iomem *mem;
mem = devm_platform_get_and_ioremap_resource(pdev, index, &res);
if (IS_ERR(mem)) {
dev_err(&pdev->dev, "Could not map resource %d\n", index);
return PTR_ERR(mem);
}
sparx5_reset_regmap_config.name = res->name;
map = devm_regmap_init_mmio(&pdev->dev, mem, &sparx5_reset_regmap_config);
if (IS_ERR(map))
return PTR_ERR(map);
*target = map;
return 0;
}
static int mchp_sparx5_reset_probe(struct platform_device *pdev)
{
struct device_node *dn = pdev->dev.of_node;
struct mchp_reset_context *ctx;
int err;
ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
err = mchp_sparx5_map_syscon(pdev, "cpu-syscon", &ctx->cpu_ctrl);
if (err)
return err;
err = mchp_sparx5_map_io(pdev, 0, &ctx->gcb_ctrl);
if (err)
return err;
ctx->rcdev.owner = THIS_MODULE;
ctx->rcdev.nr_resets = 1;
ctx->rcdev.ops = &sparx5_reset_ops;
ctx->rcdev.of_node = dn;
return devm_reset_controller_register(&pdev->dev, &ctx->rcdev);
}
static const struct of_device_id mchp_sparx5_reset_of_match[] = {
{
.compatible = "microchip,sparx5-switch-reset",
},
{ }
};
static struct platform_driver mchp_sparx5_reset_driver = {
.probe = mchp_sparx5_reset_probe,
.driver = {
.name = "sparx5-switch-reset",
.of_match_table = mchp_sparx5_reset_of_match,
},
};
static int __init mchp_sparx5_reset_init(void)
{
return platform_driver_register(&mchp_sparx5_reset_driver);
}
postcore_initcall(mchp_sparx5_reset_init);
MODULE_DESCRIPTION("Microchip Sparx5 switch reset driver");
MODULE_AUTHOR("Steen Hegelund <steen.hegelund@microchip.com>");
MODULE_LICENSE("Dual MIT/GPL");


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* drivers/reset/reset-oxnas.c
* Oxford Semiconductor Reset Controller driver
*
* Copyright (C) 2016 Neil Armstrong <narmstrong@baylibre.com>
* Copyright (C) 2014 Ma Haijun <mahaijuns@gmail.com>


@ -58,8 +58,8 @@ struct ti_syscon_reset_data {
unsigned int nr_controls;
};
#define to_ti_syscon_reset_data(rcdev) \
container_of(rcdev, struct ti_syscon_reset_data, rcdev)
#define to_ti_syscon_reset_data(_rcdev) \
container_of(_rcdev, struct ti_syscon_reset_data, rcdev)
/**
* ti_syscon_reset_assert() - assert device reset


@ -20,7 +20,7 @@ struct uniphier_reset_data {
#define UNIPHIER_RESET_ACTIVE_LOW BIT(0)
};
#define UNIPHIER_RESET_ID_END (unsigned int)(-1)
#define UNIPHIER_RESET_ID_END ((unsigned int)(-1))
#define UNIPHIER_RESET_END \
{ .id = UNIPHIER_RESET_ID_END }


@ -83,8 +83,8 @@ static const struct zynqmp_reset_soc_data zynqmp_reset_data = {
};
static const struct zynqmp_reset_soc_data versal_reset_data = {
.reset_id = 0,
.num_resets = VERSAL_NR_RESETS,
.reset_id = 0,
.num_resets = VERSAL_NR_RESETS,
};
static const struct reset_control_ops zynqmp_reset_ops = {


@ -153,7 +153,7 @@ static int syscfg_reset_controller_register(struct device *dev,
if (!rc->channels)
return -ENOMEM;
rc->rst.ops = &syscfg_reset_ops,
rc->rst.ops = &syscfg_reset_ops;
rc->rst.of_node = dev->of_node;
rc->rst.nr_resets = data->nr_channels;
rc->active_low = data->active_low;


@ -68,7 +68,7 @@ struct meson_ee_pwrc_domain_desc {
struct meson_ee_pwrc_top_domain *top_pd;
unsigned int mem_pd_count;
struct meson_ee_pwrc_mem_domain *mem_pd;
bool (*get_power)(struct meson_ee_pwrc_domain *pwrc_domain);
bool (*is_powered_off)(struct meson_ee_pwrc_domain *pwrc_domain);
};
struct meson_ee_pwrc_domain_data {
@ -217,7 +217,7 @@ static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_audio[] = {
{ HHI_AUDIO_MEM_PD_REG0, GENMASK(27, 26) },
};
#define VPU_PD(__name, __top_pd, __mem, __get_power, __resets, __clks) \
#define VPU_PD(__name, __top_pd, __mem, __is_pwr_off, __resets, __clks) \
{ \
.name = __name, \
.reset_names_count = __resets, \
@ -225,46 +225,46 @@ static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_audio[] = {
.top_pd = __top_pd, \
.mem_pd_count = ARRAY_SIZE(__mem), \
.mem_pd = __mem, \
.get_power = __get_power, \
.is_powered_off = __is_pwr_off, \
}
#define TOP_PD(__name, __top_pd, __mem, __get_power) \
#define TOP_PD(__name, __top_pd, __mem, __is_pwr_off) \
{ \
.name = __name, \
.top_pd = __top_pd, \
.mem_pd_count = ARRAY_SIZE(__mem), \
.mem_pd = __mem, \
.get_power = __get_power, \
.is_powered_off = __is_pwr_off, \
}
#define MEM_PD(__name, __mem) \
TOP_PD(__name, NULL, __mem, NULL)
static bool pwrc_ee_get_power(struct meson_ee_pwrc_domain *pwrc_domain);
static bool pwrc_ee_is_powered_off(struct meson_ee_pwrc_domain *pwrc_domain);
static struct meson_ee_pwrc_domain_desc axg_pwrc_domains[] = {
[PWRC_AXG_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, axg_pwrc_mem_vpu,
pwrc_ee_get_power, 5, 2),
pwrc_ee_is_powered_off, 5, 2),
[PWRC_AXG_ETHERNET_MEM_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
[PWRC_AXG_AUDIO_ID] = MEM_PD("AUDIO", axg_pwrc_mem_audio),
};
static struct meson_ee_pwrc_domain_desc g12a_pwrc_domains[] = {
[PWRC_G12A_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, g12a_pwrc_mem_vpu,
pwrc_ee_get_power, 11, 2),
pwrc_ee_is_powered_off, 11, 2),
[PWRC_G12A_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
};
static struct meson_ee_pwrc_domain_desc gxbb_pwrc_domains[] = {
[PWRC_GXBB_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, gxbb_pwrc_mem_vpu,
pwrc_ee_get_power, 12, 2),
pwrc_ee_is_powered_off, 12, 2),
[PWRC_GXBB_ETHERNET_MEM_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
};
static struct meson_ee_pwrc_domain_desc meson8_pwrc_domains[] = {
[PWRC_MESON8_VPU_ID] = VPU_PD("VPU", &meson8_pwrc_vpu,
meson8_pwrc_mem_vpu, pwrc_ee_get_power,
0, 1),
meson8_pwrc_mem_vpu,
pwrc_ee_is_powered_off, 0, 1),
[PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM",
meson_pwrc_mem_eth),
[PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM",
@ -273,8 +273,8 @@ static struct meson_ee_pwrc_domain_desc meson8_pwrc_domains[] = {
static struct meson_ee_pwrc_domain_desc meson8b_pwrc_domains[] = {
[PWRC_MESON8_VPU_ID] = VPU_PD("VPU", &meson8_pwrc_vpu,
meson8_pwrc_mem_vpu, pwrc_ee_get_power,
11, 1),
meson8_pwrc_mem_vpu,
pwrc_ee_is_powered_off, 11, 1),
[PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM",
meson_pwrc_mem_eth),
[PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM",
@ -283,15 +283,15 @@ static struct meson_ee_pwrc_domain_desc meson8b_pwrc_domains[] = {
static struct meson_ee_pwrc_domain_desc sm1_pwrc_domains[] = {
[PWRC_SM1_VPU_ID] = VPU_PD("VPU", &sm1_pwrc_vpu, sm1_pwrc_mem_vpu,
pwrc_ee_get_power, 11, 2),
pwrc_ee_is_powered_off, 11, 2),
[PWRC_SM1_NNA_ID] = TOP_PD("NNA", &sm1_pwrc_nna, sm1_pwrc_mem_nna,
pwrc_ee_get_power),
pwrc_ee_is_powered_off),
[PWRC_SM1_USB_ID] = TOP_PD("USB", &sm1_pwrc_usb, sm1_pwrc_mem_usb,
pwrc_ee_get_power),
pwrc_ee_is_powered_off),
[PWRC_SM1_PCIE_ID] = TOP_PD("PCI", &sm1_pwrc_pci, sm1_pwrc_mem_pcie,
pwrc_ee_get_power),
pwrc_ee_is_powered_off),
[PWRC_SM1_GE2D_ID] = TOP_PD("GE2D", &sm1_pwrc_ge2d, sm1_pwrc_mem_ge2d,
pwrc_ee_get_power),
pwrc_ee_is_powered_off),
[PWRC_SM1_AUDIO_ID] = MEM_PD("AUDIO", sm1_pwrc_mem_audio),
[PWRC_SM1_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
};
@ -314,7 +314,7 @@ struct meson_ee_pwrc {
struct genpd_onecell_data xlate;
};
static bool pwrc_ee_get_power(struct meson_ee_pwrc_domain *pwrc_domain)
static bool pwrc_ee_is_powered_off(struct meson_ee_pwrc_domain *pwrc_domain)
{
u32 reg;
@ -445,7 +445,7 @@ static int meson_ee_pwrc_init_domain(struct platform_device *pdev,
* we need to power the domain off, otherwise the internal clocks
* prepare/enable counters won't be in sync.
*/
if (dom->num_clks && dom->desc.get_power && !dom->desc.get_power(dom)) {
if (dom->num_clks && dom->desc.is_powered_off && !dom->desc.is_powered_off(dom)) {
ret = clk_bulk_prepare_enable(dom->num_clks, dom->clks);
if (ret)
return ret;
@ -456,8 +456,8 @@ static int meson_ee_pwrc_init_domain(struct platform_device *pdev,
return ret;
} else {
ret = pm_genpd_init(&dom->base, NULL,
(dom->desc.get_power ?
dom->desc.get_power(dom) : true));
(dom->desc.is_powered_off ?
dom->desc.is_powered_off(dom) : true));
if (ret)
return ret;
}
@ -536,7 +536,7 @@ static void meson_ee_pwrc_shutdown(struct platform_device *pdev)
for (i = 0 ; i < pwrc->xlate.num_domains ; ++i) {
struct meson_ee_pwrc_domain *dom = &pwrc->domains[i];
if (dom->desc.get_power && !dom->desc.get_power(dom))
if (dom->desc.is_powered_off && !dom->desc.is_powered_off(dom))
meson_ee_pwrc_off(&dom->base);
}
}


@ -14,11 +14,6 @@
static u32 family_id;
static u32 product_id;
static const struct of_device_id brcmstb_machine_match[] = {
{ .compatible = "brcm,brcmstb", },
{ }
};
u32 brcmstb_get_family_id(void)
{
return family_id;


@ -12,11 +12,15 @@
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include <linux/reset.h>
#include <linux/sizes.h>
#include <dt-bindings/power/imx7-power.h>
#include <dt-bindings/power/imx8mq-power.h>
#include <dt-bindings/power/imx8mm-power.h>
#include <dt-bindings/power/imx8mn-power.h>
#define GPC_LPCR_A_CORE_BSC 0x000
@ -42,6 +46,25 @@
#define IMX8M_PCIE1_A53_DOMAIN BIT(3)
#define IMX8M_MIPI_A53_DOMAIN BIT(2)
#define IMX8MM_VPUH1_A53_DOMAIN BIT(15)
#define IMX8MM_VPUG2_A53_DOMAIN BIT(14)
#define IMX8MM_VPUG1_A53_DOMAIN BIT(13)
#define IMX8MM_DISPMIX_A53_DOMAIN BIT(12)
#define IMX8MM_VPUMIX_A53_DOMAIN BIT(10)
#define IMX8MM_GPUMIX_A53_DOMAIN BIT(9)
#define IMX8MM_GPU_A53_DOMAIN (BIT(8) | BIT(11))
#define IMX8MM_DDR1_A53_DOMAIN BIT(7)
#define IMX8MM_OTG2_A53_DOMAIN BIT(5)
#define IMX8MM_OTG1_A53_DOMAIN BIT(4)
#define IMX8MM_PCIE_A53_DOMAIN BIT(3)
#define IMX8MM_MIPI_A53_DOMAIN BIT(2)
#define IMX8MN_DISPMIX_A53_DOMAIN BIT(12)
#define IMX8MN_GPUMIX_A53_DOMAIN BIT(9)
#define IMX8MN_DDR1_A53_DOMAIN BIT(7)
#define IMX8MN_OTG1_A53_DOMAIN BIT(4)
#define IMX8MN_MIPI_A53_DOMAIN BIT(2)
#define GPC_PU_PGC_SW_PUP_REQ 0x0f8
#define GPC_PU_PGC_SW_PDN_REQ 0x104
@ -65,14 +88,55 @@
#define IMX8M_PCIE1_SW_Pxx_REQ BIT(1)
#define IMX8M_MIPI_SW_Pxx_REQ BIT(0)
#define IMX8MM_VPUH1_SW_Pxx_REQ BIT(13)
#define IMX8MM_VPUG2_SW_Pxx_REQ BIT(12)
#define IMX8MM_VPUG1_SW_Pxx_REQ BIT(11)
#define IMX8MM_DISPMIX_SW_Pxx_REQ BIT(10)
#define IMX8MM_VPUMIX_SW_Pxx_REQ BIT(8)
#define IMX8MM_GPUMIX_SW_Pxx_REQ BIT(7)
#define IMX8MM_GPU_SW_Pxx_REQ (BIT(6) | BIT(9))
#define IMX8MM_DDR1_SW_Pxx_REQ BIT(5)
#define IMX8MM_OTG2_SW_Pxx_REQ BIT(3)
#define IMX8MM_OTG1_SW_Pxx_REQ BIT(2)
#define IMX8MM_PCIE_SW_Pxx_REQ BIT(1)
#define IMX8MM_MIPI_SW_Pxx_REQ BIT(0)
#define IMX8MN_DISPMIX_SW_Pxx_REQ BIT(10)
#define IMX8MN_GPUMIX_SW_Pxx_REQ BIT(7)
#define IMX8MN_DDR1_SW_Pxx_REQ BIT(5)
#define IMX8MN_OTG1_SW_Pxx_REQ BIT(2)
#define IMX8MN_MIPI_SW_Pxx_REQ BIT(0)
#define GPC_M4_PU_PDN_FLG 0x1bc
#define GPC_PU_PWRHSK 0x1fc
#define IMX8M_GPU_HSK_PWRDNACKN BIT(26)
#define IMX8M_VPU_HSK_PWRDNACKN BIT(25)
#define IMX8M_DISP_HSK_PWRDNACKN BIT(24)
#define IMX8M_GPU_HSK_PWRDNREQN BIT(6)
#define IMX8M_VPU_HSK_PWRDNREQN BIT(5)
#define IMX8M_DISP_HSK_PWRDNREQN BIT(4)
#define IMX8MM_GPUMIX_HSK_PWRDNACKN BIT(29)
#define IMX8MM_GPU_HSK_PWRDNACKN (BIT(27) | BIT(28))
#define IMX8MM_VPUMIX_HSK_PWRDNACKN BIT(26)
#define IMX8MM_DISPMIX_HSK_PWRDNACKN BIT(25)
#define IMX8MM_HSIO_HSK_PWRDNACKN (BIT(23) | BIT(24))
#define IMX8MM_GPUMIX_HSK_PWRDNREQN BIT(11)
#define IMX8MM_GPU_HSK_PWRDNREQN (BIT(9) | BIT(10))
#define IMX8MM_VPUMIX_HSK_PWRDNREQN BIT(8)
#define IMX8MM_DISPMIX_HSK_PWRDNREQN BIT(7)
#define IMX8MM_HSIO_HSK_PWRDNREQN (BIT(5) | BIT(6))
#define IMX8MN_GPUMIX_HSK_PWRDNACKN (BIT(29) | BIT(27))
#define IMX8MN_DISPMIX_HSK_PWRDNACKN BIT(25)
#define IMX8MN_HSIO_HSK_PWRDNACKN BIT(23)
#define IMX8MN_GPUMIX_HSK_PWRDNREQN (BIT(11) | BIT(9))
#define IMX8MN_DISPMIX_HSK_PWRDNREQN BIT(7)
#define IMX8MN_HSIO_HSK_PWRDNREQN BIT(5)
/*
* The PGC offset values in Reference Manual
* (Rev. 1, 01/2018 and the older ones) GPC chapter's
@ -95,18 +159,37 @@
#define IMX8M_PGC_MIPI_CSI2 28
#define IMX8M_PGC_PCIE2 29
#define IMX8MM_PGC_MIPI 16
#define IMX8MM_PGC_PCIE 17
#define IMX8MM_PGC_OTG1 18
#define IMX8MM_PGC_OTG2 19
#define IMX8MM_PGC_DDR1 21
#define IMX8MM_PGC_GPU2D 22
#define IMX8MM_PGC_GPUMIX 23
#define IMX8MM_PGC_VPUMIX 24
#define IMX8MM_PGC_GPU3D 25
#define IMX8MM_PGC_DISPMIX 26
#define IMX8MM_PGC_VPUG1 27
#define IMX8MM_PGC_VPUG2 28
#define IMX8MM_PGC_VPUH1 29
#define IMX8MN_PGC_MIPI 16
#define IMX8MN_PGC_OTG1 18
#define IMX8MN_PGC_DDR1 21
#define IMX8MN_PGC_GPUMIX 23
#define IMX8MN_PGC_DISPMIX 26
#define GPC_PGC_CTRL(n) (0x800 + (n) * 0x40)
#define GPC_PGC_SR(n) (GPC_PGC_CTRL(n) + 0xc)
#define GPC_PGC_CTRL_PCR BIT(0)
#define GPC_CLK_MAX 6
struct imx_pgc_domain {
struct generic_pm_domain genpd;
struct regmap *regmap;
struct regulator *regulator;
struct clk *clk[GPC_CLK_MAX];
struct reset_control *reset;
struct clk_bulk_data *clks;
int num_clks;
unsigned int pgc;
@ -114,7 +197,8 @@ struct imx_pgc_domain {
const struct {
u32 pxx;
u32 map;
u32 hsk;
u32 hskreq;
u32 hskack;
} bits;
const int voltage;
@ -127,96 +211,172 @@ struct imx_pgc_domain_data {
const struct regmap_access_table *reg_access_table;
};
static int imx_gpc_pu_pgc_sw_pxx_req(struct generic_pm_domain *genpd,
bool on)
static inline struct imx_pgc_domain *
to_imx_pgc_domain(struct generic_pm_domain *genpd)
{
struct imx_pgc_domain *domain = container_of(genpd,
struct imx_pgc_domain,
genpd);
unsigned int offset = on ?
GPC_PU_PGC_SW_PUP_REQ : GPC_PU_PGC_SW_PDN_REQ;
const bool enable_power_control = !on;
const bool has_regulator = !IS_ERR(domain->regulator);
int i, ret = 0;
u32 pxx_req;
return container_of(genpd, struct imx_pgc_domain, genpd);
}
regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING,
domain->bits.map, domain->bits.map);
static int imx_pgc_power_up(struct generic_pm_domain *genpd)
{
struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
u32 reg_val;
int ret;
if (has_regulator && on) {
ret = pm_runtime_get_sync(domain->dev);
if (ret < 0) {
pm_runtime_put_noidle(domain->dev);
return ret;
}
if (!IS_ERR(domain->regulator)) {
ret = regulator_enable(domain->regulator);
if (ret) {
dev_err(domain->dev, "failed to enable regulator\n");
goto unmap;
goto out_put_pm;
}
}
/* Enable reset clocks for all devices in the domain */
for (i = 0; i < domain->num_clks; i++)
clk_prepare_enable(domain->clk[i]);
if (enable_power_control)
regmap_update_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc),
GPC_PGC_CTRL_PCR, GPC_PGC_CTRL_PCR);
if (domain->bits.hsk)
regmap_update_bits(domain->regmap, GPC_PU_PWRHSK,
domain->bits.hsk, on ? domain->bits.hsk : 0);
regmap_update_bits(domain->regmap, offset,
domain->bits.pxx, domain->bits.pxx);
/*
* As per "5.5.9.4 Example Code 4" in IMX7DRM.pdf wait
* for PUP_REQ/PDN_REQ bit to be cleared
*/
ret = regmap_read_poll_timeout(domain->regmap, offset, pxx_req,
!(pxx_req & domain->bits.pxx),
0, USEC_PER_MSEC);
ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
if (ret) {
dev_err(domain->dev, "failed to command PGC\n");
/*
* If we were in a process of enabling a
* domain and failed we might as well disable
* the regulator we just enabled. And if it
* was the opposite situation and we failed to
* power down -- keep the regulator on
*/
on = !on;
dev_err(domain->dev, "failed to enable reset clocks\n");
goto out_regulator_disable;
}
if (enable_power_control)
regmap_update_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc),
GPC_PGC_CTRL_PCR, 0);
if (domain->bits.pxx) {
/* request the domain to power up */
regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PUP_REQ,
domain->bits.pxx, domain->bits.pxx);
/*
* As per "5.5.9.4 Example Code 4" in IMX7DRM.pdf wait
* for PUP_REQ/PDN_REQ bit to be cleared
*/
ret = regmap_read_poll_timeout(domain->regmap,
GPC_PU_PGC_SW_PUP_REQ, reg_val,
!(reg_val & domain->bits.pxx),
0, USEC_PER_MSEC);
if (ret) {
dev_err(domain->dev, "failed to command PGC\n");
goto out_clk_disable;
}
/* disable power control */
regmap_clear_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc),
GPC_PGC_CTRL_PCR);
}
reset_control_assert(domain->reset);
/* delay for reset to propagate */
udelay(5);
reset_control_deassert(domain->reset);
/* request the ADB400 to power up */
if (domain->bits.hskreq) {
regmap_update_bits(domain->regmap, GPC_PU_PWRHSK,
domain->bits.hskreq, domain->bits.hskreq);
/*
* ret = regmap_read_poll_timeout(domain->regmap, GPC_PU_PWRHSK, reg_val,
* (reg_val & domain->bits.hskack), 0,
* USEC_PER_MSEC);
* Technically we need the commented code to wait handshake. But that needs
* the BLK-CTL module BUS clk-en bit being set.
*
* There is a separate BLK-CTL module and we will have such a driver for it,
* that driver will set the BUS clk-en bit and handshake will be triggered
* automatically there. Just add a delay and suppose the handshake finish
* after that.
*/
}
/* Disable reset clocks for all devices in the domain */
for (i = 0; i < domain->num_clks; i++)
clk_disable_unprepare(domain->clk[i]);
clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
if (has_regulator && !on) {
int err;
return 0;
out_clk_disable:
clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
out_regulator_disable:
if (!IS_ERR(domain->regulator))
regulator_disable(domain->regulator);
out_put_pm:
pm_runtime_put(domain->dev);
err = regulator_disable(domain->regulator);
if (err)
dev_err(domain->dev,
"failed to disable regulator: %d\n", err);
/* Preserve earlier error code */
ret = ret ?: err;
}
unmap:
regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING,
domain->bits.map, 0);
return ret;
}
static int imx_gpc_pu_pgc_sw_pup_req(struct generic_pm_domain *genpd)
static int imx_pgc_power_down(struct generic_pm_domain *genpd)
{
return imx_gpc_pu_pgc_sw_pxx_req(genpd, true);
}
struct imx_pgc_domain *domain = to_imx_pgc_domain(genpd);
u32 reg_val;
int ret;
static int imx_gpc_pu_pgc_sw_pdn_req(struct generic_pm_domain *genpd)
{
return imx_gpc_pu_pgc_sw_pxx_req(genpd, false);
/* Enable reset clocks for all devices in the domain */
ret = clk_bulk_prepare_enable(domain->num_clks, domain->clks);
if (ret) {
dev_err(domain->dev, "failed to enable reset clocks\n");
return ret;
}
/* request the ADB400 to power down */
if (domain->bits.hskreq) {
regmap_clear_bits(domain->regmap, GPC_PU_PWRHSK,
domain->bits.hskreq);
ret = regmap_read_poll_timeout(domain->regmap, GPC_PU_PWRHSK,
reg_val,
!(reg_val & domain->bits.hskack),
0, USEC_PER_MSEC);
if (ret) {
dev_err(domain->dev, "failed to power down ADB400\n");
goto out_clk_disable;
}
}
if (domain->bits.pxx) {
/* enable power control */
regmap_update_bits(domain->regmap, GPC_PGC_CTRL(domain->pgc),
GPC_PGC_CTRL_PCR, GPC_PGC_CTRL_PCR);
/* request the domain to power down */
regmap_update_bits(domain->regmap, GPC_PU_PGC_SW_PDN_REQ,
domain->bits.pxx, domain->bits.pxx);
/*
* As per "5.5.9.4 Example Code 4" in IMX7DRM.pdf wait
* for PUP_REQ/PDN_REQ bit to be cleared
*/
ret = regmap_read_poll_timeout(domain->regmap,
GPC_PU_PGC_SW_PDN_REQ, reg_val,
!(reg_val & domain->bits.pxx),
0, USEC_PER_MSEC);
if (ret) {
dev_err(domain->dev, "failed to command PGC\n");
goto out_clk_disable;
}
}
/* Disable reset clocks for all devices in the domain */
clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
if (!IS_ERR(domain->regulator)) {
ret = regulator_disable(domain->regulator);
if (ret) {
dev_err(domain->dev, "failed to disable regulator\n");
return ret;
}
}
pm_runtime_put(domain->dev);
return 0;
out_clk_disable:
clk_bulk_disable_unprepare(domain->num_clks, domain->clks);
return ret;
}
static const struct imx_pgc_domain imx7_pgc_domains[] = {
@ -342,7 +502,8 @@ static const struct imx_pgc_domain imx8m_pgc_domains[] = {
.bits = {
.pxx = IMX8M_GPU_SW_Pxx_REQ,
.map = IMX8M_GPU_A53_DOMAIN,
.hsk = IMX8M_GPU_HSK_PWRDNREQN,
.hskreq = IMX8M_GPU_HSK_PWRDNREQN,
.hskack = IMX8M_GPU_HSK_PWRDNACKN,
},
.pgc = IMX8M_PGC_GPU,
},
@ -354,7 +515,8 @@ static const struct imx_pgc_domain imx8m_pgc_domains[] = {
.bits = {
.pxx = IMX8M_VPU_SW_Pxx_REQ,
.map = IMX8M_VPU_A53_DOMAIN,
.hsk = IMX8M_VPU_HSK_PWRDNREQN,
.hskreq = IMX8M_VPU_HSK_PWRDNREQN,
.hskack = IMX8M_VPU_HSK_PWRDNACKN,
},
.pgc = IMX8M_PGC_VPU,
},
@ -366,7 +528,8 @@ static const struct imx_pgc_domain imx8m_pgc_domains[] = {
.bits = {
.pxx = IMX8M_DISP_SW_Pxx_REQ,
.map = IMX8M_DISP_A53_DOMAIN,
.hsk = IMX8M_DISP_HSK_PWRDNREQN,
.hskreq = IMX8M_DISP_HSK_PWRDNREQN,
.hskack = IMX8M_DISP_HSK_PWRDNACKN,
},
.pgc = IMX8M_PGC_DISP,
},
@ -443,40 +606,254 @@ static const struct imx_pgc_domain_data imx8m_pgc_domain_data = {
.reg_access_table = &imx8m_access_table,
};
static int imx_pgc_get_clocks(struct imx_pgc_domain *domain)
{
int i, ret;
static const struct imx_pgc_domain imx8mm_pgc_domains[] = {
[IMX8MM_POWER_DOMAIN_HSIOMIX] = {
.genpd = {
.name = "hsiomix",
},
.bits = {
.pxx = 0, /* no power sequence control */
.map = 0, /* no power sequence control */
.hskreq = IMX8MM_HSIO_HSK_PWRDNREQN,
.hskack = IMX8MM_HSIO_HSK_PWRDNACKN,
},
},
for (i = 0; ; i++) {
struct clk *clk = of_clk_get(domain->dev->of_node, i);
if (IS_ERR(clk))
break;
if (i >= GPC_CLK_MAX) {
dev_err(domain->dev, "more than %d clocks\n",
GPC_CLK_MAX);
ret = -EINVAL;
goto clk_err;
}
domain->clk[i] = clk;
}
domain->num_clks = i;
[IMX8MM_POWER_DOMAIN_PCIE] = {
.genpd = {
.name = "pcie",
},
.bits = {
.pxx = IMX8MM_PCIE_SW_Pxx_REQ,
.map = IMX8MM_PCIE_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_PCIE,
},
return 0;
[IMX8MM_POWER_DOMAIN_OTG1] = {
.genpd = {
.name = "usb-otg1",
},
.bits = {
.pxx = IMX8MM_OTG1_SW_Pxx_REQ,
.map = IMX8MM_OTG1_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_OTG1,
},
clk_err:
while (i--)
clk_put(domain->clk[i]);
[IMX8MM_POWER_DOMAIN_OTG2] = {
.genpd = {
.name = "usb-otg2",
},
.bits = {
.pxx = IMX8MM_OTG2_SW_Pxx_REQ,
.map = IMX8MM_OTG2_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_OTG2,
},
return ret;
}
[IMX8MM_POWER_DOMAIN_GPUMIX] = {
.genpd = {
.name = "gpumix",
},
.bits = {
.pxx = IMX8MM_GPUMIX_SW_Pxx_REQ,
.map = IMX8MM_GPUMIX_A53_DOMAIN,
.hskreq = IMX8MM_GPUMIX_HSK_PWRDNREQN,
.hskack = IMX8MM_GPUMIX_HSK_PWRDNACKN,
},
.pgc = IMX8MM_PGC_GPUMIX,
},
static void imx_pgc_put_clocks(struct imx_pgc_domain *domain)
{
int i;
[IMX8MM_POWER_DOMAIN_GPU] = {
.genpd = {
.name = "gpu",
},
.bits = {
.pxx = IMX8MM_GPU_SW_Pxx_REQ,
.map = IMX8MM_GPU_A53_DOMAIN,
.hskreq = IMX8MM_GPU_HSK_PWRDNREQN,
.hskack = IMX8MM_GPU_HSK_PWRDNACKN,
},
.pgc = IMX8MM_PGC_GPU2D,
},
for (i = domain->num_clks - 1; i >= 0; i--)
clk_put(domain->clk[i]);
}
[IMX8MM_POWER_DOMAIN_VPUMIX] = {
.genpd = {
.name = "vpumix",
},
.bits = {
.pxx = IMX8MM_VPUMIX_SW_Pxx_REQ,
.map = IMX8MM_VPUMIX_A53_DOMAIN,
.hskreq = IMX8MM_VPUMIX_HSK_PWRDNREQN,
.hskack = IMX8MM_VPUMIX_HSK_PWRDNACKN,
},
.pgc = IMX8MM_PGC_VPUMIX,
},
[IMX8MM_POWER_DOMAIN_VPUG1] = {
.genpd = {
.name = "vpu-g1",
},
.bits = {
.pxx = IMX8MM_VPUG1_SW_Pxx_REQ,
.map = IMX8MM_VPUG1_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_VPUG1,
},
[IMX8MM_POWER_DOMAIN_VPUG2] = {
.genpd = {
.name = "vpu-g2",
},
.bits = {
.pxx = IMX8MM_VPUG2_SW_Pxx_REQ,
.map = IMX8MM_VPUG2_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_VPUG2,
},
[IMX8MM_POWER_DOMAIN_VPUH1] = {
.genpd = {
.name = "vpu-h1",
},
.bits = {
.pxx = IMX8MM_VPUH1_SW_Pxx_REQ,
.map = IMX8MM_VPUH1_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_VPUH1,
},
[IMX8MM_POWER_DOMAIN_DISPMIX] = {
.genpd = {
.name = "dispmix",
},
.bits = {
.pxx = IMX8MM_DISPMIX_SW_Pxx_REQ,
.map = IMX8MM_DISPMIX_A53_DOMAIN,
.hskreq = IMX8MM_DISPMIX_HSK_PWRDNREQN,
.hskack = IMX8MM_DISPMIX_HSK_PWRDNACKN,
},
.pgc = IMX8MM_PGC_DISPMIX,
},
[IMX8MM_POWER_DOMAIN_MIPI] = {
.genpd = {
.name = "mipi",
},
.bits = {
.pxx = IMX8MM_MIPI_SW_Pxx_REQ,
.map = IMX8MM_MIPI_A53_DOMAIN,
},
.pgc = IMX8MM_PGC_MIPI,
},
};
static const struct regmap_range imx8mm_yes_ranges[] = {
regmap_reg_range(GPC_LPCR_A_CORE_BSC,
GPC_PU_PWRHSK),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_MIPI),
GPC_PGC_SR(IMX8MM_PGC_MIPI)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_PCIE),
GPC_PGC_SR(IMX8MM_PGC_PCIE)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_OTG1),
GPC_PGC_SR(IMX8MM_PGC_OTG1)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_OTG2),
GPC_PGC_SR(IMX8MM_PGC_OTG2)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_DDR1),
GPC_PGC_SR(IMX8MM_PGC_DDR1)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_GPU2D),
GPC_PGC_SR(IMX8MM_PGC_GPU2D)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_GPUMIX),
GPC_PGC_SR(IMX8MM_PGC_GPUMIX)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_VPUMIX),
GPC_PGC_SR(IMX8MM_PGC_VPUMIX)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_GPU3D),
GPC_PGC_SR(IMX8MM_PGC_GPU3D)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_DISPMIX),
GPC_PGC_SR(IMX8MM_PGC_DISPMIX)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_VPUG1),
GPC_PGC_SR(IMX8MM_PGC_VPUG1)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_VPUG2),
GPC_PGC_SR(IMX8MM_PGC_VPUG2)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MM_PGC_VPUH1),
GPC_PGC_SR(IMX8MM_PGC_VPUH1)),
};
static const struct regmap_access_table imx8mm_access_table = {
.yes_ranges = imx8mm_yes_ranges,
.n_yes_ranges = ARRAY_SIZE(imx8mm_yes_ranges),
};
static const struct imx_pgc_domain_data imx8mm_pgc_domain_data = {
.domains = imx8mm_pgc_domains,
.domains_num = ARRAY_SIZE(imx8mm_pgc_domains),
.reg_access_table = &imx8mm_access_table,
};
static const struct imx_pgc_domain imx8mn_pgc_domains[] = {
[IMX8MN_POWER_DOMAIN_HSIOMIX] = {
.genpd = {
.name = "hsiomix",
},
.bits = {
.pxx = 0, /* no power sequence control */
.map = 0, /* no power sequence control */
.hskreq = IMX8MN_HSIO_HSK_PWRDNREQN,
.hskack = IMX8MN_HSIO_HSK_PWRDNACKN,
},
},
[IMX8MN_POWER_DOMAIN_OTG1] = {
.genpd = {
.name = "usb-otg1",
},
.bits = {
.pxx = IMX8MN_OTG1_SW_Pxx_REQ,
.map = IMX8MN_OTG1_A53_DOMAIN,
},
.pgc = IMX8MN_PGC_OTG1,
},
[IMX8MN_POWER_DOMAIN_GPUMIX] = {
.genpd = {
.name = "gpumix",
},
.bits = {
.pxx = IMX8MN_GPUMIX_SW_Pxx_REQ,
.map = IMX8MN_GPUMIX_A53_DOMAIN,
.hskreq = IMX8MN_GPUMIX_HSK_PWRDNREQN,
.hskack = IMX8MN_GPUMIX_HSK_PWRDNACKN,
},
.pgc = IMX8MN_PGC_GPUMIX,
},
};
static const struct regmap_range imx8mn_yes_ranges[] = {
regmap_reg_range(GPC_LPCR_A_CORE_BSC,
GPC_PU_PWRHSK),
regmap_reg_range(GPC_PGC_CTRL(IMX8MN_PGC_MIPI),
GPC_PGC_SR(IMX8MN_PGC_MIPI)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MN_PGC_OTG1),
GPC_PGC_SR(IMX8MN_PGC_OTG1)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MN_PGC_DDR1),
GPC_PGC_SR(IMX8MN_PGC_DDR1)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MN_PGC_GPUMIX),
GPC_PGC_SR(IMX8MN_PGC_GPUMIX)),
regmap_reg_range(GPC_PGC_CTRL(IMX8MN_PGC_DISPMIX),
GPC_PGC_SR(IMX8MN_PGC_DISPMIX)),
};
static const struct regmap_access_table imx8mn_access_table = {
.yes_ranges = imx8mn_yes_ranges,
.n_yes_ranges = ARRAY_SIZE(imx8mn_yes_ranges),
};
static const struct imx_pgc_domain_data imx8mn_pgc_domain_data = {
.domains = imx8mn_pgc_domains,
.domains_num = ARRAY_SIZE(imx8mn_pgc_domains),
.reg_access_table = &imx8mn_access_table,
};
static int imx_pgc_domain_probe(struct platform_device *pdev)
{
@ -495,25 +872,45 @@ static int imx_pgc_domain_probe(struct platform_device *pdev)
domain->voltage, domain->voltage);
}
ret = imx_pgc_get_clocks(domain);
if (ret)
return dev_err_probe(domain->dev, ret, "Failed to get domain's clocks\n");
domain->num_clks = devm_clk_bulk_get_all(domain->dev, &domain->clks);
if (domain->num_clks < 0)
return dev_err_probe(domain->dev, domain->num_clks,
"Failed to get domain's clocks\n");
domain->reset = devm_reset_control_array_get_optional_exclusive(domain->dev);
if (IS_ERR(domain->reset))
return dev_err_probe(domain->dev, PTR_ERR(domain->reset),
"Failed to get domain's resets\n");
pm_runtime_enable(domain->dev);
if (domain->bits.map)
regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING,
domain->bits.map, domain->bits.map);
ret = pm_genpd_init(&domain->genpd, NULL, true);
if (ret) {
dev_err(domain->dev, "Failed to init power domain\n");
imx_pgc_put_clocks(domain);
return ret;
goto out_domain_unmap;
}
ret = of_genpd_add_provider_simple(domain->dev->of_node,
&domain->genpd);
if (ret) {
dev_err(domain->dev, "Failed to add genpd provider\n");
pm_genpd_remove(&domain->genpd);
imx_pgc_put_clocks(domain);
goto out_genpd_remove;
}
return 0;
out_genpd_remove:
pm_genpd_remove(&domain->genpd);
out_domain_unmap:
if (domain->bits.map)
regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING,
domain->bits.map, 0);
pm_runtime_disable(domain->dev);
return ret;
}
@ -523,7 +920,12 @@ static int imx_pgc_domain_remove(struct platform_device *pdev)
of_genpd_del_provider(domain->dev->of_node);
pm_genpd_remove(&domain->genpd);
imx_pgc_put_clocks(domain);
if (domain->bits.map)
regmap_update_bits(domain->regmap, GPC_PGC_CPU_MAPPING,
domain->bits.map, 0);
pm_runtime_disable(domain->dev);
return 0;
}
@ -617,8 +1019,8 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
domain = pd_pdev->dev.platform_data;
domain->regmap = regmap;
domain->genpd.power_on = imx_gpc_pu_pgc_sw_pup_req;
domain->genpd.power_off = imx_gpc_pu_pgc_sw_pdn_req;
domain->genpd.power_on = imx_pgc_power_up;
domain->genpd.power_off = imx_pgc_power_down;
pd_pdev->dev.parent = dev;
pd_pdev->dev.of_node = np;
@ -636,6 +1038,8 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
static const struct of_device_id imx_gpcv2_dt_ids[] = {
{ .compatible = "fsl,imx7d-gpc", .data = &imx7_pgc_domain_data, },
{ .compatible = "fsl,imx8mm-gpc", .data = &imx8mm_pgc_domain_data, },
{ .compatible = "fsl,imx8mn-gpc", .data = &imx8mn_pgc_domain_data, },
{ .compatible = "fsl,imx8mq-gpc", .data = &imx8m_pgc_domain_data, },
{ }
};
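
The i.MX8MM/i.MX8MN domains added above are driven through runtime PM: the platform core attaches the genpd named in a consumer's "power-domains" DT property before probe, and the genpd framework then invokes imx_pgc_power_up()/imx_pgc_power_down() around the device's runtime-PM transitions. A minimal, hypothetical consumer sketch (all names invented for illustration):

/* Hypothetical device sitting in one of the new GPC power domains;
 * only the generic runtime-PM API is used, names are illustrative. */
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int example_ip_probe(struct platform_device *pdev)
{
	int err;

	pm_runtime_enable(&pdev->dev);

	/* powers the attached domain up via the provider's .power_on */
	err = pm_runtime_resume_and_get(&pdev->dev);
	if (err < 0) {
		pm_runtime_disable(&pdev->dev);
		return err;
	}

	/* ... program the IP block while its domain is powered ... */

	/* lets the domain power down again once the device is idle */
	pm_runtime_put(&pdev->dev);
	return 0;
}

static const struct of_device_id example_ip_of_match[] = {
	{ .compatible = "example,imx8mm-ip" },	/* hypothetical */
	{ }
};
MODULE_DEVICE_TABLE(of, example_ip_of_match);

static struct platform_driver example_ip_driver = {
	.probe = example_ip_probe,
	.driver = {
		.name = "example-imx8mm-ip",
		.of_match_table = example_ip_of_match,
	},
};
module_platform_driver(example_ip_driver);
MODULE_LICENSE("GPL");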


@ -234,6 +234,7 @@ static const struct of_device_id mtk_devapc_dt_match[] = {
}, {
},
};
MODULE_DEVICE_TABLE(of, mtk_devapc_dt_match);
static int mtk_devapc_probe(struct platform_device *pdev)
{


@ -211,7 +211,7 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
if (ret)
return ret;
ret = clk_bulk_enable(pd->num_clks, pd->clks);
ret = clk_bulk_prepare_enable(pd->num_clks, pd->clks);
if (ret)
goto err_reg;
@ -229,7 +229,7 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_ISO_BIT);
regmap_set_bits(scpsys->base, pd->data->ctl_offs, PWR_RST_B_BIT);
ret = clk_bulk_enable(pd->num_subsys_clks, pd->subsys_clks);
ret = clk_bulk_prepare_enable(pd->num_subsys_clks, pd->subsys_clks);
if (ret)
goto err_pwr_ack;
@ -246,9 +246,9 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
err_disable_sram:
scpsys_sram_disable(pd);
err_disable_subsys_clks:
clk_bulk_disable(pd->num_subsys_clks, pd->subsys_clks);
clk_bulk_disable_unprepare(pd->num_subsys_clks, pd->subsys_clks);
err_pwr_ack:
clk_bulk_disable(pd->num_clks, pd->clks);
clk_bulk_disable_unprepare(pd->num_clks, pd->clks);
err_reg:
scpsys_regulator_disable(pd->supply);
return ret;
@ -269,7 +269,7 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
if (ret < 0)
return ret;
clk_bulk_disable(pd->num_subsys_clks, pd->subsys_clks);
clk_bulk_disable_unprepare(pd->num_subsys_clks, pd->subsys_clks);
/* subsys power off */
regmap_clear_bits(scpsys->base, pd->data->ctl_offs, PWR_RST_B_BIT);
@ -284,7 +284,7 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
if (ret < 0)
return ret;
clk_bulk_disable(pd->num_clks, pd->clks);
clk_bulk_disable_unprepare(pd->num_clks, pd->clks);
scpsys_regulator_disable(pd->supply);
@ -297,6 +297,7 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
const struct scpsys_domain_data *domain_data;
struct scpsys_domain *pd;
struct device_node *root_node = scpsys->dev->of_node;
struct device_node *smi_node;
struct property *prop;
const char *clk_name;
int i, ret, num_clks;
@ -352,9 +353,13 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
if (IS_ERR(pd->infracfg))
return ERR_CAST(pd->infracfg);
pd->smi = syscon_regmap_lookup_by_phandle_optional(node, "mediatek,smi");
if (IS_ERR(pd->smi))
return ERR_CAST(pd->smi);
smi_node = of_parse_phandle(node, "mediatek,smi", 0);
if (smi_node) {
pd->smi = device_node_to_regmap(smi_node);
of_node_put(smi_node);
if (IS_ERR(pd->smi))
return ERR_CAST(pd->smi);
}
num_clks = of_clk_get_parent_count(node);
if (num_clks > 0) {
@ -405,14 +410,6 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
pd->subsys_clks[i].clk = clk;
}
ret = clk_bulk_prepare(pd->num_clks, pd->clks);
if (ret)
goto err_put_subsys_clocks;
ret = clk_bulk_prepare(pd->num_subsys_clks, pd->subsys_clks);
if (ret)
goto err_unprepare_clocks;
/*
* Initially turn on all domains to make the domains usable
* with !CONFIG_PM and to get the hardware in sync with the
@ -427,7 +424,7 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
ret = scpsys_power_on(&pd->genpd);
if (ret < 0) {
dev_err(scpsys->dev, "%pOF: failed to power on domain: %d\n", node, ret);
goto err_unprepare_clocks;
goto err_put_subsys_clocks;
}
}
@ -435,7 +432,7 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
ret = -EINVAL;
dev_err(scpsys->dev,
"power domain with id %d already exists, check your device-tree\n", id);
goto err_unprepare_subsys_clocks;
goto err_put_subsys_clocks;
}
if (!pd->data->name)
@ -455,10 +452,6 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
return scpsys->pd_data.domains[id];
err_unprepare_subsys_clocks:
clk_bulk_unprepare(pd->num_subsys_clks, pd->subsys_clks);
err_unprepare_clocks:
clk_bulk_unprepare(pd->num_clks, pd->clks);
err_put_subsys_clocks:
clk_bulk_put(pd->num_subsys_clks, pd->subsys_clks);
err_put_clocks:
@ -537,10 +530,7 @@ static void scpsys_remove_one_domain(struct scpsys_domain *pd)
"failed to remove domain '%s' : %d - state may be inconsistent\n",
pd->genpd.name, ret);
clk_bulk_unprepare(pd->num_clks, pd->clks);
clk_bulk_put(pd->num_clks, pd->clks);
clk_bulk_unprepare(pd->num_subsys_clks, pd->subsys_clks);
clk_bulk_put(pd->num_subsys_clks, pd->subsys_clks);
}


@ -961,6 +961,23 @@ static int mt8183_regs[] = {
[PWRAP_WACS2_VLDCLR] = 0xC28,
};
static int mt8195_regs[] = {
[PWRAP_INIT_DONE2] = 0x0,
[PWRAP_STAUPD_CTRL] = 0x4C,
[PWRAP_TIMER_EN] = 0x3E4,
[PWRAP_INT_EN] = 0x420,
[PWRAP_INT_FLG] = 0x428,
[PWRAP_INT_CLR] = 0x42C,
[PWRAP_INT1_EN] = 0x450,
[PWRAP_INT1_FLG] = 0x458,
[PWRAP_INT1_CLR] = 0x45C,
[PWRAP_WACS2_CMD] = 0x880,
[PWRAP_SWINF_2_WDATA_31_0] = 0x884,
[PWRAP_SWINF_2_RDATA_31_0] = 0x894,
[PWRAP_WACS2_VLDCLR] = 0x8A4,
[PWRAP_WACS2_RDATA] = 0x8A8,
};
static int mt8516_regs[] = {
[PWRAP_MUX_SEL] = 0x0,
[PWRAP_WRAP_EN] = 0x4,
@ -1066,6 +1083,7 @@ enum pwrap_type {
PWRAP_MT8135,
PWRAP_MT8173,
PWRAP_MT8183,
PWRAP_MT8195,
PWRAP_MT8516,
};
@ -1525,6 +1543,7 @@ static int pwrap_init_cipher(struct pmic_wrapper *wrp)
break;
case PWRAP_MT6873:
case PWRAP_MT8183:
case PWRAP_MT8195:
break;
}
@ -2025,6 +2044,19 @@ static const struct pmic_wrapper_type pwrap_mt8183 = {
.init_soc_specific = pwrap_mt8183_init_soc_specific,
};
static struct pmic_wrapper_type pwrap_mt8195 = {
.regs = mt8195_regs,
.type = PWRAP_MT8195,
.arb_en_all = 0x777f, /* NEED CONFIRM */
.int_en_all = 0x180000, /* NEED CONFIRM */
.int1_en_all = 0,
.spi_w = PWRAP_MAN_CMD_SPI_WRITE,
.wdt_src = PWRAP_WDT_SRC_MASK_ALL,
.caps = PWRAP_CAP_INT1_EN | PWRAP_CAP_ARB,
.init_reg_clock = pwrap_common_init_reg_clock,
.init_soc_specific = NULL,
};
static struct pmic_wrapper_type pwrap_mt8516 = {
.regs = mt8516_regs,
.type = PWRAP_MT8516,
@ -2065,6 +2097,9 @@ static const struct of_device_id of_pwrap_match_tbl[] = {
}, {
.compatible = "mediatek,mt8183-pwrap",
.data = &pwrap_mt8183,
}, {
.compatible = "mediatek,mt8195-pwrap",
.data = &pwrap_mt8195,
}, {
.compatible = "mediatek,mt8516-pwrap",
.data = &pwrap_mt8516,


@ -271,9 +271,30 @@ static const struct rpmhpd_desc sc7280_desc = {
.num_pds = ARRAY_SIZE(sc7280_rpmhpds),
};
/* SC8180x RPMH powerdomains */
static struct rpmhpd *sc8180x_rpmhpds[] = {
[SC8180X_CX] = &sdm845_cx,
[SC8180X_CX_AO] = &sdm845_cx_ao,
[SC8180X_EBI] = &sdm845_ebi,
[SC8180X_GFX] = &sdm845_gfx,
[SC8180X_LCX] = &sdm845_lcx,
[SC8180X_LMX] = &sdm845_lmx,
[SC8180X_MMCX] = &sm8150_mmcx,
[SC8180X_MMCX_AO] = &sm8150_mmcx_ao,
[SC8180X_MSS] = &sdm845_mss,
[SC8180X_MX] = &sdm845_mx,
[SC8180X_MX_AO] = &sdm845_mx_ao,
};
static const struct rpmhpd_desc sc8180x_desc = {
.rpmhpds = sc8180x_rpmhpds,
.num_pds = ARRAY_SIZE(sc8180x_rpmhpds),
};
static const struct of_device_id rpmhpd_match_table[] = {
{ .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
{ .compatible = "qcom,sc7280-rpmhpd", .data = &sc7280_desc },
{ .compatible = "qcom,sc8180x-rpmhpd", .data = &sc8180x_desc },
{ .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc },
{ .compatible = "qcom,sdx55-rpmhpd", .data = &sdx55_desc},
{ .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc },


@ -118,6 +118,27 @@ struct rpmpd_desc {
static DEFINE_MUTEX(rpmpd_lock);
/* mdm9607 RPM Power Domains */
DEFINE_RPMPD_PAIR(mdm9607, vddcx, vddcx_ao, SMPA, LEVEL, 3);
DEFINE_RPMPD_VFL(mdm9607, vddcx_vfl, SMPA, 3);
DEFINE_RPMPD_PAIR(mdm9607, vddmx, vddmx_ao, LDOA, LEVEL, 12);
DEFINE_RPMPD_VFL(mdm9607, vddmx_vfl, LDOA, 12);
static struct rpmpd *mdm9607_rpmpds[] = {
[MDM9607_VDDCX] = &mdm9607_vddcx,
[MDM9607_VDDCX_AO] = &mdm9607_vddcx_ao,
[MDM9607_VDDCX_VFL] = &mdm9607_vddcx_vfl,
[MDM9607_VDDMX] = &mdm9607_vddmx,
[MDM9607_VDDMX_AO] = &mdm9607_vddmx_ao,
[MDM9607_VDDMX_VFL] = &mdm9607_vddmx_vfl,
};
static const struct rpmpd_desc mdm9607_desc = {
.rpmpds = mdm9607_rpmpds,
.num_pds = ARRAY_SIZE(mdm9607_rpmpds),
.max_state = RPM_SMD_LEVEL_TURBO,
};
/* msm8939 RPM Power Domains */
DEFINE_RPMPD_PAIR(msm8939, vddmd, vddmd_ao, SMPA, CORNER, 1);
DEFINE_RPMPD_VFC(msm8939, vddmd_vfc, SMPA, 1);
@ -326,6 +347,7 @@ static const struct rpmpd_desc sdm660_desc = {
};
static const struct of_device_id rpmpd_match_table[] = {
{ .compatible = "qcom,mdm9607-rpmpd", .data = &mdm9607_desc },
{ .compatible = "qcom,msm8916-rpmpd", .data = &msm8916_desc },
{ .compatible = "qcom,msm8939-rpmpd", .data = &msm8939_desc },
{ .compatible = "qcom,msm8976-rpmpd", .data = &msm8976_desc },


@ -233,6 +233,7 @@ static void qcom_smd_rpm_remove(struct rpmsg_device *rpdev)
static const struct of_device_id qcom_smd_rpm_of_match[] = {
{ .compatible = "qcom,rpm-apq8084" },
{ .compatible = "qcom,rpm-ipq6018" },
{ .compatible = "qcom,rpm-msm8226" },
{ .compatible = "qcom,rpm-msm8916" },
{ .compatible = "qcom,rpm-msm8936" },
{ .compatible = "qcom,rpm-msm8974" },
@ -241,6 +242,7 @@ static const struct of_device_id qcom_smd_rpm_of_match[] = {
{ .compatible = "qcom,rpm-msm8996" },
{ .compatible = "qcom,rpm-msm8998" },
{ .compatible = "qcom,rpm-sdm660" },
{ .compatible = "qcom,rpm-sm6125" },
{ .compatible = "qcom,rpm-qcs404" },
{}
};


@ -70,21 +70,33 @@ static const char *const socinfo_image_names[] = {
static const char *const pmic_models[] = {
[0] = "Unknown PMIC model",
[1] = "PM8941",
[2] = "PM8841",
[3] = "PM8019",
[4] = "PM8226",
[5] = "PM8110",
[6] = "PMA8084",
[7] = "PMI8962",
[8] = "PMD9635",
[9] = "PM8994",
[10] = "PMI8994",
[11] = "PM8916",
[13] = "PM8058",
[12] = "PM8004",
[13] = "PM8909/PM8058",
[14] = "PM8028",
[15] = "PM8901",
[16] = "PM8027",
[17] = "ISL9519",
[16] = "PM8950/PM8027",
[17] = "PMI8950/ISL9519",
[18] = "PM8921",
[19] = "PM8018",
[20] = "PM8015",
[21] = "PM8014",
[20] = "PM8998/PM8015",
[21] = "PMI8998/PM8014",
[22] = "PM8821",
[23] = "PM8038",
[24] = "PM8922",
[24] = "PM8005/PM8922",
[25] = "PM8917",
[26] = "PM660L",
[27] = "PM660",
[30] = "PM8150",
[31] = "PM8150L",
[32] = "PM8150B",
@ -195,11 +207,30 @@ static const struct soc_id soc_id[] = {
{ 139, "APQ8060AB" },
{ 140, "MSM8260AB" },
{ 141, "MSM8660AB" },
{ 145, "MSM8626" },
{ 147, "MSM8610" },
{ 153, "APQ8064AB" },
{ 158, "MSM8226" },
{ 159, "MSM8526" },
{ 161, "MSM8110" },
{ 162, "MSM8210" },
{ 163, "MSM8810" },
{ 164, "MSM8212" },
{ 165, "MSM8612" },
{ 166, "MSM8112" },
{ 168, "MSM8225Q" },
{ 169, "MSM8625Q" },
{ 170, "MSM8125Q" },
{ 172, "APQ8064AA" },
{ 178, "APQ8084" },
{ 184, "APQ8074" },
{ 185, "MSM8274" },
{ 186, "MSM8674" },
{ 194, "MSM8974PRO" },
{ 198, "MSM8126" },
{ 199, "APQ8026" },
{ 200, "MSM8926" },
{ 205, "MSM8326" },
{ 206, "MSM8916" },
{ 207, "MSM8994" },
{ 208, "APQ8074-AA" },
@ -213,6 +244,14 @@ static const struct soc_id soc_id[] = {
{ 216, "MSM8674PRO" },
{ 217, "MSM8974-AA" },
{ 218, "MSM8974-AB" },
{ 219, "APQ8028" },
{ 220, "MSM8128" },
{ 221, "MSM8228" },
{ 222, "MSM8528" },
{ 223, "MSM8628" },
{ 224, "MSM8928" },
{ 225, "MSM8510" },
{ 226, "MSM8512" },
{ 233, "MSM8936" },
{ 239, "MSM8939" },
{ 240, "APQ8036" },
@ -254,8 +293,13 @@ static const struct soc_id soc_id[] = {
{ 350, "SDA632" },
{ 351, "SDA450" },
{ 356, "SM8250" },
{ 394, "SM6125" },
{ 402, "IPQ6018" },
{ 403, "IPQ6028" },
{ 421, "IPQ6000" },
{ 422, "IPQ6010" },
{ 425, "SC7180" },
{ 453, "IPQ6005" },
{ 455, "QRB5165" },
};


@ -279,6 +279,11 @@ config ARCH_R8A774B1
help
This enables support for the Renesas RZ/G2N SoC.
config ARCH_R9A07G044
bool "ARM64 Platform support for RZ/G2L"
help
This enables support for the Renesas RZ/G2L SoC variants.
endif # ARM64
config RST_RCAR


@ -56,6 +56,10 @@ static const struct renesas_family fam_rzg2 __initconst __maybe_unused = {
.reg = 0xfff00044, /* PRR (Product Register) */
};
static const struct renesas_family fam_rzg2l __initconst __maybe_unused = {
.name = "RZ/G2L",
};
static const struct renesas_family fam_shmobile __initconst __maybe_unused = {
.name = "SH-Mobile",
.reg = 0xe600101c, /* CCCR (Common Chip Code Register) */
@ -64,7 +68,7 @@ static const struct renesas_family fam_shmobile __initconst __maybe_unused = {
struct renesas_soc {
const struct renesas_family *family;
u8 id;
u32 id;
};
static const struct renesas_soc soc_rz_a1h __initconst __maybe_unused = {
@ -131,6 +135,11 @@ static const struct renesas_soc soc_rz_g2h __initconst __maybe_unused = {
.id = 0x4f,
};
static const struct renesas_soc soc_rz_g2l __initconst __maybe_unused = {
.family = &fam_rzg2l,
.id = 0x841c447,
};
static const struct renesas_soc soc_rcar_m1a __initconst __maybe_unused = {
.family = &fam_rcar_gen1,
};
@ -299,6 +308,9 @@ static const struct of_device_id renesas_socs[] __initconst = {
#ifdef CONFIG_ARCH_R8A779A0
{ .compatible = "renesas,r8a779a0", .data = &soc_rcar_v3u },
#endif
#if defined(CONFIG_ARCH_R9A07G044)
{ .compatible = "renesas,r9a07g044", .data = &soc_rz_g2l },
#endif
#ifdef CONFIG_ARCH_SH73A0
{ .compatible = "renesas,sh73a0", .data = &soc_shmobile_ag5 },
#endif
@ -348,6 +360,25 @@ static int __init renesas_soc_init(void)
goto done;
}
np = of_find_compatible_node(NULL, NULL, "renesas,r9a07g044-sysc");
if (np) {
chipid = of_iomap(np, 0);
of_node_put(np);
if (chipid) {
product = readl(chipid + 0x0a04);
iounmap(chipid);
if (soc->id && (product & 0xfffffff) != soc->id) {
pr_warn("SoC mismatch (product = 0x%x)\n",
product);
return -ENODEV;
}
}
goto done;
}
/* Try PRR first, then hardcoded fallback */
np = of_find_compatible_node(NULL, NULL, "renesas,prr");
if (np) {


@ -27,8 +27,10 @@
#include <dt-bindings/power/rk3366-power.h>
#include <dt-bindings/power/rk3368-power.h>
#include <dt-bindings/power/rk3399-power.h>
#include <dt-bindings/power/rk3568-power.h>
struct rockchip_domain_info {
const char *name;
int pwr_mask;
int status_mask;
int req_mask;
@ -85,8 +87,9 @@ struct rockchip_pmu {
#define to_rockchip_pd(gpd) container_of(gpd, struct rockchip_pm_domain, genpd)
#define DOMAIN(pwr, status, req, idle, ack, wakeup) \
#define DOMAIN(_name, pwr, status, req, idle, ack, wakeup) \
{ \
.name = _name, \
.pwr_mask = (pwr), \
.status_mask = (status), \
.req_mask = (req), \
@ -95,8 +98,9 @@ struct rockchip_pmu {
.active_wakeup = (wakeup), \
}
#define DOMAIN_M(pwr, status, req, idle, ack, wakeup) \
#define DOMAIN_M(_name, pwr, status, req, idle, ack, wakeup) \
{ \
.name = _name, \
.pwr_w_mask = (pwr) << 16, \
.pwr_mask = (pwr), \
.status_mask = (status), \
@ -107,8 +111,9 @@ struct rockchip_pmu {
.active_wakeup = wakeup, \
}
#define DOMAIN_RK3036(req, ack, idle, wakeup) \
#define DOMAIN_RK3036(_name, req, ack, idle, wakeup) \
{ \
.name = _name, \
.req_mask = (req), \
.req_w_mask = (req) << 16, \
.ack_mask = (ack), \
@ -116,20 +121,23 @@ struct rockchip_pmu {
.active_wakeup = wakeup, \
}
#define DOMAIN_PX30(pwr, status, req, wakeup) \
DOMAIN_M(pwr, status, req, (req) << 16, req, wakeup)
#define DOMAIN_PX30(name, pwr, status, req, wakeup) \
DOMAIN_M(name, pwr, status, req, (req) << 16, req, wakeup)
#define DOMAIN_RK3288(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, req, (req) << 16, wakeup)
#define DOMAIN_RK3288(name, pwr, status, req, wakeup) \
DOMAIN(name, pwr, status, req, req, (req) << 16, wakeup)
#define DOMAIN_RK3328(pwr, status, req, wakeup) \
DOMAIN_M(pwr, pwr, req, (req) << 10, req, wakeup)
#define DOMAIN_RK3328(name, pwr, status, req, wakeup) \
DOMAIN_M(name, pwr, pwr, req, (req) << 10, req, wakeup)
#define DOMAIN_RK3368(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, (req) << 16, req, wakeup)
#define DOMAIN_RK3368(name, pwr, status, req, wakeup) \
DOMAIN(name, pwr, status, req, (req) << 16, req, wakeup)
#define DOMAIN_RK3399(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, req, req, wakeup)
#define DOMAIN_RK3399(name, pwr, status, req, wakeup) \
DOMAIN(name, pwr, status, req, req, req, wakeup)
#define DOMAIN_RK3568(name, pwr, req, wakeup) \
DOMAIN_M(name, pwr, pwr, req, req, req, wakeup)
static bool rockchip_pmu_domain_is_idle(struct rockchip_pm_domain *pd)
{
@ -490,7 +498,10 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
goto err_unprepare_clocks;
}
pd->genpd.name = node->name;
if (pd->info->name)
pd->genpd.name = pd->info->name;
else
pd->genpd.name = kbasename(node->full_name);
pd->genpd.power_off = rockchip_pd_power_off;
pd->genpd.power_on = rockchip_pd_power_on;
pd->genpd.attach_dev = rockchip_pd_attach_dev;
@ -716,129 +727,141 @@ err_out:
}
static const struct rockchip_domain_info px30_pm_domains[] = {
[PX30_PD_USB] = DOMAIN_PX30(BIT(5), BIT(5), BIT(10), false),
[PX30_PD_SDCARD] = DOMAIN_PX30(BIT(8), BIT(8), BIT(9), false),
[PX30_PD_GMAC] = DOMAIN_PX30(BIT(10), BIT(10), BIT(6), false),
[PX30_PD_MMC_NAND] = DOMAIN_PX30(BIT(11), BIT(11), BIT(5), false),
[PX30_PD_VPU] = DOMAIN_PX30(BIT(12), BIT(12), BIT(14), false),
[PX30_PD_VO] = DOMAIN_PX30(BIT(13), BIT(13), BIT(7), false),
[PX30_PD_VI] = DOMAIN_PX30(BIT(14), BIT(14), BIT(8), false),
[PX30_PD_GPU] = DOMAIN_PX30(BIT(15), BIT(15), BIT(2), false),
[PX30_PD_USB] = DOMAIN_PX30("usb", BIT(5), BIT(5), BIT(10), false),
[PX30_PD_SDCARD] = DOMAIN_PX30("sdcard", BIT(8), BIT(8), BIT(9), false),
[PX30_PD_GMAC] = DOMAIN_PX30("gmac", BIT(10), BIT(10), BIT(6), false),
[PX30_PD_MMC_NAND] = DOMAIN_PX30("mmc_nand", BIT(11), BIT(11), BIT(5), false),
[PX30_PD_VPU] = DOMAIN_PX30("vpu", BIT(12), BIT(12), BIT(14), false),
[PX30_PD_VO] = DOMAIN_PX30("vo", BIT(13), BIT(13), BIT(7), false),
[PX30_PD_VI] = DOMAIN_PX30("vi", BIT(14), BIT(14), BIT(8), false),
[PX30_PD_GPU] = DOMAIN_PX30("gpu", BIT(15), BIT(15), BIT(2), false),
};
static const struct rockchip_domain_info rk3036_pm_domains[] = {
[RK3036_PD_MSCH] = DOMAIN_RK3036(BIT(14), BIT(23), BIT(30), true),
[RK3036_PD_CORE] = DOMAIN_RK3036(BIT(13), BIT(17), BIT(24), false),
[RK3036_PD_PERI] = DOMAIN_RK3036(BIT(12), BIT(18), BIT(25), false),
[RK3036_PD_VIO] = DOMAIN_RK3036(BIT(11), BIT(19), BIT(26), false),
[RK3036_PD_VPU] = DOMAIN_RK3036(BIT(10), BIT(20), BIT(27), false),
[RK3036_PD_GPU] = DOMAIN_RK3036(BIT(9), BIT(21), BIT(28), false),
[RK3036_PD_SYS] = DOMAIN_RK3036(BIT(8), BIT(22), BIT(29), false),
[RK3036_PD_MSCH] = DOMAIN_RK3036("msch", BIT(14), BIT(23), BIT(30), true),
[RK3036_PD_CORE] = DOMAIN_RK3036("core", BIT(13), BIT(17), BIT(24), false),
[RK3036_PD_PERI] = DOMAIN_RK3036("peri", BIT(12), BIT(18), BIT(25), false),
[RK3036_PD_VIO] = DOMAIN_RK3036("vio", BIT(11), BIT(19), BIT(26), false),
[RK3036_PD_VPU] = DOMAIN_RK3036("vpu", BIT(10), BIT(20), BIT(27), false),
[RK3036_PD_GPU] = DOMAIN_RK3036("gpu", BIT(9), BIT(21), BIT(28), false),
[RK3036_PD_SYS] = DOMAIN_RK3036("sys", BIT(8), BIT(22), BIT(29), false),
};
static const struct rockchip_domain_info rk3066_pm_domains[] = {
[RK3066_PD_GPU] = DOMAIN(BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false),
[RK3066_PD_VIDEO] = DOMAIN(BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false),
[RK3066_PD_VIO] = DOMAIN(BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false),
[RK3066_PD_PERI] = DOMAIN(BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false),
[RK3066_PD_CPU] = DOMAIN(0, BIT(5), BIT(1), BIT(26), BIT(31), false),
[RK3066_PD_GPU] = DOMAIN("gpu", BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false),
[RK3066_PD_VIDEO] = DOMAIN("video", BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false),
[RK3066_PD_VIO] = DOMAIN("vio", BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false),
[RK3066_PD_PERI] = DOMAIN("peri", BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false),
[RK3066_PD_CPU] = DOMAIN("cpu", 0, BIT(5), BIT(1), BIT(26), BIT(31), false),
};
static const struct rockchip_domain_info rk3128_pm_domains[] = {
[RK3128_PD_CORE] = DOMAIN_RK3288(BIT(0), BIT(0), BIT(4), false),
[RK3128_PD_MSCH] = DOMAIN_RK3288(0, 0, BIT(6), true),
[RK3128_PD_VIO] = DOMAIN_RK3288(BIT(3), BIT(3), BIT(2), false),
[RK3128_PD_VIDEO] = DOMAIN_RK3288(BIT(2), BIT(2), BIT(1), false),
[RK3128_PD_GPU] = DOMAIN_RK3288(BIT(1), BIT(1), BIT(3), false),
[RK3128_PD_CORE] = DOMAIN_RK3288("core", BIT(0), BIT(0), BIT(4), false),
[RK3128_PD_MSCH] = DOMAIN_RK3288("msch", 0, 0, BIT(6), true),
[RK3128_PD_VIO] = DOMAIN_RK3288("vio", BIT(3), BIT(3), BIT(2), false),
[RK3128_PD_VIDEO] = DOMAIN_RK3288("video", BIT(2), BIT(2), BIT(1), false),
[RK3128_PD_GPU] = DOMAIN_RK3288("gpu", BIT(1), BIT(1), BIT(3), false),
};
static const struct rockchip_domain_info rk3188_pm_domains[] = {
[RK3188_PD_GPU] = DOMAIN(BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false),
[RK3188_PD_VIDEO] = DOMAIN(BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false),
[RK3188_PD_VIO] = DOMAIN(BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false),
[RK3188_PD_PERI] = DOMAIN(BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false),
[RK3188_PD_CPU] = DOMAIN(BIT(5), BIT(5), BIT(1), BIT(26), BIT(31), false),
[RK3188_PD_GPU] = DOMAIN("gpu", BIT(9), BIT(9), BIT(3), BIT(24), BIT(29), false),
[RK3188_PD_VIDEO] = DOMAIN("video", BIT(8), BIT(8), BIT(4), BIT(23), BIT(28), false),
[RK3188_PD_VIO] = DOMAIN("vio", BIT(7), BIT(7), BIT(5), BIT(22), BIT(27), false),
[RK3188_PD_PERI] = DOMAIN("peri", BIT(6), BIT(6), BIT(2), BIT(25), BIT(30), false),
[RK3188_PD_CPU] = DOMAIN("cpu", BIT(5), BIT(5), BIT(1), BIT(26), BIT(31), false),
};
static const struct rockchip_domain_info rk3228_pm_domains[] = {
[RK3228_PD_CORE] = DOMAIN_RK3036(BIT(0), BIT(0), BIT(16), true),
[RK3228_PD_MSCH] = DOMAIN_RK3036(BIT(1), BIT(1), BIT(17), true),
[RK3228_PD_BUS] = DOMAIN_RK3036(BIT(2), BIT(2), BIT(18), true),
[RK3228_PD_SYS] = DOMAIN_RK3036(BIT(3), BIT(3), BIT(19), true),
[RK3228_PD_VIO] = DOMAIN_RK3036(BIT(4), BIT(4), BIT(20), false),
[RK3228_PD_VOP] = DOMAIN_RK3036(BIT(5), BIT(5), BIT(21), false),
[RK3228_PD_VPU] = DOMAIN_RK3036(BIT(6), BIT(6), BIT(22), false),
[RK3228_PD_RKVDEC] = DOMAIN_RK3036(BIT(7), BIT(7), BIT(23), false),
[RK3228_PD_GPU] = DOMAIN_RK3036(BIT(8), BIT(8), BIT(24), false),
[RK3228_PD_PERI] = DOMAIN_RK3036(BIT(9), BIT(9), BIT(25), true),
[RK3228_PD_GMAC] = DOMAIN_RK3036(BIT(10), BIT(10), BIT(26), false),
[RK3228_PD_CORE] = DOMAIN_RK3036("core", BIT(0), BIT(0), BIT(16), true),
[RK3228_PD_MSCH] = DOMAIN_RK3036("msch", BIT(1), BIT(1), BIT(17), true),
[RK3228_PD_BUS] = DOMAIN_RK3036("bus", BIT(2), BIT(2), BIT(18), true),
[RK3228_PD_SYS] = DOMAIN_RK3036("sys", BIT(3), BIT(3), BIT(19), true),
[RK3228_PD_VIO] = DOMAIN_RK3036("vio", BIT(4), BIT(4), BIT(20), false),
[RK3228_PD_VOP] = DOMAIN_RK3036("vop", BIT(5), BIT(5), BIT(21), false),
[RK3228_PD_VPU] = DOMAIN_RK3036("vpu", BIT(6), BIT(6), BIT(22), false),
[RK3228_PD_RKVDEC] = DOMAIN_RK3036("vdec", BIT(7), BIT(7), BIT(23), false),
[RK3228_PD_GPU] = DOMAIN_RK3036("gpu", BIT(8), BIT(8), BIT(24), false),
[RK3228_PD_PERI] = DOMAIN_RK3036("peri", BIT(9), BIT(9), BIT(25), true),
[RK3228_PD_GMAC] = DOMAIN_RK3036("gmac", BIT(10), BIT(10), BIT(26), false),
};
static const struct rockchip_domain_info rk3288_pm_domains[] = {
[RK3288_PD_VIO] = DOMAIN_RK3288(BIT(7), BIT(7), BIT(4), false),
[RK3288_PD_HEVC] = DOMAIN_RK3288(BIT(14), BIT(10), BIT(9), false),
[RK3288_PD_VIDEO] = DOMAIN_RK3288(BIT(8), BIT(8), BIT(3), false),
[RK3288_PD_GPU] = DOMAIN_RK3288(BIT(9), BIT(9), BIT(2), false),
[RK3288_PD_VIO] = DOMAIN_RK3288("vio", BIT(7), BIT(7), BIT(4), false),
[RK3288_PD_HEVC] = DOMAIN_RK3288("hevc", BIT(14), BIT(10), BIT(9), false),
[RK3288_PD_VIDEO] = DOMAIN_RK3288("video", BIT(8), BIT(8), BIT(3), false),
[RK3288_PD_GPU] = DOMAIN_RK3288("gpu", BIT(9), BIT(9), BIT(2), false),
};
static const struct rockchip_domain_info rk3328_pm_domains[] = {
[RK3328_PD_CORE] = DOMAIN_RK3328(0, BIT(0), BIT(0), false),
[RK3328_PD_GPU] = DOMAIN_RK3328(0, BIT(1), BIT(1), false),
[RK3328_PD_BUS] = DOMAIN_RK3328(0, BIT(2), BIT(2), true),
[RK3328_PD_MSCH] = DOMAIN_RK3328(0, BIT(3), BIT(3), true),
[RK3328_PD_PERI] = DOMAIN_RK3328(0, BIT(4), BIT(4), true),
[RK3328_PD_VIDEO] = DOMAIN_RK3328(0, BIT(5), BIT(5), false),
[RK3328_PD_HEVC] = DOMAIN_RK3328(0, BIT(6), BIT(6), false),
[RK3328_PD_VIO] = DOMAIN_RK3328(0, BIT(8), BIT(8), false),
[RK3328_PD_VPU] = DOMAIN_RK3328(0, BIT(9), BIT(9), false),
[RK3328_PD_CORE] = DOMAIN_RK3328("core", 0, BIT(0), BIT(0), false),
[RK3328_PD_GPU] = DOMAIN_RK3328("gpu", 0, BIT(1), BIT(1), false),
[RK3328_PD_BUS] = DOMAIN_RK3328("bus", 0, BIT(2), BIT(2), true),
[RK3328_PD_MSCH] = DOMAIN_RK3328("msch", 0, BIT(3), BIT(3), true),
[RK3328_PD_PERI] = DOMAIN_RK3328("peri", 0, BIT(4), BIT(4), true),
[RK3328_PD_VIDEO] = DOMAIN_RK3328("video", 0, BIT(5), BIT(5), false),
[RK3328_PD_HEVC] = DOMAIN_RK3328("hevc", 0, BIT(6), BIT(6), false),
[RK3328_PD_VIO] = DOMAIN_RK3328("vio", 0, BIT(8), BIT(8), false),
[RK3328_PD_VPU] = DOMAIN_RK3328("vpu", 0, BIT(9), BIT(9), false),
};
static const struct rockchip_domain_info rk3366_pm_domains[] = {
[RK3366_PD_PERI] = DOMAIN_RK3368(BIT(10), BIT(10), BIT(6), true),
[RK3366_PD_VIO] = DOMAIN_RK3368(BIT(14), BIT(14), BIT(8), false),
[RK3366_PD_VIDEO] = DOMAIN_RK3368(BIT(13), BIT(13), BIT(7), false),
[RK3366_PD_RKVDEC] = DOMAIN_RK3368(BIT(11), BIT(11), BIT(7), false),
[RK3366_PD_WIFIBT] = DOMAIN_RK3368(BIT(8), BIT(8), BIT(9), false),
[RK3366_PD_VPU] = DOMAIN_RK3368(BIT(12), BIT(12), BIT(7), false),
[RK3366_PD_GPU] = DOMAIN_RK3368(BIT(15), BIT(15), BIT(2), false),
[RK3366_PD_PERI] = DOMAIN_RK3368("peri", BIT(10), BIT(10), BIT(6), true),
[RK3366_PD_VIO] = DOMAIN_RK3368("vio", BIT(14), BIT(14), BIT(8), false),
[RK3366_PD_VIDEO] = DOMAIN_RK3368("video", BIT(13), BIT(13), BIT(7), false),
[RK3366_PD_RKVDEC] = DOMAIN_RK3368("vdec", BIT(11), BIT(11), BIT(7), false),
[RK3366_PD_WIFIBT] = DOMAIN_RK3368("wifibt", BIT(8), BIT(8), BIT(9), false),
[RK3366_PD_VPU] = DOMAIN_RK3368("vpu", BIT(12), BIT(12), BIT(7), false),
[RK3366_PD_GPU] = DOMAIN_RK3368("gpu", BIT(15), BIT(15), BIT(2), false),
};
static const struct rockchip_domain_info rk3368_pm_domains[] = {
[RK3368_PD_PERI] = DOMAIN_RK3368(BIT(13), BIT(12), BIT(6), true),
[RK3368_PD_VIO] = DOMAIN_RK3368(BIT(15), BIT(14), BIT(8), false),
[RK3368_PD_VIDEO] = DOMAIN_RK3368(BIT(14), BIT(13), BIT(7), false),
[RK3368_PD_GPU_0] = DOMAIN_RK3368(BIT(16), BIT(15), BIT(2), false),
[RK3368_PD_GPU_1] = DOMAIN_RK3368(BIT(17), BIT(16), BIT(2), false),
[RK3368_PD_PERI] = DOMAIN_RK3368("peri", BIT(13), BIT(12), BIT(6), true),
[RK3368_PD_VIO] = DOMAIN_RK3368("vio", BIT(15), BIT(14), BIT(8), false),
[RK3368_PD_VIDEO] = DOMAIN_RK3368("video", BIT(14), BIT(13), BIT(7), false),
[RK3368_PD_GPU_0] = DOMAIN_RK3368("gpu_0", BIT(16), BIT(15), BIT(2), false),
[RK3368_PD_GPU_1] = DOMAIN_RK3368("gpu_1", BIT(17), BIT(16), BIT(2), false),
};
static const struct rockchip_domain_info rk3399_pm_domains[] = {
[RK3399_PD_TCPD0] = DOMAIN_RK3399(BIT(8), BIT(8), 0, false),
[RK3399_PD_TCPD1] = DOMAIN_RK3399(BIT(9), BIT(9), 0, false),
[RK3399_PD_CCI] = DOMAIN_RK3399(BIT(10), BIT(10), 0, true),
[RK3399_PD_CCI0] = DOMAIN_RK3399(0, 0, BIT(15), true),
[RK3399_PD_CCI1] = DOMAIN_RK3399(0, 0, BIT(16), true),
[RK3399_PD_PERILP] = DOMAIN_RK3399(BIT(11), BIT(11), BIT(1), true),
[RK3399_PD_PERIHP] = DOMAIN_RK3399(BIT(12), BIT(12), BIT(2), true),
[RK3399_PD_CENTER] = DOMAIN_RK3399(BIT(13), BIT(13), BIT(14), true),
[RK3399_PD_VIO] = DOMAIN_RK3399(BIT(14), BIT(14), BIT(17), false),
[RK3399_PD_GPU] = DOMAIN_RK3399(BIT(15), BIT(15), BIT(0), false),
[RK3399_PD_VCODEC] = DOMAIN_RK3399(BIT(16), BIT(16), BIT(3), false),
[RK3399_PD_VDU] = DOMAIN_RK3399(BIT(17), BIT(17), BIT(4), false),
[RK3399_PD_RGA] = DOMAIN_RK3399(BIT(18), BIT(18), BIT(5), false),
[RK3399_PD_IEP] = DOMAIN_RK3399(BIT(19), BIT(19), BIT(6), false),
[RK3399_PD_VO] = DOMAIN_RK3399(BIT(20), BIT(20), 0, false),
[RK3399_PD_VOPB] = DOMAIN_RK3399(0, 0, BIT(7), false),
[RK3399_PD_VOPL] = DOMAIN_RK3399(0, 0, BIT(8), false),
[RK3399_PD_ISP0] = DOMAIN_RK3399(BIT(22), BIT(22), BIT(9), false),
[RK3399_PD_ISP1] = DOMAIN_RK3399(BIT(23), BIT(23), BIT(10), false),
[RK3399_PD_HDCP] = DOMAIN_RK3399(BIT(24), BIT(24), BIT(11), false),
[RK3399_PD_GMAC] = DOMAIN_RK3399(BIT(25), BIT(25), BIT(23), true),
[RK3399_PD_EMMC] = DOMAIN_RK3399(BIT(26), BIT(26), BIT(24), true),
[RK3399_PD_USB3] = DOMAIN_RK3399(BIT(27), BIT(27), BIT(12), true),
[RK3399_PD_EDP] = DOMAIN_RK3399(BIT(28), BIT(28), BIT(22), false),
[RK3399_PD_GIC] = DOMAIN_RK3399(BIT(29), BIT(29), BIT(27), true),
[RK3399_PD_SD] = DOMAIN_RK3399(BIT(30), BIT(30), BIT(28), true),
[RK3399_PD_SDIOAUDIO] = DOMAIN_RK3399(BIT(31), BIT(31), BIT(29), true),
[RK3399_PD_TCPD0] = DOMAIN_RK3399("tcpd0", BIT(8), BIT(8), 0, false),
[RK3399_PD_TCPD1] = DOMAIN_RK3399("tcpd1", BIT(9), BIT(9), 0, false),
[RK3399_PD_CCI] = DOMAIN_RK3399("cci", BIT(10), BIT(10), 0, true),
[RK3399_PD_CCI0] = DOMAIN_RK3399("cci0", 0, 0, BIT(15), true),
[RK3399_PD_CCI1] = DOMAIN_RK3399("cci1", 0, 0, BIT(16), true),
[RK3399_PD_PERILP] = DOMAIN_RK3399("perilp", BIT(11), BIT(11), BIT(1), true),
[RK3399_PD_PERIHP] = DOMAIN_RK3399("perihp", BIT(12), BIT(12), BIT(2), true),
[RK3399_PD_CENTER] = DOMAIN_RK3399("center", BIT(13), BIT(13), BIT(14), true),
[RK3399_PD_VIO] = DOMAIN_RK3399("vio", BIT(14), BIT(14), BIT(17), false),
[RK3399_PD_GPU] = DOMAIN_RK3399("gpu", BIT(15), BIT(15), BIT(0), false),
[RK3399_PD_VCODEC] = DOMAIN_RK3399("vcodec", BIT(16), BIT(16), BIT(3), false),
[RK3399_PD_VDU] = DOMAIN_RK3399("vdu", BIT(17), BIT(17), BIT(4), false),
[RK3399_PD_RGA] = DOMAIN_RK3399("rga", BIT(18), BIT(18), BIT(5), false),
[RK3399_PD_IEP] = DOMAIN_RK3399("iep", BIT(19), BIT(19), BIT(6), false),
[RK3399_PD_VO] = DOMAIN_RK3399("vo", BIT(20), BIT(20), 0, false),
[RK3399_PD_VOPB] = DOMAIN_RK3399("vopb", 0, 0, BIT(7), false),
[RK3399_PD_VOPL] = DOMAIN_RK3399("vopl", 0, 0, BIT(8), false),
[RK3399_PD_ISP0] = DOMAIN_RK3399("isp0", BIT(22), BIT(22), BIT(9), false),
[RK3399_PD_ISP1] = DOMAIN_RK3399("isp1", BIT(23), BIT(23), BIT(10), false),
[RK3399_PD_HDCP] = DOMAIN_RK3399("hdcp", BIT(24), BIT(24), BIT(11), false),
[RK3399_PD_GMAC] = DOMAIN_RK3399("gmac", BIT(25), BIT(25), BIT(23), true),
[RK3399_PD_EMMC] = DOMAIN_RK3399("emmc", BIT(26), BIT(26), BIT(24), true),
[RK3399_PD_USB3] = DOMAIN_RK3399("usb3", BIT(27), BIT(27), BIT(12), true),
[RK3399_PD_EDP] = DOMAIN_RK3399("edp", BIT(28), BIT(28), BIT(22), false),
[RK3399_PD_GIC] = DOMAIN_RK3399("gic", BIT(29), BIT(29), BIT(27), true),
[RK3399_PD_SD] = DOMAIN_RK3399("sd", BIT(30), BIT(30), BIT(28), true),
[RK3399_PD_SDIOAUDIO] = DOMAIN_RK3399("sdioaudio", BIT(31), BIT(31), BIT(29), true),
};
static const struct rockchip_domain_info rk3568_pm_domains[] = {
[RK3568_PD_NPU] = DOMAIN_RK3568("npu", BIT(1), BIT(2), false),
[RK3568_PD_GPU] = DOMAIN_RK3568("gpu", BIT(0), BIT(1), false),
[RK3568_PD_VI] = DOMAIN_RK3568("vi", BIT(6), BIT(3), false),
[RK3568_PD_VO] = DOMAIN_RK3568("vo", BIT(7), BIT(4), false),
[RK3568_PD_RGA] = DOMAIN_RK3568("rga", BIT(5), BIT(5), false),
[RK3568_PD_VPU] = DOMAIN_RK3568("vpu", BIT(2), BIT(6), false),
[RK3568_PD_RKVDEC] = DOMAIN_RK3568("vdec", BIT(4), BIT(8), false),
[RK3568_PD_RKVENC] = DOMAIN_RK3568("venc", BIT(3), BIT(7), false),
[RK3568_PD_PIPE] = DOMAIN_RK3568("pipe", BIT(8), BIT(11), false),
};
static const struct rockchip_pmu_info px30_pmu = {
@@ -976,6 +999,17 @@ static const struct rockchip_pmu_info rk3399_pmu = {
.domain_info = rk3399_pm_domains,
};
static const struct rockchip_pmu_info rk3568_pmu = {
.pwr_offset = 0xa0,
.status_offset = 0x98,
.req_offset = 0x50,
.idle_offset = 0x68,
.ack_offset = 0x60,
.num_domains = ARRAY_SIZE(rk3568_pm_domains),
.domain_info = rk3568_pm_domains,
};
static const struct of_device_id rockchip_pm_domain_dt_match[] = {
{
.compatible = "rockchip,px30-power-controller",
@@ -1021,6 +1055,10 @@ static const struct of_device_id rockchip_pm_domain_dt_match[] = {
.compatible = "rockchip,rk3399-power-controller",
.data = (void *)&rk3399_pmu,
},
{
.compatible = "rockchip,rk3568-power-controller",
.data = (void *)&rk3568_pmu,
},
{ /* sentinel */ },
};
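For reference, the new name argument threads through the DOMAIN()/DOMAIN_M() macros above, so an entry such as DOMAIN_RK3568("gpu", BIT(0), BIT(1), false) expands to roughly the initializer sketched below. This is a sketch that assumes the DOMAIN_M() fields elided by the hunk (req_w_mask, req_mask, idle_mask, ack_mask) keep their existing definitions; the same string is what the rockchip_pm_add_one_domain() hunk now assigns to pd->genpd.name instead of the raw DT node name.

	[RK3568_PD_GPU] = {
		.name          = "gpu",
		.pwr_w_mask    = BIT(0) << 16,
		.pwr_mask      = BIT(0),
		.status_mask   = BIT(0),        /* DOMAIN_RK3568 reuses pwr as status */
		.req_w_mask    = BIT(1) << 16,
		.req_mask      = BIT(1),
		.idle_mask     = BIT(1),        /* ... and req as idle/ack */
		.ack_mask      = BIT(1),
		.active_wakeup = false,
	},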


@@ -846,10 +846,8 @@ static int omap_sr_probe(struct platform_device *pdev)
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
sr_info->base = devm_ioremap_resource(&pdev->dev, mem);
if (IS_ERR(sr_info->base)) {
dev_err(&pdev->dev, "%s: ioremap fail\n", __func__);
if (IS_ERR(sr_info->base))
return PTR_ERR(sr_info->base);
}
irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);


@@ -445,10 +445,8 @@ static int wkup_m3_ipc_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
m3_ipc->ipc_mem_base = devm_ioremap_resource(dev, res);
if (IS_ERR(m3_ipc->ipc_mem_base)) {
dev_err(dev, "could not ioremap ipc_mem\n");
if (IS_ERR(m3_ipc->ipc_mem_base))
return PTR_ERR(m3_ipc->ipc_mem_base);
}
irq = platform_get_irq(pdev, 0);
if (!irq) {
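Both hunks above drop a dev_err() that merely duplicated the message devm_ioremap_resource() already prints when the mapping fails. A minimal sketch of the resulting idiom, with a hypothetical probe function (devm_platform_ioremap_resource() is the one-call shorthand for platform_get_resource() plus devm_ioremap_resource()):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	void __iomem *base;

	/* Logs its own error on failure; the caller only propagates the code. */
	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return PTR_ERR(base);

	return 0;
}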


@@ -0,0 +1,22 @@
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
/*
* Copyright (C) 2020 Pengutronix, Lucas Stach <kernel@pengutronix.de>
*/
#ifndef __DT_BINDINGS_IMX8MM_POWER_H__
#define __DT_BINDINGS_IMX8MM_POWER_H__
#define IMX8MM_POWER_DOMAIN_HSIOMIX 0
#define IMX8MM_POWER_DOMAIN_PCIE 1
#define IMX8MM_POWER_DOMAIN_OTG1 2
#define IMX8MM_POWER_DOMAIN_OTG2 3
#define IMX8MM_POWER_DOMAIN_GPUMIX 4
#define IMX8MM_POWER_DOMAIN_GPU 5
#define IMX8MM_POWER_DOMAIN_VPUMIX 6
#define IMX8MM_POWER_DOMAIN_VPUG1 7
#define IMX8MM_POWER_DOMAIN_VPUG2 8
#define IMX8MM_POWER_DOMAIN_VPUH1 9
#define IMX8MM_POWER_DOMAIN_DISPMIX 10
#define IMX8MM_POWER_DOMAIN_MIPI 11
#endif


@@ -0,0 +1,15 @@
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
/*
* Copyright (C) 2020 Compass Electronics Group, LLC
*/
#ifndef __DT_BINDINGS_IMX8MN_POWER_H__
#define __DT_BINDINGS_IMX8MN_POWER_H__
#define IMX8MN_POWER_DOMAIN_HSIOMIX 0
#define IMX8MN_POWER_DOMAIN_OTG1 1
#define IMX8MN_POWER_DOMAIN_GPUMIX 2
#define IMX8MN_POWER_DOMAIN_DISPMIX 3
#define IMX8MN_POWER_DOMAIN_MIPI 4
#endif


@@ -81,6 +81,19 @@
#define SC7280_LCX 7
#define SC7280_MSS 8
/* SC8180X Power Domain Indexes */
#define SC8180X_CX 0
#define SC8180X_CX_AO 1
#define SC8180X_EBI 2
#define SC8180X_GFX 3
#define SC8180X_LCX 4
#define SC8180X_LMX 5
#define SC8180X_MMCX 6
#define SC8180X_MMCX_AO 7
#define SC8180X_MSS 8
#define SC8180X_MX 9
#define SC8180X_MX_AO 10
/* SDM845 Power Domain performance levels */
#define RPMH_REGULATOR_LEVEL_RETENTION 16
#define RPMH_REGULATOR_LEVEL_MIN_SVS 48
@@ -95,6 +108,14 @@
#define RPMH_REGULATOR_LEVEL_TURBO 384
#define RPMH_REGULATOR_LEVEL_TURBO_L1 416
/* MDM9607 Power Domains */
#define MDM9607_VDDCX 0
#define MDM9607_VDDCX_AO 1
#define MDM9607_VDDCX_VFL 2
#define MDM9607_VDDMX 3
#define MDM9607_VDDMX_AO 4
#define MDM9607_VDDMX_VFL 5
/* MSM8939 Power Domains */
#define MSM8939_VDDMDCX 0
#define MSM8939_VDDMDCX_AO 1


@@ -0,0 +1,32 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __DT_BINDINGS_POWER_RK3568_POWER_H__
#define __DT_BINDINGS_POWER_RK3568_POWER_H__
/* VD_CORE */
#define RK3568_PD_CPU_0 0
#define RK3568_PD_CPU_1 1
#define RK3568_PD_CPU_2 2
#define RK3568_PD_CPU_3 3
#define RK3568_PD_CORE_ALIVE 4
/* VD_PMU */
#define RK3568_PD_PMU 5
/* VD_NPU */
#define RK3568_PD_NPU 6
/* VD_GPU */
#define RK3568_PD_GPU 7
/* VD_LOGIC */
#define RK3568_PD_VI 8
#define RK3568_PD_VO 9
#define RK3568_PD_RGA 10
#define RK3568_PD_VPU 11
#define RK3568_PD_CENTER 12
#define RK3568_PD_RKVDEC 13
#define RK3568_PD_RKVENC 14
#define RK3568_PD_PIPE 15
#define RK3568_PD_LOGIC_ALIVE 16
#endif

include/linux/arm_ffa.h (new file, 267 lines)

@@ -0,0 +1,267 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2021 ARM Ltd.
*/
#ifndef _LINUX_ARM_FFA_H
#define _LINUX_ARM_FFA_H
#include <linux/device.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/uuid.h>
/* FFA Bus/Device/Driver related */
struct ffa_device {
int vm_id;
bool mode_32bit;
uuid_t uuid;
struct device dev;
};
#define to_ffa_dev(d) container_of(d, struct ffa_device, dev)
struct ffa_device_id {
uuid_t uuid;
};
struct ffa_driver {
const char *name;
int (*probe)(struct ffa_device *sdev);
void (*remove)(struct ffa_device *sdev);
const struct ffa_device_id *id_table;
struct device_driver driver;
};
#define to_ffa_driver(d) container_of(d, struct ffa_driver, driver)
static inline void ffa_dev_set_drvdata(struct ffa_device *fdev, void *data)
{
fdev->dev.driver_data = data;
}
#if IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)
struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id);
void ffa_device_unregister(struct ffa_device *ffa_dev);
int ffa_driver_register(struct ffa_driver *driver, struct module *owner,
const char *mod_name);
void ffa_driver_unregister(struct ffa_driver *driver);
bool ffa_device_is_valid(struct ffa_device *ffa_dev);
const struct ffa_dev_ops *ffa_dev_ops_get(struct ffa_device *dev);
#else
static inline
struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id)
{
return NULL;
}
static inline void ffa_device_unregister(struct ffa_device *dev) {}
static inline int
ffa_driver_register(struct ffa_driver *driver, struct module *owner,
const char *mod_name)
{
return -EINVAL;
}
static inline void ffa_driver_unregister(struct ffa_driver *driver) {}
static inline
bool ffa_device_is_valid(struct ffa_device *ffa_dev) { return false; }
static inline
const struct ffa_dev_ops *ffa_dev_ops_get(struct ffa_device *dev)
{
return NULL;
}
#endif /* CONFIG_ARM_FFA_TRANSPORT */
#define ffa_register(driver) \
ffa_driver_register(driver, THIS_MODULE, KBUILD_MODNAME)
#define ffa_unregister(driver) \
ffa_driver_unregister(driver)
/**
* module_ffa_driver() - Helper macro for registering a psa_ffa driver
* @__ffa_driver: ffa_driver structure
*
* Helper macro for psa_ffa drivers to set up proper module init / exit
* functions. Replaces module_init() and module_exit() and keeps people from
* printing pointless things to the kernel log when their driver is loaded.
*/
#define module_ffa_driver(__ffa_driver) \
module_driver(__ffa_driver, ffa_register, ffa_unregister)
/* FFA transport related */
struct ffa_partition_info {
u16 id;
u16 exec_ctxt;
/* partition supports receipt of direct requests */
#define FFA_PARTITION_DIRECT_RECV BIT(0)
/* partition can send direct requests. */
#define FFA_PARTITION_DIRECT_SEND BIT(1)
/* partition can send and receive indirect messages. */
#define FFA_PARTITION_INDIRECT_MSG BIT(2)
u32 properties;
};
/* For use with FFA_MSG_SEND_DIRECT_{REQ,RESP} which pass data via registers */
struct ffa_send_direct_data {
unsigned long data0; /* w3/x3 */
unsigned long data1; /* w4/x4 */
unsigned long data2; /* w5/x5 */
unsigned long data3; /* w6/x6 */
unsigned long data4; /* w7/x7 */
};
struct ffa_mem_region_addr_range {
/* The base IPA of the constituent memory region, aligned to 4 kiB */
u64 address;
/* The number of 4 kiB pages in the constituent memory region. */
u32 pg_cnt;
u32 reserved;
};
struct ffa_composite_mem_region {
/*
* The total number of 4 kiB pages included in this memory region. This
* must be equal to the sum of page counts specified in each
* `struct ffa_mem_region_addr_range`.
*/
u32 total_pg_cnt;
/* The number of constituents included in this memory region range */
u32 addr_range_cnt;
u64 reserved;
/** An array of `addr_range_cnt` memory region constituents. */
struct ffa_mem_region_addr_range constituents[];
};
struct ffa_mem_region_attributes {
/* The ID of the VM to which the memory is being given or shared. */
u16 receiver;
/*
* The permissions with which the memory region should be mapped in the
* receiver's page table.
*/
#define FFA_MEM_EXEC BIT(3)
#define FFA_MEM_NO_EXEC BIT(2)
#define FFA_MEM_RW BIT(1)
#define FFA_MEM_RO BIT(0)
u8 attrs;
/*
* Flags used during FFA_MEM_RETRIEVE_REQ and FFA_MEM_RETRIEVE_RESP
* for memory regions with multiple borrowers.
*/
#define FFA_MEM_RETRIEVE_SELF_BORROWER BIT(0)
u8 flag;
u32 composite_off;
/*
* Offset in bytes from the start of the outer `ffa_memory_region` to
* an `struct ffa_mem_region_addr_range`.
*/
u64 reserved;
};
struct ffa_mem_region {
/* The ID of the VM/owner which originally sent the memory region */
u16 sender_id;
#define FFA_MEM_NORMAL BIT(5)
#define FFA_MEM_DEVICE BIT(4)
#define FFA_MEM_WRITE_BACK (3 << 2)
#define FFA_MEM_NON_CACHEABLE (1 << 2)
#define FFA_DEV_nGnRnE (0 << 2)
#define FFA_DEV_nGnRE (1 << 2)
#define FFA_DEV_nGRE (2 << 2)
#define FFA_DEV_GRE (3 << 2)
#define FFA_MEM_NON_SHAREABLE (0)
#define FFA_MEM_OUTER_SHAREABLE (2)
#define FFA_MEM_INNER_SHAREABLE (3)
u8 attributes;
u8 reserved_0;
/*
* Clear memory region contents after unmapping it from the sender and
* before mapping it for any receiver.
*/
#define FFA_MEM_CLEAR BIT(0)
/*
* Whether the hypervisor may time slice the memory sharing or retrieval
* operation.
*/
#define FFA_TIME_SLICE_ENABLE BIT(1)
#define FFA_MEM_RETRIEVE_TYPE_IN_RESP (0 << 3)
#define FFA_MEM_RETRIEVE_TYPE_SHARE (1 << 3)
#define FFA_MEM_RETRIEVE_TYPE_LEND (2 << 3)
#define FFA_MEM_RETRIEVE_TYPE_DONATE (3 << 3)
#define FFA_MEM_RETRIEVE_ADDR_ALIGN_HINT BIT(9)
#define FFA_MEM_RETRIEVE_ADDR_ALIGN(x) ((x) << 5)
/* Flags to control behaviour of the transaction. */
u32 flags;
#define HANDLE_LOW_MASK GENMASK_ULL(31, 0)
#define HANDLE_HIGH_MASK GENMASK_ULL(63, 32)
#define HANDLE_LOW(x) ((u32)(FIELD_GET(HANDLE_LOW_MASK, (x))))
#define HANDLE_HIGH(x) ((u32)(FIELD_GET(HANDLE_HIGH_MASK, (x))))
#define PACK_HANDLE(l, h) \
(FIELD_PREP(HANDLE_LOW_MASK, (l)) | FIELD_PREP(HANDLE_HIGH_MASK, (h)))
/*
* A globally-unique ID assigned by the hypervisor for a region
* of memory being sent between VMs.
*/
u64 handle;
/*
* An implementation defined value associated with the receiver and the
* memory region.
*/
u64 tag;
u32 reserved_1;
/*
* The number of `ffa_mem_region_attributes` entries included in this
* transaction.
*/
u32 ep_count;
/*
* An array of endpoint memory access descriptors.
* Each one specifies a memory region offset, an endpoint and the
* attributes with which this memory region should be mapped in that
* endpoint's page table.
*/
struct ffa_mem_region_attributes ep_mem_access[];
};
#define COMPOSITE_OFFSET(x) \
(offsetof(struct ffa_mem_region, ep_mem_access[x]))
#define CONSTITUENTS_OFFSET(x) \
(offsetof(struct ffa_composite_mem_region, constituents[x]))
#define COMPOSITE_CONSTITUENTS_OFFSET(x, y) \
(COMPOSITE_OFFSET(x) + CONSTITUENTS_OFFSET(y))
struct ffa_mem_ops_args {
bool use_txbuf;
u32 nattrs;
u32 flags;
u64 tag;
u64 g_handle;
struct scatterlist *sg;
struct ffa_mem_region_attributes *attrs;
};
struct ffa_dev_ops {
u32 (*api_version_get)(void);
int (*partition_info_get)(const char *uuid_str,
struct ffa_partition_info *buffer);
void (*mode_32bit_set)(struct ffa_device *dev);
int (*sync_send_receive)(struct ffa_device *dev,
struct ffa_send_direct_data *data);
int (*memory_reclaim)(u64 g_handle, u32 flags);
int (*memory_share)(struct ffa_device *dev,
struct ffa_mem_ops_args *args);
};
#endif /* _LINUX_ARM_FFA_H */
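To illustrate the bus model this header exposes, here is a sketch of a minimal FF-A client driver. The UUID, names and message payload are placeholders; only the types and registration helpers come from the header above.

#include <linux/arm_ffa.h>
#include <linux/module.h>
#include <linux/uuid.h>

static int example_ffa_probe(struct ffa_device *ffa_dev)
{
	const struct ffa_dev_ops *ops = ffa_dev_ops_get(ffa_dev);
	struct ffa_send_direct_data data = { .data0 = 0x1 }; /* placeholder payload */

	if (!ops)
		return -ENODEV;

	/* Synchronous direct request to the partition backing this device. */
	return ops->sync_send_receive(ffa_dev, &data);
}

static void example_ffa_remove(struct ffa_device *ffa_dev)
{
}

/* Placeholder UUID; a real driver matches the UUID its partition advertises. */
static const struct ffa_device_id example_ffa_id_table[] = {
	{ UUID_INIT(0x12345678, 0x1234, 0x1234,
		    0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34) },
	{}
};

static struct ffa_driver example_ffa_driver = {
	.name     = "example-ffa",
	.probe    = example_ffa_probe,
	.remove   = example_ffa_remove,
	.id_table = example_ffa_id_table,
};
module_ffa_driver(example_ffa_driver);

MODULE_LICENSE("GPL v2");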


@@ -79,6 +79,7 @@ struct reset_controller_dev {
unsigned int nr_resets;
};
#if IS_ENABLED(CONFIG_RESET_CONTROLLER)
int reset_controller_register(struct reset_controller_dev *rcdev);
void reset_controller_unregister(struct reset_controller_dev *rcdev);
@@ -88,5 +89,26 @@ int devm_reset_controller_register(struct device *dev,
void reset_controller_add_lookup(struct reset_control_lookup *lookup,
unsigned int num_entries);
#else
static inline int reset_controller_register(struct reset_controller_dev *rcdev)
{
return 0;
}
static inline void reset_controller_unregister(struct reset_controller_dev *rcdev)
{
}
static inline int devm_reset_controller_register(struct device *dev,
struct reset_controller_dev *rcdev)
{
return 0;
}
static inline void reset_controller_add_lookup(struct reset_control_lookup *lookup,
unsigned int num_entries)
{
}
#endif
#endif
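The new stubs let reset-provider code build cleanly with CONFIG_RESET_CONTROLLER=n, where registration silently succeeds and does nothing. For context, a sketch of what such a provider registers; the ops, driver names, compatible string and reset-line count are hypothetical:

#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/slab.h>

static int example_reset_assert(struct reset_controller_dev *rcdev,
				unsigned long id)
{
	/* Poke the SoC-specific reset register for line 'id' here. */
	return 0;
}

static const struct reset_control_ops example_reset_ops = {
	.assert = example_reset_assert,
};

static int example_reset_probe(struct platform_device *pdev)
{
	struct reset_controller_dev *rcdev;

	rcdev = devm_kzalloc(&pdev->dev, sizeof(*rcdev), GFP_KERNEL);
	if (!rcdev)
		return -ENOMEM;

	rcdev->owner = THIS_MODULE;
	rcdev->ops = &example_reset_ops;
	rcdev->of_node = pdev->dev.of_node;
	rcdev->nr_resets = 8; /* arbitrary example count */

	/* Becomes a successful no-op when CONFIG_RESET_CONTROLLER is disabled. */
	return devm_reset_controller_register(&pdev->dev, rcdev);
}

static const struct of_device_id example_reset_of_match[] = {
	{ .compatible = "example,reset" }, /* hypothetical compatible */
	{}
};

static struct platform_driver example_reset_driver = {
	.probe = example_reset_probe,
	.driver = {
		.name = "example-reset",
		.of_match_table = example_reset_of_match,
	},
};
module_platform_driver(example_reset_driver);

MODULE_LICENSE("GPL v2");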


@@ -19,7 +19,7 @@ enum rpcif_data_dir {
RPCIF_DATA_OUT,
};
struct rpcif_op {
struct rpcif_op {
struct {
u8 buswidth;
u8 opcode;
@@ -57,7 +57,7 @@ struct rpcif_op {
} data;
};
struct rpcif {
struct rpcif {
struct device *dev;
void __iomem *dirmap;
struct regmap *regmap;
@@ -76,7 +76,7 @@ struct rpcif {
u32 ddr; /* DRDRENR or SMDRENR */
};
int rpcif_sw_init(struct rpcif *rpc, struct device *dev);
int rpcif_sw_init(struct rpcif *rpc, struct device *dev);
void rpcif_hw_init(struct rpcif *rpc, bool hyperflash);
void rpcif_prepare(struct rpcif *rpc, const struct rpcif_op *op, u64 *offs,
size_t *len);


@@ -10,6 +10,7 @@
#include <linux/debugfs.h>
#include <linux/err.h>
#include <linux/interconnect-provider.h>
#include <linux/irq.h>
#include <linux/reset-controller.h>
#include <linux/types.h>
@@ -17,34 +18,48 @@ struct clk;
struct device;
struct page;
struct tegra_smmu_enable {
unsigned int reg;
unsigned int bit;
};
struct tegra_mc_timing {
unsigned long rate;
u32 *emem_data;
};
/* latency allowance */
struct tegra_mc_la {
unsigned int reg;
unsigned int shift;
unsigned int mask;
unsigned int def;
};
struct tegra_mc_client {
unsigned int id;
const char *name;
unsigned int swgroup;
/*
* For Tegra210 and earlier, this is the SWGROUP ID used for IOVA translations in the
* Tegra SMMU, whereas on Tegra186 and later this is the ID used to override the ARM SMMU
* stream ID used for IOVA translations for the given memory client.
*/
union {
unsigned int swgroup;
unsigned int sid;
};
unsigned int fifo_size;
struct tegra_smmu_enable smmu;
struct tegra_mc_la la;
struct {
/* Tegra SMMU enable (Tegra210 and earlier) */
struct {
unsigned int reg;
unsigned int bit;
} smmu;
/* latency allowance */
struct {
unsigned int reg;
unsigned int shift;
unsigned int mask;
unsigned int def;
} la;
/* stream ID overrides (Tegra186 and later) */
struct {
unsigned int override;
unsigned int security;
} sid;
} regs;
};
struct tegra_smmu_swgroup {
@@ -155,6 +170,19 @@ struct tegra_mc_icc_ops {
void *data);
};
struct tegra_mc_ops {
/*
* @probe: Callback to set up SoC-specific bits of the memory controller. This is called
* after basic, common set up that is done by the SoC-agnostic bits.
*/
int (*probe)(struct tegra_mc *mc);
void (*remove)(struct tegra_mc *mc);
int (*suspend)(struct tegra_mc *mc);
int (*resume)(struct tegra_mc *mc);
irqreturn_t (*handle_irq)(int irq, void *data);
int (*probe_device)(struct tegra_mc *mc, struct device *dev);
};
struct tegra_mc_soc {
const struct tegra_mc_client *clients;
unsigned int num_clients;
@@ -176,8 +204,7 @@ struct tegra_mc_soc {
unsigned int num_resets;
const struct tegra_mc_icc_ops *icc_ops;
int (*init)(struct tegra_mc *mc);
const struct tegra_mc_ops *ops;
};
struct tegra_mc {
@@ -218,4 +245,6 @@ devm_tegra_memory_controller_get(struct device *dev)
}
#endif
int tegra_mc_probe_device(struct tegra_mc *mc, struct device *dev);
#endif /* __SOC_TEGRA_MC_H__ */
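The per-SoC hooks that replace the old init() callback now live in the new tegra_mc_ops structure shown above. As a sketch of how a SoC backend might fill it in (the tegra_foo_* names are placeholders, not code from this series):

#include <linux/interrupt.h>
#include <soc/tegra/mc.h>

static int tegra_foo_mc_probe(struct tegra_mc *mc)
{
	/* SoC-specific setup, run after the common probe code. */
	return 0;
}

static irqreturn_t tegra_foo_mc_handle_irq(int irq, void *data)
{
	/* 'data' is the struct tegra_mc *; decode SoC-specific faults here. */
	return IRQ_HANDLED;
}

static const struct tegra_mc_ops tegra_foo_mc_ops = {
	.probe      = tegra_foo_mc_probe,
	.handle_irq = tegra_foo_mc_handle_irq,
};

static const struct tegra_mc_soc tegra_foo_mc_soc = {
	.ops = &tegra_foo_mc_ops,
	/* clients, num_clients, register layout, etc. omitted. */
};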