pci-v6.11-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmaahiEUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwypg/+LSzrx0CyyXruwwkjuoMIzqXoEpxV
 SSdJv47E9rnJymQvd0RAeNyc1BPbtRcP1FdEvV/G1ovb8qJSOJgU22PSSiMQsQ0h
 2WGBl1ShubQDDLBdy1AggAsRJhIH4P4tWZ4k5Ftz6WZPWA1UcrDqmjN4d02UIYZb
 A3YYcBEIm6bvrixxy+xq/Ii7S9A2idikabDLLGXOMSliFHx0ehWDNXyQEBONlrDh
 rEHih21rPtOltVEdJl7yF+SIA467HI09NuXfTviHWnJ1hinFoSlEHIhz4j+i+r//
 xOj7iDqtk/UAIToVsxtwgOnElNwY6ab/h/t1AmSSxX4FUEV2TiS1YEpUfX7pByt+
 dytgvepjQyycC/ZHUtRZFZ6+1M0z+Vgb5c3+jXyPh8pQEPqmXt8+KYVIi/wychmJ
 Opo4xniiDoKHSZ4E0bg/wMbe9yVCjTpX0i0S7BbNa/TRjud6vAhXvgx/y092jsdg
 h4lU0ywNCgea/rZFHZYomPjncx9xJ+rtOaH+/dVQhCm/wuRHnj7tJGZnl5LfCWVw
 +yNOcExQaE+lRvKqp6mQvUva3+4UArAL2tnFC00tGd0emRLIvXrxY2lF1sqp9wCZ
 AJu65El4nnpFNU7vJR7x4X31BvcdquFEvfofPxPXbPz09N8hPRhkunKzgd5ftKZS
 mcxMfStvIFXiMEM=
 =vw2i
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.11-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull pci updates from Bjorn Helgaas:
 "Enumeration:

   - Define PCIE_RESET_CONFIG_DEVICE_WAIT_MS for the generic 100ms
     required after reset before config access (Kevin Xie)

   - Define PCIE_T_RRS_READY_MS for the generic 100ms required after
     reset before config access (probably should be unified with
     PCIE_RESET_CONFIG_DEVICE_WAIT_MS) (Damien Le Moal)
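
   As an illustration (this snippet is from neither patch; the constant
   lives in the private drivers/pci/pci.h header), a host controller
   driver that has just deasserted PERST# honors the window like:

      /* PERST# deasserted; wait before the first config access */
      msleep(PCIE_T_RRS_READY_MS);      /* 100 ms */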

  Resource management:

   - Rename find_resource() to find_resource_space() to be more
     descriptive (Ilpo Järvinen)

   - Export find_resource_space() for use by PCI core, which needs to
     learn whether there is available space for a bridge window (Ilpo
     Järvinen)

   - Prevent double counting of resources so window size doesn't grow on
     each remove/rescan cycle (Ilpo Järvinen)

   - Relax bridge window sizing algorithm so a device doesn't break
     simply because it was removed and rescanned (Ilpo Järvinen)

   - Evaluate the ACPI PRESERVE_BOOT_CONFIG _DSM in
     pci_register_host_bridge() (not acpi_pci_root_create()) so we can
     unify it with similar DT functionality (Vidya Sagar)

   - Extend use of DT "linux,pci-probe-only" property so it works
     per-host bridge as well as globally (Vidya Sagar)

   - Unify support for ACPI PRESERVE_BOOT_CONFIG _DSM and the DT
     "linux,pci-probe-only" property in pci_preserve_config() (Vidya
     Sagar)

  Driver binding:

   - Add devres infrastructure for managed request and map of partial
     BAR resources (Philipp Stanner)

   - Deprecate pcim_iomap_table() because uses like
     "pcim_iomap_table()[0]" have no good way to return errors (Philipp
     Stanner)

   - Add an always-managed pcim_request_region() for use instead of
     pci_request_region() and similar, which are sometimes managed
     depending on whether pcim_enable_device() has been called
     previously (Philipp Stanner)

   - Reimplement pcim_set_mwi() so it doesn't need to store MWI state
     (Philipp Stanner)

   - Add pcim_intx() for use instead of pci_intx(), which is sometimes
     managed depending on whether pcim_enable_device() has been called
     previously (Philipp Stanner)

   - Add managed pcim_iomap_range() to allow mapping of a partial BAR
     (Philipp Stanner); a usage sketch appears at the end of this
     section

   - Fix a devres mapping leak in drm/vboxvideo (Philipp Stanner)
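
   A minimal sketch of a probe path built on the always-managed helpers
   (the "foo" driver, its BAR layout, and the use of SZ_4K from
   <linux/sizes.h> are invented for illustration; everything requested
   here is released automatically on unbind):

      static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
      {
              void __iomem *regs;
              int ret;

              ret = pcim_enable_device(pdev);
              if (ret)
                      return ret;

              /* Always managed, unlike pci_request_region() */
              ret = pcim_request_region(pdev, 0, "foo");
              if (ret)
                      return ret;

              /* Map only the first 4K of BAR 0 */
              regs = pcim_iomap_range(pdev, 0, 0, SZ_4K);
              if (IS_ERR(regs))
                      return PTR_ERR(regs); /* explicit error, not just NULL */

              return 0;
      }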

  Error handling:

   - Add missing bridge locking in device reset path and add a warning
     for other possible lock issues (Dan Williams)

   - Fix use-after-free on concurrent DPC and hot-removal (Lukas Wunner)

  Power management:

   - Disable AER and DPC during suspend to avoid spurious wakeups if
     they share an interrupt with PME (Kai-Heng Feng)

  PCIe native device hotplug:

   - Detect if a device was removed or replaced during system sleep so
     we don't assume a new device is the one that used to be there
     (Lukas Wunner)

  Virtualization:

   - Add an ACS quirk for Broadcom BCM5760X multi-function NIC; it
     prevents transactions between functions even though it doesn't
     advertise ACS, so the functions can be attached individually via
     VFIO (Ajit Khaparde)

  Peer-to-peer DMA:

   - Add a "pci=config_acs=" kernel command-line parameter to relax
     default ACS settings to enable additional peer-to-peer
     configurations. Requires expert knowledge of topology and ACS
     operation (Vidya Sagar)
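
   For example (the device address is invented; the full flag syntax
   appears in the kernel-parameters.txt hunk later on this page),
   enabling only P2P Request Redirect on one device while leaving its
   other ACS bits untouched might look like:

      pci=config_acs=xxxx1xx@0000:03:00.0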

  Endpoint framework:

   - Remove unused struct pci_epf_group.type_group (Christophe JAILLET)

   - Fix error handling in vpci_scan_bus() and epf_ntb_epc_cleanup()
     (Dan Carpenter)

   - Make struct pci_epc_class constant (Greg Kroah-Hartman)

   - Remove unused pci_endpoint_test_bar_{readl,writel} functions
     (Jiapeng Chong)

   - Rename "BME" to "Bus Master Enable" (Manivannan Sadhasivam)

   - Rename struct pci_epc_event_ops.core_init() callback to epc_init()
     (Manivannan Sadhasivam)

   - Move DMA init to MHI .epc_init() callback for uniformity
     (Manivannan Sadhasivam)

   - Cancel EPF test delayed work when link goes down (Manivannan
     Sadhasivam)

   - Add struct pci_epc_event_ops.epc_deinit() callback for cleanup
     needed on fundamental reset (Manivannan Sadhasivam); see the
     sketch at the end of this section

   - Add 64KB alignment to endpoint test to support Rockchip rk3588
     (Niklas Cassel)

   - Optimize endpoint test by using memcpy() instead of readl() (Niklas
     Cassel)
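
   A sketch of how an endpoint function driver wires up these callbacks
   (the "foo" handlers are invented; the structure and callback names
   are those described above):

      static int foo_epc_init(struct pci_epf *epf)
      {
              /* formerly .core_init(): program BARs, set up DMA, ... */
              return 0;
      }

      static void foo_epc_deinit(struct pci_epf *epf)
      {
              /* undo foo_epc_init() on fundamental reset */
      }

      static int foo_link_down(struct pci_epf *epf)
      {
              /* e.g. cancel delayed work, as pci-epf-test now does */
              return 0;
      }

      static const struct pci_epc_event_ops foo_event_ops = {
              .epc_init   = foo_epc_init,
              .epc_deinit = foo_epc_deinit,
              .link_down  = foo_link_down,
      };

      /* in foo's probe: epf->event_ops = &foo_event_ops; */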

  Device tree bindings:

   - Add generic "ats-supported" property to advertise that a PCIe Root
     Complex supports ATS (Jean-Philippe Brucker)

  Amazon Annapurna Labs PCIe controller driver:

   - Validate IORESOURCE_BUS presence to avoid NULL pointer dereference
     (Aleksandr Mishin)

  Axis ARTPEC-6 PCIe controller driver:

   - Rename .cpu_addr_fixup() parameter to reflect that it is a PCI
     address, not a CPU address (Niklas Cassel)

  Freescale i.MX6 PCIe controller driver:

   - Convert to agnostic GPIO API (Andy Shevchenko)

  Freescale Layerscape PCIe controller driver:

   - Make struct mobiveil_rp_ops constant (Christophe JAILLET)

   - Use new generic dw_pcie_ep_linkdown() to handle link-down events
     (Manivannan Sadhasivam)

  HiSilicon Kirin PCIe controller driver:

   - Convert to agnostic GPIO API (Andy Shevchenko)

   - Use _scoped() iterator for OF children to ensure refcounts are
     decremented at loop exit (Javier Carrasco)

  Intel VMD host bridge driver:

   - Create sysfs "domain" symlink before downstream devices are exposed
     to userspace by pci_bus_add_devices() (Jiwei Sun)

  Loongson PCIe controller driver:

   - Enable MSI when LS7A is used with new CPUs that have integrated
     PCIe Root Complex, e.g., Loongson-3C6000, so downstream devices can
     use MSI (Huacai Chen)

  Microchip AXI PolarFire PCIe controller driver:

   - Move pcie-microchip-host.c to a new PLDA directory (Minda Chen)

   - Factor PLDA generic items out to a common
     plda,xpressrich3-axi-common.yaml binding (Minda Chen)

   - Factor PLDA generic data structures and code out to shared
     pcie-plda.h, pcie-plda-host.c (Minda Chen)

   - Add PLDA generic interrupt handling with a .request_event_irq()
     callback for vendor-specific events (Minda Chen)

   - Add PLDA generic host init/deinit and map bus functions for use by
     vendor-specific drivers (Minda Chen)

   - Rework to use PLDA core (Minda Chen)

  Microsoft Hyper-V host bridge driver:

   - Return zero, not garbage, when reading PCI_INTERRUPT_PIN (Wei Liu)

  NVIDIA Tegra194 PCIe controller driver:

   - Remove unused struct tegra_pcie_soc (Dr. David Alan Gilbert)

   - Set 64KB inbound ATU alignment restriction (Jon Hunter)

  Qualcomm PCIe controller driver:

   - Make the MHI reg region mandatory for X1E80100, since all PCIe
     controllers have it (Abel Vesa)

   - Prevent use of uninitialized data and possible error pointer
     dereference (Dan Carpenter)

   - Return error, not success, if dev_pm_opp_find_freq_floor() fails
     (Dan Carpenter)

   - Add Operating Performance Points (OPP) support to scale performance
     state based on aggregate link bandwidth to improve SoC power
     efficiency (Krishna chaitanya chundru)

   - Vote for the CPU-PCIe ICC (interconnect) path to ensure it stays
     active even if other drivers don't vote for it (Krishna chaitanya
     chundru)

   - Use devm_clk_bulk_get_all() to get all the clocks from DT to avoid
     writing out all the clock names (Manivannan Sadhasivam)

   - Add DT binding and driver support for the SA8775P SoC (Mrinmay
     Sarkar)

   - Add HDMA support for the SA8775P SoC (Mrinmay Sarkar)

   - Override the SA8775P NO_SNOOP default to avoid possible memory
     corruption (Mrinmay Sarkar)

   - Make sure resources are disabled during PERST# assertion, even if
     the link is already disabled (Manivannan Sadhasivam)

   - Use new generic dw_pcie_ep_linkdown() to handle link-down events
     (Manivannan Sadhasivam)

   - Add DT and endpoint driver support for the SA8775P SoC (Mrinmay
     Sarkar)

   - Add Hyper DMA (HDMA) support for the SA8775P SoC and enable it in
     the EPF MHI driver (Mrinmay Sarkar)

   - Set PCIE_PARF_NO_SNOOP_OVERIDE to override the default NO_SNOOP
     attribute on the SA8775P SoC (both Root Complex and Endpoint mode)
     to avoid possible memory corruption (Mrinmay Sarkar)

  Renesas R-Car PCIe controller driver:

   - Demote WARN() to dev_warn_ratelimited() in rcar_pcie_wakeup() to
     avoid unnecessary backtrace (Marek Vasut)

   - Add DT and driver support for R-Car V4H (R8A779G0) host and
     endpoint. This requires separate proprietary firmware (Yoshihiro
     Shimoda)

  Rockchip PCIe controller driver:

   - Assert PERST# for 100ms after power is stable (Damien Le Moal)

   - Wait PCIE_T_RRS_READY_MS (100ms) after reset before starting
     configuration (Damien Le Moal)

   - Use GPIOD_OUT_LOW flag while requesting ep_gpio to fix a firmware
     crash in Qcom-based modems attached to the ROCKPro64 board
     (Manivannan Sadhasivam)

  Rockchip DesignWare PCIe controller driver:

   - Factor common parts of rockchip-dw-pcie DT binding to be shared by
     Root Complex and Endpoint mode (Niklas Cassel)

   - Add missing INTx signals to common DT binding (Niklas Cassel)

   - Add eDMA items to DT binding for Endpoint controller (Niklas
     Cassel)

   - Fix initial dw-rockchip PERST# GPIO value to prevent unnecessary
     short assert/deassert that causes issues with some WLAN controllers
     (Niklas Cassel)

   - Refactor dw-rockchip and add support for Endpoint mode (Niklas
     Cassel)

   - Call pci_epc_init_notify() and drop dw_pcie_ep_init_notify()
     wrapper (Niklas Cassel)

   - Add error messages in .probe() error paths to improve user
     experience (Uwe Kleine-König)

  Samsung Exynos PCIe controller driver:

   - Use bulk clock APIs to simplify clock setup (Shradha Todi)

  StarFive PCIe controller driver:

   - Add DT binding and driver support for the StarFive JH7110
     PLDA-based PCIe controller (Minda Chen)

  Synopsys DesignWare PCIe controller driver:

   - Add generic support for sending PME_Turn_Off when system suspends
     (Frank Li)

   - Fix incorrect interpretation of iATU slot 0 after PERST#
     assert/deassert (Frank Li)

   - Use msleep() instead of usleep_range() while waiting for link
     (Konrad Dybcio)

   - Refactor dw_pcie_edma_find_chip() to enable adding support for
     Hyper DMA (HDMA) (Manivannan Sadhasivam)

   - Enable drivers to supply the eDMA channel count since some can't
     auto detect this (Manivannan Sadhasivam)

   - Call pci_epc_init_notify() and drop dw_pcie_ep_init_notify()
     wrapper (Manivannan Sadhasivam)

   - Pass the eDMA mapping format directly from drivers instead of
     maintaining a capability for it (Manivannan Sadhasivam)

   - Add generic dw_pcie_ep_linkdown() to notify EPF drivers about
     link-down events and restore non-sticky DWC registers lost on link
     down (Manivannan Sadhasivam)

   - Add vendor-specific "apb" reg name, interrupt names, INTx names to
     generic binding (Niklas Cassel)

   - Enforce DWC restriction that 64-bit BARs must start with an
     even-numbered BAR (Niklas Cassel)

   - Consolidate args of dw_pcie_prog_outbound_atu() into a structure
     (Yoshihiro Shimoda); see the sketch at the end of this section

   - Add support for endpoints to send Message TLPs, e.g., for INTx
     emulation (Yoshihiro Shimoda)
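
   For instance (values invented; the field names follow this cycle's
   struct dw_pcie_ob_atu_cfg), programming an outbound window now takes
   the form:

      struct dw_pcie_ob_atu_cfg atu = { 0 };

      atu.index    = 0;
      atu.type     = PCIE_ATU_TYPE_MEM;
      atu.cpu_addr = cpu_addr;
      atu.pci_addr = pci_addr;
      atu.size     = size;

      ret = dw_pcie_prog_outbound_atu(pci, &atu);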

  TI DRA7xx PCIe controller driver:

   - Rename .cpu_addr_fixup() parameter to reflect that it is a PCI
     address, not a CPU address (Niklas Cassel)

  TI Keystone PCIe controller driver:

   - Validate IORESOURCE_BUS presence to avoid NULL pointer dereference
     (Aleksandr Mishin)

   - Work around AM65x/DRA80xM Errata #i2037, which corrupts TLPs and
     causes processor hangs, by limiting Max_Read_Request_Size (MRRS)
     and Max_Payload_Size (MPS) (Kishon Vijay Abraham I); the core of
     the fix is sketched below

   - Leave BAR 0 disabled for AM654x to fix a regression caused by
     6ab15b5e70 ("PCI: dwc: keystone: Convert .scan_bus() callback to
     use add_bus"), which caused a 45-second boot delay (Siddharth
     Vadapalli)
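
   The heart of that workaround (see the pci-keystone.c hunk at the
   bottom of this page) is clamping the read request size on matching
   devices:

      /* Errata #i2037: keep MRRS at or below 256 bytes */
      if (pcie_get_readrq(dev) > 256)
              pcie_set_readrq(dev, 256);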

  Xilinx Versal CPM PCIe controller driver:

   - Fix overlapping bridge registers and 32-bit BAR addresses in DT
     binding (Thippeswamy Havalige)

  MicroSemi Switchtec management driver:

   - Make struct switchtec_class constant (Greg Kroah-Hartman)

  Miscellaneous:

   - Remove unused struct acpi_handle_node (Dr. David Alan Gilbert)

   - Add missing MODULE_DESCRIPTION() macros (Jeff Johnson)"

* tag 'pci-v6.11-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (154 commits)
  PCI: loongson: Enable MSI in LS7A Root Complex
  PCI: Extend ACS configurability
  PCI: Add missing bridge lock to pci_bus_lock()
  drm/vboxvideo: fix mapping leaks
  PCI: Add managed pcim_iomap_range()
  PCI: Remove legacy pcim_release()
  PCI: Add managed pcim_intx()
  PCI: vmd: Create domain symlink before pci_bus_add_devices()
  PCI: qcom: Prevent use of uninitialized data in qcom_pcie_suspend_noirq()
  PCI: qcom: Prevent potential error pointer dereference
  PCI: qcom: Fix missing error code in qcom_pcie_probe()
  PCI: Give pcim_set_mwi() its own devres cleanup callback
  PCI: Move struct pci_devres.pinned bit to struct pci_dev
  PCI: Remove struct pci_devres.enabled status bit
  PCI: Document hybrid devres hazards
  PCI: Add managed pcim_request_region()
  PCI: Deprecate pcim_iomap_table(), pcim_iomap_regions_request_all()
  PCI: Add managed partial-BAR request and map infrastructure
  PCI: Add devres helpers for iomap table
  PCI: Add and use devres helper for bit masks
  ...
Committed by Linus Torvalds on 2024-07-19 19:03:18 -07:00 (commit 3f386cb8ee).
104 changed files with 5202 additions and 1932 deletions.

File: Documentation/PCI/endpoint/pci-endpoint.rst

@ -172,8 +172,8 @@ by the PCI endpoint function driver.
* bind: ops to perform when a EPC device has been bound to EPF device
* unbind: ops to perform when a binding has been lost between a EPC
device and EPF device
* linkup: ops to perform when the EPC device has established a
connection with a host system
* add_cfs: optional ops to create function specific configfs
attributes
The PCI Function driver can then register the PCI EPF driver by using
pci_epf_register_driver().

File: Documentation/PCI/pciebus-howto.rst

@ -139,7 +139,7 @@ driver data structure.
static struct pcie_port_service_driver root_aerdrv = {
.name = (char *)device_name,
.id_table = &service_id[0],
.id_table = service_id,
.probe = aerdrv_load,
.remove = aerdrv_unload,

File: Documentation/admin-guide/kernel-parameters.txt

@ -4564,6 +4564,38 @@
bridges without forcing it upstream. Note:
this removes isolation between devices and
may put more devices in an IOMMU group.
config_acs=
Format:
<ACS flags>@<pci_dev>[; ...]
Specify one or more PCI devices (in the format
specified above) optionally prepended with flags
and separated by semicolons. The respective
capabilities will be enabled, disabled or
unchanged based on what is specified in
flags.
ACS Flags is defined as follows:
bit-0 : ACS Source Validation
bit-1 : ACS Translation Blocking
bit-2 : ACS P2P Request Redirect
bit-3 : ACS P2P Completion Redirect
bit-4 : ACS Upstream Forwarding
bit-5 : ACS P2P Egress Control
bit-6 : ACS Direct Translated P2P
Each bit can be marked as:
'0' force disabled
'1' force enabled
'x' unchanged
For example,
pci=config_acs=10x
would configure all devices that support
ACS to enable P2P Request Redirect, disable
Translation Blocking, and leave Source
Validation unchanged from whatever power-up
or firmware set it to.
Note: this may remove isolation between devices
and may put more devices in an IOMMU group.
force_floating [S390] Force usage of floating interrupts.
nomio [S390] Do not use MIO instructions.
norid [S390] ignore the RID field and force use of

File: Documentation/devicetree/bindings/pci/mediatek,mt7621-pcie.yaml

@ -13,6 +13,35 @@ description: |+
MediaTek MT7621 PCIe subsys supports a single Root Complex (RC)
with 3 Root Ports. Each Root Port supports a Gen1 1-lane Link
MT7621 PCIe HOST Topology
.-------.
| |
| CPU |
| |
'-------'
|
|
|
v
.------------------.
.-----------| HOST/PCI Bridge |------------.
| '------------------' | Type1
BUS0 | | | Access
v v v On Bus0
.-------------. .-------------. .-------------.
| VIRTUAL P2P | | VIRTUAL P2P | | VIRTUAL P2P |
| BUS0 | | BUS0 | | BUS0 |
| DEV0 | | DEV1 | | DEV2 |
'-------------' '-------------' '-------------'
Type0 | Type0 | Type0 |
Access BUS1 | Access BUS2| Access BUS3|
On Bus1 v On Bus2 v On Bus3 v
.----------. .----------. .----------.
| Device 0 | | Device 0 | | Device 0 |
| Func 0 | | Func 0 | | Func 0 |
'----------' '----------' '----------'
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#

File: Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml

@ -10,21 +10,13 @@ maintainers:
- Daire McNamara <daire.mcnamara@microchip.com>
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
- $ref: plda,xpressrich3-axi-common.yaml#
- $ref: /schemas/interrupt-controller/msi-controller.yaml#
properties:
compatible:
const: microchip,pcie-host-1.0 # PolarFire
reg:
maxItems: 2
reg-names:
items:
- const: cfg
- const: apb
clocks:
description:
Fabric Interface Controllers, FICs, are the interface between the FPGA
@ -52,18 +44,6 @@ properties:
items:
pattern: '^fic[0-3]$'
interrupts:
minItems: 1
items:
- description: PCIe host controller
- description: builtin MSI controller
interrupt-names:
minItems: 1
items:
- const: pcie
- const: msi
ranges:
minItems: 1
maxItems: 3
@ -72,39 +52,6 @@ properties:
minItems: 1
maxItems: 6
msi-controller:
description: Identifies the node as an MSI controller.
msi-parent:
description: MSI controller the device is capable of using.
interrupt-controller:
type: object
properties:
'#address-cells':
const: 0
'#interrupt-cells':
const: 1
interrupt-controller: true
required:
- '#address-cells'
- '#interrupt-cells'
- interrupt-controller
additionalProperties: false
required:
- reg
- reg-names
- "#interrupt-cells"
- interrupts
- interrupt-map-mask
- interrupt-map
- msi-controller
unevaluatedProperties: false
examples:

File: Documentation/devicetree/bindings/pci/plda,xpressrich3-axi-common.yaml (new file)

@ -0,0 +1,75 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/plda,xpressrich3-axi-common.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: PLDA XpressRICH PCIe host common properties
maintainers:
- Daire McNamara <daire.mcnamara@microchip.com>
- Kevin Xie <kevin.xie@starfivetech.com>
description:
Generic PLDA XpressRICH PCIe host common properties.
allOf:
- $ref: /schemas/pci/pci-host-bridge.yaml#
properties:
reg:
maxItems: 2
reg-names:
items:
- const: cfg
- const: apb
interrupts:
minItems: 1
items:
- description: PCIe host controller
- description: builtin MSI controller
interrupt-names:
minItems: 1
items:
- const: pcie
- const: msi
msi-controller:
description: Identifies the node as an MSI controller.
msi-parent:
description: MSI controller the device is capable of using.
interrupt-controller:
type: object
properties:
'#address-cells':
const: 0
'#interrupt-cells':
const: 1
interrupt-controller: true
required:
- '#address-cells'
- '#interrupt-cells'
- interrupt-controller
additionalProperties: false
required:
- reg
- reg-names
- interrupts
- msi-controller
- "#interrupt-cells"
- interrupt-map-mask
- interrupt-map
additionalProperties: true
...

File: Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml

@ -13,6 +13,7 @@ properties:
compatible:
oneOf:
- enum:
- qcom,sa8775p-pcie-ep
- qcom,sdx55-pcie-ep
- qcom,sm8450-pcie-ep
- items:
@ -20,6 +21,7 @@ properties:
- const: qcom,sdx55-pcie-ep
reg:
minItems: 6
items:
- description: Qualcomm-specific PARF configuration registers
- description: DesignWare PCIe registers
@ -27,8 +29,10 @@ properties:
- description: Address Translation Unit (ATU) registers
- description: Memory region used to map remote RC address space
- description: BAR memory region
- description: DMA register space
reg-names:
minItems: 6
items:
- const: parf
- const: dbi
@ -36,13 +40,14 @@ properties:
- const: atu
- const: addr_space
- const: mmio
- const: dma
clocks:
minItems: 7
minItems: 5
maxItems: 8
clock-names:
minItems: 7
minItems: 5
maxItems: 8
qcom,perst-regs:
@ -57,14 +62,18 @@ properties:
- description: Perst separation enable offset
interrupts:
minItems: 2
items:
- description: PCIe Global interrupt
- description: PCIe Doorbell interrupt
- description: DMA interrupt
interrupt-names:
minItems: 2
items:
- const: global
- const: doorbell
- const: dma
reset-gpios:
description: GPIO used as PERST# input signal
@ -125,6 +134,10 @@ allOf:
- qcom,sdx55-pcie-ep
then:
properties:
reg:
maxItems: 6
reg-names:
maxItems: 6
clocks:
items:
- description: PCIe Auxiliary clock
@ -143,6 +156,10 @@ allOf:
- const: slave_q2a
- const: sleep
- const: ref
interrupts:
maxItems: 2
interrupt-names:
maxItems: 2
- if:
properties:
@ -152,6 +169,10 @@ allOf:
- qcom,sm8450-pcie-ep
then:
properties:
reg:
maxItems: 6
reg-names:
maxItems: 6
clocks:
items:
- description: PCIe Auxiliary clock
@ -172,6 +193,45 @@ allOf:
- const: ref
- const: ddrss_sf_tbu
- const: aggre_noc_axi
interrupts:
maxItems: 2
interrupt-names:
maxItems: 2
- if:
properties:
compatible:
contains:
enum:
- qcom,sa8775p-pcie-ep
then:
properties:
reg:
minItems: 7
maxItems: 7
reg-names:
minItems: 7
maxItems: 7
clocks:
items:
- description: PCIe Auxiliary clock
- description: PCIe CFG AHB clock
- description: PCIe Master AXI clock
- description: PCIe Slave AXI clock
- description: PCIe Slave Q2A AXI clock
clock-names:
items:
- const: aux
- const: cfg
- const: bus_master
- const: bus_slave
- const: slave_q2a
interrupts:
minItems: 3
maxItems: 3
interrupt-names:
minItems: 3
maxItems: 3
unevaluatedProperties: false

File: (a Qualcomm SoC PCIe binding)

@ -69,6 +69,10 @@ properties:
- const: msi6
- const: msi7
operating-points-v2: true
opp-table:
type: object
resets:
maxItems: 1

File: Documentation/devicetree/bindings/pci/qcom,pcie-x1e80100.yaml

@ -19,11 +19,10 @@ properties:
const: qcom,pcie-x1e80100
reg:
minItems: 5
minItems: 6
maxItems: 6
reg-names:
minItems: 5
items:
- const: parf # Qualcomm specific registers
- const: dbi # DesignWare PCIe registers

File: Documentation/devicetree/bindings/pci/rockchip-dw-pcie-common.yaml (new file)

@ -0,0 +1,126 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie-common.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: DesignWare based PCIe RC/EP controller on Rockchip SoCs
maintainers:
- Shawn Lin <shawn.lin@rock-chips.com>
- Simon Xue <xxm@rock-chips.com>
- Heiko Stuebner <heiko@sntech.de>
description: |+
Generic properties for the DesignWare based PCIe RC/EP controller on Rockchip
SoCs.
properties:
clocks:
minItems: 5
items:
- description: AHB clock for PCIe master
- description: AHB clock for PCIe slave
- description: AHB clock for PCIe dbi
- description: APB clock for PCIe
- description: Auxiliary clock for PCIe
- description: PIPE clock
- description: Reference clock for PCIe
clock-names:
minItems: 5
items:
- const: aclk_mst
- const: aclk_slv
- const: aclk_dbi
- const: pclk
- const: aux
- const: pipe
- const: ref
interrupts:
minItems: 5
items:
- description:
Combined system interrupt, which is used to signal the following
interrupts - phy_link_up, dll_link_up, link_req_rst_not, hp_pme,
hp, hp_msi, link_auto_bw, link_auto_bw_msi, bw_mgt, bw_mgt_msi,
edma_wr, edma_rd, dpa_sub_upd, rbar_update, link_eq_req, ep_elbi_app
- description:
Combined PM interrupt, which is used to signal the following
interrupts - linkst_in_l1sub, linkst_in_l1, linkst_in_l2,
linkst_in_l0s, linkst_out_l1sub, linkst_out_l1, linkst_out_l2,
linkst_out_l0s, pm_dstate_update
- description:
Combined message interrupt, which is used to signal the following
interrupts - ven_msg, unlock_msg, ltr_msg, cfg_pme, cfg_pme_msi,
pm_pme, pm_to_ack, pm_turnoff, obff_idle, obff_obff, obff_cpu_active
- description:
Combined legacy interrupt, which is used to signal the following
interrupts - inta, intb, intc, intd, tx_inta, tx_intb, tx_intc,
tx_intd
- description:
Combined error interrupt, which is used to signal the following
interrupts - aer_rc_err, aer_rc_err_msi, rx_cpl_timeout,
tx_cpl_timeout, cor_err_sent, nf_err_sent, f_err_sent, cor_err_rx,
nf_err_rx, f_err_rx, radm_qoverflow
- description:
eDMA write channel 0 interrupt
- description:
eDMA write channel 1 interrupt
- description:
eDMA read channel 0 interrupt
- description:
eDMA read channel 1 interrupt
interrupt-names:
minItems: 5
items:
- const: sys
- const: pmc
- const: msg
- const: legacy
- const: err
- const: dma0
- const: dma1
- const: dma2
- const: dma3
num-lanes: true
phys:
maxItems: 1
phy-names:
const: pcie-phy
power-domains:
maxItems: 1
resets:
minItems: 1
maxItems: 2
reset-names:
oneOf:
- const: pipe
- items:
- const: pwr
- const: pipe
required:
- compatible
- reg
- reg-names
- clocks
- clock-names
- num-lanes
- phys
- phy-names
- power-domains
- resets
- reset-names
additionalProperties: true
...

File: Documentation/devicetree/bindings/pci/rockchip-dw-pcie-ep.yaml (new file)

@ -0,0 +1,95 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: DesignWare based PCIe Endpoint controller on Rockchip SoCs
maintainers:
- Niklas Cassel <cassel@kernel.org>
description: |+
RK3588 SoC PCIe Endpoint controller is based on the Synopsys DesignWare
PCIe IP and thus inherits all the common properties defined in
snps,dw-pcie-ep.yaml.
allOf:
- $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
- $ref: /schemas/pci/rockchip-dw-pcie-common.yaml#
properties:
compatible:
enum:
- rockchip,rk3568-pcie-ep
- rockchip,rk3588-pcie-ep
reg:
items:
- description: Data Bus Interface (DBI) registers
- description: Data Bus Interface (DBI) shadow registers
- description: Rockchip designed configuration registers
- description: Memory region used to map remote RC address space
- description: Internal Address Translation Unit (iATU) registers
reg-names:
items:
- const: dbi
- const: dbi2
- const: apb
- const: addr_space
- const: atu
required:
- interrupts
- interrupt-names
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/rockchip,rk3588-cru.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/power/rk3588-power.h>
#include <dt-bindings/reset/rockchip,rk3588-cru.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie3x4_ep: pcie-ep@fe150000 {
compatible = "rockchip,rk3588-pcie-ep";
reg = <0xa 0x40000000 0x0 0x00100000>,
<0xa 0x40100000 0x0 0x00100000>,
<0x0 0xfe150000 0x0 0x00010000>,
<0x9 0x00000000 0x0 0x40000000>,
<0xa 0x40300000 0x0 0x00100000>;
reg-names = "dbi", "dbi2", "apb", "addr_space", "atu";
clocks = <&cru ACLK_PCIE_4L_MSTR>, <&cru ACLK_PCIE_4L_SLV>,
<&cru ACLK_PCIE_4L_DBI>, <&cru PCLK_PCIE_4L>,
<&cru CLK_PCIE_AUX0>, <&cru CLK_PCIE4L_PIPE>;
clock-names = "aclk_mst", "aclk_slv",
"aclk_dbi", "pclk",
"aux", "pipe";
interrupts = <GIC_SPI 263 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 262 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 261 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 260 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 259 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 271 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 269 IRQ_TYPE_LEVEL_HIGH 0>,
<GIC_SPI 270 IRQ_TYPE_LEVEL_HIGH 0>;
interrupt-names = "sys", "pmc", "msg", "legacy", "err",
"dma0", "dma1", "dma2", "dma3";
max-link-speed = <3>;
num-lanes = <4>;
phys = <&pcie30phy>;
phy-names = "pcie-phy";
power-domains = <&power RK3588_PD_PCIE>;
resets = <&cru SRST_PCIE0_POWER_UP>, <&cru SRST_P_PCIE0>;
reset-names = "pwr", "pipe";
};
};
...

File: Documentation/devicetree/bindings/pci/rockchip-dw-pcie.yaml

@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: DesignWare based PCIe controller on Rockchip SoCs
title: DesignWare based PCIe Root Complex controller on Rockchip SoCs
maintainers:
- Shawn Lin <shawn.lin@rock-chips.com>
@ -12,12 +12,13 @@ maintainers:
- Heiko Stuebner <heiko@sntech.de>
description: |+
RK3568 SoC PCIe host controller is based on the Synopsys DesignWare
RK3568 SoC PCIe Root Complex controller is based on the Synopsys DesignWare
PCIe IP and thus inherits all the common properties defined in
snps,dw-pcie.yaml.
allOf:
- $ref: /schemas/pci/snps,dw-pcie.yaml#
- $ref: /schemas/pci/rockchip-dw-pcie-common.yaml#
properties:
compatible:
@ -40,61 +41,6 @@ properties:
- const: apb
- const: config
clocks:
minItems: 5
items:
- description: AHB clock for PCIe master
- description: AHB clock for PCIe slave
- description: AHB clock for PCIe dbi
- description: APB clock for PCIe
- description: Auxiliary clock for PCIe
- description: PIPE clock
- description: Reference clock for PCIe
clock-names:
minItems: 5
items:
- const: aclk_mst
- const: aclk_slv
- const: aclk_dbi
- const: pclk
- const: aux
- const: pipe
- const: ref
interrupts:
items:
- description:
Combined system interrupt, which is used to signal the following
interrupts - phy_link_up, dll_link_up, link_req_rst_not, hp_pme,
hp, hp_msi, link_auto_bw, link_auto_bw_msi, bw_mgt, bw_mgt_msi,
edma_wr, edma_rd, dpa_sub_upd, rbar_update, link_eq_req, ep_elbi_app
- description:
Combined PM interrupt, which is used to signal the following
interrupts - linkst_in_l1sub, linkst_in_l1, linkst_in_l2,
linkst_in_l0s, linkst_out_l1sub, linkst_out_l1, linkst_out_l2,
linkst_out_l0s, pm_dstate_update
- description:
Combined message interrupt, which is used to signal the following
interrupts - ven_msg, unlock_msg, ltr_msg, cfg_pme, cfg_pme_msi,
pm_pme, pm_to_ack, pm_turnoff, obff_idle, obff_obff, obff_cpu_active
- description:
Combined legacy interrupt, which is used to signal the following
interrupts - inta, intb, intc, intd
- description:
Combined error interrupt, which is used to signal the following
interrupts - aer_rc_err, aer_rc_err_msi, rx_cpl_timeout,
tx_cpl_timeout, cor_err_sent, nf_err_sent, f_err_sent, cor_err_rx,
nf_err_rx, f_err_rx, radm_qoverflow
interrupt-names:
items:
- const: sys
- const: pmc
- const: msg
- const: legacy
- const: err
legacy-interrupt-controller:
description: Interrupt controller node for handling legacy PCI interrupts.
type: object
@ -119,47 +65,14 @@ properties:
msi-map: true
num-lanes: true
phys:
maxItems: 1
phy-names:
const: pcie-phy
power-domains:
maxItems: 1
ranges:
minItems: 2
maxItems: 3
resets:
minItems: 1
maxItems: 2
reset-names:
oneOf:
- const: pipe
- items:
- const: pwr
- const: pipe
vpcie3v3-supply: true
required:
- compatible
- reg
- reg-names
- clocks
- clock-names
- msi-map
- num-lanes
- phys
- phy-names
- power-domains
- resets
- reset-names
unevaluatedProperties: false

File: Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml

@ -100,7 +100,7 @@ properties:
for new bindings.
oneOf:
- description: See native 'elbi/app' CSR region for details.
enum: [ link, appl ]
enum: [ apb, link, appl ]
- description: See native 'atu' CSR region for details.
enum: [ atu_dma ]
allOf:
@ -151,12 +151,21 @@ properties:
Application-specific IRQ raised depending on the vendor-specific
events basis.
const: app
- description:
Interrupts triggered when the controller itself (in Endpoint mode)
has sent an Assert_INT{A,B,C,D}/Deassert_INT{A,B,C,D} message to
the upstream device.
pattern: "^tx_int(a|b|c|d)$"
- description:
Combined interrupt signal raised when the controller has sent an
Assert_INT{A,B,C,D} message. See "^tx_int(a|b|c|d)$" for details.
const: legacy
- description:
Vendor-specific IRQ names. Consider using the generic names above
for new bindings.
oneOf:
- description: See native "app" IRQ for details
enum: [ intr ]
enum: [ intr, sys, pmc, msg, err ]
max-functions:
maximum: 32

File: Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml (new file)

@ -0,0 +1,120 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/starfive,jh7110-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: StarFive JH7110 PCIe host controller
maintainers:
- Kevin Xie <kevin.xie@starfivetech.com>
allOf:
- $ref: plda,xpressrich3-axi-common.yaml#
properties:
compatible:
const: starfive,jh7110-pcie
clocks:
items:
- description: NOC bus clock
- description: Transport layer clock
- description: AXI MST0 clock
- description: APB clock
clock-names:
items:
- const: noc
- const: tl
- const: axi_mst0
- const: apb
resets:
items:
- description: AXI MST0 reset
- description: AXI SLAVE0 reset
- description: AXI SLAVE reset
- description: PCIE BRIDGE reset
- description: PCIE CORE reset
- description: PCIE APB reset
reset-names:
items:
- const: mst0
- const: slv0
- const: slv
- const: brg
- const: core
- const: apb
starfive,stg-syscon:
$ref: /schemas/types.yaml#/definitions/phandle-array
description:
The phandle to System Register Controller syscon node.
perst-gpios:
description: GPIO controlled connection to PERST# signal
maxItems: 1
phys:
description:
Specified PHY is attached to PCIe controller.
maxItems: 1
required:
- clocks
- resets
- starfive,stg-syscon
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
soc {
#address-cells = <2>;
#size-cells = <2>;
pcie@940000000 {
compatible = "starfive,jh7110-pcie";
reg = <0x9 0x40000000 0x0 0x10000000>,
<0x0 0x2b000000 0x0 0x1000000>;
reg-names = "cfg", "apb";
#address-cells = <3>;
#size-cells = <2>;
#interrupt-cells = <1>;
device_type = "pci";
ranges = <0x82000000 0x0 0x30000000 0x0 0x30000000 0x0 0x08000000>,
<0xc3000000 0x9 0x00000000 0x9 0x00000000 0x0 0x40000000>;
starfive,stg-syscon = <&stg_syscon>;
bus-range = <0x0 0xff>;
interrupt-parent = <&plic>;
interrupts = <56>;
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc0 0x1>,
<0x0 0x0 0x0 0x2 &pcie_intc0 0x2>,
<0x0 0x0 0x0 0x3 &pcie_intc0 0x3>,
<0x0 0x0 0x0 0x4 &pcie_intc0 0x4>;
msi-controller;
clocks = <&syscrg 86>,
<&stgcrg 10>,
<&stgcrg 8>,
<&stgcrg 9>;
clock-names = "noc", "tl", "axi_mst0", "apb";
resets = <&stgcrg 11>,
<&stgcrg 12>,
<&stgcrg 13>,
<&stgcrg 14>,
<&stgcrg 15>,
<&stgcrg 16>;
perst-gpios = <&gpios 26 GPIO_ACTIVE_LOW>;
phys = <&pciephy0>;
pcie_intc0: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
};
};
};

File: Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml

@ -92,7 +92,7 @@ examples:
<0 0 0 3 &pcie_intc_0 2>,
<0 0 0 4 &pcie_intc_0 3>;
bus-range = <0x00 0xff>;
ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>,
ranges = <0x02000000 0x0 0xe0010000 0x0 0xe0010000 0x0 0x10000000>,
<0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>;
msi-map = <0x0 &its_gic 0x0 0x10000>;
reg = <0x0 0xfca10000 0x0 0x1000>,

File: Documentation/translations/zh_CN/PCI/pciebus-howto.rst

@ -124,7 +124,7 @@ pcie_port_service_unregister replaces pci_unregister_driver of the Linux driver model
static struct pcie_port_service_driver root_aerdrv = {
.name = (char *)device_name,
.id_table = &service_id[0],
.id_table = service_id,
.probe = aerdrv_load,
.remove = aerdrv_unload,

File: MAINTAINERS

@ -17456,6 +17456,14 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
PCI DRIVER FOR PLDA PCIE IP
M: Daire McNamara <daire.mcnamara@microchip.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/plda,xpressrich3-axi-common.yaml
F: drivers/pci/controller/plda/pcie-plda-host.c
F: drivers/pci/controller/plda/pcie-plda.h
PCI DRIVER FOR RENESAS R-CAR
M: Marek Vasut <marek.vasut+renesas@gmail.com>
M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
@ -17694,7 +17702,7 @@ M: Daire McNamara <daire.mcnamara@microchip.com>
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/microchip*
F: drivers/pci/controller/*microchip*
F: drivers/pci/controller/plda/*microchip*
PCIE DRIVER FOR QUALCOMM MSM
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
@ -17724,6 +17732,13 @@ L: linux-pci@vger.kernel.org
S: Maintained
F: drivers/pci/controller/dwc/*spear*
PCIE DRIVER FOR STARFIVE JH71x0
M: Kevin Xie <kevin.xie@starfivetech.com>
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml
F: drivers/pci/controller/plda/pcie-starfive.c
PCIE ENDPOINT DRIVER FOR QUALCOMM
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
L: linux-pci@vger.kernel.org
@ -19564,7 +19579,7 @@ F: drivers/clk/microchip/clk-mpfs*.c
F: drivers/firmware/microchip/mpfs-auto-update.c
F: drivers/i2c/busses/i2c-microchip-corei2c.c
F: drivers/mailbox/mailbox-mpfs.c
F: drivers/pci/controller/pcie-microchip-host.c
F: drivers/pci/controller/plda/pcie-microchip-host.c
F: drivers/pwm/pwm-microchip-core.c
F: drivers/reset/reset-mpfs.c
F: drivers/rtc/rtc-mpfs.c

File: drivers/acpi/pci_root.c

@ -293,11 +293,6 @@ struct acpi_pci_root *acpi_pci_find_root(acpi_handle handle)
}
EXPORT_SYMBOL_GPL(acpi_pci_find_root);
struct acpi_handle_node {
struct list_head node;
acpi_handle handle;
};
/**
* acpi_get_pci_dev - convert ACPI CA handle to struct pci_dev
* @handle: the handle in question
@ -1008,7 +1003,6 @@ struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
int node = acpi_get_node(device->handle);
struct pci_bus *bus;
struct pci_host_bridge *host_bridge;
union acpi_object *obj;
info->root = root;
info->bridge = device;
@ -1050,17 +1044,6 @@ struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
if (!(root->osc_ext_control_set & OSC_CXL_ERROR_REPORTING_CONTROL))
host_bridge->native_cxl_error = 0;
/*
* Evaluate the "PCI Boot Configuration" _DSM Function. If it
* exists and returns 0, we must preserve any PCI resource
* assignments made by firmware for this host bridge.
*/
obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 1,
DSM_PCI_PRESERVE_BOOT_CONFIG, NULL, ACPI_TYPE_INTEGER);
if (obj && obj->integer.value == 0)
host_bridge->preserve_config = 1;
ACPI_FREE(obj);
acpi_dev_power_up_children_with_adr(device);
pci_scan_child_bus(bus);

File: drivers/gpu/drm/vboxvideo/vbox_main.c

@ -42,12 +42,11 @@ static int vbox_accel_init(struct vbox_private *vbox)
/* Take a command buffer for each screen from the end of usable VRAM. */
vbox->available_vram_size -= vbox->num_crtcs * VBVA_MIN_BUFFER_SIZE;
vbox->vbva_buffers = pci_iomap_range(pdev, 0,
vbox->available_vram_size,
vbox->num_crtcs *
VBVA_MIN_BUFFER_SIZE);
if (!vbox->vbva_buffers)
return -ENOMEM;
vbox->vbva_buffers = pcim_iomap_range(
pdev, 0, vbox->available_vram_size,
vbox->num_crtcs * VBVA_MIN_BUFFER_SIZE);
if (IS_ERR(vbox->vbva_buffers))
return PTR_ERR(vbox->vbva_buffers);
for (i = 0; i < vbox->num_crtcs; ++i) {
vbva_setup_buffer_context(&vbox->vbva_info[i],
@ -116,11 +115,10 @@ int vbox_hw_init(struct vbox_private *vbox)
DRM_INFO("VRAM %08x\n", vbox->full_vram_size);
/* Map guest-heap at end of vram */
vbox->guest_heap =
pci_iomap_range(pdev, 0, GUEST_HEAP_OFFSET(vbox),
GUEST_HEAP_SIZE);
if (!vbox->guest_heap)
return -ENOMEM;
vbox->guest_heap = pcim_iomap_range(pdev, 0,
GUEST_HEAP_OFFSET(vbox), GUEST_HEAP_SIZE);
if (IS_ERR(vbox->guest_heap))
return PTR_ERR(vbox->guest_heap);
/* Create guest-heap mem-pool use 2^4 = 16 byte chunks */
vbox->guest_pool = devm_gen_pool_create(vbox->ddev.dev, 4, -1,

File: drivers/misc/pci_endpoint_test.c

@ -7,6 +7,7 @@
*/
#include <linux/crc32.h>
#include <linux/cleanup.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/io.h>
@ -84,6 +85,9 @@
#define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025
#define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031
#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
#define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
static DEFINE_IDA(pci_endpoint_test_ida);
#define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
@ -140,18 +144,6 @@ static inline void pci_endpoint_test_writel(struct pci_endpoint_test *test,
writel(value, test->base + offset);
}
static inline u32 pci_endpoint_test_bar_readl(struct pci_endpoint_test *test,
int bar, int offset)
{
return readl(test->bar[bar] + offset);
}
static inline void pci_endpoint_test_bar_writel(struct pci_endpoint_test *test,
int bar, u32 offset, u32 value)
{
writel(value, test->bar[bar] + offset);
}
static irqreturn_t pci_endpoint_test_irqhandler(int irq, void *dev_id)
{
struct pci_endpoint_test *test = dev_id;
@ -272,31 +264,60 @@ static const u32 bar_test_pattern[] = {
0xA5A5A5A5,
};
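
/*
 * Write a chunk of the BAR and read it back using memcpy_toio()/
 * memcpy_fromio() instead of one readl()/writel() per word; the bulk
 * copies are much faster across the PCI link.
 */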
static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test,
enum pci_barno barno, int offset,
void *write_buf, void *read_buf,
int size)
{
memset(write_buf, bar_test_pattern[barno], size);
memcpy_toio(test->bar[barno] + offset, write_buf, size);
memcpy_fromio(read_buf, test->bar[barno] + offset, size);
return memcmp(write_buf, read_buf, size);
}
static bool pci_endpoint_test_bar(struct pci_endpoint_test *test,
enum pci_barno barno)
{
int j;
u32 val;
int size;
int j, bar_size, buf_size, iters, remain;
void *write_buf __free(kfree) = NULL;
void *read_buf __free(kfree) = NULL;
struct pci_dev *pdev = test->pdev;
if (!test->bar[barno])
return false;
size = pci_resource_len(pdev, barno);
bar_size = pci_resource_len(pdev, barno);
if (barno == test->test_reg_bar)
size = 0x4;
bar_size = 0x4;
for (j = 0; j < size; j += 4)
pci_endpoint_test_bar_writel(test, barno, j,
bar_test_pattern[barno]);
/*
* Allocate a buffer of max size 1MB, and reuse that buffer while
* iterating over the whole BAR size (which might be much larger).
*/
buf_size = min(SZ_1M, bar_size);
for (j = 0; j < size; j += 4) {
val = pci_endpoint_test_bar_readl(test, barno, j);
if (val != bar_test_pattern[barno])
write_buf = kmalloc(buf_size, GFP_KERNEL);
if (!write_buf)
return false;
read_buf = kmalloc(buf_size, GFP_KERNEL);
if (!read_buf)
return false;
iters = bar_size / buf_size;
for (j = 0; j < iters; j++)
if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * j,
write_buf, read_buf, buf_size))
return false;
remain = bar_size % buf_size;
if (remain)
if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * iters,
write_buf, read_buf, remain))
return false;
}
return true;
}
@ -824,11 +845,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
init_completion(&test->irq_raised);
mutex_init(&test->mutex);
if ((dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)) != 0) &&
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)) != 0) {
dev_err(dev, "Cannot set DMA mask\n");
return -EINVAL;
}
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
err = pci_enable_device(pdev);
if (err) {
@ -980,6 +997,15 @@ static const struct pci_endpoint_test_data j721e_data = {
.irq_type = IRQ_TYPE_MSI,
};
static const struct pci_endpoint_test_data rk3588_data = {
.alignment = SZ_64K,
.irq_type = IRQ_TYPE_MSI,
};
/*
* If the controller's Vendor/Device ID are programmable, you may be able to
* use one of the existing entries for testing instead of adding a new one.
*/
static const struct pci_device_id pci_endpoint_test_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x),
.driver_data = (kernel_ulong_t)&default_data,
@ -1017,6 +1043,9 @@ static const struct pci_device_id pci_endpoint_test_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721S2),
.driver_data = (kernel_ulong_t)&j721e_data,
},
{ PCI_DEVICE(PCI_VENDOR_ID_ROCKCHIP, PCI_DEVICE_ID_ROCKCHIP_RK3588),
.driver_data = (kernel_ulong_t)&rk3588_data,
},
{ }
};
MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);

File: drivers/ntb/hw/mscc/ntb_hw_switchtec.c

@ -1565,7 +1565,7 @@ static struct class_interface switchtec_interface = {
static int __init switchtec_ntb_init(void)
{
switchtec_interface.class = switchtec_class;
switchtec_interface.class = &switchtec_class;
return class_interface_register(&switchtec_interface);
}
module_init(switchtec_ntb_init);

File: drivers/pci/bus.c

@ -177,10 +177,7 @@ static void pci_clip_resource_to_region(struct pci_bus *bus,
static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,
resource_size_t size, resource_size_t align,
resource_size_t min, unsigned long type_mask,
resource_size_t (*alignf)(void *,
const struct resource *,
resource_size_t,
resource_size_t),
resource_alignf alignf,
void *alignf_data,
struct pci_bus_region *region)
{
@ -251,10 +248,7 @@ static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res,
int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
resource_size_t size, resource_size_t align,
resource_size_t min, unsigned long type_mask,
resource_size_t (*alignf)(void *,
const struct resource *,
resource_size_t,
resource_size_t),
resource_alignf alignf,
void *alignf_data)
{
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT

File: drivers/pci/controller/Kconfig

@ -215,14 +215,6 @@ config PCIE_MT7621
help
This selects a driver for the MediaTek MT7621 PCIe Controller.
config PCIE_MICROCHIP_HOST
tristate "Microchip AXI PCIe controller"
depends on PCI_MSI && OF
select PCI_HOST_COMMON
help
Say Y here if you want kernel to support the Microchip AXI PCIe
Host Bridge driver.
config PCI_HYPERV_INTERFACE
tristate "Microsoft Hyper-V PCI Interface"
depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI
@ -356,4 +348,5 @@ config PCIE_XILINX_CPM
source "drivers/pci/controller/cadence/Kconfig"
source "drivers/pci/controller/dwc/Kconfig"
source "drivers/pci/controller/mobiveil/Kconfig"
source "drivers/pci/controller/plda/Kconfig"
endmenu

File: drivers/pci/controller/Makefile

@ -33,7 +33,6 @@ obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
obj-$(CONFIG_PCIE_MEDIATEK_GEN3) += pcie-mediatek-gen3.o
obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
obj-$(CONFIG_VMD) += vmd.o
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
@ -44,6 +43,7 @@ obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
# pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
obj-y += dwc/
obj-y += mobiveil/
obj-y += plda/
# The following drivers are for devices that use the generic ACPI

File: drivers/pci/controller/dwc/Kconfig

@ -311,16 +311,30 @@ config PCIE_RCAR_GEN4_EP
SoCs. To compile this driver as a module, choose M here: the module
will be called pcie-rcar-gen4.ko. This uses the DesignWare core.
config PCIE_ROCKCHIP_DW
bool
config PCIE_ROCKCHIP_DW_HOST
bool "Rockchip DesignWare PCIe controller"
select PCIE_DW
select PCIE_DW_HOST
bool "Rockchip DesignWare PCIe controller (host mode)"
depends on PCI_MSI
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
select PCIE_DW_HOST
select PCIE_ROCKCHIP_DW
help
Enables support for the DesignWare PCIe controller in the
Rockchip SoC except RK3399.
Rockchip SoC (except RK3399) to work in host mode.
config PCIE_ROCKCHIP_DW_EP
bool "Rockchip DesignWare PCIe controller (endpoint mode)"
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCIE_ROCKCHIP_DW
help
Enables support for the DesignWare PCIe controller in the
Rockchip SoC (except RK3399) to work in endpoint mode.
config PCI_EXYNOS
tristate "Samsung Exynos PCIe controller"

File: drivers/pci/controller/dwc/Makefile

@ -16,7 +16,7 @@ obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o
obj-$(CONFIG_PCIE_ROCKCHIP_DW) += pcie-dw-rockchip.o
obj-$(CONFIG_PCIE_INTEL_GW) += pcie-intel-gw.o
obj-$(CONFIG_PCIE_KEEMBAY) += pcie-keembay.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o

File: drivers/pci/controller/dwc/pci-dra7xx.c

@ -13,11 +13,11 @@
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
@ -113,9 +113,9 @@ static inline void dra7xx_pcie_writel(struct dra7xx_pcie *pcie, u32 offset,
writel(value, pcie->base + offset);
}
static u64 dra7xx_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr)
static u64 dra7xx_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr)
{
return pci_addr & DRA7XX_CPU_TO_BUS_ADDR;
return cpu_addr & DRA7XX_CPU_TO_BUS_ADDR;
}
static int dra7xx_pcie_link_up(struct dw_pcie *pci)
@ -474,7 +474,7 @@ static int dra7xx_add_pcie_ep(struct dra7xx_pcie *dra7xx,
return ret;
}
dw_pcie_ep_init_notify(ep);
pci_epc_init_notify(ep->epc);
return 0;
}

File: drivers/pci/controller/dwc/pci-exynos.c

@ -54,43 +54,11 @@
struct exynos_pcie {
struct dw_pcie pci;
void __iomem *elbi_base;
struct clk *clk;
struct clk *bus_clk;
struct clk_bulk_data *clks;
struct phy *phy;
struct regulator_bulk_data supplies[2];
};
static int exynos_pcie_init_clk_resources(struct exynos_pcie *ep)
{
struct device *dev = ep->pci.dev;
int ret;
ret = clk_prepare_enable(ep->clk);
if (ret) {
dev_err(dev, "cannot enable pcie rc clock");
return ret;
}
ret = clk_prepare_enable(ep->bus_clk);
if (ret) {
dev_err(dev, "cannot enable pcie bus clock");
goto err_bus_clk;
}
return 0;
err_bus_clk:
clk_disable_unprepare(ep->clk);
return ret;
}
static void exynos_pcie_deinit_clk_resources(struct exynos_pcie *ep)
{
clk_disable_unprepare(ep->bus_clk);
clk_disable_unprepare(ep->clk);
}
static void exynos_pcie_writel(void __iomem *base, u32 val, u32 reg)
{
writel(val, base + reg);
@ -332,17 +300,9 @@ static int exynos_pcie_probe(struct platform_device *pdev)
if (IS_ERR(ep->elbi_base))
return PTR_ERR(ep->elbi_base);
ep->clk = devm_clk_get(dev, "pcie");
if (IS_ERR(ep->clk)) {
dev_err(dev, "Failed to get pcie rc clock\n");
return PTR_ERR(ep->clk);
}
ep->bus_clk = devm_clk_get(dev, "pcie_bus");
if (IS_ERR(ep->bus_clk)) {
dev_err(dev, "Failed to get pcie bus clock\n");
return PTR_ERR(ep->bus_clk);
}
ret = devm_clk_bulk_get_all_enable(dev, &ep->clks);
if (ret < 0)
return ret;
ep->supplies[0].supply = "vdd18";
ep->supplies[1].supply = "vdd10";
@ -351,10 +311,6 @@ static int exynos_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = exynos_pcie_init_clk_resources(ep);
if (ret)
return ret;
ret = regulator_bulk_enable(ARRAY_SIZE(ep->supplies), ep->supplies);
if (ret)
return ret;
@ -369,7 +325,6 @@ static int exynos_pcie_probe(struct platform_device *pdev)
fail_probe:
phy_exit(ep->phy);
exynos_pcie_deinit_clk_resources(ep);
regulator_bulk_disable(ARRAY_SIZE(ep->supplies), ep->supplies);
return ret;
@ -383,7 +338,6 @@ static void exynos_pcie_remove(struct platform_device *pdev)
exynos_pcie_assert_core_reset(ep);
phy_power_off(ep->phy);
phy_exit(ep->phy);
exynos_pcie_deinit_clk_resources(ep);
regulator_bulk_disable(ARRAY_SIZE(ep->supplies), ep->supplies);
}
@ -437,5 +391,6 @@ static struct platform_driver exynos_pcie_driver = {
},
};
module_platform_driver(exynos_pcie_driver);
MODULE_DESCRIPTION("Samsung Exynos PCIe host controller driver");
MODULE_LICENSE("GPL v2");
MODULE_DEVICE_TABLE(of, exynos_pcie_of_match);

File: drivers/pci/controller/dwc/pci-imx6.c

@ -11,14 +11,13 @@
#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
#include <linux/mfd/syscon/imx7-iomuxc-gpr.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/of_address.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
@ -107,8 +106,7 @@ struct imx6_pcie_drvdata {
struct imx6_pcie {
struct dw_pcie *pci;
int reset_gpio;
bool gpio_active_high;
struct gpio_desc *reset_gpiod;
bool link_is_up;
struct clk_bulk_data clks[IMX6_PCIE_MAX_CLKS];
struct regmap *iomuxc_gpr;
@ -721,9 +719,7 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
}
/* Some boards don't have PCIe reset GPIO. */
if (gpio_is_valid(imx6_pcie->reset_gpio))
gpio_set_value_cansleep(imx6_pcie->reset_gpio,
imx6_pcie->gpio_active_high);
gpiod_set_value_cansleep(imx6_pcie->reset_gpiod, 1);
}
static int imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
@ -771,10 +767,9 @@ static int imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
}
/* Some boards don't have PCIe reset GPIO. */
if (gpio_is_valid(imx6_pcie->reset_gpio)) {
if (imx6_pcie->reset_gpiod) {
msleep(100);
gpio_set_value_cansleep(imx6_pcie->reset_gpio,
!imx6_pcie->gpio_active_high);
gpiod_set_value_cansleep(imx6_pcie->reset_gpiod, 0);
/* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
msleep(100);
}
@ -1131,7 +1126,7 @@ static int imx6_add_pcie_ep(struct imx6_pcie *imx6_pcie,
return ret;
}
dw_pcie_ep_init_notify(ep);
pci_epc_init_notify(ep->epc);
/* Start LTSSM. */
imx6_pcie_ltssm_enable(dev);
@ -1285,22 +1280,11 @@ static int imx6_pcie_probe(struct platform_device *pdev)
return PTR_ERR(pci->dbi_base);
/* Fetch GPIOs */
imx6_pcie->reset_gpio = of_get_named_gpio(node, "reset-gpio", 0);
imx6_pcie->gpio_active_high = of_property_read_bool(node,
"reset-gpio-active-high");
if (gpio_is_valid(imx6_pcie->reset_gpio)) {
ret = devm_gpio_request_one(dev, imx6_pcie->reset_gpio,
imx6_pcie->gpio_active_high ?
GPIOF_OUT_INIT_HIGH :
GPIOF_OUT_INIT_LOW,
"PCIe reset");
if (ret) {
dev_err(dev, "unable to get reset gpio\n");
return ret;
}
} else if (imx6_pcie->reset_gpio == -EPROBE_DEFER) {
return imx6_pcie->reset_gpio;
}
imx6_pcie->reset_gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(imx6_pcie->reset_gpiod))
return dev_err_probe(dev, PTR_ERR(imx6_pcie->reset_gpiod),
"unable to get reset gpio\n");
gpiod_set_consumer_name(imx6_pcie->reset_gpiod, "PCIe reset");
if (imx6_pcie->drvdata->clks_cnt >= IMX6_PCIE_MAX_CLKS)
return dev_err_probe(dev, -ENOMEM, "clks_cnt is too big\n");

File: drivers/pci/controller/dwc/pci-keystone.c

@ -34,6 +34,11 @@
#define PCIE_DEVICEID_SHIFT 16
/* Application registers */
#define PID 0x000
#define RTL GENMASK(15, 11)
#define RTL_SHIFT 11
#define AM6_PCI_PG1_RTL_VER 0x15
#define CMD_STATUS 0x004
#define LTSSM_EN_VAL BIT(0)
#define OB_XLAT_EN_VAL BIT(1)
@ -104,6 +109,8 @@
#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
#define PCI_DEVICE_ID_TI_AM654X 0xb00c
struct ks_pcie_of_data {
enum dw_pcie_device_mode mode;
const struct dw_pcie_host_ops *host_ops;
@ -245,8 +252,68 @@ static struct irq_chip ks_pcie_msi_irq_chip = {
.irq_unmask = ks_pcie_msi_unmask,
};
/**
* ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers
* @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
* PCIe host controller driver information.
*
* Since modification of dbi_cs2 involves different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val |= DBI_CS2;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
do {
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
} while (!(val & DBI_CS2));
}
/**
* ks_pcie_clear_dbi_mode() - Disable DBI mode
* @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
* PCIe host controller driver information.
*
* Since modification of dbi_cs2 involves different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val &= ~DBI_CS2;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
do {
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
} while (val & DBI_CS2);
}
static int ks_pcie_msi_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
/* Configure and set up BAR0 */
ks_pcie_set_dbi_mode(ks_pcie);
/* Enable BAR0 */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
ks_pcie_clear_dbi_mode(ks_pcie);
/*
* For BAR0, just setting bus address for inbound writes (MSI) should
* be sufficient. Use physical address to avoid any conflicts.
*/
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
pp->msi_irq_chip = &ks_pcie_msi_irq_chip;
return dw_pcie_allocate_domains(pp);
}
@ -340,59 +407,22 @@ static const struct irq_domain_ops ks_pcie_intx_irq_domain_ops = {
.xlate = irq_domain_xlate_onetwocell,
};
/**
* ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers
* @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
* PCIe host controller driver information.
*
* Since modification of dbi_cs2 involves different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val |= DBI_CS2;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
do {
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
} while (!(val & DBI_CS2));
}
/**
* ks_pcie_clear_dbi_mode() - Disable DBI mode
* @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone
* PCIe host controller driver information.
*
* Since modification of dbi_cs2 involves different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val &= ~DBI_CS2;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
do {
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
} while (val & DBI_CS2);
}
static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
static int ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
{
u32 val;
u32 num_viewport = ks_pcie->num_viewport;
struct dw_pcie *pci = ks_pcie->pci;
struct dw_pcie_rp *pp = &pci->pp;
u64 start, end;
struct resource_entry *entry;
struct resource *mem;
u64 start, end;
int i;
mem = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM)->res;
entry = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM);
if (!entry)
return -ENODEV;
mem = entry->res;
start = mem->start;
end = mem->end;
@@ -403,7 +433,7 @@ static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
ks_pcie_clear_dbi_mode(ks_pcie);
if (ks_pcie->is_am6)
return;
return 0;
val = ilog2(OB_WIN_SIZE);
ks_pcie_app_writel(ks_pcie, OB_SIZE, val);
@@ -420,6 +450,8 @@ static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val |= OB_XLAT_EN_VAL;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
return 0;
}
static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
@@ -445,44 +477,10 @@ static struct pci_ops ks_child_pcie_ops = {
.write = pci_generic_config_write,
};
/**
* ks_pcie_v3_65_add_bus() - keystone add_bus post initialization
* @bus: A pointer to the PCI bus structure.
*
* This sets BAR0 to enable inbound access for MSI_IRQ register
*/
static int ks_pcie_v3_65_add_bus(struct pci_bus *bus)
{
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
if (!pci_is_root_bus(bus))
return 0;
/* Configure and set up BAR0 */
ks_pcie_set_dbi_mode(ks_pcie);
/* Enable BAR0 */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
ks_pcie_clear_dbi_mode(ks_pcie);
/*
* For BAR0, just setting bus address for inbound writes (MSI) should
* be sufficient. Use physical address to avoid any conflicts.
*/
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
return 0;
}
static struct pci_ops ks_pcie_ops = {
.map_bus = dw_pcie_own_conf_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
.add_bus = ks_pcie_v3_65_add_bus,
};
/**
@@ -525,7 +523,11 @@ static int ks_pcie_start_link(struct dw_pcie *pci)
static void ks_pcie_quirk(struct pci_dev *dev)
{
struct pci_bus *bus = dev->bus;
struct keystone_pcie *ks_pcie;
struct device *bridge_dev;
struct pci_dev *bridge;
u32 val;
static const struct pci_device_id rc_pci_devids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
.class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
@@ -537,6 +539,11 @@ static void ks_pcie_quirk(struct pci_dev *dev)
.class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, },
{ 0, },
};
static const struct pci_device_id am6_pci_devids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654X),
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
{ 0, },
};
if (pci_is_root_bus(bus))
bridge = dev;
@@ -558,10 +565,36 @@ static void ks_pcie_quirk(struct pci_dev *dev)
*/
if (pci_match_id(rc_pci_devids, bridge)) {
if (pcie_get_readrq(dev) > 256) {
dev_info(&dev->dev, "limiting MRRS to 256\n");
dev_info(&dev->dev, "limiting MRRS to 256 bytes\n");
pcie_set_readrq(dev, 256);
}
}
/*
* Memory transactions fail with PCI controller in AM654 PG1.0
* when MRRS is set to more than 128 bytes. Force the MRRS to
* 128 bytes in all downstream devices.
*/
if (pci_match_id(am6_pci_devids, bridge)) {
bridge_dev = pci_get_host_bridge_device(dev);
if (!bridge_dev || !bridge_dev->parent)
return;
ks_pcie = dev_get_drvdata(bridge_dev->parent);
if (!ks_pcie)
return;
val = ks_pcie_app_readl(ks_pcie, PID);
val &= RTL;
val >>= RTL_SHIFT;
if (val != AM6_PCI_PG1_RTL_VER)
return;
if (pcie_get_readrq(dev) > 128) {
dev_info(&dev->dev, "limiting MRRS to 128 bytes\n");
pcie_set_readrq(dev, 128);
}
}
}
DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk);
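As an aside, the mask-and-shift extraction of the RTL revision field in the quirk above is equivalent to the kernel's bitfield helper; a minimal sketch (the helper function is hypothetical, only RTL and RTL_SHIFT come from this driver):

#include <linux/bitfield.h>

/* Equivalent to: val &= RTL; val >>= RTL_SHIFT; */
static u32 am6_pcie_rtl_ver(u32 pid)
{
	return FIELD_GET(RTL, pid);
}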
@@ -814,7 +847,10 @@ static int __init ks_pcie_host_init(struct dw_pcie_rp *pp)
return ret;
ks_pcie_stop_link(pci);
ks_pcie_setup_rc_app_regs(ks_pcie);
ret = ks_pcie_setup_rc_app_regs(ks_pcie);
if (ret)
return ret;
writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8),
pci->dbi_base + PCI_IO_BASE);
@@ -1293,7 +1329,7 @@ static int ks_pcie_probe(struct platform_device *pdev)
goto err_ep_init;
}
dw_pcie_ep_init_notify(&pci->ep);
pci_epc_init_notify(pci->ep.epc);
break;
default:


@@ -104,7 +104,7 @@ static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
dev_dbg(pci->dev, "Link up\n");
} else if (val & PEX_PF0_PME_MES_DR_LDD) {
dev_dbg(pci->dev, "Link down\n");
pci_epc_linkdown(pci->ep.epc);
dw_pcie_ep_linkdown(&pci->ep);
} else if (val & PEX_PF0_PME_MES_DR_HRD) {
dev_dbg(pci->dev, "Hot reset\n");
}
@@ -286,7 +286,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
return ret;
}
dw_pcie_ep_init_notify(&pci->ep);
pci_epc_init_notify(pci->ep.epc);
return ls_pcie_ep_interrupt_init(pcie, pdev);
}


@@ -9,7 +9,6 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/reset.h>


@@ -242,18 +242,24 @@ static struct pci_ops al_child_pci_ops = {
.write = pci_generic_config_write,
};
static void al_pcie_config_prepare(struct al_pcie *pcie)
static int al_pcie_config_prepare(struct al_pcie *pcie)
{
struct al_pcie_target_bus_cfg *target_bus_cfg;
struct dw_pcie_rp *pp = &pcie->pci->pp;
unsigned int ecam_bus_mask;
struct resource_entry *ft;
u32 cfg_control_offset;
struct resource *bus;
u8 subordinate_bus;
u8 secondary_bus;
u32 cfg_control;
u32 reg;
struct resource *bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res;
ft = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS);
if (!ft)
return -ENODEV;
bus = ft->res;
target_bus_cfg = &pcie->target_bus_cfg;
ecam_bus_mask = (pcie->ecam_size >> PCIE_ECAM_BUS_SHIFT) - 1;
@@ -287,6 +293,8 @@ static void al_pcie_config_prepare(struct al_pcie *pcie)
FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus);
al_pcie_controller_writel(pcie, cfg_control_offset, reg);
return 0;
}
static int al_pcie_host_init(struct dw_pcie_rp *pp)
@@ -305,7 +313,9 @@ static int al_pcie_host_init(struct dw_pcie_rp *pp)
if (rc)
return rc;
al_pcie_config_prepare(pcie);
rc = al_pcie_config_prepare(pcie);
if (rc)
return rc;
return 0;
}


@@ -94,7 +94,7 @@ static void artpec6_pcie_writel(struct artpec6_pcie *artpec6_pcie, u32 offset, u
regmap_write(artpec6_pcie->regmap, offset, val);
}
static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr)
static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr)
{
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
struct dw_pcie_rp *pp = &pci->pp;
@@ -102,13 +102,13 @@ static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr)
switch (artpec6_pcie->mode) {
case DW_PCIE_RC_TYPE:
return pci_addr - pp->cfg0_base;
return cpu_addr - pp->cfg0_base;
case DW_PCIE_EP_TYPE:
return pci_addr - ep->phys_base;
return cpu_addr - ep->phys_base;
default:
dev_err(pci->dev, "UNKNOWN device type\n");
}
return pci_addr;
return cpu_addr;
}
static int artpec6_pcie_establish_link(struct dw_pcie *pci)
@@ -452,7 +452,7 @@ static int artpec6_pcie_probe(struct platform_device *pdev)
return ret;
}
dw_pcie_ep_init_notify(&pci->ep);
pci_epc_init_notify(pci->ep.epc);
break;
default:


@@ -15,30 +15,6 @@
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
/**
* dw_pcie_ep_linkup - Notify EPF drivers about Link Up event
* @ep: DWC EP device
*/
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
{
struct pci_epc *epc = ep->epc;
pci_epc_linkup(epc);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup);
/**
* dw_pcie_ep_init_notify - Notify EPF drivers about EPC initialization complete
* @ep: DWC EP device
*/
void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
{
struct pci_epc *epc = ep->epc;
pci_epc_init_notify(epc);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify);
/**
* dw_pcie_ep_get_func_from_ep - Get the struct dw_pcie_ep_func corresponding to
* the endpoint function
@@ -161,7 +137,7 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
if (!ep->bar_to_atu[bar])
free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows);
else
free_win = ep->bar_to_atu[bar];
free_win = ep->bar_to_atu[bar] - 1;
if (free_win >= pci->num_ib_windows) {
dev_err(pci->dev, "No free inbound window\n");
@@ -175,15 +151,18 @@
return ret;
}
ep->bar_to_atu[bar] = free_win;
/*
* Always increment free_win before assignment, since value 0 is used to identify
* unallocated mapping.
*/
ep->bar_to_atu[bar] = free_win + 1;
set_bit(free_win, ep->ib_window_map);
return 0;
}
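The bias applied to bar_to_atu[] above is easy to misread: the stored value is the inbound window index plus one, so that zero can mean "no window allocated for this BAR". A minimal sketch of the convention (helper names are hypothetical, not part of the driver):

/* Stored value 0 means "unallocated"; window N is stored as N + 1. */
static inline u32 bar_to_atu_encode(u32 free_win)
{
	return free_win + 1;
}

/* Only meaningful when the stored value is non-zero. */
static inline u32 bar_to_atu_decode(u32 stored)
{
	return stored - 1;
}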
static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no,
phys_addr_t phys_addr,
u64 pci_addr, size_t size)
static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep,
struct dw_pcie_ob_atu_cfg *atu)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
u32 free_win;
@@ -195,13 +174,13 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no,
return -EINVAL;
}
ret = dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM,
phys_addr, pci_addr, size);
atu->index = free_win;
ret = dw_pcie_prog_outbound_atu(pci, atu);
if (ret)
return ret;
set_bit(free_win, ep->ob_window_map);
ep->outbound_addr[free_win] = phys_addr;
ep->outbound_addr[free_win] = atu->cpu_addr;
return 0;
}
@@ -212,7 +191,10 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar = epf_bar->barno;
u32 atu_index = ep->bar_to_atu[bar];
u32 atu_index = ep->bar_to_atu[bar] - 1;
if (!ep->bar_to_atu[bar])
return;
__dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);
@@ -233,6 +215,13 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
int ret, type;
u32 reg;
/*
* DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs
* 1 and 2 to form a 64-bit BAR.
*/
if ((flags & PCI_BASE_ADDRESS_MEM_TYPE_64) && (bar & 1))
return -EINVAL;
reg = PCI_BASE_ADDRESS_0 + (4 * bar);
if (!(flags & PCI_BASE_ADDRESS_SPACE))
@@ -301,8 +290,14 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
int ret;
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct dw_pcie_ob_atu_cfg atu = { 0 };
ret = dw_pcie_ep_outbound_atu(ep, func_no, addr, pci_addr, size);
atu.func_no = func_no;
atu.type = PCIE_ATU_TYPE_MEM;
atu.cpu_addr = addr;
atu.pci_addr = pci_addr;
atu.size = size;
ret = dw_pcie_ep_outbound_atu(ep, &atu);
if (ret) {
dev_err(pci->dev, "Failed to enable address\n");
return ret;
@@ -632,7 +627,6 @@ void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep)
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
dw_pcie_edma_remove(pci);
ep->epc->init_complete = false;
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_cleanup);
@@ -674,6 +668,34 @@ static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
return 0;
}
static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
{
unsigned int offset;
unsigned int nbars;
u32 reg, i;
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
dw_pcie_dbi_ro_wr_en(pci);
if (offset) {
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
PCI_REBAR_CTRL_NBAR_SHIFT;
/*
* PCIe r6.0, sec 7.8.6.2 requires us to support at least one
* size in the range from 1 MB to 512 GB. Advertise support
* for 1 MB BAR size only.
*/
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
}
dw_pcie_setup(pci);
dw_pcie_dbi_ro_wr_dis(pci);
}
/**
* dw_pcie_ep_init_registers - Initialize DWC EP specific registers
* @ep: DWC EP device
@@ -688,13 +710,11 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
struct dw_pcie_ep_func *ep_func;
struct device *dev = pci->dev;
struct pci_epc *epc = ep->epc;
unsigned int offset, ptm_cap_base;
unsigned int nbars;
u32 ptm_cap_base, reg;
u8 hdr_type;
u8 func_no;
int i, ret;
void *addr;
u32 reg;
int ret;
hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE) &
PCI_HEADER_TYPE_MASK;
@@ -757,25 +777,8 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
if (ep->ops->init)
ep->ops->init(ep);
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
ptm_cap_base = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM);
dw_pcie_dbi_ro_wr_en(pci);
if (offset) {
reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
PCI_REBAR_CTRL_NBAR_SHIFT;
/*
* PCIe r6.0, sec 7.8.6.2 requires us to support at least one
* size in the range from 1 MB to 512 GB. Advertise support
* for 1 MB BAR size only.
*/
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
}
/*
* PTM responder capability can be disabled only after disabling
* PTM root capability.
@@ -792,8 +795,7 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
dw_pcie_dbi_ro_wr_dis(pci);
}
dw_pcie_setup(pci);
dw_pcie_dbi_ro_wr_dis(pci);
dw_pcie_ep_init_non_sticky_registers(pci);
return 0;
@@ -804,6 +806,43 @@ err_remove_edma:
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_init_registers);
/**
* dw_pcie_ep_linkup - Notify EPF drivers about Link Up event
* @ep: DWC EP device
*/
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
{
struct pci_epc *epc = ep->epc;
pci_epc_linkup(epc);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup);
/**
* dw_pcie_ep_linkdown - Notify EPF drivers about Link Down event
* @ep: DWC EP device
*
* Non-sticky registers are also initialized before sending the notification to
* the EPF drivers. This is needed since the registers need to be initialized
* before the link comes back again.
*/
void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct pci_epc *epc = ep->epc;
/*
* Initialize the non-sticky DWC registers as they would've reset post
* Link Down. This is specifically needed for drivers not supporting
* PERST# as they have no way to reinitialize the registers before the
* link comes back again.
*/
dw_pcie_ep_init_non_sticky_registers(pci);
pci_epc_linkdown(epc);
}
EXPORT_SYMBOL_GPL(dw_pcie_ep_linkdown);
/**
* dw_pcie_ep_init - Initialize the endpoint device
* @ep: DWC EP device


@@ -398,6 +398,32 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
return 0;
}
static void dw_pcie_host_request_msg_tlp_res(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct resource_entry *win;
struct resource *res;
win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM);
if (win) {
res = devm_kzalloc(pci->dev, sizeof(*res), GFP_KERNEL);
if (!res)
return;
/*
* Allocate MSG TLP region of size 'region_align' at the end of
* the host bridge window.
*/
res->start = win->res->end - pci->region_align + 1;
res->end = win->res->end;
res->name = "msg";
res->flags = win->res->flags | IORESOURCE_BUSY;
if (!devm_request_resource(pci->dev, win->res, res))
pp->msg_res = res;
}
}
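A minimal sketch of how a glue driver might opt in to this generic MSG TLP support (the foo_pcie wrapper is hypothetical; the use_atu_msg flag and the ordering requirement come from the code in this file):

struct foo_pcie {
	struct dw_pcie pci;
};

static int foo_pcie_init_host(struct foo_pcie *foo)
{
	struct dw_pcie_rp *pp = &foo->pci.pp;

	/*
	 * Opt in before dw_pcie_host_init() so that the MSG TLP region is
	 * reserved at the end of the first MEM window during iATU setup.
	 */
	pp->use_atu_msg = true;

	return dw_pcie_host_init(pp);
}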
int dw_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@@ -484,6 +510,18 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
dw_pcie_iatu_detect(pci);
/*
* Allocate the resource for MSG TLP before programming the iATU
* outbound window in dw_pcie_setup_rc(). Since the allocation depends
* on the value of 'region_align', this has to be done after
* dw_pcie_iatu_detect().
*
* Glue drivers need to set 'use_atu_msg' before dw_pcie_host_init() to
* make use of the generic MSG TLP implementation.
*/
if (pp->use_atu_msg)
dw_pcie_host_request_msg_tlp_res(pp);
ret = dw_pcie_edma_detect(pci);
if (ret)
goto err_free_msi;
@@ -554,6 +592,7 @@ static void __iomem *dw_pcie_other_conf_map_bus(struct pci_bus *bus,
{
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dw_pcie_ob_atu_cfg atu = { 0 };
int type, ret;
u32 busdev;
@@ -576,8 +615,12 @@
else
type = PCIE_ATU_TYPE_CFG1;
ret = dw_pcie_prog_outbound_atu(pci, 0, type, pp->cfg0_base, busdev,
pp->cfg0_size);
atu.type = type;
atu.cpu_addr = pp->cfg0_base;
atu.pci_addr = busdev;
atu.size = pp->cfg0_size;
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret)
return NULL;
@@ -589,6 +632,7 @@ static int dw_pcie_rd_other_conf(struct pci_bus *bus, unsigned int devfn,
{
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dw_pcie_ob_atu_cfg atu = { 0 };
int ret;
ret = pci_generic_config_read(bus, devfn, where, size, val);
@@ -596,9 +640,12 @@
return ret;
if (pp->cfg0_io_shared) {
ret = dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO,
pp->io_base, pp->io_bus_addr,
pp->io_size);
atu.type = PCIE_ATU_TYPE_IO;
atu.cpu_addr = pp->io_base;
atu.pci_addr = pp->io_bus_addr;
atu.size = pp->io_size;
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret)
return PCIBIOS_SET_FAILED;
}
@@ -611,6 +658,7 @@ static int dw_pcie_wr_other_conf(struct pci_bus *bus, unsigned int devfn,
{
struct dw_pcie_rp *pp = bus->sysdata;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dw_pcie_ob_atu_cfg atu = { 0 };
int ret;
ret = pci_generic_config_write(bus, devfn, where, size, val);
@@ -618,9 +666,12 @@
return ret;
if (pp->cfg0_io_shared) {
ret = dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO,
pp->io_base, pp->io_bus_addr,
pp->io_size);
atu.type = PCIE_ATU_TYPE_IO;
atu.cpu_addr = pp->io_base;
atu.pci_addr = pp->io_bus_addr;
atu.size = pp->io_size;
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret)
return PCIBIOS_SET_FAILED;
}
@@ -655,6 +706,7 @@ static struct pci_ops dw_pcie_ops = {
static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct dw_pcie_ob_atu_cfg atu = { 0 };
struct resource_entry *entry;
int i, ret;
@@ -682,10 +734,19 @@
if (pci->num_ob_windows <= ++i)
break;
ret = dw_pcie_prog_outbound_atu(pci, i, PCIE_ATU_TYPE_MEM,
entry->res->start,
entry->res->start - entry->offset,
resource_size(entry->res));
atu.index = i;
atu.type = PCIE_ATU_TYPE_MEM;
atu.cpu_addr = entry->res->start;
atu.pci_addr = entry->res->start - entry->offset;
/* Adjust iATU size if MSG TLP region was allocated before */
if (pp->msg_res && pp->msg_res->parent == entry->res)
atu.size = resource_size(entry->res) -
resource_size(pp->msg_res);
else
atu.size = resource_size(entry->res);
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret) {
dev_err(pci->dev, "Failed to set MEM range %pr\n",
entry->res);
@@ -695,10 +756,13 @@
if (pp->io_size) {
if (pci->num_ob_windows > ++i) {
ret = dw_pcie_prog_outbound_atu(pci, i, PCIE_ATU_TYPE_IO,
pp->io_base,
pp->io_bus_addr,
pp->io_size);
atu.index = i;
atu.type = PCIE_ATU_TYPE_IO;
atu.cpu_addr = pp->io_base;
atu.pci_addr = pp->io_bus_addr;
atu.size = pp->io_size;
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret) {
dev_err(pci->dev, "Failed to set IO range %pr\n",
entry->res);
@@ -713,6 +777,8 @@
dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n",
pci->num_ob_windows);
pp->msg_atu_index = i;
i = 0;
resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) {
if (resource_type(entry->res) != IORESOURCE_MEM)
@@ -818,11 +884,47 @@ int dw_pcie_setup_rc(struct dw_pcie_rp *pp)
}
EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
static int dw_pcie_pme_turn_off(struct dw_pcie *pci)
{
struct dw_pcie_ob_atu_cfg atu = { 0 };
void __iomem *mem;
int ret;
if (pci->num_ob_windows <= pci->pp.msg_atu_index)
return -ENOSPC;
if (!pci->pp.msg_res)
return -ENOSPC;
atu.code = PCIE_MSG_CODE_PME_TURN_OFF;
atu.routing = PCIE_MSG_TYPE_R_BC;
atu.type = PCIE_ATU_TYPE_MSG;
atu.size = resource_size(pci->pp.msg_res);
atu.index = pci->pp.msg_atu_index;
atu.cpu_addr = pci->pp.msg_res->start;
ret = dw_pcie_prog_outbound_atu(pci, &atu);
if (ret)
return ret;
mem = ioremap(atu.cpu_addr, pci->region_align);
if (!mem)
return -ENOMEM;
/* A dummy write is converted to a Msg TLP */
writel(0, mem);
iounmap(mem);
return 0;
}
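Note the mechanism here: the window is programmed as PCIE_ATU_TYPE_MSG with PCIE_ATU_INHIBIT_PAYLOAD and the PME_TURN_OFF message code, with broadcast routing, so the dummy write into the remapped region leaves the link as a data-less PME_Turn_Off message rather than as a memory write.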
int dw_pcie_suspend_noirq(struct dw_pcie *pci)
{
u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
u32 val;
int ret;
int ret = 0;
/*
* If L1SS is supported, then do not put the link into L2 as some
@@ -834,10 +936,13 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT)
return 0;
if (!pci->pp.ops->pme_turn_off)
return 0;
if (pci->pp.ops->pme_turn_off)
pci->pp.ops->pme_turn_off(&pci->pp);
else
ret = dw_pcie_pme_turn_off(pci);
pci->pp.ops->pme_turn_off(&pci->pp);
if (ret)
return ret;
ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE,
PCIE_PME_TO_L2_TIMEOUT_US/10,


@@ -154,7 +154,7 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
dw_pcie_ep_deinit(&pci->ep);
}
dw_pcie_ep_init_notify(&pci->ep);
pci_epc_init_notify(pci->ep.epc);
break;
default:


@@ -465,56 +465,61 @@ static inline u32 dw_pcie_enable_ecrc(u32 val)
return val | PCIE_ATU_TD;
}
static int __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
int index, int type, u64 cpu_addr,
u64 pci_addr, u64 size)
int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
const struct dw_pcie_ob_atu_cfg *atu)
{
u64 cpu_addr = atu->cpu_addr;
u32 retries, val;
u64 limit_addr;
if (pci->ops && pci->ops->cpu_addr_fixup)
cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
limit_addr = cpu_addr + size - 1;
limit_addr = cpu_addr + atu->size - 1;
if ((limit_addr & ~pci->region_limit) != (cpu_addr & ~pci->region_limit) ||
!IS_ALIGNED(cpu_addr, pci->region_align) ||
!IS_ALIGNED(pci_addr, pci->region_align) || !size) {
!IS_ALIGNED(atu->pci_addr, pci->region_align) || !atu->size) {
return -EINVAL;
}
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LOWER_BASE,
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LOWER_BASE,
lower_32_bits(cpu_addr));
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_BASE,
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_BASE,
upper_32_bits(cpu_addr));
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LIMIT,
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LIMIT,
lower_32_bits(limit_addr));
if (dw_pcie_ver_is_ge(pci, 460A))
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_LIMIT,
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_LIMIT,
upper_32_bits(limit_addr));
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LOWER_TARGET,
lower_32_bits(pci_addr));
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_TARGET,
upper_32_bits(pci_addr));
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LOWER_TARGET,
lower_32_bits(atu->pci_addr));
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_TARGET,
upper_32_bits(atu->pci_addr));
val = type | PCIE_ATU_FUNC_NUM(func_no);
val = atu->type | atu->routing | PCIE_ATU_FUNC_NUM(atu->func_no);
if (upper_32_bits(limit_addr) > upper_32_bits(cpu_addr) &&
dw_pcie_ver_is_ge(pci, 460A))
val |= PCIE_ATU_INCREASE_REGION_SIZE;
if (dw_pcie_ver_is(pci, 490A))
val = dw_pcie_enable_ecrc(val);
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_REGION_CTRL1, val);
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val);
dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_REGION_CTRL2, PCIE_ATU_ENABLE);
val = PCIE_ATU_ENABLE;
if (atu->type == PCIE_ATU_TYPE_MSG) {
/* Only data-less messages are supported for now */
val |= PCIE_ATU_INHIBIT_PAYLOAD | atu->code;
}
dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL2, val);
/*
* Make sure ATU enable takes effect before any subsequent config
* and I/O accesses.
*/
for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) {
val = dw_pcie_readl_atu_ob(pci, index, PCIE_ATU_REGION_CTRL2);
val = dw_pcie_readl_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL2);
if (val & PCIE_ATU_ENABLE)
return 0;
@@ -526,21 +531,6 @@
return -ETIMEDOUT;
}
int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
u64 cpu_addr, u64 pci_addr, u64 size)
{
return __dw_pcie_prog_outbound_atu(pci, 0, index, type,
cpu_addr, pci_addr, size);
}
int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 cpu_addr, u64 pci_addr,
u64 size)
{
return __dw_pcie_prog_outbound_atu(pci, func_no, index, type,
cpu_addr, pci_addr, size);
}
static inline u32 dw_pcie_readl_atu_ib(struct dw_pcie *pci, u32 index, u32 reg)
{
return dw_pcie_readl_atu(pci, PCIE_ATU_REGION_DIR_IB, index, reg);
@@ -655,7 +645,7 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
if (dw_pcie_link_up(pci))
break;
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
msleep(LINK_WAIT_SLEEP_MS);
}
if (retries >= LINK_WAIT_MAX_RETRIES) {
@@ -880,30 +870,40 @@ static struct dw_edma_plat_ops dw_pcie_edma_ops = {
.irq_vector = dw_pcie_edma_irq_vector,
};
static int dw_pcie_edma_find_chip(struct dw_pcie *pci)
static void dw_pcie_edma_init_data(struct dw_pcie *pci)
{
pci->edma.dev = pci->dev;
if (!pci->edma.ops)
pci->edma.ops = &dw_pcie_edma_ops;
pci->edma.flags |= DW_EDMA_CHIP_LOCAL;
}
static int dw_pcie_edma_find_mf(struct dw_pcie *pci)
{
u32 val;
/*
* Bail out finding the mapping format if it is already set by the glue
* driver. Also ensure that the edma.reg_base is pointing to a valid
* memory region.
*/
if (pci->edma.mf != EDMA_MF_EDMA_LEGACY)
return pci->edma.reg_base ? 0 : -ENODEV;
/*
* Indirect eDMA CSRs access has been completely removed since v5.40a
* thus no space is now reserved for the eDMA channels viewport and
* former DMA CTRL register is no longer fixed to FFs.
*
* Note that Renesas R-Car S4-8's PCIe controllers for unknown reason
* have zeros in the eDMA CTRL register even though the HW-manual
* explicitly states there must be FFs if the unrolled mapping is enabled.
* For such cases the low-level drivers are supposed to manually
* activate the unrolled mapping to bypass the auto-detection procedure.
*/
if (dw_pcie_ver_is_ge(pci, 540A) || dw_pcie_cap_is(pci, EDMA_UNROLL))
if (dw_pcie_ver_is_ge(pci, 540A))
val = 0xFFFFFFFF;
else
val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL);
if (val == 0xFFFFFFFF && pci->edma.reg_base) {
pci->edma.mf = EDMA_MF_EDMA_UNROLL;
val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL);
} else if (val != 0xFFFFFFFF) {
pci->edma.mf = EDMA_MF_EDMA_LEGACY;
@@ -912,15 +912,25 @@
return -ENODEV;
}
pci->edma.dev = pci->dev;
return 0;
}
if (!pci->edma.ops)
pci->edma.ops = &dw_pcie_edma_ops;
static int dw_pcie_edma_find_channels(struct dw_pcie *pci)
{
u32 val;
pci->edma.flags |= DW_EDMA_CHIP_LOCAL;
/*
* Autodetect the read/write channels count only for non-HDMA platforms.
* HDMA platforms with native CSR mapping don't support autodetect,
* so the glue drivers should've passed the valid count already. If not,
* the below sanity check will catch it.
*/
if (pci->edma.mf != EDMA_MF_HDMA_NATIVE) {
val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL);
pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val);
pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val);
pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val);
pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val);
}
/* Sanity check the channels count if the mapping was incorrect */
if (!pci->edma.ll_wr_cnt || pci->edma.ll_wr_cnt > EDMA_MAX_WR_CH ||
@@ -930,6 +940,19 @@ static int dw_pcie_edma_find_chip(struct dw_pcie *pci)
return 0;
}
static int dw_pcie_edma_find_chip(struct dw_pcie *pci)
{
int ret;
dw_pcie_edma_init_data(pci);
ret = dw_pcie_edma_find_mf(pci);
if (ret)
return ret;
return dw_pcie_edma_find_channels(pci);
}
static int dw_pcie_edma_irq_verify(struct dw_pcie *pci)
{
struct platform_device *pdev = to_platform_device(pci->dev);


@@ -51,9 +51,8 @@
/* DWC PCIe controller capabilities */
#define DW_PCIE_CAP_REQ_RES 0
#define DW_PCIE_CAP_EDMA_UNROLL 1
#define DW_PCIE_CAP_IATU_UNROLL 2
#define DW_PCIE_CAP_CDM_CHECK 3
#define DW_PCIE_CAP_IATU_UNROLL 1
#define DW_PCIE_CAP_CDM_CHECK 2
#define dw_pcie_cap_is(_pci, _cap) \
test_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps)
@@ -63,14 +62,16 @@
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
#define LINK_WAIT_SLEEP_MS 90
/* Parameters for the waiting for iATU enabled routine */
#define LINK_WAIT_MAX_IATU_RETRIES 5
#define LINK_WAIT_IATU 9
/* Synopsys-specific PCIe configuration registers */
#define PCIE_PORT_FORCE 0x708
#define PORT_FORCE_DO_DESKEW_FOR_SRIS BIT(23)
#define PCIE_PORT_AFR 0x70C
#define PORT_AFR_N_FTS_MASK GENMASK(15, 8)
#define PORT_AFR_N_FTS(n) FIELD_PREP(PORT_AFR_N_FTS_MASK, n)
@@ -92,6 +93,9 @@
#define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7)
#define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf)
#define PCIE_PORT_LANE_SKEW 0x714
#define PORT_LANE_SKEW_INSERT_MASK GENMASK(23, 0)
#define PCIE_PORT_DEBUG0 0x728
#define PORT_LOGIC_LTSSM_STATE_MASK 0x1f
#define PORT_LOGIC_LTSSM_STATE_L0 0x11
@@ -148,11 +152,13 @@
#define PCIE_ATU_TYPE_IO 0x2
#define PCIE_ATU_TYPE_CFG0 0x4
#define PCIE_ATU_TYPE_CFG1 0x5
#define PCIE_ATU_TYPE_MSG 0x10
#define PCIE_ATU_TD BIT(8)
#define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20)
#define PCIE_ATU_REGION_CTRL2 0x004
#define PCIE_ATU_ENABLE BIT(31)
#define PCIE_ATU_BAR_MODE_ENABLE BIT(30)
#define PCIE_ATU_INHIBIT_PAYLOAD BIT(22)
#define PCIE_ATU_FUNC_NUM_MATCH_EN BIT(19)
#define PCIE_ATU_LOWER_BASE 0x008
#define PCIE_ATU_UPPER_BASE 0x00C
@@ -299,6 +305,17 @@ enum dw_pcie_ltssm {
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF,
};
struct dw_pcie_ob_atu_cfg {
int index;
int type;
u8 func_no;
u8 code;
u8 routing;
u64 cpu_addr;
u64 pci_addr;
u64 size;
};
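For callers migrating from the old positional arguments, a minimal sketch of programming a MEM window through the new config struct (mirroring what dw_pcie_ep_map_addr() does; the wrapper function is hypothetical):

static int example_prog_mem_window(struct dw_pcie *pci, u8 func_no,
				   u64 cpu_addr, u64 pci_addr, u64 size)
{
	struct dw_pcie_ob_atu_cfg atu = { 0 };

	atu.index = 0;			/* free outbound window to use */
	atu.func_no = func_no;
	atu.type = PCIE_ATU_TYPE_MEM;
	atu.cpu_addr = cpu_addr;
	atu.pci_addr = pci_addr;
	atu.size = size;

	return dw_pcie_prog_outbound_atu(pci, &atu);
}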
struct dw_pcie_host_ops {
int (*init)(struct dw_pcie_rp *pp);
void (*deinit)(struct dw_pcie_rp *pp);
@@ -328,6 +345,9 @@ struct dw_pcie_rp {
struct pci_host_bridge *bridge;
raw_spinlock_t lock;
DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
bool use_atu_msg;
int msg_atu_index;
struct resource *msg_res;
};
struct dw_pcie_ep_ops {
@@ -433,10 +453,8 @@ void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val);
int dw_pcie_link_up(struct dw_pcie *pci);
void dw_pcie_upconfig_setup(struct dw_pcie *pci);
int dw_pcie_wait_for_link(struct dw_pcie *pci);
int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
u64 cpu_addr, u64 pci_addr, u64 size);
int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
int type, u64 cpu_addr, u64 pci_addr, u64 size);
int dw_pcie_prog_outbound_atu(struct dw_pcie *pci,
const struct dw_pcie_ob_atu_cfg *atu);
int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type,
u64 cpu_addr, u64 pci_addr, u64 size);
int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
@@ -668,9 +686,9 @@ static inline void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus,
#ifdef CONFIG_PCIE_DW_EP
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep);
int dw_pcie_ep_init(struct dw_pcie_ep *ep);
int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep);
void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep);
void dw_pcie_ep_deinit(struct dw_pcie_ep *ep);
void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep);
int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no);
@@ -688,6 +706,10 @@ static inline void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
{
}
static inline void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep)
{
}
static inline int dw_pcie_ep_init(struct dw_pcie_ep *ep)
{
return 0;
@@ -698,10 +720,6 @@ static inline int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
return 0;
}
static inline void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
{
}
static inline void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
{
}


@@ -34,10 +34,16 @@
#define to_rockchip_pcie(x) dev_get_drvdata((x)->dev)
#define PCIE_CLIENT_RC_MODE HIWORD_UPDATE_BIT(0x40)
#define PCIE_CLIENT_EP_MODE HIWORD_UPDATE(0xf0, 0x0)
#define PCIE_CLIENT_ENABLE_LTSSM HIWORD_UPDATE_BIT(0xc)
#define PCIE_CLIENT_DISABLE_LTSSM HIWORD_UPDATE(0x0c, 0x8)
#define PCIE_CLIENT_INTR_STATUS_MISC 0x10
#define PCIE_CLIENT_INTR_MASK_MISC 0x24
#define PCIE_SMLH_LINKUP BIT(16)
#define PCIE_RDLH_LINKUP BIT(17)
#define PCIE_LINKUP (PCIE_SMLH_LINKUP | PCIE_RDLH_LINKUP)
#define PCIE_RDLH_LINK_UP_CHGED BIT(1)
#define PCIE_LINK_REQ_RST_NOT_INT BIT(2)
#define PCIE_L0S_ENTRY 0x11
#define PCIE_CLIENT_GENERAL_CONTROL 0x0
#define PCIE_CLIENT_INTR_STATUS_LEGACY 0x8
@@ -49,25 +55,30 @@
#define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0)
struct rockchip_pcie {
struct dw_pcie pci;
void __iomem *apb_base;
struct phy *phy;
struct clk_bulk_data *clks;
unsigned int clk_cnt;
struct reset_control *rst;
struct gpio_desc *rst_gpio;
struct regulator *vpcie3v3;
struct irq_domain *irq_domain;
struct dw_pcie pci;
void __iomem *apb_base;
struct phy *phy;
struct clk_bulk_data *clks;
unsigned int clk_cnt;
struct reset_control *rst;
struct gpio_desc *rst_gpio;
struct regulator *vpcie3v3;
struct irq_domain *irq_domain;
const struct rockchip_pcie_of_data *data;
};
static int rockchip_pcie_readl_apb(struct rockchip_pcie *rockchip,
u32 reg)
struct rockchip_pcie_of_data {
enum dw_pcie_device_mode mode;
const struct pci_epc_features *epc_features;
};
static int rockchip_pcie_readl_apb(struct rockchip_pcie *rockchip, u32 reg)
{
return readl_relaxed(rockchip->apb_base + reg);
}
static void rockchip_pcie_writel_apb(struct rockchip_pcie *rockchip,
u32 val, u32 reg)
static void rockchip_pcie_writel_apb(struct rockchip_pcie *rockchip, u32 val,
u32 reg)
{
writel_relaxed(val, rockchip->apb_base + reg);
}
@@ -144,16 +155,27 @@ static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip)
return 0;
}
static u32 rockchip_pcie_get_ltssm(struct rockchip_pcie *rockchip)
{
return rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS);
}
static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip)
{
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM,
PCIE_CLIENT_GENERAL_CONTROL);
}
static void rockchip_pcie_disable_ltssm(struct rockchip_pcie *rockchip)
{
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_DISABLE_LTSSM,
PCIE_CLIENT_GENERAL_CONTROL);
}
static int rockchip_pcie_link_up(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
u32 val = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS);
u32 val = rockchip_pcie_get_ltssm(rockchip);
if ((val & PCIE_LINKUP) == PCIE_LINKUP &&
(val & PCIE_LTSSM_STATUS_MASK) == PCIE_L0S_ENTRY)
@@ -186,12 +208,18 @@ static int rockchip_pcie_start_link(struct dw_pcie *pci)
return 0;
}
static void rockchip_pcie_stop_link(struct dw_pcie *pci)
{
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
rockchip_pcie_disable_ltssm(rockchip);
}
static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
struct device *dev = rockchip->pci.dev;
u32 val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
int irq, ret;
irq = of_irq_get_byname(dev->of_node, "legacy");
@@ -205,12 +233,6 @@ static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
irq_set_chained_handler_and_data(irq, rockchip_pcie_intx_handler,
rockchip);
/* LTSSM enable control mode */
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE,
PCIE_CLIENT_GENERAL_CONTROL);
return 0;
}
@@ -218,6 +240,82 @@ static const struct dw_pcie_host_ops rockchip_pcie_host_ops = {
.init = rockchip_pcie_host_init,
};
static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
dw_pcie_ep_reset_bar(pci, bar);
};
static int rockchip_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
unsigned int type, u16 interrupt_num)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
switch (type) {
case PCI_IRQ_INTX:
return dw_pcie_ep_raise_intx_irq(ep, func_no);
case PCI_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
case PCI_IRQ_MSIX:
return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
default:
dev_err(pci->dev, "UNKNOWN IRQ type\n");
}
return 0;
}
static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = {
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
.align = SZ_64K,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
};
/*
* BAR4 on rk3588 exposes the ATU Port Logic Structure to the host regardless of
* iATU settings for BAR4. This means that BAR4 cannot be used by an EPF driver,
* so mark it as RESERVED. (rockchip_pcie_ep_init() will disable all BARs by
* default.) If the host could write to BAR4, the iATU settings for all other
* BARs would be overwritten, and those BARs would stop working.
*/
static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = true,
.align = SZ_64K,
.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
.bar[BAR_4] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, },
};
static const struct pci_epc_features *
rockchip_pcie_get_features(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
return rockchip->data->epc_features;
}
static const struct dw_pcie_ep_ops rockchip_pcie_ep_ops = {
.init = rockchip_pcie_ep_init,
.raise_irq = rockchip_pcie_raise_irq,
.get_features = rockchip_pcie_get_features,
};
static int rockchip_pcie_clk_init(struct rockchip_pcie *rockchip)
{
struct device *dev = rockchip->pci.dev;
@@ -225,11 +323,15 @@ static int rockchip_pcie_clk_init(struct rockchip_pcie *rockchip)
ret = devm_clk_bulk_get_all(dev, &rockchip->clks);
if (ret < 0)
return ret;
return dev_err_probe(dev, ret, "failed to get clocks\n");
rockchip->clk_cnt = ret;
return clk_bulk_prepare_enable(rockchip->clk_cnt, rockchip->clks);
ret = clk_bulk_prepare_enable(rockchip->clk_cnt, rockchip->clks);
if (ret)
return dev_err_probe(dev, ret, "failed to enable clocks\n");
return 0;
}
static int rockchip_pcie_resource_get(struct platform_device *pdev,
@@ -237,12 +339,14 @@
{
rockchip->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb");
if (IS_ERR(rockchip->apb_base))
return PTR_ERR(rockchip->apb_base);
return dev_err_probe(&pdev->dev, PTR_ERR(rockchip->apb_base),
"failed to map apb registers\n");
rockchip->rst_gpio = devm_gpiod_get_optional(&pdev->dev, "reset",
GPIOD_OUT_HIGH);
GPIOD_OUT_LOW);
if (IS_ERR(rockchip->rst_gpio))
return PTR_ERR(rockchip->rst_gpio);
return dev_err_probe(&pdev->dev, PTR_ERR(rockchip->rst_gpio),
"failed to get reset gpio\n");
rockchip->rst = devm_reset_control_array_get_exclusive(&pdev->dev);
if (IS_ERR(rockchip->rst))
@@ -282,15 +386,127 @@ static void rockchip_pcie_phy_deinit(struct rockchip_pcie *rockchip)
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = rockchip_pcie_link_up,
.start_link = rockchip_pcie_start_link,
.stop_link = rockchip_pcie_stop_link,
};
static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg)
{
struct rockchip_pcie *rockchip = arg;
struct dw_pcie *pci = &rockchip->pci;
struct device *dev = pci->dev;
u32 reg, val;
reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC);
rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC);
dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg);
dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip));
if (reg & PCIE_LINK_REQ_RST_NOT_INT) {
dev_dbg(dev, "hot reset or link-down reset\n");
dw_pcie_ep_linkdown(&pci->ep);
}
if (reg & PCIE_RDLH_LINK_UP_CHGED) {
val = rockchip_pcie_get_ltssm(rockchip);
if ((val & PCIE_LINKUP) == PCIE_LINKUP) {
dev_dbg(dev, "link up\n");
dw_pcie_ep_linkup(&pci->ep);
}
}
return IRQ_HANDLED;
}
static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip)
{
struct dw_pcie_rp *pp;
u32 val;
if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST))
return -ENODEV;
/* LTSSM enable control mode */
val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE,
PCIE_CLIENT_GENERAL_CONTROL);
pp = &rockchip->pci.pp;
pp->ops = &rockchip_pcie_host_ops;
return dw_pcie_host_init(pp);
}
static int rockchip_pcie_configure_ep(struct platform_device *pdev,
struct rockchip_pcie *rockchip)
{
struct device *dev = &pdev->dev;
int irq, ret;
u32 val;
if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_EP))
return -ENODEV;
irq = platform_get_irq_byname(pdev, "sys");
if (irq < 0) {
dev_err(dev, "missing sys IRQ resource\n");
return irq;
}
ret = devm_request_threaded_irq(dev, irq, NULL,
rockchip_pcie_ep_sys_irq_thread,
IRQF_ONESHOT, "pcie-sys", rockchip);
if (ret) {
dev_err(dev, "failed to request PCIe sys IRQ\n");
return ret;
}
/* LTSSM enable control mode */
val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE);
rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL);
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_EP_MODE,
PCIE_CLIENT_GENERAL_CONTROL);
rockchip->pci.ep.ops = &rockchip_pcie_ep_ops;
rockchip->pci.ep.page_size = SZ_64K;
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
ret = dw_pcie_ep_init(&rockchip->pci.ep);
if (ret) {
dev_err(dev, "failed to initialize endpoint\n");
return ret;
}
ret = dw_pcie_ep_init_registers(&rockchip->pci.ep);
if (ret) {
dev_err(dev, "failed to initialize DWC endpoint registers\n");
dw_pcie_ep_deinit(&rockchip->pci.ep);
return ret;
}
pci_epc_init_notify(rockchip->pci.ep.epc);
/* unmask DLL up/down indicator and hot reset/link-down reset */
rockchip_pcie_writel_apb(rockchip, 0x60000, PCIE_CLIENT_INTR_MASK_MISC);
return ret;
}
static int rockchip_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie *rockchip;
struct dw_pcie_rp *pp;
const struct rockchip_pcie_of_data *data;
int ret;
data = of_device_get_match_data(dev);
if (!data)
return -EINVAL;
rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL);
if (!rockchip)
return -ENOMEM;
@@ -299,9 +515,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
rockchip->pci.dev = dev;
rockchip->pci.ops = &dw_pcie_ops;
pp = &rockchip->pci.pp;
pp->ops = &rockchip_pcie_host_ops;
rockchip->data = data;
ret = rockchip_pcie_resource_get(pdev, rockchip);
if (ret)
@@ -320,10 +534,9 @@
rockchip->vpcie3v3 = NULL;
} else {
ret = regulator_enable(rockchip->vpcie3v3);
if (ret) {
dev_err(dev, "failed to enable vpcie3v3 regulator\n");
return ret;
}
if (ret)
return dev_err_probe(dev, ret,
"failed to enable vpcie3v3 regulator\n");
}
ret = rockchip_pcie_phy_init(rockchip);
@@ -338,10 +551,26 @@
if (ret)
goto deinit_phy;
ret = dw_pcie_host_init(pp);
if (!ret)
return 0;
switch (data->mode) {
case DW_PCIE_RC_TYPE:
ret = rockchip_pcie_configure_rc(rockchip);
if (ret)
goto deinit_clk;
break;
case DW_PCIE_EP_TYPE:
ret = rockchip_pcie_configure_ep(pdev, rockchip);
if (ret)
goto deinit_clk;
break;
default:
dev_err(dev, "INVALID device type %d\n", data->mode);
ret = -EINVAL;
goto deinit_clk;
}
return 0;
deinit_clk:
clk_bulk_disable_unprepare(rockchip->clk_cnt, rockchip->clks);
deinit_phy:
rockchip_pcie_phy_deinit(rockchip);
@@ -352,8 +581,33 @@ disable_regulator:
return ret;
}
static const struct rockchip_pcie_of_data rockchip_pcie_rc_of_data_rk3568 = {
.mode = DW_PCIE_RC_TYPE,
};
static const struct rockchip_pcie_of_data rockchip_pcie_ep_of_data_rk3568 = {
.mode = DW_PCIE_EP_TYPE,
.epc_features = &rockchip_pcie_epc_features_rk3568,
};
static const struct rockchip_pcie_of_data rockchip_pcie_ep_of_data_rk3588 = {
.mode = DW_PCIE_EP_TYPE,
.epc_features = &rockchip_pcie_epc_features_rk3588,
};
static const struct of_device_id rockchip_pcie_of_match[] = {
{ .compatible = "rockchip,rk3568-pcie", },
{
.compatible = "rockchip,rk3568-pcie",
.data = &rockchip_pcie_rc_of_data_rk3568,
},
{
.compatible = "rockchip,rk3568-pcie-ep",
.data = &rockchip_pcie_ep_of_data_rk3568,
},
{
.compatible = "rockchip,rk3588-pcie-ep",
.data = &rockchip_pcie_ep_of_data_rk3588,
},
{},
};


@@ -442,7 +442,7 @@ static int keembay_pcie_probe(struct platform_device *pdev)
return ret;
}
dw_pcie_ep_init_notify(&pci->ep);
pci_epc_init_notify(pci->ep.epc);
break;
default:


@@ -12,12 +12,10 @@
#include <linux/compiler.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/of_pci.h>
#include <linux/phy/phy.h>
#include <linux/pci.h>
@@ -78,16 +76,16 @@ struct kirin_pcie {
void *phy_priv; /* only for PCIE_KIRIN_INTERNAL_PHY */
/* DWC PERST# */
int gpio_id_dwc_perst;
struct gpio_desc *id_dwc_perst_gpio;
/* Per-slot PERST# */
int num_slots;
int gpio_id_reset[MAX_PCI_SLOTS];
struct gpio_desc *id_reset_gpio[MAX_PCI_SLOTS];
const char *reset_names[MAX_PCI_SLOTS];
/* Per-slot clkreq */
int n_gpio_clkreq;
int gpio_id_clkreq[MAX_PCI_SLOTS];
struct gpio_desc *id_clkreq_gpio[MAX_PCI_SLOTS];
const char *clkreq_names[MAX_PCI_SLOTS];
};
@@ -381,15 +379,20 @@ static int kirin_pcie_get_gpio_enable(struct kirin_pcie *pcie,
pcie->n_gpio_clkreq = ret;
for (i = 0; i < pcie->n_gpio_clkreq; i++) {
pcie->gpio_id_clkreq[i] = of_get_named_gpio(dev->of_node,
"hisilicon,clken-gpios", i);
if (pcie->gpio_id_clkreq[i] < 0)
return pcie->gpio_id_clkreq[i];
pcie->id_clkreq_gpio[i] = devm_gpiod_get_index(dev,
"hisilicon,clken", i,
GPIOD_OUT_LOW);
if (IS_ERR(pcie->id_clkreq_gpio[i]))
return dev_err_probe(dev, PTR_ERR(pcie->id_clkreq_gpio[i]),
"unable to get a valid clken gpio\n");
pcie->clkreq_names[i] = devm_kasprintf(dev, GFP_KERNEL,
"pcie_clkreq_%d", i);
if (!pcie->clkreq_names[i])
return -ENOMEM;
gpiod_set_consumer_name(pcie->id_clkreq_gpio[i],
pcie->clkreq_names[i]);
}
return 0;
@@ -400,29 +403,33 @@ static int kirin_pcie_parse_port(struct kirin_pcie *pcie,
struct device_node *node)
{
struct device *dev = &pdev->dev;
struct device_node *parent, *child;
int ret, slot, i;
for_each_available_child_of_node(node, parent) {
for_each_available_child_of_node(parent, child) {
for_each_available_child_of_node_scoped(node, parent) {
for_each_available_child_of_node_scoped(parent, child) {
i = pcie->num_slots;
pcie->gpio_id_reset[i] = of_get_named_gpio(child,
"reset-gpios", 0);
if (pcie->gpio_id_reset[i] < 0)
continue;
pcie->id_reset_gpio[i] = devm_fwnode_gpiod_get_index(dev,
of_fwnode_handle(child),
"reset", 0, GPIOD_OUT_LOW,
NULL);
if (IS_ERR(pcie->id_reset_gpio[i])) {
if (PTR_ERR(pcie->id_reset_gpio[i]) == -ENOENT)
continue;
return dev_err_probe(dev, PTR_ERR(pcie->id_reset_gpio[i]),
"unable to get a valid reset gpio\n");
}
pcie->num_slots++;
if (pcie->num_slots > MAX_PCI_SLOTS) {
dev_err(dev, "Too many PCI slots!\n");
ret = -EINVAL;
goto put_node;
return -EINVAL;
}
ret = of_pci_get_devfn(child);
if (ret < 0) {
dev_err(dev, "failed to parse devfn: %d\n", ret);
goto put_node;
return ret;
}
slot = PCI_SLOT(ret);
@@ -430,19 +437,15 @@
pcie->reset_names[i] = devm_kasprintf(dev, GFP_KERNEL,
"pcie_perst_%d",
slot);
if (!pcie->reset_names[i]) {
ret = -ENOMEM;
goto put_node;
}
if (!pcie->reset_names[i])
return -ENOMEM;
gpiod_set_consumer_name(pcie->id_reset_gpio[i],
pcie->reset_names[i]);
}
}
return 0;
put_node:
of_node_put(child);
of_node_put(parent);
return ret;
}
static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie,
@@ -463,14 +466,11 @@ static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie,
return PTR_ERR(kirin_pcie->apb);
/* pcie internal PERST# gpio */
kirin_pcie->gpio_id_dwc_perst = of_get_named_gpio(dev->of_node,
"reset-gpios", 0);
if (kirin_pcie->gpio_id_dwc_perst == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (!gpio_is_valid(kirin_pcie->gpio_id_dwc_perst)) {
dev_err(dev, "unable to get a valid gpio pin\n");
return -ENODEV;
}
kirin_pcie->id_dwc_perst_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(kirin_pcie->id_dwc_perst_gpio))
return dev_err_probe(dev, PTR_ERR(kirin_pcie->id_dwc_perst_gpio),
"unable to get a valid gpio pin\n");
gpiod_set_consumer_name(kirin_pcie->id_dwc_perst_gpio, "pcie_perst_bridge");
ret = kirin_pcie_get_gpio_enable(kirin_pcie, pdev);
if (ret)
@@ -553,7 +553,7 @@ static int kirin_pcie_add_bus(struct pci_bus *bus)
/* Send PERST# to each slot */
for (i = 0; i < kirin_pcie->num_slots; i++) {
ret = gpio_direction_output(kirin_pcie->gpio_id_reset[i], 1);
ret = gpiod_direction_output_raw(kirin_pcie->id_reset_gpio[i], 1);
if (ret) {
dev_err(pci->dev, "PERST# %s error: %d\n",
kirin_pcie->reset_names[i], ret);
@@ -623,44 +623,6 @@ static int kirin_pcie_host_init(struct dw_pcie_rp *pp)
return 0;
}
static int kirin_pcie_gpio_request(struct kirin_pcie *kirin_pcie,
struct device *dev)
{
int ret, i;
for (i = 0; i < kirin_pcie->num_slots; i++) {
if (!gpio_is_valid(kirin_pcie->gpio_id_reset[i])) {
dev_err(dev, "unable to get a valid %s gpio\n",
kirin_pcie->reset_names[i]);
return -ENODEV;
}
ret = devm_gpio_request(dev, kirin_pcie->gpio_id_reset[i],
kirin_pcie->reset_names[i]);
if (ret)
return ret;
}
for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) {
if (!gpio_is_valid(kirin_pcie->gpio_id_clkreq[i])) {
dev_err(dev, "unable to get a valid %s gpio\n",
kirin_pcie->clkreq_names[i]);
return -ENODEV;
}
ret = devm_gpio_request(dev, kirin_pcie->gpio_id_clkreq[i],
kirin_pcie->clkreq_names[i]);
if (ret)
return ret;
ret = gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 0);
if (ret)
return ret;
}
return 0;
}
static const struct dw_pcie_ops kirin_dw_pcie_ops = {
.read_dbi = kirin_pcie_read_dbi,
.write_dbi = kirin_pcie_write_dbi,
@@ -680,7 +642,7 @@ static int kirin_pcie_power_off(struct kirin_pcie *kirin_pcie)
return hi3660_pcie_phy_power_off(kirin_pcie);
for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++)
gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 1);
gpiod_direction_output_raw(kirin_pcie->id_clkreq_gpio[i], 1);
phy_power_off(kirin_pcie->phy);
phy_exit(kirin_pcie->phy);
@@ -707,10 +669,6 @@ static int kirin_pcie_power_on(struct platform_device *pdev,
if (IS_ERR(kirin_pcie->phy))
return PTR_ERR(kirin_pcie->phy);
ret = kirin_pcie_gpio_request(kirin_pcie, dev);
if (ret)
return ret;
ret = phy_init(kirin_pcie->phy);
if (ret)
goto err;
@@ -723,11 +681,9 @@
/* perst assert Endpoint */
usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX);
if (!gpio_request(kirin_pcie->gpio_id_dwc_perst, "pcie_perst_bridge")) {
ret = gpio_direction_output(kirin_pcie->gpio_id_dwc_perst, 1);
if (ret)
goto err;
}
ret = gpiod_direction_output_raw(kirin_pcie->id_dwc_perst_gpio, 1);
if (ret)
goto err;
usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX);


@@ -47,6 +47,7 @@
#define PARF_DBI_BASE_ADDR_HI 0x354
#define PARF_SLV_ADDR_SPACE_SIZE 0x358
#define PARF_SLV_ADDR_SPACE_SIZE_HI 0x35c
#define PARF_NO_SNOOP_OVERIDE 0x3d4
#define PARF_ATU_BASE_ADDR 0x634
#define PARF_ATU_BASE_ADDR_HI 0x638
#define PARF_SRIS_MODE 0x644
@@ -86,6 +87,10 @@
#define PARF_DEBUG_INT_CFG_BUS_MASTER_EN BIT(2)
#define PARF_DEBUG_INT_RADM_PM_TURNOFF BIT(3)
/* PARF_NO_SNOOP_OVERIDE register fields */
#define WR_NO_SNOOP_OVERIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERIDE_EN BIT(3)
/* PARF_DEVICE_TYPE register fields */
#define PARF_DEVICE_TYPE_EP 0x0
@@ -149,6 +154,16 @@ enum qcom_pcie_ep_link_status {
QCOM_PCIE_EP_LINK_DOWN,
};
/**
* struct qcom_pcie_ep_cfg - Per SoC config struct
* @hdma_support: HDMA support on this SoC
* @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache snooping
*/
struct qcom_pcie_ep_cfg {
bool hdma_support;
bool override_no_snoop;
};
/**
* struct qcom_pcie_ep - Qualcomm PCIe Endpoint Controller
* @pci: Designware PCIe controller struct
@@ -167,6 +182,7 @@ enum qcom_pcie_ep_link_status {
* @num_clks: PCIe clocks count
* @perst_en: Flag for PERST enable
* @perst_sep_en: Flag for PERST separation enable
* @cfg: PCIe EP config struct
* @link_status: PCIe Link status
* @global_irq: Qualcomm PCIe specific Global IRQ
* @perst_irq: PERST# IRQ
@@ -194,6 +210,7 @@ struct qcom_pcie_ep {
u32 perst_en;
u32 perst_sep_en;
const struct qcom_pcie_ep_cfg *cfg;
enum qcom_pcie_ep_link_status link_status;
int global_irq;
int perst_irq;
@@ -482,13 +499,17 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
val &= ~PARF_MSTR_AXI_CLK_EN;
writel_relaxed(val, pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);
dw_pcie_ep_init_notify(&pcie_ep->pci.ep);
pci_epc_init_notify(pcie_ep->pci.ep.epc);
/* Enable LTSSM */
val = readl_relaxed(pcie_ep->parf + PARF_LTSSM);
val |= BIT(8);
writel_relaxed(val, pcie_ep->parf + PARF_LTSSM);
if (pcie_ep->cfg && pcie_ep->cfg->override_no_snoop)
writel_relaxed(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN,
pcie_ep->parf + PARF_NO_SNOOP_OVERIDE);
return 0;
err_disable_resources:
@@ -500,13 +521,8 @@ err_disable_resources:
static void qcom_pcie_perst_assert(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
struct device *dev = pci->dev;
if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) {
dev_dbg(dev, "Link is already disabled\n");
return;
}
pci_epc_deinit_notify(pci->ep.epc);
dw_pcie_ep_cleanup(&pci->ep);
qcom_pcie_disable_resources(pcie_ep);
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
@@ -640,12 +656,12 @@ static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) {
dev_dbg(dev, "Received Linkdown event\n");
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN;
pci_epc_linkdown(pci->ep.epc);
dw_pcie_ep_linkdown(&pci->ep);
} else if (FIELD_GET(PARF_INT_ALL_BME, status)) {
dev_dbg(dev, "Received BME event. Link is enabled!\n");
dev_dbg(dev, "Received Bus Master Enable event\n");
pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED;
qcom_pcie_ep_icc_update(pcie_ep);
pci_epc_bme_notify(pci->ep.epc);
pci_epc_bus_master_enable_notify(pci->ep.epc);
} else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) {
dev_dbg(dev, "Received PM Turn-off event! Entering L23\n");
val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
@@ -816,6 +832,14 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
pcie_ep->pci.ops = &pci_ops;
pcie_ep->pci.ep.ops = &pci_ep_ops;
pcie_ep->pci.edma.nr_irqs = 1;
pcie_ep->cfg = of_device_get_match_data(dev);
if (pcie_ep->cfg && pcie_ep->cfg->hdma_support) {
pcie_ep->pci.edma.ll_wr_cnt = 8;
pcie_ep->pci.edma.ll_rd_cnt = 8;
pcie_ep->pci.edma.mf = EDMA_MF_HDMA_NATIVE;
}
platform_set_drvdata(pdev, pcie_ep);
ret = qcom_pcie_ep_get_resources(pdev, pcie_ep);
@@ -874,7 +898,13 @@ static void qcom_pcie_ep_remove(struct platform_device *pdev)
qcom_pcie_disable_resources(pcie_ep);
}
static const struct qcom_pcie_ep_cfg cfg_1_34_0 = {
.hdma_support = true,
.override_no_snoop = true,
};
static const struct of_device_id qcom_pcie_ep_match[] = {
{ .compatible = "qcom,sa8775p-pcie-ep", .data = &cfg_1_34_0},
{ .compatible = "qcom,sdx55-pcie-ep", },
{ .compatible = "qcom,sm8450-pcie-ep", },
{ }


@@ -18,10 +18,11 @@
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
#include <linux/phy/pcie.h>
@@ -30,6 +31,7 @@
#include <linux/reset.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/units.h>
#include "../../pci.h"
#include "pcie-designware.h"
@@ -51,6 +53,7 @@
#define PARF_SID_OFFSET 0x234
#define PARF_BDF_TRANSLATE_CFG 0x24c
#define PARF_SLV_ADDR_SPACE_SIZE 0x358
#define PARF_NO_SNOOP_OVERIDE 0x3d4
#define PARF_DEVICE_TYPE 0x1000
#define PARF_BDF_TO_SID_TABLE_N 0x2000
#define PARF_BDF_TO_SID_CFG 0x2c00
@ -118,6 +121,10 @@
/* PARF_LTSSM register fields */
#define LTSSM_EN BIT(8)
/* PARF_NO_SNOOP_OVERIDE register fields */
#define WR_NO_SNOOP_OVERIDE_EN BIT(1)
#define RD_NO_SNOOP_OVERIDE_EN BIT(3)
/* PARF_DEVICE_TYPE register fields */
#define DEVICE_TYPE_RC 0x4
@ -154,58 +161,56 @@
#define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \
Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]))
#define QCOM_PCIE_1_0_0_MAX_CLOCKS 4
struct qcom_pcie_resources_1_0_0 {
struct clk_bulk_data clks[QCOM_PCIE_1_0_0_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct reset_control *core;
struct regulator *vdda;
};
#define QCOM_PCIE_2_1_0_MAX_CLOCKS 5
#define QCOM_PCIE_2_1_0_MAX_RESETS 6
#define QCOM_PCIE_2_1_0_MAX_SUPPLY 3
struct qcom_pcie_resources_2_1_0 {
struct clk_bulk_data clks[QCOM_PCIE_2_1_0_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct reset_control_bulk_data resets[QCOM_PCIE_2_1_0_MAX_RESETS];
int num_resets;
struct regulator_bulk_data supplies[QCOM_PCIE_2_1_0_MAX_SUPPLY];
};
#define QCOM_PCIE_2_3_2_MAX_CLOCKS 4
#define QCOM_PCIE_2_3_2_MAX_SUPPLY 2
struct qcom_pcie_resources_2_3_2 {
struct clk_bulk_data clks[QCOM_PCIE_2_3_2_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct regulator_bulk_data supplies[QCOM_PCIE_2_3_2_MAX_SUPPLY];
};
#define QCOM_PCIE_2_3_3_MAX_CLOCKS 5
#define QCOM_PCIE_2_3_3_MAX_RESETS 7
struct qcom_pcie_resources_2_3_3 {
struct clk_bulk_data clks[QCOM_PCIE_2_3_3_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct reset_control_bulk_data rst[QCOM_PCIE_2_3_3_MAX_RESETS];
};
#define QCOM_PCIE_2_4_0_MAX_CLOCKS 4
#define QCOM_PCIE_2_4_0_MAX_RESETS 12
struct qcom_pcie_resources_2_4_0 {
struct clk_bulk_data clks[QCOM_PCIE_2_4_0_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct reset_control_bulk_data resets[QCOM_PCIE_2_4_0_MAX_RESETS];
int num_resets;
};
#define QCOM_PCIE_2_7_0_MAX_CLOCKS 15
#define QCOM_PCIE_2_7_0_MAX_SUPPLIES 2
struct qcom_pcie_resources_2_7_0 {
struct clk_bulk_data clks[QCOM_PCIE_2_7_0_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct regulator_bulk_data supplies[QCOM_PCIE_2_7_0_MAX_SUPPLIES];
struct reset_control *rst;
};
#define QCOM_PCIE_2_9_0_MAX_CLOCKS 5
struct qcom_pcie_resources_2_9_0 {
struct clk_bulk_data clks[QCOM_PCIE_2_9_0_MAX_CLOCKS];
struct clk_bulk_data *clks;
int num_clks;
struct reset_control *rst;
};
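The get_resources hunks that follow all make the same conversion: a fixed table of clock IDs fetched with devm_clk_bulk_get() becomes a single devm_clk_bulk_get_all() call that pulls every clock listed in the device's DT node. A minimal sketch of the new pattern (example_* names are illustrative):

#include <linux/clk.h>
#include <linux/device.h>

struct example_res {
	struct clk_bulk_data *clks;
	int num_clks;
};

static int example_get_and_enable_clks(struct device *dev,
				       struct example_res *res)
{
	/* allocates and fills res->clks from DT; returns count or -errno */
	res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
	if (res->num_clks < 0) {
		dev_err(dev, "Failed to get clocks\n");
		return res->num_clks;
	}

	/* the returned count, not ARRAY_SIZE(), now drives enable/disable */
	return clk_bulk_prepare_enable(res->num_clks, res->clks);
}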
@ -231,8 +236,15 @@ struct qcom_pcie_ops {
int (*config_sid)(struct qcom_pcie *pcie);
};
/**
* struct qcom_pcie_cfg - Per SoC config struct
* @ops: qcom PCIe ops structure
* @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache
* snooping
*/
struct qcom_pcie_cfg {
const struct qcom_pcie_ops *ops;
bool override_no_snoop;
bool no_l0s;
};
@ -245,6 +257,7 @@ struct qcom_pcie {
struct phy *phy;
struct gpio_desc *reset;
struct icc_path *icc_mem;
struct icc_path *icc_cpu;
const struct qcom_pcie_cfg *cfg;
struct dentry *debugfs;
bool suspended;
@ -337,21 +350,11 @@ static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie)
if (ret)
return ret;
res->clks[0].id = "iface";
res->clks[1].id = "core";
res->clks[2].id = "phy";
res->clks[3].id = "aux";
res->clks[4].id = "ref";
/* iface, core, phy are required */
ret = devm_clk_bulk_get(dev, 3, res->clks);
if (ret < 0)
return ret;
/* aux, ref are optional */
ret = devm_clk_bulk_get_optional(dev, 2, res->clks + 3);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
res->resets[0].id = "pci";
res->resets[1].id = "axi";
@ -373,7 +376,7 @@ static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0;
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
clk_bulk_disable_unprepare(res->num_clks, res->clks);
reset_control_bulk_assert(res->num_resets, res->resets);
writel(1, pcie->parf + PARF_PHY_CTRL);
@ -425,7 +428,7 @@ static int qcom_pcie_post_init_2_1_0(struct qcom_pcie *pcie)
val &= ~PHY_TEST_PWR_DOWN;
writel(val, pcie->parf + PARF_PHY_CTRL);
ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
if (ret)
return ret;
@ -476,20 +479,16 @@ static int qcom_pcie_get_resources_1_0_0(struct qcom_pcie *pcie)
struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0;
struct dw_pcie *pci = pcie->pci;
struct device *dev = pci->dev;
int ret;
res->vdda = devm_regulator_get(dev, "vdda");
if (IS_ERR(res->vdda))
return PTR_ERR(res->vdda);
res->clks[0].id = "iface";
res->clks[1].id = "aux";
res->clks[2].id = "master_bus";
res->clks[3].id = "slave_bus";
ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
res->core = devm_reset_control_get_exclusive(dev, "core");
return PTR_ERR_OR_ZERO(res->core);
@ -500,7 +499,7 @@ static void qcom_pcie_deinit_1_0_0(struct qcom_pcie *pcie)
struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0;
reset_control_assert(res->core);
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
clk_bulk_disable_unprepare(res->num_clks, res->clks);
regulator_disable(res->vdda);
}
@ -517,7 +516,7 @@ static int qcom_pcie_init_1_0_0(struct qcom_pcie *pcie)
return ret;
}
ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
if (ret) {
dev_err(dev, "cannot prepare/enable clocks\n");
goto err_assert_reset;
@ -532,7 +531,7 @@ static int qcom_pcie_init_1_0_0(struct qcom_pcie *pcie)
return 0;
err_disable_clks:
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
clk_bulk_disable_unprepare(res->num_clks, res->clks);
err_assert_reset:
reset_control_assert(res->core);
@ -580,14 +579,11 @@ static int qcom_pcie_get_resources_2_3_2(struct qcom_pcie *pcie)
if (ret)
return ret;
res->clks[0].id = "aux";
res->clks[1].id = "cfg";
res->clks[2].id = "bus_master";
res->clks[3].id = "bus_slave";
ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
return 0;
}
@ -596,7 +592,7 @@ static void qcom_pcie_deinit_2_3_2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2;
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
clk_bulk_disable_unprepare(res->num_clks, res->clks);
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
}
@ -613,7 +609,7 @@ static int qcom_pcie_init_2_3_2(struct qcom_pcie *pcie)
return ret;
}
ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
if (ret) {
dev_err(dev, "cannot prepare/enable clocks\n");
regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
@ -661,17 +657,11 @@ static int qcom_pcie_get_resources_2_4_0(struct qcom_pcie *pcie)
bool is_ipq = of_device_is_compatible(dev->of_node, "qcom,pcie-ipq4019");
int ret;
res->clks[0].id = "aux";
res->clks[1].id = "master_bus";
res->clks[2].id = "slave_bus";
res->clks[3].id = "iface";
/* qcom,pcie-ipq4019 is defined without "iface" */
res->num_clks = is_ipq ? 3 : 4;
ret = devm_clk_bulk_get(dev, res->num_clks, res->clks);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
res->resets[0].id = "axi_m";
res->resets[1].id = "axi_s";
@ -742,15 +732,11 @@ static int qcom_pcie_get_resources_2_3_3(struct qcom_pcie *pcie)
struct device *dev = pci->dev;
int ret;
res->clks[0].id = "iface";
res->clks[1].id = "axi_m";
res->clks[2].id = "axi_s";
res->clks[3].id = "ahb";
res->clks[4].id = "aux";
ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
res->rst[0].id = "axi_m";
res->rst[1].id = "axi_s";
@ -771,7 +757,7 @@ static void qcom_pcie_deinit_2_3_3(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
clk_bulk_disable_unprepare(res->num_clks, res->clks);
}
static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie)
@ -801,7 +787,7 @@ static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie)
*/
usleep_range(2000, 2500);
ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
ret = clk_bulk_prepare_enable(res->num_clks, res->clks);
if (ret) {
dev_err(dev, "cannot prepare/enable clocks\n");
goto err_assert_resets;
@ -862,8 +848,6 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
struct dw_pcie *pci = pcie->pci;
struct device *dev = pci->dev;
unsigned int num_clks, num_opt_clks;
unsigned int idx;
int ret;
res->rst = devm_reset_control_array_get_exclusive(dev);
@ -877,36 +861,11 @@ static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie)
if (ret)
return ret;
idx = 0;
res->clks[idx++].id = "aux";
res->clks[idx++].id = "cfg";
res->clks[idx++].id = "bus_master";
res->clks[idx++].id = "bus_slave";
res->clks[idx++].id = "slave_q2a";
num_clks = idx;
ret = devm_clk_bulk_get(dev, num_clks, res->clks);
if (ret < 0)
return ret;
res->clks[idx++].id = "tbu";
res->clks[idx++].id = "ddrss_sf_tbu";
res->clks[idx++].id = "aggre0";
res->clks[idx++].id = "aggre1";
res->clks[idx++].id = "noc_aggr";
res->clks[idx++].id = "noc_aggr_4";
res->clks[idx++].id = "noc_aggr_south_sf";
res->clks[idx++].id = "cnoc_qx";
res->clks[idx++].id = "sleep";
res->clks[idx++].id = "cnoc_sf_axi";
num_opt_clks = idx - num_clks;
res->num_clks = idx;
ret = devm_clk_bulk_get_optional(dev, num_opt_clks, res->clks + num_clks);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
return 0;
}
@ -986,6 +945,12 @@ err_disable_regulators:
static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
{
const struct qcom_pcie_cfg *pcie_cfg = pcie->cfg;
if (pcie_cfg->override_no_snoop)
writel(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN,
pcie->parf + PARF_NO_SNOOP_OVERIDE);
qcom_pcie_clear_aspm_l0s(pcie->pci);
qcom_pcie_clear_hpc(pcie->pci);
@ -1101,17 +1066,12 @@ static int qcom_pcie_get_resources_2_9_0(struct qcom_pcie *pcie)
struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0;
struct dw_pcie *pci = pcie->pci;
struct device *dev = pci->dev;
int ret;
res->clks[0].id = "iface";
res->clks[1].id = "axi_m";
res->clks[2].id = "axi_s";
res->clks[3].id = "axi_bridge";
res->clks[4].id = "rchng";
ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
if (ret < 0)
return ret;
res->num_clks = devm_clk_bulk_get_all(dev, &res->clks);
if (res->num_clks < 0) {
dev_err(dev, "Failed to get clocks\n");
return res->num_clks;
}
res->rst = devm_reset_control_array_get_exclusive(dev);
if (IS_ERR(res->rst))
@ -1124,7 +1084,7 @@ static void qcom_pcie_deinit_2_9_0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0;
clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
clk_bulk_disable_unprepare(res->num_clks, res->clks);
}
static int qcom_pcie_init_2_9_0(struct qcom_pcie *pcie)
@ -1153,7 +1113,7 @@ static int qcom_pcie_init_2_9_0(struct qcom_pcie *pcie)
usleep_range(2000, 2500);
return clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
return clk_bulk_prepare_enable(res->num_clks, res->clks);
}
static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
@ -1366,6 +1326,11 @@ static const struct qcom_pcie_cfg cfg_1_9_0 = {
.ops = &ops_1_9_0,
};
static const struct qcom_pcie_cfg cfg_1_34_0 = {
.ops = &ops_1_9_0,
.override_no_snoop = true,
};
static const struct qcom_pcie_cfg cfg_2_1_0 = {
.ops = &ops_2_1_0,
};
@ -1409,6 +1374,9 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
if (IS_ERR(pcie->icc_mem))
return PTR_ERR(pcie->icc_mem);
pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie");
if (IS_ERR(pcie->icc_cpu))
return PTR_ERR(pcie->icc_cpu);
/*
* Some Qualcomm platforms require interconnect bandwidth constraints
* to be set before enabling interconnect clocks.
@ -1418,23 +1386,35 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
*/
ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1));
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
dev_err(pci->dev, "Failed to set bandwidth for PCIe-MEM interconnect path: %d\n",
ret);
return ret;
}
/*
* Since the CPU-PCIe path is only used for activities like host
* controller register access and endpoint Config/BAR space access,
* the HW team recommends a minimal bandwidth of 1 KBps just to keep
* the path active.
*/
ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1));
if (ret) {
dev_err(pci->dev, "Failed to set bandwidth for CPU-PCIe interconnect path: %d\n",
ret);
icc_set_bw(pcie->icc_mem, 0, 0);
return ret;
}
return 0;
}
static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
{
u32 offset, status, width, speed;
struct dw_pcie *pci = pcie->pci;
u32 offset, status;
int speed, width;
int ret;
if (!pcie->icc_mem)
return;
unsigned long freq_kbps;
struct dev_pm_opp *opp;
int ret, freq_mbps;
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
@ -1446,10 +1426,28 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
if (ret) {
dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n",
ret);
if (pcie->icc_mem) {
ret = icc_set_bw(pcie->icc_mem, 0,
width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
if (ret) {
dev_err(pci->dev, "Failed to set bandwidth for PCIe-MEM interconnect path: %d\n",
ret);
}
} else {
freq_mbps = pcie_dev_speed_mbps(pcie_link_speed[speed]);
if (freq_mbps < 0)
return;
freq_kbps = freq_mbps * KILO;
opp = dev_pm_opp_find_freq_exact(pci->dev, freq_kbps * width,
true);
if (!IS_ERR(opp)) {
ret = dev_pm_opp_set_opp(pci->dev, opp);
if (ret)
dev_err(pci->dev, "Failed to set OPP for freq (%lu): %d\n",
freq_kbps * width, ret);
dev_pm_opp_put(opp);
}
}
}
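For reference, a hedged sketch of the arithmetic above: speed and width both come from the Link Status register, the interconnect vote scales QCOM_PCIE_LINK_SPEED_TO_BW(speed) by the lane count, and the OPP key is the per-lane Mb/s figure scaled by KILO and the lane count. The 8000 Mb/s value below assumes pcie_dev_speed_mbps() reports the raw 8 GT/s rate; the driver reads it from the PCI core instead of hard-coding it:

#include <linux/bitfield.h>
#include <linux/pci.h>
#include <linux/units.h>

static unsigned long example_opp_freq(u16 lnksta)
{
	int speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta); /* 3 = 8 GT/s */
	int width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta); /* lane count */
	/* assumed per-lane rate for Gen3; see pcie_dev_speed_mbps() */
	unsigned long mbps = (speed == 3) ? 8000 : 0;

	/* e.g. Gen3 x2: 8000 * KILO * 2 = 16000000 as the OPP lookup key */
	return mbps * KILO * width;
}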
@ -1493,7 +1491,9 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
static int qcom_pcie_probe(struct platform_device *pdev)
{
const struct qcom_pcie_cfg *pcie_cfg;
unsigned long max_freq = ULONG_MAX;
struct device *dev = &pdev->dev;
struct dev_pm_opp *opp;
struct qcom_pcie *pcie;
struct dw_pcie_rp *pp;
struct resource *res;
@ -1561,9 +1561,43 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_pm_runtime_put;
}
ret = qcom_pcie_icc_init(pcie);
if (ret)
/* OPP table is optional */
ret = devm_pm_opp_of_add_table(dev);
if (ret && ret != -ENODEV) {
dev_err_probe(dev, ret, "Failed to add OPP table\n");
goto err_pm_runtime_put;
}
/*
* Before the PCIe link is initialized, vote for the highest OPP in the
* OPP table, so that the maximum voltage corner is requested and the
* link can come up at the maximum supported speed. At the end of
* probe(), the OPP will be updated using qcom_pcie_icc_opp_update().
*/
if (!ret) {
opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
if (IS_ERR(opp)) {
ret = PTR_ERR(opp);
dev_err_probe(pci->dev, ret,
"Unable to find max freq OPP\n");
goto err_pm_runtime_put;
} else {
ret = dev_pm_opp_set_opp(dev, opp);
}
dev_pm_opp_put(opp);
if (ret) {
dev_err_probe(pci->dev, ret,
"Failed to set OPP for freq %lu\n",
max_freq);
goto err_pm_runtime_put;
}
} else {
/* Skip ICC init if OPP is supported, as bandwidth voting is then handled via OPP */
ret = qcom_pcie_icc_init(pcie);
if (ret)
goto err_pm_runtime_put;
}
ret = pcie->cfg->ops->get_resources(pcie);
if (ret)
@ -1583,7 +1617,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_phy_exit;
}
qcom_pcie_icc_update(pcie);
qcom_pcie_icc_opp_update(pcie);
if (pcie->mhi)
qcom_pcie_init_debugfs(pcie);
@ -1602,16 +1636,20 @@ err_pm_runtime_put:
static int qcom_pcie_suspend_noirq(struct device *dev)
{
struct qcom_pcie *pcie = dev_get_drvdata(dev);
int ret;
int ret = 0;
/*
* Set minimum bandwidth required to keep data path functional during
* suspend.
*/
ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
if (ret) {
dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
return ret;
if (pcie->icc_mem) {
ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1));
if (ret) {
dev_err(dev,
"Failed to set bandwidth for PCIe-MEM interconnect path: %d\n",
ret);
return ret;
}
}
/*
@ -1634,7 +1672,21 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
pcie->suspended = true;
}
return 0;
/*
* Only disable the CPU-PCIe interconnect path if the suspend is not
* S2RAM, because on some platforms DBI access can happen very late
* during S2RAM and an inactive CPU-PCIe interconnect path may lead
* to a NoC error.
*/
if (pm_suspend_target_state != PM_SUSPEND_MEM) {
ret = icc_disable(pcie->icc_cpu);
if (ret)
dev_err(dev, "Failed to disable CPU-PCIe interconnect path: %d\n", ret);
if (!pcie->icc_mem)
dev_pm_opp_set_opp(pcie->pci->dev, NULL);
}
return ret;
}
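A condensed sketch of the S2RAM gate above; icc_disable() here pairs with the icc_enable() in the resume path below:

#include <linux/interconnect.h>
#include <linux/suspend.h>

/* drop the CPU-PCIe keep-alive vote only for non-S2RAM suspend states */
static int example_suspend_cpu_pcie(struct icc_path *cpu_pcie)
{
	if (pm_suspend_target_state != PM_SUSPEND_MEM)
		return icc_disable(cpu_pcie);
	return 0;
}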
static int qcom_pcie_resume_noirq(struct device *dev)
@ -1642,6 +1694,14 @@ static int qcom_pcie_resume_noirq(struct device *dev)
struct qcom_pcie *pcie = dev_get_drvdata(dev);
int ret;
if (pm_suspend_target_state != PM_SUSPEND_MEM) {
ret = icc_enable(pcie->icc_cpu);
if (ret) {
dev_err(dev, "Failed to enable CPU-PCIe interconnect path: %d\n", ret);
return ret;
}
}
if (pcie->suspended) {
ret = qcom_pcie_host_init(&pcie->pci->pp);
if (ret)
@ -1650,7 +1710,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
pcie->suspended = false;
}
qcom_pcie_icc_update(pcie);
qcom_pcie_icc_opp_update(pcie);
return 0;
}
@ -1667,7 +1727,7 @@ static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
{ .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_9_0},
{ .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_34_0},
{ .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 },
{ .compatible = "qcom,pcie-sc8180x", .data = &cfg_1_9_0 },
{ .compatible = "qcom,pcie-sc8280xp", .data = &cfg_sc8280xp },


@ -2,11 +2,17 @@
/*
* PCIe controller driver for Renesas R-Car Gen4 Series SoCs
* Copyright (C) 2022-2023 Renesas Electronics Corporation
*
* The r8a779g0 (R-Car V4H) controller requires specific firmware to
* initialize the PHY; otherwise, the PCIe controller will not work.
*/
#include <linux/delay.h>
#include <linux/firmware.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/pci.h>
@ -20,9 +26,10 @@
/* Renesas-specific */
/* PCIe Mode Setting Register 0 */
#define PCIEMSR0 0x0000
#define BIFUR_MOD_SET_ON BIT(0)
#define APP_SRIS_MODE BIT(6)
#define DEVICE_TYPE_EP 0
#define DEVICE_TYPE_RC BIT(4)
#define BIFUR_MOD_SET_ON BIT(0)
/* PCIe Interrupt Status 0 */
#define PCIEINTSTS0 0x0084
@ -37,47 +44,49 @@
#define PCIEDMAINTSTSEN 0x0314
#define PCIEDMAINTSTSEN_INIT GENMASK(15, 0)
/* Port Logic Registers 89 */
#define PRTLGC89 0x0b70
/* Port Logic Registers 90 */
#define PRTLGC90 0x0b74
/* PCIe Reset Control Register 1 */
#define PCIERSTCTRL1 0x0014
#define APP_HOLD_PHY_RST BIT(16)
#define APP_LTSSM_ENABLE BIT(0)
/* PCIe Power Management Control */
#define PCIEPWRMNGCTRL 0x0070
#define APP_CLK_REQ_N BIT(11)
#define APP_CLK_PM_EN BIT(10)
#define RCAR_NUM_SPEED_CHANGE_RETRIES 10
#define RCAR_MAX_LINK_SPEED 4
#define RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET 0x1000
#define RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET 0x800
#define RCAR_GEN4_PCIE_FIRMWARE_NAME "rcar_gen4_pcie.bin"
#define RCAR_GEN4_PCIE_FIRMWARE_BASE_ADDR 0xc000
MODULE_FIRMWARE(RCAR_GEN4_PCIE_FIRMWARE_NAME);
struct rcar_gen4_pcie;
struct rcar_gen4_pcie_drvdata {
void (*additional_common_init)(struct rcar_gen4_pcie *rcar);
int (*ltssm_control)(struct rcar_gen4_pcie *rcar, bool enable);
enum dw_pcie_device_mode mode;
};
struct rcar_gen4_pcie {
struct dw_pcie dw;
void __iomem *base;
void __iomem *phy_base;
struct platform_device *pdev;
enum dw_pcie_device_mode mode;
const struct rcar_gen4_pcie_drvdata *drvdata;
};
#define to_rcar_gen4_pcie(_dw) container_of(_dw, struct rcar_gen4_pcie, dw)
/* Common */
static void rcar_gen4_pcie_ltssm_enable(struct rcar_gen4_pcie *rcar,
bool enable)
{
u32 val;
val = readl(rcar->base + PCIERSTCTRL1);
if (enable) {
val |= APP_LTSSM_ENABLE;
val &= ~APP_HOLD_PHY_RST;
} else {
/*
* Since the R-Car datasheet doesn't describe how to assert
* APP_HOLD_PHY_RST, don't assert it again; otherwise,
* dw_edma_core_off() hung when the controller didn't detect
* a PCI device.
*/
val &= ~APP_LTSSM_ENABLE;
}
writel(val, rcar->base + PCIERSTCTRL1);
}
static int rcar_gen4_pcie_link_up(struct dw_pcie *dw)
{
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
@ -123,9 +132,13 @@ static int rcar_gen4_pcie_speed_change(struct dw_pcie *dw)
static int rcar_gen4_pcie_start_link(struct dw_pcie *dw)
{
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
int i, changes;
int i, changes, ret;
rcar_gen4_pcie_ltssm_enable(rcar, true);
if (rcar->drvdata->ltssm_control) {
ret = rcar->drvdata->ltssm_control(rcar, true);
if (ret)
return ret;
}
/*
* Require direct speed change with retrying here if the link_gen is
@ -137,7 +150,7 @@ static int rcar_gen4_pcie_start_link(struct dw_pcie *dw)
* Since dw_pcie_setup_rc() sets it once, PCIe Gen2 will be trained.
* So, this needs remaining times for up to PCIe Gen4 if RC mode.
*/
if (changes && rcar->mode == DW_PCIE_RC_TYPE)
if (changes && rcar->drvdata->mode == DW_PCIE_RC_TYPE)
changes--;
for (i = 0; i < changes; i++) {
@ -153,7 +166,8 @@ static void rcar_gen4_pcie_stop_link(struct dw_pcie *dw)
{
struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw);
rcar_gen4_pcie_ltssm_enable(rcar, false);
if (rcar->drvdata->ltssm_control)
rcar->drvdata->ltssm_control(rcar, false);
}
static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
@ -172,9 +186,9 @@ static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc);
val = readl(rcar->base + PCIEMSR0);
if (rcar->mode == DW_PCIE_RC_TYPE) {
if (rcar->drvdata->mode == DW_PCIE_RC_TYPE) {
val |= DEVICE_TYPE_RC;
} else if (rcar->mode == DW_PCIE_EP_TYPE) {
} else if (rcar->drvdata->mode == DW_PCIE_EP_TYPE) {
val |= DEVICE_TYPE_EP;
} else {
ret = -EINVAL;
@ -190,6 +204,9 @@ static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar)
if (ret)
goto err_unprepare;
if (rcar->drvdata->additional_common_init)
rcar->drvdata->additional_common_init(rcar);
return 0;
err_unprepare:
@ -231,6 +248,10 @@ static void rcar_gen4_pcie_unprepare(struct rcar_gen4_pcie *rcar)
static int rcar_gen4_pcie_get_resources(struct rcar_gen4_pcie *rcar)
{
rcar->phy_base = devm_platform_ioremap_resource_byname(rcar->pdev, "phy");
if (IS_ERR(rcar->phy_base))
return PTR_ERR(rcar->phy_base);
/* Renesas-specific registers */
rcar->base = devm_platform_ioremap_resource_byname(rcar->pdev, "app");
@ -255,7 +276,7 @@ static struct rcar_gen4_pcie *rcar_gen4_pcie_alloc(struct platform_device *pdev)
rcar->dw.ops = &dw_pcie_ops;
rcar->dw.dev = dev;
rcar->pdev = pdev;
dw_pcie_cap_set(&rcar->dw, EDMA_UNROLL);
rcar->dw.edma.mf = EDMA_MF_EDMA_UNROLL;
dw_pcie_cap_set(&rcar->dw, REQ_RES);
platform_set_drvdata(pdev, rcar);
@ -437,7 +458,7 @@ static int rcar_gen4_add_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
rcar_gen4_pcie_ep_deinit(rcar);
}
dw_pcie_ep_init_notify(ep);
pci_epc_init_notify(ep->epc);
return ret;
}
@ -451,9 +472,11 @@ static void rcar_gen4_remove_dw_pcie_ep(struct rcar_gen4_pcie *rcar)
/* Common */
static int rcar_gen4_add_dw_pcie(struct rcar_gen4_pcie *rcar)
{
rcar->mode = (uintptr_t)of_device_get_match_data(&rcar->pdev->dev);
rcar->drvdata = of_device_get_match_data(&rcar->pdev->dev);
if (!rcar->drvdata)
return -EINVAL;
switch (rcar->mode) {
switch (rcar->drvdata->mode) {
case DW_PCIE_RC_TYPE:
return rcar_gen4_add_dw_pcie_rp(rcar);
case DW_PCIE_EP_TYPE:
@ -494,7 +517,7 @@ err_unprepare:
static void rcar_gen4_remove_dw_pcie(struct rcar_gen4_pcie *rcar)
{
switch (rcar->mode) {
switch (rcar->drvdata->mode) {
case DW_PCIE_RC_TYPE:
rcar_gen4_remove_dw_pcie_rp(rcar);
break;
@ -514,14 +537,227 @@ static void rcar_gen4_pcie_remove(struct platform_device *pdev)
rcar_gen4_pcie_unprepare(rcar);
}
static int r8a779f0_pcie_ltssm_control(struct rcar_gen4_pcie *rcar, bool enable)
{
u32 val;
val = readl(rcar->base + PCIERSTCTRL1);
if (enable) {
val |= APP_LTSSM_ENABLE;
val &= ~APP_HOLD_PHY_RST;
} else {
/*
* Since the R-Car datasheet doesn't describe how to assert
* APP_HOLD_PHY_RST, don't assert it again; otherwise,
* dw_edma_core_off() hung when the controller didn't detect
* a PCI device.
*/
val &= ~APP_LTSSM_ENABLE;
}
writel(val, rcar->base + PCIERSTCTRL1);
return 0;
}
static void rcar_gen4_pcie_additional_common_init(struct rcar_gen4_pcie *rcar)
{
struct dw_pcie *dw = &rcar->dw;
u32 val;
val = dw_pcie_readl_dbi(dw, PCIE_PORT_LANE_SKEW);
val &= ~PORT_LANE_SKEW_INSERT_MASK;
if (dw->num_lanes < 4)
val |= BIT(6);
dw_pcie_writel_dbi(dw, PCIE_PORT_LANE_SKEW, val);
val = readl(rcar->base + PCIEPWRMNGCTRL);
val |= APP_CLK_REQ_N | APP_CLK_PM_EN;
writel(val, rcar->base + PCIEPWRMNGCTRL);
}
static void rcar_gen4_pcie_phy_reg_update_bits(struct rcar_gen4_pcie *rcar,
u32 offset, u32 mask, u32 val)
{
u32 tmp;
tmp = readl(rcar->phy_base + offset);
tmp &= ~mask;
tmp |= val;
writel(tmp, rcar->phy_base + offset);
}
/*
* The SoC datasheet suggests checking port logic register bits during
* firmware write. If the read returns a non-zero value, this function
* returns -EAGAIN to indicate that the write must be retried; if the
* read returns zero, it returns 0 to indicate success.
*/
static int rcar_gen4_pcie_reg_test_bit(struct rcar_gen4_pcie *rcar,
u32 offset, u32 mask)
{
struct dw_pcie *dw = &rcar->dw;
if (dw_pcie_readl_dbi(dw, offset) & mask)
return -EAGAIN;
return 0;
}
static int rcar_gen4_pcie_download_phy_firmware(struct rcar_gen4_pcie *rcar)
{
/* The check_addr values are magic numbers from the datasheet */
const u32 check_addr[] = { 0x00101018, 0x00101118, 0x00101021, 0x00101121};
struct dw_pcie *dw = &rcar->dw;
const struct firmware *fw;
unsigned int i, timeout;
u32 data;
int ret;
ret = request_firmware(&fw, RCAR_GEN4_PCIE_FIRMWARE_NAME, dw->dev);
if (ret) {
dev_err(dw->dev, "Failed to load firmware (%s): %d\n",
RCAR_GEN4_PCIE_FIRMWARE_NAME, ret);
return ret;
}
for (i = 0; i < (fw->size / 2); i++) {
data = fw->data[(i * 2) + 1] << 8 | fw->data[i * 2];
timeout = 100;
do {
dw_pcie_writel_dbi(dw, PRTLGC89, RCAR_GEN4_PCIE_FIRMWARE_BASE_ADDR + i);
dw_pcie_writel_dbi(dw, PRTLGC90, data);
if (!rcar_gen4_pcie_reg_test_bit(rcar, PRTLGC89, BIT(30)))
break;
if (!(--timeout)) {
ret = -ETIMEDOUT;
goto exit;
}
usleep_range(100, 200);
} while (1);
}
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(17), BIT(17));
for (i = 0; i < ARRAY_SIZE(check_addr); i++) {
timeout = 100;
do {
dw_pcie_writel_dbi(dw, PRTLGC89, check_addr[i]);
ret = rcar_gen4_pcie_reg_test_bit(rcar, PRTLGC89, BIT(30));
ret |= rcar_gen4_pcie_reg_test_bit(rcar, PRTLGC90, BIT(0));
if (!ret)
break;
if (!(--timeout)) {
ret = -ETIMEDOUT;
goto exit;
}
usleep_range(100, 200);
} while (1);
}
exit:
release_firmware(fw);
return ret;
}
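The write loop above packs two firmware bytes at a time into a little-endian 16-bit word before pushing it through PRTLGC89/PRTLGC90. Assuming the kernel's unaligned helpers, the open-coded shift is equivalent to:

#include <linux/firmware.h>
#include <asm/unaligned.h>

static u16 example_fw_word(const struct firmware *fw, unsigned int i)
{
	/* same as: fw->data[(i * 2) + 1] << 8 | fw->data[i * 2] */
	return get_unaligned_le16(&fw->data[i * 2]);
}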
static int rcar_gen4_pcie_ltssm_control(struct rcar_gen4_pcie *rcar, bool enable)
{
struct dw_pcie *dw = &rcar->dw;
u32 val;
int ret;
if (!enable) {
val = readl(rcar->base + PCIERSTCTRL1);
val &= ~APP_LTSSM_ENABLE;
writel(val, rcar->base + PCIERSTCTRL1);
return 0;
}
val = dw_pcie_readl_dbi(dw, PCIE_PORT_FORCE);
val |= PORT_FORCE_DO_DESKEW_FOR_SRIS;
dw_pcie_writel_dbi(dw, PCIE_PORT_FORCE, val);
val = readl(rcar->base + PCIEMSR0);
val |= APP_SRIS_MODE;
writel(val, rcar->base + PCIEMSR0);
/*
* The R-Car Gen4 datasheet doesn't name the PHY registers, but the
* initialization procedure does list these offsets, so this driver
* uses magic offset numbers.
*/
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(28), 0);
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(20), 0);
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(12), 0);
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(4), 0);
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(23, 22), BIT(22));
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(18, 16), GENMASK(17, 16));
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(7, 6), BIT(6));
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(11, 0));
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x1d4, GENMASK(16, 15), GENMASK(16, 15));
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x514, BIT(26), BIT(26));
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(16), 0);
rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(19), BIT(19));
val = readl(rcar->base + PCIERSTCTRL1);
val &= ~APP_HOLD_PHY_RST;
writel(val, rcar->base + PCIERSTCTRL1);
ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, !(val & BIT(18)), 100, 10000);
if (ret < 0)
return ret;
ret = rcar_gen4_pcie_download_phy_firmware(rcar);
if (ret)
return ret;
val = readl(rcar->base + PCIERSTCTRL1);
val |= APP_LTSSM_ENABLE;
writel(val, rcar->base + PCIERSTCTRL1);
return 0;
}
static struct rcar_gen4_pcie_drvdata drvdata_r8a779f0_pcie = {
.ltssm_control = r8a779f0_pcie_ltssm_control,
.mode = DW_PCIE_RC_TYPE,
};
static struct rcar_gen4_pcie_drvdata drvdata_r8a779f0_pcie_ep = {
.ltssm_control = r8a779f0_pcie_ltssm_control,
.mode = DW_PCIE_EP_TYPE,
};
static struct rcar_gen4_pcie_drvdata drvdata_rcar_gen4_pcie = {
.additional_common_init = rcar_gen4_pcie_additional_common_init,
.ltssm_control = rcar_gen4_pcie_ltssm_control,
.mode = DW_PCIE_RC_TYPE,
};
static struct rcar_gen4_pcie_drvdata drvdata_rcar_gen4_pcie_ep = {
.additional_common_init = rcar_gen4_pcie_additional_common_init,
.ltssm_control = rcar_gen4_pcie_ltssm_control,
.mode = DW_PCIE_EP_TYPE,
};
static const struct of_device_id rcar_gen4_pcie_of_match[] = {
{
.compatible = "renesas,r8a779f0-pcie",
.data = &drvdata_r8a779f0_pcie,
},
{
.compatible = "renesas,r8a779f0-pcie-ep",
.data = &drvdata_r8a779f0_pcie_ep,
},
{
.compatible = "renesas,rcar-gen4-pcie",
.data = (void *)DW_PCIE_RC_TYPE,
.data = &drvdata_rcar_gen4_pcie,
},
{
.compatible = "renesas,rcar-gen4-pcie-ep",
.data = (void *)DW_PCIE_EP_TYPE,
.data = &drvdata_rcar_gen4_pcie_ep,
},
{},
};
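The rework above replaces the bare DW_PCIE_*_TYPE match data with a per-SoC drvdata struct whose optional hooks are checked before each call. A minimal sketch of that dispatch, with illustrative names:

struct example_pcie;

struct example_drvdata {
	/* both hooks are optional; NULL means nothing SoC-specific */
	int (*ltssm_control)(struct example_pcie *rcar, bool enable);
	void (*additional_common_init)(struct example_pcie *rcar);
};

struct example_pcie {
	const struct example_drvdata *drvdata;
};

static int example_start_link(struct example_pcie *rcar)
{
	if (rcar->drvdata->ltssm_control)
		return rcar->drvdata->ltssm_control(rcar, true);
	return 0;
}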


@ -13,7 +13,6 @@
#include <linux/clk.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/interconnect.h>
#include <linux/interrupt.h>
@ -21,7 +20,6 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
@ -308,10 +306,6 @@ static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
return readl_relaxed(pcie->appl_base + reg);
}
struct tegra_pcie_soc {
enum dw_pcie_device_mode mode;
};
static void tegra_pcie_icc_set(struct tegra_pcie_dw *pcie)
{
struct dw_pcie *pci = &pcie->pci;
@ -1715,6 +1709,7 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
if (ret)
dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
pci_epc_deinit_notify(pcie->pci.ep.epc);
dw_pcie_ep_cleanup(&pcie->pci.ep);
reset_control_assert(pcie->core_rst);
@ -1903,7 +1898,7 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
goto fail_init_complete;
}
dw_pcie_ep_init_notify(ep);
pci_epc_init_notify(ep->epc);
/* Program the private control to allow sending LTR upstream */
if (pcie->of_data->has_ltr_req_fix) {
@ -2015,6 +2010,7 @@ static const struct pci_epc_features tegra_pcie_epc_features = {
.bar[BAR_3] = { .type = BAR_RESERVED, },
.bar[BAR_4] = { .type = BAR_RESERVED, },
.bar[BAR_5] = { .type = BAR_RESERVED, },
.align = SZ_64K,
};
static const struct pci_epc_features*


@ -410,7 +410,7 @@ static int uniphier_pcie_ep_probe(struct platform_device *pdev)
return ret;
}
dw_pcie_ep_init_notify(&priv->pci.ep);
pci_epc_init_notify(priv->pci.ep.epc);
return 0;
}


@ -190,7 +190,7 @@ static void ls_g4_pcie_reset(struct work_struct *work)
ls_g4_pcie_enable_interrupt(pcie);
}
static struct mobiveil_rp_ops ls_g4_pcie_rp_ops = {
static const struct mobiveil_rp_ops ls_g4_pcie_rp_ops = {
.interrupt_init = ls_g4_pcie_interrupt_init,
};


@ -151,7 +151,7 @@ struct mobiveil_rp_ops {
struct mobiveil_root_port {
void __iomem *config_axi_slave_base; /* endpoint config base */
struct resource *ob_io_res;
struct mobiveil_rp_ops *ops;
const struct mobiveil_rp_ops *ops;
int irq;
raw_spinlock_t intx_mask_lock;
struct irq_domain *intx_domain;


@ -23,7 +23,6 @@
#include <linux/platform_device.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_gpio.h>
#include <linux/of_pci.h>
#include "../pci.h"


@ -73,10 +73,6 @@ int pci_host_common_probe(struct platform_device *pdev)
if (IS_ERR(cfg))
return PTR_ERR(cfg);
/* Do not reassign resources if probe only */
if (!pci_has_flag(PCI_PROBE_ONLY))
pci_add_flags(PCI_REASSIGN_ALL_BUS);
bridge->sysdata = cfg;
bridge->ops = (struct pci_ops *)&ops->pci_ops;
bridge->msi_domain = true;
@ -96,4 +92,5 @@ void pci_host_common_remove(struct platform_device *pdev)
}
EXPORT_SYMBOL_GPL(pci_host_common_remove);
MODULE_DESCRIPTION("Generic PCI host common driver");
MODULE_LICENSE("GPL v2");


@ -86,4 +86,5 @@ static struct platform_driver gen_pci_driver = {
};
module_platform_driver(gen_pci_driver);
MODULE_DESCRIPTION("Generic PCI host controller driver");
MODULE_LICENSE("GPL v2");


@ -1130,8 +1130,8 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
PCI_CAPABILITY_LIST) {
/* ROM BARs are unimplemented */
*val = 0;
} else if (where >= PCI_INTERRUPT_LINE && where + size <=
PCI_INTERRUPT_PIN) {
} else if ((where >= PCI_INTERRUPT_LINE && where + size <= PCI_INTERRUPT_PIN) ||
(where >= PCI_INTERRUPT_PIN && where + size <= PCI_MIN_GNT)) {
/*
* Interrupt Line and Interrupt PIN are hard-wired to zero
* because this front-end only supports message-signaled

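For context on the widened range check above, the relevant offsets from <linux/pci_regs.h> are PCI_INTERRUPT_LINE (0x3c), PCI_INTERRUPT_PIN (0x3d), and PCI_MIN_GNT (0x3e). The old single-clause test only admitted a 1-byte read at 0x3c; the added clause also admits a 1-byte read of Interrupt Pin at 0x3d, which must likewise read as zero on this MSI-only front end. A hedged restatement of the test:

#include <linux/pci_regs.h>
#include <linux/types.h>

static bool example_intr_reg_read_ok(int where, int size)
{
	/* admits (0x3c, 1) and (0x3d, 1); still rejects a 2-byte read at 0x3c */
	return (where >= PCI_INTERRUPT_LINE && where + size <= PCI_INTERRUPT_PIN) ||
	       (where >= PCI_INTERRUPT_PIN && where + size <= PCI_MIN_GNT);
}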

@ -163,6 +163,19 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON,
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON,
DEV_LS7A_HDMI, loongson_pci_pin_quirk);
static void loongson_pci_msi_quirk(struct pci_dev *dev)
{
u16 val, class = dev->class >> 8;
if (class != PCI_CLASS_BRIDGE_HOST)
return;
pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &val);
val |= PCI_MSI_FLAGS_ENABLE;
pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, val);
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, DEV_LS7A_PCIE_PORT5, loongson_pci_msi_quirk);
static struct loongson_pci *pci_bus_to_loongson_pci(struct pci_bus *bus)
{
struct pci_config_window *cfg;


@ -290,4 +290,5 @@ static void __exit altera_msi_exit(void)
subsys_initcall(altera_msi_init);
MODULE_DEVICE_TABLE(of, altera_msi_of_match);
module_exit(altera_msi_exit);
MODULE_DESCRIPTION("Altera PCIe MSI support driver");
MODULE_LICENSE("GPL v2");


@ -826,4 +826,5 @@ static struct platform_driver altera_pcie_driver = {
MODULE_DEVICE_TABLE(of, altera_pcie_of_match);
module_platform_driver(altera_pcie_driver);
MODULE_DESCRIPTION("Altera PCIe host controller driver");
MODULE_LICENSE("GPL v2");


@ -839,4 +839,5 @@ static struct platform_driver apple_pcie_driver = {
};
module_platform_driver(apple_pcie_driver);
MODULE_DESCRIPTION("Apple PCIe host bridge driver");
MODULE_LICENSE("GPL v2");


@ -1091,4 +1091,5 @@ static struct platform_driver mtk_pcie_driver = {
};
module_platform_driver(mtk_pcie_driver);
MODULE_DESCRIPTION("MediaTek Gen3 PCIe host controller driver");
MODULE_LICENSE("GPL v2");


@ -1252,4 +1252,5 @@ static struct platform_driver mtk_pcie_driver = {
},
};
module_platform_driver(mtk_pcie_driver);
MODULE_DESCRIPTION("MediaTek PCIe host controller driver");
MODULE_LICENSE("GPL v2");


@ -549,4 +549,5 @@ static struct platform_driver mt7621_pcie_driver = {
};
builtin_platform_driver(mt7621_pcie_driver);
MODULE_DESCRIPTION("MediaTek MT7621 PCIe host controller driver");
MODULE_LICENSE("GPL v2");


@ -78,7 +78,11 @@ static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base)
writel(L1IATN, pcie_base + PMCTLR);
ret = readl_poll_timeout_atomic(pcie_base + PMSR, val,
val & L1FAEG, 10, 1000);
WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret);
if (ret) {
dev_warn_ratelimited(pcie_dev,
"Timeout waiting for L1 link state, ret=%d\n",
ret);
}
writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
}


@ -322,8 +322,11 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
PCIE_CLIENT_CONFIG);
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(rockchip->ep_gpio, 1);
msleep(PCIE_T_RRS_READY_MS);
/* 500ms timeout value should be enough for Gen1/2 training */
err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
status, PCIE_LINK_UP(status), 20,


@ -121,7 +121,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
if (rockchip->is_rc) {
rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
GPIOD_OUT_HIGH);
GPIOD_OUT_LOW);
if (IS_ERR(rockchip->ep_gpio))
return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
"failed to get ep GPIO\n");

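Taken together, the two Rockchip hunks start with PERST# asserted at probe (GPIOD_OUT_LOW) and then observe the two generic 100 ms delays: PCIE_T_PVPERL_MS between power stable and PERST# deassertion, and the new PCIE_T_RRS_READY_MS before the first config access. A hedged sketch of the sequence; the constants come from the PCI core's private pci.h:

#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include "../pci.h"	/* PCIE_T_PVPERL_MS, PCIE_T_RRS_READY_MS */

static void example_perst_release(struct gpio_desc *perst)
{
	/* power and refclk must be stable for T_PVPERL before release */
	msleep(PCIE_T_PVPERL_MS);
	gpiod_set_value_cansleep(perst, 1);	/* deassert PERST# */
	/* give the device T_RRS_READY before the first config access */
	msleep(PCIE_T_RRS_READY_MS);
}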

@ -0,0 +1,30 @@
# SPDX-License-Identifier: GPL-2.0
menu "PLDA-based PCIe controllers"
depends on PCI
config PCIE_PLDA_HOST
bool
config PCIE_MICROCHIP_HOST
tristate "Microchip AXI PCIe controller"
depends on PCI_MSI && OF
select PCI_HOST_COMMON
select PCIE_PLDA_HOST
help
Say Y here if you want the kernel to support the Microchip AXI PCIe
Host Bridge driver.
config PCIE_STARFIVE_HOST
tristate "StarFive PCIe host controller"
depends on PCI_MSI && OF
depends on ARCH_STARFIVE || COMPILE_TEST
select PCIE_PLDA_HOST
help
Say Y here if you want to support the StarFive PCIe controller in
host mode. The StarFive PCIe controller uses the PLDA PCIe core.
If you choose to build this driver as a module, it will be
dynamically linked and the module will be called pcie-starfive.ko.
endmenu


@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_PLDA_HOST) += pcie-plda-host.o
obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o
obj-$(CONFIG_PCIE_STARFIVE_HOST) += pcie-starfive.o


@ -18,10 +18,8 @@
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../pci.h"
/* Number of MSI IRQs */
#define MC_MAX_NUM_MSI_IRQS 32
#include "../../pci.h"
#include "pcie-plda.h"
/* PCIe Bridge Phy and Controller Phy offsets */
#define MC_PCIE1_BRIDGE_ADDR 0x00008000u
@ -30,84 +28,6 @@
#define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR)
#define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR)
/* PCIe Bridge Phy Regs */
#define PCIE_PCI_IRQ_DW0 0xa8
#define MSIX_CAP_MASK BIT(31)
#define NUM_MSI_MSGS_MASK GENMASK(6, 4)
#define NUM_MSI_MSGS_SHIFT 4
#define IMASK_LOCAL 0x180
#define DMA_END_ENGINE_0_MASK 0x00000000u
#define DMA_END_ENGINE_0_SHIFT 0
#define DMA_END_ENGINE_1_MASK 0x00000000u
#define DMA_END_ENGINE_1_SHIFT 1
#define DMA_ERROR_ENGINE_0_MASK 0x00000100u
#define DMA_ERROR_ENGINE_0_SHIFT 8
#define DMA_ERROR_ENGINE_1_MASK 0x00000200u
#define DMA_ERROR_ENGINE_1_SHIFT 9
#define A_ATR_EVT_POST_ERR_MASK 0x00010000u
#define A_ATR_EVT_POST_ERR_SHIFT 16
#define A_ATR_EVT_FETCH_ERR_MASK 0x00020000u
#define A_ATR_EVT_FETCH_ERR_SHIFT 17
#define A_ATR_EVT_DISCARD_ERR_MASK 0x00040000u
#define A_ATR_EVT_DISCARD_ERR_SHIFT 18
#define A_ATR_EVT_DOORBELL_MASK 0x00000000u
#define A_ATR_EVT_DOORBELL_SHIFT 19
#define P_ATR_EVT_POST_ERR_MASK 0x00100000u
#define P_ATR_EVT_POST_ERR_SHIFT 20
#define P_ATR_EVT_FETCH_ERR_MASK 0x00200000u
#define P_ATR_EVT_FETCH_ERR_SHIFT 21
#define P_ATR_EVT_DISCARD_ERR_MASK 0x00400000u
#define P_ATR_EVT_DISCARD_ERR_SHIFT 22
#define P_ATR_EVT_DOORBELL_MASK 0x00000000u
#define P_ATR_EVT_DOORBELL_SHIFT 23
#define PM_MSI_INT_INTA_MASK 0x01000000u
#define PM_MSI_INT_INTA_SHIFT 24
#define PM_MSI_INT_INTB_MASK 0x02000000u
#define PM_MSI_INT_INTB_SHIFT 25
#define PM_MSI_INT_INTC_MASK 0x04000000u
#define PM_MSI_INT_INTC_SHIFT 26
#define PM_MSI_INT_INTD_MASK 0x08000000u
#define PM_MSI_INT_INTD_SHIFT 27
#define PM_MSI_INT_INTX_MASK 0x0f000000u
#define PM_MSI_INT_INTX_SHIFT 24
#define PM_MSI_INT_MSI_MASK 0x10000000u
#define PM_MSI_INT_MSI_SHIFT 28
#define PM_MSI_INT_AER_EVT_MASK 0x20000000u
#define PM_MSI_INT_AER_EVT_SHIFT 29
#define PM_MSI_INT_EVENTS_MASK 0x40000000u
#define PM_MSI_INT_EVENTS_SHIFT 30
#define PM_MSI_INT_SYS_ERR_MASK 0x80000000u
#define PM_MSI_INT_SYS_ERR_SHIFT 31
#define NUM_LOCAL_EVENTS 15
#define ISTATUS_LOCAL 0x184
#define IMASK_HOST 0x188
#define ISTATUS_HOST 0x18c
#define IMSI_ADDR 0x190
#define ISTATUS_MSI 0x194
/* PCIe Master table init defines */
#define ATR0_PCIE_WIN0_SRCADDR_PARAM 0x600u
#define ATR0_PCIE_ATR_SIZE 0x25
#define ATR0_PCIE_ATR_SIZE_SHIFT 1
#define ATR0_PCIE_WIN0_SRC_ADDR 0x604u
#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB 0x608u
#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW 0x60cu
#define ATR0_PCIE_WIN0_TRSL_PARAM 0x610u
/* PCIe AXI slave table init defines */
#define ATR0_AXI4_SLV0_SRCADDR_PARAM 0x800u
#define ATR_SIZE_SHIFT 1
#define ATR_IMPL_ENABLE 1
#define ATR0_AXI4_SLV0_SRC_ADDR 0x804u
#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB 0x808u
#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW 0x80cu
#define ATR0_AXI4_SLV0_TRSL_PARAM 0x810u
#define PCIE_TX_RX_INTERFACE 0x00000000u
#define PCIE_CONFIG_INTERFACE 0x00000001u
#define ATR_ENTRY_SIZE 32
/* PCIe Controller Phy Regs */
#define SEC_ERROR_EVENT_CNT 0x20
#define DED_ERROR_EVENT_CNT 0x24
@ -179,20 +99,21 @@
#define EVENT_LOCAL_DMA_END_ENGINE_1 12
#define EVENT_LOCAL_DMA_ERROR_ENGINE_0 13
#define EVENT_LOCAL_DMA_ERROR_ENGINE_1 14
#define EVENT_LOCAL_A_ATR_EVT_POST_ERR 15
#define EVENT_LOCAL_A_ATR_EVT_FETCH_ERR 16
#define EVENT_LOCAL_A_ATR_EVT_DISCARD_ERR 17
#define EVENT_LOCAL_A_ATR_EVT_DOORBELL 18
#define EVENT_LOCAL_P_ATR_EVT_POST_ERR 19
#define EVENT_LOCAL_P_ATR_EVT_FETCH_ERR 20
#define EVENT_LOCAL_P_ATR_EVT_DISCARD_ERR 21
#define EVENT_LOCAL_P_ATR_EVT_DOORBELL 22
#define EVENT_LOCAL_PM_MSI_INT_INTX 23
#define EVENT_LOCAL_PM_MSI_INT_MSI 24
#define EVENT_LOCAL_PM_MSI_INT_AER_EVT 25
#define EVENT_LOCAL_PM_MSI_INT_EVENTS 26
#define EVENT_LOCAL_PM_MSI_INT_SYS_ERR 27
#define NUM_EVENTS 28
#define NUM_MC_EVENTS 15
#define EVENT_LOCAL_A_ATR_EVT_POST_ERR (NUM_MC_EVENTS + PLDA_AXI_POST_ERR)
#define EVENT_LOCAL_A_ATR_EVT_FETCH_ERR (NUM_MC_EVENTS + PLDA_AXI_FETCH_ERR)
#define EVENT_LOCAL_A_ATR_EVT_DISCARD_ERR (NUM_MC_EVENTS + PLDA_AXI_DISCARD_ERR)
#define EVENT_LOCAL_A_ATR_EVT_DOORBELL (NUM_MC_EVENTS + PLDA_AXI_DOORBELL)
#define EVENT_LOCAL_P_ATR_EVT_POST_ERR (NUM_MC_EVENTS + PLDA_PCIE_POST_ERR)
#define EVENT_LOCAL_P_ATR_EVT_FETCH_ERR (NUM_MC_EVENTS + PLDA_PCIE_FETCH_ERR)
#define EVENT_LOCAL_P_ATR_EVT_DISCARD_ERR (NUM_MC_EVENTS + PLDA_PCIE_DISCARD_ERR)
#define EVENT_LOCAL_P_ATR_EVT_DOORBELL (NUM_MC_EVENTS + PLDA_PCIE_DOORBELL)
#define EVENT_LOCAL_PM_MSI_INT_INTX (NUM_MC_EVENTS + PLDA_INTX)
#define EVENT_LOCAL_PM_MSI_INT_MSI (NUM_MC_EVENTS + PLDA_MSI)
#define EVENT_LOCAL_PM_MSI_INT_AER_EVT (NUM_MC_EVENTS + PLDA_AER_EVENT)
#define EVENT_LOCAL_PM_MSI_INT_EVENTS (NUM_MC_EVENTS + PLDA_MISC_EVENTS)
#define EVENT_LOCAL_PM_MSI_INT_SYS_ERR (NUM_MC_EVENTS + PLDA_SYS_ERR)
#define NUM_EVENTS (NUM_MC_EVENTS + PLDA_INT_EVENT_NUM)
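The remapping preserves the old literal event numbers, assuming pcie-plda.h counts its shared events from zero in the order used above; a quick arithmetic check:

/*
 * With NUM_MC_EVENTS = 15 and PLDA_AXI_POST_ERR = 0:
 *   EVENT_LOCAL_A_ATR_EVT_POST_ERR = 15 + 0  = 15  (old literal: 15)
 *   EVENT_LOCAL_PM_MSI_INT_SYS_ERR = 15 + 12 = 27  (old literal: 27)
 *   NUM_EVENTS                     = 15 + 13 = 28  (old literal: 28)
 * so PLDA_INT_EVENT_NUM must be 13 for the table size to match.
 */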
#define PCIE_EVENT_CAUSE(x, s) \
[EVENT_PCIE_ ## x] = { __stringify(x), s }
@ -255,22 +176,10 @@ struct event_map {
u32 event_bit;
};
struct mc_msi {
struct mutex lock; /* Protect used bitmap */
struct irq_domain *msi_domain;
struct irq_domain *dev_domain;
u32 num_vectors;
u64 vector_phy;
DECLARE_BITMAP(used, MC_MAX_NUM_MSI_IRQS);
};
struct mc_pcie {
struct plda_pcie_rp plda;
void __iomem *axi_base_addr;
struct device *dev;
struct irq_domain *intx_domain;
struct irq_domain *event_domain;
raw_spinlock_t lock;
struct mc_msi msi;
};
struct cause {
@ -388,7 +297,7 @@ static struct mc_pcie *port;
static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *ecam)
{
struct mc_msi *msi = &port->msi;
struct plda_msi *msi = &port->plda.msi;
u16 reg;
u8 queue_size;
@ -409,246 +318,6 @@ static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *ecam)
ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_ADDRESS_HI);
}
static void mc_handle_msi(struct irq_desc *desc)
{
struct mc_pcie *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct device *dev = port->dev;
struct mc_msi *msi = &port->msi;
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long status;
u32 bit;
int ret;
chained_irq_enter(chip, desc);
status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
if (status & PM_MSI_INT_MSI_MASK) {
writel_relaxed(status & PM_MSI_INT_MSI_MASK, bridge_base_addr + ISTATUS_LOCAL);
status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
for_each_set_bit(bit, &status, msi->num_vectors) {
ret = generic_handle_domain_irq(msi->dev_domain, bit);
if (ret)
dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
bit);
}
}
chained_irq_exit(chip, desc);
}
static void mc_msi_bottom_irq_ack(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
u32 bitpos = data->hwirq;
writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
}
static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
phys_addr_t addr = port->msi.vector_phy;
msg->address_lo = lower_32_bits(addr);
msg->address_hi = upper_32_bits(addr);
msg->data = data->hwirq;
dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
(int)data->hwirq, msg->address_hi, msg->address_lo);
}
static int mc_msi_set_affinity(struct irq_data *irq_data,
const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static struct irq_chip mc_msi_bottom_irq_chip = {
.name = "Microchip MSI",
.irq_ack = mc_msi_bottom_irq_ack,
.irq_compose_msi_msg = mc_compose_msi_msg,
.irq_set_affinity = mc_msi_set_affinity,
};
static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct mc_pcie *port = domain->host_data;
struct mc_msi *msi = &port->msi;
unsigned long bit;
mutex_lock(&msi->lock);
bit = find_first_zero_bit(msi->used, msi->num_vectors);
if (bit >= msi->num_vectors) {
mutex_unlock(&msi->lock);
return -ENOSPC;
}
set_bit(bit, msi->used);
irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip,
domain->host_data, handle_edge_irq, NULL, NULL);
mutex_unlock(&msi->lock);
return 0;
}
static void mc_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct mc_pcie *port = irq_data_get_irq_chip_data(d);
struct mc_msi *msi = &port->msi;
mutex_lock(&msi->lock);
if (test_bit(d->hwirq, msi->used))
__clear_bit(d->hwirq, msi->used);
else
dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
mutex_unlock(&msi->lock);
}
static const struct irq_domain_ops msi_domain_ops = {
.alloc = mc_irq_msi_domain_alloc,
.free = mc_irq_msi_domain_free,
};
static struct irq_chip mc_msi_irq_chip = {
.name = "Microchip PCIe MSI",
.irq_ack = irq_chip_ack_parent,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
static struct msi_domain_info mc_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_PCI_MSIX),
.chip = &mc_msi_irq_chip,
};
static int mc_allocate_msi_domains(struct mc_pcie *port)
{
struct device *dev = port->dev;
struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
struct mc_msi *msi = &port->msi;
mutex_init(&port->msi.lock);
msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
&msi_domain_ops, port);
if (!msi->dev_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode, &mc_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain) {
dev_err(dev, "failed to create MSI domain\n");
irq_domain_remove(msi->dev_domain);
return -ENOMEM;
}
return 0;
}
static void mc_handle_intx(struct irq_desc *desc)
{
struct mc_pcie *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct device *dev = port->dev;
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long status;
u32 bit;
int ret;
chained_irq_enter(chip, desc);
status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
if (status & PM_MSI_INT_INTX_MASK) {
status &= PM_MSI_INT_INTX_MASK;
status >>= PM_MSI_INT_INTX_SHIFT;
for_each_set_bit(bit, &status, PCI_NUM_INTX) {
ret = generic_handle_domain_irq(port->intx_domain, bit);
if (ret)
dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
bit);
}
}
chained_irq_exit(chip, desc);
}
static void mc_ack_intx_irq(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
}
static void mc_mask_intx_irq(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long flags;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
val &= ~mask;
writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static void mc_unmask_intx_irq(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
unsigned long flags;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
val |= mask;
writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static struct irq_chip mc_intx_irq_chip = {
.name = "Microchip PCIe INTx",
.irq_ack = mc_ack_intx_irq,
.irq_mask = mc_mask_intx_irq,
.irq_unmask = mc_unmask_intx_irq,
};
static int mc_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &mc_intx_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
}
static const struct irq_domain_ops intx_domain_ops = {
.map = mc_pcie_intx_map,
};
static inline u32 reg_to_event(u32 reg, struct event_map field)
{
return (reg & field.reg_mask) ? BIT(field.event_bit) : 0;
@ -706,21 +375,22 @@ static u32 local_events(struct mc_pcie *port)
return val;
}
static u32 get_events(struct mc_pcie *port)
static u32 mc_get_events(struct plda_pcie_rp *port)
{
struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
u32 events = 0;
events |= pcie_events(port);
events |= sec_errors(port);
events |= ded_errors(port);
events |= local_events(port);
events |= pcie_events(mc_port);
events |= sec_errors(mc_port);
events |= ded_errors(mc_port);
events |= local_events(mc_port);
return events;
}
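With struct plda_pcie_rp now embedded in struct mc_pcie, the shared PLDA callbacks receive the generic struct and the Microchip code recovers its wrapper with container_of(), as mc_get_events() does above. A minimal sketch with illustrative names:

#include <linux/container_of.h>
#include <linux/types.h>

struct device;

struct example_plda_rp {
	struct device *dev;
};

struct example_mc_pcie {
	struct example_plda_rp plda;	/* embedded, not a pointer */
	void __iomem *axi_base_addr;
};

static struct example_mc_pcie *to_example_mc(struct example_plda_rp *port)
{
	/* walks back from the embedded member to the enclosing struct */
	return container_of(port, struct example_mc_pcie, plda);
}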
static irqreturn_t mc_event_handler(int irq, void *dev_id)
{
struct mc_pcie *port = dev_id;
struct plda_pcie_rp *port = dev_id;
struct device *dev = port->dev;
struct irq_data *data;
@ -734,31 +404,15 @@ static irqreturn_t mc_event_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
static void mc_handle_event(struct irq_desc *desc)
{
struct mc_pcie *port = irq_desc_get_handler_data(desc);
unsigned long events;
u32 bit;
struct irq_chip *chip = irq_desc_get_chip(desc);
chained_irq_enter(chip, desc);
events = get_events(port);
for_each_set_bit(bit, &events, NUM_EVENTS)
generic_handle_domain_irq(port->event_domain, bit);
chained_irq_exit(chip, desc);
}
static void mc_ack_event_irq(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
u32 event = data->hwirq;
void __iomem *addr;
u32 mask;
addr = port->axi_base_addr + event_descs[event].base +
addr = mc_port->axi_base_addr + event_descs[event].base +
event_descs[event].offset;
mask = event_descs[event].mask;
mask |= event_descs[event].enb_mask;
@ -768,13 +422,14 @@ static void mc_ack_event_irq(struct irq_data *data)
static void mc_mask_event_irq(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
u32 event = data->hwirq;
void __iomem *addr;
u32 mask;
u32 val;
addr = port->axi_base_addr + event_descs[event].base +
addr = mc_port->axi_base_addr + event_descs[event].base +
event_descs[event].mask_offset;
mask = event_descs[event].mask;
if (event_descs[event].enb_mask) {
@ -798,13 +453,14 @@ static void mc_mask_event_irq(struct irq_data *data)
static void mc_unmask_event_irq(struct irq_data *data)
{
struct mc_pcie *port = irq_data_get_irq_chip_data(data);
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
u32 event = data->hwirq;
void __iomem *addr;
u32 mask;
u32 val;
addr = port->axi_base_addr + event_descs[event].base +
addr = mc_port->axi_base_addr + event_descs[event].base +
event_descs[event].mask_offset;
mask = event_descs[event].mask;
@ -834,19 +490,6 @@ static struct irq_chip mc_event_irq_chip = {
.irq_unmask = mc_unmask_event_irq,
};
static int mc_pcie_event_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &mc_event_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
}
static const struct irq_domain_ops event_domain_ops = {
.map = mc_pcie_event_map,
};
static inline void mc_pcie_deinit_clk(void *data)
{
struct clk *clk = data;
@ -892,105 +535,22 @@ static int mc_pcie_init_clks(struct device *dev)
return 0;
}
static int mc_pcie_init_irq_domains(struct mc_pcie *port)
static int mc_request_event_irq(struct plda_pcie_rp *plda, int event_irq,
int event)
{
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node;
/* Setup INTx */
pcie_intc_node = of_get_next_child(node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "failed to find PCIe Intc node\n");
return -EINVAL;
}
port->event_domain = irq_domain_add_linear(pcie_intc_node, NUM_EVENTS,
&event_domain_ops, port);
if (!port->event_domain) {
dev_err(dev, "failed to get event domain\n");
of_node_put(pcie_intc_node);
return -ENOMEM;
}
irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
&intx_domain_ops, port);
if (!port->intx_domain) {
dev_err(dev, "failed to get an INTx IRQ domain\n");
of_node_put(pcie_intc_node);
return -ENOMEM;
}
irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
of_node_put(pcie_intc_node);
raw_spin_lock_init(&port->lock);
return mc_allocate_msi_domains(port);
return devm_request_irq(plda->dev, event_irq, mc_event_handler,
0, event_cause[event].sym, plda);
}
-static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
-				 phys_addr_t axi_addr, phys_addr_t pci_addr,
-				 size_t size)
-{
-	u32 atr_sz = ilog2(size) - 1;
-	u32 val;
+static const struct plda_event_ops mc_event_ops = {
+	.get_events = mc_get_events,
+};
-	if (index == 0)
-		val = PCIE_CONFIG_INTERFACE;
-	else
-		val = PCIE_TX_RX_INTERFACE;
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_PARAM);
-	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
-	      ATR_IMPL_ENABLE;
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRCADDR_PARAM);
-	val = upper_32_bits(axi_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRC_ADDR);
-	val = lower_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
-	val = upper_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
-	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
-	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
-}
-static int mc_pcie_setup_windows(struct platform_device *pdev,
-				 struct mc_pcie *port)
-{
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
-	struct resource_entry *entry;
-	u64 pci_addr;
-	u32 index = 1;
-	resource_list_for_each_entry(entry, &bridge->windows) {
-		if (resource_type(entry->res) == IORESOURCE_MEM) {
-			pci_addr = entry->res->start - entry->offset;
-			mc_pcie_setup_window(bridge_base_addr, index,
-					     entry->res->start, pci_addr,
-					     resource_size(entry->res));
-			index++;
-		}
-	}
-	return 0;
-}
+static const struct plda_event mc_event = {
+	.request_event_irq = mc_request_event_irq,
+	.intx_event = EVENT_LOCAL_PM_MSI_INT_INTX,
+	.msi_event = EVENT_LOCAL_PM_MSI_INT_MSI,
+};
static inline void mc_clear_secs(struct mc_pcie *port)
{
@@ -1052,85 +612,34 @@ static void mc_disable_interrupts(struct mc_pcie *port)
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
}
static int mc_init_interrupts(struct platform_device *pdev, struct mc_pcie *port)
{
struct device *dev = &pdev->dev;
int irq;
int i, intx_irq, msi_irq, event_irq;
int ret;
ret = mc_pcie_init_irq_domains(port);
if (ret) {
dev_err(dev, "failed creating IRQ domains\n");
return ret;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return -ENODEV;
for (i = 0; i < NUM_EVENTS; i++) {
event_irq = irq_create_mapping(port->event_domain, i);
if (!event_irq) {
dev_err(dev, "failed to map hwirq %d\n", i);
return -ENXIO;
}
ret = devm_request_irq(dev, event_irq, mc_event_handler,
0, event_cause[i].sym, port);
if (ret) {
dev_err(dev, "failed to request IRQ %d\n", event_irq);
return ret;
}
}
intx_irq = irq_create_mapping(port->event_domain,
EVENT_LOCAL_PM_MSI_INT_INTX);
if (!intx_irq) {
dev_err(dev, "failed to map INTx interrupt\n");
return -ENXIO;
}
/* Plug the INTx chained handler */
irq_set_chained_handler_and_data(intx_irq, mc_handle_intx, port);
msi_irq = irq_create_mapping(port->event_domain,
EVENT_LOCAL_PM_MSI_INT_MSI);
if (!msi_irq)
return -ENXIO;
/* Plug the MSI chained handler */
irq_set_chained_handler_and_data(msi_irq, mc_handle_msi, port);
/* Plug the main event chained handler */
irq_set_chained_handler_and_data(irq, mc_handle_event, port);
return 0;
}
static int mc_platform_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *pdev = to_platform_device(dev);
struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
int ret;
/* Configure address translation table 0 for PCIe config space */
-	mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
-			     cfg->res.start,
-			     resource_size(&cfg->res));
+	plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
+			       cfg->res.start,
+			       resource_size(&cfg->res));
/* Need some fixups in config space */
mc_pcie_enable_msi(port, cfg->win);
/* Configure non-config space outbound ranges */
-	ret = mc_pcie_setup_windows(pdev, port);
+	ret = plda_pcie_setup_iomems(bridge, &port->plda);
	if (ret)
		return ret;
+	port->plda.event_ops = &mc_event_ops;
+	port->plda.event_irq_chip = &mc_event_irq_chip;
+	port->plda.events_bitmap = GENMASK(NUM_EVENTS - 1, 0);
	/* Address translation is up; safe to enable interrupts */
-	ret = mc_init_interrupts(pdev, port);
+	ret = plda_init_interrupts(pdev, &port->plda, &mc_event);
if (ret)
return ret;
@@ -1141,6 +650,7 @@ static int mc_host_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
void __iomem *bridge_base_addr;
struct plda_pcie_rp *plda;
int ret;
u32 val;
@@ -1148,7 +658,8 @@ static int mc_host_probe(struct platform_device *pdev)
if (!port)
return -ENOMEM;
-	port->dev = dev;
+	plda = &port->plda;
+	plda->dev = dev;
port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
if (IS_ERR(port->axi_base_addr))
@@ -1157,6 +668,8 @@ static int mc_host_probe(struct platform_device *pdev)
mc_disable_interrupts(port);
bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
plda->bridge_addr = bridge_base_addr;
plda->num_events = NUM_EVENTS;
/* Allow enabling MSI by disabling MSI-X */
val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
@@ -1168,10 +681,10 @@ static int mc_host_probe(struct platform_device *pdev)
val &= NUM_MSI_MSGS_MASK;
val >>= NUM_MSI_MSGS_SHIFT;
-	port->msi.num_vectors = 1 << val;
+	plda->msi.num_vectors = 1 << val;
/* Pick vector address from design */
-	port->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
+	plda->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
ret = mc_pcie_init_clks(dev);
if (ret) {


@@ -0,0 +1,651 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PLDA PCIe XpressRich host controller driver
*
* Copyright (C) 2023 Microchip Co. Ltd
* StarFive Co. Ltd
*
* Author: Daire McNamara <daire.mcnamara@microchip.com>
*/
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/pci_regs.h>
#include <linux/pci-ecam.h>
#include "pcie-plda.h"
void __iomem *plda_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct plda_pcie_rp *pcie = bus->sysdata;
return pcie->config_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
}
EXPORT_SYMBOL_GPL(plda_pcie_map_bus);
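/*
 * Editor's illustrative sketch (not part of the driver), runnable in
 * userspace: the standard ECAM offset math assumed by plda_pcie_map_bus()
 * packs bus, devfn and register offset into a single offset,
 * bus << 20 | devfn << 12 | where.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int bus = 1, devfn = (2 << 3) | 0, where = 0x10;	/* 01:02.0, BAR0 */
	uint32_t off = (bus << 20) | (devfn << 12) | where;

	printf("%08x\n", off);	/* prints 00110010 */
	return 0;
}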
static void plda_handle_msi(struct irq_desc *desc)
{
struct plda_pcie_rp *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct device *dev = port->dev;
struct plda_msi *msi = &port->msi;
void __iomem *bridge_base_addr = port->bridge_addr;
unsigned long status;
u32 bit;
int ret;
chained_irq_enter(chip, desc);
status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
if (status & PM_MSI_INT_MSI_MASK) {
writel_relaxed(status & PM_MSI_INT_MSI_MASK,
bridge_base_addr + ISTATUS_LOCAL);
status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
for_each_set_bit(bit, &status, msi->num_vectors) {
ret = generic_handle_domain_irq(msi->dev_domain, bit);
if (ret)
dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
bit);
}
}
chained_irq_exit(chip, desc);
}
static void plda_msi_bottom_irq_ack(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr = port->bridge_addr;
u32 bitpos = data->hwirq;
writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
}
static void plda_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
phys_addr_t addr = port->msi.vector_phy;
msg->address_lo = lower_32_bits(addr);
msg->address_hi = upper_32_bits(addr);
msg->data = data->hwirq;
dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
(int)data->hwirq, msg->address_hi, msg->address_lo);
}
static int plda_msi_set_affinity(struct irq_data *irq_data,
const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static struct irq_chip plda_msi_bottom_irq_chip = {
.name = "PLDA MSI",
.irq_ack = plda_msi_bottom_irq_ack,
.irq_compose_msi_msg = plda_compose_msi_msg,
.irq_set_affinity = plda_msi_set_affinity,
};
static int plda_irq_msi_domain_alloc(struct irq_domain *domain,
unsigned int virq,
unsigned int nr_irqs,
void *args)
{
struct plda_pcie_rp *port = domain->host_data;
struct plda_msi *msi = &port->msi;
unsigned long bit;
mutex_lock(&msi->lock);
bit = find_first_zero_bit(msi->used, msi->num_vectors);
if (bit >= msi->num_vectors) {
mutex_unlock(&msi->lock);
return -ENOSPC;
}
set_bit(bit, msi->used);
irq_domain_set_info(domain, virq, bit, &plda_msi_bottom_irq_chip,
domain->host_data, handle_edge_irq, NULL, NULL);
mutex_unlock(&msi->lock);
return 0;
}
static void plda_irq_msi_domain_free(struct irq_domain *domain,
unsigned int virq,
unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(d);
struct plda_msi *msi = &port->msi;
mutex_lock(&msi->lock);
if (test_bit(d->hwirq, msi->used))
__clear_bit(d->hwirq, msi->used);
else
dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
mutex_unlock(&msi->lock);
}
static const struct irq_domain_ops msi_domain_ops = {
.alloc = plda_irq_msi_domain_alloc,
.free = plda_irq_msi_domain_free,
};
static struct irq_chip plda_msi_irq_chip = {
.name = "PLDA PCIe MSI",
.irq_ack = irq_chip_ack_parent,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
static struct msi_domain_info plda_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_PCI_MSIX),
.chip = &plda_msi_irq_chip,
};
static int plda_allocate_msi_domains(struct plda_pcie_rp *port)
{
struct device *dev = port->dev;
struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
struct plda_msi *msi = &port->msi;
mutex_init(&port->msi.lock);
msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
&msi_domain_ops, port);
if (!msi->dev_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&plda_msi_domain_info,
msi->dev_domain);
if (!msi->msi_domain) {
dev_err(dev, "failed to create MSI domain\n");
irq_domain_remove(msi->dev_domain);
return -ENOMEM;
}
return 0;
}
static void plda_handle_intx(struct irq_desc *desc)
{
struct plda_pcie_rp *port = irq_desc_get_handler_data(desc);
struct irq_chip *chip = irq_desc_get_chip(desc);
struct device *dev = port->dev;
void __iomem *bridge_base_addr = port->bridge_addr;
unsigned long status;
u32 bit;
int ret;
chained_irq_enter(chip, desc);
status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
if (status & PM_MSI_INT_INTX_MASK) {
status &= PM_MSI_INT_INTX_MASK;
status >>= PM_MSI_INT_INTX_SHIFT;
for_each_set_bit(bit, &status, PCI_NUM_INTX) {
ret = generic_handle_domain_irq(port->intx_domain, bit);
if (ret)
dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
bit);
}
}
chained_irq_exit(chip, desc);
}
static void plda_ack_intx_irq(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr = port->bridge_addr;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
}
static void plda_mask_intx_irq(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr = port->bridge_addr;
unsigned long flags;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
val &= ~mask;
writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static void plda_unmask_intx_irq(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
void __iomem *bridge_base_addr = port->bridge_addr;
unsigned long flags;
u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
u32 val;
raw_spin_lock_irqsave(&port->lock, flags);
val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
val |= mask;
writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
raw_spin_unlock_irqrestore(&port->lock, flags);
}
static struct irq_chip plda_intx_irq_chip = {
.name = "PLDA PCIe INTx",
.irq_ack = plda_ack_intx_irq,
.irq_mask = plda_mask_intx_irq,
.irq_unmask = plda_unmask_intx_irq,
};
static int plda_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &plda_intx_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
}
static const struct irq_domain_ops intx_domain_ops = {
.map = plda_pcie_intx_map,
};
static u32 plda_get_events(struct plda_pcie_rp *port)
{
u32 events, val, origin;
origin = readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);
/* MSI event and sys events */
val = (origin & SYS_AND_MSI_MASK) >> PM_MSI_INT_MSI_SHIFT;
events = val << (PM_MSI_INT_MSI_SHIFT - PCI_NUM_INTX + 1);
/* INTx events */
if (origin & PM_MSI_INT_INTX_MASK)
events |= BIT(PM_MSI_INT_INTX_SHIFT);
/* the remaining bits map 1:1 to the register layout */
events |= origin & GENMASK(P_ATR_EVT_DOORBELL_SHIFT, 0);
return events;
}
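/*
 * Editor's illustrative sketch (not part of the driver), runnable in
 * userspace: plda_get_events() folds the 32-bit ISTATUS_LOCAL value into
 * hwirq numbers -- bits 24-27 (INTA-INTD) collapse into the single INTx
 * event 24, and bits 28-31 shift down to events 25-28. Constants mirror
 * pcie-plda.h.
 */
#include <stdint.h>
#include <stdio.h>

#define PM_MSI_INT_INTX_MASK	0x0f000000u
#define SYS_AND_MSI_MASK	0xf0000000u
#define PM_MSI_INT_MSI_SHIFT	28
#define PCI_NUM_INTX		4

static uint32_t demo_get_events(uint32_t istatus_local)
{
	uint32_t val = (istatus_local & SYS_AND_MSI_MASK) >> PM_MSI_INT_MSI_SHIFT;
	uint32_t events = val << (PM_MSI_INT_MSI_SHIFT - PCI_NUM_INTX + 1);

	if (istatus_local & PM_MSI_INT_INTX_MASK)
		events |= 1u << 24;			/* one collapsed INTx event */
	events |= istatus_local & 0x00ffffffu;		/* bits 0-23 pass through   */
	return events;
}

int main(void)
{
	/* INTB (bit 25) and MSI (bit 28) pending -> events 24 and 25 */
	printf("%08x\n", demo_get_events(0x12000000u));	/* prints 03000000 */
	return 0;
}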
static irqreturn_t plda_event_handler(int irq, void *dev_id)
{
return IRQ_HANDLED;
}
static void plda_handle_event(struct irq_desc *desc)
{
struct plda_pcie_rp *port = irq_desc_get_handler_data(desc);
unsigned long events;
u32 bit;
struct irq_chip *chip = irq_desc_get_chip(desc);
chained_irq_enter(chip, desc);
events = port->event_ops->get_events(port);
events &= port->events_bitmap;
for_each_set_bit(bit, &events, port->num_events)
generic_handle_domain_irq(port->event_domain, bit);
chained_irq_exit(chip, desc);
}
static u32 plda_hwirq_to_mask(int hwirq)
{
u32 mask;
/* hwirqs 0-23 map 1:1 to register bits */
if (hwirq < EVENT_PM_MSI_INT_INTX)
mask = BIT(hwirq);
else if (hwirq == EVENT_PM_MSI_INT_INTX)
mask = PM_MSI_INT_INTX_MASK;
else
mask = BIT(hwirq + PCI_NUM_INTX - 1);
return mask;
}
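/*
 * Worked examples for plda_hwirq_to_mask() above (editor's note): hwirq 18
 * -> BIT(18); hwirq 24 (EVENT_PM_MSI_INT_INTX) -> PM_MSI_INT_INTX_MASK,
 * i.e. all four bits 24-27; hwirq 25 (MSI) -> BIT(25 + 4 - 1) = BIT(28),
 * undoing the compression applied by plda_get_events().
 */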
static void plda_ack_event_irq(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
writel_relaxed(plda_hwirq_to_mask(data->hwirq),
port->bridge_addr + ISTATUS_LOCAL);
}
static void plda_mask_event_irq(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
u32 mask, val;
mask = plda_hwirq_to_mask(data->hwirq);
raw_spin_lock(&port->lock);
val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
val &= ~mask;
writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
raw_spin_unlock(&port->lock);
}
static void plda_unmask_event_irq(struct irq_data *data)
{
struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
u32 mask, val;
mask = plda_hwirq_to_mask(data->hwirq);
raw_spin_lock(&port->lock);
val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
val |= mask;
writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
raw_spin_unlock(&port->lock);
}
static struct irq_chip plda_event_irq_chip = {
.name = "PLDA PCIe EVENT",
.irq_ack = plda_ack_event_irq,
.irq_mask = plda_mask_event_irq,
.irq_unmask = plda_unmask_event_irq,
};
static const struct plda_event_ops plda_event_ops = {
.get_events = plda_get_events,
};
static int plda_pcie_event_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
struct plda_pcie_rp *port = (void *)domain->host_data;
irq_set_chip_and_handler(irq, port->event_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
}
static const struct irq_domain_ops plda_event_domain_ops = {
.map = plda_pcie_event_map,
};
static int plda_pcie_init_irq_domains(struct plda_pcie_rp *port)
{
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node;
/* Setup INTx */
pcie_intc_node = of_get_next_child(node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "failed to find PCIe Intc node\n");
return -EINVAL;
}
port->event_domain = irq_domain_add_linear(pcie_intc_node,
port->num_events,
&plda_event_domain_ops,
port);
if (!port->event_domain) {
dev_err(dev, "failed to get event domain\n");
of_node_put(pcie_intc_node);
return -ENOMEM;
}
irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
&intx_domain_ops, port);
if (!port->intx_domain) {
dev_err(dev, "failed to get an INTx IRQ domain\n");
of_node_put(pcie_intc_node);
return -ENOMEM;
}
irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
of_node_put(pcie_intc_node);
raw_spin_lock_init(&port->lock);
return plda_allocate_msi_domains(port);
}
int plda_init_interrupts(struct platform_device *pdev,
struct plda_pcie_rp *port,
const struct plda_event *event)
{
struct device *dev = &pdev->dev;
int event_irq, ret;
u32 i;
if (!port->event_ops)
port->event_ops = &plda_event_ops;
if (!port->event_irq_chip)
port->event_irq_chip = &plda_event_irq_chip;
ret = plda_pcie_init_irq_domains(port);
if (ret) {
dev_err(dev, "failed creating IRQ domains\n");
return ret;
}
port->irq = platform_get_irq(pdev, 0);
if (port->irq < 0)
return -ENODEV;
for_each_set_bit(i, &port->events_bitmap, port->num_events) {
event_irq = irq_create_mapping(port->event_domain, i);
if (!event_irq) {
dev_err(dev, "failed to map hwirq %d\n", i);
return -ENXIO;
}
if (event->request_event_irq)
ret = event->request_event_irq(port, event_irq, i);
else
ret = devm_request_irq(dev, event_irq,
plda_event_handler,
0, NULL, port);
if (ret) {
dev_err(dev, "failed to request IRQ %d\n", event_irq);
return ret;
}
}
port->intx_irq = irq_create_mapping(port->event_domain,
event->intx_event);
if (!port->intx_irq) {
dev_err(dev, "failed to map INTx interrupt\n");
return -ENXIO;
}
/* Plug the INTx chained handler */
irq_set_chained_handler_and_data(port->intx_irq, plda_handle_intx, port);
port->msi_irq = irq_create_mapping(port->event_domain,
event->msi_event);
if (!port->msi_irq)
return -ENXIO;
/* Plug the MSI chained handler */
irq_set_chained_handler_and_data(port->msi_irq, plda_handle_msi, port);
/* Plug the main event chained handler */
irq_set_chained_handler_and_data(port->irq, plda_handle_event, port);
return 0;
}
EXPORT_SYMBOL_GPL(plda_init_interrupts);
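/*
 * Editor's sketch of a minimal, hypothetical consumer of
 * plda_init_interrupts(); the my_pcie_* names are placeholders, not a real
 * driver. A vendor only has to say which event numbers carry INTx and MSI,
 * and may leave .request_event_irq NULL to get the generic handler:
 */
static const struct plda_event my_pcie_event = {
	.intx_event = EVENT_PM_MSI_INT_INTX,
	.msi_event = EVENT_PM_MSI_INT_MSI,
};

static int my_pcie_setup_irqs(struct platform_device *pdev,
			      struct plda_pcie_rp *port)
{
	port->num_events = PLDA_MAX_EVENT_NUM;
	port->events_bitmap = GENMASK(PLDA_MAX_EVENT_NUM - 1, 0);
	return plda_init_interrupts(pdev, port, &my_pcie_event);
}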
void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
phys_addr_t axi_addr, phys_addr_t pci_addr,
size_t size)
{
u32 atr_sz = ilog2(size) - 1;
u32 val;
if (index == 0)
val = PCIE_CONFIG_INTERFACE;
else
val = PCIE_TX_RX_INTERFACE;
writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
ATR0_AXI4_SLV0_TRSL_PARAM);
val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
ATR_IMPL_ENABLE;
writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
ATR0_AXI4_SLV0_SRCADDR_PARAM);
val = upper_32_bits(axi_addr);
writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
ATR0_AXI4_SLV0_SRC_ADDR);
val = lower_32_bits(pci_addr);
writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
val = upper_32_bits(pci_addr);
writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
}
EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
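/*
 * Editor's illustrative sketch (not part of the driver), runnable in
 * userspace: the ATR entry programmed by plda_pcie_setup_window() encodes
 * ilog2(size) - 1 next to the enable bit in the low source-address
 * register, so the low bits of an aligned AXI address can be OR-ed in
 * directly.
 */
#include <stdint.h>
#include <stdio.h>

#define ATR_SIZE_SHIFT	1
#define ATR_IMPL_ENABLE	1

static uint32_t srcaddr_param(uint64_t axi_addr, uint64_t size)
{
	uint32_t atr_sz = 63 - __builtin_clzll(size) - 1;	/* ilog2(size) - 1 */

	return (uint32_t)axi_addr | (atr_sz << ATR_SIZE_SHIFT) | ATR_IMPL_ENABLE;
}

int main(void)
{
	/* 512 MiB window at AXI 0x40000000: atr_sz = 28 -> 0x40000039 */
	printf("%08x\n", srcaddr_param(0x40000000ull, 0x20000000ull));
	return 0;
}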
int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
struct plda_pcie_rp *port)
{
void __iomem *bridge_base_addr = port->bridge_addr;
struct resource_entry *entry;
u64 pci_addr;
u32 index = 1;
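	/* index 0 is reserved for the config-space window set up by the caller */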
resource_list_for_each_entry(entry, &bridge->windows) {
if (resource_type(entry->res) == IORESOURCE_MEM) {
pci_addr = entry->res->start - entry->offset;
plda_pcie_setup_window(bridge_base_addr, index,
entry->res->start, pci_addr,
resource_size(entry->res));
index++;
}
}
return 0;
}
EXPORT_SYMBOL_GPL(plda_pcie_setup_iomems);
static void plda_pcie_irq_domain_deinit(struct plda_pcie_rp *pcie)
{
irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
irq_set_chained_handler_and_data(pcie->intx_irq, NULL, NULL);
irq_domain_remove(pcie->msi.msi_domain);
irq_domain_remove(pcie->msi.dev_domain);
irq_domain_remove(pcie->intx_domain);
irq_domain_remove(pcie->event_domain);
}
int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
const struct plda_event *plda_event)
{
struct device *dev = port->dev;
struct pci_host_bridge *bridge;
struct platform_device *pdev = to_platform_device(dev);
struct resource *cfg_res;
int ret;
pdev = to_platform_device(dev);
port->bridge_addr =
devm_platform_ioremap_resource_byname(pdev, "apb");
if (IS_ERR(port->bridge_addr))
return dev_err_probe(dev, PTR_ERR(port->bridge_addr),
"failed to map reg memory\n");
cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
if (!cfg_res)
return dev_err_probe(dev, -ENODEV,
"failed to get config memory\n");
port->config_base = devm_ioremap_resource(dev, cfg_res);
if (IS_ERR(port->config_base))
return dev_err_probe(dev, PTR_ERR(port->config_base),
"failed to map config memory\n");
bridge = devm_pci_alloc_host_bridge(dev, 0);
if (!bridge)
return dev_err_probe(dev, -ENOMEM,
"failed to alloc bridge\n");
if (port->host_ops && port->host_ops->host_init) {
ret = port->host_ops->host_init(port);
if (ret)
return ret;
}
port->bridge = bridge;
plda_pcie_setup_window(port->bridge_addr, 0, cfg_res->start, 0,
resource_size(cfg_res));
plda_pcie_setup_iomems(bridge, port);
plda_set_default_msi(&port->msi);
ret = plda_init_interrupts(pdev, port, plda_event);
if (ret)
goto err_host;
/* Set default bus ops */
bridge->ops = ops;
bridge->sysdata = port;
ret = pci_host_probe(bridge);
if (ret < 0) {
dev_err_probe(dev, ret, "failed to probe pci host\n");
goto err_probe;
}
return ret;
err_probe:
plda_pcie_irq_domain_deinit(port);
err_host:
if (port->host_ops && port->host_ops->host_deinit)
port->host_ops->host_deinit(port);
return ret;
}
EXPORT_SYMBOL_GPL(plda_pcie_host_init);
void plda_pcie_host_deinit(struct plda_pcie_rp *port)
{
pci_stop_root_bus(port->bridge->bus);
pci_remove_root_bus(port->bridge->bus);
plda_pcie_irq_domain_deinit(port);
if (port->host_ops && port->host_ops->host_deinit)
port->host_ops->host_deinit(port);
}
EXPORT_SYMBOL_GPL(plda_pcie_host_deinit);


@@ -0,0 +1,273 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* PLDA PCIe host controller driver
*/
#ifndef _PCIE_PLDA_H
#define _PCIE_PLDA_H
/* Number of MSI IRQs */
#define PLDA_MAX_NUM_MSI_IRQS 32
/* PCIe Bridge Phy Regs */
#define GEN_SETTINGS 0x80
#define RP_ENABLE 1
#define PCIE_PCI_IDS_DW1 0x9c
#define IDS_CLASS_CODE_SHIFT 16
#define REVISION_ID_MASK GENMASK(7, 0)
#define CLASS_CODE_ID_MASK GENMASK(31, 8)
#define PCIE_PCI_IRQ_DW0 0xa8
#define MSIX_CAP_MASK BIT(31)
#define NUM_MSI_MSGS_MASK GENMASK(6, 4)
#define NUM_MSI_MSGS_SHIFT 4
#define PCI_MISC 0xb4
#define PHY_FUNCTION_DIS BIT(15)
#define PCIE_WINROM 0xfc
#define PREF_MEM_WIN_64_SUPPORT BIT(3)
#define IMASK_LOCAL 0x180
#define DMA_END_ENGINE_0_MASK 0x00000000u
#define DMA_END_ENGINE_0_SHIFT 0
#define DMA_END_ENGINE_1_MASK 0x00000000u
#define DMA_END_ENGINE_1_SHIFT 1
#define DMA_ERROR_ENGINE_0_MASK 0x00000100u
#define DMA_ERROR_ENGINE_0_SHIFT 8
#define DMA_ERROR_ENGINE_1_MASK 0x00000200u
#define DMA_ERROR_ENGINE_1_SHIFT 9
#define A_ATR_EVT_POST_ERR_MASK 0x00010000u
#define A_ATR_EVT_POST_ERR_SHIFT 16
#define A_ATR_EVT_FETCH_ERR_MASK 0x00020000u
#define A_ATR_EVT_FETCH_ERR_SHIFT 17
#define A_ATR_EVT_DISCARD_ERR_MASK 0x00040000u
#define A_ATR_EVT_DISCARD_ERR_SHIFT 18
#define A_ATR_EVT_DOORBELL_MASK 0x00000000u
#define A_ATR_EVT_DOORBELL_SHIFT 19
#define P_ATR_EVT_POST_ERR_MASK 0x00100000u
#define P_ATR_EVT_POST_ERR_SHIFT 20
#define P_ATR_EVT_FETCH_ERR_MASK 0x00200000u
#define P_ATR_EVT_FETCH_ERR_SHIFT 21
#define P_ATR_EVT_DISCARD_ERR_MASK 0x00400000u
#define P_ATR_EVT_DISCARD_ERR_SHIFT 22
#define P_ATR_EVT_DOORBELL_MASK 0x00000000u
#define P_ATR_EVT_DOORBELL_SHIFT 23
#define PM_MSI_INT_INTA_MASK 0x01000000u
#define PM_MSI_INT_INTA_SHIFT 24
#define PM_MSI_INT_INTB_MASK 0x02000000u
#define PM_MSI_INT_INTB_SHIFT 25
#define PM_MSI_INT_INTC_MASK 0x04000000u
#define PM_MSI_INT_INTC_SHIFT 26
#define PM_MSI_INT_INTD_MASK 0x08000000u
#define PM_MSI_INT_INTD_SHIFT 27
#define PM_MSI_INT_INTX_MASK 0x0f000000u
#define PM_MSI_INT_INTX_SHIFT 24
#define PM_MSI_INT_MSI_MASK 0x10000000u
#define PM_MSI_INT_MSI_SHIFT 28
#define PM_MSI_INT_AER_EVT_MASK 0x20000000u
#define PM_MSI_INT_AER_EVT_SHIFT 29
#define PM_MSI_INT_EVENTS_MASK 0x40000000u
#define PM_MSI_INT_EVENTS_SHIFT 30
#define PM_MSI_INT_SYS_ERR_MASK 0x80000000u
#define PM_MSI_INT_SYS_ERR_SHIFT 31
#define SYS_AND_MSI_MASK GENMASK(31, 28)
#define NUM_LOCAL_EVENTS 15
#define ISTATUS_LOCAL 0x184
#define IMASK_HOST 0x188
#define ISTATUS_HOST 0x18c
#define IMSI_ADDR 0x190
#define ISTATUS_MSI 0x194
#define PMSG_SUPPORT_RX 0x3f0
#define PMSG_LTR_SUPPORT BIT(2)
/* PCIe Master table init defines */
#define ATR0_PCIE_WIN0_SRCADDR_PARAM 0x600u
#define ATR0_PCIE_ATR_SIZE 0x25
#define ATR0_PCIE_ATR_SIZE_SHIFT 1
#define ATR0_PCIE_WIN0_SRC_ADDR 0x604u
#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB 0x608u
#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW 0x60cu
#define ATR0_PCIE_WIN0_TRSL_PARAM 0x610u
/* PCIe AXI slave table init defines */
#define ATR0_AXI4_SLV0_SRCADDR_PARAM 0x800u
#define ATR_SIZE_SHIFT 1
#define ATR_IMPL_ENABLE 1
#define ATR0_AXI4_SLV0_SRC_ADDR 0x804u
#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB 0x808u
#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW 0x80cu
#define ATR0_AXI4_SLV0_TRSL_PARAM 0x810u
#define PCIE_TX_RX_INTERFACE 0x00000000u
#define PCIE_CONFIG_INTERFACE 0x00000001u
#define CONFIG_SPACE_ADDR_OFFSET 0x1000u
#define ATR_ENTRY_SIZE 32
enum plda_int_event {
PLDA_AXI_POST_ERR,
PLDA_AXI_FETCH_ERR,
PLDA_AXI_DISCARD_ERR,
PLDA_AXI_DOORBELL,
PLDA_PCIE_POST_ERR,
PLDA_PCIE_FETCH_ERR,
PLDA_PCIE_DISCARD_ERR,
PLDA_PCIE_DOORBELL,
PLDA_INTX,
PLDA_MSI,
PLDA_AER_EVENT,
PLDA_MISC_EVENTS,
PLDA_SYS_ERR,
PLDA_INT_EVENT_NUM
};
#define PLDA_NUM_DMA_EVENTS 16
#define EVENT_PM_MSI_INT_INTX (PLDA_NUM_DMA_EVENTS + PLDA_INTX)
#define EVENT_PM_MSI_INT_MSI (PLDA_NUM_DMA_EVENTS + PLDA_MSI)
#define PLDA_MAX_EVENT_NUM (PLDA_NUM_DMA_EVENTS + PLDA_INT_EVENT_NUM)
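/*
 * With PLDA_NUM_DMA_EVENTS = 16 this works out to EVENT_PM_MSI_INT_INTX =
 * 16 + 8 = 24, EVENT_PM_MSI_INT_MSI = 16 + 9 = 25 and PLDA_MAX_EVENT_NUM =
 * 16 + 13 = 29, matching the event column in the layout below.
 */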
/*
* PLDA interrupt register
*
* 31        27     23              15          7           0
* +--+--+--+-+------+-+-+-+-+-+-+-+-+-----------+-----------+
* |12|11|10|9| intx |7|6|5|4|3|2|1|0| DMA error | DMA end   |
* +--+--+--+-+------+-+-+-+-+-+-+-+-+-----------+-----------+
* event bit
* 0-7  (0-7)   DMA interrupt end       : reserved for vendor implementation
* 8-15 (8-15)  DMA error               : reserved for vendor implementation
* 16   (16)    AXI post error          (PLDA_AXI_POST_ERR)
* 17   (17)    AXI fetch error         (PLDA_AXI_FETCH_ERR)
* 18   (18)    AXI discard error       (PLDA_AXI_DISCARD_ERR)
* 19   (19)    AXI doorbell            (PLDA_AXI_DOORBELL)
* 20   (20)    PCIe post error         (PLDA_PCIE_POST_ERR)
* 21   (21)    PCIe fetch error        (PLDA_PCIE_FETCH_ERR)
* 22   (22)    PCIe discard error      (PLDA_PCIE_DISCARD_ERR)
* 23   (23)    PCIe doorbell           (PLDA_PCIE_DOORBELL)
* 24   (27-24) INTx interrupts         (PLDA_INTX)
* 25   (28)    MSI interrupt           (PLDA_MSI)
* 26   (29)    AER event               (PLDA_AER_EVENT)
* 27   (30)    PM/LTR/Hotplug          (PLDA_MISC_EVENTS)
* 28   (31)    System error            (PLDA_SYS_ERR)
*/
struct plda_pcie_rp;
struct plda_event_ops {
u32 (*get_events)(struct plda_pcie_rp *pcie);
};
struct plda_pcie_host_ops {
int (*host_init)(struct plda_pcie_rp *pcie);
void (*host_deinit)(struct plda_pcie_rp *pcie);
};
struct plda_msi {
struct mutex lock; /* Protect used bitmap */
struct irq_domain *msi_domain;
struct irq_domain *dev_domain;
u32 num_vectors;
u64 vector_phy;
DECLARE_BITMAP(used, PLDA_MAX_NUM_MSI_IRQS);
};
struct plda_pcie_rp {
struct device *dev;
struct pci_host_bridge *bridge;
struct irq_domain *intx_domain;
struct irq_domain *event_domain;
raw_spinlock_t lock;
struct plda_msi msi;
const struct plda_event_ops *event_ops;
const struct irq_chip *event_irq_chip;
const struct plda_pcie_host_ops *host_ops;
void __iomem *bridge_addr;
void __iomem *config_base;
unsigned long events_bitmap;
int irq;
int msi_irq;
int intx_irq;
int num_events;
};
struct plda_event {
int (*request_event_irq)(struct plda_pcie_rp *pcie,
int event_irq, int event);
int intx_event;
int msi_event;
};
void __iomem *plda_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where);
int plda_init_interrupts(struct platform_device *pdev,
struct plda_pcie_rp *port,
const struct plda_event *event);
void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
phys_addr_t axi_addr, phys_addr_t pci_addr,
size_t size);
int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
struct plda_pcie_rp *port);
int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
const struct plda_event *plda_event);
void plda_pcie_host_deinit(struct plda_pcie_rp *pcie);
static inline void plda_set_default_msi(struct plda_msi *msi)
{
msi->vector_phy = IMSI_ADDR;
msi->num_vectors = PLDA_MAX_NUM_MSI_IRQS;
}
static inline void plda_pcie_enable_root_port(struct plda_pcie_rp *plda)
{
u32 value;
value = readl_relaxed(plda->bridge_addr + GEN_SETTINGS);
value |= RP_ENABLE;
writel_relaxed(value, plda->bridge_addr + GEN_SETTINGS);
}
static inline void plda_pcie_set_standard_class(struct plda_pcie_rp *plda)
{
u32 value;
/* set class code and preserve revision id */
value = readl_relaxed(plda->bridge_addr + PCIE_PCI_IDS_DW1);
value &= REVISION_ID_MASK;
value |= (PCI_CLASS_BRIDGE_PCI << IDS_CLASS_CODE_SHIFT);
writel_relaxed(value, plda->bridge_addr + PCIE_PCI_IDS_DW1);
}
static inline void plda_pcie_set_pref_win_64bit(struct plda_pcie_rp *plda)
{
u32 value;
value = readl_relaxed(plda->bridge_addr + PCIE_WINROM);
value |= PREF_MEM_WIN_64_SUPPORT;
writel_relaxed(value, plda->bridge_addr + PCIE_WINROM);
}
static inline void plda_pcie_disable_ltr(struct plda_pcie_rp *plda)
{
u32 value;
value = readl_relaxed(plda->bridge_addr + PMSG_SUPPORT_RX);
value &= ~PMSG_LTR_SUPPORT;
writel_relaxed(value, plda->bridge_addr + PMSG_SUPPORT_RX);
}
static inline void plda_pcie_disable_func(struct plda_pcie_rp *plda)
{
u32 value;
value = readl_relaxed(plda->bridge_addr + PCI_MISC);
value |= PHY_FUNCTION_DIS;
writel_relaxed(value, plda->bridge_addr + PCI_MISC);
}
static inline void plda_pcie_write_rc_bar(struct plda_pcie_rp *plda, u64 val)
{
void __iomem *addr = plda->bridge_addr + CONFIG_SPACE_ADDR_OFFSET;
writel_relaxed(lower_32_bits(val), addr + PCI_BASE_ADDRESS_0);
writel_relaxed(upper_32_bits(val), addr + PCI_BASE_ADDRESS_1);
}
#endif /* _PCIE_PLDA_H */


@@ -0,0 +1,488 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* PCIe host controller driver for StarFive JH7110 SoC.
*
* Copyright (C) 2023 StarFive Technology Co., Ltd.
*/
#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "../../pci.h"
#include "pcie-plda.h"
#define PCIE_FUNC_NUM 4
/* system control */
#define STG_SYSCON_PCIE0_BASE 0x48
#define STG_SYSCON_PCIE1_BASE 0x1f8
#define STG_SYSCON_AR_OFFSET 0x78
#define STG_SYSCON_AXI4_SLVL_AR_MASK GENMASK(22, 8)
#define STG_SYSCON_AXI4_SLVL_PHY_AR(x) FIELD_PREP(GENMASK(20, 17), x)
#define STG_SYSCON_AW_OFFSET 0x7c
#define STG_SYSCON_AXI4_SLVL_AW_MASK GENMASK(14, 0)
#define STG_SYSCON_AXI4_SLVL_PHY_AW(x) FIELD_PREP(GENMASK(12, 9), x)
#define STG_SYSCON_CLKREQ BIT(22)
#define STG_SYSCON_CKREF_SRC_MASK GENMASK(19, 18)
#define STG_SYSCON_RP_NEP_OFFSET 0xe8
#define STG_SYSCON_K_RP_NEP BIT(8)
#define STG_SYSCON_LNKSTA_OFFSET 0x170
#define DATA_LINK_ACTIVE BIT(5)
/* Parameters for the waiting for link up routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_USLEEP_MIN 90000
#define LINK_WAIT_USLEEP_MAX 100000
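/* worst case: 10 retries x ~100 ms sleep, i.e. about 1 s before giving up */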
struct starfive_jh7110_pcie {
struct plda_pcie_rp plda;
struct reset_control *resets;
struct clk_bulk_data *clks;
struct regmap *reg_syscon;
struct gpio_desc *power_gpio;
struct gpio_desc *reset_gpio;
struct phy *phy;
unsigned int stg_pcie_base;
int num_clks;
};
/*
* JH7110 PCIe port BAR0/1 can be configured as 64-bit prefetchable memory
* space. PCIe read and write requests targeting BAR0/1 are routed to the
* so-called 'Bridge Configuration space' in the PLDA IP datasheet, which
* contains the bridge internal registers, such as interrupt, DMA and ATU
* registers.
* JH7110 accesses the Bridge Configuration space through the local bus and
* must not let the bridge internal registers be reached by DMA from EP
* devices. Thus, these BARs are left unimplemented and hidden here.
*/
static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int devfn,
int offset)
{
if (pci_is_root_bus(bus) && !devfn &&
(offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
return true;
return false;
}
static int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 value)
{
if (starfive_pcie_hide_rc_bar(bus, devfn, where))
return PCIBIOS_SUCCESSFUL;
return pci_generic_config_write(bus, devfn, where, size, value);
}
static int starfive_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *value)
{
if (starfive_pcie_hide_rc_bar(bus, devfn, where)) {
*value = 0;
return PCIBIOS_SUCCESSFUL;
}
return pci_generic_config_read(bus, devfn, where, size, value);
}
static int starfive_pcie_parse_dt(struct starfive_jh7110_pcie *pcie,
struct device *dev)
{
int domain_nr;
pcie->num_clks = devm_clk_bulk_get_all(dev, &pcie->clks);
if (pcie->num_clks < 0)
return dev_err_probe(dev, pcie->num_clks,
"failed to get pcie clocks\n");
pcie->resets = devm_reset_control_array_get_exclusive(dev);
if (IS_ERR(pcie->resets))
return dev_err_probe(dev, PTR_ERR(pcie->resets),
"failed to get pcie resets");
pcie->reg_syscon =
syscon_regmap_lookup_by_phandle(dev->of_node,
"starfive,stg-syscon");
if (IS_ERR(pcie->reg_syscon))
return dev_err_probe(dev, PTR_ERR(pcie->reg_syscon),
"failed to parse starfive,stg-syscon\n");
pcie->phy = devm_phy_optional_get(dev, NULL);
if (IS_ERR(pcie->phy))
return dev_err_probe(dev, PTR_ERR(pcie->phy),
"failed to get pcie phy\n");
/*
* The PCIe domain numbers are set statically in the JH7110 DTS.
* Since the STG system controller defines different register bases for
* PCIe RP0 and RP1, the domain number identifies which controller is
* doing the hardware initialization.
*/
domain_nr = of_get_pci_domain_nr(dev->of_node);
if (domain_nr < 0 || domain_nr > 1)
return dev_err_probe(dev, -ENODEV,
"failed to get valid pcie domain\n");
if (domain_nr == 0)
pcie->stg_pcie_base = STG_SYSCON_PCIE0_BASE;
else
pcie->stg_pcie_base = STG_SYSCON_PCIE1_BASE;
pcie->reset_gpio = devm_gpiod_get_optional(dev, "perst",
GPIOD_OUT_HIGH);
if (IS_ERR(pcie->reset_gpio))
return dev_err_probe(dev, PTR_ERR(pcie->reset_gpio),
"failed to get perst-gpio\n");
pcie->power_gpio = devm_gpiod_get_optional(dev, "enable",
GPIOD_OUT_LOW);
if (IS_ERR(pcie->power_gpio))
return dev_err_probe(dev, PTR_ERR(pcie->power_gpio),
"failed to get power-gpio\n");
return 0;
}
static struct pci_ops starfive_pcie_ops = {
.map_bus = plda_pcie_map_bus,
.read = starfive_pcie_config_read,
.write = starfive_pcie_config_write,
};
static int starfive_pcie_clk_rst_init(struct starfive_jh7110_pcie *pcie)
{
struct device *dev = pcie->plda.dev;
int ret;
ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
if (ret)
return dev_err_probe(dev, ret, "failed to enable clocks\n");
ret = reset_control_deassert(pcie->resets);
if (ret) {
clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
dev_err_probe(dev, ret, "failed to deassert resets\n");
}
return ret;
}
static void starfive_pcie_clk_rst_deinit(struct starfive_jh7110_pcie *pcie)
{
reset_control_assert(pcie->resets);
clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
}
static bool starfive_pcie_link_up(struct plda_pcie_rp *plda)
{
struct starfive_jh7110_pcie *pcie =
container_of(plda, struct starfive_jh7110_pcie, plda);
int ret;
u32 stg_reg_val;
ret = regmap_read(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_LNKSTA_OFFSET,
&stg_reg_val);
if (ret) {
dev_err(pcie->plda.dev, "failed to read link status\n");
return false;
}
return !!(stg_reg_val & DATA_LINK_ACTIVE);
}
static int starfive_pcie_host_wait_for_link(struct starfive_jh7110_pcie *pcie)
{
int retries;
/* Check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (starfive_pcie_link_up(&pcie->plda)) {
dev_info(pcie->plda.dev, "port link up\n");
return 0;
}
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
return -ETIMEDOUT;
}
static int starfive_pcie_enable_phy(struct device *dev,
struct starfive_jh7110_pcie *pcie)
{
int ret;
if (!pcie->phy)
return 0;
ret = phy_init(pcie->phy);
if (ret)
return dev_err_probe(dev, ret,
"failed to initialize pcie phy\n");
ret = phy_set_mode(pcie->phy, PHY_MODE_PCIE);
if (ret) {
dev_err_probe(dev, ret, "failed to set pcie mode\n");
goto err_phy_on;
}
ret = phy_power_on(pcie->phy);
if (ret) {
dev_err_probe(dev, ret, "failed to power on pcie phy\n");
goto err_phy_on;
}
return 0;
err_phy_on:
phy_exit(pcie->phy);
return ret;
}
static void starfive_pcie_disable_phy(struct starfive_jh7110_pcie *pcie)
{
phy_power_off(pcie->phy);
phy_exit(pcie->phy);
}
static void starfive_pcie_host_deinit(struct plda_pcie_rp *plda)
{
struct starfive_jh7110_pcie *pcie =
container_of(plda, struct starfive_jh7110_pcie, plda);
starfive_pcie_clk_rst_deinit(pcie);
if (pcie->power_gpio)
gpiod_set_value_cansleep(pcie->power_gpio, 0);
starfive_pcie_disable_phy(pcie);
}
static int starfive_pcie_host_init(struct plda_pcie_rp *plda)
{
struct starfive_jh7110_pcie *pcie =
container_of(plda, struct starfive_jh7110_pcie, plda);
struct device *dev = plda->dev;
int ret;
int i;
ret = starfive_pcie_enable_phy(dev, pcie);
if (ret)
return ret;
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_RP_NEP_OFFSET,
STG_SYSCON_K_RP_NEP, STG_SYSCON_K_RP_NEP);
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET,
STG_SYSCON_CKREF_SRC_MASK,
FIELD_PREP(STG_SYSCON_CKREF_SRC_MASK, 2));
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET,
STG_SYSCON_CLKREQ, STG_SYSCON_CLKREQ);
ret = starfive_pcie_clk_rst_init(pcie);
if (ret)
return ret;
if (pcie->power_gpio)
gpiod_set_value_cansleep(pcie->power_gpio, 1);
if (pcie->reset_gpio)
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
/* Disable physical functions except #0 */
for (i = 1; i < PCIE_FUNC_NUM; i++) {
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AR_OFFSET,
STG_SYSCON_AXI4_SLVL_AR_MASK,
STG_SYSCON_AXI4_SLVL_PHY_AR(i));
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET,
STG_SYSCON_AXI4_SLVL_AW_MASK,
STG_SYSCON_AXI4_SLVL_PHY_AW(i));
plda_pcie_disable_func(plda);
}
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AR_OFFSET,
STG_SYSCON_AXI4_SLVL_AR_MASK, 0);
regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET,
STG_SYSCON_AXI4_SLVL_AW_MASK, 0);
plda_pcie_enable_root_port(plda);
plda_pcie_write_rc_bar(plda, 0);
/* PCIe PCI Standard Configuration Identification Settings. */
plda_pcie_set_standard_class(plda);
/*
* LTR message reception is enabled by default in the "PCIe Message
* Reception" register, but the forwarding ID and address are left
* uninitialized. Unless LTR message forwarding is disabled here (or a
* legal forwarding address is set), the kernel gets stuck.
* As a workaround, disable LTR message forwarding before using this
* feature.
*/
plda_pcie_disable_ltr(plda);
/*
* Enable 64-bit addressing for the prefetchable memory window in JH7110.
* The 64-bit prefetchable address translations in the ATU only take
* effect once the register setting below is enabled.
*/
plda_pcie_set_pref_win_64bit(plda);
/*
* Ensure that PERST# has been asserted for at least 100 ms; the sleep
* value is T_PVPERL from the PCIe CEM spec r2.0 (Table 2-4).
*/
msleep(100);
if (pcie->reset_gpio)
gpiod_set_value_cansleep(pcie->reset_gpio, 0);
/*
* With a Downstream Port (<=5GT/s), software must wait a minimum
* of 100ms following exit from a conventional reset before
* sending a configuration request to the device.
*/
msleep(PCIE_RESET_CONFIG_DEVICE_WAIT_MS);
if (starfive_pcie_host_wait_for_link(pcie))
dev_info(dev, "port link down\n");
return 0;
}
static const struct plda_pcie_host_ops sf_host_ops = {
.host_init = starfive_pcie_host_init,
.host_deinit = starfive_pcie_host_deinit,
};
static const struct plda_event stf_pcie_event = {
.intx_event = EVENT_PM_MSI_INT_INTX,
.msi_event = EVENT_PM_MSI_INT_MSI
};
static int starfive_pcie_probe(struct platform_device *pdev)
{
struct starfive_jh7110_pcie *pcie;
struct device *dev = &pdev->dev;
struct plda_pcie_rp *plda;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
plda = &pcie->plda;
plda->dev = dev;
ret = starfive_pcie_parse_dt(pcie, dev);
if (ret)
return ret;
plda->host_ops = &sf_host_ops;
plda->num_events = PLDA_MAX_EVENT_NUM;
/* mask doorbell event */
plda->events_bitmap = GENMASK(PLDA_INT_EVENT_NUM - 1, 0)
& ~BIT(PLDA_AXI_DOORBELL)
& ~BIT(PLDA_PCIE_DOORBELL);
plda->events_bitmap <<= PLDA_NUM_DMA_EVENTS;
ret = plda_pcie_host_init(&pcie->plda, &starfive_pcie_ops,
&stf_pcie_event);
if (ret)
return ret;
pm_runtime_enable(&pdev->dev);
pm_runtime_get_sync(&pdev->dev);
platform_set_drvdata(pdev, pcie);
return 0;
}
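/*
 * Editor's illustrative sketch (not part of the driver), runnable in
 * userspace: the events_bitmap arithmetic from starfive_pcie_probe()
 * above. Doorbell events are dropped before the bitmap is shifted past
 * the 16 vendor DMA event slots.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t bitmap = ((1u << 13) - 1)	/* GENMASK(PLDA_INT_EVENT_NUM - 1, 0) */
			& ~(1u << 3)		/* ~BIT(PLDA_AXI_DOORBELL)  */
			& ~(1u << 7);		/* ~BIT(PLDA_PCIE_DOORBELL) */

	bitmap <<= 16;				/* skip PLDA_NUM_DMA_EVENTS slots */
	printf("%08x\n", bitmap);		/* prints 1f770000: events 16-28 minus 19 and 23 */
	return 0;
}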
static void starfive_pcie_remove(struct platform_device *pdev)
{
struct starfive_jh7110_pcie *pcie = platform_get_drvdata(pdev);
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
plda_pcie_host_deinit(&pcie->plda);
platform_set_drvdata(pdev, NULL);
}
static int starfive_pcie_suspend_noirq(struct device *dev)
{
struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);
clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
starfive_pcie_disable_phy(pcie);
return 0;
}
static int starfive_pcie_resume_noirq(struct device *dev)
{
struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev);
int ret;
ret = starfive_pcie_enable_phy(dev, pcie);
if (ret)
return ret;
ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
if (ret) {
dev_err(dev, "failed to enable clocks\n");
starfive_pcie_disable_phy(pcie);
return ret;
}
return 0;
}
static const struct dev_pm_ops starfive_pcie_pm_ops = {
NOIRQ_SYSTEM_SLEEP_PM_OPS(starfive_pcie_suspend_noirq,
starfive_pcie_resume_noirq)
};
static const struct of_device_id starfive_pcie_of_match[] = {
{ .compatible = "starfive,jh7110-pcie", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, starfive_pcie_of_match);
static struct platform_driver starfive_pcie_driver = {
.driver = {
.name = "pcie-starfive",
.of_match_table = of_match_ptr(starfive_pcie_of_match),
.pm = pm_sleep_ptr(&starfive_pcie_pm_ops),
},
.probe = starfive_pcie_probe,
.remove_new = starfive_pcie_remove,
};
module_platform_driver(starfive_pcie_driver);
MODULE_DESCRIPTION("StarFive JH7110 PCIe host driver");
MODULE_LICENSE("GPL v2");


@@ -925,6 +925,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
dev_set_msi_domain(&vmd->bus->dev,
dev_get_msi_domain(&vmd->dev->dev));
+	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
+	     "domain"), "Can't create symlink to domain\n");
vmd_acpi_begin();
pci_scan_child_bus(vmd->bus);
@@ -964,9 +967,6 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
pci_bus_add_devices(vmd->bus);
vmd_acpi_end();
-	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
-	     "domain"), "Can't create symlink to domain\n");
return 0;
}
@@ -1042,8 +1042,8 @@ static void vmd_remove(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
+	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
	pci_stop_root_bus(vmd->bus);
-	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_remove_root_bus(vmd->bus);
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
@@ -1128,5 +1128,6 @@ static struct pci_driver vmd_drv = {
module_pci_driver(vmd_drv);
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("Volume Management Device driver");
MODULE_LICENSE("GPL v2");
MODULE_VERSION("0.6");

[file diff omitted: too large to display]

@@ -137,6 +137,7 @@ static const struct pci_epf_mhi_ep_info sa8775p_info = {
.epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32,
.msi_count = 32,
.mru = 0x8000,
.flags = MHI_EPF_USE_DMA,
};
struct pci_epf_mhi {
@@ -716,7 +717,7 @@ static void pci_epf_mhi_dma_deinit(struct pci_epf_mhi *epf_mhi)
epf_mhi->dma_chan_rx = NULL;
}
-static int pci_epf_mhi_core_init(struct pci_epf *epf)
+static int pci_epf_mhi_epc_init(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
@@ -753,9 +754,35 @@ static int pci_epf_mhi_core_init(struct pci_epf *epf)
if (!epf_mhi->epc_features)
return -ENODATA;
+	if (info->flags & MHI_EPF_USE_DMA) {
+		ret = pci_epf_mhi_dma_init(epf_mhi);
+		if (ret) {
+			dev_err(dev, "Failed to initialize DMA: %d\n", ret);
+			return ret;
+		}
+	}
return 0;
}
static void pci_epf_mhi_epc_deinit(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num];
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
struct pci_epc *epc = epf->epc;
if (mhi_cntrl->mhi_dev) {
mhi_ep_power_down(mhi_cntrl);
if (info->flags & MHI_EPF_USE_DMA)
pci_epf_mhi_dma_deinit(epf_mhi);
mhi_ep_unregister_controller(mhi_cntrl);
}
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, epf_bar);
}
static int pci_epf_mhi_link_up(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
@@ -765,14 +792,6 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
struct device *dev = &epf->dev;
int ret;
-	if (info->flags & MHI_EPF_USE_DMA) {
-		ret = pci_epf_mhi_dma_init(epf_mhi);
-		if (ret) {
-			dev_err(dev, "Failed to initialize DMA: %d\n", ret);
-			return ret;
-		}
-	}
mhi_cntrl->mmio = epf_mhi->mmio;
mhi_cntrl->irq = epf_mhi->irq;
mhi_cntrl->mru = info->mru;
@@ -819,7 +838,7 @@ static int pci_epf_mhi_link_down(struct pci_epf *epf)
return 0;
}
-static int pci_epf_mhi_bme(struct pci_epf *epf)
+static int pci_epf_mhi_bus_master_enable(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
@@ -882,8 +901,8 @@ static void pci_epf_mhi_unbind(struct pci_epf *epf)
/*
* Forcefully power down the MHI EP stack. Only way to bring the MHI EP
-	 * stack back to working state after successive bind is by getting BME
-	 * from host.
+	 * stack back to working state after successive bind is by getting Bus
+	 * Master Enable event from host.
*/
if (mhi_cntrl->mhi_dev) {
mhi_ep_power_down(mhi_cntrl);
@@ -897,10 +916,11 @@ static void pci_epf_mhi_unbind(struct pci_epf *epf)
}
static const struct pci_epc_event_ops pci_epf_mhi_event_ops = {
-	.core_init = pci_epf_mhi_core_init,
+	.epc_init = pci_epf_mhi_epc_init,
+	.epc_deinit = pci_epf_mhi_epc_deinit,
	.link_up = pci_epf_mhi_link_up,
	.link_down = pci_epf_mhi_link_down,
-	.bme = pci_epf_mhi_bme,
+	.bus_master_enable = pci_epf_mhi_bus_master_enable,
};
static int pci_epf_mhi_probe(struct pci_epf *epf,

View File

@@ -686,25 +686,6 @@ reset_handler:
msecs_to_jiffies(1));
}
static void pci_epf_test_unbind(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
int bar;
cancel_delayed_work(&epf_test->cmd_handler);
pci_epf_test_clean_dma_chan(epf_test);
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (!epf_test->reg[bar])
continue;
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no,
&epf->bar[bar]);
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
}
}
static int pci_epf_test_set_bar(struct pci_epf *epf)
{
int bar, ret;
@@ -731,23 +712,36 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
return 0;
}
-static int pci_epf_test_core_init(struct pci_epf *epf)
+static void pci_epf_test_clear_bar(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
int bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (!epf_test->reg[bar])
continue;
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no,
&epf->bar[bar]);
}
}
static int pci_epf_test_epc_init(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct pci_epf_header *header = epf->header;
-	const struct pci_epc_features *epc_features;
+	const struct pci_epc_features *epc_features = epf_test->epc_features;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
	bool linkup_notifier = false;
-	bool msix_capable = false;
-	bool msi_capable = true;
	int ret;
-	epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no);
-	if (epc_features) {
-		msix_capable = epc_features->msix_capable;
-		msi_capable = epc_features->msi_capable;
-	}
+	epf_test->dma_supported = true;
+	ret = pci_epf_test_init_dma_chan(epf_test);
+	if (ret)
+		epf_test->dma_supported = false;
if (epf->vfunc_no <= 1) {
ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no, header);
@@ -761,7 +755,7 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
if (ret)
return ret;
-	if (msi_capable) {
+	if (epc_features->msi_capable) {
ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no,
epf->msi_interrupts);
if (ret) {
@@ -770,7 +764,7 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
}
}
-	if (msix_capable) {
+	if (epc_features->msix_capable) {
ret = pci_epc_set_msix(epc, epf->func_no, epf->vfunc_no,
epf->msix_interrupts,
epf_test->test_reg_bar,
@@ -788,6 +782,15 @@ static int pci_epf_test_core_init(struct pci_epf *epf)
return 0;
}
static void pci_epf_test_epc_deinit(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
cancel_delayed_work(&epf_test->cmd_handler);
pci_epf_test_clean_dma_chan(epf_test);
pci_epf_test_clear_bar(epf);
}
static int pci_epf_test_link_up(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
@@ -798,9 +801,20 @@ static int pci_epf_test_link_up(struct pci_epf *epf)
return 0;
}
static int pci_epf_test_link_down(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
cancel_delayed_work_sync(&epf_test->cmd_handler);
return 0;
}
static const struct pci_epc_event_ops pci_epf_test_event_ops = {
-	.core_init = pci_epf_test_core_init,
+	.epc_init = pci_epf_test_epc_init,
+	.epc_deinit = pci_epf_test_epc_deinit,
	.link_up = pci_epf_test_link_up,
+	.link_down = pci_epf_test_link_down,
};
static int pci_epf_test_alloc_space(struct pci_epf *epf)
@@ -810,19 +824,15 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
size_t msix_table_size = 0;
size_t test_reg_bar_size;
size_t pba_size = 0;
-	bool msix_capable;
	void *base;
	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
	enum pci_barno bar;
-	const struct pci_epc_features *epc_features;
+	const struct pci_epc_features *epc_features = epf_test->epc_features;
	size_t test_reg_size;
-	epc_features = epf_test->epc_features;
	test_reg_bar_size = ALIGN(sizeof(struct pci_epf_test_reg), 128);
-	msix_capable = epc_features->msix_capable;
-	if (msix_capable) {
+	if (epc_features->msix_capable) {
msix_table_size = PCI_MSIX_ENTRY_SIZE * epf->msix_interrupts;
epf_test->msix_table_offset = test_reg_bar_size;
/* Align to QWORD or 8 Bytes */
@@ -857,6 +867,20 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
return 0;
}
static void pci_epf_test_free_space(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
int bar;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (!epf_test->reg[bar])
continue;
pci_epf_free_space(epf, epf_test->reg[bar], bar,
PRIMARY_INTERFACE);
}
}
static int pci_epf_test_bind(struct pci_epf *epf)
{
int ret;
@@ -885,15 +909,22 @@ static int pci_epf_test_bind(struct pci_epf *epf)
if (ret)
return ret;
-	epf_test->dma_supported = true;
-	ret = pci_epf_test_init_dma_chan(epf_test);
-	if (ret)
-		epf_test->dma_supported = false;
return 0;
}
static void pci_epf_test_unbind(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
cancel_delayed_work(&epf_test->cmd_handler);
if (epc->init_complete) {
pci_epf_test_clean_dma_chan(epf_test);
pci_epf_test_clear_bar(epf);
}
pci_epf_test_free_space(epf);
}
static const struct pci_epf_device_id pci_epf_test_ids[] = {
{
.name = "pci_epf_test",


@@ -799,8 +799,9 @@ err_config_interrupt:
*/
static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
{
-	epf_ntb_db_bar_clear(ntb);
	epf_ntb_mw_bar_clear(ntb, ntb->num_mws);
+	epf_ntb_db_bar_clear(ntb);
epf_ntb_config_sspad_bar_clear(ntb);
}
#define EPF_NTB_R(_name) \
@@ -1018,8 +1019,10 @@ static int vpci_scan_bus(void *sysdata)
struct epf_ntb *ndev = sysdata;
vpci_bus = pci_scan_bus(ndev->vbus_number, &vpci_ops, sysdata);
-	if (vpci_bus)
-		pr_err("create pci bus\n");
+	if (!vpci_bus) {
+		pr_err("create pci bus failed\n");
+		return -EINVAL;
+	}
pci_bus_add_devices(vpci_bus);
@@ -1335,13 +1338,19 @@ static int epf_ntb_bind(struct pci_epf *epf)
ret = pci_register_driver(&vntb_pci_driver);
if (ret) {
dev_err(dev, "failure register vntb pci driver\n");
-		goto err_bar_alloc;
+		goto err_epc_cleanup;
}
-	vpci_scan_bus(ntb);
+	ret = vpci_scan_bus(ntb);
+	if (ret)
+		goto err_unregister;
	return 0;
+err_unregister:
+	pci_unregister_driver(&vntb_pci_driver);
+err_epc_cleanup:
+	epf_ntb_epc_cleanup(ntb);
err_bar_alloc:
epf_ntb_config_spad_bar_free(ntb);


@@ -23,7 +23,6 @@ struct pci_epf_group {
struct config_group group;
struct config_group primary_epc_group;
struct config_group secondary_epc_group;
struct config_group *type_group;
struct delayed_work cfs_work;
struct pci_epf *epf;
int index;


@@ -14,7 +14,9 @@
#include <linux/pci-epf.h>
#include <linux/pci-ep-cfs.h>
-static struct class *pci_epc_class;
+static const struct class pci_epc_class = {
+	.name = "pci_epc",
+};
static void devm_pci_epc_release(struct device *dev, void *res)
{
@@ -60,7 +62,7 @@ struct pci_epc *pci_epc_get(const char *epc_name)
struct device *dev;
struct class_dev_iter iter;
-	class_dev_iter_init(&iter, pci_epc_class, NULL, NULL);
+	class_dev_iter_init(&iter, &pci_epc_class, NULL, NULL);
while ((dev = class_dev_iter_next(&iter))) {
if (strcmp(epc_name, dev_name(dev)))
continue;
@@ -727,9 +729,9 @@ void pci_epc_linkdown(struct pci_epc *epc)
EXPORT_SYMBOL_GPL(pci_epc_linkdown);
/**
- * pci_epc_init_notify() - Notify the EPF device that EPC device's core
- * initialization is completed.
- * @epc: the EPC device whose core initialization is completed
+ * pci_epc_init_notify() - Notify the EPF device that EPC device initialization
+ * is completed.
+ * @epc: the EPC device whose initialization is completed
*
* Invoke to Notify the EPF device that the EPC device's initialization
* is completed.
@@ -744,8 +746,8 @@ void pci_epc_init_notify(struct pci_epc *epc)
mutex_lock(&epc->list_lock);
list_for_each_entry(epf, &epc->pci_epf, list) {
mutex_lock(&epf->lock);
-		if (epf->event_ops && epf->event_ops->core_init)
-			epf->event_ops->core_init(epf);
+		if (epf->event_ops && epf->event_ops->epc_init)
+			epf->event_ops->epc_init(epf);
mutex_unlock(&epf->lock);
}
epc->init_complete = true;
@@ -756,7 +758,7 @@ EXPORT_SYMBOL_GPL(pci_epc_init_notify);
/**
* pci_epc_notify_pending_init() - Notify the pending EPC device initialization
* complete to the EPF device
- * @epc: the EPC device whose core initialization is pending to be notified
+ * @epc: the EPC device whose initialization is pending to be notified
* @epf: the EPF device to be notified
*
* Invoke to notify the pending EPC device initialization complete to the EPF
@@ -767,22 +769,20 @@ void pci_epc_notify_pending_init(struct pci_epc *epc, struct pci_epf *epf)
{
if (epc->init_complete) {
mutex_lock(&epf->lock);
-		if (epf->event_ops && epf->event_ops->core_init)
-			epf->event_ops->core_init(epf);
+		if (epf->event_ops && epf->event_ops->epc_init)
+			epf->event_ops->epc_init(epf);
mutex_unlock(&epf->lock);
}
}
EXPORT_SYMBOL_GPL(pci_epc_notify_pending_init);
/**
- * pci_epc_bme_notify() - Notify the EPF device that the EPC device has received
- * the BME event from the Root complex
- * @epc: the EPC device that received the BME event
+ * pci_epc_deinit_notify() - Notify the EPF device about EPC deinitialization
+ * @epc: the EPC device whose deinitialization is completed
 *
- * Invoke to Notify the EPF device that the EPC device has received the Bus
- * Master Enable (BME) event from the Root complex
+ * Invoke to notify the EPF device that the EPC deinitialization is completed.
 */
-void pci_epc_bme_notify(struct pci_epc *epc)
+void pci_epc_deinit_notify(struct pci_epc *epc)
{
struct pci_epf *epf;
@@ -792,13 +792,41 @@ void pci_epc_bme_notify(struct pci_epc *epc)
mutex_lock(&epc->list_lock);
list_for_each_entry(epf, &epc->pci_epf, list) {
mutex_lock(&epf->lock);
-		if (epf->event_ops && epf->event_ops->bme)
-			epf->event_ops->bme(epf);
+		if (epf->event_ops && epf->event_ops->epc_deinit)
+			epf->event_ops->epc_deinit(epf);
mutex_unlock(&epf->lock);
}
epc->init_complete = false;
mutex_unlock(&epc->list_lock);
}
EXPORT_SYMBOL_GPL(pci_epc_deinit_notify);
/**
* pci_epc_bus_master_enable_notify() - Notify the EPF device that the EPC
* device has received the Bus Master
* Enable event from the Root complex
* @epc: the EPC device that received the Bus Master Enable event
*
* Notify the EPF device that the EPC device has generated the Bus Master Enable
* event due to host setting the Bus Master Enable bit in the Command register.
*/
void pci_epc_bus_master_enable_notify(struct pci_epc *epc)
{
struct pci_epf *epf;
if (IS_ERR_OR_NULL(epc))
return;
mutex_lock(&epc->list_lock);
list_for_each_entry(epf, &epc->pci_epf, list) {
mutex_lock(&epf->lock);
if (epf->event_ops && epf->event_ops->bus_master_enable)
epf->event_ops->bus_master_enable(epf);
mutex_unlock(&epf->lock);
}
mutex_unlock(&epc->list_lock);
}
-EXPORT_SYMBOL_GPL(pci_epc_bme_notify);
+EXPORT_SYMBOL_GPL(pci_epc_bus_master_enable_notify);
/**
* pci_epc_destroy() - destroy the EPC device
@@ -867,7 +895,7 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
INIT_LIST_HEAD(&epc->pci_epf);
device_initialize(&epc->dev);
-	epc->dev.class = pci_epc_class;
+	epc->dev.class = &pci_epc_class;
epc->dev.parent = dev;
epc->dev.release = pci_epc_release;
epc->ops = ops;
@@ -927,20 +955,13 @@ EXPORT_SYMBOL_GPL(__devm_pci_epc_create);
static int __init pci_epc_init(void)
{
-pci_epc_class = class_create("pci_epc");
-if (IS_ERR(pci_epc_class)) {
-pr_err("failed to create pci epc class --> %ld\n",
-PTR_ERR(pci_epc_class));
-return PTR_ERR(pci_epc_class);
-}
-return 0;
+return class_register(&pci_epc_class);
}
module_init(pci_epc_init);
static void __exit pci_epc_exit(void)
{
-class_destroy(pci_epc_class);
+class_unregister(&pci_epc_class);
}
module_exit(pci_epc_exit);
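Seen from an endpoint function driver, this series renames the core_init and bme callbacks in struct pci_epf_event_ops to epc_init and bus_master_enable, and adds epc_deinit. A minimal sketch of how a hypothetical EPF driver might wire up the renamed callbacks (the demo_* names and bodies are illustrative; per the callers above, epc_init returns int and the other two return void):

static int demo_epf_epc_init(struct pci_epf *epf)
{
	/* Program the endpoint: config header, BARs, interrupts, ... */
	return 0;
}

static void demo_epf_epc_deinit(struct pci_epf *epf)
{
	/* Tear down whatever epc_init() set up. */
}

static void demo_epf_bus_master_enable(struct pci_epf *epf)
{
	/* Host set Bus Master Enable; DMA to host memory may start. */
}

static const struct pci_epf_event_ops demo_epf_event_ops = {
	.epc_init = demo_epf_epc_init,
	.epc_deinit = demo_epf_epc_deinit,
	.bus_master_enable = demo_epf_bus_master_enable,
};

The driver would then point epf->event_ops at this table before registration, as the notify helpers above consult it under epf->lock.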


@ -124,4 +124,5 @@ static struct platform_driver altra_led_driver = {
module_platform_driver(altra_led_driver);
MODULE_AUTHOR("D Scott Phillips <scott@os.amperecomputing.com>");
MODULE_DESCRIPTION("ACPI PCI Hot Plug Extension for Ampere Altra");
MODULE_LICENSE("GPL");


@ -46,6 +46,9 @@ extern int pciehp_poll_time;
/**
* struct controller - PCIe hotplug controller
* @pcie: pointer to the controller's PCIe port service device
* @dsn: cached copy of Device Serial Number of Function 0 in the hotplug slot
* (PCIe r6.2 sec 7.9.3); used to determine whether a hotplugged device
* was replaced with a different one during system sleep
* @slot_cap: cached copy of the Slot Capabilities register
* @inband_presence_disabled: In-Band Presence Detect Disable supported by
* controller and disabled per spec recommendation (PCIe r5.0, appendix I
@ -87,6 +90,7 @@ extern int pciehp_poll_time;
*/
struct controller {
struct pcie_device *pcie;
u64 dsn;
u32 slot_cap; /* capabilities and quirks */
unsigned int inband_presence_disabled:1;


@ -284,6 +284,32 @@ static int pciehp_suspend(struct pcie_device *dev)
return 0;
}
static bool pciehp_device_replaced(struct controller *ctrl)
{
struct pci_dev *pdev __free(pci_dev_put);
u32 reg;
pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
if (!pdev)
return true;
if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
reg != (pdev->vendor | (pdev->device << 16)) ||
pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
reg != (pdev->revision | (pdev->class << 8)))
return true;
if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
(pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
return true;
if (pci_get_dsn(pdev) != ctrl->dsn)
return true;
return false;
}
static int pciehp_resume_noirq(struct pcie_device *dev)
{
struct controller *ctrl = get_service_data(dev);
@ -293,9 +319,23 @@ static int pciehp_resume_noirq(struct pcie_device *dev)
ctrl->cmd_busy = true;
/* clear spurious events from rediscovery of inserted card */
-if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE)
+if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) {
pcie_clear_hotplug_events(ctrl);
/*
* If hotplugged device was replaced with a different one
* during system sleep, mark the old device disconnected
* (to prevent its driver from accessing the new device)
* and synthesize a Presence Detect Changed event.
*/
if (pciehp_device_replaced(ctrl)) {
ctrl_dbg(ctrl, "device replaced during system sleep\n");
pci_walk_bus(ctrl->pcie->port->subordinate,
pci_dev_set_disconnected, NULL);
pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC);
}
}
return 0;
}
#endif
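Two helpers do the heavy lifting in pciehp_device_replaced() above: the __free(pci_dev_put) annotation drops the reference taken by pci_get_slot() automatically when pdev leaves scope, and pci_get_dsn() returns the 64-bit Device Serial Number, or 0 when the extended capability is absent. For reference, pci_get_dsn() is roughly:

u64 pci_get_dsn(struct pci_dev *dev)
{
	u32 dword;
	u64 dsn;
	int pos;

	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DSN);
	if (!pos)
		return 0;

	/* Two dwords at offset 4: lower half first, then upper half. */
	pos += 4;
	pci_read_config_dword(dev, pos, &dword);
	dsn = (u64)dword;
	pci_read_config_dword(dev, pos + 4, &dword);
	dsn |= ((u64)dword) << 32;

	return dsn;
}

A device without a DSN therefore caches 0 in ctrl->dsn, so swapping one DSN-less card for another with identical IDs compares equal and is treated as "not replaced".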


@ -1055,6 +1055,11 @@ struct controller *pcie_init(struct pcie_device *dev)
}
}
pdev = pci_get_slot(subordinate, PCI_DEVFN(0, 0));
if (pdev)
ctrl->dsn = pci_get_dsn(pdev);
pci_dev_put(pdev);
return ctrl;
}


@ -72,6 +72,10 @@ int pciehp_configure_device(struct controller *ctrl)
pci_bus_add_devices(parent);
down_read_nested(&ctrl->reset_lock, ctrl->depth);
dev = pci_get_slot(parent, PCI_DEVFN(0, 0));
ctrl->dsn = pci_get_dsn(dev);
pci_dev_put(dev);
out:
pci_unlock_rescan_remove();
return ret;


@ -23,6 +23,10 @@
*
* @maxlen specifies the maximum length to map. If you want to get access to
* the complete BAR from offset to the end, pass %0 here.
*
* NOTE:
* This function is never managed, even if you initialized with
* pcim_enable_device().
* */
void __iomem *pci_iomap_range(struct pci_dev *dev,
int bar,
@ -63,6 +67,10 @@ EXPORT_SYMBOL(pci_iomap_range);
*
* @maxlen specifies the maximum length to map. If you want to get access to
* the complete BAR from offset to the end, pass %0 here.
*
* NOTE:
* This function is never managed, even if you initialized with
* pcim_enable_device().
* */
void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
int bar,
@ -106,6 +114,10 @@ EXPORT_SYMBOL_GPL(pci_iomap_wc_range);
*
* @maxlen specifies the maximum length to map. If you want to get access to
* the complete BAR without checking for its length first, pass %0 here.
*
* NOTE:
* This function is never managed, even if you initialized with
* pcim_enable_device(). If you need automatic cleanup, use pcim_iomap().
* */
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
@ -127,6 +139,10 @@ EXPORT_SYMBOL(pci_iomap);
*
* @maxlen specifies the maximum length to map. If you want to get access to
* the complete BAR without checking for its length first, pass %0 here.
*
* NOTE:
* This function is never managed, even if you initialized with
* pcim_enable_device().
* */
void __iomem *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long maxlen)
{


@ -239,28 +239,62 @@ int of_get_pci_domain_nr(struct device_node *node)
}
EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
/**
* of_pci_preserve_config - Return true if the boot configuration needs to
* be preserved
* @node: Device tree node.
*
* Look for "linux,pci-probe-only" property for a given PCI controller's
* node and return true if found. Also look in the chosen node if the
* property is not found in the given controller's node. Having this
* property ensures that the kernel doesn't reconfigure the BARs and bridge
* windows that are already done by the platform firmware.
*
* Return: true if the property exists; false otherwise.
*/
bool of_pci_preserve_config(struct device_node *node)
{
u32 val = 0;
int ret;
if (!node) {
pr_warn("device node is NULL, trying with of_chosen\n");
node = of_chosen;
}
retry:
ret = of_property_read_u32(node, "linux,pci-probe-only", &val);
if (ret) {
if (ret == -ENODATA || ret == -EOVERFLOW) {
pr_warn("Incorrect value for linux,pci-probe-only in %pOF, ignoring\n",
node);
return false;
}
if (ret == -EINVAL) {
if (node == of_chosen)
return false;
node = of_chosen;
goto retry;
}
}
if (val)
return true;
else
return false;
}
/**
* of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
* is present and valid
*/
void of_pci_check_probe_only(void)
{
u32 val;
int ret;
ret = of_property_read_u32(of_chosen, "linux,pci-probe-only", &val);
if (ret) {
if (ret == -ENODATA || ret == -EOVERFLOW)
pr_warn("linux,pci-probe-only without valid value, ignoring\n");
return;
}
if (val)
if (of_pci_preserve_config(of_chosen))
pci_add_flags(PCI_PROBE_ONLY);
else
pci_clear_flags(PCI_PROBE_ONLY);
pr_info("PROBE_ONLY %s\n", val ? "enabled" : "disabled");
}
EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
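For reference, the property this helper consumes is a cell value set by firmware, either in the controller's node or globally under /chosen; a hypothetical DT fragment:

chosen {
	linux,pci-probe-only = <1>;
};

A value of 1 requests that the boot configuration be preserved, 0 explicitly disables that, and a property without a readable value is ignored with the warning shown above.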


@ -119,6 +119,28 @@ phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle)
return (phys_addr_t)mcfg_addr;
}
bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge)
{
if (ACPI_HANDLE(&host_bridge->dev)) {
union acpi_object *obj;
/*
* Evaluate the "PCI Boot Configuration" _DSM Function. If it
* exists and returns 0, we must preserve any PCI resource
* assignments made by firmware for this host bridge.
*/
obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(&host_bridge->dev),
&pci_acpi_dsm_guid,
1, DSM_PCI_PRESERVE_BOOT_CONFIG,
NULL, ACPI_TYPE_INTEGER);
if (obj && obj->integer.value == 0)
return true;
ACPI_FREE(obj);
}
return false;
}
/* _HPX PCI Setting Record (Type 0); same as _HPP */
struct hpx_type0 {
u32 revision; /* Not present in _HPP */


@ -38,8 +38,8 @@ pci_power_t mid_pci_get_power_state(struct pci_dev *pdev)
* arch/x86/platform/intel-mid/pwr.c.
*/
static const struct x86_cpu_id lpss_cpu_ids[] = {
-X86_MATCH_INTEL_FAM6_MODEL(ATOM_SALTWELL_MID, NULL),
-X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, NULL),
+X86_MATCH_VFM(INTEL_ATOM_SALTWELL_MID, NULL),
+X86_MATCH_VFM(INTEL_ATOM_SILVERMONT_MID, NULL),
{}
};


@ -39,4 +39,5 @@ static struct pci_driver pf_stub_driver = {
};
module_pci_driver(pf_stub_driver);
MODULE_DESCRIPTION("SR-IOV PF stub driver with no functionality");
MODULE_LICENSE("GPL");


@ -92,5 +92,6 @@ static void __exit pci_stub_exit(void)
module_init(pci_stub_init);
module_exit(pci_stub_exit);
MODULE_DESCRIPTION("VM device assignment stub driver");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Chris Wright <chrisw@sous-sol.org>");


@ -946,30 +946,67 @@ void pci_request_acs(void)
}
static const char *disable_acs_redir_param;
+static const char *config_acs_param;
-/**
-* pci_disable_acs_redir - disable ACS redirect capabilities
-* @dev: the PCI device
-*
-* For only devices specified in the disable_acs_redir parameter.
-*/
-static void pci_disable_acs_redir(struct pci_dev *dev)
-{
-int ret = 0;
-const char *p;
-int pos;
+struct pci_acs {
+u16 cap;
+u16 ctrl;
+u16 fw_ctrl;
+};
-if (!disable_acs_redir_param)
+static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+const char *p, u16 mask, u16 flags)
+{
+char *delimit;
+int ret = 0;
+if (!p)
return;
-p = disable_acs_redir_param;
while (*p) {
+if (!mask) {
+/* Check for ACS flags */
+delimit = strstr(p, "@");
+if (delimit) {
+int end;
+u32 shift = 0;
+end = delimit - p - 1;
+while (end > -1) {
+if (*(p + end) == '0') {
+mask |= 1 << shift;
+shift++;
+end--;
+} else if (*(p + end) == '1') {
+mask |= 1 << shift;
+flags |= 1 << shift;
+shift++;
+end--;
+} else if ((*(p + end) == 'x') || (*(p + end) == 'X')) {
+shift++;
+end--;
+} else {
+pci_err(dev, "Invalid ACS flags... Ignoring\n");
+return;
+}
+}
+p = delimit + 1;
+} else {
+pci_err(dev, "ACS Flags missing\n");
+return;
+}
+}
+if (mask & ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR | PCI_ACS_CR |
+PCI_ACS_UF | PCI_ACS_EC | PCI_ACS_DT)) {
+pci_err(dev, "Invalid ACS flags specified\n");
+return;
+}
ret = pci_dev_str_match(dev, p, &p);
if (ret < 0) {
-pr_info_once("PCI: Can't parse disable_acs_redir parameter: %s\n",
-disable_acs_redir_param);
+pr_info_once("PCI: Can't parse ACS command line parameter\n");
break;
} else if (ret == 1) {
/* Found a match */
@ -989,56 +1026,38 @@ static void pci_disable_acs_redir(struct pci_dev *dev)
if (!pci_dev_specific_disable_acs_redir(dev))
return;
-pos = dev->acs_cap;
-if (!pos) {
-pci_warn(dev, "cannot disable ACS redirect for this hardware as it does not have ACS capabilities\n");
-return;
-}
+pci_dbg(dev, "ACS mask = %#06x\n", mask);
+pci_dbg(dev, "ACS flags = %#06x\n", flags);
-pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl);
+/* If mask is 0 then we copy the bit from the firmware setting. */
+caps->ctrl = (caps->ctrl & ~mask) | (caps->fw_ctrl & mask);
+caps->ctrl |= flags;
-/* P2P Request & Completion Redirect */
-ctrl &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC);
-pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
-pci_info(dev, "disabled ACS redirect\n");
+pci_info(dev, "Configured ACS to %#06x\n", caps->ctrl);
}
/**
* pci_std_enable_acs - enable ACS on devices using standard ACS capabilities
* @dev: the PCI device
+ * @caps: default ACS controls
 */
-static void pci_std_enable_acs(struct pci_dev *dev)
+static void pci_std_enable_acs(struct pci_dev *dev, struct pci_acs *caps)
{
-int pos;
-u16 cap;
-u16 ctrl;
-pos = dev->acs_cap;
-if (!pos)
-return;
-pci_read_config_word(dev, pos + PCI_ACS_CAP, &cap);
-pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl);
/* Source Validation */
-ctrl |= (cap & PCI_ACS_SV);
+caps->ctrl |= (caps->cap & PCI_ACS_SV);
/* P2P Request Redirect */
-ctrl |= (cap & PCI_ACS_RR);
+caps->ctrl |= (caps->cap & PCI_ACS_RR);
/* P2P Completion Redirect */
-ctrl |= (cap & PCI_ACS_CR);
+caps->ctrl |= (caps->cap & PCI_ACS_CR);
/* Upstream Forwarding */
-ctrl |= (cap & PCI_ACS_UF);
+caps->ctrl |= (caps->cap & PCI_ACS_UF);
/* Enable Translation Blocking for external devices and noats */
if (pci_ats_disabled() || dev->external_facing || dev->untrusted)
-ctrl |= (cap & PCI_ACS_TB);
-pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
+caps->ctrl |= (caps->cap & PCI_ACS_TB);
}
/**
@ -1047,23 +1066,33 @@ static void pci_std_enable_acs(struct pci_dev *dev)
*/
static void pci_enable_acs(struct pci_dev *dev)
{
-if (!pci_acs_enable)
-goto disable_acs_redir;
+struct pci_acs caps;
+int pos;
-if (!pci_dev_specific_enable_acs(dev))
-goto disable_acs_redir;
+pos = dev->acs_cap;
+if (!pos)
+return;
-pci_std_enable_acs(dev);
+pci_read_config_word(dev, pos + PCI_ACS_CAP, &caps.cap);
+pci_read_config_word(dev, pos + PCI_ACS_CTRL, &caps.ctrl);
+caps.fw_ctrl = caps.ctrl;
+/* If an iommu is present we start with kernel default caps */
+if (pci_acs_enable) {
+if (pci_dev_specific_enable_acs(dev))
+pci_std_enable_acs(dev, &caps);
+}
-disable_acs_redir:
/*
-* Note: pci_disable_acs_redir() must be called even if ACS was not
-* enabled by the kernel because it may have been enabled by
-* platform firmware. So if we are told to disable it, we should
-* always disable it after setting the kernel's default
-* preferences.
+* Always apply caps from the command line, even if there is no iommu.
+* Trust that the admin has a reason to change the ACS settings.
*/
-pci_disable_acs_redir(dev);
+__pci_config_acs(dev, &caps, disable_acs_redir_param,
+PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC,
+~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC));
+__pci_config_acs(dev, &caps, config_acs_param, 0, 0);
+pci_write_config_word(dev, pos + PCI_ACS_CTRL, caps.ctrl);
}
/**
@ -2218,12 +2247,6 @@ void pci_disable_enabled_device(struct pci_dev *dev)
*/
void pci_disable_device(struct pci_dev *dev)
{
-struct pci_devres *dr;
-dr = find_pci_dr(dev);
-if (dr)
-dr->enabled = 0;
dev_WARN_ONCE(&dev->dev, atomic_read(&dev->enable_cnt) <= 0,
"disabling already-disabled device");
@ -3872,7 +3895,15 @@ EXPORT_SYMBOL(pci_enable_atomic_ops_to_root);
*/
void pci_release_region(struct pci_dev *pdev, int bar)
{
-struct pci_devres *dr;
+/*
+* This is done for backwards compatibility, because the old PCI devres
+* API had a mode in which the function became managed if it had been
+* enabled with pcim_enable_device() instead of pci_enable_device().
+*/
+if (pci_is_managed(pdev)) {
+pcim_release_region(pdev, bar);
+return;
+}
if (pci_resource_len(pdev, bar) == 0)
return;
@ -3882,10 +3913,6 @@ void pci_release_region(struct pci_dev *pdev, int bar)
else if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM)
release_mem_region(pci_resource_start(pdev, bar),
pci_resource_len(pdev, bar));
-dr = find_pci_dr(pdev);
-if (dr)
-dr->region_mask &= ~(1 << bar);
}
EXPORT_SYMBOL(pci_release_region);
@ -3896,6 +3923,8 @@ EXPORT_SYMBOL(pci_release_region);
* @res_name: Name to be associated with resource.
* @exclusive: whether the region access is exclusive or not
*
* Returns: 0 on success, negative error code on failure.
*
* Mark the PCI region associated with PCI device @pdev BAR @bar as
* being reserved by owner @res_name. Do not access any
* address inside the PCI regions unless this call returns
@ -3911,7 +3940,12 @@ EXPORT_SYMBOL(pci_release_region);
static int __pci_request_region(struct pci_dev *pdev, int bar,
const char *res_name, int exclusive)
{
-struct pci_devres *dr;
+if (pci_is_managed(pdev)) {
+if (exclusive == IORESOURCE_EXCLUSIVE)
+return pcim_request_region_exclusive(pdev, bar, res_name);
+return pcim_request_region(pdev, bar, res_name);
+}
if (pci_resource_len(pdev, bar) == 0)
return 0;
@ -3927,10 +3961,6 @@ static int __pci_request_region(struct pci_dev *pdev, int bar,
goto err_out;
}
-dr = find_pci_dr(pdev);
-if (dr)
-dr->region_mask |= 1 << bar;
return 0;
err_out:
@ -3945,6 +3975,8 @@ err_out:
* @bar: BAR to be reserved
* @res_name: Name to be associated with resource
*
* Returns: 0 on success, negative error code on failure.
*
* Mark the PCI region associated with PCI device @pdev BAR @bar as
* being reserved by owner @res_name. Do not access any
* address inside the PCI regions unless this call returns
@ -3952,6 +3984,11 @@ err_out:
*
* Returns 0 on success, or %EBUSY on error. A warning
* message is also printed on failure.
*
* NOTE:
* This is a "hybrid" function: It's normally unmanaged, but becomes managed
* when pcim_enable_device() has been called in advance. This hybrid feature is
* DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
*/
int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name)
{
@ -4002,6 +4039,13 @@ err_out:
* @pdev: PCI device whose resources are to be reserved
* @bars: Bitmask of BARs to be requested
* @res_name: Name to be associated with resource
*
* Returns: 0 on success, negative error code on failure.
*
* NOTE:
* This is a "hybrid" function: It's normally unmanaged, but becomes managed
* when pcim_enable_device() has been called in advance. This hybrid feature is
* DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
*/
int pci_request_selected_regions(struct pci_dev *pdev, int bars,
const char *res_name)
@ -4010,6 +4054,19 @@ int pci_request_selected_regions(struct pci_dev *pdev, int bars,
}
EXPORT_SYMBOL(pci_request_selected_regions);
/**
* pci_request_selected_regions_exclusive - Request regions exclusively
* @pdev: PCI device to request regions from
* @bars: bit mask of BARs to request
* @res_name: name to be associated with the requests
*
* Returns: 0 on success, negative error code on failure.
*
* NOTE:
* This is a "hybrid" function: It's normally unmanaged, but becomes managed
* when pcim_enable_device() has been called in advance. This hybrid feature is
* DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
*/
int pci_request_selected_regions_exclusive(struct pci_dev *pdev, int bars,
const char *res_name)
{
@ -4027,7 +4084,6 @@ EXPORT_SYMBOL(pci_request_selected_regions_exclusive);
* successful call to pci_request_regions(). Call this function only
* after all use of the PCI regions has ceased.
*/
void pci_release_regions(struct pci_dev *pdev)
{
pci_release_selected_regions(pdev, (1 << PCI_STD_NUM_BARS) - 1);
@ -4046,6 +4102,11 @@ EXPORT_SYMBOL(pci_release_regions);
*
* Returns 0 on success, or %EBUSY on error. A warning
* message is also printed on failure.
*
* NOTE:
* This is a "hybrid" function: It's normally unmanaged, but becomes managed
* when pcim_enable_device() has been called in advance. This hybrid feature is
* DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
*/
int pci_request_regions(struct pci_dev *pdev, const char *res_name)
{
@ -4059,6 +4120,8 @@ EXPORT_SYMBOL(pci_request_regions);
* @pdev: PCI device whose resources are to be reserved
* @res_name: Name to be associated with resource.
*
* Returns: 0 on success, negative error code on failure.
*
* Mark all PCI regions associated with PCI device @pdev as being reserved
* by owner @res_name. Do not access any address inside the PCI regions
* unless this call returns successfully.
@ -4068,6 +4131,11 @@ EXPORT_SYMBOL(pci_request_regions);
*
* Returns 0 on success, or %EBUSY on error. A warning message is also
* printed on failure.
*
* NOTE:
* This is a "hybrid" function: It's normally unmanaged, but becomes managed
* when pcim_enable_device() has been called in advance. This hybrid feature is
* DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
*/
int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name)
{
@ -4399,11 +4467,22 @@ void pci_disable_parity(struct pci_dev *dev)
* @enable: boolean: whether to enable or disable PCI INTx
*
* Enables/disables PCI INTx for device @pdev
*
* NOTE:
* This is a "hybrid" function: It's normally unmanaged, but becomes managed
* when pcim_enable_device() has been called in advance. This hybrid feature is
* DEPRECATED! If you want managed cleanup, use pcim_intx() instead.
*/
void pci_intx(struct pci_dev *pdev, int enable)
{
u16 pci_command, new;
+/* Preserve the "hybrid" behavior for backwards compatibility */
+if (pci_is_managed(pdev)) {
+WARN_ON_ONCE(pcim_intx(pdev, enable) != 0);
+return;
+}
pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
if (enable)
@ -4411,17 +4490,8 @@ void pci_intx(struct pci_dev *pdev, int enable)
else
new = pci_command | PCI_COMMAND_INTX_DISABLE;
-if (new != pci_command) {
-struct pci_devres *dr;
+if (new != pci_command)
pci_write_config_word(pdev, PCI_COMMAND, new);
-dr = find_pci_dr(pdev);
-if (dr && !dr->restore_intx) {
-dr->restore_intx = 1;
-dr->orig_intx = !enable;
-}
-}
}
EXPORT_SYMBOL_GPL(pci_intx);
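Taken together, the NOTEs above deprecate the hybrid behavior in favor of explicit devres. A minimal sketch of a probe path that opts into managed cleanup throughout (the driver name, BAR number, and region name are illustrative):

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int ret;

	ret = pcim_enable_device(pdev);		/* managed enable */
	if (ret)
		return ret;

	ret = pcim_request_region(pdev, 0, "demo");	/* always managed */
	if (ret)
		return ret;

	regs = pcim_iomap(pdev, 0, 0);		/* unmapped on unbind */
	if (!regs)
		return -ENOMEM;

	/* No corresponding cleanup is needed in remove(). */
	return 0;
}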
@ -4753,7 +4823,7 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
*/
int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
{
struct pci_dev *child;
struct pci_dev *child __free(pci_dev_put) = NULL;
int delay;
if (pci_dev_is_disconnected(dev))
@ -4782,8 +4852,8 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
return 0;
}
-child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
-bus_list);
+child = pci_dev_get(list_first_entry(&dev->subordinate->devices,
+struct pci_dev, bus_list));
up_read(&pci_bus_sem);
/*
@ -4883,6 +4953,9 @@ void __weak pcibios_reset_secondary_bus(struct pci_dev *dev)
*/
int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
{
if (!dev->block_cfg_access)
pci_warn_once(dev, "unlocked secondary bus reset via: %pS\n",
__builtin_return_address(0));
pcibios_reset_secondary_bus(dev);
return pci_bridge_wait_for_secondary_bus(dev, "bus reset");
@ -5441,10 +5514,12 @@ static void pci_bus_lock(struct pci_bus *bus)
{
struct pci_dev *dev;
+pci_dev_lock(bus->self);
list_for_each_entry(dev, &bus->devices, bus_list) {
-pci_dev_lock(dev);
if (dev->subordinate)
pci_bus_lock(dev->subordinate);
+else
+pci_dev_lock(dev);
}
}
@ -5456,8 +5531,10 @@ static void pci_bus_unlock(struct pci_bus *bus)
list_for_each_entry(dev, &bus->devices, bus_list) {
if (dev->subordinate)
pci_bus_unlock(dev->subordinate);
-pci_dev_unlock(dev);
+else
+pci_dev_unlock(dev);
}
+pci_dev_unlock(bus->self);
}
/* Return 1 on successful lock, 0 on contention */
@ -5465,15 +5542,15 @@ static int pci_bus_trylock(struct pci_bus *bus)
{
struct pci_dev *dev;
+if (!pci_dev_trylock(bus->self))
+return 0;
list_for_each_entry(dev, &bus->devices, bus_list) {
-if (!pci_dev_trylock(dev))
-goto unlock;
if (dev->subordinate) {
-if (!pci_bus_trylock(dev->subordinate)) {
-pci_dev_unlock(dev);
+if (!pci_bus_trylock(dev->subordinate))
goto unlock;
-}
-}
+} else if (!pci_dev_trylock(dev))
+goto unlock;
}
return 1;
@ -5481,8 +5558,10 @@ unlock:
list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) {
if (dev->subordinate)
pci_bus_unlock(dev->subordinate);
-pci_dev_unlock(dev);
+else
+pci_dev_unlock(dev);
}
+pci_dev_unlock(bus->self);
return 0;
}
@ -5514,9 +5593,10 @@ static void pci_slot_lock(struct pci_slot *slot)
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
continue;
-pci_dev_lock(dev);
if (dev->subordinate)
pci_bus_lock(dev->subordinate);
+else
+pci_dev_lock(dev);
}
}
@ -5542,14 +5622,13 @@ static int pci_slot_trylock(struct pci_slot *slot)
list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
continue;
if (!pci_dev_trylock(dev))
goto unlock;
if (dev->subordinate) {
if (!pci_bus_trylock(dev->subordinate)) {
pci_dev_unlock(dev);
goto unlock;
}
}
} else if (!pci_dev_trylock(dev))
goto unlock;
}
return 1;
@ -5560,7 +5639,8 @@ unlock:
continue;
if (dev->subordinate)
pci_bus_unlock(dev->subordinate);
-pci_dev_unlock(dev);
+else
+pci_dev_unlock(dev);
}
return 0;
}
@ -6019,24 +6099,7 @@ int pcie_link_speed_mbps(struct pci_dev *pdev)
if (err)
return err;
-switch (to_pcie_link_speed(lnksta)) {
-case PCIE_SPEED_2_5GT:
-return 2500;
-case PCIE_SPEED_5_0GT:
-return 5000;
-case PCIE_SPEED_8_0GT:
-return 8000;
-case PCIE_SPEED_16_0GT:
-return 16000;
-case PCIE_SPEED_32_0GT:
-return 32000;
-case PCIE_SPEED_64_0GT:
-return 64000;
-default:
-break;
-}
-return -EINVAL;
+return pcie_dev_speed_mbps(to_pcie_link_speed(lnksta));
}
EXPORT_SYMBOL(pcie_link_speed_mbps);
@ -6839,6 +6902,8 @@ static int __init pci_setup(char *str)
pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS);
} else if (!strncmp(str, "disable_acs_redir=", 18)) {
disable_acs_redir_param = str + 18;
} else if (!strncmp(str, "config_acs=", 11)) {
config_acs_param = str + 11;
} else {
pr_err("PCI: Unknown option `%s'\n", str);
}
@ -6863,6 +6928,7 @@ static int __init pci_realloc_setup_params(void)
resource_alignment_param = kstrdup(resource_alignment_param,
GFP_KERNEL);
disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL);
config_acs_param = kstrdup(config_acs_param, GFP_KERNEL);
return 0;
}
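The new config_acs= option takes an ACS flag string and a device match, separated by '@'. Going by the parse in __pci_config_acs() above, the last character of the flag string is ACS bit 0 (Source Validation) and bits follow the PCI_ACS_* order, with '1' forcing a bit on, '0' forcing it off, and 'x' leaving it alone. So a hypothetical boot option such as

pci=config_acs=10x@pci:0000:00:1c.0

asks that, for the matching device, P2P Request Redirect (bit 2) be forced on, Translation Blocking (bit 1) be forced off, and Source Validation (bit 0) be left as firmware configured it; disable_acs_redir= keeps its historical meaning and is now implemented on top of the same helper.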


@ -16,12 +16,55 @@
/* Power stable to PERST# inactive from PCIe card Electromechanical Spec */
#define PCIE_T_PVPERL_MS 100
/*
* End of conventional reset (PERST# de-asserted) to first configuration
* request (device able to respond with a "Request Retry Status" completion),
* from PCIe r6.0, sec 6.6.1.
*/
#define PCIE_T_RRS_READY_MS 100
/*
* PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization>
* Recommends 1ms to 10ms timeout to check L2 ready.
*/
#define PCIE_PME_TO_L2_TIMEOUT_US 10000
/*
* PCIe r6.0, sec 6.6.1 <Conventional Reset>
*
* - "With a Downstream Port that does not support Link speeds greater
* than 5.0 GT/s, software must wait a minimum of 100 ms following exit
* from a Conventional Reset before sending a Configuration Request to
* the device immediately below that Port."
*
* - "With a Downstream Port that supports Link speeds greater than
* 5.0 GT/s, software must wait a minimum of 100 ms after Link training
* completes before sending a Configuration Request to the device
* immediately below that Port."
*/
#define PCIE_RESET_CONFIG_DEVICE_WAIT_MS 100
/* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */
#define PCIE_MSG_TYPE_R_RC 0
#define PCIE_MSG_TYPE_R_ADDR 1
#define PCIE_MSG_TYPE_R_ID 2
#define PCIE_MSG_TYPE_R_BC 3
#define PCIE_MSG_TYPE_R_LOCAL 4
#define PCIE_MSG_TYPE_R_GATHER 5
/* Power Management Messages; PCIe r6.0, sec 2.2.8.2 */
#define PCIE_MSG_CODE_PME_TURN_OFF 0x19
/* INTx Mechanism Messages; PCIe r6.0, sec 2.2.8.1 */
#define PCIE_MSG_CODE_ASSERT_INTA 0x20
#define PCIE_MSG_CODE_ASSERT_INTB 0x21
#define PCIE_MSG_CODE_ASSERT_INTC 0x22
#define PCIE_MSG_CODE_ASSERT_INTD 0x23
#define PCIE_MSG_CODE_DEASSERT_INTA 0x24
#define PCIE_MSG_CODE_DEASSERT_INTB 0x25
#define PCIE_MSG_CODE_DEASSERT_INTC 0x26
#define PCIE_MSG_CODE_DEASSERT_INTD 0x27
extern const unsigned char pcie_link_speed[];
extern bool pci_early_dump;
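Host controller drivers use these constants to honor the spec-mandated delay between releasing PERST# and the first configuration access. An illustrative sequence (the GPIO handle and polarity handling are assumptions; some drivers use PCIE_T_RRS_READY_MS for the same 100 ms):

/* De-assert PERST# and wait before the first config request. */
gpiod_set_value_cansleep(perst_gpio, 0);
msleep(PCIE_RESET_CONFIG_DEVICE_WAIT_MS);	/* PCIe r6.0, sec 6.6.1 */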
@ -290,6 +333,28 @@ void pci_bus_put(struct pci_bus *bus);
(speed) == PCIE_SPEED_2_5GT ? 2500*8/10 : \
0)
static inline int pcie_dev_speed_mbps(enum pci_bus_speed speed)
{
switch (speed) {
case PCIE_SPEED_2_5GT:
return 2500;
case PCIE_SPEED_5_0GT:
return 5000;
case PCIE_SPEED_8_0GT:
return 8000;
case PCIE_SPEED_16_0GT:
return 16000;
case PCIE_SPEED_32_0GT:
return 32000;
case PCIE_SPEED_64_0GT:
return 64000;
default:
break;
}
return -EINVAL;
}
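A one-line illustration of the refactor's payoff: any caller holding an enum pci_bus_speed can now convert it directly, e.g.

int mbps = pcie_dev_speed_mbps(pcie_get_speed_cap(pdev));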
const char *pci_speed_string(enum pci_bus_speed speed);
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
@ -648,6 +713,7 @@ int of_pci_get_max_link_speed(struct device_node *node);
u32 of_pci_get_slot_power_limit(struct device_node *node,
u8 *slot_power_limit_value,
u8 *slot_power_limit_scale);
bool of_pci_preserve_config(struct device_node *node);
int pci_set_of_node(struct pci_dev *dev);
void pci_release_of_node(struct pci_dev *dev);
void pci_set_bus_of_node(struct pci_bus *bus);
@ -686,6 +752,11 @@ of_pci_get_slot_power_limit(struct device_node *node,
return 0;
}
static inline bool of_pci_preserve_config(struct device_node *node)
{
return false;
}
static inline int pci_set_of_node(struct pci_dev *dev) { return 0; }
static inline void pci_release_of_node(struct pci_dev *dev) { }
static inline void pci_set_bus_of_node(struct pci_bus *bus) { }
@ -732,6 +803,7 @@ static inline void pci_restore_aer_state(struct pci_dev *dev) { }
#endif
#ifdef CONFIG_ACPI
bool pci_acpi_preserve_config(struct pci_host_bridge *bridge);
int pci_acpi_program_hp_params(struct pci_dev *dev);
extern const struct attribute_group pci_dev_acpi_attr_group;
void pci_set_acpi_fwnode(struct pci_dev *dev);
@ -745,6 +817,10 @@ int acpi_pci_wakeup(struct pci_dev *dev, bool enable);
bool acpi_pci_need_resume(struct pci_dev *dev);
pci_power_t acpi_pci_choose_state(struct pci_dev *pdev);
#else
static inline bool pci_acpi_preserve_config(struct pci_host_bridge *bridge)
{
return false;
}
static inline int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
{
return -ENOTTY;
@ -810,26 +886,12 @@ static inline pci_power_t mid_pci_get_power_state(struct pci_dev *pdev)
}
#endif
-/*
-* Managed PCI resources. This manages device on/off, INTx/MSI/MSI-X
-* on/off and BAR regions. pci_dev itself records MSI/MSI-X status, so
-* there's no need to track it separately. pci_devres is initialized
-* when a device is enabled using managed PCI device enable interface.
-*
-* TODO: Struct pci_devres and find_pci_dr() only need to be here because
-* they're used in pci.c. Port or move these functions to devres.c and
-* then remove them from here.
-*/
-struct pci_devres {
-unsigned int enabled:1;
-unsigned int pinned:1;
-unsigned int orig_intx:1;
-unsigned int restore_intx:1;
-unsigned int mwi:1;
-u32 region_mask;
-};
+int pcim_intx(struct pci_dev *dev, int enable);
-struct pci_devres *find_pci_dr(struct pci_dev *pdev);
+int pcim_request_region(struct pci_dev *pdev, int bar, const char *name);
+int pcim_request_region_exclusive(struct pci_dev *pdev, int bar,
+const char *name);
+void pcim_release_region(struct pci_dev *pdev, int bar);
/*
* Config Address for PCI Configuration Mechanism #1


@ -1497,6 +1497,22 @@ static int aer_probe(struct pcie_device *dev)
return 0;
}
static int aer_suspend(struct pcie_device *dev)
{
struct aer_rpc *rpc = get_service_data(dev);
aer_disable_rootport(rpc);
return 0;
}
static int aer_resume(struct pcie_device *dev)
{
struct aer_rpc *rpc = get_service_data(dev);
aer_enable_rootport(rpc);
return 0;
}
/**
* aer_root_reset - reset Root Port hierarchy, RCEC, or RCiEP
* @dev: pointer to Root Port, RCEC, or RCiEP
@ -1561,6 +1577,8 @@ static struct pcie_port_service_driver aerdriver = {
.service = PCIE_PORT_SERVICE_AER,
.probe = aer_probe,
.suspend = aer_suspend,
.resume = aer_resume,
.remove = aer_remove,
};


@ -412,13 +412,44 @@ void pci_dpc_init(struct pci_dev *pdev)
}
}
static void dpc_enable(struct pcie_device *dev)
{
struct pci_dev *pdev = dev->port;
int dpc = pdev->dpc_cap;
u16 ctl;
/*
* Clear DPC Interrupt Status so we don't get an interrupt for an
* old event when setting DPC Interrupt Enable.
*/
pci_write_config_word(pdev, dpc + PCI_EXP_DPC_STATUS,
PCI_EXP_DPC_STATUS_INTERRUPT);
pci_read_config_word(pdev, dpc + PCI_EXP_DPC_CTL, &ctl);
ctl &= ~PCI_EXP_DPC_CTL_EN_MASK;
ctl |= PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
pci_write_config_word(pdev, dpc + PCI_EXP_DPC_CTL, ctl);
}
static void dpc_disable(struct pcie_device *dev)
{
struct pci_dev *pdev = dev->port;
int dpc = pdev->dpc_cap;
u16 ctl;
/* Disable DPC triggering and DPC interrupts */
pci_read_config_word(pdev, dpc + PCI_EXP_DPC_CTL, &ctl);
ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
pci_write_config_word(pdev, dpc + PCI_EXP_DPC_CTL, ctl);
}
#define FLAG(x, y) (((x) & (y)) ? '+' : '-')
static int dpc_probe(struct pcie_device *dev)
{
struct pci_dev *pdev = dev->port;
struct device *device = &dev->device;
int status;
u16 ctl, cap;
u16 cap;
if (!pcie_aer_is_native(pdev) && !pcie_ports_dpc_native)
return -ENOTSUPP;
@ -433,11 +464,7 @@ static int dpc_probe(struct pcie_device *dev)
}
pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
-pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
-ctl &= ~PCI_EXP_DPC_CTL_EN_MASK;
-ctl |= PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
-pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
+dpc_enable(dev);
pci_info(pdev, "enabled with IRQ %d\n", dev->irq);
pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
@ -450,14 +477,21 @@ static int dpc_probe(struct pcie_device *dev)
return status;
}
static int dpc_suspend(struct pcie_device *dev)
{
dpc_disable(dev);
return 0;
}
static int dpc_resume(struct pcie_device *dev)
{
dpc_enable(dev);
return 0;
}
static void dpc_remove(struct pcie_device *dev)
{
-struct pci_dev *pdev = dev->port;
-u16 ctl;
-pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
-ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
-pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
+dpc_disable(dev);
}
static struct pcie_port_service_driver dpcdriver = {
@ -465,6 +499,8 @@ static struct pcie_port_service_driver dpcdriver = {
.port_type = PCIE_ANY_PORT,
.service = PCIE_PORT_SERVICE_DPC,
.probe = dpc_probe,
.suspend = dpc_suspend,
.resume = dpc_resume,
.remove = dpc_remove,
};


@ -786,7 +786,7 @@ static const struct pci_error_handlers pcie_portdrv_err_handler = {
static struct pci_driver pcie_portdriver = {
.name = "pcieport",
-.id_table = &port_pci_ids[0],
+.id_table = port_pci_ids,
.probe = pcie_portdrv_probe,
.remove = pcie_portdrv_remove,


@ -889,6 +889,17 @@ static void pci_set_bus_msi_domain(struct pci_bus *bus)
dev_set_msi_domain(&bus->dev, d);
}
static bool pci_preserve_config(struct pci_host_bridge *host_bridge)
{
if (pci_acpi_preserve_config(host_bridge))
return true;
if (host_bridge->dev.parent && host_bridge->dev.parent->of_node)
return of_pci_preserve_config(host_bridge->dev.parent->of_node);
return false;
}
static int pci_register_host_bridge(struct pci_host_bridge *bridge)
{
struct device *parent = bridge->dev.parent;
@ -983,6 +994,9 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE)
dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n");
/* Check if the boot configuration by FW needs to be preserved */
bridge->preserve_config = pci_preserve_config(bridge);
/* Coalesce contiguous windows */
resource_list_for_each_entry_safe(window, n, &resources) {
if (list_is_last(&window->node, &resources))
@ -3079,20 +3093,18 @@ int pci_host_probe(struct pci_host_bridge *bridge)
bus = bridge->bus;
-/*
-* We insert PCI resources into the iomem_resource and
-* ioport_resource trees in either pci_bus_claim_resources()
-* or pci_bus_assign_resources().
-*/
-if (pci_has_flag(PCI_PROBE_ONLY)) {
+/* If we must preserve the resource configuration, claim now */
+if (bridge->preserve_config)
pci_bus_claim_resources(bus);
-} else {
-pci_bus_size_bridges(bus);
-pci_bus_assign_resources(bus);
-list_for_each_entry(child, &bus->children, node)
-pcie_bus_configure_settings(child);
-}
+/*
+* Assign whatever was left unassigned. If we didn't claim above,
+* this will reassign everything.
+*/
+pci_assign_unassigned_root_bus_resources(bus);
+list_for_each_entry(child, &bus->children, node)
+pcie_bus_configure_settings(child);
pci_bus_add_devices(bus);
return 0;


@ -5099,6 +5099,10 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_BROADCOM, 0x1750, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0x1751, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0x1752, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0x1760, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0x1761, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0x1762, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0x1763, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
/* Amazon Annapurna Labs */
{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },


@ -14,6 +14,7 @@
* tighter packing. Prefetchable range support.
*/
#include <linux/bitops.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -21,6 +22,8 @@
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/cache.h>
#include <linux/limits.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/acpi.h>
#include "pci.h"
@ -829,11 +832,9 @@ static resource_size_t calculate_memsize(resource_size_t size,
size = min_size;
if (old_size == 1)
old_size = 0;
-if (size < old_size)
-size = old_size;
-size = ALIGN(max(size, add_size) + children_add_size, align);
-return size;
+size = max(size, add_size) + children_add_size;
+return ALIGN(max(size, old_size), align);
}
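A worked example of what moving the old_size comparison achieves (numbers illustrative): with size = 1 MB of current child requirements, add_size = 0, children_add_size = 1 MB of optional headroom, old_size = 2 MB already assigned to the window, and align = 1 MB, the old code first raised size to old_size and then added the headroom, returning ALIGN(2 MB + 1 MB) = 3 MB, so a remove/rescan cycle grew the window even though nothing changed; the new code computes the requirement first, max(1 MB, 0) + 1 MB = 2 MB, and only then applies old_size as a floor, returning 2 MB.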
resource_size_t __weak pcibios_window_alignment(struct pci_bus *bus,
@ -959,7 +960,7 @@ static inline resource_size_t calculate_mem_align(resource_size_t *aligns,
for (order = 0; order <= max_order; order++) {
resource_size_t align1 = 1;
-align1 <<= (order + 20);
+align1 <<= order + __ffs(SZ_1M);
if (!align)
min_align = align1;
@ -971,6 +972,67 @@ static inline resource_size_t calculate_mem_align(resource_size_t *aligns,
return min_align;
}
/**
* pbus_upstream_space_available - Check that no upstream resource limits the allocation
* @bus: The bus
* @mask: Mask the resource flag, then compare it with type
* @type: The type of resource from bridge
* @size: The size required from the bridge window
* @align: Required alignment for the resource
*
* Checks that @size can fit inside the upstream bridge resources that are
* already assigned.
*
* Return: %true if enough space is available on all assigned upstream
* resources.
*/
static bool pbus_upstream_space_available(struct pci_bus *bus, unsigned long mask,
unsigned long type, resource_size_t size,
resource_size_t align)
{
struct resource_constraint constraint = {
.max = RESOURCE_SIZE_MAX,
.align = align,
};
struct pci_bus *downstream = bus;
struct resource *r;
while ((bus = bus->parent)) {
if (pci_is_root_bus(bus))
break;
pci_bus_for_each_resource(bus, r) {
if (!r || !r->parent || (r->flags & mask) != type)
continue;
if (resource_size(r) >= size) {
struct resource gap = {};
if (find_resource_space(r, &gap, size, &constraint) == 0) {
gap.flags = type;
pci_dbg(bus->self,
"Assigned bridge window %pR to %pR free space at %pR\n",
r, &bus->busn_res, &gap);
return true;
}
}
if (bus->self) {
pci_info(bus->self,
"Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n",
r, &bus->busn_res,
(unsigned long long)size,
pci_name(downstream->self),
&downstream->busn_res);
}
return false;
}
}
return true;
}
/**
* pbus_size_mem() - Size the memory window of a given bus
*
@ -997,7 +1059,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
struct list_head *realloc_head)
{
struct pci_dev *dev;
resource_size_t min_align, align, size, size0, size1;
resource_size_t min_align, win_align, align, size, size0, size1;
resource_size_t aligns[24]; /* Alignments from 1MB to 8TB */
int order, max_order;
struct resource *b_res = find_bus_resource_of_type(bus,
@ -1049,7 +1111,7 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
* resources.
*/
align = pci_resource_alignment(dev, r);
-order = __ffs(align) - 20;
+order = __ffs(align) - __ffs(SZ_1M);
if (order < 0)
order = 0;
if (order >= ARRAY_SIZE(aligns)) {
@ -1076,10 +1138,23 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
}
}
+win_align = window_alignment(bus, b_res->flags);
min_align = calculate_mem_align(aligns, max_order);
-min_align = max(min_align, window_alignment(bus, b_res->flags));
+min_align = max(min_align, win_align);
size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align);
add_align = max(min_align, add_align);
if (bus->self && size0 &&
!pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type,
size0, add_align)) {
min_align = 1ULL << (max_order + __ffs(SZ_1M));
min_align = max(min_align, win_align);
size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), win_align);
add_align = win_align;
pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n",
b_res, &bus->busn_res);
}
size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 :
calculate_memsize(size, min_size, add_size, children_add_size,
resource_size(b_res), add_align);


@ -37,7 +37,9 @@ MODULE_PARM_DESC(nirqs, "number of interrupts to allocate (more may be useful fo
static dev_t switchtec_devt;
static DEFINE_IDA(switchtec_minor_ida);
-struct class *switchtec_class;
+const struct class switchtec_class = {
+.name = "switchtec",
+};
EXPORT_SYMBOL_GPL(switchtec_class);
enum mrpc_state {
@ -1363,7 +1365,7 @@ static struct switchtec_dev *stdev_create(struct pci_dev *pdev)
dev = &stdev->dev;
device_initialize(dev);
-dev->class = switchtec_class;
+dev->class = &switchtec_class;
dev->parent = &pdev->dev;
dev->groups = switchtec_device_groups;
dev->release = stdev_release;
@ -1851,11 +1853,9 @@ static int __init switchtec_init(void)
if (rc)
return rc;
-switchtec_class = class_create("switchtec");
-if (IS_ERR(switchtec_class)) {
-rc = PTR_ERR(switchtec_class);
+rc = class_register(&switchtec_class);
+if (rc)
goto err_create_class;
-}
rc = pci_register_driver(&switchtec_pci_driver);
if (rc)
@ -1866,7 +1866,7 @@ static int __init switchtec_init(void)
return 0;
err_pci_register:
-class_destroy(switchtec_class);
+class_unregister(&switchtec_class);
err_create_class:
unregister_chrdev_region(switchtec_devt, max_devices);
@ -1878,7 +1878,7 @@ module_init(switchtec_init);
static void __exit switchtec_exit(void)
{
pci_unregister_driver(&switchtec_pci_driver);
-class_destroy(switchtec_class);
+class_unregister(&switchtec_class);
unregister_chrdev_region(switchtec_devt, max_devices);
ida_destroy(&switchtec_minor_ida);


@ -231,7 +231,7 @@ static const struct pci_device_id cdnsp_pci_ids[] = {
static struct pci_driver cdnsp_pci_driver = {
.name = "cdnsp-pci",
-.id_table = &cdnsp_pci_ids[0],
+.id_table = cdnsp_pci_ids,
.probe = cdnsp_pci_probe,
.remove = cdnsp_pci_remove,
.driver = {


@ -121,7 +121,7 @@ static const struct pci_device_id cdns2_pci_ids[] = {
static struct pci_driver cdns2_pci_driver = {
.name = "cdns2-pci",
-.id_table = &cdns2_pci_ids[0],
+.id_table = cdns2_pci_ids,
.probe = cdns2_pci_probe,
.remove = cdns2_pci_remove,
.driver = {


@ -188,6 +188,42 @@ enum {
#define DEFINE_RES_DMA(_dma) \
DEFINE_RES_DMA_NAMED((_dma), NULL)
/**
* typedef resource_alignf - Resource alignment callback
* @data: Private data used by the callback
* @res: Resource candidate range (an empty resource space)
* @size: The minimum size of the empty space
* @align: Alignment from the constraints
*
* Callback allows calculating resource placement and alignment beyond min,
* max, and align fields in the struct resource_constraint.
*
* Return: Start address for the resource.
*/
typedef resource_size_t (*resource_alignf)(void *data,
const struct resource *res,
resource_size_t size,
resource_size_t align);
/**
* struct resource_constraint - constraints to be met while searching empty
* resource space
* @min: The minimum address for the memory range
* @max: The maximum address for the memory range
* @align: Alignment for the start address of the empty space
* @alignf: Additional alignment constraints callback
* @alignf_data: Data provided for @alignf callback
*
* Contains the range and alignment constraints that have to be met during
* find_resource_space(). @alignf can be NULL indicating no alignment beyond
* @align is necessary.
*/
struct resource_constraint {
resource_size_t min, max, align;
resource_alignf alignf;
void *alignf_data;
};
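The typedef makes custom placement policy pluggable without the long inline function-pointer type. A hypothetical callback that refuses to place anything below 1 MB might look like:

static resource_size_t demo_alignf(void *data, const struct resource *res,
				   resource_size_t size, resource_size_t align)
{
	/* Return the start address to try; push low candidates up to 1 MB. */
	return max_t(resource_size_t, res->start, SZ_1M);
}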
/* PC/ISA/whatever - the normal PC address spaces: IO and memory */
extern struct resource ioport_resource;
extern struct resource iomem_resource;
@ -207,10 +243,7 @@ extern void arch_remove_reservations(struct resource *avail);
extern int allocate_resource(struct resource *root, struct resource *new,
resource_size_t size, resource_size_t min,
resource_size_t max, resource_size_t align,
-resource_size_t (*alignf)(void *,
-const struct resource *,
-resource_size_t,
-resource_size_t),
+resource_alignf alignf,
void *alignf_data);
struct resource *lookup_resource(struct resource *root, resource_size_t start);
int adjust_resource(struct resource *res, resource_size_t start,
@ -264,6 +297,9 @@ static inline bool resource_union(const struct resource *r1, const struct resour
return true;
}
int find_resource_space(struct resource *root, struct resource *new,
resource_size_t size, struct resource_constraint *constraint);
/* Convenience shorthand with allocation */
#define request_region(start,n,name) __request_region(&ioport_resource, (start), (n), (name), 0)
#define request_muxed_region(start,n,name) __request_region(&ioport_resource, (start), (n), (name), IORESOURCE_MUXED)
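With find_resource_space() exported, a caller can test for a gap without committing an allocation, much as pbus_upstream_space_available() does earlier in this series. A sketch (bridge_window and the sizes are illustrative):

struct resource_constraint constraint = {
	.max = RESOURCE_SIZE_MAX,
	.align = SZ_1M,
};
struct resource gap = {};

/* Is there a 4 MB hole inside the already-assigned window? */
if (find_resource_space(bridge_window, &gap, SZ_4M, &constraint) == 0)
	pr_info("free space at %pR\n", &gap);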


@ -197,6 +197,8 @@ struct pci_epc_features {
#define to_pci_epc(device) container_of((device), struct pci_epc, dev)
#ifdef CONFIG_PCI_ENDPOINT
#define pci_epc_create(dev, ops) \
__pci_epc_create((dev), (ops), THIS_MODULE)
#define devm_pci_epc_create(dev, ops) \
@ -226,7 +228,8 @@ void pci_epc_linkup(struct pci_epc *epc);
void pci_epc_linkdown(struct pci_epc *epc);
void pci_epc_init_notify(struct pci_epc *epc);
void pci_epc_notify_pending_init(struct pci_epc *epc, struct pci_epf *epf);
-void pci_epc_bme_notify(struct pci_epc *epc);
+void pci_epc_deinit_notify(struct pci_epc *epc);
+void pci_epc_bus_master_enable_notify(struct pci_epc *epc);
void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
enum pci_epc_interface_type type);
int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
@ -272,4 +275,14 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
phys_addr_t *phys_addr, size_t size);
void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
void __iomem *virt_addr, size_t size);
#else
static inline void pci_epc_init_notify(struct pci_epc *epc)
{
}
static inline void pci_epc_deinit_notify(struct pci_epc *epc)
{
}
#endif /* CONFIG_PCI_ENDPOINT */
#endif /* __LINUX_PCI_EPC_H */
