pci-v6.5-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmSdtQYUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vxp7A/+KmoBOm9ytiM2HPiq6pmHiJ9zF/DP
 0jvKqDlc0BkmCyRw+/woxA5ZQgDnIYXxxt31toeSu+n31p6AR4wZ14Q5HBahABMw
 O/NUXmLAhYaczcp8hK4lS4Opz65+MaDHomu5fNuD7j7CagIwu20MegSEoyo35YC1
 nDRN0IVYRRy/58wW509deQi/3U04TuC3kflc/iBToa/34g77L9ESoxpsZuAzo7wI
 nc9DF28H6PkuOtnp26iVufqkeYD3wfE5VAtaIZhyO+/WkhcTTUU5JB2hgFSACr0G
 qJTofncvGQrRYTNS7aIYPVrtTZpSMmPS08tgZc+iDkTr8iKth1jcPf3YHLpenQzx
 2B197BUOLENOiWJFPIAe2TAgoGYYBgKhnnwcPHCHoinvVLTO82wUUW5qfn/GckgY
 WNYmS4PofkjlbJH3JdyHdH+vsL0VRzAmkhH+k6F+V02T78Lk+QdXKegLbr5yzRwh
 YF/GjX0JYjnONQeL1LQQ/4hfiA1VzmoXbHvXc0XJew6d/RYMon5G5xBAAZ8tnUHC
 PX6WMd/CG8RBxFv7IsPh6hTGKEXw4/1ElynPVP/ZEBGVelHda4Sm4G5nJ6/r2cwK
 VICfsSb9Nw76STGHpVeQB2O2Y/yZ1xIwWRiAsCndsYOlnGi+knRVxH5UH/1aQPOE
 N3DsyT0s/sZaJvc=
 =xpTd
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Export pcie_retrain_link() for use outside ASPM

   - Add Data Link Layer Link Active Reporting as another way for
     pcie_retrain_link() to determine that the link is up

   - Work around link training failures (especially on the ASMedia
     ASM2824 switch) by training first at 2.5GT/s and then attempting
     higher rates
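
   A minimal sketch of how the new link_active_reporting path gets used
   (this mirrors the eeh and mlx5 conversions in the diffs below; the
   helper name wait_for_link_up is hypothetical, not the kernel's exact
   code):

        /* Prefer the Data Link Layer Link Active status bit when the
         * bridge advertises DLLLA reporting; otherwise fall back to a
         * fixed delay, as the converted callers below do. */
        static bool wait_for_link_up(struct pci_dev *bridge)
        {
                u16 lnksta;

                if (!bridge->link_active_reporting) {
                        msleep(1000);
                        return true;
                }

                pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &lnksta);
                return lnksta & PCI_EXP_LNKSTA_DLLLA;
        }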

  Resource management:

   - When we coalesce host bridge windows, remove invalidated resources
     from the resource tree so future allocations work correctly
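
   Conceptually, a window absorbed during coalescing must also be
   dropped from the resource tree; a hedged sketch of the idea only,
   not the exact code in the ACPI host-bridge path:

        /* If "absorbed" stayed in the tree after its range was merged
         * into a neighbouring window, later allocations could collide
         * with a region the kernel no longer really tracks. */
        release_resource(absorbed);     /* unlink from the resource tree */
        absorbed->flags = 0;            /* mark the entry invalid */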

  Hotplug:

   - Cancel bringup sequence if card is not present, to keep from
     blinking Power Indicator indefinitely

   - Reassign bridge resources if necessary for ACPI hotplug

  Driver binding:

   - Convert platform_device .remove() callbacks to return void instead
     of a mostly useless int
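
   The conversion pattern, repeated throughout the diffs below, looks
   roughly like this ("foo" is a placeholder driver name):

        /* Before: "static int foo_pcie_remove(...)" ending in
         * "return 0;", registered via .remove.  After: */
        static void foo_pcie_remove(struct platform_device *pdev)
        {
                struct foo_pcie *pcie = platform_get_drvdata(pdev);

                foo_pcie_teardown(pcie);        /* no return value */
        }

        static struct platform_driver foo_pcie_driver = {
                .probe      = foo_pcie_probe,
                .remove_new = foo_pcie_remove,  /* was .remove */
                .driver     = {
                        .name = "foo-pcie",
                },
        };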

  Power management:

   - Reduce wait time for secondary bus to be ready to speed up resume

   - Avoid putting EloPOS E2/S2/H2 (as well as Elo i2) PCIe Ports in
     D3cold

   - Call _REG when transitioning D-states so AML that uses the PCI
     config space OpRegion works, which fixes some ASMedia GPIO
     controllers after resume
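
   For the _REG fix, the existing ACPI helper for this is
   acpi_evaluate_reg(); a hedged sketch of the call the PM path
   presumably makes around a D-state change (the exact call site is an
   assumption; handle and state stand for the device's ACPI handle and
   target power state):

        /* Tell AML whether the PCI config space OpRegion is usable;
         * without this, _REG-dependent AML (e.g. for some ASMedia GPIO
         * controllers) misbehaves after resume. */
        acpi_evaluate_reg(handle, ACPI_ADR_SPACE_PCI_CONFIG,
                          state == PCI_D0 ? ACPI_REG_CONNECT
                                          : ACPI_REG_DISCONNECT);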

  Virtualization:

   - Delay extra 250ms after FLR of Solidigm P44 Pro NVMe to avoid KVM
     hang when guest is rebooted

   - Add function 1 DMA alias quirk for Marvell 88SE9235
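
   DMA alias quirks are single table entries in drivers/pci/quirks.c;
   assuming the existing quirk_dma_func1_alias() helper is reused, the
   new Marvell entry plausibly looks like:

        /* 88SE9235: DMA requests are issued with function 1's ID */
        DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235,
                                 quirk_dma_func1_alias);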

  Error handling:

   - Unexport pci_save_aer_state() since it's only used in drivers/pci/

   - Drop recommendation for drivers to configure AER Capability, since
     the PCI core does this for all devices

  ASPM:

   - Disable ASPM on MFD function removal to avoid use-after-free

   - Tighten up pci_enable_link_state() and pci_disable_link_state()
     interfaces so they don't enable/disable states the driver didn't
     specify

   - Avoid link retraining race that can happen if ASPM sets link
     control parameters while the link is in the midst of training for
     some other reason
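
   With the tightened interface, a driver names exactly the states it
   wants and nothing else changes; a hedged usage snippet (pdev is the
   driver's device):

        /* Permit only L1 on the device's link, leaving L0s and the
         * L1 substates alone. */
        ret = pci_enable_link_state(pdev, PCIE_LINK_STATE_L1);
        if (ret)
                dev_warn(&pdev->dev, "cannot enable L1: %d\n", ret);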

  Endpoint framework:

   - Change "PCI Endpoint Virtual NTB driver" Kconfig prompt to be
     different from "PCI Endpoint NTB driver"

   - Automatically create a function-specific attributes group for
     endpoint drivers to avoid reference counting issues

   - Fix many EPC test issues

   - Return pci_epf_type_add_cfs() error if EPF has no driver

   - Add kernel-doc for pci_epc_raise_irq() and pci_epc_map_msi_irq()
     MSI vector parameters

   - Pass EPF device ID to driver probe functions

   - Return -EALREADY if EPC has already been started/stopped

   - Add linkdown notifier support and use it in qcom-ep

   - Add Bus Master Enable event support and use it in qcom-ep

   - Add Qualcomm Modem Host Interface (MHI) endpoint driver

   - Add Layerscape PME interrupt handling to manage link-up
     notification
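
   The probe interface change means an endpoint function driver now
   receives the pci_epf_device_id entry it matched; a hedged sketch of
   the new shape ("foo" is a placeholder):

        static int foo_epf_probe(struct pci_epf *epf,
                                 const struct pci_epf_device_id *id)
        {
                /* The matched table entry arrives directly, so
                 * per-device data can come from id->driver_data instead
                 * of the driver re-matching epf->name itself. */
                struct foo_epf *priv;

                priv = devm_kzalloc(&epf->dev, sizeof(*priv), GFP_KERNEL);
                if (!priv)
                        return -ENOMEM;

                epf_set_drvdata(epf, priv);
                return 0;
        }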

  Cadence PCIe controller driver:

   - Wait for link retrain to complete when working around the J721E
     i2085 erratum with Gen2 mode

  Faraday FTPC100 PCI controller driver:

   - Release clock resources on error paths

  Freescale i.MX6 PCIe controller driver:

   - Save and restore Root Port MSI control to work around a hardware defect

  Intel VMD host bridge driver:

   - Reset VMD config register between soft reboots

   - Capture pci_reset_bus() return value instead of printing junk when
     it fails

  Qualcomm PCIe controller driver:

   - Add SDX65 endpoint compatible string to DT binding

   - Disable register write access after init for IP v2.3.3, v2.9.0

   - Use DWC helpers for enabling/disabling writes to DBI registers

   - Hide slot hotplug capability for IP v1.0.0, v1.9.0, v2.1.0, v2.3.2,
     v2.3.3, v2.7.0, v2.9.0

   - Reuse v2.3.2 post-init sequence for v2.4.0

  Renesas R-Car PCIe controller driver:

   - Remove unused static pcie_base and pcie_dev

  Rockchip PCIe controller driver:

   - Remove writes to unused registers

   - Write endpoint Device ID using correct register

   - Assert PCI Configuration Enable bit after probe so endpoint
     responds instead of generating Request Retry Status messages

   - Poll waiting for PHY PLLs to lock

   - Update RK3399 example DT binding to be valid

   - Use RK3399 PCIE_CLIENT_LEGACY_INT_CTRL to generate INTx instead of
     manually generating PCIe message

   - Use multiple windows to avoid address translation conflicts

   - Use u32 (not u16) when accessing 32-bit registers

   - Hide MSI-X Capability, since RK3399 can't generate MSI-X

   - Set endpoint controller required alignment to 256

  Synopsys DesignWare PCIe controller driver:

   - Wait for link to come up only if we've initiated link training

  Miscellaneous:

   - Add pci_clear_master() stub for non-CONFIG_PCI"
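
Judging from the changelog above, the new stub presumably mirrors the
other !CONFIG_PCI no-ops in include/linux/pci.h, roughly:

        /* Sketch: lets common code call pci_clear_master() without an
         * ifdef when PCI support is compiled out. */
        static inline void pci_clear_master(struct pci_dev *dev) { }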

* tag 'pci-v6.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (116 commits)
  Documentation: PCI: correct spelling
  PCI: vmd: Fix uninitialized variable usage in vmd_enable_domain()
  PCI: xgene-msi: Convert to platform remove callback returning void
  PCI: tegra: Convert to platform remove callback returning void
  PCI: rockchip-host: Convert to platform remove callback returning void
  PCI: mvebu: Convert to platform remove callback returning void
  PCI: mt7621: Convert to platform remove callback returning void
  PCI: mediatek-gen3: Convert to platform remove callback returning void
  PCI: mediatek: Convert to platform remove callback returning void
  PCI: iproc: Convert to platform remove callback returning void
  PCI: hisi-error: Convert to platform remove callback returning void
  PCI: dwc: Convert to platform remove callback returning void
  PCI: j721e: Convert to platform remove callback returning void
  PCI: brcmstb: Convert to platform remove callback returning void
  PCI: altera-msi: Convert to platform remove callback returning void
  PCI: altera: Convert to platform remove callback returning void
  PCI: aardvark: Convert to platform remove callback returning void
  PCI: rcar: Use correct product family name for Renesas R-Car
  PCI: layerscape: Add the endpoint linkup notifier support
  PCI: endpoint: pci-epf-vntb: Fix typo in comments
  ...
Commit 9070577ae9 by Linus Torvalds, 2023-06-30 15:06:45 -07:00
70 changed files with 1631 additions and 848 deletions

View File

@ -88,13 +88,10 @@ commands can be used::
# echo 0x104c > functions/pci_epf_ntb/func1/vendorid
# echo 0xb00d > functions/pci_epf_ntb/func1/deviceid
In order to configure NTB specific attributes, a new sub-directory to func1
should be created::
# mkdir functions/pci_epf_ntb/func1/pci_epf_ntb.0/
The NTB function driver will populate this directory with various attributes
that can be configured by the user::
The PCI endpoint framework also automatically creates a sub-directory in the
function attribute directory. This sub-directory has the same name as the name
of the function device and is populated with the following NTB specific
attributes that can be configured by the user::
# ls functions/pci_epf_ntb/func1/pci_epf_ntb.0/
db_count mw1 mw2 mw3 mw4 num_mws

View File

@ -84,13 +84,10 @@ commands can be used::
# echo 0x1957 > functions/pci_epf_vntb/func1/vendorid
# echo 0x0809 > functions/pci_epf_vntb/func1/deviceid
In order to configure NTB specific attributes, a new sub-directory to func1
should be created::
# mkdir functions/pci_epf_vntb/func1/pci_epf_vntb.0/
The NTB function driver will populate this directory with various attributes
that can be configured by the user::
The PCI endpoint framework also automatically creates a sub-directory in the
function attribute directory. This sub-directory has the same name as the name
of the function device and is populated with the following NTB specific
attributes that can be configured by the user::
# ls functions/pci_epf_vntb/func1/pci_epf_vntb.0/
db_count mw1 mw2 mw3 mw4 num_mws
@ -103,7 +100,7 @@ A sample configuration for NTB function is given below::
# echo 1 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/num_mws
# echo 0x100000 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/mw1
A sample configuration for virtual NTB driver for virutal PCI bus::
A sample configuration for virtual NTB driver for virtual PCI bus::
# echo 0x1957 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_vid
# echo 0x080A > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_pid

View File

@ -290,7 +290,7 @@ PCI_IRQ_MSI or PCI_IRQ_MSIX flags.
List of device drivers MSI(-X) APIs
===================================
The PCI/MSI subystem has a dedicated C file for its exported device driver
The PCI/MSI subsystem has a dedicated C file for its exported device driver
APIs — `drivers/pci/msi/api.c`. The following functions are exported:
.. kernel-doc:: drivers/pci/msi/api.c

View File

@ -364,7 +364,7 @@ Note, however, not all failures are truly "permanent". Some are
caused by over-heating, some by a poorly seated card. Many
PCI error events are caused by software bugs, e.g. DMA's to
wild addresses or bogus split transactions due to programming
errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
errors. See the discussion in Documentation/powerpc/eeh-pci-error-recovery.rst
for additional detail on real-life experience of the causes of
software errors.

View File

@ -16,62 +16,61 @@ Overview
About this guide
----------------
This guide describes the basics of the PCI Express Advanced Error
This guide describes the basics of the PCI Express (PCIe) Advanced Error
Reporting (AER) driver and provides information on how to use it, as
well as how to enable the drivers of endpoint devices to conform with
PCI Express AER driver.
well as how to enable the drivers of Endpoint devices to conform with
the PCIe AER driver.
What is the PCI Express AER Driver?
-----------------------------------
What is the PCIe AER Driver?
----------------------------
PCI Express error signaling can occur on the PCI Express link itself
or on behalf of transactions initiated on the link. PCI Express
PCIe error signaling can occur on the PCIe link itself
or on behalf of transactions initiated on the link. PCIe
defines two error reporting paradigms: the baseline capability and
the Advanced Error Reporting capability. The baseline capability is
required of all PCI Express components providing a minimum defined
required of all PCIe components providing a minimum defined
set of error reporting requirements. Advanced Error Reporting
capability is implemented with a PCI Express advanced error reporting
capability is implemented with a PCIe Advanced Error Reporting
extended capability structure providing more robust error reporting.
The PCI Express AER driver provides the infrastructure to support PCI
Express Advanced Error Reporting capability. The PCI Express AER
driver provides three basic functions:
The PCIe AER driver provides the infrastructure to support PCIe Advanced
Error Reporting capability. The PCIe AER driver provides three basic
functions:
- Gathers the comprehensive error information if errors occurred.
- Reports error to the users.
- Performs error recovery actions.
AER driver only attaches root ports which support PCI-Express AER
capability.
The AER driver only attaches to Root Ports and RCECs that support the PCIe
AER capability.
User Guide
==========
Include the PCI Express AER Root Driver into the Linux Kernel
-------------------------------------------------------------
Include the PCIe AER Root Driver into the Linux Kernel
------------------------------------------------------
The PCI Express AER Root driver is a Root Port service driver attached
to the PCI Express Port Bus driver. If a user wants to use it, the driver
has to be compiled. Option CONFIG_PCIEAER supports this capability. It
depends on CONFIG_PCIEPORTBUS, so pls. set CONFIG_PCIEPORTBUS=y and
CONFIG_PCIEAER = y.
The PCIe AER driver is a Root Port service driver attached
via the PCIe Port Bus driver. If a user wants to use it, the driver
must be compiled. It is enabled with CONFIG_PCIEAER, which
depends on CONFIG_PCIEPORTBUS.
Load PCI Express AER Root Driver
--------------------------------
Load PCIe AER Root Driver
-------------------------
Some systems have AER support in firmware. Enabling Linux AER support at
the same time the firmware handles AER may result in unpredictable
the same time the firmware handles AER would result in unpredictable
behavior. Therefore, Linux does not handle AER events unless the firmware
grants AER control to the OS via the ACPI _OSC method. See the PCI FW 3.0
grants AER control to the OS via the ACPI _OSC method. See the PCI Firmware
Specification for details regarding _OSC usage.
AER error output
----------------
When a PCIe AER error is captured, an error message will be output to
console. If it's a correctable error, it is output as a warning.
console. If it's a correctable error, it is output as an info message.
Otherwise, it is printed as an error. So users could choose different
log level to filter out correctable error messages.
@ -82,9 +81,9 @@ Below shows an example::
0000:50:00.0: [20] Unsupported Request (First)
0000:50:00.0: TLP Header: 04000001 00200a03 05010000 00050100
In the example, 'Requester ID' means the ID of the device who sends
the error message to root port. Pls. refer to pci express specs for
other fields.
In the example, 'Requester ID' means the ID of the device that sent
the error message to the Root Port. Please refer to PCIe specs for other
fields.
AER Statistics / Counters
-------------------------
@ -96,65 +95,56 @@ Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
Developer Guide
===============
To enable AER aware support requires a software driver to configure
the AER capability structure within its device and to provide callbacks.
To enable error recovery, a software driver must provide callbacks.
To support AER better, developers need understand how AER does work
firstly.
To support AER better, developers need to understand how AER works.
PCI Express errors are classified into two types: correctable errors
and uncorrectable errors. This classification is based on the impacts
PCIe errors are classified into two types: correctable errors
and uncorrectable errors. This classification is based on the impact
of those errors, which may result in degraded performance or function
failure.
Correctable errors pose no impacts on the functionality of the
interface. The PCI Express protocol can recover without any software
interface. The PCIe protocol can recover without any software
intervention or any loss of data. These errors are detected and
corrected by hardware. Unlike correctable errors, uncorrectable
corrected by hardware.
Unlike correctable errors, uncorrectable
errors impact functionality of the interface. Uncorrectable errors
can cause a particular transaction or a particular PCI Express link
can cause a particular transaction or a particular PCIe link
to be unreliable. Depending on those error conditions, uncorrectable
errors are further classified into non-fatal errors and fatal errors.
Non-fatal errors cause the particular transaction to be unreliable,
but the PCI Express link itself is fully functional. Fatal errors, on
but the PCIe link itself is fully functional. Fatal errors, on
the other hand, cause the link to be unreliable.
When AER is enabled, a PCI Express device will automatically send an
error message to the PCIe root port above it when the device captures
When PCIe error reporting is enabled, a device will automatically send an
error message to the Root Port above it when it captures
an error. The Root Port, upon receiving an error reporting message,
internally processes and logs the error message in its PCI Express
capability structure. Error information being logged includes storing
internally processes and logs the error message in its AER
Capability structure. Error information being logged includes storing
the error reporting agent's requestor ID into the Error Source
Identification Registers and setting the error bits of the Root Error
Status Register accordingly. If AER error reporting is enabled in Root
Error Command Register, the Root Port generates an interrupt if an
Status Register accordingly. If AER error reporting is enabled in the Root
Error Command Register, the Root Port generates an interrupt when an
error is detected.
Note that the errors as described above are related to the PCI Express
Note that the errors as described above are related to the PCIe
hierarchy and links. These errors do not include any device specific
errors because device specific errors will still get sent directly to
the device driver.
Configure the AER capability structure
--------------------------------------
AER aware drivers of PCI Express component need change the device
control registers to enable AER. They also could change AER registers,
including mask and severity registers. Helper function
pci_enable_pcie_error_reporting could be used to enable AER. See
section 3.3.
Provide callbacks
-----------------
callback reset_link to reset pci express link
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
callback reset_link to reset PCIe link
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This callback is used to reset the pci express physical link when a
fatal error happens. The root port aer service driver provides a
default reset_link function, but different upstream ports might
have different specifications to reset pci express link, so all
upstream ports should provide their own reset_link functions.
This callback is used to reset the PCIe physical link when a
fatal error happens. The Root Port AER service driver provides a
default reset_link function, but different Upstream Ports might
have different specifications to reset the PCIe link, so
Upstream Port drivers may provide their own reset_link functions.
Section 3.2.2.2 provides more detailed info on when to call
reset_link.
@ -162,24 +152,24 @@ reset_link.
PCI error-recovery callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The PCI Express AER Root driver uses error callbacks to coordinate
The PCIe AER Root driver uses error callbacks to coordinate
with downstream device drivers associated with a hierarchy in question
when performing error recovery actions.
Data struct pci_driver has a pointer, err_handler, to point to
pci_error_handlers who consists of a couple of callback function
pointers. AER driver follows the rules defined in
pci-error-recovery.txt except pci express specific parts (e.g.
reset_link). Pls. refer to pci-error-recovery.txt for detailed
pointers. The AER driver follows the rules defined in
pci-error-recovery.rst except PCIe-specific parts (e.g.
reset_link). Please refer to pci-error-recovery.rst for detailed
definitions of the callbacks.
Below sections specify when to call the error callback functions.
The sections below specify when to call the error callback functions.
Correctable errors
~~~~~~~~~~~~~~~~~~
Correctable errors pose no impacts on the functionality of
the interface. The PCI Express protocol can recover without any
the interface. The PCIe protocol can recover without any
software intervention or any loss of data. These errors do not
require any recovery actions. The AER driver clears the device's
correctable error status register accordingly and logs these errors.
@ -190,12 +180,12 @@ Non-correctable (non-fatal and fatal) errors
If an error message indicates a non-fatal error, performing link reset
at upstream is not required. The AER driver calls error_detected(dev,
pci_channel_io_normal) to all drivers associated within a hierarchy in
question. for example::
question. For example::
EndPoint<==>DownstreamPort B<==>UpstreamPort A<==>RootPort
Endpoint <==> Downstream Port B <==> Upstream Port A <==> Root Port
If Upstream port A captures an AER error, the hierarchy consists of
Downstream port B and EndPoint.
If Upstream Port A captures an AER error, the hierarchy consists of
Downstream Port B and Endpoint.
A driver may return PCI_ERS_RESULT_CAN_RECOVER,
PCI_ERS_RESULT_DISCONNECT, or PCI_ERS_RESULT_NEED_RESET, depending on
@ -212,36 +202,11 @@ to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
to mmio_enabled.
helper functions
----------------
::
int pci_enable_pcie_error_reporting(struct pci_dev *dev);
pci_enable_pcie_error_reporting enables the device to send error
messages to root port when an error is detected. Note that devices
don't enable the error reporting by default, so device drivers need
call this function to enable it.
::
int pci_disable_pcie_error_reporting(struct pci_dev *dev);
pci_disable_pcie_error_reporting disables the device to send error
messages to root port when an error is detected.
::
int pci_aer_clear_nonfatal_status(struct pci_dev *dev);`
pci_aer_clear_nonfatal_status clears non-fatal errors in the uncorrectable
error status register.
Frequent Asked Questions
------------------------
Q:
What happens if a PCI Express device driver does not provide an
What happens if a PCIe device driver does not provide an
error recovery handler (pci_driver->err_handler is equal to NULL)?
A:
@ -257,24 +222,6 @@ A:
Fatal error recovery will fail if the errors are reported by the
upstream ports who are attached by the service driver.
Q:
How does this infrastructure deal with driver that is not PCI
Express aware?
A:
This infrastructure calls the error callback functions of the
driver when an error happens. But if the driver is not aware of
PCI Express, the device might not report its own errors to root
port.
Q:
What modifications will that driver need to make it compatible
with the PCI Express AER Root driver?
A:
It could call the helper functions to enable AER in devices and
cleanup uncorrectable status register. Pls. refer to section 3.3.
Software error injection
========================
@ -296,5 +243,5 @@ from:
https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
More information about aer-inject can be found in the document comes
with its source code.
More information about aer-inject can be found in the document in
its source code.

View File

@ -13,6 +13,7 @@ properties:
compatible:
enum:
- qcom,sdx55-pcie-ep
- qcom,sdx65-pcie-ep
- qcom,sm8450-pcie-ep
reg:
@ -109,6 +110,7 @@ allOf:
contains:
enum:
- qcom,sdx55-pcie-ep
- qcom,sdx65-pcie-ep
then:
properties:
clocks:

View File

@ -47,7 +47,7 @@ examples:
pcie-ep@f8000000 {
compatible = "rockchip,rk3399-pcie-ep";
reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0x80000000 0x0 0x20000>;
reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0xfa000000 0x0 0x2000000>;
reg-names = "apb-base", "mem-base";
clocks = <&cru ACLK_PCIE>, <&cru ACLK_PERF_PCIE>,
<&cru PCLK_PCIE>, <&cru SCLK_PCIE_PM>;
@ -63,6 +63,8 @@ examples:
phys = <&pcie_phy 0>, <&pcie_phy 1>, <&pcie_phy 2>, <&pcie_phy 3>;
phy-names = "pcie-phy-0", "pcie-phy-1", "pcie-phy-2", "pcie-phy-3";
rockchip,max-outbound-regions = <16>;
pinctrl-names = "default";
pinctrl-0 = <&pcie_clkreqnb_cpm>;
};
};
...

View File

@ -13744,6 +13744,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
F: Documentation/ABI/stable/sysfs-bus-mhi
F: Documentation/mhi/
F: drivers/bus/mhi/
F: drivers/pci/endpoint/functions/pci-epf-mhi.c
F: include/linux/mhi.h
MICROBLAZE ARCHITECTURE

View File

@ -671,9 +671,8 @@ static void eeh_bridge_check_link(struct eeh_dev *edev)
eeh_ops->write_config(edev, cap + PCI_EXP_LNKCTL, 2, val);
/* Check link */
eeh_ops->read_config(edev, cap + PCI_EXP_LNKCAP, 4, &val);
if (!(val & PCI_EXP_LNKCAP_DLLLARC)) {
eeh_edev_dbg(edev, "No link reporting capability (0x%08x) \n", val);
if (!edev->pdev->link_active_reporting) {
eeh_edev_dbg(edev, "No link reporting capability\n");
msleep(1000);
return;
}

View File

@ -159,10 +159,7 @@ static irqreturn_t pci_endpoint_test_irqhandler(int irq, void *dev_id)
if (reg & STATUS_IRQ_RAISED) {
test->last_irq = irq;
complete(&test->irq_raised);
reg &= ~STATUS_IRQ_RAISED;
}
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS,
reg);
return IRQ_HANDLED;
}
@ -316,21 +313,17 @@ static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
struct pci_dev *pdev = test->pdev;
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
msix == false ? IRQ_TYPE_MSI :
IRQ_TYPE_MSIX);
msix ? IRQ_TYPE_MSIX : IRQ_TYPE_MSI);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, msi_num);
pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
msix == false ? COMMAND_RAISE_MSI_IRQ :
COMMAND_RAISE_MSIX_IRQ);
msix ? COMMAND_RAISE_MSIX_IRQ :
COMMAND_RAISE_MSI_IRQ);
val = wait_for_completion_timeout(&test->irq_raised,
msecs_to_jiffies(1000));
if (!val)
return false;
if (pci_irq_vector(pdev, msi_num - 1) == test->last_irq)
return true;
return false;
return pci_irq_vector(pdev, msi_num - 1) == test->last_irq;
}
static int pci_endpoint_test_validate_xfer_params(struct device *dev,
@ -729,6 +722,10 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
struct pci_dev *pdev = test->pdev;
mutex_lock(&test->mutex);
reinit_completion(&test->irq_raised);
test->last_irq = -ENODATA;
switch (cmd) {
case PCITEST_BAR:
bar = arg;
@ -938,6 +935,9 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
if (id < 0)
return;
pci_endpoint_test_release_irq(test);
pci_endpoint_test_free_irq_vectors(test);
misc_deregister(&test->miscdev);
kfree(misc_device->name);
kfree(test->name);
@ -947,9 +947,6 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
pci_iounmap(pdev, test->bar[bar]);
}
pci_endpoint_test_release_irq(test);
pci_endpoint_test_free_irq_vectors(test);
pci_release_regions(pdev);
pci_disable_device(pdev);
}

View File

@ -368,7 +368,6 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
struct pci_dev *sdev;
u16 reg16, dev_id;
int cap, err;
u32 reg32;
err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, &dev_id);
if (err)
@ -399,11 +398,8 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
return err;
/* Check link */
err = pci_read_config_dword(bridge, cap + PCI_EXP_LNKCAP, &reg32);
if (err)
return err;
if (!(reg32 & PCI_EXP_LNKCAP_DLLLARC)) {
mlx5_core_warn(dev, "No PCI link reporting capability (0x%08x)\n", reg32);
if (!bridge->link_active_reporting) {
mlx5_core_warn(dev, "No PCI link reporting capability\n");
msleep(1000);
goto restore;
}

View File

@ -542,7 +542,7 @@ err_get_sync:
return ret;
}
static int j721e_pcie_remove(struct platform_device *pdev)
static void j721e_pcie_remove(struct platform_device *pdev)
{
struct j721e_pcie *pcie = platform_get_drvdata(pdev);
struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
@ -552,13 +552,11 @@ static int j721e_pcie_remove(struct platform_device *pdev)
cdns_pcie_disable_phy(cdns_pcie);
pm_runtime_put(dev);
pm_runtime_disable(dev);
return 0;
}
static struct platform_driver j721e_pcie_driver = {
.probe = j721e_pcie_probe,
.remove = j721e_pcie_remove,
.remove_new = j721e_pcie_remove,
.driver = {
.name = "j721e-pcie",
.of_match_table = of_j721e_pcie_match,

View File

@ -12,6 +12,8 @@
#include "pcie-cadence.h"
#define LINK_RETRAIN_TIMEOUT HZ
static u64 bar_max_size[] = {
[RP_BAR0] = _ULL(128 * SZ_2G),
[RP_BAR1] = SZ_2G,
@ -77,6 +79,27 @@ static struct pci_ops cdns_pcie_host_ops = {
.write = pci_generic_config_write,
};
static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
{
u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
unsigned long end_jiffies;
u16 lnk_stat;
/* Wait for link training to complete. Exit after timeout. */
end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
do {
lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
break;
usleep_range(0, 1000);
} while (time_before(jiffies, end_jiffies));
if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
return 0;
return -ETIMEDOUT;
}
static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
{
struct device *dev = pcie->dev;
@ -118,6 +141,10 @@ static int cdns_pcie_retrain(struct cdns_pcie *pcie)
cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
lnk_ctl);
ret = cdns_pcie_host_training_complete(pcie);
if (ret)
return ret;
ret = cdns_pcie_host_wait_for_link(pcie);
}
return ret;

View File

@ -80,6 +80,7 @@ struct imx6_pcie {
struct clk *pcie;
struct clk *pcie_aux;
struct regmap *iomuxc_gpr;
u16 msi_ctrl;
u32 controller_id;
struct reset_control *pciephy_reset;
struct reset_control *apps_reset;
@ -1178,6 +1179,26 @@ pm_turnoff_sleep:
usleep_range(1000, 10000);
}
static void imx6_pcie_msi_save_restore(struct imx6_pcie *imx6_pcie, bool save)
{
u8 offset;
u16 val;
struct dw_pcie *pci = imx6_pcie->pci;
if (pci_msi_enabled()) {
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
if (save) {
val = dw_pcie_readw_dbi(pci, offset + PCI_MSI_FLAGS);
imx6_pcie->msi_ctrl = val;
} else {
dw_pcie_dbi_ro_wr_en(pci);
val = imx6_pcie->msi_ctrl;
dw_pcie_writew_dbi(pci, offset + PCI_MSI_FLAGS, val);
dw_pcie_dbi_ro_wr_dis(pci);
}
}
}
static int imx6_pcie_suspend_noirq(struct device *dev)
{
struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
@ -1186,6 +1207,7 @@ static int imx6_pcie_suspend_noirq(struct device *dev)
if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
return 0;
imx6_pcie_msi_save_restore(imx6_pcie, true);
imx6_pcie_pm_turnoff(imx6_pcie);
imx6_pcie_stop_link(imx6_pcie->pci);
imx6_pcie_host_exit(pp);
@ -1205,6 +1227,7 @@ static int imx6_pcie_resume_noirq(struct device *dev)
ret = imx6_pcie_host_init(pp);
if (ret)
return ret;
imx6_pcie_msi_save_restore(imx6_pcie, false);
dw_pcie_setup_rc(pp);
if (imx6_pcie->link_is_up)

View File

@ -18,6 +18,20 @@
#include "pcie-designware.h"
#define PEX_PF0_CONFIG 0xC0014
#define PEX_PF0_CFG_READY BIT(0)
/* PEX PFa PCIE PME and message interrupt registers*/
#define PEX_PF0_PME_MES_DR 0xC0020
#define PEX_PF0_PME_MES_DR_LUD BIT(7)
#define PEX_PF0_PME_MES_DR_LDD BIT(9)
#define PEX_PF0_PME_MES_DR_HRD BIT(10)
#define PEX_PF0_PME_MES_IER 0xC0028
#define PEX_PF0_PME_MES_IER_LUDIE BIT(7)
#define PEX_PF0_PME_MES_IER_LDDIE BIT(9)
#define PEX_PF0_PME_MES_IER_HRDIE BIT(10)
#define to_ls_pcie_ep(x) dev_get_drvdata((x)->dev)
struct ls_pcie_ep_drvdata {
@ -30,8 +44,84 @@ struct ls_pcie_ep {
struct dw_pcie *pci;
struct pci_epc_features *ls_epc;
const struct ls_pcie_ep_drvdata *drvdata;
int irq;
bool big_endian;
};
static u32 ls_lut_readl(struct ls_pcie_ep *pcie, u32 offset)
{
struct dw_pcie *pci = pcie->pci;
if (pcie->big_endian)
return ioread32be(pci->dbi_base + offset);
else
return ioread32(pci->dbi_base + offset);
}
static void ls_lut_writel(struct ls_pcie_ep *pcie, u32 offset, u32 value)
{
struct dw_pcie *pci = pcie->pci;
if (pcie->big_endian)
iowrite32be(value, pci->dbi_base + offset);
else
iowrite32(value, pci->dbi_base + offset);
}
static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
{
struct ls_pcie_ep *pcie = dev_id;
struct dw_pcie *pci = pcie->pci;
u32 val, cfg;
val = ls_lut_readl(pcie, PEX_PF0_PME_MES_DR);
ls_lut_writel(pcie, PEX_PF0_PME_MES_DR, val);
if (!val)
return IRQ_NONE;
if (val & PEX_PF0_PME_MES_DR_LUD) {
cfg = ls_lut_readl(pcie, PEX_PF0_CONFIG);
cfg |= PEX_PF0_CFG_READY;
ls_lut_writel(pcie, PEX_PF0_CONFIG, cfg);
dw_pcie_ep_linkup(&pci->ep);
dev_dbg(pci->dev, "Link up\n");
} else if (val & PEX_PF0_PME_MES_DR_LDD) {
dev_dbg(pci->dev, "Link down\n");
} else if (val & PEX_PF0_PME_MES_DR_HRD) {
dev_dbg(pci->dev, "Hot reset\n");
}
return IRQ_HANDLED;
}
static int ls_pcie_ep_interrupt_init(struct ls_pcie_ep *pcie,
struct platform_device *pdev)
{
u32 val;
int ret;
pcie->irq = platform_get_irq_byname(pdev, "pme");
if (pcie->irq < 0)
return pcie->irq;
ret = devm_request_irq(&pdev->dev, pcie->irq, ls_pcie_ep_event_handler,
IRQF_SHARED, pdev->name, pcie);
if (ret) {
dev_err(&pdev->dev, "Can't register PCIe IRQ\n");
return ret;
}
/* Enable interrupts */
val = ls_lut_readl(pcie, PEX_PF0_PME_MES_IER);
val |= PEX_PF0_PME_MES_IER_LDDIE | PEX_PF0_PME_MES_IER_HRDIE |
PEX_PF0_PME_MES_IER_LUDIE;
ls_lut_writel(pcie, PEX_PF0_PME_MES_IER, val);
return 0;
}
static const struct pci_epc_features*
ls_pcie_ep_get_features(struct dw_pcie_ep *ep)
{
@ -125,6 +215,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
struct ls_pcie_ep *pcie;
struct pci_epc_features *ls_epc;
struct resource *dbi_base;
int ret;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
@ -144,6 +235,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
pci->ops = pcie->drvdata->dw_pcie_ops;
ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4);
ls_epc->linkup_notifier = true;
pcie->pci = pci;
pcie->ls_epc = ls_epc;
@ -155,9 +247,15 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
pci->ep.ops = &ls_pcie_ep_ops;
pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");
platform_set_drvdata(pdev, pcie);
return dw_pcie_ep_init(&pci->ep);
ret = dw_pcie_ep_init(&pci->ep);
if (ret)
return ret;
return ls_pcie_ep_interrupt_init(pcie, pdev);
}
static struct platform_driver ls_pcie_ep_driver = {

View File

@ -617,13 +617,11 @@ static int bt1_pcie_probe(struct platform_device *pdev)
return bt1_pcie_add_port(btpci);
}
static int bt1_pcie_remove(struct platform_device *pdev)
static void bt1_pcie_remove(struct platform_device *pdev)
{
struct bt1_pcie *btpci = platform_get_drvdata(pdev);
bt1_pcie_del_port(btpci);
return 0;
}
static const struct of_device_id bt1_pcie_of_match[] = {
@ -634,7 +632,7 @@ MODULE_DEVICE_TABLE(of, bt1_pcie_of_match);
static struct platform_driver bt1_pcie_driver = {
.probe = bt1_pcie_probe,
.remove = bt1_pcie_remove,
.remove_new = bt1_pcie_remove,
.driver = {
.name = "bt1-pcie",
.of_match_table = bt1_pcie_of_match,

View File

@ -485,14 +485,19 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
goto err_remove_edma;
if (!dw_pcie_link_up(pci)) {
if (dw_pcie_link_up(pci)) {
dw_pcie_print_link_status(pci);
} else {
ret = dw_pcie_start_link(pci);
if (ret)
goto err_remove_edma;
}
/* Ignore errors, the link may come up later */
dw_pcie_wait_for_link(pci);
if (pci->ops && pci->ops->start_link) {
ret = dw_pcie_wait_for_link(pci);
if (ret)
goto err_stop_link;
}
}
bridge->sysdata = pp;

View File

@ -644,9 +644,20 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index)
dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0);
}
int dw_pcie_wait_for_link(struct dw_pcie *pci)
void dw_pcie_print_link_status(struct dw_pcie *pci)
{
u32 offset, val;
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
dev_info(pci->dev, "PCIe Gen.%u x%u link up\n",
FIELD_GET(PCI_EXP_LNKSTA_CLS, val),
FIELD_GET(PCI_EXP_LNKSTA_NLW, val));
}
int dw_pcie_wait_for_link(struct dw_pcie *pci)
{
int retries;
/* Check if the link is up or not */
@ -662,12 +673,7 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
return -ETIMEDOUT;
}
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
dev_info(pci->dev, "PCIe Gen.%u x%u link up\n",
FIELD_GET(PCI_EXP_LNKSTA_CLS, val),
FIELD_GET(PCI_EXP_LNKSTA_NLW, val));
dw_pcie_print_link_status(pci);
return 0;
}

View File

@ -429,6 +429,7 @@ void dw_pcie_setup(struct dw_pcie *pci);
void dw_pcie_iatu_detect(struct dw_pcie *pci);
int dw_pcie_edma_detect(struct dw_pcie *pci);
void dw_pcie_edma_remove(struct dw_pcie *pci);
void dw_pcie_print_link_status(struct dw_pcie *pci);
static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
{

View File

@ -421,7 +421,7 @@ static int histb_pcie_probe(struct platform_device *pdev)
return 0;
}
static int histb_pcie_remove(struct platform_device *pdev)
static void histb_pcie_remove(struct platform_device *pdev)
{
struct histb_pcie *hipcie = platform_get_drvdata(pdev);
@ -429,8 +429,6 @@ static int histb_pcie_remove(struct platform_device *pdev)
if (hipcie->phy)
phy_exit(hipcie->phy);
return 0;
}
static const struct of_device_id histb_pcie_of_match[] = {
@ -441,7 +439,7 @@ MODULE_DEVICE_TABLE(of, histb_pcie_of_match);
static struct platform_driver histb_pcie_platform_driver = {
.probe = histb_pcie_probe,
.remove = histb_pcie_remove,
.remove_new = histb_pcie_remove,
.driver = {
.name = "histb-pcie",
.of_match_table = histb_pcie_of_match,

View File

@ -340,15 +340,13 @@ static void __intel_pcie_remove(struct intel_pcie *pcie)
phy_exit(pcie->phy);
}
static int intel_pcie_remove(struct platform_device *pdev)
static void intel_pcie_remove(struct platform_device *pdev)
{
struct intel_pcie *pcie = platform_get_drvdata(pdev);
struct dw_pcie_rp *pp = &pcie->pci.pp;
dw_pcie_host_deinit(pp);
__intel_pcie_remove(pcie);
return 0;
}
static int intel_pcie_suspend_noirq(struct device *dev)
@ -443,7 +441,7 @@ static const struct of_device_id of_intel_pcie_match[] = {
static struct platform_driver intel_pcie_driver = {
.probe = intel_pcie_probe,
.remove = intel_pcie_remove,
.remove_new = intel_pcie_remove,
.driver = {
.name = "intel-gw-pcie",
.of_match_table = of_intel_pcie_match,

View File

@ -569,9 +569,11 @@ static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) {
dev_dbg(dev, "Received Linkdown event\n");
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN;
pci_epc_linkdown(pci->ep.epc);
} else if (FIELD_GET(PARF_INT_ALL_BME, status)) {
dev_dbg(dev, "Received BME event. Link is enabled!\n");
pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED;
pci_epc_bme_notify(pci->ep.epc);
} else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) {
dev_dbg(dev, "Received PM Turn-off event! Entering L23\n");
val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL);
@ -784,7 +786,7 @@ err_disable_resources:
return ret;
}
static int qcom_pcie_ep_remove(struct platform_device *pdev)
static void qcom_pcie_ep_remove(struct platform_device *pdev)
{
struct qcom_pcie_ep *pcie_ep = platform_get_drvdata(pdev);
@ -794,11 +796,9 @@ static int qcom_pcie_ep_remove(struct platform_device *pdev)
debugfs_remove_recursive(pcie_ep->debugfs);
if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED)
return 0;
return;
qcom_pcie_disable_resources(pcie_ep);
return 0;
}
static const struct of_device_id qcom_pcie_ep_match[] = {
@ -810,7 +810,7 @@ MODULE_DEVICE_TABLE(of, qcom_pcie_ep_match);
static struct platform_driver qcom_pcie_ep_driver = {
.probe = qcom_pcie_ep_probe,
.remove = qcom_pcie_ep_remove,
.remove_new = qcom_pcie_ep_remove,
.driver = {
.name = "qcom-pcie-ep",
.of_match_table = qcom_pcie_ep_match,

View File

@ -61,7 +61,6 @@
/* DBI registers */
#define AXI_MSTR_RESP_COMP_CTRL0 0x818
#define AXI_MSTR_RESP_COMP_CTRL1 0x81c
#define MISC_CONTROL_1_REG 0x8bc
/* MHI registers */
#define PARF_DEBUG_CNT_PM_LINKST_IN_L2 0xc04
@ -132,9 +131,6 @@
/* AXI_MSTR_RESP_COMP_CTRL1 register fields */
#define CFG_BRIDGE_SB_INIT BIT(0)
/* MISC_CONTROL_1_REG register fields */
#define DBI_RO_WR_EN 1
/* PCI_EXP_SLTCAP register fields */
#define PCIE_CAP_SLOT_POWER_LIMIT_VAL FIELD_PREP(PCI_EXP_SLTCAP_SPLV, 250)
#define PCIE_CAP_SLOT_POWER_LIMIT_SCALE FIELD_PREP(PCI_EXP_SLTCAP_SPLS, 1)
@ -144,7 +140,6 @@
PCI_EXP_SLTCAP_AIP | \
PCI_EXP_SLTCAP_PIP | \
PCI_EXP_SLTCAP_HPS | \
PCI_EXP_SLTCAP_HPC | \
PCI_EXP_SLTCAP_EIP | \
PCIE_CAP_SLOT_POWER_LIMIT_VAL | \
PCIE_CAP_SLOT_POWER_LIMIT_SCALE)
@ -274,6 +269,20 @@ static int qcom_pcie_start_link(struct dw_pcie *pci)
return 0;
}
static void qcom_pcie_clear_hpc(struct dw_pcie *pci)
{
u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
u32 val;
dw_pcie_dbi_ro_wr_en(pci);
val = readl(pci->dbi_base + offset + PCI_EXP_SLTCAP);
val &= ~PCI_EXP_SLTCAP_HPC;
writel(val, pci->dbi_base + offset + PCI_EXP_SLTCAP);
dw_pcie_dbi_ro_wr_dis(pci);
}
static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;
@ -429,6 +438,8 @@ static int qcom_pcie_post_init_2_1_0(struct qcom_pcie *pcie)
writel(CFG_BRIDGE_SB_INIT,
pci->dbi_base + AXI_MSTR_RESP_COMP_CTRL1);
qcom_pcie_clear_hpc(pcie->pci);
return 0;
}
@ -512,6 +523,8 @@ static int qcom_pcie_post_init_1_0_0(struct qcom_pcie *pcie)
writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT);
}
qcom_pcie_clear_hpc(pcie->pci);
return 0;
}
@ -607,6 +620,8 @@ static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie)
val |= EN;
writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2);
qcom_pcie_clear_hpc(pcie->pci);
return 0;
}
@ -692,34 +707,6 @@ static int qcom_pcie_init_2_4_0(struct qcom_pcie *pcie)
return 0;
}
static int qcom_pcie_post_init_2_4_0(struct qcom_pcie *pcie)
{
u32 val;
/* enable PCIe clocks and resets */
val = readl(pcie->parf + PARF_PHY_CTRL);
val &= ~PHY_TEST_PWR_DOWN;
writel(val, pcie->parf + PARF_PHY_CTRL);
/* change DBI base address */
writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
/* MAC PHY_POWERDOWN MUX DISABLE */
val = readl(pcie->parf + PARF_SYS_CTRL);
val &= ~MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN;
writel(val, pcie->parf + PARF_SYS_CTRL);
val = readl(pcie->parf + PARF_MHI_CLOCK_RESET_CTRL);
val |= BYPASS;
writel(val, pcie->parf + PARF_MHI_CLOCK_RESET_CTRL);
val = readl(pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2);
val |= EN;
writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2);
return 0;
}
static int qcom_pcie_get_resources_2_3_3(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
@ -826,7 +813,9 @@ static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie)
writel(0, pcie->parf + PARF_Q2A_FLUSH);
writel(PCI_COMMAND_MASTER, pci->dbi_base + PCI_COMMAND);
writel(DBI_RO_WR_EN, pci->dbi_base + MISC_CONTROL_1_REG);
dw_pcie_dbi_ro_wr_en(pci);
writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP);
val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
@ -836,6 +825,8 @@ static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie)
writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset +
PCI_EXP_DEVCTL2);
dw_pcie_dbi_ro_wr_dis(pci);
return 0;
}
@ -966,6 +957,13 @@ err_disable_regulators:
return ret;
}
static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
{
qcom_pcie_clear_hpc(pcie->pci);
return 0;
}
static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
@ -1136,6 +1134,7 @@ static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
writel(0, pcie->parf + PARF_Q2A_FLUSH);
dw_pcie_dbi_ro_wr_en(pci);
writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP);
val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
@ -1145,6 +1144,8 @@ static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset +
PCI_EXP_DEVCTL2);
dw_pcie_dbi_ro_wr_dis(pci);
for (i = 0; i < 256; i++)
writel(0, pcie->parf + PARF_BDF_TO_SID_TABLE_N + (4 * i));
@ -1251,7 +1252,7 @@ static const struct qcom_pcie_ops ops_2_3_2 = {
static const struct qcom_pcie_ops ops_2_4_0 = {
.get_resources = qcom_pcie_get_resources_2_4_0,
.init = qcom_pcie_init_2_4_0,
.post_init = qcom_pcie_post_init_2_4_0,
.post_init = qcom_pcie_post_init_2_3_2,
.deinit = qcom_pcie_deinit_2_4_0,
.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
};
@ -1269,6 +1270,7 @@ static const struct qcom_pcie_ops ops_2_3_3 = {
static const struct qcom_pcie_ops ops_2_7_0 = {
.get_resources = qcom_pcie_get_resources_2_7_0,
.init = qcom_pcie_init_2_7_0,
.post_init = qcom_pcie_post_init_2_7_0,
.deinit = qcom_pcie_deinit_2_7_0,
.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
};
@ -1277,6 +1279,7 @@ static const struct qcom_pcie_ops ops_2_7_0 = {
static const struct qcom_pcie_ops ops_1_9_0 = {
.get_resources = qcom_pcie_get_resources_2_7_0,
.init = qcom_pcie_init_2_7_0,
.post_init = qcom_pcie_post_init_2_7_0,
.deinit = qcom_pcie_deinit_2_7_0,
.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
.config_sid = qcom_pcie_config_sid_1_9_0,

View File

@ -2296,13 +2296,13 @@ fail:
return ret;
}
static int tegra_pcie_dw_remove(struct platform_device *pdev)
static void tegra_pcie_dw_remove(struct platform_device *pdev)
{
struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
if (pcie->of_data->mode == DW_PCIE_RC_TYPE) {
if (!pcie->link_state)
return 0;
return;
debugfs_remove_recursive(pcie->debugfs);
tegra_pcie_deinit_controller(pcie);
@ -2316,8 +2316,6 @@ static int tegra_pcie_dw_remove(struct platform_device *pdev)
tegra_bpmp_put(pcie->bpmp);
if (pcie->pex_refclk_sel_gpiod)
gpiod_set_value(pcie->pex_refclk_sel_gpiod, 0);
return 0;
}
static int tegra_pcie_dw_suspend_late(struct device *dev)
@ -2511,7 +2509,7 @@ static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
static struct platform_driver tegra_pcie_dw_driver = {
.probe = tegra_pcie_dw_probe,
.remove = tegra_pcie_dw_remove,
.remove_new = tegra_pcie_dw_remove,
.shutdown = tegra_pcie_dw_shutdown,
.driver = {
.name = "tegra194-pcie",

View File

@ -1927,7 +1927,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
return 0;
}
static int advk_pcie_remove(struct platform_device *pdev)
static void advk_pcie_remove(struct platform_device *pdev)
{
struct advk_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
@ -1989,8 +1989,6 @@ static int advk_pcie_remove(struct platform_device *pdev)
/* Disable phy */
advk_pcie_disable_phy(pcie);
return 0;
}
static const struct of_device_id advk_pcie_of_match_table[] = {
@ -2005,7 +2003,7 @@ static struct platform_driver advk_pcie_driver = {
.of_match_table = advk_pcie_of_match_table,
},
.probe = advk_pcie_probe,
.remove = advk_pcie_remove,
.remove_new = advk_pcie_remove,
};
module_platform_driver(advk_pcie_driver);

View File

@ -429,22 +429,12 @@ static int faraday_pci_probe(struct platform_device *pdev)
p->dev = dev;
/* Retrieve and enable optional clocks */
clk = devm_clk_get(dev, "PCLK");
clk = devm_clk_get_enabled(dev, "PCLK");
if (IS_ERR(clk))
return PTR_ERR(clk);
ret = clk_prepare_enable(clk);
if (ret) {
dev_err(dev, "could not prepare PCLK\n");
return ret;
}
p->bus_clk = devm_clk_get(dev, "PCICLK");
p->bus_clk = devm_clk_get_enabled(dev, "PCICLK");
if (IS_ERR(p->bus_clk))
return PTR_ERR(p->bus_clk);
ret = clk_prepare_enable(p->bus_clk);
if (ret) {
dev_err(dev, "could not prepare PCICLK\n");
return ret;
}
p->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(p->base))

View File

@ -1649,7 +1649,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
return pci_host_probe(bridge);
}
static int mvebu_pcie_remove(struct platform_device *pdev)
static void mvebu_pcie_remove(struct platform_device *pdev)
{
struct mvebu_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
@ -1707,8 +1707,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
/* Power down card and disable clocks. Must be the last step. */
mvebu_pcie_powerdown(port);
}
return 0;
}
static const struct of_device_id mvebu_pcie_of_match_table[] = {
@ -1730,7 +1728,7 @@ static struct platform_driver mvebu_pcie_driver = {
.pm = &mvebu_pcie_pm_ops,
},
.probe = mvebu_pcie_probe,
.remove = mvebu_pcie_remove,
.remove_new = mvebu_pcie_remove,
};
module_platform_driver(mvebu_pcie_driver);

View File

@ -2680,7 +2680,7 @@ put_resources:
return err;
}
static int tegra_pcie_remove(struct platform_device *pdev)
static void tegra_pcie_remove(struct platform_device *pdev)
{
struct tegra_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
@ -2701,8 +2701,6 @@ static int tegra_pcie_remove(struct platform_device *pdev)
list_for_each_entry_safe(port, tmp, &pcie->ports, list)
tegra_pcie_port_free(port);
return 0;
}
static int tegra_pcie_pm_suspend(struct device *dev)
@ -2808,6 +2806,6 @@ static struct platform_driver tegra_pcie_driver = {
.pm = &tegra_pcie_pm_ops,
},
.probe = tegra_pcie_probe,
.remove = tegra_pcie_remove,
.remove_new = tegra_pcie_remove,
};
module_platform_driver(tegra_pcie_driver);

View File

@ -348,7 +348,7 @@ static void xgene_msi_isr(struct irq_desc *desc)
static enum cpuhp_state pci_xgene_online;
static int xgene_msi_remove(struct platform_device *pdev)
static void xgene_msi_remove(struct platform_device *pdev)
{
struct xgene_msi *msi = platform_get_drvdata(pdev);
@ -362,8 +362,6 @@ static int xgene_msi_remove(struct platform_device *pdev)
msi->bitmap = NULL;
xgene_free_domains(msi);
return 0;
}
static int xgene_msi_hwirq_alloc(unsigned int cpu)
@ -521,7 +519,7 @@ static struct platform_driver xgene_msi_driver = {
.of_match_table = xgene_msi_match_table,
},
.probe = xgene_msi_probe,
.remove = xgene_msi_remove,
.remove_new = xgene_msi_remove,
};
static int __init xgene_pcie_msi_init(void)

View File

@ -197,7 +197,7 @@ static void altera_free_domains(struct altera_msi *msi)
irq_domain_remove(msi->inner_domain);
}
static int altera_msi_remove(struct platform_device *pdev)
static void altera_msi_remove(struct platform_device *pdev)
{
struct altera_msi *msi = platform_get_drvdata(pdev);
@ -207,7 +207,6 @@ static int altera_msi_remove(struct platform_device *pdev)
altera_free_domains(msi);
platform_set_drvdata(pdev, NULL);
return 0;
}
static int altera_msi_probe(struct platform_device *pdev)
@ -275,7 +274,7 @@ static struct platform_driver altera_msi_driver = {
.of_match_table = altera_msi_of_match,
},
.probe = altera_msi_probe,
.remove = altera_msi_remove,
.remove_new = altera_msi_remove,
};
static int __init altera_msi_init(void)

View File

@ -806,7 +806,7 @@ static int altera_pcie_probe(struct platform_device *pdev)
return pci_host_probe(bridge);
}
static int altera_pcie_remove(struct platform_device *pdev)
static void altera_pcie_remove(struct platform_device *pdev)
{
struct altera_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
@ -814,13 +814,11 @@ static int altera_pcie_remove(struct platform_device *pdev)
pci_stop_root_bus(bridge->bus);
pci_remove_root_bus(bridge->bus);
altera_pcie_irq_teardown(pcie);
return 0;
}
static struct platform_driver altera_pcie_driver = {
.probe = altera_pcie_probe,
.remove = altera_pcie_remove,
.remove_new = altera_pcie_remove,
.driver = {
.name = "altera-pcie",
.of_match_table = altera_pcie_of_match,

View File

@ -1396,7 +1396,7 @@ static void __brcm_pcie_remove(struct brcm_pcie *pcie)
clk_disable_unprepare(pcie->clk);
}
static int brcm_pcie_remove(struct platform_device *pdev)
static void brcm_pcie_remove(struct platform_device *pdev)
{
struct brcm_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
@ -1404,8 +1404,6 @@ static int brcm_pcie_remove(struct platform_device *pdev)
pci_stop_root_bus(bridge->bus);
pci_remove_root_bus(bridge->bus);
__brcm_pcie_remove(pcie);
return 0;
}
static const int pcie_offsets[] = {
@ -1612,7 +1610,7 @@ static const struct dev_pm_ops brcm_pcie_pm_ops = {
static struct platform_driver brcm_pcie_driver = {
.probe = brcm_pcie_probe,
.remove = brcm_pcie_remove,
.remove_new = brcm_pcie_remove,
.driver = {
.name = "brcm-pcie",
.of_match_table = brcm_pcie_match,

View File

@ -299,13 +299,11 @@ static int hisi_pcie_error_handler_probe(struct platform_device *pdev)
return 0;
}
static int hisi_pcie_error_handler_remove(struct platform_device *pdev)
static void hisi_pcie_error_handler_remove(struct platform_device *pdev)
{
struct hisi_pcie_error_private *priv = platform_get_drvdata(pdev);
ghes_unregister_vendor_record_notifier(&priv->nb);
return 0;
}
static const struct acpi_device_id hisi_pcie_acpi_match[] = {
@ -319,7 +317,7 @@ static struct platform_driver hisi_pcie_error_handler_driver = {
.acpi_match_table = hisi_pcie_acpi_match,
},
.probe = hisi_pcie_error_handler_probe,
.remove = hisi_pcie_error_handler_remove,
.remove_new = hisi_pcie_error_handler_remove,
};
module_platform_driver(hisi_pcie_error_handler_driver);

View File

@ -114,11 +114,11 @@ static int iproc_pltfm_pcie_probe(struct platform_device *pdev)
return 0;
}
static int iproc_pltfm_pcie_remove(struct platform_device *pdev)
static void iproc_pltfm_pcie_remove(struct platform_device *pdev)
{
struct iproc_pcie *pcie = platform_get_drvdata(pdev);
return iproc_pcie_remove(pcie);
iproc_pcie_remove(pcie);
}
static void iproc_pltfm_pcie_shutdown(struct platform_device *pdev)
@ -134,7 +134,7 @@ static struct platform_driver iproc_pltfm_pcie_driver = {
.of_match_table = of_match_ptr(iproc_pcie_of_match_table),
},
.probe = iproc_pltfm_pcie_probe,
.remove = iproc_pltfm_pcie_remove,
.remove_new = iproc_pltfm_pcie_remove,
.shutdown = iproc_pltfm_pcie_shutdown,
};
module_platform_driver(iproc_pltfm_pcie_driver);

View File

@ -1537,7 +1537,7 @@ err_exit_phy:
}
EXPORT_SYMBOL(iproc_pcie_setup);
int iproc_pcie_remove(struct iproc_pcie *pcie)
void iproc_pcie_remove(struct iproc_pcie *pcie)
{
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
@ -1548,8 +1548,6 @@ int iproc_pcie_remove(struct iproc_pcie *pcie)
phy_power_off(pcie->phy);
phy_exit(pcie->phy);
return 0;
}
EXPORT_SYMBOL(iproc_pcie_remove);

View File

@ -111,7 +111,7 @@ struct iproc_pcie {
};
int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res);
int iproc_pcie_remove(struct iproc_pcie *pcie);
void iproc_pcie_remove(struct iproc_pcie *pcie);
int iproc_pcie_shutdown(struct iproc_pcie *pcie);
#ifdef CONFIG_PCIE_IPROC_MSI

View File

@ -943,7 +943,7 @@ static int mtk_pcie_probe(struct platform_device *pdev)
return 0;
}
static int mtk_pcie_remove(struct platform_device *pdev)
static void mtk_pcie_remove(struct platform_device *pdev)
{
struct mtk_gen3_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
@ -955,8 +955,6 @@ static int mtk_pcie_remove(struct platform_device *pdev)
mtk_pcie_irq_teardown(pcie);
mtk_pcie_power_down(pcie);
return 0;
}
static void mtk_pcie_irq_save(struct mtk_gen3_pcie *pcie)
@ -1069,7 +1067,7 @@ MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
.remove = mtk_pcie_remove,
.remove_new = mtk_pcie_remove,
.driver = {
.name = "mtk-pcie-gen3",
.of_match_table = mtk_pcie_of_match,

View File

@ -1134,7 +1134,7 @@ static void mtk_pcie_free_resources(struct mtk_pcie *pcie)
pci_free_resource_list(windows);
}
static int mtk_pcie_remove(struct platform_device *pdev)
static void mtk_pcie_remove(struct platform_device *pdev)
{
struct mtk_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
@ -1146,8 +1146,6 @@ static int mtk_pcie_remove(struct platform_device *pdev)
mtk_pcie_irq_teardown(pcie);
mtk_pcie_put_resources(pcie);
return 0;
}
static int mtk_pcie_suspend_noirq(struct device *dev)
@ -1239,7 +1237,7 @@ MODULE_DEVICE_TABLE(of, mtk_pcie_ids);
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
.remove = mtk_pcie_remove,
.remove_new = mtk_pcie_remove,
.driver = {
.name = "mtk-pcie",
.of_match_table = mtk_pcie_ids,

View File

@ -524,15 +524,13 @@ remove_resets:
return err;
}
static int mt7621_pcie_remove(struct platform_device *pdev)
static void mt7621_pcie_remove(struct platform_device *pdev)
{
struct mt7621_pcie *pcie = platform_get_drvdata(pdev);
struct mt7621_pcie_port *port;
list_for_each_entry(port, &pcie->ports, list)
reset_control_put(port->pcie_rst);
return 0;
}
static const struct of_device_id mt7621_pcie_ids[] = {
@ -543,7 +541,7 @@ MODULE_DEVICE_TABLE(of, mt7621_pcie_ids);
static struct platform_driver mt7621_pcie_driver = {
.probe = mt7621_pcie_probe,
.remove = mt7621_pcie_remove,
.remove_new = mt7621_pcie_remove,
.driver = {
.name = "mt7621-pci",
.of_match_table = mt7621_pcie_ids,

View File

@ -41,21 +41,6 @@ struct rcar_msi {
int irq2;
};
#ifdef CONFIG_ARM
/*
* Here we keep a static copy of the remapped PCIe controller address.
* This is only used on aarch32 systems, all of which have one single
* PCIe controller, to provide quick access to the PCIe controller in
* the L1 link state fixup function, called from the ARM fault handler.
*/
static void __iomem *pcie_base;
/*
* Static copy of PCIe device pointer, so we can check whether the
* device is runtime suspended or not.
*/
static struct device *pcie_dev;
#endif
/* Structure representing the PCIe interface */
struct rcar_pcie_host {
struct rcar_pcie pcie;
@ -684,7 +669,7 @@ static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
}
static struct irq_chip rcar_msi_bottom_chip = {
.name = "Rcar MSI",
.name = "R-Car MSI",
.irq_ack = rcar_msi_irq_ack,
.irq_mask = rcar_msi_irq_mask,
.irq_unmask = rcar_msi_irq_unmask,
@ -813,7 +798,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
/*
* Setup MSI data target using RC base address address, which
* is guaranteed to be in the low 32bit range on any RCar HW.
* is guaranteed to be in the low 32bit range on any R-Car HW.
*/
rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR);
rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR);
@ -879,12 +864,6 @@ static int rcar_pcie_get_resources(struct rcar_pcie_host *host)
}
host->msi.irq2 = i;
#ifdef CONFIG_ARM
/* Cache static copy for L1 link state fixup hook on aarch32 */
pcie_base = pcie->base;
pcie_dev = pcie->dev;
#endif
return 0;
err_irq2:

View File

@ -61,70 +61,38 @@ static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(region));
}
static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
u32 r, u32 type, u64 cpu_addr,
u64 pci_addr, size_t size)
u32 r, u64 cpu_addr, u64 pci_addr,
size_t size)
{
u64 sz = 1ULL << fls64(size - 1);
int num_pass_bits = ilog2(sz);
u32 addr0, addr1, desc0, desc1;
bool is_nor_msg = (type == AXI_WRAPPER_NOR_MSG);
int num_pass_bits = fls64(size - 1);
u32 addr0, addr1, desc0;
/* The minimal region size is 1MB */
if (num_pass_bits < 8)
num_pass_bits = 8;
cpu_addr -= rockchip->mem_res->start;
addr0 = ((is_nor_msg ? 0x10 : (num_pass_bits - 1)) &
PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(cpu_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
addr1 = upper_32_bits(is_nor_msg ? cpu_addr : pci_addr);
desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | type;
desc1 = 0;
addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
addr1 = upper_32_bits(pci_addr);
desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | AXI_WRAPPER_MEM_WRITE;
if (is_nor_msg) {
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
rockchip_pcie_write(rockchip, desc0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
rockchip_pcie_write(rockchip, desc1,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
} else {
/* PCI bus address region */
rockchip_pcie_write(rockchip, addr0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
rockchip_pcie_write(rockchip, addr1,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
rockchip_pcie_write(rockchip, desc0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
rockchip_pcie_write(rockchip, desc1,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
addr0 =
((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(cpu_addr) &
PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
addr1 = upper_32_bits(cpu_addr);
}
/* CPU bus address region */
/* PCI bus address region */
rockchip_pcie_write(rockchip, addr0,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r));
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
rockchip_pcie_write(rockchip, addr1,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r));
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
rockchip_pcie_write(rockchip, desc0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
}
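
The rewritten helper above always programs a plain memory-write region and derives the pass-through bit count straight from the window size. A worked example with assumed values, mapping a 1 MB window to PCI address 0x80100000:

u64 pci_addr = 0x80100000;			/* assumed target address */
int num_pass_bits = fls64(SZ_1M - 1);		/* 20: the low 20 bits pass through untranslated */
u32 addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
	    (lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
u32 addr1 = upper_32_bits(pci_addr);		/* 0 */
/* addr0 == 0x80100013: NUM_BITS field of 19 plus the 256-byte-aligned low address */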
static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
struct pci_epf_header *hdr)
{
u32 reg;
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
@ -137,8 +105,9 @@ static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
PCIE_CORE_CONFIG_VENDOR);
}
rockchip_pcie_write(rockchip, hdr->deviceid << 16,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_VENDOR_ID);
reg = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_DID_VID);
reg = (reg & 0xFFFF) | (hdr->deviceid << 16);
rockchip_pcie_write(rockchip, reg, PCIE_EP_CONFIG_DID_VID);
rockchip_pcie_write(rockchip,
hdr->revid |
@ -256,26 +225,20 @@ static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, u8 vfn,
ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar));
}
static inline u32 rockchip_ob_region(phys_addr_t addr)
{
return (addr >> ilog2(SZ_1M)) & 0x1f;
}
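
rockchip_ob_region() selects one of the controller's 32 outbound regions from the 1 MB page index of a CPU address inside the endpoint memory window. For example, assuming a window starting at the 32 MB aligned address 0x20000000:

u32 r = rockchip_ob_region(0x20500000);	/* (0x20500000 >> 20) & 0x1f == 5 */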
static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr, u64 pci_addr,
size_t size)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *pcie = &ep->rockchip;
u32 r;
u32 r = rockchip_ob_region(addr);
r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
/*
* Region 0 is reserved for configuration space and shouldn't
* be used elsewhere per TRM, so leave it out.
*/
if (r >= ep->max_regions - 1) {
dev_err(&epc->dev, "no free outbound region\n");
return -EINVAL;
}
rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, AXI_WRAPPER_MEM_WRITE, addr,
pci_addr, size);
rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, addr, pci_addr, size);
set_bit(r, &ep->ob_region_map);
ep->ob_addr[r] = addr;
@ -290,15 +253,11 @@ static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 r;
for (r = 0; r < ep->max_regions - 1; r++)
for (r = 0; r < ep->max_regions; r++)
if (ep->ob_addr[r] == addr)
break;
/*
* Region 0 is reserved for configuration space and shouldn't
* be used elsewhere per TRM, so leave it out.
*/
if (r == ep->max_regions - 1)
if (r == ep->max_regions)
return;
rockchip_pcie_clear_ep_ob_atu(rockchip, r);
@ -312,15 +271,15 @@ static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn,
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u16 flags;
u32 flags;
flags = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK;
flags |=
((multi_msg_cap << 1) << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
PCI_MSI_FLAGS_64BIT;
(multi_msg_cap << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
(PCI_MSI_FLAGS_64BIT << ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET);
flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP;
rockchip_pcie_write(rockchip, flags,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
@ -332,7 +291,7 @@ static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u16 flags;
u32 flags;
flags = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
@ -345,48 +304,25 @@ static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn, u8 vfn)
}
static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
u8 intx, bool is_asserted)
u8 intx, bool do_assert)
{
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 r = ep->max_regions - 1;
u32 offset;
u32 status;
u8 msg_code;
if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
ep->irq_pci_fn != fn)) {
rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
AXI_WRAPPER_NOR_MSG,
ep->irq_phys_addr, 0, 0);
ep->irq_pci_addr = ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR;
ep->irq_pci_fn = fn;
}
intx &= 3;
if (is_asserted) {
if (do_assert) {
ep->irq_pending |= BIT(intx);
msg_code = ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA + intx;
rockchip_pcie_write(rockchip,
PCIE_CLIENT_INT_IN_ASSERT |
PCIE_CLIENT_INT_PEND_ST_PEND,
PCIE_CLIENT_LEGACY_INT_CTRL);
} else {
ep->irq_pending &= ~BIT(intx);
msg_code = ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA + intx;
rockchip_pcie_write(rockchip,
PCIE_CLIENT_INT_IN_DEASSERT |
PCIE_CLIENT_INT_PEND_ST_NORMAL,
PCIE_CLIENT_LEGACY_INT_CTRL);
}
status = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_CMD_STATUS);
status &= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
if ((status != 0) ^ (ep->irq_pending != 0)) {
status ^= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
rockchip_pcie_write(rockchip, status,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_CMD_STATUS);
}
offset =
ROCKCHIP_PCIE_MSG_ROUTING(ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX) |
ROCKCHIP_PCIE_MSG_CODE(msg_code) | ROCKCHIP_PCIE_MSG_NO_DATA;
writel(0, ep->irq_cpu_addr + offset);
}
static int rockchip_pcie_ep_send_legacy_irq(struct rockchip_pcie_ep *ep, u8 fn,
@ -416,9 +352,10 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
u8 interrupt_num)
{
struct rockchip_pcie *rockchip = &ep->rockchip;
u16 flags, mme, data, data_mask;
u32 flags, mme, data, data_mask;
u8 msi_count;
u64 pci_addr, pci_addr_mask = 0xff;
u64 pci_addr;
u32 r;
/* Check MSI enable bit */
flags = rockchip_pcie_read(&ep->rockchip,
@ -452,21 +389,20 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
PCI_MSI_ADDRESS_LO);
pci_addr &= GENMASK_ULL(63, 2);
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
if (unlikely(ep->irq_pci_addr != (pci_addr & PCIE_ADDR_MASK) ||
ep->irq_pci_fn != fn)) {
rockchip_pcie_prog_ep_ob_atu(rockchip, fn, ep->max_regions - 1,
AXI_WRAPPER_MEM_WRITE,
r = rockchip_ob_region(ep->irq_phys_addr);
rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
ep->irq_phys_addr,
pci_addr & ~pci_addr_mask,
pci_addr_mask + 1);
ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
pci_addr & PCIE_ADDR_MASK,
~PCIE_ADDR_MASK + 1);
ep->irq_pci_addr = (pci_addr & PCIE_ADDR_MASK);
ep->irq_pci_fn = fn;
}
writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
writew(data, ep->irq_cpu_addr + (pci_addr & ~PCIE_ADDR_MASK));
return 0;
}
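
The MSI doorbell path now reuses the shared PCIE_ADDR_MASK (0xffffff00): the outbound region maps a 256-byte granule (~PCIE_ADDR_MASK + 1 == 0x100) at the aligned MSI target, and the data write lands at the residual offset. A worked example with an assumed MSI target of 0xfee00438:

u64 pci_addr = 0xfee00438;			/* assumed host MSI address */
u64 mapped   = pci_addr & PCIE_ADDR_MASK;	/* region maps 0xfee00400 */
size_t off   = pci_addr & ~PCIE_ADDR_MASK;	/* writew() lands at offset 0x38 */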
@ -506,6 +442,7 @@ static const struct pci_epc_features rockchip_pcie_epc_features = {
.linkup_notifier = false,
.msi_capable = true,
.msix_capable = false,
.align = 256,
};
static const struct pci_epc_features*
@ -547,6 +484,8 @@ static int rockchip_pcie_parse_ep_dt(struct rockchip_pcie *rockchip,
if (err < 0 || ep->max_regions > MAX_REGION_LIMIT)
ep->max_regions = MAX_REGION_LIMIT;
ep->ob_region_map = 0;
err = of_property_read_u8(dev->of_node, "max-functions",
&ep->epc->max_functions);
if (err < 0)
@ -567,7 +506,9 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
struct rockchip_pcie *rockchip;
struct pci_epc *epc;
size_t max_regions;
int err;
struct pci_epc_mem_window *windows = NULL;
int err, i;
u32 cfg_msi, cfg_msix_cp;
ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
if (!ep)
@ -614,15 +555,27 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
/* Only enable function 0 by default */
rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);
err = pci_epc_mem_init(epc, rockchip->mem_res->start,
resource_size(rockchip->mem_res), PAGE_SIZE);
windows = devm_kcalloc(dev, ep->max_regions,
sizeof(struct pci_epc_mem_window), GFP_KERNEL);
if (!windows) {
err = -ENOMEM;
goto err_uninit_port;
}
for (i = 0; i < ep->max_regions; i++) {
windows[i].phys_base = rockchip->mem_res->start + (SZ_1M * i);
windows[i].size = SZ_1M;
windows[i].page_size = SZ_1M;
}
err = pci_epc_multi_mem_init(epc, windows, ep->max_regions);
devm_kfree(dev, windows);
if (err < 0) {
dev_err(dev, "failed to initialize the memory space\n");
goto err_uninit_port;
}
ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
SZ_128K);
SZ_1M);
if (!ep->irq_cpu_addr) {
dev_err(dev, "failed to reserve memory space for MSI\n");
err = -ENOMEM;
@ -631,6 +584,32 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;
/*
* MSI-X is not supported but the controller still advertises the MSI-X
* capability by default, which can lead to the Root Complex side
* allocating MSI-X vectors which cannot be used. Avoid this by skipping
* the MSI-X capability entry in the PCIe capabilities linked-list: get
* the next pointer from the MSI-X entry and set that in the MSI
* capability entry (which is the previous entry). This way the MSI-X
* entry is skipped (left out of the linked-list) and not advertised.
*/
cfg_msi = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
cfg_msi &= ~ROCKCHIP_PCIE_EP_MSI_CP1_MASK;
cfg_msix_cp = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE +
ROCKCHIP_PCIE_EP_MSIX_CAP_REG) &
ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK;
cfg_msi |= cfg_msix_cp;
rockchip_pcie_write(rockchip, cfg_msi,
PCIE_EP_CONFIG_BASE + ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE,
PCIE_CLIENT_CONFIG);
return 0;
err_epc_mem_exit:
pci_epc_mem_exit(epc);

View File

@ -1009,7 +1009,7 @@ err_set_vpcie:
return err;
}
static int rockchip_pcie_remove(struct platform_device *pdev)
static void rockchip_pcie_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
@ -1029,8 +1029,6 @@ static int rockchip_pcie_remove(struct platform_device *pdev)
regulator_disable(rockchip->vpcie3v3);
regulator_disable(rockchip->vpcie1v8);
regulator_disable(rockchip->vpcie0v9);
return 0;
}
static const struct dev_pm_ops rockchip_pcie_pm_ops = {
@ -1051,7 +1049,7 @@ static struct platform_driver rockchip_pcie_driver = {
.pm = &rockchip_pcie_pm_ops,
},
.probe = rockchip_pcie_probe,
.remove = rockchip_pcie_remove,
.remove_new = rockchip_pcie_remove,
};
module_platform_driver(rockchip_pcie_driver);

View File

@ -14,6 +14,7 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/iopoll.h>
#include <linux/of_pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
@ -153,6 +154,12 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
}
EXPORT_SYMBOL_GPL(rockchip_pcie_parse_dt);
#define rockchip_pcie_read_addr(addr) rockchip_pcie_read(rockchip, addr)
/* 100 ms max wait time for PHY PLLs to lock */
#define RK_PHY_PLL_LOCK_TIMEOUT_US 100000
/* Sleep should be less than 20ms */
#define RK_PHY_PLL_LOCK_SLEEP_US 1000
int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
{
struct device *dev = rockchip->dev;
@ -254,6 +261,16 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
}
}
err = readx_poll_timeout(rockchip_pcie_read_addr,
PCIE_CLIENT_SIDE_BAND_STATUS,
regs, !(regs & PCIE_CLIENT_PHY_ST),
RK_PHY_PLL_LOCK_SLEEP_US,
RK_PHY_PLL_LOCK_TIMEOUT_US);
if (err) {
dev_err(dev, "PHY PLLs could not lock, %d\n", err);
goto err_power_off_phy;
}
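
readx_poll_timeout() from <linux/iopoll.h> polls the register through the rockchip_pcie_read_addr() wrapper until the condition holds or the timeout expires. A simplified sketch of what the call above does (the real macro also performs one final read after the deadline before giving up):

ktime_t deadline = ktime_add_us(ktime_get(), RK_PHY_PLL_LOCK_TIMEOUT_US);

for (;;) {
	regs = rockchip_pcie_read(rockchip, PCIE_CLIENT_SIDE_BAND_STATUS);
	if (!(regs & PCIE_CLIENT_PHY_ST))
		break;				/* PHY PLLs locked */
	if (ktime_after(ktime_get(), deadline)) {
		err = -ETIMEDOUT;
		break;
	}
	usleep_range(RK_PHY_PLL_LOCK_SLEEP_US / 2, RK_PHY_PLL_LOCK_SLEEP_US);
}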
/*
* Please don't reorder the deassert sequence of the following
* four reset pins.

View File

@ -38,6 +38,13 @@
#define PCIE_CLIENT_MODE_EP HIWORD_UPDATE(0x0040, 0)
#define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0)
#define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080)
#define PCIE_CLIENT_LEGACY_INT_CTRL (PCIE_CLIENT_BASE + 0x0c)
#define PCIE_CLIENT_INT_IN_ASSERT HIWORD_UPDATE_BIT(0x0002)
#define PCIE_CLIENT_INT_IN_DEASSERT HIWORD_UPDATE(0x0002, 0)
#define PCIE_CLIENT_INT_PEND_ST_PEND HIWORD_UPDATE_BIT(0x0001)
#define PCIE_CLIENT_INT_PEND_ST_NORMAL HIWORD_UPDATE(0x0001, 0)
#define PCIE_CLIENT_SIDE_BAND_STATUS (PCIE_CLIENT_BASE + 0x20)
#define PCIE_CLIENT_PHY_ST BIT(12)
#define PCIE_CLIENT_DEBUG_OUT_0 (PCIE_CLIENT_BASE + 0x3c)
#define PCIE_CLIENT_DEBUG_LTSSM_MASK GENMASK(5, 0)
#define PCIE_CLIENT_DEBUG_LTSSM_L1 0x18
@ -132,7 +139,10 @@
#define PCIE_RC_RP_ATS_BASE 0x400000
#define PCIE_RC_CONFIG_NORMAL_BASE 0x800000
#define PCIE_EP_PF_CONFIG_REGS_BASE 0x800000
#define PCIE_RC_CONFIG_BASE 0xa00000
#define PCIE_EP_CONFIG_BASE 0xa00000
#define PCIE_EP_CONFIG_DID_VID (PCIE_EP_CONFIG_BASE + 0x00)
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18
@ -148,10 +158,11 @@
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)
#define PCIE_ADDR_MASK 0xffffff00
#define PCIE_CORE_AXI_CONF_BASE 0xc00000
#define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0)
#define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f
#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR 0xffffff00
#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR PCIE_ADDR_MASK
#define PCIE_CORE_OB_REGION_ADDR1 (PCIE_CORE_AXI_CONF_BASE + 0x4)
#define PCIE_CORE_OB_REGION_DESC0 (PCIE_CORE_AXI_CONF_BASE + 0x8)
#define PCIE_CORE_OB_REGION_DESC1 (PCIE_CORE_AXI_CONF_BASE + 0xc)
@ -159,7 +170,7 @@
#define PCIE_CORE_AXI_INBOUND_BASE 0xc00800
#define PCIE_RP_IB_ADDR0 (PCIE_CORE_AXI_INBOUND_BASE + 0x0)
#define PCIE_CORE_IB_REGION_ADDR0_NUM_BITS 0x3f
#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR 0xffffff00
#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR PCIE_ADDR_MASK
#define PCIE_RP_IB_ADDR1 (PCIE_CORE_AXI_INBOUND_BASE + 0x4)
/* Size of one AXI Region (not Region 0) */
@ -216,21 +227,28 @@
#define ROCKCHIP_PCIE_EP_CMD_STATUS 0x4
#define ROCKCHIP_PCIE_EP_CMD_STATUS_IS BIT(19)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_REG 0x90
#define ROCKCHIP_PCIE_EP_MSI_CP1_OFFSET 8
#define ROCKCHIP_PCIE_EP_MSI_CP1_MASK GENMASK(15, 8)
#define ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET 16
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET 17
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK GENMASK(19, 17)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET 20
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK GENMASK(22, 20)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_ME BIT(16)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP BIT(24)
#define ROCKCHIP_PCIE_EP_MSIX_CAP_REG 0xb0
#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_OFFSET 8
#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK GENMASK(15, 8)
#define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR 0x1
#define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR 0x3
#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) \
(PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12)))
#define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \
(PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12)))
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
(PCIE_RC_RP_ATS_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
(PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
(PCIE_RC_RP_ATS_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
(PCIE_RC_RP_ATS_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
(PCIE_CORE_AXI_CONF_BASE + 0x082c + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
(((devfn) << 12) & \
@ -238,20 +256,21 @@
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
(((bus) << 20) & ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
#define PCIE_RC_EP_ATR_OB_REGIONS_1_32 (PCIE_CORE_AXI_CONF_BASE + 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0000 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
(PCIE_RC_RP_ATS_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0004 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
(((devfn) << 24) & ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r) \
(PCIE_RC_RP_ATS_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \
(PCIE_RC_RP_ATS_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
(PCIE_RC_RP_ATS_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
(PCIE_RC_RP_ATS_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0008 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \
(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x000c + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC2(r) \
(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0010 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn) \
(PCIE_CORE_CTRL_MGMT_BASE + 0x0240 + (fn) * 0x0008)
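
The INTx control bits added above follow this header's HIWORD_UPDATE() convention, where (assuming the usual Rockchip definition) HIWORD_UPDATE(mask, val) == ((mask) << 16) | (val) and HIWORD_UPDATE_BIT(val) sets both halves, so the upper 16 bits act as a per-bit write enable:

/* expansion under the assumed HIWORD_UPDATE() definition */
static_assert(PCIE_CLIENT_INT_IN_ASSERT      == 0x00020002);	/* enable bit 17, set bit 1 */
static_assert(PCIE_CLIENT_INT_IN_DEASSERT    == 0x00020000);	/* enable bit 17, clear bit 1 */
static_assert(PCIE_CLIENT_INT_PEND_ST_PEND   == 0x00010001);	/* enable bit 16, set bit 0 */
static_assert(PCIE_CLIENT_INT_PEND_ST_NORMAL == 0x00010000);	/* enable bit 16, clear bit 0 */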

View File

@ -927,7 +927,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
if (!list_empty(&child->devices)) {
dev = list_first_entry(&child->devices,
struct pci_dev, bus_list);
if (pci_reset_bus(dev))
ret = pci_reset_bus(dev);
if (ret)
pci_warn(dev, "can't reset device: %d\n", ret);
break;
@ -1036,6 +1037,13 @@ static void vmd_remove(struct pci_dev *dev)
ida_simple_remove(&vmd_instance_ida, vmd->instance);
}
static void vmd_shutdown(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
vmd_remove_irq_domain(vmd);
}
#ifdef CONFIG_PM_SLEEP
static int vmd_suspend(struct device *dev)
{
@ -1101,6 +1109,7 @@ static struct pci_driver vmd_drv = {
.id_table = vmd_ids,
.probe = vmd_probe,
.remove = vmd_remove,
.shutdown = vmd_shutdown,
.driver = {
.pm = &vmd_dev_pm_ops,
},

View File

@ -27,7 +27,7 @@ config PCI_EPF_NTB
If in doubt, say "N" to disable Endpoint NTB driver.
config PCI_EPF_VNTB
tristate "PCI Endpoint NTB driver"
tristate "PCI Endpoint Virtual NTB driver"
depends on PCI_ENDPOINT
depends on NTB
select CONFIGFS_FS
@ -37,3 +37,13 @@ config PCI_EPF_VNTB
between PCI Root Port and PCIe Endpoint.
If in doubt, say "N" to disable Endpoint NTB driver.
config PCI_EPF_MHI
tristate "PCI Endpoint driver for MHI bus"
depends on PCI_ENDPOINT && MHI_BUS_EP
help
Enable this configuration option to enable the PCI Endpoint
driver for Modem Host Interface (MHI) bus in Qualcomm Endpoint
devices such as SDX55.
If in doubt, say "N" to disable Endpoint driver for MHI bus.

View File

@ -6,3 +6,4 @@
obj-$(CONFIG_PCI_EPF_TEST) += pci-epf-test.o
obj-$(CONFIG_PCI_EPF_NTB) += pci-epf-ntb.o
obj-$(CONFIG_PCI_EPF_VNTB) += pci-epf-vntb.o
obj-$(CONFIG_PCI_EPF_MHI) += pci-epf-mhi.o

View File

@ -0,0 +1,458 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCI EPF driver for MHI Endpoint devices
*
* Copyright (C) 2023 Linaro Ltd.
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
*/
#include <linux/mhi_ep.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pci-epc.h>
#include <linux/pci-epf.h>
#define MHI_VERSION_1_0 0x01000000
#define to_epf_mhi(cntrl) container_of(cntrl, struct pci_epf_mhi, cntrl)
struct pci_epf_mhi_ep_info {
const struct mhi_ep_cntrl_config *config;
struct pci_epf_header *epf_header;
enum pci_barno bar_num;
u32 epf_flags;
u32 msi_count;
u32 mru;
};
#define MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, direction) \
{ \
.num = ch_num, \
.name = ch_name, \
.dir = direction, \
}
#define MHI_EP_CHANNEL_CONFIG_UL(ch_num, ch_name) \
MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, DMA_TO_DEVICE)
#define MHI_EP_CHANNEL_CONFIG_DL(ch_num, ch_name) \
MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, DMA_FROM_DEVICE)
static const struct mhi_ep_channel_config mhi_v1_channels[] = {
MHI_EP_CHANNEL_CONFIG_UL(0, "LOOPBACK"),
MHI_EP_CHANNEL_CONFIG_DL(1, "LOOPBACK"),
MHI_EP_CHANNEL_CONFIG_UL(2, "SAHARA"),
MHI_EP_CHANNEL_CONFIG_DL(3, "SAHARA"),
MHI_EP_CHANNEL_CONFIG_UL(4, "DIAG"),
MHI_EP_CHANNEL_CONFIG_DL(5, "DIAG"),
MHI_EP_CHANNEL_CONFIG_UL(6, "SSR"),
MHI_EP_CHANNEL_CONFIG_DL(7, "SSR"),
MHI_EP_CHANNEL_CONFIG_UL(8, "QDSS"),
MHI_EP_CHANNEL_CONFIG_DL(9, "QDSS"),
MHI_EP_CHANNEL_CONFIG_UL(10, "EFS"),
MHI_EP_CHANNEL_CONFIG_DL(11, "EFS"),
MHI_EP_CHANNEL_CONFIG_UL(12, "MBIM"),
MHI_EP_CHANNEL_CONFIG_DL(13, "MBIM"),
MHI_EP_CHANNEL_CONFIG_UL(14, "QMI"),
MHI_EP_CHANNEL_CONFIG_DL(15, "QMI"),
MHI_EP_CHANNEL_CONFIG_UL(16, "QMI"),
MHI_EP_CHANNEL_CONFIG_DL(17, "QMI"),
MHI_EP_CHANNEL_CONFIG_UL(18, "IP-CTRL-1"),
MHI_EP_CHANNEL_CONFIG_DL(19, "IP-CTRL-1"),
MHI_EP_CHANNEL_CONFIG_UL(20, "IPCR"),
MHI_EP_CHANNEL_CONFIG_DL(21, "IPCR"),
MHI_EP_CHANNEL_CONFIG_UL(32, "DUN"),
MHI_EP_CHANNEL_CONFIG_DL(33, "DUN"),
MHI_EP_CHANNEL_CONFIG_UL(46, "IP_SW0"),
MHI_EP_CHANNEL_CONFIG_DL(47, "IP_SW0"),
};
static const struct mhi_ep_cntrl_config mhi_v1_config = {
.max_channels = 128,
.num_channels = ARRAY_SIZE(mhi_v1_channels),
.ch_cfg = mhi_v1_channels,
.mhi_version = MHI_VERSION_1_0,
};
static struct pci_epf_header sdx55_header = {
.vendorid = PCI_VENDOR_ID_QCOM,
.deviceid = 0x0306,
.baseclass_code = PCI_BASE_CLASS_COMMUNICATION,
.subclass_code = PCI_CLASS_COMMUNICATION_MODEM & 0xff,
.interrupt_pin = PCI_INTERRUPT_INTA,
};
static const struct pci_epf_mhi_ep_info sdx55_info = {
.config = &mhi_v1_config,
.epf_header = &sdx55_header,
.bar_num = BAR_0,
.epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32,
.msi_count = 32,
.mru = 0x8000,
};
struct pci_epf_mhi {
const struct pci_epf_mhi_ep_info *info;
struct mhi_ep_cntrl mhi_cntrl;
struct pci_epf *epf;
struct mutex lock;
void __iomem *mmio;
resource_size_t mmio_phys;
u32 mmio_size;
int irq;
};
static int __pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
phys_addr_t *paddr, void __iomem **vaddr,
size_t offset, size_t size)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct pci_epf *epf = epf_mhi->epf;
struct pci_epc *epc = epf->epc;
int ret;
*vaddr = pci_epc_mem_alloc_addr(epc, paddr, size + offset);
if (!*vaddr)
return -ENOMEM;
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, *paddr,
pci_addr - offset, size + offset);
if (ret) {
pci_epc_mem_free_addr(epc, *paddr, *vaddr, size + offset);
return ret;
}
*paddr = *paddr + offset;
*vaddr = *vaddr + offset;
return 0;
}
static int pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
phys_addr_t *paddr, void __iomem **vaddr,
size_t size)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct pci_epc *epc = epf_mhi->epf->epc;
size_t offset = pci_addr & (epc->mem->window.page_size - 1);
return __pci_epf_mhi_alloc_map(mhi_cntrl, pci_addr, paddr, vaddr,
offset, size);
}
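
The helper is split this way so the sub-page offset of the host address is preserved: EPC outbound memory is allocated and mapped at the window's page_size granularity, while MHI hands the driver arbitrary host addresses. Example with assumed values, for host address 0x40001234 and a 4 KB page size:

size_t offset = 0x40001234 & (SZ_4K - 1);	/* 0x234 */
/* __pci_epf_mhi_alloc_map() maps the aligned base 0x40001000 for
 * size + 0x234 bytes, then advances *paddr and *vaddr by 0x234 so
 * the caller sees exactly 0x40001234 */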
static void __pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl,
u64 pci_addr, phys_addr_t paddr,
void __iomem *vaddr, size_t offset,
size_t size)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct pci_epf *epf = epf_mhi->epf;
struct pci_epc *epc = epf->epc;
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, paddr - offset);
pci_epc_mem_free_addr(epc, paddr - offset, vaddr - offset,
size + offset);
}
static void pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
phys_addr_t paddr, void __iomem *vaddr,
size_t size)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct pci_epf *epf = epf_mhi->epf;
struct pci_epc *epc = epf->epc;
size_t offset = pci_addr & (epc->mem->window.page_size - 1);
__pci_epf_mhi_unmap_free(mhi_cntrl, pci_addr, paddr, vaddr, offset,
size);
}
static void pci_epf_mhi_raise_irq(struct mhi_ep_cntrl *mhi_cntrl, u32 vector)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
struct pci_epf *epf = epf_mhi->epf;
struct pci_epc *epc = epf->epc;
/*
* MHI supplies 0 based MSI vectors but the API expects the vector
* number to start from 1, so we need to increment the vector by 1.
*/
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, PCI_EPC_IRQ_MSI,
vector + 1);
}
static int pci_epf_mhi_read_from_host(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
void *to, size_t size)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
size_t offset = from % SZ_4K;
void __iomem *tre_buf;
phys_addr_t tre_phys;
int ret;
mutex_lock(&epf_mhi->lock);
ret = __pci_epf_mhi_alloc_map(mhi_cntrl, from, &tre_phys, &tre_buf,
offset, size);
if (ret) {
mutex_unlock(&epf_mhi->lock);
return ret;
}
memcpy_fromio(to, tre_buf, size);
__pci_epf_mhi_unmap_free(mhi_cntrl, from, tre_phys, tre_buf, offset,
size);
mutex_unlock(&epf_mhi->lock);
return 0;
}
static int pci_epf_mhi_write_to_host(struct mhi_ep_cntrl *mhi_cntrl,
void *from, u64 to, size_t size)
{
struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
size_t offset = to % SZ_4K;
void __iomem *tre_buf;
phys_addr_t tre_phys;
int ret;
mutex_lock(&epf_mhi->lock);
ret = __pci_epf_mhi_alloc_map(mhi_cntrl, to, &tre_phys, &tre_buf,
offset, size);
if (ret) {
mutex_unlock(&epf_mhi->lock);
return ret;
}
memcpy_toio(tre_buf, from, size);
__pci_epf_mhi_unmap_free(mhi_cntrl, to, tre_phys, tre_buf, offset,
size);
mutex_unlock(&epf_mhi->lock);
return 0;
}
static int pci_epf_mhi_core_init(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num];
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
int ret;
epf_bar->phys_addr = epf_mhi->mmio_phys;
epf_bar->size = epf_mhi->mmio_size;
epf_bar->barno = info->bar_num;
epf_bar->flags = info->epf_flags;
ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, epf_bar);
if (ret) {
dev_err(dev, "Failed to set BAR: %d\n", ret);
return ret;
}
ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no,
order_base_2(info->msi_count));
if (ret) {
dev_err(dev, "Failed to set MSI configuration: %d\n", ret);
return ret;
}
ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no,
epf->header);
if (ret) {
dev_err(dev, "Failed to set Configuration header: %d\n", ret);
return ret;
}
return 0;
}
static int pci_epf_mhi_link_up(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
int ret;
mhi_cntrl->mmio = epf_mhi->mmio;
mhi_cntrl->irq = epf_mhi->irq;
mhi_cntrl->mru = info->mru;
/* Assign the struct dev of PCI EP as MHI controller device */
mhi_cntrl->cntrl_dev = epc->dev.parent;
mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
mhi_cntrl->read_from_host = pci_epf_mhi_read_from_host;
mhi_cntrl->write_to_host = pci_epf_mhi_write_to_host;
/* Register the MHI EP controller */
ret = mhi_ep_register_controller(mhi_cntrl, info->config);
if (ret) {
dev_err(dev, "Failed to register MHI EP controller: %d\n", ret);
return ret;
}
return 0;
}
static int pci_epf_mhi_link_down(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
if (mhi_cntrl->mhi_dev) {
mhi_ep_power_down(mhi_cntrl);
mhi_ep_unregister_controller(mhi_cntrl);
}
return 0;
}
static int pci_epf_mhi_bme(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
struct device *dev = &epf->dev;
int ret;
/*
* Power up the MHI EP stack if link is up and stack is in power down
* state.
*/
if (!mhi_cntrl->enabled && mhi_cntrl->mhi_dev) {
ret = mhi_ep_power_up(mhi_cntrl);
if (ret) {
dev_err(dev, "Failed to power up MHI EP: %d\n", ret);
mhi_ep_unregister_controller(mhi_cntrl);
}
}
return 0;
}
static int pci_epf_mhi_bind(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
struct platform_device *pdev = to_platform_device(epc->dev.parent);
struct resource *res;
int ret;
/* Get MMIO base address from Endpoint controller */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
epf_mhi->mmio_phys = res->start;
epf_mhi->mmio_size = resource_size(res);
epf_mhi->mmio = ioremap(epf_mhi->mmio_phys, epf_mhi->mmio_size);
if (!epf_mhi->mmio)
return -ENOMEM;
ret = platform_get_irq_byname(pdev, "doorbell");
if (ret < 0) {
iounmap(epf_mhi->mmio);
return ret;
}
epf_mhi->irq = ret;
return 0;
}
static void pci_epf_mhi_unbind(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num];
struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
struct pci_epc *epc = epf->epc;
/*
* Forcefully power down the MHI EP stack. Only way to bring the MHI EP
* stack back to working state after successive bind is by getting BME
* from host.
*/
if (mhi_cntrl->mhi_dev) {
mhi_ep_power_down(mhi_cntrl);
mhi_ep_unregister_controller(mhi_cntrl);
}
iounmap(epf_mhi->mmio);
pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, epf_bar);
}
static struct pci_epc_event_ops pci_epf_mhi_event_ops = {
.core_init = pci_epf_mhi_core_init,
.link_up = pci_epf_mhi_link_up,
.link_down = pci_epf_mhi_link_down,
.bme = pci_epf_mhi_bme,
};
static int pci_epf_mhi_probe(struct pci_epf *epf,
const struct pci_epf_device_id *id)
{
struct pci_epf_mhi_ep_info *info =
(struct pci_epf_mhi_ep_info *)id->driver_data;
struct pci_epf_mhi *epf_mhi;
struct device *dev = &epf->dev;
epf_mhi = devm_kzalloc(dev, sizeof(*epf_mhi), GFP_KERNEL);
if (!epf_mhi)
return -ENOMEM;
epf->header = info->epf_header;
epf_mhi->info = info;
epf_mhi->epf = epf;
epf->event_ops = &pci_epf_mhi_event_ops;
mutex_init(&epf_mhi->lock);
epf_set_drvdata(epf, epf_mhi);
return 0;
}
static const struct pci_epf_device_id pci_epf_mhi_ids[] = {
{
.name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info,
},
{},
};
static struct pci_epf_ops pci_epf_mhi_ops = {
.unbind = pci_epf_mhi_unbind,
.bind = pci_epf_mhi_bind,
};
static struct pci_epf_driver pci_epf_mhi_driver = {
.driver.name = "pci_epf_mhi",
.probe = pci_epf_mhi_probe,
.id_table = pci_epf_mhi_ids,
.ops = &pci_epf_mhi_ops,
.owner = THIS_MODULE,
};
static int __init pci_epf_mhi_init(void)
{
return pci_epf_register_driver(&pci_epf_mhi_driver);
}
module_init(pci_epf_mhi_init);
static void __exit pci_epf_mhi_exit(void)
{
pci_epf_unregister_driver(&pci_epf_mhi_driver);
}
module_exit(pci_epf_mhi_exit);
MODULE_DESCRIPTION("PCI EPF driver for MHI Endpoint devices");
MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
MODULE_LICENSE("GPL");

View File

@ -2075,11 +2075,13 @@ static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf,
/**
* epf_ntb_probe() - Probe NTB function driver
* @epf: NTB endpoint function device
* @id: NTB endpoint function device ID
*
* Probe NTB function driver when endpoint function bus detects a NTB
* endpoint function.
*/
static int epf_ntb_probe(struct pci_epf *epf)
static int epf_ntb_probe(struct pci_epf *epf,
const struct pci_epf_device_id *id)
{
struct epf_ntb *ntb;
struct device *dev;

View File

@ -54,6 +54,9 @@ struct pci_epf_test {
struct delayed_work cmd_handler;
struct dma_chan *dma_chan_tx;
struct dma_chan *dma_chan_rx;
struct dma_chan *transfer_chan;
dma_cookie_t transfer_cookie;
enum dma_status transfer_status;
struct completion transfer_complete;
bool dma_supported;
bool dma_private;
@ -85,8 +88,14 @@ static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };
static void pci_epf_test_dma_callback(void *param)
{
struct pci_epf_test *epf_test = param;
struct dma_tx_state state;
complete(&epf_test->transfer_complete);
epf_test->transfer_status =
dmaengine_tx_status(epf_test->transfer_chan,
epf_test->transfer_cookie, &state);
if (epf_test->transfer_status == DMA_COMPLETE ||
epf_test->transfer_status == DMA_ERROR)
complete(&epf_test->transfer_complete);
}
/**
@ -112,7 +121,7 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
size_t len, dma_addr_t dma_remote,
enum dma_transfer_direction dir)
{
struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ?
struct dma_chan *chan = (dir == DMA_MEM_TO_DEV) ?
epf_test->dma_chan_tx : epf_test->dma_chan_rx;
dma_addr_t dma_local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst;
enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
@ -120,7 +129,6 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
struct dma_async_tx_descriptor *tx;
struct dma_slave_config sconf = {};
struct device *dev = &epf->dev;
dma_cookie_t cookie;
int ret;
if (IS_ERR_OR_NULL(chan)) {
@ -151,26 +159,34 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
return -EIO;
}
reinit_completion(&epf_test->transfer_complete);
epf_test->transfer_chan = chan;
tx->callback = pci_epf_test_dma_callback;
tx->callback_param = epf_test;
cookie = tx->tx_submit(tx);
reinit_completion(&epf_test->transfer_complete);
epf_test->transfer_cookie = dmaengine_submit(tx);
ret = dma_submit_error(cookie);
ret = dma_submit_error(epf_test->transfer_cookie);
if (ret) {
dev_err(dev, "Failed to do DMA tx_submit %d\n", cookie);
return -EIO;
dev_err(dev, "Failed to do DMA tx_submit %d\n", ret);
goto terminate;
}
dma_async_issue_pending(chan);
ret = wait_for_completion_interruptible(&epf_test->transfer_complete);
if (ret < 0) {
dmaengine_terminate_sync(chan);
dev_err(dev, "DMA wait_for_completion_timeout\n");
return -ETIMEDOUT;
dev_err(dev, "DMA wait_for_completion interrupted\n");
goto terminate;
}
return 0;
if (epf_test->transfer_status == DMA_ERROR) {
dev_err(dev, "DMA transfer failed\n");
ret = -EIO;
}
terminate:
dmaengine_terminate_sync(chan);
return ret;
}
struct epf_dma_filter {
@ -279,40 +295,29 @@ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
return;
}
static void pci_epf_test_print_rate(const char *ops, u64 size,
static void pci_epf_test_print_rate(struct pci_epf_test *epf_test,
const char *op, u64 size,
struct timespec64 *start,
struct timespec64 *end, bool dma)
{
struct timespec64 ts;
u64 rate, ns;
ts = timespec64_sub(*end, *start);
/* convert both size (stored in 'rate') and time in terms of 'ns' */
ns = timespec64_to_ns(&ts);
rate = size * NSEC_PER_SEC;
/* Divide both size (stored in 'rate') and ns by a common factor */
while (ns > UINT_MAX) {
rate >>= 1;
ns >>= 1;
}
if (!ns)
return;
struct timespec64 ts = timespec64_sub(*end, *start);
u64 rate = 0, ns;
/* calculate the rate */
do_div(rate, (uint32_t)ns);
ns = timespec64_to_ns(&ts);
if (ns)
rate = div64_u64(size * NSEC_PER_SEC, ns * 1000);
pr_info("\n%s => Size: %llu bytes\t DMA: %s\t Time: %llu.%09u seconds\t"
"Rate: %llu KB/s\n", ops, size, dma ? "YES" : "NO",
(u64)ts.tv_sec, (u32)ts.tv_nsec, rate / 1024);
dev_info(&epf_test->epf->dev,
"%s => Size: %llu B, DMA: %s, Time: %llu.%09u s, Rate: %llu KB/s\n",
op, size, dma ? "YES" : "NO",
(u64)ts.tv_sec, (u32)ts.tv_nsec, rate);
}
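
The reworked rate computation drops the old shift-based scaling and its precision loss: div64_u64() divides the byte count scaled to nanoseconds by ns * 1000, which yields KB/s directly. Example with assumed numbers:

u64 size = SZ_1M;				/* 1048576-byte transfer */
u64 ns   = 2 * NSEC_PER_MSEC;			/* completed in 2 ms */
u64 rate = div64_u64(size * NSEC_PER_SEC, ns * 1000);	/* 524288 KB/s */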
static int pci_epf_test_copy(struct pci_epf_test *epf_test)
static void pci_epf_test_copy(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
int ret;
bool use_dma;
void __iomem *src_addr;
void __iomem *dst_addr;
phys_addr_t src_phys_addr;
@ -321,8 +326,6 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size);
if (!src_addr) {
@ -357,14 +360,7 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
}
ktime_get_ts64(&start);
use_dma = !!(reg->flags & FLAG_USE_DMA);
if (use_dma) {
if (!epf_test->dma_supported) {
dev_err(dev, "Cannot transfer data using DMA\n");
ret = -EINVAL;
goto err_map_addr;
}
if (reg->flags & FLAG_USE_DMA) {
if (epf_test->dma_private) {
dev_err(dev, "Cannot transfer data using DMA\n");
ret = -EINVAL;
@ -390,7 +386,8 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
kfree(buf);
}
ktime_get_ts64(&end);
pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start, &end,
reg->flags & FLAG_USE_DMA);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr);
@ -405,16 +402,19 @@ err_src_addr:
pci_epc_mem_free_addr(epc, src_phys_addr, src_addr, reg->size);
err:
return ret;
if (!ret)
reg->status |= STATUS_COPY_SUCCESS;
else
reg->status |= STATUS_COPY_FAIL;
}
static int pci_epf_test_read(struct pci_epf_test *epf_test)
static void pci_epf_test_read(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
int ret;
void __iomem *src_addr;
void *buf;
u32 crc32;
bool use_dma;
phys_addr_t phys_addr;
phys_addr_t dst_phys_addr;
struct timespec64 start, end;
@ -422,8 +422,6 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
struct device *dma_dev = epf->epc->dev.parent;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
if (!src_addr) {
@ -447,14 +445,7 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
goto err_map_addr;
}
use_dma = !!(reg->flags & FLAG_USE_DMA);
if (use_dma) {
if (!epf_test->dma_supported) {
dev_err(dev, "Cannot transfer data using DMA\n");
ret = -EINVAL;
goto err_dma_map;
}
if (reg->flags & FLAG_USE_DMA) {
dst_phys_addr = dma_map_single(dma_dev, buf, reg->size,
DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dst_phys_addr)) {
@ -479,7 +470,8 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
ktime_get_ts64(&end);
}
pci_epf_test_print_rate("READ", reg->size, &start, &end, use_dma);
pci_epf_test_print_rate(epf_test, "READ", reg->size, &start, &end,
reg->flags & FLAG_USE_DMA);
crc32 = crc32_le(~0, buf, reg->size);
if (crc32 != reg->checksum)
@ -495,15 +487,18 @@ err_addr:
pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size);
err:
return ret;
if (!ret)
reg->status |= STATUS_READ_SUCCESS;
else
reg->status |= STATUS_READ_FAIL;
}
static int pci_epf_test_write(struct pci_epf_test *epf_test)
static void pci_epf_test_write(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
int ret;
void __iomem *dst_addr;
void *buf;
bool use_dma;
phys_addr_t phys_addr;
phys_addr_t src_phys_addr;
struct timespec64 start, end;
@ -511,8 +506,6 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
struct device *dma_dev = epf->epc->dev.parent;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
if (!dst_addr) {
@ -539,14 +532,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
get_random_bytes(buf, reg->size);
reg->checksum = crc32_le(~0, buf, reg->size);
use_dma = !!(reg->flags & FLAG_USE_DMA);
if (use_dma) {
if (!epf_test->dma_supported) {
dev_err(dev, "Cannot transfer data using DMA\n");
ret = -EINVAL;
goto err_dma_map;
}
if (reg->flags & FLAG_USE_DMA) {
src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
DMA_TO_DEVICE);
if (dma_mapping_error(dma_dev, src_phys_addr)) {
@ -573,7 +559,8 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
ktime_get_ts64(&end);
}
pci_epf_test_print_rate("WRITE", reg->size, &start, &end, use_dma);
pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start, &end,
reg->flags & FLAG_USE_DMA);
/*
* wait 1ms inorder for the write to complete. Without this delay L3
@ -591,32 +578,51 @@ err_addr:
pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size);
err:
return ret;
if (!ret)
reg->status |= STATUS_WRITE_SUCCESS;
else
reg->status |= STATUS_WRITE_FAIL;
}
static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type,
u16 irq)
static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
u32 status = reg->status | STATUS_IRQ_RAISED;
int count;
reg->status |= STATUS_IRQ_RAISED;
/*
* Set the status before raising the IRQ to ensure that the host sees
* the updated value when it gets the IRQ.
*/
WRITE_ONCE(reg->status, status);
switch (irq_type) {
switch (reg->irq_type) {
case IRQ_TYPE_LEGACY:
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_LEGACY, 0);
break;
case IRQ_TYPE_MSI:
count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0) {
dev_err(dev, "Invalid MSI IRQ number %d / %d\n",
reg->irq_number, count);
return;
}
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSI, irq);
PCI_EPC_IRQ_MSI, reg->irq_number);
break;
case IRQ_TYPE_MSIX:
count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0) {
dev_err(dev, "Invalid MSIX IRQ number %d / %d\n",
reg->irq_number, count);
return;
}
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSIX, irq);
PCI_EPC_IRQ_MSIX, reg->irq_number);
break;
default:
dev_err(dev, "Failed to raise IRQ, unknown type\n");
@ -626,87 +632,53 @@ static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type,
static void pci_epf_test_cmd_handler(struct work_struct *work)
{
int ret;
int count;
u32 command;
struct pci_epf_test *epf_test = container_of(work, struct pci_epf_test,
cmd_handler.work);
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
enum pci_barno test_reg_bar = epf_test->test_reg_bar;
struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
command = reg->command;
command = READ_ONCE(reg->command);
if (!command)
goto reset_handler;
reg->command = 0;
reg->status = 0;
WRITE_ONCE(reg->command, 0);
WRITE_ONCE(reg->status, 0);
if ((READ_ONCE(reg->flags) & FLAG_USE_DMA) &&
!epf_test->dma_supported) {
dev_err(dev, "Cannot transfer data using DMA\n");
goto reset_handler;
}
if (reg->irq_type > IRQ_TYPE_MSIX) {
dev_err(dev, "Failed to detect IRQ type\n");
goto reset_handler;
}
if (command & COMMAND_RAISE_LEGACY_IRQ) {
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_LEGACY, 0);
goto reset_handler;
}
if (command & COMMAND_WRITE) {
ret = pci_epf_test_write(epf_test);
if (ret)
reg->status |= STATUS_WRITE_FAIL;
else
reg->status |= STATUS_WRITE_SUCCESS;
pci_epf_test_raise_irq(epf_test, reg->irq_type,
reg->irq_number);
goto reset_handler;
}
if (command & COMMAND_READ) {
ret = pci_epf_test_read(epf_test);
if (!ret)
reg->status |= STATUS_READ_SUCCESS;
else
reg->status |= STATUS_READ_FAIL;
pci_epf_test_raise_irq(epf_test, reg->irq_type,
reg->irq_number);
goto reset_handler;
}
if (command & COMMAND_COPY) {
ret = pci_epf_test_copy(epf_test);
if (!ret)
reg->status |= STATUS_COPY_SUCCESS;
else
reg->status |= STATUS_COPY_FAIL;
pci_epf_test_raise_irq(epf_test, reg->irq_type,
reg->irq_number);
goto reset_handler;
}
if (command & COMMAND_RAISE_MSI_IRQ) {
count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0)
goto reset_handler;
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSI, reg->irq_number);
goto reset_handler;
}
if (command & COMMAND_RAISE_MSIX_IRQ) {
count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
if (reg->irq_number > count || count <= 0)
goto reset_handler;
reg->status = STATUS_IRQ_RAISED;
pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
PCI_EPC_IRQ_MSIX, reg->irq_number);
goto reset_handler;
switch (command) {
case COMMAND_RAISE_LEGACY_IRQ:
case COMMAND_RAISE_MSI_IRQ:
case COMMAND_RAISE_MSIX_IRQ:
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_WRITE:
pci_epf_test_write(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_READ:
pci_epf_test_read(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
case COMMAND_COPY:
pci_epf_test_copy(epf_test, reg);
pci_epf_test_raise_irq(epf_test, reg);
break;
default:
dev_err(dev, "Invalid command 0x%x\n", command);
break;
}
reset_handler:
@ -980,7 +952,8 @@ static const struct pci_epf_device_id pci_epf_test_ids[] = {
{},
};
static int pci_epf_test_probe(struct pci_epf *epf)
static int pci_epf_test_probe(struct pci_epf *epf,
const struct pci_epf_device_id *id)
{
struct pci_epf_test *epf_test;
struct device *dev = &epf->dev;

View File

@ -84,15 +84,15 @@ enum epf_ntb_bar {
* | |
* | |
* | |
* +-----------------------+--------------------------+ Base+span_offset
* +-----------------------+--------------------------+ Base+spad_offset
* | | |
* | Peer Span Space | Span Space |
* | Peer Spad Space | Spad Space |
* | | |
* | | |
* +-----------------------+--------------------------+ Base+span_offset
* | | | +span_count * 4
* +-----------------------+--------------------------+ Base+spad_offset
* | | | +spad_count * 4
* | | |
* | Span Space | Peer Span Space |
* | Spad Space | Peer Spad Space |
* | | |
* +-----------------------+--------------------------+
* Virtual PCI PCIe Endpoint
@ -1395,13 +1395,15 @@ static struct pci_epf_ops epf_ntb_ops = {
/**
* epf_ntb_probe() - Probe NTB function driver
* @epf: NTB endpoint function device
* @id: NTB endpoint function device ID
*
* Probe NTB function driver when endpoint function bus detects a NTB
* endpoint function.
*
* Returns: Zero for success, or an error code in case of failure
*/
static int epf_ntb_probe(struct pci_epf *epf)
static int epf_ntb_probe(struct pci_epf *epf,
const struct pci_epf_device_id *id)
{
struct epf_ntb *ntb;
struct device *dev;

View File

@ -23,6 +23,7 @@ struct pci_epf_group {
struct config_group group;
struct config_group primary_epc_group;
struct config_group secondary_epc_group;
struct config_group *type_group;
struct delayed_work cfs_work;
struct pci_epf *epf;
int index;
@ -178,6 +179,9 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
if (kstrtobool(page, &start) < 0)
return -EINVAL;
if (start == epc_group->start)
return -EALREADY;
if (!start) {
pci_epc_stop(epc);
epc_group->start = 0;
@ -502,34 +506,65 @@ static struct configfs_item_operations pci_epf_ops = {
.release = pci_epf_release,
};
static struct config_group *pci_epf_type_make(struct config_group *group,
const char *name)
{
struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item);
struct config_group *epf_type_group;
epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group);
return epf_type_group;
}
static void pci_epf_type_drop(struct config_group *group,
struct config_item *item)
{
config_item_put(item);
}
static struct configfs_group_operations pci_epf_type_group_ops = {
.make_group = &pci_epf_type_make,
.drop_item = &pci_epf_type_drop,
};
static const struct config_item_type pci_epf_type = {
.ct_group_ops = &pci_epf_type_group_ops,
.ct_item_ops = &pci_epf_ops,
.ct_attrs = pci_epf_attrs,
.ct_owner = THIS_MODULE,
};
/**
* pci_epf_type_add_cfs() - Help function drivers to expose function specific
* attributes in configfs
* @epf: the EPF device that has to be configured using configfs
* @group: the parent configfs group (corresponding to entries in
* pci_epf_device_id)
*
* Invoke to expose function specific attributes in configfs.
*
* Return: A pointer to a config_group structure or NULL if the function driver
* does not have anything to expose (attributes configured by user) or if
* the function driver does not implement the add_cfs() method.
*
* Returns an error pointer if this function is called for an unbound EPF device
* or if the EPF driver add_cfs() method fails.
*/
static struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
struct config_group *group)
{
struct config_group *epf_type_group;
if (!epf->driver) {
dev_err(&epf->dev, "epf device not bound to driver\n");
return ERR_PTR(-ENODEV);
}
if (!epf->driver->ops->add_cfs)
return NULL;
mutex_lock(&epf->lock);
epf_type_group = epf->driver->ops->add_cfs(epf, group);
mutex_unlock(&epf->lock);
return epf_type_group;
}
static void pci_ep_cfs_add_type_group(struct pci_epf_group *epf_group)
{
struct config_group *group;
group = pci_epf_type_add_cfs(epf_group->epf, &epf_group->group);
if (!group)
return;
if (IS_ERR(group)) {
dev_err(&epf_group->epf->dev,
"failed to create epf type specific attributes\n");
return;
}
configfs_register_group(&epf_group->group, group);
}
static void pci_epf_cfs_work(struct work_struct *work)
{
struct pci_epf_group *epf_group;
@ -547,6 +582,8 @@ static void pci_epf_cfs_work(struct work_struct *work)
pr_err("failed to create 'secondary' EPC interface\n");
return;
}
pci_ep_cfs_add_type_group(epf_group);
}
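
With the configfs handling moved here, a function driver only implements the optional .add_cfs() method in its pci_epf_ops; the core then creates and registers the returned group automatically when the EPF directory is instantiated. A minimal sketch for a hypothetical driver:

static struct config_group *foo_epf_add_cfs(struct pci_epf *epf,
					    struct config_group *group)
{
	struct foo_epf *priv = epf_get_drvdata(epf);	/* hypothetical state */

	/* foo_attrs_type is a hypothetical config_item_type carrying the
	 * driver's configfs attributes */
	config_group_init_type_name(&priv->cfs_group, "foo_attrs",
				    &foo_attrs_type);
	return &priv->cfs_group;
}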
static struct config_group *pci_epf_make(struct config_group *group,

View File

@ -213,7 +213,7 @@ EXPORT_SYMBOL_GPL(pci_epc_start);
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @type: specify the type of interrupt; legacy, MSI or MSI-X
* @interrupt_num: the MSI or MSI-X interrupt number
* @interrupt_num: the MSI or MSI-X interrupt number with range (1-N)
*
* Invoke to raise an legacy, MSI or MSI-X interrupt
*/
@ -246,7 +246,7 @@ EXPORT_SYMBOL_GPL(pci_epc_raise_irq);
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @phys_addr: the physical address of the outbound region
* @interrupt_num: the MSI interrupt number
* @interrupt_num: the MSI interrupt number with range (1-N)
* @entry_size: Size of Outbound address region for each interrupt
* @msi_data: the data that should be written in order to raise MSI interrupt
* with interrupt number as 'interrupt num'
@ -706,6 +706,32 @@ void pci_epc_linkup(struct pci_epc *epc)
}
EXPORT_SYMBOL_GPL(pci_epc_linkup);
/**
* pci_epc_linkdown() - Notify the EPF device that EPC device has dropped the
* connection with the Root Complex.
* @epc: the EPC device which has dropped the link with the host
*
* Invoke to Notify the EPF device that the EPC device has dropped the
* connection with the Root Complex.
*/
void pci_epc_linkdown(struct pci_epc *epc)
{
struct pci_epf *epf;
if (!epc || IS_ERR(epc))
return;
mutex_lock(&epc->list_lock);
list_for_each_entry(epf, &epc->pci_epf, list) {
mutex_lock(&epf->lock);
if (epf->event_ops && epf->event_ops->link_down)
epf->event_ops->link_down(epf);
mutex_unlock(&epf->lock);
}
mutex_unlock(&epc->list_lock);
}
EXPORT_SYMBOL_GPL(pci_epc_linkdown);
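
pci_epc_linkdown() gives controller drivers a counterpart to pci_epc_linkup(): when the hardware reports that the link to the Root Complex dropped, the EPC driver forwards the event and each bound function driver gets its link_down() callback under epf->lock. A sketch of the expected call site, with a hypothetical EPC driver and a hypothetical status helper:

static irqreturn_t foo_ep_global_irq_thread(int irq, void *data)
{
	struct foo_ep *ep = data;		/* hypothetical driver state */

	if (foo_ep_linkdown_asserted(ep))	/* hypothetical IRQ-status check */
		pci_epc_linkdown(ep->epc);

	return IRQ_HANDLED;
}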
/**
* pci_epc_init_notify() - Notify the EPF device that EPC device's core
* initialization is completed.
@ -732,6 +758,32 @@ void pci_epc_init_notify(struct pci_epc *epc)
}
EXPORT_SYMBOL_GPL(pci_epc_init_notify);
/**
* pci_epc_bme_notify() - Notify the EPF device that the EPC device has received
* the BME event from the Root complex
* @epc: the EPC device that received the BME event
*
* Invoke to Notify the EPF device that the EPC device has received the Bus
* Master Enable (BME) event from the Root complex
*/
void pci_epc_bme_notify(struct pci_epc *epc)
{
struct pci_epf *epf;
if (!epc || IS_ERR(epc))
return;
mutex_lock(&epc->list_lock);
list_for_each_entry(epf, &epc->pci_epf, list) {
mutex_lock(&epf->lock);
if (epf->event_ops && epf->event_ops->bme)
epf->event_ops->bme(epf);
mutex_unlock(&epf->lock);
}
mutex_unlock(&epc->list_lock);
}
EXPORT_SYMBOL_GPL(pci_epc_bme_notify);
/**
* pci_epc_destroy() - destroy the EPC device
* @epc: the EPC device that has to be destroyed

View File

@ -20,38 +20,6 @@ static DEFINE_MUTEX(pci_epf_mutex);
static struct bus_type pci_epf_bus_type;
static const struct device_type pci_epf_type;
/**
* pci_epf_type_add_cfs() - Help function drivers to expose function specific
* attributes in configfs
* @epf: the EPF device that has to be configured using configfs
* @group: the parent configfs group (corresponding to entries in
* pci_epf_device_id)
*
* Invoke to expose function specific attributes in configfs. If the function
* driver does not have anything to expose (attributes configured by user),
* return NULL.
*/
struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
struct config_group *group)
{
struct config_group *epf_type_group;
if (!epf->driver) {
dev_err(&epf->dev, "epf device not bound to driver\n");
return NULL;
}
if (!epf->driver->ops->add_cfs)
return NULL;
mutex_lock(&epf->lock);
epf_type_group = epf->driver->ops->add_cfs(epf, group);
mutex_unlock(&epf->lock);
return epf_type_group;
}
EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
/**
* pci_epf_unbind() - Notify the function driver that the binding between the
* EPF device and EPC device has been lost
@ -493,16 +461,16 @@ static const struct device_type pci_epf_type = {
.release = pci_epf_dev_release,
};
static int
static const struct pci_epf_device_id *
pci_epf_match_id(const struct pci_epf_device_id *id, const struct pci_epf *epf)
{
while (id->name[0]) {
if (strcmp(epf->name, id->name) == 0)
return true;
return id;
id++;
}
return false;
return NULL;
}
static int pci_epf_device_match(struct device *dev, struct device_driver *drv)
@ -511,7 +479,7 @@ static int pci_epf_device_match(struct device *dev, struct device_driver *drv)
struct pci_epf_driver *driver = to_pci_epf_driver(drv);
if (driver->id_table)
return pci_epf_match_id(driver->id_table, epf);
return !!pci_epf_match_id(driver->id_table, epf);
return !strcmp(epf->name, drv->name);
}
@ -526,7 +494,7 @@ static int pci_epf_device_probe(struct device *dev)
epf->driver = driver;
return driver->probe(epf);
return driver->probe(epf, pci_epf_match_id(driver->id_table, epf));
}
static void pci_epf_device_remove(struct device *dev)

View File

@ -498,7 +498,6 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
acpiphp_native_scan_bridge(dev);
}
} else {
LIST_HEAD(add_list);
int max, pass;
acpiphp_rescan_slot(slot);
@ -512,12 +511,10 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
if (pass && dev->subordinate) {
check_hotplug_bridge(slot, dev);
pcibios_resource_survey_bus(dev->subordinate);
__pci_bus_size_bridges(dev->subordinate,
&add_list);
}
}
}
__pci_bus_assign_resources(bus, &add_list, NULL);
pci_assign_unassigned_bridge_resources(bus->self);
}
acpiphp_sanitize_bus(bus);

@@ -166,11 +166,11 @@ void pciehp_handle_button_press(struct controller *ctrl)
case ON_STATE:
if (ctrl->state == ON_STATE) {
ctrl->state = BLINKINGOFF_STATE;
ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n",
ctrl_info(ctrl, "Slot(%s): Button press: will power off in 5 sec\n",
slot_name(ctrl));
} else {
ctrl->state = BLINKINGON_STATE;
ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n",
ctrl_info(ctrl, "Slot(%s): Button press: will power on in 5 sec\n",
slot_name(ctrl));
}
/* blink power indicator and turn off attention */
@@ -185,22 +185,23 @@ void pciehp_handle_button_press(struct controller *ctrl)
* press the attention again before the 5 sec. limit
* expires to cancel hot-add or hot-remove
*/
ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl));
cancel_delayed_work(&ctrl->button_work);
if (ctrl->state == BLINKINGOFF_STATE) {
ctrl->state = ON_STATE;
pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
PCI_EXP_SLTCTL_ATTN_IND_OFF);
ctrl_info(ctrl, "Slot(%s): Button press: canceling request to power off\n",
slot_name(ctrl));
} else {
ctrl->state = OFF_STATE;
pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
PCI_EXP_SLTCTL_ATTN_IND_OFF);
ctrl_info(ctrl, "Slot(%s): Button press: canceling request to power on\n",
slot_name(ctrl));
}
ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n",
slot_name(ctrl));
break;
default:
ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n",
ctrl_err(ctrl, "Slot(%s): Button press: ignoring invalid state %#x\n",
slot_name(ctrl), ctrl->state);
break;
}
@@ -256,6 +257,14 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
present = pciehp_card_present(ctrl);
link_active = pciehp_check_link_active(ctrl);
if (present <= 0 && link_active <= 0) {
if (ctrl->state == BLINKINGON_STATE) {
ctrl->state = OFF_STATE;
cancel_delayed_work(&ctrl->button_work);
pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
INDICATOR_NOOP);
ctrl_info(ctrl, "Slot(%s): Card not present\n",
slot_name(ctrl));
}
mutex_unlock(&ctrl->state_lock);
return;
}

@@ -722,11 +722,8 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
}
/* Check Attention Button Pressed */
if (events & PCI_EXP_SLTSTA_ABP) {
ctrl_info(ctrl, "Slot(%s): Attention button pressed\n",
slot_name(ctrl));
if (events & PCI_EXP_SLTSTA_ABP)
pciehp_handle_button_press(ctrl);
}
/* Check Power Fault Detected */
if (events & PCI_EXP_SLTSTA_PFD) {
@@ -984,7 +981,7 @@ static inline int pcie_hotplug_depth(struct pci_dev *dev)
struct controller *pcie_init(struct pcie_device *dev)
{
struct controller *ctrl;
u32 slot_cap, slot_cap2, link_cap;
u32 slot_cap, slot_cap2;
u8 poweron;
struct pci_dev *pdev = dev->port;
struct pci_bus *subordinate = pdev->subordinate;
@@ -1030,9 +1027,6 @@ struct controller *pcie_init(struct pcie_device *dev)
if (dmi_first_match(inband_presence_disabled_dmi_table))
ctrl->inband_presence_disabled = 1;
/* Check if Data Link Layer Link Active Reporting is implemented */
pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
/* Clear all remaining event bits in Slot Status register. */
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
@@ -1051,7 +1045,7 @@ struct controller *pcie_init(struct pcie_device *dev)
FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
FLAG(slot_cap2, PCI_EXP_SLTCAP2_IBPD),
FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
FLAG(pdev->link_active_reporting, true),
pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
/*

@@ -39,16 +39,14 @@ int pci_set_of_node(struct pci_dev *dev)
return -ENODEV;
}
dev->dev.of_node = node;
dev->dev.fwnode = &node->fwnode;
device_set_node(&dev->dev, of_fwnode_handle(node));
return 0;
}
void pci_release_of_node(struct pci_dev *dev)
{
of_node_put(dev->dev.of_node);
dev->dev.of_node = NULL;
dev->dev.fwnode = NULL;
device_set_node(&dev->dev, NULL);
}
void pci_set_bus_of_node(struct pci_bus *bus)
@@ -63,17 +61,13 @@ void pci_set_bus_of_node(struct pci_bus *bus)
bus->self->external_facing = true;
}
bus->dev.of_node = node;
if (bus->dev.of_node)
bus->dev.fwnode = &bus->dev.of_node->fwnode;
device_set_node(&bus->dev, of_fwnode_handle(node));
}
void pci_release_bus_of_node(struct pci_bus *bus)
{
of_node_put(bus->dev.of_node);
bus->dev.of_node = NULL;
bus->dev.fwnode = NULL;
device_set_node(&bus->dev, NULL);
}
struct device_node * __weak pcibios_get_phb_of_node(struct pci_bus *bus)

@@ -1043,6 +1043,16 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
return false;
}
static void acpi_pci_config_space_access(struct pci_dev *dev, bool enable)
{
int val = enable ? ACPI_REG_CONNECT : ACPI_REG_DISCONNECT;
int ret = acpi_evaluate_reg(ACPI_HANDLE(&dev->dev),
ACPI_ADR_SPACE_PCI_CONFIG, val);
if (ret)
pci_dbg(dev, "ACPI _REG %s evaluation failed (%d)\n",
enable ? "connect" : "disconnect", ret);
}
int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
{
struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
@@ -1053,32 +1063,49 @@ int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
[PCI_D3hot] = ACPI_STATE_D3_HOT,
[PCI_D3cold] = ACPI_STATE_D3_COLD,
};
int error = -EINVAL;
int error;
/* If the ACPI device has _EJ0, ignore the device */
if (!adev || acpi_has_method(adev->handle, "_EJ0"))
return -ENODEV;
switch (state) {
case PCI_D3cold:
if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) ==
PM_QOS_FLAGS_ALL) {
error = -EBUSY;
break;
}
fallthrough;
case PCI_D0:
case PCI_D1:
case PCI_D2:
case PCI_D3hot:
error = acpi_device_set_power(adev, state_conv[state]);
case PCI_D3cold:
break;
default:
return -EINVAL;
}
if (!error)
pci_dbg(dev, "power state changed by ACPI to %s\n",
acpi_power_state_string(adev->power.state));
if (state == PCI_D3cold) {
if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) ==
PM_QOS_FLAGS_ALL)
return -EBUSY;
return error;
/* Notify AML lack of PCI config space availability */
acpi_pci_config_space_access(dev, false);
}
error = acpi_device_set_power(adev, state_conv[state]);
if (error)
return error;
pci_dbg(dev, "power state changed by ACPI to %s\n",
acpi_power_state_string(adev->power.state));
/*
* Notify AML of PCI config space availability. Config space is
* accessible in all states except D3cold; the only transitions
* that change availability are transitions to D3cold and from
* D3cold to D0.
*/
if (state == PCI_D0)
acpi_pci_config_space_access(dev, true);
return 0;
}
pci_power_t acpi_pci_get_power_state(struct pci_dev *dev)

@@ -64,6 +64,13 @@ struct pci_pme_device {
#define PME_TIMEOUT 1000 /* How long between PME checks */
/*
* Following exit from Conventional Reset, devices must be ready within 1 sec
* (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional
* Reset (PCIe r6.0 sec 5.8).
*/
#define PCI_RESET_WAIT 1000 /* msec */
/*
* Devices may extend the 1 sec period through Request Retry Status
* completions (PCIe r6.0 sec 2.3.1). The spec does not provide an upper
@@ -1156,7 +1163,14 @@ void pci_resume_bus(struct pci_bus *bus)
static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
{
int delay = 1;
u32 id;
bool retrain = false;
struct pci_dev *bridge;
if (pci_is_pcie(dev)) {
bridge = pci_upstream_bridge(dev);
if (bridge)
retrain = true;
}
/*
* After reset, the device should not silently discard config
@@ -1170,21 +1184,33 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
* Command register instead of Vendor ID so we don't have to
* contend with the CRS SV value.
*/
pci_read_config_dword(dev, PCI_COMMAND, &id);
while (PCI_POSSIBLE_ERROR(id)) {
for (;;) {
u32 id;
pci_read_config_dword(dev, PCI_COMMAND, &id);
if (!PCI_POSSIBLE_ERROR(id))
break;
if (delay > timeout) {
pci_warn(dev, "not ready %dms after %s; giving up\n",
delay - 1, reset_type);
return -ENOTTY;
}
if (delay > PCI_RESET_WAIT)
if (delay > PCI_RESET_WAIT) {
if (retrain) {
retrain = false;
if (pcie_failed_link_retrain(bridge)) {
delay = 1;
continue;
}
}
pci_info(dev, "not ready %dms after %s; waiting\n",
delay - 1, reset_type);
}
msleep(delay);
delay *= 2;
pci_read_config_dword(dev, PCI_COMMAND, &id);
}
if (delay > PCI_RESET_WAIT)
@@ -2949,13 +2975,13 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
{
/*
* Downstream device is not accessible after putting a root port
* into D3cold and back into D0 on Elo i2.
* into D3cold and back into D0 on Elo Continental Z2 board
*/
.ident = "Elo i2",
.ident = "Elo Continental Z2",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"),
DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"),
DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"),
DMI_MATCH(DMI_BOARD_VENDOR, "Elo Touch Solutions"),
DMI_MATCH(DMI_BOARD_NAME, "Geminilake"),
DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"),
},
},
#endif
@@ -4856,6 +4882,79 @@ static int pci_pm_reset(struct pci_dev *dev, bool probe)
return pci_dev_wait(dev, "PM D3hot->D0", PCIE_RESET_READY_POLL_MS);
}
/**
* pcie_wait_for_link_status - Wait for link status change
* @pdev: Device whose link to wait for.
* @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE.
* @active: Waiting for active or inactive?
*
* Return 0 if successful, or -ETIMEDOUT if status has not changed within
* PCIE_LINK_RETRAIN_TIMEOUT_MS milliseconds.
*/
static int pcie_wait_for_link_status(struct pci_dev *pdev,
bool use_lt, bool active)
{
u16 lnksta_mask, lnksta_match;
unsigned long end_jiffies;
u16 lnksta;
lnksta_mask = use_lt ? PCI_EXP_LNKSTA_LT : PCI_EXP_LNKSTA_DLLLA;
lnksta_match = active ? lnksta_mask : 0;
end_jiffies = jiffies + msecs_to_jiffies(PCIE_LINK_RETRAIN_TIMEOUT_MS);
do {
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
if ((lnksta & lnksta_mask) == lnksta_match)
return 0;
msleep(1);
} while (time_before(jiffies, end_jiffies));
return -ETIMEDOUT;
}
/**
* pcie_retrain_link - Request a link retrain and wait for it to complete
* @pdev: Device whose link to retrain.
* @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE, for status.
*
* Retrain completion status is retrieved from the Link Status Register
* according to @use_lt. It is not verified whether the use of the DLLLA
* bit is valid.
*
* Return 0 if successful, or -ETIMEDOUT if training has not completed
* within PCIE_LINK_RETRAIN_TIMEOUT_MS milliseconds.
*/
int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
{
int rc;
u16 lnkctl;
/*
* Ensure the updated LNKCTL parameters are used during link
* training by checking that there is no ongoing link training to
* avoid LTSSM race as recommended in Implementation Note at the
* end of PCIe r6.0.1 sec 7.5.3.7.
*/
rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt);
if (rc)
return rc;
pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl);
lnkctl |= PCI_EXP_LNKCTL_RL;
pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
if (pdev->clear_retrain_link) {
/*
* Due to an erratum in some devices the Retrain Link bit
* needs to be cleared again manually to allow the link
* training to succeed.
*/
lnkctl &= ~PCI_EXP_LNKCTL_RL;
pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
}
return pcie_wait_for_link_status(pdev, use_lt, !use_lt);
}
/**
* pcie_wait_for_link_delay - Wait until link is active or inactive
* @pdev: Bridge device
@@ -4867,16 +4966,14 @@ static int pci_pm_reset(struct pci_dev *dev, bool probe)
static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
int delay)
{
int timeout = 1000;
bool ret;
u16 lnk_status;
int rc;
/*
* Some controllers might not implement link active reporting. In this
* case, we wait for 1000 ms + any delay requested by the caller.
*/
if (!pdev->link_active_reporting) {
msleep(timeout + delay);
msleep(PCIE_LINK_RETRAIN_TIMEOUT_MS + delay);
return true;
}
@@ -4891,20 +4988,21 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
*/
if (active)
msleep(20);
for (;;) {
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
if (ret == active)
break;
if (timeout <= 0)
break;
msleep(10);
timeout -= 10;
}
if (active && ret)
msleep(delay);
rc = pcie_wait_for_link_status(pdev, false, active);
if (active) {
if (rc)
rc = pcie_failed_link_retrain(pdev);
if (rc)
return false;
return ret == active;
msleep(delay);
return true;
}
if (rc)
return false;
return true;
}
/**
@@ -5011,11 +5109,9 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
*
* However, 100 ms is the minimum and the PCIe spec says the
* software must allow at least 1s before it can determine that the
* device that did not respond is a broken device. There is
* evidence that 100 ms is not always enough, for example certain
* Titan Ridge xHCI controller does not always respond to
* configuration requests if we only wait for 100 ms (see
* https://bugzilla.kernel.org/show_bug.cgi?id=203885).
* device that did not respond is a broken device. Also a device can
* take longer than that to respond if it indicates so through Request
* Retry Status completions.
*
* Therefore we wait for 100 ms and check for the device presence
* until the timeout expires.
@@ -5024,16 +5120,36 @@ int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
return 0;
if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
u16 status;
pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
msleep(delay);
} else {
pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
delay);
if (!pcie_wait_for_link_delay(dev, true, delay)) {
/* Did not train, no need to wait any further */
pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
if (!pci_dev_wait(child, reset_type, PCI_RESET_WAIT - delay))
return 0;
/*
* If the port supports active link reporting we now check
* whether the link is active and if not bail out early with
* the assumption that the device is not present anymore.
*/
if (!dev->link_active_reporting)
return -ENOTTY;
}
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &status);
if (!(status & PCI_EXP_LNKSTA_DLLLA))
return -ENOTTY;
return pci_dev_wait(child, reset_type,
PCIE_RESET_READY_POLL_MS - PCI_RESET_WAIT);
}
pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
delay);
if (!pcie_wait_for_link_delay(dev, true, delay)) {
/* Did not train, no need to wait any further */
pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
return -ENOTTY;
}
return pci_dev_wait(child, reset_type,

@@ -11,6 +11,8 @@
#define PCI_VSEC_ID_INTEL_TBT 0x1234 /* Thunderbolt */
#define PCIE_LINK_RETRAIN_TIMEOUT_MS 1000
extern const unsigned char pcie_link_speed[];
extern bool pci_early_dump;
@@ -64,13 +66,6 @@ struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
#define PCI_PM_D3HOT_WAIT 10 /* msec */
#define PCI_PM_D3COLD_WAIT 100 /* msec */
/*
* Following exit from Conventional Reset, devices must be ready within 1 sec
* (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional
* Reset (PCIe r6.0 sec 5.8).
*/
#define PCI_RESET_WAIT 1000 /* msec */
void pci_update_current_state(struct pci_dev *dev, pci_power_t state);
void pci_refresh_power_state(struct pci_dev *dev);
int pci_power_up(struct pci_dev *dev);
@@ -541,6 +536,7 @@ void pci_acs_init(struct pci_dev *dev);
int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
int pci_dev_specific_enable_acs(struct pci_dev *dev);
int pci_dev_specific_disable_acs_redir(struct pci_dev *dev);
bool pcie_failed_link_retrain(struct pci_dev *dev);
#else
static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev,
u16 acs_flags)
@@ -555,6 +551,10 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
{
return -ENOTTY;
}
static inline bool pcie_failed_link_retrain(struct pci_dev *dev)
{
return false;
}
#endif
/* PCI error reporting and recovery */
@@ -563,6 +563,7 @@ pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
pci_ers_result_t (*reset_subordinates)(struct pci_dev *pdev));
bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
int pcie_retrain_link(struct pci_dev *pdev, bool use_lt);
#ifdef CONFIG_PCIEASPM
void pcie_aspm_init_link_state(struct pci_dev *pdev);
void pcie_aspm_exit_link_state(struct pci_dev *pdev);
@@ -686,6 +687,8 @@ extern const struct attribute_group aer_stats_attr_group;
void pci_aer_clear_fatal_status(struct pci_dev *dev);
int pci_aer_clear_status(struct pci_dev *dev);
int pci_aer_raw_clear_status(struct pci_dev *dev);
void pci_save_aer_state(struct pci_dev *dev);
void pci_restore_aer_state(struct pci_dev *dev);
#else
static inline void pci_no_aer(void) { }
static inline void pci_aer_init(struct pci_dev *d) { }
@@ -693,6 +696,8 @@ static inline void pci_aer_exit(struct pci_dev *d) { }
static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
static inline int pci_aer_clear_status(struct pci_dev *dev) { return -EINVAL; }
static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL; }
static inline void pci_save_aer_state(struct pci_dev *dev) { }
static inline void pci_restore_aer_state(struct pci_dev *dev) { }
#endif
#ifdef CONFIG_ACPI

@@ -90,8 +90,6 @@ static const char *policy_str[] = {
[POLICY_POWER_SUPERSAVE] = "powersupersave"
};
#define LINK_RETRAIN_TIMEOUT HZ
/*
* The L1 PM substate capability is only implemented in function 0 in a
* multi function device.
@@ -193,36 +191,6 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
link->clkpm_disable = blacklist ? 1 : 0;
}
static bool pcie_retrain_link(struct pcie_link_state *link)
{
struct pci_dev *parent = link->pdev;
unsigned long end_jiffies;
u16 reg16;
pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
reg16 |= PCI_EXP_LNKCTL_RL;
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
if (parent->clear_retrain_link) {
/*
* Due to an erratum in some devices the Retrain Link bit
* needs to be cleared again manually to allow the link
* training to succeed.
*/
reg16 &= ~PCI_EXP_LNKCTL_RL;
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
}
/* Wait for link training end. Break out after waiting for timeout */
end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
do {
pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
if (!(reg16 & PCI_EXP_LNKSTA_LT))
break;
msleep(1);
} while (time_before(jiffies, end_jiffies));
return !(reg16 & PCI_EXP_LNKSTA_LT);
}
/*
* pcie_aspm_configure_common_clock: check if the 2 ends of a link
* could use common clock. If they are, configure them to use the
@@ -289,15 +257,15 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
reg16 &= ~PCI_EXP_LNKCTL_CCC;
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
if (pcie_retrain_link(link))
return;
if (pcie_retrain_link(link->pdev, true)) {
/* Training failed. Restore common clock configurations */
pci_err(parent, "ASPM: Could not configure common clock\n");
list_for_each_entry(child, &linkbus->devices, bus_list)
pcie_capability_write_word(child, PCI_EXP_LNKCTL,
/* Training failed. Restore common clock configurations */
pci_err(parent, "ASPM: Could not configure common clock\n");
list_for_each_entry(child, &linkbus->devices, bus_list)
pcie_capability_write_word(child, PCI_EXP_LNKCTL,
child_reg[PCI_FUNC(child->devfn)]);
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
}
}
/* Convert L0s latency encoding to ns */
@@ -337,7 +305,7 @@ static u32 calc_l1_acceptable(u32 encoding)
}
/* Convert L1SS T_pwr encoding to usec */
static u32 calc_l1ss_pwron(struct pci_dev *pdev, u32 scale, u32 val)
static u32 calc_l12_pwron(struct pci_dev *pdev, u32 scale, u32 val)
{
switch (scale) {
case 0:
@@ -471,7 +439,7 @@ static void pci_clear_and_set_dword(struct pci_dev *pdev, int pos,
}
/* Calculate L1.2 PM substate timing parameters */
static void aspm_calc_l1ss_info(struct pcie_link_state *link,
static void aspm_calc_l12_info(struct pcie_link_state *link,
u32 parent_l1ss_cap, u32 child_l1ss_cap)
{
struct pci_dev *child = link->downstream, *parent = link->pdev;
@@ -481,9 +449,6 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
u32 pctl1, pctl2, cctl1, cctl2;
u32 pl1_2_enables, cl1_2_enables;
if (!(link->aspm_support & ASPM_STATE_L1_2_MASK))
return;
/* Choose the greater of the two Port Common_Mode_Restore_Times */
val1 = (parent_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
val2 = (child_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
@@ -495,13 +460,13 @@
val2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
scale2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;
if (calc_l1ss_pwron(parent, scale1, val1) >
calc_l1ss_pwron(child, scale2, val2)) {
if (calc_l12_pwron(parent, scale1, val1) >
calc_l12_pwron(child, scale2, val2)) {
ctl2 |= scale1 | (val1 << 3);
t_power_on = calc_l1ss_pwron(parent, scale1, val1);
t_power_on = calc_l12_pwron(parent, scale1, val1);
} else {
ctl2 |= scale2 | (val2 << 3);
t_power_on = calc_l1ss_pwron(child, scale2, val2);
t_power_on = calc_l12_pwron(child, scale2, val2);
}
/*
@@ -616,8 +581,8 @@ static void aspm_l1ss_init(struct pcie_link_state *link)
if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2)
link->aspm_enabled |= ASPM_STATE_L1_2_PCIPM;
if (link->aspm_support & ASPM_STATE_L1SS)
aspm_calc_l1ss_info(link, parent_l1ss_cap, child_l1ss_cap);
if (link->aspm_support & ASPM_STATE_L1_2_MASK)
aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap);
}
static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
@@ -1010,21 +975,24 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
down_read(&pci_bus_sem);
mutex_lock(&aspm_lock);
/*
* All PCIe functions are in one slot, remove one function will remove
* the whole slot, so just wait until we are the last function left.
*/
if (!list_empty(&parent->subordinate->devices))
goto out;
link = parent->link_state;
root = link->root;
parent_link = link->parent;
/* All functions are removed, so just disable ASPM for the link */
/*
* link->downstream is a pointer to the pci_dev of function 0. If
* we remove that function, the pci_dev is about to be deallocated,
* so we can't use link->downstream again. Free the link state to
* avoid this.
*
* If we're removing a non-0 function, it's possible we could
* retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
* programming the same ASPM Control value for all functions of
* multi-function devices, so disable ASPM for all of them.
*/
pcie_config_aspm_link(link, 0);
list_del(&link->sibling);
/* Clock PM is for endpoint device */
free_link_state(link);
/* Recheck latencies and configure upstream links */
@@ -1032,7 +1000,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
pcie_update_aspm_capable(root);
pcie_config_aspm_path(parent_link);
}
out:
mutex_unlock(&aspm_lock);
up_read(&pci_bus_sem);
}
@@ -1095,8 +1063,7 @@ static int __pci_disable_link_state(struct pci_dev *pdev, int state, bool sem)
if (state & PCIE_LINK_STATE_L0S)
link->aspm_disable |= ASPM_STATE_L0S;
if (state & PCIE_LINK_STATE_L1)
/* L1 PM substates require L1 */
link->aspm_disable |= ASPM_STATE_L1 | ASPM_STATE_L1SS;
link->aspm_disable |= ASPM_STATE_L1;
if (state & PCIE_LINK_STATE_L1_1)
link->aspm_disable |= ASPM_STATE_L1_1;
if (state & PCIE_LINK_STATE_L1_2)
@@ -1171,16 +1138,16 @@ int pci_enable_link_state(struct pci_dev *pdev, int state)
if (state & PCIE_LINK_STATE_L0S)
link->aspm_default |= ASPM_STATE_L0S;
if (state & PCIE_LINK_STATE_L1)
/* L1 PM substates require L1 */
link->aspm_default |= ASPM_STATE_L1 | ASPM_STATE_L1SS;
link->aspm_default |= ASPM_STATE_L1;
/* L1 PM substates require L1 */
if (state & PCIE_LINK_STATE_L1_1)
link->aspm_default |= ASPM_STATE_L1_1;
link->aspm_default |= ASPM_STATE_L1_1 | ASPM_STATE_L1;
if (state & PCIE_LINK_STATE_L1_2)
link->aspm_default |= ASPM_STATE_L1_2;
link->aspm_default |= ASPM_STATE_L1_2 | ASPM_STATE_L1;
if (state & PCIE_LINK_STATE_L1_1_PCIPM)
link->aspm_default |= ASPM_STATE_L1_1_PCIPM;
link->aspm_default |= ASPM_STATE_L1_1_PCIPM | ASPM_STATE_L1;
if (state & PCIE_LINK_STATE_L1_2_PCIPM)
link->aspm_default |= ASPM_STATE_L1_2_PCIPM;
link->aspm_default |= ASPM_STATE_L1_2_PCIPM | ASPM_STATE_L1;
pcie_config_aspm_link(link, policy_to_aspm_state(link));
link->clkpm_default = (state & PCIE_LINK_STATE_CLKPM) ? 1 : 0;
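
The practical effect of the tightened interface, as a hedged usage sketch
(the foo_enable_l1ss() caller and pdev are hypothetical; the flags and
pci_enable_link_state() are real):

static int foo_enable_l1ss(struct pci_dev *pdev)	/* hypothetical caller */
{
	/* a driver that wants L1 plus the ASPM L1 substates must name them;
	 * the substate flags imply L1, but L1 alone implies no substates */
	return pci_enable_link_state(pdev, PCIE_LINK_STATE_L1 |
					   PCIE_LINK_STATE_L1_1 |
					   PCIE_LINK_STATE_L1_2);
}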

@@ -820,7 +820,6 @@ static void pci_set_bus_speed(struct pci_bus *bus)
pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap);
bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS];
bridge->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC);
pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
pcie_update_link_speed(bus, linksta);
@@ -997,8 +996,10 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
resource_list_for_each_entry_safe(window, n, &resources) {
offset = window->offset;
res = window->res;
if (!res->flags && !res->start && !res->end)
if (!res->flags && !res->start && !res->end) {
release_resource(res);
continue;
}
list_move_tail(&window->node, &bridge->windows);
@@ -1527,6 +1528,7 @@ void set_pcie_port_type(struct pci_dev *pdev)
{
int pos;
u16 reg16;
u32 reg32;
int type;
struct pci_dev *parent;
@@ -1540,6 +1542,10 @@ void set_pcie_port_type(struct pci_dev *pdev)
pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &reg32);
if (reg32 & PCI_EXP_LNKCAP_DLLLARC)
pdev->link_active_reporting = 1;
parent = pci_upstream_bridge(pdev);
if (!parent)
return;
@@ -2546,6 +2552,8 @@ void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
dma_set_max_seg_size(&dev->dev, 65536);
dma_set_seg_boundary(&dev->dev, 0xffffffff);
pcie_failed_link_retrain(dev);
/* Fix up broken headers */
pci_fixup_device(pci_fixup_header, dev);

@@ -33,6 +33,99 @@
#include <linux/switchtec.h>
#include "pci.h"
/*
* Retrain the link of a downstream PCIe port by hand if necessary.
*
* This is needed at least where a downstream port of the ASMedia ASM2824
* Gen 3 switch is wired to the upstream port of the Pericom PI7C9X2G304
* Gen 2 switch, and observed with the Delock Riser Card PCI Express x1 >
* 2 x PCIe x1 device, P/N 41433, plugged into the SiFive HiFive Unmatched
* board.
*
* In such a configuration the switches are supposed to negotiate the link
* speed of preferably 5.0GT/s, falling back to 2.5GT/s. However the link
* continues switching between the two speeds indefinitely and the data
* link layer never reaches the active state, with link training reported
* repeatedly active ~84% of the time. Forcing the target link speed to
* 2.5GT/s with the upstream ASM2824 device makes the two switches talk to
* each other correctly however. And more interestingly retraining with a
* higher target link speed afterwards lets the two successfully negotiate
* 5.0GT/s.
*
* With the ASM2824 we can rely on the otherwise optional Data Link Layer
* Link Active status bit and in the failed link training scenario it will
* be off along with the Link Bandwidth Management Status indicating that
* hardware has changed the link speed or width in an attempt to correct
* unreliable link operation. For a port that has been left unconnected
* both bits will be clear. So use this information to detect the problem
* rather than polling the Link Training bit and watching out for flips or
* at least the active status.
*
* Since the exact nature of the problem isn't known and in principle this
* could trigger where an ASM2824 device is downstream rather than upstream,
* apply this erratum workaround to any downstream ports as long as they
* support Link Active reporting and have the Link Control 2 register.
* Restrict the speed to 2.5GT/s then with the Target Link Speed field,
* request a retrain and wait 200ms for the data link to go up.
*
* If this turns out successful and we know by the Vendor:Device ID it is
* safe to do so, then lift the restriction, letting the devices negotiate
* a higher speed. Also check for a similar 2.5GT/s speed restriction the
* firmware may have already arranged and lift it with ports that already
* report their data link being up.
*
* Return TRUE if the link has been successfully retrained, otherwise FALSE.
*/
bool pcie_failed_link_retrain(struct pci_dev *dev)
{
static const struct pci_device_id ids[] = {
{ PCI_VDEVICE(ASMEDIA, 0x2824) }, /* ASMedia ASM2824 */
{}
};
u16 lnksta, lnkctl2;
if (!pci_is_pcie(dev) || !pcie_downstream_port(dev) ||
!pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting)
return false;
pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2);
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
if ((lnksta & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_DLLLA)) ==
PCI_EXP_LNKSTA_LBMS) {
pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n");
lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
lnkctl2 |= PCI_EXP_LNKCTL2_TLS_2_5GT;
pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
if (pcie_retrain_link(dev, false)) {
pci_info(dev, "retraining failed\n");
return false;
}
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
}
if ((lnksta & PCI_EXP_LNKSTA_DLLLA) &&
(lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT &&
pci_match_id(ids, dev)) {
u32 lnkcap;
pci_info(dev, "removing 2.5GT/s downstream link speed restriction\n");
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
lnkctl2 |= lnkcap & PCI_EXP_LNKCAP_SLS;
pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2);
if (pcie_retrain_link(dev, false)) {
pci_info(dev, "retraining failed\n");
return false;
}
}
return true;
}
static ktime_t fixup_debug_start(struct pci_dev *dev,
void (*fn)(struct pci_dev *dev))
{
@@ -2420,9 +2513,9 @@ static void quirk_enable_clear_retrain_link(struct pci_dev *dev)
dev->clear_retrain_link = 1;
pci_info(dev, "Enable PCIe Retrain Link quirk\n");
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe110, quirk_enable_clear_retrain_link);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe111, quirk_enable_clear_retrain_link);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe130, quirk_enable_clear_retrain_link);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe110, quirk_enable_clear_retrain_link);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe111, quirk_enable_clear_retrain_link);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe130, quirk_enable_clear_retrain_link);
static void fixup_rev1_53c810(struct pci_dev *dev)
{
@@ -3993,10 +4086,11 @@ static int nvme_disable_and_flr(struct pci_dev *dev, bool probe)
}
/*
* Intel DC P3700 NVMe controller will timeout waiting for ready status
* to change after NVMe enable if the driver starts interacting with the
* device too soon after FLR. A 250ms delay after FLR has heuristically
* proven to produce reliably working results for device assignment cases.
* Some NVMe controllers such as Intel DC P3700 and Solidigm P44 Pro will
* timeout waiting for ready status to change after NVMe enable if the driver
* starts interacting with the device too soon after FLR. A 250ms delay after
* FLR has heuristically proven to produce reliably working results for device
* assignment cases.
*/
static int delay_250ms_after_flr(struct pci_dev *dev, bool probe)
{
@@ -4083,6 +4177,7 @@ static const struct pci_dev_reset_methods pci_dev_reset_methods[] = {
{ PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr },
{ PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr },
{ PCI_VENDOR_ID_INTEL, 0x0a54, delay_250ms_after_flr },
{ PCI_VENDOR_ID_SOLIDIGM, 0xf1ac, delay_250ms_after_flr },
{ PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
reset_chelsio_generic_dev },
{ PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HINIC_VF,
@@ -4174,6 +4269,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220,
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230,
quirk_dma_func1_alias);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235,
quirk_dma_func1_alias);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642,
quirk_dma_func1_alias);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0645,

@@ -45,8 +45,6 @@ struct aer_capability_regs {
int pci_enable_pcie_error_reporting(struct pci_dev *dev);
int pci_disable_pcie_error_reporting(struct pci_dev *dev);
int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
void pci_save_aer_state(struct pci_dev *dev);
void pci_restore_aer_state(struct pci_dev *dev);
#else
static inline int pci_enable_pcie_error_reporting(struct pci_dev *dev)
{
@@ -60,8 +58,6 @@ static inline int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
{
return -EINVAL;
}
static inline void pci_save_aer_state(struct pci_dev *dev) {}
static inline void pci_restore_aer_state(struct pci_dev *dev) {}
#endif
void cper_print_aer(struct pci_dev *dev, int aer_severity,

@@ -203,7 +203,9 @@ void pci_epc_destroy(struct pci_epc *epc);
int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf,
enum pci_epc_interface_type type);
void pci_epc_linkup(struct pci_epc *epc);
void pci_epc_linkdown(struct pci_epc *epc);
void pci_epc_init_notify(struct pci_epc *epc);
void pci_epc_bme_notify(struct pci_epc *epc);
void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
enum pci_epc_interface_type type);
int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,

@@ -71,10 +71,14 @@ struct pci_epf_ops {
* struct pci_epf_event_ops - Callbacks for capturing the EPC events
* @core_init: Callback for the EPC initialization complete event
* @link_up: Callback for the EPC link up event
* @link_down: Callback for the EPC link down event
* @bme: Callback for the EPC BME (Bus Master Enable) event
*/
struct pci_epc_event_ops {
int (*core_init)(struct pci_epf *epf);
int (*link_up)(struct pci_epf *epf);
int (*link_down)(struct pci_epf *epf);
int (*bme)(struct pci_epf *epf);
};
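
On the consumer side, a hedged sketch (the foo_epf_* handlers and their
bodies are illustrative, not from this series): an EPF driver fills in the
new callbacks and points epf->event_ops at the table, typically in probe.

static int foo_epf_link_down(struct pci_epf *epf)
{
	/* hypothetical reaction: quiesce DMA until the link trains again */
	return 0;
}

static int foo_epf_bme(struct pci_epf *epf)
{
	/* hypothetical reaction: the host enabled bus mastering, start DMA */
	return 0;
}

static const struct pci_epc_event_ops foo_epf_event_ops = {
	.link_down	= foo_epf_link_down,
	.bme		= foo_epf_bme,
};

/* in foo_epf_probe(): epf->event_ops = &foo_epf_event_ops; */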
/**
@@ -89,7 +93,8 @@ struct pci_epc_event_ops {
* @id_table: identifies EPF devices for probing
*/
struct pci_epf_driver {
int (*probe)(struct pci_epf *epf);
int (*probe)(struct pci_epf *epf,
const struct pci_epf_device_id *id);
void (*remove)(struct pci_epf *epf);
struct device_driver driver;
@@ -131,6 +136,7 @@ struct pci_epf_bar {
* @epc: the EPC device to which this EPF device is bound
* @epf_pf: the physical EPF device to which this virtual EPF device is bound
* @driver: the EPF driver to which this EPF device is bound
* @id: Pointer to the EPF device ID
* @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
* @lock: mutex to protect pci_epf_ops
* @sec_epc: the secondary EPC device to which this EPF device is bound
@@ -158,6 +164,7 @@ struct pci_epf {
struct pci_epc *epc;
struct pci_epf *epf_pf;
struct pci_epf_driver *driver;
const struct pci_epf_device_id *id;
struct list_head list;
/* mutex to protect against concurrent access of pci_epf_ops */
struct mutex lock;
@@ -214,8 +221,6 @@ void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar,
enum pci_epc_interface_type type);
int pci_epf_bind(struct pci_epf *epf);
void pci_epf_unbind(struct pci_epf *epf);
struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
struct config_group *group);
int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf);
#endif /* __LINUX_PCI_EPF_H */

@@ -1903,6 +1903,7 @@ static inline int pci_dev_present(const struct pci_device_id *ids)
#define pci_dev_put(dev) do { } while (0)
static inline void pci_set_master(struct pci_dev *dev) { }
static inline void pci_clear_master(struct pci_dev *dev) { }
static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
static inline void pci_disable_device(struct pci_dev *dev) { }
static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }

@@ -2,7 +2,7 @@
/*
* PCI Class, Vendor and Device IDs
*
* Please keep sorted.
* Please keep sorted by numeric Vendor ID and Device ID.
*
* Do not add new entries to this file unless the definitions
* are shared between multiple drivers.
@@ -164,6 +164,8 @@
#define PCI_DEVICE_ID_LOONGSON_HDA 0x7a07
#define PCI_DEVICE_ID_LOONGSON_HDMI 0x7a37
#define PCI_VENDOR_ID_SOLIDIGM 0x025e
#define PCI_VENDOR_ID_TTTECH 0x0357
#define PCI_DEVICE_ID_TTTECH_MC322 0x000a

@@ -738,6 +738,7 @@
#define PCI_EXT_CAP_ID_DVSEC 0x23 /* Designated Vendor-Specific */
#define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */
#define PCI_EXT_CAP_ID_PL_16GT 0x26 /* Physical Layer 16.0 GT/s */
#define PCI_EXT_CAP_ID_PL_32GT 0x2A /* Physical Layer 32.0 GT/s */
#define PCI_EXT_CAP_ID_DOE 0x2E /* Data Object Exchange */
#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_DOE