pci-v4.10-changes
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAABAgAGBQJYUt1vAAoJEFmIoMA60/r8abgP/3R+5Lsk5/kfAHk5/2Mtqbvg
mZ0eDUpY9GbUeMjSq84Nr2H8u7d+1AJCCu8KtDJYZCmjZpnSp2SuE2PS5JoGC7zC
fintD24jlIF4/J5+HeVXXmbfr3xATxvpTuiSLEi8sLBRJ3KRIswhMSwoPwOyeTQw
v/EclWKPGYcI5Zp0oigY9/Jd3q3lQ17KXppi/0dDoLh7PNOFvEHItXWzmf++u/NP
iYT9R1xmzEsy0/HRd6hiwPT2xA8YsAXxgobhHooUgh1FWmZ02Tg1WjgDemOW4lVh
kNIUcsLczh7wZCceogrrJ+pwb9+NyyIyKuHPv6OG3ieyz1IZdznaj1fAE5HJYiPo
eVS7cP1S6DyV3Y5qFj5F2dSRS7T4GXdXG5mNhmeCpUHs0vfzSCG36jLmhTy8UIxs
1rCf5oFa+uU9q0okfH8VtcGOXqWjGgyxTSGGfF71HUMLnPbsci2fxC2cO6svzIX7
wDY0uxOzpyMIYMuQR6iz7VqvAwEaZ+7pfMIrWWdDcQ9/5tCNJ49cLuKaThPL4bVu
juiGBQtnTLg8tjrhjDL9tQiJpuVIweVXyyQ1fvZoVXkMLlhVCF2ttirvwFUit2PB
84OlevQZ+9QdE/qalrWbv4qzhesuiwu0avkzjGoqg6tWTF0epu2AHI2vqy6UBYEG
tcfJPEcz1019PKZNSvWy
=ut0k
-----END PGP SIGNATURE-----

Merge tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "PCI changes:

  - add support for PCI on ARM64 boxes with ACPI. We already had this
    for theoretical spec-compliant hardware; now we're adding quirks for
    the actual hardware (Cavium, HiSilicon, Qualcomm, X-Gene)

  - add runtime PM support for hotplug ports

  - enable runtime suspend for Intel UHCI that uses platform-specific
    wakeup signaling

  - add yet another host bridge registration interface. We hope this is
    extensible enough to subsume the others

  - expose device revision in sysfs for DRM

  - to avoid device conflicts, make sure any VF BAR updates are done
    before enabling the VF

  - avoid unnecessary link retrains for ASPM

  - allow INTx masking on Mellanox devices that support it

  - allow access to non-standard VPD for Chelsio devices

  - update Broadcom iProc support for PAXB v2, PAXC v2, inbound DMA, etc

  - update Rockchip support for max-link-speed

  - add NVIDIA Tegra210 support

  - add Layerscape LS1046a support

  - update R-Car compatibility strings

  - add Qualcomm MSM8996 support

  - remove some uninformative bootup messages"

* tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (115 commits)
  PCI: Enable access to non-standard VPD for Chelsio devices (cxgb3)
  PCI: Expand "VPD access disabled" quirk message
  PCI: pciehp: Remove loading message
  PCI: hotplug: Remove hotplug core message
  PCI: Remove service driver load/unload messages
  PCI/AER: Log AER IRQ when claiming Root Port
  PCI/AER: Log errors with PCI device, not PCIe service device
  PCI/AER: Remove unused version macros
  PCI/PME: Log PME IRQ when claiming Root Port
  PCI/PME: Drop unused support for PMEs from Root Complex Event Collectors
  PCI: Move config space size macros to pci_regs.h
  x86/platform/intel-mid: Constify mid_pci_platform_pm
  PCI/ASPM: Don't retrain link if ASPM not possible
  PCI: iproc: Skip check for legacy IRQ on PAXC buses
  PCI: pciehp: Leave power indicator on when enabling already-enabled slot
  PCI: pciehp: Prioritize data-link event over presence detect
  PCI: rcar: Add gen3 fallback compatibility string for pcie-rcar
  PCI: rcar: Use gen2 fallback compatibility last
  PCI: rcar-gen2: Use gen2 fallback compatibility last
  PCI: rockchip: Move the deassert of pm/aclk/pclk after phy_init()
  ...
This commit is contained in:
commit 0ab7b12c49
@@ -294,3 +294,10 @@ Description:
		a firmware bug to the system vendor. Writing to this file
		taints the kernel with TAINT_FIRMWARE_WORKAROUND, which
		reduces the supportability of your system.

What:		/sys/bus/pci/devices/.../revision
Date:		November 2016
Contact:	Emil Velikov <emil.l.velikov@gmail.com>
Description:
		This file contains the revision field of the PCI device.
		The value comes from device config space. The file is read only.
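As an illustration of the new "revision" attribute documented above (not part of the patch), a minimal userspace reader might look like the sketch below; the device address 0000:01:00.0 is a placeholder.

/* Sketch only: read the sysfs "revision" attribute from userspace. */
#include <stdio.h>

int main(void)
{
	/* 0000:01:00.0 is a placeholder device address */
	const char *path = "/sys/bus/pci/devices/0000:01:00.0/revision";
	FILE *f = fopen(path, "r");
	unsigned int rev;

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%x", &rev) == 1)
		printf("PCI revision: 0x%02x\n", rev);
	fclose(f);
	return 0;
}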
@@ -1,10 +1,17 @@
* Broadcom iProc PCIe controller with the platform bus interface

Required properties:
- compatible: Must be "brcm,iproc-pcie" for PAXB, or "brcm,iproc-pcie-paxc"
  for PAXC. PAXB-based root complex is used for external endpoint devices.
  PAXC-based root complex is connected to emulated endpoint devices
  internal to the ASIC
- compatible:
    "brcm,iproc-pcie" for the first generation of PAXB based controller,
  used in SoCs including NSP, Cygnus, NS2, and Pegasus
    "brcm,iproc-pcie-paxb-v2" for the second generation of PAXB-based
  controllers, used in Stingray
    "brcm,iproc-pcie-paxc" for the first generation of PAXC based
  controller, used in NS2
    "brcm,iproc-pcie-paxc-v2" for the second generation of PAXC based
  controller, used in Stingray
  PAXB-based root complex is used for external endpoint devices. PAXC-based
  root complex is connected to emulated endpoint devices internal to the ASIC
- reg: base address and length of the PCIe controller I/O register space
- #interrupt-cells: set to <1>
- interrupt-map-mask and interrupt-map, standard PCI properties to define the
@@ -19,6 +26,10 @@ Required properties:
Optional properties:
- phys: phandle of the PCIe PHY device
- phy-names: must be "pcie-phy"
- dma-coherent: present if DMA operations are coherent
- dma-ranges: Some PAXB-based root complexes do not have inbound mapping done
  by the ASIC after power on reset. In this case, SW is required to configure
  the mapping, based on inbound memory regions specified by this property.

- brcm,pcie-ob: Some iProc SoCs do not have the outbound address mapping done
  by the ASIC after power on reset. In this case, SW needs to configure it
@@ -29,11 +40,6 @@ effective:
Required:
- brcm,pcie-ob-axi-offset: The offset from the AXI address to the internal
  address used by the iProc PCIe core (not the PCIe address)
- brcm,pcie-ob-window-size: The outbound address mapping window size (in MB)

Optional:
- brcm,pcie-ob-oarr-size: Some iProc SoCs need the OARR size bit to be set to
  increase the outbound window size

MSI support (optional):

@@ -41,10 +47,19 @@ For older platforms without MSI integrated in the GIC, iProc PCIe core provides
an event queue based MSI support. The iProc MSI uses host memories to store
MSI posted writes in the event queues

- msi-parent: Link to the device node of the MSI controller. On newer iProc
  platforms, the MSI controller may be gicv2m or gicv3-its. On older iProc
  platforms without MSI support in its interrupt controller, one may use the
  event queue based MSI support integrated within the iProc PCIe core.
On newer iProc platforms, gicv2m or gicv3-its based MSI support should be used

- msi-map: Maps a Requester ID to an MSI controller and associated MSI
  sideband data

- msi-parent: Link to the device node of the MSI controller, used when no MSI
  sideband data is passed between the iProc PCIe controller and the MSI
  controller

Refer to the following binding documents for more detailed description on
the use of 'msi-map' and 'msi-parent':
  Documentation/devicetree/bindings/pci/pci-msi.txt
  Documentation/devicetree/bindings/interrupt-controller/msi.txt

When the iProc event queue based MSI is used, one needs to define the
following properties in the MSI device node:
@@ -80,9 +95,7 @@ Example:
	phy-names = "pcie-phy";

	brcm,pcie-ob;
	brcm,pcie-ob-oarr-size;
	brcm,pcie-ob-axi-offset = <0x00000000>;
	brcm,pcie-ob-window-size = <256>;

	msi-parent = <&msi0>;
@@ -15,6 +15,7 @@ Required properties:
- compatible: should contain the platform identifier such as:
        "fsl,ls1021a-pcie", "snps,dw-pcie"
        "fsl,ls2080a-pcie", "fsl,ls2085a-pcie", "snps,dw-pcie"
        "fsl,ls1046a-pcie"
- reg: base addresses and lengths of the PCIe controller
- interrupts: A list of interrupt outputs of the controller. Must contain an
  entry for each entry in the interrupt-names property.
@@ -110,6 +110,20 @@ Power supplies for Tegra124:
- avdd-pll-erefe-supply: Power supply for PLLE (shared with USB3). Must
  supply 1.05 V.

Power supplies for Tegra210:
- Required:
  - avdd-pll-uerefe-supply: Power supply for PLLE (shared with USB3). Must
    supply 1.05 V.
  - hvddio-pex-supply: High-voltage supply for PCIe I/O and PCIe output
    clocks. Must supply 1.8 V.
  - dvddio-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
  - dvdd-pex-pll-supply: Power supply for dedicated (internal) PCIe PLL. Must
    supply 1.05 V.
  - hvdd-pex-pll-e-supply: High-voltage supply for PLLE (shared with USB3).
    Must supply 3.3 V.
  - vddio-pex-ctl-supply: Power supply for PCIe control I/O partition. Must
    supply 1.8 V.

Root ports are defined as subnodes of the PCIe controller node.

Required properties:
@@ -436,3 +450,99 @@ Board DTS:
		status = "okay";
	};
};

Tegra210:
---------

SoC DTSI:

	pcie-controller@01003000 {
		compatible = "nvidia,tegra210-pcie";
		device_type = "pci";
		reg = <0x0 0x01003000 0x0 0x00000800   /* PADS registers */
		       0x0 0x01003800 0x0 0x00000800   /* AFI registers */
		       0x0 0x02000000 0x0 0x10000000>; /* configuration space */
		reg-names = "pads", "afi", "cs";
		interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
			     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
		interrupt-names = "intr", "msi";

		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0>;
		interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;

		bus-range = <0x00 0xff>;
		#address-cells = <3>;
		#size-cells = <2>;

		ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000   /* port 0 configuration space */
			  0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000   /* port 1 configuration space */
			  0x81000000 0 0x0        0x0 0x12000000 0 0x00010000   /* downstream I/O (64 KiB) */
			  0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000   /* non-prefetchable memory (208 MiB) */
			  0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */

		clocks = <&tegra_car TEGRA210_CLK_PCIE>,
			 <&tegra_car TEGRA210_CLK_AFI>,
			 <&tegra_car TEGRA210_CLK_PLL_E>,
			 <&tegra_car TEGRA210_CLK_CML0>;
		clock-names = "pex", "afi", "pll_e", "cml";
		resets = <&tegra_car 70>,
			 <&tegra_car 72>,
			 <&tegra_car 74>;
		reset-names = "pex", "afi", "pcie_x";
		status = "disabled";

		pci@1,0 {
			device_type = "pci";
			assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
			reg = <0x000800 0 0 0 0>;
			status = "disabled";

			#address-cells = <3>;
			#size-cells = <2>;
			ranges;

			nvidia,num-lanes = <4>;
		};

		pci@2,0 {
			device_type = "pci";
			assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
			reg = <0x001000 0 0 0 0>;
			status = "disabled";

			#address-cells = <3>;
			#size-cells = <2>;
			ranges;

			nvidia,num-lanes = <1>;
		};
	};

Board DTS:

	pcie-controller@01003000 {
		status = "okay";

		avdd-pll-uerefe-supply = <&avdd_1v05_pll>;
		hvddio-pex-supply = <&vdd_1v8>;
		dvddio-pex-supply = <&vdd_pex_1v05>;
		dvdd-pex-pll-supply = <&vdd_pex_1v05>;
		hvdd-pex-pll-e-supply = <&vdd_1v8>;
		vddio-pex-ctl-supply = <&vdd_1v8>;

		pci@1,0 {
			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-0}>,
			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-1}>,
			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>,
			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-3}>;
			phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
			status = "okay";
		};

		pci@2,0 {
			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
			phy-names = "pcie-0";
			status = "okay";
		};
	};
@@ -18,3 +18,9 @@ driver implementation may support the following properties:
  host bridges in the system, otherwise potentially conflicting domain numbers
  may be assigned to root buses behind different host bridges. The domain
  number for each host bridge in the system must be unique.
- max-link-speed:
   If present, this property specifies the supported PCIe link generation.
   Host drivers can use it to avoid unnecessary operations at unsupported
   link speeds, for instance attempting link training at a speed the
   hardware cannot reach. Must be '4' for gen4, '3' for gen3, '2' for
   gen2, and '1' for gen1. Any other value is invalid.
@@ -7,6 +7,7 @@
			- "qcom,pcie-ipq8064" for ipq8064
			- "qcom,pcie-apq8064" for apq8064
			- "qcom,pcie-apq8084" for apq8084
			- "qcom,pcie-msm8996" for msm8996 or apq8096

- reg:
	Usage: required
@@ -92,6 +93,17 @@
			- "aux"		Auxiliary (AUX) clock
			- "bus_master"	Master AXI clock
			- "bus_slave"	Slave AXI clock

- clock-names:
	Usage: required for msm8996/apq8096
	Value type: <stringlist>
	Definition: Should contain the following entries
			- "pipe"	Pipe Clock driving internal logic
			- "aux"		Auxiliary (AUX) clock
			- "cfg"		Configuration clock
			- "bus_master"	Master AXI clock
			- "bus_slave"	Slave AXI clock

- resets:
	Usage: required
	Value type: <prop-encoded-array>
@@ -115,7 +127,7 @@
			- "core" Core reset

- power-domains:
	Usage: required for apq8084
	Usage: required for apq8084 and msm8996/apq8096
	Value type: <prop-encoded-array>
	Definition: A phandle and power domain specifier pair to the
		    power domain which is responsible for collapsing
@@ -7,6 +7,7 @@ compatible: "renesas,pcie-r8a7779" for the R8A7779 SoC;
	    "renesas,pcie-r8a7793" for the R8A7793 SoC;
	    "renesas,pcie-r8a7795" for the R8A7795 SoC;
	    "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 compatible device.
	    "renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.

When compatible with the generic version, nodes must list the
SoC-specific version corresponding to the platform first
@@ -17,6 +17,7 @@ that support it. For example, a given bus might look like this:
       |   |-- resource0
       |   |-- resource1
       |   |-- resource2
       |   |-- revision
       |   |-- rom
       |   |-- subsystem_device
       |   |-- subsystem_vendor
@@ -41,6 +42,7 @@ files, each with their own function.
       resource		   PCI resource host addresses (ascii, ro)
       resource0..N	   PCI resource N, if present (binary, mmap, rw[1])
       resource0_wc..N_wc  PCI WC map resource N, if prefetchable (binary, mmap)
       revision		   PCI revision (ascii, ro)
       rom		   PCI ROM resource, if present (binary, ro)
       subsystem_device	   PCI subsystem device (ascii, ro)
       subsystem_vendor	   PCI subsystem vendor (ascii, ro)
@@ -7,6 +7,32 @@
	model = "NVIDIA Jetson TX1 Developer Kit";
	compatible = "nvidia,p2371-2180", "nvidia,tegra210";

	pcie-controller@01003000 {
		status = "okay";

		avdd-pll-uerefe-supply = <&avdd_1v05_pll>;
		hvddio-pex-supply = <&vdd_1v8>;
		dvddio-pex-supply = <&vdd_pex_1v05>;
		dvdd-pex-pll-supply = <&vdd_pex_1v05>;
		hvdd-pex-pll-e-supply = <&vdd_1v8>;
		vddio-pex-ctl-supply = <&vdd_1v8>;

		pci@1,0 {
			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-0}>,
			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-1}>,
			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>,
			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-3}>;
			phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
			status = "okay";
		};

		pci@2,0 {
			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
			phy-names = "pcie-0";
			status = "okay";
		};
	};

	host1x@50000000 {
		dsi@54300000 {
			status = "okay";
@@ -11,6 +11,69 @@
	#address-cells = <2>;
	#size-cells = <2>;

	pcie-controller@01003000 {
		compatible = "nvidia,tegra210-pcie";
		device_type = "pci";
		reg = <0x0 0x01003000 0x0 0x00000800   /* PADS registers */
		       0x0 0x01003800 0x0 0x00000800   /* AFI registers */
		       0x0 0x02000000 0x0 0x10000000>; /* configuration space */
		reg-names = "pads", "afi", "cs";
		interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
			     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
		interrupt-names = "intr", "msi";

		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0>;
		interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;

		bus-range = <0x00 0xff>;
		#address-cells = <3>;
		#size-cells = <2>;

		ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000   /* port 0 configuration space */
			  0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000   /* port 1 configuration space */
			  0x81000000 0 0x0        0x0 0x12000000 0 0x00010000   /* downstream I/O (64 KiB) */
			  0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000   /* non-prefetchable memory (208 MiB) */
			  0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */

		clocks = <&tegra_car TEGRA210_CLK_PCIE>,
			 <&tegra_car TEGRA210_CLK_AFI>,
			 <&tegra_car TEGRA210_CLK_PLL_E>,
			 <&tegra_car TEGRA210_CLK_CML0>;
		clock-names = "pex", "afi", "pll_e", "cml";
		resets = <&tegra_car 70>,
			 <&tegra_car 72>,
			 <&tegra_car 74>;
		reset-names = "pex", "afi", "pcie_x";
		status = "disabled";

		pci@1,0 {
			device_type = "pci";
			assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
			reg = <0x000800 0 0 0 0>;
			status = "disabled";

			#address-cells = <3>;
			#size-cells = <2>;
			ranges;

			nvidia,num-lanes = <4>;
		};

		pci@2,0 {
			device_type = "pci";
			assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
			reg = <0x001000 0 0 0 0>;
			status = "disabled";

			#address-cells = <3>;
			#size-cells = <2>;
			ranges;

			nvidia,num-lanes = <1>;
		};
	};

	host1x@50000000 {
		compatible = "nvidia,tegra210-host1x", "simple-bus";
		reg = <0x0 0x50000000 0x0 0x00034000>;
@@ -114,6 +114,19 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
	return 0;
}

static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci)
{
	struct resource_entry *entry, *tmp;
	int status;

	status = acpi_pci_probe_root_resources(ci);
	resource_list_for_each_entry_safe(entry, tmp, &ci->resources) {
		if (!(entry->res->flags & IORESOURCE_WINDOW))
			resource_list_destroy_entry(entry);
	}
	return status;
}

/*
 * Lookup the bus range for the domain in MCFG, and set up config space
 * mapping.
@@ -121,31 +134,33 @@ int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
static struct pci_config_window *
pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
{
	struct device *dev = &root->device->dev;
	struct resource *bus_res = &root->secondary;
	u16 seg = root->segment;
	struct pci_config_window *cfg;
	struct pci_ecam_ops *ecam_ops;
	struct resource cfgres;
	unsigned int bsz;
	struct acpi_device *adev;
	struct pci_config_window *cfg;
	int ret;

	/* Use address from _CBA if present, otherwise lookup MCFG */
	if (!root->mcfg_addr)
		root->mcfg_addr = pci_mcfg_lookup(seg, bus_res);

	if (!root->mcfg_addr) {
		dev_err(&root->device->dev, "%04x:%pR ECAM region not found\n",
			seg, bus_res);
	ret = pci_mcfg_lookup(root, &cfgres, &ecam_ops);
	if (ret) {
		dev_err(dev, "%04x:%pR ECAM region not found\n", seg, bus_res);
		return NULL;
	}

	bsz = 1 << pci_generic_ecam_ops.bus_shift;
	cfgres.start = root->mcfg_addr + bus_res->start * bsz;
	cfgres.end = cfgres.start + resource_size(bus_res) * bsz - 1;
	cfgres.flags = IORESOURCE_MEM;
	cfg = pci_ecam_create(&root->device->dev, &cfgres, bus_res,
			      &pci_generic_ecam_ops);
	adev = acpi_resource_consumer(&cfgres);
	if (adev)
		dev_info(dev, "ECAM area %pR reserved by %s\n", &cfgres,
			 dev_name(&adev->dev));
	else
		dev_warn(dev, FW_BUG "ECAM area %pR not reserved in ACPI namespace\n",
			 &cfgres);

	cfg = pci_ecam_create(dev, &cfgres, bus_res, ecam_ops);
	if (IS_ERR(cfg)) {
		dev_err(&root->device->dev, "%04x:%pR error %ld mapping ECAM\n",
			seg, bus_res, PTR_ERR(cfg));
		dev_err(dev, "%04x:%pR error %ld mapping ECAM\n", seg, bus_res,
			PTR_ERR(cfg));
		return NULL;
	}

@@ -159,33 +174,37 @@ static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci)

	ri = container_of(ci, struct acpi_pci_generic_root_info, common);
	pci_ecam_free(ri->cfg);
	kfree(ci->ops);
	kfree(ri);
}

static struct acpi_pci_root_ops acpi_pci_root_ops = {
	.release_info = pci_acpi_generic_release_info,
};

/* Interface called from ACPI code to setup PCI host controller */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
	int node = acpi_get_node(root->device->handle);
	struct acpi_pci_generic_root_info *ri;
	struct pci_bus *bus, *child;
	struct acpi_pci_root_ops *root_ops;

	ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
	if (!ri)
		return NULL;

	root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node);
	if (!root_ops)
		return NULL;

	ri->cfg = pci_acpi_setup_ecam_mapping(root);
	if (!ri->cfg) {
		kfree(ri);
		kfree(root_ops);
		return NULL;
	}

	acpi_pci_root_ops.pci_ops = &ri->cfg->ops->pci_ops;
	bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &ri->common,
				   ri->cfg);
	root_ops->release_info = pci_acpi_generic_release_info;
	root_ops->prepare_resources = pci_acpi_root_prepare_resources;
	root_ops->pci_ops = &ri->cfg->ops->pci_ops;
	bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
	if (!bus)
		return NULL;

@@ -22,6 +22,7 @@
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>

/* Structure to hold entries from the MCFG table */
struct mcfg_entry {
@@ -32,12 +33,166 @@ struct mcfg_entry {
	u8 bus_end;
};

#ifdef CONFIG_PCI_QUIRKS
struct mcfg_fixup {
	char oem_id[ACPI_OEM_ID_SIZE + 1];
	char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1];
	u32 oem_revision;
	u16 segment;
	struct resource bus_range;
	struct pci_ecam_ops *ops;
	struct resource cfgres;
};

#define MCFG_BUS_RANGE(start, end)	DEFINE_RES_NAMED((start),	\
						((end) - (start) + 1),	\
						NULL, IORESOURCE_BUS)
#define MCFG_BUS_ANY			MCFG_BUS_RANGE(0x0, 0xff)

static struct mcfg_fixup mcfg_quirks[] = {
/*	{ OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */

#define QCOM_ECAM32(seg) \
	{ "QCOM  ", "QDF2432 ", 1, seg, MCFG_BUS_ANY, &pci_32b_ops }
	QCOM_ECAM32(0),
	QCOM_ECAM32(1),
	QCOM_ECAM32(2),
	QCOM_ECAM32(3),
	QCOM_ECAM32(4),
	QCOM_ECAM32(5),
	QCOM_ECAM32(6),
	QCOM_ECAM32(7),

#define HISI_QUAD_DOM(table_id, seg, ops) \
	{ "HISI  ", table_id, 0, (seg) + 0, MCFG_BUS_ANY, ops }, \
	{ "HISI  ", table_id, 0, (seg) + 1, MCFG_BUS_ANY, ops }, \
	{ "HISI  ", table_id, 0, (seg) + 2, MCFG_BUS_ANY, ops }, \
	{ "HISI  ", table_id, 0, (seg) + 3, MCFG_BUS_ANY, ops }
	HISI_QUAD_DOM("HIP05   ",  0, &hisi_pcie_ops),
	HISI_QUAD_DOM("HIP06   ",  0, &hisi_pcie_ops),
	HISI_QUAD_DOM("HIP07   ",  0, &hisi_pcie_ops),
	HISI_QUAD_DOM("HIP07   ",  4, &hisi_pcie_ops),
	HISI_QUAD_DOM("HIP07   ",  8, &hisi_pcie_ops),
	HISI_QUAD_DOM("HIP07   ", 12, &hisi_pcie_ops),

#define THUNDER_PEM_RES(addr, node) \
	DEFINE_RES_MEM((addr) + ((u64) (node) << 44), 0x39 * SZ_16M)
#define THUNDER_PEM_QUIRK(rev, node) \
	{ "CAVIUM", "THUNDERX", rev, 4 + (10 * (node)), MCFG_BUS_ANY,	    \
	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88001f000000UL, node) }, \
	{ "CAVIUM", "THUNDERX", rev, 5 + (10 * (node)), MCFG_BUS_ANY,	    \
	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x884057000000UL, node) }, \
	{ "CAVIUM", "THUNDERX", rev, 6 + (10 * (node)), MCFG_BUS_ANY,	    \
	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88808f000000UL, node) }, \
	{ "CAVIUM", "THUNDERX", rev, 7 + (10 * (node)), MCFG_BUS_ANY,	    \
	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89001f000000UL, node) }, \
	{ "CAVIUM", "THUNDERX", rev, 8 + (10 * (node)), MCFG_BUS_ANY,	    \
	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x894057000000UL, node) }, \
	{ "CAVIUM", "THUNDERX", rev, 9 + (10 * (node)), MCFG_BUS_ANY,	    \
	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89808f000000UL, node) }
	/* SoC pass2.x */
	THUNDER_PEM_QUIRK(1, 0),
	THUNDER_PEM_QUIRK(1, 1),

#define THUNDER_ECAM_QUIRK(rev, seg) \
	{ "CAVIUM", "THUNDERX", rev, seg, MCFG_BUS_ANY, \
	  &pci_thunder_ecam_ops }
	/* SoC pass1.x */
	THUNDER_PEM_QUIRK(2, 0),	/* off-chip devices */
	THUNDER_PEM_QUIRK(2, 1),	/* off-chip devices */
	THUNDER_ECAM_QUIRK(2,  0),
	THUNDER_ECAM_QUIRK(2,  1),
	THUNDER_ECAM_QUIRK(2,  2),
	THUNDER_ECAM_QUIRK(2,  3),
	THUNDER_ECAM_QUIRK(2, 10),
	THUNDER_ECAM_QUIRK(2, 11),
	THUNDER_ECAM_QUIRK(2, 12),
	THUNDER_ECAM_QUIRK(2, 13),

#define XGENE_V1_ECAM_MCFG(rev, seg) \
	{"APM   ", "XGENE   ", rev, seg, MCFG_BUS_ANY, \
		&xgene_v1_pcie_ecam_ops }
#define XGENE_V2_ECAM_MCFG(rev, seg) \
	{"APM   ", "XGENE   ", rev, seg, MCFG_BUS_ANY, \
		&xgene_v2_pcie_ecam_ops }
	/* X-Gene SoC with v1 PCIe controller */
	XGENE_V1_ECAM_MCFG(1, 0),
	XGENE_V1_ECAM_MCFG(1, 1),
	XGENE_V1_ECAM_MCFG(1, 2),
	XGENE_V1_ECAM_MCFG(1, 3),
	XGENE_V1_ECAM_MCFG(1, 4),
	XGENE_V1_ECAM_MCFG(2, 0),
	XGENE_V1_ECAM_MCFG(2, 1),
	XGENE_V1_ECAM_MCFG(2, 2),
	XGENE_V1_ECAM_MCFG(2, 3),
	XGENE_V1_ECAM_MCFG(2, 4),
	/* X-Gene SoC with v2.1 PCIe controller */
	XGENE_V2_ECAM_MCFG(3, 0),
	XGENE_V2_ECAM_MCFG(3, 1),
	/* X-Gene SoC with v2.2 PCIe controller */
	XGENE_V2_ECAM_MCFG(4, 0),
	XGENE_V2_ECAM_MCFG(4, 1),
	XGENE_V2_ECAM_MCFG(4, 2),
};

static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
static char mcfg_oem_table_id[ACPI_OEM_TABLE_ID_SIZE];
static u32 mcfg_oem_revision;

static int pci_mcfg_quirk_matches(struct mcfg_fixup *f, u16 segment,
				  struct resource *bus_range)
{
	if (!memcmp(f->oem_id, mcfg_oem_id, ACPI_OEM_ID_SIZE) &&
	    !memcmp(f->oem_table_id, mcfg_oem_table_id,
		    ACPI_OEM_TABLE_ID_SIZE) &&
	    f->oem_revision == mcfg_oem_revision &&
	    f->segment == segment &&
	    resource_contains(&f->bus_range, bus_range))
		return 1;

	return 0;
}
#endif

static void pci_mcfg_apply_quirks(struct acpi_pci_root *root,
				  struct resource *cfgres,
				  struct pci_ecam_ops **ecam_ops)
{
#ifdef CONFIG_PCI_QUIRKS
	u16 segment = root->segment;
	struct resource *bus_range = &root->secondary;
	struct mcfg_fixup *f;
	int i;

	for (i = 0, f = mcfg_quirks; i < ARRAY_SIZE(mcfg_quirks); i++, f++) {
		if (pci_mcfg_quirk_matches(f, segment, bus_range)) {
			if (f->cfgres.start)
				*cfgres = f->cfgres;
			if (f->ops)
				*ecam_ops = f->ops;
			dev_info(&root->device->dev, "MCFG quirk: ECAM at %pR for %pR with %ps\n",
				 cfgres, bus_range, *ecam_ops);
			return;
		}
	}
#endif
}

/* List to save MCFG entries */
static LIST_HEAD(pci_mcfg_list);

phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res)
int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
		    struct pci_ecam_ops **ecam_ops)
{
	struct pci_ecam_ops *ops = &pci_generic_ecam_ops;
	struct resource *bus_res = &root->secondary;
	u16 seg = root->segment;
	struct mcfg_entry *e;
	struct resource res;

	/* Use address from _CBA if present, otherwise lookup MCFG */
	if (root->mcfg_addr)
		goto skip_lookup;

	/*
	 * We expect exact match, unless MCFG entry end bus covers more than
@@ -45,10 +200,32 @@ phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res)
	 */
	list_for_each_entry(e, &pci_mcfg_list, list) {
		if (e->segment == seg && e->bus_start == bus_res->start &&
		    e->bus_end >= bus_res->end)
			return e->addr;
		    e->bus_end >= bus_res->end) {
			root->mcfg_addr = e->addr;
		}

	}

skip_lookup:
	memset(&res, 0, sizeof(res));
	if (root->mcfg_addr) {
		res.start = root->mcfg_addr + (bus_res->start << 20);
		res.end = res.start + (resource_size(bus_res) << 20) - 1;
		res.flags = IORESOURCE_MEM;
	}

	/*
	 * Allow quirks to override default ECAM ops and CFG resource
	 * range.  This may even fabricate a CFG resource range in case
	 * MCFG does not have it.  Invalid CFG start address means MCFG
	 * firmware bug or we need another quirk in array.
	 */
	pci_mcfg_apply_quirks(root, &res, &ops);
	if (!res.start)
		return -ENXIO;

	*cfgres = res;
	*ecam_ops = ops;
	return 0;
}

@@ -79,6 +256,13 @@ static __init int pci_mcfg_parse(struct acpi_table_header *header)
		list_add(&e->list, &pci_mcfg_list);
	}

#ifdef CONFIG_PCI_QUIRKS
	/* Save MCFG IDs and revision for quirks matching */
	memcpy(mcfg_oem_id, header->oem_id, ACPI_OEM_ID_SIZE);
	memcpy(mcfg_oem_table_id, header->oem_table_id, ACPI_OEM_TABLE_ID_SIZE);
	mcfg_oem_revision = header->oem_revision;
#endif

	pr_info("MCFG table detected, %d entries\n", n);
	return 0;
}
@@ -664,3 +664,60 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
	return (type & types) ? 0 : 1;
}
EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);

static int acpi_dev_consumes_res(struct acpi_device *adev, struct resource *res)
{
	struct list_head resource_list;
	struct resource_entry *rentry;
	int ret, found = 0;

	INIT_LIST_HEAD(&resource_list);
	ret = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
	if (ret < 0)
		return 0;

	list_for_each_entry(rentry, &resource_list, node) {
		if (resource_contains(rentry->res, res)) {
			found = 1;
			break;
		}

	}

	acpi_dev_free_resource_list(&resource_list);
	return found;
}

static acpi_status acpi_res_consumer_cb(acpi_handle handle, u32 depth,
					 void *context, void **ret)
{
	struct resource *res = context;
	struct acpi_device **consumer = (struct acpi_device **) ret;
	struct acpi_device *adev;

	if (acpi_bus_get_device(handle, &adev))
		return AE_OK;

	if (acpi_dev_consumes_res(adev, res)) {
		*consumer = adev;
		return AE_CTRL_TERMINATE;
	}

	return AE_OK;
}

/**
 * acpi_resource_consumer - Find the ACPI device that consumes @res.
 * @res: Resource to search for.
 *
 * Search the current resource settings (_CRS) of every ACPI device node
 * for @res.  If we find an ACPI device whose _CRS includes @res, return
 * it.  Otherwise, return NULL.
 */
struct acpi_device *acpi_resource_consumer(struct resource *res)
{
	struct acpi_device *consumer = NULL;

	acpi_get_devices(NULL, acpi_res_consumer_cb, res, (void **) &consumer);
	return consumer;
}
@@ -4020,49 +4020,51 @@ int mlx4_restart_one(struct pci_dev *pdev)
	return err;
}

#define MLX_SP(id) { PCI_VDEVICE(MELLANOX, id), MLX4_PCI_DEV_FORCE_SENSE_PORT }
#define MLX_VF(id) { PCI_VDEVICE(MELLANOX, id), MLX4_PCI_DEV_IS_VF }
#define MLX_GN(id) { PCI_VDEVICE(MELLANOX, id), 0 }

static const struct pci_device_id mlx4_pci_table[] = {
	/* MT25408 "Hermon" SDR */
	{ PCI_VDEVICE(MELLANOX, 0x6340), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25408 "Hermon" DDR */
	{ PCI_VDEVICE(MELLANOX, 0x634a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25408 "Hermon" QDR */
	{ PCI_VDEVICE(MELLANOX, 0x6354), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25408 "Hermon" DDR PCIe gen2 */
	{ PCI_VDEVICE(MELLANOX, 0x6732), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25408 "Hermon" QDR PCIe gen2 */
	{ PCI_VDEVICE(MELLANOX, 0x673c), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25408 "Hermon" EN 10GigE */
	{ PCI_VDEVICE(MELLANOX, 0x6368), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25408 "Hermon" EN 10GigE PCIe gen2 */
	{ PCI_VDEVICE(MELLANOX, 0x6750), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25458 ConnectX EN 10GBASE-T 10GigE */
	{ PCI_VDEVICE(MELLANOX, 0x6372), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25458 ConnectX EN 10GBASE-T+Gen2 10GigE */
	{ PCI_VDEVICE(MELLANOX, 0x675a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT26468 ConnectX EN 10GigE PCIe gen2*/
	{ PCI_VDEVICE(MELLANOX, 0x6764), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
	{ PCI_VDEVICE(MELLANOX, 0x6746), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT26478 ConnectX2 40GigE PCIe gen2 */
	{ PCI_VDEVICE(MELLANOX, 0x676e), MLX4_PCI_DEV_FORCE_SENSE_PORT },
	/* MT25400 Family [ConnectX-2 Virtual Function] */
	{ PCI_VDEVICE(MELLANOX, 0x1002), MLX4_PCI_DEV_IS_VF },
	/* MT25408 "Hermon" */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_SDR),	/* SDR */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_DDR),	/* DDR */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_QDR),	/* QDR */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2),	/* DDR Gen2 */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2),	/* QDR Gen2 */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_EN),	/* EN 10GigE */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2),	/* EN 10GigE Gen2 */
	/* MT25458 ConnectX EN 10GBASE-T */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN),
	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2),	/* Gen2 */
	/* MT26468 ConnectX EN 10GigE PCIe Gen2*/
	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2),
	/* MT26438 ConnectX EN 40GigE PCIe Gen2 5GT/s */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2),
	/* MT26478 ConnectX2 40GigE PCIe Gen2 */
	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX2),
	/* MT25400 Family [ConnectX-2] */
	MLX_VF(0x1002),					/* Virtual Function */
	/* MT27500 Family [ConnectX-3] */
	{ PCI_VDEVICE(MELLANOX, 0x1003), 0 },
	/* MT27500 Family [ConnectX-3 Virtual Function] */
	{ PCI_VDEVICE(MELLANOX, 0x1004), MLX4_PCI_DEV_IS_VF },
	{ PCI_VDEVICE(MELLANOX, 0x1005), 0 }, /* MT27510 Family */
	{ PCI_VDEVICE(MELLANOX, 0x1006), 0 }, /* MT27511 Family */
	{ PCI_VDEVICE(MELLANOX, 0x1007), 0 }, /* MT27520 Family */
	{ PCI_VDEVICE(MELLANOX, 0x1008), 0 }, /* MT27521 Family */
	{ PCI_VDEVICE(MELLANOX, 0x1009), 0 }, /* MT27530 Family */
	{ PCI_VDEVICE(MELLANOX, 0x100a), 0 }, /* MT27531 Family */
	{ PCI_VDEVICE(MELLANOX, 0x100b), 0 }, /* MT27540 Family */
	{ PCI_VDEVICE(MELLANOX, 0x100c), 0 }, /* MT27541 Family */
	{ PCI_VDEVICE(MELLANOX, 0x100d), 0 }, /* MT27550 Family */
	{ PCI_VDEVICE(MELLANOX, 0x100e), 0 }, /* MT27551 Family */
	{ PCI_VDEVICE(MELLANOX, 0x100f), 0 }, /* MT27560 Family */
	{ PCI_VDEVICE(MELLANOX, 0x1010), 0 }, /* MT27561 Family */
	MLX_GN(PCI_DEVICE_ID_MELLANOX_CONNECTX3),
	MLX_VF(0x1004),					/* Virtual Function */
	MLX_GN(0x1005),					/* MT27510 Family */
	MLX_GN(0x1006),					/* MT27511 Family */
	MLX_GN(PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO),	/* MT27520 Family */
	MLX_GN(0x1008),					/* MT27521 Family */
	MLX_GN(0x1009),					/* MT27530 Family */
	MLX_GN(0x100a),					/* MT27531 Family */
	MLX_GN(0x100b),					/* MT27540 Family */
	MLX_GN(0x100c),					/* MT27541 Family */
	MLX_GN(0x100d),					/* MT27550 Family */
	MLX_GN(0x100e),					/* MT27551 Family */
	MLX_GN(0x100f),					/* MT27560 Family */
	MLX_GN(0x1010),					/* MT27561 Family */

	/*
	 * See the mellanox_check_broken_intx_masking() quirk when
	 * adding devices
	 */

	{ 0, }
};
@@ -119,6 +119,27 @@ int of_get_pci_domain_nr(struct device_node *node)
}
EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);

/**
 * This function will try to find the limitation of link speed by finding
 * a property called "max-link-speed" of the given device node.
 *
 * @node: device tree node with the max link speed information
 *
 * Returns the associated max link speed from DT, or a negative value if the
 * required property is not found or is invalid.
 */
int of_pci_get_max_link_speed(struct device_node *node)
{
	u32 max_link_speed;

	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
	    max_link_speed > 4)
		return -EINVAL;

	return max_link_speed;
}
EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);

/**
 * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
 *                           is present and valid
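As a hedged sketch (not taken from this series), a host controller driver could consume of_pci_get_max_link_speed() at probe time roughly as below; the helper name and the gen2 fallback are assumptions for illustration only.

/* Illustrative only: pick the link generation advertised in DT. */
#include <linux/of.h>
#include <linux/of_pci.h>

/* Hypothetical helper; name and fallback value are assumptions. */
static int example_get_link_gen(struct device_node *np)
{
	int link_gen = of_pci_get_max_link_speed(np);

	if (link_gen < 0)
		link_gen = 2;	/* property missing or invalid: assume gen2 */

	return link_gen;
}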
@@ -142,10 +142,22 @@ int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
	if (size == 4) {
		writel(val, addr);
		return PCIBIOS_SUCCESSFUL;
	} else {
		mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
	}

	/*
	 * In general, hardware that supports only 32-bit writes on PCI is
	 * not spec-compliant.  For example, software may perform a 16-bit
	 * write.  If the hardware only supports 32-bit accesses, we must
	 * do a 32-bit read, merge in the 16 bits we intend to write,
	 * followed by a 32-bit write.  If the 16 bits we *don't* intend to
	 * write happen to have any RW1C (write-one-to-clear) bits set, we
	 * just inadvertently cleared something we shouldn't have.
	 */
	dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
			     size, pci_domain_nr(bus), bus->number,
			     PCI_SLOT(devfn), PCI_FUNC(devfn), where);

	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
	tmp = readl(addr) & mask;
	tmp |= val << ((where & 0x3) * 8);
	writel(tmp, addr);
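A worked example of the read-modify-write merge described in the comment above (illustration only, with made-up register contents): a 16-bit write of 0xBEEF at config offset 0x06 lands in the upper half of the aligned 32-bit word.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int size = 2, where = 0x06;	/* 16-bit write at offset 6 */
	uint32_t val = 0xBEEF;
	uint32_t cur = 0x12345678;	/* pretend readl() result */
	uint32_t mask, tmp;

	mask = ~(((1u << (size * 8)) - 1) << ((where & 0x3) * 8));
	/* mask == 0x0000FFFF: preserve the low 16 bits of the word */
	tmp = (cur & mask) | (val << ((where & 0x3) * 8));
	printf("merged word: 0x%08x\n", tmp);	/* prints 0xbeef5678 */
	return 0;
}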
@@ -320,7 +320,7 @@ void pci_bus_add_device(struct pci_dev *dev)
	pci_fixup_device(pci_fixup_final, dev);
	pci_create_sysfs_dev_files(dev);
	pci_proc_attach_device(dev);
	pci_bridge_d3_device_changed(dev);
	pci_bridge_d3_update(dev);

	dev->match_driver = true;
	retval = device_attach(&dev->dev);
@@ -162,3 +162,15 @@ struct pci_ecam_ops pci_generic_ecam_ops = {
		.write		= pci_generic_config_write,
	}
};

#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
/* ECAM ops for 32-bit access only (non-compliant) */
struct pci_ecam_ops pci_32b_ops = {
	.bus_shift	= 20,
	.pci_ops	= {
		.map_bus	= pci_ecam_map_bus,
		.read		= pci_generic_config_read32,
		.write		= pci_generic_config_write32,
	}
};
#endif
@@ -69,7 +69,7 @@ config PCI_IMX6

config PCI_TEGRA
	bool "NVIDIA Tegra PCIe controller"
	depends on ARCH_TEGRA && !ARM64
	depends on ARCH_TEGRA
	help
	  Say Y here if you want support for the PCIe host controller found
	  on NVIDIA Tegra SoCs.
@@ -133,8 +133,8 @@ config PCIE_XILINX

config PCI_XGENE
	bool "X-Gene PCIe controller"
	depends on ARCH_XGENE
	depends on OF
	depends on ARM64
	depends on OF || (ACPI && PCI_QUIRKS)
	select PCIEPORTBUS
	help
	  Say Y here if you want internal PCI support on APM X-Gene SoC.
@@ -240,14 +240,16 @@ config PCIE_QCOM

config PCI_HOST_THUNDER_PEM
	bool "Cavium Thunder PCIe controller to off-chip devices"
	depends on OF && ARM64
	depends on ARM64
	depends on OF || (ACPI && PCI_QUIRKS)
	select PCI_HOST_COMMON
	help
	  Say Y here if you want PCIe support for CN88XX Cavium Thunder SoCs.

config PCI_HOST_THUNDER_ECAM
	bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon"
	depends on OF && ARM64
	depends on ARM64
	depends on OF || (ACPI && PCI_QUIRKS)
	select PCI_HOST_COMMON
	help
	  Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.
@@ -276,7 +278,7 @@ config PCIE_ARTPEC6

config PCIE_ROCKCHIP
	bool "Rockchip PCIe controller"
	depends on ARCH_ROCKCHIP
	depends on ARCH_ROCKCHIP || COMPILE_TEST
	depends on OF
	depends on PCI_MSI_IRQ_DOMAIN
	select MFD_SYSCON
@@ -286,7 +288,7 @@ config PCIE_ROCKCHIP
	  4 slots.

config VMD
	depends on PCI_MSI && X86_64
	depends on PCI_MSI && X86_64 && SRCU
	tristate "Intel Volume Management Device Driver"
	default N
	---help---
@@ -15,7 +15,6 @@ obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o
@@ -25,11 +24,23 @@ obj-$(CONFIG_PCIE_IPROC_PLATFORM) += pcie-iproc-platform.o
obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o
obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o
obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o
obj-$(CONFIG_PCI_HISI) += pcie-hisi.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o
obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_VMD) += vmd.o

# The following drivers are for devices that use the generic ACPI
# pci_root.c driver but don't support standard ECAM config access.
# They contain MCFG quirks to replace the generic ECAM accessors with
# device-specific ones that are shared with the DT driver.

# The ACPI driver is generic and should not require driver-specific
# config options to be enabled, so we always build these drivers on
# ARM64 and use internal ifdefs to only build the pieces we need
# depending on whether ACPI, the DT driver, or both are enabled.

obj-$(CONFIG_ARM64) += pcie-hisi.o
obj-$(CONFIG_ARM64) += pci-thunder-ecam.o
obj-$(CONFIG_ARM64) += pci-thunder-pem.o
obj-$(CONFIG_ARM64) += pci-xgene.o
@@ -378,6 +378,8 @@ struct hv_pcibus_device {
	struct msi_domain_info msi_info;
	struct msi_controller msi_chip;
	struct irq_domain *irq_domain;
	struct retarget_msi_interrupt retarget_msi_interrupt_params;
	spinlock_t retarget_msi_interrupt_lock;
};

/*
@@ -755,7 +757,7 @@ static int hv_set_affinity(struct irq_data *data, const struct cpumask *dest,
	return parent->chip->irq_set_affinity(parent, dest, force);
}

void hv_irq_mask(struct irq_data *data)
static void hv_irq_mask(struct irq_data *data)
{
	pci_msi_mask_irq(data);
}
@@ -770,38 +772,44 @@ void hv_irq_mask(struct irq_data *data)
 * is built out of this PCI bus's instance GUID and the function
 * number of the device.
 */
void hv_irq_unmask(struct irq_data *data)
static void hv_irq_unmask(struct irq_data *data)
{
	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
	struct irq_cfg *cfg = irqd_cfg(data);
	struct retarget_msi_interrupt params;
	struct retarget_msi_interrupt *params;
	struct hv_pcibus_device *hbus;
	struct cpumask *dest;
	struct pci_bus *pbus;
	struct pci_dev *pdev;
	int cpu;
	unsigned long flags;

	dest = irq_data_get_affinity_mask(data);
	pdev = msi_desc_to_pci_dev(msi_desc);
	pbus = pdev->bus;
	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);

	memset(&params, 0, sizeof(params));
	params.partition_id = HV_PARTITION_ID_SELF;
	params.source = 1; /* MSI(-X) */
	params.address = msi_desc->msg.address_lo;
	params.data = msi_desc->msg.data;
	params.device_id = (hbus->hdev->dev_instance.b[5] << 24) |
	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);

	params = &hbus->retarget_msi_interrupt_params;
	memset(params, 0, sizeof(*params));
	params->partition_id = HV_PARTITION_ID_SELF;
	params->source = 1; /* MSI(-X) */
	params->address = msi_desc->msg.address_lo;
	params->data = msi_desc->msg.data;
	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
			   (hbus->hdev->dev_instance.b[4] << 16) |
			   (hbus->hdev->dev_instance.b[7] << 8) |
			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
			   PCI_FUNC(pdev->devfn);
	params.vector = cfg->vector;
	params->vector = cfg->vector;

	for_each_cpu_and(cpu, dest, cpu_online_mask)
		params.vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));
		params->vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));

	hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, &params, NULL);
	hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, params, NULL);

	spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);

	pci_msi_unmask_irq(data);
}
@@ -1271,9 +1279,9 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
	struct hv_pci_dev *hpdev;
	struct pci_child_message *res_req;
	struct q_res_req_compl comp_pkt;
	union {
		struct pci_packet init_packet;
		u8 buffer[0x100];
	struct {
		struct pci_packet init_packet;
		u8 buffer[sizeof(struct pci_child_message)];
	} pkt;
	unsigned long flags;
	int ret;
@@ -1582,6 +1590,10 @@ static void hv_eject_device_work(struct work_struct *work)
		pci_dev_put(pdev);
	}

	spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
	list_del(&hpdev->list_entry);
	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);

	memset(&ctxt, 0, sizeof(ctxt));
	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
	ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
@@ -1590,10 +1602,6 @@ static void hv_eject_device_work(struct work_struct *work)
			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
			 VM_PKT_DATA_INBAND, 0);

	spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
	list_del(&hpdev->list_entry);
	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);

	put_pcichild(hpdev, hv_pcidev_ref_childlist);
	put_pcichild(hpdev, hv_pcidev_ref_pnp);
	put_hvpcibus(hpdev->hbus);
@@ -2186,6 +2194,7 @@ static int hv_pci_probe(struct hv_device *hdev,
	INIT_LIST_HEAD(&hbus->resources_for_children);
	spin_lock_init(&hbus->config_lock);
	spin_lock_init(&hbus->device_list_lock);
	spin_lock_init(&hbus->retarget_msi_interrupt_lock);
	sema_init(&hbus->enum_sem, 1);
	init_completion(&hbus->remove_event);

@@ -2266,24 +2275,32 @@ free_bus:
	return ret;
}

/**
 * hv_pci_remove() - Remove routine for this VMBus channel
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_remove(struct hv_device *hdev)
static void hv_pci_bus_exit(struct hv_device *hdev)
{
	int ret;
	struct hv_pcibus_device *hbus;
	union {
	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
	struct {
		struct pci_packet teardown_packet;
		u8 buffer[0x100];
		u8 buffer[sizeof(struct pci_message)];
	} pkt;
	struct pci_bus_relations relations;
	struct hv_pci_compl comp_pkt;
	int ret;

	hbus = hv_get_drvdata(hdev);
	/*
	 * After the host sends the RESCIND_CHANNEL message, it doesn't
	 * access the per-channel ringbuffer any longer.
	 */
	if (hdev->channel->rescind)
		return;

	/* Delete any children which might still exist. */
	memset(&relations, 0, sizeof(relations));
	hv_pci_devices_present(hbus, &relations);

	ret = hv_send_resources_released(hdev);
	if (ret)
		dev_err(&hdev->device,
			"Couldn't send resources released packet(s)\n");

	memset(&pkt.teardown_packet, 0, sizeof(pkt.teardown_packet));
	init_completion(&comp_pkt.host_event);
@@ -2298,7 +2315,19 @@ static int hv_pci_remove(struct hv_device *hdev)
			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
	if (!ret)
		wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ);
}

/**
 * hv_pci_remove() - Remove routine for this VMBus channel
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_remove(struct hv_device *hdev)
{
	struct hv_pcibus_device *hbus;

	hbus = hv_get_drvdata(hdev);
	if (hbus->state == hv_pcibus_installed) {
		/* Remove the bus from PCI's point of view. */
		pci_lock_rescan_remove();
@@ -2307,17 +2336,10 @@ static int hv_pci_remove(struct hv_device *hdev)
		pci_unlock_rescan_remove();
	}

	ret = hv_send_resources_released(hdev);
	if (ret)
		dev_err(&hdev->device,
			"Couldn't send resources released packet(s)\n");
	hv_pci_bus_exit(hdev);

	vmbus_close(hdev->channel);

	/* Delete any children which might still exist. */
	memset(&relations, 0, sizeof(relations));
	hv_pci_devices_present(hbus, &relations);

	iounmap(hbus->cfg_addr);
	hv_free_config_window(hbus);
	pci_free_resource_list(&hbus->resources_for_children);
@@ -35,12 +35,10 @@
#define PCIE_STRFMR1		0x71c /* Symbol Timer & Filter Mask Register1 */
#define PCIE_DBI_RO_WR_EN	0x8bc /* DBI Read-Only Write Enable Register */

/* PEX LUT registers */
#define PCIE_LUT_DBG		0x7FC /* PEX LUT Debug Register */

struct ls_pcie_drvdata {
	u32 lut_offset;
	u32 ltssm_shift;
	u32 lut_dbg;
	struct pcie_host_ops *ops;
};

@@ -134,7 +132,7 @@ static int ls_pcie_link_up(struct pcie_port *pp)
	struct ls_pcie *pcie = to_ls_pcie(pp);
	u32 state;

	state = (ioread32(pcie->lut + PCIE_LUT_DBG) >>
	state = (ioread32(pcie->lut + pcie->drvdata->lut_dbg) >>
		 pcie->drvdata->ltssm_shift) &
		 LTSSM_STATE_MASK;

@@ -196,18 +194,28 @@ static struct ls_pcie_drvdata ls1021_drvdata = {
static struct ls_pcie_drvdata ls1043_drvdata = {
	.lut_offset = 0x10000,
	.ltssm_shift = 24,
	.lut_dbg = 0x7fc,
	.ops = &ls_pcie_host_ops,
};

static struct ls_pcie_drvdata ls1046_drvdata = {
	.lut_offset = 0x80000,
	.ltssm_shift = 24,
	.lut_dbg = 0x407fc,
	.ops = &ls_pcie_host_ops,
};

static struct ls_pcie_drvdata ls2080_drvdata = {
	.lut_offset = 0x80000,
	.ltssm_shift = 0,
	.lut_dbg = 0x7fc,
	.ops = &ls_pcie_host_ops,
};

static const struct of_device_id ls_pcie_of_match[] = {
	{ .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata },
	{ .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata },
	{ .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata },
	{ .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata },
	{ .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
	{ },
@@ -252,10 +260,8 @@ static int __init ls_pcie_probe(struct platform_device *pdev)

	dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
	pcie->pp.dbi_base = devm_ioremap_resource(dev, dbi_base);
	if (IS_ERR(pcie->pp.dbi_base)) {
		dev_err(dev, "missing *regs* space\n");
	if (IS_ERR(pcie->pp.dbi_base))
		return PTR_ERR(pcie->pp.dbi_base);
	}

	pcie->lut = pcie->pp.dbi_base + pcie->drvdata->lut_offset;

@@ -430,10 +430,10 @@ static int rcar_pci_probe(struct platform_device *pdev)
}

static struct of_device_id rcar_pci_of_match[] = {
	{ .compatible = "renesas,pci-rcar-gen2", },
	{ .compatible = "renesas,pci-r8a7790", },
	{ .compatible = "renesas,pci-r8a7791", },
	{ .compatible = "renesas,pci-r8a7794", },
	{ .compatible = "renesas,pci-rcar-gen2", },
	{ },
};

@ -51,10 +51,6 @@
#include <soc/tegra/cpuidle.h>
#include <soc/tegra/pmc.h>

#include <asm/mach/irq.h>
#include <asm/mach/map.h>
#include <asm/mach/pci.h>

#define INT_PCI_MSI_NR (8 * 32)

/* register definitions */
@ -188,6 +184,9 @@
#define RP_VEND_XP 0x00000f00
#define RP_VEND_XP_DL_UP (1 << 30)

#define RP_VEND_CTL2 0x00000fa8
#define RP_VEND_CTL2_PCA_ENABLE (1 << 7)

#define RP_PRIV_MISC 0x00000fe0
#define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xe << 0)
#define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xf << 0)
@ -252,6 +251,7 @@ struct tegra_pcie_soc {
bool has_intr_prsnt_sense;
bool has_cml_clk;
bool has_gen2;
bool force_pca_enable;
};

static inline struct tegra_msi *to_tegra_msi(struct msi_controller *chip)
@ -322,11 +322,6 @@ struct tegra_pcie_bus {
unsigned int nr;
};

static inline struct tegra_pcie *sys_to_pcie(struct pci_sys_data *sys)
{
return sys->private_data;
}

static inline void afi_writel(struct tegra_pcie *pcie, u32 value,
unsigned long offset)
{
@ -385,8 +380,7 @@ static struct tegra_pcie_bus *tegra_pcie_bus_alloc(struct tegra_pcie *pcie,
unsigned int busnr)
{
struct device *dev = pcie->dev;
pgprot_t prot = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
L_PTE_XN | L_PTE_MT_DEV_SHARED | L_PTE_SHARED);
pgprot_t prot = pgprot_device(PAGE_KERNEL);
phys_addr_t cs = pcie->cs->start;
struct tegra_pcie_bus *bus;
unsigned int i;
@ -430,7 +424,8 @@ free:

static int tegra_pcie_add_bus(struct pci_bus *bus)
{
struct tegra_pcie *pcie = sys_to_pcie(bus->sysdata);
struct pci_host_bridge *host = pci_find_host_bridge(bus);
struct tegra_pcie *pcie = pci_host_bridge_priv(host);
struct tegra_pcie_bus *b;

b = tegra_pcie_bus_alloc(pcie, bus->number);
@ -444,7 +439,8 @@ static int tegra_pcie_add_bus(struct pci_bus *bus)

static void tegra_pcie_remove_bus(struct pci_bus *child)
{
struct tegra_pcie *pcie = sys_to_pcie(child->sysdata);
struct pci_host_bridge *host = pci_find_host_bridge(child);
struct tegra_pcie *pcie = pci_host_bridge_priv(host);
struct tegra_pcie_bus *bus, *tmp;

list_for_each_entry_safe(bus, tmp, &pcie->buses, list) {
@ -461,7 +457,8 @@ static void __iomem *tegra_pcie_map_bus(struct pci_bus *bus,
unsigned int devfn,
int where)
{
struct tegra_pcie *pcie = sys_to_pcie(bus->sysdata);
struct pci_host_bridge *host = pci_find_host_bridge(bus);
struct tegra_pcie *pcie = pci_host_bridge_priv(host);
struct device *dev = pcie->dev;
void __iomem *addr = NULL;

@ -558,6 +555,12 @@ static void tegra_pcie_port_enable(struct tegra_pcie_port *port)
afi_writel(port->pcie, value, ctrl);

tegra_pcie_port_reset(port);

if (soc->force_pca_enable) {
value = readl(port->base + RP_VEND_CTL2);
value |= RP_VEND_CTL2_PCA_ENABLE;
writel(value, port->base + RP_VEND_CTL2);
}
}

static void tegra_pcie_port_disable(struct tegra_pcie_port *port)
@ -610,39 +613,31 @@ static void tegra_pcie_relax_enable(struct pci_dev *dev)
}
DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, tegra_pcie_relax_enable);

static int tegra_pcie_setup(int nr, struct pci_sys_data *sys)
static int tegra_pcie_request_resources(struct tegra_pcie *pcie)
{
struct tegra_pcie *pcie = sys_to_pcie(sys);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct list_head *windows = &host->windows;
struct device *dev = pcie->dev;
int err;

sys->mem_offset = pcie->offset.mem;
sys->io_offset = pcie->offset.io;
pci_add_resource_offset(windows, &pcie->pio, pcie->offset.io);
pci_add_resource_offset(windows, &pcie->mem, pcie->offset.mem);
pci_add_resource_offset(windows, &pcie->prefetch, pcie->offset.mem);
pci_add_resource(windows, &pcie->busn);

err = devm_request_resource(dev, &iomem_resource, &pcie->io);
err = devm_request_pci_bus_resources(dev, windows);
if (err < 0)
return err;

err = pci_remap_iospace(&pcie->pio, pcie->io.start);
if (!err)
pci_add_resource_offset(&sys->resources, &pcie->pio,
sys->io_offset);
pci_remap_iospace(&pcie->pio, pcie->io.start);

pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
pci_add_resource_offset(&sys->resources, &pcie->prefetch,
sys->mem_offset);
pci_add_resource(&sys->resources, &pcie->busn);

err = devm_request_pci_bus_resources(dev, &sys->resources);
if (err < 0)
return err;

return 1;
return 0;
}

static int tegra_pcie_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
{
struct tegra_pcie *pcie = sys_to_pcie(pdev->bus->sysdata);
struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus);
struct tegra_pcie *pcie = pci_host_bridge_priv(host);
int irq;

tegra_cpuidle_pcie_irqs_in_use();
@ -1499,10 +1494,11 @@ static const struct irq_domain_ops msi_domain_ops = {

static int tegra_pcie_enable_msi(struct tegra_pcie *pcie)
{
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct platform_device *pdev = to_platform_device(pcie->dev);
const struct tegra_pcie_soc *soc = pcie->soc;
struct tegra_msi *msi = &pcie->msi;
struct device *dev = pcie->dev;
unsigned long base;
int err;
u32 reg;
@ -1559,6 +1555,8 @@ static int tegra_pcie_enable_msi(struct tegra_pcie *pcie)
reg |= AFI_INTR_MASK_MSI_MASK;
afi_writel(pcie, reg, AFI_INTR_MASK);

host->msi = &msi->chip;

return 0;

err:
@ -1609,7 +1607,8 @@ static int tegra_pcie_get_xbar_config(struct tegra_pcie *pcie, u32 lanes,
struct device *dev = pcie->dev;
struct device_node *np = dev->of_node;

if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) {
if (of_device_is_compatible(np, "nvidia,tegra124-pcie") ||
of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
switch (lanes) {
case 0x0000104:
dev_info(dev, "4x1, 1x1 configuration\n");
@ -1730,7 +1729,22 @@ static int tegra_pcie_get_regulators(struct tegra_pcie *pcie, u32 lane_mask)
struct device_node *np = dev->of_node;
unsigned int i = 0;

if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) {
if (of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
pcie->num_supplies = 6;

pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies,
sizeof(*pcie->supplies),
GFP_KERNEL);
if (!pcie->supplies)
return -ENOMEM;

pcie->supplies[i++].supply = "avdd-pll-uerefe";
pcie->supplies[i++].supply = "hvddio-pex";
pcie->supplies[i++].supply = "dvddio-pex";
pcie->supplies[i++].supply = "dvdd-pex-pll";
pcie->supplies[i++].supply = "hvdd-pex-pll-e";
pcie->supplies[i++].supply = "vddio-pex-ctl";
} else if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) {
pcie->num_supplies = 7;

pcie->supplies = devm_kcalloc(dev, pcie->num_supplies,
@ -2021,11 +2035,10 @@ retry:
return false;
}

static int tegra_pcie_enable(struct tegra_pcie *pcie)
static void tegra_pcie_enable_ports(struct tegra_pcie *pcie)
{
struct device *dev = pcie->dev;
struct tegra_pcie_port *port, *tmp;
struct hw_pci hw;

list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
dev_info(dev, "probing port %u, using %u lanes\n",
@ -2041,21 +2054,6 @@ static int tegra_pcie_enable(struct tegra_pcie *pcie)
tegra_pcie_port_disable(port);
tegra_pcie_port_free(port);
}

memset(&hw, 0, sizeof(hw));

#ifdef CONFIG_PCI_MSI
hw.msi_ctrl = &pcie->msi.chip;
#endif

hw.nr_controllers = 1;
hw.private_data = (void **)&pcie;
hw.setup = tegra_pcie_setup;
hw.map_irq = tegra_pcie_map_irq;
hw.ops = &tegra_pcie_ops;

pci_common_init_dev(dev, &hw);
return 0;
}

static const struct tegra_pcie_soc tegra20_pcie = {
@ -2069,6 +2067,7 @@ static const struct tegra_pcie_soc tegra20_pcie = {
.has_intr_prsnt_sense = false,
.has_cml_clk = false,
.has_gen2 = false,
.force_pca_enable = false,
};

static const struct tegra_pcie_soc tegra30_pcie = {
@ -2083,6 +2082,7 @@ static const struct tegra_pcie_soc tegra30_pcie = {
.has_intr_prsnt_sense = true,
.has_cml_clk = true,
.has_gen2 = false,
.force_pca_enable = false,
};

static const struct tegra_pcie_soc tegra124_pcie = {
@ -2096,9 +2096,25 @@ static const struct tegra_pcie_soc tegra124_pcie = {
.has_intr_prsnt_sense = true,
.has_cml_clk = true,
.has_gen2 = true,
.force_pca_enable = false,
};

static const struct tegra_pcie_soc tegra210_pcie = {
.num_ports = 2,
.msi_base_shift = 8,
.pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
.tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
.pads_refclk_cfg0 = 0x90b890b8,
.has_pex_clkreq_en = true,
.has_pex_bias_ctrl = true,
.has_intr_prsnt_sense = true,
.has_cml_clk = true,
.has_gen2 = true,
.force_pca_enable = true,
};

static const struct of_device_id tegra_pcie_of_match[] = {
{ .compatible = "nvidia,tegra210-pcie", .data = &tegra210_pcie },
{ .compatible = "nvidia,tegra124-pcie", .data = &tegra124_pcie },
{ .compatible = "nvidia,tegra30-pcie", .data = &tegra30_pcie },
{ .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie },
@ -2217,13 +2233,17 @@ remove:
static int tegra_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pci_host_bridge *host;
struct tegra_pcie *pcie;
struct pci_bus *child;
int err;

pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
host = pci_alloc_host_bridge(sizeof(*pcie));
if (!host)
return -ENOMEM;

pcie = pci_host_bridge_priv(host);

pcie->soc = of_device_get_match_data(dev);
INIT_LIST_HEAD(&pcie->buses);
INIT_LIST_HEAD(&pcie->ports);
@ -2243,6 +2263,10 @@ static int tegra_pcie_probe(struct platform_device *pdev)
if (err)
goto put_resources;

err = tegra_pcie_request_resources(pcie);
if (err)
goto put_resources;

/* setup the AFI address translations */
tegra_pcie_setup_translations(pcie);

@ -2254,12 +2278,30 @@ static int tegra_pcie_probe(struct platform_device *pdev)
}
}

err = tegra_pcie_enable(pcie);
tegra_pcie_enable_ports(pcie);

pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS);
host->busnr = pcie->busn.start;
host->dev.parent = &pdev->dev;
host->ops = &tegra_pcie_ops;

err = pci_register_host_bridge(host);
if (err < 0) {
dev_err(dev, "failed to enable PCIe ports: %d\n", err);
dev_err(dev, "failed to register host: %d\n", err);
goto disable_msi;
}

pci_scan_child_bus(host->bus);

pci_fixup_irqs(pci_common_swizzle, tegra_pcie_map_irq);
pci_bus_size_bridges(host->bus);
pci_bus_assign_resources(host->bus);

list_for_each_entry(child, &host->bus->children, node)
pcie_bus_configure_settings(child);

pci_bus_add_devices(host->bus);

if (IS_ENABLED(CONFIG_DEBUG_FS)) {
err = tegra_pcie_debugfs_init(pcie);
if (err < 0)
@ -14,6 +14,8 @@
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>

#if defined(CONFIG_PCI_HOST_THUNDER_ECAM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))

static void set_val(u32 v, int where, int size, u32 *val)
{
int shift = (where & 3) * 8;
@ -346,7 +348,7 @@ static int thunder_ecam_config_write(struct pci_bus *bus, unsigned int devfn,
return pci_generic_config_write(bus, devfn, where, size, val);
}

static struct pci_ecam_ops pci_thunder_ecam_ops = {
struct pci_ecam_ops pci_thunder_ecam_ops = {
.bus_shift = 20,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
@ -355,6 +357,8 @@ static struct pci_ecam_ops pci_thunder_ecam_ops = {
}
};

#ifdef CONFIG_PCI_HOST_THUNDER_ECAM

static const struct of_device_id thunder_ecam_of_match[] = {
{ .compatible = "cavium,pci-host-thunder-ecam" },
{ },
@ -373,3 +377,6 @@ static struct platform_driver thunder_ecam_driver = {
.probe = thunder_ecam_probe,
};
builtin_platform_driver(thunder_ecam_driver);

#endif
#endif
@ -18,8 +18,12 @@
#include <linux/init.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../pci.h"

#if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))

#define PEM_CFG_WR 0x28
#define PEM_CFG_RD 0x30
@ -284,35 +288,16 @@ static int thunder_pem_config_write(struct pci_bus *bus, unsigned int devfn,
return pci_generic_config_write(bus, devfn, where, size, val);
}

static int thunder_pem_init(struct pci_config_window *cfg)
static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
struct resource *res_pem)
{
struct device *dev = cfg->parent;
resource_size_t bar4_start;
struct resource *res_pem;
struct thunder_pem_pci *pem_pci;
struct platform_device *pdev;

/* Only OF support for now */
if (!dev->of_node)
return -EINVAL;
resource_size_t bar4_start;

pem_pci = devm_kzalloc(dev, sizeof(*pem_pci), GFP_KERNEL);
if (!pem_pci)
return -ENOMEM;

pdev = to_platform_device(dev);

/*
* The second register range is the PEM bridge to the PCIe
* bus. It has a different config access method than those
* devices behind the bridge.
*/
res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res_pem) {
dev_err(dev, "missing \"reg[1]\"property\n");
return -EINVAL;
}

pem_pci->pem_reg_base = devm_ioremap(dev, res_pem->start, 0x10000);
if (!pem_pci->pem_reg_base)
return -ENOMEM;
@ -332,9 +317,69 @@ static int thunder_pem_init(struct pci_config_window *cfg)
return 0;
}

#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)

static int thunder_pem_acpi_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct acpi_pci_root *root = acpi_driver_data(adev);
struct resource *res_pem;
int ret;

res_pem = devm_kzalloc(&adev->dev, sizeof(*res_pem), GFP_KERNEL);
if (!res_pem)
return -ENOMEM;

ret = acpi_get_rc_resources(dev, "THRX0002", root->segment, res_pem);
if (ret) {
dev_err(dev, "can't get rc base address\n");
return ret;
}

return thunder_pem_init(dev, cfg, res_pem);
}

struct pci_ecam_ops thunder_pem_ecam_ops = {
.bus_shift = 24,
.init = thunder_pem_acpi_init,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = thunder_pem_config_read,
.write = thunder_pem_config_write,
}
};

#endif

#ifdef CONFIG_PCI_HOST_THUNDER_PEM

static int thunder_pem_platform_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res_pem;

if (!dev->of_node)
return -EINVAL;

/*
* The second register range is the PEM bridge to the PCIe
* bus. It has a different config access method than those
* devices behind the bridge.
*/
res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res_pem) {
dev_err(dev, "missing \"reg[1]\"property\n");
return -EINVAL;
}

return thunder_pem_init(dev, cfg, res_pem);
}

static struct pci_ecam_ops pci_thunder_pem_ops = {
.bus_shift = 24,
.init = thunder_pem_init,
.init = thunder_pem_platform_init,
.pci_ops = {
.map_bus = pci_ecam_map_bus,
.read = thunder_pem_config_read,
@ -360,3 +405,6 @@ static struct platform_driver thunder_pem_driver = {
.probe = thunder_pem_probe,
};
builtin_platform_driver(thunder_pem_driver);

#endif
#endif
@ -27,6 +27,8 @@
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

@ -64,7 +66,9 @@
/* PCIe IP version */
#define XGENE_PCIE_IP_VER_UNKN 0
#define XGENE_PCIE_IP_VER_1 1
#define XGENE_PCIE_IP_VER_2 2

#if defined(CONFIG_PCI_XGENE) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
struct xgene_pcie_port {
struct device_node *node;
struct device *dev;
@ -91,13 +95,24 @@ static inline u32 pcie_bar_low_val(u32 addr, u32 flags)
return (addr & PCI_BASE_ADDRESS_MEM_MASK) | flags;
}

static inline struct xgene_pcie_port *pcie_bus_to_port(struct pci_bus *bus)
{
struct pci_config_window *cfg;

if (acpi_disabled)
return (struct xgene_pcie_port *)(bus->sysdata);

cfg = bus->sysdata;
return (struct xgene_pcie_port *)(cfg->priv);
}

/*
* When the address bit [17:16] is 2'b01, the Configuration access will be
* treated as Type 1 and it will be forwarded to external PCIe device.
*/
static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus)
{
struct xgene_pcie_port *port = bus->sysdata;
struct xgene_pcie_port *port = pcie_bus_to_port(bus);

if (bus->number >= (bus->primary + 1))
return port->cfg_base + AXI_EP_CFG_ACCESS;
@ -111,7 +126,7 @@ static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus)
*/
static void xgene_pcie_set_rtdid_reg(struct pci_bus *bus, uint devfn)
{
struct xgene_pcie_port *port = bus->sysdata;
struct xgene_pcie_port *port = pcie_bus_to_port(bus);
unsigned int b, d, f;
u32 rtdid_val = 0;

@ -158,7 +173,7 @@ static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,
int where, int size, u32 *val)
{
struct xgene_pcie_port *port = bus->sysdata;
struct xgene_pcie_port *port = pcie_bus_to_port(bus);

if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) !=
PCIBIOS_SUCCESSFUL)
@ -182,13 +197,103 @@ static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn,

return PCIBIOS_SUCCESSFUL;
}
#endif

static struct pci_ops xgene_pcie_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write32,
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
static int xgene_get_csr_resource(struct acpi_device *adev,
struct resource *res)
{
struct device *dev = &adev->dev;
struct resource_entry *entry;
struct list_head list;
unsigned long flags;
int ret;

INIT_LIST_HEAD(&list);
flags = IORESOURCE_MEM;
ret = acpi_dev_get_resources(adev, &list,
acpi_dev_filter_resource_type_cb,
(void *) flags);
if (ret < 0) {
dev_err(dev, "failed to parse _CRS method, error code %d\n",
ret);
return ret;
}

if (ret == 0) {
dev_err(dev, "no IO and memory resources present in _CRS\n");
return -EINVAL;
}

entry = list_first_entry(&list, struct resource_entry, node);
*res = *entry->res;
acpi_dev_free_resource_list(&list);
return 0;
}

static int xgene_pcie_ecam_init(struct pci_config_window *cfg, u32 ipversion)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct xgene_pcie_port *port;
struct resource csr;
int ret;

port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
if (!port)
return -ENOMEM;

ret = xgene_get_csr_resource(adev, &csr);
if (ret) {
dev_err(dev, "can't get CSR resource\n");
kfree(port);
return ret;
}
port->csr_base = devm_ioremap_resource(dev, &csr);
if (IS_ERR(port->csr_base)) {
kfree(port);
return -ENOMEM;
}

port->cfg_base = cfg->win;
port->version = ipversion;

cfg->priv = port;
return 0;
}

static int xgene_v1_pcie_ecam_init(struct pci_config_window *cfg)
{
return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_1);
}

struct pci_ecam_ops xgene_v1_pcie_ecam_ops = {
.bus_shift = 16,
.init = xgene_v1_pcie_ecam_init,
.pci_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write,
}
};

static int xgene_v2_pcie_ecam_init(struct pci_config_window *cfg)
{
return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_2);
}

struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
.bus_shift = 16,
.init = xgene_v2_pcie_ecam_init,
.pci_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write,
}
};
#endif

#if defined(CONFIG_PCI_XGENE)
static u64 xgene_pcie_set_ib_mask(struct xgene_pcie_port *port, u32 addr,
u32 flags, u64 size)
{
@ -521,6 +626,12 @@ static int xgene_pcie_setup(struct xgene_pcie_port *port,
return 0;
}

static struct pci_ops xgene_pcie_ops = {
.map_bus = xgene_pcie_map_bus,
.read = xgene_pcie_config_read32,
.write = pci_generic_config_write32,
};

static int xgene_pcie_probe_bridge(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -591,3 +702,4 @@ static struct platform_driver xgene_pcie_driver = {
.probe = xgene_pcie_probe_bridge,
};
builtin_platform_driver(xgene_pcie_driver);
#endif
@ -550,10 +550,8 @@ static int altera_pcie_parse_dt(struct altera_pcie *pcie)

cra = platform_get_resource_byname(pdev, IORESOURCE_MEM, "Cra");
pcie->cra_base = devm_ioremap_resource(dev, cra);
if (IS_ERR(pcie->cra_base)) {
dev_err(dev, "failed to map cra memory\n");
if (IS_ERR(pcie->cra_base))
return PTR_ERR(pcie->cra_base);
}

/* setup IRQ */
pcie->irq = platform_get_irq(pdev, 0);
@ -641,8 +639,4 @@ static struct platform_driver altera_pcie_driver = {
},
};

static int altera_pcie_init(void)
{
return platform_driver_register(&altera_pcie_driver);
}
device_initcall(altera_pcie_init);
builtin_platform_driver(altera_pcie_driver);
@ -18,7 +18,106 @@
#include <linux/of_pci.h>
#include <linux/platform_device.h>
#include <linux/of_device.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/regmap.h>
#include "../pci.h"

#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)

static int hisi_pcie_acpi_rd_conf(struct pci_bus *bus, u32 devfn, int where,
int size, u32 *val)
{
struct pci_config_window *cfg = bus->sysdata;
int dev = PCI_SLOT(devfn);

if (bus->number == cfg->busr.start) {
/* access only one slot on each root port */
if (dev > 0)
return PCIBIOS_DEVICE_NOT_FOUND;
else
return pci_generic_config_read32(bus, devfn, where,
size, val);
}

return pci_generic_config_read(bus, devfn, where, size, val);
}

static int hisi_pcie_acpi_wr_conf(struct pci_bus *bus, u32 devfn,
int where, int size, u32 val)
{
struct pci_config_window *cfg = bus->sysdata;
int dev = PCI_SLOT(devfn);

if (bus->number == cfg->busr.start) {
/* access only one slot on each root port */
if (dev > 0)
return PCIBIOS_DEVICE_NOT_FOUND;
else
return pci_generic_config_write32(bus, devfn, where,
size, val);
}

return pci_generic_config_write(bus, devfn, where, size, val);
}

static void __iomem *hisi_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct pci_config_window *cfg = bus->sysdata;
void __iomem *reg_base = cfg->priv;

if (bus->number == cfg->busr.start)
return reg_base + where;
else
return pci_ecam_map_bus(bus, devfn, where);
}

static int hisi_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct acpi_pci_root *root = acpi_driver_data(adev);
struct resource *res;
void __iomem *reg_base;
int ret;

/*
* Retrieve RC base and size from a HISI0081 device with _UID
* matching our segment.
*/
res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
if (!res)
return -ENOMEM;

ret = acpi_get_rc_resources(dev, "HISI0081", root->segment, res);
if (ret) {
dev_err(dev, "can't get rc base address\n");
return -ENOMEM;
}

reg_base = devm_ioremap(dev, res->start, resource_size(res));
if (!reg_base)
return -ENOMEM;

cfg->priv = reg_base;
return 0;
}

struct pci_ecam_ops hisi_pcie_ops = {
.bus_shift = 20,
.init = hisi_pcie_init,
.pci_ops = {
.map_bus = hisi_pcie_map_bus,
.read = hisi_pcie_acpi_rd_conf,
.write = hisi_pcie_acpi_wr_conf,
}
};

#endif

#ifdef CONFIG_PCI_HISI

#include "pcie-designware.h"

@ -185,17 +284,13 @@ static int hisi_pcie_probe(struct platform_device *pdev)

reg = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbi");
pp->dbi_base = devm_ioremap_resource(dev, reg);
if (IS_ERR(pp->dbi_base)) {
dev_err(dev, "cannot get rc_dbi base\n");
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
}

ret = hisi_add_pcie_port(hisi_pcie, pdev);
if (ret)
return ret;

dev_warn(dev, "only 32-bit config accesses supported; smaller writes may corrupt adjacent RW1C fields\n");

return 0;
}

@ -227,3 +322,5 @@ static struct platform_driver hisi_pcie_driver = {
},
};
builtin_platform_driver(hisi_pcie_driver);

#endif
@ -54,6 +54,7 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)

pcie->dev = dev;

pcie->type = IPROC_PCIE_PAXB_BCMA;
pcie->base = bdev->io_addr;
if (!pcie->base) {
dev_err(dev, "no controller registers\n");
@ -563,6 +563,7 @@ int iproc_msi_init(struct iproc_pcie *pcie, struct device_node *node)
}

switch (pcie->type) {
case IPROC_PCIE_PAXB_BCMA:
case IPROC_PCIE_PAXB:
msi->reg_offsets = iproc_msi_reg_paxb;
msi->nr_eq_region = 1;
@ -30,9 +30,15 @@ static const struct of_device_id iproc_pcie_of_match_table[] = {
{
.compatible = "brcm,iproc-pcie",
.data = (int *)IPROC_PCIE_PAXB,
}, {
.compatible = "brcm,iproc-pcie-paxb-v2",
.data = (int *)IPROC_PCIE_PAXB_V2,
}, {
.compatible = "brcm,iproc-pcie-paxc",
.data = (int *)IPROC_PCIE_PAXC,
}, {
.compatible = "brcm,iproc-pcie-paxc-v2",
.data = (int *)IPROC_PCIE_PAXC_V2,
},
{ /* sentinel */ }
};
@ -84,19 +90,6 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
return ret;
}
pcie->ob.axi_offset = val;

ret = of_property_read_u32(np, "brcm,pcie-ob-window-size",
&val);
if (ret) {
dev_err(dev,
"missing brcm,pcie-ob-window-size property\n");
return ret;
}
pcie->ob.window_size = (resource_size_t)val * SZ_1M;

if (of_property_read_bool(np, "brcm,pcie-ob-oarr-size"))
pcie->ob.set_oarr_size = true;

pcie->need_ob_cfg = true;
}

@ -115,7 +108,14 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
return ret;
}

pcie->map_irq = of_irq_parse_and_map_pci;
/* PAXC doesn't support legacy IRQs, skip mapping */
switch (pcie->type) {
case IPROC_PCIE_PAXC:
case IPROC_PCIE_PAXC_V2:
break;
default:
pcie->map_irq = of_irq_parse_and_map_pci;
}

ret = iproc_pcie_setup(pcie, &res);
if (ret)
File diff suppressed because it is too large
@ -24,23 +24,34 @@
* endpoint devices.
*/
enum iproc_pcie_type {
IPROC_PCIE_PAXB = 0,
IPROC_PCIE_PAXB_BCMA = 0,
IPROC_PCIE_PAXB,
IPROC_PCIE_PAXB_V2,
IPROC_PCIE_PAXC,
IPROC_PCIE_PAXC_V2,
};

/**
* iProc PCIe outbound mapping
* @set_oarr_size: indicates the OARR size bit needs to be set
* @axi_offset: offset from the AXI address to the internal address used by
* the iProc PCIe core
* @window_size: outbound window size
* @nr_windows: total number of supported outbound mapping windows
*/
struct iproc_pcie_ob {
bool set_oarr_size;
resource_size_t axi_offset;
resource_size_t window_size;
unsigned int nr_windows;
};

/**
* iProc PCIe inbound mapping
* @nr_regions: total number of supported inbound mapping regions
*/
struct iproc_pcie_ib {
unsigned int nr_regions;
};

struct iproc_pcie_ob_map;
struct iproc_pcie_ib_map;
struct iproc_msi;

/**
@ -55,14 +66,25 @@ struct iproc_msi;
* @root_bus: pointer to root bus
* @phy: optional PHY device that controls the Serdes
* @map_irq: function callback to map interrupts
* @ep_is_internal: indicates an internal emulated endpoint device is connected
* @has_apb_err_disable: indicates the controller can be configured to prevent
* unsupported request from being forwarded as an APB bus error
*
* @need_ob_cfg: indicates SW needs to configure the outbound mapping window
* @ob: outbound mapping parameters
* @ob: outbound mapping related parameters
* @ob_map: outbound mapping related parameters specific to the controller
*
* @ib: inbound mapping related parameters
* @ib_map: outbound mapping region related parameters
*
* @need_msi_steer: indicates additional configuration of the iProc PCIe
* controller is required to steer MSI writes to external interrupt controller
* @msi: MSI data
*/
struct iproc_pcie {
struct device *dev;
enum iproc_pcie_type type;
const u16 *reg_offsets;
u16 *reg_offsets;
void __iomem *base;
phys_addr_t base_addr;
#ifdef CONFIG_ARM
@ -71,8 +93,17 @@ struct iproc_pcie {
struct pci_bus *root_bus;
struct phy *phy;
int (*map_irq)(const struct pci_dev *, u8, u8);
bool ep_is_internal;
bool has_apb_err_disable;

bool need_ob_cfg;
struct iproc_pcie_ob ob;
const struct iproc_pcie_ob_map *ob_map;

struct iproc_pcie_ib ib;
const struct iproc_pcie_ib_map *ib_map;

bool need_msi_steer;
struct iproc_msi *msi;
};
@ -36,11 +36,17 @@

#include "pcie-designware.h"

#define PCIE20_PARF_SYS_CTRL 0x00
#define PCIE20_PARF_PHY_CTRL 0x40
#define PCIE20_PARF_PHY_REFCLK 0x4C
#define PCIE20_PARF_DBI_BASE_ADDR 0x168
#define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16c
#define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C
#define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174
#define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT 0x178
#define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1A8
#define PCIE20_PARF_LTSSM 0x1B0
#define PCIE20_PARF_SID_OFFSET 0x234
#define PCIE20_PARF_BDF_TRANSLATE_CFG 0x24C

#define PCIE20_ELBI_SYS_CTRL 0x04
#define PCIE20_ELBI_SYS_CTRL_LT_ENABLE BIT(0)
@ -72,9 +78,18 @@ struct qcom_pcie_resources_v1 {
struct regulator *vdda;
};

struct qcom_pcie_resources_v2 {
struct clk *aux_clk;
struct clk *master_clk;
struct clk *slave_clk;
struct clk *cfg_clk;
struct clk *pipe_clk;
};

union qcom_pcie_resources {
struct qcom_pcie_resources_v0 v0;
struct qcom_pcie_resources_v1 v1;
struct qcom_pcie_resources_v2 v2;
};

struct qcom_pcie;
@ -82,7 +97,9 @@ struct qcom_pcie;
struct qcom_pcie_ops {
int (*get_resources)(struct qcom_pcie *pcie);
int (*init)(struct qcom_pcie *pcie);
int (*post_init)(struct qcom_pcie *pcie);
void (*deinit)(struct qcom_pcie *pcie);
void (*ltssm_enable)(struct qcom_pcie *pcie);
};

struct qcom_pcie {
@ -116,17 +133,35 @@ static irqreturn_t qcom_pcie_msi_irq_handler(int irq, void *arg)
return dw_handle_msi_irq(pp);
}

static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;

if (dw_pcie_link_up(&pcie->pp))
return 0;

/* enable link training */
val = readl(pcie->elbi + PCIE20_ELBI_SYS_CTRL);
val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE;
writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL);
}

static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie)
{
u32 val;

/* enable link training */
val = readl(pcie->parf + PCIE20_PARF_LTSSM);
val |= BIT(8);
writel(val, pcie->parf + PCIE20_PARF_LTSSM);
}

static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
{

if (dw_pcie_link_up(&pcie->pp))
return 0;

/* Enable Link Training state machine */
if (pcie->ops->ltssm_enable)
pcie->ops->ltssm_enable(pcie);

return dw_pcie_wait_for_link(&pcie->pp);
}
@ -421,6 +456,113 @@ err_res:
return ret;
}

static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
struct device *dev = pcie->pp.dev;

res->aux_clk = devm_clk_get(dev, "aux");
if (IS_ERR(res->aux_clk))
return PTR_ERR(res->aux_clk);

res->cfg_clk = devm_clk_get(dev, "cfg");
if (IS_ERR(res->cfg_clk))
return PTR_ERR(res->cfg_clk);

res->master_clk = devm_clk_get(dev, "bus_master");
if (IS_ERR(res->master_clk))
return PTR_ERR(res->master_clk);

res->slave_clk = devm_clk_get(dev, "bus_slave");
if (IS_ERR(res->slave_clk))
return PTR_ERR(res->slave_clk);

res->pipe_clk = devm_clk_get(dev, "pipe");
if (IS_ERR(res->pipe_clk))
return PTR_ERR(res->pipe_clk);

return 0;
}

static int qcom_pcie_init_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
struct device *dev = pcie->pp.dev;
u32 val;
int ret;

ret = clk_prepare_enable(res->aux_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable aux clock\n");
return ret;
}

ret = clk_prepare_enable(res->cfg_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable cfg clock\n");
goto err_cfg_clk;
}

ret = clk_prepare_enable(res->master_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable master clock\n");
goto err_master_clk;
}

ret = clk_prepare_enable(res->slave_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable slave clock\n");
goto err_slave_clk;
}

/* enable PCIe clocks and resets */
val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
val &= ~BIT(0);
writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);

/* change DBI base address */
writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR);

/* MAC PHY_POWERDOWN MUX DISABLE */
val = readl(pcie->parf + PCIE20_PARF_SYS_CTRL);
val &= ~BIT(29);
writel(val, pcie->parf + PCIE20_PARF_SYS_CTRL);

val = readl(pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
val |= BIT(4);
writel(val, pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);

val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
val |= BIT(31);
writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);

return 0;

err_slave_clk:
clk_disable_unprepare(res->master_clk);
err_master_clk:
clk_disable_unprepare(res->cfg_clk);
err_cfg_clk:
clk_disable_unprepare(res->aux_clk);

return ret;
}

static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
struct device *dev = pcie->pp.dev;
int ret;

ret = clk_prepare_enable(res->pipe_clk);
if (ret) {
dev_err(dev, "cannot prepare/enable pipe clock\n");
return ret;
}

return 0;
}

static int qcom_pcie_link_up(struct pcie_port *pp)
{
struct qcom_pcie *pcie = to_qcom_pcie(pp);
@ -429,6 +571,17 @@ static int qcom_pcie_link_up(struct pcie_port *pp)
return !!(val & PCI_EXP_LNKSTA_DLLLA);
}

static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie)
{
struct qcom_pcie_resources_v2 *res = &pcie->res.v2;

clk_disable_unprepare(res->pipe_clk);
clk_disable_unprepare(res->slave_clk);
clk_disable_unprepare(res->master_clk);
clk_disable_unprepare(res->cfg_clk);
clk_disable_unprepare(res->aux_clk);
}

static void qcom_pcie_host_init(struct pcie_port *pp)
{
struct qcom_pcie *pcie = to_qcom_pcie(pp);
@ -444,6 +597,9 @@ static void qcom_pcie_host_init(struct pcie_port *pp)
if (ret)
goto err_deinit;

if (pcie->ops->post_init)
pcie->ops->post_init(pcie);

dw_pcie_setup_rc(pp);

if (IS_ENABLED(CONFIG_PCI_MSI))
@ -487,12 +643,22 @@ static const struct qcom_pcie_ops ops_v0 = {
.get_resources = qcom_pcie_get_resources_v0,
.init = qcom_pcie_init_v0,
.deinit = qcom_pcie_deinit_v0,
.ltssm_enable = qcom_pcie_v0_v1_ltssm_enable,
};

static const struct qcom_pcie_ops ops_v1 = {
.get_resources = qcom_pcie_get_resources_v1,
.init = qcom_pcie_init_v1,
.deinit = qcom_pcie_deinit_v1,
.ltssm_enable = qcom_pcie_v0_v1_ltssm_enable,
};

static const struct qcom_pcie_ops ops_v2 = {
.get_resources = qcom_pcie_get_resources_v2,
.init = qcom_pcie_init_v2,
.post_init = qcom_pcie_post_init_v2,
.deinit = qcom_pcie_deinit_v2,
.ltssm_enable = qcom_pcie_v2_ltssm_enable,
};

static int qcom_pcie_probe(struct platform_device *pdev)
@ -572,6 +738,7 @@ static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-ipq8064", .data = &ops_v0 },
{ .compatible = "qcom,pcie-apq8064", .data = &ops_v0 },
{ .compatible = "qcom,pcie-apq8084", .data = &ops_v1 },
{ .compatible = "qcom,pcie-msm8996", .data = &ops_v2 },
{ }
};
@ -1071,13 +1071,14 @@ static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie,

static const struct of_device_id rcar_pcie_of_match[] = {
{ .compatible = "renesas,pcie-r8a7779", .data = rcar_pcie_hw_init_h1 },
{ .compatible = "renesas,pcie-rcar-gen2",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-r8a7790",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-r8a7791",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-rcar-gen2",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init },
{ .compatible = "renesas,pcie-rcar-gen3", .data = rcar_pcie_hw_init },
{},
};
@ -53,6 +53,7 @@
#define PCIE_CLIENT_ARI_ENABLE HIWORD_UPDATE_BIT(0x0008)
#define PCIE_CLIENT_CONF_LANE_NUM(x) HIWORD_UPDATE(0x0030, ENCODE_LANES(x))
#define PCIE_CLIENT_MODE_RC HIWORD_UPDATE_BIT(0x0040)
#define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0)
#define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080)
#define PCIE_CLIENT_BASIC_STATUS1 (PCIE_CLIENT_BASE + 0x48)
#define PCIE_CLIENT_LINK_STATUS_UP 0x00300000
@ -135,13 +136,14 @@
#define PCIE_RC_CONFIG_VENDOR (PCIE_RC_CONFIG_BASE + 0x00)
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
#define PCIE_RC_CONFIG_SCC_SHIFT 16
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18
#define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff
#define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26
#define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_LCS_RETRAIN_LINK BIT(5)
#define PCIE_RC_CONFIG_LCS_LBMIE BIT(10)
#define PCIE_RC_CONFIG_LCS_LABIE BIT(11)
#define PCIE_RC_CONFIG_LCS_LBMS BIT(30)
#define PCIE_RC_CONFIG_LCS_LAMS BIT(31)
#define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c)
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)

#define PCIE_CORE_AXI_CONF_BASE 0xc00000
#define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0)
@ -203,8 +205,14 @@ struct rockchip_pcie {
struct gpio_desc *ep_gpio;
u32 lanes;
u8 root_bus_nr;
int link_gen;
struct device *dev;
struct irq_domain *irq_domain;
u32 io_size;
int offset;
phys_addr_t io_bus_addr;
u32 mem_size;
phys_addr_t mem_bus_addr;
};

static u32 rockchip_pcie_read(struct rockchip_pcie *rockchip, u32 reg)
@ -223,7 +231,7 @@ static void rockchip_pcie_enable_bw_int(struct rockchip_pcie *rockchip)
u32 status;

status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status |= (PCIE_RC_CONFIG_LCS_LBMIE | PCIE_RC_CONFIG_LCS_LABIE);
status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
}

@ -232,7 +240,7 @@ static void rockchip_pcie_clr_bw_int(struct rockchip_pcie *rockchip)
u32 status;

status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status |= (PCIE_RC_CONFIG_LCS_LBMS | PCIE_RC_CONFIG_LCS_LAMS);
status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
}

@ -398,6 +406,40 @@ static struct pci_ops rockchip_pcie_ops = {
.write = rockchip_pcie_wr_conf,
};

static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip)
{
u32 status, curr, scale, power;

if (IS_ERR(rockchip->vpcie3v3))
return;

/*
* Set RC's captured slot power limit and scale if
* vpcie3v3 available. The default values are both zero
* which means the software should set these two according
* to the actual power supply.
*/
curr = regulator_get_current_limit(rockchip->vpcie3v3);
if (curr > 0) {
scale = 3; /* 0.001x */
curr = curr / 1000; /* convert to mA */
power = (curr * 3300) / 1000; /* milliwatt */
while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) {
if (!scale) {
dev_warn(rockchip->dev, "invalid power supply\n");
return;
}
scale--;
power = power / 10;
}

status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR);
status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) |
(scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR);
}
}

/**
* rockchip_pcie_init_port - Initialize hardware
* @rockchip: PCIe port information
@ -429,26 +471,6 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
return err;
}

udelay(10);

err = reset_control_deassert(rockchip->pm_rst);
if (err) {
dev_err(dev, "deassert pm_rst err %d\n", err);
return err;
}

err = reset_control_deassert(rockchip->aclk_rst);
if (err) {
dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err);
return err;
}

err = reset_control_deassert(rockchip->pclk_rst);
if (err) {
dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err);
return err;
}

err = phy_init(rockchip->phy);
if (err < 0) {
dev_err(dev, "fail to init phy, err %d\n", err);
@ -479,14 +501,40 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
return err;
}

udelay(10);

err = reset_control_deassert(rockchip->pm_rst);
if (err) {
dev_err(dev, "deassert pm_rst err %d\n", err);
return err;
}

err = reset_control_deassert(rockchip->aclk_rst);
if (err) {
dev_err(dev, "deassert aclk_rst err %d\n", err);
return err;
}

err = reset_control_deassert(rockchip->pclk_rst);
if (err) {
dev_err(dev, "deassert pclk_rst err %d\n", err);
return err;
}

if (rockchip->link_gen == 2)
rockchip_pcie_write(rockchip, PCIE_CLIENT_GEN_SEL_2,
PCIE_CLIENT_CONFIG);
else
rockchip_pcie_write(rockchip, PCIE_CLIENT_GEN_SEL_1,
PCIE_CLIENT_CONFIG);

rockchip_pcie_write(rockchip,
PCIE_CLIENT_CONF_ENABLE |
PCIE_CLIENT_LINK_TRAIN_ENABLE |
PCIE_CLIENT_ARI_ENABLE |
PCIE_CLIENT_CONF_LANE_NUM(rockchip->lanes) |
PCIE_CLIENT_MODE_RC |
PCIE_CLIENT_GEN_SEL_2,
PCIE_CLIENT_CONFIG);
PCIE_CLIENT_MODE_RC,
PCIE_CLIENT_CONFIG);

err = phy_power_on(rockchip->phy);
if (err) {
@ -522,21 +570,19 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
return err;
}

/*
* We need to read/write PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 before
* enabling ASPM. Otherwise L1PwrOnSc and L1PwrOnVal isn't
* reliable and enabling ASPM doesn't work. This is a controller
* bug we need to work around.
*/
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2);
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2);

/* Fix the transmitted FTS count desired to exit from L0s. */
status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL_PLC1);
status = (status & PCIE_CORE_CTRL_PLC1_FTS_MASK) |
status = (status & ~PCIE_CORE_CTRL_PLC1_FTS_MASK) |
(PCIE_CORE_CTRL_PLC1_FTS_CNT << PCIE_CORE_CTRL_PLC1_FTS_SHIFT);
rockchip_pcie_write(rockchip, status, PCIE_CORE_CTRL_PLC1);

rockchip_pcie_set_power_limit(rockchip);

/* Set RC's clock architecture as common clock */
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status |= PCI_EXP_LNKCTL_CCC;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

/* Enable Gen1 training */
rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
PCIE_CLIENT_CONFIG);
@ -563,35 +609,37 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
msleep(20);
}

/*
* Enable retrain for gen2. This should be configured only after
* gen1 finished.
*/
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status |= PCIE_RC_CONFIG_LCS_RETRAIN_LINK;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
if (rockchip->link_gen == 2) {
/*
* Enable retrain for gen2. This should be configured only after
* gen1 finished.
*/
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
status |= PCI_EXP_LNKCTL_RL;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

timeout = jiffies + msecs_to_jiffies(500);
for (;;) {
status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
if ((status & PCIE_CORE_PL_CONF_SPEED_MASK) ==
PCIE_CORE_PL_CONF_SPEED_5G) {
dev_dbg(dev, "PCIe link training gen2 pass!\n");
break;
timeout = jiffies + msecs_to_jiffies(500);
for (;;) {
status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
if ((status & PCIE_CORE_PL_CONF_SPEED_MASK) ==
PCIE_CORE_PL_CONF_SPEED_5G) {
dev_dbg(dev, "PCIe link training gen2 pass!\n");
break;
}

if (time_after(jiffies, timeout)) {
dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
break;
}

msleep(20);
}

if (time_after(jiffies, timeout)) {
dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
break;
}

msleep(20);
}

/* Check the final link width from negotiated lane counter from MGMT */
status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >>
PCIE_CORE_PL_CONF_LANE_MASK);
status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >>
PCIE_CORE_PL_CONF_LANE_SHIFT);
dev_dbg(dev, "current link width is x%d\n", status);

rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
@ -599,6 +647,12 @@ static int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
rockchip_pcie_write(rockchip,
PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT,
PCIE_RC_CONFIG_RID_CCR);

/* Clear THP cap's next cap pointer to remove L1 substate cap */
status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_THP_CAP);
status &= ~PCIE_RC_CONFIG_THP_CAP_NEXT_MASK;
rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_THP_CAP);

rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF);

rockchip_pcie_write(rockchip,
@ -794,6 +848,10 @@ static int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
rockchip->lanes = 1;
}

rockchip->link_gen = of_pci_get_max_link_speed(node);
if (rockchip->link_gen < 0 || rockchip->link_gen > 2)
rockchip->link_gen = 2;

rockchip->core_rst = devm_reset_control_get(dev, "core");
if (IS_ERR(rockchip->core_rst)) {
if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER)
@ -1087,6 +1145,50 @@ static int rockchip_pcie_prog_ib_atu(struct rockchip_pcie *rockchip,
return 0;
}

static int rockchip_cfg_atu(struct rockchip_pcie *rockchip)
{
struct device *dev = rockchip->dev;
int offset;
int err;
int reg_no;

for (reg_no = 0; reg_no < (rockchip->mem_size >> 20); reg_no++) {
err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1,
AXI_WRAPPER_MEM_WRITE,
20 - 1,
rockchip->mem_bus_addr +
(reg_no << 20),
0);
if (err) {
dev_err(dev, "program RC mem outbound ATU failed\n");
return err;
}
}

err = rockchip_pcie_prog_ib_atu(rockchip, 2, 32 - 1, 0x0, 0);
if (err) {
dev_err(dev, "program RC mem inbound ATU failed\n");
return err;
}

offset = rockchip->mem_size >> 20;
for (reg_no = 0; reg_no < (rockchip->io_size >> 20); reg_no++) {
err = rockchip_pcie_prog_ob_atu(rockchip,
reg_no + 1 + offset,
AXI_WRAPPER_IO_WRITE,
20 - 1,
rockchip->io_bus_addr +
(reg_no << 20),
0);
if (err) {
dev_err(dev, "program RC io outbound ATU failed\n");
return err;
}
}

return 0;
}

static int rockchip_pcie_probe(struct platform_device *pdev)
{
struct rockchip_pcie *rockchip;
@ -1096,13 +1198,7 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
resource_size_t io_base;
struct resource *mem;
struct resource *io;
phys_addr_t io_bus_addr = 0;
u32 io_size;
phys_addr_t mem_bus_addr = 0;
u32 mem_size = 0;
int reg_no;
int err;
int offset;

LIST_HEAD(res);

@ -1169,14 +1265,13 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
goto err_vpcie;

/* Get the I/O and memory ranges from DT */
io_size = 0;
resource_list_for_each_entry(win, &res) {
switch (resource_type(win->res)) {
case IORESOURCE_IO:
io = win->res;
io->name = "I/O";
io_size = resource_size(io);
io_bus_addr = io->start - win->offset;
rockchip->io_size = resource_size(io);
rockchip->io_bus_addr = io->start - win->offset;
err = pci_remap_iospace(io, io_base);
if (err) {
dev_warn(dev, "error %d: failed to map resource %pR\n",
@ -1187,8 +1282,8 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
case IORESOURCE_MEM:
mem = win->res;
mem->name = "MEM";
mem_size = resource_size(mem);
mem_bus_addr = mem->start - win->offset;
rockchip->mem_size = resource_size(mem);
rockchip->mem_bus_addr = mem->start - win->offset;
break;
case IORESOURCE_BUS:
rockchip->root_bus_nr = win->res->start;
@ -1198,45 +1293,9 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
}
}

if (mem_size) {
for (reg_no = 0; reg_no < (mem_size >> 20); reg_no++) {
err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1,
AXI_WRAPPER_MEM_WRITE,
20 - 1,
mem_bus_addr +
(reg_no << 20),
0);
if (err) {
dev_err(dev, "program RC mem outbound ATU failed\n");
goto err_vpcie;
}
}
}

err = rockchip_pcie_prog_ib_atu(rockchip, 2, 32 - 1, 0x0, 0);
if (err) {
dev_err(dev, "program RC mem inbound ATU failed\n");
err = rockchip_cfg_atu(rockchip);
if (err)
goto err_vpcie;
}

offset = mem_size >> 20;

if (io_size) {
for (reg_no = 0; reg_no < (io_size >> 20); reg_no++) {
err = rockchip_pcie_prog_ob_atu(rockchip,
reg_no + 1 + offset,
AXI_WRAPPER_IO_WRITE,
20 - 1,
io_bus_addr +
(reg_no << 20),
0);
if (err) {
dev_err(dev, "program RC io outbound ATU failed\n");
goto err_vpcie;
}
}
}

bus = pci_scan_root_bus(&pdev->dev, 0, &rockchip_pcie_ops, rockchip, &res);
if (!bus) {
err = -ENOMEM;
@ -1249,9 +1308,6 @@ static int rockchip_pcie_probe(struct platform_device *pdev)
pcie_bus_configure_settings(child);

pci_bus_add_devices(bus);

dev_warn(dev, "only 32-bit config accesses supported; smaller writes may corrupt adjacent RW1C fields\n");

return err;

err_vpcie:
@@ -296,8 +296,4 @@ static struct platform_driver spear13xx_pcie_driver = {
	},
};

static int __init spear13xx_pcie_init(void)
{
	return platform_driver_register(&spear13xx_pcie_driver);
}
device_initcall(spear13xx_pcie_init);
builtin_platform_driver(spear13xx_pcie_driver);
@@ -19,6 +19,7 @@
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/pci.h>
#include <linux/srcu.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>

@@ -39,7 +40,6 @@ static DEFINE_RAW_SPINLOCK(list_lock);
/**
 * struct vmd_irq - private data to map driver IRQ to the VMD shared vector
 * @node: list item for parent traversal.
 * @rcu: RCU callback item for freeing.
 * @irq: back pointer to parent.
 * @enabled: true if driver enabled IRQ
 * @virq: the virtual IRQ value provided to the requesting driver.
@@ -49,7 +49,6 @@ static DEFINE_RAW_SPINLOCK(list_lock);
 */
struct vmd_irq {
	struct list_head node;
	struct rcu_head rcu;
	struct vmd_irq_list *irq;
	bool enabled;
	unsigned int virq;
@@ -58,11 +57,13 @@ struct vmd_irq {
/**
 * struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
 * @irq_list: the list of irq's the VMD one demuxes to.
 * @srcu: SRCU struct for local synchronization.
 * @count: number of child IRQs assigned to this vector; used to track
 * sharing.
 */
struct vmd_irq_list {
	struct list_head irq_list;
	struct srcu_struct srcu;
	unsigned int count;
};

@@ -224,14 +225,14 @@ static void vmd_msi_free(struct irq_domain *domain,
	struct vmd_irq *vmdirq = irq_get_chip_data(virq);
	unsigned long flags;

	synchronize_rcu();
	synchronize_srcu(&vmdirq->irq->srcu);

	/* XXX: Potential optimization to rebalance */
	raw_spin_lock_irqsave(&list_lock, flags);
	vmdirq->irq->count--;
	raw_spin_unlock_irqrestore(&list_lock, flags);

	kfree_rcu(vmdirq, rcu);
	kfree(vmdirq);
}

static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
@@ -646,11 +647,12 @@ static irqreturn_t vmd_irq(int irq, void *data)
{
	struct vmd_irq_list *irqs = data;
	struct vmd_irq *vmdirq;
	int idx;

	rcu_read_lock();
	idx = srcu_read_lock(&irqs->srcu);
	list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node)
		generic_handle_irq(vmdirq->virq);
	rcu_read_unlock();
	srcu_read_unlock(&irqs->srcu, idx);

	return IRQ_HANDLED;
}
@@ -696,6 +698,10 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
		return -ENOMEM;

	for (i = 0; i < vmd->msix_count; i++) {
		err = init_srcu_struct(&vmd->irqs[i].srcu);
		if (err)
			return err;

		INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
		err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
				       vmd_irq, 0, "vmd", &vmd->irqs[i]);
@@ -714,12 +720,20 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
	return 0;
}

static void vmd_cleanup_srcu(struct vmd_dev *vmd)
{
	int i;

	for (i = 0; i < vmd->msix_count; i++)
		cleanup_srcu_struct(&vmd->irqs[i].srcu);
}

static void vmd_remove(struct pci_dev *dev)
{
	struct vmd_dev *vmd = pci_get_drvdata(dev);

	vmd_detach_resources(vmd);
	pci_set_drvdata(dev, NULL);
	vmd_cleanup_srcu(vmd);
	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
	pci_stop_root_bus(vmd->bus);
	pci_remove_root_bus(vmd->bus);
@@ -727,7 +741,7 @@ static void vmd_remove(struct pci_dev *dev)
	irq_domain_remove(vmd->irq_domain);
}

#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
static int vmd_suspend(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
@@ -222,35 +222,6 @@ static void acpiphp_post_dock_fixup(struct acpi_device *adev)
	acpiphp_let_context_go(context);
}

/* Check whether the PCI device is managed by native PCIe hotplug driver */
static bool device_is_managed_by_native_pciehp(struct pci_dev *pdev)
{
	u32 reg32;
	acpi_handle tmp;
	struct acpi_pci_root *root;

	/* Check whether the PCIe port supports native PCIe hotplug */
	if (pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32))
		return false;
	if (!(reg32 & PCI_EXP_SLTCAP_HPC))
		return false;

	/*
	 * Check whether native PCIe hotplug has been enabled for
	 * this PCIe hierarchy.
	 */
	tmp = acpi_find_root_bridge_handle(pdev);
	if (!tmp)
		return false;
	root = acpi_pci_find_root(tmp);
	if (!root)
		return false;
	if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))
		return false;

	return true;
}

/**
 * acpiphp_add_context - Add ACPIPHP context to an ACPI device object.
 * @handle: ACPI handle of the object to add a context to.
@@ -334,7 +305,7 @@ static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data,
	 * expose slots to user space in those cases.
	 */
	if ((acpi_pci_check_ejectable(pbus, handle) || is_dock_device(adev))
	    && !(pdev && device_is_managed_by_native_pciehp(pdev))) {
	    && !(pdev && pdev->is_hotplug_bridge && pciehp_is_native(pdev))) {
		unsigned long long sun;
		int retval;

@@ -867,7 +867,8 @@ static int cpqhpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
	 */
	if ((pdev->revision <= 2) && (vendor_id != PCI_VENDOR_ID_INTEL)) {
		err(msg_HPC_not_supported);
		return -ENODEV;
		rc = -ENODEV;
		goto err_disable_device;
	}

	/* TODO: This code can be made to support non-Compaq or Intel
@@ -23,6 +23,9 @@
 *
 * Send feedback to <kristen.c.accardi@intel.com>
 *
 * Authors:
 *	Greg Kroah-Hartman <greg@kroah.com>
 *	Scott Murray <scottm@somanetworks.com>
 */

#include <linux/module.h>	/* try_module_get & module_put */
@@ -50,15 +53,9 @@
#define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg)
#define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg)


/* local variables */
static bool debug;

#define DRIVER_VERSION	"0.5"
#define DRIVER_AUTHOR	"Greg Kroah-Hartman <greg@kroah.com>, Scott Murray <scottm@somanetworks.com>"
#define DRIVER_DESC	"PCI Hot Plug PCI Core"


static LIST_HEAD(pci_hotplug_slot_list);
static DEFINE_MUTEX(pci_hp_mutex);

@@ -534,7 +531,6 @@ static int __init pci_hotplug_init(void)
		return result;
	}

	info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
	return result;
}
device_initcall(pci_hotplug_init);
@@ -25,6 +25,10 @@
 *
 * Send feedback to <greg@kroah.com>, <kristen.c.accardi@intel.com>
 *
 * Authors:
 *	Dan Zink <dan.zink@compaq.com>
 *	Greg Kroah-Hartman <greg@kroah.com>
 *	Dely Sy <dely.l.sy@intel.com>"
 */

#include <linux/moduleparam.h>
@@ -42,10 +46,6 @@ bool pciehp_poll_mode;
int pciehp_poll_time;
static bool pciehp_force;

#define DRIVER_VERSION	"0.4"
#define DRIVER_AUTHOR	"Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
#define DRIVER_DESC	"PCI Express Hot Plug Controller Driver"

/*
 * not really modular, but the easiest way to keep compat with existing
 * bootargs behaviour is to continue using module_param here.
@@ -333,7 +333,6 @@ static int __init pcied_init(void)

	retval = pcie_port_service_register(&hpdriver_portdrv);
	dbg("pcie_port_service_register = %d\n", retval);
	info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
	if (retval)
		dbg("Failure to register service\n");

@@ -31,6 +31,7 @@
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/pci.h>
#include "../pci.h"
#include "pciehp.h"
@@ -98,6 +99,7 @@ static int board_added(struct slot *p_slot)
	pciehp_green_led_blink(p_slot);

	/* Check link training status */
	pm_runtime_get_sync(&ctrl->pcie->port->dev);
	retval = pciehp_check_link_status(ctrl);
	if (retval) {
		ctrl_err(ctrl, "Failed to check link status\n");
@@ -118,12 +120,14 @@ static int board_added(struct slot *p_slot)
		if (retval != -EEXIST)
			goto err_exit;
	}
	pm_runtime_put(&ctrl->pcie->port->dev);

	pciehp_green_led_on(p_slot);
	pciehp_set_attention_status(p_slot, 0);
	return 0;

err_exit:
	pm_runtime_put(&ctrl->pcie->port->dev);
	set_slot_off(ctrl, p_slot);
	return retval;
}
@@ -137,7 +141,9 @@ static int remove_board(struct slot *p_slot)
	int retval;
	struct controller *ctrl = p_slot->ctrl;

	pm_runtime_get_sync(&ctrl->pcie->port->dev);
	retval = pciehp_unconfigure_device(p_slot);
	pm_runtime_put(&ctrl->pcie->port->dev);
	if (retval)
		return retval;

@@ -410,7 +416,7 @@ int pciehp_enable_slot(struct slot *p_slot)
		if (getstatus) {
			ctrl_info(ctrl, "Slot(%s): Already enabled\n",
				  slot_name(p_slot));
			return -EINVAL;
			return 0;
		}
	}

@@ -620,8 +620,18 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
		pciehp_queue_interrupt_event(slot, INT_BUTTON_PRESS);
	}

	/* Check Presence Detect Changed */
	if (events & PCI_EXP_SLTSTA_PDC) {
	/*
	 * Check Link Status Changed at higher precedence than Presence
	 * Detect Changed. The PDS value may be set to "card present" from
	 * out-of-band detection, which may be in conflict with a Link Down
	 * and cause the wrong event to queue.
	 */
	if (events & PCI_EXP_SLTSTA_DLLSC) {
		ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
			  link ? "Up" : "Down");
		pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
					     INT_LINK_DOWN);
	} else if (events & PCI_EXP_SLTSTA_PDC) {
		present = !!(status & PCI_EXP_SLTSTA_PDS);
		ctrl_info(ctrl, "Slot(%s): Card %spresent\n", slot_name(slot),
			  present ? "" : "not ");
@@ -636,13 +646,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
		pciehp_queue_interrupt_event(slot, INT_POWER_FAULT);
	}

	if (events & PCI_EXP_SLTSTA_DLLSC) {
		ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
			  link ? "Up" : "Down");
		pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
					     INT_LINK_DOWN);
	}

	return IRQ_HANDLED;
}

@@ -306,13 +306,6 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
		return rc;
	}

	pci_iov_set_numvfs(dev, nr_virtfn);
	iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
	pci_cfg_access_lock(dev);
	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
	msleep(100);
	pci_cfg_access_unlock(dev);

	iov->initial_VFs = initial;
	if (nr_virtfn < initial)
		initial = nr_virtfn;
@@ -323,6 +316,13 @@ static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
		goto err_pcibios;
	}

	pci_iov_set_numvfs(dev, nr_virtfn);
	iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
	pci_cfg_access_lock(dev);
	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
	msleep(100);
	pci_cfg_access_unlock(dev);

	for (i = 0; i < initial; i++) {
		rc = pci_iov_add_virtfn(dev, i, 0);
		if (rc)
@@ -554,21 +554,61 @@ void pci_iov_release(struct pci_dev *dev)
}

/**
 * pci_iov_resource_bar - get position of the SR-IOV BAR
 * pci_iov_update_resource - update a VF BAR
 * @dev: the PCI device
 * @resno: the resource number
 *
 * Returns position of the BAR encapsulated in the SR-IOV capability.
 * Update a VF BAR in the SR-IOV capability of a PF.
 */
int pci_iov_resource_bar(struct pci_dev *dev, int resno)
void pci_iov_update_resource(struct pci_dev *dev, int resno)
{
	if (resno < PCI_IOV_RESOURCES || resno > PCI_IOV_RESOURCE_END)
		return 0;
	struct pci_sriov *iov = dev->is_physfn ? dev->sriov : NULL;
	struct resource *res = dev->resource + resno;
	int vf_bar = resno - PCI_IOV_RESOURCES;
	struct pci_bus_region region;
	u16 cmd;
	u32 new;
	int reg;

	BUG_ON(!dev->is_physfn);
	/*
	 * The generic pci_restore_bars() path calls this for all devices,
	 * including VFs and non-SR-IOV devices. If this is not a PF, we
	 * have nothing to do.
	 */
	if (!iov)
		return;

	return dev->sriov->pos + PCI_SRIOV_BAR +
		4 * (resno - PCI_IOV_RESOURCES);
	pci_read_config_word(dev, iov->pos + PCI_SRIOV_CTRL, &cmd);
	if ((cmd & PCI_SRIOV_CTRL_VFE) && (cmd & PCI_SRIOV_CTRL_MSE)) {
		dev_WARN(&dev->dev, "can't update enabled VF BAR%d %pR\n",
			 vf_bar, res);
		return;
	}

	/*
	 * Ignore unimplemented BARs, unused resource slots for 64-bit
	 * BARs, and non-movable resources, e.g., those described via
	 * Enhanced Allocation.
	 */
	if (!res->flags)
		return;

	if (res->flags & IORESOURCE_UNSET)
		return;

	if (res->flags & IORESOURCE_PCI_FIXED)
		return;

	pcibios_resource_to_bus(dev->bus, &region, res);
	new = region.start;
	new |= res->flags & ~PCI_BASE_ADDRESS_MEM_MASK;

	reg = iov->pos + PCI_SRIOV_BAR + 4 * vf_bar;
	pci_write_config_dword(dev, reg, new);
	if (res->flags & IORESOURCE_MEM_64) {
		new = region.start >> 16 >> 16;
		pci_write_config_dword(dev, reg + 4, new);
	}
}

resource_size_t __weak pcibios_iov_resource_alignment(struct pci_dev *dev,
@@ -1302,7 +1302,8 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
	} else if (dev->msi_enabled) {
		struct msi_desc *entry = first_pci_msi_entry(dev);

		if (WARN_ON_ONCE(!entry || nr >= entry->nvec_used))
		if (WARN_ON_ONCE(!entry || !entry->affinity ||
				 nr >= entry->nvec_used))
			return NULL;

		return &entry->affinity[nr];
@@ -29,6 +29,82 @@ const u8 pci_acpi_dsm_uuid[] = {
	0x91, 0x17, 0xea, 0x4d, 0x19, 0xc3, 0x43, 0x4d
};

#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
static int acpi_get_rc_addr(struct acpi_device *adev, struct resource *res)
{
	struct device *dev = &adev->dev;
	struct resource_entry *entry;
	struct list_head list;
	unsigned long flags;
	int ret;

	INIT_LIST_HEAD(&list);
	flags = IORESOURCE_MEM;
	ret = acpi_dev_get_resources(adev, &list,
				     acpi_dev_filter_resource_type_cb,
				     (void *) flags);
	if (ret < 0) {
		dev_err(dev, "failed to parse _CRS method, error code %d\n",
			ret);
		return ret;
	}

	if (ret == 0) {
		dev_err(dev, "no IO and memory resources present in _CRS\n");
		return -EINVAL;
	}

	entry = list_first_entry(&list, struct resource_entry, node);
	*res = *entry->res;
	acpi_dev_free_resource_list(&list);
	return 0;
}

static acpi_status acpi_match_rc(acpi_handle handle, u32 lvl, void *context,
				 void **retval)
{
	u16 *segment = context;
	unsigned long long uid;
	acpi_status status;

	status = acpi_evaluate_integer(handle, "_UID", NULL, &uid);
	if (ACPI_FAILURE(status) || uid != *segment)
		return AE_CTRL_DEPTH;

	*(acpi_handle *)retval = handle;
	return AE_CTRL_TERMINATE;
}

int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
			  struct resource *res)
{
	struct acpi_device *adev;
	acpi_status status;
	acpi_handle handle;
	int ret;

	status = acpi_get_devices(hid, acpi_match_rc, &segment, &handle);
	if (ACPI_FAILURE(status)) {
		dev_err(dev, "can't find _HID %s device to locate resources\n",
			hid);
		return -ENODEV;
	}

	ret = acpi_bus_get_device(handle, &adev);
	if (ret)
		return ret;

	ret = acpi_get_rc_addr(adev, res);
	if (ret) {
		dev_err(dev, "can't get resource from %s\n",
			dev_name(&adev->dev));
		return ret;
	}

	return 0;
}
#endif

phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle)
{
	acpi_status status = AE_NOT_EXIST;
@@ -293,6 +369,30 @@ int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
}
EXPORT_SYMBOL_GPL(pci_get_hp_params);

/**
 * pciehp_is_native - Check whether a hotplug port is handled by the OS
 * @pdev: Hotplug port to check
 *
 * Walk up from @pdev to the host bridge, obtain its cached _OSC Control Field
 * and return the value of the "PCI Express Native Hot Plug control" bit.
 * On failure to obtain the _OSC Control Field return %false.
 */
bool pciehp_is_native(struct pci_dev *pdev)
{
	struct acpi_pci_root *root;
	acpi_handle handle;

	handle = acpi_find_root_bridge_handle(pdev);
	if (!handle)
		return false;

	root = acpi_pci_find_root(handle);
	if (!root)
		return false;

	return root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
}

/**
 * pci_acpi_wake_bus - Root bus wakeup notification fork function.
 * @work: Work item to handle.
@@ -54,7 +54,7 @@ static bool mid_pci_need_resume(struct pci_dev *dev)
	return false;
}

static struct pci_platform_pm_ops mid_pci_platform_pm = {
static const struct pci_platform_pm_ops mid_pci_platform_pm = {
	.is_manageable = mid_pci_power_manageable,
	.set_state = mid_pci_set_power_state,
	.get_state = mid_pci_get_power_state,
@ -50,6 +50,7 @@ pci_config_attr(vendor, "0x%04x\n");
|
||||
pci_config_attr(device, "0x%04x\n");
|
||||
pci_config_attr(subsystem_vendor, "0x%04x\n");
|
||||
pci_config_attr(subsystem_device, "0x%04x\n");
|
||||
pci_config_attr(revision, "0x%02x\n");
|
||||
pci_config_attr(class, "0x%06x\n");
|
||||
pci_config_attr(irq, "%u\n");
|
||||
|
||||
@ -568,6 +569,7 @@ static struct attribute *pci_dev_attrs[] = {
|
||||
&dev_attr_device.attr,
|
||||
&dev_attr_subsystem_vendor.attr,
|
||||
&dev_attr_subsystem_device.attr,
|
||||
&dev_attr_revision.attr,
|
||||
&dev_attr_class.attr,
|
||||
&dev_attr_irq.attr,
|
||||
&dev_attr_local_cpus.attr,
|
||||
|
@ -564,10 +564,6 @@ static void pci_restore_bars(struct pci_dev *dev)
|
||||
{
|
||||
int i;
|
||||
|
||||
/* Per SR-IOV spec 3.4.1.11, VF BARs are RO zero */
|
||||
if (dev->is_virtfn)
|
||||
return;
|
||||
|
||||
for (i = 0; i < PCI_BRIDGE_RESOURCES; i++)
|
||||
pci_update_resource(dev, i);
|
||||
}
|
||||
@ -2106,6 +2102,10 @@ bool pci_dev_run_wake(struct pci_dev *dev)
|
||||
if (!dev->pme_support)
|
||||
return false;
|
||||
|
||||
/* PME-capable in principle, but not from the intended sleep state */
|
||||
if (!pci_pme_capable(dev, pci_target_state(dev)))
|
||||
return false;
|
||||
|
||||
while (bus->parent) {
|
||||
struct pci_dev *bridge = bus->self;
|
||||
|
||||
@ -2226,7 +2226,7 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev)
|
||||
* This function checks if it is possible to move the bridge to D3.
|
||||
* Currently we only allow D3 for recent enough PCIe ports.
|
||||
*/
|
||||
static bool pci_bridge_d3_possible(struct pci_dev *bridge)
|
||||
bool pci_bridge_d3_possible(struct pci_dev *bridge)
|
||||
{
|
||||
unsigned int year;
|
||||
|
||||
@ -2239,6 +2239,14 @@ static bool pci_bridge_d3_possible(struct pci_dev *bridge)
|
||||
case PCI_EXP_TYPE_DOWNSTREAM:
|
||||
if (pci_bridge_d3_disable)
|
||||
return false;
|
||||
|
||||
/*
|
||||
* Hotplug ports handled by firmware in System Management Mode
|
||||
* may not be put into D3 by the OS (Thunderbolt on non-Macs).
|
||||
*/
|
||||
if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge))
|
||||
return false;
|
||||
|
||||
if (pci_bridge_d3_force)
|
||||
return true;
|
||||
|
||||
@ -2259,32 +2267,36 @@ static bool pci_bridge_d3_possible(struct pci_dev *bridge)
|
||||
static int pci_dev_check_d3cold(struct pci_dev *dev, void *data)
|
||||
{
|
||||
bool *d3cold_ok = data;
|
||||
bool no_d3cold;
|
||||
|
||||
/*
|
||||
* The device needs to be allowed to go D3cold and if it is wake
|
||||
* capable to do so from D3cold.
|
||||
*/
|
||||
no_d3cold = dev->no_d3cold || !dev->d3cold_allowed ||
|
||||
(device_may_wakeup(&dev->dev) && !pci_pme_capable(dev, PCI_D3cold)) ||
|
||||
!pci_power_manageable(dev);
|
||||
if (/* The device needs to be allowed to go D3cold ... */
|
||||
dev->no_d3cold || !dev->d3cold_allowed ||
|
||||
|
||||
*d3cold_ok = !no_d3cold;
|
||||
/* ... and if it is wakeup capable to do so from D3cold. */
|
||||
(device_may_wakeup(&dev->dev) &&
|
||||
!pci_pme_capable(dev, PCI_D3cold)) ||
|
||||
|
||||
return no_d3cold;
|
||||
/* If it is a bridge it must be allowed to go to D3. */
|
||||
!pci_power_manageable(dev) ||
|
||||
|
||||
/* Hotplug interrupts cannot be delivered if the link is down. */
|
||||
dev->is_hotplug_bridge)
|
||||
|
||||
*d3cold_ok = false;
|
||||
|
||||
return !*d3cold_ok;
|
||||
}
|
||||
|
||||
/*
|
||||
* pci_bridge_d3_update - Update bridge D3 capabilities
|
||||
* @dev: PCI device which is changed
|
||||
* @remove: Is the device being removed
|
||||
*
|
||||
* Update upstream bridge PM capabilities accordingly depending on if the
|
||||
* device PM configuration was changed or the device is being removed. The
|
||||
* change is also propagated upstream.
|
||||
*/
|
||||
static void pci_bridge_d3_update(struct pci_dev *dev, bool remove)
|
||||
void pci_bridge_d3_update(struct pci_dev *dev)
|
||||
{
|
||||
bool remove = !device_is_registered(&dev->dev);
|
||||
struct pci_dev *bridge;
|
||||
bool d3cold_ok = true;
|
||||
|
||||
@ -2292,55 +2304,39 @@ static void pci_bridge_d3_update(struct pci_dev *dev, bool remove)
|
||||
if (!bridge || !pci_bridge_d3_possible(bridge))
|
||||
return;
|
||||
|
||||
pci_dev_get(bridge);
|
||||
/*
|
||||
* If the device is removed we do not care about its D3cold
|
||||
* capabilities.
|
||||
* If D3 is currently allowed for the bridge, removing one of its
|
||||
* children won't change that.
|
||||
*/
|
||||
if (remove && bridge->bridge_d3)
|
||||
return;
|
||||
|
||||
/*
|
||||
* If D3 is currently allowed for the bridge and a child is added or
|
||||
* changed, disallowance of D3 can only be caused by that child, so
|
||||
* we only need to check that single device, not any of its siblings.
|
||||
*
|
||||
* If D3 is currently not allowed for the bridge, checking the device
|
||||
* first may allow us to skip checking its siblings.
|
||||
*/
|
||||
if (!remove)
|
||||
pci_dev_check_d3cold(dev, &d3cold_ok);
|
||||
|
||||
if (d3cold_ok) {
|
||||
/*
|
||||
* We need to go through all children to find out if all of
|
||||
* them can still go to D3cold.
|
||||
*/
|
||||
/*
|
||||
* If D3 is currently not allowed for the bridge, this may be caused
|
||||
* either by the device being changed/removed or any of its siblings,
|
||||
* so we need to go through all children to find out if one of them
|
||||
* continues to block D3.
|
||||
*/
|
||||
if (d3cold_ok && !bridge->bridge_d3)
|
||||
pci_walk_bus(bridge->subordinate, pci_dev_check_d3cold,
|
||||
&d3cold_ok);
|
||||
}
|
||||
|
||||
if (bridge->bridge_d3 != d3cold_ok) {
|
||||
bridge->bridge_d3 = d3cold_ok;
|
||||
/* Propagate change to upstream bridges */
|
||||
pci_bridge_d3_update(bridge, false);
|
||||
pci_bridge_d3_update(bridge);
|
||||
}
|
||||
|
||||
pci_dev_put(bridge);
|
||||
}
|
||||
|
||||
/**
|
||||
* pci_bridge_d3_device_changed - Update bridge D3 capabilities on change
|
||||
* @dev: PCI device that was changed
|
||||
*
|
||||
* If a device is added or its PM configuration, such as is it allowed to
|
||||
* enter D3cold, is changed this function updates upstream bridge PM
|
||||
* capabilities accordingly.
|
||||
*/
|
||||
void pci_bridge_d3_device_changed(struct pci_dev *dev)
|
||||
{
|
||||
pci_bridge_d3_update(dev, false);
|
||||
}
|
||||
|
||||
/**
|
||||
* pci_bridge_d3_device_removed - Update bridge D3 capabilities on remove
|
||||
* @dev: PCI device being removed
|
||||
*
|
||||
* Function updates upstream bridge PM capabilities based on other devices
|
||||
* still left on the bus.
|
||||
*/
|
||||
void pci_bridge_d3_device_removed(struct pci_dev *dev)
|
||||
{
|
||||
pci_bridge_d3_update(dev, true);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -2355,7 +2351,7 @@ void pci_d3cold_enable(struct pci_dev *dev)
|
||||
{
|
||||
if (dev->no_d3cold) {
|
||||
dev->no_d3cold = false;
|
||||
pci_bridge_d3_device_changed(dev);
|
||||
pci_bridge_d3_update(dev);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_d3cold_enable);
|
||||
@ -2372,7 +2368,7 @@ void pci_d3cold_disable(struct pci_dev *dev)
|
||||
{
|
||||
if (!dev->no_d3cold) {
|
||||
dev->no_d3cold = true;
|
||||
pci_bridge_d3_device_changed(dev);
|
||||
pci_bridge_d3_update(dev);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_d3cold_disable);
|
||||
@ -4831,36 +4827,6 @@ int pci_select_bars(struct pci_dev *dev, unsigned long flags)
|
||||
}
|
||||
EXPORT_SYMBOL(pci_select_bars);
|
||||
|
||||
/**
|
||||
* pci_resource_bar - get position of the BAR associated with a resource
|
||||
* @dev: the PCI device
|
||||
* @resno: the resource number
|
||||
* @type: the BAR type to be filled in
|
||||
*
|
||||
* Returns BAR position in config space, or 0 if the BAR is invalid.
|
||||
*/
|
||||
int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type)
|
||||
{
|
||||
int reg;
|
||||
|
||||
if (resno < PCI_ROM_RESOURCE) {
|
||||
*type = pci_bar_unknown;
|
||||
return PCI_BASE_ADDRESS_0 + 4 * resno;
|
||||
} else if (resno == PCI_ROM_RESOURCE) {
|
||||
*type = pci_bar_mem32;
|
||||
return dev->rom_base_reg;
|
||||
} else if (resno < PCI_BRIDGE_RESOURCES) {
|
||||
/* device specific resource */
|
||||
*type = pci_bar_unknown;
|
||||
reg = pci_iov_resource_bar(dev, resno);
|
||||
if (reg)
|
||||
return reg;
|
||||
}
|
||||
|
||||
dev_err(&dev->dev, "BAR %d: invalid resource\n", resno);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Some architectures require additional programming to enable VGA */
|
||||
static arch_set_vga_state_t arch_set_vga_state;
|
||||
|
||||
|
@ -1,9 +1,6 @@
|
||||
#ifndef DRIVERS_PCI_H
|
||||
#define DRIVERS_PCI_H
|
||||
|
||||
#define PCI_CFG_SPACE_SIZE 256
|
||||
#define PCI_CFG_SPACE_EXP_SIZE 4096
|
||||
|
||||
#define PCI_FIND_CAP_TTL 48
|
||||
|
||||
extern const unsigned char pcie_link_speed[];
|
||||
@ -85,8 +82,8 @@ void pci_pm_init(struct pci_dev *dev);
|
||||
void pci_ea_init(struct pci_dev *dev);
|
||||
void pci_allocate_cap_save_buffers(struct pci_dev *dev);
|
||||
void pci_free_cap_save_buffers(struct pci_dev *dev);
|
||||
void pci_bridge_d3_device_changed(struct pci_dev *dev);
|
||||
void pci_bridge_d3_device_removed(struct pci_dev *dev);
|
||||
bool pci_bridge_d3_possible(struct pci_dev *dev);
|
||||
void pci_bridge_d3_update(struct pci_dev *dev);
|
||||
|
||||
static inline void pci_wakeup_event(struct pci_dev *dev)
|
||||
{
|
||||
@ -245,7 +242,6 @@ bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl,
|
||||
int pci_setup_device(struct pci_dev *dev);
|
||||
int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
|
||||
struct resource *res, unsigned int reg);
|
||||
int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type);
|
||||
void pci_configure_ari(struct pci_dev *dev);
|
||||
void __pci_bus_size_bridges(struct pci_bus *bus,
|
||||
struct list_head *realloc_head);
|
||||
@ -289,7 +285,7 @@ static inline void pci_restore_ats_state(struct pci_dev *dev)
|
||||
#ifdef CONFIG_PCI_IOV
|
||||
int pci_iov_init(struct pci_dev *dev);
|
||||
void pci_iov_release(struct pci_dev *dev);
|
||||
int pci_iov_resource_bar(struct pci_dev *dev, int resno);
|
||||
void pci_iov_update_resource(struct pci_dev *dev, int resno);
|
||||
resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno);
|
||||
void pci_restore_iov_state(struct pci_dev *dev);
|
||||
int pci_iov_bus_range(struct pci_bus *bus);
|
||||
@ -303,10 +299,6 @@ static inline void pci_iov_release(struct pci_dev *dev)
|
||||
|
||||
{
|
||||
}
|
||||
static inline int pci_iov_resource_bar(struct pci_dev *dev, int resno)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
static inline void pci_restore_iov_state(struct pci_dev *dev)
|
||||
{
|
||||
}
|
||||
@ -356,4 +348,9 @@ static inline int pci_dev_specific_reset(struct pci_dev *dev, int probe)
|
||||
}
|
||||
#endif
|
||||
|
||||
#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
|
||||
int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
|
||||
struct resource *res);
|
||||
#endif
|
||||
|
||||
#endif /* DRIVERS_PCI_H */
|
||||
|
@ -30,13 +30,6 @@
|
||||
#include "aerdrv.h"
|
||||
#include "../../pci.h"
|
||||
|
||||
/*
|
||||
* Version Information
|
||||
*/
|
||||
#define DRIVER_VERSION "v1.0"
|
||||
#define DRIVER_AUTHOR "tom.l.nguyen@intel.com"
|
||||
#define DRIVER_DESC "Root Port Advanced Error Reporting Driver"
|
||||
|
||||
static int aer_probe(struct pcie_device *dev);
|
||||
static void aer_remove(struct pcie_device *dev);
|
||||
static pci_ers_result_t aer_error_detected(struct pci_dev *dev,
|
||||
@ -297,12 +290,12 @@ static int aer_probe(struct pcie_device *dev)
|
||||
{
|
||||
int status;
|
||||
struct aer_rpc *rpc;
|
||||
struct device *device = &dev->device;
|
||||
struct device *device = &dev->port->dev;
|
||||
|
||||
/* Alloc rpc data structure */
|
||||
rpc = aer_alloc_rpc(dev);
|
||||
if (!rpc) {
|
||||
dev_printk(KERN_DEBUG, device, "alloc rpc failed\n");
|
||||
dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
|
||||
aer_remove(dev);
|
||||
return -ENOMEM;
|
||||
}
|
||||
@ -310,7 +303,8 @@ static int aer_probe(struct pcie_device *dev)
|
||||
/* Request IRQ ISR */
|
||||
status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev);
|
||||
if (status) {
|
||||
dev_printk(KERN_DEBUG, device, "request IRQ failed\n");
|
||||
dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
|
||||
dev->irq);
|
||||
aer_remove(dev);
|
||||
return status;
|
||||
}
|
||||
@ -318,8 +312,8 @@ static int aer_probe(struct pcie_device *dev)
|
||||
rpc->isr = 1;
|
||||
|
||||
aer_enable_rootport(rpc);
|
||||
|
||||
return status;
|
||||
dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -351,14 +351,28 @@ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
|
||||
return;
|
||||
}
|
||||
|
||||
/* Configure common clock before checking latencies */
|
||||
pcie_aspm_configure_common_clock(link);
|
||||
|
||||
/* Get upstream/downstream components' register state */
|
||||
pcie_get_aspm_reg(parent, &upreg);
|
||||
child = list_entry(linkbus->devices.next, struct pci_dev, bus_list);
|
||||
pcie_get_aspm_reg(child, &dwreg);
|
||||
|
||||
/*
|
||||
* If ASPM not supported, don't mess with the clocks and link,
|
||||
* bail out now.
|
||||
*/
|
||||
if (!(upreg.support & dwreg.support))
|
||||
return;
|
||||
|
||||
/* Configure common clock before checking latencies */
|
||||
pcie_aspm_configure_common_clock(link);
|
||||
|
||||
/*
|
||||
* Re-read upstream/downstream components' register state
|
||||
* after clock configuration
|
||||
*/
|
||||
pcie_get_aspm_reg(parent, &upreg);
|
||||
pcie_get_aspm_reg(child, &dwreg);
|
||||
|
||||
/*
|
||||
* Setup L0s state
|
||||
*
|
||||
@ -886,8 +900,8 @@ static ssize_t clk_ctl_store(struct device *dev,
|
||||
return n;
|
||||
}
|
||||
|
||||
static DEVICE_ATTR(link_state, 0644, link_state_show, link_state_store);
|
||||
static DEVICE_ATTR(clk_ctl, 0644, clk_ctl_show, clk_ctl_store);
|
||||
static DEVICE_ATTR_RW(link_state);
|
||||
static DEVICE_ATTR_RW(clk_ctl);
|
||||
|
||||
static char power_group[] = "power";
|
||||
void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev)
|
||||
|
@ -300,8 +300,6 @@ static irqreturn_t pcie_pme_irq(int irq, void *context)
|
||||
*/
|
||||
static int pcie_pme_set_native(struct pci_dev *dev, void *ign)
|
||||
{
|
||||
dev_info(&dev->dev, "Signaling PME through PCIe PME interrupt\n");
|
||||
|
||||
device_set_run_wake(&dev->dev, true);
|
||||
dev->pme_interrupt = true;
|
||||
return 0;
|
||||
@ -319,23 +317,8 @@ static int pcie_pme_set_native(struct pci_dev *dev, void *ign)
|
||||
static void pcie_pme_mark_devices(struct pci_dev *port)
|
||||
{
|
||||
pcie_pme_set_native(port, NULL);
|
||||
if (port->subordinate) {
|
||||
if (port->subordinate)
|
||||
pci_walk_bus(port->subordinate, pcie_pme_set_native, NULL);
|
||||
} else {
|
||||
struct pci_bus *bus = port->bus;
|
||||
struct pci_dev *dev;
|
||||
|
||||
/* Check if this is a root port event collector. */
|
||||
if (pci_pcie_type(port) != PCI_EXP_TYPE_RC_EC || !bus)
|
||||
return;
|
||||
|
||||
down_read(&pci_bus_sem);
|
||||
list_for_each_entry(dev, &bus->devices, bus_list)
|
||||
if (pci_is_pcie(dev)
|
||||
&& pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END)
|
||||
pcie_pme_set_native(dev, NULL);
|
||||
up_read(&pci_bus_sem);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@ -364,12 +347,14 @@ static int pcie_pme_probe(struct pcie_device *srv)
|
||||
ret = request_irq(srv->irq, pcie_pme_irq, IRQF_SHARED, "PCIe PME", srv);
|
||||
if (ret) {
|
||||
kfree(data);
|
||||
} else {
|
||||
pcie_pme_mark_devices(port);
|
||||
pcie_pme_interrupt_enable(port, true);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return ret;
|
||||
dev_info(&port->dev, "Signaling PME with IRQ %d\n", srv->irq);
|
||||
|
||||
pcie_pme_mark_devices(port);
|
||||
pcie_pme_interrupt_enable(port, true);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool pcie_pme_check_wakeup(struct pci_bus *bus)
|
||||
|
@@ -499,7 +499,6 @@ static int pcie_port_probe_service(struct device *dev)
	if (status)
		return status;

	dev_printk(KERN_DEBUG, dev, "service driver %s loaded\n", driver->name);
	get_device(dev);
	return 0;
}
@@ -524,8 +523,6 @@ static int pcie_port_remove_service(struct device *dev)
	pciedev = to_pcie_device(dev);
	driver = to_service_driver(dev->driver);
	if (driver && driver->remove) {
		dev_printk(KERN_DEBUG, dev, "unloading service driver %s\n",
			   driver->name);
		driver->remove(pciedev);
		put_device(dev);
	}
|
||||
|
@ -19,6 +19,7 @@
|
||||
#include <linux/dmi.h>
|
||||
#include <linux/pci-aspm.h>
|
||||
|
||||
#include "../pci.h"
|
||||
#include "portdrv.h"
|
||||
#include "aer/aerdrv.h"
|
||||
|
||||
@ -149,15 +150,7 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
|
||||
|
||||
pci_save_state(dev);
|
||||
|
||||
/*
|
||||
* Prevent runtime PM if the port is advertising support for PCIe
|
||||
* hotplug. Otherwise the BIOS hotplug SMI code might not be able
|
||||
* to enumerate devices behind this port properly (the port is
|
||||
* powered down preventing all config space accesses to the
|
||||
* subordinate devices). We can't be sure for native PCIe hotplug
|
||||
* either so prevent that as well.
|
||||
*/
|
||||
if (!dev->is_hotplug_bridge) {
|
||||
if (pci_bridge_d3_possible(dev)) {
|
||||
/*
|
||||
* Keep the port resumed 100ms to make sure things like
|
||||
* config space accesses from userspace (lspci) will not
|
||||
@ -175,7 +168,7 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
|
||||
|
||||
static void pcie_portdrv_remove(struct pci_dev *dev)
|
||||
{
|
||||
if (!dev->is_hotplug_bridge) {
|
||||
if (pci_bridge_d3_possible(dev)) {
|
||||
pm_runtime_forbid(&dev->dev);
|
||||
pm_runtime_get_noresume(&dev->dev);
|
||||
pm_runtime_dont_use_autosuspend(&dev->dev);
|
||||
|
@ -227,7 +227,8 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
|
||||
mask64 = (u32)PCI_BASE_ADDRESS_MEM_MASK;
|
||||
}
|
||||
} else {
|
||||
res->flags |= (l & IORESOURCE_ROM_ENABLE);
|
||||
if (l & PCI_ROM_ADDRESS_ENABLE)
|
||||
res->flags |= IORESOURCE_ROM_ENABLE;
|
||||
l64 = l & PCI_ROM_ADDRESS_MASK;
|
||||
sz64 = sz & PCI_ROM_ADDRESS_MASK;
|
||||
mask64 = (u32)PCI_ROM_ADDRESS_MASK;
|
||||
@ -521,18 +522,19 @@ static void pci_release_host_bridge_dev(struct device *dev)
|
||||
kfree(bridge);
|
||||
}
|
||||
|
||||
static struct pci_host_bridge *pci_alloc_host_bridge(struct pci_bus *b)
|
||||
struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
|
||||
{
|
||||
struct pci_host_bridge *bridge;
|
||||
|
||||
bridge = kzalloc(sizeof(*bridge), GFP_KERNEL);
|
||||
bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
|
||||
if (!bridge)
|
||||
return NULL;
|
||||
|
||||
INIT_LIST_HEAD(&bridge->windows);
|
||||
bridge->bus = b;
|
||||
|
||||
return bridge;
|
||||
}
|
||||
EXPORT_SYMBOL(pci_alloc_host_bridge);
|
||||
|
||||
static const unsigned char pcix_bus_speed[] = {
|
||||
PCI_SPEED_UNKNOWN, /* 0 */
|
||||
@ -717,6 +719,123 @@ static void pci_set_bus_msi_domain(struct pci_bus *bus)
|
||||
dev_set_msi_domain(&bus->dev, d);
|
||||
}
|
||||
|
||||
int pci_register_host_bridge(struct pci_host_bridge *bridge)
|
||||
{
|
||||
struct device *parent = bridge->dev.parent;
|
||||
struct resource_entry *window, *n;
|
||||
struct pci_bus *bus, *b;
|
||||
resource_size_t offset;
|
||||
LIST_HEAD(resources);
|
||||
struct resource *res;
|
||||
char addr[64], *fmt;
|
||||
const char *name;
|
||||
int err;
|
||||
|
||||
bus = pci_alloc_bus(NULL);
|
||||
if (!bus)
|
||||
return -ENOMEM;
|
||||
|
||||
bridge->bus = bus;
|
||||
|
||||
/* temporarily move resources off the list */
|
||||
list_splice_init(&bridge->windows, &resources);
|
||||
bus->sysdata = bridge->sysdata;
|
||||
bus->msi = bridge->msi;
|
||||
bus->ops = bridge->ops;
|
||||
bus->number = bus->busn_res.start = bridge->busnr;
|
||||
#ifdef CONFIG_PCI_DOMAINS_GENERIC
|
||||
bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
|
||||
#endif
|
||||
|
||||
b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
|
||||
if (b) {
|
||||
/* If we already got to this bus through a different bridge, ignore it */
|
||||
dev_dbg(&b->dev, "bus already known\n");
|
||||
err = -EEXIST;
|
||||
goto free;
|
||||
}
|
||||
|
||||
dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(bus),
|
||||
bridge->busnr);
|
||||
|
||||
err = pcibios_root_bridge_prepare(bridge);
|
||||
if (err)
|
||||
goto free;
|
||||
|
||||
err = device_register(&bridge->dev);
|
||||
if (err)
|
||||
put_device(&bridge->dev);
|
||||
|
||||
bus->bridge = get_device(&bridge->dev);
|
||||
device_enable_async_suspend(bus->bridge);
|
||||
pci_set_bus_of_node(bus);
|
||||
pci_set_bus_msi_domain(bus);
|
||||
|
||||
if (!parent)
|
||||
set_dev_node(bus->bridge, pcibus_to_node(bus));
|
||||
|
||||
bus->dev.class = &pcibus_class;
|
||||
bus->dev.parent = bus->bridge;
|
||||
|
||||
dev_set_name(&bus->dev, "%04x:%02x", pci_domain_nr(bus), bus->number);
|
||||
name = dev_name(&bus->dev);
|
||||
|
||||
err = device_register(&bus->dev);
|
||||
if (err)
|
||||
goto unregister;
|
||||
|
||||
pcibios_add_bus(bus);
|
||||
|
||||
/* Create legacy_io and legacy_mem files for this bus */
|
||||
pci_create_legacy_files(bus);
|
||||
|
||||
if (parent)
|
||||
dev_info(parent, "PCI host bridge to bus %s\n", name);
|
||||
else
|
||||
pr_info("PCI host bridge to bus %s\n", name);
|
||||
|
||||
/* Add initial resources to the bus */
|
||||
resource_list_for_each_entry_safe(window, n, &resources) {
|
||||
list_move_tail(&window->node, &bridge->windows);
|
||||
offset = window->offset;
|
||||
res = window->res;
|
||||
|
||||
if (res->flags & IORESOURCE_BUS)
|
||||
pci_bus_insert_busn_res(bus, bus->number, res->end);
|
||||
else
|
||||
pci_bus_add_resource(bus, res, 0);
|
||||
|
||||
if (offset) {
|
||||
if (resource_type(res) == IORESOURCE_IO)
|
||||
fmt = " (bus address [%#06llx-%#06llx])";
|
||||
else
|
||||
fmt = " (bus address [%#010llx-%#010llx])";
|
||||
|
||||
snprintf(addr, sizeof(addr), fmt,
|
||||
(unsigned long long)(res->start - offset),
|
||||
(unsigned long long)(res->end - offset));
|
||||
} else
|
||||
addr[0] = '\0';
|
||||
|
||||
dev_info(&bus->dev, "root bus resource %pR%s\n", res, addr);
|
||||
}
|
||||
|
||||
down_write(&pci_bus_sem);
|
||||
list_add_tail(&bus->node, &pci_root_buses);
|
||||
up_write(&pci_bus_sem);
|
||||
|
||||
return 0;
|
||||
|
||||
unregister:
|
||||
put_device(&bridge->dev);
|
||||
device_unregister(&bridge->dev);
|
||||
|
||||
free:
|
||||
kfree(bus);
|
||||
return err;
|
||||
}
|
||||
EXPORT_SYMBOL(pci_register_host_bridge);
|
||||
|
||||
static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
|
||||
struct pci_dev *bridge, int busnr)
|
||||
{
|
||||
@ -2155,113 +2274,43 @@ void __weak pcibios_remove_bus(struct pci_bus *bus)
|
||||
{
|
||||
}
|
||||
|
||||
struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
|
||||
struct pci_ops *ops, void *sysdata, struct list_head *resources)
|
||||
static struct pci_bus *pci_create_root_bus_msi(struct device *parent,
|
||||
int bus, struct pci_ops *ops, void *sysdata,
|
||||
struct list_head *resources, struct msi_controller *msi)
|
||||
{
|
||||
int error;
|
||||
struct pci_host_bridge *bridge;
|
||||
struct pci_bus *b, *b2;
|
||||
struct resource_entry *window, *n;
|
||||
struct resource *res;
|
||||
resource_size_t offset;
|
||||
char bus_addr[64];
|
||||
char *fmt;
|
||||
|
||||
b = pci_alloc_bus(NULL);
|
||||
if (!b)
|
||||
return NULL;
|
||||
|
||||
b->sysdata = sysdata;
|
||||
b->ops = ops;
|
||||
b->number = b->busn_res.start = bus;
|
||||
#ifdef CONFIG_PCI_DOMAINS_GENERIC
|
||||
b->domain_nr = pci_bus_find_domain_nr(b, parent);
|
||||
#endif
|
||||
b2 = pci_find_bus(pci_domain_nr(b), bus);
|
||||
if (b2) {
|
||||
/* If we already got to this bus through a different bridge, ignore it */
|
||||
dev_dbg(&b2->dev, "bus already known\n");
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
bridge = pci_alloc_host_bridge(b);
|
||||
bridge = pci_alloc_host_bridge(0);
|
||||
if (!bridge)
|
||||
goto err_out;
|
||||
return NULL;
|
||||
|
||||
bridge->dev.parent = parent;
|
||||
bridge->dev.release = pci_release_host_bridge_dev;
|
||||
dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
|
||||
error = pcibios_root_bridge_prepare(bridge);
|
||||
if (error) {
|
||||
kfree(bridge);
|
||||
|
||||
list_splice_init(resources, &bridge->windows);
|
||||
bridge->sysdata = sysdata;
|
||||
bridge->busnr = bus;
|
||||
bridge->ops = ops;
|
||||
bridge->msi = msi;
|
||||
|
||||
error = pci_register_host_bridge(bridge);
|
||||
if (error < 0)
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
error = device_register(&bridge->dev);
|
||||
if (error) {
|
||||
put_device(&bridge->dev);
|
||||
goto err_out;
|
||||
}
|
||||
b->bridge = get_device(&bridge->dev);
|
||||
device_enable_async_suspend(b->bridge);
|
||||
pci_set_bus_of_node(b);
|
||||
pci_set_bus_msi_domain(b);
|
||||
return bridge->bus;
|
||||
|
||||
if (!parent)
|
||||
set_dev_node(b->bridge, pcibus_to_node(b));
|
||||
|
||||
b->dev.class = &pcibus_class;
|
||||
b->dev.parent = b->bridge;
|
||||
dev_set_name(&b->dev, "%04x:%02x", pci_domain_nr(b), bus);
|
||||
error = device_register(&b->dev);
|
||||
if (error)
|
||||
goto class_dev_reg_err;
|
||||
|
||||
pcibios_add_bus(b);
|
||||
|
||||
/* Create legacy_io and legacy_mem files for this bus */
|
||||
pci_create_legacy_files(b);
|
||||
|
||||
if (parent)
|
||||
dev_info(parent, "PCI host bridge to bus %s\n", dev_name(&b->dev));
|
||||
else
|
||||
printk(KERN_INFO "PCI host bridge to bus %s\n", dev_name(&b->dev));
|
||||
|
||||
/* Add initial resources to the bus */
|
||||
resource_list_for_each_entry_safe(window, n, resources) {
|
||||
list_move_tail(&window->node, &bridge->windows);
|
||||
res = window->res;
|
||||
offset = window->offset;
|
||||
if (res->flags & IORESOURCE_BUS)
|
||||
pci_bus_insert_busn_res(b, bus, res->end);
|
||||
else
|
||||
pci_bus_add_resource(b, res, 0);
|
||||
if (offset) {
|
||||
if (resource_type(res) == IORESOURCE_IO)
|
||||
fmt = " (bus address [%#06llx-%#06llx])";
|
||||
else
|
||||
fmt = " (bus address [%#010llx-%#010llx])";
|
||||
snprintf(bus_addr, sizeof(bus_addr), fmt,
|
||||
(unsigned long long) (res->start - offset),
|
||||
(unsigned long long) (res->end - offset));
|
||||
} else
|
||||
bus_addr[0] = '\0';
|
||||
dev_info(&b->dev, "root bus resource %pR%s\n", res, bus_addr);
|
||||
}
|
||||
|
||||
down_write(&pci_bus_sem);
|
||||
list_add_tail(&b->node, &pci_root_buses);
|
||||
up_write(&pci_bus_sem);
|
||||
|
||||
return b;
|
||||
|
||||
class_dev_reg_err:
|
||||
put_device(&bridge->dev);
|
||||
device_unregister(&bridge->dev);
|
||||
err_out:
|
||||
kfree(b);
|
||||
kfree(bridge);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
|
||||
struct pci_ops *ops, void *sysdata, struct list_head *resources)
|
||||
{
|
||||
return pci_create_root_bus_msi(parent, bus, ops, sysdata, resources,
|
||||
NULL);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pci_create_root_bus);
|
||||
|
||||
int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int bus_max)
|
||||
@ -2342,12 +2391,10 @@ struct pci_bus *pci_scan_root_bus_msi(struct device *parent, int bus,
|
||||
break;
|
||||
}
|
||||
|
||||
b = pci_create_root_bus(parent, bus, ops, sysdata, resources);
|
||||
b = pci_create_root_bus_msi(parent, bus, ops, sysdata, resources, msi);
|
||||
if (!b)
|
||||
return NULL;
|
||||
|
||||
b->msi = msi;
|
||||
|
||||
if (!found) {
|
||||
dev_info(&b->dev,
|
||||
"No busn resource found for root bus, will use [bus %02x-ff]\n",
|
||||
|
@ -2156,7 +2156,7 @@ static void quirk_blacklist_vpd(struct pci_dev *dev)
|
||||
{
|
||||
if (dev->vpd) {
|
||||
dev->vpd->len = 0;
|
||||
dev_warn(&dev->dev, FW_BUG "VPD access disabled\n");
|
||||
dev_warn(&dev->dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n");
|
||||
}
|
||||
}
|
||||
|
||||
@ -3137,8 +3137,9 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b5, quirk_remove_d3_delay);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3_delay);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3_delay);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3_delay);
|
||||
|
||||
/*
|
||||
* Some devices may pass our check in pci_intx_mask_supported if
|
||||
* Some devices may pass our check in pci_intx_mask_supported() if
|
||||
* PCI_COMMAND_INTX_DISABLE works though they actually do not properly
|
||||
* support this feature.
|
||||
*/
|
||||
@ -3146,53 +3147,139 @@ static void quirk_broken_intx_masking(struct pci_dev *dev)
|
||||
{
|
||||
dev->broken_intx_masking = 1;
|
||||
}
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, 0x0030,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x0030,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
|
||||
quirk_broken_intx_masking);
|
||||
|
||||
/*
|
||||
* Realtek RTL8169 PCI Gigabit Ethernet Controller (rev 10)
|
||||
* Subsystem: Realtek RTL8169/8110 Family PCI Gigabit Ethernet NIC
|
||||
*
|
||||
* RTL8110SC - Fails under PCI device assignment using DisINTx masking.
|
||||
*/
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REALTEK, 0x8169,
|
||||
quirk_broken_intx_masking);
|
||||
|
||||
/*
|
||||
* Intel i40e (XL710/X710) 10/20/40GbE NICs all have broken INTx masking,
|
||||
* DisINTx can be set but the interrupt status bit is non-functional.
|
||||
*/
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1572,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1574,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1580,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1581,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1583,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1584,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1585,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1586,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1587,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1588,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1589,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d0,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d1,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d2,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1572,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1574,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1580,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1581,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1583,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1584,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1585,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1586,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1587,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1588,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1589,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d0,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d1,
|
||||
quirk_broken_intx_masking);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d2,
|
||||
quirk_broken_intx_masking);
|
||||
|
||||
static u16 mellanox_broken_intx_devs[] = {
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_SDR,
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_DDR,
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_QDR,
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2,
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2,
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_EN,
|
||||
PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX_EN,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX2,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX3,
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO,
|
||||
};
|
||||
|
||||
#define CONNECTX_4_CURR_MAX_MINOR 99
|
||||
#define CONNECTX_4_INTX_SUPPORT_MINOR 14
|
||||
|
||||
/*
|
||||
* Check ConnectX-4/LX FW version to see if it supports legacy interrupts.
|
||||
* If so, don't mark it as broken.
|
||||
* FW minor > 99 means older FW version format and no INTx masking support.
|
||||
* FW minor < 14 means new FW version format and no INTx masking support.
|
||||
*/
|
||||
static void mellanox_check_broken_intx_masking(struct pci_dev *pdev)
|
||||
{
|
||||
__be32 __iomem *fw_ver;
|
||||
u16 fw_major;
|
||||
u16 fw_minor;
|
||||
u16 fw_subminor;
|
||||
u32 fw_maj_min;
|
||||
u32 fw_sub_min;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(mellanox_broken_intx_devs); i++) {
|
||||
if (pdev->device == mellanox_broken_intx_devs[i]) {
|
||||
pdev->broken_intx_masking = 1;
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
/* Getting here means Connect-IB cards and up. Connect-IB has no INTx
|
||||
* support so shouldn't be checked further
|
||||
*/
|
||||
if (pdev->device == PCI_DEVICE_ID_MELLANOX_CONNECTIB)
|
||||
return;
|
||||
|
||||
if (pdev->device != PCI_DEVICE_ID_MELLANOX_CONNECTX4 &&
|
||||
pdev->device != PCI_DEVICE_ID_MELLANOX_CONNECTX4_LX)
|
||||
return;
|
||||
|
||||
/* For ConnectX-4 and ConnectX-4LX, need to check FW support */
|
||||
if (pci_enable_device_mem(pdev)) {
|
||||
dev_warn(&pdev->dev, "Can't enable device memory\n");
|
||||
return;
|
||||
}
|
||||
|
||||
fw_ver = ioremap(pci_resource_start(pdev, 0), 4);
|
||||
if (!fw_ver) {
|
||||
dev_warn(&pdev->dev, "Can't map ConnectX-4 initialization segment\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* Reading from resource space should be 32b aligned */
|
||||
fw_maj_min = ioread32be(fw_ver);
|
||||
fw_sub_min = ioread32be(fw_ver + 1);
|
||||
fw_major = fw_maj_min & 0xffff;
|
||||
fw_minor = fw_maj_min >> 16;
|
||||
fw_subminor = fw_sub_min & 0xffff;
|
||||
if (fw_minor > CONNECTX_4_CURR_MAX_MINOR ||
|
||||
fw_minor < CONNECTX_4_INTX_SUPPORT_MINOR) {
|
||||
dev_warn(&pdev->dev, "ConnectX-4: FW %u.%u.%u doesn't support INTx masking, disabling. Please upgrade FW to %d.14.1100 and up for INTx support\n",
|
||||
fw_major, fw_minor, fw_subminor, pdev->device ==
|
||||
PCI_DEVICE_ID_MELLANOX_CONNECTX4 ? 12 : 14);
|
||||
pdev->broken_intx_masking = 1;
|
||||
}
|
||||
|
||||
iounmap(fw_ver);
|
||||
|
||||
out:
|
||||
pci_disable_device(pdev);
|
||||
}
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID,
|
||||
mellanox_check_broken_intx_masking);
|
||||
|
||||
static void quirk_no_bus_reset(struct pci_dev *dev)
|
||||
{
|
||||
@ -3255,6 +3342,25 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PORT_RIDGE,
|
||||
quirk_thunderbolt_hotplug_msi);
|
||||
|
||||
static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
|
||||
{
|
||||
pci_set_vpd_size(dev, 8192);
|
||||
}
|
||||
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x20, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x21, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x22, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x23, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x24, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x25, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x26, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x30, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x31, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x32, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x35, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x36, quirk_chelsio_extend_vpd);
|
||||
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x37, quirk_chelsio_extend_vpd);
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
/*
|
||||
* Apple: Shutdown Cactus Ridge Thunderbolt controller.
|
||||
|
@@ -40,7 +40,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
	list_del(&dev->bus_list);
	up_write(&pci_bus_sem);

	pci_bridge_d3_device_removed(dev);
	pci_bridge_d3_update(dev);
	pci_free_resources(dev);
	put_device(&dev->dev);
}
|
||||
|
@@ -35,6 +35,11 @@ int pci_enable_rom(struct pci_dev *pdev)
	if (res->flags & IORESOURCE_ROM_SHADOW)
		return 0;

	/*
	 * Ideally pci_update_resource() would update the ROM BAR address,
	 * and we would only set the enable bit here. But apparently some
	 * devices have buggy ROM BARs that read as zero when disabled.
	 */
	pcibios_resource_to_bus(pdev->bus, &region, res);
	pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
	rom_addr &= ~PCI_ROM_ADDRESS_MASK;

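For context only: pci_enable_rom() is normally reached through pci_map_rom(), which enables the ROM, maps it and reports the usable size. A small hedged sketch (my_peek_rom is hypothetical):

#include <linux/pci.h>

/* Illustrative only: map the expansion ROM, report its size and the
 * 0xAA55 signature word, then unmap it again. */
static void my_peek_rom(struct pci_dev *pdev)
{
	size_t size;
	void __iomem *rom = pci_map_rom(pdev, &size);

	if (!rom)
		return;
	dev_info(&pdev->dev, "ROM mapped, %zu bytes, signature %#06x\n",
		 size, ioread16(rom));
	pci_unmap_rom(pdev, rom);
}
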
@@ -25,21 +25,18 @@
#include <linux/slab.h>
#include "pci.h"


void pci_update_resource(struct pci_dev *dev, int resno)
static void pci_std_update_resource(struct pci_dev *dev, int resno)
{
	struct pci_bus_region region;
	bool disable;
	u16 cmd;
	u32 new, check, mask;
	int reg;
	enum pci_bar_type type;
	struct resource *res = dev->resource + resno;

	if (dev->is_virtfn) {
		dev_warn(&dev->dev, "can't update VF BAR%d\n", resno);
	/* Per SR-IOV spec 3.4.1.11, VF BARs are RO zero */
	if (dev->is_virtfn)
		return;
	}

	/*
	 * Ignore resources for unimplemented BARs and unused resource slots
@@ -60,21 +57,34 @@ void pci_update_resource(struct pci_dev *dev, int resno)
		return;

	pcibios_resource_to_bus(dev->bus, &region, res);
	new = region.start;

	new = region.start | (res->flags & PCI_REGION_FLAG_MASK);
	if (res->flags & IORESOURCE_IO)
	if (res->flags & IORESOURCE_IO) {
		mask = (u32)PCI_BASE_ADDRESS_IO_MASK;
	else
		new |= res->flags & ~PCI_BASE_ADDRESS_IO_MASK;
	} else if (resno == PCI_ROM_RESOURCE) {
		mask = (u32)PCI_ROM_ADDRESS_MASK;
	} else {
		mask = (u32)PCI_BASE_ADDRESS_MEM_MASK;
		new |= res->flags & ~PCI_BASE_ADDRESS_MEM_MASK;
	}

	reg = pci_resource_bar(dev, resno, &type);
	if (!reg)
		return;
	if (type != pci_bar_unknown) {
	if (resno < PCI_ROM_RESOURCE) {
		reg = PCI_BASE_ADDRESS_0 + 4 * resno;
	} else if (resno == PCI_ROM_RESOURCE) {

		/*
		 * Apparently some Matrox devices have ROM BARs that read
		 * as zero when disabled, so don't update ROM BARs unless
		 * they're enabled. See https://lkml.org/lkml/2005/8/30/138.
		 */
		if (!(res->flags & IORESOURCE_ROM_ENABLE))
			return;

		reg = dev->rom_base_reg;
		new |= PCI_ROM_ADDRESS_ENABLE;
	}
	} else
		return;

	/*
	 * We can't update a 64-bit BAR atomically, so when possible,
@@ -110,6 +120,16 @@ void pci_update_resource(struct pci_dev *dev, int resno)
	pci_write_config_word(dev, PCI_COMMAND, cmd);
}

void pci_update_resource(struct pci_dev *dev, int resno)
{
	if (resno <= PCI_ROM_RESOURCE)
		pci_std_update_resource(dev, resno);
#ifdef CONFIG_PCI_IOV
	else if (resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END)
		pci_iov_update_resource(dev, resno);
#endif
}

int pci_claim_resource(struct pci_dev *dev, int resource)
{
	struct resource *res = &dev->resource[resource];

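Context, not part of the diff: the new wrapper routes standard BARs (up to PCI_ROM_RESOURCE) to pci_std_update_resource() and, with SR-IOV enabled, the PF's VF BARs (PCI_IOV_RESOURCES..PCI_IOV_RESOURCE_END) to pci_iov_update_resource(). A hedged sketch of a PF driver walking those same IOV resource slots; my_show_vf_bars is illustrative only:

#include <linux/pci.h>

#ifdef CONFIG_PCI_IOV
/* Illustrative only: the PF's VF BARs live in the PCI_IOV_RESOURCES..
 * PCI_IOV_RESOURCE_END slots that the wrapper above dispatches on. */
static void my_show_vf_bars(struct pci_dev *pf)
{
	int i;

	for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++) {
		if (!pci_resource_len(pf, i))
			continue;
		dev_info(&pf->dev, "VF BAR%d: %pR\n", i - PCI_IOV_RESOURCES,
			 &pf->resource[i]);
	}
}
#endif
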
@@ -129,6 +129,10 @@ static int uhci_pci_init(struct usb_hcd *hcd)
	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_HP)
		uhci->wait_for_hp = 1;

	/* Intel controllers use non-PME wakeup signalling */
	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL)
		device_set_run_wake(uhci_dev(uhci), 1);

	/* Set up pointers to PCI-specific functions */
	uhci->reset_hc = uhci_pci_reset_hc;
	uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;

@@ -31,8 +31,6 @@

#include "vfio_pci_private.h"

#define PCI_CFG_SPACE_SIZE 256

/* Fake capability ID for standard config space */
#define PCI_CAP_ID_BASIC 0

@@ -437,6 +437,8 @@ static inline int acpi_dev_filter_resource_type_cb(struct acpi_resource *ares,
	return acpi_dev_filter_resource_type(ares, (unsigned long)arg);
}

struct acpi_device *acpi_resource_consumer(struct resource *res);

int acpi_check_resource_conflict(const struct resource *res);

int acpi_check_region(resource_size_t start, resource_size_t n,
@@ -787,6 +789,11 @@ static inline int acpi_reconfig_notifier_unregister(struct notifier_block *nb)
	return -EINVAL;
}

static inline struct acpi_device *acpi_resource_consumer(struct resource *res)
{
	return NULL;
}

#endif /* !CONFIG_ACPI */

#ifdef CONFIG_ACPI_HOTPLUG_IOAPIC

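Side note, not from this diff: acpi_resource_consumer() returns the ACPI device whose current resources cover the given region, or NULL when nothing claims it. A minimal illustrative sketch (my_report_consumer is hypothetical):

#include <linux/acpi.h>
#include <linux/ioport.h>

/* Illustrative only: report which ACPI device object claims a window. */
static void my_report_consumer(struct resource *res)
{
	struct acpi_device *adev = acpi_resource_consumer(res);

	if (adev)
		pr_info("%pR is claimed by ACPI device %s\n", res,
			dev_name(&adev->dev));
	else
		pr_info("%pR has no ACPI consumer\n", res);
}
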
@@ -16,6 +16,7 @@ int of_pci_get_devfn(struct device_node *np);
int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
int of_get_pci_domain_nr(struct device_node *node);
int of_pci_get_max_link_speed(struct device_node *node);
void of_pci_check_probe_only(void);
int of_pci_map_rid(struct device_node *np, u32 rid,
		   const char *map_name, const char *map_mask_name,
@@ -62,6 +63,12 @@ static inline int of_pci_map_rid(struct device_node *np, u32 rid,
	return -EINVAL;
}

static inline int
of_pci_get_max_link_speed(struct device_node *node)
{
	return -EINVAL;
}

static inline void of_pci_check_probe_only(void) { }
#endif

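Context only: of_pci_get_max_link_speed() parses the optional "max-link-speed" DT property and returns a negative errno when it is absent or malformed. A hedged sketch of how a host-controller driver might use it; my_get_link_speed and the gen2 default are assumptions, not from this patch:

#include <linux/of.h>
#include <linux/of_pci.h>

/* Illustrative only: clamp a controller's target link speed to the
 * optional "max-link-speed" DT property, defaulting to gen2. */
static int my_get_link_speed(struct device_node *np)
{
	int speed = of_pci_get_max_link_speed(np);

	if (speed <= 0 || speed > 2)
		speed = 2;	/* property absent or out of range for this IP */
	return speed;
}
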
@@ -24,7 +24,9 @@ static inline acpi_status pci_acpi_remove_pm_notifier(struct acpi_device *dev)
}
extern phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle);

extern phys_addr_t pci_mcfg_lookup(u16 domain, struct resource *bus_res);
struct pci_ecam_ops;
extern int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
			   struct pci_ecam_ops **ecam_ops);

static inline acpi_handle acpi_find_root_bridge_handle(struct pci_dev *pdev)
{

@@ -59,6 +59,15 @@ void __iomem *pci_ecam_map_bus(struct pci_bus *bus, unsigned int devfn,
/* default ECAM ops */
extern struct pci_ecam_ops pci_generic_ecam_ops;

#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
extern struct pci_ecam_ops pci_32b_ops; /* 32-bit accesses only */
extern struct pci_ecam_ops hisi_pcie_ops; /* HiSilicon */
extern struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */
extern struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
extern struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
extern struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
#endif

#ifdef CONFIG_PCI_HOST_GENERIC
/* for DT-based PCI controllers that support ECAM */
int pci_host_common_probe(struct platform_device *pdev,

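Context, not part of this diff: these quirk ops are what the reworked pci_mcfg_lookup() above hands back to the ACPI host-bridge code, which then builds a config window with pci_ecam_create(). A rough illustrative sketch assuming the config resource and bus range were already discovered elsewhere (my_make_cfg is hypothetical):

#include <linux/pci-ecam.h>

/* Illustrative only: create an ECAM config window from an MMIO config
 * resource and a bus-number resource, using the generic accessors. */
static struct pci_config_window *my_make_cfg(struct device *dev,
					     struct resource *cfgres,
					     struct resource *busr)
{
	return pci_ecam_create(dev, cfgres, busr, &pci_generic_ecam_ops);
}
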
@@ -420,9 +420,13 @@ static inline int pci_channel_offline(struct pci_dev *pdev)
struct pci_host_bridge {
	struct device dev;
	struct pci_bus *bus; /* root bus */
	struct pci_ops *ops;
	void *sysdata;
	int busnr;
	struct list_head windows; /* resource_entry */
	void (*release_fn)(struct pci_host_bridge *);
	void *release_data;
	struct msi_controller *msi;
	unsigned int ignore_reset_delay:1; /* for entire hierarchy */
	/* Resource alignment requirements */
	resource_size_t (*align_resource)(struct pci_dev *dev,
@@ -430,10 +434,23 @@ struct pci_host_bridge {
			resource_size_t start,
			resource_size_t size,
			resource_size_t align);
	unsigned long private[0] ____cacheline_aligned;
};

#define to_pci_host_bridge(n) container_of(n, struct pci_host_bridge, dev)

static inline void *pci_host_bridge_priv(struct pci_host_bridge *bridge)
{
	return (void *)bridge->private;
}

static inline struct pci_host_bridge *pci_host_bridge_from_priv(void *priv)
{
	return container_of(priv, struct pci_host_bridge, private);
}

struct pci_host_bridge *pci_alloc_host_bridge(size_t priv);
int pci_register_host_bridge(struct pci_host_bridge *bridge);
struct pci_host_bridge *pci_find_host_bridge(struct pci_bus *bus);

void pci_set_host_bridge_release(struct pci_host_bridge *bridge,

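Side note, not from this diff: the new host-bridge registration interface lets a controller driver embed its private state behind the generic bridge and convert in either direction. A hedged sketch; struct my_host and my_alloc_bridge are illustrative only:

#include <linux/pci.h>

struct my_host {			/* hypothetical controller state */
	void __iomem *regs;
};

/* Illustrative only: allocate a bridge with room for driver-private data,
 * then move between the bridge and the private area. */
static struct pci_host_bridge *my_alloc_bridge(void)
{
	struct pci_host_bridge *bridge;
	struct my_host *host;

	bridge = pci_alloc_host_bridge(sizeof(struct my_host));
	if (!bridge)
		return NULL;

	host = pci_host_bridge_priv(bridge);
	host->regs = NULL;		/* filled in by the real probe */

	WARN_ON(pci_host_bridge_from_priv(host) != bridge);
	return bridge;
}
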
@@ -176,6 +176,7 @@ struct hotplug_params {
#ifdef CONFIG_ACPI
#include <linux/acpi.h>
int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp);
bool pciehp_is_native(struct pci_dev *pdev);
int acpi_get_hp_hw_control_from_firmware(struct pci_dev *dev, u32 flags);
int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle);
int acpi_pci_detect_ejectable(acpi_handle handle);
@@ -185,5 +186,6 @@ static inline int pci_get_hp_params(struct pci_dev *dev,
{
	return -ENODEV;
}
static inline bool pciehp_is_native(struct pci_dev *pdev) { return true; }
#endif
#endif

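Context only: pciehp_is_native() reports whether pciehp, rather than firmware, owns hotplug for the port a device sits under; the runtime-PM-for-hotplug-ports work in this series keys off that. A loose illustrative sketch, not the actual core logic (my_maybe_allow_d3 is hypothetical):

#include <linux/pci.h>
#include <linux/pci_hotplug.h>
#include <linux/pm_runtime.h>

/* Illustrative only: only let a hotplug bridge runtime-suspend when
 * pciehp owns it natively, so hot-add events aren't lost to firmware. */
static void my_maybe_allow_d3(struct pci_dev *bridge)
{
	if (bridge->is_hotplug_bridge && pciehp_is_native(bridge))
		pm_runtime_allow(&bridge->dev);
}
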
@@ -2259,12 +2259,29 @@
#define PCI_DEVICE_ID_ZOLTRIX_2BD0 0x2bd0

#define PCI_VENDOR_ID_MELLANOX 0x15b3
#define PCI_DEVICE_ID_MELLANOX_TAVOR 0x5a44
#define PCI_DEVICE_ID_MELLANOX_CONNECTX3 0x1003
#define PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO 0x1007
#define PCI_DEVICE_ID_MELLANOX_CONNECTIB 0x1011
#define PCI_DEVICE_ID_MELLANOX_CONNECTX4 0x1013
#define PCI_DEVICE_ID_MELLANOX_CONNECTX4_LX 0x1015
#define PCI_DEVICE_ID_MELLANOX_TAVOR 0x5a44
#define PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE 0x5a46
#define PCI_DEVICE_ID_MELLANOX_ARBEL_COMPAT 0x6278
#define PCI_DEVICE_ID_MELLANOX_ARBEL 0x6282
#define PCI_DEVICE_ID_MELLANOX_SINAI_OLD 0x5e8c
#define PCI_DEVICE_ID_MELLANOX_SINAI 0x6274
#define PCI_DEVICE_ID_MELLANOX_SINAI_OLD 0x5e8c
#define PCI_DEVICE_ID_MELLANOX_SINAI 0x6274
#define PCI_DEVICE_ID_MELLANOX_ARBEL_COMPAT 0x6278
#define PCI_DEVICE_ID_MELLANOX_ARBEL 0x6282
#define PCI_DEVICE_ID_MELLANOX_HERMON_SDR 0x6340
#define PCI_DEVICE_ID_MELLANOX_HERMON_DDR 0x634a
#define PCI_DEVICE_ID_MELLANOX_HERMON_QDR 0x6354
#define PCI_DEVICE_ID_MELLANOX_HERMON_EN 0x6368
#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN 0x6372
#define PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2 0x6732
#define PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2 0x673c
#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2 0x6746
#define PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2 0x6750
#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2 0x675a
#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2 0x6764
#define PCI_DEVICE_ID_MELLANOX_CONNECTX2 0x676e

#define PCI_VENDOR_ID_DFI 0x15bd

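Side note, not from this diff: a driver or quirk table would match the newly added ConnectX-4 IDs in the usual way. Illustrative sketch only (my_ids is hypothetical):

#include <linux/module.h>
#include <linux/pci.h>

/* Illustrative only: an ID table matching the new ConnectX-4 entries. */
static const struct pci_device_id my_ids[] = {
	{ PCI_VDEVICE(MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTX4) },
	{ PCI_VDEVICE(MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTX4_LX) },
	{ }
};
MODULE_DEVICE_TABLE(pci, my_ids);
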
@@ -22,6 +22,14 @@
#ifndef LINUX_PCI_REGS_H
#define LINUX_PCI_REGS_H

/*
 * Conventional PCI and PCI-X Mode 1 devices have 256 bytes of
 * configuration space. PCI-X Mode 2 and PCIe devices have 4096 bytes of
 * configuration space.
 */
#define PCI_CFG_SPACE_SIZE 256
#define PCI_CFG_SPACE_EXP_SIZE 4096

/*
 * Under PCI, each device has 256 bytes of configuration address space,
 * of which the first 64 bytes are standardized as follows:

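Context only: with the size macros now exported in pci_regs.h, a raw config-space dump can bound its loop on them. A hedged sketch; using pci_is_pcie() as the 4K-vs-256 heuristic is a simplification (the core tracks the exact size in pci_dev->cfg_size), and my_dump_config is hypothetical:

#include <linux/pci.h>
#include <uapi/linux/pci_regs.h>

/* Illustrative only: dump config space, bounded by the exported macros. */
static void my_dump_config(struct pci_dev *pdev)
{
	int bytes = pci_is_pcie(pdev) ? PCI_CFG_SPACE_EXP_SIZE : PCI_CFG_SPACE_SIZE;
	int pos;
	u32 val;

	for (pos = 0; pos < bytes; pos += 4) {
		if (pci_read_config_dword(pdev, pos, &val))
			break;
		pr_cont("%08x%c", val, (pos % 16 == 12) ? '\n' : ' ');
	}
}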