pci-v4.20-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAlvPV7IUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vyaUg//WnCaRIu2oKOp8c/bplZJDW5eT10d
 oYAN9qeyptU9RYrg4KBNbZL9UKGFTk3AoN5AUjrk8njxc/dY2ra/79esOvZyyYQy
 qLXBvrXKg3yZnlNlnyBneGSnUVwv/kl2hZS+kmYby2YOa8AH/mhU0FIFvsnfRK2I
 XvwABFm2ZYvXCqh3e5HXaHhOsR88NQ9In0AXVC7zHGqv1r/bMVn2YzPZHL/zzMrF
 mS79tdBTH+shSvchH9zvfgIs+UEKvvjEJsG2liwMkcQaV41i5dZjSKTdJ3EaD/Y2
 BreLxXRnRYGUkBqfcon16Yx+P6VCefDRLa+RhwYO3dxFF2N4ZpblbkIdBATwKLjL
 npiGc6R8yFjTmZU0/7olMyMCm7igIBmDvWPcsKEE8R4PezwoQv6YKHBMwEaflIbl
 Rv4IUqjJzmQPaA0KkRoAVgAKHxldaNqno/6G1FR2gwz+fr68p5WSYFlQ3axhvTjc
 bBMJpB/fbp9WmpGJieTt6iMOI6V1pnCVjibM5ZON59WCFfytHGGpbYW05gtZEod4
 d/3yRuU53JRSj3jQAQuF1B6qYhyxvv5YEtAQqIFeHaPZ67nL6agw09hE+TlXjWbE
 rTQRShflQ+ydnzIfKicFgy6/53D5hq7iH2l7HwJVXbXRQ104T5DB/XHUUTr+UWQn
 /Nkhov32/n6GjxQ=
 =58I4
 -----END PGP SIGNATURE-----

Merge tag 'pci-v4.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

 - Fix ASPM link_state teardown on removal (Lukas Wunner)

 - Fix misleading _OSC ASPM message (Sinan Kaya)

 - Make _OSC optional for PCI (Sinan Kaya)

 - Don't initialize ASPM link state when ACPI_FADT_NO_ASPM is set
   (Patrick Talbert)

 - Remove x86 and arm64 node-local allocation for host bridge structures
   (Punit Agrawal)

 - Pay attention to device-specific _PXM node values (Jonathan Cameron)

 - Support new Immediate Readiness bit (Felipe Balbi)

 - Differentiate between pciehp surprise and safe removal (Lukas Wunner)

 - Remove unnecessary pciehp includes (Lukas Wunner)

 - Drop pciehp hotplug_slot_ops wrappers (Lukas Wunner)

 - Tolerate PCIe Slot Presence Detect being hardwired to zero to work
   around broken hardware, e.g., the Wilocity switch/wireless device
   (Lukas Wunner)

 - Unify pciehp controller & slot structs (Lukas Wunner)

 - Constify hotplug_slot_ops (Lukas Wunner)

 - Drop hotplug_slot_info (Lukas Wunner)

 - Embed hotplug_slot struct into users instead of allocating it
   separately (Lukas Wunner)

 - Initialize PCIe port service drivers directly instead of relying on
   initcall ordering (Keith Busch)

 - Restore PCI config state after a slot reset (Keith Busch)

 - Save/restore DPC config state along with other PCI config state
   (Keith Busch)

 - Reference count devices during AER handling to avoid race issue with
   concurrent hot removal (Keith Busch)

 - If an Upstream Port reports ERR_FATAL, don't try to read the Port's
   config space because it is probably unreachable (Keith Busch)

 - During error handling, use slot-specific reset instead of secondary
   bus reset to avoid link up/down issues on hotplug ports (Keith Busch)

 - Restore previous AER/DPC handling that does not remove and
   re-enumerate devices on ERR_FATAL (Keith Busch)

 - Notify all drivers that may be affected by error recovery resets
   (Keith Busch)

 - Always generate error recovery uevents, even if a driver doesn't have
   error callbacks (Keith Busch)

 - Make PCIe link active reporting detection generic (Keith Busch)

 - Support D3cold in PCIe hierarchies during system sleep and runtime,
   including hotplug and Thunderbolt ports (Mika Westerberg)

 - Handle hpmemsize/hpiosize kernel parameters uniformly, whether slots
   are empty or occupied (Jon Derrick)

 - Remove duplicated include from pci/pcie/err.c and unused variable
   from cpqphp (YueHaibing)

 - Remove driver pci_cleanup_aer_uncorrect_error_status() calls (Oza
   Pawandeep)

 - Uninline PCI bus accessors for better ftracing (Keith Busch)

 - Remove unused AER Root Port .error_resume method (Keith Busch)

 - Use kfifo in AER instead of a local version (Keith Busch)

 - Use threaded IRQ in AER bottom half (Keith Busch)

 - Use managed resources in AER core (Keith Busch)

 - Reuse pcie_port_find_device() for AER injection (Keith Busch)

 - Abstract AER interrupt handling to disconnect error injection (Keith
   Busch)

 - Refactor AER injection callbacks to simplify future improvements
   (Keith Busch)

 - Remove unused Netronome NFP32xx Device IDs (Jakub Kicinski)

 - Use bitmap_zalloc() for dma_alias_mask (Andy Shevchenko)

 - Add switch fall-through annotations (Gustavo A. R. Silva)

 - Remove unused Switchtec quirk variable (Joshua Abraham)

 - Fix pci.c kernel-doc warning (Randy Dunlap)

 - Remove trivial PCI wrappers for DMA APIs (Christoph Hellwig)

 - Add Intel GPU device IDs to spurious interrupt quirk (Bin Meng)

 - Run Switchtec DMA aliasing quirk only on NTB endpoints to avoid
   useless dmesg errors (Logan Gunthorpe)

 - Update Switchtec NTB documentation (Wesley Yung)

 - Remove redundant "default n" from Kconfig (Bartlomiej Zolnierkiewicz)

 - Avoid panic when drivers enable MSI/MSI-X twice (Tonghao Zhang)

 - Add PCI support for peer-to-peer DMA (Logan Gunthorpe)

 - Add sysfs group for PCI peer-to-peer memory statistics (Logan
   Gunthorpe)

 - Add PCI peer-to-peer DMA scatterlist mapping interface (Logan
   Gunthorpe)

 - Add PCI configfs/sysfs helpers for use by peer-to-peer users (Logan
   Gunthorpe)

 - Add PCI peer-to-peer DMA driver writer's documentation (Logan
   Gunthorpe)

 - Add block layer flag to indicate driver support for PCI peer-to-peer
   DMA (Logan Gunthorpe)

 - Map Infiniband scatterlists for peer-to-peer DMA if they contain P2P
   memory (Logan Gunthorpe)

 - Register nvme-pci CMB buffer as PCI peer-to-peer memory (Logan
   Gunthorpe)

 - Add nvme-pci support for PCI peer-to-peer memory in requests (Logan
   Gunthorpe)

 - Use PCI peer-to-peer memory in nvme (Stephen Bates, Steve Wise,
   Christoph Hellwig, Logan Gunthorpe)

 - Cache VF config space size to optimize enumeration of many VFs
   (KarimAllah Ahmed)

 - Remove unnecessary <linux/pci-ats.h> include (Bjorn Helgaas)

 - Fix VMD AERSID quirk Device ID matching (Jon Derrick)

 - Fix Cadence PHY handling during probe (Alan Douglas)

 - Signal Cadence Endpoint interrupts via AXI region 0 instead of last
   region (Alan Douglas)

 - Write Cadence Endpoint MSI interrupts with 32 bits of data (Alan
   Douglas)

 - Remove redundant controller tests for "device_type == pci" (Rob
   Herring)

 - Document R-Car E3 (R8A77990) bindings (Tho Vu)

 - Add device tree support for R-Car r8a7744 (Biju Das)

 - Drop unused mvebu PCIe capability code (Thomas Petazzoni)

 - Add shared PCI bridge emulation code (Thomas Petazzoni)

 - Convert mvebu to use shared PCI bridge emulation (Thomas Petazzoni)

 - Add aardvark Root Port emulation (Thomas Petazzoni)

 - Support 100MHz/200MHz refclocks for i.MX6 (Lucas Stach)

 - Add initial power management for i.MX7 (Leonard Crestez)

 - Add PME_Turn_Off support for i.MX7 (Leonard Crestez)

 - Fix qcom runtime power management error handling (Bjorn Andersson)

 - Update TI dra7xx unaligned access errata workaround for host mode as
   well as endpoint mode (Vignesh R)

 - Fix kirin section mismatch warning (Nathan Chancellor)

 - Remove iproc PAXC slot check to allow VF support (Jitendra Bhivare)

 - Quirk Keystone K2G to limit MRRS to 256 (Kishon Vijay Abraham I)

 - Update Keystone to use MRRS quirk for host bridge instead of open
   coding (Kishon Vijay Abraham I)

 - Refactor Keystone link establishment (Kishon Vijay Abraham I)

 - Simplify and speed up Keystone link training (Kishon Vijay Abraham I)

 - Remove unused Keystone host_init argument (Kishon Vijay Abraham I)

 - Merge Keystone driver files into one (Kishon Vijay Abraham I)

 - Remove redundant Keystone platform_set_drvdata() (Kishon Vijay
   Abraham I)

 - Rename Keystone functions for uniformity (Kishon Vijay Abraham I)

 - Add Keystone device control module DT binding (Kishon Vijay Abraham
   I)

 - Use SYSCON API to get Keystone control module device IDs (Kishon
   Vijay Abraham I)

 - Clean up Keystone PHY handling (Kishon Vijay Abraham I)

 - Use runtime PM APIs to enable Keystone clock (Kishon Vijay Abraham I)

 - Clean up Keystone config space access checks (Kishon Vijay Abraham I)

 - Get Keystone outbound window count from DT (Kishon Vijay Abraham I)

 - Clean up Keystone outbound window configuration (Kishon Vijay Abraham
   I)

 - Clean up Keystone DBI setup (Kishon Vijay Abraham I)

 - Clean up Keystone ks_pcie_link_up() (Kishon Vijay Abraham I)

 - Fix Keystone IRQ status checking (Kishon Vijay Abraham I)

 - Add debug messages for all Keystone errors (Kishon Vijay Abraham I)

 - Clean up Keystone includes and macros (Kishon Vijay Abraham I)

 - Fix Mediatek unchecked return value from devm_pci_remap_iospace()
   (Gustavo A. R. Silva)

 - Fix Mediatek endpoint/port matching logic (Honghui Zhang)

 - Change Mediatek Root Port Class Code to PCI_CLASS_BRIDGE_PCI (Honghui
   Zhang)

 - Remove redundant Mediatek PM domain check (Honghui Zhang)

 - Convert Mediatek to pci_host_probe() (Honghui Zhang)

 - Fix Mediatek MSI enablement (Honghui Zhang)

 - Add Mediatek system PM support for MT2712 and MT7622 (Honghui Zhang)

 - Add Mediatek loadable module support (Honghui Zhang)

 - Detach VMD resources after stopping root bus to prevent orphan
   resources (Jon Derrick)

 - Convert pcitest build process to that used by other tools (iio, perf,
   etc) (Gustavo Pimentel)

* tag 'pci-v4.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (140 commits)
  PCI/AER: Refactor error injection fallbacks
  PCI/AER: Abstract AER interrupt handling
  PCI/AER: Reuse existing pcie_port_find_device() interface
  PCI/AER: Use managed resource allocations
  PCI: pcie: Remove redundant 'default n' from Kconfig
  PCI: aardvark: Implement emulated root PCI bridge config space
  PCI: mvebu: Convert to PCI emulated bridge config space
  PCI: mvebu: Drop unused PCI express capability code
  PCI: Introduce PCI bridge emulated config space common logic
  PCI: vmd: Detach resources after stopping root bus
  nvmet: Optionally use PCI P2P memory
  nvmet: Introduce helper functions to allocate and free request SGLs
  nvme-pci: Add support for P2P memory in requests
  nvme-pci: Use PCI p2pmem subsystem to manage the CMB
  IB/core: Ensure we map P2P memory correctly in rdma_rw_ctx_[init|destroy]()
  block: Add PCI P2P flag for request queue
  PCI/P2PDMA: Add P2P DMA driver writer's documentation
  docs-rst: Add a new directory for PCI documentation
  PCI/P2PDMA: Introduce configfs/sysfs enable attribute helpers
  PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset
  ...
Linus Torvalds 2018-10-25 06:50:48 -07:00
commit bd6bf7c104
162 changed files with 5009 additions and 3119 deletions


@ -323,3 +323,27 @@ Description:
This is similar to /sys/bus/pci/drivers_autoprobe, but
affects only the VFs associated with a specific PF.
What: /sys/bus/pci/devices/.../p2pmem/size
Date: November 2017
Contact: Logan Gunthorpe <logang@deltatee.com>
Description:
If the device has any Peer-to-Peer memory registered, this
file contains the total amount of memory that the device
provides (in decimal).
What: /sys/bus/pci/devices/.../p2pmem/available
Date: November 2017
Contact: Logan Gunthorpe <logang@deltatee.com>
Description:
If the device has any Peer-to-Peer memory registered, this
file contains the amount of memory that has not been
allocated (in decimal).
What: /sys/bus/pci/devices/.../p2pmem/published
Date: November 2017
Contact: Logan Gunthorpe <logang@deltatee.com>
Description:
If the device has any Peer-to-Peer memory registered, this
file contains a '1' if the memory has been published for
use outside the driver that owns the device.


@ -99,17 +99,20 @@ Note that the devices listed here correspond to the value populated in 1.4 above
2.2 Using Endpoint Test function Device
pcitest.sh added in tools/pci/ can be used to run all the default PCI endpoint
tests. Before pcitest.sh can be used pcitest.c should be compiled using the
following commands.
tests. To compile this tool the following commands should be used:
cd <kernel-dir>
make headers_install ARCH=arm
arm-linux-gnueabihf-gcc -Iusr/include tools/pci/pcitest.c -o pcitest
cp pcitest <rootfs>/usr/sbin/
cp tools/pci/pcitest.sh <rootfs>
# cd <kernel-dir>
# make -C tools/pci
or if you desire to compile and install in your system:
# cd <kernel-dir>
# make -C tools/pci install
The tool and script will be located in <rootfs>/usr/bin/
2.2.1 pcitest.sh Output
# ./pcitest.sh
# pcitest.sh
BAR tests
BAR0: OKAY


@ -110,7 +110,7 @@ The actual steps taken by a platform to recover from a PCI error
event will be platform-dependent, but will follow the general
sequence described below.
STEP 0: Error Event: ERR_NONFATAL
STEP 0: Error Event
-------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
@ -228,7 +228,13 @@ proceeds to either STEP3 (Link Reset) or to STEP 5 (Resume Operations).
If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 4 (Slot Reset)
STEP 3: Slot Reset
STEP 3: Link Reset
------------------
The platform resets the link. This is a PCI-Express specific step
and is done whenever a fatal error has been detected that can be
"solved" by resetting the link.
STEP 4: Slot Reset
------------------
In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
@ -314,7 +320,7 @@ Failure).
>>> However, it probably should.
STEP 4: Resume Operations
STEP 5: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
@ -326,7 +332,7 @@ a result code.
At this point, if a new error happens, the platform will restart
a new error recovery sequence.
STEP 5: Permanent Failure
STEP 6: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device. The platform will call error_detected() with a
@ -349,27 +355,6 @@ errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
for additional detail on real-life experience of the causes of
software errors.
STEP 0: Error Event: ERR_FATAL
-------------------
PCI bus error is detected by the PCI hardware. On powerpc, the slot is
isolated, in that all I/O is blocked: all reads return 0xffffffff, all
writes are ignored.
STEP 1: Remove devices
--------------------
Platform removes the devices depending on the error agent, it could be
this port for all subordinates or upstream component (likely downstream
port)
STEP 2: Reset link
--------------------
The platform resets the link. This is a PCI-Express specific step and is
done whenever a fatal error has been detected that can be "solved" by
resetting the link.
STEP 3: Re-enumerate the devices
--------------------
Initiates the re-enumeration.
Conclusion; General Remarks
---------------------------
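For reference, a driver participates in the recovery sequence documented
here by registering error callbacks. A minimal sketch (hypothetical "foo"
driver; the callback names map onto the steps above):

    static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
                                               enum pci_channel_state state)
    {
            /* STEP 1: quiesce the device; ask the platform for a reset. */
            return PCI_ERS_RESULT_NEED_RESET;
    }

    static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
    {
            /* STEP 4: re-initialize after the slot/link reset. */
            pci_restore_state(pdev);
            return PCI_ERS_RESULT_RECOVERED;
    }

    static void foo_resume(struct pci_dev *pdev)
    {
            /* STEP 5: restart normal I/O. */
    }

    static const struct pci_error_handlers foo_err_handlers = {
            .error_detected = foo_error_detected,
            .slot_reset     = foo_slot_reset,
            .resume         = foo_resume,
    };
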


@ -50,6 +50,7 @@ Additional required properties for imx7d-pcie:
- reset-names: Must contain the following entries:
- "pciephy"
- "apps"
- "turnoff"
Example:


@ -19,6 +19,9 @@ pcie_msi_intc : Interrupt controller device node for MSI IRQ chip
interrupt-cells: should be set to 1
interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
ti,syscon-pcie-id : phandle to the device control module required to set device
id and vendor id.
Example:
pcie_msi_intc: msi-interrupt-controller {
interrupt-controller;


@ -7,6 +7,7 @@ OHCI and EHCI controllers.
Required properties:
- compatible: "renesas,pci-r8a7743" for the R8A7743 SoC;
"renesas,pci-r8a7744" for the R8A7744 SoC;
"renesas,pci-r8a7745" for the R8A7745 SoC;
"renesas,pci-r8a7790" for the R8A7790 SoC;
"renesas,pci-r8a7791" for the R8A7791 SoC;


@ -2,6 +2,7 @@
Required properties:
compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
"renesas,pcie-r8a7744" for the R8A7744 SoC;
"renesas,pcie-r8a7779" for the R8A7779 SoC;
"renesas,pcie-r8a7790" for the R8A7790 SoC;
"renesas,pcie-r8a7791" for the R8A7791 SoC;
@ -9,6 +10,7 @@ compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
"renesas,pcie-r8a7795" for the R8A7795 SoC;
"renesas,pcie-r8a7796" for the R8A7796 SoC;
"renesas,pcie-r8a77980" for the R8A77980 SoC;
"renesas,pcie-r8a77990" for the R8A77990 SoC;
"renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or
RZ/G1 compatible device.
"renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.


@ -26,6 +26,11 @@ HOST MODE
ranges,
interrupt-map-mask,
interrupt-map : as specified in ../designware-pcie.txt
- ti,syscon-unaligned-access: phandle to the syscon DT node. The 1st argument
should contain the register offset within syscon
and the 2nd argument should contain the bit field
for setting the bit to enable unaligned
access.
DEVICE MODE
===========


@ -30,7 +30,7 @@ available subsections can be seen below.
input
usb/index
firewire
pci
pci/index
spi
i2c
hsi


@ -0,0 +1,22 @@
.. SPDX-License-Identifier: GPL-2.0
============================================
The Linux PCI driver implementer's API guide
============================================
.. class:: toc-title
Table of contents
.. toctree::
:maxdepth: 2
pci
p2pdma
.. only:: subproject and html
Indices
=======
* :ref:`genindex`


@ -0,0 +1,145 @@
.. SPDX-License-Identifier: GPL-2.0
============================
PCI Peer-to-Peer DMA Support
============================
The PCI bus has pretty decent support for performing DMA transfers
between two devices on the bus. This type of transaction is henceforth
called Peer-to-Peer (or P2P). However, there are a number of issues that
make P2P transactions tricky to do in a perfectly safe way.
One of the biggest issues is that PCI doesn't require forwarding
transactions between hierarchy domains, and in PCIe, each Root Port
defines a separate hierarchy domain. To make things worse, there is no
simple way to determine if a given Root Complex supports this or not.
(See PCIe r4.0, sec 1.3.1). Therefore, as of this writing, the kernel
only supports doing P2P when the endpoints involved are all behind the
same PCI bridge, as such devices are all in the same PCI hierarchy
domain, and the spec guarantees that all transactions within the
hierarchy will be routable, but it does not require routing
between hierarchies.
The second issue is that to make use of existing interfaces in Linux,
memory that is used for P2P transactions needs to be backed by struct
pages. However, PCI BARs are not typically cache coherent so there are
a few corner case gotchas with these pages so developers need to
be careful about what they do with them.
Driver Writer's Guide
=====================
In a given P2P implementation there may be three or more different
types of kernel drivers in play:
* Provider - A driver which provides or publishes P2P resources like
memory or doorbell registers to other drivers.
* Client - A driver which makes use of a resource by setting up a
DMA transaction to or from it.
* Orchestrator - A driver which orchestrates the flow of data between
clients and providers.
In many cases there could be overlap between these three types (for
example, it may be typical for a driver to be both a provider and a client).
For example, in the NVMe Target Copy Offload implementation:
* The NVMe PCI driver is at once a client, provider and orchestrator
in that it exposes any CMB (Controller Memory Buffer) as a P2P memory
resource (provider), it accepts P2P memory pages as buffers in requests
to be used directly (client) and it can also make use of the CMB as
submission queue entries (orchestrator).
* The RDMA driver is a client in this arrangement so that an RNIC
can DMA directly to the memory exposed by the NVMe device.
* The NVMe Target driver (nvmet) can orchestrate the data from the RNIC
to the P2P memory (CMB) and then to the NVMe device (and vice versa).
This is currently the only arrangement supported by the kernel but
one could imagine slight tweaks to this that would allow for the same
functionality. For example, if a specific RNIC added a BAR with some
memory behind it, its driver could add support as a P2P provider and
then the NVMe Target could use the RNIC's memory instead of the CMB
in cases where the NVMe cards in use do not have CMB support.
Provider Drivers
----------------
A provider simply needs to register a BAR (or a portion of a BAR)
as a P2P DMA resource using :c:func:`pci_p2pdma_add_resource()`.
This will register struct pages for all the specified memory.
After that it may optionally publish all of its resources as
P2P memory using :c:func:`pci_p2pmem_publish()`. This will allow
any orchestrator drivers to find and use the memory. When marked in
this way, the resource must be regular memory with no side effects.
For the time being this is fairly rudimentary in that all resources
are typically going to be P2P memory. Future work will likely expand
this to include other types of resources like doorbells.
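
As an illustration, a provider's probe routine might look roughly like the
following sketch (the driver name and BAR number are hypothetical; error
handling is trimmed, and <linux/pci-p2pdma.h> is assumed)::

    /* Hypothetical provider: expose all of BAR 4 as P2P memory. */
    static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            int rc;

            rc = pcim_enable_device(pdev);
            if (rc)
                    return rc;

            /* Register struct pages covering the whole BAR (offset 0). */
            rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
            if (rc)
                    return rc;

            /* Make the memory discoverable by orchestrators. */
            pci_p2pmem_publish(pdev, true);

            return 0;
    }
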
Client Drivers
--------------
A client driver typically only has to conditionally change its DMA map
routine to use the mapping function :c:func:`pci_p2pdma_map_sg()` instead
of the usual :c:func:`dma_map_sg()` function. Memory mapped in this
way does not need to be unmapped.
The client may also, optionally, make use of
:c:func:`is_pci_p2pdma_page()` to determine when to use the P2P mapping
functions and when to use the regular mapping functions. In some
situations, it may be more appropriate to use a flag to indicate a
given request is P2P memory and map appropriately. It is important to
ensure that struct pages that back P2P memory stay out of code that
does not have support for them as other code may treat the pages as
regular memory which may not be appropriate.
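
A sketch of that conditional mapping, mirroring what the rdma_rw and
nvme-pci changes in this series do (the function name is hypothetical)::

    /* Map a scatterlist, taking the P2P path for p2pmem pages. */
    static int foo_map_sg(struct device *dev, struct scatterlist *sg,
                          int nents, enum dma_data_direction dir)
    {
            if (is_pci_p2pdma_page(sg_page(sg)))
                    /* P2P mappings need no later unmap. */
                    return pci_p2pdma_map_sg(dev, sg, nents, dir);

            return dma_map_sg(dev, sg, nents, dir);
    }
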
Orchestrator Drivers
--------------------
The first task an orchestrator driver must do is compile a list of
all client devices that will be involved in a given transaction. For
example, the NVMe Target driver creates a list including the namespace
block device and the RNIC in use. If the orchestrator has access to
a specific P2P provider to use, it may check compatibility using
:c:func:`pci_p2pdma_distance()`; otherwise, it may find a memory provider
that's compatible with all clients using :c:func:`pci_p2pmem_find()`.
If more than one provider is supported, the one nearest to all the clients will
be chosen first. If more than one provider is an equal distance away, the
one returned will be chosen at random (the choice is not arbitrary but
truly random). This function returns the PCI device to use for the provider
with a reference taken, so when it is no longer needed it should be
returned with pci_dev_put().
Once a provider is selected, the orchestrator can then use
:c:func:`pci_alloc_p2pmem()` and :c:func:`pci_free_p2pmem()` to
allocate P2P memory from the provider. :c:func:`pci_p2pmem_alloc_sgl()`
and :c:func:`pci_p2pmem_free_sgl()` are convenience functions for
allocating scatter-gather lists with P2P memory.
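
Putting these calls together, an orchestrator might do something like this
sketch (two hypothetical client devices; :c:func:`pci_p2pmem_find_many()` is
the multi-client variant used the same way by nvmet later in this series)::

    static int foo_setup_p2p_buffer(struct device *client_a,
                                    struct device *client_b)
    {
            struct device *clients[] = { client_a, client_b };
            struct pci_dev *provider;
            void *buf;

            /* Pick the provider nearest to both clients (takes a reference). */
            provider = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
            if (!provider)
                    return -ENODEV;

            buf = pci_alloc_p2pmem(provider, SZ_4K);
            if (!buf) {
                    pci_dev_put(provider);
                    return -ENOMEM;
            }

            /* ... set up DMA to/from buf ... */

            pci_free_p2pmem(provider, buf, SZ_4K);
            pci_dev_put(provider);
            return 0;
    }
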
Struct Page Caveats
-------------------
Driver writers should be very careful about not passing these special
struct pages to code that isn't prepared for them. At this time, the kernel
interfaces do not have any checks for ensuring this. This obviously
precludes passing these pages to userspace.
P2P memory is also technically IO memory but should never have any side
effects behind it. Thus, the order of loads and stores should not be important
and ioreadX(), iowriteX() and friends should not be necessary.
However, as the memory is not cache coherent, if access ever needs to
be protected by a spinlock then :c:func:`mmiowb()` must be used before
unlocking the lock. (See ACQUIRES VS I/O ACCESSES in
Documentation/memory-barriers.txt)
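
For instance, the locking pattern described above reduces to a sketch like
this (the lock and the P2P pointer are hypothetical)::

    static DEFINE_SPINLOCK(foo_lock);

    static void foo_post_entry(u32 *p2p_slot, u32 val)
    {
            spin_lock(&foo_lock);
            *p2p_slot = val;    /* plain store to P2P (IO) memory */
            mmiowb();           /* order the store before the unlock */
            spin_unlock(&foo_lock);
    }
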
P2P DMA Support Library
=======================
.. kernel-doc:: drivers/pci/p2pdma.c
:export:


@ -23,7 +23,7 @@ The primary means of communicating with the Switchtec management firmware is
through the Memory-mapped Remote Procedure Call (MRPC) interface.
Commands are submitted to the interface with a 4-byte command
identifier and up to 1KB of command specific data. The firmware will
respond with a 4 bytes return code and up to 1KB of command specific
respond with a 4-byte return code and up to 1KB of command-specific
data. The interface only processes a single command at a time.
@ -36,8 +36,8 @@ device: /dev/switchtec#, one for each management endpoint in the system.
The char device has the following semantics:
* A write must consist of at least 4 bytes and no more than 1028 bytes.
The first four bytes will be interpreted as the command to run and
the remainder will be used as the input data. A write will send the
The first 4 bytes will be interpreted as the Command ID and the
remainder will be used as the input data. A write will send the
command to the firmware to begin processing.
* Each write must be followed by exactly one read. Any double write will
@ -45,9 +45,9 @@ The char device has the following semantics:
produce an error.
* A read will block until the firmware completes the command and return
the four bytes of status plus up to 1024 bytes of output data. (The
length will be specified by the size parameter of the read call --
reading less than 4 bytes will produce an error.
the 4-byte Command Return Value plus up to 1024 bytes of output
data. (The length will be specified by the size parameter of the read
call -- reading less than 4 bytes will produce an error.)
* The poll call will also be supported for userspace applications that
need to do other things while waiting for the command to complete.
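
As a rough userspace illustration of these semantics, one command could be
submitted like this sketch (the device path, buffer sizes and command ID
are placeholders, not part of any published API):

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    /* Execute one MRPC command: write Command ID + input, then one read. */
    static int mrpc_exec(const char *path, uint32_t cmd_id,
                         const void *in, size_t in_len,
                         void *out, size_t out_len, uint32_t *ret_val)
    {
            uint8_t wbuf[1028], rbuf[1028];
            ssize_t n;
            int fd = open(path, O_RDWR);

            if (fd < 0)
                    return -1;

            memcpy(wbuf, &cmd_id, 4);          /* first 4 bytes: Command ID */
            memcpy(wbuf + 4, in, in_len);      /* remainder: input data */
            if (write(fd, wbuf, 4 + in_len) < 0)
                    goto err;

            n = read(fd, rbuf, 4 + out_len);   /* blocks until completion */
            if (n < 4)
                    goto err;

            memcpy(ret_val, rbuf, 4);          /* Command Return Value */
            memcpy(out, rbuf + 4, n - 4);      /* command-specific output */
            close(fd);
            return 0;
    err:
            close(fd);
            return -1;
    }
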
@ -83,10 +83,20 @@ The following IOCTLs are also supported by the device:
Non-Transparent Bridge (NTB) Driver
===================================
An NTB driver is provided for the switchtec hardware in switchtec_ntb.
Currently, it only supports switches configured with exactly 2
partitions. It also requires the following configuration settings:
An NTB hardware driver is provided for the Switchtec hardware in
ntb_hw_switchtec. Currently, it only supports switches configured with
exactly 2 NT partitions and zero or more non-NT partitions. It also requires
the following configuration settings:
* Both partitions must be able to access each other's GAS spaces.
* Both NT partitions must be able to access each other's GAS spaces.
Thus, the bits in the GAS Access Vector under Management Settings
must be set to support this.
* Kernel configuration MUST include support for NTB (CONFIG_NTB needs
to be set)
NT EP BAR 2 will be dynamically configured as a Direct Window, and
the configuration file does not need to configure it explicitly.
Please refer to Documentation/ntb.txt in the Linux source tree for an overall
understanding of the Linux NTB stack. ntb_hw_switchtec works as an NTB
Hardware Driver in this stack.


@ -11299,7 +11299,7 @@ M: Murali Karicheri <m-karicheri2@ti.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/controller/dwc/*keystone*
F: drivers/pci/controller/dwc/pci-keystone.c
PCI ENDPOINT SUBSYSTEM
M: Kishon Vijay Abraham I <kishon@ti.com>


@ -146,8 +146,9 @@
fsl,max-link-speed = <2>;
power-domains = <&pgc_pcie_phy>;
resets = <&src IMX7_RESET_PCIEPHY>,
<&src IMX7_RESET_PCIE_CTRL_APPS_EN>;
reset-names = "pciephy", "apps";
<&src IMX7_RESET_PCIE_CTRL_APPS_EN>,
<&src IMX7_RESET_PCIE_CTRL_APPS_TURNOFF>;
reset-names = "pciephy", "apps", "turnoff";
status = "disabled";
};
};


@ -165,16 +165,15 @@ static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci)
/* Interface called from ACPI code to setup PCI host controller */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
int node = acpi_get_node(root->device->handle);
struct acpi_pci_generic_root_info *ri;
struct pci_bus *bus, *child;
struct acpi_pci_root_ops *root_ops;
ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
ri = kzalloc(sizeof(*ri), GFP_KERNEL);
if (!ri)
return NULL;
root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node);
root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
if (!root_ops) {
kfree(ri);
return NULL;


@ -54,7 +54,6 @@ void pnv_cxl_release_hwirq_ranges(struct cxl_irq_ranges *irqs,
struct pnv_php_slot {
struct hotplug_slot slot;
struct hotplug_slot_info slot_info;
uint64_t id;
char *name;
int slot_no;
@ -72,6 +71,7 @@ struct pnv_php_slot {
struct pci_dev *pdev;
struct pci_bus *bus;
bool power_state_check;
u8 attention_state;
void *fdt;
void *dt;
struct of_changeset ocs;


@ -356,7 +356,7 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
} else {
struct pci_root_info *info;
info = kzalloc_node(sizeof(*info), GFP_KERNEL, node);
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
dev_err(&root->device->dev,
"pci_bus %04x:%02x: ignored (out of memory)\n",


@ -629,17 +629,11 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff);
static void quirk_no_aersid(struct pci_dev *pdev)
{
/* VMD Domain */
if (is_vmd(pdev->bus))
if (is_vmd(pdev->bus) && pci_is_root_bus(pdev->bus))
pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334a, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334b, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334c, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334d, quirk_no_aersid);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
#ifdef CONFIG_PHYS_ADDR_T_64BIT


@ -421,7 +421,8 @@ out:
}
EXPORT_SYMBOL(acpi_pci_osc_control_set);
static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm)
static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
bool is_pcie)
{
u32 support, control, requested;
acpi_status status;
@ -455,9 +456,15 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm)
decode_osc_support(root, "OS supports", support);
status = acpi_pci_osc_support(root, support);
if (ACPI_FAILURE(status)) {
dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n",
acpi_format_exception(status));
*no_aspm = 1;
/* _OSC is optional for PCI host bridges */
if ((status == AE_NOT_FOUND) && !is_pcie)
return;
dev_info(&device->dev, "_OSC failed (%s)%s\n",
acpi_format_exception(status),
pcie_aspm_support_enabled() ? "; disabling ASPM" : "");
return;
}
@ -533,6 +540,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
acpi_handle handle = device->handle;
int no_aspm = 0;
bool hotadd = system_state == SYSTEM_RUNNING;
bool is_pcie;
root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
if (!root)
@ -590,7 +598,8 @@ static int acpi_pci_root_add(struct acpi_device *device,
root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle);
negotiate_os_control(root, &no_aspm);
is_pcie = strcmp(acpi_device_hid(device), "PNP0A08") == 0;
negotiate_os_control(root, &no_aspm, is_pcie);
/*
* TBD: Need PCI interface for enumeration/configuration of roots.


@ -24,11 +24,15 @@ static int acpi_data_get_property_array(const struct acpi_device_data *data,
acpi_object_type type,
const union acpi_object **obj);
/* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
static const guid_t prp_guid =
static const guid_t prp_guids[] = {
/* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c,
0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01);
/* ACPI _DSD data subnodes GUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */
0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01),
/* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */
GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3,
0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4),
};
static const guid_t ads_guid =
GUID_INIT(0xdbb8e3e6, 0x5886, 0x4ba6,
0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b);
@ -56,6 +60,7 @@ static bool acpi_nondev_subnode_extract(const union acpi_object *desc,
dn->name = link->package.elements[0].string.pointer;
dn->fwnode.ops = &acpi_data_fwnode_ops;
dn->parent = parent;
INIT_LIST_HEAD(&dn->data.properties);
INIT_LIST_HEAD(&dn->data.subnodes);
result = acpi_extract_properties(desc, &dn->data);
@ -288,6 +293,35 @@ static void acpi_init_of_compatible(struct acpi_device *adev)
adev->flags.of_compatible_ok = 1;
}
static bool acpi_is_property_guid(const guid_t *guid)
{
int i;
for (i = 0; i < ARRAY_SIZE(prp_guids); i++) {
if (guid_equal(guid, &prp_guids[i]))
return true;
}
return false;
}
struct acpi_device_properties *
acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid,
const union acpi_object *properties)
{
struct acpi_device_properties *props;
props = kzalloc(sizeof(*props), GFP_KERNEL);
if (props) {
INIT_LIST_HEAD(&props->list);
props->guid = guid;
props->properties = properties;
list_add_tail(&props->list, &data->properties);
}
return props;
}
static bool acpi_extract_properties(const union acpi_object *desc,
struct acpi_device_data *data)
{
@ -312,7 +346,7 @@ static bool acpi_extract_properties(const union acpi_object *desc,
properties->type != ACPI_TYPE_PACKAGE)
break;
if (!guid_equal((guid_t *)guid->buffer.pointer, &prp_guid))
if (!acpi_is_property_guid((guid_t *)guid->buffer.pointer))
continue;
/*
@ -320,13 +354,13 @@ static bool acpi_extract_properties(const union acpi_object *desc,
* package immediately following it.
*/
if (!acpi_properties_format_valid(properties))
break;
continue;
data->properties = properties;
return true;
acpi_data_add_props(data, (const guid_t *)guid->buffer.pointer,
properties);
}
return false;
return !list_empty(&data->properties);
}
void acpi_init_properties(struct acpi_device *adev)
@ -336,6 +370,7 @@ void acpi_init_properties(struct acpi_device *adev)
acpi_status status;
bool acpi_of = false;
INIT_LIST_HEAD(&adev->data.properties);
INIT_LIST_HEAD(&adev->data.subnodes);
if (!adev->handle)
@ -398,11 +433,16 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list)
void acpi_free_properties(struct acpi_device *adev)
{
struct acpi_device_properties *props, *tmp;
acpi_destroy_nondev_subnodes(&adev->data.subnodes);
ACPI_FREE((void *)adev->data.pointer);
adev->data.of_compatible = NULL;
adev->data.pointer = NULL;
adev->data.properties = NULL;
list_for_each_entry_safe(props, tmp, &adev->data.properties, list) {
list_del(&props->list);
kfree(props);
}
}
/**
@ -427,32 +467,37 @@ static int acpi_data_get_property(const struct acpi_device_data *data,
const char *name, acpi_object_type type,
const union acpi_object **obj)
{
const union acpi_object *properties;
int i;
const struct acpi_device_properties *props;
if (!data || !name)
return -EINVAL;
if (!data->pointer || !data->properties)
if (!data->pointer || list_empty(&data->properties))
return -EINVAL;
properties = data->properties;
for (i = 0; i < properties->package.count; i++) {
const union acpi_object *propname, *propvalue;
const union acpi_object *property;
list_for_each_entry(props, &data->properties, list) {
const union acpi_object *properties;
unsigned int i;
property = &properties->package.elements[i];
properties = props->properties;
for (i = 0; i < properties->package.count; i++) {
const union acpi_object *propname, *propvalue;
const union acpi_object *property;
propname = &property->package.elements[0];
propvalue = &property->package.elements[1];
property = &properties->package.elements[i];
if (!strcmp(name, propname->string.pointer)) {
if (type != ACPI_TYPE_ANY && propvalue->type != type)
return -EPROTO;
if (obj)
*obj = propvalue;
propname = &property->package.elements[0];
propvalue = &property->package.elements[1];
return 0;
if (!strcmp(name, propname->string.pointer)) {
if (type != ACPI_TYPE_ANY &&
propvalue->type != type)
return -EPROTO;
if (obj)
*obj = propvalue;
return 0;
}
}
}
return -EINVAL;


@ -132,8 +132,8 @@ void acpi_extract_apple_properties(struct acpi_device *adev)
}
WARN_ON(free_space != (void *)newprops + newsize);
adev->data.properties = newprops;
adev->data.pointer = newprops;
acpi_data_add_props(&adev->data, &apple_prp_guid, newprops);
out_free:
ACPI_FREE(props);


@ -873,7 +873,7 @@ static int inic_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
* like others but it will lock up the whole machine HARD if
* 65536 byte PRD entry is fed. Reduce maximum segment size.
*/
rc = pci_set_dma_max_seg_size(pdev, 65536 - 512);
rc = dma_set_max_seg_size(&pdev->dev, 65536 - 512);
if (rc) {
dev_err(&pdev->dev, "failed to set the maximum segment size\n");
return rc;


@ -780,7 +780,7 @@ static int rsxx_pci_probe(struct pci_dev *dev,
goto failed_enable;
pci_set_master(dev);
pci_set_dma_max_seg_size(dev, RSXX_HW_BLK_SIZE);
dma_set_max_seg_size(&dev->dev, RSXX_HW_BLK_SIZE);
st = dma_set_mask(&dev->dev, DMA_BIT_MASK(64));
if (st) {


@ -198,7 +198,6 @@ static pci_ers_result_t adf_slot_reset(struct pci_dev *pdev)
pr_err("QAT: Can't find acceleration device\n");
return PCI_ERS_RESULT_DISCONNECT;
}
pci_cleanup_aer_uncorrect_error_status(pdev);
if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC))
return PCI_ERS_RESULT_DISCONNECT;


@ -1258,7 +1258,6 @@ static pci_ers_result_t ioat_pcie_error_detected(struct pci_dev *pdev,
static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev)
{
pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED;
int err;
dev_dbg(&pdev->dev, "%s post reset handling\n", DRV_NAME);
@ -1273,12 +1272,6 @@ static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev)
pci_wake_from_d3(pdev, false);
}
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
dev_err(&pdev->dev,
"AER uncorrect error status clear failed: %#x\n", err);
}
return result;
}


@ -1194,7 +1194,7 @@ int acpi_gpio_count(struct device *dev, const char *con_id)
bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id)
{
/* Never allow fallback if the device has properties */
if (adev->data.properties || adev->driver_gpios)
if (acpi_dev_has_props(adev) || adev->driver_gpios)
return false;
return con_id == NULL;


@ -12,6 +12,7 @@
*/
#include <linux/moduleparam.h>
#include <linux/slab.h>
#include <linux/pci-p2pdma.h>
#include <rdma/mr_pool.h>
#include <rdma/rw.h>
@ -280,7 +281,11 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
struct ib_device *dev = qp->pd->device;
int ret;
ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
if (is_pci_p2pdma_page(sg_page(sg)))
ret = pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir);
else
ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
if (!ret)
return -ENOMEM;
sg_cnt = ret;
@ -602,7 +607,9 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
break;
}
ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
/* P2PDMA contexts do not need to be unmapped */
if (!is_pci_p2pdma_page(sg_page(sg)))
ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
}
EXPORT_SYMBOL(rdma_rw_ctx_destroy);


@ -99,7 +99,7 @@ static void dealloc_oc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
static void dealloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
{
dma_free_coherent(&(rdev->lldi.pdev->dev), sq->memsize, sq->queue,
pci_unmap_addr(sq, mapping));
dma_unmap_addr(sq, mapping));
}
static void dealloc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
@ -132,7 +132,7 @@ static int alloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
if (!sq->queue)
return -ENOMEM;
sq->phys_addr = virt_to_phys(sq->queue);
pci_unmap_addr_set(sq, mapping, sq->dma_addr);
dma_unmap_addr_set(sq, mapping, sq->dma_addr);
return 0;
}
@ -2521,7 +2521,7 @@ static void free_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx,
dma_free_coherent(&rdev->lldi.pdev->dev,
wq->memsize, wq->queue,
pci_unmap_addr(wq, mapping));
dma_unmap_addr(wq, mapping));
c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size);
kfree(wq->sw_rq);
c4iw_put_qpid(rdev, wq->qid, uctx);
@ -2570,7 +2570,7 @@ static int alloc_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx,
goto err_free_rqtpool;
memset(wq->queue, 0, wq->memsize);
pci_unmap_addr_set(wq, mapping, wq->dma_addr);
dma_unmap_addr_set(wq, mapping, wq->dma_addr);
wq->bar2_va = c4iw_bar2_addrs(rdev, wq->qid, T4_BAR2_QTYPE_EGRESS,
&wq->bar2_qid,
@ -2649,7 +2649,7 @@ static int alloc_srq_queue(struct c4iw_srq *srq, struct c4iw_dev_ucontext *uctx,
err_free_queue:
dma_free_coherent(&rdev->lldi.pdev->dev,
wq->memsize, wq->queue,
pci_unmap_addr(wq, mapping));
dma_unmap_addr(wq, mapping));
err_free_rqtpool:
c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size);
err_free_pending_wrs:


@ -397,7 +397,7 @@ struct t4_srq_pending_wr {
struct t4_srq {
union t4_recv_wr *queue;
dma_addr_t dma_addr;
DECLARE_PCI_UNMAP_ADDR(mapping);
DEFINE_DMA_UNMAP_ADDR(mapping);
struct t4_swrqe *sw_rq;
void __iomem *bar2_va;
u64 bar2_pa;


@ -650,7 +650,6 @@ pci_resume(struct pci_dev *pdev)
struct hfi1_devdata *dd = pci_get_drvdata(pdev);
dd_dev_info(dd, "HFI1 resume function called\n");
pci_cleanup_aer_uncorrect_error_status(pdev);
/*
* Running jobs will fail, since it's asynchronous
* unlike sysfs-requested reset. Better than


@ -597,7 +597,6 @@ qib_pci_resume(struct pci_dev *pdev)
struct qib_devdata *dd = pci_get_drvdata(pdev);
qib_devinfo(pdev, "QIB resume function called\n");
pci_cleanup_aer_uncorrect_error_status(pdev);
/*
* Running jobs will fail, since it's asynchronous
* unlike sysfs-requested reset. Better than


@ -1964,8 +1964,6 @@ static pci_ers_result_t alx_pci_error_slot_reset(struct pci_dev *pdev)
if (!alx_reset_mac(hw))
rc = PCI_ERS_RESULT_RECOVERED;
out:
pci_cleanup_aer_uncorrect_error_status(pdev);
rtnl_unlock();
return rc;


@ -8793,13 +8793,6 @@ static pci_ers_result_t bnx2_io_slot_reset(struct pci_dev *pdev)
if (!(bp->flags & BNX2_FLAG_AER_ENABLED))
return result;
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
dev_err(&pdev->dev,
"pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
err); /* non-fatal, continue */
}
return result;
}


@ -14380,14 +14380,6 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
rtnl_unlock();
/* If AER, perform cleanup of the PCIe registers */
if (bp->flags & AER_ENABLED) {
if (pci_cleanup_aer_uncorrect_error_status(pdev))
BNX2X_ERR("pci_cleanup_aer_uncorrect_error_status failed\n");
else
DP(NETIF_MSG_HW, "pci_cleanup_aer_uncorrect_error_status succeeded\n");
}
return PCI_ERS_RESULT_RECOVERED;
}


@ -10354,13 +10354,6 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
rtnl_unlock();
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
dev_err(&pdev->dev,
"pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
err); /* non-fatal, continue */
}
return PCI_ERS_RESULT_RECOVERED;
}


@ -4767,7 +4767,6 @@ static pci_ers_result_t eeh_slot_reset(struct pci_dev *pdev)
pci_set_master(pdev);
pci_restore_state(pdev);
pci_save_state(pdev);
pci_cleanup_aer_uncorrect_error_status(pdev);
if (t4_wait_dev_ready(adap->regs) < 0)
return PCI_ERS_RESULT_DISCONNECT;


@ -6146,7 +6146,6 @@ static pci_ers_result_t be_eeh_reset(struct pci_dev *pdev)
if (status)
return PCI_ERS_RESULT_DISCONNECT;
pci_cleanup_aer_uncorrect_error_status(pdev);
be_clear_error(adapter, BE_CLEAR_ALL);
return PCI_ERS_RESULT_RECOVERED;
}


@ -6854,8 +6854,6 @@ static pci_ers_result_t e1000_io_slot_reset(struct pci_dev *pdev)
result = PCI_ERS_RESULT_RECOVERED;
}
pci_cleanup_aer_uncorrect_error_status(pdev);
return result;
}


@ -2440,8 +2440,6 @@ static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
result = PCI_ERS_RESULT_RECOVERED;
}
pci_cleanup_aer_uncorrect_error_status(pdev);
return result;
}


@ -14552,7 +14552,6 @@ static pci_ers_result_t i40e_pci_error_slot_reset(struct pci_dev *pdev)
{
struct i40e_pf *pf = pci_get_drvdata(pdev);
pci_ers_result_t result;
int err;
u32 reg;
dev_dbg(&pdev->dev, "%s\n", __func__);
@ -14573,14 +14572,6 @@ static pci_ers_result_t i40e_pci_error_slot_reset(struct pci_dev *pdev)
result = PCI_ERS_RESULT_DISCONNECT;
}
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
dev_info(&pdev->dev,
"pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
err);
/* non-fatal, continue */
}
return result;
}

View File

@ -9086,7 +9086,6 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
struct igb_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
pci_ers_result_t result;
int err;
if (pci_enable_device_mem(pdev)) {
dev_err(&pdev->dev,
@ -9110,14 +9109,6 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
result = PCI_ERS_RESULT_RECOVERED;
}
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
dev_err(&pdev->dev,
"pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n",
err);
/* non-fatal, continue */
}
return result;
}


@ -11322,8 +11322,6 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev,
/* Free device reference count */
pci_dev_put(vfdev);
}
pci_cleanup_aer_uncorrect_error_status(pdev);
}
/*
@ -11373,7 +11371,6 @@ static pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev)
{
struct ixgbe_adapter *adapter = pci_get_drvdata(pdev);
pci_ers_result_t result;
int err;
if (pci_enable_device_mem(pdev)) {
e_err(probe, "Cannot re-enable PCI device after reset.\n");
@ -11393,13 +11390,6 @@ static pci_ers_result_t ixgbe_io_slot_reset(struct pci_dev *pdev)
result = PCI_ERS_RESULT_RECOVERED;
}
err = pci_cleanup_aer_uncorrect_error_status(pdev);
if (err) {
e_dev_err("pci_cleanup_aer_uncorrect_error_status "
"failed 0x%0x\n", err);
/* non-fatal, continue */
}
return result;
}


@ -1784,11 +1784,6 @@ static pci_ers_result_t netxen_io_slot_reset(struct pci_dev *pdev)
return err ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
static void netxen_io_resume(struct pci_dev *pdev)
{
pci_cleanup_aer_uncorrect_error_status(pdev);
}
static void netxen_nic_shutdown(struct pci_dev *pdev)
{
struct netxen_adapter *adapter = pci_get_drvdata(pdev);
@ -3465,7 +3460,6 @@ netxen_free_ip_list(struct netxen_adapter *adapter, bool master)
static const struct pci_error_handlers netxen_err_handler = {
.error_detected = netxen_io_error_detected,
.slot_reset = netxen_io_slot_reset,
.resume = netxen_io_resume,
};
static struct pci_driver netxen_driver = {


@ -4233,7 +4233,6 @@ static void qlcnic_83xx_io_resume(struct pci_dev *pdev)
{
struct qlcnic_adapter *adapter = pci_get_drvdata(pdev);
pci_cleanup_aer_uncorrect_error_status(pdev);
if (test_and_clear_bit(__QLCNIC_AER, &adapter->state))
qlcnic_83xx_aer_start_poll_work(adapter);
}


@ -3930,7 +3930,6 @@ static void qlcnic_82xx_io_resume(struct pci_dev *pdev)
u32 state;
struct qlcnic_adapter *adapter = pci_get_drvdata(pdev);
pci_cleanup_aer_uncorrect_error_status(pdev);
state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
if (state == QLCNIC_DEV_READY && test_and_clear_bit(__QLCNIC_AER,
&adapter->state))


@ -3821,7 +3821,6 @@ static pci_ers_result_t efx_io_slot_reset(struct pci_dev *pdev)
{
struct efx_nic *efx = pci_get_drvdata(pdev);
pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED;
int rc;
if (pci_enable_device(pdev)) {
netif_err(efx, hw, efx->net_dev,
@ -3829,13 +3828,6 @@ static pci_ers_result_t efx_io_slot_reset(struct pci_dev *pdev)
status = PCI_ERS_RESULT_DISCONNECT;
}
rc = pci_cleanup_aer_uncorrect_error_status(pdev);
if (rc) {
netif_err(efx, hw, efx->net_dev,
"pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc);
/* Non-fatal error. Continue. */
}
return status;
}


@ -3160,7 +3160,6 @@ static pci_ers_result_t ef4_io_slot_reset(struct pci_dev *pdev)
{
struct ef4_nic *efx = pci_get_drvdata(pdev);
pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED;
int rc;
if (pci_enable_device(pdev)) {
netif_err(efx, hw, efx->net_dev,
@ -3168,13 +3167,6 @@ static pci_ers_result_t ef4_io_slot_reset(struct pci_dev *pdev)
status = PCI_ERS_RESULT_DISCONNECT;
}
rc = pci_cleanup_aer_uncorrect_error_status(pdev);
if (rc) {
netif_err(efx, hw, efx->net_dev,
"pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc);
/* Non-fatal error. Continue. */
}
return status;
}


@ -3064,7 +3064,11 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
ns->queue = blk_mq_init_queue(ctrl->tagset);
if (IS_ERR(ns->queue))
goto out_free_ns;
blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
if (ctrl->ops->flags & NVME_F_PCI_P2PDMA)
blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
ns->queue->queuedata = ns;
ns->ctrl = ctrl;


@ -343,6 +343,7 @@ struct nvme_ctrl_ops {
unsigned int flags;
#define NVME_F_FABRICS (1 << 0)
#define NVME_F_METADATA_SUPPORTED (1 << 1)
#define NVME_F_PCI_P2PDMA (1 << 2)
int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);


@ -30,6 +30,7 @@
#include <linux/types.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/sed-opal.h>
#include <linux/pci-p2pdma.h>
#include "nvme.h"
@ -99,9 +100,8 @@ struct nvme_dev {
struct work_struct remove_work;
struct mutex shutdown_lock;
bool subsystem;
void __iomem *cmb;
pci_bus_addr_t cmb_bus_addr;
u64 cmb_size;
bool cmb_use_sqes;
u32 cmbsz;
u32 cmbloc;
struct nvme_ctrl ctrl;
@ -158,7 +158,7 @@ struct nvme_queue {
struct nvme_dev *dev;
spinlock_t sq_lock;
struct nvme_command *sq_cmds;
struct nvme_command __iomem *sq_cmds_io;
bool sq_cmds_is_io;
spinlock_t cq_lock ____cacheline_aligned_in_smp;
volatile struct nvme_completion *cqes;
struct blk_mq_tags **tags;
@ -447,11 +447,8 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd)
{
spin_lock(&nvmeq->sq_lock);
if (nvmeq->sq_cmds_io)
memcpy_toio(&nvmeq->sq_cmds_io[nvmeq->sq_tail], cmd,
sizeof(*cmd));
else
memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
if (++nvmeq->sq_tail == nvmeq->q_depth)
nvmeq->sq_tail = 0;
@ -748,8 +745,13 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
goto out;
ret = BLK_STS_RESOURCE;
nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, dma_dir,
DMA_ATTR_NO_WARN);
if (is_pci_p2pdma_page(sg_page(iod->sg)))
nr_mapped = pci_p2pdma_map_sg(dev->dev, iod->sg, iod->nents,
dma_dir);
else
nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
dma_dir, DMA_ATTR_NO_WARN);
if (!nr_mapped)
goto out;
@ -791,7 +793,10 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
DMA_TO_DEVICE : DMA_FROM_DEVICE;
if (iod->nents) {
dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
/* P2PDMA requests do not need to be unmapped */
if (!is_pci_p2pdma_page(sg_page(iod->sg)))
dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
if (blk_integrity_rq(req))
dma_unmap_sg(dev->dev, &iod->meta_sg, 1, dma_dir);
}
@ -1232,9 +1237,18 @@ static void nvme_free_queue(struct nvme_queue *nvmeq)
{
dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth),
(void *)nvmeq->cqes, nvmeq->cq_dma_addr);
if (nvmeq->sq_cmds)
dma_free_coherent(nvmeq->q_dmadev, SQ_SIZE(nvmeq->q_depth),
nvmeq->sq_cmds, nvmeq->sq_dma_addr);
if (nvmeq->sq_cmds) {
if (nvmeq->sq_cmds_is_io)
pci_free_p2pmem(to_pci_dev(nvmeq->q_dmadev),
nvmeq->sq_cmds,
SQ_SIZE(nvmeq->q_depth));
else
dma_free_coherent(nvmeq->q_dmadev,
SQ_SIZE(nvmeq->q_depth),
nvmeq->sq_cmds,
nvmeq->sq_dma_addr);
}
}
static void nvme_free_queues(struct nvme_dev *dev, int lowest)
@ -1323,12 +1337,21 @@ static int nvme_cmb_qdepth(struct nvme_dev *dev, int nr_io_queues,
static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
int qid, int depth)
{
/* CMB SQEs will be mapped before creation */
if (qid && dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS))
return 0;
struct pci_dev *pdev = to_pci_dev(dev->dev);
if (qid && dev->cmb_use_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth));
nvmeq->sq_dma_addr = pci_p2pmem_virt_to_bus(pdev,
nvmeq->sq_cmds);
nvmeq->sq_cmds_is_io = true;
}
if (!nvmeq->sq_cmds) {
nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
&nvmeq->sq_dma_addr, GFP_KERNEL);
nvmeq->sq_cmds_is_io = false;
}
nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
&nvmeq->sq_dma_addr, GFP_KERNEL);
if (!nvmeq->sq_cmds)
return -ENOMEM;
return 0;
@ -1405,13 +1428,6 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
int result;
s16 vector;
if (dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
unsigned offset = (qid - 1) * roundup(SQ_SIZE(nvmeq->q_depth),
dev->ctrl.page_size);
nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
nvmeq->sq_cmds_io = dev->cmb + offset;
}
/*
* A queue's vector matches the queue identifier unless the controller
* has only one vector available.
@ -1652,9 +1668,6 @@ static void nvme_map_cmb(struct nvme_dev *dev)
return;
dev->cmbloc = readl(dev->bar + NVME_REG_CMBLOC);
if (!use_cmb_sqes)
return;
size = nvme_cmb_size_unit(dev) * nvme_cmb_size(dev);
offset = nvme_cmb_size_unit(dev) * NVME_CMB_OFST(dev->cmbloc);
bar = NVME_CMB_BIR(dev->cmbloc);
@ -1671,11 +1684,18 @@ static void nvme_map_cmb(struct nvme_dev *dev)
if (size > bar_size - offset)
size = bar_size - offset;
dev->cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size);
if (!dev->cmb)
if (pci_p2pdma_add_resource(pdev, bar, size, offset)) {
dev_warn(dev->ctrl.device,
"failed to register the CMB\n");
return;
dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset;
}
dev->cmb_size = size;
dev->cmb_use_sqes = use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS);
if ((dev->cmbsz & (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) ==
(NVME_CMBSZ_WDS | NVME_CMBSZ_RDS))
pci_p2pmem_publish(pdev, true);
if (sysfs_add_file_to_group(&dev->ctrl.device->kobj,
&dev_attr_cmb.attr, NULL))
@ -1685,12 +1705,10 @@ static void nvme_map_cmb(struct nvme_dev *dev)
static inline void nvme_release_cmb(struct nvme_dev *dev)
{
if (dev->cmb) {
iounmap(dev->cmb);
dev->cmb = NULL;
if (dev->cmb_size) {
sysfs_remove_file_from_group(&dev->ctrl.device->kobj,
&dev_attr_cmb.attr, NULL);
dev->cmbsz = 0;
dev->cmb_size = 0;
}
}
@ -1889,13 +1907,13 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
if (nr_io_queues == 0)
return 0;
if (dev->cmb && (dev->cmbsz & NVME_CMBSZ_SQS)) {
if (dev->cmb_use_sqes) {
result = nvme_cmb_qdepth(dev, nr_io_queues,
sizeof(struct nvme_command));
if (result > 0)
dev->q_depth = result;
else
nvme_release_cmb(dev);
dev->cmb_use_sqes = false;
}
do {
@ -2390,7 +2408,8 @@ static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
.name = "pcie",
.module = THIS_MODULE,
.flags = NVME_F_METADATA_SUPPORTED,
.flags = NVME_F_METADATA_SUPPORTED |
NVME_F_PCI_P2PDMA,
.reg_read32 = nvme_pci_reg_read32,
.reg_write32 = nvme_pci_reg_write32,
.reg_read64 = nvme_pci_reg_read64,
@ -2648,7 +2667,6 @@ static void nvme_error_resume(struct pci_dev *pdev)
struct nvme_dev *dev = pci_get_drvdata(pdev);
flush_work(&dev->ctrl.reset_work);
pci_cleanup_aer_uncorrect_error_status(pdev);
}
static const struct pci_error_handlers nvme_err_handler = {


@ -17,6 +17,8 @@
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/ctype.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include "nvmet.h"
@ -340,6 +342,48 @@ out_unlock:
CONFIGFS_ATTR(nvmet_ns_, device_path);
#ifdef CONFIG_PCI_P2PDMA
static ssize_t nvmet_ns_p2pmem_show(struct config_item *item, char *page)
{
struct nvmet_ns *ns = to_nvmet_ns(item);
return pci_p2pdma_enable_show(page, ns->p2p_dev, ns->use_p2pmem);
}
static ssize_t nvmet_ns_p2pmem_store(struct config_item *item,
const char *page, size_t count)
{
struct nvmet_ns *ns = to_nvmet_ns(item);
struct pci_dev *p2p_dev = NULL;
bool use_p2pmem;
int ret = count;
int error;
mutex_lock(&ns->subsys->lock);
if (ns->enabled) {
ret = -EBUSY;
goto out_unlock;
}
error = pci_p2pdma_enable_store(page, &p2p_dev, &use_p2pmem);
if (error) {
ret = error;
goto out_unlock;
}
ns->use_p2pmem = use_p2pmem;
pci_dev_put(ns->p2p_dev);
ns->p2p_dev = p2p_dev;
out_unlock:
mutex_unlock(&ns->subsys->lock);
return ret;
}
CONFIGFS_ATTR(nvmet_ns_, p2pmem);
#endif /* CONFIG_PCI_P2PDMA */
static ssize_t nvmet_ns_device_uuid_show(struct config_item *item, char *page)
{
return sprintf(page, "%pUb\n", &to_nvmet_ns(item)->uuid);
@ -509,6 +553,9 @@ static struct configfs_attribute *nvmet_ns_attrs[] = {
&nvmet_ns_attr_ana_grpid,
&nvmet_ns_attr_enable,
&nvmet_ns_attr_buffered_io,
#ifdef CONFIG_PCI_P2PDMA
&nvmet_ns_attr_p2pmem,
#endif
NULL,
};


@ -15,6 +15,7 @@
#include <linux/module.h>
#include <linux/random.h>
#include <linux/rculist.h>
#include <linux/pci-p2pdma.h>
#include "nvmet.h"
@@ -365,9 +366,93 @@ static void nvmet_ns_dev_disable(struct nvmet_ns *ns)
nvmet_file_ns_disable(ns);
}
static int nvmet_p2pmem_ns_enable(struct nvmet_ns *ns)
{
int ret;
struct pci_dev *p2p_dev;
if (!ns->use_p2pmem)
return 0;
if (!ns->bdev) {
pr_err("peer-to-peer DMA is not supported by non-block device namespaces\n");
return -EINVAL;
}
if (!blk_queue_pci_p2pdma(ns->bdev->bd_queue)) {
pr_err("peer-to-peer DMA is not supported by the driver of %s\n",
ns->device_path);
return -EINVAL;
}
if (ns->p2p_dev) {
ret = pci_p2pdma_distance(ns->p2p_dev, nvmet_ns_dev(ns), true);
if (ret < 0)
return -EINVAL;
} else {
/*
* Right now we just check that p2pmem is available so we can
* report an error to the user right away if it is not. We'll
* find the actual device to use once we set up the
* controller, when the port's device is available.
*/
p2p_dev = pci_p2pmem_find(nvmet_ns_dev(ns));
if (!p2p_dev) {
pr_err("no peer-to-peer memory is available for %s\n",
ns->device_path);
return -EINVAL;
}
pci_dev_put(p2p_dev);
}
return 0;
}
/*
* Note: ctrl->subsys->lock should be held when calling this function
*/
static void nvmet_p2pmem_ns_add_p2p(struct nvmet_ctrl *ctrl,
struct nvmet_ns *ns)
{
struct device *clients[2];
struct pci_dev *p2p_dev;
int ret;
if (!ctrl->p2p_client)
return;
if (ns->p2p_dev) {
ret = pci_p2pdma_distance(ns->p2p_dev, ctrl->p2p_client, true);
if (ret < 0)
return;
p2p_dev = pci_dev_get(ns->p2p_dev);
} else {
clients[0] = ctrl->p2p_client;
clients[1] = nvmet_ns_dev(ns);
p2p_dev = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
if (!p2p_dev) {
pr_err("no peer-to-peer memory is available that's supported by %s and %s\n",
dev_name(ctrl->p2p_client), ns->device_path);
return;
}
}
ret = radix_tree_insert(&ctrl->p2p_ns_map, ns->nsid, p2p_dev);
if (ret < 0)
pci_dev_put(p2p_dev);
pr_info("using p2pmem on %s for nsid %d\n", pci_name(p2p_dev),
ns->nsid);
}
int nvmet_ns_enable(struct nvmet_ns *ns)
{
struct nvmet_subsys *subsys = ns->subsys;
struct nvmet_ctrl *ctrl;
int ret;
mutex_lock(&subsys->lock);
@@ -384,6 +469,13 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
if (ret)
goto out_unlock;
ret = nvmet_p2pmem_ns_enable(ns);
if (ret)
goto out_unlock;
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
nvmet_p2pmem_ns_add_p2p(ctrl, ns);
ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
0, GFP_KERNEL);
if (ret)
@@ -418,6 +510,9 @@ out_unlock:
mutex_unlock(&subsys->lock);
return ret;
out_dev_put:
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
nvmet_ns_dev_disable(ns);
goto out_unlock;
}
@@ -425,6 +520,7 @@ out_dev_put:
void nvmet_ns_disable(struct nvmet_ns *ns)
{
struct nvmet_subsys *subsys = ns->subsys;
struct nvmet_ctrl *ctrl;
mutex_lock(&subsys->lock);
if (!ns->enabled)
@@ -434,6 +530,10 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
list_del_rcu(&ns->dev_link);
if (ns->nsid == subsys->max_nsid)
subsys->max_nsid = nvmet_max_nsid(subsys);
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
mutex_unlock(&subsys->lock);
/*
@@ -450,6 +550,7 @@ void nvmet_ns_disable(struct nvmet_ns *ns)
percpu_ref_exit(&ns->ref);
mutex_lock(&subsys->lock);
subsys->nr_namespaces--;
nvmet_ns_changed(subsys, ns->nsid);
nvmet_ns_dev_disable(ns);
@@ -725,6 +826,51 @@ void nvmet_req_execute(struct nvmet_req *req)
}
EXPORT_SYMBOL_GPL(nvmet_req_execute);
int nvmet_req_alloc_sgl(struct nvmet_req *req)
{
struct pci_dev *p2p_dev = NULL;
if (IS_ENABLED(CONFIG_PCI_P2PDMA)) {
if (req->sq->ctrl && req->ns)
p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map,
req->ns->nsid);
req->p2p_dev = NULL;
if (req->sq->qid && p2p_dev) {
req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt,
req->transfer_len);
if (req->sg) {
req->p2p_dev = p2p_dev;
return 0;
}
}
/*
* If no P2P memory was available, we fall back to using
* regular memory.
*/
}
req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt);
if (!req->sg)
return -ENOMEM;
return 0;
}
EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl);
void nvmet_req_free_sgl(struct nvmet_req *req)
{
if (req->p2p_dev)
pci_p2pmem_free_sgl(req->p2p_dev, req->sg);
else
sgl_free(req->sg);
req->sg = NULL;
req->sg_cnt = 0;
}
EXPORT_SYMBOL_GPL(nvmet_req_free_sgl);
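nvmet_req_alloc_sgl()/nvmet_req_free_sgl() define the contract a fabrics transport is expected to follow, and the RDMA conversion later in this section does exactly this: point req->p2p_client at the device that will DMA the data before nvmet_req_init(), so controller setup can pick compatible p2pmem, then let the pair above choose and release the right pool. A hypothetical transport's mapping path, sketched (the function name is illustrative only):

static u16 example_transport_map_data(struct nvmet_req *req, u32 len)
{
	/* req->p2p_client was set before nvmet_req_init() */
	req->transfer_len = len;
	if (!len)
		return 0;

	if (nvmet_req_alloc_sgl(req) < 0)
		return NVME_SC_INTERNAL;

	/* ... hand req->sg / req->sg_cnt to the transport's DMA engine;
	 * release with nvmet_req_free_sgl() when the I/O completes ... */
	return 0;
}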
static inline bool nvmet_cc_en(u32 cc)
{
return (cc >> NVME_CC_EN_SHIFT) & 0x1;
@@ -921,6 +1067,37 @@ bool nvmet_host_allowed(struct nvmet_req *req, struct nvmet_subsys *subsys,
return __nvmet_host_allowed(subsys, hostnqn);
}
/*
* Note: ctrl->subsys->lock should be held when calling this function
*/
static void nvmet_setup_p2p_ns_map(struct nvmet_ctrl *ctrl,
struct nvmet_req *req)
{
struct nvmet_ns *ns;
if (!req->p2p_client)
return;
ctrl->p2p_client = get_device(req->p2p_client);
list_for_each_entry_rcu(ns, &ctrl->subsys->namespaces, dev_link)
nvmet_p2pmem_ns_add_p2p(ctrl, ns);
}
/*
* Note: ctrl->subsys->lock should be held when calling this function
*/
static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl)
{
struct radix_tree_iter iter;
void __rcu **slot;
radix_tree_for_each_slot(slot, &ctrl->p2p_ns_map, &iter, 0)
pci_dev_put(radix_tree_deref_slot(slot));
put_device(ctrl->p2p_client);
}
u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
{
@@ -962,6 +1139,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
INIT_LIST_HEAD(&ctrl->async_events);
INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
@@ -1026,6 +1204,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
mutex_lock(&subsys->lock);
list_add_tail(&ctrl->subsys_entry, &subsys->ctrls);
nvmet_setup_p2p_ns_map(ctrl, req);
mutex_unlock(&subsys->lock);
*ctrlp = ctrl;
@@ -1053,6 +1232,7 @@ static void nvmet_ctrl_free(struct kref *ref)
struct nvmet_subsys *subsys = ctrl->subsys;
mutex_lock(&subsys->lock);
nvmet_release_p2p_ns_map(ctrl);
list_del(&ctrl->subsys_entry);
mutex_unlock(&subsys->lock);


@@ -78,6 +78,9 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
op = REQ_OP_READ;
}
if (is_pci_p2pdma_page(sg_page(req->sg)))
op_flags |= REQ_NOMERGE;
sector = le64_to_cpu(req->cmd->rw.slba);
sector <<= (req->ns->blksize_shift - 9);


@@ -26,6 +26,7 @@
#include <linux/configfs.h>
#include <linux/rcupdate.h>
#include <linux/blkdev.h>
#include <linux/radix-tree.h>
#define NVMET_ASYNC_EVENTS 4
#define NVMET_ERROR_LOG_SLOTS 128
@@ -77,6 +78,9 @@ struct nvmet_ns {
struct completion disable_done;
mempool_t *bvec_pool;
struct kmem_cache *bvec_cache;
int use_p2pmem;
struct pci_dev *p2p_dev;
};
static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
@@ -84,6 +88,11 @@ static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item)
return container_of(to_config_group(item), struct nvmet_ns, group);
}
static inline struct device *nvmet_ns_dev(struct nvmet_ns *ns)
{
return ns->bdev ? disk_to_dev(ns->bdev->bd_disk) : NULL;
}
struct nvmet_cq {
u16 qid;
u16 size;
@@ -184,6 +193,9 @@ struct nvmet_ctrl {
char subsysnqn[NVMF_NQN_FIELD_LEN];
char hostnqn[NVMF_NQN_FIELD_LEN];
struct device *p2p_client;
struct radix_tree_root p2p_ns_map;
};
struct nvmet_subsys {
@@ -295,6 +307,9 @@ struct nvmet_req {
void (*execute)(struct nvmet_req *req);
const struct nvmet_fabrics_ops *ops;
struct pci_dev *p2p_dev;
struct device *p2p_client;
};
extern struct workqueue_struct *buffered_io_wq;
@@ -337,6 +352,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
void nvmet_req_uninit(struct nvmet_req *req);
void nvmet_req_execute(struct nvmet_req *req);
void nvmet_req_complete(struct nvmet_req *req, u16 status);
int nvmet_req_alloc_sgl(struct nvmet_req *req);
void nvmet_req_free_sgl(struct nvmet_req *req);
void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid,
u16 size);


@@ -504,7 +504,7 @@ static void nvmet_rdma_release_rsp(struct nvmet_rdma_rsp *rsp)
}
if (rsp->req.sg != rsp->cmd->inline_sg)
sgl_free(rsp->req.sg);
nvmet_req_free_sgl(&rsp->req);
if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list)))
nvmet_rdma_process_wr_wait_list(queue);
@@ -653,24 +653,24 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
{
struct rdma_cm_id *cm_id = rsp->queue->cm_id;
u64 addr = le64_to_cpu(sgl->addr);
u32 len = get_unaligned_le24(sgl->length);
u32 key = get_unaligned_le32(sgl->key);
int ret;
rsp->req.transfer_len = get_unaligned_le24(sgl->length);
/* no data command? */
if (!len)
if (!rsp->req.transfer_len)
return 0;
rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt);
if (!rsp->req.sg)
return NVME_SC_INTERNAL;
ret = nvmet_req_alloc_sgl(&rsp->req);
if (ret < 0)
goto error_out;
ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,
nvmet_data_dir(&rsp->req));
if (ret < 0)
return NVME_SC_INTERNAL;
rsp->req.transfer_len += len;
goto error_out;
rsp->n_rdma += ret;
if (invalidate) {
@@ -679,6 +679,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
}
return 0;
error_out:
rsp->req.transfer_len = 0;
return NVME_SC_INTERNAL;
}
static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp)
@@ -746,6 +750,8 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
cmd->send_sge.addr, cmd->send_sge.length,
DMA_TO_DEVICE);
cmd->req.p2p_client = &queue->dev->device->dev;
if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
&queue->nvme_sq, &nvmet_rdma_ops))
return;


@@ -98,6 +98,9 @@ config PCI_ECAM
config PCI_LOCKLESS_CONFIG
bool
config PCI_BRIDGE_EMUL
bool
config PCI_IOV
bool "PCI IOV support"
depends on PCI
@@ -132,6 +135,23 @@ config PCI_PASID
If unsure, say N.
config PCI_P2PDMA
bool "PCI peer-to-peer transfer support"
depends on PCI && ZONE_DEVICE
select GENERIC_ALLOCATOR
help
Enables drivers to do PCI peer-to-peer transactions to and from
BARs that are exposed in other devices that are part of the
hierarchy where peer-to-peer DMA is guaranteed by the PCI
specification to work (i.e. anything below a single PCI bridge).
Many PCIe root complexes do not support P2P transactions and
it's hard to tell which ones support it at all, so at this time
P2P DMA transactions must be between devices behind the same
root port.
If unsure, say N.
config PCI_LABEL
def_bool y if (DMI || ACPI)
depends on PCI


@@ -19,6 +19,7 @@ obj-$(CONFIG_HOTPLUG_PCI) += hotplug/
obj-$(CONFIG_PCI_MSI) += msi.o
obj-$(CONFIG_PCI_ATS) += ats.o
obj-$(CONFIG_PCI_IOV) += iov.o
obj-$(CONFIG_PCI_BRIDGE_EMUL) += pci-bridge-emul.o
obj-$(CONFIG_ACPI) += pci-acpi.o
obj-$(CONFIG_PCI_LABEL) += pci-label.o
obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o
@@ -26,6 +27,7 @@ obj-$(CONFIG_PCI_SYSCALL) += syscall.o
obj-$(CONFIG_PCI_STUB) += pci-stub.o
obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o
obj-$(CONFIG_PCI_ECAM) += ecam.o
obj-$(CONFIG_PCI_P2PDMA) += p2pdma.o
obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
# Endpoint library must be initialized before its users


@@ -33,7 +33,7 @@ DEFINE_RAW_SPINLOCK(pci_lock);
#endif
#define PCI_OP_READ(size, type, len) \
int pci_bus_read_config_##size \
int noinline pci_bus_read_config_##size \
(struct pci_bus *bus, unsigned int devfn, int pos, type *value) \
{ \
int res; \
@@ -48,7 +48,7 @@ int pci_bus_read_config_##size \
}
#define PCI_OP_WRITE(size, type, len) \
int pci_bus_write_config_##size \
int noinline pci_bus_write_config_##size \
(struct pci_bus *bus, unsigned int devfn, int pos, type value) \
{ \
int res; \
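The noinline annotations are the whole change here: the config accessors keep their behavior but are now guaranteed to exist as out-of-line symbols that ftrace can attach to. As a rough sketch, with the locking in the (unchanged, elided) macro body omitted, PCI_OP_READ(word, u16, 2) now defines something like:

int noinline pci_bus_read_config_word(struct pci_bus *bus,
				      unsigned int devfn, int pos,
				      u16 *value)
{
	u32 data = 0;
	int res;

	/* pci_lock handling elided -- see the macro body above */
	res = bus->ops->read(bus, devfn, pos, 2, &data);
	*value = (u16)data;
	return res;
}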


@@ -9,12 +9,14 @@ config PCI_MVEBU
depends on MVEBU_MBUS
depends on ARM
depends on OF
select PCI_BRIDGE_EMUL
config PCI_AARDVARK
bool "Aardvark PCIe controller"
depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
select PCI_BRIDGE_EMUL
help
Add support for Aardvark 64bit PCIe Host Controller. This
controller is part of the South Bridge of the Marvell Armada
@@ -231,7 +233,7 @@ config PCIE_ROCKCHIP_EP
available to support GEN2 with 4 slots.
config PCIE_MEDIATEK
bool "MediaTek PCIe controller"
tristate "MediaTek PCIe controller"
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN


@@ -7,7 +7,7 @@ obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o
obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o


@@ -542,7 +542,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
};
/*
* dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
* dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
* @dra7xx: the dra7xx device where the workaround should be applied
*
* Accesses to the PCIe slave port that are not 32-bit aligned will result
@@ -552,7 +552,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
*
* To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
*/
static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
{
int ret;
struct device_node *np = dev->of_node;
@@ -704,6 +712,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
DEVICE_TYPE_RC);
ret = dra7xx_pcie_unaligned_memaccess(dev);
if (ret)
dev_err(dev, "WA for Errata i870 not applied\n");
ret = dra7xx_add_pcie_port(dra7xx, pdev);
if (ret < 0)
goto err_gpio;
@@ -717,7 +722,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
DEVICE_TYPE_EP);
ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
ret = dra7xx_pcie_unaligned_memaccess(dev);
if (ret)
goto err_gpio;


@@ -50,6 +50,7 @@ struct imx6_pcie {
struct regmap *iomuxc_gpr;
struct reset_control *pciephy_reset;
struct reset_control *apps_reset;
struct reset_control *turnoff_reset;
enum imx6_pcie_variants variant;
u32 tx_deemph_gen1;
u32 tx_deemph_gen2_3p5db;
@@ -97,6 +98,16 @@ struct imx6_pcie {
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
/* PHY registers (not memory-mapped) */
#define PCIE_PHY_ATEOVRD 0x10
#define PCIE_PHY_ATEOVRD_EN (0x1 << 2)
#define PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT 0
#define PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK 0x1
#define PCIE_PHY_MPLL_OVRD_IN_LO 0x11
#define PCIE_PHY_MPLL_MULTIPLIER_SHIFT 2
#define PCIE_PHY_MPLL_MULTIPLIER_MASK 0x7f
#define PCIE_PHY_MPLL_MULTIPLIER_OVRD (0x1 << 9)
#define PCIE_PHY_RX_ASIC_OUT 0x100D
#define PCIE_PHY_RX_ASIC_OUT_VALID (1 << 0)
@@ -508,6 +519,50 @@ static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie)
IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12);
}
static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie)
{
unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy);
int mult, div;
u32 val;
switch (phy_rate) {
case 125000000:
/*
* The default settings of the MPLL are for a 125MHz input
* clock, so no need to reconfigure anything in that case.
*/
return 0;
case 100000000:
mult = 25;
div = 0;
break;
case 200000000:
mult = 25;
div = 1;
break;
default:
dev_err(imx6_pcie->pci->dev,
"Unsupported PHY reference clock rate %lu\n", phy_rate);
return -EINVAL;
}
pcie_phy_read(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, &val);
val &= ~(PCIE_PHY_MPLL_MULTIPLIER_MASK <<
PCIE_PHY_MPLL_MULTIPLIER_SHIFT);
val |= mult << PCIE_PHY_MPLL_MULTIPLIER_SHIFT;
val |= PCIE_PHY_MPLL_MULTIPLIER_OVRD;
pcie_phy_write(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, val);
pcie_phy_read(imx6_pcie, PCIE_PHY_ATEOVRD, &val);
val &= ~(PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK <<
PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT);
val |= div << PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT;
val |= PCIE_PHY_ATEOVRD_EN;
pcie_phy_write(imx6_pcie, PCIE_PHY_ATEOVRD, val);
return 0;
}
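A quick sanity check on the mult/div table above, under the assumption (not stated in the driver) that the MPLL output follows the usual rate_out = ref / (div + 1) * mult relationship:

	100 MHz / (0 + 1) * 25 = 2500 MHz
	200 MHz / (1 + 1) * 25 = 2500 MHz

Both supported reference clocks are therefore steered to the same 2.5 GHz MPLL rate that the 125 MHz power-on default already produces, which is why the 125 MHz case returns early without overriding anything.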
static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie)
{
struct dw_pcie *pci = imx6_pcie->pci;
@@ -542,6 +597,24 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
return -EINVAL;
}
static void imx6_pcie_ltssm_enable(struct device *dev)
{
struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
switch (imx6_pcie->variant) {
case IMX6Q:
case IMX6SX:
case IMX6QP:
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2,
IMX6Q_GPR12_PCIE_CTL_2);
break;
case IMX7D:
reset_control_deassert(imx6_pcie->apps_reset);
break;
}
}
static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
{
struct dw_pcie *pci = imx6_pcie->pci;
@@ -560,11 +633,7 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp);
/* Start LTSSM. */
if (imx6_pcie->variant == IMX7D)
reset_control_deassert(imx6_pcie->apps_reset);
else
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
imx6_pcie_ltssm_enable(dev);
ret = imx6_pcie_wait_for_link(imx6_pcie);
if (ret)
@@ -632,6 +701,7 @@ static int imx6_pcie_host_init(struct pcie_port *pp)
imx6_pcie_assert_core_reset(imx6_pcie);
imx6_pcie_init_phy(imx6_pcie);
imx6_pcie_deassert_core_reset(imx6_pcie);
imx6_setup_phy_mpll(imx6_pcie);
dw_pcie_setup_rc(pp);
imx6_pcie_establish_link(imx6_pcie);
@@ -682,6 +752,94 @@ static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = imx6_pcie_link_up,
};
#ifdef CONFIG_PM_SLEEP
static void imx6_pcie_ltssm_disable(struct device *dev)
{
struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
switch (imx6_pcie->variant) {
case IMX6SX:
case IMX6QP:
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6Q_GPR12_PCIE_CTL_2, 0);
break;
case IMX7D:
reset_control_assert(imx6_pcie->apps_reset);
break;
default:
dev_err(dev, "ltssm_disable not supported\n");
}
}
static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie)
{
reset_control_assert(imx6_pcie->turnoff_reset);
reset_control_deassert(imx6_pcie->turnoff_reset);
/*
* Components with an upstream port must respond to
* PME_Turn_Off with PME_TO_Ack but we can't check.
*
* The standard recommends a 1-10ms timeout after which to
* proceed anyway as if acks were received.
*/
usleep_range(1000, 10000);
}
static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie)
{
clk_disable_unprepare(imx6_pcie->pcie);
clk_disable_unprepare(imx6_pcie->pcie_phy);
clk_disable_unprepare(imx6_pcie->pcie_bus);
if (imx6_pcie->variant == IMX7D) {
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
}
}
static int imx6_pcie_suspend_noirq(struct device *dev)
{
struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
if (imx6_pcie->variant != IMX7D)
return 0;
imx6_pcie_pm_turnoff(imx6_pcie);
imx6_pcie_clk_disable(imx6_pcie);
imx6_pcie_ltssm_disable(dev);
return 0;
}
static int imx6_pcie_resume_noirq(struct device *dev)
{
int ret;
struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
struct pcie_port *pp = &imx6_pcie->pci->pp;
if (imx6_pcie->variant != IMX7D)
return 0;
imx6_pcie_assert_core_reset(imx6_pcie);
imx6_pcie_init_phy(imx6_pcie);
imx6_pcie_deassert_core_reset(imx6_pcie);
dw_pcie_setup_rc(pp);
ret = imx6_pcie_establish_link(imx6_pcie);
if (ret < 0)
dev_info(dev, "pcie link is down after resume.\n");
return 0;
}
#endif
static const struct dev_pm_ops imx6_pcie_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx6_pcie_suspend_noirq,
imx6_pcie_resume_noirq)
};
static int imx6_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -776,6 +934,13 @@ static int imx6_pcie_probe(struct platform_device *pdev)
break;
}
/* Grab turnoff reset */
imx6_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff");
if (IS_ERR(imx6_pcie->turnoff_reset)) {
dev_err(dev, "Failed to get TURNOFF reset control\n");
return PTR_ERR(imx6_pcie->turnoff_reset);
}
/* Grab GPR config register range */
imx6_pcie->iomuxc_gpr =
syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
@@ -848,6 +1013,7 @@ static struct platform_driver imx6_pcie_driver = {
.name = "imx6q-pcie",
.of_match_table = imx6_pcie_of_match,
.suppress_bind_attrs = true,
.pm = &imx6_pcie_pm_ops,
},
.probe = imx6_pcie_probe,
.shutdown = imx6_pcie_shutdown,


@@ -1,484 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* DesignWare application register space functions for Keystone PCI controller
*
* Copyright (C) 2013-2014 Texas Instruments., Ltd.
* http://www.ti.com
*
* Author: Murali Karicheri <m-karicheri2@ti.com>
*/
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/irqreturn.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include "pcie-designware.h"
#include "pci-keystone.h"
/* Application register defines */
#define LTSSM_EN_VAL 1
#define LTSSM_STATE_MASK 0x1f
#define LTSSM_STATE_L0 0x11
#define DBI_CS2_EN_VAL 0x20
#define OB_XLAT_EN_VAL 2
/* Application registers */
#define CMD_STATUS 0x004
#define CFG_SETUP 0x008
#define OB_SIZE 0x030
#define CFG_PCIM_WIN_SZ_IDX 3
#define CFG_PCIM_WIN_CNT 32
#define SPACE0_REMOTE_CFG_OFFSET 0x1000
#define OB_OFFSET_INDEX(n) (0x200 + (8 * n))
#define OB_OFFSET_HI(n) (0x204 + (8 * n))
/* IRQ register defines */
#define IRQ_EOI 0x050
#define IRQ_STATUS 0x184
#define IRQ_ENABLE_SET 0x188
#define IRQ_ENABLE_CLR 0x18c
#define MSI_IRQ 0x054
#define MSI0_IRQ_STATUS 0x104
#define MSI0_IRQ_ENABLE_SET 0x108
#define MSI0_IRQ_ENABLE_CLR 0x10c
#define IRQ_STATUS 0x184
#define MSI_IRQ_OFFSET 4
/* Error IRQ bits */
#define ERR_AER BIT(5) /* ECRC error */
#define ERR_AXI BIT(4) /* AXI tag lookup fatal error */
#define ERR_CORR BIT(3) /* Correctable error */
#define ERR_NONFATAL BIT(2) /* Non-fatal error */
#define ERR_FATAL BIT(1) /* Fatal error */
#define ERR_SYS BIT(0) /* System (fatal, non-fatal, or correctable) */
#define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \
ERR_NONFATAL | ERR_FATAL | ERR_SYS)
#define ERR_FATAL_IRQ (ERR_FATAL | ERR_AXI)
#define ERR_IRQ_STATUS_RAW 0x1c0
#define ERR_IRQ_STATUS 0x1c4
#define ERR_IRQ_ENABLE_SET 0x1c8
#define ERR_IRQ_ENABLE_CLR 0x1cc
/* Config space registers */
#define DEBUG0 0x728
#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
u32 *bit_pos)
{
*reg_offset = offset % 8;
*bit_pos = offset >> 3;
}
phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
return ks_pcie->app.start + MSI_IRQ;
}
static u32 ks_dw_app_readl(struct keystone_pcie *ks_pcie, u32 offset)
{
return readl(ks_pcie->va_app_base + offset);
}
static void ks_dw_app_writel(struct keystone_pcie *ks_pcie, u32 offset, u32 val)
{
writel(val, ks_pcie->va_app_base + offset);
}
void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
u32 pending, vector;
int src, virq;
pending = ks_dw_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4));
/*
* MSI0 status bits 0-3 map to vectors 0, 8, 16, 24; MSI1 status
* bits map to vectors 1, 9, 17, 25, and so forth.
*/
for (src = 0; src < 4; src++) {
if (BIT(src) & pending) {
vector = offset + (src << 3);
virq = irq_linear_revmap(pp->irq_domain, vector);
dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n",
src, vector, virq);
generic_handle_irq(virq);
}
}
}
void ks_dw_pcie_msi_irq_ack(int irq, struct pcie_port *pp)
{
u32 reg_offset, bit_pos;
struct keystone_pcie *ks_pcie;
struct dw_pcie *pci;
pci = to_dw_pcie_from_pp(pp);
ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_dw_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4),
BIT(bit_pos));
ks_dw_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET);
}
void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
{
u32 reg_offset, bit_pos;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4),
BIT(bit_pos));
}
void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
{
u32 reg_offset, bit_pos;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4),
BIT(bit_pos));
}
int ks_dw_pcie_msi_host_init(struct pcie_port *pp)
{
return dw_pcie_allocate_domains(pp);
}
void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
{
int i;
for (i = 0; i < PCI_NUM_INTX; i++)
ks_dw_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1);
}
void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset)
{
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
u32 pending;
int virq;
pending = ks_dw_app_readl(ks_pcie, IRQ_STATUS + (offset << 4));
if (BIT(0) & pending) {
virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq);
generic_handle_irq(virq);
}
/* EOI the INTx interrupt */
ks_dw_app_writel(ks_pcie, IRQ_EOI, offset);
}
void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie)
{
ks_dw_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL);
}
irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie)
{
u32 status;
status = ks_dw_app_readl(ks_pcie, ERR_IRQ_STATUS_RAW) & ERR_IRQ_ALL;
if (!status)
return IRQ_NONE;
if (status & ERR_FATAL_IRQ)
dev_err(ks_pcie->pci->dev, "fatal error (status %#010x)\n",
status);
/* Ack the IRQ; status bits are RW1C */
ks_dw_app_writel(ks_pcie, ERR_IRQ_STATUS, status);
return IRQ_HANDLED;
}
static void ks_dw_pcie_ack_legacy_irq(struct irq_data *d)
{
}
static void ks_dw_pcie_mask_legacy_irq(struct irq_data *d)
{
}
static void ks_dw_pcie_unmask_legacy_irq(struct irq_data *d)
{
}
static struct irq_chip ks_dw_pcie_legacy_irq_chip = {
.name = "Keystone-PCI-Legacy-IRQ",
.irq_ack = ks_dw_pcie_ack_legacy_irq,
.irq_mask = ks_dw_pcie_mask_legacy_irq,
.irq_unmask = ks_dw_pcie_unmask_legacy_irq,
};
static int ks_dw_pcie_init_legacy_irq_map(struct irq_domain *d,
unsigned int irq, irq_hw_number_t hw_irq)
{
irq_set_chip_and_handler(irq, &ks_dw_pcie_legacy_irq_chip,
handle_level_irq);
irq_set_chip_data(irq, d->host_data);
return 0;
}
static const struct irq_domain_ops ks_dw_pcie_legacy_irq_domain_ops = {
.map = ks_dw_pcie_init_legacy_irq_map,
.xlate = irq_domain_xlate_onetwocell,
};
/**
* ks_dw_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask
* registers
*
* Since modification of dbi_cs2 involves a different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_dw_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
ks_dw_app_writel(ks_pcie, CMD_STATUS, DBI_CS2_EN_VAL | val);
do {
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
} while (!(val & DBI_CS2_EN_VAL));
}
/**
* ks_dw_pcie_clear_dbi_mode() - Disable DBI mode
*
* Since modification of dbi_cs2 involves a different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_dw_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
ks_dw_app_writel(ks_pcie, CMD_STATUS, ~DBI_CS2_EN_VAL & val);
do {
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
} while (val & DBI_CS2_EN_VAL);
}
void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 start = pp->mem->start, end = pp->mem->end;
int i, tr_size;
u32 val;
/* Disable BARs for inbound access */
ks_dw_pcie_set_dbi_mode(ks_pcie);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0);
ks_dw_pcie_clear_dbi_mode(ks_pcie);
/* Set outbound translation size per window division */
ks_dw_app_writel(ks_pcie, OB_SIZE, CFG_PCIM_WIN_SZ_IDX & 0x7);
tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M;
/* Using Direct 1:1 mapping of RC <-> PCI memory space */
for (i = 0; (i < CFG_PCIM_WIN_CNT) && (start < end); i++) {
ks_dw_app_writel(ks_pcie, OB_OFFSET_INDEX(i), start | 1);
ks_dw_app_writel(ks_pcie, OB_OFFSET_HI(i), 0);
start += tr_size;
}
/* Enable OB translation */
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
ks_dw_app_writel(ks_pcie, CMD_STATUS, OB_XLAT_EN_VAL | val);
}
/**
* ks_pcie_cfg_setup() - Set up configuration space address for a device
*
* @ks_pcie: ptr to keystone_pcie structure
* @bus: Bus number the device is residing on
* @devfn: device, function number info
*
* Forms and returns the address of configuration space mapped in PCIESS
* address space 0. Also configures CFG_SETUP for remote configuration space
* access.
*
* The address space has two regions for configuration access - local and
* remote. We use the local region for bus 0 (the RC is attached on bus 0)
* and the remote region for all others, with TYPE 1 access when bus > 1.
* Devices on bus 1 get TYPE 0 access, as bus 1 is our (logical) secondary bus.
* CFG_SETUP is needed only for remote configuration access.
*/
static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus,
unsigned int devfn)
{
u8 device = PCI_SLOT(devfn), function = PCI_FUNC(devfn);
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 regval;
if (bus == 0)
return pci->dbi_base;
regval = (bus << 16) | (device << 8) | function;
/*
* Since bus 1 will be a virtual (logical secondary) bus, it gets
* TYPE 0 access only; buses beyond it get TYPE 1.
*/
if (bus != 1)
regval |= BIT(24);
ks_dw_app_writel(ks_pcie, CFG_SETUP, regval);
return pp->va_cfg0_base;
}
int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u8 bus_num = bus->number;
void __iomem *addr;
addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
return dw_pcie_read(addr + where, size, val);
}
int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u8 bus_num = bus->number;
void __iomem *addr;
addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
return dw_pcie_write(addr + where, size, val);
}
/**
* ks_dw_pcie_v3_65_scan_bus() - keystone scan_bus post initialization
*
* This sets BAR0 to enable inbound access for MSI_IRQ register
*/
void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
/* Configure and set up BAR0 */
ks_dw_pcie_set_dbi_mode(ks_pcie);
/* Enable BAR0 */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
ks_dw_pcie_clear_dbi_mode(ks_pcie);
/*
* For BAR0, just setting bus address for inbound writes (MSI) should
* be sufficient. Use physical address to avoid any conflicts.
*/
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
}
/**
* ks_dw_pcie_link_up() - Check if link up
*/
int ks_dw_pcie_link_up(struct dw_pcie *pci)
{
u32 val;
val = dw_pcie_readl_dbi(pci, DEBUG0);
return (val & LTSSM_STATE_MASK) == LTSSM_STATE_L0;
}
void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
{
u32 val;
/* Disable Link training */
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
val &= ~LTSSM_EN_VAL;
ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
/* Initiate Link Training */
val = ks_dw_app_readl(ks_pcie, CMD_STATUS);
ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
}
/**
* ks_dw_pcie_host_init() - initialize host for v3_65 dw hardware
*
* Ioremap the register resources, initialize legacy irq domain
* and call dw_pcie_v3_65_host_init() API to initialize the Keystone
* PCI host controller.
*/
int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie,
struct device_node *msi_intc_np)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res;
/* Index 0 is the config reg. space address */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
/*
* va_cfg0_base and va_cfg1_base are set to the same address and are
* used by the pcie rd/wr_other_conf functions
*/
pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
pp->va_cfg1_base = pp->va_cfg0_base;
/* Index 1 is the application reg. space address */
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
ks_pcie->va_app_base = devm_ioremap_resource(dev, res);
if (IS_ERR(ks_pcie->va_app_base))
return PTR_ERR(ks_pcie->va_app_base);
ks_pcie->app = *res;
/* Create legacy IRQ domain */
ks_pcie->legacy_irq_domain =
irq_domain_add_linear(ks_pcie->legacy_intc_np,
PCI_NUM_INTX,
&ks_dw_pcie_legacy_irq_domain_ops,
NULL);
if (!ks_pcie->legacy_irq_domain) {
dev_err(dev, "Failed to add irq domain for legacy irqs\n");
return -EINVAL;
}
return dw_pcie_host_init(pp);
}


@@ -9,40 +9,510 @@
* Implementation based on pci-exynos.c and pcie-designware.c
*/
#include <linux/irqchip/chained_irq.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/mfd/syscon.h>
#include <linux/msi.h>
#include <linux/of_irq.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/platform_device.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include "pcie-designware.h"
#include "pci-keystone.h"
#define DRIVER_NAME "keystone-pcie"
#define PCIE_VENDORID_MASK 0xffff
#define PCIE_DEVICEID_SHIFT 16
/* DEV_STAT_CTRL */
#define PCIE_CAP_BASE 0x70
/* Application registers */
#define CMD_STATUS 0x004
#define LTSSM_EN_VAL BIT(0)
#define OB_XLAT_EN_VAL BIT(1)
#define DBI_CS2 BIT(5)
#define CFG_SETUP 0x008
#define CFG_BUS(x) (((x) & 0xff) << 16)
#define CFG_DEVICE(x) (((x) & 0x1f) << 8)
#define CFG_FUNC(x) ((x) & 0x7)
#define CFG_TYPE1 BIT(24)
#define OB_SIZE 0x030
#define SPACE0_REMOTE_CFG_OFFSET 0x1000
#define OB_OFFSET_INDEX(n) (0x200 + (8 * (n)))
#define OB_OFFSET_HI(n) (0x204 + (8 * (n)))
#define OB_ENABLEN BIT(0)
#define OB_WIN_SIZE 8 /* 8MB */
/* IRQ register defines */
#define IRQ_EOI 0x050
#define IRQ_STATUS 0x184
#define IRQ_ENABLE_SET 0x188
#define IRQ_ENABLE_CLR 0x18c
#define MSI_IRQ 0x054
#define MSI0_IRQ_STATUS 0x104
#define MSI0_IRQ_ENABLE_SET 0x108
#define MSI0_IRQ_ENABLE_CLR 0x10c
#define IRQ_STATUS 0x184
#define MSI_IRQ_OFFSET 4
#define ERR_IRQ_STATUS 0x1c4
#define ERR_IRQ_ENABLE_SET 0x1c8
#define ERR_AER BIT(5) /* ECRC error */
#define ERR_AXI BIT(4) /* AXI tag lookup fatal error */
#define ERR_CORR BIT(3) /* Correctable error */
#define ERR_NONFATAL BIT(2) /* Non-fatal error */
#define ERR_FATAL BIT(1) /* Fatal error */
#define ERR_SYS BIT(0) /* System error */
#define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \
ERR_NONFATAL | ERR_FATAL | ERR_SYS)
#define MAX_MSI_HOST_IRQS 8
/* PCIE controller device IDs */
#define PCIE_RC_K2HK 0xb008
#define PCIE_RC_K2E 0xb009
#define PCIE_RC_K2L 0xb00a
#define PCIE_RC_K2HK 0xb008
#define PCIE_RC_K2E 0xb009
#define PCIE_RC_K2L 0xb00a
#define PCIE_RC_K2G 0xb00b
#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
#define to_keystone_pcie(x) dev_get_drvdata((x)->dev)
static void quirk_limit_mrrs(struct pci_dev *dev)
struct keystone_pcie {
struct dw_pcie *pci;
/* PCI Device ID */
u32 device_id;
int num_legacy_host_irqs;
int legacy_host_irqs[PCI_NUM_INTX];
struct device_node *legacy_intc_np;
int num_msi_host_irqs;
int msi_host_irqs[MAX_MSI_HOST_IRQS];
int num_lanes;
u32 num_viewport;
struct phy **phy;
struct device_link **link;
struct device_node *msi_intc_np;
struct irq_domain *legacy_irq_domain;
struct device_node *np;
int error_irq;
/* Application register space */
void __iomem *va_app_base; /* DT 1st resource */
struct resource app;
};
static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset,
u32 *bit_pos)
{
*reg_offset = offset % 8;
*bit_pos = offset >> 3;
}
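Worked example, derived only from the helper above and the decode in ks_pcie_handle_msi_irq() below: MSI vector 13 gives

	reg_offset = 13 % 8 = 5,  bit_pos = 13 >> 3 = 1

i.e. bit 1 of the MSI5 register bank; the handler inverts this when it sees offset 5 with pending bit src = 1:

	vector = 5 + (1 << 3) = 13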
static phys_addr_t ks_pcie_get_msi_addr(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
return ks_pcie->app.start + MSI_IRQ;
}
static u32 ks_pcie_app_readl(struct keystone_pcie *ks_pcie, u32 offset)
{
return readl(ks_pcie->va_app_base + offset);
}
static void ks_pcie_app_writel(struct keystone_pcie *ks_pcie, u32 offset,
u32 val)
{
writel(val, ks_pcie->va_app_base + offset);
}
static void ks_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
u32 pending, vector;
int src, virq;
pending = ks_pcie_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4));
/*
* MSI0 status bits 0-3 map to vectors 0, 8, 16, 24; MSI1 status
* bits map to vectors 1, 9, 17, 25, and so forth.
*/
for (src = 0; src < 4; src++) {
if (BIT(src) & pending) {
vector = offset + (src << 3);
virq = irq_linear_revmap(pp->irq_domain, vector);
dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n",
src, vector, virq);
generic_handle_irq(virq);
}
}
}
static void ks_pcie_msi_irq_ack(int irq, struct pcie_port *pp)
{
u32 reg_offset, bit_pos;
struct keystone_pcie *ks_pcie;
struct dw_pcie *pci;
pci = to_dw_pcie_from_pp(pp);
ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_pcie_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4),
BIT(bit_pos));
ks_pcie_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET);
}
static void ks_pcie_msi_set_irq(struct pcie_port *pp, int irq)
{
u32 reg_offset, bit_pos;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4),
BIT(bit_pos));
}
static void ks_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
{
u32 reg_offset, bit_pos;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4),
BIT(bit_pos));
}
static int ks_pcie_msi_host_init(struct pcie_port *pp)
{
return dw_pcie_allocate_domains(pp);
}
static void ks_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
{
int i;
for (i = 0; i < PCI_NUM_INTX; i++)
ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1);
}
static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie,
int offset)
{
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
u32 pending;
int virq;
pending = ks_pcie_app_readl(ks_pcie, IRQ_STATUS + (offset << 4));
if (BIT(0) & pending) {
virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset);
dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq);
generic_handle_irq(virq);
}
/* EOI the INTx interrupt */
ks_pcie_app_writel(ks_pcie, IRQ_EOI, offset);
}
static void ks_pcie_enable_error_irq(struct keystone_pcie *ks_pcie)
{
ks_pcie_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL);
}
static irqreturn_t ks_pcie_handle_error_irq(struct keystone_pcie *ks_pcie)
{
u32 reg;
struct device *dev = ks_pcie->pci->dev;
reg = ks_pcie_app_readl(ks_pcie, ERR_IRQ_STATUS);
if (!reg)
return IRQ_NONE;
if (reg & ERR_SYS)
dev_err(dev, "System Error\n");
if (reg & ERR_FATAL)
dev_err(dev, "Fatal Error\n");
if (reg & ERR_NONFATAL)
dev_dbg(dev, "Non Fatal Error\n");
if (reg & ERR_CORR)
dev_dbg(dev, "Correctable Error\n");
if (reg & ERR_AXI)
dev_err(dev, "AXI tag lookup fatal Error\n");
if (reg & ERR_AER)
dev_err(dev, "ECRC Error\n");
ks_pcie_app_writel(ks_pcie, ERR_IRQ_STATUS, reg);
return IRQ_HANDLED;
}
static void ks_pcie_ack_legacy_irq(struct irq_data *d)
{
}
static void ks_pcie_mask_legacy_irq(struct irq_data *d)
{
}
static void ks_pcie_unmask_legacy_irq(struct irq_data *d)
{
}
static struct irq_chip ks_pcie_legacy_irq_chip = {
.name = "Keystone-PCI-Legacy-IRQ",
.irq_ack = ks_pcie_ack_legacy_irq,
.irq_mask = ks_pcie_mask_legacy_irq,
.irq_unmask = ks_pcie_unmask_legacy_irq,
};
static int ks_pcie_init_legacy_irq_map(struct irq_domain *d,
unsigned int irq,
irq_hw_number_t hw_irq)
{
irq_set_chip_and_handler(irq, &ks_pcie_legacy_irq_chip,
handle_level_irq);
irq_set_chip_data(irq, d->host_data);
return 0;
}
static const struct irq_domain_ops ks_pcie_legacy_irq_domain_ops = {
.map = ks_pcie_init_legacy_irq_map,
.xlate = irq_domain_xlate_onetwocell,
};
/**
* ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask
* registers
*
* Since modification of dbi_cs2 involves a different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val |= DBI_CS2;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
do {
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
} while (!(val & DBI_CS2));
}
/**
* ks_pcie_clear_dbi_mode() - Disable DBI mode
*
* Since modification of dbi_cs2 involves a different clock domain, read the
* status back to ensure the transition is complete.
*/
static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie)
{
u32 val;
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val &= ~DBI_CS2;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
do {
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
} while (val & DBI_CS2);
}
static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
{
u32 val;
u32 num_viewport = ks_pcie->num_viewport;
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
u64 start = pp->mem->start;
u64 end = pp->mem->end;
int i;
/* Disable BARs for inbound access */
ks_pcie_set_dbi_mode(ks_pcie);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0);
ks_pcie_clear_dbi_mode(ks_pcie);
val = ilog2(OB_WIN_SIZE);
ks_pcie_app_writel(ks_pcie, OB_SIZE, val);
/* Using Direct 1:1 mapping of RC <-> PCI memory space */
for (i = 0; i < num_viewport && (start < end); i++) {
ks_pcie_app_writel(ks_pcie, OB_OFFSET_INDEX(i),
lower_32_bits(start) | OB_ENABLEN);
ks_pcie_app_writel(ks_pcie, OB_OFFSET_HI(i),
upper_32_bits(start));
start += OB_WIN_SIZE * SZ_1M;
}
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val |= OB_XLAT_EN_VAL;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, val);
}
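Worked numbers for the window setup above: OB_WIN_SIZE is 8 (MB), so OB_SIZE is programmed with ilog2(8) = 3 -- the same 8 MB window-size index the old code wrote via CFG_PCIM_WIN_SZ_IDX -- and viewport i covers start + i * 8 MB. With num_viewport = 4, for example, 32 MB of RC memory space is mapped 1:1 onto PCI space.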
static int ks_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size,
u32 *val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 reg;
reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
CFG_FUNC(PCI_FUNC(devfn));
if (bus->parent->number != pp->root_bus_nr)
reg |= CFG_TYPE1;
ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg);
return dw_pcie_read(pp->va_cfg0_base + where, size, val);
}
static int ks_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size,
u32 val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 reg;
reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
CFG_FUNC(PCI_FUNC(devfn));
if (bus->parent->number != pp->root_bus_nr)
reg |= CFG_TYPE1;
ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg);
return dw_pcie_write(pp->va_cfg0_base + where, size, val);
}
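Worked example of the CFG_SETUP encoding used by the two accessors above: a config access to bus 2, device 3, function 1 programs

	CFG_BUS(2) | CFG_DEVICE(3) | CFG_FUNC(1) = (2 << 16) | (3 << 8) | 1 = 0x20301

and, since bus 2's parent is not the root bus, CFG_TYPE1 (bit 24) is OR'd in as well, giving 0x1020301.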
/**
* ks_pcie_v3_65_scan_bus() - keystone scan_bus post initialization
*
* This sets BAR0 to enable inbound access for MSI_IRQ register
*/
static void ks_pcie_v3_65_scan_bus(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
/* Configure and set up BAR0 */
ks_pcie_set_dbi_mode(ks_pcie);
/* Enable BAR0 */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1);
ks_pcie_clear_dbi_mode(ks_pcie);
/*
* For BAR0, just setting bus address for inbound writes (MSI) should
* be sufficient. Use physical address to avoid any conflicts.
*/
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start);
}
/**
* ks_pcie_link_up() - Check if link up
*/
static int ks_pcie_link_up(struct dw_pcie *pci)
{
u32 val;
val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0);
val &= PORT_LOGIC_LTSSM_STATE_MASK;
return (val == PORT_LOGIC_LTSSM_STATE_L0);
}
static void ks_pcie_initiate_link_train(struct keystone_pcie *ks_pcie)
{
u32 val;
/* Disable Link training */
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
val &= ~LTSSM_EN_VAL;
ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
/* Initiate Link Training */
val = ks_pcie_app_readl(ks_pcie, CMD_STATUS);
ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val);
}
/**
* ks_pcie_dw_host_init() - initialize host for v3_65 dw hardware
*
* Ioremap the register resources, initialize legacy irq domain
* and call dw_pcie_v3_65_host_init() API to initialize the Keystone
* PCI host controller.
*/
static int __init ks_pcie_dw_host_init(struct keystone_pcie *ks_pcie)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *res;
/* Index 0 is the config reg. space address */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
/*
* va_cfg0_base and va_cfg1_base are set to the same address and are
* used by the pcie rd/wr_other_conf functions
*/
pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET;
pp->va_cfg1_base = pp->va_cfg0_base;
/* Index 1 is the application reg. space address */
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
ks_pcie->va_app_base = devm_ioremap_resource(dev, res);
if (IS_ERR(ks_pcie->va_app_base))
return PTR_ERR(ks_pcie->va_app_base);
ks_pcie->app = *res;
/* Create legacy IRQ domain */
ks_pcie->legacy_irq_domain =
irq_domain_add_linear(ks_pcie->legacy_intc_np,
PCI_NUM_INTX,
&ks_pcie_legacy_irq_domain_ops,
NULL);
if (!ks_pcie->legacy_irq_domain) {
dev_err(dev, "Failed to add irq domain for legacy irqs\n");
return -EINVAL;
}
return dw_pcie_host_init(pp);
}
static void ks_pcie_quirk(struct pci_dev *dev)
{
struct pci_bus *bus = dev->bus;
struct pci_dev *bridge = bus->self;
struct pci_dev *bridge;
static const struct pci_device_id rc_pci_devids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK),
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
@@ -50,11 +520,13 @@ static void quirk_limit_mrrs(struct pci_dev *dev)
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L),
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G),
.class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, },
{ 0, },
};
if (pci_is_root_bus(bus))
return;
bridge = dev;
/* look for the host bridge */
while (!pci_is_root_bus(bus)) {
@@ -62,43 +534,39 @@ static void quirk_limit_mrrs(struct pci_dev *dev)
bus = bus->parent;
}
if (bridge) {
/*
* Keystone PCI controller has a h/w limitation of
* 256 bytes maximum read request size. It can't handle
* anything higher than this. So force this limit on
* all downstream devices.
*/
if (pci_match_id(rc_pci_devids, bridge)) {
if (pcie_get_readrq(dev) > 256) {
dev_info(&dev->dev, "limiting MRRS to 256\n");
pcie_set_readrq(dev, 256);
}
if (!bridge)
return;
/*
* Keystone PCI controller has a h/w limitation of
* 256 bytes maximum read request size. It can't handle
* anything higher than this. So force this limit on
* all downstream devices.
*/
if (pci_match_id(rc_pci_devids, bridge)) {
if (pcie_get_readrq(dev) > 256) {
dev_info(&dev->dev, "limiting MRRS to 256\n");
pcie_set_readrq(dev, 256);
}
}
}
DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, quirk_limit_mrrs);
DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk);
static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = pci->dev;
unsigned int retries;
dw_pcie_setup_rc(pp);
if (dw_pcie_link_up(pci)) {
dev_info(dev, "Link already up\n");
return 0;
}
ks_pcie_initiate_link_train(ks_pcie);
/* check if the link is up or not */
for (retries = 0; retries < 5; retries++) {
ks_dw_pcie_initiate_link_train(ks_pcie);
if (!dw_pcie_wait_for_link(pci))
return 0;
}
if (!dw_pcie_wait_for_link(pci))
return 0;
dev_err(dev, "phy link never came up\n");
return -ETIMEDOUT;
@@ -121,7 +589,7 @@ static void ks_pcie_msi_irq_handler(struct irq_desc *desc)
* ack operation.
*/
chained_irq_enter(chip, desc);
ks_dw_pcie_handle_msi_irq(ks_pcie, offset);
ks_pcie_handle_msi_irq(ks_pcie, offset);
chained_irq_exit(chip, desc);
}
@@ -150,7 +618,7 @@ static void ks_pcie_legacy_irq_handler(struct irq_desc *desc)
* ack operation.
*/
chained_irq_enter(chip, desc);
ks_dw_pcie_handle_legacy_irq(ks_pcie, irq_offset);
ks_pcie_handle_legacy_irq(ks_pcie, irq_offset);
chained_irq_exit(chip, desc);
}
@@ -222,7 +690,7 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
ks_pcie_legacy_irq_handler,
ks_pcie);
}
ks_dw_pcie_enable_legacy_irqs(ks_pcie);
ks_pcie_enable_legacy_irqs(ks_pcie);
/* MSI IRQ */
if (IS_ENABLED(CONFIG_PCI_MSI)) {
@@ -234,7 +702,7 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
}
if (ks_pcie->error_irq > 0)
ks_dw_pcie_enable_error_irq(ks_pcie);
ks_pcie_enable_error_irq(ks_pcie);
}
/*
@@ -242,8 +710,8 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie)
* bus error instead of returning 0xffffffff. This handler always returns 0
* for this kind of fault.
*/
static int keystone_pcie_fault(unsigned long addr, unsigned int fsr,
struct pt_regs *regs)
static int ks_pcie_fault(unsigned long addr, unsigned int fsr,
struct pt_regs *regs)
{
unsigned long instr = *(unsigned long *) instruction_pointer(regs);
@@ -257,59 +725,78 @@ static int keystone_pcie_fault(unsigned long addr, unsigned int fsr,
return 0;
}
static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie)
{
int ret;
unsigned int id;
struct regmap *devctrl_regs;
struct dw_pcie *pci = ks_pcie->pci;
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
devctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-id");
if (IS_ERR(devctrl_regs))
return PTR_ERR(devctrl_regs);
ret = regmap_read(devctrl_regs, 0, &id);
if (ret)
return ret;
dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, id & PCIE_VENDORID_MASK);
dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, id >> PCIE_DEVICEID_SHIFT);
return 0;
}
static int __init ks_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 val;
int ret;
dw_pcie_setup_rc(pp);
ks_pcie_establish_link(ks_pcie);
ks_dw_pcie_setup_rc_app_regs(ks_pcie);
ks_pcie_setup_rc_app_regs(ks_pcie);
ks_pcie_setup_interrupts(ks_pcie);
writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8),
pci->dbi_base + PCI_IO_BASE);
/* update the Vendor ID */
writew(ks_pcie->device_id, pci->dbi_base + PCI_DEVICE_ID);
/* update the DEV_STAT_CTRL to publish right mrrs */
val = readl(pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
val &= ~PCI_EXP_DEVCTL_READRQ;
/* set the mrrs to 256 bytes */
val |= BIT(12);
writel(val, pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL);
ret = ks_pcie_init_id(ks_pcie);
if (ret < 0)
return ret;
/*
* PCIe access errors that result in OCP errors are caught by ARM as
* "External aborts"
*/
hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0,
hook_fault_code(17, ks_pcie_fault, SIGBUS, 0,
"Asynchronous external abort");
return 0;
}
static const struct dw_pcie_host_ops keystone_pcie_host_ops = {
.rd_other_conf = ks_dw_pcie_rd_other_conf,
.wr_other_conf = ks_dw_pcie_wr_other_conf,
static const struct dw_pcie_host_ops ks_pcie_host_ops = {
.rd_other_conf = ks_pcie_rd_other_conf,
.wr_other_conf = ks_pcie_wr_other_conf,
.host_init = ks_pcie_host_init,
.msi_set_irq = ks_dw_pcie_msi_set_irq,
.msi_clear_irq = ks_dw_pcie_msi_clear_irq,
.get_msi_addr = ks_dw_pcie_get_msi_addr,
.msi_host_init = ks_dw_pcie_msi_host_init,
.msi_irq_ack = ks_dw_pcie_msi_irq_ack,
.scan_bus = ks_dw_pcie_v3_65_scan_bus,
.msi_set_irq = ks_pcie_msi_set_irq,
.msi_clear_irq = ks_pcie_msi_clear_irq,
.get_msi_addr = ks_pcie_get_msi_addr,
.msi_host_init = ks_pcie_msi_host_init,
.msi_irq_ack = ks_pcie_msi_irq_ack,
.scan_bus = ks_pcie_v3_65_scan_bus,
};
static irqreturn_t pcie_err_irq_handler(int irq, void *priv)
static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv)
{
struct keystone_pcie *ks_pcie = priv;
return ks_dw_pcie_handle_error_irq(ks_pcie);
return ks_pcie_handle_error_irq(ks_pcie);
}
static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
struct platform_device *pdev)
static int __init ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = ks_pcie->pci;
struct pcie_port *pp = &pci->pp;
@@ -338,7 +825,7 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
if (ks_pcie->error_irq <= 0)
dev_info(dev, "no error IRQ defined\n");
else {
ret = request_irq(ks_pcie->error_irq, pcie_err_irq_handler,
ret = request_irq(ks_pcie->error_irq, ks_pcie_err_irq_handler,
IRQF_SHARED, "pcie-error-irq", ks_pcie);
if (ret < 0) {
dev_err(dev, "failed to request error IRQ %d\n",
@@ -347,8 +834,8 @@ static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie,
}
}
pp->ops = &keystone_pcie_host_ops;
ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np);
pp->ops = &ks_pcie_host_ops;
ret = ks_pcie_dw_host_init(ks_pcie);
if (ret) {
dev_err(dev, "failed to initialize host\n");
return ret;
@@ -365,28 +852,62 @@ static const struct of_device_id ks_pcie_of_match[] = {
{ },
};
static const struct dw_pcie_ops dw_pcie_ops = {
.link_up = ks_dw_pcie_link_up,
static const struct dw_pcie_ops ks_pcie_dw_pcie_ops = {
.link_up = ks_pcie_link_up,
};
static int __exit ks_pcie_remove(struct platform_device *pdev)
static void ks_pcie_disable_phy(struct keystone_pcie *ks_pcie)
{
struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
int num_lanes = ks_pcie->num_lanes;
clk_disable_unprepare(ks_pcie->clk);
while (num_lanes--) {
phy_power_off(ks_pcie->phy[num_lanes]);
phy_exit(ks_pcie->phy[num_lanes]);
}
}
static int ks_pcie_enable_phy(struct keystone_pcie *ks_pcie)
{
int i;
int ret;
int num_lanes = ks_pcie->num_lanes;
for (i = 0; i < num_lanes; i++) {
ret = phy_init(ks_pcie->phy[i]);
if (ret < 0)
goto err_phy;
ret = phy_power_on(ks_pcie->phy[i]);
if (ret < 0) {
phy_exit(ks_pcie->phy[i]);
goto err_phy;
}
}
return 0;
err_phy:
while (--i >= 0) {
phy_power_off(ks_pcie->phy[i]);
phy_exit(ks_pcie->phy[i]);
}
return ret;
}
static int __init ks_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct dw_pcie *pci;
struct keystone_pcie *ks_pcie;
struct resource *res;
void __iomem *reg_p;
struct phy *phy;
struct device_link **link;
u32 num_viewport;
struct phy **phy;
u32 num_lanes;
char name[10];
int ret;
int i;
ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL);
if (!ks_pcie)
@@ -397,54 +918,99 @@ static int __init ks_pcie_probe(struct platform_device *pdev)
return -ENOMEM;
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pci->ops = &ks_pcie_dw_pcie_ops;
ks_pcie->pci = pci;
/* initialize SerDes Phy if present */
phy = devm_phy_get(dev, "pcie-phy");
if (PTR_ERR_OR_ZERO(phy) == -EPROBE_DEFER)
return PTR_ERR(phy);
if (!IS_ERR_OR_NULL(phy)) {
ret = phy_init(phy);
if (ret < 0)
return ret;
}
/* index 2 is to read PCI DEVICE_ID */
res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
reg_p = devm_ioremap_resource(dev, res);
if (IS_ERR(reg_p))
return PTR_ERR(reg_p);
ks_pcie->device_id = readl(reg_p) >> 16;
devm_iounmap(dev, reg_p);
devm_release_mem_region(dev, res->start, resource_size(res));
ks_pcie->np = dev->of_node;
platform_set_drvdata(pdev, ks_pcie);
ks_pcie->clk = devm_clk_get(dev, "pcie");
if (IS_ERR(ks_pcie->clk)) {
dev_err(dev, "Failed to get pcie rc clock\n");
return PTR_ERR(ks_pcie->clk);
}
ret = clk_prepare_enable(ks_pcie->clk);
if (ret)
ret = of_property_read_u32(np, "num-viewport", &num_viewport);
if (ret < 0) {
dev_err(dev, "unable to read *num-viewport* property\n");
return ret;
}
ret = of_property_read_u32(np, "num-lanes", &num_lanes);
if (ret)
num_lanes = 1;
phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL);
if (!phy)
return -ENOMEM;
link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL);
if (!link)
return -ENOMEM;
for (i = 0; i < num_lanes; i++) {
snprintf(name, sizeof(name), "pcie-phy%d", i);
phy[i] = devm_phy_optional_get(dev, name);
if (IS_ERR(phy[i])) {
ret = PTR_ERR(phy[i]);
goto err_link;
}
if (!phy[i])
continue;
link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
if (!link[i]) {
ret = -EINVAL;
goto err_link;
}
}
ks_pcie->np = np;
ks_pcie->pci = pci;
ks_pcie->link = link;
ks_pcie->num_lanes = num_lanes;
ks_pcie->num_viewport = num_viewport;
ks_pcie->phy = phy;
ret = ks_pcie_enable_phy(ks_pcie);
if (ret) {
dev_err(dev, "failed to enable phy\n");
goto err_link;
}
platform_set_drvdata(pdev, ks_pcie);
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
dev_err(dev, "pm_runtime_get_sync failed\n");
goto err_get_sync;
}
ret = ks_add_pcie_port(ks_pcie, pdev);
ret = ks_pcie_add_pcie_port(ks_pcie, pdev);
if (ret < 0)
goto fail_clk;
goto err_get_sync;
return 0;
fail_clk:
clk_disable_unprepare(ks_pcie->clk);
err_get_sync:
pm_runtime_put(dev);
pm_runtime_disable(dev);
ks_pcie_disable_phy(ks_pcie);
err_link:
while (--i >= 0 && link[i])
device_link_del(link[i]);
return ret;
}
static int __exit ks_pcie_remove(struct platform_device *pdev)
{
struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev);
struct device_link **link = ks_pcie->link;
int num_lanes = ks_pcie->num_lanes;
struct device *dev = &pdev->dev;
pm_runtime_put(dev);
pm_runtime_disable(dev);
ks_pcie_disable_phy(ks_pcie);
while (num_lanes--)
device_link_del(link[num_lanes]);
return 0;
}
static struct platform_driver ks_pcie_driver __refdata = {
.probe = ks_pcie_probe,
.remove = __exit_p(ks_pcie_remove),


@ -1,57 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Keystone PCI Controller's common includes
*
* Copyright (C) 2013-2014 Texas Instruments., Ltd.
* http://www.ti.com
*
* Author: Murali Karicheri <m-karicheri2@ti.com>
*/
#define MAX_MSI_HOST_IRQS 8
struct keystone_pcie {
struct dw_pcie *pci;
struct clk *clk;
/* PCI Device ID */
u32 device_id;
int num_legacy_host_irqs;
int legacy_host_irqs[PCI_NUM_INTX];
struct device_node *legacy_intc_np;
int num_msi_host_irqs;
int msi_host_irqs[MAX_MSI_HOST_IRQS];
struct device_node *msi_intc_np;
struct irq_domain *legacy_irq_domain;
struct device_node *np;
int error_irq;
/* Application register space */
void __iomem *va_app_base; /* DT 1st resource */
struct resource app;
};
/* Keystone DW specific MSI controller APIs/definitions */
void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset);
phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp);
/* Keystone specific PCI controller APIs */
void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie);
void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset);
void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie);
irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie);
int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie,
struct device_node *msi_intc_np);
int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 val);
int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val);
void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie);
void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie);
void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp);
void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp);
int ks_dw_pcie_msi_host_init(struct pcie_port *pp);
int ks_dw_pcie_link_up(struct dw_pcie *pci);


@ -36,6 +36,10 @@
#define PORT_LINK_MODE_4_LANES (0x7 << 16)
#define PORT_LINK_MODE_8_LANES (0xf << 16)
#define PCIE_PORT_DEBUG0 0x728
#define PORT_LOGIC_LTSSM_STATE_MASK 0x1f
#define PORT_LOGIC_LTSSM_STATE_L0 0x11
#define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C
#define PORT_LOGIC_SPEED_CHANGE (0x1 << 17)
#define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8)


@ -467,8 +467,8 @@ static int kirin_pcie_add_msi(struct dw_pcie *pci,
return 0;
}
static int __init kirin_add_pcie_port(struct dw_pcie *pci,
struct platform_device *pdev)
static int kirin_add_pcie_port(struct dw_pcie *pci,
struct platform_device *pdev)
{
int ret;


@ -1089,7 +1089,6 @@ static int qcom_pcie_host_init(struct pcie_port *pp)
struct qcom_pcie *pcie = to_qcom_pcie(pci);
int ret;
pm_runtime_get_sync(pci->dev);
qcom_ep_reset_assert(pcie);
ret = pcie->ops->init(pcie);
@ -1126,7 +1125,6 @@ err_disable_phy:
phy_power_off(pcie->phy);
err_deinit:
pcie->ops->deinit(pcie);
pm_runtime_put(pci->dev);
return ret;
}
@ -1216,6 +1214,12 @@ static int qcom_pcie_probe(struct platform_device *pdev)
return -ENOMEM;
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
pm_runtime_disable(dev);
return ret;
}
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pp = &pci->pp;
@ -1225,44 +1229,56 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pcie->ops = of_device_get_match_data(dev);
pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
if (IS_ERR(pcie->reset))
return PTR_ERR(pcie->reset);
if (IS_ERR(pcie->reset)) {
ret = PTR_ERR(pcie->reset);
goto err_pm_runtime_put;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf");
pcie->parf = devm_ioremap_resource(dev, res);
if (IS_ERR(pcie->parf))
return PTR_ERR(pcie->parf);
if (IS_ERR(pcie->parf)) {
ret = PTR_ERR(pcie->parf);
goto err_pm_runtime_put;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pci->dbi_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
if (IS_ERR(pci->dbi_base)) {
ret = PTR_ERR(pci->dbi_base);
goto err_pm_runtime_put;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
pcie->elbi = devm_ioremap_resource(dev, res);
if (IS_ERR(pcie->elbi))
return PTR_ERR(pcie->elbi);
if (IS_ERR(pcie->elbi)) {
ret = PTR_ERR(pcie->elbi);
goto err_pm_runtime_put;
}
pcie->phy = devm_phy_optional_get(dev, "pciephy");
if (IS_ERR(pcie->phy))
return PTR_ERR(pcie->phy);
if (IS_ERR(pcie->phy)) {
ret = PTR_ERR(pcie->phy);
goto err_pm_runtime_put;
}
ret = pcie->ops->get_resources(pcie);
if (ret)
return ret;
goto err_pm_runtime_put;
pp->ops = &qcom_pcie_dw_ops;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = platform_get_irq_byname(pdev, "msi");
if (pp->msi_irq < 0)
return pp->msi_irq;
if (pp->msi_irq < 0) {
ret = pp->msi_irq;
goto err_pm_runtime_put;
}
}
ret = phy_init(pcie->phy);
if (ret) {
pm_runtime_disable(&pdev->dev);
return ret;
goto err_pm_runtime_put;
}
platform_set_drvdata(pdev, pcie);
@ -1271,10 +1287,16 @@ static int qcom_pcie_probe(struct platform_device *pdev)
if (ret) {
dev_err(dev, "cannot initialize host\n");
pm_runtime_disable(&pdev->dev);
return ret;
goto err_pm_runtime_put;
}
return 0;
err_pm_runtime_put:
pm_runtime_put(dev);
pm_runtime_disable(dev);
return ret;
}
static const struct of_device_id qcom_pcie_match[] = {


@ -20,12 +20,16 @@
#include <linux/of_pci.h>
#include "../pci.h"
#include "../pci-bridge-emul.h"
/* PCIe core registers */
#define PCIE_CORE_DEV_ID_REG 0x0
#define PCIE_CORE_CMD_STATUS_REG 0x4
#define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0)
#define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1)
#define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2)
#define PCIE_CORE_DEV_REV_REG 0x8
#define PCIE_CORE_PCIEXP_CAP 0xc0
#define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8
#define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4)
#define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5
@ -41,7 +45,10 @@
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6)
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7)
#define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8)
#define PCIE_CORE_INT_A_ASSERT_ENABLE 1
#define PCIE_CORE_INT_B_ASSERT_ENABLE 2
#define PCIE_CORE_INT_C_ASSERT_ENABLE 3
#define PCIE_CORE_INT_D_ASSERT_ENABLE 4
/* PIO registers base address and register offsets */
#define PIO_BASE_ADDR 0x4000
#define PIO_CTRL (PIO_BASE_ADDR + 0x0)
@ -93,7 +100,9 @@
#define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5)
#define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6)
#define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10)
#define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30)
#define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40)
#define PCIE_MSG_PM_PME_MASK BIT(7)
#define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44)
#define PCIE_ISR0_MSI_INT_PENDING BIT(24)
#define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val))
@ -189,6 +198,7 @@ struct advk_pcie {
struct mutex msi_used_lock;
u16 msi_msg;
int root_bus_nr;
struct pci_bridge_emul bridge;
};
static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg)
@ -390,6 +400,109 @@ static int advk_pcie_wait_pio(struct advk_pcie *pcie)
return -ETIMEDOUT;
}
static pci_bridge_emul_read_status_t
advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
int reg, u32 *value)
{
struct advk_pcie *pcie = bridge->data;
switch (reg) {
case PCI_EXP_SLTCTL:
*value = PCI_EXP_SLTSTA_PDS << 16;
return PCI_BRIDGE_EMUL_HANDLED;
case PCI_EXP_RTCTL: {
u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
*value = (val & PCIE_MSG_PM_PME_MASK) ? PCI_EXP_RTCTL_PMEIE : 0;
return PCI_BRIDGE_EMUL_HANDLED;
}
case PCI_EXP_RTSTA: {
u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
*value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
return PCI_BRIDGE_EMUL_HANDLED;
}
case PCI_CAP_LIST_ID:
case PCI_EXP_DEVCAP:
case PCI_EXP_DEVCTL:
case PCI_EXP_LNKCAP:
case PCI_EXP_LNKCTL:
*value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
return PCI_BRIDGE_EMUL_HANDLED;
default:
return PCI_BRIDGE_EMUL_NOT_HANDLED;
}
}
static void
advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
int reg, u32 old, u32 new, u32 mask)
{
struct advk_pcie *pcie = bridge->data;
switch (reg) {
case PCI_EXP_DEVCTL:
case PCI_EXP_LNKCTL:
advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg);
break;
case PCI_EXP_RTCTL:
new = (new & PCI_EXP_RTCTL_PMEIE) << 3;
advk_writel(pcie, new, PCIE_ISR0_MASK_REG);
break;
case PCI_EXP_RTSTA:
new = (new & PCI_EXP_RTSTA_PME) >> 9;
advk_writel(pcie, new, PCIE_ISR0_REG);
break;
default:
break;
}
}
struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
.write_pcie = advk_pci_bridge_emul_pcie_conf_write,
};
/*
* Initialize the configuration space of the PCI-to-PCI bridge
* associated with the given PCIe interface.
*/
static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
{
struct pci_bridge_emul *bridge = &pcie->bridge;
bridge->conf.vendor = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff;
bridge->conf.device = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) >> 16;
bridge->conf.class_revision =
advk_readl(pcie, PCIE_CORE_DEV_REV_REG) & 0xff;
/* Support 32 bits I/O addressing */
bridge->conf.iobase = PCI_IO_RANGE_TYPE_32;
bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
/* Support 64 bits memory pref */
bridge->conf.pref_mem_base = PCI_PREF_RANGE_TYPE_64;
bridge->conf.pref_mem_limit = PCI_PREF_RANGE_TYPE_64;
/* Support interrupt A for MSI feature */
bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
bridge->has_pcie = true;
bridge->data = pcie;
bridge->ops = &advk_pci_bridge_emul_ops;
pci_bridge_emul_init(bridge);
}
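
A brief usage sketch of the emulation layer (a hypothetical helper, relying only on the pci_bridge_emul_conf_read() call shown in this diff):

	/* Hypothetical helper, for illustration only: read the emulated root
	 * port's vendor ID without touching the link. Root-bus accesses are
	 * served from bridge->conf; ->read_pcie is consulted only for the
	 * PCIe capability registers handled above. */
	static u16 advk_emul_vendor_id(struct advk_pcie *pcie)
	{
		u32 val = 0;

		pci_bridge_emul_conf_read(&pcie->bridge, PCI_VENDOR_ID, 2, &val);
		return val;
	}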
static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
int devfn)
{
@ -411,6 +524,10 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
return PCIBIOS_DEVICE_NOT_FOUND;
}
if (bus->number == pcie->root_bus_nr)
return pci_bridge_emul_conf_read(&pcie->bridge, where,
size, val);
/* Start PIO */
advk_writel(pcie, 0, PIO_START);
advk_writel(pcie, 1, PIO_ISR);
@ -418,7 +535,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn,
/* Program the control register */
reg = advk_readl(pcie, PIO_CTRL);
reg &= ~PIO_CTRL_TYPE_MASK;
if (bus->number == pcie->root_bus_nr)
if (bus->primary == pcie->root_bus_nr)
reg |= PCIE_CONFIG_RD_TYPE0;
else
reg |= PCIE_CONFIG_RD_TYPE1;
@ -463,6 +580,10 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
if (!advk_pcie_valid_device(pcie, bus, devfn))
return PCIBIOS_DEVICE_NOT_FOUND;
if (bus->number == pcie->root_bus_nr)
return pci_bridge_emul_conf_write(&pcie->bridge, where,
size, val);
if (where % size)
return PCIBIOS_SET_FAILED;
@ -473,7 +594,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
/* Program the control register */
reg = advk_readl(pcie, PIO_CTRL);
reg &= ~PIO_CTRL_TYPE_MASK;
if (bus->number == pcie->root_bus_nr)
if (bus->primary == pcie->root_bus_nr)
reg |= PCIE_CONFIG_WR_TYPE0;
else
reg |= PCIE_CONFIG_WR_TYPE1;
@ -875,6 +996,8 @@ static int advk_pcie_probe(struct platform_device *pdev)
advk_pcie_setup_hw(pcie);
advk_sw_pci_bridge_init(pcie);
ret = advk_pcie_init_irq_domain(pcie);
if (ret) {
dev_err(dev, "Failed to initialize irq\n");


@ -58,9 +58,7 @@ err_out:
int pci_host_common_probe(struct platform_device *pdev,
struct pci_ecam_ops *ops)
{
const char *type;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
struct pci_config_window *cfg;
struct list_head resources;
@ -70,12 +68,6 @@ int pci_host_common_probe(struct platform_device *pdev,
if (!bridge)
return -ENOMEM;
type = of_get_property(np, "device_type", NULL);
if (!type || strcmp(type, "pci")) {
dev_err(dev, "invalid \"device_type\" %s\n", type);
return -EINVAL;
}
of_pci_check_probe_only();
/* Parse and map our Configuration Space windows */


@ -22,6 +22,7 @@
#include <linux/of_platform.h>
#include "../pci.h"
#include "../pci-bridge-emul.h"
/*
* PCIe unit register offsets.
@ -63,61 +64,6 @@
#define PCIE_DEBUG_CTRL 0x1a60
#define PCIE_DEBUG_SOFT_RESET BIT(20)
enum {
PCISWCAP = PCI_BRIDGE_CONTROL + 2,
PCISWCAP_EXP_LIST_ID = PCISWCAP + PCI_CAP_LIST_ID,
PCISWCAP_EXP_DEVCAP = PCISWCAP + PCI_EXP_DEVCAP,
PCISWCAP_EXP_DEVCTL = PCISWCAP + PCI_EXP_DEVCTL,
PCISWCAP_EXP_LNKCAP = PCISWCAP + PCI_EXP_LNKCAP,
PCISWCAP_EXP_LNKCTL = PCISWCAP + PCI_EXP_LNKCTL,
PCISWCAP_EXP_SLTCAP = PCISWCAP + PCI_EXP_SLTCAP,
PCISWCAP_EXP_SLTCTL = PCISWCAP + PCI_EXP_SLTCTL,
PCISWCAP_EXP_RTCTL = PCISWCAP + PCI_EXP_RTCTL,
PCISWCAP_EXP_RTSTA = PCISWCAP + PCI_EXP_RTSTA,
PCISWCAP_EXP_DEVCAP2 = PCISWCAP + PCI_EXP_DEVCAP2,
PCISWCAP_EXP_DEVCTL2 = PCISWCAP + PCI_EXP_DEVCTL2,
PCISWCAP_EXP_LNKCAP2 = PCISWCAP + PCI_EXP_LNKCAP2,
PCISWCAP_EXP_LNKCTL2 = PCISWCAP + PCI_EXP_LNKCTL2,
PCISWCAP_EXP_SLTCAP2 = PCISWCAP + PCI_EXP_SLTCAP2,
PCISWCAP_EXP_SLTCTL2 = PCISWCAP + PCI_EXP_SLTCTL2,
};
/* PCI configuration space of a PCI-to-PCI bridge */
struct mvebu_sw_pci_bridge {
u16 vendor;
u16 device;
u16 command;
u16 status;
u16 class;
u8 interface;
u8 revision;
u8 bist;
u8 header_type;
u8 latency_timer;
u8 cache_line_size;
u32 bar[2];
u8 primary_bus;
u8 secondary_bus;
u8 subordinate_bus;
u8 secondary_latency_timer;
u8 iobase;
u8 iolimit;
u16 secondary_status;
u16 membase;
u16 memlimit;
u16 iobaseupper;
u16 iolimitupper;
u32 romaddr;
u8 intline;
u8 intpin;
u16 bridgectrl;
/* PCI express capability */
u32 pcie_sltcap;
u16 pcie_devctl;
u16 pcie_rtctl;
};
struct mvebu_pcie_port;
/* Structure representing all PCIe interfaces */
@ -153,7 +99,7 @@ struct mvebu_pcie_port {
struct clk *clk;
struct gpio_desc *reset_gpio;
char *reset_name;
struct mvebu_sw_pci_bridge bridge;
struct pci_bridge_emul bridge;
struct device_node *dn;
struct mvebu_pcie *pcie;
struct mvebu_pcie_window memwin;
@ -415,11 +361,12 @@ static void mvebu_pcie_set_window(struct mvebu_pcie_port *port,
static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {};
struct pci_bridge_emul_conf *conf = &port->bridge.conf;
/* Are the new iobase/iolimit values invalid? */
if (port->bridge.iolimit < port->bridge.iobase ||
port->bridge.iolimitupper < port->bridge.iobaseupper ||
!(port->bridge.command & PCI_COMMAND_IO)) {
if (conf->iolimit < conf->iobase ||
conf->iolimitupper < conf->iobaseupper ||
!(conf->command & PCI_COMMAND_IO)) {
mvebu_pcie_set_window(port, port->io_target, port->io_attr,
&desired, &port->iowin);
return;
@ -438,11 +385,11 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
* specifications. iobase is the bus address, port->iowin_base
* is the CPU address.
*/
desired.remap = ((port->bridge.iobase & 0xF0) << 8) |
(port->bridge.iobaseupper << 16);
desired.remap = ((conf->iobase & 0xF0) << 8) |
(conf->iobaseupper << 16);
desired.base = port->pcie->io.start + desired.remap;
desired.size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) |
(port->bridge.iolimitupper << 16)) -
desired.size = ((0xFFF | ((conf->iolimit & 0xF0) << 8) |
(conf->iolimitupper << 16)) -
desired.remap) +
1;
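
For reference, a minimal standalone sketch of the I/O window decoding above: per the PCI-to-PCI bridge spec, the upper nibble of the I/O base/limit registers carries address bits [15:12] and the "upper 16" registers carry bits [31:16]. The register values here are illustrative only:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Illustrative register contents; low nibble 0x1 = 32-bit I/O. */
		uint8_t  iobase = 0x11, iolimit = 0x21;
		uint16_t iobaseupper = 0, iolimitupper = 0;

		uint32_t start = ((iobase & 0xF0) << 8) |
				 ((uint32_t)iobaseupper << 16);
		uint32_t end   = 0xFFF | ((iolimit & 0xF0) << 8) |
				 ((uint32_t)iolimitupper << 16);

		/* Prints start=0x1000 size=0x2000, i.e. window [0x1000, 0x2FFF]. */
		printf("start=%#x size=%#x\n", (unsigned)start,
		       (unsigned)(end - start + 1));
		return 0;
	}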
@ -453,10 +400,11 @@ static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port)
static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
{
struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP};
struct pci_bridge_emul_conf *conf = &port->bridge.conf;
/* Are the new membase/memlimit values invalid? */
if (port->bridge.memlimit < port->bridge.membase ||
!(port->bridge.command & PCI_COMMAND_MEMORY)) {
if (conf->memlimit < conf->membase ||
!(conf->command & PCI_COMMAND_MEMORY)) {
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr,
&desired, &port->memwin);
return;
@ -468,130 +416,32 @@ static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port)
* window to setup, according to the PCI-to-PCI bridge
* specifications.
*/
desired.base = ((port->bridge.membase & 0xFFF0) << 16);
desired.size = (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) -
desired.base = ((conf->membase & 0xFFF0) << 16);
desired.size = (((conf->memlimit & 0xFFF0) << 16) | 0xFFFFF) -
desired.base + 1;
mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired,
&port->memwin);
}
/*
* Initialize the configuration space of the PCI-to-PCI bridge
* associated with the given PCIe interface.
*/
static void mvebu_sw_pci_bridge_init(struct mvebu_pcie_port *port)
static pci_bridge_emul_read_status_t
mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
int reg, u32 *value)
{
struct mvebu_sw_pci_bridge *bridge = &port->bridge;
struct mvebu_pcie_port *port = bridge->data;
memset(bridge, 0, sizeof(struct mvebu_sw_pci_bridge));
bridge->class = PCI_CLASS_BRIDGE_PCI;
bridge->vendor = PCI_VENDOR_ID_MARVELL;
bridge->device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
bridge->revision = mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff;
bridge->header_type = PCI_HEADER_TYPE_BRIDGE;
bridge->cache_line_size = 0x10;
/* We support 32 bits I/O addressing */
bridge->iobase = PCI_IO_RANGE_TYPE_32;
bridge->iolimit = PCI_IO_RANGE_TYPE_32;
/* Add capabilities */
bridge->status = PCI_STATUS_CAP_LIST;
}
/*
* Read the configuration space of the PCI-to-PCI bridge associated to
* the given PCIe interface.
*/
static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port,
unsigned int where, int size, u32 *value)
{
struct mvebu_sw_pci_bridge *bridge = &port->bridge;
switch (where & ~3) {
case PCI_VENDOR_ID:
*value = bridge->device << 16 | bridge->vendor;
break;
case PCI_COMMAND:
*value = bridge->command | bridge->status << 16;
break;
case PCI_CLASS_REVISION:
*value = bridge->class << 16 | bridge->interface << 8 |
bridge->revision;
break;
case PCI_CACHE_LINE_SIZE:
*value = bridge->bist << 24 | bridge->header_type << 16 |
bridge->latency_timer << 8 | bridge->cache_line_size;
break;
case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1:
*value = bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4];
break;
case PCI_PRIMARY_BUS:
*value = (bridge->secondary_latency_timer << 24 |
bridge->subordinate_bus << 16 |
bridge->secondary_bus << 8 |
bridge->primary_bus);
break;
case PCI_IO_BASE:
if (!mvebu_has_ioport(port))
*value = bridge->secondary_status << 16;
else
*value = (bridge->secondary_status << 16 |
bridge->iolimit << 8 |
bridge->iobase);
break;
case PCI_MEMORY_BASE:
*value = (bridge->memlimit << 16 | bridge->membase);
break;
case PCI_PREF_MEMORY_BASE:
*value = 0;
break;
case PCI_IO_BASE_UPPER16:
*value = (bridge->iolimitupper << 16 | bridge->iobaseupper);
break;
case PCI_CAPABILITY_LIST:
*value = PCISWCAP;
break;
case PCI_ROM_ADDRESS1:
*value = 0;
break;
case PCI_INTERRUPT_LINE:
/* LINE PIN MIN_GNT MAX_LAT */
*value = 0;
break;
case PCISWCAP_EXP_LIST_ID:
/* Set PCIe v2, root port, slot support */
*value = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
PCI_EXP_FLAGS_SLOT) << 16 | PCI_CAP_ID_EXP;
break;
case PCISWCAP_EXP_DEVCAP:
switch (reg) {
case PCI_EXP_DEVCAP:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP);
break;
case PCISWCAP_EXP_DEVCTL:
case PCI_EXP_DEVCTL:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) &
~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
*value |= bridge->pcie_devctl;
break;
case PCISWCAP_EXP_LNKCAP:
case PCI_EXP_LNKCAP:
/*
* PCIe requires the clock power management capability to be
* hard-wired to zero for downstream ports
@ -600,176 +450,140 @@ static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port,
~PCI_EXP_LNKCAP_CLKPM;
break;
case PCISWCAP_EXP_LNKCTL:
case PCI_EXP_LNKCTL:
*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
break;
case PCISWCAP_EXP_SLTCAP:
*value = bridge->pcie_sltcap;
break;
case PCISWCAP_EXP_SLTCTL:
case PCI_EXP_SLTCTL:
*value = PCI_EXP_SLTSTA_PDS << 16;
break;
case PCISWCAP_EXP_RTCTL:
*value = bridge->pcie_rtctl;
break;
case PCISWCAP_EXP_RTSTA:
case PCI_EXP_RTSTA:
*value = mvebu_readl(port, PCIE_RC_RTSTA);
break;
/* PCIe requires the v2 fields to be hard-wired to zero */
case PCISWCAP_EXP_DEVCAP2:
case PCISWCAP_EXP_DEVCTL2:
case PCISWCAP_EXP_LNKCAP2:
case PCISWCAP_EXP_LNKCTL2:
case PCISWCAP_EXP_SLTCAP2:
case PCISWCAP_EXP_SLTCTL2:
default:
/*
* PCI defines configuration read accesses to reserved or
* unimplemented registers to read as zero and complete
* normally.
*/
*value = 0;
return PCIBIOS_SUCCESSFUL;
return PCI_BRIDGE_EMUL_NOT_HANDLED;
}
if (size == 2)
*value = (*value >> (8 * (where & 3))) & 0xffff;
else if (size == 1)
*value = (*value >> (8 * (where & 3))) & 0xff;
return PCIBIOS_SUCCESSFUL;
return PCI_BRIDGE_EMUL_HANDLED;
}
/* Write to the PCI-to-PCI bridge configuration space */
static int mvebu_sw_pci_bridge_write(struct mvebu_pcie_port *port,
unsigned int where, int size, u32 value)
static void
mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
int reg, u32 old, u32 new, u32 mask)
{
struct mvebu_sw_pci_bridge *bridge = &port->bridge;
u32 mask, reg;
int err;
struct mvebu_pcie_port *port = bridge->data;
struct pci_bridge_emul_conf *conf = &bridge->conf;
if (size == 4)
mask = 0x0;
else if (size == 2)
mask = ~(0xffff << ((where & 3) * 8));
else if (size == 1)
mask = ~(0xff << ((where & 3) * 8));
else
return PCIBIOS_BAD_REGISTER_NUMBER;
err = mvebu_sw_pci_bridge_read(port, where & ~3, 4, &reg);
if (err)
return err;
value = (reg & mask) | value << ((where & 3) * 8);
switch (where & ~3) {
switch (reg) {
case PCI_COMMAND:
{
u32 old = bridge->command;
if (!mvebu_has_ioport(port))
value &= ~PCI_COMMAND_IO;
conf->command &= ~PCI_COMMAND_IO;
bridge->command = value & 0xffff;
if ((old ^ bridge->command) & PCI_COMMAND_IO)
if ((old ^ new) & PCI_COMMAND_IO)
mvebu_pcie_handle_iobase_change(port);
if ((old ^ bridge->command) & PCI_COMMAND_MEMORY)
if ((old ^ new) & PCI_COMMAND_MEMORY)
mvebu_pcie_handle_membase_change(port);
break;
}
case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1:
bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4] = value;
break;
case PCI_IO_BASE:
/*
* We also keep bit 1 set, it is a read-only bit that
* We keep bit 1 set, it is a read-only bit that
* indicates we support 32 bits addressing for the
* I/O
*/
bridge->iobase = (value & 0xff) | PCI_IO_RANGE_TYPE_32;
bridge->iolimit = ((value >> 8) & 0xff) | PCI_IO_RANGE_TYPE_32;
conf->iobase |= PCI_IO_RANGE_TYPE_32;
conf->iolimit |= PCI_IO_RANGE_TYPE_32;
mvebu_pcie_handle_iobase_change(port);
break;
case PCI_MEMORY_BASE:
bridge->membase = value & 0xffff;
bridge->memlimit = value >> 16;
mvebu_pcie_handle_membase_change(port);
break;
case PCI_IO_BASE_UPPER16:
bridge->iobaseupper = value & 0xffff;
bridge->iolimitupper = value >> 16;
mvebu_pcie_handle_iobase_change(port);
break;
case PCI_PRIMARY_BUS:
bridge->primary_bus = value & 0xff;
bridge->secondary_bus = (value >> 8) & 0xff;
bridge->subordinate_bus = (value >> 16) & 0xff;
bridge->secondary_latency_timer = (value >> 24) & 0xff;
mvebu_pcie_set_local_bus_nr(port, bridge->secondary_bus);
mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus);
break;
case PCISWCAP_EXP_DEVCTL:
default:
break;
}
}
static void
mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
int reg, u32 old, u32 new, u32 mask)
{
struct mvebu_pcie_port *port = bridge->data;
switch (reg) {
case PCI_EXP_DEVCTL:
/*
* Armada370 data says these bits must always
* be zero when in root complex mode.
*/
value &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
new &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
/*
* If the mask is 0xffff0000, then we only want to write
* the device control register, rather than clearing the
* RW1C bits in the device status register. Mask out the
* status register bits.
*/
if (mask == 0xffff0000)
value &= 0xffff;
mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
break;
case PCISWCAP_EXP_LNKCTL:
case PCI_EXP_LNKCTL:
/*
* If we don't support CLKREQ, we must ensure that the
* CLKREQ enable bit always reads zero. Since we haven't
* had this capability, and it's dependent on board wiring,
* disable it for the time being.
*/
value &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
new &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
/*
* If the mask is 0xffff0000, then we only want to write
* the link control register, rather than clearing the
* RW1C bits in the link status register. Mask out the
* RW1C status register bits.
*/
if (mask == 0xffff0000)
value &= ~((PCI_EXP_LNKSTA_LABS |
PCI_EXP_LNKSTA_LBMS) << 16);
mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
break;
case PCISWCAP_EXP_RTSTA:
mvebu_writel(port, value, PCIE_RC_RTSTA);
break;
default:
case PCI_EXP_RTSTA:
mvebu_writel(port, new, PCIE_RC_RTSTA);
break;
}
}
return PCIBIOS_SUCCESSFUL;
struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = {
.write_base = mvebu_pci_bridge_emul_base_conf_write,
.read_pcie = mvebu_pci_bridge_emul_pcie_conf_read,
.write_pcie = mvebu_pci_bridge_emul_pcie_conf_write,
};
/*
* Initialize the configuration space of the PCI-to-PCI bridge
* associated with the given PCIe interface.
*/
static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
{
struct pci_bridge_emul *bridge = &port->bridge;
bridge->conf.vendor = PCI_VENDOR_ID_MARVELL;
bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16;
bridge->conf.class_revision =
mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff;
if (mvebu_has_ioport(port)) {
/* We support 32 bits I/O addressing */
bridge->conf.iobase = PCI_IO_RANGE_TYPE_32;
bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32;
}
bridge->has_pcie = true;
bridge->data = port;
bridge->ops = &mvebu_pci_bridge_emul_ops;
pci_bridge_emul_init(bridge);
}
static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
@ -789,8 +603,8 @@ static struct mvebu_pcie_port *mvebu_pcie_find_port(struct mvebu_pcie *pcie,
if (bus->number == 0 && port->devfn == devfn)
return port;
if (bus->number != 0 &&
bus->number >= port->bridge.secondary_bus &&
bus->number <= port->bridge.subordinate_bus)
bus->number >= port->bridge.conf.secondary_bus &&
bus->number <= port->bridge.conf.subordinate_bus)
return port;
}
@ -811,7 +625,8 @@ static int mvebu_pcie_wr_conf(struct pci_bus *bus, u32 devfn,
/* Access the emulated PCI-to-PCI bridge */
if (bus->number == 0)
return mvebu_sw_pci_bridge_write(port, where, size, val);
return pci_bridge_emul_conf_write(&port->bridge, where,
size, val);
if (!mvebu_pcie_link_up(port))
return PCIBIOS_DEVICE_NOT_FOUND;
@ -839,7 +654,8 @@ static int mvebu_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
/* Access the emulated PCI-to-PCI bridge */
if (bus->number == 0)
return mvebu_sw_pci_bridge_read(port, where, size, val);
return pci_bridge_emul_conf_read(&port->bridge, where,
size, val);
if (!mvebu_pcie_link_up(port)) {
*val = 0xffffffff;
@ -1297,7 +1113,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
mvebu_pcie_setup_hw(port);
mvebu_pcie_set_local_dev_nr(port, 1);
mvebu_sw_pci_bridge_init(port);
mvebu_pci_bridge_emul_init(port);
}
pcie->nports = i;


@ -258,7 +258,6 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
u8 intx, bool is_asserted)
{
struct cdns_pcie *pcie = &ep->pcie;
u32 r = ep->max_regions - 1;
u32 offset;
u16 status;
u8 msg_code;
@ -268,8 +267,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
ep->irq_pci_fn != fn)) {
/* Last region was reserved for IRQ writes. */
cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r,
/* First region was reserved for IRQ writes. */
cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0,
ep->irq_phys_addr);
ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
ep->irq_pci_fn = fn;
@ -347,8 +346,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
ep->irq_pci_fn != fn)) {
/* Last region was reserved for IRQ writes. */
cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1,
/* First region was reserved for IRQ writes. */
cdns_pcie_set_outbound_region(pcie, fn, 0,
false,
ep->irq_phys_addr,
pci_addr & ~pci_addr_mask,
@ -356,7 +355,7 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
ep->irq_pci_fn = fn;
}
writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
writel(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
return 0;
}
@ -517,6 +516,8 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
goto free_epc_mem;
}
ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
/* Reserve region 0 for IRQs */
set_bit(0, &ep->ob_region_map);
return 0;
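
The reservation above works because outbound regions are handed out from the ob_region_map bitmap; a minimal sketch of the usual allocation idiom (a hypothetical helper, assuming the driver's find_first_zero_bit()/set_bit() usage):

	#include <linux/bitmap.h>
	#include <linux/errno.h>

	/* Hypothetical allocator: because region 0 was set_bit()-reserved at
	 * probe, callers can only ever be handed regions 1..max_regions-1. */
	static int ob_region_alloc(unsigned long *map, unsigned int max_regions)
	{
		unsigned int r = find_first_zero_bit(map, max_regions);

		if (r >= max_regions)
			return -ENOMEM;
		set_bit(r, map);
		return r;
	}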


@ -235,7 +235,6 @@ static int cdns_pcie_host_init(struct device *dev,
static int cdns_pcie_host_probe(struct platform_device *pdev)
{
const char *type;
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct pci_host_bridge *bridge;
@ -268,12 +267,6 @@ static int cdns_pcie_host_probe(struct platform_device *pdev)
rc->device_id = 0xffff;
of_property_read_u16(np, "device-id", &rc->device_id);
type = of_get_property(np, "device_type", NULL);
if (!type || strcmp(type, "pci")) {
dev_err(dev, "invalid \"device_type\" %s\n", type);
return -EINVAL;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
pcie->reg_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pcie->reg_base)) {


@ -190,14 +190,16 @@ int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
for (i = 0; i < phy_count; i++) {
of_property_read_string_index(np, "phy-names", i, &name);
phy[i] = devm_phy_optional_get(dev, name);
if (IS_ERR(phy))
return PTR_ERR(phy);
phy[i] = devm_phy_get(dev, name);
if (IS_ERR(phy[i])) {
ret = PTR_ERR(phy[i]);
goto err_phy;
}
link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS);
if (!link[i]) {
devm_phy_put(dev, phy[i]);
ret = -EINVAL;
goto err_link;
goto err_phy;
}
}
@ -207,13 +209,15 @@ int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
ret = cdns_pcie_enable_phy(pcie);
if (ret)
goto err_link;
goto err_phy;
return 0;
err_link:
while (--i >= 0)
err_phy:
while (--i >= 0) {
device_link_del(link[i]);
devm_phy_put(dev, phy[i]);
}
return ret;
}


@ -630,14 +630,6 @@ static void __iomem *iproc_pcie_map_cfg_bus(struct iproc_pcie *pcie,
return (pcie->base + offset);
}
/*
* PAXC is connected to an internally emulated EP within the SoC. It
* allows only one device.
*/
if (pcie->ep_is_internal)
if (slot > 0)
return NULL;
return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where);
}


@ -15,6 +15,7 @@
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/msi.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
@ -162,6 +163,7 @@ struct mtk_pcie_soc {
* @phy: pointer to PHY control block
* @lane: lane count
* @slot: port slot
* @irq: GIC irq
* @irq_domain: legacy INTx IRQ domain
* @inner_domain: inner IRQ domain
* @msi_domain: MSI IRQ domain
@ -182,6 +184,7 @@ struct mtk_pcie_port {
struct phy *phy;
u32 lane;
u32 slot;
int irq;
struct irq_domain *irq_domain;
struct irq_domain *inner_domain;
struct irq_domain *msi_domain;
@ -225,10 +228,8 @@ static void mtk_pcie_subsys_powerdown(struct mtk_pcie *pcie)
clk_disable_unprepare(pcie->free_ck);
if (dev->pm_domain) {
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
}
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
}
static void mtk_pcie_port_free(struct mtk_pcie_port *port)
@ -337,6 +338,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus,
{
struct mtk_pcie *pcie = bus->sysdata;
struct mtk_pcie_port *port;
struct pci_dev *dev = NULL;
/*
* Walk the bus hierarchy to get the devfn value
* of the port in the root bus.
*/
while (bus && bus->number) {
dev = bus->self;
bus = dev->bus;
devfn = dev->devfn;
}
list_for_each_entry(port, &pcie->ports, list)
if (port->slot == PCI_SLOT(devfn))
@ -383,75 +395,6 @@ static struct pci_ops mtk_pcie_ops_v2 = {
.write = mtk_pcie_config_write,
};
static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = port->pcie;
struct resource *mem = &pcie->mem;
const struct mtk_pcie_soc *soc = port->pcie->soc;
u32 val;
size_t size;
int err;
/* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
if (pcie->base) {
val = readl(pcie->base + PCIE_SYS_CFG_V2);
val |= PCIE_CSR_LTSSM_EN(port->slot) |
PCIE_CSR_ASPM_L1_EN(port->slot);
writel(val, pcie->base + PCIE_SYS_CFG_V2);
}
/* Assert all reset signals */
writel(0, port->base + PCIE_RST_CTRL);
/*
* Enable PCIe link down reset, if link status changed from link up to
* link down, this will reset MAC control registers and configuration
* space.
*/
writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
/* De-assert PHY, PE, PIPE, MAC and configuration reset */
val = readl(port->base + PCIE_RST_CTRL);
val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |
PCIE_MAC_SRSTB | PCIE_CRSTB;
writel(val, port->base + PCIE_RST_CTRL);
/* Set up vendor ID and class code */
if (soc->need_fix_class_id) {
val = PCI_VENDOR_ID_MEDIATEK;
writew(val, port->base + PCIE_CONF_VEND_ID);
val = PCI_CLASS_BRIDGE_HOST;
writew(val, port->base + PCIE_CONF_CLASS_ID);
}
/* 100ms timeout value should be enough for Gen1/2 training */
err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
!!(val & PCIE_PORT_LINKUP_V2), 20,
100 * USEC_PER_MSEC);
if (err)
return -ETIMEDOUT;
/* Set INTx mask */
val = readl(port->base + PCIE_INT_MASK);
val &= ~INTX_MASK;
writel(val, port->base + PCIE_INT_MASK);
/* Set AHB to PCIe translation windows */
size = mem->end - mem->start;
val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
val = upper_32_bits(mem->start);
writel(val, port->base + PCIE_AHB_TRANS_BASE0_H);
/* Set PCIe to AXI translation memory space.*/
val = fls(0xffffffff) | WIN_ENABLE;
writel(val, port->base + PCIE_AXI_WINDOW0);
return 0;
}
static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
@ -590,6 +533,27 @@ static void mtk_pcie_enable_msi(struct mtk_pcie_port *port)
writel(val, port->base + PCIE_INT_MASK);
}
static void mtk_pcie_irq_teardown(struct mtk_pcie *pcie)
{
struct mtk_pcie_port *port, *tmp;
list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
irq_set_chained_handler_and_data(port->irq, NULL, NULL);
if (port->irq_domain)
irq_domain_remove(port->irq_domain);
if (IS_ENABLED(CONFIG_PCI_MSI)) {
if (port->msi_domain)
irq_domain_remove(port->msi_domain);
if (port->inner_domain)
irq_domain_remove(port->inner_domain);
}
irq_dispose_mapping(port->irq);
}
}
static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
@ -628,8 +592,6 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,
ret = mtk_pcie_allocate_msi_domains(port);
if (ret)
return ret;
mtk_pcie_enable_msi(port);
}
return 0;
@ -682,7 +644,7 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
struct mtk_pcie *pcie = port->pcie;
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
int err, irq;
int err;
err = mtk_pcie_init_irq_domain(port, node);
if (err) {
@ -690,8 +652,81 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
return err;
}
irq = platform_get_irq(pdev, port->slot);
irq_set_chained_handler_and_data(irq, mtk_pcie_intr_handler, port);
port->irq = platform_get_irq(pdev, port->slot);
irq_set_chained_handler_and_data(port->irq,
mtk_pcie_intr_handler, port);
return 0;
}
static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = port->pcie;
struct resource *mem = &pcie->mem;
const struct mtk_pcie_soc *soc = port->pcie->soc;
u32 val;
size_t size;
int err;
/* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
if (pcie->base) {
val = readl(pcie->base + PCIE_SYS_CFG_V2);
val |= PCIE_CSR_LTSSM_EN(port->slot) |
PCIE_CSR_ASPM_L1_EN(port->slot);
writel(val, pcie->base + PCIE_SYS_CFG_V2);
}
/* Assert all reset signals */
writel(0, port->base + PCIE_RST_CTRL);
/*
* Enable PCIe link down reset, if link status changed from link up to
* link down, this will reset MAC control registers and configuration
* space.
*/
writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL);
/* De-assert PHY, PE, PIPE, MAC and configuration reset */
val = readl(port->base + PCIE_RST_CTRL);
val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB |
PCIE_MAC_SRSTB | PCIE_CRSTB;
writel(val, port->base + PCIE_RST_CTRL);
/* Set up vendor ID and class code */
if (soc->need_fix_class_id) {
val = PCI_VENDOR_ID_MEDIATEK;
writew(val, port->base + PCIE_CONF_VEND_ID);
val = PCI_CLASS_BRIDGE_PCI;
writew(val, port->base + PCIE_CONF_CLASS_ID);
}
/* 100ms timeout value should be enough for Gen1/2 training */
err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
!!(val & PCIE_PORT_LINKUP_V2), 20,
100 * USEC_PER_MSEC);
if (err)
return -ETIMEDOUT;
/* Set INTx mask */
val = readl(port->base + PCIE_INT_MASK);
val &= ~INTX_MASK;
writel(val, port->base + PCIE_INT_MASK);
if (IS_ENABLED(CONFIG_PCI_MSI))
mtk_pcie_enable_msi(port);
/* Set AHB to PCIe translation windows */
size = mem->end - mem->start;
val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
val = upper_32_bits(mem->start);
writel(val, port->base + PCIE_AHB_TRANS_BASE0_H);
/* Set PCIe to AXI translation memory space.*/
val = fls(0xffffffff) | WIN_ENABLE;
writel(val, port->base + PCIE_AXI_WINDOW0);
return 0;
}
@ -987,10 +1022,8 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie)
pcie->free_ck = NULL;
}
if (dev->pm_domain) {
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
}
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
/* enable top level clock */
err = clk_prepare_enable(pcie->free_ck);
@ -1002,10 +1035,8 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie)
return 0;
err_free_ck:
if (dev->pm_domain) {
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
}
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
return err;
}
@ -1109,36 +1140,10 @@ static int mtk_pcie_request_resources(struct mtk_pcie *pcie)
if (err < 0)
return err;
devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start);
return 0;
}
static int mtk_pcie_register_host(struct pci_host_bridge *host)
{
struct mtk_pcie *pcie = pci_host_bridge_priv(host);
struct pci_bus *child;
int err;
host->busnr = pcie->busn.start;
host->dev.parent = pcie->dev;
host->ops = pcie->soc->ops;
host->map_irq = of_irq_parse_and_map_pci;
host->swizzle_irq = pci_common_swizzle;
host->sysdata = pcie;
err = pci_scan_root_bus_bridge(host);
if (err < 0)
err = devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start);
if (err)
return err;
pci_bus_size_bridges(host->bus);
pci_bus_assign_resources(host->bus);
list_for_each_entry(child, &host->bus->children, node)
pcie_bus_configure_settings(child);
pci_bus_add_devices(host->bus);
return 0;
}
@ -1168,7 +1173,14 @@ static int mtk_pcie_probe(struct platform_device *pdev)
if (err)
goto put_resources;
err = mtk_pcie_register_host(host);
host->busnr = pcie->busn.start;
host->dev.parent = pcie->dev;
host->ops = pcie->soc->ops;
host->map_irq = of_irq_parse_and_map_pci;
host->swizzle_irq = pci_common_swizzle;
host->sysdata = pcie;
err = pci_host_probe(host);
if (err)
goto put_resources;
@ -1181,6 +1193,80 @@ put_resources:
return err;
}
static void mtk_pcie_free_resources(struct mtk_pcie *pcie)
{
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct list_head *windows = &host->windows;
pci_free_resource_list(windows);
}
static int mtk_pcie_remove(struct platform_device *pdev)
{
struct mtk_pcie *pcie = platform_get_drvdata(pdev);
struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
pci_stop_root_bus(host->bus);
pci_remove_root_bus(host->bus);
mtk_pcie_free_resources(pcie);
mtk_pcie_irq_teardown(pcie);
mtk_pcie_put_resources(pcie);
return 0;
}
static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev)
{
struct mtk_pcie *pcie = dev_get_drvdata(dev);
struct mtk_pcie_port *port;
if (list_empty(&pcie->ports))
return 0;
list_for_each_entry(port, &pcie->ports, list) {
clk_disable_unprepare(port->pipe_ck);
clk_disable_unprepare(port->obff_ck);
clk_disable_unprepare(port->axi_ck);
clk_disable_unprepare(port->aux_ck);
clk_disable_unprepare(port->ahb_ck);
clk_disable_unprepare(port->sys_ck);
phy_power_off(port->phy);
phy_exit(port->phy);
}
clk_disable_unprepare(pcie->free_ck);
return 0;
}
static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev)
{
struct mtk_pcie *pcie = dev_get_drvdata(dev);
struct mtk_pcie_port *port, *tmp;
if (list_empty(&pcie->ports))
return 0;
clk_prepare_enable(pcie->free_ck);
list_for_each_entry_safe(port, tmp, &pcie->ports, list)
mtk_pcie_enable_port(port);
/* In case the EP was removed while the system was suspended. */
if (list_empty(&pcie->ports))
clk_disable_unprepare(pcie->free_ck);
return 0;
}
static const struct dev_pm_ops mtk_pcie_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq,
mtk_pcie_resume_noirq)
};
static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
.ops = &mtk_pcie_ops,
.startup = mtk_pcie_startup_port,
@ -1209,10 +1295,13 @@ static const struct of_device_id mtk_pcie_ids[] = {
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
.remove = mtk_pcie_remove,
.driver = {
.name = "mtk-pcie",
.of_match_table = mtk_pcie_ids,
.suppress_bind_attrs = true,
.pm = &mtk_pcie_pm_ops,
},
};
builtin_platform_driver(mtk_pcie_driver);
module_platform_driver(mtk_pcie_driver);
MODULE_LICENSE("GPL v2");


@ -301,13 +301,6 @@ static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
struct platform_device *pdev = pcie->pdev;
struct device_node *node = dev->of_node;
struct resource *res;
const char *type;
type = of_get_property(node, "device_type", NULL);
if (!type || strcmp(type, "pci")) {
dev_err(dev, "invalid \"device_type\" %s\n", type);
return -EINVAL;
}
/* map config resource */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,


@ -777,16 +777,7 @@ static int nwl_pcie_parse_dt(struct nwl_pcie *pcie,
struct platform_device *pdev)
{
struct device *dev = pcie->dev;
struct device_node *node = dev->of_node;
struct resource *res;
const char *type;
/* Check for device type */
type = of_get_property(node, "device_type", NULL);
if (!type || strcmp(type, "pci")) {
dev_err(dev, "invalid \"device_type\" %s\n", type);
return -EINVAL;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg");
pcie->breg_base = devm_ioremap_resource(dev, res);


@ -574,15 +574,8 @@ static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
struct device *dev = port->dev;
struct device_node *node = dev->of_node;
struct resource regs;
const char *type;
int err;
type = of_get_property(node, "device_type", NULL);
if (!type || strcmp(type, "pci")) {
dev_err(dev, "invalid \"device_type\" %s\n", type);
return -EINVAL;
}
err = of_address_to_resource(node, 0, &regs);
if (err) {
dev_err(dev, "missing \"reg\" property\n");


@ -809,12 +809,12 @@ static void vmd_remove(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
vmd_detach_resources(vmd);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_stop_root_bus(vmd->bus);
pci_remove_root_bus(vmd->bus);
vmd_cleanup_srcu(vmd);
vmd_teardown_dma_ops(vmd);
vmd_detach_resources(vmd);
irq_domain_remove(vmd->irq_domain);
}

drivers/pci/hotplug/TODO (new file):

@ -0,0 +1,74 @@
Contributions are solicited in particular to remedy the following issues:
cpcihp:
* There are no implementations of the ->hardware_test, ->get_power and
->set_power callbacks in struct cpci_hp_controller_ops. Why were they
introduced? Can they be removed from the struct?
cpqphp:
* The driver spawns a kthread cpqhp_event_thread() which is woken by the
hardirq handler cpqhp_ctrl_intr(). Convert this to threaded IRQ handling.
The kthread is also woken from the timer pushbutton_helper_thread();
convert it to call irq_wake_thread(). Use pciehp as a template (a rough
sketch of the pattern follows this list).
* A large portion of cpqphp_ctrl.c and cpqphp_pci.c concerns resource
management. Doesn't this duplicate functionality in the core?
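
As a rough illustration of the threaded-IRQ conversion suggested for cpqphp above (all names such as cpqhp_hardirq() and cpqhp_setup_irq() are hypothetical placeholders, not the driver's actual functions):

	#include <linux/interrupt.h>

	/* Hypothetical sketch only: the hard IRQ half acks the interrupt and
	 * defers the heavy lifting to the IRQ thread, replacing the
	 * hand-rolled kthread. */
	static irqreturn_t cpqhp_hardirq(int irq, void *data)
	{
		/* Quick check/ack in hard IRQ context. */
		return IRQ_WAKE_THREAD;
	}

	static irqreturn_t cpqhp_thread_fn(int irq, void *data)
	{
		/* Runs in process context; does what cpqhp_event_thread() did. */
		return IRQ_HANDLED;
	}

	static int cpqhp_setup_irq(int irq, void *ctrl)
	{
		/* A timer can still poke the thread via irq_wake_thread(irq, ctrl). */
		return request_threaded_irq(irq, cpqhp_hardirq, cpqhp_thread_fn,
					    IRQF_SHARED, "cpqphp", ctrl);
	}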
ibmphp:
* Implementations of hotplug_slot_ops callbacks such as get_adapter_present()
in ibmphp_core.c create a copy of the struct slot on the stack, then perform
the actual operation on that copy. Determine if this overhead is necessary,
delete it if not. The functions also perform a NULL pointer check on the
struct hotplug_slot, which seems superfluous.
* Several functions access the pci_slot member in struct hotplug_slot even
though pci_hotplug.h declares it private. See get_max_bus_speed() for an
example. Either the pci_slot member should no longer be declared private
or ibmphp should store a pointer to its bus in struct slot. Probably the
former.
* The functions get_max_adapter_speed() and get_bus_name() are commented out.
Can they be deleted? There are also forward declarations at the top of
ibmphp_core.c as well as pointers in ibmphp_hotplug_slot_ops, likewise
commented out.
* ibmphp_init_devno() takes a struct slot **, it could instead take a
struct slot *.
* The return value of pci_hp_register() is not checked.
* iounmap(io_mem) is called in the error path of ebda_rsrc_controller()
and once more in the error path of its caller ibmphp_access_ebda().
* The various slot data structures are difficult to follow and need to be
simplified. A lot of functions are too large and too complex; they need
to be broken up into smaller, manageable pieces. Negative examples are
ebda_rsrc_controller() and configure_bridge().
* A large portion of ibmphp_res.c and ibmphp_pci.c concerns resource
management. Doesn't this duplicate functionality in the core?
sgi_hotplug:
* Several functions access the pci_slot member in struct hotplug_slot even
though pci_hotplug.h declares it private. See sn_hp_destroy() for an
example. Either the pci_slot member should no longer be declared private
or sgi_hotplug should store a pointer to it in struct slot. Probably the
former.
shpchp:
* There is only a single implementation of struct hpc_ops. Can the struct be
removed and its functions invoked directly? This has already been done in
pciehp with commit 82a9e79ef132 ("PCI: pciehp: remove hpc_ops"). Clarify
if there was a specific reason not to apply the same change to shpchp.
* The ->get_mode1_ECC_cap callback in shpchp_hpc_ops is never invoked.
Why was it introduced? Can it be removed?
* The hardirq handler shpc_isr() queues events on a workqueue. It can be
simplified by converting it to threaded IRQ handling. Use pciehp as a
template.


@ -33,15 +33,19 @@ struct acpiphp_slot;
* struct slot - slot information for each *physical* slot
*/
struct slot {
struct hotplug_slot *hotplug_slot;
struct hotplug_slot hotplug_slot;
struct acpiphp_slot *acpi_slot;
struct hotplug_slot_info info;
unsigned int sun; /* ACPI _SUN (Slot User Number) value */
};
static inline const char *slot_name(struct slot *slot)
{
return hotplug_slot_name(slot->hotplug_slot);
return hotplug_slot_name(&slot->hotplug_slot);
}
static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
return container_of(hotplug_slot, struct slot, hotplug_slot);
}
/*


@ -57,7 +57,7 @@ static int get_attention_status(struct hotplug_slot *slot, u8 *value);
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
static struct hotplug_slot_ops acpi_hotplug_slot_ops = {
static const struct hotplug_slot_ops acpi_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.set_attention_status = set_attention_status,
@ -118,7 +118,7 @@ EXPORT_SYMBOL_GPL(acpiphp_unregister_attention);
*/
static int enable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -135,7 +135,7 @@ static int enable_slot(struct hotplug_slot *hotplug_slot)
*/
static int disable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -179,7 +179,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
*/
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -225,7 +225,7 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
*/
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -245,7 +245,7 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
*/
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -266,39 +266,26 @@ int acpiphp_register_hotplug_slot(struct acpiphp_slot *acpiphp_slot,
if (!slot)
goto error;
slot->hotplug_slot = kzalloc(sizeof(*slot->hotplug_slot), GFP_KERNEL);
if (!slot->hotplug_slot)
goto error_slot;
slot->hotplug_slot->info = &slot->info;
slot->hotplug_slot->private = slot;
slot->hotplug_slot->ops = &acpi_hotplug_slot_ops;
slot->hotplug_slot.ops = &acpi_hotplug_slot_ops;
slot->acpi_slot = acpiphp_slot;
slot->hotplug_slot->info->power_status = acpiphp_get_power_status(slot->acpi_slot);
slot->hotplug_slot->info->attention_status = 0;
slot->hotplug_slot->info->latch_status = acpiphp_get_latch_status(slot->acpi_slot);
slot->hotplug_slot->info->adapter_status = acpiphp_get_adapter_status(slot->acpi_slot);
acpiphp_slot->slot = slot;
slot->sun = sun;
snprintf(name, SLOT_NAME_SIZE, "%u", sun);
retval = pci_hp_register(slot->hotplug_slot, acpiphp_slot->bus,
retval = pci_hp_register(&slot->hotplug_slot, acpiphp_slot->bus,
acpiphp_slot->device, name);
if (retval == -EBUSY)
goto error_hpslot;
goto error_slot;
if (retval) {
pr_err("pci_hp_register failed with error %d\n", retval);
goto error_hpslot;
goto error_slot;
}
pr_info("Slot [%s] registered\n", slot_name(slot));
return 0;
error_hpslot:
kfree(slot->hotplug_slot);
error_slot:
kfree(slot);
error:
@ -312,8 +299,7 @@ void acpiphp_unregister_hotplug_slot(struct acpiphp_slot *acpiphp_slot)
pr_info("Slot [%s] unregistered\n", slot_name(slot));
pci_hp_deregister(slot->hotplug_slot);
kfree(slot->hotplug_slot);
pci_hp_deregister(&slot->hotplug_slot);
kfree(slot);
}


@ -41,7 +41,7 @@ MODULE_VERSION(DRIVER_VERSION);
#define IBM_HARDWARE_ID1 "IBM37D0"
#define IBM_HARDWARE_ID2 "IBM37D4"
#define hpslot_to_sun(A) (((struct slot *)((A)->private))->sun)
#define hpslot_to_sun(A) (to_slot(A)->sun)
/* union apci_descriptor - allows access to the
* various device descriptors that are embedded in the


@ -32,8 +32,10 @@ struct slot {
unsigned int devfn;
struct pci_bus *bus;
struct pci_dev *dev;
unsigned int latch_status:1;
unsigned int adapter_status:1;
unsigned int extracting;
struct hotplug_slot *hotplug_slot;
struct hotplug_slot hotplug_slot;
struct list_head slot_list;
};
@ -58,7 +60,12 @@ struct cpci_hp_controller {
static inline const char *slot_name(struct slot *slot)
{
return hotplug_slot_name(slot->hotplug_slot);
return hotplug_slot_name(&slot->hotplug_slot);
}
static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
return container_of(hotplug_slot, struct slot, hotplug_slot);
}
int cpci_hp_register_controller(struct cpci_hp_controller *controller);


@ -57,7 +57,7 @@ static int get_attention_status(struct hotplug_slot *slot, u8 *value);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
static struct hotplug_slot_ops cpci_hotplug_slot_ops = {
static const struct hotplug_slot_ops cpci_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.set_attention_status = set_attention_status,
@ -67,30 +67,10 @@ static struct hotplug_slot_ops cpci_hotplug_slot_ops = {
.get_latch_status = get_latch_status,
};
static int
update_latch_status(struct hotplug_slot *hotplug_slot, u8 value)
{
struct hotplug_slot_info info;
memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info));
info.latch_status = value;
return pci_hp_change_slot_info(hotplug_slot, &info);
}
static int
update_adapter_status(struct hotplug_slot *hotplug_slot, u8 value)
{
struct hotplug_slot_info info;
memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info));
info.adapter_status = value;
return pci_hp_change_slot_info(hotplug_slot, &info);
}
static int
enable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
int retval = 0;
dbg("%s - physical_slot = %s", __func__, slot_name(slot));
@ -103,7 +83,7 @@ enable_slot(struct hotplug_slot *hotplug_slot)
static int
disable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
int retval = 0;
dbg("%s - physical_slot = %s", __func__, slot_name(slot));
@ -135,8 +115,7 @@ disable_slot(struct hotplug_slot *hotplug_slot)
goto disable_error;
}
if (update_adapter_status(slot->hotplug_slot, 0))
warn("failure to update adapter file");
slot->adapter_status = 0;
if (slot->extracting) {
slot->extracting = 0;
@ -160,7 +139,7 @@ cpci_get_power_status(struct slot *slot)
static int
get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
*value = cpci_get_power_status(slot);
return 0;
@ -169,7 +148,7 @@ get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
static int
get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
*value = cpci_get_attention_status(slot);
return 0;
@ -178,27 +157,29 @@ get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
static int
set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
{
return cpci_set_attention_status(hotplug_slot->private, status);
return cpci_set_attention_status(to_slot(hotplug_slot), status);
}
static int
get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
*value = hotplug_slot->info->adapter_status;
struct slot *slot = to_slot(hotplug_slot);
*value = slot->adapter_status;
return 0;
}
static int
get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
*value = hotplug_slot->info->latch_status;
struct slot *slot = to_slot(hotplug_slot);
*value = slot->latch_status;
return 0;
}
static void release_slot(struct slot *slot)
{
kfree(slot->hotplug_slot->info);
kfree(slot->hotplug_slot);
pci_dev_put(slot->dev);
kfree(slot);
}
@ -209,8 +190,6 @@ int
cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last)
{
struct slot *slot;
struct hotplug_slot *hotplug_slot;
struct hotplug_slot_info *info;
char name[SLOT_NAME_SIZE];
int status;
int i;
@ -229,43 +208,19 @@ cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last)
goto error;
}
hotplug_slot =
kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
if (!hotplug_slot) {
status = -ENOMEM;
goto error_slot;
}
slot->hotplug_slot = hotplug_slot;
info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
if (!info) {
status = -ENOMEM;
goto error_hpslot;
}
hotplug_slot->info = info;
slot->bus = bus;
slot->number = i;
slot->devfn = PCI_DEVFN(i, 0);
snprintf(name, SLOT_NAME_SIZE, "%02x:%02x", bus->number, i);
hotplug_slot->private = slot;
hotplug_slot->ops = &cpci_hotplug_slot_ops;
/*
* Initialize the slot info structure with some known
* good values.
*/
dbg("initializing slot %s", name);
info->power_status = cpci_get_power_status(slot);
info->attention_status = cpci_get_attention_status(slot);
slot->hotplug_slot.ops = &cpci_hotplug_slot_ops;
dbg("registering slot %s", name);
status = pci_hp_register(slot->hotplug_slot, bus, i, name);
status = pci_hp_register(&slot->hotplug_slot, bus, i, name);
if (status) {
err("pci_hp_register failed with error %d", status);
goto error_info;
goto error_slot;
}
dbg("slot registered with name: %s", slot_name(slot));
@ -276,10 +231,6 @@ cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last)
up_write(&list_rwsem);
}
return 0;
error_info:
kfree(info);
error_hpslot:
kfree(hotplug_slot);
error_slot:
kfree(slot);
error:
@ -305,7 +256,7 @@ cpci_hp_unregister_bus(struct pci_bus *bus)
slots--;
dbg("deregistering slot %s", slot_name(slot));
pci_hp_deregister(slot->hotplug_slot);
pci_hp_deregister(&slot->hotplug_slot);
release_slot(slot);
}
}
@ -359,10 +310,8 @@ init_slots(int clear_ins)
__func__, slot_name(slot));
dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0));
if (dev) {
if (update_adapter_status(slot->hotplug_slot, 1))
warn("failure to update adapter file");
if (update_latch_status(slot->hotplug_slot, 1))
warn("failure to update latch file");
slot->adapter_status = 1;
slot->latch_status = 1;
slot->dev = dev;
}
}
@ -424,11 +373,8 @@ check_slots(void)
dbg("%s - slot %s HS_CSR (2) = %04x",
__func__, slot_name(slot), hs_csr);
if (update_latch_status(slot->hotplug_slot, 1))
warn("failure to update latch file");
if (update_adapter_status(slot->hotplug_slot, 1))
warn("failure to update adapter file");
slot->latch_status = 1;
slot->adapter_status = 1;
cpci_led_off(slot);
@ -449,9 +395,7 @@ check_slots(void)
__func__, slot_name(slot), hs_csr);
if (!slot->extracting) {
if (update_latch_status(slot->hotplug_slot, 0))
warn("failure to update latch file");
slot->latch_status = 0;
slot->extracting = 1;
atomic_inc(&extracting);
}
@ -465,8 +409,7 @@ check_slots(void)
*/
err("card in slot %s was improperly removed",
slot_name(slot));
if (update_adapter_status(slot->hotplug_slot, 0))
warn("failure to update adapter file");
slot->adapter_status = 0;
slot->extracting = 0;
atomic_dec(&extracting);
}
@ -615,7 +558,7 @@ cleanup_slots(void)
goto cleanup_null;
list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) {
list_del(&slot->slot_list);
pci_hp_deregister(slot->hotplug_slot);
pci_hp_deregister(&slot->hotplug_slot);
release_slot(slot);
}
cleanup_null:
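
The cpci changes above are the template for every driver converted in this series: the separately allocated hotplug_slot and hotplug_slot_info go away, the hotplug_slot is embedded in the driver's own struct slot, and the cached status bits become plain driver fields. A minimal sketch of the resulting shape, with stand-in types so it compiles as plain C (these are not the driver's real definitions):

#include <stdlib.h>

struct hotplug_slot { const void *ops; };   /* stand-in for the core type */

struct slot {
	unsigned char latch_status;         /* was hotplug_slot->info->latch_status */
	unsigned char adapter_status;       /* was hotplug_slot->info->adapter_status */
	struct hotplug_slot hotplug_slot;   /* embedded, no longer a pointer */
};

static struct slot *slot_alloc(void)
{
	return calloc(1, sizeof(struct slot)); /* one allocation instead of three */
}

static void slot_free(struct slot *slot)
{
	free(slot);                            /* and a single kfree()-equivalent */
}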


@ -194,8 +194,7 @@ int cpci_led_on(struct slot *slot)
slot->devfn,
hs_cap + 2,
hs_csr)) {
err("Could not set LOO for slot %s",
hotplug_slot_name(slot->hotplug_slot));
err("Could not set LOO for slot %s", slot_name(slot));
return -ENODEV;
}
}
@ -223,8 +222,7 @@ int cpci_led_off(struct slot *slot)
slot->devfn,
hs_cap + 2,
hs_csr)) {
err("Could not clear LOO for slot %s",
hotplug_slot_name(slot->hotplug_slot));
err("Could not clear LOO for slot %s", slot_name(slot));
return -ENODEV;
}
}


@ -260,7 +260,7 @@ struct slot {
u8 hp_slot;
struct controller *ctrl;
void __iomem *p_sm_slot;
struct hotplug_slot *hotplug_slot;
struct hotplug_slot hotplug_slot;
};
struct pci_resource {
@ -445,7 +445,12 @@ extern u8 cpqhp_disk_irq;
static inline const char *slot_name(struct slot *slot)
{
return hotplug_slot_name(slot->hotplug_slot);
return hotplug_slot_name(&slot->hotplug_slot);
}
static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
return container_of(hotplug_slot, struct slot, hotplug_slot);
}
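
The to_slot() helper above relies on container_of(), which is nothing more than pointer arithmetic: subtract the member's offset from the member pointer to recover the enclosing structure. A self-contained demonstration (the macro is a simplified version of the kernel's, without type checking):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct hotplug_slot { int dummy; };

struct slot {
	int number;
	struct hotplug_slot hotplug_slot;
};

int main(void)
{
	struct slot s = { .number = 7 };
	struct hotplug_slot *hs = &s.hotplug_slot;  /* what the core hands back */
	struct slot *recovered = container_of(hs, struct slot, hotplug_slot);

	printf("%d\n", recovered->number);          /* prints 7 */
	return 0;
}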
/*


@ -121,7 +121,6 @@ static int init_SERR(struct controller *ctrl)
{
u32 tempdword;
u32 number_of_slots;
u8 physical_slot;
if (!ctrl)
return 1;
@ -131,7 +130,6 @@ static int init_SERR(struct controller *ctrl)
number_of_slots = readb(ctrl->hpc_reg + SLOT_MASK) & 0x0F;
/* Loop through slots */
while (number_of_slots) {
physical_slot = tempdword;
writeb(0, ctrl->hpc_reg + SLOT_SERR);
tempdword++;
number_of_slots--;
@ -275,9 +273,7 @@ static int ctrl_slot_cleanup(struct controller *ctrl)
while (old_slot) {
next_slot = old_slot->next;
pci_hp_deregister(old_slot->hotplug_slot);
kfree(old_slot->hotplug_slot->info);
kfree(old_slot->hotplug_slot);
pci_hp_deregister(&old_slot->hotplug_slot);
kfree(old_slot);
old_slot = next_slot;
}
@ -419,7 +415,7 @@ cpqhp_set_attention_status(struct controller *ctrl, struct pci_func *func,
static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
{
struct pci_func *slot_func;
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
u8 bus;
u8 devfn;
@ -446,7 +442,7 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
static int process_SI(struct hotplug_slot *hotplug_slot)
{
struct pci_func *slot_func;
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
u8 bus;
u8 devfn;
@ -478,7 +474,7 @@ static int process_SI(struct hotplug_slot *hotplug_slot)
static int process_SS(struct hotplug_slot *hotplug_slot)
{
struct pci_func *slot_func;
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
u8 bus;
u8 devfn;
@ -505,7 +501,7 @@ static int process_SS(struct hotplug_slot *hotplug_slot)
static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -516,7 +512,7 @@ static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value)
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -527,7 +523,7 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -538,7 +534,7 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -550,7 +546,7 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
struct controller *ctrl = slot->ctrl;
dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
@ -560,7 +556,7 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
return 0;
}
static struct hotplug_slot_ops cpqphp_hotplug_slot_ops = {
static const struct hotplug_slot_ops cpqphp_hotplug_slot_ops = {
.set_attention_status = set_attention_status,
.enable_slot = process_SI,
.disable_slot = process_SS,
@ -578,8 +574,6 @@ static int ctrl_slot_setup(struct controller *ctrl,
void __iomem *smbios_table)
{
struct slot *slot;
struct hotplug_slot *hotplug_slot;
struct hotplug_slot_info *hotplug_slot_info;
struct pci_bus *bus = ctrl->pci_bus;
u8 number_of_slots;
u8 slot_device;
@ -605,22 +599,6 @@ static int ctrl_slot_setup(struct controller *ctrl,
goto error;
}
slot->hotplug_slot = kzalloc(sizeof(*(slot->hotplug_slot)),
GFP_KERNEL);
if (!slot->hotplug_slot) {
result = -ENOMEM;
goto error_slot;
}
hotplug_slot = slot->hotplug_slot;
hotplug_slot->info = kzalloc(sizeof(*(hotplug_slot->info)),
GFP_KERNEL);
if (!hotplug_slot->info) {
result = -ENOMEM;
goto error_hpslot;
}
hotplug_slot_info = hotplug_slot->info;
slot->ctrl = ctrl;
slot->bus = ctrl->bus;
slot->device = slot_device;
@ -669,29 +647,20 @@ static int ctrl_slot_setup(struct controller *ctrl,
((read_slot_enable(ctrl) << 2) >> ctrl_slot) & 0x04;
/* register this slot with the hotplug pci core */
hotplug_slot->private = slot;
snprintf(name, SLOT_NAME_SIZE, "%u", slot->number);
hotplug_slot->ops = &cpqphp_hotplug_slot_ops;
hotplug_slot_info->power_status = get_slot_enabled(ctrl, slot);
hotplug_slot_info->attention_status =
cpq_get_attention_status(ctrl, slot);
hotplug_slot_info->latch_status =
cpq_get_latch_status(ctrl, slot);
hotplug_slot_info->adapter_status =
get_presence_status(ctrl, slot);
slot->hotplug_slot.ops = &cpqphp_hotplug_slot_ops;
dbg("registering bus %d, dev %d, number %d, ctrl->slot_device_offset %d, slot %d\n",
slot->bus, slot->device,
slot->number, ctrl->slot_device_offset,
slot_number);
result = pci_hp_register(hotplug_slot,
result = pci_hp_register(&slot->hotplug_slot,
ctrl->pci_dev->bus,
slot->device,
name);
if (result) {
err("pci_hp_register failed with error %d\n", result);
goto error_info;
goto error_slot;
}
slot->next = ctrl->slot;
@ -703,10 +672,6 @@ static int ctrl_slot_setup(struct controller *ctrl,
}
return 0;
error_info:
kfree(hotplug_slot_info);
error_hpslot:
kfree(hotplug_slot);
error_slot:
kfree(slot);
error:
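
With the two intermediate allocations gone, the error ladder in ctrl_slot_setup() collapses from three labels to one. The surviving idiom, sketched with a hypothetical register_with_core() standing in for pci_hp_register():

static int slot_setup_sketch(struct controller *ctrl)
{
	struct slot *slot;
	int result;

	slot = kzalloc(sizeof(*slot), GFP_KERNEL);
	if (!slot)
		return -ENOMEM;             /* nothing to unwind yet */

	result = register_with_core(slot);  /* hypothetical stand-in */
	if (result)
		goto error_slot;            /* undo exactly one allocation */

	return 0;

error_slot:
	kfree(slot);
	return result;
}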


@ -1130,9 +1130,7 @@ static u8 set_controller_speed(struct controller *ctrl, u8 adapter_speed, u8 hp_
for (slot = ctrl->slot; slot; slot = slot->next) {
if (slot->device == (hp_slot + ctrl->slot_device_offset))
continue;
if (!slot->hotplug_slot || !slot->hotplug_slot->info)
continue;
if (slot->hotplug_slot->info->adapter_status == 0)
if (get_presence_status(ctrl, slot) == 0)
continue;
/* If another adapter is running on the same segment but at a
* lower speed/mode, we allow the new adapter to function at
@ -1767,24 +1765,6 @@ void cpqhp_event_stop_thread(void)
}
static int update_slot_info(struct controller *ctrl, struct slot *slot)
{
struct hotplug_slot_info *info;
int result;
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
info->power_status = get_slot_enabled(ctrl, slot);
info->attention_status = cpq_get_attention_status(ctrl, slot);
info->latch_status = cpq_get_latch_status(ctrl, slot);
info->adapter_status = get_presence_status(ctrl, slot);
result = pci_hp_change_slot_info(slot->hotplug_slot, info);
kfree(info);
return result;
}
static void interrupt_event_handler(struct controller *ctrl)
{
int loop = 0;
@ -1884,9 +1864,6 @@ static void interrupt_event_handler(struct controller *ctrl)
/***********POWER FAULT */
else if (ctrl->event_queue[loop].event_type == INT_POWER_FAULT) {
dbg("power fault\n");
} else {
/* refresh notification */
update_slot_info(ctrl, p_slot);
}
ctrl->event_queue[loop].event_type = 0;
@ -2057,9 +2034,6 @@ int cpqhp_process_SI(struct controller *ctrl, struct pci_func *func)
if (rc)
dbg("%s: rc = %d\n", __func__, rc);
if (p_slot)
update_slot_info(ctrl, p_slot);
return rc;
}
@ -2125,9 +2099,6 @@ int cpqhp_process_SS(struct controller *ctrl, struct pci_func *func)
rc = 1;
}
if (p_slot)
update_slot_info(ctrl, p_slot);
return rc;
}
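
update_slot_info() existed only to refresh the values cached in hotplug_slot_info. With the cache gone, the core invokes the driver's get_* callbacks at read time, and set_controller_speed() above likewise asks the hardware directly via get_presence_status(). The resulting accessor shape, paraphrasing the driver's own callbacks from the earlier hunks:

static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
	struct slot *slot = to_slot(hotplug_slot);
	struct controller *ctrl = slot->ctrl;

	*value = get_presence_status(ctrl, slot);   /* read the hardware now */
	return 0;
}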


@ -698,7 +698,7 @@ struct slot {
u8 supported_bus_mode;
u8 flag; /* this is for disable slot and polling */
u8 ctlr_index;
struct hotplug_slot *hotplug_slot;
struct hotplug_slot hotplug_slot;
struct controller *ctrl;
struct pci_func *func;
u8 irq[4];
@ -740,7 +740,12 @@ int ibmphp_do_disable_slot(struct slot *slot_cur);
int ibmphp_update_slot_info(struct slot *); /* This function is called from HPC, so we need it to not be static */
int ibmphp_configure_card(struct pci_func *, u8);
int ibmphp_unconfigure_card(struct slot **, int);
extern struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
return container_of(hotplug_slot, struct slot, hotplug_slot);
}
#endif //__IBMPHP_H
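
Constifying the shared ops table lets the compiler emit it into .rodata, so the function pointers can no longer be overwritten at runtime. A minimal illustration in plain C:

struct ops {
	int (*enable)(void);
	int (*disable)(void);
};

static int do_enable(void)  { return 0; }
static int do_disable(void) { return 0; }

/* placed in read-only memory; writing to it would fault */
static const struct ops example_ops = {
	.enable  = do_enable,
	.disable = do_disable,
};

This is also why the owner and mod_name bookkeeping moves out of hotplug_slot_ops and into hotplug_slot in the core changes further down: those fields are written at registration time and cannot live in a const table.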


@ -247,11 +247,8 @@ static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value)
break;
}
if (rc == 0) {
pslot = hotplug_slot->private;
if (pslot)
rc = ibmphp_hpc_writeslot(pslot, cmd);
else
rc = -ENODEV;
pslot = to_slot(hotplug_slot);
rc = ibmphp_hpc_writeslot(pslot, cmd);
}
} else
rc = -ENODEV;
@ -273,19 +270,15 @@ static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
ibmphp_lock_operations();
if (hotplug_slot) {
pslot = hotplug_slot->private;
if (pslot) {
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&(myslot.status));
if (!rc)
rc = ibmphp_hpc_readslot(pslot,
READ_EXTSLOTSTATUS,
&(myslot.ext_status));
if (!rc)
*value = SLOT_ATTN(myslot.status,
myslot.ext_status);
}
pslot = to_slot(hotplug_slot);
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&myslot.status);
if (!rc)
rc = ibmphp_hpc_readslot(pslot, READ_EXTSLOTSTATUS,
&myslot.ext_status);
if (!rc)
*value = SLOT_ATTN(myslot.status, myslot.ext_status);
}
ibmphp_unlock_operations();
@ -303,14 +296,12 @@ static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
(ulong) hotplug_slot, (ulong) value);
ibmphp_lock_operations();
if (hotplug_slot) {
pslot = hotplug_slot->private;
if (pslot) {
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&(myslot.status));
if (!rc)
*value = SLOT_LATCH(myslot.status);
}
pslot = to_slot(hotplug_slot);
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&myslot.status);
if (!rc)
*value = SLOT_LATCH(myslot.status);
}
ibmphp_unlock_operations();
@ -330,14 +321,12 @@ static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
(ulong) hotplug_slot, (ulong) value);
ibmphp_lock_operations();
if (hotplug_slot) {
pslot = hotplug_slot->private;
if (pslot) {
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&(myslot.status));
if (!rc)
*value = SLOT_PWRGD(myslot.status);
}
pslot = to_slot(hotplug_slot);
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&myslot.status);
if (!rc)
*value = SLOT_PWRGD(myslot.status);
}
ibmphp_unlock_operations();
@ -357,18 +346,16 @@ static int get_adapter_present(struct hotplug_slot *hotplug_slot, u8 *value)
(ulong) hotplug_slot, (ulong) value);
ibmphp_lock_operations();
if (hotplug_slot) {
pslot = hotplug_slot->private;
if (pslot) {
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&(myslot.status));
if (!rc) {
present = SLOT_PRESENT(myslot.status);
if (present == HPC_SLOT_EMPTY)
*value = 0;
else
*value = 1;
}
pslot = to_slot(hotplug_slot);
memcpy(&myslot, pslot, sizeof(struct slot));
rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
&myslot.status);
if (!rc) {
present = SLOT_PRESENT(myslot.status);
if (present == HPC_SLOT_EMPTY)
*value = 0;
else
*value = 1;
}
}
@ -382,7 +369,7 @@ static int get_max_bus_speed(struct slot *slot)
int rc = 0;
u8 mode = 0;
enum pci_bus_speed speed;
struct pci_bus *bus = slot->hotplug_slot->pci_slot->bus;
struct pci_bus *bus = slot->hotplug_slot.pci_slot->bus;
debug("%s - Entry slot[%p]\n", __func__, slot);
@ -582,29 +569,10 @@ static int validate(struct slot *slot_cur, int opn)
****************************************************************************/
int ibmphp_update_slot_info(struct slot *slot_cur)
{
struct hotplug_slot_info *info;
struct pci_bus *bus = slot_cur->hotplug_slot->pci_slot->bus;
int rc;
struct pci_bus *bus = slot_cur->hotplug_slot.pci_slot->bus;
u8 bus_speed;
u8 mode;
info = kmalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
if (!info)
return -ENOMEM;
info->power_status = SLOT_PWRGD(slot_cur->status);
info->attention_status = SLOT_ATTN(slot_cur->status,
slot_cur->ext_status);
info->latch_status = SLOT_LATCH(slot_cur->status);
if (!SLOT_PRESENT(slot_cur->status)) {
info->adapter_status = 0;
/* info->max_adapter_speed_status = MAX_ADAPTER_NONE; */
} else {
info->adapter_status = 1;
/* get_max_adapter_speed_1(slot_cur->hotplug_slot,
&info->max_adapter_speed_status, 0); */
}
bus_speed = slot_cur->bus_on->current_speed;
mode = slot_cur->bus_on->current_bus_mode;
@ -630,9 +598,7 @@ int ibmphp_update_slot_info(struct slot *slot_cur)
bus->cur_bus_speed = bus_speed;
// To do: bus_names
rc = pci_hp_change_slot_info(slot_cur->hotplug_slot, info);
kfree(info);
return rc;
return 0;
}
@ -673,7 +639,7 @@ static void free_slots(void)
list_for_each_entry_safe(slot_cur, next, &ibmphp_slot_head,
ibm_slot_list) {
pci_hp_del(slot_cur->hotplug_slot);
pci_hp_del(&slot_cur->hotplug_slot);
slot_cur->ctrl = NULL;
slot_cur->bus_on = NULL;
@ -683,9 +649,7 @@ static void free_slots(void)
*/
ibmphp_unconfigure_card(&slot_cur, -1);
pci_hp_destroy(slot_cur->hotplug_slot);
kfree(slot_cur->hotplug_slot->info);
kfree(slot_cur->hotplug_slot);
pci_hp_destroy(&slot_cur->hotplug_slot);
kfree(slot_cur);
}
debug("%s -- exit\n", __func__);
@ -1007,7 +971,7 @@ static int enable_slot(struct hotplug_slot *hs)
ibmphp_lock_operations();
debug("ENABLING SLOT........\n");
slot_cur = hs->private;
slot_cur = to_slot(hs);
rc = validate(slot_cur, ENABLE);
if (rc) {
@ -1095,8 +1059,7 @@ static int enable_slot(struct hotplug_slot *hs)
slot_cur->func = kzalloc(sizeof(struct pci_func), GFP_KERNEL);
if (!slot_cur->func) {
/* We cannot do update_slot_info here, since no memory for
* kmalloc n.e.ways, and update_slot_info allocates some */
/* do update_slot_info here? */
rc = -ENOMEM;
goto error_power;
}
@ -1169,7 +1132,7 @@ error_power:
**************************************************************/
static int ibmphp_disable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
struct slot *slot = to_slot(hotplug_slot);
int rc;
ibmphp_lock_operations();
@ -1259,7 +1222,7 @@ error:
goto exit;
}
struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
const struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
.set_attention_status = set_attention_status,
.enable_slot = enable_slot,
.disable_slot = ibmphp_disable_slot,
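
The dropped `if (pslot)` guards in the hunks above follow directly from the embedding: container_of() is plain arithmetic on a pointer the hotplug core has already validated, so it cannot yield NULL inside a live callback. Each accessor reduces to the shape below (read_status_from_hw() is a hypothetical helper, not a driver function):

static int get_status_sketch(struct hotplug_slot *hotplug_slot, u8 *value)
{
	struct slot *pslot = to_slot(hotplug_slot); /* never NULL here */

	return read_status_from_hw(pslot, value);   /* hypothetical */
}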


@ -666,36 +666,8 @@ static int fillslotinfo(struct hotplug_slot *hotplug_slot)
struct slot *slot;
int rc = 0;
if (!hotplug_slot || !hotplug_slot->private)
return -EINVAL;
slot = hotplug_slot->private;
slot = to_slot(hotplug_slot);
rc = ibmphp_hpc_readslot(slot, READ_ALLSTAT, NULL);
if (rc)
return rc;
// power - enabled:1 not:0
hotplug_slot->info->power_status = SLOT_POWER(slot->status);
// attention - off:0, on:1, blinking:2
hotplug_slot->info->attention_status = SLOT_ATTN(slot->status, slot->ext_status);
// latch - open:1 closed:0
hotplug_slot->info->latch_status = SLOT_LATCH(slot->status);
// pci board - present:1 not:0
if (SLOT_PRESENT(slot->status))
hotplug_slot->info->adapter_status = 1;
else
hotplug_slot->info->adapter_status = 0;
/*
if (slot->bus_on->supported_bus_mode
&& (slot->bus_on->supported_speed == BUS_SPEED_66))
hotplug_slot->info->max_bus_speed_status = BUS_SPEED_66PCIX;
else
hotplug_slot->info->max_bus_speed_status = slot->bus_on->supported_speed;
*/
return rc;
}
@ -712,7 +684,6 @@ static int __init ebda_rsrc_controller(void)
u8 ctlr_id, temp, bus_index;
u16 ctlr, slot, bus;
u16 slot_num, bus_num, index;
struct hotplug_slot *hp_slot_ptr;
struct controller *hpc_ptr;
struct ebda_hpc_bus *bus_ptr;
struct ebda_hpc_slot *slot_ptr;
@ -771,7 +742,7 @@ static int __init ebda_rsrc_controller(void)
bus_info_ptr1 = kzalloc(sizeof(struct bus_info), GFP_KERNEL);
if (!bus_info_ptr1) {
rc = -ENOMEM;
goto error_no_hp_slot;
goto error_no_slot;
}
bus_info_ptr1->slot_min = slot_ptr->slot_num;
bus_info_ptr1->slot_max = slot_ptr->slot_num;
@ -842,7 +813,7 @@ static int __init ebda_rsrc_controller(void)
(hpc_ptr->u.isa_ctlr.io_end - hpc_ptr->u.isa_ctlr.io_start + 1),
"ibmphp")) {
rc = -ENODEV;
goto error_no_hp_slot;
goto error_no_slot;
}
hpc_ptr->irq = readb(io_mem + addr + 4);
addr += 5;
@ -857,7 +828,7 @@ static int __init ebda_rsrc_controller(void)
break;
default:
rc = -ENODEV;
goto error_no_hp_slot;
goto error_no_slot;
}
//reorganize chassis' linked list
@ -870,19 +841,6 @@ static int __init ebda_rsrc_controller(void)
// register slots with hpc core as well as create linked list of ibm slot
for (index = 0; index < hpc_ptr->slot_count; index++) {
hp_slot_ptr = kzalloc(sizeof(*hp_slot_ptr), GFP_KERNEL);
if (!hp_slot_ptr) {
rc = -ENOMEM;
goto error_no_hp_slot;
}
hp_slot_ptr->info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
if (!hp_slot_ptr->info) {
rc = -ENOMEM;
goto error_no_hp_info;
}
tmp_slot = kzalloc(sizeof(*tmp_slot), GFP_KERNEL);
if (!tmp_slot) {
rc = -ENOMEM;
@ -909,7 +867,6 @@ static int __init ebda_rsrc_controller(void)
bus_info_ptr1 = ibmphp_find_same_bus_num(hpc_ptr->slots[index].slot_bus_num);
if (!bus_info_ptr1) {
kfree(tmp_slot);
rc = -ENODEV;
goto error;
}
@ -919,22 +876,19 @@ static int __init ebda_rsrc_controller(void)
tmp_slot->ctlr_index = hpc_ptr->slots[index].ctl_index;
tmp_slot->number = hpc_ptr->slots[index].slot_num;
tmp_slot->hotplug_slot = hp_slot_ptr;
hp_slot_ptr->private = tmp_slot;
rc = fillslotinfo(hp_slot_ptr);
rc = fillslotinfo(&tmp_slot->hotplug_slot);
if (rc)
goto error;
rc = ibmphp_init_devno((struct slot **) &hp_slot_ptr->private);
rc = ibmphp_init_devno(&tmp_slot);
if (rc)
goto error;
hp_slot_ptr->ops = &ibmphp_hotplug_slot_ops;
tmp_slot->hotplug_slot.ops = &ibmphp_hotplug_slot_ops;
// end of registering ibm slot with hotplug core
list_add(&((struct slot *)(hp_slot_ptr->private))->ibm_slot_list, &ibmphp_slot_head);
list_add(&tmp_slot->ibm_slot_list, &ibmphp_slot_head);
}
print_bus_info();
@ -944,7 +898,7 @@ static int __init ebda_rsrc_controller(void)
list_for_each_entry(tmp_slot, &ibmphp_slot_head, ibm_slot_list) {
snprintf(name, SLOT_NAME_SIZE, "%s", create_file_name(tmp_slot));
pci_hp_register(tmp_slot->hotplug_slot,
pci_hp_register(&tmp_slot->hotplug_slot,
pci_find_bus(0, tmp_slot->bus), tmp_slot->device, name);
}
@ -953,12 +907,8 @@ static int __init ebda_rsrc_controller(void)
return 0;
error:
kfree(hp_slot_ptr->private);
kfree(tmp_slot);
error_no_slot:
kfree(hp_slot_ptr->info);
error_no_hp_info:
kfree(hp_slot_ptr);
error_no_hp_slot:
free_ebda_hpc(hpc_ptr);
error_no_hpc:
iounmap(io_mem);


@ -49,15 +49,13 @@ static DEFINE_MUTEX(pci_hp_mutex);
#define GET_STATUS(name, type) \
static int get_##name(struct hotplug_slot *slot, type *value) \
{ \
struct hotplug_slot_ops *ops = slot->ops; \
const struct hotplug_slot_ops *ops = slot->ops; \
int retval = 0; \
if (!try_module_get(ops->owner)) \
if (!try_module_get(slot->owner)) \
return -ENODEV; \
if (ops->get_##name) \
retval = ops->get_##name(slot, value); \
else \
*value = slot->info->name; \
module_put(ops->owner); \
module_put(slot->owner); \
return retval; \
}
@ -90,7 +88,7 @@ static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
power = (u8)(lpower & 0xff);
dbg("power = %d\n", power);
if (!try_module_get(slot->ops->owner)) {
if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
@ -109,7 +107,7 @@ static ssize_t power_write_file(struct pci_slot *pci_slot, const char *buf,
err("Illegal value specified for power\n");
retval = -EINVAL;
}
module_put(slot->ops->owner);
module_put(slot->owner);
exit:
if (retval)
@ -138,7 +136,8 @@ static ssize_t attention_read_file(struct pci_slot *pci_slot, char *buf)
static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
size_t count)
{
struct hotplug_slot_ops *ops = pci_slot->hotplug->ops;
struct hotplug_slot *slot = pci_slot->hotplug;
const struct hotplug_slot_ops *ops = slot->ops;
unsigned long lattention;
u8 attention;
int retval = 0;
@ -147,13 +146,13 @@ static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
attention = (u8)(lattention & 0xff);
dbg(" - attention = %d\n", attention);
if (!try_module_get(ops->owner)) {
if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
if (ops->set_attention_status)
retval = ops->set_attention_status(pci_slot->hotplug, attention);
module_put(ops->owner);
retval = ops->set_attention_status(slot, attention);
module_put(slot->owner);
exit:
if (retval)
@ -213,13 +212,13 @@ static ssize_t test_write_file(struct pci_slot *pci_slot, const char *buf,
test = (u32)(ltest & 0xffffffff);
dbg("test = %d\n", test);
if (!try_module_get(slot->ops->owner)) {
if (!try_module_get(slot->owner)) {
retval = -ENODEV;
goto exit;
}
if (slot->ops->hardware_test)
retval = slot->ops->hardware_test(slot, test);
module_put(slot->ops->owner);
module_put(slot->owner);
exit:
if (retval)
@ -444,11 +443,11 @@ int __pci_hp_initialize(struct hotplug_slot *slot, struct pci_bus *bus,
if (slot == NULL)
return -ENODEV;
if ((slot->info == NULL) || (slot->ops == NULL))
if (slot->ops == NULL)
return -EINVAL;
slot->ops->owner = owner;
slot->ops->mod_name = mod_name;
slot->owner = owner;
slot->mod_name = mod_name;
/*
* No problems if we call this interface from both ACPI_PCI_SLOT
@ -559,28 +558,6 @@ void pci_hp_destroy(struct hotplug_slot *slot)
}
EXPORT_SYMBOL_GPL(pci_hp_destroy);
/**
* pci_hp_change_slot_info - changes the slot's information structure in the core
* @slot: pointer to the slot whose info has changed
* @info: pointer to the info copy into the slot's info structure
*
* @slot must have been registered with the pci
* hotplug subsystem previously with a call to pci_hp_register().
*
* Returns 0 if successful, anything else for an error.
*/
int pci_hp_change_slot_info(struct hotplug_slot *slot,
struct hotplug_slot_info *info)
{
if (!slot || !info)
return -ENODEV;
memcpy(slot->info, info, sizeof(struct hotplug_slot_info));
return 0;
}
EXPORT_SYMBOL_GPL(pci_hp_change_slot_info);
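
For reference, GET_STATUS(power_status, u8) in the rewritten core expands to roughly the following; note the module pin is now taken on slot->owner rather than on a field of the (now const) ops table:

static int get_power_status(struct hotplug_slot *slot, u8 *value)
{
	const struct hotplug_slot_ops *ops = slot->ops;
	int retval = 0;

	if (!try_module_get(slot->owner))
		return -ENODEV;
	if (ops->get_power_status)
		retval = ops->get_power_status(slot, value);
	module_put(slot->owner);
	return retval;
}
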
static int __init pci_hotplug_init(void)
{
int result;


@ -19,7 +19,6 @@
#include <linux/pci.h>
#include <linux/pci_hotplug.h>
#include <linux/delay.h>
#include <linux/sched/signal.h> /* signal_pending() */
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/workqueue.h>
@ -59,72 +58,64 @@ do { \
#define SLOT_NAME_SIZE 10
/**
* struct slot - PCIe hotplug slot
* @state: current state machine position
* @ctrl: pointer to the slot's controller structure
* @hotplug_slot: pointer to the structure registered with the PCI hotplug core
* @work: work item to turn the slot on or off after 5 seconds in response to
* an Attention Button press
* @lock: protects reads and writes of @state;
* protects scheduling, execution and cancellation of @work
*/
struct slot {
u8 state;
struct controller *ctrl;
struct hotplug_slot *hotplug_slot;
struct delayed_work work;
struct mutex lock;
};
/**
* struct controller - PCIe hotplug controller
* @ctrl_lock: serializes writes to the Slot Control register
* @pcie: pointer to the controller's PCIe port service device
* @reset_lock: prevents access to the Data Link Layer Link Active bit in the
* Link Status register and to the Presence Detect State bit in the Slot
* Status register during a slot reset which may cause them to flap
* @slot: pointer to the controller's slot structure
* @queue: wait queue to wake up on reception of a Command Completed event,
* used for synchronous writes to the Slot Control register
* @slot_cap: cached copy of the Slot Capabilities register
* @slot_ctrl: cached copy of the Slot Control register
* @poll_thread: thread to poll for slot events if no IRQ is available,
* enabled with pciehp_poll_mode module parameter
* @ctrl_lock: serializes writes to the Slot Control register
* @cmd_started: jiffies when the Slot Control register was last written;
* the next write is allowed 1 second later, absent a Command Completed
* interrupt (PCIe r4.0, sec 6.7.3.2)
* @cmd_busy: flag set on Slot Control register write, cleared by IRQ handler
* on reception of a Command Completed event
* @link_active_reporting: cached copy of Data Link Layer Link Active Reporting
* Capable bit in Link Capabilities register; if this bit is zero, the
* Data Link Layer Link Active bit in the Link Status register will never
* be set and the driver is thus confined to wait 1 second before assuming
* the link to a hotplugged device is up and accessing it
* @queue: wait queue to wake up on reception of a Command Completed event,
* used for synchronous writes to the Slot Control register
* @pending_events: used by the IRQ handler to save events retrieved from the
* Slot Status register for later consumption by the IRQ thread
* @notification_enabled: whether the IRQ was requested successfully
* @power_fault_detected: whether a power fault was detected by the hardware
* that has not yet been cleared by the user
* @pending_events: used by the IRQ handler to save events retrieved from the
* Slot Status register for later consumption by the IRQ thread
* @poll_thread: thread to poll for slot events if no IRQ is available,
* enabled with pciehp_poll_mode module parameter
* @state: current state machine position
* @state_lock: protects reads and writes of @state;
* protects scheduling, execution and cancellation of @button_work
* @button_work: work item to turn the slot on or off after 5 seconds
* in response to an Attention Button press
* @hotplug_slot: structure registered with the PCI hotplug core
* @reset_lock: prevents access to the Data Link Layer Link Active bit in the
* Link Status register and to the Presence Detect State bit in the Slot
* Status register during a slot reset which may cause them to flap
* @request_result: result of last user request submitted to the IRQ thread
* @requester: wait queue to wake up on completion of user request,
* used for synchronous slot enable/disable request via sysfs
*
* PCIe hotplug has a 1:1 relationship between controller and slot, hence
* unlike other drivers, the two aren't represented by separate structures.
*/
struct controller {
struct mutex ctrl_lock;
struct pcie_device *pcie;
struct rw_semaphore reset_lock;
struct slot *slot;
wait_queue_head_t queue;
u32 slot_cap;
u16 slot_ctrl;
struct task_struct *poll_thread;
unsigned long cmd_started; /* jiffies */
u32 slot_cap; /* capabilities and quirks */
u16 slot_ctrl; /* control register access */
struct mutex ctrl_lock;
unsigned long cmd_started;
unsigned int cmd_busy:1;
unsigned int link_active_reporting:1;
wait_queue_head_t queue;
atomic_t pending_events; /* event handling */
unsigned int notification_enabled:1;
unsigned int power_fault_detected;
atomic_t pending_events;
struct task_struct *poll_thread;
u8 state; /* state machine */
struct mutex state_lock;
struct delayed_work button_work;
struct hotplug_slot hotplug_slot; /* hotplug core interface */
struct rw_semaphore reset_lock;
int request_result;
wait_queue_head_t requester;
};
@ -174,42 +165,50 @@ struct controller {
#define NO_CMD_CMPL(ctrl) ((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS)
#define PSN(ctrl) (((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19)
int pciehp_sysfs_enable_slot(struct slot *slot);
int pciehp_sysfs_disable_slot(struct slot *slot);
void pciehp_request(struct controller *ctrl, int action);
void pciehp_handle_button_press(struct slot *slot);
void pciehp_handle_disable_request(struct slot *slot);
void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events);
int pciehp_configure_device(struct slot *p_slot);
void pciehp_unconfigure_device(struct slot *p_slot);
void pciehp_handle_button_press(struct controller *ctrl);
void pciehp_handle_disable_request(struct controller *ctrl);
void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events);
int pciehp_configure_device(struct controller *ctrl);
void pciehp_unconfigure_device(struct controller *ctrl, bool presence);
void pciehp_queue_pushbutton_work(struct work_struct *work);
struct controller *pcie_init(struct pcie_device *dev);
int pcie_init_notification(struct controller *ctrl);
void pcie_shutdown_notification(struct controller *ctrl);
void pcie_clear_hotplug_events(struct controller *ctrl);
int pciehp_power_on_slot(struct slot *slot);
void pciehp_power_off_slot(struct slot *slot);
void pciehp_get_power_status(struct slot *slot, u8 *status);
void pciehp_get_attention_status(struct slot *slot, u8 *status);
void pcie_enable_interrupt(struct controller *ctrl);
void pcie_disable_interrupt(struct controller *ctrl);
int pciehp_power_on_slot(struct controller *ctrl);
void pciehp_power_off_slot(struct controller *ctrl);
void pciehp_get_power_status(struct controller *ctrl, u8 *status);
void pciehp_set_attention_status(struct slot *slot, u8 status);
void pciehp_get_latch_status(struct slot *slot, u8 *status);
void pciehp_get_adapter_status(struct slot *slot, u8 *status);
int pciehp_query_power_fault(struct slot *slot);
void pciehp_green_led_on(struct slot *slot);
void pciehp_green_led_off(struct slot *slot);
void pciehp_green_led_blink(struct slot *slot);
void pciehp_set_attention_status(struct controller *ctrl, u8 status);
void pciehp_get_latch_status(struct controller *ctrl, u8 *status);
int pciehp_query_power_fault(struct controller *ctrl);
void pciehp_green_led_on(struct controller *ctrl);
void pciehp_green_led_off(struct controller *ctrl);
void pciehp_green_led_blink(struct controller *ctrl);
bool pciehp_card_present(struct controller *ctrl);
bool pciehp_card_present_or_link_active(struct controller *ctrl);
int pciehp_check_link_status(struct controller *ctrl);
bool pciehp_check_link_active(struct controller *ctrl);
void pciehp_release_ctrl(struct controller *ctrl);
int pciehp_reset_slot(struct slot *slot, int probe);
int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot);
int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe);
int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status);
int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status);
int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status);
static inline const char *slot_name(struct slot *slot)
static inline const char *slot_name(struct controller *ctrl)
{
return hotplug_slot_name(slot->hotplug_slot);
return hotplug_slot_name(&ctrl->hotplug_slot);
}
static inline struct controller *to_ctrl(struct hotplug_slot *hotplug_slot)
{
return container_of(hotplug_slot, struct controller, hotplug_slot);
}
#endif /* _PCIEHP_H */
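
Because PCIe hotplug has exactly one slot per controller, the merged struct controller carries the slot state machine directly, and every read or write of @state happens under @state_lock as documented above. A minimal sketch of a guarded transition (example_transition() is illustrative, not a driver function):

static void example_transition(struct controller *ctrl)
{
	mutex_lock(&ctrl->state_lock);
	if (ctrl->state == OFF_STATE)
		ctrl->state = POWERON_STATE;
	mutex_unlock(&ctrl->state_lock);
}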


@ -23,8 +23,6 @@
#include <linux/types.h>
#include <linux/pci.h>
#include "pciehp.h"
#include <linux/interrupt.h>
#include <linux/time.h>
#include "../pci.h"
@ -47,45 +45,30 @@ MODULE_PARM_DESC(pciehp_poll_time, "Polling mechanism frequency, in seconds");
#define PCIE_MODULE_NAME "pciehp"
static int set_attention_status(struct hotplug_slot *slot, u8 value);
static int enable_slot(struct hotplug_slot *slot);
static int disable_slot(struct hotplug_slot *slot);
static int get_power_status(struct hotplug_slot *slot, u8 *value);
static int get_attention_status(struct hotplug_slot *slot, u8 *value);
static int get_latch_status(struct hotplug_slot *slot, u8 *value);
static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
static int reset_slot(struct hotplug_slot *slot, int probe);
static int init_slot(struct controller *ctrl)
{
struct slot *slot = ctrl->slot;
struct hotplug_slot *hotplug = NULL;
struct hotplug_slot_info *info = NULL;
struct hotplug_slot_ops *ops = NULL;
struct hotplug_slot_ops *ops;
char name[SLOT_NAME_SIZE];
int retval = -ENOMEM;
hotplug = kzalloc(sizeof(*hotplug), GFP_KERNEL);
if (!hotplug)
goto out;
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
goto out;
int retval;
/* Setup hotplug slot ops */
ops = kzalloc(sizeof(*ops), GFP_KERNEL);
if (!ops)
goto out;
return -ENOMEM;
ops->enable_slot = enable_slot;
ops->disable_slot = disable_slot;
ops->enable_slot = pciehp_sysfs_enable_slot;
ops->disable_slot = pciehp_sysfs_disable_slot;
ops->get_power_status = get_power_status;
ops->get_adapter_status = get_adapter_status;
ops->reset_slot = reset_slot;
ops->reset_slot = pciehp_reset_slot;
if (MRL_SENS(ctrl))
ops->get_latch_status = get_latch_status;
if (ATTN_LED(ctrl)) {
ops->get_attention_status = get_attention_status;
ops->get_attention_status = pciehp_get_attention_status;
ops->set_attention_status = set_attention_status;
} else if (ctrl->pcie->port->hotplug_user_indicators) {
ops->get_attention_status = pciehp_get_raw_indicator_status;
@ -93,33 +76,24 @@ static int init_slot(struct controller *ctrl)
}
/* register this slot with the hotplug pci core */
hotplug->info = info;
hotplug->private = slot;
hotplug->ops = ops;
slot->hotplug_slot = hotplug;
ctrl->hotplug_slot.ops = ops;
snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl));
retval = pci_hp_initialize(hotplug,
retval = pci_hp_initialize(&ctrl->hotplug_slot,
ctrl->pcie->port->subordinate, 0, name);
if (retval)
ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval);
out:
if (retval) {
ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval);
kfree(ops);
kfree(info);
kfree(hotplug);
}
return retval;
}
static void cleanup_slot(struct controller *ctrl)
{
struct hotplug_slot *hotplug_slot = ctrl->slot->hotplug_slot;
struct hotplug_slot *hotplug_slot = &ctrl->hotplug_slot;
pci_hp_destroy(hotplug_slot);
kfree(hotplug_slot->ops);
kfree(hotplug_slot->info);
kfree(hotplug_slot);
}
/*
@ -127,79 +101,48 @@ static void cleanup_slot(struct controller *ctrl)
*/
static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
{
struct slot *slot = hotplug_slot->private;
struct pci_dev *pdev = slot->ctrl->pcie->port;
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
pciehp_set_attention_status(slot, status);
pciehp_set_attention_status(ctrl, status);
pci_config_pm_runtime_put(pdev);
return 0;
}
static int enable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
return pciehp_sysfs_enable_slot(slot);
}
static int disable_slot(struct hotplug_slot *hotplug_slot)
{
struct slot *slot = hotplug_slot->private;
return pciehp_sysfs_disable_slot(slot);
}
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct pci_dev *pdev = slot->ctrl->pcie->port;
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
pciehp_get_power_status(slot, value);
pciehp_get_power_status(ctrl, value);
pci_config_pm_runtime_put(pdev);
return 0;
}
static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
pciehp_get_attention_status(slot, value);
return 0;
}
static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct pci_dev *pdev = slot->ctrl->pcie->port;
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
pciehp_get_latch_status(slot, value);
pciehp_get_latch_status(ctrl, value);
pci_config_pm_runtime_put(pdev);
return 0;
}
static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct slot *slot = hotplug_slot->private;
struct pci_dev *pdev = slot->ctrl->pcie->port;
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl->pcie->port;
pci_config_pm_runtime_get(pdev);
pciehp_get_adapter_status(slot, value);
*value = pciehp_card_present_or_link_active(ctrl);
pci_config_pm_runtime_put(pdev);
return 0;
}
static int reset_slot(struct hotplug_slot *hotplug_slot, int probe)
{
struct slot *slot = hotplug_slot->private;
return pciehp_reset_slot(slot, probe);
}
/**
* pciehp_check_presence() - synthesize event if presence has changed
*
@ -212,20 +155,19 @@ static int reset_slot(struct hotplug_slot *hotplug_slot, int probe)
*/
static void pciehp_check_presence(struct controller *ctrl)
{
struct slot *slot = ctrl->slot;
u8 occupied;
bool occupied;
down_read(&ctrl->reset_lock);
mutex_lock(&slot->lock);
mutex_lock(&ctrl->state_lock);
pciehp_get_adapter_status(slot, &occupied);
if ((occupied && (slot->state == OFF_STATE ||
slot->state == BLINKINGON_STATE)) ||
(!occupied && (slot->state == ON_STATE ||
slot->state == BLINKINGOFF_STATE)))
occupied = pciehp_card_present_or_link_active(ctrl);
if ((occupied && (ctrl->state == OFF_STATE ||
ctrl->state == BLINKINGON_STATE)) ||
(!occupied && (ctrl->state == ON_STATE ||
ctrl->state == BLINKINGOFF_STATE)))
pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC);
mutex_unlock(&slot->lock);
mutex_unlock(&ctrl->state_lock);
up_read(&ctrl->reset_lock);
}
@ -233,7 +175,6 @@ static int pciehp_probe(struct pcie_device *dev)
{
int rc;
struct controller *ctrl;
struct slot *slot;
/* If this is not a "hotplug" service, we have no business here. */
if (dev->service != PCIE_PORT_SERVICE_HP)
@ -271,8 +212,7 @@ static int pciehp_probe(struct pcie_device *dev)
}
/* Publish to user space */
slot = ctrl->slot;
rc = pci_hp_add(slot->hotplug_slot);
rc = pci_hp_add(&ctrl->hotplug_slot);
if (rc) {
ctrl_err(ctrl, "Publication to user space failed (%d)\n", rc);
goto err_out_shutdown_notification;
@ -295,29 +235,43 @@ static void pciehp_remove(struct pcie_device *dev)
{
struct controller *ctrl = get_service_data(dev);
pci_hp_del(ctrl->slot->hotplug_slot);
pci_hp_del(&ctrl->hotplug_slot);
pcie_shutdown_notification(ctrl);
cleanup_slot(ctrl);
pciehp_release_ctrl(ctrl);
}
#ifdef CONFIG_PM
static bool pme_is_native(struct pcie_device *dev)
{
const struct pci_host_bridge *host;
host = pci_find_host_bridge(dev->port->bus);
return pcie_ports_native || host->native_pme;
}
static int pciehp_suspend(struct pcie_device *dev)
{
/*
* Disable hotplug interrupt so that it does not trigger
* immediately when the downstream link goes down.
*/
if (pme_is_native(dev))
pcie_disable_interrupt(get_service_data(dev));
return 0;
}
static int pciehp_resume_noirq(struct pcie_device *dev)
{
struct controller *ctrl = get_service_data(dev);
struct slot *slot = ctrl->slot;
/* pci_restore_state() just wrote to the Slot Control register */
ctrl->cmd_started = jiffies;
ctrl->cmd_busy = true;
/* clear spurious events from rediscovery of inserted card */
if (slot->state == ON_STATE || slot->state == BLINKINGOFF_STATE)
if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE)
pcie_clear_hotplug_events(ctrl);
return 0;
@ -327,10 +281,29 @@ static int pciehp_resume(struct pcie_device *dev)
{
struct controller *ctrl = get_service_data(dev);
if (pme_is_native(dev))
pcie_enable_interrupt(ctrl);
pciehp_check_presence(ctrl);
return 0;
}
static int pciehp_runtime_resume(struct pcie_device *dev)
{
struct controller *ctrl = get_service_data(dev);
/* pci_restore_state() just wrote to the Slot Control register */
ctrl->cmd_started = jiffies;
ctrl->cmd_busy = true;
/* clear spurious events from rediscovery of inserted card */
if ((ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) &&
pme_is_native(dev))
pcie_clear_hotplug_events(ctrl);
return pciehp_resume(dev);
}
#endif /* PM */
static struct pcie_port_service_driver hpdriver_portdrv = {
@ -345,10 +318,12 @@ static struct pcie_port_service_driver hpdriver_portdrv = {
.suspend = pciehp_suspend,
.resume_noirq = pciehp_resume_noirq,
.resume = pciehp_resume,
.runtime_suspend = pciehp_suspend,
.runtime_resume = pciehp_runtime_resume,
#endif /* PM */
};
static int __init pcied_init(void)
int __init pcie_hp_init(void)
{
int retval = 0;
@ -359,4 +334,3 @@ static int __init pcied_init(void)
return retval;
}
device_initcall(pcied_init);
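
Both resume paths above re-arm the command-completion bookkeeping because pci_restore_state() has just written the Slot Control register behind the driver's back. The rule they restore, as a sketch (resync_cmd_timeout() is an illustrative name):

static void resync_cmd_timeout(struct controller *ctrl)
{
	/* the next Slot Control write must wait up to 1 s for a Command
	 * Completed event (PCIe r4.0, sec 6.7.3.2), counted from here */
	ctrl->cmd_started = jiffies;
	ctrl->cmd_busy = true;
}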


@ -13,24 +13,24 @@
*
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/pci.h>
#include "../pci.h"
#include "pciehp.h"
/* The following routines constitute the bulk of the
hotplug controller logic
*/
static void set_slot_off(struct controller *ctrl, struct slot *pslot)
#define SAFE_REMOVAL true
#define SURPRISE_REMOVAL false
static void set_slot_off(struct controller *ctrl)
{
/* turn off slot, turn on Amber LED, turn off Green LED if supported*/
if (POWER_CTRL(ctrl)) {
pciehp_power_off_slot(pslot);
pciehp_power_off_slot(ctrl);
/*
* After turning power off, we must wait for at least 1 second
@ -40,31 +40,30 @@ static void set_slot_off(struct controller *ctrl, struct slot *pslot)
msleep(1000);
}
pciehp_green_led_off(pslot);
pciehp_set_attention_status(pslot, 1);
pciehp_green_led_off(ctrl);
pciehp_set_attention_status(ctrl, 1);
}
/**
* board_added - Called after a board has been added to the system.
* @p_slot: &slot where board is added
* @ctrl: PCIe hotplug controller where board is added
*
* Turns power on for the board.
* Configures board.
*/
static int board_added(struct slot *p_slot)
static int board_added(struct controller *ctrl)
{
int retval = 0;
struct controller *ctrl = p_slot->ctrl;
struct pci_bus *parent = ctrl->pcie->port->subordinate;
if (POWER_CTRL(ctrl)) {
/* Power on slot */
retval = pciehp_power_on_slot(p_slot);
retval = pciehp_power_on_slot(ctrl);
if (retval)
return retval;
}
pciehp_green_led_blink(p_slot);
pciehp_green_led_blink(ctrl);
/* Check link training status */
retval = pciehp_check_link_status(ctrl);
@ -74,13 +73,13 @@ static int board_added(struct slot *p_slot)
}
/* Check for a power fault */
if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) {
ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(p_slot));
if (ctrl->power_fault_detected || pciehp_query_power_fault(ctrl)) {
ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
retval = -EIO;
goto err_exit;
}
retval = pciehp_configure_device(p_slot);
retval = pciehp_configure_device(ctrl);
if (retval) {
if (retval != -EEXIST) {
ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n",
@ -89,27 +88,26 @@ static int board_added(struct slot *p_slot)
}
}
pciehp_green_led_on(p_slot);
pciehp_set_attention_status(p_slot, 0);
pciehp_green_led_on(ctrl);
pciehp_set_attention_status(ctrl, 0);
return 0;
err_exit:
set_slot_off(ctrl, p_slot);
set_slot_off(ctrl);
return retval;
}
/**
* remove_board - Turns off slot and LEDs
* @p_slot: slot where board is being removed
* @ctrl: PCIe hotplug controller where board is being removed
* @safe_removal: whether the board is safely removed (versus surprise removed)
*/
static void remove_board(struct slot *p_slot)
static void remove_board(struct controller *ctrl, bool safe_removal)
{
struct controller *ctrl = p_slot->ctrl;
pciehp_unconfigure_device(p_slot);
pciehp_unconfigure_device(ctrl, safe_removal);
if (POWER_CTRL(ctrl)) {
pciehp_power_off_slot(p_slot);
pciehp_power_off_slot(ctrl);
/*
* After turning power off, we must wait for at least 1 second
@ -120,11 +118,11 @@ static void remove_board(struct slot *p_slot)
}
/* turn off Green LED */
pciehp_green_led_off(p_slot);
pciehp_green_led_off(ctrl);
}
static int pciehp_enable_slot(struct slot *slot);
static int pciehp_disable_slot(struct slot *slot);
static int pciehp_enable_slot(struct controller *ctrl);
static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal);
void pciehp_request(struct controller *ctrl, int action)
{
@ -135,11 +133,11 @@ void pciehp_request(struct controller *ctrl, int action)
void pciehp_queue_pushbutton_work(struct work_struct *work)
{
struct slot *p_slot = container_of(work, struct slot, work.work);
struct controller *ctrl = p_slot->ctrl;
struct controller *ctrl = container_of(work, struct controller,
button_work.work);
mutex_lock(&p_slot->lock);
switch (p_slot->state) {
mutex_lock(&ctrl->state_lock);
switch (ctrl->state) {
case BLINKINGOFF_STATE:
pciehp_request(ctrl, DISABLE_SLOT);
break;
@ -149,30 +147,28 @@ void pciehp_queue_pushbutton_work(struct work_struct *work)
default:
break;
}
mutex_unlock(&p_slot->lock);
mutex_unlock(&ctrl->state_lock);
}
void pciehp_handle_button_press(struct slot *p_slot)
void pciehp_handle_button_press(struct controller *ctrl)
{
struct controller *ctrl = p_slot->ctrl;
mutex_lock(&p_slot->lock);
switch (p_slot->state) {
mutex_lock(&ctrl->state_lock);
switch (ctrl->state) {
case OFF_STATE:
case ON_STATE:
if (p_slot->state == ON_STATE) {
p_slot->state = BLINKINGOFF_STATE;
if (ctrl->state == ON_STATE) {
ctrl->state = BLINKINGOFF_STATE;
ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n",
slot_name(p_slot));
slot_name(ctrl));
} else {
p_slot->state = BLINKINGON_STATE;
ctrl->state = BLINKINGON_STATE;
ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n",
slot_name(p_slot));
slot_name(ctrl));
}
/* blink green LED and turn off amber */
pciehp_green_led_blink(p_slot);
pciehp_set_attention_status(p_slot, 0);
schedule_delayed_work(&p_slot->work, 5 * HZ);
pciehp_green_led_blink(ctrl);
pciehp_set_attention_status(ctrl, 0);
schedule_delayed_work(&ctrl->button_work, 5 * HZ);
break;
case BLINKINGOFF_STATE:
case BLINKINGON_STATE:
@ -181,197 +177,184 @@ void pciehp_handle_button_press(struct slot *p_slot)
* press the attention again before the 5 sec. limit
* expires to cancel hot-add or hot-remove
*/
ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(p_slot));
cancel_delayed_work(&p_slot->work);
if (p_slot->state == BLINKINGOFF_STATE) {
p_slot->state = ON_STATE;
pciehp_green_led_on(p_slot);
ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl));
cancel_delayed_work(&ctrl->button_work);
if (ctrl->state == BLINKINGOFF_STATE) {
ctrl->state = ON_STATE;
pciehp_green_led_on(ctrl);
} else {
p_slot->state = OFF_STATE;
pciehp_green_led_off(p_slot);
ctrl->state = OFF_STATE;
pciehp_green_led_off(ctrl);
}
pciehp_set_attention_status(p_slot, 0);
pciehp_set_attention_status(ctrl, 0);
ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n",
slot_name(p_slot));
slot_name(ctrl));
break;
default:
ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n",
slot_name(p_slot), p_slot->state);
slot_name(ctrl), ctrl->state);
break;
}
mutex_unlock(&p_slot->lock);
mutex_unlock(&ctrl->state_lock);
}
void pciehp_handle_disable_request(struct slot *slot)
void pciehp_handle_disable_request(struct controller *ctrl)
{
struct controller *ctrl = slot->ctrl;
mutex_lock(&slot->lock);
switch (slot->state) {
mutex_lock(&ctrl->state_lock);
switch (ctrl->state) {
case BLINKINGON_STATE:
case BLINKINGOFF_STATE:
cancel_delayed_work(&slot->work);
cancel_delayed_work(&ctrl->button_work);
break;
}
slot->state = POWEROFF_STATE;
mutex_unlock(&slot->lock);
ctrl->state = POWEROFF_STATE;
mutex_unlock(&ctrl->state_lock);
ctrl->request_result = pciehp_disable_slot(slot);
ctrl->request_result = pciehp_disable_slot(ctrl, SAFE_REMOVAL);
}
void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events)
void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
{
struct controller *ctrl = slot->ctrl;
bool link_active;
u8 present;
bool present, link_active;
/*
* If the slot is on and presence or link has changed, turn it off.
* Even if it's occupied again, we cannot assume the card is the same.
*/
mutex_lock(&slot->lock);
switch (slot->state) {
mutex_lock(&ctrl->state_lock);
switch (ctrl->state) {
case BLINKINGOFF_STATE:
cancel_delayed_work(&slot->work);
cancel_delayed_work(&ctrl->button_work);
/* fall through */
case ON_STATE:
slot->state = POWEROFF_STATE;
mutex_unlock(&slot->lock);
ctrl->state = POWEROFF_STATE;
mutex_unlock(&ctrl->state_lock);
if (events & PCI_EXP_SLTSTA_DLLSC)
ctrl_info(ctrl, "Slot(%s): Link Down\n",
slot_name(slot));
slot_name(ctrl));
if (events & PCI_EXP_SLTSTA_PDC)
ctrl_info(ctrl, "Slot(%s): Card not present\n",
slot_name(slot));
pciehp_disable_slot(slot);
slot_name(ctrl));
pciehp_disable_slot(ctrl, SURPRISE_REMOVAL);
break;
default:
mutex_unlock(&slot->lock);
mutex_unlock(&ctrl->state_lock);
break;
}
/* Turn the slot on if it's occupied or link is up */
mutex_lock(&slot->lock);
pciehp_get_adapter_status(slot, &present);
mutex_lock(&ctrl->state_lock);
present = pciehp_card_present(ctrl);
link_active = pciehp_check_link_active(ctrl);
if (!present && !link_active) {
mutex_unlock(&slot->lock);
mutex_unlock(&ctrl->state_lock);
return;
}
switch (slot->state) {
switch (ctrl->state) {
case BLINKINGON_STATE:
cancel_delayed_work(&slot->work);
cancel_delayed_work(&ctrl->button_work);
/* fall through */
case OFF_STATE:
slot->state = POWERON_STATE;
mutex_unlock(&slot->lock);
ctrl->state = POWERON_STATE;
mutex_unlock(&ctrl->state_lock);
if (present)
ctrl_info(ctrl, "Slot(%s): Card present\n",
slot_name(slot));
slot_name(ctrl));
if (link_active)
ctrl_info(ctrl, "Slot(%s): Link Up\n",
slot_name(slot));
ctrl->request_result = pciehp_enable_slot(slot);
slot_name(ctrl));
ctrl->request_result = pciehp_enable_slot(ctrl);
break;
default:
mutex_unlock(&slot->lock);
mutex_unlock(&ctrl->state_lock);
break;
}
}
static int __pciehp_enable_slot(struct slot *p_slot)
static int __pciehp_enable_slot(struct controller *ctrl)
{
u8 getstatus = 0;
struct controller *ctrl = p_slot->ctrl;
pciehp_get_adapter_status(p_slot, &getstatus);
if (!getstatus) {
ctrl_info(ctrl, "Slot(%s): No adapter\n", slot_name(p_slot));
return -ENODEV;
}
if (MRL_SENS(p_slot->ctrl)) {
pciehp_get_latch_status(p_slot, &getstatus);
if (MRL_SENS(ctrl)) {
pciehp_get_latch_status(ctrl, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Slot(%s): Latch open\n",
slot_name(p_slot));
slot_name(ctrl));
return -ENODEV;
}
}
if (POWER_CTRL(p_slot->ctrl)) {
pciehp_get_power_status(p_slot, &getstatus);
if (POWER_CTRL(ctrl)) {
pciehp_get_power_status(ctrl, &getstatus);
if (getstatus) {
ctrl_info(ctrl, "Slot(%s): Already enabled\n",
slot_name(p_slot));
slot_name(ctrl));
return 0;
}
}
return board_added(p_slot);
return board_added(ctrl);
}
static int pciehp_enable_slot(struct slot *slot)
static int pciehp_enable_slot(struct controller *ctrl)
{
struct controller *ctrl = slot->ctrl;
int ret;
pm_runtime_get_sync(&ctrl->pcie->port->dev);
ret = __pciehp_enable_slot(slot);
ret = __pciehp_enable_slot(ctrl);
if (ret && ATTN_BUTTN(ctrl))
pciehp_green_led_off(slot); /* may be blinking */
pciehp_green_led_off(ctrl); /* may be blinking */
pm_runtime_put(&ctrl->pcie->port->dev);
mutex_lock(&slot->lock);
slot->state = ret ? OFF_STATE : ON_STATE;
mutex_unlock(&slot->lock);
mutex_lock(&ctrl->state_lock);
ctrl->state = ret ? OFF_STATE : ON_STATE;
mutex_unlock(&ctrl->state_lock);
return ret;
}
static int __pciehp_disable_slot(struct slot *p_slot)
static int __pciehp_disable_slot(struct controller *ctrl, bool safe_removal)
{
u8 getstatus = 0;
struct controller *ctrl = p_slot->ctrl;
if (POWER_CTRL(p_slot->ctrl)) {
pciehp_get_power_status(p_slot, &getstatus);
if (POWER_CTRL(ctrl)) {
pciehp_get_power_status(ctrl, &getstatus);
if (!getstatus) {
ctrl_info(ctrl, "Slot(%s): Already disabled\n",
slot_name(p_slot));
slot_name(ctrl));
return -EINVAL;
}
}
remove_board(p_slot);
remove_board(ctrl, safe_removal);
return 0;
}
static int pciehp_disable_slot(struct slot *slot)
static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal)
{
struct controller *ctrl = slot->ctrl;
int ret;
pm_runtime_get_sync(&ctrl->pcie->port->dev);
ret = __pciehp_disable_slot(slot);
ret = __pciehp_disable_slot(ctrl, safe_removal);
pm_runtime_put(&ctrl->pcie->port->dev);
mutex_lock(&slot->lock);
slot->state = OFF_STATE;
mutex_unlock(&slot->lock);
mutex_lock(&ctrl->state_lock);
ctrl->state = OFF_STATE;
mutex_unlock(&ctrl->state_lock);
return ret;
}
int pciehp_sysfs_enable_slot(struct slot *p_slot)
int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot)
{
struct controller *ctrl = p_slot->ctrl;
struct controller *ctrl = to_ctrl(hotplug_slot);
mutex_lock(&p_slot->lock);
switch (p_slot->state) {
mutex_lock(&ctrl->state_lock);
switch (ctrl->state) {
case BLINKINGON_STATE:
case OFF_STATE:
mutex_unlock(&p_slot->lock);
mutex_unlock(&ctrl->state_lock);
/*
* The IRQ thread becomes a no-op if the user pulls out the
* card before the thread wakes up, so initialize to -ENODEV.
@ -383,53 +366,53 @@ int pciehp_sysfs_enable_slot(struct slot *p_slot)
return ctrl->request_result;
case POWERON_STATE:
ctrl_info(ctrl, "Slot(%s): Already in powering on state\n",
slot_name(p_slot));
slot_name(ctrl));
break;
case BLINKINGOFF_STATE:
case ON_STATE:
case POWEROFF_STATE:
ctrl_info(ctrl, "Slot(%s): Already enabled\n",
slot_name(p_slot));
slot_name(ctrl));
break;
default:
ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n",
slot_name(p_slot), p_slot->state);
slot_name(ctrl), ctrl->state);
break;
}
mutex_unlock(&p_slot->lock);
mutex_unlock(&ctrl->state_lock);
return -ENODEV;
}
-int pciehp_sysfs_disable_slot(struct slot *p_slot)
+int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct controller *ctrl = p_slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 
-	mutex_lock(&p_slot->lock);
-	switch (p_slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case BLINKINGOFF_STATE:
 	case ON_STATE:
-		mutex_unlock(&p_slot->lock);
+		mutex_unlock(&ctrl->state_lock);
 		pciehp_request(ctrl, DISABLE_SLOT);
 		wait_event(ctrl->requester,
 			   !atomic_read(&ctrl->pending_events));
 		return ctrl->request_result;
 	case POWEROFF_STATE:
 		ctrl_info(ctrl, "Slot(%s): Already in powering off state\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	case BLINKINGON_STATE:
 	case OFF_STATE:
 	case POWERON_STATE:
 		ctrl_info(ctrl, "Slot(%s): Already disabled\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	default:
 		ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n",
-			 slot_name(p_slot), p_slot->state);
+			 slot_name(ctrl), ctrl->state);
 		break;
 	}
-	mutex_unlock(&p_slot->lock);
+	mutex_unlock(&ctrl->state_lock);
 
 	return -ENODEV;
 }
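
[Editorial note: the to_ctrl() conversions above work because this series embeds struct hotplug_slot directly in struct controller instead of allocating a separate slot object, so the hotplug core's callback argument converts back with container_of(). Roughly — field subset for illustration only; see pciehp.h for the real layout:]

	#include <linux/kernel.h>
	#include <linux/mutex.h>
	#include <linux/pci_hotplug.h>

	struct controller {
		unsigned int state;			/* guarded by state_lock */
		struct mutex state_lock;
		struct hotplug_slot hotplug_slot;	/* embedded, not a pointer */
	};

	#define to_ctrl(hp_slot) \
		container_of(hp_slot, struct controller, hotplug_slot)
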


@@ -13,15 +13,12 @@
  */
 
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/types.h>
-#include <linux/signal.h>
 #include <linux/jiffies.h>
 #include <linux/kthread.h>
 #include <linux/pci.h>
 #include <linux/pm_runtime.h>
 #include <linux/interrupt.h>
-#include <linux/time.h>
 #include <linux/slab.h>
 
 #include "../pci.h"
@@ -43,7 +40,7 @@ static inline int pciehp_request_irq(struct controller *ctrl)
 	if (pciehp_poll_mode) {
 		ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl,
 						"pciehp_poll-%s",
-						slot_name(ctrl->slot));
+						slot_name(ctrl));
 		return PTR_ERR_OR_ZERO(ctrl->poll_thread);
 	}
 
@@ -217,13 +214,6 @@ bool pciehp_check_link_active(struct controller *ctrl)
 	return ret;
 }
 
-static void pcie_wait_link_active(struct controller *ctrl)
-{
-	struct pci_dev *pdev = ctrl_dev(ctrl);
-
-	pcie_wait_for_link(pdev, true);
-}
-
 static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
 {
 	u32 l;
@@ -256,18 +246,9 @@ int pciehp_check_link_status(struct controller *ctrl)
 	bool found;
 	u16 lnk_status;
 
-	/*
-	 * Data Link Layer Link Active Reporting must be capable for
-	 * hot-plug capable downstream port. But old controller might
-	 * not implement it. In this case, we wait for 1000 ms.
-	 */
-	if (ctrl->link_active_reporting)
-		pcie_wait_link_active(ctrl);
-	else
-		msleep(1000);
+	if (!pcie_wait_for_link(pdev, true))
+		return -1;
 
-	/* wait 100ms before read pci conf, and try in 1s */
-	msleep(100);
 	found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
 					PCI_DEVFN(0, 0));
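
[Editorial note: the open-coded wait removed here is subsumed by the PCI core's pcie_wait_for_link(), per the "Make PCIe link active reporting detection generic" item in the summary. A behavioural sketch of that helper, not its verbatim code — ports lacking Data Link Layer Link Active Reporting get a fixed delay, others poll the Link Status register, and a successful link-up is followed by the 100 ms settle time PCIe r4.0 sec 6.6.1 requires before config requests:]

	bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
	{
		int timeout = 1000;
		u16 lnksta;
		bool up;

		/* No DLLLA bit to poll: wait a fixed worst-case delay. */
		if (!pdev->link_active_reporting) {
			msleep(timeout + 100);
			return true;
		}

		for (;;) {
			pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
			up = !!(lnksta & PCI_EXP_LNKSTA_DLLLA);
			if (up == active || timeout <= 0)
				break;
			msleep(10);
			timeout -= 10;
		}

		if (active && up)
			msleep(100);	/* settle before config requests */
		return up == active;
	}
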
@@ -318,8 +299,8 @@ static int pciehp_link_enable(struct controller *ctrl)
 int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 				    u8 *status)
 {
-	struct slot *slot = hotplug_slot->private;
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	struct controller *ctrl = to_ctrl(hotplug_slot);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;
 
 	pci_config_pm_runtime_get(pdev);
@@ -329,9 +310,9 @@ int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 	return 0;
 }
 
-void pciehp_get_attention_status(struct slot *slot, u8 *status)
+int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;
 
@@ -355,11 +336,12 @@ void pciehp_get_attention_status(struct slot *slot, u8 *status)
 		*status = 0xFF;
 		break;
 	}
+
+	return 0;
 }
 
-void pciehp_get_power_status(struct slot *slot, u8 *status)
+void pciehp_get_power_status(struct controller *ctrl, u8 *status)
 {
-	struct controller *ctrl = slot->ctrl;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;
 
@@ -380,27 +362,41 @@ void pciehp_get_power_status(struct slot *slot, u8 *status)
 	}
 }
 
-void pciehp_get_latch_status(struct slot *slot, u8 *status)
+void pciehp_get_latch_status(struct controller *ctrl, u8 *status)
 {
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
 
 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
 	*status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS);
 }
 
-void pciehp_get_adapter_status(struct slot *slot, u8 *status)
+bool pciehp_card_present(struct controller *ctrl)
 {
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
 
 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
-	*status = !!(slot_status & PCI_EXP_SLTSTA_PDS);
+	return slot_status & PCI_EXP_SLTSTA_PDS;
 }
 
-int pciehp_query_power_fault(struct slot *slot)
+/**
+ * pciehp_card_present_or_link_active() - whether given slot is occupied
+ * @ctrl: PCIe hotplug controller
+ *
+ * Unlike pciehp_card_present(), which determines presence solely from the
+ * Presence Detect State bit, this helper also returns true if the Link Active
+ * bit is set.  This is a concession to broken hotplug ports which hardwire
+ * Presence Detect State to zero, such as Wilocity's [1ae9:0200].
+ */
+bool pciehp_card_present_or_link_active(struct controller *ctrl)
 {
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	return pciehp_card_present(ctrl) || pciehp_check_link_active(ctrl);
+}
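
[Editorial note: for illustration, the two occupancy signals the new helper combines live in different registers — Presence Detect State in the Slot Status register, Data Link Layer Link Active in the Link Status register. A hypothetical standalone check (function name invented here) would read:]

	static bool example_slot_occupied(struct pci_dev *port)
	{
		u16 sltsta, lnksta;

		pcie_capability_read_word(port, PCI_EXP_SLTSTA, &sltsta);
		pcie_capability_read_word(port, PCI_EXP_LNKSTA, &lnksta);

		/*
		 * Occupied if either bit is set; this covers ports that
		 * hardwire Presence Detect State to zero.
		 */
		return (sltsta & PCI_EXP_SLTSTA_PDS) ||
		       (lnksta & PCI_EXP_LNKSTA_DLLLA);
	}
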
+
+int pciehp_query_power_fault(struct controller *ctrl)
+{
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
 
 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
@@ -410,8 +406,7 @@ int pciehp_query_power_fault(struct slot *slot)
 int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 				    u8 status)
 {
-	struct slot *slot = hotplug_slot->private;
-	struct controller *ctrl = slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 
 	pci_config_pm_runtime_get(pdev);
@@ -421,9 +416,8 @@ int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 	return 0;
 }
 
-void pciehp_set_attention_status(struct slot *slot, u8 value)
+void pciehp_set_attention_status(struct controller *ctrl, u8 value)
 {
-	struct controller *ctrl = slot->ctrl;
 	u16 slot_cmd;
 
 	if (!ATTN_LED(ctrl))
@@ -447,10 +441,8 @@ void pciehp_set_attention_status(struct slot *slot, u8 value)
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
 }
 
-void pciehp_green_led_on(struct slot *slot)
+void pciehp_green_led_on(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	if (!PWR_LED(ctrl))
 		return;
 
@@ -461,10 +453,8 @@ void pciehp_green_led_on(struct slot *slot)
 			      PCI_EXP_SLTCTL_PWR_IND_ON);
 }
 
-void pciehp_green_led_off(struct slot *slot)
+void pciehp_green_led_off(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	if (!PWR_LED(ctrl))
 		return;
 
@@ -475,10 +465,8 @@ void pciehp_green_led_off(struct slot *slot)
 			      PCI_EXP_SLTCTL_PWR_IND_OFF);
 }
 
-void pciehp_green_led_blink(struct slot *slot)
+void pciehp_green_led_blink(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	if (!PWR_LED(ctrl))
 		return;
 
@@ -489,9 +477,8 @@ void pciehp_green_led_blink(struct slot *slot)
 			      PCI_EXP_SLTCTL_PWR_IND_BLINK);
 }
 
-int pciehp_power_on_slot(struct slot *slot)
+int pciehp_power_on_slot(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
 	int retval;
@@ -515,10 +502,8 @@ int pciehp_power_on_slot(struct slot *slot)
 	return retval;
 }
 
-void pciehp_power_off_slot(struct slot *slot)
+void pciehp_power_off_slot(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
@@ -533,9 +518,11 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
 	u16 status, events;
 
 	/*
-	 * Interrupts only occur in D3hot or shallower (PCIe r4.0, sec 6.7.3.4).
+	 * Interrupts only occur in D3hot or shallower and only if enabled
+	 * in the Slot Control register (PCIe r4.0, sec 6.7.3.4).
 	 */
-	if (pdev->current_state == PCI_D3cold)
+	if (pdev->current_state == PCI_D3cold ||
+	    (!(ctrl->slot_ctrl & PCI_EXP_SLTCTL_HPIE) && !pciehp_poll_mode))
 		return IRQ_NONE;
 
 	/*
@@ -616,7 +603,6 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 {
 	struct controller *ctrl = (struct controller *)dev_id;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
-	struct slot *slot = ctrl->slot;
 	irqreturn_t ret;
 	u32 events;
 
@@ -642,16 +628,16 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	/* Check Attention Button Pressed */
 	if (events & PCI_EXP_SLTSTA_ABP) {
 		ctrl_info(ctrl, "Slot(%s): Attention button pressed\n",
-			  slot_name(slot));
-		pciehp_handle_button_press(slot);
+			  slot_name(ctrl));
+		pciehp_handle_button_press(ctrl);
 	}
 
 	/* Check Power Fault Detected */
 	if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
 		ctrl->power_fault_detected = 1;
-		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot));
-		pciehp_set_attention_status(slot, 1);
-		pciehp_green_led_off(slot);
+		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
+		pciehp_set_attention_status(ctrl, 1);
+		pciehp_green_led_off(ctrl);
 	}
 
 	/*
@@ -660,9 +646,9 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	 */
 	down_read(&ctrl->reset_lock);
 	if (events & DISABLE_SLOT)
-		pciehp_handle_disable_request(slot);
+		pciehp_handle_disable_request(ctrl);
 	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
-		pciehp_handle_presence_or_link_change(slot, events);
+		pciehp_handle_presence_or_link_change(ctrl, events);
 	up_read(&ctrl->reset_lock);
 
 	pci_config_pm_runtime_put(pdev);
@@ -748,6 +734,16 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
 				   PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
 }
 
+void pcie_enable_interrupt(struct controller *ctrl)
+{
+	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
+}
+
+void pcie_disable_interrupt(struct controller *ctrl)
+{
+	pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
+}
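
[Editorial note: these two helpers back the D3cold/system-sleep work mentioned in the summary — the port service driver can mask the hotplug interrupt before the port suspends and unmask it on resume. Sketch only; the hook wiring in pciehp_core.c is assumed, and pciehp_check_presence() re-syncs slot state afterwards:]

	static int pciehp_suspend(struct pcie_device *dev)
	{
		/* Keep the port from signalling hotplug events while asleep. */
		pcie_disable_interrupt(get_service_data(dev));
		return 0;
	}

	static int pciehp_resume(struct pcie_device *dev)
	{
		struct controller *ctrl = get_service_data(dev);

		pcie_enable_interrupt(ctrl);
		/* The card may have been swapped while suspended. */
		pciehp_check_presence(ctrl);
		return 0;
	}
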
 /*
  * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
  * bus reset of the bridge, but at the same time we want to ensure that it is
@@ -756,9 +752,9 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
  * momentarily, if we see that they could interfere. Also, clear any spurious
  * events after.
  */
-int pciehp_reset_slot(struct slot *slot, int probe)
+int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 stat_mask = 0, ctrl_mask = 0;
 	int rc;
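
[Editorial note: the function body is elided at this hunk boundary. Per the comment above, the sequence masks presence-detect and link-change notifications, performs the secondary bus reset, then clears stale events and unmasks. A condensed sketch of that flow — masks as in the driver, error handling trimmed:]

	if (probe)
		return 0;

	down_write(&ctrl->reset_lock);

	if (!ATTN_BUTTN(ctrl)) {
		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;	/* presence detect */
		stat_mask |= PCI_EXP_SLTSTA_PDC;
	}
	ctrl_mask |= PCI_EXP_SLTCTL_DLLSCE;		/* link state change */
	stat_mask |= PCI_EXP_SLTSTA_DLLSC;

	pcie_write_cmd(ctrl, 0, ctrl_mask);		/* mask notifications */
	rc = pci_bridge_secondary_bus_reset(ctrl->pcie->port);
	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
	pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);	/* unmask */

	up_write(&ctrl->reset_lock);
	return rc;
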
@@ -808,34 +804,6 @@ void pcie_shutdown_notification(struct controller *ctrl)
 	}
 }
 
-static int pcie_init_slot(struct controller *ctrl)
-{
-	struct pci_bus *subordinate = ctrl_dev(ctrl)->subordinate;
-	struct slot *slot;
-
-	slot = kzalloc(sizeof(*slot), GFP_KERNEL);
-	if (!slot)
-		return -ENOMEM;
-
-	down_read(&pci_bus_sem);
-	slot->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
-	up_read(&pci_bus_sem);
-
-	slot->ctrl = ctrl;
-	mutex_init(&slot->lock);
-	INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work);
-	ctrl->slot = slot;
-	return 0;
-}
-
-static void pcie_cleanup_slot(struct controller *ctrl)
-{
-	struct slot *slot = ctrl->slot;
-
-	cancel_delayed_work_sync(&slot->work);
-	kfree(slot);
-}
-
 static inline void dbg_ctrl(struct controller *ctrl)
 {
 	struct pci_dev *pdev = ctrl->pcie->port;
@@ -857,12 +825,13 @@ struct controller *pcie_init(struct pcie_device *dev)
 {
 	struct controller *ctrl;
 	u32 slot_cap, link_cap;
-	u8 occupied, poweron;
+	u8 poweron;
 	struct pci_dev *pdev = dev->port;
+	struct pci_bus *subordinate = pdev->subordinate;
 
 	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
 	if (!ctrl)
-		goto abort;
+		return NULL;
 
 	ctrl->pcie = dev;
 	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
@@ -879,15 +848,19 @@ struct controller *pcie_init(struct pcie_device *dev)
 	ctrl->slot_cap = slot_cap;
 	mutex_init(&ctrl->ctrl_lock);
+	mutex_init(&ctrl->state_lock);
 	init_rwsem(&ctrl->reset_lock);
 	init_waitqueue_head(&ctrl->requester);
 	init_waitqueue_head(&ctrl->queue);
+	INIT_DELAYED_WORK(&ctrl->button_work, pciehp_queue_pushbutton_work);
 	dbg_ctrl(ctrl);
 
+	down_read(&pci_bus_sem);
+	ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
+	up_read(&pci_bus_sem);
+
 	/* Check if Data Link Layer Link Active Reporting is implemented */
 	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
-	if (link_cap & PCI_EXP_LNKCAP_DLLLARC)
-		ctrl->link_active_reporting = 1;
 
 	/* Clear all remaining event bits in Slot Status register. */
 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
@@ -909,33 +882,24 @@ struct controller *pcie_init(struct pcie_device *dev)
 		FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
 		pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
 
-	if (pcie_init_slot(ctrl))
-		goto abort_ctrl;
-
 	/*
 	 * If empty slot's power status is on, turn power off. The IRQ isn't
 	 * requested yet, so avoid triggering a notification with this command.
 	 */
 	if (POWER_CTRL(ctrl)) {
-		pciehp_get_adapter_status(ctrl->slot, &occupied);
-		pciehp_get_power_status(ctrl->slot, &poweron);
-		if (!occupied && poweron) {
+		pciehp_get_power_status(ctrl, &poweron);
+		if (!pciehp_card_present_or_link_active(ctrl) && poweron) {
 			pcie_disable_notification(ctrl);
-			pciehp_power_off_slot(ctrl->slot);
+			pciehp_power_off_slot(ctrl);
 		}
 	}
 
 	return ctrl;
-
-abort_ctrl:
-	kfree(ctrl);
-abort:
-	return NULL;
 }
 
 void pciehp_release_ctrl(struct controller *ctrl)
 {
-	pcie_cleanup_slot(ctrl);
+	cancel_delayed_work_sync(&ctrl->button_work);
 	kfree(ctrl);
 }


@@ -13,20 +13,26 @@
  *
  */
 
 #include <linux/module.h>
 #include <linux/kernel.h>
-#include <linux/types.h>
 #include <linux/pci.h>
 #include "../pci.h"
 #include "pciehp.h"
 
-int pciehp_configure_device(struct slot *p_slot)
+/**
+ * pciehp_configure_device() - enumerate PCI devices below a hotplug bridge
+ * @ctrl: PCIe hotplug controller
+ *
+ * Enumerate PCI devices below a hotplug bridge and add them to the system.
+ * Return 0 on success, %-EEXIST if the devices are already enumerated or
+ * %-ENODEV if enumeration failed.
+ */
+int pciehp_configure_device(struct controller *ctrl)
 {
 	struct pci_dev *dev;
-	struct pci_dev *bridge = p_slot->ctrl->pcie->port;
+	struct pci_dev *bridge = ctrl->pcie->port;
 	struct pci_bus *parent = bridge->subordinate;
 	int num, ret = 0;
-	struct controller *ctrl = p_slot->ctrl;
 
 	pci_lock_rescan_remove();
@@ -62,17 +68,28 @@ int pciehp_configure_device(struct slot *p_slot)
 	return ret;
 }
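
[Editorial note: the enumeration body is unchanged by this hunk and therefore elided. In outline it scans devfn 0 below the bridge, bails out with -EEXIST if firmware or an earlier hot-add already enumerated it, and otherwise assigns resources and adds the new devices. A sketch of that flow under the kernel-doc contract above, not the verbatim body:]

	pci_lock_rescan_remove();

	dev = pci_get_slot(parent, PCI_DEVFN(0, 0));
	if (dev) {
		/* Already enumerated by firmware or a prior hot-add. */
		pci_dev_put(dev);
		ret = -EEXIST;
		goto out;
	}

	num = pci_scan_slot(parent->self, PCI_DEVFN(0, 0));
	if (num == 0) {
		ctrl_err(ctrl, "No new device found\n");
		ret = -ENODEV;
		goto out;
	}

	for_each_pci_bridge(dev, parent)
		pci_hp_add_bridge(dev);

	pci_assign_unassigned_bridge_resources(bridge);
	pcie_bus_configure_settings(parent);
	pci_bus_add_devices(parent);

 out:
	pci_unlock_rescan_remove();
	return ret;
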
-void pciehp_unconfigure_device(struct slot *p_slot)
+/**
+ * pciehp_unconfigure_device() - remove PCI devices below a hotplug bridge
+ * @ctrl: PCIe hotplug controller
+ * @presence: whether the card is still present in the slot;
+ *	true for safe removal via sysfs or an Attention Button press,
+ *	false for surprise removal
+ *
+ * Unbind PCI devices below a hotplug bridge from their drivers and remove
+ * them from the system.  Safely removed devices are quiesced.  Surprise
+ * removed devices are marked as such to prevent further accesses.
+ */
+void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
 {
-	u8 presence = 0;
 	struct pci_dev *dev, *temp;
-	struct pci_bus *parent = p_slot->ctrl->pcie->port->subordinate;
+	struct pci_bus *parent = ctrl->pcie->port->subordinate;
 	u16 command;
-	struct controller *ctrl = p_slot->ctrl;
 
 	ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n",
 		 __func__, pci_domain_nr(parent), parent->number);
-	pciehp_get_adapter_status(p_slot, &presence);
+
+	if (!presence)
+		pci_walk_bus(parent, pci_dev_set_disconnected, NULL);
 
 	pci_lock_rescan_remove();
@@ -85,12 +102,6 @@ void pciehp_unconfigure_device(struct slot *p_slot)
 	list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
 					 bus_list) {
 		pci_dev_get(dev);
-		if (!presence) {
-			pci_dev_set_disconnected(dev, NULL);
-			if (pci_has_subordinate(dev))
-				pci_walk_bus(dev->subordinate,
-					     pci_dev_set_disconnected, NULL);
-		}
 		pci_stop_and_remove_bus_device(dev);
 		/*
 		 * Ensure that no new Requests will be generated from
