Commit Graph

7 Commits

Author SHA1 Message Date
Ben Widawsky
5161a55c06 cxl: Move cxl_core to new directory
CXL core is growing, and it's already arguably unmanageable. To support
future growth, move core functionality to a new directory and rename the
file to represent just bus support. Future work will remove non-bus
functionality.

Note that mem.h is renamed to cxlmem.h to avoid a namespace collision
with the global ARCH=um mem.h header.
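As a purely illustrative sketch (the consumer is hypothetical; only the file
names come from the description above), in-tree users simply switch to the
renamed header:

    /* Hypothetical consumer: the only change is the header name. */
    #include "cxlmem.h"    /* formerly: #include "mem.h" */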

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162792537866.368511.8915631504621088321.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-08-06 08:22:53 -07:00
Dan Williams
8fdcb1704f cxl/pmem: Add initial infrastructure for pmem support
Register an 'nvdimm-bridge' device to act as an anchor for a libnvdimm
bus hierarchy. Also, flesh out the cxl_bus definition to allow a
cxl_nvdimm_bridge_driver to attach to the bridge and trigger the
nvdimm-bus registration.

The creation of the bridge is gated on the detection of a PMEM capable
address space registered to the root. The bridge indirection allows the
libnvdimm module to remain unloaded on platforms without PMEM support.

Given that the probing of ACPI0017 is asynchronous to CXL endpoint
devices, and that CXL endpoint devices are expected to register other
PMEM resources on the 'CXL' nvdimm bus, a workqueue is added. The
workqueue is needed to run bus_rescan_devices() outside of the
device_lock() of the nvdimm-bridge device to rendezvous nvdimm resources
as they arrive. For now, only the bus is taken online/offline in the
workqueue.
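
A minimal sketch of that arrangement (function and variable names here are
illustrative, not the driver's): the bridge probe path only schedules work,
and bus_rescan_devices() then runs in process context with no device_lock()
held.

    #include <linux/device.h>
    #include <linux/workqueue.h>

    extern struct bus_type cxl_bus_type;    /* assumed to exist in the core */

    static void cxl_nvb_rescan_work(struct work_struct *work)
    {
            /* Runs outside the nvdimm-bridge device_lock(). */
            bus_rescan_devices(&cxl_bus_type);
    }

    static DECLARE_WORK(cxl_nvb_rescan, cxl_nvb_rescan_work);

    /* Called from the bridge's probe/attach path; the rescan happens later. */
    static void cxl_nvb_kick_rescan(void)
    {
            schedule_work(&cxl_nvb_rescan);
    }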

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162379909706.2993820.14051258608641140169.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-06-15 16:47:14 -07:00
Dan Williams
4812be97c0 cxl/acpi: Introduce the root of a cxl_port topology
While CXL builds upon the PCI software model for enumeration and
endpoint control, a static platform component is required to bootstrap
the CXL memory layout. Similar to how ACPI identifies root-level PCI
memory resources, ACPI data enumerates the address space and interleave
configuration for CXL Memory.

In addition to identifying host bridges, ACPI is responsible for
enumerating the CXL memory space that can be addressed by downstream
decoders. This is similar to the requirement for ACPI to publish
resources via the _CRS method for PCI host bridges. Specifically, ACPI
publishes a table, CXL Early Discovery Table (CEDT), which includes a
list of CXL Memory resources, CXL Fixed Memory Window Structures
(CFMWS).

For now, introduce the core infrastructure for a cxl_port hierarchy
starting with a root level anchor represented by the ACPI0017 device.

Follow-on changes model support for the configurable decode capabilities
of cxl_port instances, i.e. CXL switch support.
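
A rough sketch of the table lookup described above (acpi_get_table() and
acpi_put_table() are real kernel APIs; the subtable walk itself is elided
and any CFMWS parsing details are assumptions):

    #include <linux/acpi.h>

    static void cedt_sketch(void)
    {
            struct acpi_table_header *cedt;
            acpi_status status;

            /* The CEDT is published by platform firmware alongside ACPI0017. */
            status = acpi_get_table("CEDT", 0, &cedt);
            if (ACPI_FAILURE(status))
                    return;

            /* ... walk the subtables, picking out CFMWS entries ... */

            acpi_put_table(cedt);
    }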

Co-developed-by: Alison Schofield <alison.schofield@intel.com>
Signed-off-by: Alison Schofield <alison.schofield@intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162325449515.2293126.15303270193010154608.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-06-09 18:02:38 -07:00
Ben Widawsky
21e9f76733 cxl: Rename mem to pci
As the driver has undergone development, it has become clear that the
majority, if not the entirety, of the current functionality in mem.c is
actually a layer encapsulating functionality exposed through PCI-based
interactions. This layer can be used either in isolation or as a
foundation for higher-level functionality.

CXL capabilities exist in a parallel domain to PCIe. CXL devices are
enumerable and controllable via "legacy" PCIe mechanisms; however, their
CXL capabilities are a superset of PCIe. For example, a CXL device may
be connected to a non-CXL capable PCIe root port, and therefore will not
be able to participate in CXL.mem or CXL.cache operations, but can still
be accessed through PCIe mechanisms for CXL.io operations.

To properly represent the PCI nature of this driver, and in preparation for
introducing a new driver for the CXL.mem / HDM decoder (Host-managed Device
Memory) capabilities of a CXL memory expander, rename mem.c to pci.c so that
mem.c is available for this new driver.

The result of the change is that there is a clear layering distinction
in the driver, and a systems administrator may load only the cxl_pci
module and gain access to such operations as firmware update, offline
provisioning of devices, and error collection. In addition to freeing up
the file name for another purpose, there are two primary reasons this is
useful:
    1. Acting upon devices which don't have full CXL capabilities. This
       may happen, for instance, if the CXL device is connected in a
       CXL-unaware part of the platform topology.
    2. Userspace-first provisioning of devices without kernel driver
       interference. This may be useful when provisioning a new device
       in a specific manner that might otherwise be blocked or prevented
       by the real CXL mem driver.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Link: https://lore.kernel.org/r/20210526174413.802913-1-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-05-26 11:19:05 -07:00
Dan Williams
5f653f7590 cxl/core: Rename bus.c to core.c
In preparation for more generic shared functionality across endpoint
consumers of core cxl resources, and platform-firmware producers of
those resources, rename bus.c to core.c. In addition to the central
rendezvous for interleave coordination, the core will also define common
routines like CXL register block mapping.
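
For illustration only (the helper name and the managed-mapping choice are
assumptions, not the core's actual interface), such a shared routine could
look like:

    #include <linux/device.h>
    #include <linux/io.h>

    /* Map a CXL register block so multiple consumers can share one helper. */
    static void __iomem *cxl_map_regblock_sketch(struct device *dev,
                                                 resource_size_t base,
                                                 resource_size_t len)
    {
            /* devm_*: the mapping is torn down automatically on device removal. */
            return devm_ioremap(dev, base, len);
    }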

Acked-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/162096972018.1865304.11079951161445408423.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-05-14 16:13:19 -07:00
Dan Williams
b39cb1052a cxl/mem: Register CXL memX devices
Create the /sys/bus/cxl hierarchy to enumerate:

* Memory Devices (per-endpoint control devices)

* Memory Address Space Devices (platform address ranges with
  interleaving, performance, and persistence attributes)

* Memory Regions (active provisioned memory from an address space device
  that is in use as System RAM or delegated to libnvdimm as Persistent
  Memory regions).

For now, only the per-endpoint control devices are registered on the
'cxl' bus. However, going forward it will provide a mechanism to
coordinate cross-device interleave.
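
A bare-bones sketch of the bus registration behind /sys/bus/cxl (the memX
device creation is omitted, and the init/exit function names are made up):

    #include <linux/device.h>
    #include <linux/module.h>

    static struct bus_type cxl_bus_type = {
            .name = "cxl",
    };

    static int __init cxl_bus_sketch_init(void)
    {
            /* Creates /sys/bus/cxl; memX endpoint devices register against it. */
            return bus_register(&cxl_bus_type);
    }

    static void __exit cxl_bus_sketch_exit(void)
    {
            bus_unregister(&cxl_bus_type);
    }

    module_init(cxl_bus_sketch_init);
    module_exit(cxl_bus_sketch_exit);
    MODULE_LICENSE("GPL");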

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> (v2)
Link: https://lore.kernel.org/r/20210217040958.1354670-4-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-02-16 20:36:38 -08:00
Dan Williams
4cdadfd5e0 cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints
The CXL.mem protocol allows a device to act as a provider of "System
RAM" and/or "Persistent Memory" that is fully coherent, as if the memory
were attached to the typical CPU memory controller.

With the CXL-2.0 specification a PCI endpoint can implement a "Type-3"
device interface and give the operating system control over "Host
Managed Device Memory". See section 2.3 Type 3 CXL Device.

The memory range exported by the device may optionally be described by
the platform firmware memory map, or by infrastructure like LIBNVDIMM to
provision persistent memory capacity from one, or more, CXL.mem devices.

A prerequisite for Linux-managed memory-capacity provisioning is this
cxl_mem driver, which can speak the mailbox protocol defined in section
8.2.8.4 Mailbox Registers.

For now, just land the initial driver boilerplate and Documentation/
infrastructure.
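
As a sketch of that boilerplate (the class-code match value and the probe
body are assumptions; the real match criteria and mailbox handling live in
the driver itself):

    #include <linux/module.h>
    #include <linux/pci.h>

    static int cxl_mem_sketch_probe(struct pci_dev *pdev,
                                    const struct pci_device_id *id)
    {
            /* Map the device registers and bring up the 8.2.8.4 mailbox here. */
            return 0;
    }

    static const struct pci_device_id cxl_mem_sketch_ids[] = {
            /* 0x0502xx: memory controller, CXL subclass (assumed match value) */
            { PCI_DEVICE_CLASS(0x050200, 0xffff00) },
            { /* terminate list */ }
    };
    MODULE_DEVICE_TABLE(pci, cxl_mem_sketch_ids);

    static struct pci_driver cxl_mem_sketch_driver = {
            .name     = "cxl_mem_sketch",
            .id_table = cxl_mem_sketch_ids,
            .probe    = cxl_mem_sketch_probe,
    };
    module_pci_driver(cxl_mem_sketch_driver);
    MODULE_LICENSE("GPL");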

Signed-off-by: Ben Widawsky <ben.widawsky@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: David Rientjes <rientjes@google.com> (v1)
Cc: Jonathan Corbet <corbet@lwn.net>
Link: https://www.computeexpresslink.org/download-the-specification
Link: https://lore.kernel.org/r/20210217040958.1354670-2-ben.widawsky@intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-02-16 20:36:38 -08:00