linux/drivers/cxl/Kconfig
Dan Williams d18bc74ace cxl/region: Manage CPU caches relative to DPA invalidation events
A "DPA invalidation event" is any scenario where the contents of a DPA
(Device Physical Address) is modified in a way that is incoherent with
CPU caches, or if the HPA (Host Physical Address) to DPA association
changes due to a remapping event.

PMEM security events like Unlock and Passphrase Secure Erase already
manage caches through LIBNVDIMM, so that leaves HPA to DPA remap events
that need cache management by the CXL core. Those only happen when the
boot time CXL configuration has changed. That event occurs when
userspace attaches an endpoint decoder to a region configuration, and
that region is subsequently activated.

The implication of not invalidating caches between remap events is that
reads from the region at different points in time may return different
results due to stale cached data from the previous HPA to DPA mapping.
Without a guarantee that the region contents after cxl_region_probe()
are written before being read (a layering-violation assumption that
cxl_region_probe() cannot make), the CXL subsystem needs to ensure that
reads that precede writes see consistent results.
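
As a sketch of the idea (a simplified rendering assuming the
cpu_cache_invalidate_memregion() interface and IORES_DESC_CXL resource
descriptor from this series; not the literal patch), the region driver
can refuse to enable a region when no invalidation mechanism exists:

	static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
	{
		/*
		 * Not every CPU can flush caches by physical range;
		 * wbinvd, for example, is only usable on bare metal x86.
		 */
		if (!cpu_cache_has_invalidate_memregion())
			return -ENXIO;

		/* Flush all caches of CXL-decoded physical addresses */
		cpu_cache_invalidate_memregion(IORES_DESC_CXL);
		return 0;
	}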

A CONFIG_CXL_REGION_INVALIDATION_TEST option is added to support
debugging and unit testing of the CXL implementation in QEMU or other
environments where cpu_cache_has_invalidate_memregion() returns false.
This may prove
too restrictive for QEMU where the HDM decoders are emulated, but in
that case the CXL subsystem needs some new mechanism / indication that
the HDM decoder is emulated and not a passthrough of real hardware.
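
In that test configuration the failure above would be downgraded to a
warning, roughly along these lines (an illustrative fragment of the
helper sketched earlier; the diagnostic text is invented):

	if (!cpu_cache_has_invalidate_memregion()) {
		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
			/* test builds proceed, caches may hold stale data */
			dev_warn(&cxlr->dev,
				 "test bypass: reads may see stale contents\n");
			return 0;
		}
		return -ENXIO;
	}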

Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/166993222098.1995348.16604163596374520890.stgit@dwillia2-xfh.jf.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-03 00:03:57 -08:00

# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	depends on PCI
	select PCI_DOE
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, the CXL.io protocol is equivalent to PCI Express.
	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.

if CXL_BUS

config CXL_PCI
	tristate "PCI manageability"
	default CXL_BUS
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and / or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.

	  If unsure say 'm'.
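
For illustration, matching that class code in a driver looks roughly
like the table below (PCI_CLASS_MEMORY_CXL is the standard 0x0502
base/sub-class pair; the 0x10 programming interface define is local to
this sketch):

	#include <linux/module.h>
	#include <linux/pci.h>

	#define CXL_MEMORY_PROGIF	0x10	/* memory device prog-if */

	static const struct pci_device_id cxl_mem_pci_tbl[] = {
		/* base 05h (memory), sub-class 02h (CXL), prog-if 10h */
		{ PCI_DEVICE_CLASS((PCI_CLASS_MEMORY_CXL << 8) | CXL_MEMORY_PROGIF, ~0) },
		{ /* terminate list */ },
	};
	MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl);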

config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	depends on CXL_PCI
	help
	  Enable CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification defined opcode. At any given point in
	  time the number of opcodes that the specification defines and a
	  device may implement may exceed the kernel's set of associated ioctl
	  function numbers. The mismatch is either by omission (the
	  specification is too new) or by design. When prototyping new
	  hardware, or developing / debugging the driver, it is useful to be
	  able to submit any possible command to the hardware, even commands
	  that may crash the kernel due to their potential impact on memory
	  currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.
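
As a sketch of how the option gates behavior at runtime
(cxl_submit_raw() and cxl_mbox_send_raw() are hypothetical helpers
standing in for the driver's mailbox path):

	static int cxl_submit_raw(struct cxl_dev_state *cxlds, u16 opcode)
	{
		/* RAW passthrough is compiled out unless explicitly enabled */
		if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS))
			return -EPERM;

		/* No validation: the opcode reaches hardware as-is */
		return cxl_mbox_send_raw(cxlds, opcode);
	}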

config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	default CXL_BUS
	select ACPI_TABLE_LIB
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description. See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy to map regions that represent System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.
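
For illustration, the CEDT is discoverable through the stock ACPI table
API; a minimal sketch (the function name is illustrative, error handling
is trimmed):

	#include <linux/acpi.h>

	static int __init cedt_probe(void)
	{
		struct acpi_table_header *cedt;
		acpi_status status;

		/* ACPI_SIG_CEDT is "CEDT"; instance 0 is the first match */
		status = acpi_get_table(ACPI_SIG_CEDT, 0, &cedt);
		if (ACPI_FAILURE(status))
			return -ENXIO;

		pr_info("CEDT: %u bytes of CXL topology description\n",
			cedt->length);
		return 0;
	}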

config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM system
	  subsystem. Say 'y/m' to enable support for enumerating and
	  provisioning the persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.
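
A rough sketch of the bridge direction (descriptor wiring is elided;
.ndctl is where LIBNVDIMM commands would be translated to CXL mailbox
operations, and the function name is illustrative):

	#include <linux/libnvdimm.h>

	static struct nvdimm_bus_descriptor nd_desc = {
		.provider_name = "CXL",
		/* .ndctl: translate LIBNVDIMM ioctls to CXL mailbox commands */
	};

	static struct nvdimm_bus *cxl_nvb_register(struct device *host)
	{
		/* the returned bus anchors CXL-backed pmem namespaces */
		return nvdimm_bus_register(host, &nd_desc);
	}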

config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as HDM "Host-managed Device Memory".

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.

	  If unsure say 'm'.

config CXL_PORT
	default CXL_BUS
	tristate

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM
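
This compiles in cxl_mem_active(), which the power management core
consults to block suspend while CXL-backed memory is online. A
simplified sketch of the consumer side (the function name is
illustrative; without CXL_SUSPEND the helper stubs to false):

	static int prepare_suspend(void)
	{
		/* cxl_mem_active() stubs to false without CXL_SUSPEND */
		if (cxl_mem_active()) {
			pr_err("CXL memory active, suspend blocked\n");
			return -EBUSY;
		}
		return 0;
	}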

config CXL_REGION
	bool
	default CXL_BUS
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select MEMREGION
	select GET_FREE_REGION
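
MEMREGION supplies a small id allocator and GET_FREE_REGION supplies
alloc_free_mem_region() for carving HPA space out of a CXL window. The
id pattern looks like this (a sketch; the function name is
illustrative):

	#include <linux/gfp.h>
	#include <linux/memregion.h>

	static int cxl_alloc_region_id(void)
	{
		/* hand out a unique id to name the new regionN device */
		int id = memregion_alloc(GFP_KERNEL);

		if (id < 0)
			return id;
		/* ... create the region device; on teardown: */
		memregion_free(id);
		return 0;
	}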

config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL Region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL Region driver attempts
	  to invalidate caches when those events occur. If that invalidation
	  fails the region will fail to enable. Cache invalidation can fail
	  when the CPU does not provide an invalidation mechanism; for
	  example, usage of wbinvd is restricted to bare metal x86. For
	  testing purposes, toggling this option disables that data integrity
	  safety and proceeds with enabling regions when there might be
	  conflicting contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

endif