Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicts:

drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
  9e26680733 ("bnxt_en: Update firmware call to retrieve TX PTP timestamp")
  9e518f2580 ("bnxt_en: 1PPS functions to configure TSIO pins")
  099fdeda65 ("bnxt_en: Event handler for PPS events")

kernel/bpf/helpers.c
include/linux/bpf-cgroup.h
  a2baf4e8bb ("bpf: Fix potentially incorrect results with bpf_get_local_storage()")
  c7603cfa04 ("bpf: Add ambient BPF runtime context stored in current")

drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
  5957cc557d ("net/mlx5: Set all field of mlx5_irq before inserting it to the xarray")
  2d0b41a376 ("net/mlx5: Refcount mlx5_irq with integer")

MAINTAINERS
  7b637cd52f ("MAINTAINERS: fix Microchip CAN BUS Analyzer Tool entry typo")
  7d901a1e87 ("net: phy: add Maxlinear GPY115/21x/24x driver")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit f4083a752a
Jakub Kicinski, 2021-08-13 06:41:22 -07:00
362 changed files with 2986 additions and 1480 deletions

@ -108,7 +108,7 @@ This bump in ABI version is at most once per kernel development cycle.
For example, if current state of ``libbpf.map`` is:
.. code-block:: c
.. code-block:: none
LIBBPF_0.0.1 {
global:
@ -121,7 +121,7 @@ For example, if current state of ``libbpf.map`` is:
, and a new symbol ``bpf_func_c`` is being introduced, then
``libbpf.map`` should be changed like this:
.. code-block:: c
.. code-block:: none
LIBBPF_0.0.1 {
global:
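
For reference, since the hunk context cuts the snippet off here: a complete version script after such a change would plausibly look like the sketch below. The bpf_func_a and bpf_func_b names stand in for the pre-existing symbols and are not taken from this diff.

LIBBPF_0.0.1 {
	global:
		bpf_func_a;
		bpf_func_b;
	local:
		*;
};

LIBBPF_0.0.2 {
	global:
		bpf_func_c;
} LIBBPF_0.0.1;

The new version node inherits from the previous one, so the existing symbols keep their LIBBPF_0.0.1 version while bpf_func_c alone is tagged LIBBPF_0.0.2.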

@ -18,114 +18,5 @@ real, with all the uAPI bits is:
* Route shmem backend over to TTM SYSTEM for discrete
* TTM purgeable object support
* Move i915 buddy allocator over to TTM
* MMAP ioctl mode (see `I915 MMAP`_)
* SET/GET ioctl caching (see `I915 SET/GET CACHING`_)
* Send RFC (with mesa-dev on cc) for final sign off on the uAPI
* Add pciid for DG1 and turn on uAPI for real
New object placement and region query uAPI
==========================================
Starting from DG1 we need to give userspace the ability to allocate buffers from
device local-memory. Currently the driver supports gem_create, which can place
buffers in system memory via shmem, and the usual assortment of other
interfaces, like dumb buffers and userptr.
To support this new capability, while also providing a uAPI which will work
beyond just DG1, we propose to offer three new bits of uAPI:
DRM_I915_QUERY_MEMORY_REGIONS
-----------------------------
New query ID which allows userspace to discover the list of supported memory
regions (like system-memory and local-memory) for a given device. We identify
each region with a class and instance pair, which should be unique. The class
here would be DEVICE or SYSTEM, and the instance would be zero, on platforms
like DG1.
Side note: The class/instance design is borrowed from our existing engine uAPI,
where we describe every physical engine in terms of its class, and the
particular instance, since we can have more than one per class.
In the future we also want to expose more information which can further
describe the capabilities of a region.
.. kernel-doc:: include/uapi/drm/i915_drm.h
:functions: drm_i915_gem_memory_class drm_i915_gem_memory_class_instance drm_i915_memory_region_info drm_i915_query_memory_regions
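
As a rough illustration (not code from this series), userspace would typically consume this query with the usual two-pass drm_i915_query pattern, probing the required size first and then fetching the region list:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static struct drm_i915_query_memory_regions *query_regions(int fd)
{
	struct drm_i915_query_memory_regions *info;
	struct drm_i915_query_item item = {
		.query_id = DRM_I915_QUERY_MEMORY_REGIONS,
	};
	struct drm_i915_query query = {
		.num_items = 1,
		.items_ptr = (uintptr_t)&item,
	};

	/* First pass: the kernel reports the required size in item.length. */
	if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
		return NULL;

	info = calloc(1, item.length);
	if (!info)
		return NULL;
	item.data_ptr = (uintptr_t)info;

	/* Second pass: the kernel fills in the region list. */
	if (ioctl(fd, DRM_IOCTL_I915_QUERY, &query)) {
		free(info);
		return NULL;
	}

	/* info->regions[i].region now holds the class/instance pairs. */
	return info;
}
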
GEM_CREATE_EXT
--------------
New ioctl which is basically just gem_create but now allows userspace to provide
a chain of possible extensions. Note that if we don't provide any extensions and
set flags=0 then we get the exact same behaviour as gem_create.
Side note: We also need to support PXP[1] in the near future, which is also
applicable to integrated platforms, and adds its own gem_create_ext extension,
which basically lets userspace mark a buffer as "protected".
.. kernel-doc:: include/uapi/drm/i915_drm.h
:functions: drm_i915_gem_create_ext
I915_GEM_CREATE_EXT_MEMORY_REGIONS
----------------------------------
Implemented as an extension for gem_create_ext, we would now allow userspace to
optionally provide an immutable list of preferred placements at creation time,
in priority order, for a given buffer object. For the placements we expect
them each to use the class/instance encoding, as per the output of the regions
query. Having the list in priority order will be useful in the future when
placing an object, say during eviction.
.. kernel-doc:: include/uapi/drm/i915_drm.h
:functions: drm_i915_gem_create_ext_memory_regions
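
For illustration, creating an object with such a placement list might look like the sketch below. The struct and flag names follow the kernel-doc reference above, but this is an illustrative sketch, not code from this commit:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Sketch: prefer local-memory, with system memory as the fallback. */
static int create_lmem_object(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_i915_gem_memory_class_instance placements[] = {
		{ .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
		{ .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
	};
	struct drm_i915_gem_create_ext_memory_regions regions = {
		.base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
		.num_regions = 2,
		.regions = (uintptr_t)placements,
	};
	struct drm_i915_gem_create_ext create = {
		.size = size,
		.extensions = (uintptr_t)&regions,
	};
	int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

	if (!ret)
		*handle = create.handle;
	return ret;
}

With no extensions and flags=0 this collapses back to plain gem_create behaviour, as noted above.
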
One fair criticism here is that this seems a little over-engineered[2]. If we
just consider DG1 then yes, a simple gem_create.flags or something is totally
all that's needed to tell the kernel to allocate the buffer in local-memory or
whatever. However looking to the future we need uAPI which can also support
upcoming Xe HP multi-tile architecture in a sane way, where there can be
multiple local-memory instances for a given device, and so using both class and
instance in our uAPI to describe regions is desirable, although specifically
for DG1 it's uninteresting, since we only have a single local-memory instance.
Existing uAPI issues
====================
Some potential issues we still need to resolve.
I915 MMAP
---------
In i915 there are multiple ways to MMAP a GEM object, including mapping the same
object using different mapping types (WC vs WB), i.e. multiple active mmaps per
object. TTM expects one MMAP at most for the lifetime of the object. If it
turns out that we have to backpedal here, there might be some potential
userspace fallout.
I915 SET/GET CACHING
--------------------
In i915 we have the set/get_caching ioctl. TTM doesn't let us change this, but
DG1 doesn't support non-snooped pcie transactions, so we can just always
allocate as WB for smem-only buffers. If/when our hw gains support for
non-snooped pcie transactions then we must fix this mode at allocation time as
a new GEM extension.
This is related to the mmap problem, because in general (meaning, when we're
not running on Intel CPUs) the CPU mmap must not, ever, be inconsistent with the
allocation mode.
A possible idea is to let the kernel pick the mmap mode for userspace from the
following table:
smem-only: WB. Userspace does not need to call clflush.
smem+lmem: We only ever allow a single mode, so simply allocate this as uncached
memory, and always give userspace a WC mapping. The GPU still does snooped access
here (assuming we can't turn it off like on DG1), which is a bit inefficient.
lmem-only: always WC.
This means on discrete you only get a single mmap mode; all others must be
rejected. That's probably going to be a new default mode or something like
that.
Links
=====
[1] https://patchwork.freedesktop.org/series/86798/
[2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5599#note_553791

@ -191,19 +191,9 @@ nf_flowtable_tcp_timeout - INTEGER (seconds)
TCP connections may be offloaded from nf conntrack to nf flow table.
Once aged, the connection is returned to nf conntrack with tcp pickup timeout.
nf_flowtable_tcp_pickup - INTEGER (seconds)
default 120
TCP connection timeout after being aged from nf flow table offload.
nf_flowtable_udp_timeout - INTEGER (seconds)
default 30
Control offload timeout for udp connections.
UDP connections may be offloaded from nf conntrack to nf flow table.
Once aged, the connection is returned to nf conntrack with udp pickup timeout.
nf_flowtable_udp_pickup - INTEGER (seconds)
default 30
UDP connection timeout after being aged from nf flow table offload.

@ -263,7 +263,7 @@ Userspace can also add file descriptors to the notifying process via
``ioctl(SECCOMP_IOCTL_NOTIF_ADDFD)``. The ``id`` member of
``struct seccomp_notif_addfd`` should be the same ``id`` as in
``struct seccomp_notif``. The ``newfd_flags`` flag may be used to set flags
like O_EXEC on the file descriptor in the notifying process. If the supervisor
like O_CLOEXEC on the file descriptor in the notifying process. If the supervisor
wants to inject the file descriptor with a specific number, the
``SECCOMP_ADDFD_FLAG_SETFD`` flag can be used, and set the ``newfd`` member to
the specific number to use. If that file descriptor is already open in the
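
A minimal sketch of that flow from the supervisor side (assuming the id came from a prior SECCOMP_IOCTL_NOTIF_RECV; the target fd number 7 is purely illustrative):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/seccomp.h>

/* Sketch: install the supervisor's local_fd as fd 7 in the notifying task. */
static int addfd_example(int notify_fd, int local_fd, __u64 id)
{
	struct seccomp_notif_addfd addfd = {
		.id = id,                          /* must match the pending notification */
		.flags = SECCOMP_ADDFD_FLAG_SETFD, /* install at a chosen fd number */
		.srcfd = local_fd,                 /* fd to copy from the supervisor */
		.newfd = 7,                        /* fd number to use in the target */
		.newfd_flags = O_CLOEXEC,          /* flags applied to the installed fd */
	};

	/* On success the ioctl returns the installed fd number (7 here). */
	return ioctl(notify_fd, SECCOMP_IOCTL_NOTIF_ADDFD, &addfd);
}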

@ -11347,7 +11347,7 @@ L: netdev@vger.kernel.org
S: Supported
F: drivers/net/phy/mxl-gpy.c
MCAB MICROCHIP CAN BUS ANALYZER TOOL DRIVER
MCBA MICROCHIP CAN BUS ANALYZER TOOL DRIVER
R: Yasushi SHOJI <yashi@spacecubics.com>
L: linux-can@vger.kernel.org
S: Maintained
@ -15823,7 +15823,7 @@ F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
F: drivers/i2c/busses/i2c-emev2.c
RENESAS ETHERNET DRIVERS
R: Sergei Shtylyov <sergei.shtylyov@gmail.com>
R: Sergey Shtylyov <s.shtylyov@omp.ru>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,*.yaml
@ -17835,7 +17835,7 @@ F: include/linux/sync_file.h
F: include/uapi/linux/sync_file.h
SYNOPSYS ARC ARCHITECTURE
M: Vineet Gupta <vgupta@synopsys.com>
M: Vineet Gupta <vgupta@kernel.org>
L: linux-snps-arc@lists.infradead.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc.git
@ -20037,7 +20037,8 @@ F: Documentation/devicetree/bindings/extcon/wlf,arizona.yaml
F: Documentation/devicetree/bindings/mfd/wlf,arizona.yaml
F: Documentation/devicetree/bindings/mfd/wm831x.txt
F: Documentation/devicetree/bindings/regulator/wlf,arizona.yaml
F: Documentation/devicetree/bindings/sound/wlf,arizona.yaml
F: Documentation/devicetree/bindings/sound/wlf,*.yaml
F: Documentation/devicetree/bindings/sound/wm*
F: Documentation/hwmon/wm83??.rst
F: arch/arm/mach-s3c/mach-crag6410*
F: drivers/clk/clk-wm83*.c

@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 14
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc5
NAME = Opossums on Parade
# *DOCUMENTATION*
@ -1316,6 +1316,16 @@ PHONY += scripts_unifdef
scripts_unifdef: scripts_basic
$(Q)$(MAKE) $(build)=scripts scripts/unifdef
# ---------------------------------------------------------------------------
# Install
# Many distributions have a custom install script, /sbin/installkernel.
# If DKMS is installed, 'make install' will eventually recurse back
# to this Makefile to build and install external modules.
# Cancel sub_make_done so that options such as M=, V=, etc. are parsed.
install: sub_make_done :=
# ---------------------------------------------------------------------------
# Tools

@ -409,7 +409,7 @@ choice
help
Depending on the configuration, CPU can contain DSP registers
(ACC0_GLO, ACC0_GHI, DSP_BFLY0, DSP_CTRL, DSP_FFT_CTRL).
Bellow is options describing how to handle these registers in
Below are options describing how to handle these registers in
interrupt entry / exit and in context switch.
config ARC_DSP_NONE

@ -24,7 +24,7 @@
*/
static inline __sum16 csum_fold(__wsum s)
{
unsigned r = s << 16 | s >> 16; /* ror */
unsigned int r = s << 16 | s >> 16; /* ror */
s = ~s;
s -= r;
return s >> 16;

@ -123,7 +123,7 @@ static const char * const arc_pmu_ev_hw_map[] = {
#define C(_x) PERF_COUNT_HW_CACHE_##_x
#define CACHE_OP_UNSUPPORTED 0xffff
static const unsigned arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
static const unsigned int arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
[C(L1D)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = PERF_COUNT_ARC_LDC,

@ -57,23 +57,26 @@ void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
void fpu_init_task(struct pt_regs *regs)
{
const unsigned int fwe = 0x80000000;
/* default rounding mode */
write_aux_reg(ARC_REG_FPU_CTRL, 0x100);
/* set "Write enable" to allow explicit write to exception flags */
write_aux_reg(ARC_REG_FPU_STATUS, 0x80000000);
/* Initialize to zero: setting requires FWE be set */
write_aux_reg(ARC_REG_FPU_STATUS, fwe);
}
void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
{
struct arc_fpu *save = &prev->thread.fpu;
struct arc_fpu *restore = &next->thread.fpu;
const unsigned int fwe = 0x80000000;
save->ctrl = read_aux_reg(ARC_REG_FPU_CTRL);
save->status = read_aux_reg(ARC_REG_FPU_STATUS);
write_aux_reg(ARC_REG_FPU_CTRL, restore->ctrl);
write_aux_reg(ARC_REG_FPU_STATUS, restore->status);
write_aux_reg(ARC_REG_FPU_STATUS, (fwe | restore->status));
}
#endif

@ -260,7 +260,7 @@ static void init_unwind_hdr(struct unwind_table *table,
{
const u8 *ptr;
unsigned long tableSize = table->size, hdrSize;
unsigned n;
unsigned int n;
const u32 *fde;
struct {
u8 version;
@ -462,7 +462,7 @@ static uleb128_t get_uleb128(const u8 **pcur, const u8 *end)
{
const u8 *cur = *pcur;
uleb128_t value;
unsigned shift;
unsigned int shift;
for (shift = 0, value = 0; cur < end; shift += 7) {
if (shift + 7 > 8 * sizeof(value)
@ -483,7 +483,7 @@ static sleb128_t get_sleb128(const u8 **pcur, const u8 *end)
{
const u8 *cur = *pcur;
sleb128_t value;
unsigned shift;
unsigned int shift;
for (shift = 0, value = 0; cur < end; shift += 7) {
if (shift + 7 > 8 * sizeof(value)
@ -609,7 +609,7 @@ static unsigned long read_pointer(const u8 **pLoc, const void *end,
static signed fde_pointer_type(const u32 *cie)
{
const u8 *ptr = (const u8 *)(cie + 2);
unsigned version = *ptr;
unsigned int version = *ptr;
if (*++ptr) {
const char *aug;
@ -904,7 +904,7 @@ int arc_unwind(struct unwind_frame_info *frame)
const u8 *ptr = NULL, *end = NULL;
unsigned long pc = UNW_PC(frame) - frame->call_frame;
unsigned long startLoc = 0, endLoc = 0, cfa;
unsigned i;
unsigned int i;
signed ptrType = -1;
uleb128_t retAddrReg = 0;
const struct unwind_table *table;

@ -88,6 +88,8 @@ SECTIONS
CPUIDLE_TEXT
LOCK_TEXT
KPROBES_TEXT
IRQENTRY_TEXT
SOFTIRQENTRY_TEXT
*(.fixup)
*(.gnu.warning)
}

@ -1595,7 +1595,7 @@
compatible = "ti,am4372-d_can", "ti,am3352-d_can";
reg = <0x0 0x2000>;
clocks = <&dcan1_fck>;
clock-name = "fck";
clock-names = "fck";
syscon-raminit = <&scm_conf 0x644 1>;
interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";

@ -582,7 +582,7 @@
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&i2c0_pins>;
clock-frequency = <400000>;
clock-frequency = <100000>;
tps65218: tps65218@24 {
reg = <0x24>;

@ -388,13 +388,13 @@
pinctrl_power_button: powerbutgrp {
fsl,pins = <
MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4
MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4
>;
};
pinctrl_power_out: poweroutgrp {
fsl,pins = <
MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4
MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4
>;
};

@ -54,7 +54,13 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_microsom_enet_ar8035>;
phy-mode = "rgmii-id";
phy-reset-duration = <2>;
/*
* The PHY seems to require a long-enough reset duration to avoid
* some rare issues where the PHY gets stuck in an inconsistent and
* non-functional state at boot-up. 10ms proved to be fine.
*/
phy-reset-duration = <10>;
phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
status = "okay";

@ -43,6 +43,7 @@
assigned-clock-rates = <0>, <198000000>;
cap-power-off-card;
keep-power-in-suspend;
max-frequency = <25000000>;
mmc-pwrseq = <&wifi_pwrseq>;
no-1-8-v;
non-removable;

@ -30,14 +30,6 @@
regulator-max-microvolt = <5000000>;
};
vdds_1v8_main: fixedregulator-vdds_1v8_main {
compatible = "regulator-fixed";
regulator-name = "vdds_1v8_main";
vin-supply = <&smps7_reg>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
vmmcsd_fixed: fixedregulator-mmcsd {
compatible = "regulator-fixed";
regulator-name = "vmmcsd_fixed";
@ -487,6 +479,7 @@
regulator-boot-on;
};
vdds_1v8_main:
smps7_reg: smps7 {
/* VDDS_1v8_OMAP over VDDS_1v8_MAIN */
regulator-name = "smps7";

@ -755,14 +755,14 @@
status = "disabled";
};
vica: intc@10140000 {
vica: interrupt-controller@10140000 {
compatible = "arm,versatile-vic";
interrupt-controller;
#interrupt-cells = <1>;
reg = <0x10140000 0x20>;
};
vicb: intc@10140020 {
vicb: interrupt-controller@10140020 {
compatible = "arm,versatile-vic";
interrupt-controller;
#interrupt-cells = <1>;

@ -37,7 +37,7 @@
poll-interval = <20>;
/*
* The EXTi IRQ line 3 is shared with touchscreen and ethernet,
* The EXTi IRQ line 3 is shared with ethernet,
* so mark this as polled GPIO key.
*/
button-0 {
@ -46,6 +46,16 @@
gpios = <&gpiof 3 GPIO_ACTIVE_LOW>;
};
/*
* The EXTi IRQ line 6 is shared with touchscreen,
* so mark this as polled GPIO key.
*/
button-1 {
label = "TA2-GPIO-B";
linux,code = <KEY_B>;
gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
};
/*
* The EXTi IRQ line 0 is shared with PMIC,
* so mark this as polled GPIO key.
@ -60,13 +70,6 @@
gpio-keys {
compatible = "gpio-keys";
button-1 {
label = "TA2-GPIO-B";
linux,code = <KEY_B>;
gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
wakeup-source;
};
button-3 {
label = "TA4-GPIO-D";
linux,code = <KEY_D>;
@ -82,6 +85,7 @@
label = "green:led5";
gpios = <&gpioc 6 GPIO_ACTIVE_HIGH>;
default-state = "off";
status = "disabled";
};
led-1 {
@ -185,8 +189,8 @@
touchscreen@38 {
compatible = "edt,edt-ft5406";
reg = <0x38>;
interrupt-parent = <&gpiog>;
interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
interrupt-parent = <&gpioc>;
interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
};
};

@ -12,6 +12,8 @@
aliases {
ethernet0 = &ethernet0;
ethernet1 = &ksz8851;
rtc0 = &hwrtc;
rtc1 = &rtc;
};
memory@c0000000 {
@ -138,6 +140,7 @@
reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
reset-assert-us = <500>;
reset-deassert-us = <500>;
smsc,disable-energy-detect;
interrupt-parent = <&gpioi>;
interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
};
@ -248,7 +251,7 @@
/delete-property/dmas;
/delete-property/dma-names;
rtc@32 {
hwrtc: rtc@32 {
compatible = "microcrystal,rv8803";
reg = <0x32>;
};

@ -68,7 +68,6 @@ void imx_set_cpu_arg(int cpu, u32 arg);
void v7_secondary_startup(void);
void imx_scu_map_io(void);
void imx_smp_prepare(void);
void imx_gpcv2_set_core1_pdn_pup_by_software(bool pdn);
#else
static inline void imx_scu_map_io(void) {}
static inline void imx_smp_prepare(void) {}
@ -81,6 +80,7 @@ void imx_gpc_mask_all(void);
void imx_gpc_restore_all(void);
void imx_gpc_hwirq_mask(unsigned int hwirq);
void imx_gpc_hwirq_unmask(unsigned int hwirq);
void imx_gpcv2_set_core1_pdn_pup_by_software(bool pdn);
void imx_anatop_init(void);
void imx_anatop_pre_suspend(void);
void imx_anatop_post_resume(void);

@ -103,6 +103,7 @@ struct mmdc_pmu {
struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
struct hlist_node node;
struct fsl_mmdc_devtype_data *devtype_data;
struct clk *mmdc_ipg_clk;
};
/*
@ -462,11 +463,14 @@ static int imx_mmdc_remove(struct platform_device *pdev)
cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
perf_pmu_unregister(&pmu_mmdc->pmu);
iounmap(pmu_mmdc->mmdc_base);
clk_disable_unprepare(pmu_mmdc->mmdc_ipg_clk);
kfree(pmu_mmdc);
return 0;
}
static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base)
static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base,
struct clk *mmdc_ipg_clk)
{
struct mmdc_pmu *pmu_mmdc;
char *name;
@ -494,6 +498,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
}
mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
if (mmdc_num == 0)
name = "mmdc";
else
@ -529,7 +534,7 @@ pmu_free:
#else
#define imx_mmdc_remove NULL
#define imx_mmdc_perf_init(pdev, mmdc_base) 0
#define imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk) 0
#endif
static int imx_mmdc_probe(struct platform_device *pdev)
@ -567,7 +572,13 @@ static int imx_mmdc_probe(struct platform_device *pdev)
val &= ~(1 << BP_MMDC_MAPSR_PSD);
writel_relaxed(val, reg);
return imx_mmdc_perf_init(pdev, mmdc_base);
err = imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk);
if (err) {
iounmap(mmdc_base);
clk_disable_unprepare(mmdc_ipg_clk);
}
return err;
}
int imx_mmdc_get_ddr_type(void)

@ -91,6 +91,7 @@ config MACH_IXDP465
config MACH_GORAMO_MLR
bool "GORAMO Multi Link Router"
depends on IXP4XX_PCI_LEGACY
help
Say 'Y' here if you want your kernel to support GORAMO
MultiLink router.

@ -3776,6 +3776,7 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
struct omap_hwmod_ocp_if *oi;
struct clockdomain *clkdm;
struct clk_hw_omap *clk;
struct clk_hw *hw;
if (!oh)
return NULL;
@ -3792,7 +3793,14 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
c = oi->_clk;
}
clk = to_clk_hw_omap(__clk_get_hw(c));
hw = __clk_get_hw(c);
if (!hw)
return NULL;
clk = to_clk_hw_omap(hw);
if (!clk)
return NULL;
clkdm = clk->clkdm;
if (!clkdm)
return NULL;

@ -1800,11 +1800,11 @@ config RANDOMIZE_BASE
If unsure, say N.
config RANDOMIZE_MODULE_REGION_FULL
bool "Randomize the module region over a 4 GB range"
bool "Randomize the module region over a 2 GB range"
depends on RANDOMIZE_BASE
default y
help
Randomizes the location of the module region inside a 4 GB window
Randomizes the location of the module region inside a 2 GB window
covering the core kernel. This way, it is less likely for modules
to leak information about the location of core kernel data structures
but it does imply that function calls between modules and the core
@ -1812,7 +1812,10 @@ config RANDOMIZE_MODULE_REGION_FULL
When this option is not set, the module region will be randomized over
a limited range that contains the [_stext, _etext] interval of the
core kernel, so branch relocations are always in range.
core kernel, so branch relocations are almost always in range unless
ARM64_MODULE_PLTS is enabled and the region is exhausted. In this
particular case of region exhaustion, modules might be able to fall
back to a larger 2GB area.
config CC_HAVE_STACKPROTECTOR_SYSREG
def_bool $(cc-option,-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=0)

@ -21,19 +21,11 @@ LDFLAGS_vmlinux += -shared -Bsymbolic -z notext \
endif
ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
$(warning ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum)
else
ifeq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
LDFLAGS_vmlinux += --fix-cortex-a53-843419
endif
endif
ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS), y)
ifneq ($(CONFIG_ARM64_LSE_ATOMICS), y)
$(warning LSE atomics not supported by binutils)
endif
endif
cc_has_k_constraint := $(call try-run,echo \
'int main(void) { \
asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
@ -176,6 +168,17 @@ vdso_install:
archprepare:
$(Q)$(MAKE) $(build)=arch/arm64/tools kapi
ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
@echo "warning: ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum" >&2
endif
endif
ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS),y)
ifneq ($(CONFIG_ARM64_LSE_ATOMICS),y)
@echo "warning: LSE atomics not supported by binutils" >&2
endif
endif
# We use MRPROPER_FILES and CLEAN_FILES now
archclean:

@ -54,6 +54,7 @@
&mscc_felix_port0 {
label = "swp0";
managed = "in-band-status";
phy-handle = <&phy0>;
phy-mode = "sgmii";
status = "okay";
@ -61,6 +62,7 @@
&mscc_felix_port1 {
label = "swp1";
managed = "in-band-status";
phy-handle = <&phy1>;
phy-mode = "sgmii";
status = "okay";

@ -66,7 +66,7 @@
};
};
sysclk: clock-sysclk {
sysclk: sysclk {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <100000000>;

@ -19,6 +19,8 @@
aliases {
spi0 = &spi0;
ethernet1 = &eth1;
mmc0 = &sdhci0;
mmc1 = &sdhci1;
};
chosen {
@ -119,6 +121,7 @@
pinctrl-names = "default";
pinctrl-0 = <&i2c1_pins>;
clock-frequency = <100000>;
/delete-property/ mrvl,i2c-fast-mode;
status = "okay";
rtc@6f {

@ -1840,7 +1840,11 @@
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE1R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE1W &emc>;
interconnect-names = "read", "write";
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE1>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE1 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie@14120000 {
@ -1890,7 +1894,11 @@
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE2AR &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE2AW &emc>;
interconnect-names = "read", "write";
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE2>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE2 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie@14140000 {
@ -1940,7 +1948,11 @@
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE3R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE3W &emc>;
interconnect-names = "read", "write";
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE3>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE3 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie@14160000 {
@ -1990,7 +2002,11 @@
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE4R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE4W &emc>;
interconnect-names = "read", "write";
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE4>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE4 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie@14180000 {
@ -2040,7 +2056,11 @@
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE0R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE0W &emc>;
interconnect-names = "read", "write";
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE0>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE0 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie@141a0000 {
@ -2094,7 +2114,11 @@
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE5R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE5W &emc>;
interconnect-names = "read", "write";
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE5>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE5 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie_ep@14160000 {
@ -2127,6 +2151,14 @@
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE4R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE4W &emc>;
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE4>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE4 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie_ep@14180000 {
@ -2159,6 +2191,14 @@
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE0R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE0W &emc>;
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE0>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE0 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
pcie_ep@141a0000 {
@ -2194,6 +2234,14 @@
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE5R &emc>,
<&mc TEGRA194_MEMORY_CLIENT_PCIE5W &emc>;
interconnect-names = "dma-mem", "write";
iommus = <&smmu TEGRA194_SID_PCIE5>;
iommu-map = <0x0 &smmu TEGRA194_SID_PCIE5 0x1000>;
iommu-map-mask = <0x0>;
dma-coherent;
};
sram@40000000 {

@ -320,7 +320,17 @@ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
static inline unsigned long regs_return_value(struct pt_regs *regs)
{
return regs->regs[0];
unsigned long val = regs->regs[0];
/*
* Audit currently uses regs_return_value() instead of
* syscall_get_return_value(). Apply the same sign-extension here until
* audit is updated to use syscall_get_return_value().
*/
if (compat_user_mode(regs))
val = sign_extend64(val, 31);
return val;
}
static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)

@ -35,7 +35,7 @@ struct stack_info {
* accounting information necessary for robust unwinding.
*
* @fp: The fp value in the frame record (or the real fp)
* @pc: The fp value in the frame record (or the real lr)
* @pc: The lr value in the frame record (or the real lr)
*
* @stacks_done: Stacks which have been entirely unwound, for which it is no
* longer valid to unwind to.

@ -29,24 +29,25 @@ static inline void syscall_rollback(struct task_struct *task,
regs->regs[0] = regs->orig_x0;
}
static inline long syscall_get_return_value(struct task_struct *task,
struct pt_regs *regs)
{
unsigned long val = regs->regs[0];
if (is_compat_thread(task_thread_info(task)))
val = sign_extend64(val, 31);
return val;
}
static inline long syscall_get_error(struct task_struct *task,
struct pt_regs *regs)
{
unsigned long error = regs->regs[0];
if (is_compat_thread(task_thread_info(task)))
error = sign_extend64(error, 31);
unsigned long error = syscall_get_return_value(task, regs);
return IS_ERR_VALUE(error) ? error : 0;
}
static inline long syscall_get_return_value(struct task_struct *task,
struct pt_regs *regs)
{
return regs->regs[0];
}
static inline void syscall_set_return_value(struct task_struct *task,
struct pt_regs *regs,
int error, long val)

@ -162,7 +162,9 @@ u64 __init kaslr_early_init(void)
* a PAGE_SIZE multiple in the range [_etext - MODULES_VSIZE,
* _stext) . This guarantees that the resulting region still
* covers [_stext, _etext], and that all relative branches can
* be resolved without veneers.
* be resolved without veneers unless this region is exhausted
* and we fall back to a larger 2GB window in module_alloc()
* when ARM64_MODULE_PLTS is enabled.
*/
module_range = MODULES_VSIZE - (u64)(_etext - _stext);
module_alloc_base = (u64)_etext + offset - MODULES_VSIZE;

@ -1862,7 +1862,7 @@ void syscall_trace_exit(struct pt_regs *regs)
audit_syscall_exit(regs);
if (flags & _TIF_SYSCALL_TRACEPOINT)
trace_sys_exit(regs, regs_return_value(regs));
trace_sys_exit(regs, syscall_get_return_value(current, regs));
if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);

@ -29,6 +29,7 @@
#include <asm/unistd.h>
#include <asm/fpsimd.h>
#include <asm/ptrace.h>
#include <asm/syscall.h>
#include <asm/signal32.h>
#include <asm/traps.h>
#include <asm/vdso.h>
@ -890,7 +891,7 @@ static void do_signal(struct pt_regs *regs)
retval == -ERESTART_RESTARTBLOCK ||
(retval == -ERESTARTSYS &&
!(ksig.ka.sa.sa_flags & SA_RESTART)))) {
regs->regs[0] = -EINTR;
syscall_set_return_value(current, regs, -EINTR, 0);
regs->pc = continue_addr;
}

@ -218,7 +218,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
#ifdef CONFIG_STACKTRACE
noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
void *cookie, struct task_struct *task,
struct pt_regs *regs)
{

@ -54,10 +54,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
ret = do_ni_syscall(regs, scno);
}
if (is_compat_task())
ret = lower_32_bits(ret);
regs->regs[0] = ret;
syscall_set_return_value(current, regs, 0, ret);
/*
* Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
@ -115,7 +112,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
* syscall. do_notify_resume() will send a signal to userspace
* before the syscall is restarted.
*/
regs->regs[0] = -ERESTARTNOINTR;
syscall_set_return_value(current, regs, -ERESTARTNOINTR, 0);
return;
}
@ -136,7 +133,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
* anyway.
*/
if (scno == NO_SYSCALL)
regs->regs[0] = -ENOSYS;
syscall_set_return_value(current, regs, -ENOSYS, 0);
scno = syscall_trace_enter(regs);
if (scno == NO_SYSCALL)
goto trace_exit;

@ -321,7 +321,7 @@ KBUILD_LDFLAGS += -m $(ld-emul)
ifdef CONFIG_MIPS
CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
endif

@ -58,15 +58,20 @@ do { \
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
{
pmd_t *pmd = NULL;
pmd_t *pmd;
struct page *pg;
pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER);
if (pg) {
pgtable_pmd_page_ctor(pg);
pmd = (pmd_t *)page_address(pg);
pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER);
if (!pg)
return NULL;
if (!pgtable_pmd_page_ctor(pg)) {
__free_pages(pg, PMD_ORDER);
return NULL;
}
pmd = (pmd_t *)page_address(pg);
pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
return pmd;
}

@ -48,7 +48,8 @@ static struct plat_serial8250_port uart8250_data[] = {
.mapbase = 0x1f000900, /* The CBUS UART */
.irq = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2,
.uartclk = 3686400, /* Twice the usual clk! */
.iotype = UPIO_MEM32,
.iotype = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ?
UPIO_MEM32BE : UPIO_MEM32,
.flags = CBUS_UART_FLAGS,
.regshift = 3,
},

@ -492,10 +492,16 @@ config CC_HAVE_STACKPROTECTOR_TLS
config STACKPROTECTOR_PER_TASK
def_bool y
depends on !GCC_PLUGIN_RANDSTRUCT
depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS
config PHYS_RAM_BASE_FIXED
bool "Explicitly specified physical RAM address"
default n
config PHYS_RAM_BASE
hex "Platform Physical RAM address"
depends on PHYS_RAM_BASE_FIXED
default "0x80000000"
help
This is the physical address of RAM in the system. It has to be
@ -508,6 +514,7 @@ config XIP_KERNEL
# This prevents XIP from being enabled by all{yes,mod}config, which
# fail to build since XIP doesn't support large kernels.
depends on !COMPILE_TEST
select PHYS_RAM_BASE_FIXED
help
Execute-In-Place allows the kernel to run from non-volatile storage
directly addressable by the CPU, such as NOR flash. This saves RAM

@ -24,7 +24,7 @@
memory@80000000 {
device_type = "memory";
reg = <0x0 0x80000000 0x2 0x00000000>;
reg = <0x0 0x80000000 0x4 0x00000000>;
};
soc {

@ -103,6 +103,7 @@ struct kernel_mapping {
};
extern struct kernel_mapping kernel_map;
extern phys_addr_t phys_ram_base;
#ifdef CONFIG_64BIT
#define is_kernel_mapping(x) \
@ -113,9 +114,9 @@ extern struct kernel_mapping kernel_map;
#define linear_mapping_pa_to_va(x) ((void *)((unsigned long)(x) + kernel_map.va_pa_offset))
#define kernel_mapping_pa_to_va(y) ({ \
unsigned long _y = y; \
(_y >= CONFIG_PHYS_RAM_BASE) ? \
(void *)((unsigned long)(_y) + kernel_map.va_kernel_pa_offset + XIP_OFFSET) : \
(void *)((unsigned long)(_y) + kernel_map.va_kernel_xip_pa_offset); \
(IS_ENABLED(CONFIG_XIP_KERNEL) && _y < phys_ram_base) ? \
(void *)((unsigned long)(_y) + kernel_map.va_kernel_xip_pa_offset) : \
(void *)((unsigned long)(_y) + kernel_map.va_kernel_pa_offset + XIP_OFFSET); \
})
#define __pa_to_va_nodebug(x) linear_mapping_pa_to_va(x)

@ -27,7 +27,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
fp = frame_pointer(regs);
sp = user_stack_pointer(regs);
pc = instruction_pointer(regs);
} else if (task == current) {
} else if (task == NULL || task == current) {
fp = (unsigned long)__builtin_frame_address(1);
sp = (unsigned long)__builtin_frame_address(0);
pc = (unsigned long)__builtin_return_address(0);

@ -36,6 +36,9 @@ EXPORT_SYMBOL(kernel_map);
#define kernel_map (*(struct kernel_mapping *)XIP_FIXUP(&kernel_map))
#endif
phys_addr_t phys_ram_base __ro_after_init;
EXPORT_SYMBOL(phys_ram_base);
#ifdef CONFIG_XIP_KERNEL
extern char _xiprom[], _exiprom[];
#endif
@ -160,7 +163,7 @@ static void __init setup_bootmem(void)
phys_addr_t vmlinux_end = __pa_symbol(&_end);
phys_addr_t vmlinux_start = __pa_symbol(&_start);
phys_addr_t __maybe_unused max_mapped_addr;
phys_addr_t dram_end;
phys_addr_t phys_ram_end;
#ifdef CONFIG_XIP_KERNEL
vmlinux_start = __pa_symbol(&_sdata);
@ -181,9 +184,12 @@ static void __init setup_bootmem(void)
#endif
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
dram_end = memblock_end_of_DRAM();
phys_ram_end = memblock_end_of_DRAM();
#ifndef CONFIG_64BIT
#ifndef CONFIG_XIP_KERNEL
phys_ram_base = memblock_start_of_DRAM();
#endif
/*
* memblock allocator is not aware of the fact that last 4K bytes of
* the addressable memory can not be mapped because of IS_ERR_VALUE
@ -194,12 +200,12 @@ static void __init setup_bootmem(void)
* be done in create_kernel_page_table.
*/
max_mapped_addr = __pa(~(ulong)0);
if (max_mapped_addr == (dram_end - 1))
if (max_mapped_addr == (phys_ram_end - 1))
memblock_set_current_limit(max_mapped_addr - 4096);
#endif
min_low_pfn = PFN_UP(memblock_start_of_DRAM());
max_low_pfn = max_pfn = PFN_DOWN(dram_end);
min_low_pfn = PFN_UP(phys_ram_base);
max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);
@ -558,6 +564,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
kernel_map.xiprom = (uintptr_t)CONFIG_XIP_PHYS_ADDR;
kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
phys_ram_base = CONFIG_PHYS_RAM_BASE;
kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_sdata);

@ -2489,13 +2489,15 @@ void perf_clear_dirty_counters(void)
return;
for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
/* Metrics and fake events don't have corresponding HW counters. */
if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
continue;
else if (i >= INTEL_PMC_IDX_FIXED)
if (i >= INTEL_PMC_IDX_FIXED) {
/* Metrics and fake events don't have corresponding HW counters. */
if ((i - INTEL_PMC_IDX_FIXED) >= hybrid(cpuc->pmu, num_counters_fixed))
continue;
wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
else
} else {
wrmsrl(x86_pmu_event_addr(i), 0);
}
}
bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);

@ -2904,24 +2904,28 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
*/
static int intel_pmu_handle_irq(struct pt_regs *regs)
{
struct cpu_hw_events *cpuc;
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
bool late_ack = hybrid_bit(cpuc->pmu, late_ack);
bool mid_ack = hybrid_bit(cpuc->pmu, mid_ack);
int loops;
u64 status;
int handled;
int pmu_enabled;
cpuc = this_cpu_ptr(&cpu_hw_events);
/*
* Save the PMU state.
* It needs to be restored when leaving the handler.
*/
pmu_enabled = cpuc->enabled;
/*
* No known reason to not always do late ACK,
* but just in case do it opt-in.
* In general, the early ACK is only applied for old platforms.
* For the big core starts from Haswell, the late ACK should be
* applied.
* For the small core after Tremont, we have to do the ACK right
* before re-enabling counters, which is in the middle of the
* NMI handler.
*/
if (!x86_pmu.late_ack)
if (!late_ack && !mid_ack)
apic_write(APIC_LVTPC, APIC_DM_NMI);
intel_bts_disable_local();
cpuc->enabled = 0;
@ -2958,6 +2962,8 @@ again:
goto again;
done:
if (mid_ack)
apic_write(APIC_LVTPC, APIC_DM_NMI);
/* Only restore PMU state when it's active. See x86_pmu_disable(). */
cpuc->enabled = pmu_enabled;
if (pmu_enabled)
@ -2969,7 +2975,7 @@ done:
* have been reset. This avoids spurious NMIs on
* Haswell CPUs.
*/
if (x86_pmu.late_ack)
if (late_ack)
apic_write(APIC_LVTPC, APIC_DM_NMI);
return handled;
}
@ -6129,7 +6135,6 @@ __init int intel_pmu_init(void)
static_branch_enable(&perf_is_hybrid);
x86_pmu.num_hybrid_pmus = X86_HYBRID_NUM_PMUS;
x86_pmu.late_ack = true;
x86_pmu.pebs_aliases = NULL;
x86_pmu.pebs_prec_dist = true;
x86_pmu.pebs_block = true;
@ -6167,6 +6172,7 @@ __init int intel_pmu_init(void)
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
pmu->name = "cpu_core";
pmu->cpu_type = hybrid_big;
pmu->late_ack = true;
if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
pmu->num_counters = x86_pmu.num_counters + 2;
pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
@ -6192,6 +6198,7 @@ __init int intel_pmu_init(void)
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
pmu->name = "cpu_atom";
pmu->cpu_type = hybrid_small;
pmu->mid_ack = true;
pmu->num_counters = x86_pmu.num_counters;
pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
pmu->max_pebs_events = x86_pmu.max_pebs_events;

@ -656,6 +656,10 @@ struct x86_hybrid_pmu {
struct event_constraint *event_constraints;
struct event_constraint *pebs_constraints;
struct extra_reg *extra_regs;
unsigned int late_ack :1,
mid_ack :1,
enabled_ack :1;
};
static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
@ -686,6 +690,16 @@ extern struct static_key_false perf_is_hybrid;
__Fp; \
}))
#define hybrid_bit(_pmu, _field) \
({ \
bool __Fp = x86_pmu._field; \
\
if (is_hybrid() && (_pmu)) \
__Fp = hybrid_pmu(_pmu)->_field; \
\
__Fp; \
})
enum hybrid_pmu_type {
hybrid_big = 0x40,
hybrid_small = 0x20,
@ -755,6 +769,7 @@ struct x86_pmu {
/* PMI handler bits */
unsigned int late_ack :1,
mid_ack :1,
enabled_ack :1;
/*
* sysfs attrs
@ -1115,9 +1130,10 @@ void x86_pmu_stop(struct perf_event *event, int flags);
static inline void x86_pmu_disable_event(struct perf_event *event)
{
u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
struct hw_perf_event *hwc = &event->hw;
wrmsrl(hwc->config_base, hwc->config);
wrmsrl(hwc->config_base, hwc->config & ~disable_mask);
if (is_counter_pair(hwc))
wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);

@ -57,12 +57,12 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
[S_REL] =
"^(__init_(begin|end)|"
"__x86_cpu_dev_(start|end)|"
"(__parainstructions|__alt_instructions)(|_end)|"
"(__iommu_table|__apicdrivers|__smp_locks)(|_end)|"
"(__parainstructions|__alt_instructions)(_end)?|"
"(__iommu_table|__apicdrivers|__smp_locks)(_end)?|"
"__(start|end)_pci_.*|"
"__(start|end)_builtin_fw|"
"__(start|stop)___ksymtab(|_gpl)|"
"__(start|stop)___kcrctab(|_gpl)|"
"__(start|stop)___ksymtab(_gpl)?|"
"__(start|stop)___kcrctab(_gpl)?|"
"__(start|stop)___param|"
"__(start|stop)___modver|"
"__(start|stop)___bug_table|"

@ -790,6 +790,7 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
struct blkcg_gq *parent = blkg->parent;
struct blkg_iostat_set *bisc = per_cpu_ptr(blkg->iostat_cpu, cpu);
struct blkg_iostat cur, delta;
unsigned long flags;
unsigned int seq;
/* fetch the current per-cpu values */
@ -799,21 +800,21 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
} while (u64_stats_fetch_retry(&bisc->sync, seq));
/* propagate percpu delta to global */
u64_stats_update_begin(&blkg->iostat.sync);
flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
blkg_iostat_set(&delta, &cur);
blkg_iostat_sub(&delta, &bisc->last);
blkg_iostat_add(&blkg->iostat.cur, &delta);
blkg_iostat_add(&bisc->last, &delta);
u64_stats_update_end(&blkg->iostat.sync);
u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
/* propagate global delta to parent (unless that's root) */
if (parent && parent->parent) {
u64_stats_update_begin(&parent->iostat.sync);
flags = u64_stats_update_begin_irqsave(&parent->iostat.sync);
blkg_iostat_set(&delta, &blkg->iostat.cur);
blkg_iostat_sub(&delta, &blkg->iostat.last);
blkg_iostat_add(&parent->iostat.cur, &delta);
blkg_iostat_add(&blkg->iostat.last, &delta);
u64_stats_update_end(&parent->iostat.sync);
u64_stats_update_end_irqrestore(&parent->iostat.sync, flags);
}
}
@ -848,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
memset(&tmp, 0, sizeof(tmp));
for_each_possible_cpu(cpu) {
struct disk_stats *cpu_dkstats;
unsigned long flags;
cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
tmp.ios[BLKG_IOSTAT_READ] +=
@ -864,9 +866,9 @@ static void blkcg_fill_root_iostats(void)
tmp.bytes[BLKG_IOSTAT_DISCARD] +=
cpu_dkstats->sectors[STAT_DISCARD] << 9;
u64_stats_update_begin(&blkg->iostat.sync);
flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
blkg_iostat_set(&blkg->iostat.cur, &tmp);
u64_stats_update_end(&blkg->iostat.sync);
u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
}
}
}

@ -833,7 +833,11 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
enable = iolatency_set_min_lat_nsec(blkg, lat_val);
if (enable) {
WARN_ON_ONCE(!blk_get_queue(blkg->q));
if (!blk_get_queue(blkg->q)) {
ret = -ENODEV;
goto out;
}
blkg_get(blkg);
}

@ -596,13 +596,13 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *head = &kcq->rq_list[sched_domain];
spin_lock(&kcq->lock);
trace_block_rq_insert(rq);
if (at_head)
list_move(&rq->queuelist, head);
else
list_move_tail(&rq->queuelist, head);
sbitmap_set_bit(&khd->kcq_map[sched_domain],
rq->mq_ctx->index_hw[hctx->type]);
trace_block_rq_insert(rq);
spin_unlock(&kcq->lock);
}
}

@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* ldm - Support for Windows Logical Disk Manager (Dynamic Disks)
*
* Copyright (C) 2001,2002 Richard Russon <ldm@flatcap.org>

@ -379,13 +379,6 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
(*element_ptr)->common.reference_count =
original_ref_count;
/*
* The original_element holds a reference from the package object
* that represents _HID. Since a new element was created by _HID,
* remove the reference from the _CID package.
*/
acpi_ut_remove_reference(original_element);
}
element_ptr++;

@ -653,8 +653,6 @@ dev_groups_failed:
else if (drv->remove)
drv->remove(dev);
probe_failed:
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
@ -662,6 +660,8 @@ pinctrl_bind_failed:
device_links_no_driver(dev);
devres_release_all(dev);
arch_teardown_dma_ops(dev);
kfree(dev->dma_range_map);
dev->dma_range_map = NULL;
driver_sysfs_remove(dev);
dev->driver = NULL;
dev_set_drvdata(dev, NULL);

@ -89,12 +89,11 @@ static void __fw_load_abort(struct fw_priv *fw_priv)
{
/*
* There is a small window in which user can write to 'loading'
* between loading done and disappearance of 'loading'
* between loading done/aborted and disappearance of 'loading'
*/
if (fw_sysfs_done(fw_priv))
if (fw_state_is_aborted(fw_priv) || fw_sysfs_done(fw_priv))
return;
list_del_init(&fw_priv->pending_list);
fw_state_aborted(fw_priv);
}
@ -280,7 +279,6 @@ static ssize_t firmware_loading_store(struct device *dev,
* Same logic as fw_load_abort, only the DONE bit
* is ignored and we set ABORT only on failure.
*/
list_del_init(&fw_priv->pending_list);
if (rc) {
fw_state_aborted(fw_priv);
written = rc;
@ -513,6 +511,11 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
}
mutex_lock(&fw_lock);
if (fw_state_is_aborted(fw_priv)) {
mutex_unlock(&fw_lock);
retval = -EINTR;
goto out;
}
list_add(&fw_priv->pending_list, &pending_fw_head);
mutex_unlock(&fw_lock);
@ -535,11 +538,10 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
if (fw_state_is_aborted(fw_priv)) {
if (retval == -ERESTARTSYS)
retval = -EINTR;
else
retval = -EAGAIN;
} else if (fw_priv->is_paged_buf && !fw_priv->data)
retval = -ENOMEM;
out:
device_del(f_dev);
err_put_dev:
put_device(f_dev);

@ -117,8 +117,16 @@ static inline void __fw_state_set(struct fw_priv *fw_priv,
WRITE_ONCE(fw_st->status, status);
if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED)
if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) {
#ifdef CONFIG_FW_LOADER_USER_HELPER
/*
* Doing this here ensures that the fw_priv is deleted from
* the pending list in all abort/done paths.
*/
list_del_init(&fw_priv->pending_list);
#endif
complete_all(&fw_st->completion);
}
}
static inline void fw_state_aborted(struct fw_priv *fw_priv)

@ -783,8 +783,10 @@ static void fw_abort_batch_reqs(struct firmware *fw)
return;
fw_priv = fw->priv;
mutex_lock(&fw_lock);
if (!fw_state_is_aborted(fw_priv))
fw_state_aborted(fw_priv);
mutex_unlock(&fw_lock);
}
/* called from request_firmware() and request_firmware_work_func() */

@ -74,7 +74,7 @@ static bool n64cart_do_bvec(struct device *dev, struct bio_vec *bv, u32 pos)
n64cart_wait_dma();
n64cart_write_reg(PI_DRAM_REG, dma_addr + bv->bv_offset);
n64cart_write_reg(PI_DRAM_REG, dma_addr);
n64cart_write_reg(PI_CART_REG, (bstart | CART_DOMAIN) & CART_MAX);
n64cart_write_reg(PI_WRITE_REG, bv->bv_len - 1);

@ -100,6 +100,7 @@ static const char * const clock_names[SYSC_MAX_CLOCKS] = {
* @cookie: data used by legacy platform callbacks
* @name: name if available
* @revision: interconnect target module revision
* @reserved: target module is reserved and already in use
* @enabled: sysc runtime enabled status
* @needs_resume: runtime resume needed on resume from suspend
* @child_needs_resume: runtime resume needed for child on resume from suspend
@ -130,6 +131,7 @@ struct sysc {
struct ti_sysc_cookie cookie;
const char *name;
u32 revision;
unsigned int reserved:1;
unsigned int enabled:1;
unsigned int needs_resume:1;
unsigned int child_needs_resume:1;
@ -2951,6 +2953,8 @@ static int sysc_init_soc(struct sysc *ddata)
case SOC_3430 ... SOC_3630:
sysc_add_disabled(0x48304000); /* timer12 */
break;
case SOC_AM3:
sysc_add_disabled(0x48310000); /* rng */
default:
break;
}
@ -3093,8 +3097,8 @@ static int sysc_probe(struct platform_device *pdev)
return error;
error = sysc_check_active_timer(ddata);
if (error)
return error;
if (error == -EBUSY)
ddata->reserved = true;
error = sysc_get_clocks(ddata);
if (error)
@ -3130,11 +3134,15 @@ static int sysc_probe(struct platform_device *pdev)
sysc_show_registers(ddata);
ddata->dev->type = &sysc_device_type;
error = of_platform_populate(ddata->dev->of_node, sysc_match_table,
pdata ? pdata->auxdata : NULL,
ddata->dev);
if (error)
goto err;
if (!ddata->reserved) {
error = of_platform_populate(ddata->dev->of_node,
sysc_match_table,
pdata ? pdata->auxdata : NULL,
ddata->dev);
if (error)
goto err;
}
INIT_DELAYED_WORK(&ddata->idle_work, ti_sysc_idle);

@ -254,11 +254,11 @@ static int ftpm_tee_probe(struct device *dev)
pvt_data->session = sess_arg.session;
/* Allocate dynamic shared memory with fTPM TA */
pvt_data->shm = tee_shm_alloc(pvt_data->ctx,
MAX_COMMAND_SIZE + MAX_RESPONSE_SIZE,
TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
pvt_data->shm = tee_shm_alloc_kernel_buf(pvt_data->ctx,
MAX_COMMAND_SIZE +
MAX_RESPONSE_SIZE);
if (IS_ERR(pvt_data->shm)) {
dev_err(dev, "%s: tee_shm_alloc failed\n", __func__);
dev_err(dev, "%s: tee_shm_alloc_kernel_buf failed\n", __func__);
rc = -ENOMEM;
goto out_shm_alloc;
}

@ -382,8 +382,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum;
alt_recent = idx_recent_sum > NR_RECENT / 2;
if (alt_recent || alt_intercepts) {
s64 last_enabled_span_ns = duration_ns;
int last_enabled_idx = idx;
s64 first_suitable_span_ns = duration_ns;
int first_suitable_idx = idx;
/*
* Look for the deepest idle state whose target residency had
@ -397,37 +397,51 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
intercept_sum = 0;
recent_sum = 0;
for (i = idx - 1; i >= idx0; i--) {
for (i = idx - 1; i >= 0; i--) {
struct teo_bin *bin = &cpu_data->state_bins[i];
s64 span_ns;
intercept_sum += bin->intercepts;
recent_sum += bin->recent;
if (dev->states_usage[i].disable)
continue;
span_ns = teo_middle_of_bin(i, drv);
if (!teo_time_ok(span_ns)) {
/*
* The current state is too shallow, so select
* the first enabled deeper state.
*/
duration_ns = last_enabled_span_ns;
idx = last_enabled_idx;
break;
}
if ((!alt_recent || 2 * recent_sum > idx_recent_sum) &&
(!alt_intercepts ||
2 * intercept_sum > idx_intercept_sum)) {
idx = i;
duration_ns = span_ns;
if (teo_time_ok(span_ns) &&
!dev->states_usage[i].disable) {
idx = i;
duration_ns = span_ns;
} else {
/*
* The current state is too shallow or
* disabled, so take the first enabled
* deeper state with suitable time span.
*/
idx = first_suitable_idx;
duration_ns = first_suitable_span_ns;
}
break;
}
last_enabled_span_ns = span_ns;
last_enabled_idx = i;
if (dev->states_usage[i].disable)
continue;
if (!teo_time_ok(span_ns)) {
/*
* The current state is too shallow, but if an
* alternative candidate state has been found,
* it may still turn out to be a better choice.
*/
if (first_suitable_idx != idx)
continue;
break;
}
first_suitable_span_ns = span_ns;
first_suitable_idx = i;
}
}

@ -294,6 +294,14 @@ struct idxd_desc {
struct idxd_wq *wq;
};
/*
* This is a software-defined error for the completion status. We overload an error code
* that will never appear in the completion status, only in the SWERR register.
*/
enum idxd_completion_status {
IDXD_COMP_DESC_ABORT = 0xff,
};
#define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev)
#define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev)
@ -482,4 +490,10 @@ static inline void perfmon_init(void) {}
static inline void perfmon_exit(void) {}
#endif
static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason)
{
idxd_dma_complete_txd(desc, reason);
idxd_free_desc(desc->wq, desc);
}
#endif

@ -102,6 +102,8 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
spin_lock_init(&idxd->irq_entries[i].list_lock);
}
idxd_msix_perm_setup(idxd);
irq_entry = &idxd->irq_entries[0];
rc = request_threaded_irq(irq_entry->vector, NULL, idxd_misc_thread,
0, "idxd-misc", irq_entry);
@ -148,7 +150,6 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
}
idxd_unmask_error_interrupts(idxd);
idxd_msix_perm_setup(idxd);
return 0;
err_wq_irqs:
@ -162,6 +163,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
err_misc_irq:
/* Disable error interrupt generation */
idxd_mask_error_interrupts(idxd);
idxd_msix_perm_clear(idxd);
err_irq_entries:
pci_free_irq_vectors(pdev);
dev_err(dev, "No usable interrupts\n");
@ -758,32 +760,40 @@ static void idxd_shutdown(struct pci_dev *pdev)
for (i = 0; i < msixcnt; i++) {
irq_entry = &idxd->irq_entries[i];
synchronize_irq(irq_entry->vector);
free_irq(irq_entry->vector, irq_entry);
if (i == 0)
continue;
idxd_flush_pending_llist(irq_entry);
idxd_flush_work_list(irq_entry);
}
idxd_msix_perm_clear(idxd);
idxd_release_int_handles(idxd);
pci_free_irq_vectors(pdev);
pci_iounmap(pdev, idxd->reg_base);
pci_disable_device(pdev);
destroy_workqueue(idxd->wq);
flush_workqueue(idxd->wq);
}
static void idxd_remove(struct pci_dev *pdev)
{
struct idxd_device *idxd = pci_get_drvdata(pdev);
struct idxd_irq_entry *irq_entry;
int msixcnt = pci_msix_vec_count(pdev);
int i;
dev_dbg(&pdev->dev, "%s called\n", __func__);
idxd_shutdown(pdev);
if (device_pasid_enabled(idxd))
idxd_disable_system_pasid(idxd);
idxd_unregister_devices(idxd);
perfmon_pmu_remove(idxd);
for (i = 0; i < msixcnt; i++) {
irq_entry = &idxd->irq_entries[i];
free_irq(irq_entry->vector, irq_entry);
}
idxd_msix_perm_clear(idxd);
idxd_release_int_handles(idxd);
pci_free_irq_vectors(pdev);
pci_iounmap(pdev, idxd->reg_base);
iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
pci_disable_device(pdev);
destroy_workqueue(idxd->wq);
perfmon_pmu_remove(idxd);
device_unregister(&idxd->conf_dev);
}
static struct pci_driver idxd_pci_driver = {

@ -245,12 +245,6 @@ static inline bool match_fault(struct idxd_desc *desc, u64 fault_addr)
return false;
}
static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason)
{
idxd_dma_complete_txd(desc, reason);
idxd_free_desc(desc->wq, desc);
}
static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
enum irq_work_type wtype,
int *processed, u64 data)
@ -272,8 +266,16 @@ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
reason = IDXD_COMPLETE_DEV_FAIL;
llist_for_each_entry_safe(desc, t, head, llnode) {
if (desc->completion->status) {
if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS)
u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
if (status) {
if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
complete_desc(desc, IDXD_COMPLETE_ABORT);
(*processed)++;
continue;
}
if (unlikely(status != DSA_COMP_SUCCESS))
match_fault(desc, data);
complete_desc(desc, reason);
(*processed)++;
@ -329,7 +331,14 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
spin_unlock_irqrestore(&irq_entry->list_lock, flags);
list_for_each_entry(desc, &flist, list) {
if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS)
u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
complete_desc(desc, IDXD_COMPLETE_ABORT);
continue;
}
if (unlikely(status != DSA_COMP_SUCCESS))
match_fault(desc, data);
complete_desc(desc, reason);
}

@ -25,11 +25,10 @@ static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu)
* Descriptor completion vectors are 1...N for MSIX. We will round
* robin through the N vectors.
*/
wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
wq->vec_ptr = desc->vector = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
if (!idxd->int_handles) {
desc->hw->int_handle = wq->vec_ptr;
} else {
desc->vector = wq->vec_ptr;
/*
* int_handles are only for descriptor completion. However for device
* MSIX enumeration, vec 0 is used for misc interrupts. Therefore even
@ -88,9 +87,64 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
}
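For context on the vec_ptr arithmetic in the hunk above: completion vectors are 1...N because vector 0 carries the misc interrupts, and the modulo-then-add keeps the index in that range. A standalone sketch of the wrap-around (the names and N=4 are illustrative, not driver code):

.. code-block:: c

   #include <stdio.h>

   int main(void)
   {
           unsigned int num_wq_irqs = 4;   /* assumed vector count N */
           unsigned int vec_ptr = 0;       /* last vector handed out */
           int i;

           /* wraps 1, 2, 3, 4, 1, 2, ... and never yields vector 0 */
           for (i = 0; i < 8; i++) {
                   vec_ptr = (vec_ptr % num_wq_irqs) + 1;
                   printf("descriptor %d -> vector %u\n", i, vec_ptr);
           }
           return 0;
   }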
static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
struct idxd_desc *desc)
{
struct idxd_desc *d, *n;
lockdep_assert_held(&ie->list_lock);
list_for_each_entry_safe(d, n, &ie->work_list, list) {
if (d == desc) {
list_del(&d->list);
return d;
}
}
/*
* At this point, the desc to be aborted is held by the completion
* handler, which has taken it off the pending list but has not yet added
* it to the work list. It will be cleaned up by the interrupt handler when
* it sees IDXD_COMP_DESC_ABORT as the completion status.
*/
return NULL;
}
static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
struct idxd_desc *desc)
{
struct idxd_desc *d, *t, *found = NULL;
struct llist_node *head;
unsigned long flags;
desc->completion->status = IDXD_COMP_DESC_ABORT;
/*
* Grab the list lock so it will block the irq thread handler. This allows the
* abort code to locate the descriptor that needs to be aborted.
*/
spin_lock_irqsave(&ie->list_lock, flags);
head = llist_del_all(&ie->pending_llist);
if (head) {
llist_for_each_entry_safe(d, t, head, llnode) {
if (d == desc) {
found = desc;
continue;
}
list_add_tail(&desc->list, &ie->work_list);
}
}
if (!found)
found = list_abort_desc(wq, ie, desc);
spin_unlock_irqrestore(&ie->list_lock, flags);
if (found)
complete_desc(found, IDXD_COMPLETE_ABORT);
}
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
{
struct idxd_device *idxd = wq->idxd;
struct idxd_irq_entry *ie = NULL;
void __iomem *portal;
int rc;
@ -108,6 +162,16 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
* even on UP because the recipient is a device.
*/
wmb();
/*
* Add the descriptor to the lockless pending list of the irq_entry
* to which we assigned this descriptor.
*/
if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
ie = &idxd->irq_entries[desc->vector];
llist_add(&desc->llnode, &ie->pending_llist);
}
if (wq_dedicated(wq)) {
iosubmit_cmds512(portal, desc->hw, 1);
} else {
@ -118,29 +182,13 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
* device is not accepting descriptors at all.
*/
rc = enqcmds(portal, desc->hw);
if (rc < 0)
if (rc < 0) {
if (ie)
llist_abort_desc(wq, ie, desc);
return rc;
}
}
percpu_ref_put(&wq->wq_active);
/*
* Add the descriptor to the lockless pending list of the irq_entry
* to which we assigned this descriptor.
*/
if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
int vec;
/*
* If the driver is on the host kernel, the value assigned to the
* interrupt handle is the index of the MSIX vector. If it's a guest,
* the int_handle can't be used since that is the index into IMS for
* the entire device; the guest's device-local index is used instead.
*/
vec = !idxd->int_handles ? desc->hw->int_handle : desc->vector;
llist_add(&desc->llnode, &idxd->irq_entries[vec].pending_llist);
}
return 0;
}
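The reordering in this hunk puts the descriptor on the pending list before the portal write, so a failed enqcmds() can unwind through llist_abort_desc(); tracking only after submission could race with a completion that arrives first. A minimal sketch of that ordering (types and helper names invented for illustration):

.. code-block:: c

   #include <errno.h>
   #include <stdbool.h>
   #include <stdio.h>

   struct request { bool tracked; };

   /* stand-in for the device portal; assume the device is busy */
   static int hw_submit(struct request *req) { (void)req; return -EAGAIN; }

   static void track(struct request *req)   { req->tracked = true; }
   static void untrack(struct request *req) { req->tracked = false; }

   static int submit(struct request *req)
   {
           int rc;

           track(req);             /* visible to the completion path first */
           rc = hw_submit(req);
           if (rc < 0) {
                   untrack(req);   /* mirrors llist_abort_desc() above */
                   return rc;
           }
           return 0;
   }

   int main(void)
   {
           struct request req = { false };

           printf("submit: %d, tracked: %d\n", submit(&req), req.tracked);
           return 0;
   }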

View File

@ -1744,8 +1744,6 @@ void idxd_unregister_devices(struct idxd_device *idxd)
device_unregister(&group->conf_dev);
}
device_unregister(&idxd->conf_dev);
}
int idxd_register_bus_type(void)

View File

@ -812,6 +812,8 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg(
dma_length += sg_dma_len(sg);
}
imxdma_config_write(chan, &imxdmac->config, direction);
switch (imxdmac->word_size) {
case DMA_SLAVE_BUSWIDTH_4_BYTES:
if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3)

View File

@ -67,8 +67,12 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
return NULL;
ofdma_target = of_dma_find_controller(&dma_spec_target);
if (!ofdma_target)
return NULL;
if (!ofdma_target) {
ofdma->dma_router->route_free(ofdma->dma_router->dev,
route_data);
chan = ERR_PTR(-EPROBE_DEFER);
goto err;
}
chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
if (IS_ERR_OR_NULL(chan)) {
@ -89,6 +93,7 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
}
}
err:
/*
* Need to put the node back since the ofdma->of_dma_route_allocate
* has taken it for generating the new, translated dma_spec

View File

@ -855,8 +855,8 @@ static int usb_dmac_probe(struct platform_device *pdev)
error:
of_dma_controller_free(pdev->dev.of_node);
pm_runtime_put(&pdev->dev);
error_pm:
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return ret;
}
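The fix above restores the usual rule for goto unwinding: each error label undoes only what succeeded before the failure point, in reverse order, so the pm_runtime_put() belongs on the path reached once the get has succeeded. A generic sketch of the ladder (step names are made up):

.. code-block:: c

   #include <stdio.h>

   static int setup_a(void)  { return 0; }
   static void undo_a(void)  { puts("undo_a"); }
   static int setup_b(void)  { return -1; }   /* assume this step fails */

   static int init(void)
   {
           int ret;

           ret = setup_a();
           if (ret)
                   return ret;     /* nothing to undo yet */

           ret = setup_b();
           if (ret)
                   goto err_a;     /* undo only what was set up */

           return 0;

   err_a:
           undo_a();
           return ret;
   }

   int main(void) { return init() ? 1 : 0; }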

View File

@ -1200,7 +1200,7 @@ static int stm32_dma_alloc_chan_resources(struct dma_chan *c)
chan->config_init = false;
ret = pm_runtime_get_sync(dmadev->ddev.dev);
ret = pm_runtime_resume_and_get(dmadev->ddev.dev);
if (ret < 0)
return ret;
@ -1470,7 +1470,7 @@ static int stm32_dma_suspend(struct device *dev)
struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
int id, ret, scr;
ret = pm_runtime_get_sync(dev);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;

View File

@ -137,7 +137,7 @@ static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
/* Set dma request */
spin_lock_irqsave(&dmamux->lock, flags);
ret = pm_runtime_get_sync(&pdev->dev);
ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0) {
spin_unlock_irqrestore(&dmamux->lock, flags);
goto error;
@ -336,7 +336,7 @@ static int stm32_dmamux_suspend(struct device *dev)
struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev);
int i, ret;
ret = pm_runtime_get_sync(dev);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;
@ -361,7 +361,7 @@ static int stm32_dmamux_resume(struct device *dev)
if (ret < 0)
return ret;
ret = pm_runtime_get_sync(dev);
ret = pm_runtime_resume_and_get(dev);
if (ret < 0)
return ret;
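These pm_runtime_get_sync() to pm_runtime_resume_and_get() conversions all address one pitfall: get_sync() bumps the usage counter even when resume fails, so the error path must drop the reference by hand, while resume_and_get() drops it itself on failure. A hedged sketch of the two shapes:

.. code-block:: c

   #include <linux/pm_runtime.h>

   static int resume_old(struct device *dev)
   {
           int ret = pm_runtime_get_sync(dev);

           if (ret < 0) {
                   /* get_sync() left a reference behind even on failure */
                   pm_runtime_put_noidle(dev);
                   return ret;
           }
           return 0;
   }

   static int resume_new(struct device *dev)
   {
           int ret = pm_runtime_resume_and_get(dev);

           if (ret < 0)    /* no reference held; nothing to drop */
                   return ret;
           return 0;
   }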

View File

@ -209,8 +209,8 @@ static int uniphier_xdmac_chan_stop(struct uniphier_xdmac_chan *xc)
writel(0, xc->reg_ch_base + XDMAC_TSS);
/* wait until transfer is stopped */
return readl_poll_timeout(xc->reg_ch_base + XDMAC_STAT, val,
!(val & XDMAC_STAT_TENF), 100, 1000);
return readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val,
!(val & XDMAC_STAT_TENF), 100, 1000);
}
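The switch to the _atomic variant matters because of the lock rule noted just below ("xc->vc.lock must be held by caller"): readl_poll_timeout() may sleep between reads, while readl_poll_timeout_atomic() busy-waits with udelay() and is safe under a spinlock. Both take the same arguments; a sketch (the register and status bit are assumptions):

.. code-block:: c

   #include <linux/bits.h>
   #include <linux/iopoll.h>

   static int wait_stopped(void __iomem *stat_reg)
   {
           u32 val;

           /* poll every 100 us, time out after 1000 us, never sleep */
           return readl_poll_timeout_atomic(stat_reg, val,
                                            !(val & BIT(0)), 100, 1000);
   }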
/* xc->vc.lock must be held by caller */

View File

@ -394,6 +394,7 @@ struct xilinx_dma_tx_descriptor {
* @genlock: Support genlock mode
* @err: Channel has errors
* @idle: Check for channel idle
* @terminating: Check for channel being synchronized by user
* @tasklet: Cleanup work after irq
* @config: Device configuration info
* @flush_on_fsync: Flush on Frame sync
@ -431,6 +432,7 @@ struct xilinx_dma_chan {
bool genlock;
bool err;
bool idle;
bool terminating;
struct tasklet_struct tasklet;
struct xilinx_vdma_config config;
bool flush_on_fsync;
@ -1049,6 +1051,13 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
/* Run any dependencies, then free the descriptor */
dma_run_dependencies(&desc->async_tx);
xilinx_dma_free_tx_descriptor(chan, desc);
/*
* The user may have called a terminate function while we were running
* a callback; that call cleans up any remaining descriptors.
*/
if (chan->terminating)
break;
}
spin_unlock_irqrestore(&chan->lock, flags);
@ -1965,6 +1974,8 @@ static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
if (desc->cyclic)
chan->cyclic = true;
chan->terminating = false;
spin_unlock_irqrestore(&chan->lock, flags);
return cookie;
@ -2436,6 +2447,7 @@ static int xilinx_dma_terminate_all(struct dma_chan *dchan)
xilinx_dma_chan_reset(chan);
/* Remove and free all of the descriptors in the lists */
chan->terminating = true;
xilinx_dma_free_descriptors(chan);
chan->idle = true;

View File

@ -212,10 +212,9 @@ static int tee_bnxt_fw_probe(struct device *dev)
pvt_data.dev = dev;
fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ,
TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
fw_shm_pool = tee_shm_alloc_kernel_buf(pvt_data.ctx, MAX_SHM_MEM_SZ);
if (IS_ERR(fw_shm_pool)) {
dev_err(pvt_data.dev, "tee_shm_alloc failed\n");
dev_err(pvt_data.dev, "tee_shm_alloc_kernel_buf failed\n");
err = PTR_ERR(fw_shm_pool);
goto out_sess;
}
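tee_shm_alloc_kernel_buf() collapses the old tee_shm_alloc() call plus the TEE_SHM_MAPPED | TEE_SHM_DMA_BUF flags into one helper for the common kernel-buffer case; it still returns ERR_PTR() on failure. A sketch of the calling convention (the wrapper name is invented):

.. code-block:: c

   #include <linux/err.h>
   #include <linux/tee_drv.h>

   static int alloc_fw_pool(struct tee_context *ctx, size_t size,
                            struct tee_shm **out)
   {
           struct tee_shm *shm = tee_shm_alloc_kernel_buf(ctx, size);

           if (IS_ERR(shm))
                   return PTR_ERR(shm);    /* same IS_ERR() handling as above */
           *out = shm;
           return 0;
   }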
@ -242,6 +241,14 @@ static int tee_bnxt_fw_remove(struct device *dev)
return 0;
}
static void tee_bnxt_fw_shutdown(struct device *dev)
{
tee_shm_free(pvt_data.fw_shm_pool);
tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
tee_client_close_context(pvt_data.ctx);
pvt_data.ctx = NULL;
}
static const struct tee_client_device_id tee_bnxt_fw_id_table[] = {
{UUID_INIT(0x6272636D, 0x2019, 0x0716,
0x42, 0x43, 0x4D, 0x5F, 0x53, 0x43, 0x48, 0x49)},
@ -257,6 +264,7 @@ static struct tee_client_driver tee_bnxt_fw_driver = {
.bus = &tee_bus_type,
.probe = tee_bnxt_fw_probe,
.remove = tee_bnxt_fw_remove,
.shutdown = tee_bnxt_fw_shutdown,
},
};

View File

@ -953,6 +953,8 @@ static int fme_perf_offline_cpu(unsigned int cpu, struct hlist_node *node)
return 0;
priv->cpu = target;
perf_pmu_migrate_context(&priv->pmu, cpu, target);
return 0;
}
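The added perf_pmu_migrate_context() call completes the standard CPU-offline shape for an uncore-style PMU: if the CPU going down is the one hosting the events, pick any other online CPU and move the perf context there. A sketch of that shape (struct and field names assumed, not the fme_perf ones):

.. code-block:: c

   #include <linux/cpumask.h>
   #include <linux/perf_event.h>

   struct my_pmu_priv {
           struct pmu pmu;
           unsigned int cpu;       /* CPU currently hosting the events */
   };

   static int my_pmu_offline_cpu(unsigned int cpu, struct my_pmu_priv *priv)
   {
           unsigned int target;

           if (priv->cpu != cpu)
                   return 0;       /* not the CPU we were counting on */

           target = cpumask_any_but(cpu_online_mask, cpu);
           if (target >= nr_cpu_ids)
                   return 0;       /* no online CPU left to migrate to */

           priv->cpu = target;
           perf_pmu_migrate_context(&priv->pmu, cpu, target);
           return 0;
   }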

View File

@ -1040,7 +1040,7 @@ void amdgpu_acpi_detect(void)
*/
bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
{
#if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP)
if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
if (adev->flags & AMD_IS_APU)
return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
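Two things change in that #if: IS_ENABLED(CONFIG_FOO) is true for both =y and =m, so it subsumes the open-coded defined(CONFIG_FOO) || defined(CONFIG_FOO_MODULE) pair, and the CONFIG_PM_SLEEP clause guards pm_suspend_target_state, which is only declared when PM_SLEEP is enabled. A sketch:

.. code-block:: c

   #include <linux/kconfig.h>

   /* old spelling: must name the =m case explicitly */
   #if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
   #endif

   /* new spelling: IS_ENABLED() covers =y and =m in one test; the
    * PM_SLEEP clause keeps pm_suspend_target_state references legal */
   #if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP)
   #endif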

View File

@ -468,6 +468,46 @@ bool amdgpu_atomfirmware_dynamic_boot_config_supported(struct amdgpu_device *ade
return (fw_cap & ATOM_FIRMWARE_CAP_DYNAMIC_BOOT_CFG_ENABLE) ? true : false;
}
/*
* Helper function to query RAS EEPROM address
*
* @adev: amdgpu_device pointer
* @i2c_address: output for the RAS ROM I2C slave address
*
* Return true if the VBIOS supports RAS ROM address reporting
*/
bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t *i2c_address)
{
struct amdgpu_mode_info *mode_info = &adev->mode_info;
int index;
u16 data_offset, size;
union firmware_info *firmware_info;
u8 frev, crev;
if (i2c_address == NULL)
return false;
*i2c_address = 0;
index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
firmwareinfo);
if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context,
index, &size, &frev, &crev, &data_offset)) {
/* support firmware_info 3.4 + */
if ((frev == 3 && crev >= 4) || (frev > 3)) {
firmware_info = (union firmware_info *)
(mode_info->atom_context->bios + data_offset);
*i2c_address = firmware_info->v34.ras_rom_i2c_slave_addr;
}
}
if (*i2c_address != 0)
return true;
return false;
}
union smu_info {
struct atom_smu_info_v3_1 v31;
};

View File

@ -36,6 +36,7 @@ int amdgpu_atomfirmware_get_clock_info(struct amdgpu_device *adev);
int amdgpu_atomfirmware_get_gfx_info(struct amdgpu_device *adev);
bool amdgpu_atomfirmware_mem_ecc_supported(struct amdgpu_device *adev);
bool amdgpu_atomfirmware_sram_ecc_supported(struct amdgpu_device *adev);
bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t *i2c_address);
bool amdgpu_atomfirmware_mem_training_supported(struct amdgpu_device *adev);
bool amdgpu_atomfirmware_dynamic_boot_config_supported(struct amdgpu_device *adev);
int amdgpu_atomfirmware_get_fw_reserved_fb_size(struct amdgpu_device *adev);

View File

@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
ip->major, ip->minor,
ip->revision);
if (le16_to_cpu(ip->hw_id) == VCN_HWID)
adev->vcn.num_vcn_inst++;
for (k = 0; k < num_base_address; k++) {
/*
* convert the endianness of base addresses in place,
@ -385,7 +388,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
{
struct binary_header *bhdr;
struct harvest_table *harvest_info;
int i;
int i, vcn_harvest_count = 0;
bhdr = (struct binary_header *)adev->mman.discovery_bin;
harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
@ -397,8 +400,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
case VCN_HWID:
adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
vcn_harvest_count++;
break;
case DMU_HWID:
adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
@ -407,6 +409,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
break;
}
}
if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
}
}
int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)

View File

@ -1213,6 +1213,13 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
{0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
/* BEIGE_GOBY */
{0x1002, 0x7420, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
{0x1002, 0x7421, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
{0x1002, 0x7422, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
{0x1002, 0x7423, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
{0x1002, 0x743F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
{0, 0, 0}
};
@ -1564,6 +1571,8 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
pci_ignore_hotplug(pdev);
pci_set_power_state(pdev, PCI_D3cold);
drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
} else if (amdgpu_device_supports_boco(drm_dev)) {
/* nothing to do */
} else if (amdgpu_device_supports_baco(drm_dev)) {
amdgpu_device_baco_enter(drm_dev);
}

View File

@ -26,6 +26,7 @@
#include "amdgpu_ras.h"
#include <linux/bits.h>
#include "atom.h"
#include "amdgpu_atomfirmware.h"
#define EEPROM_I2C_TARGET_ADDR_VEGA20 0xA0
#define EEPROM_I2C_TARGET_ADDR_ARCTURUS 0xA8
@ -96,6 +97,9 @@ static bool __get_eeprom_i2c_addr(struct amdgpu_device *adev,
if (!i2c_addr)
return false;
if (amdgpu_atomfirmware_ras_rom_addr(adev, (uint8_t *)i2c_addr))
return true;
switch (adev->asic_type) {
case CHIP_VEGA20:
*i2c_addr = EEPROM_I2C_TARGET_ADDR_VEGA20;

View File

@ -54,11 +54,12 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
{
struct drm_mm_node *node;
if (!res) {
if (!res || res->mem_type == TTM_PL_SYSTEM) {
cur->start = start;
cur->size = size;
cur->remaining = size;
cur->node = NULL;
WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
return;
}

View File

@ -1295,6 +1295,16 @@ static bool is_raven_kicker(struct amdgpu_device *adev)
return false;
}
static bool check_if_enlarge_doorbell_range(struct amdgpu_device *adev)
{
if ((adev->asic_type == CHIP_RENOIR) &&
(adev->gfx.me_fw_version >= 0x000000a5) &&
(adev->gfx.me_feature_version >= 52))
return true;
else
return false;
}
static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
{
if (gfx_v9_0_should_disable_gfxoff(adev->pdev))
@ -3675,7 +3685,16 @@ static int gfx_v9_0_kiq_init_register(struct amdgpu_ring *ring)
if (ring->use_doorbell) {
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
(adev->doorbell_index.kiq * 2) << 2);
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
/* If GC has entered CGPG, ringing a doorbell beyond the first page
* doesn't wake up GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to work
* around this issue. This change has to align with a firmware
* update.
*/
if (check_if_enlarge_doorbell_range(adev))
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
(adev->doorbell.size - 4));
else
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
(adev->doorbell_index.userqueue_end * 2) << 2);
}

View File

@ -1548,6 +1548,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
}
hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data;
adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
@ -1561,7 +1562,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
adev->dm.dmcub_fw_version);
}
adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
dmub_srv = adev->dm.dmub_srv;
@ -9605,7 +9605,12 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
} else if (amdgpu_freesync_vid_mode && aconnector &&
is_freesync_video_mode(&new_crtc_state->mode,
aconnector)) {
set_freesync_fixed_config(dm_new_crtc_state);
struct drm_display_mode *high_mode;
high_mode = get_highest_refresh_rate_mode(aconnector, false);
if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) {
set_freesync_fixed_config(dm_new_crtc_state);
}
}
ret = dm_atomic_get_state(state, &dm_state);

View File

@ -584,7 +584,7 @@ static void amdgpu_dm_irq_schedule_work(struct amdgpu_device *adev,
handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list);
/* allocate a new amdgpu_dm_irq_handler_data */
handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL);
handler_data_add = kzalloc(sizeof(*handler_data), GFP_ATOMIC);
if (!handler_data_add) {
DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n");
return;
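The GFP_KERNEL to GFP_ATOMIC change follows the basic allocation rule: this path runs with the irq-handler spinlock held, and GFP_KERNEL may sleep, which is forbidden there. A sketch of the rule (the lock and size are arbitrary):

.. code-block:: c

   #include <linux/slab.h>
   #include <linux/spinlock.h>

   static void *alloc_in_atomic_section(spinlock_t *lock, size_t size)
   {
           unsigned long flags;
           void *p;

           spin_lock_irqsave(lock, flags);
           /* GFP_KERNEL here could sleep with the lock held; GFP_ATOMIC
            * never sleeps, at the cost of a higher chance of failure */
           p = kzalloc(size, GFP_ATOMIC);
           spin_unlock_irqrestore(lock, flags);
           return p;
   }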

View File

@ -66,9 +66,11 @@ int rn_get_active_display_cnt_wa(
for (i = 0; i < context->stream_count; i++) {
const struct dc_stream_state *stream = context->streams[i];
/* Extend the WA to DP for Linux */
if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A ||
stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK ||
stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK)
stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK ||
stream->signal == SIGNAL_TYPE_DISPLAY_PORT)
tmds_present = true;
}

View File

@ -3602,29 +3602,12 @@ static bool dpcd_read_sink_ext_caps(struct dc_link *link)
bool dp_retrieve_lttpr_cap(struct dc_link *link)
{
uint8_t lttpr_dpcd_data[6];
bool vbios_lttpr_enable = false;
bool vbios_lttpr_interop = false;
struct dc_bios *bios = link->dc->ctx->dc_bios;
bool vbios_lttpr_enable = link->dc->caps.vbios_lttpr_enable;
bool vbios_lttpr_interop = link->dc->caps.vbios_lttpr_aware;
enum dc_status status = DC_ERROR_UNEXPECTED;
bool is_lttpr_present = false;
memset(lttpr_dpcd_data, '\0', sizeof(lttpr_dpcd_data));
/* Query BIOS to determine if LTTPR functionality is forced on by system */
if (bios->funcs->get_lttpr_caps) {
enum bp_result bp_query_result;
uint8_t is_vbios_lttpr_enable = 0;
bp_query_result = bios->funcs->get_lttpr_caps(bios, &is_vbios_lttpr_enable);
vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
}
if (bios->funcs->get_lttpr_interop) {
enum bp_result bp_query_result;
uint8_t is_vbios_interop_enabled = 0;
bp_query_result = bios->funcs->get_lttpr_interop(bios, &is_vbios_interop_enabled);
vbios_lttpr_interop = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
}
/*
* Logic to determine LTTPR mode

View File

@ -183,6 +183,8 @@ struct dc_caps {
unsigned int cursor_cache_size;
struct dc_plane_cap planes[MAX_PLANES];
struct dc_color_caps color;
bool vbios_lttpr_aware;
bool vbios_lttpr_enable;
};
struct dc_bug_wa {

View File

@ -464,7 +464,7 @@ void optc2_lock_doublebuffer_enable(struct timing_generator *optc)
REG_UPDATE_2(OTG_GLOBAL_CONTROL1,
MASTER_UPDATE_LOCK_DB_X,
h_blank_start - 200 - 1,
(h_blank_start - 200 - 1) / optc1->opp_count,
MASTER_UPDATE_LOCK_DB_Y,
v_blank_start - 1);
}

View File

@ -1788,7 +1788,6 @@ static bool dcn30_split_stream_for_mpc_or_odm(
}
pri_pipe->next_odm_pipe = sec_pipe;
sec_pipe->prev_odm_pipe = pri_pipe;
ASSERT(sec_pipe->top_pipe == NULL);
if (!sec_pipe->top_pipe)
sec_pipe->stream_res.opp = pool->opps[pipe_idx];
@ -2617,6 +2616,26 @@ static bool dcn30_resource_construct(
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
/* read VBIOS LTTPR caps */
{
if (ctx->dc_bios->funcs->get_lttpr_caps) {
enum bp_result bp_query_result;
uint8_t is_vbios_lttpr_enable = 0;
bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
}
if (ctx->dc_bios->funcs->get_lttpr_interop) {
enum bp_result bp_query_result;
uint8_t is_vbios_interop_enabled = 0;
bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios,
&is_vbios_interop_enabled);
dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
}
}
if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV)
dc->debug = debug_defaults_drv;
else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {

View File

@ -146,8 +146,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_03_soc = {
.min_dcfclk = 500.0, /* TODO: set this to actual min DCFCLK */
.num_states = 1,
.sr_exit_time_us = 26.5,
.sr_enter_plus_exit_time_us = 31,
.sr_exit_time_us = 35.5,
.sr_enter_plus_exit_time_us = 40,
.urgent_latency_us = 4.0,
.urgent_latency_pixel_data_only_us = 4.0,
.urgent_latency_pixel_mixed_with_vm_data_us = 4.0,

View File

@ -1968,6 +1968,22 @@ static bool dcn31_resource_construct(
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
/* read VBIOS LTTPR caps */
{
if (ctx->dc_bios->funcs->get_lttpr_caps) {
enum bp_result bp_query_result;
uint8_t is_vbios_lttpr_enable = 0;
bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
}
/* interop bit is implicit */
{
dc->caps.vbios_lttpr_aware = true;
}
}
if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV)
dc->debug = debug_defaults_drv;
else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {

View File

@ -267,11 +267,13 @@ void dmub_dcn31_set_outbox1_rptr(struct dmub_srv *dmub, uint32_t rptr_offset)
bool dmub_dcn31_is_hw_init(struct dmub_srv *dmub)
{
uint32_t is_hw_init;
union dmub_fw_boot_status status;
uint32_t is_enable;
REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_hw_init);
status.all = REG_READ(DMCUB_SCRATCH0);
REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enable);
return is_hw_init != 0;
return is_enable != 0 && status.bits.dal_fw;
}
bool dmub_dcn31_is_supported(struct dmub_srv *dmub)

View File

@ -590,7 +590,7 @@ struct atom_firmware_info_v3_4 {
uint8_t board_i2c_feature_id; // enum of atom_board_i2c_feature_id_def
uint8_t board_i2c_feature_gpio_id; // i2c id find in gpio_lut data table gpio_id
uint8_t board_i2c_feature_slave_addr;
uint8_t reserved3;
uint8_t ras_rom_i2c_slave_addr;
uint16_t bootup_mvddq_mv;
uint16_t bootup_mvpp_mv;
uint32_t zfbstartaddrin16mb;

View File

@ -26,7 +26,7 @@
#include "amdgpu_smu.h"
#define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF
#define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x03
#define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04
#define SMU13_DRIVER_IF_VERSION_ALDE 0x07
/* MP Apertures */

View File

@ -111,7 +111,9 @@ typedef struct {
uint32_t InWhisperMode : 1;
uint32_t spare0 : 1;
uint32_t ZstateStatus : 4;
uint32_t spare1 :12;
uint32_t spare1 : 4;
uint32_t DstateFun : 4;
uint32_t DstateDev : 4;
// MP1_EXT_SCRATCH2
uint32_t P2JobHandler :24;
uint32_t RsmuPmiP2FinishedCnt : 8;
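The bitfield change above carves DstateFun and DstateDev out of spare1 (12 = 4 + 4 + 4), so the 32-bit word layout the firmware expects is unchanged. A standalone check (the field widths before spare1 are illustrative):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>
   #include <stdio.h>

   struct scratch_old {
           uint32_t used   : 20;
           uint32_t spare1 : 12;
   };

   struct scratch_new {
           uint32_t used      : 20;
           uint32_t spare1    :  4;
           uint32_t DstateFun :  4;
           uint32_t DstateDev :  4;
   };

   int main(void)
   {
           /* both layouts still occupy a single 32-bit word */
           assert(sizeof(struct scratch_old) == sizeof(struct scratch_new));
           printf("%zu == %zu\n", sizeof(struct scratch_old),
                  sizeof(struct scratch_new));
           return 0;
   }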

View File

@ -353,8 +353,7 @@ static void sienna_cichlid_check_bxco_support(struct smu_context *smu)
struct amdgpu_device *adev = smu->adev;
uint32_t val;
if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO ||
powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_MACO) {
if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO) {
val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
smu_baco->platform_support =
(val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :

Some files were not shown because too many files have changed in this diff.