Merge 5.16-rc8 into char-misc-next

We need the fixes in here as well for testing.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman 2022-01-03 13:44:38 +01:00
commit 824adf37ee
494 changed files with 4270 additions and 1921 deletions


@ -1689,6 +1689,8 @@
architectures force reset to be always executed
i8042.unlock [HW] Unlock (ignore) the keylock
i8042.kbdreset [HW] Reset device connected to KBD port
i8042.probe_defer
[HW] Allow deferred probing upon i8042 probe errors
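For illustration, a minimal sketch of how such an option is typically used: it is appended to the kernel command line by the bootloader. The GRUB file path and update command below are distro-specific assumptions:
    # /etc/default/grub  (appended to any existing options)
    GRUB_CMDLINE_LINUX="i8042.probe_defer"
    # then regenerate the bootloader configuration, e.g.:
    update-grub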
i810= [HW,DRM]
@ -2413,8 +2415,12 @@
Default is 1 (enabled)
kvm-intel.emulate_invalid_guest_state=
[KVM,Intel] Enable emulation of invalid guest states
Default is 0 (disabled)
[KVM,Intel] Disable emulation of invalid guest state.
Ignored if kvm-intel.enable_unrestricted_guest=1, as
guest state is never invalid for unrestricted guests.
This param doesn't apply to nested guests (L2), as KVM
never emulates invalid L2 guest state.
Default is 1 (enabled)
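A rough usage sketch, assuming kvm_intel is built as a module so the parameter appears under the usual /sys/module layout: the current value can be inspected and the emulation disabled at load time:
    cat /sys/module/kvm_intel/parameters/emulate_invalid_guest_state
    # reload the module for a new value to take effect
    modprobe -r kvm_intel && modprobe kvm_intel emulate_invalid_guest_state=0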
kvm-intel.flexpriority=
[KVM,Intel] Disable FlexPriority feature (TPR shadow).


@ -20,9 +20,9 @@ allOf:
properties:
compatible:
enum:
- apple,t8103-i2c
- apple,i2c
items:
- const: apple,t8103-i2c
- const: apple,i2c
reg:
maxItems: 1
@ -51,7 +51,7 @@ unevaluatedProperties: false
examples:
- |
i2c@35010000 {
compatible = "apple,t8103-i2c";
compatible = "apple,t8103-i2c", "apple,i2c";
reg = <0x35010000 0x4000>;
interrupt-parent = <&aic>;
interrupts = <0 627 4>;


@ -51,6 +51,19 @@ patternProperties:
description:
Properties for single BUCK regulator.
properties:
op_mode:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [0, 1, 2, 3]
default: 1
description: |
Describes the different operating modes of the regulator with power
mode change in SOC. The different possible values are:
0 - always off mode
1 - on in normal mode
2 - low power mode
3 - suspend mode
required:
- regulator-name
@ -63,6 +76,18 @@ patternProperties:
Properties for single BUCK regulator.
properties:
op_mode:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [0, 1, 2, 3]
default: 1
description: |
Describes the different operating modes of the regulator with power
mode change in SOC. The different possible values are:
0 - always off mode
1 - on in normal mode
2 - low power mode
3 - suspend mode
s5m8767,pmic-ext-control-gpios:
maxItems: 1
description: |


@ -11,9 +11,11 @@ systems. Some systems use variants that don't meet branding requirements,
and so are not advertised as being I2C but come under different names,
e.g. TWI (Two Wire Interface), IIC.
The official I2C specification is the `"I2C-bus specification and user
manual" (UM10204) <https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_
published by NXP Semiconductors.
The latest official I2C specification is the `"I2C-bus specification and user
manual" (UM10204) <https://www.nxp.com/webapp/Download?colCode=UM10204>`_
published by NXP Semiconductors. However, you need to log-in to the site to
access the PDF. An older version of the specification (revision 6) is archived
`here <https://web.archive.org/web/20210813122132/https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_.
SMBus (System Management Bus) is based on the I2C protocol, and is mostly
a subset of I2C protocols and signaling. Many I2C devices will work on an


@ -196,11 +196,12 @@ ad_actor_sys_prio
ad_actor_system
In an AD system, this specifies the mac-address for the actor in
protocol packet exchanges (LACPDUs). The value cannot be NULL or
multicast. It is preferred to have the local-admin bit set for this
mac but driver does not enforce it. If the value is not given then
system defaults to using the masters' mac address as actors' system
address.
protocol packet exchanges (LACPDUs). The value cannot be a multicast
address. If the all-zeroes MAC is specified, bonding will internally
use the MAC of the bond itself. It is preferred to have the
local-admin bit set for this mac but driver does not enforce it. If
the value is not given then system defaults to using the masters'
mac address as actors' system address.
This parameter has effect only in 802.3ad mode and is available through
SysFs interface.
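A minimal sketch of setting this through the SysFs interface mentioned above (bond0 is a hypothetical 802.3ad bond; the example MAC has the local-admin bit set)::
    echo 02:01:02:03:04:05 > /sys/class/net/bond0/bonding/ad_actor_system
    cat /sys/class/net/bond0/bonding/ad_actor_system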


@ -183,6 +183,7 @@ PHY and allows physical transmission and reception of Ethernet frames.
IRQ config, enable, reset
DPNI (Datapath Network Interface)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contains TX/RX queues, network interface configuration, and RX buffer pool
configuration mechanisms. The TX/RX queues are in memory and are identified
by queue number.


@ -440,6 +440,22 @@ NOTE: For 82599-based network connections, if you are enabling jumbo frames in
a virtual function (VF), jumbo frames must first be enabled in the physical
function (PF). The VF MTU setting cannot be larger than the PF MTU.
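For example (a rough sketch; the interface names are placeholders), the PF MTU is raised before the VF's::
    ip link set eth4 mtu 9000    # physical function first
    ip link set eth5 mtu 9000    # then the VF network device, e.g. inside the guest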
NBASE-T Support
---------------
The ixgbe driver supports NBASE-T on some devices. However, the advertisement
of NBASE-T speeds is suppressed by default, to accommodate broken network
switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
command to enable advertising NBASE-T speeds on devices which support it::
ethtool -s eth? advertise 0x1800000001028
On Linux systems with INTERFACES(5), this can be specified as a pre-up command
in /etc/network/interfaces so that the interface is always brought up with
NBASE-T support, e.g.::
iface eth? inet dhcp
pre-up ethtool -s eth? advertise 0x1800000001028 || true
Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has


@ -25,7 +25,8 @@ ip_default_ttl - INTEGER
ip_no_pmtu_disc - INTEGER
Disable Path MTU Discovery. If enabled in mode 1 and a
fragmentation-required ICMP is received, the PMTU to this
destination will be set to min_pmtu (see below). You will need
destination will be set to the smallest of the old MTU to
this destination and min_pmtu (see below). You will need
to raise min_pmtu to the smallest interface MTU on your system
manually if you want to avoid locally generated fragments.
@ -49,7 +50,8 @@ ip_no_pmtu_disc - INTEGER
Default: FALSE
min_pmtu - INTEGER
default 552 - minimum discovered Path MTU
default 552 - minimum Path MTU. Unless this is changed manually,
each cached pmtu will never be lower than this setting.
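An illustrative sketch of adjusting these two knobs, assuming the standard net.ipv4 sysctl paths (the values are examples only)::
    sysctl -w net.ipv4.ip_no_pmtu_disc=1
    sysctl -w net.ipv4.route.min_pmtu=1280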
ip_forward_use_pmtu - BOOLEAN
By default we don't trust protocol path MTUs while forwarding


@ -582,8 +582,8 @@ Time stamps for outgoing packets are to be generated as follows:
and hardware timestamping is not possible (SKBTX_IN_PROGRESS not set).
- As soon as the driver has sent the packet and/or obtained a
hardware time stamp for it, it passes the time stamp back by
calling skb_hwtstamp_tx() with the original skb, the raw
hardware time stamp. skb_hwtstamp_tx() clones the original skb and
calling skb_tstamp_tx() with the original skb, the raw
hardware time stamp. skb_tstamp_tx() clones the original skb and
adds the timestamps, therefore the original skb has to be freed now.
If obtaining the hardware time stamp somehow fails, then the driver
should not fall back to software time stamping. The rationale is that


@ -326,6 +326,8 @@ usi-headset
Headset support on USI machines
dual-codecs
Lenovo laptops with dual codecs
alc285-hp-amp-init
HP laptops which require speaker amplifier initialization (ALC285)
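A minimal sketch of selecting such a model quirk at module load time (the file name under /etc/modprobe.d/ is arbitrary)::
    echo "options snd-hda-intel model=alc285-hp-amp-init" > /etc/modprobe.d/alc285.conf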
ALC680
======


@ -3076,7 +3076,7 @@ F: Documentation/devicetree/bindings/phy/phy-ath79-usb.txt
F: drivers/phy/qualcomm/phy-ath79-usb.c
ATHEROS ATH GENERIC UTILITIES
M: Kalle Valo <kvalo@codeaurora.org>
M: Kalle Valo <kvalo@kernel.org>
L: linux-wireless@vger.kernel.org
S: Supported
F: drivers/net/wireless/ath/*
@ -3091,7 +3091,7 @@ W: https://wireless.wiki.kernel.org/en/users/Drivers/ath5k
F: drivers/net/wireless/ath/ath5k/
ATHEROS ATH6KL WIRELESS DRIVER
M: Kalle Valo <kvalo@codeaurora.org>
M: Kalle Valo <kvalo@kernel.org>
L: linux-wireless@vger.kernel.org
S: Supported
W: https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl
@ -13267,7 +13267,7 @@ F: include/uapi/linux/if_*
F: include/uapi/linux/netdevice.h
NETWORKING DRIVERS (WIRELESS)
M: Kalle Valo <kvalo@codeaurora.org>
M: Kalle Valo <kvalo@kernel.org>
L: linux-wireless@vger.kernel.org
S: Maintained
Q: http://patchwork.kernel.org/project/linux-wireless/list/
@ -14876,7 +14876,7 @@ PCIE DRIVER FOR MEDIATEK
M: Ryder Lee <ryder.lee@mediatek.com>
M: Jianjun Wang <jianjun.wang@mediatek.com>
L: linux-pci@vger.kernel.org
L: linux-mediatek@lists.infradead.org
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: Documentation/devicetree/bindings/pci/mediatek*
F: drivers/pci/controller/*mediatek*
@ -15735,7 +15735,7 @@ T: git git://linuxtv.org/anttip/media_tree.git
F: drivers/media/tuners/qt1010*
QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
M: Kalle Valo <kvalo@codeaurora.org>
M: Kalle Valo <kvalo@kernel.org>
L: ath10k@lists.infradead.org
S: Supported
W: https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
@ -15743,7 +15743,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
F: drivers/net/wireless/ath/ath10k/
QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
M: Kalle Valo <kvalo@codeaurora.org>
M: Kalle Valo <kvalo@kernel.org>
L: ath11k@lists.infradead.org
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
@ -15916,7 +15916,7 @@ F: Documentation/devicetree/bindings/media/*venus*
F: drivers/media/platform/qcom/venus/
QUALCOMM WCN36XX WIRELESS DRIVER
M: Kalle Valo <kvalo@codeaurora.org>
M: Kalle Valo <kvalo@kernel.org>
L: wcn36xx@lists.infradead.org
S: Supported
W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
@ -17454,7 +17454,7 @@ F: drivers/video/fbdev/sm712*
SILVACO I3C DUAL-ROLE MASTER
M: Miquel Raynal <miquel.raynal@bootlin.com>
M: Conor Culhane <conor.culhane@silvaco.com>
L: linux-i3c@lists.infradead.org
L: linux-i3c@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
F: drivers/i3c/master/svc-i3c-master.c
@ -21103,7 +21103,7 @@ S: Maintained
F: arch/x86/kernel/cpu/zhaoxin.c
ZONEFS FILESYSTEM
M: Damien Le Moal <damien.lemoal@wdc.com>
M: Damien Le Moal <damien.lemoal@opensource.wdc.com>
M: Naohiro Aota <naohiro.aota@wdc.com>
R: Johannes Thumshirn <jth@kernel.org>
L: linux-fsdevel@vger.kernel.org


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 16
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION = -rc8
NAME = Gobble Gobble
# *DOCUMENTATION*
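As a quick sanity check of the resulting version string, the kernel's own make target can be used (a minimal illustration):
    $ make -s kernelversion
    5.16.0-rc8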


@ -309,6 +309,7 @@
ethphy: ethernet-phy@1 {
reg = <1>;
qca,clk-out-frequency = <125000000>;
};
};
};


@ -178,6 +178,8 @@
label = "cpu";
ethernet = <&fec>;
phy-mode = "rgmii-id";
rx-internal-delay-ps = <2000>;
tx-internal-delay-ps = <2000>;
fixed-link {
speed = <100>;


@ -82,6 +82,6 @@
#define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0 0x01FC 0x0488 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA07__ESAI_T0 0x0200 0x048C 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0 0x0200 0x048C 0x0000 0x9 0x0
#endif /* __DTS_IMX6ULL_PINFUNC_H */


@ -91,6 +91,8 @@
/* Internal port connected to eth2 */
ethernet = <&enet2>;
phy-mode = "rgmii";
rx-internal-delay-ps = <0>;
tx-internal-delay-ps = <0>;
reg = <4>;
fixed-link {


@ -12,7 +12,7 @@
flash0: n25q00@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00aa";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -119,7 +119,7 @@
flash: flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q256a";
compatible = "micron,n25q256a", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -124,7 +124,7 @@
flash0: n25q00@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <0>; /* chip select */
spi-max-frequency = <100000000>;


@ -169,7 +169,7 @@
flash: flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -80,7 +80,7 @@
flash: flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q256a";
compatible = "micron,n25q256a", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;
m25p,fast-read;


@ -116,7 +116,7 @@
flash0: n25q512a@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q512a";
compatible = "micron,n25q512a", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -224,7 +224,7 @@
n25q128@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q128";
compatible = "micron,n25q128", "jedec,spi-nor";
reg = <0>; /* chip select */
spi-max-frequency = <100000000>;
m25p,fast-read;
@ -241,7 +241,7 @@
n25q00@1 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <1>; /* chip select */
spi-max-frequency = <100000000>;
m25p,fast-read;


@ -17,7 +17,6 @@
#ifdef CONFIG_EFI
void efi_init(void);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);


@ -596,11 +596,9 @@ call_fpe:
tstne r0, #0x04000000 @ bit 26 set on both ARM and Thumb-2
reteq lr
and r8, r0, #0x00000f00 @ mask out CP number
THUMB( lsr r8, r8, #8 )
mov r7, #1
add r6, r10, #TI_USED_CP
ARM( strb r7, [r6, r8, lsr #8] ) @ set appropriate used_cp[]
THUMB( strb r7, [r6, r8] ) @ set appropriate used_cp[]
add r6, r10, r8, lsr #8 @ add used_cp[] array offset first
strb r7, [r6, #TI_USED_CP] @ set appropriate used_cp[]
#ifdef CONFIG_IWMMXT
@ Test if we need to give access to iWMMXt coprocessors
ldr r5, [r10, #TI_FLAGS]
@ -609,7 +607,7 @@ call_fpe:
bcs iwmmxt_task_enable
#endif
ARM( add pc, pc, r8, lsr #6 )
THUMB( lsl r8, r8, #2 )
THUMB( lsr r8, r8, #6 )
THUMB( add pc, r8 )
nop


@ -114,6 +114,7 @@ ENTRY(secondary_startup)
add r12, r12, r10
ret r12
1: bl __after_proc_init
ldr r7, __secondary_data @ reload r7
ldr sp, [r7, #12] @ set up the stack pointer
ldr r0, [r7, #16] @ set up task pointer
mov fp, #0


@ -189,7 +189,7 @@ static int __init rockchip_smp_prepare_sram(struct device_node *node)
rockchip_boot_fn = __pa_symbol(secondary_startup);
/* copy the trampoline to sram, that runs during startup of the core */
memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
memcpy_toio(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
flush_cache_all();
outer_clean_range(0, trampoline_sz);


@ -161,7 +161,6 @@ config ARCH_MEDIATEK
config ARCH_MESON
bool "Amlogic Platforms"
select COMMON_CLK
help
This enables support for the arm64 based Amlogic SoCs
such as the s905, S905X/D, S912, A113X/D or S905X/D2


@ -69,7 +69,7 @@
pinctrl-0 = <&emac_rgmii_pins>;
phy-supply = <&reg_gmac_3v3>;
phy-handle = <&ext_rgmii_phy>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
status = "okay";
};


@ -134,23 +134,23 @@
type = "critical";
};
};
};
cpu_cooling_maps: cooling-maps {
map0 {
trip = <&cpu_passive>;
cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
};
cpu_cooling_maps: cooling-maps {
map0 {
trip = <&cpu_passive>;
cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
};
map1 {
trip = <&cpu_hot>;
cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
map1 {
trip = <&cpu_hot>;
cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
<&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
};
};
};
};


@ -60,7 +60,7 @@
&port02 {
bus-range = <3 3>;
ethernet0: pci@0,0 {
ethernet0: ethernet@0,0 {
reg = <0x30000 0x0 0x0 0x0 0x0>;
/* To be filled by the loader */
local-mac-address = [00 10 18 00 00 00];


@ -144,6 +144,7 @@
apple,npins = <212>;
interrupt-controller;
#interrupt-cells = <2>;
interrupt-parent = <&aic>;
interrupts = <AIC_IRQ 190 IRQ_TYPE_LEVEL_HIGH>,
<AIC_IRQ 191 IRQ_TYPE_LEVEL_HIGH>,
@ -170,6 +171,7 @@
apple,npins = <42>;
interrupt-controller;
#interrupt-cells = <2>;
interrupt-parent = <&aic>;
interrupts = <AIC_IRQ 268 IRQ_TYPE_LEVEL_HIGH>,
<AIC_IRQ 269 IRQ_TYPE_LEVEL_HIGH>,
@ -190,6 +192,7 @@
apple,npins = <23>;
interrupt-controller;
#interrupt-cells = <2>;
interrupt-parent = <&aic>;
interrupts = <AIC_IRQ 330 IRQ_TYPE_LEVEL_HIGH>,
<AIC_IRQ 331 IRQ_TYPE_LEVEL_HIGH>,
@ -210,6 +213,7 @@
apple,npins = <16>;
interrupt-controller;
#interrupt-cells = <2>;
interrupt-parent = <&aic>;
interrupts = <AIC_IRQ 391 IRQ_TYPE_LEVEL_HIGH>,
<AIC_IRQ 392 IRQ_TYPE_LEVEL_HIGH>,


@ -38,7 +38,6 @@
powerdn {
label = "External Power Down";
gpios = <&gpio1 17 GPIO_ACTIVE_LOW>;
interrupts = <&gpio1 17 IRQ_TYPE_EDGE_FALLING>;
linux,code = <KEY_POWER>;
};
@ -46,7 +45,6 @@
admin {
label = "ADMIN button";
gpios = <&gpio3 8 GPIO_ACTIVE_HIGH>;
interrupts = <&gpio3 8 IRQ_TYPE_EDGE_RISING>;
linux,code = <KEY_WPS_BUTTON>;
};
};


@ -386,6 +386,8 @@
reg = <2>;
ethernet = <&dpmac17>;
phy-mode = "rgmii-id";
rx-internal-delay-ps = <2000>;
tx-internal-delay-ps = <2000>;
fixed-link {
speed = <1000>;
@ -529,6 +531,8 @@
reg = <2>;
ethernet = <&dpmac18>;
phy-mode = "rgmii-id";
rx-internal-delay-ps = <2000>;
tx-internal-delay-ps = <2000>;
fixed-link {
speed = <1000>;


@ -719,7 +719,7 @@
clock-names = "i2c";
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(16)>;
scl-gpio = <&gpio2 15 GPIO_ACTIVE_HIGH>;
scl-gpios = <&gpio2 15 GPIO_ACTIVE_HIGH>;
status = "disabled";
};
@ -768,7 +768,7 @@
clock-names = "i2c";
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(16)>;
scl-gpio = <&gpio2 16 GPIO_ACTIVE_HIGH>;
scl-gpios = <&gpio2 16 GPIO_ACTIVE_HIGH>;
status = "disabled";
};


@ -524,8 +524,6 @@
<&clk IMX8MQ_VIDEO_PLL1>,
<&clk IMX8MQ_VIDEO_PLL1_OUT>;
assigned-clock-rates = <0>, <0>, <0>, <594000000>;
interconnects = <&noc IMX8MQ_ICM_LCDIF &noc IMX8MQ_ICS_DRAM>;
interconnect-names = "dram";
status = "disabled";
port@0 {


@ -97,7 +97,7 @@
regulator-max-microvolt = <3300000>;
regulator-always-on;
regulator-boot-on;
vim-supply = <&vcc_io>;
vin-supply = <&vcc_io>;
};
vdd_core: vdd-core {


@ -705,7 +705,6 @@
&sdhci {
bus-width = <8>;
mmc-hs400-1_8v;
mmc-hs400-enhanced-strobe;
non-removable;
status = "okay";
};


@ -276,6 +276,7 @@
clock-output-names = "xin32k", "rk808-clkout2";
pinctrl-names = "default";
pinctrl-0 = <&pmic_int_l>;
rockchip,system-power-controller;
vcc1-supply = <&vcc5v0_sys>;
vcc2-supply = <&vcc5v0_sys>;
vcc3-supply = <&vcc5v0_sys>;


@ -55,7 +55,7 @@
regulator-boot-on;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
vim-supply = <&vcc3v3_sys>;
vin-supply = <&vcc3v3_sys>;
};
vcc3v3_sys: vcc3v3-sys {


@ -502,7 +502,7 @@
status = "okay";
bt656-supply = <&vcc_3v0>;
audio-supply = <&vcc_3v0>;
audio-supply = <&vcc1v8_codec>;
sdmmc-supply = <&vcc_sdio>;
gpio1830-supply = <&vcc_3v0>;
};


@ -14,7 +14,6 @@
#ifdef CONFIG_EFI
extern void efi_init(void);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
#else
#define efi_init()
#endif


@ -149,6 +149,7 @@ int load_other_segments(struct kimage *image,
initrd_len, cmdline, 0);
if (!dtb) {
pr_err("Preparing for new dtb failed\n");
ret = -EINVAL;
goto out_err;
}


@ -6,5 +6,7 @@
#define PCI_IOSIZE SZ_64K
#define IO_SPACE_LIMIT (PCI_IOSIZE - 1)
#define pci_remap_iospace pci_remap_iospace
#include <asm/mach-generic/spaces.h>
#endif


@ -20,10 +20,6 @@
#include <linux/list.h>
#include <linux/of.h>
#ifdef CONFIG_PCI_DRIVERS_GENERIC
#define pci_remap_iospace pci_remap_iospace
#endif
#ifdef CONFIG_PCI_DRIVERS_LEGACY
/*


@ -47,6 +47,7 @@ void pcibios_fixup_bus(struct pci_bus *bus)
pci_read_bridge_bases(bus);
}
#ifdef pci_remap_iospace
int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
{
unsigned long vaddr;
@ -60,3 +61,4 @@ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
set_io_port_base(vaddr);
return 0;
}
#endif

View File

@ -85,11 +85,6 @@ config MMU
config STACK_GROWSUP
def_bool y
config ARCH_DEFCONFIG
string
default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
config GENERIC_LOCKBREAK
bool
default y


@ -14,7 +14,7 @@ static inline void
_futex_spin_lock(u32 __user *uaddr)
{
extern u32 lws_lock_start[];
long index = ((long)uaddr & 0x3f8) >> 1;
long index = ((long)uaddr & 0x7f8) >> 1;
arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
preempt_disable();
arch_spin_lock(s);
@ -24,7 +24,7 @@ static inline void
_futex_spin_unlock(u32 __user *uaddr)
{
extern u32 lws_lock_start[];
long index = ((long)uaddr & 0x3f8) >> 1;
long index = ((long)uaddr & 0x7f8) >> 1;
arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
arch_spin_unlock(s);
preempt_enable();


@ -472,7 +472,7 @@ lws_start:
extrd,u %r1,PSW_W_BIT,1,%r1
/* sp must be aligned on 4, so deposit the W bit setting into
* the bottom of sp temporarily */
or,ev %r1,%r30,%r30
or,od %r1,%r30,%r30
/* Clip LWS number to a 32-bit value for 32-bit processes */
depdi 0, 31, 32, %r20


@ -730,6 +730,8 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
}
mmap_read_unlock(current->mm);
}
/* CPU could not fetch instruction, so clear stale IIR value. */
regs->iir = 0xbaadf00d;
fallthrough;
case 27:
/* Data memory protection ID trap */


@ -422,11 +422,17 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
const char *name)
{
long reladdr;
func_desc_t desc;
int i;
if (is_mprofile_ftrace_call(name))
return create_ftrace_stub(entry, addr, me);
memcpy(entry->jump, ppc64_stub_insns, sizeof(ppc64_stub_insns));
for (i = 0; i < sizeof(ppc64_stub_insns) / sizeof(u32); i++) {
if (patch_instruction(&entry->jump[i],
ppc_inst(ppc64_stub_insns[i])))
return 0;
}
/* Stub uses address relative to r2. */
reladdr = (unsigned long)entry - my_r2(sechdrs, me);
@ -437,10 +443,24 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
}
pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
entry->jump[0] |= PPC_HA(reladdr);
entry->jump[1] |= PPC_LO(reladdr);
entry->funcdata = func_desc(addr);
entry->magic = STUB_MAGIC;
if (patch_instruction(&entry->jump[0],
ppc_inst(entry->jump[0] | PPC_HA(reladdr))))
return 0;
if (patch_instruction(&entry->jump[1],
ppc_inst(entry->jump[1] | PPC_LO(reladdr))))
return 0;
// func_desc_t is 8 bytes if ABIv2, else 16 bytes
desc = func_desc(addr);
for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
if (patch_instruction(((u32 *)&entry->funcdata) + i,
ppc_inst(((u32 *)(&desc))[i])))
return 0;
}
if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
return 0;
return 1;
}
@ -495,8 +515,11 @@ static int restore_r2(const char *name, u32 *instruction, struct module *me)
me->name, *instruction, instruction);
return 0;
}
/* ld r2,R2_STACK_OFFSET(r1) */
*instruction = PPC_INST_LD_TOC;
if (patch_instruction(instruction, ppc_inst(PPC_INST_LD_TOC)))
return 0;
return 1;
}
@ -636,9 +659,12 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
}
/* Only replace bits 2 through 26 */
*(uint32_t *)location
= (*(uint32_t *)location & ~0x03fffffc)
value = (*(uint32_t *)location & ~0x03fffffc)
| (value & 0x03fffffc);
if (patch_instruction((u32 *)location, ppc_inst(value)))
return -EFAULT;
break;
case R_PPC64_REL64:


@ -183,7 +183,7 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
{
pte_t pte = __pte(st->current_flags);
if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx)
if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
return;
if (!pte_write(pte) || !pte_exec(pte))


@ -220,7 +220,7 @@ static int smp_85xx_start_cpu(int cpu)
local_irq_save(flags);
hard_irq_disable();
if (qoriq_pm_ops)
if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
qoriq_pm_ops->cpu_up_prepare(cpu);
/* if cpu is not spinning, reset it */
@ -292,7 +292,7 @@ static int smp_85xx_kick_cpu(int nr)
booting_thread_hwid = cpu_thread_in_core(nr);
primary = cpu_first_thread_sibling(nr);
if (qoriq_pm_ops)
if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
qoriq_pm_ops->cpu_up_prepare(nr);
/*


@ -76,6 +76,7 @@
spi-max-frequency = <20000000>;
voltage-ranges = <3300 3300>;
disable-wp;
gpios = <&gpio 11 GPIO_ACTIVE_LOW>;
};
};


@ -2,6 +2,7 @@
/* Copyright (c) 2020 SiFive, Inc */
#include "fu740-c000.dtsi"
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
/* Clock frequency (in Hz) of the PCB crystal for rtcclk */
@ -54,10 +55,21 @@
temperature-sensor@4c {
compatible = "ti,tmp451";
reg = <0x4c>;
vcc-supply = <&vdd_bpro>;
interrupt-parent = <&gpio>;
interrupts = <6 IRQ_TYPE_LEVEL_LOW>;
};
eeprom@54 {
compatible = "microchip,24c02", "atmel,24c02";
reg = <0x54>;
vcc-supply = <&vdd_bpro>;
label = "board-id";
pagesize = <16>;
read-only;
size = <256>;
};
pmic@58 {
compatible = "dlg,da9063";
reg = <0x58>;
@ -65,48 +77,44 @@
interrupts = <1 IRQ_TYPE_LEVEL_LOW>;
interrupt-controller;
regulators {
vdd_bcore1: bcore1 {
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <900000>;
regulator-min-microamp = <5000000>;
regulator-max-microamp = <5000000>;
regulator-always-on;
};
onkey {
compatible = "dlg,da9063-onkey";
};
vdd_bcore2: bcore2 {
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <900000>;
regulator-min-microamp = <5000000>;
regulator-max-microamp = <5000000>;
rtc {
compatible = "dlg,da9063-rtc";
};
wdt {
compatible = "dlg,da9063-watchdog";
};
regulators {
vdd_bcore: bcores-merged {
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <1050000>;
regulator-min-microamp = <4800000>;
regulator-max-microamp = <4800000>;
regulator-always-on;
};
vdd_bpro: bpro {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <2500000>;
regulator-max-microamp = <2500000>;
regulator-min-microamp = <2400000>;
regulator-max-microamp = <2400000>;
regulator-always-on;
};
vdd_bperi: bperi {
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <1050000>;
regulator-min-microvolt = <1060000>;
regulator-max-microvolt = <1060000>;
regulator-min-microamp = <1500000>;
regulator-max-microamp = <1500000>;
regulator-always-on;
};
vdd_bmem: bmem {
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
regulator-min-microamp = <3000000>;
regulator-max-microamp = <3000000>;
regulator-always-on;
};
vdd_bio: bio {
vdd_bmem_bio: bmem-bio-merged {
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
regulator-min-microamp = <3000000>;
@ -117,86 +125,66 @@
vdd_ldo1: ldo1 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <100000>;
regulator-max-microamp = <100000>;
regulator-always-on;
};
vdd_ldo2: ldo2 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-always-on;
};
vdd_ldo3: ldo3 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
vdd_ldo4: ldo4 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-min-microvolt = <2500000>;
regulator-max-microvolt = <2500000>;
regulator-always-on;
};
vdd_ldo5: ldo5 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <100000>;
regulator-max-microamp = <100000>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
vdd_ldo6: ldo6 {
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
};
vdd_ldo7: ldo7 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
vdd_ldo8: ldo8 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
vdd_ld09: ldo9 {
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <1050000>;
regulator-min-microamp = <200000>;
regulator-max-microamp = <200000>;
regulator-always-on;
};
vdd_ldo10: ldo10 {
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <1000000>;
regulator-min-microamp = <300000>;
regulator-max-microamp = <300000>;
regulator-always-on;
};
vdd_ldo11: ldo11 {
regulator-min-microvolt = <2500000>;
regulator-max-microvolt = <2500000>;
regulator-min-microamp = <300000>;
regulator-max-microamp = <300000>;
regulator-always-on;
};
};
@ -223,6 +211,7 @@
spi-max-frequency = <20000000>;
voltage-ranges = <3300 3300>;
disable-wp;
gpios = <&gpio 15 GPIO_ACTIVE_LOW>;
};
};
@ -245,4 +234,8 @@
&gpio {
status = "okay";
gpio-line-names = "J29.1", "PMICNTB", "PMICSHDN", "J8.1", "J8.3",
"PCIe_PWREN", "THERM", "UBRDG_RSTN", "PCIe_PERSTN",
"ULPI_RSTN", "J8.2", "UHUB_RSTN", "GEMGXL_RST", "J8.4",
"EN_VDD_SD", "SD_CD";
};


@ -13,7 +13,6 @@
#ifdef CONFIG_EFI
extern void efi_init(void);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
#else
#define efi_init()
#endif


@ -117,6 +117,7 @@ CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM_USER=m
CONFIG_NET_KEY=m
CONFIG_NET_SWITCHDEV=y
CONFIG_SMC=m
CONFIG_SMC_DIAG=m
CONFIG_INET=y
@ -511,6 +512,7 @@ CONFIG_NLMON=m
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
CONFIG_MLX5_ESWITCH=y
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set


@ -109,6 +109,7 @@ CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM_USER=m
CONFIG_NET_KEY=m
CONFIG_NET_SWITCHDEV=y
CONFIG_SMC=m
CONFIG_SMC_DIAG=m
CONFIG_INET=y
@ -502,6 +503,7 @@ CONFIG_NLMON=m
CONFIG_MLX4_EN=m
CONFIG_MLX5_CORE=m
CONFIG_MLX5_CORE_EN=y
CONFIG_MLX5_ESWITCH=y
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
# CONFIG_NET_VENDOR_MICROSEMI is not set


@ -290,7 +290,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
return;
regs = ftrace_get_regs(fregs);
preempt_disable_notrace();
p = get_kprobe((kprobe_opcode_t *)ip);
if (unlikely(!p) || kprobe_disabled(p))
goto out;
@ -318,7 +317,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
}
__this_cpu_write(current_kprobe, NULL);
out:
preempt_enable_notrace();
ftrace_test_recursion_unlock(bit);
}
NOKPROBE_SYMBOL(kprobe_ftrace_handler);


@ -138,7 +138,7 @@ void noinstr do_io_irq(struct pt_regs *regs)
struct pt_regs *old_regs = set_irq_regs(regs);
int from_idle;
irq_enter();
irq_enter_rcu();
if (user_mode(regs)) {
update_timer_sys();
@ -158,7 +158,8 @@ void noinstr do_io_irq(struct pt_regs *regs)
do_irq_async(regs, IO_INTERRUPT);
} while (MACHINE_IS_LPAR && irq_pending(regs));
irq_exit();
irq_exit_rcu();
set_irq_regs(old_regs);
irqentry_exit(regs, state);
@ -172,7 +173,7 @@ void noinstr do_ext_irq(struct pt_regs *regs)
struct pt_regs *old_regs = set_irq_regs(regs);
int from_idle;
irq_enter();
irq_enter_rcu();
if (user_mode(regs)) {
update_timer_sys();
@ -190,7 +191,7 @@ void noinstr do_ext_irq(struct pt_regs *regs)
do_irq_async(regs, EXT_INTERRUPT);
irq_exit();
irq_exit_rcu();
set_irq_regs(old_regs);
irqentry_exit(regs, state);


@ -7,6 +7,8 @@
* Author(s): Philipp Rudo <prudo@linux.vnet.ibm.com>
*/
#define pr_fmt(fmt) "kexec: " fmt
#include <linux/elf.h>
#include <linux/errno.h>
#include <linux/kexec.h>
@ -290,8 +292,16 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
const Elf_Shdr *relsec,
const Elf_Shdr *symtab)
{
const char *strtab, *name, *shstrtab;
const Elf_Shdr *sechdrs;
Elf_Rela *relas;
int i, r_type;
int ret;
/* String & section header string table */
sechdrs = (void *)pi->ehdr + pi->ehdr->e_shoff;
strtab = (char *)pi->ehdr + sechdrs[symtab->sh_link].sh_offset;
shstrtab = (char *)pi->ehdr + sechdrs[pi->ehdr->e_shstrndx].sh_offset;
relas = (void *)pi->ehdr + relsec->sh_offset;
@ -304,15 +314,27 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
sym = (void *)pi->ehdr + symtab->sh_offset;
sym += ELF64_R_SYM(relas[i].r_info);
if (sym->st_shndx == SHN_UNDEF)
return -ENOEXEC;
if (sym->st_name)
name = strtab + sym->st_name;
else
name = shstrtab + sechdrs[sym->st_shndx].sh_name;
if (sym->st_shndx == SHN_COMMON)
if (sym->st_shndx == SHN_UNDEF) {
pr_err("Undefined symbol: %s\n", name);
return -ENOEXEC;
}
if (sym->st_shndx == SHN_COMMON) {
pr_err("symbol '%s' in common section\n", name);
return -ENOEXEC;
}
if (sym->st_shndx >= pi->ehdr->e_shnum &&
sym->st_shndx != SHN_ABS)
sym->st_shndx != SHN_ABS) {
pr_err("Invalid section %d for symbol %s\n",
sym->st_shndx, name);
return -ENOEXEC;
}
loc = pi->purgatory_buf;
loc += section->sh_offset;
@ -326,7 +348,15 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
addr = section->sh_addr + relas[i].r_offset;
r_type = ELF64_R_TYPE(relas[i].r_info);
arch_kexec_do_relocs(r_type, loc, val, addr);
if (r_type == R_390_PLT32DBL)
r_type = R_390_PC32DBL;
ret = arch_kexec_do_relocs(r_type, loc, val, addr);
if (ret) {
pr_err("Unknown rela relocation: %d\n", r_type);
return -ENOEXEC;
}
}
return 0;
}


@ -197,8 +197,6 @@ static inline bool efi_runtime_supported(void)
extern void parse_efi_setup(u64 phys_addr, u32 data_len);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
extern void efi_thunk_runtime_setup(void);
efi_status_t efi_set_virtual_address_map(unsigned long memory_map_size,
unsigned long descriptor_size,


@ -47,6 +47,7 @@ KVM_X86_OP(set_dr7)
KVM_X86_OP(cache_reg)
KVM_X86_OP(get_rflags)
KVM_X86_OP(set_rflags)
KVM_X86_OP(get_if_flag)
KVM_X86_OP(tlb_flush_all)
KVM_X86_OP(tlb_flush_current)
KVM_X86_OP_NULL(tlb_remote_flush)


@ -1349,6 +1349,7 @@ struct kvm_x86_ops {
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
bool (*get_if_flag)(struct kvm_vcpu *vcpu);
void (*tlb_flush_all)(struct kvm_vcpu *vcpu);
void (*tlb_flush_current)(struct kvm_vcpu *vcpu);


@ -4,8 +4,8 @@
#include <asm/cpufeature.h>
#define PKRU_AD_BIT 0x1
#define PKRU_WD_BIT 0x2
#define PKRU_AD_BIT 0x1u
#define PKRU_WD_BIT 0x2u
#define PKRU_BITS_PER_PKEY 2
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS


@ -713,9 +713,6 @@ static void __init early_reserve_memory(void)
early_reserve_initrd();
if (efi_enabled(EFI_BOOT))
efi_memblock_x86_reserve_range();
memblock_x86_reserve_range_setup_data();
reserve_ibft_region();
@ -742,28 +739,6 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
return 0;
}
static char * __init prepare_command_line(void)
{
#ifdef CONFIG_CMDLINE_BOOL
#ifdef CONFIG_CMDLINE_OVERRIDE
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
#else
if (builtin_cmdline[0]) {
/* append boot loader cmdline to builtin */
strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
parse_early_param();
return command_line;
}
/*
* Determine if we were loaded by an EFI loader. If so, then we have also been
* passed the efi memmap, systab, etc., so we should use these data structures
@ -852,23 +827,6 @@ void __init setup_arch(char **cmdline_p)
x86_init.oem.arch_setup();
/*
* x86_configure_nx() is called before parse_early_param() (called by
* prepare_command_line()) to detect whether hardware doesn't support
* NX (so that the early EHCI debug console setup can safely call
* set_fixmap()). It may then be called again from within noexec_setup()
* during parsing early parameters to honor the respective command line
* option.
*/
x86_configure_nx();
/*
* This parses early params and it needs to run before
* early_reserve_memory() because latter relies on such settings
* supplied as early params.
*/
*cmdline_p = prepare_command_line();
/*
* Do some memory reservations *before* memory is added to memblock, so
* memblock allocations won't overwrite it.
@ -902,6 +860,36 @@ void __init setup_arch(char **cmdline_p)
bss_resource.start = __pa_symbol(__bss_start);
bss_resource.end = __pa_symbol(__bss_stop)-1;
#ifdef CONFIG_CMDLINE_BOOL
#ifdef CONFIG_CMDLINE_OVERRIDE
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
#else
if (builtin_cmdline[0]) {
/* append boot loader cmdline to builtin */
strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
*cmdline_p = command_line;
/*
* x86_configure_nx() is called before parse_early_param() to detect
* whether hardware doesn't support NX (so that the early EHCI debug
* console setup can safely call set_fixmap()). It may then be called
* again from within noexec_setup() during parsing early parameters
* to honor the respective command line option.
*/
x86_configure_nx();
parse_early_param();
if (efi_enabled(EFI_BOOT))
efi_memblock_x86_reserve_range();
#ifdef CONFIG_MEMORY_HOTPLUG
/*
* Memory used by the kernel cannot be hot-removed because Linux


@ -3987,7 +3987,21 @@ out_retry:
static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
struct kvm_page_fault *fault, int mmu_seq)
{
if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa)))
struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root_hpa);
/* Special roots, e.g. pae_root, are not backed by shadow pages. */
if (sp && is_obsolete_sp(vcpu->kvm, sp))
return true;
/*
* Roots without an associated shadow page are considered invalid if
* there is a pending request to free obsolete roots. The request is
* only a hint that the current root _may_ be obsolete and needs to be
* reloaded, e.g. if the guest frees a PGD that KVM is tracking as a
* previous root, then __kvm_mmu_prepare_zap_page() signals all vCPUs
* to reload even if no vCPU is actively using the root.
*/
if (!sp && kvm_test_request(KVM_REQ_MMU_RELOAD, vcpu))
return true;
return fault->slot &&


@ -26,6 +26,7 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
*/
void tdp_iter_restart(struct tdp_iter *iter)
{
iter->yielded = false;
iter->yielded_gfn = iter->next_last_level_gfn;
iter->level = iter->root_level;
@ -160,6 +161,11 @@ static bool try_step_up(struct tdp_iter *iter)
*/
void tdp_iter_next(struct tdp_iter *iter)
{
if (iter->yielded) {
tdp_iter_restart(iter);
return;
}
if (try_step_down(iter))
return;


@ -45,6 +45,12 @@ struct tdp_iter {
* iterator walks off the end of the paging structure.
*/
bool valid;
/*
* True if KVM dropped mmu_lock and yielded in the middle of a walk, in
* which case tdp_iter_next() needs to restart the walk at the root
* level instead of advancing to the next entry.
*/
bool yielded;
};
/*


@ -502,6 +502,8 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter,
u64 new_spte)
{
WARN_ON_ONCE(iter->yielded);
lockdep_assert_held_read(&kvm->mmu_lock);
/*
@ -575,6 +577,8 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
u64 new_spte, bool record_acc_track,
bool record_dirty_log)
{
WARN_ON_ONCE(iter->yielded);
lockdep_assert_held_write(&kvm->mmu_lock);
/*
@ -640,18 +644,19 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
* If this function should yield and flush is set, it will perform a remote
* TLB flush before yielding.
*
* If this function yields, it will also reset the tdp_iter's walk over the
* paging structure and the calling function should skip to the next
* iteration to allow the iterator to continue its traversal from the
* paging structure root.
* If this function yields, iter->yielded is set and the caller must skip to
* the next iteration, where tdp_iter_next() will reset the tdp_iter's walk
* over the paging structures to allow the iterator to continue its traversal
* from the paging structure root.
*
* Return true if this function yielded and the iterator's traversal was reset.
* Return false if a yield was not needed.
* Returns true if this function yielded.
*/
static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
struct tdp_iter *iter, bool flush,
bool shared)
static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
struct tdp_iter *iter,
bool flush, bool shared)
{
WARN_ON(iter->yielded);
/* Ensure forward progress has been made before yielding. */
if (iter->next_last_level_gfn == iter->yielded_gfn)
return false;
@ -671,12 +676,10 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
WARN_ON(iter->gfn > iter->next_last_level_gfn);
tdp_iter_restart(iter);
return true;
iter->yielded = true;
}
return false;
return iter->yielded;
}
/*


@ -1585,6 +1585,15 @@ static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
to_svm(vcpu)->vmcb->save.rflags = rflags;
}
static bool svm_get_if_flag(struct kvm_vcpu *vcpu)
{
struct vmcb *vmcb = to_svm(vcpu)->vmcb;
return sev_es_guest(vcpu->kvm)
? vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK
: kvm_get_rflags(vcpu) & X86_EFLAGS_IF;
}
static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
{
switch (reg) {
@ -3568,14 +3577,7 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
if (!gif_set(svm))
return true;
if (sev_es_guest(vcpu->kvm)) {
/*
* SEV-ES guests to not expose RFLAGS. Use the VMCB interrupt mask
* bit to determine the state of the IF flag.
*/
if (!(vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK))
return true;
} else if (is_guest_mode(vcpu)) {
if (is_guest_mode(vcpu)) {
/* As long as interrupts are being delivered... */
if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)
? !(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF)
@ -3586,7 +3588,7 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
if (nested_exit_on_intr(svm))
return false;
} else {
if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
if (!svm_get_if_flag(vcpu))
return true;
}
@ -4621,6 +4623,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.cache_reg = svm_cache_reg,
.get_rflags = svm_get_rflags,
.set_rflags = svm_set_rflags,
.get_if_flag = svm_get_if_flag,
.tlb_flush_all = svm_flush_tlb,
.tlb_flush_current = svm_flush_tlb,


@ -1363,6 +1363,11 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
vmx->emulation_required = vmx_emulation_required(vcpu);
}
static bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
{
return vmx_get_rflags(vcpu) & X86_EFLAGS_IF;
}
u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
{
u32 interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
@ -3959,8 +3964,7 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
if (pi_test_and_set_on(&vmx->pi_desc))
return 0;
if (vcpu != kvm_get_running_vcpu() &&
!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
kvm_vcpu_kick(vcpu);
return 0;
@ -5877,18 +5881,14 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
vmx_flush_pml_buffer(vcpu);
/*
* We should never reach this point with a pending nested VM-Enter, and
* more specifically emulation of L2 due to invalid guest state (see
* below) should never happen as that means we incorrectly allowed a
* nested VM-Enter with an invalid vmcs12.
* KVM should never reach this point with a pending nested VM-Enter.
* More specifically, short-circuiting VM-Entry to emulate L2 due to
* invalid guest state should never happen as that means KVM knowingly
* allowed a nested VM-Enter with an invalid vmcs12. More below.
*/
if (KVM_BUG_ON(vmx->nested.nested_run_pending, vcpu->kvm))
return -EIO;
/* If guest state is invalid, start emulating */
if (vmx->emulation_required)
return handle_invalid_guest_state(vcpu);
if (is_guest_mode(vcpu)) {
/*
* PML is never enabled when running L2, bail immediately if a
@ -5910,10 +5910,30 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
*/
nested_mark_vmcs12_pages_dirty(vcpu);
/*
* Synthesize a triple fault if L2 state is invalid. In normal
* operation, nested VM-Enter rejects any attempt to enter L2
* with invalid state. However, those checks are skipped if
* state is being stuffed via RSM or KVM_SET_NESTED_STATE. If
* L2 state is invalid, it means either L1 modified SMRAM state
* or userspace provided bad state. Synthesize TRIPLE_FAULT as
* doing so is architecturally allowed in the RSM case, and is
* the least awful solution for the userspace case without
* risking false positives.
*/
if (vmx->emulation_required) {
nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
return 1;
}
if (nested_vmx_reflect_vmexit(vcpu))
return 1;
}
/* If guest state is invalid, start emulating. L2 is handled above. */
if (vmx->emulation_required)
return handle_invalid_guest_state(vcpu);
if (exit_reason.failed_vmentry) {
dump_vmcs(vcpu);
vcpu->run->exit_reason = KVM_EXIT_FAIL_ENTRY;
@ -6608,9 +6628,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
* consistency check VM-Exit due to invalid guest state and bail.
*/
if (unlikely(vmx->emulation_required)) {
/* We don't emulate invalid state of a nested guest */
vmx->fail = is_guest_mode(vcpu);
vmx->fail = 0;
vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
vmx->exit_reason.failed_vmentry = 1;
@ -7579,6 +7597,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
.cache_reg = vmx_cache_reg,
.get_rflags = vmx_get_rflags,
.set_rflags = vmx_set_rflags,
.get_if_flag = vmx_get_if_flag,
.tlb_flush_all = vmx_flush_tlb_all,
.tlb_flush_current = vmx_flush_tlb_current,


@ -1331,7 +1331,7 @@ static const u32 msrs_to_save_all[] = {
MSR_IA32_UMWAIT_CONTROL,
MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3,
MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
@ -3413,7 +3413,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (!msr_info->host_initiated)
return 1;
if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent))
if (kvm_get_msr_feature(&msr_ent))
return 1;
if (data & ~msr_ent.data)
return 1;
@ -9001,14 +9001,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
{
struct kvm_run *kvm_run = vcpu->run;
/*
* if_flag is obsolete and useless, so do not bother
* setting it for SEV-ES guests. Userspace can just
* use kvm_run->ready_for_interrupt_injection.
*/
kvm_run->if_flag = !vcpu->arch.guest_state_protected
&& (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu);
kvm_run->cr8 = kvm_get_cr8(vcpu);
kvm_run->apic_base = kvm_get_apic_base(vcpu);


@ -1252,19 +1252,54 @@ st: if (is_imm8(insn->off))
case BPF_LDX | BPF_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
/* test src_reg, src_reg */
maybe_emit_mod(&prog, src_reg, src_reg, true); /* always 1 byte */
EMIT2(0x85, add_2reg(0xC0, src_reg, src_reg));
/* jne start_of_ldx */
EMIT2(X86_JNE, 0);
/* Though the verifier prevents negative insn->off in BPF_PROBE_MEM
* add abs(insn->off) to the limit to make sure that negative
* offset won't be an issue.
* insn->off is s16, so it won't affect valid pointers.
*/
u64 limit = TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off);
u8 *end_of_jmp1, *end_of_jmp2;
/* Conservatively check that src_reg + insn->off is a kernel address:
* 1. src_reg + insn->off >= limit
* 2. src_reg + insn->off doesn't become small positive.
* Cannot do src_reg + insn->off >= limit in one branch,
* since it needs two spare registers, but JIT has only one.
*/
/* movabsq r11, limit */
EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG));
EMIT((u32)limit, 4);
EMIT(limit >> 32, 4);
/* cmp src_reg, r11 */
maybe_emit_mod(&prog, src_reg, AUX_REG, true);
EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG));
/* if unsigned '<' goto end_of_jmp2 */
EMIT2(X86_JB, 0);
end_of_jmp1 = prog;
/* mov r11, src_reg */
emit_mov_reg(&prog, true, AUX_REG, src_reg);
/* add r11, insn->off */
maybe_emit_1mod(&prog, AUX_REG, true);
EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off);
/* jmp if not carry to start_of_ldx
* Otherwise ERR_PTR(-EINVAL) + 128 will be the user addr
* that has to be rejected.
*/
EMIT2(0x73 /* JNC */, 0);
end_of_jmp2 = prog;
/* xor dst_reg, dst_reg */
emit_mov_imm32(&prog, false, dst_reg, 0);
/* jmp byte_after_ldx */
EMIT2(0xEB, 0);
/* populate jmp_offset for JNE above */
temp[4] = prog - temp - 5 /* sizeof(test + jne) */;
/* populate jmp_offset for JB above to jump to xor dst_reg */
end_of_jmp1[-1] = end_of_jmp2 - end_of_jmp1;
/* populate jmp_offset for JNC above to jump to start_of_ldx */
start_of_ldx = prog;
end_of_jmp2[-1] = start_of_ldx - end_of_jmp2;
}
emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
@ -1305,7 +1340,7 @@ st: if (is_imm8(insn->off))
* End result: x86 insn "mov rbx, qword ptr [rax+0x14]"
* of 4 bytes will be ignored and rbx will be zero inited.
*/
ex->fixup = (prog - temp) | (reg2pt_regs[dst_reg] << 8);
ex->fixup = (prog - start_of_ldx) | (reg2pt_regs[dst_reg] << 8);
}
break;


@ -68,7 +68,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
"(__parainstructions|__alt_instructions)(_end)?|"
"(__iommu_table|__apicdrivers|__smp_locks)(_end)?|"
"__(start|end)_pci_.*|"
#if CONFIG_FW_LOADER_BUILTIN
#if CONFIG_FW_LOADER
"__(start|end)_builtin_fw|"
#endif
"__(start|stop)___ksymtab(_gpl)?|"


@ -2311,7 +2311,14 @@ static void ioc_timer_fn(struct timer_list *timer)
hwm = current_hweight_max(iocg);
new_hwi = hweight_after_donation(iocg, old_hwi, hwm,
usage, &now);
if (new_hwi < hwm) {
/*
* Donation calculation assumes hweight_after_donation
* to be positive, a condition that a donor w/ hwa < 2
* can't meet. Don't bother with donation if hwa is
* below 2. It's not gonna make a meaningful difference
* anyway.
*/
if (new_hwi < hwm && hwa >= 2) {
iocg->hweight_donating = hwa;
iocg->hweight_after_donation = new_hwi;
list_add(&iocg->surplus_list, &surpluses);


@ -41,8 +41,7 @@ obj-$(CONFIG_DMADEVICES) += dma/
# SOC specific infrastructure drivers.
obj-y += soc/
obj-$(CONFIG_VIRTIO) += virtio/
obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio/
obj-y += virtio/
obj-$(CONFIG_VDPA) += vdpa/
obj-$(CONFIG_XEN) += xen/


@ -671,7 +671,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
if (buffer->async_transaction) {
alloc->free_async_space += size + sizeof(struct binder_buffer);
alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
"%d: binder_free_buf size %zd async free %zd\n",


@ -2859,8 +2859,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
goto invalid_fld;
}
if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)
tf->protocol = ATA_PROT_NCQ_NODATA;
if ((cdb[2 + cdb_offset] & 0x3) == 0) {
/*
* When T_LENGTH is zero (No data is transferred), dir should
* be DMA_NONE.
*/
if (scmd->sc_data_direction != DMA_NONE) {
fp = 2 + cdb_offset;
goto invalid_fld;
}
if (ata_is_ncq(tf->protocol))
tf->protocol = ATA_PROT_NCQ_NODATA;
}
/* enable LBA */
tf->flags |= ATA_TFLAG_LBA;


@ -37,7 +37,7 @@ struct charlcd_priv {
bool must_clear;
/* contains the LCD config state */
unsigned long int flags;
unsigned long flags;
/* Current escape sequence and it's length or -1 if outside */
struct {
@ -578,6 +578,9 @@ static int charlcd_init(struct charlcd *lcd)
* Since charlcd_init_display() needs to write data, we have to
* enable mark the LCD initialized just before.
*/
if (WARN_ON(!lcd->ops->init_display))
return -EINVAL;
ret = lcd->ops->init_display(lcd);
if (ret)
return ret;


@ -1902,7 +1902,7 @@ int dpm_prepare(pm_message_t state)
device_block_probing();
mutex_lock(&dpm_list_mtx);
while (!list_empty(&dpm_list)) {
while (!list_empty(&dpm_list) && !error) {
struct device *dev = to_device(dpm_list.next);
get_device(dev);


@ -1512,9 +1512,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
unsigned long flags;
struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
struct blkfront_info *info = rinfo->dev_info;
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
return IRQ_HANDLED;
}
spin_lock_irqsave(&rinfo->ring_lock, flags);
again:
@ -1530,6 +1533,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
unsigned long id;
unsigned int op;
eoiflag = 0;
RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
id = bret.id;
@ -1646,6 +1651,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
xen_irq_lateeoi(irq, eoiflag);
return IRQ_HANDLED;
err:
@ -1653,6 +1660,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
/* No EOI in order to avoid further interrupts. */
pr_alert("%s disabled for further use\n", info->gd->disk_name);
return IRQ_HANDLED;
}
@ -1692,8 +1701,8 @@ static int setup_blkring(struct xenbus_device *dev,
if (err)
goto fail;
err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
"blkif", rinfo);
err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
0, "blkif", rinfo);
if (err <= 0) {
xenbus_dev_fatal(dev, err,
"bind_evtchn_to_irqhandler failed");


@ -687,11 +687,11 @@ err_clk_disable:
static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb)
{
/* Keep the clock and PM reference counts consistent. */
if (pm_runtime_status_suspended(rsb->dev))
pm_runtime_resume(rsb->dev);
reset_control_assert(rsb->rstc);
clk_disable_unprepare(rsb->clk);
/* Keep the clock and PM reference counts consistent. */
if (!pm_runtime_status_suspended(rsb->dev))
clk_disable_unprepare(rsb->clk);
}
static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)


@ -3031,7 +3031,7 @@ cleanup_bmc_device(struct kref *ref)
* with removing the device attributes while reading a device
* attribute.
*/
schedule_work(&bmc->remove_work);
queue_work(remove_work_wq, &bmc->remove_work);
}
/*
@ -5392,22 +5392,27 @@ static int ipmi_init_msghandler(void)
if (initialized)
goto out;
init_srcu_struct(&ipmi_interfaces_srcu);
rv = init_srcu_struct(&ipmi_interfaces_srcu);
if (rv)
goto out;
remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
if (!remove_work_wq) {
pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
rv = -ENOMEM;
goto out_wq;
}
timer_setup(&ipmi_timer, ipmi_timeout, 0);
mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
if (!remove_work_wq) {
pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
rv = -ENOMEM;
goto out;
}
initialized = true;
out_wq:
if (rv)
cleanup_srcu_struct(&ipmi_interfaces_srcu);
out:
mutex_unlock(&ipmi_interfaces_mutex);
return rv;


@ -1659,6 +1659,9 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
}
}
ssif_info->client = client;
i2c_set_clientdata(client, ssif_info);
rv = ssif_check_and_remove(client, ssif_info);
/* If rv is 0 and addr source is not SI_ACPI, continue probing */
if (!rv && ssif_info->addr_source == SI_ACPI) {
@ -1679,9 +1682,6 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
ipmi_addr_src_to_str(ssif_info->addr_source),
client->addr, client->adapter->name, slave_addr);
ssif_info->client = client;
i2c_set_clientdata(client, ssif_info);
/* Now check for system interface capabilities */
msg[0] = IPMI_NETFN_APP_REQUEST << 2;
msg[1] = IPMI_GET_SYSTEM_INTERFACE_CAPABILITIES_CMD;
@ -1881,6 +1881,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
dev_err(&ssif_info->client->dev,
"Unable to start IPMI SSIF: %d\n", rv);
i2c_set_clientdata(client, NULL);
kfree(ssif_info);
}
kfree(resp);


@ -3418,6 +3418,14 @@ static int __clk_core_init(struct clk_core *core)
clk_prepare_lock();
/*
* Set hw->core after grabbing the prepare_lock to synchronize with
* callers of clk_core_fill_parent_index() where we treat hw->core
* being NULL as the clk not being registered yet. This is crucial so
* that clks aren't parented until their parent is fully registered.
*/
core->hw->core = core;
ret = clk_pm_runtime_get(core);
if (ret)
goto unlock;
@ -3582,8 +3590,10 @@ static int __clk_core_init(struct clk_core *core)
out:
clk_pm_runtime_put(core);
unlock:
if (ret)
if (ret) {
hlist_del_init(&core->child_node);
core->hw->core = NULL;
}
clk_prepare_unlock();
@ -3847,7 +3857,6 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
core->num_parents = init->num_parents;
core->min_rate = 0;
core->max_rate = ULONG_MAX;
hw->core = core;
ret = clk_core_populate_parent_map(core, init);
if (ret)
@ -3865,7 +3874,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
goto fail_create_clk;
}
clk_core_link_consumer(hw->core, hw->clk);
clk_core_link_consumer(core, hw->clk);
ret = __clk_core_init(core);
if (!ret)


@ -211,6 +211,12 @@ static u32 uof_get_ae_mask(u32 obj_num)
return adf_4xxx_fw_config[obj_num].ae_mask;
}
static u32 get_vf2pf_sources(void __iomem *pmisc_addr)
{
/* For the moment do not report vf2pf sources */
return 0;
}
void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &adf_4xxx_class;
@ -254,6 +260,7 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data)
hw_data->set_msix_rttable = set_msix_default_rttable;
hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
hw_data->enable_pfvf_comms = pfvf_comms_disabled;
hw_data->get_vf2pf_sources = get_vf2pf_sources;
hw_data->disable_iov = adf_disable_sriov;
hw_data->min_iov_compat_ver = ADF_PFVF_COMPAT_THIS_VERSION;


@ -373,7 +373,7 @@ static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
struct axi_dma_desc *first)
{
u32 priority = chan->chip->dw->hdata->priority[chan->id];
struct axi_dma_chan_config config;
struct axi_dma_chan_config config = {};
u32 irq_mask;
u8 lms = 0; /* Select AXI0 master for LLI fetching */
@ -391,7 +391,7 @@ static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
config.tt_fc = DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC;
config.prior = priority;
config.hs_sel_dst = DWAXIDMAC_HS_SEL_HW;
config.hs_sel_dst = DWAXIDMAC_HS_SEL_HW;
config.hs_sel_src = DWAXIDMAC_HS_SEL_HW;
switch (chan->direction) {
case DMA_MEM_TO_DEV:
dw_axi_dma_set_byte_halfword(chan, true);
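
The first hunk above zero-initializes the on-stack channel config, so any field the function never assigns is programmed into the hardware as 0 instead of whatever happened to be on the stack. A small, hypothetical user-space illustration of the same C idiom (struct and field names are made up):

#include <stdio.h>

struct chan_config {
        unsigned int prior;
        unsigned int hs_sel_src;
        unsigned int hs_sel_dst;
};

int main(void)
{
        struct chan_config zeroed = {};   /* GNU C empty initializer: every member is 0 */

        printf("prior=%u hs_sel_src=%u hs_sel_dst=%u\n",
               zeroed.prior, zeroed.hs_sel_src, zeroed.hs_sel_dst);
        return 0;
}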


@ -187,17 +187,9 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
/* DMA configuration */
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
if (!err) {
if (err) {
pci_err(pdev, "DMA mask 64 set failed\n");
return err;
} else {
pci_err(pdev, "DMA mask 64 set failed\n");
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (err) {
pci_err(pdev, "DMA mask 32 set failed\n");
return err;
}
}
/* Data structure allocation */
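
The simplification above drops the old 32-bit fallback: with the current DMA mapping core a 64-bit mask request is not expected to fail merely because the platform supports narrower addressing, so only a genuine error is worth reporting. A hedged sketch of the resulting probe-time idiom (my_probe is illustrative):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int my_probe(struct pci_dev *pdev)
{
        int err;

        /* Ask for the widest mask; the core limits it if the platform is narrower. */
        err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
        if (err) {
                pci_err(pdev, "DMA mask 64 set failed\n");
                return err;
        }
        return 0;
}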


@ -137,10 +137,10 @@ halt:
INIT_WORK(&idxd->work, idxd_device_reinit);
queue_work(idxd->wq, &idxd->work);
} else {
spin_lock(&idxd->dev_lock);
idxd->state = IDXD_DEV_HALTED;
idxd_wqs_quiesce(idxd);
idxd_wqs_unmap_portal(idxd);
spin_lock(&idxd->dev_lock);
idxd_device_clear_state(idxd);
dev_err(&idxd->pdev->dev,
"idxd halted, need %s.\n",


@ -106,6 +106,7 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
{
struct idxd_desc *d, *t, *found = NULL;
struct llist_node *head;
LIST_HEAD(flist);
desc->completion->status = IDXD_COMP_DESC_ABORT;
/*
@ -120,7 +121,11 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
found = desc;
continue;
}
list_add_tail(&desc->list, &ie->work_list);
if (d->completion->status)
list_add_tail(&d->list, &flist);
else
list_add_tail(&d->list, &ie->work_list);
}
}
@ -130,6 +135,17 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
if (found)
complete_desc(found, IDXD_COMPLETE_ABORT);
/*
* complete_desc() will return desc to allocator and the desc can be
* acquired by a different process and the desc->list can be modified.
* Delete desc from list so the list traversing does not get corrupted
* by the other process.
*/
list_for_each_entry_safe(d, t, &flist, list) {
list_del_init(&d->list);
complete_desc(d, IDXD_COMPLETE_NORMAL);
}
}
int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
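
The comment added in the hunk explains the ordering: completing a descriptor hands it back to the allocator, so its list node must already be off the shared list before complete_desc() runs. A minimal, hypothetical sketch of that move-to-private-list pattern using the kernel list API (my_desc and my_flush are illustrative):

#include <linux/list.h>

struct my_desc {
        struct list_head list;
        bool done;
};

static void my_flush(struct list_head *shared)
{
        struct my_desc *d, *t;
        LIST_HEAD(flist);                       /* private list nobody else can see */

        list_for_each_entry_safe(d, t, shared, list)
                if (d->done)
                        list_move_tail(&d->list, &flist);

        list_for_each_entry_safe(d, t, &flist, list) {
                list_del_init(&d->list);
                /* safe to complete/free d here; it is no longer on the shared list */
        }
}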


@ -874,4 +874,4 @@ MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver");
MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@st.com>");
MODULE_AUTHOR("Peter Griffin <peter.griffin@linaro.org>");
MODULE_ALIAS("platform: " DRIVER_NAME);
MODULE_ALIAS("platform:" DRIVER_NAME);


@ -4534,45 +4534,60 @@ static int udma_setup_resources(struct udma_dev *ud)
rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
if (IS_ERR(rm_res)) {
bitmap_zero(ud->tchan_map, ud->tchan_cnt);
irq_res.sets = 1;
} else {
bitmap_fill(ud->tchan_map, ud->tchan_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->tchan_map,
&rm_res->desc[i], "tchan");
irq_res.sets = rm_res->sets;
}
irq_res.sets = rm_res->sets;
/* rchan and matching default flow ranges */
rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
if (IS_ERR(rm_res)) {
bitmap_zero(ud->rchan_map, ud->rchan_cnt);
irq_res.sets++;
} else {
bitmap_fill(ud->rchan_map, ud->rchan_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->rchan_map,
&rm_res->desc[i], "rchan");
irq_res.sets += rm_res->sets;
}
irq_res.sets += rm_res->sets;
irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL);
if (!irq_res.desc)
return -ENOMEM;
rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start;
irq_res.desc[i].num = rm_res->desc[i].num;
irq_res.desc[i].start_sec = rm_res->desc[i].start_sec;
irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
if (IS_ERR(rm_res)) {
irq_res.desc[0].start = 0;
irq_res.desc[0].num = ud->tchan_cnt;
i = 1;
} else {
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start;
irq_res.desc[i].num = rm_res->desc[i].num;
irq_res.desc[i].start_sec = rm_res->desc[i].start_sec;
irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
}
}
rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
for (j = 0; j < rm_res->sets; j++, i++) {
if (rm_res->desc[j].num) {
irq_res.desc[i].start = rm_res->desc[j].start +
ud->soc_data->oes.udma_rchan;
irq_res.desc[i].num = rm_res->desc[j].num;
}
if (rm_res->desc[j].num_sec) {
irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
ud->soc_data->oes.udma_rchan;
irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
if (IS_ERR(rm_res)) {
irq_res.desc[i].start = 0;
irq_res.desc[i].num = ud->rchan_cnt;
} else {
for (j = 0; j < rm_res->sets; j++, i++) {
if (rm_res->desc[j].num) {
irq_res.desc[i].start = rm_res->desc[j].start +
ud->soc_data->oes.udma_rchan;
irq_res.desc[i].num = rm_res->desc[j].num;
}
if (rm_res->desc[j].num_sec) {
irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
ud->soc_data->oes.udma_rchan;
irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
}
}
}
ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
@ -4690,14 +4705,15 @@ static int bcdma_setup_resources(struct udma_dev *ud)
rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN];
if (IS_ERR(rm_res)) {
bitmap_zero(ud->bchan_map, ud->bchan_cnt);
irq_res.sets++;
} else {
bitmap_fill(ud->bchan_map, ud->bchan_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->bchan_map,
&rm_res->desc[i],
"bchan");
irq_res.sets += rm_res->sets;
}
irq_res.sets += rm_res->sets;
}
/* tchan ranges */
@ -4705,14 +4721,15 @@ static int bcdma_setup_resources(struct udma_dev *ud)
rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
if (IS_ERR(rm_res)) {
bitmap_zero(ud->tchan_map, ud->tchan_cnt);
irq_res.sets += 2;
} else {
bitmap_fill(ud->tchan_map, ud->tchan_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->tchan_map,
&rm_res->desc[i],
"tchan");
irq_res.sets += rm_res->sets * 2;
}
irq_res.sets += rm_res->sets * 2;
}
/* rchan ranges */
@ -4720,47 +4737,72 @@ static int bcdma_setup_resources(struct udma_dev *ud)
rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
if (IS_ERR(rm_res)) {
bitmap_zero(ud->rchan_map, ud->rchan_cnt);
irq_res.sets += 2;
} else {
bitmap_fill(ud->rchan_map, ud->rchan_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->rchan_map,
&rm_res->desc[i],
"rchan");
irq_res.sets += rm_res->sets * 2;
}
irq_res.sets += rm_res->sets * 2;
}
irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL);
if (!irq_res.desc)
return -ENOMEM;
if (ud->bchan_cnt) {
rm_res = tisci_rm->rm_ranges[RM_RANGE_BCHAN];
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start +
oes->bcdma_bchan_ring;
irq_res.desc[i].num = rm_res->desc[i].num;
if (IS_ERR(rm_res)) {
irq_res.desc[0].start = oes->bcdma_bchan_ring;
irq_res.desc[0].num = ud->bchan_cnt;
i = 1;
} else {
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start +
oes->bcdma_bchan_ring;
irq_res.desc[i].num = rm_res->desc[i].num;
}
}
}
if (ud->tchan_cnt) {
rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
for (j = 0; j < rm_res->sets; j++, i += 2) {
irq_res.desc[i].start = rm_res->desc[j].start +
oes->bcdma_tchan_data;
irq_res.desc[i].num = rm_res->desc[j].num;
if (IS_ERR(rm_res)) {
irq_res.desc[i].start = oes->bcdma_tchan_data;
irq_res.desc[i].num = ud->tchan_cnt;
irq_res.desc[i + 1].start = oes->bcdma_tchan_ring;
irq_res.desc[i + 1].num = ud->tchan_cnt;
i += 2;
} else {
for (j = 0; j < rm_res->sets; j++, i += 2) {
irq_res.desc[i].start = rm_res->desc[j].start +
oes->bcdma_tchan_data;
irq_res.desc[i].num = rm_res->desc[j].num;
irq_res.desc[i + 1].start = rm_res->desc[j].start +
oes->bcdma_tchan_ring;
irq_res.desc[i + 1].num = rm_res->desc[j].num;
irq_res.desc[i + 1].start = rm_res->desc[j].start +
oes->bcdma_tchan_ring;
irq_res.desc[i + 1].num = rm_res->desc[j].num;
}
}
}
if (ud->rchan_cnt) {
rm_res = tisci_rm->rm_ranges[RM_RANGE_RCHAN];
for (j = 0; j < rm_res->sets; j++, i += 2) {
irq_res.desc[i].start = rm_res->desc[j].start +
oes->bcdma_rchan_data;
irq_res.desc[i].num = rm_res->desc[j].num;
if (IS_ERR(rm_res)) {
irq_res.desc[i].start = oes->bcdma_rchan_data;
irq_res.desc[i].num = ud->rchan_cnt;
irq_res.desc[i + 1].start = oes->bcdma_rchan_ring;
irq_res.desc[i + 1].num = ud->rchan_cnt;
i += 2;
} else {
for (j = 0; j < rm_res->sets; j++, i += 2) {
irq_res.desc[i].start = rm_res->desc[j].start +
oes->bcdma_rchan_data;
irq_res.desc[i].num = rm_res->desc[j].num;
irq_res.desc[i + 1].start = rm_res->desc[j].start +
oes->bcdma_rchan_ring;
irq_res.desc[i + 1].num = rm_res->desc[j].num;
irq_res.desc[i + 1].start = rm_res->desc[j].start +
oes->bcdma_rchan_ring;
irq_res.desc[i + 1].num = rm_res->desc[j].num;
}
}
}
@ -4858,39 +4900,54 @@ static int pktdma_setup_resources(struct udma_dev *ud)
if (IS_ERR(rm_res)) {
/* all rflows are assigned exclusively to Linux */
bitmap_zero(ud->rflow_in_use, ud->rflow_cnt);
irq_res.sets = 1;
} else {
bitmap_fill(ud->rflow_in_use, ud->rflow_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->rflow_in_use,
&rm_res->desc[i], "rflow");
irq_res.sets = rm_res->sets;
}
irq_res.sets = rm_res->sets;
/* tflow ranges */
rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW];
if (IS_ERR(rm_res)) {
/* all tflows are assigned exclusively to Linux */
bitmap_zero(ud->tflow_map, ud->tflow_cnt);
irq_res.sets++;
} else {
bitmap_fill(ud->tflow_map, ud->tflow_cnt);
for (i = 0; i < rm_res->sets; i++)
udma_mark_resource_ranges(ud, ud->tflow_map,
&rm_res->desc[i], "tflow");
irq_res.sets += rm_res->sets;
}
irq_res.sets += rm_res->sets;
irq_res.desc = kcalloc(irq_res.sets, sizeof(*irq_res.desc), GFP_KERNEL);
if (!irq_res.desc)
return -ENOMEM;
rm_res = tisci_rm->rm_ranges[RM_RANGE_TFLOW];
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start +
oes->pktdma_tchan_flow;
irq_res.desc[i].num = rm_res->desc[i].num;
if (IS_ERR(rm_res)) {
irq_res.desc[0].start = oes->pktdma_tchan_flow;
irq_res.desc[0].num = ud->tflow_cnt;
i = 1;
} else {
for (i = 0; i < rm_res->sets; i++) {
irq_res.desc[i].start = rm_res->desc[i].start +
oes->pktdma_tchan_flow;
irq_res.desc[i].num = rm_res->desc[i].num;
}
}
rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW];
for (j = 0; j < rm_res->sets; j++, i++) {
irq_res.desc[i].start = rm_res->desc[j].start +
oes->pktdma_rchan_flow;
irq_res.desc[i].num = rm_res->desc[j].num;
if (IS_ERR(rm_res)) {
irq_res.desc[i].start = oes->pktdma_rchan_flow;
irq_res.desc[i].num = ud->rflow_cnt;
} else {
for (j = 0; j < rm_res->sets; j++, i++) {
irq_res.desc[i].start = rm_res->desc[j].start +
oes->pktdma_rchan_flow;
irq_res.desc[i].num = rm_res->desc[j].num;
}
}
ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
kfree(irq_res.desc);


@ -16,7 +16,6 @@ struct scpi_pm_domain {
struct generic_pm_domain genpd;
struct scpi_ops *ops;
u32 domain;
char name[30];
};
/*
@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev)
scpi_pd->domain = i;
scpi_pd->ops = scpi_ops;
sprintf(scpi_pd->name, "%pOFn.%d", np, i);
scpi_pd->genpd.name = scpi_pd->name;
scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL,
"%pOFn.%d", np, i);
if (!scpi_pd->genpd.name) {
dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n",
np, i);
continue;
}
scpi_pd->genpd.power_off = scpi_pd_power_off;
scpi_pd->genpd.power_on = scpi_pd_power_on;
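
The change above replaces a fixed 30-byte name buffer with a device-managed, dynamically sized string, so a long device-tree node name can no longer overflow it and the allocation is released together with the device. A short, hypothetical sketch of the idiom (make_name is illustrative):

#include <linux/device.h>
#include <linux/slab.h>

static const char *make_name(struct device *dev, int i)
{
        /* Sized to fit, freed automatically when the device is unbound.
         * Callers must still check for NULL on allocation failure.
         */
        return devm_kasprintf(dev, GFP_KERNEL, "domain.%d", i);
}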


@ -77,13 +77,14 @@ static const char *get_filename(struct tegra_bpmp *bpmp,
const char *root_path, *filename = NULL;
char *root_path_buf;
size_t root_len;
size_t root_path_buf_len = 512;
root_path_buf = kzalloc(512, GFP_KERNEL);
root_path_buf = kzalloc(root_path_buf_len, GFP_KERNEL);
if (!root_path_buf)
goto out;
root_path = dentry_path(bpmp->debugfs_mirror, root_path_buf,
sizeof(root_path_buf));
root_path_buf_len);
if (IS_ERR(root_path))
goto out;
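
The bug fixed above is the classic sizeof-of-a-pointer mistake: root_path_buf is a char *, so sizeof(root_path_buf) evaluates to the pointer size (8 on 64-bit), not the 512 bytes that were allocated. A tiny, hypothetical user-space demonstration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        char stack_buf[512];
        char *heap_buf = malloc(512);

        printf("sizeof(stack_buf) = %zu\n", sizeof(stack_buf)); /* 512: array type */
        printf("sizeof(heap_buf)  = %zu\n", sizeof(heap_buf));  /* 8 on 64-bit: pointer type */

        free(heap_buf);
        return 0;
}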


@ -46,6 +46,7 @@
struct dln2_gpio {
struct platform_device *pdev;
struct gpio_chip gpio;
struct irq_chip irqchip;
/*
* Cache pin direction to save us one transfer, since the hardware has
@ -383,15 +384,6 @@ static void dln2_irq_bus_unlock(struct irq_data *irqd)
mutex_unlock(&dln2->irq_lock);
}
static struct irq_chip dln2_gpio_irqchip = {
.name = "dln2-irq",
.irq_mask = dln2_irq_mask,
.irq_unmask = dln2_irq_unmask,
.irq_set_type = dln2_irq_set_type,
.irq_bus_lock = dln2_irq_bus_lock,
.irq_bus_sync_unlock = dln2_irq_bus_unlock,
};
static void dln2_gpio_event(struct platform_device *pdev, u16 echo,
const void *data, int len)
{
@ -473,8 +465,15 @@ static int dln2_gpio_probe(struct platform_device *pdev)
dln2->gpio.direction_output = dln2_gpio_direction_output;
dln2->gpio.set_config = dln2_gpio_set_config;
dln2->irqchip.name = "dln2-irq",
dln2->irqchip.irq_mask = dln2_irq_mask,
dln2->irqchip.irq_unmask = dln2_irq_unmask,
dln2->irqchip.irq_set_type = dln2_irq_set_type,
dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock,
dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock,
girq = &dln2->gpio.irq;
girq->chip = &dln2_gpio_irqchip;
girq->chip = &dln2->irqchip;
/* The event comes from the outside so no parent handler */
girq->parent_handler = NULL;
girq->num_parents = 0;
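
The hunk above stops using one file-scope static struct irq_chip for every probed device and instead embeds a copy in the per-device structure, so each GPIO chip instance hands gpiolib its own irq_chip object. A hedged sketch of that pattern (the my_gpio names are illustrative):

#include <linux/gpio/driver.h>
#include <linux/irq.h>

struct my_gpio {
        struct gpio_chip gpio;
        struct irq_chip irqchip;        /* one irq_chip per device instance */
};

static void my_fill_irqchip(struct my_gpio *g)
{
        g->irqchip.name = "my-gpio-irq";
        /* .irq_mask, .irq_unmask, .irq_set_type, ... filled in the same way */
        g->gpio.irq.chip = &g->irqchip; /* gpiolib gets this instance's chip */
}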


@ -100,11 +100,7 @@ static int _virtio_gpio_req(struct virtio_gpio *vgpio, u16 type, u16 gpio,
virtqueue_kick(vgpio->request_vq);
mutex_unlock(&vgpio->lock);
if (!wait_for_completion_timeout(&line->completion, HZ)) {
dev_err(dev, "GPIO operation timed out\n");
ret = -ETIMEDOUT;
goto out;
}
wait_for_completion(&line->completion);
if (unlikely(res->status != VIRTIO_GPIO_STATUS_OK)) {
dev_err(dev, "GPIO request failed: %d\n", gpio);


@ -3166,6 +3166,12 @@ static void amdgpu_device_detect_sriov_bios(struct amdgpu_device *adev)
bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
{
switch (asic_type) {
#ifdef CONFIG_DRM_AMDGPU_SI
case CHIP_HAINAN:
#endif
case CHIP_TOPAZ:
/* chips with no display hardware */
return false;
#if defined(CONFIG_DRM_AMD_DC)
case CHIP_TAHITI:
case CHIP_PITCAIRN:
@ -4461,7 +4467,7 @@ int amdgpu_device_mode1_reset(struct amdgpu_device *adev)
int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
struct amdgpu_reset_context *reset_context)
{
int i, j, r = 0;
int i, r = 0;
struct amdgpu_job *job = NULL;
bool need_full_reset =
test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
@ -4483,15 +4489,8 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
/*clear job fence from fence drv to avoid force_completion
*leave NULL and vm flush fence in fence drv */
for (j = 0; j <= ring->fence_drv.num_fences_mask; j++) {
struct dma_fence *old, **ptr;
amdgpu_fence_driver_clear_job_fences(ring);
ptr = &ring->fence_drv.fences[j];
old = rcu_dereference_protected(*ptr, 1);
if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) {
RCU_INIT_POINTER(*ptr, NULL);
}
}
/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
amdgpu_fence_driver_force_completion(ring);
}


@ -526,10 +526,15 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
}
}
union gc_info {
struct gc_info_v1_0 v1;
struct gc_info_v2_0 v2;
};
int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
{
struct binary_header *bhdr;
struct gc_info_v1_0 *gc_info;
union gc_info *gc_info;
if (!adev->mman.discovery_bin) {
DRM_ERROR("ip discovery uninitialized\n");
@ -537,28 +542,55 @@ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
}
bhdr = (struct binary_header *)adev->mman.discovery_bin;
gc_info = (struct gc_info_v1_0 *)(adev->mman.discovery_bin +
gc_info = (union gc_info *)(adev->mman.discovery_bin +
le16_to_cpu(bhdr->table_list[GC].offset));
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->gc_num_se);
adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->gc_num_wgp0_per_sa) +
le32_to_cpu(gc_info->gc_num_wgp1_per_sa));
adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->gc_num_sa_per_se);
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->gc_num_rb_per_se);
adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->gc_num_gl2c);
adev->gfx.config.max_gprs = le32_to_cpu(gc_info->gc_num_gprs);
adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->gc_num_max_gs_thds);
adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->gc_gs_table_depth);
adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->gc_gsprim_buff_depth);
adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->gc_double_offchip_lds_buffer);
adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->gc_wave_size);
adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->gc_max_waves_per_simd);
adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->gc_max_scratch_slots_per_cu);
adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->gc_lds_size);
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->gc_num_sc_per_se) /
le32_to_cpu(gc_info->gc_num_sa_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->gc_num_packer_per_sc);
switch (gc_info->v1.header.version_major) {
case 1:
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v1.gc_num_se);
adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->v1.gc_num_wgp0_per_sa) +
le32_to_cpu(gc_info->v1.gc_num_wgp1_per_sa));
adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v1.gc_num_rb_per_se);
adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v1.gc_num_gl2c);
adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v1.gc_num_gprs);
adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v1.gc_num_max_gs_thds);
adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v1.gc_gs_table_depth);
adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v1.gc_gsprim_buff_depth);
adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v1.gc_double_offchip_lds_buffer);
adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v1.gc_wave_size);
adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v1.gc_max_waves_per_simd);
adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v1.gc_max_scratch_slots_per_cu);
adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v1.gc_lds_size);
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v1.gc_num_sc_per_se) /
le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v1.gc_num_packer_per_sc);
break;
case 2:
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v2.gc_num_se);
adev->gfx.config.max_cu_per_sh = le32_to_cpu(gc_info->v2.gc_num_cu_per_sh);
adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v2.gc_num_rb_per_se);
adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v2.gc_num_tccs);
adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v2.gc_num_gprs);
adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v2.gc_num_max_gs_thds);
adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v2.gc_gs_table_depth);
adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v2.gc_gsprim_buff_depth);
adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v2.gc_double_offchip_lds_buffer);
adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v2.gc_wave_size);
adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v2.gc_max_waves_per_simd);
adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v2.gc_max_scratch_slots_per_cu);
adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v2.gc_lds_size);
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v2.gc_num_sc_per_se) /
le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v2.gc_num_packer_per_sc);
break;
default:
dev_err(adev->dev,
"Unhandled GC info table %d.%d\n",
gc_info->v1.header.version_major,
gc_info->v1.header.version_minor);
return -EINVAL;
}
return 0;
}
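
The rework above reads the discovery blob through a union of the v1 and v2 layouts and dispatches on the shared header's major version. A small, hypothetical user-space illustration of that versioned-union technique:

#include <stdio.h>
#include <stdint.h>

struct hdr { uint32_t version_major, version_minor; };
struct info_v1 { struct hdr header; uint32_t num_se; };
struct info_v2 { struct hdr header; uint32_t num_se, num_sh_per_se; };

union info { struct info_v1 v1; struct info_v2 v2; };

static int parse(const union info *info)
{
        /* The header sits at the same offset in every version, so v1.header is safe. */
        switch (info->v1.header.version_major) {
        case 1:
                printf("v1: num_se=%u\n", info->v1.num_se);
                return 0;
        case 2:
                printf("v2: num_se=%u sh/se=%u\n", info->v2.num_se, info->v2.num_sh_per_se);
                return 0;
        default:
                fprintf(stderr, "unhandled info table %u.%u\n",
                        info->v1.header.version_major, info->v1.header.version_minor);
                return -1;
        }
}

int main(void)
{
        union info sample = { .v2 = { .header = { 2, 0 }, .num_se = 4, .num_sh_per_se = 2 } };

        return parse(&sample);
}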


@ -384,7 +384,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
struct amdgpu_vm_bo_base *bo_base;
int r;
if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
if (!bo->tbo.resource || bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
return;
r = ttm_bo_validate(&bo->tbo, &placement, &ctx);

Some files were not shown because too many files have changed in this diff.