Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Several conflicts here.

NFP driver bug fix adding nfp_netdev_is_nfp_repr() check to
nfp_fl_output() needed some adjustments because the code block is in
an else block now.

Parallel additions to net/pkt_cls.h and net/sch_generic.h needed to be merged together.

A bug fix in __tcp_retransmit_skb() conflicted with some of
the rbtree changes in net-next.

The tc action RCU callback fixes in 'net' had some overlap with some
of the recent tcf_block reworking.

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller 2017-10-30 14:10:01 +09:00
commit e1ea2f9856
252 changed files with 2334 additions and 2095 deletions
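The conflict resolution described above can be reproduced in miniature. The sketch below is purely illustrative (the repository, branch, and file names are made up): it builds a throwaway repo, merges a 'net' branch carrying a fix into a reworked mainline, hits the expected content conflict, and resolves it by hand before committing the merge.

```shell
# Hypothetical demo: repo, branch, and file names are illustrative only.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q merge-demo
cd merge-demo
git config user.email "demo@example.com"
git config user.name "Demo"
main=$(git symbolic-ref --short HEAD)   # default branch name, whatever git chose

echo "shared code" > driver.c
git add driver.c
git commit -qm "base"

# 'net' carries a bug fix touching the same lines net-next reworked
git checkout -qb net
echo "bug fix from net" > driver.c
git commit -qam "net: bug fix"

git checkout -q "$main"
echo "rework from net-next" > driver.c
git commit -qam "net-next: rework"

# The merge stops on the conflict; resolve by hand, then commit
git merge net || true
echo "rework from net-next, with bug fix folded in" > driver.c
git add driver.c
git commit -qm "Merge branch 'net'"
git log --oneline -1
```

The same pattern scales to the merge above: git stops at each conflicted file, the resolver edits it to combine both sides (e.g. re-nesting the nfp_fl_output() check inside the new else block), then `git add` and `git commit` record the resolution.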

@@ -14,3 +14,11 @@ Description:
 		Show or set the gain boost of the amp, from 0-31 range.
 		18 = indoors (default)
 		14 = outdoors
+
+What		/sys/bus/iio/devices/iio:deviceX/noise_level_tripped
+Date:		May 2017
+KernelVersion:	4.13
+Contact:	Matt Ranostay <matt.ranostay@konsulko.com>
+Description:
+		When 1 the noise level is over the trip level and not reporting
+		valid data

@@ -211,7 +211,9 @@ Description:
 		device, after it has been suspended at run time, from a resume
 		request to the moment the device will be ready to process I/O,
 		in microseconds.  If it is equal to 0, however, this means that
-		the PM QoS resume latency may be arbitrary.
+		the PM QoS resume latency may be arbitrary and the special value
+		"n/a" means that user space cannot accept any resume latency at
+		all for the given device.
 
 		Not all drivers support this attribute.  If it isn't supported,
 		it is not present.

@@ -16,6 +16,10 @@ Optional properties:
  - ams,tuning-capacitor-pf: Calibration tuning capacitor stepping
    value 0 - 120pF. This will require using the calibration data from
    the manufacturer.
+ - ams,nflwdth: Set the noise and watchdog threshold register on
+   startup. This will need to set according to the noise from the
+   MCU board, and possibly the local environment. Refer to the
+   datasheet for the threshold settings.
 
 Example:
@@ -27,4 +31,5 @@ as3935@0 {
 	interrupt-parent = <&gpio1>;
 	interrupts = <16 1>;
 	ams,tuning-capacitor-pf = <80>;
+	ams,nflwdth = <0x44>;
 };

@@ -99,7 +99,7 @@ Examples:
 		compatible = "arm,gic-v3-its";
 		msi-controller;
 		#msi-cells = <1>;
-		reg = <0x0 0x2c200000 0 0x200000>;
+		reg = <0x0 0x2c200000 0 0x20000>;
 	};
 };
 
@@ -124,14 +124,14 @@ Examples:
 		compatible = "arm,gic-v3-its";
 		msi-controller;
 		#msi-cells = <1>;
-		reg = <0x0 0x2c200000 0 0x200000>;
+		reg = <0x0 0x2c200000 0 0x20000>;
 	};
 
 	gic-its@2c400000 {
 		compatible = "arm,gic-v3-its";
 		msi-controller;
 		#msi-cells = <1>;
-		reg = <0x0 0x2c400000 0 0x200000>;
+		reg = <0x0 0x2c400000 0 0x20000>;
 	};
 
 	ppi-partitions {

@@ -1108,14 +1108,6 @@ When kbuild executes, the following steps are followed (roughly):
     ld
 	Link target. Often, LDFLAGS_$@ is used to set specific options to ld.
 
-    objcopy
-	Copy binary. Uses OBJCOPYFLAGS usually specified in
-	arch/$(ARCH)/Makefile.
-	OBJCOPYFLAGS_$@ may be used to set additional options.
-
-    gzip
-	Compress target. Use maximum compression to compress target.
-
     Example:
 	#arch/x86/boot/Makefile
 	LDFLAGS_bootsect := -Ttext 0x0 -s --oformat binary
@@ -1139,6 +1131,19 @@ When kbuild executes, the following steps are followed (roughly):
 	      resulting in the target file being recompiled for no
 	      obvious reason.
 
+    objcopy
+	Copy binary. Uses OBJCOPYFLAGS usually specified in
+	arch/$(ARCH)/Makefile.
+	OBJCOPYFLAGS_$@ may be used to set additional options.
+
+    gzip
+	Compress target. Use maximum compression to compress target.
+
+	Example:
+		#arch/x86/boot/compressed/Makefile
+		$(obj)/vmlinux.bin.gz: $(vmlinux.bin.all-y) FORCE
+			$(call if_changed,gzip)
+
     dtc
 	Create flattened device tree blob object suitable for linking
 	into vmlinux. Device tree blobs linked into vmlinux are placed
@@ -1219,7 +1224,7 @@ When kbuild executes, the following steps are followed (roughly):
 	that may be shared between individual architectures.
 	The recommended approach how to use a generic header file is
 	to list the file in the Kbuild file.
-	See "7.3 generic-y" for further info on syntax etc.
+	See "7.2 generic-y" for further info on syntax etc.
 
 --- 6.11 Post-link pass
@@ -1254,13 +1259,13 @@ A Kbuild file may be defined under
 arch/<arch>/include/asm/ to list asm files coming from asm-generic.
 See subsequent chapter for the syntax of the Kbuild file.
 
 --- 7.1 no-export-headers
 
 	no-export-headers is essentially used by include/uapi/linux/Kbuild to
 	avoid exporting specific headers (e.g. kvm.h) on architectures that do
 	not support it. It should be avoided as much as possible.
 
---- 7.3 generic-y
+--- 7.2 generic-y
 
 	If an architecture uses a verbatim copy of a header from
 	include/asm-generic then this is listed in the file
@@ -1287,7 +1292,7 @@ See subsequent chapter for the syntax of the Kbuild file.
 	Example: termios.h
 		#include <asm-generic/termios.h>
 
---- 7.4 generated-y
+--- 7.3 generated-y
 
 	If an architecture generates other header files alongside generic-y
 	wrappers, generated-y specifies them.
@@ -1299,7 +1304,7 @@ See subsequent chapter for the syntax of the Kbuild file.
 	#arch/x86/include/asm/Kbuild
 	generated-y += syscalls_32.h
 
---- 7.5 mandatory-y
+--- 7.4 mandatory-y
 
 	mandatory-y is essentially used by include/uapi/asm-generic/Kbuild.asm
 	to define the minimum set of headers that must be exported in

@@ -9220,7 +9220,6 @@ F:	include/linux/isicom.h
 
 MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
 M:	Bin Liu <b-liu@ti.com>
 L:	linux-usb@vger.kernel.org
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
 S:	Maintained
 F:	drivers/usb/musb/
@@ -10187,7 +10186,6 @@ F:	Documentation/parport*.txt
 
 PARAVIRT_OPS INTERFACE
 M:	Juergen Gross <jgross@suse.com>
-M:	Chris Wright <chrisw@sous-sol.org>
 M:	Alok Kataria <akataria@vmware.com>
 M:	Rusty Russell <rusty@rustcorp.com.au>
 L:	virtualization@lists.linux-foundation.org
@@ -10567,6 +10565,8 @@ M:	Peter Zijlstra <peterz@infradead.org>
 M:	Ingo Molnar <mingo@redhat.com>
 M:	Arnaldo Carvalho de Melo <acme@kernel.org>
 R:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
+R:	Jiri Olsa <jolsa@redhat.com>
+R:	Namhyung Kim <namhyung@kernel.org>
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
 S:	Supported

@@ -1,7 +1,7 @@
 VERSION = 4
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc7
 NAME = Fearless Coyote
 
 # *DOCUMENTATION*
@@ -130,8 +130,8 @@ endif
 ifneq ($(KBUILD_OUTPUT),)
 # check that the output directory actually exists
 saved-output := $(KBUILD_OUTPUT)
-$(shell [ -d $(KBUILD_OUTPUT) ] || mkdir -p $(KBUILD_OUTPUT))
-KBUILD_OUTPUT := $(realpath $(KBUILD_OUTPUT))
+KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \
+								&& /bin/pwd)
 $(if $(KBUILD_OUTPUT),, \
      $(error failed to create output directory "$(saved-output)"))
@@ -697,11 +697,11 @@ KBUILD_CFLAGS += $(stackp-flag)
 
 ifeq ($(cc-name),clang)
 ifneq ($(CROSS_COMPILE),)
-CLANG_TARGET	:= -target $(notdir $(CROSS_COMPILE:%-=%))
+CLANG_TARGET	:= --target=$(notdir $(CROSS_COMPILE:%-=%))
 GCC_TOOLCHAIN	:= $(realpath $(dir $(shell which $(LD)))/..)
 endif
 ifneq ($(GCC_TOOLCHAIN),)
-CLANG_GCC_TC	:= -gcc-toolchain $(GCC_TOOLCHAIN)
+CLANG_GCC_TC	:= --gcc-toolchain=$(GCC_TOOLCHAIN)
 endif
 KBUILD_CFLAGS	+= $(CLANG_TARGET) $(CLANG_GCC_TC)
 KBUILD_AFLAGS	+= $(CLANG_TARGET) $(CLANG_GCC_TC)
@@ -1399,7 +1399,7 @@ help:
 	@echo  '                    Build, install, and boot kernel before'
 	@echo  '                    running kselftest on it'
 	@echo  '  kselftest-clean - Remove all generated kselftest files'
-	@echo  '  kselftest-merge - Merge all the config dependencies of kselftest to existed'
+	@echo  '  kselftest-merge - Merge all the config dependencies of kselftest to existing'
 	@echo  '                    .config.'
 	@echo  ''
 	@echo  'Userspace tools targets:'
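The KBUILD_OUTPUT hunk above restores the `mkdir -p … && cd … && /bin/pwd` idiom in place of `$(realpath …)`. In plain shell, the restored idiom both creates the output directory and canonicalizes whatever path the user passed; the directory name below is illustrative only:

```shell
out="./demo-build/../demo-build/kbuild-out"    # messy relative path, as a user might pass
canon=$(mkdir -p "$out" && cd "$out" && /bin/pwd)
echo "$canon"                                  # an absolute path, however $out was spelled
```

GNU make's `$(realpath …)` expands to nothing for a path that does not yet exist, which appears to be why the Makefile reverted to creating the directory first and resolving it afterwards, erroring out only if that pipeline produced an empty result.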

@@ -181,10 +181,10 @@ alcor_init_irq(void)
  * comes in on.  This makes interrupt processing much easier.
  */
 
-static int __init
+static int
 alcor_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[7][5] __initdata = {
+	static char irq_tab[7][5] = {
 		/*INT      INTA     INTB     INTC     INTD */
 		/* note: IDSEL 17 is XLT only */
 		{16+13, 16+13, 16+13, 16+13, 16+13},	/* IdSel 17,  TULIP  */

@@ -173,10 +173,10 @@ pc164_init_irq(void)
  * because it is the Saturn IO (SIO) PCI/ISA Bridge Chip.
  */
 
-static inline int __init
+static inline int
 eb66p_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[5][5] __initdata = {
+	static char irq_tab[5][5] = {
 		/*INT  INTA  INTB  INTC   INTD */
 		{16+0, 16+0, 16+5,  16+9, 16+13},	/* IdSel 6,  slot 0, J25 */
 		{16+1, 16+1, 16+6, 16+10, 16+14},	/* IdSel 7,  slot 1, J26 */
@@ -203,10 +203,10 @@ eb66p_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
  * because it is the Saturn IO (SIO) PCI/ISA Bridge Chip.
  */
 
-static inline int __init
+static inline int
 cabriolet_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[5][5] __initdata = {
+	static char irq_tab[5][5] = {
 		/*INT   INTA  INTB  INTC   INTD */
 		{ 16+2, 16+2, 16+7, 16+11, 16+15}, /* IdSel 5,  slot 2, J21 */
 		{ 16+0, 16+0, 16+5,  16+9, 16+13}, /* IdSel 6,  slot 0, J19 */
@@ -287,10 +287,10 @@ cia_cab_init_pci(void)
  *
  */
 
-static inline int __init
+static inline int
 alphapc164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[7][5] __initdata = {
+	static char irq_tab[7][5] = {
 		/*INT   INTA  INTB   INTC   INTD */
 		{ 16+2, 16+2, 16+9,  16+13, 16+17}, /* IdSel  5, slot 2, J20 */
 		{ 16+0, 16+0, 16+7,  16+11, 16+15}, /* IdSel  6, slot 0, J29 */

@@ -356,7 +356,7 @@ clipper_init_irq(void)
  * 10  64 bit PCI option slot 3 (not bus 0)
  */
 
-static int __init
+static int
 isa_irq_fixup(const struct pci_dev *dev, int irq)
 {
 	u8 irq8;
@@ -372,10 +372,10 @@ isa_irq_fixup(const struct pci_dev *dev, int irq)
 	return irq8 & 0xf;
 }
 
-static int __init
+static int
 dp264_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[6][5] __initdata = {
+	static char irq_tab[6][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{    -1,    -1,    -1,    -1,    -1}, /* IdSel 5 ISA Bridge */
 		{ 16+ 3, 16+ 3, 16+ 2, 16+ 2, 16+ 2}, /* IdSel 6 SCSI builtin*/
@@ -394,10 +394,10 @@ dp264_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return isa_irq_fixup(dev, irq);
 }
 
-static int __init
+static int
 monet_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[13][5] __initdata = {
+	static char irq_tab[13][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{    45,    45,    45,    45,    45}, /* IdSel 3 21143 PCI1 */
 		{    -1,    -1,    -1,    -1,    -1}, /* IdSel 4 unused */
@@ -423,7 +423,7 @@ monet_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP);
 }
 
-static u8 __init
+static u8
 monet_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	struct pci_controller *hose = dev->sysdata;
@@ -456,10 +456,10 @@ monet_swizzle(struct pci_dev *dev, u8 *pinp)
 	return slot;
 }
 
-static int __init
+static int
 webbrick_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[13][5] __initdata = {
+	static char irq_tab[13][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{    -1,    -1,    -1,    -1,    -1}, /* IdSel 7 ISA Bridge */
 		{    -1,    -1,    -1,    -1,    -1}, /* IdSel 8 unused */
@@ -478,10 +478,10 @@ webbrick_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP);
 }
 
-static int __init
+static int
 clipper_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[7][5] __initdata = {
+	static char irq_tab[7][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{ 16+ 8, 16+ 8, 16+ 9, 16+10, 16+11}, /* IdSel 1 slot 1 */
 		{ 16+12, 16+12, 16+13, 16+14, 16+15}, /* IdSel 2 slot 2 */

@@ -167,10 +167,10 @@ eb64p_init_irq(void)
  * comes in on.  This makes interrupt processing much easier.
  */
 
-static int __init
+static int
 eb64p_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[5][5] __initdata = {
+	static char irq_tab[5][5] = {
 		/*INT  INTA  INTB  INTC   INTD */
 		{16+7, 16+7, 16+7, 16+7, 16+7},  /* IdSel 5,  slot ?, ?? */
 		{16+0, 16+0, 16+2, 16+4, 16+9},  /* IdSel 6,  slot ?, ?? */

@@ -141,7 +141,7 @@ eiger_init_irq(void)
 	}
 }
 
-static int __init
+static int
 eiger_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	u8 irq_orig;
@@ -158,7 +158,7 @@ eiger_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return irq_orig - 0x80;
 }
 
-static u8 __init
+static u8
 eiger_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	struct pci_controller *hose = dev->sysdata;

@@ -149,10 +149,10 @@ miata_init_irq(void)
  * comes in on.  This makes interrupt processing much easier.
  */
 
-static int __init
+static int
 miata_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[18][5] __initdata = {
+	static char irq_tab[18][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{16+ 8, 16+ 8, 16+ 8, 16+ 8, 16+ 8},  /* IdSel 14,  DC21142 */
 		{   -1,    -1,    -1,    -1,    -1},  /* IdSel 15,  EIDE    */
@@ -196,7 +196,7 @@ miata_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return COMMON_TABLE_LOOKUP;
 }
 
-static u8 __init
+static u8
 miata_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	int slot, pin = *pinp;

@@ -145,10 +145,10 @@ mikasa_init_irq(void)
  * comes in on.  This makes interrupt processing much easier.
  */
 
-static int __init
+static int
 mikasa_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[8][5] __initdata = {
+	static char irq_tab[8][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{16+12, 16+12, 16+12, 16+12, 16+12},	/* IdSel 17,  SCSI */
 		{   -1,    -1,    -1,    -1,    -1},	/* IdSel 18,  PCEB */

@@ -62,7 +62,7 @@ nautilus_init_irq(void)
 	common_init_isa_dma();
 }
 
-static int __init
+static int
 nautilus_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	/* Preserve the IRQ set up by the console.  */

@@ -193,10 +193,10 @@ noritake_init_irq(void)
  * comes in on.  This makes interrupt processing much easier.
  */
 
-static int __init
+static int
 noritake_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[15][5] __initdata = {
+	static char irq_tab[15][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		/* note: IDSELs 16, 17, and 25 are CORELLE only */
 		{ 16+1,  16+1,  16+1,  16+1,  16+1},	/* IdSel 16,  QLOGIC */
@@ -221,7 +221,7 @@ noritake_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return COMMON_TABLE_LOOKUP;
 }
 
-static u8 __init
+static u8
 noritake_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	int slot, pin = *pinp;

@@ -221,10 +221,10 @@ rawhide_init_irq(void)
  *
  */
 
-static int __init
+static int
 rawhide_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[5][5] __initdata = {
+	static char irq_tab[5][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{ 16+16, 16+16, 16+16, 16+16, 16+16}, /* IdSel 1 SCSI PCI 1 */
 		{ 16+ 0, 16+ 0, 16+ 1, 16+ 2, 16+ 3}, /* IdSel 2 slot 2 */

@@ -117,10 +117,10 @@ ruffian_kill_arch (int mode)
  *
  */
 
-static int __init
+static int
 ruffian_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[11][5] __initdata = {
+	static char irq_tab[11][5] = {
 		/*INT  INTA  INTB  INTC   INTD */
 		{-1,  -1,  -1,  -1,  -1},  /* IdSel 13,  21052 */
 		{-1,  -1,  -1,  -1,  -1},  /* IdSel 14,  SIO   */
@@ -139,7 +139,7 @@ ruffian_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return COMMON_TABLE_LOOKUP;
 }
 
-static u8 __init
+static u8
 ruffian_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	int slot, pin = *pinp;

@@ -142,7 +142,7 @@ rx164_init_irq(void)
  *
  */
 
-static int __init
+static int
 rx164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 #if 0
@@ -156,7 +156,7 @@ rx164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	  { 16+1, 16+1, 16+6, 16+11, 16+16},	/* IdSel 10, slot 4 */
 	};
 #else
-	static char irq_tab[6][5] __initdata = {
+	static char irq_tab[6][5] = {
 	  /*INT   INTA  INTB  INTC   INTD */
 	  { 16+0, 16+0, 16+6, 16+11, 16+16},	/* IdSel 5,  slot 0 */
 	  { 16+1, 16+1, 16+7, 16+12, 16+17},	/* IdSel 6,  slot 1 */

@@ -192,10 +192,10 @@ sable_init_irq(void)
  * with the values in the irq swizzling tables above.
  */
 
-static int __init
+static int
 sable_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[9][5] __initdata = {
+	static char irq_tab[9][5] = {
 		/*INT   INTA  INTB  INTC  INTD */
 		{ 32+0, 32+0, 32+0, 32+0, 32+0},  /* IdSel 0,  TULIP */
 		{ 32+1, 32+1, 32+1, 32+1, 32+1},  /* IdSel 1,  SCSI  */
@@ -374,10 +374,10 @@ lynx_init_irq(void)
  * with the values in the irq swizzling tables above.
  */
 
-static int __init
+static int
 lynx_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[19][5] __initdata = {
+	static char irq_tab[19][5] = {
 		/*INT  INTA  INTB  INTC  INTD */
 		{ -1,  -1,  -1,  -1,  -1},  /* IdSel 13,  PCEB */
 		{ -1,  -1,  -1,  -1,  -1},  /* IdSel 14,  PPB  */
@@ -404,7 +404,7 @@ lynx_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return COMMON_TABLE_LOOKUP;
 }
 
-static u8 __init
+static u8
 lynx_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	int slot, pin = *pinp;

@@ -144,7 +144,7 @@ sio_fixup_irq_levels(unsigned int level_bits)
 	outb((level_bits >> 8) & 0xff, 0x4d1);
 }
 
-static inline int __init
+static inline int
 noname_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
 	/*
@@ -165,7 +165,7 @@ noname_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	 * that they use the default INTA line, if they are interrupt
 	 * driven at all).
 	 */
-	static char irq_tab[][5] __initdata = {
+	static char irq_tab[][5] = {
 		/*INT A   B   C   D */
 		{ 3,  3,  3,  3,  3}, /* idsel  6 (53c810) */
 		{-1, -1, -1, -1, -1}, /* idsel  7 (SIO: PCI/ISA bridge) */
@@ -183,10 +183,10 @@ noname_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return irq >= 0 ? tmp : -1;
 }
 
-static inline int __init
+static inline int
 p2k_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[][5] __initdata = {
+	static char irq_tab[][5] = {
 		/*INT A   B   C   D */
 		{ 0,  0, -1, -1, -1}, /* idsel  6 (53c810) */
 		{-1, -1, -1, -1, -1}, /* idsel  7 (SIO: PCI/ISA bridge) */

@@ -94,10 +94,10 @@ sx164_init_irq(void)
  *   9  32 bit PCI option slot 3
  */
 
-static int __init
+static int
 sx164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[5][5] __initdata = {
+	static char irq_tab[5][5] = {
 		/*INT    INTA   INTB   INTC   INTD */
 		{ 16+ 9, 16+ 9, 16+13, 16+17, 16+21}, /* IdSel 5 slot 2 J17 */
 		{ 16+11, 16+11, 16+15, 16+19, 16+23}, /* IdSel 6 slot 0 J19 */

@@ -155,10 +155,10 @@ takara_init_irq(void)
  * assign it whatever the hell IRQ we like and it doesn't matter.
  */
 
-static int __init
+static int
 takara_map_irq_srm(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[15][5] __initdata = {
+	static char irq_tab[15][5] = {
 		{ 16+3, 16+3, 16+3, 16+3, 16+3},   /* slot  6 == device 3 */
 		{ 16+2, 16+2, 16+2, 16+2, 16+2},   /* slot  7 == device 2 */
 		{ 16+1, 16+1, 16+1, 16+1, 16+1},   /* slot  8 == device 1 */
@@ -210,7 +210,7 @@ takara_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 	return COMMON_TABLE_LOOKUP;
 }
 
-static u8 __init
+static u8
 takara_swizzle(struct pci_dev *dev, u8 *pinp)
 {
 	int slot = PCI_SLOT(dev->devfn);

@@ -288,10 +288,10 @@ wildfire_device_interrupt(unsigned long vector)
  *  7  64 bit PCI 1 option slot 7
  */
 
-static int __init
+static int
 wildfire_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
 {
-	static char irq_tab[8][5] __initdata = {
+	static char irq_tab[8][5] = {
 		/*INT   INTA   INTB   INTC   INTD */
 		{ -1,   -1,    -1,    -1,    -1},   /* IdSel 0 ISA Bridge */
 		{ 36,   36,    36+1,  36+2,  36+3}, /* IdSel 1 SCSI builtin */

@@ -137,14 +137,15 @@
 		/*
 		 * DW sdio controller has external ciu clock divider
 		 * controlled via register in SDIO IP. Due to its
-		 * unexpected default value (it should devide by 1
-		 * but it devides by 8) SDIO IP uses wrong clock and
+		 * unexpected default value (it should divide by 1
+		 * but it divides by 8) SDIO IP uses wrong clock and
 		 * works unstable (see STAR 9001204800)
+		 * We switched to the minimum possible value of the
+		 * divisor (div-by-2) in HSDK platform code.
 		 * So add temporary fix and change clock frequency
-		 * from 100000000 to 12500000 Hz until we fix dw sdio
-		 * driver itself.
+		 * to 50000000 Hz until we fix dw sdio driver itself.
 		 */
-		clock-frequency = <12500000>;
+		clock-frequency = <50000000>;
 		#clock-cells = <0>;
 	};

@@ -63,7 +63,6 @@ CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_DW=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_RESET_HSDK=y
 CONFIG_EXT3_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y

@@ -23,6 +23,8 @@
 #include <linux/cpumask.h>
 #include <linux/reboot.h>
 #include <linux/irqdomain.h>
+#include <linux/export.h>
+
 #include <asm/processor.h>
 #include <asm/setup.h>
 #include <asm/mach_desc.h>
@@ -30,6 +32,9 @@
 #ifndef CONFIG_ARC_HAS_LLSC
 arch_spinlock_t smp_atomic_ops_lock = __ARCH_SPIN_LOCK_UNLOCKED;
 arch_spinlock_t smp_bitops_lock = __ARCH_SPIN_LOCK_UNLOCKED;
+
+EXPORT_SYMBOL_GPL(smp_atomic_ops_lock);
+EXPORT_SYMBOL_GPL(smp_bitops_lock);
 #endif
 
 struct plat_smp_ops  __weak plat_smp_ops;

@@ -8,3 +8,4 @@
 menuconfig ARC_SOC_HSDK
 	bool "ARC HS Development Kit SOC"
 	select CLK_HSDK
+	select RESET_HSDK

@@ -74,6 +74,10 @@ static void __init hsdk_set_cpu_freq_1ghz(void)
 		pr_err("Failed to setup CPU frequency to 1GHz!");
 }
 
+#define SDIO_BASE		(ARC_PERIPHERAL_BASE + 0xA000)
+#define SDIO_UHS_REG_EXT	(SDIO_BASE + 0x108)
+#define SDIO_UHS_REG_EXT_DIV_2	(2 << 30)
+
 static void __init hsdk_init_early(void)
 {
 	/*
@@ -89,6 +93,12 @@ static void __init hsdk_init_early(void)
 	/* Really apply settings made above */
 	writel(1, (void __iomem *) CREG_PAE_UPDATE);
 
+	/*
+	 * Switch SDIO external ciu clock divider from default div-by-8 to
+	 * minimum possible div-by-2.
+	 */
+	iowrite32(SDIO_UHS_REG_EXT_DIV_2, (void __iomem *) SDIO_UHS_REG_EXT);
+
 	/*
 	 * Setup CPU frequency to 1GHz.
 	 * TODO: remove it after smart hsdk pll driver will be introduced.


@@ -1,7 +1,7 @@
 #include <linux/bootmem.h>
 #include <linux/gfp.h>
 #include <linux/export.h>
-#include <linux/rwlock.h>
+#include <linux/spinlock.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/dma-mapping.h>


@@ -478,28 +478,30 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 		return ret;

 	dir = iommu_tce_direction(tce);

+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	if ((dir != DMA_NONE) && kvmppc_gpa_to_ua(vcpu->kvm,
-			tce & ~(TCE_PCI_READ | TCE_PCI_WRITE), &ua, NULL))
-		return H_PARAMETER;
+			tce & ~(TCE_PCI_READ | TCE_PCI_WRITE), &ua, NULL)) {
+		ret = H_PARAMETER;
+		goto unlock_exit;
+	}

 	entry = ioba >> stt->page_shift;

 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
-		if (dir == DMA_NONE) {
+		if (dir == DMA_NONE)
 			ret = kvmppc_tce_iommu_unmap(vcpu->kvm,
 					stit->tbl, entry);
-		} else {
-			idx = srcu_read_lock(&vcpu->kvm->srcu);
+		else
 			ret = kvmppc_tce_iommu_map(vcpu->kvm, stit->tbl,
 					entry, ua, dir);
-			srcu_read_unlock(&vcpu->kvm->srcu, idx);
-		}

 		if (ret == H_SUCCESS)
 			continue;

 		if (ret == H_TOO_HARD)
-			return ret;
+			goto unlock_exit;

 		WARN_ON_ONCE(1);
 		kvmppc_clear_tce(stit->tbl, entry);
@@ -507,7 +509,10 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,

 	kvmppc_tce_put(stt, entry, tce);

-	return H_SUCCESS;
+unlock_exit:
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(kvmppc_h_put_tce);


@@ -989,13 +989,14 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
 	beq	no_xive
 	ld	r11, VCPU_XIVE_SAVED_STATE(r4)
 	li	r9, TM_QW1_OS
-	stdcix	r11,r9,r10
 	eieio
+	stdcix	r11,r9,r10
 	lwz	r11, VCPU_XIVE_CAM_WORD(r4)
 	li	r9, TM_QW1_OS + TM_WORD2
 	stwcix	r11,r9,r10
 	li	r9, 1
 	stw	r9, VCPU_XIVE_PUSHED(r4)
+	eieio
 no_xive:
 #endif /* CONFIG_KVM_XICS */
@@ -1310,6 +1311,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	bne	3f
 BEGIN_FTR_SECTION
 	PPC_MSGSYNC
+	lwsync
 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	lbz	r0, HSTATE_HOST_IPI(r13)
 	cmpwi	r0, 0
@@ -1400,8 +1402,8 @@ guest_exit_cont:	/* r9 = vcpu, r12 = trap, r13 = paca */
 	cmpldi	cr0, r10, 0
 	beq	1f
 	/* First load to pull the context, we ignore the value */
-	lwzx	r11, r7, r10
 	eieio
+	lwzx	r11, r7, r10
 	/* Second load to recover the context state (Words 0 and 1) */
 	ldx	r11, r6, r10
 	b	3f
@@ -1409,8 +1411,8 @@ guest_exit_cont:	/* r9 = vcpu, r12 = trap, r13 = paca */
 	cmpldi	cr0, r10, 0
 	beq	1f
 	/* First load to pull the context, we ignore the value */
-	lwzcix	r11, r7, r10
 	eieio
+	lwzcix	r11, r7, r10
 	/* Second load to recover the context state (Words 0 and 1) */
 	ldcix	r11, r6, r10
 3:	std	r11, VCPU_XIVE_SAVED_STATE(r9)
@@ -1420,6 +1422,7 @@ guest_exit_cont:	/* r9 = vcpu, r12 = trap, r13 = paca */
 	stw	r10, VCPU_XIVE_PUSHED(r9)
 	stb	r10, (VCPU_XIVE_SAVED_STATE+3)(r9)
 	stb	r0, (VCPU_XIVE_SAVED_STATE+4)(r9)
+	eieio
 1:
 #endif /* CONFIG_KVM_XICS */
 	/* Save more register state */
@@ -2788,6 +2791,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
 	PPC_MSGCLR(6)
 	/* see if it's a host IPI */
 	li	r3, 1
+BEGIN_FTR_SECTION
+	PPC_MSGSYNC
+	lwsync
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	lbz	r0, HSTATE_HOST_IPI(r13)
 	cmpwi	r0, 0
 	bnelr


@@ -644,8 +644,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 #endif
 	case KVM_CAP_PPC_HTM:
-		r = cpu_has_feature(CPU_FTR_TM_COMP) &&
-		    is_kvmppc_hv_enabled(kvm);
+		r = cpu_has_feature(CPU_FTR_TM_COMP) && hv_enabled;
 		break;
 	default:
 		r = 0;


@@ -521,12 +521,15 @@ ENTRY(pgm_check_handler)
 	tmhh	%r8,0x0001		# test problem state bit
 	jnz	2f			# -> fault in user space
 #if IS_ENABLED(CONFIG_KVM)
-	# cleanup critical section for sie64a
+	# cleanup critical section for program checks in sie64a
 	lgr	%r14,%r9
 	slg	%r14,BASED(.Lsie_critical_start)
 	clg	%r14,BASED(.Lsie_critical_length)
 	jhe	0f
-	brasl	%r14,.Lcleanup_sie
+	lg	%r14,__SF_EMPTY(%r15)		# get control block pointer
+	ni	__SIE_PROG0C+3(%r14),0xfe	# no longer in SIE
+	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
+	larl	%r9,sie_exit			# skip forward to sie_exit
 #endif
 0:	tmhh	%r8,0x4000		# PER bit set in old PSW ?
 	jnz	1f			# -> enabled, can't be a double fault


@@ -808,7 +808,7 @@ apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt

 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
 ENTRY(\sym)
-	UNWIND_HINT_IRET_REGS offset=8
+	UNWIND_HINT_IRET_REGS offset=\has_error_code*8

 	/* Sanity check */
 	.if \shift_ist != -1 && \paranoid == 0


@@ -546,9 +546,6 @@ static int bts_event_init(struct perf_event *event)
 	if (event->attr.type != bts_pmu.type)
 		return -ENOENT;

-	if (x86_add_exclusive(x86_lbr_exclusive_bts))
-		return -EBUSY;
-
 	/*
 	 * BTS leaks kernel addresses even when CPL0 tracing is
 	 * disabled, so disallow intel_bts driver for unprivileged
@@ -562,6 +559,9 @@ static int bts_event_init(struct perf_event *event)
 	    !capable(CAP_SYS_ADMIN))
 		return -EACCES;

+	if (x86_add_exclusive(x86_lbr_exclusive_bts))
+		return -EBUSY;
+
 	ret = x86_reserve_hardware();
 	if (ret) {
 		x86_del_exclusive(x86_lbr_exclusive_bts);


@@ -82,12 +82,21 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 #define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
 #endif

-/*
- * If tlb_use_lazy_mode is true, then we try to avoid switching CR3 to point
- * to init_mm when we switch to a kernel thread (e.g. the idle thread).  If
- * it's false, then we immediately switch CR3 when entering a kernel thread.
- */
-DECLARE_STATIC_KEY_TRUE(tlb_use_lazy_mode);
+static inline bool tlb_defer_switch_to_init_mm(void)
+{
+	/*
+	 * If we have PCID, then switching to init_mm is reasonably
+	 * fast.  If we don't have PCID, then switching to init_mm is
+	 * quite slow, so we try to defer it in the hopes that we can
+	 * avoid it entirely.  The latter approach runs the risk of
+	 * receiving otherwise unnecessary IPIs.
+	 *
+	 * This choice is just a heuristic.  The tlb code can handle this
+	 * function returning true or false regardless of whether we have
+	 * PCID.
+	 */
+	return !static_cpu_has(X86_FEATURE_PCID);
+}

 /*
  * 6 because 6 should be plenty and struct tlb_state will fit in


@@ -27,6 +27,8 @@ static const struct pci_device_id amd_root_ids[] = {
 	{}
 };

+#define PCI_DEVICE_ID_AMD_CNB17H_F4	0x1704
+
 const struct pci_device_id amd_nb_misc_ids[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB_MISC) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_10H_NB_MISC) },
@@ -37,6 +39,7 @@ const struct pci_device_id amd_nb_misc_ids[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F3) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) },
 	{}
 };
 EXPORT_SYMBOL_GPL(amd_nb_misc_ids);
@@ -48,6 +51,7 @@ static const struct pci_device_id amd_nb_link_ids[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F4) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) },
 	{}
 };

@@ -402,11 +406,48 @@ void amd_flush_garts(void)
 }
 EXPORT_SYMBOL_GPL(amd_flush_garts);

+static void __fix_erratum_688(void *info)
+{
+#define MSR_AMD64_IC_CFG 0xC0011021
+
+	msr_set_bit(MSR_AMD64_IC_CFG, 3);
+	msr_set_bit(MSR_AMD64_IC_CFG, 14);
+}
+
+/* Apply erratum 688 fix so machines without a BIOS fix work. */
+static __init void fix_erratum_688(void)
+{
+	struct pci_dev *F4;
+	u32 val;
+
+	if (boot_cpu_data.x86 != 0x14)
+		return;
+
+	if (!amd_northbridges.num)
+		return;
+
+	F4 = node_to_amd_nb(0)->link;
+	if (!F4)
+		return;
+
+	if (pci_read_config_dword(F4, 0x164, &val))
+		return;
+
+	if (val & BIT(2))
+		return;
+
+	on_each_cpu(__fix_erratum_688, NULL, 0);
+
+	pr_info("x86/cpu/AMD: CPU erratum 688 worked around\n");
+}
+
 static __init int init_amd_nbs(void)
 {
 	amd_cache_northbridges();
 	amd_cache_gart();

+	fix_erratum_688();
+
 	return 0;
 }


@@ -831,7 +831,6 @@ static int __cache_amd_cpumap_setup(unsigned int cpu, int index,
 	} else if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
 		unsigned int apicid, nshared, first, last;

-		this_leaf = this_cpu_ci->info_list + index;
 		nshared = base->eax.split.num_threads_sharing + 1;
 		apicid = cpu_data(cpu).apicid;
 		first = apicid - (apicid % nshared);


@@ -34,6 +34,7 @@
 #include <linux/mm.h>

 #include <asm/microcode_intel.h>
+#include <asm/intel-family.h>
 #include <asm/processor.h>
 #include <asm/tlbflush.h>
 #include <asm/setup.h>
@@ -918,6 +919,18 @@ static int get_ucode_fw(void *to, const void *from, size_t n)
 	return 0;
 }

+static bool is_blacklisted(unsigned int cpu)
+{
+	struct cpuinfo_x86 *c = &cpu_data(cpu);
+
+	if (c->x86 == 6 && c->x86_model == INTEL_FAM6_BROADWELL_X) {
+		pr_err_once("late loading on model 79 is disabled.\n");
+		return true;
+	}
+
+	return false;
+}
+
 static enum ucode_state request_microcode_fw(int cpu, struct device *device,
 					     bool refresh_fw)
 {
@@ -926,6 +939,9 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device,
 	const struct firmware *firmware;
 	enum ucode_state ret;

+	if (is_blacklisted(cpu))
+		return UCODE_NFOUND;
+
 	sprintf(name, "intel-ucode/%02x-%02x-%02x",
 		c->x86, c->x86_model, c->x86_mask);
@@ -950,6 +966,9 @@ static int get_ucode_user(void *to, const void *from, size_t n)
 static enum ucode_state
 request_microcode_user(int cpu, const void __user *buf, size_t size)
 {
+	if (is_blacklisted(cpu))
+		return UCODE_NFOUND;
+
 	return generic_load_microcode(cpu, (void *)buf, size, &get_ucode_user);
 }


@@ -30,10 +30,11 @@ static void __init i386_default_early_setup(void)

 asmlinkage __visible void __init i386_start_kernel(void)
 {
-	cr4_init_shadow();
+	/* Make sure IDT is set up before any exception happens */
 	idt_setup_early_handler();

+	cr4_init_shadow();
+
 	sanitize_boot_params(&boot_params);

 	x86_early_init_platform_quirks();


@@ -86,8 +86,8 @@ static struct orc_entry *orc_find(unsigned long ip)
 	idx = (ip - LOOKUP_START_IP) / LOOKUP_BLOCK_SIZE;

 	if (unlikely((idx >= lookup_num_blocks-1))) {
-		orc_warn("WARNING: bad lookup idx: idx=%u num=%u ip=%lx\n",
-			 idx, lookup_num_blocks, ip);
+		orc_warn("WARNING: bad lookup idx: idx=%u num=%u ip=%pB\n",
+			 idx, lookup_num_blocks, (void *)ip);
 		return NULL;
 	}
@@ -96,8 +96,8 @@ static struct orc_entry *orc_find(unsigned long ip)
 	if (unlikely((__start_orc_unwind + start >= __stop_orc_unwind) ||
 		     (__start_orc_unwind + stop > __stop_orc_unwind))) {
-		orc_warn("WARNING: bad lookup value: idx=%u num=%u start=%u stop=%u ip=%lx\n",
-			 idx, lookup_num_blocks, start, stop, ip);
+		orc_warn("WARNING: bad lookup value: idx=%u num=%u start=%u stop=%u ip=%pB\n",
+			 idx, lookup_num_blocks, start, stop, (void *)ip);
 		return NULL;
 	}
@@ -373,7 +373,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	case ORC_REG_R10:
 		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg R10 at ip %p\n",
+			orc_warn("missing regs for base reg R10 at ip %pB\n",
				 (void *)state->ip);
 			goto done;
 		}
@@ -382,7 +382,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	case ORC_REG_R13:
 		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg R13 at ip %p\n",
+			orc_warn("missing regs for base reg R13 at ip %pB\n",
				 (void *)state->ip);
 			goto done;
 		}
@@ -391,7 +391,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	case ORC_REG_DI:
 		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg DI at ip %p\n",
+			orc_warn("missing regs for base reg DI at ip %pB\n",
				 (void *)state->ip);
 			goto done;
 		}
@@ -400,7 +400,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	case ORC_REG_DX:
 		if (!state->regs || !state->full_regs) {
-			orc_warn("missing regs for base reg DX at ip %p\n",
+			orc_warn("missing regs for base reg DX at ip %pB\n",
				 (void *)state->ip);
 			goto done;
 		}
@@ -408,7 +408,7 @@ bool unwind_next_frame(struct unwind_state *state)
 		break;

 	default:
-		orc_warn("unknown SP base reg %d for ip %p\n",
+		orc_warn("unknown SP base reg %d for ip %pB\n",
			 orc->sp_reg, (void *)state->ip);
 		goto done;
 	}
@@ -436,7 +436,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	case ORC_TYPE_REGS:
 		if (!deref_stack_regs(state, sp, &state->ip, &state->sp, true)) {
-			orc_warn("can't dereference registers at %p for ip %p\n",
+			orc_warn("can't dereference registers at %p for ip %pB\n",
				 (void *)sp, (void *)orig_ip);
 			goto done;
 		}
@@ -448,7 +448,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	case ORC_TYPE_REGS_IRET:
 		if (!deref_stack_regs(state, sp, &state->ip, &state->sp, false)) {
-			orc_warn("can't dereference iret registers at %p for ip %p\n",
+			orc_warn("can't dereference iret registers at %p for ip %pB\n",
				 (void *)sp, (void *)orig_ip);
 			goto done;
 		}
@@ -465,7 +465,8 @@ bool unwind_next_frame(struct unwind_state *state)
 		break;

 	default:
-		orc_warn("unknown .orc_unwind entry type %d\n", orc->type);
+		orc_warn("unknown .orc_unwind entry type %d for ip %pB\n",
+			 orc->type, (void *)orig_ip);
 		break;
 	}
@@ -487,7 +488,7 @@ bool unwind_next_frame(struct unwind_state *state)
 		break;

 	default:
-		orc_warn("unknown BP base reg %d for ip %p\n",
+		orc_warn("unknown BP base reg %d for ip %pB\n",
			 orc->bp_reg, (void *)orig_ip);
 		goto done;
 	}
@@ -496,7 +497,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	if (state->stack_info.type == prev_type &&
 	    on_stack(&state->stack_info, (void *)state->sp, sizeof(long)) &&
 	    state->sp <= prev_sp) {
-		orc_warn("stack going in the wrong direction? ip=%p\n",
+		orc_warn("stack going in the wrong direction? ip=%pB\n",
			 (void *)orig_ip);
 		goto done;
 	}


@@ -30,7 +30,6 @@
 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);

-DEFINE_STATIC_KEY_TRUE(tlb_use_lazy_mode);

 static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen,
 			    u16 *new_asid, bool *need_flush)
@@ -147,8 +146,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	this_cpu_write(cpu_tlbstate.is_lazy, false);

 	if (real_prev == next) {
-		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
-			  next->context.ctx_id);
+		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
+			   next->context.ctx_id);

 		/*
 		 * We don't currently support having a real mm loaded without
@@ -213,6 +212,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 }

 /*
+ * Please ignore the name of this function.  It should be called
+ * switch_to_kernel_thread().
+ *
  * enter_lazy_tlb() is a hint from the scheduler that we are entering a
  * kernel thread or other context without an mm.  Acceptable implementations
  * include doing nothing whatsoever, switching to init_mm, or various clever
@@ -227,7 +229,7 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
 		return;

-	if (static_branch_unlikely(&tlb_use_lazy_mode)) {
+	if (tlb_defer_switch_to_init_mm()) {
 		/*
 		 * There's a significant optimization that may be possible
 		 * here.  We have accurate enough TLB flush tracking that we
@@ -626,57 +628,3 @@ static int __init create_tlb_single_page_flush_ceiling(void)
 	return 0;
 }
 late_initcall(create_tlb_single_page_flush_ceiling);
-
-static ssize_t tlblazy_read_file(struct file *file, char __user *user_buf,
-				 size_t count, loff_t *ppos)
-{
-	char buf[2];
-
-	buf[0] = static_branch_likely(&tlb_use_lazy_mode) ? '1' : '0';
-	buf[1] = '\n';
-
-	return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static ssize_t tlblazy_write_file(struct file *file,
-		 const char __user *user_buf, size_t count, loff_t *ppos)
-{
-	bool val;
-
-	if (kstrtobool_from_user(user_buf, count, &val))
-		return -EINVAL;
-
-	if (val)
-		static_branch_enable(&tlb_use_lazy_mode);
-	else
-		static_branch_disable(&tlb_use_lazy_mode);
-
-	return count;
-}
-
-static const struct file_operations fops_tlblazy = {
-	.read = tlblazy_read_file,
-	.write = tlblazy_write_file,
-	.llseek = default_llseek,
-};
-
-static int __init init_tlb_use_lazy_mode(void)
-{
-	if (boot_cpu_has(X86_FEATURE_PCID)) {
-		/*
-		 * Heuristic: with PCID on, switching to and from
-		 * init_mm is reasonably fast, but remote flush IPIs
-		 * as expensive as ever, so turn off lazy TLB mode.
-		 *
-		 * We can't do this in setup_pcid() because static keys
-		 * haven't been initialized yet, and it would blow up
-		 * badly.
-		 */
-		static_branch_disable(&tlb_use_lazy_mode);
-	}
-
-	debugfs_create_file("tlb_use_lazy_mode", S_IRUSR | S_IWUSR,
-			    arch_debugfs_dir, NULL, &fops_tlblazy);
-	return 0;
-}
-late_initcall(init_tlb_use_lazy_mode);


@@ -3662,12 +3662,6 @@ static void binder_stat_br(struct binder_proc *proc,
 	}
 }

-static int binder_has_thread_work(struct binder_thread *thread)
-{
-	return !binder_worklist_empty(thread->proc, &thread->todo) ||
-		thread->looper_need_return;
-}
-
 static int binder_put_node_cmd(struct binder_proc *proc,
 			       struct binder_thread *thread,
 			       void __user **ptrp,
@@ -4297,12 +4291,9 @@ static unsigned int binder_poll(struct file *filp,

 	binder_inner_proc_unlock(thread->proc);

-	if (binder_has_work(thread, wait_for_proc_work))
-		return POLLIN;
-
 	poll_wait(filp, &thread->wait, wait);

-	if (binder_has_thread_work(thread))
+	if (binder_has_work(thread, wait_for_proc_work))
 		return POLLIN;

 	return 0;


@@ -215,17 +215,12 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		}
 	}

-	if (!vma && need_mm)
-		mm = get_task_mm(alloc->tsk);
+	if (!vma && need_mm && mmget_not_zero(alloc->vma_vm_mm))
+		mm = alloc->vma_vm_mm;

 	if (mm) {
 		down_write(&mm->mmap_sem);
 		vma = alloc->vma;
-		if (vma && mm != alloc->vma_vm_mm) {
-			pr_err("%d: vma mm and task mm mismatch\n",
-				alloc->pid);
-			vma = NULL;
-		}
 	}

 	if (!vma && need_mm) {
@@ -565,7 +560,7 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc,
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%d: merge free, buffer %pK do not share page with %pK or %pK\n",
 				   alloc->pid, buffer->data,
-				   prev->data, next->data);
+				   prev->data, next ? next->data : NULL);
 		binder_update_page_range(alloc, 0, buffer_start_page(buffer),
 					 buffer_start_page(buffer) + PAGE_SIZE,
 					 NULL);
@@ -720,6 +715,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	barrier();
 	alloc->vma = vma;
 	alloc->vma_vm_mm = vma->vm_mm;
+	mmgrab(alloc->vma_vm_mm);

 	return 0;
@@ -795,6 +791,8 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		vfree(alloc->buffer);
 	}
 	mutex_unlock(&alloc->mutex);
+	if (alloc->vma_vm_mm)
+		mmdrop(alloc->vma_vm_mm);

 	binder_alloc_debug(BINDER_DEBUG_OPEN_CLOSE,
 			   "%s: %d buffers %d, pages %d\n",
@@ -889,7 +887,6 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
 void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
 	WRITE_ONCE(alloc->vma, NULL);
-	WRITE_ONCE(alloc->vma_vm_mm, NULL);
 }

 /**
@@ -926,9 +923,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
 	vma = alloc->vma;
 	if (vma) {
-		mm = get_task_mm(alloc->tsk);
-		if (!mm)
-			goto err_get_task_mm_failed;
+		if (!mmget_not_zero(alloc->vma_vm_mm))
+			goto err_mmget;
+		mm = alloc->vma_vm_mm;
 		if (!down_write_trylock(&mm->mmap_sem))
 			goto err_down_write_mmap_sem_failed;
 	}
@@ -963,7 +960,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,

 err_down_write_mmap_sem_failed:
 	mmput_async(mm);
-err_get_task_mm_failed:
+err_mmget:
 err_page_already_freed:
 	mutex_unlock(&alloc->mutex);
 err_get_alloc_mutex_failed:
@@ -1002,7 +999,6 @@ struct shrinker binder_shrinker = {
  */
 void binder_alloc_init(struct binder_alloc *alloc)
 {
-	alloc->tsk = current->group_leader;
 	alloc->pid = current->group_leader->pid;
 	mutex_init(&alloc->mutex);
 	INIT_LIST_HEAD(&alloc->buffers);


@@ -100,7 +100,6 @@ struct binder_lru_page {
  */
 struct binder_alloc {
 	struct mutex mutex;
-	struct task_struct *tsk;
 	struct vm_area_struct *vma;
 	struct mm_struct *vma_vm_mm;
 	void *buffer;


@@ -377,7 +377,8 @@ int register_cpu(struct cpu *cpu, int num)
 	per_cpu(cpu_sys_devices, num) = &cpu->dev;
 	register_cpu_under_node(num, cpu_to_node(num));

-	dev_pm_qos_expose_latency_limit(&cpu->dev, 0);
+	dev_pm_qos_expose_latency_limit(&cpu->dev,
+					PM_QOS_RESUME_LATENCY_NO_CONSTRAINT);

 	return 0;
 }


@@ -14,23 +14,20 @@
 static int dev_update_qos_constraint(struct device *dev, void *data)
 {
 	s64 *constraint_ns_p = data;
-	s32 constraint_ns = -1;
+	s64 constraint_ns = -1;

 	if (dev->power.subsys_data && dev->power.subsys_data->domain_data)
 		constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns;

-	if (constraint_ns < 0) {
+	if (constraint_ns < 0)
 		constraint_ns = dev_pm_qos_read_value(dev);
-		constraint_ns *= NSEC_PER_USEC;
-	}
-	if (constraint_ns == 0)
+
+	if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
 		return 0;

-	/*
-	 * constraint_ns cannot be negative here, because the device has been
-	 * suspended.
-	 */
-	if (constraint_ns < *constraint_ns_p || *constraint_ns_p == 0)
+	constraint_ns *= NSEC_PER_USEC;
+
+	if (constraint_ns < *constraint_ns_p || *constraint_ns_p < 0)
 		*constraint_ns_p = constraint_ns;

 	return 0;
@@ -63,10 +60,14 @@ static bool default_suspend_ok(struct device *dev)

 	spin_unlock_irqrestore(&dev->power.lock, flags);

-	if (constraint_ns < 0)
+	if (constraint_ns == 0)
 		return false;

-	constraint_ns *= NSEC_PER_USEC;
+	if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+		constraint_ns = -1;
+	else
+		constraint_ns *= NSEC_PER_USEC;

 	/*
 	 * We can walk the children without any additional locking, because
 	 * they all have been suspended at this point and their
@@ -76,14 +77,19 @@ static bool default_suspend_ok(struct device *dev)
 		device_for_each_child(dev, &constraint_ns,
 				      dev_update_qos_constraint);

-	if (constraint_ns > 0) {
-		constraint_ns -= td->suspend_latency_ns +
-				td->resume_latency_ns;
-		if (constraint_ns == 0)
-			return false;
+	if (constraint_ns < 0) {
+		/* The children have no constraints. */
+		td->effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
+		td->cached_suspend_ok = true;
+	} else {
+		constraint_ns -= td->suspend_latency_ns + td->resume_latency_ns;
+		if (constraint_ns > 0) {
+			td->effective_constraint_ns = constraint_ns;
+			td->cached_suspend_ok = true;
+		} else {
+			td->effective_constraint_ns = 0;
+		}
 	}
-	td->effective_constraint_ns = constraint_ns;
-	td->cached_suspend_ok = constraint_ns >= 0;

 	/*
 	 * The children have been suspended already, so we don't need to take
@@ -145,13 +151,14 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
 		td = &to_gpd_data(pdd)->td;
 		constraint_ns = td->effective_constraint_ns;
 		/* default_suspend_ok() need not be called before us. */
-		if (constraint_ns < 0) {
+		if (constraint_ns < 0)
 			constraint_ns = dev_pm_qos_read_value(pdd->dev);
-			constraint_ns *= NSEC_PER_USEC;
-		}
-		if (constraint_ns == 0)
+
+		if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
 			continue;

+		constraint_ns *= NSEC_PER_USEC;
+
 		/*
 		 * constraint_ns cannot be negative here, because the device has
 		 * been suspended.


@@ -189,7 +189,7 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)
 	plist_head_init(&c->list);
 	c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
 	c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
-	c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
 	c->type = PM_QOS_MIN;
 	c->notifiers = n;


@@ -253,7 +253,7 @@ static int rpm_check_suspend_allowed(struct device *dev)
 	    || (dev->power.request_pending
 	    && dev->power.request == RPM_REQ_RESUME))
 		retval = -EAGAIN;
-	else if (__dev_pm_qos_read_value(dev) < 0)
+	else if (__dev_pm_qos_read_value(dev) == 0)
 		retval = -EPERM;
 	else if (dev->power.runtime_status == RPM_SUSPENDED)
 		retval = 1;


@@ -218,7 +218,14 @@ static ssize_t pm_qos_resume_latency_show(struct device *dev,
 					  struct device_attribute *attr,
 					  char *buf)
 {
-	return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
+	s32 value = dev_pm_qos_requested_resume_latency(dev);
+
+	if (value == 0)
+		return sprintf(buf, "n/a\n");
+	else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+		value = 0;
+
+	return sprintf(buf, "%d\n", value);
 }
 
 static ssize_t pm_qos_resume_latency_store(struct device *dev,
@@ -228,11 +235,21 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
 	s32 value;
 	int ret;
 
-	if (kstrtos32(buf, 0, &value))
-		return -EINVAL;
+	if (!kstrtos32(buf, 0, &value)) {
+		/*
+		 * Prevent users from writing negative or "no constraint" values
+		 * directly.
+		 */
+		if (value < 0 || value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+			return -EINVAL;
 
-	if (value < 0)
-		return -EINVAL;
+		if (value == 0)
+			value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
+	} else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
+		value = 0;
+	} else {
+		return -EINVAL;
+	}
 
 	ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
 					value);
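The show/store pair above folds three user-visible states into a single s32. A minimal user-space sketch of the same mapping (the helper names are mine; `NO_CONSTRAINT` stands in for `PM_QOS_RESUME_LATENCY_NO_CONSTRAINT`, which the kernel defines as S32_MAX):

```c
#include <assert.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the kernel's PM_QOS_RESUME_LATENCY_NO_CONSTRAINT (S32_MAX). */
#define NO_CONSTRAINT INT_MAX

/* show path: internal 0 means "no latency tolerated at all" -> "n/a",
 * internal NO_CONSTRAINT is reported to user space as 0. */
static const char *qos_show(int stored)
{
	static char buf[16];

	if (stored == 0)
		return "n/a";
	snprintf(buf, sizeof(buf), "%d",
		 stored == NO_CONSTRAINT ? 0 : stored);
	return buf;
}

/* store path: map user input back to the internal encoding;
 * returns -1 for input that must be rejected. */
static int qos_store(const char *buf)
{
	char *end;
	long v = strtol(buf, &end, 0);

	if (end != buf && (*end == '\0' || *end == '\n')) {
		if (v < 0 || v == NO_CONSTRAINT)
			return -1;	/* special values cannot be written raw */
		return v == 0 ? NO_CONSTRAINT : (int)v;
	}
	if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n"))
		return 0;
	return -1;
}
```

The point of the asymmetric encoding is that user space keeps its historical "0 means no constraint" interface while the internal representation gains a distinct "cannot accept any resume latency" state.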


@@ -386,6 +386,15 @@ static int sock_xmit(struct nbd_device *nbd, int index, int send,
 	return result;
 }
 
+/*
+ * Different settings for sk->sk_sndtimeo can result in different return values
+ * if there is a signal pending when we enter sendmsg, because reasons?
+ */
+static inline int was_interrupted(int result)
+{
+	return result == -ERESTARTSYS || result == -EINTR;
+}
+
 /* always call with the tx_lock held */
 static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 {
@@ -458,7 +467,7 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 		result = sock_xmit(nbd, index, 1, &from,
 				(type == NBD_CMD_WRITE) ? MSG_MORE : 0, &sent);
 		if (result <= 0) {
-			if (result == -ERESTARTSYS) {
+			if (was_interrupted(result)) {
 				/* If we havne't sent anything we can just return BUSY,
 				 * however if we have sent something we need to make
 				 * sure we only allow this req to be sent until we are
@@ -502,7 +511,7 @@ send_pages:
 		}
 		result = sock_xmit(nbd, index, 1, &from, flags, &sent);
 		if (result <= 0) {
-			if (result == -ERESTARTSYS) {
+			if (was_interrupted(result)) {
 				/* We've already sent the header, we
 				 * have no choice but to set pending and
 				 * return BUSY.


@@ -117,7 +117,8 @@ static irqreturn_t mfgpt_tick(int irq, void *dev_id)
 	/* Turn off the clock (and clear the event) */
 	disable_timer(cs5535_event_clock);
 
-	if (clockevent_state_shutdown(&cs5535_clockevent))
+	if (clockevent_state_detached(&cs5535_clockevent) ||
+	    clockevent_state_shutdown(&cs5535_clockevent))
 		return IRQ_HANDLED;
 
 	/* Clear the counter */


@@ -298,8 +298,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 		data->needs_update = 0;
 	}
 
-	/* resume_latency is 0 means no restriction */
-	if (resume_latency && resume_latency < latency_req)
+	if (resume_latency < latency_req &&
+	    resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
 		latency_req = resume_latency;
 
 	/* Special case when user has set very strict latency requirement */
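With the new encoding, 0 no longer means "no restriction", so the clamp must compare against the sentinel explicitly instead of skipping zero. A small sketch of the selection logic (the constant is assumed to mirror the kernel's S32_MAX definition):

```c
#include <assert.h>
#include <limits.h>

/* Assumed stand-in for the kernel's definition (S32_MAX). */
#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT INT_MAX

/* Clamp the cpuidle latency requirement by the per-device resume
 * latency, unless the device imposes no constraint at all.  A value
 * of 0 now means "cannot tolerate any latency" and wins the clamp. */
static int clamp_latency(int latency_req, int resume_latency)
{
	if (resume_latency < latency_req &&
	    resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
		latency_req = resume_latency;
	return latency_req;
}
```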


@@ -238,7 +238,8 @@ unsigned long efi_entry(void *handle, efi_system_table_t *sys_table,
 
 	efi_random_get_seed(sys_table);
 
-	if (!nokaslr()) {
+	/* hibernation expects the runtime regions to stay in the same place */
+	if (!IS_ENABLED(CONFIG_HIBERNATION) && !nokaslr()) {
 		/*
 		 * Randomize the base of the UEFI runtime services region.
 		 * Preserve the 2 MB alignment of the region by taking a


@@ -593,6 +593,9 @@ static long efi_runtime_query_capsulecaps(unsigned long arg)
 	if (copy_from_user(&qcaps, qcaps_user, sizeof(qcaps)))
 		return -EFAULT;
 
+	if (qcaps.capsule_count == ULONG_MAX)
+		return -EINVAL;
+
 	capsules = kcalloc(qcaps.capsule_count + 1,
 			sizeof(efi_capsule_header_t), GFP_KERNEL);
 	if (!capsules)


@@ -225,11 +225,7 @@ static int uvd_v6_0_suspend(void *handle)
 	if (r)
 		return r;
 
-	/* Skip this for APU for now */
-	if (!(adev->flags & AMD_IS_APU))
-		r = amdgpu_uvd_suspend(adev);
-
-	return r;
+	return amdgpu_uvd_suspend(adev);
 }
 
 static int uvd_v6_0_resume(void *handle)
@@ -237,12 +233,10 @@ static int uvd_v6_0_resume(void *handle)
 	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	/* Skip this for APU for now */
-	if (!(adev->flags & AMD_IS_APU)) {
-		r = amdgpu_uvd_resume(adev);
-		if (r)
-			return r;
-	}
+	r = amdgpu_uvd_resume(adev);
+	if (r)
+		return r;
+
 	return uvd_v6_0_hw_init(adev);
 }


@@ -830,7 +830,7 @@ uint32_t smu7_get_xclk(struct pp_hwmgr *hwmgr)
 {
 	uint32_t reference_clock, tmp;
 	struct cgs_display_info info = {0};
-	struct cgs_mode_info mode_info;
+	struct cgs_mode_info mode_info = {0};
 
 	info.mode_info = &mode_info;
 
@@ -3948,10 +3948,9 @@ static int smu7_program_display_gap(struct pp_hwmgr *hwmgr)
 	uint32_t ref_clock;
 	uint32_t refresh_rate = 0;
 	struct cgs_display_info info = {0};
-	struct cgs_mode_info mode_info;
+	struct cgs_mode_info mode_info = {0};
 
 	info.mode_info = &mode_info;
-
 	cgs_get_active_displays_info(hwmgr->device, &info);
 
 	num_active_displays = info.display_count;
@@ -3967,6 +3966,7 @@ static int smu7_program_display_gap(struct pp_hwmgr *hwmgr)
 	frame_time_in_us = 1000000 / refresh_rate;
 	pre_vbi_time_in_us = frame_time_in_us - 200 - mode_info.vblank_time_us;
+
 	data->frame_time_x2 = frame_time_in_us * 2 / 100;
 
 	display_gap2 = pre_vbi_time_in_us * (ref_clock / 100);


@@ -2723,6 +2723,9 @@ static int combine_wa_ctx(struct intel_shadow_wa_ctx *wa_ctx)
 	uint32_t per_ctx_start[CACHELINE_DWORDS] = {0};
 	unsigned char *bb_start_sva;
 
+	if (!wa_ctx->per_ctx.valid)
+		return 0;
+
 	per_ctx_start[0] = 0x18800001;
 	per_ctx_start[1] = wa_ctx->per_ctx.guest_gma;


@@ -701,8 +701,7 @@ static int submit_context(struct intel_vgpu *vgpu, int ring_id,
 			CACHELINE_BYTES;
 		workload->wa_ctx.per_ctx.guest_gma =
 			per_ctx & PER_CTX_ADDR_MASK;
-
-		WARN_ON(workload->wa_ctx.indirect_ctx.size && !(per_ctx & 0x1));
+		workload->wa_ctx.per_ctx.valid = per_ctx & 1;
 	}
 
 	if (emulate_schedule_in)


@@ -1429,18 +1429,7 @@ static int skl_lcpll_write(struct intel_vgpu *vgpu, unsigned int offset,
 	return 0;
 }
 
-static int ring_timestamp_mmio_read(struct intel_vgpu *vgpu,
-	unsigned int offset, void *p_data, unsigned int bytes)
-{
-	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
-
-	mmio_hw_access_pre(dev_priv);
-	vgpu_vreg(vgpu, offset) = I915_READ(_MMIO(offset));
-	mmio_hw_access_post(dev_priv);
-	return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
-}
-
-static int instdone_mmio_read(struct intel_vgpu *vgpu,
+static int mmio_read_from_hw(struct intel_vgpu *vgpu,
 	unsigned int offset, void *p_data, unsigned int bytes)
 {
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
@@ -1589,6 +1578,8 @@ static int ring_reset_ctl_write(struct intel_vgpu *vgpu,
 	MMIO_F(prefix(BLT_RING_BASE), s, f, am, rm, d, r, w); \
 	MMIO_F(prefix(GEN6_BSD_RING_BASE), s, f, am, rm, d, r, w); \
 	MMIO_F(prefix(VEBOX_RING_BASE), s, f, am, rm, d, r, w); \
+	if (HAS_BSD2(dev_priv)) \
+		MMIO_F(prefix(GEN8_BSD2_RING_BASE), s, f, am, rm, d, r, w); \
 } while (0)
 
 #define MMIO_RING_D(prefix, d) \
@@ -1635,10 +1626,9 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
 #undef RING_REG
 
 #define RING_REG(base) (base + 0x6c)
-	MMIO_RING_DFH(RING_REG, D_ALL, 0, instdone_mmio_read, NULL);
-	MMIO_DH(RING_REG(GEN8_BSD2_RING_BASE), D_ALL, instdone_mmio_read, NULL);
+	MMIO_RING_DFH(RING_REG, D_ALL, 0, mmio_read_from_hw, NULL);
 #undef RING_REG
-	MMIO_DH(GEN7_SC_INSTDONE, D_BDW_PLUS, instdone_mmio_read, NULL);
+	MMIO_DH(GEN7_SC_INSTDONE, D_BDW_PLUS, mmio_read_from_hw, NULL);
 
 	MMIO_GM_RDR(0x2148, D_ALL, NULL, NULL);
 	MMIO_GM_RDR(CCID, D_ALL, NULL, NULL);
@@ -1648,7 +1638,7 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
 	MMIO_RING_DFH(RING_TAIL, D_ALL, F_CMD_ACCESS, NULL, NULL);
 	MMIO_RING_DFH(RING_HEAD, D_ALL, F_CMD_ACCESS, NULL, NULL);
 	MMIO_RING_DFH(RING_CTL, D_ALL, F_CMD_ACCESS, NULL, NULL);
-	MMIO_RING_DFH(RING_ACTHD, D_ALL, F_CMD_ACCESS, NULL, NULL);
+	MMIO_RING_DFH(RING_ACTHD, D_ALL, F_CMD_ACCESS, mmio_read_from_hw, NULL);
 	MMIO_RING_GM_RDR(RING_START, D_ALL, NULL, NULL);
 
 	/* RING MODE */
@@ -1662,9 +1652,9 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
 	MMIO_RING_DFH(RING_INSTPM, D_ALL, F_MODE_MASK | F_CMD_ACCESS,
 		NULL, NULL);
 	MMIO_RING_DFH(RING_TIMESTAMP, D_ALL, F_CMD_ACCESS,
-		ring_timestamp_mmio_read, NULL);
+		mmio_read_from_hw, NULL);
 	MMIO_RING_DFH(RING_TIMESTAMP_UDW, D_ALL, F_CMD_ACCESS,
-		ring_timestamp_mmio_read, NULL);
+		mmio_read_from_hw, NULL);
 
 	MMIO_DFH(GEN7_GT_MODE, D_ALL, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
 	MMIO_DFH(CACHE_MODE_0_GEN7, D_ALL, F_MODE_MASK | F_CMD_ACCESS,
@@ -2411,9 +2401,6 @@ static int init_broadwell_mmio_info(struct intel_gvt *gvt)
 	struct drm_i915_private *dev_priv = gvt->dev_priv;
 	int ret;
 
-	MMIO_DFH(RING_IMR(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS, NULL,
-		intel_vgpu_reg_imr_handler);
-
 	MMIO_DH(GEN8_GT_IMR(0), D_BDW_PLUS, NULL, intel_vgpu_reg_imr_handler);
 	MMIO_DH(GEN8_GT_IER(0), D_BDW_PLUS, NULL, intel_vgpu_reg_ier_handler);
 	MMIO_DH(GEN8_GT_IIR(0), D_BDW_PLUS, NULL, intel_vgpu_reg_iir_handler);
@@ -2476,68 +2463,34 @@ static int init_broadwell_mmio_info(struct intel_gvt *gvt)
 	MMIO_DH(GEN8_MASTER_IRQ, D_BDW_PLUS, NULL,
 		intel_vgpu_reg_master_irq_handler);
 
-	MMIO_DFH(RING_HWSTAM(GEN8_BSD2_RING_BASE), D_BDW_PLUS,
-		F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(0x1c134, D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(RING_TAIL(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS,
-		NULL, NULL);
-	MMIO_DFH(RING_HEAD(GEN8_BSD2_RING_BASE), D_BDW_PLUS,
-		F_CMD_ACCESS, NULL, NULL);
-	MMIO_GM_RDR(RING_START(GEN8_BSD2_RING_BASE), D_BDW_PLUS, NULL, NULL);
-	MMIO_DFH(RING_CTL(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS,
-		NULL, NULL);
-	MMIO_DFH(RING_ACTHD(GEN8_BSD2_RING_BASE), D_BDW_PLUS,
-		F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(RING_ACTHD_UDW(GEN8_BSD2_RING_BASE), D_BDW_PLUS,
-		F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(0x1c29c, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL,
-		ring_mode_mmio_write);
-	MMIO_DFH(RING_MI_MODE(GEN8_BSD2_RING_BASE), D_BDW_PLUS,
-		F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(RING_INSTPM(GEN8_BSD2_RING_BASE), D_BDW_PLUS,
-		F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(RING_TIMESTAMP(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS,
-		ring_timestamp_mmio_read, NULL);
-
-	MMIO_RING_DFH(RING_ACTHD_UDW, D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL);
+	MMIO_RING_DFH(RING_ACTHD_UDW, D_BDW_PLUS, F_CMD_ACCESS,
+		mmio_read_from_hw, NULL);
 
 #define RING_REG(base) (base + 0xd0)
 	MMIO_RING_F(RING_REG, 4, F_RO, 0,
 		~_MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET), D_BDW_PLUS, NULL,
 		ring_reset_ctl_write);
-	MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 4, F_RO, 0,
-		~_MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET), D_BDW_PLUS, NULL,
-		ring_reset_ctl_write);
 #undef RING_REG
 
 #define RING_REG(base) (base + 0x230)
 	MMIO_RING_DFH(RING_REG, D_BDW_PLUS, 0, NULL, elsp_mmio_write);
-	MMIO_DH(RING_REG(GEN8_BSD2_RING_BASE), D_BDW_PLUS, NULL, elsp_mmio_write);
 #undef RING_REG
 
 #define RING_REG(base) (base + 0x234)
 	MMIO_RING_F(RING_REG, 8, F_RO | F_CMD_ACCESS, 0, ~0, D_BDW_PLUS,
 		NULL, NULL);
-	MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 4, F_RO | F_CMD_ACCESS, 0,
-		~0LL, D_BDW_PLUS, NULL, NULL);
 #undef RING_REG
 
 #define RING_REG(base) (base + 0x244)
 	MMIO_RING_DFH(RING_REG, D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(RING_REG(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS,
-		NULL, NULL);
 #undef RING_REG
 
 #define RING_REG(base) (base + 0x370)
 	MMIO_RING_F(RING_REG, 48, F_RO, 0, ~0, D_BDW_PLUS, NULL, NULL);
-	MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 48, F_RO, 0, ~0, D_BDW_PLUS,
-		NULL, NULL);
 #undef RING_REG
 
 #define RING_REG(base) (base + 0x3a0)
 	MMIO_RING_DFH(RING_REG, D_BDW_PLUS, F_MODE_MASK, NULL, NULL);
-	MMIO_DFH(RING_REG(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_MODE_MASK, NULL, NULL);
 #undef RING_REG
 
 	MMIO_D(PIPEMISC(PIPE_A), D_BDW_PLUS);
@@ -2557,11 +2510,9 @@ static int init_broadwell_mmio_info(struct intel_gvt *gvt)
 #define RING_REG(base) (base + 0x270)
 	MMIO_RING_F(RING_REG, 32, 0, 0, 0, D_BDW_PLUS, NULL, NULL);
-	MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 32, 0, 0, 0, D_BDW_PLUS, NULL, NULL);
 #undef RING_REG
 
 	MMIO_RING_GM_RDR(RING_HWS_PGA, D_BDW_PLUS, NULL, NULL);
-	MMIO_GM_RDR(RING_HWS_PGA(GEN8_BSD2_RING_BASE), D_BDW_PLUS, NULL, NULL);
 
 	MMIO_DFH(HDC_CHICKEN0, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
@@ -2849,7 +2800,6 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
 	MMIO_D(0x65f08, D_SKL | D_KBL);
 	MMIO_D(0x320f0, D_SKL | D_KBL);
 
-	MMIO_DFH(_REG_VCS2_EXCC, D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
 	MMIO_D(0x70034, D_SKL_PLUS);
 	MMIO_D(0x71034, D_SKL_PLUS);
 	MMIO_D(0x72034, D_SKL_PLUS);


@@ -54,9 +54,6 @@
 #define VGT_SPRSTRIDE(pipe) _PIPE(pipe, _SPRA_STRIDE, _PLANE_STRIDE_2_B)
 
-#define _REG_VECS_EXCC		0x1A028
-#define _REG_VCS2_EXCC		0x1c028
-
 #define _REG_701C0(pipe, plane) (0x701c0 + pipe * 0x1000 + (plane - 1) * 0x100)
 #define _REG_701C4(pipe, plane) (0x701c4 + pipe * 0x1000 + (plane - 1) * 0x100)


@@ -68,6 +68,7 @@ struct shadow_indirect_ctx {
 struct shadow_per_ctx {
 	unsigned long guest_gma;
 	unsigned long shadow_gma;
+	unsigned valid;
 };
 
 struct intel_shadow_wa_ctx {


@@ -2537,6 +2537,10 @@ static const struct file_operations fops = {
 	.poll = i915_perf_poll,
 	.read = i915_perf_read,
 	.unlocked_ioctl = i915_perf_ioctl,
+	/* Our ioctl have no arguments, so it's safe to use the same function
+	 * to handle 32bits compatibility.
+	 */
+	.compat_ioctl = i915_perf_ioctl,
 };


@@ -937,7 +937,10 @@ void vmbus_hvsock_device_unregister(struct vmbus_channel *channel)
 {
 	BUG_ON(!is_hvsock_channel(channel));
 
-	channel->rescind = true;
+	/* We always get a rescind msg when a connection is closed. */
+	while (!READ_ONCE(channel->probe_done) || !READ_ONCE(channel->rescind))
+		msleep(1);
+
 	vmbus_device_unregister(channel->device_obj);
 }
 EXPORT_SYMBOL_GPL(vmbus_hvsock_device_unregister);


@@ -477,6 +477,11 @@ static int da9052_hwmon_probe(struct platform_device *pdev)
 	/* disable touchscreen features */
 	da9052_reg_write(hwmon->da9052, DA9052_TSI_CONT_A_REG, 0x00);
 
+	/* Sample every 1ms */
+	da9052_reg_update(hwmon->da9052, DA9052_ADC_CONT_REG,
+			  DA9052_ADCCONT_ADCMODE,
+			  DA9052_ADCCONT_ADCMODE);
+
 	err = da9052_request_irq(hwmon->da9052, DA9052_IRQ_TSIREADY,
 				 "tsiready-irq", da9052_tsi_datardy_irq,
 				 hwmon);


@@ -268,14 +268,11 @@ static int tmp102_probe(struct i2c_client *client,
 		return err;
 	}
 
-	tmp102->ready_time = jiffies;
-	if (tmp102->config_orig & TMP102_CONF_SD) {
-		/*
-		 * Mark that we are not ready with data until the first
-		 * conversion is complete
-		 */
-		tmp102->ready_time += msecs_to_jiffies(CONVERSION_TIME_MS);
-	}
+	/*
+	 * Mark that we are not ready with data until the first
+	 * conversion is complete
+	 */
+	tmp102->ready_time = jiffies + msecs_to_jiffies(CONVERSION_TIME_MS);
 
 	hwmon_dev = devm_hwmon_device_register_with_info(dev, client->name,
 							 tmp102,


@@ -243,6 +243,8 @@ config DA9150_GPADC
 config DLN2_ADC
 	tristate "Diolan DLN-2 ADC driver support"
 	depends on MFD_DLN2
+	select IIO_BUFFER
+	select IIO_TRIGGERED_BUFFER
 	help
 	  Say yes here to build support for Diolan DLN-2 ADC.


@@ -225,6 +225,7 @@ struct at91_adc_trigger {
 	char				*name;
 	unsigned int			trgmod_value;
 	unsigned int			edge_type;
+	bool				hw_trig;
 };
 
 struct at91_adc_state {
@@ -254,16 +255,25 @@ static const struct at91_adc_trigger at91_adc_trigger_list[] = {
 		.name = "external_rising",
 		.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_RISE,
 		.edge_type = IRQ_TYPE_EDGE_RISING,
+		.hw_trig = true,
 	},
 	{
 		.name = "external_falling",
 		.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_FALL,
 		.edge_type = IRQ_TYPE_EDGE_FALLING,
+		.hw_trig = true,
 	},
 	{
 		.name = "external_any",
 		.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_ANY,
 		.edge_type = IRQ_TYPE_EDGE_BOTH,
+		.hw_trig = true,
+	},
+	{
+		.name = "software",
+		.trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_NO_TRIGGER,
+		.edge_type = IRQ_TYPE_NONE,
+		.hw_trig = false,
 	},
 };
@@ -597,7 +607,7 @@ static int at91_adc_probe(struct platform_device *pdev)
 	struct at91_adc_state *st;
 	struct resource	*res;
 	int ret, i;
-	u32 edge_type;
+	u32 edge_type = IRQ_TYPE_NONE;
 
 	indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*st));
 	if (!indio_dev)
@@ -641,14 +651,14 @@ static int at91_adc_probe(struct platform_device *pdev)
 	ret = of_property_read_u32(pdev->dev.of_node,
 				   "atmel,trigger-edge-type", &edge_type);
 	if (ret) {
-		dev_err(&pdev->dev,
-			"invalid or missing value for atmel,trigger-edge-type\n");
-		return ret;
+		dev_dbg(&pdev->dev,
+			"atmel,trigger-edge-type not specified, only software trigger available\n");
 	}
 
 	st->selected_trig = NULL;
 
-	for (i = 0; i < AT91_SAMA5D2_HW_TRIG_CNT; i++)
+	/* find the right trigger, or no trigger at all */
+	for (i = 0; i < AT91_SAMA5D2_HW_TRIG_CNT + 1; i++)
 		if (at91_adc_trigger_list[i].edge_type == edge_type) {
 			st->selected_trig = &at91_adc_trigger_list[i];
 			break;
@@ -717,24 +727,27 @@ static int at91_adc_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, indio_dev);
 
-	ret = at91_adc_buffer_init(indio_dev);
-	if (ret < 0) {
-		dev_err(&pdev->dev, "couldn't initialize the buffer.\n");
-		goto per_clk_disable_unprepare;
-	}
+	if (st->selected_trig->hw_trig) {
+		ret = at91_adc_buffer_init(indio_dev);
+		if (ret < 0) {
+			dev_err(&pdev->dev, "couldn't initialize the buffer.\n");
+			goto per_clk_disable_unprepare;
+		}
 
-	ret = at91_adc_trigger_init(indio_dev);
-	if (ret < 0) {
-		dev_err(&pdev->dev, "couldn't setup the triggers.\n");
-		goto per_clk_disable_unprepare;
+		ret = at91_adc_trigger_init(indio_dev);
+		if (ret < 0) {
+			dev_err(&pdev->dev, "couldn't setup the triggers.\n");
+			goto per_clk_disable_unprepare;
+		}
 	}
 
 	ret = iio_device_register(indio_dev);
 	if (ret < 0)
 		goto per_clk_disable_unprepare;
 
-	dev_info(&pdev->dev, "setting up trigger as %s\n",
-		 st->selected_trig->name);
+	if (st->selected_trig->hw_trig)
+		dev_info(&pdev->dev, "setting up trigger as %s\n",
+			 st->selected_trig->name);
 
 	dev_info(&pdev->dev, "version: %x\n",
 		 readl_relaxed(st->base + AT91_SAMA5D2_VERSION));


@@ -72,6 +72,7 @@ int iio_simple_dummy_write_event_config(struct iio_dev *indio_dev,
 			st->event_en = state;
 		else
 			return -EINVAL;
+		break;
 	default:
 		return -EINVAL;
 	}


@@ -865,7 +865,6 @@ complete:
 static int zpa2326_wait_oneshot_completion(const struct iio_dev *indio_dev,
 					   struct zpa2326_private *private)
 {
-	int ret;
 	unsigned int val;
 	long timeout;
 
@@ -887,14 +886,11 @@ static int zpa2326_wait_oneshot_completion(const struct iio_dev *indio_dev,
 		/* Timed out. */
 		zpa2326_warn(indio_dev, "no one shot interrupt occurred (%ld)",
 			     timeout);
-		ret = -ETIME;
-	} else if (timeout < 0) {
-		zpa2326_warn(indio_dev,
-			     "wait for one shot interrupt cancelled");
-		ret = -ERESTARTSYS;
+		return -ETIME;
 	}
 
-	return ret;
+	zpa2326_warn(indio_dev, "wait for one shot interrupt cancelled");
+	return -ERESTARTSYS;
 }
 
 static int zpa2326_init_managed_irq(struct device *parent,


@@ -39,8 +39,12 @@
 #define AS3935_AFE_GAIN_MAX	0x1F
 #define AS3935_AFE_PWR_BIT	BIT(0)
 
+#define AS3935_NFLWDTH		0x01
+#define AS3935_NFLWDTH_MASK	0x7f
+
 #define AS3935_INT		0x03
 #define AS3935_INT_MASK		0x0f
+#define AS3935_DISTURB_INT	BIT(2)
 #define AS3935_EVENT_INT	BIT(3)
 #define AS3935_NOISE_INT	BIT(0)
@@ -48,6 +52,7 @@
 #define AS3935_DATA_MASK	0x3F
 
 #define AS3935_TUNE_CAP		0x08
+#define AS3935_DEFAULTS		0x3C
 #define AS3935_CALIBRATE	0x3D
 
 #define AS3935_READ_DATA	BIT(14)
@@ -62,7 +67,9 @@ struct as3935_state {
 	struct mutex lock;
 	struct delayed_work work;
 
+	unsigned long noise_tripped;
 	u32 tune_cap;
+	u32 nflwdth_reg;
 	u8 buffer[16]; /* 8-bit data + 56-bit padding + 64-bit timestamp */
 	u8 buf[2] ____cacheline_aligned;
 };
@@ -145,12 +152,29 @@ static ssize_t as3935_sensor_sensitivity_store(struct device *dev,
 	return len;
 }
 
+static ssize_t as3935_noise_level_tripped_show(struct device *dev,
+					       struct device_attribute *attr,
+					       char *buf)
+{
+	struct as3935_state *st = iio_priv(dev_to_iio_dev(dev));
+	int ret;
+
+	mutex_lock(&st->lock);
+	ret = sprintf(buf, "%d\n", !time_after(jiffies, st->noise_tripped + HZ));
+	mutex_unlock(&st->lock);
+
+	return ret;
+}
+
 static IIO_DEVICE_ATTR(sensor_sensitivity, S_IRUGO | S_IWUSR,
 	as3935_sensor_sensitivity_show, as3935_sensor_sensitivity_store, 0);
 
+static IIO_DEVICE_ATTR(noise_level_tripped, S_IRUGO,
+	as3935_noise_level_tripped_show, NULL, 0);
+
 static struct attribute *as3935_attributes[] = {
 	&iio_dev_attr_sensor_sensitivity.dev_attr.attr,
+	&iio_dev_attr_noise_level_tripped.dev_attr.attr,
 	NULL,
 };
@@ -246,7 +270,11 @@ static void as3935_event_work(struct work_struct *work)
 	case AS3935_EVENT_INT:
 		iio_trigger_poll_chained(st->trig);
 		break;
+	case AS3935_DISTURB_INT:
 	case AS3935_NOISE_INT:
+		mutex_lock(&st->lock);
+		st->noise_tripped = jiffies;
+		mutex_unlock(&st->lock);
 		dev_warn(&st->spi->dev, "noise level is too high\n");
 		break;
 	}
@@ -269,15 +297,14 @@ static irqreturn_t as3935_interrupt_handler(int irq, void *private)
 
 static void calibrate_as3935(struct as3935_state *st)
 {
-	/* mask disturber interrupt bit */
-	as3935_write(st, AS3935_INT, BIT(5));
-
+	as3935_write(st, AS3935_DEFAULTS, 0x96);
 	as3935_write(st, AS3935_CALIBRATE, 0x96);
 	as3935_write(st, AS3935_TUNE_CAP,
 		BIT(5) | (st->tune_cap / TUNE_CAP_DIV));
 
 	mdelay(2);
 	as3935_write(st, AS3935_TUNE_CAP, (st->tune_cap / TUNE_CAP_DIV));
+	as3935_write(st, AS3935_NFLWDTH, st->nflwdth_reg);
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -370,6 +397,15 @@ static int as3935_probe(struct spi_device *spi)
 		return -EINVAL;
 	}
 
+	ret = of_property_read_u32(np,
+			"ams,nflwdth", &st->nflwdth_reg);
+	if (!ret && st->nflwdth_reg > AS3935_NFLWDTH_MASK) {
+		dev_err(&spi->dev,
+			"invalid nflwdth setting of %d\n",
+			st->nflwdth_reg);
+		return -EINVAL;
+	}
+
 	indio_dev->dev.parent = &spi->dev;
 	indio_dev->name = spi_get_device_id(spi)->name;
 	indio_dev->channels = as3935_channels;
@@ -384,6 +420,7 @@ static int as3935_probe(struct spi_device *spi)
 		return -ENOMEM;
 
 	st->trig = trig;
+	st->noise_tripped = jiffies - HZ;
 	trig->dev.parent = indio_dev->dev.parent;
 	iio_trigger_set_drvdata(trig, indio_dev);
 	trig->ops = &iio_interrupt_trigger_ops;
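The noise_level_tripped attribute is a sticky flag: the interrupt path stamps st->noise_tripped with jiffies, and the sysfs read reports 1 until HZ jiffies (one second) have elapsed. A standalone sketch of the comparison, including the wrap-around-safe time_after() idiom (the HZ value here is an assumption for the sketch):

```c
#include <assert.h>
#include <limits.h>

#define HZ 100	/* assumed tick rate for this sketch */

/* Overflow-safe "a is strictly after b", as in the kernel's time_after(). */
#define time_after(a, b) ((long)((b) - (a)) < 0)

/* Mirrors the as3935 sysfs read: report 1 while less than one second
 * (HZ jiffies) has elapsed since the noise interrupt stamped trip_time. */
static int tripped_after(unsigned long trip_time, unsigned long now)
{
	return !time_after(now, trip_time + HZ);
}
```

The subtraction-then-cast trick is why the flag keeps working across a jiffies counter wrap, which a plain `now > trip_time + HZ` comparison would not survive.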


@@ -175,13 +175,24 @@ static int rdma_nl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
 	    !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
+	/*
+	 * LS responses overload the 0x100 (NLM_F_ROOT) flag.  Don't
+	 * mistakenly call the .dump() function.
+	 */
+	if (index == RDMA_NL_LS) {
+		if (cb_table[op].doit)
+			return cb_table[op].doit(skb, nlh, extack);
+
+		return -EINVAL;
+	}
+
 	/* FIXME: Convert IWCM to properly handle doit callbacks */
 	if ((nlh->nlmsg_flags & NLM_F_DUMP) || index == RDMA_NL_RDMA_CM ||
 	    index == RDMA_NL_IWCM) {
 		struct netlink_dump_control c = {
 			.dump = cb_table[op].dump,
 		};
-		return netlink_dump_start(nls, skb, nlh, &c);
+		if (c.dump)
+			return netlink_dump_start(nls, skb, nlh, &c);
+		return -EINVAL;
 	}
 
 	if (cb_table[op].doit)


@@ -1258,6 +1258,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
 	{ "ELAN0605", 0 },
 	{ "ELAN0609", 0 },
 	{ "ELAN060B", 0 },
+	{ "ELAN0611", 0 },
 	{ "ELAN1000", 0 },
 	{ }
 };


@@ -232,9 +232,10 @@ static int rmi_f30_map_gpios(struct rmi_function *fn,
 	unsigned int trackstick_button = BTN_LEFT;
 	bool button_mapped = false;
 	int i;
+	int button_count = min_t(u8, f30->gpioled_count, TRACKSTICK_RANGE_END);
 
 	f30->gpioled_key_map = devm_kcalloc(&fn->dev,
-					    f30->gpioled_count,
+					    button_count,
 					    sizeof(f30->gpioled_key_map[0]),
 					    GFP_KERNEL);
 	if (!f30->gpioled_key_map) {
@@ -242,7 +243,7 @@ static int rmi_f30_map_gpios(struct rmi_function *fn,
 		return -ENOMEM;
 	}
 
-	for (i = 0; i < f30->gpioled_count; i++) {
+	for (i = 0; i < button_count; i++) {
 		if (!rmi_f30_is_valid_button(i, f30->ctrl))
 			continue;

View File

@@ -230,13 +230,17 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report,
 	/* Walk this report and pull out the info we need */
 	while (i < length) {
 
-		prefix = report[i];
-
-		/* Skip over prefix */
-		i++;
+		prefix = report[i++];
 
 		/* Determine data size and save the data in the proper variable */
-		size = PREF_SIZE(prefix);
+		size = (1U << PREF_SIZE(prefix)) >> 1;
+		if (i + size > length) {
+			dev_err(ddev,
+				"Not enough data (need %d, have %d)\n",
+				i + size, length);
+			break;
+		}
+
 		switch (size) {
 		case 1:
 			data = report[i];
@@ -244,8 +248,7 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report,
 		case 2:
 			data16 = get_unaligned_le16(&report[i]);
 			break;
-		case 3:
-			size = 4;
+		case 4:
 			data32 = get_unaligned_le32(&report[i]);
 			break;
 		}

View File

@@ -107,6 +107,10 @@ struct its_node {
 
 #define ITS_ITT_ALIGN		SZ_256
 
+/* The maximum number of VPEID bits supported by VLPI commands */
+#define ITS_MAX_VPEID_BITS	(16)
+#define ITS_MAX_VPEID		(1 << (ITS_MAX_VPEID_BITS))
+
 /* Convert page order to size in bytes */
 #define PAGE_ORDER_TO_SIZE(o)	(PAGE_SIZE << (o))
 
@@ -308,7 +312,7 @@ static void its_encode_size(struct its_cmd_block *cmd, u8 size)
 
 static void its_encode_itt(struct its_cmd_block *cmd, u64 itt_addr)
 {
-	its_mask_encode(&cmd->raw_cmd[2], itt_addr >> 8, 50, 8);
+	its_mask_encode(&cmd->raw_cmd[2], itt_addr >> 8, 51, 8);
 }
 
 static void its_encode_valid(struct its_cmd_block *cmd, int valid)
@@ -318,7 +322,7 @@ static void its_encode_valid(struct its_cmd_block *cmd, int valid)
 
 static void its_encode_target(struct its_cmd_block *cmd, u64 target_addr)
 {
-	its_mask_encode(&cmd->raw_cmd[2], target_addr >> 16, 50, 16);
+	its_mask_encode(&cmd->raw_cmd[2], target_addr >> 16, 51, 16);
 }
 
 static void its_encode_collection(struct its_cmd_block *cmd, u16 col)
@@ -358,7 +362,7 @@ static void its_encode_its_list(struct its_cmd_block *cmd, u16 its_list)
 
 static void its_encode_vpt_addr(struct its_cmd_block *cmd, u64 vpt_pa)
 {
-	its_mask_encode(&cmd->raw_cmd[3], vpt_pa >> 16, 50, 16);
+	its_mask_encode(&cmd->raw_cmd[3], vpt_pa >> 16, 51, 16);
 }
 
 static void its_encode_vpt_size(struct its_cmd_block *cmd, u8 vpt_size)
@@ -1478,9 +1482,9 @@ static int its_setup_baser(struct its_node *its, struct its_baser *baser,
 	u64 val = its_read_baser(its, baser);
 	u64 esz = GITS_BASER_ENTRY_SIZE(val);
 	u64 type = GITS_BASER_TYPE(val);
+	u64 baser_phys, tmp;
 	u32 alloc_pages;
 	void *base;
-	u64 tmp;
 
 retry_alloc_baser:
 	alloc_pages = (PAGE_ORDER_TO_SIZE(order) / psz);
@@ -1496,8 +1500,24 @@ retry_alloc_baser:
 	if (!base)
 		return -ENOMEM;
 
+	baser_phys = virt_to_phys(base);
+
+	/* Check if the physical address of the memory is above 48bits */
+	if (IS_ENABLED(CONFIG_ARM64_64K_PAGES) && (baser_phys >> 48)) {
+
+		/* 52bit PA is supported only when PageSize=64K */
+		if (psz != SZ_64K) {
+			pr_err("ITS: no 52bit PA support when psz=%d\n", psz);
+			free_pages((unsigned long)base, order);
+			return -ENXIO;
+		}
+
+		/* Convert 52bit PA to 48bit field */
+		baser_phys = GITS_BASER_PHYS_52_to_48(baser_phys);
+	}
+
 retry_baser:
-	val = (virt_to_phys(base)			 |
+	val = (baser_phys				 |
 		(type << GITS_BASER_TYPE_SHIFT)		 |
 		((esz - 1) << GITS_BASER_ENTRY_SIZE_SHIFT)	 |
 		((alloc_pages - 1) << GITS_BASER_PAGES_SHIFT)	 |
@@ -1582,13 +1602,12 @@ retry_baser:
 
 static bool its_parse_indirect_baser(struct its_node *its,
 				     struct its_baser *baser,
-				     u32 psz, u32 *order)
+				     u32 psz, u32 *order, u32 ids)
 {
 	u64 tmp = its_read_baser(its, baser);
 	u64 type = GITS_BASER_TYPE(tmp);
 	u64 esz = GITS_BASER_ENTRY_SIZE(tmp);
 	u64 val = GITS_BASER_InnerShareable | GITS_BASER_RaWaWb;
-	u32 ids = its->device_ids;
 	u32 new_order = *order;
 	bool indirect = false;
 
@@ -1680,9 +1699,13 @@ static int its_alloc_tables(struct its_node *its)
 			continue;
 
 		case GITS_BASER_TYPE_DEVICE:
+			indirect = its_parse_indirect_baser(its, baser,
+							    psz, &order,
+							    its->device_ids);
 		case GITS_BASER_TYPE_VCPU:
 			indirect = its_parse_indirect_baser(its, baser,
-							    psz, &order);
+							    psz, &order,
+							    ITS_MAX_VPEID_BITS);
 			break;
 		}
 
@@ -2551,7 +2574,7 @@ static struct irq_chip its_vpe_irq_chip = {
 
 static int its_vpe_id_alloc(void)
 {
-	return ida_simple_get(&its_vpeid_ida, 0, 1 << 16, GFP_KERNEL);
+	return ida_simple_get(&its_vpeid_ida, 0, ITS_MAX_VPEID, GFP_KERNEL);
 }
 
 static void its_vpe_id_free(u16 id)
@@ -2851,7 +2874,7 @@ static int its_init_vpe_domain(void)
 		return -ENOMEM;
 	}
 
-	BUG_ON(entries != vpe_proxy.dev->nr_ites);
+	BUG_ON(entries > vpe_proxy.dev->nr_ites);
 
 	raw_spin_lock_init(&vpe_proxy.lock);
 	vpe_proxy.next_victim = 0;

View File

@@ -141,7 +141,7 @@ static void __init tangox_irq_init_chip(struct irq_chip_generic *gc,
 	for (i = 0; i < 2; i++) {
 		ct[i].chip.irq_ack = irq_gc_ack_set_bit;
 		ct[i].chip.irq_mask = irq_gc_mask_disable_reg;
-		ct[i].chip.irq_mask_ack = irq_gc_mask_disable_reg_and_ack;
+		ct[i].chip.irq_mask_ack = irq_gc_mask_disable_and_ack_set;
 		ct[i].chip.irq_unmask = irq_gc_unmask_enable_reg;
 		ct[i].chip.irq_set_type = tangox_irq_set_type;
 		ct[i].chip.name = gc->domain->name;

View File

@@ -342,7 +342,7 @@ static int sun4i_can_start(struct net_device *dev)
 
 	/* enter the selected mode */
 	mod_reg_val = readl(priv->base + SUN4I_REG_MSEL_ADDR);
-	if (priv->can.ctrlmode & CAN_CTRLMODE_PRESUME_ACK)
+	if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)
 		mod_reg_val |= SUN4I_MSEL_LOOPBACK_MODE;
 	else if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
 		mod_reg_val |= SUN4I_MSEL_LISTEN_ONLY_MODE;
@@ -811,7 +811,6 @@ static int sun4ican_probe(struct platform_device *pdev)
 	priv->can.ctrlmode_supported = CAN_CTRLMODE_BERR_REPORTING |
 				       CAN_CTRLMODE_LISTENONLY |
 				       CAN_CTRLMODE_LOOPBACK |
-				       CAN_CTRLMODE_PRESUME_ACK |
 				       CAN_CTRLMODE_3_SAMPLES;
 	priv->base = addr;
 	priv->clk = clk;

View File

@@ -137,6 +137,7 @@ static inline bool kvaser_is_usbcan(const struct usb_device_id *id)
 #define CMD_RESET_ERROR_COUNTER		49
 #define CMD_TX_ACKNOWLEDGE		50
 #define CMD_CAN_ERROR_EVENT		51
+#define CMD_FLUSH_QUEUE_REPLY		68
 
 #define CMD_LEAF_USB_THROTTLE		77
 #define CMD_LEAF_LOG_MESSAGE		106
@@ -1301,6 +1302,11 @@ static void kvaser_usb_handle_message(const struct kvaser_usb *dev,
 			goto warn;
 		break;
 
+	case CMD_FLUSH_QUEUE_REPLY:
+		if (dev->family != KVASER_LEAF)
+			goto warn;
+		break;
+
 	default:
 warn:		dev_warn(dev->udev->dev.parent,
 			 "Unhandled message (%d)\n", msg->id);
@@ -1609,7 +1615,8 @@ static int kvaser_usb_close(struct net_device *netdev)
 	if (err)
 		netdev_warn(netdev, "Cannot flush queue, error %d\n", err);
 
-	if (kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, priv->channel))
+	err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, priv->channel);
+	if (err)
 		netdev_warn(netdev, "Cannot reset card, error %d\n", err);
 
 	err = kvaser_usb_stop_chip(priv);

View File

@@ -1824,11 +1824,12 @@ static void e1000_get_ethtool_stats(struct net_device *netdev,
 {
 	struct e1000_adapter *adapter = netdev_priv(netdev);
 	int i;
-	char *p = NULL;
 	const struct e1000_stats *stat = e1000_gstrings_stats;
 
 	e1000_update_stats(adapter);
-	for (i = 0; i < E1000_GLOBAL_STATS_LEN; i++) {
+	for (i = 0; i < E1000_GLOBAL_STATS_LEN; i++, stat++) {
+		char *p;
+
 		switch (stat->type) {
 		case NETDEV_STATS:
 			p = (char *)netdev + stat->stat_offset;
@@ -1839,15 +1840,13 @@ static void e1000_get_ethtool_stats(struct net_device *netdev,
 		default:
 			WARN_ONCE(1, "Invalid E1000 stat type: %u index %d\n",
 				  stat->type, i);
-			break;
+			continue;
 		}
 
 		if (stat->sizeof_stat == sizeof(u64))
 			data[i] = *(u64 *)p;
 		else
 			data[i] = *(u32 *)p;
-
-		stat++;
 	}
 /* BUG_ON(i != E1000_STATS_LEN); */
 }

View File

@@ -520,8 +520,6 @@ void e1000_down(struct e1000_adapter *adapter)
 	struct net_device *netdev = adapter->netdev;
 	u32 rctl, tctl;
 
-	netif_carrier_off(netdev);
-
 	/* disable receives in the hardware */
 	rctl = er32(RCTL);
 	ew32(RCTL, rctl & ~E1000_RCTL_EN);
@@ -537,6 +535,15 @@ void e1000_down(struct e1000_adapter *adapter)
 	E1000_WRITE_FLUSH();
 	msleep(10);
 
+	/* Set the carrier off after transmits have been disabled in the
+	 * hardware, to avoid race conditions with e1000_watchdog() (which
+	 * may be running concurrently to us, checking for the carrier
+	 * bit to decide whether it should enable transmits again). Such
+	 * a race condition would result into transmission being disabled
+	 * in the hardware until the next IFF_DOWN+IFF_UP cycle.
+	 */
+	netif_carrier_off(netdev);
+
 	napi_disable(&adapter->napi);
 
 	e1000_irq_disable(adapter);

View File

@@ -2111,6 +2111,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 
 		if (unlikely(i40e_rx_is_programming_status(qword))) {
 			i40e_clean_programming_status(rx_ring, rx_desc, qword);
+			cleaned_count++;
 			continue;
 		}
 		size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
@@ -2277,7 +2278,7 @@ static inline void i40e_update_enable_itr(struct i40e_vsi *vsi,
 		goto enable_int;
 	}
 
-	if (ITR_IS_DYNAMIC(tx_itr_setting)) {
+	if (ITR_IS_DYNAMIC(rx_itr_setting)) {
 		rx = i40e_set_new_dynamic_itr(&q_vector->rx);
 		rxval = i40e_buildreg_itr(I40E_RX_ITR, q_vector->rx.itr);
 	}

View File

@@ -5673,7 +5673,7 @@ dma_error:
 			       DMA_TO_DEVICE);
 		dma_unmap_len_set(tx_buffer, len, 0);
 
-		if (i--)
+		if (i-- == 0)
 			i += tx_ring->count;
 
 		tx_buffer = &tx_ring->tx_buffer_info[i];
 	}

View File

@@ -8156,29 +8156,23 @@ static int ixgbe_tx_map(struct ixgbe_ring *tx_ring,
 	return 0;
 dma_error:
 	dev_err(tx_ring->dev, "TX DMA map failed\n");
-	tx_buffer = &tx_ring->tx_buffer_info[i];
 
 	/* clear dma mappings for failed tx_buffer_info map */
-	while (tx_buffer != first) {
+	for (;;) {
+		tx_buffer = &tx_ring->tx_buffer_info[i];
 		if (dma_unmap_len(tx_buffer, len))
 			dma_unmap_page(tx_ring->dev,
 				       dma_unmap_addr(tx_buffer, dma),
 				       dma_unmap_len(tx_buffer, len),
 				       DMA_TO_DEVICE);
 		dma_unmap_len_set(tx_buffer, len, 0);
-
-		if (i--)
+		if (tx_buffer == first)
+			break;
+		if (i == 0)
 			i += tx_ring->count;
-		tx_buffer = &tx_ring->tx_buffer_info[i];
+		i--;
 	}
 
-	if (dma_unmap_len(tx_buffer, len))
-		dma_unmap_single(tx_ring->dev,
-				 dma_unmap_addr(tx_buffer, dma),
-				 dma_unmap_len(tx_buffer, len),
-				 DMA_TO_DEVICE);
-	dma_unmap_len_set(tx_buffer, len, 0);
-
 	dev_kfree_skb_any(first->skb);
 	first->skb = NULL;

View File

@@ -1167,6 +1167,11 @@ struct mvpp2_bm_pool {
 	u32 port_map;
 };
 
+#define IS_TSO_HEADER(txq_pcpu, addr) \
+	((addr) >= (txq_pcpu)->tso_headers_dma && \
+	 (addr) < (txq_pcpu)->tso_headers_dma + \
+	 (txq_pcpu)->size * TSO_HEADER_SIZE)
+
 /* Queue modes */
 #define MVPP2_QDIST_SINGLE_MODE	0
 #define MVPP2_QDIST_MULTI_MODE	1
@@ -1534,7 +1539,7 @@ static bool mvpp2_prs_tcam_data_cmp(struct mvpp2_prs_entry *pe, int offs,
 	int off = MVPP2_PRS_TCAM_DATA_BYTE(offs);
 	u16 tcam_data;
 
-	tcam_data = (8 << pe->tcam.byte[off + 1]) | pe->tcam.byte[off];
+	tcam_data = (pe->tcam.byte[off + 1] << 8) | pe->tcam.byte[off];
 	if (tcam_data != data)
 		return false;
 	return true;
@@ -2609,8 +2614,8 @@ static void mvpp2_prs_mac_init(struct mvpp2 *priv)
 	/* place holders only - no ports */
 	mvpp2_prs_mac_drop_all_set(priv, 0, false);
 	mvpp2_prs_mac_promisc_set(priv, 0, false);
-	mvpp2_prs_mac_multi_set(priv, MVPP2_PE_MAC_MC_ALL, 0, false);
-	mvpp2_prs_mac_multi_set(priv, MVPP2_PE_MAC_MC_IP6, 0, false);
+	mvpp2_prs_mac_multi_set(priv, 0, MVPP2_PE_MAC_MC_ALL, false);
+	mvpp2_prs_mac_multi_set(priv, 0, MVPP2_PE_MAC_MC_IP6, false);
 }
 
 /* Set default entries for various types of dsa packets */
@@ -3391,7 +3396,7 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da,
 	struct mvpp2_prs_entry *pe;
 	int tid;
 
-	pe = kzalloc(sizeof(*pe), GFP_KERNEL);
+	pe = kzalloc(sizeof(*pe), GFP_ATOMIC);
 	if (!pe)
 		return NULL;
 	mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_MAC);
@@ -3453,7 +3458,7 @@ static int mvpp2_prs_mac_da_accept(struct mvpp2 *priv, int port,
 		if (tid < 0)
 			return tid;
 
-		pe = kzalloc(sizeof(*pe), GFP_KERNEL);
+		pe = kzalloc(sizeof(*pe), GFP_ATOMIC);
 		if (!pe)
 			return -ENOMEM;
 		mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_MAC);
@@ -5321,8 +5326,9 @@ static void mvpp2_txq_bufs_free(struct mvpp2_port *port,
 		struct mvpp2_txq_pcpu_buf *tx_buf =
 			txq_pcpu->buffs + txq_pcpu->txq_get_index;
 
-		dma_unmap_single(port->dev->dev.parent, tx_buf->dma,
-				 tx_buf->size, DMA_TO_DEVICE);
+		if (!IS_TSO_HEADER(txq_pcpu, tx_buf->dma))
+			dma_unmap_single(port->dev->dev.parent, tx_buf->dma,
+					 tx_buf->size, DMA_TO_DEVICE);
 		if (tx_buf->skb)
 			dev_kfree_skb_any(tx_buf->skb);
@@ -5609,7 +5615,7 @@ static int mvpp2_txq_init(struct mvpp2_port *port,
 		txq_pcpu->tso_headers =
 			dma_alloc_coherent(port->dev->dev.parent,
-					   MVPP2_AGGR_TXQ_SIZE * TSO_HEADER_SIZE,
+					   txq_pcpu->size * TSO_HEADER_SIZE,
 					   &txq_pcpu->tso_headers_dma,
 					   GFP_KERNEL);
 		if (!txq_pcpu->tso_headers)
@@ -5623,7 +5629,7 @@ cleanup:
 		kfree(txq_pcpu->buffs);
 
 		dma_free_coherent(port->dev->dev.parent,
-				  MVPP2_AGGR_TXQ_SIZE * MVPP2_DESC_ALIGNED_SIZE,
+				  txq_pcpu->size * TSO_HEADER_SIZE,
 				  txq_pcpu->tso_headers,
 				  txq_pcpu->tso_headers_dma);
 	}
@@ -5647,7 +5653,7 @@ static void mvpp2_txq_deinit(struct mvpp2_port *port,
 		kfree(txq_pcpu->buffs);
 
 		dma_free_coherent(port->dev->dev.parent,
-				  MVPP2_AGGR_TXQ_SIZE * MVPP2_DESC_ALIGNED_SIZE,
+				  txq_pcpu->size * TSO_HEADER_SIZE,
 				  txq_pcpu->tso_headers,
 				  txq_pcpu->tso_headers_dma);
 	}
@@ -6212,12 +6218,15 @@ static inline void
 tx_desc_unmap_put(struct mvpp2_port *port, struct mvpp2_tx_queue *txq,
 		  struct mvpp2_tx_desc *desc)
 {
+	struct mvpp2_txq_pcpu *txq_pcpu = this_cpu_ptr(txq->pcpu);
+
 	dma_addr_t buf_dma_addr =
 		mvpp2_txdesc_dma_addr_get(port, desc);
 	size_t buf_sz =
 		mvpp2_txdesc_size_get(port, desc);
 
-	dma_unmap_single(port->dev->dev.parent, buf_dma_addr,
-			 buf_sz, DMA_TO_DEVICE);
+	if (!IS_TSO_HEADER(txq_pcpu, buf_dma_addr))
+		dma_unmap_single(port->dev->dev.parent, buf_dma_addr,
+				 buf_sz, DMA_TO_DEVICE);
 	mvpp2_txq_desc_put(txq);
 }
 
@@ -6489,7 +6498,7 @@ out:
 	}
 
 	/* Finalize TX processing */
-	if (txq_pcpu->count >= txq->done_pkts_coal)
+	if (!port->has_tx_irqs && txq_pcpu->count >= txq->done_pkts_coal)
 		mvpp2_txq_done(port, txq, txq_pcpu);
 
 	/* Set the timer in case not all frags were processed */

View File

@@ -77,35 +77,41 @@ static void add_delayed_event(struct mlx5_priv *priv,
 	list_add_tail(&delayed_event->list, &priv->waiting_events_list);
 }
 
-static void fire_delayed_event_locked(struct mlx5_device_context *dev_ctx,
-				      struct mlx5_core_dev *dev,
-				      struct mlx5_priv *priv)
+static void delayed_event_release(struct mlx5_device_context *dev_ctx,
+				  struct mlx5_priv *priv)
 {
+	struct mlx5_core_dev *dev = container_of(priv, struct mlx5_core_dev, priv);
 	struct mlx5_delayed_event *de;
 	struct mlx5_delayed_event *n;
+	struct list_head temp;
 
+	INIT_LIST_HEAD(&temp);
+
+	spin_lock_irq(&priv->ctx_lock);
 
-	/* stop delaying events */
 	priv->is_accum_events = false;
+	list_splice_init(&priv->waiting_events_list, &temp);
+	if (!dev_ctx->context)
+		goto out;
 
-	/* fire all accumulated events before new event comes */
-	list_for_each_entry_safe(de, n, &priv->waiting_events_list, list) {
+	list_for_each_entry_safe(de, n, &priv->waiting_events_list, list)
 		dev_ctx->intf->event(dev, dev_ctx->context, de->event, de->param);
+
+out:
+	spin_unlock_irq(&priv->ctx_lock);
+
+	list_for_each_entry_safe(de, n, &temp, list) {
 		list_del(&de->list);
 		kfree(de);
 	}
 }
 
-static void cleanup_delayed_evets(struct mlx5_priv *priv)
+/* accumulating events that can come after mlx5_ib calls to
+ * ib_register_device, till adding that interface to the events list.
+ */
+static void delayed_event_start(struct mlx5_priv *priv)
 {
-	struct mlx5_delayed_event *de;
-	struct mlx5_delayed_event *n;
-
 	spin_lock_irq(&priv->ctx_lock);
-	priv->is_accum_events = false;
-	list_for_each_entry_safe(de, n, &priv->waiting_events_list, list) {
-		list_del(&de->list);
-		kfree(de);
-	}
+	priv->is_accum_events = true;
 	spin_unlock_irq(&priv->ctx_lock);
 }
@@ -122,11 +128,8 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
 		return;
 
 	dev_ctx->intf = intf;
-	/* accumulating events that can come after mlx5_ib calls to
-	 * ib_register_device, till adding that interface to the events list.
-	 */
-	priv->is_accum_events = true;
+
+	delayed_event_start(priv);
 
 	dev_ctx->context = intf->add(dev);
 	set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
@@ -137,8 +140,6 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
 		spin_lock_irq(&priv->ctx_lock);
 		list_add_tail(&dev_ctx->list, &priv->ctx_list);
 
-		fire_delayed_event_locked(dev_ctx, dev, priv);
-
 #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
 		if (dev_ctx->intf->pfault) {
 			if (priv->pfault) {
@@ -150,11 +151,12 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
 		}
 #endif
 		spin_unlock_irq(&priv->ctx_lock);
-	} else {
-		kfree(dev_ctx);
-
-		/* delete all accumulated events */
-		cleanup_delayed_evets(priv);
 	}
+
+	delayed_event_release(dev_ctx, priv);
+	if (!dev_ctx->context)
+		kfree(dev_ctx);
 }
 
 static struct mlx5_device_context *mlx5_get_device(struct mlx5_interface *intf,
@@ -205,17 +207,21 @@ static void mlx5_attach_interface(struct mlx5_interface *intf, struct mlx5_priv
 	if (!dev_ctx)
 		return;
 
+	delayed_event_start(priv);
 	if (intf->attach) {
 		if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))
-			return;
+			goto out;
 		intf->attach(dev, dev_ctx->context);
 		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
 	} else {
 		if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
-			return;
+			goto out;
 		dev_ctx->context = intf->add(dev);
 		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
 	}
+
+out:
+	delayed_event_release(dev_ctx, priv);
 }
 
 void mlx5_attach_device(struct mlx5_core_dev *dev)
@@ -414,8 +420,14 @@ void mlx5_core_event(struct mlx5_core_dev *dev, enum mlx5_dev_event event,
 	if (priv->is_accum_events)
 		add_delayed_event(priv, dev, event, param);
 
+	/* After mlx5_detach_device, the dev_ctx->intf is still set and dev_ctx is
+	 * still in priv->ctx_list. In this case, only notify the dev_ctx if its
+	 * ADDED or ATTACHED bit are set.
+	 */
 	list_for_each_entry(dev_ctx, &priv->ctx_list, list)
-		if (dev_ctx->intf->event)
+		if (dev_ctx->intf->event &&
+		    (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state) ||
+		     test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state)))
 			dev_ctx->intf->event(dev, dev_ctx->context, event, param);
 
 	spin_unlock_irqrestore(&priv->ctx_lock, flags);

View File

@@ -41,6 +41,11 @@
 #define MLX5E_CEE_STATE_UP    1
 #define MLX5E_CEE_STATE_DOWN  0
 
+enum {
+	MLX5E_VENDOR_TC_GROUP_NUM = 7,
+	MLX5E_LOWEST_PRIO_GROUP   = 0,
+};
+
 /* If dcbx mode is non-host set the dcbx mode to host.
  */
 static int mlx5e_dcbnl_set_dcbx_mode(struct mlx5e_priv *priv,
@@ -85,6 +90,9 @@ static int mlx5e_dcbnl_ieee_getets(struct net_device *netdev,
 {
 	struct mlx5e_priv *priv = netdev_priv(netdev);
 	struct mlx5_core_dev *mdev = priv->mdev;
+	u8 tc_group[IEEE_8021QAZ_MAX_TCS];
+	bool is_tc_group_6_exist = false;
+	bool is_zero_bw_ets_tc = false;
 	int err = 0;
 	int i;
@@ -96,37 +104,64 @@ static int mlx5e_dcbnl_ieee_getets(struct net_device *netdev,
 		err = mlx5_query_port_prio_tc(mdev, i, &ets->prio_tc[i]);
 		if (err)
 			return err;
-	}
 
-	for (i = 0; i < ets->ets_cap; i++) {
+		err = mlx5_query_port_tc_group(mdev, i, &tc_group[i]);
+		if (err)
+			return err;
+
 		err = mlx5_query_port_tc_bw_alloc(mdev, i, &ets->tc_tx_bw[i]);
 		if (err)
 			return err;
-		if (ets->tc_tx_bw[i] < MLX5E_MAX_BW_ALLOC)
-			priv->dcbx.tc_tsa[i] = IEEE_8021QAZ_TSA_ETS;
+
+		if (ets->tc_tx_bw[i] < MLX5E_MAX_BW_ALLOC &&
+		    tc_group[i] == (MLX5E_LOWEST_PRIO_GROUP + 1))
+			is_zero_bw_ets_tc = true;
+
+		if (tc_group[i] == (MLX5E_VENDOR_TC_GROUP_NUM - 1))
+			is_tc_group_6_exist = true;
 	}
+
+	/* Report 0% ets tc if exits*/
+	if (is_zero_bw_ets_tc) {
+		for (i = 0; i < ets->ets_cap; i++)
+			if (tc_group[i] == MLX5E_LOWEST_PRIO_GROUP)
+				ets->tc_tx_bw[i] = 0;
+	}
+
+	/* Update tc_tsa based on fw setting*/
+	for (i = 0; i < ets->ets_cap; i++) {
+		if (ets->tc_tx_bw[i] < MLX5E_MAX_BW_ALLOC)
+			priv->dcbx.tc_tsa[i] = IEEE_8021QAZ_TSA_ETS;
+		else if (tc_group[i] == MLX5E_VENDOR_TC_GROUP_NUM &&
+			 !is_tc_group_6_exist)
+			priv->dcbx.tc_tsa[i] = IEEE_8021QAZ_TSA_VENDOR;
+	}
+
 	memcpy(ets->tc_tsa, priv->dcbx.tc_tsa, sizeof(ets->tc_tsa));
 
 	return err;
 }
 
-enum {
-	MLX5E_VENDOR_TC_GROUP_NUM = 7,
-	MLX5E_ETS_TC_GROUP_NUM = 0,
-};
-
 static void mlx5e_build_tc_group(struct ieee_ets *ets, u8 *tc_group, int max_tc)
 {
 	bool any_tc_mapped_to_ets = false;
+	bool ets_zero_bw = false;
 	int strict_group;
 	int i;
 
-	for (i = 0; i <= max_tc; i++)
-		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS)
+	for (i = 0; i <= max_tc; i++) {
+		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) {
 			any_tc_mapped_to_ets = true;
+			if (!ets->tc_tx_bw[i])
+				ets_zero_bw = true;
+		}
+	}
 
-	strict_group = any_tc_mapped_to_ets ? 1 : 0;
+	/* strict group has higher priority than ets group */
+	strict_group = MLX5E_LOWEST_PRIO_GROUP;
+	if (any_tc_mapped_to_ets)
+		strict_group++;
+	if (ets_zero_bw)
+		strict_group++;
 
 	for (i = 0; i <= max_tc; i++) {
 		switch (ets->tc_tsa[i]) {
@@ -137,7 +172,9 @@ static void mlx5e_build_tc_group(struct ieee_ets *ets, u8 *tc_group, int max_tc)
 			tc_group[i] = strict_group++;
 			break;
 		case IEEE_8021QAZ_TSA_ETS:
-			tc_group[i] = MLX5E_ETS_TC_GROUP_NUM;
+			tc_group[i] = MLX5E_LOWEST_PRIO_GROUP;
+			if (ets->tc_tx_bw[i] && ets_zero_bw)
+				tc_group[i] = MLX5E_LOWEST_PRIO_GROUP + 1;
 			break;
 		}
 	}
@@ -146,8 +183,22 @@ static void mlx5e_build_tc_group(struct ieee_ets *ets, u8 *tc_group, int max_tc)
 static void mlx5e_build_tc_tx_bw(struct ieee_ets *ets, u8 *tc_tx_bw,
 				 u8 *tc_group, int max_tc)
 {
+	int bw_for_ets_zero_bw_tc = 0;
+	int last_ets_zero_bw_tc = -1;
+	int num_ets_zero_bw = 0;
 	int i;
 
+	for (i = 0; i <= max_tc; i++) {
+		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS &&
+		    !ets->tc_tx_bw[i]) {
+			num_ets_zero_bw++;
+			last_ets_zero_bw_tc = i;
+		}
+	}
+
+	if (num_ets_zero_bw)
+		bw_for_ets_zero_bw_tc = MLX5E_MAX_BW_ALLOC / num_ets_zero_bw;
+
 	for (i = 0; i <= max_tc; i++) {
 		switch (ets->tc_tsa[i]) {
 		case IEEE_8021QAZ_TSA_VENDOR:
@@ -157,12 +208,26 @@ static void mlx5e_build_tc_tx_bw(struct ieee_ets *ets, u8 *tc_tx_bw,
 			tc_tx_bw[i] = MLX5E_MAX_BW_ALLOC;
 			break;
 		case IEEE_8021QAZ_TSA_ETS:
-			tc_tx_bw[i] = ets->tc_tx_bw[i];
+			tc_tx_bw[i] = ets->tc_tx_bw[i] ?
+				      ets->tc_tx_bw[i] :
+				      bw_for_ets_zero_bw_tc;
 			break;
 		}
 	}
+
+	/* Make sure the total bw for ets zero bw group is 100% */
+	if (last_ets_zero_bw_tc != -1)
+		tc_tx_bw[last_ets_zero_bw_tc] +=
+			MLX5E_MAX_BW_ALLOC % num_ets_zero_bw;
 }
 
+/* If there are ETS BW 0,
+ * Set ETS group # to 1 for all ETS non zero BW tcs. Their sum must be 100%.
+ * Set group #0 to all the ETS BW 0 tcs and
+ * equally splits the 100% BW between them
+ * Report both group #0 and #1 as ETS type.
+ * All the tcs in group #0 will be reported with 0% BW.
+ */
 int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets)
 {
 	struct mlx5_core_dev *mdev = priv->mdev;
@@ -188,7 +253,6 @@ int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets)
 		return err;
 
 	memcpy(priv->dcbx.tc_tsa, ets->tc_tsa, sizeof(ets->tc_tsa));
-
 	return err;
 }
@@ -209,17 +273,9 @@ static int mlx5e_dbcnl_validate_ets(struct net_device *netdev,
 	}
 
 	/* Validate Bandwidth Sum */
-	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
-		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) {
-			if (!ets->tc_tx_bw[i]) {
-				netdev_err(netdev,
-					   "Failed to validate ETS: BW 0 is illegal\n");
-				return -EINVAL;
-			}
-
+	for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS)
 			bw_sum += ets->tc_tx_bw[i];
-		}
-	}
 
 	if (bw_sum != 0 && bw_sum != 100) {
 		netdev_err(netdev,
@@ -533,8 +589,7 @@ static void mlx5e_dcbnl_getpgtccfgtx(struct net_device *netdev,
 static void mlx5e_dcbnl_getpgbwgcfgtx(struct net_device *netdev,
 				      int pgid, u8 *bw_pct)
 {
-	struct mlx5e_priv *priv = netdev_priv(netdev);
-	struct mlx5_core_dev *mdev = priv->mdev;
+	struct ieee_ets ets;
 
 	if (pgid >= CEE_DCBX_MAX_PGS) {
 		netdev_err(netdev,
@@ -542,8 +597,8 @@ static void mlx5e_dcbnl_getpgbwgcfgtx(struct net_device *netdev,
 		return;
 	}
 
-	if (mlx5_query_port_tc_bw_alloc(mdev, pgid, bw_pct))
-		*bw_pct = 0;
+	mlx5e_dcbnl_ieee_getets(netdev, &ets);
+	*bw_pct = ets.tc_tx_bw[pgid];
 }
 
 static void mlx5e_dcbnl_setpfccfg(struct net_device *netdev,
@@ -739,8 +794,6 @@ static void mlx5e_ets_init(struct mlx5e_priv *priv)
 		ets.prio_tc[i] = i;
 	}
 
-	memcpy(priv->dcbx.tc_tsa, ets.tc_tsa, sizeof(ets.tc_tsa));
-
 	/* tclass[prio=0]=1, tclass[prio=1]=0, tclass[prio=i]=i (for i>1) */
 	ets.prio_tc[0] = 1;
 	ets.prio_tc[1] = 0;

@@ -78,9 +78,11 @@ struct mlx5e_tc_flow {
 };

 struct mlx5e_tc_flow_parse_attr {
+	struct ip_tunnel_info tun_info;
	struct mlx5_flow_spec spec;
	int num_mod_hdr_actions;
	void *mod_hdr_actions;
+	int mirred_ifindex;
 };

 enum {
@@ -322,6 +324,12 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv,
 static void mlx5e_detach_encap(struct mlx5e_priv *priv,
			       struct mlx5e_tc_flow *flow);

+static int mlx5e_attach_encap(struct mlx5e_priv *priv,
+			      struct ip_tunnel_info *tun_info,
+			      struct net_device *mirred_dev,
+			      struct net_device **encap_dev,
+			      struct mlx5e_tc_flow *flow);
+
 static struct mlx5_flow_handle *
 mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
		      struct mlx5e_tc_flow_parse_attr *parse_attr,
@@ -329,9 +337,27 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
 {
	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
	struct mlx5_esw_flow_attr *attr = flow->esw_attr;
-	struct mlx5_flow_handle *rule;
+	struct net_device *out_dev, *encap_dev = NULL;
+	struct mlx5_flow_handle *rule = NULL;
+	struct mlx5e_rep_priv *rpriv;
+	struct mlx5e_priv *out_priv;
	int err;

+	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP) {
+		out_dev = __dev_get_by_index(dev_net(priv->netdev),
+					     attr->parse_attr->mirred_ifindex);
+		err = mlx5e_attach_encap(priv, &parse_attr->tun_info,
+					 out_dev, &encap_dev, flow);
+		if (err) {
+			rule = ERR_PTR(err);
+			if (err != -EAGAIN)
+				goto err_attach_encap;
+		}
+		out_priv = netdev_priv(encap_dev);
+		rpriv = out_priv->ppriv;
+		attr->out_rep = rpriv->rep;
+	}
+
	err = mlx5_eswitch_add_vlan_action(esw, attr);
	if (err) {
		rule = ERR_PTR(err);
@@ -347,10 +373,14 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
		}
	}

-	rule = mlx5_eswitch_add_offloaded_rule(esw, &parse_attr->spec, attr);
-	if (IS_ERR(rule))
-		goto err_add_rule;
+	/* we get here if (1) there's no error (rule being null) or when
+	 * (2) there's an encap action and we're on -EAGAIN (no valid neigh)
+	 */
+	if (rule != ERR_PTR(-EAGAIN)) {
+		rule = mlx5_eswitch_add_offloaded_rule(esw, &parse_attr->spec, attr);
+		if (IS_ERR(rule))
+			goto err_add_rule;
+	}

	return rule;

@@ -361,6 +391,7 @@ err_mod_hdr:
 err_add_vlan:
	if (attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP)
		mlx5e_detach_encap(priv, flow);
+err_attach_encap:
	return rule;
 }
@@ -389,6 +420,8 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
 void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
			      struct mlx5e_encap_entry *e)
 {
+	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+	struct mlx5_esw_flow_attr *esw_attr;
	struct mlx5e_tc_flow *flow;
	int err;

@@ -404,10 +437,9 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
	mlx5e_rep_queue_neigh_stats_work(priv);

	list_for_each_entry(flow, &e->flows, encap) {
-		flow->esw_attr->encap_id = e->encap_id;
-		flow->rule = mlx5e_tc_add_fdb_flow(priv,
-						   flow->esw_attr->parse_attr,
-						   flow);
+		esw_attr = flow->esw_attr;
+		esw_attr->encap_id = e->encap_id;
+		flow->rule = mlx5_eswitch_add_offloaded_rule(esw, &esw_attr->parse_attr->spec, esw_attr);
		if (IS_ERR(flow->rule)) {
			err = PTR_ERR(flow->rule);
			mlx5_core_warn(priv->mdev, "Failed to update cached encapsulation flow, %d\n",
@@ -421,15 +453,13 @@ void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv,
 void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv,
			      struct mlx5e_encap_entry *e)
 {
+	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
	struct mlx5e_tc_flow *flow;
-	struct mlx5_fc *counter;

	list_for_each_entry(flow, &e->flows, encap) {
		if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) {
			flow->flags &= ~MLX5E_TC_FLOW_OFFLOADED;
-			counter = mlx5_flow_rule_counter(flow->rule);
-			mlx5_del_flow_rules(flow->rule);
-			mlx5_fc_destroy(priv->mdev, counter);
+			mlx5_eswitch_del_offloaded_rule(esw, flow->rule, flow->esw_attr);
		}
	}

@@ -1942,7 +1972,7 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
		if (is_tcf_mirred_egress_redirect(a)) {
			int ifindex = tcf_mirred_ifindex(a);
-			struct net_device *out_dev, *encap_dev = NULL;
+			struct net_device *out_dev;
			struct mlx5e_priv *out_priv;

			out_dev = __dev_get_by_index(dev_net(priv->netdev), ifindex);
@@ -1955,17 +1985,13 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
				rpriv = out_priv->ppriv;
				attr->out_rep = rpriv->rep;
			} else if (encap) {
-				err = mlx5e_attach_encap(priv, info,
-							 out_dev, &encap_dev, flow);
-				if (err && err != -EAGAIN)
-					return err;
+				parse_attr->mirred_ifindex = ifindex;
+				parse_attr->tun_info = *info;
+				attr->parse_attr = parse_attr;
				attr->action |= MLX5_FLOW_CONTEXT_ACTION_ENCAP |
					MLX5_FLOW_CONTEXT_ACTION_FWD_DEST |
					MLX5_FLOW_CONTEXT_ACTION_COUNT;
-				out_priv = netdev_priv(encap_dev);
-				rpriv = out_priv->ppriv;
-				attr->out_rep = rpriv->rep;
-				attr->parse_attr = parse_attr;
+				/* attr->out_rep is resolved when we handle encap */
			} else {
				pr_err("devices %s %s not on same switch HW, can't offload forwarding\n",
				       priv->netdev->name, out_dev->name);
@@ -2047,7 +2073,7 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv,
	if (flow->flags & MLX5E_TC_FLOW_ESWITCH) {
		err = parse_tc_fdb_actions(priv, f->exts, parse_attr, flow);
		if (err < 0)
-			goto err_handle_encap_flow;
+			goto err_free;
		flow->rule = mlx5e_tc_add_fdb_flow(priv, parse_attr, flow);
	} else {
		err = parse_tc_nic_actions(priv, f->exts, parse_attr, flow);
@@ -2058,10 +2084,13 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv,
	if (IS_ERR(flow->rule)) {
		err = PTR_ERR(flow->rule);
-		goto err_free;
+		if (err != -EAGAIN)
+			goto err_free;
	}

-	flow->flags |= MLX5E_TC_FLOW_OFFLOADED;
+	if (err != -EAGAIN)
+		flow->flags |= MLX5E_TC_FLOW_OFFLOADED;
+
	err = rhashtable_insert_fast(&tc->ht, &flow->node,
				     tc->ht_params);
	if (err)
@@ -2075,16 +2104,6 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv,
 err_del_rule:
	mlx5e_tc_del_flow(priv, flow);

-err_handle_encap_flow:
-	if (err == -EAGAIN) {
-		err = rhashtable_insert_fast(&tc->ht, &flow->node,
-					     tc->ht_params);
-		if (err)
-			mlx5e_tc_del_flow(priv, flow);
-		else
-			return 0;
-	}
-
 err_free:
	kvfree(parse_attr);
	kfree(flow);


@@ -354,10 +354,11 @@ void mlx5_drain_health_wq(struct mlx5_core_dev *dev)
 void mlx5_drain_health_recovery(struct mlx5_core_dev *dev)
 {
	struct mlx5_core_health *health = &dev->priv.health;
+	unsigned long flags;

-	spin_lock(&health->wq_lock);
+	spin_lock_irqsave(&health->wq_lock, flags);
	set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
-	spin_unlock(&health->wq_lock);
+	spin_unlock_irqrestore(&health->wq_lock, flags);
	cancel_delayed_work_sync(&dev->priv.health.recover_work);
 }


@@ -677,6 +677,27 @@ int mlx5_set_port_tc_group(struct mlx5_core_dev *mdev, u8 *tc_group)
 }
 EXPORT_SYMBOL_GPL(mlx5_set_port_tc_group);

+int mlx5_query_port_tc_group(struct mlx5_core_dev *mdev,
+			     u8 tc, u8 *tc_group)
+{
+	u32 out[MLX5_ST_SZ_DW(qetc_reg)];
+	void *ets_tcn_conf;
+	int err;
+
+	err = mlx5_query_port_qetcr_reg(mdev, out, sizeof(out));
+	if (err)
+		return err;
+
+	ets_tcn_conf = MLX5_ADDR_OF(qetc_reg, out,
+				    tc_configuration[tc]);
+
+	*tc_group = MLX5_GET(ets_tcn_config_reg, ets_tcn_conf,
+			     group);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mlx5_query_port_tc_group);
+
 int mlx5_set_port_tc_bw_alloc(struct mlx5_core_dev *mdev, u8 *tc_bw)
 {
	u32 in[MLX5_ST_SZ_DW(qetc_reg)] = {0};


@@ -127,6 +127,8 @@ nfp_fl_output(struct nfp_fl_output *output, const struct tc_action *action,
	 */
	if (!switchdev_port_same_parent_id(in_dev, out_dev))
		return -EOPNOTSUPP;
+	if (!nfp_netdev_is_nfp_repr(out_dev))
+		return -EOPNOTSUPP;

	output->port = cpu_to_be32(nfp_repr_get_port_id(out_dev));
	if (!output->port)


@@ -74,7 +74,7 @@ static int dwc_eth_dwmac_config_dt(struct platform_device *pdev,
		plat_dat->axi->axi_wr_osr_lmt--;
	}

-	if (of_property_read_u32(np, "read,read-requests",
+	if (of_property_read_u32(np, "snps,read-requests",
				 &plat_dat->axi->axi_rd_osr_lmt)) {
		/**
		 * Since the register has a reset value of 1, if property


@@ -150,6 +150,13 @@ static void stmmac_mtl_setup(struct platform_device *pdev,
	plat->rx_queues_to_use = 1;
	plat->tx_queues_to_use = 1;

+	/* First Queue must always be in DCB mode. As MTL_QUEUE_DCB = 1 we need
+	 * to always set this, otherwise Queue will be classified as AVB
+	 * (because MTL_QUEUE_AVB = 0).
+	 */
+	plat->rx_queues_cfg[0].mode_to_use = MTL_QUEUE_DCB;
+	plat->tx_queues_cfg[0].mode_to_use = MTL_QUEUE_DCB;
+
	rx_node = of_parse_phandle(pdev->dev.of_node, "snps,mtl-rx-config", 0);
	if (!rx_node)
		return;


@@ -197,8 +197,8 @@ static int ipvtap_init(void)
 {
	int err;

-	err = tap_create_cdev(&ipvtap_cdev, &ipvtap_major, "ipvtap");
+	err = tap_create_cdev(&ipvtap_cdev, &ipvtap_major, "ipvtap",
+			      THIS_MODULE);
	if (err)
		goto out1;


@@ -204,8 +204,8 @@ static int macvtap_init(void)
 {
	int err;

-	err = tap_create_cdev(&macvtap_cdev, &macvtap_major, "macvtap");
+	err = tap_create_cdev(&macvtap_cdev, &macvtap_major, "macvtap",
+			      THIS_MODULE);
	if (err)
		goto out1;


@@ -517,6 +517,10 @@ static int tap_open(struct inode *inode, struct file *file)
					     &tap_proto, 0);
	if (!q)
		goto err;
+	if (skb_array_init(&q->skb_array, tap->dev->tx_queue_len, GFP_KERNEL)) {
+		sk_free(&q->sk);
+		goto err;
+	}

	RCU_INIT_POINTER(q->sock.wq, &q->wq);
	init_waitqueue_head(&q->wq.wait);
@@ -540,22 +544,18 @@ static int tap_open(struct inode *inode, struct file *file)
	if ((tap->dev->features & NETIF_F_HIGHDMA) && (tap->dev->features & NETIF_F_SG))
		sock_set_flag(&q->sk, SOCK_ZEROCOPY);

-	err = -ENOMEM;
-	if (skb_array_init(&q->skb_array, tap->dev->tx_queue_len, GFP_KERNEL))
-		goto err_array;
-
	err = tap_set_queue(tap, file, q);
-	if (err)
-		goto err_queue;
+	if (err) {
+		/* tap_sock_destruct() will take care of freeing skb_array */
+		goto err_put;
+	}

	dev_put(tap->dev);

	rtnl_unlock();
	return err;

-err_queue:
-	skb_array_cleanup(&q->skb_array);
-err_array:
+err_put:
	sock_put(&q->sk);
 err:
	if (tap)
@@ -1249,8 +1249,8 @@ static int tap_list_add(dev_t major, const char *device_name)
	return 0;
 }

-int tap_create_cdev(struct cdev *tap_cdev,
-		    dev_t *tap_major, const char *device_name)
+int tap_create_cdev(struct cdev *tap_cdev, dev_t *tap_major,
+		    const char *device_name, struct module *module)
 {
	int err;

@@ -1259,6 +1259,7 @@ int tap_create_cdev(struct cdev *tap_cdev,
		goto out1;

	cdev_init(tap_cdev, &tap_fops);
+	tap_cdev->owner = module;
	err = cdev_add(tap_cdev, *tap_major, TAP_NUM_DEVS);
	if (err)
		goto out2;


@@ -1444,6 +1444,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
	buflen += SKB_DATA_ALIGN(len + pad);
	rcu_read_unlock();

+	alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES);
	if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL)))
		return ERR_PTR(-ENOMEM);

@@ -2253,7 +2254,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
		if (!dev)
			return -ENOMEM;
		err = dev_get_valid_name(net, dev, name);
-		if (err)
+		if (err < 0)
			goto err_free_dev;

		dev_net_set(dev, net);


@@ -561,6 +561,7 @@ static const struct driver_info wwan_info = {
 #define HP_VENDOR_ID		0x03f0
 #define MICROSOFT_VENDOR_ID	0x045e
 #define UBLOX_VENDOR_ID		0x1546
+#define TPLINK_VENDOR_ID	0x2357

 static const struct usb_device_id	products[] = {
 /* BLACKLIST !!
@@ -813,6 +814,13 @@ static const struct usb_device_id	products[] = {
	.driver_info = 0,
 },

+/* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
+{
+	USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, 0x0601, USB_CLASS_COMM,
+				      USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+	.driver_info = 0,
+},
+
 /* WHITELIST!!!
  *
  * CDC Ether uses two interfaces, not necessarily consecutive.
@@ -863,6 +871,12 @@ static const struct usb_device_id	products[] = {
	USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, 0x81ba, USB_CLASS_COMM,
				      USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
	.driver_info = (kernel_ulong_t)&wwan_info,
+}, {
+	/* Huawei ME906 and ME909 */
+	USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x15c1, USB_CLASS_COMM,
+				      USB_CDC_SUBCLASS_ETHERNET,
+				      USB_CDC_PROTO_NONE),
+	.driver_info = (unsigned long)&wwan_info,
 }, {
	/* ZTE modules */
	USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, USB_CLASS_COMM,


@@ -615,6 +615,7 @@ enum rtl8152_flags {
 #define VENDOR_ID_LENOVO		0x17ef
 #define VENDOR_ID_LINKSYS		0x13b1
 #define VENDOR_ID_NVIDIA		0x0955
+#define VENDOR_ID_TPLINK		0x2357

 #define MCU_TYPE_PLA			0x0100
 #define MCU_TYPE_USB			0x0000
@@ -5319,6 +5320,7 @@ static const struct usb_device_id rtl8152_table[] = {
	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
	{REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
	{REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
+	{REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601)},
	{}
 };

Some files were not shown because too many files have changed in this diff.