Merge tag 'phy-for_3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy into usb-next

Kishon writes:

Adds 3 new PHY drivers stih407, stih41x and rcar gen2 PHY. It also
includes miscellaneous cleanup of other PHY drivers.

Conflicts:
	MAINTAINERS
Committed by Greg Kroah-Hartman, 2014-09-25 13:11:52 +02:00, commit 346e2e4a8b.
170 changed files with 2257 additions and 910 deletions.


@@ -0,0 +1,30 @@
ST STiH407 USB PHY controller

This file documents the dt bindings for the usb picoPHY driver, which is the
PHY for both USB2 and USB3 host controllers (when controlling usb2/1.1
devices) available on the STiH407 SoC family from STMicroelectronics.

Required properties:
- compatible	: should be "st,stih407-usb2-phy"
- reg		: contains the offset and length of the system configuration
		  registers used as glue logic to control and configure the PHY
- reg-names	: the names of the system configuration registers in "reg",
		  should be "param" and "ctrl"
- st,syscfg	: sysconfig register used to manage the PHY parameters at
		  driver level
- resets	: list of phandle and reset specifier pairs. There should be
		  two entries, one for the whole phy and one for the port
- reset-names	: list of reset signal names. Should be "global" and "port"
See: Documentation/devicetree/bindings/reset/st,sti-powerdown.txt
See: Documentation/devicetree/bindings/reset/reset.txt

Example:

usb2_picophy0: usbpicophy@f8 {
	compatible = "st,stih407-usb2-phy";
	reg = <0xf8 0x04>,	/* syscfg 5062 */
	      <0xf4 0x04>;	/* syscfg 5061 */
	reg-names = "param", "ctrl";
	#phy-cells = <0>;
	st,syscfg = <&syscfg_core>;
	resets = <&softreset STIH407_PICOPHY_SOFTRESET>,
		 <&picophyreset STIH407_PICOPHY0_RESET>;
	reset-names = "global", "port";
};


@@ -0,0 +1,24 @@
STMicroelectronics STiH41x USB PHY binding
------------------------------------------

This file contains documentation for the usb phy found in STiH415/6 SoCs from
STMicroelectronics.

Required properties:
- compatible	: should be "st,stih416-usb-phy" or "st,stih415-usb-phy"
- st,syscfg	: should be a phandle of the syscfg node
- clock-names	: must contain "osc_phy"
- clocks	: must contain an entry for each name in clock-names.
See: Documentation/devicetree/bindings/clock/clock-bindings.txt

- #phy-cells	: must be 0 for this phy
See: Documentation/devicetree/bindings/phy/phy-bindings.txt

Example:

usb2_phy: usb2phy@0 {
	compatible = "st,stih416-usb-phy";
	#phy-cells = <0>;
	st,syscfg = <&syscfg_rear>;
	clocks = <&clk_sysin>;
	clock-names = "osc_phy";
};


@@ -0,0 +1,51 @@
* Renesas R-Car generation 2 USB PHY

This file provides information on what the device node for the R-Car
generation 2 USB PHY contains.

Required properties:
- compatible: "renesas,usb-phy-r8a7790" if the device is a part of R8A7790 SoC.
	      "renesas,usb-phy-r8a7791" if the device is a part of R8A7791 SoC.
- reg: offset and length of the register block.
- #address-cells: number of address cells for the USB channel subnodes, must
		  be <1>.
- #size-cells: number of size cells for the USB channel subnodes, must be <0>.
- clocks: clock phandle and specifier pair.
- clock-names: string, clock input name, must be "usbhs".

The USB PHY device tree node should have the subnodes corresponding to the USB
channels. These subnodes must contain the following properties:
- reg: the USB controller selector; see the table below for the values.
- #phy-cells: see phy-bindings.txt in the same directory, must be <1>.

The phandle's argument in the PHY specifier is the USB controller selector for
the USB channel; see the selector meanings below:

+-----------+---------------+---------------+
|\ Selector |               |               |
+ --------- +       0       |       1       |
| Channel  \|               |               |
+-----------+---------------+---------------+
| 0         | PCI EHCI/OHCI | HS-USB        |
| 2         | PCI EHCI/OHCI | xHCI          |
+-----------+---------------+---------------+

Example (Lager board):

	usb-phy@e6590100 {
		compatible = "renesas,usb-phy-r8a7790";
		reg = <0 0xe6590100 0 0x100>;
		#address-cells = <1>;
		#size-cells = <0>;
		clocks = <&mstp7_clks R8A7790_CLK_HSUSB>;
		clock-names = "usbhs";

		usb-channel@0 {
			reg = <0>;
			#phy-cells = <1>;
		};
		usb-channel@2 {
			reg = <2>;
			#phy-cells = <1>;
		};
	};
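For orientation, a consumer of this PHY references one of the channel subnodes
with a one-cell specifier whose argument is the controller selector from the
table above. A hypothetical sketch (the consumer node, its address, and the
usb0 label on usb-channel@0 are assumed for illustration; they are not part of
this commit):

	/* Hypothetical consumer: HS-USB controller bound to channel 0,
	 * selector 1 (second column of the table above).
	 */
	hsusb: usb@e6590000 {
		/* ... controller-specific properties ... */
		phys = <&usb0 1>;	/* usb0: label assumed on usb-channel@0 */
		phy-names = "usb";
	};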


@@ -17,8 +17,11 @@ Samsung EXYNOS SoC series Display Port PHY
 -------------------------------------------------
 
 Required properties:
-- compatible : should be "samsung,exynos5250-dp-video-phy";
-- reg : offset and length of the Display Port PHY register set;
+- compatible : should be one of the following supported values:
+	 - "samsung,exynos5250-dp-video-phy"
+	 - "samsung,exynos5420-dp-video-phy"
+- samsung,pmu-syscon: phandle for PMU system controller interface, used to
+	control pmu registers for power isolation.
 - #phy-cells : from the generic PHY bindings, must be 0;
 
 Samsung S5P/EXYNOS SoC series USB PHY


@@ -462,9 +462,9 @@ JIT compiler
 ------------
 
 The Linux kernel has a built-in BPF JIT compiler for x86_64, SPARC, PowerPC,
-ARM and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler is
-transparently invoked for each attached filter from user space or for internal
-kernel users if it has been previously enabled by root:
+ARM, MIPS and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler
+is transparently invoked for each attached filter from user space or for
+internal kernel users if it has been previously enabled by root:
 
   echo 1 > /proc/sys/net/core/bpf_jit_enable
 


@@ -1392,12 +1392,14 @@ S:	Maintained
 F:	arch/arm/mach-sti/
 F:	arch/arm/boot/dts/sti*
 F:	drivers/clocksource/arm_global_timer.c
-F:	drivers/reset/sti/
-F:	drivers/pinctrl/pinctrl-st.c
-F:	drivers/media/rc/st_rc.c
 F:	drivers/i2c/busses/i2c-st.c
-F:	drivers/tty/serial/st-asc.c
+F:	drivers/media/rc/st_rc.c
 F:	drivers/mmc/host/sdhci-st.c
+F:	drivers/phy/phy-stih407-usb.c
+F:	drivers/phy/phy-stih41x-usb.c
+F:	drivers/pinctrl/pinctrl-st.c
+F:	drivers/reset/sti/
+F:	drivers/tty/serial/st-asc.c
 F:	drivers/usb/dwc3/dwc3-st.c
 F:	drivers/usb/host/ehci-st.c
 F:	drivers/usb/host/ohci-st.c


@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
 CONFIG_LOG_BUF_SHIFT=16
@@ -6,6 +5,8 @@ CONFIG_PROFILING=y
 CONFIG_OPROFILE=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
 CONFIG_IA64_DIG=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=2
@@ -51,9 +52,6 @@ CONFIG_DM_MIRROR=m
 CONFIG_DM_ZERO=m
 CONFIG_NETDEVICES=y
 CONFIG_DUMMY=y
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=y
-CONFIG_NET_PCI=y
 CONFIG_INPUT_EVDEV=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
@@ -85,7 +83,6 @@ CONFIG_EXT3_FS=y
 CONFIG_XFS_FS=y
 CONFIG_XFS_QUOTA=y
 CONFIG_XFS_POSIX_ACL=y
-CONFIG_AUTOFS_FS=m
 CONFIG_AUTOFS4_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
@@ -95,17 +92,13 @@ CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
 CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
 CONFIG_NFSD=m
 CONFIG_NFSD_V4=y
 CONFIG_CIFS=m
 CONFIG_CIFS_STATS=y
 CONFIG_CIFS_XATTR=y
 CONFIG_CIFS_POSIX=y
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
 CONFIG_NLS_UTF8=m


@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
 CONFIG_IKCONFIG=y
@@ -6,13 +5,13 @@ CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=20
 CONFIG_CGROUPS=y
 CONFIG_CPUSETS=y
-CONFIG_SYSFS_DEPRECATED_V2=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_KALLSYMS_ALL=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
-# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
 CONFIG_MCKINLEY=y
 CONFIG_IA64_PAGE_SIZE_64KB=y
 CONFIG_IA64_CYCLONE=y
@@ -29,14 +28,13 @@ CONFIG_ACPI_BUTTON=m
 CONFIG_ACPI_FAN=m
 CONFIG_ACPI_DOCK=y
 CONFIG_ACPI_PROCESSOR=m
-CONFIG_ACPI_CONTAINER=y
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_ACPI=y
+CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
-CONFIG_ARPD=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
@@ -82,16 +80,13 @@ CONFIG_FUSION_FC=m
 CONFIG_FUSION_SAS=y
 CONFIG_NETDEVICES=y
 CONFIG_DUMMY=m
-CONFIG_NET_ETHERNET=y
+CONFIG_NETCONSOLE=y
+CONFIG_TIGON3=y
 CONFIG_NET_TULIP=y
 CONFIG_TULIP=m
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
 CONFIG_E100=m
 CONFIG_E1000=y
 CONFIG_IGB=y
-CONFIG_TIGON3=y
-CONFIG_NETCONSOLE=y
 # CONFIG_SERIO_SERPORT is not set
 CONFIG_GAMEPORT=m
 CONFIG_SERIAL_NONSTANDARD=y
@@ -151,6 +146,7 @@ CONFIG_USB_STORAGE=m
 CONFIG_INFINIBAND=m
 CONFIG_INFINIBAND_MTHCA=m
 CONFIG_INFINIBAND_IPOIB=m
+CONFIG_INTEL_IOMMU=y
 CONFIG_MSPEC=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
@@ -164,7 +160,6 @@ CONFIG_REISERFS_FS_XATTR=y
 CONFIG_REISERFS_FS_POSIX_ACL=y
 CONFIG_REISERFS_FS_SECURITY=y
 CONFIG_XFS_FS=y
-CONFIG_AUTOFS_FS=m
 CONFIG_AUTOFS4_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
@@ -175,16 +170,10 @@ CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
 CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
 CONFIG_NFSD=m
 CONFIG_NFSD_V4=y
-CONFIG_SMB_FS=m
-CONFIG_SMB_NLS_DEFAULT=y
 CONFIG_CIFS=m
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_737=m
 CONFIG_NLS_CODEPAGE_775=m
@@ -225,11 +214,7 @@ CONFIG_NLS_UTF8=m
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_MUTEXES=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
-CONFIG_SYSCTL_SYSCALL_CHECK=y
-CONFIG_CRYPTO_ECB=m
 CONFIG_CRYPTO_PCBC=m
 CONFIG_CRYPTO_MD5=y
 # CONFIG_CRYPTO_ANSI_CPRNG is not set
 CONFIG_CRC_T10DIF=y
-CONFIG_INTEL_IOMMU=y


@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
 CONFIG_IKCONFIG=y
@@ -9,6 +8,8 @@ CONFIG_KALLSYMS_ALL=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
 CONFIG_MCKINLEY=y
 CONFIG_IA64_CYCLONE=y
 CONFIG_SMP=y
@@ -24,14 +25,12 @@ CONFIG_BINFMT_MISC=m
 CONFIG_ACPI_BUTTON=m
 CONFIG_ACPI_FAN=m
 CONFIG_ACPI_PROCESSOR=m
-CONFIG_ACPI_CONTAINER=m
 CONFIG_HOTPLUG_PCI=y
-CONFIG_HOTPLUG_PCI_ACPI=m
+CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
-CONFIG_ARPD=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
 CONFIG_BLK_DEV_LOOP=m
@@ -71,15 +70,12 @@ CONFIG_FUSION_SPI=y
 CONFIG_FUSION_FC=m
 CONFIG_NETDEVICES=y
 CONFIG_DUMMY=m
-CONFIG_NET_ETHERNET=y
+CONFIG_NETCONSOLE=y
+CONFIG_TIGON3=y
 CONFIG_NET_TULIP=y
 CONFIG_TULIP=m
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
 CONFIG_E100=m
 CONFIG_E1000=y
-CONFIG_TIGON3=y
-CONFIG_NETCONSOLE=y
 # CONFIG_SERIO_SERPORT is not set
 CONFIG_GAMEPORT=m
 CONFIG_SERIAL_NONSTANDARD=y
@@ -146,7 +142,6 @@ CONFIG_REISERFS_FS_XATTR=y
 CONFIG_REISERFS_FS_POSIX_ACL=y
 CONFIG_REISERFS_FS_SECURITY=y
 CONFIG_XFS_FS=y
-CONFIG_AUTOFS_FS=y
 CONFIG_AUTOFS4_FS=y
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
@@ -157,16 +152,10 @@ CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
 CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
 CONFIG_NFSD=m
 CONFIG_NFSD_V4=y
-CONFIG_SMB_FS=m
-CONFIG_SMB_NLS_DEFAULT=y
 CONFIG_CIFS=m
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_737=m
 CONFIG_NLS_CODEPAGE_775=m


@@ -1,13 +1,12 @@
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=16
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODULE_FORCE_UNLOAD=y
 CONFIG_MODVERSIONS=y
+CONFIG_PARTITION_ADVANCED=y
 CONFIG_IA64_HP_SIM=y
 CONFIG_MCKINLEY=y
 CONFIG_IA64_PAGE_SIZE_64KB=y
@@ -27,7 +26,6 @@ CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
-CONFIG_SCSI_MULTI_LUN=y
 CONFIG_SCSI_CONSTANTS=y
 CONFIG_SCSI_LOGGING=y
 CONFIG_SCSI_SPI_ATTRS=y
@@ -49,8 +47,6 @@ CONFIG_HUGETLBFS=y
 CONFIG_NFS_FS=y
 CONFIG_NFSD=y
 CONFIG_NFSD_V3=y
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_EFI_PARTITION=y
+CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_MUTEXES=y
-CONFIG_DEBUG_INFO=y


@@ -1,4 +1,3 @@
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
 CONFIG_IKCONFIG=y
@@ -11,6 +10,8 @@ CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
 CONFIG_MODULE_SRCVERSION_ALL=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_SGI_PARTITION=y
 CONFIG_IA64_DIG=y
 CONFIG_MCKINLEY=y
 CONFIG_IA64_PAGE_SIZE_64KB=y
@@ -29,14 +30,12 @@ CONFIG_BINFMT_MISC=m
 CONFIG_ACPI_BUTTON=m
 CONFIG_ACPI_FAN=m
 CONFIG_ACPI_PROCESSOR=m
-CONFIG_ACPI_CONTAINER=m
 CONFIG_HOTPLUG_PCI=y
-CONFIG_HOTPLUG_PCI_ACPI=m
+CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
 CONFIG_IP_MULTICAST=y
-CONFIG_ARPD=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
 CONFIG_BLK_DEV_LOOP=m
@@ -53,6 +52,7 @@ CONFIG_BLK_DEV_SD=y
 CONFIG_CHR_DEV_ST=m
 CONFIG_BLK_DEV_SR=m
 CONFIG_CHR_DEV_SG=m
+CONFIG_SCSI_FC_ATTRS=y
 CONFIG_SCSI_SYM53C8XX_2=y
 CONFIG_SCSI_QLOGIC_1280=y
 CONFIG_MD=y
@@ -72,15 +72,12 @@ CONFIG_FUSION_FC=y
 CONFIG_FUSION_CTL=y
 CONFIG_NETDEVICES=y
 CONFIG_DUMMY=m
-CONFIG_NET_ETHERNET=y
+CONFIG_NETCONSOLE=y
+CONFIG_TIGON3=y
 CONFIG_NET_TULIP=y
 CONFIG_TULIP=m
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
 CONFIG_E100=m
 CONFIG_E1000=y
-CONFIG_TIGON3=y
-CONFIG_NETCONSOLE=y
 # CONFIG_SERIO_SERPORT is not set
 CONFIG_GAMEPORT=m
 CONFIG_SERIAL_NONSTANDARD=y
@@ -118,7 +115,6 @@ CONFIG_REISERFS_FS_XATTR=y
 CONFIG_REISERFS_FS_POSIX_ACL=y
 CONFIG_REISERFS_FS_SECURITY=y
 CONFIG_XFS_FS=y
-CONFIG_AUTOFS_FS=y
 CONFIG_AUTOFS4_FS=y
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
@@ -129,16 +125,10 @@ CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
 CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
+CONFIG_NFS_V4=m
 CONFIG_NFSD=m
 CONFIG_NFSD_V4=y
-CONFIG_SMB_FS=m
-CONFIG_SMB_NLS_DEFAULT=y
 CONFIG_CIFS=m
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_SGI_PARTITION=y
-CONFIG_EFI_PARTITION=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_737=m
 CONFIG_NLS_CODEPAGE_775=m
@@ -180,6 +170,5 @@ CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_MUTEXES=y
 CONFIG_IA64_GRANULE_16MB=y
-CONFIG_CRYPTO_ECB=m
 CONFIG_CRYPTO_PCBC=m
 CONFIG_CRYPTO_MD5=y


@@ -1,9 +1,9 @@
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
+CONFIG_PARTITION_ADVANCED=y
 CONFIG_IA64_HP_ZX1=y
 CONFIG_MCKINLEY=y
 CONFIG_SMP=y
@@ -18,6 +18,7 @@ CONFIG_EFI_VARS=y
 CONFIG_BINFMT_MISC=y
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_ACPI=y
+CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
@@ -37,9 +38,9 @@ CONFIG_CHR_DEV_OSST=y
 CONFIG_BLK_DEV_SR=y
 CONFIG_BLK_DEV_SR_VENDOR=y
 CONFIG_CHR_DEV_SG=y
-CONFIG_SCSI_MULTI_LUN=y
 CONFIG_SCSI_CONSTANTS=y
 CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_FC_ATTRS=y
 CONFIG_SCSI_SYM53C8XX_2=y
 CONFIG_SCSI_QLOGIC_1280=y
 CONFIG_FUSION=y
@@ -48,18 +49,15 @@ CONFIG_FUSION_FC=y
 CONFIG_FUSION_CTL=m
 CONFIG_NETDEVICES=y
 CONFIG_DUMMY=y
-CONFIG_NET_ETHERNET=y
+CONFIG_TIGON3=y
 CONFIG_NET_TULIP=y
 CONFIG_TULIP=y
 CONFIG_TULIP_MWI=y
 CONFIG_TULIP_MMIO=y
 CONFIG_TULIP_NAPI=y
 CONFIG_TULIP_NAPI_HW_MITIGATION=y
-CONFIG_NET_PCI=y
-CONFIG_NET_VENDOR_INTEL=y
 CONFIG_E100=y
 CONFIG_E1000=y
-CONFIG_TIGON3=y
 CONFIG_INPUT_JOYDEV=y
 CONFIG_INPUT_EVDEV=y
 # CONFIG_INPUT_KEYBOARD is not set
@@ -100,7 +98,6 @@ CONFIG_USB_STORAGE=y
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT3_FS=y
-CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_JOLIET=y
 CONFIG_UDF_FS=y
@@ -110,12 +107,9 @@ CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
 CONFIG_NFS_FS=y
-CONFIG_NFS_V3=y
 CONFIG_NFS_V4=y
 CONFIG_NFSD=y
 CONFIG_NFSD_V3=y
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_EFI_PARTITION=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_CODEPAGE_737=y
 CONFIG_NLS_CODEPAGE_775=y


@@ -53,6 +53,7 @@
  */
 unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL_GPL(empty_zero_page);
+EXPORT_SYMBOL(zero_page_mask);
 
 /*
  * Not static inline because used by IP27 special magic initialization code


@@ -48,7 +48,12 @@ cflags-y	:= -pipe
 
 # These flags should be implied by an hppa-linux configuration, but they
 # are not in gcc 3.2.
-cflags-y	+= -mno-space-regs -mfast-indirect-calls
+cflags-y	+= -mno-space-regs
+
+# -mfast-indirect-calls is only relevant for 32-bit kernels.
+ifndef CONFIG_64BIT
+cflags-y	+= -mfast-indirect-calls
+endif
 
 # Currently we save and restore fpregs on all kernel entry/interruption paths.
 # If that gets optimized, we might need to disable the use of fpregs in the


@@ -17,6 +17,7 @@
 #include <linux/user.h>
 #include <linux/personality.h>
 #include <linux/security.h>
+#include <linux/seccomp.h>
 #include <linux/compat.h>
 #include <linux/signal.h>
 #include <linux/audit.h>
@@ -271,10 +272,7 @@ long do_syscall_trace_enter(struct pt_regs *regs)
 	long ret = 0;
 
 	/* Do the secure computing check first. */
-	if (secure_computing(regs->gr[20])) {
-		/* seccomp failures shouldn't expose any additional code. */
-		return -1;
-	}
+	secure_computing_strict(regs->gr[20]);
 
 	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
 	    tracehook_report_syscall_entry(regs))


@@ -43,6 +43,7 @@ pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__((__aligned__(PAGE_SIZE)));
 
 unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL(empty_zero_page);
+EXPORT_SYMBOL(zero_page_mask);
 
 static void __init setup_zero_pages(void)
 {


@@ -234,12 +234,18 @@ do {	BUILD_BUG_ON(FIELD_SIZEOF(STRUCT, FIELD) != sizeof(u8));	\
 	__emit_load8(BASE, STRUCT, FIELD, DEST);			\
 } while (0)
 
+#ifdef CONFIG_SPARC64
+#define BIAS (STACK_BIAS - 4)
+#else
+#define BIAS (-4)
+#endif
+
 #define emit_ldmem(OFF, DEST)						\
-do {	*prog++ = LD32I | RS1(FP) | S13(-(OFF)) | RD(DEST);		\
+do {	*prog++ = LD32I | RS1(SP) | S13(BIAS - (OFF)) | RD(DEST);	\
 } while (0)
 
 #define emit_stmem(OFF, SRC)						\
-do {	*prog++ = LD32I | RS1(FP) | S13(-(OFF)) | RD(SRC);		\
+do {	*prog++ = ST32I | RS1(SP) | S13(BIAS - (OFF)) | RD(SRC);	\
 } while (0)
 
 #ifdef CONFIG_SMP
@@ -615,10 +621,11 @@ void bpf_jit_compile(struct bpf_prog *fp)
 			case BPF_ANC | SKF_AD_VLAN_TAG:
 			case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT:
 				emit_skb_load16(vlan_tci, r_A);
-				if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) {
-					emit_andi(r_A, VLAN_VID_MASK, r_A);
+				if (code != (BPF_ANC | SKF_AD_VLAN_TAG)) {
+					emit_alu_K(SRL, 12);
+					emit_andi(r_A, 1, r_A);
 				} else {
-					emit_loadimm(VLAN_TAG_PRESENT, r_TMP);
+					emit_loadimm(~VLAN_TAG_PRESENT, r_TMP);
 					emit_and(r_A, r_TMP, r_A);
 				}
 				break;
@@ -630,15 +637,19 @@ void bpf_jit_compile(struct bpf_prog *fp)
 				emit_loadimm(K, r_X);
 				break;
 			case BPF_LD | BPF_MEM:
+				seen |= SEEN_MEM;
 				emit_ldmem(K * 4, r_A);
 				break;
 			case BPF_LDX | BPF_MEM:
+				seen |= SEEN_MEM | SEEN_XREG;
 				emit_ldmem(K * 4, r_X);
 				break;
 			case BPF_ST:
+				seen |= SEEN_MEM;
 				emit_stmem(K * 4, r_A);
 				break;
 			case BPF_STX:
+				seen |= SEEN_MEM | SEEN_XREG;
 				emit_stmem(K * 4, r_X);
 				break;


@@ -30,33 +30,6 @@
 #include <asm/boot.h>
 #include <asm/asm-offsets.h>
 
-/*
- * Adjust our own GOT
- *
- * The relocation base must be in %ebx
- *
- * It is safe to call this macro more than once, because in some of the
- * code paths multiple invocations are inevitable, e.g. via the efi*
- * entry points.
- *
- * Relocation is only performed the first time.
- */
-	.macro FIXUP_GOT
-	cmpb	$1, got_fixed(%ebx)
-	je	2f
-
-	leal	_got(%ebx), %edx
-	leal	_egot(%ebx), %ecx
-1:
-	cmpl	%ecx, %edx
-	jae	2f
-	addl	%ebx, (%edx)
-	addl	$4, %edx
-	jmp	1b
-2:
-	movb	$1, got_fixed(%ebx)
-	.endm
-
 	__HEAD
 ENTRY(startup_32)
 #ifdef CONFIG_EFI_STUB
@@ -83,9 +56,6 @@ ENTRY(efi_pe_entry)
 	add	%esi, 88(%eax)
 	pushl	%eax
 
-	movl	%esi, %ebx
-	FIXUP_GOT
-
 	call	make_boot_params
 	cmpl	$0, %eax
 	je	fail
@@ -111,10 +81,6 @@ ENTRY(efi32_stub_entry)
 	leal	efi32_config(%esi), %eax
 	add	%esi, 88(%eax)
 	pushl	%eax
-
-	movl	%esi, %ebx
-	FIXUP_GOT
-
 2:
 	call	efi_main
 	cmpl	$0, %eax
@@ -224,7 +190,19 @@ relocated:
 	shrl	$2, %ecx
 	rep	stosl
 
-	FIXUP_GOT
+/*
+ * Adjust our own GOT
+ */
+	leal	_got(%ebx), %edx
+	leal	_egot(%ebx), %ecx
+1:
+	cmpl	%ecx, %edx
+	jae	2f
+	addl	%ebx, (%edx)
+	addl	$4, %edx
+	jmp	1b
+2:
+
 /*
  * Do the decompression, and jump to the new kernel..
  */
@@ -247,12 +225,8 @@ relocated:
 	xorl	%ebx, %ebx
 	jmp	*%eax
 
-	.data
-/* Have we relocated the GOT? */
-got_fixed:
-	.byte	0
-
 #ifdef CONFIG_EFI_STUB
+	.data
 efi32_config:
 	.fill 11,8,0
 	.long efi_call_phys


@@ -32,33 +32,6 @@
 #include <asm/processor-flags.h>
 #include <asm/asm-offsets.h>
 
-/*
- * Adjust our own GOT
- *
- * The relocation base must be in %rbx
- *
- * It is safe to call this macro more than once, because in some of the
- * code paths multiple invocations are inevitable, e.g. via the efi*
- * entry points.
- *
- * Relocation is only performed the first time.
- */
-	.macro FIXUP_GOT
-	cmpb	$1, got_fixed(%rip)
-	je	2f
-
-	leaq	_got(%rip), %rdx
-	leaq	_egot(%rip), %rcx
-1:
-	cmpq	%rcx, %rdx
-	jae	2f
-	addq	%rbx, (%rdx)
-	addq	$8, %rdx
-	jmp	1b
-2:
-	movb	$1, got_fixed(%rip)
-	.endm
-
 	__HEAD
 	.code32
 ENTRY(startup_32)
@@ -279,13 +252,10 @@ ENTRY(efi_pe_entry)
 	subq	$1b, %rbp
 
 	/*
-	 * Relocate efi_config->call() and the GOT entries.
+	 * Relocate efi_config->call().
 	 */
 	addq	%rbp, efi64_config+88(%rip)
 
-	movq	%rbp, %rbx
-	FIXUP_GOT
-
 	movq	%rax, %rdi
 	call	make_boot_params
 	cmpq	$0,%rax
@@ -301,13 +271,10 @@ handover_entry:
 	subq	$1b, %rbp
 
 	/*
-	 * Relocate efi_config->call() and the GOT entries.
+	 * Relocate efi_config->call().
 	 */
 	movq	efi_config(%rip), %rax
 	addq	%rbp, 88(%rax)
-
-	movq	%rbp, %rbx
-	FIXUP_GOT
 2:
 	movq	efi_config(%rip), %rdi
 	call	efi_main
@@ -418,7 +385,18 @@ relocated:
 	shrq	$3, %rcx
 	rep	stosq
 
-	FIXUP_GOT
+/*
+ * Adjust our own GOT
+ */
+	leaq	_got(%rip), %rdx
+	leaq	_egot(%rip), %rcx
+1:
+	cmpq	%rcx, %rdx
+	jae	2f
+	addq	%rbx, (%rdx)
+	addq	$8, %rdx
+	jmp	1b
+2:
 
 /*
  * Do the decompression, and jump to the new kernel..
@@ -459,10 +437,6 @@ gdt:
 	.quad	0x0000000000000000	/* TS continued */
 gdt_end:
 
-/* Have we relocated the GOT? */
-got_fixed:
-	.byte	0
-
 #ifdef CONFIG_EFI_STUB
 efi_config:
 	.quad	0


@@ -56,6 +56,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 	bool is_pm_resume;
 
 	WARN_ON(irqs_disabled());
+	WARN_ON(rq->cmd_type == REQ_TYPE_FS);
 
 	rq->rq_disk = bd_disk;
 	rq->end_io = done;


@@ -203,7 +203,6 @@ __blk_mq_alloc_request(struct blk_mq_alloc_data *data, int rw)
 
 	if (tag != BLK_MQ_TAG_FAIL) {
 		rq = data->hctx->tags->rqs[tag];
 
-		rq->cmd_flags = 0;
 		if (blk_mq_tag_busy(data->hctx)) {
 			rq->cmd_flags = REQ_MQ_INFLIGHT;
 			atomic_inc(&data->hctx->nr_active);
@@ -258,6 +257,7 @@ static void __blk_mq_free_request(struct blk_mq_hw_ctx *hctx,
 
 	if (rq->cmd_flags & REQ_MQ_INFLIGHT)
 		atomic_dec(&hctx->nr_active);
+	rq->cmd_flags = 0;
 
 	clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
 	blk_mq_put_tag(hctx, tag, &ctx->last_tag);
@@ -392,6 +392,12 @@ static void blk_mq_start_request(struct request *rq, bool last)
 
 	blk_add_timer(rq);
 
+	/*
+	 * Ensure that ->deadline is visible before set the started
+	 * flag and clear the completed flag.
+	 */
+	smp_mb__before_atomic();
+
 	/*
 	 * Mark us as started and clear complete. Complete might have been
 	 * set if requeue raced with timeout, which then marked it as
@@ -473,7 +479,11 @@ static void blk_mq_requeue_work(struct work_struct *work)
 		blk_mq_insert_request(rq, false, false, false);
 	}
 
-	blk_mq_run_queues(q, false);
+	/*
+	 * Use the start variant of queue running here, so that running
+	 * the requeue work will kick stopped queues.
+	 */
+	blk_mq_start_hw_queues(q);
 }
 
 void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
@@ -957,14 +967,9 @@ void blk_mq_insert_request(struct request *rq, bool at_head, bool run_queue,
 
 	hctx = q->mq_ops->map_queue(q, ctx->cpu);
 
-	if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA) &&
-	    !(rq->cmd_flags & (REQ_FLUSH_SEQ))) {
-		blk_insert_flush(rq);
-	} else {
-		spin_lock(&ctx->lock);
-		__blk_mq_insert_request(hctx, rq, at_head);
-		spin_unlock(&ctx->lock);
-	}
+	spin_lock(&ctx->lock);
+	__blk_mq_insert_request(hctx, rq, at_head);
+	spin_unlock(&ctx->lock);
 
 	if (run_queue)
 		blk_mq_run_hw_queue(hctx, async);
@@ -1404,6 +1409,8 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set,
 		left -= to_do * rq_size;
 		for (j = 0; j < to_do; j++) {
 			tags->rqs[i] = p;
+			tags->rqs[i]->atomic_flags = 0;
+			tags->rqs[i]->cmd_flags = 0;
 			if (set->ops->init_request) {
 				if (set->ops->init_request(set->driver_data,
 						tags->rqs[i], hctx_idx, i,
@@ -1956,7 +1963,6 @@ out_unwind:
 	while (--i >= 0)
 		blk_mq_free_rq_map(set, set->tags[i], i);
 
-	set->tags = NULL;
 	return -ENOMEM;
 }


@@ -445,8 +445,6 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt)
  */
 void blk_free_devt(dev_t devt)
 {
-	might_sleep();
-
 	if (devt == MKDEV(0, 0))
 		return;


@@ -447,7 +447,7 @@ void __init of_at91sam9260_clk_slow_setup(struct device_node *np,
 	int i;
 
 	num_parents = of_count_phandle_with_args(np, "clocks", "#clock-cells");
-	if (num_parents <= 0 || num_parents > 1)
+	if (num_parents != 2)
 		return;
 
 	for (i = 0; i < num_parents; ++i) {


@@ -22,7 +22,7 @@ static struct clk_onecell_data clk_data = {
 	.clk_num = ARRAY_SIZE(clk),
 };
 
-static int __init efm32gg_cmu_init(struct device_node *np)
+static void __init efm32gg_cmu_init(struct device_node *np)
 {
 	int i;
 	void __iomem *base;
@@ -33,7 +33,7 @@ static void __init efm32gg_cmu_init(struct device_node *np)
 	base = of_iomap(np, 0);
 	if (!base) {
 		pr_warn("Failed to map address range for efm32gg,cmu node\n");
-		return -EADDRNOTAVAIL;
+		return;
 	}
 
 	clk[clk_HFXO] = clk_register_fixed_rate(NULL, "HFXO", NULL,
@@ -76,6 +76,6 @@ static void __init efm32gg_cmu_init(struct device_node *np)
 	clk[clk_HFPERCLKDAC0] = clk_register_gate(NULL, "HFPERCLK.DAC0",
 			"HFXO", 0, base + CMU_HFPERCLKEN0, 17, 0, NULL);
 
-	return of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+	of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
 }
 CLK_OF_DECLARE(efm32ggcmu, "efm32gg,cmu", efm32gg_cmu_init);


@@ -1467,6 +1467,7 @@ static struct clk *clk_propagate_rate_change(struct clk *clk, unsigned long even
 static void clk_change_rate(struct clk *clk)
 {
 	struct clk *child;
+	struct hlist_node *tmp;
 	unsigned long old_rate;
 	unsigned long best_parent_rate = 0;
 	bool skip_set_rate = false;
@@ -1502,7 +1503,11 @@ static void clk_change_rate(struct clk *clk)
 	if (clk->notifier_count && old_rate != clk->rate)
 		__clk_notify(clk, POST_RATE_CHANGE, old_rate, clk->rate);
 
-	hlist_for_each_entry(child, &clk->children, child_node) {
+	/*
+	 * Use safe iteration, as change_rate can actually swap parents
+	 * for certain clock types.
+	 */
+	hlist_for_each_entry_safe(child, tmp, &clk->children, child_node) {
 		/* Skip children who will be reparented to another clock */
 		if (child->new_parent && child->new_parent != clk)
 			continue;


@@ -1095,7 +1095,7 @@ static struct clk_branch prng_clk = {
 };
 
 static const struct freq_tbl clk_tbl_sdc[] = {
-	{    144000, P_PXO,   5, 18, 625 },
+	{    200000, P_PXO,   2, 2, 125 },
 	{    400000, P_PLL8,  4, 1, 240 },
 	{  16000000, P_PLL8,  4, 1,   6 },
 	{  17070000, P_PLL8,  1, 2,  45 },


@@ -545,7 +545,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
 	GATE(PCLK_PWM, "pclk_pwm", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 0, GFLAGS),
 	GATE(PCLK_TIMER, "pclk_timer", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 1, GFLAGS),
 	GATE(PCLK_I2C0, "pclk_i2c0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 2, GFLAGS),
-	GATE(PCLK_I2C1, "pclk_i2c1", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 3, GFLAGS),
+	GATE(PCLK_I2C2, "pclk_i2c2", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 3, GFLAGS),
 	GATE(0, "pclk_ddrupctl0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 14, GFLAGS),
 	GATE(0, "pclk_publ0", "pclk_cpu", 0, RK3288_CLKGATE_CON(10), 15, GFLAGS),
 	GATE(0, "pclk_ddrupctl1", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 0, GFLAGS),
@@ -603,7 +603,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
 	GATE(PCLK_I2C4, "pclk_i2c4", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 15, GFLAGS),
 	GATE(PCLK_UART3, "pclk_uart3", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 11, GFLAGS),
 	GATE(PCLK_UART4, "pclk_uart4", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 12, GFLAGS),
-	GATE(PCLK_I2C2, "pclk_i2c2", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 13, GFLAGS),
+	GATE(PCLK_I2C1, "pclk_i2c1", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 13, GFLAGS),
 	GATE(PCLK_I2C3, "pclk_i2c3", "pclk_peri", 0, RK3288_CLKGATE_CON(6), 14, GFLAGS),
 	GATE(PCLK_SARADC, "pclk_saradc", "pclk_peri", 0, RK3288_CLKGATE_CON(7), 1, GFLAGS),
 	GATE(PCLK_TSADC, "pclk_tsadc", "pclk_peri", 0, RK3288_CLKGATE_CON(7), 2, GFLAGS),


@@ -139,9 +139,13 @@ static long atl_clk_round_rate(struct clk_hw *hw, unsigned long rate,
 static int atl_clk_set_rate(struct clk_hw *hw, unsigned long rate,
 			    unsigned long parent_rate)
 {
-	struct dra7_atl_desc *cdesc = to_atl_desc(hw);
+	struct dra7_atl_desc *cdesc;
 	u32 divider;
 
+	if (!hw || !rate)
+		return -EINVAL;
+
+	cdesc = to_atl_desc(hw);
 	divider = ((parent_rate + rate / 2) / rate) - 1;
 	if (divider > DRA7_ATL_DIVIDER_MASK)
 		divider = DRA7_ATL_DIVIDER_MASK;


@@ -211,11 +211,16 @@ static long ti_clk_divider_round_rate(struct clk_hw *hw, unsigned long rate,
 static int ti_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
 				   unsigned long parent_rate)
 {
-	struct clk_divider *divider = to_clk_divider(hw);
+	struct clk_divider *divider;
 	unsigned int div, value;
 	unsigned long flags = 0;
 	u32 val;
 
+	if (!hw || !rate)
+		return -EINVAL;
+
+	divider = to_clk_divider(hw);
 	div = DIV_ROUND_UP(parent_rate, rate);
 	value = _get_val(divider, div);


@@ -93,13 +93,29 @@ static ssize_t show_power_crit(struct device *dev,
 }
 static DEVICE_ATTR(power1_crit, S_IRUGO, show_power_crit, NULL);
 
+static umode_t fam15h_power_is_visible(struct kobject *kobj,
+				       struct attribute *attr,
+				       int index)
+{
+	/* power1_input is only reported for Fam15h, Models 00h-0fh */
+	if (attr == &dev_attr_power1_input.attr &&
+	    (boot_cpu_data.x86 != 0x15 || boot_cpu_data.x86_model > 0xf))
+		return 0;
+
+	return attr->mode;
+}
+
 static struct attribute *fam15h_power_attrs[] = {
 	&dev_attr_power1_input.attr,
 	&dev_attr_power1_crit.attr,
 	NULL
 };
 
-ATTRIBUTE_GROUPS(fam15h_power);
+static const struct attribute_group fam15h_power_group = {
+	.attrs = fam15h_power_attrs,
+	.is_visible = fam15h_power_is_visible,
+};
+__ATTRIBUTE_GROUPS(fam15h_power);
 
 static bool fam15h_power_is_internal_node0(struct pci_dev *f4)
 {
@@ -216,7 +232,9 @@ static int fam15h_power_probe(struct pci_dev *pdev,
 
 static const struct pci_device_id fam15h_power_id_table[] = {
 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) },
+	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M30H_NB_F4) },
 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) },
+	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) },
 	{}
 };
 MODULE_DEVICE_TABLE(pci, fam15h_power_id_table);


@@ -145,7 +145,7 @@ static int tmp103_probe(struct i2c_client *client,
 	}
 
 	i2c_set_clientdata(client, regmap);
-	hwmon_dev = hwmon_device_register_with_groups(dev, client->name,
+	hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
 						      regmap, tmp103_groups);
 	return PTR_ERR_OR_ZERO(hwmon_dev);
 }


@@ -105,6 +105,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 	umem->length    = size;
 	umem->offset    = addr & ~PAGE_MASK;
 	umem->page_size = PAGE_SIZE;
+	umem->pid       = get_task_pid(current, PIDTYPE_PID);
 	/*
 	 * We ask for writable memory if any access flags other than
 	 * "remote read" are set.  "Local write" and "remote write"
@@ -198,6 +199,7 @@ out:
 	if (ret < 0) {
 		if (need_release)
 			__ib_umem_release(context->device, umem, 0);
+		put_pid(umem->pid);
 		kfree(umem);
 	} else
 		current->mm->pinned_vm = locked;
@@ -230,15 +232,19 @@ void ib_umem_release(struct ib_umem *umem)
 {
 	struct ib_ucontext *context = umem->context;
 	struct mm_struct *mm;
+	struct task_struct *task;
 	unsigned long diff;
 
 	__ib_umem_release(umem->context->device, umem, 1);
 
-	mm = get_task_mm(current);
-	if (!mm) {
-		kfree(umem);
-		return;
-	}
+	task = get_pid_task(umem->pid, PIDTYPE_PID);
+	put_pid(umem->pid);
+	if (!task)
+		goto out;
+	mm = get_task_mm(task);
+	put_task_struct(task);
+	if (!mm)
+		goto out;
 
 	diff = PAGE_ALIGN(umem->length + umem->offset) >> PAGE_SHIFT;
 
@@ -262,9 +268,10 @@ void ib_umem_release(struct ib_umem *umem)
 	} else
 		down_write(&mm->mmap_sem);
 
-	current->mm->pinned_vm -= diff;
+	mm->pinned_vm -= diff;
 	up_write(&mm->mmap_sem);
 	mmput(mm);
+out:
 	kfree(umem);
 }
 EXPORT_SYMBOL(ib_umem_release);


@@ -140,5 +140,9 @@ void ib_copy_path_rec_from_user(struct ib_sa_path_rec *dst,
 	dst->packet_life_time	= src->packet_life_time;
 	dst->preference		= src->preference;
 	dst->packet_life_time_selector = src->packet_life_time_selector;
+
+	memset(dst->smac, 0, sizeof(dst->smac));
+	memset(dst->dmac, 0, sizeof(dst->dmac));
+	dst->vlan_id = 0xffff;
 }
 EXPORT_SYMBOL(ib_copy_path_rec_from_user);


@@ -54,7 +54,7 @@ static void __ipath_release_user_pages(struct page **p, size_t num_pages,
 
 /* call with current->mm->mmap_sem held */
 static int __ipath_get_user_pages(unsigned long start_page, size_t num_pages,
-				  struct page **p, struct vm_area_struct **vma)
+				  struct page **p)
 {
 	unsigned long lock_limit;
 	size_t got;
@@ -74,7 +74,7 @@ static int __ipath_get_user_pages(unsigned long start_page, size_t num_pages,
 		ret = get_user_pages(current, current->mm,
 				     start_page + got * PAGE_SIZE,
 				     num_pages - got, 1, 1,
-				     p + got, vma);
+				     p + got, NULL);
 		if (ret < 0)
 			goto bail_release;
 	}
@@ -165,7 +165,7 @@ int ipath_get_user_pages(unsigned long start_page, size_t num_pages,
 
 	down_write(&current->mm->mmap_sem);
 
-	ret = __ipath_get_user_pages(start_page, num_pages, p, NULL);
+	ret = __ipath_get_user_pages(start_page, num_pages, p);
 
 	up_write(&current->mm->mmap_sem);


@ -59,6 +59,7 @@
#define MLX4_IB_FLOW_MAX_PRIO 0xFFF #define MLX4_IB_FLOW_MAX_PRIO 0xFFF
#define MLX4_IB_FLOW_QPN_MASK 0xFFFFFF #define MLX4_IB_FLOW_QPN_MASK 0xFFFFFF
#define MLX4_IB_CARD_REV_A0 0xA0
MODULE_AUTHOR("Roland Dreier"); MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("Mellanox ConnectX HCA InfiniBand driver"); MODULE_DESCRIPTION("Mellanox ConnectX HCA InfiniBand driver");
@ -119,6 +120,17 @@ static int check_flow_steering_support(struct mlx4_dev *dev)
return dmfs; return dmfs;
} }
static int num_ib_ports(struct mlx4_dev *dev)
{
int ib_ports = 0;
int i;
mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB)
ib_ports++;
return ib_ports;
}
static int mlx4_ib_query_device(struct ib_device *ibdev, static int mlx4_ib_query_device(struct ib_device *ibdev,
struct ib_device_attr *props) struct ib_device_attr *props)
{ {
@ -126,6 +138,7 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
struct ib_smp *in_mad = NULL; struct ib_smp *in_mad = NULL;
struct ib_smp *out_mad = NULL; struct ib_smp *out_mad = NULL;
int err = -ENOMEM; int err = -ENOMEM;
int have_ib_ports;
in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL); in_mad = kzalloc(sizeof *in_mad, GFP_KERNEL);
out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL); out_mad = kmalloc(sizeof *out_mad, GFP_KERNEL);
@ -142,6 +155,8 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
memset(props, 0, sizeof *props); memset(props, 0, sizeof *props);
have_ib_ports = num_ib_ports(dev->dev);
props->fw_ver = dev->dev->caps.fw_ver; props->fw_ver = dev->dev->caps.fw_ver;
props->device_cap_flags = IB_DEVICE_CHANGE_PHY_PORT | props->device_cap_flags = IB_DEVICE_CHANGE_PHY_PORT |
IB_DEVICE_PORT_ACTIVE_EVENT | IB_DEVICE_PORT_ACTIVE_EVENT |
@ -152,13 +167,15 @@ static int mlx4_ib_query_device(struct ib_device *ibdev,
props->device_cap_flags |= IB_DEVICE_BAD_PKEY_CNTR; props->device_cap_flags |= IB_DEVICE_BAD_PKEY_CNTR;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BAD_QKEY_CNTR) if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BAD_QKEY_CNTR)
props->device_cap_flags |= IB_DEVICE_BAD_QKEY_CNTR; props->device_cap_flags |= IB_DEVICE_BAD_QKEY_CNTR;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_APM) if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_APM && have_ib_ports)
props->device_cap_flags |= IB_DEVICE_AUTO_PATH_MIG; props->device_cap_flags |= IB_DEVICE_AUTO_PATH_MIG;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_UD_AV_PORT) if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_UD_AV_PORT)
props->device_cap_flags |= IB_DEVICE_UD_AV_PORT_ENFORCE; props->device_cap_flags |= IB_DEVICE_UD_AV_PORT_ENFORCE;
if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_IPOIB_CSUM) if (dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_IPOIB_CSUM)
props->device_cap_flags |= IB_DEVICE_UD_IP_CSUM; props->device_cap_flags |= IB_DEVICE_UD_IP_CSUM;
if (dev->dev->caps.max_gso_sz && dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BLH) if (dev->dev->caps.max_gso_sz &&
(dev->dev->rev_id != MLX4_IB_CARD_REV_A0) &&
(dev->dev->caps.flags & MLX4_DEV_CAP_FLAG_BLH))
props->device_cap_flags |= IB_DEVICE_UD_TSO; props->device_cap_flags |= IB_DEVICE_UD_TSO;
if (dev->dev->caps.bmme_flags & MLX4_BMME_FLAG_RESERVED_LKEY) if (dev->dev->caps.bmme_flags & MLX4_BMME_FLAG_RESERVED_LKEY)
props->device_cap_flags |= IB_DEVICE_LOCAL_DMA_LKEY; props->device_cap_flags |= IB_DEVICE_LOCAL_DMA_LKEY;
@ -357,7 +374,7 @@ static int eth_link_query_port(struct ib_device *ibdev, u8 port,
props->state = IB_PORT_DOWN; props->state = IB_PORT_DOWN;
props->phys_state = state_to_phys_state(props->state); props->phys_state = state_to_phys_state(props->state);
props->active_mtu = IB_MTU_256; props->active_mtu = IB_MTU_256;
spin_lock(&iboe->lock); spin_lock_bh(&iboe->lock);
ndev = iboe->netdevs[port - 1]; ndev = iboe->netdevs[port - 1];
if (!ndev) if (!ndev)
goto out_unlock; goto out_unlock;
@ -369,7 +386,7 @@ static int eth_link_query_port(struct ib_device *ibdev, u8 port,
IB_PORT_ACTIVE : IB_PORT_DOWN; IB_PORT_ACTIVE : IB_PORT_DOWN;
props->phys_state = state_to_phys_state(props->state); props->phys_state = state_to_phys_state(props->state);
out_unlock: out_unlock:
spin_unlock(&iboe->lock); spin_unlock_bh(&iboe->lock);
out: out:
mlx4_free_cmd_mailbox(mdev->dev, mailbox); mlx4_free_cmd_mailbox(mdev->dev, mailbox);
return err; return err;
@ -811,11 +828,11 @@ int mlx4_ib_add_mc(struct mlx4_ib_dev *mdev, struct mlx4_ib_qp *mqp,
if (!mqp->port) if (!mqp->port)
return 0; return 0;
spin_lock(&mdev->iboe.lock); spin_lock_bh(&mdev->iboe.lock);
ndev = mdev->iboe.netdevs[mqp->port - 1]; ndev = mdev->iboe.netdevs[mqp->port - 1];
if (ndev) if (ndev)
dev_hold(ndev); dev_hold(ndev);
spin_unlock(&mdev->iboe.lock); spin_unlock_bh(&mdev->iboe.lock);
if (ndev) { if (ndev) {
ret = 1; ret = 1;
@ -1292,11 +1309,11 @@ static int mlx4_ib_mcg_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
mutex_lock(&mqp->mutex); mutex_lock(&mqp->mutex);
ge = find_gid_entry(mqp, gid->raw); ge = find_gid_entry(mqp, gid->raw);
if (ge) { if (ge) {
spin_lock(&mdev->iboe.lock); spin_lock_bh(&mdev->iboe.lock);
ndev = ge->added ? mdev->iboe.netdevs[ge->port - 1] : NULL; ndev = ge->added ? mdev->iboe.netdevs[ge->port - 1] : NULL;
if (ndev) if (ndev)
dev_hold(ndev); dev_hold(ndev);
spin_unlock(&mdev->iboe.lock); spin_unlock_bh(&mdev->iboe.lock);
if (ndev) if (ndev)
dev_put(ndev); dev_put(ndev);
list_del(&ge->list); list_del(&ge->list);
@ -1417,6 +1434,9 @@ static void update_gids_task(struct work_struct *work)
int err; int err;
struct mlx4_dev *dev = gw->dev->dev; struct mlx4_dev *dev = gw->dev->dev;
if (!gw->dev->ib_active)
return;
mailbox = mlx4_alloc_cmd_mailbox(dev); mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox)) { if (IS_ERR(mailbox)) {
pr_warn("update gid table failed %ld\n", PTR_ERR(mailbox)); pr_warn("update gid table failed %ld\n", PTR_ERR(mailbox));
@ -1447,6 +1467,9 @@ static void reset_gids_task(struct work_struct *work)
int err; int err;
struct mlx4_dev *dev = gw->dev->dev; struct mlx4_dev *dev = gw->dev->dev;
if (!gw->dev->ib_active)
return;
mailbox = mlx4_alloc_cmd_mailbox(dev); mailbox = mlx4_alloc_cmd_mailbox(dev);
if (IS_ERR(mailbox)) { if (IS_ERR(mailbox)) {
pr_warn("reset gid table failed\n"); pr_warn("reset gid table failed\n");
@ -1581,7 +1604,7 @@ static int mlx4_ib_addr_event(int event, struct net_device *event_netdev,
return 0; return 0;
iboe = &ibdev->iboe; iboe = &ibdev->iboe;
spin_lock(&iboe->lock); spin_lock_bh(&iboe->lock);
for (port = 1; port <= ibdev->dev->caps.num_ports; ++port) for (port = 1; port <= ibdev->dev->caps.num_ports; ++port)
if ((netif_is_bond_master(real_dev) && if ((netif_is_bond_master(real_dev) &&
@ -1591,7 +1614,7 @@ static int mlx4_ib_addr_event(int event, struct net_device *event_netdev,
update_gid_table(ibdev, port, gid, update_gid_table(ibdev, port, gid,
event == NETDEV_DOWN, 0); event == NETDEV_DOWN, 0);
spin_unlock(&iboe->lock); spin_unlock_bh(&iboe->lock);
return 0; return 0;
} }
@ -1664,13 +1687,21 @@ static void mlx4_ib_update_qps(struct mlx4_ib_dev *ibdev,
new_smac = mlx4_mac_to_u64(dev->dev_addr); new_smac = mlx4_mac_to_u64(dev->dev_addr);
read_unlock(&dev_base_lock); read_unlock(&dev_base_lock);
atomic64_set(&ibdev->iboe.mac[port - 1], new_smac);
/* no need for update QP1 and mac registration in non-SRIOV */
if (!mlx4_is_mfunc(ibdev->dev))
return;
mutex_lock(&ibdev->qp1_proxy_lock[port - 1]); mutex_lock(&ibdev->qp1_proxy_lock[port - 1]);
qp = ibdev->qp1_proxy[port - 1]; qp = ibdev->qp1_proxy[port - 1];
if (qp) { if (qp) {
int new_smac_index; int new_smac_index;
u64 old_smac = qp->pri.smac; u64 old_smac;
struct mlx4_update_qp_params update_params; struct mlx4_update_qp_params update_params;
mutex_lock(&qp->mutex);
old_smac = qp->pri.smac;
if (new_smac == old_smac) if (new_smac == old_smac)
goto unlock; goto unlock;
@ -1680,22 +1711,25 @@ static void mlx4_ib_update_qps(struct mlx4_ib_dev *ibdev,
goto unlock; goto unlock;
update_params.smac_index = new_smac_index; update_params.smac_index = new_smac_index;
if (mlx4_update_qp(ibdev->dev, &qp->mqp, MLX4_UPDATE_QP_SMAC, if (mlx4_update_qp(ibdev->dev, qp->mqp.qpn, MLX4_UPDATE_QP_SMAC,
&update_params)) { &update_params)) {
release_mac = new_smac; release_mac = new_smac;
goto unlock; goto unlock;
} }
/* if old port was zero, no mac was yet registered for this QP */
qp->pri.smac = new_smac; if (qp->pri.smac_port)
qp->pri.smac_index = new_smac_index;
release_mac = old_smac; release_mac = old_smac;
qp->pri.smac = new_smac;
qp->pri.smac_port = port;
qp->pri.smac_index = new_smac_index;
} }
unlock: unlock:
mutex_unlock(&ibdev->qp1_proxy_lock[port - 1]);
if (release_mac != MLX4_IB_INVALID_MAC) if (release_mac != MLX4_IB_INVALID_MAC)
mlx4_unregister_mac(ibdev->dev, port, release_mac); mlx4_unregister_mac(ibdev->dev, port, release_mac);
if (qp)
mutex_unlock(&qp->mutex);
mutex_unlock(&ibdev->qp1_proxy_lock[port - 1]);
} }
static void mlx4_ib_get_dev_addr(struct net_device *dev, static void mlx4_ib_get_dev_addr(struct net_device *dev,
@ -1706,6 +1740,7 @@ static void mlx4_ib_get_dev_addr(struct net_device *dev,
struct inet6_dev *in6_dev; struct inet6_dev *in6_dev;
union ib_gid *pgid; union ib_gid *pgid;
struct inet6_ifaddr *ifp; struct inet6_ifaddr *ifp;
union ib_gid default_gid;
#endif #endif
union ib_gid gid; union ib_gid gid;
@ -1726,12 +1761,15 @@ static void mlx4_ib_get_dev_addr(struct net_device *dev,
in_dev_put(in_dev); in_dev_put(in_dev);
} }
#if IS_ENABLED(CONFIG_IPV6) #if IS_ENABLED(CONFIG_IPV6)
mlx4_make_default_gid(dev, &default_gid);
/* IPv6 gids */ /* IPv6 gids */
in6_dev = in6_dev_get(dev); in6_dev = in6_dev_get(dev);
if (in6_dev) { if (in6_dev) {
read_lock_bh(&in6_dev->lock); read_lock_bh(&in6_dev->lock);
list_for_each_entry(ifp, &in6_dev->addr_list, if_list) { list_for_each_entry(ifp, &in6_dev->addr_list, if_list) {
pgid = (union ib_gid *)&ifp->addr; pgid = (union ib_gid *)&ifp->addr;
if (!memcmp(pgid, &default_gid, sizeof(*pgid)))
continue;
update_gid_table(ibdev, port, pgid, 0, 0); update_gid_table(ibdev, port, pgid, 0, 0);
} }
read_unlock_bh(&in6_dev->lock); read_unlock_bh(&in6_dev->lock);
@@ -1753,24 +1791,33 @@ static int mlx4_ib_init_gid_table(struct mlx4_ib_dev *ibdev)
struct net_device *dev; struct net_device *dev;
struct mlx4_ib_iboe *iboe = &ibdev->iboe; struct mlx4_ib_iboe *iboe = &ibdev->iboe;
int i; int i;
int err = 0;
for (i = 1; i <= ibdev->num_ports; ++i) for (i = 1; i <= ibdev->num_ports; ++i) {
if (reset_gid_table(ibdev, i)) if (rdma_port_get_link_layer(&ibdev->ib_dev, i) ==
return -1; IB_LINK_LAYER_ETHERNET) {
err = reset_gid_table(ibdev, i);
if (err)
goto out;
}
}
read_lock(&dev_base_lock); read_lock(&dev_base_lock);
spin_lock(&iboe->lock); spin_lock_bh(&iboe->lock);
for_each_netdev(&init_net, dev) { for_each_netdev(&init_net, dev) {
u8 port = mlx4_ib_get_dev_port(dev, ibdev); u8 port = mlx4_ib_get_dev_port(dev, ibdev);
if (port) /* port will be non-zero only for ETH ports */
if (port) {
mlx4_ib_set_default_gid(ibdev, dev, port);
mlx4_ib_get_dev_addr(dev, ibdev, port); mlx4_ib_get_dev_addr(dev, ibdev, port);
} }
}
spin_unlock(&iboe->lock); spin_unlock_bh(&iboe->lock);
read_unlock(&dev_base_lock); read_unlock(&dev_base_lock);
out:
return 0; return err;
} }
static void mlx4_ib_scan_netdevs(struct mlx4_ib_dev *ibdev, static void mlx4_ib_scan_netdevs(struct mlx4_ib_dev *ibdev,
@@ -1784,7 +1831,7 @@ static void mlx4_ib_scan_netdevs(struct mlx4_ib_dev *ibdev,
iboe = &ibdev->iboe; iboe = &ibdev->iboe;
spin_lock(&iboe->lock); spin_lock_bh(&iboe->lock);
mlx4_foreach_ib_transport_port(port, ibdev->dev) { mlx4_foreach_ib_transport_port(port, ibdev->dev) {
enum ib_port_state port_state = IB_PORT_NOP; enum ib_port_state port_state = IB_PORT_NOP;
struct net_device *old_master = iboe->masters[port - 1]; struct net_device *old_master = iboe->masters[port - 1];
@@ -1816,35 +1863,47 @@ static void mlx4_ib_scan_netdevs(struct mlx4_ib_dev *ibdev,
 			port_state = (netif_running(curr_netdev) && netif_carrier_ok(curr_netdev)) ?
 						IB_PORT_ACTIVE : IB_PORT_DOWN;
 			mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
-		} else {
-			reset_gid_table(ibdev, port);
-		}
-		/* if using bonding/team and a slave port is down, we don't the bond IP
-		 * based gids in the table since flows that select port by gid may get
-		 * the down port.
-		 */
-		if (curr_master && (port_state == IB_PORT_DOWN)) {
-			reset_gid_table(ibdev, port);
-			mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
-		}
-		/* if bonding is used it is possible that we add it to masters
-		 * only after IP address is assigned to the net bonding
-		 * interface.
-		 */
-		if (curr_master && (old_master != curr_master)) {
-			reset_gid_table(ibdev, port);
-			mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
-			mlx4_ib_get_dev_addr(curr_master, ibdev, port);
-		}
-		if (!curr_master && (old_master != curr_master)) {
-			reset_gid_table(ibdev, port);
-			mlx4_ib_set_default_gid(ibdev, curr_netdev, port);
-			mlx4_ib_get_dev_addr(curr_netdev, ibdev, port);
-		}
+			if (curr_master) {
+				/* if using bonding/team and a slave port is down, we
+				 * don't want the bond IP based gids in the table since
+				 * flows that select port by gid may get the down port.
+				 */
+				if (port_state == IB_PORT_DOWN) {
+					reset_gid_table(ibdev, port);
+					mlx4_ib_set_default_gid(ibdev,
+								curr_netdev,
+								port);
+				} else {
+					/* gids from the upper dev (bond/team)
+					 * should appear in port's gid table
+					 */
+					mlx4_ib_get_dev_addr(curr_master,
+							     ibdev, port);
+				}
+			}
+			/* if bonding is used it is possible that we add it to
+			 * masters only after IP address is assigned to the
+			 * net bonding interface.
+			 */
+			if (curr_master && (old_master != curr_master)) {
+				reset_gid_table(ibdev, port);
+				mlx4_ib_set_default_gid(ibdev,
+							curr_netdev, port);
+				mlx4_ib_get_dev_addr(curr_master, ibdev, port);
+			}
+			if (!curr_master && (old_master != curr_master)) {
+				reset_gid_table(ibdev, port);
+				mlx4_ib_set_default_gid(ibdev,
+							curr_netdev, port);
+				mlx4_ib_get_dev_addr(curr_netdev, ibdev, port);
+			}
+		} else {
+			reset_gid_table(ibdev, port);
+		}
 	}
-	spin_unlock(&iboe->lock);
+	spin_unlock_bh(&iboe->lock);
if (update_qps_port > 0) if (update_qps_port > 0)
mlx4_ib_update_qps(ibdev, dev, update_qps_port); mlx4_ib_update_qps(ibdev, dev, update_qps_port);
@@ -2186,6 +2245,9 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
goto err_steer_free_bitmap; goto err_steer_free_bitmap;
} }
for (j = 1; j <= ibdev->dev->caps.num_ports; j++)
atomic64_set(&iboe->mac[j - 1], ibdev->dev->caps.def_mac[j]);
if (ib_register_device(&ibdev->ib_dev, NULL)) if (ib_register_device(&ibdev->ib_dev, NULL))
goto err_steer_free_bitmap; goto err_steer_free_bitmap;
@@ -2222,12 +2284,8 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
} }
} }
#endif #endif
for (i = 1 ; i <= ibdev->num_ports ; ++i) if (mlx4_ib_init_gid_table(ibdev))
reset_gid_table(ibdev, i); goto err_notif;
rtnl_lock();
mlx4_ib_scan_netdevs(ibdev, NULL, 0);
rtnl_unlock();
mlx4_ib_init_gid_table(ibdev);
} }
for (j = 0; j < ARRAY_SIZE(mlx4_class_attributes); ++j) { for (j = 0; j < ARRAY_SIZE(mlx4_class_attributes); ++j) {
@@ -2375,6 +2433,9 @@ static void mlx4_ib_remove(struct mlx4_dev *dev, void *ibdev_ptr)
struct mlx4_ib_dev *ibdev = ibdev_ptr; struct mlx4_ib_dev *ibdev = ibdev_ptr;
int p; int p;
ibdev->ib_active = false;
flush_workqueue(wq);
mlx4_ib_close_sriov(ibdev); mlx4_ib_close_sriov(ibdev);
mlx4_ib_mad_cleanup(ibdev); mlx4_ib_mad_cleanup(ibdev);
ib_unregister_device(&ibdev->ib_dev); ib_unregister_device(&ibdev->ib_dev);


@@ -451,6 +451,7 @@ struct mlx4_ib_iboe {
spinlock_t lock; spinlock_t lock;
struct net_device *netdevs[MLX4_MAX_PORTS]; struct net_device *netdevs[MLX4_MAX_PORTS];
struct net_device *masters[MLX4_MAX_PORTS]; struct net_device *masters[MLX4_MAX_PORTS];
atomic64_t mac[MLX4_MAX_PORTS];
struct notifier_block nb; struct notifier_block nb;
struct notifier_block nb_inet; struct notifier_block nb_inet;
struct notifier_block nb_inet6; struct notifier_block nb_inet6;
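The atomic64_t mac[MLX4_MAX_PORTS] field added above caches each port's current MAC as a u64 packed into a single word, so hot paths such as build_mlx_header() can fetch it with one atomic64_read() instead of dereferencing iboe->netdevs[] under a lock. A standalone sketch of the publish/consume idea; the names here are hypothetical, but the unpacking loop mirrors the new mlx4_u64_to_smac() helper further down:

#include <linux/atomic.h>
#include <linux/etherdevice.h>

static atomic64_t cached_mac;	/* one writer, many lockless readers */

static void publish_mac(const u8 *addr)
{
	u64 v = 0;
	int i;

	for (i = 0; i < ETH_ALEN; i++)
		v = (v << 8) | addr[i];
	atomic64_set(&cached_mac, v);	/* readers see old or new, never torn */
}

static void read_mac(u8 *dst)
{
	u64 v = atomic64_read(&cached_mac);
	int i;

	for (i = ETH_ALEN; i; i--) {	/* low byte is the last octet */
		dst[i - 1] = v & 0xff;
		v >>= 8;
	}
}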


@@ -234,14 +234,13 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags,
0); 0);
if (IS_ERR(mmr->umem)) { if (IS_ERR(mmr->umem)) {
err = PTR_ERR(mmr->umem); err = PTR_ERR(mmr->umem);
/* Prevent mlx4_ib_dereg_mr from free'ing invalid pointer */
mmr->umem = NULL; mmr->umem = NULL;
goto release_mpt_entry; goto release_mpt_entry;
} }
n = ib_umem_page_count(mmr->umem); n = ib_umem_page_count(mmr->umem);
shift = ilog2(mmr->umem->page_size); shift = ilog2(mmr->umem->page_size);
mmr->mmr.iova = virt_addr;
mmr->mmr.size = length;
err = mlx4_mr_rereg_mem_write(dev->dev, &mmr->mmr, err = mlx4_mr_rereg_mem_write(dev->dev, &mmr->mmr,
virt_addr, length, n, shift, virt_addr, length, n, shift,
*pmpt_entry); *pmpt_entry);
@@ -249,6 +248,8 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags,
ib_umem_release(mmr->umem); ib_umem_release(mmr->umem);
goto release_mpt_entry; goto release_mpt_entry;
} }
mmr->mmr.iova = virt_addr;
mmr->mmr.size = length;
err = mlx4_ib_umem_write_mtt(dev, &mmr->mmr.mtt, mmr->umem); err = mlx4_ib_umem_write_mtt(dev, &mmr->mmr.mtt, mmr->umem);
if (err) { if (err) {
@@ -262,6 +263,8 @@ int mlx4_ib_rereg_user_mr(struct ib_mr *mr, int flags,
* return a failure. But dereg_mr will free the resources. * return a failure. But dereg_mr will free the resources.
*/ */
err = mlx4_mr_hw_write_mpt(dev->dev, &mmr->mmr, pmpt_entry); err = mlx4_mr_hw_write_mpt(dev->dev, &mmr->mmr, pmpt_entry);
if (!err && flags & IB_MR_REREG_ACCESS)
mmr->mmr.access = mr_access_flags;
release_mpt_entry: release_mpt_entry:
mlx4_mr_hw_put_mpt(dev->dev, pmpt_entry); mlx4_mr_hw_put_mpt(dev->dev, pmpt_entry);
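The mr.c change above is an ordering fix: mmr->mmr.iova and mmr->mmr.size are now updated only after mlx4_mr_rereg_mem_write() succeeds, so a failed re-registration leaves the cached attributes describing the still-valid old region. The shape of the fix as a hedged sketch (my_mr and hw_write_translation() are stand-ins, not the real API):

#include <linux/types.h>

struct my_mr {
	u64 iova;
	u64 size;
};

/* stand-in for the device call that installs the new translation */
extern int hw_write_translation(struct my_mr *mr, u64 iova, u64 size);

static int rereg_commit(struct my_mr *mr, u64 new_iova, u64 new_size)
{
	int err = hw_write_translation(mr, new_iova, new_size);

	if (err)
		return err;	/* cached fields still describe the old MR */

	mr->iova = new_iova;	/* publish only after the device accepted it */
	mr->size = new_size;
	return 0;
}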


@@ -964,9 +964,10 @@ static void destroy_qp_common(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp,
MLX4_QP_STATE_RST, NULL, 0, 0, &qp->mqp)) MLX4_QP_STATE_RST, NULL, 0, 0, &qp->mqp))
pr_warn("modify QP %06x to RESET failed.\n", pr_warn("modify QP %06x to RESET failed.\n",
qp->mqp.qpn); qp->mqp.qpn);
if (qp->pri.smac) { if (qp->pri.smac || (!qp->pri.smac && qp->pri.smac_port)) {
mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac); mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac);
qp->pri.smac = 0; qp->pri.smac = 0;
qp->pri.smac_port = 0;
} }
if (qp->alt.smac) { if (qp->alt.smac) {
mlx4_unregister_mac(dev->dev, qp->alt.smac_port, qp->alt.smac); mlx4_unregister_mac(dev->dev, qp->alt.smac_port, qp->alt.smac);
@@ -1325,7 +1326,8 @@ static int _mlx4_set_path(struct mlx4_ib_dev *dev, const struct ib_ah_attr *ah,
* If one was already assigned, but the new mac differs, * If one was already assigned, but the new mac differs,
* unregister the old one and register the new one. * unregister the old one and register the new one.
*/ */
if (!smac_info->smac || smac_info->smac != smac) { if ((!smac_info->smac && !smac_info->smac_port) ||
smac_info->smac != smac) {
/* register candidate now, unreg if needed, after success */ /* register candidate now, unreg if needed, after success */
smac_index = mlx4_register_mac(dev->dev, port, smac); smac_index = mlx4_register_mac(dev->dev, port, smac);
if (smac_index >= 0) { if (smac_index >= 0) {
@@ -1390,21 +1392,13 @@ static void update_mcg_macs(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp)
static int handle_eth_ud_smac_index(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp, u8 *smac, static int handle_eth_ud_smac_index(struct mlx4_ib_dev *dev, struct mlx4_ib_qp *qp, u8 *smac,
struct mlx4_qp_context *context) struct mlx4_qp_context *context)
{ {
struct net_device *ndev;
u64 u64_mac; u64 u64_mac;
int smac_index; int smac_index;
u64_mac = atomic64_read(&dev->iboe.mac[qp->port - 1]);
ndev = dev->iboe.netdevs[qp->port - 1];
if (ndev) {
smac = ndev->dev_addr;
u64_mac = mlx4_mac_to_u64(smac);
} else {
u64_mac = dev->dev->caps.def_mac[qp->port];
}
context->pri_path.sched_queue = MLX4_IB_DEFAULT_SCHED_QUEUE | ((qp->port - 1) << 6); context->pri_path.sched_queue = MLX4_IB_DEFAULT_SCHED_QUEUE | ((qp->port - 1) << 6);
if (!qp->pri.smac) { if (!qp->pri.smac && !qp->pri.smac_port) {
smac_index = mlx4_register_mac(dev->dev, qp->port, u64_mac); smac_index = mlx4_register_mac(dev->dev, qp->port, u64_mac);
if (smac_index >= 0) { if (smac_index >= 0) {
qp->pri.candidate_smac_index = smac_index; qp->pri.candidate_smac_index = smac_index;
@@ -1432,6 +1426,12 @@ static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
int steer_qp = 0; int steer_qp = 0;
int err = -EINVAL; int err = -EINVAL;
/* APM is not supported under RoCE */
if (attr_mask & IB_QP_ALT_PATH &&
rdma_port_get_link_layer(&dev->ib_dev, qp->port) ==
IB_LINK_LAYER_ETHERNET)
return -ENOTSUPP;
context = kzalloc(sizeof *context, GFP_KERNEL); context = kzalloc(sizeof *context, GFP_KERNEL);
if (!context) if (!context)
return -ENOMEM; return -ENOMEM;
@@ -1682,7 +1682,7 @@ static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
MLX4_IB_LINK_TYPE_ETH; MLX4_IB_LINK_TYPE_ETH;
if (dev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) { if (dev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
/* set QP to receive both tunneled & non-tunneled packets */ /* set QP to receive both tunneled & non-tunneled packets */
if (!(context->flags & (1 << MLX4_RSS_QPC_FLAG_OFFSET))) if (!(context->flags & cpu_to_be32(1 << MLX4_RSS_QPC_FLAG_OFFSET)))
context->srqn = cpu_to_be32(7 << 28); context->srqn = cpu_to_be32(7 << 28);
} }
} }
@@ -1786,9 +1786,10 @@ static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
if (qp->flags & MLX4_IB_QP_NETIF) if (qp->flags & MLX4_IB_QP_NETIF)
mlx4_ib_steer_qp_reg(dev, qp, 0); mlx4_ib_steer_qp_reg(dev, qp, 0);
} }
if (qp->pri.smac) { if (qp->pri.smac || (!qp->pri.smac && qp->pri.smac_port)) {
mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac); mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac);
qp->pri.smac = 0; qp->pri.smac = 0;
qp->pri.smac_port = 0;
} }
if (qp->alt.smac) { if (qp->alt.smac) {
mlx4_unregister_mac(dev->dev, qp->alt.smac_port, qp->alt.smac); mlx4_unregister_mac(dev->dev, qp->alt.smac_port, qp->alt.smac);
@@ -1812,11 +1813,12 @@ out:
if (err && steer_qp) if (err && steer_qp)
mlx4_ib_steer_qp_reg(dev, qp, 0); mlx4_ib_steer_qp_reg(dev, qp, 0);
kfree(context); kfree(context);
if (qp->pri.candidate_smac) { if (qp->pri.candidate_smac ||
(!qp->pri.candidate_smac && qp->pri.candidate_smac_port)) {
if (err) { if (err) {
mlx4_unregister_mac(dev->dev, qp->pri.candidate_smac_port, qp->pri.candidate_smac); mlx4_unregister_mac(dev->dev, qp->pri.candidate_smac_port, qp->pri.candidate_smac);
} else { } else {
if (qp->pri.smac) if (qp->pri.smac || (!qp->pri.smac && qp->pri.smac_port))
mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac); mlx4_unregister_mac(dev->dev, qp->pri.smac_port, qp->pri.smac);
qp->pri.smac = qp->pri.candidate_smac; qp->pri.smac = qp->pri.candidate_smac;
qp->pri.smac_index = qp->pri.candidate_smac_index; qp->pri.smac_index = qp->pri.candidate_smac_index;
@@ -2089,6 +2091,16 @@ static int build_sriov_qp0_header(struct mlx4_ib_sqp *sqp,
return 0; return 0;
} }
static void mlx4_u64_to_smac(u8 *dst_mac, u64 src_mac)
{
int i;
for (i = ETH_ALEN; i; i--) {
dst_mac[i - 1] = src_mac & 0xff;
src_mac >>= 8;
}
}
static int build_mlx_header(struct mlx4_ib_sqp *sqp, struct ib_send_wr *wr, static int build_mlx_header(struct mlx4_ib_sqp *sqp, struct ib_send_wr *wr,
void *wqe, unsigned *mlx_seg_len) void *wqe, unsigned *mlx_seg_len)
{ {
@@ -2203,7 +2215,6 @@ static int build_mlx_header(struct mlx4_ib_sqp *sqp, struct ib_send_wr *wr,
} }
if (is_eth) { if (is_eth) {
u8 *smac;
struct in6_addr in6; struct in6_addr in6;
u16 pcp = (be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 29) << 13; u16 pcp = (be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 29) << 13;
@@ -2216,12 +2227,17 @@ static int build_mlx_header(struct mlx4_ib_sqp *sqp, struct ib_send_wr *wr,
memcpy(&ctrl->imm, ah->av.eth.mac + 2, 4); memcpy(&ctrl->imm, ah->av.eth.mac + 2, 4);
memcpy(&in6, sgid.raw, sizeof(in6)); memcpy(&in6, sgid.raw, sizeof(in6));
if (!mlx4_is_mfunc(to_mdev(ib_dev)->dev)) if (!mlx4_is_mfunc(to_mdev(ib_dev)->dev)) {
smac = to_mdev(sqp->qp.ibqp.device)-> u64 mac = atomic64_read(&to_mdev(ib_dev)->iboe.mac[sqp->qp.port - 1]);
iboe.netdevs[sqp->qp.port - 1]->dev_addr; u8 smac[ETH_ALEN];
else /* use the src mac of the tunnel */
smac = ah->av.eth.s_mac; mlx4_u64_to_smac(smac, mac);
memcpy(sqp->ud_header.eth.smac_h, smac, 6); memcpy(sqp->ud_header.eth.smac_h, smac, ETH_ALEN);
} else {
/* use the src mac of the tunnel */
memcpy(sqp->ud_header.eth.smac_h, ah->av.eth.s_mac, ETH_ALEN);
}
if (!memcmp(sqp->ud_header.eth.smac_h, sqp->ud_header.eth.dmac_h, 6)) if (!memcmp(sqp->ud_header.eth.smac_h, sqp->ud_header.eth.dmac_h, 6))
mlx->flags |= cpu_to_be32(MLX4_WQE_CTRL_FORCE_LOOPBACK); mlx->flags |= cpu_to_be32(MLX4_WQE_CTRL_FORCE_LOOPBACK);
if (!is_vlan) { if (!is_vlan) {


@@ -38,7 +38,7 @@
#define OCRDMA_VID_PCP_SHIFT 0xD #define OCRDMA_VID_PCP_SHIFT 0xD
static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah, static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
struct ib_ah_attr *attr, int pdid) struct ib_ah_attr *attr, union ib_gid *sgid, int pdid)
{ {
int status = 0; int status = 0;
u16 vlan_tag; bool vlan_enabled = false; u16 vlan_tag; bool vlan_enabled = false;
@@ -49,8 +49,7 @@ static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
memset(&eth, 0, sizeof(eth)); memset(&eth, 0, sizeof(eth));
memset(&grh, 0, sizeof(grh)); memset(&grh, 0, sizeof(grh));
ah->sgid_index = attr->grh.sgid_index; /* VLAN */
vlan_tag = attr->vlan_id; vlan_tag = attr->vlan_id;
if (!vlan_tag || (vlan_tag > 0xFFF)) if (!vlan_tag || (vlan_tag > 0xFFF))
vlan_tag = dev->pvid; vlan_tag = dev->pvid;
@@ -65,15 +64,14 @@ static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
eth.eth_type = cpu_to_be16(OCRDMA_ROCE_ETH_TYPE); eth.eth_type = cpu_to_be16(OCRDMA_ROCE_ETH_TYPE);
eth_sz = sizeof(struct ocrdma_eth_basic); eth_sz = sizeof(struct ocrdma_eth_basic);
} }
/* MAC */
memcpy(&eth.smac[0], &dev->nic_info.mac_addr[0], ETH_ALEN); memcpy(&eth.smac[0], &dev->nic_info.mac_addr[0], ETH_ALEN);
memcpy(&eth.dmac[0], attr->dmac, ETH_ALEN);
status = ocrdma_resolve_dmac(dev, attr, &eth.dmac[0]); status = ocrdma_resolve_dmac(dev, attr, &eth.dmac[0]);
if (status) if (status)
return status; return status;
status = ocrdma_query_gid(&dev->ibdev, 1, attr->grh.sgid_index, ah->sgid_index = attr->grh.sgid_index;
(union ib_gid *)&grh.sgid[0]); memcpy(&grh.sgid[0], sgid->raw, sizeof(union ib_gid));
if (status) memcpy(&grh.dgid[0], attr->grh.dgid.raw, sizeof(attr->grh.dgid.raw));
return status;
grh.tclass_flow = cpu_to_be32((6 << 28) | grh.tclass_flow = cpu_to_be32((6 << 28) |
(attr->grh.traffic_class << 24) | (attr->grh.traffic_class << 24) |
@@ -81,8 +79,7 @@ static inline int set_av_attr(struct ocrdma_dev *dev, struct ocrdma_ah *ah,
/* 0x1b is next header value in GRH */ /* 0x1b is next header value in GRH */
grh.pdid_hoplimit = cpu_to_be32((pdid << 16) | grh.pdid_hoplimit = cpu_to_be32((pdid << 16) |
(0x1b << 8) | attr->grh.hop_limit); (0x1b << 8) | attr->grh.hop_limit);
/* Eth HDR */
memcpy(&grh.dgid[0], attr->grh.dgid.raw, sizeof(attr->grh.dgid.raw));
memcpy(&ah->av->eth_hdr, &eth, eth_sz); memcpy(&ah->av->eth_hdr, &eth, eth_sz);
memcpy((u8 *)ah->av + eth_sz, &grh, sizeof(struct ocrdma_grh)); memcpy((u8 *)ah->av + eth_sz, &grh, sizeof(struct ocrdma_grh));
if (vlan_enabled) if (vlan_enabled)
@@ -98,6 +95,8 @@ struct ib_ah *ocrdma_create_ah(struct ib_pd *ibpd, struct ib_ah_attr *attr)
struct ocrdma_ah *ah; struct ocrdma_ah *ah;
struct ocrdma_pd *pd = get_ocrdma_pd(ibpd); struct ocrdma_pd *pd = get_ocrdma_pd(ibpd);
struct ocrdma_dev *dev = get_ocrdma_dev(ibpd->device); struct ocrdma_dev *dev = get_ocrdma_dev(ibpd->device);
union ib_gid sgid;
u8 zmac[ETH_ALEN];
if (!(attr->ah_flags & IB_AH_GRH)) if (!(attr->ah_flags & IB_AH_GRH))
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);
@@ -111,7 +110,27 @@ struct ib_ah *ocrdma_create_ah(struct ib_pd *ibpd, struct ib_ah_attr *attr)
status = ocrdma_alloc_av(dev, ah); status = ocrdma_alloc_av(dev, ah);
if (status) if (status)
goto av_err; goto av_err;
status = set_av_attr(dev, ah, attr, pd->id);
status = ocrdma_query_gid(&dev->ibdev, 1, attr->grh.sgid_index, &sgid);
if (status) {
pr_err("%s(): Failed to query sgid, status = %d\n",
__func__, status);
goto av_conf_err;
}
memset(&zmac, 0, ETH_ALEN);
if (pd->uctx &&
memcmp(attr->dmac, &zmac, ETH_ALEN)) {
status = rdma_addr_find_dmac_by_grh(&sgid, &attr->grh.dgid,
attr->dmac, &attr->vlan_id);
if (status) {
pr_err("%s(): Failed to resolve dmac from gid."
"status = %d\n", __func__, status);
goto av_conf_err;
}
}
status = set_av_attr(dev, ah, attr, &sgid, pd->id);
if (status) if (status)
goto av_conf_err; goto av_conf_err;
@@ -145,7 +164,7 @@ int ocrdma_query_ah(struct ib_ah *ibah, struct ib_ah_attr *attr)
struct ocrdma_av *av = ah->av; struct ocrdma_av *av = ah->av;
struct ocrdma_grh *grh; struct ocrdma_grh *grh;
attr->ah_flags |= IB_AH_GRH; attr->ah_flags |= IB_AH_GRH;
if (ah->av->valid & Bit(1)) { if (ah->av->valid & OCRDMA_AV_VALID) {
grh = (struct ocrdma_grh *)((u8 *)ah->av + grh = (struct ocrdma_grh *)((u8 *)ah->av +
sizeof(struct ocrdma_eth_vlan)); sizeof(struct ocrdma_eth_vlan));
attr->sl = be16_to_cpu(av->eth_hdr.vlan_tag) >> 13; attr->sl = be16_to_cpu(av->eth_hdr.vlan_tag) >> 13;


@@ -101,7 +101,7 @@ int ocrdma_query_device(struct ib_device *ibdev, struct ib_device_attr *attr)
attr->max_srq_sge = dev->attr.max_srq_sge; attr->max_srq_sge = dev->attr.max_srq_sge;
attr->max_srq_wr = dev->attr.max_rqe; attr->max_srq_wr = dev->attr.max_rqe;
attr->local_ca_ack_delay = dev->attr.local_ca_ack_delay; attr->local_ca_ack_delay = dev->attr.local_ca_ack_delay;
attr->max_fast_reg_page_list_len = 0; attr->max_fast_reg_page_list_len = dev->attr.max_pages_per_frmr;
attr->max_pkeys = 1; attr->max_pkeys = 1;
return 0; return 0;
} }
@@ -2846,11 +2846,9 @@ int ocrdma_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags cq_flags)
if (cq->first_arm) { if (cq->first_arm) {
ocrdma_ring_cq_db(dev, cq_id, arm_needed, sol_needed, 0); ocrdma_ring_cq_db(dev, cq_id, arm_needed, sol_needed, 0);
cq->first_arm = false; cq->first_arm = false;
goto skip_defer;
} }
cq->deferred_arm = true;
skip_defer: cq->deferred_arm = true;
cq->deferred_sol = sol_needed; cq->deferred_sol = sol_needed;
spin_unlock_irqrestore(&cq->cq_lock, flags); spin_unlock_irqrestore(&cq->cq_lock, flags);


@@ -193,6 +193,7 @@ static void *_qp_stats_seq_start(struct seq_file *s, loff_t *pos)
struct qib_qp_iter *iter; struct qib_qp_iter *iter;
loff_t n = *pos; loff_t n = *pos;
rcu_read_lock();
iter = qib_qp_iter_init(s->private); iter = qib_qp_iter_init(s->private);
if (!iter) if (!iter)
return NULL; return NULL;
@@ -224,7 +225,7 @@ static void *_qp_stats_seq_next(struct seq_file *s, void *iter_ptr,
static void _qp_stats_seq_stop(struct seq_file *s, void *iter_ptr) static void _qp_stats_seq_stop(struct seq_file *s, void *iter_ptr)
{ {
/* nothing for now */ rcu_read_unlock();
} }
static int _qp_stats_seq_show(struct seq_file *s, void *iter_ptr) static int _qp_stats_seq_show(struct seq_file *s, void *iter_ptr)
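The qib debugfs change above (together with the qib_qp.c hunk that follows) replaces per-QP refcount juggling during iteration with one rcu_read_lock() held across the whole seq_file pass: ->start takes the read lock, ->stop always drops it, and rcu_dereference() in the iterator stays legal in between. The seq_ops shape in isolation (my_iter_init() is an assumed helper, not the real qib API):

#include <linux/seq_file.h>
#include <linux/rcupdate.h>

extern void *my_iter_init(void *priv, loff_t pos);	/* assumed helper */

static void *my_seq_start(struct seq_file *s, loff_t *pos)
{
	rcu_read_lock();	/* held until my_seq_stop() runs */
	return my_iter_init(s->private, *pos);
}

static void my_seq_stop(struct seq_file *s, void *iter)
{
	rcu_read_unlock();	/* pairs with my_seq_start() */
}

/* ->next and ->show may use rcu_dereference() freely in between. */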


@@ -1325,7 +1325,6 @@ int qib_qp_iter_next(struct qib_qp_iter *iter)
struct qib_qp *pqp = iter->qp; struct qib_qp *pqp = iter->qp;
struct qib_qp *qp; struct qib_qp *qp;
rcu_read_lock();
for (; n < dev->qp_table_size; n++) { for (; n < dev->qp_table_size; n++) {
if (pqp) if (pqp)
qp = rcu_dereference(pqp->next); qp = rcu_dereference(pqp->next);
@@ -1333,18 +1332,11 @@ int qib_qp_iter_next(struct qib_qp_iter *iter)
qp = rcu_dereference(dev->qp_table[n]); qp = rcu_dereference(dev->qp_table[n]);
pqp = qp; pqp = qp;
if (qp) { if (qp) {
if (iter->qp)
atomic_dec(&iter->qp->refcount);
atomic_inc(&qp->refcount);
rcu_read_unlock();
iter->qp = qp; iter->qp = qp;
iter->n = n; iter->n = n;
return 0; return 0;
} }
} }
rcu_read_unlock();
if (iter->qp)
atomic_dec(&iter->qp->refcount);
return ret; return ret;
} }


@@ -52,7 +52,7 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages,
* Call with current->mm->mmap_sem held. * Call with current->mm->mmap_sem held.
*/ */
static int __qib_get_user_pages(unsigned long start_page, size_t num_pages, static int __qib_get_user_pages(unsigned long start_page, size_t num_pages,
struct page **p, struct vm_area_struct **vma) struct page **p)
{ {
unsigned long lock_limit; unsigned long lock_limit;
size_t got; size_t got;
@@ -69,7 +69,7 @@ static int __qib_get_user_pages(unsigned long start_page, size_t num_pages,
ret = get_user_pages(current, current->mm, ret = get_user_pages(current, current->mm,
start_page + got * PAGE_SIZE, start_page + got * PAGE_SIZE,
num_pages - got, 1, 1, num_pages - got, 1, 1,
p + got, vma); p + got, NULL);
if (ret < 0) if (ret < 0)
goto bail_release; goto bail_release;
} }
@@ -136,7 +136,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
down_write(&current->mm->mmap_sem); down_write(&current->mm->mmap_sem);
ret = __qib_get_user_pages(start_page, num_pages, p, NULL); ret = __qib_get_user_pages(start_page, num_pages, p);
up_write(&current->mm->mmap_sem); up_write(&current->mm->mmap_sem);


@@ -131,6 +131,12 @@ struct ipoib_cb {
u8 hwaddr[INFINIBAND_ALEN]; u8 hwaddr[INFINIBAND_ALEN];
}; };
static inline struct ipoib_cb *ipoib_skb_cb(const struct sk_buff *skb)
{
BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct ipoib_cb));
return (struct ipoib_cb *)skb->cb;
}
/* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */ /* Used for all multicast joins (broadcast, IPv4 mcast and IPv6 mcast) */
struct ipoib_mcast { struct ipoib_mcast {
struct ib_sa_mcmember_rec mcmember; struct ib_sa_mcmember_rec mcmember;
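The new ipoib_skb_cb() accessor above hides the cast of skb->cb and adds a BUILD_BUG_ON() so the build breaks if the private structure ever outgrows the 48-byte control buffer, rather than silently overwriting neighbouring skb fields at runtime. The same guard pattern reduced to a sketch (my_cb is illustrative):

#include <linux/skbuff.h>
#include <linux/bug.h>

struct my_cb {
	unsigned long when;	/* whatever per-skb state the driver needs */
	u8 hwaddr[20];
};

static inline struct my_cb *my_skb_cb(const struct sk_buff *skb)
{
	/* compile-time check: skb->cb is only 48 bytes wide */
	BUILD_BUG_ON(sizeof(skb->cb) < sizeof(struct my_cb));
	return (struct my_cb *)skb->cb;
}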


@@ -716,7 +716,7 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct ipoib_dev_priv *priv = netdev_priv(dev); struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_neigh *neigh; struct ipoib_neigh *neigh;
struct ipoib_cb *cb = (struct ipoib_cb *) skb->cb; struct ipoib_cb *cb = ipoib_skb_cb(skb);
struct ipoib_header *header; struct ipoib_header *header;
unsigned long flags; unsigned long flags;
@@ -813,7 +813,7 @@ static int ipoib_hard_header(struct sk_buff *skb,
const void *daddr, const void *saddr, unsigned len) const void *daddr, const void *saddr, unsigned len)
{ {
struct ipoib_header *header; struct ipoib_header *header;
struct ipoib_cb *cb = (struct ipoib_cb *) skb->cb; struct ipoib_cb *cb = ipoib_skb_cb(skb);
header = (struct ipoib_header *) skb_push(skb, sizeof *header); header = (struct ipoib_header *) skb_push(skb, sizeof *header);


@@ -529,21 +529,13 @@ void ipoib_mcast_join_task(struct work_struct *work)
port_attr.state); port_attr.state);
return; return;
} }
priv->local_lid = port_attr.lid;
if (ib_query_gid(priv->ca, priv->port, 0, &priv->local_gid)) if (ib_query_gid(priv->ca, priv->port, 0, &priv->local_gid))
ipoib_warn(priv, "ib_query_gid() failed\n"); ipoib_warn(priv, "ib_query_gid() failed\n");
else else
memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid)); memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid));
{
struct ib_port_attr attr;
if (!ib_query_port(priv->ca, priv->port, &attr))
priv->local_lid = attr.lid;
else
ipoib_warn(priv, "ib_query_port failed\n");
}
if (!priv->broadcast) { if (!priv->broadcast) {
struct ipoib_mcast *broadcast; struct ipoib_mcast *broadcast;


@@ -344,7 +344,6 @@ iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
int is_leading) int is_leading)
{ {
struct iscsi_conn *conn = cls_conn->dd_data; struct iscsi_conn *conn = cls_conn->dd_data;
struct iscsi_session *session;
struct iser_conn *ib_conn; struct iser_conn *ib_conn;
struct iscsi_endpoint *ep; struct iscsi_endpoint *ep;
int error; int error;
@@ -363,9 +362,17 @@ iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
} }
ib_conn = ep->dd_data; ib_conn = ep->dd_data;
session = conn->session; mutex_lock(&ib_conn->state_mutex);
if (iser_alloc_rx_descriptors(ib_conn, session)) if (ib_conn->state != ISER_CONN_UP) {
return -ENOMEM; error = -EINVAL;
iser_err("iser_conn %p state is %d, teardown started\n",
ib_conn, ib_conn->state);
goto out;
}
error = iser_alloc_rx_descriptors(ib_conn, conn->session);
if (error)
goto out;
/* binds the iSER connection retrieved from the previously /* binds the iSER connection retrieved from the previously
* connected ep_handle to the iSCSI layer connection. exchanges * connected ep_handle to the iSCSI layer connection. exchanges
@@ -375,7 +382,9 @@ iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
conn->dd_data = ib_conn; conn->dd_data = ib_conn;
ib_conn->iscsi_conn = conn; ib_conn->iscsi_conn = conn;
return 0; out:
mutex_unlock(&ib_conn->state_mutex);
return error;
} }
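The rewritten bind path above takes ib_conn->state_mutex and re-checks the connection state before allocating RX descriptors, so a bind racing with connection teardown now fails with -EINVAL instead of touching a half-destroyed object. The check-under-lock shape, condensed into a sketch (my_conn, CONN_UP and my_alloc_resources() are illustrative names):

#include <linux/mutex.h>
#include <linux/errno.h>

enum { CONN_UP = 1 };

struct my_conn {
	struct mutex state_mutex;
	int state;
};

extern int my_alloc_resources(struct my_conn *c);	/* assumed helper */

static int my_bind(struct my_conn *c)
{
	int err;

	mutex_lock(&c->state_mutex);
	if (c->state != CONN_UP) {	/* teardown already started */
		err = -EINVAL;
		goto out;
	}
	err = my_alloc_resources(c);	/* state cannot change under us */
out:
	mutex_unlock(&c->state_mutex);
	return err;
}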
static int static int


@@ -69,7 +69,7 @@
#define DRV_NAME "iser" #define DRV_NAME "iser"
#define PFX DRV_NAME ": " #define PFX DRV_NAME ": "
#define DRV_VER "1.4" #define DRV_VER "1.4.1"
#define iser_dbg(fmt, arg...) \ #define iser_dbg(fmt, arg...) \
do { \ do { \


@@ -73,7 +73,7 @@ static int iser_create_device_ib_res(struct iser_device *device)
{ {
struct iser_cq_desc *cq_desc; struct iser_cq_desc *cq_desc;
struct ib_device_attr *dev_attr = &device->dev_attr; struct ib_device_attr *dev_attr = &device->dev_attr;
int ret, i, j; int ret, i;
ret = ib_query_device(device->ib_device, dev_attr); ret = ib_query_device(device->ib_device, dev_attr);
if (ret) { if (ret) {
@@ -125,16 +125,20 @@ static int iser_create_device_ib_res(struct iser_device *device)
iser_cq_event_callback, iser_cq_event_callback,
(void *)&cq_desc[i], (void *)&cq_desc[i],
ISER_MAX_RX_CQ_LEN, i); ISER_MAX_RX_CQ_LEN, i);
if (IS_ERR(device->rx_cq[i])) if (IS_ERR(device->rx_cq[i])) {
device->rx_cq[i] = NULL;
goto cq_err; goto cq_err;
}
device->tx_cq[i] = ib_create_cq(device->ib_device, device->tx_cq[i] = ib_create_cq(device->ib_device,
NULL, iser_cq_event_callback, NULL, iser_cq_event_callback,
(void *)&cq_desc[i], (void *)&cq_desc[i],
ISER_MAX_TX_CQ_LEN, i); ISER_MAX_TX_CQ_LEN, i);
if (IS_ERR(device->tx_cq[i])) if (IS_ERR(device->tx_cq[i])) {
device->tx_cq[i] = NULL;
goto cq_err; goto cq_err;
}
if (ib_req_notify_cq(device->rx_cq[i], IB_CQ_NEXT_COMP)) if (ib_req_notify_cq(device->rx_cq[i], IB_CQ_NEXT_COMP))
goto cq_err; goto cq_err;
@@ -160,14 +164,14 @@ static int iser_create_device_ib_res(struct iser_device *device)
handler_err: handler_err:
ib_dereg_mr(device->mr); ib_dereg_mr(device->mr);
dma_mr_err: dma_mr_err:
for (j = 0; j < device->cqs_used; j++) for (i = 0; i < device->cqs_used; i++)
tasklet_kill(&device->cq_tasklet[j]); tasklet_kill(&device->cq_tasklet[i]);
cq_err: cq_err:
for (j = 0; j < i; j++) { for (i = 0; i < device->cqs_used; i++) {
if (device->tx_cq[j]) if (device->tx_cq[i])
ib_destroy_cq(device->tx_cq[j]); ib_destroy_cq(device->tx_cq[i]);
if (device->rx_cq[j]) if (device->rx_cq[i])
ib_destroy_cq(device->rx_cq[j]); ib_destroy_cq(device->rx_cq[i]);
} }
ib_dealloc_pd(device->pd); ib_dealloc_pd(device->pd);
pd_err: pd_err:


@@ -29,7 +29,7 @@ config FUSION_SPI
config FUSION_FC config FUSION_FC
tristate "Fusion MPT ScsiHost drivers for FC" tristate "Fusion MPT ScsiHost drivers for FC"
depends on PCI && SCSI depends on PCI && SCSI
select SCSI_FC_ATTRS depends on SCSI_FC_ATTRS
---help--- ---help---
SCSI HOST support for a Fiber Channel host adapters. SCSI HOST support for a Fiber Channel host adapters.


@@ -175,7 +175,7 @@ MODULE_PARM_DESC(fail_over_mac, "For active-backup, do not set all slaves to "
"the same MAC; 0 for none (default), " "the same MAC; 0 for none (default), "
"1 for active, 2 for follow"); "1 for active, 2 for follow");
module_param(all_slaves_active, int, 0); module_param(all_slaves_active, int, 0);
MODULE_PARM_DESC(all_slaves_active, "Keep all frames received on an interface" MODULE_PARM_DESC(all_slaves_active, "Keep all frames received on an interface "
"by setting active flag for all slaves; " "by setting active flag for all slaves; "
"0 for never (default), 1 for always."); "0 for never (default), 1 for always.");
module_param(resend_igmp, int, 0); module_param(resend_igmp, int, 0);
@@ -3659,8 +3659,14 @@ static int bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev
else else
bond_xmit_slave_id(bond, skb, 0); bond_xmit_slave_id(bond, skb, 0);
} else { } else {
int slave_cnt = ACCESS_ONCE(bond->slave_cnt);
if (likely(slave_cnt)) {
slave_id = bond_rr_gen_slave_id(bond); slave_id = bond_rr_gen_slave_id(bond);
bond_xmit_slave_id(bond, skb, slave_id % bond->slave_cnt); bond_xmit_slave_id(bond, skb, slave_id % slave_cnt);
} else {
dev_kfree_skb_any(skb);
}
} }
return NETDEV_TX_OK; return NETDEV_TX_OK;
@@ -3691,8 +3697,13 @@ static int bond_xmit_activebackup(struct sk_buff *skb, struct net_device *bond_d
static int bond_xmit_xor(struct sk_buff *skb, struct net_device *bond_dev) static int bond_xmit_xor(struct sk_buff *skb, struct net_device *bond_dev)
{ {
struct bonding *bond = netdev_priv(bond_dev); struct bonding *bond = netdev_priv(bond_dev);
int slave_cnt = ACCESS_ONCE(bond->slave_cnt);
bond_xmit_slave_id(bond, skb, bond_xmit_hash(bond, skb) % bond->slave_cnt); if (likely(slave_cnt))
bond_xmit_slave_id(bond, skb,
bond_xmit_hash(bond, skb) % slave_cnt);
else
dev_kfree_skb_any(skb);
return NETDEV_TX_OK; return NETDEV_TX_OK;
} }
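Both bonding transmit paths above snapshot bond->slave_cnt exactly once with ACCESS_ONCE() and then test and use the same snapshot: slaves can be removed while transmit is running, so re-reading the count between the zero check and the modulo would reintroduce the divide-by-zero this fixes. The idiom in isolation (this kernel predates READ_ONCE(); the surrounding names are placeholders):

#include <linux/compiler.h>

struct my_ring {
	unsigned int cnt;	/* may be cleared concurrently */
};

extern void consume(struct my_ring *r, unsigned int slot);
extern void drop_packet(void);

static void my_xmit(struct my_ring *r, unsigned int idx)
{
	unsigned int cnt = ACCESS_ONCE(r->cnt);	/* read exactly once */

	if (likely(cnt))
		consume(r, idx % cnt);	/* same snapshot that was tested */
	else
		drop_packet();		/* zero entries: drop, don't divide */
}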


@@ -1123,7 +1123,9 @@ static int at91_open(struct net_device *dev)
struct at91_priv *priv = netdev_priv(dev); struct at91_priv *priv = netdev_priv(dev);
int err; int err;
clk_enable(priv->clk); err = clk_prepare_enable(priv->clk);
if (err)
return err;
/* check or determine and set bittime */ /* check or determine and set bittime */
err = open_candev(dev); err = open_candev(dev);
@@ -1149,7 +1151,7 @@ static int at91_open(struct net_device *dev)
out_close: out_close:
close_candev(dev); close_candev(dev);
out: out:
clk_disable(priv->clk); clk_disable_unprepare(priv->clk);
return err; return err;
} }
@@ -1166,7 +1168,7 @@ static int at91_close(struct net_device *dev)
at91_chip_stop(dev, CAN_STATE_STOPPED); at91_chip_stop(dev, CAN_STATE_STOPPED);
free_irq(dev->irq, dev); free_irq(dev->irq, dev);
clk_disable(priv->clk); clk_disable_unprepare(priv->clk);
close_candev(dev); close_candev(dev);
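With the common clock framework a clock must be prepared (a step that may sleep) before it can be enabled, so the at91_can hunks above move to clk_prepare_enable()/clk_disable_unprepare() and finally check the return value. Typical open/close usage as a sketch (my_priv is illustrative):

#include <linux/clk.h>

struct my_priv {
	struct clk *clk;
};

static int my_open(struct my_priv *priv)
{
	int err;

	err = clk_prepare_enable(priv->clk);	/* prepare may sleep */
	if (err)
		return err;		/* clock was never switched on */

	/* ... bring the hardware up ... */
	return 0;
}

static void my_close(struct my_priv *priv)
{
	/* ... quiesce the hardware ... */
	clk_disable_unprepare(priv->clk);	/* mirror of my_open() */
}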


@@ -97,14 +97,14 @@ static void c_can_hw_raminit_ti(const struct c_can_priv *priv, bool enable)
ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance); ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance);
writel(ctrl, priv->raminit_ctrlreg); writel(ctrl, priv->raminit_ctrlreg);
ctrl &= ~CAN_RAMINIT_DONE_MASK(priv->instance); ctrl &= ~CAN_RAMINIT_DONE_MASK(priv->instance);
c_can_hw_raminit_wait_ti(priv, ctrl, mask); c_can_hw_raminit_wait_ti(priv, mask, ctrl);
if (enable) { if (enable) {
/* Set start bit and wait for the done bit. */ /* Set start bit and wait for the done bit. */
ctrl |= CAN_RAMINIT_START_MASK(priv->instance); ctrl |= CAN_RAMINIT_START_MASK(priv->instance);
writel(ctrl, priv->raminit_ctrlreg); writel(ctrl, priv->raminit_ctrlreg);
ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance); ctrl |= CAN_RAMINIT_DONE_MASK(priv->instance);
c_can_hw_raminit_wait_ti(priv, ctrl, mask); c_can_hw_raminit_wait_ti(priv, mask, ctrl);
} }
spin_unlock(&raminit_lock); spin_unlock(&raminit_lock);
} }


@@ -62,7 +62,7 @@
#define FLEXCAN_MCR_BCC BIT(16) #define FLEXCAN_MCR_BCC BIT(16)
#define FLEXCAN_MCR_LPRIO_EN BIT(13) #define FLEXCAN_MCR_LPRIO_EN BIT(13)
#define FLEXCAN_MCR_AEN BIT(12) #define FLEXCAN_MCR_AEN BIT(12)
#define FLEXCAN_MCR_MAXMB(x) ((x) & 0x1f) #define FLEXCAN_MCR_MAXMB(x) ((x) & 0x7f)
#define FLEXCAN_MCR_IDAM_A (0 << 8) #define FLEXCAN_MCR_IDAM_A (0 << 8)
#define FLEXCAN_MCR_IDAM_B (1 << 8) #define FLEXCAN_MCR_IDAM_B (1 << 8)
#define FLEXCAN_MCR_IDAM_C (2 << 8) #define FLEXCAN_MCR_IDAM_C (2 << 8)
@@ -125,7 +125,9 @@
FLEXCAN_ESR_BOFF_INT | FLEXCAN_ESR_ERR_INT) FLEXCAN_ESR_BOFF_INT | FLEXCAN_ESR_ERR_INT)
/* FLEXCAN interrupt flag register (IFLAG) bits */ /* FLEXCAN interrupt flag register (IFLAG) bits */
#define FLEXCAN_TX_BUF_ID 8 /* Errata ERR005829 step7: Reserve first valid MB */
#define FLEXCAN_TX_BUF_RESERVED 8
#define FLEXCAN_TX_BUF_ID 9
#define FLEXCAN_IFLAG_BUF(x) BIT(x) #define FLEXCAN_IFLAG_BUF(x) BIT(x)
#define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7) #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW BIT(7)
#define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6) #define FLEXCAN_IFLAG_RX_FIFO_WARN BIT(6)
@@ -136,6 +138,17 @@
/* FLEXCAN message buffers */ /* FLEXCAN message buffers */
#define FLEXCAN_MB_CNT_CODE(x) (((x) & 0xf) << 24) #define FLEXCAN_MB_CNT_CODE(x) (((x) & 0xf) << 24)
#define FLEXCAN_MB_CODE_RX_INACTIVE (0x0 << 24)
#define FLEXCAN_MB_CODE_RX_EMPTY (0x4 << 24)
#define FLEXCAN_MB_CODE_RX_FULL (0x2 << 24)
#define FLEXCAN_MB_CODE_RX_OVERRRUN (0x6 << 24)
#define FLEXCAN_MB_CODE_RX_RANSWER (0xa << 24)
#define FLEXCAN_MB_CODE_TX_INACTIVE (0x8 << 24)
#define FLEXCAN_MB_CODE_TX_ABORT (0x9 << 24)
#define FLEXCAN_MB_CODE_TX_DATA (0xc << 24)
#define FLEXCAN_MB_CODE_TX_TANSWER (0xe << 24)
#define FLEXCAN_MB_CNT_SRR BIT(22) #define FLEXCAN_MB_CNT_SRR BIT(22)
#define FLEXCAN_MB_CNT_IDE BIT(21) #define FLEXCAN_MB_CNT_IDE BIT(21)
#define FLEXCAN_MB_CNT_RTR BIT(20) #define FLEXCAN_MB_CNT_RTR BIT(20)
@@ -298,7 +311,7 @@ static int flexcan_chip_enable(struct flexcan_priv *priv)
flexcan_write(reg, &regs->mcr); flexcan_write(reg, &regs->mcr);
while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)) while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
usleep_range(10, 20); udelay(10);
if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK) if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)
return -ETIMEDOUT; return -ETIMEDOUT;
@@ -317,7 +330,7 @@ static int flexcan_chip_disable(struct flexcan_priv *priv)
flexcan_write(reg, &regs->mcr); flexcan_write(reg, &regs->mcr);
while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)) while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
usleep_range(10, 20); udelay(10);
if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK)) if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_LPM_ACK))
return -ETIMEDOUT; return -ETIMEDOUT;
@@ -336,7 +349,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
flexcan_write(reg, &regs->mcr); flexcan_write(reg, &regs->mcr);
while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)) while (timeout-- && !(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
usleep_range(100, 200); udelay(100);
if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)) if (!(flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
return -ETIMEDOUT; return -ETIMEDOUT;
@@ -355,7 +368,7 @@ static int flexcan_chip_unfreeze(struct flexcan_priv *priv)
flexcan_write(reg, &regs->mcr); flexcan_write(reg, &regs->mcr);
while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)) while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
usleep_range(10, 20); udelay(10);
if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK) if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK)
return -ETIMEDOUT; return -ETIMEDOUT;
@@ -370,7 +383,7 @@ static int flexcan_chip_softreset(struct flexcan_priv *priv)
flexcan_write(FLEXCAN_MCR_SOFTRST, &regs->mcr); flexcan_write(FLEXCAN_MCR_SOFTRST, &regs->mcr);
while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST)) while (timeout-- && (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST))
usleep_range(10, 20); udelay(10);
if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST) if (flexcan_read(&regs->mcr) & FLEXCAN_MCR_SOFTRST)
return -ETIMEDOUT; return -ETIMEDOUT;
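The usleep_range() to udelay() changes in the five poll loops above matter because these chip helpers can be reached from atomic context, where sleeping is forbidden; udelay() busy-waits instead, and the loops stay bounded by their timeout counters. The shared shape as a generic sketch (my_readl() and DONE are placeholders, not the flexcan API):

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/bitops.h>
#include <linux/io.h>

#define DONE	BIT(0)

extern u32 my_readl(void __iomem *reg);	/* assumed register accessor */

/* Poll for DONE without sleeping; safe in atomic context. */
static int my_poll_done(void __iomem *reg)
{
	int timeout = 100;	/* 100 * 10us = 1ms worst case */

	while (timeout-- && !(my_readl(reg) & DONE))
		udelay(10);

	return (my_readl(reg) & DONE) ? 0 : -ETIMEDOUT;
}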
@@ -428,6 +441,14 @@ static int flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
flexcan_write(can_id, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_id); flexcan_write(can_id, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_id);
flexcan_write(ctrl, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl); flexcan_write(ctrl, &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
/* Errata ERR005829 step8:
* Write twice INACTIVE(0x8) code to first MB.
*/
flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
&regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
&regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
return NETDEV_TX_OK; return NETDEV_TX_OK;
} }
@@ -744,6 +765,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
stats->tx_bytes += can_get_echo_skb(dev, 0); stats->tx_bytes += can_get_echo_skb(dev, 0);
stats->tx_packets++; stats->tx_packets++;
can_led_event(dev, CAN_LED_EVENT_TX); can_led_event(dev, CAN_LED_EVENT_TX);
/* after sending a RTR frame mailbox is in RX mode */
flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
&regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
flexcan_write((1 << FLEXCAN_TX_BUF_ID), &regs->iflag1); flexcan_write((1 << FLEXCAN_TX_BUF_ID), &regs->iflag1);
netif_wake_queue(dev); netif_wake_queue(dev);
} }
@@ -801,6 +825,7 @@ static int flexcan_chip_start(struct net_device *dev)
struct flexcan_regs __iomem *regs = priv->base; struct flexcan_regs __iomem *regs = priv->base;
int err; int err;
u32 reg_mcr, reg_ctrl; u32 reg_mcr, reg_ctrl;
int i;
/* enable module */ /* enable module */
err = flexcan_chip_enable(priv); err = flexcan_chip_enable(priv);
@@ -867,8 +892,18 @@ static int flexcan_chip_start(struct net_device *dev)
netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl); netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl);
flexcan_write(reg_ctrl, &regs->ctrl); flexcan_write(reg_ctrl, &regs->ctrl);
/* Abort any pending TX, mark Mailbox as INACTIVE */ /* clear and invalidate all mailboxes first */
flexcan_write(FLEXCAN_MB_CNT_CODE(0x4), for (i = FLEXCAN_TX_BUF_ID; i < ARRAY_SIZE(regs->cantxfg); i++) {
flexcan_write(FLEXCAN_MB_CODE_RX_INACTIVE,
&regs->cantxfg[i].can_ctrl);
}
/* Errata ERR005829: mark first TX mailbox as INACTIVE */
flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
&regs->cantxfg[FLEXCAN_TX_BUF_RESERVED].can_ctrl);
/* mark TX mailbox as INACTIVE */
flexcan_write(FLEXCAN_MB_CODE_TX_INACTIVE,
&regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl); &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
/* acceptance mask/acceptance code (accept everything) */ /* acceptance mask/acceptance code (accept everything) */


@@ -70,6 +70,8 @@ struct peak_pci_chan {
#define PEAK_PC_104P_DEVICE_ID 0x0006 /* PCAN-PC/104+ cards */ #define PEAK_PC_104P_DEVICE_ID 0x0006 /* PCAN-PC/104+ cards */
#define PEAK_PCI_104E_DEVICE_ID 0x0007 /* PCAN-PCI/104 Express cards */ #define PEAK_PCI_104E_DEVICE_ID 0x0007 /* PCAN-PCI/104 Express cards */
#define PEAK_MPCIE_DEVICE_ID 0x0008 /* The miniPCIe slot cards */ #define PEAK_MPCIE_DEVICE_ID 0x0008 /* The miniPCIe slot cards */
#define PEAK_PCIE_OEM_ID 0x0009 /* PCAN-PCI Express OEM */
#define PEAK_PCIEC34_DEVICE_ID 0x000A /* PCAN-PCI Express 34 (one channel) */
#define PEAK_PCI_CHAN_MAX 4 #define PEAK_PCI_CHAN_MAX 4
@@ -87,6 +89,7 @@ static const struct pci_device_id peak_pci_tbl[] = {
{PEAK_PCI_VENDOR_ID, PEAK_CPCI_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,}, {PEAK_PCI_VENDOR_ID, PEAK_CPCI_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,},
#ifdef CONFIG_CAN_PEAK_PCIEC #ifdef CONFIG_CAN_PEAK_PCIEC
{PEAK_PCI_VENDOR_ID, PEAK_PCIEC_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,}, {PEAK_PCI_VENDOR_ID, PEAK_PCIEC_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,},
{PEAK_PCI_VENDOR_ID, PEAK_PCIEC34_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,},
#endif #endif
{0,} {0,}
}; };
@@ -653,7 +656,8 @@ static int peak_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* This must be done *before* register_sja1000dev() but * This must be done *before* register_sja1000dev() but
* *after* devices linkage * *after* devices linkage
*/ */
if (pdev->device == PEAK_PCIEC_DEVICE_ID) { if (pdev->device == PEAK_PCIEC_DEVICE_ID ||
pdev->device == PEAK_PCIEC34_DEVICE_ID) {
err = peak_pciec_probe(pdev, dev); err = peak_pciec_probe(pdev, dev);
if (err) { if (err) {
dev_err(&pdev->dev, dev_err(&pdev->dev,


@@ -2129,6 +2129,7 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
int entry = vp->cur_tx % TX_RING_SIZE; int entry = vp->cur_tx % TX_RING_SIZE;
struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE]; struct boom_tx_desc *prev_entry = &vp->tx_ring[(vp->cur_tx-1) % TX_RING_SIZE];
unsigned long flags; unsigned long flags;
dma_addr_t dma_addr;
if (vortex_debug > 6) { if (vortex_debug > 6) {
pr_debug("boomerang_start_xmit()\n"); pr_debug("boomerang_start_xmit()\n");
@@ -2163,24 +2164,48 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum); vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
if (!skb_shinfo(skb)->nr_frags) { if (!skb_shinfo(skb)->nr_frags) {
vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data, skb->len,
skb->len, PCI_DMA_TODEVICE)); PCI_DMA_TODEVICE);
if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
goto out_dma_err;
vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr);
vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG); vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb->len | LAST_FRAG);
} else { } else {
int i; int i;
vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, dma_addr = pci_map_single(VORTEX_PCI(vp), skb->data,
skb_headlen(skb), PCI_DMA_TODEVICE)); skb_headlen(skb), PCI_DMA_TODEVICE);
if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
goto out_dma_err;
vp->tx_ring[entry].frag[0].addr = cpu_to_le32(dma_addr);
vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb)); vp->tx_ring[entry].frag[0].length = cpu_to_le32(skb_headlen(skb));
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
dma_addr = skb_frag_dma_map(&VORTEX_PCI(vp)->dev, frag,
0,
frag->size,
DMA_TO_DEVICE);
if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr)) {
for(i = i-1; i >= 0; i--)
dma_unmap_page(&VORTEX_PCI(vp)->dev,
le32_to_cpu(vp->tx_ring[entry].frag[i+1].addr),
le32_to_cpu(vp->tx_ring[entry].frag[i+1].length),
DMA_TO_DEVICE);
pci_unmap_single(VORTEX_PCI(vp),
le32_to_cpu(vp->tx_ring[entry].frag[0].addr),
le32_to_cpu(vp->tx_ring[entry].frag[0].length),
PCI_DMA_TODEVICE);
goto out_dma_err;
}
vp->tx_ring[entry].frag[i+1].addr = vp->tx_ring[entry].frag[i+1].addr =
cpu_to_le32(skb_frag_dma_map( cpu_to_le32(dma_addr);
&VORTEX_PCI(vp)->dev,
frag,
frag->page_offset, frag->size, DMA_TO_DEVICE));
if (i == skb_shinfo(skb)->nr_frags-1) if (i == skb_shinfo(skb)->nr_frags-1)
vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(skb_frag_size(frag)|LAST_FRAG); vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(skb_frag_size(frag)|LAST_FRAG);
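The boomerang TX rework above checks every pci_map_single()/skb_frag_dma_map() result with dma_mapping_error() — mappings can legitimately fail under an IOMMU or swiotlb — and on a mid-fragment failure walks back over the fragments already mapped before dropping the packet. The map-then-unwind pattern, condensed (my_pkt is illustrative):

#include <linux/dma-mapping.h>

struct my_pkt {
	int nr_frags;
	struct page *page[8];
	unsigned int len[8];
	dma_addr_t dma[8];
};

static int map_frags(struct device *dev, struct my_pkt *p)
{
	int i;

	for (i = 0; i < p->nr_frags; i++) {
		p->dma[i] = dma_map_page(dev, p->page[i], 0,
					 p->len[i], DMA_TO_DEVICE);
		if (dma_mapping_error(dev, p->dma[i]))
			goto unwind;
	}
	return 0;

unwind:
	while (--i >= 0)	/* release everything mapped so far */
		dma_unmap_page(dev, p->dma[i], p->len[i], DMA_TO_DEVICE);
	return -ENOMEM;
}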
@@ -2189,7 +2214,10 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
} }
} }
#else #else
vp->tx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE)); dma_addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, skb->len, PCI_DMA_TODEVICE));
if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma_addr))
goto out_dma_err;
vp->tx_ring[entry].addr = cpu_to_le32(dma_addr);
vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG); vp->tx_ring[entry].length = cpu_to_le32(skb->len | LAST_FRAG);
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded); vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
#endif #endif
@@ -2217,7 +2245,11 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
skb_tx_timestamp(skb); skb_tx_timestamp(skb);
iowrite16(DownUnstall, ioaddr + EL3_CMD); iowrite16(DownUnstall, ioaddr + EL3_CMD);
spin_unlock_irqrestore(&vp->lock, flags); spin_unlock_irqrestore(&vp->lock, flags);
out:
return NETDEV_TX_OK; return NETDEV_TX_OK;
out_dma_err:
dev_err(&VORTEX_PCI(vp)->dev, "Error mapping dma buffer\n");
goto out;
} }
/* The interrupt handler does all of the Rx thread work and cleans up /* The interrupt handler does all of the Rx thread work and cleans up


@@ -29,6 +29,17 @@
#define DRV_NAME "arc_emac" #define DRV_NAME "arc_emac"
#define DRV_VERSION "1.0" #define DRV_VERSION "1.0"
/**
* arc_emac_tx_avail - Return the number of available slots in the tx ring.
* @priv: Pointer to ARC EMAC private data structure.
*
* returns: the number of slots available for transmission in the tx ring.
*/
static inline int arc_emac_tx_avail(struct arc_emac_priv *priv)
{
return (priv->txbd_dirty + TX_BD_NUM - priv->txbd_curr - 1) % TX_BD_NUM;
}
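arc_emac_tx_avail() above is the usual free-slot count for a circular ring that sacrifices one slot to tell "full" from "empty": with producer index txbd_curr and consumer index txbd_dirty, (dirty + N - curr - 1) % N never reports the last slot as free. A quick self-contained check of the arithmetic (N is an assumed ring size standing in for TX_BD_NUM):

#define N 128	/* assumed ring size */

static inline unsigned int ring_avail(unsigned int dirty, unsigned int curr)
{
	return (dirty + N - curr - 1) % N;
}

/* ring_avail(0, 0)   == 127 : empty ring, one slot held in reserve   */
/* ring_avail(1, 0)   == 0   : (curr + 1) % N == dirty means full     */
/* ring_avail(0, 127) == 0   : same full condition after wrap-around  */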
/** /**
* arc_emac_adjust_link - Adjust the PHY link duplex. * arc_emac_adjust_link - Adjust the PHY link duplex.
* @ndev: Pointer to the net_device structure. * @ndev: Pointer to the net_device structure.
@@ -180,10 +191,15 @@ static void arc_emac_tx_clean(struct net_device *ndev)
txbd->info = 0; txbd->info = 0;
*txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM; *txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM;
if (netif_queue_stopped(ndev))
netif_wake_queue(ndev);
} }
/* Ensure that txbd_dirty is visible to tx() before checking
* for queue stopped.
*/
smp_mb();
if (netif_queue_stopped(ndev) && arc_emac_tx_avail(priv))
netif_wake_queue(ndev);
} }
/** /**
@@ -298,7 +314,7 @@ static int arc_emac_poll(struct napi_struct *napi, int budget)
work_done = arc_emac_rx(ndev, budget); work_done = arc_emac_rx(ndev, budget);
if (work_done < budget) { if (work_done < budget) {
napi_complete(napi); napi_complete(napi);
arc_reg_or(priv, R_ENABLE, RXINT_MASK); arc_reg_or(priv, R_ENABLE, RXINT_MASK | TXINT_MASK);
} }
return work_done; return work_done;
@@ -327,9 +343,9 @@ static irqreturn_t arc_emac_intr(int irq, void *dev_instance)
/* Reset all flags except "MDIO complete" */ /* Reset all flags except "MDIO complete" */
arc_reg_set(priv, R_STATUS, status); arc_reg_set(priv, R_STATUS, status);
if (status & RXINT_MASK) { if (status & (RXINT_MASK | TXINT_MASK)) {
if (likely(napi_schedule_prep(&priv->napi))) { if (likely(napi_schedule_prep(&priv->napi))) {
arc_reg_clr(priv, R_ENABLE, RXINT_MASK); arc_reg_clr(priv, R_ENABLE, RXINT_MASK | TXINT_MASK);
__napi_schedule(&priv->napi); __napi_schedule(&priv->napi);
} }
} }
@@ -440,7 +456,7 @@ static int arc_emac_open(struct net_device *ndev)
arc_reg_set(priv, R_TX_RING, (unsigned int)priv->txbd_dma); arc_reg_set(priv, R_TX_RING, (unsigned int)priv->txbd_dma);
/* Enable interrupts */ /* Enable interrupts */
arc_reg_set(priv, R_ENABLE, RXINT_MASK | ERR_MASK); arc_reg_set(priv, R_ENABLE, RXINT_MASK | TXINT_MASK | ERR_MASK);
/* Set CONTROL */ /* Set CONTROL */
arc_reg_set(priv, R_CTRL, arc_reg_set(priv, R_CTRL,
@@ -511,7 +527,7 @@ static int arc_emac_stop(struct net_device *ndev)
netif_stop_queue(ndev); netif_stop_queue(ndev);
/* Disable interrupts */ /* Disable interrupts */
arc_reg_clr(priv, R_ENABLE, RXINT_MASK | ERR_MASK); arc_reg_clr(priv, R_ENABLE, RXINT_MASK | TXINT_MASK | ERR_MASK);
/* Disable EMAC */ /* Disable EMAC */
arc_reg_clr(priv, R_CTRL, EN_MASK); arc_reg_clr(priv, R_CTRL, EN_MASK);
@@ -574,11 +590,9 @@ static int arc_emac_tx(struct sk_buff *skb, struct net_device *ndev)
len = max_t(unsigned int, ETH_ZLEN, skb->len); len = max_t(unsigned int, ETH_ZLEN, skb->len);
/* EMAC still holds this buffer in its possession. if (unlikely(!arc_emac_tx_avail(priv))) {
* CPU must not modify this buffer descriptor
*/
if (unlikely((le32_to_cpu(*info) & OWN_MASK) == FOR_EMAC)) {
netif_stop_queue(ndev); netif_stop_queue(ndev);
netdev_err(ndev, "BUG! Tx Ring full when queue awake!\n");
return NETDEV_TX_BUSY; return NETDEV_TX_BUSY;
} }
@ -607,12 +621,19 @@ static int arc_emac_tx(struct sk_buff *skb, struct net_device *ndev)
/* Increment index to point to the next BD */ /* Increment index to point to the next BD */
*txbd_curr = (*txbd_curr + 1) % TX_BD_NUM; *txbd_curr = (*txbd_curr + 1) % TX_BD_NUM;
/* Get "info" of the next BD */ /* Ensure that tx_clean() sees the new txbd_curr before
info = &priv->txbd[*txbd_curr].info; * checking the queue status. This prevents an unneeded wake
* of the queue in tx_clean().
*/
smp_mb();
/* Check if if Tx BD ring is full - next BD is still owned by EMAC */ if (!arc_emac_tx_avail(priv)) {
if (unlikely((le32_to_cpu(*info) & OWN_MASK) == FOR_EMAC))
netif_stop_queue(ndev); netif_stop_queue(ndev);
/* Refresh tx_dirty */
smp_mb();
if (arc_emac_tx_avail(priv))
netif_start_queue(ndev);
}
arc_reg_set(priv, R_STATUS, TXPL_MASK); arc_reg_set(priv, R_STATUS, TXPL_MASK);
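The smp_mb() pair introduced above is the standard lockless stop/wake protocol between a transmit path and its completion handler: each side publishes its ring index before testing the other side's condition, and the producer re-checks after stopping so a wakeup cannot be lost in the window before netif_stop_queue(). The skeleton of the protocol (advance_tail()/advance_head()/ring_avail() are placeholders for the driver's index updates):

#include <linux/netdevice.h>

extern void advance_tail(void);	/* e.g. txbd_curr++ in the xmit path */
extern void advance_head(void);	/* e.g. txbd_dirty++ in tx clean */
extern int ring_avail(void);	/* free-slot count as sketched above */

static void producer_xmit(struct net_device *ndev)
{
	advance_tail();
	smp_mb();	/* publish tail before testing for a full ring */
	if (!ring_avail()) {
		netif_stop_queue(ndev);
		smp_mb();	/* consumer may have freed slots meanwhile */
		if (ring_avail())
			netif_start_queue(ndev);
	}
}

static void consumer_clean(struct net_device *ndev)
{
	advance_head();
	smp_mb();	/* publish head before testing the stopped flag */
	if (netif_queue_stopped(ndev) && ring_avail())
		netif_wake_queue(ndev);
}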


@@ -1697,7 +1697,7 @@ static struct rtnl_link_stats64 *b44_get_stats64(struct net_device *dev,
hwstat->tx_underruns + hwstat->tx_underruns +
hwstat->tx_excessive_cols + hwstat->tx_excessive_cols +
hwstat->tx_late_cols); hwstat->tx_late_cols);
nstat->multicast = hwstat->tx_multicast_pkts; nstat->multicast = hwstat->rx_multicast_pkts;
nstat->collisions = hwstat->tx_total_cols; nstat->collisions = hwstat->tx_total_cols;
nstat->rx_length_errors = (hwstat->rx_oversize_pkts + nstat->rx_length_errors = (hwstat->rx_oversize_pkts +


@@ -534,6 +534,25 @@ static unsigned int bcm_sysport_desc_rx(struct bcm_sysport_priv *priv,
while ((processed < to_process) && (processed < budget)) { while ((processed < to_process) && (processed < budget)) {
cb = &priv->rx_cbs[priv->rx_read_ptr]; cb = &priv->rx_cbs[priv->rx_read_ptr];
skb = cb->skb; skb = cb->skb;
processed++;
priv->rx_read_ptr++;
if (priv->rx_read_ptr == priv->num_rx_bds)
priv->rx_read_ptr = 0;
/* We do not have a backing SKB, so we do not have a corresponding
* DMA mapping for this incoming packet since
* bcm_sysport_rx_refill always either has both skb and mapping
* or none.
*/
if (unlikely(!skb)) {
netif_err(priv, rx_err, ndev, "out of memory!\n");
ndev->stats.rx_dropped++;
ndev->stats.rx_errors++;
goto refill;
}
dma_unmap_single(kdev, dma_unmap_addr(cb, dma_addr), dma_unmap_single(kdev, dma_unmap_addr(cb, dma_addr),
RX_BUF_LENGTH, DMA_FROM_DEVICE); RX_BUF_LENGTH, DMA_FROM_DEVICE);
@@ -543,23 +562,11 @@ static unsigned int bcm_sysport_desc_rx(struct bcm_sysport_priv *priv,
status = (rsb->rx_status_len >> DESC_STATUS_SHIFT) & status = (rsb->rx_status_len >> DESC_STATUS_SHIFT) &
DESC_STATUS_MASK; DESC_STATUS_MASK;
processed++;
priv->rx_read_ptr++;
if (priv->rx_read_ptr == priv->num_rx_bds)
priv->rx_read_ptr = 0;
netif_dbg(priv, rx_status, ndev, netif_dbg(priv, rx_status, ndev,
"p=%d, c=%d, rd_ptr=%d, len=%d, flag=0x%04x\n", "p=%d, c=%d, rd_ptr=%d, len=%d, flag=0x%04x\n",
p_index, priv->rx_c_index, priv->rx_read_ptr, p_index, priv->rx_c_index, priv->rx_read_ptr,
len, status); len, status);
if (unlikely(!skb)) {
netif_err(priv, rx_err, ndev, "out of memory!\n");
ndev->stats.rx_dropped++;
ndev->stats.rx_errors++;
goto refill;
}
if (unlikely(!(status & DESC_EOP) || !(status & DESC_SOP))) { if (unlikely(!(status & DESC_EOP) || !(status & DESC_SOP))) {
netif_err(priv, rx_status, ndev, "fragmented packet!\n"); netif_err(priv, rx_status, ndev, "fragmented packet!\n");
ndev->stats.rx_dropped++; ndev->stats.rx_dropped++;


@@ -875,6 +875,7 @@ static void __bcmgenet_tx_reclaim(struct net_device *dev,
int last_tx_cn, last_c_index, num_tx_bds; int last_tx_cn, last_c_index, num_tx_bds;
struct enet_cb *tx_cb_ptr; struct enet_cb *tx_cb_ptr;
struct netdev_queue *txq; struct netdev_queue *txq;
unsigned int bds_compl;
unsigned int c_index; unsigned int c_index;
/* Compute how many buffers are transmitted since last xmit call */ /* Compute how many buffers are transmitted since last xmit call */
@@ -899,7 +900,9 @@ static void __bcmgenet_tx_reclaim(struct net_device *dev,
/* Reclaim transmitted buffers */ /* Reclaim transmitted buffers */
while (last_tx_cn-- > 0) { while (last_tx_cn-- > 0) {
tx_cb_ptr = ring->cbs + last_c_index; tx_cb_ptr = ring->cbs + last_c_index;
bds_compl = 0;
if (tx_cb_ptr->skb) { if (tx_cb_ptr->skb) {
bds_compl = skb_shinfo(tx_cb_ptr->skb)->nr_frags + 1;
dev->stats.tx_bytes += tx_cb_ptr->skb->len; dev->stats.tx_bytes += tx_cb_ptr->skb->len;
dma_unmap_single(&dev->dev, dma_unmap_single(&dev->dev,
dma_unmap_addr(tx_cb_ptr, dma_addr), dma_unmap_addr(tx_cb_ptr, dma_addr),
@@ -916,7 +919,7 @@ static void __bcmgenet_tx_reclaim(struct net_device *dev,
dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0); dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
} }
dev->stats.tx_packets++; dev->stats.tx_packets++;
ring->free_bds += 1; ring->free_bds += bds_compl;
last_c_index++; last_c_index++;
last_c_index &= (num_tx_bds - 1); last_c_index &= (num_tx_bds - 1);
@@ -1274,12 +1277,29 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_priv *priv,
while ((rxpktprocessed < rxpkttoprocess) && while ((rxpktprocessed < rxpkttoprocess) &&
(rxpktprocessed < budget)) { (rxpktprocessed < budget)) {
cb = &priv->rx_cbs[priv->rx_read_ptr];
skb = cb->skb;
rxpktprocessed++;
priv->rx_read_ptr++;
priv->rx_read_ptr &= (priv->num_rx_bds - 1);
/* We do not have a backing SKB, so we do not have a
* corresponding DMA mapping for this incoming packet since
* bcmgenet_rx_refill always either has both skb and mapping or
* none.
*/
if (unlikely(!skb)) {
dev->stats.rx_dropped++;
dev->stats.rx_errors++;
goto refill;
}
/* Unmap the packet contents such that we can use the /* Unmap the packet contents such that we can use the
* RSV from the 64 bytes descriptor when enabled and save * RSV from the 64 bytes descriptor when enabled and save
* a 32-bits register read * a 32-bits register read
*/ */
cb = &priv->rx_cbs[priv->rx_read_ptr];
skb = cb->skb;
dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr), dma_unmap_single(&dev->dev, dma_unmap_addr(cb, dma_addr),
priv->rx_buf_len, DMA_FROM_DEVICE); priv->rx_buf_len, DMA_FROM_DEVICE);
@@ -1307,18 +1327,6 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_priv *priv,
__func__, p_index, priv->rx_c_index, __func__, p_index, priv->rx_c_index,
priv->rx_read_ptr, dma_length_status); priv->rx_read_ptr, dma_length_status);
rxpktprocessed++;
priv->rx_read_ptr++;
priv->rx_read_ptr &= (priv->num_rx_bds - 1);
/* out of memory, just drop packets at the hardware level */
if (unlikely(!skb)) {
dev->stats.rx_dropped++;
dev->stats.rx_errors++;
goto refill;
}
if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) { if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
netif_err(priv, rx_status, dev, netif_err(priv, rx_status, dev,
"dropping fragmented packet!\n"); "dropping fragmented packet!\n");
@@ -1736,13 +1744,63 @@ static void bcmgenet_init_multiq(struct net_device *dev)
bcmgenet_tdma_writel(priv, reg, DMA_CTRL); bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
} }
static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
{
int ret = 0;
int timeout = 0;
u32 reg;
/* Disable TDMA to stop add more frames in TX DMA */
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg &= ~DMA_EN;
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
/* Check TDMA status register to confirm TDMA is disabled */
while (timeout++ < DMA_TIMEOUT_VAL) {
reg = bcmgenet_tdma_readl(priv, DMA_STATUS);
if (reg & DMA_DISABLED)
break;
udelay(1);
}
if (timeout == DMA_TIMEOUT_VAL) {
netdev_warn(priv->dev, "Timed out while disabling TX DMA\n");
ret = -ETIMEDOUT;
}
/* Wait 10ms for packet drain in both tx and rx dma */
usleep_range(10000, 20000);
/* Disable RDMA */
reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
reg &= ~DMA_EN;
bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
timeout = 0;
/* Check RDMA status register to confirm RDMA is disabled */
while (timeout++ < DMA_TIMEOUT_VAL) {
reg = bcmgenet_rdma_readl(priv, DMA_STATUS);
if (reg & DMA_DISABLED)
break;
udelay(1);
}
if (timeout == DMA_TIMEOUT_VAL) {
netdev_warn(priv->dev, "Timed out while disabling RX DMA\n");
ret = -ETIMEDOUT;
}
return ret;
}
static void bcmgenet_fini_dma(struct bcmgenet_priv *priv) static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
{ {
int i; int i;
/* disable DMA */ /* disable DMA */
bcmgenet_rdma_writel(priv, 0, DMA_CTRL); bcmgenet_dma_teardown(priv);
bcmgenet_tdma_writel(priv, 0, DMA_CTRL);
for (i = 0; i < priv->num_tx_bds; i++) { for (i = 0; i < priv->num_tx_bds; i++) {
if (priv->tx_cbs[i].skb != NULL) { if (priv->tx_cbs[i].skb != NULL) {
@ -2101,57 +2159,6 @@ err_clk_disable:
return ret; return ret;
} }
static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
{
int ret = 0;
int timeout = 0;
u32 reg;
/* Disable TDMA to stop add more frames in TX DMA */
reg = bcmgenet_tdma_readl(priv, DMA_CTRL);
reg &= ~DMA_EN;
bcmgenet_tdma_writel(priv, reg, DMA_CTRL);
/* Check TDMA status register to confirm TDMA is disabled */
while (timeout++ < DMA_TIMEOUT_VAL) {
reg = bcmgenet_tdma_readl(priv, DMA_STATUS);
if (reg & DMA_DISABLED)
break;
udelay(1);
}
if (timeout == DMA_TIMEOUT_VAL) {
netdev_warn(priv->dev, "Timed out while disabling TX DMA\n");
ret = -ETIMEDOUT;
}
/* Wait 10ms for packet drain in both tx and rx dma */
usleep_range(10000, 20000);
/* Disable RDMA */
reg = bcmgenet_rdma_readl(priv, DMA_CTRL);
reg &= ~DMA_EN;
bcmgenet_rdma_writel(priv, reg, DMA_CTRL);
timeout = 0;
/* Check RDMA status register to confirm RDMA is disabled */
while (timeout++ < DMA_TIMEOUT_VAL) {
reg = bcmgenet_rdma_readl(priv, DMA_STATUS);
if (reg & DMA_DISABLED)
break;
udelay(1);
}
if (timeout == DMA_TIMEOUT_VAL) {
netdev_warn(priv->dev, "Timed out while disabling RX DMA\n");
ret = -ETIMEDOUT;
}
return ret;
}
static void bcmgenet_netif_stop(struct net_device *dev) static void bcmgenet_netif_stop(struct net_device *dev)
{ {
struct bcmgenet_priv *priv = netdev_priv(dev); struct bcmgenet_priv *priv = netdev_priv(dev);
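
The teardown helper above is a standard bounded poll: clear the enable bit, then re-read a status register until the hardware acknowledges or a timeout counter runs out. A minimal user-space sketch of the same pattern (the register accessor, flag name, and timeout value are stand-ins, not the driver's API):

#include <stdint.h>
#include <stdio.h>

#define DMA_DISABLED  (1u << 0)
#define TIMEOUT_ITERS 5000          /* stand-in for DMA_TIMEOUT_VAL */

/* Simulated status register: pretend the HW acks after a few reads. */
static uint32_t read_status(void)
{
    static int reads;
    return (++reads > 3) ? DMA_DISABLED : 0;
}

/* Returns 0 once the block reports itself disabled, -1 on timeout. */
static int wait_for_disable(void)
{
    int timeout = 0;

    while (timeout++ < TIMEOUT_ITERS) {
        if (read_status() & DMA_DISABLED)
            return 0;
        /* the driver sleeps here (udelay(1)) instead of busy-spinning */
    }
    return -1;
}

int main(void)
{
    printf("teardown %s\n", wait_for_disable() ? "timed out" : "ok");
    return 0;
}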

View File

@@ -7914,8 +7914,6 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
     entry = tnapi->tx_prod;
     base_flags = 0;
-    if (skb->ip_summed == CHECKSUM_PARTIAL)
-        base_flags |= TXD_FLAG_TCPUDP_CSUM;
 
     mss = skb_shinfo(skb)->gso_size;
     if (mss) {
@@ -7929,6 +7927,13 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
         hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb) - ETH_HLEN;
 
+        /* HW/FW can not correctly segment packets that have been
+         * vlan encapsulated.
+         */
+        if (skb->protocol == htons(ETH_P_8021Q) ||
+            skb->protocol == htons(ETH_P_8021AD))
+            return tg3_tso_bug(tp, tnapi, txq, skb);
+
         if (!skb_is_gso_v6(skb)) {
             if (unlikely((ETH_HLEN + hdr_len) > 80) &&
                 tg3_flag(tp, TSO_BUG))
@@ -7979,6 +7984,17 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
                 base_flags |= tsflags << 12;
             }
         }
+    } else if (skb->ip_summed == CHECKSUM_PARTIAL) {
+        /* HW/FW can not correctly checksum packets that have been
+         * vlan encapsulated.
+         */
+        if (skb->protocol == htons(ETH_P_8021Q) ||
+            skb->protocol == htons(ETH_P_8021AD)) {
+            if (skb_checksum_help(skb))
+                goto drop;
+        } else {
+            base_flags |= TXD_FLAG_TCPUDP_CSUM;
+        }
     }
 
     if (tg3_flag(tp, USE_JUMBO_BDFLAG) &&
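
The two tg3 hunks implement one policy: frames the hardware cannot handle once VLAN encapsulated are either segmented by the tg3_tso_bug() slow path or checksummed in software via skb_checksum_help(), instead of being offloaded. A rough stand-alone model of that dispatch, with hypothetical names and raw ethertype constants in place of htons(ETH_P_*):

#include <stdbool.h>
#include <stdio.h>

enum csum { CSUM_NONE, CSUM_PARTIAL };

/* Stand-in for the ethertype checks the driver performs. */
static bool is_vlan(unsigned int proto)
{
    return proto == 0x8100 /* 802.1Q */ || proto == 0x88a8 /* 802.1AD */;
}

/* Mirror of the tx-path decision: offload checksumming to the NIC only
 * when the frame is not VLAN encapsulated; otherwise compute the
 * checksum in software before handing the frame to the hardware. */
static const char *tx_csum_plan(unsigned int proto, enum csum mode)
{
    if (mode != CSUM_PARTIAL)
        return "no checksum needed";
    return is_vlan(proto) ? "software checksum" : "hardware offload";
}

int main(void)
{
    printf("0x0800: %s\n", tx_csum_plan(0x0800, CSUM_PARTIAL));
    printf("0x8100: %s\n", tx_csum_plan(0x8100, CSUM_PARTIAL));
    return 0;
}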

View File

@@ -6478,6 +6478,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     struct port_info *pi;
     bool highdma = false;
     struct adapter *adapter = NULL;
+    void __iomem *regs;
 
     printk_once(KERN_INFO "%s - version %s\n", DRV_DESC, DRV_VERSION);
@@ -6494,19 +6495,35 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
         goto out_release_regions;
     }
 
+    regs = pci_ioremap_bar(pdev, 0);
+    if (!regs) {
+        dev_err(&pdev->dev, "cannot map device registers\n");
+        err = -ENOMEM;
+        goto out_disable_device;
+    }
+
+    /* We control everything through one PF */
+    func = SOURCEPF_GET(readl(regs + PL_WHOAMI));
+    if (func != ent->driver_data) {
+        iounmap(regs);
+        pci_disable_device(pdev);
+        pci_save_state(pdev);        /* to restore SR-IOV later */
+        goto sriov;
+    }
+
     if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
         highdma = true;
         err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
         if (err) {
             dev_err(&pdev->dev, "unable to obtain 64-bit DMA for "
                     "coherent allocations\n");
-            goto out_disable_device;
+            goto out_unmap_bar0;
         }
     } else {
         err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
         if (err) {
             dev_err(&pdev->dev, "no usable DMA configuration\n");
-            goto out_disable_device;
+            goto out_unmap_bar0;
         }
     }
@@ -6518,7 +6535,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
     if (!adapter) {
         err = -ENOMEM;
-        goto out_disable_device;
+        goto out_unmap_bar0;
     }
 
     adapter->workq = create_singlethread_workqueue("cxgb4");
@@ -6530,20 +6547,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     /* PCI device has been enabled */
     adapter->flags |= DEV_ENABLED;
 
-    adapter->regs = pci_ioremap_bar(pdev, 0);
-    if (!adapter->regs) {
-        dev_err(&pdev->dev, "cannot map device registers\n");
-        err = -ENOMEM;
-        goto out_free_adapter;
-    }
-
-    /* We control everything through one PF */
-    func = SOURCEPF_GET(readl(adapter->regs + PL_WHOAMI));
-    if (func != ent->driver_data) {
-        pci_save_state(pdev);        /* to restore SR-IOV later */
-        goto sriov;
-    }
-
+    adapter->regs = regs;
     adapter->pdev = pdev;
     adapter->pdev_dev = &pdev->dev;
     adapter->mbox = func;
@@ -6560,7 +6564,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     err = t4_prep_adapter(adapter);
     if (err)
-        goto out_unmap_bar0;
+        goto out_free_adapter;
+
     if (!is_t4(adapter->params.chip)) {
         s_qpp = QUEUESPERPAGEPF1 * adapter->fn;
@@ -6577,14 +6582,14 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
             dev_err(&pdev->dev,
                     "Incorrect number of egress queues per page\n");
             err = -EINVAL;
-            goto out_unmap_bar0;
+            goto out_free_adapter;
         }
         adapter->bar2 = ioremap_wc(pci_resource_start(pdev, 2),
                                    pci_resource_len(pdev, 2));
         if (!adapter->bar2) {
             dev_err(&pdev->dev, "cannot map device bar2 region\n");
             err = -ENOMEM;
-            goto out_unmap_bar0;
+            goto out_free_adapter;
         }
     }
@@ -6722,13 +6727,13 @@ sriov:
  out_unmap_bar:
     if (!is_t4(adapter->params.chip))
         iounmap(adapter->bar2);
- out_unmap_bar0:
-    iounmap(adapter->regs);
  out_free_adapter:
     if (adapter->workq)
         destroy_workqueue(adapter->workq);
 
     kfree(adapter);
+ out_unmap_bar0:
+    iounmap(regs);
  out_disable_device:
     pci_disable_pcie_error_reporting(pdev);
     pci_disable_device(pdev);
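
The cxgb4 reshuffle is largely about keeping the error-unwind chain in reverse acquisition order: since the BAR0 mapping now happens before the adapter allocation, its cleanup label must move below out_free_adapter. A compilable sketch of the idiom, with malloc/free standing in for the PCI helpers:

#include <stdio.h>
#include <stdlib.h>

static void *setup_workq(void) { return NULL; }   /* simulate a failure */

/* Unwind in reverse order of acquisition: the resource taken first is
 * released last.  Moving an acquisition earlier in a probe function
 * means its cleanup label moves later in the unwind chain, which is
 * exactly why out_unmap_bar0 ends up below out_free_adapter. */
static int probe(void)
{
    void *regs, *adapter, *workq;

    regs = malloc(16);          /* stands in for pci_ioremap_bar() */
    if (!regs)
        goto out;
    adapter = malloc(64);       /* taken second */
    if (!adapter)
        goto out_unmap_bar0;
    workq = setup_workq();      /* taken third */
    if (!workq)
        goto out_free_adapter;

    free(workq);
    free(adapter);
    free(regs);
    return 0;

out_free_adapter:
    free(adapter);
out_unmap_bar0:
    free(regs);
out:
    return -1;
}

int main(void)
{
    printf("probe: %d\n", probe());
    return 0;
}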

View File

@@ -1399,7 +1399,7 @@ static struct dm9000_plat_data *dm9000_parse_dt(struct device *dev)
     const void *mac_addr;
 
     if (!IS_ENABLED(CONFIG_OF) || !np)
-        return NULL;
+        return ERR_PTR(-ENXIO);
 
     pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
     if (!pdata)

View File

@@ -2389,6 +2389,22 @@ struct mlx4_slaves_pport mlx4_phys_to_slaves_pport_actv(
 }
 EXPORT_SYMBOL_GPL(mlx4_phys_to_slaves_pport_actv);
 
+static int mlx4_slaves_closest_port(struct mlx4_dev *dev, int slave, int port)
+{
+    struct mlx4_active_ports actv_ports = mlx4_get_active_ports(dev, slave);
+    int min_port = find_first_bit(actv_ports.ports, dev->caps.num_ports)
+            + 1;
+    int max_port = min_port +
+        bitmap_weight(actv_ports.ports, dev->caps.num_ports);
+
+    if (port < min_port)
+        port = min_port;
+    else if (port >= max_port)
+        port = max_port - 1;
+
+    return port;
+}
+
 int mlx4_set_vf_mac(struct mlx4_dev *dev, int port, int vf, u64 mac)
 {
     struct mlx4_priv *priv = mlx4_priv(dev);
@@ -2402,6 +2418,7 @@ int mlx4_set_vf_mac(struct mlx4_dev *dev, int port, int vf, u64 mac)
     if (slave < 0)
         return -EINVAL;
 
+    port = mlx4_slaves_closest_port(dev, slave, port);
     s_info = &priv->mfunc.master.vf_admin[slave].vport[port];
     s_info->mac = mac;
     mlx4_info(dev, "default mac on vf %d port %d to %llX will take afect only after vf restart\n",
@@ -2428,6 +2445,7 @@ int mlx4_set_vf_vlan(struct mlx4_dev *dev, int port, int vf, u16 vlan, u8 qos)
     if (slave < 0)
         return -EINVAL;
 
+    port = mlx4_slaves_closest_port(dev, slave, port);
     vf_admin = &priv->mfunc.master.vf_admin[slave].vport[port];
 
     if ((0 == vlan) && (0 == qos))
@@ -2455,6 +2473,7 @@ bool mlx4_get_slave_default_vlan(struct mlx4_dev *dev, int port, int slave,
     struct mlx4_priv *priv;
 
     priv = mlx4_priv(dev);
+    port = mlx4_slaves_closest_port(dev, slave, port);
     vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port];
 
     if (MLX4_VGT != vp_oper->state.default_vlan) {
@@ -2482,6 +2501,7 @@ int mlx4_set_vf_spoofchk(struct mlx4_dev *dev, int port, int vf, bool setting)
     if (slave < 0)
         return -EINVAL;
 
+    port = mlx4_slaves_closest_port(dev, slave, port);
     s_info = &priv->mfunc.master.vf_admin[slave].vport[port];
     s_info->spoofchk = setting;
@@ -2535,6 +2555,7 @@ int mlx4_set_vf_link_state(struct mlx4_dev *dev, int port, int vf, int link_stat
     if (slave < 0)
         return -EINVAL;
 
+    port = mlx4_slaves_closest_port(dev, slave, port);
     switch (link_state) {
     case IFLA_VF_LINK_STATE_AUTO:
         /* get current link state */

View File

@@ -487,6 +487,9 @@ static int mlx4_en_set_pauseparam(struct net_device *dev,
     struct mlx4_en_dev *mdev = priv->mdev;
     int err;
 
+    if (pause->autoneg)
+        return -EINVAL;
+
     priv->prof->tx_pause = pause->tx_pause != 0;
     priv->prof->rx_pause = pause->rx_pause != 0;
     err = mlx4_SET_PORT_general(mdev->dev, priv->port,

View File

@@ -298,6 +298,7 @@ static int mlx4_HW2SW_MPT(struct mlx4_dev *dev, struct mlx4_cmd_mailbox *mailbox
                 MLX4_CMD_TIME_CLASS_B, MLX4_CMD_WRAPPED);
 }
 
+/* Must protect against concurrent access */
 int mlx4_mr_hw_get_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr,
                        struct mlx4_mpt_entry ***mpt_entry)
 {
@@ -305,13 +306,10 @@ int mlx4_mr_hw_get_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr,
     int key = key_to_hw_index(mmr->key) & (dev->caps.num_mpts - 1);
     struct mlx4_cmd_mailbox *mailbox = NULL;
 
-    /* Make sure that at this point we have single-threaded access only */
-
     if (mmr->enabled != MLX4_MPT_EN_HW)
         return -EINVAL;
 
     err = mlx4_HW2SW_MPT(dev, NULL, key);
-
     if (err) {
         mlx4_warn(dev, "HW2SW_MPT failed (%d).", err);
         mlx4_warn(dev, "Most likely the MR has MWs bound to it.\n");
@@ -333,7 +331,6 @@ int mlx4_mr_hw_get_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr,
                            0, MLX4_CMD_QUERY_MPT,
                            MLX4_CMD_TIME_CLASS_B,
                            MLX4_CMD_WRAPPED);
-
         if (err)
             goto free_mailbox;
@@ -378,9 +375,10 @@ int mlx4_mr_hw_write_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr,
         err = mlx4_SW2HW_MPT(dev, mailbox, key);
     }
 
+    if (!err) {
         mmr->pd = be32_to_cpu((*mpt_entry)->pd_flags) & MLX4_MPT_PD_MASK;
-    if (!err)
         mmr->enabled = MLX4_MPT_EN_HW;
+    }
     return err;
 }
 EXPORT_SYMBOL_GPL(mlx4_mr_hw_write_mpt);
@@ -400,11 +398,12 @@ EXPORT_SYMBOL_GPL(mlx4_mr_hw_put_mpt);
 int mlx4_mr_hw_change_pd(struct mlx4_dev *dev, struct mlx4_mpt_entry *mpt_entry,
                          u32 pdn)
 {
-    u32 pd_flags = be32_to_cpu(mpt_entry->pd_flags);
+    u32 pd_flags = be32_to_cpu(mpt_entry->pd_flags) & ~MLX4_MPT_PD_MASK;
     /* The wrapper function will put the slave's id here */
     if (mlx4_is_mfunc(dev))
         pd_flags &= ~MLX4_MPT_PD_VF_MASK;
-    mpt_entry->pd_flags = cpu_to_be32((pd_flags & ~MLX4_MPT_PD_MASK) |
+
+    mpt_entry->pd_flags = cpu_to_be32(pd_flags |
                                       (pdn & MLX4_MPT_PD_MASK)
                                       | MLX4_MPT_PD_FLAG_EN_INV);
     return 0;
@@ -600,14 +599,18 @@ int mlx4_mr_rereg_mem_write(struct mlx4_dev *dev, struct mlx4_mr *mr,
 {
     int err;
 
-    mpt_entry->start       = cpu_to_be64(mr->iova);
-    mpt_entry->length      = cpu_to_be64(mr->size);
-    mpt_entry->entity_size = cpu_to_be32(mr->mtt.page_shift);
+    mpt_entry->start       = cpu_to_be64(iova);
+    mpt_entry->length      = cpu_to_be64(size);
+    mpt_entry->entity_size = cpu_to_be32(page_shift);
 
     err = mlx4_mtt_init(dev, npages, page_shift, &mr->mtt);
     if (err)
         return err;
 
+    mpt_entry->pd_flags &= cpu_to_be32(MLX4_MPT_PD_MASK |
+                                       MLX4_MPT_PD_FLAG_EN_INV);
+    mpt_entry->flags    &= cpu_to_be32(MLX4_MPT_FLAG_FREE |
+                                       MLX4_MPT_FLAG_SW_OWNS);
     if (mr->mtt.order < 0) {
         mpt_entry->flags |= cpu_to_be32(MLX4_MPT_FLAG_PHYSICAL);
         mpt_entry->mtt_addr = 0;
@@ -617,6 +620,14 @@ int mlx4_mr_rereg_mem_write(struct mlx4_dev *dev, struct mlx4_mr *mr,
         if (mr->mtt.page_shift == 0)
             mpt_entry->mtt_sz = cpu_to_be32(1 << mr->mtt.order);
     }
+    if (mr->mtt.order >= 0 && mr->mtt.page_shift == 0) {
+        /* fast register MR in free state */
+        mpt_entry->flags    |= cpu_to_be32(MLX4_MPT_FLAG_FREE);
+        mpt_entry->pd_flags |= cpu_to_be32(MLX4_MPT_PD_FLAG_FAST_REG |
+                                           MLX4_MPT_PD_FLAG_RAE);
+    } else {
+        mpt_entry->flags    |= cpu_to_be32(MLX4_MPT_FLAG_SW_OWNS);
+    }
     mr->enabled = MLX4_MPT_EN_SW;
 
     return 0;

View File

@@ -103,7 +103,8 @@ static int find_index(struct mlx4_dev *dev,
     int i;
 
     for (i = 0; i < MLX4_MAX_MAC_NUM; i++) {
-        if ((mac & MLX4_MAC_MASK) ==
+        if (table->refs[i] &&
+            (MLX4_MAC_MASK & mac) ==
             (MLX4_MAC_MASK & be64_to_cpu(table->entries[i])))
             return i;
     }
@@ -165,12 +166,14 @@ int __mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac)
     mutex_lock(&table->mutex);
     for (i = 0; i < MLX4_MAX_MAC_NUM; i++) {
-        if (free < 0 && !table->entries[i]) {
-            free = i;
+        if (!table->refs[i]) {
+            if (free < 0)
+                free = i;
             continue;
         }
 
-        if (mac == (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
+        if ((MLX4_MAC_MASK & mac) ==
+            (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
             /* MAC already registered, increment ref count */
             err = i;
             ++table->refs[i];
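
Both port-table hunks switch the notion of "slot in use" from a non-zero entry to a non-zero reference count, so an all-zero MAC can be registered like any other address. A self-contained sketch of a table keyed that way (simplified, not the mlx4 data structure):

#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 8

/* Free slots are identified by a zero refcount, not by a zero entry:
 * a registered MAC of all-zeroes would otherwise look free. */
struct mac_table {
    uint64_t entries[TABLE_SIZE];
    int      refs[TABLE_SIZE];
};

static int register_mac(struct mac_table *t, uint64_t mac)
{
    int i, free_idx = -1;

    for (i = 0; i < TABLE_SIZE; i++) {
        if (!t->refs[i]) {              /* unused slot */
            if (free_idx < 0)
                free_idx = i;
            continue;
        }
        if (t->entries[i] == mac) {     /* already registered */
            ++t->refs[i];
            return i;
        }
    }
    if (free_idx < 0)
        return -1;                      /* table full */
    t->entries[free_idx] = mac;
    t->refs[free_idx] = 1;
    return free_idx;
}

int main(void)
{
    struct mac_table t = {0};

    printf("first:  slot %d\n", register_mac(&t, 0x0));  /* zero MAC ok */
    printf("again:  slot %d\n", register_mac(&t, 0x0));  /* refcounted */
    return 0;
}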

View File

@@ -390,13 +390,14 @@ err_icm:
 EXPORT_SYMBOL_GPL(mlx4_qp_alloc);
 
 #define MLX4_UPDATE_QP_SUPPORTED_ATTRS MLX4_UPDATE_QP_SMAC
-int mlx4_update_qp(struct mlx4_dev *dev, struct mlx4_qp *qp,
+int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn,
                    enum mlx4_update_qp_attr attr,
                    struct mlx4_update_qp_params *params)
 {
     struct mlx4_cmd_mailbox *mailbox;
     struct mlx4_update_qp_context *cmd;
     u64 pri_addr_path_mask = 0;
+    u64 qp_mask = 0;
     int err = 0;
 
     mailbox = mlx4_alloc_cmd_mailbox(dev);
@@ -413,9 +414,16 @@ int mlx4_update_qp(struct mlx4_dev *dev, u32 qpn,
         cmd->qp_context.pri_path.grh_mylmc = params->smac_index;
     }
 
-    cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask);
-
-    err = mlx4_cmd(dev, mailbox->dma, qp->qpn & 0xffffff, 0,
+    if (attr & MLX4_UPDATE_QP_VSD) {
+        qp_mask |= 1ULL << MLX4_UPD_QP_MASK_VSD;
+        if (params->flags & MLX4_UPDATE_QP_PARAMS_FLAGS_VSD_ENABLE)
+            cmd->qp_context.param3 |= cpu_to_be32(MLX4_STRIP_VLAN);
+    }
+
+    cmd->primary_addr_path_mask = cpu_to_be64(pri_addr_path_mask);
+    cmd->qp_mask = cpu_to_be64(qp_mask);
+
+    err = mlx4_cmd(dev, mailbox->dma, qpn & 0xffffff, 0,
                    MLX4_CMD_UPDATE_QP, MLX4_CMD_TIME_CLASS_A,
                    MLX4_CMD_NATIVE);

View File

@@ -702,11 +702,13 @@ static int update_vport_qp_param(struct mlx4_dev *dev,
     struct mlx4_qp_context *qpc = inbox->buf + 8;
     struct mlx4_vport_oper_state *vp_oper;
     struct mlx4_priv *priv;
+    u32 qp_type;
     int port;
 
     port = (qpc->pri_path.sched_queue & 0x40) ? 2 : 1;
     priv = mlx4_priv(dev);
     vp_oper = &priv->mfunc.master.vf_oper[slave].vport[port];
+    qp_type = (be32_to_cpu(qpc->flags) >> 16) & 0xff;
 
     if (MLX4_VGT != vp_oper->state.default_vlan) {
         /* the reserved QPs (special, proxy, tunnel)
@@ -715,8 +717,20 @@ static int update_vport_qp_param(struct mlx4_dev *dev,
         if (mlx4_is_qp_reserved(dev, qpn))
             return 0;
 
-        /* force strip vlan by clear vsd */
+        /* force strip vlan by clear vsd, MLX QP refers to Raw Ethernet */
+        if (qp_type == MLX4_QP_ST_UD ||
+            (qp_type == MLX4_QP_ST_MLX && mlx4_is_eth(dev, port))) {
+            if (dev->caps.bmme_flags & MLX4_BMME_FLAG_VSD_INIT2RTR) {
+                *(__be32 *)inbox->buf =
+                    cpu_to_be32(be32_to_cpu(*(__be32 *)inbox->buf) |
+                    MLX4_QP_OPTPAR_VLAN_STRIPPING);
                 qpc->param3 &= ~cpu_to_be32(MLX4_STRIP_VLAN);
+            } else {
+                struct mlx4_update_qp_params params = {.flags = 0};
+
+                mlx4_update_qp(dev, qpn, MLX4_UPDATE_QP_VSD, &params);
+            }
+        }
 
         if (vp_oper->state.link_state == IFLA_VF_LINK_STATE_DISABLE &&
             dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_UPDATE_QP) {
@@ -3998,14 +4012,18 @@ int mlx4_UPDATE_QP_wrapper(struct mlx4_dev *dev, int slave,
     }
 
     port = (rqp->sched_queue >> 6 & 1) + 1;
+
+    if (pri_addr_path_mask & (1ULL << MLX4_UPD_QP_PATH_MASK_MAC_INDEX)) {
         smac_index = cmd->qp_context.pri_path.grh_mylmc;
         err = mac_find_smac_ix_in_slave(dev, slave, port,
                                         smac_index, &mac);
         if (err) {
             mlx4_err(dev, "Failed to update qpn 0x%x, MAC is invalid. smac_ix: %d\n",
                      qpn, smac_index);
             goto err_mac;
         }
+    }
 
     err = mlx4_cmd(dev, inbox->dma,
                    vhcr->in_modifier, 0,
@@ -4818,7 +4836,7 @@ void mlx4_vf_immed_vlan_work_handler(struct work_struct *_work)
             MLX4_VLAN_CTRL_ETH_RX_BLOCK_UNTAGGED;
 
     upd_context = mailbox->buf;
-    upd_context->qp_mask = cpu_to_be64(MLX4_UPD_QP_MASK_VSD);
+    upd_context->qp_mask = cpu_to_be64(1ULL << MLX4_UPD_QP_MASK_VSD);
 
     spin_lock_irq(mlx4_tlock(dev));
     list_for_each_entry_safe(qp, tmp, qp_list, com.list) {

View File

@@ -290,9 +290,11 @@ static void octeon_mgmt_clean_tx_buffers(struct octeon_mgmt *p)
         /* Read the hardware TX timestamp if one was recorded */
         if (unlikely(re.s.tstamp)) {
             struct skb_shared_hwtstamps ts;
+            u64 ns;
+
             memset(&ts, 0, sizeof(ts));
             /* Read the timestamp */
-            u64 ns = cvmx_read_csr(CVMX_MIXX_TSTAMP(p->port));
+            ns = cvmx_read_csr(CVMX_MIXX_TSTAMP(p->port));
             /* Remove the timestamp from the FIFO */
             cvmx_write_csr(CVMX_MIXX_TSCTL(p->port), 0);
             /* Tell the kernel about the timestamp */

View File

@@ -7,6 +7,7 @@ config PCH_GBE
     depends on PCI && (X86_32 || COMPILE_TEST)
     select MII
     select PTP_1588_CLOCK_PCH
+    select NET_PTP_CLASSIFY
     ---help---
       This is a gigabit ethernet driver for EG20T PCH.
       EG20T PCH is the platform controller hub that is used in Intel's

View File

@@ -1783,33 +1783,31 @@ static void __rtl8169_set_features(struct net_device *dev,
                                    netdev_features_t features)
 {
     struct rtl8169_private *tp = netdev_priv(dev);
-    netdev_features_t changed = features ^ dev->features;
     void __iomem *ioaddr = tp->mmio_addr;
+    u32 rx_config;
 
-    if (!(changed & (NETIF_F_RXALL | NETIF_F_RXCSUM |
-                     NETIF_F_HW_VLAN_CTAG_RX)))
-        return;
+    rx_config = RTL_R32(RxConfig);
+    if (features & NETIF_F_RXALL)
+        rx_config |= (AcceptErr | AcceptRunt);
+    else
+        rx_config &= ~(AcceptErr | AcceptRunt);
+
+    RTL_W32(RxConfig, rx_config);
 
-    if (changed & (NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX)) {
     if (features & NETIF_F_RXCSUM)
         tp->cp_cmd |= RxChkSum;
     else
         tp->cp_cmd &= ~RxChkSum;
 
-    if (dev->features & NETIF_F_HW_VLAN_CTAG_RX)
+    if (features & NETIF_F_HW_VLAN_CTAG_RX)
         tp->cp_cmd |= RxVlan;
     else
         tp->cp_cmd &= ~RxVlan;
 
+    tp->cp_cmd |= RTL_R16(CPlusCmd) & ~(RxVlan | RxChkSum);
+
     RTL_W16(CPlusCmd, tp->cp_cmd);
     RTL_R16(CPlusCmd);
-    }
-
-    if (changed & NETIF_F_RXALL) {
-        int tmp = (RTL_R32(RxConfig) & ~(AcceptErr | AcceptRunt));
-        if (features & NETIF_F_RXALL)
-            tmp |= (AcceptErr | AcceptRunt);
-        RTL_W32(RxConfig, tmp);
-    }
 }
 
 static int rtl8169_set_features(struct net_device *dev,
@@ -1817,7 +1815,10 @@ static int rtl8169_set_features(struct net_device *dev,
 {
     struct rtl8169_private *tp = netdev_priv(dev);
 
+    features &= NETIF_F_RXALL | NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_RX;
+
     rtl_lock_work(tp);
+    if (features ^ dev->features)
         __rtl8169_set_features(dev, features);
     rtl_unlock_work(tp);
@@ -7118,8 +7119,7 @@ static void rtl_hw_initialize(struct rtl8169_private *tp)
     }
 }
 
-static int
-rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
     const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data;
     const unsigned int region = cfg->region;
@@ -7194,7 +7194,7 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
         goto err_out_mwi_2;
     }
 
-    tp->cp_cmd = RxChkSum;
+    tp->cp_cmd = 0;
 
     if ((sizeof(dma_addr_t) > 4) &&
         !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) && use_dac) {
@@ -7235,13 +7235,6 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     pci_set_master(pdev);
 
-    /*
-     * Pretend we are using VLANs; This bypasses a nasty bug where
-     * Interrupts stop flowing on high load on 8110SCd controllers.
-     */
-    if (tp->mac_version == RTL_GIGA_MAC_VER_05)
-        tp->cp_cmd |= RxVlan;
-
     rtl_init_mdio_ops(tp);
     rtl_init_pll_power_ops(tp);
     rtl_init_jumbo_ops(tp);
@@ -7302,8 +7295,14 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
         NETIF_F_HIGHDMA;
 
+    tp->cp_cmd |= RxChkSum | RxVlan;
+
+    /*
+     * Pretend we are using VLANs; This bypasses a nasty bug where
+     * Interrupts stop flowing on high load on 8110SCd controllers.
+     */
     if (tp->mac_version == RTL_GIGA_MAC_VER_05)
-        /* 8110SCd requires hardware Rx VLAN - disallow toggling */
+        /* Disallow toggling */
         dev->hw_features &= ~NETIF_F_HW_VLAN_CTAG_RX;
 
     if (tp->txd_version == RTL_TD_0)

View File

@@ -2933,6 +2933,9 @@ void efx_farch_filter_sync_rx_mode(struct efx_nic *efx)
     u32 crc;
     int bit;
 
+    if (!efx_dev_registered(efx))
+        return;
+
     netif_addr_lock_bh(net_dev);
 
     efx->unicast_filter = !(net_dev->flags & IFF_PROMISC);

View File

@@ -350,14 +350,17 @@ static int vnet_walk_rx_one(struct vnet_port *port,
     if (IS_ERR(desc))
         return PTR_ERR(desc);
 
+    if (desc->hdr.state != VIO_DESC_READY)
+        return 1;
+
+    rmb();
+
     viodbg(DATA, "vio_walk_rx_one desc[%02x:%02x:%08x:%08x:%llx:%llx]\n",
            desc->hdr.state, desc->hdr.ack,
            desc->size, desc->ncookies,
            desc->cookies[0].cookie_addr,
            desc->cookies[0].cookie_size);
 
-    if (desc->hdr.state != VIO_DESC_READY)
-        return 1;
-
     err = vnet_rx_one(port, desc->size, desc->cookies, desc->ncookies);
     if (err == -ECONNRESET)
         return err;
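
The sunvnet change checks the descriptor state before touching anything else in it, with a read barrier in between, so the payload reads cannot be ordered ahead of the READY check. In portable C11 the same guarantee comes from an acquire load; a minimal sketch (names hypothetical):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define DESC_READY 1

/* Shared ring descriptor: the producer fills payload, then publishes
 * by storing DESC_READY with release semantics. */
struct desc {
    _Atomic int state;
    uint64_t    payload;
};

/* Consumer side, mirroring the fix: test the state first, and only
 * then read the rest of the descriptor.  The acquire load plays the
 * role the explicit rmb() plays in the driver. */
static int consume(struct desc *d, uint64_t *out)
{
    if (atomic_load_explicit(&d->state, memory_order_acquire) != DESC_READY)
        return 1;         /* nothing to do yet */

    *out = d->payload;    /* safe: ordered after the state check */
    return 0;
}

int main(void)
{
    struct desc d = { .payload = 42 };
    uint64_t v = 0;

    atomic_store_explicit(&d.state, DESC_READY, memory_order_release);
    printf("consume: %d (v=%llu)\n", consume(&d, &v),
           (unsigned long long)v);
    return 0;
}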

View File

@@ -699,6 +699,28 @@ static void cpsw_rx_handler(void *token, int len, int status)
     cpsw_dual_emac_src_port_detect(status, priv, ndev, skb);
 
     if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
+        bool ndev_status = false;
+        struct cpsw_slave *slave = priv->slaves;
+        int n;
+
+        if (priv->data.dual_emac) {
+            /* In dual emac mode check for all interfaces */
+            for (n = priv->data.slaves; n; n--, slave++)
+                if (netif_running(slave->ndev))
+                    ndev_status = true;
+        }
+
+        if (ndev_status && (status >= 0)) {
+            /* The packet received is for the interface which
+             * is already down and the other interface is up
+             * and running, intead of freeing which results
+             * in reducing of the number of rx descriptor in
+             * DMA engine, requeue skb back to cpdma.
+             */
+            new_skb = skb;
+            goto requeue;
+        }
+
         /* the interface is going down, skbs are purged */
         dev_kfree_skb_any(skb);
         return;
@@ -717,6 +739,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
         new_skb = skb;
     }
 
+requeue:
     ret = cpdma_chan_submit(priv->rxch, new_skb, new_skb->data,
                             skb_tailroom(new_skb), 0);
     if (WARN_ON(ret < 0))
@@ -2311,10 +2334,19 @@ static int cpsw_suspend(struct device *dev)
     struct net_device *ndev = platform_get_drvdata(pdev);
     struct cpsw_priv *priv = netdev_priv(ndev);
 
+    if (priv->data.dual_emac) {
+        int i;
+
+        for (i = 0; i < priv->data.slaves; i++) {
+            if (netif_running(priv->slaves[i].ndev))
+                cpsw_ndo_stop(priv->slaves[i].ndev);
+            soft_reset_slave(priv->slaves + i);
+        }
+    } else {
         if (netif_running(ndev))
             cpsw_ndo_stop(ndev);
         for_each_slave(priv, soft_reset_slave);
+    }
 
     pm_runtime_put_sync(&pdev->dev);
@@ -2328,14 +2360,24 @@ static int cpsw_resume(struct device *dev)
 {
     struct platform_device *pdev = to_platform_device(dev);
     struct net_device *ndev = platform_get_drvdata(pdev);
+    struct cpsw_priv *priv = netdev_priv(ndev);
 
     pm_runtime_get_sync(&pdev->dev);
 
     /* Select default pin state */
     pinctrl_pm_select_default_state(&pdev->dev);
 
+    if (priv->data.dual_emac) {
+        int i;
+
+        for (i = 0; i < priv->data.slaves; i++) {
+            if (netif_running(priv->slaves[i].ndev))
+                cpsw_ndo_open(priv->slaves[i].ndev);
+        }
+    } else {
         if (netif_running(ndev))
             cpsw_ndo_open(ndev);
+    }
+
     return 0;
 }

View File

@@ -36,6 +36,7 @@
 #include <linux/netpoll.h>
 
 #define MACVLAN_HASH_SIZE    (1 << BITS_PER_BYTE)
+#define MACVLAN_BC_QUEUE_LEN 1000
 
 struct macvlan_port {
     struct net_device *dev;
@@ -248,7 +249,7 @@ static void macvlan_broadcast_enqueue(struct macvlan_port *port,
         goto err;
 
     spin_lock(&port->bc_queue.lock);
-    if (skb_queue_len(&port->bc_queue) < skb->dev->tx_queue_len) {
+    if (skb_queue_len(&port->bc_queue) < MACVLAN_BC_QUEUE_LEN) {
         __skb_queue_tail(&port->bc_queue, nskb);
         err = 0;
     }
@@ -806,6 +807,7 @@ static netdev_features_t macvlan_fix_features(struct net_device *dev,
                                           features,
                                           mask);
     features |= ALWAYS_ON_FEATURES;
+    features &= ~NETIF_F_NETNS_LOCAL;
 
     return features;
 }
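
Capping the broadcast backlog with a fixed constant rather than the underlying device's tx_queue_len matters because tx_queue_len can legitimately be zero, which would starve broadcasts entirely while still bounding memory use. A toy model of the bounded enqueue (not the skb API):

#include <stdio.h>

#define BC_QUEUE_LEN 1000   /* fixed cap, like MACVLAN_BC_QUEUE_LEN */

struct queue {
    int len;
};

/* Enqueue with a compile-time bound instead of a per-device value. */
static int bc_enqueue(struct queue *q)
{
    if (q->len >= BC_QUEUE_LEN)
        return -1;          /* queue full: drop */
    q->len++;
    return 0;
}

int main(void)
{
    struct queue q = { .len = 0 };
    int dropped = 0;

    for (int i = 0; i < 1500; i++)
        if (bc_enqueue(&q))
            dropped++;
    printf("queued=%d dropped=%d\n", q.len, dropped);
    return 0;
}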

View File

@@ -592,8 +592,7 @@ static struct phy_driver ksphy_driver[] = {
     .phy_id      = PHY_ID_KSZ9031,
     .phy_id_mask = 0x00fffff0,
     .name        = "Micrel KSZ9031 Gigabit PHY",
-    .features    = (PHY_GBIT_FEATURES | SUPPORTED_Pause
-                    | SUPPORTED_Asym_Pause),
+    .features    = (PHY_GBIT_FEATURES | SUPPORTED_Pause),
     .flags       = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
     .config_init = ksz9031_config_init,
     .config_aneg = genphy_config_aneg,

View File

@@ -2019,7 +2019,7 @@ static int rtl8153_enable(struct r8152 *tp)
     return rtl_enable(tp);
 }
 
-static void rtl8152_disable(struct r8152 *tp)
+static void rtl_disable(struct r8152 *tp)
 {
     u32 ocp_data;
     int i;
@@ -2232,6 +2232,13 @@ static inline void r8152b_enable_aldps(struct r8152 *tp)
               LINKENA | DIS_SDSAVE);
 }
 
+static void rtl8152_disable(struct r8152 *tp)
+{
+    r8152b_disable_aldps(tp);
+    rtl_disable(tp);
+    r8152b_enable_aldps(tp);
+}
+
 static void r8152b_hw_phy_cfg(struct r8152 *tp)
 {
     u16 data;
@@ -2242,11 +2249,8 @@ static void r8152b_hw_phy_cfg(struct r8152 *tp)
         r8152_mdio_write(tp, MII_BMCR, data);
     }
 
-    r8152b_disable_aldps(tp);
-
     rtl_clear_bp(tp);
 
-    r8152b_enable_aldps(tp);
-
     set_bit(PHY_RESET, &tp->flags);
 }
@@ -2255,9 +2259,6 @@ static void r8152b_exit_oob(struct r8152 *tp)
     u32 ocp_data;
     int i;
 
-    if (test_bit(RTL8152_UNPLUG, &tp->flags))
-        return;
-
     ocp_data = ocp_read_dword(tp, MCU_TYPE_PLA, PLA_RCR);
     ocp_data &= ~RCR_ACPT_ALL;
     ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RCR, ocp_data);
@@ -2347,7 +2348,7 @@ static void r8152b_enter_oob(struct r8152 *tp)
     ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL1, RXFIFO_THR2_OOB);
     ocp_write_dword(tp, MCU_TYPE_PLA, PLA_RXFIFO_CTRL2, RXFIFO_THR3_OOB);
 
-    rtl8152_disable(tp);
+    rtl_disable(tp);
 
     for (i = 0; i < 1000; i++) {
         ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
@@ -2485,9 +2486,6 @@ static void r8153_first_init(struct r8152 *tp)
     u32 ocp_data;
     int i;
 
-    if (test_bit(RTL8152_UNPLUG, &tp->flags))
-        return;
-
     rxdy_gated_en(tp, true);
     r8153_teredo_off(tp);
@@ -2560,7 +2558,7 @@ static void r8153_enter_oob(struct r8152 *tp)
     ocp_data &= ~NOW_IS_OOB;
     ocp_write_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL, ocp_data);
 
-    rtl8152_disable(tp);
+    rtl_disable(tp);
 
     for (i = 0; i < 1000; i++) {
         ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_OOB_CTRL);
@@ -2624,6 +2622,13 @@ static void r8153_enable_aldps(struct r8152 *tp)
     ocp_reg_write(tp, OCP_POWER_CFG, data);
 }
 
+static void rtl8153_disable(struct r8152 *tp)
+{
+    r8153_disable_aldps(tp);
+    rtl_disable(tp);
+    r8153_enable_aldps(tp);
+}
+
 static int rtl8152_set_speed(struct r8152 *tp, u8 autoneg, u16 speed, u8 duplex)
 {
     u16 bmcr, anar, gbcr;
@@ -2714,6 +2719,16 @@ out:
     return ret;
 }
 
+static void rtl8152_up(struct r8152 *tp)
+{
+    if (test_bit(RTL8152_UNPLUG, &tp->flags))
+        return;
+
+    r8152b_disable_aldps(tp);
+    r8152b_exit_oob(tp);
+    r8152b_enable_aldps(tp);
+}
+
 static void rtl8152_down(struct r8152 *tp)
 {
     if (test_bit(RTL8152_UNPLUG, &tp->flags)) {
@@ -2727,6 +2742,16 @@ static void rtl8152_down(struct r8152 *tp)
     r8152b_enable_aldps(tp);
 }
 
+static void rtl8153_up(struct r8152 *tp)
+{
+    if (test_bit(RTL8152_UNPLUG, &tp->flags))
+        return;
+
+    r8153_disable_aldps(tp);
+    r8153_first_init(tp);
+    r8153_enable_aldps(tp);
+}
+
 static void rtl8153_down(struct r8152 *tp)
 {
     if (test_bit(RTL8152_UNPLUG, &tp->flags)) {
@@ -2946,6 +2971,8 @@ static void r8152b_init(struct r8152 *tp)
     if (test_bit(RTL8152_UNPLUG, &tp->flags))
         return;
 
+    r8152b_disable_aldps(tp);
+
     if (tp->version == RTL_VER_01) {
         ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_LED_FEATURE);
         ocp_data &= ~LED_MODE_MASK;
@@ -2984,6 +3011,7 @@ static void r8153_init(struct r8152 *tp)
     if (test_bit(RTL8152_UNPLUG, &tp->flags))
         return;
 
+    r8153_disable_aldps(tp);
     r8153_u1u2en(tp, false);
 
     for (i = 0; i < 500; i++) {
@@ -3392,7 +3420,7 @@ static int rtl_ops_init(struct r8152 *tp, const struct usb_device_id *id)
         ops->init    = r8152b_init;
         ops->enable  = rtl8152_enable;
         ops->disable = rtl8152_disable;
-        ops->up      = r8152b_exit_oob;
+        ops->up      = rtl8152_up;
         ops->down    = rtl8152_down;
         ops->unload  = rtl8152_unload;
         ret = 0;
@@ -3400,8 +3428,8 @@ static int rtl_ops_init(struct r8152 *tp, const struct usb_device_id *id)
     case PRODUCT_ID_RTL8153:
         ops->init    = r8153_init;
         ops->enable  = rtl8153_enable;
-        ops->disable = rtl8152_disable;
-        ops->up      = r8153_first_init;
+        ops->disable = rtl8153_disable;
+        ops->up      = rtl8153_up;
         ops->down    = rtl8153_down;
         ops->unload  = rtl8153_unload;
         ret = 0;
@@ -3416,8 +3444,8 @@ static int rtl_ops_init(struct r8152 *tp, const struct usb_device_id *id)
     case PRODUCT_ID_SAMSUNG:
         ops->init    = r8153_init;
         ops->enable  = rtl8153_enable;
-        ops->disable = rtl8152_disable;
-        ops->up      = r8153_first_init;
+        ops->disable = rtl8153_disable;
+        ops->up      = rtl8153_up;
         ops->down    = rtl8153_down;
         ops->unload  = rtl8153_unload;
         ret = 0;
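
The r8152 rework routes everything through per-chip wrapper functions in the ops table, so the ALDPS bracketing lives next to the chip variant it belongs to instead of at every call site. A sketch of that dispatch shape, with stand-in names and product IDs:

#include <stdio.h>

struct rtl_ops {
    void (*up)(void);
    void (*disable)(void);
};

/* Stand-ins for the per-variant wrappers. */
static void v8152_up(void)      { printf("8152: aldps off, exit oob, aldps on\n"); }
static void v8152_disable(void) { printf("8152: aldps off, disable, aldps on\n"); }
static void v8153_up(void)      { printf("8153: aldps off, first init, aldps on\n"); }
static void v8153_disable(void) { printf("8153: aldps off, disable, aldps on\n"); }

static void ops_init(struct rtl_ops *ops, int product_id)
{
    switch (product_id) {
    case 0x8152:
        ops->up = v8152_up;
        ops->disable = v8152_disable;
        break;
    case 0x8153:
        ops->up = v8153_up;
        ops->disable = v8153_disable;
        break;
    }
}

int main(void)
{
    struct rtl_ops ops;

    ops_init(&ops, 0x8153);
    ops.up();
    ops.disable();
    return 0;
}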

View File

@@ -57,7 +57,7 @@ int ath9k_cmn_beacon_config_sta(struct ath_hw *ah,
                                 struct ath9k_beacon_state *bs)
 {
     struct ath_common *common = ath9k_hw_common(ah);
-    int dtim_intval, sleepduration;
+    int dtim_intval;
     u64 tsf;
 
     /* No need to configure beacon if we are not associated */
@@ -75,7 +75,6 @@ int ath9k_cmn_beacon_config_sta(struct ath_hw *ah,
      * last beacon we received (which may be none).
      */
     dtim_intval = conf->intval * conf->dtim_period;
-    sleepduration = ah->hw->conf.listen_interval * conf->intval;
 
     /*
      * Pull nexttbtt forward to reflect the current
@@ -113,7 +112,7 @@ int ath9k_cmn_beacon_config_sta(struct ath_hw *ah,
      */
     bs->bs_sleepduration = TU_TO_USEC(roundup(IEEE80211_MS_TO_TU(100),
-                                              sleepduration));
+                                              conf->intval));
     if (bs->bs_sleepduration > bs->bs_dtimperiod)
         bs->bs_sleepduration = bs->bs_dtimperiod;

View File

@@ -978,7 +978,7 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
     struct ath_hw *ah = common->ah;
     struct ath_htc_rx_status *rxstatus;
     struct ath_rx_status rx_stats;
-    bool decrypt_error;
+    bool decrypt_error = false;
 
     if (skb->len < HTC_RX_FRAME_HEADER_SIZE) {
         ath_err(common, "Corrupted RX frame, dropping (len: %d)\n",

View File

@@ -27,10 +27,17 @@ config BRCMFMAC
       one of the bus interface support. If you choose to build a module,
       it'll be called brcmfmac.ko.
 
+config BRCMFMAC_PROTO_BCDC
+    bool
+
+config BRCMFMAC_PROTO_MSGBUF
+    bool
+
 config BRCMFMAC_SDIO
     bool "SDIO bus interface support for FullMAC driver"
     depends on (MMC = y || MMC = BRCMFMAC)
     depends on BRCMFMAC
+    select BRCMFMAC_PROTO_BCDC
     select FW_LOADER
     default y
     ---help---
@@ -42,6 +49,7 @@ config BRCMFMAC_USB
     bool "USB bus interface support for FullMAC driver"
     depends on (USB = y || USB = BRCMFMAC)
     depends on BRCMFMAC
+    select BRCMFMAC_PROTO_BCDC
     select FW_LOADER
     ---help---
       This option enables the USB bus interface support for Broadcom
@@ -52,6 +60,8 @@ config BRCMFMAC_PCIE
     bool "PCIE bus interface support for FullMAC driver"
     depends on BRCMFMAC
     depends on PCI
+    depends on HAS_DMA
+    select BRCMFMAC_PROTO_MSGBUF
     select FW_LOADER
     ---help---
       This option enables the PCIE bus interface support for Broadcom

View File

@@ -30,16 +30,18 @@ brcmfmac-objs += \
         fwsignal.o \
         p2p.o \
         proto.o \
-        bcdc.o \
-        commonring.o \
-        flowring.o \
-        msgbuf.o \
         dhd_common.o \
         dhd_linux.o \
         firmware.o \
         feature.o \
         btcoex.o \
         vendor.o
+brcmfmac-$(CONFIG_BRCMFMAC_PROTO_BCDC) += \
+        bcdc.o
+brcmfmac-$(CONFIG_BRCMFMAC_PROTO_MSGBUF) += \
+        commonring.o \
+        flowring.o \
+        msgbuf.o
 brcmfmac-$(CONFIG_BRCMFMAC_SDIO) += \
         dhd_sdio.o \
         bcmsdh.o

View File

@@ -16,9 +16,12 @@
 #ifndef BRCMFMAC_BCDC_H
 #define BRCMFMAC_BCDC_H
 
+#ifdef CONFIG_BRCMFMAC_PROTO_BCDC
 int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr);
 void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr);
+#else
+static inline int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr) { return 0; }
+static inline void brcmf_proto_bcdc_detach(struct brcmf_pub *drvr) {}
+#endif
 
 #endif /* BRCMFMAC_BCDC_H */
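
This is the usual pattern for compile-time-optional subsystems: when the Kconfig symbol is off, the header supplies static inline no-op stubs so call sites build unchanged with no #ifdefs of their own. A self-contained illustration (HAVE_FEATURE stands in for the CONFIG_* symbol):

#include <stdio.h>

/* Compile with -DHAVE_FEATURE for the real implementation; without it
 * the stubs below make every call site compile to nothing. */
#ifdef HAVE_FEATURE
static int feature_attach(void)
{
    printf("feature attached\n");
    return 0;
}
static void feature_detach(void)
{
    printf("feature detached\n");
}
#else
static inline int feature_attach(void) { return 0; }
static inline void feature_detach(void) {}
#endif

int main(void)
{
    if (feature_attach() == 0)
        feature_detach();
    return 0;
}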

View File

@@ -185,7 +185,13 @@ static void brcmf_fweh_handle_if_event(struct brcmf_pub *drvr,
               ifevent->action, ifevent->ifidx, ifevent->bssidx,
               ifevent->flags, ifevent->role);
 
-    if (ifevent->flags & BRCMF_E_IF_FLAG_NOIF) {
+    /* The P2P Device interface event must not be ignored
+     * contrary to what firmware tells us. The only way to
+     * distinguish the P2P Device is by looking at the ifidx
+     * and bssidx received.
+     */
+    if (!(ifevent->ifidx == 0 && ifevent->bssidx == 1) &&
+        (ifevent->flags & BRCMF_E_IF_FLAG_NOIF)) {
         brcmf_dbg(EVENT, "event can be ignored\n");
         return;
     }
@@ -210,12 +216,12 @@ static void brcmf_fweh_handle_if_event(struct brcmf_pub *drvr,
         return;
     }
 
-    if (ifevent->action == BRCMF_E_IF_CHANGE)
+    if (ifp && ifevent->action == BRCMF_E_IF_CHANGE)
         brcmf_fws_reset_interface(ifp);
 
     err = brcmf_fweh_call_event_handler(ifp, emsg->event_code, emsg, data);
 
-    if (ifevent->action == BRCMF_E_IF_DEL) {
+    if (ifp && ifevent->action == BRCMF_E_IF_DEL) {
         brcmf_fws_del_interface(ifp);
         brcmf_del_if(drvr, ifevent->bssidx);
     }

View File

@@ -172,6 +172,8 @@ enum brcmf_fweh_event_code {
 #define BRCMF_E_IF_ROLE_STA         0
 #define BRCMF_E_IF_ROLE_AP          1
 #define BRCMF_E_IF_ROLE_WDS         2
+#define BRCMF_E_IF_ROLE_P2P_GO      3
+#define BRCMF_E_IF_ROLE_P2P_CLIENT  4
 
 /**
  * definitions for event packet validation.

View File

@@ -15,6 +15,7 @@
 #ifndef BRCMFMAC_MSGBUF_H
 #define BRCMFMAC_MSGBUF_H
 
+#ifdef CONFIG_BRCMFMAC_PROTO_MSGBUF
 
 #define BRCMF_H2D_MSGRING_CONTROL_SUBMIT_MAX_ITEM 20
 #define BRCMF_H2D_MSGRING_RXPOST_SUBMIT_MAX_ITEM  256
@@ -32,9 +33,15 @@
 int brcmf_proto_msgbuf_rx_trigger(struct device *dev);
+void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid);
 int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr);
 void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr);
-void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid);
+#else
+static inline int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr)
+{
+    return 0;
+}
+static inline void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr) {}
+#endif
 
 #endif /* BRCMFMAC_MSGBUF_H */

View File

@@ -497,8 +497,11 @@ brcmf_configure_arp_offload(struct brcmf_if *ifp, bool enable)
 static void
 brcmf_cfg80211_update_proto_addr_mode(struct wireless_dev *wdev)
 {
-    struct net_device *ndev = wdev->netdev;
-    struct brcmf_if *ifp = netdev_priv(ndev);
+    struct brcmf_cfg80211_vif *vif;
+    struct brcmf_if *ifp;
+
+    vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev);
+    ifp = vif->ifp;
 
     if ((wdev->iftype == NL80211_IFTYPE_ADHOC) ||
         (wdev->iftype == NL80211_IFTYPE_AP) ||
@@ -5143,6 +5146,7 @@ static int brcmf_enable_bw40_2g(struct brcmf_cfg80211_info *cfg)
         ch.band = BRCMU_CHAN_BAND_2G;
         ch.bw = BRCMU_CHAN_BW_40;
+        ch.sb = BRCMU_CHAN_SB_NONE;
         ch.chnum = 0;
         cfg->d11inf.encchspec(&ch);
@@ -5176,6 +5180,7 @@ static int brcmf_enable_bw40_2g(struct brcmf_cfg80211_info *cfg)
             brcmf_update_bw40_channel_flag(&band->channels[j], &ch);
         }
+        kfree(pbuf);
     }
     return err;
 }
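
Going through container_of() rather than wdev->netdev matters here, presumably because a P2P device wdev carries no netdev: the enclosing vif can still be recovered from the embedded member. A stand-alone sketch of the idiom with hypothetical struct names:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Recovering the enclosing object from a pointer to one of its
 * members works even when the member has no back-pointer of its own. */
struct wdev { int iftype; };
struct vif {
    int         ifp;
    struct wdev wdev;
};

int main(void)
{
    struct vif v = { .ifp = 7 };
    struct wdev *w = &v.wdev;

    struct vif *back = container_of(w, struct vif, wdev);
    printf("ifp = %d\n", back->ifp);
    return 0;
}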

View File

@@ -40,7 +40,7 @@
 #include "commands.h"
 #include "power.h"
 
-static bool force_cam;
+static bool force_cam = true;
 module_param(force_cam, bool, 0644);
 MODULE_PARM_DESC(force_cam, "force continuously aware mode (no power saving at all)");

View File

@@ -83,6 +83,8 @@
 #define IWL7260_TX_POWER_VERSION 0xffff /* meaningless */
 #define IWL3160_NVM_VERSION      0x709
 #define IWL3160_TX_POWER_VERSION 0xffff /* meaningless */
+#define IWL3165_NVM_VERSION      0x709
+#define IWL3165_TX_POWER_VERSION 0xffff /* meaningless */
 #define IWL7265_NVM_VERSION      0x0a1d
 #define IWL7265_TX_POWER_VERSION 0xffff /* meaningless */
@@ -92,6 +94,9 @@
 #define IWL3160_FW_PRE "iwlwifi-3160-"
 #define IWL3160_MODULE_FIRMWARE(api) IWL3160_FW_PRE __stringify(api) ".ucode"
 
+#define IWL3165_FW_PRE "iwlwifi-3165-"
+#define IWL3165_MODULE_FIRMWARE(api) IWL3165_FW_PRE __stringify(api) ".ucode"
+
 #define IWL7265_FW_PRE "iwlwifi-7265-"
 #define IWL7265_MODULE_FIRMWARE(api) IWL7265_FW_PRE __stringify(api) ".ucode"
@@ -213,6 +218,16 @@ static const struct iwl_pwr_tx_backoff iwl7265_pwr_tx_backoffs[] = {
     {0},
 };
 
+const struct iwl_cfg iwl3165_2ac_cfg = {
+    .name = "Intel(R) Dual Band Wireless AC 3165",
+    .fw_name_pre = IWL3165_FW_PRE,
+    IWL_DEVICE_7000,
+    .ht_params = &iwl7000_ht_params,
+    .nvm_ver = IWL3165_NVM_VERSION,
+    .nvm_calib_ver = IWL3165_TX_POWER_VERSION,
+    .pwr_tx_backoffs = iwl7265_pwr_tx_backoffs,
+};
+
 const struct iwl_cfg iwl7265_2ac_cfg = {
     .name = "Intel(R) Dual Band Wireless AC 7265",
     .fw_name_pre = IWL7265_FW_PRE,
@@ -245,4 +260,5 @@ const struct iwl_cfg iwl7265_n_cfg = {
 MODULE_FIRMWARE(IWL7260_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));
 MODULE_FIRMWARE(IWL3160_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
+MODULE_FIRMWARE(IWL3165_MODULE_FIRMWARE(IWL3160_UCODE_API_OK));
 MODULE_FIRMWARE(IWL7265_MODULE_FIRMWARE(IWL7260_UCODE_API_OK));

View File

@@ -120,6 +120,8 @@ enum iwl_led_mode {
 #define IWL_LONG_WD_TIMEOUT 10000
 #define IWL_MAX_WD_TIMEOUT  120000
 
+#define IWL_DEFAULT_MAX_TX_POWER 22
+
 /* Antenna presence definitions */
 #define ANT_NONE 0x0
 #define ANT_A    BIT(0)
@@ -335,6 +337,7 @@ extern const struct iwl_cfg iwl7260_n_cfg;
 extern const struct iwl_cfg iwl3160_2ac_cfg;
 extern const struct iwl_cfg iwl3160_2n_cfg;
 extern const struct iwl_cfg iwl3160_n_cfg;
+extern const struct iwl_cfg iwl3165_2ac_cfg;
 extern const struct iwl_cfg iwl7265_2ac_cfg;
 extern const struct iwl_cfg iwl7265_2n_cfg;
 extern const struct iwl_cfg iwl7265_n_cfg;

View File

@@ -146,8 +146,6 @@ static const u8 iwl_nvm_channels_family_8000[] = {
 #define LAST_2GHZ_HT_PLUS 9
 #define LAST_5GHZ_HT      161
 
-#define DEFAULT_MAX_TX_POWER 16
-
 /* rate data (static) */
 static struct ieee80211_rate iwl_cfg80211_rates[] = {
     { .bitrate = 1 * 10, .hw_value = 0, .hw_value_short = 0, },
@@ -295,7 +293,7 @@ static int iwl_init_channel_map(struct device *dev, const struct iwl_cfg *cfg,
          * Default value - highest tx power value. max_power
          * is not used in mvm, and is used for backwards compatibility
          */
-        channel->max_power = DEFAULT_MAX_TX_POWER;
+        channel->max_power = IWL_DEFAULT_MAX_TX_POWER;
         is_5ghz = channel->band == IEEE80211_BAND_5GHZ;
         IWL_DEBUG_EEPROM(dev,
                          "Ch. %d [%sGHz] %s%s%s%s%s%s%s(0x%02x %ddBm): Ad-Hoc %ssupported\n",

View File

@@ -585,8 +585,6 @@ int iwl_send_bt_init_conf(struct iwl_mvm *mvm)
     lockdep_assert_held(&mvm->mutex);
 
     if (unlikely(mvm->bt_force_ant_mode != BT_FORCE_ANT_DIS)) {
-        u32 mode;
-
         switch (mvm->bt_force_ant_mode) {
         case BT_FORCE_ANT_BT:
             mode = BT_COEX_BT;
@@ -756,7 +754,8 @@ static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
     struct iwl_bt_iterator_data *data = _data;
     struct iwl_mvm *mvm = data->mvm;
     struct ieee80211_chanctx_conf *chanctx_conf;
-    enum ieee80211_smps_mode smps_mode;
+    /* default smps_mode is AUTOMATIC - only used for client modes */
+    enum ieee80211_smps_mode smps_mode = IEEE80211_SMPS_AUTOMATIC;
     u32 bt_activity_grading;
     int ave_rssi;
@@ -764,8 +763,6 @@ static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
     switch (vif->type) {
     case NL80211_IFTYPE_STATION:
-        /* default smps_mode for BSS / P2P client is AUTOMATIC */
-        smps_mode = IEEE80211_SMPS_AUTOMATIC;
         break;
     case NL80211_IFTYPE_AP:
         if (!mvmvif->ap_ibss_active)
@@ -797,7 +794,7 @@ static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
     else if (bt_activity_grading >= BT_LOW_TRAFFIC)
         smps_mode = IEEE80211_SMPS_DYNAMIC;
 
-    /* relax SMPS contraints for next association */
+    /* relax SMPS constraints for next association */
     if (!vif->bss_conf.assoc)
         smps_mode = IEEE80211_SMPS_AUTOMATIC;

View File

@@ -74,8 +74,7 @@ static void iwl_dbgfs_update_pm(struct iwl_mvm *mvm,
     switch (param) {
     case MVM_DEBUGFS_PM_KEEP_ALIVE: {
-        struct ieee80211_hw *hw = mvm->hw;
-        int dtimper = hw->conf.ps_dtim_period ?: 1;
+        int dtimper = vif->bss_conf.dtim_period ?: 1;
         int dtimper_msec = dtimper * vif->bss_conf.beacon_int;
 
         IWL_DEBUG_POWER(mvm, "debugfs: set keep_alive= %d sec\n", val);

View File

@@ -1563,14 +1563,14 @@ enum iwl_sf_scenario {
 /**
  * Smart Fifo configuration command.
- * @state: smart fifo state, types listed in iwl_sf_sate.
+ * @state: smart fifo state, types listed in enum %iwl_sf_sate.
  * @watermark: Minimum allowed availabe free space in RXF for transient state.
  * @long_delay_timeouts: aging and idle timer values for each scenario
  * in long delay state.
  * @full_on_timeouts: timer values for each scenario in full on state.
  */
 struct iwl_sf_cfg_cmd {
-    enum iwl_sf_state state;
+    __le32 state;
     __le32 watermark[SF_TRANSIENT_STATES_NUMBER];
     __le32 long_delay_timeouts[SF_NUM_SCENARIO][SF_NUM_TIMEOUT_TYPES];
     __le32 full_on_timeouts[SF_NUM_SCENARIO][SF_NUM_TIMEOUT_TYPES];
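
Replacing the enum field with __le32 is wire-format discipline: an enum's storage size is implementation-defined and its value is host-endian, while a firmware command wants a fixed-width little-endian field. A small sketch of the explicit serialization (uint32_t plays the role of __le32; names are stand-ins):

#include <stdint.h>
#include <stdio.h>

enum sf_state { SF_LONG_DELAY_ON = 0, SF_FULL_ON, SF_UNINIT };

/* A wire command carries fixed-width, fixed-endianness fields, so the
 * struct declares uint32_t and the sender serializes explicitly (the
 * kernel's cpu_to_le32() does the equivalent). */
struct sf_cfg_cmd {
    uint32_t state;     /* little-endian on the wire */
};

static void put_le32(uint8_t *p, uint32_t v)
{
    p[0] = v;
    p[1] = v >> 8;
    p[2] = v >> 16;
    p[3] = v >> 24;
}

int main(void)
{
    uint8_t wire[4];

    put_le32(wire, (uint32_t)SF_FULL_ON);
    printf("wire bytes: %02x %02x %02x %02x\n",
           wire[0], wire[1], wire[2], wire[3]);
    return 0;
}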

View File

@@ -721,11 +721,6 @@ static int iwl_mvm_mac_ctxt_cmd_sta(struct iwl_mvm *mvm,
 	    !force_assoc_off) {
 		u32 dtim_offs;
-		/* Allow beacons to pass through as long as we are not
-		 * associated, or we do not have dtim period information.
-		 */
-		cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON);
 		/*
 		 * The DTIM count counts down, so when it is N that means N
 		 * more beacon intervals happen until the DTIM TBTT. Therefore
@@ -759,6 +754,11 @@ static int iwl_mvm_mac_ctxt_cmd_sta(struct iwl_mvm *mvm,
 		ctxt_sta->is_assoc = cpu_to_le32(1);
 	} else {
 		ctxt_sta->is_assoc = cpu_to_le32(0);
+		/* Allow beacons to pass through as long as we are not
+		 * associated, or we do not have dtim period information.
+		 */
+		cmd.filter_flags |= cpu_to_le32(MAC_FILTER_IN_BEACON);
 	}
 	ctxt_sta->bi = cpu_to_le32(vif->bss_conf.beacon_int);
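The comment kept by the first hunk explains the DTIM countdown: a count of N means the DTIM TBTT arrives N beacon intervals after the last beacon. A self-contained sketch of that offset computation; the timestamps and intervals are illustrative, not taken from the driver, with 1 TU = 1024 usec as in 802.11:

/*
 * Sketch of the DTIM TBTT offset described above.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t sync_tsf = 1000000;  /* TSF of the last beacon, in usec */
	uint8_t  dtim_count = 2;      /* beacon's DTIM count, ticks down to 0 */
	uint16_t beacon_int = 100;    /* beacon interval, in TUs */

	/* count N means the DTIM TBTT is N beacon intervals away */
	uint64_t dtim_offs = (uint64_t)dtim_count * beacon_int * 1024;

	printf("next DTIM TBTT at TSF %llu usec\n",
	       (unsigned long long)(sync_tsf + dtim_offs));
	return 0;
}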

View File

@@ -396,12 +396,14 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
 	else
 		hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT;
-	/* TODO: enable that only for firmwares that don't crash */
-	/* hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN; */
-	hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX;
-	hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES;
-	/* we create the 802.11 header and zero length SSID IE. */
-	hw->wiphy->max_sched_scan_ie_len = SCAN_OFFLOAD_PROBE_REQ_SIZE - 24 - 2;
+	if (IWL_UCODE_API(mvm->fw->ucode_ver) >= 10) {
+		hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_SCHED_SCAN;
+		hw->wiphy->max_sched_scan_ssids = PROBE_OPTION_MAX;
+		hw->wiphy->max_match_sets = IWL_SCAN_MAX_PROFILES;
+		/* we create the 802.11 header and zero length SSID IE. */
+		hw->wiphy->max_sched_scan_ie_len =
+			SCAN_OFFLOAD_PROBE_REQ_SIZE - 24 - 2;
+	}
 	hw->wiphy->features |= NL80211_FEATURE_P2P_GO_CTWIN |
 				NL80211_FEATURE_LOW_PRIORITY_SCAN |
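Instead of a commented-out flag, scheduled scan is now advertised only when the firmware API is new enough. A hedged sketch of the gating pattern; the version-word layout and the flag bit below are stand-ins, not the iwlwifi definitions:

/*
 * Sketch: advertise a capability only when the firmware supports it.
 */
#include <stdint.h>
#include <stdio.h>

#define UCODE_API(ver)            (((ver) >> 8) & 0xff) /* assumed layout */
#define FLAG_SUPPORTS_SCHED_SCAN  (1u << 0)             /* stand-in bit */

int main(void)
{
	uint32_t ucode_ver = 10u << 8;  /* illustrative: API level 10 */
	uint32_t wiphy_flags = 0;

	if (UCODE_API(ucode_ver) >= 10)
		wiphy_flags |= FLAG_SUPPORTS_SCHED_SCAN;

	printf("sched scan %s\n",
	       (wiphy_flags & FLAG_SUPPORTS_SCHED_SCAN) ? "advertised"
							: "hidden");
	return 0;
}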
@@ -1524,11 +1526,6 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
 		 */
 		iwl_mvm_remove_time_event(mvm, mvmvif,
 					  &mvmvif->time_event_data);
-	} else if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS |
-			      BSS_CHANGED_QOS)) {
-		ret = iwl_mvm_power_update_mac(mvm);
-		if (ret)
-			IWL_ERR(mvm, "failed to update power mode\n");
 	}
 	if (changes & BSS_CHANGED_BEACON_INFO) {
@@ -1536,6 +1533,12 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
 		WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
 	}
+	if (changes & (BSS_CHANGED_PS | BSS_CHANGED_P2P_PS | BSS_CHANGED_QOS)) {
+		ret = iwl_mvm_power_update_mac(mvm);
+		if (ret)
+			IWL_ERR(mvm, "failed to update power mode\n");
+	}
 	if (changes & BSS_CHANGED_TXPOWER) {
 		IWL_DEBUG_CALIB(mvm, "Changing TX Power to %d\n",
 				bss_conf->txpower);

View File

@@ -281,7 +281,6 @@ static void iwl_mvm_power_build_cmd(struct iwl_mvm *mvm,
 				    struct ieee80211_vif *vif,
 				    struct iwl_mac_power_cmd *cmd)
 {
-	struct ieee80211_hw *hw = mvm->hw;
 	struct ieee80211_chanctx_conf *chanctx_conf;
 	struct ieee80211_channel *chan;
 	int dtimper, dtimper_msec;
@@ -292,7 +291,7 @@ static void iwl_mvm_power_build_cmd(struct iwl_mvm *mvm,
 	cmd->id_and_color = cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
 							    mvmvif->color));
-	dtimper = hw->conf.ps_dtim_period ?: 1;
+	dtimper = vif->bss_conf.dtim_period;
 	/*
 	 * Regardless of power management state the driver must set
@@ -885,7 +884,7 @@ int iwl_mvm_update_d0i3_power_mode(struct iwl_mvm *mvm,
 	iwl_mvm_power_build_cmd(mvm, vif, &cmd);
 	if (enable) {
 		/* configure skip over dtim up to 300 msec */
-		int dtimper = mvm->hw->conf.ps_dtim_period ?: 1;
+		int dtimper = vif->bss_conf.dtim_period ?: 1;
 		int dtimper_msec = dtimper * vif->bss_conf.beacon_int;
 		if (WARN_ON(!dtimper_msec))
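The last hunk's comment bounds skip-over-DTIM at 300 ms, with the DTIM period now read per-vif. A sketch of the arithmetic as I read that comment; the final division is my assumption, not necessarily the driver's exact clamping:

/*
 * Sketch of "skip over dtim up to 300 msec": how many whole DTIM
 * periods fit into the 300 ms budget.
 */
#include <stdio.h>

int main(void)
{
	int dtim_period = 3;   /* per-vif value, as in the hunk above */
	int beacon_int = 100;  /* TUs, roughly 1 ms each */

	int dtimper = dtim_period ?: 1;          /* GNU ?:, guards against 0 */
	int dtimper_msec = dtimper * beacon_int; /* ~300 ms per DTIM here */
	int skip_periods = 300 / dtimper_msec;   /* sleep through this many */

	printf("skip %d DTIM period(s) of ~%d ms\n", skip_periods,
	       dtimper_msec);
	return 0;
}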

View File

@@ -149,13 +149,13 @@ static void iwl_mvm_get_signal_strength(struct iwl_mvm *mvm,
 		le32_to_cpu(phy_info->non_cfg_phy[IWL_RX_INFO_ENERGY_ANT_ABC_IDX]);
 	energy_a = (val & IWL_RX_INFO_ENERGY_ANT_A_MSK) >>
 			IWL_RX_INFO_ENERGY_ANT_A_POS;
-	energy_a = energy_a ? -energy_a : -256;
+	energy_a = energy_a ? -energy_a : S8_MIN;
 	energy_b = (val & IWL_RX_INFO_ENERGY_ANT_B_MSK) >>
 			IWL_RX_INFO_ENERGY_ANT_B_POS;
-	energy_b = energy_b ? -energy_b : -256;
+	energy_b = energy_b ? -energy_b : S8_MIN;
 	energy_c = (val & IWL_RX_INFO_ENERGY_ANT_C_MSK) >>
 			IWL_RX_INFO_ENERGY_ANT_C_POS;
-	energy_c = energy_c ? -energy_c : -256;
+	energy_c = energy_c ? -energy_c : S8_MIN;
 	max_energy = max(energy_a, energy_b);
 	max_energy = max(max_energy, energy_c);
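Each antenna's energy is a byte field in one PHY register word, and zero means "no measurement"; mapping that case to the most negative value keeps it out of the max() below, with S8_MIN (-128) replacing the magic -256. A standalone sketch with illustrative mask/shift values (the real ones live in the iwlwifi headers):

/*
 * Sketch of the per-antenna energy extraction above.
 */
#include <stdint.h>
#include <stdio.h>

#define S8_MIN            (-128)       /* stand-in for the kernel macro */
#define ENERGY_ANT_A_MSK  0x000000ff
#define ENERGY_ANT_A_POS  0
#define ENERGY_ANT_B_MSK  0x0000ff00
#define ENERGY_ANT_B_POS  8

static int max_i(int a, int b) { return a > b ? a : b; }

int main(void)
{
	uint32_t val = 0x00002d00;  /* antenna B read 0x2d (45), A silent */

	int energy_a = (val & ENERGY_ANT_A_MSK) >> ENERGY_ANT_A_POS;
	energy_a = energy_a ? -energy_a : S8_MIN;

	int energy_b = (val & ENERGY_ANT_B_MSK) >> ENERGY_ANT_B_POS;
	energy_b = energy_b ? -energy_b : S8_MIN;

	printf("max energy = %d dBm\n", max_i(energy_a, energy_b)); /* -45 */
	return 0;
}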

View File

@@ -172,7 +172,7 @@ static int iwl_mvm_sf_config(struct iwl_mvm *mvm, u8 sta_id,
 				 enum iwl_sf_state new_state)
 {
 	struct iwl_sf_cfg_cmd sf_cmd = {
-		.state = new_state,
+		.state = cpu_to_le32(new_state),
 	};
 	struct ieee80211_sta *sta;
 	int ret = 0;

View File

@@ -168,10 +168,14 @@ static void iwl_mvm_set_tx_cmd_rate(struct iwl_mvm *mvm,
 	/*
 	 * for data packets, rate info comes from the table inside the fw. This
-	 * table is controlled by LINK_QUALITY commands
+	 * table is controlled by LINK_QUALITY commands. Exclude ctrl port
+	 * frames like EAPOLs which should be treated as mgmt frames. This
+	 * avoids them being sent initially in high rates which increases the
+	 * chances for completion of the 4-Way handshake.
 	 */
-	if (ieee80211_is_data(fc) && sta) {
+	if (ieee80211_is_data(fc) && sta &&
+	    !(info->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO)) {
 		tx_cmd->initial_rate_index = 0;
 		tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE);
 		return;
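The hunk narrows the fast path: data frames normally take rates from the firmware's LINK_QUALITY table, but controlled-port frames such as EAPOLs are now excluded so the 4-way handshake starts at a conservative rate. A sketch of the resulting predicate; the flag bit is a stand-in, not the mac80211 value:

/*
 * Sketch: data frames with a station use the firmware rate table
 * unless they are controlled-port (EAPOL) frames.
 */
#include <stdbool.h>
#include <stdio.h>

#define TX_CTRL_PORT_CTRL_PROTO (1u << 0)  /* stand-in flag bit */

static bool use_fw_rate_table(bool is_data, bool has_sta, unsigned flags)
{
	return is_data && has_sta && !(flags & TX_CTRL_PORT_CTRL_PROTO);
}

int main(void)
{
	printf("plain data -> %d\n",
	       use_fw_rate_table(true, true, 0));                       /* 1 */
	printf("EAPOL      -> %d\n",
	       use_fw_rate_table(true, true, TX_CTRL_PORT_CTRL_PROTO)); /* 0 */
	return 0;
}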

Some files were not shown because too many files have changed in this diff.