Merge tag 'net-5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Networking fixes for 5.12-rc7, including fixes from can, ipsec,
  mac80211, wireless, and bpf trees.

  No scary regressions here or in the works, but small fixes for 5.12
  changes keep coming.

  Current release - regressions:

   - virtio: do not pull payload in skb->head

   - virtio: ensure mac header is set in virtio_net_hdr_to_skb()

   - Revert "net: correct sk_acceptq_is_full()"

   - mptcp: revert "mptcp: provide subflow aware release function"

   - ethernet: lan743x: fix ethernet frame cutoff issue

   - dsa: fix type was not set for devlink port

   - ethtool: remove link_mode param and derive link params from driver

   - sched: htb: fix null pointer dereference on a null new_q

   - wireless: iwlwifi: Fix softirq/hardirq disabling in
     iwl_pcie_enqueue_hcmd()

   - wireless: iwlwifi: fw: fix notification wait locking

   - wireless: brcmfmac: p2p: Fix deadlock introduced by avoiding the
     rtnl dependency

  Current release - new code bugs:

   - napi: fix hangup on napi_disable for threaded napi

   - bpf: take module reference for trampoline in module

   - wireless: mt76: mt7921: fix airtime reporting and related tx hangs

   - wireless: iwlwifi: mvm: rfi: don't lock mvm->mutex when sending
     config command

  Previous releases - regressions:

   - rfkill: revert back to old userspace API by default

   - nfc: fix infinite loop, refcount & memory leaks in LLCP sockets

   - let skb_orphan_partial wake-up waiters

   - xfrm/compat: Cleanup WARN()s that can be user-triggered

   - vxlan, geneve: do not modify the shared tunnel info when PMTU
     triggers an ICMP reply

   - can: fix msg_namelen values depending on CAN_REQUIRED_SIZE

   - can: uapi: mark union inside struct can_frame packed

   - sched: cls: fix action overwrite reference counting

   - sched: cls: fix err handler in tcf_action_init()

   - ethernet: mlxsw: fix ECN marking in tunnel decapsulation

   - ethernet: nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx

   - ethernet: i40e: fix receiving of single packets in xsk zero-copy
     mode

   - ethernet: cxgb4: avoid collecting SGE_QBASE regs during traffic

  Previous releases - always broken:

   - bpf: Refuse non-O_RDWR flags in BPF_OBJ_GET

   - bpf: Refcount task stack in bpf_get_task_stack

   - bpf, x86: Validate computation of branch displacements

   - ieee802154: fix many similar syzbot-found bugs
       - fix NULL dereferences in netlink attribute handling
       - reject unsupported operations on monitor interfaces
       - fix error handling in llsec_key_alloc()

   - xfrm: make ipv4 pmtu check honor ip header df

   - xfrm: make hash generation lock per network namespace

   - xfrm: esp: delete NETIF_F_SCTP_CRC bit from features for esp
     offload

   - ethtool: fix incorrect datatype in set_eee ops

   - xdp: fix xdp_return_frame() kernel BUG throw for page_pool memory
     model

   - openvswitch: fix send of uninitialized stack memory in ct limit
     reply

  Misc:

   - udp: add get handling for UDP_GRO sockopt"

* tag 'net-5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (182 commits)
  net: fix hangup on napi_disable for threaded napi
  net: hns3: Trivial spell fix in hns3 driver
  lan743x: fix ethernet frame cutoff issue
  net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
  net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
  net: dsa: lantiq_gswip: Don't use PHY auto polling
  net: sched: sch_teql: fix null-pointer dereference
  ipv6: report errors for iftoken via netlink extack
  net: sched: fix err handler in tcf_action_init()
  net: sched: fix action overwrite reference counting
  Revert "net: sched: bump refcount for new action in ACT replace mode"
  ice: fix memory leak of aRFS after resuming from suspend
  i40e: Fix sparse warning: missing error code 'err'
  i40e: Fix sparse error: 'vsi->netdev' could be null
  i40e: Fix sparse error: uninitialized symbol 'ring'
  i40e: Fix sparse errors in i40e_txrx.c
  i40e: Fix parameters in aq_get_phy_register()
  nl80211: fix beacon head validation
  bpf, x86: Validate computation of branch displacements for x86-32
  bpf, x86: Validate computation of branch displacements for x86-64
  ...
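The Misc entry above ("udp: add get handling for UDP_GRO sockopt") is the one small uAPI addition in this pull. A minimal userspace sketch of what it enables (error handling omitted; UDP_GRO comes from the uapi header, and before this change getsockopt() returned ENOPROTOOPT for it):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/udp.h>          /* UDP_GRO */

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1, val = 0;
        socklen_t len = sizeof(val);

        setsockopt(fd, IPPROTO_UDP, UDP_GRO, &on, sizeof(on));
        /* Reading the option back only works with this fix applied. */
        getsockopt(fd, IPPROTO_UDP, UDP_GRO, &val, &len);
        printf("UDP_GRO = %d\n", val);
        return 0;
}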
Linus Torvalds, 2021-04-09 15:26:51 -07:00
commit 4e04e7513b
179 changed files with 1871 additions and 714 deletions

@@ -32,7 +32,7 @@ required:
- interrupts
- interrupt-names
additionalProperties: false
unevaluatedProperties: false
examples:
- |

@@ -49,7 +49,7 @@ properties:
description:
Reference to an nvmem node for the MAC address
nvmem-cells-names:
nvmem-cell-names:
const: mac-address
phy-connection-type:

@@ -65,6 +65,71 @@ KSZ9031:
step is 60ps. The default value is the neutral setting, so setting
rxc-skew-ps=<0> actually results in -900 picoseconds adjustment.
The KSZ9031 hardware supports a range of skew values from negative to
positive, where the specific range is property dependent. All values
specified in the devicetree are offset by the minimum value so they
can be represented as positive integers, since it's difficult to
represent a negative number in the devicetree (a short worked example
follows the tables below).
The following table of 5-bit values applies to rxc-skew-ps and txc-skew-ps.
Pad Skew Value Delay (ps) Devicetree Value
------------------------------------------------------
0_0000 -900ps 0
0_0001 -840ps 60
0_0010 -780ps 120
0_0011 -720ps 180
0_0100 -660ps 240
0_0101 -600ps 300
0_0110 -540ps 360
0_0111 -480ps 420
0_1000 -420ps 480
0_1001 -360ps 540
0_1010 -300ps 600
0_1011 -240ps 660
0_1100 -180ps 720
0_1101 -120ps 780
0_1110 -60ps 840
0_1111 0ps 900
1_0000 60ps 960
1_0001 120ps 1020
1_0010 180ps 1080
1_0011 240ps 1140
1_0100 300ps 1200
1_0101 360ps 1260
1_0110 420ps 1320
1_0111 480ps 1380
1_1000 540ps 1440
1_1001 600ps 1500
1_1010 660ps 1560
1_1011 720ps 1620
1_1100 780ps 1680
1_1101 840ps 1740
1_1110 900ps 1800
1_1111 960ps 1860
The following table of 4-bit values applies to the txdX-skew-ps and
rxdX-skew-ps data pads, and to the rxdv-skew-ps and txen-skew-ps control pads.
Pad Skew Value Delay (ps) Devicetree Value
------------------------------------------------------
0000 -420ps 0
0001 -360ps 60
0010 -300ps 120
0011 -240ps 180
0100 -180ps 240
0101 -120ps 300
0110 -60ps 360
0111 0ps 420
1000 60ps 480
1001 120ps 540
1010 180ps 600
1011 240ps 660
1100 300ps 720
1101 360ps 780
1110 420ps 840
1111 480ps 900
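The devicetree-to-delay mapping above is just a linear offset. A minimal
illustrative C sketch (the helper name and standalone main() are ours, not
part of the driver; the offsets come straight from the tables):

#include <stdio.h>

/* delay_ps = devicetree_value - offset: the offset is 900 ps for the
 * 5-bit clock pads (rxc/txc) and 420 ps for the 4-bit data/control pads. */
static int ksz9031_dt_to_delay_ps(unsigned int dt_value, unsigned int offset_ps)
{
        return (int)dt_value - (int)offset_ps;
}

int main(void)
{
        /* rxc-skew-ps = <600> selects -300 ps (see the 5-bit table) */
        printf("%d\n", ksz9031_dt_to_delay_ps(600, 900));
        /* rxdv-skew-ps = <480> selects +60 ps (see the 4-bit table) */
        printf("%d\n", ksz9031_dt_to_delay_ps(480, 420));
        return 0;
}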
Optional properties:
Maximum value of 1860, default value 900:
@@ -120,11 +185,21 @@ KSZ9131:
Examples:
/* Attach to an Ethernet device with autodetected PHY */
&enet {
rxc-skew-ps = <1800>;
rxdv-skew-ps = <0>;
txc-skew-ps = <1800>;
txen-skew-ps = <0>;
status = "okay";
};
/* Attach to an explicitly-specified PHY */
mdio {
phy0: ethernet-phy@0 {
rxc-skew-ps = <3000>;
rxc-skew-ps = <1800>;
rxdv-skew-ps = <0>;
txc-skew-ps = <3000>;
txc-skew-ps = <1800>;
txen-skew-ps = <0>;
reg = <0>;
};
@@ -133,3 +208,20 @@ Examples:
phy = <&phy0>;
phy-mode = "rgmii-id";
};
References
Micrel ksz9021rl/rn Data Sheet, Revision 1.2. Dated 2/13/2014.
http://www.micrel.com/_PDF/Ethernet/datasheets/ksz9021rl-rn_ds.pdf
Micrel ksz9031rnx Data Sheet, Revision 2.1. Dated 11/20/2014.
http://www.micrel.com/_PDF/Ethernet/datasheets/KSZ9031RNX.pdf
Notes:
Note that a previous version of the Micrel ksz9021rl/rn Data Sheet
was missing extended register 106 (transmit data pad skews), and
incorrectly specified the ps per step as 200ps/step instead of
120ps/step. The latest update to this document reflects the latest
revision of the Micrel specification even though usage in the kernel
still reflects that incorrect document.

@@ -976,9 +976,9 @@ constraints on coalescing parameters and their values.
PAUSE_GET
============
=========
Gets channel counts like ``ETHTOOL_GPAUSE`` ioctl request.
Gets pause frame settings like ``ETHTOOL_GPAUSEPARAM`` ioctl request.
Request contents:
@@ -1007,7 +1007,7 @@ the statistics in the following structure:
Each member has a corresponding attribute defined.
PAUSE_SET
============
=========
Sets pause parameters like ``ETHTOOL_SPAUSEPARAM`` ioctl request.
@@ -1024,7 +1024,7 @@ Request contents:
EEE_GET
=======
Gets channel counts like ``ETHTOOL_GEEE`` ioctl request.
Gets Energy Efficient Ethernet settings like ``ETHTOOL_GEEE`` ioctl request.
Request contents:
@@ -1054,7 +1054,7 @@ first 32 are provided by the ``ethtool_ops`` callback.
EEE_SET
=======
Sets pause parameters like ``ETHTOOL_GEEEPARAM`` ioctl request.
Sets Energy Efficient Ethernet parameters like ``ETHTOOL_SEEE`` ioctl request.
Request contents:
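For reference, a hedged userspace sketch of the legacy ioctls that the
PAUSE_GET and EEE_GET netlink requests mirror ("eth0" is only a placeholder
interface name; error handling is abbreviated):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
        struct ethtool_pauseparam pause = { .cmd = ETHTOOL_GPAUSEPARAM };
        struct ethtool_eee eee = { .cmd = ETHTOOL_GEEE };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);    /* placeholder */

        ifr.ifr_data = (void *)&pause;
        if (fd >= 0 && ioctl(fd, SIOCETHTOOL, &ifr) == 0)
                printf("pause: autoneg=%u rx=%u tx=%u\n",
                       pause.autoneg, pause.rx_pause, pause.tx_pause);

        ifr.ifr_data = (void *)&eee;
        if (fd >= 0 && ioctl(fd, SIOCETHTOOL, &ifr) == 0)
                printf("eee: enabled=%u active=%u\n",
                       eee.eee_enabled, eee.eee_active);

        if (fd >= 0)
                close(fd);
        return 0;
}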

@@ -14850,6 +14850,14 @@ L: linux-arm-msm@vger.kernel.org
S: Maintained
F: drivers/iommu/arm/arm-smmu/qcom_iommu.c
QUALCOMM IPC ROUTER (QRTR) DRIVER
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: include/trace/events/qrtr.h
F: include/uapi/linux/qrtr.h
F: net/qrtr/
QUALCOMM IPCC MAILBOX DRIVER
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
L: linux-arm-msm@vger.kernel.org

@@ -1689,7 +1689,16 @@ emit_jmp:
}
if (image) {
if (unlikely(proglen + ilen > oldproglen)) {
/*
* When populating the image, assert that:
*
* i) We do not write beyond the allocated space, and
* ii) addrs[i] did not change from the prior run, in order
* to validate assumptions made for computing branch
* displacements.
*/
if (unlikely(proglen + ilen > oldproglen ||
proglen + ilen != addrs[i])) {
pr_err("bpf_jit: fatal error\n");
return -EFAULT;
}

@@ -2276,7 +2276,16 @@ notyet:
}
if (image) {
if (unlikely(proglen + ilen > oldproglen)) {
/*
* When populating the image, assert that:
*
* i) We do not write beyond the allocated space, and
* ii) addrs[i] did not change from the prior run, in order
* to validate assumptions made for computing branch
* displacements.
*/
if (unlikely(proglen + ilen > oldproglen ||
proglen + ilen != addrs[i])) {
pr_err("bpf_jit: fatal error\n");
return -EFAULT;
}

@@ -314,6 +314,18 @@ static int mcp251x_spi_trans(struct spi_device *spi, int len)
return ret;
}
static int mcp251x_spi_write(struct spi_device *spi, int len)
{
struct mcp251x_priv *priv = spi_get_drvdata(spi);
int ret;
ret = spi_write(spi, priv->spi_tx_buf, len);
if (ret)
dev_err(&spi->dev, "spi write failed: ret = %d\n", ret);
return ret;
}
static u8 mcp251x_read_reg(struct spi_device *spi, u8 reg)
{
struct mcp251x_priv *priv = spi_get_drvdata(spi);
@@ -361,7 +373,7 @@ static void mcp251x_write_reg(struct spi_device *spi, u8 reg, u8 val)
priv->spi_tx_buf[1] = reg;
priv->spi_tx_buf[2] = val;
mcp251x_spi_trans(spi, 3);
mcp251x_spi_write(spi, 3);
}
static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2)
@@ -373,7 +385,7 @@ static void mcp251x_write_2regs(struct spi_device *spi, u8 reg, u8 v1, u8 v2)
priv->spi_tx_buf[2] = v1;
priv->spi_tx_buf[3] = v2;
mcp251x_spi_trans(spi, 4);
mcp251x_spi_write(spi, 4);
}
static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
@@ -386,7 +398,7 @@ static void mcp251x_write_bits(struct spi_device *spi, u8 reg,
priv->spi_tx_buf[2] = mask;
priv->spi_tx_buf[3] = val;
mcp251x_spi_trans(spi, 4);
mcp251x_spi_write(spi, 4);
}
static u8 mcp251x_read_stat(struct spi_device *spi)
@@ -618,7 +630,7 @@ static void mcp251x_hw_tx_frame(struct spi_device *spi, u8 *buf,
buf[i]);
} else {
memcpy(priv->spi_tx_buf, buf, TXBDAT_OFF + len);
mcp251x_spi_trans(spi, TXBDAT_OFF + len);
mcp251x_spi_write(spi, TXBDAT_OFF + len);
}
}
@@ -650,7 +662,7 @@ static void mcp251x_hw_tx(struct spi_device *spi, struct can_frame *frame,
/* use INSTRUCTION_RTS, to avoid "repeated frame problem" */
priv->spi_tx_buf[0] = INSTRUCTION_RTS(1 << tx_buf_idx);
mcp251x_spi_trans(priv->spi, 1);
mcp251x_spi_write(priv->spi, 1);
}
static void mcp251x_hw_rx_frame(struct spi_device *spi, u8 *buf,
@@ -888,7 +900,7 @@ static int mcp251x_hw_reset(struct spi_device *spi)
mdelay(MCP251X_OST_DELAY_MS);
priv->spi_tx_buf[0] = INSTRUCTION_RESET;
ret = mcp251x_spi_trans(spi, 1);
ret = mcp251x_spi_write(spi, 1);
if (ret)
return ret;

@@ -857,7 +857,7 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
if (dev->adapter->dev_set_bus) {
err = dev->adapter->dev_set_bus(dev, 0);
if (err)
goto lbl_unregister_candev;
goto adap_dev_free;
}
/* get device number early */
@@ -869,6 +869,10 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
return 0;
adap_dev_free:
if (dev->adapter->dev_free)
dev->adapter->dev_free(dev);
lbl_unregister_candev:
unregister_candev(netdev);

@@ -93,8 +93,12 @@
/* GSWIP MII Registers */
#define GSWIP_MII_CFGp(p) (0x2 * (p))
#define GSWIP_MII_CFG_RESET BIT(15)
#define GSWIP_MII_CFG_EN BIT(14)
#define GSWIP_MII_CFG_ISOLATE BIT(13)
#define GSWIP_MII_CFG_LDCLKDIS BIT(12)
#define GSWIP_MII_CFG_RGMII_IBS BIT(8)
#define GSWIP_MII_CFG_RMII_CLK BIT(7)
#define GSWIP_MII_CFG_MODE_MIIP 0x0
#define GSWIP_MII_CFG_MODE_MIIM 0x1
#define GSWIP_MII_CFG_MODE_RMIIP 0x2
@@ -190,6 +194,23 @@
#define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA))
#define GSWIP_MAC_FLEN 0x8C5
#define GSWIP_MAC_CTRL_0p(p) (0x903 + ((p) * 0xC))
#define GSWIP_MAC_CTRL_0_PADEN BIT(8)
#define GSWIP_MAC_CTRL_0_FCS_EN BIT(7)
#define GSWIP_MAC_CTRL_0_FCON_MASK 0x0070
#define GSWIP_MAC_CTRL_0_FCON_AUTO 0x0000
#define GSWIP_MAC_CTRL_0_FCON_RX 0x0010
#define GSWIP_MAC_CTRL_0_FCON_TX 0x0020
#define GSWIP_MAC_CTRL_0_FCON_RXTX 0x0030
#define GSWIP_MAC_CTRL_0_FCON_NONE 0x0040
#define GSWIP_MAC_CTRL_0_FDUP_MASK 0x000C
#define GSWIP_MAC_CTRL_0_FDUP_AUTO 0x0000
#define GSWIP_MAC_CTRL_0_FDUP_EN 0x0004
#define GSWIP_MAC_CTRL_0_FDUP_DIS 0x000C
#define GSWIP_MAC_CTRL_0_GMII_MASK 0x0003
#define GSWIP_MAC_CTRL_0_GMII_AUTO 0x0000
#define GSWIP_MAC_CTRL_0_GMII_MII 0x0001
#define GSWIP_MAC_CTRL_0_GMII_RGMII 0x0002
#define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC))
#define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Length */
@@ -653,16 +674,13 @@ static int gswip_port_enable(struct dsa_switch *ds, int port,
GSWIP_SDMA_PCTRLp(port));
if (!dsa_is_cpu_port(ds, port)) {
u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO |
GSWIP_MDIO_PHY_SPEED_AUTO |
GSWIP_MDIO_PHY_FDUP_AUTO |
GSWIP_MDIO_PHY_FCONTX_AUTO |
GSWIP_MDIO_PHY_FCONRX_AUTO |
(phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK);
u32 mdio_phy = 0;
gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port));
/* Activate MDIO auto polling */
gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0);
if (phydev)
mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
}
return 0;
@@ -675,14 +693,6 @@ static void gswip_port_disable(struct dsa_switch *ds, int port)
if (!dsa_is_user_port(ds, port))
return;
if (!dsa_is_cpu_port(ds, port)) {
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN,
GSWIP_MDIO_PHY_LINK_MASK,
GSWIP_MDIO_PHYp(port));
/* Deactivate MDIO auto polling */
gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0);
}
gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0,
GSWIP_FDMA_PCTRLp(port));
gswip_switch_mask(priv, GSWIP_SDMA_PCTRL_EN, 0,
@@ -794,14 +804,32 @@ static int gswip_setup(struct dsa_switch *ds)
gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2);
gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3);
/* disable PHY auto polling */
/* Deactivate MDIO PHY auto polling. Some PHYs, such as the AR8030, have an
* interoperability problem with this auto polling mechanism because
* their status registers think that the link is in a different state
* than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set
* as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the
* auto polling state machine consider the link being negotiated with
* 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads
* to the switch port being completely dead (RX and TX are both not
* working).
* Also with various other PHY / port combinations (PHY11G GPHY, PHY22F
* GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes
it would work fine for a few minutes to hours and then stop; on
other devices no traffic could be sent or received at all.
* Testing shows that when PHY auto polling is disabled these problems
* go away.
*/
gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0);
/* Configure the MDIO Clock 2.5 MHz */
gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1);
/* Disable the xMII link */
/* Disable the xMII interface and clear its isolation bit */
for (i = 0; i < priv->hw_info->max_ports; i++)
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i);
gswip_mii_mask_cfg(priv,
GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE,
0, i);
/* enable special tag insertion on cpu port */
gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN,
@@ -1450,6 +1478,112 @@ unsupported:
return;
}
static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link)
{
u32 mdio_phy;
if (link)
mdio_phy = GSWIP_MDIO_PHY_LINK_UP;
else
mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN;
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
}
static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed,
phy_interface_t interface)
{
u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0;
switch (speed) {
case SPEED_10:
mdio_phy = GSWIP_MDIO_PHY_SPEED_M10;
if (interface == PHY_INTERFACE_MODE_RMII)
mii_cfg = GSWIP_MII_CFG_RATE_M50;
else
mii_cfg = GSWIP_MII_CFG_RATE_M2P5;
mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
break;
case SPEED_100:
mdio_phy = GSWIP_MDIO_PHY_SPEED_M100;
if (interface == PHY_INTERFACE_MODE_RMII)
mii_cfg = GSWIP_MII_CFG_RATE_M50;
else
mii_cfg = GSWIP_MII_CFG_RATE_M25;
mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
break;
case SPEED_1000:
mdio_phy = GSWIP_MDIO_PHY_SPEED_G1;
mii_cfg = GSWIP_MII_CFG_RATE_M125;
mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII;
break;
}
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port);
gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0,
GSWIP_MAC_CTRL_0p(port));
}
static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex)
{
u32 mac_ctrl_0, mdio_phy;
if (duplex == DUPLEX_FULL) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN;
mdio_phy = GSWIP_MDIO_PHY_FDUP_EN;
} else {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS;
mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS;
}
gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0,
GSWIP_MAC_CTRL_0p(port));
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
}
static void gswip_port_set_pause(struct gswip_priv *priv, int port,
bool tx_pause, bool rx_pause)
{
u32 mac_ctrl_0, mdio_phy;
if (tx_pause && rx_pause) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
GSWIP_MDIO_PHY_FCONRX_EN;
} else if (tx_pause) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
GSWIP_MDIO_PHY_FCONRX_DIS;
} else if (rx_pause) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
GSWIP_MDIO_PHY_FCONRX_EN;
} else {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
GSWIP_MDIO_PHY_FCONRX_DIS;
}
gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK,
mac_ctrl_0, GSWIP_MAC_CTRL_0p(port));
gswip_mdio_mask(priv,
GSWIP_MDIO_PHY_FCONTX_MASK |
GSWIP_MDIO_PHY_FCONRX_MASK,
mdio_phy, GSWIP_MDIO_PHYp(port));
}
static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
unsigned int mode,
const struct phylink_link_state *state)
@@ -1469,6 +1603,9 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
break;
case PHY_INTERFACE_MODE_RMII:
miicfg |= GSWIP_MII_CFG_MODE_RMIIM;
/* Configure the RMII clock as output: */
miicfg |= GSWIP_MII_CFG_RMII_CLK;
break;
case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_RGMII_ID:
@@ -1481,7 +1618,11 @@
"Unsupported interface: %d\n", state->interface);
return;
}
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port);
gswip_mii_mask_cfg(priv,
GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK |
GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS,
miicfg, port);
switch (state->interface) {
case PHY_INTERFACE_MODE_RGMII_ID:
@@ -1506,6 +1647,9 @@ static void gswip_phylink_mac_link_down(struct dsa_switch *ds, int port,
struct gswip_priv *priv = ds->priv;
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port);
if (!dsa_is_cpu_port(ds, port))
gswip_port_set_link(priv, port, false);
}
static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
@@ -1517,6 +1661,13 @@
{
struct gswip_priv *priv = ds->priv;
if (!dsa_is_cpu_port(ds, port)) {
gswip_port_set_link(priv, port, true);
gswip_port_set_speed(priv, port, speed, interface);
gswip_port_set_duplex(priv, port, duplex);
gswip_port_set_pause(priv, port, tx_pause, rx_pause);
}
gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port);
}

@@ -1534,8 +1534,7 @@ pcnet32_probe_pci(struct pci_dev *pdev, const struct pci_device_id *ent)
}
pci_set_master(pdev);
ioaddr = pci_resource_start(pdev, 0);
if (!ioaddr) {
if (!pci_resource_len(pdev, 0)) {
if (pcnet32_debug & NETIF_MSG_PROBE)
pr_err("card has no PCI IO resources, aborting\n");
err = -ENODEV;
@@ -1548,6 +1547,8 @@
pr_err("architecture does not support 32bit PCI busmaster DMA\n");
goto err_disable_dev;
}
ioaddr = pci_resource_start(pdev, 0);
if (!request_region(ioaddr, PCNET32_TOTAL_SIZE, "pcnet32_probe_pci")) {
if (pcnet32_debug & NETIF_MSG_PROBE)
pr_err("io address range already allocated\n");

@@ -180,9 +180,9 @@
#define XGBE_DMA_SYS_AWCR 0x30303030
/* DMA cache settings - PCI device */
#define XGBE_DMA_PCI_ARCR 0x00000003
#define XGBE_DMA_PCI_AWCR 0x13131313
#define XGBE_DMA_PCI_AWARCR 0x00000313
#define XGBE_DMA_PCI_ARCR 0x000f0f0f
#define XGBE_DMA_PCI_AWCR 0x0f0f0f0f
#define XGBE_DMA_PCI_AWARCR 0x00000f0f
/* DMA channel interrupt modes */
#define XGBE_IRQ_MODE_EDGE 0

@@ -172,6 +172,7 @@ static int bcm4908_dma_alloc_buf_descs(struct bcm4908_enet *enet,
err_free_buf_descs:
dma_free_coherent(dev, size, ring->cpu_addr, ring->dma_addr);
ring->cpu_addr = NULL;
return -ENOMEM;
}

@@ -3239,6 +3239,9 @@ static void gem_prog_cmp_regs(struct macb *bp, struct ethtool_rx_flow_spec *fs)
bool cmp_b = false;
bool cmp_c = false;
if (!macb_is_gem(bp))
return;
tp4sp_v = &(fs->h_u.tcp_ip4_spec);
tp4sp_m = &(fs->m_u.tcp_ip4_spec);
@@ -3607,6 +3610,7 @@ static void macb_restore_features(struct macb *bp)
{
struct net_device *netdev = bp->dev;
netdev_features_t features = netdev->features;
struct ethtool_rx_fs_item *item;
/* TX checksum offload */
macb_set_txcsum_feature(bp, features);
@@ -3615,6 +3619,9 @@
macb_set_rxcsum_feature(bp, features);
/* RX Flow Filters */
list_for_each_entry(item, &bp->rx_fs_list.list, list)
gem_prog_cmp_regs(bp, &item->fs);
macb_set_rxflow_feature(bp, features);
}

@@ -1794,11 +1794,25 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
struct cudbg_buffer temp_buff = { 0 };
struct sge_qbase_reg_field *sge_qbase;
struct ireg_buf *ch_sge_dbg;
u8 padap_running = 0;
int i, rc;
u32 size;
rc = cudbg_get_buff(pdbg_init, dbg_buff,
sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase),
&temp_buff);
/* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can
* lead to SGE missing doorbells under heavy traffic. So, only
* collect them when adapter is idle.
*/
for_each_port(padap, i) {
padap_running = netif_running(padap->port[i]);
if (padap_running)
break;
}
size = sizeof(*ch_sge_dbg) * 2;
if (!padap_running)
size += sizeof(*sge_qbase);
rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff);
if (rc)
return rc;
@@ -1820,7 +1834,8 @@
ch_sge_dbg++;
}
if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) {
if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 &&
!padap_running) {
sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg;
/* 1 addr reg SGE_QBASE_INDEX and 4 data reg
* SGE_QBASE_MAP[0-3]

@@ -2090,7 +2090,8 @@ void t4_get_regs(struct adapter *adap, void *buf, size_t buf_size)
0x1190, 0x1194,
0x11a0, 0x11a4,
0x11b0, 0x11b4,
0x11fc, 0x1274,
0x11fc, 0x123c,
0x1254, 0x1274,
0x1280, 0x133c,
0x1800, 0x18fc,
0x3000, 0x302c,

@@ -363,7 +363,11 @@ static void gfar_set_mac_for_addr(struct net_device *dev, int num,
static int gfar_set_mac_addr(struct net_device *dev, void *p)
{
eth_mac_addr(dev, p);
int ret;
ret = eth_mac_addr(dev, p);
if (ret)
return ret;
gfar_set_mac_for_addr(dev, 0, dev->dev_addr);

@@ -3966,7 +3966,6 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
* normalcy is to reset.
* 2. A new reset request from the stack due to timeout
*
* For the first case, error event might not have an ae handle available.
* check if this is a new reset request and we are not here just because
* last reset attempt did not succeed and watchdog hit us again. We will
* know this if last reset request did not occur very recently (watchdog
@@ -3976,14 +3975,14 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
* want to make sure we throttle the reset request. Therefore, we will
* not allow it again before 3*HZ times.
*/
if (!handle)
handle = &hdev->vport[0].nic;
if (time_before(jiffies, (hdev->last_reset_time +
HCLGE_RESET_INTERVAL))) {
mod_timer(&hdev->reset_timer, jiffies + HCLGE_RESET_INTERVAL);
return;
} else if (hdev->default_reset_request) {
}
if (hdev->default_reset_request) {
hdev->reset_level =
hclge_get_reset_level(ae_dev,
&hdev->default_reset_request);
@@ -11211,7 +11210,7 @@ static int hclge_set_channels(struct hnae3_handle *handle, u32 new_tqps_num,
if (ret)
return ret;
/* RSS indirection table has been configuared by user */
/* RSS indirection table has been configured by user */
if (rxfh_configured)
goto out;

@@ -2193,7 +2193,7 @@ static void hclgevf_reset_service_task(struct hclgevf_dev *hdev)
if (test_and_clear_bit(HCLGEVF_RESET_PENDING,
&hdev->reset_state)) {
/* PF has initmated that it is about to reset the hardware.
/* PF has intimated that it is about to reset the hardware.
* We now have to poll & check if hardware has actually
* completed the reset sequence. On hardware reset completion,
* VF needs to reset the client and ae device.
@@ -2624,14 +2624,14 @@ static int hclgevf_ae_start(struct hnae3_handle *handle)
{
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
hclgevf_reset_tqp_stats(handle);
hclgevf_request_link_info(hdev);
hclgevf_update_link_mode(hdev);
clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
return 0;
}
@@ -3497,7 +3497,7 @@ static int hclgevf_set_channels(struct hnae3_handle *handle, u32 new_tqps_num,
if (ret)
return ret;
/* RSS indirection table has been configuared by user */
/* RSS indirection table has been configured by user */
if (rxfh_configured)
goto out;

@@ -142,6 +142,7 @@ enum i40e_state_t {
__I40E_VIRTCHNL_OP_PENDING,
__I40E_RECOVERY_MODE,
__I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */
__I40E_VFS_RELEASING,
/* This must be last as it determines the size of the BITMAP */
__I40E_STATE_SIZE__,
};

@@ -578,6 +578,9 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
case RING_TYPE_XDP:
ring = kmemdup(vsi->xdp_rings[ring_id], sizeof(*ring), GFP_KERNEL);
break;
default:
ring = NULL;
break;
}
if (!ring)
return;

@@ -232,6 +232,8 @@ static void __i40e_add_stat_strings(u8 **p, const struct i40e_stats stats[],
I40E_STAT(struct i40e_vsi, _name, _stat)
#define I40E_VEB_STAT(_name, _stat) \
I40E_STAT(struct i40e_veb, _name, _stat)
#define I40E_VEB_TC_STAT(_name, _stat) \
I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat)
#define I40E_PFC_STAT(_name, _stat) \
I40E_STAT(struct i40e_pfc_stats, _name, _stat)
#define I40E_QUEUE_STAT(_name, _stat) \
@@ -266,11 +268,18 @@ static const struct i40e_stats i40e_gstrings_veb_stats[] = {
I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol),
};
struct i40e_cp_veb_tc_stats {
u64 tc_rx_packets;
u64 tc_rx_bytes;
u64 tc_tx_packets;
u64 tc_tx_bytes;
};
static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = {
I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets),
I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes),
I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets),
I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes),
I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets),
I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes),
I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets),
I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes),
};
static const struct i40e_stats i40e_gstrings_misc_stats[] = {
@@ -1101,6 +1110,7 @@ static int i40e_get_link_ksettings(struct net_device *netdev,
/* Set flow control settings */
ethtool_link_ksettings_add_link_mode(ks, supported, Pause);
ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause);
switch (hw->fc.requested_mode) {
case I40E_FC_FULL:
@@ -2216,6 +2226,29 @@ static int i40e_get_sset_count(struct net_device *netdev, int sset)
}
}
/**
* i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure
* @tc: the TC statistics in VEB structure (veb->tc_stats)
* @i: the index of traffic class in (veb->tc_stats) structure to copy
*
* Copy VEB TC statistics from structure of arrays (veb->tc_stats) to
* one dimensional structure i40e_cp_veb_tc_stats.
* Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC
* statistics for the given TC.
**/
static struct i40e_cp_veb_tc_stats
i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i)
{
struct i40e_cp_veb_tc_stats veb_tc = {
.tc_rx_packets = tc->tc_rx_packets[i],
.tc_rx_bytes = tc->tc_rx_bytes[i],
.tc_tx_packets = tc->tc_tx_packets[i],
.tc_tx_bytes = tc->tc_tx_bytes[i],
};
return veb_tc;
}
/**
* i40e_get_pfc_stats - copy HW PFC statistics to formatted structure
* @pf: the PF device structure
@@ -2300,8 +2333,16 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
i40e_gstrings_veb_stats);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL,
i40e_gstrings_veb_tc_stats);
if (veb_stats) {
struct i40e_cp_veb_tc_stats veb_tc =
i40e_get_veb_tc_stats(&veb->tc_stats, i);
i40e_add_ethtool_stats(&data, &veb_tc,
i40e_gstrings_veb_tc_stats);
} else {
i40e_add_ethtool_stats(&data, NULL,
i40e_gstrings_veb_tc_stats);
}
i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats);
@@ -5439,7 +5480,7 @@ static int i40e_get_module_eeprom(struct net_device *netdev,
status = i40e_aq_get_phy_register(hw,
I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE,
true, addr, offset, &value, NULL);
addr, true, offset, &value, NULL);
if (status)
return -EIO;
data[i] = value;

@@ -2560,8 +2560,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
i40e_stat_str(hw, aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status));
} else {
dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n",
vsi->netdev->name,
dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
cur_multipromisc ? "entering" : "leaving");
}
}
@@ -6738,9 +6737,9 @@ out:
set_bit(__I40E_CLIENT_SERVICE_REQUESTED, pf->state);
set_bit(__I40E_CLIENT_L2_CHANGE, pf->state);
}
/* registers are set, lets apply */
if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB)
ret = i40e_hw_set_dcb_config(pf, new_cfg);
/* registers are set, lets apply */
if (pf->hw_features & I40E_HW_USE_SET_LLDP_MIB)
ret = i40e_hw_set_dcb_config(pf, new_cfg);
}
err:
@@ -10573,12 +10572,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
goto end_core_reset;
}
if (!lock_acquired)
rtnl_lock();
ret = i40e_setup_pf_switch(pf, reinit);
if (ret)
goto end_unlock;
#ifdef CONFIG_I40E_DCB
/* Enable FW to write a default DCB config on link-up
* unless I40E_FLAG_TC_MQPRIO was enabled or DCB
@@ -10593,7 +10586,7 @@
i40e_aq_set_dcb_parameters(hw, false, NULL);
dev_warn(&pf->pdev->dev,
"DCB is not supported for X710-T*L 2.5/5G speeds\n");
pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
} else {
i40e_aq_set_dcb_parameters(hw, true, NULL);
ret = i40e_init_pf_dcb(pf);
@@ -10607,6 +10600,11 @@
}
#endif /* CONFIG_I40E_DCB */
if (!lock_acquired)
rtnl_lock();
ret = i40e_setup_pf_switch(pf, reinit);
if (ret)
goto end_unlock;
/* The driver only wants link up/down and module qualification
* reports from firmware. Note the negative logic.
@@ -15140,12 +15138,16 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
* in order to register the netdev
*/
v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN);
if (v_idx < 0)
if (v_idx < 0) {
err = v_idx;
goto err_switch_setup;
}
pf->lan_vsi = v_idx;
vsi = pf->vsi[v_idx];
if (!vsi)
if (!vsi) {
err = -EFAULT;
goto err_switch_setup;
}
vsi->alloc_queue_pairs = 1;
err = i40e_config_netdev(vsi);
if (err)

@@ -2295,8 +2295,7 @@ int i40e_xmit_xdp_tx_ring(struct xdp_buff *xdp, struct i40e_ring *xdp_ring)
* @rx_ring: Rx ring being processed
* @xdp: XDP buffer containing the frame
**/
static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
struct xdp_buff *xdp)
static int i40e_run_xdp(struct i40e_ring *rx_ring, struct xdp_buff *xdp)
{
int err, result = I40E_XDP_PASS;
struct i40e_ring *xdp_ring;
@@ -2335,7 +2334,7 @@ static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
}
xdp_out:
rcu_read_unlock();
return ERR_PTR(-result);
return result;
}
/**
@@ -2448,6 +2447,7 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
unsigned int xdp_xmit = 0;
bool failure = false;
struct xdp_buff xdp;
int xdp_res = 0;
#if (PAGE_SIZE < 8192)
frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
@@ -2513,12 +2513,10 @@
/* At larger PAGE_SIZE, frame_sz depend on len size */
xdp.frame_sz = i40e_rx_frame_truesize(rx_ring, size);
#endif
skb = i40e_run_xdp(rx_ring, &xdp);
xdp_res = i40e_run_xdp(rx_ring, &xdp);
}
if (IS_ERR(skb)) {
unsigned int xdp_res = -PTR_ERR(skb);
if (xdp_res) {
if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
xdp_xmit |= xdp_res;
i40e_rx_buffer_flip(rx_ring, rx_buffer, size);

@@ -137,6 +137,7 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
**/
static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
{
struct i40e_pf *pf = vf->pf;
int i;
i40e_vc_notify_vf_reset(vf);
@@ -147,6 +148,11 @@ static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
* ensure a reset.
*/
for (i = 0; i < 20; i++) {
/* If the PF is in the VF-releasing state, a VF reset is
* impossible, so leave it.
*/
if (test_bit(__I40E_VFS_RELEASING, pf->state))
return;
if (i40e_reset_vf(vf, false))
return;
usleep_range(10000, 20000);
@@ -1574,6 +1580,8 @@ void i40e_free_vfs(struct i40e_pf *pf)
if (!pf->vf)
return;
set_bit(__I40E_VFS_RELEASING, pf->state);
while (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
usleep_range(1000, 2000);
@@ -1631,6 +1639,7 @@
}
}
clear_bit(__I40E_VF_DISABLE, pf->state);
clear_bit(__I40E_VFS_RELEASING, pf->state);
}
#ifdef CONFIG_PCI_IOV

@@ -471,7 +471,7 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, descs, budget);
if (!nb_pkts)
return false;
return true;
if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
nb_processed = xdp_ring->count - xdp_ring->next_to_use;
@@ -488,7 +488,7 @@
i40e_update_tx_stats(xdp_ring, nb_pkts, total_bytes);
return true;
return nb_pkts < budget;
}
/**

@@ -196,7 +196,6 @@ enum ice_state {
__ICE_NEEDS_RESTART,
__ICE_PREPARED_FOR_RESET, /* set by driver when prepared */
__ICE_RESET_OICR_RECV, /* set by driver after rcv reset OICR */
__ICE_DCBNL_DEVRESET, /* set by dcbnl devreset */
__ICE_PFR_REQ, /* set by driver and peers */
__ICE_CORER_REQ, /* set by driver and peers */
__ICE_GLOBR_REQ, /* set by driver and peers */
@@ -624,7 +623,7 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset);
void ice_print_link_msg(struct ice_vsi *vsi, bool isup);
const char *ice_stat_str(enum ice_status stat_err);
const char *ice_aq_str(enum ice_aq_err aq_err);
bool ice_is_wol_supported(struct ice_pf *pf);
bool ice_is_wol_supported(struct ice_hw *hw);
int
ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add,
bool is_tun);
@@ -642,6 +641,7 @@ int ice_fdir_create_dflt_rules(struct ice_pf *pf);
int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
struct ice_rq_event_info *event);
int ice_open(struct net_device *netdev);
int ice_open_internal(struct net_device *netdev);
int ice_stop(struct net_device *netdev);
void ice_service_task_schedule(struct ice_pf *pf);

@@ -717,8 +717,8 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
if (!data) {
data = devm_kcalloc(ice_hw_to_dev(hw),
sizeof(*data),
ICE_AQC_FW_LOG_ID_MAX,
sizeof(*data),
GFP_KERNEL);
if (!data)
return ICE_ERR_NO_MEMORY;

@@ -31,8 +31,8 @@ enum ice_ctl_q {
ICE_CTL_Q_MAILBOX,
};
/* Control Queue timeout settings - max delay 250ms */
#define ICE_CTL_Q_SQ_CMD_TIMEOUT 2500 /* Count 2500 times */
/* Control Queue timeout settings - max delay 1s */
#define ICE_CTL_Q_SQ_CMD_TIMEOUT 10000 /* Count 10000 times */
#define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */
#define ICE_CTL_Q_ADMIN_INIT_TIMEOUT 10 /* Count 10 times */
#define ICE_CTL_Q_ADMIN_INIT_MSEC 100 /* Check every 100msec */

@@ -738,22 +738,27 @@ ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
/**
* ice_cee_to_dcb_cfg
* @cee_cfg: pointer to CEE configuration struct
* @dcbcfg: DCB configuration struct
* @pi: port information structure
*
* Convert CEE configuration from firmware to DCB configuration
*/
static void
ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
struct ice_dcbx_cfg *dcbcfg)
struct ice_port_info *pi)
{
u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
u8 i, err, sync, oper, app_index, ice_app_sel_type;
u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
u16 ice_app_prot_id_type;
/* CEE PG data to ETS config */
dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE;
dcbcfg->tlv_status = tlv_status;
/* CEE PG data */
dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
/* Note that the FW creates the oper_prio_tc nibbles reversed
@@ -780,10 +785,16 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
}
}
/* CEE PFC data to ETS config */
/* CEE PFC data */
dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
/* CEE APP TLV data */
if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg;
else
cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg;
app_index = 0;
for (i = 0; i < 3; i++) {
if (i == 0) {
@@ -802,6 +813,18 @@
ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
ice_app_sel_type = ICE_APP_SEL_TCPIP;
ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
for (j = 0; j < cmp_dcbcfg->numapps; j++) {
u16 prot_id = cmp_dcbcfg->app[j].prot_id;
u8 sel = cmp_dcbcfg->app[j].selector;
if (sel == ICE_APP_SEL_TCPIP &&
(prot_id == ICE_APP_PROT_ID_ISCSI ||
prot_id == ICE_APP_PROT_ID_ISCSI_860)) {
ice_app_prot_id_type = prot_id;
break;
}
}
} else {
/* FIP APP */
ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
@@ -892,11 +915,8 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
if (!ret) {
/* CEE mode */
dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
dcbx_cfg->tlv_status = le32_to_cpu(cee_cfg.tlv_status);
ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
ice_cee_to_dcb_cfg(&cee_cfg, pi);
} else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
/* CEE mode not enabled try querying IEEE data */
dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;

@@ -18,12 +18,10 @@ static void ice_dcbnl_devreset(struct net_device *netdev)
while (ice_is_reset_in_progress(pf->state))
usleep_range(1000, 2000);
set_bit(__ICE_DCBNL_DEVRESET, pf->state);
dev_close(netdev);
netdev_state_change(netdev);
dev_open(netdev, NULL);
netdev_state_change(netdev);
clear_bit(__ICE_DCBNL_DEVRESET, pf->state);
}
/**

@@ -3472,7 +3472,7 @@ static void ice_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
netdev_warn(netdev, "Wake on LAN is not supported on this interface!\n");
/* Get WoL settings based on the HW capability */
if (ice_is_wol_supported(pf)) {
if (ice_is_wol_supported(&pf->hw)) {
wol->supported = WAKE_MAGIC;
wol->wolopts = pf->wol_ena ? WAKE_MAGIC : 0;
} else {
@@ -3492,7 +3492,7 @@ static int ice_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(pf))
if (vsi->type != ICE_VSI_PF || !ice_is_wol_supported(&pf->hw))
return -EOPNOTSUPP;
/* only magic packet is supported */

@@ -2620,7 +2620,7 @@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked)
if (!locked)
rtnl_lock();
err = ice_open(vsi->netdev);
err = ice_open_internal(vsi->netdev);
if (!locked)
rtnl_unlock();
@@ -2649,7 +2649,7 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
if (!locked)
rtnl_lock();
ice_stop(vsi->netdev);
ice_vsi_close(vsi);
if (!locked)
rtnl_unlock();
@@ -3078,7 +3078,6 @@ err_vsi:
bool ice_is_reset_in_progress(unsigned long *state)
{
return test_bit(__ICE_RESET_OICR_RECV, state) ||
test_bit(__ICE_DCBNL_DEVRESET, state) ||
test_bit(__ICE_PFR_REQ, state) ||
test_bit(__ICE_CORER_REQ, state) ||
test_bit(__ICE_GLOBR_REQ, state);

@@ -3537,15 +3537,14 @@ static int ice_init_interrupt_scheme(struct ice_pf *pf)
}
/**
* ice_is_wol_supported - get NVM state of WoL
* @pf: board private structure
* ice_is_wol_supported - check if WoL is supported
* @hw: pointer to hardware info
*
* Check if WoL is supported based on the HW configuration.
* Returns true if NVM supports and enables WoL for this port, false otherwise
*/
bool ice_is_wol_supported(struct ice_pf *pf)
bool ice_is_wol_supported(struct ice_hw *hw)
{
struct ice_hw *hw = &pf->hw;
u16 wol_ctrl;
/* A bit set to 1 in the NVM Software Reserved Word 2 (WoL control
@@ -3554,7 +3553,7 @@ bool ice_is_wol_supported(struct ice_pf *pf)
if (ice_read_sr_word(hw, ICE_SR_NVM_WOL_CFG, &wol_ctrl))
return false;
return !(BIT(hw->pf_id) & wol_ctrl);
return !(BIT(hw->port_info->lport) & wol_ctrl);
}
/**
@@ -4192,28 +4191,25 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
goto err_send_version_unroll;
}
/* not a fatal error if this fails */
err = ice_init_nvm_phy_type(pf->hw.port_info);
if (err) {
if (err)
dev_err(dev, "ice_init_nvm_phy_type failed: %d\n", err);
goto err_send_version_unroll;
}
/* not a fatal error if this fails */
err = ice_update_link_info(pf->hw.port_info);
if (err) {
if (err)
dev_err(dev, "ice_update_link_info failed: %d\n", err);
goto err_send_version_unroll;
}
ice_init_link_dflt_override(pf->hw.port_info);
/* if media available, initialize PHY settings */
if (pf->hw.port_info->phy.link_info.link_info &
ICE_AQ_MEDIA_AVAILABLE) {
/* not a fatal error if this fails */
err = ice_init_phy_user_cfg(pf->hw.port_info);
if (err) {
if (err)
dev_err(dev, "ice_init_phy_user_cfg failed: %d\n", err);
goto err_send_version_unroll;
}
if (!test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, pf->flags)) {
struct ice_vsi *vsi = ice_get_main_vsi(pf);
@@ -4568,6 +4564,7 @@ static int __maybe_unused ice_suspend(struct device *dev)
continue;
ice_vsi_free_q_vectors(pf->vsi[v]);
}
ice_free_cpu_rx_rmap(ice_get_main_vsi(pf));
ice_clear_interrupt_scheme(pf);
pci_save_state(pdev);
@@ -6635,6 +6632,28 @@ static void ice_tx_timeout(struct net_device *netdev, unsigned int txqueue)
* Returns 0 on success, negative value on failure
*/
int ice_open(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_pf *pf = np->vsi->back;
if (ice_is_reset_in_progress(pf->state)) {
netdev_err(netdev, "can't open net device while reset is in progress");
return -EBUSY;
}
return ice_open_internal(netdev);
}
/**
* ice_open_internal - Called when a network interface becomes active
* @netdev: network interface device structure
*
* Internal ice_open implementation. Should not be used directly except by
* ice_open and the reset handling routine.
*
* Returns 0 on success, negative value on failure
*/
int ice_open_internal(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
@@ -6715,6 +6734,12 @@ int ice_stop(struct net_device *netdev)
{
struct ice_netdev_priv *np = netdev_priv(netdev);
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
if (ice_is_reset_in_progress(pf->state)) {
netdev_err(netdev, "can't stop net device while reset is in progress");
return -EBUSY;
}
ice_vsi_close(vsi);

View File

@ -1238,6 +1238,9 @@ ice_add_update_vsi_list(struct ice_hw *hw,
ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
vsi_list_id);
if (!m_entry->vsi_list_info)
return ICE_ERR_NO_MEMORY;
/* If this entry was large action then the large action needs
* to be updated to point to FWD to VSI list
*/
@@ -2220,6 +2223,7 @@ ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
fm_entry->fltr_info.vsi_handle == vsi_handle) ||
(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
fm_entry->vsi_list_info &&
(test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map))));
}
@@ -2292,14 +2296,12 @@
return ICE_ERR_PARAM;
list_for_each_entry(fm_entry, lkup_list_head, list_entry) {
struct ice_fltr_info *fi;
fi = &fm_entry->fltr_info;
if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
if (!ice_vsi_uses_fltr(fm_entry, vsi_handle))
continue;
status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
vsi_list_head, fi);
vsi_list_head,
&fm_entry->fltr_info);
if (status)
return status;
}
@@ -2622,7 +2624,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
&remove_list_head);
mutex_unlock(rule_lock);
if (status)
return;
goto free_fltr_list;
switch (lkup) {
case ICE_SW_LKUP_MAC:
@@ -2645,6 +2647,7 @@
break;
}
free_fltr_list:
list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) {
list_del(&fm_entry->list_entry);
devm_kfree(ice_hw_to_dev(hw), fm_entry);

@@ -535,6 +535,7 @@ struct ice_dcb_app_priority_table {
#define ICE_TLV_STATUS_ERR 0x4
#define ICE_APP_PROT_ID_FCOE 0x8906
#define ICE_APP_PROT_ID_ISCSI 0x0cbc
#define ICE_APP_PROT_ID_ISCSI_860 0x035c
#define ICE_APP_PROT_ID_FIP 0x8914
#define ICE_APP_SEL_ETHTYPE 0x1
#define ICE_APP_SEL_TCPIP 0x2

@@ -191,12 +191,12 @@ static bool is_ib_supported(struct mlx5_core_dev *dev)
}
enum {
MLX5_INTERFACE_PROTOCOL_ETH_REP,
MLX5_INTERFACE_PROTOCOL_ETH,
MLX5_INTERFACE_PROTOCOL_ETH_REP,
MLX5_INTERFACE_PROTOCOL_IB,
MLX5_INTERFACE_PROTOCOL_IB_REP,
MLX5_INTERFACE_PROTOCOL_MPIB,
MLX5_INTERFACE_PROTOCOL_IB,
MLX5_INTERFACE_PROTOCOL_VNET,
};

@@ -516,6 +516,7 @@ struct mlx5e_icosq {
struct mlx5_wq_cyc wq;
void __iomem *uar_map;
u32 sqn;
u16 reserved_room;
unsigned long state;
/* control path */

@@ -185,6 +185,28 @@ mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry)
return !!(entry->tuple_nat_node.next);
}
static int
mlx5_get_label_mapping(struct mlx5_tc_ct_priv *ct_priv,
u32 *labels, u32 *id)
{
if (!memchr_inv(labels, 0, sizeof(u32) * 4)) {
*id = 0;
return 0;
}
if (mapping_add(ct_priv->labels_mapping, labels, id))
return -EOPNOTSUPP;
return 0;
}
static void
mlx5_put_label_mapping(struct mlx5_tc_ct_priv *ct_priv, u32 id)
{
if (id)
mapping_remove(ct_priv->labels_mapping, id);
}
static int
mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule)
{
@@ -436,7 +458,7 @@ mlx5_tc_ct_entry_del_rule(struct mlx5_tc_ct_priv *ct_priv,
mlx5_tc_rule_delete(netdev_priv(ct_priv->netdev), zone_rule->rule, attr);
mlx5e_mod_hdr_detach(ct_priv->dev,
ct_priv->mod_hdr_tbl, zone_rule->mh);
mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
kfree(attr);
}
@@ -639,8 +661,8 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
if (!meta)
return -EOPNOTSUPP;
err = mapping_add(ct_priv->labels_mapping, meta->ct_metadata.labels,
&attr->ct_attr.ct_labels_id);
err = mlx5_get_label_mapping(ct_priv, meta->ct_metadata.labels,
&attr->ct_attr.ct_labels_id);
if (err)
return -EOPNOTSUPP;
if (nat) {
@@ -677,7 +699,7 @@
err_mapping:
dealloc_mod_hdr_actions(&mod_acts);
mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
return err;
}
@@ -745,7 +767,7 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
err_rule:
mlx5e_mod_hdr_detach(ct_priv->dev,
ct_priv->mod_hdr_tbl, zone_rule->mh);
mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
mlx5_put_label_mapping(ct_priv, attr->ct_attr.ct_labels_id);
err_mod_hdr:
kfree(attr);
err_attr:
@@ -1197,7 +1219,7 @@ void mlx5_tc_ct_match_del(struct mlx5_tc_ct_priv *priv, struct mlx5_ct_attr *ct_
if (!priv || !ct_attr->ct_labels_id)
return;
mapping_remove(priv->labels_mapping, ct_attr->ct_labels_id);
mlx5_put_label_mapping(priv, ct_attr->ct_labels_id);
}
int
@@ -1280,7 +1302,7 @@ mlx5_tc_ct_match_add(struct mlx5_tc_ct_priv *priv,
ct_labels[1] = key->ct_labels[1] & mask->ct_labels[1];
ct_labels[2] = key->ct_labels[2] & mask->ct_labels[2];
ct_labels[3] = key->ct_labels[3] & mask->ct_labels[3];
if (mapping_add(priv->labels_mapping, ct_labels, &ct_attr->ct_labels_id))
if (mlx5_get_label_mapping(priv, ct_labels, &ct_attr->ct_labels_id))
return -EOPNOTSUPP;
mlx5e_tc_match_to_reg_match(spec, LABELS_TO_REG, ct_attr->ct_labels_id,
MLX5_CT_LABELS_MASK);

@@ -21,6 +21,11 @@ enum {
MLX5E_TC_TUNNEL_TYPE_MPLSOUDP,
};
struct mlx5e_encap_key {
const struct ip_tunnel_key *ip_tun_key;
struct mlx5e_tc_tunnel *tc_tunnel;
};
struct mlx5e_tc_tunnel {
int tunnel_type;
enum mlx5_flow_match_level match_level;
@@ -44,6 +49,8 @@ struct mlx5e_tc_tunnel {
struct flow_cls_offload *f,
void *headers_c,
void *headers_v);
bool (*encap_info_equal)(struct mlx5e_encap_key *a,
struct mlx5e_encap_key *b);
};
extern struct mlx5e_tc_tunnel vxlan_tunnel;
@@ -101,6 +108,9 @@ int mlx5e_tc_tun_parse_udp_ports(struct mlx5e_priv *priv,
void *headers_c,
void *headers_v);
bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a,
struct mlx5e_encap_key *b);
#endif /* CONFIG_MLX5_ESWITCH */
#endif //__MLX5_EN_TC_TUNNEL_H__

@@ -476,16 +476,11 @@ void mlx5e_detach_decap(struct mlx5e_priv *priv,
mlx5e_decap_dealloc(priv, d);
}
struct encap_key {
const struct ip_tunnel_key *ip_tun_key;
struct mlx5e_tc_tunnel *tc_tunnel;
};
static int cmp_encap_info(struct encap_key *a,
struct encap_key *b)
bool mlx5e_tc_tun_encap_info_equal_generic(struct mlx5e_encap_key *a,
struct mlx5e_encap_key *b)
{
return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) ||
a->tc_tunnel->tunnel_type != b->tc_tunnel->tunnel_type;
return memcmp(a->ip_tun_key, b->ip_tun_key, sizeof(*a->ip_tun_key)) == 0 &&
a->tc_tunnel->tunnel_type == b->tc_tunnel->tunnel_type;
}
static int cmp_decap_info(struct mlx5e_decap_key *a,
@@ -494,7 +489,7 @@ static int cmp_decap_info(struct mlx5e_decap_key *a,
return memcmp(&a->key, &b->key, sizeof(b->key));
}
static int hash_encap_info(struct encap_key *key)
static int hash_encap_info(struct mlx5e_encap_key *key)
{
return jhash(key->ip_tun_key, sizeof(*key->ip_tun_key),
key->tc_tunnel->tunnel_type);
@@ -516,18 +511,18 @@
}
static struct mlx5e_encap_entry *
mlx5e_encap_get(struct mlx5e_priv *priv, struct encap_key *key,
mlx5e_encap_get(struct mlx5e_priv *priv, struct mlx5e_encap_key *key,
uintptr_t hash_key)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5e_encap_key e_key;
struct mlx5e_encap_entry *e;
struct encap_key e_key;
hash_for_each_possible_rcu(esw->offloads.encap_tbl, e,
encap_hlist, hash_key) {
e_key.ip_tun_key = &e->tun_info->key;
e_key.tc_tunnel = e->tunnel;
if (!cmp_encap_info(&e_key, key) &&
if (e->tunnel->encap_info_equal(&e_key, key) &&
mlx5e_encap_take(e))
return e;
}
@ -694,8 +689,8 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
struct mlx5_flow_attr *attr = flow->attr;
const struct ip_tunnel_info *tun_info;
unsigned long tbl_time_before = 0;
struct encap_key key;
struct mlx5e_encap_entry *e;
struct mlx5e_encap_key key;
bool entry_created = false;
unsigned short family;
uintptr_t hash_key;

View File

@ -329,6 +329,34 @@ static int mlx5e_tc_tun_parse_geneve(struct mlx5e_priv *priv,
return mlx5e_tc_tun_parse_geneve_options(priv, spec, f);
}
static bool mlx5e_tc_tun_encap_info_equal_geneve(struct mlx5e_encap_key *a,
struct mlx5e_encap_key *b)
{
struct ip_tunnel_info *a_info;
struct ip_tunnel_info *b_info;
bool a_has_opts, b_has_opts;
if (!mlx5e_tc_tun_encap_info_equal_generic(a, b))
return false;
a_has_opts = !!(a->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT);
b_has_opts = !!(b->ip_tun_key->tun_flags & TUNNEL_GENEVE_OPT);
/* keys are equal when both don't have any options attached */
if (!a_has_opts && !b_has_opts)
return true;
if (a_has_opts != b_has_opts)
return false;
/* geneve options stored in memory next to ip_tunnel_info struct */
a_info = container_of(a->ip_tun_key, struct ip_tunnel_info, key);
b_info = container_of(b->ip_tun_key, struct ip_tunnel_info, key);
return a_info->options_len == b_info->options_len &&
memcmp(a_info + 1, b_info + 1, a_info->options_len) == 0;
}
struct mlx5e_tc_tunnel geneve_tunnel = {
.tunnel_type = MLX5E_TC_TUNNEL_TYPE_GENEVE,
.match_level = MLX5_MATCH_L4,
@ -338,4 +366,5 @@ struct mlx5e_tc_tunnel geneve_tunnel = {
.generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_geneve,
.parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_geneve,
.parse_tunnel = mlx5e_tc_tun_parse_geneve,
.encap_info_equal = mlx5e_tc_tun_encap_info_equal_geneve,
};
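
The Geneve-specific comparator above exists because two encap entries can share an identical generic key yet carry different TLV options, which live directly behind the ip_tunnel_info allocation. A minimal userspace model of the same comparison — the struct layout here is invented for illustration:

#include <stdbool.h>
#include <string.h>

#define OPT_FLAG 0x1	/* stand-in for TUNNEL_GENEVE_OPT */

struct tun_key {
	unsigned long long id;
	unsigned short flags;
};

struct tun_info {
	struct tun_key key;
	unsigned short options_len;
	/* option bytes are stored immediately after this struct */
};

static bool encap_info_equal_generic(const struct tun_info *a,
				     const struct tun_info *b)
{
	return memcmp(&a->key, &b->key, sizeof(a->key)) == 0;
}

static bool encap_info_equal_options(const struct tun_info *a,
				     const struct tun_info *b)
{
	bool a_opts = a->key.flags & OPT_FLAG;
	bool b_opts = b->key.flags & OPT_FLAG;

	if (!encap_info_equal_generic(a, b))
		return false;
	if (!a_opts && !b_opts)
		return true;		/* equal, neither carries options */
	if (a_opts != b_opts)
		return false;
	/* also compare the trailing option bytes */
	return a->options_len == b->options_len &&
	       memcmp(a + 1, b + 1, a->options_len) == 0;
}

int main(void)
{
	struct { struct tun_info info; unsigned char opts[4]; } x = { 0 }, y = { 0 };

	x.info.key.flags = y.info.key.flags = OPT_FLAG;
	x.info.options_len = y.info.options_len = sizeof(x.opts);
	return !encap_info_equal_options(&x.info, &y.info);
}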

View File

@ -94,4 +94,5 @@ struct mlx5e_tc_tunnel gre_tunnel = {
.generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_gretap,
.parse_udp_ports = NULL,
.parse_tunnel = mlx5e_tc_tun_parse_gretap,
.encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
};

View File

@ -131,4 +131,5 @@ struct mlx5e_tc_tunnel mplsoudp_tunnel = {
.generate_ip_tun_hdr = generate_ip_tun_hdr,
.parse_udp_ports = parse_udp_ports,
.parse_tunnel = parse_tunnel,
.encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
};

View File

@ -150,4 +150,5 @@ struct mlx5e_tc_tunnel vxlan_tunnel = {
.generate_ip_tun_hdr = mlx5e_gen_ip_tunnel_header_vxlan,
.parse_udp_ports = mlx5e_tc_tun_parse_udp_ports_vxlan,
.parse_tunnel = mlx5e_tc_tun_parse_vxlan,
.encap_info_equal = mlx5e_tc_tun_encap_info_equal_generic,
};

View File

@ -441,4 +441,10 @@ static inline u16 mlx5e_stop_room_for_wqe(u16 wqe_size)
return wqe_size * 2 - 1;
}
static inline bool mlx5e_icosq_can_post_wqe(struct mlx5e_icosq *sq, u16 wqe_size)
{
u16 room = sq->reserved_room + mlx5e_stop_room_for_wqe(wqe_size);
return mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room);
}
#endif
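
For context: reserved_room (set by the ktls code via param->stop_room further down) keeps permanent headroom in the ICOSQ so resync WQEs cannot be starved by other users of the queue. A rough userspace model of the room check, assuming free-running 16-bit producer/consumer counters — a sketch, not the driver's exact counter arithmetic:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct icosq {
	uint16_t cc;		/* consumer counter */
	uint16_t pc;		/* producer counter */
	uint16_t wq_size;	/* ring size in WQEBBs */
	uint16_t reserved_room;
};

/* room for one WQE plus worst-case NOP padding (wqe_size - 1) at the wrap */
static uint16_t stop_room_for_wqe(uint16_t wqe_size)
{
	return wqe_size * 2 - 1;
}

static bool wqc_has_room_for(const struct icosq *sq, uint16_t n)
{
	uint16_t free_slots = sq->cc + sq->wq_size - sq->pc;

	return free_slots >= n;
}

static bool icosq_can_post_wqe(const struct icosq *sq, uint16_t wqe_size)
{
	uint16_t room = sq->reserved_room + stop_room_for_wqe(wqe_size);

	return wqc_has_room_for(sq, room);
}

int main(void)
{
	struct icosq sq = { .cc = 0, .pc = 60, .wq_size = 64, .reserved_room = 2 };

	printf("%d\n", icosq_can_post_wqe(&sq, 1));	/* 4 free >= 2+1: yes */
	return 0;
}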

View File

@ -46,7 +46,8 @@ struct mlx5e_ktls_offload_context_rx {
struct tls12_crypto_info_aes_gcm_128 crypto_info;
struct accel_rule rule;
struct sock *sk;
struct mlx5e_rq_stats *stats;
struct mlx5e_rq_stats *rq_stats;
struct mlx5e_tls_sw_stats *sw_stats;
struct completion add_ctx;
u32 tirn;
u32 key_id;
@ -137,11 +138,10 @@ post_static_params(struct mlx5e_icosq *sq,
{
struct mlx5e_set_tls_static_params_wqe *wqe;
struct mlx5e_icosq_wqe_info wi;
u16 pi, num_wqebbs, room;
u16 pi, num_wqebbs;
num_wqebbs = MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS;
room = mlx5e_stop_room_for_wqe(num_wqebbs);
if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room)))
if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs)))
return ERR_PTR(-ENOSPC);
pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs);
@ -168,11 +168,10 @@ post_progress_params(struct mlx5e_icosq *sq,
{
struct mlx5e_set_tls_progress_params_wqe *wqe;
struct mlx5e_icosq_wqe_info wi;
u16 pi, num_wqebbs, room;
u16 pi, num_wqebbs;
num_wqebbs = MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS;
room = mlx5e_stop_room_for_wqe(num_wqebbs);
if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room)))
if (unlikely(!mlx5e_icosq_can_post_wqe(sq, num_wqebbs)))
return ERR_PTR(-ENOSPC);
pi = mlx5e_icosq_get_next_pi(sq, num_wqebbs);
@ -218,7 +217,7 @@ unlock:
return err;
err_out:
priv_rx->stats->tls_resync_req_skip++;
priv_rx->rq_stats->tls_resync_req_skip++;
err = PTR_ERR(cseg);
complete(&priv_rx->add_ctx);
goto unlock;
@ -277,17 +276,15 @@ resync_post_get_progress_params(struct mlx5e_icosq *sq,
buf->priv_rx = priv_rx;
BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1);
spin_lock_bh(&sq->channel->async_icosq_lock);
if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) {
if (unlikely(!mlx5e_icosq_can_post_wqe(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS))) {
spin_unlock_bh(&sq->channel->async_icosq_lock);
err = -ENOSPC;
goto err_dma_unmap;
}
pi = mlx5e_icosq_get_next_pi(sq, 1);
pi = mlx5e_icosq_get_next_pi(sq, MLX5E_KTLS_GET_PROGRESS_WQEBBS);
wqe = MLX5E_TLS_FETCH_GET_PROGRESS_PARAMS_WQE(sq, pi);
#define GET_PSV_DS_CNT (DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS))
@ -307,7 +304,7 @@ resync_post_get_progress_params(struct mlx5e_icosq *sq,
wi = (struct mlx5e_icosq_wqe_info) {
.wqe_type = MLX5E_ICOSQ_WQE_GET_PSV_TLS,
.num_wqebbs = 1,
.num_wqebbs = MLX5E_KTLS_GET_PROGRESS_WQEBBS,
.tls_get_params.buf = buf,
};
icosq_fill_wi(sq, pi, &wi);
@ -322,7 +319,7 @@ err_dma_unmap:
err_free:
kfree(buf);
err_out:
priv_rx->stats->tls_resync_req_skip++;
priv_rx->rq_stats->tls_resync_req_skip++;
return err;
}
@ -378,13 +375,13 @@ static int resync_handle_seq_match(struct mlx5e_ktls_offload_context_rx *priv_rx
cseg = post_static_params(sq, priv_rx);
if (IS_ERR(cseg)) {
priv_rx->stats->tls_resync_res_skip++;
priv_rx->rq_stats->tls_resync_res_skip++;
err = PTR_ERR(cseg);
goto unlock;
}
/* Do not increment priv_rx refcnt, CQE handling is empty */
mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg);
priv_rx->stats->tls_resync_res_ok++;
priv_rx->rq_stats->tls_resync_res_ok++;
unlock:
spin_unlock_bh(&c->async_icosq_lock);
@ -420,13 +417,13 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi,
auth_state = MLX5_GET(tls_progress_params, ctx, auth_state);
if (tracker_state != MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING ||
auth_state != MLX5E_TLS_PROGRESS_PARAMS_AUTH_STATE_NO_OFFLOAD) {
priv_rx->stats->tls_resync_req_skip++;
priv_rx->rq_stats->tls_resync_req_skip++;
goto out;
}
hw_seq = MLX5_GET(tls_progress_params, ctx, hw_resync_tcp_sn);
tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq));
priv_rx->stats->tls_resync_req_end++;
priv_rx->rq_stats->tls_resync_req_end++;
out:
mlx5e_ktls_priv_rx_put(priv_rx);
dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
@ -609,7 +606,8 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
priv_rx->rxq = rxq;
priv_rx->sk = sk;
priv_rx->stats = &priv->channel_stats[rxq].rq;
priv_rx->rq_stats = &priv->channel_stats[rxq].rq;
priv_rx->sw_stats = &priv->tls->sw_stats;
mlx5e_set_ktls_rx_priv_ctx(tls_ctx, priv_rx);
rqtn = priv->direct_tir[rxq].rqt.rqtn;
@ -630,7 +628,7 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
if (err)
goto err_post_wqes;
priv_rx->stats->tls_ctx++;
atomic64_inc(&priv_rx->sw_stats->rx_tls_ctx);
return 0;
@ -666,7 +664,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx)
if (cancel_work_sync(&resync->work))
mlx5e_ktls_priv_rx_put(priv_rx);
priv_rx->stats->tls_del++;
atomic64_inc(&priv_rx->sw_stats->rx_tls_del);
if (priv_rx->rule.rule)
mlx5e_accel_fs_del_sk(priv_rx->rule.rule);

View File

@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
// Copyright (c) 2019 Mellanox Technologies.
#include "en_accel/tls.h"
#include "en_accel/ktls_txrx.h"
#include "en_accel/ktls_utils.h"
@ -50,6 +51,7 @@ static int mlx5e_ktls_create_tis(struct mlx5_core_dev *mdev, u32 *tisn)
struct mlx5e_ktls_offload_context_tx {
struct tls_offload_context_tx *tx_ctx;
struct tls12_crypto_info_aes_gcm_128 crypto_info;
struct mlx5e_tls_sw_stats *sw_stats;
u32 expected_seq;
u32 tisn;
u32 key_id;
@ -99,6 +101,7 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
if (err)
goto err_create_key;
priv_tx->sw_stats = &priv->tls->sw_stats;
priv_tx->expected_seq = start_offload_tcp_sn;
priv_tx->crypto_info =
*(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
@ -111,6 +114,7 @@ int mlx5e_ktls_add_tx(struct net_device *netdev, struct sock *sk,
goto err_create_tis;
priv_tx->ctx_post_pending = true;
atomic64_inc(&priv_tx->sw_stats->tx_tls_ctx);
return 0;
@ -452,7 +456,6 @@ bool mlx5e_ktls_handle_tx_skb(struct tls_context *tls_ctx, struct mlx5e_txqsq *s
if (unlikely(mlx5e_ktls_tx_offload_test_and_clear_pending(priv_tx))) {
mlx5e_ktls_tx_post_param_wqes(sq, priv_tx, false, false);
stats->tls_ctx++;
}
seq = ntohl(tcp_hdr(skb)->seq);

View File

@ -41,10 +41,13 @@
#include "en.h"
struct mlx5e_tls_sw_stats {
atomic64_t tx_tls_ctx;
atomic64_t tx_tls_drop_metadata;
atomic64_t tx_tls_drop_resync_alloc;
atomic64_t tx_tls_drop_no_sync_data;
atomic64_t tx_tls_drop_bypass_required;
atomic64_t rx_tls_ctx;
atomic64_t rx_tls_del;
atomic64_t rx_tls_drop_resync_request;
atomic64_t rx_tls_resync_request;
atomic64_t rx_tls_resync_reply;

View File

@ -45,49 +45,60 @@ static const struct counter_desc mlx5e_tls_sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_drop_bypass_required) },
};
static const struct counter_desc mlx5e_ktls_sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, tx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_tls_sw_stats, rx_tls_del) },
};
#define MLX5E_READ_CTR_ATOMIC64(ptr, dsc, i) \
atomic64_read((atomic64_t *)((char *)(ptr) + (dsc)[i].offset))
#define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc)
static bool is_tls_atomic_stats(struct mlx5e_priv *priv)
static const struct counter_desc *get_tls_atomic_stats(struct mlx5e_priv *priv)
{
return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev);
if (!priv->tls)
return NULL;
if (mlx5_accel_is_ktls_device(priv->mdev))
return mlx5e_ktls_sw_stats_desc;
return mlx5e_tls_sw_stats_desc;
}
int mlx5e_tls_get_count(struct mlx5e_priv *priv)
{
if (!is_tls_atomic_stats(priv))
if (!priv->tls)
return 0;
return NUM_TLS_SW_COUNTERS;
if (mlx5_accel_is_ktls_device(priv->mdev))
return ARRAY_SIZE(mlx5e_ktls_sw_stats_desc);
return ARRAY_SIZE(mlx5e_tls_sw_stats_desc);
}
int mlx5e_tls_get_strings(struct mlx5e_priv *priv, uint8_t *data)
{
unsigned int i, idx = 0;
const struct counter_desc *stats_desc;
unsigned int i, n, idx = 0;
if (!is_tls_atomic_stats(priv))
return 0;
stats_desc = get_tls_atomic_stats(priv);
n = mlx5e_tls_get_count(priv);
for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
for (i = 0; i < n; i++)
strcpy(data + (idx++) * ETH_GSTRING_LEN,
mlx5e_tls_sw_stats_desc[i].format);
stats_desc[i].format);
return NUM_TLS_SW_COUNTERS;
return n;
}
int mlx5e_tls_get_stats(struct mlx5e_priv *priv, u64 *data)
{
int i, idx = 0;
const struct counter_desc *stats_desc;
unsigned int i, n, idx = 0;
if (!is_tls_atomic_stats(priv))
return 0;
stats_desc = get_tls_atomic_stats(priv);
n = mlx5e_tls_get_count(priv);
for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
for (i = 0; i < n; i++)
data[idx++] =
MLX5E_READ_CTR_ATOMIC64(&priv->tls->sw_stats,
mlx5e_tls_sw_stats_desc, i);
stats_desc, i);
return NUM_TLS_SW_COUNTERS;
return n;
}
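
The refactor replaces the fixed NUM_TLS_SW_COUNTERS with a per-device descriptor table, so ktls devices expose only the counters they actually maintain. A self-contained userspace model of the offsetof-driven readout (MLX5E_READ_CTR_ATOMIC64 above works the same way); names are illustrative:

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

struct tls_sw_stats {
	_Atomic long tx_tls_ctx;
	_Atomic long rx_tls_ctx;
	_Atomic long rx_tls_del;
};

struct counter_desc {
	const char *format;
	size_t offset;
};

#define DECLARE_STAT(type, fld) { #fld, offsetof(type, fld) }

static const struct counter_desc ktls_desc[] = {
	DECLARE_STAT(struct tls_sw_stats, tx_tls_ctx),
	DECLARE_STAT(struct tls_sw_stats, rx_tls_ctx),
	DECLARE_STAT(struct tls_sw_stats, rx_tls_del),
};

/* read one counter through its descriptor */
static long read_ctr(void *stats, const struct counter_desc *desc)
{
	return atomic_load((_Atomic long *)((char *)stats + desc->offset));
}

int main(void)
{
	struct tls_sw_stats s = { 0 };
	size_t i, n = sizeof(ktls_desc) / sizeof(ktls_desc[0]);

	atomic_fetch_add(&s.rx_tls_ctx, 1);
	for (i = 0; i < n; i++)
		printf("%s: %ld\n", ktls_desc[i].format, read_ctr(&s, &ktls_desc[i]));
	return 0;
}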

View File

@ -758,11 +758,11 @@ static int get_fec_supported_advertised(struct mlx5_core_dev *dev,
return 0;
}
static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings,
u32 eth_proto_cap,
u8 connector_type, bool ext)
static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev,
struct ethtool_link_ksettings *link_ksettings,
u32 eth_proto_cap, u8 connector_type)
{
if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) {
if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) {
if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
| MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
| MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
@ -898,9 +898,9 @@ static int ptys2connector_type[MLX5E_CONNECTOR_TYPE_NUMBER] = {
[MLX5E_PORT_OTHER] = PORT_OTHER,
};
static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext)
static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type)
{
if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER)
if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type))
return ptys2connector_type[connector_type];
if (eth_proto &
@ -1001,11 +1001,11 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
data_rate_oper, link_ksettings);
eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
link_ksettings->base.port = get_connector_port(eth_proto_oper,
connector_type, ext);
ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin,
connector_type, ext);
connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ?
connector_type : MLX5E_PORT_UNKNOWN;
link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type);
ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin,
connector_type);
get_lp_advertising(mdev, eth_proto_lp, link_ksettings);
if (an_status == MLX5_AN_COMPLETE)

View File

@ -1091,6 +1091,7 @@ static int mlx5e_alloc_icosq(struct mlx5e_channel *c,
sq->channel = c;
sq->uar_map = mdev->mlx5e_res.bfreg.map;
sq->reserved_room = param->stop_room;
param->wq.db_numa_node = cpu_to_node(c->cpu);
err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, wq, &sq->wq_ctrl);
@ -2350,6 +2351,24 @@ void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
mlx5e_build_ico_cq_param(priv, log_wq_size, &param->cqp);
}
static void mlx5e_build_async_icosq_param(struct mlx5e_priv *priv,
struct mlx5e_params *params,
u8 log_wq_size,
struct mlx5e_sq_param *param)
{
void *sqc = param->sqc;
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
mlx5e_build_sq_param_common(priv, param);
/* async_icosq is used by XSK only if xdp_prog is active */
if (params->xdp_prog)
param->stop_room = mlx5e_stop_room_for_wqe(1); /* for XSK NOP */
MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq));
MLX5_SET(wq, wq, log_wq_sz, log_wq_size);
mlx5e_build_ico_cq_param(priv, log_wq_size, &param->cqp);
}
void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv,
struct mlx5e_params *params,
struct mlx5e_sq_param *param)
@ -2398,7 +2417,7 @@ static void mlx5e_build_channel_param(struct mlx5e_priv *priv,
mlx5e_build_sq_param(priv, params, &cparam->txq_sq);
mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq);
mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq);
mlx5e_build_icosq_param(priv, async_icosq_log_wq_sz, &cparam->async_icosq);
mlx5e_build_async_icosq_param(priv, params, async_icosq_log_wq_sz, &cparam->async_icosq);
}
int mlx5e_open_channels(struct mlx5e_priv *priv,

View File

@ -1107,8 +1107,9 @@ static void mlx5e_uplink_rep_enable(struct mlx5e_priv *priv)
mlx5e_rep_tc_enable(priv);
mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
if (MLX5_CAP_GEN(mdev, uplink_follow))
mlx5_modify_vport_admin_state(mdev, MLX5_VPORT_STATE_OP_MOD_UPLINK,
0, 0, MLX5_VPORT_ADMIN_STATE_AUTO);
mlx5_lag_add(mdev, netdev);
priv->events_nb.notifier_call = uplink_rep_async_event;
mlx5_notifier_register(mdev, &priv->events_nb);

View File

@ -116,7 +116,6 @@ static const struct counter_desc sw_stats_desc[] = {
#ifdef CONFIG_MLX5_EN_TLS
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_ooo) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_dump_bytes) },
@ -180,8 +179,6 @@ static const struct counter_desc sw_stats_desc[] = {
#ifdef CONFIG_MLX5_EN_TLS
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_ctx) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_del) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_pkt) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_start) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_resync_req_end) },
@ -342,8 +339,6 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
#ifdef CONFIG_MLX5_EN_TLS
s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets;
s->rx_tls_decrypted_bytes += rq_stats->tls_decrypted_bytes;
s->rx_tls_ctx += rq_stats->tls_ctx;
s->rx_tls_del += rq_stats->tls_del;
s->rx_tls_resync_req_pkt += rq_stats->tls_resync_req_pkt;
s->rx_tls_resync_req_start += rq_stats->tls_resync_req_start;
s->rx_tls_resync_req_end += rq_stats->tls_resync_req_end;
@ -390,7 +385,6 @@ static void mlx5e_stats_grp_sw_update_stats_sq(struct mlx5e_sw_stats *s,
#ifdef CONFIG_MLX5_EN_TLS
s->tx_tls_encrypted_packets += sq_stats->tls_encrypted_packets;
s->tx_tls_encrypted_bytes += sq_stats->tls_encrypted_bytes;
s->tx_tls_ctx += sq_stats->tls_ctx;
s->tx_tls_ooo += sq_stats->tls_ooo;
s->tx_tls_dump_bytes += sq_stats->tls_dump_bytes;
s->tx_tls_dump_packets += sq_stats->tls_dump_packets;
@ -1622,8 +1616,6 @@ static const struct counter_desc rq_stats_desc[] = {
#ifdef CONFIG_MLX5_EN_TLS
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_ctx) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_del) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_pkt) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_start) },
{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_resync_req_end) },
@ -1650,7 +1642,6 @@ static const struct counter_desc sq_stats_desc[] = {
#ifdef CONFIG_MLX5_EN_TLS
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
{ MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },
@ -1776,7 +1767,6 @@ static const struct counter_desc qos_sq_stats_desc[] = {
#ifdef CONFIG_MLX5_EN_TLS
{ MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) },
{ MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) },
{ MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ctx) },
{ MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_ooo) },
{ MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_packets) },
{ MLX5E_DECLARE_QOS_TX_STAT(struct mlx5e_sq_stats, tls_dump_bytes) },

View File

@ -191,7 +191,6 @@ struct mlx5e_sw_stats {
#ifdef CONFIG_MLX5_EN_TLS
u64 tx_tls_encrypted_packets;
u64 tx_tls_encrypted_bytes;
u64 tx_tls_ctx;
u64 tx_tls_ooo;
u64 tx_tls_dump_packets;
u64 tx_tls_dump_bytes;
@ -202,8 +201,6 @@ struct mlx5e_sw_stats {
u64 rx_tls_decrypted_packets;
u64 rx_tls_decrypted_bytes;
u64 rx_tls_ctx;
u64 rx_tls_del;
u64 rx_tls_resync_req_pkt;
u64 rx_tls_resync_req_start;
u64 rx_tls_resync_req_end;
@ -334,8 +331,6 @@ struct mlx5e_rq_stats {
#ifdef CONFIG_MLX5_EN_TLS
u64 tls_decrypted_packets;
u64 tls_decrypted_bytes;
u64 tls_ctx;
u64 tls_del;
u64 tls_resync_req_pkt;
u64 tls_resync_req_start;
u64 tls_resync_req_end;
@ -364,7 +359,6 @@ struct mlx5e_sq_stats {
#ifdef CONFIG_MLX5_EN_TLS
u64 tls_encrypted_packets;
u64 tls_encrypted_bytes;
u64 tls_ctx;
u64 tls_ooo;
u64 tls_dump_packets;
u64 tls_dump_bytes;

View File

@ -931,13 +931,24 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
mutex_unlock(&table->lock);
}
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
#define MLX5_MAX_ASYNC_EQS 4
#else
#define MLX5_MAX_ASYNC_EQS 3
#endif
int mlx5_eq_table_create(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *eq_table = dev->priv.eq_table;
int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
MLX5_CAP_GEN(dev, max_num_eqs) :
1 << MLX5_CAP_GEN(dev, log_max_eq);
int err;
eq_table->num_comp_eqs =
mlx5_irq_get_num_comp(eq_table->irq_table);
min_t(int,
mlx5_irq_get_num_comp(eq_table->irq_table),
num_eqs - MLX5_MAX_ASYNC_EQS);
err = create_async_eqs(dev);
if (err) {
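
The clamp presumably matters when the device exposes fewer EQs than there are completion IRQ vectors: without it, completion EQs could consume the whole budget and create_async_eqs() above would fail. A toy computation of the same min(), numbers invented:

#include <stdio.h>

#define MAX_ASYNC_EQS 4	/* async EQs reserved (incl. the on-demand-paging EQ) */

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
	int max_num_eqs = 64;	/* device limit, e.g. from HCA caps */
	int num_comp_irqs = 63;	/* vectors available for completion EQs */

	/* leave room for the async EQs or their creation fails later */
	int num_comp_eqs = min_int(num_comp_irqs, max_num_eqs - MAX_ASYNC_EQS);

	printf("completion EQs: %d\n", num_comp_eqs);	/* 60 */
	return 0;
}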

View File

@ -248,7 +248,7 @@ err_mod_hdr_regc0:
err_ethertype:
kfree(rule);
out:
kfree(rule_spec);
kvfree(rule_spec);
return err;
}
@ -328,7 +328,7 @@ static int mlx5_create_indir_recirc_group(struct mlx5_eswitch *esw,
e->recirc_cnt = 0;
out:
kfree(in);
kvfree(in);
return err;
}
@ -347,7 +347,7 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw,
spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
if (!spec) {
kfree(in);
kvfree(in);
return -ENOMEM;
}
@ -371,8 +371,8 @@ static int mlx5_create_indir_fwd_group(struct mlx5_eswitch *esw,
}
err_out:
kfree(spec);
kfree(in);
kvfree(spec);
kvfree(in);
return err;
}

View File

@ -537,6 +537,14 @@ esw_setup_vport_dests(struct mlx5_flow_destination *dest, struct mlx5_flow_act *
return i;
}
static bool
esw_src_port_rewrite_supported(struct mlx5_eswitch *esw)
{
return MLX5_CAP_GEN(esw->dev, reg_c_preserve) &&
mlx5_eswitch_vport_match_metadata_enabled(esw) &&
MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level);
}
static int
esw_setup_dests(struct mlx5_flow_destination *dest,
struct mlx5_flow_act *flow_act,
@ -550,9 +558,7 @@ esw_setup_dests(struct mlx5_flow_destination *dest,
int err = 0;
if (!mlx5_eswitch_termtbl_required(esw, attr, flow_act, spec) &&
MLX5_CAP_GEN(esw_attr->in_mdev, reg_c_preserve) &&
mlx5_eswitch_vport_match_metadata_enabled(esw) &&
MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level))
esw_src_port_rewrite_supported(esw))
attr->flags |= MLX5_ESW_ATTR_FLAG_SRC_REWRITE;
if (attr->dest_ft) {
@ -1716,36 +1722,40 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw)
}
esw->fdb_table.offloads.send_to_vport_grp = g;
/* meta send to vport */
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
MLX5_MATCH_MISC_PARAMETERS_2);
if (esw_src_port_rewrite_supported(esw)) {
/* meta send to vport */
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
MLX5_MATCH_MISC_PARAMETERS_2);
match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
MLX5_SET(fte_match_param, match_criteria,
misc_parameters_2.metadata_reg_c_0, mlx5_eswitch_get_vport_metadata_mask());
MLX5_SET(fte_match_param, match_criteria,
misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK);
MLX5_SET(fte_match_param, match_criteria,
misc_parameters_2.metadata_reg_c_0,
mlx5_eswitch_get_vport_metadata_mask());
MLX5_SET(fte_match_param, match_criteria,
misc_parameters_2.metadata_reg_c_1, ESW_TUN_MASK);
num_vfs = esw->esw_funcs.num_vfs;
if (num_vfs) {
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix + num_vfs - 1);
ix += num_vfs;
num_vfs = esw->esw_funcs.num_vfs;
if (num_vfs) {
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
MLX5_SET(create_flow_group_in, flow_group_in,
end_flow_index, ix + num_vfs - 1);
ix += num_vfs;
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n",
err);
goto send_vport_meta_err;
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create send-to-vport meta flow group err(%d)\n",
err);
goto send_vport_meta_err;
}
esw->fdb_table.offloads.send_to_vport_meta_grp = g;
err = mlx5_eswitch_add_send_to_vport_meta_rules(esw);
if (err)
goto meta_rule_err;
}
esw->fdb_table.offloads.send_to_vport_meta_grp = g;
err = mlx5_eswitch_add_send_to_vport_meta_rules(esw);
if (err)
goto meta_rule_err;
}
if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {

View File

@ -21,6 +21,7 @@
#include <net/red.h>
#include <net/vxlan.h>
#include <net/flow_offload.h>
#include <net/inet_ecn.h>
#include "port.h"
#include "core.h"
@ -347,6 +348,20 @@ struct mlxsw_sp_port_type_speed_ops {
u32 (*ptys_proto_cap_masked_get)(u32 eth_proto_cap);
};
static inline u8 mlxsw_sp_tunnel_ecn_decap(u8 outer_ecn, u8 inner_ecn,
bool *trap_en)
{
bool set_ce = false;
*trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
if (set_ce)
return INET_ECN_CE;
else if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0)
return INET_ECN_ECT_1;
else
return inner_ecn;
}
static inline struct net_device *
mlxsw_sp_bridge_vxlan_dev_find(struct net_device *br_dev)
{
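
This helper centralizes the RFC 6040 decapsulation logic that the ipip and nve code below previously open-coded, and adds the outer-ECT(1) preference. A userspace model, with a stand-in for __INET_ECN_decapsulate() written from its include/net/inet_ecn.h shape (treat the stub as an approximation):

#include <stdbool.h>
#include <stdio.h>

enum { ECN_NOT_ECT = 0, ECN_ECT_1 = 1, ECN_ECT_0 = 2, ECN_CE = 3 };

/* stand-in for __INET_ECN_decapsulate(): nonzero means the combination is
 * invalid and should be trapped; *set_ce is set when congestion must be
 * propagated to the inner header */
static int decapsulate(unsigned char outer, unsigned char inner, bool *set_ce)
{
	if (inner == ECN_NOT_ECT) {
		switch (outer) {
		case ECN_NOT_ECT:
			return 0;
		case ECN_ECT_0:
		case ECN_ECT_1:
			return 1;
		case ECN_CE:
			return 2;
		}
	}
	if (outer == ECN_CE)
		*set_ce = true;
	return 0;
}

static unsigned char tunnel_ecn_decap(unsigned char outer, unsigned char inner,
				      bool *trap_en)
{
	bool set_ce = false;

	*trap_en = !!decapsulate(outer, inner, &set_ce);
	if (set_ce)
		return ECN_CE;		/* congestion experienced wins */
	if (outer == ECN_ECT_1 && inner == ECN_ECT_0)
		return ECN_ECT_1;	/* prefer the outer ECT(1) marking */
	return inner;
}

int main(void)
{
	bool trap;
	unsigned char inner = tunnel_ecn_decap(ECN_CE, ECN_ECT_0, &trap);

	printf("inner=%d trap=%d\n", inner, trap);	/* inner=3 trap=0 */
	return 0;
}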

View File

@ -1230,16 +1230,22 @@ mlxsw_sp1_from_ptys_link_mode(struct mlxsw_sp *mlxsw_sp, bool carrier_ok,
u32 ptys_eth_proto,
struct ethtool_link_ksettings *cmd)
{
struct mlxsw_sp1_port_link_mode link;
int i;
cmd->link_mode = -1;
cmd->base.speed = SPEED_UNKNOWN;
cmd->base.duplex = DUPLEX_UNKNOWN;
cmd->lanes = 0;
if (!carrier_ok)
return;
for (i = 0; i < MLXSW_SP1_PORT_LINK_MODE_LEN; i++) {
if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask)
cmd->link_mode = mlxsw_sp1_port_link_mode[i].mask_ethtool;
if (ptys_eth_proto & mlxsw_sp1_port_link_mode[i].mask) {
link = mlxsw_sp1_port_link_mode[i];
ethtool_params_from_link_mode(cmd,
link.mask_ethtool);
}
}
}
@ -1672,7 +1678,9 @@ mlxsw_sp2_from_ptys_link_mode(struct mlxsw_sp *mlxsw_sp, bool carrier_ok,
struct mlxsw_sp2_port_link_mode link;
int i;
cmd->link_mode = -1;
cmd->base.speed = SPEED_UNKNOWN;
cmd->base.duplex = DUPLEX_UNKNOWN;
cmd->lanes = 0;
if (!carrier_ok)
return;
@ -1680,7 +1688,8 @@ mlxsw_sp2_from_ptys_link_mode(struct mlxsw_sp *mlxsw_sp, bool carrier_ok,
for (i = 0; i < MLXSW_SP2_PORT_LINK_MODE_LEN; i++) {
if (ptys_eth_proto & mlxsw_sp2_port_link_mode[i].mask) {
link = mlxsw_sp2_port_link_mode[i];
cmd->link_mode = link.mask_ethtool[1];
ethtool_params_from_link_mode(cmd,
link.mask_ethtool[1]);
}
}
}

View File

@ -335,12 +335,11 @@ static int mlxsw_sp_ipip_ecn_decap_init_one(struct mlxsw_sp *mlxsw_sp,
u8 inner_ecn, u8 outer_ecn)
{
char tidem_pl[MLXSW_REG_TIDEM_LEN];
bool trap_en, set_ce = false;
u8 new_inner_ecn;
bool trap_en;
trap_en = __INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
&trap_en);
mlxsw_reg_tidem_pack(tidem_pl, outer_ecn, inner_ecn, new_inner_ecn,
trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tidem), tidem_pl);

View File

@ -909,12 +909,11 @@ static int __mlxsw_sp_nve_ecn_decap_init(struct mlxsw_sp *mlxsw_sp,
u8 inner_ecn, u8 outer_ecn)
{
char tndem_pl[MLXSW_REG_TNDEM_LEN];
bool trap_en, set_ce = false;
u8 new_inner_ecn;
bool trap_en;
trap_en = !!__INET_ECN_decapsulate(outer_ecn, inner_ecn, &set_ce);
new_inner_ecn = set_ce ? INET_ECN_CE : inner_ecn;
new_inner_ecn = mlxsw_sp_tunnel_ecn_decap(outer_ecn, inner_ecn,
&trap_en);
mlxsw_reg_tndem_pack(tndem_pl, outer_ecn, inner_ecn, new_inner_ecn,
trap_en, trap_en ? MLXSW_TRAP_ID_DECAP_ECN0 : 0);
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(tndem), tndem_pl);

View File

@ -885,8 +885,8 @@ static int lan743x_mac_set_mtu(struct lan743x_adapter *adapter, int new_mtu)
}
mac_rx &= ~(MAC_RX_MAX_SIZE_MASK_);
mac_rx |= (((new_mtu + ETH_HLEN + 4) << MAC_RX_MAX_SIZE_SHIFT_) &
MAC_RX_MAX_SIZE_MASK_);
mac_rx |= (((new_mtu + ETH_HLEN + ETH_FCS_LEN)
<< MAC_RX_MAX_SIZE_SHIFT_) & MAC_RX_MAX_SIZE_MASK_);
lan743x_csr_write(adapter, MAC_RX, mac_rx);
if (enabled) {
@ -1944,7 +1944,7 @@ static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index)
struct sk_buff *skb;
dma_addr_t dma_ptr;
buffer_length = netdev->mtu + ETH_HLEN + 4 + RX_HEAD_PADDING;
buffer_length = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + RX_HEAD_PADDING;
descriptor = &rx->ring_cpu_ptr[index];
buffer_info = &rx->buffer_info[index];
@ -2040,7 +2040,7 @@ lan743x_rx_trim_skb(struct sk_buff *skb, int frame_length)
dev_kfree_skb_irq(skb);
return NULL;
}
frame_length = max_t(int, 0, frame_length - RX_HEAD_PADDING - 4);
frame_length = max_t(int, 0, frame_length - ETH_FCS_LEN);
if (skb->len > frame_length) {
skb->tail -= skb->len - frame_length;
skb->len = frame_length;

View File

@ -2897,7 +2897,7 @@ static netdev_tx_t myri10ge_sw_tso(struct sk_buff *skb,
dev_kfree_skb_any(curr);
if (segs != NULL) {
curr = segs;
segs = segs->next;
segs = next;
curr->next = NULL;
dev_kfree_skb_any(segs);
}

View File

@ -454,6 +454,7 @@ void nfp_bpf_ctrl_msg_rx(struct nfp_app *app, struct sk_buff *skb)
dev_consume_skb_any(skb);
else
dev_kfree_skb_any(skb);
return;
}
nfp_ccm_rx(&bpf->ccm, skb);

View File

@ -190,6 +190,7 @@ struct nfp_fl_internal_ports {
* @qos_rate_limiters: Current active qos rate limiters
* @qos_stats_lock: Lock on qos stats updates
* @pre_tun_rule_cnt: Number of pre-tunnel rules offloaded
* @merge_table: Hash table to store merged flows
*/
struct nfp_flower_priv {
struct nfp_app *app;
@ -223,6 +224,7 @@ struct nfp_flower_priv {
unsigned int qos_rate_limiters;
spinlock_t qos_stats_lock; /* Protect the qos stats */
int pre_tun_rule_cnt;
struct rhashtable merge_table;
};
/**
@ -350,6 +352,12 @@ struct nfp_fl_payload_link {
};
extern const struct rhashtable_params nfp_flower_table_params;
extern const struct rhashtable_params merge_table_params;
struct nfp_merge_info {
u64 parent_ctx;
struct rhash_head ht_node;
};
struct nfp_fl_stats_frame {
__be32 stats_con_id;

View File

@ -490,6 +490,12 @@ const struct rhashtable_params nfp_flower_table_params = {
.automatic_shrinking = true,
};
const struct rhashtable_params merge_table_params = {
.key_offset = offsetof(struct nfp_merge_info, parent_ctx),
.head_offset = offsetof(struct nfp_merge_info, ht_node),
.key_len = sizeof(u64),
};
int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
unsigned int host_num_mems)
{
@ -506,6 +512,10 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
if (err)
goto err_free_flow_table;
err = rhashtable_init(&priv->merge_table, &merge_table_params);
if (err)
goto err_free_stats_ctx_table;
get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed));
/* Init ring buffer and unallocated mask_ids. */
@ -513,7 +523,7 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS,
NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL);
if (!priv->mask_ids.mask_id_free_list.buf)
goto err_free_stats_ctx_table;
goto err_free_merge_table;
priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1;
@ -550,6 +560,8 @@ err_free_last_used:
kfree(priv->mask_ids.last_used);
err_free_mask_id:
kfree(priv->mask_ids.mask_id_free_list.buf);
err_free_merge_table:
rhashtable_destroy(&priv->merge_table);
err_free_stats_ctx_table:
rhashtable_destroy(&priv->stats_ctx_table);
err_free_flow_table:
@ -568,6 +580,8 @@ void nfp_flower_metadata_cleanup(struct nfp_app *app)
nfp_check_rhashtable_empty, NULL);
rhashtable_free_and_destroy(&priv->stats_ctx_table,
nfp_check_rhashtable_empty, NULL);
rhashtable_free_and_destroy(&priv->merge_table,
nfp_check_rhashtable_empty, NULL);
kvfree(priv->stats);
kfree(priv->mask_ids.mask_id_free_list.buf);
kfree(priv->mask_ids.last_used);

View File

@ -1009,6 +1009,8 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
struct netlink_ext_ack *extack = NULL;
struct nfp_fl_payload *merge_flow;
struct nfp_fl_key_ls merge_key_ls;
struct nfp_merge_info *merge_info;
u64 parent_ctx = 0;
int err;
ASSERT_RTNL();
@ -1019,6 +1021,15 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
nfp_flower_is_merge_flow(sub_flow2))
return -EINVAL;
/* check if the two flows are already merged */
parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32;
parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id));
if (rhashtable_lookup_fast(&priv->merge_table,
&parent_ctx, merge_table_params)) {
nfp_flower_cmsg_warn(app, "The two flows are already merged.\n");
return 0;
}
err = nfp_flower_can_merge(sub_flow1, sub_flow2);
if (err)
return err;
@ -1060,16 +1071,33 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
if (err)
goto err_release_metadata;
merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL);
if (!merge_info) {
err = -ENOMEM;
goto err_remove_rhash;
}
merge_info->parent_ctx = parent_ctx;
err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node,
merge_table_params);
if (err)
goto err_destroy_merge_info;
err = nfp_flower_xmit_flow(app, merge_flow,
NFP_FLOWER_CMSG_TYPE_FLOW_MOD);
if (err)
goto err_remove_rhash;
goto err_remove_merge_info;
merge_flow->in_hw = true;
sub_flow1->in_hw = false;
return 0;
err_remove_merge_info:
WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
&merge_info->ht_node,
merge_table_params));
err_destroy_merge_info:
kfree(merge_info);
err_remove_rhash:
WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table,
&merge_flow->fl_node,
@ -1359,7 +1387,9 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
{
struct nfp_flower_priv *priv = app->priv;
struct nfp_fl_payload_link *link, *temp;
struct nfp_merge_info *merge_info;
struct nfp_fl_payload *origin;
u64 parent_ctx = 0;
bool mod = false;
int err;
@ -1396,8 +1426,22 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
err_free_links:
/* Clean any links connected with the merged flow. */
list_for_each_entry_safe(link, temp, &merge_flow->linked_flows,
merge_flow.list)
merge_flow.list) {
u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id);
parent_ctx = (parent_ctx << 32) | (u64)(ctx_id);
nfp_flower_unlink_flow(link);
}
merge_info = rhashtable_lookup_fast(&priv->merge_table,
&parent_ctx,
merge_table_params);
if (merge_info) {
WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
&merge_info->ht_node,
merge_table_params));
kfree(merge_info);
}
kfree(merge_flow->action_data);
kfree(merge_flow->mask_data);
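
Both paths must derive the same 64-bit key: the merge path builds it from sub_flow1/sub_flow2 directly, while the removal path rebuilds it by shifting in each linked sub-flow's host_ctx_id in list order, so the two encodings have to agree. A small sketch:

#include <stdint.h>
#include <stdio.h>

/* key used at merge time: first sub-flow in the high 32 bits */
static uint64_t merge_key(uint32_t ctx_id1, uint32_t ctx_id2)
{
	return ((uint64_t)ctx_id1 << 32) | (uint64_t)ctx_id2;
}

/* key as rebuilt on removal, folding ids in while walking linked flows */
static uint64_t merge_key_from_list(const uint32_t *ctx_ids, int n)
{
	uint64_t key = 0;
	int i;

	for (i = 0; i < n; i++)
		key = (key << 32) | (uint64_t)ctx_ids[i];
	return key;
}

int main(void)
{
	uint32_t ids[] = { 0x11, 0x22 };

	/* both constructions must match for the lookup + kfree to work */
	printf("%d\n", merge_key(0x11, 0x22) == merge_key_from_list(ids, 2));
	return 0;
}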

View File

@ -504,6 +504,18 @@ static inline u32 axinet_ior_read_mcr(struct axienet_local *lp)
return axienet_ior(lp, XAE_MDIO_MCR_OFFSET);
}
static inline void axienet_lock_mii(struct axienet_local *lp)
{
if (lp->mii_bus)
mutex_lock(&lp->mii_bus->mdio_lock);
}
static inline void axienet_unlock_mii(struct axienet_local *lp)
{
if (lp->mii_bus)
mutex_unlock(&lp->mii_bus->mdio_lock);
}
/**
* axienet_iow - Memory mapped Axi Ethernet register write
* @lp: Pointer to axienet local structure

View File

@ -1053,9 +1053,9 @@ static int axienet_open(struct net_device *ndev)
* including the MDIO. MDIO must be disabled before resetting.
* Hold MDIO bus lock to avoid MDIO accesses during the reset.
*/
mutex_lock(&lp->mii_bus->mdio_lock);
axienet_lock_mii(lp);
ret = axienet_device_reset(ndev);
mutex_unlock(&lp->mii_bus->mdio_lock);
axienet_unlock_mii(lp);
ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
if (ret) {
@ -1148,9 +1148,9 @@ static int axienet_stop(struct net_device *ndev)
}
/* Do a reset to ensure DMA is really stopped */
mutex_lock(&lp->mii_bus->mdio_lock);
axienet_lock_mii(lp);
__axienet_device_reset(lp);
mutex_unlock(&lp->mii_bus->mdio_lock);
axienet_unlock_mii(lp);
cancel_work_sync(&lp->dma_err_task);
@ -1709,9 +1709,9 @@ static void axienet_dma_err_handler(struct work_struct *work)
* including the MDIO. MDIO must be disabled before resetting.
* Hold MDIO bus lock to avoid MDIO accesses during the reset.
*/
mutex_lock(&lp->mii_bus->mdio_lock);
axienet_lock_mii(lp);
__axienet_device_reset(lp);
mutex_unlock(&lp->mii_bus->mdio_lock);
axienet_unlock_mii(lp);
for (i = 0; i < lp->tx_bd_num; i++) {
cur_p = &lp->tx_bd_v[i];

View File

@ -908,8 +908,16 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
info = skb_tunnel_info(skb);
if (info) {
info->key.u.ipv4.dst = fl4.saddr;
info->key.u.ipv4.src = fl4.daddr;
struct ip_tunnel_info *unclone;
unclone = skb_tunnel_info_unclone(skb);
if (unlikely(!unclone)) {
dst_release(&rt->dst);
return -ENOMEM;
}
unclone->key.u.ipv4.dst = fl4.saddr;
unclone->key.u.ipv4.src = fl4.daddr;
}
if (!pskb_may_pull(skb, ETH_HLEN)) {
@ -993,8 +1001,16 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
struct ip_tunnel_info *info = skb_tunnel_info(skb);
if (info) {
info->key.u.ipv6.dst = fl6.saddr;
info->key.u.ipv6.src = fl6.daddr;
struct ip_tunnel_info *unclone;
unclone = skb_tunnel_info_unclone(skb);
if (unlikely(!unclone)) {
dst_release(dst);
return -ENOMEM;
}
unclone->key.u.ipv6.dst = fl6.saddr;
unclone->key.u.ipv6.src = fl6.daddr;
}
if (!pskb_may_pull(skb, ETH_HLEN)) {

View File

@ -365,6 +365,7 @@ static int atusb_alloc_urbs(struct atusb *atusb, int n)
return -ENOMEM;
}
usb_anchor_urb(urb, &atusb->idle_urbs);
usb_free_urb(urb);
n--;
}
return 0;

View File

@ -369,7 +369,7 @@ EXPORT_SYMBOL_GPL(bcm_phy_enable_apd);
int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
{
int val;
int val, mask = 0;
/* Enable EEE at PHY level */
val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL);
@ -388,10 +388,17 @@ int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
if (val < 0)
return val;
if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
phydev->supported))
mask |= MDIO_EEE_1000T;
if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT,
phydev->supported))
mask |= MDIO_EEE_100TX;
if (enable)
val |= (MDIO_EEE_100TX | MDIO_EEE_1000T);
val |= mask;
else
val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T);
val &= ~mask;
phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val);
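
Previously the driver toggled the 100TX and 1000T EEE advertisement bits unconditionally, which is wrong for PHYs without 1000BASE-T support. The new code derives the mask from the supported linkmodes; modeled in userspace (register bit values taken from uapi/linux/mdio.h, the "supported" flags are invented):

#include <stdio.h>

#define SUPP_100TX_FULL  (1u << 0)
#define SUPP_1000T_FULL  (1u << 1)

#define MDIO_EEE_100TX   0x0002
#define MDIO_EEE_1000T   0x0004

static unsigned int eee_adv(unsigned int supported, unsigned int reg, int enable)
{
	unsigned int mask = 0;

	if (supported & SUPP_1000T_FULL)
		mask |= MDIO_EEE_1000T;
	if (supported & SUPP_100TX_FULL)
		mask |= MDIO_EEE_100TX;

	return enable ? (reg | mask) : (reg & ~mask);
}

int main(void)
{
	/* a 100 Mb/s-only PHY must not advertise 1000T EEE */
	printf("%#x\n", eee_adv(SUPP_100TX_FULL, 0, 1));	/* 0x2 */
	return 0;
}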

View File

@ -69,6 +69,14 @@
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
#include <linux/mutex.h>
#include <linux/ieee802154.h>
#include <linux/if_ltalk.h>
#include <uapi/linux/if_fddi.h>
#include <uapi/linux/if_hippi.h>
#include <uapi/linux/if_fc.h>
#include <net/ax25.h>
#include <net/rose.h>
#include <net/6lowpan.h>
#include <linux/uaccess.h>
#include <linux/proc_fs.h>
@ -2919,6 +2927,45 @@ static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog __rcu **prog_p,
return __tun_set_ebpf(tun, prog_p, prog);
}
/* Return correct value for tun->dev->addr_len based on tun->dev->type. */
static unsigned char tun_get_addr_len(unsigned short type)
{
switch (type) {
case ARPHRD_IP6GRE:
case ARPHRD_TUNNEL6:
return sizeof(struct in6_addr);
case ARPHRD_IPGRE:
case ARPHRD_TUNNEL:
case ARPHRD_SIT:
return 4;
case ARPHRD_ETHER:
return ETH_ALEN;
case ARPHRD_IEEE802154:
case ARPHRD_IEEE802154_MONITOR:
return IEEE802154_EXTENDED_ADDR_LEN;
case ARPHRD_PHONET_PIPE:
case ARPHRD_PPP:
case ARPHRD_NONE:
return 0;
case ARPHRD_6LOWPAN:
return EUI64_ADDR_LEN;
case ARPHRD_FDDI:
return FDDI_K_ALEN;
case ARPHRD_HIPPI:
return HIPPI_ALEN;
case ARPHRD_IEEE802:
return FC_ALEN;
case ARPHRD_ROSE:
return ROSE_ADDR_LEN;
case ARPHRD_NETROM:
return AX25_ADDR_LEN;
case ARPHRD_LOCALTLK:
return LTALK_ALEN;
default:
return 0;
}
}
static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
unsigned long arg, int ifreq_len)
{
@ -3082,6 +3129,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
break;
}
tun->dev->type = (int) arg;
tun->dev->addr_len = tun_get_addr_len(tun->dev->type);
netif_info(tun, drv, tun->dev, "linktype set to %d\n",
tun->dev->type);
call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE,

View File

@ -611,7 +611,7 @@ static struct hso_serial *get_serial_by_index(unsigned index)
return serial;
}
static int get_free_serial_index(void)
static int obtain_minor(struct hso_serial *serial)
{
int index;
unsigned long flags;
@ -619,8 +619,10 @@ static int get_free_serial_index(void)
spin_lock_irqsave(&serial_table_lock, flags);
for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) {
if (serial_table[index] == NULL) {
serial_table[index] = serial->parent;
serial->minor = index;
spin_unlock_irqrestore(&serial_table_lock, flags);
return index;
return 0;
}
}
spin_unlock_irqrestore(&serial_table_lock, flags);
@ -629,15 +631,12 @@ static int get_free_serial_index(void)
return -1;
}
static void set_serial_by_index(unsigned index, struct hso_serial *serial)
static void release_minor(struct hso_serial *serial)
{
unsigned long flags;
spin_lock_irqsave(&serial_table_lock, flags);
if (serial)
serial_table[index] = serial->parent;
else
serial_table[index] = NULL;
serial_table[serial->minor] = NULL;
spin_unlock_irqrestore(&serial_table_lock, flags);
}
@ -2230,6 +2229,7 @@ static int hso_stop_serial_device(struct hso_device *hso_dev)
static void hso_serial_tty_unregister(struct hso_serial *serial)
{
tty_unregister_device(tty_drv, serial->minor);
release_minor(serial);
}
static void hso_serial_common_free(struct hso_serial *serial)
@ -2253,24 +2253,22 @@ static void hso_serial_common_free(struct hso_serial *serial)
static int hso_serial_common_create(struct hso_serial *serial, int num_urbs,
int rx_size, int tx_size)
{
int minor;
int i;
tty_port_init(&serial->port);
minor = get_free_serial_index();
if (minor < 0)
if (obtain_minor(serial))
goto exit2;
/* register our minor number */
serial->parent->dev = tty_port_register_device_attr(&serial->port,
tty_drv, minor, &serial->parent->interface->dev,
tty_drv, serial->minor, &serial->parent->interface->dev,
serial->parent, hso_serial_dev_groups);
if (IS_ERR(serial->parent->dev))
if (IS_ERR(serial->parent->dev)) {
release_minor(serial);
goto exit2;
}
/* fill in specific data for later use */
serial->minor = minor;
serial->magic = HSO_SERIAL_MAGIC;
spin_lock_init(&serial->serial_lock);
serial->num_rx_urbs = num_urbs;
@ -2667,9 +2665,6 @@ static struct hso_device *hso_create_bulk_serial_device(
serial->write_data = hso_std_serial_write_data;
/* and record this serial */
set_serial_by_index(serial->minor, serial);
/* setup the proc dirs and files if needed */
hso_log_port(hso_dev);
@ -2726,9 +2721,6 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
serial->shared_int->ref_count++;
mutex_unlock(&serial->shared_int->shared_int_lock);
/* and record this serial */
set_serial_by_index(serial->minor, serial);
/* setup the proc dirs and files if needed */
hso_log_port(hso_dev);
@ -3113,7 +3105,6 @@ static void hso_free_interface(struct usb_interface *interface)
cancel_work_sync(&serial_table[i]->async_get_intf);
hso_serial_tty_unregister(serial);
kref_put(&serial_table[i]->ref, hso_serial_ref_free);
set_serial_by_index(i, NULL);
}
}

View File

@ -406,9 +406,13 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
offset += hdr_padded_len;
p += hdr_padded_len;
copy = len;
if (copy > skb_tailroom(skb))
copy = skb_tailroom(skb);
/* Copy all frame if it fits skb->head, otherwise
* we let virtio_net_hdr_to_skb() and GRO pull headers as needed.
*/
if (len <= skb_tailroom(skb))
copy = len;
else
copy = ETH_HLEN + metasize;
skb_put_data(skb, p, copy);
if (metasize) {
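
page_to_skb() used to copy whatever fit in the headroom; per the new comment it copies the full frame only when it fits entirely, and otherwise just the Ethernet header plus XDP metadata, leaving further header pulls to virtio_net_hdr_to_skb() and GRO. The length decision, modeled:

#include <stdio.h>

#define ETH_HLEN 14

static unsigned int copy_len(unsigned int len, unsigned int tailroom,
			     unsigned int metasize)
{
	/* whole frame only when it fits skb->head; otherwise header-only */
	if (len <= tailroom)
		return len;
	return ETH_HLEN + metasize;
}

int main(void)
{
	printf("%u\n", copy_len(1500, 256, 0));	/* 14: header only */
	printf("%u\n", copy_len(128, 256, 0));	/* 128: whole frame */
	return 0;
}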

View File

@ -2725,12 +2725,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto tx_error;
} else if (err) {
if (info) {
struct ip_tunnel_info *unclone;
struct in_addr src, dst;
unclone = skb_tunnel_info_unclone(skb);
if (unlikely(!unclone))
goto tx_error;
src = remote_ip.sin.sin_addr;
dst = local_ip.sin.sin_addr;
info->key.u.ipv4.src = src.s_addr;
info->key.u.ipv4.dst = dst.s_addr;
unclone->key.u.ipv4.src = src.s_addr;
unclone->key.u.ipv4.dst = dst.s_addr;
}
vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
dst_release(ndst);
@ -2781,12 +2786,17 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto tx_error;
} else if (err) {
if (info) {
struct ip_tunnel_info *unclone;
struct in6_addr src, dst;
unclone = skb_tunnel_info_unclone(skb);
if (unlikely(!unclone))
goto tx_error;
src = remote_ip.sin6.sin6_addr;
dst = local_ip.sin6.sin6_addr;
info->key.u.ipv6.src = src;
info->key.u.ipv6.dst = dst;
unclone->key.u.ipv6.src = src;
unclone->key.u.ipv6.dst = dst;
}
vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
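
skb_tunnel_info_unclone() hands the caller a private copy of the tunnel metadata when it is shared, so rewriting the endpoints for the encap-bypass path above cannot corrupt state seen by other holders of the same tunnel info. A rough userspace analogue of the copy-on-write step, with refcounting simplified and all names invented:

#include <stdlib.h>
#include <string.h>

struct tun_info {
	int refcnt;
	unsigned int src, dst;
};

struct pkt {
	struct tun_info *info;
};

/* return a private copy if the metadata is shared with other packets */
static struct tun_info *tunnel_info_unclone(struct pkt *p)
{
	struct tun_info *info = p->info;
	struct tun_info *copy;

	if (info->refcnt == 1)
		return info;		/* already exclusive, edit in place */

	copy = malloc(sizeof(*copy));
	if (!copy)
		return NULL;		/* caller bails out, as in the fix */
	memcpy(copy, info, sizeof(*copy));
	copy->refcnt = 1;
	info->refcnt--;
	p->info = copy;
	return copy;
}

int main(void)
{
	struct tun_info shared = { .refcnt = 2, .src = 1, .dst = 2 };
	struct pkt p = { .info = &shared };
	struct tun_info *priv = tunnel_info_unclone(&p);

	if (priv)
		priv->src = 99;	/* the other holder still sees src == 1 */
	return 0;
}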

View File

@ -415,7 +415,7 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
if (pad > 0) { /* Pad the frame with zeros */
if (__skb_pad(skb, pad, false))
goto drop;
goto out;
skb_put(skb, pad);
}
}
@ -448,8 +448,9 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
drop:
dev->stats.tx_dropped++;
kfree_skb(skb);
out:
dev->stats.tx_dropped++;
return NETDEV_TX_OK;
}

View File

@ -2439,7 +2439,7 @@ void brcmf_p2p_ifp_removed(struct brcmf_if *ifp, bool locked)
vif = ifp->vif;
cfg = wdev_to_cfg(&vif->wdev);
cfg->p2p.bss_idx[P2PAPI_BSSCFG_DEVICE].vif = NULL;
if (locked) {
if (!locked) {
rtnl_lock();
wiphy_lock(cfg->wiphy);
cfg80211_unregister_wdev(&vif->wdev);

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2005-2014 Intel Corporation
* Copyright (C) 2005-2014, 2021 Intel Corporation
* Copyright (C) 2015-2017 Intel Deutschland GmbH
*/
#include <linux/sched.h>
@ -26,7 +26,7 @@ bool iwl_notification_wait(struct iwl_notif_wait_data *notif_wait,
if (!list_empty(&notif_wait->notif_waits)) {
struct iwl_notification_wait *w;
spin_lock(&notif_wait->notif_wait_lock);
spin_lock_bh(&notif_wait->notif_wait_lock);
list_for_each_entry(w, &notif_wait->notif_waits, list) {
int i;
bool found = false;
@ -59,7 +59,7 @@ bool iwl_notification_wait(struct iwl_notif_wait_data *notif_wait,
triggered = true;
}
}
spin_unlock(&notif_wait->notif_wait_lock);
spin_unlock_bh(&notif_wait->notif_wait_lock);
}
return triggered;
@ -70,10 +70,10 @@ void iwl_abort_notification_waits(struct iwl_notif_wait_data *notif_wait)
{
struct iwl_notification_wait *wait_entry;
spin_lock(&notif_wait->notif_wait_lock);
spin_lock_bh(&notif_wait->notif_wait_lock);
list_for_each_entry(wait_entry, &notif_wait->notif_waits, list)
wait_entry->aborted = true;
spin_unlock(&notif_wait->notif_wait_lock);
spin_unlock_bh(&notif_wait->notif_wait_lock);
wake_up_all(&notif_wait->notif_waitq);
}

View File

@ -414,6 +414,7 @@ struct iwl_cfg {
#define IWL_CFG_MAC_TYPE_QNJ 0x36
#define IWL_CFG_MAC_TYPE_SO 0x37
#define IWL_CFG_MAC_TYPE_SNJ 0x42
#define IWL_CFG_MAC_TYPE_SOF 0x43
#define IWL_CFG_MAC_TYPE_MA 0x44
#define IWL_CFG_RF_TYPE_TH 0x105

View File

@ -232,7 +232,7 @@ enum iwl_reg_capa_flags_v2 {
REG_CAPA_V2_MCS_9_ALLOWED = BIT(6),
REG_CAPA_V2_WEATHER_DISABLED = BIT(7),
REG_CAPA_V2_40MHZ_ALLOWED = BIT(8),
REG_CAPA_V2_11AX_DISABLED = BIT(13),
REG_CAPA_V2_11AX_DISABLED = BIT(10),
};
/*

View File

@ -1786,10 +1786,13 @@ static ssize_t iwl_dbgfs_rfi_freq_table_write(struct iwl_mvm *mvm, char *buf,
return -EINVAL;
/* value zero triggers re-sending the default table to the device */
if (!op_id)
if (!op_id) {
mutex_lock(&mvm->mutex);
ret = iwl_rfi_send_config_cmd(mvm, NULL);
else
mutex_unlock(&mvm->mutex);
} else {
ret = -EOPNOTSUPP; /* in the future a new table will be added */
}
return ret ?: count;
}

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2020 Intel Corporation
* Copyright (C) 2020 - 2021 Intel Corporation
*/
#include "mvm.h"
@ -66,6 +66,8 @@ int iwl_rfi_send_config_cmd(struct iwl_mvm *mvm, struct iwl_rfi_lut_entry *rfi_t
if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_RFIM_SUPPORT))
return -EOPNOTSUPP;
lockdep_assert_held(&mvm->mutex);
/* in case no table is passed, use the default one */
if (!rfi_table) {
memcpy(cmd.table, iwl_rfi_table, sizeof(cmd.table));
@ -75,9 +77,7 @@ int iwl_rfi_send_config_cmd(struct iwl_mvm *mvm, struct iwl_rfi_lut_entry *rfi_t
cmd.oem = 1;
}
mutex_lock(&mvm->mutex);
ret = iwl_mvm_send_cmd(mvm, &hcmd);
mutex_unlock(&mvm->mutex);
if (ret)
IWL_ERR(mvm, "Failed to send RFI config cmd %d\n", ret);

View File

@ -272,10 +272,10 @@ static void iwl_mvm_get_signal_strength(struct iwl_mvm *mvm,
rx_status->chain_signal[2] = S8_MIN;
}
static int iwl_mvm_rx_mgmt_crypto(struct ieee80211_sta *sta,
struct ieee80211_hdr *hdr,
struct iwl_rx_mpdu_desc *desc,
u32 status)
static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
struct ieee80211_hdr *hdr,
struct iwl_rx_mpdu_desc *desc,
u32 status)
{
struct iwl_mvm_sta *mvmsta;
struct iwl_mvm_vif *mvmvif;
@ -285,6 +285,9 @@ static int iwl_mvm_rx_mgmt_crypto(struct ieee80211_sta *sta,
u32 len = le16_to_cpu(desc->mpdu_len);
const u8 *frame = (void *)hdr;
if ((status & IWL_RX_MPDU_STATUS_SEC_MASK) == IWL_RX_MPDU_STATUS_SEC_NONE)
return 0;
/*
* For non-beacon, we don't really care. But beacons may
* be filtered out, and we thus need the firmware's replay
@ -356,6 +359,10 @@ static int iwl_mvm_rx_crypto(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
IWL_RX_MPDU_STATUS_SEC_UNKNOWN && !mvm->monitor_on)
return -1;
if (unlikely(ieee80211_is_mgmt(hdr->frame_control) &&
!ieee80211_has_protected(hdr->frame_control)))
return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status);
if (!ieee80211_has_protected(hdr->frame_control) ||
(status & IWL_RX_MPDU_STATUS_SEC_MASK) ==
IWL_RX_MPDU_STATUS_SEC_NONE)
@ -411,7 +418,7 @@ static int iwl_mvm_rx_crypto(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
stats->flag |= RX_FLAG_DECRYPTED;
return 0;
case RX_MPDU_RES_STATUS_SEC_CMAC_GMAC_ENC:
return iwl_mvm_rx_mgmt_crypto(sta, hdr, desc, status);
break;
default:
/*
* Sometimes we can get frames that were not decrypted

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2018-2020 Intel Corporation
* Copyright (C) 2018-2021 Intel Corporation
*/
#include "iwl-trans.h"
#include "iwl-fh.h"
@ -75,15 +75,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
const struct fw_img *fw)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
u32_encode_bits(250,
CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
struct iwl_context_info_gen3 *ctxt_info_gen3;
struct iwl_prph_scratch *prph_scratch;
struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl;
@ -217,26 +208,6 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,
CSR_AUTO_FUNC_BOOT_ENA);
/*
* To workaround hardware latency issues during the boot process,
* initialize the LTR to ~250 usec (see ltr_val above).
* The firmware initializes this again later (to a smaller value).
*/
if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
!trans->trans_cfg->integrated) {
iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
} else if (trans->trans_cfg->integrated &&
trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
}
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
else
iwl_set_bit(trans, CSR_GP_CNTRL, CSR_AUTO_FUNC_INIT);
return 0;
err_free_ctxt_info:

View File

@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2017 Intel Deutschland GmbH
* Copyright (C) 2018-2020 Intel Corporation
* Copyright (C) 2018-2021 Intel Corporation
*/
#include "iwl-trans.h"
#include "iwl-fh.h"
@ -240,7 +240,6 @@ int iwl_pcie_ctxt_info_init(struct iwl_trans *trans,
/* kick FW self load */
iwl_write64(trans, CSR_CTXT_INFO_BA, trans_pcie->ctxt_info_dma_addr);
iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1);
/* Context info will be released upon alive or failure to get one */

View File

@ -592,6 +592,7 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
IWL_DEV_INFO(0x4DF0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0, NULL),
IWL_DEV_INFO(0x4DF0, 0x2074, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x4DF0, 0x4070, iwl_ax201_cfg_qu_hr, NULL),
IWL_DEV_INFO(0x4DF0, 0x6074, iwl_ax201_cfg_qu_hr, NULL),
/* So with HR */
IWL_DEV_INFO(0x2725, 0x0090, iwlax211_2ax_cfg_so_gf_a0, NULL),
@ -1040,7 +1041,31 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
IWL_CFG_MAC_TYPE_SO, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
iwl_cfg_so_a0_hr_a0, iwl_ax201_name)
iwl_cfg_so_a0_hr_a0, iwl_ax201_name),
/* So-F with Hr */
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
IWL_CFG_NO_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
iwl_cfg_so_a0_hr_a0, iwl_ax203_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY,
IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
iwl_cfg_so_a0_hr_a0, iwl_ax101_name),
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY,
IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
iwl_cfg_so_a0_hr_a0, iwl_ax201_name),
/* So-F with Gf */
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
IWL_CFG_MAC_TYPE_SOF, IWL_CFG_ANY,
IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY,
IWL_CFG_160, IWL_CFG_ANY, IWL_CFG_NO_CDB,
iwlax211_2ax_cfg_so_gf_a0, iwl_ax211_name),
#endif /* CONFIG_IWLMVM */
};

View File

@ -266,6 +266,34 @@ void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr)
mutex_unlock(&trans_pcie->mutex);
}
static void iwl_pcie_set_ltr(struct iwl_trans *trans)
{
u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |
u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |
u32_encode_bits(250,
CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |
CSR_LTR_LONG_VAL_AD_SNOOP_REQ |
u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,
CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |
u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);
/*
* To workaround hardware latency issues during the boot process,
* initialize the LTR to ~250 usec (see ltr_val above).
* The firmware initializes this again later (to a smaller value).
*/
if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 ||
trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) &&
!trans->trans_cfg->integrated) {
iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val);
} else if (trans->trans_cfg->integrated &&
trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) {
iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL);
iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val);
}
}
int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
const struct fw_img *fw, bool run_in_rfkill)
{
@ -332,6 +360,13 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
if (ret)
goto out;
iwl_pcie_set_ltr(trans);
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);
else
iwl_write_prph(trans, UREG_CPU_INIT_RUN, 1);
/* re-check RF-Kill state since we may have missed the interrupt */
hw_rfkill = iwl_pcie_check_hw_rf_kill(trans);
if (hw_rfkill && !run_in_rfkill)

View File

@ -928,6 +928,7 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
u32 cmd_pos;
const u8 *cmddata[IWL_MAX_CMD_TBS_PER_TFD];
u16 cmdlen[IWL_MAX_CMD_TBS_PER_TFD];
unsigned long flags;
if (WARN(!trans->wide_cmd_header &&
group_id > IWL_ALWAYS_LONG_GROUP,
@ -1011,10 +1012,10 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
goto free_dup_buf;
}
spin_lock_bh(&txq->lock);
spin_lock_irqsave(&txq->lock, flags);
if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
spin_unlock_bh(&txq->lock);
spin_unlock_irqrestore(&txq->lock, flags);
IWL_ERR(trans, "No space in command queue\n");
iwl_op_mode_cmd_queue_full(trans->op_mode);
@ -1174,7 +1175,7 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
unlock_reg:
spin_unlock(&trans_pcie->reg_lock);
out:
spin_unlock_bh(&txq->lock);
spin_unlock_irqrestore(&txq->lock, flags);
free_dup_buf:
if (idx < 0)
kfree(dup_buf);

View File

@ -135,10 +135,10 @@
#define MT_WTBLON_TOP_BASE 0x34000
#define MT_WTBLON_TOP(ofs) (MT_WTBLON_TOP_BASE + (ofs))
#define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x0)
#define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x200)
#define MT_WTBLON_TOP_WDUCR_GROUP GENMASK(2, 0)
#define MT_WTBL_UPDATE MT_WTBLON_TOP(0x030)
#define MT_WTBL_UPDATE MT_WTBLON_TOP(0x230)
#define MT_WTBL_UPDATE_WLAN_IDX GENMASK(9, 0)
#define MT_WTBL_UPDATE_ADM_COUNT_CLEAR BIT(12)
#define MT_WTBL_UPDATE_BUSY BIT(31)

View File

@@ -12,6 +12,7 @@
 #include <net/cfg80211.h>
 #include <net/rtnetlink.h>
 #include <linux/etherdevice.h>
+#include <linux/math64.h>
 #include <linux/module.h>
 
 static struct wiphy *common_wiphy;
@@ -168,11 +169,11 @@ static void virt_wifi_scan_result(struct work_struct *work)
 					       scan_result.work);
 	struct wiphy *wiphy = priv_to_wiphy(priv);
 	struct cfg80211_scan_info scan_info = { .aborted = false };
+	u64 tsf = div_u64(ktime_get_boottime_ns(), 1000);
 
 	informed_bss = cfg80211_inform_bss(wiphy, &channel_5ghz,
 					   CFG80211_BSS_FTYPE_PRESP,
-					   fake_router_bssid,
-					   ktime_get_boottime_ns(),
+					   fake_router_bssid, tsf,
 					   WLAN_CAPABILITY_ESS, 0,
 					   (void *)&ssid, sizeof(ssid),
 					   DBM_TO_MBM(-50), GFP_KERNEL);
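
Two things happen here: the TSF passed to cfg80211_inform_bss() is converted
from nanoseconds to the microseconds a BSS TSF is expected to be in, and the
division goes through div_u64() (hence the new <linux/math64.h> include). The
helper matters because a plain 64-bit '/' on 32-bit targets compiles into a
libgcc call such as __udivdi3, which the kernel does not provide:

    /* sketch: u64 nanoseconds -> u64 microseconds, 32-bit safe */
    u64 tsf_usec = div_u64(ktime_get_boottime_ns(), 1000);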


@@ -476,7 +476,6 @@ struct virtchnl_rss_key {
 	u16 vsi_id;
 	u16 key_len;
 	u8 key[1];	/* RSS hash key, packed bytes */
-	u8 pad[1];
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
@@ -485,7 +484,6 @@ struct virtchnl_rss_lut {
 	u16 vsi_id;
 	u16 lut_entries;
 	u8 lut[1];	/* RSS lookup table */
-	u8 pad[1];
 };
 
 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
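
VIRTCHNL_CHECK_STRUCT_LEN() is a compile-time assertion on sizeof(), and it
still passes with the explicit pad member gone: two u16s followed by
u8 key[1] total 5 bytes of members, and the compiler's trailing alignment
padding rounds the struct up to 6 anyway. A userspace sketch of the same
invariant (mirroring the layout, not the real header):

    #include <assert.h>
    #include <stdint.h>

    struct rss_key_sketch {	/* mirrors virtchnl_rss_key, sans pad */
        uint16_t vsi_id;
        uint16_t key_len;
        uint8_t  key[1];	/* variable-length key bytes follow */
    };

    /* 2 + 2 + 1 = 5 bytes of members, padded to u16 alignment: */
    static_assert(sizeof(struct rss_key_sketch) == 6, "layout check");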


@@ -40,6 +40,7 @@ struct bpf_local_storage;
 struct bpf_local_storage_map;
 struct kobject;
 struct mem_cgroup;
+struct module;
 
 extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;
@@ -623,6 +624,7 @@ struct bpf_trampoline {
 	/* Executable image of trampoline */
 	struct bpf_tramp_image *cur_image;
 	u64 selector;
+	struct module *mod;
 };
 
 struct bpf_attach_target_info {
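
The new mod field carries the "take module reference for trampoline in
module" fix: when the attach target lives in a module, the trampoline pins
that module so it cannot be unloaded while the trampoline still jumps into
its text. The underlying pattern, sketched with a hypothetical target_addr:

    struct module *mod;

    preempt_disable();		/* __module_text_address() requires it */
    mod = __module_text_address((unsigned long)target_addr);
    if (mod && !try_module_get(mod))
        mod = NULL;		/* module is on its way out; fail the attach */
    preempt_enable();
    /* ... keep the reference for the trampoline's lifetime ... */
    if (mod)
        module_put(mod);	/* on trampoline teardown */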


@@ -87,9 +87,7 @@ u32 ethtool_op_get_link(struct net_device *dev);
 int ethtool_op_get_ts_info(struct net_device *dev, struct ethtool_ts_info *eti);
 
-/**
- * struct ethtool_link_ext_state_info - link extended state and substate.
- */
+/* Link extended state and substate. */
 struct ethtool_link_ext_state_info {
 	enum ethtool_link_ext_state link_ext_state;
 	union {
@@ -129,7 +127,6 @@ struct ethtool_link_ksettings {
 		__ETHTOOL_DECLARE_LINK_MODE_MASK(lp_advertising);
 	} link_modes;
 	u32 lanes;
-	enum ethtool_link_mode_bit_indices link_mode;
 };
 
 /**
@@ -292,6 +289,9 @@ struct ethtool_pause_stats {
  * do not attach ext_substate attribute to netlink message). If link_ext_state
  * and link_ext_substate are unknown, return -ENODATA. If not implemented,
  * link_ext_state and link_ext_substate will not be sent to userspace.
+ * @get_eeprom_len: Read range of EEPROM addresses for validation of
+ *	@get_eeprom and @set_eeprom requests.
+ *	Returns 0 if device does not support EEPROM access.
  * @get_eeprom: Read data from the device EEPROM.
  *	Should fill in the magic field. Don't need to check len for zero
  *	or wraparound. Fill in the data argument with the eeprom values
@@ -384,6 +384,8 @@ struct ethtool_pause_stats {
  * @get_module_eeprom: Get the eeprom information from the plug-in module
  * @get_eee: Get Energy-Efficient (EEE) supported and status.
  * @set_eee: Set EEE status (enable/disable) as well as LPI timers.
+ * @get_tunable: Read the value of a driver / device tunable.
+ * @set_tunable: Set the value of a driver / device tunable.
  * @get_per_queue_coalesce: Get interrupt coalescing parameters per queue.
  *	It must check that the given queue number is valid. If neither a RX nor
  *	a TX queue has this number, return -EINVAL. If only a RX queue or a TX
@@ -547,8 +549,8 @@ struct phy_tdr_config;
  * @get_sset_count: Get number of strings that @get_strings will write.
  * @get_strings: Return a set of strings that describe the requested objects
  * @get_stats: Return extended statistics about the PHY device.
- * @start_cable_test - Start a cable test
- * @start_cable_test_tdr - Start a Time Domain Reflectometry cable test
+ * @start_cable_test: Start a cable test
+ * @start_cable_test_tdr: Start a Time Domain Reflectometry cable test
  *
  * All operations are optional (i.e. the function pointer may be set to %NULL)
  * and callers must take this into account. Callers must hold the RTNL lock.
@@ -571,4 +573,12 @@ struct ethtool_phy_ops {
  */
 void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops);
 
+/**
+ * ethtool_params_from_link_mode - Derive link parameters from a given link mode
+ * @link_ksettings: Link parameters to be derived from the link mode
+ * @link_mode: Link mode
+ */
+void
+ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings,
+			      enum ethtool_link_mode_bit_indices link_mode);
 #endif /* _LINUX_ETHTOOL_H */
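
ethtool_params_from_link_mode() is the replacement for the removed link_mode
field: instead of drivers exporting a raw mode index for the core to
interpret later, speed, duplex and lane count are derived from the mode table
when the ksettings are filled in. A hypothetical driver callback (a sketch,
not a real driver) might use it as:

    static int drv_get_link_ksettings(struct net_device *dev,
                                      struct ethtool_link_ksettings *ks)
    {
        /* mode chosen for illustration only */
        ethtool_params_from_link_mode(ks,
                ETHTOOL_LINK_MODE_25000baseCR_Full_BIT);
        return 0;
    }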


@@ -437,11 +437,11 @@ struct mlx5_ifc_flow_table_prop_layout_bits {
 	u8 reserved_at_60[0x18];
 	u8 log_max_ft_num[0x8];
 
-	u8 reserved_at_80[0x18];
+	u8 reserved_at_80[0x10];
+	u8 log_max_flow_counter[0x8];
 	u8 log_max_destination[0x8];
 
-	u8 log_max_flow_counter[0x8];
-	u8 reserved_at_a8[0x10];
+	u8 reserved_at_a0[0x18];
 	u8 log_max_flow[0x8];
 
 	u8 reserved_at_c0[0x40];
@@ -8835,6 +8835,8 @@ struct mlx5_ifc_pplm_reg_bits {
 	u8 fec_override_admin_100g_2x[0x10];
 	u8 fec_override_admin_50g_1x[0x10];
+
+	u8 reserved_at_140[0x140];
 };
 
 struct mlx5_ifc_ppcnt_reg_bits {
@@ -10198,7 +10200,7 @@ struct mlx5_ifc_pbmc_reg_bits {
 	struct mlx5_ifc_bufferx_reg_bits buffer[10];
 
-	u8 reserved_at_2e0[0x40];
+	u8 reserved_at_2e0[0x80];
 };
 
 struct mlx5_ifc_qtct_reg_bits {
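
mlx5_ifc layouts are self-checking by convention: array sizes are counts in
bits, and each reserved field's name encodes its starting bit offset, so the
fields must tile the register exactly. The corrected flow_table_prop block
works out as:

    0x80 + 0x10 = 0x90   log_max_flow_counter
    0x90 + 0x08 = 0x98   log_max_destination
    0x98 + 0x08 = 0xa0   reserved_at_a0
    0xa0 + 0x18 = 0xb8   log_max_flow
    0xb8 + 0x08 = 0xc0   reserved_at_c0

i.e. log_max_flow_counter moves from bit 0xa0 to bit 0x90 to match the
device. The PPLM and PBMC hunks only widen trailing reserved space so the
structs cover the full hardware register size.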


@@ -349,8 +349,13 @@ static inline void sk_psock_update_proto(struct sock *sk,
 static inline void sk_psock_restore_proto(struct sock *sk,
 					  struct sk_psock *psock)
 {
-	sk->sk_prot->unhash = psock->saved_unhash;
 	if (inet_csk_has_ulp(sk)) {
+		/* TLS does not have an unhash proto in SW cases, but we need
+		 * to ensure we stop using the sock_map unhash routine because
+		 * the associated psock is being removed. So use the original
+		 * unhash handler.
+		 */
+		WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash);
 		tcp_update_ulp(sk, psock->sk_proto, psock->saved_write_space);
 	} else {
 		sk->sk_write_space = psock->saved_write_space;
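
The plain store also becomes WRITE_ONCE() because the unhash pointer may be
read concurrently without this writer's lock; the annotation pairs with
READ_ONCE() on the reader side and rules out compiler tearing or reordering.
Generic shape of the pattern (a sketch, not this file's code):

    struct ops { void (*handler)(void); };
    static struct ops live_ops;

    static void writer(void (*fn)(void))	/* called with lock held */
    {
        WRITE_ONCE(live_ops.handler, fn);
    }

    static void reader(void)			/* lockless path */
    {
        void (*fn)(void) = READ_ONCE(live_ops.handler);

        if (fn)
            fn();
    }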


@@ -62,15 +62,21 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 			return -EINVAL;
 	}
 
+	skb_reset_mac_header(skb);
+
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
-		u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
-		u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
+		u32 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
+		u32 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);
+		u32 needed = start + max_t(u32, thlen, off + sizeof(__sum16));
+
+		if (!pskb_may_pull(skb, needed))
+			return -EINVAL;
 
 		if (!skb_partial_csum_set(skb, start, off))
 			return -EINVAL;
 
 		p_off = skb_transport_offset(skb) + thlen;
-		if (p_off > skb_headlen(skb))
+		if (!pskb_may_pull(skb, p_off))
 			return -EINVAL;
 	} else {
 		/* gso packets without NEEDS_CSUM do not set transport_offset.
@@ -100,14 +106,14 @@ retry:
 		}
 
 		p_off = keys.control.thoff + thlen;
-		if (p_off > skb_headlen(skb) ||
+		if (!pskb_may_pull(skb, p_off) ||
 		    keys.basic.ip_proto != ip_proto)
 			return -EINVAL;
 
 		skb_set_transport_header(skb, keys.control.thoff);
 	} else if (gso_type) {
 		p_off = thlen;
-		if (p_off > skb_headlen(skb))
+		if (!pskb_may_pull(skb, p_off))
 			return -EINVAL;
 	}
 }
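
The recurring change is 'p_off > skb_headlen(skb)' becoming
'!pskb_may_pull(skb, p_off)'. skb_headlen() only reports how many bytes
already sit in the linear head, so the old check rejected packets whose
headers were merely split across paged fragments; pskb_may_pull() pulls such
bytes into the head when needed. The NEEDS_CSUM path additionally validates
that the device-supplied csum_start/csum_offset fit in the packet before
trusting them. The rule of thumb, as a sketch:

    /* before dereferencing hdr_len bytes through skb->data: */
    static int need_headers(struct sk_buff *skb, unsigned int hdr_len)
    {
        if (!pskb_may_pull(skb, hdr_len))
            return -EINVAL;	/* packet shorter than hdr_len */
        /* skb->data now has hdr_len contiguous, safe bytes */
        return 0;
    }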


@@ -170,12 +170,7 @@ void tcf_idr_insert_many(struct tc_action *actions[]);
 void tcf_idr_cleanup(struct tc_action_net *tn, u32 index);
 int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index,
 			struct tc_action **a, int bind);
-int __tcf_idr_release(struct tc_action *a, bool bind, bool strict);
-
-static inline int tcf_idr_release(struct tc_action *a, bool bind)
-{
-	return __tcf_idr_release(a, bind, false);
-}
+int tcf_idr_release(struct tc_action *a, bool bind);
 
 int tcf_register_action(struct tc_action_ops *a, struct pernet_operations *ops);
 int tcf_unregister_action(struct tc_action_ops *a,
@@ -185,7 +180,7 @@ int tcf_action_exec(struct sk_buff *skb, struct tc_action **actions,
 		    int nr_actions, struct tcf_result *res);
 int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla,
 		    struct nlattr *est, char *name, int ovr, int bind,
-		    struct tc_action *actions[], size_t *attr_size,
+		    struct tc_action *actions[], int init_res[], size_t *attr_size,
 		    bool rtnl_held, struct netlink_ext_ack *extack);
 struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
 					 bool rtnl_held,
@@ -193,7 +188,8 @@ struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla,
 struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
 				    struct nlattr *nla, struct nlattr *est,
 				    char *name, int ovr, int bind,
-				    struct tc_action_ops *ops, bool rtnl_held,
+				    struct tc_action_ops *a_o, int *init_res,
+				    bool rtnl_held,
 				    struct netlink_ext_ack *extack);
 int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind,
 		    int ref, bool terse);
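
The new init_res[] argument is how the action reference-counting fixes thread
creation state back to the caller: for each action, tcf_action_init_1()
records whether it created a new action or bound to an existing one
(ACT_P_CREATED vs. 0), so overwrite and error paths can release exactly the
references they took. A hypothetical caller sketch:

    int init_res[TCA_ACT_MAX_PRIO] = {};
    size_t attr_size = 0;
    int err;

    err = tcf_action_init(net, tp, nla, est, NULL /* name */, ovr, bind,
                          actions, init_res, &attr_size, true, extack);
    /* on a later failure: drop refs for bound actions, fully release
     * the ones init_res[] marks as freshly created */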


@@ -72,7 +72,9 @@ struct netns_xfrm {
 #if IS_ENABLED(CONFIG_IPV6)
 	struct dst_ops xfrm6_dst_ops;
 #endif
-	spinlock_t xfrm_state_lock;
+	spinlock_t		xfrm_state_lock;
+	seqcount_spinlock_t	xfrm_state_hash_generation;
+
 	spinlock_t xfrm_policy_lock;
 	struct mutex xfrm_cfg_mutex;
 };
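
This is the "make hash generation lock per network namespace" fix: the
seqcount is written under xfrm_state_lock, which is per-netns, so a single
global seqcount could be write-entered by two namespaces at once and lose its
guarantees. Moving it into netns_xfrm (as a seqcount_spinlock_t, which
documents the serializing lock) restores writer exclusion. The read side
follows the usual seqcount retry pattern:

    unsigned int seq;

    do {
        seq = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
        /* ... walk the state hash tables ... */
    } while (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, seq));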

[diff truncated: remaining changed files not shown]