Networking fixes for 5.17-rc2, including fixes from netfilter and can.

Current release - new code bugs:
 
  - tcp: add a missing sk_defer_free_flush() in tcp_splice_read()
 
  - tcp: add a stub for sk_defer_free_flush(), fix CONFIG_INET=n
 
  - nf_tables: set last expression in register tracking area
 
  - nft_connlimit: fix memleak if nf_ct_netns_get() fails
 
  - mptcp: fix removing ids bitmap setting
 
  - bonding: use rcu_dereference_rtnl when getting active slave
 
  - fix three cases of sleep in atomic context in drivers: lan966x, gve
 
  - handful of build fixes for esoteric drivers after netdev->dev_addr
    was made const
 
 Previous releases - regressions:
 
  - revert "ipv6: Honor all IPv6 PIO Valid Lifetime values", it broke
    Linux compatibility with USGv6 tests
 
  - procfs: show net device bound packet types
 
  - ipv4: fix ip option filtering for locally generated fragments
 
  - phy: broadcom: hook up soft_reset for BCM54616S
 
 Previous releases - always broken:
 
  - ipv4: raw: lock the socket in raw_bind()
 
  - ipv4: decrease the use of the shared IPID generator to reduce the
    chance of attackers guessing the values
 
  - procfs: fix cross-netns information leakage in /proc/net/ptype
 
  - ethtool: fix link extended state for big endian
 
  - bridge: vlan: fix single net device option dumping
 
  - ping: fix the sk_bound_dev_if match in ping_lookup
 
 Signed-off-by: Jakub Kicinski <kuba@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmHy4mMACgkQMUZtbf5S
 IrvOZg/9HyOFAJrCYBlgA3zskHBdqYOGn9M3LCIevBcrCzQeigT+U1uWCINfBn+H
 DmsljeYKTicHZ38+HjdNXmzdnMqHtU+iJl4Ep1mcDNywygofW8JcS2Nf0n6Y+hK6
 nzyEa23DBt9UAiLmGXUTIoJwEhDRbuL/eH1/ZkkPLG7GtShtEDAKHg+dJBgHbYgJ
 0MQs3Q4s6AQ1PYOC0Z0zByhpSrAo2c4X/tr6g2ExNxU0vnydUbjIME0a5clFULr+
 ziVeOo4e83FINPaZiYAXEDbMGUC0z+rp1RoGsgRCdTnixi5BclkmEeGRaChYJHTZ
 T7tfIC2H0vZHu5/pAXFqwEHiRbminLv9jLkvA1/J67jbnpoNWTLD2jkuIWFlaY/Z
 xDm7LnVBB1CdLmXYo2ItSC/8ws9GANpJOq/vFvm+uOYZNKUVctfQ5viA3+hOSULC
 6BJHC0m5UminHZPVge9s1XZClarHK4jMMTH9Du2sHLsl3fedNxbgvcVPFdHswLdF
 uYiUGMSrIXuQjXw6SNmR4/voJgzikvYhT+jwMn4vTeWoFQFi5eNUch0MPfUImlXG
 e3T6WJHrOY3yJFyWQQ9GGLStchD72+iGq2uWLfOIyu9NRKCNBj4Kkm3bUvfqYp5b
 d5sP/nl93o3um4WskxB/fDLyhSXWjprgM9mKI45ilPhUC8bWQyo=
 =mwR3
 -----END PGP SIGNATURE-----

Merge tag 'net-5.17-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter and can.

  Current release - new code bugs:

   - tcp: add a missing sk_defer_free_flush() in tcp_splice_read()

   - tcp: add a stub for sk_defer_free_flush(), fix CONFIG_INET=n

   - nf_tables: set last expression in register tracking area

   - nft_connlimit: fix memleak if nf_ct_netns_get() fails

   - mptcp: fix removing ids bitmap setting

   - bonding: use rcu_dereference_rtnl when getting active slave

   - fix three cases of sleep in atomic context in drivers: lan966x, gve

   - handful of build fixes for esoteric drivers after netdev->dev_addr
     was made const

  Previous releases - regressions:

   - revert "ipv6: Honor all IPv6 PIO Valid Lifetime values", it broke
     Linux compatibility with USGv6 tests

   - procfs: show net device bound packet types

   - ipv4: fix ip option filtering for locally generated fragments

   - phy: broadcom: hook up soft_reset for BCM54616S

  Previous releases - always broken:

   - ipv4: raw: lock the socket in raw_bind()

   - ipv4: decrease the use of the shared IPID generator to reduce the
     chance of attackers guessing the values

   - procfs: fix cross-netns information leakage in /proc/net/ptype

   - ethtool: fix link extended state for big endian

   - bridge: vlan: fix single net device option dumping

   - ping: fix the sk_bound_dev_if match in ping_lookup"

* tag 'net-5.17-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (86 commits)
  net: bridge: vlan: fix memory leak in __allowed_ingress
  net: socket: rename SKB_DROP_REASON_SOCKET_FILTER
  ipv4: remove sparse error in ip_neigh_gw4()
  ipv4: avoid using shared IP generator for connected sockets
  ipv4: tcp: send zero IPID in SYNACK messages
  ipv4: raw: lock the socket in raw_bind()
  MAINTAINERS: add missing IPv4/IPv6 header paths
  MAINTAINERS: add more files to eth PHY
  net: stmmac: dwmac-sun8i: use return val of readl_poll_timeout()
  net: bridge: vlan: fix single net device option dumping
  net: stmmac: skip only stmmac_ptp_register when resume from suspend
  net: stmmac: configure PTP clock source prior to PTP initialization
  Revert "ipv6: Honor all IPv6 PIO Valid Lifetime values"
  connector/cn_proc: Use task_is_in_init_pid_ns()
  pid: Introduce helper task_is_in_init_pid_ns()
  gve: Fix GFP flags when allocing pages
  net: lan966x: Fix sleep in atomic context when updating MAC table
  net: lan966x: Fix sleep in atomic context when injecting frames
  ethernet: seeq/ether3: don't write directly to netdev->dev_addr
  ethernet: 8390/etherh: don't write directly to netdev->dev_addr
  ...
Linus Torvalds 2022-01-27 20:58:39 +02:00
commit 23a46422c5
94 changed files with 810 additions and 396 deletions


@ -70,6 +70,7 @@ Boris Brezillon <bbrezillon@kernel.org> <boris.brezillon@bootlin.com>
Boris Brezillon <bbrezillon@kernel.org> <boris.brezillon@free-electrons.com>
Brian Avery <b.avery@hp.com>
Brian King <brking@us.ibm.com>
Brian Silverman <bsilver16384@gmail.com> <brian.silverman@bluerivertech.com>
Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
Chao Yu <chao@kernel.org> <chao2.yu@samsung.com>


@ -31,7 +31,7 @@ tcan4x5x: tcan4x5x@0 {
#address-cells = <1>;
#size-cells = <1>;
spi-max-frequency = <10000000>;
bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
bosch,mram-cfg = <0x0 0 0 16 0 0 1 1>;
interrupt-parent = <&gpio1>;
interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
device-state-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;


@ -190,8 +190,9 @@ M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
S: Maintained
W: https://wireless.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
Q: https://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git
F: Documentation/driver-api/80211/cfg80211.rst
F: Documentation/networking/regulatory.rst
F: include/linux/ieee80211.h
@ -7208,8 +7209,10 @@ F: drivers/net/mdio/of_mdio.c
F: drivers/net/pcs/
F: drivers/net/phy/
F: include/dt-bindings/net/qca-ar803x.h
F: include/linux/linkmode.h
F: include/linux/*mdio*.h
F: include/linux/mdio/*.h
F: include/linux/mii.h
F: include/linux/of_net.h
F: include/linux/phy.h
F: include/linux/phy_fixed.h
@ -11366,8 +11369,9 @@ M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
S: Maintained
W: https://wireless.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
Q: https://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git
F: Documentation/networking/mac80211-injection.rst
F: Documentation/networking/mac80211_hwsim/mac80211_hwsim.rst
F: drivers/net/wireless/mac80211_hwsim.[ch]
@ -13374,9 +13378,10 @@ NETWORKING DRIVERS (WIRELESS)
M: Kalle Valo <kvalo@kernel.org>
L: linux-wireless@vger.kernel.org
S: Maintained
Q: http://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next.git
W: https://wireless.wiki.kernel.org/
Q: https://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git
F: Documentation/devicetree/bindings/net/wireless/
F: drivers/net/wireless/
@ -13449,7 +13454,11 @@ L: netdev@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
F: arch/x86/net/*
F: include/linux/ip.h
F: include/linux/ipv6*
F: include/net/fib*
F: include/net/ip*
F: include/net/route.h
F: net/ipv4/
F: net/ipv6/
@ -13510,10 +13519,6 @@ F: include/net/tls.h
F: include/uapi/linux/tls.h
F: net/tls/*
NETWORKING [WIRELESS]
L: linux-wireless@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-wireless/list/
NETXEN (1/10) GbE SUPPORT
M: Manish Chopra <manishc@marvell.com>
M: Rahul Verma <rahulv@marvell.com>
@ -16532,8 +16537,9 @@ M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
S: Maintained
W: https://wireless.wiki.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
Q: https://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git
F: Documentation/ABI/stable/sysfs-class-rfkill
F: Documentation/driver-api/rfkill.rst
F: include/linux/rfkill.h


@ -358,7 +358,7 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
* other namespaces.
*/
if ((current_user_ns() != &init_user_ns) ||
(task_active_pid_ns(current) != &init_pid_ns))
!task_is_in_init_pid_ns(current))
return;
/* Can only change if privileged. */


@ -4133,9 +4133,7 @@ static int bond_eth_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cm
fallthrough;
case SIOCGHWTSTAMP:
rcu_read_lock();
real_dev = bond_option_active_slave_get_rcu(bond);
rcu_read_unlock();
if (!real_dev)
return -EOPNOTSUPP;
@ -5382,9 +5380,7 @@ static int bond_ethtool_get_ts_info(struct net_device *bond_dev,
struct net_device *real_dev;
struct phy_device *phydev;
rcu_read_lock();
real_dev = bond_option_active_slave_get_rcu(bond);
rcu_read_unlock();
if (real_dev) {
ops = real_dev->ethtool_ops;
phydev = real_dev->phydev;


@ -296,6 +296,7 @@ static_assert(sizeof(struct flexcan_regs) == 0x4 * 18 + 0xfb8);
static const struct flexcan_devtype_data fsl_mcf5441x_devtype_data = {
.quirks = FLEXCAN_QUIRK_BROKEN_PERR_STATE |
FLEXCAN_QUIRK_NR_IRQ_3 | FLEXCAN_QUIRK_NR_MB_16 |
FLEXCAN_QUIRK_SUPPPORT_RX_MAILBOX |
FLEXCAN_QUIRK_SUPPPORT_RX_FIFO,
};


@ -21,7 +21,7 @@
* Below is some version info we got:
* SOC      Version  IP-Version  Glitch- [TR]WRN_INT IRQ Err Memory err RTR rece-   FD Mode    MB
*                               Filter? connected?  Passive detection  ption in MB Supported?
* MCF5441X FlexCAN2 ?           no      yes         no      no         yes         no         16
* MCF5441X FlexCAN2 ?           no      yes         no      no         no          no         16
* MX25     FlexCAN2 03.00.00.00 no      no          no      no         no          no         64
* MX28     FlexCAN2 03.00.04.00 yes     yes         no      no         no          no         64
* MX35     FlexCAN2 03.00.00.00 no      no          no      no         no          no         64


@ -336,6 +336,9 @@ m_can_fifo_read(struct m_can_classdev *cdev,
u32 addr_offset = cdev->mcfg[MRAM_RXF0].off + fgi * RXF0_ELEMENT_SIZE +
offset;
if (val_count == 0)
return 0;
return cdev->ops->read_fifo(cdev, addr_offset, val, val_count);
}
@ -346,6 +349,9 @@ m_can_fifo_write(struct m_can_classdev *cdev,
u32 addr_offset = cdev->mcfg[MRAM_TXB].off + fpi * TXB_ELEMENT_SIZE +
offset;
if (val_count == 0)
return 0;
return cdev->ops->write_fifo(cdev, addr_offset, val, val_count);
}


@ -12,7 +12,7 @@
#define TCAN4X5X_SPI_INSTRUCTION_WRITE (0x61 << 24)
#define TCAN4X5X_SPI_INSTRUCTION_READ (0x41 << 24)
#define TCAN4X5X_MAX_REGISTER 0x8ffc
#define TCAN4X5X_MAX_REGISTER 0x87fc
static int tcan4x5x_regmap_gather_write(void *context,
const void *reg, size_t reg_len,


@ -2278,6 +2278,7 @@ typhoon_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
struct net_device *dev;
struct typhoon *tp;
int card_id = (int) ent->driver_data;
u8 addr[ETH_ALEN] __aligned(4);
void __iomem *ioaddr;
void *shared;
dma_addr_t shared_dma;
@ -2409,8 +2410,9 @@ typhoon_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
goto error_out_reset;
}
*(__be16 *)&dev->dev_addr[0] = htons(le16_to_cpu(xp_resp[0].parm1));
*(__be32 *)&dev->dev_addr[2] = htonl(le32_to_cpu(xp_resp[0].parm2));
*(__be16 *)&addr[0] = htons(le16_to_cpu(xp_resp[0].parm1));
*(__be32 *)&addr[2] = htonl(le32_to_cpu(xp_resp[0].parm2));
eth_hw_addr_set(dev, addr);
if (!is_valid_ether_addr(dev->dev_addr)) {
err_msg = "Could not obtain valid ethernet address, aborting";
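
Several hunks in this pull (typhoon above, and etherh, dec_lance, ether1, ether3, sbmac, mpc52xx_fec below) share one pattern: with netdev->dev_addr now const, a driver assembles the MAC address in a local buffer and publishes it in a single eth_hw_addr_set() call. A minimal userspace sketch of that flow, with the struct and helper below as simplified stand-ins for the kernel definitions, not the real API:

    #include <stdio.h>
    #include <string.h>

    #define ETH_ALEN 6

    /* Simplified stand-in for struct net_device: drivers must not
     * write dev_addr directly anymore.
     */
    struct net_device {
        unsigned char dev_addr[ETH_ALEN];
    };

    /* Stand-in for the kernel's eth_hw_addr_set(): the one writer. */
    static void eth_hw_addr_set(struct net_device *dev,
                                const unsigned char *addr)
    {
        memcpy(dev->dev_addr, addr, ETH_ALEN);
    }

    int main(void)
    {
        struct net_device dev;
        unsigned char addr[ETH_ALEN];

        /* Build the address in a local buffer first... */
        for (int i = 0; i < ETH_ALEN; i++)
            addr[i] = 0x02 + i; /* locally administered example */

        /* ...then publish it in one call instead of poking
         * dev.dev_addr byte by byte.
         */
        eth_hw_addr_set(&dev, addr);

        printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
               dev.dev_addr[0], dev.dev_addr[1], dev.dev_addr[2],
               dev.dev_addr[3], dev.dev_addr[4], dev.dev_addr[5]);
        return 0;
    }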


@ -655,6 +655,7 @@ etherh_probe(struct expansion_card *ec, const struct ecard_id *id)
struct ei_device *ei_local;
struct net_device *dev;
struct etherh_priv *eh;
u8 addr[ETH_ALEN];
int ret;
ret = ecard_request_resources(ec);
@ -724,12 +725,13 @@ etherh_probe(struct expansion_card *ec, const struct ecard_id *id)
spin_lock_init(&ei_local->page_lock);
if (ec->cid.product == PROD_ANT_ETHERM) {
etherm_addr(dev->dev_addr);
etherm_addr(addr);
ei_local->reg_offset = etherm_regoffsets;
} else {
etherh_addr(dev->dev_addr, ec);
etherh_addr(addr, ec);
ei_local->reg_offset = etherh_regoffsets;
}
eth_hw_addr_set(dev, addr);
ei_local->name = dev->name;
ei_local->word16 = 1;


@ -1032,6 +1032,7 @@ static int dec_lance_probe(struct device *bdev, const int type)
int i, ret;
unsigned long esar_base;
unsigned char *esar;
u8 addr[ETH_ALEN];
const char *desc;
if (dec_lance_debug && version_printed++ == 0)
@ -1228,7 +1229,8 @@ static int dec_lance_probe(struct device *bdev, const int type)
break;
}
for (i = 0; i < 6; i++)
dev->dev_addr[i] = esar[i * 4];
addr[i] = esar[i * 4];
eth_hw_addr_set(dev, addr);
printk("%s: %s, addr = %pM, irq = %d\n",
name, desc, dev->dev_addr, dev->irq);


@ -826,7 +826,6 @@ int aq_filters_vlans_update(struct aq_nic_s *aq_nic)
struct aq_hw_s *aq_hw = aq_nic->aq_hw;
int hweight = 0;
int err = 0;
int i;
if (unlikely(!aq_hw_ops->hw_filter_vlan_set))
return -EOPNOTSUPP;
@ -837,8 +836,7 @@ int aq_filters_vlans_update(struct aq_nic_s *aq_nic)
aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans);
if (aq_nic->ndev->features & NETIF_F_HW_VLAN_CTAG_FILTER) {
for (i = 0; i < BITS_TO_LONGS(VLAN_N_VID); i++)
hweight += hweight_long(aq_nic->active_vlans[i]);
hweight = bitmap_weight(aq_nic->active_vlans, VLAN_N_VID);
err = aq_hw_ops->hw_filter_vlan_ctrl(aq_hw, false);
if (err)
@ -871,7 +869,7 @@ int aq_filters_vlan_offload_off(struct aq_nic_s *aq_nic)
struct aq_hw_s *aq_hw = aq_nic->aq_hw;
int err = 0;
memset(aq_nic->active_vlans, 0, sizeof(aq_nic->active_vlans));
bitmap_zero(aq_nic->active_vlans, VLAN_N_VID);
aq_fvlan_rebuild(aq_nic, aq_nic->active_vlans,
aq_nic->aq_hw_rx_fltrs.fl2.aq_vlans);
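
The replacement is behavior-preserving: summing hweight_long() over every word of a bitmap equals one popcount over the whole map, which is what bitmap_weight() computes. A userspace sketch of the equivalence (the helper below is a simplified model that, unlike the kernel's, does not mask trailing bits of a partial last word; __builtin_popcountl is a GCC/Clang builtin):

    #include <assert.h>
    #include <limits.h>
    #include <stdio.h>

    #define VLAN_N_VID 4096
    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* Simplified bitmap_weight(): one popcount over the whole map. */
    static int bitmap_weight(const unsigned long *map, unsigned int bits)
    {
        int w = 0;

        for (unsigned int i = 0; i < BITS_TO_LONGS(bits); i++)
            w += __builtin_popcountl(map[i]);
        return w;
    }

    int main(void)
    {
        unsigned long vlans[BITS_TO_LONGS(VLAN_N_VID)] = { 0 };
        int hweight = 0;

        vlans[0] = 0x5;  /* VLANs 0 and 2 set */
        vlans[3] = 0x80; /* one more bit further in */

        /* Old style: accumulate per-word weights in a loop. */
        for (unsigned int i = 0; i < BITS_TO_LONGS(VLAN_N_VID); i++)
            hweight += __builtin_popcountl(vlans[i]);

        /* New style: a single call. */
        assert(hweight == bitmap_weight(vlans, VLAN_N_VID));
        printf("weight = %d\n", hweight);
        return 0;
    }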


@ -2183,9 +2183,7 @@ static int sbmac_init(struct platform_device *pldev, long long base)
ea_reg >>= 8;
}
for (i = 0; i < 6; i++) {
dev->dev_addr[i] = eaddr[i];
}
eth_hw_addr_set(dev, eaddr);
/*
* Initialize context (get pointers to registers and stuff), then


@ -99,13 +99,13 @@ static void mpc52xx_fec_tx_timeout(struct net_device *dev, unsigned int txqueue)
netif_wake_queue(dev);
}
static void mpc52xx_fec_set_paddr(struct net_device *dev, u8 *mac)
static void mpc52xx_fec_set_paddr(struct net_device *dev, const u8 *mac)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
struct mpc52xx_fec __iomem *fec = priv->fec;
out_be32(&fec->paddr1, *(u32 *)(&mac[0]));
out_be32(&fec->paddr2, (*(u16 *)(&mac[4]) << 16) | FEC_PADDR2_TYPE);
out_be32(&fec->paddr1, *(const u32 *)(&mac[0]));
out_be32(&fec->paddr2, (*(const u16 *)(&mac[4]) << 16) | FEC_PADDR2_TYPE);
}
static int mpc52xx_fec_set_mac_address(struct net_device *dev, void *addr)
@ -893,13 +893,15 @@ static int mpc52xx_fec_probe(struct platform_device *op)
rv = of_get_ethdev_address(np, ndev);
if (rv) {
struct mpc52xx_fec __iomem *fec = priv->fec;
u8 addr[ETH_ALEN] __aligned(4);
/*
* If the MAC address is not provided via DT then read
* it back from the controller regs
*/
*(u32 *)(&ndev->dev_addr[0]) = in_be32(&fec->paddr1);
*(u16 *)(&ndev->dev_addr[4]) = in_be32(&fec->paddr2) >> 16;
*(u32 *)(&addr[0]) = in_be32(&fec->paddr1);
*(u16 *)(&addr[4]) = in_be32(&fec->paddr2) >> 16;
eth_hw_addr_set(ndev, addr);
}
/*


@ -843,7 +843,7 @@ static inline bool gve_is_gqi(struct gve_priv *priv)
/* buffers */
int gve_alloc_page(struct gve_priv *priv, struct device *dev,
struct page **page, dma_addr_t *dma,
enum dma_data_direction);
enum dma_data_direction, gfp_t gfp_flags);
void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma,
enum dma_data_direction);
/* tx handling */


@ -766,9 +766,9 @@ static void gve_free_rings(struct gve_priv *priv)
int gve_alloc_page(struct gve_priv *priv, struct device *dev,
struct page **page, dma_addr_t *dma,
enum dma_data_direction dir)
enum dma_data_direction dir, gfp_t gfp_flags)
{
*page = alloc_page(GFP_KERNEL);
*page = alloc_page(gfp_flags);
if (!*page) {
priv->page_alloc_fail++;
return -ENOMEM;
@ -811,7 +811,7 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id,
for (i = 0; i < pages; i++) {
err = gve_alloc_page(priv, &priv->pdev->dev, &qpl->pages[i],
&qpl->page_buses[i],
gve_qpl_dma_dir(priv, id));
gve_qpl_dma_dir(priv, id), GFP_KERNEL);
/* caller handles clean up */
if (err)
return -ENOMEM;


@ -86,7 +86,8 @@ static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
dma_addr_t dma;
int err;
err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE);
err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE,
GFP_ATOMIC);
if (err)
return err;


@ -157,7 +157,7 @@ static int gve_alloc_page_dqo(struct gve_priv *priv,
int err;
err = gve_alloc_page(priv, &priv->pdev->dev, &buf_state->page_info.page,
&buf_state->addr, DMA_FROM_DEVICE);
&buf_state->addr, DMA_FROM_DEVICE, GFP_KERNEL);
if (err)
return err;
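
Taken together, the gve hunks thread a gfp_t argument from each call site down to the page allocator, so the RX refill path, which runs in atomic (softirq) context, can request a non-sleeping GFP_ATOMIC allocation while probe/setup paths keep GFP_KERNEL. A userspace model of the same plumbing; all names below are illustrative, and malloc() merely stands in for the page allocator:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-ins for the kernel GFP flags. */
    typedef unsigned int gfp_t;
    #define GFP_KERNEL 0x1 /* may sleep: fine in process context  */
    #define GFP_ATOMIC 0x2 /* must not sleep: fine in IRQ/softirq */

    /* The allocator no longer hardcodes its flags; callers decide. */
    static void *alloc_page_buf(size_t size, gfp_t gfp)
    {
        printf("allocating %zu bytes with %s\n", size,
               gfp == GFP_ATOMIC ? "GFP_ATOMIC" : "GFP_KERNEL");
        return malloc(size); /* model only; malloc may always block */
    }

    static void *setup_path(void) /* probe/open: sleeping allowed */
    {
        return alloc_page_buf(4096, GFP_KERNEL);
    }

    static void *rx_refill_path(void) /* napi/softirq: no sleeping */
    {
        return alloc_page_buf(4096, GFP_ATOMIC);
    }

    int main(void)
    {
        void *a = setup_path();
        void *b = rx_refill_path();

        free(a);
        free(b);
        return 0;
    }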


@ -2043,8 +2043,7 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data)
break;
}
if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER)
hclgevf_enable_vector(&hdev->misc_vector, true);
hclgevf_enable_vector(&hdev->misc_vector, true);
return IRQ_HANDLED;
}


@ -986,6 +986,7 @@ static int
ether1_probe(struct expansion_card *ec, const struct ecard_id *id)
{
struct net_device *dev;
u8 addr[ETH_ALEN];
int i, ret = 0;
ether1_banner();
@ -1015,7 +1016,8 @@ ether1_probe(struct expansion_card *ec, const struct ecard_id *id)
}
for (i = 0; i < 6; i++)
dev->dev_addr[i] = readb(IDPROM_ADDRESS + (i << 2));
addr[i] = readb(IDPROM_ADDRESS + (i << 2));
eth_hw_addr_set(dev, addr);
if (ether1_init_2(dev)) {
ret = -ENODEV;


@ -2602,6 +2602,7 @@ static void __ibmvnic_reset(struct work_struct *work)
struct ibmvnic_rwi *rwi;
unsigned long flags;
u32 reset_state;
int num_fails = 0;
int rc = 0;
adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset);
@ -2655,11 +2656,23 @@ static void __ibmvnic_reset(struct work_struct *work)
rc = do_hard_reset(adapter, rwi, reset_state);
rtnl_unlock();
}
if (rc) {
/* give backing device time to settle down */
if (rc)
num_fails++;
else
num_fails = 0;
/* If auto-priority-failover is enabled we can get
* back to back failovers during resets, resulting
* in at least two failed resets (from high-priority
* backing device to low-priority one and then back)
* If resets continue to fail beyond that, give the
* adapter some time to settle down before retrying.
*/
if (num_fails >= 3) {
netdev_dbg(adapter->netdev,
"[S:%s] Hard reset failed, waiting 60 secs\n",
adapter_state_to_string(adapter->state));
"[S:%s] Hard reset failed %d times, waiting 60 secs\n",
adapter_state_to_string(adapter->state),
num_fails);
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(60 * HZ);
}
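
The hunk above replaces back-off-on-any-failure with a consecutive-failure counter: the expected pair of failover-induced failed resets no longer triggers the 60-second settle delay, only a streak of three or more does. A compact model of that retry policy (the threshold and delay come from the hunk; the reset stub and loop bounds are made up):

    #include <stdbool.h>
    #include <stdio.h>

    #define FAIL_THRESHOLD 3 /* value from the hunk above */

    /* Pretend reset: fails four times, then succeeds. */
    static bool do_reset(int attempt)
    {
        return attempt >= 4;
    }

    int main(void)
    {
        int num_fails = 0;

        for (int attempt = 0; attempt < 8; attempt++) {
            if (do_reset(attempt)) {
                num_fails = 0; /* success clears the streak */
                printf("attempt %d: ok\n", attempt);
                break;
            }
            num_fails++;
            printf("attempt %d: failed (%d in a row)\n",
                   attempt, num_fails);

            /* Two back-to-back failures are expected during
             * auto-priority failover; only a persistent streak
             * earns the long settle delay.
             */
            if (num_fails >= FAIL_THRESHOLD)
                printf("  backing off 60s before retrying\n");
        }
        return 0;
    }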
@ -3844,11 +3857,25 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
struct device *dev = &adapter->vdev->dev;
union ibmvnic_crq crq;
int max_entries;
int cap_reqs;
/* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on
* the PROMISC flag). Initialize this count upfront. When the tasklet
* receives a response to all of these, it will send the next protocol
* message (QUERY_IP_OFFLOAD).
*/
if (!(adapter->netdev->flags & IFF_PROMISC) ||
adapter->promisc_supported)
cap_reqs = 7;
else
cap_reqs = 6;
if (!retry) {
/* Sub-CRQ entries are 32 bytes long */
int entries_page = 4 * PAGE_SIZE / (sizeof(u64) * 4);
atomic_set(&adapter->running_cap_crqs, cap_reqs);
if (adapter->min_tx_entries_per_subcrq > entries_page ||
adapter->min_rx_add_entries_per_subcrq > entries_page) {
dev_err(dev, "Fatal, invalid entries per sub-crq\n");
@ -3909,44 +3936,45 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
adapter->opt_rx_comp_queues;
adapter->req_rx_add_queues = adapter->max_rx_add_queues;
} else {
atomic_add(cap_reqs, &adapter->running_cap_crqs);
}
memset(&crq, 0, sizeof(crq));
crq.request_capability.first = IBMVNIC_CRQ_CMD;
crq.request_capability.cmd = REQUEST_CAPABILITY;
crq.request_capability.capability = cpu_to_be16(REQ_TX_QUEUES);
crq.request_capability.number = cpu_to_be64(adapter->req_tx_queues);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
crq.request_capability.capability = cpu_to_be16(REQ_RX_QUEUES);
crq.request_capability.number = cpu_to_be64(adapter->req_rx_queues);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
crq.request_capability.capability = cpu_to_be16(REQ_RX_ADD_QUEUES);
crq.request_capability.number = cpu_to_be64(adapter->req_rx_add_queues);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
crq.request_capability.capability =
cpu_to_be16(REQ_TX_ENTRIES_PER_SUBCRQ);
crq.request_capability.number =
cpu_to_be64(adapter->req_tx_entries_per_subcrq);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
crq.request_capability.capability =
cpu_to_be16(REQ_RX_ADD_ENTRIES_PER_SUBCRQ);
crq.request_capability.number =
cpu_to_be64(adapter->req_rx_add_entries_per_subcrq);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
crq.request_capability.capability = cpu_to_be16(REQ_MTU);
crq.request_capability.number = cpu_to_be64(adapter->req_mtu);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
if (adapter->netdev->flags & IFF_PROMISC) {
@ -3954,16 +3982,21 @@ static void send_request_cap(struct ibmvnic_adapter *adapter, int retry)
crq.request_capability.capability =
cpu_to_be16(PROMISC_REQUESTED);
crq.request_capability.number = cpu_to_be64(1);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
}
} else {
crq.request_capability.capability =
cpu_to_be16(PROMISC_REQUESTED);
crq.request_capability.number = cpu_to_be64(0);
atomic_inc(&adapter->running_cap_crqs);
cap_reqs--;
ibmvnic_send_crq(adapter, &crq);
}
/* Keep at end to catch any discrepancy between expected and actual
* CRQs sent.
*/
WARN_ON(cap_reqs != 0);
}
static int pending_scrq(struct ibmvnic_adapter *adapter,
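
The reworked send_request_cap() publishes the full expected-response count in running_cap_crqs before the first CRQ goes out, so a fast responder can never observe the counter hitting zero early and start the next protocol step prematurely; a local cap_reqs countdown plus the final WARN_ON catches any drift between the declared and actual number of sends. The shape of that pattern, modeled with C11 atomics (all names and counts here are illustrative):

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int running_cap_crqs;

    static void send_crq(int *cap_reqs, const char *name)
    {
        printf("sent %s\n", name);
        (*cap_reqs)--; /* local bookkeeping of actual sends */
    }

    static void send_request_cap(void)
    {
        int cap_reqs = 3; /* known number of requests below */

        /* Publish the full count before the first send, so a
         * fast responder can never see it reach zero early and
         * kick off the next protocol step prematurely.
         */
        atomic_store(&running_cap_crqs, cap_reqs);

        send_crq(&cap_reqs, "REQ_TX_QUEUES");
        send_crq(&cap_reqs, "REQ_RX_QUEUES");
        send_crq(&cap_reqs, "REQ_MTU");

        /* Catch any drift between expected and actual sends. */
        assert(cap_reqs == 0);
    }

    static void handle_response(void)
    {
        if (atomic_fetch_sub(&running_cap_crqs, 1) - 1 == 0)
            printf("all responses in; next protocol step\n");
    }

    int main(void)
    {
        send_request_cap();
        for (int i = 0; i < 3; i++)
            handle_response();
        return 0;
    }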
@ -4357,118 +4390,132 @@ static void send_query_map(struct ibmvnic_adapter *adapter)
static void send_query_cap(struct ibmvnic_adapter *adapter)
{
union ibmvnic_crq crq;
int cap_reqs;
/* We send out 25 QUERY_CAPABILITY CRQs below. Initialize this count
* upfront. When the tasklet receives a response to all of these, it
* can send out the next protocol message (REQUEST_CAPABILITY).
*/
cap_reqs = 25;
atomic_set(&adapter->running_cap_crqs, cap_reqs);
atomic_set(&adapter->running_cap_crqs, 0);
memset(&crq, 0, sizeof(crq));
crq.query_capability.first = IBMVNIC_CRQ_CMD;
crq.query_capability.cmd = QUERY_CAPABILITY;
crq.query_capability.capability = cpu_to_be16(MIN_TX_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MIN_RX_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MIN_RX_ADD_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MAX_TX_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MAX_RX_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MAX_RX_ADD_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(MIN_TX_ENTRIES_PER_SUBCRQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(MIN_RX_ADD_ENTRIES_PER_SUBCRQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(MAX_TX_ENTRIES_PER_SUBCRQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(MAX_RX_ADD_ENTRIES_PER_SUBCRQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(TCP_IP_OFFLOAD);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(PROMISC_SUPPORTED);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MIN_MTU);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MAX_MTU);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MAX_MULTICAST_FILTERS);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(VLAN_HEADER_INSERTION);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(RX_VLAN_HEADER_INSERTION);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(MAX_TX_SG_ENTRIES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(RX_SG_SUPPORTED);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(OPT_TX_COMP_SUB_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(OPT_RX_COMP_QUEUES);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(OPT_RX_BUFADD_Q_PER_RX_COMP_Q);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(OPT_TX_ENTRIES_PER_SUBCRQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability =
cpu_to_be16(OPT_RXBA_ENTRIES_PER_SUBCRQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
crq.query_capability.capability = cpu_to_be16(TX_RX_DESC_REQ);
atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
cap_reqs--;
/* Keep at end to catch any discrepancy between expected and actual
* CRQs sent.
*/
WARN_ON(cap_reqs != 0);
}
static void send_query_ip_offload(struct ibmvnic_adapter *adapter)
@ -4772,6 +4819,8 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
char *name;
atomic_dec(&adapter->running_cap_crqs);
netdev_dbg(adapter->netdev, "Outstanding request-caps: %d\n",
atomic_read(&adapter->running_cap_crqs));
switch (be16_to_cpu(crq->request_capability_rsp.capability)) {
case REQ_TX_QUEUES:
req_value = &adapter->req_tx_queues;
@ -4835,10 +4884,8 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
}
/* Done receiving requested capabilities, query IP offload support */
if (atomic_read(&adapter->running_cap_crqs) == 0) {
adapter->wait_capability = false;
if (atomic_read(&adapter->running_cap_crqs) == 0)
send_query_ip_offload(adapter);
}
}
static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
@ -5136,10 +5183,8 @@ static void handle_query_cap_rsp(union ibmvnic_crq *crq,
}
out:
if (atomic_read(&adapter->running_cap_crqs) == 0) {
adapter->wait_capability = false;
if (atomic_read(&adapter->running_cap_crqs) == 0)
send_request_cap(adapter, 0);
}
}
static int send_query_phys_parms(struct ibmvnic_adapter *adapter)
@ -5435,33 +5480,21 @@ static void ibmvnic_tasklet(struct tasklet_struct *t)
struct ibmvnic_crq_queue *queue = &adapter->crq;
union ibmvnic_crq *crq;
unsigned long flags;
bool done = false;
spin_lock_irqsave(&queue->lock, flags);
while (!done) {
/* Pull all the valid messages off the CRQ */
while ((crq = ibmvnic_next_crq(adapter)) != NULL) {
/* This barrier makes sure ibmvnic_next_crq()'s
* crq->generic.first & IBMVNIC_CRQ_CMD_RSP is loaded
* before ibmvnic_handle_crq()'s
* switch(gen_crq->first) and switch(gen_crq->cmd).
*/
dma_rmb();
ibmvnic_handle_crq(crq, adapter);
crq->generic.first = 0;
}
/* remain in tasklet until all
* capabilities responses are received
/* Pull all the valid messages off the CRQ */
while ((crq = ibmvnic_next_crq(adapter)) != NULL) {
/* This barrier makes sure ibmvnic_next_crq()'s
* crq->generic.first & IBMVNIC_CRQ_CMD_RSP is loaded
* before ibmvnic_handle_crq()'s
* switch(gen_crq->first) and switch(gen_crq->cmd).
*/
if (!adapter->wait_capability)
done = true;
dma_rmb();
ibmvnic_handle_crq(crq, adapter);
crq->generic.first = 0;
}
/* if capabilities CRQs were sent in this tasklet, the following
* tasklet must wait until all responses are received
*/
if (atomic_read(&adapter->running_cap_crqs) != 0)
adapter->wait_capability = true;
spin_unlock_irqrestore(&queue->lock, flags);
}


@ -919,7 +919,6 @@ struct ibmvnic_adapter {
int login_rsp_buf_sz;
atomic_t running_cap_crqs;
bool wait_capability;
struct ibmvnic_sub_crq_queue **tx_scrq ____cacheline_aligned;
struct ibmvnic_sub_crq_queue **rx_scrq ____cacheline_aligned;


@ -174,7 +174,6 @@ enum i40e_interrupt_policy {
struct i40e_lump_tracking {
u16 num_entries;
u16 search_hint;
u16 list[0];
#define I40E_PILE_VALID_BIT 0x8000
#define I40E_IWARP_IRQ_PILE_ID (I40E_PILE_VALID_BIT - 2)
@ -848,12 +847,12 @@ struct i40e_vsi {
struct rtnl_link_stats64 net_stats_offsets;
struct i40e_eth_stats eth_stats;
struct i40e_eth_stats eth_stats_offsets;
u32 tx_restart;
u32 tx_busy;
u64 tx_restart;
u64 tx_busy;
u64 tx_linearize;
u64 tx_force_wb;
u32 rx_buf_failed;
u32 rx_page_failed;
u64 rx_buf_failed;
u64 rx_page_failed;
/* These are containers of ring pointers, allocated at run-time */
struct i40e_ring **rx_rings;


@ -240,7 +240,7 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
(unsigned long int)vsi->net_stats_offsets.rx_compressed,
(unsigned long int)vsi->net_stats_offsets.tx_compressed);
dev_info(&pf->pdev->dev,
" tx_restart = %d, tx_busy = %d, rx_buf_failed = %d, rx_page_failed = %d\n",
" tx_restart = %llu, tx_busy = %llu, rx_buf_failed = %llu, rx_page_failed = %llu\n",
vsi->tx_restart, vsi->tx_busy,
vsi->rx_buf_failed, vsi->rx_page_failed);
rcu_read_lock();


@ -196,10 +196,6 @@ int i40e_free_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem)
* @id: an owner id to stick on the items assigned
*
* Returns the base item index of the lump, or negative for error
*
* The search_hint trick and lack of advanced fit-finding only work
* because we're highly likely to have all the same size lump requests.
* Linear search time and any fragmentation should be minimal.
**/
static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
u16 needed, u16 id)
@ -214,8 +210,21 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
return -EINVAL;
}
/* start the linear search with an imperfect hint */
i = pile->search_hint;
/* Allocate last queue in the pile for FDIR VSI queue
* so it doesn't fragment the qp_pile
*/
if (pile == pf->qp_pile && pf->vsi[id]->type == I40E_VSI_FDIR) {
if (pile->list[pile->num_entries - 1] & I40E_PILE_VALID_BIT) {
dev_err(&pf->pdev->dev,
"Cannot allocate queue %d for I40E_VSI_FDIR\n",
pile->num_entries - 1);
return -ENOMEM;
}
pile->list[pile->num_entries - 1] = id | I40E_PILE_VALID_BIT;
return pile->num_entries - 1;
}
i = 0;
while (i < pile->num_entries) {
/* skip already allocated entries */
if (pile->list[i] & I40E_PILE_VALID_BIT) {
@ -234,7 +243,6 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
for (j = 0; j < needed; j++)
pile->list[i+j] = id | I40E_PILE_VALID_BIT;
ret = i;
pile->search_hint = i + j;
break;
}
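
Reserving the very last pile entry for the FDIR VSI keeps that single-queue allocation from landing mid-array and splitting the contiguous free space that multi-queue requests need, which is also why the search_hint shortcut could be dropped. A toy allocator showing the idea (sizes, IDs, and names are arbitrary):

    #include <stdio.h>

    #define NUM_ENTRIES 16
    #define VALID_BIT   0x8000
    #define FDIR_ID     7

    static unsigned short pile[NUM_ENTRIES];

    /* Single-entry FDIR request: always take the last slot. */
    static int get_fdir_queue(void)
    {
        if (pile[NUM_ENTRIES - 1] & VALID_BIT)
            return -1; /* already taken */
        pile[NUM_ENTRIES - 1] = FDIR_ID | VALID_BIT;
        return NUM_ENTRIES - 1;
    }

    /* Ordinary request: first-fit scan for a contiguous run. */
    static int get_lump(int needed, int id)
    {
        for (int i = 0; i + needed <= NUM_ENTRIES; i++) {
            int j;

            for (j = 0; j < needed; j++)
                if (pile[i + j] & VALID_BIT)
                    break;
            if (j == needed) {
                for (j = 0; j < needed; j++)
                    pile[i + j] = id | VALID_BIT;
                return i;
            }
        }
        return -1;
    }

    int main(void)
    {
        printf("fdir at %d\n", get_fdir_queue()); /* 15 */
        printf("lump at %d\n", get_lump(8, 1));   /* 0: front stays contiguous */
        return 0;
    }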
@ -257,7 +265,7 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
{
int valid_id = (id | I40E_PILE_VALID_BIT);
int count = 0;
int i;
u16 i;
if (!pile || index >= pile->num_entries)
return -EINVAL;
@ -269,8 +277,6 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
count++;
}
if (count && index < pile->search_hint)
pile->search_hint = index;
return count;
}
@ -772,9 +778,9 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
struct rtnl_link_stats64 *ns; /* netdev stats */
struct i40e_eth_stats *oes;
struct i40e_eth_stats *es; /* device's eth stats */
u32 tx_restart, tx_busy;
u64 tx_restart, tx_busy;
struct i40e_ring *p;
u32 rx_page, rx_buf;
u64 rx_page, rx_buf;
u64 bytes, packets;
unsigned int start;
u64 tx_linearize;
@ -10574,15 +10580,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
}
i40e_get_oem_version(&pf->hw);
if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
((hw->aq.fw_maj_ver == 4 && hw->aq.fw_min_ver <= 33) ||
hw->aq.fw_maj_ver < 4) && hw->mac.type == I40E_MAC_XL710) {
/* The following delay is necessary for 4.33 firmware and older
* to recover after EMP reset. 200 ms should suffice but we
* put here 300 ms to be sure that FW is ready to operate
* after reset.
*/
mdelay(300);
if (test_and_clear_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state)) {
/* The following delay is necessary for firmware update. */
mdelay(1000);
}
/* re-verify the eeprom if we just had an EMP reset */
@ -11792,7 +11792,6 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
return -ENOMEM;
pf->irq_pile->num_entries = vectors;
pf->irq_pile->search_hint = 0;
/* track first vector for misc interrupts, ignore return */
(void)i40e_get_lump(pf, pf->irq_pile, 1, I40E_PILE_VALID_BIT - 1);
@ -12595,7 +12594,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
goto sw_init_done;
}
pf->qp_pile->num_entries = pf->hw.func_caps.num_tx_qp;
pf->qp_pile->search_hint = 0;
pf->tx_timeout_recovery_level = 1;


@ -413,6 +413,9 @@
#define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
#define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1
#define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT)
#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30
#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT)
#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
#define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
#define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0
#define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11


@ -1376,6 +1376,32 @@ static i40e_status i40e_config_vf_promiscuous_mode(struct i40e_vf *vf,
return aq_ret;
}
/**
* i40e_sync_vfr_reset
* @hw: pointer to hw struct
* @vf_id: VF identifier
*
* Before triggering a hardware reset, make sure no other process has
* reserved the hardware for any reset operation. This is checked by
* examining the status of the RSTAT1 register used to signal the reset.
**/
static int i40e_sync_vfr_reset(struct i40e_hw *hw, int vf_id)
{
u32 reg;
int i;
for (i = 0; i < I40E_VFR_WAIT_COUNT; i++) {
reg = rd32(hw, I40E_VFINT_ICR0_ENA(vf_id)) &
I40E_VFINT_ICR0_ADMINQ_MASK;
if (reg)
return 0;
usleep_range(100, 200);
}
return -EAGAIN;
}
/**
* i40e_trigger_vf_reset
* @vf: pointer to the VF structure
@ -1390,9 +1416,11 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
u32 reg, reg_idx, bit_idx;
bool vf_active;
u32 radq;
/* warn the VF */
clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
vf_active = test_and_clear_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states);
/* Disable VF's configuration API during reset. The flag is re-enabled
* in i40e_alloc_vf_res(), when it's safe again to access VF's VSI.
@ -1406,7 +1434,19 @@ static void i40e_trigger_vf_reset(struct i40e_vf *vf, bool flr)
* just need to clean up, so don't hit the VFRTRIG register.
*/
if (!flr) {
/* reset VF using VPGEN_VFRTRIG reg */
/* Sync the VFR reset before triggering the next one */
radq = rd32(hw, I40E_VFINT_ICR0_ENA(vf->vf_id)) &
I40E_VFINT_ICR0_ADMINQ_MASK;
if (vf_active && !radq)
/* wait for the virtual driver to finish the reset */
if (i40e_sync_vfr_reset(hw, vf->vf_id))
dev_info(&pf->pdev->dev,
"Reset VF %d never finished\n",
vf->vf_id);
/* Reset VF using VPGEN_VFRTRIG reg. This also sets the
* reset-in-progress state in the rstat1 register.
*/
reg = rd32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id));
reg |= I40E_VPGEN_VFRTRIG_VFSWR_MASK;
wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg);
@ -2617,6 +2657,59 @@ error_param:
aq_ret);
}
/**
* i40e_check_enough_queue - check for a big enough run of free queues
* @vf: pointer to the VF info
* @needed: the number of items needed
*
* Returns the base item index of the queue, or negative for error
**/
static int i40e_check_enough_queue(struct i40e_vf *vf, u16 needed)
{
unsigned int i, cur_queues, more, pool_size;
struct i40e_lump_tracking *pile;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi;
vsi = pf->vsi[vf->lan_vsi_idx];
cur_queues = vsi->alloc_queue_pairs;
/* if the currently allocated queues are already enough */
if (cur_queues >= needed)
return vsi->base_queue;
pile = pf->qp_pile;
if (cur_queues > 0) {
/* if some queues are already allocated, just check
* whether there are enough free entries for the extra
* ones directly behind the allocated block.
*/
more = needed - cur_queues;
for (i = vsi->base_queue + cur_queues;
i < pile->num_entries; i++) {
if (pile->list[i] & I40E_PILE_VALID_BIT)
break;
if (more-- == 1)
/* there is enough */
return vsi->base_queue;
}
}
pool_size = 0;
for (i = 0; i < pile->num_entries; i++) {
if (pile->list[i] & I40E_PILE_VALID_BIT) {
pool_size = 0;
continue;
}
if (needed <= ++pool_size)
/* there is enough */
return i;
}
return -ENOMEM;
}
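
i40e_check_enough_queue() first looks for free room directly behind the VSI's current block, then falls back to scanning the whole pile for any contiguous free run of the requested size. The fallback scan, reduced to its core as a standalone sketch (note that, like the kernel code, it returns the index at which the run reaches the needed length, and the caller only tests for a negative result):

    #include <stdio.h>

    #define VALID_BIT 0x8000

    /* Return the index at which a free run reaches 'needed'
     * entries, or -1 (the kernel returns -ENOMEM instead).
     */
    static int check_enough_queue(const unsigned short *list, int n,
                                  int needed)
    {
        int pool_size = 0;

        for (int i = 0; i < n; i++) {
            if (list[i] & VALID_BIT) {
                pool_size = 0; /* run broken by an allocation */
                continue;
            }
            if (needed <= ++pool_size)
                return i; /* enough room found */
        }
        return -1;
    }

    int main(void)
    {
        unsigned short list[8] = {
            VALID_BIT, 0, 0, 0, VALID_BIT, 0, 0, 0
        };

        printf("%d\n", check_enough_queue(list, 8, 3)); /* prints 3 */
        return 0;
    }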
/**
* i40e_vc_request_queues_msg
* @vf: pointer to the VF info
@ -2651,6 +2744,12 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg)
req_pairs - cur_pairs,
pf->queues_left);
vfres->num_queue_pairs = pf->queues_left + cur_pairs;
} else if (i40e_check_enough_queue(vf, req_pairs) < 0) {
dev_warn(&pf->pdev->dev,
"VF %d requested %d more queues, but there is not enough for it.\n",
vf->vf_id,
req_pairs - cur_pairs);
vfres->num_queue_pairs = cur_pairs;
} else {
/* successful request */
vf->num_req_queues = req_pairs;


@ -19,6 +19,7 @@
#define I40E_MAX_VF_PROMISC_FLAGS 3
#define I40E_VF_STATE_WAIT_COUNT 20
#define I40E_VFR_WAIT_COUNT 100
/* Various queue ctrls */
enum i40e_queue_ctrl {


@ -1570,6 +1570,8 @@ static struct mac_ops cgx_mac_ops = {
.mac_enadis_pause_frm = cgx_lmac_enadis_pause_frm,
.mac_pause_frm_config = cgx_lmac_pause_frm_config,
.mac_enadis_ptp_config = cgx_lmac_ptp_config,
.mac_rx_tx_enable = cgx_lmac_rx_tx_enable,
.mac_tx_enable = cgx_lmac_tx_enable,
};
static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)


@ -107,6 +107,9 @@ struct mac_ops {
void (*mac_enadis_ptp_config)(void *cgxd,
int lmac_id,
bool enable);
int (*mac_rx_tx_enable)(void *cgxd, int lmac_id, bool enable);
int (*mac_tx_enable)(void *cgxd, int lmac_id, bool enable);
};
struct cgx {


@ -732,6 +732,7 @@ enum nix_af_status {
NIX_AF_ERR_BANDPROF_INVAL_REQ = -428,
NIX_AF_ERR_CQ_CTX_WRITE_ERR = -429,
NIX_AF_ERR_AQ_CTX_RETRY_WRITE = -430,
NIX_AF_ERR_LINK_CREDITS = -431,
};
/* For NIX RX vtag action */


@ -185,7 +185,6 @@ enum npc_kpu_parser_state {
NPC_S_KPU2_QINQ,
NPC_S_KPU2_ETAG,
NPC_S_KPU2_EXDSA,
NPC_S_KPU2_NGIO,
NPC_S_KPU2_CPT_CTAG,
NPC_S_KPU2_CPT_QINQ,
NPC_S_KPU3_CTAG,
@ -212,6 +211,7 @@ enum npc_kpu_parser_state {
NPC_S_KPU5_NSH,
NPC_S_KPU5_CPT_IP,
NPC_S_KPU5_CPT_IP6,
NPC_S_KPU5_NGIO,
NPC_S_KPU6_IP6_EXT,
NPC_S_KPU6_IP6_HOP_DEST,
NPC_S_KPU6_IP6_ROUT,
@ -1120,15 +1120,6 @@ static struct npc_kpu_profile_cam kpu1_cam_entries[] = {
0x0000,
0x0000,
},
{
NPC_S_KPU1_ETHER, 0xff,
NPC_ETYPE_CTAG,
0xffff,
NPC_ETYPE_NGIO,
0xffff,
0x0000,
0x0000,
},
{
NPC_S_KPU1_ETHER, 0xff,
NPC_ETYPE_CTAG,
@ -1966,6 +1957,15 @@ static struct npc_kpu_profile_cam kpu2_cam_entries[] = {
0x0000,
0x0000,
},
{
NPC_S_KPU2_CTAG, 0xff,
NPC_ETYPE_NGIO,
0xffff,
0x0000,
0x0000,
0x0000,
0x0000,
},
{
NPC_S_KPU2_CTAG, 0xff,
NPC_ETYPE_PPPOE,
@ -2749,15 +2749,6 @@ static struct npc_kpu_profile_cam kpu2_cam_entries[] = {
0x0000,
0x0000,
},
{
NPC_S_KPU2_NGIO, 0xff,
0x0000,
0x0000,
0x0000,
0x0000,
0x0000,
0x0000,
},
{
NPC_S_KPU2_CPT_CTAG, 0xff,
NPC_ETYPE_IP,
@ -5089,6 +5080,15 @@ static struct npc_kpu_profile_cam kpu5_cam_entries[] = {
0x0000,
0x0000,
},
{
NPC_S_KPU5_NGIO, 0xff,
0x0000,
0x0000,
0x0000,
0x0000,
0x0000,
0x0000,
},
{
NPC_S_NA, 0X00,
0x0000,
@ -8422,14 +8422,6 @@ static struct npc_kpu_profile_action kpu1_action_entries[] = {
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
8, 12, 0, 0, 0,
NPC_S_KPU2_NGIO, 12, 1,
NPC_LID_LA, NPC_LT_LA_ETHER,
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
8, 12, 0, 0, 0,
@ -9194,6 +9186,14 @@ static struct npc_kpu_profile_action kpu2_action_entries[] = {
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
0, 0, 0, 2, 0,
NPC_S_KPU5_NGIO, 6, 1,
NPC_LID_LB, NPC_LT_LB_CTAG,
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
8, 0, 6, 2, 0,
@ -9890,14 +9890,6 @@ static struct npc_kpu_profile_action kpu2_action_entries[] = {
NPC_F_LB_U_UNK_ETYPE | NPC_F_LB_L_EXDSA,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
0, 0, 0, 0, 1,
NPC_S_NA, 0, 1,
NPC_LID_LC, NPC_LT_LC_NGIO,
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
8, 0, 6, 2, 0,
@ -11973,6 +11965,14 @@ static struct npc_kpu_profile_action kpu5_action_entries[] = {
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_RE, NPC_EC_NOERR,
0, 0, 0, 0, 1,
NPC_S_NA, 0, 1,
NPC_LID_LC, NPC_LT_LC_NGIO,
0,
0, 0, 0, 0,
},
{
NPC_ERRLEV_LC, NPC_EC_UNK,
0, 0, 0, 0, 1,


@ -30,6 +30,8 @@ static struct mac_ops rpm_mac_ops = {
.mac_enadis_pause_frm = rpm_lmac_enadis_pause_frm,
.mac_pause_frm_config = rpm_lmac_pause_frm_config,
.mac_enadis_ptp_config = rpm_lmac_ptp_config,
.mac_rx_tx_enable = rpm_lmac_rx_tx_enable,
.mac_tx_enable = rpm_lmac_tx_enable,
};
struct mac_ops *rpm_get_mac_ops(void)
@ -54,6 +56,43 @@ int rpm_get_nr_lmacs(void *rpmd)
return hweight8(rpm_read(rpm, 0, CGXX_CMRX_RX_LMACS) & 0xFULL);
}
int rpm_lmac_tx_enable(void *rpmd, int lmac_id, bool enable)
{
rpm_t *rpm = rpmd;
u64 cfg, last;
if (!is_lmac_valid(rpm, lmac_id))
return -ENODEV;
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
last = cfg;
if (enable)
cfg |= RPM_TX_EN;
else
cfg &= ~(RPM_TX_EN);
if (cfg != last)
rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
return !!(last & RPM_TX_EN);
}
int rpm_lmac_rx_tx_enable(void *rpmd, int lmac_id, bool enable)
{
rpm_t *rpm = rpmd;
u64 cfg;
if (!is_lmac_valid(rpm, lmac_id))
return -ENODEV;
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
if (enable)
cfg |= RPM_RX_EN | RPM_TX_EN;
else
cfg &= ~(RPM_RX_EN | RPM_TX_EN);
rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
return 0;
}
void rpm_lmac_enadis_rx_pause_fwding(void *rpmd, int lmac_id, bool enable)
{
rpm_t *rpm = rpmd;
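
Unlike rpm_lmac_rx_tx_enable(), rpm_lmac_tx_enable() returns the previous TX enable state rather than 0, mirroring cgx_lmac_tx_enable(): callers such as nix_smq_flush() force TX on for the flush and afterwards disable it only if it was originally off. That save/restore contract as a standalone sketch (the register is simulated):

    #include <stdbool.h>
    #include <stdio.h>

    #define TX_EN 0x1

    static unsigned long long cfg; /* simulated COMMAND_CONFIG */

    /* Apply the new state; return whether TX was on *before*. */
    static int tx_enable(bool enable)
    {
        unsigned long long last = cfg;

        if (enable)
            cfg |= TX_EN;
        else
            cfg &= ~TX_EN;
        return !!(last & TX_EN);
    }

    int main(void)
    {
        /* Force TX on for the flush; remember if we changed it. */
        bool restore_tx_en = !tx_enable(true);

        printf("flushing with TX forced on\n");

        /* Put TX back only if it was off when we started. */
        if (restore_tx_en)
            tx_enable(false);

        printf("tx is %s\n", (cfg & TX_EN) ? "on" : "off");
        return 0;
    }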
@ -252,23 +291,20 @@ int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable)
if (!rpm || lmac_id >= rpm->lmac_count)
return -ENODEV;
lmac_type = rpm->mac_ops->get_lmac_type(rpm, lmac_id);
if (lmac_type == LMAC_MODE_100G_R) {
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1);
if (enable)
cfg |= RPMX_MTI_PCS_LBK;
else
cfg &= ~RPMX_MTI_PCS_LBK;
rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg);
} else {
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1);
if (enable)
cfg |= RPMX_MTI_PCS_LBK;
else
cfg &= ~RPMX_MTI_PCS_LBK;
rpm_write(rpm, lmac_id, RPMX_MTI_LPCSX_CONTROL1, cfg);
if (lmac_type == LMAC_MODE_QSGMII || lmac_type == LMAC_MODE_SGMII) {
dev_err(&rpm->pdev->dev, "loopback not supported for LPC mode\n");
return 0;
}
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1);
if (enable)
cfg |= RPMX_MTI_PCS_LBK;
else
cfg &= ~RPMX_MTI_PCS_LBK;
rpm_write(rpm, lmac_id, RPMX_MTI_PCS100X_CONTROL1, cfg);
return 0;
}


@ -43,6 +43,8 @@
#define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
#define RPM_LMAC_FWI 0xa
#define RPM_TX_EN BIT_ULL(0)
#define RPM_RX_EN BIT_ULL(1)
/* Function Declarations */
int rpm_get_nr_lmacs(void *rpmd);
@ -57,4 +59,6 @@ int rpm_lmac_enadis_pause_frm(void *rpmd, int lmac_id, u8 tx_pause,
int rpm_get_tx_stats(void *rpmd, int lmac_id, int idx, u64 *tx_stat);
int rpm_get_rx_stats(void *rpmd, int lmac_id, int idx, u64 *rx_stat);
void rpm_lmac_ptp_config(void *rpmd, int lmac_id, bool enable);
int rpm_lmac_rx_tx_enable(void *rpmd, int lmac_id, bool enable);
int rpm_lmac_tx_enable(void *rpmd, int lmac_id, bool enable);
#endif /* RPM_H */


@ -520,8 +520,11 @@ static void rvu_block_reset(struct rvu *rvu, int blkaddr, u64 rst_reg)
rvu_write64(rvu, blkaddr, rst_reg, BIT_ULL(0));
err = rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true);
if (err)
dev_err(rvu->dev, "HW block:%d reset failed\n", blkaddr);
if (err) {
dev_err(rvu->dev, "HW block:%d reset timeout retrying again\n", blkaddr);
while (rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true) == -EBUSY)
;
}
}
static void rvu_reset_all_blocks(struct rvu *rvu)


@ -806,6 +806,7 @@ bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature);
u32 rvu_cgx_get_fifolen(struct rvu *rvu);
void *rvu_first_cgx_pdata(struct rvu *rvu);
int cgxlmac_to_pf(struct rvu *rvu, int cgx_id, int lmac_id);
int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable);
int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
int type);


@ -441,16 +441,26 @@ void rvu_cgx_enadis_rx_bp(struct rvu *rvu, int pf, bool enable)
int rvu_cgx_config_rxtx(struct rvu *rvu, u16 pcifunc, bool start)
{
int pf = rvu_get_pf(pcifunc);
struct mac_ops *mac_ops;
u8 cgx_id, lmac_id;
void *cgxd;
if (!is_cgx_config_permitted(rvu, pcifunc))
return LMAC_AF_ERR_PERM_DENIED;
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
cgxd = rvu_cgx_pdata(cgx_id, rvu);
mac_ops = get_mac_ops(cgxd);
cgx_lmac_rx_tx_enable(rvu_cgx_pdata(cgx_id, rvu), lmac_id, start);
return 0;
return mac_ops->mac_rx_tx_enable(cgxd, lmac_id, start);
}
int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable)
{
struct mac_ops *mac_ops;
mac_ops = get_mac_ops(cgxd);
return mac_ops->mac_tx_enable(cgxd, lmac_id, enable);
}
void rvu_cgx_disable_dmac_entries(struct rvu *rvu, u16 pcifunc)


@ -1224,6 +1224,8 @@ static void print_nix_cn10k_sq_ctx(struct seq_file *m,
seq_printf(m, "W3: head_offset\t\t\t%d\nW3: smenq_next_sqb_vld\t\t%d\n\n",
sq_ctx->head_offset, sq_ctx->smenq_next_sqb_vld);
seq_printf(m, "W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d\n",
sq_ctx->smq_next_sq_vld, sq_ctx->smq_pend);
seq_printf(m, "W4: next_sqb \t\t\t%llx\n\n", sq_ctx->next_sqb);
seq_printf(m, "W5: tail_sqb \t\t\t%llx\n\n", sq_ctx->tail_sqb);
seq_printf(m, "W6: smenq_sqb \t\t\t%llx\n\n", sq_ctx->smenq_sqb);


@ -512,11 +512,11 @@ static int rvu_nix_get_bpid(struct rvu *rvu, struct nix_bp_cfg_req *req,
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST);
lmac_chan_cnt = cfg & 0xFF;
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
sdp_chan_cnt = cfg & 0xFFF;
cgx_bpid_cnt = hw->cgx_links * lmac_chan_cnt;
lbk_bpid_cnt = hw->lbk_links * ((cfg >> 16) & 0xFF);
cfg = rvu_read64(rvu, blkaddr, NIX_AF_CONST1);
sdp_chan_cnt = cfg & 0xFFF;
sdp_bpid_cnt = hw->sdp_links * sdp_chan_cnt;
pfvf = rvu_get_pfvf(rvu, req->hdr.pcifunc);
@ -2068,8 +2068,8 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
/* enable cgx tx if disabled */
if (is_pf_cgxmapped(rvu, pf)) {
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
restore_tx_en = !cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu),
lmac_id, true);
restore_tx_en = !rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu),
lmac_id, true);
}
cfg = rvu_read64(rvu, blkaddr, NIX_AF_SMQX_CFG(smq));
@ -2092,7 +2092,7 @@ static int nix_smq_flush(struct rvu *rvu, int blkaddr,
rvu_cgx_enadis_rx_bp(rvu, pf, true);
/* restore cgx tx state */
if (restore_tx_en)
cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
return err;
}
@ -3878,7 +3878,7 @@ nix_config_link_credits(struct rvu *rvu, int blkaddr, int link,
/* Enable cgx tx if disabled for credits to be back */
if (is_pf_cgxmapped(rvu, pf)) {
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
restore_tx_en = !cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu),
restore_tx_en = !rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu),
lmac_id, true);
}
@ -3891,8 +3891,8 @@ nix_config_link_credits(struct rvu *rvu, int blkaddr, int link,
NIX_AF_TL1X_SW_XOFF(schq), BIT_ULL(0));
}
rc = -EBUSY;
poll_tmo = jiffies + usecs_to_jiffies(10000);
rc = NIX_AF_ERR_LINK_CREDITS;
poll_tmo = jiffies + usecs_to_jiffies(200000);
/* Wait for credits to return */
do {
if (time_after(jiffies, poll_tmo))
@ -3918,7 +3918,7 @@ exit:
/* Restore state of cgx tx */
if (restore_tx_en)
cgx_lmac_tx_enable(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
rvu_cgx_config_tx(rvu_cgx_pdata(cgx_id, rvu), lmac_id, false);
mutex_unlock(&rvu->rsrc_lock);
return rc;


@ -402,6 +402,7 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
int blkaddr, int index, struct mcam_entry *entry,
bool *enable)
{
struct rvu_npc_mcam_rule *rule;
u16 owner, target_func;
struct rvu_pfvf *pfvf;
u64 rx_action;
@ -423,6 +424,12 @@ static void npc_fixup_vf_rule(struct rvu *rvu, struct npc_mcam *mcam,
test_bit(NIXLF_INITIALIZED, &pfvf->flags)))
*enable = false;
/* fix up not needed for the rules added by user (ntuple filters) */
list_for_each_entry(rule, &mcam->mcam_rules, list) {
if (rule->entry == index)
return;
}
/* copy VF default entry action to the VF mcam entry */
rx_action = npc_get_default_entry_action(rvu, mcam, blkaddr,
target_func);
@ -489,8 +496,8 @@ static void npc_config_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam,
}
/* PF installing VF rule */
if (intf == NIX_INTF_RX && actindex < mcam->bmap_entries)
npc_fixup_vf_rule(rvu, mcam, blkaddr, index, entry, &enable);
if (is_npc_intf_rx(intf) && actindex < mcam->bmap_entries)
npc_fixup_vf_rule(rvu, mcam, blkaddr, actindex, entry, &enable);
/* Set 'action' */
rvu_write64(rvu, blkaddr,
@ -916,7 +923,8 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
int blkaddr, u16 pcifunc, u64 rx_action)
{
int actindex, index, bank, entry;
bool enable;
struct rvu_npc_mcam_rule *rule;
bool enable, update;
if (!(pcifunc & RVU_PFVF_FUNC_MASK))
return;
@ -924,6 +932,14 @@ static void npc_update_vf_flow_entry(struct rvu *rvu, struct npc_mcam *mcam,
mutex_lock(&mcam->lock);
for (index = 0; index < mcam->bmap_entries; index++) {
if (mcam->entry2target_pffunc[index] == pcifunc) {
update = true;
/* update not needed for the rules added via ntuple filters */
list_for_each_entry(rule, &mcam->mcam_rules, list) {
if (rule->entry == index)
update = false;
}
if (!update)
continue;
bank = npc_get_bank(mcam, index);
actindex = index;
entry = index & (mcam->banksize - 1);


@ -1098,14 +1098,6 @@ find_rule:
write_req.cntr = rule->cntr;
}
err = rvu_mbox_handler_npc_mcam_write_entry(rvu, &write_req,
&write_rsp);
if (err) {
rvu_mcam_remove_counter_from_rule(rvu, owner, rule);
if (new)
kfree(rule);
return err;
}
/* update rule */
memcpy(&rule->packet, &dummy.packet, sizeof(rule->packet));
memcpy(&rule->mask, &dummy.mask, sizeof(rule->mask));
@ -1132,6 +1124,18 @@ find_rule:
if (req->default_rule)
pfvf->def_ucast_rule = rule;
/* write to mcam entry registers */
err = rvu_mbox_handler_npc_mcam_write_entry(rvu, &write_req,
&write_rsp);
if (err) {
rvu_mcam_remove_counter_from_rule(rvu, owner, rule);
if (new) {
list_del(&rule->list);
kfree(rule);
}
return err;
}
/* VF's MAC address is being changed via PF */
if (pf_set_vfs_mac) {
ether_addr_copy(pfvf->default_mac, req->packet.dmac);


@ -603,6 +603,7 @@ static inline void __cn10k_aura_freeptr(struct otx2_nic *pfvf, u64 aura,
size++;
tar_addr |= ((size - 1) & 0x7) << 4;
}
dma_wmb();
memcpy((u64 *)lmt_info->lmt_addr, ptrs, sizeof(u64) * num_ptrs);
/* Perform LMTST flush */
cn10k_lmt_flush(val, tar_addr);
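
The added dma_wmb() guarantees the stores that filled ptrs[] complete before the copy into the LMT line and the flush that makes it device-visible. A rough single-threaded C11 model of the same publish step, using a release fence in place of dma_wmb() and a plain variable in place of the device (this is an analogy, not a substitute for the DMA barrier):

    #include <stdatomic.h>
    #include <stdio.h>

    static unsigned long long ptrs[2]; /* payload being published */
    static _Atomic int lmt_ready;      /* stands in for the flush  */

    static void producer(void)
    {
        ptrs[0] = 0xaaaa;
        ptrs[1] = 0xbbbb;

        /* Like dma_wmb(): all stores above must complete before
         * the publishing store below becomes visible.
         */
        atomic_thread_fence(memory_order_release);

        atomic_store_explicit(&lmt_ready, 1, memory_order_relaxed);
    }

    static void consumer(void) /* stands in for the hardware */
    {
        if (atomic_load_explicit(&lmt_ready, memory_order_acquire))
            printf("saw %llx %llx\n", ptrs[0], ptrs[1]);
    }

    int main(void)
    {
        producer();
        consumer();
        return 0;
    }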


@ -394,7 +394,12 @@ static int otx2_forward_vf_mbox_msgs(struct otx2_nic *pf,
dst_mdev->msg_size = mbox_hdr->msg_size;
dst_mdev->num_msgs = num_msgs;
err = otx2_sync_mbox_msg(dst_mbox);
if (err) {
/* Error code -EIO indicate there is a communication failure
* to the AF. Rest of the error codes indicate that AF processed
* VF messages and set the error codes in response messages
* (if any) so simply forward responses to VF.
*/
if (err == -EIO) {
dev_warn(pf->dev,
"AF not responding to VF%d messages\n", vf);
/* restore PF mbase and exit */
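
The comment above draws the key distinction: -EIO from otx2_sync_mbox_msg() means the AF never answered (a transport failure), while any other error code was produced by the AF itself and already comes with per-message responses that should be forwarded to the VF rather than dropped. A small sketch of that error-discrimination pattern (all functions and values below are made up):

    #include <errno.h>
    #include <stdio.h>

    /* Simulated mailbox sync: -EIO means the AF never answered;
     * other negative values are errors the AF itself produced,
     * which already come with per-message responses.
     */
    static int sync_mbox_msg(int fail_mode)
    {
        return fail_mode;
    }

    static void forward_vf_msgs(int fail_mode)
    {
        int err = sync_mbox_msg(fail_mode);

        if (err == -EIO) { /* transport failure: bail out */
            printf("AF not responding\n");
            return;
        }
        /* Anything else (including success) has responses
         * queued; forward them instead of dropping them.
         */
        printf("forwarding responses (err=%d)\n", err);
    }

    int main(void)
    {
        forward_vf_msgs(0);       /* success          */
        forward_vf_msgs(-EINVAL); /* AF-side error    */
        forward_vf_msgs(-EIO);    /* no answer at all */
        return 0;
    }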


@ -40,11 +40,12 @@ static int lan966x_mac_wait_for_completion(struct lan966x *lan966x)
{
u32 val;
return readx_poll_timeout(lan966x_mac_get_status,
lan966x, val,
(ANA_MACACCESS_MAC_TABLE_CMD_GET(val)) ==
MACACCESS_CMD_IDLE,
TABLE_UPDATE_SLEEP_US, TABLE_UPDATE_TIMEOUT_US);
return readx_poll_timeout_atomic(lan966x_mac_get_status,
lan966x, val,
(ANA_MACACCESS_MAC_TABLE_CMD_GET(val)) ==
MACACCESS_CMD_IDLE,
TABLE_UPDATE_SLEEP_US,
TABLE_UPDATE_TIMEOUT_US);
}
static void lan966x_mac_select(struct lan966x *lan966x,


@ -182,9 +182,9 @@ static int lan966x_port_inj_ready(struct lan966x *lan966x, u8 grp)
{
u32 val;
return readx_poll_timeout(lan966x_port_inj_status, lan966x, val,
QS_INJ_STATUS_FIFO_RDY_GET(val) & BIT(grp),
READL_SLEEP_US, READL_TIMEOUT_US);
return readx_poll_timeout_atomic(lan966x_port_inj_status, lan966x, val,
QS_INJ_STATUS_FIFO_RDY_GET(val) & BIT(grp),
READL_SLEEP_US, READL_TIMEOUT_US);
}
static int lan966x_port_ifh_xmit(struct sk_buff *skb,
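
Both lan966x fixes swap readx_poll_timeout(), which sleeps between reads via usleep_range() and is therefore illegal under a spinlock or in softirq context, for readx_poll_timeout_atomic(), which busy-waits with udelay(). A self-contained model of the polling loop; the register read is simulated, and usleep() appears only because this sketch runs in userspace:

    #include <stdio.h>
    #include <unistd.h>

    #define CMD_IDLE 0

    static int polls_left = 3;

    static unsigned int read_status(void) /* simulated MAC register */
    {
        return --polls_left > 0 ? 1 : CMD_IDLE;
    }

    /* Poll until the command goes idle, without ever sleeping.
     * The kernel's readx_poll_timeout_atomic() busy-waits with
     * udelay(); usleep() below is only a userspace placeholder.
     */
    static int poll_timeout_atomic(unsigned int *val,
                                   unsigned int delay_us,
                                   unsigned int timeout_us)
    {
        unsigned int waited = 0;

        for (;;) {
            *val = read_status();
            if (*val == CMD_IDLE)
                return 0;
            if (waited >= timeout_us)
                return -1; /* -ETIMEDOUT in the kernel */
            usleep(delay_us);
            waited += delay_us;
        }
    }

    int main(void)
    {
        unsigned int val;
        int ret = poll_timeout_atomic(&val, 10, 100000);

        printf("poll %s\n", ret ? "timed out" : "done");
        return 0;
    }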


@ -749,6 +749,7 @@ ether3_probe(struct expansion_card *ec, const struct ecard_id *id)
const struct ether3_data *data = id->data;
struct net_device *dev;
int bus_type, ret;
u8 addr[ETH_ALEN];
ether3_banner();
@ -776,7 +777,8 @@ ether3_probe(struct expansion_card *ec, const struct ecard_id *id)
priv(dev)->seeq = priv(dev)->base + data->base_offset;
dev->irq = ec->irq;
ether3_addr(dev->dev_addr, ec);
ether3_addr(addr, ec);
eth_hw_addr_set(dev, addr);
priv(dev)->dev = dev;
timer_setup(&priv(dev)->timer, ether3_ledoff, 0);


@ -756,7 +756,7 @@ static int sun8i_dwmac_reset(struct stmmac_priv *priv)
if (err) {
dev_err(priv->device, "EMAC reset timeout\n");
return -EFAULT;
return err;
}
return 0;
}


@@ -22,21 +22,21 @@
 #define ETHER_CLK_SEL_RMII_CLK_EN BIT(2)
 #define ETHER_CLK_SEL_RMII_CLK_RST BIT(3)
 #define ETHER_CLK_SEL_DIV_SEL_2 BIT(4)
-#define ETHER_CLK_SEL_DIV_SEL_20 BIT(0)
+#define ETHER_CLK_SEL_DIV_SEL_20 0
 #define ETHER_CLK_SEL_FREQ_SEL_125M (BIT(9) | BIT(8))
 #define ETHER_CLK_SEL_FREQ_SEL_50M BIT(9)
 #define ETHER_CLK_SEL_FREQ_SEL_25M BIT(8)
 #define ETHER_CLK_SEL_FREQ_SEL_2P5M 0
-#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN BIT(0)
+#define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN 0
 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC BIT(10)
 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV BIT(11)
-#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_IN BIT(0)
+#define ETHER_CLK_SEL_RX_CLK_EXT_SEL_IN 0
 #define ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC BIT(12)
 #define ETHER_CLK_SEL_RX_CLK_EXT_SEL_DIV BIT(13)
-#define ETHER_CLK_SEL_TX_CLK_O_TX_I BIT(0)
+#define ETHER_CLK_SEL_TX_CLK_O_TX_I 0
 #define ETHER_CLK_SEL_TX_CLK_O_RMII_I BIT(14)
 #define ETHER_CLK_SEL_TX_O_E_N_IN BIT(15)
-#define ETHER_CLK_SEL_RMII_CLK_SEL_IN BIT(0)
+#define ETHER_CLK_SEL_RMII_CLK_SEL_IN 0
 #define ETHER_CLK_SEL_RMII_CLK_SEL_RX_C BIT(16)
 
 #define ETHER_CLK_SEL_RX_TX_CLK_EN (ETHER_CLK_SEL_RX_CLK_EN | ETHER_CLK_SEL_TX_CLK_EN)
@@ -96,31 +96,41 @@ static void visconti_eth_fix_mac_speed(void *priv, unsigned int speed)
 	val |= ETHER_CLK_SEL_TX_O_E_N_IN;
 	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
 
+	/* Set Clock-Mux, Start clock, Set TX_O direction */
 	switch (dwmac->phy_intf_sel) {
 	case ETHER_CONFIG_INTF_RGMII:
 		val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+
+		val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+
+		val &= ~ETHER_CLK_SEL_TX_O_E_N_IN;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
 		break;
 	case ETHER_CONFIG_INTF_RMII:
 		val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_DIV |
-			ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC | ETHER_CLK_SEL_TX_O_E_N_IN |
+			ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV | ETHER_CLK_SEL_TX_O_E_N_IN |
 			ETHER_CLK_SEL_RMII_CLK_SEL_RX_C;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+
+		val |= ETHER_CLK_SEL_RMII_CLK_RST;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+
+		val |= ETHER_CLK_SEL_RMII_CLK_EN | ETHER_CLK_SEL_RX_TX_CLK_EN;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
 		break;
 	case ETHER_CONFIG_INTF_MII:
 	default:
 		val = clk_sel_val | ETHER_CLK_SEL_RX_CLK_EXT_SEL_RXC |
-			ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV | ETHER_CLK_SEL_TX_O_E_N_IN |
-			ETHER_CLK_SEL_RMII_CLK_EN;
+			ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC | ETHER_CLK_SEL_TX_O_E_N_IN;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
+
+		val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
+		writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
 		break;
 	}
 
-	/* Start clock */
-	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
-	val |= ETHER_CLK_SEL_RX_TX_CLK_EN;
-	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
-
-	val &= ~ETHER_CLK_SEL_TX_O_E_N_IN;
-	writel(val, dwmac->reg + REG_ETHER_CLOCK_SEL);
-
 	spin_unlock_irqrestore(&dwmac->lock, flags);
 }

@@ -194,7 +194,6 @@ struct stmmac_priv {
 	u32 tx_coal_timer[MTL_MAX_TX_QUEUES];
 	u32 rx_coal_frames[MTL_MAX_TX_QUEUES];
 
-	int tx_coalesce;
 	int hwts_tx_en;
 	bool tx_path_in_lpi_mode;
 	bool tso;
@@ -229,7 +228,6 @@ struct stmmac_priv {
 	unsigned int flow_ctrl;
 	unsigned int pause;
 	struct mii_bus *mii;
-	int mii_irq[PHY_MAX_ADDR];
 
 	struct phylink_config phylink_config;
 	struct phylink *phylink;

@@ -402,7 +402,7 @@ static void stmmac_lpi_entry_timer_config(struct stmmac_priv *priv, bool en)
  * Description: this function is to verify and enter in LPI mode in case of
  * EEE.
  */
-static void stmmac_enable_eee_mode(struct stmmac_priv *priv)
+static int stmmac_enable_eee_mode(struct stmmac_priv *priv)
 {
 	u32 tx_cnt = priv->plat->tx_queues_to_use;
 	u32 queue;
@@ -412,13 +412,14 @@ static void stmmac_enable_eee_mode(struct stmmac_priv *priv)
 		struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
 
 		if (tx_q->dirty_tx != tx_q->cur_tx)
-			return; /* still unfinished work */
+			return -EBUSY; /* still unfinished work */
 	}
 
 	/* Check and enter in LPI mode */
 	if (!priv->tx_path_in_lpi_mode)
 		stmmac_set_eee_mode(priv, priv->hw,
 			priv->plat->en_tx_lpi_clockgating);
+	return 0;
 }
 
 /**
@@ -450,8 +451,8 @@ static void stmmac_eee_ctrl_timer(struct timer_list *t)
 {
 	struct stmmac_priv *priv = from_timer(priv, t, eee_ctrl_timer);
 
-	stmmac_enable_eee_mode(priv);
-	mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+	if (stmmac_enable_eee_mode(priv))
+		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
 }
 
 /**
@@ -889,6 +890,9 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
 	bool xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
 	int ret;
 
+	if (priv->plat->ptp_clk_freq_config)
+		priv->plat->ptp_clk_freq_config(priv);
+
 	ret = stmmac_init_tstamp_counter(priv, STMMAC_HWTS_ACTIVE);
 	if (ret)
 		return ret;
@@ -911,8 +915,6 @@ static int stmmac_init_ptp(struct stmmac_priv *priv)
 	priv->hwts_tx_en = 0;
 	priv->hwts_rx_en = 0;
 
-	stmmac_ptp_register(priv);
-
 	return 0;
 }
@@ -2647,8 +2649,8 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 	if (priv->eee_enabled && !priv->tx_path_in_lpi_mode &&
 	    priv->eee_sw_timer_en) {
-		stmmac_enable_eee_mode(priv);
-		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
+		if (stmmac_enable_eee_mode(priv))
+			mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(priv->tx_lpi_timer));
 	}
 
 	/* We still have pending packets, let's call for a new scheduling */
@@ -3238,7 +3240,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
 /**
  * stmmac_hw_setup - setup mac in a usable state.
  *  @dev : pointer to the device structure.
- *  @init_ptp: initialize PTP if set
+ *  @ptp_register: register PTP if set
  *  Description:
  *  this is the main function to setup the HW in a usable state because the
  *  dma engine is reset, the core registers are configured (e.g. AXI,
@@ -3248,7 +3250,7 @@ static int stmmac_fpe_start_wq(struct stmmac_priv *priv)
  *  0 on success and an appropriate (-)ve integer as defined in errno.h
  *  file on failure.
  */
-static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+static int stmmac_hw_setup(struct net_device *dev, bool ptp_register)
 {
 	struct stmmac_priv *priv = netdev_priv(dev);
 	u32 rx_cnt = priv->plat->rx_queues_to_use;
@@ -3305,13 +3307,13 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
 	stmmac_mmc_setup(priv);
 
-	if (init_ptp) {
-		ret = stmmac_init_ptp(priv);
-		if (ret == -EOPNOTSUPP)
-			netdev_warn(priv->dev, "PTP not supported by HW\n");
-		else if (ret)
-			netdev_warn(priv->dev, "PTP init failed\n");
-	}
+	ret = stmmac_init_ptp(priv);
+	if (ret == -EOPNOTSUPP)
+		netdev_warn(priv->dev, "PTP not supported by HW\n");
+	else if (ret)
+		netdev_warn(priv->dev, "PTP init failed\n");
+	else if (ptp_register)
+		stmmac_ptp_register(priv);
 
 	priv->eee_tw_timer = STMMAC_DEFAULT_TWT_LS;

@@ -297,9 +297,6 @@ void stmmac_ptp_register(struct stmmac_priv *priv)
 {
 	int i;
 
-	if (priv->plat->ptp_clk_freq_config)
-		priv->plat->ptp_clk_freq_config(priv);
-
 	for (i = 0; i < priv->dma_cap.pps_out_num; i++) {
 		if (i >= STMMAC_PPS_MAX)
 			break;

@@ -1146,7 +1146,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
 static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
 					       int size)
 {
-	struct page_pool_params pp_params;
+	struct page_pool_params pp_params = {};
 	struct page_pool *pool;
 
 	pp_params.order = 0;
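
A small sketch of why the empty initializer matters here (struct and field names are illustrative, not the page_pool API): without it, any field the function never assigns carries uninitialized stack data into the callee; with it, every field starts at zero.

	struct example_params {
		unsigned int order;
		unsigned int flags;
	};

	static void example(void)
	{
		struct example_params p = {};	/* all fields zeroed */

		p.order = 0;	/* fields not set here are 0, not garbage */
	}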

@@ -1091,20 +1091,22 @@ static int tsi108_get_mac(struct net_device *dev)
 	struct tsi108_prv_data *data = netdev_priv(dev);
 	u32 word1 = TSI_READ(TSI108_MAC_ADDR1);
 	u32 word2 = TSI_READ(TSI108_MAC_ADDR2);
+	u8 addr[ETH_ALEN];
 
 	/* Note that the octets are reversed from what the manual says,
 	 * producing an even weirder ordering...
 	 */
 	if (word2 == 0 && word1 == 0) {
-		dev->dev_addr[0] = 0x00;
-		dev->dev_addr[1] = 0x06;
-		dev->dev_addr[2] = 0xd2;
-		dev->dev_addr[3] = 0x00;
-		dev->dev_addr[4] = 0x00;
+		addr[0] = 0x00;
+		addr[1] = 0x06;
+		addr[2] = 0xd2;
+		addr[3] = 0x00;
+		addr[4] = 0x00;
 		if (0x8 == data->phy)
-			dev->dev_addr[5] = 0x01;
+			addr[5] = 0x01;
 		else
-			dev->dev_addr[5] = 0x02;
+			addr[5] = 0x02;
+		eth_hw_addr_set(dev, addr);
 
 		word2 = (dev->dev_addr[0] << 16) | (dev->dev_addr[1] << 24);
@@ -1114,12 +1116,13 @@ static int tsi108_get_mac(struct net_device *dev)
 		TSI_WRITE(TSI108_MAC_ADDR1, word1);
 		TSI_WRITE(TSI108_MAC_ADDR2, word2);
 	} else {
-		dev->dev_addr[0] = (word2 >> 16) & 0xff;
-		dev->dev_addr[1] = (word2 >> 24) & 0xff;
-		dev->dev_addr[2] = (word1 >> 0) & 0xff;
-		dev->dev_addr[3] = (word1 >> 8) & 0xff;
-		dev->dev_addr[4] = (word1 >> 16) & 0xff;
-		dev->dev_addr[5] = (word1 >> 24) & 0xff;
+		addr[0] = (word2 >> 16) & 0xff;
+		addr[1] = (word2 >> 24) & 0xff;
+		addr[2] = (word1 >> 0) & 0xff;
+		addr[3] = (word1 >> 8) & 0xff;
+		addr[4] = (word1 >> 16) & 0xff;
+		addr[5] = (word1 >> 24) & 0xff;
+		eth_hw_addr_set(dev, addr);
 	}
 
 	if (!is_valid_ether_addr(dev->dev_addr)) {
@@ -1136,14 +1139,12 @@ static int tsi108_set_mac(struct net_device *dev, void *addr)
 {
 	struct tsi108_prv_data *data = netdev_priv(dev);
 	u32 word1, word2;
-	int i;
 
 	if (!is_valid_ether_addr(addr))
 		return -EADDRNOTAVAIL;
 
-	for (i = 0; i < 6; i++)
-		/* +2 is for the offset of the HW addr type */
-		dev->dev_addr[i] = ((unsigned char *)addr)[i + 2];
+	/* +2 is for the offset of the HW addr type */
+	eth_hw_addr_set(dev, ((unsigned char *)addr) + 2);
 
 	word2 = (dev->dev_addr[0] << 16) | (dev->dev_addr[1] << 24);

@@ -950,9 +950,7 @@ static int yam_siocdevprivate(struct net_device *dev, struct ifreq *ifr, void __
 		ym = memdup_user(data, sizeof(struct yamdrv_ioctl_mcs));
 		if (IS_ERR(ym))
 			return PTR_ERR(ym);
-		if (ym->cmd != SIOCYAMSMCS)
-			return -EINVAL;
-		if (ym->bitrate > YAM_MAXBITRATE) {
+		if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
 			kfree(ym);
 			return -EINVAL;
 		}
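
A minimal sketch of the leak class fixed above (names hypothetical): memdup_user() allocates, so every early return after it must free the copy; merging the checks leaves a single error path that already calls kfree().

	static int example_ioctl(void __user *data, size_t len)
	{
		struct example_cfg *cfg = memdup_user(data, len);

		if (IS_ERR(cfg))
			return PTR_ERR(cfg);	/* nothing was allocated */

		if (!example_cfg_valid(cfg)) {	/* hypothetical check */
			kfree(cfg);		/* don't leak the copy */
			return -EINVAL;
		}

		kfree(cfg);
		return 0;
	}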

@@ -854,6 +854,7 @@ static struct phy_driver broadcom_drivers[] = {
 	.phy_id_mask	= 0xfffffff0,
 	.name		= "Broadcom BCM54616S",
 	/* PHY_GBIT_FEATURES */
+	.soft_reset	= genphy_soft_reset,
 	.config_init	= bcm54xx_config_init,
 	.config_aneg	= bcm54616s_config_aneg,
 	.config_intr	= bcm_phy_config_intr,

@@ -1746,6 +1746,9 @@ void phy_detach(struct phy_device *phydev)
 	    phy_driver_is_genphy_10g(phydev))
 		device_release_driver(&phydev->mdio.dev);
 
+	/* Assert the reset signal */
+	phy_device_reset(phydev, 1);
+
 	/*
 	 * The phydev might go away on the put_device() below, so avoid
 	 * a use-after-free bug by reading the underlying bus first.
@@ -1757,9 +1760,6 @@ void phy_detach(struct phy_device *phydev)
 	ndev_owner = dev->dev.parent->driver->owner;
 	if (ndev_owner != bus->owner)
 		module_put(bus->owner);
-
-	/* Assert the reset signal */
-	phy_device_reset(phydev, 1);
 }
 EXPORT_SYMBOL(phy_detach);

@@ -651,6 +651,11 @@ struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode)
 	else if (ret < 0)
 		return ERR_PTR(ret);
 
+	if (!fwnode_device_is_available(ref.fwnode)) {
+		fwnode_handle_put(ref.fwnode);
+		return NULL;
+	}
+
 	bus = sfp_bus_get(ref.fwnode);
 	fwnode_handle_put(ref.fwnode);
 	if (!bus)

@@ -111,7 +111,7 @@ struct ethtool_link_ext_state_info {
 		enum ethtool_link_ext_substate_bad_signal_integrity bad_signal_integrity;
 		enum ethtool_link_ext_substate_cable_issue cable_issue;
 		enum ethtool_link_ext_substate_module module;
-		u8 __link_ext_substate;
+		u32 __link_ext_substate;
 	};
 };
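
A sketch of the endianness bug this fixes (illustrative, not the kernel's definition): the field sits in a union with int-sized enums, and generic code writes through an enum member and reads back the scalar. With a u8 the two views only overlap on little endian; on big endian the u8 aliases the most significant byte and always reads 0, so the substate was lost. A u32 keeps both union views the same size:

	union example_state {
		u32 as_enum;	/* enums are int-sized here */
		u8  as_u8;	/* overlaps the MSB on big endian */
	};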

@@ -2548,6 +2548,7 @@ struct packet_type {
 					      struct net_device *);
 	bool			(*id_match)(struct packet_type *ptype,
 					    struct sock *sk);
+	struct net		*af_packet_net;
 	void			*af_packet_priv;
 	struct list_head	list;
 };

@@ -86,4 +86,9 @@ extern struct pid_namespace *task_active_pid_ns(struct task_struct *tsk);
 void pidhash_init(void);
 void pid_idr_init(void);
 
+static inline bool task_is_in_init_pid_ns(struct task_struct *tsk)
+{
+	return task_active_pid_ns(tsk) == &init_pid_ns;
+}
+
 #endif /* _LINUX_PID_NS_H */

@@ -318,7 +318,7 @@ enum skb_drop_reason {
 	SKB_DROP_REASON_NO_SOCKET,
 	SKB_DROP_REASON_PKT_TOO_SMALL,
 	SKB_DROP_REASON_TCP_CSUM,
-	SKB_DROP_REASON_TCP_FILTER,
+	SKB_DROP_REASON_SOCKET_FILTER,
 	SKB_DROP_REASON_UDP_CSUM,
 	SKB_DROP_REASON_MAX,
 };

@@ -6,6 +6,8 @@
 #define RTR_SOLICITATION_INTERVAL	(4*HZ)
 #define RTR_SOLICITATION_MAX_INTERVAL	(3600*HZ)	/* 1 hour */
 
+#define MIN_VALID_LIFETIME		(2*3600)	/* 2 hours */
+
 #define TEMP_VALID_LIFETIME		(7*86400)
 #define TEMP_PREFERRED_LIFETIME		(86400)
 #define REGEN_MAX_RETRY			(3)

@@ -346,7 +346,7 @@ static inline bool bond_uses_primary(struct bonding *bond)
 static inline struct net_device *bond_option_active_slave_get_rcu(struct bonding *bond)
 {
-	struct slave *slave = rcu_dereference(bond->curr_active_slave);
+	struct slave *slave = rcu_dereference_rtnl(bond->curr_active_slave);
 
 	return bond_uses_primary(bond) && slave ? slave->dev : NULL;
 }
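
A minimal sketch of what rcu_dereference_rtnl() buys (the holder struct is illustrative): it is rcu_dereference_check() with "RTNL held" as an additional lockdep condition, so code paths that hold only rtnl_lock(), without rcu_read_lock(), no longer trigger an RCU-lockdep splat:

	#include <linux/rtnetlink.h>

	struct holder {
		struct net_device __rcu *dev;
	};

	static struct net_device *holder_dev(struct holder *h)
	{
		/* legal under rcu_read_lock() *or* while holding RTNL */
		return rcu_dereference_rtnl(h->dev);
	}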

@@ -525,19 +525,18 @@ static inline void ip_select_ident_segs(struct net *net, struct sk_buff *skb,
 {
 	struct iphdr *iph = ip_hdr(skb);
 
+	/* We had many attacks based on IPID, use the private
+	 * generator as much as we can.
+	 */
+	if (sk && inet_sk(sk)->inet_daddr) {
+		iph->id = htons(inet_sk(sk)->inet_id);
+		inet_sk(sk)->inet_id += segs;
+		return;
+	}
 	if ((iph->frag_off & htons(IP_DF)) && !skb->ignore_df) {
-		/* This is only to work around buggy Windows95/2000
-		 * VJ compression implementations. If the ID field
-		 * does not change, they drop every other packet in
-		 * a TCP stream using header compression.
-		 */
-		if (sk && inet_sk(sk)->inet_daddr) {
-			iph->id = htons(inet_sk(sk)->inet_id);
-			inet_sk(sk)->inet_id += segs;
-		} else {
-			iph->id = 0;
-		}
+		iph->id = 0;
 	} else {
+		/* Unfortunately we need the big hammer to get a suitable IPID */
 		__ip_select_ident(net, iph, segs);
 	}
 }
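
Summarizing the resulting IPID policy as a comment (paraphrase, not kernel text):

	/*
	 * connected socket     -> per-socket inet_id counter, never the
	 *                         shared generator
	 * DF set, unconnected  -> constant 0; the field is unused without
	 *                         fragmentation
	 * fragmentable traffic -> hashed generator, the one remaining user
	 */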

@@ -282,7 +282,7 @@ static inline bool fib6_get_cookie_safe(const struct fib6_info *f6i,
 	fn = rcu_dereference(f6i->fib6_node);
 
 	if (fn) {
-		*cookie = fn->fn_sernum;
+		*cookie = READ_ONCE(fn->fn_sernum);
 		/* pairs with smp_wmb() in __fib6_update_sernum_upto_root() */
 		smp_rmb();
 		status = true;
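
This and the later ip6_fib.c/route.c hunks annotate the same lockless field. A minimal sketch of the idiom (struct is illustrative): fn_sernum is written under the table lock but read without it, so both sides use the ONCE accessors to prevent load/store tearing and to mark the race as intentional for KCSAN:

	#include <linux/compiler.h>

	struct node { int sernum; };

	static void writer(struct node *n, int v)
	{
		WRITE_ONCE(n->sernum, v);	/* single, untorn store */
	}

	static int reader(const struct node *n)
	{
		return READ_ONCE(n->sernum);	/* single, untorn load */
	}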

@@ -370,7 +370,7 @@ static inline struct neighbour *ip_neigh_gw4(struct net_device *dev,
 {
 	struct neighbour *neigh;
 
-	neigh = __ipv4_neigh_lookup_noref(dev, daddr);
+	neigh = __ipv4_neigh_lookup_noref(dev, (__force u32)daddr);
 	if (unlikely(!neigh))
 		neigh = __neigh_create(&arp_tbl, &daddr, dev, false);

@@ -1369,6 +1369,7 @@ static inline bool tcp_checksum_complete(struct sk_buff *skb)
 
 bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb);
 
+#ifdef CONFIG_INET
 void __sk_defer_free_flush(struct sock *sk);
 
 static inline void sk_defer_free_flush(struct sock *sk)
@@ -1377,6 +1378,9 @@ static inline void sk_defer_free_flush(struct sock *sk)
 		return;
 	__sk_defer_free_flush(sk);
 }
+#else
+static inline void sk_defer_free_flush(struct sock *sk) {}
+#endif
 
 int tcp_filter(struct sock *sk, struct sk_buff *skb);
 void tcp_set_state(struct sock *sk, int state);
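
A minimal sketch of the stub pattern used above for the CONFIG_INET=n build (option and function names hypothetical): callers compile unchanged because an empty inline stands in when the real implementation is configured out.

	#ifdef CONFIG_EXAMPLE
	void example_flush(struct sock *sk);			/* real version */
	#else
	static inline void example_flush(struct sock *sk) {}	/* no-op stub */
	#endif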

@@ -14,7 +14,7 @@
 	EM(SKB_DROP_REASON_NO_SOCKET, NO_SOCKET)		\
 	EM(SKB_DROP_REASON_PKT_TOO_SMALL, PKT_TOO_SMALL)	\
 	EM(SKB_DROP_REASON_TCP_CSUM, TCP_CSUM)			\
-	EM(SKB_DROP_REASON_TCP_FILTER, TCP_FILTER)		\
+	EM(SKB_DROP_REASON_SOCKET_FILTER, SOCKET_FILTER)	\
 	EM(SKB_DROP_REASON_UDP_CSUM, UDP_CSUM)			\
 	EMe(SKB_DROP_REASON_MAX, MAX)

@@ -560,10 +560,10 @@ static bool __allowed_ingress(const struct net_bridge *br,
 		    !br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) {
 			if (*state == BR_STATE_FORWARDING) {
 				*state = br_vlan_get_pvid_state(vg);
-				return br_vlan_state_allowed(*state, true);
-			} else {
-				return true;
+				if (!br_vlan_state_allowed(*state, true))
+					goto drop;
 			}
+			return true;
 		}
 	}
 
 	v = br_vlan_find(vg, *vid);
@@ -2020,7 +2020,8 @@ static int br_vlan_rtm_dump(struct sk_buff *skb, struct netlink_callback *cb)
 			goto out_err;
 		}
 		err = br_vlan_dump_dev(dev, skb, cb, dump_flags);
-		if (err && err != -EMSGSIZE)
+		/* if the dump completed without an error we return 0 here */
+		if (err != -EMSGSIZE)
 			goto out_err;
 	} else {
 		for_each_netdev_rcu(net, dev) {

@@ -190,12 +190,23 @@ static const struct seq_operations softnet_seq_ops = {
 	.show  = softnet_seq_show,
 };
 
-static void *ptype_get_idx(loff_t pos)
+static void *ptype_get_idx(struct seq_file *seq, loff_t pos)
 {
+	struct list_head *ptype_list = NULL;
 	struct packet_type *pt = NULL;
+	struct net_device *dev;
 	loff_t i = 0;
 	int t;
 
+	for_each_netdev_rcu(seq_file_net(seq), dev) {
+		ptype_list = &dev->ptype_all;
+		list_for_each_entry_rcu(pt, ptype_list, list) {
+			if (i == pos)
+				return pt;
+			++i;
+		}
+	}
+
 	list_for_each_entry_rcu(pt, &ptype_all, list) {
 		if (i == pos)
 			return pt;
@@ -216,22 +227,40 @@ static void *ptype_seq_start(struct seq_file *seq, loff_t *pos)
 	__acquires(RCU)
 {
 	rcu_read_lock();
-	return *pos ? ptype_get_idx(*pos - 1) : SEQ_START_TOKEN;
+	return *pos ? ptype_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
 }
 
 static void *ptype_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 {
+	struct net_device *dev;
 	struct packet_type *pt;
 	struct list_head *nxt;
 	int hash;
 
 	++*pos;
 	if (v == SEQ_START_TOKEN)
-		return ptype_get_idx(0);
+		return ptype_get_idx(seq, 0);
 
 	pt = v;
 	nxt = pt->list.next;
+	if (pt->dev) {
+		if (nxt != &pt->dev->ptype_all)
+			goto found;
+
+		dev = pt->dev;
+		for_each_netdev_continue_rcu(seq_file_net(seq), dev) {
+			if (!list_empty(&dev->ptype_all)) {
+				nxt = dev->ptype_all.next;
+				goto found;
+			}
+		}
+
+		nxt = ptype_all.next;
+		goto ptype_all;
+	}
+
 	if (pt->type == htons(ETH_P_ALL)) {
+ptype_all:
 		if (nxt != &ptype_all)
 			goto found;
 		hash = 0;
@@ -260,7 +289,8 @@ static int ptype_seq_show(struct seq_file *seq, void *v)
 
 	if (v == SEQ_START_TOKEN)
 		seq_puts(seq, "Type Device Function\n");
-	else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) {
+	else if ((!pt->af_packet_net || net_eq(pt->af_packet_net, seq_file_net(seq))) &&
+		 (!pt->dev || net_eq(dev_net(pt->dev), seq_file_net(seq)))) {
 		if (pt->type == htons(ETH_P_ALL))
 			seq_puts(seq, "ALL ");
 		else

@@ -162,12 +162,19 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
 	iph->daddr = (opt && opt->opt.srr ? opt->opt.faddr : daddr);
 	iph->saddr = saddr;
 	iph->protocol = sk->sk_protocol;
-	if (ip_dont_fragment(sk, &rt->dst)) {
+	/* Do not bother generating IPID for small packets (eg SYNACK) */
+	if (skb->len <= IPV4_MIN_MTU || ip_dont_fragment(sk, &rt->dst)) {
 		iph->frag_off = htons(IP_DF);
 		iph->id = 0;
 	} else {
 		iph->frag_off = 0;
-		__ip_select_ident(net, iph, 1);
+		/* TCP packets here are SYNACK with fat IPv4/TCP options.
+		 * Avoid using the hashed IP ident generator.
+		 */
+		if (sk->sk_protocol == IPPROTO_TCP)
+			iph->id = (__force __be16)prandom_u32();
+		else
+			__ip_select_ident(net, iph, 1);
 	}
 
 	if (opt && opt->opt.optlen) {
@@ -825,15 +832,24 @@ int ip_do_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
 		/* Everything is OK. Generate! */
 		ip_fraglist_init(skb, iph, hlen, &iter);
 
-		if (iter.frag)
-			ip_options_fragment(iter.frag);
-
 		for (;;) {
 			/* Prepare header of the next frame,
 			 * before previous one went down. */
 			if (iter.frag) {
+				bool first_frag = (iter.offset == 0);
+
 				IPCB(iter.frag)->flags = IPCB(skb)->flags;
 				ip_fraglist_prepare(skb, &iter);
+				if (first_frag && IPCB(skb)->opt.optlen) {
+					/* ipcb->opt is not populated for frags
+					 * coming from __ip_make_skb(),
+					 * ip_options_fragment() needs optlen
+					 */
+					IPCB(iter.frag)->opt.optlen =
+						IPCB(skb)->opt.optlen;
+					ip_options_fragment(iter.frag);
+					ip_send_check(iter.iph);
+				}
 			}
 
 			skb->tstamp = tstamp;

@@ -220,7 +220,8 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
 			continue;
 		}
 
-		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)
+		if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
+		    sk->sk_bound_dev_if != inet_sdif(skb))
 			continue;
 
 		sock_hold(sk);

@@ -722,6 +722,7 @@ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 	int ret = -EINVAL;
 	int chk_addr_ret;
 
+	lock_sock(sk);
 	if (sk->sk_state != TCP_CLOSE || addr_len < sizeof(struct sockaddr_in))
 		goto out;
 
@@ -741,7 +742,9 @@ static int raw_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 	inet->inet_saddr = 0;  /* Use device */
 	sk_dst_reset(sk);
 	ret = 0;
-out:	return ret;
+out:
+	release_sock(sk);
+	return ret;
 }
 
 /*

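A minimal sketch of the locking shape raw_bind() now has (the validation is illustrative): take the socket lock on entry and funnel every exit through one label, so no early return can leave the lock held or the socket state unprotected.

	static int example_bind(struct sock *sk, int addr_len)
	{
		int ret = -EINVAL;

		lock_sock(sk);
		if (addr_len < 0)		/* hypothetical validation */
			goto out;
		ret = 0;
	out:
		release_sock(sk);		/* single unlock point */
		return ret;
	}
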
@@ -842,6 +842,7 @@ ssize_t tcp_splice_read(struct socket *sock, loff_t *ppos,
 	}
 
 	release_sock(sk);
+	sk_defer_free_flush(sk);
 
 	if (spliced)
 		return spliced;

@@ -2095,7 +2095,7 @@ process:
 	nf_reset_ct(skb);
 
 	if (tcp_filter(sk, skb)) {
-		drop_reason = SKB_DROP_REASON_TCP_FILTER;
+		drop_reason = SKB_DROP_REASON_SOCKET_FILTER;
 		goto discard_and_relse;
 	}
 	th = (const struct tcphdr *)skb->data;

@@ -2589,7 +2589,7 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
 				 __u32 valid_lft, u32 prefered_lft)
 {
 	struct inet6_ifaddr *ifp = ipv6_get_ifaddr(net, addr, dev, 1);
-	int create = 0;
+	int create = 0, update_lft = 0;
 
 	if (!ifp && valid_lft) {
 		int max_addresses = in6_dev->cnf.max_addresses;
@@ -2633,19 +2633,32 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
 		unsigned long now;
 		u32 stored_lft;
 
-		/* Update lifetime (RFC4862 5.5.3 e)
-		 * We deviate from RFC4862 by honoring all Valid Lifetimes to
-		 * improve the reaction of SLAAC to renumbering events
-		 * (draft-gont-6man-slaac-renum-06, Section 4.2)
-		 */
+		/* update lifetime (RFC2462 5.5.3 e) */
 		spin_lock_bh(&ifp->lock);
 		now = jiffies;
 		if (ifp->valid_lft > (now - ifp->tstamp) / HZ)
 			stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ;
 		else
 			stored_lft = 0;
+
+		if (!create && stored_lft) {
+			const u32 minimum_lft = min_t(u32,
+				stored_lft, MIN_VALID_LIFETIME);
+			valid_lft = max(valid_lft, minimum_lft);
+
+			/* RFC4862 Section 5.5.3e:
+			 * "Note that the preferred lifetime of the
+			 *  corresponding address is always reset to
+			 *  the Preferred Lifetime in the received
+			 *  Prefix Information option, regardless of
+			 *  whether the valid lifetime is also reset or
+			 *  ignored."
+			 *
+			 * So we should always update prefered_lft here.
+			 */
+			update_lft = 1;
+		}
+
+		if (update_lft) {
 			ifp->valid_lft = valid_lft;
 			ifp->prefered_lft = prefered_lft;
 			ifp->tstamp = now;

@@ -112,7 +112,7 @@ void fib6_update_sernum(struct net *net, struct fib6_info *f6i)
 	fn = rcu_dereference_protected(f6i->fib6_node,
 			lockdep_is_held(&f6i->fib6_table->tb6_lock));
 	if (fn)
-		fn->fn_sernum = fib6_new_sernum(net);
+		WRITE_ONCE(fn->fn_sernum, fib6_new_sernum(net));
 }
 
 /*
@@ -590,12 +590,13 @@ static int fib6_dump_table(struct fib6_table *table, struct sk_buff *skb,
 		spin_unlock_bh(&table->tb6_lock);
 		if (res > 0) {
 			cb->args[4] = 1;
-			cb->args[5] = w->root->fn_sernum;
+			cb->args[5] = READ_ONCE(w->root->fn_sernum);
 		}
 	} else {
-		if (cb->args[5] != w->root->fn_sernum) {
+		int sernum = READ_ONCE(w->root->fn_sernum);
+
+		if (cb->args[5] != sernum) {
 			/* Begin at the root if the tree changed */
-			cb->args[5] = w->root->fn_sernum;
+			cb->args[5] = sernum;
 			w->state = FWS_INIT;
 			w->node = w->root;
 			w->skip = w->count;
@@ -1345,7 +1346,7 @@ static void __fib6_update_sernum_upto_root(struct fib6_info *rt,
 	/* paired with smp_rmb() in fib6_get_cookie_safe() */
 	smp_wmb();
 	while (fn) {
-		fn->fn_sernum = sernum;
+		WRITE_ONCE(fn->fn_sernum, sernum);
 		fn = rcu_dereference_protected(fn->parent,
 				lockdep_is_held(&rt->fib6_table->tb6_lock));
 	}
@@ -2174,8 +2175,8 @@ static int fib6_clean_node(struct fib6_walker *w)
 	};
 
 	if (c->sernum != FIB6_NO_SERNUM_CHANGE &&
-	    w->node->fn_sernum != c->sernum)
-		w->node->fn_sernum = c->sernum;
+	    READ_ONCE(w->node->fn_sernum) != c->sernum)
+		WRITE_ONCE(w->node->fn_sernum, c->sernum);
 
 	if (!c->func) {
 		WARN_ON_ONCE(c->sernum == FIB6_NO_SERNUM_CHANGE);
@@ -2543,7 +2544,7 @@ static void ipv6_route_seq_setup_walk(struct ipv6_route_iter *iter,
 	iter->w.state = FWS_INIT;
 	iter->w.node = iter->w.root;
 	iter->w.args = iter;
-	iter->sernum = iter->w.root->fn_sernum;
+	iter->sernum = READ_ONCE(iter->w.root->fn_sernum);
 	INIT_LIST_HEAD(&iter->w.lh);
 	fib6_walker_link(net, &iter->w);
 }
@@ -2571,8 +2572,10 @@ static struct fib6_table *ipv6_route_seq_next_table(struct fib6_table *tbl,
 
 static void ipv6_route_check_sernum(struct ipv6_route_iter *iter)
 {
-	if (iter->sernum != iter->w.root->fn_sernum) {
-		iter->sernum = iter->w.root->fn_sernum;
+	int sernum = READ_ONCE(iter->w.root->fn_sernum);
+
+	if (iter->sernum != sernum) {
+		iter->sernum = sernum;
 		iter->w.state = FWS_INIT;
 		iter->w.node = iter->w.root;
 		WARN_ON(iter->w.skip);

@@ -1036,14 +1036,14 @@ int ip6_tnl_xmit_ctl(struct ip6_tnl *t,
 		if (unlikely(!ipv6_chk_addr_and_flags(net, laddr, ldev, false,
 						      0, IFA_F_TENTATIVE)))
-			pr_warn("%s xmit: Local address not yet configured!\n",
-				p->name);
+			pr_warn_ratelimited("%s xmit: Local address not yet configured!\n",
+					    p->name);
 		else if (!(p->flags & IP6_TNL_F_ALLOW_LOCAL_REMOTE) &&
 			 !ipv6_addr_is_multicast(raddr) &&
 			 unlikely(ipv6_chk_addr_and_flags(net, raddr, ldev,
 							  true, 0, IFA_F_TENTATIVE)))
-			pr_warn("%s xmit: Routing loop! Remote address found on this node!\n",
-				p->name);
+			pr_warn_ratelimited("%s xmit: Routing loop! Remote address found on this node!\n",
+					    p->name);
 		else
 			ret = 1;
 		rcu_read_unlock();

@@ -2802,7 +2802,7 @@ static void ip6_link_failure(struct sk_buff *skb)
 		if (from) {
 			fn = rcu_dereference(from->fib6_node);
 			if (fn && (rt->rt6i_flags & RTF_DEFAULT))
-				fn->fn_sernum = -1;
+				WRITE_ONCE(fn->fn_sernum, -1);
 		}
 	}
 	rcu_read_unlock();

@@ -478,6 +478,20 @@ __lookup_addr_by_id(struct pm_nl_pernet *pernet, unsigned int id)
 	return NULL;
 }
 
+static struct mptcp_pm_addr_entry *
+__lookup_addr(struct pm_nl_pernet *pernet, const struct mptcp_addr_info *info,
+	      bool lookup_by_id)
+{
+	struct mptcp_pm_addr_entry *entry;
+
+	list_for_each_entry(entry, &pernet->local_addr_list, list) {
+		if ((!lookup_by_id && addresses_equal(&entry->addr, info, true)) ||
+		    (lookup_by_id && entry->addr.id == info->id))
+			return entry;
+	}
+	return NULL;
+}
+
 static int
 lookup_id_by_addr(struct pm_nl_pernet *pernet, const struct mptcp_addr_info *addr)
 {
@@ -777,7 +791,7 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
 			removed = true;
 			__MPTCP_INC_STATS(sock_net(sk), rm_type);
 		}
-		__set_bit(rm_list->ids[1], msk->pm.id_avail_bitmap);
+		__set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap);
 		if (!removed)
 			continue;
@@ -1763,18 +1777,21 @@ static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
 		return -EOPNOTSUPP;
 	}
 
-	list_for_each_entry(entry, &pernet->local_addr_list, list) {
-		if ((!lookup_by_id && addresses_equal(&entry->addr, &addr.addr, true)) ||
-		    (lookup_by_id && entry->addr.id == addr.addr.id)) {
-			mptcp_nl_addr_backup(net, &entry->addr, bkup);
-
-			if (bkup)
-				entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
-			else
-				entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
-		}
-	}
+	spin_lock_bh(&pernet->lock);
+	entry = __lookup_addr(pernet, &addr.addr, lookup_by_id);
+	if (!entry) {
+		spin_unlock_bh(&pernet->lock);
+		return -EINVAL;
+	}
+
+	if (bkup)
+		entry->flags |= MPTCP_PM_ADDR_FLAG_BACKUP;
+	else
+		entry->flags &= ~MPTCP_PM_ADDR_FLAG_BACKUP;
+	addr = *entry;
+	spin_unlock_bh(&pernet->lock);
+
+	mptcp_nl_addr_backup(net, &addr.addr, bkup);
 	return 0;
 }

@@ -408,7 +408,7 @@ DECLARE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
 struct mptcp_subflow_context {
 	struct list_head node;/* conn_list of subflows */
 
-	char reset_start[0];
+	struct_group(reset,
 
 	unsigned long avg_pacing_rate; /* protected by msk socket lock */
 	u64 local_key;
@@ -458,7 +458,7 @@ struct mptcp_subflow_context {
 
 	long delegated_status;
 
-	char reset_end[0];
+	);
 
 	struct list_head delegated_node;   /* link into delegated_action, protected by local BH */
@@ -494,7 +494,7 @@ mptcp_subflow_tcp_sock(const struct mptcp_subflow_context *subflow)
 static inline void
 mptcp_subflow_ctx_reset(struct mptcp_subflow_context *subflow)
 {
-	memset(subflow->reset_start, 0, subflow->reset_end - subflow->reset_start);
+	memset(&subflow->reset, 0, sizeof(subflow->reset));
 
 	subflow->request_mptcp = 1;
 }
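
A minimal sketch of the struct_group() conversion (fields illustrative): the macro wraps a run of members so they can be addressed as one appropriately sized object, which keeps the reset memset() within bounds under FORTIFY_SOURCE, unlike the old zero-length reset_start/reset_end marker trick:

	#include <linux/stddef.h>
	#include <linux/string.h>

	struct example_ctx {
		int keep;			/* survives a reset */
		struct_group(cfg,		/* cleared as a unit */
			int a;
			int b;
		);
	};

	static void example_reset(struct example_ctx *c)
	{
		memset(&c->cfg, 0, sizeof(c->cfg));	/* exactly a and b */
	}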

@@ -1924,15 +1924,17 @@ repeat:
 		pr_debug("nf_conntrack_in: Can't track with proto module\n");
 		nf_ct_put(ct);
 		skb->_nfct = 0;
-		NF_CT_STAT_INC_ATOMIC(state->net, invalid);
-		if (ret == -NF_DROP)
-			NF_CT_STAT_INC_ATOMIC(state->net, drop);
 		/* Special case: TCP tracker reports an attempt to reopen a
 		 * closed/aborted connection. We have to go back and create a
 		 * fresh conntrack.
 		 */
 		if (ret == -NF_REPEAT)
 			goto repeat;
+
+		NF_CT_STAT_INC_ATOMIC(state->net, invalid);
+		if (ret == -NF_DROP)
+			NF_CT_STAT_INC_ATOMIC(state->net, drop);
+
 		ret = -ret;
 		goto out;
 	}

@@ -20,13 +20,14 @@
 #include <net/netfilter/nf_conntrack_helper.h>
 #include <net/netfilter/nf_conntrack_expect.h>
 
+#define HELPER_NAME "netbios-ns"
 #define NMBD_PORT 137
 
 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
 MODULE_DESCRIPTION("NetBIOS name service broadcast connection tracking helper");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS("ip_conntrack_netbios_ns");
-MODULE_ALIAS_NFCT_HELPER("netbios_ns");
+MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
 
 static unsigned int timeout __read_mostly = 3;
 module_param(timeout, uint, 0400);
@@ -44,7 +45,7 @@ static int netbios_ns_help(struct sk_buff *skb, unsigned int protoff,
 }
 
 static struct nf_conntrack_helper helper __read_mostly = {
-	.name			= "netbios-ns",
+	.name			= HELPER_NAME,
 	.tuple.src.l3num	= NFPROTO_IPV4,
 	.tuple.src.u.udp.port	= cpu_to_be16(NMBD_PORT),
 	.tuple.dst.protonum	= IPPROTO_UDP,

@@ -8264,14 +8264,12 @@ static int nf_tables_commit_chain_prepare(struct net *net, struct nft_chain *cha
 	void *data, *data_boundary;
 	struct nft_rule_dp *prule;
 	struct nft_rule *rule;
-	int i;
 
 	/* already handled or inactive chain? */
 	if (chain->blob_next || !nft_is_active_next(net, chain))
 		return 0;
 
 	rule = list_entry(&chain->rules, struct nft_rule, list);
-	i = 0;
 
 	data_size = 0;
 	list_for_each_entry_continue(rule, &chain->rules, list) {
@@ -8301,7 +8299,7 @@ static int nf_tables_commit_chain_prepare(struct net *net, struct nft_chain *cha
 			return -ENOMEM;
 
 		size = 0;
-		track.last = last;
+		track.last = nft_expr_last(rule);
 		nft_rule_for_each_expr(expr, last, rule) {
 			track.cur = expr;

@@ -62,6 +62,7 @@ static int nft_connlimit_do_init(const struct nft_ctx *ctx,
 {
 	bool invert = false;
 	u32 flags, limit;
+	int err;
 
 	if (!tb[NFTA_CONNLIMIT_COUNT])
 		return -EINVAL;
@@ -84,7 +85,15 @@ static int nft_connlimit_do_init(const struct nft_ctx *ctx,
 	priv->limit	= limit;
 	priv->invert	= invert;
 
-	return nf_ct_netns_get(ctx->net, ctx->family);
+	err = nf_ct_netns_get(ctx->net, ctx->family);
+	if (err < 0)
+		goto err_netns;
+
+	return 0;
+err_netns:
+	kfree(priv->list);
+	return err;
 }
 
 static void nft_connlimit_do_destroy(const struct nft_ctx *ctx,
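
A minimal sketch of the unwind shape adopted above (resource names hypothetical): once an earlier allocation has succeeded, every later failure must branch to a label that releases it, instead of returning the error directly.

	static int example_init(struct example_priv *priv)
	{
		int err;

		priv->list = kzalloc(sizeof(*priv->list), GFP_KERNEL);
		if (!priv->list)
			return -ENOMEM;

		err = example_acquire();	/* hypothetical */
		if (err < 0)
			goto err_free;

		return 0;

	err_free:
		kfree(priv->list);	/* don't leak on the error path */
		return err;
	}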

@@ -1774,6 +1774,7 @@ static int fanout_add(struct sock *sk, struct fanout_args *args)
 		match->prot_hook.dev = po->prot_hook.dev;
 		match->prot_hook.func = packet_rcv_fanout;
 		match->prot_hook.af_packet_priv = match;
+		match->prot_hook.af_packet_net = read_pnet(&match->net);
 		match->prot_hook.id_match = match_fanout_group;
 		match->max_num_members = args->max_num_members;
 		list_add(&match->list, &fanout_list);
@@ -3353,6 +3354,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
 	po->prot_hook.func = packet_rcv_spkt;
 
 	po->prot_hook.af_packet_priv = sk;
+	po->prot_hook.af_packet_net = sock_net(sk);
 
 	if (proto) {
 		po->prot_hook.type = proto;

@@ -157,7 +157,7 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
 static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 {
 	struct sk_buff *skb;
-	unsigned long resend_at, rto_j;
+	unsigned long resend_at;
 	rxrpc_seq_t cursor, seq, top;
 	ktime_t now, max_age, oldest, ack_ts;
 	int ix;
@@ -165,10 +165,8 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 
 	_enter("{%d,%d}", call->tx_hard_ack, call->tx_top);
 
-	rto_j = call->peer->rto_j;
-
 	now = ktime_get_real();
-	max_age = ktime_sub(now, jiffies_to_usecs(rto_j));
+	max_age = ktime_sub(now, jiffies_to_usecs(call->peer->rto_j));
 
 	spin_lock_bh(&call->lock);
 
@@ -213,7 +211,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 	}
 
 	resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
-	resend_at += jiffies + rto_j;
+	resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, retrans);
 	WRITE_ONCE(call->resend_at, resend_at);
 
 	if (unacked)

@@ -468,7 +468,7 @@ done:
 	if (call->peer->rtt_count > 1) {
 		unsigned long nowj = jiffies, ack_lost_at;
 
-		ack_lost_at = rxrpc_get_rto_backoff(call->peer, retrans);
+		ack_lost_at = rxrpc_get_rto_backoff(call->peer, false);
 		ack_lost_at += nowj;
 		WRITE_ONCE(call->ack_lost_at, ack_lost_at);
 		rxrpc_reduce_call_timer(call, ack_lost_at, nowj,

@@ -1204,7 +1204,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
 
 	err = -ENOENT;
 	if (!ops) {
-		NL_SET_ERR_MSG(extack, "Specified qdisc not found");
+		NL_SET_ERR_MSG(extack, "Specified qdisc kind is unknown");
 		goto err_out;
 	}

@@ -1810,6 +1810,26 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 	if (!hopt->rate.rate || !hopt->ceil.rate)
 		goto failure;
 
+	if (q->offload) {
+		/* Options not supported by the offload. */
+		if (hopt->rate.overhead || hopt->ceil.overhead) {
+			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the overhead parameter");
+			goto failure;
+		}
+		if (hopt->rate.mpu || hopt->ceil.mpu) {
+			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the mpu parameter");
+			goto failure;
+		}
+		if (hopt->quantum) {
+			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the quantum parameter");
+			goto failure;
+		}
+		if (hopt->prio) {
+			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the prio parameter");
+			goto failure;
+		}
+	}
+
 	/* Keeping backward compatible with rate_table based iproute2 tc */
 	if (hopt->rate.linklayer == TC_LINKLAYER_UNAWARE)
 		qdisc_put_rtab(qdisc_get_rtab(&hopt->rate, tb[TCA_HTB_RTAB],

@@ -566,12 +566,17 @@ static void smc_stat_fallback(struct smc_sock *smc)
 	mutex_unlock(&net->smc.mutex_fback_rsn);
 }
 
-static void smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
+static int smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
 {
 	wait_queue_head_t *smc_wait = sk_sleep(&smc->sk);
-	wait_queue_head_t *clc_wait = sk_sleep(smc->clcsock->sk);
+	wait_queue_head_t *clc_wait;
 	unsigned long flags;
 
+	mutex_lock(&smc->clcsock_release_lock);
+	if (!smc->clcsock) {
+		mutex_unlock(&smc->clcsock_release_lock);
+		return -EBADF;
+	}
 	smc->use_fallback = true;
 	smc->fallback_rsn = reason_code;
 	smc_stat_fallback(smc);
@@ -586,18 +591,30 @@ static void smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
 		 * smc socket->wq, which should be removed
 		 * to clcsocket->wq during the fallback.
 		 */
+		clc_wait = sk_sleep(smc->clcsock->sk);
 		spin_lock_irqsave(&smc_wait->lock, flags);
 		spin_lock_nested(&clc_wait->lock, SINGLE_DEPTH_NESTING);
 		list_splice_init(&smc_wait->head, &clc_wait->head);
 		spin_unlock(&clc_wait->lock);
 		spin_unlock_irqrestore(&smc_wait->lock, flags);
 	}
+	mutex_unlock(&smc->clcsock_release_lock);
+	return 0;
 }
 
 /* fall back during connect */
 static int smc_connect_fallback(struct smc_sock *smc, int reason_code)
 {
-	smc_switch_to_fallback(smc, reason_code);
+	struct net *net = sock_net(&smc->sk);
+	int rc = 0;
+
+	rc = smc_switch_to_fallback(smc, reason_code);
+	if (rc) { /* fallback fails */
+		this_cpu_inc(net->smc.smc_stats->clnt_hshake_err_cnt);
+		if (smc->sk.sk_state == SMC_INIT)
+			sock_put(&smc->sk); /* passive closing */
+		return rc;
+	}
 	smc_copy_sock_settings_to_clc(smc);
 	smc->connect_nonblock = 0;
 	if (smc->sk.sk_state == SMC_INIT)
@@ -1518,11 +1535,12 @@ static void smc_listen_decline(struct smc_sock *new_smc, int reason_code,
 {
 	/* RDMA setup failed, switch back to TCP */
 	smc_conn_abort(new_smc, local_first);
-	if (reason_code < 0) { /* error, no fallback possible */
+	if (reason_code < 0 ||
+	    smc_switch_to_fallback(new_smc, reason_code)) {
+		/* error, no fallback possible */
 		smc_listen_out_err(new_smc);
 		return;
 	}
-	smc_switch_to_fallback(new_smc, reason_code);
 	if (reason_code && reason_code != SMC_CLC_DECL_PEERDECL) {
 		if (smc_clc_send_decline(new_smc, reason_code, version) < 0) {
 			smc_listen_out_err(new_smc);
@@ -1964,8 +1982,11 @@ static void smc_listen_work(struct work_struct *work)
 	/* check if peer is smc capable */
 	if (!tcp_sk(newclcsock->sk)->syn_smc) {
-		smc_switch_to_fallback(new_smc, SMC_CLC_DECL_PEERNOSMC);
-		smc_listen_out_connected(new_smc);
+		rc = smc_switch_to_fallback(new_smc, SMC_CLC_DECL_PEERNOSMC);
+		if (rc)
+			smc_listen_out_err(new_smc);
+		else
+			smc_listen_out_connected(new_smc);
 		return;
 	}
@@ -2254,7 +2275,9 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 	if (msg->msg_flags & MSG_FASTOPEN) {
 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
-			smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
+			rc = smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
+			if (rc)
+				goto out;
 		} else {
 			rc = -EINVAL;
 			goto out;
@@ -2447,6 +2470,11 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 	/* generic setsockopts reaching us here always apply to the
 	 * CLC socket
 	 */
+	mutex_lock(&smc->clcsock_release_lock);
+	if (!smc->clcsock) {
+		mutex_unlock(&smc->clcsock_release_lock);
+		return -EBADF;
+	}
 	if (unlikely(!smc->clcsock->ops->setsockopt))
 		rc = -EOPNOTSUPP;
 	else
@@ -2456,6 +2484,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 		sk->sk_err = smc->clcsock->sk->sk_err;
 		sk_error_report(sk);
 	}
+	mutex_unlock(&smc->clcsock_release_lock);
 
 	if (optlen < sizeof(int))
 		return -EINVAL;
@@ -2472,7 +2501,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 	case TCP_FASTOPEN_NO_COOKIE:
 		/* option not supported by SMC */
 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
-			smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
+			rc = smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
 		} else {
 			rc = -EINVAL;
 		}
@@ -2515,13 +2544,23 @@ static int smc_getsockopt(struct socket *sock, int level, int optname,
 			  char __user *optval, int __user *optlen)
 {
 	struct smc_sock *smc;
+	int rc;
 
 	smc = smc_sk(sock->sk);
+	mutex_lock(&smc->clcsock_release_lock);
+	if (!smc->clcsock) {
+		mutex_unlock(&smc->clcsock_release_lock);
+		return -EBADF;
+	}
 	/* socket options apply to the CLC socket */
-	if (unlikely(!smc->clcsock->ops->getsockopt))
+	if (unlikely(!smc->clcsock->ops->getsockopt)) {
+		mutex_unlock(&smc->clcsock_release_lock);
 		return -EOPNOTSUPP;
-	return smc->clcsock->ops->getsockopt(smc->clcsock, level, optname,
-					     optval, optlen);
+	}
+	rc = smc->clcsock->ops->getsockopt(smc->clcsock, level, optname,
+					   optval, optlen);
+	mutex_unlock(&smc->clcsock_release_lock);
+	return rc;
 }
 
 static int smc_ioctl(struct socket *sock, unsigned int cmd,

@@ -240,11 +240,8 @@ static int check_ioam6_data(__u8 **p, struct ioam6_trace_hdr *ioam6h,
 		*p += sizeof(__u32);
 	}
 
-	if (ioam6h->type.bit6) {
-		if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
-			return 1;
+	if (ioam6h->type.bit6)
 		*p += sizeof(__u32);
-	}
 
 	if (ioam6h->type.bit7) {
 		if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)

@@ -75,6 +75,7 @@ init()
 		# let $ns2 reach any $ns1 address from any interface
 		ip -net "$ns2" route add default via 10.0.$i.1 dev ns2eth$i metric 10$i
+		ip -net "$ns2" route add default via dead:beef:$i::1 dev ns2eth$i metric 10$i
 	done
 }
@@ -1476,7 +1477,7 @@ ipv6_tests()
 	reset
 	ip netns exec $ns1 ./pm_nl_ctl limits 0 1
 	ip netns exec $ns2 ./pm_nl_ctl limits 0 1
-	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 flags subflow
+	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 dev ns2eth3 flags subflow
 	run_tests $ns1 $ns2 dead:beef:1::1 0 0 0 slow
 	chk_join_nr "single subflow IPv6" 1 1 1
@@ -1511,7 +1512,7 @@ ipv6_tests()
 	ip netns exec $ns1 ./pm_nl_ctl limits 0 2
 	ip netns exec $ns1 ./pm_nl_ctl add dead:beef:2::1 flags signal
 	ip netns exec $ns2 ./pm_nl_ctl limits 1 2
-	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 flags subflow
+	ip netns exec $ns2 ./pm_nl_ctl add dead:beef:3::2 dev ns2eth3 flags subflow
 	run_tests $ns1 $ns2 dead:beef:1::1 0 -1 -1 slow
 	chk_join_nr "remove subflow and signal IPv6" 2 2 2
 	chk_add_nr 1 1