Merge tag 'net-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Quite smaller than usual. Notably it includes the fix for the unix
  regression from the past weeks. The TCP window fix will require some
  follow-up, already queued.

  Current release - regressions:

   - af_unix: fix garbage collection of embryos

  Previous releases - regressions:

   - af_unix: fix race between GC and receive path

   - ipv6: sr: fix missing sk_buff release in seg6_input_core

   - tcp: remove 64 KByte limit for initial tp->rcv_wnd value

   - eth: r8169: fix rx hangup

   - eth: lan966x: remove ptp traps in case the ptp is not enabled

   - eth: ixgbe: fix link breakage vs cisco switches

   - eth: ice: prevent ethtool from corrupting the channels

  Previous releases - always broken:

   - openvswitch: set the skbuff pkt_type for proper pmtud support

   - tcp: Fix shift-out-of-bounds in dctcp_update_alpha()

  Misc:

   - a bunch of selftests stabilization patches"

* tag 'net-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (25 commits)
  r8169: Fix possible ring buffer corruption on fragmented Tx packets.
  idpf: Interpret .set_channels() input differently
  ice: Interpret .set_channels() input differently
  nfc: nci: Fix handling of zero-length payload packets in nci_rx_work()
  net: relax socket state check at accept time.
  tcp: remove 64 KByte limit for initial tp->rcv_wnd value
  net: ti: icssg_prueth: Fix NULL pointer dereference in prueth_probe()
  tls: fix missing memory barrier in tls_init
  net: fec: avoid lock evasion when reading pps_enable
  Revert "ixgbe: Manual AN-37 for troublesome link partners for X550 SFI"
  testing: net-drv: use stats64 for testing
  net: mana: Fix the extra HZ in mana_hwc_send_request
  net: lan966x: Remove ptp traps in case the ptp is not enabled.
  openvswitch: Set the skbuff pkt_type for proper pmtud support.
  selftest: af_unix: Make SCM_RIGHTS into OOB data.
  af_unix: Fix garbage collection of embryos carrying OOB with SCM_RIGHTS
  tcp: Fix shift-out-of-bounds in dctcp_update_alpha().
  selftests/net: use tc rule to filter the na packet
  ipv6: sr: fix memleak in seg6_hmac_init_algo
  af_unix: Update unix_sk(sk)->oob_skb under sk_receive_queue lock.
  ...
Linus Torvalds 2024-05-23 12:49:37 -07:00
commit 66ad4829dd
26 changed files with 243 additions and 245 deletions


@ -49,7 +49,9 @@ obj-$(CONFIG_MHI_NET) += mhi_net.o
obj-$(CONFIG_ARCNET) += arcnet/
obj-$(CONFIG_CAIF) += caif/
obj-$(CONFIG_CAN) += can/
obj-$(CONFIG_NET_DSA) += dsa/
ifdef CONFIG_NET_DSA
obj-y += dsa/
endif
obj-$(CONFIG_ETHERNET) += ethernet/
obj-$(CONFIG_FDDI) += fddi/
obj-$(CONFIG_HIPPI) += hippi/


@ -104,14 +104,13 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
struct timespec64 ts;
u64 ns;
if (fep->pps_enable == enable)
return 0;
fep->pps_channel = DEFAULT_PPS_CHANNEL;
fep->reload_period = PPS_OUPUT_RELOAD_PERIOD;
spin_lock_irqsave(&fep->tmreg_lock, flags);
if (fep->pps_enable == enable) {
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
return 0;
}
if (enable) {
/* clear capture or output compare interrupt status if have.
*/
@ -532,6 +531,9 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
int ret = 0;
if (rq->type == PTP_CLK_REQ_PPS) {
fep->pps_channel = DEFAULT_PPS_CHANNEL;
fep->reload_period = PPS_OUPUT_RELOAD_PERIOD;
ret = fec_ptp_enable_pps(fep, on);
return ret;


@ -3593,7 +3593,6 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
struct ice_pf *pf = vsi->back;
int new_rx = 0, new_tx = 0;
bool locked = false;
u32 curr_combined;
int ret = 0;
/* do not support changing channels in Safe Mode */
@ -3615,22 +3614,8 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
return -EOPNOTSUPP;
}
curr_combined = ice_get_combined_cnt(vsi);
/* these checks are for cases where user didn't specify a particular
* value on cmd line but we get non-zero value anyway via
* get_channels(); look at ethtool.c in ethtool repository (the user
* space part), particularly, do_schannels() routine
*/
if (ch->rx_count == vsi->num_rxq - curr_combined)
ch->rx_count = 0;
if (ch->tx_count == vsi->num_txq - curr_combined)
ch->tx_count = 0;
if (ch->combined_count == curr_combined)
ch->combined_count = 0;
if (!(ch->combined_count || (ch->rx_count && ch->tx_count))) {
netdev_err(dev, "Please specify at least 1 Rx and 1 Tx channel\n");
if (ch->rx_count && ch->tx_count) {
netdev_err(dev, "Dedicated RX or TX channels cannot be used simultaneously\n");
return -EINVAL;
}


@ -222,14 +222,19 @@ static int idpf_set_channels(struct net_device *netdev,
struct ethtool_channels *ch)
{
struct idpf_vport_config *vport_config;
u16 combined, num_txq, num_rxq;
unsigned int num_req_tx_q;
unsigned int num_req_rx_q;
struct idpf_vport *vport;
u16 num_txq, num_rxq;
struct device *dev;
int err = 0;
u16 idx;
if (ch->rx_count && ch->tx_count) {
netdev_err(netdev, "Dedicated RX or TX channels cannot be used simultaneously\n");
return -EINVAL;
}
idpf_vport_ctrl_lock(netdev);
vport = idpf_netdev_to_vport(netdev);
@ -239,20 +244,6 @@ static int idpf_set_channels(struct net_device *netdev,
num_txq = vport_config->user_config.num_req_tx_qs;
num_rxq = vport_config->user_config.num_req_rx_qs;
combined = min(num_txq, num_rxq);
/* these checks are for cases where user didn't specify a particular
* value on cmd line but we get non-zero value anyway via
* get_channels(); look at ethtool.c in ethtool repository (the user
* space part), particularly, do_schannels() routine
*/
if (ch->combined_count == combined)
ch->combined_count = 0;
if (ch->combined_count && ch->rx_count == num_rxq - combined)
ch->rx_count = 0;
if (ch->combined_count && ch->tx_count == num_txq - combined)
ch->tx_count = 0;
num_req_tx_q = ch->combined_count + ch->tx_count;
num_req_rx_q = ch->combined_count + ch->rx_count;


@ -3675,9 +3675,7 @@ struct ixgbe_info {
#define IXGBE_KRM_LINK_S1(P) ((P) ? 0x8200 : 0x4200)
#define IXGBE_KRM_LINK_CTRL_1(P) ((P) ? 0x820C : 0x420C)
#define IXGBE_KRM_AN_CNTL_1(P) ((P) ? 0x822C : 0x422C)
#define IXGBE_KRM_AN_CNTL_4(P) ((P) ? 0x8238 : 0x4238)
#define IXGBE_KRM_AN_CNTL_8(P) ((P) ? 0x8248 : 0x4248)
#define IXGBE_KRM_PCS_KX_AN(P) ((P) ? 0x9918 : 0x5918)
#define IXGBE_KRM_SGMII_CTRL(P) ((P) ? 0x82A0 : 0x42A0)
#define IXGBE_KRM_LP_BASE_PAGE_HIGH(P) ((P) ? 0x836C : 0x436C)
#define IXGBE_KRM_DSP_TXFFE_STATE_4(P) ((P) ? 0x8634 : 0x4634)
@ -3687,7 +3685,6 @@ struct ixgbe_info {
#define IXGBE_KRM_PMD_FLX_MASK_ST20(P) ((P) ? 0x9054 : 0x5054)
#define IXGBE_KRM_TX_COEFF_CTRL_1(P) ((P) ? 0x9520 : 0x5520)
#define IXGBE_KRM_RX_ANA_CTL(P) ((P) ? 0x9A00 : 0x5A00)
#define IXGBE_KRM_FLX_TMRS_CTRL_ST31(P) ((P) ? 0x9180 : 0x5180)
#define IXGBE_KRM_PMD_FLX_MASK_ST20_SFI_10G_DA ~(0x3 << 20)
#define IXGBE_KRM_PMD_FLX_MASK_ST20_SFI_10G_SR BIT(20)


@ -1722,59 +1722,9 @@ static int ixgbe_setup_sfi_x550a(struct ixgbe_hw *hw, ixgbe_link_speed *speed)
return -EINVAL;
}
(void)mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_PMD_FLX_MASK_ST20(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* change mode enforcement rules to hybrid */
(void)mac->ops.read_iosf_sb_reg(hw,
IXGBE_KRM_FLX_TMRS_CTRL_ST31(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
reg_val |= 0x0400;
(void)mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_FLX_TMRS_CTRL_ST31(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* manually control the config */
(void)mac->ops.read_iosf_sb_reg(hw,
IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
reg_val |= 0x20002240;
(void)mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* move the AN base page values */
(void)mac->ops.read_iosf_sb_reg(hw,
IXGBE_KRM_PCS_KX_AN(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
reg_val |= 0x1;
(void)mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_PCS_KX_AN(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* set the AN37 over CB mode */
(void)mac->ops.read_iosf_sb_reg(hw,
IXGBE_KRM_AN_CNTL_4(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
reg_val |= 0x20000000;
(void)mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_AN_CNTL_4(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* restart AN manually */
(void)mac->ops.read_iosf_sb_reg(hw,
IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
reg_val |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART;
(void)mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
status = mac->ops.write_iosf_sb_reg(hw,
IXGBE_KRM_PMD_FLX_MASK_ST20(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* Toggle port SW reset by AN reset. */
status = ixgbe_restart_an_internal_phy_x550em(hw);


@ -474,14 +474,14 @@ static int lan966x_port_hwtstamp_set(struct net_device *dev,
cfg->source != HWTSTAMP_SOURCE_PHYLIB)
return -EOPNOTSUPP;
if (cfg->source == HWTSTAMP_SOURCE_NETDEV && !port->lan966x->ptp)
return -EOPNOTSUPP;
err = lan966x_ptp_setup_traps(port, cfg);
if (err)
return err;
if (cfg->source == HWTSTAMP_SOURCE_NETDEV) {
if (!port->lan966x->ptp)
return -EOPNOTSUPP;
err = lan966x_ptp_hwtstamp_set(port, cfg, extack);
if (err) {
lan966x_ptp_del_traps(port);


@ -849,7 +849,7 @@ int mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len,
}
if (!wait_for_completion_timeout(&ctx->comp_event,
(msecs_to_jiffies(hwc->hwc_timeout) * HZ))) {
(msecs_to_jiffies(hwc->hwc_timeout)))) {
dev_err(hwc->dev, "HWC: Request timed out!\n");
err = -ETIMEDOUT;
goto out;
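
For context on the mana change above: msecs_to_jiffies() already converts milliseconds to jiffies, so the extra HZ factor multiplied the wait by the tick rate. A rough userspace sketch of the arithmetic (the HZ value and the helper below are simplified assumptions, not the kernel implementation):

#include <stdio.h>

#define HZ 250UL  /* assumed tick rate */

/* simplified stand-in for the kernel's msecs_to_jiffies() */
static unsigned long msecs_to_jiffies(unsigned long msecs)
{
        return (msecs * HZ + 999) / 1000;
}

int main(void)
{
        unsigned long timeout_ms = 100;  /* hypothetical hwc_timeout */

        /* intended wait: the timeout expressed in jiffies */
        printf("intended: %lu jiffies\n", msecs_to_jiffies(timeout_ms));

        /* buggy wait: a second HZ factor scales it up by the tick rate */
        printf("with extra HZ: %lu jiffies (~%lu ms)\n",
               msecs_to_jiffies(timeout_ms) * HZ, timeout_ms * HZ);
        return 0;
}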


@ -4337,11 +4337,11 @@ static void rtl8169_doorbell(struct rtl8169_private *tp)
static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
struct net_device *dev)
{
unsigned int frags = skb_shinfo(skb)->nr_frags;
struct rtl8169_private *tp = netdev_priv(dev);
unsigned int entry = tp->cur_tx % NUM_TX_DESC;
struct TxDesc *txd_first, *txd_last;
bool stop_queue, door_bell;
unsigned int frags;
u32 opts[2];
if (unlikely(!rtl_tx_slots_avail(tp))) {
@ -4364,6 +4364,7 @@ static netdev_tx_t rtl8169_start_xmit(struct sk_buff *skb,
txd_first = tp->TxDescArray + entry;
frags = skb_shinfo(skb)->nr_frags;
if (frags) {
if (rtl8169_xmit_frags(tp, skb, opts, entry))
goto err_dma_1;
@ -4657,10 +4658,8 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
rtl_schedule_task(tp, RTL_FLAG_TASK_RESET_PENDING);
}
if (napi_schedule_prep(&tp->napi)) {
rtl_irq_disable(tp);
__napi_schedule(&tp->napi);
}
rtl_irq_disable(tp);
napi_schedule(&tp->napi);
out:
rtl_ack_events(tp, status);


@ -1039,7 +1039,12 @@ static int prueth_probe(struct platform_device *pdev)
prueth->registered_netdevs[PRUETH_MAC0] = prueth->emac[PRUETH_MAC0]->ndev;
emac_phy_connect(prueth->emac[PRUETH_MAC0]);
ret = emac_phy_connect(prueth->emac[PRUETH_MAC0]);
if (ret) {
dev_err(dev,
"can't connect to MII0 PHY, error -%d", ret);
goto netdev_unregister;
}
phy_attached_info(prueth->emac[PRUETH_MAC0]->ndev->phydev);
}
@ -1051,7 +1056,12 @@ static int prueth_probe(struct platform_device *pdev)
}
prueth->registered_netdevs[PRUETH_MAC1] = prueth->emac[PRUETH_MAC1]->ndev;
emac_phy_connect(prueth->emac[PRUETH_MAC1]);
ret = emac_phy_connect(prueth->emac[PRUETH_MAC1]);
if (ret) {
dev_err(dev,
"can't connect to MII1 PHY, error %d", ret);
goto netdev_unregister;
}
phy_attached_info(prueth->emac[PRUETH_MAC1]->ndev->phydev);
}


@ -758,7 +758,9 @@ void __inet_accept(struct socket *sock, struct socket *newsock, struct sock *new
sock_rps_record_flow(newsk);
WARN_ON(!((1 << newsk->sk_state) &
(TCPF_ESTABLISHED | TCPF_SYN_RECV |
TCPF_CLOSE_WAIT | TCPF_CLOSE)));
TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 |
TCPF_CLOSING | TCPF_CLOSE_WAIT |
TCPF_CLOSE)));
if (test_bit(SOCK_SUPPORT_ZC, &sock->flags))
set_bit(SOCK_SUPPORT_ZC, &newsock->flags);


@ -58,7 +58,18 @@ struct dctcp {
};
static unsigned int dctcp_shift_g __read_mostly = 4; /* g = 1/2^4 */
module_param(dctcp_shift_g, uint, 0644);
static int dctcp_shift_g_set(const char *val, const struct kernel_param *kp)
{
return param_set_uint_minmax(val, kp, 0, 10);
}
static const struct kernel_param_ops dctcp_shift_g_ops = {
.set = dctcp_shift_g_set,
.get = param_get_uint,
};
module_param_cb(dctcp_shift_g, &dctcp_shift_g_ops, &dctcp_shift_g, 0644);
MODULE_PARM_DESC(dctcp_shift_g, "parameter g for updating dctcp_alpha");
static unsigned int dctcp_alpha_on_init __read_mostly = DCTCP_MAX_ALPHA;
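
On the dctcp change above: the alpha update shifts by (10 - dctcp_shift_g), so a module parameter above 10 yields an out-of-range shift count, which is what the [0, 10] clamp via param_set_uint_minmax() rules out. A minimal userspace illustration of the failure mode (names and values here are illustrative, not the kernel's):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t delivered_ce = 3;   /* hypothetical CE-marked count */
        unsigned int shift_g = 12;   /* parameter set above the new max of 10 */

        unsigned int count = 10 - shift_g;  /* wraps to a huge unsigned value */
        printf("shift count = %u (valid counts are 0..31)\n", count);

        if (count < 32)   /* with the [0, 10] clamp this is always true */
                printf("delivered_ce << count = %u\n", delivered_ce << count);
        else
                printf("shifting here would be undefined behaviour\n");
        return 0;
}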


@ -232,7 +232,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space, __u32 mss,
if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_workaround_signed_windows))
(*rcv_wnd) = min(space, MAX_TCP_WINDOW);
else
(*rcv_wnd) = min_t(u32, space, U16_MAX);
(*rcv_wnd) = space;
if (init_rcv_wnd)
*rcv_wnd = min(*rcv_wnd, init_rcv_wnd * mss);
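
On the tcp_output.c change above: with window scaling negotiated, the internal initial rcv_wnd no longer needs to fit in 16 bits; only the unscaled field carried in the SYN is limited to 65535 by the protocol. A quick arithmetic sketch (buffer size and scale factor are hypothetical):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint32_t space = 262144;   /* window space derived from the receive buffer */
        unsigned int wscale = 7;   /* negotiated window scale */

        uint32_t old_initial = space > UINT16_MAX ? UINT16_MAX : space;
        uint32_t new_initial = space;

        printf("old initial rcv_wnd: %u bytes\n", old_initial);
        printf("new initial rcv_wnd: %u bytes, advertised as %u with scale %u\n",
               new_initial, new_initial >> wscale, wscale);
        return 0;
}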


@ -356,6 +356,7 @@ static int seg6_hmac_init_algo(void)
struct crypto_shash *tfm;
struct shash_desc *shash;
int i, alg_count, cpu;
int ret = -ENOMEM;
alg_count = ARRAY_SIZE(hmac_algos);
@ -366,12 +367,14 @@ static int seg6_hmac_init_algo(void)
algo = &hmac_algos[i];
algo->tfms = alloc_percpu(struct crypto_shash *);
if (!algo->tfms)
return -ENOMEM;
goto error_out;
for_each_possible_cpu(cpu) {
tfm = crypto_alloc_shash(algo->name, 0, 0);
if (IS_ERR(tfm))
return PTR_ERR(tfm);
if (IS_ERR(tfm)) {
ret = PTR_ERR(tfm);
goto error_out;
}
p_tfm = per_cpu_ptr(algo->tfms, cpu);
*p_tfm = tfm;
}
@ -383,18 +386,22 @@ static int seg6_hmac_init_algo(void)
algo->shashs = alloc_percpu(struct shash_desc *);
if (!algo->shashs)
return -ENOMEM;
goto error_out;
for_each_possible_cpu(cpu) {
shash = kzalloc_node(shsize, GFP_KERNEL,
cpu_to_node(cpu));
if (!shash)
return -ENOMEM;
goto error_out;
*per_cpu_ptr(algo->shashs, cpu) = shash;
}
}
return 0;
error_out:
seg6_hmac_exit();
return ret;
}
int __init seg6_hmac_init(void)
@ -412,22 +419,29 @@ int __net_init seg6_hmac_net_init(struct net *net)
void seg6_hmac_exit(void)
{
struct seg6_hmac_algo *algo = NULL;
struct crypto_shash *tfm;
struct shash_desc *shash;
int i, alg_count, cpu;
alg_count = ARRAY_SIZE(hmac_algos);
for (i = 0; i < alg_count; i++) {
algo = &hmac_algos[i];
for_each_possible_cpu(cpu) {
struct crypto_shash *tfm;
struct shash_desc *shash;
shash = *per_cpu_ptr(algo->shashs, cpu);
kfree(shash);
tfm = *per_cpu_ptr(algo->tfms, cpu);
crypto_free_shash(tfm);
if (algo->shashs) {
for_each_possible_cpu(cpu) {
shash = *per_cpu_ptr(algo->shashs, cpu);
kfree(shash);
}
free_percpu(algo->shashs);
}
if (algo->tfms) {
for_each_possible_cpu(cpu) {
tfm = *per_cpu_ptr(algo->tfms, cpu);
crypto_free_shash(tfm);
}
free_percpu(algo->tfms);
}
free_percpu(algo->tfms);
free_percpu(algo->shashs);
}
}
EXPORT_SYMBOL(seg6_hmac_exit);
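
The seg6_hmac change above converts scattered early returns into a single unwind path so partially allocated per-cpu state is released on failure. A generic sketch of that idiom in plain C (malloc/free stand in for the kernel's per-cpu and crypto allocators):

#include <stdlib.h>

struct algo_state {
        void *tfms;
        void *shashs;
};

static void algo_state_release(struct algo_state *s)
{
        /* Safe on partially initialised state: free(NULL) is a no-op. */
        free(s->tfms);
        free(s->shashs);
        s->tfms = NULL;
        s->shashs = NULL;
}

static int algo_state_init(struct algo_state *s)
{
        s->tfms = NULL;
        s->shashs = NULL;

        s->tfms = malloc(64);
        if (!s->tfms)
                goto error_out;

        s->shashs = malloc(128);
        if (!s->shashs)
                goto error_out;

        return 0;

error_out:
        /* One exit path frees whatever was set up so far instead of
         * returning early and leaking the earlier allocation. */
        algo_state_release(s);
        return -1;
}

int main(void)
{
        struct algo_state s;

        if (algo_state_init(&s))
                return 1;
        algo_state_release(&s);
        return 0;
}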


@ -459,10 +459,8 @@ static int seg6_input_core(struct net *net, struct sock *sk,
int err;
err = seg6_do_srh(skb);
if (unlikely(err)) {
kfree_skb(skb);
return err;
}
if (unlikely(err))
goto drop;
slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
@ -486,7 +484,7 @@ static int seg6_input_core(struct net *net, struct sock *sk,
err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
if (unlikely(err))
return err;
goto drop;
if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled))
return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
@ -494,6 +492,9 @@ static int seg6_input_core(struct net *net, struct sock *sk,
skb_dst(skb)->dev, seg6_input_finish);
return seg6_input_finish(dev_net(skb->dev), NULL, skb);
drop:
kfree_skb(skb);
return err;
}
static int seg6_input_nf(struct sk_buff *skb)


@ -1463,6 +1463,19 @@ int nci_core_ntf_packet(struct nci_dev *ndev, __u16 opcode,
ndev->ops->n_core_ops);
}
static bool nci_valid_size(struct sk_buff *skb)
{
BUILD_BUG_ON(NCI_CTRL_HDR_SIZE != NCI_DATA_HDR_SIZE);
unsigned int hdr_size = NCI_CTRL_HDR_SIZE;
if (skb->len < hdr_size ||
!nci_plen(skb->data) ||
skb->len < hdr_size + nci_plen(skb->data)) {
return false;
}
return true;
}
/* ---- NCI TX Data worker thread ---- */
static void nci_tx_work(struct work_struct *work)
@ -1516,10 +1529,9 @@ static void nci_rx_work(struct work_struct *work)
nfc_send_to_raw_sock(ndev->nfc_dev, skb,
RAW_PAYLOAD_NCI, NFC_DIRECTION_RX);
if (!nci_plen(skb->data)) {
if (!nci_valid_size(skb)) {
kfree_skb(skb);
kcov_remote_stop();
break;
continue;
}
/* Process frame */
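
On the nci change above: the receive worker previously rejected only a zero payload length; the new helper also catches frames shorter than the header, or shorter than header plus declared payload, and the worker now drops just the offending frame. A standalone sketch of the same three checks (the header-layout constants below are assumptions, not the NCI definitions):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define HDR_SIZE 3u        /* assumed header size */
#define PLEN_OFF 2u        /* assumed offset of the payload-length byte */

static bool frame_valid_size(const unsigned char *data, size_t len)
{
        if (len < HDR_SIZE)
                return false;                    /* truncated header */
        if (data[PLEN_OFF] == 0)
                return false;                    /* zero-length payload */
        if (len < HDR_SIZE + data[PLEN_OFF])
                return false;                    /* truncated payload */
        return true;
}

int main(void)
{
        const unsigned char zero_plen[] = { 0x20, 0x00, 0x00 };
        const unsigned char truncated[] = { 0x20, 0x00, 0x04, 0xaa };
        const unsigned char ok[]        = { 0x20, 0x00, 0x01, 0xaa };

        printf("%d %d %d\n",
               frame_valid_size(zero_plen, sizeof(zero_plen)),
               frame_valid_size(truncated, sizeof(truncated)),
               frame_valid_size(ok, sizeof(ok)));
        return 0;
}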


@ -936,6 +936,12 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
pskb_trim(skb, ovs_mac_header_len(key));
}
/* Need to set the pkt_type to involve the routing layer. The
* packet movement through the OVS datapath doesn't generally
* use routing, but this is needed for tunnel cases.
*/
skb->pkt_type = PACKET_OUTGOING;
if (likely(!mru ||
(skb->len <= mru + vport->dev->hard_header_len))) {
ovs_vport_send(vport, skb, ovs_key_mac_proto(key));


@ -816,9 +816,17 @@ struct tls_context *tls_ctx_create(struct sock *sk)
return NULL;
mutex_init(&ctx->tx_lock);
rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
ctx->sk_proto = READ_ONCE(sk->sk_prot);
ctx->sk = sk;
/* Release semantic of rcu_assign_pointer() ensures that
* ctx->sk_proto is visible before changing sk->sk_prot in
* update_sk_prot(), and prevents reading uninitialized value in
* tls_{getsockopt, setsockopt}. Note that we do not need a
* read barrier in tls_{getsockopt,setsockopt} as there is an
* address dependency between sk->sk_proto->{getsockopt,setsockopt}
* and ctx->sk_proto.
*/
rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
return ctx;
}
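
The tls_main.c change above moves the rcu_assign_pointer() publication after the ctx fields are initialised, relying on its release semantics plus the readers' address dependency. A userspace C11 analogue of that publish pattern (types and names here are illustrative, not the TLS code):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx {
        void (*getsockopt)(void);       /* stands in for ctx->sk_proto ops */
};

static _Atomic(struct ctx *) published;

static void demo_op(void)
{
        puts("ctx fully visible");
}

static int publish(void)
{
        struct ctx *c = malloc(sizeof(*c));

        if (!c)
                return -1;
        c->getsockopt = demo_op;                     /* initialise first ... */
        atomic_store_explicit(&published, c,         /* ... then publish with */
                              memory_order_release); /* release semantics */
        return 0;
}

static void reader(void)
{
        /* consume ordering mirrors the address-dependency argument in the
         * comment above; most compilers promote it to acquire. */
        struct ctx *c = atomic_load_explicit(&published,
                                             memory_order_consume);
        if (c)
                c->getsockopt();
}

int main(void)
{
        if (publish())
                return 1;
        reader();
        return 0;
}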


@ -2170,13 +2170,15 @@ static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other
maybe_add_creds(skb, sock, other);
skb_get(skb);
scm_stat_add(other, skb);
spin_lock(&other->sk_receive_queue.lock);
if (ousk->oob_skb)
consume_skb(ousk->oob_skb);
WRITE_ONCE(ousk->oob_skb, skb);
__skb_queue_tail(&other->sk_receive_queue, skb);
spin_unlock(&other->sk_receive_queue.lock);
scm_stat_add(other, skb);
skb_queue_tail(&other->sk_receive_queue, skb);
sk_send_sigurg(other);
unix_state_unlock(other);
other->sk_data_ready(other);
@ -2567,8 +2569,10 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
mutex_lock(&u->iolock);
unix_state_lock(sk);
spin_lock(&sk->sk_receive_queue.lock);
if (sock_flag(sk, SOCK_URGINLINE) || !u->oob_skb) {
spin_unlock(&sk->sk_receive_queue.lock);
unix_state_unlock(sk);
mutex_unlock(&u->iolock);
return -EINVAL;
@ -2580,6 +2584,8 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
WRITE_ONCE(u->oob_skb, NULL);
else
skb_get(oob_skb);
spin_unlock(&sk->sk_receive_queue.lock);
unix_state_unlock(sk);
chunk = state->recv_actor(oob_skb, 0, chunk, state);
@ -2608,6 +2614,10 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
consume_skb(skb);
skb = NULL;
} else {
struct sk_buff *unlinked_skb = NULL;
spin_lock(&sk->sk_receive_queue.lock);
if (skb == u->oob_skb) {
if (copied) {
skb = NULL;
@ -2619,13 +2629,19 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
} else if (flags & MSG_PEEK) {
skb = NULL;
} else {
skb_unlink(skb, &sk->sk_receive_queue);
__skb_unlink(skb, &sk->sk_receive_queue);
WRITE_ONCE(u->oob_skb, NULL);
if (!WARN_ON_ONCE(skb_unref(skb)))
kfree_skb(skb);
unlinked_skb = skb;
skb = skb_peek(&sk->sk_receive_queue);
}
}
spin_unlock(&sk->sk_receive_queue.lock);
if (unlinked_skb) {
WARN_ON_ONCE(skb_unref(unlinked_skb));
kfree_skb(unlinked_skb);
}
}
return skb;
}


@ -342,6 +342,18 @@ enum unix_recv_queue_lock_class {
U_RECVQ_LOCK_EMBRYO,
};
static void unix_collect_queue(struct unix_sock *u, struct sk_buff_head *hitlist)
{
skb_queue_splice_init(&u->sk.sk_receive_queue, hitlist);
#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
if (u->oob_skb) {
WARN_ON_ONCE(skb_unref(u->oob_skb));
u->oob_skb = NULL;
}
#endif
}
static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist)
{
struct unix_vertex *vertex;
@ -365,18 +377,11 @@ static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist
/* listener -> embryo order, the inversion never happens. */
spin_lock_nested(&embryo_queue->lock, U_RECVQ_LOCK_EMBRYO);
skb_queue_splice_init(embryo_queue, hitlist);
unix_collect_queue(unix_sk(skb->sk), hitlist);
spin_unlock(&embryo_queue->lock);
}
} else {
skb_queue_splice_init(queue, hitlist);
#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
if (u->oob_skb) {
kfree_skb(u->oob_skb);
u->oob_skb = NULL;
}
#endif
unix_collect_queue(u, hitlist);
}
spin_unlock(&queue->lock);


@ -69,7 +69,7 @@ def pkt_byte_sum(cfg) -> None:
return 0
for _ in range(10):
rtstat = rtnl.getlink({"ifi-index": cfg.ifindex})['stats']
rtstat = rtnl.getlink({"ifi-index": cfg.ifindex})['stats64']
if stat_cmp(rtstat, qstat) < 0:
raise Exception("RTNL stats are lower, fetched later")
qstat = get_qstat(cfg)


@ -197,8 +197,8 @@ void __send_fd(struct __test_metadata *_metadata,
const FIXTURE_VARIANT(scm_rights) *variant,
int inflight, int receiver)
{
#define MSG "nop"
#define MSGLEN 3
#define MSG "x"
#define MSGLEN 1
struct {
struct cmsghdr cmsghdr;
int fd[2];


@ -77,6 +77,7 @@ readonly LISTENER=$(mktemp -u listener-XXXXXXXX)
readonly GATEWAY=$(mktemp -u gateway-XXXXXXXX)
readonly RELAY=$(mktemp -u relay-XXXXXXXX)
readonly SOURCE=$(mktemp -u source-XXXXXXXX)
readonly SMCROUTEDIR="$(mktemp -d)"
ERR=4
err=0
@ -85,6 +86,11 @@ exit_cleanup()
for ns in "$@"; do
ip netns delete "${ns}" 2>/dev/null || true
done
if [ -f "$SMCROUTEDIR/amt.pid" ]; then
smcpid=$(< $SMCROUTEDIR/amt.pid)
kill $smcpid
fi
rm -rf $SMCROUTEDIR
exit $ERR
}
@ -167,7 +173,7 @@ setup_iptables()
setup_mcast_routing()
{
ip netns exec "${RELAY}" smcrouted
ip netns exec "${RELAY}" smcrouted -P $SMCROUTEDIR/amt.pid
ip netns exec "${RELAY}" smcroutectl a relay_src \
172.17.0.2 239.0.0.1 amtr
ip netns exec "${RELAY}" smcroutectl a relay_src \


@ -73,25 +73,19 @@ setup_v6() {
# namespaces. veth0 is veth-router, veth1 is veth-host.
# first, set up the inteface's link to the namespace
# then, set the interface "up"
ip -6 -netns ${ROUTER_NS_V6} link add name ${ROUTER_INTF} \
type veth peer name ${HOST_INTF}
ip -n ${ROUTER_NS_V6} link add name ${ROUTER_INTF} \
type veth peer name ${HOST_INTF} netns ${HOST_NS_V6}
ip -6 -netns ${ROUTER_NS_V6} link set dev ${ROUTER_INTF} up
ip -6 -netns ${ROUTER_NS_V6} link set dev ${HOST_INTF} netns \
${HOST_NS_V6}
ip -6 -netns ${HOST_NS_V6} link set dev ${HOST_INTF} up
ip -6 -netns ${ROUTER_NS_V6} addr add \
${ROUTER_ADDR_V6}/${PREFIX_WIDTH_V6} dev ${ROUTER_INTF} nodad
# Add tc rule to filter out host na message
tc -n ${ROUTER_NS_V6} qdisc add dev ${ROUTER_INTF} clsact
tc -n ${ROUTER_NS_V6} filter add dev ${ROUTER_INTF} \
ingress protocol ipv6 pref 1 handle 101 \
flower src_ip ${HOST_ADDR_V6} ip_proto icmpv6 type 136 skip_hw action pass
HOST_CONF=net.ipv6.conf.${HOST_INTF}
ip netns exec ${HOST_NS_V6} sysctl -qw ${HOST_CONF}.ndisc_notify=1
ip netns exec ${HOST_NS_V6} sysctl -qw ${HOST_CONF}.disable_ipv6=0
ip -6 -netns ${HOST_NS_V6} addr add ${HOST_ADDR_V6}/${PREFIX_WIDTH_V6} \
dev ${HOST_INTF}
ROUTER_CONF=net.ipv6.conf.${ROUTER_INTF}
ip netns exec ${ROUTER_NS_V6} sysctl -w \
${ROUTER_CONF}.forwarding=1 >/dev/null 2>&1
ip netns exec ${ROUTER_NS_V6} sysctl -w \
@ -99,6 +93,13 @@ setup_v6() {
ip netns exec ${ROUTER_NS_V6} sysctl -w \
${ROUTER_CONF}.accept_untracked_na=${accept_untracked_na} \
>/dev/null 2>&1
ip -n ${ROUTER_NS_V6} link set dev ${ROUTER_INTF} up
ip -n ${HOST_NS_V6} link set dev ${HOST_INTF} up
ip -n ${ROUTER_NS_V6} addr add ${ROUTER_ADDR_V6}/${PREFIX_WIDTH_V6} \
dev ${ROUTER_INTF} nodad
ip -n ${HOST_NS_V6} addr add ${HOST_ADDR_V6}/${PREFIX_WIDTH_V6} \
dev ${HOST_INTF}
set +e
}
@ -162,26 +163,6 @@ arp_test_gratuitous_combinations() {
arp_test_gratuitous 2 1
}
cleanup_tcpdump() {
set -e
[[ ! -z ${tcpdump_stdout} ]] && rm -f ${tcpdump_stdout}
[[ ! -z ${tcpdump_stderr} ]] && rm -f ${tcpdump_stderr}
tcpdump_stdout=
tcpdump_stderr=
set +e
}
start_tcpdump() {
set -e
tcpdump_stdout=`mktemp`
tcpdump_stderr=`mktemp`
ip netns exec ${ROUTER_NS_V6} timeout 15s \
tcpdump --immediate-mode -tpni ${ROUTER_INTF} -c 1 \
"icmp6 && icmp6[0] == 136 && src ${HOST_ADDR_V6}" \
> ${tcpdump_stdout} 2> /dev/null
set +e
}
verify_ndisc() {
local accept_untracked_na=$1
local same_subnet=$2
@ -222,8 +203,9 @@ ndisc_test_untracked_advertisements() {
HOST_ADDR_V6=2001:db8:abcd:0012::3
fi
fi
setup_v6 $1 $2
start_tcpdump
setup_v6 $1
slowwait_for_counter 15 1 \
tc_rule_handle_stats_get "dev ${ROUTER_INTF} ingress" 101 ".packets" "-n ${ROUTER_NS_V6}"
if verify_ndisc $1 $2; then
printf " TEST: %-60s [ OK ]\n" "${test_msg[*]}"
@ -231,7 +213,6 @@ ndisc_test_untracked_advertisements() {
printf " TEST: %-60s [FAIL]\n" "${test_msg[*]}"
fi
cleanup_tcpdump
cleanup_v6
set +e
}


@ -129,14 +129,6 @@ fi
source "$net_forwarding_dir/../lib.sh"
# timeout in seconds
slowwait()
{
local timeout_sec=$1; shift
loopy_wait "sleep 0.1" "$((timeout_sec * 1000))" "$@"
}
##############################################################################
# Sanity checks
@ -678,33 +670,6 @@ wait_for_trap()
"$@" | grep -q trap
}
until_counter_is()
{
local expr=$1; shift
local current=$("$@")
echo $((current))
((current $expr))
}
busywait_for_counter()
{
local timeout=$1; shift
local delta=$1; shift
local base=$("$@")
busywait "$timeout" until_counter_is ">= $((base + delta))" "$@"
}
slowwait_for_counter()
{
local timeout=$1; shift
local delta=$1; shift
local base=$("$@")
slowwait "$timeout" until_counter_is ">= $((base + delta))" "$@"
}
setup_wait_dev()
{
local dev=$1; shift
@ -1023,29 +988,6 @@ link_stats_rx_errors_get()
link_stats_get $1 rx errors
}
tc_rule_stats_get()
{
local dev=$1; shift
local pref=$1; shift
local dir=$1; shift
local selector=${1:-.packets}; shift
tc -j -s filter show dev $dev ${dir:-ingress} pref $pref \
| jq ".[1].options.actions[].stats$selector"
}
tc_rule_handle_stats_get()
{
local id=$1; shift
local handle=$1; shift
local selector=${1:-.packets}; shift
local netns=${1:-""}; shift
tc $netns -j -s filter show $id \
| jq ".[] | select(.options.handle == $handle) | \
.options.actions[0].stats$selector"
}
ethtool_stats_get()
{
local dev=$1; shift


@ -91,6 +91,41 @@ busywait()
loopy_wait : "$timeout_ms" "$@"
}
# timeout in seconds
slowwait()
{
local timeout_sec=$1; shift
loopy_wait "sleep 0.1" "$((timeout_sec * 1000))" "$@"
}
until_counter_is()
{
local expr=$1; shift
local current=$("$@")
echo $((current))
((current $expr))
}
busywait_for_counter()
{
local timeout=$1; shift
local delta=$1; shift
local base=$("$@")
busywait "$timeout" until_counter_is ">= $((base + delta))" "$@"
}
slowwait_for_counter()
{
local timeout=$1; shift
local delta=$1; shift
local base=$("$@")
slowwait "$timeout" until_counter_is ">= $((base + delta))" "$@"
}
cleanup_ns()
{
local ns=""
@ -150,3 +185,26 @@ setup_ns()
done
NS_LIST="$NS_LIST $ns_list"
}
tc_rule_stats_get()
{
local dev=$1; shift
local pref=$1; shift
local dir=$1; shift
local selector=${1:-.packets}; shift
tc -j -s filter show dev $dev ${dir:-ingress} pref $pref \
| jq ".[1].options.actions[].stats$selector"
}
tc_rule_handle_stats_get()
{
local id=$1; shift
local handle=$1; shift
local selector=${1:-.packets}; shift
local netns=${1:-""}; shift
tc $netns -j -s filter show $id \
| jq ".[] | select(.options.handle == $handle) | \
.options.actions[0].stats$selector"
}