
Merge tag 'net-6.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth, netfilter, WiFi.

  Feels like an up-tick in regression fixes, mostly for older releases.
  The hfsc fix, tcp_disconnect() and Intel WWAN fixes stand out as
  fairly clear-cut user-reported regressions. The mlx5 DMA bug was
  causing strife for s390x folks. The fixes themselves are not
  particularly scary, though. No open investigations / outstanding
  reports at the time of writing.

  Current release - regressions:

   - eth: mlx5: perform DMA operations in the right locations, make
     devices usable on s390x, again

   - sched: sch_hfsc: upgrade 'rt' to 'sc' when it becomes an inner
     curve; the previous fix rejecting invalid configs broke some scripts

   - rfkill: reduce data->mtx scope in rfkill_fop_open, avoid deadlock

   - revert "ethtool: Fix mod state of verbose no_mask bitset", needs
     more work

  Current release - new code bugs:

   - tcp: fix listen() warning with v4-mapped-v6 address

  Previous releases - regressions:

   - tcp: allow tcp_disconnect() again when threads are waiting, it was
     denied to plug a constant source of bugs but turns out .NET depends
     on it

   - eth: mlx5: fix double-free if buffer refill fails under OOM

   - revert "net: wwan: iosm: enable runtime pm support for 7560", it's
     causing regressions and the WWAN team at Intel disappeared

   - tcp: tsq: relax tcp_small_queue_check() when rtx queue contains a
     single skb, fix single-stream perf regression on some devices

  Previous releases - always broken:

   - Bluetooth:
      - fix issues in legacy BR/EDR PIN code pairing
      - correctly bounds check and pad HCI_MON_NEW_INDEX name

   - netfilter:
      - more fixes / follow ups for the large "commit protocol" rework,
        which went in as a fix to 6.5
      - fix null-derefs on netlink attrs which user may not pass in

   - tcp: fix excessive TLP and RACK timeouts from HZ rounding (bless
     Debian for keeping HZ=250 alive)

   - net: more strict VIRTIO_NET_HDR_GSO_UDP_L4 validation, prevent
     letting frankenstein UDP super-frames from getting into the stack

   - net: fix interface altnames when ifc moves to a new namespace

   - eth: qed: fix the size of the RX buffers

   - mptcp: avoid sending RST when closing the initial subflow"

* tag 'net-6.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (94 commits)
  Revert "ethtool: Fix mod state of verbose no_mask bitset"
  selftests: mptcp: join: no RST when rm subflow/addr
  mptcp: avoid sending RST when closing the initial subflow
  mptcp: more conservative check for zero probes
  tcp: check mptcp-level constraints for backlog coalescing
  selftests: mptcp: join: correctly check for no RST
  net: ti: icssg-prueth: Fix r30 CMDs bitmasks
  selftests: net: add very basic test for netdev names and namespaces
  net: move altnames together with the netdevice
  net: avoid UAF on deleted altname
  net: check for altname conflicts when changing netdev's netns
  net: fix ifname in netlink ntf during netns move
  net: ethernet: ti: Fix mixed module-builtin object
  net: phy: bcm7xxx: Add missing 16nm EPHY statistics
  ipv4: fib: annotate races around nh->nh_saddr_genid and nh->nh_saddr
  tcp_bpf: properly release resources on error paths
  net/sched: sch_hfsc: upgrade 'rt' to 'sc' when it becomes a inner curve
  net: mdio-mux: fix C45 access returning -EIO after API change
  tcp: tsq: relax tcp_small_queue_check() when rtx queue contains a single skb
  octeon_ep: update BQL sent bytes before ringing doorbell
  ...
Linus Torvalds 2023-10-19 12:08:18 -07:00
commit ce55c22ec8
97 changed files with 967 additions and 466 deletions


@@ -323,7 +323,7 @@ operations:
             - dev-name
             - sb-index
         reply: &sb-get-reply
-          value: 11
+          value: 13
           attributes: *sb-id-attrs
       dump:
         request:
@@ -350,7 +350,7 @@ operations:
             - sb-index
             - sb-pool-index
         reply: &sb-pool-get-reply
-          value: 15
+          value: 17
           attributes: *sb-pool-id-attrs
       dump:
         request:
@@ -378,7 +378,7 @@ operations:
             - sb-index
             - sb-pool-index
         reply: &sb-port-pool-get-reply
-          value: 19
+          value: 21
           attributes: *sb-port-pool-id-attrs
       dump:
         request:
@@ -407,7 +407,7 @@ operations:
             - sb-pool-type
             - sb-tc-index
         reply: &sb-tc-pool-bind-get-reply
-          value: 23
+          value: 25
           attributes: *sb-tc-pool-bind-id-attrs
       dump:
         request:
@@ -538,7 +538,7 @@ operations:
             - dev-name
             - trap-name
         reply: &trap-get-reply
-          value: 61
+          value: 63
           attributes: *trap-id-attrs
       dump:
         request:
@@ -564,7 +564,7 @@ operations:
             - dev-name
             - trap-group-name
         reply: &trap-group-get-reply
-          value: 65
+          value: 67
           attributes: *trap-group-id-attrs
       dump:
         request:
@@ -590,7 +590,7 @@ operations:
             - dev-name
             - trap-policer-id
         reply: &trap-policer-get-reply
-          value: 69
+          value: 71
           attributes: *trap-policer-id-attrs
       dump:
         request:
@@ -617,7 +617,7 @@ operations:
             - port-index
             - rate-node-name
         reply: &rate-get-reply
-          value: 74
+          value: 76
           attributes: *rate-id-attrs
       dump:
         request:
@@ -643,7 +643,7 @@ operations:
             - dev-name
             - linecard-index
         reply: &linecard-get-reply
-          value: 78
+          value: 80
           attributes: *linecard-id-attrs
       dump:
         request:


@@ -162,9 +162,11 @@ How are representors identified?
 The representor netdevice should *not* directly refer to a PCIe device (e.g.
 through ``net_dev->dev.parent`` / ``SET_NETDEV_DEV()``), either of the
 representee or of the switchdev function.
-Instead, it should implement the ``ndo_get_devlink_port()`` netdevice op, which
-the kernel uses to provide the ``phys_switch_id`` and ``phys_port_name`` sysfs
-nodes. (Some legacy drivers implement ``ndo_get_port_parent_id()`` and
+Instead, the driver should use the ``SET_NETDEV_DEVLINK_PORT`` macro to
+assign a devlink port instance to the netdevice before registering the
+netdevice; the kernel uses the devlink port to provide the ``phys_switch_id``
+and ``phys_port_name`` sysfs nodes.
+(Some legacy drivers implement ``ndo_get_port_parent_id()`` and
 ``ndo_get_phys_port_name()`` directly, but this is deprecated.) See
 :ref:`Documentation/networking/devlink/devlink-port.rst <devlink_port>` for the
 details of this API.


@@ -962,13 +962,10 @@ static void btrtl_dmp_hdr(struct hci_dev *hdev, struct sk_buff *skb)
 	skb_put_data(skb, buf, strlen(buf));
 }
 
-static int btrtl_register_devcoredump_support(struct hci_dev *hdev)
+static void btrtl_register_devcoredump_support(struct hci_dev *hdev)
 {
-	int err;
-
-	err = hci_devcd_register(hdev, btrtl_coredump, btrtl_dmp_hdr, NULL);
-
-	return err;
+	hci_devcd_register(hdev, btrtl_coredump, btrtl_dmp_hdr, NULL);
 }
 
 void btrtl_set_driver_name(struct hci_dev *hdev, const char *driver_name)
@@ -1255,8 +1252,7 @@ int btrtl_download_firmware(struct hci_dev *hdev,
 	}
 
 done:
-	if (!err)
-		err = btrtl_register_devcoredump_support(hdev);
+	btrtl_register_devcoredump_support(hdev);
 
 	return err;
 }


@@ -74,7 +74,10 @@ static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
 	struct vhci_data *data = hci_get_drvdata(hdev);
 
 	memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
+
+	mutex_lock(&data->open_mutex);
 	skb_queue_tail(&data->readq, skb);
+	mutex_unlock(&data->open_mutex);
 	wake_up_interruptible(&data->read_wait);
 	return 0;


@@ -4023,7 +4023,7 @@ static inline const void *bond_pull_data(struct sk_buff *skb,
 	if (likely(n <= hlen))
 		return data;
 	else if (skb && likely(pskb_may_pull(skb, n)))
-		return skb->head;
+		return skb->data;
 
 	return NULL;
 }


@@ -617,17 +617,16 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
 	dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio");
 	priv->master_mii_bus = of_mdio_find_bus(dn);
 	if (!priv->master_mii_bus) {
-		of_node_put(dn);
-		return -EPROBE_DEFER;
+		err = -EPROBE_DEFER;
+		goto err_of_node_put;
 	}
 
-	get_device(&priv->master_mii_bus->dev);
 	priv->master_mii_dn = dn;
 
 	priv->slave_mii_bus = mdiobus_alloc();
 	if (!priv->slave_mii_bus) {
-		of_node_put(dn);
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto err_put_master_mii_bus_dev;
 	}
 
 	priv->slave_mii_bus->priv = priv;
@@ -684,11 +683,17 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
 	}
 
 	err = mdiobus_register(priv->slave_mii_bus);
-	if (err && dn) {
-		mdiobus_free(priv->slave_mii_bus);
-		of_node_put(dn);
-	}
+	if (err && dn)
+		goto err_free_slave_mii_bus;
+
+	return 0;
+
+err_free_slave_mii_bus:
+	mdiobus_free(priv->slave_mii_bus);
+err_put_master_mii_bus_dev:
+	put_device(&priv->master_mii_bus->dev);
+err_of_node_put:
+	of_node_put(dn);
 
 	return err;
 }
@@ -696,6 +701,7 @@ static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
 {
 	mdiobus_unregister(priv->slave_mii_bus);
 	mdiobus_free(priv->slave_mii_bus);
+	put_device(&priv->master_mii_bus->dev);
 	of_node_put(priv->master_mii_dn);
 }


@@ -911,7 +911,7 @@ static int csk_wait_memory(struct chtls_dev *cdev,
 			   struct sock *sk, long *timeo_p)
 {
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
-	int err = 0;
+	int ret, err = 0;
 	long current_timeo;
 	long vm_wait = 0;
 	bool noblock;
@@ -942,10 +942,13 @@ static int csk_wait_memory(struct chtls_dev *cdev,
 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
 		sk->sk_write_pending++;
-		sk_wait_event(sk, &current_timeo, sk->sk_err ||
-			      (sk->sk_shutdown & SEND_SHUTDOWN) ||
-			      (csk_mem_free(cdev, sk) && !vm_wait), &wait);
+		ret = sk_wait_event(sk, &current_timeo, sk->sk_err ||
+				    (sk->sk_shutdown & SEND_SHUTDOWN) ||
+				    (csk_mem_free(cdev, sk) && !vm_wait),
+				    &wait);
 		sk->sk_write_pending--;
+		if (ret < 0)
+			goto do_error;
 
 		if (vm_wait) {
 			vm_wait -= current_timeo;
@@ -1348,6 +1351,7 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	int copied = 0;
 	int target;
 	long timeo;
+	int ret;
 
 	buffers_freed = 0;
@@ -1423,7 +1427,11 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 		if (copied >= target)
 			break;
 		chtls_cleanup_rbuf(sk, copied);
-		sk_wait_data(sk, &timeo, NULL);
+		ret = sk_wait_data(sk, &timeo, NULL);
+		if (ret < 0) {
+			copied = copied ? : ret;
+			goto unlock;
+		}
 		continue;
 found_ok_skb:
 		if (!skb->len) {
@@ -1518,6 +1526,8 @@ skip_copy:
 	if (buffers_freed)
 		chtls_cleanup_rbuf(sk, copied);
 
+unlock:
 	release_sock(sk);
 	return copied;
 }
@@ -1534,6 +1544,7 @@ static int peekmsg(struct sock *sk, struct msghdr *msg,
 	int copied = 0;
 	size_t avail;		/* amount of available data in current skb */
 	long timeo;
+	int ret;
 
 	lock_sock(sk);
 	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
@@ -1585,7 +1596,12 @@ static int peekmsg(struct sock *sk, struct msghdr *msg,
 			release_sock(sk);
 			lock_sock(sk);
 		} else {
-			sk_wait_data(sk, &timeo, NULL);
+			ret = sk_wait_data(sk, &timeo, NULL);
+			if (ret < 0) {
+				/* here 'copied' is 0 due to previous checks */
+				copied = ret;
+				break;
+			}
 		}
 
 		if (unlikely(peek_seq != tp->copied_seq)) {
@@ -1656,6 +1672,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	int copied = 0;
 	long timeo;
 	int target;		/* Read at least this many bytes */
+	int ret;
 
 	buffers_freed = 0;
@@ -1747,7 +1764,11 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 		if (copied >= target)
 			break;
 		chtls_cleanup_rbuf(sk, copied);
-		sk_wait_data(sk, &timeo, NULL);
+		ret = sk_wait_data(sk, &timeo, NULL);
+		if (ret < 0) {
+			copied = copied ? : ret;
+			goto unlock;
+		}
 		continue;
 
 found_ok_skb:
@@ -1816,6 +1837,7 @@ skip_copy:
 	if (buffers_freed)
 		chtls_cleanup_rbuf(sk, copied);
 
+unlock:
 	release_sock(sk);
 	return copied;
 }


@@ -146,7 +146,7 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
 		err = gve_rx_alloc_buffer(priv, &priv->pdev->dev, &rx->data.page_info[i],
 					  &rx->data.data_ring[i]);
 		if (err)
-			goto alloc_err;
+			goto alloc_err_rda;
 	}
 
 	if (!rx->data.raw_addressing) {
@@ -171,12 +171,26 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx)
 	return slots;
 
 alloc_err_qpl:
+	/* Fully free the copy pool pages. */
 	while (j--) {
 		page_ref_sub(rx->qpl_copy_pool[j].page,
 			     rx->qpl_copy_pool[j].pagecnt_bias - 1);
 		put_page(rx->qpl_copy_pool[j].page);
 	}
-alloc_err:
+
+	/* Do not fully free QPL pages - only remove the bias added in this
+	 * function with gve_setup_rx_buffer.
+	 */
+	while (i--)
+		page_ref_sub(rx->data.page_info[i].page,
+			     rx->data.page_info[i].pagecnt_bias - 1);
+
+	gve_unassign_qpl(priv, rx->data.qpl->id);
+	rx->data.qpl = NULL;
+
+	return err;
+
+alloc_err_rda:
 	while (i--)
 		gve_rx_free_buffer(&priv->pdev->dev,
 				   &rx->data.page_info[i],


@@ -1082,7 +1082,7 @@ void i40e_clear_hw(struct i40e_hw *hw)
 			I40E_PFLAN_QALLOC_FIRSTQ_SHIFT;
 	j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >>
 			I40E_PFLAN_QALLOC_LASTQ_SHIFT;
-	if (val & I40E_PFLAN_QALLOC_VALID_MASK)
+	if (val & I40E_PFLAN_QALLOC_VALID_MASK && j >= base_queue)
 		num_queues = (j - base_queue) + 1;
 	else
 		num_queues = 0;
@@ -1092,7 +1092,7 @@ void i40e_clear_hw(struct i40e_hw *hw)
 			I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT;
 	j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >>
 			I40E_PF_VT_PFALLOC_LASTVF_SHIFT;
-	if (val & I40E_PF_VT_PFALLOC_VALID_MASK)
+	if (val & I40E_PF_VT_PFALLOC_VALID_MASK && j >= i)
 		num_vfs = (j - i) + 1;
 	else
 		num_vfs = 0;


@@ -1201,8 +1201,7 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
 	ctxt->info.q_opt_rss = ((lut_type << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) &
 				ICE_AQ_VSI_Q_OPT_RSS_LUT_M) |
-			       ((hash_type << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) &
-				ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
+			       (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
 }
 
 static void


@@ -6,6 +6,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <generated/utsrelease.h>
+#include <linux/crash_dump.h>
 #include "ice.h"
 #include "ice_base.h"
 #include "ice_lib.h"
@@ -4683,6 +4684,9 @@ static void ice_init_features(struct ice_pf *pf)
 static void ice_deinit_features(struct ice_pf *pf)
 {
+	if (ice_is_safe_mode(pf))
+		return;
+
 	ice_deinit_lag(pf);
 	if (test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags))
 		ice_cfg_lldp_mib_change(&pf->hw, false);
@@ -5014,6 +5018,20 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 		return -EINVAL;
 	}
 
+	/* when under a kdump kernel initiate a reset before enabling the
+	 * device in order to clear out any pending DMA transactions. These
+	 * transactions can cause some systems to machine check when doing
+	 * the pcim_enable_device() below.
+	 */
+	if (is_kdump_kernel()) {
+		pci_save_state(pdev);
+		pci_clear_master(pdev);
+		err = pcie_flr(pdev);
+		if (err)
+			return err;
+		pci_restore_state(pdev);
+	}
+
 	/* this driver uses devres, see
 	 * Documentation/driver-api/driver-model/devres.rst
 	 */


@@ -715,20 +715,19 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
 		hw_desc->dptr = tx_buffer->sglist_dma;
 	}
 
-	/* Flush the hw descriptor before writing to doorbell */
-	wmb();
-
-	/* Ring Doorbell to notify the NIC there is a new packet */
-	writel(1, iq->doorbell_reg);
+	netdev_tx_sent_queue(iq->netdev_q, skb->len);
+	skb_tx_timestamp(skb);
 	atomic_inc(&iq->instr_pending);
 	wi++;
 	if (wi == iq->max_count)
 		wi = 0;
 	iq->host_write_index = wi;
 
-	netdev_tx_sent_queue(iq->netdev_q, skb->len);
+	/* Flush the hw descriptor before writing to doorbell */
+	wmb();
+	/* Ring Doorbell to notify the NIC there is a new packet */
+	writel(1, iq->doorbell_reg);
 	iq->stats.instr_posted++;
-	skb_tx_timestamp(skb);
 	return NETDEV_TX_OK;
 
 dma_map_sg_err:


@@ -2186,52 +2186,23 @@ static u16 cmdif_rev(struct mlx5_core_dev *dev)
 int mlx5_cmd_init(struct mlx5_core_dev *dev)
 {
-	int size = sizeof(struct mlx5_cmd_prot_block);
-	int align = roundup_pow_of_two(size);
 	struct mlx5_cmd *cmd = &dev->cmd;
-	u32 cmd_l;
-	int err;
-
-	cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
-	if (!cmd->pool)
-		return -ENOMEM;
-
-	err = alloc_cmd_page(dev, cmd);
-	if (err)
-		goto err_free_pool;
-
-	cmd_l = (u32)(cmd->dma);
-	if (cmd_l & 0xfff) {
-		mlx5_core_err(dev, "invalid command queue address\n");
-		err = -ENOMEM;
-		goto err_cmd_page;
-	}
 
 	cmd->checksum_disabled = 1;
 
 	spin_lock_init(&cmd->alloc_lock);
 	spin_lock_init(&cmd->token_lock);
 
-	create_msg_cache(dev);
-
 	set_wqname(dev);
 	cmd->wq = create_singlethread_workqueue(cmd->wq_name);
 	if (!cmd->wq) {
 		mlx5_core_err(dev, "failed to create command workqueue\n");
-		err = -ENOMEM;
-		goto err_cache;
+		return -ENOMEM;
 	}
 
 	mlx5_cmdif_debugfs_init(dev);
 
 	return 0;
-
-err_cache:
-	destroy_msg_cache(dev);
-err_cmd_page:
-	free_cmd_page(dev, cmd);
-err_free_pool:
-	dma_pool_destroy(cmd->pool);
-	return err;
 }
@@ -2240,15 +2211,15 @@ void mlx5_cmd_cleanup(struct mlx5_core_dev *dev)
 	mlx5_cmdif_debugfs_cleanup(dev);
 	destroy_workqueue(cmd->wq);
-	destroy_msg_cache(dev);
-	free_cmd_page(dev, cmd);
-	dma_pool_destroy(cmd->pool);
 }
 
 int mlx5_cmd_enable(struct mlx5_core_dev *dev)
 {
+	int size = sizeof(struct mlx5_cmd_prot_block);
+	int align = roundup_pow_of_two(size);
 	struct mlx5_cmd *cmd = &dev->cmd;
 	u32 cmd_h, cmd_l;
+	int err;
 
 	memset(&cmd->vars, 0, sizeof(cmd->vars));
 	cmd->vars.cmdif_rev = cmdif_rev(dev);
@@ -2281,10 +2252,21 @@ int mlx5_cmd_enable(struct mlx5_core_dev *dev)
 	sema_init(&cmd->vars.pages_sem, 1);
 	sema_init(&cmd->vars.throttle_sem, DIV_ROUND_UP(cmd->vars.max_reg_cmds, 2));
 
+	cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
+	if (!cmd->pool)
+		return -ENOMEM;
+
+	err = alloc_cmd_page(dev, cmd);
+	if (err)
+		goto err_free_pool;
+
 	cmd_h = (u32)((u64)(cmd->dma) >> 32);
 	cmd_l = (u32)(cmd->dma);
-	if (WARN_ON(cmd_l & 0xfff))
-		return -EINVAL;
+	if (cmd_l & 0xfff) {
+		mlx5_core_err(dev, "invalid command queue address\n");
+		err = -ENOMEM;
+		goto err_cmd_page;
+	}
 
 	iowrite32be(cmd_h, &dev->iseg->cmdq_addr_h);
 	iowrite32be(cmd_l, &dev->iseg->cmdq_addr_l_sz);
@@ -2297,17 +2279,27 @@ int mlx5_cmd_enable(struct mlx5_core_dev *dev)
 	cmd->mode = CMD_MODE_POLLING;
 	cmd->allowed_opcode = CMD_ALLOWED_OPCODE_ALL;
 
+	create_msg_cache(dev);
 	create_debugfs_files(dev);
 
 	return 0;
+
+err_cmd_page:
+	free_cmd_page(dev, cmd);
+err_free_pool:
+	dma_pool_destroy(cmd->pool);
+	return err;
 }
 
 void mlx5_cmd_disable(struct mlx5_core_dev *dev)
 {
 	struct mlx5_cmd *cmd = &dev->cmd;
 
-	clean_debug_files(dev);
 	flush_workqueue(cmd->wq);
+	clean_debug_files(dev);
+	destroy_msg_cache(dev);
+	free_cmd_page(dev, cmd);
+	dma_pool_destroy(cmd->pool);
 }
void mlx5_cmd_set_state(struct mlx5_core_dev *dev, void mlx5_cmd_set_state(struct mlx5_core_dev *dev,


@@ -848,7 +848,7 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work)
 	mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner);
 
 	if (tracer->owner) {
-		tracer->owner = false;
+		mlx5_fw_tracer_ownership_acquire(tracer);
 		return;
 	}


@@ -467,6 +467,17 @@ static int mlx5_esw_bridge_switchdev_event(struct notifier_block *nb,
 		/* only handle the event on peers */
 		if (mlx5_esw_bridge_is_local(dev, rep, esw))
 			break;
+
+		fdb_info = container_of(info,
+					struct switchdev_notifier_fdb_info,
+					info);
+		/* Mark for deletion to prevent the update wq task from
+		 * spuriously refreshing the entry which would mark it again as
+		 * offloaded in SW bridge. After this fallthrough to regular
+		 * async delete code.
+		 */
+		mlx5_esw_bridge_fdb_mark_deleted(dev, vport_num, esw_owner_vhca_id, br_offloads,
+						 fdb_info);
 		fallthrough;
 	case SWITCHDEV_FDB_ADD_TO_DEVICE:
 	case SWITCHDEV_FDB_DEL_TO_DEVICE:


@@ -24,7 +24,8 @@ static int mlx5e_set_int_port_tunnel(struct mlx5e_priv *priv,
 	route_dev = dev_get_by_index(dev_net(e->out_dev), e->route_dev_ifindex);
 
-	if (!route_dev || !netif_is_ovs_master(route_dev))
+	if (!route_dev || !netif_is_ovs_master(route_dev) ||
+	    attr->parse_attr->filter_dev == e->out_dev)
 		goto out;
 
 	err = mlx5e_set_fwd_to_int_port_actions(priv, attr, e->route_dev_ifindex,


@@ -874,11 +874,11 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	}
 
 out:
-	if (flags & XDP_XMIT_FLUSH) {
-		if (sq->mpwqe.wqe)
-			mlx5e_xdp_mpwqe_complete(sq);
+	if (sq->mpwqe.wqe)
+		mlx5e_xdp_mpwqe_complete(sq);
+
+	if (flags & XDP_XMIT_FLUSH)
 		mlx5e_xmit_xdp_doorbell(sq);
-	}
 
 	return nxmit;
 }


@@ -701,7 +701,7 @@ mlx5e_rep_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
 	/* update HW stats in background for next time */
 	mlx5e_queue_update_stats(priv);
-	memcpy(stats, &priv->stats.vf_vport, sizeof(*stats));
+	mlx5e_stats_copy_rep_stats(stats, &priv->stats.rep_stats);
 }
 
 static int mlx5e_rep_change_mtu(struct net_device *netdev, int new_mtu)
@@ -769,6 +769,7 @@ static int mlx5e_rep_max_nch_limit(struct mlx5_core_dev *mdev)
 static void mlx5e_build_rep_params(struct net_device *netdev)
 {
+	const bool take_rtnl = netdev->reg_state == NETREG_REGISTERED;
 	struct mlx5e_priv *priv = netdev_priv(netdev);
 	struct mlx5e_rep_priv *rpriv = priv->ppriv;
 	struct mlx5_eswitch_rep *rep = rpriv->rep;
@@ -794,8 +795,15 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
 	/* RQ */
 	mlx5e_build_rq_params(mdev, params);
 
+	/* If netdev is already registered (e.g. move from nic profile to uplink,
+	 * RTNL lock must be held before triggering netdev notifiers.
+	 */
+	if (take_rtnl)
+		rtnl_lock();
+
 	/* update XDP supported features */
 	mlx5e_set_xdp_feature(netdev);
+	if (take_rtnl)
+		rtnl_unlock();
 
 	/* CQ moderation params */
 	params->rx_dim_enabled = MLX5_CAP_GEN(mdev, cq_moderation);


@@ -457,26 +457,41 @@ static int mlx5e_alloc_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
 static int mlx5e_refill_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
 {
 	int remaining = wqe_bulk;
-	int i = 0;
+	int total_alloc = 0;
+	int refill_alloc;
+	int refill;
 
 	/* The WQE bulk is split into smaller bulks that are sized
 	 * according to the page pool cache refill size to avoid overflowing
 	 * the page pool cache due to too many page releases at once.
 	 */
 	do {
-		int refill = min_t(u16, rq->wqe.info.refill_unit, remaining);
-		int alloc_count;
+		refill = min_t(u16, rq->wqe.info.refill_unit, remaining);
 
-		mlx5e_free_rx_wqes(rq, ix + i, refill);
-		alloc_count = mlx5e_alloc_rx_wqes(rq, ix + i, refill);
-		i += alloc_count;
+		mlx5e_free_rx_wqes(rq, ix + total_alloc, refill);
+		refill_alloc = mlx5e_alloc_rx_wqes(rq, ix + total_alloc, refill);
+		if (unlikely(refill_alloc != refill))
+			goto err_free;
 
-		if (unlikely(alloc_count != refill))
-			break;
-
+		total_alloc += refill_alloc;
 		remaining -= refill;
 	} while (remaining);
 
-	return i;
+	return total_alloc;
+
+err_free:
+	mlx5e_free_rx_wqes(rq, ix, total_alloc + refill_alloc);
+
+	for (int i = 0; i < total_alloc + refill; i++) {
+		int j = mlx5_wq_cyc_ctr2ix(&rq->wqe.wq, ix + i);
+		struct mlx5e_wqe_frag_info *frag;
+
+		frag = get_frag(rq, j);
+		for (int k = 0; k < rq->wqe.info.num_frags; k++, frag++)
+			frag->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
+	}
+
+	return 0;
 }
 
 static void
@@ -816,6 +831,8 @@ err_unmap:
 		mlx5e_page_release_fragmented(rq, frag_page);
 	}
 
+	bitmap_fill(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe);
+
 err:
 	rq->stats->buff_alloc_err++;


@@ -484,11 +484,20 @@ struct mlx5e_stats {
 	struct mlx5e_vnic_env_stats vnic;
 	struct mlx5e_vport_stats vport;
 	struct mlx5e_pport_stats pport;
-	struct rtnl_link_stats64 vf_vport;
 	struct mlx5e_pcie_stats pcie;
 	struct mlx5e_rep_stats rep_stats;
 };
 
+static inline void mlx5e_stats_copy_rep_stats(struct rtnl_link_stats64 *vf_vport,
+					      struct mlx5e_rep_stats *rep_stats)
+{
+	memset(vf_vport, 0, sizeof(*vf_vport));
+	vf_vport->rx_packets = rep_stats->vport_rx_packets;
+	vf_vport->tx_packets = rep_stats->vport_tx_packets;
+	vf_vport->rx_bytes = rep_stats->vport_rx_bytes;
+	vf_vport->tx_bytes = rep_stats->vport_tx_bytes;
+}
+
 extern mlx5e_stats_grp_t mlx5e_nic_stats_grps[];
 unsigned int mlx5e_nic_stats_grps_num(struct mlx5e_priv *priv);


@@ -4972,7 +4972,8 @@ static int scan_tc_matchall_fdb_actions(struct mlx5e_priv *priv,
 			if (err)
 				return err;
 
-			rpriv->prev_vf_vport_stats = priv->stats.vf_vport;
+			mlx5e_stats_copy_rep_stats(&rpriv->prev_vf_vport_stats,
+						   &priv->stats.rep_stats);
 			break;
 		default:
 			NL_SET_ERR_MSG_MOD(extack, "mlx5 supports only police action for matchall");
@@ -5012,7 +5013,7 @@ void mlx5e_tc_stats_matchall(struct mlx5e_priv *priv,
 	u64 dbytes;
 	u64 dpkts;
 
-	cur_stats = priv->stats.vf_vport;
+	mlx5e_stats_copy_rep_stats(&cur_stats, &priv->stats.rep_stats);
 	dpkts = cur_stats.rx_packets - rpriv->prev_vf_vport_stats.rx_packets;
 	dbytes = cur_stats.rx_bytes - rpriv->prev_vf_vport_stats.rx_bytes;
 	rpriv->prev_vf_vport_stats = cur_stats;


@@ -1748,6 +1748,28 @@ void mlx5_esw_bridge_fdb_update_used(struct net_device *dev, u16 vport_num, u16
 	entry->lastuse = jiffies;
 }
 
+void mlx5_esw_bridge_fdb_mark_deleted(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
+				      struct mlx5_esw_bridge_offloads *br_offloads,
+				      struct switchdev_notifier_fdb_info *fdb_info)
+{
+	struct mlx5_esw_bridge_fdb_entry *entry;
+	struct mlx5_esw_bridge *bridge;
+
+	bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
+	if (!bridge)
+		return;
+
+	entry = mlx5_esw_bridge_fdb_lookup(bridge, fdb_info->addr, fdb_info->vid);
+	if (!entry) {
+		esw_debug(br_offloads->esw->dev,
+			  "FDB mark deleted entry with specified key not found (MAC=%pM,vid=%u,vport=%u)\n",
+			  fdb_info->addr, fdb_info->vid, vport_num);
+		return;
+	}
+
+	entry->flags |= MLX5_ESW_BRIDGE_FLAG_DELETED;
+}
+
 void mlx5_esw_bridge_fdb_create(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
 				struct mlx5_esw_bridge_offloads *br_offloads,
 				struct switchdev_notifier_fdb_info *fdb_info)
@@ -1810,7 +1832,8 @@ void mlx5_esw_bridge_update(struct mlx5_esw_bridge_offloads *br_offloads)
 			unsigned long lastuse =
 				(unsigned long)mlx5_fc_query_lastuse(entry->ingress_counter);
 
-			if (entry->flags & MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER)
+			if (entry->flags & (MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER |
+					    MLX5_ESW_BRIDGE_FLAG_DELETED))
 				continue;
 
 			if (time_after(lastuse, entry->lastuse))


@@ -62,6 +62,9 @@ int mlx5_esw_bridge_vport_peer_unlink(struct net_device *br_netdev, u16 vport_nu
 void mlx5_esw_bridge_fdb_update_used(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
 				     struct mlx5_esw_bridge_offloads *br_offloads,
 				     struct switchdev_notifier_fdb_info *fdb_info);
+void mlx5_esw_bridge_fdb_mark_deleted(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
+				      struct mlx5_esw_bridge_offloads *br_offloads,
+				      struct switchdev_notifier_fdb_info *fdb_info);
 void mlx5_esw_bridge_fdb_create(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id,
 				struct mlx5_esw_bridge_offloads *br_offloads,
 				struct switchdev_notifier_fdb_info *fdb_info);


@@ -133,6 +133,7 @@ struct mlx5_esw_bridge_mdb_key {
 enum {
 	MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER = BIT(0),
 	MLX5_ESW_BRIDGE_FLAG_PEER = BIT(1),
+	MLX5_ESW_BRIDGE_FLAG_DELETED = BIT(2),
 };
 
 enum {


@@ -1038,11 +1038,8 @@ const u32 *mlx5_esw_query_functions(struct mlx5_core_dev *dev)
 	return ERR_PTR(err);
 }
 
-static void mlx5_eswitch_event_handlers_register(struct mlx5_eswitch *esw)
+static void mlx5_eswitch_event_handler_register(struct mlx5_eswitch *esw)
 {
-	MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
-	mlx5_eq_notifier_register(esw->dev, &esw->nb);
-
 	if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev)) {
 		MLX5_NB_INIT(&esw->esw_funcs.nb, mlx5_esw_funcs_changed_handler,
 			     ESW_FUNCTIONS_CHANGED);
@@ -1050,13 +1047,11 @@ static void mlx5_eswitch_event_handlers_register(struct mlx5_eswitch *esw)
 	}
 }
 
-static void mlx5_eswitch_event_handlers_unregister(struct mlx5_eswitch *esw)
+static void mlx5_eswitch_event_handler_unregister(struct mlx5_eswitch *esw)
 {
 	if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev))
 		mlx5_eq_notifier_unregister(esw->dev, &esw->esw_funcs.nb);
 
-	mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
-
 	flush_workqueue(esw->work_queue);
 }
 
@@ -1483,6 +1478,9 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 
 	mlx5_eswitch_update_num_of_vfs(esw, num_vfs);
 
+	MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
+	mlx5_eq_notifier_register(esw->dev, &esw->nb);
+
 	if (esw->mode == MLX5_ESWITCH_LEGACY) {
 		err = esw_legacy_enable(esw);
 	} else {
@@ -1495,7 +1493,7 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 
 	esw->fdb_table.flags |= MLX5_ESW_FDB_CREATED;
 
-	mlx5_eswitch_event_handlers_register(esw);
+	mlx5_eswitch_event_handler_register(esw);
 
 	esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n",
 		 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
@@ -1622,7 +1620,8 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw)
 	 */
 	mlx5_esw_mode_change_notify(esw, MLX5_ESWITCH_LEGACY);
 
-	mlx5_eswitch_event_handlers_unregister(esw);
+	mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
+	mlx5_eswitch_event_handler_unregister(esw);
 
 	esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n",
 		 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",


@@ -113,7 +113,10 @@ static void qed_ll2b_complete_tx_packet(void *cxt,
 static int qed_ll2_alloc_buffer(struct qed_dev *cdev,
 				u8 **data, dma_addr_t *phys_addr)
 {
-	*data = kmalloc(cdev->ll2->rx_size, GFP_ATOMIC);
+	size_t size = cdev->ll2->rx_size + NET_SKB_PAD +
+		      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	*data = kmalloc(size, GFP_ATOMIC);
 	if (!(*data)) {
 		DP_INFO(cdev, "Failed to allocate LL2 buffer data\n");
 		return -ENOMEM;
@@ -2589,7 +2592,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 	INIT_LIST_HEAD(&cdev->ll2->list);
 	spin_lock_init(&cdev->ll2->lock);
 
-	cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN +
+	cdev->ll2->rx_size = PRM_DMA_PAD_BYTES_NUM + ETH_HLEN +
 			     L1_CACHE_BYTES + params->mtu;


@@ -90,12 +90,16 @@ config TI_CPTS
 	  The unit can time stamp PTP UDP/IPv4 and Layer 2 packets, and the
 	  driver offers a PTP Hardware Clock.
 
+config TI_K3_CPPI_DESC_POOL
+	tristate
+
 config TI_K3_AM65_CPSW_NUSS
 	tristate "TI K3 AM654x/J721E CPSW Ethernet driver"
 	depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER
 	select NET_DEVLINK
 	select TI_DAVINCI_MDIO
 	select PHYLINK
+	select TI_K3_CPPI_DESC_POOL
 	imply PHY_TI_GMII_SEL
 	depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS
 	help
@@ -187,6 +191,7 @@ config TI_ICSSG_PRUETH
 	tristate "TI Gigabit PRU Ethernet driver"
 	select PHYLIB
 	select TI_ICSS_IEP
+	select TI_K3_CPPI_DESC_POOL
 	depends on PRU_REMOTEPROC
 	depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER
 	help


@@ -24,14 +24,15 @@ keystone_netcp-y := netcp_core.o cpsw_ale.o
 obj-$(CONFIG_TI_KEYSTONE_NETCP_ETHSS) += keystone_netcp_ethss.o
 keystone_netcp_ethss-y := netcp_ethss.o netcp_sgmii.o netcp_xgbepcsr.o cpsw_ale.o
 
+obj-$(CONFIG_TI_K3_CPPI_DESC_POOL) += k3-cppi-desc-pool.o
+
 obj-$(CONFIG_TI_K3_AM65_CPSW_NUSS) += ti-am65-cpsw-nuss.o
-ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o k3-cppi-desc-pool.o am65-cpsw-qos.o
+ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o am65-cpsw-qos.o
 ti-am65-cpsw-nuss-$(CONFIG_TI_K3_AM65_CPSW_SWITCHDEV) += am65-cpsw-switchdev.o
 obj-$(CONFIG_TI_K3_AM65_CPTS) += am65-cpts.o
 
 obj-$(CONFIG_TI_ICSSG_PRUETH) += icssg-prueth.o
-icssg-prueth-y := k3-cppi-desc-pool.o \
-		  icssg/icssg_prueth.o \
+icssg-prueth-y := icssg/icssg_prueth.o \
 		  icssg/icssg_classifier.o \
 		  icssg/icssg_queues.o \
 		  icssg/icssg_config.o \


@@ -379,9 +379,9 @@ int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
 /* Bitmask for ICSSG r30 commands */
 static const struct icssg_r30_cmd emac_r32_bitmask[] = {
-	{{0xffff0004, 0xffff0100, 0xffff0100, EMAC_NONE}},	/* EMAC_PORT_DISABLE */
+	{{0xffff0004, 0xffff0100, 0xffff0004, EMAC_NONE}},	/* EMAC_PORT_DISABLE */
 	{{0xfffb0040, 0xfeff0200, 0xfeff0200, EMAC_NONE}},	/* EMAC_PORT_BLOCK */
-	{{0xffbb0000, 0xfcff0000, 0xdcff0000, EMAC_NONE}},	/* EMAC_PORT_FORWARD */
+	{{0xffbb0000, 0xfcff0000, 0xdcfb0000, EMAC_NONE}},	/* EMAC_PORT_FORWARD */
 	{{0xffbb0000, 0xfcff0000, 0xfcff2000, EMAC_NONE}},	/* EMAC_PORT_FORWARD_WO_LEARNING */
 	{{0xffff0001, EMAC_NONE, EMAC_NONE, EMAC_NONE}},	/* ACCEPT ALL */
 	{{0xfffe0002, EMAC_NONE, EMAC_NONE, EMAC_NONE}},	/* ACCEPT TAGGED */


@@ -9,6 +9,9 @@
 #include "icssg_stats.h"
 #include <linux/regmap.h>
 
+#define ICSSG_TX_PACKET_OFFSET	0xA0
+#define ICSSG_TX_BYTE_OFFSET	0xEC
+
 static u32 stats_base[] = {	0x54c,	/* Slice 0 stats start */
 				0xb18,	/* Slice 1 stats start */
 };
@@ -18,6 +21,7 @@ void emac_update_hardware_stats(struct prueth_emac *emac)
 	struct prueth *prueth = emac->prueth;
 	int slice = prueth_emac_slice(emac);
 	u32 base = stats_base[slice];
+	u32 tx_pkt_cnt = 0;
 	u32 val;
 	int i;
 
@@ -29,7 +33,12 @@ void emac_update_hardware_stats(struct prueth_emac *emac)
 			      base + icssg_all_stats[i].offset,
 			      val);
 
+		if (icssg_all_stats[i].offset == ICSSG_TX_PACKET_OFFSET)
+			tx_pkt_cnt = val;
+
 		emac->stats[i] += val;
+		if (icssg_all_stats[i].offset == ICSSG_TX_BYTE_OFFSET)
+			emac->stats[i] -= tx_pkt_cnt * 8;
 	}
 }


@@ -39,6 +39,7 @@ void k3_cppi_desc_pool_destroy(struct k3_cppi_desc_pool *pool)
 
 	gen_pool_destroy(pool->gen_pool);	/* frees pool->name */
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_destroy);
 
 struct k3_cppi_desc_pool *
 k3_cppi_desc_pool_create_name(struct device *dev, size_t size,
@@ -98,29 +99,38 @@ gen_pool_create_fail:
 	devm_kfree(pool->dev, pool);
 	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_create_name);
 
 dma_addr_t k3_cppi_desc_pool_virt2dma(struct k3_cppi_desc_pool *pool,
 				      void *addr)
 {
 	return addr ? pool->dma_addr + (addr - pool->cpumem) : 0;
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_virt2dma);
 
 void *k3_cppi_desc_pool_dma2virt(struct k3_cppi_desc_pool *pool, dma_addr_t dma)
 {
 	return dma ? pool->cpumem + (dma - pool->dma_addr) : NULL;
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_dma2virt);
 
 void *k3_cppi_desc_pool_alloc(struct k3_cppi_desc_pool *pool)
 {
 	return (void *)gen_pool_alloc(pool->gen_pool, pool->desc_size);
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_alloc);
 
 void k3_cppi_desc_pool_free(struct k3_cppi_desc_pool *pool, void *addr)
 {
 	gen_pool_free(pool->gen_pool, (unsigned long)addr, pool->desc_size);
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_free);
 
 size_t k3_cppi_desc_pool_avail(struct k3_cppi_desc_pool *pool)
 {
 	return gen_pool_avail(pool->gen_pool) / pool->desc_size;
 }
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_avail);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("TI K3 CPPI5 descriptors pool API");


@@ -55,6 +55,27 @@ out:
 	return r;
 }
 
+static int mdio_mux_read_c45(struct mii_bus *bus, int phy_id, int dev_addr,
+			     int regnum)
+{
+	struct mdio_mux_child_bus *cb = bus->priv;
+	struct mdio_mux_parent_bus *pb = cb->parent;
+	int r;
+
+	mutex_lock_nested(&pb->mii_bus->mdio_lock, MDIO_MUTEX_MUX);
+	r = pb->switch_fn(pb->current_child, cb->bus_number, pb->switch_data);
+	if (r)
+		goto out;
+
+	pb->current_child = cb->bus_number;
+
+	r = pb->mii_bus->read_c45(pb->mii_bus, phy_id, dev_addr, regnum);
+out:
+	mutex_unlock(&pb->mii_bus->mdio_lock);
+
+	return r;
+}
+
 /*
  * The parent bus' lock is used to order access to the switch_fn.
  */
@@ -80,6 +101,28 @@ out:
 	return r;
 }
 
+static int mdio_mux_write_c45(struct mii_bus *bus, int phy_id, int dev_addr,
+			      int regnum, u16 val)
+{
+	struct mdio_mux_child_bus *cb = bus->priv;
+	struct mdio_mux_parent_bus *pb = cb->parent;
+	int r;
+
+	mutex_lock_nested(&pb->mii_bus->mdio_lock, MDIO_MUTEX_MUX);
+	r = pb->switch_fn(pb->current_child, cb->bus_number, pb->switch_data);
+	if (r)
+		goto out;
+
+	pb->current_child = cb->bus_number;
+
+	r = pb->mii_bus->write_c45(pb->mii_bus, phy_id, dev_addr, regnum, val);
+out:
+	mutex_unlock(&pb->mii_bus->mdio_lock);
+
+	return r;
+}
+
 static int parent_count;
 
 static void mdio_mux_uninit_children(struct mdio_mux_parent_bus *pb)
@@ -173,6 +216,10 @@ int mdio_mux_init(struct device *dev,
 		cb->mii_bus->parent = dev;
 		cb->mii_bus->read = mdio_mux_read;
 		cb->mii_bus->write = mdio_mux_write;
+		if (parent_bus->read_c45)
+			cb->mii_bus->read_c45 = mdio_mux_read_c45;
+		if (parent_bus->write_c45)
+			cb->mii_bus->write_c45 = mdio_mux_write_c45;
 		r = of_mdiobus_register(cb->mii_bus, child_bus_node);
 		if (r) {
 			mdiobus_free(cb->mii_bus);


@@ -894,6 +894,9 @@ static int bcm7xxx_28nm_probe(struct phy_device *phydev)
 	.name		= _name,				\
 	/* PHY_BASIC_FEATURES */				\
 	.flags		= PHY_IS_INTERNAL,			\
+	.get_sset_count	= bcm_phy_get_sset_count,		\
+	.get_strings	= bcm_phy_get_strings,			\
+	.get_stats	= bcm7xxx_28nm_get_phy_stats,		\
 	.probe		= bcm7xxx_28nm_probe,			\
 	.config_init	= bcm7xxx_16nm_ephy_config_init,	\
 	.config_aneg	= genphy_config_aneg,			\


@@ -3073,10 +3073,11 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 	struct net *net = sock_net(&tfile->sk);
 	struct tun_struct *tun;
 	void __user* argp = (void __user*)arg;
-	unsigned int ifindex, carrier;
+	unsigned int carrier;
 	struct ifreq ifr;
 	kuid_t owner;
 	kgid_t group;
+	int ifindex;
 	int sndbuf;
 	int vnet_hdr_sz;
 	int le;
@@ -3132,7 +3133,9 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 		ret = -EFAULT;
 		if (copy_from_user(&ifindex, argp, sizeof(ifindex)))
 			goto unlock;
-
+		ret = -EINVAL;
+		if (ifindex < 0)
+			goto unlock;
 		ret = 0;
 		tfile->ifindex = ifindex;
 		goto unlock;


@@ -897,7 +897,7 @@ static int smsc95xx_reset(struct usbnet *dev)
 
 	if (timeout >= 100) {
 		netdev_warn(dev->net, "timeout waiting for completion of Lite Reset\n");
-		return ret;
+		return -ETIMEDOUT;
 	}
 
 	ret = smsc95xx_set_mac_address(dev);


@@ -4,7 +4,6 @@
  */
 
 #include <linux/delay.h>
-#include <linux/pm_runtime.h>
 
 #include "iosm_ipc_chnl_cfg.h"
 #include "iosm_ipc_devlink.h"
@@ -632,11 +631,6 @@ static void ipc_imem_run_state_worker(struct work_struct *instance)
 	/* Complete all memory stores after setting bit */
 	smp_mb__after_atomic();
 
-	if (ipc_imem->pcie->pci->device == INTEL_CP_DEVICE_7560_ID) {
-		pm_runtime_mark_last_busy(ipc_imem->dev);
-		pm_runtime_put_autosuspend(ipc_imem->dev);
-	}
-
 	return;
 
 err_ipc_mux_deinit:
@@ -1240,7 +1234,6 @@ void ipc_imem_cleanup(struct iosm_imem *ipc_imem)
 	/* forward MDM_NOT_READY to listeners */
 	ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_NOT_READY);
-	pm_runtime_get_sync(ipc_imem->dev);
 
 	hrtimer_cancel(&ipc_imem->td_alloc_timer);
 	hrtimer_cancel(&ipc_imem->tdupdate_timer);
@@ -1426,16 +1419,6 @@ struct iosm_imem *ipc_imem_init(struct iosm_pcie *pcie, unsigned int device_id,
 		set_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag);
 	}
 
-	if (!pm_runtime_enabled(ipc_imem->dev))
-		pm_runtime_enable(ipc_imem->dev);
-
-	pm_runtime_set_autosuspend_delay(ipc_imem->dev,
-					 IPC_MEM_AUTO_SUSPEND_DELAY_MS);
-	pm_runtime_use_autosuspend(ipc_imem->dev);
-	pm_runtime_allow(ipc_imem->dev);
-	pm_runtime_mark_last_busy(ipc_imem->dev);
-
 	return ipc_imem;
 devlink_channel_fail:
 	ipc_devlink_deinit(ipc_imem->ipc_devlink);


@@ -103,8 +103,6 @@ struct ipc_chnl_cfg;
 #define FULLY_FUNCTIONAL 0
 #define IOSM_DEVLINK_INIT 1
 
-#define IPC_MEM_AUTO_SUSPEND_DELAY_MS 5000
-
 /* List of the supported UL/DL pipes. */
 enum ipc_mem_pipes {
 	IPC_MEM_PIPE_0 = 0,


@@ -6,7 +6,6 @@
 #include <linux/acpi.h>
 #include <linux/bitfield.h>
 #include <linux/module.h>
-#include <linux/pm_runtime.h>
 #include <net/rtnetlink.h>
 
 #include "iosm_ipc_imem.h"
@@ -438,8 +437,7 @@ static int __maybe_unused ipc_pcie_resume_cb(struct device *dev)
 	return 0;
 }
 
-static DEFINE_RUNTIME_DEV_PM_OPS(iosm_ipc_pm, ipc_pcie_suspend_cb,
-				 ipc_pcie_resume_cb, NULL);
+static SIMPLE_DEV_PM_OPS(iosm_ipc_pm, ipc_pcie_suspend_cb, ipc_pcie_resume_cb);
 
 static struct pci_driver iosm_ipc_driver = {
 	.name = KBUILD_MODNAME,


@@ -3,8 +3,6 @@
  * Copyright (C) 2020-21 Intel Corporation.
  */
 
-#include <linux/pm_runtime.h>
-
 #include "iosm_ipc_chnl_cfg.h"
 #include "iosm_ipc_imem_ops.h"
 #include "iosm_ipc_port.h"
@@ -15,16 +13,12 @@ static int ipc_port_ctrl_start(struct wwan_port *port)
 	struct iosm_cdev *ipc_port = wwan_port_get_drvdata(port);
 	int ret = 0;
 
-	pm_runtime_get_sync(ipc_port->ipc_imem->dev);
 	ipc_port->channel = ipc_imem_sys_port_open(ipc_port->ipc_imem,
 						   ipc_port->chl_id,
 						   IPC_HP_CDEV_OPEN);
 	if (!ipc_port->channel)
 		ret = -EIO;
 
-	pm_runtime_mark_last_busy(ipc_port->ipc_imem->dev);
-	pm_runtime_put_autosuspend(ipc_port->ipc_imem->dev);
-
 	return ret;
 }
 
@@ -33,24 +27,15 @@ static void ipc_port_ctrl_stop(struct wwan_port *port)
 {
 	struct iosm_cdev *ipc_port = wwan_port_get_drvdata(port);
 
-	pm_runtime_get_sync(ipc_port->ipc_imem->dev);
 	ipc_imem_sys_port_close(ipc_port->ipc_imem, ipc_port->channel);
-
-	pm_runtime_mark_last_busy(ipc_port->ipc_imem->dev);
-	pm_runtime_put_autosuspend(ipc_port->ipc_imem->dev);
 }
 
 /* transfer control data to modem */
 static int ipc_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
 {
 	struct iosm_cdev *ipc_port = wwan_port_get_drvdata(port);
-	int ret;
 
-	pm_runtime_get_sync(ipc_port->ipc_imem->dev);
-	ret = ipc_imem_sys_cdev_write(ipc_port, skb);
-
-	pm_runtime_mark_last_busy(ipc_port->ipc_imem->dev);
-	pm_runtime_put_autosuspend(ipc_port->ipc_imem->dev);
-
-	return ret;
+	return ipc_imem_sys_cdev_write(ipc_port, skb);
 }
 
 static const struct wwan_port_ops ipc_wwan_ctrl_ops = {


@@ -3,9 +3,7 @@
 * Copyright (C) 2020-2021 Intel Corporation.
 */

-#include <linux/pm_runtime.h>
-
 #include <linux/wwan.h>
 #include "iosm_ipc_trace.h"
 
 /* sub buffer size and number of sub buffer */
@@ -99,8 +97,6 @@ static ssize_t ipc_trace_ctrl_file_write(struct file *filp,
 	if (ret)
 		return ret;
 
-	pm_runtime_get_sync(ipc_trace->ipc_imem->dev);
-
 	mutex_lock(&ipc_trace->trc_mutex);
 	if (val == TRACE_ENABLE && ipc_trace->mode != TRACE_ENABLE) {
 		ipc_trace->channel = ipc_imem_sys_port_open(ipc_trace->ipc_imem,
@@ -121,10 +117,6 @@ static ssize_t ipc_trace_ctrl_file_write(struct file *filp,
 	ret = count;
 unlock:
 	mutex_unlock(&ipc_trace->trc_mutex);
-
-	pm_runtime_mark_last_busy(ipc_trace->ipc_imem->dev);
-	pm_runtime_put_autosuspend(ipc_trace->ipc_imem->dev);
 	return ret;
 }


@@ -6,7 +6,6 @@
 #include <linux/etherdevice.h>
 #include <linux/if_arp.h>
 #include <linux/if_link.h>
-#include <linux/pm_runtime.h>
 #include <linux/rtnetlink.h>
 #include <linux/wwan.h>
 #include <net/pkt_sched.h>
@@ -52,13 +51,11 @@ static int ipc_wwan_link_open(struct net_device *netdev)
 	struct iosm_netdev_priv *priv = wwan_netdev_drvpriv(netdev);
 	struct iosm_wwan *ipc_wwan = priv->ipc_wwan;
 	int if_id = priv->if_id;
-	int ret = 0;
 
 	if (if_id < IP_MUX_SESSION_START ||
 	    if_id >= ARRAY_SIZE(ipc_wwan->sub_netlist))
 		return -EINVAL;
 
-	pm_runtime_get_sync(ipc_wwan->ipc_imem->dev);
 	/* get channel id */
 	priv->ch_id = ipc_imem_sys_wwan_open(ipc_wwan->ipc_imem, if_id);
 
@@ -66,8 +63,7 @@ static int ipc_wwan_link_open(struct net_device *netdev)
 		dev_err(ipc_wwan->dev,
 			"cannot connect wwan0 & id %d to the IPC mem layer",
 			if_id);
-		ret = -ENODEV;
-		goto err_out;
+		return -ENODEV;
 	}
 
 	/* enable tx path, DL data may follow */
@@ -76,11 +72,7 @@ static int ipc_wwan_link_open(struct net_device *netdev)
 	dev_dbg(ipc_wwan->dev, "Channel id %d allocated to if_id %d",
 		priv->ch_id, priv->if_id);
 
-err_out:
-	pm_runtime_mark_last_busy(ipc_wwan->ipc_imem->dev);
-	pm_runtime_put_autosuspend(ipc_wwan->ipc_imem->dev);
-
-	return ret;
+	return 0;
 }
 
 /* Bring-down the wwan net link */
@@ -90,12 +82,9 @@ static int ipc_wwan_link_stop(struct net_device *netdev)
 
 	netif_stop_queue(netdev);
 
-	pm_runtime_get_sync(priv->ipc_wwan->ipc_imem->dev);
 	ipc_imem_sys_wwan_close(priv->ipc_wwan->ipc_imem, priv->if_id,
 				priv->ch_id);
 	priv->ch_id = -1;
-	pm_runtime_mark_last_busy(priv->ipc_wwan->ipc_imem->dev);
-	pm_runtime_put_autosuspend(priv->ipc_wwan->ipc_imem->dev);
 
 	return 0;
 }
@@ -117,7 +106,6 @@ static netdev_tx_t ipc_wwan_link_transmit(struct sk_buff *skb,
 	    if_id >= ARRAY_SIZE(ipc_wwan->sub_netlist))
 		return -EINVAL;
 
-	pm_runtime_get(ipc_wwan->ipc_imem->dev);
 	/* Send the SKB to device for transmission */
 	ret = ipc_imem_sys_wwan_transmit(ipc_wwan->ipc_imem,
 					 if_id, priv->ch_id, skb);
@@ -131,14 +119,9 @@ static netdev_tx_t ipc_wwan_link_transmit(struct sk_buff *skb,
 		ret = NETDEV_TX_BUSY;
 		dev_err(ipc_wwan->dev, "unable to push packets");
 	} else {
-		pm_runtime_mark_last_busy(ipc_wwan->ipc_imem->dev);
-		pm_runtime_put_autosuspend(ipc_wwan->ipc_imem->dev);
 		goto exit;
 	}
 
-	pm_runtime_mark_last_busy(ipc_wwan->ipc_imem->dev);
-	pm_runtime_put_autosuspend(ipc_wwan->ipc_imem->dev);
-
 	return ret;
 
 exit:


@@ -3,8 +3,8 @@
 #define _LINUX_VIRTIO_NET_H
 
 #include <linux/if_vlan.h>
+#include <linux/udp.h>
 #include <uapi/linux/tcp.h>
-#include <uapi/linux/udp.h>
 #include <uapi/linux/virtio_net.h>
 
 static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type)
@@ -151,9 +151,22 @@ retry:
 		unsigned int nh_off = p_off;
 		struct skb_shared_info *shinfo = skb_shinfo(skb);
 
-		/* UFO may not include transport header in gso_size. */
-		if (gso_type & SKB_GSO_UDP)
+		switch (gso_type & ~SKB_GSO_TCP_ECN) {
+		case SKB_GSO_UDP:
+			/* UFO may not include transport header in gso_size. */
 			nh_off -= thlen;
+			break;
+		case SKB_GSO_UDP_L4:
+			if (!(hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM))
+				return -EINVAL;
+			if (skb->csum_offset != offsetof(struct udphdr, check))
+				return -EINVAL;
+			if (skb->len - p_off > gso_size * UDP_MAX_SEGMENTS)
+				return -EINVAL;
+			if (gso_type != SKB_GSO_UDP_L4)
+				return -EINVAL;
+			break;
+		}
 
 		/* Kernel has a special handling for GSO_BY_FRAGS. */
 		if (gso_size == GSO_BY_FRAGS)


@@ -56,7 +56,7 @@ struct hci_mon_new_index {
 	__u8		type;
 	__u8		bus;
 	bdaddr_t	bdaddr;
-	char		name[8];
+	char		name[8] __nonstring;
 } __packed;
 #define HCI_MON_NEW_INDEX_SIZE 16

--- a/include/net/netns/xfrm.h
+++ b/include/net/netns/xfrm.h
@@ -50,6 +50,7 @@ struct netns_xfrm {
 	struct list_head	policy_all;
 	struct hlist_head	*policy_byidx;
 	unsigned int		policy_idx_hmask;
+	unsigned int		idx_generator;
 	struct hlist_head	policy_inexact[XFRM_POLICY_MAX];
 	struct xfrm_policy_hash	policy_bydst[XFRM_POLICY_MAX];
 	unsigned int		policy_count[XFRM_POLICY_MAX * 2];

--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -336,7 +336,7 @@ struct sk_filter;
   *	@sk_cgrp_data: cgroup data for this cgroup
   *	@sk_memcg: this socket's memory cgroup association
   *	@sk_write_pending: a write to stream socket waits to start
-  *	@sk_wait_pending: number of threads blocked on this socket
+  *	@sk_disconnects: number of disconnect operations performed on this sock
   *	@sk_state_change: callback to indicate change in the state of the sock
   *	@sk_data_ready: callback to indicate there is data to be processed
   *	@sk_write_space: callback to indicate there is bf sending space available
@@ -429,7 +429,7 @@ struct sock {
 	unsigned int		sk_napi_id;
 #endif
 	int			sk_rcvbuf;
-	int			sk_wait_pending;
+	int			sk_disconnects;
 
 	struct sk_filter __rcu	*sk_filter;
 	union {
@@ -1189,8 +1189,7 @@ static inline void sock_rps_reset_rxhash(struct sock *sk)
 }
 
 #define sk_wait_event(__sk, __timeo, __condition, __wait)		\
-	({	int __rc;						\
-		__sk->sk_wait_pending++;				\
+	({	int __rc, __dis = __sk->sk_disconnects;			\
 		release_sock(__sk);					\
 		__rc = __condition;					\
 		if (!__rc) {						\
@@ -1200,8 +1199,7 @@ static inline void sock_rps_reset_rxhash(struct sock *sk)
 		}							\
 		sched_annotate_sleep();					\
 		lock_sock(__sk);					\
-		__sk->sk_wait_pending--;				\
-		__rc = __condition;					\
+		__rc = __dis == __sk->sk_disconnects ? __condition : -EPIPE; \
 		__rc;							\
 	})

--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -141,6 +141,9 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
 #define TCP_RTO_MAX	((unsigned)(120*HZ))
 #define TCP_RTO_MIN	((unsigned)(HZ/5))
 #define TCP_TIMEOUT_MIN	(2U) /* Min timeout for TCP timers in jiffies */
+
+#define TCP_TIMEOUT_MIN_US (2*USEC_PER_MSEC) /* Min TCP timeout in microsecs */
+
 #define TCP_TIMEOUT_INIT ((unsigned)(1*HZ))	/* RFC6298 2.1 initial RTO value	*/
 #define TCP_TIMEOUT_FALLBACK ((unsigned)(3*HZ))	/* RFC 1122 initial RTO value, now
 						 * used as a fallback RTO for the

--- a/include/trace/events/neigh.h
+++ b/include/trace/events/neigh.h
@@ -39,7 +39,6 @@ TRACE_EVENT(neigh_create,
 	),
 
 	TP_fast_assign(
-		struct in6_addr *pin6;
 		__be32 *p32;
 
 		__entry->family = tbl->family;
@@ -47,7 +46,6 @@ TRACE_EVENT(neigh_create,
 		__entry->entries = atomic_read(&tbl->gc_entries);
 		__entry->created = n != NULL;
 		__entry->gc_exempt = exempt_from_gc;
-		pin6 = (struct in6_addr *)__entry->primary_key6;
 		p32 = (__be32 *)__entry->primary_key4;
 
 		if (tbl->family == AF_INET)
@@ -57,6 +55,8 @@ TRACE_EVENT(neigh_create,
 #if IS_ENABLED(CONFIG_IPV6)
 		if (tbl->family == AF_INET6) {
+			struct in6_addr *pin6;
+
 			pin6 = (struct in6_addr *)__entry->primary_key6;
 			*pin6 = *(struct in6_addr *)pkey;
 		}

--- a/net/bluetooth/hci_conn.c
+++ b/net/bluetooth/hci_conn.c
@@ -1627,6 +1627,15 @@ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst,
 		return ERR_PTR(-EOPNOTSUPP);
 	}
 
+	/* Reject outgoing connection to device with same BD ADDR against
+	 * CVE-2020-26555
+	 */
+	if (!bacmp(&hdev->bdaddr, dst)) {
+		bt_dev_dbg(hdev, "Reject connection with same BD_ADDR %pMR\n",
+			   dst);
+		return ERR_PTR(-ECONNREFUSED);
+	}
+
 	acl = hci_conn_hash_lookup_ba(hdev, ACL_LINK, dst);
 	if (!acl) {
 		acl = hci_conn_add(hdev, ACL_LINK, dst, HCI_ROLE_MASTER);

--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -26,6 +26,8 @@
 /* Bluetooth HCI event handling. */
 
 #include <asm/unaligned.h>
+#include <linux/crypto.h>
+#include <crypto/algapi.h>
 
 #include <net/bluetooth/bluetooth.h>
 #include <net/bluetooth/hci_core.h>
@@ -3268,6 +3270,16 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
 
 	bt_dev_dbg(hdev, "bdaddr %pMR type 0x%x", &ev->bdaddr, ev->link_type);
 
+	/* Reject incoming connection from device with same BD ADDR against
+	 * CVE-2020-26555
+	 */
+	if (hdev && !bacmp(&hdev->bdaddr, &ev->bdaddr)) {
+		bt_dev_dbg(hdev, "Reject connection with same BD_ADDR %pMR\n",
+			   &ev->bdaddr);
+		hci_reject_conn(hdev, &ev->bdaddr);
+		return;
+	}
+
 	mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ev->link_type,
 				      &flags);
 
@@ -4742,6 +4754,15 @@ static void hci_link_key_notify_evt(struct hci_dev *hdev, void *data,
 	if (!conn)
 		goto unlock;
 
+	/* Ignore NULL link key against CVE-2020-26555 */
+	if (!crypto_memneq(ev->link_key, ZERO_KEY, HCI_LINK_KEY_SIZE)) {
+		bt_dev_dbg(hdev, "Ignore NULL link key (ZERO KEY) for %pMR",
+			   &ev->bdaddr);
+		hci_disconnect(conn, HCI_ERROR_AUTH_FAILURE);
+		hci_conn_drop(conn);
+		goto unlock;
+	}
+
 	hci_conn_hold(conn);
 	conn->disc_timeout = HCI_DISCONN_TIMEOUT;
 	hci_conn_drop(conn);
@@ -5274,8 +5295,8 @@ static u8 bredr_oob_data_present(struct hci_conn *conn)
 		 * available, then do not declare that OOB data is
 		 * present.
 		 */
-		if (!memcmp(data->rand256, ZERO_KEY, 16) ||
-		    !memcmp(data->hash256, ZERO_KEY, 16))
+		if (!crypto_memneq(data->rand256, ZERO_KEY, 16) ||
+		    !crypto_memneq(data->hash256, ZERO_KEY, 16))
 			return 0x00;
 
 		return 0x02;
@@ -5285,8 +5306,8 @@ static u8 bredr_oob_data_present(struct hci_conn *conn)
 	 * not supported by the hardware, then check that if
 	 * P-192 data values are present.
 	 */
-	if (!memcmp(data->rand192, ZERO_KEY, 16) ||
-	    !memcmp(data->hash192, ZERO_KEY, 16))
+	if (!crypto_memneq(data->rand192, ZERO_KEY, 16) ||
+	    !crypto_memneq(data->hash192, ZERO_KEY, 16))
 		return 0x00;
 
 	return 0x01;
@@ -5303,7 +5324,7 @@ static void hci_io_capa_request_evt(struct hci_dev *hdev, void *data,
 	hci_dev_lock(hdev);
 
 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
-	if (!conn)
+	if (!conn || !hci_conn_ssp_enabled(conn))
 		goto unlock;
 
 	hci_conn_hold(conn);
@@ -5550,7 +5571,7 @@ static void hci_simple_pair_complete_evt(struct hci_dev *hdev, void *data,
 	hci_dev_lock(hdev);
 
 	conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr);
-	if (!conn)
+	if (!conn || !hci_conn_ssp_enabled(conn))
 		goto unlock;
 
 	/* Reset the authentication requirement to unknown */
@@ -7021,6 +7042,14 @@ unlock:
 	hci_dev_unlock(hdev);
 }
 
+static int hci_iso_term_big_sync(struct hci_dev *hdev, void *data)
+{
+	u8 handle = PTR_UINT(data);
+
+	return hci_le_terminate_big_sync(hdev, handle,
+					 HCI_ERROR_LOCAL_HOST_TERM);
+}
+
 static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
					   struct sk_buff *skb)
 {
@@ -7065,16 +7094,17 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
 
 		rcu_read_lock();
 	}
 
+	rcu_read_unlock();
+
 	if (!ev->status && !i)
 		/* If no BISes have been connected for the BIG,
 		 * terminate. This is in case all bound connections
 		 * have been closed before the BIG creation
 		 * has completed.
 		 */
-		hci_le_terminate_big_sync(hdev, ev->handle,
-					  HCI_ERROR_LOCAL_HOST_TERM);
+		hci_cmd_sync_queue(hdev, hci_iso_term_big_sync,
+				   UINT_PTR(ev->handle), NULL);
 
-	rcu_read_unlock();
 	hci_dev_unlock(hdev);
 }

--- a/net/bluetooth/hci_sock.c
+++ b/net/bluetooth/hci_sock.c
@@ -488,7 +488,8 @@ static struct sk_buff *create_monitor_event(struct hci_dev *hdev, int event)
 		ni->type = hdev->dev_type;
 		ni->bus = hdev->bus;
 		bacpy(&ni->bdaddr, &hdev->bdaddr);
-		memcpy(ni->name, hdev->name, 8);
+		memcpy_and_pad(ni->name, sizeof(ni->name), hdev->name,
+			       strnlen(hdev->name, sizeof(ni->name)), '\0');
 
 		opcode = cpu_to_le16(HCI_MON_NEW_INDEX);
 		break;

--- a/net/bluetooth/hci_sync.c
+++ b/net/bluetooth/hci_sync.c
@@ -5369,6 +5369,7 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason)
 {
 	int err = 0;
 	u16 handle = conn->handle;
+	bool disconnect = false;
 	struct hci_conn *c;
 
 	switch (conn->state) {
@@ -5399,24 +5400,15 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason)
 		hci_dev_unlock(hdev);
 		return 0;
 	case BT_BOUND:
-		hci_dev_lock(hdev);
-		hci_conn_failed(conn, reason);
-		hci_dev_unlock(hdev);
-		return 0;
+		break;
 	default:
-		hci_dev_lock(hdev);
-		conn->state = BT_CLOSED;
-		hci_disconn_cfm(conn, reason);
-		hci_conn_del(conn);
-		hci_dev_unlock(hdev);
-		return 0;
+		disconnect = true;
+		break;
 	}
 
 	hci_dev_lock(hdev);
 
-	/* Check if the connection hasn't been cleanup while waiting
-	 * commands to complete.
-	 */
+	/* Check if the connection has been cleaned up concurrently */
 	c = hci_conn_hash_lookup_handle(hdev, handle);
 	if (!c || c != conn) {
 		err = 0;
@@ -5428,7 +5420,13 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason)
 	 * or in case of LE it was still scanning so it can be cleanup
 	 * safely.
 	 */
-	hci_conn_failed(conn, reason);
+	if (disconnect) {
+		conn->state = BT_CLOSED;
+		hci_disconn_cfm(conn, reason);
+		hci_conn_del(conn);
+	} else {
+		hci_conn_failed(conn, reason);
+	}
 
 unlock:
 	hci_dev_unlock(hdev);

--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -345,7 +345,6 @@ int netdev_name_node_alt_create(struct net_device *dev, const char *name)
 static void __netdev_name_node_alt_destroy(struct netdev_name_node *name_node)
 {
 	list_del(&name_node->list);
-	netdev_name_node_del(name_node);
 	kfree(name_node->name);
 	netdev_name_node_free(name_node);
 }
@@ -364,6 +363,8 @@ int netdev_name_node_alt_destroy(struct net_device *dev, const char *name)
 	if (name_node == dev->name_node || name_node->dev != dev)
 		return -EINVAL;
 
+	netdev_name_node_del(name_node);
+	synchronize_rcu();
 	__netdev_name_node_alt_destroy(name_node);
 
 	return 0;
@@ -380,6 +381,7 @@ static void netdev_name_node_alt_flush(struct net_device *dev)
 /* Device list insertion */
 static void list_netdevice(struct net_device *dev)
 {
+	struct netdev_name_node *name_node;
 	struct net *net = dev_net(dev);
 
 	ASSERT_RTNL();
@@ -390,6 +392,10 @@ static void list_netdevice(struct net_device *dev)
 	hlist_add_head_rcu(&dev->index_hlist,
 			   dev_index_hash(net, dev->ifindex));
 	write_unlock(&dev_base_lock);
+
+	netdev_for_each_altname(dev, name_node)
+		netdev_name_node_add(net, name_node);
+
 	/* We reserved the ifindex, this can't fail */
 	WARN_ON(xa_store(&net->dev_by_index, dev->ifindex, dev, GFP_KERNEL));
@@ -401,12 +407,16 @@ static void list_netdevice(struct net_device *dev)
  */
 static void unlist_netdevice(struct net_device *dev, bool lock)
 {
+	struct netdev_name_node *name_node;
 	struct net *net = dev_net(dev);
 
 	ASSERT_RTNL();
 
 	xa_erase(&net->dev_by_index, dev->ifindex);
 
+	netdev_for_each_altname(dev, name_node)
+		netdev_name_node_del(name_node);
+
 	/* Unlink dev from the device chain */
 	if (lock)
 		write_lock(&dev_base_lock);
@@ -1086,7 +1096,8 @@ static int __dev_alloc_name(struct net *net, const char *name, char *buf)
 		for_each_netdev(net, d) {
 			struct netdev_name_node *name_node;
-			list_for_each_entry(name_node, &d->name_node->list, list) {
+
+			netdev_for_each_altname(d, name_node) {
 				if (!sscanf(name_node->name, name, &i))
 					continue;
 				if (i < 0 || i >= max_netdevices)
@@ -1123,6 +1134,26 @@ static int __dev_alloc_name(struct net *net, const char *name, char *buf)
 	return -ENFILE;
 }
 
+static int dev_prep_valid_name(struct net *net, struct net_device *dev,
+			       const char *want_name, char *out_name)
+{
+	int ret;
+
+	if (!dev_valid_name(want_name))
+		return -EINVAL;
+
+	if (strchr(want_name, '%')) {
+		ret = __dev_alloc_name(net, want_name, out_name);
+
+		return ret < 0 ? ret : 0;
+	} else if (netdev_name_in_use(net, want_name)) {
+		return -EEXIST;
+	} else if (out_name != want_name) {
+		strscpy(out_name, want_name, IFNAMSIZ);
+	}
+
+	return 0;
+}
+
 static int dev_alloc_name_ns(struct net *net,
 			     struct net_device *dev,
 			     const char *name)
@@ -1160,19 +1191,13 @@ EXPORT_SYMBOL(dev_alloc_name);
 static int dev_get_valid_name(struct net *net, struct net_device *dev,
 			      const char *name)
 {
-	BUG_ON(!net);
-
-	if (!dev_valid_name(name))
-		return -EINVAL;
-
-	if (strchr(name, '%'))
-		return dev_alloc_name_ns(net, dev, name);
-	else if (netdev_name_in_use(net, name))
-		return -EEXIST;
-	else if (dev->name != name)
-		strscpy(dev->name, name, IFNAMSIZ);
-
-	return 0;
+	char buf[IFNAMSIZ];
+	int ret;
+
+	ret = dev_prep_valid_name(net, dev, name, buf);
+	if (ret >= 0)
+		strscpy(dev->name, buf, IFNAMSIZ);
+	return ret;
 }
 
 /**
@@ -11037,7 +11062,9 @@ EXPORT_SYMBOL(unregister_netdev);
 int __dev_change_net_namespace(struct net_device *dev, struct net *net,
 			       const char *pat, int new_ifindex)
 {
+	struct netdev_name_node *name_node;
 	struct net *net_old = dev_net(dev);
+	char new_name[IFNAMSIZ] = {};
 	int err, new_nsid;
 
 	ASSERT_RTNL();
@@ -11064,10 +11091,15 @@ int __dev_change_net_namespace(struct net_device *dev, struct net *net,
 		/* We get here if we can't use the current device name */
 		if (!pat)
 			goto out;
-		err = dev_get_valid_name(net, dev, pat);
+		err = dev_prep_valid_name(net, dev, pat, new_name);
 		if (err < 0)
 			goto out;
 	}
+	/* Check that none of the altnames conflicts. */
+	err = -EEXIST;
+	netdev_for_each_altname(dev, name_node)
+		if (netdev_name_in_use(net, name_node->name))
+			goto out;
 
 	/* Check that new_ifindex isn't used yet. */
 	if (new_ifindex) {
@@ -11135,6 +11167,9 @@ int __dev_change_net_namespace(struct net_device *dev, struct net *net,
 	kobject_uevent(&dev->dev.kobj, KOBJ_ADD);
 	netdev_adjacent_add_links(dev);
 
+	if (new_name[0]) /* Rename the netdev to prepared name */
+		strscpy(dev->name, new_name, IFNAMSIZ);
+
 	/* Fixup kobjects */
 	err = device_rename(&dev->dev, dev->name);
 	WARN_ON(err);

--- a/net/core/dev.h
+++ b/net/core/dev.h
@@ -62,6 +62,9 @@ struct netdev_name_node {
 int netdev_get_name(struct net *net, char *name, int ifindex);
 int dev_change_name(struct net_device *dev, const char *newname);
 
+#define netdev_for_each_altname(dev, namenode)				\
+	list_for_each_entry((namenode), &(dev)->name_node->list, list)
+
 int netdev_name_node_alt_create(struct net_device *dev, const char *name);
 int netdev_name_node_alt_destroy(struct net_device *dev, const char *name);

--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -669,19 +669,19 @@ static int pktgen_if_show(struct seq_file *seq, void *v)
 	seq_puts(seq, "     Flags: ");
 
 	for (i = 0; i < NR_PKT_FLAGS; i++) {
-		if (i == F_FLOW_SEQ)
+		if (i == FLOW_SEQ_SHIFT)
 			if (!pkt_dev->cflows)
 				continue;
 
-		if (pkt_dev->flags & (1 << i))
+		if (pkt_dev->flags & (1 << i)) {
 			seq_printf(seq, "%s ", pkt_flag_names[i]);
-		else if (i == F_FLOW_SEQ)
-			seq_puts(seq, "FLOW_RND ");
-
 #ifdef CONFIG_XFRM
-		if (i == F_IPSEC && pkt_dev->spi)
-			seq_printf(seq, "spi:%u", pkt_dev->spi);
+			if (i == IPSEC_SHIFT && pkt_dev->spi)
+				seq_printf(seq, "spi:%u ", pkt_dev->spi);
 #endif
+		} else if (i == FLOW_SEQ_SHIFT) {
+			seq_puts(seq, "FLOW_RND ");
+		}
 	}
 
 	seq_puts(seq, "\n");

--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -5503,13 +5503,11 @@ static unsigned int
 rtnl_offload_xstats_get_size_hw_s_info_one(const struct net_device *dev,
 					   enum netdev_offload_xstats_type type)
 {
-	bool enabled = netdev_offload_xstats_enabled(dev, type);
-
 	return nla_total_size(0) +
 		/* IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST */
 		nla_total_size(sizeof(u8)) +
 		/* IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED */
-		(enabled ? nla_total_size(sizeof(u8)) : 0) +
+		nla_total_size(sizeof(u8)) +
 		0;
 }

--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -117,7 +117,7 @@ EXPORT_SYMBOL(sk_stream_wait_close);
  */
 int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
 {
-	int err = 0;
+	int ret, err = 0;
 	long vm_wait = 0;
 	long current_timeo = *timeo_p;
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
@@ -142,11 +142,13 @@ int sk_stream_wait_memory(struct sock *sk, long *timeo_p)
 
 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
 		sk->sk_write_pending++;
-		sk_wait_event(sk, &current_timeo, READ_ONCE(sk->sk_err) ||
-			      (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) ||
-			      (sk_stream_memory_free(sk) &&
-			      !vm_wait), &wait);
+		ret = sk_wait_event(sk, &current_timeo, READ_ONCE(sk->sk_err) ||
+				    (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) ||
+				    (sk_stream_memory_free(sk) && !vm_wait),
+				    &wait);
 		sk->sk_write_pending--;
+		if (ret < 0)
+			goto do_error;
 
 		if (vm_wait) {
 			vm_wait -= current_timeo;

--- a/net/ethtool/bitset.c
+++ b/net/ethtool/bitset.c
@@ -431,10 +431,8 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
 			      ethnl_string_array_t names,
 			      struct netlink_ext_ack *extack, bool *mod)
 {
-	u32 *orig_bitmap, *saved_bitmap = NULL;
 	struct nlattr *bit_attr;
 	bool no_mask;
-	bool dummy;
 	int rem;
 	int ret;
 
@@ -450,22 +448,8 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
 	}
 
 	no_mask = tb[ETHTOOL_A_BITSET_NOMASK];
-	if (no_mask) {
-		unsigned int nwords = DIV_ROUND_UP(nbits, 32);
-		unsigned int nbytes = nwords * sizeof(u32);
-
-		/* The bitmap size is only the size of the map part without
-		 * its mask part.
-		 */
-		saved_bitmap = kcalloc(nwords, sizeof(u32), GFP_KERNEL);
-		if (!saved_bitmap)
-			return -ENOMEM;
-		memcpy(saved_bitmap, bitmap, nbytes);
-		ethnl_bitmap32_clear(bitmap, 0, nbits, &dummy);
-		orig_bitmap = saved_bitmap;
-	} else {
-		orig_bitmap = bitmap;
-	}
+	if (no_mask)
+		ethnl_bitmap32_clear(bitmap, 0, nbits, mod);
 
 	nla_for_each_nested(bit_attr, tb[ETHTOOL_A_BITSET_BITS], rem) {
 		bool old_val, new_val;
@@ -474,14 +458,13 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
 		if (nla_type(bit_attr) != ETHTOOL_A_BITSET_BITS_BIT) {
 			NL_SET_ERR_MSG_ATTR(extack, bit_attr,
 					    "only ETHTOOL_A_BITSET_BITS_BIT allowed in ETHTOOL_A_BITSET_BITS");
-			ret = -EINVAL;
-			goto out;
+			return -EINVAL;
 		}
 
 		ret = ethnl_parse_bit(&idx, &new_val, nbits, bit_attr, no_mask,
 				      names, extack);
 		if (ret < 0)
-			goto out;
-		old_val = orig_bitmap[idx / 32] & ((u32)1 << (idx % 32));
+			return ret;
+		old_val = bitmap[idx / 32] & ((u32)1 << (idx % 32));
 		if (new_val != old_val) {
 			if (new_val)
 				bitmap[idx / 32] |= ((u32)1 << (idx % 32));
@@ -491,10 +474,7 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
 		}
 	}
 
-	ret = 0;
-out:
-	kfree(saved_bitmap);
-	return ret;
+	return 0;
 }
 
 static int ethnl_compact_sanity_checks(unsigned int nbits,

--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -597,7 +597,6 @@ static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
 
 	add_wait_queue(sk_sleep(sk), &wait);
 	sk->sk_write_pending += writebias;
-	sk->sk_wait_pending++;
 
 	/* Basic assumption: if someone sets sk->sk_err, he _must_
 	 * change state of the socket from TCP_SYN_*.
@@ -613,7 +612,6 @@ static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)
 	}
 	remove_wait_queue(sk_sleep(sk), &wait);
 	sk->sk_write_pending -= writebias;
-	sk->sk_wait_pending--;
 	return timeo;
 }
 
@@ -642,6 +640,7 @@ int __inet_stream_connect(struct socket *sock, struct sockaddr *uaddr,
 		return -EINVAL;
 
 	if (uaddr->sa_family == AF_UNSPEC) {
+		sk->sk_disconnects++;
 		err = sk->sk_prot->disconnect(sk, flags);
 		sock->state = err ? SS_DISCONNECTING : SS_UNCONNECTED;
 		goto out;
@@ -696,6 +695,7 @@ int __inet_stream_connect(struct socket *sock, struct sockaddr *uaddr,
 		int writebias = (sk->sk_protocol == IPPROTO_TCP) &&
 				tcp_sk(sk)->fastopen_req &&
 				tcp_sk(sk)->fastopen_req->data ? 1 : 0;
+		int dis = sk->sk_disconnects;
 
 		/* Error code is set above */
 		if (!timeo || !inet_wait_for_connect(sk, timeo, writebias))
@@ -704,6 +704,11 @@ int __inet_stream_connect(struct socket *sock, struct sockaddr *uaddr,
 		err = sock_intr_errno(timeo);
 		if (signal_pending(current))
 			goto out;
+
+		if (dis != sk->sk_disconnects) {
+			err = -EPIPE;
+			goto out;
+		}
 	}
 
 	/* Connection was closed by RST, timeout, ICMP error
@@ -725,6 +730,7 @@ out:
 sock_error:
 	err = sock_error(sk) ? : -ECONNABORTED;
 	sock->state = SS_UNCONNECTED;
+	sk->sk_disconnects++;
 	if (sk->sk_prot->disconnect(sk, flags))
 		sock->state = SS_DISCONNECTING;
 	goto out;

--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -732,7 +732,9 @@ static inline int esp_remove_trailer(struct sk_buff *skb)
 		skb->csum = csum_block_sub(skb->csum, csumdiff,
 					   skb->len - trimlen);
 	}
-	pskb_trim(skb, skb->len - trimlen);
+	ret = pskb_trim(skb, skb->len - trimlen);
+	if (unlikely(ret))
+		return ret;
 
 	ret = nexthdr[1];

--- a/net/ipv4/fib_semantics.c
+++ b/net/ipv4/fib_semantics.c
@@ -1325,15 +1325,18 @@ __be32 fib_info_update_nhc_saddr(struct net *net, struct fib_nh_common *nhc,
 				 unsigned char scope)
 {
 	struct fib_nh *nh;
+	__be32 saddr;
 
 	if (nhc->nhc_family != AF_INET)
 		return inet_select_addr(nhc->nhc_dev, 0, scope);
 
 	nh = container_of(nhc, struct fib_nh, nh_common);
-	nh->nh_saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope);
-	nh->nh_saddr_genid = atomic_read(&net->ipv4.dev_addr_genid);
+	saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope);
 
-	return nh->nh_saddr;
+	WRITE_ONCE(nh->nh_saddr, saddr);
+	WRITE_ONCE(nh->nh_saddr_genid, atomic_read(&net->ipv4.dev_addr_genid));
+
+	return saddr;
 }
 
 __be32 fib_result_prefsrc(struct net *net, struct fib_result *res)
@@ -1347,8 +1350,9 @@ __be32 fib_result_prefsrc(struct net *net, struct fib_result *res)
 		struct fib_nh *nh;
 
 		nh = container_of(nhc, struct fib_nh, nh_common);
-		if (nh->nh_saddr_genid == atomic_read(&net->ipv4.dev_addr_genid))
-			return nh->nh_saddr;
+		if (READ_ONCE(nh->nh_saddr_genid) ==
+		    atomic_read(&net->ipv4.dev_addr_genid))
+			return READ_ONCE(nh->nh_saddr);
 	}
 
 	return fib_info_update_nhc_saddr(net, nhc, res->fi->fib_scope);

--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1145,7 +1145,6 @@ struct sock *inet_csk_clone_lock(const struct sock *sk,
 	if (newsk) {
 		struct inet_connection_sock *newicsk = inet_csk(newsk);
 
-		newsk->sk_wait_pending = 0;
 		inet_sk_set_state(newsk, TCP_SYN_RECV);
 		newicsk->icsk_bind_hash = NULL;
 		newicsk->icsk_bind2_hash = NULL;

--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -149,8 +149,14 @@ static bool inet_bind2_bucket_addr_match(const struct inet_bind2_bucket *tb2,
 					 const struct sock *sk)
 {
 #if IS_ENABLED(CONFIG_IPV6)
-	if (sk->sk_family != tb2->family)
-		return false;
+	if (sk->sk_family != tb2->family) {
+		if (sk->sk_family == AF_INET)
+			return ipv6_addr_v4mapped(&tb2->v6_rcv_saddr) &&
+				tb2->v6_rcv_saddr.s6_addr32[3] == sk->sk_rcv_saddr;
+
+		return ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr) &&
+			sk->sk_v6_rcv_saddr.s6_addr32[3] == tb2->rcv_saddr;
+	}
 
 	if (sk->sk_family == AF_INET6)
 		return ipv6_addr_equal(&tb2->v6_rcv_saddr,
@@ -819,19 +825,7 @@ static bool inet_bind2_bucket_match(const struct inet_bind2_bucket *tb,
 	    tb->l3mdev != l3mdev)
 		return false;
 
-#if IS_ENABLED(CONFIG_IPV6)
-	if (sk->sk_family != tb->family) {
-		if (sk->sk_family == AF_INET)
-			return ipv6_addr_v4mapped(&tb->v6_rcv_saddr) &&
-				tb->v6_rcv_saddr.s6_addr32[3] == sk->sk_rcv_saddr;
-
-		return false;
-	}
-
-	if (sk->sk_family == AF_INET6)
-		return ipv6_addr_equal(&tb->v6_rcv_saddr, &sk->sk_v6_rcv_saddr);
-#endif
-	return tb->rcv_saddr == sk->sk_rcv_saddr;
+	return inet_bind2_bucket_addr_match(tb, sk);
 }
 
 bool inet_bind2_bucket_match_addr_any(const struct inet_bind2_bucket *tb, const struct net *net,

--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -831,7 +831,9 @@ ssize_t tcp_splice_read(struct socket *sock, loff_t *ppos,
 				 */
 				if (!skb_queue_empty(&sk->sk_receive_queue))
 					break;
-				sk_wait_data(sk, &timeo, NULL);
+				ret = sk_wait_data(sk, &timeo, NULL);
+				if (ret < 0)
+					break;
 				if (signal_pending(current)) {
 					ret = sock_intr_errno(timeo);
 					break;
@@ -2442,7 +2444,11 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
 			__sk_flush_backlog(sk);
 		} else {
 			tcp_cleanup_rbuf(sk, copied);
-			sk_wait_data(sk, &timeo, last);
+			err = sk_wait_data(sk, &timeo, last);
+			if (err < 0) {
+				err = copied ? : err;
+				goto out;
+			}
 		}
 
 		if ((flags & MSG_PEEK) &&
@@ -2966,12 +2972,6 @@ int tcp_disconnect(struct sock *sk, int flags)
 	int old_state = sk->sk_state;
 	u32 seq;
 
-	/* Deny disconnect if other threads are blocked in sk_wait_event()
-	 * or inet_wait_for_connect().
-	 */
-	if (sk->sk_wait_pending)
-		return -EBUSY;
-
 	if (old_state != TCP_CLOSE)
 		tcp_set_state(sk, TCP_CLOSE);

--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -307,6 +307,10 @@ msg_bytes_ready:
 		}
 
 		data = tcp_msg_wait_data(sk, psock, timeo);
+		if (data < 0) {
+			copied = data;
+			goto unlock;
+		}
 		if (data && !sk_psock_queue_empty(psock))
 			goto msg_bytes_ready;
 		copied = -EAGAIN;
@@ -317,6 +321,8 @@ out:
 	tcp_rcv_space_adjust(sk);
 	if (copied > 0)
 		__tcp_cleanup_rbuf(sk, copied);
+
+unlock:
 	release_sock(sk);
 	sk_psock_put(sk, psock);
 	return copied;
@@ -351,6 +357,10 @@ msg_bytes_ready:
 		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
 		data = tcp_msg_wait_data(sk, psock, timeo);
+		if (data < 0) {
+			ret = data;
+			goto unlock;
+		}
 		if (data) {
 			if (!sk_psock_queue_empty(psock))
 				goto msg_bytes_ready;
@@ -361,6 +371,8 @@ msg_bytes_ready:
 		copied = -EAGAIN;
 	}
 	ret = copied;
+
+unlock:
 	release_sock(sk);
 	sk_psock_put(sk, psock);
 	return ret;


@@ -1869,6 +1869,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
 #ifdef CONFIG_TLS_DEVICE
 		    tail->decrypted != skb->decrypted ||
 #endif
+		    !mptcp_skb_can_collapse(tail, skb) ||
 		    thtail->doff != th->doff ||
 		    memcmp(thtail + 1, th + 1, hdrlen - sizeof(*th)))
 			goto no_coalesce;


@@ -2542,6 +2542,18 @@ static bool tcp_pacing_check(struct sock *sk)
 	return true;
 }
 
+static bool tcp_rtx_queue_empty_or_single_skb(const struct sock *sk)
+{
+	const struct rb_node *node = sk->tcp_rtx_queue.rb_node;
+
+	/* No skb in the rtx queue. */
+	if (!node)
+		return true;
+
+	/* Only one skb in rtx queue. */
+	return !node->rb_left && !node->rb_right;
+}
+
 /* TCP Small Queues :
  * Control number of packets in qdisc/devices to two packets / or ~1 ms.
  * (These limits are doubled for retransmits)
@@ -2579,12 +2591,12 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
 		limit += extra_bytes;
 	}
 	if (refcount_read(&sk->sk_wmem_alloc) > limit) {
-		/* Always send skb if rtx queue is empty.
+		/* Always send skb if rtx queue is empty or has one skb.
 		 * No need to wait for TX completion to call us back,
 		 * after softirq/tasklet schedule.
 		 * This helps when TX completions are delayed too much.
 		 */
-		if (tcp_rtx_queue_empty(sk))
+		if (tcp_rtx_queue_empty_or_single_skb(sk))
 			return false;
 
 		set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags);
@@ -2788,7 +2800,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	struct tcp_sock *tp = tcp_sk(sk);
-	u32 timeout, rto_delta_us;
+	u32 timeout, timeout_us, rto_delta_us;
 	int early_retrans;
 
 	/* Don't do any loss probe on a Fast Open connection before 3WHS
@@ -2812,11 +2824,12 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
 	 * sample is available then probe after TCP_TIMEOUT_INIT.
 	 */
 	if (tp->srtt_us) {
-		timeout = usecs_to_jiffies(tp->srtt_us >> 2);
+		timeout_us = tp->srtt_us >> 2;
 		if (tp->packets_out == 1)
-			timeout += TCP_RTO_MIN;
+			timeout_us += tcp_rto_min_us(sk);
 		else
-			timeout += TCP_TIMEOUT_MIN;
+			timeout_us += TCP_TIMEOUT_MIN_US;
+		timeout = usecs_to_jiffies(timeout_us);
 	} else {
 		timeout = TCP_TIMEOUT_INIT;
 	}


@@ -104,7 +104,7 @@ bool tcp_rack_mark_lost(struct sock *sk)
 	tp->rack.advanced = 0;
 	tcp_rack_detect_loss(sk, &timeout);
 	if (timeout) {
-		timeout = usecs_to_jiffies(timeout) + TCP_TIMEOUT_MIN;
+		timeout = usecs_to_jiffies(timeout + TCP_TIMEOUT_MIN_US);
 		inet_csk_reset_xmit_timer(sk, ICSK_TIME_REO_TIMEOUT,
 					  timeout, inet_csk(sk)->icsk_rto);
 	}


@@ -770,7 +770,9 @@ static inline int esp_remove_trailer(struct sk_buff *skb)
 		skb->csum = csum_block_sub(skb->csum, csumdiff,
 					   skb->len - trimlen);
 	}
-	pskb_trim(skb, skb->len - trimlen);
+	ret = pskb_trim(skb, skb->len - trimlen);
+	if (unlikely(ret))
+		return ret;
 
 	ret = nexthdr[1];


@@ -117,10 +117,10 @@ static void xfrm6_dst_destroy(struct dst_entry *dst)
 {
 	struct xfrm_dst *xdst = (struct xfrm_dst *)dst;
 
-	if (likely(xdst->u.rt6.rt6i_idev))
-		in6_dev_put(xdst->u.rt6.rt6i_idev);
 	dst_destroy_metrics_generic(dst);
 	rt6_uncached_list_del(&xdst->u.rt6);
+	if (likely(xdst->u.rt6.rt6i_idev))
+		in6_dev_put(xdst->u.rt6.rt6i_idev);
 	xfrm_dst_destroy(xdst);
 }


@@ -912,7 +912,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
 		 */
 		if (ieee80211_key_identical(sdata, old_key, key)) {
 			ret = -EALREADY;
-			goto unlock;
+			goto out;
 		}
 
 		key->local = sdata->local;
@@ -940,7 +940,6 @@ int ieee80211_key_link(struct ieee80211_key *key,
 
  out:
 	ieee80211_key_free_unused(key);
- unlock:
 	mutex_unlock(&sdata->local->key_mtx);
 	return ret;


@@ -1298,7 +1298,7 @@ alloc_skb:
 		if (copy == 0) {
 			u64 snd_una = READ_ONCE(msk->snd_una);
 
-			if (snd_una != msk->snd_nxt) {
+			if (snd_una != msk->snd_nxt || tcp_write_queue_tail(ssk)) {
 				tcp_remove_empty_skb(ssk);
 				return 0;
 			}
@@ -1306,11 +1306,6 @@ alloc_skb:
 			zero_window_probe = true;
 			data_seq = snd_una - 1;
 			copy = 1;
-
-			/* all mptcp-level data is acked, no skbs should be present into the
-			 * ssk write queue
-			 */
-			WARN_ON_ONCE(reuse_skb);
 		}
 
 		copy = min_t(size_t, copy, info->limit - info->sent);
@@ -1339,7 +1334,6 @@ alloc_skb:
 	if (reuse_skb) {
 		TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH;
 		mpext->data_len += copy;
-		WARN_ON_ONCE(zero_window_probe);
 		goto out;
 	}
@@ -2354,6 +2348,26 @@ bool __mptcp_retransmit_pending_data(struct sock *sk)
 #define MPTCP_CF_PUSH		BIT(1)
 #define MPTCP_CF_FASTCLOSE	BIT(2)
 
+/* be sure to send a reset only if the caller asked for it, also
+ * clean completely the subflow status when the subflow reaches
+ * TCP_CLOSE state
+ */
+static void __mptcp_subflow_disconnect(struct sock *ssk,
+				       struct mptcp_subflow_context *subflow,
+				       unsigned int flags)
+{
+	if (((1 << ssk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||
+	    (flags & MPTCP_CF_FASTCLOSE)) {
+		/* The MPTCP code never wait on the subflow sockets, TCP-level
+		 * disconnect should never fail
+		 */
+		WARN_ON_ONCE(tcp_disconnect(ssk, 0));
+		mptcp_subflow_ctx_reset(subflow);
+	} else {
+		tcp_shutdown(ssk, SEND_SHUTDOWN);
+	}
+}
+
 /* subflow sockets can be either outgoing (connect) or incoming
  * (accept).
  *
@@ -2391,7 +2405,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
 
 	if ((flags & MPTCP_CF_FASTCLOSE) && !__mptcp_check_fallback(msk)) {
-		/* be sure to force the tcp_disconnect() path,
+		/* be sure to force the tcp_close path
 		 * to generate the egress reset
 		 */
 		ssk->sk_lingertime = 0;
@@ -2401,11 +2415,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk);
 	if (!dispose_it) {
-		/* The MPTCP code never wait on the subflow sockets, TCP-level
-		 * disconnect should never fail
-		 */
-		WARN_ON_ONCE(tcp_disconnect(ssk, 0));
-		mptcp_subflow_ctx_reset(subflow);
+		__mptcp_subflow_disconnect(ssk, subflow, flags);
 		release_sock(ssk);
 
 		goto out;
@@ -3098,12 +3108,6 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
-	/* Deny disconnect if other threads are blocked in sk_wait_event()
-	 * or inet_wait_for_connect().
-	 */
-	if (sk->sk_wait_pending)
-		return -EBUSY;
-
 	/* We are on the fastopen error path. We can't call straight into the
 	 * subflows cleanup code due to lock nesting (we are already under
 	 * msk->firstsocket lock).
@@ -3173,7 +3177,6 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
 	inet_sk(nsk)->pinet6 = mptcp_inet6_sk(nsk);
 #endif
 
-	nsk->sk_wait_pending = 0;
 	__mptcp_init_sock(nsk);
 
 	msk = mptcp_sk(nsk);


@@ -3166,7 +3166,7 @@ int nft_expr_inner_parse(const struct nft_ctx *ctx, const struct nlattr *nla,
 	if (err < 0)
 		return err;
 
-	if (!tb[NFTA_EXPR_DATA])
+	if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME])
 		return -EINVAL;
 
 	type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]);
@@ -5556,7 +5556,6 @@ static int nf_tables_fill_setelem(struct sk_buff *skb,
 	const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
 	unsigned char *b = skb_tail_pointer(skb);
 	struct nlattr *nest;
-	u64 timeout = 0;
 
 	nest = nla_nest_start_noflag(skb, NFTA_LIST_ELEM);
 	if (nest == NULL)
@@ -5592,15 +5591,11 @@ static int nf_tables_fill_setelem(struct sk_buff *skb,
 			 htonl(*nft_set_ext_flags(ext))))
 		goto nla_put_failure;
 
-	if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT)) {
-		timeout = *nft_set_ext_timeout(ext);
-		if (nla_put_be64(skb, NFTA_SET_ELEM_TIMEOUT,
-				 nf_jiffies64_to_msecs(timeout),
-				 NFTA_SET_ELEM_PAD))
-			goto nla_put_failure;
-	} else if (set->flags & NFT_SET_TIMEOUT) {
-		timeout = READ_ONCE(set->timeout);
-	}
+	if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) &&
+	    nla_put_be64(skb, NFTA_SET_ELEM_TIMEOUT,
+			 nf_jiffies64_to_msecs(*nft_set_ext_timeout(ext)),
+			 NFTA_SET_ELEM_PAD))
+		goto nla_put_failure;
 
 	if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) {
 		u64 expires, now = get_jiffies_64();
@@ -5615,9 +5610,6 @@ static int nf_tables_fill_setelem(struct sk_buff *skb,
 				 nf_jiffies64_to_msecs(expires),
 				 NFTA_SET_ELEM_PAD))
 			goto nla_put_failure;
-
-		if (reset)
-			*nft_set_ext_expiration(ext) = now + timeout;
 	}
 
 	if (nft_set_ext_exists(ext, NFT_SET_EXT_USERDATA)) {
@@ -7615,6 +7607,16 @@ nla_put_failure:
 	return -1;
 }
 
+static void audit_log_obj_reset(const struct nft_table *table,
+				unsigned int base_seq, unsigned int nentries)
+{
+	char *buf = kasprintf(GFP_ATOMIC, "%s:%u", table->name, base_seq);
+
+	audit_log_nfcfg(buf, table->family, nentries,
+			AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC);
+	kfree(buf);
+}
+
 struct nft_obj_filter {
 	char *table;
 	u32 type;
@@ -7629,8 +7631,10 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
 	struct net *net = sock_net(skb->sk);
 	int family = nfmsg->nfgen_family;
 	struct nftables_pernet *nft_net;
+	unsigned int entries = 0;
 	struct nft_object *obj;
 	bool reset = false;
+	int rc = 0;
 
 	if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET)
 		reset = true;
@@ -7643,6 +7647,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
 		if (family != NFPROTO_UNSPEC && family != table->family)
 			continue;
 
+		entries = 0;
 		list_for_each_entry_rcu(obj, &table->objects, list) {
 			if (!nft_is_active(net, obj))
 				goto cont;
@@ -7658,34 +7663,27 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
 			    filter->type != NFT_OBJECT_UNSPEC &&
 			    obj->ops->type->type != filter->type)
 				goto cont;
-			if (reset) {
-				char *buf = kasprintf(GFP_ATOMIC,
-						      "%s:%u",
-						      table->name,
-						      nft_net->base_seq);
-
-				audit_log_nfcfg(buf,
-						family,
-						obj->handle,
-						AUDIT_NFT_OP_OBJ_RESET,
-						GFP_ATOMIC);
-				kfree(buf);
-			}
 
-			if (nf_tables_fill_obj_info(skb, net, NETLINK_CB(cb->skb).portid,
-						    cb->nlh->nlmsg_seq,
-						    NFT_MSG_NEWOBJ,
-						    NLM_F_MULTI | NLM_F_APPEND,
-						    table->family, table,
-						    obj, reset) < 0)
-				goto done;
+			rc = nf_tables_fill_obj_info(skb, net,
+						     NETLINK_CB(cb->skb).portid,
+						     cb->nlh->nlmsg_seq,
+						     NFT_MSG_NEWOBJ,
+						     NLM_F_MULTI | NLM_F_APPEND,
+						     table->family, table,
+						     obj, reset);
+			if (rc < 0)
+				break;
 
+			entries++;
 			nl_dump_check_consistent(cb, nlmsg_hdr(skb));
 cont:
 			idx++;
 		}
+		if (reset && entries)
+			audit_log_obj_reset(table, nft_net->base_seq, entries);
+		if (rc < 0)
+			break;
 	}
-done:
 	rcu_read_unlock();
 
 	cb->args[0] = idx;
@@ -7790,7 +7788,7 @@ static int nf_tables_getobj(struct sk_buff *skb, const struct nfnl_info *info,
 			audit_log_nfcfg(buf,
 					family,
-					obj->handle,
+					1,
 					AUDIT_NFT_OP_OBJ_RESET,
 					GFP_ATOMIC);
 			kfree(buf);


@@ -698,8 +698,8 @@ nfulnl_log_packet(struct net *net,
 	unsigned int plen = 0;
 	struct nfnl_log_net *log = nfnl_log_pernet(net);
 	const struct nfnl_ct_hook *nfnl_ct = NULL;
+	enum ip_conntrack_info ctinfo = 0;
 	struct nf_conn *ct = NULL;
-	enum ip_conntrack_info ctinfo;
 
 	if (li_user && li_user->type == NF_LOG_TYPE_ULOG)
 		li = li_user;


@@ -298,6 +298,7 @@ static int nft_inner_init(const struct nft_ctx *ctx,
 	int err;
 
 	if (!tb[NFTA_INNER_FLAGS] ||
+	    !tb[NFTA_INNER_NUM] ||
 	    !tb[NFTA_INNER_HDRSIZE] ||
 	    !tb[NFTA_INNER_TYPE] ||
 	    !tb[NFTA_INNER_EXPR])


@@ -179,7 +179,7 @@ void nft_payload_eval(const struct nft_expr *expr,
 	switch (priv->base) {
 	case NFT_PAYLOAD_LL_HEADER:
-		if (!skb_mac_header_was_set(skb))
+		if (!skb_mac_header_was_set(skb) || skb_mac_header_len(skb) == 0)
 			goto err;
 
 		if (skb_vlan_tag_present(skb) &&


@@ -147,7 +147,7 @@ struct nft_pipapo_match {
 	unsigned long * __percpu *scratch;
 	size_t bsize_max;
 	struct rcu_head rcu;
-	struct nft_pipapo_field f[];
+	struct nft_pipapo_field f[] __counted_by(field_count);
 };
 
 /**


@@ -568,6 +568,8 @@ static void *nft_rbtree_deactivate(const struct net *net,
 				   nft_rbtree_interval_end(this)) {
 				parent = parent->rb_right;
 				continue;
+			} else if (nft_set_elem_expired(&rbe->ext)) {
+				break;
 			} else if (!nft_set_elem_active(&rbe->ext, genmask)) {
 				parent = parent->rb_left;
 				continue;


@@ -151,6 +151,8 @@ static int send_acknowledge(struct nci_spi *nspi, u8 acknowledge)
 	int ret;
 
 	skb = nci_skb_alloc(nspi->ndev, 0, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
 
 	/* add the NCI SPI header to the start of the buffer */
 	hdr = skb_push(skb, NCI_SPI_HDR_LEN);


@@ -1180,7 +1180,6 @@ static int rfkill_fop_open(struct inode *inode, struct file *file)
 	init_waitqueue_head(&data->read_wait);
 
 	mutex_lock(&rfkill_global_mutex);
-	mutex_lock(&data->mtx);
 	/*
 	 * start getting events from elsewhere but hold mtx to get
 	 * startup events added first
 	 */
@@ -1192,10 +1191,11 @@ static int rfkill_fop_open(struct inode *inode, struct file *file)
 			goto free;
 		rfkill_sync(rfkill);
 		rfkill_fill_event(&ev->ev, rfkill, RFKILL_OP_ADD);
+		mutex_lock(&data->mtx);
 		list_add_tail(&ev->list, &data->events);
+		mutex_unlock(&data->mtx);
 	}
 	list_add(&data->list, &rfkill_fds);
-	mutex_unlock(&data->mtx);
 	mutex_unlock(&rfkill_global_mutex);
 
 	file->private_data = data;
@@ -1203,7 +1203,6 @@ static int rfkill_fop_open(struct inode *inode, struct file *file)
 	return stream_open(inode, file);
 
  free:
-	mutex_unlock(&data->mtx);
 	mutex_unlock(&rfkill_global_mutex);
 	mutex_destroy(&data->mtx);
 	list_for_each_entry_safe(ev, tmp, &data->events, list)


@@ -108,13 +108,13 @@ static int rfkill_gpio_probe(struct platform_device *pdev)
 
 	rfkill->clk = devm_clk_get(&pdev->dev, NULL);
 
-	gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW);
+	gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_ASIS);
 	if (IS_ERR(gpio))
 		return PTR_ERR(gpio);
 
 	rfkill->reset_gpio = gpio;
 
-	gpio = devm_gpiod_get_optional(&pdev->dev, "shutdown", GPIOD_OUT_LOW);
+	gpio = devm_gpiod_get_optional(&pdev->dev, "shutdown", GPIOD_ASIS);
 	if (IS_ERR(gpio))
 		return PTR_ERR(gpio);


@@ -902,6 +902,14 @@ hfsc_change_usc(struct hfsc_class *cl, struct tc_service_curve *usc,
 	cl->cl_flags |= HFSC_USC;
 }
 
+static void
+hfsc_upgrade_rt(struct hfsc_class *cl)
+{
+	cl->cl_fsc = cl->cl_rsc;
+	rtsc_init(&cl->cl_virtual, &cl->cl_fsc, cl->cl_vt, cl->cl_total);
+	cl->cl_flags |= HFSC_FSC;
+}
+
 static const struct nla_policy hfsc_policy[TCA_HFSC_MAX + 1] = {
 	[TCA_HFSC_RSC]	= { .len = sizeof(struct tc_service_curve) },
 	[TCA_HFSC_FSC]	= { .len = sizeof(struct tc_service_curve) },
@@ -1011,10 +1019,6 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
 		if (parent == NULL)
 			return -ENOENT;
 	}
-	if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) {
-		NL_SET_ERR_MSG(extack, "Invalid parent - parent class must have FSC");
-		return -EINVAL;
-	}
 
 	if (classid == 0 || TC_H_MAJ(classid ^ sch->handle) != 0)
 		return -EINVAL;
@@ -1065,6 +1069,12 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
 	cl->cf_tree = RB_ROOT;
 
 	sch_tree_lock(sch);
+	/* Check if the inner class is a misconfigured 'rt' */
+	if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) {
+		NL_SET_ERR_MSG(extack,
+			       "Forced curve change on parent 'rt' to 'sc'");
+		hfsc_upgrade_rt(parent);
+	}
 	qdisc_class_hash_insert(&q->clhash, &cl->cl_common);
 	list_add_tail(&cl->siblings, &parent->children);
 	if (parent->level == 0)


@@ -1201,6 +1201,7 @@ static int smc_connect_rdma_v2_prepare(struct smc_sock *smc,
 		(struct smc_clc_msg_accept_confirm_v2 *)aclc;
 	struct smc_clc_first_contact_ext *fce =
 		smc_get_clc_first_contact_ext(clc_v2, false);
+	struct net *net = sock_net(&smc->sk);
 	int rc;
 
 	if (!ini->first_contact_peer || aclc->hdr.version == SMC_V1)
@@ -1210,7 +1211,7 @@ static int smc_connect_rdma_v2_prepare(struct smc_sock *smc,
 		memcpy(ini->smcrv2.nexthop_mac, &aclc->r0.lcl.mac, ETH_ALEN);
 		ini->smcrv2.uses_gateway = false;
 	} else {
-		if (smc_ib_find_route(smc->clcsock->sk->sk_rcv_saddr,
+		if (smc_ib_find_route(net, smc->clcsock->sk->sk_rcv_saddr,
 				      smc_ib_gid_to_ipv4(aclc->r0.lcl.gid),
 				      ini->smcrv2.nexthop_mac,
 				      &ini->smcrv2.uses_gateway))
@@ -2361,7 +2362,7 @@ static int smc_listen_find_device(struct smc_sock *new_smc,
 		smc_find_ism_store_rc(rc, ini);
 		return (!rc) ? 0 : ini->rc;
 	}
-	return SMC_CLC_DECL_NOSMCDEV;
+	return prfx_rc;
 }
 
 /* listen worker: finish RDMA setup */


@@ -193,7 +193,7 @@ bool smc_ib_port_active(struct smc_ib_device *smcibdev, u8 ibport)
 	return smcibdev->pattr[ibport - 1].state == IB_PORT_ACTIVE;
 }
 
-int smc_ib_find_route(__be32 saddr, __be32 daddr,
+int smc_ib_find_route(struct net *net, __be32 saddr, __be32 daddr,
 		      u8 nexthop_mac[], u8 *uses_gateway)
 {
 	struct neighbour *neigh = NULL;
@@ -205,7 +205,7 @@ int smc_ib_find_route(__be32 saddr, __be32 daddr,
 	if (daddr == cpu_to_be32(INADDR_NONE))
 		goto out;
-	rt = ip_route_output_flow(&init_net, &fl4, NULL);
+	rt = ip_route_output_flow(net, &fl4, NULL);
 	if (IS_ERR(rt))
 		goto out;
 	if (rt->rt_uses_gateway && rt->rt_gw_family != AF_INET)
@@ -235,6 +235,7 @@ static int smc_ib_determine_gid_rcu(const struct net_device *ndev,
 	if (smcrv2 && attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP &&
 	    smc_ib_gid_to_ipv4((u8 *)&attr->gid) != cpu_to_be32(INADDR_NONE)) {
 		struct in_device *in_dev = __in_dev_get_rcu(ndev);
+		struct net *net = dev_net(ndev);
 		const struct in_ifaddr *ifa;
 		bool subnet_match = false;
 
@@ -248,7 +249,7 @@ static int smc_ib_determine_gid_rcu(const struct net_device *ndev,
 		}
 		if (!subnet_match)
 			goto out;
-		if (smcrv2->daddr && smc_ib_find_route(smcrv2->saddr,
+		if (smcrv2->daddr && smc_ib_find_route(net, smcrv2->saddr,
 						       smcrv2->daddr,
 						       smcrv2->nexthop_mac,
 						       &smcrv2->uses_gateway))


@@ -112,7 +112,7 @@ void smc_ib_sync_sg_for_device(struct smc_link *lnk,
 int smc_ib_determine_gid(struct smc_ib_device *smcibdev, u8 ibport,
 			 unsigned short vlan_id, u8 gid[], u8 *sgid_index,
 			 struct smc_init_info_smcrv2 *smcrv2);
-int smc_ib_find_route(__be32 saddr, __be32 daddr,
+int smc_ib_find_route(struct net *net, __be32 saddr, __be32 daddr,
 		      u8 nexthop_mac[], u8 *uses_gateway);
 bool smc_ib_is_valid_local_systemid(void);
 int smcr_nl_get_device(struct sk_buff *skb, struct netlink_callback *cb);


@@ -139,8 +139,8 @@ void update_sk_prot(struct sock *sk, struct tls_context *ctx)
 
 int wait_on_pending_writer(struct sock *sk, long *timeo)
 {
-	int rc = 0;
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+	int ret, rc = 0;
 
 	add_wait_queue(sk_sleep(sk), &wait);
 	while (1) {
@@ -154,9 +154,13 @@ int wait_on_pending_writer(struct sock *sk, long *timeo)
 			break;
 		}
 
-		if (sk_wait_event(sk, timeo,
-				  !READ_ONCE(sk->sk_write_pending), &wait))
+		ret = sk_wait_event(sk, timeo,
+				    !READ_ONCE(sk->sk_write_pending), &wait);
+		if (ret) {
+			if (ret < 0)
+				rc = ret;
 			break;
+		}
 	}
 	remove_wait_queue(sk_sleep(sk), &wait);
 	return rc;


@@ -1291,6 +1291,7 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+	int ret = 0;
 	long timeo;
 
 	timeo = sock_rcvtimeo(sk, nonblock);
@@ -1302,6 +1303,9 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
 		if (sk->sk_err)
 			return sock_error(sk);
 
+		if (ret < 0)
+			return ret;
+
 		if (!skb_queue_empty(&sk->sk_receive_queue)) {
 			tls_strp_check_rcv(&ctx->strp);
 			if (tls_strp_msg_ready(ctx))
@@ -1320,10 +1324,10 @@ tls_rx_rec_wait(struct sock *sk, struct sk_psock *psock, bool nonblock,
 		released = true;
 		add_wait_queue(sk_sleep(sk), &wait);
 		sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
-		sk_wait_event(sk, &timeo,
-			      tls_strp_msg_ready(ctx) ||
-			      !sk_psock_queue_empty(psock),
-			      &wait);
+		ret = sk_wait_event(sk, &timeo,
+				    tls_strp_msg_ready(ctx) ||
+				    !sk_psock_queue_empty(psock),
+				    &wait);
 		sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
 		remove_wait_queue(sk_sleep(sk), &wait);
 
@@ -1852,6 +1856,7 @@ static int tls_rx_reader_acquire(struct sock *sk, struct tls_sw_context_rx *ctx,
 			       bool nonblock)
 {
 	long timeo;
+	int ret;
 
 	timeo = sock_rcvtimeo(sk, nonblock);
 
@@ -1861,14 +1866,16 @@ static int tls_rx_reader_acquire(struct sock *sk, struct tls_sw_context_rx *ctx,
 		ctx->reader_contended = 1;
 
 		add_wait_queue(&ctx->wq, &wait);
-		sk_wait_event(sk, &timeo,
-			      !READ_ONCE(ctx->reader_present), &wait);
+		ret = sk_wait_event(sk, &timeo,
+				    !READ_ONCE(ctx->reader_present), &wait);
 		remove_wait_queue(&ctx->wq, &wait);
 
 		if (timeo <= 0)
 			return -EAGAIN;
 		if (signal_pending(current))
 			return sock_intr_errno(timeo);
+		if (ret < 0)
+			return ret;
 	}
 
 	WRITE_ONCE(ctx->reader_present, 1);


@@ -1622,7 +1622,7 @@ void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work)
 	list_add_tail(&work->entry, &rdev->wiphy_work_list);
 	spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
 
-	schedule_work(&rdev->wiphy_work);
+	queue_work(system_unbound_wq, &rdev->wiphy_work);
 }
 EXPORT_SYMBOL_GPL(wiphy_work_queue);


@ -380,8 +380,8 @@ static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
skb->dev = dev; skb->dev = dev;
if (err) { if (err) {
dev->stats.rx_errors++; DEV_STATS_INC(dev, rx_errors);
dev->stats.rx_dropped++; DEV_STATS_INC(dev, rx_dropped);
return 0; return 0;
} }
@ -426,7 +426,6 @@ static int
xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl) xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
{ {
struct xfrm_if *xi = netdev_priv(dev); struct xfrm_if *xi = netdev_priv(dev);
struct net_device_stats *stats = &xi->dev->stats;
struct dst_entry *dst = skb_dst(skb); struct dst_entry *dst = skb_dst(skb);
unsigned int length = skb->len; unsigned int length = skb->len;
struct net_device *tdev; struct net_device *tdev;
@ -473,7 +472,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
tdev = dst->dev; tdev = dst->dev;
if (tdev == dev) { if (tdev == dev) {
stats->collisions++; DEV_STATS_INC(dev, collisions);
net_warn_ratelimited("%s: Local routing loop detected!\n", net_warn_ratelimited("%s: Local routing loop detected!\n",
dev->name); dev->name);
goto tx_err_dst_release; goto tx_err_dst_release;
@ -512,13 +511,13 @@ xmit:
if (net_xmit_eval(err) == 0) { if (net_xmit_eval(err) == 0) {
 		dev_sw_netstats_tx_add(dev, 1, length);
 	} else {
-		stats->tx_errors++;
-		stats->tx_aborted_errors++;
+		DEV_STATS_INC(dev, tx_errors);
+		DEV_STATS_INC(dev, tx_aborted_errors);
 	}

 	return 0;
 tx_err_link_failure:
-	stats->tx_carrier_errors++;
+	DEV_STATS_INC(dev, tx_carrier_errors);
 	dst_link_failure(skb);
 tx_err_dst_release:
 	dst_release(dst);
@@ -528,7 +527,6 @@ tx_err_dst_release:
 static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xfrm_if *xi = netdev_priv(dev);
-	struct net_device_stats *stats = &xi->dev->stats;
 	struct dst_entry *dst = skb_dst(skb);
 	struct flowi fl;
 	int ret;
@@ -545,7 +543,7 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
 		dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6);
 		if (dst->error) {
 			dst_release(dst);
-			stats->tx_carrier_errors++;
+			DEV_STATS_INC(dev, tx_carrier_errors);
 			goto tx_err;
 		}
 		skb_dst_set(skb, dst);
@@ -561,7 +559,7 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
 		fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC;
 		rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4);
 		if (IS_ERR(rt)) {
-			stats->tx_carrier_errors++;
+			DEV_STATS_INC(dev, tx_carrier_errors);
 			goto tx_err;
 		}
 		skb_dst_set(skb, &rt->dst);
@@ -580,8 +578,8 @@ static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;

 tx_err:
-	stats->tx_errors++;
-	stats->tx_dropped++;
+	DEV_STATS_INC(dev, tx_errors);
+	DEV_STATS_INC(dev, tx_dropped);
 	kfree_skb(skb);
 	return NETDEV_TX_OK;
 }
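The hunk above swaps plain `stats->x++` increments for `DEV_STATS_INC()`. The motivation (hedged sketch, not kernel code): the xmit path can run concurrently on several CPUs, so a non-atomic read-modify-write on a shared counter can lose updates, while `DEV_STATS_INC()` performs an atomic add. The Python model below stands in for the atomic variant with a lock; all names in it are invented for illustration.

```python
import threading

class DevStats:
    """Toy stand-in for a net_device stats block with atomic increments."""
    def __init__(self):
        self._lock = threading.Lock()
        self.tx_errors = 0

    def inc_tx_errors(self):
        # the lock plays the role of the kernel's atomic add in DEV_STATS_INC()
        with self._lock:
            self.tx_errors += 1

def hammer(stats, n):
    for _ in range(n):
        stats.inc_tx_errors()

stats = DevStats()
threads = [threading.Thread(target=hammer, args=(stats, 1000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with atomic increments no update is lost: 4 threads x 1000 -> exactly 4000
```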


@@ -851,7 +851,7 @@ static void xfrm_policy_inexact_list_reinsert(struct net *net,
 		struct hlist_node *newpos = NULL;
 		bool matches_s, matches_d;

-		if (!policy->bydst_reinsert)
+		if (policy->walk.dead || !policy->bydst_reinsert)
 			continue;

 		WARN_ON_ONCE(policy->family != family);
@@ -1256,8 +1256,11 @@ static void xfrm_hash_rebuild(struct work_struct *work)
 		struct xfrm_pol_inexact_bin *bin;
 		u8 dbits, sbits;

+		if (policy->walk.dead)
+			continue;
+
 		dir = xfrm_policy_id2dir(policy->index);
-		if (policy->walk.dead || dir >= XFRM_POLICY_MAX)
+		if (dir >= XFRM_POLICY_MAX)
 			continue;

 		if ((dir & XFRM_POLICY_MASK) == XFRM_POLICY_OUT) {
@@ -1372,8 +1375,6 @@ EXPORT_SYMBOL(xfrm_policy_hash_rebuild);
  * of an absolute inpredictability of ordering of rules. This will not pass. */
 static u32 xfrm_gen_index(struct net *net, int dir, u32 index)
 {
-	static u32 idx_generator;
-
 	for (;;) {
 		struct hlist_head *list;
 		struct xfrm_policy *p;
@@ -1381,8 +1382,8 @@ static u32 xfrm_gen_index(struct net *net, int dir, u32 index)
 		int found;

 		if (!index) {
-			idx = (idx_generator | dir);
-			idx_generator += 8;
+			idx = (net->xfrm.idx_generator | dir);
+			net->xfrm.idx_generator += 8;
 		} else {
 			idx = index;
 			index = 0;
@@ -1823,9 +1824,11 @@ int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
 again:
 	list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
+		if (pol->walk.dead)
+			continue;
+
 		dir = xfrm_policy_id2dir(pol->index);
-		if (pol->walk.dead ||
-		    dir >= XFRM_POLICY_MAX ||
+		if (dir >= XFRM_POLICY_MAX ||
 		    pol->type != type)
 			continue;
@@ -1862,9 +1865,11 @@ int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
 again:
 	list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
+		if (pol->walk.dead)
+			continue;
+
 		dir = xfrm_policy_id2dir(pol->index);
-		if (pol->walk.dead ||
-		    dir >= XFRM_POLICY_MAX ||
+		if (dir >= XFRM_POLICY_MAX ||
 		    pol->xdo.dev != dev)
 			continue;
@@ -3215,7 +3220,7 @@ no_transform:
 	}

 	for (i = 0; i < num_pols; i++)
-		pols[i]->curlft.use_time = ktime_get_real_seconds();
+		WRITE_ONCE(pols[i]->curlft.use_time, ktime_get_real_seconds());

 	if (num_xfrms < 0) {
 		/* Prohibit the flow */


@@ -16,19 +16,19 @@
 static const char * const devlink_op_strmap[] = {
 	[3] = "get",
 	[7] = "port-get",
-	[DEVLINK_CMD_SB_GET] = "sb-get",
-	[DEVLINK_CMD_SB_POOL_GET] = "sb-pool-get",
-	[DEVLINK_CMD_SB_PORT_POOL_GET] = "sb-port-pool-get",
-	[DEVLINK_CMD_SB_TC_POOL_BIND_GET] = "sb-tc-pool-bind-get",
+	[13] = "sb-get",
+	[17] = "sb-pool-get",
+	[21] = "sb-port-pool-get",
+	[25] = "sb-tc-pool-bind-get",
 	[DEVLINK_CMD_PARAM_GET] = "param-get",
 	[DEVLINK_CMD_REGION_GET] = "region-get",
 	[DEVLINK_CMD_INFO_GET] = "info-get",
 	[DEVLINK_CMD_HEALTH_REPORTER_GET] = "health-reporter-get",
-	[DEVLINK_CMD_TRAP_GET] = "trap-get",
-	[DEVLINK_CMD_TRAP_GROUP_GET] = "trap-group-get",
-	[DEVLINK_CMD_TRAP_POLICER_GET] = "trap-policer-get",
-	[DEVLINK_CMD_RATE_GET] = "rate-get",
-	[DEVLINK_CMD_LINECARD_GET] = "linecard-get",
+	[63] = "trap-get",
+	[67] = "trap-group-get",
+	[71] = "trap-policer-get",
+	[76] = "rate-get",
+	[80] = "linecard-get",
 	[DEVLINK_CMD_SELFTESTS_GET] = "selftests-get",
 };
@@ -838,7 +838,7 @@ devlink_sb_get(struct ynl_sock *ys, struct devlink_sb_get_req *req)
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_sb_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_SB_GET;
+	yrs.rsp_cmd = 13;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -876,7 +876,7 @@ devlink_sb_get_dump(struct ynl_sock *ys, struct devlink_sb_get_req_dump *req)
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_sb_get_list);
 	yds.cb = devlink_sb_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_SB_GET;
+	yds.rsp_cmd = 13;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_SB_GET, 1);
@@ -987,7 +987,7 @@ devlink_sb_pool_get(struct ynl_sock *ys, struct devlink_sb_pool_get_req *req)
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_sb_pool_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_SB_POOL_GET;
+	yrs.rsp_cmd = 17;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -1026,7 +1026,7 @@ devlink_sb_pool_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_sb_pool_get_list);
 	yds.cb = devlink_sb_pool_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_SB_POOL_GET;
+	yds.rsp_cmd = 17;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_SB_POOL_GET, 1);
@@ -1147,7 +1147,7 @@ devlink_sb_port_pool_get(struct ynl_sock *ys,
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_sb_port_pool_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_SB_PORT_POOL_GET;
+	yrs.rsp_cmd = 21;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -1187,7 +1187,7 @@ devlink_sb_port_pool_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_sb_port_pool_get_list);
 	yds.cb = devlink_sb_port_pool_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_SB_PORT_POOL_GET;
+	yds.rsp_cmd = 21;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_SB_PORT_POOL_GET, 1);
@@ -1316,7 +1316,7 @@ devlink_sb_tc_pool_bind_get(struct ynl_sock *ys,
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_sb_tc_pool_bind_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_SB_TC_POOL_BIND_GET;
+	yrs.rsp_cmd = 25;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -1356,7 +1356,7 @@ devlink_sb_tc_pool_bind_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_sb_tc_pool_bind_get_list);
 	yds.cb = devlink_sb_tc_pool_bind_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_SB_TC_POOL_BIND_GET;
+	yds.rsp_cmd = 25;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_SB_TC_POOL_BIND_GET, 1);
@@ -2183,7 +2183,7 @@ devlink_trap_get(struct ynl_sock *ys, struct devlink_trap_get_req *req)
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_trap_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_TRAP_GET;
+	yrs.rsp_cmd = 63;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -2223,7 +2223,7 @@ devlink_trap_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_trap_get_list);
 	yds.cb = devlink_trap_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_TRAP_GET;
+	yds.rsp_cmd = 63;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_TRAP_GET, 1);
@@ -2336,7 +2336,7 @@ devlink_trap_group_get(struct ynl_sock *ys,
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_trap_group_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_TRAP_GROUP_GET;
+	yrs.rsp_cmd = 67;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -2376,7 +2376,7 @@ devlink_trap_group_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_trap_group_get_list);
 	yds.cb = devlink_trap_group_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_TRAP_GROUP_GET;
+	yds.rsp_cmd = 67;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_TRAP_GROUP_GET, 1);
@@ -2483,7 +2483,7 @@ devlink_trap_policer_get(struct ynl_sock *ys,
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_trap_policer_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_TRAP_POLICER_GET;
+	yrs.rsp_cmd = 71;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -2523,7 +2523,7 @@ devlink_trap_policer_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_trap_policer_get_list);
 	yds.cb = devlink_trap_policer_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_TRAP_POLICER_GET;
+	yds.rsp_cmd = 71;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_TRAP_POLICER_GET, 1);
@@ -2642,7 +2642,7 @@ devlink_rate_get(struct ynl_sock *ys, struct devlink_rate_get_req *req)
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_rate_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_RATE_GET;
+	yrs.rsp_cmd = 76;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -2682,7 +2682,7 @@ devlink_rate_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_rate_get_list);
 	yds.cb = devlink_rate_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_RATE_GET;
+	yds.rsp_cmd = 76;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_RATE_GET, 1);
@@ -2786,7 +2786,7 @@ devlink_linecard_get(struct ynl_sock *ys, struct devlink_linecard_get_req *req)
 	rsp = calloc(1, sizeof(*rsp));
 	yrs.yarg.data = rsp;
 	yrs.cb = devlink_linecard_get_rsp_parse;
-	yrs.rsp_cmd = DEVLINK_CMD_LINECARD_GET;
+	yrs.rsp_cmd = 80;

 	err = ynl_exec(ys, nlh, &yrs);
 	if (err < 0)
@@ -2825,7 +2825,7 @@ devlink_linecard_get_dump(struct ynl_sock *ys,
 	yds.ys = ys;
 	yds.alloc_sz = sizeof(struct devlink_linecard_get_list);
 	yds.cb = devlink_linecard_get_rsp_parse;
-	yds.rsp_cmd = DEVLINK_CMD_LINECARD_GET;
+	yds.rsp_cmd = 80;
 	yds.rsp_policy = &devlink_nest;

 	nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_LINECARD_GET, 1);
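The generated devlink user-space code above pins response-command IDs to their raw numeric uAPI values (e.g. 13 for `sb-get`) instead of the enum names. A hedged Python sketch of the same lookup, with the numeric keys taken from the diff and everything else invented for illustration:

```python
# Pinned command-ID table mirroring the diff above; the numeric keys are the
# uAPI values the generated C now hard-codes. Purely illustrative.
DEVLINK_OP_STRMAP = {
    3: "get",
    7: "port-get",
    13: "sb-get",
    17: "sb-pool-get",
    21: "sb-port-pool-get",
    25: "sb-tc-pool-bind-get",
    63: "trap-get",
    67: "trap-group-get",
    71: "trap-policer-get",
    76: "rate-get",
    80: "linecard-get",
}

def op_name(cmd):
    # resolve a numeric response command to its operation name
    return DEVLINK_OP_STRMAP.get(cmd, "unknown-%d" % cmd)
```

Hard-coding the numbers keeps a prebuilt binary matching kernel replies even if the local copy of the enum headers drifts.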


@@ -34,6 +34,7 @@ TEST_PROGS += gro.sh
 TEST_PROGS += gre_gso.sh
 TEST_PROGS += cmsg_so_mark.sh
 TEST_PROGS += cmsg_time.sh cmsg_ipv6.sh
+TEST_PROGS += netns-name.sh
 TEST_PROGS += srv6_end_dt46_l3vpn_test.sh
 TEST_PROGS += srv6_end_dt4_l3vpn_test.sh
 TEST_PROGS += srv6_end_dt6_l3vpn_test.sh


@@ -2437,6 +2437,9 @@ ipv4_mpath_list_test()
 	run_cmd "ip -n ns2 route add 203.0.113.0/24 nexthop via 172.16.201.2 nexthop via 172.16.202.2"
 	run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.fib_multipath_hash_policy=1"
+	run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.conf.veth2.rp_filter=0"
+	run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=0"
+	run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.conf.default.rp_filter=0"

 	set +e
 	local dmac=$(ip -n ns2 -j link show dev veth2 | jq -r '.[]["address"]')
@@ -2449,7 +2452,7 @@ ipv4_mpath_list_test()
 	# words, the FIB lookup tracepoint needs to be triggered for every
 	# packet.
 	local t0_rx_pkts=$(link_stats_get ns2 veth2 rx packets)
-	run_cmd "perf stat -e fib:fib_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd"
+	run_cmd "perf stat -a -e fib:fib_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd"
 	local t1_rx_pkts=$(link_stats_get ns2 veth2 rx packets)
 	local diff=$(echo $t1_rx_pkts - $t0_rx_pkts | bc -l)

 	list_rcv_eval $tmp_file $diff
@@ -2494,7 +2497,7 @@ ipv6_mpath_list_test()
 	# words, the FIB lookup tracepoint needs to be triggered for every
 	# packet.
 	local t0_rx_pkts=$(link_stats_get ns2 veth2 rx packets)
-	run_cmd "perf stat -e fib6:fib6_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd"
+	run_cmd "perf stat -a -e fib6:fib6_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd"
 	local t1_rx_pkts=$(link_stats_get ns2 veth2 rx packets)
 	local diff=$(echo $t1_rx_pkts - $t0_rx_pkts | bc -l)

 	list_rcv_eval $tmp_file $diff


@@ -1432,7 +1432,9 @@ chk_rst_nr()
 	count=$(get_counter ${ns_tx} "MPTcpExtMPRstTx")
 	if [ -z "$count" ]; then
 		print_skip
-	elif [ $count -lt $rst_tx ]; then
+	# accept more rst than expected except if we don't expect any
+	elif { [ $rst_tx -ne 0 ] && [ $count -lt $rst_tx ]; } ||
+	     { [ $rst_tx -eq 0 ] && [ $count -ne 0 ]; }; then
 		fail_test "got $count MP_RST[s] TX expected $rst_tx"
 	else
 		print_ok
@@ -1442,7 +1444,9 @@ chk_rst_nr()
 	count=$(get_counter ${ns_rx} "MPTcpExtMPRstRx")
 	if [ -z "$count" ]; then
 		print_skip
-	elif [ "$count" -lt "$rst_rx" ]; then
+	# accept more rst than expected except if we don't expect any
+	elif { [ $rst_rx -ne 0 ] && [ $count -lt $rst_rx ]; } ||
+	     { [ $rst_rx -eq 0 ] && [ $count -ne 0 ]; }; then
 		fail_test "got $count MP_RST[s] RX expected $rst_rx"
 	else
 		print_ok
@@ -2305,6 +2309,7 @@ remove_tests()
 		chk_join_nr 1 1 1
 		chk_rm_tx_nr 1
 		chk_rm_nr 1 1
+		chk_rst_nr 0 0
 	fi

 	# multiple subflows, remove
@@ -2317,6 +2322,7 @@ remove_tests()
 		run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 2 2 2
 		chk_rm_nr 2 2
+		chk_rst_nr 0 0
 	fi

 	# single address, remove
@@ -2329,6 +2335,7 @@ remove_tests()
 		chk_join_nr 1 1 1
 		chk_add_nr 1 1
 		chk_rm_nr 1 1 invert
+		chk_rst_nr 0 0
 	fi

 	# subflow and signal, remove
@@ -2342,6 +2349,7 @@ remove_tests()
 		chk_join_nr 2 2 2
 		chk_add_nr 1 1
 		chk_rm_nr 1 1
+		chk_rst_nr 0 0
 	fi

 	# subflows and signal, remove
@@ -2356,6 +2364,7 @@ remove_tests()
 		chk_join_nr 3 3 3
 		chk_add_nr 1 1
 		chk_rm_nr 2 2
+		chk_rst_nr 0 0
 	fi

 	# addresses remove
@@ -2370,6 +2379,7 @@ remove_tests()
 		chk_join_nr 3 3 3
 		chk_add_nr 3 3
 		chk_rm_nr 3 3 invert
+		chk_rst_nr 0 0
 	fi

 	# invalid addresses remove
@@ -2384,6 +2394,7 @@ remove_tests()
 		chk_join_nr 1 1 1
 		chk_add_nr 3 3
 		chk_rm_nr 3 1 invert
+		chk_rst_nr 0 0
 	fi

 	# subflows and signal, flush
@@ -2398,6 +2409,7 @@ remove_tests()
 		chk_join_nr 3 3 3
 		chk_add_nr 1 1
 		chk_rm_nr 1 3 invert simult
+		chk_rst_nr 0 0
 	fi

 	# subflows flush
@@ -2417,6 +2429,7 @@ remove_tests()
 		else
 			chk_rm_nr 3 3
 		fi
+		chk_rst_nr 0 0
 	fi

 	# addresses flush
@@ -2431,6 +2444,7 @@ remove_tests()
 		chk_join_nr 3 3 3
 		chk_add_nr 3 3
 		chk_rm_nr 3 3 invert simult
+		chk_rst_nr 0 0
 	fi

 	# invalid addresses flush
@@ -2445,6 +2459,7 @@ remove_tests()
 		chk_join_nr 1 1 1
 		chk_add_nr 3 3
 		chk_rm_nr 3 1 invert
+		chk_rst_nr 0 0
 	fi

 	# remove id 0 subflow
@@ -2456,6 +2471,7 @@ remove_tests()
 		run_tests $ns1 $ns2 10.0.1.1
 		chk_join_nr 1 1 1
 		chk_rm_nr 1 1
+		chk_rst_nr 0 0
 	fi

 	# remove id 0 address
@@ -2468,6 +2484,7 @@ remove_tests()
 		chk_join_nr 1 1 1
 		chk_add_nr 1 1
 		chk_rm_nr 1 1 invert
+		chk_rst_nr 0 0 invert
 	fi
 }
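The reworked `chk_rst_nr()` condition above is easy to misread in shell. A hedged Python restatement of the acceptance rule it encodes (function name invented for the example): more MP_RST packets than expected are tolerated, except when zero are expected, in which case the observed count must be exactly zero.

```python
def rst_count_ok(expected, observed):
    """True when an observed MP_RST counter satisfies the test's expectation."""
    if expected == 0:
        # expecting none: any reset is a failure
        return observed == 0
    # expecting some: extra resets are tolerated, fewer are not
    return observed >= expected
```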


@@ -0,0 +1,87 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+set -o pipefail
+
+NS=netns-name-test
+DEV=dummy-dev0
+DEV2=dummy-dev1
+ALT_NAME=some-alt-name
+
+RET_CODE=0
+
+cleanup() {
+    ip netns del $NS
+}
+
+trap cleanup EXIT
+
+fail() {
+    echo "ERROR: ${1:-unexpected return code} (ret: $_)" >&2
+    RET_CODE=1
+}
+
+ip netns add $NS
+
+#
+# Test basic move without a rename
+#
+ip -netns $NS link add name $DEV type dummy || fail
+ip -netns $NS link set dev $DEV netns 1 ||
+    fail "Can't perform a netns move"
+ip link show dev $DEV >> /dev/null || fail "Device not found after move"
+ip link del $DEV || fail
+
+#
+# Test move with a conflict
+#
+ip link add name $DEV type dummy
+ip -netns $NS link add name $DEV type dummy || fail
+ip -netns $NS link set dev $DEV netns 1 2> /dev/null &&
+    fail "Performed a netns move with a name conflict"
+ip link show dev $DEV >> /dev/null || fail "Device not found after move"
+ip -netns $NS link del $DEV || fail
+ip link del $DEV || fail
+
+#
+# Test move with a conflict and rename
+#
+ip link add name $DEV type dummy
+ip -netns $NS link add name $DEV type dummy || fail
+ip -netns $NS link set dev $DEV netns 1 name $DEV2 ||
+    fail "Can't perform a netns move with rename"
+ip link del $DEV2 || fail
+ip link del $DEV || fail
+
+#
+# Test dup alt-name with netns move
+#
+ip link add name $DEV type dummy || fail
+ip link property add dev $DEV altname $ALT_NAME || fail
+ip -netns $NS link add name $DEV2 type dummy || fail
+ip -netns $NS link property add dev $DEV2 altname $ALT_NAME || fail
+ip -netns $NS link set dev $DEV2 netns 1 2> /dev/null &&
+    fail "Moved with alt-name dup"
+ip link del $DEV || fail
+ip -netns $NS link del $DEV2 || fail
+
+#
+# Test creating alt-name in one net-ns and using in another
+#
+ip -netns $NS link add name $DEV type dummy || fail
+ip -netns $NS link property add dev $DEV altname $ALT_NAME || fail
+ip -netns $NS link set dev $DEV netns 1 || fail
+ip link show dev $ALT_NAME >> /dev/null || fail "Can't find alt-name after move"
+ip -netns $NS link show dev $ALT_NAME 2> /dev/null &&
+    fail "Can still find alt-name after move"
+ip link del $DEV || fail
+
+echo -ne "$(basename $0) \t\t\t\t"
+if [ $RET_CODE -eq 0 ]; then
+    echo "[ OK ]"
+else
+    echo "[ FAIL ]"
+fi
+
+exit $RET_CODE


@@ -3,6 +3,8 @@
 #
 # OVS kernel module self tests

+trap ovs_exit_sig EXIT TERM INT ERR
+
 # Kselftest framework requirement - SKIP code is 4.
 ksft_skip=4

@@ -142,6 +144,12 @@ ovs_add_flow () {
 	return 0
 }

+ovs_del_flows () {
+	info "Deleting all flows from DP: sbx:$1 br:$2"
+	ovs_sbx "$1" python3 $ovs_base/ovs-dpctl.py del-flows "$2"
+	return 0
+}
+
 ovs_drop_record_and_run () {
 	local sbx=$1
 	shift
@@ -198,6 +206,17 @@ test_drop_reason() {
 	ip netns exec server ip addr add 172.31.110.20/24 dev s1
 	ip netns exec server ip link set s1 up

+	# Check if drop reasons can be sent
+	ovs_add_flow "test_drop_reason" dropreason \
+		'in_port(1),eth(),eth_type(0x0806),arp()' 'drop(10)' 2>/dev/null
+	if [ $? == 1 ]; then
+		info "no support for drop reasons - skipping"
+		ovs_exit_sig
+		return $ksft_skip
+	fi
+
+	ovs_del_flows "test_drop_reason" dropreason
+
 	# Allow ARP
 	ovs_add_flow "test_drop_reason" dropreason \
 		'in_port(1),eth(),eth_type(0x0806),arp()' '2' || return 1
@@ -525,7 +544,7 @@ run_test() {
 	fi

 	if python3 ovs-dpctl.py -h 2>&1 | \
-	   grep "Need to install the python" >/dev/null 2>&1; then
+	   grep -E "Need to (install|upgrade) the python" >/dev/null 2>&1; then
 		stdbuf -o0 printf "TEST: %-60s [PYLIB]\n" "${tdesc}"
 		return $ksft_skip
 	fi


@@ -28,8 +28,10 @@ try:
     from pyroute2.netlink import nlmsg_atoms
     from pyroute2.netlink.exceptions import NetlinkError
     from pyroute2.netlink.generic import GenericNetlinkSocket
+    import pyroute2
+
 except ModuleNotFoundError:
-    print("Need to install the python pyroute2 package.")
+    print("Need to install the python pyroute2 package >= 0.6.")
     sys.exit(0)
@@ -1117,12 +1119,14 @@ class ovskey(nla):
             "src",
             lambda x: str(ipaddress.IPv4Address(x)),
             int,
+            convert_ipv4,
         ),
         (
             "dst",
             "dst",
-            lambda x: str(ipaddress.IPv6Address(x)),
+            lambda x: str(ipaddress.IPv4Address(x)),
             int,
+            convert_ipv4,
         ),
         ("tp_src", "tp_src", "%d", int),
         ("tp_dst", "tp_dst", "%d", int),
@@ -1904,6 +1908,32 @@ class OvsFlow(GenericNetlinkSocket):
             raise ne
         return reply

+    def del_flows(self, dpifindex):
+        """
+        Send a del message to the kernel that will drop all flows.
+
+        dpifindex should be a valid datapath obtained by calling
+        into the OvsDatapath lookup
+        """
+
+        flowmsg = OvsFlow.ovs_flow_msg()
+        flowmsg["cmd"] = OVS_FLOW_CMD_DEL
+        flowmsg["version"] = OVS_DATAPATH_VERSION
+        flowmsg["reserved"] = 0
+        flowmsg["dpifindex"] = dpifindex
+
+        try:
+            reply = self.nlm_request(
+                flowmsg,
+                msg_type=self.prid,
+                msg_flags=NLM_F_REQUEST | NLM_F_ACK,
+            )
+            reply = reply[0]
+        except NetlinkError as ne:
+            print(flowmsg)
+            raise ne
+        return reply
+
     def dump(self, dpifindex, flowspec=None):
         """
         Returns a list of messages containing flows.
@@ -1998,6 +2028,12 @@ def main(argv):
     nlmsg_atoms.ovskey = ovskey
     nlmsg_atoms.ovsactions = ovsactions

+    # version check for pyroute2
+    prverscheck = pyroute2.__version__.split(".")
+    if int(prverscheck[0]) == 0 and int(prverscheck[1]) < 6:
+        print("Need to upgrade the python pyroute2 package to >= 0.6.")
+        sys.exit(0)
+
     parser = argparse.ArgumentParser()
     parser.add_argument(
         "-v",
@@ -2060,6 +2096,9 @@ def main(argv):
     addflcmd.add_argument("flow", help="Flow specification")
     addflcmd.add_argument("acts", help="Flow actions")

+    delfscmd = subparsers.add_parser("del-flows")
+    delfscmd.add_argument("flsbr", help="Datapath name")
+
     args = parser.parse_args()

     if args.verbose > 0:
@@ -2143,6 +2182,11 @@ def main(argv):
         flow = OvsFlow.ovs_flow_msg()
         flow.parse(args.flow, args.acts, rep["dpifindex"])
         ovsflow.add_flow(rep["dpifindex"], flow)
+    elif hasattr(args, "flsbr"):
+        rep = ovsdp.info(args.flsbr, 0)
+        if rep is None:
+            print("DP '%s' not found." % args.flsbr)
+        ovsflow.del_flows(rep["dpifindex"])

     return 0


@@ -11,6 +11,12 @@ nft --version >/dev/null 2>&1 || {
 	exit $SKIP_RC
 }

+# Run everything in a separate network namespace
+[ "${1}" != "run" ] && { unshare -n "${0}" run; exit $?; }
+
+# give other scripts a chance to finish - audit_logread sees all activity
+sleep 1
+
 logfile=$(mktemp)
 rulefile=$(mktemp)
 echo "logging into $logfile"
@@ -93,6 +99,12 @@ do_test 'nft add counter t1 c1' \
 do_test 'nft add counter t2 c1; add counter t2 c2' \
 'table=t2 family=2 entries=2 op=nft_register_obj'

+for ((i = 3; i <= 500; i++)); do
+	echo "add counter t2 c$i"
+done >$rulefile
+do_test "nft -f $rulefile" \
+'table=t2 family=2 entries=498 op=nft_register_obj'
+
 # adding/updating quotas

 do_test 'nft add quota t1 q1 { 10 bytes }' \
@@ -101,6 +113,12 @@ do_test 'nft add quota t1 q1 { 10 bytes }' \
 do_test 'nft add quota t2 q1 { 10 bytes }; add quota t2 q2 { 10 bytes }' \
 'table=t2 family=2 entries=2 op=nft_register_obj'

+for ((i = 3; i <= 500; i++)); do
+	echo "add quota t2 q$i { 10 bytes }"
+done >$rulefile
+do_test "nft -f $rulefile" \
+'table=t2 family=2 entries=498 op=nft_register_obj'
+
 # changing the quota value triggers obj update path
 do_test 'nft add quota t1 q1 { 20 bytes }' \
 'table=t1 family=2 entries=1 op=nft_register_obj'
@@ -150,6 +168,40 @@ done
 do_test 'nft reset set t1 s' \
 'table=t1 family=2 entries=3 op=nft_reset_setelem'

+# resetting counters
+
+do_test 'nft reset counter t1 c1' \
+'table=t1 family=2 entries=1 op=nft_reset_obj'
+
+do_test 'nft reset counters t1' \
+'table=t1 family=2 entries=1 op=nft_reset_obj'
+
+do_test 'nft reset counters t2' \
+'table=t2 family=2 entries=342 op=nft_reset_obj
+table=t2 family=2 entries=158 op=nft_reset_obj'
+
+do_test 'nft reset counters' \
+'table=t1 family=2 entries=1 op=nft_reset_obj
+table=t2 family=2 entries=341 op=nft_reset_obj
+table=t2 family=2 entries=159 op=nft_reset_obj'
+
+# resetting quotas
+
+do_test 'nft reset quota t1 q1' \
+'table=t1 family=2 entries=1 op=nft_reset_obj'
+
+do_test 'nft reset quotas t1' \
+'table=t1 family=2 entries=1 op=nft_reset_obj'
+
+do_test 'nft reset quotas t2' \
+'table=t2 family=2 entries=315 op=nft_reset_obj
+table=t2 family=2 entries=185 op=nft_reset_obj'
+
+do_test 'nft reset quotas' \
+'table=t1 family=2 entries=1 op=nft_reset_obj
+table=t2 family=2 entries=314 op=nft_reset_obj
+table=t2 family=2 entries=186 op=nft_reset_obj'
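The expected audit output above splits one reset of 500 objects across multiple records whose `entries=` counts sum to the total (e.g. 342 + 158), apparently because the kernel emits a record per internal batch rather than one per operation. A small hedged helper (invented for this example) that checks such expected lines add up:

```python
def entries_total(expected_lines):
    """Sum the entries= fields across a multi-line audit expectation."""
    total = 0
    for line in expected_lines.splitlines():
        for field in line.split():
            if field.startswith("entries="):
                total += int(field.split("=", 1)[1])
    return total

# the two-record expectation for 'nft reset counters t2' from the test above
expected = ("table=t2 family=2 entries=342 op=nft_reset_obj\n"
            "table=t2 family=2 entries=158 op=nft_reset_obj")
```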
 # deleting rules
 readarray -t handles < <(nft -a list chain t1 c1 | \