Merge tag 'net-6.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth and netfilter.

  Current release - regressions:

   - virtio_net: avoid crash on resume - move netdev_tx_reset_queue()
     call before RX napi enable

  Current release - new code bugs:

   - net/mlx5e: fix page leak and incorrect header release w/ HW GRO

  Previous releases - regressions:

   - udp: fix receiving fraglist GSO packets

   - tcp: prevent refcount underflow due to concurrent execution of
     tcp_sk_exit_batch()

  Previous releases - always broken:

   - ipv6: fix possible UAF when incrementing error counters on output

   - ip6: tunnel: prevent merging of packets with different L2

   - mptcp: pm: fix IDs not being reusable

   - bonding: fix potential crashes in IPsec offload handling

   - Bluetooth: HCI:
      - MGMT: add error handling to pair_device() to avoid a crash
      - invert LE State quirk to be opt-out rather than opt-in
      - fix LE quote calculation

   - drv: dsa: VLAN fixes for Ocelot driver

   - drv: igb: cope with large MAX_SKB_FRAGS Kconfig settings

   - drv: ice: fix Rx data path on architectures with PAGE_SIZE >= 8192

  Misc:

   - netpoll: do not export netpoll_poll_[disable|enable]()

   - MAINTAINERS: update the list of networking headers"

* tag 'net-6.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (82 commits)
  s390/iucv: Fix vargs handling in iucv_alloc_device()
  net: ovs: fix ovs_drop_reasons error
  net: xilinx: axienet: Fix dangling multicast addresses
  net: xilinx: axienet: Always disable promiscuous mode
  MAINTAINERS: Mark JME Network Driver as Odd Fixes
  MAINTAINERS: Add header files to NETWORKING sections
  MAINTAINERS: Add limited globs for Networking headers
  MAINTAINERS: Add net_tstamp.h to SOCKET TIMESTAMPING section
  MAINTAINERS: Add sonet.h to ATM section of MAINTAINERS
  octeontx2-af: Fix CPT AF register offset calculation
  net: phy: realtek: Fix setting of PHY LEDs Mode B bit on RTL8211F
  net: ngbe: Fix phy mode set to external phy
  netfilter: flowtable: validate vlan header
  bnxt_en: Fix double DMA unmapping for XDP_REDIRECT
  ipv6: prevent possible UAF in ip6_xmit()
  ipv6: fix possible UAF in ip6_finish_output2()
  ipv6: prevent UAF in ip6_send_skb()
  netpoll: do not export netpoll_poll_[disable|enable]()
  selftests: mlxsw: ethtool_lanes: Source ethtool lib from correct path
  udp: fix receiving fraglist GSO packets
  ...
commit aa0743a229 (merged by Linus Torvalds, 2024-08-23 07:47:01 +08:00)
74 changed files with 1562 additions and 561 deletions


@ -3504,7 +3504,9 @@ S: Maintained
W: http://linux-atm.sourceforge.net
F: drivers/atm/
F: include/linux/atm*
F: include/linux/sonet.h
F: include/uapi/linux/atm*
F: include/uapi/linux/sonet.h
ATMEL MACB ETHERNET DRIVER
M: Nicolas Ferre <nicolas.ferre@microchip.com>
@ -11993,7 +11995,7 @@ F: fs/jfs/
JME NETWORK DRIVER
M: Guo-Fu Tseng <cooldavid@cooldavid.org>
L: netdev@vger.kernel.org
S: Maintained
S: Odd Fixes
F: drivers/net/ethernet/jme.*
JOURNALLING FLASH FILE SYSTEM V2 (JFFS2)
@ -15877,15 +15879,19 @@ F: drivers/net/
F: include/dt-bindings/net/
F: include/linux/cn_proc.h
F: include/linux/etherdevice.h
F: include/linux/ethtool_netlink.h
F: include/linux/fcdevice.h
F: include/linux/fddidevice.h
F: include/linux/hippidevice.h
F: include/linux/if_*
F: include/linux/inetdevice.h
F: include/linux/netdevice.h
F: include/linux/netdev*
F: include/linux/platform_data/wiznet.h
F: include/uapi/linux/cn_proc.h
F: include/uapi/linux/ethtool_netlink.h
F: include/uapi/linux/if_*
F: include/uapi/linux/netdevice.h
F: include/uapi/linux/netdev*
F: tools/testing/selftests/drivers/net/
X: drivers/net/wireless/
NETWORKING DRIVERS (WIRELESS)
@ -15936,14 +15942,28 @@ F: include/linux/framer/framer-provider.h
F: include/linux/framer/framer.h
F: include/linux/in.h
F: include/linux/indirect_call_wrapper.h
F: include/linux/inet.h
F: include/linux/inet_diag.h
F: include/linux/net.h
F: include/linux/netdevice.h
F: include/linux/skbuff.h
F: include/linux/netdev*
F: include/linux/netlink.h
F: include/linux/netpoll.h
F: include/linux/rtnetlink.h
F: include/linux/seq_file_net.h
F: include/linux/skbuff*
F: include/net/
F: include/uapi/linux/genetlink.h
F: include/uapi/linux/hsr_netlink.h
F: include/uapi/linux/in.h
F: include/uapi/linux/inet_diag.h
F: include/uapi/linux/nbd-netlink.h
F: include/uapi/linux/net.h
F: include/uapi/linux/net_namespace.h
F: include/uapi/linux/netdevice.h
F: include/uapi/linux/netconf.h
F: include/uapi/linux/netdev*
F: include/uapi/linux/netlink.h
F: include/uapi/linux/netlink_diag.h
F: include/uapi/linux/rtnetlink.h
F: lib/net_utils.c
F: lib/random32.c
F: net/
@ -21054,6 +21074,7 @@ SOCKET TIMESTAMPING
M: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
S: Maintained
F: Documentation/networking/timestamping.rst
F: include/linux/net_tstamp.h
F: include/uapi/linux/net_tstamp.h
F: tools/testing/selftests/net/so_txtime.c


@ -2945,9 +2945,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
&hdev->quirks);
if (ver.hw_variant == 0x08 && ver.fw_variant == 0x22)
set_bit(HCI_QUIRK_VALID_LE_STATES,
&hdev->quirks);
err = btintel_legacy_rom_setup(hdev, &ver);
break;
@ -2956,7 +2953,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
case 0x12: /* ThP */
case 0x13: /* HrP */
case 0x14: /* CcP */
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
fallthrough;
case 0x0c: /* WsP */
/* Apply the device specific HCI quirks
@ -3048,9 +3044,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
/* These variants don't seem to support LE Coded PHY */
set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
/* Set Valid LE States quirk */
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
/* Setup MSFT Extension support */
btintel_set_msft_opcode(hdev, ver.hw_variant);
@ -3076,9 +3069,6 @@ static int btintel_setup_combined(struct hci_dev *hdev)
*/
set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
/* Apply LE States quirk from solar onwards */
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
/* Setup MSFT Extension support */
btintel_set_msft_opcode(hdev,
INTEL_HW_VARIANT(ver_tlv.cnvi_bt));


@ -1180,9 +1180,6 @@ static int btintel_pcie_setup(struct hci_dev *hdev)
*/
set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
/* Apply LE States quirk from solar onwards */
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
/* Setup MSFT Extension support */
btintel_set_msft_opcode(hdev,
INTEL_HW_VARIANT(ver_tlv.cnvi_bt));


@ -1148,9 +1148,6 @@ static int btmtksdio_setup(struct hci_dev *hdev)
}
}
/* Valid LE States quirk for MediaTek 7921 */
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
break;
case 0x7663:
case 0x7668:


@ -1287,7 +1287,6 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
case CHIP_ID_8852C:
case CHIP_ID_8851B:
case CHIP_ID_8852BT:
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
/* RTL8852C needs to transmit mSBC data continuously without


@ -3956,8 +3956,8 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_WIDEBAND_SPEECH)
set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
if (id->driver_info & BTUSB_VALID_LE_STATES)
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
if (!(id->driver_info & BTUSB_VALID_LE_STATES))
set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
if (id->driver_info & BTUSB_DIGIANSWER) {
data->cmdreq_type = USB_TYPE_VENDOR;


@ -2474,8 +2474,8 @@ static int qca_serdev_probe(struct serdev_device *serdev)
set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
&hdev->quirks);
if (data->capabilities & QCA_CAP_VALID_LE_STATES)
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
if (!(data->capabilities & QCA_CAP_VALID_LE_STATES))
set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
}
return 0;


@ -425,8 +425,6 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
if (opcode & 0x80)
set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
if (hci_register_dev(hdev) < 0) {
BT_ERR("Can't register HCI device");
hci_free_dev(hdev);


@ -582,7 +582,6 @@ static void bond_ipsec_del_sa_all(struct bonding *bond)
} else {
slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
}
ipsec->xs->xso.real_dev = NULL;
}
spin_unlock_bh(&bond->ipsec_lock);
rcu_read_unlock();
@ -599,34 +598,30 @@ static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs)
struct net_device *real_dev;
struct slave *curr_active;
struct bonding *bond;
int err;
bool ok = false;
bond = netdev_priv(bond_dev);
rcu_read_lock();
curr_active = rcu_dereference(bond->curr_active_slave);
if (!curr_active)
goto out;
real_dev = curr_active->dev;
if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) {
err = false;
if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP)
goto out;
}
if (!xs->xso.real_dev) {
err = false;
if (!xs->xso.real_dev)
goto out;
}
if (!real_dev->xfrmdev_ops ||
!real_dev->xfrmdev_ops->xdo_dev_offload_ok ||
netif_is_bond_master(real_dev)) {
err = false;
netif_is_bond_master(real_dev))
goto out;
}
err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
ok = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs);
out:
rcu_read_unlock();
return err;
return ok;
}
static const struct xfrmdev_ops bond_xfrmdev_ops = {


@ -936,7 +936,7 @@ static int bond_option_active_slave_set(struct bonding *bond,
/* check to see if we are clearing active */
if (!slave_dev) {
netdev_dbg(bond->dev, "Clearing current active slave\n");
RCU_INIT_POINTER(bond->curr_active_slave, NULL);
bond_change_active_slave(bond, NULL);
bond_select_active_slave(bond);
} else {
struct slave *old_active = rtnl_dereference(bond->curr_active_slave);


@ -266,7 +266,6 @@ static int ksz_ptp_enable_mode(struct ksz_device *dev)
struct ksz_port *prt;
struct dsa_port *dp;
bool tag_en = false;
int ret;
dsa_switch_for_each_user_port(dp, dev->ds) {
prt = &dev->ports[dp->index];
@ -277,9 +276,7 @@ static int ksz_ptp_enable_mode(struct ksz_device *dev)
}
if (tag_en) {
ret = ptp_schedule_worker(ptp_data->clock, 0);
if (ret)
return ret;
ptp_schedule_worker(ptp_data->clock, 0);
} else {
ptp_cancel_worker_sync(ptp_data->clock);
}


@ -457,7 +457,8 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
trace_mv88e6xxx_atu_full_violation(chip->dev, spid,
entry.portvec, entry.mac,
fid);
chip->ports[spid].atu_full_violation++;
if (spid < ARRAY_SIZE(chip->ports))
chip->ports[spid].atu_full_violation++;
}
return IRQ_HANDLED;


@ -61,11 +61,46 @@ static int felix_cpu_port_for_conduit(struct dsa_switch *ds,
return cpu_dp->index;
}
/**
* felix_update_tag_8021q_rx_rule - Update VCAP ES0 tag_8021q rule after
* vlan_filtering change
* @outer_tagging_rule: Pointer to VCAP filter on which the update is performed
* @vlan_filtering: Current bridge VLAN filtering setting
*
* Source port identification for tag_8021q is done using VCAP ES0 rules on the
* CPU port(s). The ES0 tag B (inner tag from the packet) can be configured as
* either:
* - push_inner_tag=0: the inner tag is never pushed into the frame
* (and we lose info about the classified VLAN). This is
* good when the classified VLAN is a discardable quantity
* for the software RX path: it is either set to
* OCELOT_STANDALONE_PVID, or to
* ocelot_vlan_unaware_pvid(bridge).
* - push_inner_tag=1: the inner tag is always pushed. This is good when the
* classified VLAN is not a discardable quantity (the port
* is under a VLAN-aware bridge, and software needs to
* continue processing the packet in the same VLAN as the
* hardware).
* The point is that what is good for a VLAN-unaware port is not good for a
* VLAN-aware port, and vice versa. Thus, the RX tagging rules must be kept in
* sync with the VLAN filtering state of the port.
*/
static void
felix_update_tag_8021q_rx_rule(struct ocelot_vcap_filter *outer_tagging_rule,
bool vlan_filtering)
{
if (vlan_filtering)
outer_tagging_rule->action.push_inner_tag = OCELOT_ES0_TAG;
else
outer_tagging_rule->action.push_inner_tag = OCELOT_NO_ES0_TAG;
}
/* Set up VCAP ES0 rules for pushing a tag_8021q VLAN towards the CPU such that
* the tagger can perform RX source port identification.
*/
static int felix_tag_8021q_vlan_add_rx(struct dsa_switch *ds, int port,
int upstream, u16 vid)
int upstream, u16 vid,
bool vlan_filtering)
{
struct ocelot_vcap_filter *outer_tagging_rule;
struct ocelot *ocelot = ds->priv;
@ -96,6 +131,14 @@ static int felix_tag_8021q_vlan_add_rx(struct dsa_switch *ds, int port,
outer_tagging_rule->action.tag_a_tpid_sel = OCELOT_TAG_TPID_SEL_8021AD;
outer_tagging_rule->action.tag_a_vid_sel = 1;
outer_tagging_rule->action.vid_a_val = vid;
felix_update_tag_8021q_rx_rule(outer_tagging_rule, vlan_filtering);
outer_tagging_rule->action.tag_b_tpid_sel = OCELOT_TAG_TPID_SEL_8021Q;
/* Leave TAG_B_VID_SEL at 0 (Classified VID + VID_B_VAL). Since we also
* leave VID_B_VAL at 0, this makes ES0 tag B (the inner tag) equal to
* the classified VID, which we need to see in the DSA tagger's receive
* path. Note: the inner tag is only visible in the packet when pushed
* (push_inner_tag == OCELOT_ES0_TAG).
*/
err = ocelot_vcap_filter_add(ocelot, outer_tagging_rule, NULL);
if (err)
@ -227,6 +270,7 @@ static int felix_tag_8021q_vlan_del_tx(struct dsa_switch *ds, int port, u16 vid)
static int felix_tag_8021q_vlan_add(struct dsa_switch *ds, int port, u16 vid,
u16 flags)
{
struct dsa_port *dp = dsa_to_port(ds, port);
struct dsa_port *cpu_dp;
int err;
@ -234,11 +278,12 @@ static int felix_tag_8021q_vlan_add(struct dsa_switch *ds, int port, u16 vid,
* membership, which we aren't. So we don't need to add any VCAP filter
* for the CPU port.
*/
if (!dsa_is_user_port(ds, port))
if (!dsa_port_is_user(dp))
return 0;
dsa_switch_for_each_cpu_port(cpu_dp, ds) {
err = felix_tag_8021q_vlan_add_rx(ds, port, cpu_dp->index, vid);
err = felix_tag_8021q_vlan_add_rx(ds, port, cpu_dp->index, vid,
dsa_port_is_vlan_filtering(dp));
if (err)
return err;
}
@ -258,10 +303,11 @@ add_tx_failed:
static int felix_tag_8021q_vlan_del(struct dsa_switch *ds, int port, u16 vid)
{
struct dsa_port *dp = dsa_to_port(ds, port);
struct dsa_port *cpu_dp;
int err;
if (!dsa_is_user_port(ds, port))
if (!dsa_port_is_user(dp))
return 0;
dsa_switch_for_each_cpu_port(cpu_dp, ds) {
@ -278,11 +324,41 @@ static int felix_tag_8021q_vlan_del(struct dsa_switch *ds, int port, u16 vid)
del_tx_failed:
dsa_switch_for_each_cpu_port(cpu_dp, ds)
felix_tag_8021q_vlan_add_rx(ds, port, cpu_dp->index, vid);
felix_tag_8021q_vlan_add_rx(ds, port, cpu_dp->index, vid,
dsa_port_is_vlan_filtering(dp));
return err;
}
static int felix_update_tag_8021q_rx_rules(struct dsa_switch *ds, int port,
bool vlan_filtering)
{
struct ocelot_vcap_filter *outer_tagging_rule;
struct ocelot_vcap_block *block_vcap_es0;
struct ocelot *ocelot = ds->priv;
struct dsa_port *cpu_dp;
unsigned long cookie;
int err;
block_vcap_es0 = &ocelot->block[VCAP_ES0];
dsa_switch_for_each_cpu_port(cpu_dp, ds) {
cookie = OCELOT_VCAP_ES0_TAG_8021Q_RXVLAN(ocelot, port,
cpu_dp->index);
outer_tagging_rule = ocelot_vcap_block_find_filter_by_id(block_vcap_es0,
cookie, false);
felix_update_tag_8021q_rx_rule(outer_tagging_rule, vlan_filtering);
err = ocelot_vcap_filter_replace(ocelot, outer_tagging_rule);
if (err)
return err;
}
return 0;
}
static int felix_trap_get_cpu_port(struct dsa_switch *ds,
const struct ocelot_vcap_filter *trap)
{
@ -528,7 +604,19 @@ static int felix_tag_8021q_setup(struct dsa_switch *ds)
* so we need to be careful that there are no extra frames to be
* dequeued over MMIO, since we would never know to discard them.
*/
ocelot_lock_xtr_grp_bh(ocelot, 0);
ocelot_drain_cpu_queue(ocelot, 0);
ocelot_unlock_xtr_grp_bh(ocelot, 0);
/* Problem: when using push_inner_tag=1 for ES0 tag B, we lose info
* about whether the received packets were VLAN-tagged on the wire,
* since they are always tagged on egress towards the CPU port.
*
* Since using push_inner_tag=1 is unavoidable for VLAN-aware bridges,
* we must work around the fallout by untagging in software to make
* untagged reception work more or less as expected.
*/
ds->untag_vlan_aware_bridge_pvid = true;
return 0;
}
@ -554,6 +642,8 @@ static void felix_tag_8021q_teardown(struct dsa_switch *ds)
ocelot_port_teardown_dsa_8021q_cpu(ocelot, dp->index);
dsa_tag_8021q_unregister(ds);
ds->untag_vlan_aware_bridge_pvid = false;
}
static unsigned long felix_tag_8021q_get_host_fwd_mask(struct dsa_switch *ds)
@ -1008,8 +1098,23 @@ static int felix_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
struct netlink_ext_ack *extack)
{
struct ocelot *ocelot = ds->priv;
bool using_tag_8021q;
struct felix *felix;
int err;
return ocelot_port_vlan_filtering(ocelot, port, enabled, extack);
err = ocelot_port_vlan_filtering(ocelot, port, enabled, extack);
if (err)
return err;
felix = ocelot_to_felix(ocelot);
using_tag_8021q = felix->tag_proto == DSA_TAG_PROTO_OCELOT_8021Q;
if (using_tag_8021q) {
err = felix_update_tag_8021q_rx_rules(ds, port, enabled);
if (err)
return err;
}
return 0;
}
static int felix_vlan_add(struct dsa_switch *ds, int port,
@ -1518,6 +1623,8 @@ static void felix_port_deferred_xmit(struct kthread_work *work)
int port = xmit_work->dp->index;
int retries = 10;
ocelot_lock_inj_grp(ocelot, 0);
do {
if (ocelot_can_inject(ocelot, 0))
break;
@ -1526,6 +1633,7 @@ static void felix_port_deferred_xmit(struct kthread_work *work)
} while (--retries);
if (!retries) {
ocelot_unlock_inj_grp(ocelot, 0);
dev_err(ocelot->dev, "port %d failed to inject skb\n",
port);
ocelot_port_purge_txtstamp_skb(ocelot, port, skb);
@ -1535,6 +1643,8 @@ static void felix_port_deferred_xmit(struct kthread_work *work)
ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
ocelot_unlock_inj_grp(ocelot, 0);
consume_skb(skb);
kfree(xmit_work);
}
@ -1694,6 +1804,8 @@ static bool felix_check_xtr_pkt(struct ocelot *ocelot)
if (!felix->info->quirk_no_xtr_irq)
return false;
ocelot_lock_xtr_grp(ocelot, grp);
while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp)) {
struct sk_buff *skb;
unsigned int type;
@ -1730,6 +1842,8 @@ out:
ocelot_drain_cpu_queue(ocelot, 0);
}
ocelot_unlock_xtr_grp(ocelot, grp);
return true;
}


@ -5056,7 +5056,7 @@ void bnxt_del_one_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr)
list_del_init(&fltr->list);
}
void bnxt_clear_usr_fltrs(struct bnxt *bp, bool all)
static void bnxt_clear_usr_fltrs(struct bnxt *bp, bool all)
{
struct bnxt_filter_base *usr_fltr, *tmp;
@ -10248,7 +10248,7 @@ static void bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp)
}
}
void bnxt_clear_rss_ctxs(struct bnxt *bp)
static void bnxt_clear_rss_ctxs(struct bnxt *bp)
{
struct ethtool_rxfh_context *ctx;
unsigned long context;


@ -2790,7 +2790,6 @@ void bnxt_set_ring_params(struct bnxt *);
int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
void bnxt_insert_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
void bnxt_del_one_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
void bnxt_clear_usr_fltrs(struct bnxt *bp, bool all);
int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap,
int bmap_size, bool async_only);
int bnxt_hwrm_func_drv_unrgtr(struct bnxt *bp);
@ -2842,7 +2841,6 @@ int bnxt_hwrm_vnic_rss_cfg_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic);
int __bnxt_setup_vnic_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic);
void bnxt_del_one_rss_ctx(struct bnxt *bp, struct bnxt_rss_ctx *rss_ctx,
bool all);
void bnxt_clear_rss_ctxs(struct bnxt *bp);
int bnxt_open_nic(struct bnxt *, bool, bool);
int bnxt_half_open_nic(struct bnxt *bp);
void bnxt_half_close_nic(struct bnxt *bp);


@ -968,9 +968,6 @@ static int bnxt_set_channels(struct net_device *dev,
return -EINVAL;
}
bnxt_clear_usr_fltrs(bp, true);
if (BNXT_SUPPORTS_MULTI_RSS_CTX(bp))
bnxt_clear_rss_ctxs(bp);
if (netif_running(dev)) {
if (BNXT_PF(bp)) {
/* TODO CHIMP_FW: Send message to all VF's
@ -2000,7 +1997,6 @@ static int bnxt_set_rxfh(struct net_device *dev,
bnxt_modify_rss(bp, NULL, NULL, rxfh);
bnxt_clear_usr_fltrs(bp, false);
if (netif_running(bp->dev)) {
bnxt_close_nic(bp, false, false);
rc = bnxt_open_nic(bp, false, false);


@ -297,11 +297,6 @@ bool bnxt_rx_xdp(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, u16 cons,
* redirect is coming from a frame received by the
* bnxt_en driver.
*/
rx_buf = &rxr->rx_buf_ring[cons];
mapping = rx_buf->mapping - bp->rx_dma_offset;
dma_unmap_page_attrs(&pdev->dev, mapping,
BNXT_RX_PAGE_SIZE, bp->rx_dir,
DMA_ATTR_WEAK_ORDERING);
/* if we are unable to allocate a new buffer, abort and reuse */
if (bnxt_alloc_rx_data(bp, rxr, rxr->rx_prod, GFP_ATOMIC)) {


@ -1244,7 +1244,8 @@ static u64 hash_filter_ntuple(struct ch_filter_specification *fs,
* in the Compressed Filter Tuple.
*/
if (tp->vlan_shift >= 0 && fs->mask.ivlan)
ntuple |= (FT_VLAN_VLD_F | fs->val.ivlan) << tp->vlan_shift;
ntuple |= (u64)(FT_VLAN_VLD_F |
fs->val.ivlan) << tp->vlan_shift;
if (tp->port_shift >= 0 && fs->mask.iport)
ntuple |= (u64)fs->val.iport << tp->port_shift;
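
A minimal userspace sketch of the bug class fixed in the cxgb4 hunk above: without the u64 cast, the OR and the shift are evaluated in 32-bit arithmetic, so bits shifted past bit 31 are lost before the result is widened into the 64-bit ntuple. The constants below are illustrative stand-ins, not the driver's real values.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t vld_flag = 0x10000;  /* stand-in for FT_VLAN_VLD_F */
	uint32_t ivlan = 0x123;       /* stand-in for fs->val.ivlan */
	int shift = 24;               /* stand-in for tp->vlan_shift */

	/* 32-bit shift: bits that land at position 32..40 are dropped */
	uint64_t truncated = (vld_flag | ivlan) << shift;
	/* 64-bit shift: the valid-flag bit survives at position 40 */
	uint64_t correct = (uint64_t)(vld_flag | ivlan) << shift;

	printf("truncated=0x%llx correct=0x%llx\n",
	       (unsigned long long)truncated, (unsigned long long)correct);
	return 0;
}

With shift = 24, the uncast version silently loses the valid-flag bit, which is how a large vlan_shift could strip FT_VLAN_VLD_F from the tuple before this fix.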


@ -2638,13 +2638,14 @@ static int dpaa2_switch_refill_bp(struct ethsw_core *ethsw)
static int dpaa2_switch_seed_bp(struct ethsw_core *ethsw)
{
int *count, i;
int *count, ret, i;
for (i = 0; i < DPAA2_ETHSW_NUM_BUFS; i += BUFS_PER_CMD) {
ret = dpaa2_switch_add_bufs(ethsw, ethsw->bpid);
count = &ethsw->buf_count;
*count += dpaa2_switch_add_bufs(ethsw, ethsw->bpid);
*count += ret;
if (unlikely(*count < BUFS_PER_CMD))
if (unlikely(ret < BUFS_PER_CMD))
return -ENOMEM;
}


@ -337,7 +337,7 @@ int ice_devlink_create_pf_port(struct ice_pf *pf)
return -EIO;
attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
attrs.phys.port_number = pf->hw.bus.func;
attrs.phys.port_number = pf->hw.pf_id;
/* As FW supports only port split options for whole device,
* set port split options only for first PF.
@ -455,7 +455,7 @@ int ice_devlink_create_vf_port(struct ice_vf *vf)
return -EINVAL;
attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF;
attrs.pci_vf.pf = pf->hw.bus.func;
attrs.pci_vf.pf = pf->hw.pf_id;
attrs.pci_vf.vf = vf->vf_id;
ice_devlink_set_switch_id(pf, &attrs.switch_id);


@ -512,6 +512,25 @@ static void ice_xsk_pool_fill_cb(struct ice_rx_ring *ring)
xsk_pool_fill_cb(ring->xsk_pool, &desc);
}
/**
* ice_get_frame_sz - calculate xdp_buff::frame_sz
* @rx_ring: the ring being configured
*
* Return frame size based on underlying PAGE_SIZE
*/
static unsigned int ice_get_frame_sz(struct ice_rx_ring *rx_ring)
{
unsigned int frame_sz;
#if (PAGE_SIZE >= 8192)
frame_sz = rx_ring->rx_buf_len;
#else
frame_sz = ice_rx_pg_size(rx_ring) / 2;
#endif
return frame_sz;
}
/**
* ice_vsi_cfg_rxq - Configure an Rx queue
* @ring: the ring being configured
@ -576,7 +595,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
}
}
xdp_init_buff(&ring->xdp, ice_rx_pg_size(ring) / 2, &ring->xdp_rxq);
xdp_init_buff(&ring->xdp, ice_get_frame_sz(ring), &ring->xdp_rxq);
ring->xdp.data = NULL;
ring->xdp_ext.pkt_ctx = &ring->pkt_ctx;
err = ice_setup_rx_ctx(ring);


@ -521,30 +521,6 @@ err:
return -ENOMEM;
}
/**
* ice_rx_frame_truesize
* @rx_ring: ptr to Rx ring
* @size: size
*
* calculate the truesize with taking into the account PAGE_SIZE of
* underlying arch
*/
static unsigned int
ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, const unsigned int size)
{
unsigned int truesize;
#if (PAGE_SIZE < 8192)
truesize = ice_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
#else
truesize = rx_ring->rx_offset ?
SKB_DATA_ALIGN(rx_ring->rx_offset + size) +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
SKB_DATA_ALIGN(size);
#endif
return truesize;
}
/**
* ice_run_xdp - Executes an XDP program on initialized xdp_buff
* @rx_ring: Rx ring
@ -837,16 +813,15 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
if (!dev_page_is_reusable(page))
return false;
#if (PAGE_SIZE < 8192)
/* if we are only owner of page we can reuse it */
if (unlikely(rx_buf->pgcnt - pagecnt_bias > 1))
return false;
#else
#if (PAGE_SIZE >= 8192)
#define ICE_LAST_OFFSET \
(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_2048)
(SKB_WITH_OVERHEAD(PAGE_SIZE) - ICE_RXBUF_3072)
if (rx_buf->page_offset > ICE_LAST_OFFSET)
return false;
#endif /* PAGE_SIZE < 8192) */
#endif /* PAGE_SIZE >= 8192) */
/* If we have drained the page fragment pool we need to update
* the pagecnt_bias and page count so that we fully restock the
@ -949,12 +924,7 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
struct ice_rx_buf *rx_buf;
rx_buf = &rx_ring->rx_buf[ntc];
rx_buf->pgcnt =
#if (PAGE_SIZE < 8192)
page_count(rx_buf->page);
#else
0;
#endif
rx_buf->pgcnt = page_count(rx_buf->page);
prefetchw(rx_buf->page);
if (!size)
@ -1160,11 +1130,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
bool failure;
u32 first;
/* Frame size depend on rx_ring setup when PAGE_SIZE=4K */
#if (PAGE_SIZE < 8192)
xdp->frame_sz = ice_rx_frame_truesize(rx_ring, 0);
#endif
xdp_prog = READ_ONCE(rx_ring->xdp_prog);
if (xdp_prog) {
xdp_ring = rx_ring->xdp_ring;
@ -1223,10 +1188,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
hard_start = page_address(rx_buf->page) + rx_buf->page_offset -
offset;
xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
#if (PAGE_SIZE > 4096)
/* At larger PAGE_SIZE, frame_sz depend on len size */
xdp->frame_sz = ice_rx_frame_truesize(rx_ring, size);
#endif
xdp_buff_clear_frags_flag(xdp);
} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
break;
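
For context on the ICE_LAST_OFFSET change above: with PAGE_SIZE >= 8192 the driver fills 3072-byte Rx buffers, so the highest page offset still eligible for reuse must leave room for a full 3072-byte buffer, not a 2048-byte one. A rough userspace check of the cutoff arithmetic; the skb_shared_info overhead below is an assumed round number, not the kernel's exact SKB_WITH_OVERHEAD() value.

#include <stdio.h>

int main(void)
{
	unsigned int page_size = 8192;
	unsigned int shinfo_overhead = 320;  /* assumed, for illustration */
	unsigned int usable = page_size - shinfo_overhead;

	/* old cutoff: permitted offsets with no room for a 3072-byte buffer */
	printf("cutoff with ICE_RXBUF_2048: %u\n", usable - 2048);
	/* new cutoff: offset + 3072 always stays within the usable page */
	printf("cutoff with ICE_RXBUF_3072: %u\n", usable - 3072);
	return 0;
}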


@ -4808,6 +4808,7 @@ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
#if (PAGE_SIZE < 8192)
if (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB ||
IGB_2K_TOO_SMALL_WITH_PADDING ||
rd32(E1000_RCTL) & E1000_RCTL_SBP)
set_ring_uses_large_buffer(rx_ring);
#endif


@ -632,7 +632,9 @@ int rvu_mbox_handler_cpt_inline_ipsec_cfg(struct rvu *rvu,
return ret;
}
static bool is_valid_offset(struct rvu *rvu, struct cpt_rd_wr_reg_msg *req)
static bool validate_and_update_reg_offset(struct rvu *rvu,
struct cpt_rd_wr_reg_msg *req,
u64 *reg_offset)
{
u64 offset = req->reg_offset;
int blkaddr, num_lfs, lf;
@ -663,6 +665,11 @@ static bool is_valid_offset(struct rvu *rvu, struct cpt_rd_wr_reg_msg *req)
if (lf < 0)
return false;
/* Translate local LF's offset to global CPT LF's offset to
* access LFX register.
*/
*reg_offset = (req->reg_offset & 0xFF000) + (lf << 3);
return true;
} else if (!(req->hdr.pcifunc & RVU_PFVF_FUNC_MASK)) {
/* Registers that can be accessed from PF */
@ -697,7 +704,7 @@ int rvu_mbox_handler_cpt_rd_wr_register(struct rvu *rvu,
struct cpt_rd_wr_reg_msg *rsp)
{
u64 offset = req->reg_offset;
int blkaddr, lf;
int blkaddr;
blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
if (blkaddr < 0)
@ -708,18 +715,10 @@ int rvu_mbox_handler_cpt_rd_wr_register(struct rvu *rvu,
!is_cpt_vf(rvu, req->hdr.pcifunc))
return CPT_AF_ERR_ACCESS_DENIED;
if (!is_valid_offset(rvu, req))
if (!validate_and_update_reg_offset(rvu, req, &offset))
return CPT_AF_ERR_ACCESS_DENIED;
/* Translate local LF used by VFs to global CPT LF */
lf = rvu_get_lf(rvu, &rvu->hw->block[blkaddr], req->hdr.pcifunc,
(offset & 0xFFF) >> 3);
/* Translate local LF's offset to global CPT LF's offset */
offset &= 0xFF000;
offset += lf << 3;
rsp->reg_offset = offset;
rsp->reg_offset = req->reg_offset;
rsp->ret_val = req->ret_val;
rsp->is_write = req->is_write;


@ -998,6 +998,7 @@ void mlx5e_build_ptys2ethtool_map(void);
bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev, u8 page_shift,
enum mlx5e_mpwrq_umr_mode umr_mode);
void mlx5e_shampo_fill_umr(struct mlx5e_rq *rq, int len);
void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq);
void mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats);
void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s);


@ -1236,6 +1236,14 @@ void mlx5e_free_rx_missing_descs(struct mlx5e_rq *rq)
rq->mpwqe.actual_wq_head = wq->head;
rq->mpwqe.umr_in_progress = 0;
rq->mpwqe.umr_completed = 0;
if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state)) {
struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
u16 len;
len = (shampo->pi - shampo->ci) & shampo->hd_per_wq;
mlx5e_shampo_fill_umr(rq, len);
}
}
void mlx5e_free_rx_descs(struct mlx5e_rq *rq)
@ -3020,15 +3028,18 @@ int mlx5e_update_tx_netdev_queues(struct mlx5e_priv *priv)
static void mlx5e_set_default_xps_cpumasks(struct mlx5e_priv *priv,
struct mlx5e_params *params)
{
struct mlx5_core_dev *mdev = priv->mdev;
int num_comp_vectors, ix, irq;
num_comp_vectors = mlx5_comp_vectors_max(mdev);
int ix;
for (ix = 0; ix < params->num_channels; ix++) {
cpumask_clear(priv->scratchpad.cpumask);
int num_comp_vectors, irq, vec_ix;
struct mlx5_core_dev *mdev;
for (irq = ix; irq < num_comp_vectors; irq += params->num_channels) {
mdev = mlx5_sd_ch_ix_get_dev(priv->mdev, ix);
num_comp_vectors = mlx5_comp_vectors_max(mdev);
cpumask_clear(priv->scratchpad.cpumask);
vec_ix = mlx5_sd_ch_ix_get_vec_ix(mdev, ix);
for (irq = vec_ix; irq < num_comp_vectors; irq += params->num_channels) {
int cpu = mlx5_comp_vector_get_cpu(mdev, irq);
cpumask_set_cpu(cpu, priv->scratchpad.cpumask);


@ -735,6 +735,7 @@ static int mlx5e_alloc_rx_hd_mpwqe(struct mlx5e_rq *rq)
ksm_entries = bitmap_find_window(shampo->bitmap,
shampo->hd_per_wqe,
shampo->hd_per_wq, shampo->pi);
ksm_entries = ALIGN_DOWN(ksm_entries, MLX5E_SHAMPO_WQ_HEADER_PER_PAGE);
if (!ksm_entries)
return 0;
@ -962,26 +963,31 @@ void mlx5e_free_icosq_descs(struct mlx5e_icosq *sq)
sq->cc = sqcc;
}
static void mlx5e_handle_shampo_hd_umr(struct mlx5e_shampo_umr umr,
struct mlx5e_icosq *sq)
void mlx5e_shampo_fill_umr(struct mlx5e_rq *rq, int len)
{
struct mlx5e_channel *c = container_of(sq, struct mlx5e_channel, icosq);
struct mlx5e_shampo_hd *shampo;
/* assume 1:1 relationship between RQ and icosq */
struct mlx5e_rq *rq = &c->rq;
int end, from, len = umr.len;
struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
int end, from, full_len = len;
shampo = rq->mpwqe.shampo;
end = shampo->hd_per_wq;
from = shampo->ci;
if (from + len > shampo->hd_per_wq) {
if (from + len > end) {
len -= end - from;
bitmap_set(shampo->bitmap, from, end - from);
from = 0;
}
bitmap_set(shampo->bitmap, from, len);
shampo->ci = (shampo->ci + umr.len) & (shampo->hd_per_wq - 1);
shampo->ci = (shampo->ci + full_len) & (shampo->hd_per_wq - 1);
}
static void mlx5e_handle_shampo_hd_umr(struct mlx5e_shampo_umr umr,
struct mlx5e_icosq *sq)
{
struct mlx5e_channel *c = container_of(sq, struct mlx5e_channel, icosq);
/* assume 1:1 relationship between RQ and icosq */
struct mlx5e_rq *rq = &c->rq;
mlx5e_shampo_fill_umr(rq, umr.len);
}
int mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
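
The ALIGN_DOWN() added above rounds the spare KSM entries down to a whole number of header pages, so a SHAMPO UMR never covers a page only partially. A standalone sketch of that rounding; the 8 headers per page is an illustrative assumption, not the real MLX5E_SHAMPO_WQ_HEADER_PER_PAGE value.

#include <assert.h>

/* power-of-two variant of the kernel's ALIGN_DOWN() */
#define ALIGN_DOWN_POW2(x, a) ((x) & ~((unsigned int)(a) - 1))

int main(void)
{
	assert(ALIGN_DOWN_POW2(23, 8) == 16); /* 23 entries -> 2 whole pages */
	assert(ALIGN_DOWN_POW2(16, 8) == 16); /* already aligned, unchanged */
	assert(ALIGN_DOWN_POW2(7, 8) == 0);   /* less than one page -> defer */
	return 0;
}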


@ -386,7 +386,8 @@ static int ipsec_fs_roce_tx_mpv_create(struct mlx5_core_dev *mdev,
return -EOPNOTSUPP;
peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
if (!peer_priv) {
if (!peer_priv || !peer_priv->ipsec) {
mlx5_core_err(mdev, "IPsec not supported on master device\n");
err = -EOPNOTSUPP;
goto release_peer;
}
@ -455,7 +456,8 @@ static int ipsec_fs_roce_rx_mpv_create(struct mlx5_core_dev *mdev,
return -EOPNOTSUPP;
peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
if (!peer_priv) {
if (!peer_priv || !peer_priv->ipsec) {
mlx5_core_err(mdev, "IPsec not supported on master device\n");
err = -EOPNOTSUPP;
goto release_peer;
}


@ -453,9 +453,158 @@ static u16 ocelot_vlan_unaware_pvid(struct ocelot *ocelot,
return VLAN_N_VID - bridge_num - 1;
}
/**
* ocelot_update_vlan_reclassify_rule() - Make switch aware only of bridge VLAN TPID
*
* @ocelot: Switch private data structure
* @port: Index of ingress port
*
* IEEE 802.1Q-2018 clauses "5.5 C-VLAN component conformance" and "5.6 S-VLAN
* component conformance" suggest that a C-VLAN component should only recognize
* and filter on C-Tags, and an S-VLAN component should only recognize and
* process based on S-Tags.
*
* In Linux, as per commit 1a0b20b25732 ("Merge branch 'bridge-next'"), C-VLAN
* components are largely represented by a bridge with vlan_protocol 802.1Q,
* and S-VLAN components by a bridge with vlan_protocol 802.1ad.
*
* Currently the driver only offloads vlan_protocol 802.1Q, but the hardware
* design is non-conformant, because the switch assigns each frame to a VLAN
* based on an entirely different question, as detailed in figure "Basic VLAN
* Classification Flow" from its manual and reproduced below.
*
* Set TAG_TYPE, PCP, DEI, VID to port-default values in VLAN_CFG register
* if VLAN_AWARE_ENA[port] and frame has outer tag then:
* if VLAN_INNER_TAG_ENA[port] and frame has inner tag then:
* TAG_TYPE = (Frame.InnerTPID <> 0x8100)
* Set PCP, DEI, VID to values from inner VLAN header
* else:
* TAG_TYPE = (Frame.OuterTPID <> 0x8100)
* Set PCP, DEI, VID to values from outer VLAN header
* if VID == 0 then:
* VID = VLAN_CFG.VLAN_VID
*
* Summarized, the switch will recognize both 802.1Q and 802.1ad TPIDs as VLAN
* "with equal rights", and just set the TAG_TYPE bit to 0 (if 802.1Q) or to 1
* (if 802.1ad). It will classify based on whichever of the tags is "outer", no
* matter what TPID that may have (or "inner", if VLAN_INNER_TAG_ENA[port]).
*
* In the VLAN Table, the TAG_TYPE information is not accessible - just the
* classified VID is - so it is as if each VLAN Table entry is for 2 VLANs:
* C-VLAN X, and S-VLAN X.
*
* Whereas the Linux bridge behavior is to only filter on frames with a TPID
* equal to the vlan_protocol, and treat everything else as VLAN-untagged.
*
* Consider an ingress packet tagged with 802.1ad VID=3 and 802.1Q VID=5,
* received on a bridge vlan_filtering=1 vlan_protocol=802.1Q port. This frame
* should be treated as 802.1Q-untagged, and classified to the PVID of that
* bridge port. Not to VID=3, and not to VID=5.
*
* The VCAP IS1 TCAM has everything we need to overwrite the choices made in
* the basic VLAN classification pipeline: it can match on TAG_TYPE in the key,
* and it can modify the classified VID in the action. Thus, for each port
* under a vlan_filtering bridge, we can insert a rule in VCAP IS1 lookup 0 to
* match on 802.1ad tagged frames and modify their classified VID to the 802.1Q
* PVID of the port. This effectively makes it appear to the outside world as
* if those packets were processed as VLAN-untagged.
*
* The rule needs to be updated each time the bridge PVID changes, and needs
* to be deleted if the bridge PVID is deleted, or if the port becomes
* VLAN-unaware.
*/
static int ocelot_update_vlan_reclassify_rule(struct ocelot *ocelot, int port)
{
unsigned long cookie = OCELOT_VCAP_IS1_VLAN_RECLASSIFY(ocelot, port);
struct ocelot_vcap_block *block_vcap_is1 = &ocelot->block[VCAP_IS1];
struct ocelot_port *ocelot_port = ocelot->ports[port];
const struct ocelot_bridge_vlan *pvid_vlan;
struct ocelot_vcap_filter *filter;
int err, val, pcp, dei;
bool vid_replace_ena;
u16 vid;
pvid_vlan = ocelot_port->pvid_vlan;
vid_replace_ena = ocelot_port->vlan_aware && pvid_vlan;
filter = ocelot_vcap_block_find_filter_by_id(block_vcap_is1, cookie,
false);
if (!vid_replace_ena) {
/* If the reclassification filter doesn't need to exist, delete
* it if it was previously installed, and exit doing nothing
* otherwise.
*/
if (filter)
return ocelot_vcap_filter_del(ocelot, filter);
return 0;
}
/* The reclassification rule must apply. See if it already exists
* or if it must be created.
*/
/* Treating as VLAN-untagged means using a classified VID equal to
* the bridge PVID, and PCP/DEI set to the port default QoS values.
*/
vid = pvid_vlan->vid;
val = ocelot_read_gix(ocelot, ANA_PORT_QOS_CFG, port);
pcp = ANA_PORT_QOS_CFG_QOS_DEFAULT_VAL_X(val);
dei = !!(val & ANA_PORT_QOS_CFG_DP_DEFAULT_VAL);
if (filter) {
bool changed = false;
/* Filter exists, just update it */
if (filter->action.vid != vid) {
filter->action.vid = vid;
changed = true;
}
if (filter->action.pcp != pcp) {
filter->action.pcp = pcp;
changed = true;
}
if (filter->action.dei != dei) {
filter->action.dei = dei;
changed = true;
}
if (!changed)
return 0;
return ocelot_vcap_filter_replace(ocelot, filter);
}
/* Filter doesn't exist, create it */
filter = kzalloc(sizeof(*filter), GFP_KERNEL);
if (!filter)
return -ENOMEM;
filter->key_type = OCELOT_VCAP_KEY_ANY;
filter->ingress_port_mask = BIT(port);
filter->vlan.tpid = OCELOT_VCAP_BIT_1;
filter->prio = 1;
filter->id.cookie = cookie;
filter->id.tc_offload = false;
filter->block_id = VCAP_IS1;
filter->type = OCELOT_VCAP_FILTER_OFFLOAD;
filter->lookup = 0;
filter->action.vid_replace_ena = true;
filter->action.pcp_dei_ena = true;
filter->action.vid = vid;
filter->action.pcp = pcp;
filter->action.dei = dei;
err = ocelot_vcap_filter_add(ocelot, filter, NULL);
if (err)
kfree(filter);
return err;
}
/* Default vlan to classify for untagged frames (may be zero) */
static void ocelot_port_set_pvid(struct ocelot *ocelot, int port,
const struct ocelot_bridge_vlan *pvid_vlan)
static int ocelot_port_set_pvid(struct ocelot *ocelot, int port,
const struct ocelot_bridge_vlan *pvid_vlan)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
u16 pvid = ocelot_vlan_unaware_pvid(ocelot, ocelot_port->bridge);
@ -475,15 +624,23 @@ static void ocelot_port_set_pvid(struct ocelot *ocelot, int port,
* happens automatically), but also 802.1p traffic which gets
* classified to VLAN 0, but that is always in our RX filter, so it
* would get accepted were it not for this setting.
*
* Also, we only support the bridge 802.1Q VLAN protocol, so
* 802.1ad-tagged frames (carrying S-Tags) should be considered
* 802.1Q-untagged, and also dropped.
*/
if (!pvid_vlan && ocelot_port->vlan_aware)
val = ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA;
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_S_TAGGED_ENA;
ocelot_rmw_gix(ocelot, val,
ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA,
ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA |
ANA_PORT_DROP_CFG_DROP_S_TAGGED_ENA,
ANA_PORT_DROP_CFG, port);
return ocelot_update_vlan_reclassify_rule(ocelot, port);
}
static struct ocelot_bridge_vlan *ocelot_bridge_vlan_find(struct ocelot *ocelot,
@ -631,7 +788,10 @@ int ocelot_port_vlan_filtering(struct ocelot *ocelot, int port,
ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M,
ANA_PORT_VLAN_CFG, port);
ocelot_port_set_pvid(ocelot, port, ocelot_port->pvid_vlan);
err = ocelot_port_set_pvid(ocelot, port, ocelot_port->pvid_vlan);
if (err)
return err;
ocelot_port_manage_port_tag(ocelot, port);
return 0;
@ -684,9 +844,12 @@ int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
return err;
/* Default ingress vlan classification */
if (pvid)
ocelot_port_set_pvid(ocelot, port,
ocelot_bridge_vlan_find(ocelot, vid));
if (pvid) {
err = ocelot_port_set_pvid(ocelot, port,
ocelot_bridge_vlan_find(ocelot, vid));
if (err)
return err;
}
/* Untagged egress vlan classification */
ocelot_port_manage_port_tag(ocelot, port);
@ -712,8 +875,11 @@ int ocelot_vlan_del(struct ocelot *ocelot, int port, u16 vid)
return err;
/* Ingress */
if (del_pvid)
ocelot_port_set_pvid(ocelot, port, NULL);
if (del_pvid) {
err = ocelot_port_set_pvid(ocelot, port, NULL);
if (err)
return err;
}
/* Egress */
ocelot_port_manage_port_tag(ocelot, port);
@ -1099,6 +1265,48 @@ void ocelot_ptp_rx_timestamp(struct ocelot *ocelot, struct sk_buff *skb,
}
EXPORT_SYMBOL(ocelot_ptp_rx_timestamp);
void ocelot_lock_inj_grp(struct ocelot *ocelot, int grp)
__acquires(&ocelot->inj_lock)
{
spin_lock(&ocelot->inj_lock);
}
EXPORT_SYMBOL_GPL(ocelot_lock_inj_grp);
void ocelot_unlock_inj_grp(struct ocelot *ocelot, int grp)
__releases(&ocelot->inj_lock)
{
spin_unlock(&ocelot->inj_lock);
}
EXPORT_SYMBOL_GPL(ocelot_unlock_inj_grp);
void ocelot_lock_xtr_grp(struct ocelot *ocelot, int grp)
__acquires(&ocelot->inj_lock)
{
spin_lock(&ocelot->inj_lock);
}
EXPORT_SYMBOL_GPL(ocelot_lock_xtr_grp);
void ocelot_unlock_xtr_grp(struct ocelot *ocelot, int grp)
__releases(&ocelot->inj_lock)
{
spin_unlock(&ocelot->inj_lock);
}
EXPORT_SYMBOL_GPL(ocelot_unlock_xtr_grp);
void ocelot_lock_xtr_grp_bh(struct ocelot *ocelot, int grp)
__acquires(&ocelot->xtr_lock)
{
spin_lock_bh(&ocelot->xtr_lock);
}
EXPORT_SYMBOL_GPL(ocelot_lock_xtr_grp_bh);
void ocelot_unlock_xtr_grp_bh(struct ocelot *ocelot, int grp)
__releases(&ocelot->xtr_lock)
{
spin_unlock_bh(&ocelot->xtr_lock);
}
EXPORT_SYMBOL_GPL(ocelot_unlock_xtr_grp_bh);
int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **nskb)
{
u64 timestamp, src_port, len;
@ -1109,6 +1317,8 @@ int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **nskb)
u32 val, *buf;
int err;
lockdep_assert_held(&ocelot->xtr_lock);
err = ocelot_xtr_poll_xfh(ocelot, grp, xfh);
if (err)
return err;
@ -1184,6 +1394,8 @@ bool ocelot_can_inject(struct ocelot *ocelot, int grp)
{
u32 val = ocelot_read(ocelot, QS_INJ_STATUS);
lockdep_assert_held(&ocelot->inj_lock);
if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))))
return false;
if (val & QS_INJ_STATUS_WMARK_REACHED(BIT(grp)))
@ -1193,28 +1405,55 @@ bool ocelot_can_inject(struct ocelot *ocelot, int grp)
}
EXPORT_SYMBOL(ocelot_can_inject);
void ocelot_ifh_port_set(void *ifh, int port, u32 rew_op, u32 vlan_tag)
/**
* ocelot_ifh_set_basic - Set basic information in Injection Frame Header
* @ifh: Pointer to Injection Frame Header memory
* @ocelot: Switch private data structure
* @port: Egress port number
* @rew_op: Egress rewriter operation for PTP
* @skb: Pointer to socket buffer (packet)
*
* Populate the Injection Frame Header with basic information for this skb: the
* analyzer bypass bit, destination port, VLAN info, egress rewriter info.
*/
void ocelot_ifh_set_basic(void *ifh, struct ocelot *ocelot, int port,
u32 rew_op, struct sk_buff *skb)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
struct net_device *dev = skb->dev;
u64 vlan_tci, tag_type;
int qos_class;
ocelot_xmit_get_vlan_info(skb, ocelot_port->bridge, &vlan_tci,
&tag_type);
qos_class = netdev_get_num_tc(dev) ?
netdev_get_prio_tc_map(dev, skb->priority) : skb->priority;
memset(ifh, 0, OCELOT_TAG_LEN);
ocelot_ifh_set_bypass(ifh, 1);
ocelot_ifh_set_src(ifh, BIT_ULL(ocelot->num_phys_ports));
ocelot_ifh_set_dest(ifh, BIT_ULL(port));
ocelot_ifh_set_tag_type(ifh, IFH_TAG_TYPE_C);
if (vlan_tag)
ocelot_ifh_set_vlan_tci(ifh, vlan_tag);
ocelot_ifh_set_qos_class(ifh, qos_class);
ocelot_ifh_set_tag_type(ifh, tag_type);
ocelot_ifh_set_vlan_tci(ifh, vlan_tci);
if (rew_op)
ocelot_ifh_set_rew_op(ifh, rew_op);
}
EXPORT_SYMBOL(ocelot_ifh_port_set);
EXPORT_SYMBOL(ocelot_ifh_set_basic);
void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
u32 rew_op, struct sk_buff *skb)
{
u32 ifh[OCELOT_TAG_LEN / 4] = {0};
u32 ifh[OCELOT_TAG_LEN / 4];
unsigned int i, count, last;
lockdep_assert_held(&ocelot->inj_lock);
ocelot_write_rix(ocelot, QS_INJ_CTRL_GAP_SIZE(1) |
QS_INJ_CTRL_SOF, QS_INJ_CTRL, grp);
ocelot_ifh_port_set(ifh, port, rew_op, skb_vlan_tag_get(skb));
ocelot_ifh_set_basic(ifh, ocelot, port, rew_op, skb);
for (i = 0; i < OCELOT_TAG_LEN / 4; i++)
ocelot_write_rix(ocelot, ifh[i], QS_INJ_WR, grp);
@ -1247,6 +1486,8 @@ EXPORT_SYMBOL(ocelot_port_inject_frame);
void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp)
{
lockdep_assert_held(&ocelot->xtr_lock);
while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp))
ocelot_read_rix(ocelot, QS_XTR_RD, grp);
}
@ -2532,7 +2773,7 @@ int ocelot_port_set_default_prio(struct ocelot *ocelot, int port, u8 prio)
ANA_PORT_QOS_CFG,
port);
return 0;
return ocelot_update_vlan_reclassify_rule(ocelot, port);
}
EXPORT_SYMBOL_GPL(ocelot_port_set_default_prio);
@ -2929,6 +3170,8 @@ int ocelot_init(struct ocelot *ocelot)
mutex_init(&ocelot->fwd_domain_lock);
spin_lock_init(&ocelot->ptp_clock_lock);
spin_lock_init(&ocelot->ts_id_lock);
spin_lock_init(&ocelot->inj_lock);
spin_lock_init(&ocelot->xtr_lock);
ocelot->owq = alloc_ordered_workqueue("ocelot-owq", 0);
if (!ocelot->owq)


@ -665,8 +665,7 @@ static int ocelot_fdma_prepare_skb(struct ocelot *ocelot, int port, u32 rew_op,
ifh = skb_push(skb, OCELOT_TAG_LEN);
skb_put(skb, ETH_FCS_LEN);
memset(ifh, 0, OCELOT_TAG_LEN);
ocelot_ifh_port_set(ifh, port, rew_op, skb_vlan_tag_get(skb));
ocelot_ifh_set_basic(ifh, ocelot, port, rew_op, skb);
return 0;
}


@ -695,6 +695,7 @@ static void is1_entry_set(struct ocelot *ocelot, int ix,
vcap_key_bit_set(vcap, &data, VCAP_IS1_HK_L2_MC, filter->dmac_mc);
vcap_key_bit_set(vcap, &data, VCAP_IS1_HK_L2_BC, filter->dmac_bc);
vcap_key_bit_set(vcap, &data, VCAP_IS1_HK_VLAN_TAGGED, tag->tagged);
vcap_key_bit_set(vcap, &data, VCAP_IS1_HK_TPID, tag->tpid);
vcap_key_set(vcap, &data, VCAP_IS1_HK_VID,
tag->vid.value, tag->vid.mask);
vcap_key_set(vcap, &data, VCAP_IS1_HK_PCP,


@ -51,6 +51,8 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
struct ocelot *ocelot = arg;
int grp = 0, err;
ocelot_lock_xtr_grp(ocelot, grp);
while (ocelot_read(ocelot, QS_XTR_DATA_PRESENT) & BIT(grp)) {
struct sk_buff *skb;
@ -69,6 +71,8 @@ out:
if (err < 0)
ocelot_drain_cpu_queue(ocelot, 0);
ocelot_unlock_xtr_grp(ocelot, grp);
return IRQ_HANDLED;
}


@ -124,8 +124,12 @@ static int ngbe_phylink_init(struct wx *wx)
MAC_SYM_PAUSE | MAC_ASYM_PAUSE;
config->mac_managed_pm = true;
phy_mode = PHY_INTERFACE_MODE_RGMII_ID;
__set_bit(PHY_INTERFACE_MODE_RGMII_ID, config->supported_interfaces);
/* The MAC can only add the Tx delay and it cannot be modified.
* So just disable the Tx delay in the PHY; it does not matter to the
* internal PHY.
*/
phy_mode = PHY_INTERFACE_MODE_RGMII_RXID;
__set_bit(PHY_INTERFACE_MODE_RGMII_RXID, config->supported_interfaces);
phylink = phylink_create(config, NULL, phy_mode, &ngbe_mac_ops);
if (IS_ERR(phylink))


@ -170,6 +170,7 @@
#define XAE_UAW0_OFFSET 0x00000700 /* Unicast address word 0 */
#define XAE_UAW1_OFFSET 0x00000704 /* Unicast address word 1 */
#define XAE_FMI_OFFSET 0x00000708 /* Frame Filter Control */
#define XAE_FFE_OFFSET 0x0000070C /* Frame Filter Enable */
#define XAE_AF0_OFFSET 0x00000710 /* Address Filter 0 */
#define XAE_AF1_OFFSET 0x00000714 /* Address Filter 1 */


@ -432,7 +432,7 @@ static int netdev_set_mac_address(struct net_device *ndev, void *p)
*/
static void axienet_set_multicast_list(struct net_device *ndev)
{
int i;
int i = 0;
u32 reg, af0reg, af1reg;
struct axienet_local *lp = netdev_priv(ndev);
@ -450,7 +450,10 @@ static void axienet_set_multicast_list(struct net_device *ndev)
} else if (!netdev_mc_empty(ndev)) {
struct netdev_hw_addr *ha;
i = 0;
reg = axienet_ior(lp, XAE_FMI_OFFSET);
reg &= ~XAE_FMI_PM_MASK;
axienet_iow(lp, XAE_FMI_OFFSET, reg);
netdev_for_each_mc_addr(ha, ndev) {
if (i >= XAE_MULTICAST_CAM_TABLE_NUM)
break;
@ -469,6 +472,7 @@ static void axienet_set_multicast_list(struct net_device *ndev)
axienet_iow(lp, XAE_FMI_OFFSET, reg);
axienet_iow(lp, XAE_AF0_OFFSET, af0reg);
axienet_iow(lp, XAE_AF1_OFFSET, af1reg);
axienet_iow(lp, XAE_FFE_OFFSET, 1);
i++;
}
} else {
@ -476,18 +480,15 @@ static void axienet_set_multicast_list(struct net_device *ndev)
reg &= ~XAE_FMI_PM_MASK;
axienet_iow(lp, XAE_FMI_OFFSET, reg);
for (i = 0; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
reg |= i;
axienet_iow(lp, XAE_FMI_OFFSET, reg);
axienet_iow(lp, XAE_AF0_OFFSET, 0);
axienet_iow(lp, XAE_AF1_OFFSET, 0);
}
dev_info(&ndev->dev, "Promiscuous mode disabled.\n");
}
for (; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
reg |= i;
axienet_iow(lp, XAE_FMI_OFFSET, reg);
axienet_iow(lp, XAE_FFE_OFFSET, 0);
}
}
/**


@ -555,7 +555,7 @@ static int rtl8211f_led_hw_control_set(struct phy_device *phydev, u8 index,
unsigned long rules)
{
const u16 mask = RTL8211F_LEDCR_MASK << (RTL8211F_LEDCR_SHIFT * index);
u16 reg = RTL8211F_LEDCR_MODE; /* Mode B */
u16 reg = 0;
if (index >= RTL8211F_LED_COUNT)
return -EINVAL;
@ -575,6 +575,7 @@ static int rtl8211f_led_hw_control_set(struct phy_device *phydev, u8 index,
}
reg <<= RTL8211F_LEDCR_SHIFT * index;
reg |= RTL8211F_LEDCR_MODE; /* Mode B */
return phy_modify_paged(phydev, 0xd04, RTL8211F_LEDCR, mask, reg);
}


@ -2867,8 +2867,8 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
if (err < 0)
goto err_xdp_reg_mem_model;
virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, qp_index));
virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
return 0;


@ -5,6 +5,8 @@
#ifndef _NET_DSA_TAG_OCELOT_H
#define _NET_DSA_TAG_OCELOT_H
#include <linux/if_bridge.h>
#include <linux/if_vlan.h>
#include <linux/kthread.h>
#include <linux/packing.h>
#include <linux/skbuff.h>
@ -273,4 +275,49 @@ static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
return rew_op;
}
/**
* ocelot_xmit_get_vlan_info: Determine VLAN_TCI and TAG_TYPE for injected frame
* @skb: Pointer to socket buffer
* @br: Pointer to bridge device that the port is under, if any
* @vlan_tci: Pointer to the output VLAN TCI to place in the injection header
* @tag_type: Pointer to the output tag type (IFH_TAG_TYPE_C or IFH_TAG_TYPE_S)
*
* If the port is under a VLAN-aware bridge, remove the VLAN header from the
* payload and move it into the DSA tag, which will make the switch classify
* the packet to the bridge VLAN. Otherwise, leave the classified VLAN at zero,
* which is the pvid of standalone ports (OCELOT_STANDALONE_PVID), although not
* of VLAN-unaware bridge ports (that would be ocelot_vlan_unaware_pvid()).
* Anyway, VID 0 is fine because it is stripped on egress for these port modes,
* and source address learning is not performed for packets injected from the
* CPU anyway, so it doesn't matter that the VID is "wrong".
*/
static inline void ocelot_xmit_get_vlan_info(struct sk_buff *skb,
struct net_device *br,
u64 *vlan_tci, u64 *tag_type)
{
struct vlan_ethhdr *hdr;
u16 proto, tci;
if (!br || !br_vlan_enabled(br)) {
*vlan_tci = 0;
*tag_type = IFH_TAG_TYPE_C;
return;
}
hdr = (struct vlan_ethhdr *)skb_mac_header(skb);
br_vlan_get_proto(br, &proto);
if (ntohs(hdr->h_vlan_proto) == proto) {
vlan_remove_tag(skb, &tci);
*vlan_tci = tci;
} else {
rcu_read_lock();
br_vlan_get_pvid_rcu(br, &tci);
rcu_read_unlock();
*vlan_tci = tci;
}
*tag_type = (proto != ETH_P_8021Q) ? IFH_TAG_TYPE_S : IFH_TAG_TYPE_C;
}
#endif
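
A hedged usage sketch for the helper above, mirroring how ocelot_ifh_set_basic() consumes it in this same series; the wrapper name is hypothetical, and br is the bridge upper of the egress port, or NULL for standalone ports.

static void example_fill_ifh_vlan(void *ifh, struct sk_buff *skb,
				  struct net_device *br)
{
	u64 vlan_tci, tag_type;

	/* May strip the VLAN header from the skb and move it into the tag */
	ocelot_xmit_get_vlan_info(skb, br, &vlan_tci, &tag_type);
	ocelot_ifh_set_tag_type(ifh, tag_type);
	ocelot_ifh_set_vlan_tci(ifh, vlan_tci);
}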


@ -206,14 +206,17 @@ enum {
*/
HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
/* When this quirk is set, the controller has validated that
* LE states reported through the HCI_LE_READ_SUPPORTED_STATES are
* valid. This mechanism is necessary as many controllers have
* been seen has having trouble initiating a connectable
* advertisement despite the state combination being reported as
* supported.
/* When this quirk is set, the LE states reported through the
* HCI_LE_READ_SUPPORTED_STATES are invalid/broken.
*
* This mechanism is necessary as many controllers have been seen as
* having trouble initiating a connectable advertisement despite the
* state combination being reported as supported.
*
* This quirk can be set before hci_register_dev is called or
* during the hdev->setup vendor callback.
*/
HCI_QUIRK_VALID_LE_STATES,
HCI_QUIRK_BROKEN_LE_STATES,
/* When this quirk is set, then erroneous data reporting
* is ignored. This is mainly due to the fact that the HCI


@ -825,7 +825,7 @@ extern struct mutex hci_cb_list_lock;
} while (0)
#define hci_dev_le_state_simultaneous(hdev) \
(test_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks) && \
(!test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) && \
(hdev->le_states[4] & 0x08) && /* Central */ \
(hdev->le_states[4] & 0x40) && /* Peripheral */ \
(hdev->le_states[3] & 0x10)) /* Simultaneous */
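
For drivers outside this series, the conversion follows the same shape as the btusb and hci_qca hunks above. A sketch with hypothetical names (example_set_le_state_quirks(), le_states_valid), not code from this diff: with the quirk inverted to opt-out, only known-broken controllers get flagged, and unlisted hardware keeps working LE states by default.

static void example_set_le_state_quirks(struct hci_dev *hdev,
					bool le_states_valid)
{
	/* Old opt-in world: every good controller had to announce itself:
	 *
	 *	if (le_states_valid)
	 *		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
	 */

	/* New opt-out world: flag only the broken ones */
	if (!le_states_valid)
		set_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks);
}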


@ -403,14 +403,18 @@ struct dsa_switch {
*/
u32 configure_vlan_while_not_filtering:1;
/* If the switch driver always programs the CPU port as egress tagged
* despite the VLAN configuration indicating otherwise, then setting
* @untag_bridge_pvid will force the DSA receive path to pop the
* bridge's default_pvid VLAN tagged frames to offer a consistent
* behavior between a vlan_filtering=0 and vlan_filtering=1 bridge
* device.
/* Pop the default_pvid of VLAN-unaware bridge ports from tagged frames.
* DEPRECATED: Do NOT set this field in new drivers. Instead look at
* the dsa_software_vlan_untag() comments.
*/
u32 untag_bridge_pvid:1;
/* Pop the default_pvid of VLAN-aware bridge ports from tagged frames.
* Useful if the switch cannot preserve the VLAN tag as seen on the
* wire for user port ingress, and chooses to send all frames as
* VLAN-tagged to the CPU, including those which were originally
* untagged.
*/
u32 untag_vlan_aware_bridge_pvid:1;
/* Let DSA manage the FDB entries towards the
* CPU, based on the software bridge database.


@ -70,6 +70,7 @@ struct kcm_sock {
struct work_struct tx_work;
struct list_head wait_psock_list;
struct sk_buff *seq_skb;
struct mutex tx_mutex;
u32 tx_stopped : 1;
/* Don't use bit fields here, these are set under different locks */


@ -813,6 +813,9 @@ struct ocelot {
const u32 *const *map;
struct list_head stats_regions;
spinlock_t inj_lock;
spinlock_t xtr_lock;
u32 pool_size[OCELOT_SB_NUM][OCELOT_SB_POOL_NUM];
int packet_buffer_size;
int num_frame_refs;
@ -966,10 +969,17 @@ void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target,
u32 val, u32 reg, u32 offset);
/* Packet I/O */
void ocelot_lock_inj_grp(struct ocelot *ocelot, int grp);
void ocelot_unlock_inj_grp(struct ocelot *ocelot, int grp);
void ocelot_lock_xtr_grp(struct ocelot *ocelot, int grp);
void ocelot_unlock_xtr_grp(struct ocelot *ocelot, int grp);
void ocelot_lock_xtr_grp_bh(struct ocelot *ocelot, int grp);
void ocelot_unlock_xtr_grp_bh(struct ocelot *ocelot, int grp);
bool ocelot_can_inject(struct ocelot *ocelot, int grp);
void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
u32 rew_op, struct sk_buff *skb);
void ocelot_ifh_port_set(void *ifh, int port, u32 rew_op, u32 vlan_tag);
void ocelot_ifh_set_basic(void *ifh, struct ocelot *ocelot, int port,
u32 rew_op, struct sk_buff *skb);
int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb);
void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp);
void ocelot_ptp_rx_timestamp(struct ocelot *ocelot, struct sk_buff *skb,


@ -13,6 +13,7 @@
*/
#define OCELOT_VCAP_ES0_TAG_8021Q_RXVLAN(ocelot, port, upstream) ((upstream) << 16 | (port))
#define OCELOT_VCAP_IS1_TAG_8021Q_TXVLAN(ocelot, port) (port)
#define OCELOT_VCAP_IS1_VLAN_RECLASSIFY(ocelot, port) ((ocelot)->num_phys_ports + (port))
#define OCELOT_VCAP_IS2_TAG_8021Q_TXVLAN(ocelot, port) (port)
#define OCELOT_VCAP_IS2_MRP_REDIRECT(ocelot, port) ((ocelot)->num_phys_ports + (port))
#define OCELOT_VCAP_IS2_MRP_TRAP(ocelot) ((ocelot)->num_phys_ports * 2)
@@ -499,6 +500,7 @@ struct ocelot_vcap_key_vlan {
struct ocelot_vcap_u8 pcp; /* PCP (3 bit) */
enum ocelot_vcap_bit dei; /* DEI */
enum ocelot_vcap_bit tagged; /* Tagged/untagged frame */
enum ocelot_vcap_bit tpid;
};
struct ocelot_vcap_key_etype {


@@ -3664,19 +3664,19 @@ static void hci_sched_le(struct hci_dev *hdev)
{
struct hci_chan *chan;
struct sk_buff *skb;
int quote, cnt, tmp;
int quote, *cnt, tmp;
BT_DBG("%s", hdev->name);
if (!hci_conn_num(hdev, LE_LINK))
return;
cnt = hdev->le_pkts ? hdev->le_cnt : hdev->acl_cnt;
cnt = hdev->le_pkts ? &hdev->le_cnt : &hdev->acl_cnt;
__check_timeout(hdev, cnt, LE_LINK);
__check_timeout(hdev, *cnt, LE_LINK);
tmp = cnt;
while (cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
tmp = *cnt;
while (*cnt && (chan = hci_chan_sent(hdev, LE_LINK, &quote))) {
u32 priority = (skb_peek(&chan->data_q))->priority;
while (quote-- && (skb = skb_peek(&chan->data_q))) {
BT_DBG("chan %p skb %p len %d priority %u", chan, skb,
@@ -3691,7 +3691,7 @@ static void hci_sched_le(struct hci_dev *hdev)
hci_send_frame(hdev, skb);
hdev->le_last_tx = jiffies;
cnt--;
(*cnt)--;
chan->sent++;
chan->conn->sent++;
@@ -3701,12 +3701,7 @@ static void hci_sched_le(struct hci_dev *hdev)
}
}
if (hdev->le_pkts)
hdev->le_cnt = cnt;
else
hdev->acl_cnt = cnt;
if (cnt != tmp)
if (*cnt != tmp)
hci_prio_recalculate(hdev, LE_LINK);
}
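
Note: making cnt a pointer means the loop now decrements the device's live budget (le_cnt or acl_cnt) in place instead of writing a stale local copy back at the end, so decrements cannot be lost while the scheduler runs. The pattern in miniature (plain C, illustrative only):

	static void consume_budget(int *cnt)
	{
		while (*cnt > 0)
			(*cnt)--;	/* callers observe each decrement immediately */
	}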


@@ -5920,7 +5920,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
* while we have an existing one in peripheral role.
*/
if (hdev->conn_hash.le_num_peripheral > 0 &&
(!test_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks) ||
(test_bit(HCI_QUIRK_BROKEN_LE_STATES, &hdev->quirks) ||
!(hdev->le_states[3] & 0x10)))
return NULL;


@@ -3456,6 +3456,10 @@ static int pair_device(struct sock *sk, struct hci_dev *hdev, void *data,
* will be kept and this function does nothing.
*/
p = hci_conn_params_add(hdev, &cp->addr.bdaddr, addr_type);
if (!p) {
err = -EIO;
goto unlock;
}
if (p->auto_connect == HCI_AUTO_CONN_EXPLICIT)
p->auto_connect = HCI_AUTO_CONN_DISABLED;


@@ -914,7 +914,7 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,
* Confirms and the responder Enters the passkey.
*/
if (smp->method == OVERLAP) {
if (hcon->role == HCI_ROLE_MASTER)
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
smp->method = CFM_PASSKEY;
else
smp->method = REQ_PASSKEY;
@@ -964,7 +964,7 @@ static u8 smp_confirm(struct smp_chan *smp)
smp_send_cmd(smp->conn, SMP_CMD_PAIRING_CONFIRM, sizeof(cp), &cp);
if (conn->hcon->out)
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
else
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
@@ -980,7 +980,8 @@ static u8 smp_random(struct smp_chan *smp)
int ret;
bt_dev_dbg(conn->hcon->hdev, "conn %p %s", conn,
conn->hcon->out ? "initiator" : "responder");
test_bit(SMP_FLAG_INITIATOR, &smp->flags) ? "initiator" :
"responder");
ret = smp_c1(smp->tk, smp->rrnd, smp->preq, smp->prsp,
hcon->init_addr_type, &hcon->init_addr,
@@ -994,7 +995,7 @@ static u8 smp_random(struct smp_chan *smp)
return SMP_CONFIRM_FAILED;
}
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
u8 stk[16];
__le64 rand = 0;
__le16 ediv = 0;
@@ -1256,14 +1257,15 @@ static void smp_distribute_keys(struct smp_chan *smp)
rsp = (void *) &smp->prsp[1];
/* The responder sends its keys first */
if (hcon->out && (smp->remote_key_dist & KEY_DIST_MASK)) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags) &&
(smp->remote_key_dist & KEY_DIST_MASK)) {
smp_allow_key_dist(smp);
return;
}
req = (void *) &smp->preq[1];
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
keydist = &rsp->init_key_dist;
*keydist &= req->init_key_dist;
} else {
@@ -1432,7 +1434,7 @@ static int sc_mackey_and_ltk(struct smp_chan *smp, u8 mackey[16], u8 ltk[16])
struct hci_conn *hcon = smp->conn->hcon;
u8 *na, *nb, a[7], b[7];
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
na = smp->prnd;
nb = smp->rrnd;
} else {
@@ -1460,7 +1462,7 @@ static void sc_dhkey_check(struct smp_chan *smp)
a[6] = hcon->init_addr_type;
b[6] = hcon->resp_addr_type;
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
local_addr = a;
remote_addr = b;
memcpy(io_cap, &smp->preq[1], 3);
@@ -1539,7 +1541,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
/* The round is only complete when the initiator
* receives pairing random.
*/
if (!hcon->out) {
if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
sizeof(smp->prnd), smp->prnd);
if (smp->passkey_round == 20)
@@ -1567,7 +1569,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
sizeof(smp->prnd), smp->prnd);
return 0;
@@ -1578,7 +1580,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
case SMP_CMD_PUBLIC_KEY:
default:
/* Initiating device starts the round */
if (!hcon->out)
if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
return 0;
bt_dev_dbg(hdev, "Starting passkey round %u",
@@ -1623,7 +1625,7 @@ static int sc_user_reply(struct smp_chan *smp, u16 mgmt_op, __le32 passkey)
}
/* Initiator sends DHKey check first */
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
sc_dhkey_check(smp);
SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
} else if (test_and_clear_bit(SMP_FLAG_DHKEY_PENDING, &smp->flags)) {
@@ -1746,7 +1748,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
struct smp_cmd_pairing rsp, *req = (void *) skb->data;
struct l2cap_chan *chan = conn->smp;
struct hci_dev *hdev = conn->hcon->hdev;
struct smp_chan *smp;
struct smp_chan *smp = chan->data;
u8 key_size, auth, sec_level;
int ret;
@@ -1755,16 +1757,14 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
if (skb->len < sizeof(*req))
return SMP_INVALID_PARAMS;
if (conn->hcon->role != HCI_ROLE_SLAVE)
if (smp && test_bit(SMP_FLAG_INITIATOR, &smp->flags))
return SMP_CMD_NOTSUPP;
if (!chan->data)
if (!smp) {
smp = smp_chan_create(conn);
else
smp = chan->data;
if (!smp)
return SMP_UNSPECIFIED;
if (!smp)
return SMP_UNSPECIFIED;
}
/* We didn't start the pairing, so match remote */
auth = req->auth_req & AUTH_REQ_MASK(hdev);
@@ -1946,7 +1946,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
if (skb->len < sizeof(*rsp))
return SMP_INVALID_PARAMS;
if (conn->hcon->role != HCI_ROLE_MASTER)
if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
return SMP_CMD_NOTSUPP;
skb_pull(skb, sizeof(*rsp));
@@ -2041,7 +2041,7 @@ static u8 sc_check_confirm(struct smp_chan *smp)
if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
return sc_passkey_round(smp, SMP_CMD_PAIRING_CONFIRM);
if (conn->hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
smp->prnd);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
@@ -2063,7 +2063,7 @@ static int fixup_sc_false_positive(struct smp_chan *smp)
u8 auth;
/* The issue is only observed when we're in responder role */
if (hcon->out)
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
return SMP_UNSPECIFIED;
if (hci_dev_test_flag(hdev, HCI_SC_ONLY)) {
@@ -2099,7 +2099,8 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
struct hci_dev *hdev = hcon->hdev;
bt_dev_dbg(hdev, "conn %p %s", conn,
hcon->out ? "initiator" : "responder");
test_bit(SMP_FLAG_INITIATOR, &smp->flags) ? "initiator" :
"responder");
if (skb->len < sizeof(smp->pcnf))
return SMP_INVALID_PARAMS;
@@ -2121,7 +2122,7 @@ static u8 smp_cmd_pairing_confirm(struct l2cap_conn *conn, struct sk_buff *skb)
return ret;
}
if (conn->hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
smp->prnd);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RANDOM);
@@ -2156,7 +2157,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
if (!test_bit(SMP_FLAG_SC, &smp->flags))
return smp_random(smp);
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
pkax = smp->local_pk;
pkbx = smp->remote_pk;
na = smp->prnd;
@@ -2169,7 +2170,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
}
if (smp->method == REQ_OOB) {
if (!hcon->out)
if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags))
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
sizeof(smp->prnd), smp->prnd);
SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
@@ -2180,7 +2181,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
if (smp->method == REQ_PASSKEY || smp->method == DSP_PASSKEY)
return sc_passkey_round(smp, SMP_CMD_PAIRING_RANDOM);
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
u8 cfm[16];
err = smp_f4(smp->tfm_cmac, smp->remote_pk, smp->local_pk,
@@ -2221,7 +2222,7 @@ mackey_and_ltk:
return SMP_UNSPECIFIED;
if (smp->method == REQ_OOB) {
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
sc_dhkey_check(smp);
SMP_ALLOW_CMD(smp, SMP_CMD_DHKEY_CHECK);
}
@@ -2295,10 +2296,27 @@ bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level,
return false;
}
static void smp_send_pairing_req(struct smp_chan *smp, __u8 auth)
{
struct smp_cmd_pairing cp;
if (smp->conn->hcon->type == ACL_LINK)
build_bredr_pairing_cmd(smp, &cp, NULL);
else
build_pairing_cmd(smp->conn, &cp, NULL, auth);
smp->preq[0] = SMP_CMD_PAIRING_REQ;
memcpy(&smp->preq[1], &cp, sizeof(cp));
smp_send_cmd(smp->conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
set_bit(SMP_FLAG_INITIATOR, &smp->flags);
}
static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
{
struct smp_cmd_security_req *rp = (void *) skb->data;
struct smp_cmd_pairing cp;
struct hci_conn *hcon = conn->hcon;
struct hci_dev *hdev = hcon->hdev;
struct smp_chan *smp;
@@ -2347,18 +2365,22 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
skb_pull(skb, sizeof(*rp));
memset(&cp, 0, sizeof(cp));
build_pairing_cmd(conn, &cp, NULL, auth);
smp->preq[0] = SMP_CMD_PAIRING_REQ;
memcpy(&smp->preq[1], &cp, sizeof(cp));
smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
smp_send_pairing_req(smp, auth);
return 0;
}
static void smp_send_security_req(struct smp_chan *smp, __u8 auth)
{
struct smp_cmd_security_req cp;
cp.auth_req = auth;
smp_send_cmd(smp->conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
clear_bit(SMP_FLAG_INITIATOR, &smp->flags);
}
int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
{
struct l2cap_conn *conn = hcon->l2cap_data;
@@ -2427,23 +2449,11 @@ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level)
authreq |= SMP_AUTH_MITM;
}
if (hcon->role == HCI_ROLE_MASTER) {
struct smp_cmd_pairing cp;
if (hcon->role == HCI_ROLE_MASTER)
smp_send_pairing_req(smp, authreq);
else
smp_send_security_req(smp, authreq);
build_pairing_cmd(conn, &cp, NULL, authreq);
smp->preq[0] = SMP_CMD_PAIRING_REQ;
memcpy(&smp->preq[1], &cp, sizeof(cp));
smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(cp), &cp);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
} else {
struct smp_cmd_security_req cp;
cp.auth_req = authreq;
smp_send_cmd(conn, SMP_CMD_SECURITY_REQ, sizeof(cp), &cp);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_REQ);
}
set_bit(SMP_FLAG_INITIATOR, &smp->flags);
ret = 0;
unlock:
@@ -2694,8 +2704,6 @@ static int smp_cmd_sign_info(struct l2cap_conn *conn, struct sk_buff *skb)
static u8 sc_select_method(struct smp_chan *smp)
{
struct l2cap_conn *conn = smp->conn;
struct hci_conn *hcon = conn->hcon;
struct smp_cmd_pairing *local, *remote;
u8 local_mitm, remote_mitm, local_io, remote_io, method;
@@ -2708,7 +2716,7 @@ static u8 sc_select_method(struct smp_chan *smp)
* the "struct smp_cmd_pairing" from them we need to skip the
* first byte which contains the opcode.
*/
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
local = (void *) &smp->preq[1];
remote = (void *) &smp->prsp[1];
} else {
@@ -2777,7 +2785,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
/* Non-initiating device sends its public key after receiving
* the key from the initiating device.
*/
if (!hcon->out) {
if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
err = sc_send_public_key(smp);
if (err)
return err;
@@ -2839,7 +2847,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
}
if (smp->method == REQ_OOB) {
if (hcon->out)
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM,
sizeof(smp->prnd), smp->prnd);
@@ -2848,7 +2856,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
return 0;
}
if (hcon->out)
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_CONFIRM);
if (smp->method == REQ_PASSKEY) {
@@ -2863,7 +2871,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
/* The Initiating device waits for the non-initiating device to
* send the confirm value.
*/
if (conn->hcon->out)
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags))
return 0;
err = smp_f4(smp->tfm_cmac, smp->local_pk, smp->remote_pk, smp->prnd,
@@ -2897,7 +2905,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
a[6] = hcon->init_addr_type;
b[6] = hcon->resp_addr_type;
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
local_addr = a;
remote_addr = b;
memcpy(io_cap, &smp->prsp[1], 3);
@@ -2922,7 +2930,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
if (crypto_memneq(check->e, e, 16))
return SMP_DHKEY_CHECK_FAILED;
if (!hcon->out) {
if (!test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
if (test_bit(SMP_FLAG_WAIT_USER, &smp->flags)) {
set_bit(SMP_FLAG_DHKEY_PENDING, &smp->flags);
return 0;
@@ -2934,7 +2942,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
sc_add_ltk(smp);
if (hcon->out) {
if (test_bit(SMP_FLAG_INITIATOR, &smp->flags)) {
hci_le_start_enc(hcon, 0, 0, smp->tk, smp->enc_key_size);
hcon->enc_key_size = smp->enc_key_size;
}
@@ -3083,7 +3091,6 @@ static void bredr_pairing(struct l2cap_chan *chan)
struct l2cap_conn *conn = chan->conn;
struct hci_conn *hcon = conn->hcon;
struct hci_dev *hdev = hcon->hdev;
struct smp_cmd_pairing req;
struct smp_chan *smp;
bt_dev_dbg(hdev, "chan %p", chan);
@@ -3135,14 +3142,7 @@ static void bredr_pairing(struct l2cap_chan *chan)
bt_dev_dbg(hdev, "starting SMP over BR/EDR");
/* Prepare and send the BR/EDR SMP Pairing Request */
build_bredr_pairing_cmd(smp, &req, NULL);
smp->preq[0] = SMP_CMD_PAIRING_REQ;
memcpy(&smp->preq[1], &req, sizeof(req));
smp_send_cmd(conn, SMP_CMD_PAIRING_REQ, sizeof(req), &req);
SMP_ALLOW_CMD(smp, SMP_CMD_PAIRING_RSP);
smp_send_pairing_req(smp, 0x00);
}
static void smp_resume_cb(struct l2cap_chan *chan)

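Note: the common thread in this file is replacing HCI-role checks (hcon->out, hcon->role) with the SMP_FLAG_INITIATOR flag, which smp_send_pairing_req() sets and smp_send_security_req() clears. The predicate the file now keys on reduces to the following (an illustrative condensation, not a kernel API):

	static bool smp_is_initiator(const struct smp_chan *smp)
	{
		return test_bit(SMP_FLAG_INITIATOR, &smp->flags);
	}
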

@@ -228,7 +228,6 @@ void netpoll_poll_disable(struct net_device *dev)
down(&ni->dev_lock);
srcu_read_unlock(&netpoll_srcu, idx);
}
EXPORT_SYMBOL(netpoll_poll_disable);
void netpoll_poll_enable(struct net_device *dev)
{
@@ -239,7 +238,6 @@ void netpoll_poll_enable(struct net_device *dev)
up(&ni->dev_lock);
rcu_read_unlock();
}
EXPORT_SYMBOL(netpoll_poll_enable);
static void refill_skbs(void)
{


@@ -105,8 +105,9 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
p = netdev_priv(skb->dev);
if (unlikely(cpu_dp->ds->untag_bridge_pvid)) {
nskb = dsa_untag_bridge_pvid(skb);
if (unlikely(cpu_dp->ds->untag_bridge_pvid ||
cpu_dp->ds->untag_vlan_aware_bridge_pvid)) {
nskb = dsa_software_vlan_untag(skb);
if (!nskb) {
kfree_skb(skb);
return 0;


@@ -44,46 +44,81 @@ static inline struct net_device *dsa_conduit_find_user(struct net_device *dev,
return NULL;
}
/* If under a bridge with vlan_filtering=0, make sure to send pvid-tagged
* frames as untagged, since the bridge will not untag them.
/**
* dsa_software_untag_vlan_aware_bridge: Software untagging for VLAN-aware bridge
* @skb: Pointer to received socket buffer (packet)
* @br: Pointer to bridge upper interface of ingress port
* @vid: Parsed VID from packet
*
* The bridge can process tagged packets. Software like STP/PTP may not. The
* bridge can also process untagged packets, to the same effect as if they were
* tagged with the PVID of the ingress port. So packets tagged with the PVID of
* the bridge port must be software-untagged, to support both use cases.
*/
static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb)
static inline void dsa_software_untag_vlan_aware_bridge(struct sk_buff *skb,
struct net_device *br,
u16 vid)
{
struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct net_device *br = dsa_port_bridge_dev_get(dp);
struct net_device *dev = skb->dev;
struct net_device *upper_dev;
u16 vid, pvid, proto;
u16 pvid, proto;
int err;
if (!br || br_vlan_enabled(br))
return skb;
err = br_vlan_get_proto(br, &proto);
if (err)
return skb;
return;
/* Move VLAN tag from data to hwaccel */
if (!skb_vlan_tag_present(skb) && skb->protocol == htons(proto)) {
skb = skb_vlan_untag(skb);
if (!skb)
return NULL;
}
if (!skb_vlan_tag_present(skb))
return skb;
vid = skb_vlan_tag_get_id(skb);
/* We already run under an RCU read-side critical section since
* we are called from netif_receive_skb_list_internal().
*/
err = br_vlan_get_pvid_rcu(dev, &pvid);
err = br_vlan_get_pvid_rcu(skb->dev, &pvid);
if (err)
return skb;
return;
if (vid != pvid)
return skb;
if (vid == pvid && skb->vlan_proto == htons(proto))
__vlan_hwaccel_clear_tag(skb);
}
/**
* dsa_software_untag_vlan_unaware_bridge: Software untagging for VLAN-unaware bridge
* @skb: Pointer to received socket buffer (packet)
* @br: Pointer to bridge upper interface of ingress port
* @vid: Parsed VID from packet
*
* The bridge ignores all VLAN tags. Software like STP/PTP may not (it may run
* on the plain port, or on a VLAN upper interface). Maybe packets are coming
* to software as tagged with a driver-defined VID which is NOT equal to the
* PVID of the bridge port (since the bridge is VLAN-unaware, its configuration
* should NOT be committed to hardware). DSA needs a method for this private
* VID to be communicated by software to it, and if packets are tagged with it,
* software-untag them. Note: the private VID may be different per bridge, to
* support the FDB isolation use case.
*
* FIXME: this is currently implemented based on the broken assumption that
* the "private VID" used by the driver in VLAN-unaware mode is equal to the
* bridge PVID. It should not be, except for a coincidence; the bridge PVID is
* irrelevant to the data path in the VLAN-unaware mode. Thus, the VID that
* this function removes is wrong.
*
* All users of ds->untag_bridge_pvid should fix their drivers, if necessary,
* to make the two independent. Only then, if there still remains a need to
* strip the private VID from packets, then a new ds->ops->get_private_vid()
* API shall be introduced to communicate to DSA what this VID is, which needs
* to be stripped here.
*/
static inline void dsa_software_untag_vlan_unaware_bridge(struct sk_buff *skb,
struct net_device *br,
u16 vid)
{
struct net_device *upper_dev;
u16 pvid, proto;
int err;
err = br_vlan_get_proto(br, &proto);
if (err)
return;
err = br_vlan_get_pvid_rcu(skb->dev, &pvid);
if (err)
return;
if (vid != pvid || skb->vlan_proto != htons(proto))
return;
/* The sad part about attempting to untag from DSA is that we
* don't know, unless we check, if the skb will end up in
@@ -95,10 +130,50 @@ static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb)
* definitely keep the tag, to make sure it keeps working.
*/
upper_dev = __vlan_find_dev_deep_rcu(br, htons(proto), vid);
if (upper_dev)
if (!upper_dev)
__vlan_hwaccel_clear_tag(skb);
}
/**
* dsa_software_vlan_untag: Software VLAN untagging in DSA receive path
* @skb: Pointer to socket buffer (packet)
*
* Receive path method for switches which cannot avoid tagging all packets
* towards the CPU port. Called when ds->untag_bridge_pvid (legacy) or
* ds->untag_vlan_aware_bridge_pvid is set to true.
*
* As a side effect of this method, any VLAN tag from the skb head is moved
* to hwaccel.
*/
static inline struct sk_buff *dsa_software_vlan_untag(struct sk_buff *skb)
{
struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct net_device *br = dsa_port_bridge_dev_get(dp);
u16 vid;
/* software untagging for standalone ports not yet necessary */
if (!br)
return skb;
__vlan_hwaccel_clear_tag(skb);
/* Move VLAN tag from data to hwaccel */
if (!skb_vlan_tag_present(skb)) {
skb = skb_vlan_untag(skb);
if (!skb)
return NULL;
}
if (!skb_vlan_tag_present(skb))
return skb;
vid = skb_vlan_tag_get_id(skb);
if (br_vlan_enabled(br)) {
if (dp->ds->untag_vlan_aware_bridge_pvid)
dsa_software_untag_vlan_aware_bridge(skb, br, vid);
} else {
if (dp->ds->untag_bridge_pvid)
dsa_software_untag_vlan_unaware_bridge(skb, br, vid);
}
return skb;
}
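
Note: taken together, the two helpers reduce the receive-path decision to the predicate below (an illustrative condensation of the logic above, not a kernel API):

	static bool should_sw_untag(bool vlan_aware_br, bool vid_is_pvid,
				    bool proto_matches, bool has_vlan_upper)
	{
		if (!vid_is_pvid || !proto_matches)
			return false;		/* not the port's PVID: keep the tag */
		if (vlan_aware_br)
			return true;		/* dsa_software_untag_vlan_aware_bridge() */
		return !has_vlan_upper;		/* dsa_software_untag_vlan_unaware_bridge() */
	}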


@@ -8,40 +8,6 @@
#define OCELOT_NAME "ocelot"
#define SEVILLE_NAME "seville"
/* If the port is under a VLAN-aware bridge, remove the VLAN header from the
* payload and move it into the DSA tag, which will make the switch classify
* the packet to the bridge VLAN. Otherwise, leave the classified VLAN at zero,
* which is the pvid of standalone and VLAN-unaware bridge ports.
*/
static void ocelot_xmit_get_vlan_info(struct sk_buff *skb, struct dsa_port *dp,
u64 *vlan_tci, u64 *tag_type)
{
struct net_device *br = dsa_port_bridge_dev_get(dp);
struct vlan_ethhdr *hdr;
u16 proto, tci;
if (!br || !br_vlan_enabled(br)) {
*vlan_tci = 0;
*tag_type = IFH_TAG_TYPE_C;
return;
}
hdr = skb_vlan_eth_hdr(skb);
br_vlan_get_proto(br, &proto);
if (ntohs(hdr->h_vlan_proto) == proto) {
vlan_remove_tag(skb, &tci);
*vlan_tci = tci;
} else {
rcu_read_lock();
br_vlan_get_pvid_rcu(br, &tci);
rcu_read_unlock();
*vlan_tci = tci;
}
*tag_type = (proto != ETH_P_8021Q) ? IFH_TAG_TYPE_S : IFH_TAG_TYPE_C;
}
static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
__be32 ifh_prefix, void **ifh)
{
@@ -53,7 +19,8 @@ static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
u32 rew_op = 0;
u64 qos_class;
ocelot_xmit_get_vlan_info(skb, dp, &vlan_tci, &tag_type);
ocelot_xmit_get_vlan_info(skb, dsa_port_bridge_dev_get(dp), &vlan_tci,
&tag_type);
qos_class = netdev_get_num_tc(netdev) ?
netdev_get_prio_tc_map(netdev, skb->priority) : skb->priority;


@@ -97,6 +97,8 @@ static DEFINE_PER_CPU(struct sock_bh_locked, ipv4_tcp_sk) = {
.bh_lock = INIT_LOCAL_LOCK(bh_lock),
};
static DEFINE_MUTEX(tcp_exit_batch_mutex);
static u32 tcp_v4_init_seq(const struct sk_buff *skb)
{
return secure_tcp_seq(ip_hdr(skb)->daddr,
@@ -3514,6 +3516,16 @@ static void __net_exit tcp_sk_exit_batch(struct list_head *net_exit_list)
{
struct net *net;
/* make sure concurrent calls to tcp_sk_exit_batch from net_cleanup_work
* and failed setup_net error unwinding path are serialized.
*
* tcp_twsk_purge() handles twsk in any dead netns, not just those in
* net_exit_list, the thread that dismantles a particular twsk must
* do so without other thread progressing to refcount_dec_and_test() of
* tcp_death_row.tw_refcount.
*/
mutex_lock(&tcp_exit_batch_mutex);
tcp_twsk_purge(net_exit_list);
list_for_each_entry(net, net_exit_list, exit_list) {
@@ -3521,6 +3533,8 @@ static void __net_exit tcp_sk_exit_batch(struct list_head *net_exit_list)
WARN_ON_ONCE(!refcount_dec_and_test(&net->ipv4.tcp_death_row.tw_refcount));
tcp_fastopen_ctx_destroy(net);
}
mutex_unlock(&tcp_exit_batch_mutex);
}
static struct pernet_operations __net_initdata tcp_sk_ops = {

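Note: the comment block is the heart of the fix; both teardown paths (the net_cleanup_work path and the setup_net() error unwind) now take the same mutex, so the twsk purge and the final refcount drop execute as one unit. The same shape as a userspace toy (pthreads, purely illustrative; names and counts do not mirror the kernel's):

	#include <pthread.h>

	static pthread_mutex_t exit_batch_mutex = PTHREAD_MUTEX_INITIALIZER;
	static int tw_refcount = 2;		/* one reference per teardown path */

	static void exit_batch(void)
	{
		pthread_mutex_lock(&exit_batch_mutex);
		/* ... purge this path's timewait sockets ... */
		tw_refcount--;			/* purge + put are now one atomic step */
		pthread_mutex_unlock(&exit_batch_mutex);
	}
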

@@ -279,7 +279,8 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
return ERR_PTR(-EINVAL);
if (unlikely(skb_checksum_start(gso_skb) !=
skb_transport_header(gso_skb)))
skb_transport_header(gso_skb) &&
!(skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)))
return ERR_PTR(-EINVAL);
/* We don't know if egress device can segment and checksum the packet


@@ -70,11 +70,15 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
/* Be paranoid, rather than too clever. */
if (unlikely(hh_len > skb_headroom(skb)) && dev->header_ops) {
/* Make sure idev stays alive */
rcu_read_lock();
skb = skb_expand_head(skb, hh_len);
if (!skb) {
IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
rcu_read_unlock();
return -ENOMEM;
}
rcu_read_unlock();
}
hdr = ipv6_hdr(skb);
@@ -283,11 +287,15 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
head_room += opt->opt_nflen + opt->opt_flen;
if (unlikely(head_room > skb_headroom(skb))) {
/* Make sure idev stays alive */
rcu_read_lock();
skb = skb_expand_head(skb, head_room);
if (!skb) {
IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
rcu_read_unlock();
return -ENOBUFS;
}
rcu_read_unlock();
}
if (opt) {
@@ -1956,6 +1964,7 @@ int ip6_send_skb(struct sk_buff *skb)
struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
int err;
rcu_read_lock();
err = ip6_local_out(net, skb->sk, skb);
if (err) {
if (err > 0)
@@ -1965,6 +1974,7 @@
IPSTATS_MIB_OUTDISCARDS);
}
rcu_read_unlock();
return err;
}


@@ -1507,7 +1507,8 @@ static void ip6_tnl_link_config(struct ip6_tnl *t)
tdev = __dev_get_by_index(t->net, p->link);
if (tdev) {
dev->hard_header_len = tdev->hard_header_len + t_hlen;
dev->needed_headroom = tdev->hard_header_len +
tdev->needed_headroom + t_hlen;
mtu = min_t(unsigned int, tdev->mtu, IP6_MAX_MTU);
mtu = mtu - t_hlen;
@@ -1731,7 +1732,9 @@ ip6_tnl_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
{
struct ip6_tnl *tnl = netdev_priv(dev);
int t_hlen;
t_hlen = tnl->hlen + sizeof(struct ipv6hdr);
if (tnl->parms.proto == IPPROTO_IPV6) {
if (new_mtu < IPV6_MIN_MTU)
return -EINVAL;
@@ -1740,10 +1743,10 @@ int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
return -EINVAL;
}
if (tnl->parms.proto == IPPROTO_IPV6 || tnl->parms.proto == 0) {
if (new_mtu > IP6_MAX_MTU - dev->hard_header_len)
if (new_mtu > IP6_MAX_MTU - dev->hard_header_len - t_hlen)
return -EINVAL;
} else {
if (new_mtu > IP_MAX_MTU - dev->hard_header_len)
if (new_mtu > IP_MAX_MTU - dev->hard_header_len - t_hlen)
return -EINVAL;
}
WRITE_ONCE(dev->mtu, new_mtu);
@@ -1887,12 +1890,11 @@ ip6_tnl_dev_init_gen(struct net_device *dev)
t_hlen = t->hlen + sizeof(struct ipv6hdr);
dev->type = ARPHRD_TUNNEL6;
dev->hard_header_len = LL_MAX_HEADER + t_hlen;
dev->mtu = ETH_DATA_LEN - t_hlen;
if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT))
dev->mtu -= 8;
dev->min_mtu = ETH_MIN_MTU;
dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len;
dev->max_mtu = IP6_MAX_MTU - dev->hard_header_len - t_hlen;
netdev_hold(dev, &t->dev_tracker, GFP_KERNEL);
netdev_lockdep_set_classes(dev);
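
Worked example of the corrected bound, for a plain ip6tnl device without the encapsulation-limit option (IP6_MAX_MTU is 0xFFFF plus the 40-byte IPv6 header; LL_MAX_HEADER is config-dependent, 128 assumed here for illustration):

	t_hlen          = sizeof(struct ipv6hdr)          = 40
	hard_header_len = LL_MAX_HEADER + t_hlen          = 128 + 40 = 168
	max_mtu         = IP6_MAX_MTU - hard_header_len - t_hlen
	                = 65575 - 168 - 40                = 65367

Previously the trailing t_hlen term was missing from both the max_mtu assignment and the ip6_tnl_change_mtu() checks, allowing MTUs larger than the headroom the tunnel actually needs.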


@@ -86,13 +86,15 @@ struct device *iucv_alloc_device(const struct attribute_group **attrs,
{
struct device *dev;
va_list vargs;
char buf[20];
int rc;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev)
goto out_error;
va_start(vargs, fmt);
rc = dev_set_name(dev, fmt, vargs);
vsnprintf(buf, sizeof(buf), fmt, vargs);
rc = dev_set_name(dev, "%s", buf);
va_end(vargs);
if (rc)
goto out_error;

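Note: the bug class is plain C, not kernel-specific: dev_set_name() is variadic, so passing it a va_list as an ordinary argument formats garbage (the va_list is consumed as a single pointer-sized value). The portable pattern is to render through the v-variant first; a self-contained illustration:

	#include <stdarg.h>
	#include <stdio.h>

	static void set_name(char *dst, size_t len, const char *fmt, ...)
	{
		va_list ap;

		va_start(ap, fmt);
		vsnprintf(dst, len, fmt, ap);	/* v-variant consumes the va_list */
		va_end(ap);
	}
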

@@ -755,6 +755,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
!(msg->msg_flags & MSG_MORE) : !!(msg->msg_flags & MSG_EOR);
int err = -EPIPE;
mutex_lock(&kcm->tx_mutex);
lock_sock(sk);
/* Per tcp_sendmsg this should be in poll */
@@ -926,6 +927,7 @@ partial_message:
KCM_STATS_ADD(kcm->stats.tx_bytes, copied);
release_sock(sk);
mutex_unlock(&kcm->tx_mutex);
return copied;
out_error:
@@ -951,6 +953,7 @@ out_error:
sk->sk_write_space(sk);
release_sock(sk);
mutex_unlock(&kcm->tx_mutex);
return err;
}
@@ -1204,6 +1207,7 @@ static void init_kcm_sock(struct kcm_sock *kcm, struct kcm_mux *mux)
spin_unlock_bh(&mux->lock);
INIT_WORK(&kcm->tx_work, kcm_tx_work);
mutex_init(&kcm->tx_mutex);
spin_lock_bh(&mux->rx_lock);
kcm_rcv_ready(kcm);


@@ -366,7 +366,7 @@ static void mctp_test_route_input_sk(struct kunit *test)
skb2 = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc);
KUNIT_EXPECT_NOT_ERR_OR_NULL(test, skb2);
KUNIT_EXPECT_EQ(test, skb->len, 1);
KUNIT_EXPECT_EQ(test, skb2->len, 1);
skb_free_datagram(sock->sk, skb2);


@@ -60,16 +60,6 @@ int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_
return 0;
}
int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list)
{
pr_debug("msk=%p, rm_list_nr=%d", msk, rm_list->nr);
spin_lock_bh(&msk->pm.lock);
mptcp_pm_nl_rm_subflow_received(msk, rm_list);
spin_unlock_bh(&msk->pm.lock);
return 0;
}
/* path manager event handlers */
void mptcp_pm_new_connection(struct mptcp_sock *msk, const struct sock *ssk, int server_side)
@@ -444,9 +434,6 @@ int mptcp_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, unsigned int id
*flags = 0;
*ifindex = 0;
if (!id)
return 0;
if (mptcp_pm_is_userspace(msk))
return mptcp_userspace_pm_get_flags_and_ifindex_by_id(msk, id, flags, ifindex);
return mptcp_pm_nl_get_flags_and_ifindex_by_id(msk, id, flags, ifindex);


@@ -143,11 +143,13 @@ static bool lookup_subflow_by_daddr(const struct list_head *list,
return false;
}
static struct mptcp_pm_addr_entry *
static bool
select_local_address(const struct pm_nl_pernet *pernet,
const struct mptcp_sock *msk)
const struct mptcp_sock *msk,
struct mptcp_pm_addr_entry *new_entry)
{
struct mptcp_pm_addr_entry *entry, *ret = NULL;
struct mptcp_pm_addr_entry *entry;
bool found = false;
msk_owned_by_me(msk);
@@ -159,17 +161,21 @@ select_local_address(const struct pm_nl_pernet *pernet,
if (!test_bit(entry->addr.id, msk->pm.id_avail_bitmap))
continue;
ret = entry;
*new_entry = *entry;
found = true;
break;
}
rcu_read_unlock();
return ret;
return found;
}
static struct mptcp_pm_addr_entry *
select_signal_address(struct pm_nl_pernet *pernet, const struct mptcp_sock *msk)
static bool
select_signal_address(struct pm_nl_pernet *pernet, const struct mptcp_sock *msk,
struct mptcp_pm_addr_entry *new_entry)
{
struct mptcp_pm_addr_entry *entry, *ret = NULL;
struct mptcp_pm_addr_entry *entry;
bool found = false;
rcu_read_lock();
/* do not keep any additional per socket state, just signal
@@ -184,11 +190,13 @@ select_signal_address(struct pm_nl_pernet *pernet, const struct mptcp_sock *msk)
if (!(entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL))
continue;
ret = entry;
*new_entry = *entry;
found = true;
break;
}
rcu_read_unlock();
return ret;
return found;
}
unsigned int mptcp_pm_get_add_addr_signal_max(const struct mptcp_sock *msk)
@@ -512,9 +520,10 @@ __lookup_addr(struct pm_nl_pernet *pernet, const struct mptcp_addr_info *info)
static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
{
struct mptcp_pm_addr_entry *local, *signal_and_subflow = NULL;
struct sock *sk = (struct sock *)msk;
struct mptcp_pm_addr_entry local;
unsigned int add_addr_signal_max;
bool signal_and_subflow = false;
unsigned int local_addr_max;
struct pm_nl_pernet *pernet;
unsigned int subflows_max;
@@ -565,23 +574,22 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
if (msk->pm.addr_signal & BIT(MPTCP_ADD_ADDR_SIGNAL))
return;
local = select_signal_address(pernet, msk);
if (!local)
if (!select_signal_address(pernet, msk, &local))
goto subflow;
/* If the alloc fails, we are on memory pressure, not worth
* continuing, and trying to create subflows.
*/
if (!mptcp_pm_alloc_anno_list(msk, &local->addr))
if (!mptcp_pm_alloc_anno_list(msk, &local.addr))
return;
__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
msk->pm.add_addr_signaled++;
mptcp_pm_announce_addr(msk, &local->addr, false);
mptcp_pm_announce_addr(msk, &local.addr, false);
mptcp_pm_nl_addr_send_ack(msk);
if (local->flags & MPTCP_PM_ADDR_FLAG_SUBFLOW)
signal_and_subflow = local;
if (local.flags & MPTCP_PM_ADDR_FLAG_SUBFLOW)
signal_and_subflow = true;
}
subflow:
@@ -592,26 +600,22 @@ subflow:
bool fullmesh;
int i, nr;
if (signal_and_subflow) {
local = signal_and_subflow;
signal_and_subflow = NULL;
} else {
local = select_local_address(pernet, msk);
if (!local)
break;
}
if (signal_and_subflow)
signal_and_subflow = false;
else if (!select_local_address(pernet, msk, &local))
break;
fullmesh = !!(local->flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
fullmesh = !!(local.flags & MPTCP_PM_ADDR_FLAG_FULLMESH);
msk->pm.local_addr_used++;
__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
nr = fill_remote_addresses_vec(msk, &local->addr, fullmesh, addrs);
__clear_bit(local.addr.id, msk->pm.id_avail_bitmap);
nr = fill_remote_addresses_vec(msk, &local.addr, fullmesh, addrs);
if (nr == 0)
continue;
spin_unlock_bh(&msk->pm.lock);
for (i = 0; i < nr; i++)
__mptcp_subflow_connect(sk, &local->addr, &addrs[i]);
__mptcp_subflow_connect(sk, &local.addr, &addrs[i]);
spin_lock_bh(&msk->pm.lock);
}
mptcp_pm_nl_check_work_pending(msk);
@@ -636,6 +640,7 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
{
struct sock *sk = (struct sock *)msk;
struct mptcp_pm_addr_entry *entry;
struct mptcp_addr_info mpc_addr;
struct pm_nl_pernet *pernet;
unsigned int subflows_max;
int i = 0;
@@ -643,6 +648,8 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
pernet = pm_nl_get_pernet_from_msk(msk);
subflows_max = mptcp_pm_get_subflows_max(msk);
mptcp_local_address((struct sock_common *)msk, &mpc_addr);
rcu_read_lock();
list_for_each_entry_rcu(entry, &pernet->local_addr_list, list) {
if (!(entry->flags & MPTCP_PM_ADDR_FLAG_FULLMESH))
@@ -653,7 +660,13 @@ static unsigned int fill_local_addresses_vec(struct mptcp_sock *msk,
if (msk->pm.subflows < subflows_max) {
msk->pm.subflows++;
addrs[i++] = entry->addr;
addrs[i] = entry->addr;
/* Special case for ID0: set the correct ID */
if (mptcp_addresses_equal(&entry->addr, &mpc_addr, entry->addr.port))
addrs[i].id = 0;
i++;
}
}
rcu_read_unlock();
@@ -829,25 +842,27 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
mptcp_close_ssk(sk, ssk, subflow);
spin_lock_bh(&msk->pm.lock);
removed = true;
removed |= subflow->request_join;
if (rm_type == MPTCP_MIB_RMSUBFLOW)
__MPTCP_INC_STATS(sock_net(sk), rm_type);
}
if (rm_type == MPTCP_MIB_RMSUBFLOW)
__set_bit(rm_id ? rm_id : msk->mpc_endpoint_id, msk->pm.id_avail_bitmap);
else if (rm_type == MPTCP_MIB_RMADDR)
if (rm_type == MPTCP_MIB_RMADDR)
__MPTCP_INC_STATS(sock_net(sk), rm_type);
if (!removed)
continue;
if (!mptcp_pm_is_kernel(msk))
continue;
if (rm_type == MPTCP_MIB_RMADDR) {
msk->pm.add_addr_accepted--;
WRITE_ONCE(msk->pm.accept_addr, true);
} else if (rm_type == MPTCP_MIB_RMSUBFLOW) {
msk->pm.local_addr_used--;
if (rm_type == MPTCP_MIB_RMADDR && rm_id &&
!WARN_ON_ONCE(msk->pm.add_addr_accepted == 0)) {
/* Note: if the subflow has been closed before, this
* add_addr_accepted counter will not be decremented.
*/
if (--msk->pm.add_addr_accepted < mptcp_pm_get_add_addr_accept_max(msk))
WRITE_ONCE(msk->pm.accept_addr, true);
}
}
}
@@ -857,8 +872,8 @@ static void mptcp_pm_nl_rm_addr_received(struct mptcp_sock *msk)
mptcp_pm_nl_rm_addr_or_subflow(msk, &msk->pm.rm_list_rx, MPTCP_MIB_RMADDR);
}
void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
const struct mptcp_rm_list *rm_list)
static void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
const struct mptcp_rm_list *rm_list)
{
mptcp_pm_nl_rm_addr_or_subflow(msk, rm_list, MPTCP_MIB_RMSUBFLOW);
}
@@ -1393,6 +1408,10 @@ int mptcp_pm_nl_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, unsigned int
struct sock *sk = (struct sock *)msk;
struct net *net = sock_net(sk);
/* No entries with ID 0 */
if (id == 0)
return 0;
rcu_read_lock();
entry = __lookup_addr_by_id(pm_nl_get_pernet(net), id);
if (entry) {
@@ -1431,13 +1450,24 @@ static bool mptcp_pm_remove_anno_addr(struct mptcp_sock *msk,
ret = remove_anno_list_by_saddr(msk, addr);
if (ret || force) {
spin_lock_bh(&msk->pm.lock);
msk->pm.add_addr_signaled -= ret;
if (ret) {
__set_bit(addr->id, msk->pm.id_avail_bitmap);
msk->pm.add_addr_signaled--;
}
mptcp_pm_remove_addr(msk, &list);
spin_unlock_bh(&msk->pm.lock);
}
return ret;
}
static void __mark_subflow_endp_available(struct mptcp_sock *msk, u8 id)
{
/* If it was marked as used, and not ID 0, decrement local_addr_used */
if (!__test_and_set_bit(id ? : msk->mpc_endpoint_id, msk->pm.id_avail_bitmap) &&
id && !WARN_ON_ONCE(msk->pm.local_addr_used == 0))
msk->pm.local_addr_used--;
}
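
Note: the expression id ? : msk->mpc_endpoint_id above is the GNU "elvis" conditional, equivalent to:

	u8 effective_id = id ? id : msk->mpc_endpoint_id;	/* ID 0 maps to the MPC endpoint */

so releasing endpoint ID 0 marks the MPC endpoint's bit in id_avail_bitmap available again.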
static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
const struct mptcp_pm_addr_entry *entry)
{
@@ -1466,8 +1496,19 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
!(entry->flags & MPTCP_PM_ADDR_FLAG_IMPLICIT));
if (remove_subflow)
mptcp_pm_remove_subflow(msk, &list);
if (remove_subflow) {
spin_lock_bh(&msk->pm.lock);
mptcp_pm_nl_rm_subflow_received(msk, &list);
spin_unlock_bh(&msk->pm.lock);
}
if (entry->flags & MPTCP_PM_ADDR_FLAG_SUBFLOW) {
spin_lock_bh(&msk->pm.lock);
__mark_subflow_endp_available(msk, list.ids[0]);
spin_unlock_bh(&msk->pm.lock);
}
release_sock(sk);
next:
@@ -1502,6 +1543,7 @@ static int mptcp_nl_remove_id_zero_address(struct net *net,
spin_lock_bh(&msk->pm.lock);
mptcp_pm_remove_addr(msk, &list);
mptcp_pm_nl_rm_subflow_received(msk, &list);
__mark_subflow_endp_available(msk, 0);
spin_unlock_bh(&msk->pm.lock);
release_sock(sk);
@@ -1605,14 +1647,17 @@ static void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
alist.ids[alist.nr++] = entry->addr.id;
}
spin_lock_bh(&msk->pm.lock);
if (alist.nr) {
spin_lock_bh(&msk->pm.lock);
msk->pm.add_addr_signaled -= alist.nr;
mptcp_pm_remove_addr(msk, &alist);
spin_unlock_bh(&msk->pm.lock);
}
if (slist.nr)
mptcp_pm_remove_subflow(msk, &slist);
mptcp_pm_nl_rm_subflow_received(msk, &slist);
/* Reset counters: maybe some subflows have been removed before */
bitmap_fill(msk->pm.id_avail_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
msk->pm.local_addr_used = 0;
spin_unlock_bh(&msk->pm.lock);
}
static void mptcp_nl_remove_addrs_list(struct net *net,
@@ -1900,6 +1945,7 @@ static void mptcp_pm_nl_fullmesh(struct mptcp_sock *msk,
spin_lock_bh(&msk->pm.lock);
mptcp_pm_nl_rm_subflow_received(msk, &list);
__mark_subflow_endp_available(msk, list.ids[0]);
mptcp_pm_create_subflow_or_signal_addr(msk);
spin_unlock_bh(&msk->pm.lock);
}


@@ -1026,7 +1026,6 @@ int mptcp_pm_announce_addr(struct mptcp_sock *msk,
const struct mptcp_addr_info *addr,
bool echo);
int mptcp_pm_remove_addr(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list);
int mptcp_pm_remove_subflow(struct mptcp_sock *msk, const struct mptcp_rm_list *rm_list);
void mptcp_pm_remove_addrs(struct mptcp_sock *msk, struct list_head *rm_list);
void mptcp_free_local_addr_list(struct mptcp_sock *msk);
@@ -1133,8 +1132,6 @@ static inline u8 subflow_get_local_id(const struct mptcp_subflow_context *subflo
void __init mptcp_pm_nl_init(void);
void mptcp_pm_nl_work(struct mptcp_sock *msk);
void mptcp_pm_nl_rm_subflow_received(struct mptcp_sock *msk,
const struct mptcp_rm_list *rm_list);
unsigned int mptcp_pm_get_add_addr_signal_max(const struct mptcp_sock *msk);
unsigned int mptcp_pm_get_add_addr_accept_max(const struct mptcp_sock *msk);
unsigned int mptcp_pm_get_subflows_max(const struct mptcp_sock *msk);


@@ -17,6 +17,9 @@ nf_flow_offload_inet_hook(void *priv, struct sk_buff *skb,
switch (skb->protocol) {
case htons(ETH_P_8021Q):
if (!pskb_may_pull(skb, skb_mac_offset(skb) + sizeof(*veth)))
return NF_ACCEPT;
veth = (struct vlan_ethhdr *)skb_mac_header(skb);
proto = veth->h_vlan_encapsulated_proto;
break;


@@ -281,6 +281,9 @@ static bool nf_flow_skb_encap_protocol(struct sk_buff *skb, __be16 proto,
switch (skb->protocol) {
case htons(ETH_P_8021Q):
if (!pskb_may_pull(skb, skb_mac_offset(skb) + sizeof(*veth)))
return false;
veth = (struct vlan_ethhdr *)skb_mac_header(skb);
if (veth->h_vlan_encapsulated_proto == proto) {
*offset += VLAN_HLEN;


@@ -107,11 +107,16 @@ static void nft_counter_reset(struct nft_counter_percpu_priv *priv,
struct nft_counter *total)
{
struct nft_counter *this_cpu;
seqcount_t *myseq;
local_bh_disable();
this_cpu = this_cpu_ptr(priv->counter);
myseq = this_cpu_ptr(&nft_counter_seq);
write_seqcount_begin(myseq);
this_cpu->packets -= total->packets;
this_cpu->bytes -= total->bytes;
write_seqcount_end(myseq);
local_bh_enable();
}
@@ -265,7 +270,7 @@ static void nft_counter_offload_stats(struct nft_expr *expr,
struct nft_counter *this_cpu;
seqcount_t *myseq;
preempt_disable();
local_bh_disable();
this_cpu = this_cpu_ptr(priv->counter);
myseq = this_cpu_ptr(&nft_counter_seq);
@@ -273,7 +278,7 @@
this_cpu->packets += stats->pkts;
this_cpu->bytes += stats->bytes;
write_seqcount_end(myseq);
preempt_enable();
local_bh_enable();
}
void nft_counter_init_seqcount(void)

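Note: both hunks enforce the same rule: the per-CPU nft_counter_seq seqcount is also written by the packet path in softirq context, so every writer must disable bottom halves on the local CPU; preempt_disable() alone still lets a softirq nest inside the write section. The resulting write-side shape, common to reset and offload-stats:

	local_bh_disable();		/* excludes the datapath writer on this CPU */
	myseq = this_cpu_ptr(&nft_counter_seq);
	write_seqcount_begin(myseq);
	/* ... adjust this_cpu counters ... */
	write_seqcount_end(myseq);
	local_bh_enable();
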

@@ -2706,7 +2706,7 @@ static struct pernet_operations ovs_net_ops = {
};
static const char * const ovs_drop_reasons[] = {
#define S(x) (#x),
#define S(x) [(x) & ~SKB_DROP_REASON_SUBSYS_MASK] = (#x),
OVS_DROP_REASONS(S)
#undef S
};
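
Note: the designated initializer matters because OVS drop-reason values embed the subsystem ID in their upper bits and are therefore not dense array indices on their own. Illustrative expansion (the constants exist in include/net/dropreason.h; the reason value 3 is made up):

	/* reason = (SKB_DROP_REASON_SUBSYS_OPENVSWITCH << SKB_DROP_REASON_SUBSYS_SHIFT) | 3
	 * index  = reason & ~SKB_DROP_REASON_SUBSYS_MASK = 3
	 * ovs_drop_reasons[3] then names this reason.
	 */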


@@ -446,12 +446,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
struct netem_sched_data *q = qdisc_priv(sch);
/* We don't fill cb now as skb_unshare() may invalidate it */
struct netem_skb_cb *cb;
struct sk_buff *skb2;
struct sk_buff *skb2 = NULL;
struct sk_buff *segs = NULL;
unsigned int prev_len = qdisc_pkt_len(skb);
int count = 1;
int rc = NET_XMIT_SUCCESS;
int rc_drop = NET_XMIT_DROP;
/* Do not fool qdisc_drop_all() */
skb->prev = NULL;
@@ -480,19 +478,11 @@
skb_orphan_partial(skb);
/*
* If we need to duplicate packet, then re-insert at top of the
* qdisc tree, since parent queuer expects that only one
* skb will be queued.
* If we need to duplicate packet, then clone it before
* original is modified.
*/
if (count > 1 && (skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) {
struct Qdisc *rootq = qdisc_root_bh(sch);
u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
q->duplicate = 0;
rootq->enqueue(skb2, rootq, to_free);
q->duplicate = dupsave;
rc_drop = NET_XMIT_SUCCESS;
}
if (count > 1)
skb2 = skb_clone(skb, GFP_ATOMIC);
/*
* Randomized packet corruption.
@@ -504,7 +494,8 @@
if (skb_is_gso(skb)) {
skb = netem_segment(skb, sch, to_free);
if (!skb)
return rc_drop;
goto finish_segs;
segs = skb->next;
skb_mark_not_on_list(skb);
qdisc_skb_cb(skb)->pkt_len = skb->len;
@@ -530,7 +521,24 @@
/* re-link segs, so that qdisc_drop_all() frees them all */
skb->next = segs;
qdisc_drop_all(skb, sch, to_free);
return rc_drop;
if (skb2)
__qdisc_drop(skb2, to_free);
return NET_XMIT_DROP;
}
/*
* If doing duplication then re-insert at top of the
* qdisc tree, since parent queuer expects that only one
* skb will be queued.
*/
if (skb2) {
struct Qdisc *rootq = qdisc_root_bh(sch);
u32 dupsave = q->duplicate; /* prevent duplicating a dup... */
q->duplicate = 0;
rootq->enqueue(skb2, rootq, to_free);
q->duplicate = dupsave;
skb2 = NULL;
}
qdisc_qstats_backlog_inc(sch, skb);
@@ -601,9 +609,12 @@
}
finish_segs:
if (skb2)
__qdisc_drop(skb2, to_free);
if (segs) {
unsigned int len, last_len;
int nb;
int rc, nb;
len = skb ? skb->len : 0;
nb = skb ? 1 : 0;


@@ -2,6 +2,7 @@
# SPDX-License-Identifier: GPL-2.0
lib_dir=$(dirname $0)/../../../net/forwarding
ethtool_lib_dir=$(dirname $0)/../hw
ALL_TESTS="
autoneg
@@ -11,7 +12,7 @@ ALL_TESTS="
NUM_NETIFS=2
: ${TIMEOUT:=30000} # ms
source $lib_dir/lib.sh
source $lib_dir/ethtool_lib.sh
source $ethtool_lib_dir/ethtool_lib.sh
setup_prepare()
{


@@ -1,7 +1,7 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn"
ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn other_tpid"
NUM_NETIFS=4
CHECK_TC="yes"
source lib.sh
@@ -142,6 +142,58 @@ extern_learn()
bridge fdb del de:ad:be:ef:13:37 dev $swp1 master vlan 1 &> /dev/null
}
other_tpid()
{
local mac=de:ad:be:ef:13:37
# Test that packets with TPID 802.1ad VID 3 + TPID 802.1Q VID 5 are
# classified as untagged by a bridge with vlan_protocol 802.1Q, and
# are processed in the PVID of the ingress port (here 1). Not VID 3,
# and not VID 5.
RET=0
tc qdisc add dev $h2 clsact
tc filter add dev $h2 ingress protocol all pref 1 handle 101 \
flower dst_mac $mac action drop
ip link set $h2 promisc on
ethtool -K $h2 rx-vlan-filter off rx-vlan-stag-filter off
$MZ -q $h1 -c 1 -b $mac -a own "88:a8 00:03 81:00 00:05 08:00 aa-aa-aa-aa-aa-aa-aa-aa-aa"
sleep 1
# Match on 'self' addresses as well, for those drivers which
# do not push their learned addresses to the bridge software
# database
bridge -j fdb show $swp1 | \
jq -e ".[] | select(.mac == \"$(mac_get $h1)\") | select(.vlan == 1)" &> /dev/null
check_err $? "FDB entry was not learned when it should"
log_test "FDB entry in PVID for VLAN-tagged with other TPID"
RET=0
tc -j -s filter show dev $h2 ingress \
| jq -e ".[] | select(.options.handle == 101) \
| select(.options.actions[0].stats.packets == 1)" &> /dev/null
check_err $? "Packet was not forwarded when it should"
log_test "Reception of VLAN with other TPID as untagged"
bridge vlan del dev $swp1 vid 1
$MZ -q $h1 -c 1 -b $mac -a own "88:a8 00:03 81:00 00:05 08:00 aa-aa-aa-aa-aa-aa-aa-aa-aa"
sleep 1
RET=0
tc -j -s filter show dev $h2 ingress \
| jq -e ".[] | select(.options.handle == 101) \
| select(.options.actions[0].stats.packets == 1)" &> /dev/null
check_err $? "Packet was forwarded when should not"
log_test "Reception of VLAN with other TPID as untagged (no PVID)"
bridge vlan add dev $swp1 vid 1 pvid untagged
ip link set $h2 promisc off
tc qdisc del dev $h2 clsact
}
trap cleanup EXIT
setup_prepare


@@ -500,6 +500,11 @@ check_err_fail()
fi
}
xfail()
{
FAIL_TO_XFAIL=yes "$@"
}
xfail_on_slow()
{
if [[ $KSFT_MACHINE_SLOW = yes ]]; then
@@ -1113,6 +1118,39 @@ mac_get()
ip -j link show dev $if_name | jq -r '.[]["address"]'
}
ether_addr_to_u64()
{
local addr="$1"
local order="$((1 << 40))"
local val=0
local byte
addr="${addr//:/ }"
for byte in $addr; do
byte="0x$byte"
val=$((val + order * byte))
order=$((order >> 8))
done
printf "0x%x" $val
}
u64_to_ether_addr()
{
local val=$1
local byte
local i
for ((i = 40; i >= 0; i -= 8)); do
byte=$(((val & (0xff << i)) >> i))
printf "%02x" $byte
if [ $i -ne 0 ]; then
printf ":"
fi
done
}
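
For example, ether_addr_to_u64 de:ad:be:ef:13:37 prints 0xdeadbeef1337, and u64_to_ether_addr 0xdeadbeef1338 prints de:ad:be:ef:13:38; this round trip is what has_unicast_flt() below relies on to derive a fresh MAC address for its temporary macvlan.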
ipv6_lladdr_get()
{
local if_name=$1
@@ -2229,3 +2267,22 @@ absval()
echo $((v > 0 ? v : -v))
}
has_unicast_flt()
{
local dev=$1; shift
local mac_addr=$(mac_get $dev)
local tmp=$(ether_addr_to_u64 $mac_addr)
local promisc
ip link set $dev up
ip link add link $dev name macvlan-tmp type macvlan mode private
ip link set macvlan-tmp address $(u64_to_ether_addr $((tmp + 1)))
ip link set macvlan-tmp up
promisc=$(ip -j -d link show dev $dev | jq -r '.[].promiscuity')
ip link del macvlan-tmp
[[ $promisc == 1 ]] && echo "no" || echo "yes"
}


@@ -1,7 +1,9 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
ALL_TESTS="standalone bridge"
ALL_TESTS="standalone vlan_unaware_bridge vlan_aware_bridge test_vlan \
vlan_over_vlan_unaware_bridged_port vlan_over_vlan_aware_bridged_port \
vlan_over_vlan_unaware_bridge vlan_over_vlan_aware_bridge"
NUM_NETIFS=2
PING_COUNT=1
REQUIRE_MTOOLS=yes
@@ -37,9 +39,68 @@ UNKNOWN_MACV6_MC_ADDR1="33:33:01:02:03:05"
UNKNOWN_MACV6_MC_ADDR2="33:33:01:02:03:06"
UNKNOWN_MACV6_MC_ADDR3="33:33:01:02:03:07"
NON_IP_MC="01:02:03:04:05:06"
NON_IP_PKT="00:04 48:45:4c:4f"
BC="ff:ff:ff:ff:ff:ff"
PTP_1588_L2_SYNC=" \
01:1b:19:00:00:00 00:00:de:ad:be:ef 88:f7 00 02 \
00 2c 00 00 02 00 00 00 00 00 00 00 00 00 00 00 \
00 00 3e 37 63 ff fe cf 17 0e 00 01 00 00 00 00 \
00 00 00 00 00 00 00 00 00 00"
PTP_1588_L2_FOLLOW_UP=" \
01:1b:19:00:00:00 00:00:de:ad:be:ef 88:f7 08 02 \
00 2c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \
00 00 3e 37 63 ff fe cf 17 0e 00 01 00 00 02 00 \
00 00 66 83 c5 f1 17 97 ed f0"
PTP_1588_L2_PDELAY_REQ=" \
01:80:c2:00:00:0e 00:00:de:ad:be:ef 88:f7 02 02 \
00 36 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \
00 00 3e 37 63 ff fe cf 17 0e 00 01 00 06 05 7f \
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \
00 00 00 00"
PTP_1588_IPV4_SYNC=" \
01:00:5e:00:01:81 00:00:de:ad:be:ef 08:00 45 00 \
00 48 0a 9a 40 00 01 11 cb 88 c0 00 02 01 e0 00 \
01 81 01 3f 01 3f 00 34 a3 c8 00 02 00 2c 00 00 \
02 00 00 00 00 00 00 00 00 00 00 00 00 00 3e 37 \
63 ff fe cf 17 0e 00 01 00 00 00 00 00 00 00 00 \
00 00 00 00 00 00"
PTP_1588_IPV4_FOLLOW_UP="
01:00:5e:00:01:81 00:00:de:ad:be:ef 08:00 45 00 \
00 48 0a 9b 40 00 01 11 cb 87 c0 00 02 01 e0 00 \
01 81 01 40 01 40 00 34 a3 c8 08 02 00 2c 00 00 \
00 00 00 00 00 00 00 00 00 00 00 00 00 00 3e 37 \
63 ff fe cf 17 0e 00 01 00 00 02 00 00 00 66 83 \
c6 0f 1d 9a 61 87"
PTP_1588_IPV4_PDELAY_REQ=" \
01:00:5e:00:00:6b 00:00:de:ad:be:ef 08:00 45 00 \
00 52 35 a9 40 00 01 11 a1 85 c0 00 02 01 e0 00 \
00 6b 01 3f 01 3f 00 3e a2 bc 02 02 00 36 00 00 \
00 00 00 00 00 00 00 00 00 00 00 00 00 00 3e 37 \
63 ff fe cf 17 0e 00 01 00 01 05 7f 00 00 00 00 \
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00"
PTP_1588_IPV6_SYNC=" \
33:33:00:00:01:81 00:00:de:ad:be:ef 86:dd 60 06 \
7c 2f 00 36 11 01 20 01 0d b8 00 01 00 00 00 00 \
00 00 00 00 00 01 ff 0e 00 00 00 00 00 00 00 00 \
00 00 00 00 01 81 01 3f 01 3f 00 36 2e 92 00 02 \
00 2c 00 00 02 00 00 00 00 00 00 00 00 00 00 00 \
00 00 3e 37 63 ff fe cf 17 0e 00 01 00 00 00 00 \
00 00 00 00 00 00 00 00 00 00 00 00"
PTP_1588_IPV6_FOLLOW_UP=" \
33:33:00:00:01:81 00:00:de:ad:be:ef 86:dd 60 0a \
00 bc 00 36 11 01 20 01 0d b8 00 01 00 00 00 00 \
00 00 00 00 00 01 ff 0e 00 00 00 00 00 00 00 00 \
00 00 00 00 01 81 01 40 01 40 00 36 2e 92 08 02 \
00 2c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \
00 00 3e 37 63 ff fe cf 17 0e 00 01 00 00 02 00 \
00 00 66 83 c6 2a 32 09 bd 74 00 00"
PTP_1588_IPV6_PDELAY_REQ=" \
33:33:00:00:00:6b 00:00:de:ad:be:ef 86:dd 60 0c \
5c fd 00 40 11 01 fe 80 00 00 00 00 00 00 3c 37 \
63 ff fe cf 17 0e ff 02 00 00 00 00 00 00 00 00 \
00 00 00 00 00 6b 01 3f 01 3f 00 40 b4 54 02 02 \
00 36 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \
00 00 3e 37 63 ff fe cf 17 0e 00 01 00 01 05 7f \
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \
00 00 00 00 00 00"
# Disable promisc to ensure we don't receive unknown MAC DA packets
export TCPDUMP_EXTRA_FLAGS="-pl"
@@ -47,13 +108,15 @@ export TCPDUMP_EXTRA_FLAGS="-pl"
h1=${NETIFS[p1]}
h2=${NETIFS[p2]}
send_non_ip()
send_raw()
{
local if_name=$1
local smac=$2
local dmac=$3
local if_name=$1; shift
local pkt="$1"; shift
local smac=$(mac_get $if_name)
$MZ -q $if_name "$dmac $smac $NON_IP_PKT"
pkt="${pkt/00:00:de:ad:be:ef/$smac}"
$MZ -q $if_name "$pkt"
}
send_uc_ipv4()
@@ -68,10 +131,11 @@ send_uc_ipv4()
check_rcv()
{
local if_name=$1
local type=$2
local pattern=$3
local should_receive=$4
local if_name=$1; shift
local type=$1; shift
local pattern=$1; shift
local should_receive=$1; shift
local test_name="$1"; shift
local should_fail=
[ $should_receive = true ] && should_fail=0 || should_fail=1
@@ -81,7 +145,7 @@ check_rcv()
check_err_fail "$should_fail" "$?" "reception"
log_test "$if_name: $type"
log_test "$test_name: $type"
}
mc_route_prepare()
@@ -104,44 +168,78 @@ mc_route_destroy()
run_test()
{
local rcv_if_name=$1
local smac=$(mac_get $h1)
local send_if_name=$1; shift
local rcv_if_name=$1; shift
local skip_ptp=$1; shift
local no_unicast_flt=$1; shift
local test_name="$1"; shift
local smac=$(mac_get $send_if_name)
local rcv_dmac=$(mac_get $rcv_if_name)
local should_receive
tcpdump_start $rcv_if_name
mc_route_prepare $h1
mc_route_prepare $send_if_name
mc_route_prepare $rcv_if_name
send_uc_ipv4 $h1 $rcv_dmac
send_uc_ipv4 $h1 $MACVLAN_ADDR
send_uc_ipv4 $h1 $UNKNOWN_UC_ADDR1
send_uc_ipv4 $send_if_name $rcv_dmac
send_uc_ipv4 $send_if_name $MACVLAN_ADDR
send_uc_ipv4 $send_if_name $UNKNOWN_UC_ADDR1
ip link set dev $rcv_if_name promisc on
send_uc_ipv4 $h1 $UNKNOWN_UC_ADDR2
mc_send $h1 $UNKNOWN_IPV4_MC_ADDR2
mc_send $h1 $UNKNOWN_IPV6_MC_ADDR2
send_uc_ipv4 $send_if_name $UNKNOWN_UC_ADDR2
mc_send $send_if_name $UNKNOWN_IPV4_MC_ADDR2
mc_send $send_if_name $UNKNOWN_IPV6_MC_ADDR2
ip link set dev $rcv_if_name promisc off
mc_join $rcv_if_name $JOINED_IPV4_MC_ADDR
mc_send $h1 $JOINED_IPV4_MC_ADDR
mc_send $send_if_name $JOINED_IPV4_MC_ADDR
mc_leave
mc_join $rcv_if_name $JOINED_IPV6_MC_ADDR
mc_send $h1 $JOINED_IPV6_MC_ADDR
mc_send $send_if_name $JOINED_IPV6_MC_ADDR
mc_leave
mc_send $h1 $UNKNOWN_IPV4_MC_ADDR1
mc_send $h1 $UNKNOWN_IPV6_MC_ADDR1
mc_send $send_if_name $UNKNOWN_IPV4_MC_ADDR1
mc_send $send_if_name $UNKNOWN_IPV6_MC_ADDR1
ip link set dev $rcv_if_name allmulticast on
send_uc_ipv4 $h1 $UNKNOWN_UC_ADDR3
mc_send $h1 $UNKNOWN_IPV4_MC_ADDR3
mc_send $h1 $UNKNOWN_IPV6_MC_ADDR3
send_uc_ipv4 $send_if_name $UNKNOWN_UC_ADDR3
mc_send $send_if_name $UNKNOWN_IPV4_MC_ADDR3
mc_send $send_if_name $UNKNOWN_IPV6_MC_ADDR3
ip link set dev $rcv_if_name allmulticast off
mc_route_destroy $rcv_if_name
mc_route_destroy $h1
mc_route_destroy $send_if_name
if [ $skip_ptp = false ]; then
ip maddress add 01:1b:19:00:00:00 dev $rcv_if_name
send_raw $send_if_name "$PTP_1588_L2_SYNC"
send_raw $send_if_name "$PTP_1588_L2_FOLLOW_UP"
ip maddress del 01:1b:19:00:00:00 dev $rcv_if_name
ip maddress add 01:80:c2:00:00:0e dev $rcv_if_name
send_raw $send_if_name "$PTP_1588_L2_PDELAY_REQ"
ip maddress del 01:80:c2:00:00:0e dev $rcv_if_name
mc_join $rcv_if_name 224.0.1.129
send_raw $send_if_name "$PTP_1588_IPV4_SYNC"
send_raw $send_if_name "$PTP_1588_IPV4_FOLLOW_UP"
mc_leave
mc_join $rcv_if_name 224.0.0.107
send_raw $send_if_name "$PTP_1588_IPV4_PDELAY_REQ"
mc_leave
mc_join $rcv_if_name ff0e::181
send_raw $send_if_name "$PTP_1588_IPV6_SYNC"
send_raw $send_if_name "$PTP_1588_IPV6_FOLLOW_UP"
mc_leave
mc_join $rcv_if_name ff02::6b
send_raw $send_if_name "$PTP_1588_IPV6_PDELAY_REQ"
mc_leave
fi
sleep 1
@@ -149,61 +247,99 @@ run_test()
check_rcv $rcv_if_name "Unicast IPv4 to primary MAC address" \
"$smac > $rcv_dmac, ethertype IPv4 (0x0800)" \
true
true "$test_name"
check_rcv $rcv_if_name "Unicast IPv4 to macvlan MAC address" \
"$smac > $MACVLAN_ADDR, ethertype IPv4 (0x0800)" \
true
true "$test_name"
xfail_on_veth $h1 \
check_rcv $rcv_if_name "Unicast IPv4 to unknown MAC address" \
"$smac > $UNKNOWN_UC_ADDR1, ethertype IPv4 (0x0800)" \
false
[ $no_unicast_flt = true ] && should_receive=true || should_receive=false
check_rcv $rcv_if_name "Unicast IPv4 to unknown MAC address" \
"$smac > $UNKNOWN_UC_ADDR1, ethertype IPv4 (0x0800)" \
$should_receive "$test_name"
check_rcv $rcv_if_name "Unicast IPv4 to unknown MAC address, promisc" \
"$smac > $UNKNOWN_UC_ADDR2, ethertype IPv4 (0x0800)" \
true
true "$test_name"
xfail_on_veth $h1 \
check_rcv $rcv_if_name \
"Unicast IPv4 to unknown MAC address, allmulti" \
"$smac > $UNKNOWN_UC_ADDR3, ethertype IPv4 (0x0800)" \
false
[ $no_unicast_flt = true ] && should_receive=true || should_receive=false
check_rcv $rcv_if_name \
"Unicast IPv4 to unknown MAC address, allmulti" \
"$smac > $UNKNOWN_UC_ADDR3, ethertype IPv4 (0x0800)" \
$should_receive "$test_name"
check_rcv $rcv_if_name "Multicast IPv4 to joined group" \
"$smac > $JOINED_MACV4_MC_ADDR, ethertype IPv4 (0x0800)" \
true
true "$test_name"
xfail_on_veth $h1 \
xfail \
check_rcv $rcv_if_name \
"Multicast IPv4 to unknown group" \
"$smac > $UNKNOWN_MACV4_MC_ADDR1, ethertype IPv4 (0x0800)" \
false
false "$test_name"
check_rcv $rcv_if_name "Multicast IPv4 to unknown group, promisc" \
"$smac > $UNKNOWN_MACV4_MC_ADDR2, ethertype IPv4 (0x0800)" \
true
true "$test_name"
check_rcv $rcv_if_name "Multicast IPv4 to unknown group, allmulti" \
"$smac > $UNKNOWN_MACV4_MC_ADDR3, ethertype IPv4 (0x0800)" \
true
true "$test_name"
check_rcv $rcv_if_name "Multicast IPv6 to joined group" \
"$smac > $JOINED_MACV6_MC_ADDR, ethertype IPv6 (0x86dd)" \
true
true "$test_name"
xfail_on_veth $h1 \
xfail \
check_rcv $rcv_if_name "Multicast IPv6 to unknown group" \
"$smac > $UNKNOWN_MACV6_MC_ADDR1, ethertype IPv6 (0x86dd)" \
false
false "$test_name"
check_rcv $rcv_if_name "Multicast IPv6 to unknown group, promisc" \
"$smac > $UNKNOWN_MACV6_MC_ADDR2, ethertype IPv6 (0x86dd)" \
true
true "$test_name"
check_rcv $rcv_if_name "Multicast IPv6 to unknown group, allmulti" \
"$smac > $UNKNOWN_MACV6_MC_ADDR3, ethertype IPv6 (0x86dd)" \
true
true "$test_name"
if [ $skip_ptp = false ]; then
check_rcv $rcv_if_name "1588v2 over L2 transport, Sync" \
"ethertype PTP (0x88f7).* PTPv2.* msg type : sync msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over L2 transport, Follow-Up" \
"ethertype PTP (0x88f7).* PTPv2.* msg type : follow up msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over L2 transport, Peer Delay Request" \
"ethertype PTP (0x88f7).* PTPv2.* msg type : peer delay req msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over IPv4, Sync" \
"ethertype IPv4 (0x0800).* PTPv2.* msg type : sync msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over IPv4, Follow-Up" \
"ethertype IPv4 (0x0800).* PTPv2.* msg type : follow up msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over IPv4, Peer Delay Request" \
"ethertype IPv4 (0x0800).* PTPv2.* msg type : peer delay req msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over IPv6, Sync" \
"ethertype IPv6 (0x86dd).* PTPv2.* msg type : sync msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over IPv6, Follow-Up" \
"ethertype IPv6 (0x86dd).* PTPv2.* msg type : follow up msg" \
true "$test_name"
check_rcv $rcv_if_name "1588v2 over IPv6, Peer Delay Request" \
"ethertype IPv6 (0x86dd).* PTPv2.* msg type : peer delay req msg" \
true "$test_name"
fi
tcpdump_cleanup $rcv_if_name
}
@@ -228,59 +364,210 @@ h2_destroy()
simple_if_fini $h2 $H2_IPV4/24 $H2_IPV6/64
}
h1_vlan_create()
{
simple_if_init $h1
vlan_create $h1 100 v$h1 $H1_IPV4/24 $H1_IPV6/64
}
h1_vlan_destroy()
{
vlan_destroy $h1 100
simple_if_fini $h1
}
h2_vlan_create()
{
simple_if_init $h2
vlan_create $h2 100 v$h2 $H2_IPV4/24 $H2_IPV6/64
}
h2_vlan_destroy()
{
vlan_destroy $h2 100
simple_if_fini $h2
}
bridge_create()
{
local vlan_filtering=$1
ip link add br0 type bridge vlan_filtering $vlan_filtering
ip link set br0 address $BRIDGE_ADDR
ip link set br0 up
ip link set $h2 master br0
ip link set $h2 up
}
bridge_destroy()
{
ip link del br0
}
macvlan_create()
{
local lower=$1
ip link add link $lower name macvlan0 type macvlan mode private
ip link set macvlan0 address $MACVLAN_ADDR
ip link set macvlan0 up
}
macvlan_destroy()
{
ip link del macvlan0
}
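# Inferred from the call sites below (the run_test() definition sits
# outside this excerpt): $1 = send interface, $2 = receive interface,
# $3 = skip_ptp (true/false), $4 = no_unicast_flt (true/false),
# $5 = test name reported with each check_rcv result.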
standalone()
{
local no_unicast_flt=true
local skip_ptp=false
if [ $(has_unicast_flt $h2) = yes ]; then
no_unicast_flt=false
fi
h1_create
h2_create
macvlan_create $h2
run_test $h1 $h2 $skip_ptp $no_unicast_flt "$h2"
macvlan_destroy
h2_destroy
h1_destroy
}
test_bridge()
{
local no_unicast_flt=true
local vlan_filtering=$1
local skip_ptp=true
h1_create
bridge_create $vlan_filtering
simple_if_init br0 $H2_IPV4/24 $H2_IPV6/64
macvlan_create br0
run_test $h1 br0 $skip_ptp $no_unicast_flt \
"vlan_filtering=$vlan_filtering bridge"
macvlan_destroy
simple_if_fini br0 $H2_IPV4/24 $H2_IPV6/64
bridge_destroy
h1_destroy
}
vlan_unaware_bridge()
{
test_bridge 0
}
vlan_aware_bridge()
{
test_bridge 1
}
test_vlan()
{
local no_unicast_flt=true
local skip_ptp=false
if [ $(has_unicast_flt $h2) = yes ]; then
no_unicast_flt=false
fi
h1_vlan_create
h2_vlan_create
macvlan_create $h2.100
run_test $h1.100 $h2.100 $skip_ptp $no_unicast_flt "VLAN upper"
macvlan_destroy
h2_vlan_destroy
h1_vlan_destroy
}
vlan_over_bridged_port()
{
local no_unicast_flt=true
local vlan_filtering=$1
local skip_ptp=false
# br_manage_promisc() will not force a single vlan_filtering port to
# promiscuous mode, so we should still expect unicast filtering to take
# place if the device can do it.
if [ $(has_unicast_flt $h2) = yes ] && [ $vlan_filtering = 1 ]; then
no_unicast_flt=false
fi
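# Illustrative only, not part of the test: the resulting port state can
# be eyeballed with something like
#   ip link show $h2 | grep -q PROMISC && echo promisc || echo filtering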
h1_vlan_create
h2_vlan_create
bridge_create $vlan_filtering
macvlan_create $h2.100
run_test $h1.100 $h2.100 $skip_ptp $no_unicast_flt \
"VLAN over vlan_filtering=$vlan_filtering bridged port"
macvlan_destroy
bridge_destroy
h2_vlan_destroy
h1_vlan_destroy
}
vlan_over_vlan_unaware_bridged_port()
{
vlan_over_bridged_port 0
}
vlan_over_vlan_aware_bridged_port()
{
vlan_over_bridged_port 1
}
vlan_over_bridge()
{
local no_unicast_flt=true
local vlan_filtering=$1
local skip_ptp=true
h1_vlan_create
bridge_create $vlan_filtering
simple_if_init br0
vlan_create br0 100 vbr0 $H2_IPV4/24 $H2_IPV6/64
macvlan_create br0.100
if [ $vlan_filtering = 1 ]; then
bridge vlan add dev $h2 vid 100 master
bridge vlan add dev br0 vid 100 self
fi
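# "master" installs VID 100 in the VLAN filter of bridge port $h2,
# while "self" installs it on the br0 device itself, so the br0.100
# upper can pass tagged traffic when vlan_filtering=1.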
run_test $h1.100 br0.100 $skip_ptp $no_unicast_flt \
"VLAN over vlan_filtering=$vlan_filtering bridge"
if [ $vlan_filtering = 1 ]; then
bridge vlan del dev br0 vid 100 self
bridge vlan del dev $h2 vid 100 master
fi
macvlan_destroy
vlan_destroy br0 100
simple_if_fini br0
bridge_destroy
h1_vlan_destroy
}
vlan_over_vlan_unaware_bridge()
{
vlan_over_bridge 0
}
vlan_over_vlan_aware_bridge()
{
vlan_over_bridge 1
}
cleanup()
{
pre_cleanup


@@ -436,9 +436,10 @@ reset_with_tcp_filter()
local ns="${!1}"
local src="${2}"
local target="${3}"
local chain="${4:-INPUT}"
if ! ip netns exec "${ns}" ${iptables} \
-A "${chain}" \
-s "${src}" \
-p tcp \
-j "${target}"; then
@@ -3058,6 +3059,7 @@ fullmesh_tests()
pm_nl_set_limits $ns1 1 3
pm_nl_set_limits $ns2 1 3
pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
pm_nl_add_endpoint $ns2 10.0.1.2 flags subflow,fullmesh
fullmesh=1 speed=slow \
run_tests $ns1 $ns2 10.0.1.1
chk_join_nr 3 3 3
@@ -3571,10 +3573,10 @@ endpoint_tests()
mptcp_lib_kill_wait $tests_pid
fi
if reset "delete and re-add" &&
if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 0 2
pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
test_linkfail=4 speed=20 \
run_tests $ns1 $ns2 10.0.1.1 &
@@ -3591,19 +3593,37 @@ endpoint_tests()
chk_subflow_nr "after delete" 1
chk_mptcp_info subflows 0 subflows 0
pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow
wait_mpj $ns2
chk_subflow_nr "after re-add" 2
chk_mptcp_info subflows 1 subflows 1
pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
wait_attempt_fail $ns2
chk_subflow_nr "after new reject" 2
chk_mptcp_info subflows 1 subflows 1
ip netns exec "${ns2}" ${iptables} -D OUTPUT -s "10.0.3.2" -p tcp -j REJECT
pm_nl_del_endpoint $ns2 3 10.0.3.2
pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
wait_mpj $ns2
chk_subflow_nr "after no reject" 3
chk_mptcp_info subflows 2 subflows 2
mptcp_lib_kill_wait $tests_pid
chk_join_nr 3 3 3
chk_rm_nr 1 1
fi
# remove and re-add
if reset "delete re-add signal" &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 2 2
pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
# multicast (all-hosts) IP: no packet for this address will be received on ns1
pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal
test_linkfail=4 speed=20 \
run_tests $ns1 $ns2 10.0.1.1 &
local tests_pid=$!
@@ -3615,17 +3635,53 @@ endpoint_tests()
chk_mptcp_info subflows 1 subflows 1
pm_nl_del_endpoint $ns1 1 10.0.2.1
pm_nl_del_endpoint $ns1 2 224.0.0.1
sleep 0.5
chk_subflow_nr "after delete" 1
chk_mptcp_info subflows 0 subflows 0
pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
pm_nl_add_endpoint $ns1 10.0.3.1 id 2 flags signal
wait_mpj $ns2
chk_subflow_nr "after re-add" 2
chk_mptcp_info subflows 1 subflows 1
chk_subflow_nr "after re-add" 3
chk_mptcp_info subflows 2 subflows 2
mptcp_lib_kill_wait $tests_pid
chk_join_nr 3 3 3
chk_add_nr 4 4
chk_rm_nr 2 1 invert
fi
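# The block above deletes both signal endpoints (IDs 1 and 2, the
# latter bound to the unreachable group address) and re-adds two
# endpoints under the same IDs, verifying that the IDs of removed
# endpoints can be reused.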
# flush and re-add
if reset_with_tcp_filter "flush re-add" ns2 10.0.3.2 REJECT OUTPUT &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
pm_nl_set_limits $ns1 0 2
pm_nl_set_limits $ns2 1 2
# multicast (all-hosts) IP: no packet for this address will be received on ns1
pm_nl_add_endpoint $ns1 224.0.0.1 id 2 flags signal
pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
test_linkfail=4 speed=20 \
run_tests $ns1 $ns2 10.0.1.1 &
local tests_pid=$!
wait_attempt_fail $ns2
chk_subflow_nr "before flush" 1
chk_mptcp_info subflows 0 subflows 0
pm_nl_flush_endpoint $ns2
pm_nl_flush_endpoint $ns1
wait_rm_addr $ns2 0
ip netns exec "${ns2}" ${iptables} -D OUTPUT -s "10.0.3.2" -p tcp -j REJECT
pm_nl_add_endpoint $ns2 10.0.3.2 id 3 flags subflow
wait_mpj $ns2
pm_nl_add_endpoint $ns1 10.0.3.1 id 2 flags signal
wait_mpj $ns2
mptcp_lib_kill_wait $tests_pid
chk_join_nr 2 2 2
chk_add_nr 2 2
chk_rm_nr 1 0 invert
fi
}
# [$1: error message]


@@ -7,8 +7,6 @@ source net_helper.sh
readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)"
# set global exit status, but never reset nonzero one.
check_err()
{
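# check_err()'s body falls outside this hunk; per the comment above it
# presumably reads roughly as follows (a sketch, not the verbatim code):
#   if [ $ret -eq 0 ]; then
#       ret=$1
#   fi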
@@ -38,7 +36,7 @@ cfg_veth() {
ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24
ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad
ip -netns "${PEER_NS}" link set dev veth1 up
ip -n "${PEER_NS}" link set veth1 xdp object ${BPF_FILE} section xdp
ip netns exec "${PEER_NS}" ethtool -K veth1 gro on
}
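# Note: enabling the gro feature with ethtool is sufficient for veth to
# exercise the GRO path here; no XDP program needs to be attached.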
run_one() {
@@ -46,17 +44,19 @@ run_one() {
local -r all="$@"
local -r tx_args=${all%rx*}
local -r rx_args=${all#*rx}
local ret=0
cfg_veth
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} && \
echo "ok" || \
echo "failed" &
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} &
local PID1=$!
wait_local_port_listen ${PEER_NS} 8000 udp
./udpgso_bench_tx ${tx_args}
check_err $?
wait ${PID1}
check_err $?
[ "$ret" -eq 0 ] && echo "ok" || echo "failed"
return $ret
}
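# Recording the receiver PID and funnelling every exit status through
# check_err means a receiver failure now shows up in the function's
# return value rather than being lost with the background job.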
@@ -73,6 +73,7 @@ run_one_nat() {
local -r all="$@"
local -r tx_args=${all%rx*}
local -r rx_args=${all#*rx}
local ret=0
if [[ ${tx_args} = *-4* ]]; then
ipt_cmd=iptables
@@ -93,16 +94,17 @@ run_one_nat() {
# ... so that GRO will match the UDP_GRO enabled socket, but packets
# will land on the 'plain' one
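# The NAT rule this refers to is installed just above (outside the
# hunk); it rewrites the destination address so the GRO-enabled socket
# matches the flow while delivery happens on the plain socket.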
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -G ${family} -b ${addr1} -n 0 &
local PID1=$!
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${family} -b ${addr2%/*} ${rx_args} &
local PID2=$!
wait_local_port_listen "${PEER_NS}" 8000 udp
./udpgso_bench_tx ${tx_args}
check_err $?
kill -INT ${PID1}
wait ${PID2}
check_err $?
[ "$ret" -eq 0 ] && echo "ok" || echo "failed"
return $ret
}
@@ -111,20 +113,26 @@ run_one_2sock() {
local -r all="$@"
local -r tx_args=${all%rx*}
local -r rx_args=${all#*rx}
local ret=0
cfg_veth
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} -p 12345 &
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} && \
echo "ok" || \
echo "failed" &
local PID1=$!
ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} &
local PID2=$!
wait_local_port_listen "${PEER_NS}" 12345 udp
./udpgso_bench_tx ${tx_args} -p 12345
check_err $?
wait_local_port_listen "${PEER_NS}" 8000 udp
./udpgso_bench_tx ${tx_args}
check_err $?
wait ${PID1}
check_err $?
wait ${PID2}
check_err $?
[ "$ret" -eq 0 ] && echo "ok" || echo "failed"
return $ret
}
@@ -196,11 +204,6 @@ run_all() {
return $ret
}
if [[ $# -eq 0 ]]; then
run_all
elif [[ $1 == "__subprocess" ]]; then


@@ -143,7 +143,6 @@ class PluginMgr:
except Exception as ee:
print('exception {} in call to pre_case for {} plugin'.
format(ee, pgn_inst.__class__))
print('testid is {}'.format(caseinfo['id']))
raise