Networking fixes for 5.17-rc6, including fixes from bpf and netfilter.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmIX2ssACgkQMUZtbf5S
IrvImQ//b+JILp0M/jz6q25n5U7qxuNmJypq659kR19jnwGH520XTwnFE9/FB3gw
UnlCb28+jdMX1HHQJaUKkKYTilfFvyMoRPAMbLFO51Y02dVALTjD7C2wJ1AyEiTV
eKhOcGHLbDzLom3+FnK566adOlGsIZfr4bR4zlGcthU0wTvU6S2K3WTkVJMASJzJ
JizNgN+SvpdpmnYj+wsg2cj/5W4R/IPdxCrkZMkEMomJnVxA61RV+wsCcsT+Cjrf
wu+cknUiVIGQNtCT4hz8VZ3tOoAeX+Xg/4YbaxVxnvunTQh+D+eIza40IEqewlEq
KFOXGuPXsse6ZJ7IqVZt1hgBxJ8bpItxEBNSgU3KqJKMTTKOpWWjZxkTYeIERMry
Ywb/ciZ7pwbo2CNhICh6+xefQvGbU0jgsiMgSkQvXZ9b9IsdPM4bwgvjFsyqnEMz
0HVpqN02F7MM44mD4P0TQct9OSemu6sVqQFrpk8+CvPfaSEctCv/iJ6WR/xxUgSp
uPvKYlv7BqOKZtqzGOk215WEvTUf8dy9cxcQwoYBOBxs8h2XQSRXEWCsGWCOg5+V
xLnlnreXHXKWcUrAmsJlZh6XmWGk9lBDqLX7hKCYZzMgU8nNopSDKKcDpVDkaBzC
DrK8Y3y+lBhpBwCHt/GZw8Qg9aDDsczFpOfPZBVJy+jH+7AGK7M=
=LT/x
-----END PGP SIGNATURE-----

Merge tag 'net-5.17-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf and netfilter.
  Current release - regressions:

   - bpf: fix crash due to out of bounds access into reg2btf_ids
   - mvpp2: always set port pcs ops, avoid null-deref
   - eth: marvell: fix driver load from initrd
   - eth: intel: revert "Fix reset bw limit when DCB enabled with 1 TC"

  Current release - new code bugs:

   - mptcp: fix race in overlapping signal events

  Previous releases - regressions:

   - xen-netback: revert hotplug-status changes causing devices to not
     be configured
   - dsa:
      - avoid call to __dev_set_promiscuity() while rtnl_mutex isn't held
      - fix panic when removing unoffloaded port from bridge
   - dsa: microchip: fix bridging with more than two member ports

  Previous releases - always broken:

   - bpf:
      - fix crash due to incorrect copy_map_value when both spin lock
        and timer are present in a single value
      - fix a bpf_timer initialization issue with clang
      - do not try bpf_msg_push_data with len 0
      - add schedule points in batch ops
   - nf_tables:
      - unregister flowtable hooks on netns exit
      - correct flow offload action array size
      - fix a couple of memory leaks
   - vsock: don't check owner in vhost_vsock_stop() while releasing
   - gso: do not skip outer ip header in case of ipip and net_failover
   - smc: use a mutex for locking "struct smc_pnettable"
   - openvswitch: fix setting ipv6 fields causing hw csum failure
   - mptcp: fix race in incoming ADD_ADDR option processing
   - sysfs: add check for netdevice being present to speed_show
   - sched: act_ct: fix flow table lookup after ct clear or switching
     zones
   - eth: intel: fixes for SR-IOV forwarding offloads
   - eth: broadcom: fixes for selftests and error recovery
   - eth: mellanox: flow steering and SR-IOV forwarding fixes

  Misc:

   - make __pskb_pull_tail() & pskb_carve_frag_list() drop_monitor
     friends not report freed skbs as drops
   - force inlining of checksum functions in net/checksum.h"

* tag 'net-5.17-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (85 commits)
  net: mv643xx_eth: process retval from of_get_mac_address
  ping: remove pr_err from ping_lookup
  Revert "i40e: Fix reset bw limit when DCB enabled with 1 TC"
  openvswitch: Fix setting ipv6 fields causing hw csum failure
  ipv6: prevent a possible race condition with lifetimes
  net/smc: Use a mutex for locking "struct smc_pnettable"
  bnx2x: fix driver load from initrd
  Revert "xen-netback: Check for hotplug-status existence before watching"
  Revert "xen-netback: remove 'hotplug-status' once it has served its purpose"
  net/mlx5e: Fix VF min/max rate parameters interchange mistake
  net/mlx5e: Add missing increment of count
  net/mlx5e: MPLSoUDP decap, fix check for unsupported matches
  net/mlx5e: Fix MPLSoUDP encap to use MPLS action information
  net/mlx5e: Add feature check for set fec counters
  net/mlx5e: TC, Skip redundant ct clear actions
  net/mlx5e: TC, Reject rules with forward and drop actions
  net/mlx5e: TC, Reject rules with drop and modify hdr action
  net/mlx5e: kTLS, Use CHECKSUM_UNNECESSARY for device-offloaded packets
  net/mlx5e: Fix wrong return value on ioctl EEPROM query failure
  net/mlx5: Fix possible deadlock on rule deletion
  ...
commit f672ff9123
@@ -16073,8 +16073,8 @@ F: Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
F:	drivers/mtd/nand/raw/qcom_nandc.c

QUALCOMM RMNET DRIVER
M:	Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
M:	Sean Tranchetti <stranche@codeaurora.org>
M:	Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
M:	Sean Tranchetti <quic_stranche@quicinc.com>
L:	netdev@vger.kernel.org
S:	Maintained
F:	Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
@@ -16366,6 +16366,7 @@ F: drivers/watchdog/realtek_otto_wdt.c

REALTEK RTL83xx SMI DSA ROUTER CHIPS
M:	Linus Walleij <linus.walleij@linaro.org>
M:	Alvin Šipraga <alsi@bang-olufsen.dk>
S:	Maintained
F:	Documentation/devicetree/bindings/net/dsa/realtek-smi.txt
F:	drivers/net/dsa/realtek-smi*
@@ -26,7 +26,7 @@ void ksz_update_port_member(struct ksz_device *dev, int port)
	struct dsa_switch *ds = dev->ds;
	u8 port_member = 0, cpu_port;
	const struct dsa_port *dp;
	int i;
	int i, j;

	if (!dsa_is_user_port(ds, port))
		return;
@@ -45,13 +45,33 @@ void ksz_update_port_member(struct ksz_device *dev, int port)
			continue;
		if (!dsa_port_bridge_same(dp, other_dp))
			continue;
		if (other_p->stp_state != BR_STATE_FORWARDING)
			continue;

		if (other_p->stp_state == BR_STATE_FORWARDING &&
		    p->stp_state == BR_STATE_FORWARDING) {
		if (p->stp_state == BR_STATE_FORWARDING) {
			val |= BIT(port);
			port_member |= BIT(i);
		}

		/* Retain port [i]'s relationship to other ports than [port] */
		for (j = 0; j < ds->num_ports; j++) {
			const struct dsa_port *third_dp;
			struct ksz_port *third_p;

			if (j == i)
				continue;
			if (j == port)
				continue;
			if (!dsa_is_user_port(ds, j))
				continue;
			third_p = &dev->ports[j];
			if (third_p->stp_state != BR_STATE_FORWARDING)
				continue;
			third_dp = dsa_to_port(ds, j);
			if (dsa_port_bridge_same(other_dp, third_dp))
				val |= BIT(j);
		}

		dev->dev_ops->cfg_port_member(dev, i, val | cpu_port);
	}
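The ksz hunk above fixes bridging with more than two member ports: when port [port]'s state changes, every other member port i must still retain its forwarding relationship to each third port j in the same bridge, hence the new inner loop. A minimal userspace sketch of that recomputation, using a hypothetical port/bridge layout rather than the driver's real structures:

/* Illustrative analogue of the membership recomputation; fields are
 * hypothetical, not the driver's API. */
#include <stdint.h>
#include <stdio.h>

struct sw_port {
	int bridge_id;	/* -1 = not bridged */
	int forwarding;	/* STP forwarding state */
};

/* Membership mask for member port i: every other forwarding port in the
 * same bridge, mirroring the j-loop the fix introduces. */
static uint8_t port_member_mask(const struct sw_port *ports, int nports, int i)
{
	uint8_t val = 0;
	int j;

	for (j = 0; j < nports; j++) {
		if (j == i || ports[j].bridge_id != ports[i].bridge_id)
			continue;
		if (ports[i].bridge_id >= 0 && ports[j].forwarding)
			val |= 1u << j;
	}
	return val;
}

int main(void)
{
	struct sw_port ports[] = { { 0, 1 }, { 0, 1 }, { 0, 1 }, { -1, 0 } };

	for (int i = 0; i < 4; i++)
		printf("port %d member mask: 0x%02x\n", i,
		       port_member_mask(ports, 4, i));
	return 0;
}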
@@ -100,6 +100,9 @@ MODULE_LICENSE("GPL");
MODULE_FIRMWARE(FW_FILE_NAME_E1);
MODULE_FIRMWARE(FW_FILE_NAME_E1H);
MODULE_FIRMWARE(FW_FILE_NAME_E2);
MODULE_FIRMWARE(FW_FILE_NAME_E1_V15);
MODULE_FIRMWARE(FW_FILE_NAME_E1H_V15);
MODULE_FIRMWARE(FW_FILE_NAME_E2_V15);

int bnx2x_num_queues;
module_param_named(num_queues, bnx2x_num_queues, int, 0444);
@@ -4747,8 +4747,10 @@ static int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, u16 vnic_id)
		return rc;

	req->vnic_id = cpu_to_le32(vnic->fw_vnic_id);
	req->num_mc_entries = cpu_to_le32(vnic->mc_list_count);
	req->mc_tbl_addr = cpu_to_le64(vnic->mc_list_mapping);
	if (vnic->rx_mask & CFA_L2_SET_RX_MASK_REQ_MASK_MCAST) {
		req->num_mc_entries = cpu_to_le32(vnic->mc_list_count);
		req->mc_tbl_addr = cpu_to_le64(vnic->mc_list_mapping);
	}
	req->mask = cpu_to_le32(vnic->rx_mask);
	return hwrm_req_send_silent(bp, req);
}
@@ -7787,6 +7789,19 @@ static int bnxt_map_fw_health_regs(struct bnxt *bp)
	return 0;
}

static void bnxt_remap_fw_health_regs(struct bnxt *bp)
{
	if (!bp->fw_health)
		return;

	if (bp->fw_cap & BNXT_FW_CAP_ERROR_RECOVERY) {
		bp->fw_health->status_reliable = true;
		bp->fw_health->resets_reliable = true;
	} else {
		bnxt_try_map_fw_health_reg(bp);
	}
}

static int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp)
{
	struct bnxt_fw_health *fw_health = bp->fw_health;
@@ -8639,6 +8654,9 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
	vnic->uc_filter_count = 1;

	vnic->rx_mask = 0;
	if (test_bit(BNXT_STATE_HALF_OPEN, &bp->state))
		goto skip_rx_mask;

	if (bp->dev->flags & IFF_BROADCAST)
		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;

@@ -8648,7 +8666,7 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
	if (bp->dev->flags & IFF_ALLMULTI) {
		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
		vnic->mc_list_count = 0;
	} else {
	} else if (bp->dev->flags & IFF_MULTICAST) {
		u32 mask = 0;

		bnxt_mc_list_updated(bp, &mask);
@@ -8659,6 +8677,7 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
	if (rc)
		goto err_out;

skip_rx_mask:
	rc = bnxt_hwrm_set_coal(bp);
	if (rc)
		netdev_warn(bp->dev, "HWRM set coalescing failure rc: %x\n",
@@ -9850,8 +9869,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
			resc_reinit = true;
		if (flags & FUNC_DRV_IF_CHANGE_RESP_FLAGS_HOT_FW_RESET_DONE)
			fw_reset = true;
		else if (bp->fw_health && !bp->fw_health->status_reliable)
			bnxt_try_map_fw_health_reg(bp);
		else
			bnxt_remap_fw_health_regs(bp);

	if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
		netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
@@ -10330,13 +10349,15 @@ int bnxt_half_open_nic(struct bnxt *bp)
		goto half_open_err;
	}

	rc = bnxt_alloc_mem(bp, false);
	rc = bnxt_alloc_mem(bp, true);
	if (rc) {
		netdev_err(bp->dev, "bnxt_alloc_mem err: %x\n", rc);
		goto half_open_err;
	}
	rc = bnxt_init_nic(bp, false);
	set_bit(BNXT_STATE_HALF_OPEN, &bp->state);
	rc = bnxt_init_nic(bp, true);
	if (rc) {
		clear_bit(BNXT_STATE_HALF_OPEN, &bp->state);
		netdev_err(bp->dev, "bnxt_init_nic err: %x\n", rc);
		goto half_open_err;
	}
@@ -10344,7 +10365,7 @@ int bnxt_half_open_nic(struct bnxt *bp)

half_open_err:
	bnxt_free_skbs(bp);
	bnxt_free_mem(bp, false);
	bnxt_free_mem(bp, true);
	dev_close(bp->dev);
	return rc;
}
@@ -10354,9 +10375,10 @@ half_open_err:
 */
void bnxt_half_close_nic(struct bnxt *bp)
{
	bnxt_hwrm_resource_free(bp, false, false);
	bnxt_hwrm_resource_free(bp, false, true);
	bnxt_free_skbs(bp);
	bnxt_free_mem(bp, false);
	bnxt_free_mem(bp, true);
	clear_bit(BNXT_STATE_HALF_OPEN, &bp->state);
}

void bnxt_reenable_sriov(struct bnxt *bp)
@@ -10772,7 +10794,7 @@ static void bnxt_set_rx_mode(struct net_device *dev)
	if (dev->flags & IFF_ALLMULTI) {
		mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
		vnic->mc_list_count = 0;
	} else {
	} else if (dev->flags & IFF_MULTICAST) {
		mc_update = bnxt_mc_list_updated(bp, &mask);
	}

@@ -10849,9 +10871,10 @@ skip_uc:
	    !bnxt_promisc_ok(bp))
		vnic->rx_mask &= ~CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS;
	rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
	if (rc && vnic->mc_list_count) {
	if (rc && (vnic->rx_mask & CFA_L2_SET_RX_MASK_REQ_MASK_MCAST)) {
		netdev_info(bp->dev, "Failed setting MC filters rc: %d, turning on ALL_MCAST mode\n",
			    rc);
		vnic->rx_mask &= ~CFA_L2_SET_RX_MASK_REQ_MASK_MCAST;
		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
		vnic->mc_list_count = 0;
		rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
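The bnxt_set_rx_mode hunks above program the multicast table only when the MCAST bit is actually set, and degrade to ALL_MCAST when the firmware rejects the list. A standalone sketch of that fallback, with hwrm_set_rx_mask() as an invented stand-in for the real firmware request:

/* Sketch only: the MASK_* values and the failure model are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define MASK_MCAST	0x1u
#define MASK_ALL_MCAST	0x2u

static int hwrm_set_rx_mask(uint32_t mask, int mc_count)
{
	/* pretend the firmware rejects large MC lists */
	return ((mask & MASK_MCAST) && mc_count > 4) ? -1 : 0;
}

static int set_rx_mode(uint32_t *mask, int *mc_count)
{
	int rc = hwrm_set_rx_mask(*mask, *mc_count);

	if (rc && (*mask & MASK_MCAST)) {
		/* same recovery shape as the driver: drop the list, accept
		 * all multicast, retry once */
		*mask &= ~MASK_MCAST;
		*mask |= MASK_ALL_MCAST;
		*mc_count = 0;
		rc = hwrm_set_rx_mask(*mask, *mc_count);
	}
	return rc;
}

int main(void)
{
	uint32_t mask = MASK_MCAST;
	int mc_count = 16;

	printf("rc=%d mask=0x%x mc=%d\n",
	       set_rx_mode(&mask, &mc_count), mask, mc_count);
	return 0;
}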
@@ -1921,6 +1921,7 @@ struct bnxt {
#define BNXT_STATE_RECOVER		12
#define BNXT_STATE_FW_NON_FATAL_COND	13
#define BNXT_STATE_FW_ACTIVATE_RESET	14
#define BNXT_STATE_HALF_OPEN		15	/* For offline ethtool tests */

#define BNXT_NO_FW_ACCESS(bp)					\
	(test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) ||	\
@@ -367,6 +367,16 @@ bnxt_dl_livepatch_report_err(struct bnxt *bp, struct netlink_ext_ack *extack,
	}
}

/* Live patch status in NVM */
#define BNXT_LIVEPATCH_NOT_INSTALLED	0
#define BNXT_LIVEPATCH_INSTALLED	FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL
#define BNXT_LIVEPATCH_REMOVED		FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE
#define BNXT_LIVEPATCH_MASK		(FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL | \
					 FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE)
#define BNXT_LIVEPATCH_ACTIVATED	BNXT_LIVEPATCH_MASK

#define BNXT_LIVEPATCH_STATE(flags)	((flags) & BNXT_LIVEPATCH_MASK)

static int
bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
{
@@ -374,8 +384,9 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
	struct hwrm_fw_livepatch_query_input *query_req;
	struct hwrm_fw_livepatch_output *patch_resp;
	struct hwrm_fw_livepatch_input *patch_req;
	u16 flags, live_patch_state;
	bool activated = false;
	u32 installed = 0;
	u16 flags;
	u8 target;
	int rc;

@@ -394,7 +405,6 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
		hwrm_req_drop(bp, query_req);
		return rc;
	}
	patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_ACTIVATE;
	patch_req->loadtype = FW_LIVEPATCH_REQ_LOADTYPE_NVM_INSTALL;
	patch_resp = hwrm_req_hold(bp, patch_req);

@@ -407,12 +417,20 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
		}

		flags = le16_to_cpu(query_resp->status_flags);
		if (~flags & FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_INSTALL)
		live_patch_state = BNXT_LIVEPATCH_STATE(flags);

		if (live_patch_state == BNXT_LIVEPATCH_NOT_INSTALLED)
			continue;
		if ((flags & FW_LIVEPATCH_QUERY_RESP_STATUS_FLAGS_ACTIVE) &&
		    !strncmp(query_resp->active_ver, query_resp->install_ver,
			     sizeof(query_resp->active_ver)))

		if (live_patch_state == BNXT_LIVEPATCH_ACTIVATED) {
			activated = true;
			continue;
		}

		if (live_patch_state == BNXT_LIVEPATCH_INSTALLED)
			patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_ACTIVATE;
		else if (live_patch_state == BNXT_LIVEPATCH_REMOVED)
			patch_req->opcode = FW_LIVEPATCH_REQ_OPCODE_DEACTIVATE;

		patch_req->fw_target = target;
		rc = hwrm_req_send(bp, patch_req);
@@ -424,8 +442,13 @@ bnxt_dl_livepatch_activate(struct bnxt *bp, struct netlink_ext_ack *extack)
	}

	if (!rc && !installed) {
		NL_SET_ERR_MSG_MOD(extack, "No live patches found");
		rc = -ENOENT;
		if (activated) {
			NL_SET_ERR_MSG_MOD(extack, "Live patch already activated");
			rc = -EEXIST;
		} else {
			NL_SET_ERR_MSG_MOD(extack, "No live patches found");
			rc = -ENOENT;
		}
	}
	hwrm_req_drop(bp, query_req);
	hwrm_req_drop(bp, patch_req);
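The livepatch rework above collapses the INSTALL and ACTIVE status flags into a four-way state (not installed / installed / removed / activated) and chooses ACTIVATE or DEACTIVATE accordingly. A sketch of the two-bit decode, with illustrative flag values rather than the firmware's:

#include <stdint.h>
#include <stdio.h>

#define FLAG_INSTALL	0x1u
#define FLAG_ACTIVE	0x2u
#define STATE_MASK	(FLAG_INSTALL | FLAG_ACTIVE)

enum lp_state {
	LP_NOT_INSTALLED = 0,
	LP_INSTALLED = FLAG_INSTALL,
	LP_REMOVED = FLAG_ACTIVE,	/* active but no longer in NVM */
	LP_ACTIVATED = STATE_MASK,
};

static const char *lp_action(uint16_t flags)
{
	switch (flags & STATE_MASK) {
	case LP_NOT_INSTALLED:	return "skip";
	case LP_INSTALLED:	return "activate";
	case LP_REMOVED:	return "deactivate";
	case LP_ACTIVATED:	return "already active";
	}
	return "?";
}

int main(void)
{
	for (uint16_t f = 0; f < 4; f++)
		printf("flags=0x%x -> %s\n", f, lp_action(f));
	return 0;
}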
@@ -25,6 +25,7 @@
#include "bnxt_hsi.h"
#include "bnxt.h"
#include "bnxt_hwrm.h"
#include "bnxt_ulp.h"
#include "bnxt_xdp.h"
#include "bnxt_ptp.h"
#include "bnxt_ethtool.h"
@@ -1969,6 +1970,9 @@ static int bnxt_get_fecparam(struct net_device *dev,
	case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS272_IEEE_ACTIVE:
		fec->active_fec |= ETHTOOL_FEC_LLRS;
		break;
	case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_NONE_ACTIVE:
		fec->active_fec |= ETHTOOL_FEC_OFF;
		break;
	}
	return 0;
}
@@ -3454,7 +3458,7 @@ static int bnxt_run_loopback(struct bnxt *bp)
	if (!skb)
		return -ENOMEM;
	data = skb_put(skb, pkt_size);
	eth_broadcast_addr(data);
	ether_addr_copy(&data[i], bp->dev->dev_addr);
	i += ETH_ALEN;
	ether_addr_copy(&data[i], bp->dev->dev_addr);
	i += ETH_ALEN;
@@ -3548,9 +3552,12 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
	if (!offline) {
		bnxt_run_fw_tests(bp, test_mask, &test_results);
	} else {
		rc = bnxt_close_nic(bp, false, false);
		if (rc)
		bnxt_ulp_stop(bp);
		rc = bnxt_close_nic(bp, true, false);
		if (rc) {
			bnxt_ulp_start(bp, rc);
			return;
		}
		bnxt_run_fw_tests(bp, test_mask, &test_results);

		buf[BNXT_MACLPBK_TEST_IDX] = 1;
@@ -3560,6 +3567,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
		if (rc) {
			bnxt_hwrm_mac_loopback(bp, false);
			etest->flags |= ETH_TEST_FL_FAILED;
			bnxt_ulp_start(bp, rc);
			return;
		}
		if (bnxt_run_loopback(bp))
@@ -3585,7 +3593,8 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
		}
		bnxt_hwrm_phy_loopback(bp, false, false);
		bnxt_half_close_nic(bp);
		rc = bnxt_open_nic(bp, false, true);
		rc = bnxt_open_nic(bp, true, true);
		bnxt_ulp_start(bp, rc);
	}
	if (rc || bnxt_test_irq(bp)) {
		buf[BNXT_IRQ_TEST_IDX] = 1;
@@ -644,17 +644,23 @@ static int __hwrm_send(struct bnxt *bp, struct bnxt_hwrm_ctx *ctx)

		/* Last byte of resp contains valid bit */
		valid = ((u8 *)ctx->resp) + len - 1;
		for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; j++) {
		for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; ) {
			/* make sure we read from updated DMA memory */
			dma_rmb();
			if (*valid)
				break;
			usleep_range(1, 5);
			if (j < 10) {
				udelay(1);
				j++;
			} else {
				usleep_range(20, 30);
				j += 20;
			}
		}

		if (j >= HWRM_VALID_BIT_DELAY_USEC) {
			hwrm_err(bp, ctx, "Error (timeout: %u) msg {0x%x 0x%x} len:%d v:%d\n",
				 hwrm_total_timeout(i), req_type,
				 hwrm_total_timeout(i) + j, req_type,
				 le16_to_cpu(ctx->req->seq_id), len, *valid);
			goto exit;
		}
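The polling rewrite above replaces one fixed 1-5 us sleep per loop iteration with a staged wait: busy 1 us delays for the first 10 us, then roughly 20-30 us sleeps, while j now counts elapsed microseconds so the enlarged budget and the timeout message stay consistent. A userspace sketch of the same shape, with stubbed delay helpers standing in for the kernel's:

#include <stdio.h>

#define VALID_BIT_DELAY_USEC 50000

static void udelay(unsigned int us) { (void)us; /* busy-wait stand-in */ }
static void usleep_range(unsigned int lo, unsigned int hi) { (void)lo; (void)hi; }

static int wait_valid(volatile const unsigned char *valid)
{
	unsigned int j;

	for (j = 0; j < VALID_BIT_DELAY_USEC; ) {
		if (*valid)
			return 0;
		if (j < 10) {
			udelay(1);
			j++;
		} else {
			usleep_range(20, 30);
			j += 20;	/* count sleep time toward the budget */
		}
	}
	return -1;	/* timed out after ~VALID_BIT_DELAY_USEC us */
}

int main(void)
{
	unsigned char valid = 1;

	printf("wait_valid: %d\n", wait_valid(&valid));
	return 0;
}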
@@ -90,7 +90,7 @@ static inline unsigned int hwrm_total_timeout(unsigned int n)
}


#define HWRM_VALID_BIT_DELAY_USEC	150
#define HWRM_VALID_BIT_DELAY_USEC	50000

static inline bool bnxt_cfa_hwrm_message(u16 req_type)
{
@@ -989,117 +989,6 @@ static int ftgmac100_alloc_rx_buffers(struct ftgmac100 *priv)
	return 0;
}

static void ftgmac100_adjust_link(struct net_device *netdev)
{
	struct ftgmac100 *priv = netdev_priv(netdev);
	struct phy_device *phydev = netdev->phydev;
	bool tx_pause, rx_pause;
	int new_speed;

	/* We store "no link" as speed 0 */
	if (!phydev->link)
		new_speed = 0;
	else
		new_speed = phydev->speed;

	/* Grab pause settings from PHY if configured to do so */
	if (priv->aneg_pause) {
		rx_pause = tx_pause = phydev->pause;
		if (phydev->asym_pause)
			tx_pause = !rx_pause;
	} else {
		rx_pause = priv->rx_pause;
		tx_pause = priv->tx_pause;
	}

	/* Link hasn't changed, do nothing */
	if (phydev->speed == priv->cur_speed &&
	    phydev->duplex == priv->cur_duplex &&
	    rx_pause == priv->rx_pause &&
	    tx_pause == priv->tx_pause)
		return;

	/* Print status if we have a link or we had one and just lost it,
	 * don't print otherwise.
	 */
	if (new_speed || priv->cur_speed)
		phy_print_status(phydev);

	priv->cur_speed = new_speed;
	priv->cur_duplex = phydev->duplex;
	priv->rx_pause = rx_pause;
	priv->tx_pause = tx_pause;

	/* Link is down, do nothing else */
	if (!new_speed)
		return;

	/* Disable all interrupts */
	iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);

	/* Reset the adapter asynchronously */
	schedule_work(&priv->reset_task);
}

static int ftgmac100_mii_probe(struct net_device *netdev)
{
	struct ftgmac100 *priv = netdev_priv(netdev);
	struct platform_device *pdev = to_platform_device(priv->dev);
	struct device_node *np = pdev->dev.of_node;
	struct phy_device *phydev;
	phy_interface_t phy_intf;
	int err;

	/* Default to RGMII. It's a gigabit part after all */
	err = of_get_phy_mode(np, &phy_intf);
	if (err)
		phy_intf = PHY_INTERFACE_MODE_RGMII;

	/* Aspeed only supports these. I don't know about other IP
	 * block vendors so I'm going to just let them through for
	 * now. Note that this is only a warning if for some obscure
	 * reason the DT really means to lie about it or it's a newer
	 * part we don't know about.
	 *
	 * On the Aspeed SoC there are additionally straps and SCU
	 * control bits that could tell us what the interface is
	 * (or allow us to configure it while the IP block is held
	 * in reset). For now I chose to keep this driver away from
	 * those SoC specific bits and assume the device-tree is
	 * right and the SCU has been configured properly by pinmux
	 * or the firmware.
	 */
	if (priv->is_aspeed && !(phy_interface_mode_is_rgmii(phy_intf))) {
		netdev_warn(netdev,
			    "Unsupported PHY mode %s !\n",
			    phy_modes(phy_intf));
	}

	phydev = phy_find_first(priv->mii_bus);
	if (!phydev) {
		netdev_info(netdev, "%s: no PHY found\n", netdev->name);
		return -ENODEV;
	}

	phydev = phy_connect(netdev, phydev_name(phydev),
			     &ftgmac100_adjust_link, phy_intf);

	if (IS_ERR(phydev)) {
		netdev_err(netdev, "%s: Could not attach to PHY\n", netdev->name);
		return PTR_ERR(phydev);
	}

	/* Indicate that we support PAUSE frames (see comment in
	 * Documentation/networking/phy.rst)
	 */
	phy_support_asym_pause(phydev);

	/* Display what we found */
	phy_attached_info(phydev);

	return 0;
}

static int ftgmac100_mdiobus_read(struct mii_bus *bus, int phy_addr, int regnum)
{
	struct net_device *netdev = bus->priv;
@@ -1410,10 +1299,8 @@ static int ftgmac100_init_all(struct ftgmac100 *priv, bool ignore_alloc_err)
	return err;
}

static void ftgmac100_reset_task(struct work_struct *work)
static void ftgmac100_reset(struct ftgmac100 *priv)
{
	struct ftgmac100 *priv = container_of(work, struct ftgmac100,
					      reset_task);
	struct net_device *netdev = priv->netdev;
	int err;

@@ -1459,6 +1346,134 @@ static void ftgmac100_reset_task(struct work_struct *work)
	rtnl_unlock();
}

static void ftgmac100_reset_task(struct work_struct *work)
{
	struct ftgmac100 *priv = container_of(work, struct ftgmac100,
					      reset_task);

	ftgmac100_reset(priv);
}

static void ftgmac100_adjust_link(struct net_device *netdev)
{
	struct ftgmac100 *priv = netdev_priv(netdev);
	struct phy_device *phydev = netdev->phydev;
	bool tx_pause, rx_pause;
	int new_speed;

	/* We store "no link" as speed 0 */
	if (!phydev->link)
		new_speed = 0;
	else
		new_speed = phydev->speed;

	/* Grab pause settings from PHY if configured to do so */
	if (priv->aneg_pause) {
		rx_pause = tx_pause = phydev->pause;
		if (phydev->asym_pause)
			tx_pause = !rx_pause;
	} else {
		rx_pause = priv->rx_pause;
		tx_pause = priv->tx_pause;
	}

	/* Link hasn't changed, do nothing */
	if (phydev->speed == priv->cur_speed &&
	    phydev->duplex == priv->cur_duplex &&
	    rx_pause == priv->rx_pause &&
	    tx_pause == priv->tx_pause)
		return;

	/* Print status if we have a link or we had one and just lost it,
	 * don't print otherwise.
	 */
	if (new_speed || priv->cur_speed)
		phy_print_status(phydev);

	priv->cur_speed = new_speed;
	priv->cur_duplex = phydev->duplex;
	priv->rx_pause = rx_pause;
	priv->tx_pause = tx_pause;

	/* Link is down, do nothing else */
	if (!new_speed)
		return;

	/* Disable all interrupts */
	iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);

	/* Release phy lock to allow ftgmac100_reset to aquire it, keeping lock
	 * order consistent to prevent dead lock.
	 */
	if (netdev->phydev)
		mutex_unlock(&netdev->phydev->lock);

	ftgmac100_reset(priv);

	if (netdev->phydev)
		mutex_lock(&netdev->phydev->lock);

}

static int ftgmac100_mii_probe(struct net_device *netdev)
{
	struct ftgmac100 *priv = netdev_priv(netdev);
	struct platform_device *pdev = to_platform_device(priv->dev);
	struct device_node *np = pdev->dev.of_node;
	struct phy_device *phydev;
	phy_interface_t phy_intf;
	int err;

	/* Default to RGMII. It's a gigabit part after all */
	err = of_get_phy_mode(np, &phy_intf);
	if (err)
		phy_intf = PHY_INTERFACE_MODE_RGMII;

	/* Aspeed only supports these. I don't know about other IP
	 * block vendors so I'm going to just let them through for
	 * now. Note that this is only a warning if for some obscure
	 * reason the DT really means to lie about it or it's a newer
	 * part we don't know about.
	 *
	 * On the Aspeed SoC there are additionally straps and SCU
	 * control bits that could tell us what the interface is
	 * (or allow us to configure it while the IP block is held
	 * in reset). For now I chose to keep this driver away from
	 * those SoC specific bits and assume the device-tree is
	 * right and the SCU has been configured properly by pinmux
	 * or the firmware.
	 */
	if (priv->is_aspeed && !(phy_interface_mode_is_rgmii(phy_intf))) {
		netdev_warn(netdev,
			    "Unsupported PHY mode %s !\n",
			    phy_modes(phy_intf));
	}

	phydev = phy_find_first(priv->mii_bus);
	if (!phydev) {
		netdev_info(netdev, "%s: no PHY found\n", netdev->name);
		return -ENODEV;
	}

	phydev = phy_connect(netdev, phydev_name(phydev),
			     &ftgmac100_adjust_link, phy_intf);

	if (IS_ERR(phydev)) {
		netdev_err(netdev, "%s: Could not attach to PHY\n", netdev->name);
		return PTR_ERR(phydev);
	}

	/* Indicate that we support PAUSE frames (see comment in
	 * Documentation/networking/phy.rst)
	 */
	phy_support_asym_pause(phydev);

	/* Display what we found */
	phy_attached_info(phydev);

	return 0;
}

static int ftgmac100_open(struct net_device *netdev)
{
	struct ftgmac100 *priv = netdev_priv(netdev);
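The ftgmac100 change above makes the reset synchronous inside adjust_link(), which runs under the PHY mutex that the reset path also takes, so the lock is dropped around the reset call and reacquired afterwards. A compressed pthread sketch of that lock-ordering dance; the names model the driver but are not taken from it:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t phy_lock = PTHREAD_MUTEX_INITIALIZER;

static void device_reset(void)
{
	/* the reset path acquires the PHY lock itself */
	pthread_mutex_lock(&phy_lock);
	puts("reset with phy_lock held");
	pthread_mutex_unlock(&phy_lock);
}

static void adjust_link(void)
{
	pthread_mutex_lock(&phy_lock);	/* caller context holds the lock */

	/* ... link state bookkeeping ... */

	pthread_mutex_unlock(&phy_lock);	/* drop before resetting */
	device_reset();
	pthread_mutex_lock(&phy_lock);	/* reacquire for the caller */

	pthread_mutex_unlock(&phy_lock);
}

int main(void)
{
	adjust_link();
	return 0;
}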
@@ -5917,10 +5917,14 @@ static ssize_t failover_store(struct device *dev, struct device_attribute *attr,
		   be64_to_cpu(session_token));
	rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,
				H_SESSION_ERR_DETECTED, session_token, 0, 0);
	if (rc)
	if (rc) {
		netdev_err(netdev,
			   "H_VIOCTL initiated failover failed, rc %ld\n",
			   rc);
		goto last_resort;
	}

	return count;

last_resort:
	netdev_dbg(netdev, "Trying to send CRQ_CMD, the last resort\n");
@@ -5372,15 +5372,7 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
	/* There is no need to reset BW when mqprio mode is on. */
	if (pf->flags & I40E_FLAG_TC_MQPRIO)
		return 0;

	if (!vsi->mqprio_qopt.qopt.hw) {
		if (pf->flags & I40E_FLAG_DCB_ENABLED)
			goto skip_reset;

		if (IS_ENABLED(CONFIG_I40E_DCB) &&
		    i40e_dcb_hw_get_num_tc(&pf->hw) == 1)
			goto skip_reset;

	if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
		ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
		if (ret)
			dev_info(&pf->pdev->dev,
@@ -5388,8 +5380,6 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
				 vsi->seid);
			return ret;
	}

skip_reset:
	memset(&bw_data, 0, sizeof(bw_data));
	bw_data.tc_valid_bits = enabled_tc;
	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
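The i40e hunk above is a revert: the skip_reset gotos go away and the BW-limit reset returns to a single combined condition (mqprio off and DCB disabled). A condensed sketch of the restored flow, with stand-in flags:

#include <stdbool.h>
#include <stdio.h>

static bool mqprio_hw, dcb_enabled;

static int configure_bw(void)
{
	/* reset the limit only when mqprio offload is off and DCB is not
	 * enabled, i.e. i40e_set_bw_limit(vsi, seid, 0) in the driver */
	if (!mqprio_hw && !dcb_enabled)
		puts("reset bw limit");
	puts("program bw allocation");
	return 0;
}

int main(void)
{
	dcb_enabled = true;	/* DCB on: the reset is skipped */
	return configure_bw();
}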
@@ -280,7 +280,6 @@ enum ice_pf_state {
	ICE_VFLR_EVENT_PENDING,
	ICE_FLTR_OVERFLOW_PROMISC,
	ICE_VF_DIS,
	ICE_VF_DEINIT_IN_PROGRESS,
	ICE_CFG_BUSY,
	ICE_SERVICE_SCHED,
	ICE_SERVICE_DIS,
@@ -3340,7 +3340,7 @@ ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg,

	if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(hw) &&
	    !ice_fw_supports_report_dflt_cfg(hw)) {
		struct ice_link_default_override_tlv tlv;
		struct ice_link_default_override_tlv tlv = { 0 };

		status = ice_get_link_default_override(&tlv, pi);
		if (status)
@@ -44,6 +44,7 @@ ice_eswitch_add_vf_mac_rule(struct ice_pf *pf, struct ice_vf *vf, const u8 *mac)
			ctrl_vsi->rxq_map[vf->vf_id];
	rule_info.flags_info.act |= ICE_SINGLE_ACT_LB_ENABLE;
	rule_info.flags_info.act_valid = true;
	rule_info.tun_type = ICE_SW_TUN_AND_NON_TUN;

	err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info,
			       vf->repr->mac_rule);
@@ -1799,7 +1799,9 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
				 * reset, so print the event prior to reset.
				 */
				ice_print_vf_rx_mdd_event(vf);
				mutex_lock(&pf->vf[i].cfg_lock);
				ice_reset_vf(&pf->vf[i], false);
				mutex_unlock(&pf->vf[i].cfg_lock);
			}
		}
	}
@@ -47,6 +47,7 @@ enum ice_protocol_type {

enum ice_sw_tunnel_type {
	ICE_NON_TUN = 0,
	ICE_SW_TUN_AND_NON_TUN,
	ICE_SW_TUN_VXLAN,
	ICE_SW_TUN_GENEVE,
	ICE_SW_TUN_NVGRE,
@@ -1533,9 +1533,12 @@ exit:
static int ice_ptp_adjtime_nonatomic(struct ptp_clock_info *info, s64 delta)
{
	struct timespec64 now, then;
	int ret;

	then = ns_to_timespec64(delta);
	ice_ptp_gettimex64(info, &now, NULL);
	ret = ice_ptp_gettimex64(info, &now, NULL);
	if (ret)
		return ret;
	now = timespec64_add(now, then);

	return ice_ptp_settime64(info, (const struct timespec64 *)&now);
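The ice_ptp fix above stops ignoring the return value of the clock read in the non-atomic adjtime path (read, add delta, write back). A userspace analogue with POSIX clocks in place of the driver's PHC accessors; delta handling is simplified to positive offsets:

#include <stdio.h>
#include <time.h>

static int clock_read(struct timespec *now)
{
	return clock_gettime(CLOCK_REALTIME, now) ? -1 : 0;
}

static int clock_write(const struct timespec *ts)
{
	(void)ts;	/* settime stand-in */
	return 0;
}

static int adjtime_nonatomic(long long delta_ns)
{
	struct timespec now;
	int ret = clock_read(&now);	/* check the read, don't assume it */

	if (ret)
		return ret;
	now.tv_sec += delta_ns / 1000000000LL;
	now.tv_nsec += delta_ns % 1000000000LL;
	if (now.tv_nsec >= 1000000000L) {	/* normalize the carry */
		now.tv_nsec -= 1000000000L;
		now.tv_sec++;
	}
	return clock_write(&now);
}

int main(void)
{
	printf("adjtime: %d\n", adjtime_nonatomic(1500000000LL));
	return 0;
}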
@@ -4537,6 +4537,7 @@ ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo,
	case ICE_SW_TUN_NVGRE:
		prof_type = ICE_PROF_TUN_GRE;
		break;
	case ICE_SW_TUN_AND_NON_TUN:
	default:
		prof_type = ICE_PROF_ALL;
		break;
@@ -5305,7 +5306,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
	if (status)
		goto err_ice_add_adv_rule;

	if (rinfo->tun_type != ICE_NON_TUN) {
	if (rinfo->tun_type != ICE_NON_TUN &&
	    rinfo->tun_type != ICE_SW_TUN_AND_NON_TUN) {
		status = ice_fill_adv_packet_tun(hw, rinfo->tun_type,
						 s_rule->pdata.lkup_tx_rx.hdr,
						 pkt_offsets);
@@ -709,7 +709,7 @@ ice_tc_set_port(struct flow_match_ports match,
			fltr->flags |= ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT;
		else
			fltr->flags |= ICE_TC_FLWR_FIELD_DEST_L4_PORT;
		fltr->flags |= ICE_TC_FLWR_FIELD_DEST_L4_PORT;

		headers->l4_key.dst_port = match.key->dst;
		headers->l4_mask.dst_port = match.mask->dst;
	}
@@ -718,7 +718,7 @@ ice_tc_set_port(struct flow_match_ports match,
			fltr->flags |= ICE_TC_FLWR_FIELD_ENC_SRC_L4_PORT;
		else
			fltr->flags |= ICE_TC_FLWR_FIELD_SRC_L4_PORT;
		fltr->flags |= ICE_TC_FLWR_FIELD_SRC_L4_PORT;

		headers->l4_key.src_port = match.key->src;
		headers->l4_mask.src_port = match.mask->src;
	}
@@ -500,8 +500,6 @@ void ice_free_vfs(struct ice_pf *pf)
	struct ice_hw *hw = &pf->hw;
	unsigned int tmp, i;

	set_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state);

	if (!pf->vf)
		return;

@@ -519,22 +517,26 @@ void ice_free_vfs(struct ice_pf *pf)
	else
		dev_warn(dev, "VFs are assigned - not disabling SR-IOV\n");

	/* Avoid wait time by stopping all VFs at the same time */
	ice_for_each_vf(pf, i)
		ice_dis_vf_qs(&pf->vf[i]);

	tmp = pf->num_alloc_vfs;
	pf->num_qps_per_vf = 0;
	pf->num_alloc_vfs = 0;
	for (i = 0; i < tmp; i++) {
		if (test_bit(ICE_VF_STATE_INIT, pf->vf[i].vf_states)) {
		struct ice_vf *vf = &pf->vf[i];

		mutex_lock(&vf->cfg_lock);

		ice_dis_vf_qs(vf);

		if (test_bit(ICE_VF_STATE_INIT, vf->vf_states)) {
			/* disable VF qp mappings and set VF disable state */
			ice_dis_vf_mappings(&pf->vf[i]);
			set_bit(ICE_VF_STATE_DIS, pf->vf[i].vf_states);
			ice_free_vf_res(&pf->vf[i]);
			ice_dis_vf_mappings(vf);
			set_bit(ICE_VF_STATE_DIS, vf->vf_states);
			ice_free_vf_res(vf);
		}

		mutex_destroy(&pf->vf[i].cfg_lock);
		mutex_unlock(&vf->cfg_lock);

		mutex_destroy(&vf->cfg_lock);
	}

	if (ice_sriov_free_msix_res(pf))
@@ -570,7 +572,6 @@ void ice_free_vfs(struct ice_pf *pf)
			i);

	clear_bit(ICE_VF_DIS, pf->state);
	clear_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state);
	clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
}

@@ -1498,6 +1499,8 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
	ice_for_each_vf(pf, v) {
		vf = &pf->vf[v];

		mutex_lock(&vf->cfg_lock);

		vf->driver_caps = 0;
		ice_vc_set_default_allowlist(vf);

@@ -1512,6 +1515,8 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
		ice_vf_pre_vsi_rebuild(vf);
		ice_vf_rebuild_vsi(vf);
		ice_vf_post_vsi_rebuild(vf);

		mutex_unlock(&vf->cfg_lock);
	}

	if (ice_is_eswitch_mode_switchdev(pf))
@@ -1562,6 +1567,8 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
	u32 reg;
	int i;

	lockdep_assert_held(&vf->cfg_lock);

	dev = ice_pf_to_dev(pf);

	if (test_bit(ICE_VF_RESETS_DISABLED, pf->state)) {
@@ -2061,9 +2068,12 @@ void ice_process_vflr_event(struct ice_pf *pf)
		bit_idx = (hw->func_caps.vf_base_id + vf_id) % 32;
		/* read GLGEN_VFLRSTAT register to find out the flr VFs */
		reg = rd32(hw, GLGEN_VFLRSTAT(reg_idx));
		if (reg & BIT(bit_idx))
		if (reg & BIT(bit_idx)) {
			/* GLGEN_VFLRSTAT bit will be cleared in ice_reset_vf */
			mutex_lock(&vf->cfg_lock);
			ice_reset_vf(vf, true);
			mutex_unlock(&vf->cfg_lock);
		}
	}
}

@@ -2140,7 +2150,9 @@ ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event)
	if (!vf)
		return;

	mutex_lock(&vf->cfg_lock);
	ice_vc_reset_vf(vf);
	mutex_unlock(&vf->cfg_lock);
}

/**
@@ -4625,10 +4637,6 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
	struct device *dev;
	int err = 0;

	/* if de-init is underway, don't process messages from VF */
	if (test_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state))
		return;

	dev = ice_pf_to_dev(pf);
	if (ice_validate_vf_id(pf, vf_id)) {
		err = -EINVAL;
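The ice hunks above drop the global DEINIT_IN_PROGRESS flag and instead serialize every VF reset behind that VF's cfg_lock, letting ice_reset_vf() assert the lock is held. A small pthread sketch of the rule, with an int flag standing in for lockdep:

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct vf {
	pthread_mutex_t cfg_lock;
	int locked;	/* stand-in for lockdep_assert_held() */
};

static void reset_vf(struct vf *vf)
{
	assert(vf->locked);	/* lockdep_assert_held(&vf->cfg_lock) */
	puts("vf reset");
}

static void handle_vflr(struct vf *vf)
{
	pthread_mutex_lock(&vf->cfg_lock);
	vf->locked = 1;
	reset_vf(vf);	/* configuration cannot change mid-reset */
	vf->locked = 0;
	pthread_mutex_unlock(&vf->cfg_lock);
}

int main(void)
{
	struct vf vf = { PTHREAD_MUTEX_INITIALIZER, 0 };

	handle_vflr(&vf);
	return 0;
}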
@@ -2704,6 +2704,16 @@ MODULE_DEVICE_TABLE(of, mv643xx_eth_shared_ids);

static struct platform_device *port_platdev[3];

static void mv643xx_eth_shared_of_remove(void)
{
	int n;

	for (n = 0; n < 3; n++) {
		platform_device_del(port_platdev[n]);
		port_platdev[n] = NULL;
	}
}

static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
					  struct device_node *pnp)
{
@@ -2740,7 +2750,9 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
		return -EINVAL;
	}

	of_get_mac_address(pnp, ppd.mac_addr);
	ret = of_get_mac_address(pnp, ppd.mac_addr);
	if (ret)
		return ret;

	mv643xx_eth_property(pnp, "tx-queue-size", ppd.tx_queue_size);
	mv643xx_eth_property(pnp, "tx-sram-addr", ppd.tx_sram_addr);
@@ -2804,21 +2816,13 @@ static int mv643xx_eth_shared_of_probe(struct platform_device *pdev)
		ret = mv643xx_eth_shared_of_add_port(pdev, pnp);
		if (ret) {
			of_node_put(pnp);
			mv643xx_eth_shared_of_remove();
			return ret;
		}
	}
	return 0;
}

static void mv643xx_eth_shared_of_remove(void)
{
	int n;

	for (n = 0; n < 3; n++) {
		platform_device_del(port_platdev[n]);
		port_platdev[n] = NULL;
	}
}
#else
static inline int mv643xx_eth_shared_of_probe(struct platform_device *pdev)
{
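The mv643xx_eth hunk above starts checking the return value of of_get_mac_address() instead of registering a port with a possibly unset MAC. A sketch of the pattern with a stubbed lookup in place of the OF helper:

#include <stdio.h>
#include <string.h>

static int get_mac_address(const char *node, unsigned char mac[6])
{
	if (strcmp(node, "ok") != 0)
		return -2;	/* e.g. -EINVAL or -EPROBE_DEFER upstream */
	memcpy(mac, "\x02\x00\x00\x00\x00\x01", 6);
	return 0;
}

static int add_port(const char *node)
{
	unsigned char mac[6];
	int ret = get_mac_address(node, mac);

	if (ret)
		return ret;	/* propagate instead of registering blindly */
	printf("port %s: %02x:...:%02x\n", node, mac[0], mac[5]);
	return 0;
}

int main(void)
{
	printf("ok -> %d, missing -> %d\n", add_port("ok"), add_port("missing"));
	return 0;
}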
@@ -6870,6 +6870,9 @@ static int mvpp2_port_probe(struct platform_device *pdev,
	dev->max_mtu = MVPP2_BM_JUMBO_PKT_SIZE;
	dev->dev.of_node = port_node;

	port->pcs_gmac.ops = &mvpp2_phylink_gmac_pcs_ops;
	port->pcs_xlg.ops = &mvpp2_phylink_xlg_pcs_ops;

	if (!mvpp2_use_acpi_compat_mode(port_fwnode)) {
		port->phylink_config.dev = &dev->dev;
		port->phylink_config.type = PHYLINK_NETDEV;
@@ -6940,9 +6943,6 @@ static int mvpp2_port_probe(struct platform_device *pdev,
			  port->phylink_config.supported_interfaces);
	}

	port->pcs_gmac.ops = &mvpp2_phylink_gmac_pcs_ops;
	port->pcs_xlg.ops = &mvpp2_phylink_xlg_pcs_ops;

	phylink = phylink_create(&port->phylink_config, port_fwnode,
				 phy_mode, &mvpp2_phylink_ops);
	if (IS_ERR(phylink)) {
@@ -16,11 +16,13 @@ struct mlx5e_tc_act_parse_state {
	unsigned int num_actions;
	struct mlx5e_tc_flow *flow;
	struct netlink_ext_ack *extack;
	bool ct_clear;
	bool encap;
	bool decap;
	bool mpls_push;
	bool ptype_host;
	const struct ip_tunnel_info *tun_info;
	struct mlx5e_mpls_info mpls_info;
	struct pedit_headers_action hdrs[__PEDIT_CMD_MAX];
	int ifindexes[MLX5_MAX_FLOW_FWD_VPORTS];
	int if_count;
@@ -27,8 +27,13 @@ tc_act_parse_ct(struct mlx5e_tc_act_parse_state *parse_state,
		struct mlx5e_priv *priv,
		struct mlx5_flow_attr *attr)
{
	bool clear_action = act->ct.action & TCA_CT_ACT_CLEAR;
	int err;

	/* It's redundant to do ct clear more than once. */
	if (clear_action && parse_state->ct_clear)
		return 0;

	err = mlx5_tc_ct_parse_action(parse_state->ct_priv, attr,
				      &attr->parse_attr->mod_hdr_acts,
				      act, parse_state->extack);
@@ -40,6 +45,8 @@ tc_act_parse_ct(struct mlx5e_tc_act_parse_state *parse_state,
	if (mlx5e_is_eswitch_flow(parse_state->flow))
		attr->esw_attr->split_count = attr->esw_attr->out_count;

	parse_state->ct_clear = clear_action;

	return 0;
}
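The ct-clear hunk above deduplicates repeated "ct clear" actions by remembering in the parse state that one was already emitted. A sketch of that idempotence flag, with a simplified state struct:

#include <stdbool.h>
#include <stdio.h>

struct parse_state {
	bool ct_clear;
	int actions_emitted;
};

static int parse_ct(struct parse_state *ps, bool clear_action)
{
	if (clear_action && ps->ct_clear)
		return 0;	/* redundant clear: skip the extra action */

	ps->actions_emitted++;
	ps->ct_clear = clear_action;
	return 0;
}

int main(void)
{
	struct parse_state ps = { 0 };

	parse_ct(&ps, true);
	parse_ct(&ps, true);	/* deduplicated */
	printf("actions emitted: %d\n", ps.actions_emitted);	/* 1 */
	return 0;
}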
@@ -177,6 +177,12 @@ parse_mirred_encap(struct mlx5e_tc_act_parse_state *parse_state,
		return -ENOMEM;

	parse_state->encap = false;

	if (parse_state->mpls_push) {
		memcpy(&parse_attr->mpls_info[esw_attr->out_count],
		       &parse_state->mpls_info, sizeof(parse_state->mpls_info));
		parse_state->mpls_push = false;
	}
	esw_attr->dests[esw_attr->out_count].flags |= MLX5_ESW_DEST_ENCAP;
	esw_attr->out_count++;
	/* attr->dests[].rep is resolved when we handle encap */
@@ -22,6 +22,16 @@ tc_act_can_offload_mpls_push(struct mlx5e_tc_act_parse_state *parse_state,
	return true;
}

static void
copy_mpls_info(struct mlx5e_mpls_info *mpls_info,
	       const struct flow_action_entry *act)
{
	mpls_info->label = act->mpls_push.label;
	mpls_info->tc = act->mpls_push.tc;
	mpls_info->bos = act->mpls_push.bos;
	mpls_info->ttl = act->mpls_push.ttl;
}

static int
tc_act_parse_mpls_push(struct mlx5e_tc_act_parse_state *parse_state,
		       const struct flow_action_entry *act,
@@ -29,6 +39,7 @@ tc_act_parse_mpls_push(struct mlx5e_tc_act_parse_state *parse_state,
		       struct mlx5_flow_attr *attr)
{
	parse_state->mpls_push = true;
	copy_mpls_info(&parse_state->mpls_info, act);

	return 0;
}
@@ -35,6 +35,7 @@ enum {

struct mlx5e_tc_flow_parse_attr {
	const struct ip_tunnel_info *tun_info[MLX5_MAX_FLOW_FWD_VPORTS];
	struct mlx5e_mpls_info mpls_info[MLX5_MAX_FLOW_FWD_VPORTS];
	struct net_device *filter_dev;
	struct mlx5_flow_spec spec;
	struct mlx5e_tc_mod_hdr_acts mod_hdr_acts;
@@ -750,6 +750,7 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
	struct mlx5e_tc_flow_parse_attr *parse_attr;
	struct mlx5_flow_attr *attr = flow->attr;
	const struct ip_tunnel_info *tun_info;
	const struct mlx5e_mpls_info *mpls_info;
	unsigned long tbl_time_before = 0;
	struct mlx5e_encap_entry *e;
	struct mlx5e_encap_key key;
@@ -760,6 +761,7 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,

	parse_attr = attr->parse_attr;
	tun_info = parse_attr->tun_info[out_index];
	mpls_info = &parse_attr->mpls_info[out_index];
	family = ip_tunnel_info_af(tun_info);
	key.ip_tun_key = &tun_info->key;
	key.tc_tunnel = mlx5e_get_tc_tun(mirred_dev);
@@ -810,6 +812,7 @@ int mlx5e_attach_encap(struct mlx5e_priv *priv,
		goto out_err_init;
	}
	e->tun_info = tun_info;
	memcpy(&e->mpls_info, mpls_info, sizeof(*mpls_info));
	err = mlx5e_tc_tun_init_encap_attr(mirred_dev, priv, e, extack);
	if (err)
		goto out_err_init;
@@ -30,16 +30,15 @@ static int generate_ip_tun_hdr(char buf[],
			       struct mlx5e_encap_entry *r)
{
	const struct ip_tunnel_key *tun_key = &r->tun_info->key;
	const struct mlx5e_mpls_info *mpls_info = &r->mpls_info;
	struct udphdr *udp = (struct udphdr *)(buf);
	struct mpls_shim_hdr *mpls;
	u32 tun_id;

	tun_id = be32_to_cpu(tunnel_id_to_key32(tun_key->tun_id));
	mpls = (struct mpls_shim_hdr *)(udp + 1);
	*ip_proto = IPPROTO_UDP;

	udp->dest = tun_key->tp_dst;
	*mpls = mpls_entry_encode(tun_id, tun_key->ttl, tun_key->tos, true);
	*mpls = mpls_entry_encode(mpls_info->label, mpls_info->ttl, mpls_info->tc, mpls_info->bos);

	return 0;
}
@@ -60,37 +59,31 @@ static int parse_tunnel(struct mlx5e_priv *priv,
			void *headers_v)
{
	struct flow_rule *rule = flow_cls_offload_flow_rule(f);
	struct flow_match_enc_keyid enc_keyid;
	struct flow_match_mpls match;
	void *misc2_c;
	void *misc2_v;

	misc2_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			       misc_parameters_2);
	misc2_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
			       misc_parameters_2);

	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS))
		return 0;

	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
		return 0;

	flow_rule_match_enc_keyid(rule, &enc_keyid);

	if (!enc_keyid.mask->keyid)
		return 0;

	if (!MLX5_CAP_ETH(priv->mdev, tunnel_stateless_mpls_over_udp) &&
	    !(MLX5_CAP_GEN(priv->mdev, flex_parser_protocols) & MLX5_FLEX_PROTO_CW_MPLS_UDP))
		return -EOPNOTSUPP;

	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID))
		return -EOPNOTSUPP;

	if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS))
		return 0;

	flow_rule_match_mpls(rule, &match);

	/* Only support matching the first LSE */
	if (match.mask->used_lses != 1)
		return -EOPNOTSUPP;

	misc2_c = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
			       misc_parameters_2);
	misc2_v = MLX5_ADDR_OF(fte_match_param, spec->match_value,
			       misc_parameters_2);

	MLX5_SET(fte_match_set_misc2, misc2_c,
		 outer_first_mpls_over_udp.mpls_label,
		 match.mask->ls[0].mpls_label);
@@ -1792,7 +1792,7 @@ static int mlx5e_get_module_eeprom(struct net_device *netdev,
		if (size_read < 0) {
			netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n",
				   __func__, size_read);
			return 0;
			return size_read;
		}

		i += size_read;
@@ -183,6 +183,13 @@ struct mlx5e_decap_entry {
	struct rcu_head rcu;
};

struct mlx5e_mpls_info {
	u32 label;
	u8 tc;
	u8 bos;
	u8 ttl;
};

struct mlx5e_encap_entry {
	/* attached neigh hash entry */
	struct mlx5e_neigh_hash_entry *nhe;
@@ -196,6 +203,7 @@ struct mlx5e_encap_entry {
	struct list_head route_list;
	struct mlx5_pkt_reformat *pkt_reformat;
	const struct ip_tunnel_info *tun_info;
	struct mlx5e_mpls_info mpls_info;
	unsigned char h_dest[ETH_ALEN];	/* destination eth addr */

	struct net_device *out_dev;
@@ -1349,7 +1349,8 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
	}

	/* True when explicitly set via priv flag, or XDP prog is loaded */
	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state))
	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state) ||
	    get_cqe_tls_offload(cqe))
		goto csum_unnecessary;

	/* CQE csum doesn't cover padding octets in short ethernet
@@ -334,6 +334,7 @@ void mlx5e_self_test(struct net_device *ndev, struct ethtool_test *etest,
		netdev_info(ndev, "\t[%d] %s start..\n", i, st.name);
		buf[count] = st.st_func(priv);
		netdev_info(ndev, "\t[%d] %s end: result(%lld)\n", i, st.name, buf[count]);
		count++;
	}

	mutex_unlock(&priv->state_lock);
@@ -1254,9 +1254,6 @@ static void fec_set_corrected_bits_total(struct mlx5e_priv *priv,
	u32 in[MLX5_ST_SZ_DW(ppcnt_reg)] = {};
	int sz = MLX5_ST_SZ_BYTES(ppcnt_reg);

	if (!MLX5_CAP_PCAM_FEATURE(mdev, ppcnt_statistical_group))
		return;

	MLX5_SET(ppcnt_reg, in, local_port, 1);
	MLX5_SET(ppcnt_reg, in, grp, MLX5_PHYSICAL_LAYER_STATISTICAL_GROUP);
	if (mlx5_core_access_reg(mdev, in, sz, ppcnt_phy_statistical,
@@ -1272,6 +1269,9 @@ static void fec_set_corrected_bits_total(struct mlx5e_priv *priv,
void mlx5e_stats_fec_get(struct mlx5e_priv *priv,
			 struct ethtool_fec_stats *fec_stats)
{
	if (!MLX5_CAP_PCAM_FEATURE(priv->mdev, ppcnt_statistical_group))
		return;

	fec_set_corrected_bits_total(priv, fec_stats);
	fec_set_block_stats(priv, fec_stats);
}
@@ -3204,6 +3204,18 @@ actions_match_supported(struct mlx5e_priv *priv,
		return false;
	}

	if (!(~actions &
	      (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {
		NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");
		return false;
	}

	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
	    actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {
		NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");
		return false;
	}

	if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
	    !modify_header_match_supported(priv, &parse_attr->spec, flow_action,
					   actions, ct_flow, ct_clear, extack))
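The forward+drop rejection above uses the idiom !(~actions & (FWD | DROP)), which is true exactly when both bits are set: ~actions has a zero wherever actions has a one, so the AND is zero only when neither bit is missing. A sketch with illustrative bit values:

#include <stdint.h>
#include <stdio.h>

#define ACT_FWD		0x1u
#define ACT_DROP	0x2u

static int both_fwd_and_drop(uint32_t actions)
{
	/* zero result means no required bit is absent */
	return !(~actions & (ACT_FWD | ACT_DROP));
}

int main(void)
{
	printf("%d %d %d %d\n",
	       both_fwd_and_drop(0),			/* 0 */
	       both_fwd_and_drop(ACT_FWD),		/* 0 */
	       both_fwd_and_drop(ACT_DROP),		/* 0 */
	       both_fwd_and_drop(ACT_FWD | ACT_DROP));	/* 1 */
	return 0;
}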
@@ -697,7 +697,7 @@ void mlx5_esw_qos_vport_disable(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
}

int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vport,
				u32 min_rate, u32 max_rate)
				u32 max_rate, u32 min_rate)
{
	int err;

@@ -2838,10 +2838,6 @@ bool mlx5_esw_vport_match_metadata_supported(const struct mlx5_eswitch *esw)
	if (!MLX5_CAP_ESW_FLOWTABLE(esw->dev, flow_source))
		return false;

	if (mlx5_core_is_ecpf_esw_manager(esw->dev) ||
	    mlx5_ecpf_vport_exists(esw->dev))
		return false;

	return true;
}

@@ -2074,6 +2074,8 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
		fte->node.del_hw_func = NULL;
		up_write_ref_node(&fte->node, false);
		tree_put_node(&fte->node, false);
	} else {
		up_write_ref_node(&fte->node, false);
	}
	kfree(handle);
}
@@ -121,6 +121,9 @@ u32 mlx5_chains_get_nf_ft_chain(struct mlx5_fs_chains *chains)

u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
{
	if (!mlx5_chains_prios_supported(chains))
		return 1;

	if (mlx5_chains_ignore_flow_level_supported(chains))
		return UINT_MAX;

@@ -526,7 +526,7 @@ static int handle_hca_cap(struct mlx5_core_dev *dev, void *set_ctx)

	/* Check log_max_qp from HCA caps to set in current profile */
	if (prof->log_max_qp == LOG_MAX_SUPPORTED_QPS) {
		prof->log_max_qp = MLX5_CAP_GEN_MAX(dev, log_max_qp);
		prof->log_max_qp = min_t(u8, 17, MLX5_CAP_GEN_MAX(dev, log_max_qp));
	} else if (MLX5_CAP_GEN_MAX(dev, log_max_qp) < prof->log_max_qp) {
		mlx5_core_warn(dev, "log_max_qp value in current profile is %d, changing it to HCA capability limit (%d)\n",
			       prof->log_max_qp,
@@ -1840,10 +1840,12 @@ static const struct pci_device_id mlx5_core_pci_table[] = {
	{ PCI_VDEVICE(MELLANOX, 0x101e), MLX5_PCI_DEV_IS_VF},	/* ConnectX Family mlx5Gen Virtual Function */
	{ PCI_VDEVICE(MELLANOX, 0x101f) },			/* ConnectX-6 LX */
	{ PCI_VDEVICE(MELLANOX, 0x1021) },			/* ConnectX-7 */
	{ PCI_VDEVICE(MELLANOX, 0x1023) },			/* ConnectX-8 */
	{ PCI_VDEVICE(MELLANOX, 0xa2d2) },			/* BlueField integrated ConnectX-5 network controller */
	{ PCI_VDEVICE(MELLANOX, 0xa2d3), MLX5_PCI_DEV_IS_VF},	/* BlueField integrated ConnectX-5 network controller VF */
	{ PCI_VDEVICE(MELLANOX, 0xa2d6) },			/* BlueField-2 integrated ConnectX-6 Dx network controller */
	{ PCI_VDEVICE(MELLANOX, 0xa2dc) },			/* BlueField-3 integrated ConnectX-7 network controller */
	{ PCI_VDEVICE(MELLANOX, 0xa2df) },			/* BlueField-4 integrated ConnectX-8 network controller */
	{ 0, }
};

@ -4,7 +4,6 @@
|
||||
#include "dr_types.h"
|
||||
|
||||
#define DR_ICM_MODIFY_HDR_ALIGN_BASE 64
|
||||
#define DR_ICM_SYNC_THRESHOLD_POOL (64 * 1024 * 1024)
|
||||
|
||||
struct mlx5dr_icm_pool {
|
||||
enum mlx5dr_icm_type icm_type;
|
||||
@@ -136,37 +135,35 @@ static void dr_icm_pool_mr_destroy(struct mlx5dr_icm_mr *icm_mr)
	kvfree(icm_mr);
 }

static int dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk)
static int dr_icm_buddy_get_ste_size(struct mlx5dr_icm_buddy_mem *buddy)
 {
	chunk->ste_arr = kvzalloc(chunk->num_of_entries *
				  sizeof(chunk->ste_arr[0]), GFP_KERNEL);
	if (!chunk->ste_arr)
		return -ENOMEM;
	/* We support only one type of STE size, both for ConnectX-5 and later
	 * devices. Once the support for match STE which has a larger tag is
	 * added (32B instead of 16B), the STE size for devices later than
	 * ConnectX-5 needs to account for that.
	 */
	return DR_STE_SIZE_REDUCED;
 }

	chunk->hw_ste_arr = kvzalloc(chunk->num_of_entries *
				     DR_STE_SIZE_REDUCED, GFP_KERNEL);
	if (!chunk->hw_ste_arr)
		goto out_free_ste_arr;
static void dr_icm_chunk_ste_init(struct mlx5dr_icm_chunk *chunk, int offset)
 {
	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;
	int index = offset / DR_STE_SIZE;

	chunk->miss_list = kvmalloc(chunk->num_of_entries *
				    sizeof(chunk->miss_list[0]), GFP_KERNEL);
	if (!chunk->miss_list)
		goto out_free_hw_ste_arr;

	return 0;

out_free_hw_ste_arr:
	kvfree(chunk->hw_ste_arr);
out_free_ste_arr:
	kvfree(chunk->ste_arr);
	return -ENOMEM;
	chunk->ste_arr = &buddy->ste_arr[index];
	chunk->miss_list = &buddy->miss_list[index];
	chunk->hw_ste_arr = buddy->hw_ste_arr +
			    index * dr_icm_buddy_get_ste_size(buddy);
 }

static void dr_icm_chunk_ste_cleanup(struct mlx5dr_icm_chunk *chunk)
 {
	kvfree(chunk->miss_list);
	kvfree(chunk->hw_ste_arr);
	kvfree(chunk->ste_arr);
	struct mlx5dr_icm_buddy_mem *buddy = chunk->buddy_mem;

	memset(chunk->hw_ste_arr, 0,
	       chunk->num_of_entries * dr_icm_buddy_get_ste_size(buddy));
	memset(chunk->ste_arr, 0,
	       chunk->num_of_entries * sizeof(chunk->ste_arr[0]));
 }

static enum mlx5dr_icm_type
@@ -189,6 +186,44 @@ static void dr_icm_chunk_destroy(struct mlx5dr_icm_chunk *chunk,
	kvfree(chunk);
 }

static int dr_icm_buddy_init_ste_cache(struct mlx5dr_icm_buddy_mem *buddy)
 {
	int num_of_entries =
		mlx5dr_icm_pool_chunk_size_to_entries(buddy->pool->max_log_chunk_sz);

	buddy->ste_arr = kvcalloc(num_of_entries,
				  sizeof(struct mlx5dr_ste), GFP_KERNEL);
	if (!buddy->ste_arr)
		return -ENOMEM;

	/* Preallocate full STE size on non-ConnectX-5 devices since
	 * we need to support both full and reduced with the same cache.
	 */
	buddy->hw_ste_arr = kvcalloc(num_of_entries,
				     dr_icm_buddy_get_ste_size(buddy), GFP_KERNEL);
	if (!buddy->hw_ste_arr)
		goto free_ste_arr;

	buddy->miss_list = kvmalloc(num_of_entries * sizeof(struct list_head), GFP_KERNEL);
	if (!buddy->miss_list)
		goto free_hw_ste_arr;

	return 0;

free_hw_ste_arr:
	kvfree(buddy->hw_ste_arr);
free_ste_arr:
	kvfree(buddy->ste_arr);
	return -ENOMEM;
 }

static void dr_icm_buddy_cleanup_ste_cache(struct mlx5dr_icm_buddy_mem *buddy)
 {
	kvfree(buddy->ste_arr);
	kvfree(buddy->hw_ste_arr);
	kvfree(buddy->miss_list);
 }

static int dr_icm_buddy_create(struct mlx5dr_icm_pool *pool)
 {
	struct mlx5dr_icm_buddy_mem *buddy;
@@ -208,11 +243,19 @@ static int dr_icm_buddy_create(struct mlx5dr_icm_pool *pool)
	buddy->icm_mr = icm_mr;
	buddy->pool = pool;

	if (pool->icm_type == DR_ICM_TYPE_STE) {
		/* Reduce allocations by preallocating and reusing the STE structures */
		if (dr_icm_buddy_init_ste_cache(buddy))
			goto err_cleanup_buddy;
	}

	/* add it to the -start- of the list in order to search in it first */
	list_add(&buddy->list_node, &pool->buddy_mem_list);

	return 0;

err_cleanup_buddy:
	mlx5dr_buddy_cleanup(buddy);
err_free_buddy:
	kvfree(buddy);
free_mr:
@@ -234,6 +277,9 @@ static void dr_icm_buddy_destroy(struct mlx5dr_icm_buddy_mem *buddy)

	mlx5dr_buddy_cleanup(buddy);

	if (buddy->pool->icm_type == DR_ICM_TYPE_STE)
		dr_icm_buddy_cleanup_ste_cache(buddy);

	kvfree(buddy);
 }

@@ -261,34 +307,30 @@ dr_icm_chunk_create(struct mlx5dr_icm_pool *pool,
	chunk->byte_size =
		mlx5dr_icm_pool_chunk_size_to_byte(chunk_size, pool->icm_type);
	chunk->seg = seg;
	chunk->buddy_mem = buddy_mem_pool;

	if (pool->icm_type == DR_ICM_TYPE_STE && dr_icm_chunk_ste_init(chunk)) {
		mlx5dr_err(pool->dmn,
			   "Failed to init ste arrays (order: %d)\n",
			   chunk_size);
		goto out_free_chunk;
	}
	if (pool->icm_type == DR_ICM_TYPE_STE)
		dr_icm_chunk_ste_init(chunk, offset);

	buddy_mem_pool->used_memory += chunk->byte_size;
	chunk->buddy_mem = buddy_mem_pool;
	INIT_LIST_HEAD(&chunk->chunk_list);

	/* chunk now is part of the used_list */
	list_add_tail(&chunk->chunk_list, &buddy_mem_pool->used_list);

	return chunk;

out_free_chunk:
	kvfree(chunk);
	return NULL;
 }

static bool dr_icm_pool_is_sync_required(struct mlx5dr_icm_pool *pool)
 {
	if (pool->hot_memory_size > DR_ICM_SYNC_THRESHOLD_POOL)
		return true;
	int allow_hot_size;

	return false;
	/* sync when hot memory reaches half of the pool size */
	allow_hot_size =
		mlx5dr_icm_pool_chunk_size_to_byte(pool->max_log_chunk_sz,
						   pool->icm_type) / 2;

	return pool->hot_memory_size > allow_hot_size;
 }

static int dr_icm_pool_sync_all_buddy_pools(struct mlx5dr_icm_pool *pool)

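The dr_icm_pool hunks above all serve one pattern: instead of three kvmalloc/kvzalloc calls per chunk, the buddy allocator now preallocates a single STE cache per buddy, and chunk init shrinks to pointer arithmetic into that cache. A minimal standalone sketch of the idea, with hypothetical names rather than the mlx5 API:

	#include <stdlib.h>

	struct entry { int data; };

	/* One backing cache per buddy; chunks borrow slices instead of allocating. */
	struct ste_cache {
		struct entry *arr;	/* allocated once, max_entries long */
		size_t max_entries;
	};

	struct chunk {
		struct entry *arr;	/* points into the cache, not owned */
		size_t num_entries;
	};

	static int ste_cache_init(struct ste_cache *c, size_t max_entries)
	{
		c->arr = calloc(max_entries, sizeof(*c->arr));
		if (!c->arr)
			return -1;
		c->max_entries = max_entries;
		return 0;
	}

	/* Chunk "init" becomes pure indexing, so it can no longer fail. */
	static void chunk_init(struct chunk *ch, struct ste_cache *c,
			       size_t offset, size_t n)
	{
		ch->arr = &c->arr[offset];
		ch->num_entries = n;
	}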
@@ -13,18 +13,6 @@ static bool dr_mask_is_dmac_set(struct mlx5dr_match_spec *spec)
	return (spec->dmac_47_16 || spec->dmac_15_0);
 }

static bool dr_mask_is_src_addr_set(struct mlx5dr_match_spec *spec)
 {
	return (spec->src_ip_127_96 || spec->src_ip_95_64 ||
		spec->src_ip_63_32 || spec->src_ip_31_0);
 }

static bool dr_mask_is_dst_addr_set(struct mlx5dr_match_spec *spec)
 {
	return (spec->dst_ip_127_96 || spec->dst_ip_95_64 ||
		spec->dst_ip_63_32 || spec->dst_ip_31_0);
 }

static bool dr_mask_is_l3_base_set(struct mlx5dr_match_spec *spec)
 {
	return (spec->ip_protocol || spec->frag || spec->tcp_flags ||
@@ -503,11 +491,11 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
					&mask, inner, rx);

	if (outer_ipv == DR_RULE_IPV6) {
		if (dr_mask_is_dst_addr_set(&mask.outer))
		if (DR_MASK_IS_DST_IP_SET(&mask.outer))
			mlx5dr_ste_build_eth_l3_ipv6_dst(ste_ctx, &sb[idx++],
							 &mask, inner, rx);

		if (dr_mask_is_src_addr_set(&mask.outer))
		if (DR_MASK_IS_SRC_IP_SET(&mask.outer))
			mlx5dr_ste_build_eth_l3_ipv6_src(ste_ctx, &sb[idx++],
							 &mask, inner, rx);

@@ -610,11 +598,11 @@ static int dr_matcher_set_ste_builders(struct mlx5dr_matcher *matcher,
					&mask, inner, rx);

	if (inner_ipv == DR_RULE_IPV6) {
		if (dr_mask_is_dst_addr_set(&mask.inner))
		if (DR_MASK_IS_DST_IP_SET(&mask.inner))
			mlx5dr_ste_build_eth_l3_ipv6_dst(ste_ctx, &sb[idx++],
							 &mask, inner, rx);

		if (dr_mask_is_src_addr_set(&mask.inner))
		if (DR_MASK_IS_SRC_IP_SET(&mask.inner))
			mlx5dr_ste_build_eth_l3_ipv6_src(ste_ctx, &sb[idx++],
							 &mask, inner, rx);

@@ -602,12 +602,34 @@ int mlx5dr_ste_set_action_decap_l3_list(struct mlx5dr_ste_ctx *ste_ctx,
					used_hw_action_num);
 }

static int dr_ste_build_pre_check_spec(struct mlx5dr_domain *dmn,
				       struct mlx5dr_match_spec *spec)
 {
	if (spec->ip_version) {
		if (spec->ip_version != 0xf) {
			mlx5dr_err(dmn,
				   "Partial ip_version mask with src/dst IP is not supported\n");
			return -EINVAL;
		}
	} else if (spec->ethertype != 0xffff &&
		   (DR_MASK_IS_SRC_IP_SET(spec) || DR_MASK_IS_DST_IP_SET(spec))) {
		mlx5dr_err(dmn,
			   "Partial/no ethertype mask with src/dst IP is not supported\n");
		return -EINVAL;
	}

	return 0;
 }

int mlx5dr_ste_build_pre_check(struct mlx5dr_domain *dmn,
			       u8 match_criteria,
			       struct mlx5dr_match_param *mask,
			       struct mlx5dr_match_param *value)
 {
	if (!value && (match_criteria & DR_MATCHER_CRITERIA_MISC)) {
	if (value)
		return 0;

	if (match_criteria & DR_MATCHER_CRITERIA_MISC) {
		if (mask->misc.source_port && mask->misc.source_port != 0xffff) {
			mlx5dr_err(dmn,
				   "Partial mask source_port is not supported\n");
@@ -621,6 +643,14 @@ int mlx5dr_ste_build_pre_check(struct mlx5dr_domain *dmn,
		}
	}

	if ((match_criteria & DR_MATCHER_CRITERIA_OUTER) &&
	    dr_ste_build_pre_check_spec(dmn, &mask->outer))
		return -EINVAL;

	if ((match_criteria & DR_MATCHER_CRITERIA_INNER) &&
	    dr_ste_build_pre_check_spec(dmn, &mask->inner))
		return -EINVAL;

	return 0;
 }

@@ -798,6 +798,16 @@ struct mlx5dr_match_param {
				    (_misc3)->icmpv4_code || \
				    (_misc3)->icmpv4_header_data)

#define DR_MASK_IS_SRC_IP_SET(_spec) ((_spec)->src_ip_127_96 || \
				      (_spec)->src_ip_95_64 || \
				      (_spec)->src_ip_63_32 || \
				      (_spec)->src_ip_31_0)

#define DR_MASK_IS_DST_IP_SET(_spec) ((_spec)->dst_ip_127_96 || \
				      (_spec)->dst_ip_95_64 || \
				      (_spec)->dst_ip_63_32 || \
				      (_spec)->dst_ip_31_0)

struct mlx5dr_esw_caps {
	u64 drop_icm_address_rx;
	u64 drop_icm_address_tx;

@@ -233,7 +233,11 @@ static bool contain_vport_reformat_action(struct mlx5_flow_rule *dst)
		dst->dest_attr.vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID;
 }

#define MLX5_FLOW_CONTEXT_ACTION_MAX 32
/* We want to support a rule with 32 destinations, which means we need to
 * account for 32 destinations plus usually a counter plus one more action
 * for a multi-destination flow table.
 */
#define MLX5_FLOW_CONTEXT_ACTION_MAX 34
static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
				  struct mlx5_flow_table *ft,
				  struct mlx5_flow_group *group,
@@ -403,9 +407,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
			enum mlx5_flow_destination_type type = dst->dest_attr.type;
			u32 id;

			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
			    num_term_actions >= MLX5_FLOW_CONTEXT_ACTION_MAX) {
				err = -ENOSPC;
			if (fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
			    num_term_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
				err = -EOPNOTSUPP;
				goto free_actions;
			}

@@ -478,8 +482,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
			    MLX5_FLOW_DESTINATION_TYPE_COUNTER)
				continue;

			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
				err = -ENOSPC;
			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
			    fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
				err = -EOPNOTSUPP;
				goto free_actions;
			}

@@ -499,14 +504,28 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
	params.match_sz = match_sz;
	params.match_buf = (u64 *)fte->val;
	if (num_term_actions == 1) {
		if (term_actions->reformat)
		if (term_actions->reformat) {
			if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
				err = -EOPNOTSUPP;
				goto free_actions;
			}
			actions[num_actions++] = term_actions->reformat;
		}

		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
			err = -EOPNOTSUPP;
			goto free_actions;
		}
		actions[num_actions++] = term_actions->dest;
	} else if (num_term_actions > 1) {
		bool ignore_flow_level =
			!!(fte->action.flags & FLOW_ACT_IGNORE_FLOW_LEVEL);

		if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
		    fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
			err = -EOPNOTSUPP;
			goto free_actions;
		}
		tmp_action = mlx5dr_action_create_mult_dest_tbl(domain,
								term_actions,
								num_term_actions,

@@ -160,6 +160,11 @@ struct mlx5dr_icm_buddy_mem {
	 * sync_ste command sets them free.
	 */
	struct list_head hot_list;

	/* Memory optimisation */
	struct mlx5dr_ste *ste_arr;
	struct list_head *miss_list;
	u8 *hw_ste_arr;
};

int mlx5dr_buddy_init(struct mlx5dr_icm_buddy_mem *buddy,

@@ -922,8 +922,8 @@ nfp_tunnel_add_shared_mac(struct nfp_app *app, struct net_device *netdev,
			  int port, bool mod)
 {
	struct nfp_flower_priv *priv = app->priv;
	int ida_idx = NFP_MAX_MAC_INDEX, err;
	struct nfp_tun_offloaded_mac *entry;
	int ida_idx = -1, err;
	u16 nfp_mac_idx = 0;

	entry = nfp_tunnel_lookup_offloaded_macs(app, netdev->dev_addr);
@@ -997,7 +997,7 @@ err_remove_hash:
err_free_entry:
	kfree(entry);
err_free_ida:
	if (ida_idx != NFP_MAX_MAC_INDEX)
	if (ida_idx != -1)
		ida_simple_remove(&priv->tun.mac_off_ids, ida_idx);

	return err;

@@ -1433,6 +1433,8 @@ static int temac_probe(struct platform_device *pdev)
		lp->indirect_lock = devm_kmalloc(&pdev->dev,
						 sizeof(*lp->indirect_lock),
						 GFP_KERNEL);
		if (!lp->indirect_lock)
			return -ENOMEM;
		spin_lock_init(lp->indirect_lock);
	}

@@ -668,11 +668,11 @@ static void sixpack_close(struct tty_struct *tty)
	 */
	netif_stop_queue(sp->dev);

	unregister_netdev(sp->dev);

	del_timer_sync(&sp->tx_t);
	del_timer_sync(&sp->resync_t);

	unregister_netdev(sp->dev);

	/* Free all 6pack frame buffers after unreg. */
	kfree(sp->rbuff);
	kfree(sp->xbuff);

@@ -200,7 +200,11 @@ static int ipq_mdio_reset(struct mii_bus *bus)
	if (ret)
		return ret;

	return clk_prepare_enable(priv->mdio_clk);
	ret = clk_prepare_enable(priv->mdio_clk);
	if (ret == 0)
		mdelay(10);

	return ret;
 }

static int ipq4019_mdio_probe(struct platform_device *pdev)

@@ -413,7 +413,7 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
		/* ignore the CRC length */
		len = (skb->data[1] | (skb->data[2] << 8)) - 4;

		if (len > ETH_FRAME_LEN)
		if (len > ETH_FRAME_LEN || len > skb->len)
			return 0;

		/* the last packet of current skb */

@@ -256,6 +256,7 @@ static void backend_disconnect(struct backend_info *be)
		unsigned int queue_index;

		xen_unregister_watchers(vif);
		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
#ifdef CONFIG_DEBUG_FS
		xenvif_debugfs_delif(vif);
#endif /* CONFIG_DEBUG_FS */
@@ -675,7 +676,6 @@ static void hotplug_status_changed(struct xenbus_watch *watch,

		/* Not interested in this watch anymore. */
		unregister_hotplug_status_watch(be);
		xenbus_rm(XBT_NIL, be->dev->nodename, "hotplug-status");
	}
	kfree(str);
 }
@@ -824,15 +824,11 @@ static void connect(struct backend_info *be)
	xenvif_carrier_on(be->vif);

	unregister_hotplug_status_watch(be);
	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
					   NULL, hotplug_status_changed,
					   "%s/%s", dev->nodename,
					   "hotplug-status");
		if (err)
			goto err;
	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
				   hotplug_status_changed,
				   "%s/%s", dev->nodename, "hotplug-status");
	if (!err)
		be->have_hotplug_status_watch = 1;
	}

	netif_tx_wake_all_queues(be->vif->dev);

@@ -629,16 +629,18 @@ err:
	return ret;
 }

static int vhost_vsock_stop(struct vhost_vsock *vsock)
static int vhost_vsock_stop(struct vhost_vsock *vsock, bool check_owner)
 {
	size_t i;
	int ret;
	int ret = 0;

	mutex_lock(&vsock->dev.mutex);

	ret = vhost_dev_check_owner(&vsock->dev);
	if (ret)
		goto err;
	if (check_owner) {
		ret = vhost_dev_check_owner(&vsock->dev);
		if (ret)
			goto err;
	}

	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
		struct vhost_virtqueue *vq = &vsock->vqs[i];
@@ -753,7 +755,12 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
	 * inefficient. Room for improvement here. */
	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);

	vhost_vsock_stop(vsock);
	/* Don't check the owner, because we are in the release path, so we
	 * need to stop the vsock device in any case.
	 * vhost_vsock_stop() can not fail in this case, so we don't need to
	 * check the return code.
	 */
	vhost_vsock_stop(vsock, false);
	vhost_vsock_flush(vsock);
	vhost_dev_stop(&vsock->dev);

@@ -868,7 +875,7 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl,
		if (start)
			return vhost_vsock_start(vsock);
		else
			return vhost_vsock_stop(vsock);
			return vhost_vsock_stop(vsock, true);
	case VHOST_GET_FEATURES:
		features = VHOST_VSOCK_FEATURES;
		if (copy_to_user(argp, &features, sizeof(features)))

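The vhost-vsock fix reduces to making the owner check optional: the ioctl path must validate the caller, while the release path must always be able to stop the device. A hedged sketch of that shape, with hypothetical device type and helpers rather than the vhost API:

	static int dev_stop(struct my_dev *d, bool check_owner)
	{
		int ret = 0;

		mutex_lock(&d->lock);
		if (check_owner) {
			ret = my_dev_check_owner(d);	/* hypothetical owner test */
			if (ret)
				goto out;
		}
		my_dev_do_stop(d);			/* hypothetical stop */
	out:
		mutex_unlock(&d->lock);
		return ret;
	}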
@@ -209,11 +209,9 @@ static inline bool map_value_has_timer(const struct bpf_map *map)
static inline void check_and_init_map_value(struct bpf_map *map, void *dst)
 {
	if (unlikely(map_value_has_spin_lock(map)))
		*(struct bpf_spin_lock *)(dst + map->spin_lock_off) =
			(struct bpf_spin_lock){};
		memset(dst + map->spin_lock_off, 0, sizeof(struct bpf_spin_lock));
	if (unlikely(map_value_has_timer(map)))
		*(struct bpf_timer *)(dst + map->timer_off) =
			(struct bpf_timer){};
		memset(dst + map->timer_off, 0, sizeof(struct bpf_timer));
 }

/* copy everything but bpf_spin_lock and bpf_timer. There could be one of each. */
@@ -224,7 +222,8 @@ static inline void copy_map_value(struct bpf_map *map, void *dst, void *src)
	if (unlikely(map_value_has_spin_lock(map))) {
		s_off = map->spin_lock_off;
		s_sz = sizeof(struct bpf_spin_lock);
	} else if (unlikely(map_value_has_timer(map))) {
	}
	if (unlikely(map_value_has_timer(map))) {
		t_off = map->timer_off;
		t_sz = sizeof(struct bpf_timer);
	}

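The memset() switch matters because assigning a compound literal only guarantees that the named members are zeroed; padding bytes may stay unspecified, and that is what tripped clang here. memset() clears every byte of the object's footprint. A small illustration of the difference:

	#include <string.h>

	struct padded {
		char c;		/* typically followed by 7 padding bytes */
		long l;
	};

	static void clear_by_assign(struct padded *p)
	{
		*p = (struct padded){};		/* padding bytes are unspecified */
	}

	static void clear_by_memset(struct padded *p)
	{
		memset(p, 0, sizeof(*p));	/* zeroes members and padding alike */
	}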
@@ -22,7 +22,7 @@
#include <asm/checksum.h>

#ifndef _HAVE_ARCH_COPY_AND_CSUM_FROM_USER
static inline
static __always_inline
__wsum csum_and_copy_from_user (const void __user *src, void *dst,
				int len)
 {
@@ -33,7 +33,7 @@ __wsum csum_and_copy_from_user (const void __user *src, void *dst,
#endif

#ifndef HAVE_CSUM_COPY_USER
static __inline__ __wsum csum_and_copy_to_user
static __always_inline __wsum csum_and_copy_to_user
(const void *src, void __user *dst, int len)
 {
	__wsum sum = csum_partial(src, len, ~0U);
@@ -45,7 +45,7 @@ static __inline__ __wsum csum_and_copy_to_user
#endif

#ifndef _HAVE_ARCH_CSUM_AND_COPY
static inline __wsum
static __always_inline __wsum
csum_partial_copy_nocheck(const void *src, void *dst, int len)
 {
	memcpy(dst, src, len);
@@ -54,7 +54,7 @@ csum_partial_copy_nocheck(const void *src, void *dst, int len)
#endif

#ifndef HAVE_ARCH_CSUM_ADD
static inline __wsum csum_add(__wsum csum, __wsum addend)
static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
 {
	u32 res = (__force u32)csum;
	res += (__force u32)addend;
@@ -62,12 +62,12 @@ static inline __wsum csum_add(__wsum csum, __wsum addend)
 }
#endif

static inline __wsum csum_sub(__wsum csum, __wsum addend)
static __always_inline __wsum csum_sub(__wsum csum, __wsum addend)
 {
	return csum_add(csum, ~addend);
 }

static inline __sum16 csum16_add(__sum16 csum, __be16 addend)
static __always_inline __sum16 csum16_add(__sum16 csum, __be16 addend)
 {
	u16 res = (__force u16)csum;

@@ -75,12 +75,12 @@ static inline __sum16 csum16_add(__sum16 csum, __be16 addend)
	return (__force __sum16)(res + (res < (__force u16)addend));
 }

static inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
static __always_inline __sum16 csum16_sub(__sum16 csum, __be16 addend)
 {
	return csum16_add(csum, ~addend);
 }

static inline __wsum csum_shift(__wsum sum, int offset)
static __always_inline __wsum csum_shift(__wsum sum, int offset)
 {
	/* rotate sum to align it with a 16b boundary */
	if (offset & 1)
@@ -88,42 +88,43 @@ static inline __wsum csum_shift(__wsum sum, int offset)
	return sum;
 }

static inline __wsum
static __always_inline __wsum
csum_block_add(__wsum csum, __wsum csum2, int offset)
 {
	return csum_add(csum, csum_shift(csum2, offset));
 }

static inline __wsum
static __always_inline __wsum
csum_block_add_ext(__wsum csum, __wsum csum2, int offset, int len)
 {
	return csum_block_add(csum, csum2, offset);
 }

static inline __wsum
static __always_inline __wsum
csum_block_sub(__wsum csum, __wsum csum2, int offset)
 {
	return csum_block_add(csum, ~csum2, offset);
 }

static inline __wsum csum_unfold(__sum16 n)
static __always_inline __wsum csum_unfold(__sum16 n)
 {
	return (__force __wsum)n;
 }

static inline __wsum csum_partial_ext(const void *buff, int len, __wsum sum)
static __always_inline
__wsum csum_partial_ext(const void *buff, int len, __wsum sum)
 {
	return csum_partial(buff, len, sum);
 }

#define CSUM_MANGLED_0 ((__force __sum16)0xffff)

static inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
static __always_inline void csum_replace_by_diff(__sum16 *sum, __wsum diff)
 {
	*sum = csum_fold(csum_add(diff, ~csum_unfold(*sum)));
 }

static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
static __always_inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
 {
	__wsum tmp = csum_sub(~csum_unfold(*sum), (__force __wsum)from);

@@ -136,11 +137,16 @@ static inline void csum_replace4(__sum16 *sum, __be32 from, __be32 to)
 * m : old value of a 16bit field
 * m' : new value of a 16bit field
 */
static inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
static __always_inline void csum_replace2(__sum16 *sum, __be16 old, __be16 new)
 {
	*sum = ~csum16_add(csum16_sub(~(*sum), old), new);
 }

static inline void csum_replace(__wsum *csum, __wsum old, __wsum new)
 {
	*csum = csum_add(csum_sub(*csum, old), new);
 }

struct sk_buff;
void inet_proto_csum_replace4(__sum16 *sum, struct sk_buff *skb,
			      __be32 from, __be32 to, bool pseudohdr);
@@ -150,16 +156,16 @@ void inet_proto_csum_replace16(__sum16 *sum, struct sk_buff *skb,
void inet_proto_csum_replace_by_diff(__sum16 *sum, struct sk_buff *skb,
				     __wsum diff, bool pseudohdr);

static inline void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
					    __be16 from, __be16 to,
					    bool pseudohdr)
static __always_inline
void inet_proto_csum_replace2(__sum16 *sum, struct sk_buff *skb,
			      __be16 from, __be16 to, bool pseudohdr)
 {
	inet_proto_csum_replace4(sum, skb, (__force __be32)from,
				 (__force __be32)to, pseudohdr);
 }

static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
				    int start, int offset)
static __always_inline __wsum remcsum_adjust(void *ptr, __wsum csum,
					     int start, int offset)
 {
	__sum16 *psum = (__sum16 *)(ptr + offset);
	__wsum delta;
@@ -175,12 +181,12 @@ static inline __wsum remcsum_adjust(void *ptr, __wsum csum,
	return delta;
 }

static inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
static __always_inline void remcsum_unadjust(__sum16 *psum, __wsum delta)
 {
	*psum = csum_fold(csum_sub(delta, (__force __wsum)*psum));
 }

static inline __wsum wsum_negate(__wsum val)
static __always_inline __wsum wsum_negate(__wsum val)
 {
	return (__force __wsum)-((__force u32)val);
 }

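Plain `inline` is only a hint; under some configurations (size-optimized builds in particular) the compiler can emit these one-line checksum helpers as out-of-line calls on hot paths, which is what the blanket `__always_inline` conversion prevents. A userspace sketch of the attribute, applied to a fold helper that mirrors (but is not) the kernel's csum_fold():

	/* Named with a trailing underscore to avoid the kernel's own macro. */
	#define always_inline_ inline __attribute__((__always_inline__))

	static always_inline_ unsigned short csum_fold32(unsigned int sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);	/* fold high half into low */
		sum = (sum & 0xffff) + (sum >> 16);	/* absorb any new carry */
		return (unsigned short)~sum;
	}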
@@ -905,9 +905,9 @@ struct nft_expr_ops {
	int				(*offload)(struct nft_offload_ctx *ctx,
						   struct nft_flow_rule *flow,
						   const struct nft_expr *expr);
	bool				(*offload_action)(const struct nft_expr *expr);
	void				(*offload_stats)(struct nft_expr *expr,
							 const struct flow_stats *stats);
	u32				offload_flags;
	const struct nft_expr_type	*type;
	void				*data;
};

@@ -67,8 +67,6 @@ struct nft_flow_rule {
	struct flow_rule	*rule;
};

#define NFT_OFFLOAD_F_ACTION	(1 << 0)

void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
				 enum flow_dissector_key_id addr_type);

@@ -507,7 +507,7 @@ struct sock {
#endif
	u16			sk_tsflags;
	u8			sk_shutdown;
	u32			sk_tskey;
	atomic_t		sk_tskey;
	atomic_t		sk_zckey;

	u8			sk_clockid;
@@ -2667,7 +2667,7 @@ static inline void _sock_tx_timestamp(struct sock *sk, __u16 tsflags,
		__sock_tx_timestamp(tsflags, tx_flags);
		if (tsflags & SOF_TIMESTAMPING_OPT_ID && tskey &&
		    tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK)
			*tskey = sk->sk_tskey++;
			*tskey = atomic_inc_return(&sk->sk_tskey) - 1;
	}
	if (unlikely(sock_flag(sk, SOCK_WIFI_STATUS)))
		*tx_flags |= SKBTX_WIFI_STATUS;

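Turning sk_tskey into an atomic_t closes a plain data race: two CPUs executing `sk_tskey++` concurrently can hand out the same timestamp key. `atomic_inc_return(...) - 1` is the atomic equivalent of the old post-increment. The same reservation pattern as a standalone C11 sketch:

	#include <stdatomic.h>

	static atomic_uint tskey;

	/* Each caller gets a distinct key; atomic_fetch_add() returns the
	 * old value, matching the post-increment semantics of the racy
	 * non-atomic version. */
	static unsigned int reserve_tskey(void)
	{
		return atomic_fetch_add(&tskey, 1);
	}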
@@ -5688,7 +5688,8 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
			}
			if (check_ptr_off_reg(env, reg, regno))
				return -EINVAL;
		} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || reg2btf_ids[reg->type])) {
		} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID ||
			   (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) {
			const struct btf_type *reg_ref_t;
			const struct btf *reg_btf;
			const char *reg_ref_tname;
@@ -5706,7 +5707,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
				reg_ref_id = reg->btf_id;
			} else {
				reg_btf = btf_vmlinux;
				reg_ref_id = *reg2btf_ids[reg->type];
				reg_ref_id = *reg2btf_ids[base_type(reg->type)];
			}

			reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id,

@@ -2,6 +2,7 @@
/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
 */
#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/bpf-cgroup.h>
#include <linux/rcupdate.h>
#include <linux/random.h>
@@ -1075,6 +1076,7 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
	void *key;
	u32 idx;

	BTF_TYPE_EMIT(struct bpf_timer);
	callback_fn = rcu_dereference_check(t->callback_fn, rcu_read_lock_bh_held());
	if (!callback_fn)
		goto out;

@@ -1355,6 +1355,7 @@ int generic_map_delete_batch(struct bpf_map *map,
		maybe_wait_bpf_programs(map);
		if (err)
			break;
		cond_resched();
	}
	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
		err = -EFAULT;
@@ -1412,6 +1413,7 @@ int generic_map_update_batch(struct bpf_map *map,

		if (err)
			break;
		cond_resched();
	}

	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
@@ -1509,6 +1511,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
		swap(prev_key, key);
		retry = MAP_LOOKUP_RETRIES;
		cp++;
		cond_resched();
	}

	if (err == -EFAULT)

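All three batch loops get the same one-line treatment: a cond_resched() per iteration, so a huge user-supplied batch cannot monopolize a CPU on non-preemptible kernels. A kernel-style sketch of the pattern (the item type and worker are hypothetical, only cond_resched() is the real API):

	#include <linux/sched.h>

	static void process_batch(struct work_item *items, unsigned int count)
	{
		unsigned int i;

		for (i = 0; i < count; i++) {
			process_one(&items[i]);	/* hypothetical per-element work */
			cond_resched();		/* explicit scheduling point */
		}
	}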
@@ -2006,7 +2006,7 @@ struct j1939_session *j1939_tp_send(struct j1939_priv *priv,
		/* set the end-packet for broadcast */
		session->pkt.last = session->pkt.total;

	skcb->tskey = session->sk->sk_tskey++;
	skcb->tskey = atomic_inc_return(&session->sk->sk_tskey) - 1;
	session->tskey = skcb->tskey;

	return session;

@@ -2710,6 +2710,9 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
	if (unlikely(flags))
		return -EINVAL;

	if (unlikely(len == 0))
		return 0;

	/* First find the starting scatterlist element */
	i = msg->sg.start;
	do {

@@ -213,7 +213,7 @@ static ssize_t speed_show(struct device *dev,
	if (!rtnl_trylock())
		return restart_syscall();

	if (netif_running(netdev)) {
	if (netif_running(netdev) && netif_device_present(netdev)) {
		struct ethtool_link_ksettings cmd;

		if (!__ethtool_get_link_ksettings(netdev, &cmd))

@@ -2276,7 +2276,7 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
		/* Free pulled out fragments. */
		while ((list = skb_shinfo(skb)->frag_list) != insp) {
			skb_shinfo(skb)->frag_list = list->next;
			kfree_skb(list);
			consume_skb(list);
		}
		/* And insert new clone at head. */
		if (clone) {
@@ -4730,7 +4730,7 @@ static void __skb_complete_tx_timestamp(struct sk_buff *skb,
	if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) {
		serr->ee.ee_data = skb_shinfo(skb)->tskey;
		if (sk_is_tcp(sk))
			serr->ee.ee_data -= sk->sk_tskey;
			serr->ee.ee_data -= atomic_read(&sk->sk_tskey);
	}

	err = sock_queue_err_skb(sk, skb);
@@ -6105,7 +6105,7 @@ static int pskb_carve_frag_list(struct sk_buff *skb,
		/* Free pulled out fragments. */
		while ((list = shinfo->frag_list) != insp) {
			shinfo->frag_list = list->next;
			kfree_skb(list);
			consume_skb(list);
		}
		/* And insert new clone at head. */
		if (clone) {

@@ -879,9 +879,9 @@ int sock_set_timestamping(struct sock *sk, int optname,
			if ((1 << sk->sk_state) &
			    (TCPF_CLOSE | TCPF_LISTEN))
				return -EINVAL;
			sk->sk_tskey = tcp_sk(sk)->snd_una;
			atomic_set(&sk->sk_tskey, tcp_sk(sk)->snd_una);
		} else {
			sk->sk_tskey = 0;
			atomic_set(&sk->sk_tskey, 0);
		}
	}

@@ -260,11 +260,16 @@ static void dsa_netdev_ops_set(struct net_device *dev,
	dev->dsa_ptr->netdev_ops = ops;
 }

/* Keep the master always promiscuous if the tagging protocol requires that
 * (garbles MAC DA) or if it doesn't support unicast filtering, case in which
 * it would revert to promiscuous mode as soon as we call dev_uc_add() on it
 * anyway.
 */
static void dsa_master_set_promiscuity(struct net_device *dev, int inc)
 {
	const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops;

	if (!ops->promisc_on_master)
	if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_master)
		return;

	ASSERT_RTNL();

@@ -395,10 +395,17 @@ void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br)
		.tree_index = dp->ds->dst->index,
		.sw_index = dp->ds->index,
		.port = dp->index,
		.bridge = *dp->bridge,
	};
	int err;

	/* If the port could not be offloaded to begin with, then
	 * there is nothing to do.
	 */
	if (!dp->bridge)
		return;

	info.bridge = *dp->bridge;

	/* Here the port is already unbridged. Reflect the current configuration
	 * so that drivers can program their chips accordingly.
	 */
@@ -781,9 +788,15 @@ int dsa_port_host_fdb_add(struct dsa_port *dp, const unsigned char *addr,
	struct dsa_port *cpu_dp = dp->cpu_dp;
	int err;

	err = dev_uc_add(cpu_dp->master, addr);
	if (err)
		return err;
	/* Avoid a call to __dev_set_promiscuity() on the master, which
	 * requires rtnl_lock(), since we can't guarantee that is held here,
	 * and we can't take it either.
	 */
	if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) {
		err = dev_uc_add(cpu_dp->master, addr);
		if (err)
			return err;
	}

	return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_ADD, &info);
 }
@@ -800,9 +813,11 @@ int dsa_port_host_fdb_del(struct dsa_port *dp, const unsigned char *addr,
	struct dsa_port *cpu_dp = dp->cpu_dp;
	int err;

	err = dev_uc_del(cpu_dp->master, addr);
	if (err)
		return err;
	if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) {
		err = dev_uc_del(cpu_dp->master, addr);
		if (err)
			return err;
	}

	return dsa_port_notify(dp, DSA_NOTIFIER_HOST_FDB_DEL, &info);
 }

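Both DSA host-FDB hunks gate dev_uc_add()/dev_uc_del() on IFF_UNICAST_FLT: on a master without unicast filtering those calls fall back to toggling promiscuity, which needs rtnl_lock() that this path may not hold. A condensed kernel-style sketch of the guard (the wrapper function is hypothetical; the flag and dev_uc_add() are real):

	#include <linux/netdevice.h>

	static int host_addr_add(struct net_device *master, const unsigned char *addr)
	{
		/* Without IFF_UNICAST_FLT the master is kept promiscuous
		 * anyway, so programming the address would only trigger the
		 * rtnl-only __dev_set_promiscuity() path. */
		if (!(master->priv_flags & IFF_UNICAST_FLT))
			return 0;

		return dev_uc_add(master, addr);
	}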
@@ -1376,8 +1376,11 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
	}

	ops = rcu_dereference(inet_offloads[proto]);
	if (likely(ops && ops->callbacks.gso_segment))
	if (likely(ops && ops->callbacks.gso_segment)) {
		segs = ops->callbacks.gso_segment(skb, features);
		if (!segs)
			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
	}

	if (IS_ERR_OR_NULL(segs))
		goto out;

@@ -991,7 +991,7 @@ static int __ip_append_data(struct sock *sk,

	if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP &&
	    sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)
		tskey = sk->sk_tskey++;
		tskey = atomic_inc_return(&sk->sk_tskey) - 1;

	hh_len = LL_RESERVED_SPACE(rt->dst.dev);

@@ -187,7 +187,6 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
			 (int)ident, &ipv6_hdr(skb)->daddr, dif);
#endif
	} else {
		pr_err("ping: protocol(%x) is not supported\n", ntohs(skb->protocol));
		return NULL;
	}

@@ -846,7 +846,7 @@ udp_tunnel_nic_unregister(struct net_device *dev, struct udp_tunnel_nic *utn)
		list_for_each_entry(node, &info->shared->devices, list)
			if (node->dev == dev)
				break;
		if (node->dev != dev)
		if (list_entry_is_head(node, &info->shared->devices, list))
			return;

		list_del(&node->list);

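This hunk, and the two tipc ones later in the series, fix the same idiom: when list_for_each_entry() runs to completion without a break, the cursor is a bogus entry computed from the list head, so dereferencing its fields (here node->dev) reads out of bounds. list_entry_is_head() is the safe "not found" test. A sketch of the corrected pattern:

	#include <linux/list.h>

	struct node {
		struct list_head list;
		int id;
	};

	static struct node *find_node(struct list_head *head, int id)
	{
		struct node *n;

		list_for_each_entry(n, head, list)
			if (n->id == id)
				break;

		/* Loop ran off the end: n is not a real entry, don't touch it. */
		if (list_entry_is_head(n, head, list))
			return NULL;

		return n;
	}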
@@ -4998,6 +4998,7 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
	    nla_put_s32(skb, IFA_TARGET_NETNSID, args->netnsid))
		goto error;

	spin_lock_bh(&ifa->lock);
	if (!((ifa->flags&IFA_F_PERMANENT) &&
	      (ifa->prefered_lft == INFINITY_LIFE_TIME))) {
		preferred = ifa->prefered_lft;
@@ -5019,6 +5020,7 @@ static int inet6_fill_ifaddr(struct sk_buff *skb, struct inet6_ifaddr *ifa,
		preferred = INFINITY_LIFE_TIME;
		valid = INFINITY_LIFE_TIME;
	}
	spin_unlock_bh(&ifa->lock);

	if (!ipv6_addr_any(&ifa->peer_addr)) {
		if (nla_put_in6_addr(skb, IFA_LOCAL, &ifa->addr) < 0 ||

@@ -114,6 +114,8 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
	if (likely(ops && ops->callbacks.gso_segment)) {
		skb_reset_transport_header(skb);
		segs = ops->callbacks.gso_segment(skb, features);
		if (!segs)
			skb->network_header = skb_mac_header(skb) + nhoff - skb->head;
	}

	if (IS_ERR_OR_NULL(segs))

@@ -1465,7 +1465,7 @@ static int __ip6_append_data(struct sock *sk,

	if (cork->tx_flags & SKBTX_ANY_SW_TSTAMP &&
	    sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID)
		tskey = sk->sk_tskey++;
		tskey = atomic_inc_return(&sk->sk_tskey) - 1;

	hh_len = LL_RESERVED_SPACE(rt->dst.dev);

@@ -35,12 +35,14 @@ static const struct snmp_mib mptcp_snmp_list[] = {
	SNMP_MIB_ITEM("AddAddr", MPTCP_MIB_ADDADDR),
	SNMP_MIB_ITEM("EchoAdd", MPTCP_MIB_ECHOADD),
	SNMP_MIB_ITEM("PortAdd", MPTCP_MIB_PORTADD),
	SNMP_MIB_ITEM("AddAddrDrop", MPTCP_MIB_ADDADDRDROP),
	SNMP_MIB_ITEM("MPJoinPortSynRx", MPTCP_MIB_JOINPORTSYNRX),
	SNMP_MIB_ITEM("MPJoinPortSynAckRx", MPTCP_MIB_JOINPORTSYNACKRX),
	SNMP_MIB_ITEM("MPJoinPortAckRx", MPTCP_MIB_JOINPORTACKRX),
	SNMP_MIB_ITEM("MismatchPortSynRx", MPTCP_MIB_MISMATCHPORTSYNRX),
	SNMP_MIB_ITEM("MismatchPortAckRx", MPTCP_MIB_MISMATCHPORTACKRX),
	SNMP_MIB_ITEM("RmAddr", MPTCP_MIB_RMADDR),
	SNMP_MIB_ITEM("RmAddrDrop", MPTCP_MIB_RMADDRDROP),
	SNMP_MIB_ITEM("RmSubflow", MPTCP_MIB_RMSUBFLOW),
	SNMP_MIB_ITEM("MPPrioTx", MPTCP_MIB_MPPRIOTX),
	SNMP_MIB_ITEM("MPPrioRx", MPTCP_MIB_MPPRIORX),

@@ -28,12 +28,14 @@ enum linux_mptcp_mib_field {
	MPTCP_MIB_ADDADDR,		/* Received ADD_ADDR with echo-flag=0 */
	MPTCP_MIB_ECHOADD,		/* Received ADD_ADDR with echo-flag=1 */
	MPTCP_MIB_PORTADD,		/* Received ADD_ADDR with a port-number */
	MPTCP_MIB_ADDADDRDROP,		/* Dropped incoming ADD_ADDR */
	MPTCP_MIB_JOINPORTSYNRX,	/* Received a SYN MP_JOIN with a different port-number */
	MPTCP_MIB_JOINPORTSYNACKRX,	/* Received a SYNACK MP_JOIN with a different port-number */
	MPTCP_MIB_JOINPORTACKRX,	/* Received an ACK MP_JOIN with a different port-number */
	MPTCP_MIB_MISMATCHPORTSYNRX,	/* Received a SYN MP_JOIN with a mismatched port-number */
	MPTCP_MIB_MISMATCHPORTACKRX,	/* Received an ACK MP_JOIN with a mismatched port-number */
	MPTCP_MIB_RMADDR,		/* Received RM_ADDR */
	MPTCP_MIB_RMADDRDROP,		/* Dropped incoming RM_ADDR */
	MPTCP_MIB_RMSUBFLOW,		/* Remove a subflow */
	MPTCP_MIB_MPPRIOTX,		/* Transmit a MP_PRIO */
	MPTCP_MIB_MPPRIORX,		/* Received a MP_PRIO */

@@ -213,6 +213,8 @@ void mptcp_pm_add_addr_received(struct mptcp_sock *msk,
		mptcp_pm_add_addr_send_ack(msk);
	} else if (mptcp_pm_schedule_work(msk, MPTCP_PM_ADD_ADDR_RECEIVED)) {
		pm->remote = *addr;
	} else {
		__MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_ADDADDRDROP);
	}

	spin_unlock_bh(&pm->lock);
@@ -253,8 +255,10 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
		mptcp_event_addr_removed(msk, rm_list->ids[i]);

	spin_lock_bh(&pm->lock);
	mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED);
	pm->rm_list_rx = *rm_list;
	if (mptcp_pm_schedule_work(msk, MPTCP_PM_RM_ADDR_RECEIVED))
		pm->rm_list_rx = *rm_list;
	else
		__MPTCP_INC_STATS(sock_net((struct sock *)msk), MPTCP_MIB_RMADDRDROP);
	spin_unlock_bh(&pm->lock);
 }

@@ -546,6 +546,16 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
	if (msk->pm.add_addr_signaled < add_addr_signal_max) {
		local = select_signal_address(pernet, msk);

		/* due to racing events on both ends we can reach here while
		 * previous add address is still running: if we invoke now
		 * mptcp_pm_announce_addr(), that will fail and the
		 * corresponding id will be marked as used.
		 * Instead let the PM machinery reschedule us when the
		 * current address announce will be completed.
		 */
		if (msk->pm.addr_signal & BIT(MPTCP_ADD_ADDR_SIGNAL))
			return;

		if (local) {
			if (mptcp_pm_alloc_anno_list(msk, local)) {
				__clear_bit(local->addr.id, msk->pm.id_avail_bitmap);
@@ -650,6 +660,7 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
	unsigned int add_addr_accept_max;
	struct mptcp_addr_info remote;
	unsigned int subflows_max;
	bool reset_port = false;
	int i, nr;

	add_addr_accept_max = mptcp_pm_get_add_addr_accept_max(msk);
@@ -659,15 +670,19 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
		 msk->pm.add_addr_accepted, add_addr_accept_max,
		 msk->pm.remote.family);

	if (lookup_subflow_by_daddr(&msk->conn_list, &msk->pm.remote))
	remote = msk->pm.remote;
	if (lookup_subflow_by_daddr(&msk->conn_list, &remote))
		goto add_addr_echo;

	/* pick id 0 port, if none is provided the remote address */
	if (!remote.port) {
		reset_port = true;
		remote.port = sk->sk_dport;
	}

	/* connect to the specified remote address, using whatever
	 * local address the routing configuration will pick.
	 */
	remote = msk->pm.remote;
	if (!remote.port)
		remote.port = sk->sk_dport;
	nr = fill_local_addresses_vec(msk, addrs);

	msk->pm.add_addr_accepted++;
@@ -680,8 +695,12 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
		__mptcp_subflow_connect(sk, &addrs[i], &remote);
	spin_lock_bh(&msk->pm.lock);

	/* be sure to echo exactly the received address */
	if (reset_port)
		remote.port = 0;

add_addr_echo:
	mptcp_pm_announce_addr(msk, &msk->pm.remote, true);
	mptcp_pm_announce_addr(msk, &remote, true);
	mptcp_pm_nl_addr_send_ack(msk);
 }

@@ -6551,12 +6551,15 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
 {
	struct nft_object *newobj;
	struct nft_trans *trans;
	int err;
	int err = -ENOMEM;

	if (!try_module_get(type->owner))
		return -ENOENT;

	trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
				sizeof(struct nft_trans_obj));
	if (!trans)
		return -ENOMEM;
		goto err_trans;

	newobj = nft_obj_init(ctx, type, attr);
	if (IS_ERR(newobj)) {
@@ -6573,6 +6576,8 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,

err_free_trans:
	kfree(trans);
err_trans:
	module_put(type->owner);
	return err;
 }

@@ -8185,7 +8190,7 @@ static void nft_obj_commit_update(struct nft_trans *trans)
	if (obj->ops->update)
		obj->ops->update(obj, newobj);

	kfree(newobj);
	nft_obj_destroy(&trans->ctx, newobj);
 }

static void nft_commit_release(struct nft_trans *trans)
@@ -8976,7 +8981,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
			break;
		case NFT_MSG_NEWOBJ:
			if (nft_trans_obj_update(trans)) {
				kfree(nft_trans_obj_newobj(trans));
				nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans));
				nft_trans_destroy(trans);
			} else {
				trans->ctx.table->use--;
@@ -9636,10 +9641,13 @@ EXPORT_SYMBOL_GPL(__nft_release_basechain);

static void __nft_release_hook(struct net *net, struct nft_table *table)
 {
	struct nft_flowtable *flowtable;
	struct nft_chain *chain;

	list_for_each_entry(chain, &table->chains, list)
		nf_tables_unregister_hook(net, table, chain);
	list_for_each_entry(flowtable, &table->flowtables, list)
		nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list);
 }

static void __nft_release_hooks(struct net *net)

@@ -94,7 +94,8 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,

	expr = nft_expr_first(rule);
	while (nft_expr_more(rule, expr)) {
		if (expr->ops->offload_flags & NFT_OFFLOAD_F_ACTION)
		if (expr->ops->offload_action &&
		    expr->ops->offload_action(expr))
			num_actions++;

		expr = nft_expr_next(expr);

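The NFT_OFFLOAD_F_ACTION flag was a static per-ops property, but whether an expression contributes a flow action can depend on its payload: as the nft_immediate hunk below shows, an immediate is an action only when it writes the verdict register. Hence the flag becomes a bool callback. A self-contained sketch of the counting side, with hypothetical mirror structures rather than the nf_tables types:

	#include <stdbool.h>

	struct expr;

	struct expr_ops {
		bool (*offload_action)(const struct expr *expr);
	};

	/* Ask the expression itself instead of testing a static flag. */
	static unsigned int count_offload_actions(const struct expr *expr,
						  const struct expr_ops *ops)
	{
		return (ops->offload_action && ops->offload_action(expr)) ? 1 : 0;
	}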
@@ -67,6 +67,11 @@ static int nft_dup_netdev_offload(struct nft_offload_ctx *ctx,
	return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_MIRRED, oif);
 }

static bool nft_dup_netdev_offload_action(const struct nft_expr *expr)
 {
	return true;
 }

static struct nft_expr_type nft_dup_netdev_type;
static const struct nft_expr_ops nft_dup_netdev_ops = {
	.type		= &nft_dup_netdev_type,
@@ -75,6 +80,7 @@ static const struct nft_expr_ops nft_dup_netdev_ops = {
	.init		= nft_dup_netdev_init,
	.dump		= nft_dup_netdev_dump,
	.offload	= nft_dup_netdev_offload,
	.offload_action	= nft_dup_netdev_offload_action,
};

static struct nft_expr_type nft_dup_netdev_type __read_mostly = {

@@ -79,6 +79,11 @@ static int nft_fwd_netdev_offload(struct nft_offload_ctx *ctx,
	return nft_fwd_dup_netdev_offload(ctx, flow, FLOW_ACTION_REDIRECT, oif);
 }

static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr)
 {
	return true;
 }

struct nft_fwd_neigh {
	u8			sreg_dev;
	u8			sreg_addr;
@@ -222,6 +227,7 @@ static const struct nft_expr_ops nft_fwd_netdev_ops = {
	.dump		= nft_fwd_netdev_dump,
	.validate	= nft_fwd_validate,
	.offload	= nft_fwd_netdev_offload,
	.offload_action	= nft_fwd_netdev_offload_action,
};

static const struct nft_expr_ops *

@@ -213,6 +213,16 @@ static int nft_immediate_offload(struct nft_offload_ctx *ctx,
	return 0;
 }

static bool nft_immediate_offload_action(const struct nft_expr *expr)
 {
	const struct nft_immediate_expr *priv = nft_expr_priv(expr);

	if (priv->dreg == NFT_REG_VERDICT)
		return true;

	return false;
 }

static const struct nft_expr_ops nft_imm_ops = {
	.type		= &nft_imm_type,
	.size		= NFT_EXPR_SIZE(sizeof(struct nft_immediate_expr)),
@@ -224,7 +234,7 @@ static const struct nft_expr_ops nft_imm_ops = {
	.dump		= nft_immediate_dump,
	.validate	= nft_immediate_validate,
	.offload	= nft_immediate_offload,
	.offload_flags	= NFT_OFFLOAD_F_ACTION,
	.offload_action	= nft_immediate_offload_action,
};

struct nft_expr_type nft_imm_type __read_mostly = {

@@ -340,11 +340,20 @@ static int nft_limit_obj_pkts_dump(struct sk_buff *skb,
	return nft_limit_dump(skb, &priv->limit, NFT_LIMIT_PKTS);
 }

static void nft_limit_obj_pkts_destroy(const struct nft_ctx *ctx,
				       struct nft_object *obj)
 {
	struct nft_limit_priv_pkts *priv = nft_obj_data(obj);

	nft_limit_destroy(ctx, &priv->limit);
 }

static struct nft_object_type nft_limit_obj_type;
static const struct nft_object_ops nft_limit_obj_pkts_ops = {
	.type		= &nft_limit_obj_type,
	.size		= NFT_EXPR_SIZE(sizeof(struct nft_limit_priv_pkts)),
	.init		= nft_limit_obj_pkts_init,
	.destroy	= nft_limit_obj_pkts_destroy,
	.eval		= nft_limit_obj_pkts_eval,
	.dump		= nft_limit_obj_pkts_dump,
};
@@ -378,11 +387,20 @@ static int nft_limit_obj_bytes_dump(struct sk_buff *skb,
	return nft_limit_dump(skb, priv, NFT_LIMIT_PKT_BYTES);
 }

static void nft_limit_obj_bytes_destroy(const struct nft_ctx *ctx,
					struct nft_object *obj)
 {
	struct nft_limit_priv *priv = nft_obj_data(obj);

	nft_limit_destroy(ctx, priv);
 }

static struct nft_object_type nft_limit_obj_type;
static const struct nft_object_ops nft_limit_obj_bytes_ops = {
	.type		= &nft_limit_obj_type,
	.size		= sizeof(struct nft_limit_priv),
	.init		= nft_limit_obj_bytes_init,
	.destroy	= nft_limit_obj_bytes_destroy,
	.eval		= nft_limit_obj_bytes_eval,
	.dump		= nft_limit_obj_bytes_dump,
};

@@ -220,8 +220,10 @@ static void socket_mt_destroy(const struct xt_mtdtor_param *par)
 {
	if (par->family == NFPROTO_IPV4)
		nf_defrag_ipv4_disable(par->net);
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
	else if (par->family == NFPROTO_IPV6)
		nf_defrag_ipv6_disable(par->net);
#endif
 }

static struct xt_match socket_mt_reg[] __read_mostly = {

@@ -423,12 +423,43 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto,
	memcpy(addr, new_addr, sizeof(__be32[4]));
 }

static void set_ipv6_fl(struct ipv6hdr *nh, u32 fl, u32 mask)
static void set_ipv6_dsfield(struct sk_buff *skb, struct ipv6hdr *nh, u8 ipv6_tclass, u8 mask)
 {
	u8 old_ipv6_tclass = ipv6_get_dsfield(nh);

	ipv6_tclass = OVS_MASKED(old_ipv6_tclass, ipv6_tclass, mask);

	if (skb->ip_summed == CHECKSUM_COMPLETE)
		csum_replace(&skb->csum, (__force __wsum)(old_ipv6_tclass << 12),
			     (__force __wsum)(ipv6_tclass << 12));

	ipv6_change_dsfield(nh, ~mask, ipv6_tclass);
 }

static void set_ipv6_fl(struct sk_buff *skb, struct ipv6hdr *nh, u32 fl, u32 mask)
 {
	u32 ofl;

	ofl = nh->flow_lbl[0] << 16 | nh->flow_lbl[1] << 8 | nh->flow_lbl[2];
	fl = OVS_MASKED(ofl, fl, mask);

	/* Bits 21-24 are always unmasked, so this retains their values. */
	OVS_SET_MASKED(nh->flow_lbl[0], (u8)(fl >> 16), (u8)(mask >> 16));
	OVS_SET_MASKED(nh->flow_lbl[1], (u8)(fl >> 8), (u8)(mask >> 8));
	OVS_SET_MASKED(nh->flow_lbl[2], (u8)fl, (u8)mask);
	nh->flow_lbl[0] = (u8)(fl >> 16);
	nh->flow_lbl[1] = (u8)(fl >> 8);
	nh->flow_lbl[2] = (u8)fl;

	if (skb->ip_summed == CHECKSUM_COMPLETE)
		csum_replace(&skb->csum, (__force __wsum)htonl(ofl), (__force __wsum)htonl(fl));
 }

static void set_ipv6_ttl(struct sk_buff *skb, struct ipv6hdr *nh, u8 new_ttl, u8 mask)
 {
	new_ttl = OVS_MASKED(nh->hop_limit, new_ttl, mask);

	if (skb->ip_summed == CHECKSUM_COMPLETE)
		csum_replace(&skb->csum, (__force __wsum)(nh->hop_limit << 8),
			     (__force __wsum)(new_ttl << 8));
	nh->hop_limit = new_ttl;
 }

static void set_ip_ttl(struct sk_buff *skb, struct iphdr *nh, u8 new_ttl,
@@ -546,18 +577,17 @@ static int set_ipv6(struct sk_buff *skb, struct sw_flow_key *flow_key,
		}
	}
	if (mask->ipv6_tclass) {
		ipv6_change_dsfield(nh, ~mask->ipv6_tclass, key->ipv6_tclass);
		set_ipv6_dsfield(skb, nh, key->ipv6_tclass, mask->ipv6_tclass);
		flow_key->ip.tos = ipv6_get_dsfield(nh);
	}
	if (mask->ipv6_label) {
		set_ipv6_fl(nh, ntohl(key->ipv6_label),
		set_ipv6_fl(skb, nh, ntohl(key->ipv6_label),
			    ntohl(mask->ipv6_label));
		flow_key->ipv6.label =
			*(__be32 *)nh & htonl(IPV6_FLOWINFO_FLOWLABEL);
	}
	if (mask->ipv6_hlimit) {
		OVS_SET_MASKED(nh->hop_limit, key->ipv6_hlimit,
			       mask->ipv6_hlimit);
		set_ipv6_ttl(skb, nh, key->ipv6_hlimit, mask->ipv6_hlimit);
		flow_key->ip.ttl = nh->hop_limit;
	}
	return 0;

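The common thread in these openvswitch hunks: with CHECKSUM_COMPLETE, skb->csum covers the packet bytes, so any header rewrite must be folded into it via csum_replace(), which applies the one's-complement identity csum' = csum - old + new. A standalone sketch of that identity (a simplified stand-in for the kernel helpers, not the kernel code):

	/* 32-bit one's-complement add with end-around carry. */
	static unsigned int csum32_add(unsigned int csum, unsigned int addend)
	{
		unsigned int res = csum + addend;

		return res + (res < addend);
	}

	/* csum' = csum - old + new; subtraction is addition of the
	 * complement, as in csum_add(csum_sub(csum, old), new). */
	static unsigned int csum32_replace(unsigned int csum,
					   unsigned int old, unsigned int new_val)
	{
		return csum32_add(csum32_add(csum, ~old), new_val);
	}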
@@ -274,7 +274,7 @@ static int tcf_action_offload_add_ex(struct tc_action *action,
	err = tc_setup_action(&fl_action->action, actions);
	if (err) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Failed to setup tc actions for offload\n");
				   "Failed to setup tc actions for offload");
		goto fl_err;
	}

@@ -533,11 +533,6 @@ static bool tcf_ct_flow_table_lookup(struct tcf_ct_params *p,
	struct nf_conn *ct;
	u8 dir;

	/* Previously seen or loopback */
	ct = nf_ct_get(skb, &ctinfo);
	if ((ct && !nf_ct_is_template(ct)) || ctinfo == IP_CT_UNTRACKED)
		return false;

	switch (family) {
	case NFPROTO_IPV4:
		if (!tcf_ct_flow_table_fill_tuple_ipv4(skb, &tuple, &tcph))

@@ -113,7 +113,7 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
	pnettable = &sn->pnettable;

	/* remove table entry */
	write_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist,
				 list) {
		if (!pnet_name ||
@@ -131,7 +131,7 @@ static int smc_pnet_remove_by_pnetid(struct net *net, char *pnet_name)
			rc = 0;
		}
	}
	write_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);

	/* if this is not the initial namespace, stop here */
	if (net != &init_net)
@@ -192,7 +192,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
	sn = net_generic(net, smc_net_id);
	pnettable = &sn->pnettable;

	write_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
		if (pnetelem->type == SMC_PNET_ETH && !pnetelem->ndev &&
		    !strncmp(pnetelem->eth_name, ndev->name, IFNAMSIZ)) {
@@ -206,7 +206,7 @@ static int smc_pnet_add_by_ndev(struct net_device *ndev)
			break;
		}
	}
	write_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);
	return rc;
 }

@@ -224,7 +224,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
	sn = net_generic(net, smc_net_id);
	pnettable = &sn->pnettable;

	write_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry_safe(pnetelem, tmp_pe, &pnettable->pnetlist, list) {
		if (pnetelem->type == SMC_PNET_ETH && pnetelem->ndev == ndev) {
			dev_put_track(pnetelem->ndev, &pnetelem->dev_tracker);
@@ -237,7 +237,7 @@ static int smc_pnet_remove_by_ndev(struct net_device *ndev)
			break;
		}
	}
	write_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);
	return rc;
 }

@@ -370,7 +370,7 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
	strncpy(new_pe->eth_name, eth_name, IFNAMSIZ);
	rc = -EEXIST;
	new_netdev = true;
	write_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
		if (tmp_pe->type == SMC_PNET_ETH &&
		    !strncmp(tmp_pe->eth_name, eth_name, IFNAMSIZ)) {
@@ -385,9 +385,9 @@ static int smc_pnet_add_eth(struct smc_pnettable *pnettable, struct net *net,
					  GFP_ATOMIC);
		}
		list_add_tail(&new_pe->list, &pnettable->pnetlist);
		write_unlock(&pnettable->lock);
		mutex_unlock(&pnettable->lock);
	} else {
		write_unlock(&pnettable->lock);
		mutex_unlock(&pnettable->lock);
		kfree(new_pe);
		goto out_put;
	}
@@ -448,7 +448,7 @@ static int smc_pnet_add_ib(struct smc_pnettable *pnettable, char *ib_name,
	new_pe->ib_port = ib_port;

	new_ibdev = true;
	write_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
		if (tmp_pe->type == SMC_PNET_IB &&
		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
@@ -458,9 +458,9 @@ static int smc_pnet_add_ib(struct smc_pnettable *pnettable, char *ib_name,
	}
	if (new_ibdev) {
		list_add_tail(&new_pe->list, &pnettable->pnetlist);
		write_unlock(&pnettable->lock);
		mutex_unlock(&pnettable->lock);
	} else {
		write_unlock(&pnettable->lock);
		mutex_unlock(&pnettable->lock);
		kfree(new_pe);
	}
	return (new_ibdev) ? 0 : -EEXIST;
@@ -605,7 +605,7 @@ static int _smc_pnet_dump(struct net *net, struct sk_buff *skb, u32 portid,
	pnettable = &sn->pnettable;

	/* dump pnettable entries */
	read_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry(pnetelem, &pnettable->pnetlist, list) {
		if (pnetid && !smc_pnet_match(pnetelem->pnet_name, pnetid))
			continue;
@@ -620,7 +620,7 @@ static int _smc_pnet_dump(struct net *net, struct sk_buff *skb, u32 portid,
			break;
		}
	}
	read_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);
	return idx;
 }

@@ -864,7 +864,7 @@ int smc_pnet_net_init(struct net *net)
	struct smc_pnetids_ndev *pnetids_ndev = &sn->pnetids_ndev;

	INIT_LIST_HEAD(&pnettable->pnetlist);
	rwlock_init(&pnettable->lock);
	mutex_init(&pnettable->lock);
	INIT_LIST_HEAD(&pnetids_ndev->list);
	rwlock_init(&pnetids_ndev->lock);

@@ -944,7 +944,7 @@ static int smc_pnet_find_ndev_pnetid_by_table(struct net_device *ndev,
	sn = net_generic(net, smc_net_id);
	pnettable = &sn->pnettable;

	read_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry(pnetelem, &pnettable->pnetlist, list) {
		if (pnetelem->type == SMC_PNET_ETH && ndev == pnetelem->ndev) {
			/* get pnetid of netdev device */
@@ -953,7 +953,7 @@ static int smc_pnet_find_ndev_pnetid_by_table(struct net_device *ndev,
			break;
		}
	}
	read_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);
	return rc;
 }

@@ -1156,7 +1156,7 @@ int smc_pnetid_by_table_ib(struct smc_ib_device *smcibdev, u8 ib_port)
	sn = net_generic(&init_net, smc_net_id);
	pnettable = &sn->pnettable;

	read_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
		if (tmp_pe->type == SMC_PNET_IB &&
		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX) &&
@@ -1166,7 +1166,7 @@ int smc_pnetid_by_table_ib(struct smc_ib_device *smcibdev, u8 ib_port)
			break;
		}
	}
	read_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);

	return rc;
 }
@@ -1185,7 +1185,7 @@ int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
	sn = net_generic(&init_net, smc_net_id);
	pnettable = &sn->pnettable;

	read_lock(&pnettable->lock);
	mutex_lock(&pnettable->lock);
	list_for_each_entry(tmp_pe, &pnettable->pnetlist, list) {
		if (tmp_pe->type == SMC_PNET_IB &&
		    !strncmp(tmp_pe->ib_name, ib_name, IB_DEVICE_NAME_MAX)) {
@@ -1194,7 +1194,7 @@ int smc_pnetid_by_table_smcd(struct smcd_dev *smcddev)
			break;
		}
	}
	read_unlock(&pnettable->lock);
	mutex_unlock(&pnettable->lock);

	return rc;
 }

@@ -29,7 +29,7 @@ struct smc_link_group;
 * @pnetlist: List of PNETIDs
 */
struct smc_pnettable {
	rwlock_t lock;
	struct mutex lock;
	struct list_head pnetlist;
};

@@ -967,7 +967,7 @@ static int __tipc_nl_add_nametable_publ(struct tipc_nl_msg *msg,
		list_for_each_entry(p, &sr->all_publ, all_publ)
			if (p->key == *last_key)
				break;
		if (p->key != *last_key)
		if (list_entry_is_head(p, &sr->all_publ, all_publ))
			return -EPIPE;
	} else {
		p = list_first_entry(&sr->all_publ,

@ -3749,7 +3749,7 @@ static int __tipc_nl_list_sk_publ(struct sk_buff *skb,
|
||||
if (p->key == *last_publ)
|
||||
break;
|
||||
}
|
||||
if (p->key != *last_publ) {
|
||||
if (list_entry_is_head(p, &tsk->publications, binding_sock)) {
|
||||
/* We never set seq or call nl_dump_check_consistent()
|
||||
* this means that setting prev_seq here will cause the
|
||||
* consistence check to fail in the netlink callback
|
||||
|
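Both tipc hunks fix the same unsafe idiom. When list_for_each_entry() runs to completion without hitting the break, the cursor is not a real element: it is container_of() applied to the list head itself, so reading a member such as p->key is an out-of-bounds access. list_entry_is_head() detects loop exhaustion without dereferencing the bogus entry. A self-contained sketch of the idiom, with illustrative names:

	#include <linux/types.h>
	#include <linux/errno.h>
	#include <linux/list.h>

	struct publ {
		struct list_head node;
		u32 key;
	};

	static int resume_after_key(struct list_head *head, u32 last_key)
	{
		struct publ *p;

		list_for_each_entry(p, head, node)
			if (p->key == last_key)
				break;
		/* was: if (p->key != last_key) -- unsafe when the walk
		 * completed, because p then aliases the list head
		 */
		if (list_entry_is_head(p, head, node))
			return -EPIPE;
		return 0;
	}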
tools/testing/selftests/bpf/prog_tests/timer_crash.c (new file, 32 lines)
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "timer_crash.skel.h"
+
+enum {
+	MODE_ARRAY,
+	MODE_HASH,
+};
+
+static void test_timer_crash_mode(int mode)
+{
+	struct timer_crash *skel;
+
+	skel = timer_crash__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load"))
+		return;
+	skel->bss->pid = getpid();
+	skel->bss->crash_map = mode;
+	if (!ASSERT_OK(timer_crash__attach(skel), "timer_crash__attach"))
+		goto end;
+	usleep(1);
+end:
+	timer_crash__destroy(skel);
+}
+
+void test_timer_crash(void)
+{
+	if (test__start_subtest("array"))
+		test_timer_crash_mode(MODE_ARRAY);
+	if (test__start_subtest("hash"))
+		test_timer_crash_mode(MODE_HASH);
+}
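Assuming the usual BPF selftest flow, these subtests can be run on their own via the test_progs runner:

	./test_progs -t timer_crash

Each mode sets the target pid and map kind in the skeleton's .bss, then calls usleep(1) so that the fentry program attached to do_nanosleep (in the BPF object further below) fires in this process.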
@ -235,7 +235,7 @@ SEC("sk_msg1")
|
||||
int bpf_prog4(struct sk_msg_md *msg)
|
||||
{
|
||||
int *bytes, zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5;
|
||||
int *start, *end, *start_push, *end_push, *start_pop, *pop;
|
||||
int *start, *end, *start_push, *end_push, *start_pop, *pop, err = 0;
|
||||
|
||||
bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
|
||||
if (bytes)
|
||||
@ -249,8 +249,11 @@ int bpf_prog4(struct sk_msg_md *msg)
|
||||
bpf_msg_pull_data(msg, *start, *end, 0);
|
||||
start_push = bpf_map_lookup_elem(&sock_bytes, &two);
|
||||
end_push = bpf_map_lookup_elem(&sock_bytes, &three);
|
||||
if (start_push && end_push)
|
||||
bpf_msg_push_data(msg, *start_push, *end_push, 0);
|
||||
if (start_push && end_push) {
|
||||
err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
|
||||
if (err)
|
||||
return SK_DROP;
|
||||
}
|
||||
start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
|
||||
pop = bpf_map_lookup_elem(&sock_bytes, &five);
|
||||
if (start_pop && pop)
|
||||
@ -263,6 +266,7 @@ int bpf_prog6(struct sk_msg_md *msg)
|
||||
{
|
||||
int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, key = 0;
|
||||
int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop, *f;
|
||||
int err = 0;
|
||||
__u64 flags = 0;
|
||||
|
||||
bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
|
||||
@ -279,8 +283,11 @@ int bpf_prog6(struct sk_msg_md *msg)
|
||||
|
||||
start_push = bpf_map_lookup_elem(&sock_bytes, &two);
|
||||
end_push = bpf_map_lookup_elem(&sock_bytes, &three);
|
||||
if (start_push && end_push)
|
||||
bpf_msg_push_data(msg, *start_push, *end_push, 0);
|
||||
if (start_push && end_push) {
|
||||
err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
|
||||
if (err)
|
||||
return SK_DROP;
|
||||
}
|
||||
|
||||
start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
|
||||
pop = bpf_map_lookup_elem(&sock_bytes, &five);
|
||||
@ -338,7 +345,7 @@ SEC("sk_msg5")
|
||||
int bpf_prog10(struct sk_msg_md *msg)
|
||||
{
|
||||
int *bytes, *start, *end, *start_push, *end_push, *start_pop, *pop;
|
||||
int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5;
|
||||
int zero = 0, one = 1, two = 2, three = 3, four = 4, five = 5, err = 0;
|
||||
|
||||
bytes = bpf_map_lookup_elem(&sock_apply_bytes, &zero);
|
||||
if (bytes)
|
||||
@ -352,8 +359,11 @@ int bpf_prog10(struct sk_msg_md *msg)
|
||||
bpf_msg_pull_data(msg, *start, *end, 0);
|
||||
start_push = bpf_map_lookup_elem(&sock_bytes, &two);
|
||||
end_push = bpf_map_lookup_elem(&sock_bytes, &three);
|
||||
if (start_push && end_push)
|
||||
bpf_msg_push_data(msg, *start_push, *end_push, 0);
|
||||
if (start_push && end_push) {
|
||||
err = bpf_msg_push_data(msg, *start_push, *end_push, 0);
|
||||
if (err)
|
||||
return SK_PASS;
|
||||
}
|
||||
start_pop = bpf_map_lookup_elem(&sock_bytes, &four);
|
||||
pop = bpf_map_lookup_elem(&sock_bytes, &five);
|
||||
if (start_pop && pop)
|
||||
|
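All three test_sockmap_kern.h hunks make the same change: bpf_msg_push_data() returns a nonzero error when it cannot grow the message, and the old programs discarded it, so a failed push let the test continue against a layout it never got. The distilled pattern as a standalone sketch (the section name, offsets, and drop-on-error policy are illustrative):

	// SPDX-License-Identifier: GPL-2.0
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("sk_msg")
	int push_and_check(struct sk_msg_md *msg)
	{
		int err;

		/* push 4 bytes of room at offset 0; on failure, drop
		 * rather than continue with an unchanged layout
		 */
		err = bpf_msg_push_data(msg, 0, 4, 0);
		if (err)
			return SK_DROP;
		return SK_PASS;
	}

	char _license[] SEC("license") = "GPL";

Note that bpf_prog10 returns SK_PASS rather than SK_DROP on error, per the hunk above.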
tools/testing/selftests/bpf/progs/timer_crash.c (new file, 54 lines)
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct map_elem {
+	struct bpf_timer timer;
+	struct bpf_spin_lock lock;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct map_elem);
+} amap SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct map_elem);
+} hmap SEC(".maps");
+
+int pid = 0;
+int crash_map = 0; /* 0 for amap, 1 for hmap */
+
+SEC("fentry/do_nanosleep")
+int sys_enter(void *ctx)
+{
+	struct map_elem *e, value = {};
+	void *map = crash_map ? (void *)&hmap : (void *)&amap;
+
+	if (bpf_get_current_task_btf()->tgid != pid)
+		return 0;
+
+	*(void **)&value = (void *)0xdeadcaf3;
+
+	bpf_map_update_elem(map, &(int){0}, &value, 0);
+	/* For array map, doing bpf_map_update_elem will do a
+	 * check_and_free_timer_in_array, which will trigger the crash if timer
+	 * pointer was overwritten, for hmap we need to use bpf_timer_cancel.
+	 */
+	if (crash_map == 1) {
+		e = bpf_map_lookup_elem(map, &(int){0});
+		if (!e)
+			return 0;
+		bpf_timer_cancel(&e->timer);
+	}
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
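The crash this object reproduces lives in the kernel's copy_map_value(), which must skip two special regions of the value (the bpf_timer and the bpf_spin_lock) when copying user-supplied bytes in. The broken version assumed a fixed order for the two holes, so a layout like map_elem above, timer first, let bpf_map_update_elem() overwrite the kernel's timer pointer with 0xdeadcaf3, which the update (array case) or bpf_timer_cancel() (hash case) then dereferences. A hypothetical userspace model of the corrected copy, not the kernel code itself:

	#include <string.h>

	/* Copy @size bytes from @src to @dst, leaving two holes untouched,
	 * whichever order the holes appear in; the bug was assuming a
	 * fixed order.
	 */
	static void copy_skipping_two(void *dst, const void *src, size_t size,
				      size_t off1, size_t sz1,
				      size_t off2, size_t sz2)
	{
		if (off1 > off2) {
			size_t t;

			t = off1; off1 = off2; off2 = t;
			t = sz1; sz1 = sz2; sz2 = t;
		}
		memcpy(dst, src, off1);
		memcpy((char *)dst + off1 + sz1,
		       (const char *)src + off1 + sz1,
		       off2 - (off1 + sz1));
		memcpy((char *)dst + off2 + sz2,
		       (const char *)src + off2 + sz2,
		       size - (off2 + sz2));
	}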
tools/testing/selftests/net/mptcp/diag.sh
@@ -71,6 +71,36 @@ chk_msk_remote_key_nr()
 	__chk_nr "grep -c remote_key" $*
 }
 
+# $1: ns, $2: port
+wait_local_port_listen()
+{
+	local listener_ns="${1}"
+	local port="${2}"
+
+	local port_hex i
+
+	port_hex="$(printf "%04X" "${port}")"
+	for i in $(seq 10); do
+		ip netns exec "${listener_ns}" cat /proc/net/tcp | \
+			awk "BEGIN {rc=1} {if (\$2 ~ /:${port_hex}\$/ && \$4 ~ /0A/) {rc=0; exit}} END {exit rc}" &&
+			break
+		sleep 0.1
+	done
+}
+
+wait_connected()
+{
+	local listener_ns="${1}"
+	local port="${2}"
+
+	local port_hex i
+
+	port_hex="$(printf "%04X" "${port}")"
+	for i in $(seq 10); do
+		ip netns exec ${listener_ns} grep -q " 0100007F:${port_hex} " /proc/net/tcp && break
+		sleep 0.1
+	done
+}
+
 trap cleanup EXIT
 ip netns add $ns
@@ -81,15 +111,15 @@ echo "a" | \
 		ip netns exec $ns \
 			./mptcp_connect -p 10000 -l -t ${timeout_poll} \
 				0.0.0.0 >/dev/null &
-sleep 0.1
+wait_local_port_listen $ns 10000
 chk_msk_nr 0 "no msk on netns creation"
 
 echo "b" | \
 	timeout ${timeout_test} \
 		ip netns exec $ns \
-			./mptcp_connect -p 10000 -j -t ${timeout_poll} \
+			./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} \
 				127.0.0.1 >/dev/null &
-sleep 0.1
+wait_connected $ns 10000
 chk_msk_nr 2 "after MPC handshake "
 chk_msk_remote_key_nr 2 "....chk remote_key"
 chk_msk_fallback_nr 0 "....chk no fallback"
@@ -101,13 +131,13 @@ echo "a" | \
 		ip netns exec $ns \
 			./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \
 				0.0.0.0 >/dev/null &
-sleep 0.1
+wait_local_port_listen $ns 10001
 echo "b" | \
 	timeout ${timeout_test} \
 		ip netns exec $ns \
-			./mptcp_connect -p 10001 -j -t ${timeout_poll} \
+			./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} \
 				127.0.0.1 >/dev/null &
-sleep 0.1
+wait_connected $ns 10001
 chk_msk_fallback_nr 1 "check fallback"
 flush_pids
 
@@ -119,7 +149,7 @@ for I in `seq 1 $NR_CLIENTS`; do
 		./mptcp_connect -p $((I+10001)) -l -w 10 \
 			-t ${timeout_poll} 0.0.0.0 >/dev/null &
 done
-sleep 0.1
+wait_local_port_listen $ns $((NR_CLIENTS + 10001))
 
 for I in `seq 1 $NR_CLIENTS`; do
 	echo "b" | \
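The awk one-liner in wait_local_port_listen() is dense: in /proc/net/tcp, field 2 is the local address as ADDR:PORT in hex, and field 4 is the socket state, where 0A is TCP_LISTEN, so the script polls until some socket listens on the given port. For illustration only (the selftest itself stays in shell), a hypothetical C equivalent of one polling iteration:

	#include <stdio.h>

	/* Return 1 if an IPv4 socket is in TCP_LISTEN (0x0A) on @port. */
	static int port_is_listening(unsigned int port)
	{
		char line[256], rem[64];
		unsigned int lport, state;
		FILE *f = fopen("/proc/net/tcp", "r");
		int found = 0;

		if (!f)
			return 0;
		if (!fgets(line, sizeof(line), f)) {	/* skip header row */
			fclose(f);
			return 0;
		}
		while (fgets(line, sizeof(line), f)) {
			/* "sl: local_address rem_address st ..." in hex */
			if (sscanf(line, " %*d: %*8[0-9A-Fa-f]:%x %63s %x",
				   &lport, rem, &state) != 3)
				continue;
			if (lport == port && state == 0x0a) {
				found = 1;
				break;
			}
		}
		fclose(f);
		return found;
	}

wait_connected()'s grep for " 0100007F:${port_hex} " works the same way: it waits until a socket whose address is 127.0.0.1 (0100007F in little-endian hex) on that port appears at all.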
tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -660,6 +660,7 @@ chk_join_nr()
 	local ack_nr=$4
 	local count
 	local dump_stats
+	local with_cookie
 
 	printf "%02u %-36s %s" "$TEST_COUNT" "$msg" "syn"
 	count=`ip netns exec $ns1 nstat -as | grep MPTcpExtMPJoinSynRx | awk '{print $2}'`
@@ -673,12 +674,20 @@ chk_join_nr()
 	fi
 
 	echo -n " - synack"
+	with_cookie=`ip netns exec $ns2 sysctl -n net.ipv4.tcp_syncookies`
 	count=`ip netns exec $ns2 nstat -as | grep MPTcpExtMPJoinSynAckRx | awk '{print $2}'`
 	[ -z "$count" ] && count=0
 	if [ "$count" != "$syn_ack_nr" ]; then
-		echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr"
-		ret=1
-		dump_stats=1
+		# simult connections exceeding the limit with cookie enabled could go up to
+		# synack validation as the conn limit can be enforced reliably only after
+		# the subflow creation
+		if [ "$with_cookie" = 2 ] && [ "$count" -gt "$syn_ack_nr" ] && [ "$count" -le "$syn_nr" ]; then
+			echo -n "[ ok ]"
+		else
+			echo "[fail] got $count JOIN[s] synack expected $syn_ack_nr"
+			ret=1
+			dump_stats=1
+		fi
 	else
 		echo -n "[ ok ]"
 	fi
@@ -752,11 +761,17 @@ chk_add_nr()
 	local mis_ack_nr=${8:-0}
 	local count
 	local dump_stats
+	local timeout
+
+	timeout=`ip netns exec $ns1 sysctl -n net.mptcp.add_addr_timeout`
 
 	printf "%-39s %s" " " "add"
-	count=`ip netns exec $ns2 nstat -as | grep MPTcpExtAddAddr | awk '{print $2}'`
+	count=`ip netns exec $ns2 nstat -as MPTcpExtAddAddr | grep MPTcpExtAddAddr | awk '{print $2}'`
 	[ -z "$count" ] && count=0
-	if [ "$count" != "$add_nr" ]; then
+
+	# if the test configured a short timeout tolerate greater then expected
+	# add addrs options, due to retransmissions
+	if [ "$count" != "$add_nr" ] && [ "$timeout" -gt 1 -o "$count" -lt "$add_nr" ]; then
 		echo "[fail] got $count ADD_ADDR[s] expected $add_nr"
 		ret=1
 		dump_stats=1
@@ -961,7 +976,7 @@ wait_for_tw()
 	local ns=$1
 
 	while [ $time -lt $timeout_ms ]; do
-		local cnt=$(ip netns exec $ns ss -t state time-wait |wc -l)
+		local cnt=$(ip netns exec $ns nstat -as TcpAttemptFails | grep TcpAttemptFails | awk '{print $2}')
 
 		[ "$cnt" = 1 ] && return 1
 		time=$((time + 100))
@@ -1158,7 +1173,10 @@ signal_address_tests()
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags signal
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags signal
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.4.2 flags signal
-	run_tests $ns1 $ns2 10.0.1.1
+
+	# the peer could possibly miss some addr notification, allow retransmission
+	ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1
+	run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow
 	chk_join_nr "signal addresses race test" 3 3 3
 
 	# the server will not signal the address terminating
|
Loading…
Reference in New Issue
Block a user