
Merge tag 'net-6.11-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from can, bluetooth and wireless.

  No known regressions at this point. Another calm week, but chances are
  that has more to do with vacation season than the quality of our work.

  Current release - new code bugs:

   - smc: prevent NULL pointer dereference in txopt_get

   - eth: ti: am65-cpsw: a number of XDP-related fixes

  Previous releases - regressions:

   - Revert "Bluetooth: MGMT/SMP: Fix address type when using SMP over
     BREDR/LE", it breaks existing user space

   - Bluetooth: qca: if memdump doesn't work, re-enable IBS to avoid
     later problems with suspend

   - can: mcp251x: fix deadlock if an interrupt occurs during
     mcp251x_open

   - eth: r8152: fix the firmware communication error due to use of bulk
     write

   - ptp: ocp: fix serial port information export

   - eth: igb: fix not clearing TimeSync interrupts for 82580

   - Revert "wifi: ath11k: support hibernation", fix suspend on Lenovo

  Previous releases - always broken:

   - eth: intel: fix crashes and bugs when reconfiguration and resets
     happen in parallel

   - wifi: ath11k: fix NULL dereference in ath11k_mac_get_eirp_power()

  Misc:

   - docs: netdev: document guidance on cleanup.h"

* tag 'net-6.11-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (61 commits)
  ila: call nf_unregister_net_hooks() sooner
  tools/net/ynl: fix cli.py --subscribe feature
  MAINTAINERS: fix ptp ocp driver maintainers address
  selftests: net: enable bind tests
  net: dsa: vsc73xx: fix possible subblocks range of CAPT block
  sched: sch_cake: fix bulk flow accounting logic for host fairness
  docs: netdev: document guidance on cleanup.h
  net: xilinx: axienet: Fix race in axienet_stop
  net: bridge: br_fdb_external_learn_add(): always set EXT_LEARN
  r8152: fix the firmware doesn't work
  fou: Fix null-ptr-deref in GRO.
  bareudp: Fix device stats updates.
  net: mana: Fix error handling in mana_create_txq/rxq's NAPI cleanup
  bpf, net: Fix a potential race in do_sock_getsockopt()
  net: dqs: Do not use extern for unused dql_group
  sch/netem: fix use after free in netem_dequeue
  usbnet: modern method to get random MAC
  MAINTAINERS: wifi: cw1200: add net-cw1200.h
  ice: do not bring the VSI up, if it was down before the XDP setup
  ice: remove ICE_CFG_BUSY locking from AF_XDP code
  ...
Linus Torvalds 2024-09-05 17:08:01 -07:00
commit d759ee240d
62 changed files with 867 additions and 681 deletions


@ -258,24 +258,29 @@ Description: (RW) When retrieving the PHC with the PTP SYS_OFFSET_EXTENDED
the estimated point where the FPGA latches the PHC time. This
value may be changed by writing an unsigned integer.
What: /sys/class/timecard/ocpN/ttyGNSS
What: /sys/class/timecard/ocpN/ttyGNSS2
Date: September 2021
Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
Description: These optional attributes link to the TTY serial ports
associated with the GNSS devices.
What: /sys/class/timecard/ocpN/tty
Date: August 2024
Contact: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Description: (RO) Directory containing the sysfs nodes for TTY attributes
What: /sys/class/timecard/ocpN/ttyMAC
Date: September 2021
What: /sys/class/timecard/ocpN/tty/ttyGNSS
What: /sys/class/timecard/ocpN/tty/ttyGNSS2
Date: August 2024
Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
Description: This optional attribute links to the TTY serial port
associated with the Miniature Atomic Clock.
Description: (RO) These optional attributes contain names of the TTY serial
ports associated with the GNSS devices.
What: /sys/class/timecard/ocpN/ttyNMEA
Date: September 2021
What: /sys/class/timecard/ocpN/tty/ttyMAC
Date: August 2024
Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
Description: This optional attribute links to the TTY serial port
which outputs the PHC time in NMEA ZDA format.
Description: (RO) This optional attribute contains name of the TTY serial
port associated with the Miniature Atomic Clock.
What: /sys/class/timecard/ocpN/tty/ttyNMEA
Date: August 2024
Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
Description: (RO) This optional attribute contains name of the TTY serial
port which outputs the PHC time in NMEA ZDA format.
What: /sys/class/timecard/ocpN/utc_tai_offset
Date: September 2021
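
The effect of the ptp_ocp change documented above is that the ttyGNSS/ttyMAC/ttyNMEA entries move under a tty/ subdirectory and become read-only attributes holding the TTY name, rather than symlinks to the devices. A minimal user-space sketch (an illustration, not part of the patch; it assumes a timecard instance named ocp0) that resolves the GNSS serial port through the new attribute:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[32];
	FILE *f = fopen("/sys/class/timecard/ocp0/tty/ttyGNSS", "r");

	if (!f) {
		perror("ttyGNSS attribute");
		return 1;
	}
	if (!fgets(name, sizeof(name), f)) {
		fclose(f);
		return 1;
	}
	fclose(f);
	name[strcspn(name, "\n")] = '\0';	/* attribute contains the TTY name */
	printf("GNSS receiver is available as /dev/%s\n", name);
	return 0;
}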


@ -375,6 +375,22 @@ When working in existing code which uses nonstandard formatting make
your code follow the most recent guidelines, so that eventually all code
in the domain of netdev is in the preferred format.
Using device-managed and cleanup.h constructs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Netdev remains skeptical about promises of all "auto-cleanup" APIs,
including even ``devm_`` helpers, historically. They are not the preferred
style of implementation, merely an acceptable one.
Use of ``guard()`` is discouraged within any function longer than 20 lines,
``scoped_guard()`` is considered more readable. Using normal lock/unlock is
still (weakly) preferred.
Low level cleanup constructs (such as ``__free()``) can be used when building
APIs and helpers, especially scoped iterators. However, direct use of
``__free()`` within networking core and drivers is discouraged.
Similar guidance applies to declaring variables mid-function.
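
As a rough sketch of the three styles the guidance above weighs against each other (not taken from the document; it assumes the mutex guard class provided via linux/cleanup.h and linux/mutex.h):

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(cfg_lock);
static int cfg_value;

/* Plain lock/unlock: still the (weakly) preferred style. */
static void cfg_set_plain(int v)
{
	mutex_lock(&cfg_lock);
	cfg_value = v;
	mutex_unlock(&cfg_lock);
}

/* scoped_guard(): the critical section stays visually bounded. */
static void cfg_set_scoped(int v)
{
	scoped_guard(mutex, &cfg_lock)
		cfg_value = v;
}

/* guard(): acceptable in short functions; unlocks at end of scope. */
static void cfg_set_guarded(int v)
{
	guard(mutex)(&cfg_lock);
	cfg_value = v;
}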
Resending after review
~~~~~~~~~~~~~~~~~~~~~~


@ -5956,6 +5956,7 @@ F: Documentation/process/cve.rst
CW1200 WLAN driver
S: Orphan
F: drivers/net/wireless/st/cw1200/
F: include/linux/platform_data/net-cw1200.h
CX18 VIDEO4LINUX DRIVER
M: Andy Walls <awalls@md.metrocast.net>
@ -15905,6 +15906,8 @@ F: include/uapi/linux/ethtool_netlink.h
F: include/uapi/linux/if_*
F: include/uapi/linux/netdev*
F: tools/testing/selftests/drivers/net/
X: Documentation/devicetree/bindings/net/bluetooth/
X: Documentation/devicetree/bindings/net/wireless/
X: drivers/net/wireless/
NETWORKING DRIVERS (WIRELESS)
@ -17130,7 +17133,7 @@ F: include/dt-bindings/
OPENCOMPUTE PTP CLOCK DRIVER
M: Jonathan Lemon <jonathan.lemon@gmail.com>
M: Vadim Fedorenko <vadfed@linux.dev>
M: Vadim Fedorenko <vadim.fedorenko@linux.dev>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/ptp/ptp_ocp.c


@ -1091,6 +1091,7 @@ static void qca_controller_memdump(struct work_struct *work)
qca->memdump_state = QCA_MEMDUMP_COLLECTED;
cancel_delayed_work(&qca->ctrl_memdump_timeout);
clear_bit(QCA_MEMDUMP_COLLECTION, &qca->flags);
clear_bit(QCA_IBS_DISABLED, &qca->flags);
mutex_unlock(&qca->hci_memdump_lock);
return;
}


@ -83,7 +83,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion,
sizeof(ipversion))) {
bareudp->dev->stats.rx_dropped++;
DEV_STATS_INC(bareudp->dev, rx_dropped);
goto drop;
}
ipversion >>= 4;
@ -93,7 +93,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
} else if (ipversion == 6 && bareudp->multi_proto_mode) {
proto = htons(ETH_P_IPV6);
} else {
bareudp->dev->stats.rx_dropped++;
DEV_STATS_INC(bareudp->dev, rx_dropped);
goto drop;
}
} else if (bareudp->ethertype == htons(ETH_P_MPLS_UC)) {
@ -107,7 +107,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
ipv4_is_multicast(tunnel_hdr->daddr)) {
proto = htons(ETH_P_MPLS_MC);
} else {
bareudp->dev->stats.rx_dropped++;
DEV_STATS_INC(bareudp->dev, rx_dropped);
goto drop;
}
} else {
@ -123,7 +123,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
(addr_type & IPV6_ADDR_MULTICAST)) {
proto = htons(ETH_P_MPLS_MC);
} else {
bareudp->dev->stats.rx_dropped++;
DEV_STATS_INC(bareudp->dev, rx_dropped);
goto drop;
}
}
@ -135,7 +135,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
proto,
!net_eq(bareudp->net,
dev_net(bareudp->dev)))) {
bareudp->dev->stats.rx_dropped++;
DEV_STATS_INC(bareudp->dev, rx_dropped);
goto drop;
}
@ -143,7 +143,7 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
tun_dst = udp_tun_rx_dst(skb, family, key, 0, 0);
if (!tun_dst) {
bareudp->dev->stats.rx_dropped++;
DEV_STATS_INC(bareudp->dev, rx_dropped);
goto drop;
}
skb_dst_set(skb, &tun_dst->dst);
@ -169,8 +169,8 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
&((struct ipv6hdr *)oiph)->saddr);
}
if (err > 1) {
++bareudp->dev->stats.rx_frame_errors;
++bareudp->dev->stats.rx_errors;
DEV_STATS_INC(bareudp->dev, rx_frame_errors);
DEV_STATS_INC(bareudp->dev, rx_errors);
goto drop;
}
}
@ -467,11 +467,11 @@ tx_error:
dev_kfree_skb(skb);
if (err == -ELOOP)
dev->stats.collisions++;
DEV_STATS_INC(dev, collisions);
else if (err == -ENETUNREACH)
dev->stats.tx_carrier_errors++;
DEV_STATS_INC(dev, tx_carrier_errors);
dev->stats.tx_errors++;
DEV_STATS_INC(dev, tx_errors);
return NETDEV_TX_OK;
}
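
The bareudp hunks above replace open-coded dev->stats increments with DEV_STATS_INC(). These counters can be bumped from several packet-processing contexts at once with no lock held, so a plain ++ on a shared unsigned long can lose updates; the helper performs the increment atomically. Roughly (an approximation for orientation only; the authoritative definitions live in include/linux/netdevice.h):

#define DEV_STATS_INC(DEV, FIELD)	atomic_long_inc(&(DEV)->stats.__##FIELD)
#define DEV_STATS_ADD(DEV, FIELD, VAL)	atomic_long_add((VAL), &(DEV)->stats.__##FIELD)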


@ -1686,6 +1686,7 @@ static irqreturn_t kvaser_pciefd_irq_handler(int irq, void *dev)
const struct kvaser_pciefd_irq_mask *irq_mask = pcie->driver_data->irq_mask;
u32 pci_irq = ioread32(KVASER_PCIEFD_PCI_IRQ_ADDR(pcie));
u32 srb_irq = 0;
u32 srb_release = 0;
int i;
if (!(pci_irq & irq_mask->all))
@ -1699,17 +1700,14 @@ static irqreturn_t kvaser_pciefd_irq_handler(int irq, void *dev)
kvaser_pciefd_transmit_irq(pcie->can[i]);
}
if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0) {
/* Reset DMA buffer 0, may trigger new interrupt */
iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0,
KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
}
if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0)
srb_release |= KVASER_PCIEFD_SRB_CMD_RDB0;
if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1) {
/* Reset DMA buffer 1, may trigger new interrupt */
iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1,
KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
}
if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1)
srb_release |= KVASER_PCIEFD_SRB_CMD_RDB1;
if (srb_release)
iowrite32(srb_release, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG);
return IRQ_HANDLED;
}


@ -483,11 +483,10 @@ static inline void m_can_disable_all_interrupts(struct m_can_classdev *cdev)
{
m_can_coalescing_disable(cdev);
m_can_write(cdev, M_CAN_ILE, 0x0);
cdev->active_interrupts = 0x0;
if (!cdev->net->irq) {
dev_dbg(cdev->dev, "Stop hrtimer\n");
hrtimer_cancel(&cdev->hrtimer);
hrtimer_try_to_cancel(&cdev->hrtimer);
}
}
@ -1037,22 +1036,6 @@ end:
return work_done;
}
static int m_can_rx_peripheral(struct net_device *dev, u32 irqstatus)
{
struct m_can_classdev *cdev = netdev_priv(dev);
int work_done;
work_done = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, irqstatus);
/* Don't re-enable interrupts if the driver had a fatal error
* (e.g., FIFO read failure).
*/
if (work_done < 0)
m_can_disable_all_interrupts(cdev);
return work_done;
}
static int m_can_poll(struct napi_struct *napi, int quota)
{
struct net_device *dev = napi->dev;
@ -1217,16 +1200,18 @@ static void m_can_coalescing_update(struct m_can_classdev *cdev, u32 ir)
HRTIMER_MODE_REL);
}
static irqreturn_t m_can_isr(int irq, void *dev_id)
/* This interrupt handler is called either from the interrupt thread or a
* hrtimer. This has implications, e.g. a blocking cancel of the timer is
* not possible from here.
*/
static int m_can_interrupt_handler(struct m_can_classdev *cdev)
{
struct net_device *dev = (struct net_device *)dev_id;
struct m_can_classdev *cdev = netdev_priv(dev);
struct net_device *dev = cdev->net;
u32 ir;
int ret;
if (pm_runtime_suspended(cdev->dev)) {
m_can_coalescing_disable(cdev);
if (pm_runtime_suspended(cdev->dev))
return IRQ_NONE;
}
ir = m_can_read(cdev, M_CAN_IR);
m_can_coalescing_update(cdev, ir);
@ -1250,11 +1235,9 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
m_can_disable_all_interrupts(cdev);
napi_schedule(&cdev->napi);
} else {
int pkts;
pkts = m_can_rx_peripheral(dev, ir);
if (pkts < 0)
goto out_fail;
ret = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, ir);
if (ret < 0)
return ret;
}
}
@ -1272,8 +1255,9 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
} else {
if (ir & (IR_TEFN | IR_TEFW)) {
/* New TX FIFO Element arrived */
if (m_can_echo_tx_event(dev) != 0)
goto out_fail;
ret = m_can_echo_tx_event(dev);
if (ret != 0)
return ret;
}
}
@ -1281,16 +1265,31 @@ static irqreturn_t m_can_isr(int irq, void *dev_id)
can_rx_offload_threaded_irq_finish(&cdev->offload);
return IRQ_HANDLED;
}
out_fail:
static irqreturn_t m_can_isr(int irq, void *dev_id)
{
struct net_device *dev = (struct net_device *)dev_id;
struct m_can_classdev *cdev = netdev_priv(dev);
int ret;
ret = m_can_interrupt_handler(cdev);
if (ret < 0) {
m_can_disable_all_interrupts(cdev);
return IRQ_HANDLED;
}
return ret;
}
static enum hrtimer_restart m_can_coalescing_timer(struct hrtimer *timer)
{
struct m_can_classdev *cdev = container_of(timer, struct m_can_classdev, hrtimer);
if (cdev->can.state == CAN_STATE_BUS_OFF ||
cdev->can.state == CAN_STATE_STOPPED)
return HRTIMER_NORESTART;
irq_wake_thread(cdev->net->irq, cdev->net);
return HRTIMER_NORESTART;
@ -1542,6 +1541,7 @@ static int m_can_chip_config(struct net_device *dev)
else
interrupts &= ~(IR_ERR_LEC_31X);
}
cdev->active_interrupts = 0;
m_can_interrupt_enable(cdev, interrupts);
/* route all interrupts to INT0 */
@ -1991,8 +1991,17 @@ static enum hrtimer_restart hrtimer_callback(struct hrtimer *timer)
{
struct m_can_classdev *cdev = container_of(timer, struct
m_can_classdev, hrtimer);
int ret;
m_can_isr(0, cdev->net);
if (cdev->can.state == CAN_STATE_BUS_OFF ||
cdev->can.state == CAN_STATE_STOPPED)
return HRTIMER_NORESTART;
ret = m_can_interrupt_handler(cdev);
/* On error or if napi is scheduled to read, stop the timer */
if (ret < 0 || napi_is_scheduled(&cdev->napi))
return HRTIMER_NORESTART;
hrtimer_forward_now(timer, ms_to_ktime(HRTIMER_POLL_INTERVAL_MS));
@ -2052,7 +2061,7 @@ static int m_can_open(struct net_device *dev)
/* start the m_can controller */
err = m_can_start(dev);
if (err)
goto exit_irq_fail;
goto exit_start_fail;
if (!cdev->is_peripheral)
napi_enable(&cdev->napi);
@ -2061,6 +2070,9 @@ static int m_can_open(struct net_device *dev)
return 0;
exit_start_fail:
if (cdev->is_peripheral || dev->irq)
free_irq(dev->irq, dev);
exit_irq_fail:
if (cdev->is_peripheral)
destroy_workqueue(cdev->tx_wq);
@ -2172,7 +2184,7 @@ static int m_can_set_coalesce(struct net_device *dev,
return 0;
}
static const struct ethtool_ops m_can_ethtool_ops = {
static const struct ethtool_ops m_can_ethtool_ops_coalescing = {
.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ |
ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ |
ETHTOOL_COALESCE_TX_USECS_IRQ |
@ -2183,18 +2195,20 @@ static const struct ethtool_ops m_can_ethtool_ops = {
.set_coalesce = m_can_set_coalesce,
};
static const struct ethtool_ops m_can_ethtool_ops_polling = {
static const struct ethtool_ops m_can_ethtool_ops = {
.get_ts_info = ethtool_op_get_ts_info,
};
static int register_m_can_dev(struct net_device *dev)
static int register_m_can_dev(struct m_can_classdev *cdev)
{
struct net_device *dev = cdev->net;
dev->flags |= IFF_ECHO; /* we support local echo */
dev->netdev_ops = &m_can_netdev_ops;
if (dev->irq)
dev->ethtool_ops = &m_can_ethtool_ops;
if (dev->irq && cdev->is_peripheral)
dev->ethtool_ops = &m_can_ethtool_ops_coalescing;
else
dev->ethtool_ops = &m_can_ethtool_ops_polling;
dev->ethtool_ops = &m_can_ethtool_ops;
return register_candev(dev);
}
@ -2380,7 +2394,7 @@ int m_can_class_register(struct m_can_classdev *cdev)
if (ret)
goto rx_offload_del;
ret = register_m_can_dev(cdev->net);
ret = register_m_can_dev(cdev);
if (ret) {
dev_err(cdev->dev, "registering %s failed (err=%d)\n",
cdev->net->name, ret);
@ -2427,12 +2441,15 @@ int m_can_class_suspend(struct device *dev)
netif_device_detach(ndev);
/* leave the chip running with rx interrupt enabled if it is
* used as a wake-up source.
* used as a wake-up source. Coalescing needs to be reset then,
* the timer is cancelled here, interrupts are done in resume.
*/
if (cdev->pm_wake_source)
if (cdev->pm_wake_source) {
hrtimer_cancel(&cdev->hrtimer);
m_can_write(cdev, M_CAN_IE, IR_RF0N);
else
} else {
m_can_stop(ndev);
}
m_can_clk_stop(cdev);
}
@ -2462,6 +2479,13 @@ int m_can_class_resume(struct device *dev)
return ret;
if (cdev->pm_wake_source) {
/* Restore active interrupts but disable coalescing as
* we may have missed important waterlevel interrupts
* between suspend and resume. Timers are already
* stopped in suspend. Here we enable all interrupts
* again.
*/
cdev->active_interrupts |= IR_RF0N | IR_TEFN;
m_can_write(cdev, M_CAN_IE, cdev->active_interrupts);
} else {
ret = m_can_start(ndev);
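
The reworked m_can code above shares one handler between the threaded interrupt and the hrtimer-based polling mode, which is why the disable path switches to hrtimer_try_to_cancel(): hrtimer_cancel() waits for a running callback, so calling it from a path that can itself be reached from the timer callback would deadlock. A generic sketch of that pattern (hypothetical poller type and helpers, not the driver code):

#include <linux/hrtimer.h>
#include <linux/ktime.h>

struct poller {
	struct hrtimer timer;
	bool (*work)(struct poller *p);	/* returns false to stop polling */
};

static enum hrtimer_restart poll_fn(struct hrtimer *t)
{
	struct poller *p = container_of(t, struct poller, timer);

	if (!p->work(p))
		return HRTIMER_NORESTART;	/* stop from inside the callback */

	hrtimer_forward_now(t, ms_to_ktime(1));
	return HRTIMER_RESTART;
}

static void poller_start(struct poller *p)
{
	hrtimer_init(&p->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	p->timer.function = poll_fn;
	hrtimer_start(&p->timer, ms_to_ktime(1), HRTIMER_MODE_REL);
}

static void poller_stop(struct poller *p)
{
	/* May be reached from poll_fn() via p->work(); must not block. */
	hrtimer_try_to_cancel(&p->timer);
}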


@ -752,7 +752,7 @@ static int mcp251x_hw_wake(struct spi_device *spi)
int ret;
/* Force wakeup interrupt to wake device, but don't execute IST */
disable_irq(spi->irq);
disable_irq_nosync(spi->irq);
mcp251x_write_2regs(spi, CANINTE, CANINTE_WAKIE, CANINTF_WAKIF);
/* Wait for oscillator startup timer after wake up */


@ -97,7 +97,16 @@ void can_ram_get_layout(struct can_ram_layout *layout,
if (ring) {
u8 num_rx_coalesce = 0, num_tx_coalesce = 0;
num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, ring->rx_pending);
/* If the ring parameters have been configured in
* CAN-CC mode, but we are in CAN-FD mode now,
* they might be too big. Use the default CAN-FD values
* in this case.
*/
num_rx = ring->rx_pending;
if (num_rx > layout->max_rx)
num_rx = layout->default_rx;
num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, num_rx);
/* The ethtool doc says:
* To disable coalescing, set usecs = 0 and max_frames = 1.


@ -290,7 +290,7 @@ int mcp251xfd_ring_init(struct mcp251xfd_priv *priv)
const struct mcp251xfd_rx_ring *rx_ring;
u16 base = 0, ram_used;
u8 fifo_nr = 1;
int i;
int err = 0, i;
netdev_reset_queue(priv->ndev);
@ -386,10 +386,18 @@ int mcp251xfd_ring_init(struct mcp251xfd_priv *priv)
netdev_err(priv->ndev,
"Error during ring configuration, using more RAM (%u bytes) than available (%u bytes).\n",
ram_used, MCP251XFD_RAM_SIZE);
return -ENOMEM;
err = -ENOMEM;
}
return 0;
if (priv->tx_obj_num_coalesce_irq &&
priv->tx_obj_num_coalesce_irq * 2 != priv->tx->obj_num) {
netdev_err(priv->ndev,
"Error during ring configuration, number of TEF coalescing buffers (%u) must be half of TEF buffers (%u).\n",
priv->tx_obj_num_coalesce_irq, priv->tx->obj_num);
err = -EINVAL;
}
return err;
}
void mcp251xfd_ring_free(struct mcp251xfd_priv *priv)
@ -469,11 +477,25 @@ int mcp251xfd_ring_alloc(struct mcp251xfd_priv *priv)
/* switching from CAN-2.0 to CAN-FD mode or vice versa */
if (fd_mode != test_bit(MCP251XFD_FLAGS_FD_MODE, priv->flags)) {
const struct ethtool_ringparam ring = {
.rx_pending = priv->rx_obj_num,
.tx_pending = priv->tx->obj_num,
};
const struct ethtool_coalesce ec = {
.rx_coalesce_usecs_irq = priv->rx_coalesce_usecs_irq,
.rx_max_coalesced_frames_irq = priv->rx_obj_num_coalesce_irq,
.tx_coalesce_usecs_irq = priv->tx_coalesce_usecs_irq,
.tx_max_coalesced_frames_irq = priv->tx_obj_num_coalesce_irq,
};
struct can_ram_layout layout;
can_ram_get_layout(&layout, &mcp251xfd_ram_config, NULL, NULL, fd_mode);
priv->rx_obj_num = layout.default_rx;
tx_ring->obj_num = layout.default_tx;
can_ram_get_layout(&layout, &mcp251xfd_ram_config, &ring, &ec, fd_mode);
priv->rx_obj_num = layout.cur_rx;
priv->rx_obj_num_coalesce_irq = layout.rx_coalesce;
tx_ring->obj_num = layout.cur_tx;
priv->tx_obj_num_coalesce_irq = layout.tx_coalesce;
}
if (fd_mode) {


@ -36,7 +36,7 @@
#define VSC73XX_BLOCK_ANALYZER 0x2 /* Only subblock 0 */
#define VSC73XX_BLOCK_MII 0x3 /* Subblocks 0 and 1 */
#define VSC73XX_BLOCK_MEMINIT 0x3 /* Only subblock 2 */
#define VSC73XX_BLOCK_CAPTURE 0x4 /* Only subblock 2 */
#define VSC73XX_BLOCK_CAPTURE 0x4 /* Subblocks 0-4, 6, 7 */
#define VSC73XX_BLOCK_ARBITER 0x5 /* Only subblock 0 */
#define VSC73XX_BLOCK_SYSTEM 0x7 /* Only subblock 0 */
@ -410,13 +410,19 @@ int vsc73xx_is_addr_valid(u8 block, u8 subblock)
break;
case VSC73XX_BLOCK_MII:
case VSC73XX_BLOCK_CAPTURE:
case VSC73XX_BLOCK_ARBITER:
switch (subblock) {
case 0 ... 1:
return 1;
}
break;
case VSC73XX_BLOCK_CAPTURE:
switch (subblock) {
case 0 ... 4:
case 6 ... 7:
return 1;
}
break;
}
return 0;


@ -318,6 +318,7 @@ enum ice_vsi_state {
ICE_VSI_UMAC_FLTR_CHANGED,
ICE_VSI_MMAC_FLTR_CHANGED,
ICE_VSI_PROMISC_CHANGED,
ICE_VSI_REBUILD_PENDING,
ICE_VSI_STATE_NBITS /* must be last */
};
@ -411,6 +412,7 @@ struct ice_vsi {
struct ice_tx_ring **xdp_rings; /* XDP ring array */
u16 num_xdp_txq; /* Used XDP queues */
u8 xdp_mapping_mode; /* ICE_MAP_MODE_[CONTIG|SCATTER] */
struct mutex xdp_state_lock;
struct net_device **target_netdevs;


@ -190,16 +190,11 @@ static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)
}
q_vector = vsi->q_vectors[v_idx];
ice_for_each_tx_ring(tx_ring, q_vector->tx) {
ice_queue_set_napi(vsi, tx_ring->q_index, NETDEV_QUEUE_TYPE_TX,
NULL);
ice_for_each_tx_ring(tx_ring, vsi->q_vectors[v_idx]->tx)
tx_ring->q_vector = NULL;
}
ice_for_each_rx_ring(rx_ring, q_vector->rx) {
ice_queue_set_napi(vsi, rx_ring->q_index, NETDEV_QUEUE_TYPE_RX,
NULL);
ice_for_each_rx_ring(rx_ring, vsi->q_vectors[v_idx]->rx)
rx_ring->q_vector = NULL;
}
/* only VSI with an associated netdev is set up with NAPI */
if (vsi->netdev)


@ -447,6 +447,7 @@ static void ice_vsi_free(struct ice_vsi *vsi)
ice_vsi_free_stats(vsi);
ice_vsi_free_arrays(vsi);
mutex_destroy(&vsi->xdp_state_lock);
mutex_unlock(&pf->sw_mutex);
devm_kfree(dev, vsi);
}
@ -626,6 +627,8 @@ static struct ice_vsi *ice_vsi_alloc(struct ice_pf *pf)
pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi,
pf->next_vsi);
mutex_init(&vsi->xdp_state_lock);
unlock_pf:
mutex_unlock(&pf->sw_mutex);
return vsi;
@ -2286,9 +2289,6 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
ice_vsi_map_rings_to_vectors(vsi);
/* Associate q_vector rings to napi */
ice_vsi_set_napi_queues(vsi);
vsi->stat_offsets_loaded = false;
/* ICE_VSI_CTRL does not need RSS so skip RSS processing */
@ -2426,7 +2426,7 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
vsi->vsi_num, err);
if (ice_is_xdp_ena_vsi(vsi))
if (vsi->xdp_rings)
/* return value check can be skipped here, it always returns
* 0 if reset is in progress
*/
@ -2528,7 +2528,7 @@ static void ice_vsi_release_msix(struct ice_vsi *vsi)
for (q = 0; q < q_vector->num_ring_tx; q++) {
ice_write_itr(&q_vector->tx, 0);
wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), 0);
if (ice_is_xdp_ena_vsi(vsi)) {
if (vsi->xdp_rings) {
u32 xdp_txq = txq + vsi->num_xdp_txq;
wr32(hw, QINT_TQCTL(vsi->txq_map[xdp_txq]), 0);
@ -2628,6 +2628,7 @@ void ice_vsi_close(struct ice_vsi *vsi)
if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state))
ice_down(vsi);
ice_vsi_clear_napi_queues(vsi);
ice_vsi_free_irq(vsi);
ice_vsi_free_tx_rings(vsi);
ice_vsi_free_rx_rings(vsi);
@ -2671,8 +2672,7 @@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked)
*/
void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
{
if (test_bit(ICE_VSI_DOWN, vsi->state))
return;
bool already_down = test_bit(ICE_VSI_DOWN, vsi->state);
set_bit(ICE_VSI_NEEDS_RESTART, vsi->state);
@ -2680,134 +2680,70 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
if (netif_running(vsi->netdev)) {
if (!locked)
rtnl_lock();
already_down = test_bit(ICE_VSI_DOWN, vsi->state);
if (!already_down)
ice_vsi_close(vsi);
if (!locked)
rtnl_unlock();
} else {
} else if (!already_down) {
ice_vsi_close(vsi);
}
} else if (vsi->type == ICE_VSI_CTRL) {
} else if (vsi->type == ICE_VSI_CTRL && !already_down) {
ice_vsi_close(vsi);
}
}
/**
* __ice_queue_set_napi - Set the napi instance for the queue
* @dev: device to which NAPI and queue belong
* @queue_index: Index of queue
* @type: queue type as RX or TX
* @napi: NAPI context
* @locked: is the rtnl_lock already held
*
* Set the napi instance for the queue. Caller indicates the lock status.
*/
static void
__ice_queue_set_napi(struct net_device *dev, unsigned int queue_index,
enum netdev_queue_type type, struct napi_struct *napi,
bool locked)
{
if (!locked)
rtnl_lock();
netif_queue_set_napi(dev, queue_index, type, napi);
if (!locked)
rtnl_unlock();
}
/**
* ice_queue_set_napi - Set the napi instance for the queue
* @vsi: VSI being configured
* @queue_index: Index of queue
* @type: queue type as RX or TX
* @napi: NAPI context
*
* Set the napi instance for the queue. The rtnl lock state is derived from the
* execution path.
*/
void
ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
enum netdev_queue_type type, struct napi_struct *napi)
{
struct ice_pf *pf = vsi->back;
if (!vsi->netdev)
return;
if (current_work() == &pf->serv_task ||
test_bit(ICE_PREPARED_FOR_RESET, pf->state) ||
test_bit(ICE_DOWN, pf->state) ||
test_bit(ICE_SUSPENDED, pf->state))
__ice_queue_set_napi(vsi->netdev, queue_index, type, napi,
false);
else
__ice_queue_set_napi(vsi->netdev, queue_index, type, napi,
true);
}
/**
* __ice_q_vector_set_napi_queues - Map queue[s] associated with the napi
* @q_vector: q_vector pointer
* @locked: is the rtnl_lock already held
*
* Associate the q_vector napi with all the queue[s] on the vector.
* Caller indicates the lock status.
*/
void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked)
{
struct ice_rx_ring *rx_ring;
struct ice_tx_ring *tx_ring;
ice_for_each_rx_ring(rx_ring, q_vector->rx)
__ice_queue_set_napi(q_vector->vsi->netdev, rx_ring->q_index,
NETDEV_QUEUE_TYPE_RX, &q_vector->napi,
locked);
ice_for_each_tx_ring(tx_ring, q_vector->tx)
__ice_queue_set_napi(q_vector->vsi->netdev, tx_ring->q_index,
NETDEV_QUEUE_TYPE_TX, &q_vector->napi,
locked);
/* Also set the interrupt number for the NAPI */
netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
}
/**
* ice_q_vector_set_napi_queues - Map queue[s] associated with the napi
* @q_vector: q_vector pointer
*
* Associate the q_vector napi with all the queue[s] on the vector
*/
void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector)
{
struct ice_rx_ring *rx_ring;
struct ice_tx_ring *tx_ring;
ice_for_each_rx_ring(rx_ring, q_vector->rx)
ice_queue_set_napi(q_vector->vsi, rx_ring->q_index,
NETDEV_QUEUE_TYPE_RX, &q_vector->napi);
ice_for_each_tx_ring(tx_ring, q_vector->tx)
ice_queue_set_napi(q_vector->vsi, tx_ring->q_index,
NETDEV_QUEUE_TYPE_TX, &q_vector->napi);
/* Also set the interrupt number for the NAPI */
netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
}
/**
* ice_vsi_set_napi_queues
* ice_vsi_set_napi_queues - associate netdev queues with napi
* @vsi: VSI pointer
*
* Associate queue[s] with napi for all vectors
* Associate queue[s] with napi for all vectors.
* The caller must hold rtnl_lock.
*/
void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
{
int i;
struct net_device *netdev = vsi->netdev;
int q_idx, v_idx;
if (!vsi->netdev)
if (!netdev)
return;
ice_for_each_q_vector(vsi, i)
ice_q_vector_set_napi_queues(vsi->q_vectors[i]);
ice_for_each_rxq(vsi, q_idx)
netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX,
&vsi->rx_rings[q_idx]->q_vector->napi);
ice_for_each_txq(vsi, q_idx)
netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX,
&vsi->tx_rings[q_idx]->q_vector->napi);
/* Also set the interrupt number for the NAPI */
ice_for_each_q_vector(vsi, v_idx) {
struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
}
}
/**
* ice_vsi_clear_napi_queues - dissociate netdev queues from napi
* @vsi: VSI pointer
*
* Clear the association between all VSI queues queue[s] and napi.
* The caller must hold rtnl_lock.
*/
void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
{
struct net_device *netdev = vsi->netdev;
int q_idx;
if (!netdev)
return;
ice_for_each_txq(vsi, q_idx)
netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, NULL);
ice_for_each_rxq(vsi, q_idx)
netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, NULL);
}
/**
@ -3039,19 +2975,23 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf))
return -EINVAL;
mutex_lock(&vsi->xdp_state_lock);
ret = ice_vsi_realloc_stat_arrays(vsi);
if (ret)
goto err_vsi_cfg;
goto unlock;
ice_vsi_decfg(vsi);
ret = ice_vsi_cfg_def(vsi);
if (ret)
goto err_vsi_cfg;
goto unlock;
coalesce = kcalloc(vsi->num_q_vectors,
sizeof(struct ice_coalesce_stored), GFP_KERNEL);
if (!coalesce)
return -ENOMEM;
if (!coalesce) {
ret = -ENOMEM;
goto decfg;
}
prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
@ -3059,22 +2999,23 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
if (ret) {
if (vsi_flags & ICE_VSI_FLAG_INIT) {
ret = -EIO;
goto err_vsi_cfg_tc_lan;
goto free_coalesce;
}
kfree(coalesce);
return ice_schedule_reset(pf, ICE_RESET_PFR);
ret = ice_schedule_reset(pf, ICE_RESET_PFR);
goto free_coalesce;
}
ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors);
clear_bit(ICE_VSI_REBUILD_PENDING, vsi->state);
free_coalesce:
kfree(coalesce);
return 0;
err_vsi_cfg_tc_lan:
decfg:
if (ret)
ice_vsi_decfg(vsi);
kfree(coalesce);
err_vsi_cfg:
unlock:
mutex_unlock(&vsi->xdp_state_lock);
return ret;
}


@ -44,16 +44,10 @@ void ice_vsi_cfg_netdev_tc(struct ice_vsi *vsi, u8 ena_tc);
struct ice_vsi *
ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
void
ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
enum netdev_queue_type type, struct napi_struct *napi);
void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked);
void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector);
void ice_vsi_set_napi_queues(struct ice_vsi *vsi);
void ice_vsi_clear_napi_queues(struct ice_vsi *vsi);
int ice_vsi_release(struct ice_vsi *vsi);
void ice_vsi_close(struct ice_vsi *vsi);


@ -608,11 +608,15 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
memset(&vsi->mqprio_qopt, 0, sizeof(vsi->mqprio_qopt));
}
}
if (vsi->netdev)
netif_device_detach(vsi->netdev);
skip:
/* clear SW filtering DB */
ice_clear_hw_tbls(hw);
/* disable the VSIs and their queues that are not already DOWN */
set_bit(ICE_VSI_REBUILD_PENDING, ice_get_main_vsi(pf)->state);
ice_pf_dis_all_vsi(pf, false);
if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
@ -3001,8 +3005,8 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
struct netlink_ext_ack *extack)
{
unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD;
bool if_running = netif_running(vsi->netdev);
int ret = 0, xdp_ring_err = 0;
bool if_running;
if (prog && !prog->aux->xdp_has_frags) {
if (frame_size > ice_max_xdp_frame_size(vsi)) {
@ -3013,13 +3017,17 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
}
/* hot swap progs and avoid toggling link */
if (ice_is_xdp_ena_vsi(vsi) == !!prog) {
if (ice_is_xdp_ena_vsi(vsi) == !!prog ||
test_bit(ICE_VSI_REBUILD_PENDING, vsi->state)) {
ice_vsi_assign_bpf_prog(vsi, prog);
return 0;
}
if_running = netif_running(vsi->netdev) &&
!test_and_set_bit(ICE_VSI_DOWN, vsi->state);
/* need to stop netdev while setting up the program for Rx rings */
if (if_running && !test_and_set_bit(ICE_VSI_DOWN, vsi->state)) {
if (if_running) {
ret = ice_down(vsi);
if (ret) {
NL_SET_ERR_MSG_MOD(extack, "Preparing device for XDP attach failed");
@ -3085,21 +3093,28 @@ static int ice_xdp(struct net_device *dev, struct netdev_bpf *xdp)
{
struct ice_netdev_priv *np = netdev_priv(dev);
struct ice_vsi *vsi = np->vsi;
int ret;
if (vsi->type != ICE_VSI_PF) {
NL_SET_ERR_MSG_MOD(xdp->extack, "XDP can be loaded only on PF VSI");
return -EINVAL;
}
mutex_lock(&vsi->xdp_state_lock);
switch (xdp->command) {
case XDP_SETUP_PROG:
return ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
ret = ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack);
break;
case XDP_SETUP_XSK_POOL:
return ice_xsk_pool_setup(vsi, xdp->xsk.pool,
xdp->xsk.queue_id);
ret = ice_xsk_pool_setup(vsi, xdp->xsk.pool, xdp->xsk.queue_id);
break;
default:
return -EINVAL;
ret = -EINVAL;
}
mutex_unlock(&vsi->xdp_state_lock);
return ret;
}
/**
@ -3555,11 +3570,9 @@ static void ice_napi_add(struct ice_vsi *vsi)
if (!vsi->netdev)
return;
ice_for_each_q_vector(vsi, v_idx) {
ice_for_each_q_vector(vsi, v_idx)
netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,
ice_napi_poll);
__ice_q_vector_set_napi_queues(vsi->q_vectors[v_idx], false);
}
}
/**
@ -5537,7 +5550,9 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
if (ret)
goto err_reinit;
ice_vsi_map_rings_to_vectors(pf->vsi[v]);
rtnl_lock();
ice_vsi_set_napi_queues(pf->vsi[v]);
rtnl_unlock();
}
ret = ice_req_irq_msix_misc(pf);
@ -5551,8 +5566,12 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
err_reinit:
while (v--)
if (pf->vsi[v])
if (pf->vsi[v]) {
rtnl_lock();
ice_vsi_clear_napi_queues(pf->vsi[v]);
rtnl_unlock();
ice_vsi_free_q_vectors(pf->vsi[v]);
}
return ret;
}
@ -5617,6 +5636,9 @@ static int ice_suspend(struct device *dev)
ice_for_each_vsi(pf, v) {
if (!pf->vsi[v])
continue;
rtnl_lock();
ice_vsi_clear_napi_queues(pf->vsi[v]);
rtnl_unlock();
ice_vsi_free_q_vectors(pf->vsi[v]);
}
ice_clear_interrupt_scheme(pf);
@ -7230,7 +7252,7 @@ int ice_down(struct ice_vsi *vsi)
if (tx_err)
netdev_err(vsi->netdev, "Failed stop Tx rings, VSI %d error %d\n",
vsi->vsi_num, tx_err);
if (!tx_err && ice_is_xdp_ena_vsi(vsi)) {
if (!tx_err && vsi->xdp_rings) {
tx_err = ice_vsi_stop_xdp_tx_rings(vsi);
if (tx_err)
netdev_err(vsi->netdev, "Failed stop XDP rings, VSI %d error %d\n",
@ -7247,7 +7269,7 @@ int ice_down(struct ice_vsi *vsi)
ice_for_each_txq(vsi, i)
ice_clean_tx_ring(vsi->tx_rings[i]);
if (ice_is_xdp_ena_vsi(vsi))
if (vsi->xdp_rings)
ice_for_each_xdp_txq(vsi, i)
ice_clean_tx_ring(vsi->xdp_rings[i]);
@ -7452,6 +7474,8 @@ int ice_vsi_open(struct ice_vsi *vsi)
err = netif_set_real_num_rx_queues(vsi->netdev, vsi->num_rxq);
if (err)
goto err_set_qs;
ice_vsi_set_napi_queues(vsi);
}
err = ice_up_complete(vsi);
@ -7589,6 +7613,7 @@ static void ice_update_pf_netdev_link(struct ice_pf *pf)
*/
static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
{
struct ice_vsi *vsi = ice_get_main_vsi(pf);
struct device *dev = ice_pf_to_dev(pf);
struct ice_hw *hw = &pf->hw;
bool dvm;
@ -7731,6 +7756,9 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
ice_rebuild_arfs(pf);
}
if (vsi && vsi->netdev)
netif_device_attach(vsi->netdev);
ice_update_pf_netdev_link(pf);
/* tell the firmware we are up */

View File

@ -39,7 +39,7 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
sizeof(vsi_stat->rx_ring_stats[q_idx]->rx_stats));
memset(&vsi_stat->tx_ring_stats[q_idx]->stats, 0,
sizeof(vsi_stat->tx_ring_stats[q_idx]->stats));
if (ice_is_xdp_ena_vsi(vsi))
if (vsi->xdp_rings)
memset(&vsi->xdp_rings[q_idx]->ring_stats->stats, 0,
sizeof(vsi->xdp_rings[q_idx]->ring_stats->stats));
}
@ -52,7 +52,7 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
{
ice_clean_tx_ring(vsi->tx_rings[q_idx]);
if (ice_is_xdp_ena_vsi(vsi))
if (vsi->xdp_rings)
ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
ice_clean_rx_ring(vsi->rx_rings[q_idx]);
}
@ -165,7 +165,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
struct ice_q_vector *q_vector;
struct ice_tx_ring *tx_ring;
struct ice_rx_ring *rx_ring;
int timeout = 50;
int fail = 0;
int err;
@ -176,13 +175,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
rx_ring = vsi->rx_rings[q_idx];
q_vector = rx_ring->q_vector;
while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) {
timeout--;
if (!timeout)
return -EBUSY;
usleep_range(1000, 2000);
}
synchronize_net();
netif_carrier_off(vsi->netdev);
netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
@ -194,7 +186,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
if (!fail)
fail = err;
if (ice_is_xdp_ena_vsi(vsi)) {
if (vsi->xdp_rings) {
struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
memset(&txq_meta, 0, sizeof(txq_meta));
@ -261,7 +253,6 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
netif_carrier_on(vsi->netdev);
}
clear_bit(ICE_CFG_BUSY, vsi->state);
return fail;
}
@ -390,7 +381,8 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
goto failure;
}
if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
if_running = !test_bit(ICE_VSI_DOWN, vsi->state) &&
ice_is_xdp_ena_vsi(vsi);
if (if_running) {
struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];


@ -6960,10 +6960,20 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt)
static void igb_tsync_interrupt(struct igb_adapter *adapter)
{
const u32 mask = (TSINTR_SYS_WRAP | E1000_TSICR_TXTS |
TSINTR_TT0 | TSINTR_TT1 |
TSINTR_AUTT0 | TSINTR_AUTT1);
struct e1000_hw *hw = &adapter->hw;
u32 tsicr = rd32(E1000_TSICR);
struct ptp_clock_event event;
if (hw->mac.type == e1000_82580) {
/* 82580 has a hardware bug that requires an explicit
* write to clear the TimeSync interrupt cause.
*/
wr32(E1000_TSICR, tsicr & mask);
}
if (tsicr & TSINTR_SYS_WRAP) {
event.type = PTP_CLOCK_PPS;
if (adapter->ptp_caps.pps)


@ -7413,6 +7413,7 @@ static void igc_io_resume(struct pci_dev *pdev)
rtnl_lock();
if (netif_running(netdev)) {
if (igc_open(netdev)) {
rtnl_unlock();
netdev_err(netdev, "igc_open failed after reset\n");
return;
}


@ -1442,18 +1442,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
vcap_enable_lookups(&test_vctrl, &test_netdev, 0, 0,
rule->cookie, false);
vcap_free_rule(rule);
/* Check that the rule has been freed: tricky to access since this
* memory should not be accessible anymore
*/
KUNIT_EXPECT_PTR_NE(test, NULL, rule);
ret = list_empty(&rule->keyfields);
KUNIT_EXPECT_EQ(test, true, ret);
ret = list_empty(&rule->actionfields);
KUNIT_EXPECT_EQ(test, true, ret);
vcap_del_rule(&test_vctrl, &test_netdev, id);
ret = vcap_del_rule(&test_vctrl, &test_netdev, id);
KUNIT_EXPECT_EQ(test, 0, ret);
}
static void vcap_api_set_rule_counter_test(struct kunit *test)


@ -1872,10 +1872,12 @@ static void mana_destroy_txq(struct mana_port_context *apc)
for (i = 0; i < apc->num_queues; i++) {
napi = &apc->tx_qp[i].tx_cq.napi;
if (apc->tx_qp[i].txq.napi_initialized) {
napi_synchronize(napi);
napi_disable(napi);
netif_napi_del(napi);
apc->tx_qp[i].txq.napi_initialized = false;
}
mana_destroy_wq_obj(apc, GDMA_SQ, apc->tx_qp[i].tx_object);
mana_deinit_cq(apc, &apc->tx_qp[i].tx_cq);
@ -1931,6 +1933,7 @@ static int mana_create_txq(struct mana_port_context *apc,
txq->ndev = net;
txq->net_txq = netdev_get_tx_queue(net, i);
txq->vp_offset = apc->tx_vp_offset;
txq->napi_initialized = false;
skb_queue_head_init(&txq->pending_skbs);
memset(&spec, 0, sizeof(spec));
@ -1997,6 +2000,7 @@ static int mana_create_txq(struct mana_port_context *apc,
netif_napi_add_tx(net, &cq->napi, mana_poll);
napi_enable(&cq->napi);
txq->napi_initialized = true;
mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT);
}
@ -2008,7 +2012,7 @@ out:
}
static void mana_destroy_rxq(struct mana_port_context *apc,
struct mana_rxq *rxq, bool validate_state)
struct mana_rxq *rxq, bool napi_initialized)
{
struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
@ -2023,14 +2027,14 @@ static void mana_destroy_rxq(struct mana_port_context *apc,
napi = &rxq->rx_cq.napi;
if (validate_state)
if (napi_initialized) {
napi_synchronize(napi);
napi_disable(napi);
xdp_rxq_info_unreg(&rxq->xdp_rxq);
netif_napi_del(napi);
}
xdp_rxq_info_unreg(&rxq->xdp_rxq);
mana_destroy_wq_obj(apc, GDMA_RQ, rxq->rxobj);


@ -156,12 +156,13 @@
#define AM65_CPSW_CPPI_TX_PKT_TYPE 0x7
/* XDP */
#define AM65_CPSW_XDP_CONSUMED 2
#define AM65_CPSW_XDP_REDIRECT 1
#define AM65_CPSW_XDP_CONSUMED BIT(1)
#define AM65_CPSW_XDP_REDIRECT BIT(0)
#define AM65_CPSW_XDP_PASS 0
/* Include headroom compatible with both skb and xdpf */
#define AM65_CPSW_HEADROOM (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
#define AM65_CPSW_HEADROOM_NA (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN)
#define AM65_CPSW_HEADROOM ALIGN(AM65_CPSW_HEADROOM_NA, sizeof(long))
static void am65_cpsw_port_set_sl_mac(struct am65_cpsw_port *slave,
const u8 *dev_addr)
@ -933,7 +934,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
if (unlikely(!host_desc)) {
ndev->stats.tx_dropped++;
return -ENOMEM;
return AM65_CPSW_XDP_CONSUMED; /* drop */
}
am65_cpsw_nuss_set_buf_type(tx_chn, host_desc, buf_type);
@ -942,7 +943,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
pkt_len, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(tx_chn->dma_dev, dma_buf))) {
ndev->stats.tx_dropped++;
ret = -ENOMEM;
ret = AM65_CPSW_XDP_CONSUMED; /* drop */
goto pool_free;
}
@ -977,6 +978,7 @@ static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
/* Inform BQL */
netdev_tx_completed_queue(netif_txq, 1, pkt_len);
ndev->stats.tx_errors++;
ret = AM65_CPSW_XDP_CONSUMED; /* drop */
goto dma_unmap;
}
@ -996,7 +998,9 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
int desc_idx, int cpu, int *len)
{
struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
struct am65_cpsw_ndev_priv *ndev_priv;
struct net_device *ndev = port->ndev;
struct am65_cpsw_ndev_stats *stats;
int ret = AM65_CPSW_XDP_CONSUMED;
struct am65_cpsw_tx_chn *tx_chn;
struct netdev_queue *netif_txq;
@ -1004,6 +1008,7 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
struct bpf_prog *prog;
struct page *page;
u32 act;
int err;
prog = READ_ONCE(port->xdp_prog);
if (!prog)
@ -1013,6 +1018,9 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
/* XDP prog might have changed packet data and boundaries */
*len = xdp->data_end - xdp->data;
ndev_priv = netdev_priv(ndev);
stats = this_cpu_ptr(ndev_priv->stats);
switch (act) {
case XDP_PASS:
ret = AM65_CPSW_XDP_PASS;
@ -1023,31 +1031,36 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
xdpf = xdp_convert_buff_to_frame(xdp);
if (unlikely(!xdpf))
break;
goto drop;
__netif_tx_lock(netif_txq, cpu);
ret = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
AM65_CPSW_TX_BUF_TYPE_XDP_TX);
__netif_tx_unlock(netif_txq);
if (ret)
break;
if (err)
goto drop;
ndev->stats.rx_bytes += *len;
ndev->stats.rx_packets++;
u64_stats_update_begin(&stats->syncp);
stats->rx_bytes += *len;
stats->rx_packets++;
u64_stats_update_end(&stats->syncp);
ret = AM65_CPSW_XDP_CONSUMED;
goto out;
case XDP_REDIRECT:
if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
break;
goto drop;
ndev->stats.rx_bytes += *len;
ndev->stats.rx_packets++;
u64_stats_update_begin(&stats->syncp);
stats->rx_bytes += *len;
stats->rx_packets++;
u64_stats_update_end(&stats->syncp);
ret = AM65_CPSW_XDP_REDIRECT;
goto out;
default:
bpf_warn_invalid_xdp_action(ndev, prog, act);
fallthrough;
case XDP_ABORTED:
drop:
trace_xdp_exception(ndev, prog, act);
fallthrough;
case XDP_DROP:
@ -1056,7 +1069,6 @@ static int am65_cpsw_run_xdp(struct am65_cpsw_common *common,
page = virt_to_head_page(xdp->data);
am65_cpsw_put_page(rx_chn, page, true, desc_idx);
out:
return ret;
}
@ -1095,7 +1107,7 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
}
static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
u32 flow_idx, int cpu)
u32 flow_idx, int cpu, int *xdp_state)
{
struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
@ -1114,6 +1126,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
void **swdata;
u32 *psdata;
*xdp_state = AM65_CPSW_XDP_PASS;
ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx, &desc_dma);
if (ret) {
if (ret != -ENODATA)
@ -1161,15 +1174,13 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
}
if (port->xdp_prog) {
xdp_init_buff(&xdp, AM65_CPSW_MAX_PACKET_SIZE, &port->xdp_rxq);
xdp_prepare_buff(&xdp, page_addr, skb_headroom(skb),
xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq);
xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
pkt_len, false);
ret = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
*xdp_state = am65_cpsw_run_xdp(common, port, &xdp, desc_idx,
cpu, &pkt_len);
if (ret != AM65_CPSW_XDP_PASS)
return ret;
if (*xdp_state != AM65_CPSW_XDP_PASS)
goto allocate;
/* Compute additional headroom to be reserved */
headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
@ -1193,9 +1204,13 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
stats->rx_bytes += pkt_len;
u64_stats_update_end(&stats->syncp);
allocate:
new_page = page_pool_dev_alloc_pages(rx_chn->page_pool);
if (unlikely(!new_page))
if (unlikely(!new_page)) {
dev_err(dev, "page alloc failed\n");
return -ENOMEM;
}
rx_chn->pages[desc_idx] = new_page;
if (netif_dormant(ndev)) {
@ -1229,8 +1244,9 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
struct am65_cpsw_common *common = am65_cpsw_napi_to_common(napi_rx);
int flow = AM65_CPSW_MAX_RX_FLOWS;
int cpu = smp_processor_id();
bool xdp_redirect = false;
int xdp_state_or = 0;
int cur_budget, ret;
int xdp_state;
int num_rx = 0;
/* process every flow */
@ -1238,12 +1254,11 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
cur_budget = budget - num_rx;
while (cur_budget--) {
ret = am65_cpsw_nuss_rx_packets(common, flow, cpu);
if (ret) {
if (ret == AM65_CPSW_XDP_REDIRECT)
xdp_redirect = true;
ret = am65_cpsw_nuss_rx_packets(common, flow, cpu,
&xdp_state);
xdp_state_or |= xdp_state;
if (ret)
break;
}
num_rx++;
}
@ -1251,7 +1266,7 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
break;
}
if (xdp_redirect)
if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
xdp_do_flush();
dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
@ -1918,12 +1933,13 @@ static int am65_cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
static int am65_cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
struct xdp_frame **frames, u32 flags)
{
struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
struct am65_cpsw_tx_chn *tx_chn;
struct netdev_queue *netif_txq;
int cpu = smp_processor_id();
int i, nxmit = 0;
tx_chn = &am65_ndev_to_common(ndev)->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES];
tx_chn = &common->tx_chns[cpu % common->tx_ch_num];
netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
__netif_tx_lock(netif_txq, cpu);


@ -436,6 +436,8 @@ struct skbuf_dma_descriptor {
* @tx_bytes: TX byte count for statistics
* @tx_stat_sync: Synchronization object for TX stats
* @dma_err_task: Work structure to process Axi DMA errors
* @stopping: Set when @dma_err_task shouldn't do anything because we are
* about to stop the device.
* @tx_irq: Axidma TX IRQ number
* @rx_irq: Axidma RX IRQ number
* @eth_irq: Ethernet core IRQ number
@ -507,6 +509,7 @@ struct axienet_local {
struct u64_stats_sync tx_stat_sync;
struct work_struct dma_err_task;
bool stopping;
int tx_irq;
int rx_irq;


@ -1460,6 +1460,7 @@ static int axienet_init_legacy_dma(struct net_device *ndev)
struct axienet_local *lp = netdev_priv(ndev);
/* Enable worker thread for Axi DMA error handling */
lp->stopping = false;
INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
napi_enable(&lp->napi_rx);
@ -1580,6 +1581,9 @@ static int axienet_stop(struct net_device *ndev)
dev_dbg(&ndev->dev, "axienet_close()\n");
if (!lp->use_dmaengine) {
WRITE_ONCE(lp->stopping, true);
flush_work(&lp->dma_err_task);
napi_disable(&lp->napi_tx);
napi_disable(&lp->napi_rx);
}
@ -2154,6 +2158,10 @@ static void axienet_dma_err_handler(struct work_struct *work)
dma_err_task);
struct net_device *ndev = lp->ndev;
/* Don't bother if we are going to stop anyway */
if (READ_ONCE(lp->stopping))
return;
napi_disable(&lp->napi_tx);
napi_disable(&lp->napi_rx);
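
The axienet change above is a common shutdown idiom: mark the device as stopping before flushing the error worker, and have the worker bail out early, so it can never re-arm DMA or NAPI while the stop path is tearing them down. A generic sketch of the pattern (hypothetical names, not the driver itself):

#include <linux/compiler.h>
#include <linux/workqueue.h>

struct my_priv {
	struct work_struct err_task;
	bool stopping;
};

static void my_err_handler(struct work_struct *work)
{
	struct my_priv *lp = container_of(work, struct my_priv, err_task);

	if (READ_ONCE(lp->stopping))
		return;		/* the stop path owns the hardware now */

	/* ... reset DMA, re-enable NAPI, restart queues ... */
}

static int my_stop(struct my_priv *lp)
{
	WRITE_ONCE(lp->stopping, true);
	flush_work(&lp->err_task);	/* wait out a handler already running */

	/* ... now safe to disable NAPI and free IRQs ... */
	return 0;
}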


@ -21,6 +21,11 @@ config MCTP_SERIAL
Say y here if you need to connect to MCTP endpoints over serial. To
compile as a module, use m; the module will be called mctp-serial.
config MCTP_SERIAL_TEST
bool "MCTP serial tests" if !KUNIT_ALL_TESTS
depends on MCTP_SERIAL=y && KUNIT=y
default KUNIT_ALL_TESTS
config MCTP_TRANSPORT_I2C
tristate "MCTP SMBus/I2C transport"
# i2c-mux is optional, but we must build as a module if i2c-mux is a module


@ -91,8 +91,8 @@ static int next_chunk_len(struct mctp_serial *dev)
* will be those non-escaped bytes, and does not include the escaped
* byte.
*/
for (i = 1; i + dev->txpos + 1 < dev->txlen; i++) {
if (needs_escape(dev->txbuf[dev->txpos + i + 1]))
for (i = 1; i + dev->txpos < dev->txlen; i++) {
if (needs_escape(dev->txbuf[dev->txpos + i]))
break;
}
@ -521,3 +521,112 @@ module_exit(mctp_serial_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Jeremy Kerr <jk@codeconstruct.com.au>");
MODULE_DESCRIPTION("MCTP Serial transport");
#if IS_ENABLED(CONFIG_MCTP_SERIAL_TEST)
#include <kunit/test.h>
#define MAX_CHUNKS 6
struct test_chunk_tx {
u8 input_len;
u8 input[MCTP_SERIAL_MTU];
u8 chunks[MAX_CHUNKS];
};
static void test_next_chunk_len(struct kunit *test)
{
struct mctp_serial devx;
struct mctp_serial *dev = &devx;
int next;
const struct test_chunk_tx *params = test->param_value;
memset(dev, 0x0, sizeof(*dev));
memcpy(dev->txbuf, params->input, params->input_len);
dev->txlen = params->input_len;
for (size_t i = 0; i < MAX_CHUNKS; i++) {
next = next_chunk_len(dev);
dev->txpos += next;
KUNIT_EXPECT_EQ(test, next, params->chunks[i]);
if (next == 0) {
KUNIT_EXPECT_EQ(test, dev->txpos, dev->txlen);
return;
}
}
KUNIT_FAIL_AND_ABORT(test, "Ran out of chunks");
}
static struct test_chunk_tx chunk_tx_tests[] = {
{
.input_len = 5,
.input = { 0x00, 0x11, 0x22, 0x7e, 0x80 },
.chunks = { 3, 1, 1, 0},
},
{
.input_len = 5,
.input = { 0x00, 0x11, 0x22, 0x7e, 0x7d },
.chunks = { 3, 1, 1, 0},
},
{
.input_len = 3,
.input = { 0x7e, 0x11, 0x22, },
.chunks = { 1, 2, 0},
},
{
.input_len = 3,
.input = { 0x7e, 0x7e, 0x7d, },
.chunks = { 1, 1, 1, 0},
},
{
.input_len = 4,
.input = { 0x7e, 0x7e, 0x00, 0x7d, },
.chunks = { 1, 1, 1, 1, 0},
},
{
.input_len = 6,
.input = { 0x7e, 0x7e, 0x00, 0x7d, 0x10, 0x10},
.chunks = { 1, 1, 1, 1, 2, 0},
},
{
.input_len = 1,
.input = { 0x7e },
.chunks = { 1, 0 },
},
{
.input_len = 1,
.input = { 0x80 },
.chunks = { 1, 0 },
},
{
.input_len = 3,
.input = { 0x80, 0x80, 0x00 },
.chunks = { 3, 0 },
},
{
.input_len = 7,
.input = { 0x01, 0x00, 0x08, 0xc8, 0x00, 0x80, 0x02 },
.chunks = { 7, 0 },
},
{
.input_len = 7,
.input = { 0x01, 0x00, 0x08, 0xc8, 0x7e, 0x80, 0x02 },
.chunks = { 4, 1, 2, 0 },
},
};
KUNIT_ARRAY_PARAM(chunk_tx, chunk_tx_tests, NULL);
static struct kunit_case mctp_serial_test_cases[] = {
KUNIT_CASE_PARAM(test_next_chunk_len, chunk_tx_gen_params),
};
static struct kunit_suite mctp_serial_test_suite = {
.name = "mctp_serial",
.test_cases = mctp_serial_test_cases,
};
kunit_test_suite(mctp_serial_test_suite);
#endif /* CONFIG_MCTP_SERIAL_TEST */
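
The one-line fix at the top of this file moves the escape scan to the byte immediately after the current position (the old loop looked at txbuf[txpos + i + 1], so the very next byte was never checked and an escape character could be folded into an unescaped run). Below is a stand-alone model of the chunking rule that the new KUnit cases exercise: an illustration, not the driver function, assuming 0x7e and 0x7d are the bytes needing escape.

#include <stdio.h>

static int needs_escape(unsigned char c)
{
	return c == 0x7e || c == 0x7d;
}

static int next_chunk_len(const unsigned char *buf, int pos, int len)
{
	int i;

	if (pos >= len)
		return 0;
	if (needs_escape(buf[pos]))
		return 1;	/* escaped bytes go out one at a time */
	for (i = 1; pos + i < len; i++)
		if (needs_escape(buf[pos + i]))
			break;
	return i;		/* longest run of non-escaped bytes */
}

int main(void)
{
	const unsigned char tx[] = { 0x00, 0x11, 0x22, 0x7e, 0x80 };
	int pos = 0, n;

	while ((n = next_chunk_len(tx, pos, sizeof(tx))) > 0) {
		printf("chunk of %d byte(s) at offset %d\n", n, pos);
		pos += n;
	}
	return 0;	/* prints chunks 3, 1, 1 -- matching the first test case */
}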


@ -3347,11 +3347,13 @@ static int of_phy_leds(struct phy_device *phydev)
err = of_phy_led(phydev, led);
if (err) {
of_node_put(led);
of_node_put(leds);
phy_leds_unregister(phydev);
return err;
}
}
of_node_put(leds);
return 0;
}


@ -5178,14 +5178,23 @@ static void rtl8152_fw_mac_apply(struct r8152 *tp, struct fw_mac *mac)
data = (u8 *)mac;
data += __le16_to_cpu(mac->fw_offset);
generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length, data,
type);
if (generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length,
data, type) < 0) {
dev_err(&tp->intf->dev, "Write %s fw fail\n",
type ? "PLA" : "USB");
return;
}
ocp_write_word(tp, type, __le16_to_cpu(mac->bp_ba_addr),
__le16_to_cpu(mac->bp_ba_value));
generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
__le16_to_cpu(mac->bp_num) << 1, mac->bp, type);
if (generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
ALIGN(__le16_to_cpu(mac->bp_num) << 1, 4),
mac->bp, type) < 0) {
dev_err(&tp->intf->dev, "Write %s bp fail\n",
type ? "PLA" : "USB");
return;
}
bp_en_addr = __le16_to_cpu(mac->bp_en_addr);
if (bp_en_addr)
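
Besides checking the generic_ocp_write() return values, the hunk above rounds the breakpoint payload length up to a 4-byte multiple; with BYTE_EN_DWORD the data is apparently pushed in dwords, so an odd number of 2-byte breakpoint entries has to be padded. A tiny stand-alone illustration of the length math (made-up entry count):

#include <stdio.h>

#define ALIGN4(x) (((x) + 3u) & ~3u)

int main(void)
{
	unsigned int bp_num = 7;		/* hypothetical breakpoint count */
	unsigned int raw = bp_num << 1;		/* 14 bytes of 2-byte entries */

	printf("write length: %u -> %u bytes\n", raw, ALIGN4(raw));	/* 14 -> 16 */
	return 0;
}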


@ -61,9 +61,6 @@
/*-------------------------------------------------------------------------*/
// randomly generated ethernet address
static u8 node_id [ETH_ALEN];
/* use ethtool to change the level for any given device */
static int msg_level = -1;
module_param (msg_level, int, 0);
@ -1725,7 +1722,6 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
dev->net = net;
strscpy(net->name, "usb%d", sizeof(net->name));
eth_hw_addr_set(net, node_id);
/* rx and tx sides can use different message sizes;
* bind() should set rx_urb_size in that case.
@ -1801,9 +1797,9 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
goto out4;
}
/* let userspace know we have a random address */
if (ether_addr_equal(net->dev_addr, node_id))
net->addr_assign_type = NET_ADDR_RANDOM;
/* this flags the device for user space */
if (!is_valid_ether_addr(net->dev_addr))
eth_hw_addr_random(net);
if ((dev->driver_info->flags & FLAG_WLAN) != 0)
SET_NETDEV_DEVTYPE(net, &wlan_type);
@ -2211,7 +2207,6 @@ static int __init usbnet_init(void)
BUILD_BUG_ON(
sizeof_field(struct sk_buff, cb) < sizeof(struct skb_data));
eth_random_addr(node_id);
return 0;
}
module_init(usbnet_init);
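
The usbnet change above drops the single module-wide node_id (previously, devices whose bind() did not set a real address all shared the same per-modprobe "random" MAC) in favour of eth_hw_addr_random() per device, which also marks the address NET_ADDR_RANDOM for user space. A user-space illustration of what a locally administered random MAC looks like; this is an assumption-level sketch of the same bit manipulation eth_random_addr() performs, not kernel code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
	unsigned char mac[6];
	int i;

	srand((unsigned int)time(NULL));	/* not cryptographically strong */
	for (i = 0; i < 6; i++)
		mac[i] = (unsigned char)(rand() & 0xff);

	mac[0] &= 0xfe;	/* clear multicast bit: unicast address */
	mac[0] |= 0x02;	/* set locally-administered bit: not a vendor OUI */

	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	return 0;
}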


@ -413,7 +413,7 @@ static int ath11k_ahb_power_up(struct ath11k_base *ab)
return ret;
}
static void ath11k_ahb_power_down(struct ath11k_base *ab, bool is_suspend)
static void ath11k_ahb_power_down(struct ath11k_base *ab)
{
struct ath11k_ahb *ab_ahb = ath11k_ahb_priv(ab);
@ -1280,7 +1280,7 @@ static void ath11k_ahb_remove(struct platform_device *pdev)
struct ath11k_base *ab = platform_get_drvdata(pdev);
if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
ath11k_ahb_power_down(ab, false);
ath11k_ahb_power_down(ab);
ath11k_debugfs_soc_destroy(ab);
ath11k_qmi_deinit_service(ab);
goto qmi_fail;


@ -906,6 +906,12 @@ int ath11k_core_suspend(struct ath11k_base *ab)
return ret;
}
ret = ath11k_wow_enable(ab);
if (ret) {
ath11k_warn(ab, "failed to enable wow during suspend: %d\n", ret);
return ret;
}
ret = ath11k_dp_rx_pktlog_stop(ab, false);
if (ret) {
ath11k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
@ -916,85 +922,29 @@ int ath11k_core_suspend(struct ath11k_base *ab)
ath11k_ce_stop_shadow_timers(ab);
ath11k_dp_stop_shadow_timers(ab);
/* PM framework skips suspend_late/resume_early callbacks
* if other devices report errors in their suspend callbacks.
* However ath11k_core_resume() would still be called because
* here we return success thus kernel put us on dpm_suspended_list.
* Since we won't go through a power down/up cycle, there is
* no chance to call complete(&ab->restart_completed) in
* ath11k_core_restart(), making ath11k_core_resume() timeout.
* So call it here to avoid this issue. This also works in case
* no error happens thus suspend_late/resume_early get called,
* because it will be reinitialized in ath11k_core_resume_early().
*/
complete(&ab->restart_completed);
ath11k_hif_irq_disable(ab);
ath11k_hif_ce_irq_disable(ab);
ret = ath11k_hif_suspend(ab);
if (ret) {
ath11k_warn(ab, "failed to suspend hif: %d\n", ret);
return ret;
}
return 0;
}
EXPORT_SYMBOL(ath11k_core_suspend);
int ath11k_core_suspend_late(struct ath11k_base *ab)
{
struct ath11k_pdev *pdev;
struct ath11k *ar;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
ar = pdev->ar;
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
ath11k_hif_irq_disable(ab);
ath11k_hif_ce_irq_disable(ab);
ath11k_hif_power_down(ab, true);
return 0;
}
EXPORT_SYMBOL(ath11k_core_suspend_late);
int ath11k_core_resume_early(struct ath11k_base *ab)
{
int ret;
struct ath11k_pdev *pdev;
struct ath11k *ar;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
ar = pdev->ar;
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
reinit_completion(&ab->restart_completed);
ret = ath11k_hif_power_up(ab);
if (ret)
ath11k_warn(ab, "failed to power up hif during resume: %d\n", ret);
return ret;
}
EXPORT_SYMBOL(ath11k_core_resume_early);
int ath11k_core_resume(struct ath11k_base *ab)
{
int ret;
struct ath11k_pdev *pdev;
struct ath11k *ar;
long time_left;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far single_pdev_only chips have supports_suspend as true
/* so far signle_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
@ -1002,30 +952,30 @@ int ath11k_core_resume(struct ath11k_base *ab)
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
time_left = wait_for_completion_timeout(&ab->restart_completed,
ATH11K_RESET_TIMEOUT_HZ);
if (time_left == 0) {
ath11k_warn(ab, "timeout while waiting for restart complete");
return -ETIMEDOUT;
}
if (ab->hw_params.current_cc_support &&
ar->alpha2[0] != 0 && ar->alpha2[1] != 0) {
ret = ath11k_reg_set_cc(ar);
ret = ath11k_hif_resume(ab);
if (ret) {
ath11k_warn(ab, "failed to set country code during resume: %d\n",
ret);
ath11k_warn(ab, "failed to resume hif during resume: %d\n", ret);
return ret;
}
}
ath11k_hif_ce_irq_enable(ab);
ath11k_hif_irq_enable(ab);
ret = ath11k_dp_rx_pktlog_start(ab);
if (ret)
if (ret) {
ath11k_warn(ab, "failed to start rx pktlog during resume: %d\n",
ret);
return ret;
}
ret = ath11k_wow_wakeup(ab);
if (ret) {
ath11k_warn(ab, "failed to wakeup wow during resume: %d\n", ret);
return ret;
}
return 0;
}
EXPORT_SYMBOL(ath11k_core_resume);
static void ath11k_core_check_cc_code_bdfext(const struct dmi_header *hdr, void *data)
@ -2119,8 +2069,6 @@ static void ath11k_core_restart(struct work_struct *work)
if (!ab->is_reset)
ath11k_core_post_reconfigure_recovery(ab);
complete(&ab->restart_completed);
}
static void ath11k_core_reset(struct work_struct *work)
@ -2190,7 +2138,7 @@ static void ath11k_core_reset(struct work_struct *work)
ath11k_hif_irq_disable(ab);
ath11k_hif_ce_irq_disable(ab);
ath11k_hif_power_down(ab, false);
ath11k_hif_power_down(ab);
ath11k_hif_power_up(ab);
ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset started\n");
@ -2263,7 +2211,7 @@ void ath11k_core_deinit(struct ath11k_base *ab)
mutex_unlock(&ab->core_lock);
ath11k_hif_power_down(ab, false);
ath11k_hif_power_down(ab);
ath11k_mac_destroy(ab);
ath11k_core_soc_destroy(ab);
ath11k_fw_destroy(ab);
@ -2316,7 +2264,6 @@ struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
timer_setup(&ab->rx_replenish_retry, ath11k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
init_completion(&ab->wow.wakeup_completed);
init_completion(&ab->restart_completed);
ab->dev = dev;
ab->hif.bus = bus;

@ -1036,8 +1036,6 @@ struct ath11k_base {
DECLARE_BITMAP(fw_features, ATH11K_FW_FEATURE_COUNT);
} fw;
struct completion restart_completed;
#ifdef CONFIG_NL80211_TESTMODE
struct {
u32 data_pos;
@ -1237,10 +1235,8 @@ void ath11k_core_free_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd);
int ath11k_core_check_dt(struct ath11k_base *ath11k);
int ath11k_core_check_smbios(struct ath11k_base *ab);
void ath11k_core_halt(struct ath11k *ar);
int ath11k_core_resume_early(struct ath11k_base *ab);
int ath11k_core_resume(struct ath11k_base *ab);
int ath11k_core_suspend(struct ath11k_base *ab);
int ath11k_core_suspend_late(struct ath11k_base *ab);
void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab);
bool ath11k_core_coldboot_cal_support(struct ath11k_base *ab);

@ -18,7 +18,7 @@ struct ath11k_hif_ops {
int (*start)(struct ath11k_base *ab);
void (*stop)(struct ath11k_base *ab);
int (*power_up)(struct ath11k_base *ab);
void (*power_down)(struct ath11k_base *ab, bool is_suspend);
void (*power_down)(struct ath11k_base *ab);
int (*suspend)(struct ath11k_base *ab);
int (*resume)(struct ath11k_base *ab);
int (*map_service_to_pipe)(struct ath11k_base *ab, u16 service_id,
@ -67,18 +67,12 @@ static inline void ath11k_hif_irq_disable(struct ath11k_base *ab)
static inline int ath11k_hif_power_up(struct ath11k_base *ab)
{
if (!ab->hif.ops->power_up)
return -EOPNOTSUPP;
return ab->hif.ops->power_up(ab);
}
static inline void ath11k_hif_power_down(struct ath11k_base *ab, bool is_suspend)
static inline void ath11k_hif_power_down(struct ath11k_base *ab)
{
if (!ab->hif.ops->power_down)
return;
ab->hif.ops->power_down(ab, is_suspend);
ab->hif.ops->power_down(ab);
}
static inline int ath11k_hif_suspend(struct ath11k_base *ab)

@ -7900,6 +7900,7 @@ static void ath11k_mac_parse_tx_pwr_env(struct ath11k *ar,
}
if (psd) {
arvif->reg_tpc_info.is_psd_power = true;
arvif->reg_tpc_info.num_pwr_levels = psd->count;
for (i = 0; i < arvif->reg_tpc_info.num_pwr_levels; i++) {

@ -453,17 +453,9 @@ int ath11k_mhi_start(struct ath11k_pci *ab_pci)
return 0;
}
void ath11k_mhi_stop(struct ath11k_pci *ab_pci, bool is_suspend)
void ath11k_mhi_stop(struct ath11k_pci *ab_pci)
{
/* During suspend we need to use mhi_power_down_keep_dev()
* workaround, otherwise ath11k_core_resume() will timeout
* during resume.
*/
if (is_suspend)
mhi_power_down_keep_dev(ab_pci->mhi_ctrl, true);
else
mhi_power_down(ab_pci->mhi_ctrl, true);
mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
}

@ -18,7 +18,7 @@
#define MHICTRL_RESET_MASK 0x2
int ath11k_mhi_start(struct ath11k_pci *ar_pci);
void ath11k_mhi_stop(struct ath11k_pci *ar_pci, bool is_suspend);
void ath11k_mhi_stop(struct ath11k_pci *ar_pci);
int ath11k_mhi_register(struct ath11k_pci *ar_pci);
void ath11k_mhi_unregister(struct ath11k_pci *ar_pci);
void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab);
@ -26,4 +26,5 @@ void ath11k_mhi_clear_vector(struct ath11k_base *ab);
int ath11k_mhi_suspend(struct ath11k_pci *ar_pci);
int ath11k_mhi_resume(struct ath11k_pci *ar_pci);
#endif

@ -638,7 +638,7 @@ static int ath11k_pci_power_up(struct ath11k_base *ab)
return 0;
}
static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend)
static void ath11k_pci_power_down(struct ath11k_base *ab)
{
struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
@ -649,7 +649,7 @@ static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend)
ath11k_pci_msi_disable(ab_pci);
ath11k_mhi_stop(ab_pci, is_suspend);
ath11k_mhi_stop(ab_pci);
clear_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags);
ath11k_pci_sw_reset(ab_pci->ab, false);
}
@ -970,7 +970,7 @@ static void ath11k_pci_remove(struct pci_dev *pdev)
ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
ath11k_pci_power_down(ab, false);
ath11k_pci_power_down(ab);
ath11k_debugfs_soc_destroy(ab);
ath11k_qmi_deinit_service(ab);
goto qmi_fail;
@ -998,7 +998,7 @@ static void ath11k_pci_shutdown(struct pci_dev *pdev)
struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
ath11k_pci_power_down(ab, false);
ath11k_pci_power_down(ab);
}
static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
@ -1035,39 +1035,9 @@ static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
return ret;
}
static __maybe_unused int ath11k_pci_pm_suspend_late(struct device *dev)
{
struct ath11k_base *ab = dev_get_drvdata(dev);
int ret;
ret = ath11k_core_suspend_late(ab);
if (ret)
ath11k_warn(ab, "failed to late suspend core: %d\n", ret);
/* Similar to ath11k_pci_pm_suspend(), we return success here
* even error happens, to allow system suspend/hibernation survive.
*/
return 0;
}
static __maybe_unused int ath11k_pci_pm_resume_early(struct device *dev)
{
struct ath11k_base *ab = dev_get_drvdata(dev);
int ret;
ret = ath11k_core_resume_early(ab);
if (ret)
ath11k_warn(ab, "failed to early resume core: %d\n", ret);
return ret;
}
static const struct dev_pm_ops __maybe_unused ath11k_pci_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend,
ath11k_pci_pm_resume)
SET_LATE_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend_late,
ath11k_pci_pm_resume_early)
};
static SIMPLE_DEV_PM_OPS(ath11k_pci_pm_ops,
ath11k_pci_pm_suspend,
ath11k_pci_pm_resume);
static struct pci_driver ath11k_pci_driver = {
.name = "ath11k_pci",

@ -2877,7 +2877,7 @@ int ath11k_qmi_fwreset_from_cold_boot(struct ath11k_base *ab)
}
/* reset the firmware */
ath11k_hif_power_down(ab, false);
ath11k_hif_power_down(ab);
ath11k_hif_power_up(ab);
ath11k_dbg(ab, ATH11K_DBG_QMI, "exit wait for cold boot done\n");
return 0;

@ -316,6 +316,15 @@ struct ptp_ocp_serial_port {
#define OCP_SERIAL_LEN 6
#define OCP_SMA_NUM 4
enum {
PORT_GNSS,
PORT_GNSS2,
PORT_MAC, /* miniature atomic clock */
PORT_NMEA,
__PORT_COUNT,
};
struct ptp_ocp {
struct pci_dev *pdev;
struct device dev;
@ -357,10 +366,7 @@ struct ptp_ocp {
struct delayed_work sync_work;
int id;
int n_irqs;
struct ptp_ocp_serial_port gnss_port;
struct ptp_ocp_serial_port gnss2_port;
struct ptp_ocp_serial_port mac_port; /* miniature atomic clock */
struct ptp_ocp_serial_port nmea_port;
struct ptp_ocp_serial_port port[__PORT_COUNT];
bool fw_loader;
u8 fw_tag;
u16 fw_version;
@ -655,28 +661,28 @@ static struct ocp_resource ocp_fb_resource[] = {
},
},
{
OCP_SERIAL_RESOURCE(gnss_port),
OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
.offset = 0x00160000 + 0x1000, .irq_vec = 3,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 115200,
},
},
{
OCP_SERIAL_RESOURCE(gnss2_port),
OCP_SERIAL_RESOURCE(port[PORT_GNSS2]),
.offset = 0x00170000 + 0x1000, .irq_vec = 4,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 115200,
},
},
{
OCP_SERIAL_RESOURCE(mac_port),
OCP_SERIAL_RESOURCE(port[PORT_MAC]),
.offset = 0x00180000 + 0x1000, .irq_vec = 5,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 57600,
},
},
{
OCP_SERIAL_RESOURCE(nmea_port),
OCP_SERIAL_RESOURCE(port[PORT_NMEA]),
.offset = 0x00190000 + 0x1000, .irq_vec = 10,
},
{
@ -740,7 +746,7 @@ static struct ocp_resource ocp_art_resource[] = {
.offset = 0x01000000, .size = 0x10000,
},
{
OCP_SERIAL_RESOURCE(gnss_port),
OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
.offset = 0x00160000 + 0x1000, .irq_vec = 3,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 115200,
@ -839,7 +845,7 @@ static struct ocp_resource ocp_art_resource[] = {
},
},
{
OCP_SERIAL_RESOURCE(mac_port),
OCP_SERIAL_RESOURCE(port[PORT_MAC]),
.offset = 0x00190000, .irq_vec = 7,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 9600,
@ -950,14 +956,14 @@ static struct ocp_resource ocp_adva_resource[] = {
.offset = 0x00220000, .size = 0x1000,
},
{
OCP_SERIAL_RESOURCE(gnss_port),
OCP_SERIAL_RESOURCE(port[PORT_GNSS]),
.offset = 0x00160000 + 0x1000, .irq_vec = 3,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 9600,
},
},
{
OCP_SERIAL_RESOURCE(mac_port),
OCP_SERIAL_RESOURCE(port[PORT_MAC]),
.offset = 0x00180000 + 0x1000, .irq_vec = 5,
.extra = &(struct ptp_ocp_serial_port) {
.baud = 115200,
@ -1649,6 +1655,15 @@ ptp_ocp_tod_gnss_name(int idx)
return gnss_name[idx];
}
static const char *
ptp_ocp_tty_port_name(int idx)
{
static const char * const tty_name[] = {
"GNSS", "GNSS2", "MAC", "NMEA"
};
return tty_name[idx];
}
struct ptp_ocp_nvmem_match_info {
struct ptp_ocp *bp;
const void * const tag;
@ -3346,6 +3361,54 @@ static EXT_ATTR_RO(freq, frequency, 1);
static EXT_ATTR_RO(freq, frequency, 2);
static EXT_ATTR_RO(freq, frequency, 3);
static ssize_t
ptp_ocp_tty_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct dev_ext_attribute *ea = to_ext_attr(attr);
struct ptp_ocp *bp = dev_get_drvdata(dev);
return sysfs_emit(buf, "ttyS%d", bp->port[(uintptr_t)ea->var].line);
}
static umode_t
ptp_ocp_timecard_tty_is_visible(struct kobject *kobj, struct attribute *attr, int n)
{
struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj));
struct ptp_ocp_serial_port *port;
struct device_attribute *dattr;
struct dev_ext_attribute *ea;
if (strncmp(attr->name, "tty", 3))
return attr->mode;
dattr = container_of(attr, struct device_attribute, attr);
ea = container_of(dattr, struct dev_ext_attribute, attr);
port = &bp->port[(uintptr_t)ea->var];
return port->line == -1 ? 0 : 0444;
}
#define EXT_TTY_ATTR_RO(_name, _val) \
struct dev_ext_attribute dev_attr_tty##_name = \
{ __ATTR(tty##_name, 0444, ptp_ocp_tty_show, NULL), (void *)_val }
static EXT_TTY_ATTR_RO(GNSS, PORT_GNSS);
static EXT_TTY_ATTR_RO(GNSS2, PORT_GNSS2);
static EXT_TTY_ATTR_RO(MAC, PORT_MAC);
static EXT_TTY_ATTR_RO(NMEA, PORT_NMEA);
static struct attribute *ptp_ocp_timecard_tty_attrs[] = {
&dev_attr_ttyGNSS.attr.attr,
&dev_attr_ttyGNSS2.attr.attr,
&dev_attr_ttyMAC.attr.attr,
&dev_attr_ttyNMEA.attr.attr,
NULL,
};
static const struct attribute_group ptp_ocp_timecard_tty_group = {
.name = "tty",
.attrs = ptp_ocp_timecard_tty_attrs,
.is_visible = ptp_ocp_timecard_tty_is_visible,
};
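A note on the ptp_ocp refactor above: the four separately named serial-port members become one array indexed by the new PORT_* enum, which is what lets the tty attribute group, the summary output and the teardown paths below loop over ports instead of open-coding each one. A small userspace sketch of that table-driven pattern; the struct and helper names here are invented for illustration (PORT_COUNT plays the role of the driver's __PORT_COUNT):

#include <stdio.h>

enum { PORT_GNSS, PORT_GNSS2, PORT_MAC, PORT_NMEA, PORT_COUNT };

struct serial_port {
    int line;   /* -1 means "no tty registered for this port" */
    int baud;
};

static const char *port_name(int idx)
{
    static const char * const names[] = { "GNSS", "GNSS2", "MAC", "NMEA" };

    return names[idx];
}

int main(void)
{
    struct serial_port port[PORT_COUNT];
    int i;

    for (i = 0; i < PORT_COUNT; i++)
        port[i].line = -1;

    /* pretend two of the ports got a ttyS line assigned */
    port[PORT_GNSS].line = 4;
    port[PORT_MAC].line = 6;

    for (i = 0; i < PORT_COUNT; i++)
        if (port[i].line != -1)
            printf("%7s: /dev/ttyS%d\n", port_name(i), port[i].line);
    return 0;
}
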
static ssize_t
serialnum_show(struct device *dev, struct device_attribute *attr, char *buf)
{
@ -3775,6 +3838,7 @@ static const struct attribute_group fb_timecard_group = {
static const struct ocp_attr_group fb_timecard_groups[] = {
{ .cap = OCP_CAP_BASIC, .group = &fb_timecard_group },
{ .cap = OCP_CAP_BASIC, .group = &ptp_ocp_timecard_tty_group },
{ .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal0_group },
{ .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal1_group },
{ .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal2_group },
@ -3814,6 +3878,7 @@ static const struct attribute_group art_timecard_group = {
static const struct ocp_attr_group art_timecard_groups[] = {
{ .cap = OCP_CAP_BASIC, .group = &art_timecard_group },
{ .cap = OCP_CAP_BASIC, .group = &ptp_ocp_timecard_tty_group },
{ },
};
@ -3841,6 +3906,7 @@ static const struct attribute_group adva_timecard_group = {
static const struct ocp_attr_group adva_timecard_groups[] = {
{ .cap = OCP_CAP_BASIC, .group = &adva_timecard_group },
{ .cap = OCP_CAP_BASIC, .group = &ptp_ocp_timecard_tty_group },
{ .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal0_group },
{ .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal1_group },
{ .cap = OCP_CAP_FREQ, .group = &fb_timecard_freq0_group },
@ -3960,16 +4026,11 @@ ptp_ocp_summary_show(struct seq_file *s, void *data)
bp = dev_get_drvdata(dev);
seq_printf(s, "%7s: /dev/ptp%d\n", "PTP", ptp_clock_index(bp->ptp));
if (bp->gnss_port.line != -1)
seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS1",
bp->gnss_port.line);
if (bp->gnss2_port.line != -1)
seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS2",
bp->gnss2_port.line);
if (bp->mac_port.line != -1)
seq_printf(s, "%7s: /dev/ttyS%d\n", "MAC", bp->mac_port.line);
if (bp->nmea_port.line != -1)
seq_printf(s, "%7s: /dev/ttyS%d\n", "NMEA", bp->nmea_port.line);
for (i = 0; i < __PORT_COUNT; i++) {
if (bp->port[i].line != -1)
seq_printf(s, "%7s: /dev/ttyS%d\n", ptp_ocp_tty_port_name(i),
bp->port[i].line);
}
memset(sma_val, 0xff, sizeof(sma_val));
if (bp->sma_map1) {
@ -4279,7 +4340,7 @@ ptp_ocp_dev_release(struct device *dev)
static int
ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
{
int err;
int i, err;
mutex_lock(&ptp_ocp_lock);
err = idr_alloc(&ptp_ocp_idr, bp, 0, 0, GFP_KERNEL);
@ -4292,10 +4353,10 @@ ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev)
bp->ptp_info = ptp_ocp_clock_info;
spin_lock_init(&bp->lock);
bp->gnss_port.line = -1;
bp->gnss2_port.line = -1;
bp->mac_port.line = -1;
bp->nmea_port.line = -1;
for (i = 0; i < __PORT_COUNT; i++)
bp->port[i].line = -1;
bp->pdev = pdev;
device_initialize(&bp->dev);
@ -4352,22 +4413,6 @@ ptp_ocp_complete(struct ptp_ocp *bp)
struct pps_device *pps;
char buf[32];
if (bp->gnss_port.line != -1) {
sprintf(buf, "ttyS%d", bp->gnss_port.line);
ptp_ocp_link_child(bp, buf, "ttyGNSS");
}
if (bp->gnss2_port.line != -1) {
sprintf(buf, "ttyS%d", bp->gnss2_port.line);
ptp_ocp_link_child(bp, buf, "ttyGNSS2");
}
if (bp->mac_port.line != -1) {
sprintf(buf, "ttyS%d", bp->mac_port.line);
ptp_ocp_link_child(bp, buf, "ttyMAC");
}
if (bp->nmea_port.line != -1) {
sprintf(buf, "ttyS%d", bp->nmea_port.line);
ptp_ocp_link_child(bp, buf, "ttyNMEA");
}
sprintf(buf, "ptp%d", ptp_clock_index(bp->ptp));
ptp_ocp_link_child(bp, buf, "ptp");
@ -4416,23 +4461,20 @@ ptp_ocp_info(struct ptp_ocp *bp)
};
struct device *dev = &bp->pdev->dev;
u32 reg;
int i;
ptp_ocp_phc_info(bp);
ptp_ocp_serial_info(dev, "GNSS", bp->gnss_port.line,
bp->gnss_port.baud);
ptp_ocp_serial_info(dev, "GNSS2", bp->gnss2_port.line,
bp->gnss2_port.baud);
ptp_ocp_serial_info(dev, "MAC", bp->mac_port.line, bp->mac_port.baud);
if (bp->nmea_out && bp->nmea_port.line != -1) {
bp->nmea_port.baud = -1;
for (i = 0; i < __PORT_COUNT; i++) {
if (i == PORT_NMEA && bp->nmea_out && bp->port[PORT_NMEA].line != -1) {
bp->port[PORT_NMEA].baud = -1;
reg = ioread32(&bp->nmea_out->uart_baud);
if (reg < ARRAY_SIZE(nmea_baud))
bp->nmea_port.baud = nmea_baud[reg];
ptp_ocp_serial_info(dev, "NMEA", bp->nmea_port.line,
bp->nmea_port.baud);
bp->port[PORT_NMEA].baud = nmea_baud[reg];
}
ptp_ocp_serial_info(dev, ptp_ocp_tty_port_name(i), bp->port[i].line,
bp->port[i].baud);
}
}
@ -4441,9 +4483,6 @@ ptp_ocp_detach_sysfs(struct ptp_ocp *bp)
{
struct device *dev = &bp->dev;
sysfs_remove_link(&dev->kobj, "ttyGNSS");
sysfs_remove_link(&dev->kobj, "ttyGNSS2");
sysfs_remove_link(&dev->kobj, "ttyMAC");
sysfs_remove_link(&dev->kobj, "ptp");
sysfs_remove_link(&dev->kobj, "pps");
}
@ -4473,14 +4512,9 @@ ptp_ocp_detach(struct ptp_ocp *bp)
for (i = 0; i < 4; i++)
if (bp->signal_out[i])
ptp_ocp_unregister_ext(bp->signal_out[i]);
if (bp->gnss_port.line != -1)
serial8250_unregister_port(bp->gnss_port.line);
if (bp->gnss2_port.line != -1)
serial8250_unregister_port(bp->gnss2_port.line);
if (bp->mac_port.line != -1)
serial8250_unregister_port(bp->mac_port.line);
if (bp->nmea_port.line != -1)
serial8250_unregister_port(bp->nmea_port.line);
for (i = 0; i < __PORT_COUNT; i++)
if (bp->port[i].line != -1)
serial8250_unregister_port(bp->port[i].line);
platform_device_unregister(bp->spi_flash);
platform_device_unregister(bp->i2c_ctrl);
if (bp->i2c_clk)

@ -390,14 +390,6 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
__ret; \
})
#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) \
({ \
int __ret = 0; \
if (cgroup_bpf_enabled(CGROUP_GETSOCKOPT)) \
copy_from_sockptr(&__ret, optlen, sizeof(int)); \
__ret; \
})
#define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, optlen, \
max_optlen, retval) \
({ \
@ -518,7 +510,6 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
#define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; })
#define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(atype, major, minor, access) ({ 0; })
#define BPF_CGROUP_RUN_PROG_SYSCTL(head,table,write,buf,count,pos) ({ 0; })
#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) ({ 0; })
#define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, \
optlen, max_optlen, retval) ({ retval; })
#define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval, \

@ -186,7 +186,6 @@ struct blocked_key {
struct smp_csrk {
bdaddr_t bdaddr;
u8 bdaddr_type;
u8 link_type;
u8 type;
u8 val[16];
};
@ -196,7 +195,6 @@ struct smp_ltk {
struct rcu_head rcu;
bdaddr_t bdaddr;
u8 bdaddr_type;
u8 link_type;
u8 authenticated;
u8 type;
u8 enc_size;
@ -211,7 +209,6 @@ struct smp_irk {
bdaddr_t rpa;
bdaddr_t bdaddr;
u8 addr_type;
u8 link_type;
u8 val[16];
};
@ -219,8 +216,6 @@ struct link_key {
struct list_head list;
struct rcu_head rcu;
bdaddr_t bdaddr;
u8 bdaddr_type;
u8 link_type;
u8 type;
u8 val[HCI_LINK_KEY_SIZE];
u8 pin_len;

@ -73,6 +73,10 @@ int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy);
int hci_cmd_sync_queue_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy);
int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy);
int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy);
struct hci_cmd_sync_work_entry *
hci_cmd_sync_lookup_entry(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy);

@ -98,6 +98,8 @@ struct mana_txq {
atomic_t pending_sends;
bool napi_initialized;
struct mana_stats_tx stats;
};

@ -2952,5 +2952,9 @@ int hci_abort_conn(struct hci_conn *conn, u8 reason)
return 0;
}
return hci_cmd_sync_queue_once(hdev, abort_conn_sync, conn, NULL);
/* Run immediately if on cmd_sync_work since this may be called
* as a result to MGMT_OP_DISCONNECT/MGMT_OP_UNPAIR which does
* already queue its callback on cmd_sync_work.
*/
return hci_cmd_sync_run_once(hdev, abort_conn_sync, conn, NULL);
}

@ -112,7 +112,7 @@ static void hci_cmd_sync_add(struct hci_request *req, u16 opcode, u32 plen,
skb_queue_tail(&req->cmd_q, skb);
}
static int hci_cmd_sync_run(struct hci_request *req)
static int hci_req_sync_run(struct hci_request *req)
{
struct hci_dev *hdev = req->hdev;
struct sk_buff *skb;
@ -169,7 +169,7 @@ struct sk_buff *__hci_cmd_sync_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
hdev->req_status = HCI_REQ_PEND;
err = hci_cmd_sync_run(&req);
err = hci_req_sync_run(&req);
if (err < 0)
return ERR_PTR(err);
@ -782,6 +782,44 @@ int hci_cmd_sync_queue_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
}
EXPORT_SYMBOL(hci_cmd_sync_queue_once);
/* Run HCI command:
*
* - hdev must be running
* - if on cmd_sync_work then run immediately otherwise queue
*/
int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy)
{
/* Only queue command if hdev is running which means it had been opened
* and is either on init phase or is already up.
*/
if (!test_bit(HCI_RUNNING, &hdev->flags))
return -ENETDOWN;
/* If on cmd_sync_work then run immediately otherwise queue */
if (current_work() == &hdev->cmd_sync_work)
return func(hdev, data);
return hci_cmd_sync_submit(hdev, func, data, destroy);
}
EXPORT_SYMBOL(hci_cmd_sync_run);
/* Run HCI command entry once:
*
* - Lookup if an entry already exist and only if it doesn't creates a new entry
* and run it.
* - if on cmd_sync_work then run immediately otherwise queue
*/
int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
void *data, hci_cmd_sync_work_destroy_t destroy)
{
if (hci_cmd_sync_lookup_entry(hdev, func, data, destroy))
return 0;
return hci_cmd_sync_run(hdev, func, data, destroy);
}
EXPORT_SYMBOL(hci_cmd_sync_run_once);
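The comments above describe the new helpers' contract: a callback that is already executing on cmd_sync_work calls the function directly instead of queuing behind itself, while any other context still defers to the worker. A toy, single-threaded sketch of that "run inline if we are the worker, otherwise enqueue" decision; none of the names below are real HCI APIs, and the one-slot queue exists only to keep the example short:

#include <stdio.h>
#include <stdbool.h>

typedef int (*sync_func_t)(void *data);

static bool on_sync_worker;     /* stands in for current_work() == &cmd_sync_work */
static sync_func_t pending;     /* one-slot "queue", enough for the sketch */
static void *pending_data;

static int sync_queue(sync_func_t func, void *data)
{
    pending = func;
    pending_data = data;
    printf("deferred to the sync worker\n");
    return 0;
}

static int sync_run(sync_func_t func, void *data)
{
    if (on_sync_worker)
        return func(data);          /* already on the worker: run inline */
    return sync_queue(func, data);  /* any other context: defer */
}

static int abort_conn(void *data)
{
    printf("aborting connection %s\n", (const char *)data);
    return 0;
}

int main(void)
{
    sync_run(abort_conn, "A");      /* not on the worker -> queued */

    on_sync_worker = true;          /* pretend the worker thread picked it up */
    if (pending)
        pending(pending_data);      /* worker drains the queue */
    sync_run(abort_conn, "B");      /* called from the worker -> runs inline */
    return 0;
}
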
/* Lookup HCI command entry:
*
* - Return first entry that matches by function callback or data or

@ -2830,16 +2830,6 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
bt_dev_dbg(hdev, "debug_keys %u key_count %u", cp->debug_keys,
key_count);
for (i = 0; i < key_count; i++) {
struct mgmt_link_key_info *key = &cp->keys[i];
/* Considering SMP over BREDR/LE, there is no need to check addr_type */
if (key->type > 0x08)
return mgmt_cmd_status(sk, hdev->id,
MGMT_OP_LOAD_LINK_KEYS,
MGMT_STATUS_INVALID_PARAMS);
}
hci_dev_lock(hdev);
hci_link_keys_clear(hdev);
@ -2864,6 +2854,19 @@ static int load_link_keys(struct sock *sk, struct hci_dev *hdev, void *data,
continue;
}
if (key->addr.type != BDADDR_BREDR) {
bt_dev_warn(hdev,
"Invalid link address type %u for %pMR",
key->addr.type, &key->addr.bdaddr);
continue;
}
if (key->type > 0x08) {
bt_dev_warn(hdev, "Invalid link key type %u for %pMR",
key->type, &key->addr.bdaddr);
continue;
}
/* Always ignore debug keys and require a new pairing if
* the user wants to use them.
*/
@ -2921,7 +2924,12 @@ static int unpair_device_sync(struct hci_dev *hdev, void *data)
if (!conn)
return 0;
return hci_abort_conn_sync(hdev, conn, HCI_ERROR_REMOTE_USER_TERM);
/* Disregard any possible error since the likes of hci_abort_conn_sync
* will clean up the connection no matter the error.
*/
hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
return 0;
}
static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
@ -3053,13 +3061,44 @@ unlock:
return err;
}
static void disconnect_complete(struct hci_dev *hdev, void *data, int err)
{
struct mgmt_pending_cmd *cmd = data;
cmd->cmd_complete(cmd, mgmt_status(err));
mgmt_pending_free(cmd);
}
static int disconnect_sync(struct hci_dev *hdev, void *data)
{
struct mgmt_pending_cmd *cmd = data;
struct mgmt_cp_disconnect *cp = cmd->param;
struct hci_conn *conn;
if (cp->addr.type == BDADDR_BREDR)
conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK,
&cp->addr.bdaddr);
else
conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr,
le_addr_type(cp->addr.type));
if (!conn)
return -ENOTCONN;
/* Disregard any possible error since the likes of hci_abort_conn_sync
* will clean up the connection no matter the error.
*/
hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
return 0;
}
static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
u16 len)
{
struct mgmt_cp_disconnect *cp = data;
struct mgmt_rp_disconnect rp;
struct mgmt_pending_cmd *cmd;
struct hci_conn *conn;
int err;
bt_dev_dbg(hdev, "sock %p", sk);
@ -3082,27 +3121,7 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
goto failed;
}
if (pending_find(MGMT_OP_DISCONNECT, hdev)) {
err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT,
MGMT_STATUS_BUSY, &rp, sizeof(rp));
goto failed;
}
if (cp->addr.type == BDADDR_BREDR)
conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK,
&cp->addr.bdaddr);
else
conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr,
le_addr_type(cp->addr.type));
if (!conn || conn->state == BT_OPEN || conn->state == BT_CLOSED) {
err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT,
MGMT_STATUS_NOT_CONNECTED, &rp,
sizeof(rp));
goto failed;
}
cmd = mgmt_pending_add(sk, MGMT_OP_DISCONNECT, hdev, data, len);
cmd = mgmt_pending_new(sk, MGMT_OP_DISCONNECT, hdev, data, len);
if (!cmd) {
err = -ENOMEM;
goto failed;
@ -3110,9 +3129,10 @@ static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data,
cmd->cmd_complete = generic_cmd_complete;
err = hci_disconnect(conn, HCI_ERROR_REMOTE_USER_TERM);
err = hci_cmd_sync_queue(hdev, disconnect_sync, cmd,
disconnect_complete);
if (err < 0)
mgmt_pending_remove(cmd);
mgmt_pending_free(cmd);
failed:
hci_dev_unlock(hdev);
@ -7072,7 +7092,6 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
for (i = 0; i < irk_count; i++) {
struct mgmt_irk_info *irk = &cp->irks[i];
u8 addr_type = le_addr_type(irk->addr.type);
if (hci_is_blocked_key(hdev,
HCI_BLOCKED_KEY_TYPE_IRK,
@ -7082,12 +7101,8 @@ static int load_irks(struct sock *sk, struct hci_dev *hdev, void *cp_data,
continue;
}
/* When using SMP over BR/EDR, the addr type should be set to BREDR */
if (irk->addr.type == BDADDR_BREDR)
addr_type = BDADDR_BREDR;
hci_add_irk(hdev, &irk->addr.bdaddr,
addr_type, irk->val,
le_addr_type(irk->addr.type), irk->val,
BDADDR_ANY);
}
@ -7152,15 +7167,6 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
bt_dev_dbg(hdev, "key_count %u", key_count);
for (i = 0; i < key_count; i++) {
struct mgmt_ltk_info *key = &cp->keys[i];
if (!ltk_is_valid(key))
return mgmt_cmd_status(sk, hdev->id,
MGMT_OP_LOAD_LONG_TERM_KEYS,
MGMT_STATUS_INVALID_PARAMS);
}
hci_dev_lock(hdev);
hci_smp_ltks_clear(hdev);
@ -7168,7 +7174,6 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
for (i = 0; i < key_count; i++) {
struct mgmt_ltk_info *key = &cp->keys[i];
u8 type, authenticated;
u8 addr_type = le_addr_type(key->addr.type);
if (hci_is_blocked_key(hdev,
HCI_BLOCKED_KEY_TYPE_LTK,
@ -7178,6 +7183,12 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
continue;
}
if (!ltk_is_valid(key)) {
bt_dev_warn(hdev, "Invalid LTK for %pMR",
&key->addr.bdaddr);
continue;
}
switch (key->type) {
case MGMT_LTK_UNAUTHENTICATED:
authenticated = 0x00;
@ -7203,12 +7214,8 @@ static int load_long_term_keys(struct sock *sk, struct hci_dev *hdev,
continue;
}
/* When using SMP over BR/EDR, the addr type should be set to BREDR */
if (key->addr.type == BDADDR_BREDR)
addr_type = BDADDR_BREDR;
hci_add_ltk(hdev, &key->addr.bdaddr,
addr_type, type, authenticated,
le_addr_type(key->addr.type), type, authenticated,
key->val, key->enc_size, key->ediv, key->rand);
}
@ -9502,7 +9509,7 @@ void mgmt_new_link_key(struct hci_dev *hdev, struct link_key *key,
ev.store_hint = persistent;
bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
ev.key.addr.type = BDADDR_BREDR;
ev.key.type = key->type;
memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE);
ev.key.pin_len = key->pin_len;
@ -9553,7 +9560,7 @@ void mgmt_new_ltk(struct hci_dev *hdev, struct smp_ltk *key, bool persistent)
ev.store_hint = persistent;
bacpy(&ev.key.addr.bdaddr, &key->bdaddr);
ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type);
ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type);
ev.key.type = mgmt_ltk_type(key);
ev.key.enc_size = key->enc_size;
ev.key.ediv = key->ediv;
@ -9582,7 +9589,7 @@ void mgmt_new_irk(struct hci_dev *hdev, struct smp_irk *irk, bool persistent)
bacpy(&ev.rpa, &irk->rpa);
bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr);
ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type);
ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type);
memcpy(ev.irk.val, irk->val, sizeof(irk->val));
mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL);
@ -9611,7 +9618,7 @@ void mgmt_new_csrk(struct hci_dev *hdev, struct smp_csrk *csrk,
ev.store_hint = persistent;
bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr);
ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type);
ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type);
ev.key.type = csrk->type;
memcpy(ev.key.val, csrk->val, sizeof(csrk->val));
@ -9689,18 +9696,6 @@ void mgmt_device_connected(struct hci_dev *hdev, struct hci_conn *conn,
mgmt_event_skb(skb, NULL);
}
static void disconnect_rsp(struct mgmt_pending_cmd *cmd, void *data)
{
struct sock **sk = data;
cmd->cmd_complete(cmd, 0);
*sk = cmd->sk;
sock_hold(*sk);
mgmt_pending_remove(cmd);
}
static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data)
{
struct hci_dev *hdev = data;
@ -9744,8 +9739,6 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
if (link_type != ACL_LINK && link_type != LE_LINK)
return;
mgmt_pending_foreach(MGMT_OP_DISCONNECT, hdev, disconnect_rsp, &sk);
bacpy(&ev.addr.bdaddr, bdaddr);
ev.addr.type = link_to_bdaddr(link_type, addr_type);
ev.reason = reason;
@ -9758,9 +9751,6 @@ void mgmt_device_disconnected(struct hci_dev *hdev, bdaddr_t *bdaddr,
if (sk)
sock_put(sk);
mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
hdev);
}
void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,

@ -1060,7 +1060,6 @@ static void smp_notify_keys(struct l2cap_conn *conn)
}
if (smp->remote_irk) {
smp->remote_irk->link_type = hcon->type;
mgmt_new_irk(hdev, smp->remote_irk, persistent);
/* Now that user space can be considered to know the
@ -1080,28 +1079,24 @@ static void smp_notify_keys(struct l2cap_conn *conn)
}
if (smp->csrk) {
smp->csrk->link_type = hcon->type;
smp->csrk->bdaddr_type = hcon->dst_type;
bacpy(&smp->csrk->bdaddr, &hcon->dst);
mgmt_new_csrk(hdev, smp->csrk, persistent);
}
if (smp->responder_csrk) {
smp->responder_csrk->link_type = hcon->type;
smp->responder_csrk->bdaddr_type = hcon->dst_type;
bacpy(&smp->responder_csrk->bdaddr, &hcon->dst);
mgmt_new_csrk(hdev, smp->responder_csrk, persistent);
}
if (smp->ltk) {
smp->ltk->link_type = hcon->type;
smp->ltk->bdaddr_type = hcon->dst_type;
bacpy(&smp->ltk->bdaddr, &hcon->dst);
mgmt_new_ltk(hdev, smp->ltk, persistent);
}
if (smp->responder_ltk) {
smp->responder_ltk->link_type = hcon->type;
smp->responder_ltk->bdaddr_type = hcon->dst_type;
bacpy(&smp->responder_ltk->bdaddr, &hcon->dst);
mgmt_new_ltk(hdev, smp->responder_ltk, persistent);
@ -1121,8 +1116,6 @@ static void smp_notify_keys(struct l2cap_conn *conn)
key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst,
smp->link_key, type, 0, &persistent);
if (key) {
key->link_type = hcon->type;
key->bdaddr_type = hcon->dst_type;
mgmt_new_link_key(hdev, key, persistent);
/* Don't keep debug keys around if the relevant

@ -1469,12 +1469,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
modified = true;
}
if (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) {
/* Refresh entry */
fdb->used = jiffies;
} else if (!test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags)) {
/* Take over SW learned entry */
set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags);
} else {
modified = true;
}
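On the bridge change above: test_and_set_bit() sets the flag unconditionally and reports whether it was already set, so the old separate test and set steps collapse into a single atomic read-modify-write, and the entry counts as modified whenever the bit was not previously set. A standalone sketch of that primitive using C11 atomics; the flag name and the messages are illustrative only:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define FDB_ADDED_BY_EXT_LEARN 0    /* illustrative flag bit */

/* Atomically set bit nr in *flags and report whether it was already set,
 * roughly what the kernel's test_and_set_bit() provides. */
static bool test_and_set_bit(int nr, atomic_ulong *flags)
{
    unsigned long mask = 1UL << nr;

    return atomic_fetch_or(flags, mask) & mask;
}

int main(void)
{
    atomic_ulong flags = 0;

    if (test_and_set_bit(FDB_ADDED_BY_EXT_LEARN, &flags))
        printf("already externally learned: just refresh the entry\n");
    else
        printf("took the entry over: mark it modified\n");

    /* a second attempt now finds the bit already set */
    if (test_and_set_bit(FDB_ADDED_BY_EXT_LEARN, &flags))
        printf("already externally learned: just refresh the entry\n");
    return 0;
}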

@ -1470,6 +1470,10 @@ static void bcm_notify(struct bcm_sock *bo, unsigned long msg,
/* remove device reference, if this is our bound device */
if (bo->bound && bo->ifindex == dev->ifindex) {
#if IS_ENABLED(CONFIG_PROC_FS)
if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read)
remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir);
#endif
bo->bound = 0;
bo->ifindex = 0;
notify_enodev = 1;

@ -1524,7 +1524,7 @@ static const struct attribute_group dql_group = {
};
#else
/* Fake declaration, all the code using it should be dead */
extern const struct attribute_group dql_group;
static const struct attribute_group dql_group = {};
#endif /* CONFIG_BQL */
#ifdef CONFIG_XPS

@ -50,7 +50,7 @@ struct fou_net {
static inline struct fou *fou_from_sock(struct sock *sk)
{
return sk->sk_user_data;
return rcu_dereference_sk_user_data(sk);
}
static int fou_recv_pull(struct sk_buff *skb, struct fou *fou, size_t len)
@ -233,9 +233,15 @@ static struct sk_buff *fou_gro_receive(struct sock *sk,
struct sk_buff *skb)
{
const struct net_offload __rcu **offloads;
u8 proto = fou_from_sock(sk)->protocol;
struct fou *fou = fou_from_sock(sk);
const struct net_offload *ops;
struct sk_buff *pp = NULL;
u8 proto;
if (!fou)
goto out;
proto = fou->protocol;
/* We can clear the encap_mark for FOU as we are essentially doing
* one of two possible things. We are either adding an L4 tunnel
@ -263,14 +269,24 @@ static int fou_gro_complete(struct sock *sk, struct sk_buff *skb,
int nhoff)
{
const struct net_offload __rcu **offloads;
u8 proto = fou_from_sock(sk)->protocol;
struct fou *fou = fou_from_sock(sk);
const struct net_offload *ops;
int err = -ENOSYS;
u8 proto;
int err;
if (!fou) {
err = -ENOENT;
goto out;
}
proto = fou->protocol;
offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads;
ops = rcu_dereference(offloads[proto]);
if (WARN_ON(!ops || !ops->callbacks.gro_complete))
if (WARN_ON(!ops || !ops->callbacks.gro_complete)) {
err = -ENOSYS;
goto out;
}
err = ops->callbacks.gro_complete(skb, nhoff);
@ -320,6 +336,9 @@ static struct sk_buff *gue_gro_receive(struct sock *sk,
struct gro_remcsum grc;
u8 proto;
if (!fou)
goto out;
skb_gro_remcsum_init(&grc);
off = skb_gro_offset(skb);

@ -577,7 +577,7 @@ out_err:
err = sk_stream_error(sk, msg->msg_flags, err);
release_sock(sk);
sk_psock_put(sk, psock);
return copied ? copied : err;
return copied > 0 ? copied : err;
}
enum {

@ -108,6 +108,7 @@ int ila_lwt_init(void);
void ila_lwt_fini(void);
int ila_xlat_init_net(struct net *net);
void ila_xlat_pre_exit_net(struct net *net);
void ila_xlat_exit_net(struct net *net);
int ila_xlat_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info);

@ -71,6 +71,11 @@ ila_xlat_init_fail:
return err;
}
static __net_exit void ila_pre_exit_net(struct net *net)
{
ila_xlat_pre_exit_net(net);
}
static __net_exit void ila_exit_net(struct net *net)
{
ila_xlat_exit_net(net);
@ -78,6 +83,7 @@ static __net_exit void ila_exit_net(struct net *net)
static struct pernet_operations ila_net_ops = {
.init = ila_init_net,
.pre_exit = ila_pre_exit_net,
.exit = ila_exit_net,
.id = &ila_net_id,
.size = sizeof(struct ila_net),

@ -619,6 +619,15 @@ int ila_xlat_init_net(struct net *net)
return 0;
}
void ila_xlat_pre_exit_net(struct net *net)
{
struct ila_net *ilan = net_generic(net, ila_net_id);
if (ilan->xlat.hooks_registered)
nf_unregister_net_hooks(net, ila_nf_hook_ops,
ARRAY_SIZE(ila_nf_hook_ops));
}
void ila_xlat_exit_net(struct net *net)
{
struct ila_net *ilan = net_generic(net, ila_net_id);
@ -626,10 +635,6 @@ void ila_xlat_exit_net(struct net *net)
rhashtable_free_and_destroy(&ilan->xlat.rhash_table, ila_free_cb, NULL);
free_bucket_spinlocks(ilan->xlat.locks);
if (ilan->xlat.hooks_registered)
nf_unregister_net_hooks(net, ila_nf_hook_ops,
ARRAY_SIZE(ila_nf_hook_ops));
}
static int ila_xlat_addr(struct sk_buff *skb, bool sir2ila)

@ -786,12 +786,15 @@ skip_hash:
* queue, accept the collision, update the host tags.
*/
q->way_collisions++;
if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
}
allocate_src = cake_dsrc(flow_mode);
allocate_dst = cake_ddst(flow_mode);
if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
if (allocate_src)
q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
if (allocate_dst)
q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
}
found:
/* reserve queue for future packets in same flow */
reduced_hash = outer_hash + k;

@ -742,11 +742,10 @@ deliver:
err = qdisc_enqueue(skb, q->qdisc, &to_free);
kfree_skb_list(to_free);
if (err != NET_XMIT_SUCCESS &&
net_xmit_drop_count(err)) {
if (err != NET_XMIT_SUCCESS) {
if (net_xmit_drop_count(err))
qdisc_qstats_drop(sch);
qdisc_tree_reduce_backlog(sch, 1,
pkt_len);
qdisc_tree_reduce_backlog(sch, 1, pkt_len);
}
goto tfifo_dequeue;
}

@ -284,6 +284,9 @@ struct smc_connection {
struct smc_sock { /* smc sock container */
struct sock sk;
#if IS_ENABLED(CONFIG_IPV6)
struct ipv6_pinfo *pinet6;
#endif
struct socket *clcsock; /* internal tcp socket */
void (*clcsk_state_change)(struct sock *sk);
/* original stat_change fct. */

@ -60,6 +60,11 @@ static struct inet_protosw smc_inet_protosw = {
};
#if IS_ENABLED(CONFIG_IPV6)
struct smc6_sock {
struct smc_sock smc;
struct ipv6_pinfo inet6;
};
static struct proto smc_inet6_prot = {
.name = "INET6_SMC",
.owner = THIS_MODULE,
@ -67,9 +72,10 @@ static struct proto smc_inet6_prot = {
.hash = smc_hash_sk,
.unhash = smc_unhash_sk,
.release_cb = smc_release_cb,
.obj_size = sizeof(struct smc_sock),
.obj_size = sizeof(struct smc6_sock),
.h.smc_hash = &smc_v6_hashinfo,
.slab_flags = SLAB_TYPESAFE_BY_RCU,
.ipv6_pinfo_offset = offsetof(struct smc6_sock, inet6),
};
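The smc6_sock wrapper above bundles the SMC socket with its struct ipv6_pinfo, and .ipv6_pinfo_offset tells the inet6 core where inside that object the IPv6 state lives. A tiny userspace sketch of recovering an embedded struct through a stored offsetof() value; the struct names are made up for illustration:

#include <stddef.h>
#include <stdio.h>

struct inet6_info {         /* stands in for struct ipv6_pinfo */
    int hop_limit;
};

struct smc_like_sock {      /* stands in for struct smc_sock */
    int state;
};

/* container: protocol-private sock first, IPv6 state embedded right after,
 * the same layout idea as the smc6_sock wrapper above */
struct smc6_like_sock {
    struct smc_like_sock smc;
    struct inet6_info inet6;
};

int main(void)
{
    struct smc6_like_sock sk = { .inet6 = { .hop_limit = 64 } };
    size_t ipv6_pinfo_offset = offsetof(struct smc6_like_sock, inet6);

    /* what the core effectively does with .ipv6_pinfo_offset:
     * start from the sock pointer and add the advertised offset */
    struct inet6_info *np =
        (struct inet6_info *)((char *)&sk + ipv6_pinfo_offset);

    printf("offset %zu, hop_limit %d\n", ipv6_pinfo_offset, np->hop_limit);
    return 0;
}
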
static const struct proto_ops smc_inet6_stream_ops = {

@ -2362,7 +2362,7 @@ INDIRECT_CALLABLE_DECLARE(bool tcp_bpf_bypass_getsockopt(int level,
int do_sock_getsockopt(struct socket *sock, bool compat, int level,
int optname, sockptr_t optval, sockptr_t optlen)
{
int max_optlen __maybe_unused;
int max_optlen __maybe_unused = 0;
const struct proto_ops *ops;
int err;
@ -2371,7 +2371,7 @@ int do_sock_getsockopt(struct socket *sock, bool compat, int level,
return err;
if (!compat)
max_optlen = BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen);
copy_from_sockptr(&max_optlen, optlen, sizeof(int));
ops = READ_ONCE(sock->ops);
if (level == SOL_SOCKET) {

@ -388,6 +388,8 @@ class NetlinkProtocol:
def decode(self, ynl, nl_msg, op):
msg = self._decode(nl_msg)
if op is None:
op = ynl.rsp_by_value[msg.cmd()]
fixed_header_size = ynl._struct_size(op.fixed_header)
msg.raw_attrs = NlAttrs(msg.raw, fixed_header_size)
return msg
@ -921,8 +923,7 @@ class YnlFamily(SpecFamily):
print("Netlink done while checking for ntf!?")
continue
op = self.rsp_by_value[nl_msg.cmd()]
decoded = self.nlproto.decode(self, nl_msg, op)
decoded = self.nlproto.decode(self, nl_msg, None)
if decoded.cmd() not in self.async_msg_ids:
print("Unexpected msg id done while checking for ntf", decoded)
continue
@ -980,7 +981,7 @@ class YnlFamily(SpecFamily):
if nl_msg.extack:
self._decode_extack(req_msg, op, nl_msg.extack)
else:
op = self.rsp_by_value[nl_msg.cmd()]
op = None
req_flags = []
if nl_msg.error:

@ -85,7 +85,8 @@ TEST_GEN_PROGS += so_incoming_cpu
TEST_PROGS += sctp_vrf.sh
TEST_GEN_FILES += sctp_hello
TEST_GEN_FILES += ip_local_port_range
TEST_GEN_FILES += bind_wildcard
TEST_GEN_PROGS += bind_wildcard
TEST_GEN_PROGS += bind_timewait
TEST_PROGS += test_vxlan_mdb.sh
TEST_PROGS += test_bridge_neigh_suppress.sh
TEST_PROGS += test_vxlan_nolocalbypass.sh