linux/drivers/net/ethernet/stmicro/stmmac/stmmac.h
/* SPDX-License-Identifier: GPL-2.0-only */
/*******************************************************************************
  Copyright (C) 2007-2009 STMicroelectronics Ltd

  Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
*******************************************************************************/
#ifndef __STMMAC_H__
#define __STMMAC_H__
#define STMMAC_RESOURCE_NAME "stmmaceth"
#include <linux/clk.h>
#include <linux/hrtimer.h>
#include <linux/if_vlan.h>
#include <linux/stmmac.h>
#include <linux/phylink.h>
#include <linux/pci.h>
#include "common.h"
#include <linux/ptp_clock_kernel.h>
#include <linux/net_tstamp.h>
#include <linux/reset.h>
#include <net/page_pool.h>
#include <uapi/linux/bpf.h>
struct stmmac_resources {
	void __iomem *addr;
	u8 mac[ETH_ALEN];
	int wol_irq;
	int lpi_irq;
	int irq;
	int sfty_ce_irq;
	int sfty_ue_irq;
	int rx_irq[MTL_MAX_RX_QUEUES];
	int tx_irq[MTL_MAX_TX_QUEUES];
};
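/*
 * Illustrative sketch (not part of this header): a platform "glue" driver
 * typically zero-initializes stmmac_resources, fills in what it knows, and
 * hands it to stmmac_dvr_probe(). foo_dwmac_probe() and the plat_dat setup
 * are hypothetical; error handling is omitted.
 *
 *	static int foo_dwmac_probe(struct platform_device *pdev)
 *	{
 *		struct stmmac_resources res = {};
 *
 *		res.addr = devm_platform_ioremap_resource(pdev, 0);
 *		res.irq = platform_get_irq(pdev, 0);
 *		return stmmac_dvr_probe(&pdev->dev, plat_dat, &res);
 *	}
 */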
enum stmmac_txbuf_type {
	STMMAC_TXBUF_T_SKB,
	STMMAC_TXBUF_T_XDP_TX,
	STMMAC_TXBUF_T_XDP_NDO,
	STMMAC_TXBUF_T_XSK_TX,
};
struct stmmac_tx_info {
	dma_addr_t buf;
	bool map_as_page;
	unsigned int len;
	bool last_segment;
	bool is_jumbo;
	enum stmmac_txbuf_type buf_type;
};
#define STMMAC_TBS_AVAIL BIT(0)
#define STMMAC_TBS_EN BIT(1)
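/*
 * Illustrative sketch: 'tbs' in stmmac_tx_queue is a mask of the flags
 * above; time-based scheduling may only be enabled on queues where the
 * hardware advertises it ('want_etf' is a hypothetical condition):
 *
 *	if ((tx_q->tbs & STMMAC_TBS_AVAIL) && want_etf)
 *		tx_q->tbs |= STMMAC_TBS_EN;
 */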
/* Frequently used values are kept adjacent for cache effect */
struct stmmac_tx_queue {
	u32 tx_count_frames;
	int tbs;
	struct hrtimer txtimer;
	u32 queue_index;
	struct stmmac_priv *priv_data;
	struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp;
	struct dma_edesc *dma_entx;
	struct dma_desc *dma_tx;
	union {
		struct sk_buff **tx_skbuff;
		struct xdp_frame **xdpf;
	};
	struct stmmac_tx_info *tx_skbuff_dma;
	struct xsk_buff_pool *xsk_pool;
	u32 xsk_frames_done;
	unsigned int cur_tx;
	unsigned int dirty_tx;
	dma_addr_t dma_tx_phy;
	dma_addr_t tx_tail_addr;
	u32 mss;
};
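/*
 * Illustrative sketch (assumed helper): cur_tx is the producer index and
 * dirty_tx the consumer index, both wrapping at priv->dma_tx_size, so the
 * number of free descriptors follows the usual ring arithmetic:
 *
 *	static unsigned int foo_tx_avail(struct stmmac_priv *priv, u32 queue)
 *	{
 *		struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
 *
 *		if (tx_q->dirty_tx > tx_q->cur_tx)
 *			return tx_q->dirty_tx - tx_q->cur_tx - 1;
 *		return priv->dma_tx_size - tx_q->cur_tx + tx_q->dirty_tx - 1;
 *	}
 */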
struct stmmac_rx_buffer {
	union {
		struct {
			struct page *page;
			dma_addr_t addr;
			__u32 page_offset;
		};
		struct xdp_buff *xdp;
	};
	struct page *sec_page;
	dma_addr_t sec_addr;
};
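/*
 * The anonymous union reflects the two RX buffer backings: page/addr/
 * page_offset describe a page_pool buffer, while xdp points at an XSK
 * (AF_XDP zero-copy) buffer; a queue uses one backing or the other.
 * Illustrative sketch of resolving the DMA address accordingly:
 *
 *	dma_addr_t addr = rx_q->xsk_pool ? xsk_buff_xdp_get_dma(buf->xdp)
 *					 : buf->addr;
 */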
struct stmmac_rx_queue {
	u32 rx_count_frames;
	u32 queue_index;
	struct xdp_rxq_info xdp_rxq;
	struct xsk_buff_pool *xsk_pool;
	struct page_pool *page_pool;
	struct stmmac_rx_buffer *buf_pool;
	struct stmmac_priv *priv_data;
	struct dma_extended_desc *dma_erx;
	struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
	unsigned int cur_rx;
	unsigned int dirty_rx;
	unsigned int buf_alloc_num;
	u32 rx_zeroc_thresh;
	dma_addr_t dma_rx_phy;
	u32 rx_tail_addr;
	unsigned int state_saved;
	struct {
		struct sk_buff *skb;
		unsigned int len;
		unsigned int error;
	} state;
};
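/*
 * Illustrative sketch: when the NAPI budget runs out mid-frame, the RX
 * path parks the partially built skb in 'state' and picks it up again on
 * the next poll:
 *
 *	if (rx_q->state_saved) {
 *		skb = rx_q->state.skb;
 *		len = rx_q->state.len;
 *		rx_q->state_saved = 0;
 *	} else {
 *		skb = NULL;
 *		len = 0;
 *	}
 */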
struct stmmac_channel {
	struct napi_struct rx_napi ____cacheline_aligned_in_smp;
	struct napi_struct tx_napi ____cacheline_aligned_in_smp;
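	/* Combined poller used in place of rx_napi/tx_napi when this
	 * channel's rings are bound to an XSK (AF_XDP zero-copy) pool.
	 */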
	struct napi_struct rxtx_napi ____cacheline_aligned_in_smp;
	struct stmmac_priv *priv_data;
	spinlock_t lock;
	u32 index;
};
struct stmmac_tc_entry {
	bool in_use;
	bool in_hw;
	bool is_last;
	bool is_frag;
	void *frag_ptr;
	unsigned int table_pos;
	u32 handle;
	u32 prio;
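	/* Packed match entry in the layout the hardware parser table expects */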
	struct {
		u32 match_data;
		u32 match_en;
		u8 af:1;
		u8 rf:1;
		u8 im:1;
		u8 nc:1;
		u8 res1:4;
		u8 frame_offset;
		u8 ok_index;
		u8 dma_ch_no;
		u32 res2;
	} __packed val;
};
#define STMMAC_PPS_MAX 4
struct stmmac_pps_cfg {
	bool available;
	struct timespec64 start;
	struct timespec64 period;
};
struct stmmac_rss {
	int enable;
	u8 key[STMMAC_RSS_HASH_KEY_SIZE];
	u32 table[STMMAC_RSS_MAX_TABLE_SIZE];
};
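/*
 * Illustrative sketch (assumed default): absent user configuration, the
 * indirection table is typically spread evenly over the RX queues
 * ('rx_queues_cnt' stands for the configured RX queue count):
 *
 *	for (i = 0; i < STMMAC_RSS_MAX_TABLE_SIZE; i++)
 *		rss->table[i] = ethtool_rxfh_indir_default(i, rx_queues_cnt);
 */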
#define STMMAC_FLOW_ACTION_DROP BIT(0)
struct stmmac_flow_entry {
	unsigned long cookie;
	unsigned long action;
	u8 ip_proto;
	int in_use;
	int idx;
	int is_l4;
};
/* Rx Frame Steering */
enum stmmac_rfs_type {
	STMMAC_RFS_T_VLAN,
	STMMAC_RFS_T_LLDP,
	STMMAC_RFS_T_1588,
	STMMAC_RFS_T_MAX,
};
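/*
 * Driver-side record of a flow_cls_offload rule for RX frame steering
 * (currently VLAN priority). Keeping a private copy keyed by the tc
 * cookie lets the delete path find the rule again without calling
 * flow_cls_offload_flow_rule(), whose rule pointer is not valid on
 * FLOW_CLS_DESTROY. Illustrative lookup sketch:
 *
 *	for (i = 0; i < priv->rfs_entries_total; i++)
 *		if (priv->rfs_entries[i].cookie == cls->cookie)
 *			return &priv->rfs_entries[i];
 */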
struct stmmac_rfs_entry {
	unsigned long cookie;
	u16 etype;
	int in_use;
	int type;
	int tc;
};
struct stmmac_priv {
	/* Frequently used values are kept adjacent for cache effect */
	u32 tx_coal_frames[MTL_MAX_TX_QUEUES];
	u32 tx_coal_timer[MTL_MAX_TX_QUEUES];
	u32 rx_coal_frames[MTL_MAX_TX_QUEUES];
	int hwts_tx_en;
	bool tx_path_in_lpi_mode;
	bool tso;
	int sph;
	int sph_cap;
	u32 sarc_type;
	unsigned int dma_buf_sz;
	unsigned int rx_copybreak;
	u32 rx_riwt[MTL_MAX_TX_QUEUES];
	int hwts_rx_en;
	void __iomem *ioaddr;
	struct net_device *dev;
	struct device *device;
	struct mac_device_info *hw;
	int (*hwif_quirks)(struct stmmac_priv *priv);
	struct mutex lock;
	/* RX Queue */
	struct stmmac_rx_queue rx_queue[MTL_MAX_RX_QUEUES];
	unsigned int dma_rx_size;
	/* TX Queue */
	struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES];
	unsigned int dma_tx_size;
	/* Generic channel for NAPI */
	struct stmmac_channel channel[STMMAC_CH_MAX];
	int speed;
	unsigned int flow_ctrl;
	unsigned int pause;
	struct mii_bus *mii;
	struct phylink_config phylink_config;
	struct phylink *phylink;
	struct stmmac_extra_stats xstats ____cacheline_aligned_in_smp;
	struct stmmac_safety_stats sstats;
	struct plat_stmmacenet_data *plat;
	struct dma_features dma_cap;
	struct stmmac_counters mmc;
	int hw_cap_support;
	int synopsys_id;
	u32 msg_enable;
	int wolopts;
	int wol_irq;
	int clk_csr;
	struct timer_list eee_ctrl_timer;
	int lpi_irq;
	int eee_enabled;
	int eee_active;
	int tx_lpi_timer;
	int tx_lpi_enabled;
	int eee_tw_timer;
	bool eee_sw_timer_en;
	unsigned int mode;
	unsigned int chain_mode;
	int extend_desc;
	struct hwtstamp_config tstamp_config;
	struct ptp_clock *ptp_clock;
	struct ptp_clock_info ptp_clock_ops;
	unsigned int default_addend;
	u32 sub_second_inc;
	u32 systime_flags;
	u32 adv_ts;
	int use_riwt;
	int irq_wake;
	rwlock_t ptp_lock;
	/* Protects auxiliary snapshot registers from concurrent access. */
	struct mutex aux_ts_lock;
	void __iomem *mmcaddr;
	void __iomem *ptpaddr;
	unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
	int sfty_ce_irq;
	int sfty_ue_irq;
	int rx_irq[MTL_MAX_RX_QUEUES];
	int tx_irq[MTL_MAX_TX_QUEUES];
	/* IRQ names */
	char int_name_mac[IFNAMSIZ + 9];
	char int_name_wol[IFNAMSIZ + 9];
	char int_name_lpi[IFNAMSIZ + 9];
	char int_name_sfty_ce[IFNAMSIZ + 10];
	char int_name_sfty_ue[IFNAMSIZ + 10];
	char int_name_rx_irq[MTL_MAX_TX_QUEUES][IFNAMSIZ + 14];
	char int_name_tx_irq[MTL_MAX_TX_QUEUES][IFNAMSIZ + 18];
#ifdef CONFIG_DEBUG_FS
	struct dentry *dbgfs_dir;
#endif
	unsigned long state;
	struct workqueue_struct *wq;
	struct work_struct service_task;
	/* Workqueue for handling FPE hand-shaking */
	unsigned long fpe_task_state;
	struct workqueue_struct *fpe_wq;
	struct work_struct fpe_task;
	char wq_name[IFNAMSIZ + 4];
	/* TC Handling */
	unsigned int tc_entries_max;
	unsigned int tc_off_max;
	struct stmmac_tc_entry *tc_entries;
	unsigned int flow_entries_max;
	struct stmmac_flow_entry *flow_entries;
	unsigned int rfs_entries_max[STMMAC_RFS_T_MAX];
	unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX];
	unsigned int rfs_entries_total;
	struct stmmac_rfs_entry *rfs_entries;
	/* Pulse Per Second output */
	struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
	/* Receive Side Scaling */
	struct stmmac_rss rss;
	/* XDP BPF Program */
	unsigned long *af_xdp_zc_qps;
	struct bpf_prog *xdp_prog;
};
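/*
 * stmmac_priv is the net_device private area, so driver paths recover it
 * from a net_device with netdev_priv(); illustrative sketch:
 *
 *	struct stmmac_priv *priv = netdev_priv(dev);
 */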
enum stmmac_state {
	STMMAC_DOWN,
	STMMAC_RESET_REQUESTED,
	STMMAC_RESETING,
	STMMAC_SERVICE_SCHED,
};
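/*
 * Illustrative sketch: these values are bit numbers in priv->state and are
 * used with the atomic bitops, e.g. to schedule the service task:
 *
 *	if (!test_bit(STMMAC_DOWN, &priv->state) &&
 *	    !test_and_set_bit(STMMAC_SERVICE_SCHED, &priv->state))
 *		queue_work(priv->wq, &priv->service_task);
 */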
int stmmac_mdio_unregister(struct net_device *ndev);
int stmmac_mdio_register(struct net_device *ndev);
int stmmac_mdio_reset(struct mii_bus *mii);
int stmmac_xpcs_setup(struct mii_bus *mii);
void stmmac_set_ethtool_ops(struct net_device *netdev);
int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags);
void stmmac_ptp_register(struct stmmac_priv *priv);
void stmmac_ptp_unregister(struct stmmac_priv *priv);
int stmmac_xdp_open(struct net_device *dev);
void stmmac_xdp_release(struct net_device *dev);
int stmmac_resume(struct device *dev);
int stmmac_suspend(struct device *dev);
int stmmac_dvr_remove(struct device *dev);
int stmmac_dvr_probe(struct device *device,
		     struct plat_stmmacenet_data *plat_dat,
		     struct stmmac_resources *res);
void stmmac_disable_eee_mode(struct stmmac_priv *priv);
bool stmmac_eee_init(struct stmmac_priv *priv);
int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt);
int stmmac_reinit_ringparam(struct net_device *dev, u32 rx_size, u32 tx_size);
int stmmac_bus_clks_config(struct stmmac_priv *priv, bool enabled);
void stmmac_fpe_handshake(struct stmmac_priv *priv, bool enable);
static inline bool stmmac_xdp_is_enabled(struct stmmac_priv *priv)
{
	return !!priv->xdp_prog;
}
static inline unsigned int stmmac_rx_offset(struct stmmac_priv *priv)
{
	if (stmmac_xdp_is_enabled(priv))
		return XDP_PACKET_HEADROOM;

	return 0;
}
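/*
 * Illustrative sketch: the RX path reserves this headroom in front of each
 * frame so an attached XDP program can push headers, e.g. when preparing
 * an xdp_buff around a page_pool buffer ('len' is the received length):
 *
 *	xdp_prepare_buff(&xdp, page_address(buf->page),
 *			 stmmac_rx_offset(priv), len, true);
 */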
void stmmac_disable_rx_queue(struct stmmac_priv *priv, u32 queue);
void stmmac_enable_rx_queue(struct stmmac_priv *priv, u32 queue);
void stmmac_disable_tx_queue(struct stmmac_priv *priv, u32 queue);
void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue);
int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags);
struct timespec64 stmmac_calc_tas_basetime(ktime_t old_base_time,
					   ktime_t current_time,
					   u64 cycle_time);
#if IS_ENABLED(CONFIG_STMMAC_SELFTESTS)
void stmmac_selftest_run(struct net_device *dev,
			 struct ethtool_test *etest, u64 *buf);
void stmmac_selftest_get_strings(struct stmmac_priv *priv, u8 *data);
int stmmac_selftest_get_count(struct stmmac_priv *priv);
#else
static inline void stmmac_selftest_run(struct net_device *dev,
				       struct ethtool_test *etest, u64 *buf)
{
	/* Not enabled */
}
static inline void stmmac_selftest_get_strings(struct stmmac_priv *priv,
					       u8 *data)
{
	/* Not enabled */
}
static inline int stmmac_selftest_get_count(struct stmmac_priv *priv)
{
	return -EOPNOTSUPP;
}
#endif /* CONFIG_STMMAC_SELFTESTS */
#endif /* __STMMAC_H__ */