Commit Graph

35849 Commits

Author SHA1 Message Date
George Cherian
fae06da4f2 octeontx2-af: Add devlink support to af driver
Add devlink support to the AF driver. Basic devlink support is added;
currently info_get is the only supported devlink op.

devlink output looks like this:
 # devlink dev
 pci/0002:01:00.0
 # devlink dev info
 pci/0002:01:00.0:
  driver octeontx2-af
 #
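
A minimal sketch of what such an info_get op can look like (the symbol
names here are illustrative, not necessarily the AF driver's):

    static int rvu_devlink_info_get(struct devlink *devlink,
                                    struct devlink_info_req *req,
                                    struct netlink_ext_ack *extack)
    {
            /* only the driver name is reported for now */
            return devlink_info_driver_name_put(req, "octeontx2-af");
    }

    static const struct devlink_ops rvu_devlink_ops = {
            .info_get = rvu_devlink_info_get,
    };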

Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: George Cherian <george.cherian@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-14 17:49:28 -08:00
Linus Torvalds
9e4b0d55d8 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "API:
   - Add speed testing on 1420-byte blocks for networking

  Algorithms:
   - Improve performance of chacha on ARM for network packets
   - Improve performance of aegis128 on ARM for network packets

  Drivers:
   - Add support for Keem Bay OCS AES/SM4
   - Add support for QAT 4xxx devices
   - Enable crypto-engine retry mechanism in caam
   - Enable support for crypto engine on sdm845 in qce
   - Add HiSilicon PRNG driver support"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (161 commits)
  crypto: qat - add capability detection logic in qat_4xxx
  crypto: qat - add AES-XTS support for QAT GEN4 devices
  crypto: qat - add AES-CTR support for QAT GEN4 devices
  crypto: atmel-i2c - select CONFIG_BITREVERSE
  crypto: hisilicon/trng - replace atomic_add_return()
  crypto: keembay - Add support for Keem Bay OCS AES/SM4
  dt-bindings: Add Keem Bay OCS AES bindings
  crypto: aegis128 - avoid spurious references to crypto_aegis128_update_simd
  crypto: seed - remove trailing semicolon in macro definition
  crypto: x86/poly1305 - Use TEST %reg,%reg instead of CMP $0,%reg
  crypto: x86/sha512 - Use TEST %reg,%reg instead of CMP $0,%reg
  crypto: aesni - Use TEST %reg,%reg instead of CMP $0,%reg
  crypto: cpt - Fix sparse warnings in cptpf
  hwrng: ks-sa - Add dependency on IOMEM and OF
  crypto: lib/blake2s - Move selftest prototype into header file
  crypto: arm/aes-ce - work around Cortex-A57/A72 silicon errata
  crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()
  crypto: ccree - rework cache parameters handling
  crypto: cavium - Use dma_set_mask_and_coherent to simplify code
  crypto: marvell/octeontx - Use dma_set_mask_and_coherent to simplify code
  ...
2020-12-14 12:18:19 -08:00
Jakub Kicinski
46d5e62dd3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
xdp_return_frame_bulk() needs to pass an xdp_buff
to __xdp_return().

strlcpy got converted to strscpy but here it makes no
functional difference, so just keep the right code.

Conflicts:
	net/netfilter/nf_tables_api.c

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-11 22:29:38 -08:00
David S. Miller
d9838b1d39 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Alexei Starovoitov says:

====================
pull-request: bpf 2020-12-10

The following pull-request contains BPF updates for your *net* tree.

We've added 21 non-merge commits during the last 12 day(s) which contain
a total of 21 files changed, 163 insertions(+), 88 deletions(-).

The main changes are:

1) Fix propagation of 32-bit signed bounds from 64-bit bounds, from Alexei.

2) Fix ring_buffer__poll() return value, from Andrii.

3) Fix race in lwt_bpf, from Cong.

4) Fix test_offload, from Toke.

5) Various xsk fixes.

Please consider pulling these changes from:

  git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git

Thanks a lot!

Also thanks to reporters, reviewers and testers of commits in this pull-request:

Cong Wang, Hulk Robot, Jakub Kicinski, Jean-Philippe Brucker, John
Fastabend, Magnus Karlsson, Maxim Mikityanskiy, Yonghong Song
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 14:29:30 -08:00
Willy Tarreau
1d608d2e0d Revert "macb: support the two tx descriptors on at91rm9200"
This reverts commit 0a4e9ce17b.

The code was developed and tested on an MSC313E SoC, which seems to be
half-way between the AT91RM9200 and the AT91SAM9260 in that it supports
both the 2-descriptors mode and a Tx ring.

It turns out that after the code was merged I could notice that the
controller would sometimes lock up, and only when dealing with sustained
bidirectional transfers, in which case it would report a Tx overrun
condition right after having reported being ready, and will stop sending
even after the status is cleared (a down/up cycle fixes it though).

After adding lots of traces I couldn't spot a sequence pattern that allows
predicting when this situation will happen. The chip comes with no
documentation and other bits are often reported with no conclusive
pattern either.

It is possible that my change is wrong just like it is possible that
the controller on the chip is bogus or at least unpredictable based on
existing docs from other chips. I do not have an RM9200 at hand to test
at the moment and a few tests run on a more recent 9G20 indicate that
this code path cannot be used there to test the code on a 3rd platform.

Since the MSC313E works fine in the single-descriptor mode, and people
using the old RM9200 very likely favor stability over performance, it is
better to revert this patch until we can test it on the original platform
this part of the driver was written for. Note that the reverted patch
was actually tested on MSC313E.

Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
Cc: Claudiu Beznea <claudiu.beznea@microchip.com>
Cc: Daniel Palmer <daniel@0x0f.com>
Cc: Alexandre Belloni <alexandre.belloni@bootlin.com>
Link: https://lore.kernel.org/netdev/20201206092041.GA10646@1wt.eu/
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 13:32:33 -08:00
Subash Abhinov Kasiviswanathan
b7f5eb6ba2 net: qualcomm: rmnet: Update rmnet device MTU based on real device
Packets sent by rmnet to the real device have variable MAP header
lengths based on the data format configured. This patch adds checks
to ensure that the real device MTU is sufficient to transmit the MAP
packet comprising of the MAP header and the IP packet. This check
is enforced when rmnet devices are created and updated and during
MTU updates of both the rmnet and real device.

Additionally, rmnet devices now have a default MTU configured which
accounts for the real device MTU and the headroom based on the data
format.
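
A rough sketch of the kind of check described above (the helper name and
the map_headroom parameter are illustrative, not the driver's exact API):

    /* Reject an rmnet MTU that, together with the MAP header room required
     * by the configured data format, would not fit into the real device MTU.
     */
    static int rmnet_vnd_validate_mtu(struct net_device *real_dev,
                                      u32 map_headroom, int new_mtu)
    {
            if (new_mtu < 0 || new_mtu + map_headroom > real_dev->mtu)
                    return -EINVAL;

            return 0;
    }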

Signed-off-by: Sean Tranchetti <stranche@codeaurora.org>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Tested-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 13:30:26 -08:00
Sasha Neftin
bfa5e98c9d igc: Add new device ID
Add a new device ID for the next silicon step, reflecting
the I226_K part.

Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 13:13:24 -08:00
Zheng Yongjun
a76b6b1fe8 net: mediatek: simplify the return expression of mtk_gmac_sgmii_path_setup()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 13:02:16 -08:00
Zheng Yongjun
b18cac546b net/mlx4: simplify the return expression of mlx4_init_srq_table()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 13:00:51 -08:00
Zheng Yongjun
ec73c31dfb net: stmmac: simplify the return tc_delete_knode()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-10 12:58:55 -08:00
Guojia Liao
cdab7c9779 net: hns3: adjust rss tc mode configure command
The max rss size of the PF may be up to 512, so the max queue
number of a single tc may be up to 512 too, and the total queue
number may be up to 1280, which means the queue offset of each tc
may be more than 1024. So adjust the rss tc mode configuration
command: extend the tc offset field from 10 bits to 11 bits, and
the tc size field from 3 bits to 4 bits.

Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:20 -08:00
Guojia Liao
8eeb1f4bce net: hns3: adjust rss indirection table configure command
The max rss size of the PF may be up to 512, so adjust the
command for configuring the rss indirection table to support
queue ids larger than 255. The width of the queue id is extended
from 8 bits to 10 bits; the high 2 bits are stored in the field
rss_qid_h when the queue id is larger than 255.
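
Roughly, the split looks like this (field names and packing are
illustrative only):

    /* a queue id now occupies 10 bits: the low 8 bits stay in the original
     * per-entry field, the high 2 bits go into the new rss_qid_h field
     */
    req->rss_qid_l[i] = qid & 0xff;
    req->rss_qid_h |= ((qid >> 8) & 0x3) << (i * 2);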

Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:20 -08:00
Guojia Liao
f1c2e66d7f net: hns3: add support for max 512 rss size
Currently, the driver gets the max rss size from the configuration
file during initialization. Both the PF and VF share the same
max rss size, which is no more than 128.

For DEVICE_VERSION_V3, the max rss size of the PF can be up to
512, so there is a new field in the configuration file to store it;
the old field is used for the VF.

To be compatible with boards using an old configuration file, the PF
will use the old field if the new one is zero.

Since the rss size may be larger than 256, the type of
rss_indirection_tbl in struct hclge_vport should be changed to
u16 as well.

Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:20 -08:00
Jian Shen
0205ec041e net: hns3: add support for hw tc offload of tc flower
Some new devices support forwarding packets to the queues of a
specified TC when a flow director rule hits, so add support for
configuring flow director rules via tc flower. To avoid rule
conflicts, add a new flow director mode HCLGE_FD_TC_FLOWER_ACTIVE;
only one mode can be active at the same time.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:19 -08:00
Jian Shen
0f993fe2b8 net: hns3: add support for forwarding packet to queues of specified TC when flow director rule hit
Some new devices support forwarding packets to the queues of a
specified TC when a flow director rule hits, so extend the
command handling to support it.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:19 -08:00
Jian Shen
5a5c909174 net: hns3: add support for tc mqprio offload
Currently, the HNS3 driver only supports offload for the tc number
and prio_tc. This patch adds support for other qopts, including the
queue count and offset of each tc.

When tc mqprio offload is enabled, it's not allowed to change the
queue number via ethtool. Due to a hardware limitation, the queue
number of each tc should be a power of 2.

Since the queues are not assigned to each tc evenly,
hclge_get_max_channels() should return vport->alloc_tqps.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:19 -08:00
Jian Shen
35244430d6 net: hns3: refine the struct hnae3_tc_info
Currently, there are multiple members related to tc information
in struct hnae3_knic_private_info. Merge them into a new struct
hnae3_tc_info.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 20:33:19 -08:00
Jakub Kicinski
c0ead5552c nfp: silence set but not used warning with IPV6=n
Test robot reports:

drivers/net/ethernet/netronome/nfp/crypto/tls.c: In function 'nfp_net_tls_rx_resync_req':
drivers/net/ethernet/netronome/nfp/crypto/tls.c:477:18: warning: variable 'ipv6h' set but not used [-Wunused-but-set-variable]
  477 |  struct ipv6hdr *ipv6h;
      |                  ^~~~~
In file included from include/linux/compiler_types.h:65,
                    from <command-line>:
drivers/net/ethernet/netronome/nfp/crypto/tls.c: In function 'nfp_net_tls_add':
include/linux/compiler_attributes.h:208:41: warning: statement will never be executed [-Wswitch-unreachable]
  208 | # define fallthrough                    __attribute__((__fallthrough__))
      |                                         ^~~~~~~~~~~~~
drivers/net/ethernet/netronome/nfp/crypto/tls.c:299:3: note: in expansion of macro 'fallthrough'
  299 |   fallthrough;
      |   ^~~~~~~~~~~

Use the IPv6 header in the switch; it doesn't matter which header
we use to read the version field.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 19:42:03 -08:00
Wong Vee Khee
523437d7b5 net: stmmac: allow stmmac to probe for C45 PHY devices
Assign stmmac's mdio_bus probe capabilities to MDIOBUS_C22_C45.
This extends probing to C45 PHY devices on the MDIO bus.

Signed-off-by: Wong Vee Khee <vee.khee.wong@intel.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 19:40:09 -08:00
Zheng Yongjun
016ade51a7 net/mlx4: simplify the return expression of mlx4_init_cq_table()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 19:20:46 -08:00
Dwip N. Banerjee
c2af62256e ibmvnic: fix rx buffer tracking and index management in replenish_rx_pool partial success
We observed that in the error case for batched send_subcrq_indirect() the
driver does not account for partial success. This caused Linux to
crash when free_map and the pool index became inconsistent.

The driver needs to update the rx pool's "available" count when some batched
sends worked but an error was encountered as part of the whole operation.
Also track replenish_add_buff_failure for statistics purposes.

Fixes: 4f0b6812e9 ("ibmvnic: Introduce batched RX buffer descriptor transmission")
Signed-off-by: Dwip N. Banerjee <dnbanerg@us.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 19:06:10 -08:00
David S. Miller
dc528d5bcc Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:

====================
100GbE Intel Wired LAN Driver Updates 2020-12-09

This series contains updates to ice driver only.

Bruce changes the allocation of ice_flow_prof_params from stack to heap to
avoid excessive stack usage. Corrects a misleading comment and silences a
sparse warning that is not a problem.

Paul allows for HW initialization to continue if PHY abilities cannot
be obtained.

Jeb removes bypassing FW link override and reading Option ROM and
netlist information for non-E810 devices as this is now available on
other devices.

Nick removes vlan_ena field as this information can be gathered by
checking num_vlan.

Jake combines format strings and debug prints to the same line.

Simon adds a space to fix string concatenation.

v4: Drop ACL patches. Change PHY abilities failure message from debug to
warning.
v3: Fix email address for DaveM and fix character in cover letter
v2: Expand on commit message for patch 3 to show example usage/commands.
    Reduce number of defensive checks being done.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 18:51:59 -08:00
David S. Miller
88287773ff Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue
Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2020-12-09

This series contains updates to igb, ixgbe, i40e, and ice drivers.

Sven Auhagen fixes issues with igb XDP: return correct error value in XDP
xmit back, increase header padding to include space for double VLAN, add
an extack error when Rx buffer is too small for frame size, set metasize if
it is set in xdp, change xdp_do_flush_map to xdp_do_flush, and update
trans_start to avoid possible Tx timeout.

Björn fixes an issue where an Rx buffer can be reused prematurely with
XDP redirect for ixgbe, i40e, and ice drivers.

The following are changes since commit 323a391a22:
  can: isotp: isotp_setsockopt(): block setsockopt on bound sockets
and are available in the git repository at:
  git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue 1GbE
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 18:48:29 -08:00
Zheng Yongjun
b8d909375d net: marvell: octeontx2: simplify the otx2_ptp_adjfine()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:37 -08:00
Zheng Yongjun
6f2d5cf975 net: stmmac: simplify the return dwmac5_rxp_disable()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:37 -08:00
Zheng Yongjun
f75e594458 net: hinic: simplify the return hinic_configure_max_qnum()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:37 -08:00
Zheng Yongjun
264386fc19 net: freescale: dpaa: simplify the return dpaa_eth_refill_bpools()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:37 -08:00
Zheng Yongjun
d867bc3a26 net: cisco: enic: simplify the return vnic_cq_alloc()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:37 -08:00
Zheng Yongjun
dd0e7aabca net: emulex: benet: simplify the return expression of be_if_create()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:36 -08:00
Zheng Yongjun
8e3bf53c61 net: marvell: octeontx2: simplify the return expression of rvu_npa_init()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:36 -08:00
Zheng Yongjun
05372c456f net: marvell: prestera: simplify the return expression of prestera_port_close()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 17:05:36 -08:00
Moshe Shemesh
ba603d9d7b net/mlx4_en: Handle TX error CQE
In case an error CQE is found while polling the TX CQ, the QP is in error
state and all posted WQEs will generate error CQEs without any data
being transmitted. Fix it by reopening the channels, via the same method
used for TX timeout handling.

In addition, add some more debug info on the error CQE and WQE.

Fixes: bd2f631d7c ("net/mlx4_en: Notify user when TX ring in error state")
Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:44:35 -08:00
Moshe Shemesh
fed91613c9 net/mlx4_en: Avoid scheduling restart task if it is already running
Add a restarting state flag to avoid scheduling another restart task while
one is already running. Change the task name from watchdog_task to
restart_task to better fit the task's role.

Fixes: 1e338db56e ("mlx4_en: Fix a race at restart task")
Signed-off-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:44:35 -08:00
Zheng Yongjun
af89784eb6 net: freescale: convert comma to semicolon
Replace a comma between expression statements by a semicolon.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:23:08 -08:00
Zheng Yongjun
011446cd2f net: ethernet: ti: convert comma to semicolon
Replace a comma between expression statements by a semicolon.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:23:08 -08:00
Zheng Yongjun
474d8feffb hisilicon/hns3: convert comma to semicolon
Replace a comma between expression statements by a semicolon.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:23:08 -08:00
Zheng Yongjun
3d4068b24c hisilicon/hns: convert comma to semicolon
Replace a comma between expression statements by a semicolon.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:23:08 -08:00
Zheng Yongjun
873d2f1216 net: mlx5: convert comma to semicolon
Replace a comma between expression statements by a semicolon.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:23:08 -08:00
Zheng Yongjun
eba251f2e6 net: micrel: convert comma to semicolon
Replace a comma between expression statements by a semicolon.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:23:08 -08:00
Claudiu Beznea
700d566e81 net: macb: add support for sama7g5 emac interface
Add support for SAMA7G5 10/100Mbps interface.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:19:21 -08:00
Claudiu Beznea
ec771de654 net: macb: add support for sama7g5 gem interface
Add support for SAMA7G5 gigabit ethernet interface.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:19:21 -08:00
Claudiu Beznea
f4de93f03e net: macb: unprepare clocks in case of failure
Unprepare clocks in case of any failure in fu540_c000_clk_init().

Fixes: c218ad5590 ("macb: Add support for SiFive FU540-C000")
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:19:21 -08:00
Claudiu Beznea
38493da4e6 net: macb: add function to disable all macb clocks
Add function to disable all macb clocks.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:19:20 -08:00
Claudiu Beznea
daafa1d33c net: macb: add capability to not set the clock rate
The TX clock of SAMA7G5's ethernet IPs can be provided either by its generic
clock or by the external clock provided by the PHY. The internal IP logic
divides this clock properly depending on the link speed. The patch adds a new
capability so that macb_set_tx_clock() is not called for IPs having this
capability (in case of the generic clock, the clock rate is set at boot time
via device tree and the driver only enables it).

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:19:20 -08:00
Claudiu Beznea
edac63861d net: macb: add userio bits as platform configuration
This is necessary for SAMA7G5 as it uses different values for the
PHY interface and also introduces the hdfctlen bit.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-09 16:19:20 -08:00
Vitaly Lifshits
a379b01cd4 e1000e: fix S0ix flow to allow S0i3.2 subset entry
Changed a configuration in the flows to align with
architecture requirements to achieve S0i3.2 substate.

This helps both i219V and i219LM configurations.

Also fixed a typo in the previous commit 632fbd5eb5
("e1000e: fix S0ix flows for cable connected case").

Fixes: 632fbd5eb5 ("e1000e: fix S0ix flows for cable connected case").
Signed-off-by: Vitaly Lifshits <vitaly.lifshits@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: Mario Limonciello <mario.limonciello@dell.com>
Link: https://lore.kernel.org/r/20201208185632.151052-1-mario.limonciello@dell.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:41:34 -08:00
Björn Töpel
1beb7830d3 ice: avoid premature Rx buffer reuse
The page recycle code incorrectly relied on the assumption that a page
fragment cannot be freed inside xdp_do_redirect(). Because of this, page
fragments that are still in use by the stack/XDP redirect can be
reused and overwritten.

To avoid this, store the page count prior to invoking xdp_do_redirect().

Fixes: efc2214b60 ("ice: Add support for XDP")
Reported-and-analyzed-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Björn Töpel
a06316dc87 ixgbe: avoid premature Rx buffer reuse
The page recycle code incorrectly relied on the assumption that a page
fragment cannot be freed inside xdp_do_redirect(). Because of this, page
fragments that are still in use by the stack/XDP redirect can be
reused and overwritten.

To avoid this, store the page count prior to invoking xdp_do_redirect().

Fixes: 6453073987 ("ixgbe: add initial support for xdp redirect")
Reported-and-analyzed-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Björn Töpel
75aab4e10a i40e: avoid premature Rx buffer reuse
The page recycle code incorrectly relied on the assumption that a page
fragment cannot be freed inside xdp_do_redirect(). Because of this, page
fragments that are still in use by the stack/XDP redirect can be
reused and overwritten.

To avoid this, store the page count prior to invoking xdp_do_redirect().

Longer explanation:

Intel NICs have a recycle mechanism. The main idea is that a page is
split into two parts. One part is owned by the driver, one part might
be owned by someone else, such as the stack.

t0: Page is allocated, and put on the Rx ring
              +---------------
used by NIC ->| upper buffer
(rx_buffer)   +---------------
              | lower buffer
              +---------------
  page count  == USHRT_MAX
  rx_buffer->pagecnt_bias == USHRT_MAX

t1: Buffer is received, and passed to the stack (e.g.)
              +---------------
              | upper buff (skb)
              +---------------
used by NIC ->| lower buffer
(rx_buffer)   +---------------
  page count  == USHRT_MAX
  rx_buffer->pagecnt_bias == USHRT_MAX - 1

t2: Buffer is received, and redirected
              +---------------
              | upper buff (skb)
              +---------------
used by NIC ->| lower buffer
(rx_buffer)   +---------------

Now, prior calling xdp_do_redirect():
  page count  == USHRT_MAX
  rx_buffer->pagecnt_bias == USHRT_MAX - 2

This means that buffer *cannot* be flipped/reused, because the skb is
still using it.

The problem arises when xdp_do_redirect() actually frees the
segment. Then we get:
  page count  == USHRT_MAX - 1
  rx_buffer->pagecnt_bias == USHRT_MAX - 2

From a recycle perspective, the buffer can be flipped and reused,
which means that the skb data area is passed to the Rx HW ring!

To work around this, the page count is stored prior to calling
xdp_do_redirect().

Note that this is not optimal, since the NIC could actually reuse the
"lower buffer" again. However, then we need to track whether
XDP_REDIRECT consumed the buffer or not.
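
A sketch of the workaround (names like rx_buffer and pagecnt_bias follow
the driver's style but are used here for illustration only):

    /* snapshot the refcount before the redirect may free the frame */
    int rx_buffer_pgcnt = page_count(rx_buffer->page);
    int err;

    err = xdp_do_redirect(rx_ring->netdev, &xdp, xdp_prog);

    /* later, the recycle decision compares against the snapshot taken
     * above instead of calling page_count() again
     */
    if (unlikely((rx_buffer_pgcnt - rx_buffer->pagecnt_bias) > 1))
            return false;   /* still referenced elsewhere, don't reuse */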

Fixes: d9314c474d ("i40e: add support for XDP_REDIRECT")
Reported-and-analyzed-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Sven Auhagen
ec107e775d igb: avoid transmit queue timeout in xdp path
Since we share the transmit queue with the network stack,
it is possible that we run into a transmit queue timeout.
This will reset the queue.
This happens under high load when XDP is using the
transmit queue pretty much exclusively.

netdev_start_xmit() sets the trans_start variable of the
transmit queue to jiffies, which is later utilized by dev_watchdog(),
so to avoid the timeout, let the stack know that an XDP xmit happened by
bumping trans_start within the XDP Tx routines to jiffies.
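
A minimal sketch of the idea (the tx_ring fields follow the driver's style
but are illustrative):

    /* after placing frames on the shared ring from the XDP path, refresh
     * trans_start so dev_watchdog() does not see a stale timestamp and
     * declare a Tx timeout
     */
    struct netdev_queue *nq = netdev_get_tx_queue(tx_ring->netdev,
                                                  tx_ring->queue_index);

    nq->trans_start = jiffies;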

Fixes: 9cbc948b5a ("igb: add XDP support")
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Sven Auhagen
3eca859008 igb: use xdp_do_flush
Since this is a new XDP implementation, change xdp_do_flush_map
to xdp_do_flush.

Fixes: 9cbc948b5a ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Sven Auhagen
681429dba9 igb: skb add metasize for xdp
Add the metasize if it is set in xdp.

Fixes: 9cbc948b5a ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Sven Auhagen
2e2bb5594c igb: XDP extack message on error
Add an extack error message when the RX buffer size is too small
for the frame size.

Fixes: 9cbc948b5a ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Sven Auhagen
b829ec1a66 igb: take VLAN double header into account
Increase the packet header padding to include double VLAN tagging.
This patch uses a macro for this.

Fixes: 9cbc948b5a ("igb: add XDP support")
Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:58 -08:00
Sven Auhagen
cfb33e174f igb: XDP xmit back fix error code
The igb XDP xmit back function should only return
defined error codes.

Fixes: 9cbc948b5a ("igb: add XDP support")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Sven Auhagen <sven.auhagen@voleatech.de>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 15:26:57 -08:00
Shay Agroskin
f1a2558913 net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT
This patch implements the ndo_xdp_xmit() net_device function which is
called when a packet is redirected to this driver using an
XDP_REDIRECT directive.

The function receives an array of xdp frames that it needs to xmit.
The TX queues that are used to xmit these frames are the XDP
queues used by the XDP_TX flow. Therefore a lock is added to synchronize
both flows (XDP_TX and XDP_REDIRECT).
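
A rough shape of such a handler, with error handling and statistics
trimmed; ena_pick_xdp_ring(), the xdp_tx_lock field and the assumption
that ena_xdp_xmit_frame() returns 0 on success are all illustrative:

    static int ena_xdp_xmit(struct net_device *dev, int n,
                            struct xdp_frame **frames, u32 flags)
    {
            struct ena_ring *xdp_ring;
            int i, nxmit = 0;

            if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
                    return -EINVAL;

            xdp_ring = ena_pick_xdp_ring(dev);  /* one of the XDP_TX rings */

            /* the same rings are used by the XDP_TX flow, so serialize */
            spin_lock(&xdp_ring->xdp_tx_lock);
            for (i = 0; i < n; i++)
                    if (!ena_xdp_xmit_frame(xdp_ring, frames[i]))
                            nxmit++;
            spin_unlock(&xdp_ring->xdp_tx_lock);

            return nxmit;
    }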

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
f8b91f255a net: ena: use xdp_return_frame() to free xdp frames
XDP subsystem has a function to free XDP frames and their associated
pages. Using this function would help the driver's XDP implementation to
adjust to new changes in the XDP subsystem in the kernel (e.g.
introduction of XDP MB).

Also, remove 'xdp_rx_page' field from ena_tx_buffer struct since it is
no longer used.

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
a318c70ad1 net: ena: introduce XDP redirect implementation
This patch adds partial support for the XDP_REDIRECT directive, which
instructs the driver to pass the packet to an interface specified by the
program. The directive is passed to the driver by calling bpf_redirect()
or bpf_redirect_map() functions from the eBPF program.

To lay the ground for integration with the existing XDP TX
implementation the patch removes the redundant page ref count increase
in ena_xdp_xmit_frame() and then decrease in ena_clean_rx_irq(). Instead
it only DMA unmaps descriptors for which XDP TX or REDIRECT directive
was received.

The XDP redirect support is still missing the .ndo_xdp_xmit function
implementation, which allows redirecting packets to an ENA interface;
it will be added in a later patch.

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
e8223eeff0 net: ena: use xdp_frame in XDP TX flow
Rename the ena_xdp_xmit_buff() function to ena_xdp_xmit_frame() and pass
it an xdp_frame struct instead of xdp_buff.
This change lays the ground for XDP redirect implementation which uses
xdp_frames when 'xmit'ing packets.

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
89dd735e8c net: ena: aggregate stats increase into a function
Introduce ena_increase_stat() function to increase statistics by a
certain number.
The function includes the
    - lock acquire (on 32-bit machines)
    - stat increase
    - lock release (on 32-bit machines)

line sequence that is ubiquitous across the driver.

The function increases a single stat at a time; places where several stats
are increased together weren't converted to use it, to avoid calling the
function several times in a row for each stat, which looks bad and might
decrease performance.
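
A sketch of the helper as described; u64_stats_update_begin()/end() from
<linux/u64_stats_sync.h> only take the seqcount-based lock on 32-bit
machines:

    static void ena_increase_stat(u64 *statp, u64 cnt,
                                  struct u64_stats_sync *syncp)
    {
            u64_stats_update_begin(syncp);
            (*statp) += cnt;
            u64_stats_update_end(syncp);
    }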

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
1e5847395e net: ena: fix coding style nits
This commit fixes two nits, but it does not generate any change to the
binary thanks to gcc's optimizations.

  - use `count` instead of `channels->combined_count`
  - change return type from `int` to `bool`

Also add spaces and change macro order in OR assignment to make the code
easier to read.

Signed-off-by: Sameeh Jubran <sameehj@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
e9548fdf93 net: ena: store values in their appropriate variables types
This patch changes some of the variables' types to match the values they
hold. The wrong types trip some of our static checkers that search for
accidental conversions in our driver.

Signed-off-by: Ido Segev <idose@amazon.com>
Signed-off-by: Igor Chauskin <igorch@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
da580ca8de net: ena: add device distinct log prefix to files
ENA logs are adjusted to display the full ENA representation to
distinguish each ENA device in case of multiple interfaces.
Using the netdev_err/warn and dev_info functions for logging provides
uniform printing with a clear distinction of the device and interface.

This patch changes all printing in the ena_com files to use netdev_* logging
functions, except for messages of info level. Messages of that level
are printed with dev_info because of the early stage at which they are
called, when the net_device struct isn't yet registered.

To allow using netdev_* functions in all ena_com functions, a pointer to
the net_device was added to ena_com_dev struct.

The patch also adds some log messages to make driver debugging easier.

Signed-off-by: Amit Bernstein <amitbern@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Shay Agroskin
ce74496a15 net: ena: use constant value for net_device allocation
The patch changes the maximum number of RX/TX queues it advertises to
the kernel (via alloc_etherdev_mq()) from a value received from the
device to a constant value which is the minimum between 128 and the
number of CPUs in the system.

By allocating the net_device struct with a constant number of queues,
the driver is able to allocate it at a much earlier stage, before
calling any ena_com functions. This allows making all log prints
in ena_com use netdev_* log functions instead of the current pr_* ones.
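
A sketch of the allocation described above; 128 stands in for the driver's
max-IO-queues constant and the surrounding error handling is trimmed:

    int max_num_io_queues = min_t(int, 128, num_online_cpus());

    netdev = alloc_etherdev_mq(sizeof(struct ena_adapter),
                               max_num_io_queues);
    if (!netdev)
            return -ENOMEM;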

Note:
netdev_* prints in ena_com functions that are called before
net_device registration in ena_probe() might print messages that are
a bit ugly (with strings like "(unnamed net_device) (uninitialized)").
However we decided to use netdev_* prints in these functions anyway,
for the sake of getting better messages later, when ena_com functions
are called after ena_probe() from other parts of the driver.
See discussion about this decision in [1].

[1] http://www.mail-archive.com/netdev@vger.kernel.org/msg353590.html

Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-09 15:26:40 -08:00
Simon Perron Caissy
5b13886da8 ice: Add space to unknown speed
Add a space to the end of the 'Unknown' string in order to avoid
concatenation with the 'bps' string when formatting the netdev log message.

Signed-off-by: Simon Perron Caissy <simon.perron.caissy@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:55 -08:00
Jacob Keller
9228d8b261 ice: join format strings to same line as ice_debug
When printing messages with ice_debug, align the printed string to the
origin line of the message in order to ease debugging and tracking
messages back to their source.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:55 -08:00
Bruce Allan
34d8461a65 ice: silence static analysis warning
sparse warns about cast to/from restricted types which is not
an actual problem; silence the warning.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:55 -08:00
Bruce Allan
32e6deb297 ice: cleanup misleading comment
The maximum Admin Queue buffer size and NVM shadow RAM sector size are both
4 Kilobytes. Some comments refer to those as 4Kb which can be confused with
4 Kilobits. Update the comments to use the commonly used KB symbol instead.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:55 -08:00
Nick Nunley
bcf68ea1e5 ice: Remove vlan_ena from vsi structure
vlan_ena was introduced to track whether VLAN filters are enabled on
the device, but
1) checking for num_vlan > 1 already gives us this information, and is
currently used in this way throughout the code
2) the logic for vlan_ena is broken when multiple VLANs are active

Just remove vlan_ena and use num_vlan instead.

Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:54 -08:00
Jeb Cramer
956542cae5 ice: Remove gate to OROM init
Remove the gate that prevents the OROM and netlist info from being
populated.  The NVM now has the appropriate section for software to
reference the versioning info.

Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:54 -08:00
Jeb Cramer
c21125c997 ice: Enable Support for FW Override (E82X)
The driver is able to override the firmware when it comes to supporting
a more lenient link mode.  This feature was limited to E810 devices.  It
is now extended to E82X devices.

Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:54 -08:00
Paul M Stillwell Jr
f2651a91b9 ice: don't always return an error for Get PHY Abilities AQ command
There are times when the driver shouldn't return an error when the Get
PHY abilities AQ command (0x0600) returns an error. Instead the driver
should log that the error occurred and continue on. This allows the
driver to load even though the AQ command failed. The user can then
later determine the reason for the failure and correct it.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:54 -08:00
Bruce Allan
88dcfdb4cd ice: cleanup stack hog
In ice_flow_add_prof_sync(), struct ice_flow_prof_params has recently
grown in size, hogging stack space when allocated there. Hogging stack
space should be avoided. Change the allocation to be on the heap when needed.
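
A sketch of the change; the error code and surrounding cleanup are
illustrative:

    struct ice_flow_prof_params *params;

    params = kzalloc(sizeof(*params), GFP_KERNEL);
    if (!params)
            return ICE_ERR_NO_MEMORY;

    /* ... fill in and use params instead of a large on-stack struct ... */

    kfree(params);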

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Harikumar Bokkena <harikumarx.bokkena@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-12-09 08:11:13 -08:00
Toke Høiland-Jørgensen
998f172962 xdp: Remove the xdp_attachment_flags_ok() callback
Since commit 7f0a838254 ("bpf, xdp: Maintain info on attached XDP BPF
programs in net_device"), the XDP program attachment info is now maintained
in the core code. This interacts badly with the xdp_attachment_flags_ok()
check that prevents unloading an XDP program with different load flags than
it was loaded with. In practice, two kinds of failures are seen:

- An XDP program loaded without specifying a mode (and which then ends up
  in driver mode) cannot be unloaded if the program mode is specified on
  unload.

- The dev_xdp_uninstall() hook always calls the driver callback with the
  mode set to the type of the program but an empty flags argument, which
  means the flags_ok() check prevents the program from being removed,
  leading to bpf prog reference leaks.

The original reason this check was added was to avoid ambiguity when
multiple programs were loaded. With the way the checks are done in the core
now, this is quite simple to enforce in the core code, so let's add a check
there and get rid of the xdp_attachment_flags_ok() callback entirely.

Fixes: 7f0a838254 ("bpf, xdp: Maintain info on attached XDP BPF programs in net_device")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/bpf/160752225751.110217.10267659521308669050.stgit@toke.dk
2020-12-09 16:27:42 +01:00
Zheng Yongjun
afae3cc2da net: atheros: simplify the return expression of atl2_phy_setup_autoneg_adv()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:22:54 -08:00
Zheng Yongjun
6eea39266c drivers: net: qlcnic: simplify the return expression of qlcnic_sriov_vf_shutdown()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:22:54 -08:00
Zheng Yongjun
10dd7b4fe5 drivers: net: ionic: simplify the return expression of ionic_set_rxfh()
Simplify the return expression.

Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:22:54 -08:00
Zhang Changzhong
cc6596fc72 net: ll_temac: Fix potential NULL dereference in temac_probe()
platform_get_resource() may fail and in this case a NULL dereference
will occur.

Fix it to use devm_platform_ioremap_resource() instead of calling
platform_get_resource() and devm_ioremap().

This is detected by Coccinelle semantic patch.

@@
expression pdev, res, n, t, e, e1, e2;
@@

res = \(platform_get_resource\|platform_get_resource_byname\)(pdev, t, n);
+ if (!res)
+   return -EINVAL;
... when != res == NULL
e = devm_ioremap(e1, res->start, e2);
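
The fix itself boils down to something like the following (the lp->regs
field name is illustrative):

    lp->regs = devm_platform_ioremap_resource(pdev, 0);
    if (IS_ERR(lp->regs))
            return PTR_ERR(lp->regs);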

Fixes: 8425c41d1e ("net: ll_temac: Extend support to non-device-tree platforms")
Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com>
Acked-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:15:46 -08:00
Catherine Sullivan
6f007c6486 gve: Add support for raw addressing in the tx path
During TX, skbs' data addresses are dma_map'ed and passed to the NIC.
This means that the device can perform DMA directly from these addresses
and the driver does not have to copy the buffer content into
pre-allocated buffers/qpls (as in qpl mode).
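
A minimal sketch of the mapping step (tx->dev and the descriptor handling
are illustrative):

    /* map the linear part of the skb and hand the bus address straight to
     * the TX descriptor instead of copying into a pre-registered qpl buffer
     */
    dma_addr_t addr = dma_map_single(tx->dev, skb->data,
                                     skb_headlen(skb), DMA_TO_DEVICE);

    if (unlikely(dma_mapping_error(tx->dev, addr)))
            return -ENOMEM;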

Reviewed-by: Yangchun Fu <yangchun@google.com>
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:06:28 -08:00
David Awogbemila
02b0e0c18b gve: Rx Buffer Recycling
This patch lets the driver reuse buffers that have been freed by the
networking stack.

In the raw addressing case, this allows the driver to avoid allocating new
buffers.
In the qpl case, the driver can avoid copies.

This patch separates the page refcount tracking mechanism
into a function gve_rx_can_recycle_buffer which uses get_page - this will
be changed in a future patch to entirely eliminate the use of get_page in
tracking page refcounts.

Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:06:28 -08:00
Catherine Sullivan
ede3fcf5ec gve: Add support for raw addressing to the rx path
Add support to use raw dma addresses in the rx path. Due to this new
support we can alloc a new buffer instead of making a copy.

RX buffers are handed to the networking stack and are
re-allocated as needed, avoiding the need to use
skb_copy_to_linear_data() as in "qpl" mode.

Reviewed-by: Yangchun Fu <yangchun@google.com>
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:06:28 -08:00
Catherine Sullivan
4944db80ac gve: Add support for raw addressing device option
Add support for describing the device in order to parse device options. As
the first device option, add raw addressing.

"Raw Addressing" mode (as opposed to the current "qpl" mode) is an
operational mode which allows the driver to avoid the bounce buffer copies
which it currently performs using pre-allocated qpls (queue_page_lists)
when sending and receiving packets.
For egress packets, the provided skb data addresses will be dma_map'ed and
passed to the device, allowing the NIC to perform DMA directly - the
driver will not have to copy the buffer content into pre-allocated
buffers/qpls (as in qpl mode).
For ingress packets, copies are also eliminated as buffers are handed to
the networking stack and then recycled or re-allocated as
necessary, avoiding the use of skb_copy_to_linear_data().

This patch only introduces the option to the driver.
Subsequent patches will add the ingress and egress functionality.

Reviewed-by: Yangchun Fu <yangchun@google.com>
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 16:06:28 -08:00
Amit Cohen
745f73deea mlxsw: spectrum_switchdev: Allow joining VxLAN to 802.1ad bridge
The previous patches added support for VxLAN device enslaved to 802.1ad
bridge in Spectrum-2 ASIC and vetoed it in Spectrum-1.

Do not veto VxLAN with 802.1ad bridge.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:57 -08:00
Amit Cohen
efbcb67339 mlxsw: Veto Q-in-VNI for Spectrum-1 ASIC
Implementation of Q-in-VNI differs between ASIC types; this set adds
support only for Spectrum-2.

Return an error when trying to create VxLAN device and enslave it to
802.1ad bridge in Spectrum-1.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
7e9c72a5da mlxsw: spectrum_switchdev: Use ops->vxlan_join() when adding VLAN to VxLAN device
Currently mlxsw_sp_switchdev_vxlan_vlan_add() always calls
mlxsw_sp_bridge_8021q_vxlan_join() because VLANs were only ever added to
a VLAN-filtering bridge, which is only 802.1q bridge.

This set adds support for VxLAN with 802.1ad bridge, so VLAN-filtering
bridge is not only 802.1q.

Call ops->vxlan_join(), so mlxsw_sp_bridge_802{1q, 1ad}_vxlan_join()
will be called according to bridge type.

This is needed to ensure that VxLAN with 802.1ad bridge will be vetoed
in Spectrum-1 with the next patch.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
0b5ec8f237 mlxsw: spectrum_nve_vxlan: Add support for Q-in-VNI for Spectrum-2 ASIC
On Spectrum-2, the default setting is not to push VLAN to the decapsulated
packet. This is controlled by SPVTR.ipvid_mode.
Set SPVTR.ipvid_mode to always push VLAN.
Without this setting, Spectrum-2 overtakes the VLAN tag of decapsulated
packet for bridging.

In addition, set SPVID register to use EtherType saved in
mlxsw_sp_nve_config when VLAN is pushed for the NVE tunnel.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
4418096e84 mlxsw: spectrum: Publish mlxsw_sp_ethtype_to_sver_type()
Declare mlxsw_sp_ethtype_to_sver_type() in spectrum.h to enable using it
in other files.

It will be used in the next patch to map between EtherType and the
relevant value configured by SVER register.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
49d18964e9 mlxsw: Save EtherType as part of mlxsw_sp_nve_config
Add EtherType field to mlxsw_sp_nve_config struct.
Set EtherType according to mlxsw_sp_nve_params.ethertype.

Pass 'mlxsw_sp_nve_params' instead of 'mlxsw_sp_nve_params->dev' to the
function which initializes mlxsw_sp_nve_config struct to know which
EtherType to use.

This field is needed to configure which EtherType will be used when
VLAN is pushed at ingress of the tunnel port.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
0913a24b3a mlxsw: Save EtherType as part of mlxsw_sp_nve_params
Add EtherType field to mlxsw_sp_nve_params struct.
Set it when VxLAN device is added to bridge device.

This field is needed to configure which EtherType will be used when
VLAN is pushed at ingress of the tunnel port.

Use ETH_P_8021Q for tunnel port enslaved to 802.1d and 802.1q bridges.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
e2c777d7e3 mlxsw: spectrum_switchdev: Create common function for joining VxLAN to VLAN-aware bridge
The code in mlxsw_sp_bridge_8021q_vxlan_join() can be used also for
802.1ad bridge.

Move the code to function called mlxsw_sp_bridge_vlan_aware_vxlan_join()
and call it from mlxsw_sp_bridge_8021q_vxlan_join() to enable code
reuse.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
598874c8e9 mlxsw: reg: Add support for tunnel port in SPVID register
Add the spvid_tport field, which indicates whether the port is a tunnel
port. When spvid_tport is true, the local_port field is supposed to be of
tunnel port type.

It will be used to configure which Ethertype will be used when VLAN is
pushed at ingress for tunnel port.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
c1c32a79c5 mlxsw: reg: Add Switch Port VLAN Stacking Register
SPVTR register configures the VLAN mode of the port to enable VLAN
stacking.

It will be used to configure VxLAN to push VLAN to the decapsulated packet.
Without this setting, Spectrum-2 overtakes the VLAN tag of decapsulated
packet for bridging.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
Amit Cohen
02c3b5c5d0 mlxsw: Use one enum for all registers that contain tunnel_port field
Currently SFN, TNUMT and TNPC registers use separate enums for
tunnel_port.

Create one enum with a neutral name and use it.
Remove the enums that are not currently required.

The next patches add two more registers that contain tunnel_port field,
the new enum can be used for them also.

Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:45:56 -08:00
David S. Miller
a8d5dd192a mlx5-updates-2020-12-01
Merge tag 'mlx5-updates-2020-12-01' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

mlx5-updates-2020-12-01

mlx5e port TX timestamping support and MISC updates

1) Add support for port TX timestamping, for better PTP accuracy.

Currently, TX timestamping in mlx5 HW is done at CQE (TX completion)
generation, which happens much earlier than when the packet actually goes
out on the wire. In this series Eran implements the option to do the
timestamping on the port using a special SQ (Send Queue). Such a Send
Queue generates 2 CQEs (TX completions): the original one and a new one
when the packet leaves the port. Due to the nature of this special
handling, the mechanism is opt-in only and is off by default to avoid any
performance degradation on normal traffic flows.

This patchset improves TX Hardware timestamping offset to be less than
40ns at a 100Gbps line rate, compared to 600ns before.

With that, our HW becomes compliant with G.8273.2 class C, allowing Linux
systems to be deployed in the 5G telco edge, where this standard is a must.

2) Misc updates and trivial improvements.

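A rough illustration of the two-completion idea described above (a generic
kernel pattern, not the mlx5 implementation; all names here are made up for
the example):

	/* Regular CQE: park the skb instead of completing it right away. */
	static void ptp_txq_hold_skb(struct sk_buff_head *fifo, struct sk_buff *skb)
	{
		skb_queue_tail(fifo, skb);
	}

	/* Port CQE: report the wire timestamp to the stack, then release. */
	static void ptp_txq_port_cqe(struct sk_buff_head *fifo, ktime_t port_ts)
	{
		struct sk_buff *skb = skb_dequeue(fifo);
		struct skb_shared_hwtstamps hwts = { .hwtstamp = port_ts };

		if (!skb)
			return;
		skb_tstamp_tx(skb, &hwts);
		consume_skb(skb);
	}
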
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 15:39:01 -08:00
Fugang Duan
f119cc9818 net: stmmac: overwrite the dma_cap.addr64 according to HW design
The current IP register MAC_HW_Feature1[ADDR64] only defines
32/40/64 bit widths, but some SoCs support other widths: for example,
i.MX8MP supports 34 bits, which maps to a 40 bit width in
MAC_HW_Feature1[ADDR64]. So overwrite dma_cap.addr64 according to the
real HW design.

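The idea, roughly (the exact platform-data field used by the SoC glue
driver is an assumption of this note):

	/* Let SoC glue code override what MAC_HW_Feature1[ADDR64] reports. */
	if (priv->plat->addr64)
		priv->dma_cap.addr64 = priv->plat->addr64;
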
Fixes: 94abdad697 ("net: ethernet: dwmac: add ethernet glue logic for NXP imx8 chip")
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 14:52:29 -08:00
Fugang Duan
5f58591323 net: stmmac: delete the eee_ctrl_timer after napi disabled
There is a chance that the eee_ctrl_timer is re-armed and fired from the
napi callback after the timer has been deleted in stmmac_release(). The
timer function then accesses the EEE registers after the clocks are
disabled, which causes a system hang. Found this issue when doing
suspend/resume and reboot stress tests.

It is safe to delete the timer after napi is disabled and LPI mode is
disabled.

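Roughly the intended teardown order in stmmac_release() (simplified sketch;
the exact calls in the driver may differ):

	stmmac_disable_all_queues(priv);	/* NAPI instances go quiet first */

	if (priv->eee_enabled) {
		priv->tx_path_in_lpi_mode = false;
		del_timer_sync(&priv->eee_ctrl_timer);	/* cannot be re-armed now */
	}
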
Fixes: d765955d2a ("stmmac: add the Energy Efficient Ethernet support")
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 14:52:29 -08:00
Fugang Duan
4ec236c7c5 net: stmmac: free tx skb buffer in stmmac_resume()
When doing suspend/resume tests, there is a WARN_ON() log dump from the
stmmac_xmit() function. The code logic:
	entry = tx_q->cur_tx;
	first_entry = entry;
	WARN_ON(tx_q->tx_skbuff[first_entry]);

In the normal case, tx_q->tx_skbuff[tx_q->cur_tx] should be NULL because
the skb should have been handled and freed in stmmac_tx_clean().

But stmmac_resume() resets the queue parameters as below, so skb buffers
may not be freed:
	tx_q->cur_tx = 0;
	tx_q->dirty_tx = 0;

So free the tx skb buffers in stmmac_resume() to avoid the warning and a
memory leak.

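A simplified sketch of the resume-path fix (the helper name and fields
follow the stmmac style but are illustrative here):

	static void stmmac_free_tx_skbufs(struct stmmac_priv *priv)
	{
		u32 queue;
		int i;

		for (queue = 0; queue < priv->plat->tx_queues_to_use; queue++) {
			struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];

			for (i = 0; i < priv->dma_tx_size; i++) {
				if (!tx_q->tx_skbuff[i])
					continue;
				dev_kfree_skb_any(tx_q->tx_skbuff[i]);
				tx_q->tx_skbuff[i] = NULL;
			}
		}
	}

	/* called from stmmac_resume() before cur_tx/dirty_tx are reset */
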
log:
[   46.139824] ------------[ cut here ]------------
[   46.144453] WARNING: CPU: 0 PID: 0 at drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:3235 stmmac_xmit+0x7a0/0x9d0
[   46.154969] Modules linked in: crct10dif_ce vvcam(O) flexcan can_dev
[   46.161328] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G           O      5.4.24-2.1.0+g2ad925d15481 #1
[   46.170369] Hardware name: NXP i.MX8MPlus EVK board (DT)
[   46.175677] pstate: 80000005 (Nzcv daif -PAN -UAO)
[   46.180465] pc : stmmac_xmit+0x7a0/0x9d0
[   46.184387] lr : dev_hard_start_xmit+0x94/0x158
[   46.188913] sp : ffff800010003cc0
[   46.192224] x29: ffff800010003cc0 x28: ffff000177e2a100
[   46.197533] x27: ffff000176ef0840 x26: ffff000176ef0090
[   46.202842] x25: 0000000000000000 x24: 0000000000000000
[   46.208151] x23: 0000000000000003 x22: ffff8000119ddd30
[   46.213460] x21: ffff00017636f000 x20: ffff000176ef0cc0
[   46.218769] x19: 0000000000000003 x18: 0000000000000000
[   46.224078] x17: 0000000000000000 x16: 0000000000000000
[   46.229386] x15: 0000000000000079 x14: 0000000000000000
[   46.234695] x13: 0000000000000003 x12: 0000000000000003
[   46.240003] x11: 0000000000000010 x10: 0000000000000010
[   46.245312] x9 : ffff00017002b140 x8 : 0000000000000000
[   46.250621] x7 : ffff00017636f000 x6 : 0000000000000010
[   46.255930] x5 : 0000000000000001 x4 : ffff000176ef0000
[   46.261238] x3 : 0000000000000003 x2 : 00000000ffffffff
[   46.266547] x1 : ffff000177e2a000 x0 : 0000000000000000
[   46.271856] Call trace:
[   46.274302]  stmmac_xmit+0x7a0/0x9d0
[   46.277874]  dev_hard_start_xmit+0x94/0x158
[   46.282056]  sch_direct_xmit+0x11c/0x338
[   46.285976]  __qdisc_run+0x118/0x5f0
[   46.289549]  net_tx_action+0x110/0x198
[   46.293297]  __do_softirq+0x120/0x23c
[   46.296958]  irq_exit+0xb8/0xd8
[   46.300098]  __handle_domain_irq+0x64/0xb8
[   46.304191]  gic_handle_irq+0x5c/0x148
[   46.307936]  el1_irq+0xb8/0x180
[   46.311076]  cpuidle_enter_state+0x84/0x360
[   46.315256]  cpuidle_enter+0x34/0x48
[   46.318829]  call_cpuidle+0x18/0x38
[   46.322314]  do_idle+0x1e0/0x280
[   46.325539]  cpu_startup_entry+0x24/0x40
[   46.329460]  rest_init+0xd4/0xe0
[   46.332687]  arch_call_rest_init+0xc/0x14
[   46.336695]  start_kernel+0x420/0x44c
[   46.340353] ---[ end trace bc1ee695123cbacd ]---

Fixes: 47dd7a540b ("net: add support for STMicroelectronics Ethernet controllers.")
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 14:52:29 -08:00
Fugang Duan
36d18b5664 net: stmmac: start phylink instance before stmmac_hw_setup()
Start the phylink instance and resume the PHY, so that it supplies the
RX clock to the MAC, before the MAC layer is initialized by calling
stmmac_hw_setup(). The DMA reset depends on the RX clock; otherwise the
DMA reset consumes the maximum timeout value and finally times out.

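The intended resume order, roughly (simplified; error handling omitted):

	rtnl_lock();
	phylink_start(priv->phylink);	/* PHY resumes and provides the RX clock */
	rtnl_unlock();

	stmmac_hw_setup(ndev, false);	/* DMA reset can now complete */
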
Fixes: 74371272f9 ("net: stmmac: Convert to phylink and remove phylib logic")
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 14:52:29 -08:00
Fugang Duan
9d14edfdea net: stmmac: increase the timeout for dma reset
The current timeout value is not enough for the gmac5 DMA reset on the
i.MX8MP platform, so increase the timeout range.

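The reset is polled with readl_poll_timeout(); the change only widens the
last (timeout) argument. The constants below are illustrative, not the
exact values from the patch:

	return readl_poll_timeout(ioaddr + DMA_BUS_MODE, value,
				  !(value & DMA_BUS_MODE_SFT_RESET),
				  10000, 200000);
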
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-12-08 14:52:29 -08:00
Guojia Liao
592b0179cd net: hns3: refine the VLAN tag handle for port based VLAN
For DEVICE_VERSION_V2, the hardware supports at most two layers of
VLAN tags: the port based tag inserted by hardware, the tag in the
tx buffer descriptor (taken from skb->vlan_tci) and the tag in the packet.

For transmitted packets:
If port based VLAN is disabled and the vf driver gets a VLAN tag from
the skb, the VLAN tag must be filled into the Outer_VLAN_TAG field
(the tag near the DMAC) of the tx buffer descriptor, otherwise it may
be inserted after the tag in the packet.

If port based VLAN is enabled and the vf driver gets a VLAN tag from
the skb, the VLAN tag must be filled into the VLAN_TAG field (the tag
far from the DMAC) of the tx buffer descriptor, otherwise it may
conflict with the port based VLAN and raise a hardware error.

For received packets:
The hardware strips the VLAN tags and fills them into the rx
buffer descriptor, no matter whether port based VLAN is enabled or not.
Because the port based VLAN tag is useless for the stack, the vf driver
needs to discard the port based VLAN tag taken from the rx buffer
descriptor when port based VLAN is enabled.

So the vf must know about the port based VLAN state.

For DEVICE_VERSION_V3, the hardware provides some new
configuration options to improve this.

For transmitted packets:
When tag shift mode is enabled, the hardware handles the VLAN tag in
the Outer_VLAN_TAG field as VLAN_TAG, so it won't conflict with the
port based VLAN. The hardware also makes sure this tag is placed
before the tag in the packet. So the vf driver doesn't need to specify
the tag position according to the port based VLAN state anymore.

For received packets:
When discard mode is enabled, the hardware strips and discards the
port based VLAN tag, so the vf driver doesn't need to identify it
in the rx buffer descriptor.

So modify the port based VLAN configuration and simplify the process
of the vf handling the VLAN tag.

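Illustrative pseudo-logic for the DEVICE_VERSION_V2 transmit case described
above (not the actual driver code; the flag and tag variables are made up):

	if (skb_vlan_tag_present(skb)) {
		if (port_based_vlan_enabled)
			/* far from DMAC: does not clash with the port based
			 * tag inserted by hardware
			 */
			inner_vtag = skb_vlan_tag_get(skb);
		else
			/* near to DMAC: ends up ahead of any tag already in
			 * the packet
			 */
			outer_vtag = skb_vlan_tag_get(skb);
	}
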
Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-08 11:58:52 -08:00