Remove the gate that prevents the OROM and netlist info from being
populated. The NVM now has the appropriate section for software to
reference the versioning info.
Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
The driver is able to override the firmware to support a more lenient
link mode. This feature was previously limited to E810 devices; extend
it to E82X devices as well.
Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
There are times when the driver shouldn't return an error when the Get
PHY abilities AQ command (0x0600) fails. Instead, the driver should log
the error and continue. This allows the driver to load even though the
AQ command failed; the user can later determine the reason for the
failure and correct it.
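For illustration only (not the literal patch; ice_aq_get_phy_caps() and
its arguments below are simply my reading of the existing 0x0600 command
wrapper), the intended error handling is roughly:

  /* 'pi', 'pcaps' and 'dev' come from the surrounding init code */
  status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
                               pcaps, NULL);
  if (status)
          /* log and keep loading instead of returning the error */
          dev_warn(dev, "Get PHY abilities AQ cmd failed, status %d\n",
                   status);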
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
In ice_flow_add_prof_sync(), struct ice_flow_prof_params has recently
grown in size and now consumes a significant amount of stack space when
allocated there. Avoid that by allocating it on the heap when needed.
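A hedged sketch of the resulting shape (not the exact driver code; only
the allocation strategy matters here):

  struct ice_flow_prof_params *params;
  enum ice_status status = 0;

  params = kzalloc(sizeof(*params), GFP_KERNEL);
  if (!params)
          return ICE_ERR_NO_MEMORY;

  /* ... populate *params and build the flow profile exactly as the
   * stack-based version did ...
   */

  kfree(params);
  return status;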
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Harikumar Bokkena <harikumarx.bokkena@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Simplify the return expression.
Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
Reviewed-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
MT7530 has a global address age control register, so use it to set the
ageing time.
The applied timer is (AGE_CNT + 1) * (AGE_UNIT + 1) seconds.
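A rough sketch of how a requested ageing time could be split into the
two fields (AGE_CNT_MAX and AGE_UNIT_MAX stand in for the register field
limits and are assumptions, not datasheet values):

  /* ageing seconds = (AGE_CNT + 1) * (AGE_UNIT + 1) */
  unsigned int secs = msecs / 1000;     /* msecs: time requested by DSA */
  unsigned int unit, cnt = 0;

  for (unit = 0; unit <= AGE_UNIT_MAX; unit++) {
          cnt = secs / (unit + 1);
          if (cnt)
                  cnt--;
          if (cnt <= AGE_CNT_MAX)
                  break;        /* (cnt + 1) * (unit + 1) ~= secs */
  }
  if (unit > AGE_UNIT_MAX)
          return -ERANGE;       /* requested time cannot be represented */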
Signed-off-by: DENG Qingfang <dqfext@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Awogbemila says:
====================
GVE Raw Addressing
Patchset description:
This patchset introduces "raw addressing" mode to the GVE driver.
Previously (in "queue_page_list" or "qpl" mode), the driver would
pre-allocate and dma_map buffers to be used on egress and ingress.
On egress, it would copy data from the skb provided to the
pre-allocated buffers - this was expensive.
In raw addressing mode, the driver can avoid this copy and simply
dma_map the skb's data so that the NIC can use it.
On ingress, the driver passes buffers up to the networking stack and
then frees and reallocates buffers when necessary instead of using
skb_copy_to_linear_data.
Patch 3 separates the page refcount tracking mechanism
into a function gve_rx_can_recycle_buffer which uses get_page - this will
be changed in a future patch to eliminate the use of get_page in tracking
page refcounts.
Changes from v9:
Patch 4: Use u64, not u32 for new tx stat counters.
====================
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
During TX, skbs' data addresses are dma_map'ed and passed to the NIC.
This means that the device can perform DMA directly from these addresses
and the driver does not have to copy the buffer content into
pre-allocated buffers/qpls (as in qpl mode).
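A minimal sketch of the idea, not the driver's actual TX path (variable
names are illustrative):

  dma_addr_t addr;

  /* 'dev' is the PCI device, 'skb' the packet given to ndo_start_xmit */
  addr = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
  if (unlikely(dma_mapping_error(dev, addr))) {
          dev_kfree_skb_any(skb);
          return NETDEV_TX_OK;  /* drop on mapping failure */
  }
  /* write 'addr' into the TX descriptor instead of copying skb->data
   * into a pre-allocated qpl buffer
   */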
Reviewed-by: Yangchun Fu <yangchun@google.com>
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch lets the driver reuse buffers that have been freed by the
networking stack.
In the raw addressing case, this allows the driver to avoid allocating
new buffers.
In the qpl case, the driver can avoid copies.
This patch separates the page refcount tracking mechanism
into a function gve_rx_can_recycle_buffer which uses get_page - this will
be changed in a future patch to entirely eliminate the use of get_page in
tracking page refcounts.
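Conceptually the check compares the page's refcount against the
references the driver itself holds; a rough sketch using page_count()
and a hypothetical 'pagecnt_bias' bookkeeping value (this is not the gve
function verbatim):

  static bool gve_rx_can_recycle_buffer(struct page *page, int pagecnt_bias)
  {
          /* Only once the refcount drops back to the driver's own bias
           * has the stack released all of its references, so the buffer
           * may be reused.
           */
          return page_count(page) == pagecnt_bias;
  }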
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for using raw DMA addresses in the rx path. With this
support, the driver can allocate a new buffer instead of making a copy.
RX buffers are handed to the networking stack and are
re-allocated as needed, avoiding the need to use
skb_copy_to_linear_data() as in "qpl" mode.
Reviewed-by: Yangchun Fu <yangchun@google.com>
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for parsing device options from the describe device
command. As the first device option, add raw addressing.
"Raw Addressing" mode (as opposed to the current "qpl" mode) is an
operational mode which allows the driver avoid bounce buffer copies
which it currently performs using pre-allocated qpls (queue_page_lists)
when sending and receiving packets.
For egress packets, the provided skb data addresses will be dma_map'ed and
passed to the device, allowing the NIC to perform DMA directly - the
driver will not have to copy the buffer content into pre-allocated
buffers/qpls (as in qpl mode).
For ingress packets, copies are also eliminated as buffers are handed to
the networking stack and then recycled or re-allocated as
necessary, avoiding the use of skb_copy_to_linear_data().
This patch only introduces the option to the driver.
Subsequent patches will add the ingress and egress functionality.
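Device options arrive as a list of (id, length, payload) records in the
describe device response; a hedged sketch of the parsing loop (the
struct layout, GVE_DEV_OPT_ID_RAW_ADDRESSING and the raw_addressing flag
are hypothetical here):

  struct gve_device_option {
          __be16 option_id;
          __be16 option_length;
          u8 data[];
  };

  static void gve_parse_device_options(struct gve_priv *priv,
                                       struct gve_device_option *opt,
                                       int num_options)
  {
          int i;

          for (i = 0; i < num_options; i++) {
                  u16 len = be16_to_cpu(opt->option_length);

                  if (be16_to_cpu(opt->option_id) ==
                      GVE_DEV_OPT_ID_RAW_ADDRESSING)
                          priv->raw_addressing = true;

                  /* advance to the next option record */
                  opt = (void *)opt->data + len;
          }
  }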
Reviewed-by: Yangchun Fu <yangchun@google.com>
Signed-off-by: Catherine Sullivan <csully@google.com>
Signed-off-by: David Awogbemila <awogbemila@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a spelling mistake in the Kconfig help text. Fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Johan Hedberg says:
====================
pull request: bluetooth-next 2020-12-07
Here's the main bluetooth-next pull request for the 5.11 kernel.
- Updated Bluetooth entries in MAINTAINERS to include Luiz von Dentz
- Added support for Realtek 8822CE and 8852A devices
- Added support for MediaTek MT7615E device
- Improved workarounds for fake CSR devices
- Fixed Bluetooth qualification test case L2CAP/COS/CFD/BV-14-C
- Fixes for LL Privacy support
- Enforced a 16 byte encryption key size for FIPS security level
- Added new mgmt commands for extended advertising support
- Multiple other smaller fixes & improvements
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel says:
====================
mlxsw: Add support for Q-in-VNI
This patch set adds support for Q-in-VNI over Spectrum-{2,3} ASICs.
Q-in-VNI is like regular VxLAN encapsulation with the sole difference
that overlay packets can contain a VLAN tag. In Linux, this is achieved
by adding the VxLAN device to an 802.1ad bridge instead of an 802.1q
bridge.
From mlxsw perspective, Q-in-VNI support entails two main changes:
1. An outer VLAN tag should always be pushed to the overlay packet
during decapsulation
2. The EtherType used during decapsulation should be 802.1ad (0x88a8)
instead of the default 802.1q (0x8100)
Patch set overview:
Patches #1-#3 add required device registers and fields
Patch #4 performs small refactoring to allow code re-use
Patches #5-#7 make the EtherType used during decapsulation a property of
the tunnel port (i.e., VxLAN). This leads to the driver vetoing
configurations in which VxLAN devices are members of both 802.1ad and
802.1q/802.1d bridges. This will be handled in the future by determining
the overlay EtherType on the egress port instead
Patch #8 adds support for Q-in-VNI for Spectrum-2 and newer ASICs
Patches #9-#10 veto Q-in-VNI for Spectrum-1 ASICs due to some hardware
limitations. This can be worked around, but we decided not to support it
for now
Patch #11 adjusts mlxsw to stop vetoing addition of VXLAN devices to
802.1ad bridges
Patch #12 adds a generic forwarding test that can be used with both veth
pairs and physical ports with a loopback
Patch #13 adds a test to make sure mlxsw vetoes unsupported Q-in-VNI
configurations
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add tests to ensure that the forbidden and unsupported cases are indeed
vetoed by mlxsw driver.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add test to check Q-in-VNI traffic.
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The previous patches added support for a VxLAN device enslaved to an
802.1ad bridge on the Spectrum-2 ASIC and vetoed it on Spectrum-1.
Do not veto VxLAN with an 802.1ad bridge.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The implementation of Q-in-VNI differs between ASIC types; this set
adds support only for Spectrum-2.
Return an error when trying to create a VxLAN device and enslave it to
an 802.1ad bridge on Spectrum-1.
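The veto itself is just an errno plus an extack message from the
Spectrum-1 bridge op; roughly (not the literal patch):

  static int
  mlxsw_sp1_bridge_8021ad_vxlan_join(struct mlxsw_sp_bridge_device *bridge_device,
                                     const struct net_device *vxlan_dev,
                                     u16 vid, struct netlink_ext_ack *extack)
  {
          NL_SET_ERR_MSG_MOD(extack,
                             "VXLAN with 802.1ad bridge is not supported");
          return -EOPNOTSUPP;
  }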
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, mlxsw_sp_switchdev_vxlan_vlan_add() always calls
mlxsw_sp_bridge_8021q_vxlan_join() because VLANs were only ever added to
a VLAN-filtering bridge, which could only be an 802.1q bridge.
This set adds support for VxLAN with an 802.1ad bridge, so a
VLAN-filtering bridge is no longer necessarily 802.1q.
Call ops->vxlan_join() instead, so that
mlxsw_sp_bridge_802{1q, 1ad}_vxlan_join() is called according to the
bridge type.
This is needed to ensure that VxLAN with an 802.1ad bridge will be
vetoed on Spectrum-1 by the next patch.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On Spectrum-2, the default setting is not to push a VLAN to the
decapsulated packet. This is controlled by SPVTR.ipvid_mode.
Set SPVTR.ipvid_mode to always push a VLAN. Without this setting,
Spectrum-2 takes the VLAN tag from the decapsulated packet and uses it
for bridging.
In addition, set the SPVID register to use the EtherType saved in
mlxsw_sp_nve_config when a VLAN is pushed for the NVE tunnel.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Declare mlxsw_sp_ethtype_to_sver_type() in spectrum.h to enable using it
in other files.
It will be used in the next patch to map an EtherType to the relevant
value configured by the SVER register.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an EtherType field to the mlxsw_sp_nve_config struct and set it
according to mlxsw_sp_nve_params.ethertype.
Pass 'mlxsw_sp_nve_params' instead of 'mlxsw_sp_nve_params->dev' to the
function that initializes the mlxsw_sp_nve_config struct, so it knows
which EtherType to use.
This field is needed to configure which EtherType will be used when a
VLAN is pushed at the ingress of the tunnel port.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an EtherType field to the mlxsw_sp_nve_params struct and set it
when a VxLAN device is added to a bridge device.
This field is needed to configure which EtherType will be used when a
VLAN is pushed at the ingress of the tunnel port.
Use ETH_P_8021Q for a tunnel port enslaved to 802.1d and 802.1q bridges.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The code in mlxsw_sp_bridge_8021q_vxlan_join() can also be used for an
802.1ad bridge.
Move the code to a new function, mlxsw_sp_bridge_vlan_aware_vxlan_join(),
and call it from mlxsw_sp_bridge_8021q_vxlan_join() to enable code
reuse.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an spvid_tport field which indicates whether the port is a tunnel
port. When spvid_tport is true, the local_port field is expected to
hold a tunnel port type.
This will be used to configure which EtherType is used when a VLAN is
pushed at ingress for the tunnel port.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The SPVTR register configures the VLAN mode of the port to enable VLAN
stacking.
It will be used to configure VxLAN to push a VLAN to the decapsulated
packet. Without this setting, Spectrum-2 takes the VLAN tag from the
decapsulated packet and uses it for bridging.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, the SFN, TNUMT and TNPC registers use separate enums for
tunnel_port.
Create one enum with a neutral name and use it; remove the enums that
are no longer required.
The next patches add two more registers that contain a tunnel_port
field, and the new enum can be used for them as well.
Signed-off-by: Amit Cohen <amcohen@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mlx5-updates-2020-12-01' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
mlx5-updates-2020-12-01
mlx5e port TX timestamping support and MISC updates
1) Add support for port TX timestamping, for better PTP accuracy.
Currently, mlx5 HW TX timestamping is done at CQE (TX completion)
generation, which is much earlier than when the packet actually goes out
on the wire. In this series, Eran implements the option to do
timestamping on the port using a special SQ (Send Queue). Such a Send
Queue will generate two CQEs (TX completions): the original one and a
new one when the packet leaves the port. Due to the nature of this
special handling, the mechanism is opt-in only and is off by default to
avoid any performance degradation on normal traffic flows.
This patchset improves TX Hardware timestamping offset to be less than
40ns at a 100Gbps line rate, compared to 600ns before.
With that, our HW is compliant with G.8273.2 class C, allowing Linux
systems to be deployed in the 5G telco edge, where this standard is a must.
2) Misc updates and trivial improvements.
Signed-off-by: David S. Miller <davem@davemloft.net>
For DEVICE_VERSION_V2, the hardware supports at most two layers of
VLAN tags, including the port based tag inserted by the hardware, the
tag in the tx buffer descriptor (taken from skb->tci) and the tag in
the packet.
For transmitted packets:
If port based VLAN is disabled and the vf driver gets a VLAN tag from
the skb, the VLAN tag must be filled into the Outer_VLAN_TAG field
(the tag nearer to the DMAC) of the tx buffer descriptor, otherwise it
may be inserted after the tag in the packet.
If port based VLAN is enabled and the vf driver gets a VLAN tag from
the skb, the VLAN tag must be filled into the VLAN_TAG field (the tag
farther from the DMAC) of the tx buffer descriptor, otherwise it may
conflict with the port based VLAN and raise a hardware error.
For received packets:
The hardware strips the VLAN tags and fills them into the rx buffer
descriptor, no matter whether port based VLAN is enabled or not.
Because the port based VLAN tag is useless to the stack, the vf driver
needs to discard the port based VLAN tag obtained from the rx buffer
descriptor when port based VLAN is enabled.
So the vf must know the port based VLAN state.
For DEVICE_VERSION_V3, the hardware provides some new configuration
options to improve this.
For transmitted packets:
When tag shift mode is enabled, the hardware handles the VLAN tag in
the Outer_VLAN_TAG field as VLAN_TAG, so it won't conflict with port
based VLAN. The hardware also makes sure this tag is placed before the
tag in the packet. So the vf driver no longer needs to choose the tag
position according to the port based VLAN state.
For received packets:
When discard mode is enabled, the hardware strips and discards the
port based VLAN tag, so the vf driver doesn't need to identify it in
the rx buffer descriptor.
So modify the port based VLAN configuration to simplify VLAN tag
handling in the vf driver.
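For the DEVICE_VERSION_V2 transmit case described above, the placement
decision amounts to roughly the following (the descriptor field names
are placeholders, not the actual hns3 BD layout):

  /* choose where the skb's VLAN tag goes in the tx buffer descriptor */
  if (port_base_vlan_enabled)
          /* inner position, so it cannot clash with the port based tag */
          desc->vlan_tag = cpu_to_le16(skb_vlan_tag_get(skb));
  else
          /* outer position, the tag nearer to the DMAC */
          desc->outer_vlan_tag = cpu_to_le16(skb_vlan_tag_get(skb));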
Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Currently, tx unicast promisc is always enabled when promisc mode is
on. With tx unicast promisc on, a function will receive all unicast
packets from other functions belonging to the same port.
Add an ethtool private flag to control whether tx unicast promisc is
enabled. The function is then able to filter the unknown unicast
packets from other functions.
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
For DEVICE_VERSION_V2, the hardware supports enabling tx and rx
promiscuous modes separately. But tx or rx promiscuous is active for
unicast, multicast and broadcast promiscuous simultaneously.
To support traffic between functions belonging to the same port, we
always enable tx promiscuous for broadcast promiscuous, so tx
promiscuous for unicast and multicast promiscuous is also enabled.
For DEVICE_VERSION_V3, the hardware decouples this relationship: tx
unicast promiscuous, rx unicast promiscuous, tx multicast promiscuous,
rx multicast promiscuous, tx broadcast promiscuous and rx broadcast
promiscuous can each be enabled separately.
So add support for the new promiscuous command.
Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Create a function to fill the fields of struct mlx5e_create_cq_param
based on a channel. The purpose is code reuse between normal CQs, XSK
CQs and the upcoming QoS CQs.
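Presumably something along these lines (the field list is a guess at
what CQ creation needs from the channel, not the exact mlx5e struct):

  static void mlx5e_build_create_cq_param(struct mlx5e_create_cq_param *ccp,
                                          struct mlx5e_channel *c)
  {
          *ccp = (struct mlx5e_create_cq_param) {
                  .napi     = &c->napi,
                  .ch_stats = c->stats,
                  .node     = cpu_to_node(c->cpu),
                  .ix       = c->ix,
          };
  }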
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Use the new FW caps to advertise ip-in-ip tunnel support separately
for RX and TX.
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Fix smatch warnings:
drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_lgcy.c:105 esw_acl_egress_lgcy_setup() warn: passing zero to 'PTR_ERR'
drivers/net/ethernet/mellanox/mlx5/core/esw/acl/egress_ofld.c:177 esw_acl_egress_ofld_setup() warn: passing zero to 'PTR_ERR'
drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c:184 esw_acl_ingress_lgcy_setup() warn: passing zero to 'PTR_ERR'
drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c:262 esw_acl_ingress_ofld_setup() warn: passing zero to 'PTR_ERR'
esw_acl_table_create() never returns NULL, so the NULL test should be
removed.
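The fix is simply dropping the NULL branch in the callers;
illustratively (argument names here are made up for the sketch):

  struct mlx5_flow_table *acl;

  acl = esw_acl_table_create(esw, vport, ns, size);
  if (IS_ERR(acl))      /* was IS_ERR_OR_NULL(); NULL can never happen */
          return PTR_ERR(acl);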
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Currently, when more than one EQ shares an IRQ and this IRQ is
interrupted, all the EQs sharing the IRQ are armed. This is done
regardless of whether an EQ has an EQE.
When multiple EQs share an IRQ, one or more EQs can have valid EQEs;
arm only those EQs.
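The idea, sketched with hypothetical helpers (the real driver
structures differ):

  struct mlx5_eq_comp *eq;

  /* on a shared IRQ, re-arm only the EQs that actually have work */
  list_for_each_entry(eq, &irq_eq_list, list) {   /* irq_eq_list: assumed */
          if (mlx5_eq_has_eqe(eq))                /* assumed helper */
                  mlx5_eq_arm(eq);                /* assumed helper */
  }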
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Since kvzalloc will initialize the allocated memory, it is not
necessary to initialize it once again.
Fixes: 11b717d615 ("net/mlx5: E-Switch, Get reg_c0 value on CQE")
Signed-off-by: Zhu Yanjun <yanjunz@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Transmitted packet timestamping accuracy can be improved by using the
timestamp from the port, instead of the packet CQE creation timestamp,
as it better reflects the actual transmit time of a packet.
TX port timestamping is supported starting from ConnectX6-DX hardware.
Although only the CQE timestamp can be attached at the original
completion, we are able to get the TX port timestamp via an additional
completion over a special CQ associated with the SQ (in addition to the
regular CQ).
Driver to ignore the original packet completion timestamp and report
back the timestamp of the special CQ completion. If the absolute
timestamp difference between the two completions is greater than 1/128
second, ignore the TX port timestamp as its jitter is too big.
No skb will be generated from the extra completion.
Allocate an additional CQ per ptpsq to receive the TX port timestamp.
Driver to hold an skb FIFO in order to map each transmitted skb to its
two expected completions. When using a ptpsq, hold a double refcount on
the skb to guarantee it will not get released before both completions
arrive.
Expose dedicated counters for the additional PTP CQ and connect them to
the TX health reporter.
This patch improves TX Hardware timestamping offset to be less than 40ns
at a 100Gbps line rate, compared to 600ns before.
With that, our HW is compliant with G.8273.2 class C, allowing Linux
systems to be deployed in the 5G telco edge, where this standard is a
must.
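The double-refcount bookkeeping boils down to taking one extra
reference when the skb enters the FIFO and releasing one per completion;
roughly (mlx5e_skb_fifo_push() and the ptpsq fields are assumptions):

  /* transmit side, when the packet is sent on a ptpsq */
  skb_get(skb);                 /* second ref: two completions expected */
  mlx5e_skb_fifo_push(&ptpsq->skb_fifo, skb);

  /* completion side, run once for the CQE completion and once for the
   * port timestamp completion
   */
  napi_consume_skb(skb, budget);        /* releases one reference each time */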
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Add TX PTP port object support for better TX timestamping accuracy.
Currently, the driver supports CQE based TX timestamping. The device
also offers a TX port timestamp, which has less jitter and better
reflects the actual transmit time of a packet.
Define a new driver layout called ptpsq, on which the driver will create
SQs that support TX port timestamping for their transmitted packets.
Driver to identify PTP TX skbs and steer them to these dedicated SQs
as part of the select queue ndo.
Driver to hold a ptpsq per TC and report them at
netif_set_real_num_tx_queues().
Add support for all needed functionality in order to xmit and poll
completions received via ptpsq.
Add ptpsq to the TX reporter recover, diagnose and dump methods.
Creation of ptpsqs is disabled by default, and can be enabled via the
tx_port_ts private flag.
This patch steers all timestamp-related packets to a ptpsq, but does
not yet enable port timestamp support on it. That support is added in
the following patch.
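The steering check in the select queue ndo is essentially a test for a
hardware timestamp request on the skb; schematically (not the literal
driver code):

  static bool mlx5e_use_ptpsq(struct sk_buff *skb)
  {
          /* steer to a ptpsq only if a HW TX timestamp was requested */
          return unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP);
  }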
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
The MLX5E_RX_ERR_CQE macro is used only in the data path; move it to
the appropriate header file.
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
The SW group counter update function aggregates sw stats out of the
many mlx5e_*_stats structs residing in a given mlx5e_channel_stats
struct.
Split the function into a few helper functions. These will be used
later in the series to calculate specific mlx5e_*_stats which are not
defined inside mlx5e_channel_stats.
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
The skb fifo push/pop API uses pre-defined attributes within the
mlx5e_txqsq.
In order to share the skb fifo API with other non-SQ use cases,
change the API input to take a newly defined mlx5e_skb_fifo struct.
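A sketch of the extracted structure and helpers (the field choices are
assumptions about what the FIFO needs, not a copy of the mlx5e code):

  struct mlx5e_skb_fifo {
          struct sk_buff **fifo;
          u16 *pc;        /* producer counter */
          u16 *cc;        /* consumer counter */
          u16 mask;
  };

  static void mlx5e_skb_fifo_push(struct mlx5e_skb_fifo *fifo,
                                  struct sk_buff *skb)
  {
          fifo->fifo[(*fifo->pc)++ & fifo->mask] = skb;
  }

  static struct sk_buff *mlx5e_skb_fifo_pop(struct mlx5e_skb_fifo *fifo)
  {
          return fifo->fifo[(*fifo->cc)++ & fifo->mask];
  }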
Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>