This patch adds support for offloading IPXIP6 type packets that represent
either IPv4 or IPv6 encapsulated inside of an IPv6 outer IP header. In
addition, with this change, we should also be able to support FOU
encapsulated traffic with outer IPv6 headers.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch defines two new GSO definitions SKB_GSO_IPXIP4 and
SKB_GSO_IPXIP6 along with corresponding NETIF_F_GSO_IPXIP4 and
NETIF_F_GSO_IPXIP6. These are used to described IP in IP
tunnel and what the outer protocol is. The inner protocol
can be deduced from other GSO types (e.g. SKB_GSO_TCPV4 and
SKB_GSO_TCPV6). The GSO types of SKB_GSO_IPIP and SKB_GSO_SIT
are removed (these are both instances of SKB_GSO_IPXIP4).
SKB_GSO_IPXIP6 will be used when support for GSO with IP
encapsulation over IPv6 is added.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a feature to enable/disable receiving all multicast
traffic on a trusted VF.
Change-Id: I926eba7f8850c8d40f8ad7e08bbe4056bbd3985f
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Allocate the correct number of RX buffers, and don't fiddle with
next_to_use. The common RX code handles all of this. This fixes a memory
leak of one page each time the driver is opened.
Change-Id: Id06eca353086e084921f047acad28c14745684ee
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The hardware supports a 16 byte descriptor for receive, but the
driver was never using it in production. There was no performance
benefit to the real driver of 16 byte descriptors, so drop a whole
lot of complexity while getting rid of the code.
Also since the previous patch made us use no-split mode all the
time, drop any support in the driver for any other value in dtype
and assume it is always zero (aka no-split).
Hooray for code removal!
Change-ID: I2257e902e4dad84a07b94db6d2e6f4ce69b27bc0
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This is part 2 of the Rx refactor series, just including
changes to i40evf.
This refactor aligns the receive routine with the one in
ixgbe, which was highly optimized. This reduces the code
we have to maintain and allows for a (hopefully) more readable
and maintainable Rx hot path.
In order to do this:
- consolidate the receive path into a single function that doesn't
use packet split but *does* use pages for Rx buffers.
- remove the old _1buf routine
- consolidate several routines into helper functions
- remove VF ethtool control over packet split
- remove priv_flags interface since it is unused
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
As part of preparation for the rx-refactor, remove the
packet split receive routine and ancillary code.
Some of the split related context set up code stays in
i40e_virtchnl_pf.c in case an older VF driver tries to load
and still wants to use packet split.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
As part of the rx-refactor, the dtype variable in the i40e_ring
struct is no longer used, so remove it.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Refactor the interpretation of a tunnel. This removes
some code and lets us start using the hardware's parsing.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch makes it so that i40e and i40evf can use GSO_PARTIAL to support
segmentation for frames with checksums enabled in outer headers. As a
result we can now send data over these types of tunnels at over 20Gb/s
versus the 12Gb/s that was previously possible on my system.
The advantage with the i40e parts is that this offload is mostly
transparent as the hardware still deals with the inner and/or outer IPv4
headers so the IP ID is still incrementing for both when this offload is
performed.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
GCC 6 has a new warning which will display when you attempt to left
shift a signed value beyond the storage size of the type. I40E_MASK
generates a mask value for 32-bit registers. Properly typecast the mask
value and place the values in parentheses to prevent macro expansion
issues.
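A minimal sketch of the kind of change described (the example register
names are illustrative, not the exact diff):

  /* Cast to the register width and parenthesize both arguments so that
   * GCC 6 does not warn about shifting a signed literal, and so the
   * macro expands safely regardless of the argument expressions.
   */
  #define I40E_MASK(mask, shift)  ((u32)(mask) << (shift))

  /* illustrative use: build a field mask for a 32-bit register */
  #define I40E_EXAMPLE_FIELD_SHIFT  30
  #define I40E_EXAMPLE_FIELD_MASK   I40E_MASK(0x3, I40E_EXAMPLE_FIELD_SHIFT)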
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch syncs the VF code for the changes made to the PF for the RSS
hash tuple settings. Since the VF still cannot change the RSS hash
settings, change the code to make this clear to the user. Previously,
the default settings were returned in this function. However, the
default can be changed by the PF so this does not make sense anymore.
Change-Id: I085eaf005fc7978b440d2a1bf2b2dd7cadaff39b
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Remove the code that implements the HMC AQ APIs and the calls to these APIs.
This is done because these are obsolete APIs and are not supported
by firmware.
Change-ID: I5d771d8f37c3e16e7b0a972ff9b27e75aa2d05d4
Signed-off-by: Neerav Parikh <neerav.parikh@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add necessary Linux Ethernet driver support for promiscuous mode
operation. Add a flag so the VF knows it is in promiscuous mode
and two state flags to discretely track multicast and unicast
promiscuous states.
Change-Id: Ib2f2dc7a7582304fec90fc917ebb7ded21ba1de4
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The driver was placing the VLAN tag into the skb any time there was a
VLAN tag, whether or not hardware stripping was enabled. Just check to
make sure stripping is enabled before calling put_tag.
Change-Id: Ife95290c06edd9a616393b38679923938b382241
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add a device capability which defines whether an update is available and
whether a security check is needed during the update process.
Change-ID: I380787c878275e1df18b39198df3ee3666342282
Signed-off-by: Michal Kosiarz <michal.kosiarz@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If the PF driver reports proper support, allow the PF driver to
configure RSS on behalf of the VF driver. This will allow for RSS
support on future hardware without changes to the VF driver.
Unfortunately, the old RSS code still needs to stay as the driver needs
to be compatible with PF drivers that don't support this interface. But
this change still simplifies the data structures a bunch and makes this
code simpler to read and maintain.
Change-ID: I0375aad40788ecdc0cb24d5cfeccf07804e69771
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
To add a little flexibility to the nvmupdate facility, this code adds the
ability to specify an AQ event opcode to wait on after the Exec_AQ request.
Change-ID: Iddbfd63c3de8df3edb9d3e90678b08989bc4946e
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Under some circumstances the driver remove function may be called before
the driver is fully initialized. So we can't assume that we know where
our towel is at, or that all of the data structures are initialized.
To ensure that we don't panic, check that the vsi_res pointer is valid
before dereferencing it. Then drink beer and eat peanuts.
Change-ID: If697b4db57348e39f9538793e16aa755e3e1af03
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Looking over the documentation it turns out enabling IPIP and SIT offloads
for i40e is pretty straightforward. As such I decided to enable them with
this patch. In my testing I am seeing an improvement of 8 to 10 Gb/s
for IPIP and SIT tunnels with this offload enabled.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The feature flags list for i40e and i40evf is beginning to become pretty
massive. I plan to add another 4 or so features to these drivers and
duplicating the flags for each and every flags list is becoming a bit
repetitive.
The primary change here is that we now build our features list around
hw_encap_features. After that we assign that to vlan_features,
hw_features, and finally map that onto features. In addition we end up
throwing features onto hw_encap_features that have no effect, such
as the Rx offloads and SCTP_CRC. However, that should have no impact and
makes things a bit easier for us as hw_encap_features is one of the less
updated features maps available.
For i40evf I went through and sanity checked a few features as well.
Specifically, RXCSUM was being set as a read-only feature, which didn't make
much sense. I have updated things so we can clear the NETIF_F_RXCSUM flag,
since that is really a software feature and not a hardware one anyway;
disabling it is just a matter of ignoring the result from the hardware.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Conflicts were two cases of simple overlapping changes,
nothing serious.
In the UDP case, we need to add a hlist_add_tail_rcu()
to linux/rculist.h, because we've moved UDP socket handling
away from using nulls lists.
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch addresses a bug introduced based on my interpretation of the
XL710 datasheet. Specifically section 8.4.1 states that "A single transmit
packet may span up to 8 buffers (up to 8 data descriptors per packet
including both the header and payload buffers)." It then goes on to
say that each segment for a TSO obeys the previous rule; however, it then
refers to the TSO header and the segment payload buffers.
I believe the actual limit for fragments with TSO and an skbuff that has
payload data in the header portion of the buffer is only 7 fragments, as
the skb->data portion counts as 2 buffers: one for the TSO header, and one
for a segment payload buffer.
Fixes: 2d37490b82 ("i40e/i40evf: Rewrite logic for 8 descriptor per packet check")
Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Correctly set the VLAN feature flags after setting the rest of the
netdev flags. And don't set them in hw_features, because these can't be
controlled by the VF driver.
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add defines for input set mask (RSS, flow director, flexible payload),
including defines specific to IPv6.
Change-ID: Ie95ef7d0916a4d6ca011c194283f959774c8dce9
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add opcodes and structures to support RSS configuration by PF driver on
behalf of the VF drivers. This reduces complexity in the VF driver and
allows us to support future hardware designs without modifying the VF
driver.
Change-ID: I8c75765c630eacb71f95967f1109a198542593ac
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The NVM update status info should stay collected together, not
spread across different structs.
Change-ID: Ic16f9e9fd79945d865bb7226184c889884585025
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
As it turns out, calling into other files from the hot path hurts
performance a lot. In this case the majority of the time we
call "check FCoE" and the packet is *not* FCoE, but this call
was taking 5% of our total cycles spent on receive.
Change-ID: I080552c26e7060bc7b78504dc2763f6f0b3d8c76
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Some of the tx_ring arguments can be deleted since they are not used.
Change-ID: I99275b0f191d7f63ec2f05061919904940c36f31
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Move a local variable down into the block where it is used.
Change-ID: I9caba9e1eacf921037077f2665cbce83fd8e95d6
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
With IPv4 and IPv6 now using the same format for checksums based on the
length of the frame we need to update the i40e and i40evf drivers so that
they correctly account for lengths greater than or equal to 64K.
With this patch the driver should now correctly update checksums for frames
up to 16776960 bytes in length, which should be more than large enough for all
possible TSO frames in the near future.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We were passing in the seed where we should just be passing false,
because we want the VSI table, not the PF table.
Change-ID: I9b633ab06eb59468087f0c0af8539857e99f9495
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Upon module remove, wait a little longer after requesting a reset before
checking to see if the firmware responded. This change prevents double
resets when the firmware is busy.
Change-ID: Ieedc988ee82fac1f32a074bf4d9e4dba426bfa58
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The new device ID is 0x37D3 and it should follow the same flows and
branding string as for 0x37D0.
Change-ID: Ia5ad4a1910268c4666a3fd46a7afffbec55b4fc2
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Users of ethtool were being given the mistaken impression that this
driver was able to change its VLAN tagging features, and were
disappointed that this was not actually the case. Implement
ndo_fix_features method so that we can adjust these flags as needed to
avoid false impressions.
Change-ID: I08584f103a4fa73d6a4128d472e4ef44dcfda57f
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If the driver happens to read a register during the time in which the
device is undergoing reset, it will receive a value of 0xdeadbeef
instead of a valid value. Unfortunately, the driver may misinterpret
this as a valid value, especially if it's just looking for individual
bits.
Add an explicit check for this value when we are looking for admin queue
errors, and trigger reset recovery if we find it.
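A rough sketch of the intended check (the register access macro, flag,
and field names here are assumed, not the exact driver code):

  u32 val = rd32(hw, hw->aq.arq.len);

  if (val == 0xdeadbeef) {
          /* device is mid-reset: don't treat this as status bits,
           * kick off the reset/recovery path instead
           */
          adapter->flags |= I40EVF_FLAG_RESET_PENDING;  /* assumed flag */
          schedule_work(&adapter->reset_task);
          return;
  }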
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Simple cast to fix a sparse warning.
Fixes: 5453205cd0 ("i40e/i40evf: Enable support for SKB_GSO_UDP_TUNNEL_CSUM")
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch enables bulk Tx clean for skbs. In order to enable it we need
to pass the napi_budget value as that is used to determine if we are truly
running in NAPI mode or if we are simply calling the routine from netpoll
with a budget of 0. In order to avoid adding too many more variables I
thought it best to pass the VSI directly in a fashion similar to what we do
on igb and ixgbe with the q_vector.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In the polling routines for i40e and i40evf we were using bitwise operators
to avoid the side effects of the logical operators, specifically the fact
that if the first case is true with "||" we skip the second case, or if it
is false with "&&" we skip the second case. This fixes an earlier patch
that converted the bitwise operators over to logical operators; instead,
the entire thing is replaced with just an if statement, since that makes
what we are trying to do more readable.
Fixes: 1a36d7fadd ("i40e/i40evf: use logical operators, not bitwise")
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The only error case is when the malloc fails, in which case the cleanup
loop does nothing at all, so remove it.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
From what I can tell the practical limitation on the size of the Tx data
buffer is the fact that the Tx descriptor is limited to 14 bits. As such
we cannot use 16K as is typically used on the other Intel drivers. However
artificially limiting ourselves to 8K can be expensive as this means that
we will consume up to 10 descriptors (1 context, 1 for header, and 9 for
payload, non-8K aligned) in a single send.
I propose that we can reduce this by increasing the maximum data for a 4K
aligned block to 12K. We can reduce the descriptors used for a 32K aligned
block by 1 by increasing the size like this. In addition we still have the
4K - 1 of space that is still unused. We can use this as a bit of extra
padding when dealing with data that is not aligned to 4K.
By aligning the descriptors after the first to 4K we can avoid using byte
enables and can fetch full TLP transactions after the first fetch of the
buffer, which improves PCIe efficiency. Below are the results of testing
before and after applying this patch:
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
Before:
87380  16384   16384    10.00    33682.24    20.27    -1.00    0.592   -1.00
After:
87380  16384   16384    10.00    34204.08    20.54    -1.00    0.590   -1.00
So the net result of this patch is that we have a small gain in throughput
due to a reduction in overhead for putting together the frame.
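Roughly, the limit works out as below (macro names follow the driver's
style; treat this as a sketch of the math rather than the exact diff):

  /* The descriptor length field is 14 bits wide, so the hard limit per
   * buffer is 16K - 1.  Rounding that down to the 4K maximum read
   * request size gives 12K for every aligned chunk after the first.
   */
  #define I40E_MAX_READ_REQ_SIZE         4096
  #define I40E_MAX_DATA_PER_TXD          (16 * 1024 - 1)
  #define I40E_MAX_DATA_PER_TXD_ALIGNED \
          (I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))  /* 12288 */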
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use the new AdminQ functions for safely accessing the Rx control
registers that may be affected by heavy small packet traffic.
Change-ID: Ibb00983e8dcba71f4b760222a609a5fcaa726f18
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the new opcodes and struct used for asking the firmware to update Rx
control registers that need extra care when being accessed while under
heavy traffic - e.g. sustained 64-byte packets at line rate on all ports.
The firmware will take extra steps to be sure the register accesses
are successful.
The registers involved are:
PFQF_CTL_0
PFQF_HENA
PFQF_FDALLOC
PFQF_HREGION
PFLAN_QALLOC
VPQF_CTL
VFQF_HENA
VFQF_HREGION
VSIQF_CTL
VSILAN_QBASE
VSILAN_QTABLE
VSIQF_TCREGION
PFQF_HKEY
VFQF_HKEY
PRTQF_CTL_0
GLFCOE_RCTL
GLFCOE_RSOF
GLQF_CTL
GLQF_SWAP
GLQF_HASH_MSK
GLQF_HASH_INSET
GLQF_HSYM
GLQF_FC_MSK
GLQF_FC_INSET
GLQF_FD_MSK
PRTQF_FD_INSET
PRTQF_FD_FLXINSET
PRTQF_FD_MSK
Change-ID: I56c8144000da66ad99f68948d8a184b2ec2aeb3e
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds functions to blink the LED on devices using a
10GBaseT PHY, since the MAC registers used in other designs
do not work in this device configuration.
Change-ID: Id4b88c93c649fd2b88073a00b42867a77c761ca3
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
On all of the other Intel drivers we place the checksum code close to TSO,
as the two have a significant amount in common and it can help to reduce the
decision tree for how to handle the frame. The first check in TSO is to see
if checksumming is offloaded, and if it is not we can skip _BOTH_ TSO and Tx
checksum offload based on a single check.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch is meant to rewrite the logic for how we determine if we can
transmit the frame or if it needs to be linearized.
The previous code for this function was using a mix of division and modulus
division as a part of computing if we need to take the slow path. Instead
I have replaced this by simply working with a sliding window which will
tell us if the frame would be capable of causing a single packet to span
several descriptors.
The logic for the scan is fairly simple. If any given group of 6 fragments
is less than gso_size - 1 then it is possible for us to have one byte
coming out of the first fragment, 6 fragments, and one or more bytes coming
out of the last fragment. This gives us a total of 8 fragments
which exceeds what we can allow so we send such frames to be linearized.
Arguably the use of modulus might be more exact as the approach I propose
may generate some false positives. However the likelihood of us taking much
of a hit for those false positives is fairly low, and I would rather not
add more overhead in the case where we are receiving a frame composed of 4K
pages.
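A simplified sketch of the sliding-window scan (not the exact driver
routine; a window size of 6 matches the description above):

  #include <linux/skbuff.h>

  static bool needs_linearize(const struct sk_buff *skb, int window)
  {
          const skb_frag_t *frag = skb_shinfo(skb)->frags;
          int nr_frags = skb_shinfo(skb)->nr_frags;
          int gso_size = skb_shinfo(skb)->gso_size;
          int sum, i;

          if (nr_frags < window)
                  return false;

          /* start one byte short of a full segment so that any window of
           * fragments summing to less than gso_size - 1 drives sum negative
           */
          sum = 1 - gso_size;

          /* prime the window with the first (window - 1) fragments */
          for (i = 0; i < window - 1; i++)
                  sum += skb_frag_size(&frag[i]);

          /* slide across the remaining fragments */
          for (; i < nr_frags; i++) {
                  sum += skb_frag_size(&frag[i]);
                  if (sum < 0)
                          return true;  /* a segment could span too many descriptors */
                  sum -= skb_frag_size(&frag[i - (window - 1)]);
          }

          return false;
  }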
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In an upcoming patch I would like to have access to the descriptor count
used for the data portion of the frame. For this reason I am splitting up
the descriptor count function from the function that stops the ring.
Also in order to try and reduce unnecessary duplication of code I am moving
the slow-path portions of the code out of being inline calls so that we can
just jump to them and process them instead of having to build them into
each function that calls them.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Recent changes should have enabled support for IPv6 based tunnels and
support for TSO with outer UDP checksums. As such we can update the
feature flags to reflect that.
In addition, we can clean up the flags that aren't needed, such as SCTP and
RXCSUM, since having the bits there doesn't add any value.
I also found one spot where we were setting the same flag twice. It looks
like it was probably a git merge error that resulted in the line being
duplicated. As such I have dropped it in this patch.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Acked-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The XL722 has support for providing the outer UDP tunnel checksum on
transmits. Make use of this feature to support segmenting UDP tunnels with
outer checksums enabled.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This is mostly a minor clean-up for the Rx checksum path in order to avoid
some of the unnecessary conditional checks that were being applied.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add exception handling to the Tx checksum path so that we can handle cases
of TSO where the frame is bad, or Tx checksum where we didn't recognize a
protocol.
Drop I40E_TX_FLAGS_CSUM as it is unused, and move the CHECKSUM_PARTIAL check
into the function itself so that we can decrease the indentation.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch defers writing to the Tx descriptor bits until we know we have
successfully completed a given operation. So for example we defer updating
the tunneling portion of the context descriptor until we have fully
identified the type.
The advantage to this approach is that we can assemble values as we go
instead of having to try and kludge everything together all at once. As a
result we can significantly clean up the tunneling configuration for
instance as we can just do a pointer walk and do the math for the distance
between each set of points.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds support for IPv6 extension headers in setting up the Tx
checksum. Without this patch extension headers would cause IPv6 traffic to
fail as the transport protocol could not be identified.
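A self-contained sketch of the idea (helper name and layout assumed):

  #include <linux/ipv6.h>
  #include <linux/skbuff.h>
  #include <net/ipv6.h>

  /* Return the transport protocol of an IPv6 frame, skipping any
   * extension headers between the IPv6 header and the transport header.
   */
  static u8 tx_ipv6_l4_proto(struct sk_buff *skb)
  {
          struct ipv6hdr *ip6 = ipv6_hdr(skb);
          unsigned char *exthdr = (unsigned char *)(ip6 + 1);
          unsigned char *l4_hdr = skb_transport_header(skb);
          __be16 frag_off;
          u8 l4_proto = ip6->nexthdr;

          if (l4_hdr != exthdr)
                  ipv6_skip_exthdr(skb, exthdr - skb->data, &l4_proto, &frag_off);

          return l4_proto;
  }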
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch fixes two issues. First was the fact that ip_hdr(skb)->protocol
was being used to test for the outer transport protocol. This completely
breaks IPv6 support. Second was the fact that we cleared the flag for v4
going to v6, but we didn't take care of txflags going the other way. As
such we would have the v6 flag still set even if the inner header was v4.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The Tx checksum path was maintaining a set of 3 pointers and two lengths in
order to prepare the packet for being checksummed. The thing is we only
really needed 2 pointers, and the lengths that were being maintained can
easily be computed.
As such we can replace the IPv4 and IPv6 header pointers with one single
union that represents both, or a generic pointer to the start of the
network header. For the L4 headers we can do the same with TCP and a
generic pointer to the start of the transport header. The length of the
TCP header is obtained by simply multiplying doff by 4, and the network
header length can be obtained by subtracting the network header pointer
from the transport header pointer.
While I was at it I renamed l4_hdr to l4_proto to make it a bit more clear
and less likely to be confused with l4.hdr which is the transport header
pointer.
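The shape of the change, as a fragment (variable names assumed):

  union {
          struct iphdr *v4;
          struct ipv6hdr *v6;
          unsigned char *hdr;
  } ip;
  union {
          struct tcphdr *tcp;
          unsigned char *hdr;
  } l4;
  u32 network_hdr_len, l4_hdr_len;

  ip.hdr = skb_network_header(skb);
  l4.hdr = skb_transport_header(skb);

  /* derive the lengths instead of carrying them around */
  network_hdr_len = l4.hdr - ip.hdr;
  l4_hdr_len = l4.tcp->doff << 2;         /* doff counts 32-bit words */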
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch goes through and pulls all of the spots where we were updating
either the TCP or IP checksums in the TSO and checksum path into the TSO
function. The general idea here is that we should only be updating the
header after we verify we have completed a skb_cow_head check to verify the
head is writable.
One other advantage to doing this is that it makes things much more
obvious. For example, in the case of IPv6 there was one spot where the
offset of the IPv4 header checksum was being updated which is obviously
incorrect.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch makes it so that the L4 header offsets and such can be ignored
when dealing with the L3 checksum and length update. This is done by making
use of two things.
First we can just use the offset from the L4 header to the start of the
packet to determine the L4 offset, and from that we can then make use of
the data offset to determine the full length of the headers.
As far as adjusting the checksum to remove the length we can simply add the
inverse of the length instead of having to recompute the entire
pseudo-header without the length. In the case of an IPv6 header this
should be significantly cheaper since we can make use of a value we already
needed instead of having to read the source and destination address out of
the packet.
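A sketch of the checksum adjustment (variable names assumed from the
surrounding TSO path):

  /* cancel the payload length out of the TCP pseudo-header checksum by
   * folding in its one's complement, instead of recomputing the whole
   * pseudo-header
   */
  u32 paylen = skb->len - l4_offset;

  csum_replace_by_diff(&l4.tcp->check, (__force __wsum)htonl(paylen));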
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Instead of casting u32 values to u64 it makes more sense to just start out
with u64 values in the first place. This way we don't need to create a
mess with all of the casts needed to populate a 64-bit value.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The i40e and i40evf drivers contained code for inserting an outer checksum
on UDP tunnels. The issue however is that the upper levels of the stack
never requested such an offload and it results in possible errors.
In addition the same logic was being applied to the Rx side where it was
attempting to validate the outer checksum, but the logic there was
incorrect in that it was testing for the resultant sum to be equal to the
header checksum instead of being equal to 0.
Since this code is so massively flawed, and doing things that we didn't ask
for it to do, I am just dropping it, and will bring it back later to use as
an offload for SKB_GSO_UDP_TUNNEL_CSUM which can make use of such a
feature.
As far as the Rx feature I am dropping it completely since it would need to
be massively expanded and applied to IPv4 and IPv6 checksums for all parts,
not just the one that supports Tx checksum offload for the outer.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In MFP mode, particularly when we were setting the PF VSI in limited
promiscuous mode, the HW switch was still mirroring the outgoing packets
from other VSIs (VF/VMDq) onto the PF VSI.
With this new bit set, the mirroring doesn't happen anymore, so
we are in limited promiscuous mode on the PF VSI in MFP, which is similar
to defport.
An API check is not required, since this bit is reserved for FW API
versions < 1.5.
Also update copyright year in file headers.
Change-ID: I9840cb95f11dde733d943cb03ce84f68b9611bc8
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In one obscure corner case, it was possible to clear the NVM update wait
flag when no update_done message was actually received. This patch
cleans the event descriptor before use, and moves the opcode check to
where it won't get done if there was no event to clean.
Also update copyright year in file headers.
Change-ID: I68bbc41965e93f4adf07cbe98b9dfd63d41509a4
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If a reset fails to complete, the driver gets its affairs in order and
awaits the cold solace of rmmod. Unfortunately, it was not properly
setting the adapter state, which would cause a panic on rmmod, instead
of the desired surcease.
Set the adapter state to DOWN in this case, and avoid a panic.
Change-ID: I6fdd9906da52e023f8dc744f7da44b5d95278ca9
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In the case where we have a page fully used by receive data, we need to
release the page fully to the stack. Instead of calling get_page (which
increments the page count) followed by free_page (which decrements the
page count), just donate our reference to the stack. Although this
donation is not tax deductible, it does allow us to avoid two very
expensive atomic operations that reverse each other.
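A sketch of the idea (the condition and field names are assumed):

  /* attach the half page to the skb as a fragment */
  skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_bi->page,
                  rx_bi->page_offset, len, PAGE_SIZE / 2);

  if (page_fully_used) {
          /* donate our reference to the stack: no get_page() followed
           * by __free_page(), so no back-to-back atomic operations
           */
          rx_bi->page = NULL;
  } else {
          /* half the page is still ours to hand out next time */
          get_page(rx_bi->page);
  }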
Change-ID: If70739792d5748995fc175ec92ac2171ed4ad8fc
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds a workaround for cases where we might have
interrupts that got lost but write-back (WB) happened.
If that happens without this patch, we will see a tx_timeout.
To work around it, this patch goes ahead and reschedules NAPI
in that situation, if NAPI is not already scheduled.
We also add an ethtool counter, tx_lost_interrupt, to keep track of when
we detect such a case.
Note: napi_reschedule() can be safely called from process/service_task
context and is done in other drivers as well without an issue.
Change-ID: I00f98f1ce3774524d9421227652bef20fcbd0d20
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Support packet split receive on VFs. This is off by default but can be
enabled using ethtool private flags. Because we need to trigger a reset
from outside of i40evf_main.c, create a new function to do so, and
export it.
Also update copyright year in file headers.
Change-ID: I721aa5d70113d3d6d94102e5f31526f6fc57cbbb
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Bump version to i40e-1.4.13 and i40evf-1.4.9
Change-ID: I9db37f9d4899141c3e5455dfb456d45465b8c035
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Get rid of the unused hsplit field in the ring struct and use the
existing macro to detect packet split enablement. This allows debugfs
dumps of the VSI to properly show which Rx routine is in use.
Change-ID: Ic4e9589e6a788ab196ed0850703f704e30c03781
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Mr. Spock would certainly raise an eyebrow to see us using bitwise
operators, when we should clearly be relying on logic. Fascinating.
Change-ID: Ie338010c016f93e9faa2002c07c90b15134b7477
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Refactor the packet split Rx code to properly use half-pages for
receives. The previous code was doing way more mapping and unmapping
than it needed to, and wasn't properly using half-pages.
Increment the page use count each time we give a half-page to an skb,
knowing that the stack will probably process and release the page before
we need it again. Only free and reallocate pages if the count shows that
both half-pages are in use. Add counters to track reallocations and page
reuse.
Change-ID: I534b299196036b64be82b4861a0a4036310a8f22
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The i40e and i40evf drivers now cleanly handle allocation
failures, so we can avoid kernel log spew from the memory allocator
when allocations fail by setting __GFP_NOWARN on Rx buffer allocations.
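For example (helper and field names assumed):

  skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
                                    rx_ring->rx_hdr_len,
                                    GFP_ATOMIC | __GFP_NOWARN);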
Change-ID: Ic9e1b83c495e2a3ef6b069ba7fb6e52ce134cd23
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This is the "Don't Give Up" patch. Previously the
driver could fail an allocation, and then possibly stall
a queue forever, by never coming back to continue receiving
or allocating buffers.
With this patch, the driver will keep polling trying to allocate
receive buffers until it succeeds. This should keep all receive
queues running even in the face of memory pressure.
Also update copyright year in file header.
Change-ID: I2b103d1ce95b9831288a7222c3343ffa1988b81b
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
While re-enabling interrupts the driver would clear all pending
causes. This meant that if an interrupt was generated while the driver
was cleaning or polling with interrupts disabled, then that interrupt
was lost. This could cause a queue to become dead, especially for
receive. Refactor the enable_icr0 function so that the caller can decide
whether the CLEARPBA (clear pending events) bit will be set while
re-enabling the interrupt.
Also update copyright year in file headers.
Change-ID: Ic1db100a05e13c98919057696db147a258ca365a
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Change the driver string to 40-10 Gigabit instead of XL710/X710 for X722
and all future products.
Also update copyright year in file header.
Change-ID: I57fae656b36dc4eb682b2b7a054f8f48f3589149
Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Now that the Force-WriteBack functionality in X710/XL710 devices
has been moved out of the clean routine and into the service task,
we need to make sure WriteBack-On-ITR is separated out since it
is still called from clean.
In the X722 devices, Force-WriteBack implies WriteBack-On-ITR but
without the interrupt, which can put the driver into a missed-interrupt
scenario and trigger a potential Tx timeout report.
With this patch, we break the two functions out and call the
appropriate one at the right place. This avoids creating
missed-interrupt-like scenarios for X722 devices.
Also update copyright year in file headers.
Change-ID: Iacbde39f95f332f82be8736864675052c3583a40
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don't bother trying to set up a TSO if the skb->ip_summed is not
set to CHECKSUM_PARTIAL.
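The early-out at the top of the TSO setup routine looks roughly like this:

  if (skb->ip_summed != CHECKSUM_PARTIAL)
          return 0;       /* nothing to offload */

  if (!skb_is_gso(skb))
          return 0;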
Change-ID: I6495b3568e404907a2965b48cf3e2effa7c9ab55
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The driver was using an offset based off a DMA handle while mapping and
unmapping with sync_single_range_for_[cpu|device], where it should be
using the DMA handle (returned from alloc_coherent) and the offset of the
memory to be synced.
Change-ID: I208256565b1595ff0e9171ab852de06b997917c6
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Nelson, Shannon <shannon.nelson@intel.com>
Reviewed-by: Williams, Mitch A <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
If the VF is reset via VFLR, the device will be knocked out of bus
master mode, and the driver will fail to recover from the reset. Fix
this by enabling bus mastering after every reset. In a non-VFLR case,
the bus master bit will not be disabled, and this call will have no effect.
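The fix amounts to a single call in the reset path (structure field name
assumed):

  /* re-enable bus mastering; harmless if it was never cleared */
  pci_set_master(adapter->pdev);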
Change-ID: Id515859ac7a691db478222228add6d149e96801a
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We were not doing write-back on interrupt throttle for the legacy interrupt
case on X722. This patch fixes that, so we do WB_ON_ITR for legacy interrupts
as well. It also fixes the issue that we should still be setting NO_ITR when
touching the DYN_CTLN register, since we do not want to change the ITR
setting there.
Change-ID: I5db8491ee1544118a389db839cecc93e1bbc480e
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Bump AQ minor version to 1.5 for new FW features.
Change-ID: I5a790f7f519a2a8921aaa1c5663727dd1897ffec
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the new AQ command and struct for managing a thermal sensor.
Change-ID: I6f5631839a0f3dca352a6c222f1269a960e2310a
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the new Cisco VXLAN-GPE cloud tunnel type for the Add Cloud Filter
and UDP tunnel AQ commands.
Change-ID: I2c093c7d79726c7fca08a36e5c63581a905da3d2
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the new Set Switch Config AdminQ command, and mark the L2 Filter
bit as deprecated in the Add VEB command.
Change-ID: I5b24790f14c56f0ddf3f70df1e486844146b039f
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add flags to MAC allocation requests to signify that the MAC VLAN filters
should come from the shared resource pool rather than the dedicated PF
resource pools.
Change-ID: I4c2da64c01856edcb0982bc4aab75c5a91047a7a
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the new External Device Power Ability field to the get_link_status data
structure, using space from the reserved field at the end of the struct.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Fix the name of the new cloud tunnel type from the place-holder NGE
name to the official Geneve. Also fix the spelling of the VXLAN type.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Add the AQ opcode and struct definitions for the Run PHY Activity command.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Kevin Scott <kevin.c.scott@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add the new proxy-wake-on-lan capability bit available with the
new X722 device.
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
As done in ixgbe, use a private workqueue to avoid blocking the
system workqueue. This avoids some strange side effects when
some other entity is depending on the system workqueue.
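A sketch of the module init/exit changes (symbol names assumed):

  static struct workqueue_struct *i40e_wq;

  static int __init i40e_init_module(void)
  {
          i40e_wq = create_singlethread_workqueue(i40e_driver_name);
          if (!i40e_wq) {
                  pr_err("%s: Failed to create workqueue\n", i40e_driver_name);
                  return -ENOMEM;
          }
          return pci_register_driver(&i40e_driver);
  }

  static void __exit i40e_exit_module(void)
  {
          pci_unregister_driver(&i40e_driver);
          destroy_workqueue(i40e_wq);
  }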
Change-ID: Ic8ba08f5b03696cf638b21afd25fbae7738d55ee
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Add write-back on interrupt throttle rate timer expiration support
for the i40evf driver, when running on X722 devices.
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The PCTYPES for the X710 and X722 families are different. This patch
makes adjustments for that.
Signed-off-by: Anjali Singhai Jain <anjali.singhai@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>