Converts the multicast list and the core code that manipulates it to be the same as uc_list.
+uses two functions for adding/removing a mc address (a normal and a "global"
variant) instead of a function parameter.
+removes dev_mcast.c completely.
+exposes the netdev_hw_addr_list_* macros along with the __hw_addr_* functions so
that lists kept privately by drivers can be manipulated the same way (used in the
bonding and 80211 drivers).
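A minimal sketch (not taken from the patch) of how a driver walks the converted list; the table-size limit and the example_write_mc_addr() helper are hypothetical:

#include <linux/netdevice.h>

#define EXAMPLE_MC_TABLE_SIZE	128	/* hypothetical hardware limit */

/* hypothetical hardware-programming helper */
static void example_write_mc_addr(struct net_device *dev, const unsigned char *addr)
{
	/* write one multicast address into the hardware filter table */
}

static void example_set_mc_list(struct net_device *dev)
{
	struct netdev_hw_addr *ha;

	/* netdev_mc_count()/netdev_for_each_mc_addr() replace the old
	 * dev->mc_count/dev->mc_list walk after this conversion
	 */
	if (netdev_mc_count(dev) > EXAMPLE_MC_TABLE_SIZE)
		return;	/* a real driver would fall back to allmulti here */

	netdev_for_each_mc_addr(ha, dev)
		example_write_mc_addr(dev, ha->addr);
}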
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When running the offline diagnostic tests, check to see if any VFs are
online. If so, only run the link test. This is necessary because
VFs running in guest VMs aren't aware of when the PF is taken
offline for a diagnostic test. Also log a message to the system log
telling the system administrator to take the VFs offline manually if
they want to run a full diagnostic. Return 1 for each of the tests
not run to alert the user to the condition.
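A rough sketch of the check described above, assuming a typical ethtool self-test layout; the adapter structure, test indices, and example_link_test() helper are illustrative, not the driver's actual code:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct example_adapter {
	unsigned int num_vfs;
};

static u64 example_link_test(struct example_adapter *adapter)
{
	return 0;	/* 0 == pass; a real driver checks link status here */
}

static void example_diag_test(struct net_device *netdev,
			      struct ethtool_test *eth_test, u64 *data)
{
	struct example_adapter *adapter = netdev_priv(netdev);

	if ((eth_test->flags & ETH_TEST_FL_OFFLINE) && adapter->num_vfs) {
		dev_warn(&netdev->dev,
			 "offline diagnostics skipped: take the VFs offline first to run the full diagnostic\n");
		data[0] = 1;	/* register test, not run  */
		data[1] = 1;	/* eeprom test, not run    */
		data[2] = 1;	/* interrupt test, not run */
		data[3] = 1;	/* loopback test, not run  */
		eth_test->flags |= ETH_TEST_FL_FAILED;
		data[4] = example_link_test(adapter);	/* link test still runs */
		return;
	}
	/* ... normal online/offline test paths ... */
}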
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
During FCF solicitation, the switch is supposed to pad the
solicited advertisement out to the endpoint's specified
maximum FCoE frame size. That means that we need to receive
FIP frames that are larger than the standard MTU. To make
sure the receive queue is configured correctly, we should be
filtering FIP traffic into the FCoE queues.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently FIP (FCoE Initialization Protocol) frames
are sent untagged. This causes various problems
with FCFs (switches) that have negotiated a priority
over DCBX. This patch tags FIP frames with the same
priority as the FCoE frames.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If the user buffer count was 256, the shift would place a 1
in the offset region, leading to errors. It also overwrites
the user's buffer list. This patch makes sure that at most
256 user buffers are allowed for DDP and that the buffer count
is masked properly so it doesn't overwrite the offset
when the bits are shifted.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Frank Zhang <frank_1.zhang@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the last patch I missed an unnecessary min_t comparison.
This patch removes it; the driver allocates at most
72 tx queues for 82599 and 24 for 82598, so there is no need
for this check.
Additionally, this sets MAX_[TX|RX]_QUEUES to 72, which is
used as the size of the tx/rx_ring arrays. There is no
reason to have more tx_rings/rx_rings than num_tx_queues.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The clear_to_send flag is being cleared before the call to ping all
the VFs. It should be cleared after pinging all the VFs.
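A sketch of the corrected ordering, assuming the per-VF clear_to_send bookkeeping and the ixgbe_ping_all_vfs() helper referenced above (a fragment, not the actual hunk):

	int i;

	/* ping the VFs while the mailbox is still marked clear-to-send ... */
	ixgbe_ping_all_vfs(adapter);

	/* ... and only afterwards drop the flag */
	for (i = 0; i < adapter->num_vfs; i++)
		adapter->vfinfo[i].clear_to_send = false;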
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
VFs running in guest VMs do not respond as quickly to the
PF's indication that it is going down as they do when running in the
host domain. If the adapter is in SR-IOV mode, insert a two-second delay
to guarantee that all VFs have had time to respond to the PF reset.
In any case, resetting the PF while VFs are active should be
discouraged, but if it must be done then there will be a two-second
delay to help synchronize the resets between the PF and all the
VFs.
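A sketch of the delay described above; the exact placement in the down/reset path is an assumption:

	if (adapter->num_vfs) {
		/* let active VFs know the PF is going down */
		ixgbe_ping_all_vfs(adapter);
		/* give VFs running in guest VMs time to react before the reset */
		msleep(2000);	/* needs <linux/delay.h> */
	}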
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Includes one minor indentation fix to placate checkpatch.
Signed-off-by: Frans Pop <elendil@planet.nl>
Cc: e1000-devel@lists.sourceforge.net
Signed-off-by: David S. Miller <davem@davemloft.net>
As per Simon Horman's feedback, set IXGBE_RSC_CB(skb)->dma to zero
after unmapping the HWRSC DMA address to avoid double freeing.
Signed-off-by: Mallikarjuna R Chilakala <mallikarjuna.chilakala@intel.com>
Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently netdev_features_change is called before the FCoE tx queue
setup is done, so this patch moves the netdev_features_change call
to after the tx queue setup in ixgbe_init_interrupt_scheme, so
that real_num_tx_queues is updated correctly on each FCoE enable
or disable.
This allows the additional FCoE queues to be reflected correctly in
the vlan driver for correct queue selection.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Advanced Power Management is disabled for 82599 KX4 connections by
clearing the GRC.APME bit, which prevents the device from waking the
system after an improper system shutdown. By default GRC.APME is enabled,
and software is not supposed to clear this setting during adapter probe.
Signed-off-by: Mallikarjuna R Chilakala <mallikarjuna.chilakala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix 82599 link issues seen during driver load and unload tests using
multi-speed 10G & 1G fiber modules. When connected back to back, 82599
multispeed fiber modules would sometimes link at 1G instead of the highest
10G speed, due to a race condition in the autotry process involving Tx laser
flapping. Move the autotry autoneg-37 Tx laser flapping process from the
multispeed module init setup to driver unload. This alerts the link partner
to restart its autotry process when it tries to establish the link.
Signed-off-by: Mallikarjuna R Chilakala <mallikarjuna.chilakala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rename for_each_bit to for_each_set_bit in the kernel source tree, to
permit for_each_clear_bit() should that ever be added.
The patch includes a macro to map the old for_each_bit() onto the new
for_each_set_bit(). This is a (very) temporary thing to ease the migration.
[akpm@linux-foundation.org: add temporary for_each_bit()]
Suggested-by: Alexey Dobriyan <adobriyan@gmail.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Artem Bityutskiy <dedekind@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Move TC_PRIO_CONTROL check and queue remapping into
ixgbe_select_queue(). Remapping queues after the qdisc
can result in the wrong qdisc queue being stopped with
netif_stop_subqueue(). Even if this is resolved and the
correct queue is stopped, it can result in a queue being
blocked by TC_PRIO_CONTROL frames unnecessarily. Moving
this into the select_queue routine maintains alignment
between tx_rings and qdisc queues.
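A minimal sketch of the idea, not the driver's actual routine; which ring handles control frames is illustrative:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/pkt_sched.h>

static u16 example_select_queue(struct net_device *dev, struct sk_buff *skb)
{
	/* Remap control traffic here, before the qdisc, so the queue index
	 * the qdisc sees matches the tx_ring that will actually be used and
	 * netif_stop_subqueue() stops the right queue.
	 */
	if (skb->priority == TC_PRIO_CONTROL)
		return dev->real_num_tx_queues - 1;	/* assumed control ring */

	return skb_tx_hash(dev, skb);
}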
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of allocating 128 struct netdev_queue per device, use the
minimum of 128 and the number of possible txqs, to reduce RAM usage
and the size of "tc -s -d class show dev .." output.
This patch fixes Eric Dumazet's patch to set the TX queues to
the correct minimum.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Disabling TSO can cause the dev_watchdog timer to be triggered, because
netif_tx_stop_all_queues is called when TSO is disabled. If the watchdog
timer fires while the queues are stopped and traffic has not recently been
sent on a particular queue, this is falsely identified as a hang and
ndo_tx_timeout() is called. This is occasionally seen during testing.
This removes the netif_tx_stop_all_queues() call; it is not needed. The
scheduler submits skbs with dev_hard_start_xmit(), which checks
netif_needs_gso() and, if so, calls dev_gso_segment(). Disabling TSO will
therefore cause dev_hard_start_xmit() to do the GSO processing. However,
ixgbe does not use the feature flags to determine whether it needs to use
TSO; instead it uses skb->gso_size, so ixgbe will process these frames
correctly regardless of the netdev feature flags.
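A minimal sketch of the per-skb decision described above (generic, not ixgbe's actual TSO path):

#include <linux/skbuff.h>

static bool example_needs_tso_context(const struct sk_buff *skb)
{
	/* skb_is_gso() is true only when skb_shinfo(skb)->gso_size is
	 * non-zero, independent of whether NETIF_F_TSO is currently set
	 * in the netdev feature flags
	 */
	return skb_is_gso(skb);
}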
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Work around 82599 HW issue when HWRSC is enabled on IOMMU enabled
kernels. 82599 HW is updating the header information after setting the
descriptor to done, resulting in DMA mapping/unmapping issues on IOMMU
enabled systems. To work around the issue delay unmapping of first packet
that carries the header information until end of packet is reached.
Signed-off-by: Mallikarjuna R Chilakala <mallikarjuna.chilakala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The PowerPC architecture does not require loads to independent bytes to be
ordered without adding an explicit barrier.
In ixgbe_clean_rx_irq we load the status bit and then load the packet data.
With packet split disabled, if these loads go out of order we get a
stale packet, but we will notice the bad sequence numbers and drop it.
The problem occurs with packet split enabled where the TCP/IP header and data
are in different descriptors. If the reads go out of order we may have data
that doesn't match the TCP/IP header. Since we use hardware checksumming this
bad data is never verified and it makes it all the way to the application.
This bug was found during stress testing and adding this barrier has been shown
to fix it.
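The shape of the fix, sketched; the descriptor field and flag names follow ixgbe's advanced receive descriptor layout, but this is an illustration rather than the exact hunk:

	staterr = le32_to_cpu(rx_desc->wb.upper.status_error);
	if (!(staterr & IXGBE_RXD_STAT_DD))
		break;

	/* make sure the descriptor status is read before any of the
	 * header/packet data it describes; without this read barrier a
	 * weakly ordered CPU such as PowerPC may see stale data
	 */
	rmb();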
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We need to have the WUS register set to all 1's in order for the hardware
to be capable of ever waking up. Set it here in ixgbe_probe().
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The 82598 has an erratum that receipt of pause frames at 1G
could lead to a Tx hang. To avoid this, this patch disables
Rx FC while at 1G speed for all 82598 parts.
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The recent n-tuple patches added some comments to the headers
of the Flow Director functions that aren't accurate. This
cleans them up, and is a purely cosmetic patch.
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch replaces dev->mc_count in all drivers (hopefully I didn't miss
anything). Used spatch and did small tweaks and coding style changes where
it was suitable.
Jirka
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver has undergone significant changes, so the version should
reflect that.
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds n-tuple filter programming to 82599.
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch allocates the ring structures themselves on each
NUMA node along with the buffer_info structures. This way we
don't allocate the entire ring memory on a single node in one
big block, thus reducing NUMA node memory crosstalk.
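A minimal sketch of the allocation pattern (the structure and node-selection details are placeholders):

#include <linux/slab.h>

struct example_ring {
	void *desc;	/* descriptor memory, buffer_info, etc. */
};

static struct example_ring *example_alloc_ring(int node)
{
	struct example_ring *ring;

	/* allocate the ring structure on the node that will service it,
	 * falling back to any node if the node-local allocation fails
	 */
	ring = kzalloc_node(sizeof(*ring), GFP_KERNEL, node);
	if (!ring)
		ring = kzalloc(sizeof(*ring), GFP_KERNEL);

	return ring;
}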
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The default policy for the current driver is to do all its memory
allocation on whatever processor is running insmod/modprobe. This
is less than optimal.
This driver's default mode of operation will now be to use a different
node for each subsequent transmit/receive queue. The most efficient
allocation is then to bind the interrupts so that each queue's interrupt
goes to the CPU where that queue's memory was allocated.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Processing multiple ixgbe_watchdog_task calls may cause
the link_up variable and the IXGBE_FLAG_NEED_LINK_UPDATE flag
to be set incorrectly. In the worst case this causes
netif_carrier_off to be called inappropriately, which
results in an interface that can't be brought up.
Although schedule_work() will only schedule the task if
it is not already on the work queue, the WORK_STRUCT_PENDING
bit is cleared just before the work function is called.
This allows WORK_STRUCT_PENDING to be cleared, the work
function to start, and another task to be scheduled in the meantime.
This patch adds a mutex to the watchdog task. This bug is
triggered by changing DCB settings or by doing extended
cable-pull or reset tests.
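A sketch of the serialization this patch adds (names are illustrative):

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(example_watchdog_lock);

static void example_watchdog_task(struct work_struct *work)
{
	/* serialize link checks so a second invocation scheduled while the
	 * first is still running cannot race on link_up and the
	 * NEED_LINK_UPDATE flag
	 */
	mutex_lock(&example_watchdog_lock);
	/* ... read link state, update carrier, clear the update flag ... */
	mutex_unlock(&example_watchdog_lock);
}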
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A developer had complained of getting lots of warnings:
"eth16 selects TX queue 98, but real number of TX queues is 64"
http://www.mail-archive.com/e1000-devel@lists.sourceforge.net/msg02200.html
As there was no follow up on that bug, I am submitting this
patch assuming that the other return points will not return
invalid txq's, and also that this fixes the bug (not tested).
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit e5a43549f7 (ixgbe: remove
skb_dma_map/unmap calls from driver) looks to have introduced a bug in
ixgbe_tx_map. If we get an error from a PCI DMA call, we loop backwards
through count until it becomes -1 and return that.
The caller of ixgbe_tx_map expects 0 on error, so return that instead.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Call ixgbe_copy_dcb_cfg() earlier in the ixgbe_dcbnl_set_all() so that
we can learn if this is going to fail as early as possible. Previously,
ixgbe_down or ixgbe_close were being called before this check and the
IXGBE_RESETTING bit was being set and cleared. Worse, if this failed,
the corresponding ixgbe_up/ndo_open would not be called.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Set the correct bit BIT_PG_TX when tx PG settings are set.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: PJ Waskiewicz <peter.p.waskiewicz.jr@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch introduces three macros to work with uc list from net drivers.
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Found this problem when testing IPv6 from a KVM guest to a remote
host via an e1000e device on the host.
The following patch fixes the check for an IPv6 GSO packet in the Intel
ethernet drivers to use skb_is_gso_v6(). SKB_GSO_DODGY is also set
when packets are forwarded from a guest.
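A minimal sketch of the corrected check (the wrapper function is illustrative):

#include <linux/skbuff.h>

static bool example_is_ipv6_tso(const struct sk_buff *skb)
{
	/* skb_is_gso_v6() tests the SKB_GSO_TCPV6 type bit, so it stays
	 * true even when SKB_GSO_DODGY is also set on frames forwarded
	 * from a guest; an equality test on gso_type would not
	 */
	return skb_is_gso(skb) && skb_is_gso_v6(skb);
}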
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Inadequate coordination between the PF driver and the VF driver results
in tx hangs in the VF driver when certain actions are performed that
lead to a re-init of the PF. Add a feature to notify active VFs when the PF
is about to re-initialize so that the VFs can take appropriate action.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The PF Reset Done bit should not be set in the extended control register
until the PF has actually completed the bring-up process. It is a
misinterpretation of the purpose of this bit to assume it should be set
when the physical reset of the device is done. Instead it should be used
to indicate to the VFs when the PF is ready to provide them with required
services. This is not until after the PF is finished coming up and ready
to process mailbox events.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This data storage for SW emulated MAC addresses is unlikely to ever be used
so pull it.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When VFs are allocated (as indicated by adapter->num_vfs being non-zero),
the PF pool is no longer zero. Instead it will be the same as the number
of VFs allocated. When setting the VLVF entry for the PF we need to use
the correct pool; otherwise the PF will get VLAN packets from the wire
(because the packet passes VFTA filtering and the PF has the default
pool) but will not get VLAN packets from the VFs, because it has
not set the correct pool bit in the VLVF entry.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The variables count and i are unsigned, so the (<|>=) 0 tests do not work.
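A small stand-alone illustration of the bug class (not the driver code itself):

#include <stdio.h>

int main(void)
{
	unsigned int count = 3;
	int i;

	/* "for (unsigned int i = count; i >= 0; i--)" would never terminate,
	 * because an unsigned value is always >= 0; a signed index makes
	 * the lower-bound check meaningful
	 */
	for (i = (int)count; i >= 0; i--)
		printf("%d\n", i);

	return 0;
}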
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch resolves issues seen when running netconsole and rebooting via
reboot -f. The issue was due to the fact that we were attempting to
perform interrupt actions when the q_vectors and rings had already been
freed via the ixgbe_shutdown routines.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Mallikarjuna R Chilakala <mallikarjuna.chilakala@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Skip the MAC loopback test when the adapter is set to a VT mode such as
SR-IOV or VMDq.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Adds SR-IOV features supported by the 82599 controller to the main driver
module. If the CONFIG_PCI_IOV kernel option is selected then the SR-IOV
features are enabled. Use the max_vfs module option to allocate up to 63
virtual functions per physical port.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>