I found that the PPP subsystem does not work properly when channels
with different speeds are connected to the same bundle.
Problem Description:
As the "ppp_mp_explode" function fragments the sk_buff buffer evenly
among the PPP channels that are connected to a certain PPP unit to
make up a bundle, if we are transmitting using an upper layer protocol
that requires an Ack before sending the next packet (like TCP/IP for
example), we will have a bandwidth bottleneck on the slowest channel
of the bundle.
Let's clarify with an example. Consider a scenario where we have
two PPP links making up a bundle: a slow link (10KB/sec) and a fast
link (1000KB/sec), both working at full bandwidth. On top we
have a TCP/IP stack sending a 1000 byte sk_buff buffer down to the
PPP subsystem. The "ppp_mp_explode" function will divide the buffer
into two fragments of 500B each (we are neglecting all the headers, CRC,
flags etc.). Before the TCP/IP stack sends out the next buffer, it
will have to wait for the ACK response from the remote peer, so it
will have to wait for both fragments to have been sent over the two
PPP links, received by the remote peer and reconstructed. The
resulting behaviour is that, rather than having a bundle working
at 1010KB/sec (the sum of the channel bandwidths), we'll have a bundle
working at 20KB/sec (twice the bandwidth of the slowest channel).
Problem Solution:
The problem has been solved by redesigning the "ppp_mp_explode"
function so that it splits the sk_buff buffer according
to the speeds of the underlying PPP channels (the speeds of the serial
interfaces respectively attached to the PPP channels). Referring to
the above example, the redesigned "ppp_mp_explode" function will now
divide the 1000 byte buffer into two fragments whose sizes are set
according to the speeds of the channels they are going to be
sent on (e.g. 10 bytes on the 10KB/sec channel and 990 bytes on the
1000KB/sec channel). The reworked function delivers the same
performance as the original one in optimal working conditions (i.e. a
bundle made up of PPP links all working at the same speed), while
greatly improving performance on bundles made up of channels
working at different speeds.
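The idea behind the proportional split, as a minimal stand-alone sketch
(the helper and its integer arithmetic are illustrative, not the actual
ppp_mp_explode() code):

#include <stdio.h>

/* Split 'len' bytes across 'nch' channels in proportion to their speeds. */
static void split_by_speed(unsigned int len, const unsigned int *speed,
                           unsigned int *frag, int nch)
{
        unsigned int total = 0, used = 0;
        int i;

        for (i = 0; i < nch; i++)
                total += speed[i];

        for (i = 0; i < nch; i++) {
                if (i == nch - 1)
                        frag[i] = len - used;   /* last channel takes the remainder */
                else
                        frag[i] = len * speed[i] / total;
                used += frag[i];
        }
}

int main(void)
{
        /* the example above: 10KB/sec and 1000KB/sec links, 1000 byte buffer */
        unsigned int speed[2] = { 10, 1000 };
        unsigned int frag[2];

        split_by_speed(1000, speed, frag, 2);
        printf("slow link: %u bytes, fast link: %u bytes\n", frag[0], frag[1]);
        return 0;
}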
Signed-off-by: Gabriele Paoloni <gabriele.paoloni@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The original patch was submitted last year but wasn't discussed or applied
because the maintainers were not CCed. I only fixed some formatting errors,
but from what I saw the tulip driver is very badly formatted and needs further work.
Original description:
This patch fixes an MTU problem which occurs when using 802.1q VLANs. We
should allow receiving frames of up to 1518 bytes in length, instead of
1514.
Based on patch written by Ben McKeegan for 2.4.x kernels. It is archived
at http://www.candelatech.com/~greear/vlan/howto.html#tulip
I've adjusted a few things to make it apply on 2.6.x kernels.
Tested on D-Link DFE-570TX quad-fastethernet card.
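For reference, the size change amounts to accepting frames up to
VLAN_ETH_FRAME_LEN; the helper below only illustrates the check and is not
the actual tulip receive path:

#include <linux/if_ether.h>     /* ETH_FRAME_LEN == 1514 */
#include <linux/if_vlan.h>      /* VLAN_HLEN == 4, VLAN_ETH_FRAME_LEN == 1518 */

/* A tagged frame is 4 bytes longer than the old 1514 byte maximum. */
static bool rx_frame_len_ok(unsigned int len)
{
        return len <= VLAN_ETH_FRAME_LEN;
}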
Signed-off-by: Tomasz Lemiech <szpajder@staszic.waw.pl>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ben McKeegan <ben@netservers.co.uk>
Acked-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
It closes a race in phy_stop_machine that occurs when phy_timer is reprogrammed
(from phy_state_machine) between del_timer_sync and cancel_work_sync.
Without this change it could lead to a crash if the phy_device were freed after
phy_stop_machine (the timer would fire and schedule the freed work).
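The window, in simplified form (a sketch of the call ordering only; the
state_queue field name follows the phylib of this period and the bodies
are illustrative, not the exact driver code):

void phy_stop_machine(struct phy_device *phydev)
{
        del_timer_sync(&phydev->phy_timer);
        /*
         * phy_state_machine() may still be running on the workqueue here
         * and can mod_timer() the phy_timer that was just deleted ...
         */
        cancel_work_sync(&phydev->state_queue);
        /*
         * ... leaving a re-armed timer behind after the work is cancelled.
         * If the phy_device is then freed, the timer fires and schedules
         * the freed work.
         */
}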
Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Acked-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes the circular locking problem by changing the locking strategy
concerning the logging of firmware handles.
Signed-off-by: Jan-Bernd Themann <themann@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Changing the mac address when a macvlan device is up will leave the
device on the wrong hash chain, making it impossible to receive
packets.
There is also no checking of the mac address set on the macvlan,
allowing a misconfiguration to grab packets from the underlying device
or another macvlan.
To resolve these problems I update the hash table of macvlans when the
mac address of a macvlan changes, and when updating the hash table
I verify that the new mac address is usable.
The result is well-defined and predictable, if not perfect, handling of
macvlan mac addresses.
To keep the code clear I have created a set of hash table maintenance
functions in macvlan, so I am not open-coding the hash function and the
logic needed to update the hash table all over the place.
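A hedged sketch of what such maintenance helpers look like (the names, the
single-byte hash and macvlan_addr_busy() are assumptions for illustration,
not necessarily the exact macvlan.c code):

static void macvlan_hash_del(struct macvlan_dev *vlan)
{
        hlist_del_rcu(&vlan->hlist);
}

static void macvlan_hash_add(struct macvlan_dev *vlan)
{
        struct macvlan_port *port = vlan->port;
        const unsigned char *addr = vlan->dev->dev_addr;

        hlist_add_head_rcu(&vlan->hlist, &port->vlan_hash[addr[5]]);
}

/* Re-hash on an address change, refusing addresses already taken by the
 * lower device or another macvlan (macvlan_addr_busy() is assumed to
 * perform that check). */
static int macvlan_hash_change_addr(struct macvlan_dev *vlan,
                                    const unsigned char *addr)
{
        if (macvlan_addr_busy(vlan->port, addr))
                return -EBUSY;

        macvlan_hash_del(vlan);
        memcpy(vlan->dev->dev_addr, addr, ETH_ALEN);
        macvlan_hash_add(vlan);
        return 0;
}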
Signed-off-by: Eric Biederman <ebiederm@aristanetworks.com>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
When running in a network namespace whose only link to
the outside world is a macvlan device, not being
able to create another macvlan is a real pain.
So modify macvlan creation so that creating a macvlan on a macvlan is
automatically forwarded to become a creation of a macvlan on the
underlying network device.
Signed-off-by: Eric Biederman <ebiederm@aristanetworks.com>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch from Juha Leppanen suppresses a false warning if the eeprom
load succeeds on the very last attempt.
Juha> In function smsc911x_open smsc911x_reg_read+udelay can be run 50
Juha> times with timeout reaching -1, and the following if statement
Juha> does not catch the timeout and no warning is issued. Also if the
Juha> 50th smsc911x_reg_read is GOOD, loop is exited with timeout as 0
Juha> and bogus warning issued. Replace testing order and --timeout
Juha> instead of timeout-- and now max 50 smsc911x_reg_read's are done,
Juha> with max 49 udelays.
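The resulting loop, abridged (a sketch of the pattern in smsc911x_open();
the register and flag names are the driver's, the surrounding code is
omitted):

        timeout = 50;
        while ((smsc911x_reg_read(pdata, E2P_CMD) & E2P_CMD_EPC_BUSY_) &&
               --timeout)
                udelay(10);

        if (unlikely(timeout == 0))
                SMSC_WARNING(IFUP,
                             "Timed out waiting for EEPROM busy bit to clear");

With --timeout the loop performs at most 50 reads and 49 udelays, exits with
timeout == 0 only on a genuine timeout, and keeps timeout non-zero when the
last read succeeds.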
Signed-off-by: Steve Glendinning <steve.glendinning@smsc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch from Juha Leppanen suppresses a false warning if a fast
forward operation succeeds on the very last attempt.
Juha> If smsc911x_reg_read loop is executed 500 times, timeout reaches 0
Juha> and the 500th smsc911x_reg_read result in val is ignored. If
Juha> testing order is changed, then val is checked first. The 500th
Juha> reg_read might be GOOD, why ignore it!
Signed-off-by: Steve Glendinning <steve.glendinning@smsc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 1577ecef76
Author: Andy Fleming <afleming@freescale.com>
Date: Wed Feb 4 16:42:12 2009 -0800
netdev: Merge UCC and gianfar MDIO bus drivers
left out the deletion of gianfar_mii.c.
Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a similar patch to the one for 8139cp posted yesterday, so the same comment applies:
Until now there was no way to set the mac address on a running 8139too device.
This is, for example, needed when you want to use this NIC as a bonding slave in
a bonding device in mode balance-alb. This simple patch allows it.
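A hedged sketch of the handler shape such a change adds (the body below is
generic; the real patch additionally reprograms the chip's MAC address
registers under the driver lock so the running NIC picks up the new
address immediately):

static int rtl8139_set_mac_address(struct net_device *dev, void *p)
{
        struct sockaddr *addr = p;

        if (!is_valid_ether_addr(addr->sa_data))
                return -EADDRNOTAVAIL;

        memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);

        /* ... write the new address to the hardware MAC registers here ... */

        return 0;
}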
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Until now there was no way to set the mac address on a running 8139cp device.
This is, for example, needed when you want to use this NIC as a bonding slave in
a bonding device in mode balance-alb. This simple patch allows it.
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver currently ignores local or remote link faults
raised at the mac layer. This patch fixes it.
Our mac however only advertises link events, so wait for the
phy to stabilize the link, then enable mac link event interrupts.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the heuristics of the workaround for unlocking a hung mac:
- reduce Tx mac toggling by enabling Tx drain before resetting the mac
- take only Tx (lack of) activity into account
- update the monitoring counter range to 64 bits
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Under RX pressure, the HW might generate a high load of interrupts
to signal mac fifo or free list overflows.
Disable these interrupts, and poll the relevant status bits
to maintain stats.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Separate the TX and RX reclaim handlers.
Don't disable interrupts in the RX reclaim handler.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eliminate a cache miss when accessing the CPL header within
the first aggregated buffer.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update skb truesize correctly for the 2nd buffer from a Jumbo frame
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Release page chunk reference in case we fail to map it.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ring the free list doorbell less frequently,
specifically every quarter of the active FL
size.
Signed-off-by: Divy Le Ray <divy@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix printk format warnings:
drivers/net/wimax/i2400m/netdev.c:523: warning: format '%zu' expects type 'size_t', but argument 7 has type 'unsigned int'
drivers/net/wimax/i2400m/netdev.c:548: warning: format '%zu' expects type 'size_t', but argument 7 has type 'unsigned int'
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch clears the irqstatus register with the exact same events it
has read from it. Since the read-write operation is not atomic, a new
irqstatus bit could have been set in between these operations and would
then be cleared accidentally.
Secondly, we no longer need any spin lock protection when
scheduling/completing the napi poll, as the ISR will not execute
anymore (we now turn off all interrupts).
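Abridged, the ack now looks like this (NvRegIrqStatus and NVREG_IRQSTAT_MASK
are forcedeth register definitions; the surrounding ISR code is omitted):

        u32 events;

        events = readl(base + NvRegIrqStatus);
        /* ack exactly the bits we read, so an event raised after the read
         * is not cleared by accident ... */
        writel(events, base + NvRegIrqStatus);
        /* ... where previously a full mask was written back:
         *      writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
         */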
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch modifies the throughput mode poll settings to reduce the
number of interrupts. This is only used by older hardware that needs a
timer irq in throughput mode.
Secondly, this patch increases the default rx ring size from 128 to 512. This
drastically improves bandwidth utilization for small packet sizes, i.e.
512 bytes.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds the logic to moderate the interrupts by changing the
mode between throughput and poll. If there has been a long period
without any burst of network load, the code will transition to pure
throughput mode (where each tx/rx/other event will cause an interrupt). If
bursts of network load occur, it will transition to poll based mode to
help reduce cpu utilization (it will not interrupt on each packet) while
maintaining optimum network bandwidth utilization.
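A hedged sketch of that decision (the helper name, the quiet_count
bookkeeping and the threshold constants are illustrative assumptions, not
necessarily the driver's exact code; NVREG_IRQMASK_THROUGHPUT and
NVREG_IRQMASK_CPU are existing forcedeth masks):

static void nv_update_irq_mode(struct fe_priv *np, int total_work)
{
        if (total_work > NV_DYNAMIC_THRESHOLD) {
                /* burst of traffic: poll based ('cpu') mode, which does
                 * not interrupt on every packet */
                np->quiet_count = 0;
                np->irqmask = NVREG_IRQMASK_CPU;
        } else if (np->quiet_count < NV_DYNAMIC_MAX_QUIET_COUNT) {
                np->quiet_count++;
        } else {
                /* long quiet stretch: back to pure throughput mode, where
                 * every tx/rx/other event interrupts */
                np->irqmask = NVREG_IRQMASK_THROUGHPUT;
        }
}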
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch is only a subset of changes so that it is easier to see the
modifications. This patch removes the isr 'for' loop and shifts all the
logic to account for new tab spacing.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A new optimization mode called Dynamic has been added. This is the mode
in which the interrupt moderation logic dynamically switches between pure
throughput mode and poll based (called 'cpu') mode.
Also, for newer chipsets, the timer irq is not needed for throughput
mode. Secondly, since we are modifying the irqmask to change between
modes, msix is not supported.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The napi poll routine has been modified to handle all interrupt events
and process them accordingly. Therefore, the ISR will now only schedule
the napi poll and disable all interrupts, instead of just disabling the
rx interrupt.
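The resulting ISR shape, abridged (the real nv_nic_irq() carries some
extra bookkeeping):

static irqreturn_t nv_nic_irq(int foo, void *data)
{
        struct net_device *dev = (struct net_device *)data;
        struct fe_priv *np = netdev_priv(dev);
        u8 __iomem *base = get_hwbase(dev);

        np->events = readl(base + NvRegIrqStatus);
        writel(np->events, base + NvRegIrqStatus);
        if (!(np->events & np->irqmask))
                return IRQ_NONE;

        nv_disable_irq(dev);            /* mask everything, not just rx */
        napi_schedule(&np->napi);       /* the poll routine does the work */

        return IRQ_HANDLED;
}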
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are two tx_done routines to handle tx completion processing. Both
of these functions now take in a limit value and return the number of tx
completions. This will be used by a future patch to determine the total
amount of work done.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch removes unnecessary overhead code. Firstly, there is no need
to mask off unwanted interrupts since we will be checking against the
irqmask field anyway. Secondly, there has been no value in the last few
years from detecting error or unknown interrupts.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch will save the irq events in the driver's context so that the
napi routine knows which interrupts have occurred. Subsequent changes
will be moving all interrupt processing into the napi poll routine.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch removes support for msix running in conjunction with napi.
There have been reported issues regarding the behaviour of the irqmask and
the generation of interrupts by the HW when in MSIX mode. When running napi,
the driver is constantly turning the irqmask off/on. For the time being,
I am going to disable it until I can root cause the issue.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds missing napi enable/disable calls.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Newer versions of the stats feature would not encompass all older
versions. This would result in retrieving only a subset of all available
stats in HW.
Signed-off-by: Ayaz Abdulla <aabdulla@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Protocols that use packet_type can be placed in the __read_mostly section
for better locality. Eliminate any unnecessary initializations to NULL.
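For example, the IPv4 handler declaration ends up looking like this
(abridged from net/ipv4/af_inet.c; other protocol handlers are converted
the same way):

static struct packet_type ip_packet_type __read_mostly = {
        .type = cpu_to_be16(ETH_P_IP),
        .func = ip_rcv,
        /* explicit ".member = NULL" initializers are simply dropped */
};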
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the huge board config structure from each instance; read
only the necessary fields from flash.
Replace board_type with port_type (1G/10G); there's another
board_type field describing the card type (SFP/XFP/CX4).
Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
MAX_RCV_CTX was set to 1; there's only one rx context per
PCI function.
Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rearrange open and close into hardware attach(), detach() and
nic up() and down(). This will be used for suspend/resume
subsequently.
Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
o remove unused rx fragment handling code.
o improve the check for status descriptor ownership.
Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There was a bug which occasionally caused failure in PRAM initialization after
a cold boot.
The version number is also incremented to 1.45.27.
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a proper cast to the argument of the PAGE_ALIGN macro so that the output
won't depend on its original type. Without this cast the aligned value will be
truncated to the size of the argument type.
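A standalone illustration of the truncation (not the bnx2x code itself):
PAGE_ALIGN() does its arithmetic in the type of its argument, so a value
whose aligned result does not fit that type wraps around.

        u32 len = 0xfffff001;
        u64 aligned;

        aligned = PAGE_ALIGN(len);      /* computed in 32 bits: wraps to 0 */
        aligned = PAGE_ALIGN((u64)len); /* computed in 64 bits: 0x100000000 */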
Reported-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Tested-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
sge_buff_size may not be more than 0xffff.
Reported-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com>
Tested-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This was introduced in an earlier net-next patch.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With some ASIC configurations, xmit of frames smaller than 60 bytes may
fail.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Moving netif_napi_del() up the call chain so it will get called from all
exit points.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>