From: Randy Dunlap <rdunlap@xenotime.net>
Fix implicit nocast warnings in ip_vs code:
net/ipv4/ipvs/ip_vs_app.c:631:54: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix implicit nocast warnings in decnet code:
net/decnet/af_decnet.c:458:40: warning: implicit cast to nocast type
net/decnet/dn_nsp_out.c:125:35: warning: implicit cast to nocast type
net/decnet/dn_nsp_out.c:219:29: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix implicit nocast warnings in atm code:
net/atm/atm_misc.c:35:44: warning: implicit cast to nocast type
drivers/atm/fore200e.c:183:33: warning: implicit cast to nocast type
Also use kzalloc() instead of kmalloc().
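These warnings come from sparse: an allocation-flags argument annotated as nocast is being passed a plain integer. The usual fix, sketched below with hypothetical function names, is to carry the annotated type all the way through (and, where it helps, to switch to kzalloc() as done here):

#include <linux/types.h>
#include <linux/slab.h>

/* Before: sparse warns because gfp_flags is a plain int, so the call into
 * kmalloc() is an implicit cast to a nocast type. */
static void *example_alloc_old(size_t size, int gfp_flags)
{
	return kmalloc(size, gfp_flags);
}

/* After: keep the annotated allocation-flags type end to end. */
static void *example_alloc(size_t size, unsigned int __nocast gfp_flags)
{
	return kzalloc(size, gfp_flags);	/* kzalloc() also zeroes the buffer */
}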
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The patch below introduces special thresholds to keep root node in the trie
large. This gives a flatter tree at the cost of a modest memory increase.
Overall it seems to be a gain, and this was also proposed by one of the authors
of the paper in a recent seminar.
Main table after loading 123 k routes.
Aver depth: 3.30
Max depth: 9
Root-node size 12 bits
Total size: 4044 kB
With the patch:
Aver depth: 2.78
Max depth: 8
Root-node size 15 bits
Total size: 4150 kB
An increase of 8-10% was seen in forwarding performance for an rDoS attack.
Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix implicit nocast warnings in ieee80211 code, including __nocast:
net/ieee80211/ieee80211_tx.c:215:9: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Fix implicit nocast warnings in ieee80211 code:
net/ieee80211/ieee80211_tx.c:215:9: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
This patch gets rid of a bogus __in_dev_put() in pktgen.c. This was
spotted by Suzanne Wood.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following patch renames __in_dev_get() to __in_dev_get_rtnl() and
introduces __in_dev_get_rcu() to cover the second case.
1) RCU with refcnt should use in_dev_get().
2) RCU without refcnt should use __in_dev_get_rcu().
3) All others must hold RTNL and use __in_dev_get_rtnl().
There is one exception in net/ipv4/route.c which is in fact a pre-existing
race condition. I've marked it as such so that we remember to fix it.
This patch is based on suggestions and prior work by Suzanne Wood and
Paul McKenney.
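A minimal sketch of the three usage patterns (the accessor names are the ones introduced by this patch; the surrounding code is illustrative):

#include <linux/inetdevice.h>
#include <linux/rtnetlink.h>

static void in_dev_usage_sketch(struct net_device *dev)
{
	struct in_device *in_dev;

	/* 1) RCU reader that wants to keep the reference afterwards */
	in_dev = in_dev_get(dev);
	if (in_dev) {
		/* ... use in_dev outside the read-side critical section ... */
		in_dev_put(in_dev);
	}

	/* 2) Pure RCU reader, no refcount taken */
	rcu_read_lock();
	in_dev = __in_dev_get_rcu(dev);
	if (in_dev) {
		/* ... only valid inside the rcu_read_lock() section ... */
	}
	rcu_read_unlock();

	/* 3) Caller already holds the RTNL */
	ASSERT_RTNL();
	in_dev = __in_dev_get_rtnl(dev);
	/* ... */
}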
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Meelis Roos <mroos@linux.ee> wrote:
> RK> My firewall setup relies on proxyarp working. However, with 2.6.14-rc3,
> RK> it appears to be completely broken. The firewall is 212.18.232.186,
>
> Same here with some kernel between 14-rc2 and 14-rc3 - no response to
> ARP on a proxyarp gateway. Sorry, no exact revision and no more debugging
> yet since it's a production gateway.
The breakage is caused by the change to use the CB area for flagging
whether a packet has been queued due to proxy_delay. This area gets
cleared every time arp_rcv gets called. Unfortunately packets delayed
due to proxy_delay also go through arp_rcv when they are reprocessed.
In fact, I can't think of a reason why delayed proxy packets should go
through netfilter again at all. So the easiest solution is to bypass
that and go straight to arp_process.
This is essentially what would've happened before netfilter support
was added to ARP.
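The resulting bypass amounts to something like this (a sketch: the helper re-processes delayed packets directly, its exact name and the way it is hooked into the neighbour code are not quoted here):

/* Delayed proxy packets are re-processed here rather than via arp_rcv(),
 * so the skb CB area is not cleared again and the NF_ARP_IN netfilter
 * hook is not traversed a second time. */
static void parp_redo(struct sk_buff *skb)
{
	arp_process(skb);
}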
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
During the build for ARM machine type "fortunet", this error occurred:
CC net/sysctl_net.o
net/sysctl_net.c:36: error: 'core_table' undeclared here (not in a function)
It appears that the following configuration settings cause this error
due to a missing include:
CONFIG_SYSCTL=y
CONFIG_NET=y
# CONFIG_INET is not set
core_table appears to be declared in net/sock.h. If CONFIG_INET were
defined, net/sock.h would have been included via:
sysctl_net.c -> net/ip.h -> linux/ip.h -> net/sock.h
so include it directly.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Arnaldo and I agreed it could be applied now, because I have other
pending patches depending on this one (Thank you Arnaldo)
(The other important patch moves skc_refcnt into a separate cache line,
so that SMP/NUMA performance doesn't suffer from cache line ping-pongs.)
1) First some performance data :
--------------------------------
tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established()
The most time critical code is :
sk_for_each(sk, node, &head->chain) {
if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
goto hit; /* You sunk my battleship! */
}
The sk_for_each() does use prefetch() hints, but only the beginning of
"struct sock" is prefetched.
As INET_MATCH's first comparison uses inet_sk(__sk)->daddr, which is far
away from the beginning of "struct sock", it has to bring a cold cache
line into the CPU cache. Each iteration has to use at least 2 cache
lines.
This can be problematic if some chains are very long.
2) The goal
-----------
The idea I had is to change things so that INET_MATCH() may return
FALSE in 99% of cases using only the data already in the CPU cache,
i.e. one cache line per iteration.
3) Description of the patch
---------------------------
Adds a new 'unsigned int skc_hash' field in 'struct sock_common',
filling a 32-bit hole on 64-bit platforms.
struct sock_common {
unsigned short skc_family;
volatile unsigned char skc_state;
unsigned char skc_reuse;
int skc_bound_dev_if;
struct hlist_node skc_node;
struct hlist_node skc_bind_node;
atomic_t skc_refcnt;
+ unsigned int skc_hash;
struct proto *skc_prot;
};
Store the full hash in this 32-bit field, not masked by (ehash_size - 1).
Using this full hash as the first comparison done in INET_MATCH permits
us to immediately skip the element, without touching a second cache
line, in case of a miss.
Suppress the sk_hashent/tw_hashent fields since skc_hash (aliased to
sk_hash and tw_hash) already contains the slot number if we mask with
(ehash_size - 1).
File include/net/inet_hashtables.h
64-bit platforms:
#define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
(((__sk)->sk_hash == (__hash)) && \
((*((__u64 *)&(inet_sk(__sk)->daddr)))== (__cookie)) && \
((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports)) && \
(!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
32-bit platforms:
#define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
(((__sk)->sk_hash == (__hash)) && \
(inet_sk(__sk)->daddr == (__saddr)) && \
(inet_sk(__sk)->rcv_saddr == (__daddr)) && \
(!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))
- Adds a prefetch(head->chain.first) in
__inet_lookup_established()/__tcp_v4_check_established() and
__inet6_lookup_established()/__tcp_v6_check_established() and
__dccp_v4_check_established() to bring the first element of the list
into cache, before the {read|write}_lock(&head->lock).
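For illustration, a sketch of an established-hash lookup using the new hash-first comparison (not the exact kernel function; it relies on the macros quoted above):

static struct sock *lookup_established_sketch(struct inet_ehash_bucket *head,
		unsigned int hash, __u64 acookie, __u32 saddr, __u32 daddr,
		__u32 ports, int dif)
{
	struct sock *sk;
	struct hlist_node *node;

	/* warm up the first element's sock_common before taking the lock */
	prefetch(head->chain.first);
	read_lock(&head->lock);
	sk_for_each(sk, node, &head->chain) {
		/* sk_hash lives in sock_common and is already hot from the
		 * prefetch; a mismatch skips this element without touching
		 * the cold cache line holding inet_sk(sk)->daddr. */
		if (INET_MATCH(sk, hash, acookie, saddr, daddr, ports, dif)) {
			sock_hold(sk);
			read_unlock(&head->lock);
			return sk;
		}
	}
	read_unlock(&head->lock);
	return NULL;
}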
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
I've found the problem in general. It affects any 64-bit
architecture. The problem occurs when you change the system time.
Suppose that when you boot, your system clock is ahead by a day.
This gets recorded in skb_tv_base. You then wind the clock back
by a day. From that point onwards the offset will be negative, which
essentially overflows the 32-bit variables it is stored in.
In fact, why don't we just store the real time stamp in those 32-bit
variables? After all, we're not going to overflow for quite a while
yet.
When we do overflow, we'll need a better solution of course.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
initialized which can cause problems for drivers that expect the network
structure to be completely filled in.
This patch will make sure the network is filled in as much as possible.
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
results in a lot of duplicate code.
This will move the parsing stage into a separate function.
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
net/ieee80211/ieee80211_tx.c:215:9: warning: implicit cast to nocast type
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
header, and I also added the ieee80211_is_cck_rate counterpart.
Various drivers currently create their own version of these functions,
but I guess the ieee80211 stack is the best place to provide such
routines.
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Handle better the case where the sender sends full sized
frames initially, then moves to a mode where it trickles
out small amounts of data at a time.
This known problem is even mentioned in the comments
above tcp_grow_window() in tcp_input.c, specifically:
...
* The scheme does not work when sender sends good segments opening
* window and then starts to feed us spagetti. But it should work
* in common situations. Otherwise, we have to rely on queue collapsing.
...
When the sender gives full-sized frames, the "struct sk_buff" overhead
from each packet is small. So we'll advertise a larger window.
If the sender moves to a mode where small segments are sent, this
ratio becomes tilted to the other extreme and we start overrunning
the socket buffer space.
tcp_clamp_window() tries to address this, but its clamping of
tp->window_clamp is a wee bit too aggressive for this particular case.
Fix confirmed by Ion Badulescu.
Signed-off-by: David S. Miller <davem@davemloft.net>
But retain the comment fix.
Alexey Kuznetsov has explained the situation as follows:
--------------------
I think the fix is incorrect. Look, the RFC function init_cwnd(mss) is
not continuous: e.g. for mss=1095 it needs an initial window of 1095*4,
but for mss=1096 it is 1096*3. We do not know exactly what mss the
sender used for its calculations. If we advertised 1096 (and calculated
an initial window of 3*1096), the sender could limit it to some value
< 1096 and would then need a window of his_mss*4 > 3*1096 to send its
initial burst.
See?
So, the honest function for the initial rcv_wnd derived from
tcp_init_cwnd() is:
init_rcv_wnd(mss) = min { init_cwnd(mss1)*mss1 : mss1 <= mss }
It is something like:
if (mss < 1096)
return mss*4;
if (mss < 1096*2)
return 1096*4;
return mss*2;
(I just scribbled a graph on a piece of paper; it is difficult to see
or to explain without it.)
I selected it differently, giving more window than is strictly
required. The initial receive window must be large enough to allow a
sender following the RFC (or just setting initial cwnd to 2) to send
its initial burst. But besides that it is arbitrary, so I decided to
give slack space of one segment.
Actually, the logic was:
If mss is low/normal (<= ethernet), set the window to receive more than
the initial burst allowed by the RFC under the worst conditions,
i.e. mss*4. This gives slack space of 1 segment for ethernet frames.
For MSSes slightly larger than an ethernet frame, take 3. Try to give
slack space of 1 frame again.
If mss is huge, force 2*mss. No slack space.
The value 1460*3 is really confusing. The minimal one is 1096*2, but
besides that it is an arbitrary value. It was meant to be ~4096. 1460*3
is just the magic number from the RFC, and 1460*3 = 1095*4 is the magic
:-), so I guess fingers typed it by themselves.
--------------------
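A compact restatement of the piecewise choice described above, as a self-contained helper (hypothetical name; the constants are the ones from the explanation):

/* Initial receive window: room for the sender's RFC initial burst plus
 * roughly one segment of slack, except for huge MSS values. */
static unsigned int init_rcv_wnd_sketch(unsigned int mss)
{
	if (mss < 1096)
		return mss * 4;		/* <= ethernet-sized: one segment of slack */
	if (mss < 1096 * 2)
		return 1096 * 4;	/* slightly above ethernet: ~3 segments + slack */
	return mss * 2;			/* huge MSS: no slack */
}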
Signed-off-by: David S. Miller <davem@davemloft.net>
A bunch of create_proc_dir_entry() calls creating directories had crept
in since the last sweep; converted to proc_mkdir().
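The conversion pattern is roughly the following (a sketch; the directory name and parent are illustrative, assuming the old calls created directories via the generic entry constructor):

#include <linux/proc_fs.h>

/* before:
 *	dir = create_proc_entry("example", S_IFDIR, proc_net);
 * after:
 */
struct proc_dir_entry *dir = proc_mkdir("example", proc_net);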
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
From: Oliver Dawid <oliver@helios.de>
we found a bug in net/appletalk/ddp.c concerning broadcast packets. In
kernel 2.4 it was working fine. The bug first occurred 4 years ago when
switching to the new SNAP layer handling. This bug can be split up into
a sending (1) and a reception (2) problem:
Sending(1)
In kernel 2.4 broadcast packets were sent to a matching ethernet device
and atalk_rcv() was called to receive them as "loopback" (so loopback
packets were shortcutted and handled in the DDP layer).
When switching to the new SNAP structure, this shortcut was removed and
the loopback packet was sent to the SNAP layer. The author forgot to
replace the remote device pointer with the loopback device pointer
before sending the packet to the SNAP layer (by calling
ddp_dl->request()), therefore the packet was not sent back by the
underlying layers to ddp's atalk_rcv().
Reception(2)
In atalk_rcv() a packet received by this loopback mechanism now contains
the (right) loopback device pointer (in kernel 2.4 it was the (wrong)
remote ethernet device pointer), and therefore no matching socket will
be found to deliver this packet to. Because a broadcast packet should be
sent to the first matching socket (as is done in many other protocols
(?)), we removed the network comparison in the broadcast case.
Below you will find a patch to correct this bug. It is diffed against
kernel 2.6.14-rc1.
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
We know the thing is at least 2-byte aligned, so take
advantage of that instead of invoking memcmp() which
results in truly horrifically inefficient code because
it can't assume anything about alignment.
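The patched code is not quoted here, but the technique looks roughly like this for a 6-byte address that is known to be at least 2-byte aligned (illustrative only):

/* Compare two 6-byte addresses using three 16-bit loads instead of
 * memcmp(); a non-zero result means the addresses differ. */
static inline unsigned int aligned_addr_cmp(const unsigned short *a,
					    const unsigned short *b)
{
	return (a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2]);
}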
Signed-off-by: David S. Miller <davem@davemloft.net>
* Don't bother with proto registering if rose_ndevs is bad.
* Make escape structure more coherent.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
I have been experimenting with loadable protocol modules, and ran into
several issues with module reference counting.
The first issue was that __module_get failed at the BUG_ON check at
the top of the routine (checking that my module reference count was
not zero) when I created the first socket. When sk_alloc() is called,
my module reference count was still 0. When I looked at why sctp
didn't have this problem, I discovered that sctp creates a control
socket during module init (when the module ref count is not 0), which
keeps the reference count non-zero. This section has been updated to
address the point Stephen raised about checking the return value of
try_module_get().
The next problem arose when my socket init routine returned an error.
This resulted in my module reference count being decremented below 0.
My socket ops->release routine was also being called. The issue here
is that sock_release() calls the ops->release routine and decrements
the ref count if sock->ops is not NULL. Since the socket probably
didn't get correctly initialized, this should not be done, so we will
set sock->ops to NULL because we will not call try_module_get().
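A sketch of the resulting ownership rule in the socket-creation path (the surrounding names and error value are illustrative):

/* Only publish sock->ops once the protocol module reference is held, so a
 * later sock_release() never calls ->release() on a half-initialized socket
 * or drops a reference that was never taken. */
if (!try_module_get(ops->owner)) {
	err = -EAFNOSUPPORT;
	goto out;
}
sock->ops = ops;	/* from here on, sock_release() may put the reference */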
While searching for another bug, I also noticed that sys_accept() has
a possibility of doing a module_put() when it did not do an
__module_get so I re-ordered the call to security_socket_accept().
Signed-off-by: Frank Filz <ffilzlnx@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use iteration instead of recursion. Fraglists within fraglists
should never occur, so we BUG check this.
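The iterative walk amounts to the following (a sketch; the containing routine is whichever one the patch converts):

struct sk_buff *list;

for (list = skb_shinfo(skb)->frag_list; list; list = list->next) {
	/* fraglists within fraglists should never occur */
	BUG_ON(skb_shinfo(list)->frag_list);
	/* ... process 'list' exactly like a top-level skb ... */
}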
Signed-off-by: Daniel Phillips <phillips@istop.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If we double-add a neighbour entry timer, which should be
impossible but has been reported, dump the current state of
the entry so that we can debug this.
Signed-off-by: David S. Miller <davem@davemloft.net>
When you've enabled conntrack and NAT as a module (standard case in all
distributions), and you've also enabled the new conntrack netlink
interface, loading ip_conntrack_netlink.ko will auto-load iptable_nat.ko.
This causes a huge performance penalty, since for every packet you iterate
the nat code, even if you don't want it.
This patch splits iptable_nat.ko into the NAT core (ip_nat.ko) and the
iptables frontend (iptable_nat.ko). Therefore, ip_conntrack_netlink.ko will
only pull ip_nat.ko, but not the frontend. ip_nat.ko will "only" allocate
some resources, but not affect runtime performance.
This separation is also a nice step in anticipation of new packet filters
(nf-hipac, ipset, pkttables) being able to use the NAT core.
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
These broke existing apps, and the checks are superfluous
as the values being verified aren't even used.
Signed-off-by: David S. Miller <davem@davemloft.net>
> Steps to reproduce:
> 1. Boot Linux, do NOT setup any IPv6 routes
> 2. ip route get 2001::1 (or any unroutable address)
Well caught. We never set rt6i_idev on ip6_null_entry.
This patch should make the problem go away.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
It's on the stack and declared as "unsigned char[]", but pointers
and similar can be in here thus we need to give it an explicit
alignment attribute.
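The fix is essentially an explicit alignment on the declaration (a sketch; the buffer name, size, and exact alignment value are illustrative):

/* On-stack scratch space that will hold pointers and similar; don't rely on
 * the byte alignment a plain char array would otherwise get. */
unsigned char buf[64] __attribute__((aligned(sizeof(void *))));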
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The GRE, SCTP and TCP protocol helpers did not call
ip_conntrack_event_cache() when updating ct->status. This patch adds
the respective calls.
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
From: Amos Waterland <apw@us.ibm.com>
If CONFIG_PROC_FS is not selected, the compiler emits this warning:
net/core/neighbour.c:64: warning: `neigh_stat_seq_fops' defined but not used
Which is correct, because neigh_stat_seq_fops is in fact only
initialized and used by code that is protected by CONFIG_PROC_FS. So
this patch fixes that up.
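That is, the definition gets wrapped the same way as its only users (sketch):

#ifdef CONFIG_PROC_FS
static struct file_operations neigh_stat_seq_fops = {
	/* ... seq_file operations ... */
};
#endif /* CONFIG_PROC_FS */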
Signed-off-by: Amos Waterland <apw@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
We have to introduce a separate Kconfig menu entry for the NFQUEUE targets.
They cannot "just" depend on nfnetlink_queue, since nfnetlink_queue could
be linked into the kernel, whereas iptables can be a module.
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The in-kernel portmapper does in fact need a reserved port when registering
new services, but not when performing bind queries.
Ensure that we distinguish between the two cases.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Currently rpc_mkdir/rpc_rmdir and rpc_mkpipe/mk_unlink have an API that's
a little unfortunate. They take a path relative to the rpc_pipefs root and
thus need to perform a full lookup. If you look at debugfs or usbfs they
always store the dentry for directories they created and thus can pass in
a dentry + single pathname component pair into their equivalents of the
above functions.
And in fact rpc_pipefs actually stores a dentry for all but one component so
this change not only simplifies the core rpc_pipe code but also the callers.
Unfortunately this code path is only used by the NFS4 idmapper and
AUTH_GSSAPI, for which I don't have a test environment. Could someone give
it a spin? It's the last bit needed before we can rework the
lookup_hash API
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
In fact, ->set_buffer_size should be completely functionless for non-UDP.
Test-plan:
Check socket buffer size on UDP sockets over time.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Each transport implementation can now set unique bind, connect,
reestablishment, and idle timeout values. These are variables,
allowing the values to be modified dynamically. This permits
exponential backoff of any of these values, for instance.
As an example, we implement exponential backoff for the connection
reestablishment timeout.
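As an illustration of that backoff (the field and the cap constant are assumptions, not quoted from the patch):

/* After each failed or dropped connection attempt, double the delay before
 * the next reestablishment, up to a cap; reset to the initial value once a
 * connection is established again. */
xprt->reestablish_timeout <<= 1;
if (xprt->reestablish_timeout > XPRT_MAX_REESTABLISH_TO)
	xprt->reestablish_timeout = XPRT_MAX_REESTABLISH_TO;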
Test-plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Implement a best practice: if the remote end drops our connection, try to
reconnect using the same port number. This is important because the NFS
server's Duplicate Reply Cache often hashes on the source port number.
If the client reuses the port number when it reconnects, the server's DRC
will be more effective.
Based on suggestions by Mike Eisler, Olaf Kirch, and Alexey Kuznetsky.
Test-plan:
Destructive testing.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Select an RPC client source port between 650 and 1023 instead of between
1 and 800. The old range conflicts with a number of network services.
Provide sysctls to allow admins to select a different port range.
Note that this doesn't affect user-level RPC library behavior, which
still uses 1 to 800.
Based on a suggestion by Olaf Kirch <okir@suse.de>.
Test-plan:
Repeated mount and unmount. Destructive testing. Idle timeouts.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: Move some macros that are specific to the Van Jacobson
implementation into xprt.c. Get rid of the cong_wait field in
rpc_xprt, which is no longer used. Get rid of xprt_clear_backlog.
Test-plan:
Compile with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Get rid of the "xprt->nocong" variable.
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss with UDP mounts.
Look for significant regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The final place where congestion control state is adjusted is in
xprt_release, where each request is finally released. Add a callout
there to allow transports to perform additional processing when a
request is about to be released.
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss. Look for significant
regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
A new interface that allows transports to adjust their congestion window
using the Van Jacobson implementation in xprt.c is provided.
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss. Look for
significant regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Allow transports to hook the retransmit timer interrupt. Some transports
calculate their congestion window here so that a retransmit timeout has
immediate effect on the congestion window.
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss. Look for significant
regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The next method we abstract is the one that releases a transport,
allowing another task to have access to the transport.
Again, one generic version of this is provided for transports that
don't need the RPC client to perform congestion control, and one
version is for transports that can use the original Van Jacobson
implementation in xprt.c.
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss. Look for
significant regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The next several patches introduce an API that allows transports to
choose whether the RPC client provides congestion control or whether
the transport itself provides it.
The first method we abstract is the one that serializes access to the
RPC transport to prevent the bytes from different requests from mingling
together. This method provides proper request serialization and the
opportunity to prevent new requests from being started because the
transport is congested.
The normal situation is for the transport to handle congestion control
itself. Although NFS over UDP was first, it has been recognized after
years of experience that having the transport provide congestion control
is much better than doing it in the RPC client. Thus TCP, and probably
every future transport implementation, will use the default method,
xprt_lock_write, provided in xprt.c, which does not provide any kind
of congestion control. UDP can continue using the xprt.c-provided
Van Jacobson congestion avoidance implementation.
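The shape of that choice, sketched (the ops field and the name of the congestion-controlled variant are assumptions; the text above only names xprt_lock_write):

/* Each transport picks how access to its write path is serialized. */
struct rpc_xprt_ops_sketch {
	int	(*reserve_xprt)(struct rpc_task *task);
};

/* TCP-like transports: plain serialization, no RPC-level congestion control. */
static struct rpc_xprt_ops_sketch tcp_ops_sketch = {
	.reserve_xprt	= xprt_lock_write,
};

/* UDP: serialization plus the Van Jacobson congestion check kept in xprt.c. */
static struct rpc_xprt_ops_sketch udp_ops_sketch = {
	.reserve_xprt	= xprt_lock_write_cong,		/* hypothetical name */
};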
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss. Look for significant
regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Prepare the way to remove the "xprt->nocong" variable by adding a callout
to the RPC client transport switch API to handle setting RPC retransmit
timeouts.
Add a pair of generic helper functions that provide the ability to set a
simple fixed timeout, or to set a timeout based on the state of a round-
trip estimator.
Test-plan:
Use WAN simulation to cause sporadic bursty packet loss. Look for significant
regression in performance or client stability.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Now we can fix up the last few places that use the "xprt->stream"
variable, and get rid of it from the rpc_xprt structure.
Test-plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP.
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Add a generic mechanism for skipping over transport-specific headers
when constructing an RPC request. This removes another "xprt->stream"
dependency.
Test-plan:
Write-intensive workload on a single mount point (try both UDP and
TCP).
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Split the RPC client's main socket write path into a TCP version and a UDP
version to eliminate another dependency on the "xprt->stream" variable.
Compiler optimization removes unneeded code from xs_sendpages, as this
function is now called with some constant arguments.
We can now cleanly perform transport protocol-specific return code testing
and error recovery in each path.
Test-plan:
Millions of fsx operations. Performance characterization such as
"sio" or "iozone". Examine oprofile results for any changes before and
after this patch is applied.
Version: Thu, 11 Aug 2005 16:08:46 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Create separate connection worker functions for managing UDP and TCP
transport sockets. This eliminates several dependencies on "xprt->stream".
Test-plan:
Destructive testing (unplugging the network temporarily). Connectathon with
v2, v3, and v4.
Version: Thu, 11 Aug 2005 16:08:18 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Split the socket write space callback function into a TCP version and UDP
version, eliminating one dependence on the "xprt->stream" variable.
Keep the common pieces of this path in xprt.c so other transports can use
it too.
Test-plan:
Write-intensive workload on a single mount point.
Version: Thu, 11 Aug 2005 16:07:51 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: change some comments to reflect the realities of the new RPC
transport switch mechanism. Get rid of unused xprt_receive() prototype.
Also, organize function prototypes in xprt.h by usage and scope.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:07:21 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: remove only reference to xprt->pending from the socket transport
implementation. This makes a cleaner interface for other transport
implementations as well.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:06:52 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: get rid of unnecessary socket.h and in.h includes in the generic
parts of the RPC client.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:06:23 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: get rid of a name reference to sockets in the generic parts of the
RPC client by renaming the sockstate field in the rpc_xprt structure.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:05:53 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: Replace the xprt_lock with something more aptly named. This lock
single-threads the XID and request slot reservation process.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:05:26 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: replace a name reference to sockets in the generic parts of the RPC
client by renaming sock_lock in the rpc_xprt structure.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:05:00 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Reduce stack utilization of the RPC socket transport's send path.
A couple of unlikely()s are added to ensure the compiler places the
tail processing at the end of the csect.
Test-plan:
Millions of fsx operations. Performance characterization such as "sio" or
"iozone".
Version: Thu, 11 Aug 2005 16:04:30 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Introduce block header comments and a function naming convention to the
socket transport implementation. Provide a debug setting for transports
that is separate from RPCDBG_XPRT. Eliminate xprt_default_timeout().
Provide block comments for exposed interfaces in xprt.c, and eliminate
the useless obvious comments.
Convert printk's to dprintk's.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Version: Thu, 11 Aug 2005 16:04:04 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Move the bulk of client-side socket-specific code into a separate source
file, net/sunrpc/xprtsock.c.
Test-plan:
Millions of fsx operations. Performance characterization such as "sio" or
"iozone". Destructive testing (unplugging the network temporarily, server
reboots). Connectathon with v2, v3, and v4.
Version: Thu, 11 Aug 2005 16:03:38 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Clean-up: Move some code that is common to both RPC client- and server-side
socket transports into its own source file, net/sunrpc/socklib.c.
Test-plan:
Compile kernel with CONFIG_NFS enabled. Millions of fsx operations over
UDP, client and server. Connectathon over UDP.
Version: Thu, 11 Aug 2005 16:03:09 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
The in-kernel portmapper does not require a reserved port for making
bind queries.
Test-plan:
Tens of runs of the Connectathon locking suite with TCP and UDP
against several other NFS server implementations using NFSv3,
not NFSv4 (which doesn't require rpcbind).
Version: Thu, 11 Aug 2005 16:02:43 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Implement a best practice: don't use exponential backoff when computing
retransmit timeout values on TCP connections, but simply retransmit
at regular intervals.
This also fixes a bug introduced when xprt_reset_majortimeo() was added.
Test-plan:
Enable RPC debugging and watch timeout behavior on a NFS/TCP mount.
Version: Thu, 11 Aug 2005 16:02:19 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Implement a best practice: for soft mounts, an rpcbind timeout should
cause an RPC request to fail.
This also provides an FSM hook for retrying an rpcbind with a different
rpcbind protocol version. We'll use this later to try multiple rpcbind
protocol versions when binding. To enable this, expose the RPC error
code returned during a portmap request to the FSM so it can make some
decision about how to report, retry, or fail the request.
Test-plan:
Hundreds of passes with connectathon NFSv3 locking suite, on the client
and server.
Version: Thu, 11 Aug 2005 16:01:53 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Fix up xprt_connect_status: the soft timeout logic was clobbering tk_status,
so TCP connect errors were not properly reported on soft mounts.
Test-plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP.
Version: Thu, 11 Aug 2005 16:01:28 -0400
Signed-off-by: Chuck Lever <cel@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Fix to allow SCTP_SHUTDOWN notifications to be received on 1-1 style
SCTP SOCK_STREAM sockets.
Add SCTP_SHUTDOWN notification to the receive queue before updating
the state of the association.
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes a number of bugs. It cannot reasonably be split up into
multiple fixes, since all the bugs interact with each other and affect the same
function:
Bug #1:
The event cache code cannot be called while a lock is held. Therefore, the
call to ip_conntrack_event_cache() within ip_ct_refresh_acct() needs to be
moved outside of the locked section. This fixes a number of 2.6.14-rcX
oops and deadlock reports.
Bug #2:
We used to call ct_add_counters() for unconfirmed connections without
holding a lock. Since the add operations are not atomic, we could race
with another CPU.
Bug #3:
ip_ct_refresh_acct() lost REFRESH events in some cases where a refresh
(and the corresponding event) is desired, but no accounting shall be
performed. Both events and accounting implicitly depended on the skb
parameter being non-null. We now re-introduce a non-accounting
"ip_ct_refresh()" variant to explicitly state the desired behaviour.
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
As noted by Alexey Dobriyan, the DEBUGP statement prints the wrong
callID.
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the introduction of TSO pcount a year ago, it has been possible
for tcp_fragment() to cause packets_out to decrease. Prior to that,
tcp_retrans_try_collapse() was the only way for that to happen on the
retransmission path.
When this happens with Reno, it is possible for sacked_out to become
invalid because it is only an estimate and not tied to any particular
packet on the retransmission queue.
Therefore we need to adjust sacked_out as well as left_out in the Reno
case. The following patch does exactly that.
This bug is pretty difficult to trigger in practice though since you
need a SACKless peer with a retransmission that occurs just as the
cached MTU value expires.
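The adjustment described above is along these lines (a sketch only; the exact bookkeeping in tcp_fragment() is not quoted here, and the field names follow the TCP code of that era):

if (!tp->rx_opt.sack_ok && tp->sacked_out) {
	/* Reno: sacked_out is only an estimate, so keep it (and left_out)
	 * consistent when the retransmission path shrinks packets_out. */
	tp->sacked_out--;
	tp->left_out = tp->sacked_out + tp->lost_out;
}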
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Changed the crypto methods so that the init handler no longer requires a
struct ieee80211_device reference. Instead we now have get/set flags
methods for each crypto component.
Setting of TKIP countermeasures can now be done via
set_flags(IEEE80211_CRYPTO_TKIP_COUNTERMEASURES)
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Added handle_deauth() callback.
Enhanced crypt_{tkip,ccmp} to support varying splits of HW/SW offload.
Changed channel freq to u32 from u16.
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Added a subsystem version string and reporting via MODULE_VERSION and
printk during load.
NOTE: This is the version support split out from patch 24/29 of the
prior series.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Borrowing the structure of TCP/IP for this. On receiving new connections I
was bh_lock_sock()ing the _new_ sock, not the listening one, duh; now it
survives the ssh connection storm I've been using to test this specific bug.
Also fixes send side skb sock accounting.
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
llc_fixup_skb() had a bug dropping 3-byte packets (like UA frames). Token Ring
doesn't pad these frames.
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
And make it look more like the similar routines in the TCP/IP source code.
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
So as to set the newly created sk_buff ->dev member with it; that way we stop
using dev_base->next, which is the wrong thing to do, as there may well be
several interfaces being used with LLC. This was not such a big problem after
all, as most users of llc_alloc_frame were setting the correct dev, but this
way the code is reduced.
This also fixes another bug in llc_station_ac_send_null_dsap_xid_c, that was
not setting the skb->dev field.
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Remove old WIRELESS_EXT version compatibility
In-tree doesn't need to maintain backward compatibility.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Updated copyright dates.
NOTE: This is a split out of just the copyright updates from patch
24/29 in the prior series.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Hong Liu <hong.liu@intel.com>
Mixed PTK/GTK CCMP/TKIP support.
Signed-off-by: Hong Liu <hong.liu@intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Added ieee80211_geo to provide helper functions to drivers for
implementing supported channel maps.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Mohamed Abbas <mabbas@linux.intel.com>
Add QoS (WME) support to the ieee80211 subsystem.
NOTE: This requires drivers that use the ieee80211 hard_start_xmit
(ipw2100 and ipw2200) to add the priority parameter to their callback.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Per the conversations with folks at OLS, the QoS layer in 802.11
drivers can now result in NETDEV_TX_BUSY being returned when the queue
a packet is targeted for is full.
To implement this, ieee80211_xmit will now call the driver's
is_queue_full to determine if the current priority queue is full. If
so, NETDEV_TX_BUSY is returned to the kernel and no processing is done
on the frame.
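Sketch of the check in ieee80211_xmit() (the callback signature is an assumption):

/* Ask the driver whether the target priority queue is full before doing any
 * work on the frame; if so, hand it straight back to the stack. */
if (ieee->is_queue_full && ieee->is_queue_full(dev, priority))
	return NETDEV_TX_BUSY;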
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Liu Hong <hong.liu@intel.com>
Fix TKIP, repeated fragmentation problem, and payload_size reporting
1. TKIP encryption
Originally, TKIP encryption issues msdu + mpdu encryption on every
fragment. Change the behavior to msdu encryption on the whole
packet, then mpdu encryption on every fragment.
2. Avoid repeated fragmentation when !host_encrypt.
We only need to do fragmentation when using host encryption. Otherwise
we only need to pass the whole packet to the driver, letting the driver
do the fragmentation.
3. Change txb->payload_size to the correct value.
The firmware will use this value to determine whether to do
fragmentation. If we pass the wrong value, the firmware may cut the
frame at the wrong boundary, which will make decryption fail when we do
host encryption.
NOTE: This requires changing drivers (hostap) that have
extra_prefix_len used within them (structure member name change).
Signed-off-by: Hong Liu <liu.hong@intel.com>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Added ieee80211_tx_frame to convert generic 802.11 data frames into
txbs for transmission.
Added several purpose specific callbacks (handle_assoc, handle_auth,
etc.) which the driver can register with to be notified on reception of
various frame elements.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Ivo van Doorn <IvDoorn@gmail.com>
This patch adds support for the creation of RTS packets when the
config flag CFG_IEEE80211_RTS has been set.
Signed-Off-By: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Renamed ieee80211_hdr to ieee80211_hdr_3addr and modified ieee80211_hdr
to just contain the frame_ctrl and duration_id.
Changed uses of ieee80211_hdr to ieee80211_hdr_4addr or
ieee80211_hdr_3addr based on what was expected for that portion of code.
NOTE: This requires changes to ipw2100, ipw2200, hostap, and atmel
drivers.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Liu Hong <hong.liu@intel.com>
Added WE-18 support to default wireless extension handler in ieee80211
subsystem.
Updated patch since last send to account for ieee80211_device parameter
being added to the crypto init method.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Zhu Yi <jketreno@io.(none)>
Allow drivers to fix an issue when using wpa_supplicant with WEP.
The problem is introduced by the hwcrypto patch. We changed the indicator of
the encryption request from the upper layer (i.e. wpa_supplicant):
In the original host based crypto the driver could use: crypt &&
crypt->ops.
In the new hardware based crypto, the driver should use the flags
specified in ieee->sec.encrypt.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Zhu Yi <jketreno@io.(none)>
Fix kernel oops on module unload.
Export a new function ieee80211_crypt_quiescing from ieee80211. Device
drivers call it to make the host crypto stack enter the quiescence
state, which means "process existing requests, but don't accept new
ones". This is usually called during a driver's host crypto data
structure free (module unload) path.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: Zhu Yi <jketreno@io.(none)>
Fix time calculation, switching to use jiffies_to_msecs.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Hardware crypto and fragmentation offload support added (Zhu Yi)
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
From: James Ketrenos <jketreno@linux.intel.com>
Fixed a kernel oops on module unload by adding spin lock protection to
ieee80211's crypt handlers (thanks to Zhu Yi)
Modified scan result logic to report WPA and RSN IEs if set (vs. being
based on wpa_enabled).
Added ieee80211_device as the first parameter to the crypt init()
method. TKIP was modified to use that structure to determine whether
countermeasures are active.
Signed-off-by: James Ketrenos <jketreno@linux.intel.com>
Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
Patch from Joel Sing to fix the default congestion control algorithm
for incoming connections. If a new congestion control handler is added
(via module), it should become the default for new
connections. Instead, incoming connections use reno. The cause is
incorrect initialisation, which makes the tcp_init_congestion_control()
function return after the initial if test fails.
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Acked-by: Ian McDonald <imcdnzl@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cleanup the printk's in fib_trie:
* Convert a couple of places in the dump code to BUG_ON
* Put log levels on each message
The version message really needed a log level, since it leaks out
onto the pretty Fedora bootup.
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Acked-by: Robert Olsson <Robert.Olsson@data.slu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
The convention is that longer addresses will simply extend
the hardware address byte arrays at the end of sockaddr_ll and
packet_mreq.
In making this change a small information leak was also closed.
The code only initializes the hardware address bytes that are
used, but all of struct sockaddr_ll was copied to userspace.
Now we just copy sockaddr_ll up to the last byte of the hardware
address that is used.
For error checking, structures larger than our internal maximums
continue to be allowed, but an error is signaled if we cannot fit the
hardware address into our internal structure.
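The copy-length change amounts to something like this (a sketch; the surrounding recvmsg/getname code is not quoted):

/* Copy sockaddr_ll only up to the last hardware-address byte in use, rather
 * than the whole, partly uninitialized structure. */
int copy_len = offsetof(struct sockaddr_ll, sll_addr) + sll->sll_halen;

memcpy(msg->msg_name, sll, copy_len);
msg->msg_namelen = copy_len;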
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix unchecked __get_user that could be tricked into generating a
memory read on an arbitrary address. The result of the read is not
returned directly but you may be able to divine some information about
it, or use the read to cause a crash on some architectures by reading
hardware state. CAN-2004-2492.
Fix from Al Viro, ack from Dave Miller.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The problem is that we're now calling tcp_fragment() in a context
where the packets might be marked as SACKED_ACKED or SACKED_RETRANS.
This was not possible before as you never retransmitted packets that
are so marked.
Because of this, we need to adjust sacked_out and retrans_out in
tcp_fragment(). This is exactly what the following patch does.
We also need to preserve the SACKED_ACKED/SACKED_RETRANS marking
if they exist.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Those exports are needed by the PPTP helper following in the next
couple of changes.
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>