I forgot to remove the code that freed the memory in cleanup_slots().
Here is the new patch, in which I have also taken care of Eike's comment
to remove the cast in hotplug_slot->private.
Signed-off-by: Dely Sy <dely.l.sy@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch just adds Intel's Hance Rapid south bridge IDs to the ICH4 region quirk.
The patch was successfully tested by Chunhao Huang from Winbond.
Signed-Off-By: Rudolf Marek <r.marek@sh.cvut.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Here is the patch that fixes the bug introduced by my previous patch, which
already went into 2.6.12-rc2 and is likely to cause trouble if someone hits
the else case here by accident.
Using the &= operation before the if statement destroys the information the
if asks for, so we always go into the else branch.
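For illustration, a minimal sketch of the bug pattern (the names are
hypothetical and the two masks are assumed to cover different bits; this is
not the actual driver code):

    unsigned int ctrl = read_slot_ctrl();   /* hypothetical register read */

    ctrl &= CMD_COMPLETE_MASK;              /* throws the other bits away...  */
    if (ctrl & SLOT_ENABLE_MASK)            /* ...so this can never be true   */
            enable_slot();
    else
            disable_slot();                 /* always taken */

Testing the unmasked value, or masking into a separate variable, keeps the
bits the if statement needs.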
Signed-off-by: Rolf Eike Beer <eike-hotplug@sf-tec.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Now PCI drivers can know when the system is going down without having to
register a reboot notifier.
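Assuming this refers to the per-driver shutdown hook in struct pci_driver, a
driver would hook into it roughly like this (sketch, driver name hypothetical):

    #include <linux/pci.h>

    static void foo_shutdown(struct pci_dev *pdev)
    {
            /* quiesce the device: stop DMA and interrupts so the hardware
             * is idle before the system reboots or powers off */
    }

    static struct pci_driver foo_driver = {
            .name     = "foo",
            .shutdown = foo_shutdown,    /* called as the system goes down */
            /* .id_table, .probe, .remove as usual */
    };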
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch adds the possibility to do word-aligned 16-bit atomic PCI
configuration space accesses via the sysfs PCI interface. As a result, problems
with Emulex LFPC on IBM PowerPC64 are fixed.
Patch is present in SLES 9 SP1.
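For illustration, a minimal user-space sketch (the device path is a
placeholder): after this patch, a 2-byte read at a 2-byte-aligned offset of
the sysfs config file is carried out as a single 16-bit configuration access.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            uint16_t vendor;
            int fd = open("/sys/bus/pci/devices/0000:00:00.0/config", O_RDONLY);

            if (fd < 0 || pread(fd, &vendor, sizeof(vendor), 0) != sizeof(vendor))
                    return 1;
            printf("vendor id: 0x%04x\n", vendor);
            close(fd);
            return 0;
    }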
Signed-off-by: Vojtech Pavlik <vojtech@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This fixes the u32 vs. pm_message_t confusion in the documentation, and
removes references to the no-longer-existing (*save_state) method, too. With
the exception of USB (I hope David will fix/apply my patch), this should
fix the last piece of this confusion... famous last words.
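For reference, the driver-visible suspend hooks take pm_message_t rather than
u32; using struct pci_driver as the example:

    /* the state argument is a pm_message_t, not a u32 */
    int (*suspend)(struct pci_dev *dev, pm_message_t state);
    int (*resume)(struct pci_dev *dev);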
Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
pci_find_slot() doesn't work on multiple-domain boxes so pci_get_slot()
should be used instead.
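The conversion looks roughly like this (sketch; unlike pci_find_slot(),
pci_get_slot() takes a struct pci_bus and returns a counted reference that
must be dropped with pci_dev_put()):

    /* before (domain-unaware):  dev = pci_find_slot(busnr, devfn);  */
    struct pci_bus *bus = pci_find_bus(domain, busnr);
    struct pci_dev *dev = bus ? pci_get_slot(bus, devfn) : NULL;

    if (dev) {
            /* ... use dev ... */
            pci_dev_put(dev);    /* drop the reference pci_get_slot() took */
    }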
Signed-off-by: Matthew Wilcox <matthew@wil.cx>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
I think the 'is_enabled' flag in the pci_dev structure should be set/cleared
only when the device is actually enabled/disabled. In particular,
pci_enable_device() can fail. With this change, we also gain the
possibility of referring to the 'is_enabled' flag from the functions
called through pci_enable_device()/pci_disable_device().
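A minimal sketch of why this matters at the driver level (hypothetical probe
function): pci_enable_device() can fail, so the flag should only reflect an
enable that actually succeeded.

    static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            int err = pci_enable_device(pdev);

            if (err)
                    return err;    /* enable failed: the device, and its
                                    * is_enabled flag, must stay disabled */

            /* ... the device is really enabled from here on ... */
            return 0;
    }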
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Long-standing bug: the policy to repeat an action never worked.
Signed-off-by: J Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
I found a bug that stopped IPsec/IPv6 from working. About
a month ago IPv6 started using rt6i_idev->dev on the cached socket dst
entries. If the cached socket dst entry is IPsec, then rt6i_idev will
be NULL.
Since we want to look at the rt6i_idev of the original route in this
case, the easiest fix is to store rt6i_idev in the IPsec dst entry just
as we do for a number of other IPv6 route attributes. Unfortunately
this means that we need some new code to handle the references to
rt6i_idev. That's why this patch is bigger than it would otherwise be.
I've also done the same thing for IPv4 since it is conceivable that
once these idev attributes start getting used for accounting, we will
need to dereference them for IPv4 IPsec entries too.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix qlen underrun when doing duplication with netem. If netem is used
as leaf discipline, then the parent needs to be tweaked when packets
are duplicated.
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Netem currently dumps packets into the queue when the timer expires. This
patch makes it work by self-clocking (more like TBF). It also fixes a bug
when a 0 delay is requested (only doing loss or duplication).
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Due to bugs in netem (fixed by later patches), it is possible for the qdisc
qlen to go negative. If this happens, the CPU ends up spinning forever
in qdisc_run(). So add a BUG_ON() to trap it.
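A hedged sketch of the trap (the exact placement in the
qdisc_run()/qdisc_restart() path may differ):

    BUG_ON((int) q->q.qlen < 0);    /* a negative qlen means a qdisc bug;
                                     * crash loudly instead of spinning */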
Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some network drivers call netif_stop_queue() when detecting loss of
carrier. This leads to packets being queued up at the qdisc level for
an unbounded period of time. In order to prevent this effect, the core
networking stack will now cease to queue packets for any device that
is operationally down (i.e. the queue is flushed and disabled).
Signed-off-by: Tommy S. Christensen <tommy.christensen@tpack.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
If we free up a partially processed packet because its
skb->len dropped to zero, we need to decrement qlen ourselves because
we are dropping out of the top-level loop, which would otherwise have
done the decrement for us.
Spotted by Herbert Xu.
Signed-off-by: David S. Miller <davem@davemloft.net>
The qlen should continue to decrement, even if we
pop partially processed SKBs back onto the receive queue.
Signed-off-by: David S. Miller <davem@davemloft.net>
Patch from Sascha Hauer
This patch adds the missing include files for the i.MX framebuffer
driver.
Signed-off-by: Sascha Hauer
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Let's recap the problem. The current asynchronous netlink kernel
message processing is vulnerable to these attacks:
1) Hit and run: Attacker sends one or more messages and then exits
before they're processed. This may confuse/disable the next netlink
user that gets the netlink address of the attacker since it may
receive the responses to the attacker's messages.
Proposed solutions:
a) Synchronous processing.
b) Stream mode socket.
c) Restrict/prohibit binding.
2) Starvation: Because various netlink rcv functions were written
to not return until all messages have been processed on a socket,
it is possible for these functions to execute for an arbitrarily
long period of time. If this is successfully exploited it could
also be used to hold rtnl forever.
Proposed solutions:
a) Synchronous processing.
b) Stream mode socket.
Firstly, let's cross off solution c). It only solves the first
problem and it has user-visible impacts. In particular, it'll
break user-space applications that expect to bind or communicate
with specific netlink addresses (pids).
So we're left with a choice of synchronous processing versus
SOCK_STREAM for netlink.
For the moment I'm sticking with the synchronous approach as
suggested by Alexey since it's simpler and I'd rather spend
my time working on other things.
However, it does have a number of deficiencies compared to the
stream mode solution:
1) User-space to user-space netlink communication is still vulnerable.
2) Inefficient use of resources. This is especially true for rtnetlink
since the lock is shared with other users such as networking drivers.
The latter could hold the rtnl while communicating with hardware, which
causes the rtnetlink user to wait when it could be doing other things.
3) It is still possible to DoS all netlink users by flooding the kernel
netlink receive queue. The attacker simply fills the receive socket
with a single netlink message that fills up the entire queue. The
attacker then continues to call sendmsg with the same message in a loop.
Point 3) can be countered by retransmissions in user-space code; however,
it is pretty messy.
In light of these problems (in particular, point 3), we should implement
stream mode netlink at some point. In the meantime, here is a patch
that implements synchronous processing.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Here is a little optimisation for the cb_lock used by netlink_dump.
While fixing that race earlier, I noticed that the reference count
held by cb_lock is completely useless. The reason is that in order
to obtain the protection of the reference count, you have to take
the cb_lock. But the only way to take the cb_lock is through
dereferencing the socket.
That is, you must already possess a reference count on the socket
before you can take advantage of the reference count held by cb_lock.
As a corollary, we can remove the reference count held by the cb_lock.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
htb_enqueue(): Free skb and return NET_XMIT_DROP if a packet is
destined for the direct_queue but the direct_queue is full. (Before
this, it erroneously returned NET_XMIT_SUCCESS even though the packet
was not enqueued.)
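A hedged sketch of the corrected behaviour (the field names follow HTB's
private data, but treat the exact code as an illustration):

    if (skb_queue_len(&q->direct_queue) < q->direct_qlen) {
            __skb_queue_tail(&q->direct_queue, skb);
            return NET_XMIT_SUCCESS;
    }
    /* direct queue full: don't pretend the packet was enqueued */
    kfree_skb(skb);
    return NET_XMIT_DROP;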
Signed-off-by: Asim Shankar <asimshankar@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
kfree() and vfree() can both deal with NULL pointers. This patch removes
redundant NULL pointer checks from the ppp code in drivers/net/
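The pattern being removed is simply this (generic illustration with a
hypothetical pointer name):

    /* before: redundant NULL check */
    if (xbuf)
            kfree(xbuf);

    /* after: kfree(NULL) is a no-op, so call it unconditionally */
    kfree(xbuf);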
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a trivial fix for a typo on Kconfig, where the Generic Random Early
Detection algorithm is abbreviated as RED instead of GRED.
Signed-off-by: Lucas Correia Villa Real <lucasvr@gobolinux.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
kfree(0) is perfectly valid; checking pointers for NULL before calling
kfree() on them is redundant. The patch below cleans away a few such
redundant checks (and while I was around some of those bits I couldn't
stop myself from making a few tiny whitespace changes as well).
Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Converts the remaining rtnetlink_link tables to use C99 designated
initializers to make grepping a little bit easier.
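The style in question looks roughly like this (the message types and handler
names are hypothetical):

    static struct rtnetlink_link foo_rtnetlink_table[RTM_NR_MSGTYPES] = {
            [RTM_NEWFOO - RTM_BASE] = { .doit   = foo_new  },
            [RTM_DELFOO - RTM_BASE] = { .doit   = foo_del  },
            [RTM_GETFOO - RTM_BASE] = { .dumpit = foo_dump },
    };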
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Converts the rtm_min and rtm_max arrays to use C99 designated
initializers for easier insertion of new message families.
RTM_GETMULTICAST and RTM_GETANYCAST did not have the minimal
message size specified which means that the netlink message
was parsed for routing attributes starting from the header.
Adds the proper minimal message sizes for these messages
(netlink header + common rtnetlink header) to fix this issue.
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
RTM_MAX is currently set to the maximum reserved message type plus one,
which causes two bugs when new types are assigned: a) if the new family
registers only the NEW command in its reserved block, the array size for
per-family entries is calculated one entry short, and b) if the new family
registers all commands, RTM_MAX points to the first entry of the following
block and the rtnetlink receive path accepts a message type for a
nonexistent family.
This patch changes RTM_MAX to point to the maximum valid message type
by aligning it to the start of the next block and subtracting one.
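Since message types are reserved in blocks of four (NEW/DEL/GET/SET), the
resulting definition looks roughly like this:

    /* round __RTM_MAX up to the start of the next block of four,
     * then step back one to get the highest valid message type */
    #define RTM_MAX        (((__RTM_MAX + 3) & ~3) - 1)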
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Converts xfrm_msg_min and xfrm_dispatch to use C99 designated
initializers to make grepping a little bit easier. Also replaces
two hardcoded message types with meaningful names.
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Makes the type > XFRM_MSG_MAX check behave correctly to
protect access to xfrm_dispatch.
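A hedged sketch of the kind of check involved (not the exact upstream hunk):
the dispatch table is indexed by type - XFRM_MSG_BASE, so out-of-range types
must be rejected before indexing.

    if (type < XFRM_MSG_BASE || type > XFRM_MSG_MAX)
            return -EINVAL;
    link = &xfrm_dispatch[type - XFRM_MSG_BASE];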
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch includes net/ipv6.h from addrconf.h since it needs
ipv6_addr_set.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
I made a mistake in my last patch to the raw socket checksum code.
I used the value of inet->cork.length as the length of the payload.
While this works with normal packets, it breaks down when IPsec is
present since the cork length includes the extension header length.
So here is a patch to fix the length calculations.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
gcc-4.0 implicitly generates altivec code when -mcpu indicates an
altivec-capable CPU, which is not suitable for the kernel. However, we
used to set -mcpu=970 when CONFIG_ALTIVEC was set because a gcc-3.x bug
prevented using -maltivec along with -mcpu=power4, which in turn prevented
building the RAID6 altivec code.
This patch fixes all of this by testing for the gcc version. If 4.0 or
later, we just use -mcpu=power4 as normal and let the RAID6 code add
-maltivec to the few files that need to be compiled with altivec support.
For 3.x, we still use -mcpu=970 to work around the above problem, which
is fine as 3.x will never implicitly generate altivec code.
The Makefile hackery may not be the most lovely; I welcome anybody more
skilled than me to improve it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
VMALLOC_START and VMALLOC_OFFSET are common between all ARM
machine classes. Move them into include/asm-arm/pgtable.h,
but allow a machine class to override them if required.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Rather than duplicate the assembly for debug macros in the
decompressor head.S, use asm/arch/debug-macros.S instead.
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Modify xtSearch so that it returns the next allocated block when the
requested block is unmapped. This can be used to make sure we don't
create a new extent that overlaps the next one.
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds jfs_syncpt, which calls lmLogSync to write sync points
to the journal both in jfs_sync_fs and when sync barrier processing
completes.
lmLogSync accomplishes two things: 1) it pushes logged-but-dirty
metadata pages to disk, and 2) it writes a sync record to the journal
so that jfs_fsck doesn't need to replay more transactions than is
necessary.
Signed-off-by: Dave Kleikamp <shaggy@austin.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>