Merge branch 'for_2.6.34rc_a' of git://git.pwsan.com/linux-2.6 into omap-fixes-for-linus

Tony Lindgren 2010-04-21 15:26:56 -07:00
commit e2bca7c76a
362 changed files with 3344 additions and 2001 deletions

View File

@ -340,7 +340,7 @@ Note:
5.3 swappiness
Similar to /proc/sys/vm/swappiness, but affecting a hierarchy of groups only.
Following cgroups' swapiness can't be changed.
Following cgroups' swappiness can't be changed.
- root cgroup (uses /proc/sys/vm/swappiness).
- a cgroup which uses hierarchy and it has child cgroup.
- a cgroup which uses hierarchy and not the root of hierarchy.

View File

@ -0,0 +1,234 @@
================
CIRCULAR BUFFERS
================
By: David Howells <dhowells@redhat.com>
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Linux provides a number of features that can be used to implement circular
buffering. There are two sets of such features:
(1) Convenience functions for determining information about power-of-2 sized
buffers.
(2) Memory barriers for when the producer and the consumer of objects in the
buffer don't want to share a lock.
To use these facilities, as discussed below, there needs to be just one
producer and just one consumer. It is possible to handle multiple producers by
serialising them, and to handle multiple consumers by serialising them.
Contents:
(*) What is a circular buffer?
(*) Measuring power-of-2 buffers.
(*) Using memory barriers with circular buffers.
- The producer.
- The consumer.
==========================
WHAT IS A CIRCULAR BUFFER?
==========================
First of all, what is a circular buffer? A circular buffer is a buffer of
fixed, finite size into which there are two indices:
(1) A 'head' index - the point at which the producer inserts items into the
buffer.
(2) A 'tail' index - the point at which the consumer finds the next item in
the buffer.
Typically when the tail pointer is equal to the head pointer, the buffer is
empty; and the buffer is full when the head pointer is one less than the tail
pointer.
The head index is incremented when items are added, and the tail index when
items are removed. The tail index should never jump the head index, and both
indices should be wrapped to 0 when they reach the end of the buffer, thus
allowing an infinite amount of data to flow through the buffer.
Typically, items will all be of the same unit size, but this isn't strictly
required to use the techniques below. The indices can be increased by more
than 1 if multiple items or variable-sized items are to be included in the
buffer, provided that neither index overtakes the other. The implementer must
be careful, however, as a region more than one unit in size may wrap the end of
the buffer and be broken into two segments.
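As a rough illustration, a descriptor for such a buffer might be laid out along
the following lines (a sketch only: the names are illustrative, and this is not
the struct circ_buf declared in <linux/circ_buf.h>):
	struct my_circ_buffer {
		struct item	*buf;	/* the slots holding the items */
		unsigned long	head;	/* index at which the producer inserts */
		unsigned long	tail;	/* index at which the consumer extracts */
		unsigned long	size;	/* number of slots, typically a power of 2 */
	};
The producer and consumer examples later in this document assume something of
this shape, with the size being a power of 2 so that index wrapping can be done
with a cheap bitwise AND (see the next section).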
============================
MEASURING POWER-OF-2 BUFFERS
============================
Calculation of the occupancy or the remaining capacity of an arbitrarily sized
circular buffer would normally be a slow operation, requiring the use of a
modulus (divide) instruction. However, if the buffer is of a power-of-2 size,
then a much quicker bitwise-AND instruction can be used instead.
Linux provides a set of macros for handling power-of-2 circular buffers. These
can be made use of by:
#include <linux/circ_buf.h>
The macros are:
(*) Measure the remaining capacity of a buffer:
CIRC_SPACE(head_index, tail_index, buffer_size);
This returns the amount of space left in the buffer[1] into which items
can be inserted.
(*) Measure the maximum consecutive immediate space in a buffer:
CIRC_SPACE_TO_END(head_index, tail_index, buffer_size);
This returns the amount of consecutive space left in the buffer[1] into
which items can be immediately inserted without having to wrap back to the
beginning of the buffer.
(*) Measure the occupancy of a buffer:
CIRC_CNT(head_index, tail_index, buffer_size);
This returns the number of items currently occupying a buffer[2].
(*) Measure the non-wrapping occupancy of a buffer:
CIRC_CNT_TO_END(head_index, tail_index, buffer_size);
This returns the number of consecutive items[2] that can be extracted from
the buffer without having to wrap back to the beginning of the buffer.
Each of these macros will nominally return a value between 0 and buffer_size-1,
however:
[1] CIRC_SPACE*() are intended to be used in the producer. To the producer
they will return a lower bound as the producer controls the head index,
but the consumer may still be depleting the buffer on another CPU and
moving the tail index.
To the consumer it will show an upper bound as the producer may be busy
depleting the space.
[2] CIRC_CNT*() are intended to be used in the consumer. To the consumer they
will return a lower bound as the consumer controls the tail index, but the
producer may still be filling the buffer on another CPU and moving the
head index.
To the producer it will show an upper bound as the consumer may be busy
emptying the buffer.
[3] To a third party, the order in which the writes to the indices by the
producer and consumer become visible cannot be guaranteed as they are
independent and may be made on different CPUs - so the result in such a
situation will merely be a guess, and may even be negative.
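As a brief example of how these might be used (a sketch, assuming a descriptor
with head and tail indices and a power-of-2 size as above):
	/* Producer side: how many free slots are there to insert into? */
	unsigned long space = CIRC_SPACE(buffer->head, buffer->tail, buffer->size);
	/* Consumer side: how many items are currently waiting? */
	unsigned long count = CIRC_CNT(buffer->head, buffer->tail, buffer->size);
	/* CIRC_CNT() boils down to a subtraction and a mask, roughly
	 * ((head) - (tail)) & ((size) - 1), which is why the buffer size
	 * must be a power of 2. */
In real producer/consumer code the opposition index should be read with
ACCESS_ONCE(), as the examples in the next section show.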
===========================================
USING MEMORY BARRIERS WITH CIRCULAR BUFFERS
===========================================
By using memory barriers in conjunction with circular buffers, you can avoid
the need to:
(1) use a single lock to govern access to both ends of the buffer, thus
allowing the buffer to be filled and emptied at the same time; and
(2) use atomic counter operations.
There are two sides to this: the producer that fills the buffer, and the
consumer that empties it. Only one thing should be filling a buffer at any one
time, and only one thing should be emptying a buffer at any one time, but the
two sides can operate simultaneously.
THE PRODUCER
------------
The producer will look something like this:
spin_lock(&producer_lock);
unsigned long head = buffer->head;
unsigned long tail = ACCESS_ONCE(buffer->tail);
if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
/* insert one item into the buffer */
struct item *item = buffer[head];
produce_item(item);
smp_wmb(); /* commit the item before incrementing the head */
buffer->head = (head + 1) & (buffer->size - 1);
/* wake_up() will make sure that the head is committed before
* waking anyone up */
wake_up(consumer);
}
spin_unlock(&producer_lock);
This will instruct the CPU that the contents of the new item must be written
before the head index makes it available to the consumer and then instructs the
CPU that the revised head index must be written before the consumer is woken.
Note that wake_up() doesn't have to be the exact mechanism used, but whatever
is used must guarantee a (write) memory barrier between the update of the head
index and the change of state of the consumer, if a change of state occurs.
THE CONSUMER
------------
The consumer will look something like this:
spin_lock(&consumer_lock);
unsigned long head = ACCESS_ONCE(buffer->head);
unsigned long tail = buffer->tail;
if (CIRC_CNT(head, tail, buffer->size) >= 1) {
/* read index before reading contents at that index */
smp_read_barrier_depends();
/* extract one item from the buffer */
struct item *item = buffer[tail];
consume_item(item);
smp_mb(); /* finish reading descriptor before incrementing tail */
buffer->tail = (tail + 1) & (buffer->size - 1);
}
spin_unlock(&consumer_lock);
This will instruct the CPU to make sure the index is up to date before reading
the new item, and then it shall make sure the CPU has finished reading the item
before it writes the new tail pointer, which will erase the item.
Note the use of ACCESS_ONCE() in both algorithms to read the opposition index.
This prevents the compiler from discarding and reloading its cached value -
which some compilers will do across smp_read_barrier_depends(). This isn't
strictly needed if you can be sure that the opposition index will _only_ be
used the once.
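For reference, ACCESS_ONCE() is essentially a read or write forced through a
volatile-qualified lvalue, roughly:
	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
which tells the compiler to perform exactly one access per use of the
expression rather than caching the value in a register or re-reading it at
will.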
===============
FURTHER READING
===============
See also Documentation/memory-barriers.txt for a description of Linux's memory
barrier facilities.

View File

@ -16,6 +16,8 @@ befs.txt
- information about the BeOS filesystem for Linux.
bfs.txt
- info for the SCO UnixWare Boot Filesystem (BFS).
ceph.txt
- info for the Ceph Distributed File System
cifs.txt
- description of the CIFS filesystem.
coda.txt

View File

@ -8,7 +8,7 @@ Basic features include:
* POSIX semantics
* Seamless scaling from 1 to many thousands of nodes
* High availability and reliability. No single points of failure.
* High availability and reliability. No single point of failure.
* N-way replication of data across storage nodes
* Fast recovery from node failures
* Automatic rebalancing of data on node addition/removal
@ -94,7 +94,7 @@ Mount Options
wsize=X
Specify the maximum write size in bytes. By default there is no
maximu. Ceph will normally size writes based on the file stripe
maximum. Ceph will normally size writes based on the file stripe
size.
rsize=X
@ -115,7 +115,7 @@ Mount Options
number of entries in that directory.
nocrc
Disable CRC32C calculation for data writes. If set, the OSD
Disable CRC32C calculation for data writes. If set, the storage node
must rely on TCP's error correction to detect data corruption
in the data payload.
@ -133,7 +133,8 @@ For more information on Ceph, see the home page at
http://ceph.newdream.net/
The Linux kernel client source tree is available at
git://ceph.newdream.net/linux-ceph-client.git
git://ceph.newdream.net/git/ceph-client.git
git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
and the source for the full system is at
git://ceph.newdream.net/ceph.git
git://ceph.newdream.net/git/ceph.git

View File

@ -82,11 +82,13 @@ tmpfs has a mount option to set the NUMA memory allocation policy for
all files in that instance (if CONFIG_NUMA is enabled) - which can be
adjusted on the fly via 'mount -o remount ...'
mpol=default prefers to allocate memory from the local node
mpol=default use the process allocation policy
(see set_mempolicy(2))
mpol=prefer:Node prefers to allocate memory from the given Node
mpol=bind:NodeList allocates memory only from nodes in NodeList
mpol=interleave prefers to allocate from each node in turn
mpol=interleave:NodeList allocates from each node of NodeList in turn
mpol=local prefers to allocate memory from the local node
NodeList format is a comma-separated list of decimal numbers and ranges,
a range being two hyphen-separated decimal numbers, the smallest and
@ -134,3 +136,5 @@ Author:
Christoph Rohland <cr@sap.com>, 1.12.01
Updated:
Hugh Dickins, 4 June 2007
Updated:
KOSAKI Motohiro, 16 Mar 2010

View File

@ -3,6 +3,7 @@
============================
By: David Howells <dhowells@redhat.com>
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Contents:
@ -60,6 +61,10 @@ Contents:
- And then there's the Alpha.
(*) Example uses.
- Circular buffers.
(*) References.
@ -2226,6 +2231,21 @@ The Alpha defines the Linux kernel's memory barrier model.
See the subsection on "Cache Coherency" above.
============
EXAMPLE USES
============
CIRCULAR BUFFERS
----------------
Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer. See:
Documentation/circular-buffers.txt
for details.
==========
REFERENCES
==========

View File

@ -63,9 +63,9 @@ way to perform a busy wait is:
cpu_relax();
The cpu_relax() call can lower CPU power consumption or yield to a
hyperthreaded twin processor; it also happens to serve as a memory barrier,
so, once again, volatile is unnecessary. Of course, busy-waiting is
generally an anti-social act to begin with.
hyperthreaded twin processor; it also happens to serve as a compiler
barrier, so, once again, volatile is unnecessary. Of course, busy-
waiting is generally an anti-social act to begin with.
There are still a few rare situations where volatile makes sense in the
kernel:

View File

@ -797,12 +797,12 @@ M: Michael Petchkovsky <mkpetch@internode.on.net>
S: Maintained
ARM/NOMADIK ARCHITECTURE
M: Alessandro Rubini <rubini@unipv.it>
M: STEricsson <STEricsson_nomadik_linux@list.st.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-nomadik/
F: arch/arm/plat-nomadik/
M: Alessandro Rubini <rubini@unipv.it>
M: STEricsson <STEricsson_nomadik_linux@list.st.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-nomadik/
F: arch/arm/plat-nomadik/
ARM/OPENMOKO NEO FREERUNNER (GTA02) MACHINE SUPPORT
M: Nelson Castillo <arhuaco@freaks-unidos.net>
@ -1443,7 +1443,7 @@ F: arch/powerpc/platforms/cell/
CEPH DISTRIBUTED FILE SYSTEM CLIENT
M: Sage Weil <sage@newdream.net>
L: ceph-devel@lists.sourceforge.net
L: ceph-devel@vger.kernel.org
W: http://ceph.newdream.net/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
S: Supported
@ -1926,17 +1926,17 @@ F: drivers/scsi/dpt*
F: drivers/scsi/dpt/
DRBD DRIVER
P: Philipp Reisner
P: Lars Ellenberg
M: drbd-dev@lists.linbit.com
L: drbd-user@lists.linbit.com
W: http://www.drbd.org
T: git git://git.drbd.org/linux-2.6-drbd.git drbd
T: git git://git.drbd.org/drbd-8.3.git
S: Supported
F: drivers/block/drbd/
F: lib/lru_cache.c
F: Documentation/blockdev/drbd/
P: Philipp Reisner
P: Lars Ellenberg
M: drbd-dev@lists.linbit.com
L: drbd-user@lists.linbit.com
W: http://www.drbd.org
T: git git://git.drbd.org/linux-2.6-drbd.git drbd
T: git git://git.drbd.org/drbd-8.3.git
S: Supported
F: drivers/block/drbd/
F: lib/lru_cache.c
F: Documentation/blockdev/drbd/
DRIVER CORE, KOBJECTS, AND SYSFS
M: Greg Kroah-Hartman <gregkh@suse.de>
@ -3083,6 +3083,7 @@ F: include/scsi/*iscsi*
ISDN SUBSYSTEM
M: Karsten Keil <isdn@linux-pingi.de>
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
L: netdev@vger.kernel.org
W: http://www.isdn4linux.de
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kkeil/isdn-2.6.git
S: Maintained
@ -3269,6 +3270,16 @@ S: Maintained
F: include/linux/kexec.h
F: kernel/kexec.c
KEYS/KEYRINGS:
M: David Howells <dhowells@redhat.com>
L: keyrings@linux-nfs.org
S: Maintained
F: Documentation/keys.txt
F: include/linux/key.h
F: include/linux/key-type.h
F: include/keys/
F: security/keys/
KGDB
M: Jason Wessel <jason.wessel@windriver.com>
L: kgdb-bugreport@lists.sourceforge.net
@ -3518,8 +3529,8 @@ F: drivers/scsi/sym53c8xx_2/
LTP (Linux Test Project)
M: Rishikesh K Rajak <risrajak@linux.vnet.ibm.com>
M: Garrett Cooper <yanegomi@gmail.com>
M: Mike Frysinger <vapier@gentoo.org>
M: Subrata Modak <subrata@linux.vnet.ibm.com>
M: Mike Frysinger <vapier@gentoo.org>
M: Subrata Modak <subrata@linux.vnet.ibm.com>
L: ltp-list@lists.sourceforge.net (subscribers-only)
W: http://ltp.sourceforge.net/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/galak/ltp.git
@ -5423,7 +5434,6 @@ S: Maintained
F: sound/soc/codecs/twl4030*
TIPC NETWORK LAYER
M: Per Liden <per.liden@ericsson.com>
M: Jon Maloy <jon.maloy@ericsson.com>
M: Allan Stephens <allan.stephens@windriver.com>
L: tipc-discussion@lists.sourceforge.net
@ -6201,7 +6211,7 @@ F: arch/x86/
X86 PLATFORM DRIVERS
M: Matthew Garrett <mjg@redhat.com>
L: platform-driver-x86@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mjg59/platform-drivers-x86.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mjg59/platform-drivers-x86.git
S: Maintained
F: drivers/platform/x86

View File

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 34
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*

View File

@ -290,7 +290,7 @@ static int locomo_suspend(struct platform_device *dev, pm_message_t state)
save->LCM_GPO = locomo_readl(lchip->base + LOCOMO_GPO); /* GPIO */
locomo_writel(0x00, lchip->base + LOCOMO_GPO);
save->LCM_SPICT = locomo_readl(lchip->base + LOCOMO_SPI + LOCOMO_SPICT); /* SPI */
locomo_writel(0x40, lchip->base + LOCOMO_SPICT);
locomo_writel(0x40, lchip->base + LOCOMO_SPI + LOCOMO_SPICT);
save->LCM_GPE = locomo_readl(lchip->base + LOCOMO_GPE); /* GPIO */
locomo_writel(0x00, lchip->base + LOCOMO_GPE);
save->LCM_ASD = locomo_readl(lchip->base + LOCOMO_ASD); /* ADSTART */
@ -418,7 +418,7 @@ __locomo_probe(struct device *me, struct resource *mem, int irq)
/* Longtime timer */
locomo_writel(0, lchip->base + LOCOMO_LTINT);
/* SPI */
locomo_writel(0, lchip->base + LOCOMO_SPIIE);
locomo_writel(0, lchip->base + LOCOMO_SPI + LOCOMO_SPIIE);
locomo_writel(6 + 8 + 320 + 30 - 10, lchip->base + LOCOMO_ASD);
r = locomo_readl(lchip->base + LOCOMO_ASD);
@ -707,7 +707,7 @@ void locomo_m62332_senddata(struct locomo_dev *ldev, unsigned int dac_data, int
udelay(DAC_SCL_HIGH_HOLD_TIME); /* 4.7 usec */
if (locomo_readl(mapbase + LOCOMO_DAC) & LOCOMO_DAC_SDAOEB) { /* High is error */
printk(KERN_WARNING "locomo: m62332_senddata Error 1\n");
return;
goto out;
}
/* Send Sub address (LSB is channel select) */
@ -735,7 +735,7 @@ void locomo_m62332_senddata(struct locomo_dev *ldev, unsigned int dac_data, int
udelay(DAC_SCL_HIGH_HOLD_TIME); /* 4.7 usec */
if (locomo_readl(mapbase + LOCOMO_DAC) & LOCOMO_DAC_SDAOEB) { /* High is error */
printk(KERN_WARNING "locomo: m62332_senddata Error 2\n");
return;
goto out;
}
/* Send DAC data */
@ -760,9 +760,9 @@ void locomo_m62332_senddata(struct locomo_dev *ldev, unsigned int dac_data, int
udelay(DAC_SCL_HIGH_HOLD_TIME); /* 4.7 usec */
if (locomo_readl(mapbase + LOCOMO_DAC) & LOCOMO_DAC_SDAOEB) { /* High is error */
printk(KERN_WARNING "locomo: m62332_senddata Error 3\n");
return;
}
out:
/* stop */
r = locomo_readl(mapbase + LOCOMO_DAC);
r &= ~(LOCOMO_DAC_SCLOEB);

View File

@ -19,7 +19,7 @@
*/
#define PHYS_OFFSET (0x00000000)
#define IXP23XX_PCI_SDRAM_OFFSET (*((volatile int *)IXP23XX_PCI_SDRAM_BAR) & 0xfffffff0))
#define IXP23XX_PCI_SDRAM_OFFSET (*((volatile int *)IXP23XX_PCI_SDRAM_BAR) & 0xfffffff0)
#define __phys_to_bus(x) ((x) + (IXP23XX_PCI_SDRAM_OFFSET - PHYS_OFFSET))
#define __bus_to_phys(x) ((x) - (IXP23XX_PCI_SDRAM_OFFSET - PHYS_OFFSET))

View File

@ -74,9 +74,9 @@ static struct gpio_keys_button mv88f6281gtw_ge_button_pins[] = {
.desc = "SWR Button",
.active_low = 1,
}, {
.code = KEY_F1,
.code = KEY_WPS_BUTTON,
.gpio = 46,
.desc = "WPS Button(F1)",
.desc = "WPS Button",
.active_low = 1,
},
};

View File

@ -14,7 +14,7 @@
#define UART2_BASE (APB_PHYS_BASE + 0x17000)
#define UART3_BASE (APB_PHYS_BASE + 0x18000)
static volatile unsigned long *UART = (unsigned long *)UART2_BASE;
static volatile unsigned long *UART;
static inline void putc(char c)
{
@ -37,6 +37,9 @@ static inline void flush(void)
static inline void arch_decomp_setup(void)
{
/* default to UART2 */
UART = (unsigned long *)UART2_BASE;
if (machine_is_avengers_lite())
UART = (unsigned long *)UART3_BASE;
}

View File

@ -895,7 +895,7 @@ static struct clk dpll4_m4x2_ck = {
.ops = &clkops_omap2_dflt_wait,
.parent = &dpll4_m4_ck,
.enable_reg = OMAP_CM_REGADDR(PLL_MOD, CM_CLKEN),
.enable_bit = OMAP3430_PWRDN_CAM_SHIFT,
.enable_bit = OMAP3430_PWRDN_DSS1_SHIFT,
.flags = INVERT_ENABLE,
.clkdm_name = "dpll4_clkdm",
.recalc = &omap3_clkoutx2_recalc,

View File

@ -240,7 +240,7 @@ static void _omap2_clkdm_set_hwsup(struct clockdomain *clkdm, int enable)
bits = OMAP24XX_CLKSTCTRL_ENABLE_AUTO;
else
bits = OMAP24XX_CLKSTCTRL_DISABLE_AUTO;
} else if (cpu_is_omap34xx() | cpu_is_omap44xx()) {
} else if (cpu_is_omap34xx() || cpu_is_omap44xx()) {
if (enable)
bits = OMAP34XX_CLKSTCTRL_ENABLE_AUTO;
else
@ -812,7 +812,7 @@ int omap2_clkdm_sleep(struct clockdomain *clkdm)
cm_set_mod_reg_bits(OMAP24XX_FORCESTATE,
clkdm->pwrdm.ptr->prcm_offs, OMAP2_PM_PWSTCTRL);
} else if (cpu_is_omap34xx() | cpu_is_omap44xx()) {
} else if (cpu_is_omap34xx() || cpu_is_omap44xx()) {
u32 bits = (OMAP34XX_CLKSTCTRL_FORCE_SLEEP <<
__ffs(clkdm->clktrctrl_mask));
@ -856,7 +856,7 @@ int omap2_clkdm_wakeup(struct clockdomain *clkdm)
cm_clear_mod_reg_bits(OMAP24XX_FORCESTATE,
clkdm->pwrdm.ptr->prcm_offs, OMAP2_PM_PWSTCTRL);
} else if (cpu_is_omap34xx() | cpu_is_omap44xx()) {
} else if (cpu_is_omap34xx() || cpu_is_omap44xx()) {
u32 bits = (OMAP34XX_CLKSTCTRL_FORCE_WAKEUP <<
__ffs(clkdm->clktrctrl_mask));

View File

@ -1511,6 +1511,9 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
c = oh->slaves[oh->_mpu_port_index]->_clk;
}
if (!c->clkdm)
return NULL;
return c->clkdm->pwrdm.ptr;
}

View File

@ -222,7 +222,7 @@ void pwrdm_init(struct powerdomain **pwrdm_list)
{
struct powerdomain **p = NULL;
if (cpu_is_omap24xx() | cpu_is_omap34xx()) {
if (cpu_is_omap24xx() || cpu_is_omap34xx()) {
pwrstctrl_reg_offs = OMAP2_PM_PWSTCTRL;
pwrstst_reg_offs = OMAP2_PM_PWSTST;
} else if (cpu_is_omap44xx()) {

View File

@ -123,7 +123,7 @@ struct omap3_prcm_regs prcm_context;
u32 omap_prcm_get_reset_sources(void)
{
/* XXX This presumably needs modification for 34XX */
if (cpu_is_omap24xx() | cpu_is_omap34xx())
if (cpu_is_omap24xx() || cpu_is_omap34xx())
return prm_read_mod_reg(WKUP_MOD, OMAP2_RM_RSTST) & 0x7f;
if (cpu_is_omap44xx())
return prm_read_mod_reg(WKUP_MOD, OMAP4_RM_RSTST) & 0x7f;
@ -157,7 +157,7 @@ void omap_prcm_arch_reset(char mode, const char *cmd)
else
WARN_ON(1);
if (cpu_is_omap24xx() | cpu_is_omap34xx())
if (cpu_is_omap24xx() || cpu_is_omap34xx())
prm_set_mod_reg_bits(OMAP_RST_DPLL3, prcm_offs,
OMAP2_RM_RSTCTRL);
if (cpu_is_omap44xx())

View File

@ -77,7 +77,7 @@ static struct gpio_keys_button wrt350n_v2_buttons[] = {
.desc = "Reset Button",
.active_low = 1,
}, {
.code = KEY_WLAN,
.code = KEY_WPS_BUTTON,
.gpio = 2,
.desc = "WPS Button",
.active_low = 1,

View File

@ -272,7 +272,6 @@ config MACH_H5000
config MACH_HIMALAYA
bool "HTC Himalaya Support"
select CPU_PXA26x
select FB_W100
config MACH_MAGICIAN
bool "Enable HTC Magician Support"
@ -454,6 +453,13 @@ config PXA_SHARPSL
config SHARPSL_PM
bool
select APM_EMULATION
select SHARPSL_PM_MAX1111
config SHARPSL_PM_MAX1111
bool
depends on !CORGI_SSP_DEPRECATED
select HWMON
select SENSORS_MAX1111
config CORGI_SSP_DEPRECATED
bool
@ -547,7 +553,6 @@ config MACH_E740
bool "Toshiba e740"
default y
depends on ARCH_PXA_ESERIES
select FB_W100
help
Say Y here if you intend to run this kernel on a Toshiba
e740 family PDA.
@ -556,7 +561,6 @@ config MACH_E750
bool "Toshiba e750"
default y
depends on ARCH_PXA_ESERIES
select FB_W100
help
Say Y here if you intend to run this kernel on a Toshiba
e750 family PDA.
@ -573,7 +577,6 @@ config MACH_E800
bool "Toshiba e800"
default y
depends on ARCH_PXA_ESERIES
select FB_W100
help
Say Y here if you intend to run this kernel on a Toshiba
e800 family PDA.

View File

@ -559,10 +559,6 @@ static void __init imote2_init(void)
pxa_set_btuart_info(NULL);
pxa_set_stuart_info(NULL);
/* SPI chip select directions - all other directions should
* be handled by drivers.*/
gpio_direction_output(37, 0);
platform_add_devices(imote2_devices, ARRAY_SIZE(imote2_devices));
pxa2xx_set_spi_info(1, &pxa_ssp_master_0_info);

View File

@ -16,9 +16,9 @@
#define BTUART_BASE (0x40200000)
#define STUART_BASE (0x40700000)
static unsigned long uart_base = FFUART_BASE;
static unsigned int uart_shift = 2;
static unsigned int uart_is_pxa = 1;
static unsigned long uart_base;
static unsigned int uart_shift;
static unsigned int uart_is_pxa;
static inline unsigned char uart_read(int offset)
{
@ -56,6 +56,11 @@ static inline void flush(void)
static inline void arch_decomp_setup(void)
{
/* initialize to default */
uart_base = FFUART_BASE;
uart_shift = 2;
uart_is_pxa = 1;
if (machine_is_littleton() || machine_is_intelmote2()
|| machine_is_csb726() || machine_is_stargate2()
|| machine_is_cm_x300() || machine_is_balloon3())

View File

@ -37,8 +37,6 @@
#include <linux/lis3lv02d.h>
#include <linux/pda_power.h>
#include <linux/power_supply.h>
#include <linux/pda_power.h>
#include <linux/power_supply.h>
#include <linux/regulator/max8660.h>
#include <linux/regulator/machine.h>
#include <linux/regulator/fixed.h>
@ -444,7 +442,7 @@ static struct gpio_keys_button gpio_keys_button[] = {
.active_low = 0,
.wakeup = 0,
.debounce_interval = 5, /* ms */
.desc = "on/off button",
.desc = "on_off button",
},
};

View File

@ -764,11 +764,6 @@ static void __init stargate2_init(void)
pxa_set_btuart_info(NULL);
pxa_set_stuart_info(NULL);
/* spi chip selects */
gpio_direction_output(37, 0);
gpio_direction_output(24, 0);
gpio_direction_output(39, 0);
platform_add_devices(ARRAY_AND_SIZE(stargate2_devices));
pxa2xx_set_spi_info(1, &pxa_ssp_master_0_info);

View File

@ -294,8 +294,8 @@ struct omap_hwmod_class_sysconfig {
u16 rev_offs;
u16 sysc_offs;
u16 syss_offs;
u16 sysc_flags;
u8 idlemodes;
u8 sysc_flags;
u8 clockact;
struct omap_hwmod_sysc_fields *sysc_fields;
};

View File

@ -12,7 +12,7 @@
#
# http://www.arm.linux.org.uk/developer/machines/?action=new
#
# Last update: Sat Feb 20 14:16:15 2010
# Last update: Sat Mar 20 15:35:41 2010
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
@ -2663,7 +2663,7 @@ reb01 MACH_REB01 REB01 2675
aquila MACH_AQUILA AQUILA 2676
spark_sls_hw2 MACH_SPARK_SLS_HW2 SPARK_SLS_HW2 2677
sheeva_esata MACH_ESATA_SHEEVAPLUG ESATA_SHEEVAPLUG 2678
surf7x30 MACH_SURF7X30 SURF7X30 2679
msm7x30_surf MACH_MSM7X30_SURF MSM7X30_SURF 2679
micro2440 MACH_MICRO2440 MICRO2440 2680
am2440 MACH_AM2440 AM2440 2681
tq2440 MACH_TQ2440 TQ2440 2682
@ -2678,3 +2678,74 @@ vc088x MACH_VC088X VC088X 2690
mioa702 MACH_MIOA702 MIOA702 2691
hpmin MACH_HPMIN HPMIN 2692
ak880xak MACH_AK880XAK AK880XAK 2693
arm926tomap850 MACH_ARM926TOMAP850 ARM926TOMAP850 2694
lkevm MACH_LKEVM LKEVM 2695
mw6410 MACH_MW6410 MW6410 2696
terastation_wxl MACH_TERASTATION_WXL TERASTATION_WXL 2697
cpu8000e MACH_CPU8000E CPU8000E 2698
catania MACH_CATANIA CATANIA 2699
tokyo MACH_TOKYO TOKYO 2700
msm7201a_surf MACH_MSM7201A_SURF MSM7201A_SURF 2701
msm7201a_ffa MACH_MSM7201A_FFA MSM7201A_FFA 2702
msm7x25_surf MACH_MSM7X25_SURF MSM7X25_SURF 2703
msm7x25_ffa MACH_MSM7X25_FFA MSM7X25_FFA 2704
msm7x27_surf MACH_MSM7X27_SURF MSM7X27_SURF 2705
msm7x27_ffa MACH_MSM7X27_FFA MSM7X27_FFA 2706
msm7x30_ffa MACH_MSM7X30_FFA MSM7X30_FFA 2707
qsd8x50_surf MACH_QSD8X50_SURF QSD8X50_SURF 2708
qsd8x50_comet MACH_QSD8X50_COMET QSD8X50_COMET 2709
qsd8x50_ffa MACH_QSD8X50_FFA QSD8X50_FFA 2710
qsd8x50a_surf MACH_QSD8X50A_SURF QSD8X50A_SURF 2711
qsd8x50a_ffa MACH_QSD8X50A_FFA QSD8X50A_FFA 2712
adx_xgcp10 MACH_ADX_XGCP10 ADX_XGCP10 2713
mcgwumts2a MACH_MCGWUMTS2A MCGWUMTS2A 2714
mobikt MACH_MOBIKT MOBIKT 2715
mx53_evk MACH_MX53_EVK MX53_EVK 2716
igep0030 MACH_IGEP0030 IGEP0030 2717
axell_h40_h50_ctrl MACH_AXELL_H40_H50_CTRL AXELL_H40_H50_CTRL 2718
dtcommod MACH_DTCOMMOD DTCOMMOD 2719
gould MACH_GOULD GOULD 2720
siberia MACH_SIBERIA SIBERIA 2721
sbc3530 MACH_SBC3530 SBC3530 2722
qarm MACH_QARM QARM 2723
mips MACH_MIPS MIPS 2724
mx27grb MACH_MX27GRB MX27GRB 2725
sbc8100 MACH_SBC8100 SBC8100 2726
saarb MACH_SAARB SAARB 2727
omap3mini MACH_OMAP3MINI OMAP3MINI 2728
cnmbook7se MACH_CNMBOOK7SE CNMBOOK7SE 2729
catan MACH_CATAN CATAN 2730
harmony MACH_HARMONY HARMONY 2731
tonga MACH_TONGA TONGA 2732
cybook_orizon MACH_CYBOOK_ORIZON CYBOOK_ORIZON 2733
htcrhodiumcdma MACH_HTCRHODIUMCDMA HTCRHODIUMCDMA 2734
epc_g45 MACH_EPC_G45 EPC_G45 2735
epc_lpc3250 MACH_EPC_LPC3250 EPC_LPC3250 2736
mxc91341evb MACH_MXC91341EVB MXC91341EVB 2737
rtw1000 MACH_RTW1000 RTW1000 2738
bobcat MACH_BOBCAT BOBCAT 2739
trizeps6 MACH_TRIZEPS6 TRIZEPS6 2740
msm7x30_fluid MACH_MSM7X30_FLUID MSM7X30_FLUID 2741
nedap9263 MACH_NEDAP9263 NEDAP9263 2742
netgear_ms2110 MACH_NETGEAR_MS2110 NETGEAR_MS2110 2743
bmx MACH_BMX BMX 2744
netstream MACH_NETSTREAM NETSTREAM 2745
vpnext_rcu MACH_VPNEXT_RCU VPNEXT_RCU 2746
vpnext_mpu MACH_VPNEXT_MPU VPNEXT_MPU 2747
bcmring_tablet_v1 MACH_BCMRING_TABLET_V1 BCMRING_TABLET_V1 2748
sgarm10 MACH_SGARM10 SGARM10 2749
cm_t3517 MACH_CM_T3517 CM_T3517 2750
omap3_cps MACH_OMAP3_CPS OMAP3_CPS 2751
axar1500_receiver MACH_AXAR1500_RECEIVER AXAR1500_RECEIVER 2752
wbd222 MACH_WBD222 WBD222 2753
mt65xx MACH_MT65XX MT65XX 2754
msm8x60_surf MACH_MSM8X60_SURF MSM8X60_SURF 2755
msm8x60_sim MACH_MSM8X60_SIM MSM8X60_SIM 2756
vmc300 MACH_VMC300 VMC300 2757
tcc8000_sdk MACH_TCC8000_SDK TCC8000_SDK 2758
nanos MACH_NANOS NANOS 2759
stamp9g10 MACH_STAMP9G10 STAMP9G10 2760
stamp9g45 MACH_STAMP9G45 STAMP9G45 2761
h6053 MACH_H6053 H6053 2762
smint01 MACH_SMINT01 SMINT01 2763
prtlvt2 MACH_PRTLVT2 PRTLVT2 2764

View File

@ -50,7 +50,7 @@ pcibios_align_resource(void *data, const struct resource *res,
if ((res->flags & IORESOURCE_IO) && (start & 0x300))
start = (start + 0x3ff) & ~0x3ff;
return start
return start;
}
int pcibios_enable_resources(struct pci_dev *dev, int mask)

View File

@ -41,7 +41,7 @@ pcibios_align_resource(void *data, const struct resource *res,
if ((res->flags & IORESOURCE_IO) && (start & 0x300))
start = (start + 0x3ff) & ~0x3ff;
return start
return start;
}
@ -94,8 +94,7 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
r = &dev->resource[idx];
if (!r->start)
continue;
if (pci_claim_resource(dev, idx) < 0)
printk(KERN_ERR "PCI: Cannot allocate resource region %d of bridge %s\n", idx, pci_name(dev));
pci_claim_resource(dev, idx);
}
}
pcibios_allocate_bus_resources(&bus->children);
@ -125,7 +124,6 @@ static void __init pcibios_allocate_resources(int pass)
DBG("PCI: Resource %08lx-%08lx (f=%lx, d=%d, p=%d)\n",
r->start, r->end, r->flags, disabled, pass);
if (pci_claim_resource(dev, idx) < 0) {
printk(KERN_ERR "PCI: Cannot allocate resource region %d of device %s\n", idx, pci_name(dev));
/* We'll assign a new address later */
r->end -= r->start;
r->start = 0;

View File

@ -28,6 +28,7 @@
#define PPC_LLARX(t, a, b, eh) PPC_LDARX(t, a, b, eh)
#define PPC_STLCX stringify_in_c(stdcx.)
#define PPC_CNTLZL stringify_in_c(cntlzd)
#define PPC_LR_STKOFF 16
/* Move to CR, single-entry optimized version. Only available
* on POWER4 and later.
@ -51,6 +52,7 @@
#define PPC_STLCX stringify_in_c(stwcx.)
#define PPC_CNTLZL stringify_in_c(cntlzw)
#define PPC_MTOCRF stringify_in_c(mtcrf)
#define PPC_LR_STKOFF 4
#endif

View File

@ -127,3 +127,31 @@ _GLOBAL(__setup_cpu_power7)
_GLOBAL(__restore_cpu_power7)
/* place holder */
blr
#ifdef CONFIG_EVENT_TRACING
/*
* Get a minimal set of registers for our caller's nth caller.
* r3 = regs pointer, r5 = n.
*
* We only get R1 (stack pointer), NIP (next instruction pointer)
* and LR (link register). These are all we can get in the
* general case without doing complicated stack unwinding, but
* fortunately they are enough to do a stack backtrace, which
* is all we need them for.
*/
_GLOBAL(perf_arch_fetch_caller_regs)
mr r6,r1
cmpwi r5,0
mflr r4
ble 2f
mtctr r5
1: PPC_LL r6,0(r6)
bdnz 1b
PPC_LL r4,PPC_LR_STKOFF(r6)
2: PPC_LL r7,0(r6)
PPC_LL r7,PPC_LR_STKOFF(r7)
PPC_STL r6,GPR1-STACK_FRAME_OVERHEAD(r3)
PPC_STL r4,_NIP-STACK_FRAME_OVERHEAD(r3)
PPC_STL r7,_LINK-STACK_FRAME_OVERHEAD(r3)
blr
#endif /* CONFIG_EVENT_TRACING */

View File

@ -24,8 +24,8 @@
/* Symbols defined by linker scripts */
extern char input_data[];
extern int input_len;
extern int _text;
extern int _end;
extern char _text, _end;
extern char _bss, _ebss;
static void error(char *m);
@ -129,12 +129,12 @@ unsigned long decompress_kernel(void)
unsigned long output_addr;
unsigned char *output;
check_ipl_parmblock((void *) 0, (unsigned long) output + SZ__bss_start);
memset(&_bss, 0, &_ebss - &_bss);
free_mem_ptr = (unsigned long)&_end;
free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
output = (unsigned char *) ((free_mem_end_ptr + 4095UL) & -4096UL);
check_ipl_parmblock((void *) 0, (unsigned long) output + SZ__bss_start);
#ifdef CONFIG_BLK_DEV_INITRD
/*
* Move the initrd right behind the end of the decompressed

View File

@ -110,6 +110,7 @@ extern void pfault_fini(void);
#endif /* CONFIG_PFAULT */
extern void cmma_init(void);
extern int memcpy_real(void *, void *, size_t);
#define finish_arch_switch(prev) do { \
set_fs(current->thread.mm_segment); \
@ -218,8 +219,8 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
" l %0,%2\n"
"0: nr %0,%5\n"
" lr %1,%0\n"
" or %0,%2\n"
" or %1,%3\n"
" or %0,%3\n"
" or %1,%4\n"
" cs %0,%1,%2\n"
" jnl 1f\n"
" xr %1,%0\n"
@ -239,8 +240,8 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
" l %0,%2\n"
"0: nr %0,%5\n"
" lr %1,%0\n"
" or %0,%2\n"
" or %1,%3\n"
" or %0,%3\n"
" or %1,%4\n"
" cs %0,%1,%2\n"
" jnl 1f\n"
" xr %1,%0\n"

View File

@ -517,7 +517,10 @@ startup:
lhi %r1,2 # mode 2 = esame (dump)
sigp %r1,%r0,0x12 # switch to esame mode
sam64 # switch to 64 bit mode
larl %r13,4f
lmh %r0,%r15,0(%r13) # clear high-order half
jg startup_continue
4: .fill 16,4,0x0
#else
mvi __LC_AR_MODE_ID,0 # set ESA flag (mode 0)
l %r13,4f-.LPG0(%r13)

View File

@ -21,7 +21,6 @@ startup_continue:
larl %r1,sched_clock_base_cc
mvc 0(8,%r1),__LC_LAST_UPDATE_CLOCK
larl %r13,.LPG1 # get base
lmh %r0,%r15,.Lzero64-.LPG1(%r13) # clear high-order half
lctlg %c0,%c15,.Lctl-.LPG1(%r13) # load control registers
lg %r12,.Lparmaddr-.LPG1(%r13) # pointer to parameter area
# move IPL device to lowcore
@ -67,7 +66,6 @@ startup_continue:
.L4malign:.quad 0xffffffffffc00000
.Lscan2g:.quad 0x80000000 + 0x20000 - 8 # 2GB + 128K - 8
.Lnop: .long 0x07000700
.Lzero64:.fill 16,4,0x0
.Lparmaddr:
.quad PARMAREA
.align 64

View File

@ -401,7 +401,7 @@ setup_lowcore(void)
* Setup lowcore for boot cpu
*/
BUILD_BUG_ON(sizeof(struct _lowcore) != LC_PAGES * 4096);
lc = __alloc_bootmem(LC_PAGES * PAGE_SIZE, LC_PAGES * PAGE_SIZE, 0);
lc = __alloc_bootmem_low(LC_PAGES * PAGE_SIZE, LC_PAGES * PAGE_SIZE, 0);
lc->restart_psw.mask = PSW_BASE_BITS | PSW_DEFAULT_KEY;
lc->restart_psw.addr =
PSW_ADDR_AMODE | (unsigned long) restart_int_handler;
@ -433,7 +433,7 @@ setup_lowcore(void)
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE) {
lc->extended_save_area_addr = (__u32)
__alloc_bootmem(PAGE_SIZE, PAGE_SIZE, 0);
__alloc_bootmem_low(PAGE_SIZE, PAGE_SIZE, 0);
/* enable extended save area */
__ctl_set_bit(14, 29);
}

View File

@ -292,9 +292,9 @@ static void __init smp_get_save_area(unsigned int cpu, unsigned int phy_cpu)
zfcpdump_save_areas[cpu] = kmalloc(sizeof(struct save_area), GFP_KERNEL);
while (raw_sigp(phy_cpu, sigp_stop_and_store_status) == sigp_busy)
cpu_relax();
memcpy(zfcpdump_save_areas[cpu],
(void *)(unsigned long) store_prefix() + SAVE_AREA_BASE,
sizeof(struct save_area));
memcpy_real(zfcpdump_save_areas[cpu],
(void *)(unsigned long) store_prefix() + SAVE_AREA_BASE,
sizeof(struct save_area));
}
struct save_area *zfcpdump_save_areas[NR_CPUS + 1];

View File

@ -59,3 +59,29 @@ long probe_kernel_write(void *dst, void *src, size_t size)
}
return copied < 0 ? -EFAULT : 0;
}
int memcpy_real(void *dest, void *src, size_t count)
{
register unsigned long _dest asm("2") = (unsigned long) dest;
register unsigned long _len1 asm("3") = (unsigned long) count;
register unsigned long _src asm("4") = (unsigned long) src;
register unsigned long _len2 asm("5") = (unsigned long) count;
unsigned long flags;
int rc = -EFAULT;
if (!count)
return 0;
flags = __raw_local_irq_stnsm(0xf8UL);
asm volatile (
"0: mvcle %1,%2,0x0\n"
"1: jo 0b\n"
" lhi %0,0x0\n"
"2:\n"
EX_TABLE(1b,2b)
: "+d" (rc), "+d" (_dest), "+d" (_src), "+d" (_len1),
"+d" (_len2), "=m" (*((long *) dest))
: "m" (*((long *) src))
: "cc", "memory");
__raw_local_irq_ssm(flags);
return rc;
}

View File

@ -836,6 +836,8 @@ static void __init sh_eth_init(struct sh_eth_plat_data *pd)
pd->mac_addr[i] = mac_read(a, 0x10 + i);
msleep(10);
}
i2c_put_adapter(a);
}
#else
static void __init sh_eth_init(struct sh_eth_plat_data *pd)

View File

@ -52,6 +52,13 @@
* and change SW41 to use 720p
*/
/*
* about sound
*
* This setup.c supports FSI slave mode.
* Please change J20, J21, J22 pin to 1-2 connection.
*/
/* Heartbeat */
static struct resource heartbeat_resource = {
.start = PA_LED,
@ -276,6 +283,7 @@ static struct clk fsimcka_clk = {
.rate = 0, /* unknown */
};
/* change J20, J21, J22 pin to 1-2 connection to use slave mode */
struct sh_fsi_platform_info fsi_info = {
.porta_flags = SH_FSI_BRS_INV |
SH_FSI_OUT_SLAVE_MODE |

View File

@ -19,6 +19,8 @@
#define MMUCR 0xFF000010 /* MMU Control Register */
#define MMU_ITLB_ADDRESS_ARRAY 0xF2000000
#define MMU_ITLB_ADDRESS_ARRAY2 0xF2800000
#define MMU_UTLB_ADDRESS_ARRAY 0xF6000000
#define MMU_UTLB_ADDRESS_ARRAY2 0xF6800000
#define MMU_PAGE_ASSOC_BIT 0x80

View File

@ -21,6 +21,12 @@
#define WTCNT 0xffcc0000 /*WDTST*/
#define WTST WTCNT
#define WTBST 0xffcc0008 /*WDTBST*/
/* Register definitions */
#elif defined(CONFIG_CPU_SUBTYPE_SH7722) || \
defined(CONFIG_CPU_SUBTYPE_SH7723) || \
defined(CONFIG_CPU_SUBTYPE_SH7724)
#define WTCNT 0xa4520000
#define WTCSR 0xa4520004
#else
/* Register definitions */
#define WTCNT 0xffc00008

View File

@ -727,7 +727,7 @@ static int dwarf_parse_cie(void *entry, void *p, unsigned long len,
unsigned char *end, struct module *mod)
{
struct rb_node **rb_node = &cie_root.rb_node;
struct rb_node *parent;
struct rb_node *parent = *rb_node;
struct dwarf_cie *cie;
unsigned long flags;
int count;
@ -856,7 +856,7 @@ static int dwarf_parse_fde(void *entry, u32 entry_type,
unsigned char *end, struct module *mod)
{
struct rb_node **rb_node = &fde_root.rb_node;
struct rb_node *parent;
struct rb_node *parent = *rb_node;
struct dwarf_fde *fde;
struct dwarf_cie *cie;
unsigned long flags;

View File

@ -112,7 +112,7 @@ void cpu_idle(void)
}
}
void __cpuinit select_idle_routine(void)
void __init select_idle_routine(void)
{
/*
* If a platform has set its own idle routine, leave it alone.

View File

@ -315,7 +315,7 @@ void hw_perf_disable(void)
sh_pmu->disable_all();
}
int register_sh_pmu(struct sh_pmu *pmu)
int __cpuinit register_sh_pmu(struct sh_pmu *pmu)
{
if (sh_pmu)
return -EBUSY;

View File

@ -504,13 +504,6 @@ out:
return error;
}
/*
* These bracket the sleeping functions..
*/
extern void interruptible_sleep_on(wait_queue_head_t *q);
#define mid_sched ((unsigned long) interruptible_sleep_on)
#ifdef CONFIG_FRAME_POINTER
static int in_sh64_switch_to(unsigned long pc)
{

View File

@ -323,6 +323,7 @@ static void __clear_pmb_entry(struct pmb_entry *pmbe)
writel_uncached(data_val & ~PMB_V, data);
}
#ifdef CONFIG_PM
static void set_pmb_entry(struct pmb_entry *pmbe)
{
unsigned long flags;
@ -331,6 +332,7 @@ static void set_pmb_entry(struct pmb_entry *pmbe)
__set_pmb_entry(pmbe);
spin_unlock_irqrestore(&pmbe->lock, flags);
}
#endif /* CONFIG_PM */
int pmb_bolt_mapping(unsigned long vaddr, phys_addr_t phys,
unsigned long size, pgprot_t prot)
@ -802,7 +804,7 @@ void __init pmb_init(void)
writel_uncached(0, PMB_IRMCR);
/* Flush out the TLB */
__raw_writel(__raw_readl(MMUCR) | MMUCR_TI, MMUCR);
local_flush_tlb_all();
ctrl_barrier();
}

View File

@ -73,5 +73,7 @@ void local_flush_tlb_one(unsigned long asid, unsigned long page)
jump_to_uncached();
__raw_writel(page, MMU_UTLB_ADDRESS_ARRAY | MMU_PAGE_ASSOC_BIT);
__raw_writel(asid, MMU_UTLB_ADDRESS_ARRAY2 | MMU_PAGE_ASSOC_BIT);
__raw_writel(page, MMU_ITLB_ADDRESS_ARRAY | MMU_PAGE_ASSOC_BIT);
__raw_writel(asid, MMU_ITLB_ADDRESS_ARRAY2 | MMU_PAGE_ASSOC_BIT);
back_to_cached();
}

View File

@ -123,18 +123,27 @@ void local_flush_tlb_mm(struct mm_struct *mm)
void local_flush_tlb_all(void)
{
unsigned long flags, status;
int i;
/*
* Flush all the TLB.
*
* Write to the MMU control register's bit:
* TF-bit for SH-3, TI-bit for SH-4.
* It's same position, bit #2.
*/
local_irq_save(flags);
jump_to_uncached();
status = __raw_readl(MMUCR);
status |= 0x04;
__raw_writel(status, MMUCR);
status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT);
if (status == 0)
status = MMUCR_URB_NENTRIES;
for (i = 0; i < status; i++)
__raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8));
for (i = 0; i < 4; i++)
__raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8));
back_to_cached();
ctrl_barrier();
local_irq_restore(flags);
}

View File

@ -53,8 +53,8 @@ struct stat {
ino_t st_ino;
mode_t st_mode;
short st_nlink;
uid16_t st_uid;
gid16_t st_gid;
unsigned short st_uid;
unsigned short st_gid;
unsigned short st_rdev;
off_t st_size;
time_t st_atime;

View File

@ -1337,7 +1337,7 @@ static void perf_callchain_user_32(struct pt_regs *regs,
callchain_store(entry, PERF_CONTEXT_USER);
callchain_store(entry, regs->tpc);
ufp = regs->u_regs[UREG_I6];
ufp = regs->u_regs[UREG_I6] & 0xffffffffUL;
do {
struct sparc_stackf32 *usf, sf;
unsigned long pc;

View File

@ -107,12 +107,12 @@ static unsigned long run_on_cpu(unsigned long cpu,
unsigned long ret;
/* should return -EINVAL to userspace */
if (set_cpus_allowed(current, cpumask_of_cpu(cpu)))
if (set_cpus_allowed_ptr(current, cpumask_of(cpu)))
return 0;
ret = func(arg);
set_cpus_allowed(current, old_affinity);
set_cpus_allowed_ptr(current, &old_affinity);
return ret;
}

View File

@ -238,12 +238,12 @@ static unsigned int us2e_freq_get(unsigned int cpu)
return 0;
cpus_allowed = current->cpus_allowed;
set_cpus_allowed(current, cpumask_of_cpu(cpu));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
clock_tick = sparc64_get_clock_tick(cpu) / 1000;
estar = read_hbreg(HBIRD_ESTAR_MODE_ADDR);
set_cpus_allowed(current, cpus_allowed);
set_cpus_allowed_ptr(current, &cpus_allowed);
return clock_tick / estar_to_divisor(estar);
}
@ -259,7 +259,7 @@ static void us2e_set_cpu_divider_index(unsigned int cpu, unsigned int index)
return;
cpus_allowed = current->cpus_allowed;
set_cpus_allowed(current, cpumask_of_cpu(cpu));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
new_freq = clock_tick = sparc64_get_clock_tick(cpu) / 1000;
new_bits = index_to_estar_mode(index);
@ -281,7 +281,7 @@ static void us2e_set_cpu_divider_index(unsigned int cpu, unsigned int index)
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
set_cpus_allowed(current, cpus_allowed);
set_cpus_allowed_ptr(current, &cpus_allowed);
}
static int us2e_freq_target(struct cpufreq_policy *policy,

View File

@ -86,12 +86,12 @@ static unsigned int us3_freq_get(unsigned int cpu)
return 0;
cpus_allowed = current->cpus_allowed;
set_cpus_allowed(current, cpumask_of_cpu(cpu));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
reg = read_safari_cfg();
ret = get_current_freq(cpu, reg);
set_cpus_allowed(current, cpus_allowed);
set_cpus_allowed_ptr(current, &cpus_allowed);
return ret;
}
@ -106,7 +106,7 @@ static void us3_set_cpu_divider_index(unsigned int cpu, unsigned int index)
return;
cpus_allowed = current->cpus_allowed;
set_cpus_allowed(current, cpumask_of_cpu(cpu));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
new_freq = sparc64_get_clock_tick(cpu) / 1000;
switch (index) {
@ -140,7 +140,7 @@ static void us3_set_cpu_divider_index(unsigned int cpu, unsigned int index)
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
set_cpus_allowed(current, cpus_allowed);
set_cpus_allowed_ptr(current, &cpus_allowed);
}
static int us3_freq_target(struct cpufreq_policy *policy,

View File

@ -82,6 +82,9 @@ enum fixed_addresses {
#endif
FIX_DBGP_BASE,
FIX_EARLYCON_MEM_BASE,
#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
FIX_OHCI1394_BASE,
#endif
#ifdef CONFIG_X86_LOCAL_APIC
FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */
#endif
@ -132,9 +135,6 @@ enum fixed_addresses {
(__end_of_permanent_fixed_addresses & (TOTAL_FIX_BTMAPS - 1))
: __end_of_permanent_fixed_addresses,
FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,
#ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT
FIX_OHCI1394_BASE,
#endif
#ifdef CONFIG_X86_32
FIX_WP_TEST,
#endif

View File

@ -133,6 +133,7 @@ extern void (*__initconst interrupt[NR_VECTORS-FIRST_EXTERNAL_VECTOR])(void);
typedef int vector_irq_t[NR_VECTORS];
DECLARE_PER_CPU(vector_irq_t, vector_irq);
extern void setup_vector_irq(int cpu);
#ifdef CONFIG_X86_IO_APIC
extern void lock_vector_lock(void);

View File

@ -105,6 +105,8 @@
#define MSR_AMD64_PATCH_LEVEL 0x0000008b
#define MSR_AMD64_NB_CFG 0xc001001f
#define MSR_AMD64_PATCH_LOADER 0xc0010020
#define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140
#define MSR_AMD64_OSVW_STATUS 0xc0010141
#define MSR_AMD64_IBSFETCHCTL 0xc0011030
#define MSR_AMD64_IBSFETCHLINAD 0xc0011031
#define MSR_AMD64_IBSFETCHPHYSAD 0xc0011032

View File

@ -1268,6 +1268,14 @@ void __setup_vector_irq(int cpu)
/* Mark the inuse vectors */
for_each_irq_desc(irq, desc) {
cfg = desc->chip_data;
/*
* If it is a legacy IRQ handled by the legacy PIC, this cpu
* will be part of the irq_cfg's domain.
*/
if (irq < legacy_pic->nr_legacy_irqs && !IO_APIC_IRQ(irq))
cpumask_set_cpu(cpu, cfg->domain);
if (!cpumask_test_cpu(cpu, cfg->domain))
continue;
vector = cfg->vector;

View File

@ -348,10 +348,12 @@ static void amd_pmu_cpu_offline(int cpu)
raw_spin_lock(&amd_nb_lock);
if (--cpuhw->amd_nb->refcnt == 0)
kfree(cpuhw->amd_nb);
if (cpuhw->amd_nb) {
if (--cpuhw->amd_nb->refcnt == 0)
kfree(cpuhw->amd_nb);
cpuhw->amd_nb = NULL;
cpuhw->amd_nb = NULL;
}
raw_spin_unlock(&amd_nb_lock);
}

View File

@ -7,6 +7,7 @@
#include <linux/init.h>
#include <linux/start_kernel.h>
#include <linux/mm.h>
#include <asm/setup.h>
#include <asm/sections.h>
@ -44,9 +45,10 @@ void __init i386_start_kernel(void)
#ifdef CONFIG_BLK_DEV_INITRD
/* Reserve INITRD */
if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
/* Assume only end is not page aligned */
u64 ramdisk_image = boot_params.hdr.ramdisk_image;
u64 ramdisk_size = boot_params.hdr.ramdisk_size;
u64 ramdisk_end = ramdisk_image + ramdisk_size;
u64 ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size);
reserve_early(ramdisk_image, ramdisk_end, "RAMDISK");
}
#endif

View File

@ -103,9 +103,10 @@ void __init x86_64_start_reservations(char *real_mode_data)
#ifdef CONFIG_BLK_DEV_INITRD
/* Reserve INITRD */
if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {
/* Assume only end is not page aligned */
unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;
unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;
unsigned long ramdisk_end = ramdisk_image + ramdisk_size;
unsigned long ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size);
reserve_early(ramdisk_image, ramdisk_end, "RAMDISK");
}
#endif

View File

@ -141,6 +141,28 @@ void __init init_IRQ(void)
x86_init.irqs.intr_init();
}
/*
* Setup the vector to irq mappings.
*/
void setup_vector_irq(int cpu)
{
#ifndef CONFIG_X86_IO_APIC
int irq;
/*
* On most of the platforms, legacy PIC delivers the interrupts on the
* boot cpu. But there are certain platforms where PIC interrupts are
* delivered to multiple cpu's. If the legacy IRQ is handled by the
* legacy PIC, for the new cpu that is coming online, setup the static
* legacy vector to irq mapping:
*/
for (irq = 0; irq < legacy_pic->nr_legacy_irqs; irq++)
per_cpu(vector_irq, cpu)[IRQ0_VECTOR + irq] = irq;
#endif
__setup_vector_irq(cpu);
}
static void __init smp_intr_init(void)
{
#ifdef CONFIG_SMP

View File

@ -526,21 +526,37 @@ static int __cpuinit mwait_usable(const struct cpuinfo_x86 *c)
}
/*
* Check for AMD CPUs, which have potentially C1E support
* Check for AMD CPUs, where APIC timer interrupt does not wake up CPU from C1e.
* For more information see
* - Erratum #400 for NPT family 0xf and family 0x10 CPUs
* - Erratum #365 for family 0x11 (not affected because C1e not in use)
*/
static int __cpuinit check_c1e_idle(const struct cpuinfo_x86 *c)
{
u64 val;
if (c->x86_vendor != X86_VENDOR_AMD)
return 0;
if (c->x86 < 0x0F)
return 0;
goto no_c1e_idle;
/* Family 0x0f models < rev F do not have C1E */
if (c->x86 == 0x0f && c->x86_model < 0x40)
return 0;
if (c->x86 == 0x0F && c->x86_model >= 0x40)
return 1;
return 1;
if (c->x86 == 0x10) {
/*
* check OSVW bit for CPUs that are not affected
* by erratum #400
*/
rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val);
if (val >= 2) {
rdmsrl(MSR_AMD64_OSVW_STATUS, val);
if (!(val & BIT(1)))
goto no_c1e_idle;
}
return 1;
}
no_c1e_idle:
return 0;
}
static cpumask_var_t c1e_mask;

View File

@ -314,16 +314,17 @@ static void __init reserve_brk(void)
#define MAX_MAP_CHUNK (NR_FIX_BTMAPS << PAGE_SHIFT)
static void __init relocate_initrd(void)
{
/* Assume only end is not page aligned */
u64 ramdisk_image = boot_params.hdr.ramdisk_image;
u64 ramdisk_size = boot_params.hdr.ramdisk_size;
u64 area_size = PAGE_ALIGN(ramdisk_size);
u64 end_of_lowmem = max_low_pfn_mapped << PAGE_SHIFT;
u64 ramdisk_here;
unsigned long slop, clen, mapaddr;
char *p, *q;
/* We need to move the initrd down into lowmem */
ramdisk_here = find_e820_area(0, end_of_lowmem, ramdisk_size,
ramdisk_here = find_e820_area(0, end_of_lowmem, area_size,
PAGE_SIZE);
if (ramdisk_here == -1ULL)
@ -332,7 +333,7 @@ static void __init relocate_initrd(void)
/* Note: this includes all the lowmem currently occupied by
the initrd, we rely on that fact to keep the data intact. */
reserve_early(ramdisk_here, ramdisk_here + ramdisk_size,
reserve_early(ramdisk_here, ramdisk_here + area_size,
"NEW RAMDISK");
initrd_start = ramdisk_here + PAGE_OFFSET;
initrd_end = initrd_start + ramdisk_size;
@ -376,9 +377,10 @@ static void __init relocate_initrd(void)
static void __init reserve_initrd(void)
{
/* Assume only end is not page aligned */
u64 ramdisk_image = boot_params.hdr.ramdisk_image;
u64 ramdisk_size = boot_params.hdr.ramdisk_size;
u64 ramdisk_end = ramdisk_image + ramdisk_size;
u64 ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size);
u64 end_of_lowmem = max_low_pfn_mapped << PAGE_SHIFT;
if (!boot_params.hdr.type_of_loader ||

View File

@ -247,7 +247,7 @@ static void __cpuinit smp_callin(void)
/*
* Need to setup vector mappings before we enable interrupts.
*/
__setup_vector_irq(smp_processor_id());
setup_vector_irq(smp_processor_id());
/*
* Get our bogomips.
*

View File

@ -291,8 +291,8 @@ SECTIONS
.smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) {
__smp_locks = .;
*(.smp_locks)
__smp_locks_end = .;
. = ALIGN(PAGE_SIZE);
__smp_locks_end = .;
}
#ifdef CONFIG_X86_64

View File

@ -331,11 +331,23 @@ int devmem_is_allowed(unsigned long pagenr)
void free_init_pages(char *what, unsigned long begin, unsigned long end)
{
unsigned long addr = begin;
unsigned long addr;
unsigned long begin_aligned, end_aligned;
if (addr >= end)
/* Make sure boundaries are page aligned */
begin_aligned = PAGE_ALIGN(begin);
end_aligned = end & PAGE_MASK;
if (WARN_ON(begin_aligned != begin || end_aligned != end)) {
begin = begin_aligned;
end = end_aligned;
}
if (begin >= end)
return;
addr = begin;
/*
* If debugging page accesses then do not free this memory but
* mark them not present - any buggy init-section access will
@ -343,7 +355,7 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
*/
#ifdef CONFIG_DEBUG_PAGEALLOC
printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",
begin, PAGE_ALIGN(end));
begin, end);
set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
#else
/*
@ -358,8 +370,7 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
for (; addr < end; addr += PAGE_SIZE) {
ClearPageReserved(virt_to_page(addr));
init_page_count(virt_to_page(addr));
memset((void *)(addr & ~(PAGE_SIZE-1)),
POISON_FREE_INITMEM, PAGE_SIZE);
memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);
free_page(addr);
totalram_pages++;
}
@ -376,6 +387,15 @@ void free_initmem(void)
#ifdef CONFIG_BLK_DEV_INITRD
void free_initrd_mem(unsigned long start, unsigned long end)
{
free_init_pages("initrd memory", start, end);
/*
* end could be not aligned, and We can not align that,
* decompresser could be confused by aligned initrd_end
* We already reserve the end partial page before in
* - i386_start_kernel()
* - x86_64_start_kernel()
* - relocate_initrd()
* So here We can do PAGE_ALIGN() safely to get partial page to be freed
*/
free_init_pages("initrd memory", start, PAGE_ALIGN(end));
}
#endif

View File

@ -122,8 +122,8 @@ setup_resource(struct acpi_resource *acpi_res, void *data)
struct acpi_resource_address64 addr;
acpi_status status;
unsigned long flags;
struct resource *root;
u64 start, end;
struct resource *root, *conflict;
u64 start, end, max_len;
status = resource_to_addr(acpi_res, &addr);
if (!ACPI_SUCCESS(status))
@ -140,6 +140,17 @@ setup_resource(struct acpi_resource *acpi_res, void *data)
} else
return AE_OK;
max_len = addr.maximum - addr.minimum + 1;
if (addr.address_length > max_len) {
dev_printk(KERN_DEBUG, &info->bridge->dev,
"host bridge window length %#llx doesn't fit in "
"%#llx-%#llx, trimming\n",
(unsigned long long) addr.address_length,
(unsigned long long) addr.minimum,
(unsigned long long) addr.maximum);
addr.address_length = max_len;
}
start = addr.minimum + addr.translation_offset;
end = start + addr.address_length - 1;
@ -157,9 +168,12 @@ setup_resource(struct acpi_resource *acpi_res, void *data)
return AE_OK;
}
if (insert_resource(root, res)) {
conflict = insert_resource_conflict(root, res);
if (conflict) {
dev_err(&info->bridge->dev,
"can't allocate host bridge window %pR\n", res);
"address space collision: host bridge window %pR "
"conflicts with %s %pR\n",
res, conflict->name, conflict);
} else {
pci_bus_add_resource(info->bus, res, 0);
info->res_num++;

View File

@ -127,9 +127,6 @@ static void __init pcibios_allocate_bus_resources(struct list_head *bus_list)
continue;
if (!r->start ||
pci_claim_resource(dev, idx) < 0) {
dev_info(&dev->dev,
"can't reserve window %pR\n",
r);
/*
* Something is wrong with the region.
* Invalidate the resource to prevent
@ -181,8 +178,6 @@ static void __init pcibios_allocate_resources(int pass)
"BAR %d: reserving %pr (d=%d, p=%d)\n",
idx, r, disabled, pass);
if (pci_claim_resource(dev, idx) < 0) {
dev_info(&dev->dev,
"can't reserve %pR\n", r);
/* We'll assign a new address later */
r->end -= r->start;
r->start = 0;

View File

@ -8,6 +8,7 @@
#include <linux/acpi.h>
#include <linux/signal.h>
#include <linux/kthread.h>
#include <linux/dmi.h>
#include <acpi/acpi_drivers.h>
@ -1032,6 +1033,41 @@ static void acpi_add_id(struct acpi_device *device, const char *dev_id)
list_add_tail(&id->list, &device->pnp.ids);
}
/*
* Old IBM workstations have a DSDT bug wherein the SMBus object
* lacks the SMBUS01 HID and the methods do not have the necessary "_"
* prefix. Work around this.
*/
static int acpi_ibm_smbus_match(struct acpi_device *device)
{
acpi_handle h_dummy;
struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL};
int result;
if (!dmi_name_in_vendors("IBM"))
return -ENODEV;
/* Look for SMBS object */
result = acpi_get_name(device->handle, ACPI_SINGLE_NAME, &path);
if (result)
return result;
if (strcmp("SMBS", path.pointer)) {
result = -ENODEV;
goto out;
}
/* Does it have the necessary (but misnamed) methods? */
result = -ENODEV;
if (ACPI_SUCCESS(acpi_get_handle(device->handle, "SBI", &h_dummy)) &&
ACPI_SUCCESS(acpi_get_handle(device->handle, "SBR", &h_dummy)) &&
ACPI_SUCCESS(acpi_get_handle(device->handle, "SBW", &h_dummy)))
result = 0;
out:
kfree(path.pointer);
return result;
}
static void acpi_device_set_id(struct acpi_device *device)
{
acpi_status status;
@ -1082,6 +1118,8 @@ static void acpi_device_set_id(struct acpi_device *device)
acpi_add_id(device, ACPI_BAY_HID);
else if (ACPI_SUCCESS(acpi_dock_match(device)))
acpi_add_id(device, ACPI_DOCK_HID);
else if (!acpi_ibm_smbus_match(device))
acpi_add_id(device, ACPI_SMBUS_IBM_HID);
break;
case ACPI_BUS_TYPE_POWER:

View File

@ -1667,6 +1667,7 @@ unsigned int ata_sff_host_intr(struct ata_port *ap,
{
struct ata_eh_info *ehi = &ap->link.eh_info;
u8 status, host_stat = 0;
bool bmdma_stopped = false;
VPRINTK("ata%u: protocol %d task_state %d\n",
ap->print_id, qc->tf.protocol, ap->hsm_task_state);
@ -1699,6 +1700,7 @@ unsigned int ata_sff_host_intr(struct ata_port *ap,
/* before we do anything else, clear DMA-Start bit */
ap->ops->bmdma_stop(qc);
bmdma_stopped = true;
if (unlikely(host_stat & ATA_DMA_ERR)) {
/* error when transfering data to/from memory */
@ -1716,8 +1718,14 @@ unsigned int ata_sff_host_intr(struct ata_port *ap,
/* check main status, clearing INTRQ if needed */
status = ata_sff_irq_status(ap);
if (status & ATA_BUSY)
goto idle_irq;
if (status & ATA_BUSY) {
if (bmdma_stopped) {
/* BMDMA engine is already stopped, we're screwed */
qc->err_mask |= AC_ERR_HSM;
ap->hsm_task_state = HSM_ST_ERR;
} else
goto idle_irq;
}
/* ack bmdma irq events */
ap->ops->sff_irq_clear(ap);
@ -1762,13 +1770,16 @@ EXPORT_SYMBOL_GPL(ata_sff_host_intr);
irqreturn_t ata_sff_interrupt(int irq, void *dev_instance)
{
struct ata_host *host = dev_instance;
bool retried = false;
unsigned int i;
unsigned int handled = 0, polling = 0;
unsigned int handled, idle, polling;
unsigned long flags;
/* TODO: make _irqsave conditional on x86 PCI IDE legacy mode */
spin_lock_irqsave(&host->lock, flags);
retry:
handled = idle = polling = 0;
for (i = 0; i < host->n_ports; i++) {
struct ata_port *ap = host->ports[i];
struct ata_queued_cmd *qc;
@ -1782,7 +1793,8 @@ irqreturn_t ata_sff_interrupt(int irq, void *dev_instance)
handled |= ata_sff_host_intr(ap, qc);
else
polling |= 1 << i;
}
} else
idle |= 1 << i;
}
/*
@ -1790,7 +1802,9 @@ irqreturn_t ata_sff_interrupt(int irq, void *dev_instance)
* asserting IRQ line, nobody cared will ensue. Check IRQ
* pending status if available and clear spurious IRQ.
*/
if (!handled) {
if (!handled && !retried) {
bool retry = false;
for (i = 0; i < host->n_ports; i++) {
struct ata_port *ap = host->ports[i];
@ -1805,8 +1819,23 @@ irqreturn_t ata_sff_interrupt(int irq, void *dev_instance)
ata_port_printk(ap, KERN_INFO,
"clearing spurious IRQ\n");
ap->ops->sff_check_status(ap);
ap->ops->sff_irq_clear(ap);
if (idle & (1 << i)) {
ap->ops->sff_check_status(ap);
ap->ops->sff_irq_clear(ap);
} else {
/* clear INTRQ and check if BUSY cleared */
if (!(ap->ops->sff_check_status(ap) & ATA_BUSY))
retry |= true;
/*
* With command in flight, we can't do
* sff_irq_clear() w/o racing with completion.
*/
}
}
if (retry) {
retried = true;
goto retry;
}
}


@ -576,6 +576,10 @@ static int via_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
u8 rev = isa->revision;
pci_dev_put(isa);
if ((id->device == 0x0415 || id->device == 0x3164) &&
(config->id != id->device))
continue;
if (rev >= config->rev_min && rev <= config->rev_max)
break;
}
@ -677,6 +681,7 @@ static const struct pci_device_id via[] = {
{ PCI_VDEVICE(VIA, 0x3164), },
{ PCI_VDEVICE(VIA, 0x5324), },
{ PCI_VDEVICE(VIA, 0xC409), VIA_IDFLAG_SINGLE },
{ PCI_VDEVICE(VIA, 0x9001), VIA_IDFLAG_SINGLE },
{ },
};


@ -439,8 +439,23 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (dev->bus && dev->bus->pm) {
pm_dev_dbg(dev, state, "EARLY ");
error = pm_noirq_op(dev, dev->bus->pm, state);
if (error)
goto End;
}
if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "EARLY type ");
error = pm_noirq_op(dev, dev->type->pm, state);
if (error)
goto End;
}
if (dev->class && dev->class->pm) {
pm_dev_dbg(dev, state, "EARLY class ");
error = pm_noirq_op(dev, dev->class->pm, state);
}
End:
TRACE_RESUME(error);
return error;
}
@ -735,10 +750,26 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
{
int error = 0;
if (dev->class && dev->class->pm) {
pm_dev_dbg(dev, state, "LATE class ");
error = pm_noirq_op(dev, dev->class->pm, state);
if (error)
goto End;
}
if (dev->type && dev->type->pm) {
pm_dev_dbg(dev, state, "LATE type ");
error = pm_noirq_op(dev, dev->type->pm, state);
if (error)
goto End;
}
if (dev->bus && dev->bus->pm) {
pm_dev_dbg(dev, state, "LATE ");
error = pm_noirq_op(dev, dev->bus->pm, state);
}
End:
return error;
}


@ -97,6 +97,9 @@ EXPORT_SYMBOL(intel_agp_enabled);
#define IS_PINEVIEW (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_HB)
#define IS_SNB (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB)
#define IS_G4X (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_EAGLELAKE_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q45_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G45_HB || \
@ -107,8 +110,7 @@ EXPORT_SYMBOL(intel_agp_enabled);
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB)
IS_SNB)
extern int agp_memory_reserved;
@ -175,6 +177,10 @@ extern int agp_memory_reserved;
#define SNB_GMCH_GMS_STOLEN_448M (0xe << 3)
#define SNB_GMCH_GMS_STOLEN_480M (0xf << 3)
#define SNB_GMCH_GMS_STOLEN_512M (0x10 << 3)
#define SNB_GTT_SIZE_0M (0 << 8)
#define SNB_GTT_SIZE_1M (1 << 8)
#define SNB_GTT_SIZE_2M (2 << 8)
#define SNB_GTT_SIZE_MASK (3 << 8)
static const struct aper_size_info_fixed intel_i810_sizes[] =
{
@ -1200,6 +1206,9 @@ static void intel_i9xx_setup_flush(void)
if (intel_private.ifp_resource.start)
return;
if (IS_SNB)
return;
/* setup a resource for this object */
intel_private.ifp_resource.name = "Intel Flush Page";
intel_private.ifp_resource.flags = IORESOURCE_MEM;
@ -1438,6 +1447,8 @@ static unsigned long intel_i965_mask_memory(struct agp_bridge_data *bridge,
static void intel_i965_get_gtt_range(int *gtt_offset, int *gtt_size)
{
u16 snb_gmch_ctl;
switch (agp_bridge->dev->device) {
case PCI_DEVICE_ID_INTEL_GM45_HB:
case PCI_DEVICE_ID_INTEL_EAGLELAKE_HB:
@ -1449,9 +1460,26 @@ static void intel_i965_get_gtt_range(int *gtt_offset, int *gtt_size)
case PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB:
case PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB:
case PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB:
*gtt_offset = *gtt_size = MB(2);
break;
case PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB:
case PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB:
*gtt_offset = *gtt_size = MB(2);
*gtt_offset = MB(2);
pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
switch (snb_gmch_ctl & SNB_GTT_SIZE_MASK) {
default:
case SNB_GTT_SIZE_0M:
printk(KERN_ERR "Bad GTT size mask: 0x%04x.\n", snb_gmch_ctl);
*gtt_size = MB(0);
break;
case SNB_GTT_SIZE_1M:
*gtt_size = MB(1);
break;
case SNB_GTT_SIZE_2M:
*gtt_size = MB(2);
break;
}
break;
default:
*gtt_offset = *gtt_size = KB(512);


@ -681,6 +681,10 @@ static void resize_console(struct port *port)
struct virtio_device *vdev;
struct winsize ws;
/* The port could have been hot-unplugged */
if (!port)
return;
vdev = port->portdev->vdev;
if (virtio_has_feature(vdev, VIRTIO_CONSOLE_F_SIZE)) {
vdev->config->get(vdev,
@ -947,11 +951,18 @@ static void handle_control_message(struct ports_device *portdev,
*/
err = sysfs_create_group(&port->dev->kobj,
&port_attribute_group);
if (err)
if (err) {
dev_err(port->dev,
"Error %d creating sysfs device attributes\n",
err);
} else {
/*
* Generate a udev event so that appropriate
* symlinks can be created based on udev
* rules.
*/
kobject_uevent(&port->dev->kobj, KOBJ_CHANGE);
}
break;
case VIRTIO_CONSOLE_PORT_REMOVE:
/*


@ -316,7 +316,12 @@ void amd_decode_nb_mce(int node_id, struct err_regs *regs, int handle_errors)
if (regs->nbsh & K8_NBSH_ERR_CPU_VAL)
pr_cont(", core: %u\n", (u8)(regs->nbsh & 0xf));
} else {
pr_cont(", core: %d\n", fls((regs->nbsh & 0xf) - 1));
u8 assoc_cpus = regs->nbsh & 0xf;
if (assoc_cpus > 0)
pr_cont(", core: %d", fls(assoc_cpus) - 1);
pr_cont("\n");
}
pr_emerg("%s.\n", EXT_ERR_MSG(xec));


@ -126,97 +126,74 @@ int fw_csr_string(const u32 *directory, int key, char *buf, size_t size)
}
EXPORT_SYMBOL(fw_csr_string);
static bool is_fw_unit(struct device *dev);
static int match_unit_directory(const u32 *directory, u32 match_flags,
const struct ieee1394_device_id *id)
static void get_ids(const u32 *directory, int *id)
{
struct fw_csr_iterator ci;
int key, value, match;
int key, value;
match = 0;
fw_csr_iterator_init(&ci, directory);
while (fw_csr_iterator_next(&ci, &key, &value)) {
if (key == CSR_VENDOR && value == id->vendor_id)
match |= IEEE1394_MATCH_VENDOR_ID;
if (key == CSR_MODEL && value == id->model_id)
match |= IEEE1394_MATCH_MODEL_ID;
if (key == CSR_SPECIFIER_ID && value == id->specifier_id)
match |= IEEE1394_MATCH_SPECIFIER_ID;
if (key == CSR_VERSION && value == id->version)
match |= IEEE1394_MATCH_VERSION;
switch (key) {
case CSR_VENDOR: id[0] = value; break;
case CSR_MODEL: id[1] = value; break;
case CSR_SPECIFIER_ID: id[2] = value; break;
case CSR_VERSION: id[3] = value; break;
}
}
return (match & match_flags) == match_flags;
}
static void get_modalias_ids(struct fw_unit *unit, int *id)
{
get_ids(&fw_parent_device(unit)->config_rom[5], id);
get_ids(unit->directory, id);
}
static bool match_ids(const struct ieee1394_device_id *id_table, int *id)
{
int match = 0;
if (id[0] == id_table->vendor_id)
match |= IEEE1394_MATCH_VENDOR_ID;
if (id[1] == id_table->model_id)
match |= IEEE1394_MATCH_MODEL_ID;
if (id[2] == id_table->specifier_id)
match |= IEEE1394_MATCH_SPECIFIER_ID;
if (id[3] == id_table->version)
match |= IEEE1394_MATCH_VERSION;
return (match & id_table->match_flags) == id_table->match_flags;
}
static bool is_fw_unit(struct device *dev);
static int fw_unit_match(struct device *dev, struct device_driver *drv)
{
struct fw_unit *unit = fw_unit(dev);
struct fw_device *device;
const struct ieee1394_device_id *id;
const struct ieee1394_device_id *id_table =
container_of(drv, struct fw_driver, driver)->id_table;
int id[] = {0, 0, 0, 0};
/* We only allow binding to fw_units. */
if (!is_fw_unit(dev))
return 0;
device = fw_parent_device(unit);
id = container_of(drv, struct fw_driver, driver)->id_table;
get_modalias_ids(fw_unit(dev), id);
for (; id->match_flags != 0; id++) {
if (match_unit_directory(unit->directory, id->match_flags, id))
for (; id_table->match_flags != 0; id_table++)
if (match_ids(id_table, id))
return 1;
/* Also check vendor ID in the root directory. */
if ((id->match_flags & IEEE1394_MATCH_VENDOR_ID) &&
match_unit_directory(&device->config_rom[5],
IEEE1394_MATCH_VENDOR_ID, id) &&
match_unit_directory(unit->directory, id->match_flags
& ~IEEE1394_MATCH_VENDOR_ID, id))
return 1;
}
return 0;
}
static int get_modalias(struct fw_unit *unit, char *buffer, size_t buffer_size)
{
struct fw_device *device = fw_parent_device(unit);
struct fw_csr_iterator ci;
int id[] = {0, 0, 0, 0};
int key, value;
int vendor = 0;
int model = 0;
int specifier_id = 0;
int version = 0;
fw_csr_iterator_init(&ci, &device->config_rom[5]);
while (fw_csr_iterator_next(&ci, &key, &value)) {
switch (key) {
case CSR_VENDOR:
vendor = value;
break;
case CSR_MODEL:
model = value;
break;
}
}
fw_csr_iterator_init(&ci, unit->directory);
while (fw_csr_iterator_next(&ci, &key, &value)) {
switch (key) {
case CSR_SPECIFIER_ID:
specifier_id = value;
break;
case CSR_VERSION:
version = value;
break;
}
}
get_modalias_ids(unit, id);
return snprintf(buffer, buffer_size,
"ieee1394:ven%08Xmo%08Xsp%08Xver%08X",
vendor, model, specifier_id, version);
id[0], id[1], id[2], id[3]);
}
static int fw_unit_uevent(struct device *dev, struct kobj_uevent_env *env)


@ -331,8 +331,9 @@ void fw_iso_resource_manage(struct fw_card *card, int generation,
if (ret < 0)
*bandwidth = 0;
if (allocate && ret < 0 && c >= 0) {
deallocate_channel(card, irm_id, generation, c, buffer);
if (allocate && ret < 0) {
if (c >= 0)
deallocate_channel(card, irm_id, generation, c, buffer);
*channel = ret;
}
}


@ -231,6 +231,8 @@ static inline struct fw_ohci *fw_ohci(struct fw_card *card)
static char ohci_driver_name[] = KBUILD_MODNAME;
#define PCI_DEVICE_ID_TI_TSB12LV22 0x8009
#define QUIRK_CYCLE_TIMER 1
#define QUIRK_RESET_PACKET 2
#define QUIRK_BE_HEADERS 4
@ -239,6 +241,8 @@ static char ohci_driver_name[] = KBUILD_MODNAME;
static const struct {
unsigned short vendor, device, flags;
} ohci_quirks[] = {
{PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_TSB12LV22, QUIRK_CYCLE_TIMER |
QUIRK_RESET_PACKET},
{PCI_VENDOR_ID_TI, PCI_ANY_ID, QUIRK_RESET_PACKET},
{PCI_VENDOR_ID_AL, PCI_ANY_ID, QUIRK_CYCLE_TIMER},
{PCI_VENDOR_ID_NEC, PCI_ANY_ID, QUIRK_CYCLE_TIMER},


@ -242,3 +242,7 @@ int __devexit __max730x_remove(struct device *dev)
return ret;
}
EXPORT_SYMBOL_GPL(__max730x_remove);
MODULE_AUTHOR("Juergen Beisert, Wolfram Sang");
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("MAX730x GPIO-Expanders, generic parts");


@ -1881,29 +1881,29 @@ struct drm_ioctl_desc i915_ioctls[] = {
DRM_IOCTL_DEF(DRM_I915_GET_VBLANK_PIPE, i915_vblank_pipe_get, DRM_AUTH ),
DRM_IOCTL_DEF(DRM_I915_VBLANK_SWAP, i915_vblank_swap, DRM_AUTH),
DRM_IOCTL_DEF(DRM_I915_HWS_ADDR, i915_set_status_page, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_I915_GEM_INIT, i915_gem_init_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_I915_GEM_EXECBUFFER, i915_gem_execbuffer, DRM_AUTH),
DRM_IOCTL_DEF(DRM_I915_GEM_EXECBUFFER2, i915_gem_execbuffer2, DRM_AUTH),
DRM_IOCTL_DEF(DRM_I915_GEM_PIN, i915_gem_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_I915_GEM_UNPIN, i915_gem_unpin_ioctl, DRM_AUTH|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_AUTH),
DRM_IOCTL_DEF(DRM_I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_AUTH),
DRM_IOCTL_DEF(DRM_I915_GEM_ENTERVT, i915_gem_entervt_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_I915_GEM_LEAVEVT, i915_gem_leavevt_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_I915_GEM_CREATE, i915_gem_create_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_PREAD, i915_gem_pread_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_PWRITE, i915_gem_pwrite_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_MMAP, i915_gem_mmap_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_MMAP_GTT, i915_gem_mmap_gtt_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_SET_DOMAIN, i915_gem_set_domain_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_SET_TILING, i915_gem_set_tiling, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_GET_TILING, i915_gem_get_tiling, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_GET_APERTURE, i915_gem_get_aperture_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_GET_PIPE_FROM_CRTC_ID, intel_get_pipe_from_crtc_id, 0),
DRM_IOCTL_DEF(DRM_I915_GEM_MADVISE, i915_gem_madvise_ioctl, 0),
DRM_IOCTL_DEF(DRM_I915_OVERLAY_PUT_IMAGE, intel_overlay_put_image, DRM_MASTER|DRM_CONTROL_ALLOW),
DRM_IOCTL_DEF(DRM_I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW),
DRM_IOCTL_DEF(DRM_I915_GEM_INIT, i915_gem_init_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_EXECBUFFER, i915_gem_execbuffer, DRM_AUTH|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_EXECBUFFER2, i915_gem_execbuffer2, DRM_AUTH|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_PIN, i915_gem_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_UNPIN, i915_gem_unpin_ioctl, DRM_AUTH|DRM_ROOT_ONLY|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_AUTH|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_AUTH|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_ENTERVT, i915_gem_entervt_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_LEAVEVT, i915_gem_leavevt_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_CREATE, i915_gem_create_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_PREAD, i915_gem_pread_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_PWRITE, i915_gem_pwrite_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_MMAP, i915_gem_mmap_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_MMAP_GTT, i915_gem_mmap_gtt_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_SET_DOMAIN, i915_gem_set_domain_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_SET_TILING, i915_gem_set_tiling, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_GET_TILING, i915_gem_get_tiling, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_GET_APERTURE, i915_gem_get_aperture_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GET_PIPE_FROM_CRTC_ID, intel_get_pipe_from_crtc_id, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_GEM_MADVISE, i915_gem_madvise_ioctl, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_OVERLAY_PUT_IMAGE, intel_overlay_put_image, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
};
int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls);


@ -139,12 +139,12 @@ const static struct intel_device_info intel_ironlake_m_info = {
const static struct intel_device_info intel_sandybridge_d_info = {
.is_i965g = 1, .is_i9xx = 1, .need_gfx_hws = 1,
.has_hotplug = 1,
.has_hotplug = 1, .is_gen6 = 1,
};
const static struct intel_device_info intel_sandybridge_m_info = {
.is_i965g = 1, .is_mobile = 1, .is_i9xx = 1, .need_gfx_hws = 1,
.has_hotplug = 1,
.has_hotplug = 1, .is_gen6 = 1,
};
const static struct pci_device_id pciidlist[] = {


@ -205,6 +205,7 @@ struct intel_device_info {
u8 is_g4x : 1;
u8 is_pineview : 1;
u8 is_ironlake : 1;
u8 is_gen6 : 1;
u8 has_fbc : 1;
u8 has_rc6 : 1;
u8 has_pipe_cxsr : 1;
@ -1084,6 +1085,7 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller);
#define IS_IRONLAKE_M(dev) ((dev)->pci_device == 0x0046)
#define IS_IRONLAKE(dev) (INTEL_INFO(dev)->is_ironlake)
#define IS_I9XX(dev) (INTEL_INFO(dev)->is_i9xx)
#define IS_GEN6(dev) (INTEL_INFO(dev)->is_gen6)
#define IS_MOBILE(dev) (INTEL_INFO(dev)->is_mobile)
#define IS_GEN3(dev) (IS_I915G(dev) || \
@ -1107,8 +1109,6 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller);
#define I915_NEED_GFX_HWS(dev) (INTEL_INFO(dev)->need_gfx_hws)
#define IS_GEN6(dev) ((dev)->pci_device == 0x0102)
/* With the 945 and later, Y tiling got adjusted so that it was 32 128-byte
* rows, which changed the alignment requirements and fence programming.
*/


@ -1466,9 +1466,6 @@ i915_gem_object_put_pages(struct drm_gem_object *obj)
obj_priv->dirty = 0;
for (i = 0; i < page_count; i++) {
if (obj_priv->pages[i] == NULL)
break;
if (obj_priv->dirty)
set_page_dirty(obj_priv->pages[i]);
@ -2227,11 +2224,6 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
seqno = i915_add_request(dev, NULL, obj->write_domain);
if (seqno == 0)
return -ENOMEM;
ret = i915_wait_request(dev, seqno);
if (ret)
return ret;
continue;
}
}
@ -2256,7 +2248,6 @@ i915_gem_object_get_pages(struct drm_gem_object *obj,
struct address_space *mapping;
struct inode *inode;
struct page *page;
int ret;
if (obj_priv->pages_refcount++ != 0)
return 0;
@ -2279,11 +2270,9 @@ i915_gem_object_get_pages(struct drm_gem_object *obj,
mapping_gfp_mask (mapping) |
__GFP_COLD |
gfpmask);
if (IS_ERR(page)) {
ret = PTR_ERR(page);
i915_gem_object_put_pages(obj);
return ret;
}
if (IS_ERR(page))
goto err_pages;
obj_priv->pages[i] = page;
}
@ -2291,6 +2280,15 @@ i915_gem_object_get_pages(struct drm_gem_object *obj,
i915_gem_object_do_bit_17_swizzle(obj);
return 0;
err_pages:
while (i--)
page_cache_release(obj_priv->pages[i]);
drm_free_large(obj_priv->pages);
obj_priv->pages = NULL;
obj_priv->pages_refcount--;
return PTR_ERR(page);
}
static void sandybridge_write_fence_reg(struct drm_i915_fence_reg *reg)
@ -4730,6 +4728,11 @@ i915_gem_init_ringbuffer(struct drm_device *dev)
ring->space += ring->Size;
}
if (IS_I9XX(dev) && !IS_GEN3(dev)) {
I915_WRITE(MI_MODE,
(VS_TIMER_DISPATCH) << 16 | VS_TIMER_DISPATCH);
}
return 0;
}


@ -325,9 +325,12 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
* need to ensure that any fence register is cleared.
*/
if (!i915_gem_object_fence_offset_ok(obj, args->tiling_mode))
ret = i915_gem_object_unbind(obj);
ret = i915_gem_object_unbind(obj);
else if (obj_priv->fence_reg != I915_FENCE_REG_NONE)
ret = i915_gem_object_put_fence_reg(obj);
else
ret = i915_gem_object_put_fence_reg(obj);
i915_gem_release_mmap(obj);
if (ret != 0) {
WARN(ret != -ERESTARTSYS,
"failed to reset object for tiling switch");


@ -298,6 +298,10 @@
#define INSTDONE 0x02090
#define NOPID 0x02094
#define HWSTAM 0x02098
#define MI_MODE 0x0209c
# define VS_TIMER_DISPATCH (1 << 6)
#define SCPD0 0x0209c /* 915+ only */
#define IER 0x020a0
#define IIR 0x020a4
@ -366,7 +370,7 @@
#define FBC_CTL_PERIODIC (1<<30)
#define FBC_CTL_INTERVAL_SHIFT (16)
#define FBC_CTL_UNCOMPRESSIBLE (1<<14)
#define FBC_C3_IDLE (1<<13)
#define FBC_CTL_C3_IDLE (1<<13)
#define FBC_CTL_STRIDE_SHIFT (5)
#define FBC_CTL_FENCENO (1<<0)
#define FBC_COMMAND 0x0320c
@ -2172,6 +2176,14 @@
#define DISPLAY_PORT_PLL_BIOS_1 0x46010
#define DISPLAY_PORT_PLL_BIOS_2 0x46014
#define PCH_DSPCLK_GATE_D 0x42020
# define DPFDUNIT_CLOCK_GATE_DISABLE (1 << 7)
# define DPARBUNIT_CLOCK_GATE_DISABLE (1 << 5)
#define PCH_3DCGDIS0 0x46020
# define MARIUNIT_CLOCK_GATE_DISABLE (1 << 18)
# define SVSMUNIT_CLOCK_GATE_DISABLE (1 << 1)
#define FDI_PLL_FREQ_CTL 0x46030
#define FDI_PLL_FREQ_CHANGE_REQUEST (1<<24)
#define FDI_PLL_FREQ_LOCK_LIMIT_MASK 0xfff00


@ -417,8 +417,9 @@ parse_edp(struct drm_i915_private *dev_priv, struct bdb_header *bdb)
edp = find_section(bdb, BDB_EDP);
if (!edp) {
if (SUPPORTS_EDP(dev_priv->dev) && dev_priv->edp_support) {
DRM_DEBUG_KMS("No eDP BDB found but eDP panel supported,\
assume 18bpp panel color depth.\n");
DRM_DEBUG_KMS("No eDP BDB found but eDP panel "
"supported, assume 18bpp panel color "
"depth.\n");
dev_priv->edp_bpp = 18;
}
return;


@ -1032,7 +1032,7 @@ static void i8xx_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
/* enable it... */
fbc_ctl = FBC_CTL_EN | FBC_CTL_PERIODIC;
if (IS_I945GM(dev))
fbc_ctl |= FBC_C3_IDLE; /* 945 needs special SR handling */
fbc_ctl |= FBC_CTL_C3_IDLE; /* 945 needs special SR handling */
fbc_ctl |= (dev_priv->cfb_pitch & 0xff) << FBC_CTL_STRIDE_SHIFT;
fbc_ctl |= (interval & 0x2fff) << FBC_CTL_INTERVAL_SHIFT;
if (obj_priv->tiling_mode != I915_TILING_NONE)
@ -4717,6 +4717,20 @@ void intel_init_clock_gating(struct drm_device *dev)
* specs, but enable as much else as we can.
*/
if (HAS_PCH_SPLIT(dev)) {
uint32_t dspclk_gate = VRHUNIT_CLOCK_GATE_DISABLE;
if (IS_IRONLAKE(dev)) {
/* Required for FBC */
dspclk_gate |= DPFDUNIT_CLOCK_GATE_DISABLE;
/* Required for CxSR */
dspclk_gate |= DPARBUNIT_CLOCK_GATE_DISABLE;
I915_WRITE(PCH_3DCGDIS0,
MARIUNIT_CLOCK_GATE_DISABLE |
SVSMUNIT_CLOCK_GATE_DISABLE);
}
I915_WRITE(PCH_DSPCLK_GATE_D, dspclk_gate);
return;
} else if (IS_G4X(dev)) {
uint32_t dspclk_gate;


@ -607,53 +607,6 @@ static void intel_lvds_mode_set(struct drm_encoder *encoder,
I915_WRITE(PFIT_CONTROL, lvds_priv->pfit_control);
}
/* Some lid devices report incorrect lid status, assume they're connected */
static const struct dmi_system_id bad_lid_status[] = {
{
.ident = "Compaq nx9020",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
DMI_MATCH(DMI_BOARD_NAME, "3084"),
},
},
{
.ident = "Samsung SX20S",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Samsung Electronics"),
DMI_MATCH(DMI_BOARD_NAME, "SX20S"),
},
},
{
.ident = "Aspire One",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_NAME, "Aspire one"),
},
},
{
.ident = "Aspire 1810T",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1810T"),
},
},
{
.ident = "PC-81005",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "MALATA"),
DMI_MATCH(DMI_PRODUCT_NAME, "PC-81005"),
},
},
{
.ident = "Clevo M5x0N",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "CLEVO Co."),
DMI_MATCH(DMI_BOARD_NAME, "M5x0N"),
},
},
{ }
};
/**
* Detect the LVDS connection.
*
@ -669,12 +622,9 @@ static enum drm_connector_status intel_lvds_detect(struct drm_connector *connect
/* ACPI lid methods were generally unreliable in this generation, so
* don't even bother.
*/
if (IS_GEN2(dev))
if (IS_GEN2(dev) || IS_GEN3(dev))
return connector_status_connected;
if (!dmi_check_system(bad_lid_status) && !acpi_lid_open())
status = connector_status_disconnected;
return status;
}


@ -1068,14 +1068,18 @@ int intel_overlay_put_image(struct drm_device *dev, void *data,
drmmode_obj = drm_mode_object_find(dev, put_image_rec->crtc_id,
DRM_MODE_OBJECT_CRTC);
if (!drmmode_obj)
return -ENOENT;
if (!drmmode_obj) {
ret = -ENOENT;
goto out_free;
}
crtc = to_intel_crtc(obj_to_crtc(drmmode_obj));
new_bo = drm_gem_object_lookup(dev, file_priv,
put_image_rec->bo_handle);
if (!new_bo)
return -ENOENT;
if (!new_bo) {
ret = -ENOENT;
goto out_free;
}
mutex_lock(&dev->mode_config.mutex);
mutex_lock(&dev->struct_mutex);
@ -1165,6 +1169,7 @@ out_unlock:
mutex_unlock(&dev->struct_mutex);
mutex_unlock(&dev->mode_config.mutex);
drm_gem_object_unreference_unlocked(new_bo);
out_free:
kfree(params);
return ret;


@ -116,13 +116,7 @@ int radeon_irq_kms_init(struct radeon_device *rdev)
}
/* enable msi */
rdev->msi_enabled = 0;
/* MSIs don't seem to work on my rs780;
* not sure about rs880 or other rs780s.
* Needs more investigation.
*/
if ((rdev->family >= CHIP_RV380) &&
(rdev->family != CHIP_RS780) &&
(rdev->family != CHIP_RS880)) {
if (rdev->family >= CHIP_RV380) {
int ret = pci_enable_msi(rdev->pdev);
if (!ret) {
rdev->msi_enabled = 1;


@ -217,8 +217,8 @@ config SENSORS_ASC7621
depends on HWMON && I2C
help
If you say yes here you get support for the aSC7621
family of SMBus sensors chip found on most Intel X48, X38, 975,
965 and 945 desktop boards. Currently supported chips:
family of SMBus sensors chip found on most Intel X38, X48, X58,
945, 965 and 975 desktop boards. Currently supported chips:
aSC7621
aSC7621a


@ -228,7 +228,7 @@ static int __devinit adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *
if (err) {
dev_warn(dev,
"Unable to access MSR 0xEE, for Tjmax, left"
" at default");
" at default\n");
} else if (eax & 0x40000000) {
tjmax = tjmax_ee;
}
@ -466,7 +466,7 @@ static int __init coretemp_init(void)
family 6 CPU */
if ((c->x86 == 0x6) && (c->x86_model > 0xf))
printk(KERN_WARNING DRVNAME ": Unknown CPU "
"model %x\n", c->x86_model);
"model 0x%x\n", c->x86_model);
continue;
}


@ -1294,7 +1294,7 @@ static int watchdog_close(struct inode *inode, struct file *filp)
static ssize_t watchdog_write(struct file *filp, const char __user *buf,
size_t count, loff_t *offset)
{
size_t ret;
ssize_t ret;
struct w83793_data *data = filp->private_data;
if (count) {


@ -33,6 +33,7 @@ struct acpi_smbus_cmi {
u8 cap_info:1;
u8 cap_read:1;
u8 cap_write:1;
struct smbus_methods_t *methods;
};
static const struct smbus_methods_t smbus_methods = {
@ -41,10 +42,19 @@ static const struct smbus_methods_t smbus_methods = {
.mt_sbw = "_SBW",
};
/* Some IBM BIOSes omit the leading underscore */
static const struct smbus_methods_t ibm_smbus_methods = {
.mt_info = "SBI_",
.mt_sbr = "SBR_",
.mt_sbw = "SBW_",
};
static const struct acpi_device_id acpi_smbus_cmi_ids[] = {
{"SMBUS01", 0},
{"SMBUS01", (kernel_ulong_t)&smbus_methods},
{ACPI_SMBUS_IBM_HID, (kernel_ulong_t)&ibm_smbus_methods},
{"", 0}
};
MODULE_DEVICE_TABLE(acpi, acpi_smbus_cmi_ids);
#define ACPI_SMBUS_STATUS_OK 0x00
#define ACPI_SMBUS_STATUS_FAIL 0x07
@ -150,11 +160,11 @@ acpi_smbus_cmi_access(struct i2c_adapter *adap, u16 addr, unsigned short flags,
if (read_write == I2C_SMBUS_READ) {
protocol |= ACPI_SMBUS_PRTCL_READ;
method = smbus_methods.mt_sbr;
method = smbus_cmi->methods->mt_sbr;
input.count = 3;
} else {
protocol |= ACPI_SMBUS_PRTCL_WRITE;
method = smbus_methods.mt_sbw;
method = smbus_cmi->methods->mt_sbw;
input.count = 5;
}
@ -290,13 +300,13 @@ static int acpi_smbus_cmi_add_cap(struct acpi_smbus_cmi *smbus_cmi,
union acpi_object *obj;
acpi_status status;
if (!strcmp(name, smbus_methods.mt_info)) {
if (!strcmp(name, smbus_cmi->methods->mt_info)) {
status = acpi_evaluate_object(smbus_cmi->handle,
smbus_methods.mt_info,
smbus_cmi->methods->mt_info,
NULL, &buffer);
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO, "Evaluating %s: %i",
smbus_methods.mt_info, status));
smbus_cmi->methods->mt_info, status));
return -EIO;
}
@ -319,9 +329,9 @@ static int acpi_smbus_cmi_add_cap(struct acpi_smbus_cmi *smbus_cmi,
kfree(buffer.pointer);
smbus_cmi->cap_info = 1;
} else if (!strcmp(name, smbus_methods.mt_sbr))
} else if (!strcmp(name, smbus_cmi->methods->mt_sbr))
smbus_cmi->cap_read = 1;
else if (!strcmp(name, smbus_methods.mt_sbw))
else if (!strcmp(name, smbus_cmi->methods->mt_sbw))
smbus_cmi->cap_write = 1;
else
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Unsupported CMI method: %s\n",
@ -349,6 +359,7 @@ static acpi_status acpi_smbus_cmi_query_methods(acpi_handle handle, u32 level,
static int acpi_smbus_cmi_add(struct acpi_device *device)
{
struct acpi_smbus_cmi *smbus_cmi;
const struct acpi_device_id *id;
smbus_cmi = kzalloc(sizeof(struct acpi_smbus_cmi), GFP_KERNEL);
if (!smbus_cmi)
@ -362,6 +373,11 @@ static int acpi_smbus_cmi_add(struct acpi_device *device)
smbus_cmi->cap_read = 0;
smbus_cmi->cap_write = 0;
for (id = acpi_smbus_cmi_ids; id->id[0]; id++)
if (!strcmp(id->id, acpi_device_hid(device)))
smbus_cmi->methods =
(struct smbus_methods_t *) id->driver_data;
acpi_walk_namespace(ACPI_TYPE_METHOD, smbus_cmi->handle, 1,
acpi_smbus_cmi_query_methods, NULL, smbus_cmi, NULL);


@ -695,14 +695,8 @@ static int ide_probe_port(ide_hwif_t *hwif)
if (irqd)
disable_irq(hwif->irq);
rc = ide_port_wait_ready(hwif);
if (rc == -ENODEV) {
printk(KERN_INFO "%s: no devices on the port\n", hwif->name);
goto out;
} else if (rc == -EBUSY)
printk(KERN_ERR "%s: not ready before the probe\n", hwif->name);
else
rc = -ENODEV;
if (ide_port_wait_ready(hwif) == -EBUSY)
printk(KERN_DEBUG "%s: Wait for ready failed before probe !\n", hwif->name);
/*
* Second drive should only exist if first drive was found,
@ -713,7 +707,7 @@ static int ide_probe_port(ide_hwif_t *hwif)
if (drive->dev_flags & IDE_DFLAG_PRESENT)
rc = 0;
}
out:
/*
* Use cached IRQ number. It might be (and is...) changed by probe
* code above


@ -110,7 +110,6 @@ struct via82cxxx_dev
{
struct via_isa_bridge *via_config;
unsigned int via_80w;
u8 cached_device[2];
};
/**
@ -403,66 +402,10 @@ static const struct ide_port_ops via_port_ops = {
.cable_detect = via82cxxx_cable_detect,
};
static void via_write_devctl(ide_hwif_t *hwif, u8 ctl)
{
struct via82cxxx_dev *vdev = hwif->host->host_priv;
outb(ctl, hwif->io_ports.ctl_addr);
outb(vdev->cached_device[hwif->channel], hwif->io_ports.device_addr);
}
static void __via_dev_select(ide_drive_t *drive, u8 select)
{
ide_hwif_t *hwif = drive->hwif;
struct via82cxxx_dev *vdev = hwif->host->host_priv;
outb(select, hwif->io_ports.device_addr);
vdev->cached_device[hwif->channel] = select;
}
static void via_dev_select(ide_drive_t *drive)
{
__via_dev_select(drive, drive->select | ATA_DEVICE_OBS);
}
static void via_tf_load(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid)
{
ide_hwif_t *hwif = drive->hwif;
struct ide_io_ports *io_ports = &hwif->io_ports;
if (valid & IDE_VALID_FEATURE)
outb(tf->feature, io_ports->feature_addr);
if (valid & IDE_VALID_NSECT)
outb(tf->nsect, io_ports->nsect_addr);
if (valid & IDE_VALID_LBAL)
outb(tf->lbal, io_ports->lbal_addr);
if (valid & IDE_VALID_LBAM)
outb(tf->lbam, io_ports->lbam_addr);
if (valid & IDE_VALID_LBAH)
outb(tf->lbah, io_ports->lbah_addr);
if (valid & IDE_VALID_DEVICE)
__via_dev_select(drive, tf->device);
}
const struct ide_tp_ops via_tp_ops = {
.exec_command = ide_exec_command,
.read_status = ide_read_status,
.read_altstatus = ide_read_altstatus,
.write_devctl = via_write_devctl,
.dev_select = via_dev_select,
.tf_load = via_tf_load,
.tf_read = ide_tf_read,
.input_data = ide_input_data,
.output_data = ide_output_data,
};
static const struct ide_port_info via82cxxx_chipset __devinitdata = {
.name = DRV_NAME,
.init_chipset = init_chipset_via82cxxx,
.enablebits = { { 0x40, 0x02, 0x02 }, { 0x40, 0x01, 0x01 } },
.tp_ops = &via_tp_ops,
.port_ops = &via_port_ops,
.host_flags = IDE_HFLAG_PIO_NO_BLACKLIST |
IDE_HFLAG_POST_SET_MODE |


@ -50,7 +50,7 @@ module_param(isdnprot, int, 0);
handler.
*/
static int avma1cs_config(struct pcmcia_device *link);
static int avma1cs_config(struct pcmcia_device *link) __devinit ;
static void avma1cs_release(struct pcmcia_device *link);
/*
@ -59,7 +59,7 @@ static void avma1cs_release(struct pcmcia_device *link);
needed to manage one actual PCMCIA card.
*/
static void avma1cs_detach(struct pcmcia_device *p_dev);
static void avma1cs_detach(struct pcmcia_device *p_dev) __devexit ;
/*
@ -99,7 +99,7 @@ typedef struct local_info_t {
======================================================================*/
static int avma1cs_probe(struct pcmcia_device *p_dev)
static int __devinit avma1cs_probe(struct pcmcia_device *p_dev)
{
local_info_t *local;
@ -140,7 +140,7 @@ static int avma1cs_probe(struct pcmcia_device *p_dev)
======================================================================*/
static void avma1cs_detach(struct pcmcia_device *link)
static void __devexit avma1cs_detach(struct pcmcia_device *link)
{
dev_dbg(&link->dev, "avma1cs_detach(0x%p)\n", link);
avma1cs_release(link);
@ -174,7 +174,7 @@ static int avma1cs_configcheck(struct pcmcia_device *p_dev,
}
static int avma1cs_config(struct pcmcia_device *link)
static int __devinit avma1cs_config(struct pcmcia_device *link)
{
local_info_t *dev;
int i;
@ -282,7 +282,7 @@ static struct pcmcia_driver avma1cs_driver = {
.name = "avma1_cs",
},
.probe = avma1cs_probe,
.remove = avma1cs_detach,
.remove = __devexit_p(avma1cs_detach),
.id_table = avma1cs_ids,
};


@ -76,7 +76,7 @@ module_param(protocol, int, 0);
handler.
*/
static int elsa_cs_config(struct pcmcia_device *link);
static int elsa_cs_config(struct pcmcia_device *link) __devinit ;
static void elsa_cs_release(struct pcmcia_device *link);
/*
@ -85,7 +85,7 @@ static void elsa_cs_release(struct pcmcia_device *link);
needed to manage one actual PCMCIA card.
*/
static void elsa_cs_detach(struct pcmcia_device *p_dev);
static void elsa_cs_detach(struct pcmcia_device *p_dev) __devexit;
/*
A driver needs to provide a dev_node_t structure for each device
@ -121,7 +121,7 @@ typedef struct local_info_t {
======================================================================*/
static int elsa_cs_probe(struct pcmcia_device *link)
static int __devinit elsa_cs_probe(struct pcmcia_device *link)
{
local_info_t *local;
@ -166,7 +166,7 @@ static int elsa_cs_probe(struct pcmcia_device *link)
======================================================================*/
static void elsa_cs_detach(struct pcmcia_device *link)
static void __devexit elsa_cs_detach(struct pcmcia_device *link)
{
local_info_t *info = link->priv;
@ -210,7 +210,7 @@ static int elsa_cs_configcheck(struct pcmcia_device *p_dev,
return -ENODEV;
}
static int elsa_cs_config(struct pcmcia_device *link)
static int __devinit elsa_cs_config(struct pcmcia_device *link)
{
local_info_t *dev;
int i;
@ -327,7 +327,7 @@ static struct pcmcia_driver elsa_cs_driver = {
.name = "elsa_cs",
},
.probe = elsa_cs_probe,
.remove = elsa_cs_detach,
.remove = __devexit_p(elsa_cs_detach),
.id_table = elsa_ids,
.suspend = elsa_suspend,
.resume = elsa_resume,


@ -76,7 +76,7 @@ module_param(protocol, int, 0);
event handler.
*/
static int sedlbauer_config(struct pcmcia_device *link);
static int sedlbauer_config(struct pcmcia_device *link) __devinit ;
static void sedlbauer_release(struct pcmcia_device *link);
/*
@ -85,7 +85,7 @@ static void sedlbauer_release(struct pcmcia_device *link);
needed to manage one actual PCMCIA card.
*/
static void sedlbauer_detach(struct pcmcia_device *p_dev);
static void sedlbauer_detach(struct pcmcia_device *p_dev) __devexit;
/*
You'll also need to prototype all the functions that will actually
@ -129,7 +129,7 @@ typedef struct local_info_t {
======================================================================*/
static int sedlbauer_probe(struct pcmcia_device *link)
static int __devinit sedlbauer_probe(struct pcmcia_device *link)
{
local_info_t *local;
@ -177,7 +177,7 @@ static int sedlbauer_probe(struct pcmcia_device *link)
======================================================================*/
static void sedlbauer_detach(struct pcmcia_device *link)
static void __devexit sedlbauer_detach(struct pcmcia_device *link)
{
dev_dbg(&link->dev, "sedlbauer_detach(0x%p)\n", link);
@ -283,7 +283,7 @@ static int sedlbauer_config_check(struct pcmcia_device *p_dev,
static int sedlbauer_config(struct pcmcia_device *link)
static int __devinit sedlbauer_config(struct pcmcia_device *link)
{
local_info_t *dev = link->priv;
win_req_t *req;
@ -441,7 +441,7 @@ static struct pcmcia_driver sedlbauer_driver = {
.name = "sedlbauer_cs",
},
.probe = sedlbauer_probe,
.remove = sedlbauer_detach,
.remove = __devexit_p(sedlbauer_detach),
.id_table = sedlbauer_ids,
.suspend = sedlbauer_suspend,
.resume = sedlbauer_resume,

Some files were not shown because too many files have changed in this diff.