Commit Graph

1554 Commits

Andrea Righi
27ac792ca0 PAGE_ALIGN(): correctly handle 64-bit values on 32-bit architectures
On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit
boundary. For example:

	u64 val = PAGE_ALIGN(size);

always returns a value < 4GB even if size is greater than 4GB.

The problem resides in PAGE_MASK definition (from include/asm-x86/page.h for
example):

#define PAGE_SHIFT      12
#define PAGE_SIZE       (_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE-1))
...
#define PAGE_ALIGN(addr)       (((addr)+PAGE_SIZE-1)&PAGE_MASK)

The "~" is performed on a 32-bit value, so everything in "and" with
PAGE_MASK greater than 4GB will be truncated to the 32-bit boundary.
Using the ALIGN() macro seems to be the right way, because it uses
typeof(addr) for the mask.
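
For illustration, a minimal sketch of why this works (assuming the
include/linux/kernel.h definitions of the time):

	/*
	 * #define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
	 * #define ALIGN(x, a)		__ALIGN_MASK(x, (typeof(x))(a) - 1)
	 *
	 * With a u64 x, (typeof(x))(a) - 1 is a 64-bit mask, so the "~"
	 * happens in 64 bits and the upper word survives:
	 */
	u64 size = 0x100001000ULL;		/* > 4GB */
	u64 val  = ALIGN(size, PAGE_SIZE);	/* stays > 4GB */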

Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h and
into include/linux/mm.h.

See also lkml discussion: http://lkml.org/lkml/2008/6/11/237

[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:21 -07:00
Jon Tollefson
0d9ea75443 powerpc: support multiple hugepage sizes
Instead of using the variable mmu_huge_psize to keep track of the huge
page size we use an array of MMU_PAGE_* values.  For each supported huge
page size we need to know the hugepte_shift value and have a
pgtable_cache.  The hstate or an mmu_huge_psizes index is passed to
functions so that they know which huge page size they should use.

The hugepage sizes 16M and 64K are set up (if available on the hardware)
so that they don't have to be set on the boot cmd line in order to use
them.  The number of 16G pages has to be specified at boot time, though
(e.g. hugepagesz=16G hugepages=5).

Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:19 -07:00
Jon Tollefson
91224346aa powerpc: define support for 16G hugepages
The huge page size is defined for 16G pages.  If a hugepagesz of 16G is
specified at boot-time then it becomes the huge page size instead of the
default 16M.

The change in pgtable-64K.h makes the increment to va (the 1 being
shifted) in the pte_iterate_hashed_subpages macro a long, so that it is
not shifted to 0.  Otherwise it would create an infinite loop when the
shift value is for a 16G page (when the base page size is 64K).

Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:19 -07:00
Jon Tollefson
658013e93e powerpc: scan device tree for gigantic pages
The 16G huge pages have to be reserved in the HMC prior to boot.  The
locations of the pages are placed in the device tree.  This patch adds code
to scan the device tree during very early boot and save these page
locations until hugetlbfs is ready for them.

Acked-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:19 -07:00
Andi Kleen
a551643895 hugetlb: modular state for hugetlb page size
The goal of this patchset is to support multiple hugetlb page sizes.  This
is achieved by introducing a new struct hstate structure, which
encapsulates the important hugetlb state and constants (e.g. huge page
size, number of huge pages currently allocated, etc).

The hstate structure is then passed around to the code which requires
these fields; callers do the right thing regardless of the exact hstate
they are operating on.

This patch adds the hstate structure, with a single global instance of it
(default_hstate), and does the basic work of converting hugetlb to use the
hstate.

Future patches will add more hstate structures to allow for different
hugetlbfs mounts to have different page sizes.
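
A rough sketch of the shape of this state (field names illustrative,
not necessarily those of the exact patch):

	struct hstate {
		unsigned int order;		/* huge page size is PAGE_SIZE << order */
		unsigned long mask;		/* alignment mask for that size */
		unsigned long nr_huge_pages;	/* currently allocated */
		unsigned long free_huge_pages;	/* on the free lists */
		/* ... per-node counts, overcommit limits, etc. */
	};

	static struct hstate default_hstate;	/* the single global instance */

	static inline unsigned long huge_page_size(struct hstate *h)
	{
		return (unsigned long)PAGE_SIZE << h->order;
	}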

[akpm@linux-foundation.org: coding-style fixes]
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:17 -07:00
Jan Beulich
42b7772812 mm: remove double indirection on tlb parameter to free_pgd_range() & Co
The double indirection here is not needed anywhere and is hence (at
least) confusing.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:15 -07:00
Benjamin Herrenschmidt
a1f242ff46 powerpc ioremap_prot
This adds ioremap_prot() and pte_pgprot() so that one can extract the
protection bits from a PTE and pass them to ioremap_prot() (in order to
support ptrace of VM_IO | VM_PFNMAP mappings, as per Rik's patch).

This moves a couple of flag checks around in the ioremap implementations
of arch/powerpc.  A side effect is that it allows non-cacheable,
non-guarded mappings on ppc32, which before would always have
_PAGE_GUARDED set whenever _PAGE_NO_CACHE was.

(Standard ioremap will still set _PAGE_GUARDED, but ioremap_prot will be
capable of setting such a non-guarded mapping.)
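
An illustrative shape of the two additions (signatures approximate;
PAGE_PROT_BITS is assumed to cover the protection-relevant PTE bits):

	/* remap a physical range with caller-supplied protection bits */
	void __iomem *ioremap_prot(phys_addr_t addr, unsigned long size,
				   unsigned long flags);

	/* extract just the protection bits from an existing PTE */
	static inline pgprot_t pte_pgprot(pte_t pte)
	{
		return __pgprot(pte_val(pte) & PAGE_PROT_BITS);
	}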

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:15 -07:00
Jason Wessel
17ce452f7e kgdb, powerpc: arch specific powerpc kgdb support
This patch removes the old kgdb remnants from ARCH=powerpc and
implements the new-style arch-specific stub for the common kgdb core
interface.

It is possible to have xmon and kgdb in the same kernel, but you
cannot use both at the same time because there is only one set of
debug hooks.

The arch specific kgdb implementation saves the previous state of the
debug hooks and restores them if you unconfigure the kgdb I/O driver.
Kgdb should have no impact on a kernel that has no kgdb I/O driver
configured.

Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
2008-07-23 11:30:15 -05:00
Benjamin Herrenschmidt
8725f25acc Merge commit 'origin/master'
Manually fixed up:

	drivers/net/fs_enet/fs_enet-main.c
2008-07-22 17:12:37 +10:00
Michael Ellerman
551c3c04b4 powerpc: Use PPC_LONG_ALIGN in uaccess.h
Use the new PPC_LONG_ALIGN macro instead of passing an argument
to the asm for consistency.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:35 +10:00
Michael Ellerman
6a2a24bb75 powerpc: Add a #define for aligning to a long-sized boundary
Add a #define for aligning to a long-sized boundary. It would be nice
to use sizeof(long) for this, but that requires generating the value
with asm-offsets.c, and asm-offsets.c includes asm-compat.h and we
descend into some sort of recursive include hell.
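
A sketch of the resulting define in asm-compat.h (exact form may
differ):

	#ifdef __powerpc64__
	#define PPC_LONG_ALIGN	stringify_in_c(.balign 8)
	#else
	#define PPC_LONG_ALIGN	stringify_in_c(.balign 4)
	#endif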

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:34 +10:00
Masakazu Mokuno
059e4938f8 powerpc/ps3: Add a sub-match id to ps3_system_bus
Add a sub-match id for the ps3 system bus so that two different system
bus devices can be connected to a shared device.

Signed-off-by: Masakazu Mokuno <mokuno@sm.sony.co.jp>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:33 +10:00
Mark Nelson
4f3dd8a062 powerpc/dma: Use the struct dma_attrs in iommu code
Update iommu_alloc() to take the struct dma_attrs and pass them on to
tce_build(). This change propagates down to the tce_build functions of
all the platforms.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:32 +10:00
Christian Krafft
4795b7801b powerpc/cell: Add support for power button of future IBM cell blades
This patch adds support for the power button on future IBM cell blades.
It doesn't actually shut down the machine.  Instead it exposes an
input device /dev/input/event0 to userspace, which sends KEY_POWER
when the power button has been pressed.
haldaemon recognizes the button, so a platform-independent acpid
replacement should handle it correctly.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:32 +10:00
Laurent Vivier
588968b6b7 KVM: Add coalesced MMIO support (powerpc part)
This patch enables coalesced MMIO for the powerpc architecture.
It defines KVM_MMIO_PAGE_OFFSET and KVM_CAP_COALESCED_MMIO.
It enables the compilation of coalesced_mmio.c.

Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-07-20 12:42:31 +03:00
Kumar Gala
6cfd8990e2 powerpc: rework FSL Book-E PTE access and TLB miss
This converts the FSL Book-E PTE access and TLB miss handling to match
the recent changes to 44x that introduce support for non-atomic PTE
operations in pgtable-ppc32.h and removes write back to the PTE from
the TLB miss handlers. In addition, the DSI interrupt code no longer
tries to fixup write permission, this is left to generic code, and
_PAGE_HWWRITE is gone.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-07-16 17:57:51 -05:00
Andy Fleming
7e1cc9c55a powerpc: Fix a bunch of sparse warnings in the qe_lib
Mostly having to do with not marking things __iomem, and some failures
to use appropriate accessors to read MMIO regs.

Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-07-16 17:57:45 -05:00
Scott Wood
d49747bdfb powerpc/mpc83xx: Power Management support
Basic PM support for 83xx.  Standby is implemented as sleep.
Suspend-to-RAM is implemented as "deep sleep" (with the processor
turned off) on 831x.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-07-16 17:57:30 -05:00
Benjamin Herrenschmidt
84c3d4aaec Merge commit 'origin/master'
Manual merge of:

	arch/powerpc/Kconfig
	arch/powerpc/kernel/stacktrace.c
	arch/powerpc/mm/slice.c
	arch/ppc/kernel/smp.c
2008-07-16 11:07:59 +10:00
Ingo Molnar
1a781a777b Merge branch 'generic-ipi' into generic-ipi-for-linus
Conflicts:

	arch/powerpc/Kconfig
	arch/s390/kernel/time.c
	arch/x86/kernel/apic_32.c
	arch/x86/kernel/cpu/perfctr-watchdog.c
	arch/x86/kernel/i8259_64.c
	arch/x86/kernel/ldt.c
	arch/x86/kernel/nmi_64.c
	arch/x86/kernel/smpboot.c
	arch/x86/xen/smp.c
	include/asm-x86/hw_irq_32.h
	include/asm-x86/hw_irq_64.h
	include/asm-x86/mach-default/irq_vectors.h
	include/asm-x86/mach-voyager/irq_vectors.h
	include/asm-x86/smp.h
	kernel/Makefile

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-15 21:55:59 +02:00
Benjamin Herrenschmidt
43d2548bb2 Merge commit '85082fd7cbe3173198aac0eb5e85ab1edcc6352c' into test-build
Manual fixup of:

	arch/powerpc/Kconfig
2008-07-15 15:44:51 +10:00
Kumar Gala
585583d95c powerpc: Fix pte_update for CONFIG_PTE_64BIT and !PTE_ATOMIC_UPDATES
Because the pte is now 64 bits, the compiler was optimizing the update
to always clear the upper 32 bits of the pte.  We need to ensure the
clr mask is treated as an unsigned long long to get the proper behavior.
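
A minimal sketch of the non-atomic update path, showing why the mask
must be widened (names approximate):

	static inline unsigned long long pte_update(pte_t *p,
						    unsigned long clr,
						    unsigned long set)
	{
		unsigned long long old = pte_val(*p);

		/* Widening clr keeps "~clr" from zeroing the upper
		 * 32 bits of a 64-bit pte. */
		*p = __pte((old & ~(unsigned long long)clr) | set);
		return old;
	}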

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 15:44:02 +10:00
Michael Neuling
7c29217096 powerpc: fix giveup_vsx to save registers correctly
giveup_vsx didn't save the FPU and VMX registers.  Change it to be
like giveup_fpr/altivec, which save these registers.

Also update call sites where FPU and VMX are already saved to use the
original giveup_vsx (renamed to __giveup_vsx).

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:29:23 +10:00
Nathan Lynch
0f47331475 powerpc: Add PPC_FEATURE_PSERIES_PERFMON_COMPAT
Background from Maynard Johnson:
As of POWER6, a set of 32 common events is defined that must be
supported on all future POWER processors.  The main impetus for this
compat set is the need to support partition migration, especially from
processor P(n) to processor P(n+1), where performance software that's
running in the new partition may not be knowledgeable about processor
P(n+1).  If a performance tool determines it does not support the
physical processor, but is told (via the
PPC_FEATURE_PSERIES_PERFMON_COMPAT bit) that the processor supports
the notion of the PMU compat set, then the performance tool can
surface just those events to the user of the tool.

PPC_FEATURE_PSERIES_PERFMON_COMPAT indicates that the PMU supports at
least this basic subset of events which is compatible across POWER
processor lines.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:24:57 +10:00
Stephen Rothwell
b3fcaaa8a6 powerpc: mman.h export fixups
Commit ef3d3246a0 ("powerpc/mm: Add Strong
Access Ordering support") in the powerpc/{next,master} tree caused the
following in a powerpc allmodconfig build:

usr/include/asm/mman.h requires linux/mm.h, which does not exist in exported headers

We should not use CONFIG_PPC64 in an unprotected (by __KERNEL__)
section of an exported include file and linux/mm.h is not exported.  So
protect the whole section that is CONFIG_PPC64 with __KERNEL__ and put
the two introduced includes in there as well.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:24:53 +10:00
Benjamin Herrenschmidt
930074b6b9 Merge commit 'jwb/jwb-next'
2008-07-15 11:54:57 +10:00
Benjamin Herrenschmidt
11c2d8174e Merge commit 'origin/HEAD' into test-merge
Manual fixup of include/asm-powerpc/pgtable-ppc64.h
2008-07-14 14:29:49 +10:00
Ingo Molnar
bac0c9103b Merge branch 'tracing/ftrace' into auto-ftrace-next
2008-07-10 11:43:00 +02:00
Benjamin Herrenschmidt
1bc54c0311 powerpc: rework 4xx PTE access and TLB miss
This is some preliminary work to improve TLB management on SW loaded
TLB powerpc platforms. This introduces support for non-atomic PTE
operations in pgtable-ppc32.h and removes write back to the PTE from
the TLB miss handlers. In addition, the DSI interrupt code no longer
tries to fixup write permission, this is left to generic code, and
_PAGE_HWWRITE is gone.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2008-07-09 13:36:17 -04:00
Dave Kleikamp
ef3d3246a0 powerpc/mm: Add Strong Access Ordering support
Allow an application to enable Strong Access Ordering on specific pages of
memory on Power 7 hardware. Currently, power has a weaker memory model than
x86. Implementing a stronger memory model allows an emulator to more
efficiently translate x86 code into power code, resulting in faster code
execution.

On Power 7 hardware, storing 0b1110 in the WIMG bits of the hpte enables
strong access ordering mode for the memory page.  This patchset allows a
user to specify which pages are thus enabled by passing a new protection
bit through mmap() and mprotect().  I have defined PROT_SAO to be 0x10.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:45 +10:00
Dave Kleikamp
379070491e powerpc/mm: Add SAO Feature bit to the cputable
Add the CPU feature bit for the new Strong Access Ordering
facility of Power7

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:45 +10:00
Dave Kleikamp
aba46c5027 powerpc/mm: Define flags for Strong Access Ordering
This patch defines:

- PROT_SAO, which is passed into mmap() and mprotect() in the prot field
- VM_SAO in vma->vm_flags, and
- _PAGE_SAO, the combination of WIMG bits in the pte that enables strong
access ordering for the page.
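
Illustratively, an application would request it like any other
protection flag (PROT_SAO assumed to be 0x10 per the description above,
until libc headers export it):

	#include <sys/mman.h>

	#define PROT_SAO	0x10	/* strong access ordering */

	size_t len = 1 << 20;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_SAO,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);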

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:45 +10:00
Srinivasa Ds
e5093ff05d powerpc: Implement task_pt_regs() accessor
The task_pt_regs() macro allows access to the pt_regs of a given task.

This macro is not currently defined for the powerpc architecture, but
we need it for some upcoming utrace additions.
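
On powerpc the accessor amounts to something like the following sketch
(the saved user registers already hang off the thread struct):

	#define task_pt_regs(tsk)	((tsk)->thread.regs)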

Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:44 +10:00
Mark Nelson
3a4c6f0b15 powerpc: move device_to_mask() to dma-mapping.h
Move device_to_mask() to dma-mapping.h because we need to use it from
outside dma_64.c in a later patch.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:44 +10:00
Mark Nelson
3affedc4e1 powerpc/dma: implement new dma_*map*_attrs() interfaces
Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so
update struct dma_mapping_ops to accept a struct dma_attrs and propagate
these changes through to all users of the code (generic IOMMU and the
64bit DMA code, and the iseries and ps3 platform code).

The old dma_*map_*() interfaces are reimplemented as calls to the
corresponding new interfaces.
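
Illustratively, each old entry point becomes a thin wrapper that passes
NULL attributes through to its new *_attrs counterpart, e.g.:

	static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
						size_t size,
						enum dma_data_direction dir)
	{
		/* no attributes: behave exactly as before */
		return dma_map_single_attrs(dev, ptr, size, dir, NULL);
	}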

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:43 +10:00
Mark Nelson
c8692362db powerpc/dma: Add struct iommu_table argument to iommu_map_sg()
Make iommu_map_sg take a struct iommu_table. It did so before commit
740c3ce667 (iommu sg merging: ppc: make
iommu respect the segment size limits).

This stops the function from looking in archdata.dma_data for the iommu
table, because in the future it will be called with a device that has
no table there.

This also has the nice side effect of making iommu_map_sg() match the
other map functions.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:43 +10:00
Maxim Shchetynin
fabb657005 powerpc/spufs: add atomic busy_spus counter to struct cbe_spu_info
As the nr_active counter also includes spus waiting for syscalls to
return, we need a separate counter that only counts spus currently
running on the spu side.  This counter shall be used by a cpufreq
governor that selects a frequency based on the number of running spus.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:42 +10:00
David Gibson
86df864249 Correct hash flushing from huge_ptep_set_wrprotect()
As Andy Whitcroft recently pointed out, the current powerpc version of
huge_ptep_set_wrprotect() has a bug.  It just calls ptep_set_wrprotect()
which in turn calls pte_update() then hpte_need_flush() with the 'huge'
argument set to 0.  This will cause hpte_need_flush() to flush the wrong
hash entries (if any).  Andy's fix for this is already in the powerpc
tree as commit 016b33c495.

I have confirmed this is a real bug, not masked by some other
synchronization, with a new testcase for libhugetlbfs.  A process write
a (MAP_PRIVATE) hugepage mapping, fork(), then alter the mapping and
have the child incorrectly see the second write.

Therefore, this should be fixed for 2.6.26, and for the stable tree.
Here is a suitable patch for 2.6.26, which I think will also be suitable
for the stable tree (neither of the headers in question has been changed
much recently).

It is cut down slightly from Andy's original version, in that it does
not include a 32-bit version of huge_ptep_set_wrprotect().  Currently,
hugepages are not supported on any 32-bit powerpc platform.  When they
are, a suitable 32-bit version can be added - the only 32-bit hardware
which supports hugepages does not use the conventional hashtable MMU and
so will have different needs anyway.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-08 09:27:58 -07:00
Nathan Fontenot
3c3f67eafa powerpc/pseries: Update the device tree correctly for drconf memory add/remove
This updates the device tree manipulation routines so that memory
add/remove of lmbs represented under the
ibm,dynamic-reconfiguration-memory node of the device tree invokes the
hotplug notifier chain.

This change is needed because of the change in the way memory is
represented under the ibm,dynamic-reconfiguration-memory node.  All lmbs
are described in the ibm,dynamic-memory property instead of having a
separate node for each lmb as in previous device tree layouts.  This
requires the update_node() routine to check for updates to the
ibm,dynamic-memory property and invoke the hotplug notifier chain.

This also updates the pseries hotplug notifier to be able to gather information
for lmbs represented under the ibm,dynamic-reconfiguration-memory node and
have the lmbs added/removed.

Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:16 +10:00
Michael Neuling
138fc1ee06 powerpc: Remove old dump_task_* functions
Since Roland's ptrace cleanup starting with commit
f65255e8d5 ("[POWERPC] Use user_regset
accessors for FP regs"), the dump_task_* functions are no longer being
used.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:13 +10:00
Kumar Gala
2d1b202762 powerpc: Fixup lwsync at runtime
To allow for a single kernel image on e500 v1/v2/mc we need to fix up
lwsync at runtime.  On e500v1/v2 lwsync causes an illegal opcode
exception, so we need to patch up the code.  We default to 'sync' since
that is always safe, and if the cpu is capable we replace 'sync' with
'lwsync'.

We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is
needed.  This flag could be moved elsewhere since we don't really use it
for the normal CPU_FTR purpose.

Finally we only store the relative offset in the fixup section to keep it
as small as possible rather than using a full fixup_entry.
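
A sketch of the runtime pass this implies (names approximate): walk the
fixup section of relative offsets and patch each site on capable CPUs.

	void do_lwsync_fixups(unsigned long value, void *fixup_start,
			      void *fixup_end)
	{
		long *start = fixup_start, *end = fixup_end;
		unsigned int *dest;

		if (!(value & CPU_FTR_LWSYNC))
			return;			/* keep the safe 'sync' */

		for (; start < end; start++) {
			dest = (void *)start + *start;	/* relative offset */
			patch_instruction(dest, PPC_LWSYNC_INSTR);
		}
	}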

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:10 +10:00
Michael Neuling
e17a2565bf powerpc: Fix compile warning in init_thread
Currently we get this warning:
arch/powerpc/kernel/init_task.c:33: warning: missing braces around initializer
arch/powerpc/kernel/init_task.c:33: warning: (near initialization for 'init_task.thread.fpr[0]')

This fixes it.
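
With VSX the fpr field becomes a two-dimensional array, so the fix is
presumably the nested-brace form of the initializer, along these lines:

	/* fpr is now fpr[32][TS_FPRWIDTH]; one level of braces per dimension */
	.fpr = {{0}},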

Noticed by Stephen Rothwell.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:08 +10:00
Tony Breeds
db7f37de2c powerpc: Fix building of arch/powerpc/mm/mem.o when MEMORY_HOTPLUG=y and SPARSEMEM=n
Currently the kernel fails to build with the above config options with:
  CC      arch/powerpc/mm/mem.o
arch/powerpc/mm/mem.c: In function 'arch_add_memory':
arch/powerpc/mm/mem.c:130: error: implicit declaration of function 'create_section_mapping'

This explicitly includes asm/sparsemem.h in arch/powerpc/mm/mem.c and
moves the guards in include/asm-powerpc/sparsemem.h to protect the
SPARSEMEM specific portions only.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-03 16:58:07 +10:00
Michael Neuling
f3e909c275 powerpc: Update for VSX core file and ptrace
This correctly hooks the VSX dump into Roland McGrath's core file
infrastructure.  It adds the VSX dump information as an additional elf
note in the core file (after talking more to the tool chain/gdb guys).
This also ensures the formats are consistent between signals, ptrace
and core files.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 14:47:09 +10:00
Eric B Munson
a91a03ee31 powerpc: Keep 3 high personality bytes across exec
Currently when a 32-bit process is exec'd on a powerpc 64-bit host, the
value in the top three bytes of the personality is clobbered.  This
patch adds a check in the SET_PERSONALITY macro that will carry all the
values in the top three bytes across the exec.

These three bytes currently carry flags to disable address randomisation,
limit the address space, force zeroing of an mmapped page, etc.  Should an
application set any of these bits, they will be maintained and honoured in
a homogeneous environment but discarded and ignored in a heterogeneous
one.  So if an application requires all mmapped pages to be initialised
to zero and a wrapper is used to set up the personality and exec the
target, these flags will remain set in an all-32 or all-64 bit
environment, but they will be lost in the exec on a mixed 32/64 bit
environment.  Losing these bits means that the same application would
behave differently in different environments.  Tested on a POWER5+
machine with a 64-bit kernel and a mixed 64/32 bit user space.
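
A sketch of the idea: keep everything outside the low personality byte
(PER_MASK) when setting the 32-bit personality (exact macro may differ):

	#define SET_PERSONALITY(ex, ibcs2)				\
		set_personality(PER_LINUX32 |				\
				(current->personality & ~PER_MASK))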

Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 14:47:02 +10:00
Bart Van Assche
89b5810f6e powerpc: Make sure that include/asm-powerpc/spinlock.h does not trigger compilation warnings
When compiling kernel modules for ppc that include <linux/spinlock.h>,
gcc prints a warning message every time it encounters a function
declaration where the inline keyword appears after the return type.
This makes sure that the order of the inline keyword and the return
type is as gcc expects it.  Additionally, the __inline__ keyword is
replaced by inline, as checkpatch expects.
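
Illustratively, the pattern being fixed:

	static int __inline__ foo(void);	/* warns: inline after return type */
	static inline int foo(void);		/* fixed: inline first, as gcc expects */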

Signed-off-by: Bart Van Assche <bart.vanassche@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:29:00 +10:00
Andy Whitcroft
016b33c495 powerpc: Add 64 bit version of huge_ptep_set_wrprotect
The implementation of huge_ptep_set_wrprotect() directly calls
ptep_set_wrprotect() to mark a hugepte write protected.  However this
call is not appropriate on ppc64 kernels as this is a small page only
implementation.  This can lead to the hash not being flushed correctly
when a mapping is being converted to COW, allowing processes to continue
using the original copy.

Currently huge_ptep_set_wrprotect() unconditionally calls
ptep_set_wrprotect().  This is fine on ppc32 kernels as this call is
generic.  On 64 bit this is implemented as:

	pte_update(mm, addr, ptep, _PAGE_RW, 0);

On ppc64 this last parameter is the page size and is passed directly on
to hpte_need_flush():

	hpte_need_flush(mm, addr, ptep, old, huge);

And this directly affects the page size we pass to flush_hash_page():

	flush_hash_page(vaddr, rpte, psize, ssize, 0);

As this changes the way the hash is calculated, we will flush the wrong
pages, potentially leaving live hash entries for the original page.

Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific
headers.
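
A sketch of the 64-bit variant: the same logic as ptep_set_wrprotect(),
except that pte_update() is told this is a huge mapping, so
hpte_need_flush() sees the right page size (details approximate):

	static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
						   unsigned long addr,
						   pte_t *ptep)
	{
		unsigned long old;

		if ((pte_val(*ptep) & _PAGE_RW) == 0)
			return;		/* already read-only */
		old = pte_update(mm, addr, ptep, _PAGE_RW, 1);	/* huge = 1 */
	}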

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:56 +10:00
Michael Neuling
ce48b21007 powerpc: Add VSX context save/restore, ptrace and signal support
This patch extends the floating point save and restore code to use the
VSX load/stores when VSX is available.  This will make FP context
save/restore marginally slower on FP-only code, when VSX is available,
as it has to load/store 128 bits rather than just 64 bits.

Mixing FP, VMX and VSX code will see consistent architected state.

The signals interface is extended to enable access to VSR 0-31
doubleword 1 after discussions with tool chain maintainers.  Backward
compatibility is maintained.

The ptrace interface is also extended to allow access to VSR 0-31 full
registers.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:50 +10:00
Michael Neuling
72ffff5b17 powerpc: Add VSX assembler code macros
This adds the macros for the VSX load/store instructions, as most
binutils are not going to support these for a while.

Also add VSX register save/restore macros and vsr[0-63] register definitions.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:48 +10:00
Michael Neuling
b962ce9d26 powerpc: Add VSX CPU feature
Add a VSX CPU feature.  Also add code to detect if VSX is available
from the device tree.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:47 +10:00