The context switch code in the kernel issues a dummy stwcx. to clear the
reservation, as recommended by the architecture. However, some processors
can have issues if this stwcx. to address A occurs while the reservation
is already held to a different address B. To avoid this problem, the dummy
stwcx. needs to be paired with a dummy lwarx to the same address.
This adds the dummy lwarx, and creates a cpu feature bit to indicate
which cpus are affected. Tested on mpc8641_hpcn_defconfig in
arch/powerpc; build tested in arch/ppc.
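A minimal sketch of the pairing (the feature-bit name and the helper are
illustrative, not necessarily what the patch itself uses; scratch points at a
per-cpu dummy word):

  static inline void clear_reservation(unsigned long *scratch)
  {
          unsigned long tmp = 0;

          /* On affected CPUs, take a reservation on the scratch word first
           * so the dummy stwcx. below targets the same address. */
          if (cpu_has_feature(CPU_FTR_NEED_PAIRED_STWCX))
                  asm volatile("lwarx %0,0,%1" : "=&r" (tmp) : "r" (scratch));

          /* The dummy stwcx. then clears any reservation left behind. */
          asm volatile("stwcx. %0,0,%1" : : "r" (tmp), "r" (scratch)
                       : "cr0", "memory");
  }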
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
PA6T has a bug where the slbie instruction does not honor the large
segment bit. As a result, we have to always use slbia when switching
context.
We don't have to worry about changing the slbie's during fault processing,
since they should never be replacing one VSID with another using the
same ESID. I.e. there's no risk of inserting duplicate entries due to a
failed slbie of the old entry. So as long as we clear it out on context
switch we should be fine.
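A rough sketch of the context-switch path after this change (the feature-bit
name is an assumption here, and esid_data stands for whatever ESID would
otherwise be invalidated):

  if (cpu_has_feature(CPU_FTR_NO_SLBIE_B)) {
          /* PA6T: slbie ignores the large-segment bit, so flush the
           * whole SLB instead of invalidating individual entries. */
          asm volatile("slbia; isync" : : : "memory");
  } else {
          asm volatile("slbie %0" : : "r" (esid_data) : "memory");
  }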
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This makes the kernel use 1TB segments for all kernel mappings and for
user addresses of 1TB and above, on machines which support them
(currently POWER5+, POWER6 and PA6T).
We detect that the machine supports 1TB segments by looking at the
ibm,processor-segment-sizes property in the device tree.
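Conceptually the check is a scan of that property for a 1TB (2^40-byte)
entry; a sketch using the generic device-tree accessors, assuming the
property is a list of segment-size shifts:

  static int __init cpu_has_1tb_segments(struct device_node *cpu)
  {
          const u32 *prop;
          int i, len;

          prop = of_get_property(cpu, "ibm,processor-segment-sizes", &len);
          if (!prop)
                  return 0;
          for (i = 0; i < len / 4; i++)
                  if (prop[i] == 0x28)    /* shift of 40 => 1TB segments */
                          return 1;
          return 0;
  }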
We don't currently use 1TB segments for user addresses < 1T, since
that would effectively prevent 32-bit processes from using huge pages
unless we also had a way to revert to using 256MB segments. That
would be possible but would involve extra complications (such as
keeping track of which segment size was used when HPTEs were inserted)
and is not addressed here.
Parts of this patch were originally written by Ben Herrenschmidt.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Some IBM machines supply a "logical" PVR (processor version register)
value in the device tree in the cpu nodes rather than the real PVR.
This is used for instance to indicate that the processors in a POWER6
partition have been configured by the hypervisor to run in POWER5+
mode rather than POWER6 mode. To cope with this, we call identify_cpu
a second time with the logical PVR value (the first call is with the
real PVR value in the very early setup code).
However, POWER5+ machines can also supply a logical PVR value, and use
the same value (the value that indicates a v2.04 architecture
compliant processor). This causes problems for code that uses the
performance monitor (such as oprofile), because the PMU registers are
different in POWER6 (even in POWER5+ mode) from the real POWER5+.
This change works around this problem by taking out the PMU
information from the cputable entries for the logical PVR values, and
changing identify_cpu so that the second call to it won't overwrite
the PMU information that was established by the first call (the one
with the real PVR), but does update the other fields. Specifically,
if the cputable entry for the logical PVR value has num_pmcs == 0,
none of the PMU-related fields get used.
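In effect the second pass now does something like the following (the field
names follow cputable.h; the helper itself is only illustrative):

  static void merge_logical_cpu_spec(struct cpu_spec *t, const struct cpu_spec *s)
  {
          struct cpu_spec old = *t;

          *t = *s;                        /* take the logical-PVR entry... */
          if (s->num_pmcs == 0) {         /* ...but keep the PMU description */
                  t->num_pmcs = old.num_pmcs;             /* from the earlier, */
                  t->pmc_type = old.pmc_type;             /* real-PVR pass.    */
                  t->oprofile_cpu_type = old.oprofile_cpu_type;
                  t->oprofile_type = old.oprofile_type;
          }
  }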
So that we can create a mixed cputable entry, we now make cur_cpu_spec
point to a single static struct cpu_spec, and copy stuff from
cpu_specs[i] into it. This has the side-effect that we can now make
cpu_specs[] be initdata.
Ultimately it would be good to move the PMU-related fields out to a
separate structure, pointed to by the cputable entries, and change
identify_cpu so that it saves the PMU info pointer, copies the whole
structure, and restores the PMU info pointer, rather than identify_cpu
having to list all the fields that are *not* PMU-related.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The 8272 (and presumably other PCI PQ2 chips) appear to have the
same issue as the 83xx regarding PCI streaming DMA.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Make it so that SPE support can be determined at runtime. This is similar
to how we handle AltiVec. This allows us to have SPE support built in and
work on processors with and without SPE.
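The runtime check then mirrors the existing AltiVec one, e.g. in the
state-saving paths (a sketch, not the exact hunk):

  #ifdef CONFIG_SPE
          if (cpu_has_feature(CPU_FTR_SPE))
                  giveup_spe(prev);       /* only touch SPE state if the core has it */
  #endif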
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
The 750 CPU_FTR macros have quite a bit of duplication in them. Consolidate
them to use CPU_FTRS_750 and only list the unique features for derivatives.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently the powerpc kernel has a 64-bit only feature,
COHERENT_ICACHE, used for those CPUs which maintain icache/dcache
coherency in hardware (POWER5, essentially). It also has a feature,
SPLIT_ID_CACHE, which is used on CPUs which have separate i- and
d-caches, which is to say everything except the 601 and Freescale E200.
In nearly all the places we check the SPLIT_ID_CACHE, what we actually
care about is whether the i and d-caches are coherent (which they will
be, trivially, if they're the same cache).
This tries to clarify the situation a little. The COHERENT_ICACHE
feature becomes available on 32-bit and is set for all CPUs where the
i- and d-caches are effectively coherent, whether this is due to special
logic (POWER5) or because they're unified. We check this, instead of
SPLIT_ID_CACHE, nearly everywhere.
The SPLIT_ID_CACHE feature itself is replaced by a UNIFIED_ID_CACHE
feature with reversed sense, set only on 601 and Freescale E200. In
the two places (one Freescale BookE specific) where we really care
whether it's a unified cache, not whether they're coherent, we check
this feature. The CPUs with unified cache are so few, we could
consider replacing this feature bit with explicit checks against the
PVR.
This will make unifying the 32-bit and 64-bit cache flush code a
little more straightforward.
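The typical call-site pattern after this change looks like the following
(a sketch; the real flush loops are in assembly and elided here):

  void __flush_icache_range(unsigned long start, unsigned long stop)
  {
          if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
                  return;         /* coherent in HW, or a single unified cache */
          /* ... dcbst each line, sync, icbi each line, sync, isync ... */
  }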
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove CPU_FTR_NEED_COHERENT for MPC7448 (and single-core MPC86xx).
This prevents needlessly setting M=1 when not SMP.
Signed-off-by: James.Yang <James.Yang@freescale.com>
Acked-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Oprofile support for PA6T, kernel side.
Also rename the PA6T_SPRN.* defines to SPRN_PA6T.*.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The PowerPC 750CL has high BATs. This patch adds a CPU_FTRS_750CL entry
that includes them. Without it, the original firmware mappings in the high
BATs aren't cleared and continue to override the Linux translations.
It also adds CPU_FTR_COMMON to CPU_FTRS_750GX for completeness.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add cputable entries for which type of PMC implementation the processor
has.
I've only filled in the current 64-bit processors; the unfilled default
value gives the same behaviour as before, so the rest can be done over
time as needed.
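The new fields end up looking roughly like this in cputable.h and the
entries (names and values shown for illustration):

  enum powerpc_pmc_type {
          PPC_PMC_DEFAULT = 0,    /* unfilled: behaves as before */
          PPC_PMC_IBM     = 1,
          PPC_PMC_PA6T    = 2,
  };

  /* e.g. in a 64-bit cpu_spec initializer: */
          .num_pmcs       = 6,
          .pmc_type       = PPC_PMC_IBM,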
Also tidy up the dummy_perf implementation a bit, aggregating it into
one function with ifdefs instead of several.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
POWER6 adds a new SPR, the data stream control register (DSCR). It can
be used to adjust how aggressive the prefetch mechanisms are.
It's possible we may want to context switch this, but for now just export
it to userspace via sysfs so we can adjust it.
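Under the hood it is just another SPR, so the sysfs file reduces to an
mfspr/mtspr pair on SPRN_DSCR (sketch; new_value is whatever the user wrote):

  unsigned long dscr = mfspr(SPRN_DSCR); /* current prefetch setting */
  mtspr(SPRN_DSCR, new_value);           /* make prefetch more or less aggressive */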
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The e300c2 has no FPU. Its MSR[FP] is grounded to zero. If an attempt
is made to execute a floating point instruction (including floating-point
load, store, or move instructions), the e300c2 takes a floating-point
unavailable interrupt.
This patch adds support for FP emulation on the e300c2 by declaring a
new CPU_FTR_FP_TAKES_FPUNAVAIL, where FP unavail interrupts are
intercepted and redirected to the ProgramCheck exception path for
correct emulation handling.
(If we run out of CPU_FTR bits we could look to reclaim this bit by adding
support to test the cpu_user_features for PPC_FEATURE_HAS_FPU instead)
It adds a nop to the exception path for 32-bit processors with an FPU.
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Remove CPU_FTR_16M_PAGE from the cpu features mask at runtime on iSeries.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
It saves #ifdef'ing in callers if we at least define the 64-bit cpu
features for 32-bit also.
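In other words, on 32-bit the 64-bit-only bits simply become 0 so a
cpu_has_feature() test compiles away; CPU_FTR_EXAMPLE below is a made-up name:

  #ifdef __powerpc64__
  #define CPU_FTR_EXAMPLE         0x0000100000000000UL
  #else
  #define CPU_FTR_EXAMPLE         0       /* never set on 32-bit */
  #endif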
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This adds code to look at the properties firmware puts in the device
tree to determine what compatibility mode the partition is in on
POWER6 machines, and set the ELF aux vector AT_HWCAP and AT_PLATFORM
entries appropriately.
Specifically, we look at the cpu-version property in the cpu node(s).
If that contains a "logical" PVR value (of the form 0x0f00000x), we
call identify_cpu again with this PVR value. A value of 0x0f000001
indicates the partition is in POWER5+ compatibility mode, and a value
of 0x0f000002 indicates "POWER6 architected" mode, with various
extensions disabled. We also look for various other properties:
ibm,dfp, ibm,purr and ibm,spurr.
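The check itself is small; a sketch using the generic device-tree accessors
(cpu_node and the exact identify_cpu arguments are assumptions here):

  const u32 *lpvr = of_get_property(cpu_node, "cpu-version", NULL);

  if (lpvr && (*lpvr & 0xff000000) == 0x0f000000)
          /* 0x0f000001: POWER5+ mode, 0x0f000002: POWER6 architected mode */
          identify_cpu(0, *lpvr);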
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add PPU event-based and cycle-based profiling support to Oprofile for Cell.
Oprofile is expected to collect data on all CPUs simultaneously.
However, there is one set of performance counters per node. There are
two hardware threads or virtual CPUs on each node. Hence, OProfile must
multiplex in time the performance counter collection on the two virtual
CPUs.
The multiplexing of the performance counters is done by a virtual
counter routine. Initially, the counters are configured to collect data
on the even CPUs in the system, one CPU per node. In order to capture
the PC for the virtual CPU when the performance counter interrupt occurs
(the specified number of events between samples has occurred), the even
processors are configured to handle the performance counter interrupts
for their node. The virtual counter routine is called via a kernel
timer after the virtual sample time. The routine stops the counters,
saves the current counts, loads the last counts for the other virtual
CPU on the node, sets interrupts to be handled by the other virtual CPU,
and restarts the counters; the virtual timer routine is then scheduled to
run again. The virtual sample time is kept relatively small to make sure
sampling occurs on both CPUs on the node with a relatively small
granularity. Whenever the counters overflow, the performance counter
interrupt is called to collect the PC for the CPU where data is being
collected.
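In outline, the virtual counter routine does something like this (the
helpers, globals and timer shown are placeholders for the real Cell PMU
accessors):

  static void cell_virtual_cntr(unsigned long data)
  {
          int prev = active_hw_thread;
          int next = prev ^ 1;            /* the other hardware thread on the node */

          cbe_disable_pm(prev);           /* stop counting */
          save_pm_counts(prev);           /* remember where this CPU got to */
          restore_pm_counts(next);        /* reload the other CPU's last counts */
          route_pm_interrupts(next);      /* PMU interrupts now taken there */
          cbe_enable_pm(next);            /* resume counting */
          active_hw_thread = next;

          mod_timer(&virt_cntr_timer, jiffies + virt_cntr_interval);
  }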
The oprofile driver relies on a firmware RTAS call to setup the debug bus
to route the desired signals to the performance counter hardware to be
counted. The RTAS call must set the routing registers appropriately in
each of the islands to pass the signals down the debug bus as well as
routing the signals from a particular island onto the bus. There is a
second firmware RTAS call to reset the debug bus to the non-pass-through
state when the counters are not in use.
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The Cell CPU timebase has an erratum. When reading the entire 64 bits
of the timebase with one mftb instruction, there is a window of a handful
of cycles during which one might read a value whose low-order 32 bits have
already wrapped to 0x00000000 but whose high-order bits have not yet been
incremented by one. This fixes it by reading the timebase again until the
low-order 32 bits are no longer 0. That might introduce occasional latencies
if mftb is hit at just the wrong time, but no more than 70ns on a Cell
blade, which was considered acceptable.
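The workaround boils down to the following (a sketch; the real code sits
behind the feature-fixed timebase read path):

  static inline u64 get_tb_cell_safe(void)
  {
          u64 tb;

          do {
                  tb = mftb();                    /* one full 64-bit read */
          } while (!(tb & 0xffffffffULL));        /* low word 0: may be torn, retry */

          return tb;
  }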
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch reworks the feature fixup mechanism so the vDSOs can be fixed up.
The main issue was that the construct:
.long label (or .llong on 64 bits)
will not work in the case of a shared library like the vdso. It will
generate an empty placeholder in the fixup table along with a reloc,
which is not something we can deal with in the vdso.
The idea here (thanks Alan Modra !) is to instead use something like:
1:
.long label - 1b
That is, the feature fixup tables no longer contain addresses of bits of
code to patch, but offsets of such code from the fixup table entry
itself. That is properly resolved by ld when building the .so's. I've
modified the fixup mechanism generically to use that method for the rest
of the kernel as well.
Another trick is that the 32-bit vDSO included in the 64-bit kernel
needs to have a table in the 64-bit format. However, gas does not
support 32-bit code with a statement of the form:
.llong label - 1b (Or even just .llong label)
That is, it cannot emit the right fixup/relocation for the linker to use
to assign a 32-bit address to an .llong field. Thus, in the specific
case of the 32-bit vDSO built as part of the 64-bit kernel, we are
using a modified macro that generates:
.long 0xffffffff
.llong label - 1b
Note that this assumes the value is negative, which is enforced by
the .lds (those offsets are always negative as the .text is always
before the fixup table, and gas doesn't support emitting the reloc the
other way around).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch adds some macros that can be used with an explicit label in
order to nest cpu features. This should be used very carefully, but is
necessary for the upcoming Cell TB fixup.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There are currently two versions of the functions for applying the
feature fixups, one for CPU features and one for firmware features. In
addition, they are both in assembly and with separate implementations
for 32 and 64 bits. identify_cpu() is also implemented in assembly and
separately for 32 and 64 bits.
This patch replaces them with a pair of C functions. The call sites are
slightly moved on ppc64 as well to be called from C instead of from
assembly, though it's a very small change, and thus shouldn't cause any
problem.
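The C version is essentially a loop over the offset-based fixup table
introduced by the previous patch (structure and field names here are
illustrative):

  struct fixup_entry {
          unsigned long   mask;           /* feature bits this entry cares about */
          unsigned long   value;          /* required value to keep the code */
          long            start_off;      /* offsets relative to the entry itself */
          long            end_off;
  };

  static void do_feature_fixups(unsigned long value, void *start, void *end)
  {
          struct fixup_entry *fcur = start, *fend = end;

          for (; fcur < fend; fcur++) {
                  unsigned int *p, *pend;

                  if ((value & fcur->mask) == fcur->value)
                          continue;       /* feature present: leave code alone */

                  /* resolve the self-relative offsets stored by the macros */
                  p    = (void *)((unsigned long)&fcur->start_off + fcur->start_off);
                  pend = (void *)((unsigned long)&fcur->end_off   + fcur->end_off);

                  while (p < pend)
                          *p++ = 0x60000000;      /* overwrite with nops */
                  /* flush_icache_range() over the patched range omitted */
          }
  }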
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The performance monitor implementation (including CTRL register behaviour)
is included in PPC Architecture v2 only as an example; it's not truly part
of the base. The feature bit is also somewhat misleading, but I'll leave
that be for now: it is used not to indicate the presence of the register,
but to determine whether the register contains the runlatch bit used for
idle reporting by the performance monitor. On alternative implementations
the register might still exist but the bit might have a different meaning
(or no meaning at all).
For now, split it off and don't include it in CPU_FTR_PPCAS_ARCH_V2_BASE.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Cleanup CPU inits a bit more, Geoff Levand already did some earlier.
* Move CPU state save to cpu_setup, since cpu_setup is only ever done
on cpu 0 on 64-bit and save is never done more than once.
* Rename __restore_cpu_setup to __restore_cpu_ppc970 and add
function pointers to the cputable to use instead. Powermac always
has 970 so no need to check there.
* Rename __970_cpu_preinit to __cpu_preinit_ppc970 and check PVR before
calling it instead of in it, it's too early to use cputable.
* Rename pSeries_secondary_smp_init to generic_secondary_smp_init since
everyone but powermac and iSeries use it.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (139 commits)
[POWERPC] re-enable OProfile for iSeries, using timer interrupt
[POWERPC] support ibm,extended-*-frequency properties
[POWERPC] Extra sanity check in EEH code
[POWERPC] Dont look for class-code in pci children
[POWERPC] Fix mdelay badness on shared processor partitions
[POWERPC] disable floating point exceptions for init
[POWERPC] Unify ppc syscall tables
[POWERPC] mpic: add support for serial mode interrupts
[POWERPC] pseries: Print PCI slot location code on failure
[POWERPC] spufs: one more fix for 64k pages
[POWERPC] spufs: fail spu_create with invalid flags
[POWERPC] spufs: clear class2 interrupt status before wakeup
[POWERPC] spufs: fix Makefile for "make clean"
[POWERPC] spufs: remove stop_code from struct spu
[POWERPC] spufs: fix spu irq affinity setting
[POWERPC] spufs: further abstract priv1 register access
[POWERPC] spufs: split the Cell BE support into generic and platform dependant parts
[POWERPC] spufs: dont try to access SPE channel 1 count
[POWERPC] spufs: use kzalloc in create_spu
[POWERPC] spufs: fix initial state of wbox file
...
Manually resolved conflicts in:
drivers/net/phy/Makefile
include/asm-powerpc/spu.h
Reflect the fact that the Cell Broadband Engine supports 64k
pages by adding the bit to the CPU features.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove some stale support for POWER3/POWER4/970 on 32-bit kernels.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds the PowerPC part of the code to allow processes to change
their endian mode via prctl.
This also extends the alignment exception handler to be able to fix up
alignment exceptions that occur in little-endian mode, both for
"PowerPC" little-endian and true little-endian.
We always enter signal handlers in big-endian mode -- the support for
little-endian mode does not amount to the creation of a little-endian
user/kernel ABI. If the signal handler returns, the endian mode is
restored to what it was when the signal was delivered.
We have two new kernel CPU feature bits, one for PPC little-endian and
one for true little-endian. Most of the classic 32-bit processors
support PPC little-endian, and this is reflected in the CPU feature
table. There are two corresponding feature bits reported to userland
in the AT_HWCAP aux vector entry.
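From userspace the switch is a single prctl() call, using the generic
constants from <linux/prctl.h> (PR_ENDIAN_PPC_LITTLE selects "PowerPC"
little-endian instead):

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          if (prctl(PR_SET_ENDIAN, PR_ENDIAN_LITTLE, 0, 0, 0) == -1)
                  perror("PR_SET_ENDIAN");        /* e.g. CPU feature bit not set */
          return 0;
  }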
This is based on an earlier patch by Anton Blanchard.
Signed-off-by: Paul Mackerras <paulus@samba.org>
POWER6 moves some of the MMCRA bits and also requires some bits to be
cleared on each PMU interrupt.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add a cputable entry for the POWER6 processor.
The SIHV and SIPR bits in the mmcra have moved in POWER6, so disable
support for that until oprofile is fixed.
Also tell firmware that we know about POWER6.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Christoph noticed that sparse warned about all the enum tags in cputable.h
that had values that required them to be of type long. (Enum tags are ints
according to the standard.)
This patch attempts to fix them in the least intrusive way possible by
turning them all into #defines except for the 32 bit CPU_FTRS_POSSIBLE and
CPU_FTRS_ALWAYS which are hard to construct that way. This works because
these last two contain no bits above 2^31.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove redundant whitespace in include/asm-powerpc/cputable.h
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch makes userland aware of the icache snoop capability of the
POWER5 (and possibly others in the future) and of SMT capabilities.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This implements accurate task and cpu time accounting for 64-bit
powerpc kernels. Instead of accounting a whole jiffy of time to a
task on a timer interrupt because that task happened to be running at
the time, we now account time in units of timebase ticks according to
the actual time spent by the task in user mode and kernel mode. We
also count the time spent processing hardware and software interrupts
accurately. This is conditional on CONFIG_VIRT_CPU_ACCOUNTING. If
that is not set, we do tick-based approximate accounting as before.
To get this accurate information, we read either the PURR (processor
utilization of resources register) on POWER5 machines, or the timebase
on other machines on
* each entry to the kernel from usermode
* each exit to usermode
* transitions between process context, hard irq context and soft irq
context in kernel mode
* context switches.
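At each of those points the bookkeeping is conceptually just the following
(a sketch; per_cpu_start_stamp is an illustrative stand-in for the real
per-cpu state):

  u64 now = cpu_has_feature(CPU_FTR_PURR) ? mfspr(SPRN_PURR) : mftb();
  u64 delta = now - per_cpu_start_stamp;  /* ticks since the last transition */

  per_cpu_start_stamp = now;
  /* delta (in timebase or PURR ticks) is then fed to account_user_time()
   * or account_system_time(), depending on which side we are leaving. */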
On POWER5 systems with shared-processor logical partitioning we also
read both the PURR and the timebase at each timer interrupt and
context switch in order to determine how much time has been taken by
the hypervisor to run other partitions ("steal" time). Unfortunately,
since we need values of the PURR on both threads at the same time to
accurately calculate the steal time, and since we can only calculate
steal time on a per-core basis, the apportioning of the steal time
between idle time (time which we ceded to the hypervisor in the idle
loop) and actual stolen time is somewhat approximate at the moment.
This is all based quite heavily on what s390 does, and it uses the
generic interfaces that were added by the s390 developers,
i.e. account_system_time(), account_user_time(), etc.
This patch doesn't add any new interfaces between the kernel and
userspace, and doesn't change the units in which time is reported to
userspace by things such as /proc/stat, /proc/<pid>/stat, getrusage(),
times(), etc. Internally the various task and cpu times are stored in
timebase units, but they are converted to USER_HZ units (1/100th of a
second) when reported to userspace. Some precision is therefore lost
but there should not be any accumulating error, since the internal
accumulation is at full precision.
Signed-off-by: Paul Mackerras <paulus@samba.org>
On the 83xx platform, to ensure that PCI inbound memory is handled properly,
we have to turn on coherency for all pages in the MMU. Otherwise we see
corruption if inbound "prefetching/streaming" is enabled on the PCI controller.
Signed-off-by: Randy Vinson <rvinson@mvista.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
In 2.6.15-git6 a change was committed in the oprofile support in
the powerpc architecture. It introduced the powerpc_oprofile_type
which contains the define G4. This causes a name clash with the
existing wacom usb tablet driver.
CC [M] drivers/usb/input/wacom.o
drivers/usb/input/wacom.c:98: error: conflicting types for `G4'
include/asm/cputable.h:37: error: previous declaration of `G4'
CC [M] drivers/usb/mon/mon_text.o
make[3]: *** [drivers/usb/input/wacom.o] Error 1
make[2]: *** [drivers/usb/input] Error 2
The elements of an enum declared at global scope are effectively
global identifiers themselves. As such we need to ensure the names
are unique. This patch updates the later oprofile support to use
unique names.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The glibc folks want to use AT_PLATFORM to select between possible
alternative versions of shared libraries. This commit makes the kernel
supply an AT_PLATFORM string that indicates what class of processor
we are running on. Processors with the same set of user-level
instructions and roughly the same instruction scheduling characteristics
are given the same AT_PLATFORM value; for example, 821, 823 and 860
are all reported as "ppc823", and 7447, 7447A, 7448, 7450, 7451, 7455
are all called "ppc7450".
The intention is that the AT_PLATFORM values match the values that
gcc accepts for the -mcpu= option. For values which are numeric
(e.g. -mcpu=750), "ppc" has been prepended.
This also adds a PPC_FEATURE_BOOKE bit to the AT_HWCAP value and sets
it for the 440 family and the Freescale 85xx family.
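From a user program the new values show up in the aux vector; with a
reasonably recent glibc they can be inspected like this (getauxval() is
assumed to be available):

  #include <stdio.h>
  #include <sys/auxv.h>

  int main(void)
  {
          printf("platform: %s\n", (const char *)getauxval(AT_PLATFORM));
          /* PPC_FEATURE_BOOKE (from <asm/cputable.h>) is one of these bits */
          printf("hwcap:    0x%lx\n", (unsigned long)getauxval(AT_HWCAP));
          return 0;
  }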
Signed-off-by: Paul Mackerras <paulus@samba.org>
My recent changes to oprofile broke it when built as a module. Fix it by
using an enum instead of a function pointer. This way we still retain
the oprofile configuration in the cputable.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch enables support for the pause(0) power management state
for the Cell Broadband Processor, which is important for power-efficient
operation. The pervasive infrastructure will in the future enable
us to introduce more functionality specific to the Cell's
pervasive unit.
From: Maximino Aguilar <maguilar@us.ibm.com>
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
include/asm-ppc/ had #ifdef __KERNEL__ in all header files that
are not meant for use by user space; include/asm-powerpc does
not have this yet.
This patch gets us a lot closer there. There are a few cases
where I was not sure, so I left them out. I have verified
that no CONFIG_* symbols are used outside of __KERNEL__
any more and that there are no obvious compile errors when
including any of the headers in user space libraries.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Milton and I were looking at the cputable code and it looks like we can
set spurious bits on 64-bit.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch merges align.c, the result isn't quite what was in ppc64 nor
what was in ppc32 :) It should implement all the functionalities of both
though. Kumar, since you played with that in the past, I suppose you
have some test cases for verifying that it works properly before I dig
out the 601 machine ? :)
Since it's likely that I won't be able to test every scenario, code
inspection is most welcome.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is at the request of the glibc folks, who want to use these bits
to select libraries optimized for the microarchitecture and new
instructions in these processors.
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch consolidates macros used to generate assembly for
compatibility across different CPUs or configs. A new header,
asm-powerpc/asm-compat.h contains the main compatibility macros. It
uses some preprocessor magic to make the macros suitable both for use
in .S files, and in inline asm in .c files. Headers (bitops.h,
uaccess.h, atomic.h, bug.h) which had their own such compatibility
macros are changed to use asm-compat.h.
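The flavour of what asm-compat.h provides, much simplified (the exact macro
names below are assumptions):

  #ifdef __powerpc64__
  #define PPC_LONG        ".llong"        /* pointer-sized data directive */
  #define PPC_LL          "ld"            /* pointer-sized load */
  #define PPC_STL         "std"           /* pointer-sized store */
  #else
  #define PPC_LONG        ".long"
  #define PPC_LL          "lwz"
  #define PPC_STL         "stw"
  #endif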
ppc_asm.h is now for use in .S files *only*, and a #error enforces
that. As such, we're a lot more careless about namespace pollution
here than in asm-compat.h.
While we're at it, this patch adds a call to the PPC405_ERR77 macro in
futex.h which should have had it already, but didn't.
Built and booted on pSeries, Maple and iSeries (ARCH=powerpc). Built
for 32-bit powermac (ARCH=powerpc) and Walnut (ARCH=ppc).
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel
base page size to 64K. The resulting kernel still boots on any
hardware. On current machines with 4K pages support only, the kernel
will maintain 16 "subpages" for each 64K page transparently.
Note that while real 64K capable HW has been tested, the current patch
will not enable it yet as such hardware is not released yet, and I'm
still verifying with the firmware architects the proper way to get the
information from the newer hypervisors.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>