Commit Graph

221 Commits

David S. Miller
c4bce90ea2 [SPARC64]: Deal with PTE layout differences in SUN4V.
Yes, you heard it right, they changed the PTE layout for
SUN4V.  Ho hum...

This is the simple and inefficient way to support this.
It'll get optimized, don't worry.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:25 -08:00
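
A sketch of the "simple and inefficient" approach described above: convert one PTE field at a time.  The bit names and positions below are invented for illustration only:

	/* hypothetical bit positions, for illustration only */
	#define PTE_VALID_4U	(1UL << 63)
	#define PTE_VALID_4V	(1UL << 63)
	#define PTE_WRITE_4U	(1UL << 1)
	#define PTE_WRITE_4V	(1UL << 6)

	/* test each sun4u bit, set the matching sun4v bit */
	static unsigned long pte_4u_to_4v(unsigned long pte)
	{
		unsigned long ret = 0UL;

		if (pte & PTE_VALID_4U)
			ret |= PTE_VALID_4V;
		if (pte & PTE_WRITE_4U)
			ret |= PTE_WRITE_4V;
		/* ... and so on, one test per PTE field ... */
		return ret;
	}
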
David S. Miller
490384e752 [SPARC64]: Register kernel TSB with hypervisor.
We do this right after we take over the trap table from OBP.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:23 -08:00
David S. Miller
e92b92571c [SPARC64]: Handle hypervisor case correctly in copy_tsb().
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:19 -08:00
David S. Miller
12eaa328f9 [SPARC64]: Use ASI_SCRATCHPAD address 0x0 properly.
This is where the virtual address of the fault status
area belongs.

To set it up we don't make a hypervisor call; instead
we call OBP's SUNW,set-trap-table with the real address
of the fault status area as the second argument.  And
right before that call we write the virtual address into
ASI_SCRATCHPAD vaddr 0x0.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:15 -08:00
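
A sketch of the scratchpad write described above, assuming sun4v's ASI_SCRATCHPAD ASI number (0x20):

	static void register_fault_status_vaddr(unsigned long vaddr)
	{
		/* store the fault status area's virtual address at
		 * ASI_SCRATCHPAD offset 0x0, right before the
		 * SUNW,set-trap-table call */
		__asm__ __volatile__("stxa	%0, [%1] %2"
				     : /* no outputs */
				     : "r" (vaddr),
				       "r" (0x0UL),
				       "i" (0x20 /* ASI_SCRATCHPAD */));
	}
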
David S. Miller
164c220fa3 [SPARC64]: Fix hypervisor call arg passing.
Function goes in %o5, args go in %o0 --> %o4.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:14 -08:00
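
A sketch of the convention being fixed: function number in %o5, arguments in %o0 through %o4, status returned in %o0.  The wrapper name is hypothetical:

	#define HV_FAST_TRAP	0x80

	static unsigned long hv_fast_call2(unsigned long func,
					   unsigned long arg0,
					   unsigned long arg1)
	{
		register unsigned long o5 __asm__("o5") = func;
		register unsigned long o0 __asm__("o0") = arg0;
		register unsigned long o1 __asm__("o1") = arg1;

		__asm__ __volatile__("ta	%3"
				     : "+r" (o0), "+r" (o1)
				     : "r" (o5), "i" (HV_FAST_TRAP)
				     : "o2", "o3", "o4", "memory");
		return o0;	/* 0 (HV_EOK) on success */
	}
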
David S. Miller
618e9ed98a [SPARC64]: Hypervisor TSB context switching.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:06 -08:00
David S. Miller
d82ace7dc4 [SPARC64]: Detect sun4v early in boot process.
We look for "SUNW,sun4v" in the 'compatible' property
of the root OBP device tree node.

Protect every %ver register access, to make sure it is
not touched on sun4v, as %ver is hyperprivileged there.

Lock kernel TLB entries using hypervisor calls instead of
calls into OBP.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:12:03 -08:00
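
A sketch of the probe; prom_getproperty(), prom_root_node, and the hypervisor tlb_type value all exist in the sparc64 tree, but the buffer handling here is simplified:

	static void __init probe_sun4v(void)
	{
		char compat[64];
		int len;

		len = prom_getproperty(prom_root_node, "compatible",
				       compat, sizeof(compat));
		/* "compatible" is a NUL-separated string list; checking
		 * only the first entry is enough for this sketch */
		if (len > 0 && !strncmp(compat, "SUNW,sun4v", 10))
			tlb_type = hypervisor;
	}
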
David S. Miller
8b11bd12af [SPARC64]: Patch up mmu context register writes for sun4v.
sun4v uses ASI_MMU instead of ASI_DMMU.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:56 -08:00
David S. Miller
481295f982 [SPARC64]: Register per-cpu fault status area with sun4v hypervisor.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:55 -08:00
David S. Miller
df7d6aec96 [SPARC64]: Rename gl_{1,2}insn_patch --> sun4v_{1,2}insn_patch
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:53 -08:00
David S. Miller
d257d5da39 [SPARC64]: Initial sun4v TLB miss handling infrastructure.
Things are a little tricky because, unlike sun4u, we have
to:

1) do a hypervisor trap to do the TLB load.
2) do the TSB lookup calculations by hand

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:52 -08:00
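
The by-hand lookup in point 2 amounts to hashing the virtual address into a 16-byte TSB entry.  A sketch, assuming 8K base pages and a power-of-two entry count:

	static unsigned long tsb_entry_for(unsigned long tsb_base,
					   unsigned long nentries,
					   unsigned long vaddr)
	{
		/* index = virtual page number, masked to the TSB size */
		unsigned long hash = (vaddr >> 13) & (nentries - 1UL);

		return tsb_base + hash * 16UL;	/* 16 bytes: tag + data */
	}
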
David S. Miller
45fec05f80 [SPARC64]: Sanitize %pstate writes for sun4v.
If we're just switching between different alternate global
sets, nop it out on sun4v.  Also, get rid of all of the
alternate global save/restore in the OBP CIF trampoline code.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:50 -08:00
David S. Miller
a43fe0e789 [SPARC64]: Add some hypervisor tlb_type checks.
And more consistently check cheetah{,_plus} instead
of assuming anything not spitfire is cheetah{,_plus}.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:40 -08:00
David S. Miller
52bf082f0a [SPARC64]: SUN4V hypervisor TLB flush support code.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:38 -08:00
David S. Miller
f4e841da30 [SPARC64]: Turn off TSB growing for now.
There are several tricky races involved with growing the TSB.  So just
use base-size TSBs for user contexts and we can revisit enabling this
later.

One part of the SMP problems is that tsb_context_switch() can see
partially updated TSB configuration state if tsb_grow() is running in
parallel.  That's easily solved with a seqlock taken as a writer by
tsb_grow() and taken as a reader to capture all the TSB config state
in tsb_context_switch().

Then there is flush_tsb_user() running in parallel with a tsb_grow().
In theory we could take the seqlock as a reader there too, and just
resample the TSB pointer and reflush, but that looks really ugly.

Lastly, I believe there is a case with threads that results in a TSB
entry lock bit being set spuriously which will cause the next access
to that TSB entry to wedge the cpu (since the TSB entry lock bit will
never clear).  It's either copy_tsb() or some bug elsewhere in the TSB
assembly.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:34 -08:00
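
A sketch of the seqlock idea floated above for the tsb_grow() vs. tsb_context_switch() race; the lock and the mm context field names are hypothetical:

	static DEFINE_SEQLOCK(tsb_cfg_lock);	/* taken as writer by tsb_grow() */

	static void capture_tsb_config(struct mm_struct *mm,
				       unsigned long *tsb_reg,
				       unsigned long *tsb_vaddr)
	{
		unsigned int seq;

		do {
			seq = read_seqbegin(&tsb_cfg_lock);
			*tsb_reg   = mm->context.tsb_reg_val;	/* assumed fields */
			*tsb_vaddr = mm->context.tsb_map_vaddr;
		} while (read_seqretry(&tsb_cfg_lock, seq));
	}
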
David S. Miller
517af33237 [SPARC64]: Access TSB with physical addresses when possible.
This way we don't need to lock the TSB into the TLB.
The trick is that every TSB load/store is registered into
a special instruction patch section.  The default uses
virtual addresses, and the patch instructions use physical
address load/stores.

We can't do this on all chips because only cheetah+ and later
have the physical variant of the atomic quad load.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:32 -08:00
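
A sketch of the patch-section mechanism: each TSB access records its location plus the physical-address replacement instruction, and boot code rewrites them in place.  The entry layout and symbol names are assumptions:

	struct tsb_phys_patch_entry {
		unsigned int	addr;	/* location of the insn to patch */
		unsigned int	insn;	/* physical-ASI variant to install */
	};

	extern struct tsb_phys_patch_entry __tsb_phys_patch, __tsb_phys_patch_end;

	static void __init tsb_phys_patch_all(void)
	{
		struct tsb_phys_patch_entry *p = &__tsb_phys_patch;

		while (p < &__tsb_phys_patch_end) {
			unsigned int *insn =
				(unsigned int *)(unsigned long) p->addr;

			*insn = p->insn;
			__asm__ __volatile__("flush	%0" : : "r" (insn));
			p++;
		}
	}
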
David S. Miller
9954863975 [SPARC64]: Kill swapper_pgd_zero, totally unused.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:28 -08:00
David S. Miller
2f7ee7c63f [SPARC64]: Increase swapper_tsb size to 32K.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:26 -08:00
David S. Miller
a8b900d801 [SPARC64]: Kill sole argument passed to setup_tba().
It is no longer used; also move the extern declaration to a header file.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:25 -08:00
David S. Miller
3487d1d441 [SPARC64]: Kill PROM locked TLB entry preservation code.
It is totally unnecessary complexity.  After we take over
the trap table, we handle all PROM tlb misses fully.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:24 -08:00
David S. Miller
4da808c352 [SPARC64]: Fix bogus flush instruction usage.
Some of the trap code was still assuming that alternate
global %g6 was hard coded with current_thread_info().
Let's just consistently flush at KERNBASE when we need
a pipeline synchronization.  That's locked into the TLB
and will always work.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:22 -08:00
David S. Miller
4753eb2ac7 [SPARC64]: Fix incorrect TSB lock bit handling.
The TSB_LOCK_BIT define is actually a special
value shifted down by 32 bits for the assembler
code macros.

In C code, this isn't what we want.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:21 -08:00
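
One way to express the distinction (a sketch; the tree's actual macro names may differ): the assembler macros operate on the upper 32-bit half of the 64-bit tag, so their lock-bit constant is pre-shifted:

	#define TSB_TAG_LOCK_BIT	47	/* bit in the 64-bit tag, for C */
	#define TSB_TAG_LOCK_HIGH	(1 << (TSB_TAG_LOCK_BIT - 32))
						/* for 32-bit asm operations */
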
David S. Miller
b70c0fa161 [SPARC64]: Preload TSB entries from update_mmu_cache().
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:19 -08:00
David S. Miller
bd40791e1d [SPARC64]: Dynamically grow TSB in response to RSS growth.
As the RSS grows, grow the TSB in order to reduce the likelihood
of hash collisions and thus poor hit rates in the TSB.

This definitely needs some serious tuning.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:18 -08:00
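
A sketch of one plausible sizing rule: roughly one 16-byte entry per resident page, capped.  The cap is arbitrary here; as the message says, this needs tuning:

	static unsigned long tsb_bytes_for_rss(unsigned long rss_pages)
	{
		unsigned long bytes = 8192UL;		/* base-size TSB */

		while (bytes < rss_pages * 16UL &&	/* one entry per page */
		       bytes < (1UL << 20))		/* illustrative 1MB cap */
			bytes <<= 1;
		return bytes;
	}
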
David S. Miller
98c5584cfc [SPARC64]: Add infrastructure for dynamic TSB sizing.
This also cleans up tsb_context_switch().  The assembler
routine is now __tsb_context_switch() and the former is
an inline function that picks out the bits from the mm_struct
and passes them into the assembler code as arguments.

setup_tsb_params() computes the locked TLB entry to map the
TSB.  Later when we support using the physical address quad
load instructions of Cheetah+ and later, we'll simply use
the physical address for the TSB register value and set
the map virtual and PTE both to zero.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:17 -08:00
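
The split looks roughly like this sketch (the mm context field names are assumptions): the inline digs the values out of the mm_struct and the assembler routine takes plain scalars:

	static inline void tsb_context_switch(struct mm_struct *mm)
	{
		__tsb_context_switch(__pa(mm->pgd),
				     mm->context.tsb_reg_val,	/* assumed */
				     mm->context.tsb_map_vaddr,
				     mm->context.tsb_map_pte);
	}
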
David S. Miller
09f94287f7 [SPARC64]: TSB refinements.
Move {init_new,destroy}_context() out of line.

Do not put huge pages into the TSB, only base page size translations.
There are some clever things we could do here, but for now let's be
correct instead of fancy.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:16 -08:00
David S. Miller
56fb4df6da [SPARC64]: Eliminate all usage of hard-coded trap globals.
UltraSPARC has special sets of global registers which are switched to
for certain trap types.  There is one set for MMU related traps, one
set for Interrupt Vector processing, and another set (called the
Alternate globals) for all other trap types.

For what seems like forever we've hard coded the values in some of
these trap registers.  Some examples include:

1) Interrupt Vector global %g6 holds the current processor's interrupt
   work struct where received interrupts are managed for IRQ handler
   dispatch.

2) MMU global %g7 holds the base of the page tables of the currently
   active address space.

3) Alternate global %g6 held the current_thread_info() value.

Such hardcoding has resulted in some serious issues in many areas.
There are some code sequences where having another register available
would help clean up the implementation.  Taking traps such as
cross-calls from the OBP firmware requires some tricky code sequences
wherein we have to save away and restore all of the special sets of
global registers when we enter/exit OBP.

We were also using the IMMU TSB register on SMP to hold the per-cpu
area base address, which doesn't work any longer now that we actually
use the TSB facility of the cpu.

The implementation is pretty straightforward.  One tricky bit is
getting the current processor ID as that is different on different cpu
variants.  We use a stub with a fancy calling convention which we
patch at boot time.  The calling convention is that the stub is
branched to and the (PC - 4) to return to is in register %g1.  The cpu
number is left in %g6.  This stub can be invoked by using the
__GET_CPUID macro.

We use an array of per-cpu trap state to store the current thread and
physical address of the current address space's page tables.  The
TRAP_LOAD_THREAD_REG loads %g6 with the current thread from this
table; it uses __GET_CPUID and also clobbers %g1.

TRAP_LOAD_IRQ_WORK is used by the interrupt vector processing to load
the current processor's IRQ software state into %g6.  It also uses
__GET_CPUID and clobbers %g1.

Finally, TRAP_LOAD_PGD_PHYS loads the physical address base of the
current address space's page tables into %g7; it clobbers %g1 and uses
__GET_CPUID.

Many refinements are possible, as well as some tuning, with this stuff
in place.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:16 -08:00
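
A sketch of the per-cpu trap state table described above; the real layout in the tree may differ:

	struct trap_per_cpu {
		struct thread_info	*thread;	/* TRAP_LOAD_THREAD_REG -> %g6 */
		unsigned long		pgd_paddr;	/* TRAP_LOAD_PGD_PHYS  -> %g7 */
		void			*irq_worklist;	/* TRAP_LOAD_IRQ_WORK  -> %g6 */
	};

	extern struct trap_per_cpu trap_block[NR_CPUS];	/* indexed via __GET_CPUID */
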
David S. Miller
3c93646524 [SPARC64]: Kill pgtable quicklists and use SLAB.
Taking a nod from the powerpc port.

With the per-cpu caching of both the page allocator and SLAB, the
pgtable quicklist scheme becomes relatively silly and primitive.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:14 -08:00
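
With quicklists gone, page tables can come straight from a dedicated slab; a sketch (cache name and flags are illustrative):

	static struct kmem_cache *pgtable_cache;

	static void __init pgtable_cache_setup(void)
	{
		pgtable_cache = kmem_cache_create("pgtable_cache",
						  PAGE_SIZE, PAGE_SIZE,
						  SLAB_HWCACHE_ALIGN,
						  NULL /* no constructor */);
	}
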
David S. Miller
05e28f9de6 [SPARC64]: No need to D-cache color page tables any longer.
Unlike the virtual page tables, the new TSB scheme does not
require this ugly hack.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:13 -08:00
David S. Miller
74bf4312ff [SPARC64]: Move away from virtual page tables, part 1.
We now use the TSB hardware assist features of the UltraSPARC
MMUs.

SMP is currently knowingly broken, we need to find another place
to store the per-cpu base pointers.  We hid them away in the TSB
base register, and that obviously will not work any more :-)

Another known broken case is non-8KB base page size.

Also noticed that flush_tlb_all() is not referenced anywhere, only
the internal __flush_tlb_all() (local cpu only) is used by the
sparc64 port, so we can get rid of flush_tlb_all().

The kernel gets its own 8KB TSB (swapper_tsb) and each address space
gets its own private 8K TSB.  Later we can add code to dynamically
increase the size of per-process TSB as the RSS grows.  An 8KB TSB is
good enough for up to about a 4MB RSS, after which the TSB starts to
incur many capacity and conflict misses.

We even accumulate OBP translations into the kernel TSB.

Another area for refinement is large page size support.  We could use
a secondary address space TSB to handle those.

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-03-20 01:11:13 -08:00
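
The 4MB figure follows directly from the TSB entry size:

	8KB TSB / 16 bytes per entry    = 512 entries
	512 entries * 8KB per base page = 4MB of RSS before capacity misses
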
David S. Miller
cab3f16feb [SPARC64]: Fix >8K I/O mappings.
Increment the PFN field of the PTE so that the tests
on vm_pfn in mm/memory.c match up.  The TLB ignores these
lower bits for larger page sizes, so it's OK to set things
like this.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-11-29 13:59:03 -08:00
David S. Miller
5cd9194a1b [PATCH] sparc: convert IO remapping to VM_PFNMAP
Here are the Sparc bits.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-28 14:35:36 -08:00
Hugh Dickins
0b14c179a4 [PATCH] unpaged: VM_UNPAGED
Although we tend to associate VM_RESERVED with remap_pfn_range, quite a few
drivers set VM_RESERVED on areas which are then populated by nopage.  The
PageReserved removal in 2.6.15-rc1 changed VM_RESERVED not to free pages in
zap_pte_range, without changing those drivers not to set it: so their pages
just leak away.

Let's not change miscellaneous drivers now: introduce VM_UNPAGED at the core,
to flag the special areas where the ptes may have no struct page, or if they
have then it's not to be touched.  Replace most instances of VM_RESERVED in
core mm by VM_UNPAGED.  Force it on in remap_pfn_range, and the sparc and
sparc64 io_remap_pfn_range.

Revert addition of VM_RESERVED to powerpc vdso, it's not needed there.  Is it
needed anywhere?  It still governs the mm->reserved_vm statistic, and special
vmas not to be merged, and areas not to be core dumped; but could probably be
eliminated later (the drivers are probably specifying it because in 2.4 it
kept swapout off the vma, but in 2.6 we work from the LRU, which these pages
don't get on).

Use the VM_SHM slot for VM_UNPAGED, and define VM_SHM to 0: it serves no
purpose whatsoever, and should be removed from drivers when we clean up.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: William Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-22 09:13:42 -08:00
Tobias Klauser
84c1a13a30 [SPARC64]: Use ARRAY_SIZE macro
Use ARRAY_SIZE macro instead of sizeof(x)/sizeof(x[0]) and remove a
duplicate of ARRAY_SIZE which is never used anyway.

Signed-off-by: Tobias Klauser <tklauser@nuerscht.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-11-09 12:03:42 -08:00
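
For reference, the macro is the usual one-liner:

	#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
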
Hugh Dickins
da1605465e [SPARC64] mm: update get_user_insn comment
Update comment on get_user_insn to the more general "pte lock", which may
or may not be the page_table_lock.  Note vmtruncate handled like kswapd.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-11-08 10:00:55 -08:00
Hugh Dickins
b462705ac6 [PATCH] mm: arches skip ptlock
Convert those few architectures which are calling pud_alloc, pmd_alloc,
pte_alloc_map on a user mm, not to take the page_table_lock first, nor drop it
after.  Each of these can continue to use pte_alloc_map, no need to change
over to pte_alloc_map_lock, they're neither racy nor swappable.

In the sparc64 io_remap_pfn_range, flush_tlb_range then falls outside of the
page_table_lock: that's okay, on sparc64 it's like flush_tlb_mm, and that has
always been called from outside of page_table_lock in dup_mmap.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:40 -07:00
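
The conversion pattern, sketched (the function context is invented for illustration):

	static pte_t *alloc_user_pte(struct mm_struct *mm, pmd_t *pmd,
				     unsigned long addr)
	{
		/* before: callers serialized the allocator themselves:
		 *	spin_lock(&mm->page_table_lock);
		 *	pte = pte_alloc_map(mm, pmd, addr);
		 *	spin_unlock(&mm->page_table_lock);
		 * after: the allocator locks internally, so simply:
		 */
		return pte_alloc_map(mm, pmd, addr);
	}
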
Nick Piggin
b5810039a5 [PATCH] core remove PageReserved
Remove PageReserved() calls from core code by tightening VM_RESERVED
handling in mm/ to cover PageReserved functionality.

PageReserved special casing is removed from get_page and put_page.

All setting and clearing of PageReserved is retained, and it is now flagged
in the page_alloc checks to help ensure we don't introduce any refcount
based freeing of Reserved pages.

MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
deprecated.  We never completely handled it correctly anyway, and it can be
reintroduced in future if required (Hugh has a proof of concept).

Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
be trivially removed.

Last real user of PageReserved is swsusp, which uses PageReserved to
determine whether a struct page points to valid memory or not.  This still
needs to be addressed (a generic page_is_ram() should work).

A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
thus mapcounted and counted towards shared rss).  These writes to the struct
page could cause excessive cacheline bouncing on big systems.  There are a
number of ways this could be addressed if it is an issue.

Signed-off-by: Nick Piggin <npiggin@suse.de>

Refcount bug fix for filemap_xip.c

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:39 -07:00
Hugh Dickins
fc2acab31b [PATCH] mm: tlb_finish_mmu forget rss
zap_pte_range has been counting the pages it frees in tlb->freed, then
tlb_finish_mmu has used that to update the mm's rss.  That got stranger when I
added anon_rss, yet updated it by a different route; and stranger when rss and
anon_rss became mm_counters with special access macros.  And it would no
longer be viable if we're relying on page_table_lock to stabilize the
mm_counter, but calling tlb_finish_mmu outside that lock.

Remove the mmu_gather's freed field, let tlb_finish_mmu stick to its own
business, just decrement the rss mm_counter in zap_pte_range (yes, there was
some point to batching the update, and a subsequent patch restores that).  And
forget the anal paranoia of first reading the counter to avoid going negative
- if rss does go negative, just fix that bug.

Remove the mmu_gather's flushes and avoided_flushes from arm and arm26: no use
was being made of them.  But arm26 alone was actually using the freed, in the
way some others use need_flush: give it a need_flush.  arm26 seems to prefer
spaces to tabs here: respect that.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:37 -07:00
Hugh Dickins
4d6ddfa924 [PATCH] mm: tlb_is_full_mm was obscure
tlb_is_full_mm?  What does that mean?  The TLB is full?  No, it means that the
mm's last user has gone and the whole mm is being torn down.  And it's an
inline function because sparc64 uses a different (slightly better)
"tlb_frozen" name for the flag others call "fullmm".

And now the ptep_get_and_clear_full macro used in zap_pte_range refers
directly to tlb->fullmm, which would be wrong for sparc64.  Rather than
correct that, I'd prefer to scrap tlb_is_full_mm altogether, and change
sparc64 to just use the same poor name as everyone else - is that okay?

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:37 -07:00
David S. Miller
b4d1b82578 [SPARC64]: Fix powering off on SMP.
Doing a "SUNW,stop-self" firmware call on the other cpus is not the
correct thing to do when dropping into the firmware for a halt,
reboot, or power-off.

For now, just do nothing to quiet the other cpus, as the system should
be quiescent enough.  Later we may decide to implement smp_send_stop()
like the other SMP platforms do.

Based upon a report from Christopher Zimmermann.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-14 15:26:08 -07:00
David S. Miller
c9c1083074 [SPARC64]: Fix boot failures on SunBlade-150
The sequence to move over to the Linux trap tables from
the firmware ones needs to be more airtight.  It turns
out that to be 100% safe we do need to be able to translate
OBP mappings in our TLB miss handlers early.

In order not to eat up a lot of kernel image memory with
static page tables, just use the translations array in
the OBP TLB miss handlers.  That solves the bulk of the
problem.

Furthermore, to make sure the OBP TLB miss path will work
even before the fixed MMU globals are loaded, explicitly
load %g1 to TLB_SFSR at the beginning of the i-TLB and
d-TLB miss handlers.

To ease the OBP TLB miss walking of the prom_trans[] array,
we sort it then delete all of the non-OBP entries in there
(for example, there are entries for the kernel image itself
which we're not interested in at all).

We also save about 32K of kernel image size with this change.
Not a bad side effect :-)

There are still some reasons why trampoline.S can't use the
setup_trap_table() yet.  The most noteworthy are:

1) OBP boots secondary processors with a non-biased stack for
   some reason.  This is easily fixed by using a small bootup
   stack in the kernel image explicitly for this purpose.

2) Doing a firmware call via the normal C call prom_set_trap_table()
   goes through the whole OBP enter/exit sequence that saves and
   restores OBP and Linux kernel state in the MMUs.  This path
   unfortunately does a "flush %g6" while loading up the OBP locked
   TLB entries for the firmware call.

   If we setup the %g6 in the trampoline.S code properly, that
   is in the PAGE_OFFSET linear mapping, but we're not on the
   kernel trap table yet so those addresses won't translate properly.

   One idea is to do a by-hand firmware call like we do in the
   early bootup code and elsewhere here in trampoline.S.  But this
   fails as well, as apparently the secondary processors are not
   booted with OBP's special locked TLB entries loaded.  These
   are necessary for the firmware to process TLB misses correctly
   up until the point where we take over the trap table.

This does need to be resolved at some point.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-12 12:22:46 -07:00
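
A sketch of the sort-and-trim step, using the kernel's sort() helper; the entry-count variable name is assumed:

	static int __init cmp_ptrans(const void *a, const void *b)
	{
		const struct linux_prom_translation *x = a, *y = b;

		if (x->virt < y->virt)
			return -1;
		return x->virt == y->virt ? 0 : 1;
	}

	static void __init sort_prom_trans(void)
	{
		sort(prom_trans, prom_trans_ents,	/* count name assumed */
		     sizeof(prom_trans[0]), cmp_ptrans, NULL);
	}
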
David S. Miller
9ad98c5b44 [SPARC64]: Fix initrd when net booting.
By allocating early memory for the firmware page tables, we
can write over the beginning of the initrd image.

So what we do now is:

1) Read in firmware translations table while still on the
   firmware's trap table.
2) Switch to Linux trap table.
3) Init bootmem.
4) Build firmware page tables using __alloc_bootmem().

And this keeps the initrd from being clobbered.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-05 15:12:00 -07:00
David S. Miller
0835ae0f27 [SPARC64]: Replace cheetah+ code patching with variables.
Instead of code patching to handle the page size fields in
the context registers, just use variables from which we get
the proper values.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-04 15:23:20 -07:00
David S. Miller
13edad7a5c [SPARC64]: Rewrite convoluted physical memory probing.
Delete all of the code working with sp_banks[] and replace
with clean acquisition and sorting of physical memory
parameters from the firmware.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29 17:58:26 -07:00
David S. Miller
10147570f9 [SPARC64]: Kill all external references to sp_banks[]
Thus, we can mark sp_banks[] static in arch/sparc64/mm/init.c

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 21:46:43 -07:00
David S. Miller
0836a0eb40 [SPARC64]: Move phys_base, kern_{base,size}, and sp_banks[] init to paging_init
Also, move prom_probe_memory() into arch/sparc64/mm/init.c

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 21:38:08 -07:00
David S. Miller
efdc1e2083 [SPARC64]: Simplify user fault fixup handling.
Instead of doing byte-at-a-time user accesses to figure
out where the fault occurred, read the saved fault_address
from the current thread structure.

For the sake of defensive programming, if the fault_address
does not fall into the user buffer range, simply assume the
whole area faulted.  This will cause the fixup for
copy_from_user() to clear the entire kernel side buffer.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 21:06:47 -07:00
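
A sketch of the fixup logic: trust the saved fault address when it lands inside the user buffer, otherwise assume the whole area faulted.  The helper name is invented:

	static unsigned long bytes_not_copied(void __user *buf, unsigned long size)
	{
		unsigned long fault = current_thread_info()->fault_address;
		unsigned long start = (unsigned long) buf;

		if (fault < start || fault >= start + size)
			return size;		/* defensive: whole area faulted */
		return start + size - fault;	/* bytes at and beyond the fault */
	}
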
David S. Miller
8cf14af0a7 [SPARC64]: Convert to use generic exception table support.
The funny "range" exception table entries we had were only
used by the compat layer socketcall assembly, and they weren't
even needed there.

For free we now get proper exception table sorting and fast
binary searching.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 20:21:11 -07:00
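
The generic entries are (insn, fixup) pairs, sorted once so that a faulting PC can be binary searched; a sketch of the lookup:

	struct exception_table_entry {
		unsigned long insn, fixup;
	};

	static const struct exception_table_entry *
	search_table(const struct exception_table_entry *first,
		     const struct exception_table_entry *last,
		     unsigned long pc)
	{
		while (first <= last) {
			const struct exception_table_entry *mid =
				first + (last - first) / 2;

			if (mid->insn == pc)
				return mid;
			if (mid->insn < pc)
				first = mid + 1;
			else
				last = mid - 1;
		}
		return NULL;
	}
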
David S. Miller
0dc4610698 [SPARC64]: Do not do TLB pre-filling any more.
In order to do it correctly on UltraSPARC-III+ and later we'd
need to add some complicated code to set the TAG access extension
register before loading the TLB.

Since this optimization gives questionable gains, it's best to
just remove it for now instead of adding the fix for Ultra-III+.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 16:12:18 -07:00
David S. Miller
c5bd50a953 [SPARC64]: Simplify Spitfire D-cache page flush.
It tries to batch up the tag loads and comparisons, and
then the stores.  And this is just complicated instead
of efficient.

Also, make the symbol of the Cheetah version more grepable.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 16:06:03 -07:00
David S. Miller
5642530651 [SPARC64]: Add CONFIG_DEBUG_PAGEALLOC support.
The trick is that we do the kernel linear mapping TLB miss starting
with an instruction sequence like this:

	ba,pt		%xcc, kvmap_load
	 xor		%g2, %g4, %g5

succeeded by an instruction sequence which performs a full page table
walk starting at swapper_pg_dir.

We first take over the trap table from the firmware.  Then, using this
constant PTE generation for the linear mapping area above, we build
the kernel page tables for the linear mapping.

After this is setup, we patch that branch above into a "nop", which
will cause TLB misses to fall through to the full page table walk.

With this, the page unmapping for CONFIG_DEBUG_PAGEALLOC is trivial.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-25 16:46:57 -07:00
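
Patching the branch down to a nop is a one-word store plus an I-cache flush; a sketch:

	static void __init patch_branch_to_nop(unsigned int *insn)
	{
		*insn = 0x01000000;	/* sparc "nop" == sethi %hi(0), %g0 */
		__asm__ __volatile__("flush	%0" : : "r" (insn));
	}
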
David S. Miller
898cf0ecb7 [SPARC64]: Mark functions called by paging_init() as __init.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-23 11:59:44 -07:00
David S. Miller
bff06d5522 [SPARC64]: Rewrite bootup sequence.
Instead of all of this cpu-specific code to remap the kernel
to the correct location, use portable firmware calls to do
this.

What we do now is the following in position independent
assembler:

	chosen_node = prom_finddevice("/chosen");
	prom_mmu_ihandle_cache = prom_getint(chosen_node, "mmu");
	vaddr = 4MB_ALIGN(current_text_addr());
	prom_translate(vaddr, &paddr_high, &paddr_low, &mode);
	prom_boot_mapping_mode = mode;
	prom_boot_mapping_phys_high = paddr_high;
	prom_boot_mapping_phys_low = paddr_low;
	prom_map(-1, 8 * 1024 * 1024, KERNBASE, paddr_low);

and that replaces the massive amount of by-hand TLB probing and
programming we used to do here.

The new code should also handle properly the case where the kernel
is mapped at the correct address already (think: future kexec
support).

Consequently, the bulk of remap_kernel() dies as does the entirety
of arch/sparc64/prom/map.S

We try to share some strings in the PROM library with the ones used
at bootup, and while we're here mark input strings to oplib.h routines
with "const" when appropriate.

There are many more simplifications now possible.  For one thing, we
can consolidate the two copies we now have of a lot of cpu setup code
sitting in head.S and trampoline.S.

This is a significant step towards CONFIG_DEBUG_PAGEALLOC support.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 20:11:33 -07:00
David S. Miller
40fd3533c9 [SPARC64]: Kill readjust_prom_translations()
Testing shows that the prom_unmap() calls do absolutely
nothing.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 13:03:36 -07:00
David S. Miller
2bdb3cb265 [SPARC64]: Remove unnecessary paging_init() cruft.
Because we don't access the PAGE_OFFSET linear mappings
any longer before we take over the trap table from the
firmware, we don't need to load dummy mappings there
into the TLB and we don't need the bootmap_base hack
any longer either.

While we are here, check for a larger than 8MB kernel
and halt the boot with an error message.  We know that
doesn't work, so instead of failing mysteriously we
should let the user know exactly what's wrong.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 01:08:57 -07:00
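
The guard itself is tiny; a sketch, with the image-size computation left to the caller:

	static void __init check_kernel_size(unsigned long kernel_image_size)
	{
		if (kernel_image_size > 8UL * 1024 * 1024) {
			prom_printf("Kernel image exceeds 8MB, cannot boot.\n");
			prom_halt();
		}
	}
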
David S. Miller
5085b4a549 [SPARC64]: Do not allocate OBP page tables using bootmem
Just allocate them physically starting from the end of
the kernel image.  This incredibly simplifies our MM
bootstrap in that we don't need any mappings in the linear
PAGE_OFFSET area working in order to bootstrap ourselves and
take over the trap table from the firmware.

Many further simplifications are possible now, and this also
sets the stage for CONFIG_DEBUG_PAGEALLOC support.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 00:45:41 -07:00
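
A sketch of bump allocation just past the image; the brk symbol is hypothetical:

	static unsigned long early_pgtable_brk;	/* starts at _end, assumed */

	static void * __init early_pgtable_alloc(void)
	{
		void *page = (void *) early_pgtable_brk;

		early_pgtable_brk += PAGE_SIZE;
		memset(page, 0, PAGE_SIZE);
		return page;
	}
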
David S. Miller
405599bd98 [SPARC64]: Break up inherit_prom_mappings() into its constituent parts.
This thing was just a huge monolithic mess, so chop it up.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 00:12:35 -07:00
David S. Miller
b206fc4c09 [SPARC64]: Do not allocate prom translations using bootmem.
Use __initdata instead.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-21 22:31:13 -07:00
David S. Miller
1ac4f5ebaa [SPARC64]: Remove ktlb.S instruction patching.
This was kind of ugly, and actually buggy.  The bug was that
we didn't handle a machine with memory starting > 4GB.  If
the 'prompmd' was allocated in physical memory > 4GB we'd
croak because the obp_iaddr_patch and obp_daddr_patch things
only supported a 32-bit physical address.

So fix this by just loading the appropriate values from two
variables in the kernel image, which is locked into the TLB
and thus accesses to them can't cause a recursive TLB miss.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-21 21:49:32 -07:00
Prasanna S Panchamukhi
83005161c8 [PATCH] kprobes-prevent-possible-race-conditions-sparc64-changes fix
This patch adds the "ax" flags to the .kprobe.text section.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:00 -07:00
Prasanna S Panchamukhi
05e14cb3ba [PATCH] Kprobes: prevent possible race conditions sparc64 changes
This patch contains the sparc64 architecture specific changes to prevent the
possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:00 -07:00
David S. Miller
a7a6cac204 [SPARC]: Kill io_remap_page_range()
It's been deprecated long enough and there are no in-tree
users any longer.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-01 21:51:26 -07:00
David S. Miller
2ef27778a2 [SPARC64]: Preserve nucleus ctx page size during TLB flushes.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-08-30 20:21:34 -07:00
David S. Miller
48b0e5487f [SPARC64]: Fix ugly dependency on NR_CPUS being a power-of-2.
The page->flags D-cache dirty state tracking depended upon
NR_CPUS being a power-of-2 via its "NR_CPUS - 1" masking.

Fix that to use a fixed (256 - 1) mask as that is the limit
imposed by thread_info->cpu which is a "u8".

Finally, add a compile time check that NR_CPUS is not greater
than 256.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-27 16:08:44 -07:00
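
A sketch of the fixed mask plus the compile-time guard (macro names illustrative):

	#define DCACHE_CPU_MASK	(256UL - 1UL)	/* thread_info->cpu is a u8 */

	#if NR_CPUS > 256
	#error D-cache dirty state tracking needs cpu ids to fit in a u8
	#endif
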
David S. Miller
af166d15c3 [SPARC64]: Kill ancient and unused SYSCALL_TRACING debugging code.
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-10 15:56:40 -07:00
David S. Miller
fef43da4e4 [SPARC64]: Fix UltraSPARC-III fallout from membar changes.
The membar changes made the size of __cheetah_flush_tlb_pending
grow by one instruction, but the boot-time code patching was
not updated to match.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-05 19:45:24 -07:00
David S. Miller
b445e26cbf [SPARC64]: Avoid membar instructions in delay slots.
In particular, avoid membar instructions in the delay
slot of a jmpl instruction.

UltraSPARC-I, II, IIi, and IIe have a bug, documented in
the UltraSPARC-IIi User's Manual, Appendix K, Erratum 51.

The long and short of it is that if the IMU unit misses
on a branch or jmpl, and there is a store buffer synchronizing
membar in the delay slot, the chip can stop fetching instructions.

If interrupts are enabled or some other trap is enabled, the
chip will unwedge itself, but performance will suffer.

We already had a workaround for this bug in a few spots, but
it's better to have the entire tree sanitized for this rule.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-27 15:42:04 -07:00
David Gibson
63551ae0fe [PATCH] Hugepage consolidation
A lot of the code in arch/*/mm/hugetlbpage.c is quite similar.  This patch
attempts to consolidate a lot of the code across the archs, putting the
combined version in mm/hugetlb.c.  There are a couple of uglyish hacks in
order to convert all the hugepage archs, but the result is a very large
reduction in the total amount of code.  It also means things like hugepage
lazy allocation could be implemented in one place, instead of six.

Tested, at least a little, on ppc64, i386 and x86_64.

Notes:
	- this patch changes the meaning of set_huge_pte() to be more
	  analogous to set_pte()
	- does SH4 need a special huge_ptep_get_and_clear()?

Acked-by: William Lee Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:15 -07:00
Christoph Hellwig
8edf72ebce [SPARC64]: Kill useless __pte_alloc_one_kernel indirection
warning: untested, but there's not too much chance for screwups

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-05 14:27:56 -07:00
David S. Miller
a9546f59e9 [PATCH] sparc64: Do not flush dcache for ZERO_PAGE.
This case actually can get exercised a lot during an ELF
coredump of a process which contains a lot of non-COW'd
anonymous pages.  GDB has a test case which in particular
creates a near-terabyte process full of ZERO_PAGEs.  It takes
forever to just walk through the page tables because of
all of these spurious cache flushes on sparc64.

With this change it takes only a second or so.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-17 18:03:09 -07:00
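
The fix boils down to an early-out before the flush; a sketch:

	static int dcache_flush_needed(struct page *page)
	{
		/* the ZERO_PAGE is read-only and shared; its D-cache
		 * dirty state never needs maintenance */
		if (page == ZERO_PAGE(0))
			return 0;
		return 1;
	}
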
Linus Torvalds
1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00