Instead of doing byte-at-a-time user accesses to figure
out where the fault occurred, read the saved fault_address
from the current thread structure.
For the sake of defensive programming, if the fault_address
does not fall into the user buffer range, simply assume the
whole area faulted. This will cause the fixup for
copy_from_user() to clear the entire kernel-side buffer.
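The idea reads roughly like the sketch below; the function and argument
names are illustrative, not the actual sparc64 fixup routine:

    /*
     * Minimal sketch: decide how much of the kernel-side destination
     * still has to be cleared, given the fault address the trap handler
     * saved in the thread structure.
     */
    static unsigned long bytes_to_clear(unsigned long user_buf,
                                        unsigned long size,
                                        unsigned long fault_address)
    {
            /* Defensive: if the saved address is outside the user buffer,
             * assume the whole area faulted and clear everything. */
            if (fault_address < user_buf || fault_address >= user_buf + size)
                    return size;

            /* Otherwise only the bytes from the fault onward were missed. */
            return size - (fault_address - user_buf);
    }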
Signed-off-by: David S. Miller <davem@davemloft.net>
The funny "range" exception table entries we had were only
used by the compat layer socketcall assembly, and they weren't
even needed there.
For free we now get proper exception table sorting and fast
binary searching.
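For illustration, a table of simple one-entry-per-instruction records,
sorted by instruction address at boot, can be searched like the sketch
below; the struct layout and function name are illustrative rather than
the exact sparc64 definitions:

    #include <stddef.h>

    /* One fixup record: the faulting instruction and where to resume. */
    struct exception_table_entry {
            unsigned long insn;     /* address of the faulting instruction */
            unsigned long fixup;    /* address to continue at after the fault */
    };

    /* Plain binary search over a table sorted by ->insn. */
    static const struct exception_table_entry *
    search_extable(const struct exception_table_entry *first,
                   const struct exception_table_entry *last,
                   unsigned long faulting_insn)
    {
            while (first <= last) {
                    const struct exception_table_entry *mid =
                            first + (last - first) / 2;

                    if (mid->insn == faulting_insn)
                            return mid;
                    if (mid->insn < faulting_insn)
                            first = mid + 1;
                    else
                            last = mid - 1;
            }
            return NULL;            /* no fixup: a genuine kernel fault */
    }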
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to do it correctly on UltraSPARC-III+ and later we'd
need to add some complicated code to set the TAG access extension
register before loading the TLB.
Since this optimization gives questionable gains, it's best to
just remove it for now instead of adding the fix for Ultra-III+.
Signed-off-by: David S. Miller <davem@davemloft.net>
It tries to batch up the tag loads and comparisons, and
then the stores. This ends up being complicated rather
than efficient.
Also, make the symbol for the Cheetah version more greppable.
Signed-off-by: David S. Miller <davem@davemloft.net>
The trick is that we do the kernel linear mapping TLB miss starting
with an instruction sequence like this:
ba,pt %xcc, kvmap_load
xor %g2, %g4, %g5
followed by an instruction sequence which performs a full page table
walk starting at swapper_pg_dir.
We first take over the trap table from the firmware. Then, using this
constant PTE generation for the linear mapping area above, we build
the kernel page tables for the linear mapping.
After this is set up, we patch that branch above into a "nop", which
will cause TLB misses to fall through to the full page table walk.
With this, the page unmapping for CONFIG_DEBUG_PAGEALLOC is trivial.
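A rough sketch of that patching step; the label and helper names are
illustrative, not necessarily the exact symbols used. 0x01000000 is the
sparc encoding of "nop", and the flush instruction pushes the rewritten
opcode out to the I-cache:

    /* Label assumed to be placed on the "ba,pt %xcc, kvmap_load" insn. */
    extern unsigned int kvmap_linear_patch[1];

    /* Flush one instruction so the I-cache sees the modified opcode. */
    #define flushi(addr) \
            __asm__ __volatile__("flush %0" : : "r" (addr) : "memory")

    static void patch_out_linear_branch(void)
    {
            kvmap_linear_patch[0] = 0x01000000;  /* overwrite branch with nop */
            flushi(&kvmap_linear_patch[0]);
    }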
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of all of this cpu-specific code to remap the kernel
to the correct location, use portable firmware calls to do it.
What we do now is the following in position-independent
assembler:
chosen_node = prom_finddevice("/chosen");
prom_mmu_ihandle_cache = prom_getint(chosen_node, "mmu");
vaddr = 4MB_ALIGN(current_text_addr());
prom_translate(vaddr, &paddr_high, &paddr_low, &mode);
prom_boot_mapping_mode = mode;
prom_boot_mapping_phys_high = paddr_high;
prom_boot_mapping_phys_low = paddr_low;
prom_map(-1, 8 * 1024 * 1024, KERNBASE, paddr_low);
and that replaces the massive amount of by-hand TLB probing and
programming we used to do here.
The new code should also properly handle the case where the kernel
is mapped at the correct address already (think: future kexec
support).
Consequently, the bulk of remap_kernel() dies, as does the entirety
of arch/sparc64/prom/map.S.
We try to share some strings in the PROM library with the ones used
at bootup, and while we're here mark input strings to oplib.h routines
with "const" when appropriate.
There are many more simplifications now possible. For one thing, we
can consolidate the two copies we now have of a lot of cpu setup code
sitting in head.S and trampoline.S.
This is a significant step towards CONFIG_DEBUG_PAGEALLOC support.
Signed-off-by: David S. Miller <davem@davemloft.net>
Because we don't access the PAGE_OFFSET linear mappings
any longer before we take over the trap table from the
firmware, we don't need to load dummy mappings there
into the TLB and we don't need the bootmap_base hack
any longer either.
While we are here, check for a kernel image larger than 8MB
and halt the boot with an error message. We know that
doesn't work, so instead of failing mysteriously we
should let the user know exactly what's wrong.
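A sketch of what such a check amounts to; the symbol names are
illustrative and the real check does not necessarily look like this:

    extern char _start[], _end[];   /* kernel image bounds from the linker */
    extern void prom_printf(const char *fmt, ...);
    extern void prom_halt(void);

    static void check_kernel_size(void)
    {
            unsigned long kern_size =
                    (unsigned long) _end - (unsigned long) _start;

            /* The boot-time mapping only covers 8MB, so anything bigger
             * cannot work; say so clearly and stop. */
            if (kern_size > 8UL * 1024 * 1024) {
                    prom_printf("Kernel image larger than 8MB, cannot boot.\n");
                    prom_halt();
            }
    }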
Signed-off-by: David S. Miller <davem@davemloft.net>
Just allocate them physically starting from the end of
the kernel image. This greatly simplifies our MM
bootstrap in that we don't need any mappings in the linear
PAGE_OFFSET area working in order to bootstrap ourselves and
take over the trap table from the firmware.
Many further simplifications are possible now, and this also
sets the stage for CONFIG_DEBUG_PAGEALLOC support.
Signed-off-by: David S. Miller <davem@davemloft.net>
This was kind of ugly, and actually buggy. The bug was that
we didn't handle a machine with memory starting > 4GB. If
the 'prompmd' was allocated in physical memory > 4GB we'd
croak because the obp_iaddr_patch and obp_daddr_patch things
only supported a 32-bit physical address.
So fix this by just loading the appropriate values from two
variables in the kernel image; the image is locked into the TLB,
so accessing them can't cause a recursive TLB miss.
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch contains the sparc64 architecture-specific changes to prevent
possible race conditions.
Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The page->flags D-cache dirty state tracking depended upon
NR_CPUS being a power-of-2 via its "NR_CPUS - 1" masking.
Fix that to use a fixed (256 - 1) mask, as that is the limit
imposed by thread_info->cpu, which is a "u8".
Finally, add a compile time check that NR_CPUS is not greater
than 256.
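In code, the constraint boils down to something like the sketch below
(macro names illustrative): a fixed mask sized for a u8 cpu id, plus a
preprocessor check that refuses to build with more cpus than that.

    #define PG_dcache_cpu_shift     24UL            /* illustrative bit position */
    #define PG_dcache_cpu_mask      (256UL - 1)     /* fixed: thread_info->cpu is a u8 */

    /* Fail the build outright if NR_CPUS no longer fits in that field. */
    #if NR_CPUS > 256
    #error D-cache dirty state tracking assumes NR_CPUS <= 256
    #endif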
Signed-off-by: David S. Miller <davem@davemloft.net>
The membar changes made the size of __cheetah_flush_tlb_pending
grow by one instruction, but the boot-time code patching was
not updated to match.
Signed-off-by: David S. Miller <davem@davemloft.net>
In particular, avoid membar instructions in the delay
slot of a jmpl instruction.
UltraSPARC-I, II, IIi, and IIe have a bug, documented in
the UltraSPARC-IIi User's Manual, Appendix K, Erratum 51.
The long and short of it is that if the IMU unit misses
on a branch or jmpl, and there is a store buffer synchronizing
membar in the delay slot, the chip can stop fetching instructions.
If interrupts are enabled or some other trap is enabled, the
chip will unwedge itself, but performance will suffer.
We already had a workaround for this bug in a few spots, but
it's better to have the entire tree sanitized for this rule.
Signed-off-by: David S. Miller <davem@davemloft.net>
A lot of the code in arch/*/mm/hugetlbpage.c is quite similar. This patch
attempts to consolidate a lot of the code across the arch's, putting the
combined version in mm/hugetlb.c. There are a couple of uglyish hacks in
order to convert all the hugepage archs, but the result is a very large
reduction in the total amount of code. It also means things like hugepage
lazy allocation could be implemented in one place, instead of six.
Tested, at least a little, on ppc64, i386 and x86_64.
Notes:
- this patch changes the meaning of set_huge_pte() to be more
analogous to set_pte()
- does SH4 need a special huge_ptep_get_and_clear()??
Acked-by: William Lee Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This case actually can get exercised a lot during an ELF
coredump of a process which contains a lot of non-COW'd
anonymous pages. GDB has this test case which in particular
creates a near-terabyte process full of ZERO_PAGEs. It takes
forever to just walk through the page tables because of
all of these spurious cache flushes on sparc64.
With this change it takes only a second or so.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!