The Rn value from the emulation is unconditionally written back;
this is fine as long as Rn != PC because in that case, even if the
instruction isn't a write back instruction, it will only result in the
same value being written back.
In case Rn == PC, then the emulated instruction doesn't have the
actual PC value in Rn but an adjusted value; when this is written
back, it will result in the PC being incorrectly updated.
An alternative solution would be to check bits 24 and 22 to see whether
the instruction actually is a write back instruction or not. I think
it's enough to check whether Rn != PC, because:
- it looks cheaper than the alternative
- to my understanding it's not permitted to update the PC with a write
back instruction, so we don't lose any ability to emulate legal
instructions.
- in case of writing back for non write back instructions where
Rn != PC, it doesn't matter because the values are the same.
Regarding the second point above, it would possibly be prudent to add
some checking to prep_emulate_ldr_str(), so that instructions with
both write back and Rn == PC would be rejected.
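A minimal sketch of the cheaper guard described above (variable and
field names are illustrative, not taken verbatim from the patch):

	if (rn != 15)			/* never write an adjusted value back to the PC */
		regs->uregs[rn] = rnv;	/* harmless otherwise: same value for non write back */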
Signed-off-by: Viktor Rosendahl <viktor.rosendahl@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If a counter overflows during a perf stat profiling run it may overtake
the last known value of the counter:
0          prev    new                    0xffffffff
|----------|-------|----------------------|
In this case, the number of events that have occurred is
(0xffffffff - prev) + new. Unfortunately, the event update code will
not realise an overflow has occurred and will instead report the event
delta as (new - prev) which may be considerably smaller than the real
count.
This patch adds an extra argument to armpmu_event_update which indicates
whether or not an overflow has occurred. If an overflow has occurred
then we use the maximum period of the counter to calculate the elapsed
events.
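As an illustration of the arithmetic above, the update becomes roughly
the following (assuming a counter whose maximum value is
armpmu->max_period; the names are indicative only):

	if (overflow)
		delta = (armpmu->max_period - prev_raw_count) + new_raw_count;
	else
		delta = new_raw_count - prev_raw_count;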
Acked-by: Jamie Iles <jamie@jamieiles.com>
Reported-by: Ashwin Chaugule <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv7 dictates that the interrupt-enable and count-enable registers for
each PMU counter are UNKNOWN following core reset.
This patch adds a new (optional) function pointer to struct arm_pmu for
resetting the PMU state during init. The reset function is called on
each CPU via an arch_initcall in the generic ARM perf_event code and
allows the PMU backend to write sane values to any UNKNOWN registers.
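A sketch of the shape of the change, assuming the hook and initcall
wiring described above (exact names are illustrative):

	struct arm_pmu {
		/* ... existing members ... */
		void	(*reset)(void *info);	/* optional: write sane values to UNKNOWN regs */
	};

	/* from an arch_initcall, run on each CPU, roughly: */
	if (armpmu && armpmu->reset)
		on_each_cpu(armpmu->reset, armpmu, 1);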
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARMv7 architecture does not guarantee that effects from co-processor
writes are immediately visible to following instructions.
This patch adds two isbs to the ARMv7 perf code:
(1) Immediately after selecting an event register, so that the PMU state
following this instruction is consistent with the new event.
(2) Immediately before writing to the PMCR, so that any previous writes
to the PMU have taken effect before (typically) enabling the
counters.
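A rough sketch of where the barriers go, assuming the usual isb()
helper; the function names and coprocessor accesses are shown for
illustration only:

	static inline void armv7_pmnc_select_counter(unsigned int idx)
	{
		asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (idx));
		isb();	/* (1) event selection visible before the next PMU access */
	}

	static inline void armv7_pmnc_write(u32 val)
	{
		isb();	/* (2) earlier PMU writes complete before touching the PMCR */
		asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r" (val));
	}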
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The percpu allocator honors alignment requests up to PAGE_SIZE, and
both the percpu addresses in the percpu address space and the
translated kernel addresses should be aligned accordingly. The
calculation of the former depends on the alignment of the percpu
output section in the kernel image.
The linker script macros PERCPU_VADDR() and PERCPU() are used to
define this output section, and the latter takes an @align parameter.
Several architectures are using @align smaller than PAGE_SIZE breaking
percpu memory alignment.
This patch removes the @align parameter from PERCPU(), renames it to
PERCPU_SECTION() and makes it always align to PAGE_SIZE. While at it,
add PCPU_SETUP_BUG_ON() checks such that alignment problems are
reliably detected and remove percpu alignment comment recently added
in workqueue.c as the condition would trigger BUG way before reaching
there.
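The check added is of roughly this form (a sketch only, assuming
base_addr is the first chunk's base address; the exact expression in
the patch may differ):

	PCPU_SETUP_BUG_ON(!IS_ALIGNED((unsigned long)base_addr, PAGE_SIZE));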
For um, this patch raises the alignment of percpu area. As the area
is in .init, there shouldn't be any noticeable difference.
This problem was discovered by David Howells while debugging boot
failure on mn10300.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Cc: uclinux-dist-devel@blackfin.uclinux.org
Cc: David Howells <dhowells@redhat.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: user-mode-linux-devel@lists.sourceforge.net
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (35 commits)
ARM: Update (and cut down) mach-types
ARM: 6771/1: vexpress: add support for multiple core tiles
ARM: 6797/1: hw_breakpoint: Fix newlines in WARNings
ARM: 6751/1: vexpress: select applicable errata workarounds in Kconfig
ARM: 6753/1: omap4: Enable ARM local timers with OMAP4430 es1.0 exception
ARM: 6759/1: smp: Select local timers vs broadcast timer support runtime
ARM: pgtable: add pud-level code
ARM: 6673/1: LPAE: use phys_addr_t instead of unsigned long for start of membanks
ARM: Use long long format when printing meminfo physical addresses
ARM: integrator: add Integrator/CP sched_clock support
ARM: realview/vexpress: consolidate SMP bringup code
ARM: realview/vexpress: consolidate localtimer support
ARM: integrator/versatile: consolidate FPGA IRQ handling code
ARM: rationalize versatile family Kconfig/Makefile
ARM: realview: remove old AMBA device DMA definitions
ARM: versatile: remove old AMBA device DMA definitions
ARM: vexpress: use new init_early for clock tree and sched_clock init
ARM: realview: use new init_early for clock tree and sched_clock init
ARM: versatile: use new init_early for clock tree and sched_clock init
ARM: integrator: use new init_early for clock tree init
...
The Xen PV drivers in a crashed HVM guest can not connect to the dom0
backend drivers because both frontend and backend drivers are still in
connected state. To run the connection reset function only in case of a
crashdump, the is_kdump_kernel() function needs to be available for the PV
driver modules.
Consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn into
kernel/crash_dump.c. Also export elfcorehdr_addr to make
is_kdump_kernel() usable for modules.
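For reference, is_kdump_kernel() is simply a test against
elfcorehdr_addr, roughly:

	/* include/linux/crash_dump.h, in essence */
	static inline int is_kdump_kernel(void)
	{
		return (elfcorehdr_addr != ELFCORE_ADDR_MAX) ? 1 : 0;
	}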
Leave 'elfcorehdr' as early_param(). This changes powerpc from __setup()
to early_param(). It also adds the address range check from x86 to
ia64 and powerpc.
[akpm@linux-foundation.org: additional #includes]
[akpm@linux-foundation.org: remove elfcorehdr_addr export]
[akpm@linux-foundation.org: fix for Tejun's mm/nobootmem.c changes]
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'devel-stable' of master.kernel.org:/home/rmk/linux-2.6-arm: (289 commits)
davinci: DM644x EVM: register MUSB device earlier
davinci: add spi devices on tnetv107x evm
davinci: add ssp config for tnetv107x evm board
davinci: add tnetv107x ssp platform device
spi: add ti-ssp spi master driver
mfd: add driver for sequencer serial port
ARM: EXYNOS4: Implement Clock gating for System MMU
ARM: EXYNOS4: Enhancement of System MMU driver
ARM: EXYNOS4: Add support for gpio interrupts
ARM: S5P: Add function to register gpio interrupt bank data
ARM: S5P: Cleanup S5P gpio interrupt code
ARM: EXYNOS4: Add missing GPYx banks
ARM: S3C64XX: Fix section mismatch from cpufreq init
ARM: EXYNOS4: Add keypad device to the SMDKV310
ARM: EXYNOS4: Update clocks for keypad
ARM: EXYNOS4: Update keypad base address
ARM: EXYNOS4: Add keypad device helpers
ARM: EXYNOS4: Add support for SATA on ARMLEX4210
plat-nomadik: make GPIO interrupts work with cpuidle ApSleep
mach-u300: define a dummy filter function for coh901318
...
Fix up various conflicts in
- arch/arm/mach-exynos4/cpufreq.c
- arch/arm/mach-mxs/gpio.c
- drivers/net/Kconfig
- drivers/tty/serial/Kconfig
- drivers/tty/serial/Makefile
- drivers/usb/gadget/fsl_mxc_udc.c
- drivers/video/Kconfig
* 'defcfg' of master.kernel.org:/home/rmk/linux-2.6-arm:
ARM: 6647/1: add Versatile Express defconfig
ARM: 6644/1: mach-ux500: update the U8500 defconfig
* 'drivers' of master.kernel.org:/home/rmk/linux-2.6-arm:
ARM: 6764/1: pl011: factor out FIFO to TTY code
ARM: 6763/1: pl011: add optional RX DMA to PL011 v2
ARM: 6758/1: amba: support pm ops
ARM: amba: make amba_driver id_table const
ARM: amba: make internal ID table handling const
ARM: amba: make probe() functions take const id tables
ARM: 6662/1: amba: make amba_bustype non-static
ARM: mmci: add dmaengine-based DMA support
ARM: mmci: no need for separate host->data_xfered
ARM: mmci: avoid unnecessary switch to data available PIO interrupts
ARM: mmci: no need to call flush_dcache_page() with sg_miter API
ARM: mmci: avoid reporting too many completed bytes on fifo overrun
ALSA: AACI: make fifo variables more explanitory
ALSA: AACI: no need to call snd_pcm_period_elapsed() for each period
ALSA: AACI: use snd_pcm_lib_period_bytes()
ALSA: AACI: clean up AACI announcement printk
ALSA: AACI: fix channel mask selection
ALSA: AACI: fix number of channels for record
ALSA: AACI: fix multiple IRQ claiming
* 'cyberpro-next' of master.kernel.org:/home/rmk/linux-2.6-arm:
VIDEO: cyberpro: remove unused cyber2000fb_get_fb_var()
VIDEO: cyberpro: remove useless function extreg pointers
VIDEO: cyberpro: update handling of device structures
VIDEO: cyberpro: add support for video capture I2C
VIDEO: cyberpro: make 'reg_b0_lock' always present
VIDEO: cyberpro: add I2C support
VIDEO: cyberpro: select lowest multipler/divisor for PLL
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (91 commits)
ARM: 6806/1: irq: introduce entry and exit functions for chained handlers
ARM: 6781/1: Thumb-2: Work around buggy Thumb-2 short branch relocations in gas
ARM: 6747/1: P2V: Thumb2 support
ARM: 6798/1: aout-core: zero thread debug registers in a.out core dump
ARM: 6796/1: Footbridge: Fix I/O mappings for NOMMU mode
ARM: 6784/1: errata: no automatic Store Buffer drain on Cortex-A9
ARM: 6772/1: errata: possible fault MMU translations following an ASID switch
ARM: 6776/1: mach-ux500: activate fix for errata 753970
ARM: 6794/1: SPEAr: Append UL to device address macros.
ARM: 6793/1: SPEAr: Remove unused *_SIZE macros from spear*.h files
ARM: 6792/1: SPEAr: Replace SIZE macro's with SZ_4K macros
ARM: 6791/1: SPEAr3xx: Declare device structures after shirq code
ARM: 6790/1: SPEAr: Clock Framework: Rename usbd clock and align apb_clk entry
ARM: 6789/1: SPEAr3xx: Rename sdio to sdhci
ARM: 6788/1: SPEAr: Include mach/hardware.h instead of mach/spear.h
ARM: 6787/1: SPEAr: Reorder #includes in .h & .c files.
ARM: 6681/1: SPEAr: add debugfs support to clk API
ARM: 6703/1: SPEAr: update clk API support
ARM: 6679/1: SPEAr: make clk API functions more generic
ARM: 6737/1: SPEAr: formalized timer support
...
* 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support
percpu: Generic support for this_cpu_cmpxchg_double()
alpha: use L1_CACHE_BYTES for cacheline size in the linker script
percpu: align percpu readmostly subsection to cacheline
Fix up trivial conflict in arch/x86/kernel/vmlinux.lds.S due to the
percpu alignment having changed ("x86: Reduce back the alignment of the
per-CPU data section")
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (62 commits)
posix-clocks: Check write permissions in posix syscalls
hrtimer: Remove empty hrtimer_init_hres_timer()
hrtimer: Update hrtimer->state documentation
hrtimer: Update base[CLOCK_BOOTTIME].offset correctly
timers: Export CLOCK_BOOTTIME via the posix timers interface
timers: Add CLOCK_BOOTTIME hrtimer base
time: Extend get_xtime_and_monotonic_offset() to also return sleep
time: Introduce get_monotonic_boottime and ktime_get_boottime
hrtimers: extend hrtimer base code to handle more then 2 clockids
ntp: Remove redundant and incorrect parameter check
mn10300: Switch do_timer() to xtimer_update()
posix clocks: Introduce dynamic clocks
posix-timers: Cleanup namespace
posix-timers: Add support for fd based clocks
x86: Add clock_adjtime for x86
posix-timers: Introduce a syscall for clock tuning.
time: Splitout compat timex accessors
ntp: Add ADJ_SETOFFSET mode bit
time: Introduce timekeeping_inject_offset
posix-timer: Update comment
...
Fix up new system-call-related conflicts in
arch/x86/ia32/ia32entry.S
arch/x86/include/asm/unistd_32.h
arch/x86/include/asm/unistd_64.h
arch/x86/kernel/syscall_table_32.S
(name_to_handle_at()/open_by_handle_at() vs clock_adjtime()), and some
due to movement of get_jiffies_64() in:
kernel/time.c
Adding Thumb2 support to the runtime patching of the virt_to_phys and
phys_to_virt opcodes.
Tested both the 8-bit and the 16-bit fixups, using different placements
in memory to exercise all code paths.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
These warnings are missing newlines and spaces causing confusing
looking output when they trigger.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide the option to call a machine-specific function
before kexec'ing a new kernel.
Signed-off-by: Eric Cooper <ecc@cmu.edu>
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
ARMv7 allows the debug core logic to be powered down and provides the
DBGPRSR register so that software can power-up and check the status of
the logic.
This patch ensures that the debug logic is powered up on ARMv7 cores
before we attempt to access the extended debug registers.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The GETHBPREGS ptrace request incorrectly maps its index argument onto
the thread's saved debug state when the index != 0. This has not yet
been seen from userspace because GDB (the only user of this request)
only reads from register 0.
This patch fixes the indexing.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The current support for dummy timers in the absence of a local timer
is decided at compile time. This is an attempt to convert it to a
runtime decision, so that on SoC versions where the local timers
aren't supported the kernel can switch to dummy timers. OMAP4430
ES1.0 suffers from this limitation.
This patch should not have any functional impact on affected
files.
Cc: Daniel Walker <dwalker@codeaurora.org>
Cc: Bryan Huntsman <bryanh@codeaurora.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Colin Cross <ccross@android.com>
Cc: Erik Gilling <konkers@android.com>
Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Cc: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: David Brown <davidb@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PTRACE_SINGLESTEP is a ptrace request designed to offer single-stepping
support to userspace when the underlying architecture has hardware
support for this operation.
On ARM, we set arch_has_single_step() to 1 and attempt to emulate hardware
single-stepping by disassembling the current instruction to determine the
next pc and placing a software breakpoint on that location.
Unfortunately this has the following problems:
1.) Only a subset of ARMv7 instructions are supported
2.) Thumb-2 is unsupported
3.) The code is not SMP safe
We could try to fix this code, but it turns out that because of the above
issues it is rarely used in practice. GDB, for example, uses PTRACE_POKETEXT
and PTRACE_PEEKTEXT to manage breakpoints itself and does not require any
kernel assistance.
This patch removes the single-step emulation code from ptrace meaning that
the PTRACE_SINGLESTEP request will return -EIO on ARM. Portable code must
check the return value from a ptrace call and handle the failure gracefully.
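An illustrative userspace check (not part of the patch itself) of the
kind portable code should make:

	if (ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL) == -1) {
		/* errno == EIO on ARM after this change: fall back to
		 * software breakpoints via PTRACE_POKETEXT/PTRACE_PEEKTEXT */
		perror("PTRACE_SINGLESTEP");
	}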
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ensure appropriate locks are taken to ensure that IRQ migration off
the current CPU is race-free. We may have a concurrent set_affinity
via procfs running on another CPU in parallel with the IRQ migration,
resulting in unpredictable results.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The force argument to irq_set_affinity really should be 'true' as
moving IRQs off a CPU which is going down isn't optional.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a missing call to pci_enable_bridges() so that devices behind
bridges get found by the pci bus scan.
Signed-off-by: Chris Partington <chris.partington@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Current diagnostics are rather poor when things go wrong:
ipv6: relocation out of range, section 2 reloc 0 sym 'snmp_mib_free'
Let's include a little more information about the problem:
ipv6: section 2 reloc 0 sym 'snmp_mib_free': relocation 28 out of range (0xbf0000a4 -> 0xc11b4858)
so that we show exactly what the problem is - not only what type of
relocation but also the offending address range too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/kernel/return_address.c:37:6: warning: symbol 'return_address' was not declared. Should it be static?
arch/arm/kernel/setup.c:76:14: warning: symbol 'processor_id' was not declared. Should it be static?
arch/arm/kernel/traps.c:259:1: warning: symbol 'die_lock' was not declared. Should it be static?
arch/arm/vfp/vfpmodule.c:156:6: warning: symbol 'vfp_raise_sigfpe' was not declared. Should it be static?
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make Primecell driver probe functions take a const pointer to their
ID tables. Drivers should never modify their ID tables in their
probe handler.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The initial MMU table created in head.S contains a 1 MB mapping at the
start of memory to let the early kernel boot code access the boot params
specified by mdesc->boot_params.
When using CONFIG_ARM_PATCH_PHYS_VIRT it is possible for the kernel to
have a different idea of where the start of memory is at run time, making
the compile-time determined mdesc->boot_params pointing to a memory area
which is not mapped. Any access to the boot params in that case will
fault and silently hang the kernel at that point. It is therefore a
better idea to simply ignore mdesc->boot_params in that case and give
the kernel a chance to print some diagnostic on the console later.
If the bootloader provides a valid pointer in r2 to the kernel then this
is used instead of mdesc->boot_params, and an explicit mapping is already
created in the initial MMU table for it. It is therefore a good idea to
use that facility when using a relocated kernel.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since commit 6fc31d54 there are no callers of lookup_machine_type()
other than setup_machine(). And since the former doesn't return if it
fails, the error path in the latter is dead code. Let's clean
things up by merging them together.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Allow the generic sleep code to be used with SMP CPU idle by storing
N CPU stack pointers rather than just one. Tested on Assabet and
Tegra 2.
Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds core support for saving and restoring CPU coprocessor
registers for suspend/resume support. This contains support for suspend
with ARM920, ARM926, SA11x0, PXA25x, PXA27x, PXA3xx, V6 and V7 CPUs.
Tested on Assabet and Tegra 2.
Tested-by: Colin Cross <ccross@android.com>
Tested-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Marcin Slusarz says:
> In arch/arm/kernel/kprobes-decode.c there's a function
> arm_kprobe_decode_insn which does:
>
> } else if ((insn & 0x0e000000) == 0x0c400000) {
> ...
>
> This is always false, so code below is dead.
> I found this bug by coccinelle (http://coccinelle.lip6.fr/).
Reported-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When SMP_ON_UP is used and the spinlocks are inlined, we end up with
inline spinlocks in the exit code, with references from the SMP
alternatives section to the exit sections. This causes link time
errors. Avoid this by placing the exit sections in the init-discarded
region.
Cc: <stable@kernel.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ensure a predictable endian state when entering signal handlers. This
avoids programs which use SETEND to momentarily switch their endian
state from having their signal handlers entered with an unpredictable
endian state.
Cc: <stable@kernel.org>
Acked-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 18991197b4 added the --build-id
linker option when the toolchain supports it. The ARM toolchain does,
but for some reason places the section at 0 when the linker script
doesn't mention it explicitly.
Commit 1e621a8e37 worked around the problem
by removing this section from the binary image with explicit objcopy
options, but it still exists in vmlinux, confusing tools like debuggers
and perf.
This problem was discussed here:
http://lists.infradead.org/pipermail/linux-arm-kernel/2010-May/015994.html
but the proposed changes to the linker script were substantial.
This patch simply places NOTES (36 bytes long, at least when compiled
with the CodeSourcery toolchain) between data and bss, which seems to be
the right place (and is suggested by the sample linker script in
include/asm-generic/vmlinux.lds.h).
It is enough to place it correctly in vmlinux (so debuggers are happy):
Section Headers:
[11] .data PROGBITS c07ce000 7ce000 020fc0 00 WA 0 0 32
[12] .notes NOTE c07eefc0 7eefc0 000024 00 AX 0 0 4
[13] .bss NOBITS c07ef000 7eefe4 01e628 00 WA 0 0 32
Program Headers:
LOAD 0x008000 0xc0008000 0xc0008000 0x7e6fe4 0x805628 RWE 0x8000
NOTE 0x7eefc0 0xc07eefc0 0xc07eefc0 0x00024 0x00024 R E 0x4
Section to Segment mapping:
Segment Sections...
00 <...> .data .notes .bss
01 .notes
and to get it exposed as /sys/kernel/notes used by perf tools.
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The unsigned long datatype is not sufficient for mapping physical addresses
>= 4GB.
This patch ensures that the phys_addr_t datatype is used to represent
the start address of a membank, which may reside above the 4GB boundary.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If ID_MMFR0[3:0] >= 3, the architecture version is ARMv7. The code
currently only tests for ID_MMFR0[3:0] == 3.
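In code terms the fix is roughly the following sketch (the field under
test is ID_MMFR0[3:0]; the surrounding names follow the usual setup.c
conventions but are shown for illustration):

	if ((mmfr0 & 0x0000000f) >= 0x00000003)
		cpu_arch = CPU_ARCH_ARMv7;	/* previously tested '== 3' */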
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that we can execute a CONFIG_SMP kernel on a uniprocessor system,
extra care has to be taken in the PMU IRQ affinity setting code to
ensure that we don't always fail to initialise.
This patch changes the CPU PMU initialisation code so that when we
only have a single IRQ, whose affinity can not be changed at the
controller, we report success (0) rather than -EINVAL.
Reported-by: Avik Sil <avik.sil@linaro.org>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If the ATAGs or DTB pointer is not within the first 1MB of RAM, then the
boot params will not be mapped early enough, so map the 1MB region that
r2 points to. Only
map the first 1MB when r2 is 0.
Some assembly improvements from Nicolas Pitre.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
MSM's memory is aligned to 2MB, which is more than we can do with our
existing method as we're limited to the upper 8 bits. Extend this by
using two instructions to 16 bits, automatically selected when MSM is
enabled.
Acked-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This idea came from Nicolas; Eric Miao produced an initial version,
which was then rewritten into this.
Patch the physical to virtual translations at runtime. As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.
As many translations are of the form:
physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)
we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt(). We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.
Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.
At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.
Add a module version magic string for this feature to prevent
incompatible modules being loaded.
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
head.S makes use of PHYS_OFFSET. When it becomes a variable, the
assembler won't understand this. Compute PHYS_OFFSET by the following
method. This code is linked at its virtual address, but runs before
the MMU is enabled, so at its physical address:
1:	.long	.
	.long	PAGE_OFFSET

	adr	r0, 1b			@ r0 = physical ','
	ldmia	r0, {r1, r2}		@ r1 = virtual '.', r2 = PAGE_OFFSET
	sub	r1, r0, r1		@ r1 = physical-virtual
	add	r2, r2, r1		@ r2 = PAGE_OFFSET + physical-virtual
					@ := PHYS_OFFSET.
Switch XIP users of PHYS_OFFSET to use PLAT_PHYS_OFFSET - we can't
use this method for XIP kernels as the code doesn't execute in RAM.
Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As PHYS_OFFSET is becoming a variable, we can't have it used in
initializers or assembly code. Replace those in generic code with
a run-time initialization. Replace those in platform code using the
individual platform specific PLAT_PHYS_OFFSET.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: David Brown <davidb@codeaurora.org>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This uncouples PHYS_OFFSET from the platform definitions, thereby
facilitating run-time computation of the physical memory offset.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Magnus Damm <damm@opensource.se>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Acked-by: Wan ZongShun <mcuos.com@gmail.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Acked-by: Jiandong Zheng <jdzheng@broadcom.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Allow a platform-specific IRQ handler to be specified via platform data.
This will be used to implement the single-irq workaround for the DB8500.
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Acked-by: Lee Jones <lee.jones@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Since the debug macros no longer depend on the machine type information,
the machine type lookup can be deferred to setup_arch() in setup.c which
simplifies the code somewhat.
We also move the __error_a functionality into setup.c for displaying a
message when a bad machine ID is passed to the kernel via the LL debug
code. We also log this into the kernel ring buffer which makes it
possible to retrieve the message via a debugger.
Original idea from Grant Likely.
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For the kernel to support 2-level and 3-level page tables, physical
addresses (and also page table entries) need to be 32 or 64 bits depending
upon the configuration.
This patch uses the %08llx conversion specifier for physical addresses
and page table entries, ensuring that they are cast to (long long) so
that common code can be used regardless of the datatype widths.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows the cache/processor/fault glue to be more easily used
from assembler code. Tested on Assabet and Tegra 2.
Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ptrace debug information register was advertising breakpoint and
watchpoint resources for unsupported debug architectures. This meant
that setting breakpoints on these architectures would appear to succeed,
although they would never fire in reality.
This patch fixes the breakpoint slot probing so that it returns 0 when
running on an unsupported debug architecture.
Reported-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reading baseline CP14 registers, other than DBGDIDR, when the OS Lock
is set leads to UNPREDICTABLE behaviour.
This patch ensures that we clear the OS lock before accessing anything
other than the DBGDIDR, thereby avoiding this behaviour.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a function to set the SCU low-power mode for SMP CPUs. This
centralizes this functionality rather than having to expose the
SCU register definitions to each platform.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since tail is the previous fp - 1, we need to compare the new fp with tail + 1
to ensure that we don't end up passing in the same tail again, in order to
avoid a potential infinite loop in the perf interrupt handler (which has been
observed to occur). A similar fix seems to be needed in the OProfile code.
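In the user backtrace walker the corrected guard amounts to roughly
the following (buftail.fp being the frame pointer just read back from
user memory; a sketch rather than the literal hunk):

	if (tail + 1 >= buftail.fp)	/* fp must strictly advance past tail + 1 */
		return NULL;		/* otherwise we would revisit the same frame */

	return buftail.fp - 1;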
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
With certain configurations, we inline the unlock functions in modules,
which results in SMP alternatives being created in modules. We need to
fix those up when loading a module to prevent undefined instruction
faults.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If CONFIG_CPU_V6 is enabled, then the kernel must support ARMv6 CPUs
which don't have the V6K extensions implemented. Always use the
dummy store-exclusive method to ensure that the exclusive monitors are
cleared.
If CONFIG_CPU_V6 is not set, but CONFIG_CPU_32v6K is enabled, then we
have the K extensions available on all CPUs we're building support for,
so we can use the new clear-exclusive instruction.
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Introduce a CPU_V6K configuration option for platforms to select if they
have a V6K CPU core. This allows us to identify whether we need to
support ARMv6 CPUs without the V6K SMP extensions at build time.
Currently CPU_V6K is just an alias for CPU_V6, and all places which
reference CPU_V6 are replaced by (CPU_V6 || CPU_V6K).
Select CPU_V6K from platforms which are known to be V6K-only.
Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Switch the set/clear/change bitops to use the word-based exclusive
operations, which are present in a wider range of ARM architectures
than the byte-based exclusive operations.
Tested record:
- Nicolas Pitre: ext3,rw,le
- Sourav Poddar: nfs,le
- Will Deacon: ext3,rw,le
- Tony Lindgren: ext3+nfs,le
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Allow non-ARM SMP processors to use the SMP_ON_UP feature. CPUs
supporting SMP must have the new CPU ID format, so check for this first.
Then check for ARM11MPCore, which fails the MPIDR check. Lastly, check
that the MPIDR reports multiprocessing extensions and that the CPU is
part of a multiprocessing system.
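Expressed as C rather than the head.S assembly, the decision is
roughly the sketch below (the constants and cputype helpers are shown
for illustration and may not match the patch exactly):

	static bool __init is_smp_capable(void)
	{
		unsigned int midr = read_cpuid_id();
		unsigned int mpidr;

		if ((midr & 0x000f0000) != 0x000f0000)
			return false;			/* old CPU ID format: assume UP */
		if ((midr & 0xff00fff0) == 0x4100b020)
			return true;			/* ARM11MPCore (fails the MPIDR test) */
		mpidr = read_cpuid_mpidr();
		return (mpidr & 0xc0000000) == 0x80000000; /* MP extensions, not uniprocessor */
	}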
Cc: <stable@kernel.org>
Reported-and-Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ensure that the twd timer reload value is reprogrammed each time we
enter periodic mode. This ensures that the reload value is always
reset correctly.
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently the percpu readmostly subsection may share cachelines with
other percpu subsections, which may result in unnecessary cacheline bounce
and performance degradation.
This patch adds @cacheline parameter to PERCPU() and PERCPU_VADDR()
linker macros, makes each arch linker scripts specify its cacheline
size and use it to align percpu subsections.
This is based on Shaohua's x86 only patch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Shaohua Li <shaohua.li@intel.com>
* 'fixes' of master.kernel.org:/home/rmk/linux-2.6-arm:
ARM: fix missing branch in __error_a
ARM: fix /proc/$PID/stack on SMP
ARM: Fix build regression on SA11x0, PXA, and H720x targets
ARM: 6625/1: use memblock memory regions for "System RAM" I/O resources
ARM: fix wrongly patched constants
ARM: 6624/1: fix dependency for CONFIG_SMP_ON_UP
ARM: 6623/1: Thumb-2: Fix out-of-range offset for Thumb-2 in proc-v7.S
ARM: 6622/1: fix dma_unmap_sg() documentation
ARM: 6621/1: bitops: remove condition code clobber for CLZ
ARM: 6620/1: Change misleading warning when CONFIG_CMDLINE_FORCE is used
ARM: 6619/1: nommu: avoid mapping vectors page when !CONFIG_MMU
ARM: sched_clock: make minsec argument to clocks_calc_mult_shift() zero
ARM: sched_clock: allow init_sched_clock() to be called early
ARM: integrator: fix compile warning in cpu.c
ARM: 6616/1: Fix ep93xx-fb init/exit annotations
ARM: twd: fix display of twd frequency
ARM: udelay: prevent math rounding resulting in short udelays
When DEBUG_LL is not set, we don't want __error_a re-entering
__lookup_machine_type - we want it to go to the error function. This
used to be the case before we reorganized the layout for hotplug cpu,
as we used to fall through to __error. With the changed layout, we
need an explicit branch here instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rabin Vincent reports:
| On SMP, this BUG() in save_stack_trace_tsk() can be easily triggered
| from user space by reading /proc/$PID/stack, where $PID is any pid but
| the current process:
|
| if (tsk != current) {
| #ifdef CONFIG_SMP
| /*
| * What guarantees do we have here that 'tsk'
| * is not running on another CPU?
| */
| BUG();
| #else
Fix this by replacing the BUG() with an entry to terminate the stack
trace, returning an empty trace - I'd rather not expose the dwarf
unwinder to a volatile stack of a running thread.
Reported-by: Rabin Vincent <rabin@rab.in>
Tested-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Do not use memory bank info to request the "system ram" resources as
they do not track holes created by memblock_remove inside
machine's reserve callback. If the removed memory is passed as
platform_device's ioresource, then drivers that call
request_mem_region would fail due to a conflict with the incorrectly
configured system ram resource.
Instead, iterate through the regions of memblock.memory and add
those as "System RAM" resources.
Signed-off-by: Dima Zavin <dima@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Four architectures (arm, mips, sparc, x86) use __vmalloc_area() for
module_init(). Much of the code is duplicated and can be generalized in a
globally accessible function, __vmalloc_node_range().
__vmalloc_node() now calls into __vmalloc_node_range() with a range of
[VMALLOC_START, VMALLOC_END) for functionally equivalent behavior.
Each architecture may then use __vmalloc_node_range() directly to remove
the duplication of code.
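The generalized helper takes an explicit address range, with roughly
this signature (as introduced; later kernels may differ):

	void *__vmalloc_node_range(unsigned long size, unsigned long align,
				   unsigned long start, unsigned long end,
				   gfp_t gfp_mask, pgprot_t prot,
				   int node, void *caller);

module_alloc() on each architecture can then call it with the
[MODULES_VADDR, MODULES_END) range instead of open-coding the mapping.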
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When CONFIG_CMDLINE_FORCE is used, the warning
Ignoring unrecognised tag 0x54410009
was displayed. Change this to
Ignoring tag cmdline (using the default kernel command line)
Signed-off-by: Alexander Holler <holler@ahsoftware.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When running without an MMU, we do not need to install a mapping for the
vectors page. Attempting to do so causes a compile-time error because
install_special_mapping is not defined.
This patch adds compile-time guards to the vector mapping functions
so that we can build nommu configurations once more.
Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The purpose of the minsec argument is to prevent 64-bit math overflow
when the number of cycles is multiplied up. However, the multiplier
is 32-bit, and in the sched_clock() case, the cycle counter is up to
32-bit as well. So the math can never overflow.
With a value of 60, and clock rates greater than 71MHz, the calculated
multiplier is unnecessarily reduced in value, which reduces accuracy by
maybe 70ppt. It's almost not worth bothering with as the oscillator
driving the counter won't be any more than 1ppm - unless you're using
a rubidium lamp or caesium fountain frequency standard.
So, set the minsec argument to zero.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
sched_clock is supposed to be initialized early - in the recently added
init_early platform hook. However, in doing so we end up calling
mod_timer() before the timer lists are initialized, resulting in an
oops.
Split the initialization in two: the part which the platform calls
early starts things off; the addition of the timer is delayed until
after we have more of the kernel initialized - when the normal time
sources are initialized.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The fraction of MHz was not being displayed correctly as the calculation
was a factor of 10 out. Fix this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (416 commits)
ARM: DMA: add support for DMA debugging
ARM: PL011: add DMA burst threshold support for ST variants
ARM: PL011: Add support for transmit DMA
ARM: PL011: Ensure IRQs are disabled in UART interrupt handler
ARM: PL011: Separate hardware FIFO size from TTY FIFO size
ARM: PL011: Allow better handling of vendor data
ARM: PL011: Ensure error flags are clear at startup
ARM: PL011: include revision number in boot-time port printk
ARM: vexpress: add sched_clock() for Versatile Express
ARM i.MX53: Make MX53 EVK bootable
ARM i.MX53: Some bug fix about MX53 MSL code
ARM: 6607/1: sa1100: Update platform device registration
ARM: 6606/1: sa1100: Fix platform device registration
ARM i.MX51: rename IPU irqs
ARM i.MX51: Add ipu clock support
ARM: imx/mx27_3ds: Add PMIC support
ARM: DMA: Replace page_to_dma()/dma_to_page() with pfn_to_dma()/dma_to_pfn()
mx51: fix usb clock support
MX51: Add support for usb host 2
arch/arm/plat-mxc/ehci.c: fix errors/typos
...
This allows platforms to hook into the initialization early to set up
things like scheduler clocks, etc.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than storing each machine init hook separately, store a
pointer to the machine description record and dereference this
instead. This pointer is only available while the init sections
are present, which is not a problem as we only use it from init
code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Per subarch interrupt handler macros V3.
This patch breaks out code from the irq_handler macro
into arch_irq_handler and arch_irq_handler_default.
The macros are put in the header file "entry-macro-multi.S".
The arch_irq_handler_default macro is designed to be
used by irq_handler in entry-armv.S while arch_irq_handler
is suitable for per-subarch use.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Normally, different ARM platforms have different ways to decode the IRQ
hardware status and demultiplex to the corresponding IRQ handler.
This is highly optimized by the irq_handler macro in entry-armv.S, and
each machine defines its own macro to decode the IRQ number.
However, this prevents multiple machine classes from being built into a
single kernel.
By allowing each machine to specify its own handler, and making the
function pointer 'handle_arch_irq' point to it at run time, this
can be solved. CONFIG_MULTI_IRQ_HANDLER is introduced to allow both
solutions to work.
Compared with the highly optimized irq_handler macro, the new
function must be written with care not to lose too much performance.
The IPI handling on SMP is expected to move to the provided arch
IRQ handler as well.
The assembly code to invoke handle_arch_irq is optimized by Russell
King.
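In essence the mechanism is the following sketch (the exact
declarations live in the IRQ and setup code; the names are as
described above but shown here for illustration):

	/* the run-time dispatch pointer */
	void (*handle_arch_irq)(struct pt_regs *regs);

	/* set from the machine descriptor during setup_arch(), roughly: */
	if (mdesc->handle_irq)
		handle_arch_irq = mdesc->handle_irq;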
Signed-off-by: Eric Miao <eric.miao@canonical.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If the irqsoff tracer is in use, stop tracing the interrupt disable
interval when returning to userspace. Tracing userspace execution time
as interrupts disabled time is not helpful for kernel performance
analysis purposes. Only do so if the irqsoff tracer is enabled, to
avoid overhead for lockdep, which doesn't care.
Signed-off-by: Todd Poynor <toddpoynor@google.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide common sched_clock() infrastructure for platforms to use to
create a 64-bit ns based sched_clock() implementation from a counter
running at a non-variable clock rate.
This implementation is based upon maintaining an epoch for the counter
and an epoch for the nanosecond time. When we desire a sched_clock()
time, we calculate the number of counter ticks since the last epoch
update, convert this to nanoseconds and add to the epoch nanoseconds.
We regularly refresh these epochs within the counter wrap interval.
We perform a similar calculation as above, and store the new epochs.
We read and write the epochs in such a way that sched_clock() can easily
(and locklessly) detect when an update is in progress, and repeat the
loading of these constants when they're known not to be stable. The
one caveat is that sched_clock() is not called in the middle of an
update. We achieve that by disabling IRQs.
Finally, if the clock rate is known at compile time, the counter to ns
conversion factors can be specified, allowing sched_clock() to be tightly
optimized. We ensure that these factors are correct by providing an
initialization function which performs a run-time check.
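A simplified sketch of the lockless read side described above (field
names are illustrative; cyc_to_ns() performs the mult/shift
conversion):

	u64 epoch_ns;
	u32 epoch_cyc;

	/* repeat the loads if an update was in progress */
	do {
		epoch_cyc = cd.epoch_cyc;
		smp_rmb();
		epoch_ns = cd.epoch_ns;
		smp_rmb();
	} while (epoch_cyc != cd.epoch_cyc_copy);

	return epoch_ns + cyc_to_ns((cyc - epoch_cyc) & mask, cd.mult, cd.shift);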
Acked-by: Peter Zijlstra <peterz@infradead.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Tested-by: Eric Miao <eric.y.miao@gmail.com>
Tested-by: Olof Johansson <olof@lixom.net>
Tested-by: Jamie Iles <jamie@jamieiles.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We have two places where we create identity mappings - one when we bring
secondary CPUs online, and one where we set up some mappings for soft-
reboot. Combine these two into a single implementation. Also collect
the identity mapping deletion function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The MMU is always configured to read page tables from the L2 cache
so there's little point flushing them out of the L2 cache back to
RAM. Remove these flushes.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When we soft-CPU hotplug a CPU, we reset the stack pointer and
jump back to start_secondary(). This allows us to restart as if
the CPU was actually reset.
However, we weren't resetting the frame pointer, which could cause
problems with backtracing. Reset the frame pointer to zero (which
means no parent frame) just like the early assembly code also does.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
smp.c is becoming too large, so split out the TLB maintenance
broadcasting into a separate smp_tlb.c file.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When a CPU is hot unplugged, the generic tick code cleans up the
clock event device, but fails to call down to the device's set_mode
function to actually shut the device down.
To work around this, we've historically had a local_timer_stop()
callback out of the hotplug code. However, this adds needless
complexity when we have the clock event device itself available.
Explicitly call the clock event device's set_mode function with
CLOCK_EVT_MODE_UNUSED, so that the hardware can be cleanly shutdown
without any special external callbacks. When/if the generic code
is fixed, percpu_timer_stop() can be killed off.
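The explicit shutdown amounts to something like the following sketch,
assuming the per-cpu clock event device used by the local timer code:

	static void percpu_timer_stop(void)
	{
		unsigned int cpu = smp_processor_id();
		struct clock_event_device *evt = &per_cpu(percpu_clockevent, cpu);

		evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
	}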
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We used to print a bland error message which gave no clue as to the
failure when we failed to bring up a secondary CPU. Resolve this by
separating the two failure cases.
If boot_secondary() fails, we print a message indicating the returned
error code from boot_secondary():
"CPU%u: failed to boot: %d\n", cpu, ret.
However, if boot_secondary() succeeded, but the CPU did not appear to
mark itself online within the timeout, indicate that it failed to come
online:
"CPU%u: failed to come online\n", cpu
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* __fixup_smp_on_up has been modified with support for the
THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split
into halfwords in case of misalignment, since we can't rely on
unaligned accesses working before turning the MMU on.
No attempt is made to optimise the aligned case, since the
number of fixups is typically small, and it seems best to keep
the code as simple as possible.
* Add a rotate in the fixup_smp code in order to support
CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
* Add an assembly-time sanity-check to ALT_UP() to ensure that
the content really is the right size (4 bytes).
(No check is done for ALT_SMP(). Possibly, this could be fixed
by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
ALT_SMP...SMP_UP_B) into two macros. In the first case,
ALT_SMP needs to expand to >= 4 bytes, not == 4.)
* smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due
to macro limitations) has not been modified: the affected
instruction (mov) has no 16-bit encoding, so the correct
instruction size is satisfied in this case.
* A "mode" parameter has been added to smp_dmb:
smp_dmb arm @ assumes 4-byte instructions (for ARM code, e.g. kuser)
smp_dmb @ uses W() to ensure 4-byte instructions for ALT_SMP()
This avoids assembly failures due to use of W() inside smp_dmb,
when assembling pure-ARM code in the vectors page.
There might be a better way to achieve this.
* Kconfig: make SMP_ON_UP depend on
(!THUMB2_KERNEL || !BIG_ENDIAN) i.e., THUMB2_KERNEL is now
supported, but only if !BIG_ENDIAN (The fixup code for Thumb-2
currently assumes little-endian order.)
Tested using a single generic realview kernel on:
ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
ARM RealView PBX-A9 (SMP)
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Don't call idle_task_exit() with interrupts disabled, and ensure
that we have a memory barrier after interrupts are disabled but
before signalling that this CPU has shut down.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We always need to wait for the dying CPU to reach a safe state before
taking it down, irrespective of the requirements of the platform.
Move the completion code into the ARM SMP hotplug code rather than
having each platform re-implement this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
All platforms call trace_hardirqs_off() in their secondary startup code,
so move this into the core SMP code - it doesn't need to be in the
per-platform code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is a certain amount of smp_prepare_cpus() which doesn't belong
in the platform support code - that is, code which is invariant to the
SMP implementation. Move this code into arch/arm/kernel/smp.c, and
add a platform_ prefix to the original function.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Wait for CPUs to indicate that they've stopped, after sending the
stop IPI, rather than blindly continuing on and hoping that they've
stopped in time. Print a warning if we fail to stop the other CPUs.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use r0,r3-r6 rather than r0,r3,r4,r6,r7, which makes it easier to
understand which registers can be modified. Also document which
registers hold values which must be preserved.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The IPI and local timer interrupts weren't being properly accounted
for in /proc/stat. Collect them from the irq_stat structure, and
return their sum.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This separates out the individual IPI interrupt counts from the
total IPI count, which allows better visibility of what IPIs are
being used for.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
iwmmxt is used in the XScale, XScale3, Mohawk and PJ4 cores. But the
instructions for accessing CP0 and CP1 are changed in PJ4. Add more
files to support iwmmxt in the PJ4 core.
Signed-off-by: Zhou Zhu <zzhu3@marvell.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
As per x86, align the initial column according to how many IRQs we
have. Also, provide an English explanation for the 'LOC:' and
'IPI:' lines.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the ipi_count into irq_stat, which allows the ipi_data structure
to be entirely removed.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide __inc_irq_stat() and __get_irq_stat() to increment and
read the irq stat counters.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
send_ipi_message() does nothing except call smp_cross_call(). As
this is a static function, nothing external to this file calls it,
so we can easily clean up this now unnecessary indirection.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We should not be incrementing mm_users when we startup a secondary
CPU - doing so results in mm_users incrementing by one each time we
hotplug a CPU, which will eventually wrap, and will cause problems.
Other architectures such as x86 do not increment mm_users, but only
mm_count, so we follow that pattern.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Extend the perf_pmu_register() interface to allow for named and
dynamic pmu types.
Because we need to support the existing static types we cannot use
dynamic types for everything, hence provide a type argument.
If we want to enumerate the PMUs they need a name, provide one.
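Illustrative usage with the extended interface (the second call is a
hypothetical dynamic PMU, with -1 requesting an allocated type id):

	perf_pmu_register(&perf_breakpoint, "breakpoint", PERF_TYPE_BREAKPOINT);
	perf_pmu_register(&my_pmu, "my_pmu", -1);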
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101117222056.259707703@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The debug registers can only be manipulated from software if monitor
debug mode is enabled. On some cores, this can never be enabled (i.e.
the corresponding bit in the DSCR is RAZ/WI).
This patch ensures we can handle this hardware configuration and fail
gracefully, rather than blow up the kernel during boot.
Reported-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Avoid adding nasty genirq-specific code to local timers to enable PPI
interrupts. Instead, provide a gic function to do this.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
sparse doesn't like per-cpu accesses such as:
static DEFINE_PER_CPU(struct perf_event *, foo[MAXLEN]);
struct perf_event **bar = __get_cpu_var(foo);
and shouts quite loudly about it:
| warning: incorrect type in assignment (different modifiers)
| expected struct perf_event **slots
| got struct perf_event *[noderef] *<noident>
This patch adds casts to these sorts of assignments in hw_breakpoint.c
in order to silence the warnings.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Single-stepping a breakpoint requires us to disable it temporarily so that
we don't get stuck in a recursive debug trap. With per-cpu breakpoints this
presents a problem where an interrupt can be taken before the single-step has
completed and a new task is eventually scheduled. This new task will not
hit the breakpoint because it will have been disabled during the previous
handling code.
This patch disallows per-cpu breakpoints on ARM when an overflow handler
is not present. A similar effect can be created by placing breakpoints on
a shell and then running applications there.
Signed-off-by: Will Deacon <will.deacon@arm.com>
The single-stepping code is currently different depending on whether
we are stepping over a breakpoint or a watchpoint. There is no good
reason for this, so let's sort it out.
This patch adds functions for enabling/disabling single-step for
a particular hw_breakpoint and integrates this with the exception
handling code.
Signed-off-by: Will Deacon <will.deacon@arm.com>
The watchpoint single-stepping code calls register_user_hw_breakpoint to
register a mismatch breakpoint for stepping over the watchpoint. This is
performed with preemption disabled, which is unsafe as we may end up scheduling
whilst in_atomic(). Furthermore, using the perf API is rather overkill since
we are already in the hw-breakpoint backend and only require access to reserved
breakpoints anyway.
This patch reworks the watchpoint stepping code so that we don't require
another perf_event for the mismatch breakpoint. Instead, we hold a separate
arch_hw_breakpoint_ctrl struct inside the watchpoint which is used exclusively
for stepping. We can check whether or not stepping is enabled when installing
or uninstalling the watchpoint and operate on the breakpoint accordingly.
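Roughly, the per-breakpoint architecture state grows a second
control word used only while stepping (field layout approximate):

    struct arch_hw_breakpoint {
        u32     address;
        u32     trigger;
        struct  arch_hw_breakpoint_ctrl step_ctrl; /* stepping only */
        struct  arch_hw_breakpoint_ctrl ctrl;      /* the real bp/wp */
    };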
Signed-off-by: Will Deacon <will.deacon@arm.com>
To permit handling of watchpoint exceptions without signalling a
debugger, it is necessary to reserve breakpoint registers for in-kernel
use only.
This patch ensures that we record and subtract the number of reserved
breakpoints from the number of usable breakpoint registers that we
advertise to userspace via the ptrace API.
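In sketch form (variable names are illustrative), the count that
ptrace advertises becomes:

    /* breakpoints usable by userspace */
    static int get_num_brps(void)
    {
        return core_num_brps - core_num_reserved_brps;
    }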
Signed-off-by: Will Deacon <will.deacon@arm.com>
On ARM, debug exceptions occur in the form of data or prefetch aborts.
Unlike ordinary aborts, however, debug exceptions require access to per-cpu banked
registers and data structures which are not saved in the low-level exception
code. For kernels built with CONFIG_PREEMPT, there is an unlikely scenario
that the debug handler ends up running on a different CPU from the one
that originally signalled the event, resulting in random data being read
from the wrong registers.
This patch adds a debug_entry macro to the low-level exception handling
code which checks whether the taken exception is a debug exception. If
it is, the preempt count for the faulting process is incremented. After
the debug handler has finished, the count is decremented.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The current hw_breakpoint code tries to fix up the alignment of
breakpoints so that we can make use of sparse byte-address-select
bits in the control register and give the illusion that we can
set breakpoints on unaligned addresses.
Although this works on v6 cores, v7 forbids this behaviour, instead
requiring breakpoints to be set on aligned addresses and have contiguous
byte-address-select ranges depending on the instruction set in use.
For ARM the only supported size is 4 bytes, whilst Thumb-2 also permits
2-byte breakpoints (watchpoints can be 1, 2, 4 or 8 bytes long).
This patch simplifies the alignment fixup code so that we require
addresses to be aligned to the size of the corresponding breakpoint.
This allows us to handle the common case of breaking on a half-word
aligned Thumb-2 instruction and also allows us to set byte watchpoints
on arbitrary addresses.
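A simplified sketch of the resulting check (constants as in
asm/hw_breakpoint.h; the byte-watchpoint fixup is elided):

    /* reject addresses not aligned to the breakpoint length */
    switch (info->ctrl.len) {
    case ARM_BREAKPOINT_LEN_1: alignment_mask = 0x0; break;
    case ARM_BREAKPOINT_LEN_2: alignment_mask = 0x1; break;
    case ARM_BREAKPOINT_LEN_4: alignment_mask = 0x3; break;
    case ARM_BREAKPOINT_LEN_8: alignment_mask = 0x7; break;
    default:                   return -EINVAL;
    }
    if (info->address & alignment_mask)
        return -EINVAL;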
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARMv7 debug architecture doesn't make any guarantees about the
contents of debug control registers following a debug logic reset.
This patch ensures that we reset the control registers when a cpu
comes ONLINE (for example, with hotplug) so that when we enable
monitor mode while inserting a breakpoint we won't exhibit random
behaviour.
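The reset itself is simple; a sketch of the per-cpu routine (helper
and variable names approximate what hw_breakpoint.c uses):

    static void reset_ctrl_regs(void *unused)
    {
        int i;

        /* zero every breakpoint and watchpoint control register */
        for (i = 0; i < core_num_brps; ++i)
            write_wb_reg(ARM_BASE_BCR + i, 0UL);
        for (i = 0; i < core_num_wrps; ++i)
            write_wb_reg(ARM_BASE_WCR + i, 0UL);
    }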
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv7 architects a system for saving and restoring the debug registers
across low-power modes. At the heart of this system is a lock register
which, when set, forbids writes to the debug registers. While locked,
writes to debug registers via the co-processor interface will result
in undefined instruction traps. Linux currently doesn't make use of
this feature because we update the debug registers on context switch
anyway, however the status of the lock is IMPLEMENTATION DEFINED on
reset.
This patch ensures that the lock is cleared during boot so that we
can write to the debug registers safely.
Signed-off-by: Will Deacon <will.deacon@arm.com>
As our SMP implementation uses MESI protocols, grouping together data
which is mostly only read means that we avoid unnecessary cache line
bouncing when this data shares a cache line with other data.
In other words, cache lines associated with read-mostly data are
expected to spend most of their time in the shared state.
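One common way of expressing this in the kernel is the __read_mostly
annotation, which collects such variables into a dedicated section,
e.g. (variable purely illustrative):

    /* read-mostly: keep away from frequently written data */
    static unsigned long example_cpu_mask __read_mostly;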
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For kernels built with PREEMPT_RT, critical sections protected
by standard spinlocks are preemptible. This is not acceptable
for perf as (a) we may be scheduled onto a different CPU whilst
reading/writing banked PMU registers and (b) the latency when
reading the PMU registers becomes unpredictable.
This patch upgrades the pmu_lock spinlock to a raw_spinlock
instead.
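A raw spinlock keeps spinning (and keeps the critical section
non-preemptible) even on PREEMPT_RT; a minimal sketch of the change:

    static DEFINE_RAW_SPINLOCK(pmu_lock);

    unsigned long flags;

    raw_spin_lock_irqsave(&pmu_lock, flags);
    /* read/write the banked PMU registers on this CPU */
    raw_spin_unlock_irqrestore(&pmu_lock, flags);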
Reported-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Russell reported a number of warnings coming from sparse when
checking the ARM perf_event.c files:
| perf_event.c seems to also have problems too:
|
| CHECK arch/arm/kernel/perf_event.c
| arch/arm/kernel/perf_event.c:37:1: warning: symbol 'pmu_lock' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:70:1: warning: symbol 'cpu_hw_events' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1006:1: warning: symbol 'armv6pmu_enable_event' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1113:1: warning: symbol 'armv6pmu_stop' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:1956:6: warning: symbol 'armv7pmu_enable_event' was not declared. Should it be static?
| arch/arm/kernel/perf_event.c:3072:14: warning: incorrect type in argument 1 (different address spaces)
| arch/arm/kernel/perf_event.c:3072:14: expected void const volatile [noderef] <asn:1>*<noident>
| arch/arm/kernel/perf_event.c:3072:14: got struct frame_tail *tail
| arch/arm/kernel/perf_event.c:3074:49: warning: incorrect type in argument 2 (different address spaces)
| arch/arm/kernel/perf_event.c:3074:49: expected void const [noderef] <asn:1>*from
| arch/arm/kernel/perf_event.c:3074:49: got struct frame_tail *tail
This patch resolves these issues so we can live in silence
again.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When kexec is used to start a crash kernel, the other cores are
notified. These non-crashing cores will save their state in the
crash notes and then do nothing.
Signed-off-by: Per Fransson <per.xx.fransson@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The existing code invokes the syscall with rubbish in r7,
due to what looks like an incorrect literal load idiom.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As we've now removed the spinlock and bitmask, we have nothing left
which requires interrupts to be disabled when sending an IPI. All
current IPI-sending implementations use the GIC, which also does not
require interrupts disabled when calling gic_raise_softirq().
Remove the now unnecessary IRQ disable.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Avoid using bitmasks and locks in the percpu area for IPIs, and instead
use individual software generated interrupts to identify the reason for
the IPI. This avoids the problems of having spinlocks in the percpu
area.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows us to use smp_cross_call() to trigger a number of different
software generated interrupts, rather than combining them all on one
SGI. Recover the SGI number via do_IPI.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If a section is not marked with SHF_ALLOC, it will be discarded
by the module code. Therefore, it is not correct to register
the unwind tables.
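Concretely, the unwind-table scan should skip any section whose
header lacks SHF_ALLOC, along these lines:

    /* sections without SHF_ALLOC are discarded after load */
    if (!(sechdrs[i].sh_flags & SHF_ALLOC))
        continue;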
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no need to keep pointers to the ELF sections available while
the module is loaded - we only need the section pointers while we're
finding and registering the unwind tables, which can all be done during
the finalize stage of loading.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The 32-bit conditional branches in Thumb-2 have a shorter range
(+/-512K) than their ARM counterparts (+/-32MB). The linker does
not currently generate trampolines to extend the range of these
Thumb-2 conditional branches, resulting in link errors when vmlinux
is sufficiently large, e.g.:
head.o:(.text+0x464): relocation truncated to fit: R_ARM_THM_JUMP19
This patch forces the longer-range, unconditional branch encoding
by use of an explicit IT instruction. The resulting branches are
triggered on the same conditions as before.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Directives such as .long and .word do not magically cause the
assembler location counter to become aligned in gas. As a result,
using these directives in code sections can result in misaligned
data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL).
This is a Bad Thing, since the ABI permits the compiler to assume
that fundamental types of word size or above are word-aligned when
accessing them from C. If the data is not really word-aligned,
this can cause impaired performance and stray alignment faults in
some circumstances.
In general, the following rules should be applied when using data
word declaration directives inside code sections:
  * .quad and .double:
        .align 3
  * .long, .word, .single, .float:
        .align (or .align 2)
  * .short:
        No explicit alignment required, since Thumb-2
        instructions are always 2 or 4 bytes in size, so the
        data is already sufficiently aligned immediately after
        an instruction.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Directives such as .long and .word do not magically cause the
assembler location counter to become aligned in gas. As a result,
using these directives in code sections can result in misaligned
data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL).
This is a Bad Thing, since the ABI permits the compiler to assume
that fundamental types of word size or above are word-aligned when
accessing them from C. If the data is not really word-aligned,
this can cause impaired performance and stray alignment faults in
some circumstances.
In general, the following rules should be applied when using data
word declaration directives inside code sections:
  * .quad and .double:
        .align 3
  * .long, .word, .single, .float:
        .align (or .align 2)
  * .short:
        No explicit alignment required, since Thumb-2
        instructions are always 2 or 4 bytes in size, so the
        data is already sufficiently aligned immediately after
        an instruction.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rather than passing the pte value to __pte_error, pass the raw pte_t
cookie instead. Do the same for pmd and pgd functions.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The perf hardware pmu got initialized at various points in the boot,
some before early_initcall(), some after (notably arch_initcall()).
The problem is that the NMI lockup detector is run from early_initcall()
and expects the hardware pmu to be present.
Sanitize this by moving all architecture hardware pmu implementations to
initialize at early_initcall() and move the lockup detector to an explicit
initcall right after that.
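On ARM, for example, the PMU registration then hangs off an early
initcall (assuming the backend's init function is named as below):

    /* probe and register the hardware PMU early ... */
    early_initcall(init_hw_perf_events);
    /* ... so the lockup detector's own initcall can rely on it */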
Cc: paulus <paulus@samba.org>
Cc: davem <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290707759.2145.119.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
swp_emulate is only used on ARMv7+, and includes ARMv7+ assembly
instructions. Allow the assembler to accept ARMv7 instructions,
but leave the compiler's code generation options alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>