Today SError is taken using the inv_entry macro that ends up in
bad_mode.
SError can be used by the RAS Extensions to notify either the OS or
firmware of CPU problems, some of which may have been corrected.
To allow this handling to be added, add a do_serror() C function
that just panic()s. Add the entry.S boilerplate to save/restore the
CPU registers and unmask debug exceptions. Future patches may change
do_serror() to return if the SError Interrupt was notification of a
corrected error.
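A minimal sketch of such a handler, assuming the calling convention used
by the other arm64 exception handlers (the actual patch may differ in
detail):

    asmlinkage void do_serror(struct pt_regs *regs, unsigned int esr)
    {
            console_verbose();

            pr_crit("SError Interrupt on CPU%d, code 0x%08x\n",
                    smp_processor_id(), esr);
            panic("Asynchronous SError Interrupt");
    }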
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Wang Xiongfeng <wangxiongfengi2@huawei.com>
[Split out of a bigger patch, added compat path, renamed, enabled debug
exceptions]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Following our 'dai' order, IRQs should be processed with debug and
SError exceptions unmasked.
Add a helper to unmask these two (and FIQ for good measure).
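Sketched as C with inline assembly for illustration (the series adds an
assembly macro for this; the helper name below is made up):

    static inline void unmask_dbg_serror_fiq(void)
    {
            /* DAIFClr immediate: D=8, A=4, F=1; IRQ (I=2) stays masked */
            asm volatile("msr daifclr, #0xd" ::: "memory");
    }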
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
el0_sync also unmasks exceptions on a case-by-case basis: debug exceptions
are enabled unless this was a debug exception, and IRQs are unmasked for
some exception types but not for others.
el0_dbg should run with everything masked to prevent us taking a debug
exception from do_debug_exception. For the other cases we can unmask
everything. This changes the behaviour of fpsimd_{acc,exc} and el0_inv
which previously ran with irqs masked.
This patch removes the last user of enable_dbg_and_irq, so remove the
macro as well.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
el1_sync unmasks exceptions on a case-by-case basis: debug exceptions
are unmasked unless this was a debug exception, and IRQs are unmasked
for instruction and data aborts only if the interrupted context had
IRQs unmasked.
Following our 'dai' order, el1_dbg should run with everything masked.
For the other cases we can inherit whatever we interrupted.
Add a macro inherit_daif to set daif based on the interrupted pstate.
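Rendered as C for illustration (the patch itself adds an entry.S macro),
inheriting daif amounts to copying the DAIF bits of the interrupted SPSR
back into PSTATE:

    /* Illustrative only: restore the interrupted context's DAIF bits */
    unsigned long flags = regs->pstate &
                          (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT);
    write_sysreg(flags, daif);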
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
To take RAS Exceptions as quickly as possible we need to keep SError
unmasked as much as possible. We need to mask it during kernel_exit
as taking an error from this code will overwrite the exception-registers.
Adding a naked 'disable_daif' to kernel_exit causes a performance problem
for micro-benchmarks that do no real work (e.g. calling getpid() in a
loop). This is because the ret_to_user loop has already masked IRQs so
that the TIF_WORK_MASK thread flags can't change underneath it, adding
disable_daif is an additional self-synchronising operation.
In the future, the RAS APEI code may need to modify the TIF_WORK_MASK
flags from an SError, in which case the ret_to_user loop must mask SError
while it examines the flags.
Disable all exceptions for return to EL1. For return to EL0, get the
ret_to_user loop to leave all exceptions masked once it has done its
work; this avoids an extra pstate write.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Remove the local_{async,fiq}_{en,dis}able macros as they don't respect
our newly defined order and are only used to set the flags for process
context when we bring CPUs online.
Add a helper to do this. The IRQ flag varies as we want it masked on
the boot CPU until we are ready to handle interrupts.
The boot CPU unmasks SError during early boot once it can print an error
message. If we can print an error message about SError, we can do the
same for FIQ. Debug exceptions are already enabled by __cpu_setup(),
which has also configured MDSCR_EL1 to disable MDE and KDE.
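A sketch of such a helper, with a name invented here for illustration
(the real patch may differ):

    /* Set the DAIF flags for process context on a newly-online CPU.
     * The boot CPU keeps IRQs masked until it is ready to handle them. */
    static inline void init_daif_procctx(bool keep_irq_masked)
    {
            if (keep_irq_masked)
                    local_daif_restore(DAIF_PROCCTX_NOIRQ); /* D,A,F clear */
            else
                    local_daif_restore(DAIF_PROCCTX); /* all unmasked */
    }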
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently SError is always masked in the kernel. To support RAS exceptions
using SError on hardware with the v8.2 RAS Extensions we need to unmask
SError as much as possible.
Let's define an order for masking and unmasking exceptions. 'dai' is
memorable and effectively what we have today.
Disabling debug exceptions should cause all other exceptions to be masked.
Masking SError should mask irq, but not disable debug exceptions.
Masking irqs has no side effects for other flags. Keeping to this order
makes it easier for entry.S to know which exceptions should be unmasked.
FIQ is never expected, but we mask it when we mask debug exceptions, and
unmask it at all other times.
Given masking debug exceptions masks everything, we don't need macros
to save/restore that bit independently. Remove them and switch the last
caller over to use the daif calls.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
There are a few places where we want to mask all exceptions. Today we
do this in a piecemeal fashion: typically we expect the caller to
have masked IRQs and the arch code masks debug exceptions, ignoring
SError, which is probably masked.
Make it clear that 'mask all exceptions' is the intention by adding
helpers to do exactly that.
This will let us unmask SError without having to add 'oh and SError'
to these paths.
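A minimal sketch of such helpers (close in spirit to the patch, though
details may differ):

    static inline void local_daif_mask(void)
    {
            /* Mask debug, SError, IRQ and FIQ in a single pstate write */
            asm volatile("msr daifset, #0xf" ::: "memory");
    }

    static inline void local_daif_unmask(void)
    {
            asm volatile("msr daifclr, #0xf" ::: "memory");
    }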
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
After commit 9e8e865bbe ("arm64: unify idmap removal"), we no longer
need to flush the TLB in suspend.c, so the include of tlbflush.h can be
removed.
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing
SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.) Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The vdso tries to check for a NULL res pointer in __kernel_clock_getres,
but only checks the lower 32 bits, as it uses CBZ on the W register the
res pointer is held in.
Thus, if the res pointer happened to be aligned to a 4GiB boundary, we'd
spuriously skip storing the timespec to it, while returning a zero error code
to the caller.
Prevent this by checking the whole pointer, using CBZ on the X register
the res pointer is held in.
Fixes: 9031fefde6 ("arm64: VDSO support")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Andrew Pinski <apinski@cavium.com>
Reported-by: Mark Salyzyn <salyzyn@android.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We can decode the PSTATE easily enough, so pretty-print it in register
dumps.
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Printing raw pointer values in backtraces has potential security
implications and is of questionable value anyway.
This patch follows x86's lead and removes the "Exception stack:" dump
from kernel backtraces, as well as converting PC/LR values to symbols
such as "sysrq_handle_crash+0x20/0x30".
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Software Step exception is missing after stepping a trapped instruction.
Ensure SPSR.SS gets set to 0 after emulating/skipping a trapped instruction
before doing ERET.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[will: replaced AARCH32_INSN_SIZE with 4]
Signed-off-by: Will Deacon <will.deacon@arm.com>
__memcpy_{to,from}io fall back to byte-at-a-time copying if both the
source and destination pointers are not 8-byte aligned. Since one of the
pointers always points at normal memory, this is unnecessary and
detrimental to performance, so only do byte copying until we hit an 8-byte
boundary for the device pointer.
This change was motivated by performance issues in the pstore driver.
On a test platform, the measured probe time for pstore (with a console
buffer size of 1/4MB and pmsg of 1/2MB) was in the 90-107ms region. The
change reduced it to 10-25ms, an improvement in boot time.
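A sketch of the idea for the read direction (illustrative, not the exact
kernel code): byte-copy only until the device-side pointer is 8-byte
aligned, then switch to 64-bit accesses, since the memory-side pointer
tolerates unaligned accesses.

    void __memcpy_fromio(void *to, const volatile void __iomem *from,
                         size_t count)
    {
            /* Byte-copy until the device pointer is 8-byte aligned */
            while (count && !IS_ALIGNED((unsigned long)from, 8)) {
                    *(u8 *)to = __raw_readb(from);
                    from++;
                    to++;
                    count--;
            }

            /* Bulk of the copy, with 64-bit device accesses */
            while (count >= 8) {
                    *(u64 *)to = __raw_readq(from);
                    from += 8;
                    to += 8;
                    count -= 8;
            }

            /* Tail */
            while (count) {
                    *(u8 *)to = __raw_readb(from);
                    from++;
                    to++;
                    count--;
            }
    }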
Cc: Kees Cook <keescook@chromium.org>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge in ARM PMU and perf updates for 4.15:
- Support for the Statistical Profiling Extension
- Support for Hisilicon's SoC PMU
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that the ARM ARM clearly specifies the rules for inferring
the values of the ID register fields, fix the types of the
feature bits we have in the kernel.
ARM ARM DDI0487B.b, section D10.1.4 "Principles of the
ID scheme for fields in ID registers", lists the registers to
which the scheme applies, along with the exceptions.
This patch changes the relevant feature bits from FTR_EXACT
to FTR_LOWER_SAFE to select the safer value. This will enable
an older kernel running on a new CPU to detect the safer option
rather than completely disabling the feature.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When booting at EL2, ensure that we permit the EL1 host to sample
physical addresses and physical counter values using SPE.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3.
Expose them to userspace via HWCAPs and mrs emulation.
SHA2-512 - Instruction support for the SHA512 hash algorithm (e.g. SHA512H,
SHA512H2, SHA512U0, SHA512SU1)
SHA3 - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX).
SM3 - Instruction support for Chinese cryptography algorithm SM3
SM4 - Instruction support for Chinese cryptography algorithm SM4
DP - Dot Product instructions (UDOT, SDOT).
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We register the pm/hotplug callbacks for FPSIMD as a late_initcall,
which runs after userspace is active (started from the initramfs via
populate_rootfs, a rootfs_initcall). Make sure we are ready even
before userspace could potentially use FPSIMD, by promoting the
registration to a core_initcall.
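The change itself is a one-liner; a sketch (the initcall function name is
illustrative):

    /* was: late_initcall(fpsimd_init); */
    core_initcall(fpsimd_init);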
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We trap and emulate some instructions (e.g. mrs, deprecated instructions)
for userspace. However, the handlers for these are registered as
late_initcalls, and userspace could be up and running from the initramfs
by that time (with populate_rootfs, which is a rootfs_initcall()). This
could cause problems for early applications, which end up failing
like:
[ 11.152061] modprobe[93]: undefined instruction: pc=0000ffff8ca48ff4
This patch promotes the specific calls to core_initcalls, which are
guaranteed to be completed before we hit userspace.
Cc: stable@vger.kernel.org
Cc: Dave Martin <dave.martin@arm.com>
Cc: Matthias Brugger <mbrugger@suse.com>
Cc: James Morse <james.morse@arm.com>
Reported-by: Matwey V. Kornilov <matwey.kornilov@gmail.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently we inconsistently log identifying information for the boot CPU
and secondary CPUs. For the boot CPU, we log the MIDR and MPIDR across
separate messages, whereas for the secondary CPUs we only log the MIDR.
In some cases, it would be useful to know the MPIDR of secondary CPUs,
and it would be nice for these messages to be consistent.
This patch ensures that in the primary and secondary boot paths, we log
both the MPIDR and MIDR in a single message, with a consistent format.
The MPIDR is consistently padded to 10 hex characters to cover Aff3 in
bits 39:32, so that IDs can be compared easily.
The newly redundant message in setup_arch() is removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Al Stone <ahs3@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[will: added '0x' prefixes consistently]
Signed-off-by: Will Deacon <will.deacon@arm.com>
As you see in init/version.c, init_uts_ns.name.machine is initially
set to UTS_MACHINE. There is no point in copying the same string.
I dug through the git history to figure out why this line is here. My best
guess is like this:
- This line has been around here since the initial support of arm64
by commit 9703d9d7f7 ("arm64: Kernel booting and initialisation").
If ARCH (=arm64) and UTS_MACHINE (=aarch64) do not match,
arch/$(ARCH)/Makefile is supposed to override UTS_MACHINE, but the
initial version of arch/arm64/Makefile failed to do that. Instead,
the boot code copied "aarch64" to init_utsname()->machine.
- Commit 94ed1f2cb5 ("arm64: setup: report ELF_PLATFORM as the
machine for utsname") replaced "aarch64" with ELF_PLATFORM to
make "uname" to reflect the endianness.
- ELF_PLATFORM does not help to provide the UTS machine name to rpm
target, so commit cfa88c7946 ("arm64: Set UTS_MACHINE in the
Makefile") fixed it. The commit simply replaced ELF_PLATFORM with
UTS_MACHINE, but missed the fact that the string copy itself is no longer
needed.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ILP32 series [1] introduces a dependency on <asm/is_compat.h> for
the TASK_SIZE macro, which in turn requires <asm/thread_info.h>; and
<asm/thread_info.h> includes <asm/memory.h>, giving a circular dependency,
because TASK_SIZE is currently located in <asm/memory.h>.
On other architectures, TASK_SIZE is defined in <asm/processor.h>, and
moving TASK_SIZE there fixes the problem.
Discussion: https://patchwork.kernel.org/patch/9929107/
[1] https://github.com/norov/linux/tree/ilp32-next
CC: Will Deacon <will.deacon@arm.com>
CC: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When the kernel is entered at EL2 on an ARMv8.0 system, we construct
the EL1 pstate and make sure this uses the EL1 stack pointer
(we perform an exception return to EL1h).
But if the kernel is either entered at EL1 or stays at EL2 (because
we're on a VHE-capable system), we fail to set SPsel, and use whatever
stack selection the higher exception level has chosen for us.
Let's not take any chances, and make sure that SPsel is set to one
before we decide the mode we're going to run in.
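The fix amounts to one early pstate write, sketched here as C with inline
assembly (the patch does this in the early assembly boot path):

    /* Use SP_ELx, not SP_EL0, whatever EL we entered at */
    asm volatile("msr SPSel, #1");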
Cc: <stable@vger.kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull address-limit checking fixes from Ingo Molnar:
"This fixes a number of bugs in the address-limit (USER_DS) checks that
got introduced in the merge window, (mostly) affecting the ARM and
ARM64 platforms"
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
arm64/syscalls: Move address limit check in loop
arm/syscalls: Optimize address limit check
Revert "arm/syscalls: Check address limit on user-mode return"
syscalls: Use CHECK_DATA_CORRUPTION for addr_limit_user_check
__efi_fpsimd_begin()/__efi_fpsimd_end() are for use when making EFI
calls only, so using them in non-EFI kernels is not allowed.
This patch compiles them out if CONFIG_EFI is not set.
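A sketch of how the compile-out can look (illustrative, not necessarily
the exact patch):

    #ifdef CONFIG_EFI
    extern void __efi_fpsimd_begin(void);
    extern void __efi_fpsimd_end(void);
    #else
    /* Non-EFI kernels make no EFI calls, so there is no state to save */
    static inline void __efi_fpsimd_begin(void) { }
    static inline void __efi_fpsimd_end(void) { }
    #endif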
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
A bug was reported on ARM where set_fs might be called after it was
checked in the work-pending function. ARM64 is not affected by this bug
but has a similar construct. In order to avoid any similar problems in
the future, the addr_limit_user_check function is moved to the beginning
of the loop.
Fixes: cf7de27ab3 ("arm64/syscalls: Check address limit on user-mode return")
Reported-by: Leonard Crestez <leonard.crestez@nxp.com>
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Yonghong Song <yhs@fb.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1504798247-48833-5-git-send-email-keescook@chromium.org
The stacktraces always begin as follows:
[<c00117b4>] save_stack_trace_tsk+0x0/0x98
[<c0011870>] save_stack_trace+0x24/0x28
...
This is because the stack trace code includes the stack frames for
itself. This is incorrect behaviour, and also leads to "skip" (the
number of stack frames to avoid recording) doing the wrong thing.
Perversely, it does the right thing when passed a non-current thread.
Fix this by ensuring that we have a known constant number of frames
above the main stack trace function, and always skip these.
This was fixed for 32-bit ARM by commit 3683f44c42 ("ARM: stacktrace:
avoid listing stacktrace functions in stacktrace").
Link: http://lkml.kernel.org/r/1504078343-28754-1-git-send-email-guptap@codeaurora.org
Signed-off-by: Prakash Gupta <guptap@codeaurora.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull namespace updates from Eric Biederman:
"Life has been busy and I have not gotten half as much done this round
as I would have liked. I delayed it so that a minor conflict
resolution with the mips tree could spend a little time in linux-next
before I sent this pull request.
This includes two long delayed user namespace changes from Kirill
Tkhai. It also includes a very useful change from Serge Hallyn that
allows the security capability attribute to be used inside of user
namespaces. The practical effect of this is people can now untar
tarballs and install rpms in user namespaces. It had been suggested to
generalize this and encode some of the namespace information in the
xattr name. Upon close inspection that makes the
things that should be hard easy and the things that should be easy
more expensive.
Then there is my bugfix/cleanup for signal injection that removes the
magic encoding of the siginfo union member from the kernel internal
si_code. The mips folks reported that the case where I had used FPE_FIXME
is impossible, so I have removed FPE_FIXME from mips, while at the
same time including a return statement in that case to keep gcc from
complaining about uninitialized variables.
I almost finished the work to make copy_siginfo_to_user a trivial
copy to user. The code is available at:
git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace.git neuter-copy_siginfo_to_user-v3
But I did not have time/energy to get the code posted and reviewed
before the merge window opened.
I was able to see that the security excuse for just copying fields
that we know are initialized doesn't work in practice: there are buggy
initializations that don't initialize the proper fields in siginfo. So
we still sometimes copy uninitialized data to userspace"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
Introduce v3 namespaced file capabilities
mips/signal: In force_fcr31_sig return in the impossible case
signal: Remove kernel interal si_code magic
fcntl: Don't use ambiguous SIG_POLL si_codes
prctl: Allow local CAP_SYS_ADMIN changing exe_file
security: Use user_namespace::level to avoid redundant iterations in cap_capable()
userns,pidns: Verify the userns for new pid namespaces
signal/testing: Don't look for __SI_FAULT in userspace
signal/mips: Document a conflict with SI_USER with SIGFPE
signal/sparc: Document a conflict with SI_USER with SIGFPE
signal/ia64: Document a conflict with SI_USER with SIGFPE
signal/alpha: Document a conflict with SI_USER for SIGTRAP
Merge more updates from Andrew Morton:
- most of the rest of MM
- a small number of misc things
- lib/ updates
- checkpatch
- autofs updates
- ipc/ updates
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (126 commits)
ipc: optimize semget/shmget/msgget for lots of keys
ipc/sem: play nicer with large nsops allocations
ipc/sem: drop sem_checkid helper
ipc: convert kern_ipc_perm.refcount from atomic_t to refcount_t
ipc: convert sem_undo_list.refcnt from atomic_t to refcount_t
ipc: convert ipc_namespace.count from atomic_t to refcount_t
kcov: support compat processes
sh: defconfig: cleanup from old Kconfig options
mn10300: defconfig: cleanup from old Kconfig options
m32r: defconfig: cleanup from old Kconfig options
drivers/pps: use surrounding "if PPS" to remove numerous dependency checks
drivers/pps: aesthetic tweaks to PPS-related content
cpumask: make cpumask_next() out-of-line
kmod: move #ifdef CONFIG_MODULES wrapper to Makefile
kmod: split off umh headers into its own file
MAINTAINERS: clarify kmod is just a kernel module loader
kmod: split out umh code into its own file
test_kmod: flip INT checks to be consistent
test_kmod: remove paranoid UINT_MAX check on uint range processing
vfat: deduplicate hex2bin()
...
First, the number of CPUs can't be negative.
Second, different signedness leads to suboptimal code in the following
cases:
1)
kmalloc(nr_cpu_ids * sizeof(X));
"int" has to be sign extended to size_t.
2)
while (loff_t *pos < nr_cpu_ids)
MOVSXD is 1 byte longer than the same MOV.
Other cases exist as well. Basically, the compiler is told that nr_cpu_ids
can't be negative, which can't be deduced if it is "int".
Code savings on allyesconfig kernel: -3KB
add/remove: 0/0 grow/shrink: 25/264 up/down: 261/-3631 (-3370)
function old new delta
coretemp_cpu_online 450 512 +62
rcu_init_one 1234 1272 +38
pci_device_probe 374 399 +25
...
pgdat_reclaimable_pages 628 556 -72
select_fallback_rq 446 369 -77
task_numa_find_cpu 1923 1807 -116
Link: http://lkml.kernel.org/r/20170819114959.GA30580@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'pci-v4.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
- add enhanced Downstream Port Containment support, which prints more
details about Root Port Programmed I/O errors (Dongdong Liu)
- add Layerscape ls1088a and ls2088a support (Hou Zhiqiang)
- add MediaTek MT2712 and MT7622 support (Ryder Lee)
- add MediaTek MT2712 and MT7622 MSI support (Honghui Zhang)
- add Qualcomm IPQ8074 support (Varadarajan Narayanan)
- add R-Car r8a7743/5 device tree support (Biju Das)
- add Rockchip per-lane PHY support for better power management (Shawn
Lin)
- fix IRQ mapping for hot-added devices by replacing the
pci_fixup_irqs() boot-time design with a host bridge hook called at
probe-time (Lorenzo Pieralisi, Matthew Minter)
- fix race when enabling two devices that results in upstream bridge
not being enabled correctly (Srinath Mannam)
- fix pciehp power fault infinite loop (Keith Busch)
- fix SHPC bridge MSI hotplug events by enabling bus mastering
(Aleksandr Bezzubikov)
- fix a VFIO issue by correcting PCIe capability sizes (Alex
Williamson)
- fix an INTD issue on Xilinx and possibly other drivers by unifying
INTx IRQ domain support (Paul Burton)
- avoid IOMMU stalls by marking AMD Stoney GPU ATS as broken (Joerg
Roedel)
- allow APM X-Gene device assignment to guests by adding an ACS quirk
(Feng Kan)
- fix driver crashes by disabling Extended Tags on Broadcom HT2100
(Extended Tags support is required for PCIe Receivers but not
Requesters, and we now enable them by default when Requesters support
them) (Sinan Kaya)
- fix MSIs for devices that use phantom RIDs for DMA by assuming MSIs
use the real Requester ID (not a phantom RID) (Robin Murphy)
- prevent assignment of Intel VMD children to guests (which may be
supported eventually, but isn't yet) by not associating an IOMMU with
them (Jon Derrick)
- fix Intel VMD suspend/resume by releasing IRQs on suspend (Scott
Bauer)
- fix a Function-Level Reset issue with Intel 750 NVMe by waiting
longer (up to 60sec instead of 1sec) for device to become ready
(Sinan Kaya)
- fix a Function-Level Reset issue on iProc Stingray by working around
hardware defects in the CRS implementation (Oza Pawandeep)
- fix an issue with Intel NVMe P3700 after an iProc reset by adding a
delay during shutdown (Oza Pawandeep)
- fix a Microsoft Hyper-V lockdep issue by polling instead of blocking
in compose_msi_msg() (Stephen Hemminger)
- fix a wireless LAN driver timeout by clearing DesignWare MSI
interrupt status after it is handled, not before (Faiz Abbas)
- fix DesignWare ATU enable checking (Jisheng Zhang)
- reduce Layerscape dependencies on the bootloader by doing more
initialization in the driver (Hou Zhiqiang)
- improve Intel VMD performance allowing allocation of more IRQ vectors
than present CPUs (Keith Busch)
- improve endpoint framework support for initial DMA mask, different
BAR sizes, configurable page sizes, MSI, test driver, etc (Kishon
Vijay Abraham I, Stan Drozd)
- rework CRS support to add periodic messages while we poll during
enumeration and after Function-Level Reset and prepare for possible
other uses of CRS (Sinan Kaya)
- clean up Root Port AER handling by removing unnecessary code and
moving error handler methods to struct pcie_port_service_driver
(Christoph Hellwig)
- clean up error handling paths in various drivers (Bjorn Andersson,
Fabio Estevam, Gustavo A. R. Silva, Harunobu Kurokawa, Jeffy Chen,
Lorenzo Pieralisi, Sergei Shtylyov)
- clean up SR-IOV resource handling by disabling VF decoding before
updating the corresponding resource structs (Gavin Shan)
- clean up DesignWare-based drivers by unifying quirks to update Class
Code and Interrupt Pin and related handling of write-protected
registers (Hou Zhiqiang)
- clean up by adding empty generic pcibios_align_resource() and
pcibios_fixup_bus() and removing empty arch-specific implementations
(Palmer Dabbelt)
- request exclusive reset control for several drivers to allow cleanup
elsewhere (Philipp Zabel)
- constify various structures (Arvind Yadav, Bhumika Goyal)
- convert from full_name() to %pOF (Rob Herring)
- remove unused variables from iProc, HiSi, Altera, Keystone (Shawn
Lin)
* tag 'pci-v4.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (170 commits)
PCI: xgene: Clean up whitespace
PCI: xgene: Define XGENE_PCI_EXP_CAP and use generic PCI_EXP_RTCTL offset
PCI: xgene: Fix platform_get_irq() error handling
PCI: xilinx-nwl: Fix platform_get_irq() error handling
PCI: rockchip: Fix platform_get_irq() error handling
PCI: altera: Fix platform_get_irq() error handling
PCI: spear13xx: Fix platform_get_irq() error handling
PCI: artpec6: Fix platform_get_irq() error handling
PCI: armada8k: Fix platform_get_irq() error handling
PCI: dra7xx: Fix platform_get_irq() error handling
PCI: exynos: Fix platform_get_irq() error handling
PCI: iproc: Clean up whitespace
PCI: iproc: Rename PCI_EXP_CAP to IPROC_PCI_EXP_CAP
PCI: iproc: Add 500ms delay during device shutdown
PCI: Fix typos and whitespace errors
PCI: Remove unused "res" variable from pci_resource_io()
PCI: Correct kernel-doc of pci_vpd_srdt_size(), pci_vpd_srdt_tag()
PCI/AER: Reformat AER register definitions
iommu/vt-d: Prevent VMD child devices from being remapping targets
x86/PCI: Use is_vmd() rather than relying on the domain number
...
Merge tag 'acpi-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI updates from Rafael Wysocki:
"These include a usual ACPICA code update (this time to upstream
revision 20170728), a fix for a boot crash on some systems with
Thunderbolt devices connected at boot time, a rework of the handling
of PCI bridges when setting up device wakeup, new support for Apple
device properties, support for DMA configurations reported via ACPI on
ARM64, APEI-related updates, ACPI EC driver updates and assorted minor
modifications in several places.
Specifics:
- Update the ACPICA code in the kernel to upstream revision 20170728
including:
* Alias operator handling update (Bob Moore).
* Deferred resolution of reference package elements (Bob Moore).
* Support for the _DMA method in walk resources (Bob Moore).
* Tables handling update and support for deferred table
verification (Lv Zheng).
* Update of SMMU models for IORT (Robin Murphy).
* Compiler and disassembler updates (Alex James, Erik Schmauss,
Ganapatrao Kulkarni, James Morse).
* Tools updates (Erik Schmauss, Lv Zheng).
* Assorted minor fixes and cleanups (Bob Moore, Kees Cook, Lv
Zheng, Shao Ming).
- Rework the initialization of non-wakeup GPEs with method handlers
in order to address a boot crash on some systems with Thunderbolt
devices connected at boot time where we miss an early hotplug event
due to a delay in GPE enabling (Rafael Wysocki).
- Rework the handling of PCI bridges when setting up ACPI-based
device wakeup in order to avoid disabling wakeup for bridges
prematurely (Rafael Wysocki).
- Consolidate Apple DMI checks throughout the tree, add support for
Apple device properties to the device properties framework and use
these properties for the handling of I2C and SPI devices on Apple
systems (Lukas Wunner).
- Add support for _DMA to the ACPI-based device properties lookup
code and make it possible to use the information from there to
configure DMA regions on ARM64 systems (Lorenzo Pieralisi).
- Fix several issues in the APEI code, add support for exporting the
BERT error region over sysfs and update APEI MAINTAINERS entry with
reviewers information (Borislav Petkov, Dongjiu Geng, Loc Ho, Punit
Agrawal, Tony Luck, Yazen Ghannam).
- Fix a potential initialization ordering issue in the ACPI EC driver
and clean it up somewhat (Lv Zheng).
- Update the ACPI SPCR driver to extend the existing XGENE 8250
workaround in it to a new platform (m400) and to work around an
Xgene UART clock issue (Graeme Gregory).
- Add a new utility function to the ACPI core to support using ACPI
OEM ID / OEM Table ID / Revision for system identification in
blacklisting or similar and switch over the existing code already
using this information to this new interface (Toshi Kani).
- Fix an xpower PMIC issue related to GPADC reads that always return
0 without extra pin manipulations (Hans de Goede).
- Add statements to print debug messages in a couple of places in the
ACPI core for easier diagnostics (Rafael Wysocki).
- Clean up the ACPI processor driver slightly (Colin Ian King, Hanjun
Guo).
- Clean up the ACPI x86 boot code somewhat (Andy Shevchenko).
- Add a quirk for Dell OptiPlex 9020M to the ACPI backlight driver
(Alex Hung).
- Assorted fixes, cleanups and updates related to ACPI (Amitoj Kaur
Chawla, Bhumika Goyal, Frank Rowand, Jean Delvare, Punit Agrawal,
Ronald Tschalär, Sumeet Pawnikar)"
* tag 'acpi-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (75 commits)
ACPI / APEI: Suppress message if HEST not present
intel_pstate: convert to use acpi_match_platform_list()
ACPI / blacklist: add acpi_match_platform_list()
ACPI, APEI, EINJ: Subtract any matching Register Region from Trigger resources
ACPI: make device_attribute const
ACPI / sysfs: Extend ACPI sysfs to provide access to boot error region
ACPI: APEI: fix the wrong iteration of generic error status block
ACPI / processor: make function acpi_processor_check_duplicates() static
ACPI / EC: Clean up EC GPE mask flag
ACPI: EC: Fix possible issues related to EC initialization order
ACPI / PM: Add debug statements to acpi_pm_notify_handler()
ACPI: Add debug statements to acpi_global_event_handler()
ACPI / scan: Enable GPEs before scanning the namespace
ACPICA: Make it possible to enable runtime GPEs earlier
ACPICA: Dispatch active GPEs at init time
ACPI: SPCR: work around clock issue on xgene UART
ACPI: SPCR: extend XGENE 8250 workaround to m400
ACPI / LPSS: Don't abort ACPI scan on missing mem resource
mailbox: pcc: Drop uninformative output during boot
ACPI/IORT: Add IORT named component memory address limits
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- VMAP_STACK support, allowing the kernel stacks to be allocated in the
vmalloc space with a guard page for trapping stack overflows. One of
the patches introduces THREAD_ALIGN and changes the generic
alloc_thread_stack_node() to use this instead of THREAD_SIZE (no
functional change for other architectures)
- Contiguous PTE hugetlb support re-enabled (after being reverted a
couple of times). We now have the semantics agreed in the generic mm
layer together with API improvements so that the architecture code
can distinguish between contiguous and non-contiguous huge PTEs
- Initial support for persistent memory on ARM: DC CVAP instruction
exposed to user space (HWCAP) and the in-kernel pmem API implemented
- raid6 improvements for arm64: faster algorithm for the delta syndrome
and implementation of the recovery routines using Neon
- FP/SIMD refactoring and removal of support for Neon in interrupt
context. This is in preparation for full SVE support
- PTE accessors converted from inline asm to cmpxchg so that we can use
LSE atomics if available (ARMv8.1)
- Perf support for Cortex-A35 and A73
- Non-urgent fixes and cleanups
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (75 commits)
arm64: cleanup {COMPAT_,}SET_PERSONALITY() macro
arm64: introduce separated bits for mm_context_t flags
arm64: hugetlb: Cleanup setup_hugepagesz
arm64: Re-enable support for contiguous hugepages
arm64: hugetlb: Override set_huge_swap_pte_at() to support contiguous hugepages
arm64: hugetlb: Override huge_pte_clear() to support contiguous hugepages
arm64: hugetlb: Handle swap entries in huge_pte_offset() for contiguous hugepages
arm64: hugetlb: Add break-before-make logic for contiguous entries
arm64: hugetlb: Spring clean huge pte accessors
arm64: hugetlb: Introduce pte_pgprot helper
arm64: hugetlb: set_huge_pte_at Add WARN_ON on !pte_present
arm64: kexec: have own crash_smp_send_stop() for crash dump for nonpanic cores
arm64: dma-mapping: Mark atomic_pool as __ro_after_init
arm64: dma-mapping: Do not pass data to gen_pool_set_algo()
arm64: Remove the !CONFIG_ARM64_HW_AFDBM alternative code paths
arm64: Ignore hardware dirty bit updates in ptep_set_wrprotect()
arm64: Move PTE_RDONLY bit handling out of set_pte_at()
kvm: arm64: Convert kvm_set_s2pte_readonly() from inline asm to cmpxchg()
arm64: Convert pte handling from inline asm to using (cmp)xchg
arm64: neon/efi: Make EFI fpsimd save/restore variables static
...
Pull syscall updates from Ingo Molnar:
"Improve the security of set_fs(): we now check the address limit on a
number of key platforms (x86, arm, arm64) before returning to
user-space - without adding overhead to the typical system call fast
path"
* 'x86-syscall-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
arm64/syscalls: Check address limit on user-mode return
arm/syscalls: Check address limit on user-mode return
x86/syscalls: Check address limit on user-mode return
Pull RCU updates from Ingo Molnar:
"The main RCU related changes in this cycle were:
- Removal of spin_unlock_wait()
- SRCU updates
- RCU torture-test updates
- RCU Documentation updates
- Extend the sys_membarrier() ABI with the MEMBARRIER_CMD_PRIVATE_EXPEDITED variant
- Miscellaneous RCU fixes
- CPU-hotplug fixes"
* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
arch: Remove spin_unlock_wait() arch-specific definitions
locking: Remove spin_unlock_wait() generic definitions
drivers/ata: Replace spin_unlock_wait() with lock/unlock pair
ipc: Replace spin_unlock_wait() with lock/unlock pair
exit: Replace spin_unlock_wait() with lock/unlock pair
completion: Replace spin_unlock_wait() with lock/unlock pair
doc: Set down RCU's scheduling-clock-interrupt needs
doc: No longer allowed to use rcu_dereference on non-pointers
doc: Add RCU files to docbook-generation files
doc: Update memory-barriers.txt for read-to-write dependencies
doc: Update RCU documentation
membarrier: Provide expedited private command
rcu: Remove exports from rcu_idle_exit() and rcu_idle_enter()
rcu: Add warning to rcu_idle_enter() for irqs enabled
rcu: Make rcu_idle_enter() rely on callers disabling irqs
rcu: Add assertions verifying blocked-tasks list
rcu/tracing: Set disable_rcu_irq_enter on rcu_eqs_exit()
rcu: Add TPS() protection for _rcu_barrier_trace strings
rcu: Use idle versions of swait to make idle-hack clear
swait: Add idle variants which don't contribute to load average
...
There is some work that should be done after setting the personality.
Currently it's done in the macro, which is not the best idea.
In this patch, a new arch_setup_new_exec() routine is introduced, and all
setup code is moved there, as suggested by Catalin:
https://lkml.org/lkml/2017/8/4/494
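On arm64 the new hook can be very small; a sketch in the spirit of the
patch (the exact body follows whatever code is moved out of the macro):

    void arch_setup_new_exec(void)
    {
            /* Record whether the new image is a 32-bit task */
            current->mm->context.flags =
                    is_compat_task() ? MMCF_AARCH32 : 0;
    }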
Cc: Pratyush Anand <panand@redhat.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
[catalin.marinas@arm.com: comments changed or removed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With 16KB pages and a kernel Image larger than 16MB, the current
kaslr_early_init() logic for avoiding mappings across swapper table
boundaries fails since increasing the offset by kimg_sz just moves the
problem to the next boundary.
This patch rounds the offset down to (1 << SWAPPER_TABLE_SHIFT) if the
Image crosses a PMD_SIZE boundary.
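A sketch of the resulting check (illustrative; the follow-up patch below
adjusts exactly which addresses feed into it):

    /* If the Image would straddle a swapper table boundary, round the
     * offset down so the mapping fits in a single table entry. */
    if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
        (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
            offset = round_down(offset, 1 << SWAPPER_TABLE_SHIFT);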
Fixes: afd0e5a876 ("arm64: kaslr: Fix up the kernel image alignment")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In the KASLR setup routine, we ensure that the early virtual mapping
of the kernel image does not cover more than a single table entry at
the level above the swapper block level, so that the assembler routines
involved in setting up this mapping can remain simple.
In this calculation we add the proposed KASLR offset to the values of
the _text and _end markers, and reject it if they would end up falling
in different swapper table sized windows.
However, when taking the addresses of _text and _end, the modulo offset
(the physical displacement modulo 2 MB) is already accounted for, so
adding it again gives incorrect results. Disregard the modulo offset
in the calculation.
Fixes: 08cdac619c ("arm64: relocatable: deal with physically misaligned ...")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
There are some tricky dependencies between the different stages of
flushing the FPSIMD register state during exec, and these can race
with context switch in ways that can cause the old task's regs to
leak across. In particular, a context switch during the memset() can
cause some of the task's old FPSIMD registers to reappear.
Disabling preemption for this small window would be no big deal for
performance: preemption is already disabled for similar scenarios
like updating the FPSIMD registers in sigreturn.
So, instead of rearranging things in ways that might swap existing
subtle bugs for new ones, this patch just disables preemption
around the FPSIMD state flushing so that races of this type can't
occur here. This brings fpsimd_flush_thread() into line with other
code paths.
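A sketch of the shape of the fix (illustrative; field and helper names
follow the arm64 FPSIMD code):

    void fpsimd_flush_thread(void)
    {
            preempt_disable();
            memset(&current->thread.fpsimd_state, 0,
                   sizeof(struct fpsimd_state));
            fpsimd_flush_task_state(current);
            set_thread_flag(TIF_FOREIGN_FPSTATE);
            preempt_enable();
    }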
Cc: stable@vger.kernel.org
Fixes: 674c242c93 ("arm64: flush FP/SIMD state correctly after execve()")
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently the mm->context.flags field uses thread_info flags, which is
not the best idea for many reasons. For example, mm_context_t doesn't
need most of the thread_info flags, and it would be difficult to add a
new mm-related flag if needed because it may easily interfere with TIF
ones.
To deal with it, the new MMCF_AARCH32 flag is introduced for
mm_context_t->flags, where MMCF prefix stands for mm_context_t flags.
Also, the mm_context_t flag doesn't require atomicity or ordering of
access, so the use of set/clear_bit() is replaced with simple masks.
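A sketch of the new flag and a plain (non-atomic) accessor; the helper
name here is invented for illustration:

    #define MMCF_AARCH32    0x1 /* mm context flag for AArch32 executables */

    static inline bool mm_is_aarch32(mm_context_t *ctx)
    {
            /* A plain load and mask; no atomic bitops required */
            return ctx->flags & MMCF_AARCH32;
    }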
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 0ee5941 ("x86/panic: replace smp_send_stop() with kdump friendly
version in panic path") introduced crash_smp_send_stop(), which is a weak
function and can be overridden by architecture code to fix the side effect
caused by commit f06e515 ("kernel/panic.c: add "crash_kexec_post_notifiers"
option").
The ARM64 architecture uses the weak version of the function, and the
problem is that the weak function simply calls smp_send_stop(), which
takes the other CPUs offline and takes away the chance to save crash
information for nonpanic CPUs in machine_crash_shutdown() when the
crash_kexec_post_notifiers kernel option is enabled.
Calling smp_send_crash_stop() in machine_crash_shutdown() is useless
because all nonpanic CPUs are already offline by smp_send_stop() in this
case and smp_send_crash_stop() only works against online CPUs.
The result is that secondary CPUs registers are not saved by
crash_save_cpu() and the vmcore file misreports these CPUs as being
offline.
crash_smp_send_stop() is implemented to fix this problem by replacing the
existing smp_send_crash_stop() and adding a check for multiple calls to
the function. The function (strong symbol version) saves crash information
for nonpanic CPUs and machine_crash_shutdown() tries to save crash
information for nonpanic CPUs only when crash_kexec_post_notifiers kernel
option is disabled.
* crash_kexec_post_notifiers : false

    panic()
      __crash_kexec()
        machine_crash_shutdown()
          crash_smp_send_stop()    <= save crash dump for nonpanic cores

* crash_kexec_post_notifiers : true

    panic()
      crash_smp_send_stop()        <= save crash dump for nonpanic cores
      __crash_kexec()
        machine_crash_shutdown()
          crash_smp_send_stop()    <= just return.
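A simplified sketch of the strong version matching the flow above (the
IPI fan-out to the nonpanic CPUs is elided behind the existing cross-call
helper):

  void crash_smp_send_stop(void)
  {
          static int cpus_stopped;

          /* may be called twice in the panic path, but act only once */
          if (cpus_stopped)
                  return;
          cpus_stopped = 1;

          /* IPI the nonpanic CPUs so they save their regs and park */
          smp_send_crash_stop();
  }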
Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently PTE_RDONLY is treated as a hardware only bit and not handled
by the pte_mkwrite(), pte_wrprotect() or the user PAGE_* definitions.
The set_pte_at() function is responsible for setting this bit based on
the write permission or dirty state. This patch moves the PTE_RDONLY
handling out of set_pte_at into the pte_mkwrite()/pte_wrprotect()
functions. The PAGE_* definitions need to be updated to explicitly
include PTE_RDONLY when !PTE_WRITE.
The patch also removes the redundant PAGE_COPY(_EXEC) definitions as
they are identical to the corresponding PAGE_READONLY(_EXEC).
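After the change, the helpers manage both bits together; a sketch of the
updated functions:

  static inline pte_t pte_wrprotect(pte_t pte)
  {
          pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
          pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
          return pte;
  }

  static inline pte_t pte_mkwrite(pte_t pte)
  {
          pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
          pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
          return pte;
  }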
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* 'for-next/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux:
arm64: perf: add support for Cortex-A35
arm64: perf: add support for Cortex-A73
arm64: perf: Remove redundant entries from CPU-specific event maps
arm64: perf: Connect additional events to pmu counters
arm64: perf: Allow standard PMUv3 events to be extended by the CPU type
perf: xgene: Remove unnecessary managed resources cleanup
arm64: perf: Allow more than one cycle counter to be used
The percpu variables efi_fpsimd_state and efi_fpsimd_state_used,
used by the FPSIMD save/restore routines for EFI calls, are
unintentionally global.
There's no reason for anything outside fpsimd.c to touch these, so
this patch makes them static (as they should have been in the first
place).
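i.e. the declarations simply gain the static keyword:

  static DEFINE_PER_CPU(struct fpsimd_state, efi_fpsimd_state);
  static DEFINE_PER_CPU(bool, efi_fpsimd_state_used);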
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs using cpumask built
from all runqueues for which current thread's mm is the same as the
thread calling sys_membarrier. It executes faster than the non-expedited
variant (no blocking). It also works on NOHZ_FULL configurations.
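A simplified sketch of the fan-out, based on the shape of
kernel/sched/membarrier.c (mask allocation and locking elided):

  static void ipi_mb(void *info)
  {
          smp_mb();       /* IPIs should be serializing, but just in case */
  }

  /* build the mask of CPUs currently running our mm, then IPI them */
  for_each_online_cpu(cpu) {
          struct task_struct *p;

          rcu_read_lock();
          p = task_rcu_dereference(&cpu_rq(cpu)->curr);
          if (p && p->mm == current->mm)
                  __cpumask_set_cpu(cpu, tmpmask);
          rcu_read_unlock();
  }
  smp_call_function_many(tmpmask, ipi_mb, NULL, 1);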
Scheduler-wise, it requires a memory barrier before and after context
switching between processes (which have different mm). The memory
barrier before context switch is already present. For the barrier after
context switch:
* Our TSO archs can do RELEASE without being a full barrier. Look at
x86 spin_unlock() being a regular STORE for example. But for those
archs, all atomics imply smp_mb and all of them have atomic ops in
switch_mm() for mm_cpumask(), and on x86 the CR3 load acts as a full
barrier.
* Of all the weakly ordered machines, only ARM64 and PPC can do RELEASE;
the rest do indeed use smp_mb(), so there the spin_unlock() is a full
barrier and we're good.
* ARM64 has a very heavy barrier in switch_to(), which suffices.
* PPC just removed its barrier from switch_to(), but appears to be
talking about adding something to switch_mm(). So add a
smp_mb__after_unlock_lock() for now, until this is settled on the PPC
side.
Changes since v3:
- Properly document the memory barriers provided by each architecture.
Changes since v2:
- Address comments from Peter Zijlstra,
- Add smp_mb__after_unlock_lock() after finish_lock_switch() in
finish_task_switch() to add the memory barrier we need after storing
to rq->curr. This is much simpler than the previous approach relying
on atomic_dec_and_test() in mmdrop(), which actually added a memory
barrier in the common case of switching between userspace processes.
- Return -EINVAL when MEMBARRIER_CMD_SHARED is used on a nohz_full
kernel, rather than having the whole membarrier system call returning
-ENOSYS. Indeed, CMD_PRIVATE_EXPEDITED is compatible with nohz_full.
Adapt the CMD_QUERY mask accordingly.
Changes since v1:
- move membarrier code under kernel/sched/ because it uses the
scheduler runqueue,
- only add the barrier when we switch from a kernel thread. The case
where we switch from a user-space thread is already handled by
the atomic_dec_and_test() in mmdrop().
- add a comment to mmdrop() documenting the requirement on the implicit
memory barrier.
CC: Peter Zijlstra <peterz@infradead.org>
CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@gmail.com>
CC: Andrew Hunter <ahh@google.com>
CC: Maged Michael <maged.michael@gmail.com>
CC: gromer@google.com
CC: Avi Kivity <avi@scylladb.com>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Dave Watson <davejwatson@fb.com>
Both unwind_frame() and dump_backtrace() try to check whether a stack
address is sane to access, with very similar logic. Both will need
updating in order to handle overflow stacks.
Factor out this logic into a helper, so that we can avoid further
duplication when we add overflow stacks.
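A sketch of the shape of the factored-out helper (simplified; the real
version grows more cases as new stacks are added):

  static inline bool on_accessible_stack(struct task_struct *tsk,
                                         unsigned long sp)
  {
          return on_task_stack(tsk, sp) || on_irq_stack(sp);
  }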
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
This patch enables arm64 to be built with vmap'd task and IRQ stacks.
As vmap'd stacks are mapped at page granularity, stacks must be a multiple of
PAGE_SIZE. This means that a 64K page kernel must use stacks of at least 64K in
size.
To minimize the increase in Image size, IRQ stacks are dynamically allocated at
boot time, rather than embedding the boot CPU's IRQ stack in the kernel image.
This patch was co-authored by Ard Biesheuvel and Mark Rutland.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
We allocate our IRQ stacks using a percpu array. This allows us to generate our
IRQ stack pointers with adr_this_cpu, but bloats the kernel Image with the boot
CPU's IRQ stack. Additionally, these are packed with other percpu variables,
and aren't guaranteed to have guard pages.
When we enable VMAP_STACK we'll want to vmap our IRQ stacks also, in order to
provide guard pages and to permit more stringent alignment requirements. Doing
so will require that we use a percpu pointer to each IRQ stack, rather than
allocating a percpu IRQ stack in the kernel image.
This patch updates our IRQ stack code to use a percpu pointer to the base of
each IRQ stack. This will allow us to change the way the stack is allocated
with minimal changes elsewhere. In some cases we may try to backtrace before
the IRQ stack pointers are initialised, so on_irq_stack() is updated to account
for this.
In testing with cyclictest, there was no measurable difference between using
adr_this_cpu (for irq_stack) and ldr_this_cpu (for irq_stack_ptr) in the IRQ
entry path.
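A sketch of the updated check, following the patch's use of a percpu base
pointer:

  DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

  static inline bool on_irq_stack(unsigned long sp)
  {
          unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
          unsigned long high = low + IRQ_STACK_SIZE;

          if (!low)       /* not yet initialised during early boot */
                  return false;

          return (low <= sp && sp < high);
  }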
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
In subsequent patches, we will detect stack overflow in our exception
entry code, by verifying the SP after it has been decremented to make
space for the exception regs.
This verification code is small, and we can minimize its impact by
placing it directly in the vectors. To avoid redundant modification of
the SP, we also need to move the initial decrement of the SP into the
vectors.
As a preparatory step, this patch introduces kernel_ventry, which
performs this decrement, and updates the entry code accordingly.
Subsequent patches will fold SP verification into kernel_ventry.
There should be no functional change as a result of this patch.
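The macro itself is small; a sketch as added to entry.S:

  	.macro kernel_ventry	label
  	.align 7
  	sub	sp, sp, #S_FRAME_SIZE	// make room for the exception regs
  	b	\label
  	.endm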
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: turn into prep patch, expand commit msg]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Currently we define SEGMENT_ALIGN directly in our vmlinux.lds.S.
This is unfortunate, as the EFI stub currently open-codes the same
number, and in future we'll want to fiddle with this.
This patch moves the definition to our <asm/memory.h>, where it can be
used by both vmlinux.lds.S and the EFI stub code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Before we add yet another stack to the kernel, it would be nice to
ensure that we consistently organise stack definitions and related
helper functions.
This patch moves the basic IRQ stack definitions to <asm/memory.h> to
live with their task stack counterparts. Helpers used for unwinding are
moved into <asm/stacktrace.h>, where subsequent patches will add helpers
for other stacks. Includes are fixed up accordingly.
This patch is a pure refactoring -- there should be no functional
changes as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
For historical reasons, we leave the top 16 bytes of our task and IRQ
stacks unused, a practice used to ensure that the SP can always be
masked to find the base of the current stack (historically, where
thread_info could be found).
However, this is not necessary, as:
* When an exception is taken from a task stack, we decrement the SP by
S_FRAME_SIZE and stash the exception registers before we compare the
SP against the task stack. In such cases, the SP must be at least
S_FRAME_SIZE below the limit, and can be safely masked to determine
whether the task stack is in use.
* When transitioning to an IRQ stack, we'll place a dummy frame onto the
IRQ stack before enabling asynchronous exceptions, or executing code
we expect to trigger faults. Thus, if an exception is taken from the
IRQ stack, the SP must be at least 16 bytes below the limit.
* We no longer mask the SP to find the thread_info, which is now found
via sp_el0. Note that historically, the offset was critical to ensure
that cpu_switch_to() found the correct stack for new threads that
hadn't yet executed ret_from_fork().
Given that, this initial offset serves no purpose, and can be removed.
This brings us in-line with other architectures (e.g. x86) which do not
rely on this masking.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: rebase, kill THREAD_START_SP, commit msg additions]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Our __die() implementation tries to dump the stack memory, in addition
to a backtrace, which is problematic.
For contemporary 16K stacks, this can be a lot of data, which can take a
long time to dump, and can push other useful context out of the kernel's
printk ringbuffer (and/or a user's scrollback buffer on an attached
console).
Additionally, the code implicitly assumes that the SP is on the task's
stack, and tries to dump everything between the SP and the highest task
stack address. When the SP points at an IRQ stack (or is corrupted),
this makes the kernel attempt to dump vast amounts of VA space. With
vmap'd stacks, this may result in erroneous accesses to peripherals.
This patch removes the memory dump, leaving us to rely on the backtrace,
and other means of dumping stack memory such as kdump.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
The Cortex-A35 uses some implementation defined perf events.
The Cortex-A35 derives from the Cortex-A53 core, using the same event mappings
based on Cortex-A35 TRM r0p2, section C2.3 - Performance monitoring events
(pages C2-562 to C2-565).
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The Cortex-A73 uses some implementation defined perf events.
This patch sets up the necessary mapping for Cortex-A73.
Mappings are based on Cortex-A73 TRM r0p2, section 11.9 Events
(pages 11-457 to 11-460).
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that the event mapping code always looks into the PMUv3 events
before any extended mappings, the extended mappings can be reduced to
only those events that are not discoverable through the PMCEID registers.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Last level caches and node events were almost never connected in currently
supported cores.
We connect last level caches to the actual last level within the core and
node events are connected to bus accesses.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The unwind code sets the sp member of struct stackframe to
'frame pointer + 0x10' unconditionally, without regard for whether
doing so produces a legal value. So let's simply remove it now that
we have stopped using it anyway.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
As it turns out, the unwind code is slightly broken, and probably has
been for a while. The problem is in the dumping of the exception stack,
which is intended to dump the contents of the pt_regs struct at each
level in the call stack where an exception was taken and routed to a
routine marked as __exception (which means its stack frame is right
below the pt_regs struct on the stack).
'Right below the pt_regs struct' is ill defined, though: the unwind
code assigns 'frame pointer + 0x10' to the .sp member of the stackframe
struct at each level, and dump_backtrace() happily dereferences that as
the pt_regs pointer when encountering an __exception routine. However,
the actual size of the stack frame created by this routine (which could
be one of many __exception routines we have in the kernel) is not known,
and so frame.sp is pretty useless to figure out where struct pt_regs
really is.
So it seems the only way to ensure that we can find our struct pt_regs
when walking the stack frames is to put it at a known fixed offset of
the stack frame pointer that is passed to such __exception routines.
The simplest way to do that is to put it inside pt_regs itself, which is
the main change implemented by this patch. As a bonus, doing this allows
us to get rid of a fair amount of cruft related to walking from one stack
to the other, which is especially nice since we intend to introduce yet
another stack for overflow handling once we add support for vmapped
stacks. It also fixes an inconsistency where we only add a stack frame
pointing to ELR_EL1 if we are executing from the IRQ stack but not when
we are executing from the task stack.
To consistently identify exception regs even in the presence of exceptions
taken from entry code, we must check whether the next frame was created
by entry text, rather than whether the current frame was created by
exception text.
To avoid backtracing using PCs that fall in the idmap, or are controlled
by userspace, we must explicitly zero the FP and LR in startup paths, and
must ensure that the frame embedded in pt_regs is zeroed upon entry from
EL0. To avoid these NULL entries showing up in the backtrace,
unwind_frame() is updated to avoid them.
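The heart of the change is a frame record embedded at a known offset in
pt_regs, so the unwinder no longer has to guess; a sketch of the new
field (set up by the entry code):

  /* appended to struct pt_regs: */
  u64 stackframe[2];	/* {fp, lr} frame record at a fixed offset */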
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: compare current frame against .entry.text, avoid bogus PCs]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
The vDSO VMA address is saved in mm_context so that the restorer in the
vDSO page can be used to return to userspace after signal handling.
In the Checkpoint Restore in Userspace (CRIU) project we place the vDSO
VMA on restore back to the place where it was at the time of the dump.
With the exception of x86 (where there is an API to map the vDSO with
arch_prctl()), we move the vDSO inherited from the CRIU task to the
restoree's position with mremap().
CRIU does support the arm64 architecture, but the kernel doesn't update
the context.vdso pointer after mremap(), which results in a translation
fault after signal handling in the restored application:
https://github.com/xemul/criu/issues/288
Make vDSO code track the VMA address by supplying .mremap() fops
the same way it's done for x86 and arm32 by:
commit b059a453b1 ("x86/vdso: Add mremap hook to vm_special_mapping")
commit 280e87e98c ("ARM: 8683/1: ARM32: Support mremap() for sigpage/vDSO").
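A sketch of the resulting hook, close to the arm32/x86 versions:

  static int vdso_mremap(const struct vm_special_mapping *sm,
                         struct vm_area_struct *new_vma)
  {
          unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
          unsigned long vdso_size = vdso_end - vdso_start;

          if (vdso_size != new_size)
                  return -EINVAL;

          /* keep the saved VMA address in sync with the moved mapping */
          current->mm->context.vdso = (void *)new_vma->vm_start;

          return 0;
  }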
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add a clean-to-point-of-persistence cache maintenance helper, and wire
up the basic architectural support for the pmem driver based on it.
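A sketch of the helper as wired up for the pmem API:

  void arch_wb_cache_pmem(void *addr, size_t size)
  {
          /* Ensure order against any prior non-cacheable writes */
          dmb(osh);
          __clean_dcache_area_pop(addr, size);
  }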
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[catalin.marinas@arm.com: move arch_*_pmem() functions to arch/arm64/mm/flush.c]
[catalin.marinas@arm.com: change dmb(sy) to dmb(osh)]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cache clean to PoP is subject to the same access controls as to PoC, so
if we are trapping userspace cache maintenance with SCTLR_EL1.UCI, we
need to be prepared to handle it. To avoid getting into complicated
fights with binutils about ARMv8.2 options, we'll just cheat and use the
raw SYS instruction rather than the 'proper' DC alias.
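i.e. the user cache maintenance trap handler grows a case along these
lines (the ISS constant name is the one added by the patch):

  case ESR_ELx_SYS64_ISS_CRM_DC_CVAP:	/* DC CVAP */
          __user_cache_maint("sys 3, c7, c12, 1", address, ret);
          break;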
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ARMv8.2-DCPoP feature introduces persistent memory support to the
architecture, by defining a point of persistence in the memory
hierarchy, and a corresponding cache maintenance operation, DC CVAP.
Expose the support via HWCAP and MRS emulation.
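For the HWCAP side, the capability table gains an entry keyed on the
relevant ID register field; a sketch:

  HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT,
            FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),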
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
__inval_cache_range() is already the odd one out among our data cache
maintenance routines as the only remaining range-based one; as we're
going to want an invalidation routine to call from C code for the pmem
API, let's tweak the prototype and name to bring it in line with the
clean operations, and to make its relationship with __dma_inv_area()
neatly mirror that of __clean_dcache_area_poc() and __dma_clean_area().
The loop clearing the early page tables gets mildly massaged in the
process for the sake of consistency.
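i.e. the prototype changes roughly as follows:

  -void __inval_cache_range(unsigned long start, unsigned long end);
  +void __inval_dcache_area(void *addr, size_t size);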
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Rather than continue adding CPU-specific event maps, instead look up by
default in the PMUv3 event map and only fallback to the CPU-specific maps
if either the event isn't described by PMUv3, or it is described but
the PMCEID registers say that it is unsupported by the current CPU.
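A sketch of the lookup order (helper names here are hypothetical; the
real code threads the CPU-specific maps through as parameters):

  static int armv8_map_event(struct perf_event *event)
  {
          /* 1. try the architected PMUv3 map first */
          int hw_event_id = armv8_pmuv3_map_event(event);

          /* 2. keep it if the PMCEID registers say it is implemented */
          if (hw_event_id >= 0 && armv8pmu_event_supported(event, hw_event_id))
                  return hw_event_id;

          /* 3. otherwise fall back to the CPU-specific map */
          return armv8_cpu_specific_map_event(event);
  }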
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, when unwinding the call stack, we validate the frame pointer
of each frame against frame.sp, whose value is not clearly defined, and
which makes it more difficult to link stack frames together across
different stacks. It is far better to simply check whether the frame
pointer itself points into a valid stack.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Our IRQ_STACK_PTR() and on_irq_stack() helpers both take a cpu argument,
used to generate a percpu address. In all cases, they are passed
{raw_,}smp_processor_id(), so this parameter is redundant.
Since {raw_,}smp_processor_id() use a percpu variable internally, this
approach means we generate a percpu offset to find the current cpu, then
use this to index an array of percpu offsets, which we then use to find
the current CPU's IRQ stack pointer. Thus, most of the work is
redundant.
Instead, we can consistently use raw_cpu_ptr() to generate the CPU's
irq_stack pointer by simply adding the percpu offset to the irq_stack
address, which is simpler in both respects.
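i.e. the helper changes roughly like this:

  -#define IRQ_STACK_PTR(cpu) ((unsigned long)per_cpu(irq_stack, cpu) + IRQ_STACK_START_SP)
  +#define IRQ_STACK_PTR() ((unsigned long)raw_cpu_ptr(irq_stack) + IRQ_STACK_START_SP)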
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Currently, cpu_switch_to and ret_from_fork both live in .entry.text,
though neither form the critical path for an exception entry.
In subsequent patches, we will require that code in .entry.text is part
of the critical path for exception entry, for which we can assume
certain properties (e.g. the presence of exception regs on the stack).
Neither cpu_switch_to nor ret_from_fork will meet these requirements, so
we must move them out of .entry.text. To ensure that neither are kprobed
after being moved out of .entry.text, we must explicitly blacklist them,
requiring a new NOKPROBE() asm helper.
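A sketch of the new helper, mirroring what NOKPROBE_SYMBOL() does for C
code:

  #ifdef CONFIG_KPROBES
  #define NOKPROBE(x)				\
  	.pushsection "_kprobe_blacklist", "aw";	\
  	.quad	x;				\
  	.popsection;
  #else
  #define NOKPROBE(x)
  #endif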
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
In most cases, our exception entry assembly branches to C handlers with
a BL instruction, but in cases where we do not expect to return, we use
B instead.
While this is correct today, it means that backtraces for fatal
exceptions miss the entry assembly (as the LR is stale at the point we
call C code), while non-fatal exceptions have the entry assembly in the
LR. In subsequent patches, we will need the LR to be set in these cases
in order to backtrace reliably.
This patch updates these sites to use a BL, ensuring consistency, and
preparing for backtrace rework. An ASM_BUG() is added after each of
these new BLs, which both catches unexpected returns, and ensures that
the LR value doesn't point to another function label.
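The resulting pattern at each such site looks like the following
(illustrative; the handlers vary per site):

  	mov	x0, sp
  	bl	bad_mode	// not expected to return
  	ASM_BUG()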
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Currently:

  $ perf stat -e cycles:u -e cycles:k true

   Performance counter stats for 'true':

            2,24,699      cycles:u
       <not counted>      cycles:k                  (0.00%)

         0.000788087 seconds time elapsed
We cannot count more than one cycle event at a time, because we only
allow the cycle counter to be mapped onto PMCCNTR_EL0. However, unless I
have missed something, the specification does not prohibit using
PMEVCNTR<n>_EL0 for counting cycles as well.
Modify the code so that it still prefers PMCCNTR_EL0 for the cycle
counter, but allows PMEVCNTR<n>_EL0 to be used if PMCCNTR_EL0 is already
in use.
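A sketch of the relaxed allocation in the event-index code, with names
assumed from the armv8 PMU driver:

  if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
          if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
                  return ARMV8_IDX_CYCLE_COUNTER;
          /* PMCCNTR_EL0 is busy: fall through to the event counters */
  }

  for (idx = ARMV8_IDX_COUNTER0; idx < cpu_pmu->num_events; ++idx) {
          if (!test_and_set_bit(idx, cpuc->used_mask))
                  return idx;
  }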
After this patch:

  $ perf stat -e cycles:u -e cycles:k true

   Performance counter stats for 'true':

            2,17,310      cycles:u
            7,40,009      cycles:k

         0.000764149 seconds time elapsed
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
may_use_simd() can be invoked from loadable modules and it accesses
kernel_neon_busy. Make sure that the latter is exported.
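i.e.:

  EXPORT_PER_CPU_SYMBOL(kernel_neon_busy);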
Fixes: cb84d11e16 ("arm64: neon: Remove support for nested or hardirq kernel-mode NEON")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The -1 "no syscall" value is written in various ways, shared with
the user ABI in some places, and generally obscure.
This patch attempts to make things a little more consistent and
readable by replacing all these uses with a single #define. A
couple of symbolic helpers are provided to clarify the intent
further.
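A sketch of the define and its helpers, as in <asm/ptrace.h>:

  #define NO_SYSCALL (-1)

  static inline void forget_syscall(struct pt_regs *regs)
  {
          regs->syscallno = NO_SYSCALL;
  }

  static inline bool in_syscall(struct pt_regs const *regs)
  {
          return regs->syscallno != NO_SYSCALL;
  }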
Because the in-syscall check in do_signal() is changed from >= 0 to
!= NO_SYSCALL by this patch, different behaviour may be observable
if syscallno is set to values less than -1 by a tracer. However,
this is not different from the behaviour that is already observable
if a tracer sets syscallno to a value >= __NR_(compat_)syscalls.
It appears that this can cause spurious syscall restarting, but
that is not a new behaviour either, and does not appear harmful.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The upper 32 bits of the syscallno field in thread_struct are
handled inconsistently, being sometimes zero extended and sometimes
sign-extended. In fact, only the lower 32 bits seem to have any
real significance for the behaviour of the code: it's been OK to
handle the upper bits inconsistently because they don't matter.
Currently, the only place I can find where those bits are
significant is in calling trace_sys_enter(), which may be
unintentional: for example, if a compat tracer attempts to cancel a
syscall by passing -1 to (COMPAT_)PTRACE_SET_SYSCALL at the
syscall-enter-stop, it will be traced as syscall 4294967295
rather than -1 as might be expected (and as occurs for a native
tracer doing the same thing). Elsewhere, reads of syscallno cast
it to an int or truncate it.
There's also a conspicuous amount of code and casting to bodge
around the fact that although semantically an int, syscallno is
stored as a u64.
Let's not pretend any more.
In order to preserve the stp x instruction that stores the syscall
number in entry.S, this patch special-cases the layout of struct
pt_regs for big endian so that the newly 32-bit syscallno field
maps onto the low bits of the stored value. This is not beautiful,
but benchmarking of the getpid syscall on Juno suggests a minor slowdown
if the stp is split into an stp x and stp w.
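A sketch of the special-cased layout in struct pt_regs (unused2 pads out
the other half of the stored u64):

  #ifdef __AARCH64EB__
  	u32 unused2;
  	s32 syscallno;
  #else
  	s32 syscallno;
  	u32 unused2;
  #endif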
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Support for kernel-mode NEON to be nested and/or used in hardirq
context adds significant complexity, and the benefits may be
marginal. In practice, kernel-mode NEON is not used in hardirq
context, and is rarely used in softirq context (by certain mac80211
drivers).
This patch implements an arm64 may_use_simd() function to allow
clients to check whether kernel-mode NEON is usable in the current
context, and simplifies kernel_neon_{begin,end}() to handle only
saving of the task FPSIMD state (if any). Without nesting, there
is no other state to save.
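A sketch of the resulting check, close to the arm64 <asm/simd.h> version:

  static __must_check inline bool may_use_simd(void)
  {
          /*
           * kernel_neon_busy is only set while preemption is disabled,
           * so this_cpu_read() cannot race with migration here.
           */
          return !in_irq() && !in_nmi() && !irqs_disabled() &&
                 !this_cpu_read(kernel_neon_busy);
  }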
The partial fpsimd save/restore functions become redundant as a
result of these changes, so they are removed too.
The save/restore model is changed to operate directly on
task_struct without additional percpu storage. This simplifies the
code and saves a bit of memory, but means that softirqs must now be
disabled when manipulating the task fpsimd state from task context:
correspondingly, preempt_{en,dis}able() calls are upgraded to
local_bh_{en,dis}able() as appropriate. fpsimd_thread_switch()
already runs with hardirqs disabled and so is already protected
from softirqs.
These changes should make it easier to support kernel-mode NEON in
the presence of the Scalable Vector extension in the future.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In order to be able to cope with kernel-mode NEON being unavailable
in hardirq/nmi context and non-nestable, we need special handling
for EFI runtime service calls that may be made during an interrupt
that interrupted a kernel_neon_begin()..._end() block. This will
occur if the kernel tries to write diagnostic data to EFI
persistent storage during a panic triggered by an NMI for example.
EFI runtime services specify an ABI that clobbers the FPSIMD state,
rather than being able to use it optionally as an accelerator.
This means that EFI is really a special case and can be handled
specially.
To enable EFI calls from interrupts, this patch creates dedicated
__efi_fpsimd_{begin,end}() helpers solely for this purpose, which
save/restore to a separate percpu buffer if called in a context
where kernel_neon_begin() is not usable.
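A simplified sketch of the begin helper:

  void __efi_fpsimd_begin(void)
  {
          WARN_ON(preemptible());

          if (may_use_simd()) {
                  kernel_neon_begin();
          } else {
                  /* we interrupted a kernel_neon_begin()..._end() block */
                  fpsimd_save_state(this_cpu_ptr(&efi_fpsimd_state));
                  __this_cpu_write(efi_fpsimd_state_used, true);
          }
  }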
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
__this_cpu_ ops are not used consistently with regard to this_cpu_
ops in a couple of places in fpsimd.c.
Since preemption is explicitly disabled in
fpsimd_restore_current_state() and fpsimd_update_current_state(),
this patch converts this_cpu_ ops in those functions to __this_cpu_
ops. This doesn't save cost on arm64, but benefits from additional
assertions in the core code.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Multiple architectures define this as a trivial function, and I'm adding
another one as part of the RISC-V port. Add a __weak version of
pcibios_align_resource() and delete the now-obsolete ones in a handful of
ports.
The only functional change should be that a handful of ports used to export
pcibios_align_resource(). Only some architectures export this, so I just
dropped it.
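The trivial default looks roughly like this (a sketch of the __weak
version):

  resource_size_t __weak pcibios_align_resource(void *data,
                                                const struct resource *res,
                                                resource_size_t size,
                                                resource_size_t align)
  {
          return res->start;
  }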
Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Multiple architectures define this as an empty function, and I'm adding
another one as part of the RISC-V port. Add a __weak version of
pcibios_fixup_bus() and delete the now-obsolete ones in a handful of
ports.
The only functional change should be that microblaze used to export
pcibios_fixup_bus(). None of the other architectures exports this, so I
just dropped it.
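The weak default is trivial; a sketch:

  void __weak pcibios_fixup_bus(struct pci_bus *bus)
  {
          /* nothing to do, expected to be removed in the future */
  }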
Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
In an ideal world, CNTFRQ_EL0 always contains the timer frequency
for the kernel to use. Sadly, we get quite a few broken systems
where the firmware authors cannot be bothered to program that
register on all CPUs, and rely on DT to provide that frequency.
So when trapping CNTFRQ_EL0, make sure to return the actual rate
(as known by the kernel), and not CNTFRQ_EL0.
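A sketch of the trap handler change, with the register-decoding macros
assumed from <asm/esr.h>:

  static void cntfrq_read_handler(unsigned int esr, struct pt_regs *regs)
  {
          int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >>
                   ESR_ELx_SYS64_ISS_RT_SHIFT;

          /* return the rate the kernel actually uses, not CNTFRQ_EL0 */
          pt_regs_write_reg(regs, rt, arch_timer_get_rate());
          regs->pc += 4;
  }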
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Sparse complains about wrong address space used in __acpi_map_table()
and in __acpi_unmap_table().
arch/x86/kernel/acpi/boot.c:127:29: warning: incorrect type in return expression (different address spaces)
arch/x86/kernel/acpi/boot.c:127:29: expected char *
arch/x86/kernel/acpi/boot.c:127:29: got void [noderef] <asn:2>*
arch/x86/kernel/acpi/boot.c:135:23: warning: incorrect type in argument 1 (different address spaces)
arch/x86/kernel/acpi/boot.c:135:23: expected void [noderef] <asn:2>*addr
arch/x86/kernel/acpi/boot.c:135:23: got char *map
Correct the address spaces so that they align with the types of the
returned and passed parameters.
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
struct siginfo is a union and the kernel since 2.4 has been hiding a union
tag in the high 16bits of si_code using the values:
__SI_KILL
__SI_TIMER
__SI_POLL
__SI_FAULT
__SI_CHLD
__SI_RT
__SI_MESGQ
__SI_SYS
While this looks plausible on the surface, in practice this situation has
not worked well.
- Injected positive signals are not copied to user space properly
unless they have these magic high bits set.
- Injected positive signals are not reported properly by signalfd
unless they have these magic high bits set.
- These kernel internal values leaked to userspace via ptrace_peek_siginfo
- It was possible to inject these kernel internal values and cause
the kernel to misbehave.
- Kernel developers got confused and expected these kernel internal values
in userspace in kernel self tests.
- Kernel developers got confused and set si_code to __SI_FAULT which
is SI_USER in userspace which causes userspace to think an ordinary user
sent the signal and that it was not kernel generated.
- The values make it impossible to reorganize the code to transform
siginfo_copy_to_user into a plain copy_to_user, as si_code must
be massaged before being passed to userspace.
So remove these kernel internal si codes and make the kernel code simpler
and more maintainable.
To replace these kernel internal magic si_codes introduce the helper
function siginfo_layout, that takes a signal number and an si_code and
computes which union member of siginfo is being used. Have
siginfo_layout return an enumeration so that gcc will have enough
information to warn if a switch statement does not handle all of the
union members.
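A sketch of the resulting enumeration and helper prototype:

  enum siginfo_layout {
          SIL_KILL,
          SIL_TIMER,
          SIL_POLL,
          SIL_FAULT,
          SIL_CHLD,
          SIL_RT,
          SIL_SYS,
  };

  enum siginfo_layout siginfo_layout(int sig, int si_code);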
A couple of architectures have a messed up ABI that defines signal
specific duplications of SI_USER which causes more special cases in
siginfo_layout than I would like. The good news is only problem
architectures pay the cost.
Update all of the code that used the previous magic __SI_ values to
use the new SIL_ values and to call siginfo_layout to get those
values. Except where not all of the cases are handled, remove the
defaults in the switch statements so that if a new case is missed in
the future the omission will show up at compile time.
Modify the code that copies siginfo si_code to userspace to just copy
the value and not cast si_code to a short first. The high bits are no
longer used to hold a magic union member.
Fixup the siginfo header files to stop including the __SI_ values in
their constants and for the headers that were missing it to properly
update the number of si_codes for each signal type.
The fixes to the copy_siginfo_from_user32 implementations have the
interesting property that several of them previously should never have
worked, as the __SI_ values they depended upon were kernel internal.
With that dependency gone, those implementations should work much
better.
The idea of not passing the __SI_ values out to userspace and then
not reinserting them has been tested with criu and criu worked without
changes.
Ref: 2.4.0-test1
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Now that we have a custom printf format specifier, convert users of
full_name to use %pOF instead. This is preparation to remove storing
of the full path string for each node.
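i.e. conversions of the following shape (an illustrative example, not a
specific call site):

  -	pr_err("%s: missing reg property\n", node->full_name);
  +	pr_err("%pOF: missing reg property\n", node);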
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
In the current die(), irqs are disabled around the __die() handling,
but not around the possible panic() handling. Since the logging in
__die() can take several hundred ms, a new irq might come in and
interrupt the current die().
If the process calling die() holds some critical resource, and some
other process scheduled later also needs it, then it would deadlock.
The first panic would never be executed.
So disable irqs for the whole flow of die().
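i.e. the lock in die() changes roughly like this (a sketch, assuming the
die_lock spelling in arm64 traps.c):

  -	raw_spin_lock_irq(&die_lock);
  +	raw_spin_lock_irqsave(&die_lock, flags);

  -	raw_spin_unlock_irq(&die_lock);
  +	raw_spin_unlock_irqrestore(&die_lock, flags);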
Signed-off-by: Qiao Zhou <qiaozhou@asrmicro.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Ensure the address limit is a user-mode segment before returning to
user-mode. Otherwise a process can corrupt kernel-mode memory and
elevate privileges [1].
The set_fs function sets the TIF_SETFS flag to force a slow path on
return. In the slow path, the address limit is checked to be USER_DS if
needed.
[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990
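A hypothetical sketch of the slow-path check (the merged generic code
spells the flag TIF_FSCHECK and wraps this in addr_limit_user_check()):

  /* hypothetical sketch of the return-to-user check */
  if (test_and_clear_thread_flag(TIF_FSCHECK))
          BUG_ON(!segment_eq(get_fs(), USER_DS));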
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: kernel-hardening@lists.openwall.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Will Drewry <wad@chromium.org>
Cc: linux-api@vger.kernel.org
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Link: http://lkml.kernel.org/r/20170615011203.144108-3-thgarnie@google.com
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
"PPC:
- Better machine check handling for HV KVM
- Ability to support guests with threads=2, 4 or 8 on POWER9
- Fix for a race that could cause delayed recognition of signals
- Fix for a bug where POWER9 guests could sleep with interrupts pending.
ARM:
- VCPU request overhaul
- allow timer and PMU to have their interrupt number selected from userspace
- workaround for Cavium erratum 30115
- handling of memory poisoning
- the usual crop of fixes and cleanups
s390:
- initial machine check forwarding
- migration support for the CMMA page hinting information
- cleanups and fixes
x86:
- nested VMX bugfixes and improvements
- more reliable NMI window detection on AMD
- APIC timer optimizations
Generic:
- VCPU request overhaul + documentation of common code patterns
- kvm_stat improvements"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (124 commits)
Update my email address
kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS
x86: kvm: mmu: use ept a/d in vmcs02 iff used in vmcs12
kvm: x86: mmu: allow A/D bits to be disabled in an mmu
x86: kvm: mmu: make spte mmio mask more explicit
x86: kvm: mmu: dead code thanks to access tracking
KVM: PPC: Book3S: Fix typo in XICS-on-XIVE state saving code
KVM: PPC: Book3S HV: Close race with testing for signals on guest entry
KVM: PPC: Book3S HV: Simplify dynamic micro-threading code
KVM: x86: remove ignored type attribute
KVM: LAPIC: Fix lapic timer injection delay
KVM: lapic: reorganize restart_apic_timer
KVM: lapic: reorganize start_hv_timer
kvm: nVMX: Check memory operand to INVVPID
KVM: s390: Inject machine check into the nested guest
KVM: s390: Inject machine check into the guest
tools/kvm_stat: add new interactive command 'b'
tools/kvm_stat: add new command line switch '-i'
tools/kvm_stat: fix error on interactive command 'g'
KVM: SVM: suppress unnecessary NMI singlestep on GIF=0 and nested exit
...
Merge tag 'driver-core-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core updates from Greg KH:
"Here is the big driver core update for 4.13-rc1.
The large majority of this is a lot of cleanup of old fields in the
driver core structures and their remaining usages in random drivers.
All of those fixes have been reviewed by the various subsystem
maintainers. There's also some small firmware updates in here, a new
kobject uevent api interface that makes userspace interaction easier,
and a few other minor things.
All of these have been in linux-next for a long while with no reported
issues"
* tag 'driver-core-4.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (56 commits)
arm: mach-rpc: ecard: fix build error
zram: convert remaining CLASS_ATTR() to CLASS_ATTR_RO()
driver-core: remove struct bus_type.dev_attrs
powerpc: vio_cmo: use dev_groups and not dev_attrs for bus_type
powerpc: vio: use dev_groups and not dev_attrs for bus_type
USB: usbip: convert to use DRIVER_ATTR_RW
s390: drivers: convert to use DRIVER_ATTR_RO/WO
platform: thinkpad_acpi: convert to use DRIVER_ATTR_RO/RW
pcmcia: ds: convert to use DRIVER_ATTR_RO
wireless: ipw2x00: convert to use DRIVER_ATTR_RW
net: ehea: convert to use DRIVER_ATTR_RO
net: caif: convert to use DRIVER_ATTR_RO
TTY: hvc: convert to use DRIVER_ATTR_RW
PCI: pci-driver: convert to use DRIVER_ATTR_WO
IB: nes: convert to use DRIVER_ATTR_RW
HID: hid-core: convert to use DRIVER_ATTR_RO and drv_groups
arm: ecard: fix dev_groups patch typo
tty: serdev: use dev_groups and not dev_attrs for bus_type
sparc: vio: use dev_groups and not dev_attrs for bus_type
hid: intel-ish-hid: use dev_groups and not dev_attrs for bus_type
...
Pull SMP hotplug updates from Thomas Gleixner:
"This update is primarily a cleanup of the CPU hotplug locking code.
The hotplug locking mechanism is an open coded RWSEM, which allows
recursive locking. The main problem with that is the recursive nature
as it evades the full lockdep coverage and hides potential deadlocks.
The rework replaces the open coded RWSEM with a percpu RWSEM and
establishes full lockdep coverage that way.
The bulk of the changes fix up recursive locking issues and address
the now fully reported potential deadlocks all over the place. Some of
these deadlocks have been observed in the RT tree, but on mainline the
probability was low enough to hide them away."
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
cpu/hotplug: Constify attribute_group structures
powerpc: Only obtain cpu_hotplug_lock if called by rtasd
ARM/hw_breakpoint: Fix possible recursive locking for arch_hw_breakpoint_init
cpu/hotplug: Remove unused check_for_tasks() function
perf/core: Don't release cred_guard_mutex if not taken
cpuhotplug: Link lock stacks for hotplug callbacks
acpi/processor: Prevent cpu hotplug deadlock
sched: Provide is_percpu_thread() helper
cpu/hotplug: Convert hotplug locking to percpu rwsem
s390: Prevent hotplug rwsem recursion
arm: Prevent hotplug rwsem recursion
arm64: Prevent cpu hotplug rwsem recursion
kprobes: Cure hotplug lock ordering issues
jump_label: Reorder hotplug lock and jump_label_lock
perf/tracing/cpuhotplug: Fix locking order
ACPI/processor: Use cpu_hotplug_disable() instead of get_online_cpus()
PCI: Replace the racy recursion prevention
PCI: Use cpu_hotplug_disable() instead of get_online_cpus()
perf/x86/intel: Drop get_online_cpus() in intel_snb_check_microcode()
x86/perf: Drop EXPORT of perf_check_microcode
...
Pull timer updates from Thomas Gleixner:
"A rather large update for timers/timekeeping:
- compat syscall consolidation (Al Viro)
- Posix timer consolidation (Christoph Helwig / Thomas Gleixner)
- Cleanup of the device tree based initialization for clockevents and
clocksources (Daniel Lezcano)
- Consolidation of the FTTMR010 clocksource/event driver (Linus
Walleij)
- The usual set of small fixes and updates all over the place"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (93 commits)
timers: Make the cpu base lock raw
clocksource/drivers/mips-gic-timer: Fix an error code in 'gic_clocksource_of_init()'
clocksource/drivers/fsl_ftm_timer: Unmap region obtained by of_iomap
clocksource/drivers/tcb_clksrc: Make IO endian agnostic
clocksource/drivers/sun4i: Switch to the timer-of common init
clocksource/drivers/timer-of: Fix invalid iomap check
Revert "ktime: Simplify ktime_compare implementation"
clocksource/drivers: Fix uninitialized variable use in timer_of_init
kselftests: timers: Add test for frequency step
kselftests: timers: Fix inconsistency-check to not ignore first timestamp
time: Add warning about imminent deprecation of CONFIG_GENERIC_TIME_VSYSCALL_OLD
time: Clean up CLOCK_MONOTONIC_RAW time handling
posix-cpu-timers: Make timespec to nsec conversion safe
itimer: Make timeval to nsec conversion range limited
timers: Fix parameter description of try_to_del_timer_sync()
ktime: Simplify ktime_compare implementation
clocksource/drivers/fttmr010: Factor out clock read code
clocksource/drivers/fttmr010: Implement delay timer
clocksource/drivers: Add timer-of common init routine
clocksource/drivers/tcb_clksrc: Save timer context on suspend/resume
...
Pull scheduler updates from Ingo Molnar:
"The main changes in this cycle were:
- Add the SYSTEM_SCHEDULING bootup state to move various scheduler
debug checks earlier into the bootup. This turns silent and
sporadically deadly bugs into nice, deterministic splats. Fix some
of the splats that triggered. (Thomas Gleixner)
- A round of restructuring and refactoring of the load-balancing and
topology code (Peter Zijlstra)
- Another round of consolidating ~20 of incremental scheduler code
history: this time in terms of wait-queue nomenclature. (I didn't
get much feedback on these renaming patches, and we can still
easily change any names I might have misplaced, so if anyone hates
a new name, please holler and I'll fix it.) (Ingo Molnar)
- sched/numa improvements, fixes and updates (Rik van Riel)
- Another round of x86/tsc scheduler clock code improvements, in hope
of making it more robust (Peter Zijlstra)
- Improve NOHZ behavior (Frederic Weisbecker)
- Deadline scheduler improvements and fixes (Luca Abeni, Daniel
Bristot de Oliveira)
- Simplify and optimize the topology setup code (Lauro Ramos
Venancio)
- Debloat and decouple scheduler code some more (Nicolas Pitre)
- Simplify code by making better use of llist primitives (Byungchul
Park)
- ... plus other fixes and improvements"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (103 commits)
sched/cputime: Refactor the cputime_adjust() code
sched/debug: Expose the number of RT/DL tasks that can migrate
sched/numa: Hide numa_wake_affine() from UP build
sched/fair: Remove effective_load()
sched/numa: Implement NUMA node level wake_affine()
sched/fair: Simplify wake_affine() for the single socket case
sched/numa: Override part of migrate_degrades_locality() when idle balancing
sched/rt: Move RT related code from sched/core.c to sched/rt.c
sched/deadline: Move DL related code from sched/core.c to sched/deadline.c
sched/cpuset: Only offer CONFIG_CPUSETS if SMP is enabled
sched/fair: Spare idle load balancing on nohz_full CPUs
nohz: Move idle balancer registration to the idle path
sched/loadavg: Generalize "_idle" naming to "_nohz"
sched/core: Drop the unused try_get_task_struct() helper function
sched/fair: WARN() and refuse to set buddy when !se->on_rq
sched/debug: Fix SCHED_WARN_ON() to return a value on !CONFIG_SCHED_DEBUG as well
sched/wait: Disambiguate wq_entry->task_list and wq_head->task_list naming
sched/wait: Move bit_wait_table[] and related functionality from sched/core.c to sched/wait_bit.c
sched/wait: Split out the wait_bit*() APIs from <linux/wait.h> into <linux/wait_bit.h>
sched/wait: Re-adjust macro line continuation backslashes in <linux/wait.h>
...
Pull EFI updates from Ingo Molnar:
"The main changes in this cycle were:
- Rework the EFI capsule loader to allow for workarounds for
non-compliant firmware (Ard Biesheuvel)
- Implement a capsule loader quirk for Quark X102x (Jan Kiszka)
- Enable SMBIOS/DMI support for the ARM architecture (Ard Biesheuvel)
- Add CONFIG_EFI_PGT_DUMP=y support for x86-32 and kexec (Sai
Praneeth)
- Fixes for EFI support for Xen dom0 guests running under x86-64
hosts (Daniel Kiper)"
* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/xen/efi: Initialize only the EFI struct members used by Xen
efi: Process the MEMATTR table only if EFI_MEMMAP is enabled
efi/arm: Enable DMI/SMBIOS
x86/efi: Extend CONFIG_EFI_PGT_DUMP support to x86_32 and kexec as well
efi/efi_test: Use memdup_user() helper
efi/capsule: Add support for Quark security header
efi/capsule-loader: Use page addresses rather than struct page pointers
efi/capsule-loader: Redirect calls to efi_capsule_setup_info() via weak alias
efi/capsule: Remove NULL test on kmap()
efi/capsule-loader: Use a cached copy of the capsule header
efi/capsule: Adjust return type of efi_capsule_setup_info()
efi/capsule: Clean up pr_err/_info() messages
efi/capsule: Remove pr_debug() on ENOMEM or EFAULT
efi/capsule: Fix return code on failing kmap/vmap
With the introduction of struct pci_host_bridge.map_irq pointer it is
possible to assign IRQs for all devices originating from a PCI host bridge
at probe time; this is implemented through pci_assign_irq() that relies on
the struct pci_host_bridge.map_irq pointer to map IRQ for a given device.
The benefits this brings are twofold:
- the IRQ for a device is assigned once at probe time
- the IRQ assignment works also for hotplugged devices
With all DT based PCI host bridges converted to the struct
pci_host_bridge.{map/swizzle}_irq hooks mechanism the DT IRQ allocation in
ARM64 pcibios_alloc_irq() is now redundant and can be removed.
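Host bridge drivers wire this up by filling in the hooks before probe,
e.g.:

  bridge->map_irq = of_irq_parse_and_map_pci;
  bridge->swizzle_irq = pci_common_swizzle;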
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
attribute_groups are not supposed to change at runtime. All functions
working with attribute_groups provided by <linux/sysfs.h> work with const
attribute_group. So mark the non-const structs as const.
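i.e. changes of the following shape:

  -static struct attribute_group armv8_pmuv3_events_attr_group = {
  +static const struct attribute_group armv8_pmuv3_events_attr_group = {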
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'kvmarm-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/ARM updates for 4.13
- vcpu request overhaul
- allow timer and PMU to have their interrupt number
selected from userspace
- workaround for Cavium erratum 30115
- handling of memory poisoning
- the usual crop of fixes and cleanups
Conflicts:
arch/s390/include/asm/kvm_host.h
Now that compat_vfp_get() uses the regset API to copy the FPSCR
value out to userspace, compat_vfp_set() looks inconsistent. In
particular, compat_vfp_set() will fail if called with kbuf != NULL
&& ubuf == NULL (which is valid usage according to the regset API).
This patch fixes compat_vfp_set() to use user_regset_copyin(),
similarly to compat_vfp_get().
This also squashes a sparse warning triggered by the cast that
drops __user when calling get_user().
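A sketch of the fixed copy-in, with the size constants as used by
compat_vfp_get():

  ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &fpscr,
                           VFP_STATE_SIZE - sizeof(compat_ulong_t),
                           VFP_STATE_SIZE);
  if (!ret) {
          uregs->fpsr = fpscr & VFP_FPSCR_STAT_MASK;
          uregs->fpcr = fpscr & VFP_FPSCR_CTRL_MASK;
  }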
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
compat_vfp_set() checks for userspace trying to write an excessive
amount of data to the regset. However this check is conspicuous
for its absence from every other _set() in the arm64 ptrace
implementation. In fact, the core ptrace_regset() already clamps
userspace's iov_len to the regset size before the individual regset
.{get,set}() methods get called.
This patch removes the redundant check.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If get_user() fails when reading the new FPSCR value from userspace
in compat_vfp_set(), then garbage* will be written to the task's
FPSR and FPCR registers.
This patch prevents this by checking the return from get_user()
first.
[*] Actually, zero, due to the behaviour of get_user() on error, but
that's still not what userspace expects.
Fixes: 478fcb2cdb ("arm64: Debugging support")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
get_alt_insn() is used to read and create ARM instructions, which
are always stored in memory in little-endian order. These values
are thus correctly converted to/from native order when processed
but the pointers used to hold the addresses of these instructions
are declared as pointers to native order values.
Fix this by declaring the pointers as __le32* instead of u32* and
make the few appropriate needed changes like removing the unneeded
cast '(u32*)' in front of __ALT_PTR()'s definition.
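i.e. the prototype becomes:

  -static u32 get_alt_insn(struct alt_instr *alt, u32 *insnptr, u32 *altinsnptr)
  +static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)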
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In the flattened device tree format, all integer properties are
in big-endian order.
Here the property "kaslr-seed" is read from the fdt and then
correctly converted to native order (via fdt64_to_cpu()) but the
pointer used for this is not annotated as being for big-endian.
Fix this by declaring the pointer as fdt64_t instead of u64
(fdt64_t being itself typedefed to __be64).
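i.e. a sketch of the kaslr.c usage after the fix:

  fdt64_t *prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
  u64 seed = fdt64_to_cpu(*prop);	/* big-endian to native */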
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Here both variables 'cpu_id' and 'entry_point' are read via
read[lq]_relaxed(), from a little-endian annotated pointer,
and then used as native endian values.
This is correct since the read[lq]() family of functions
internally does a little-to-native endian conversion.
But in this case, it is wrong to declare these variables as
little-endian since they are native ones.
Fix this by changing the declaration of these variables
to 'u32' or 'u64' instead of '__le32' / '__le64'.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Here the entrypoint, declared as a 64-bit integer, is read from
a pointer to a 64-bit integer, but the read is done via
readl_relaxed(), which is for 32-bit quantities.
All the high bits will thus be lost, which changes the meaning
of the test against zero done later.
Fix this by using readq_relaxed() instead, as is appropriate for
64-bit quantities.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Here the functions reloc_insn_movw() & reloc_insn_imm() are used
to read, modify and write back ARM instructions, which are always
stored in memory in little-endian order. These values are thus
correctly converted to/from native order but the pointers used to
hold their addresses are declared as for native order values.
Fix this by declaring the pointers as __le32* and removing the
casts that are now unneeded.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
aarch64_insn_write() is used to write an instruction.
As in-memory instructions on ARM64 are always stored in
little-endian order, this function, which takes the instruction
opcode in native order, correctly converts it to little-endian
before handing it to a helper function, __aarch64_insn_write(),
which does the actual write.
This is all good, but the variable and argument holding the
converted value are not annotated as little-endian; they are
left as native values.
Fix this by adjusting the prototype of the helper and directly
using the result of cpu_to_le32() without going through an
intermediate variable (which was not a distinct one but the
same as the one holding the native value).
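The resulting shape, sketched:
  static int __aarch64_insn_write(void *addr, __le32 insn)
  {
          /* ... write the already-converted value to memory ... */
  }
  int aarch64_insn_write(void *addr, u32 insn)
  {
          /* convert once at the call site; no intermediate variable */
          return __aarch64_insn_write(addr, cpu_to_le32(insn));
  }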
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The function aarch64_insn_read() is used to read an instruction.
On ARM64, instructions are always stored in little-endian order,
and thus the function correctly does a little-to-native endian
conversion on the value just read.
However, the variable used to hold the value before the conversion
is not declared for a little-endian value but for a native one.
Fix this by using the correct type for the declaration: __le32.
Note: this only works because the function reading the value,
probe_kernel_read(), takes a void pointer, and void pointers
are endian-agnostic. Otherwise probe_kernel_read() would also
need to be properly annotated (or worse, specialized).
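The resulting function, sketched:
  int aarch64_insn_read(void *addr, u32 *insnp)
  {
          int ret;
          __le32 val;   /* raw in-memory (little-endian) value */
          ret = probe_kernel_read(&val, addr, AARCH64_INSN_SIZE);
          if (!ret)
                  *insnp = le32_to_cpu(val);   /* convert to native */
          return ret;
  }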
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Here we're reading thumb or ARM instructions, which are always
stored in memory in little-endian order. These values are thus
correctly converted to native order, but the intermediate value
should be annotated as little-endian.
Fix this by declaring the intermediate variable as __le32 or __le16.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Here we're reading thumb or ARM instructions, which are always
stored in memory in little-endian order. These values are thus
correctly converted to native order, but the intermediate value
should be annotated as little-endian.
Fix this by declaring the intermediate variable as __le32 or __le16.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When a kernel is built without CONFIG_ARM64_MODULE_PLTS, we don't
generate the expected branch instruction in ftrace_make_nop(). This
means we pass zero (rather than a valid branch) to ftrace_modify_code()
as the expected instruction to validate. This causes us to return
-EINVAL to the core ftrace code for a valid case, resulting in a splat
at boot time.
This was an unintended effect of commit:
687644209a ("arm64: ftrace: fix building without CONFIG_MODULES")
... which incorrectly moved the generation of the branch instruction
into the ifdef for CONFIG_ARM64_MODULE_PLTS.
This patch fixes the issue by moving the ifdef inside of the relevant
if-else case, and always checking that the branch is in range,
regardless of CONFIG_ARM64_MODULE_PLTS. This ensures that we generate
the expected branch instruction, and also improves our sanity checks.
For consistency, both ftrace_make_nop() and ftrace_make_call() are
updated with this pattern.
Fixes: 687644209a ("arm64: ftrace: fix building without CONFIG_MODULES")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch defines an extra_context signal frame record that can be
used to describe an expanded signal frame, and modifies the context
block allocator and signal frame setup and parsing code to create,
populate, parse and decode this block as necessary.
To avoid abuse by userspace, parse_user_sigframe() attempts to
ensure that:
* no more than one extra_context is accepted;
* the extra context data is a sensible size, and properly placed
and aligned.
The extra_context data is required to start at the first 16-byte
aligned address immediately after the dummy terminator record
following extra_context in rt_sigframe.__reserved[] (as ensured
during signal delivery). This serves as a sanity-check that the
signal frame has not been moved or copied without taking the extra
data into account.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[will: add __force annotation when casting extra_datap to __user pointer]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When debugging a kernel panic(), it can be useful to know which CPU
features have been detected by the kernel, as some code paths can depend
on these (and may have been patched at runtime).
This patch adds a notifier to dump the detected CPU caps (as a hex
string) at panic(), when we log other information useful for debugging.
On a Juno R1 system running v4.12-rc5, this looks like:
[ 615.431249] Kernel panic - not syncing: Fatal exception in interrupt
[ 615.437609] SMP: stopping secondary CPUs
[ 615.441872] Kernel Offset: disabled
[ 615.445372] CPU features: 0x02086
[ 615.448522] Memory Limit: none
A developer can decode this by looking at the corresponding
<asm/cpucaps.h> bits. For example, the above decodes as:
* bit 1: ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
* bit 2: ARM64_WORKAROUND_845719
* bit 7: ARM64_WORKAROUND_834220
* bit 13: ARM64_HAS_32BIT_EL0
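A sketch of what such a panic notifier can look like (the function and
variable names here are assumptions, not the exact patch):
  static int dump_cpu_hwcaps(struct notifier_block *self,
                             unsigned long v, void *p)
  {
          /* cpu_hwcaps: bitmap of detected capabilities */
          pr_emerg("CPU features: 0x%*pb\n", ARM64_NCAPS, &cpu_hwcaps);
          return 0;
  }
  static struct notifier_block cpu_hwcaps_notifier = {
          .notifier_call = dump_cpu_hwcaps,
  };
  static int __init register_cpu_hwcaps_dumper(void)
  {
          atomic_notifier_chain_register(&panic_notifier_list,
                                         &cpu_hwcaps_notifier);
          return 0;
  }
  early_initcall(register_cpu_hwcaps_dumper);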
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When reading current's user-writable TLS register (which occurs
when dumping core for native tasks), it is possible that userspace
has modified it since the time the task was last scheduled out.
The new TLS register value is not guaranteed to have been written
immediately back to thread_struct in this case.
As a result, a coredump can capture stale data for this register.
Reading the register for a stopped task via ptrace is unaffected.
For native tasks, this patch explicitly flushes the TPIDR_EL0
register back to thread_struct before dumping when operating on
current, thus ensuring that coredump contents are up to date. For
compat tasks, the TLS register is not user-writable and so cannot
be out of sync, so no flush is required in compat_tls_get().
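The gist, sketched (the helper name is an assumption):
  void tls_preserve_current_state(void)
  {
          /* flush the live user-writable TLS register */
          *task_user_tls(current) = read_sysreg(tpidr_el0);
  }
and, in the regset .get() path:
  if (target == current)
          tls_preserve_current_state();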
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When reading the FPSIMD state of current (which occurs when dumping
core), it is possible that userspace has modified the FPSIMD
registers since the time the task was last scheduled out. Such
changes are not guaranteed to be reflected immediately in
thread_struct.
As a result, a coredump can contain stale values for these
registers. Reading the registers of a stopped task via ptrace is
unaffected.
This patch explicitly flushes the CPU state back to thread_struct
before dumping when operating on current, thus ensuring that
coredump contents are up to date.
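Sketch of the guard, reusing the existing context-switch helper:
  /* in the FPSIMD regset .get() path */
  if (target == current)
          fpsimd_preserve_current_state();   /* flush live FPSIMD regs */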
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, VFP registers are omitted from coredumps for compat
processes, due to a bug in the REGSET_COMPAT_VFP regset
implementation.
compat_vfp_get() needs to transfer non-contiguous data from
thread_struct.fpsimd_state, and uses put_user() to handle the
offending trailing word (FPSCR). This fails when copying to a
kernel address (i.e., kbuf && !ubuf), which is what happens when
dumping core. As a result, the ELF coredump core code silently
omits the NT_ARM_VFP note from the dump.
It would be possible to work around this with additional special
case code for the put_user(), but since user_regset_copyout() is
explicitly designed to handle this scenario it is cleaner to port
the put_user() to a user_regset_copyout() call, which this patch
does.
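The shape of the conversion (offsets expressed via VFP_STATE_SIZE, as
in the arm64 compat code; details are illustrative):
  compat_ulong_t fpscr = (uregs->fpsr & VFP_FPSCR_STAT_MASK) |
                         (uregs->fpcr & VFP_FPSCR_CTRL_MASK);
  /* unlike put_user(), this handles both kbuf and ubuf destinations */
  ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &fpscr,
                            VFP_STATE_SIZE - sizeof(compat_ulong_t),
                            VFP_STATE_SIZE);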
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge time(keeping) updates from John Stultz:
"Just a small set of changes, the biggest changes being the MONOTONIC_RAW
handling cleanup, and a new kselftest from Miroslav. Also a clear
warning deprecating CONFIG_GENERIC_TIME_VSYSCALL_OLD, which affects ppc
and ia64."
Now that we fixed the sub-ns handling for CLOCK_MONOTONIC_RAW,
remove the duplicative tk->raw_time.tv_nsec, which can be
stored in tk->tkr_raw.xtime_nsec (similarly to how it's handled
for monotonic time).
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Miroslav Lichvar <mlichvar@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Stephen Boyd <stephen.boyd@linaro.org>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Daniel Mentz <danielmentz@google.com>
Tested-by: Daniel Mentz <danielmentz@google.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
This patch factors out the allocator for signal frame optional
records into a separate function, to ensure consistency and
facilitate later expansion.
No overrun checking is currently done, because the allocation is in
user memory and, in any case, the kernel does not yet try to allocate
enough space in the signal frame for an overrun to occur. This
behaviour will be refined in future patches.
The approach taken in this patch to allocation of the terminator
record is not very clean: this will also be replaced in subsequent
patches.
For future extension, a comment is added in sigcontext.h
documenting the current static allocations in __reserved[]. This
will be important for determining under what circumstances
userspace may or may not see an expanded signal frame.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In preparation for expanding the signal frame, this patch refactors
the signal frame setup code in setup_sigframe() into two separate
passes.
The first pass, setup_sigframe_layout(), determines the size of the
signal frame and its internal layout, including the presence and
location of optional records. The resulting knowledge is used to
allocate and locate the user stack space required for the signal
frame and to determine which optional records to include.
The second pass, setup_sigframe(), is called once the stack frame
is allocated in order to populate it with the necessary context
information.
As a result of these changes, it becomes more natural to represent
locations in the signal frame by a base pointer and an offset,
since the absolute address of each location is not known during the
layout pass. To be more consistent with this logic,
parse_user_sigframe() is refactored to describe signal frame
locations in a similar way.
This change has no effect on the signal ABI, but will make it
easier to expand the signal frame in future patches.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, rt_sigreturn does very limited checking on the
sigcontext coming from userspace.
Future additions to the sigcontext data will increase the potential
for surprises. Also, it is not clear whether the sigcontext
extension records are supposed to occur in a particular order.
To allow the parsing code to be extended more easily, this patch
factors out the sigcontext parsing into a separate function, and
adds extra checks to validate the well-formedness of the sigcontext
structure.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order to be able to increase the amount of data currently
written to the __reserved[] array in the signal frame, it is
necessary to overwrite the locations currently occupied by the
{fp,lr} frame link record pushed at the top of the signal stack.
In order for this to work, this patch detaches the frame link
record from struct rt_sigframe and places it separately at the top
of the signal stack. This will allow subsequent patches to insert
data between it and __reserved[].
This change relies on the non-ABI status of the placement of the
frame record with respect to struct sigframe: this status is
undocumented, but the placement is not declared or described in the
user headers, and known unwinder implementations (libgcc,
libunwind, gdb) appear not to rely on it.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Conflicts:
kernel/sched/Makefile
Pick up the waitqueue related renames - it didn't get much feedback,
so it appears to be uncontroversial. Famous last words? ;-)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Recently vDSO support for CLOCK_MONOTONIC_RAW was added in
49eea433b3 ("arm64: Add support for CLOCK_MONOTONIC_RAW in
clock_gettime() vDSO"). Noticing that the core timekeeping code
never set tkr_raw.xtime_nsec, the vDSO implementation didn't
bother exposing it via the data page and instead took the
unshifted tk->raw_time.tv_nsec value which was then immediately
shifted left in the vDSO code.
Unfortunately, by accelerating the MONOTONIC_RAW clockid, it
uncovered potential 1ns time inconsistencies caused by the
timekeeping core not handling sub-ns resolution.
Now that the core code has been fixed and is actually setting
tkr_raw.xtime_nsec, we need to take that into account in the
vDSO by adding it to the shifted raw_time value, in order to
fix the user-visible inconsistency. Rather than do that at each
use (and expand the data page in the process), instead perform
the shift/addition operation when populating the data page and
remove the shift from the vDSO code entirely.
[jstultz: minor whitespace tweak, tried to improve commit
message to make it more clear this fixes a regression]
Reported-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Tested-by: Daniel Mentz <danielmentz@google.com>
Acked-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Stephen Boyd <stephen.boyd@linaro.org>
Cc: "stable #4 . 8+" <stable@vger.kernel.org>
Cc: Miroslav Lichvar <mlichvar@redhat.com>
Link: http://lkml.kernel.org/r/1496965462-20003-4-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The kernel watchdog is a great debugging tool for finding tasks that
consume a disproportionate amount of CPU time in contiguous chunks. One
can imagine building a similar watchdog for arbitrary driver threads
using save_stack_trace_tsk() and print_stack_trace(). However, this is
not viable for dynamically loaded driver modules on ARM platforms
because save_stack_trace_tsk() is not exported for those architectures.
Export save_stack_trace_tsk() for the ARM64 architecture to align with
x86 and support various debugging use cases such as arbitrary driver
thread watchdog timers.
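The change itself amounts to a one-liner in
arch/arm64/kernel/stacktrace.c:
  EXPORT_SYMBOL_GPL(save_stack_trace_tsk);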
Signed-off-by: Dustin Brown <dustinb@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Some Cavium Thunder CPUs suffer a problem where a KVM guest may
inadvertently cause the host kernel to quit receiving interrupts.
Use the Group-0/1 trapping in order to deal with it.
[maz]: Adapted patch to the Group-0/1 trapping, reworked commit log
Tested-by: Alexander Graf <agraf@suse.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
The function is renamed to 'timer_probe' for consistency with
the CLOCKSOURCE_OF_DECLARE => TIMER_OF_DECLARE change.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
When CONFIG_MODULES is disabled, we cannot dereference a module pointer:
arch/arm64/kernel/ftrace.c: In function 'ftrace_make_call':
arch/arm64/kernel/ftrace.c:107:36: error: dereferencing pointer to incomplete type 'struct module'
trampoline = (unsigned long *)mod->arch.ftrace_trampoline;
Also, the within_module() function is not defined:
arch/arm64/kernel/ftrace.c: In function 'ftrace_make_nop':
arch/arm64/kernel/ftrace.c:171:8: error: implicit declaration of function 'within_module'; did you mean 'init_module'? [-Werror=implicit-function-declaration]
This addresses both issues by replacing the IS_ENABLED(CONFIG_ARM64_MODULE_PLTS)
checks with #ifdef versions.
Fixes: e71a4e1beb ("arm64: ftrace: add support for far branches to dynamic ftrace")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, dynamic ftrace support in the arm64 kernel assumes that all
core kernel code is within range of ordinary branch instructions that
occur in module code, which is usually the case, but is no longer
guaranteed now that we have support for module PLTs and address space
randomization.
Since on arm64, all patching of branch instructions involves function
calls to the same entry point [ftrace_caller()], we can emit the modules
with a trampoline that has unlimited range, and patch both the trampoline
itself and the branch instruction to redirect the call via the trampoline.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: minor clarification to smp_wmb() comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When turning branch instructions into NOPs, we attempt to validate the
action by comparing the old value at the call site with the opcode of
a direct relative branch instruction pointing at the old target.
However, these call sites are statically initialized to call _mcount(),
and may be redirected via a PLT entry if the module is loaded far away
from the kernel text, leading to false negatives and spurious errors.
So skip the validation if CONFIG_ARM64_MODULE_PLTS is configured.
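The gist of the change, sketched (the final argument of
ftrace_modify_code() is its 'validate' flag):
  /* branch validation is only meaningful without module PLTs */
  return ftrace_modify_code(pc, old, new,
                            !IS_ENABLED(CONFIG_ARM64_MODULE_PLTS));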
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Wire up the existing arm64 support for SMBIOS tables (aka DMI) for ARM as
well, by moving the arm64 init code to drivers/firmware/efi/arm-runtime.c
(which is shared between ARM and arm64), and adding a asm/dmi.h header to
ARM that defines the mapping routines for the firmware tables.
This allows userspace to access these tables to discover system information
exposed by the firmware. It also sets the hardware name used in crash
dumps, e.g.:
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = ed3c0000
[00000000] *pgd=bf1f3835
Internal error: Oops: 817 [#1] SMP THUMB2
Modules linked in:
CPU: 0 PID: 759 Comm: bash Not tainted 4.10.0-09601-g0e8f38792120-dirty #112
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
^^^
NOTE: This does *NOT* enable or encourage the use of DMI quirks, i.e.,
the practice of identifying the platform via DMI to decide whether
certain workarounds for buggy hardware and/or firmware need to be
enabled. This would require the DMI subsystem to be enabled much
earlier than we do on ARM, which is non-trivial.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20170602135207.21708-14-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 3fde2999fa ("arm64: cpufeature: Don't dump useless backtrace on
CPU_OUT_OF_SPEC") changed the cpufeature detection code to use add_taint
instead of WARN_TAINT_ONCE when detecting a heterogeneous system with
mismatched feature support. Unfortunately, this resulted in all systems
getting the taint, regardless of any feature mismatch.
This patch fixes the problem by conditionalising the taint on detecting
a feature mismatch.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that some functions that deal with arch topology information live
under drivers, there is a clash of naming that might create confusion.
Tidy things up by creating a topology namespace for interfaces used by
arch code; achieve this by prepending a 'topology_' prefix to driver
interfaces.
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Create a new header file (include/linux/arch_topology.h) and put there
declarations of interfaces used by arm, arm64 and drivers code.
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reduce the scope of cap_parsing_failed (making it static in
drivers/base/arch_topology.c) by slightly changing {arm,arm64} DT
parsing code.
For arm, checking for !cap_parsing_failed before calling
normalize_cpu_capacity() is superfluous, as returning an error from
parse_cpu_capacity() (above) means cap_from_dt is set to false.
For arm64 we can simply check whether raw_capacity points to
something, which it does not if capacity parsing has failed.
Suggested-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arm and arm64 share a lot of code related to parsing CPU capacity
information from DT, using that information for appropriate scaling and
exposing a sysfs interface for changing such values at runtime.
Factor out such code into a common place (drivers/base/arch_topology.c)
in preparation for further additions.
Suggested-by: Will Deacon <will.deacon@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Perf has supported the ARMv8.1 feature with a 16-bit evtCount field [see
c210ae8 arm64: perf: Extend event mask for ARMv8.1], so the event config
should be extended to 16 bits too; otherwise, if '-e event_name' is used
with an event_code greater than 0x3ff, pmu_config_term will return
-EINVAL because pmu_format_max_value depends on the event config.
This patch extends the event config to 16 bits.
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
PCI core requires the NUMA node for the struct pci_host_bridge.dev to
be set by using the pcibus_to_node(struct pci_bus*) API, which on ARM64
systems relies on the struct pci_host_bridge->bus.dev NUMA node.
The struct pci_host_bridge.dev NUMA node is then propagated through
the PCI device hierarchy as PCI devices (and bridges) are enumerated
under it.
Therefore, in order to set up the PCI NUMA hierarchy appropriately, the
struct pci_host_bridge->bus.dev NUMA node must be set before core
code calls pcibus_to_node(struct pci_bus*) on it so that PCI core can
retrieve the NUMA node for the struct pci_host_bridge.dev device and can
propagate it through the PCI bus tree.
On ARM64 ACPI based systems the struct pci_host_bridge->bus.dev NUMA
node can be set up in pcibios_root_bridge_prepare() by parsing the root
bridge ACPI device firmware binding.
Add code to pcibios_root_bridge_prepare() that, when booting with
ACPI, parses the root bridge ACPI device companion NUMA binding and sets
the corresponding struct pci_host_bridge->bus.dev NUMA node
appropriately.
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
It's useless to print the machine name and set up arch-specific system
identifiers if of_flat_dt_get_machine_name() returns NULL, especially
for an ACPI-based boot.
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Unfortunately, it turns out that mismatched CPU features in big.LITTLE
systems are starting to appear in the wild. Whilst we should continue to
taint the kernel with CPU_OUT_OF_SPEC for features that differ in ways
that we can't fix up, dumping a useless backtrace out of the cpufeature
code is pointless and irritating.
This patch removes the backtrace from the taint.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Generic code expects show_regs() to dump the stack, but arm64's
show_regs() does not. This makes it hard to debug softlockups and
other issues that result in show_regs() being called.
This patch updates arm64's show_regs() to dump the stack, as common
code expects.
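The resulting function, sketched:
  void show_regs(struct pt_regs *regs)
  {
          __show_regs(regs);            /* register dump, as before */
          dump_backtrace(regs, NULL);   /* now also dump the stack  */
  }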
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
[will: folded in bug_handler fix from mrutland]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Generic code expects show_regs() to also dump the stack, but arm64's
show_regs() does not do this. Some arm64 callers of show_regs() *only*
want the registers dumped, without the stack.
To enable generic code to work as expected, we need to make
show_regs() dump the stack. Where we only want the registers dumped,
we must use __show_regs().
This patch updates code to use __show_regs() where only registers are
desired. A subsequent patch will modify show_regs().
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The text patching functions which are invoked from jump_label and kprobes
code are protected against cpu hotplug at the call sites.
Use stop_machine_cpuslocked() to avoid recursion on the cpu hotplug
rwsem. stop_machine_cpuslocked() contains a lockdep assertion to catch any
unprotected callers.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20170524081549.197070135@linutronix.de
Commit 093d24a204 ("arm64: PCI: Manage controller-specific data on
per-controller basis") added code to allocate ACPI PCI root_ops
dynamically on a per host bridge basis but failed to update the
corresponding memory allocation failure path in pci_acpi_scan_root()
leading to a potential memory leakage.
Fix it by adding the required kfree call.
Fixes: 093d24a204 ("arm64: PCI: Manage controller-specific data on per-controller basis")
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
[lorenzo.pieralisi@arm.com: refactored code, rewrote commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To enable smp_processor_id() and might_sleep() debug checks earlier, it's
required to add system states between SYSTEM_BOOTING and SYSTEM_RUNNING.
Adjust the system_state check in smp_send_stop() to handle the extra states.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/20170516184735.112589728@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
must take the jump_label mutex.
We call cpus_set_cap() in the secondary bringup path, from the idle
thread where interrupts are disabled. Taking a mutex in this path "is a
NONO" regardless of whether it's contended, and something we must avoid.
We didn't spot this until recently, as ___might_sleep() won't warn for
this case until all CPUs have been brought up.
This patch avoids taking the mutex in the secondary bringup path. The
poking of static keys is deferred until enable_cpu_capabilities(), which
runs in a suitable context on the boot CPU. To account for the static
keys being set later, cpus_have_const_cap() is updated to use another
static key to check whether the const cap keys have been initialised,
falling back to the caps bitmap until this is the case.
This means that users of cpus_have_const_cap() should only gain a
single additional NOP in the fast path once the const caps are
initialised, but will always see the current cap value.
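A sketch of the fallback logic (the key name is an assumption):
  static __always_inline bool cpus_have_const_cap(int num)
  {
          /* const cap keys are poked later, on the boot CPU */
          if (static_branch_likely(&arm64_const_caps_ready))
                  return __cpus_have_const_cap(num);  /* static keys */
          return cpus_have_cap(num);                  /* caps bitmap */
  }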
The hyp code should never dereference the caps array, since the caps are
initialized before we run the module initcall to initialise hyp. A check
is added to the hyp init code to document this requirement.
This change will sidestep a number of issues when the upcoming hotplug
locking rework is merged.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Sewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
commit d98ecdaca2 ("arm64: perf: Count EL2 events if the kernel is
running in HYP") returns -EINVAL when the perf system call
perf_event_open is called with exclude_hv != exclude_kernel. This
change breaks applications on VHE-enabled ARMv8.1 platforms. The issue
was observed with the HHVM application, which calls perf_event_open
with exclude_hv = 1 and
exclude_kernel = 0.
There is no separate hypervisor privilege level when VHE is enabled, the
host kernel runs at EL2. So when VHE is enabled, we should ignore
exclude_hv from the application. This behaviour is consistent with PowerPC
where the exclude_hv is ignored when the hypervisor is not present and with
x86 where this flag is ignored.
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
[will: added comment to justify the behaviour of exclude_hv]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull more arm64 updates from Catalin Marinas:
- Silence module allocation failures when CONFIG_ARM*_MODULE_PLTS is
enabled. This requires a check for __GFP_NOWARN in alloc_vmap_area()
- Improve/sanitise user tagged pointers handling in the kernel
- Inline asm fixes/cleanups
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Silence first allocation with CONFIG_ARM64_MODULE_PLTS=y
ARM: Silence first allocation with CONFIG_ARM_MODULE_PLTS=y
mm: Silence vmap() allocation failures based on caller gfp_flags
arm64: uaccess: suppress spurious clang warning
arm64: atomic_lse: match asm register sizes
arm64: armv8_deprecated: ensure extension of addr
arm64: uaccess: ensure extension of access_ok() addr
arm64: ensure extension of smp_store_release value
arm64: xchg: hazard against entire exchange variable
arm64: documentation: document tagged pointer stack constraints
arm64: entry: improve data abort handling of tagged pointers
arm64: hw_breakpoint: fix watchpoint matching for tagged pointers
arm64: traps: fix userspace cache maintenance emulation on a tagged pointer
When CONFIG_ARM64_MODULE_PLTS is enabled, the first allocation using the
module space fails, because the module is too big, and then the module
allocation is attempted from vmalloc space. Silence the first allocation
failure in that case by setting __GFP_NOWARN.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Our compat swp emulation holds the compat user address in an unsigned
int, which it passes to __user_swpX_asm(). When a 32-bit value is passed
in a register, the upper 32 bits of the register are unknown, and we
must extend the value to 64 bits before we can use it as a base address.
This patch casts the address to unsigned long to ensure it has been
suitably extended, avoiding the potential issue, and silencing a related
warning from clang.
Fixes: bd35a4adc4 ("arm64: Port SWP/SWPB emulation support from arm")
Cc: <stable@vger.kernel.org> # 3.19.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When handling a data abort from EL0, we currently zero the top byte of
the faulting address, as we assume the address is a TTBR0 address, which
may contain a non-zero address tag. However, the address may be a TTBR1
address, in which case we should not zero the top byte. This patch fixes
that. The effect is that the full TTBR1 address is passed to the task's
signal handler (or printed out in the kernel log).
When handling a data abort from EL1, we leave the faulting address
intact, as we assume it's either a TTBR1 address or a TTBR0 address with
tag 0x00. This is true as far as I'm aware, we don't seem to access a
tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to
forget about address tags, and code added in the future may not always
remember to remove tags from addresses before accessing them. So add tag
handling to the EL1 data abort handler as well. This also makes it
consistent with the EL0 data abort handler.
Fixes: d50240a5f6 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When we take a watchpoint exception, the address that triggered the
watchpoint is found in FAR_EL1. We compare it to the address of each
configured watchpoint to see which one was hit.
The configured watchpoint addresses are untagged, while the address in
FAR_EL1 will have an address tag if the data access was done using a
tagged address. The tag needs to be removed to compare the address to
the watchpoints.
Currently we don't remove it, and as a result can report the wrong
watchpoint as being hit (specifically, always either the highest TTBR0
watchpoint or lowest TTBR1 watchpoint). This patch removes the tag.
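The fix amounts to stripping the tag before the comparison, e.g.:
  /* FAR_EL1 may carry a tag in bits [63:56]; remove it before
   * comparing against the (untagged) watchpoint addresses */
  addr = untagged_addr(addr);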
Fixes: d50240a5f6 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When we emulate userspace cache maintenance in the kernel, we can
currently send the task a SIGSEGV even though the maintenance was done
on a valid address. This happens if the address has a non-zero address
tag, and happens to not be mapped in.
When we get the address from a user register, we don't currently remove
the address tag before performing cache maintenance on it. If the
maintenance faults, we end up in either __do_page_fault, where find_vma
can't find the VMA if the address has a tag, or in do_translation_fault,
where the tagged address will appear to be above TASK_SIZE. In both
cases, the address is not mapped in, and the task is sent a SIGSEGV.
This patch removes the tag from the address before using it. With this
patch, the fault is handled correctly, the address gets mapped in, and
the cache maintenance succeeds.
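Sketch of the fix in the emulation path (register-index handling shown
as in the existing code):
  /* strip the address tag before operating on the address */
  address = (rt == 31) ? 0 : untagged_addr(regs->regs[rt]);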
As a second bug, if cache maintenance (correctly) fails on an invalid
tagged address, the address gets passed into arm64_notify_segfault,
where find_vma fails to find the VMA due to the tag, and the wrong
si_code may be sent as part of the siginfo_t of the segfault. With this
patch, the correct si_code is sent.
Fixes: 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
Cc: <stable@vger.kernel.org> # 4.8.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
"ARM:
- HYP mode stub supports kexec/kdump on 32-bit
- improved PMU support
- virtual interrupt controller performance improvements
- support for userspace virtual interrupt controller (slower, but
necessary for KVM on the weird Broadcom SoCs used by the Raspberry
Pi 3)
MIPS:
- basic support for hardware virtualization (ImgTec P5600/P6600/I6400
and Cavium Octeon III)
PPC:
- in-kernel acceleration for VFIO
s390:
- support for guests without storage keys
- adapter interruption suppression
x86:
- usual range of nVMX improvements, notably nested EPT support for
accessed and dirty bits
- emulation of CPL3 CPUID faulting
generic:
- first part of VCPU thread request API
- kvm_stat improvements"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (227 commits)
kvm: nVMX: Don't validate disabled secondary controls
KVM: put back #ifndef CONFIG_S390 around kvm_vcpu_kick
Revert "KVM: Support vCPU-based gfn->hva cache"
tools/kvm: fix top level makefile
KVM: x86: don't hold kvm->lock in KVM_SET_GSI_ROUTING
KVM: Documentation: remove VM mmap documentation
kvm: nVMX: Remove superfluous VMX instruction fault checks
KVM: x86: fix emulation of RSM and IRET instructions
KVM: mark requests that need synchronization
KVM: return if kvm_vcpu_wake_up() did wake up the VCPU
KVM: add explicit barrier to kvm_vcpu_kick
KVM: perform a wake_up in kvm_make_all_cpus_request
KVM: mark requests that do not need a wakeup
KVM: remove #ifndef CONFIG_S390 around kvm_vcpu_wake_up
KVM: x86: always use kvm_make_request instead of set_bit
KVM: add kvm_{test,clear}_request to replace {test,clear}_bit
s390: kvm: Cpu model support for msa6, msa7 and msa8
KVM: x86: remove irq disablement around KVM_SET_CLOCK/KVM_GET_CLOCK
kvm: better MWAIT emulation for guests
KVM: x86: virtualize cpuid faulting
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- kdump support, including two necessary memblock additions:
memblock_clear_nomap() and memblock_cap_memory_range()
- ARMv8.3 HWCAP bits for JavaScript conversion instructions, complex
numbers and weaker release consistency
- arm64 ACPI platform MSI support
- arm perf updates: ACPI PMU support, L3 cache PMU in some Qualcomm
SoCs, Cortex-A53 L2 cache events and DTLB refills, MAINTAINERS update
for DT perf bindings
- architected timer errata framework (the arch/arm64 changes only)
- support for DMA_ATTR_FORCE_CONTIGUOUS in the arm64 iommu DMA API
- arm64 KVM refactoring to use common system register definitions
- remove support for ASID-tagged VIVT I-cache (no ARMv8 implementation
using it and deprecated in the architecture) together with some
I-cache handling clean-up
- PE/COFF EFI header clean-up/hardening
- define BUG() instruction without CONFIG_BUG
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
arm64: Fix the DMA mmap and get_sgtable API with DMA_ATTR_FORCE_CONTIGUOUS
arm64: Print DT machine model in setup_machine_fdt()
arm64: pmu: Wire-up Cortex A53 L2 cache events and DTLB refills
arm64: module: split core and init PLT sections
arm64: pmuv3: handle pmuv3+
arm64: Add CNTFRQ_EL0 trap handler
arm64: Silence spurious kbuild warning on menuconfig
arm64: pmuv3: use arm_pmu ACPI framework
arm64: pmuv3: handle !PMUv3 when probing
drivers/perf: arm_pmu: add ACPI framework
arm64: add function to get a cpu's MADT GICC table
drivers/perf: arm_pmu: split out platform device probe logic
drivers/perf: arm_pmu: move irq request/free into probe
drivers/perf: arm_pmu: split cpu-local irq request/free
drivers/perf: arm_pmu: rename irq request/free functions
drivers/perf: arm_pmu: handle no platform_device
drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs()
drivers/perf: arm_pmu: factor out pmu registration
drivers/perf: arm_pmu: fold init into alloc
drivers/perf: arm_pmu: define armpmu_init_fn
...
Pull networking updates from David Miller:
"Here are some highlights from the 2065 networking commits that
happened this development cycle:
1) XDP support for IXGBE (John Fastabend) and thunderx (Sunil Kowuri)
2) Add a generic XDP driver, so that anyone can test XDP even if they
lack a networking device whose driver has explicit XDP support
(me).
3) Sparc64 now has an eBPF JIT too (me)
4) Add a BPF program testing framework via BPF_PROG_TEST_RUN (Alexei
Starovoitov)
5) Make netfitler network namespace teardown less expensive (Florian
Westphal)
6) Add symmetric hashing support to nft_hash (Laura Garcia Liebana)
7) Implement NAPI and GRO in netvsc driver (Stephen Hemminger)
8) Support TC flower offload statistics in mlxsw (Arkadi Sharshevsky)
9) Multiqueue support in stmmac driver (Joao Pinto)
10) Remove TCP timewait recycling, it never really could possibly work
well in the real world and timestamp randomization really zaps any
hint of usability this feature had (Soheil Hassas Yeganeh)
11) Support level3 vs level4 ECMP route hashing in ipv4 (Nikolay
Aleksandrov)
12) Add socket busy poll support to epoll (Sridhar Samudrala)
13) Netlink extended ACK support (Johannes Berg, Pablo Neira Ayuso,
and several others)
14) IPSEC hw offload infrastructure (Steffen Klassert)"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (2065 commits)
tipc: refactor function tipc_sk_recv_stream()
tipc: refactor function tipc_sk_recvmsg()
net: thunderx: Optimize page recycling for XDP
net: thunderx: Support for XDP header adjustment
net: thunderx: Add support for XDP_TX
net: thunderx: Add support for XDP_DROP
net: thunderx: Add basic XDP support
net: thunderx: Cleanup receive buffer allocation
net: thunderx: Optimize CQE_TX handling
net: thunderx: Optimize RBDR descriptor handling
net: thunderx: Support for page recycling
ipx: call ipxitf_put() in ioctl error path
net: sched: add helpers to handle extended actions
qed*: Fix issues in the ptp filter config implementation.
qede: Fix concurrency issue in PTP Tx path processing.
stmmac: Add support for SIMATIC IOT2000 platform
net: hns: fix ethtool_get_strings overflow in hns driver
tcp: fix wraparound issue in tcp_lp
bpf, arm64: fix jit branch offset related to ldimm64
bpf, arm64: implement jiting of BPF_XADD
...
Pull EFI updates from Ingo Molnar:
"The main changes in this cycle were:
- move BGRT handling to drivers/acpi so it can be shared between x86
and ARM
- bring the EFI stub's initrd and FDT allocation logic in line with
the latest changes to the arm64 boot protocol
- improvements and fixes to the EFI stub's command line parsing
routines
- randomize the virtual mapping of the UEFI runtime services on
ARM/arm64
- ... and other misc enhancements, cleanups and fixes"
* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
efi/libstub/arm: Don't use TASK_SIZE when randomizing the RT space
efi/libstub/arm/arm64: Randomize the base of the UEFI rt services region
efi/libstub/arm/arm64: Disable debug prints on 'quiet' cmdline arg
efi/libstub: Unify command line param parsing
efi/libstub: Fix harmless command line parsing bug
efi/arm32-stub: Allow boot-time allocations in the vmlinux region
x86/efi: Clean up a minor mistake in comment
efi/pstore: Return error code (if any) from efi_pstore_write()
efi/bgrt: Enable ACPI BGRT handling on arm64
x86/efi/bgrt: Move efi-bgrt handling out of arch/x86
efi/arm-stub: Round up FDT allocation to mapping size
efi/arm-stub: Correct FDT and initrd allocation rules for arm64
Pull timer updates from Thomas Gleixner:
"The timer departement delivers:
- more year 2038 rework
- a massive rework of the arm achitected timer
- preparatory patches to allow NTP correction of clock event devices
to avoid early expiry
- the usual pile of fixes and enhancements all over the place"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (91 commits)
timer/sysclt: Restrict timer migration sysctl values to 0 and 1
arm64/arch_timer: Mark errata handlers as __maybe_unused
Clocksource/mips-gic: Remove redundant non devicetree init
MIPS/Malta: Probe gic-timer via devicetree
clocksource: Use GENMASK_ULL in definition of CLOCKSOURCE_MASK
acpi/arm64: Add SBSA Generic Watchdog support in GTDT driver
clocksource: arm_arch_timer: add GTDT support for memory-mapped timer
acpi/arm64: Add memory-mapped timer support in GTDT driver
clocksource: arm_arch_timer: simplify ACPI support code.
acpi/arm64: Add GTDT table parse driver
clocksource: arm_arch_timer: split MMIO timer probing.
clocksource: arm_arch_timer: add structs to describe MMIO timer
clocksource: arm_arch_timer: move arch_timer_needs_of_probing into DT init call
clocksource: arm_arch_timer: refactor arch_timer_needs_probing
clocksource: arm_arch_timer: split dt-only rate handling
x86/uv/time: Set ->min_delta_ticks and ->max_delta_ticks
unicore32/time: Set ->min_delta_ticks and ->max_delta_ticks
um/time: Set ->min_delta_ticks and ->max_delta_ticks
tile/time: Set ->min_delta_ticks and ->max_delta_ticks
score/time: Set ->min_delta_ticks and ->max_delta_ticks
...
Pull uaccess unification updates from Al Viro:
"This is the uaccess unification pile. It's _not_ the end of uaccess
work, but the next batch of that will go into the next cycle. This one
mostly takes copy_from_user() and friends out of arch/* and gets the
zero-padding behaviour in sync for all architectures.
Dealing with the nocache/writethrough mess is for the next cycle;
fortunately, that's x86-only. Same for cleanups in iov_iter.c (I am
sold on access_ok() in there, BTW; just not in this pile), same for
reducing __copy_... callsites, strn*... stuff, etc. - there will be a
pile about as large as this one in the next merge window.
This one sat in -next for weeks. -3KLoC"
* 'work.uaccess' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (96 commits)
HAVE_ARCH_HARDENED_USERCOPY is unconditional now
CONFIG_ARCH_HAS_RAW_COPY_USER is unconditional now
m32r: switch to RAW_COPY_USER
hexagon: switch to RAW_COPY_USER
microblaze: switch to RAW_COPY_USER
get rid of padding, switch to RAW_COPY_USER
ia64: get rid of copy_in_user()
ia64: sanitize __access_ok()
ia64: get rid of 'segment' argument of __do_{get,put}_user()
ia64: get rid of 'segment' argument of __{get,put}_user_check()
ia64: add extable.h
powerpc: get rid of zeroing, switch to RAW_COPY_USER
esas2r: don't open-code memdup_user()
alpha: fix stack smashing in old_adjtimex(2)
don't open-code kernel_setsockopt()
mips: switch to RAW_COPY_USER
mips: get rid of tail-zeroing in primitives
mips: make copy_from_user() zero tail explicitly
mips: clean and reorder the forest of macros...
mips: consolidate __invoke_... wrappers
...
On arm32, the machine model specified in the device tree is printed
during boot-up, courtesy of of_flat_dt_match_machine().
On arm64, of_flat_dt_match_machine() is not called, and the machine
model information is not available from the kernel log.
Print the machine model to make it easier to derive the machine model
from an arbitrary kernel boot log.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add missing L2 cache events: read/write accesses and misses, as well as
the DTLB refills.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 module PLT code allocates all PLT entries in a single core
section, since the overhead of having a separate init PLT section is
not justified by the small number of PLT entries usually required for
init code.
However, the core and init module regions are allocated independently,
and there is a corner case where the core region may be allocated from
the VMALLOC region if the dedicated module region is exhausted, but the
init region, being much smaller, can still be allocated from the module
region. This leads to relocation failures if the distance between those
regions exceeds 128 MB. (In fact, this corner case is highly unlikely to
occur on arm64, but the issue has been observed on ARM, whose module
region is much smaller).
So split the core and init PLT regions, and name the latter ".init.plt"
so it gets allocated along with (and sufficiently close to) the .init
sections that it serves. Also, given that init PLT entries may need to
be emitted for branches that target the core module, modify the logic
that disregards defined symbols to only disregard symbols that are
defined in the same section as the relocated branch instruction.
Since there may now be two PLT entries associated with each entry in
the symbol table, we can no longer hijack the symbol::st_size fields
to record the addresses of PLT entries as we emit them for zero-addend
relocations. So instead, perform an explicit comparison to check for
duplicate entries.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit f1b36dcb5c ("arm64: pmuv3: handle !PMUv3 when probing") is
a little too restrictive, and prevents the use of backwards-compatible
PMUv3 extensions, which have a PMUver value other than 1.
For instance, ARMv8.1 PMU extensions (as implemented by ThunderX2) are
reported with PMUver value 4.
Per the usual ID register principles, at least 0x1-0x7 imply a
PMUv3-compatible PMU. It's not currently clear whether 0x8-0xe imply the
same.
For the time being, treat the value as signed, and with 0x1-0x7 treated
as meaning PMUv3 is implemented. This may be relaxed by future patches.
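A sketch of the probe-time check, assuming the usual cpufeature
helpers (the function name here is illustrative): reading the 4-bit
field as signed makes 0x8-0xf come back negative, so only 0x1-0x7
pass.

static int armv8pmu_check_pmuver(void)
{
	u64 dfr0 = read_sysreg(id_aa64dfr0_el1);
	int pmuver = cpuid_feature_extract_signed_field(dfr0,
					ID_AA64DFR0_PMUVER_SHIFT);

	/* 0x0 means no PMU; 0x8-0xf read back as negative values */
	if (pmuver < 1)
		return -ENODEV;
	return pmuver;
}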
Reported-by: Jayachandran C <jnair@caviumnetworks.com>
Tested-by: Jayachandran C <jnair@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We now trap accesses to CNTVCT_EL0 when the counter is broken
enough to require the kernel to mediate the access. But it
turns out that some existing userspace (such as OpenMPI) does
probe for the counter frequency, leading to an UNDEF exception
as CNTVCT_EL0 and CNTFRQ_EL0 share the same control bit.
The fix is to handle the exception the same way we do for CNTVCT_EL0.
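A sketch of the added handler, mirroring the CNTVCT_EL0 one (the Rt
extraction follows asm/esr.h; treat the hook wiring as illustrative):

static void cntfrq_read_handler(unsigned int esr, struct pt_regs *regs)
{
	int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >>
		 ESR_ELx_SYS64_ISS_RT_SHIFT;

	/* Hand back the timer frequency and step over the MRS */
	pt_regs_write_reg(regs, rt, arch_timer_get_rate());
	regs->pc += 4;
}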
Fixes: a86bd139f2 ("arm64: arch_timer: Enable CNTVCT_EL0 trap if workaround is enabled")
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we have a framework to handle the ACPI bits, make the PMUv3
code use this. The framework is a little different to what was
originally envisaged, and we can drop some unused support code in the
process of moving over to it.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: make armv8_pmu_driver_init static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When probing via ACPI, we won't know up-front whether a CPU has a PMUv3
compatible PMU. Thus we need to consult ID registers during probe time.
This patch updates our PMUv3 probing code to test for the presence of
PMUv3 functionality before touching any PMUv3-specific registers, and
before updating the struct arm_pmu with PMUv3 data.
When a PMUv3-compatible PMU is not present, probing will return -ENODEV.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently the ACPI parking protocol code needs to parse each CPU's MADT
GICC table to extract the mailbox address and so on. Each time we parse
a GICC table, we call back to the parking protocol code to parse it.
This has been fine so far, but we're about to have more code that needs
to extract data from the GICC tables, and adding a callback for each
user is going to get unwieldy.
Instead, this patch ensures that we stash a copy of each CPU's GICC
table at boot time, such that anything needing to parse it can later
request it. This will allow for other parsers of GICC, and for
simplification to the ACPI parking protocol code. Note that we must
store a copy, rather than a pointer, since the core ACPI code
temporarily maps/unmaps tables while iterating over them.
Since we parse the MADT before we know how many CPUs we have (and hence
before we set up the percpu areas), we must use an NR_CPUS-sized array.
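Sketched out, the stash can be as simple as the following (the
accessor name is illustrative of the shape, not a promise of the
final API):

static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];

struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
{
	return &cpu_madt_gicc[cpu];
}

static void __init stash_madt_gicc(int cpu,
				   struct acpi_madt_generic_interrupt *gicc)
{
	/* Deep copy while the MADT is still mapped by the ACPI core */
	cpu_madt_gicc[cpu] = *gicc;
}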
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We now return HVC_STUB_ERR when a stub hypercall fails, but we
leave whatever was in x0 on success. Zeroing it on return seems
like a good idea.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Nobody is using __hyp_get_vectors anymore, so let's remove both
implementations (hyp-stub and KVM).
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Let's define a new stub hypercall that resets the HYP configuration
to its default: hyp-stub vectors, and MMU disabled.
Of course, for the hyp-stub itself, this is a trivial no-op.
Hypervisors will have a bit more work to do.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Define a standard return value to be returned when a hyp stub
call fails, and make KVM use it for ARM_EXCEPTION_HYP_GONE
(instead of using a KVM-specific value).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
The EL2 code is not corrupting lr anymore, so don't bother preserving
it in the EL1 trampoline code.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
When entering the kernel hyp stub, we check whether or not we've
made it here through an HVC instruction, clobbering lr (aka x30)
in the process.
This is completely pointless, as HVC is the only way to get here
(all traps to EL2 are disabled, no interrupt override is applied).
So let's remove this bit of code whose only point is to corrupt
a valuable register.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Merge tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core
Pre-requisites for the arch timer errata workarounds:
- Allow checking of a CPU-local erratum
- Add CNTVCT_EL0 trap handler
- Define Cortex-A73 MIDR
- Allow an erratum to be matched for all revisions of a core
- Add capability to advertise Cortex-A73 erratum 858921
* tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921
arm64: cpu_errata: Allow an erratum to be matched for all revisions of a core
arm64: Define Cortex-A73 MIDR
arm64: Add CNTVCT_EL0 trap handler
arm64: Allow checking of a CPU-local erratum
In order to work around Cortex-A73 erratum 858921 in a subsequent
patch, add the required capability that advertises the erratum.
As the configuration option it depends on is not present yet,
this has no immediate effect.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Some minor errata may not be fixed in further revisions of a core,
leading to a situation where the workaround needs to be updated each
time an updated core is released.
Introduce a MIDR_ALL_VERSIONS match helper that will work for all
versions of that MIDR, once and for all.
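Roughly, the helper expands to a capability match covering the full
variant/revision space; the field names below are illustrative of
the arm64_cpu_capabilities layout rather than exact:

#define MIDR_ALL_VERSIONS(model)				\
	.def_scope = SCOPE_LOCAL_CPU,				\
	.matches = is_affected_midr_range,			\
	.midr_model = model,					\
	.midr_range_min = 0,					\
	.midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)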
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Since people seem to make a point of breaking the userspace-visible
counter, we have no choice but to trap the access. Add the required
handler.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
this_cpu_has_cap() only checks the feature array, and not the errata
one. In order to be able to check for a CPU-local erratum, allow it
to inspect the latter as well.
This is consistent with cpus_have_cap()'s behaviour, which includes
errata already.
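A sketch of the resulting lookup, searching the features array and
then the errata array with local-CPU scope (array and scope names as
commonly used in cpufeature.c/cpu_errata.c; details illustrative):

static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *caps,
			       unsigned int cap)
{
	for (; caps->matches; caps++)
		if (caps->capability == cap &&
		    caps->matches(caps, SCOPE_LOCAL_CPU))
			return true;
	return false;
}

bool this_cpu_has_cap(unsigned int cap)
{
	return __this_cpu_has_cap(arm64_features, cap) ||
	       __this_cpu_has_cap(arm64_errata, cap);
}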
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Arch-specific functions are added to allow for implementing a crash dump
file interface, /proc/vmcore, which can be viewed as an ELF file.
A user space tool, like kexec-tools, is responsible for allocating
a separate region for the core's ELF header within crash kdump kernel
memory and filling it in when executing kexec_load().
Then, its location will be advertised to crash dump kernel via a new
device-tree property, "linux,elfcorehdr", and crash dump kernel preserves
the region for later use with reserve_elfcorehdr() at boot time.
On crash dump kernel, /proc/vmcore will access the primary kernel's memory
with copy_oldmem_page(), which feeds the data page-by-page by ioremap'ing
it since it does not reside in linear mapping on crash dump kernel.
Meanwhile, elfcorehdr_read() is simple as the region is always mapped.
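The page-by-page feed might look like the sketch below, shown with
memremap() as one way to get the transient mapping the text
describes (error handling trimmed to the essentials):

ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
			 unsigned long offset, int userbuf)
{
	void *vaddr;

	if (!csize)
		return 0;

	/* The old kernel's page is not in our linear map: map it now */
	vaddr = memremap(__pfn_to_phys(pfn), PAGE_SIZE, MEMREMAP_WB);
	if (!vaddr)
		return -ENOMEM;

	if (userbuf) {
		if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
			memunmap(vaddr);
			return -EFAULT;
		}
	} else {
		memcpy(buf, vaddr + offset, csize);
	}

	memunmap(vaddr);
	return csize;
}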
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In addition to the common VMCOREINFO entries defined in
crash_save_vmcoreinfo_init(), the crash utility needs to know
- kimage_voffset
- PHYS_OFFSET
to examine the contents of a dump file (/proc/vmcore) correctly
due to the introduction of KASLR (CONFIG_RANDOMIZE_BASE) in v4.6.
- VA_BITS
is also required for makedumpfile command.
arch_crash_save_vmcoreinfo() appends them to the dump file.
More VMCOREINFO entries may be added later.
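Concretely, the appended entries might look like this sketch, using
the standard VMCOREINFO helpers:

void arch_crash_save_vmcoreinfo(void)
{
	VMCOREINFO_NUMBER(VA_BITS);
	/* Needed to translate virtual to physical under KASLR */
	vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",
			      kimage_voffset);
	vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
			      PHYS_OFFSET);
}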
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus
and save registers' status in per-cpu ELF notes before starting crash
dump kernel. See kernel_kexec().
Even if not all secondary cpus have shut down, we do kdump anyway.
As we don't have to take the non-boot (crashed) cpus offline before
shutting down (their status at crash time must be preserved for the
dump), this patch also adds a variant of smp_send_stop().
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since arch_kexec_protect_crashkres() removes a mapping for crash dump
kernel image, the loaded data won't be preserved around hibernation.
In this patch, helper functions, crash_prepare_suspend()/
crash_post_resume(), are additionally called before/after hibernation so
that the relevant memory segments will be mapped again and preserved just
as the others are.
In addition, to minimize the size of hibernation image, crash_is_nosave()
is added to pfn_is_nosave() in order to recognize only the pages that hold
loaded crash dump kernel image as saveable. Hibernation excludes any pages
that are marked as Reserved and yet "nosave."
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres()
are meant to be called by kexec_load() in order to protect the memory
allocated for crash dump kernel once the image is loaded.
The protection is implemented by unmapping the relevant segments in crash
dump kernel memory, rather than making it read-only as other archs do,
to prevent coherency issues due to potential cache aliasing (with
mismatched attributes).
Page-level mappings are consistently used here so that we can change
the attributes of segments in page granularity as well as shrink the region
also in page granularity through /sys/kernel/kexec_crash_size, putting
the freed memory back to buddy system.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
"crashkernel=" kernel parameter specifies the size (and optionally
the start address) of the system ram to be used by crash dump kernel.
reserve_crashkernel() will allocate and reserve that memory at boot time
of primary kernel.
The memory range will be exposed to userspace as a resource named
"Crash kernel" in /proc/iomem.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that the ACPI BGRT handling code has been made generic, we can
enable it for arm64.
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
[ Updated commit log to reflect that BGRT is only enabled for arm64, and added
missing 'return' statement to the dummy acpi_parse_bgrt() function. ]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20170404160245.27812-8-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
To prevent unintended modifications to the kernel text (malicious or
otherwise) while running the EFI stub, describe the kernel image as
two separate sections: a .text section with read-execute permissions,
covering .text, .rodata and .init.text, and a .data section with
read-write permissions, covering .init.data, .data and .bss.
This relies on the firmware to actually take the section permission
flags into account, but this is something that is currently being
implemented in EDK2, which means we will likely start seeing it in
the wild between one and two years from now.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Replace open coded constants with symbolic ones throughout the
Image and the EFI headers. No binary level changes are intended.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The kernel's EFI PE/COFF header contains a dummy .reloc section, and
an explanatory comment that claims that this is required for the EFI
application loader to accept the Image as a relocatable image (i.e.,
one that can be loaded at any offset and fixed up in place).
This was inherited from the x86 implementation, which has elaborate host
tooling to mangle the PE/COFF header post-link time, and which populates
the .reloc section with a single dummy base relocation. On ARM, no such
tooling exists, and the .reloc section remains empty, and is never even
exposed via the BaseRelocationTable directory entry, which is where the
PE/COFF loader looks for it.
The PE/COFF spec is unclear about relocatable images that do not require
any fixups, but the EDK2 implementation, which is the de facto reference
for PE/COFF in the UEFI space, clearly does not care, and explicitly
mentions (in a comment) that relocatable images with no base relocations
are perfectly fine, as long as they don't have the RELOCS_STRIPPED
attribute set (which is not the case for our PE/COFF image).
So simply remove the .reloc section altogether.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bring the PE/COFF header in line with the PE/COFF spec, by setting
NumberOfSymbols to 0, and removing the section alignment flags.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
After having split off the PE header, clean up the bits that remain:
use .long consistently, merge two adjacent #ifdef CONFIG_EFI blocks,
fix the offset of the PE header pointer and remove the redundant .align
that follows it.
Also, since we will be eliminating all open coded constants from the
EFI header in subsequent patches, let's replace the open coded "ARM\x64"
magic number with its .ascii equivalent.
No changes to the resulting binary image are intended.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In preparation for yet another round of modifications to the PE/COFF
header, macroize it and move the definition into a separate source
file.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
read_system_reg() can readily be confused with read_sysreg(),
whereas these are really quite different in their meaning.
This patch attempts to reduce the ambiguity by reserving "sysreg"
for the actual system register accessors.
read_system_reg() is instead renamed to read_sanitised_ftr_reg(),
to make it more obvious that the Linux-defined sanitised feature
register cache is being accessed here, not the underlying
architectural system registers.
cpufeature.c's internal __raw_read_system_reg() function is renamed
in line with its actual purpose: a form of read_sysreg() that
indexes on (non-compiletime-constant) encoding rather than symbolic
register name.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 5c492c3f52 ("arm64: smp: Add function to determine if cpus are
stuck in the kernel") added a helper function to determine if die() is
supported in cpu_ops. This function assumes a cpu will have a valid
cpu_ops entry, but that may not be the case for cpu0 if spin-table or
the parking protocol is used to boot secondary cpus. In that case, there
is a NULL dereference if have_cpu_die() is called by cpu0. So add a
check for a valid cpu_ops before dereferencing it.
Fixes: 5c492c3f52 ("arm64: smp: Add function to determine if cpus are stuck in the kernel")
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
There are two unnecessary newlines: one in show_regs(), another
in __show_regs(). Drop them both.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To avoid having mappings that are writable and executable at the same
time, split the init region into a .init.text region that is mapped
read-only, and a .init.data region that is mapped non-executable.
This is possible now that the alternative patching occurs via the linear
mapping, and the linear alias of the init region is always mapped writable
(but never executable).
Since the alternatives descriptions themselves are read-only data, move
those into the .init.text region.
Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
One important rule of thumb when designing a secure software system is
that memory should never be writable and executable at the same time.
We mostly adhere to this rule in the kernel, except at boot time, when
regions may be mapped RWX until after we are done applying alternatives
or making other one-off changes.
For the alternative patching, we can improve the situation by applying
the fixups via the linear mapping, which is never mapped with executable
permissions. So map the linear alias of .text with RW- permissions
initially, and remove the write permissions as soon as alternative
patching has completed.
Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We only need to initialise sctlr_el1 if we're installing an EL2 stub, so
we may as well defer this until we're doing so. Similarly, we can defer
initialising CPTR_EL2 until then, as we do not access any trapped
functionality as part of el2_setup.
This patch modifies el2_setup accordingly, allowing us to remove a
branch and simplify the code flow.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The early el2_setup code is a little convoluted, with two branches where
one would do. This makes the code more painful to read than is
necessary.
We can remove a branch and simplify the logic by moving the early return
in the booted-at-EL1 case earlier in the function. This separates it
from all the setup logic that only makes sense for EL2.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If the kernel image extends across an alignment boundary, the
existing code increases the KASLR offset by the size of the kernel
image. The offset is masked after resizing. There are cases where,
after masking, the kernel image still extends across the boundary.
This eventually results in only a 2MB block getting mapped while
creating the page tables, which leads to data aborts while accessing
unmapped regions during the second relocation (with the kaslr offset)
in __primary_switch. To fix this problem, round up the kernel image
size to the swapper block size before adding it as a correction.
For example, consider the case below, where the kernel image still
crosses the 1GB alignment boundary after masking the offset, and
which is fixed by rounding up the kernel image size.
SWAPPER_TABLE_SHIFT = 30
Swapper using section maps with section size 2MB.
CONFIG_PGTABLE_LEVELS = 3
VA_BITS = 39
_text : 0xffffff8008080000
_end : 0xffffff800aa1b000
offset : 0x1f35600000
mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
(_end + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
offset after existing correction (before mask) = 0x1f37f9b000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
(_end + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
offset (after mask) = 0x1f37e00000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
(_end + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
new offset w/ rounding up = 0x1f38000000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
(_end + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
Fixes: f80fb3a3d5 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv8.3 adds new instructions to support the Release Consistent
processor consistent (RCpc) model, which is weaker than the RCsc
(Release Consistent sequentially consistent) model.
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM v8.3 adds support for new instructions to aid floating-point
multiplication and addition of complex numbers. Expose the support
via HWCAP and MRS emulation.
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARMv8.3 adds support for a new instruction to perform conversion
from double precision floating point to integer to match the
architected behaviour of the equivalent Javascript conversion.
Expose the availability via HWCAP and MRS emulation.
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add support for detecting VPIPT I-caches, as introduced by ARMv8.2.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
cachetype.h and cache.h are small and both obviously related to caches.
Merge them together to reduce clutter.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As a recent change to ARMv8, ASID-tagged VIVT I-caches are removed
retrospectively from the architecture. Consequently, we don't need to
support them in Linux either.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The CCSIDR_EL1.{NumSets,Associativity,LineSize} fields are only for use
in conjunction with set/way cache maintenance and are not guaranteed to
represent the actual microarchitectural features of a design.
The architecture explicitly states:
| You cannot make any inference about the actual sizes of caches based
| on these parameters.
Furthermore, CCSIDR_EL1.{WT,WB,RA,WA} have been removed retrospectively
from ARMv8 and are now considered to be UNKNOWN.
Since the kernel doesn't make use of set/way cache maintenance and it is
not possible for userspace to execute these instructions, we have no
need for the CCSIDR information in the kernel.
This patch removes the accessors, along with the related portions of the
cacheinfo support, which should instead be reintroduced when firmware has
a mechanism to provide us with reliable information.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The CCSIDR_EL1.{NumSets,Associativity,LineSize} fields are only for use
in conjunction with set/way cache maintenance and are not guaranteed to
represent the actual microarchitectural features of a design.
The architecture explicitly states:
| You cannot make any inference about the actual sizes of caches based
| on these parameters.
We currently use these fields to determine whether or not the I-cache
is aliasing, which is bogus and known to break on some platforms. Instead,
assume the I-cache is always aliasing if it advertises a VIPT policy.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit af391b15f7 ("arm64: kernel: rename __cpu_suspend to keep it
aligned with arm") renamed cpu_suspend() to arm_cpuidle_suspend(), but
forgot to update the kerneldoc header.
Fixes: af391b15f7 ("arm64: kernel: rename __cpu_suspend to keep it aligned with arm")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit fc62d0207a ("kprobes: Introduce weak variant of
kprobe_exceptions_notify()") introduces a generic empty version of the
function for architectures that don't need special handling, like arm64.
As such, remove the arch/arm64/ specific handler.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Unlike most sysreg definitions, the GICv3 definitions don't have a SYS_
prefix, and they don't live in <asm/sysreg.h>. Additionally, some
definitions are duplicated elsewhere (e.g. in the KVM save/restore
code).
For consistency, and to make it possible to share a common definition
for these sysregs, this patch moves the definitions to <asm/sysreg.h>,
adding a SYS_ prefix, and sorting the registers per their encoding.
Existing users of the definitions are fixed up so that this change is
not problematic.
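For instance, the GICv3 CPU interface registers become plain
encodings built with the generic sys_reg(op0, op1, CRn, CRm, op2)
helper (a few representative entries; see the header for the full,
encoding-sorted list):

#define SYS_ICC_IAR1_EL1	sys_reg(3, 0, 12, 12, 0)
#define SYS_ICC_EOIR1_EL1	sys_reg(3, 0, 12, 12, 1)
#define SYS_ICC_SRE_EL1		sys_reg(3, 0, 12, 12, 5)
#define SYS_ICC_IGRPEN1_EL1	sys_reg(3, 0, 12, 12, 7)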
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Update code that relied on sched.h including various MM types for them.
This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Instead of including the full <linux/signal.h>, we are going to include the
types-only <linux/signal_types.h> header in <linux/sched.h>, to further
decouple the scheduler header from the signal headers.
This means that various files which relied on the full <linux/signal.h> need
to be updated to gain an explicit dependency on it.
Update the code that relies on sched.h's inclusion of the <linux/signal.h> header.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/task_stack.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/task.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/task.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/hotplug.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/hotplug.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/debug.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/signal.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/topology.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/topology.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Apart from adding the helper function itself, the rest of the kernel is
converted mechanically using:
git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/'
git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might
be a worthwhile cleanup on its own.
(Michal Hocko provided most of the kerneldoc comment.)
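The helper itself is presumably the one-liner implied by the sed
rules above (sketch, kerneldoc trimmed):

static inline void mmgrab(struct mm_struct *mm)
{
	/* Pins the mm_struct itself, not the address space; the
	 * latter is mmget()/mm_users, a separate refcount. */
	atomic_inc(&mm->mm_count);
}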
Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Often all that is needed is these small helpers, instead of compiler.h
or a full kprobes.h. This is important for asm helpers; in fact even
some asm/kprobes.h files make use of these helpers. Instead, just keep
a generic asm file with helpers useful for asm code, with as little
clutter as possible.
Likewise, we now need to address what to do about this file both when
architectures have CONFIG_HAVE_KPROBES and when they do not, as well as
when architectures have CONFIG_HAVE_KPROBES but have disabled
CONFIG_KPROBES.
Right now most asm/kprobes.h do not have guards against CONFIG_KPROBES,
this means most architecture code cannot include asm/kprobes.h safely.
Correct this and add guards for architectures missing them.
Additionally provide architectures that not have kprobes support with
the default asm-generic solution. This lets us force asm/kprobes.h on
the header include/linux/kprobes.h always, but most importantly we can
now safely include just asm/kprobes.h on architecture code without
bringing the full kitchen sink of header files.
Two architectures already provided a guard against CONFIG_KPROBES on its
kprobes.h: sh, arch. The rest of the architectures needed guards added.
We avoid including any not-needed headers on asm/kprobes.h unless
kprobes have been enabled.
In a subsequent atomic change we can try now to remove compiler.h from
include/linux/kprobes.h.
During this sweep I've also identified a few architectures defining a
common macro needed for both kprobes and ftrace: the definition of
the breakpoint instruction. Some refer to this as
BREAKPOINT_INSTRUCTION. This must be kept outside of the #ifdef
CONFIG_KPROBES guard.
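The resulting per-arch header shape is roughly the following
(illustrative guard pattern, arm64 names used as an example):

#ifndef _ASM_ARM64_KPROBES_H
#define _ASM_ARM64_KPROBES_H

#include <asm-generic/kprobes.h>

/* BREAKPOINT_INSTRUCTION-style definitions stay outside the guard,
 * as ftrace needs them even when kprobes is disabled. */

#ifdef CONFIG_KPROBES
/* kprobes-only declarations and heavier includes live here */
#endif

#endif /* _ASM_ARM64_KPROBES_H */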
[mcgrof@kernel.org: fix arm64 build]
Link: http://lkml.kernel.org/r/CAB=NE6X1WMByuARS4mZ1g9+W=LuVBnMDnh_5zyN0CLADaVh=Jw@mail.gmail.com
[sfr@canb.auug.org.au: fixup for kprobes declarations moving]
Link: http://lkml.kernel.org/r/20170214165933.13ebd4f4@canb.auug.org.au
Link: http://lkml.kernel.org/r/20170203233139.32682-1-mcgrof@kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
- Errata workarounds for Qualcomm's Falkor CPU
- Qualcomm L2 Cache PMU driver
- Qualcomm SMCCC firmware quirk
- Support for DEBUG_VIRTUAL
- CPU feature detection for userspace via MRS emulation
- Preliminary work for the Statistical Profiling Extension
- Misc cleanups and non-critical fixes
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (74 commits)
arm64/kprobes: consistently handle MRS/MSR with XZR
arm64: cpufeature: correctly handle MRS to XZR
arm64: traps: correctly handle MRS/MSR with XZR
arm64: ptrace: add XZR-safe regs accessors
arm64: include asm/assembler.h in entry-ftrace.S
arm64: fix warning about swapper_pg_dir overflow
arm64: Work around Falkor erratum 1003
arm64: head.S: Enable EL1 (host) access to SPE when entered at EL2
arm64: arch_timer: document Hisilicon erratum 161010101
arm64: use is_vmalloc_addr
arm64: use linux/sizes.h for constants
arm64: uaccess: consistently check object sizes
perf: add qcom l2 cache perf events driver
arm64: remove wrong CONFIG_PROC_SYSCTL ifdef
ARM: smccc: Update HVC comment to describe new quirk parameter
arm64: do not trace atomic operations
ACPI/IORT: Fix the error return code in iort_add_smmu_platform_device()
ACPI/IORT: Fix iort_node_get_id() mapping entries indexing
arm64: mm: enable CONFIG_HOLES_IN_ZONE for NUMA
perf: xgene: Include module.h
...
Merge tag 'rodata-v4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull rodata updates from Kees Cook:
"This renames the (now inaccurate) DEBUG_RODATA and related
SET_MODULE_RONX configs to the more sensible STRICT_KERNEL_RWX and
STRICT_MODULE_RWX"
* tag 'rodata-v4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
arch: Rename CONFIG_DEBUG_RODATA and CONFIG_DEBUG_MODULE_RONX
arch: Move CONFIG_DEBUG_RODATA and CONFIG_SET_MODULE_RONX to be common
Now that we have XZR-safe helpers for fiddling with registers, use these
in the arm64 kprobes code rather than open-coding the logic.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In emulate_mrs() we may erroneously write back to the user SP rather
than XZR if we trap an MRS instruction where Xt == 31.
Use the new pt_regs_write_reg() helper to handle this correctly.
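The accessors in question behave roughly as follows (sketch):
register 31 reads as zero and writes to it are discarded, instead of
aliasing the user SP slot in pt_regs:

static inline u64 pt_regs_read_reg(const struct pt_regs *regs, int r)
{
	return (r == 31) ? 0 : regs->regs[r];
}

static inline void pt_regs_write_reg(struct pt_regs *regs, int r,
				     u64 val)
{
	if (r != 31)
		regs->regs[r] = val;
}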
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 77c97b4ee2 ("arm64: cpufeature: Expose CPUID registers by emulation")
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently we hand-roll XZR-safe register handling in
user_cache_maint_handler(), though we forget to do the same in
ctr_read_handler(), and may erroneously write back to the user SP rather
than XZR.
Use the new helpers to handle these cases correctly and consistently.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 116c81f427 ("arm64: Work around systems with mismatched cache line sizes")
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In a randconfig build I ran into this build error:
arch/arm64/kernel/entry-ftrace.S: Assembler messages:
arch/arm64/kernel/entry-ftrace.S:101: Error: unknown mnemonic `ldr_l' -- `ldr_l x2,ftrace_trace_function'
The macro is defined in asm/assembler.h, so we should include that file.
Fixes: 829d2bd133 ("arm64: entry-ftrace.S: avoid open-coded {adr,ldr}_l")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
is triggered, page table entries using the new translation table base
address (BADDR) will be allocated into the TLB using the old ASID. All
circumstances leading to the incorrect ASID being cached in the TLB arise
when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
operation is in the process of performing a translation using the specific
TTBRx_EL1 being written, and the memory operation uses a translation table
descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
ASID is not subject to this erratum because hardware is prohibited from
performing translations from an out-of-context translation regime.
Consider the following pseudo code.
write new BADDR and ASID values to TTBRx_EL1
Replacing the above sequence with the one below will ensure that no TLB
entries with an incorrect ASID are used by software.
write reserved value to TTBRx_EL1[ASID]
ISB
write new value to TTBRx_EL1[BADDR]
ISB
write new value to TTBRx_EL1[ASID]
ISB
When the above sequence is used, page table entries using the new BADDR
value may still be incorrectly allocated into the TLB using the reserved
ASID. Yet this will not reduce functionality, since TLB entries incorrectly
tagged with the reserved ASID will never be hit by a later instruction.
Based on work by Shanker Donthineni <shankerd@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The SPE architecture requires each exception level to enable access
to the SPE controls for the exception level below it, since additional
context-switch logic may be required to handle the buffer safely.
This patch allows EL1 (host) access to the SPE controls when entered at
EL2.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The sysfs cpu_capacity entry for each CPU has nothing to do with
PROC_FS, nor is it in the /proc/sys path.
Remove the ifdef.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-and-suggested-by: Sudeep Holla <sudeep.holla@arm.com>
Fixes: be8f185d8a ('arm64: add sysfs cpu_capacity attribute')
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Both of these options are poorly named. The features they provide are
necessary for system security and should not be considered debug only.
Change the names to CONFIG_STRICT_KERNEL_RWX and
CONFIG_STRICT_MODULE_RWX to better describe what these options do.
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
This patch adds a Qualcomm specific quirk to the arm_smccc_smc call.
On Qualcomm ARM64 platforms, the SMC call can return before it has
completed. If this occurs, the call can be restarted, but it requires
using the returned session ID value from the interrupted SMC call.
The quirk stores off the session ID from the interrupted call in the
quirk structure so that it can be used by the caller.
This patch folds in a fix given by Sricharan R:
https://lkml.org/lkml/2016/9/28/272
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds a quirk parameter to the arm_smccc_(smc/hvc) calls.
The quirk structure allows for specialized SMC operations due to SoC
specific requirements. The current arm_smccc_(smc/hvc) is renamed and
macros are used instead to specify the standard arm_smccc_(smc/hvc) or
the arm_smccc_(smc/hvc)_quirk function.
This patch and partial implementation was suggested by Will Deacon.
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When building with debugging symbols, take the absolute path to the
vmlinux binary and add it to the special PE/COFF debug table entry.
This allows a debug EFI build to find the vmlinux binary, which is
very helpful in debugging, given that the offset where the Image is
first loaded by EFI is highly unpredictable.
On implementations of UEFI that choose to implement it, this
information is exposed via the EFI debug support table, which is a UEFI
configuration table that is accessible both by the firmware at boot time
and by the OS at runtime, and lists all PE/COFF images loaded by the
system.
The format of the NB10 Codeview entry is based on the definition used
by EDK2, which is our primary reference when it comes to the use of
PE/COFF in the context of UEFI firmware.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: use realpath instead of shell invocation, as discussed on list]
Signed-off-by: Will Deacon <will.deacon@arm.com>
We recently discovered that __raw_read_system_reg() erroneously mapped
sysreg IDs to the wrong registers.
To ensure that we don't get hit by a similar issue in future, this patch
makes __raw_read_system_reg() use a macro for each case statement,
ensuring that each case reads the correct register.
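The pattern is roughly the following sketch; the same token serves
as the case label and as the argument to read_sysreg_s(), so the two
cannot drift apart:

#define read_sysreg_case(r)	\
	case r:		return read_sysreg_s(r)

static u64 __raw_read_system_reg(u32 sys_id)
{
	switch (sys_id) {
	read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
	read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
	/* ...one read_sysreg_case() line per supported ID register... */
	default:
		BUG();
		return 0;
	}
}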
To ensure that this patch hasn't introduced an issue, I've binary-diffed
the object files before and after this patch. No code or data sections
differ (though some debug sections differ due to line numbering
changing).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since it was introduced in commit da8d02d19f ("arm64/capabilities:
Make use of system wide safe value"), __raw_read_system_reg() has
erroneously mapped some sysreg IDs to other registers.
For the fields in ID_ISAR5_EL1, our local feature detection will be
erroneous. We may spuriously detect that a feature is uniformly
supported, or may fail to detect when it actually is, meaning some
compat hwcaps may be erroneous (or not enforced upon hotplug).
This patch corrects the erroneous entries.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: da8d02d19f ("arm64/capabilities: Make use of system wide safe value")
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Instead of open-coding the loop, let's use the canned macro.
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
During a TLB invalidate sequence targeting the inner shareable domain,
Falkor may prematurely complete the DSB before all loads and stores using
the old translation are observed. Instruction fetches are not subject to
the conditions of this erratum. If the original code sequence includes
multiple TLB invalidate instructions followed by a single DSB, only one of
the TLB instructions needs to be repeated to work around this erratum.
While the erratum only applies to cases in which the TLBI specifies the
inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or
stronger (OSH, SYS), this change applies the workaround overabundantly
(to local TLBI; DSB NSH sequences as well) for simplicity.
Based on work by Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If an EL0 instruction in the SYS class triggers an exception,
do_sysinstr looks for a sys64_hook matching the instruction, and if
none is found, injects a SIGILL. This mirrors what we do for undefined
injects a SIGILL. This mirrors what we do for undefined instruction
encodings in do_undefinstr, where we look for an undef_hook matching the
instruction, and if none is found, inject a SIGILL.
Over time, new SYS instruction encodings may be allocated. Prior to
allocation, exceptions resulting from these would be handled by
do_undefinstr, whereas after allocation these may be handled by
do_sysinstr.
To ensure that we have consistent behaviour if and when this happens, it
would be beneficial to have do_sysinstr fall back to do_undefinstr.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On ACPI-based systems where the topology is set up using the
store_cpu_topology API, at the moment we do not have the necessary
code to parse cpu capacity and handle the cpufreq notifier, thus
resulting in a kernel panic.
Stack:
init_cpu_capacity_callback+0xb4/0x1c8
notifier_call_chain+0x5c/0xa0
__blocking_notifier_call_chain+0x58/0xa0
blocking_notifier_call_chain+0x3c/0x50
cpufreq_set_policy+0xe4/0x328
cpufreq_init_policy+0x80/0x100
cpufreq_online+0x418/0x710
cpufreq_add_dev+0x118/0x180
subsys_interface_register+0xa4/0xf8
cpufreq_register_driver+0x1c0/0x298
cppc_cpufreq_init+0xdc/0x1000 [cppc_cpufreq]
do_one_initcall+0x5c/0x168
do_init_module+0x64/0x1e4
load_module+0x130c/0x14d0
SyS_finit_module+0x108/0x120
el0_svc_naked+0x24/0x28
Fixes: 7202bde8b7 ("arm64: parse cpu capacity-dmips-mhz from DT")
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Memory regions marked as NOMAP should not be used for general allocation
by the kernel, and should not even be covered by the linear mapping
(hence the name). However, drivers or other subsystems (such as ACPI)
that access the firmware directly may legally access them, which means
it is also reasonable for such drivers to claim them by invoking
request_resource(). Currently, this is prevented by the fact that arm64's
request_standard_resources() marks reserved regions as IORESOURCE_BUSY.
So drop the IORESOURCE_BUSY flag from these requests.
Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If CONFIG_DEBUG_VIRTUAL=y and CONFIG_ARM64_SW_TTBR0_PAN=y:
virt_to_phys used for non-linear address: ffffff8008cc0000 (empty_zero_page+0x0/0x1000)
WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x28/0x60
...
[<ffffff800809abb4>] __virt_to_phys+0x28/0x60
[<ffffff8008a02600>] setup_arch+0x46c/0x4d4
Fixes: 2077be6783 ("arm64: Use __pa_symbol for kernel symbols")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Generally, taking an unexpected exception should be a fatal event, and
bad_mode is intended to cater for this. However, it should be possible
to contain unexpected synchronous exceptions from EL0 without bringing
the kernel down, by sending a SIGILL to the task.
We tried to apply this approach in commit 9955ac47f4 ("arm64:
don't kill the kernel on a bad esr from el0"), by sending a signal for
any bad_mode call resulting from an EL0 exception.
However, this also applies to other unexpected exceptions, such as
SError and FIQ. The entry paths for these exceptions branch to bad_mode
without configuring the link register, and have no kernel_exit. Thus, if
we take one of these exceptions from EL0, bad_mode will eventually
return to the original user link register value.
This patch fixes this by introducing a new bad_el0_sync handler to cater
for the recoverable case, and restoring bad_mode to its original state,
whereby it calls panic() and never returns. The recoverable case
branches to bad_el0_sync with a bl, and returns to userspace via the
usual ret_to_user mechanism.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 9955ac47f4 ("arm64: don't kill the kernel on a bad esr from el0")
Reported-by: Mark Salter <msalter@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We cannot preserve partial fields for hardware breakpoints, because
the values written by userspace to the hardware breakpoint
registers can't subsequently be recovered intact from the hardware.
So, just reject attempts to write incomplete fields with -EINVAL.
Cc: <stable@vger.kernel.org> # 3.7.x-
Fixes: 478fcb2cdb ("arm64: Debugging support")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <Will.Deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ensure that if userspace supplies insufficient data to
PTRACE_SETREGSET to fill all the registers, the thread's old
registers are preserved.
Cc: <stable@vger.kernel.org> # 4.3.x-
Fixes: 5d220ff942 ("arm64: Better native ptrace support for compat tasks")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <Will.Deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ensure that if userspace supplies insufficient data to
PTRACE_SETREGSET to fill all the registers, the thread's old
registers are preserved.
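The shape of the fix, sketched for the system-call regset: seed the
local copy from the thread's current state, so a short copy-in from
userspace leaves the remainder intact:

static int system_call_set(struct task_struct *target,
			   const struct user_regset *regset,
			   unsigned int pos, unsigned int count,
			   const void *kbuf, const void __user *ubuf)
{
	int syscallno = task_pt_regs(target)->syscallno;	/* old value */
	int ret;

	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
				 &syscallno, 0, -1);
	if (ret)
		return ret;

	task_pt_regs(target)->syscallno = syscallno;
	return 0;
}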
Cc: <stable@vger.kernel.org> # 3.19.x-
Fixes: 766a85d7bc ("arm64: ptrace: add NT_ARM_SYSTEM_CALL regset")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <Will.Deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ensure that if userspace supplies insufficient data to
PTRACE_SETREGSET to fill all the registers, the thread's old
registers are preserved.
Cc: <stable@vger.kernel.org> # 3.7.x-
Fixes: 478fcb2cdb ("arm64: Debugging support")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <Will.Deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Some places in the kernel open-code sequences using ADRP for a symbol,
followed by another instruction using a :lo12: relocation for that same symbol.
These sequences are easy to get wrong, and more painful to read than is
necessary. For these reasons, it is preferable to use the
{adr,ldr,str}_l macros for these cases.
This patch makes use of these in entry-ftrace.S, removing open-coded
sequences using adrp. This results in a minor code change, since a
temporary register is not used when generating the address for some
symbols, but this is fine, as the value of the temporary register is not
used elsewhere.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Some places in the kernel open-code sequences using ADRP for a symbol,
followed by another instruction using a :lo12: relocation for that same symbol.
These sequences are easy to get wrong, and more painful to read than is
necessary. For these reasons, it is preferable to use the
{adr,ldr,str}_l macros for these cases.
This patch makes use of these in efi-entry.S, removing open-coded
sequences using adrp.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Some places in the kernel open-code sequences using ADRP for a symbol,
followed by another instruction using a :lo12: relocation for that same symbol.
These sequences are easy to get wrong, and more painful to read than is
necessary. For these reasons, it is preferable to use the
{adr,ldr,str}_l macros for these cases.
This patch makes use of adr_l in head.S, removing an open-coded
sequence using adrp.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The cache hierarchy can be identified through the Cache Level ID (CLIDR)
architected system register. However, in some cases it will provide
only the number of cache levels that are integrated into the processor
itself. In other words, it can't provide any information about the
caches that are external and/or transparent.
Some platforms need to export information about all such external
caches to userspace applications via the sysfs interface.
This patch adds support to override the cache levels using device tree
to take such external non-architected caches into account.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Tested-by: Tan Xiaojun <tanxiaojun@huawei.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Definitions of CPU ranges are hard to read if the CPU variant is not
zero. Provide a MIDR_CPU_VAR_REV() macro to describe the full hardware
revision of a CPU, including variant and (minor) revision.
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
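As a rough sketch of the helper this describes (the exact shift names
live in asm/cputype.h; the constants below are illustrative), variant
and revision are packed into one directly comparable value:

  /* MIDR_EL1: variant is bits [23:20], revision bits [3:0] */
  #define MIDR_CPU_VAR_REV(var, rev) \
          (((var) << 20) | (rev))

  /* e.g. "all parts up to and including r1p2":
   * if (midr_var_rev <= MIDR_CPU_VAR_REV(1, 2)) ... */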
ARM v8.1 extensions include support for rounding double multiply
add/subtract instructions to the A64 SIMD instructions set. Let
the userspace know about it via a HWCAP bit.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
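A minimal userspace sketch of consuming such a bit, assuming the
upstream name HWCAP_ASIMDRDM for this capability:

  #include <stdio.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>

  int main(void)
  {
          /* AT_HWCAP carries the arm64 capability bits set by the kernel */
          if (getauxval(AT_HWCAP) & HWCAP_ASIMDRDM)
                  printf("v8.1 rounding double multiply instructions present\n");
          return 0;
  }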
__pa_symbol is technically the macro that should be used for kernel
symbols. Switch to this as a prerequisite for DEBUG_VIRTUAL, which
will do bounds checking.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
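The distinction, in a minimal sketch (using empty_zero_page, the symbol
from the warning quoted earlier):

  /* Kernel symbols live in the kernel image mapping, not the linear
   * map, so they must be translated with __pa_symbol(): */
  phys_addr_t ok = __pa_symbol(empty_zero_page);
  /* With DEBUG_VIRTUAL, this would trip the non-linear-address warning:
   * phys_addr_t bad = virt_to_phys(empty_zero_page); */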
Documentation for the infrastructure to expose CPU feature
registers by emulating MRS.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds the hook for emulating MRS instruction to
export the 'user visible' value of supported system registers.
We emulate only the following id space for system registers:
Op0=3, Op1=0, CRn=0, CRm=[0, 4-7]
The rest will fall back to SIGILL. This capability is also
advertised via a new HWCAP_CPUID.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: add missing static keyword to enable_mrs_emulation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
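A userspace sketch of the interface: once HWCAP_CPUID is advertised, an
MRS of a register in the emulated id space traps to the kernel and
returns the sanitised value instead of raising SIGILL.

  #include <stdint.h>
  #include <stdio.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>

  int main(void)
  {
          uint64_t isar0;

          if (!(getauxval(AT_HWCAP) & HWCAP_CPUID))
                  return 1;
          /* Op0=3, Op1=0, CRn=0, CRm=6: inside the emulated id space */
          asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));
          printf("ID_AA64ISAR0_EL1 = %016llx\n", (unsigned long long)isar0);
          return 0;
  }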
Track the user visible fields of a CPU feature register. This will be
used for exposing the value to userspace. All the user visible fields
of a feature register will be passed on as they are, while the others
will be filled with their respective safe values.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add a helper to extract the register field from a given
instruction.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
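A sketch of what such a helper boils down to (the name and signature
here are illustrative, not necessarily the one added):

  /* Extract a register number (or any field) from an A64 instruction. */
  static inline u32 insn_extract_field(u32 insn, unsigned int shift,
                                       unsigned int width)
  {
          return (insn >> shift) & ((1U << width) - 1);
  }

  /* e.g. the Rt field of an MRS instruction lives in bits [4:0]:
   * rt = insn_extract_field(insn, 0, 5); */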
This patch does the following clean-ups:
1) All undescribed fields of a register are now treated as 'strict'
with a safe value of 0. Hence we can leave an empty table for
describing registers which are RAZ.
2) ID_AA64DFR1_EL1 is RAZ and should use the table for RAZ registers.
3) ftr_generic32 is used to represent a register with a 32-bit feature
value. Rename this to ftr_single32 to make it more obvious. Since
we don't have a 64-bit single-feature register, kill ftr_generic.
Based on a patch by Mark Rutland.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently have some RAZ fields described explicitly in our
arm64_ftr_bits arrays. These are inconsistently commented, grouped,
and/or applied, and maintaining these is error-prone.
Luckily, we don't need these at all. We'll never need to inspect RAZ
fields to determine feature support, and init_cpu_ftr_reg() will ensure
that any bits without a corresponding arm64_ftr_bits entry are treated
as RES0 with strict matching requirements. In check_update_ftr_reg()
we'll then compare these bits from the relevant cpuinfo_arm64
structures, and need not store them in an arm64_ftr_reg.
This patch removes the unnecessary arm64_ftr_bits entries for RES0 bits.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Any fields not defined in an arm64_ftr_bits entry are propagated to the
system-wide register value in init_cpu_ftr_reg(), and while we require
that these strictly match for the sanity checks, we don't update them in
update_cpu_ftr_reg().
Generally, the lack of an arm64_ftr_bits entry indicates that the bits
are currently RES0 (as is the case for the upper 32 bits of all
supposedly 32-bit registers).
A better default would be to use zero for the system-wide value of
unallocated bits, making all register checking consistent, and allowing
for subsequent simplifications to the arm64_ftr_bits arrays.
This patch updates init_cpu_ftr_reg() to treat unallocated bits as RES0
for the purpose of the system-wide safe value. These bits will still be
sanity checked with strict match requirements, as is currently the case.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The statistical profiling extension (SPE) is an optional feature of
ARMv8.1 and is unlikely to be supported by all of the CPUs in a
heterogeneous system.
This patch updates the cpufeature checks so that such systems are not
tainted as unsupported.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Perf already supports multiple PMU instances for heterogeneous systems,
so there's no need to be strict in the cpufeature checking, particularly
as the PMU extension is optional in the architecture.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since its introduction, the UAO enable call was broken, and useless.
Commit 2a6dcb2b5f ("arm64: cpufeature: Schedule enable() calls instead
of calling them via IPI") fixed the framework so that these calls are
scheduled and can modify PSTATE; now the call is just useless. Remove
it. UAO is enabled by the code patching which causes get_user() and
friends to use the 'ldtr' family of instructions. This relies on the
PSTATE.UAO bit being set to match addr_limit, which we do in
uao_thread_switch() called via __switch_to().
All that is needed to enable UAO is to patch the code and call
schedule(). __apply_alternatives_multi_stop() calls stop_machine() when
it modifies the kernel text to enable the alternatives (including the
UAO code in uao_thread_switch()). Once stop_machine() has finished,
__switch_to() is called to reschedule the original task; this causes
PSTATE.UAO to be set appropriately. An explicit enable() call is not
needed.
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
In commit 23c8a500c2 ("arm64: kernel: use ordinary return/argument
register for el2_setup()"), we stopped using w20 as a global stash of
the boot mode flag, and instead pass this around in w0 as a function
parameter.
Unfortunately, we missed a couple of comments, which still refer to the
old convention of using w20/x20.
This patch fixes up the comments to describe the code as it currently
works.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A few printk calls in arm64 omit a trailing newline, even though there
is no subsequent KERN_CONT printk associated with them, and we actually
want a newline.
This can result in unrelated lines being appended, rather than appearing
on a new line. Additionally, timestamp prefixes may appear in-line. This
makes the logs harder to read than necessary.
Avoid this by adding a trailing newline.
These were found with a shortlist generated by:
$ git grep 'pr\(intk\|_.*\)(.*)' -- arch/arm64 | grep -v pr_fmt | grep -v '\\n"'
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
CC: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
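The shape of the fix, sketched:

  /* Before: the next printk (possibly from another CPU) may be glued
   * onto this line, or a timestamp prefix may appear mid-line. */
  pr_info("detected: %s", desc);

  /* After: explicitly terminate the line. */
  pr_info("detected: %s\n", desc);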
Function graph tracer shows negative time (wrap around) when tracing
__switch_to if the nosleep-time trace option is enabled.
Time compensation for nosleep-time is done by an ftrace probe on
sched_switch. This doesn't work well for the following events (with
letters representing timestamps):
A - sched_switch probe called for task T switch out
B - __switch_to calltime is recorded
C - sched_switch probe called for task T switch in
D - __switch_to rettime is recorded
If C - A > D - B, then we end up over compensating for the time spent in
__switch_to giving rise to negative times in the trace output.
On x86, __switch_to is not traced if function graph tracer is enabled.
Do the same for arm64 as well.
Cc: Todd Kjos <tkjos@google.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
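A sketch of the change, assuming the same __notrace_funcgraph
annotation x86 uses (from <linux/ftrace.h>) is applied to arm64's
__switch_to; the body shown is elided:

  #include <linux/ftrace.h>

  /* __notrace_funcgraph expands to notrace when the graph tracer is
   * built in, so __switch_to no longer appears in the graph output. */
  struct task_struct *__notrace_funcgraph __switch_to(struct task_struct *prev,
                                          struct task_struct *next)
  {
          /* ... existing context-switch work ... */
          return cpu_switch_to(prev, next);
  }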
Pull SMP hotplug notifier removal from Thomas Gleixner:
"This is the final cleanup of the hotplug notifier infrastructure. The
series has been reintegrated in the last two days because a new
driver using the old infrastructure came in via the SCSI tree.
Summary:
- convert the last leftover drivers utilizing notifiers
- fixup for a completely broken hotplug user
- prevent setup of already used states
- removal of the notifiers
- treewide cleanup of hotplug state names
- consolidation of state space
There is a sphinx based documentation pending, but that needs review
from the documentation folks"
* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/armada-xp: Consolidate hotplug state space
irqchip/gic: Consolidate hotplug state space
coresight/etm3/4x: Consolidate hotplug state space
cpu/hotplug: Cleanup state names
cpu/hotplug: Remove obsolete cpu hotplug register/unregister functions
staging/lustre/libcfs: Convert to hotplug state machine
scsi/bnx2i: Convert to hotplug state machine
scsi/bnx2fc: Convert to hotplug state machine
cpu/hotplug: Prevent overwriting of callbacks
x86/msr: Remove bogus cleanup from the error path
bus: arm-ccn: Prevent hotplug callback leak
perf/x86/intel/cstate: Prevent hotplug callback leak
ARM/imx/mmcd: Fix broken cpu hotplug handling
scsi: qedi: Convert to hotplug state machine
When the state names got added a script was used to add the extra argument
to the calls. The script basically converted the state constant to a
string, but the cleanup to convert these strings into meaningful ones did
not happen.
Replace all the useless strings with 'subsys/xxx/yyy:state' strings which
are used in all the other places already.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Link: http://lkml.kernel.org/r/20161221192112.085444152@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This was entirely automated, using the script by Al:
PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
$(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)
to do the replacement at the end of the merge window.
Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'acpi-extra-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull more ACPI updates from Rafael Wysocki:
"Here are new versions of two ACPICA changes that were deferred
previously due to a problem they had introduced, two cleanups on top
of them and the removal of a useless warning message from the ACPI
core.
Specifics:
- Move some Linux-specific functionality to upstream ACPICA and
update the in-kernel users of it accordingly (Lv Zheng)
- Drop a useless warning (triggered by the lack of an optional
object) from the ACPI namespace scanning code (Zhang Rui)"
* tag 'acpi-extra-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI / osl: Remove deprecated acpi_get_table_with_size()/early_acpi_os_unmap_memory()
ACPI / osl: Remove acpi_get_table_with_size()/early_acpi_os_unmap_memory() users
ACPICA: Tables: Allow FADT to be customized with virtual address
ACPICA: Tables: Back port acpi_get_table_with_size() and early_acpi_os_unmap_memory() from Linux kernel
ACPI: do not warn if _BQC does not exist
* acpica:
ACPI / osl: Remove deprecated acpi_get_table_with_size()/early_acpi_os_unmap_memory()
ACPI / osl: Remove acpi_get_table_with_size()/early_acpi_os_unmap_memory() users
ACPICA: Tables: Allow FADT to be customized with virtual address
ACPICA: Tables: Back port acpi_get_table_with_size() and early_acpi_os_unmap_memory() from Linux kernel
* acpi-scan:
ACPI: do not warn if _BQC does not exist
This patch removes the users of the deprecated APIs:
acpi_get_table_with_size()
early_acpi_os_unmap_memory()
The following APIs should be used instead:
acpi_get_table()
acpi_put_table()
The deprecated APIs were invented as a replacement for acpi_get_table()
during the early stage so that the early mapped pointer will not be stored
in the ACPICA core and thus the late stage acpi_get_table() won't return a
wrong pointer. The mapping size is returned just because it is required by
early_acpi_os_unmap_memory() to unmap the pointer during the early stage.
But as the mapping size equals acpi_table_header.length
(see acpi_tb_init_table_descriptor() and acpi_tb_validate_table()), when
such a convenient result is returned, driver code will start to use it
instead of accessing acpi_table_header to obtain the length.
Thus this patch cleans up the drivers by replacing the returned table size
with acpi_table_header.length, and should be a no-op.
Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
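The migration pattern, sketched for a caller (the table signature is
chosen for illustration):

  struct acpi_table_header *table;
  acpi_status status;

  /* Old: acpi_get_table_with_size(ACPI_SIG_MCFG, 0, &table, &size); */
  status = acpi_get_table(ACPI_SIG_MCFG, 0, &table);
  if (ACPI_SUCCESS(status)) {
          u32 size = table->length;   /* the old out-parameter, recovered */
          /* ... parse the table ... */
          acpi_put_table(table);      /* drop the mapping when done */
  }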
Merge tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI updates from Bjorn Helgaas:
"PCI changes:
- add support for PCI on ARM64 boxes with ACPI. We already had this
for theoretical spec-compliant hardware; now we're adding quirks
for the actual hardware (Cavium, HiSilicon, Qualcomm, X-Gene)
- add runtime PM support for hotplug ports
- enable runtime suspend for Intel UHCI that uses platform-specific
wakeup signaling
- add yet another host bridge registration interface. We hope this is
extensible enough to subsume the others
- expose device revision in sysfs for DRM
- to avoid device conflicts, make sure any VF BAR updates are done
before enabling the VF
- avoid unnecessary link retrains for ASPM
- allow INTx masking on Mellanox devices that support it
- allow access to non-standard VPD for Chelsio devices
- update Broadcom iProc support for PAXB v2, PAXC v2, inbound DMA,
etc
- update Rockchip support for max-link-speed
- add NVIDIA Tegra210 support
- add Layerscape LS1046a support
- update R-Car compatibility strings
- add Qualcomm MSM8996 support
- remove some uninformative bootup messages"
* tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (115 commits)
PCI: Enable access to non-standard VPD for Chelsio devices (cxgb3)
PCI: Expand "VPD access disabled" quirk message
PCI: pciehp: Remove loading message
PCI: hotplug: Remove hotplug core message
PCI: Remove service driver load/unload messages
PCI/AER: Log AER IRQ when claiming Root Port
PCI/AER: Log errors with PCI device, not PCIe service device
PCI/AER: Remove unused version macros
PCI/PME: Log PME IRQ when claiming Root Port
PCI/PME: Drop unused support for PMEs from Root Complex Event Collectors
PCI: Move config space size macros to pci_regs.h
x86/platform/intel-mid: Constify mid_pci_platform_pm
PCI/ASPM: Don't retrain link if ASPM not possible
PCI: iproc: Skip check for legacy IRQ on PAXC buses
PCI: pciehp: Leave power indicator on when enabling already-enabled slot
PCI: pciehp: Prioritize data-link event over presence detect
PCI: rcar: Add gen3 fallback compatibility string for pcie-rcar
PCI: rcar: Use gen2 fallback compatibility last
PCI: rcar-gen2: Use gen2 fallback compatibility last
PCI: rockchip: Move the deassert of pm/aclk/pclk after phy_init()
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- struct thread_info moved off-stack (also touching
include/linux/thread_info.h and include/linux/restart_block.h)
- cpus_have_cap() reworked to avoid __builtin_constant_p() for static
key use (also touching drivers/irqchip/irq-gic-v3.c)
- uprobes support (currently only for native 64-bit tasks)
- Emulation of kernel Privileged Access Never (PAN) using TTBR0_EL1
switching to a reserved page table
- CPU capacity information passing via DT or sysfs (used by the
scheduler)
- support for systems without FP/SIMD (IOW, kernel avoids touching
these registers; there is no soft-float ABI, nor kernel emulation for
AArch64 FP/SIMD)
- handling of hardware watchpoint with unaligned addresses, varied
lengths and offsets from base
- use of the page table contiguous hint for kernel mappings
- hugetlb fixes for sizes involving the contiguous hint
- remove unnecessary I-cache invalidation in flush_cache_range()
- CNTHCTL_EL2 access fix for CPUs with VHE support (ARMv8.1)
- boot-time checks for writable+executable kernel mappings
- simplify asm/opcodes.h and avoid including the 32-bit ARM counterpart
and make the arm64 kernel headers self-consistent (Xen headers patch
merged separately)
- Workaround for broken .inst support in certain binutils versions
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (60 commits)
arm64: Disable PAN on uaccess_enable()
arm64: Work around broken .inst when defective gas is detected
arm64: Add detection code for broken .inst support in binutils
arm64: Remove reference to asm/opcodes.h
arm64: Get rid of asm/opcodes.h
arm64: smp: Prevent raw_smp_processor_id() recursion
arm64: head.S: Fix CNTHCTL_EL2 access on VHE system
arm64: Remove I-cache invalidation from flush_cache_range()
arm64: Enable HIBERNATION in defconfig
arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN
arm64: xen: Enable user access before a privcmd hvc call
arm64: Handle faults caused by inadvertent user access with PAN enabled
arm64: Disable TTBR0_EL1 during normal kernel execution
arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro
arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
arm64: Update the synchronous external abort fault description
selftests: arm64: add test for unaligned/inexact watchpoint handling
arm64: Allow hw watchpoint of length 3,5,6 and 7
arm64: hw_breakpoint: Handle inexact watchpoint addresses
...
Pull smp hotplug updates from Thomas Gleixner:
"This is the final round of converting the notifier mess to the state
machine. The removal of the notifiers and the related infrastructure
will happen around rc1, as there are conversions outstanding in other
trees.
The whole exercise removed about 2000 lines of code in total and in
course of the conversion several dozen bugs got fixed. The new
mechanism allows to test almost every hotplug step standalone, so
usage sites can exercise all transitions extensively.
There is more room for improvement, like integrating all the
pointlessly different architecture mechanisms of synchronizing,
setting cpus online etc into the core code"
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
tracing/rb: Init the CPU mask on allocation
soc/fsl/qbman: Convert to hotplug state machine
soc/fsl/qbman: Convert to hotplug state machine
zram: Convert to hotplug state machine
KVM/PPC/Book3S HV: Convert to hotplug state machine
arm64/cpuinfo: Convert to hotplug state machine
arm64/cpuinfo: Make hotplug notifier symmetric
mm/compaction: Convert to hotplug state machine
iommu/vt-d: Convert to hotplug state machine
mm/zswap: Convert pool to hotplug state machine
mm/zswap: Convert dst-mem to hotplug state machine
mm/zsmalloc: Convert to hotplug state machine
mm/vmstat: Convert to hotplug state machine
mm/vmstat: Avoid on each online CPU loops
mm/vmstat: Drop get_online_cpus() from init_cpu_node_state/vmstat_cpu_dead()
tracing/rb: Convert to hotplug state machine
oprofile/nmi timer: Convert to hotplug state machine
net/iucv: Use explicit clean up labels in iucv_init()
x86/pci/amd-bus: Convert to hotplug state machine
x86/oprofile/nmi: Convert to hotplug state machine
...
pci_mcfg_lookup() is the external interface to the generic MCFG code.
Previously it merely looked up the ECAM base address for a given domain and
bus range. We want a way to add MCFG quirks, some of which may require
special config accessors and adjustments to the ECAM address range.
Extend pci_mcfg_lookup() so it can return a pointer to a pci_ecam_ops
structure and a struct resource for the ECAM address space. For now, it
always returns &pci_generic_ecam_ops (the standard accessor) and the
resource described by the MCFG.
No functional changes intended.
[bhelgaas: changelog]
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
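A sketch of the extended interface described above (argument and member
names assumed from context):

  /* Returns the config accessors for the bridge and fills in the ECAM
   * window; quirks can later substitute both. */
  struct pci_ecam_ops *pci_mcfg_lookup(struct acpi_pci_root *root,
                                       struct resource *cfgres);

  /* For now every caller gets the standard accessor:
   * ops = pci_mcfg_lookup(root, &cfgres);  ops == &pci_generic_ecam_ops */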
On x86 and ia64, we have treated all ACPI _CRS resources of PNP0A03 host
bridge devices as "producers", i.e., as host bridge windows. That's partly
because some x86 BIOSes improperly used "consumer" descriptors to describe
windows and partly because Linux didn't have good support for handling
consumer and producer descriptors differently.
One result is that x86 BIOSes describe host bridge "consumer" resources in
the _CRS of a PNP0C02 device, not the PNP0A03 device itself. On arm64 we
don't have a legacy of firmware that has this consumer/producer confusion,
so we can handle PNP0A03 "consumer" descriptors as host bridge registers
instead of windows.
Exclude non-window ("consumer") resources from the list of host bridge
windows. This allows the use of "consumer" PNP0A03 descriptors for bridge
register space.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Currently we use one shared global acpi_pci_root_ops structure to keep
controller-specific ops. We pass its pointer to acpi_pci_root_create() and
associate it with a host bridge instance for good. Such a design implies
a serious drawback: any potential manipulation of the single system-wide
acpi_pci_root_ops leads to a kernel crash. The structure content does
not really change, even across the creation of multiple host bridges;
thus it has not been an issue so far.
In preparation for adding an ECAM quirks mechanism (where
controller-specific PCI ops may be different for each host bridge),
allocate a new acpi_pci_root_ops and fill it in with data for each
bridge. Now it is safe to have different controller-specific info. As a
consequence, free acpi_pci_root_ops when the host bridge is released.
No functional changes in this patch.
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
The static MCFG table tells us the base of ECAM space, but it does not
reserve the space -- the reservation should be done via a device in the
ACPI namespace whose _CRS includes the ECAM region.
Use acpi_resource_consumer() to check whether the ECAM space is reserved by
an ACPI namespace device. If it is, emit a message showing which device
reserves it. If not, emit a "[Firmware Bug]" warning.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Use a local "struct device *dev" for brevity. No functional change
intended.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
The opcodes.h header drags in a lot of definitions from the 32-bit port,
most of which are not required at all. Clean things up a bit by moving
the bare minimum of what is required next to the actual users,
and drop the include file.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Install the callbacks via the state machine and let the core invoke
the callbacks on the already online CPUs.
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: rt@linutronix.de
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20161126231350.10321-17-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
There is no requirement to keep the sysfs files around until the CPU is
completely dead. Remove them during the DOWN_PREPARE notification. This is
a preparatory patch for converting to the hotplug state machine.
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: rt@linutronix.de
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20161126231350.10321-16-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* will/for-next/perf:
selftests: arm64: add test for unaligned/inexact watchpoint handling
arm64: Allow hw watchpoint of length 3,5,6 and 7
arm64: hw_breakpoint: Handle inexact watchpoint addresses
arm64: Allow hw watchpoint at varied offset from base address
hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
Bit positions of CNTHCTL_EL2 change depending on the HCR_EL2.E2H bit.
EL1PCEN and EL1PCTEN are the 1st and 0th bits when E2H is not set, but
they are the 11th and 10th bits respectively when E2H is set. The
current code unintentionally sets the wrong bits in CNTHCTL_EL2 with
E2H set.
In fact, we don't need to set those two bits, which allow EL1 and EL0
to access the physical timer and counter respectively, if E2H and TGE
are set for the host kernel. They will be configured later as
necessary. First, we don't need to configure those bits for EL1, since
the host kernel runs in EL2. It is a hypervisor's responsibility to
configure them before entering a VM, which runs in EL0 and EL1. Second,
EL0 accesses are configured in a later stage of the boot process.
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_EL1 is done on exception return if returning
to user or returning to a context where PAN was disabled.
Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.
Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.
User cache maintenance (user_cache_maint_handler and
__flush_cache_user_range) needs the TTBR0_EL1 re-instated since the
operations are performed by user virtual address.
This patch also removes a stale comment on the switch_mm() function.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.
Enabling access to user is done by restoring TTBR0_EL1 with the value
from the struct thread_info ttbr0 variable. Interrupts must be disabled
during the uaccess_ttbr0_enable code to ensure the atomicity of the
thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
get_thread_info asm macro from entry.S to assembler.h for reuse in the
uaccess_ttbr0_* macros.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
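A conceptual C sketch of the disable path only, under the assumptions
stated above (reserved_ttbr0 at a fixed offset from swapper_pg_dir; the
real code is assembly with alternatives patching, and the offset name
here is illustrative):

  static inline void uaccess_ttbr0_disable_sketch(void)
  {
          unsigned long ttbr;

          /* Point TTBR0_EL1 at the zeroed reserved table, derived from
           * the TTBR1_EL1 value plus the reserved_ttbr0 offset. */
          ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET;
          write_sysreg(ttbr, ttbr0_el1);
          isb();
  }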
This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.
Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since arm64 can support a watchpoint at any byte offset within a
doubleword, now support other lengths within that range as well.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Arm64 hardware does not always report a watchpoint hit address that
matches one of the watchpoints set. It can also report an address
"near" the watchpoint if a single instruction accesses both watched and
unwatched addresses. There is no straightforward way, short of
disassembling the offending instruction, to map that address back to
the watchpoint.
Previously, when the hardware reported a watchpoint hit on an address
that did not match our watchpoint (this happens in case of instructions
which access large chunks of memory such as "stp") the process would
enter a loop where we would be continually resuming it (because we did
not recognise that watchpoint hit) and it would keep hitting the
watchpoint again and again. The tracing process would never get
notified of the watchpoint hit.
This commit fixes the problem by looking at the watchpoints near the
address reported by the hardware. If the address does not exactly match
one of the watchpoints we have set, it attributes the hit to the
nearest watchpoint we have. This heuristic is a bit dodgy, but I don't
think we can do much more, given the hardware limitations.
Signed-off-by: Pavel Labath <labath@google.com>
[panand: reworked to rebase on his patches]
Signed-off-by: Pratyush Anand <panand@redhat.com>
[will: use __ffs instead of ffs - 1]
Signed-off-by: Will Deacon <will.deacon@arm.com>
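The heuristic amounts to a nearest-match search, roughly as follows
(variable names illustrative):

  /* Attribute a reported hit address to the closest configured
   * watchpoint when no exact match exists. */
  int best = -1;
  u64 best_dist = ~0ULL;

  for (int i = 0; i < num_wrps; i++) {
          u64 dist = (hit_addr > wp_addr[i]) ? hit_addr - wp_addr[i]
                                             : wp_addr[i] - hit_addr;
          if (dist < best_dist) {
                  best_dist = dist;
                  best = i;
          }
  }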
ARM64 hardware supports a watchpoint at any doubleword-aligned address.
However, it can select any consecutive bytes from offset 0 to 7 within
that base address. For example, if the base address is programmed as
0x420030 and the byte select is 0x1C, then accesses to 0x420032,
0x420033 and 0x420034 will generate a watchpoint exception.
Currently, we do not have such modularity: we can only program byte,
halfword, word and doubleword access exceptions from any base address.
This patch adds support to overcome the above limitation.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
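The encoding this enables, sketched with the example from the text:

  /* Watch 3 bytes at 0x420032: doubleword-aligned base plus a byte
   * address select (BAS) mask choosing the watched bytes within it. */
  u64 addr = 0x420032, len = 3;
  u64 base = addr & ~7UL;                      /* 0x420030 -> DBGWVR */
  u8  bas  = ((1U << len) - 1) << (addr & 7);  /* 0x07 << 2 = 0x1C -> DBGWCR.BAS */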
KVM calls kvm_pmu_set_counter_event_type() when PMCCFILTR is configured.
But this function can't deal with PMCCFILTR correctly because the
evtCount bits of PMCCFILTR, which are reserved as 0, conflict with the
SW_INCR event type of the other PMXEVTYPER<n> registers. To fix it, when
eventsel == 0, this function shouldn't return immediately; instead it
needs to check further whether select_idx is ARMV8_PMU_CYCLE_IDX.
Another issue is that KVM shouldn't copy the eventsel bits of PMCCFILTR
blindly to attr.config. Instead it ought to convert the request to the
"cpu cycle" event type (i.e. 0x11).
To support this patch and to prevent duplicated definitions, a limited
set of ARMv8 perf event types were relocated from perf_event.c to
asm/perf_event.h.
Cc: stable@vger.kernel.org # 4.6+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
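The core of the fix, sketched (constant names as in asm/perf_event.h;
the surrounding function is omitted):

  /* eventsel == 0 is SW_INCR for event counters, but reserved-zero for
   * the cycle counter, so only bail out for non-cycle counters. */
  if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
      select_idx != ARMV8_PMU_CYCLE_IDX)
          return;

  /* For the cycle counter, ask perf for the "cpu cycle" event (0x11)
   * instead of blindly copying PMCCFILTR's eventsel bits. */
  attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
                  ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;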
The arm64 kernel assumes that FP/ASIMD units are always present
and accesses the FP/ASIMD specific registers unconditionally. This
could cause problems when they are absent. This patch adds support
for handling systems without FP/ASIMD by skipping the register
accesses within the kernel. For KVM, we trap the accesses to FP/ASIMD
and inject an undefined instruction exception into the VM.
The callers of the exported kernel_neon_begin_partial() should
make sure that the FP/ASIMD is supported.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: add comment on the ARM64_HAS_NO_FPSIMD conflict and the new location]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The hypervisor may not have full access to the kernel data structures
and hence cannot safely use cpus_have_cap() helper for checking the
system capability. Add a safe helper for hypervisors to check a constant
system capability, which *doesn't* fall back to checking the bitmap
maintained by the kernel. With this, make cpus_have_cap() check only
the bitmask and force constant cap checks to use the new API for
quicker checks.
Cc: Robert Ritcher <rritcher@cavium.com>
Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
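Usage sketch of the split (the callee names are hypothetical; the cap
constants are real):

  /* Constant cap: compile-time constant index, becomes a patched
   * branch; safe from hyp code which can't reach kernel data. */
  if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
          do_vhe_setup();

  /* Non-constant callers now always take the bitmap test: */
  if (cpus_have_cap(cap_number))
          do_generic_thing();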
This patch moves arm64's struct thread_info from the task stack into
task_struct. This protects thread_info from corruption in the case of
stack overflows, and makes its address harder to determine if stack
addresses are leaked, making a number of attacks more difficult. Precise
detection and handling of overflow is left for subsequent patches.
Largely, this involves changing code to store the task_struct in sp_el0,
and acquire the thread_info from the task struct. Core code now
implements current_thread_info(), and as noted in <linux/sched.h> this
relies on offsetof(task_struct, thread_info) == 0, enforced by core
code.
This change means that the 'tsk' register used in entry.S now points to
a task_struct, rather than a thread_info as it used to. To make this
clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
appropriately updated to account for the structural change.
Userspace clobbers sp_el0, and we can no longer restore this from the
stack. Instead, the current task is cached in a per-cpu variable that we
can safely access from early assembly as interrupts are disabled (and we
are thus not preemptible).
Both secondary entry and idle are updated to stash the sp and task
pointer separately.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Shortly we will want to load a percpu variable in the return from
userspace path. We can save an instruction by folding the addition of
the percpu offset into the load instruction, and this patch adds a new
helper to do so.
At the same time, we clean up this_cpu_ptr for consistency. As with
{adr,ldr,str}_l, we change the template to take the destination register
first, and name this dst. Secondly, we rename the macro to adr_this_cpu,
following the scheme of adr_l, and matching the newly added
ldr_this_cpu.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In the absence of CONFIG_THREAD_INFO_IN_TASK, core code maintains
thread_info::cpu, and low-level architecture code can access this to
build raw_smp_processor_id(). With CONFIG_THREAD_INFO_IN_TASK, core code
maintains task_struct::cpu, which for reasons of the header soup is not
accessible to low-level arch code.
Instead, we can maintain a percpu variable containing the cpu number.
For both the old and new implementation of raw_smp_processor_id(), we
read a sysreg into a GPR, add an offset, and load the result. As the
offset is now larger, it may not be folded into the load, but otherwise
the assembly shouldn't change much.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
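The resulting arrangement, sketched:

  /* A percpu copy of the cpu number, maintained by arch code... */
  DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);

  /* ...so raw_smp_processor_id() no longer needs thread_info::cpu: */
  #define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))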
Subsequent patches will make smp_processor_id() use a percpu variable.
This will make smp_processor_id() dependent on the percpu offset, and
thus we cannot use smp_processor_id() to figure out what to initialise
the offset to.
Prepare for this by initialising the percpu offset based on
current::cpu, which will work regardless of how smp_processor_id() is
implemented. Also, make this relationship obvious by placing this code
together at the start of secondary_start_kernel().
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When returning from idle, we rely on the fact that thread_info lives at
the end of the kernel stack, and restore this by masking the saved stack
pointer. Subsequent patches will sever the relationship between the
stack and thread_info, and to cater for this we must save/restore sp_el0
explicitly, storing it in cpu_suspend_ctx.
As cpu_suspend_ctx must be doubleword aligned, this leaves us with an
extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1
in the same way, which simplifies the code, avoiding pointer chasing on
the restore path (as we no longer need to load thread_info::cpu followed
by the relevant slot in __per_cpu_offset based on this).
This patch stashes both registers in cpu_suspend_ctx.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed
before a task is destroyed. To account for this, the stacks are
refcounted, and when manipulating the stack of another task, it is
necessary to get/put the stack to ensure it isn't freed and/or re-used
while we do so.
This patch reworks the arm64 stack walking code to account for this.
When CONFIG_THREAD_INFO_IN_TASK is not selected these perform no
refcounting, and this should only be a structural change that does not
affect behaviour.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The walk_stackframe function is architecture-specific, with a varying
prototype, and common code should not use it directly. None of its
current users can be built as modules. With THREAD_INFO_IN_TASK, users
will also need to hold a stack reference before calling it.
There's no reason for it to be exported, and it's very easy to misuse,
so unexport it for now.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In arm64's die and __die routines we pass around a thread_info, and
subsequently use this to determine the relevant task_struct, and the end
of the thread's stack. Subsequent patches will decouple thread_info from
the stack, and this approach will no longer work.
To figure out the end of the stack, we can use the new generic
end_of_stack() helper. As we only call __die() from die(), and die()
always deals with the current task, we can remove the parameter and have
both acquire current directly, which also makes it clear that __die
can't be called for arbitrary tasks.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We define current_stack_pointer in <asm/thread_info.h>, though other
files and headers relying upon it do not have this necessary include,
and are thus fragile to changes in the header soup.
Subsequent patches will affect the header soup such that directly
including <asm/thread_info.h> may result in a circular header include in
some of these cases, so we can't simply include <asm/thread_info.h>.
Instead, factor current_stack_pointer into its own header, and have all
existing users include this explicitly.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Subsequent patches will move the thread_info::{task,cpu} fields, and the
current TI_{TASK,CPU} offset definitions are not used anywhere.
This patch removes the redundant definitions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When CONFIG_KPROBES is disabled but CONFIG_UPROBE_EVENT is enabled, we
get the following compilation error:
In file included from
.../arch/arm64/kernel/probes/decode-insn.c:20:0:
.../arch/arm64/include/asm/kprobes.h:52:5: error:
conflicting types for 'kprobe_fault_handler'
int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
^~~~~~~~~~~~~~~~~~~~
In file included from
.../arch/arm64/kernel/probes/decode-insn.c:17:0:
.../include/linux/kprobes.h:398:90: note:
previous definition of 'kprobe_fault_handler' was here
static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
^
.../scripts/Makefile.build:290: recipe for target
'arch/arm64/kernel/probes/decode-insn.o' failed
<asm/kprobes.h> is already included from <linux/kprobes.h> under #ifdef
CONFIG_KPROBES. So, this patch fixes the error by removing the include
from decode-insn.c.
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for uprobes on the ARM64 architecture.
Unit tests for the following have been carried out so far, and all have
been found working:
1. Step-able instructions, like sub, ldr, add etc.
2. Simulation-able instructions, like ret, cbnz, cbz etc.
3. uretprobe
4. Reject-able instructions, like sev, wfe etc.
5. Trapped and aborted xol path
6. Probe at an unaligned user address
7. longjump test cases
Currently it does not support aarch32 instruction probing.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
uprobe is registered at break_hook with a unique ESR code. So, when a
TRAP_BRKPT occurs, call_break_hook checks whether it was for uprobe. If
not, a SIGTRAP is sent to the user task.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
uprobe registers a handler at step_hook. So, single_step_handler now
also checks for user mode, provided there is a valid hook.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ARM64 step exception does not carry any syndrome information, so it
is the responsibility of each exception handler to ensure it only
handles the exception if it was raised for it.
Since kgdb_step_brk_fn() always returns 0, we would have a problem once
other step handlers are registered as well.
This patch fixes kgdb_step_brk_fn() to return an error when the step
exception was not meant for kgdb.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The decode-insn code has to be reused by the arm64 uprobe implementation
as well. Therefore, this patch protects some portions of the kprobe code
and renames a few others, so that the decode-insn functionality can be
reused by uprobe even when CONFIG_KPROBES is not defined.
kprobe_opcode_t and struct arch_specific_insn are also defined by
linux/kprobes.h when CONFIG_KPROBES is not defined. So, protect these
definitions in asm/probes.h.
linux/kprobes.h already includes asm/kprobes.h. Therefore, remove the
inclusion of asm/kprobes.h from decode-insn.c.
Some definitions, like kprobe_insn and kprobes_handler_t, can be re-used
by uprobe, so it is better to remove the 'k' from their names.
struct arch_specific_insn is specific to kprobe. Therefore, introduce a
new struct arch_probe_insn which will be common to both kprobe and
uprobe, so that the decode-insn code can be shared. Modify the kprobe
code accordingly.
The function arm_probe_decode_insn() will be needed by uprobe as well,
so make it global.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In preparation of adding support for contiguous PTE and PMD mappings,
let's replace 'block_mappings_allowed' with 'page_mappings_only', which
will be a more accurate description of the nature of the setting once we
add such contiguous mappings into the mix.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kprobes does not need its own homebrewed (and frankly inscrutable) sign
extension macro; just use the standard kernel functions instead. Since
the compiler actually recognises the sign-extension idiom of the latter,
we also get the small bonus of some nicer codegen, as each displacement
calculation helper then compiles to a single optimal SBFX instruction.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
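The shape of the replacement (the field width here is illustrative):

  #include <linux/bitops.h>

  /* A 19-bit immediate with its sign bit at position 18, e.g. a
   * conditional-branch displacement: */
  s32 disp = sign_extend32(imm19, 18);
  /* ...which compiles down to a single SBFX on arm64. */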
Add a sysfs cpu_capacity attribute with which it is possible to read and
write (thus overriding default values) CPU capacities. This might be
useful in situations where values need changing after boot.
The new attribute shows up as:
/sys/devices/system/cpu/cpu*/cpu_capacity
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With the introduction of cpu capacity-dmips-mhz bindings, CPU capacities
can now be calculated from values extracted from DT and information
coming from cpufreq. Add parsing of DT information at boot time, and
complement it with cpufreq information. Also, store such information
using per CPU variables, as we do for arm.
Caveat: the information provided by this patch will start to be used in
the future. We need to #define arch_scale_cpu_capacity to something
provided in arch, so that the scheduler's default implementation (which
gets used if arch_scale_cpu_capacity is not defined) is overridden.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Most of these are CC'd for stable, but there are a few fixing issues
introduced during the recent merge window too.
There's also a fix for the xgene PMU driver, but it seemed daft to
send as a separate pull request, so I've included it here with the
rest of the fixes.
- Fix ACPI boot due to recent broken NUMA changes
- Fix remote enabling of CPU features requiring PSTATE bit manipulation
- Add address range check when emulating user cache maintenance
- Fix LL/SC loops that allow compiler to introduce memory accesses
- Fix recently added write_sysreg_s macro
- Ensure MDCR_EL2 is initialised on qemu targets without a PMU
- Avoid kaslr breakage due to MODVERSIONS and DYNAMIC_FTRACE
- Correctly drive recent ld when building relocatable Image
- Remove junk IS_ERR check from xgene PMU driver added during merge window
- pr_cont fixes after core changes in the merge window"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: remove pr_cont abuse from mem_init
arm64: fix show_regs fallout from KERN_CONT changes
arm64: kernel: force ET_DYN ELF type for CONFIG_RELOCATABLE=y
arm64: suspend: Reconfigure PSTATE after resume from idle
arm64: mm: Set PSTATE.PAN from the cpu_enable_pan() call
arm64: cpufeature: Schedule enable() calls instead of calling them via IPI
arm64: Cortex-A53 errata workaround: check for kernel addresses
arm64: percpu: rewrite ll/sc loops in assembly
arm64: swp emulation: bound LL/SC retries before rescheduling
arm64: sysreg: Fix use of XZR in write_sysreg_s
arm64: kaslr: keep modules close to the kernel when DYNAMIC_FTRACE=y
arm64: kernel: Init MDCR_EL2 even in the absence of a PMU
perf: xgene: Remove bogus IS_ERR() check
arm64: kernel: numa: fix ACPI boot cpu numa node mapping
arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y
Recently in commit 4bcc595ccd ("printk: reinstate KERN_CONT for
printing continuation lines"), the behaviour of printk changed w.r.t.
KERN_CONT. Now, KERN_CONT is mandatory to continue existing lines.
Without this, prefixes are inserted, making output illegible, e.g.
[ 1007.069010] pc : [<ffff00000871898c>] lr : [<ffff000008718948>] pstate: 40000145
[ 1007.076329] sp : ffff000008d53ec0
[ 1007.079606] x29: ffff000008d53ec0 [ 1007.082797] x28: 0000000080c50018
[ 1007.086160]
[ 1007.087630] x27: ffff000008e0c7f8 [ 1007.090820] x26: ffff80097631ca00
[ 1007.094183]
[ 1007.095653] x25: 0000000000000001 [ 1007.098843] x24: 000000ea68b61cac
[ 1007.102206]
... or when dumped with the userspace dmesg tool, which has slightly
different implicit newline behaviour. e.g.
[ 1007.069010] pc : [<ffff00000871898c>] lr : [<ffff000008718948>] pstate: 40000145
[ 1007.076329] sp : ffff000008d53ec0
[ 1007.079606] x29: ffff000008d53ec0
[ 1007.082797] x28: 0000000080c50018
[ 1007.086160]
[ 1007.087630] x27: ffff000008e0c7f8
[ 1007.090820] x26: ffff80097631ca00
[ 1007.094183]
[ 1007.095653] x25: 0000000000000001
[ 1007.098843] x24: 000000ea68b61cac
[ 1007.102206]
We can't simply always use KERN_CONT for lines which may or may not be
continuations. That causes line prefixes (e.g. timestamps) to be
suppressed, and the alignment of all but the first line will be broken.
For even more fun, we can't simply insert some dummy empty-string printk
calls, as GCC warns for an empty printk string, and even if we pass
KERN_DEFAULT explicitly to silence the warning, the prefix gets swallowed
unless there is an additional part to the string.
Instead, we must manually iterate over pairs of registers, which gives
us the legible output we want in either case, e.g.
[ 169.771790] pc : [<ffff00000871898c>] lr : [<ffff000008718948>] pstate: 40000145
[ 169.779109] sp : ffff000008d53ec0
[ 169.782386] x29: ffff000008d53ec0 x28: 0000000080c50018
[ 169.787650] x27: ffff000008e0c7f8 x26: ffff80097631de00
[ 169.792913] x25: 0000000000000001 x24: 00000027827b2cf4
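A sketch of the pairwise loop (top_reg and regs as in show_regs();
illustrative):

  int i = top_reg;        /* 29, or 12 for compat tasks */

  while (i >= 0) {
          printk("x%-2d: %016llx ", i, regs->regs[i]);
          i--;

          if (i % 2 == 0) {
                  pr_cont("x%-2d: %016llx ", i, regs->regs[i]);
                  i--;
          }

          pr_cont("\n");
  }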
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not
save/restore PSTATE. As a result of this, cpufeatures that were detected
and have bits in PSTATE get lost when we resume from idle.
UAO gets set appropriately on the next context switch. PAN will be
re-enabled next time we return from user-space, but on a preemptible
kernel we may run work accessing user space before this point.
Add code to re-enable these two features in __cpu_suspend_exit().
We re-use uao_thread_switch(), passing current.
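A minimal sketch of the resume hook, assuming the existing PAN
alternative macros and the uao_thread_switch() helper:

  void notrace __cpu_suspend_exit(void)
  {
          /* (other resume work elided) */

          /*
           * PSTATE was not saved over suspend/resume: re-enable
           * any detected features that rely on PSTATE bits.
           */
          asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                          CONFIG_ARM64_PAN));
          uao_thread_switch(current);
  }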
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The enable() call for a cpufeature/errata is called using on_each_cpu().
This issues a cross-call IPI to get the work done. Implicitly, this
stashes the running PSTATE in SPSR when the CPU receives the IPI, and
restores it when we return. This means an enable() call can never modify
PSTATE.
To allow PAN to do this, change the on_each_cpu() call to use
stop_machine(). This schedules the work on each CPU which allows
us to modify PSTATE.
This involves changing the prototype of all the enable() functions.
enable_cpu_capabilities() is called during boot and enables the feature
on all online CPUs. This path now uses stop_machine(). CPU features for
hotplug'd CPUs are enabled by verify_local_cpu_features() which only
acts on the local CPU, and can already modify the running PSTATE as it
is called from secondary_start_kernel().
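A sketch of the boot-time path after the change (the enable() prototype
gains an int return so it can serve as a stop_machine() callback):

  void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
  {
          for (; caps->matches; caps++)
                  if (caps->enable && cpus_have_cap(caps->capability))
                          /*
                           * stop_machine() schedules the callback in task
                           * context on each online CPU, so PSTATE changes
                           * made there survive the call.
                           */
                          stop_machine(caps->enable, NULL, cpu_online_mask);
  }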
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on
errata-affected core") adds code to execute cache maintenance instructions
in the kernel on behalf of userland on CPUs with certain ARM CPU errata.
It turns out that the address hasn't been checked to be a valid user
space address, allowing userland to clean cache lines in kernel space.
Fix this by introducing an address check before executing the
instructions on behalf of userland.
Since the address doesn't come via a syscall parameter, we can't just
reject tagged pointers and instead have to remove the tag when checking
against the user address limit.
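A sketch of the check, assuming the untagged_addr()/user_addr_max()
helpers; user_cache_maint() is a hypothetical stand-in for the existing
inline asm that performs the maintenance:

  int ret = 0;
  /* rt: destination register of the dc operand, decoded from ESR */
  unsigned long address = untagged_addr(regs->regs[rt]);

  /* The limit check must see the untagged VA. */
  if (address >= user_addr_max())
          ret = -EFAULT;
  else
          ret = user_cache_maint(address);  /* stand-in for the inline asm */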
Cc: <stable@vger.kernel.org>
Fixes: 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: rework commit message + replace access_ok with max_user_addr()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
If a CPU does not implement a global monitor for certain memory types,
then userspace can attempt a kernel DoS by issuing SWP instructions
targeting the problematic memory (for example, a framebuffer mapped
with non-cacheable attributes).
The SWP emulation code protects against these sorts of attacks by
checking for pending signals and potentially rescheduling when the STXR
instruction fails during the emulation. Whilst this is good for avoiding
livelock, it harms emulation of legitimate SWP instructions on CPUs
where forward progress is not guaranteed if there are memory accesses to
the same reservation granule (up to 2k) between the failing STXR and
the retry of the LDXR.
This patch solves the problem by retrying the STXR a bounded number of
times (4) before breaking out of the LL/SC loop and looking for
something else to do.
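Sketched in C for clarity (the real loop is inline asm; ldxr()/stxr()
below are pseudo-functions standing in for the instructions, and
addr/newval are assumed locals):

  #define __SWP_LL_SC_LOOPS       4

  int retries = __SWP_LL_SC_LOOPS;
  int fail;
  unsigned long data;

  do {
          data = ldxr(addr);              /* LDXR */
          fail = stxr(addr, newval);      /* STXR, non-zero on failure */
  } while (fail && --retries);

  if (fail)
          return -EAGAIN; /* let the caller check signals and resched */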
Cc: <stable@vger.kernel.org>
Fixes: bd35a4adc4 ("arm64: Port SWP/SWPB emulation support from arm")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit f436b2ac90 ("arm64: kernel: fix architected PMU registers
unconditional access") made sure we wouldn't access unimplemented
PMU registers, but also left MDCR_EL2 uninitialized in that case,
leading to trap bits being potentially left set.
Make sure we always write something in that register.
Fixes: f436b2ac90 ("arm64: kernel: fix architected PMU registers unconditional access")
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 7ba5f605f3 ("arm64/numa: remove the limitation that cpu0 must
bind to node0") removed the numa cpu<->node mapping restriction whereby
logical cpu 0 always corresponds to numa node 0. Removing the
restriction was correct, in that it does not really exist in practice,
but the commit only updated the early mapping of logical cpu 0 to its
real numa node for the DT boot path, missing the ACPI one. This leads
to boot failures on ACPI systems owing to the missing node<->cpu map
for logical cpu 0.
Fix the issue by updating the ACPI boot path with code that carries out
the early cpu<->node mapping also for the boot cpu (ie cpu 0), mirroring
what is currently done in the DT boot path.
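A sketch of the fix in the ACPI CPU enumeration path, mirroring the DT
path (helper names as used by the arm64 NUMA code):

  /* acpi_map_gic_cpu_interface() (sketch) */
  cpu_logical_map(cpu_count) = hwid;

  /* perform the early cpu<->node mapping for the boot cpu too */
  early_map_cpu_to_node(cpu_count, acpi_numa_get_nid(cpu_count, hwid));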
Fixes: 7ba5f605f3 ("arm64/numa: remove the limitation that cpu0 must bind to node0")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, all callers to randomize_range() set the length to 0 and
calculate end by adding a constant to the start address. We can simplify
the API to remove a bunch of needless checks and variables.
Use the new randomize_addr(start, range) call to set the requested
address.
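A sketch of a converted call site, using the call shape described above
(the function name follows this description; the ranges are
illustrative):

  unsigned long arch_randomize_brk(struct mm_struct *mm)
  {
          if (is_compat_task())
                  return randomize_addr(mm->brk, SZ_32M);
          else
                  return randomize_addr(mm->brk, SZ_1G);
  }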
Link: http://lkml.kernel.org/r/20160803233913.32511-5-jason@lakedaemon.net
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: "Russell King - ARM Linux" <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When doing an nmi backtrace of many cores, most of which are idle, the
output is a little overwhelming and very uninformative. Suppress
messages for cpus that are idling when they are interrupted and just
emit one line, "NMI backtrace for N skipped: idling at pc 0xNNN".
We do this by grouping all the cpuidle code together into a new
.cpuidle.text section, and then checking the address of the interrupted
PC to see if it lies within that section.
This commit suitably tags x86 and tile idle routines, and only adds in
the minimal framework for other architectures.
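A sketch of the mechanism (symbol and helper names are assumptions
based on the description):

  /* tag for idle routines, placing them in the new section */
  #define __cpuidle  __attribute__((__section__(".cpuidle.text")))

  /* defined by the linker script around the .cpuidle.text section */
  extern char __cpuidle_text_start[], __cpuidle_text_end[];

  bool cpu_in_idle(unsigned long pc)
  {
          return pc >= (unsigned long)__cpuidle_text_start &&
                 pc < (unsigned long)__cpuidle_text_end;
  }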
Link: http://lkml.kernel.org/r/1472487169-14923-5-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
Tested-by: Petr Mladek <pmladek@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'tty-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty and serial updates from Greg KH:
"Here is the big tty and serial patch set for 4.9-rc1.
It also includes some drivers/dma/ changes, as those were needed by
some serial drivers, and they were all acked by the DMA maintainer.
Also in here is the long-suffering ACPI SPCR patchset, which was
passed around from maintainer to maintainer like a hot-potato. Seems I
was the sucker^Wlucky one. All of those patches have been acked by the
various subsystem maintainers as well.
All of this has been in linux-next with no reported issues"
* tag 'tty-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (111 commits)
Revert "serial: pl011: add console matching function"
MAINTAINERS: update entry for atmel_serial driver
serial: pl011: add console matching function
ARM64: ACPI: enable ACPI_SPCR_TABLE
ACPI: parse SPCR and enable matching console
of/serial: move earlycon early_param handling to serial
Revert "drivers/tty: Explicitly pass current to show_stack"
tty: amba-pl011: Don't complain on -EPROBE_DEFER when no irq
nios2: dts: 10m50: Add tx-threshold parameter
serial: 8250: Set Altera 16550 TX FIFO Threshold
serial: 8250: of: Load TX FIFO Threshold from DT
Documentation: dt: serial: Add TX FIFO threshold parameter
drivers/tty: Explicitly pass current to show_stack
serial: imx: Fix DCD reading
serial: stm32: mark symbols static where possible
serial: xuartps: Add some register initialisation to cdns_early_console_setup()
serial: xuartps: Removed unwanted checks while reading the error conditions
serial: xuartps: Rewrite the interrupt handling logic
serial: stm32: use mapbase instead of membase for DMA
tty/serial: atmel: fix fractional baud rate computation
...
Pull CPU hotplug updates from Thomas Gleixner:
"Yet another batch of cpu hotplug core updates and conversions:
- Provide core infrastructure for multi instance drivers so the
drivers do not have to keep custom lists.
- Convert custom lists to the new infrastructure. The block-mq custom
list conversion comes through the block tree and makes the diffstat
tip over to more lines removed than added.
- Handle unbalanced hotplug enable/disable calls more gracefully.
- Remove the obsolete CPU_STARTING/DYING notifier support.
- Convert another batch of notifier users.
The relayfs changes which conflicted with the conversion have been
shipped to me by Andrew.
The remaining lot is targeted for 4.10 so that we finally can remove
the rest of the notifiers"
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (46 commits)
cpufreq: Fix up conversion to hotplug state machine
blk/mq: Reserve hotplug states for block multiqueue
x86/apic/uv: Convert to hotplug state machine
s390/mm/pfault: Convert to hotplug state machine
mips/loongson/smp: Convert to hotplug state machine
mips/octeon/smp: Convert to hotplug state machine
fault-injection/cpu: Convert to hotplug state machine
padata: Convert to hotplug state machine
cpufreq: Convert to hotplug state machine
ACPI/processor: Convert to hotplug state machine
virtio scsi: Convert to hotplug state machine
oprofile/timer: Convert to hotplug state machine
block/softirq: Convert to hotplug state machine
lib/irq_poll: Convert to hotplug state machine
x86/microcode: Convert to hotplug state machine
sh/SH-X3 SMP: Convert to hotplug state machine
ia64/mca: Convert to hotplug state machine
ARM/OMAP/wakeupgen: Convert to hotplug state machine
ARM/shmobile: Convert to hotplug state machine
arm64/FP/SIMD: Convert to hotplug state machine
...
Pull low-level x86 updates from Ingo Molnar:
"In this cycle this topic tree has become one of those 'super topics'
that accumulated a lot of changes:
- Add CONFIG_VMAP_STACK=y support to the core kernel and enable it on
x86 - preceded by an array of changes. v4.8 saw preparatory changes
in this area already - this is the rest of the work. Includes the
thread stack caching performance optimization. (Andy Lutomirski)
- switch_to() cleanups and all around enhancements. (Brian Gerst)
- A large number of dumpstack infrastructure enhancements and an
unwinder abstraction. The secret long term plan is safe(r) live
patching plus maybe another attempt at debuginfo based unwinding -
but all these current bits are standalone enhancements in a frame
pointer based debug environment as well. (Josh Poimboeuf)
- More __ro_after_init and const annotations. (Kees Cook)
- Enable KASLR for the vmemmap memory region. (Thomas Garnier)"
[ The virtually mapped stack changes are pretty fundamental, and not
x86-specific per se, even if they are only used on x86 right now. ]
* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
x86/asm: Get rid of __read_cr4_safe()
thread_info: Use unsigned long for flags
x86/alternatives: Add stack frame dependency to alternative_call_2()
x86/dumpstack: Fix show_stack() task pointer regression
x86/dumpstack: Remove dump_trace() and related callbacks
x86/dumpstack: Convert show_trace_log_lvl() to use the new unwinder
oprofile/x86: Convert x86_backtrace() to use the new unwinder
x86/stacktrace: Convert save_stack_trace_*() to use the new unwinder
perf/x86: Convert perf_callchain_kernel() to use the new unwinder
x86/unwind: Add new unwind interface and implementations
x86/dumpstack: Remove NULL task pointer convention
fork: Optimize task creation by caching two thread stacks per CPU if CONFIG_VMAP_STACK=y
sched/core: Free the stack early if CONFIG_THREAD_INFO_IN_TASK
lib/syscall: Pin the task stack in collect_syscall()
x86/process: Pin the target stack in get_wchan()
x86/dumpstack: Pin the target stack when dumping it
kthread: Pin the stack via try_get_task_stack()/put_task_stack() in to_live_kthread() function
sched/core: Add try_get_task_stack() and put_task_stack()
x86/entry/64: Fix a minor comment rebase error
iommu/amd: Don't put completion-wait semaphore on stack
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"It's a bit all over the place this time with no "killer feature" to
speak of. Support for mismatched cache line sizes should help people
seeing whacky JIT failures on some SoCs, and the big.LITTLE perf
updates have been a long time coming, but a lot of the changes here
are cleanups.
We stray outside arch/arm64 in a few areas: the arch/arm/ arch_timer
workaround is acked by Russell, the DT/OF bits are acked by Rob, the
arch_timer clocksource changes acked by Marc, CPU hotplug by tglx and
jump_label by Peter (all CC'd).
Summary:
- Support for execute-only page permissions
- Support for hibernate and DEBUG_PAGEALLOC
- Support for heterogeneous systems with mismatched cache line sizes
- Errata workarounds (A53 843419 update and QorIQ A-008585 timer bug)
- arm64 PMU perf updates, including cpumasks for heterogeneous systems
- Set UTS_MACHINE for building rpm packages
- Yet another head.S tidy-up
- Some cleanups and refactoring, particularly in the NUMA code
- Lots of random, non-critical fixes across the board"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (100 commits)
arm64: tlbflush.h: add __tlbi() macro
arm64: Kconfig: remove SMP dependence for NUMA
arm64: Kconfig: select OF/ACPI_NUMA under NUMA config
arm64: fix dump_backtrace/unwind_frame with NULL tsk
arm/arm64: arch_timer: Use archdata to indicate vdso suitability
arm64: arch_timer: Work around QorIQ Erratum A-008585
arm64: arch_timer: Add device tree binding for A-008585 erratum
arm64: Correctly bounds check virt_addr_valid
arm64: migrate exception table users off module.h and onto extable.h
arm64: pmu: Hoist pmu platform device name
arm64: pmu: Probe default hw/cache counters
arm64: pmu: add fallback probe table
MAINTAINERS: Update ARM PMU PROFILING AND DEBUGGING entry
arm64: Improve kprobes test for atomic sequence
arm64/kvm: use alternative auto-nop
arm64: use alternative auto-nop
arm64: alternative: add auto-nop infrastructure
arm64: lse: convert lse alternatives NOP padding to use __nops
arm64: barriers: introduce nops and __nops macros for NOP sequences
arm64: sysreg: replace open-coded mrs_s/msr_s with {read,write}_sysreg_s
...
SBBR mentions SPCR as a mandatory ACPI table, so enable it for ARM64.
Earlycon should be set up as early as possible. ACPI boot tables are
mapped in arch/arm64/kernel/acpi.c:acpi_boot_table_init(), which is
called from setup_arch(), and that's where we parse SPCR. So SPCR
parsing has to be opted-in per-arch.
When ACPI_SPCR_TABLE is defined, initialization of the DT earlycon is
deferred until the DT/ACPI decision is made. Initialize the DT earlycon
if ACPI is disabled.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Aleksey Makarov <aleksey.makarov@linaro.org>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In some places, dump_backtrace() is called with a NULL tsk parameter,
e.g. in bug_handler() in arch/arm64, or indirectly via show_stack() in
core code. The expectation is that this is treated as if current were
passed instead of NULL. Similar is true of unwind_frame().
Commit a80a0eb70c ("arm64: make irq_stack_ptr more robust") didn't
take this into account. In dump_backtrace() it compares tsk against
current *before* we check if tsk is NULL, and in unwind_frame() we never
set tsk if it is NULL.
Due to this, we won't initialise irq_stack_ptr in either function. In
dump_backtrace() this results in calling dump_mem() for memory
immediately above the IRQ stack range, rather than for the relevant
range on the task stack. In unwind_frame we'll reject unwinding frames
on the IRQ stack.
In either case this results in incomplete or misleading backtrace
information, but is not otherwise problematic. The initial percpu areas
(including the IRQ stacks) are allocated in the linear map, and dump_mem
uses __get_user(), so we shouldn't access anything with side-effects,
and will handle holes safely.
This patch fixes the issue by having both functions handle the NULL tsk
case before doing anything else with tsk.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: a80a0eb70c ("arm64: make irq_stack_ptr more robust")
Acked-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Instead of comparing the name to a magic string, use archdata to
explicitly communicate whether the arch timer is suitable for
direct vdso access.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Handle read-only cases when CONFIG_DEBUG_RODATA (4.0) or
CONFIG_DEBUG_SET_MODULE_RONX (3.18) are enabled by using
aarch64_insn_write() instead of probe_kernel_write() as introduced by
commit 2f896d5866 ("arm64: use fixmap for text patching") in 4.0.
Fixes: 11d91a770f ("arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support")
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The wq_numa_init() function makes a private CPU to node map by calling
cpu_to_node() early in the boot process, before the non-boot CPUs are
brought online. Since the default implementation of cpu_to_node()
returns zero for CPUs that have never been brought online, the
workqueue system's view is that *all* CPUs are on node zero.
When the unbound workqueue for a non-zero node is created, the
tsk_cpus_allowed() for the worker threads is the empty set because
there are, in the view of the workqueue system, no CPUs on non-zero
nodes. The code in try_to_wake_up() using this empty cpumask ends up
using the cpumask empty set value of NR_CPUS as an index into the
per-CPU area pointer array, and gets garbage as it is one past the end
of the array. This results in:
[ 0.881970] Unable to handle kernel paging request at virtual address fffffb1008b926a4
[ 1.970095] pgd = fffffc00094b0000
[ 1.973530] [fffffb1008b926a4] *pgd=0000000000000000, *pud=0000000000000000, *pmd=0000000000000000
[ 1.982610] Internal error: Oops: 96000004 [#1] SMP
[ 1.987541] Modules linked in:
[ 1.990631] CPU: 48 PID: 295 Comm: cpuhp/48 Tainted: G W 4.8.0-rc6-preempt-vol+ #9
[ 1.999435] Hardware name: Cavium ThunderX CN88XX board (DT)
[ 2.005159] task: fffffe0fe89cc300 task.stack: fffffe0fe8b8c000
[ 2.011158] PC is at try_to_wake_up+0x194/0x34c
[ 2.015737] LR is at try_to_wake_up+0x150/0x34c
[ 2.020318] pc : [<fffffc00080e7468>] lr : [<fffffc00080e7424>] pstate: 600000c5
[ 2.027803] sp : fffffe0fe8b8fb10
[ 2.031149] x29: fffffe0fe8b8fb10 x28: 0000000000000000
[ 2.036522] x27: fffffc0008c63bc8 x26: 0000000000001000
[ 2.041896] x25: fffffc0008c63c80 x24: fffffc0008bfb200
[ 2.047270] x23: 00000000000000c0 x22: 0000000000000004
[ 2.052642] x21: fffffe0fe89d25bc x20: 0000000000001000
[ 2.058014] x19: fffffe0fe89d1d00 x18: 0000000000000000
[ 2.063386] x17: 0000000000000000 x16: 0000000000000000
[ 2.068760] x15: 0000000000000018 x14: 0000000000000000
[ 2.074133] x13: 0000000000000000 x12: 0000000000000000
[ 2.079505] x11: 0000000000000000 x10: 0000000000000000
[ 2.084879] x9 : 0000000000000000 x8 : 0000000000000000
[ 2.090251] x7 : 0000000000000040 x6 : 0000000000000000
[ 2.095621] x5 : ffffffffffffffff x4 : 0000000000000000
[ 2.100991] x3 : 0000000000000000 x2 : 0000000000000000
[ 2.106364] x1 : fffffc0008be4c24 x0 : ffffff0ffffada80
[ 2.111737]
[ 2.113236] Process cpuhp/48 (pid: 295, stack limit = 0xfffffe0fe8b8c020)
[ 2.120102] Stack: (0xfffffe0fe8b8fb10 to 0xfffffe0fe8b90000)
[ 2.125914] fb00: fffffe0fe8b8fb80 fffffc00080e7648
.
.
.
[ 2.442859] Call trace:
[ 2.445327] Exception stack(0xfffffe0fe8b8f940 to 0xfffffe0fe8b8fa70)
[ 2.451843] f940: fffffe0fe89d1d00 0000040000000000 fffffe0fe8b8fb10 fffffc00080e7468
[ 2.459767] f960: fffffe0fe8b8f980 fffffc00080e4958 ffffff0ff91ab200 fffffc00080e4b64
[ 2.467690] f980: fffffe0fe8b8f9d0 fffffc00080e515c fffffe0fe8b8fa80 0000000000000000
[ 2.475614] f9a0: fffffe0fe8b8f9d0 fffffc00080e58e4 fffffe0fe8b8fa80 0000000000000000
[ 2.483540] f9c0: fffffe0fe8d10000 0000000000000040 fffffe0fe8b8fa50 fffffc00080e5ac4
[ 2.491465] f9e0: ffffff0ffffada80 fffffc0008be4c24 0000000000000000 0000000000000000
[ 2.499387] fa00: 0000000000000000 ffffffffffffffff 0000000000000000 0000000000000040
[ 2.507309] fa20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 2.515233] fa40: 0000000000000000 0000000000000000 0000000000000000 0000000000000018
[ 2.523156] fa60: 0000000000000000 0000000000000000
[ 2.528089] [<fffffc00080e7468>] try_to_wake_up+0x194/0x34c
[ 2.533723] [<fffffc00080e7648>] wake_up_process+0x28/0x34
[ 2.539275] [<fffffc00080d3764>] create_worker+0x110/0x19c
[ 2.544824] [<fffffc00080d69dc>] alloc_unbound_pwq+0x3cc/0x4b0
[ 2.550724] [<fffffc00080d6bcc>] wq_update_unbound_numa+0x10c/0x1e4
[ 2.557066] [<fffffc00080d7d78>] workqueue_online_cpu+0x220/0x28c
[ 2.563234] [<fffffc00080bd288>] cpuhp_invoke_callback+0x6c/0x168
[ 2.569398] [<fffffc00080bdf74>] cpuhp_up_callbacks+0x44/0xe4
[ 2.575210] [<fffffc00080be194>] cpuhp_thread_fun+0x13c/0x148
[ 2.581027] [<fffffc00080dfbac>] smpboot_thread_fn+0x19c/0x1a8
[ 2.586929] [<fffffc00080dbd64>] kthread+0xdc/0xf0
[ 2.591776] [<fffffc0008083380>] ret_from_fork+0x10/0x50
[ 2.597147] Code: b00057e1 91304021 91005021 b8626822 (b8606821)
[ 2.603464] ---[ end trace 58c0cd36b88802bc ]---
[ 2.608138] Kernel panic - not syncing: Fatal exception
Fix by moving the call to numa_store_cpu_info() for all CPUs into
smp_prepare_cpus(), which happens before wq_numa_init(). Since
smp_store_cpu_info() now contains only a single function call,
simplify by removing the function and inlining its contents.
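A sketch of the new placement (minimal; the rest of the per-cpu
prepare work is elided):

  void __init smp_prepare_cpus(unsigned int max_cpus)
  {
          int cpu;

          for_each_possible_cpu(cpu) {
                  /* (cpu_ops prepare and present-map setup elided) */

                  /* record the node for every CPU before wq_numa_init() */
                  numa_store_cpu_info(cpu);
          }
  }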
Suggested-by: Robert Richter <rric@kernel.org>
Fixes: 1a2db30034 ("arm64, numa: Add NUMA support for arm64 platforms.")
Cc: <stable@vger.kernel.org> # 4.7.x-
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Tested-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
These files were only including module.h for exception table
related functions. We've now separated that content out into its
own file, "extable.h", so move over to that and avoid all the
extra header content in module.h that we don't really need to compile
these files.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Move the PMU name into a common header file so it may
be referenced by other users.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv8 machines can identify the micro/arch defined counters
that are available on a machine. Add all these counters to the
default armv8 perf map. At run-time disable the counters which
are not available on the given PMU.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In preparation for ACPI support, add a pmu_probe_info table to
the arm_pmu_device_probe() call. This table gets used when
probing in the absence of a devicetree node for PMU.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kprobes searches backwards a finite number of instructions to determine if
there is an attempt to probe a load/store exclusive sequence. It stops when
it hits the maximum number of instructions or a load or store exclusive.
However this means it can run up past the beginning of the function and
start looking at literal constants. This has been shown to cause a false
positive and blocks insertion of the probe. To fix this, further limit the
backwards search to stop if it hits a symbol address from kallsyms. The
presumption is that this is the entry point to this code (particularly for
the common case of placing probes at the beginning of functions).
This also improves efficiency by not searching code that is not part of
the function. There is some possibility that the label might not denote
the entry path to the probed instruction, but the likelihood seems low,
and this is just another example of how kprobes users really need to be
careful about what they are doing.
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Make use of the new alternative_if and alternative_else_nop_endif and
get rid of our homebrew NOP sleds, making the code simpler to read.
Note that for cpu_do_switch_mm the ret has been moved out of the
alternative sequence, and in the default case there will be three
additional NOPs executed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Systems with differing CPU i-cache/d-cache line sizes can cause
problems with software cache management when execution is migrated
from one CPU to another. Usually, the application reads the cache size
on a CPU and then uses that length to perform cache operations. However,
if it gets migrated to another CPU with a smaller cache line size,
things could go completely wrong. To prevent such cases, always use the
smallest cache line size among the CPUs. The kernel CPU feature
infrastructure already keeps track of the safe value for all CPUID
registers including CTR. This patch works around the problem by:
For the kernel, dynamically patching it to read the cache size from the
system wide copy of CTR_EL0.
For applications, trapping read accesses to CTR_EL0 (by clearing
SCTLR.UCT) and emulating the mrs instruction to return the system wide
safe value of CTR_EL0.
For faster access (i.e. to avoid looking up the system wide value of
CTR_EL0 via read_system_reg()), we keep track of a pointer to the table
entry for CTR_EL0 in the CPU feature infrastructure.
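A sketch of the userspace trap side, assuming the ESR decoding macros
and the exposed CTR_EL0 table entry:

  static int ctr_read_handler(struct pt_regs *regs, u32 esr)
  {
          int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >>
                   ESR_ELx_SYS64_ISS_RT_SHIFT;

          /* return the system wide safe value of CTR_EL0 */
          regs->regs[rt] = arm64_ftr_reg_ctrel0.sys_val;
          regs->pc += 4;          /* skip the emulated mrs */
          return 0;
  }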
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Right now we trap some of the user space data cache operations
based on a few Errata (ARM 819472, 826319, 827319 and 824069).
We need to trap userspace access to CTR_EL0 if we detect a mismatched
cache line size. Since both of these traps share the same EC, refactor
the handler a little to make it more reader friendly.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On systems with mismatched i/d cache min line sizes, we need to use
the smallest size possible across all CPUs. This will be done by fetching
the system wide safe value from CPU feature infrastructure.
However, some special users (e.g. kexec, hibernate) need the line size
of the current CPU (rather than the system wide value), either when the
system wide value may not be accessible or when the caller is guaranteed
to execute without being migrated.
Provide another helper which fetches the cache line size of the current CPU.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
adrp uses a PC-relative address offset to the 4K page of a symbol. If
it appears in alternative code that is patched in, we should adjust the
offset to reflect the address the instruction will run from. This patch
adds support for fixing up the offset of adrp instructions.
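A sketch of the fixup in the alternatives patching code (align_down()
is assumed to be a local page-alignment helper; the adrp get/set offset
helpers are described in the following patch):

  } else if (aarch64_insn_is_adrp(insn)) {
          s32 orig_offset, new_offset;
          unsigned long target;

          /*
           * adrp is PC-relative to a 4K page: recompute the offset
           * for the address the instruction will actually run from.
           */
          orig_offset = aarch64_insn_adrp_get_offset(insn);
          target = align_down(altinsnptr, SZ_4K) + orig_offset;
          new_offset = target - align_down(insnptr, SZ_4K);
          insn = aarch64_insn_adrp_set_offset(insn, new_offset);
  }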
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Adds helpers for decoding/encoding the PC relative addresses for adrp.
This will be used for handling dynamic patching of 'adrp' instructions
in alternative code patching.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The alternative code patching doesn't check if the replaced instruction
uses a pc relative literal. This could cause silent corruption in the
instruction stream as the instruction will be executed from a different
address than what it was compiled for. Catch all such cases.
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Right now we run through the workaround checks on a CPU
from __cpuinfo_store_cpu. There are some problems with that:
1) We initialise the system wide CPU feature registers only after the
Boot CPU updates its cpuinfo. Now, if a workaround depends on the
variance of a CPU ID feature (e.g. a check for cache line size mismatch),
we have no way of performing it cleanly for the boot CPU.
2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c,
which is not an obvious place for it.
This patch rearranges the CPU specific capability (aka workaround) checks.
1) At the moment we use verify_local_cpu_capabilities() to check if a new
CPU has all the system advertised features. Use this for the secondary CPUs
to perform the workaround check. For that we rename
verify_local_cpu_capabilities() => check_local_cpu_capabilities()
which:
If the system wide capabilities haven't been initialised (i.e. the CPU
is activated at boot), updates the system wide detected workarounds.
Otherwise (i.e. a CPU hotplugged in later), verifies that this CPU
conforms to the system wide capabilities.
2) The boot CPU updates the workarounds from smp_prepare_boot_cpu() after
we have initialised the system wide CPU feature values.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This is a cosmetic change to rename the functions dealing with
the errata workarounds, to be more consistent with their naming.
1) check_local_cpu_errata() => update_cpu_errata_workarounds()
check_local_cpu_errata() actually updates the system's errata
workarounds, so rename it to reflect that.
2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
Use errata_workarounds instead of _errata.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is
not defined at the moment. The safer value for the L1Ip should be
the weakest of the policies, which happens to be AIVIVT. While at it,
fix the comment about safe_val.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
1. Remove the old binding code.
2. Read the nid of cpu0 from dts.
3. Fall back to nid 0 for cpu0 when numa=off is set in bootargs.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
numa_init() may return an error because of a NUMA configuration error,
so printing "No NUMA configuration found" is inaccurate. In fact, the
specific configuration error information should already have been
printed by the branch that detected the error.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
By using a common attr_groups array, the common arm_pmu code can set up
common files (e.g. cpumask) for us in subsequent patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A while back we added {read,write}_sysreg accessors to handle accesses
to system registers, without the usual boilerplate asm volatile,
temporary variable, etc.
This patch makes use of these across arm64 to make code shorter and
clearer. For sequences with a trailing ISB, the existing isb() macro is
also used so that asm blocks can be removed entirely.
A few uses of inline assembly for msr/mrs are left as-is. Those
manipulating sp_el0 for the current thread_info value have special
clobber requirements.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds static keys transparently for all the cpu_hwcaps
features by implementing an array of default-false static keys and
enabling them when detected. The cpus_have_cap() check uses the static
keys if the feature being checked is a constant, otherwise the compiler
generates the bitmap test.
Because of the early call to static_branch_enable() via
check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels
are initialised in cpuinfo_store_boot_cpu().
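A sketch of the resulting check (shape follows the description; details
assumed):

  static inline bool cpus_have_cap(unsigned int num)
  {
          if (num >= ARM64_NCAPS)
                  return false;
          if (__builtin_constant_p(num))
                  /* constant capability: use the static key */
                  return static_branch_unlikely(&cpu_hwcap_keys[num]);
          else
                  /* otherwise fall back to the bitmap test */
                  return test_bit(num, cpu_hwcaps);
  }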
Cc: Will Deacon <will.deacon@arm.com>
Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, enabling stacktrace for kprobe events generates a warning:
echo stacktrace > /sys/kernel/debug/tracing/trace_options
echo "p xhci_irq" > /sys/kernel/debug/tracing/kprobe_events
echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
save_stack_trace_regs() not implemented yet.
------------[ cut here ]------------
WARNING: CPU: 1 PID: 0 at ../kernel/stacktrace.c:74 save_stack_trace_regs+0x3c/0x48
Modules linked in:
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-rc4-dirty #5128
Hardware name: ARM Juno development board (r1) (DT)
task: ffff800975dd1900 task.stack: ffff800975ddc000
PC is at save_stack_trace_regs+0x3c/0x48
LR is at save_stack_trace_regs+0x3c/0x48
pc : [<ffff000008126c64>] lr : [<ffff000008126c64>] pstate: 600003c5
sp : ffff80097ef52c00
Call trace:
save_stack_trace_regs+0x3c/0x48
__ftrace_trace_stack+0x168/0x208
trace_buffer_unlock_commit_regs+0x5c/0x7c
kprobe_trace_func+0x308/0x3d8
kprobe_dispatcher+0x58/0x60
kprobe_breakpoint_handler+0xbc/0x18c
brk_handler+0x50/0x90
do_debug_exception+0x50/0xbc
This patch implements save_stack_trace_regs(), so that the stacktrace of
kprobe events can be obtained.
After this patch, there is no warning and we can see the stacktrace for
kprobe events in the trace buffer.
more /sys/kernel/debug/tracing/trace
<idle>-0 [004] d.h. 1356.000496: p_xhci_irq_0:(xhci_irq+0x0/0x9ac)
<idle>-0 [004] d.h. 1356.000497: <stack trace>
=> xhci_irq
=> __handle_irq_event_percpu
=> handle_irq_event_percpu
=> handle_irq_event
=> handle_fasteoi_irq
=> generic_handle_irq
=> __handle_domain_irq
=> gic_handle_irq
=> el1_irq
=> arch_cpu_idle
=> default_idle_call
=> cpu_startup_entry
=> secondary_start_kernel
=>
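A minimal sketch of the implementation, reusing the existing arm64
unwinder (save_trace() is assumed to be the existing stacktrace
callback):

  void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
  {
          struct stackframe frame;

          /* seed the unwinder from the exception registers */
          frame.fp = regs->regs[29];
          frame.sp = regs->sp;
          frame.pc = regs->pc;

          walk_stackframe(current, &frame, save_trace, trace);
  }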
Tested-by: David A. Long <dave.long@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit b5fe242972 ("arm64: kernel: fix style issues in sleep.S")
changed the linkage of _cpu_resume() to local, even though the symbol
is also referenced from hibernate.c. So revert this change.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Single-step traps to userspace (e.g. via ptrace) are expected to use
TRAP_TRACE for the si_code field of the siginfo, as opposed to the
TRAP_HWBRPT that we report currently.
Fix the reported value, which has no effect on existing and legacy
builds of GDB.
Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that the only remaining occurrences of the use of callee saved
registers are on the primary boot path, add a comment to the code
describing which register is used for what.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Instead of stashing the value of the link register in x28 before setting
up the stack and calling into C code, create an ordinary PCS compatible
stack frame so that we can push the return address onto the stack.
Since exception handlers require a stack as well, assign the stack pointer
register before installing the vector table.
Note that this accounts for the difference between THREAD_START_SP and
THREAD_SIZE, given that the stack pointer is always decremented before
calling into any C code.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Keeping __PHYS_OFFSET in x24 is actually less clear than simply taking
the value of __PHYS_OFFSET using an adrp instruction in the three places
that we need it. So change that.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Using x27 for passing to __enable_mmu what is essentially the return
address makes the code look more complicated than it needs to be. So
switch to x30/lr, and update the secondary and cpu_resume call sites to
simply call __enable_mmu as an ordinary function, with a bl instruction.
This requires the callers to be covered by .idmap.text.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The KASLR processing is only used by the primary boot path, and
complements the processing that takes place in __primary_switch().
Move the two parts together, to make the code easier to understand.
Also, fix up a minor whitespace issue.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: fixed conflict with -rc3 due to lack of fd363bd417]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The function el2_setup() passes its return value in register w20, and
in the two cases where the caller actually cares about this return value,
it is passed into set_cpu_boot_mode_flag() [almost] directly, which
expects its input in w20 as well.
So there is no reason to use a 'special' callee saved register here, but
we can simply follow the PCS for return value and first argument,
respectively.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This fixes a number of style issues in sleep.S. No functional changes are
intended:
- replace absolute literal references with relative references in
__cpu_suspend_enter(), which executes from its virtual address
- replace explicit lr assignment plus branch with bl in cpu_resume(), which
aligns it with stext() and secondary_startup()
- don't export _cpu_resume()
- use adr_l for mpidr_hash reference, and fix the incorrect accompanying
comment, which has been out of date since commit cabe1c81ea ("arm64:
Change cpu_resume() to enable mmu early then access sleep_sp by va")
- replace leading spaces with tabs, and add a bit of whitespace for
readability
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit e19a6ee246 ("arm64: kernel: Save and restore UAO and
addr_limit on exception entry") states that the exception handler
inherits the original PSTATE.UAO value, so UAO needs to be reset
explicitly. However, the ARM 8.2 Extension documentation says:
PSTATE.UAO is copied to SPSR_ELx.UAO and is then set to 0 on an
exception taken from AArch64 to AArch64
so hardware already does the right thing.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arm64 debug monitor initialisation code uses a CPU hotplug notifier
to clear the OS lock when CPUs come online.
This patch converts the code to the new hotplug mechanism.
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arm64 hw_breakpoint implementation uses a CPU hotplug notifier to
reset the {break,watch}point registers when CPUs come online.
This patch converts the code to the new hotplug mechanism, whilst moving
the invocation earlier to remove the need to disable IRQs explicitly in
the driver (which could cause havoc if we trip a watchpoint in an IRQ
handler whilst restoring the debug register state).
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When TIF_SINGLESTEP is set for a task, the single-step state machine is
enabled and we must take care not to reset it to the active-not-pending
state if it is already in the active-pending state.
Unfortunately, that's exactly what user_enable_single_step does, by
unconditionally setting the SS bit in the SPSR for the current task.
This causes failures in the GDB testsuite, where GDB ends up missing
expected step traps if the instruction being stepped generates another
trap, e.g. PTRACE_EVENT_FORK from an SVC instruction.
This patch fixes the problem by preserving the current state of the
stepping state machine when TIF_SINGLESTEP is set on the current thread.
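A sketch of the fix: only set the SPSR SS bit on the 0 -> 1 transition
of TIF_SINGLESTEP (helper names assumed):

  void user_enable_single_step(struct task_struct *task)
  {
          struct thread_info *ti = task_thread_info(task);

          /* don't disturb an already-active state machine */
          if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
                  set_regs_spsr_ss(task_pt_regs(task));
  }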
Cc: <stable@vger.kernel.org>
Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Expose the arm64_ftr_reg struct covering CTR_EL0 outside of cpufeature.o
so that other code can refer to it directly (i.e., without performing
the binary search).
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Constify the arm64_ftr_regs array, by moving the mutable arm64_ftr_reg
fields out of the array itself. This also streamlines the bsearch, since
the entire array can be covered by fewer cachelines. Moving the payload
out of the array also allows us to have special explicitly defined
struct instance in case other code needs to refer to it directly.
Note that this replaces the runtime sorting of the array with a runtime
BUG() check whether the array is sorted correctly in the code.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arm64_ftr_bits structures are never modified, so make them read-only.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The make rpm target depends on a proper UTS_MACHINE definition. Also,
use the variable in arch/arm64/kernel/setup.c, so that it's not
accidentally removed in the future.
Reported-and-tested-by: Fabian Vogt <fvogt@suse.com>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we use the MPIDR to resume on the same CPU that we hibernated
on, we no longer need to refuse to hibernate if the boot cpu is offline
(which we couldn't reliably detect anyway, since kexec may cause logical
CPUs to be renumbered).
This reverts commit 1fe492ce64.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
disable_nonboot_cpus() assumes that the lowest numbered online CPU is
the boot CPU, and that this is the correct CPU to run any power
management code on.
On arm64 CPU0 can be taken offline. For hibernate/resume this means we
may hibernate on a CPU other than CPU0. If the system is rebooted with
kexec 'CPU0' will be assigned to a different CPU. This complicates
hibernate/resume as now we can't trust the CPU numbers.
We currently forbid hibernate if CPU0 has been hotplugged out to avoid
this situation without kexec.
Save the MPIDR of the CPU we hibernated on in the hibernate arch-header,
use hibernate_resume_nonboot_cpu_disable() to direct which CPU we should
resume on based on the MPIDR of the CPU we hibernated on. This allows us to
hibernate/resume on any CPU, even if the logical numbers have been
shuffled by kexec.
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Follow the example set by x86 in commit 9ccaf77cf0 ("x86/mm:
Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option"), and
make these protections a fundamental security feature rather than an
opt-in. This also results in a minor code simplification.
For those rare cases when users wish to disable this protection (e.g.
for debugging), this can be done by passing 'rodata=off' on the command
line.
As DEBUG_RODATA_ALIGN is only intended to address a performance/memory
tradeoff, and does not affect correctness, this is left user-selectable.
DEBUG_MODULE_RONX is also left user-selectable until the core code
provides a boot-time option to disable the protection for debugging
use-cases.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Kdump(kexec-tools) parses /proc/iomem to identify all the memory regions
on the system. Since the current kernel names "nomap" regions, like UEFI
runtime services code/data, as "System RAM," kexec-tools sets up elf core
header to include them in a crash dump file (/proc/vmcore).
Then the crash dump kernel parses the UEFI memory map again, re-marks
those regions as "nomap" and does not create a memory mapping for them,
unlike the other areas of System RAM. In this case, copying /proc/vmcore
through copy_oldmem_page() on the crash dump kernel will end up with a
kernel abort, as reported in [1].
This patch names all the "nomap" regions explicitly as "reserved" so that
we can exclude them from a crash dump file. acpi_os_ioremap() must also
be modified because those regions have WB attributes [2].
Apart from kdump, this change also matches x86's use of acpi (and
/proc/iomem).
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/448186.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/450089.html
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
DEBUG_PAGEALLOC removes the valid bit of page table entries to prevent
any access to unallocated memory. Hibernate uses this as a hint that those
pages don't need to be saved/restored. This patch adds the
kernel_page_present() function that hibernate uses to test for this.
hibernate.c copies the resume kernel's linear map for use during restore.
Add _copy_pte() to fill in the holes made by DEBUG_PAGEALLOC in the resume
kernel, so we can restore data the original kernel had at these addresses.
Finally, DEBUG_PAGEALLOC means the linear-map alias of KERNEL_START to
KERNEL_END may have holes in it, so we can't lazily clean this whole
area to the PoC. Only clean the new mmuoff region, and the kernel/kvm
idmaps.
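A sketch of the hole-filling logic (simplified; the valid-bit fix-up is
spelled out inline):
    static void _copy_pte(pte_t *dst_pte, pte_t *src_pte, unsigned long addr)
    {
        pte_t pte = *src_pte;

        if (pte_valid(pte)) {
            /* Normal case: duplicate the entry as-is. */
            set_pte(dst_pte, pte);
        } else if (debug_pagealloc_enabled() && !pte_none(pte)) {
            /*
             * DEBUG_PAGEALLOC cleared the valid bit to trap stray
             * accesses; re-set it in our copy so the restore code can
             * write the saved contents back through this mapping.
             */
            set_pte(dst_pte, __pte(pte_val(pte) | PTE_VALID));
        }
    }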
This reverts commit da24eb1f3f.
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Resume from hibernate needs to clean any text executed by the kernel with
the MMU off to the PoC. Collect these functions together into the
.idmap.text section as all this code is tightly coupled and also needs
the same cleaning after resume.
Data is more complicated: secondary_holding_pen_release is written with
the MMU on, cleaned and invalidated, then read with the MMU off. In
contrast, __boot_cpu_mode is written with the MMU off and the
corresponding cache line is invalidated, so that when we read it with the
MMU on we don't get stale data. These cache maintenance operations
conflict with each other if the values are within a Cache Writeback
Granule (CWG) of each other.
Collect the data into two sections, .mmuoff.data.read and
.mmuoff.data.write; the linker script ensures the .mmuoff.data.write
section is aligned to the architectural maximum CWG of 2KB.
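A sketch of what the annotations could look like (the macro names are
illustrative):
    #define __mmuoff_data_read \
        __attribute__((__section__(".mmuoff.data.read")))
    #define __mmuoff_data_write \
        __attribute__((__section__(".mmuoff.data.write")))

    /* Written with the MMU on, read with it off: */
    volatile unsigned long secondary_holding_pen_release __mmuoff_data_read;
    /* Written with the MMU off, read with it on: */
    u32 __boot_cpu_mode[2] __mmuoff_data_write;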
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Each time new section markers are added, kernel/vmlinux.ld.S is updated,
and new extern char __start_foo[] definitions are scattered through the
tree.
Create asm/sections.h to collect these definitions (and include
the existing asm-generic version).
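A sketch of the collected header (the specific section symbols are
illustrative):
    #ifndef __ASM_SECTIONS_H
    #define __ASM_SECTIONS_H

    #include <asm-generic/sections.h>

    extern char __hyp_text_start[], __hyp_text_end[];
    extern char __idmap_text_start[], __idmap_text_end[];
    extern char __mmuoff_data_start[], __mmuoff_data_end[];

    #endif /* __ASM_SECTIONS_H */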
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Whenever we hit a kprobe from a non-kprobe debug exception handler, we
get an endless stream of "Unexpected kernel single-step exception at
EL1".
PSTATE.D is the debug exception mask bit. It is set whenever we enter an
exception level. While it is set, Watchpoint, Breakpoint, and Software
Step exceptions are masked. However, Breakpoint Instruction exceptions
can never be masked, so if we ever execute a BRK instruction we will
receive the corresponding breakpoint exception, irrespective of the
D-bit setting.
For example:
- We are executing a kprobe pre/post handler, and a kprobe has been
inserted into one of the instructions of a function called by the
handler. The BRK instruction is executed and we land in the
KPROBE_REENTER case. (This case is already handled by the current code.)
- We are executing a uprobe handler or any other BRK handler such as
WARN_ON (BRK BUG_BRK_IMM), and we trace that path using kprobes. We then
enter the kprobe breakpoint handler from another BRK handler. (This case
is not currently handled.)
In all such cases the kprobe breakpoint exception is raised while we are
already in debug exception mode. SPSR's D bit (bit 9) shows the value of
PSTATE.D immediately before the exception was taken, so in the above
example cases we would find it set in the kprobe breakpoint handler. A
single step exception will always follow the kprobe breakpoint exception;
however, it is only raised gracefully if we clear the D bit while
returning from the breakpoint exception. If the D bit is left set, it
results in an undefined exception instead, and when that handler enables
debug a single step exception is generated; it will never be handled,
because the address does not match and it is therefore treated as
unexpected.
This patch clears the D flag unconditionally in setup_singlestep, so that
we always take the single step exception correctly after returning from
the breakpoint exception. Additionally, it removes the statement that
sets the D flag on the KPROBE_REENTER return path, because the debug
exception for KPROBE_REENTER always takes place in a debug exception
state, where the D flag is already set.
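The helper and its now-unconditional call end up along these lines
(sketch):
    static void __kprobes spsr_set_debug_flag(struct pt_regs *regs, int mask)
    {
        unsigned long spsr = regs->pstate;

        if (mask)
            spsr |= PSR_D_BIT;
        else
            spsr &= ~PSR_D_BIT;

        regs->pstate = spsr;
    }

    /* In setup_singlestep(), before returning to the out-of-line slot: */
    spsr_set_debug_flag(regs, 0);    /* now unconditional */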
Acked-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus
entries for the old mappings may still live in the TLBs, and we risk
violating Break-Before-Make requirements, leading to TLB conflicts and/or
other issues.
We invalidate TLBs when we uninstall the idmap in early setup code, but
prior to this we are subject to issues relating to the Break-Before-Make
violation.
Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.
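The fix itself lives in head.S assembly; an illustrative C rendering of
the added maintenance:
    /*
     * Invalidate this CPU's TLB entries for the old mappings, and wait
     * for completion, before the new tables can be used.
     */
    asm volatile(
        "tlbi   vmalle1\n"    /* local TLB invalidate at EL1 */
        "dsb    nsh\n"        /* non-shareable: only this CPU matters */
        ::: "memory");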
Fixes: f80fb3a3d5 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Storing this value will help prevent unwinders from getting out of sync
with the function graph tracer ret_stack. Now instead of needing a
stateful iterator, they can compare the return address pointer to find
the right ret_stack entry.
Note that an array of 50 ftrace_ret_stack structs is allocated for every
task. So when an arch implements this, it will add either 200 or 400
bytes of memory usage per task (depending on whether it's a 32-bit or
64-bit platform).
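A simplified view of the new field and how an unwinder might use it
(retp_slot is a hypothetical name for the stack slot being examined):
    struct ftrace_ret_stack {
        unsigned long ret;      /* original return address */
        unsigned long func;     /* traced function */
    #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
        unsigned long *retp;    /* where the return address was hooked */
    #endif
    };

    /* Unwinder side: match slots directly, no stateful iterator. */
    for (i = task->curr_ret_stack; i >= 0; i--)
        if (task->ret_stack[i].retp == retp_slot)
            return task->ret_stack[i].ret;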
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nilay Vaish <nilayvaish@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/a95cfcc39e8f26b89a430c56926af0bb217bc0a1.1471607358.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Make HAVE_FUNCTION_GRAPH_FP_TEST a normal define, independent from
kconfig. This removes some config file pollution and simplifies the
checking for the fp test.
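The arch side then reduces to a plain define in its ftrace header, e.g.
(sketch):
    /* arch/<arch>/include/asm/ftrace.h */
    #define HAVE_FUNCTION_GRAPH_FP_TEST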
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nilay Vaish <nilayvaish@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/2c4e5f05054d6d367f702fd153af7a0109dd5c81.1471607358.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently, x25 and x26 hold the physical addresses of idmap_pg_dir
and swapper_pg_dir, respectively, when running early boot code. But
having registers with 'global' scope in files that contain different
sections with different lifetimes, and that are called by different
CPUs at different times is a bit messy, especially since stashing the
values does not buy us anything in terms of code size or clarity.
So simply replace each reference to x25 or x26 with an adrp instruction
referring to idmap_pg_dir or swapper_pg_dir directly.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
These objects are set during initialization and are read-only thereafter.
Previously I only wanted to mark vdso_pages, vdso_spec, vectors_page and
cpu_ops as __read_mostly from a performance point of view. Then, inspired
by Kees's patch[1] applying more __ro_after_init for arm, I think it is
better to mark them as __ro_after_init. What's more, I found some more
objects that are also read-only after init, so apply __ro_after_init to
all of them.
This patch also removes the global vdso_pagelist and tries to clean up
the vdso_spec[] assignment code.
[1] http://www.spinics.net/lists/arm-kernel/msg523188.html
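For example (illustrative declarations):
    static struct vm_special_mapping vdso_spec[2] __ro_after_init;
    static unsigned long vdso_pages __ro_after_init;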
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The vm_special_mapping spec which is used for the aarch32 vectors page
is never modified, so mark it as const.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
alloc_vectors_page() is not needed after booting, so this patch moves it
to the __init section.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use builtin_platform_driver() to simplify the code.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently ret_fast_syscall, work_pending, and ret_to_user form an ad-hoc
state machine that can be difficult to reason about due to duplicated
code and a large number of branch targets.
This patch factors the common logic out into the existing
do_notify_resume function, converting the code to C in the process,
making the code more legible.
This patch tries to closely mirror the existing behaviour while using
the usual C control flow primitives. As local_irq_{disable,enable} may
be instrumented, we balance exception entry (where we will almost
certainly enable IRQs) with a call to trace_hardirqs_on just before the
return to userspace.
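A simplified sketch of the resulting C loop (not the literal kernel
code):
    asmlinkage void do_notify_resume(struct pt_regs *regs,
                                     unsigned long thread_flags)
    {
        while (thread_flags & _TIF_WORK_MASK) {
            if (thread_flags & _TIF_NEED_RESCHED) {
                schedule();
            } else {
                local_irq_enable();
                if (thread_flags & _TIF_SIGPENDING)
                    do_signal(regs);
                if (thread_flags & _TIF_NOTIFY_RESUME) {
                    clear_thread_flag(TIF_NOTIFY_RESUME);
                    tracehook_notify_resume(regs);
                }
                if (thread_flags & _TIF_FOREIGN_FPSTATE)
                    fpsimd_restore_current_state();
            }
            local_irq_disable();
            thread_flags = READ_ONCE(current_thread_info()->flags);
        }
    }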
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In break_before_make_ttbr_switch we perform broadcast TLB maintenance
for the inner shareable domain, and use a DSB ISH to complete this.
However, at the point we execute this, secondary CPUs are either
physically offline, or executing code outside of the kernel. Upon
entering the kernel, secondary CPUs will invalidate their TLBs before
enabling their MMUs.
Thus we do not need to invalidate TLBs of other CPUs, and as with
idmap_cpu_replace_ttbr1 we can reduce the scope of maintenance to the
TLBs of the local CPU. This keeps our TLB maintenance code consistent,
and is a minor optimisation.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Literal loads of virtual addresses are subject to runtime relocation when
CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
MMU and caches enabled, literal loads of relocated values performed with
the MMU off are not guaranteed to return the latest value unless the
memory covering the literal is cleaned to the PoC explicitly.
So defer the literal load until after the MMU has been enabled, just like
we do for primary_switch() and secondary_switch() in head.S.
Fixes: 1e48ef7fcc ("arm64: add support for building vmlinux as a relocatable PIE binary")
Cc: <stable@vger.kernel.org> # 4.6+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In create_safe_exec_page(), we create a copy of the hibernate exit text,
along with some page tables to map this via TTBR0. We then install the
new tables in TTBR0.
In swsusp_arch_resume() we call create_safe_exec_page() before trying a
number of operations which may fail (e.g. copying the linear map page
tables). If these fail, we bail out of swsusp_arch_resume() and return
an error code, but leave TTBR0 as-is. Subsequently, the core hibernate
code will call free_basic_memory_bitmaps(), which will free all of the
memory allocations we made, including the page tables installed in
TTBR0.
Thus, we may have TTBR0 pointing at dangling freed memory for some
period of time. If the hibernate attempt was triggered by a user
requesting a hibernate test via the reboot syscall, we may return to
userspace with the clobbered TTBR0 value.
Avoid these issues by reorganising swsusp_arch_resume() such that we
have no failure paths after create_safe_exec_page(). We also add a check
that the zero page allocation succeeded, matching what we have for other
allocations.
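A sketch of the reorganised flow (copy_linear_map_tables() and
hibernate_exit() are hypothetical names standing in for the real steps):
    int swsusp_arch_resume(void)
    {
        int rc;
        void *zero_page;
        phys_addr_t exit_code_phys;

        /* Everything that can fail happens first... */
        rc = copy_linear_map_tables();
        if (rc)
            return rc;

        zero_page = (void *)get_safe_page(GFP_ATOMIC);
        if (!zero_page) {
            pr_err("Failed to allocate zero page.\n");
            return -ENOMEM;
        }

        rc = create_safe_exec_page(&exit_code_phys);
        if (rc)
            return rc;

        /* ...so nothing can fail once TTBR0 has been redirected. */
        hibernate_exit(exit_code_phys, zero_page);
        return 0;    /* not reached */
    }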
Fixes: 82869ac57b ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.7+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In create_safe_exec_page we install a set of global mappings in TTBR0,
then subsequently invalidate TLBs. While TTBR0 points at the zero page,
and the TLBs should be free of stale global entries, we may have stale
ASID-tagged entries (e.g. from the EFI runtime services mappings) for
the same VAs. Per the ARM ARM these ASID-tagged entries may conflict
with newly-allocated global entries, and we must follow a
Break-Before-Make approach to avoid issues resulting from this.
This patch reworks create_safe_exec_page to invalidate TLBs while the
zero page is still in place, ensuring that there are no potential
conflicts when the new TTBR0 value is installed. As a single CPU is
online while this code executes, we do not need to perform broadcast TLB
maintenance, and can call local_flush_tlb_all(), which also subsumes
some barriers. The remaining assembly is converted to use write_sysreg()
and isb().
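A sketch of the reworked sequence (pgdp stands for the newly created
tables):
    cpu_set_reserved_ttbr0();    /* "break": TTBR0 -> zero page */
    local_flush_tlb_all();       /* stale ASID-tagged entries gone */

    write_sysreg(virt_to_phys(pgdp), ttbr0_el1);    /* "make" */
    isb();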
Other than this, we safely manipulate TTBRs in the hibernate dance. The
code we install as part of the new TTBR0 mapping (the hibernated
kernel's swsusp_arch_suspend_exit) installs a zero page into TTBR1,
invalidates TLBs, then installs its preferred value. Upon being restored
to the middle of swsusp_arch_suspend, the new image will call
__cpu_suspend_exit, which will call cpu_uninstall_idmap, installing the
zero page in TTBR0 and invalidating all TLB entries.
Fixes: 82869ac57b ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.7+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Executing from a non-executable area gives an ugly message:
lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0e08
lkdtm: attempting bad execution at ffff000008880700
Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
Hardware name: linux,dummy-virt (DT)
task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88
The 'IABT (current EL)' indicates the error but it's a bit cryptic
without knowledge of the ARM ARM. There is also no indication of the
specific address which triggered the fault. The increase in kernel
page permissions makes hitting this case more likely as well.
Handling the case in the vectors gives a much more familiar looking
error message:
lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0840
lkdtm: attempting bad execution at ffff000008880680
Unable to handle kernel paging request at virtual address ffff000008880680
pgd = ffff8000089b2000
[ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
Internal error: Oops: 8400000e [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
Hardware name: linux,dummy-virt (DT)
task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Because the arm64 calling standard allows stacked function arguments to be
anywhere in the stack frame, do not attempt to duplicate the stack frame for
jprobes handler functions.
Documentation changes to describe this issue have been broken out into a
separate patch in order to simultaneously address them in other
architecture(s).
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Enable a hard limit on the CPU count via the nr_cpus=x boot option on
arm64, and make a minor change to the message printed when the total
number of CPUs exceeds the limit.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reported-by: Shiyuan Hu <hushiyuan@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
- fix HugeTLB leak due to CoW and PTE_RDONLY mismatch
- avoid accessing unmapped FDT fields when checking validity
- correctly account for vDSO AUX entry in ARCH_DLINFO
- fix kallsyms with absolute expressions in linker script
- kill unnecessary symbol-based relocs in vmlinux
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Fix copy-on-write referencing in HugeTLB
arm64: mm: avoid fdt_check_header() before the FDT is fully mapped
arm64: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
arm64: relocatable: suppress R_AARCH64_ABS64 relocations in vmlinux
arm64: vmlinux.lds: make __rela_offset and __dynsym_offset ABSOLUTE
The ugly bit is the conflicts. A couple of them are simple conflicts due
to 4.7 fixes, but most of them are with other trees. There was definitely
too much reliance on Acked-by here. Some conflicts are for KVM patches
where _I_ gave my Acked-by, but the worst are for this pull request's
patches that touch files outside arch/*/kvm. KVM submaintainers should
probably learn to synchronize better with arch maintainers, with the
latter providing topic branches whenever possible instead of Acked-by.
This is what we do with arch/x86. And I should learn to refuse pull
requests when linux-next sends scary signals, even if that means that
submaintainers have to rebase their branches.
Anyhow, here's the list:
- arch/x86/kvm/vmx.c: handle_pcommit and EXIT_REASON_PCOMMIT was removed
by the nvdimm tree. This tree adds handle_preemption_timer and
EXIT_REASON_PREEMPTION_TIMER at the same place. In general all mentions
of pcommit have to go.
There is also a conflict between a stable fix and this patch, where the
stable fix removed the vmx_create_pml_buffer function and its call.
- virt/kvm/kvm_main.c: kvm_cpu_notifier was removed by the hotplug tree.
This tree adds kvm_io_bus_get_dev at the same place.
- virt/kvm/arm/vgic.c: a few final bugfixes went into 4.7 before the
file was completely removed for 4.8.
- include/linux/irqchip/arm-gic-v3.h: this one is entirely our fault;
this is a change that should have gone in through the irqchip tree and
pulled by kvm-arm. I think I would have rejected this kvm-arm pull
request. The KVM version is the right one, except that it lacks
GITS_BASER_PAGES_SHIFT.
- arch/powerpc: what a mess. For the idle_book3s.S conflict, the KVM
tree is the right one; everything else is trivial. In this case I am
not quite sure what went wrong. The commit that is causing the mess
(fd7bacbca4, "KVM: PPC: Book3S HV: Fix TB corruption in guest exit
path on HMI interrupt", 2016-05-15) touches both arch/powerpc/kernel/
and arch/powerpc/kvm/. It's large, but at 396 insertions/5 deletions
I guessed that it wasn't really possible to split it and that the 5
deletions wouldn't conflict. That wasn't the case.
- arch/s390: also messy. First is hypfs_diag.c where the KVM tree
moved some code and the s390 tree patched it. You have to reapply the
relevant part of commits 6c22c98637, plus all of e030c1125e, to
arch/s390/kernel/diag.c. Or pick the linux-next conflict
resolution from http://marc.info/?l=kvm&m=146717549531603&w=2.
Second, there is a conflict in gmap.c between a stable fix and 4.8.
The KVM version here is the correct one.
I have pushed my resolution at refs/heads/merge-20160802 (commit
3d1f53419842) at git://git.kernel.org/pub/scm/virt/kvm/kvm.git.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
- ARM: GICv3 ITS emulation and various fixes. Removal of the
old VGIC implementation.
- s390: support for trapping software breakpoints, nested
virtualization (vSIE), the STHYI opcode, initial extensions
for CPU model support.
- MIPS: support for MIPS64 hosts (32-bit guests only) and lots
of cleanups, preliminary to this and the upcoming support for
hardware virtualization extensions.
- x86: support for execute-only mappings in nested EPT; reduced
vmexit latency for TSC deadline timer (by about 30%) on Intel
hosts; support for more than 255 vCPUs.
- PPC: bugfixes.
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (302 commits)
KVM: PPC: Introduce KVM_CAP_PPC_HTM
MIPS: Select HAVE_KVM for MIPS64_R{2,6}
MIPS: KVM: Reset CP0_PageMask during host TLB flush
MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX()
MIPS: KVM: Sign extend MFC0/RDHWR results
MIPS: KVM: Fix 64-bit big endian dynamic translation
MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase
MIPS: KVM: Use 64-bit CP0_EBase when appropriate
MIPS: KVM: Set CP0_Status.KX on MIPS64
MIPS: KVM: Make entry code MIPS64 friendly
MIPS: KVM: Use kmap instead of CKSEG0ADDR()
MIPS: KVM: Use virt_to_phys() to get commpage PFN
MIPS: Fix definition of KSEGX() for 64-bit
KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD
kvm: x86: nVMX: maintain internal copy of current VMCS
KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE
KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures
KVM: arm64: vgic-its: Simplify MAPI error handling
KVM: arm64: vgic-its: Make vgic_its_cmd_handle_mapi similar to other handlers
KVM: arm64: vgic-its: Turn device_id validation into generic ID validation
...
* pci/resource:
unicore32/PCI: Remove pci=firmware command line parameter handling
ARM/PCI: Remove arch-specific pcibios_enable_device()
ARM64/PCI: Remove arch-specific pcibios_enable_device()
MIPS/PCI: Claim bus resources on PCI_PROBE_ONLY set-ups
ARM/PCI: Claim bus resources on PCI_PROBE_ONLY set-ups
PCI: generic: Claim bus resources on PCI_PROBE_ONLY set-ups
PCI: Add generic pci_bus_claim_resources()
alx: Use pci_(request|release)_mem_regions
ethernet/intel: Use pci_(request|release)_mem_regions
GenWQE: Use pci_(request|release)_mem_regions
lpfc: Use pci_(request|release)_mem_regions
NVMe: Use pci_(request|release)_mem_regions
PCI: Add helpers to request/release memory and I/O regions
PCI: Extending pci=resource_alignment to specify device/vendor IDs
sparc/PCI: Implement pci_resource_to_user() with pcibios_resource_to_bus()
powerpc/pci: Implement pci_resource_to_user() with pcibios_resource_to_bus()
microblaze/PCI: Implement pci_resource_to_user() with pcibios_resource_to_bus()
PCI: Unify pci_resource_to_user() declarations
microblaze/PCI: Remove useless __pci_mmap_set_pgprot()
powerpc/pci: Remove __pci_mmap_set_pgprot()
PCI: Ignore write combining when mapping I/O port space
Merge tag 'devicetree-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull DeviceTree updates from Rob Herring:
- remove most of_platform_populate() calls in arch code. Now the DT
core code calls it in the default case and platforms only need to
call it if they have special needs
- use pr_fmt on all the DT core print statements
- CoreSight binding doc improvements to block name descriptions
- add dt_to_config script which can parse dts files and list
corresponding kernel config options
- fix memory leak hit with a PowerMac DT
- correct a bunch of STMicro compatible strings to use the correct
vendor prefix
- fix DA9052 PMIC binding doc to match what is actually used in dts
files
* tag 'devicetree-for-4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (35 commits)
documentation: da9052: Update regulator bindings names to match DA9052/53 DTS expectations
xtensa: Partially Revert "xtensa: Remove unnecessary of_platform_populate with default match table"
xtensa: Fix build error due to missing include file
MIPS: ath79: Add missing include file
Fix spelling errors in Documentation/devicetree
ARM: dts: fix STMicroelectronics compatible strings
powerpc/dts: fix STMicroelectronics compatible strings
Documentation: dt: i2c: use correct STMicroelectronics vendor prefix
scripts/dtc: dt_to_config - kernel config options for a devicetree
of: fdt: mark unflattened tree as detached
of: overlay: add resolver error prints
coresight: document binding acronyms
Documentation/devicetree: document cavium-pip rx-delay/tx-delay properties
of: use pr_fmt prefix for all console printing
of/irq: Mark initialised interrupt controllers as populated
of: fix memory leak related to safe_name()
Revert "of/platform: export of_default_bus_match_table"
of: unittest: use of_platform_default_populate() to populate default bus
memory: omap-gpmc: use of_platform_default_populate() to populate default bus
bus: uniphier-system-bus: use of_platform_default_populate() to populate default bus
...
Pull security subsystem updates from James Morris:
"Highlights:
- TPM core and driver updates/fixes
- IPv6 security labeling (CALIPSO)
- Lots of Apparmor fixes
- Seccomp: remove 2-phase API, close hole where ptrace can change
syscall #"
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (156 commits)
apparmor: fix SECURITY_APPARMOR_HASH_DEFAULT parameter handling
tpm: Add TPM 2.0 support to the Nuvoton i2c driver (NPCT6xx family)
tpm: Factor out common startup code
tpm: use devm_add_action_or_reset
tpm2_i2c_nuvoton: add irq validity check
tpm: read burstcount from TPM_STS in one 32-bit transaction
tpm: fix byte-order for the value read by tpm2_get_tpm_pt
tpm_tis_core: convert max timeouts from msec to jiffies
apparmor: fix arg_size computation for when setprocattr is null terminated
apparmor: fix oops, validate buffer size in apparmor_setprocattr()
apparmor: do not expose kernel stack
apparmor: fix module parameters can be changed after policy is locked
apparmor: fix oops in profile_unpack() when policy_db is not present
apparmor: don't check for vmalloc_addr if kvzalloc() failed
apparmor: add missing id bounds check on dfa verification
apparmor: allow SYS_CAP_RESOURCE to be sufficient to prlimit another task
apparmor: use list_next_entry instead of list_entry_next
apparmor: fix refcount race when finding a child profile
apparmor: fix ref count leak when profile sha1 hash is read
apparmor: check that xindex is in trans_table bounds
...
Pull smp hotplug updates from Thomas Gleixner:
"This is the next part of the hotplug rework.
- Convert all notifiers with a priority assigned
- Convert all CPU_STARTING/DYING notifiers
The final removal of the STARTING/DYING infrastructure will happen
when the merge window closes.
Another 700 lines of impenetrable maze gone :)"
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
timers/core: Correct callback order during CPU hot plug
leds/trigger/cpu: Move from CPU_STARTING to ONLINE level
powerpc/numa: Convert to hotplug state machine
arm/perf: Fix hotplug state machine conversion
irqchip/armada: Avoid unused function warnings
ARC/time: Convert to hotplug state machine
clocksource/atlas7: Convert to hotplug state machine
clocksource/armada-370-xp: Convert to hotplug state machine
clocksource/exynos_mct: Convert to hotplug state machine
clocksource/arm_global_timer: Convert to hotplug state machine
rcu: Convert rcutree to hotplug state machine
KVM/arm/arm64/vgic-new: Convert to hotplug state machine
smp/cfd: Convert core to hotplug state machine
x86/x2apic: Convert to CPU hotplug state machine
profile: Convert to hotplug state machine
timers/core: Convert to hotplug state machine
hrtimer: Convert to hotplug state machine
x86/tboot: Convert to hotplug state machine
arm64/armv8 deprecated: Convert to hotplug state machine
hwtracing/coresight-etm4x: Convert to hotplug state machine
...
The linker routines that we rely on to produce a relocatable PIE binary
treat it as a shared ELF object in some ways, i.e., they emit symbol-based
R_AARCH64_ABS64 relocations into the final binary, since doing so would be
appropriate when linking a shared library that is subject to symbol
preemption. (This means that an executable can override certain symbols
that are exported by a shared library it is linked with, and that the
shared library *must* update all its internal references as well, and point
them to the version provided by the executable.)
Symbol preemption does not occur for OS hosted PIE executables, let alone
for vmlinux, and so we would prefer to get rid of these symbol based
relocations. This would allow us to simplify the relocation routines, and
to strip the .dynsym, .dynstr and .hash sections from the binary. (Note
that these are tiny, and are placed in the .init segment, but they clutter
up the vmlinux binary.)
Note that these R_AARCH64_ABS64 relocations are only emitted for absolute
references to symbols defined in the linker script, all other relocatable
quantities are covered by anonymous R_AARCH64_RELATIVE relocations that
simply list the offsets to all 64-bit values in the binary that need to be
fixed up based on the offset between the link time and run time addresses.
Fortunately, GNU ld has a -Bsymbolic option, which is intended for shared
libraries, to allow them to ignore symbol preemption and unconditionally
bind all internal symbol references to their own definitions. So set it
for our PIE binary as well, and get rid of the associated sections and
the relocation code that processes them.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: fixed conflict with __dynsym_offset linker script entry]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Due to the untyped KIMAGE_VADDR constant, the linker may not notice
that the __rela_offset and __dynsym_offset expressions are absolute
values (i.e., are not subject to relocation). This does not matter for
KASLR, but it does confuse kallsyms in relative mode, since it uses
the lowest non-absolute symbol address as the anchor point, and expects
all other symbol addresses to be within 4 GB of it.
Fix this by qualifying these expressions as ABSOLUTE() explicitly.
Fixes: 0cd3defe0a ("arm64: kernel: perform relocation processing from ID map")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'for-linus-4.8-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen updates from David Vrabel:
"Features and fixes for 4.8-rc0:
- ACPI support for guests on ARM platforms.
- Generic steal time support for arm and x86.
- Support cases where kernel cpu is not Xen VCPU number (e.g., if
in-guest kexec is used).
- Use the system workqueue instead of a custom workqueue in various
places"
* tag 'for-linus-4.8-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: (47 commits)
xen: add static initialization of steal_clock op to xen_time_ops
xen/pvhvm: run xen_vcpu_setup() for the boot CPU
xen/evtchn: use xen_vcpu_id mapping
xen/events: fifo: use xen_vcpu_id mapping
xen/events: use xen_vcpu_id mapping in events_base
x86/xen: use xen_vcpu_id mapping when pointing vcpu_info to shared_info
x86/xen: use xen_vcpu_id mapping for HYPERVISOR_vcpu_op
xen: introduce xen_vcpu_id mapping
x86/acpi: store ACPI ids from MADT for future usage
x86/xen: update cpuid.h from Xen-4.7
xen/evtchn: add IOCTL_EVTCHN_RESTRICT
xen-blkback: really don't leak mode property
xen-blkback: constify instance of "struct attribute_group"
xen-blkfront: prefer xenbus_scanf() over xenbus_gather()
xen-blkback: prefer xenbus_scanf() over xenbus_gather()
xen: support runqueue steal time on xen
arm/xen: add support for vm_assist hypercall
xen: update xen headers
xen-pciback: drop superfluous variables
xen-pciback: short-circuit read path used for merging write values
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- Kexec support for arm64
- Kprobes support
- Expose MIDR_EL1 and REVIDR_EL1 CPU identification registers to sysfs
- Trapping of user space cache maintenance operations and emulation in
the kernel (CPU errata workaround)
- Clean-up of the early page tables creation (kernel linear mapping,
EFI run-time maps) to avoid splitting larger blocks (e.g. pmds) into
smaller ones (e.g. ptes)
- VDSO support for CLOCK_MONOTONIC_RAW in clock_gettime()
- ARCH_HAS_KCOV enabled for arm64
- Optimise IP checksum helpers
- SWIOTLB optimisation to only allocate/initialise the buffer if the
available RAM is beyond the 32-bit mask
- Properly handle the "nosmp" command line argument
- Fix for the initialisation of the CPU debug state during early boot
- vdso-offsets.h build dependency workaround
- Build fix when RANDOMIZE_BASE is enabled with MODULES off
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (64 commits)
arm64: arm: Fix-up the removal of the arm64 regs_query_register_name() prototype
arm64: Only select ARM64_MODULE_PLTS if MODULES=y
arm64: mm: run pgtable_page_ctor() on non-swapper translation table pages
arm64: mm: make create_mapping_late() non-allocating
arm64: Honor nosmp kernel command line option
arm64: Fix incorrect per-cpu usage for boot CPU
arm64: kprobes: Add KASAN instrumentation around stack accesses
arm64: kprobes: Cleanup jprobe_return
arm64: kprobes: Fix overflow when saving stack
arm64: kprobes: WARN if attempting to step with PSTATE.D=1
arm64: debug: remove unused local_dbg_{enable, disable} macros
arm64: debug: remove redundant spsr manipulation
arm64: debug: unmask PSTATE.D earlier
arm64: localise Image objcopy flags
arm64: ptrace: remove extra define for CPSR's E bit
kprobes: Add arm64 case in kprobe example module
arm64: Add kernel return probes support (kretprobes)
arm64: Add trampoline code for kretprobes
arm64: kprobes instruction simulation support
arm64: Treat all entry code as non-kprobe-able
...
* acpi-processor:
ACPI: enable ACPI_PROCESSOR_IDLE on ARM64
arm64: add support for ACPI Low Power Idle(LPI)
drivers: firmware: psci: initialise idle states using ACPI LPI
cpuidle: introduce CPU_PM_CPU_IDLE_ENTER macro for ARM{32, 64}
arm64: cpuidle: drop __init section marker to arm_cpuidle_init
ACPI / processor_idle: Add support for Low Power Idle(LPI) states
ACPI / processor_idle: introduce ACPI_PROCESSOR_CSTATE
* acpi-cppc:
mailbox: pcc: Add PCC request and free channel declarations
ACPI / CPPC: Prevent cpc_desc_ptr points to the invalid data
ACPI: CPPC: Return error if _CPC is invalid on a CPU
* acpi-apei:
ACPI / APEI: Add Boot Error Record Table (BERT) support
ACPI / einj: Make error paths more talkative
ACPI / einj: Convert EINJ_PFX to proper pr_fmt
* acpi-sleep:
ACPI: Execute _PTS before system reboot
* acpi-tables:
ACPI: Rename configfs.c to acpi_configfs.c to prevent link error
ACPI: add support for loading SSDTs via configfs
ACPI: add support for configfs
efi / ACPI: load SSTDs from EFI variables
spi / ACPI: add support for ACPI reconfigure notifications
i2c / ACPI: add support for ACPI reconfigure notifications
ACPI: add support for ACPI reconfiguration notifiers
ACPI / scan: fix enumeration (visited) flags for bus rescans
ACPI / documentation: add SSDT overlays documentation
ACPI: ARM64: support for ACPI_TABLE_UPGRADE
ACPI / tables: introduce ARCH_HAS_ACPI_TABLE_UPGRADE
ACPI / tables: move arch-specific symbol to asm/acpi.h
ACPI / tables: table upgrade: refactor function definitions
ACPI / tables: table upgrade: use cacheable map for tables
Conflicts:
arch/arm64/include/asm/acpi.h
This patch adds the appropriate callbacks to support ACPI Low Power Idle
(LPI) on ARM64.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Commit ea389daa7f (arm64: cpuidle: add __init section marker to
arm_cpuidle_init) added the __init annotation to arm_cpuidle_init
as it was not needed after booting, which was correct at that time.
However, with the introduction of ACPI LPI support, it is now used
from the CPU hotplug path in the ACPI processor driver.
This patch drops the __init annotation from arm_cpuidle_init to avoid
the following warning:
WARNING: vmlinux.o(.text+0x113c8): Section mismatch in reference from the
function acpi_processor_ffh_lpi_probe() to the function
.init.text:arm_cpuidle_init()
The function acpi_processor_ffh_lpi_probe() references
the function __init arm_cpuidle_init().
This is often because acpi_processor_ffh_lpi_probe() lacks a __init
annotation or the annotation of arm_cpuidle_init is wrong.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* kprobes:
arm64: kprobes: Add KASAN instrumentation around stack accesses
arm64: kprobes: Cleanup jprobe_return
arm64: kprobes: Fix overflow when saving stack
arm64: kprobes: WARN if attempting to step with PSTATE.D=1
kprobes: Add arm64 case in kprobe example module
arm64: Add kernel return probes support (kretprobes)
arm64: Add trampoline code for kretprobes
arm64: kprobes instruction simulation support
arm64: Treat all entry code as non-kprobe-able
arm64: Blacklist non-kprobe-able symbol
arm64: Kprobes with single stepping support
arm64: add conditional instruction simulation support
arm64: Add more test functions to insn.c
arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature
Passing "nosmp" should boot the kernel with a single processor, without
provision to enable secondary CPUs even if they are present. "nosmp" is
implemented by setting maxcpus=0. At the moment we still mark the secondary
CPUs present even with nosmp, which allows the userspace to bring them
up. This patch corrects the smp_prepare_cpus() to honor the maxcpus == 0.
Commit 44dbcc93ab ("arm64: Fix behavior of maxcpus=N") fixed the
behavior for maxcpus >= 1, but broke maxcpus = 0.
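A simplified sketch of the corrected loop:
    void __init smp_prepare_cpus(unsigned int max_cpus)
    {
        unsigned int cpu;

        for_each_possible_cpu(cpu) {
            if (cpu == smp_processor_id())
                continue;
            /*
             * "nosmp" (max_cpus == 0): don't mark secondaries
             * present, so userspace cannot online them.
             */
            if (max_cpus == 0)
                continue;
            if (!cpu_ops[cpu] || cpu_ops[cpu]->cpu_prepare(cpu))
                continue;
            set_cpu_present(cpu, true);
        }
    }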
Fixes: 44dbcc93ab ("arm64: Fix behavior of maxcpus=N")
Cc: <stable@vger.kernel.org> # 4.7+
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: updated code comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In smp_prepare_boot_cpu(), we invoke cpuinfo_store_boot_cpu to store
the cpuinfo in a per-cpu ptr, before initialising the per-cpu offset for
the boot CPU. This patch reorders the sequence to make sure we initialise
the per-cpu offset before accessing the per-cpu area.
Commit 4b998ff188 ("arm64: Delay cpuinfo_store_boot_cpu") fixed the
issue where we modified the per-cpu area even before the kernel
initialises the per-cpu areas, but failed to wait until the boot CPU had
updated its offset.
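A sketch of the reordering:
    void __init smp_prepare_boot_cpu(void)
    {
        /* Initialise this CPU's per-cpu offset first... */
        set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
        /* ...so storing cpuinfo in a per-cpu variable is safe. */
        cpuinfo_store_boot_cpu();
    }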
Fixes: 4b998ff188 ("arm64: Delay cpuinfo_store_boot_cpu")
Cc: <stable@vger.kernel.org> # 4.4+
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
jprobe_return seems to have aged badly: comments referring to
non-existent behaviours, and a dangerous habit of messing
with registers without telling the compiler.
This patch applies the following remedies:
- Fix the comments to describe the actual behaviour
- Tidy up the asm sequence to directly assign the
stack pointer without clobbering extra registers
- Mark the rest of the function as unreachable() so
that the compiler knows that there is no need for
an epilogue
- Stop making jprobe_return_break a global function
(you really don't want to call that guy, and it isn't
even a function).
Tested with tcp_probe.
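The tidied function ends up along these lines (sketch):
    void __kprobes jprobe_return(void)
    {
        struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

        /*
         * Restore the stack pointer saved at jprobe entry, then hit a
         * known BRK so the break handler can restore the original
         * context.
         */
        asm volatile("  mov     sp, %0\n"
                     "jprobe_return_break:  brk     %1\n"
                     :
                     : "r" (kcb->jprobe_saved_regs.sp),
                       "I" (BRK64_ESR_KPROBES)
                     : "memory");

        unreachable();
    }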
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The MIN_STACK_SIZE macro tries to evaluate how much stack space needs
to be saved in the jprobes_stack array, sized at 128 bytes.
When using the IRQ stack, said macro can happily return up to
IRQ_STACK_SIZE, which is 16kB. Mayhem follows.
This patch fixes things by getting rid of the crazy macro and
limiting the copy to be at most the size of the jprobes_stack
array, no matter which stack we're on.
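A sketch of the capped copy (simplified):
    u64 stack_ptr = kernel_stack_pointer(regs);
    u64 avail = (u64)current_thread_info() + THREAD_START_SP - stack_ptr;

    memcpy(kcb->jprobes_stack, (void *)stack_ptr,
           min_t(u64, sizeof(kcb->jprobes_stack), avail));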
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Stepping with PSTATE.D=1 is bad news. The step won't generate a debug
exception and we'll likely walk off into random data structures. This
should never happen, but when it does, it's a PITA to debug. Add a
WARN_ON to shout if we realise this is about to take place.
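A sketch of the check (simplified from the debug-monitors code):
    void kernel_enable_single_step(struct pt_regs *regs)
    {
        WARN_ON(regs->pstate & PSR_D_BIT);    /* the step would be masked */
        mdscr_write(mdscr_read() | DBG_MDSCR_SS);
        set_regs_spsr_ss(regs);
    }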
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
There is no need to explicitly clear the SS bit immediately before
setting it unconditionally.
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Clearing PSTATE.D is one of the requirements for generating a debug
exception. The arm64 booting protocol requires that PSTATE.D is set,
since many of the debug registers (for example, the hw_breakpoint
registers) are UNKNOWN out of reset and could potentially generate
spurious, fatal debug exceptions in early boot code if PSTATE.D was
clear. Once the debug registers have been safely initialised, PSTATE.D
is cleared, however this is currently broken for two reasons:
(1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary
CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall
runs after SMP (and the scheduler) have been initialised, there is
no guarantee that it is actually running on the boot CPU. In this
case, the boot CPU is left with PSTATE.D set and is not capable of
generating debug exceptions.
(2) In a preemptible kernel, we may explicitly schedule on the IRQ
return path to EL1. If an IRQ occurs with PSTATE.D set in the idle
thread, then we may schedule the kthread_init thread, run the
postcore_initcall to clear PSTATE.D and then context switch back
to the idle thread before returning from the IRQ. The exception
return path will then restore PSTATE.D from the stack, and set it
again.
This patch fixes the problem by moving the clearing of PSTATE.D earlier
to proc.S. This has the desirable effect of clearing it in one place for
all CPUs, long before we have to worry about the scheduler or any
exception handling. We ensure that the previous reset of MDSCR_EL1 has
completed before unmasking the exception, so that any spurious
exceptions resulting from UNKNOWN debug registers are not generated.
Without this patch applied, the kprobes selftests have been seen to fail
under KVM, where we end up attempting to step the OOL instruction buffer
with PSTATE.D set and therefore fail to complete the step.
Cc: <stable@vger.kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The pre-handler of this special 'trampoline' kprobe executes the return
probe handler functions and restores the original return address in
ELR_EL1. This way the saved pt_regs still hold the original register
context to be carried back to the probed kernel function.
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The trampoline code is used by kretprobes to capture a return from a probed
function. This is done by saving the registers, calling the handler, and
restoring the registers. The code then returns to the original saved caller
return address. It is necessary to do this directly instead of using a
software breakpoint because the code used in processing that breakpoint
could itself be kprobe'd and cause a problematic reentry into the debug
exception handler.
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed unnecessary masking of the PSTATE bits]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kprobes needs simulation of instructions that cannot be stepped
from a different memory location, e.g. those instructions
that use PC-relative addressing. In simulation, the behaviour
of the instruction is implemented using a copy of pt_regs.
The following instruction categories are simulated:
- All branching instructions (conditional, register, and immediate)
- Literal access instructions (load-literal, adr/adrp)
Conditional execution is limited to branching instructions in
ARMv8. If the conditions in PSTATE do not match the condition fields
of the opcode, the instruction is effectively a NOP.
Thanks to Will Cohen for assorted suggested changes.
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed linux/module.h include]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Entry symbols are not kprobe-safe, so blacklist them for kprobing.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Do not include syscall wrappers in .entry.text]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add all function symbols which are called from do_debug_exception under
NOKPROBE_SYMBOL, as they cannot be kprobed.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add support for basic kernel probes (kprobes) and jump probes
(jprobes) for ARM64.
Kprobes utilizes the software breakpoint and single-step debug
exceptions supported on ARMv8.
A software breakpoint is placed at the probe address to trap the
kernel execution into the kprobe handler.
ARMv8 supports enabling single stepping before the break exception
return (ERET), with the next PC in the exception return address
(ELR_EL1). The kprobe handler prepares an executable memory slot for
out-of-line execution with a copy of the original instruction being
probed, and enables single stepping. The PC is set to the out-of-line
slot address before the ERET. With this scheme, the instruction is
executed with the exact same register context except for the PC (and
DAIF) registers.
The debug mask (PSTATE.D) is enabled only when single stepping a
recursive kprobe, e.g. during kprobe re-entry, so that the probed
instruction can be single stepped within the kprobe handler's exception
context. The recursion depth of kprobes is always 2: upon probe
re-entry, any further re-entry is prevented by not calling the handlers,
and the case is counted as a missed kprobe.
Single stepping from the out-of-line slot has a drawback for PC-relative
accesses like branches and symbolic literal accesses, as the offset from
the new PC (the slot address) is not guaranteed to fit in the immediate
value of the opcode. Such instructions need simulation, so reject
probing them.
Instructions that generate exceptions or change CPU mode are rejected
for probing.
Exclusive load/store instructions are rejected too. Additionally, the
code is checked to see if it is inside an exclusive load/store sequence
(code from Pratyush).
System instructions are mostly enabled for stepping, except MSR/MRS
accesses to the "DAIF" flags in PSTATE, which are not safe for
probing.
This also changes arch/arm64/include/asm/ptrace.h to use
include/asm-generic/ptrace.h.
Thanks to Steve Capper and Pratyush Anand for several suggested
changes.
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cease using the arm32 arm_check_condition() function and replace it with
a local version for use in deprecated instruction support on arm64. Also
make the function table used by this available for future use by kprobes
and/or uprobes.
This function is derived from code written by Sandeepa Prabhu.
Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Certain instructions are hard to execute correctly out-of-line (as in
kprobes). Test functions are added to insn.[hc] to identify these. The
instructions include any that use PC-relative addressing, change the PC,
or change interrupt masking. For efficiency and simplicity test
functions are also added for small collections of related instructions.
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting
functions and defines.
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Remove unused functions]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
It can be useful for JIT software to be aware of MIDR_EL1 and
REVIDR_EL1 to ascertain the presence of any core errata that could
affect code generation.
This patch exposes these registers through sysfs:
/sys/devices/system/cpu/cpu$ID/regs/identification/midr_el1
/sys/devices/system/cpu/cpu$ID/regs/identification/revidr_el1
where $ID is the cpu number. For big.LITTLE systems, one can have a
mixture of cores (e.g. Cortex A53 and Cortex A57), thus all CPUs need
to be enumerated.
If the kernel does not have valid information to populate these entries
with, an empty string is returned to userspace.
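A user-space consumer such as a JIT could then read the register along
these lines (hypothetical example using the cpu0 path above):

#include <stdio.h>

int main(void)
{
	char buf[32];
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/regs/identification/midr_el1", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("cpu0 MIDR_EL1: %s", buf);
	fclose(f);
	return 0;
}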
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
[suzuki.poulose@arm.com: ABI documentation updates, hotplug notifiers, kobject changes]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
So far the arm64 clock_gettime() vDSO implementation only supported
the following clocks, falling back to the syscall for the others:
- CLOCK_REALTIME{,_COARSE}
- CLOCK_MONOTONIC{,_COARSE}
This patch adds support for the CLOCK_MONOTONIC_RAW clock, taking
advantage of the recent refactoring of the vDSO time functions. Like
the non-_COARSE clocks, this only works when the "arch_sys_counter"
clocksource is in use (allowing us to read the current time from the
virtual counter register), otherwise we also have to fall back to the
syscall.
Most of the data is shared with CLOCK_MONOTONIC, and the algorithm is
similar. The reference implementation in kernel/time/timekeeping.c
shows that:
- CLOCK_MONOTONIC = tk->wall_to_monotonic + tk->xtime_sec +
timekeeping_get_ns(&tk->tkr_mono)
- CLOCK_MONOTONIC_RAW = tk->raw_time + timekeeping_get_ns(&tk->tkr_raw)
- tkr_mono and tkr_raw are identical (in particular, same
clocksource), except these members:
* mult (only mono's multiplier is NTP-adjusted)
* xtime_nsec (always 0 for raw)
Therefore, tk->raw_time and tkr_raw->mult are now also stored in the
vDSO data page.
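In C terms, the computation is roughly the following simplified model; it
ignores the sequence-count retry loop and the shifted-nanosecond
representation used internally, and the field names are illustrative:

#include <stdint.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000ULL

/* Simplified model of the relevant vDSO data page fields. */
struct vdso_raw {
	uint64_t cycle_last;	/* counter value at last timekeeper update */
	uint64_t raw_sec;	/* tk->raw_time seconds */
	uint64_t raw_nsec;	/* tk->raw_time nanoseconds */
	uint32_t mult_raw;	/* tkr_raw->mult: not NTP-adjusted */
	uint32_t shift;		/* shift shared by tkr_mono and tkr_raw */
};

/* cycles is the current virtual counter value (CNTVCT_EL0). */
static void monotonic_raw(const struct vdso_raw *d, uint64_t cycles,
			  struct timespec *ts)
{
	uint64_t nsec = d->raw_nsec +
			(((cycles - d->cycle_last) * d->mult_raw) >> d->shift);

	ts->tv_sec  = d->raw_sec + nsec / NSEC_PER_SEC;
	ts->tv_nsec = nsec % NSEC_PER_SEC;
}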
Cc: Ali Saidi <ali.saidi@arm.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Time functions are directly implemented in assembly in arm64, and it
is desirable to keep it this way for performance reasons (everything
fits in registers, so that the stack is not used at all). However, the
current implementation is quite difficult to read and understand (even
considering it's assembly). Additionally, due to the structure of
__kernel_clock_gettime, which heavily uses conditional branches to
share code between the different clocks, it is difficult to support a
new clock without making the branches even harder to follow.
This commit completely refactors the structure of clock_gettime (and
gettimeofday along the way) while keeping exactly the same algorithms.
We no longer try to share code; instead, macros provide common
operations. This new approach comes with a number of advantages:
- In clock_gettime, clock implementations are no longer interspersed,
making them much more readable. Additionally, macros only use
registers passed as arguments or reserved with .req, this way it is
easy to make sure that registers are properly allocated. To avoid a
large number of branches in a given execution path, a jump table is
used; a normal execution uses 3 unconditional branches.
- __do_get_tspec has been replaced with 2 macros (get_ts_clock_mono,
get_clock_shifted_nsec) and explicit loading of data from the vDSO
page. Consequently, clock_gettime and gettimeofday are now leaf
functions, and saving x30 (lr) is no longer necessary.
- Variables protected by tb_seq_count are now loaded all at once,
allowing us to merge the seqcnt_read macro into seqcnt_check.
- For CLOCK_REALTIME_COARSE, removed an unused load of the wall to
monotonic timespec.
- For CLOCK_MONOTONIC_COARSE, removed a few shift instructions.
Obviously, the downside of sharing less code is an increase in code
size. However since the vDSO has its own code page, this does not
really matter, as long as the size of the DSO remains below 4 kB. For
now this should be all right:
                   Before   After
vdso.so size (B)     2776    3000
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arm64/kernel/{vdso,signal}.c include vdso-offsets.h, as well as any
file that includes asm/vdso.h. Therefore, vdso-offsets.h must be
generated before these files are compiled.
The current rules in arm64/kernel/Makefile do not actually enforce
this, because even though $(obj)/vdso is listed as a prerequisite for
vdso-offsets.h, this does not result in the intended effect of
building the vdso subdirectory (before all the other objects). As a
consequence, depending on the order in which the rules are followed,
vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o
are built. The current rules also impose an unnecessary dependency on
vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary
rebuilds. This is made obvious when using make -j:
touch arch/arm64/kernel/vdso/gettimeofday.S && make -j$NCPUS arch/arm64/kernel
will sometimes result in none of arm64/kernel/*.o being
rebuilt, sometimes all of them, or even just some of them.
It is quite difficult to ensure that a header is generated before it
is used with recursive Makefiles by using normal rules. Instead,
arch-specific generated headers are normally built in the archprepare
recipe in the arch Makefile (see for instance arch/ia64/Makefile).
Unfortunately, asm-offsets.h is included in gettimeofday.S, and must
therefore be generated before vdso-offsets.h, which is not the case if
archprepare is used. For this reason, a rule run after archprepare has
to be used.
This commit adds rules in arm64/Makefile to build vdso-offsets.h
during the prepare step, ensuring that vdso-offsets.h is generated
before building anything. It also removes the now-unnecessary
dependencies on vdso-offsets.h in arm64/kernel/Makefile. Finally, it
removes the duplication of asm-offsets.h between arm64/kernel/vdso/
and include/generated/ and makes include/generated/vdso-offsets.h a
target in arm64/kernel/vdso/Makefile.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.com>
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This reverts commit 90f777beb7.
While this commit was aimed at fixing the dependencies, with a large
make -j the vdso-offsets.h file is not generated, leading to build
failures.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arch/arm64/kernel/{vdso,signal}.c include generated/vdso-offsets.h, and
therefore the symbol offsets must be generated before these files are
compiled.
The current rules in arm64/kernel/Makefile do not actually enforce
this, because even though $(obj)/vdso is listed as a prerequisite for
vdso-offsets.h, this does not result in the intended effect of
building the vdso subdirectory (before all the other objects). As a
consequence, depending on the order in which the rules are followed,
vdso-offsets.h is updated or not before arm64/kernel/{vdso,signal}.o
are built. The current rules also impose an unnecessary dependency on
vdso-offsets.h for all arm64/kernel/*.o, resulting in unnecessary
rebuilds.
This patch removes the arch/arm64/kernel/vdso/vdso-offsets.h file
generation, leaving only the include/generated/vdso-offsets.h one. It
adds a forced dependency check of the vdso-offsets.h file in
arch/arm64/kernel/Makefile which, if not up to date according to the
arch/arm64/kernel/vdso/Makefile rules (depending on vdso.so.dbg), will
trigger the vdso/ subdirectory build and vdso-offsets.h re-generation.
Automatic kbuild dependency rules between kernel/{vdso,signal}.c rules
and vdso-offsets.h will guarantee that the vDSO object is built first,
followed by the generated symbol offsets header file.
Reported-by: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cavium erratum 27456 commit 104a0c02e8
("arm64: Add workaround for Cavium erratum 27456")
is applicable to the ThunderX 81xx pass 1.0 SoC as well.
Add code to enable the workaround for 81xx.
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@cavium.com>
Reviewed-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If we take an exception while at EL1, the exception handler inherits
the original context's addr_limit and PSTATE.UAO values. To be consistent
always reset addr_limit and PSTATE.UAO on (re-)entry to EL1. This
prevents accidental re-use of the original context's addr_limit.
Based on a similar patch for arm from Russell King.
Cc: <stable@vger.kernel.org> # 4.6-
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Move xen_early_init() before efi_init(), so that efi_init() can
initialize Xen-specific UEFI support.
Check whether we are running on the Xen hypervisor via the flattened
device tree.
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Add the code that enables the switch to the lower HYP VA range.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
When running the OS with a page size > 4 KB, we need to round up mappings
for regions that are not aligned to the OS's page size. We already avoid
block mappings for EfiRuntimeServicesCode/Data regions for other reasons,
but in the unlikely event that other unaligned regions exist that have the
EFI_MEMORY_RUNTIME attribute set, ensure that unaligned regions are always
mapped down to pages. This way, the overlapping page is guaranteed not to
be covered by a block mapping that needs to be split.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To avoid triggering diagnostics in the MMU code that are finicky about
splitting block mappings into more granular mappings, ensure that regions
that are likely to appear in the Memory Attributes table as well as the
UEFI memory map are always mapped down to pages. This way, we can use
apply_to_page_range() instead of create_pgd_mapping() for the second pass,
which cannot split or merge block entries, and operates strictly on PTEs.
Note that this aligns the arm64 Memory Attributes table handling code with
the ARM code, which already uses apply_to_page_range() to set the strict
permissions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add a bool parameter 'allow_block_mappings' to create_pgd_mapping() and
the various helper functions that it descends into, to give the caller
control over whether block entries may be used to create the mapping.
The UEFI runtime mapping routines will use this to avoid creating block
entries that would need to split up into page entries when applying the
permissions listed in the Memory Attributes firmware table.
This also replaces the block_mappings_allowed() helper function that was
added for DEBUG_PAGEALLOC functionality; the resulting code is
functionally equivalent (given that debug_page_alloc does not operate on
EFI page table entries anyway).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ARM errata 819472, 826319, 827319 and 824069 for affected
Cortex-A53 cores demand to promote "dc cvau" instructions to
"dc civac". Since we allow userspace to also emit those instructions,
we should make sure that "dc cvau" gets promoted there too.
So let's grasp the nettle here and actually trap every userland cache
maintenance instruction once we detect at least one affected core in
the system.
We then emulate the instruction by executing it on behalf of userland,
promoting "dc cvau" to "dc civac" on the way, and injecting any resulting
access fault back into userspace.
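Schematically, the emulation amounts to the sketch below; the Rt
extraction from the ESR ISS field is an assumption here, and the
uaccess/fault handling via arm64_notify_segfault() is reduced to a
comment:

/*
 * Sketch: emulate a trapped EL0 "dc cvau" on behalf of userspace,
 * promoted to "dc civac". Simplified; the real handler must use
 * uaccess primitives and report faults via arm64_notify_segfault().
 */
static int user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
{
	int rt = (esr >> 5) & 0x1f;		/* ISS Rt field (assumed) */
	unsigned long address = regs->regs[rt];

	asm volatile("dc civac, %0" : : "r" (address));

	regs->pc += 4;				/* skip the emulated insn */
	return 0;
}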
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: s/set_segfault/arm64_notify_segfault/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The code for injecting a signal into userland if a trapped instruction
fails emulation due to a _userland_ error (like an illegal address)
will be used more often with the next patch.
Factor out the core functionality into a separate function and use
that both for the existing trap handler and for the deprecated
instructions emulation.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: s/set_segfault/arm64_notify_segfault/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently we call the (optional) enable function for CPU _features_
only. As CPU _errata_ descriptions share the same data structure and
having an enable function is useful for errata as well (for instance
to set bits in SCTLR), let's call it when enumerating errata too.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As Kees Cook notes in the ARM counterpart of this patch [0]:
The _etext position is defined to be the end of the kernel text code,
and should not include any part of the data segments. This interferes
with things that might check memory ranges and expect executable code
up to _etext.
In particular, Kees is referring to the HARDENED_USERCOPY patch set [1],
which rejects attempts to call copy_to_user() on kernel ranges containing
executable code, but does allow access to the .rodata segment. Regardless
of whether one may or may not agree with the distinction, it makes sense
for _etext to have the same meaning across architectures.
So let's put _etext where it belongs, between .text and .rodata, and fix
up existing references to use __init_begin instead, which unlike __end_rodata
includes the exception and notes sections as well.
The _etext references in kaslr.c are left untouched, since its references
to [_stext, _etext) are meant to capture potential jump instruction targets,
and so disregarding .rodata is actually an improvement here.
[0] http://article.gmane.org/gmane.linux.kernel/2245084
[1] http://thread.gmane.org/gmane.linux.kernel.hardened.devel/2502
Reported-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To aid in debugging kexec problems or when adding new functionality to
kexec, add a new routine kexec_image_info() and several inline pr_debug
statements.
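The routine is essentially a pr_debug dump of the kimage; a sketch of its
likely shape (field selection illustrative):

#define kexec_image_info(_i) _kexec_image_info(__func__, __LINE__, _i)

static void _kexec_image_info(const char *func, int line,
			      const struct kimage *kimage)
{
	unsigned long i;

	pr_debug("%s:%d: kexec kimage info:\n", func, line);
	pr_debug("  start:       %lx\n", kimage->start);
	pr_debug("  nr_segments: %lu\n", kimage->nr_segments);
	for (i = 0; i < kimage->nr_segments; i++)
		pr_debug("    segment[%lu]: %016lx - %016lx, %lu bytes\n",
			 i, kimage->segment[i].mem,
			 kimage->segment[i].mem + kimage->segment[i].memsz,
			 (unsigned long)kimage->segment[i].memsz);
}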
Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add three new files, kexec.h, machine_kexec.c and relocate_kernel.S to the
arm64 architecture that add support for the kexec re-boot mechanism
(CONFIG_KEXEC) on arm64 platforms.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: removed dead code following James Morse's comments]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 68234df4ea ("arm64: kill flush_cache_all()") removed the global
arm64 routines cpu_reset() and cpu_soft_restart() needed by the arm64
kexec and kdump support. Add back a simplified version of
cpu_soft_restart() with some changes needed for kexec in the new files
cpu_reset.S and cpu_reset.h.
When a CPU is reset it needs to be put into the exception level it had when
it entered the kernel. Update cpu_soft_restart() to accept an argument
which signals if the reset address should be entered at EL1 or EL2, and
add a new hypercall HVC_SOFT_RESTART which is used for the EL2 switch.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
kernel/smp.c has a fancy counter that keeps track of the number of CPUs
it marked as not-present and left in cpu_park_loop(). If there are any
CPUs spinning in here, features like kexec or hibernate may release them
by overwriting this memory.
This problem also occurs on machines using spin-tables to release
secondary cores.
After commit 44dbcc93ab ("arm64: Fix behavior of maxcpus=N")
we bring all known cpus into the secondary holding pen, meaning this
memory can't be re-used by kexec or hibernate.
Add a function cpus_are_stuck_in_kernel() to determine if either of these
cases have occurred.
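A hedged sketch of the check; the counter and the spin-table predicate
are illustrative names, not necessarily the patch's exact code:

/*
 * Sketch: report whether secondary CPUs may be parked in kernel text.
 * 'cpus_stuck_in_kernel' stands in for the counter mentioned above;
 * cpus_booted_via_spin_table() is a hypothetical helper.
 */
bool cpus_are_stuck_in_kernel(void)
{
	bool spin_tables = num_possible_cpus() > 1 &&
			   cpus_booted_via_spin_table();

	return !!cpus_stuck_in_kernel || spin_tables;
}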
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: cherry-picked from mainline for kexec dependency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On systems with PCI_PROBE_ONLY set, we rely on BAR assignments from
firmware. Previously we did not insert those resources into the resource
tree, so we had to skip pci_enable_resources() because it fails if
resources are not in the resource tree.
Now that we *do* insert resources even when PCI_PROBE_ONLY is set, we no
longer need the ARM64-specific pcibios_enable_device(). Remove it so we
use the generic version.
[bhelgaas: changelog]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Catalin Marinas <catalin.marinas@arm.com>
After the patch "of/platform: Add common method to populate default bus",
arch code can remove its now-unnecessary calls to of_platform_populate()
with the default match table.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Rob Herring <robh@kernel.org>
of_iommu_init() is called multiple times by arch code; make it a
postcore_initcall_sync so that the relevant arch calls can be dropped
entirely.
Note that the IOMMUs should have a chance to perform some basic
initialisation before we start adding masters to them, so
postcore_initcall_sync is a good choice: it ensures of_iommu_init() is
called before of_platform_populate().
Acked-by: Rich Felker <dalias@libc.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Rob Herring <robh@kernel.org>
Hibernate relies on cpu hotplug to prevent secondary cores executing
the kernel text while it is being restored.
Add a call to cpus_are_stuck_in_kernel() to determine if there are
CPUs not counted by 'num_online_cpus()', and prevent hibernate in this
case.
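The guard on the hibernate entry path then amounts to something like the
following sketch (exact call site and message are illustrative):

/* Sketch: refuse to hibernate when CPUs cannot be safely offlined. */
if (cpus_are_stuck_in_kernel()) {
	pr_err("Can't hibernate: no mechanism to offline CPUs.\n");
	return -EBUSY;
}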
Fixes: 82869ac57b ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds support for ACPI_TABLE_UPGRADE on ARM64.
To access the initrd image we need to move the initialization
of the linear mapping a bit earlier.
The implementation of the feature, acpi_table_upgrade()
(drivers/acpi/tables.c), works with initrd data represented as an array
in virtual memory. It uses a library utility to find the redefined
tables in that array and iterates over it to copy the data to newly
allocated memory. So to access the initrd data via fixmap
we would need to rewrite it considerably.
In the x86 arch, kernel memory is already mapped by the time
acpi_table_upgrade() and acpi_boot_table_init() are called, so I
think we can just move this mapping one function earlier too.
Signed-off-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Aleksey Makarov <aleksey.makarov@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing
instruction aborts from data aborts, but this bit is architecturally
RES0 for instruction aborts, and could be allocated for an arbitrary
purpose in future. Additionally, we hard-code the value in entry.S
without the mnemonic, making the code difficult to understand.
Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the esr,
which we already pass to the sole use of ESR_LNX_EXEC. A new helper,
is_el0_instruction_abort() is added to make the logic clear. Any
instruction aborts taken from EL1 will already have been handled by
bad_mode, so we need not handle that case in the helper.
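The helper reduces to a single EC comparison; a sketch, assuming the
architectural EC value 0x20 ('instruction abort from a lower exception
level'):

/*
 * Sketch of the new helper: an instruction abort taken from EL0 has
 * ESR_ELx.EC == ESR_ELx_EC_IABT_LOW (0x20).
 */
static inline bool is_el0_instruction_abort(unsigned int esr)
{
	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
}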
For consistency, the existing permission_fault helper is renamed to
is_permission_fault, and the return type is changed to bool. There
should be no functional changes as the return value was a boolean
expression, and the result is only used in another boolean expression.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Several places open-code extraction of the EC field from an ESR_ELx
value, in subtly different ways. This is unfortunate duplication and
variation, and the precise logic used to extract the field is a
distraction.
This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from
an ESR_ELx value in a consistent fashion.
Existing open-coded extractions in core arm64 code are moved over to the
new helper. KVM code is left as-is for the moment.
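Since the EC field occupies ESR_ELx bits [31:26], the macro is a simple
field extraction, along the lines of:

#define ESR_ELx_EC_SHIFT	(26)
#define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)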
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Huang Shijie <shijie.huang@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently dump_mem attempts to dump memory in 64-bit chunks when
reporting a failure in 64-bit code, or 32-bit chunks when reporting a
failure in 32-bit code. We added code to handle these two cases
separately in commit e147ae6d7f ("arm64: modify the dump mem for
64 bit addresses").
However, in all cases where dump_mem is called, the failing context is a
kernel rather than a user context. Additionally dump_mem is assumed to
only be used for kernel contexts, as internally it switches to
KERNEL_DS, and its callers pass kernel stack bounds.
This patch removes the redundant 32-bit chunk logic and associated
compat parameter, largely reverting the aforementioned commit. For the
call in __die(), the check of in_interrupt() is removed also, as __die()
is only called in response to faults from the kernel's exception level,
and thus the !user_mode(regs) check is sufficient. Were this not the
case, the use of task_stack_page(tsk) to generate the stack bounds
would be erroneous.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The upstream commit 1771c6e1a5
("x86/kasan: instrument user memory access API") added KASAN
instrumentation to the x86 user memory access API; add the same
instrumentation to arm64.
Define __copy_to/from_user in C in order to add kasan_check_read/write
calls, and rename the assembly implementations to
__arch_copy_to/from_user.
Tested with the test_kasan module.
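The C wrappers are thin: each checks the kernel-side buffer with KASAN,
then calls the renamed assembly routine. Roughly:

static inline unsigned long __copy_from_user(void *to,
		const void __user *from, unsigned long n)
{
	kasan_check_write(to, n);	/* writing into the kernel buffer */
	return __arch_copy_from_user(to, from, n);
}

static inline unsigned long __copy_to_user(void __user *to,
		const void *from, unsigned long n)
{
	kasan_check_read(from, n);	/* reading from the kernel buffer */
	return __arch_copy_to_user(to, from, n);
}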
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Current versions of gdb do not interoperate cleanly with kgdb on arm64
systems because gdb and kgdb do not use the same register description.
This patch modifies kgdb to work with recent releases of gdb (>= 7.8.1).
Compatibility with gdb (after the patch is applied) is as follows:
  gdb-7.6 and earlier    Ok
  gdb-7.7 series         Works if user provides custom target description
  gdb-7.8(.0)            Works if user provides custom target description
  gdb-7.8.1 and later    Ok
When commit 44679a4f14 ("arm64: KGDB: Add step debugging support") was
introduced it was paired with a gdb patch that made an incompatible
change to the gdbserver protocol. This patch was eventually merged into
the gdb sources:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=a4d9ba85ec5597a6a556afe26b712e878374b9dd
The change to the protocol was mostly made to simplify big-endian support
inside the kernel gdb stub. Unfortunately the gdb project released
gdb-7.7.x and gdb-7.8.0 before the protocol incompatibility was identified
and reversed:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=bdc144174bcb11e808b4e73089b850cf9620a7ee
This leaves us in a position where kgdb still uses the no-longer-used
protocol; gdb-7.8.1, which restored the original behaviour, was
released on 2014-10-29.
I don't believe it is possible to detect/correct the protocol
incompatibility, which means the kernel must take a view about which
version of the gdb remote protocol is "correct". This patch takes the
view that the original/current version of the protocol is correct
and that version found in gdb-7.7.x and gdb-7.8.0 is anomalous.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Close the hole where ptrace can change a syscall out from under seccomp.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Currently, if arch code wants to supply seccomp_data directly to
seccomp (which is generally much faster than having seccomp do it
using the syscall_get_xyz() API), it has to use the two-phase
seccomp hooks. Add it to the easy hooks, too.
Cc: linux-arch@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
If the kernel is set to show unhandled signals, and a user task does not
handle a SIGILL as a result of an instruction abort, we will attempt to
log the offending instruction with dump_instr before killing the task.
We use dump_instr to log the encoding of the offending userspace
instruction. However, dump_instr is also used to dump instructions from
kernel space, and internally always switches to KERNEL_DS before dumping
the instruction with get_user. When both PAN and UAO are in use, reading
a user instruction via get_user while in KERNEL_DS will result in a
permission fault, which leads to an Oops.
As we have regs corresponding to the context of the original instruction
abort, we can inspect this and only flip to KERNEL_DS if the original
abort was taken from the kernel, avoiding this issue. At the same time,
remove the redundant (and incorrect) comments regarding the order
dump_mem and dump_instr are called in.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org> #4.6+
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Fixes: 57f4959bad ("arm64: kernel: Add support for User Access Override")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Implement pci_acpi_scan_root() and other arch-specific calls so ARM64 can
use ACPI to setup and enumerate PCI buses.
Use memory-mapped configuration space information from either the ACPI
_CBA method or the MCFG table and the ECAM library and generic ECAM config
accessor ops.
Implement acpi_pci_bus_find_domain_nr() to retrieve the domain number from
the acpi_pci_root structure.
Implement pcibios_add_bus() and pcibios_remove_bus() to call
acpi_pci_add_bus() and acpi_pci_remove_bus() for ACPI slot management and
other configuration.
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Jayachandran C <jchandra@broadcom.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
On ACPI systems, the PCI_Config OperationRegion allows AML to access PCI
configuration space. The ACPI CA AML interpreter performs config
space accesses with acpi_os_read_pci_configuration() and
acpi_os_write_pci_configuration(), which are OS-dependent functions
supplied by acpi/osl.c.
Implement the arch-specific raw_pci_read() and raw_pci_write() interfaces
used by acpi/osl.c for PCI_Config accesses.
N.B. PCI_Config accesses are not supported before PCI bus enumeration.
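The arch interface is likely little more than a domain/bus lookup plus
the bus's config accessors; a sketch:

int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
		 int reg, int len, u32 *val)
{
	struct pci_bus *b = pci_find_bus(domain, bus);

	if (!b)
		return PCIBIOS_DEVICE_NOT_FOUND;
	/* Delegate to the bus's config space accessor (ECAM ops here). */
	return b->ops->read(b, devfn, reg, len, val);
}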
[bhelgaas: changelog]
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Jayachandran C <jchandra@broadcom.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
To enable PCI legacy IRQs on platforms booting with ACPI, arch code should
include ACPI-specific callbacks that parse and set-up the device IRQ
number, equivalent to the DT boot path. Owing to the current ACPI core scan
handlers implementation, ACPI PCI legacy IRQs bindings cannot be parsed at
device add time, since that would trigger ACPI scan handlers ordering
issues depending on how the ACPI tables are defined.
To solve this problem and consolidate FW PCI legacy IRQs parsing in one
single pcibios callback (pending final removal), this patch moves DT PCI
IRQ parsing to the pcibios_alloc_irq() callback (called by PCI core code at
driver probe time) and adds ACPI PCI legacy IRQs parsing to the same
callback too, so that FW PCI legacy IRQs parsing is confined in one single
arch callback that can be easily removed when code parsing PCI legacy IRQs
is consolidated and moved to core PCI code.
Suggested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Extend pci_bus_find_domain_nr() so it can find the domain from either:
- ACPI, via the new acpi_pci_bus_find_domain_nr() interface, or
- DT, via of_pci_bus_find_domain_nr()
Note that this is only used for CONFIG_PCI_DOMAINS_GENERIC=y, so it does
not affect x86 or ia64.
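Sketched in C, the extended lookup is a selection between the two
firmware interfaces:

/* Sketch: pick the domain number from ACPI or DT, as described above. */
int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
{
	return acpi_disabled ? of_pci_bus_find_domain_nr(parent) :
			       acpi_pci_bus_find_domain_nr(bus);
}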
[bhelgaas: changelog]
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
If we take an exception we don't expect (e.g. SError), we report this in
the bad_mode handler with pr_crit. Depending on the configured log
level, we may or may not log additional information in functions called
subsequently. Notably, the messages in dump_stack (including the CPU
number) are printed with KERN_DEFAULT and may not appear.
Some exceptions have an IMPLEMENTATION DEFINED ESR_ELx.ISS encoding, and
knowing the CPU number is crucial to correctly decode them. To ensure
that this is always possible, we should log the CPU number along with
the ESR_ELx value, so we are not reliant on subsequent logs or
additional printk configuration options.
This patch logs the CPU number in bad_mode such that it is possible for
a developer to decode these exceptions, provided access to sufficient
documentation.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Al Grant <Al.Grant@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch brings the PER_LINUX32 /proc/cpuinfo format more in line with
the 32-bit ARM one by providing an additional line:
model name : ARMv8 Processor rev X (v8l)
Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce a new file to hold ACPI based NUMA information parsing from
SRAT and SLIT.
SRAT includes the CPU ACPI ID to Proximity Domain mappings and the
memory range to Proximity Domain mappings. SLIT has the information on
inter-node distances (a relative number representing access latency).
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
[rrichter@cavium.com: Reworked for numa v10 series]
Signed-off-by: Robert Richter <rrichter@cavium.com>
[david.daney@cavium.com: reordered and combined with other patches in
Hanjun Guo's original set, removed get_mpidr_in_madt() and used
acpi_map_madt_entry() instead.]
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Dennis Chen <dennis.chen@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Pull perf updates from Ingo Molnar:
"Mostly tooling and PMU driver fixes, but also a number of late updates
such as the reworking of the call-chain size limiting logic to make
call-graph recording more robust, plus tooling side changes for the
new 'backwards ring-buffer' extension to the perf ring-buffer"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
perf record: Read from backward ring buffer
perf record: Rename variable to make code clear
perf record: Prevent reading invalid data in record__mmap_read
perf evlist: Add API to pause/resume
perf trace: Use the ptr->name beautifier as default for "filename" args
perf trace: Use the fd->name beautifier as default for "fd" args
perf report: Add srcline_from/to branch sort keys
perf evsel: Record fd into perf_mmap
perf evsel: Add overwrite attribute and check write_backward
perf tools: Set buildid dir under symfs when --symfs is provided
perf trace: Only auto set call-graph to "dwarf" when syscalls are being traced
perf annotate: Sort list of recognised instructions
perf annotate: Fix identification of ARM blt and bls instructions
perf tools: Fix usage of max_stack sysctl
perf callchain: Stop validating callchains by the max_stack sysctl
perf trace: Fix exit_group() formatting
perf top: Use machine->kptr_restrict_warned
perf trace: Warn when trying to resolve kernel addresses with kptr_restrict=1
perf machine: Do not bail out if not managing to read ref reloc symbol
perf/x86/intel/p4: Trival indentation fix, remove space
...
Most architectures rely on mmap_sem for write in their
arch_setup_additional_pages. If the waiting task gets killed by the oom
killer, it would block oom_reaper from asynchronous address space reclaim
and reduce the chances of timely OOM resolution. Wait for the lock in
killable mode and return with EINTR if the task gets killed while
waiting.
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Andy Lutomirski <luto@amacapital.net> [x86 vdso]
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Define HAVE_EXIT_THREAD for archs which want to do something in
exit_thread. For others, let's define exit_thread as an empty inline.
This is a cleanup before we change the prototype of exit_thread to
accept a task parameter.
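The resulting arrangement is small enough to sketch; this mirrors the
pattern the commit describes, with the Kconfig symbol spelling assumed:

#ifdef CONFIG_HAVE_EXIT_THREAD
extern void exit_thread(void);
#else
/* Architectures with nothing to do get an empty inline. */
static inline void exit_thread(void)
{
}
#endif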
[akpm@linux-foundation.org: fix mips]
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'perf-core-for-mingo-20160516' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
- Honour the kernel.perf_event_max_stack knob more precisely by not counting
PERF_CONTEXT_{KERNEL,USER} when deciding when to stop adding entries to
the perf_sample->ip_callchain[] array (Arnaldo Carvalho de Melo)
- Fix indentation of 'stalled-backend-cycles' in 'perf stat' (Namhyung Kim)
- Update runtime using 'cpu-clock' event in 'perf stat' (Namhyung Kim)
- Use 'cpu-clock' for cpu targets in 'perf stat' (Namhyung Kim)
- Avoid fractional digits for integer scales in 'perf stat' (Andi Kleen)
- Store vdso buildid unconditionally, as it appears in callchains and
we're not checking those when creating the build-id table, so we
end up not being able to resolve VDSO symbols when doing analysis
on a different machine than the one where recording was done, possibly
of a different arch even (arm -> x86_64) (He Kuang)
Infrastructure changes:
- Generalize max_stack sysctl handler, will be used for configuring
multiple kernel knobs related to callchains (Arnaldo Carvalho de Melo)
Cleanups:
- Introduce DSO__NAME_KALLSYMS and DSO__NAME_KCORE, to stop using
open coded strings (Masami Hiramatsu)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We will use it to count how many addresses are in the entry->ip[] array,
excluding PERF_CONTEXT_{KERNEL,USER,etc} entries, so that we can really
return the number of entries specified by the user via the relevant
sysctl, kernel.perf_event_max_contexts_per_stack, or via the per event
perf_event_attr.sample_max_stack knob.
This way we keep the perf_sample->ip_callchain->nr meaning, that is the
number of entries, be it real addresses or PERF_CONTEXT_ entries, while
honouring the max_stack knobs, i.e. the end result will be max_stack
entries if we have at least that many entries in a given stack trace.
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-s8teto51tdqvlfhefndtat9r@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This makes perf_callchain_{user,kernel}() receive the max stack
as context for the perf_callchain_entry, instead of accessing
the global sysctl_perf_event_max_stack.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 perf updates from Will Deacon:
"The main addition here is support for Broadcom's Vulcan core using the
architected ID registers for discovering supported events.
- Support for the PMU in Broadcom's Vulcan CPU
- Dynamic event detection using the PMCEIDn_EL0 ID registers"
* tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: perf: don't expose CHAIN event in sysfs
arm64/perf: Add Broadcom Vulcan PMU support
arm64/perf: Filter common events based on PMCEIDn_EL0
arm64/perf: Access pmu register using <read/write>_sys_reg
arm64/perf: Define complete ARMv8 recommended implementation defined events
arm64/perf: Changed events naming as per the ARM ARM
arm64: dts: Add Broadcom Vulcan PMU in dts
Documentation: arm64: pmu: Add Broadcom Vulcan PMU binding
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
- virt_to_page/page_address optimisations
- support for NUMA systems described using device-tree
- support for hibernate/suspend-to-disk
- proper support for maxcpus= command line parameter
- detection and graceful handling of AArch64-only CPUs
- miscellaneous cleanups and non-critical fixes
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
arm64: do not enforce strict 16 byte alignment to stack pointer
arm64: kernel: Fix incorrect brk randomization
arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str
arm64: secondary_start_kernel: Remove unnecessary barrier
arm64: Ensure pmd_present() returns false after pmd_mknotpresent()
arm64: Replace hard-coded values in the pmd/pud_bad() macros
arm64: Implement pmdp_set_access_flags() for hardware AF/DBM
arm64: Fix typo in the pmdp_huge_get_and_clear() definition
arm64: mm: remove unnecessary EXPORT_SYMBOL_GPL
arm64: always use STRICT_MM_TYPECHECKS
arm64: kvm: Fix kvm teardown for systems using the extended idmap
arm64: kaslr: increase randomization granularity
arm64: kconfig: drop CONFIG_RTC_LIB dependency
arm64: make ARCH_SUPPORTS_DEBUG_PAGEALLOC depend on !HIBERNATION
arm64: hibernate: Refuse to hibernate if the boot cpu is offline
arm64: kernel: Add support for hibernate/suspend-to-disk
PM / Hibernate: Call flush_icache_range() on pages restored in-place
arm64: Add new asm macro copy_page
arm64: Promote KERNEL_START/KERNEL_END definitions to a header file
arm64: kernel: Include _AC definition in page.h
...
Pull perf updates from Ingo Molnar:
"Bigger kernel side changes:
- Add backwards writing capability to the perf ring-buffer code,
which is preparation for future advanced features like robust
'overwrite support' and snapshot mode. (Wang Nan)
- Add pause and resume ioctls for the perf ringbuffer (Wang Nan)
- x86 Intel cstate code cleanups and reorgnization (Thomas Gleixner)
- x86 Intel uncore and CPU PMU driver updates (Kan Liang, Peter
Zijlstra)
- x86 AUX (Intel PT) related enhancements and updates (Alexander
Shishkin)
- x86 MSR PMU driver enhancements and updates (Huang Rui)
- ... and lots of other changes spread out over 40+ commits.
Biggest tooling side changes:
- 'perf trace' features and enhancements. (Arnaldo Carvalho de Melo)
- BPF tooling updates (Wang Nan)
- 'perf sched' updates (Jiri Olsa)
- 'perf probe' updates (Masami Hiramatsu)
- ... plus 200+ other enhancements, fixes and cleanups to tools/
The merge commits, the shortlog and the changelogs contain a lot more
details"
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (249 commits)
perf/core: Disable the event on a truncated AUX record
perf/x86/intel/pt: Generate PMI in the STOP region as well
perf buildid-cache: Use lsdir() for looking up buildid caches
perf symbols: Use lsdir() for the search in kcore cache directory
perf tools: Use SBUILD_ID_SIZE where applicable
perf tools: Fix lsdir to set errno correctly
perf trace: Move seccomp args beautifiers to tools/perf/trace/beauty/
perf trace: Move flock op beautifier to tools/perf/trace/beauty/
perf build: Add build-test for debug-frame on arm/arm64
perf build: Add build-test for libunwind cross-platforms support
perf script: Fix export of callchains with recursion in db-export
perf script: Fix callchain addresses in db-export
perf script: Fix symbol insertion behavior in db-export
perf symbols: Add dso__insert_symbol function
perf scripting python: Use Py_FatalError instead of die()
perf tools: Remove xrealloc and ALLOC_GROW
perf help: Do not use ALLOC_GROW in add_cmd_list
perf pmu: Make pmu_formats_string to check return value of strbuf
perf header: Make topology checkers to check return value of strbuf
perf tools: Make alias handler to check return value of strbuf
...
copy_thread should not be enforcing 16 byte alignment and returning
-EINVAL. Other architectures trap misaligned stack accesses with SIGBUS,
so arm64 should follow this convention; remove the strict enforcement
check.
For example, currently clone(2) fails with -EINVAL when passing
a misaligned stack and this gives little clue as to what is wrong. Instead,
it is arguable that a SIGBUS on the first access to a misaligned stack
allows one to figure out that it is a misaligned stack issue rather
than trying to figure out why an unconventional (and undocumented)
-EINVAL is being returned.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This fixes two issues with the arm64 brk randomization. First,
STACK_RND_MASK was being used incorrectly. The original code was:
unsigned long range_end = base + (STACK_RND_MASK << PAGE_SHIFT) + 1;
STACK_RND_MASK is 0x7ff (32-bit) or 0x3ffff (64-bit), with 4K pages where
PAGE_SHIFT is 12:
#define STACK_RND_MASK	(test_thread_flag(TIF_32BIT) ? \
				0x7ff >> (PAGE_SHIFT - 12) : \
				0x3ffff >> (PAGE_SHIFT - 12))
This means the resulting offset from base would be 0x7ff0001 or 0x3ffff0001,
which is wrong since it creates an unaligned end address. It was likely
intended to be:
unsigned long range_end = base + ((STACK_RND_MASK + 1) << PAGE_SHIFT)
Which would result in offsets of 0x800000 (32-bit) and 0x40000000 (64-bit).
However, even this corrected 32-bit compat offset (0x00800000) is much
smaller than native ARM's brk randomization value (0x02000000):
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
	unsigned long range_end = mm->brk + 0x02000000;

	return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
}
So, instead of basing arm64's brk randomization on mistaken STACK_RND_MASK
calculations, just use specific corrected values for compat (0x2000000)
and native arm64 (0x40000000).
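With those values, the corrected function looks roughly as below; this
is a sketch based on the description above, with is_compat_task() as the
selector mentioned in the bracketed note:

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
	unsigned long range_end = mm->brk;

	if (is_compat_task())
		range_end += 0x02000000;	/* match native ARM */
	else
		range_end += 0x40000000;

	return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
}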
Reviewed-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
[will: use is_compat_task() as suggested by tixy]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The loop that browses the array compat_hwcap_str will stop when a NULL
is encountered; however, the NULL is missing at the end of the array. This
will lead to an overrun until a NULL is found somewhere in the following
memory.
In reality, this works out because the compat_hwcap2_str array tends to
follow immediately in memory, and that *is* terminated correctly.
Furthermore, the unsigned int compat_elf_hwcap is checked before
printing each capability, so we end up doing the right thing because
the size of the two arrays is less than 32. Still, this is an obvious
mistake and should be fixed.
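The fix is simply to terminate the array that the printing loop walks;
schematically, with entries elided:

static const char *const compat_hwcap_str[] = {
	"swp", "half", "thumb",
	/* ... remaining entries elided ... */
	NULL		/* the missing terminator */
};

/* The printing loop relies on that terminator: */
for (i = 0; compat_hwcap_str[i]; i++)
	if (compat_elf_hwcap & (1 << i))
		seq_printf(m, " %s", compat_hwcap_str[i]);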
Note for backporting: commit 12d11817ea ("arm64: Move
/proc/cpuinfo handling code") moved this code in v4.4. Prior to that
commit, the same change should be made in arch/arm64/kernel/setup.c.
Fixes: 44b82b7700 "arm64: Fix up /proc/cpuinfo"
Cc: <stable@vger.kernel.org> # v3.19+ (but see note above prior to v4.4)
Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Remove the unnecessary smp_wmb(), which was added to make sure
that the update_cpu_boot_status() completes before we mark the
CPU online. But update_cpu_boot_status() already has a dsb() (required
for the failing CPUs) to ensure the correct behavior.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Dennis Chen <dennis.chen@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Hibernation represents a system state save/restore through
a system reboot; this implies that the logical cpus carrying
out hibernation/thawing must be the same, so that the context
saved in the snapshot image on hibernation is consistent with
the state of the system on resume. If resume from hibernation
is driven through kernel command line parameter, the cpu responsible
for thawing the system will be whatever CPU firmware boots the system
on upon cold-boot (ie logical cpu 0); this means that in order to
keep system context consistent between the hibernate snapshot image
and system state on kernel resume from hibernate, logical cpu 0 must
be online on hibernation and must be the logical cpu that creates
the snapshot image.
This patch adds a PM notifier that enforces logical cpu 0 is online
when the hibernation is started (and prevents hibernation if it is
not), which is sufficient to guarantee it will be the one creating
the snapshot image therefore providing the resume cpu a consistent
snapshot of the system to resume to.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add support for hibernate/suspend-to-disk.
Suspend borrows code from cpu_suspend() to write cpu state onto the stack,
before calling swsusp_save() to save the memory image.
Restore creates a set of temporary page tables, covering only the
linear map, copies the restore code to a 'safe' page, then uses the copy to
restore the memory image. The copied code executes in the lower half of the
address space, and once complete, restores the original kernel's page
tables. It then calls into cpu_resume(), and follows the normal
cpu_suspend() path back into the suspend code.
To restore a kernel using KASLR, the address of the page tables and
cpu_resume() are stored in the hibernate arch-header, and the el2
vectors are pivoted via the 'safe' page in low memory.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Kevin Hilman <khilman@baylibre.com> # Tested on Juno R2
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
KERNEL_START and KERNEL_END are useful outside head.S, move them to a
header file.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
By enabling the MMU early in cpu_resume(), the sleep_save_sp and stack can
be accessed by VA, which avoids the need to convert addresses and clean to
PoC on the suspend path.
MMU setup is shared with the boot path, meaning the swapper_pg_dir is
restored directly: ttbr1_el1 is no longer saved/restored.
struct sleep_save_sp is removed, replacing it with a single array of
pointers.
cpu_do_{suspend,resume} could be further reduced to not restore: cpacr_el1,
mdscr_el1, tcr_el1, vbar_el1 and sctlr_el1, all of which are set by
__cpu_setup(). However these values all contain res0 bits that may be used
to enable future features.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Hibernate could make use of the cpu_suspend() code to save/restore cpu
state, however it needs to be able to return '0' from the 'finisher'.
Rework cpu_suspend() so that the finisher is called from C code,
independently from the save/restore of cpu state. Space to save the context
in is allocated in the caller's stack frame, and passed into
__cpu_suspend_enter().
Hibernate's use of this API will look like a copy of the cpu_suspend()
function.
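A hedged sketch of the reworked shape (simplified: debug-register
handling is omitted; the struct name follows the commit text):

int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
{
	int ret = 0;
	struct sleep_stack_data state;	/* context lives on this stack */

	if (__cpu_suspend_enter(&state)) {
		/* Context saved: call the finisher from C. For
		 * cpu_suspend() a returning finisher is still an error,
		 * but a caller like hibernate can copy this function
		 * and allow a 0 return instead. */
		ret = fn(arg);
		if (!ret)
			ret = -EOPNOTSUPP;
	} else {
		/* We got here via cpu_resume() from the saved context. */
		__cpu_suspend_exit();
	}

	return ret;
}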
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A later patch implements kvm_arch_hardware_disable(), to remove kvm
from el2, and re-instate the hyp-stub.
This can happen while guests are running, particularly when kvm_reboot()
calls kvm_arch_hardware_disable() on each cpu. This can interrupt a guest,
remove kvm, then allow the guest to be scheduled again. This causes
kvm_call_hyp() to be run against the hyp-stub.
Change the hyp-stub to return a new exception type when this happens,
and add code to kvm's handle_exit() to tell userspace we failed to
enter the guest.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The existing arm64 hcall implementations are limited in that they only
allow for two distinct hcalls; with the x0 register either zero or not
zero. Also, the API of the hyp-stub exception vector routines and the
KVM exception vector routines differ; hyp-stub uses a non-zero value in
x0 to implement __hyp_set_vectors, whereas KVM uses it to implement
kvm_call_hyp.
To allow for additional hcalls to be defined and to make the arm64 hcall
API more consistent across exception vector routines, change the hcall
implementations to reserve all x0 values below 0xfff for hcalls such
as {s,g}et_vectors().
Define two new preprocessor macros HVC_GET_VECTORS, and HVC_SET_VECTORS
to be used as hcall type specifiers and convert the existing
__hyp_get_vectors() and __hyp_set_vectors() routines to use these new
macros when executing an HVC call. Also, change the corresponding
hyp-stub and KVM el1_sync exception vector routines to use these new
macros.
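A hedged sketch of the resulting convention (the numeric values and the
C wrapper are illustrative; the real routines are in assembly):

/* x0 values below 0xfff select an hcall; anything larger is treated
 * as a function pointer by KVM's kvm_call_hyp(). */
#define HVC_GET_VECTORS	0
#define HVC_SET_VECTORS	1

static inline void __hyp_set_vectors(unsigned long vectors)
{
	register unsigned long x0 asm("x0") = HVC_SET_VECTORS;
	register unsigned long x1 asm("x1") = vectors;

	/* x0 carries the hcall type, x1 the argument. */
	asm volatile("hvc #0" : "+r" (x0) : "r" (x1));
}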
Signed-off-by: Geoff Levand <geoff@infradead.org>
[Merged two hcall patches, moved immediate value from esr to x0, use lr
as a scratch register, changed limit to 0xfff]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Today the 'hvc' calling KVM or the hyp-stub is expected to preserve all
registers. KVM saves/restores the registers it needs on the EL2 stack using
do_el2_call(). The hyp-stub has no stack; later patches need to be
able to clobber the link register.
Move the link register save/restore to the call sites.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Unlike on 32-bit ARM, where we need to pass the stub's version of struct
screen_info to the kernel proper via a configuration table, on 64-bit ARM
it simply involves making the core kernel's copy of struct screen_info
visible to the stub by exposing an __efistub_ alias for it.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Herrmann <dh.herrmann@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Jones <pjones@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-21-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Recent UEFI versions expose permission attributes for runtime services
memory regions, either in the UEFI memory map or in the separate memory
attributes table. This allows the kernel to map these regions with
stricter permissions, rather than the RWX permissions that are used by
default. So wire this up in our mapping routine.
Note that in the absence of permission attributes, we still only map
regions of type EFI_RUNTIME_SERVICE_CODE with the executable bit set.
Also, we base the mapping attributes of EFI_MEMORY_MAPPED_IO on the
type directly rather than on the absence of the EFI_MEMORY_WB attribute.
This is more correct, but is also required for compatibility with the
upcoming support for the Memory Attributes Table, which only carries
permission attributes, not memory type attributes.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Jones <pjones@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1461614832-17633-12-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The default remains 127, which is good for most cases, and not even hit
most of the time, but then for some cases, as reported by Brendan, 1024+
deep frames are appearing on the radar for things like groovy, ruby.
And in some workloads putting a _lower_ cap on this may make sense. One
that is per event still needs to be put in place, though.
The new file is:
# cat /proc/sys/kernel/perf_event_max_stack
127
Changing it:
# echo 256 > /proc/sys/kernel/perf_event_max_stack
# cat /proc/sys/kernel/perf_event_max_stack
256
But as soon as there is some event using callchains we get:
# echo 512 > /proc/sys/kernel/perf_event_max_stack
-bash: echo: write error: Device or resource busy
#
Because we only allocate the callchain percpu data structures when there
is a user, which allows for changing the max easily, it's just a matter
of having no callchain users at that point.
Reported-and-Tested-by: Brendan Gregg <brendan.d.gregg@gmail.com>
Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/20160426002928.GB16708@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
If both ACPI and DT platform descriptions are available, and the
kernel was configured at build time to support both flavours, the
default policy is to prefer DT over ACPI, and preferring ACPI over
DT while still allowing DT as a fallback is not possible.
Since some enterprise features (such as RAS) depend on ACPI, it may
be desirable for, e.g., distro installers to prefer ACPI boot but
fall back to DT rather than failing completely if no ACPI tables are
available.
So introduce the 'acpi=on' kernel command line parameter for arm64,
which signifies that ACPI should be used if available, and DT should
only be used as a fallback.
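A hedged sketch of the parameter wiring (the variable names are
assumptions based on the usual early_param pattern):

/* "off" keeps the default DT-first policy, "on" prefers ACPI with DT
 * as a fallback, "force" disables the DT fallback entirely. */
static bool param_acpi_off __initdata;
static bool param_acpi_on __initdata;
static bool param_acpi_force __initdata;

static int __init parse_acpi(char *arg)
{
	if (!arg)
		return -EINVAL;

	if (strcmp(arg, "off") == 0)
		param_acpi_off = true;
	else if (strcmp(arg, "on") == 0)
		param_acpi_on = true;
	else if (strcmp(arg, "force") == 0)
		param_acpi_force = true;
	else
		return -EINVAL;

	return 0;
}
early_param("acpi", parse_acpi);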
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When booting a relocatable kernel image, there is no practical reason
to refuse an image whose load address is not exactly TEXT_OFFSET bytes
above a 2 MB aligned base address, as long as the physical and virtual
misalignments with respect to the swapper block size are equal, and are
both aligned to THREAD_SIZE.
Since the virtual misalignment is under our control when we first enter
the kernel proper, we can simply choose its value to be equal to the
physical misalignment.
So treat the misalignment of the physical load address as the initial
KASLR offset, and fix up the remaining code to deal with that.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
For historical reasons, the kernel Image must be loaded into physical
memory at a 512 KB offset above a 2 MB aligned base address. The region
between the base address and the start of the kernel Image has no
significance to the kernel itself, but it is currently mapped explicitly
into the early kernel VMA range for all translation granules.
In some cases (i.e., 4 KB granule), this is unavoidable, due to the 2 MB
granularity of the early kernel mappings. However, in other cases, e.g.,
when running with larger page sizes, or in the future, with more granular
KASLR, there is no reason to map it explicitly like we do currently.
So update the logic so that the region is mapped only if that happens as
a side effect of rounding the start address of the kernel to swapper block
size, and leave it unmapped otherwise.
Since the symbol kernel_img_size now simply resolves to the memory
footprint of the kernel Image, we can drop its definition from image.h
and opencode its calculation.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When building a relocatable kernel, we currently rely on the fact that
early 64-bit literal loads need to be deferred to after the relocation
has been performed only if they involve symbol references, and not if
they involve assemble time constants. While this is not an unreasonable
assumption to make, it is better to switch to movk/movz sequences, since
these are guaranteed to be resolved at link time, simply because there are
no dynamic relocation types to describe them.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Refactor the relocation processing so that the code executes from the
ID map while accessing the relocation tables via the virtual mapping.
This way, we can use literals containing virtual addresses as before,
instead of having to use convoluted absolute expressions.
For symmetry with the secondary code path, the relocation code and the
subsequent jump to the virtual entry point are implemented in a function
called __primary_switch(), and __mmap_switched() is renamed to
__primary_switched(). Also, the call sequence in stext() is aligned with
the one in secondary_startup(), by replacing the awkward 'adr_l lr' and
'b cpu_setup' sequence with a simple branch and link.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We can simply use a relocated 64-bit literal to store the address of
__secondary_switched(), and the relocation code will ensure that it
holds the correct value at secondary entry time, as long as we make sure
that the literal is not dereferenced until after we have enabled the MMU.
So jump via a small __secondary_switch() function covered by the ID map
that performs the literal load and branch-to-register.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This unexports some symbols from head.S that are only used locally.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
maxcpus=n sets the number of CPUs activated at boot time to a max of n,
while allowing the remaining CPUs to be brought up later if the user
decides to do so. However, on arm64, due to various reasons, we disallowed
hotplugging CPUs beyond n by marking them not present. Now that
we have checks in place to make sure the hotplugged CPUs have compatible
features with the system and require no new errata, relax the restriction.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
CPU errata workarounds are detected and applied to the
kernel code at boot time, and the data is then freed up.
If a newly hotplugged CPU requires a workaround which
was not applied at boot time, there is nothing we can
do but simply fail the boot.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that the capabilities are only available once all the CPUs
have booted, we're unable to check for a particular feature
in any subsystem that gets initialized before then.
In order to support this, introduce a local_cpu_has_cap() function
that tests for the presence of a given capability independently
of the whole framework.
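A hedged sketch of such a helper (the capability table and scope names
are assumptions; the preemptible() check matches the note below):

bool this_cpu_has_cap(unsigned int cap)
{
	const struct arm64_cpu_capabilities *caps;

	/* The answer is only meaningful if we stay on this CPU. */
	if (WARN_ON(preemptible()))
		return false;

	for (caps = arm64_features; caps->matches; caps++)
		if (caps->capability == cap &&
		    caps->matches(caps, SCOPE_LOCAL_CPU))
			return true;

	return false;
}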
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[ Added preemptible() check ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: remove duplicate initialisation of caps in this_cpu_has_cap]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add a scope parameter to arm64_cpu_capabilities::matches(), so that
it can be reused for checking a capability on a given CPU vs.
system-wide. The system uses the default scope associated with the
capability for initialising the CPU_HWCAPs and ELF_HWCAPs.
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The CHAIN event allows two 32-bit counters to be treated as a single
64-bit counter, under certain allocation restrictions on the PMU.
Whilst userspace could theoretically create CHAIN events using the raw
event syntax, we don't really want to advertise this in sysfs, since
it's useless in isolation. This patch removes the event from our /sys
entries.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Broadcom Vulcan uses ARMv8 PMUv3 and supports most of
the ARMv8 recommended implementation defined events.
Added Vulcan events mapping for perf and perf_cache map.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The complete common architectural and micro-architectural
event number structure is filtered based on PMCEIDn_EL0 and
exposed to /sys using the is_visible function pointer in the events
attribute_group.
To filter the events in the is_visible function, a PMCEID-based bitmap
is stored in the arm_pmu structure, and the id field from
perf_pmu_events_attr is checked against the bitmap.
The function which derives the event bitmap from PMCEIDn_EL0 is
executed on the CPUs that have the PMU being initialized, to support
heterogeneous PMUs.
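A hedged sketch of the is_visible hook (field names follow the commit
text and may differ from the final code):

static umode_t
armv8pmu_event_attr_is_visible(struct kobject *kobj,
			       struct attribute *attr, int unused)
{
	struct device *dev = kobj_to_dev(kobj);
	struct pmu *pmu = dev_get_drvdata(dev);
	struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
	struct perf_pmu_events_attr *pmu_attr;

	pmu_attr = container_of(attr, struct perf_pmu_events_attr,
				attr.attr);

	/* Show the event only if PMCEID{0,1}_EL0 advertised it. */
	if (test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap))
		return attr->mode;

	return 0;
}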
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Changed PMU register accesses to make use of <read/write>_sys_reg
from sysreg.h instead of accessing the registers directly.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Defined all the ARMv8 recommended implementation defined events
from J3 - "ARM recommendations for IMPLEMENTATION DEFINED event numbers"
in ARM DDI 0487A.g.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Changed all the common event name definitions as per the document
ARM DDI 0487A.g.
SoC-specific event names follow the general naming style in
the file and don't reflect any document.
Changed ARMV8_A53_PERFCTR_PREFETCH_LINEFILL to
ARMV8_A53_PERFCTR_PREF_LINEFILL to match the other SoC-specific
event names which use the _PREF_ style.
Corrected typo l21 to l2i.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Improve the readability of dt_scan_depth1_nodes by removing the nested
conditionals.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When a Xen Dom0 boots with ACPI, it will supply only a /chosen and
a /hypervisor node in the DT. So check for this case to decide whether
ACPI needs to be enabled.
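A hedged sketch of the resulting check (the "xen,xen" compatible
string is an assumption):

static int __init dt_scan_depth1_nodes(unsigned long node,
				       const char *uname,
				       int depth, void *data)
{
	if (depth != 1)
		return 0;

	if (strcmp(uname, "chosen") == 0)
		return 0;

	/* A Xen Dom0 booting with ACPI also supplies /hypervisor. */
	if (strcmp(uname, "hypervisor") == 0 &&
	    of_flat_dt_is_compatible(node, "xen,xen"))
		return 0;

	/* Any other depth-1 node means real platform data is present. */
	return 1;
}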
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Julien Grall <julien.grall@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When using the Virtualisation Host Extensions, EL1 is not used in
the host and requires no separate configuration.
In addition, with VHE enabled, non-hyp-specific EL2 configuration
that does not need to be done early will be done anyway in
__cpu_setup via the _EL1 system register aliases. In particular,
the layout and definition of CPTR_EL2 are changed by enabling VHE
so that they resemble CPACR_EL1, so existing code to initialise
CPTR_EL2 becomes architecturally wrong in this case.
This patch simply skips the affected initialisation code when VHE
is enabled.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Make sure we have AArch32 state available for running COMPAT
binaries and also for switching the personality to PER_LINUX32.
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
[ Added cap bit, checks for HWCAP, personality ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add cpu_hwcap bit for keeping track of the support for 32bit EL0.
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On ARMv8 support for AArch32 state is optional. Hence it is
not safe to check the AArch32 ID registers for sanity, which
could lead to false warnings. This patch makes sure that the
AArch32 state is implemented before we keep track of the 32bit
ID registers.
As per ARM ARM (D.1.21.2 - Support for Exception Levels and
Execution States, DDI0487A.h), checking the support for AArch32
at EL0 is good enough to check the support for AArch32 (i.e.,
AArch32 at EL1 => AArch32 at EL0, but not vice versa).
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order to handle systems which do not support 32bit at EL0,
split the COMPAT HWCAP entries into a separate table which can
be processed, only if the support is available.
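A hedged sketch of the call site after the split (the helper names are
assumptions):

	setup_elf_hwcaps(arm64_elf_hwcaps);

	/* Only process the compat table when 32-bit EL0 is supported. */
	if (system_supports_32bit_el0())
		setup_elf_hwcaps(compat_elf_hwcaps);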
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We use hwcaps for referring to ELF hwcaps capability information.
However this can be confusing with 'cpu_hwcaps' which stands for the
CPU capability bit field. This patch cleans up the names to make it
a bit more readable.
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since of_get_cpu_node() increments the refcount, the node should be put.
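As an illustration, the fix pattern (hedged sketch; 'cpu' and the
surrounding code are placeholders):

	struct device_node *dn = of_get_cpu_node(cpu, NULL);

	if (dn) {
		/* ... use dn ... */
		of_node_put(dn);	/* drop the reference taken above */
	}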
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When CPUs are stopped during an abnormal operation like panic,
a line is printed and the stack trace is dumped for each CPU.
This information is only interesting for the aborting CPU,
and on systems with many CPUs it only makes it harder to
debug if, after the aborting CPU, the log is flooded with data
about all other CPUs too.
Therefore remove the stack dump and printk of other CPUs
and only print a single line that the other CPUs are going to be
stopped and, in case any CPUs remain online, list them.
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In head.S, the str_l macro, which takes a source register, a symbol name
and a temp register, is used to store a status value to the variable
__early_cpu_boot_status. Subsequently, the value of the temp register is
reused to invalidate any cachelines covering this variable.
However, since str_l resolves to
adrp \tmp, \sym
str \src, [\tmp, :lo12:\sym]
the temp register never actually holds the address of the variable but
only of the 4 KB window that covers it, and reusing it leads to the
wrong cacheline being invalidated. So instead, take the address
explicitly before doing the store, and reuse that value to perform
the cache invalidation.
Fixes: bb9052744f ("arm64: Handle early CPU boot failures")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Attempt to get the memory and CPU NUMA node via of_numa. If that
fails, fall back to the dummy NUMA node and map all memory and CPUs to
node 0.
Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order to extract NUMA information from the device tree, we need to
have the tree in its unflattened form.
Move the call to bootmem_init() in the tail of paging_init() into
setup_arch, and adjust header files so that its declaration is
visible.
Move the unflatten_device_tree() call between the calls to
paging_init() and bootmem_init(). Follow on patches add NUMA handling
to bootmem_init().
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
With a VHE-capable CPU, the kernel can run at EL2, and this is decided
at early boot. If some of the CPUs didn't start at EL2 or don't have
VHE, we could have CPUs running at different exception levels, all in
the same kernel! This patch adds an early check for the secondary CPUs
to detect such situations.
For each non-boot CPU add a sanity check to make sure we don't have
different run levels w.r.t the boot CPU. We save the information on
whether the boot CPU is running in hyp mode or not and ensure the
remaining CPUs match it.
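A hedged sketch (the names follow the commit text):

static bool boot_cpu_hyp_mode;

void __init save_boot_cpu_run_el(void)
{
	/* Record whether the boot CPU entered the kernel at EL2. */
	boot_cpu_hyp_mode = is_hyp_mode_available();
}

static bool is_hyp_mode_mismatched(void)
{
	/* Called on each secondary: compare against the boot CPU. */
	return is_hyp_mode_available() != boot_cpu_hyp_mode;
}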
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: made boot_cpu_hyp_mode static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since commit 1cf4f629d9 ("cpu/hotplug: Move online calls to
hotplugged cpu") it is ensured that callbacks of CPU_ONLINE and
CPU_DOWN_PREPARE are processed on the hotplugged CPU. Due to this SMP
function calls are no longer required.
Replace smp_call_function_single() with a direct call of
hw_breakpoint_reset(). To keep the calling convention, interrupts are
explicitly disabled around the call.
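A hedged sketch of the replacement notifier body (since the CPU_ONLINE
callback now runs on the hotplugged CPU itself):

static int hw_breakpoint_reset_notify(struct notifier_block *self,
				      unsigned long action, void *hcpu)
{
	if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE) {
		local_irq_disable();
		hw_breakpoint_reset(NULL);	/* runs on this CPU now */
		local_irq_enable();
	}
	return NOTIFY_OK;
}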
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since commit 1cf4f629d9 ("cpu/hotplug: Move online calls to
hotplugged cpu") it is ensured that callbacks of CPU_ONLINE and
CPU_DOWN_PREPARE are processed on the hotplugged CPU. Due to this SMP
function calls are no longer required.
Replace smp_call_function_single() with a direct call to
clear_os_lock(). The function writes the OSLAR register to clear OS
locking. This does not need to be called with interrupts disabled,
therefore the smp_call_function_single() calling convention is not
preserved.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The mapping of the kernel consists of four segments, each of which is mapped
with different permission attributes and/or lifetimes. To optimize the TLB
and translation table footprint, we define various opaque constants in the
linker script that resolve to different alignment values depending on the
page size and whether CONFIG_DEBUG_ALIGN_RODATA is set.
Considering that
- a 4 KB granule kernel benefits from a 64 KB segment alignment (due to
the fact that it allows the use of the contiguous bit),
- the minimum alignment of the .data segment is THREAD_SIZE already, not
PAGE_SIZE (i.e., we already have padding between _data and the start of
the .data payload in many cases),
- 2 MB is a suitable alignment value on all granule sizes, either for
mapping directly (level 2 on 4 KB), or via the contiguous bit (level 3 on
16 KB and 64 KB),
- anything beyond 2 MB exceeds the minimum alignment mandated by the boot
protocol, and can only be mapped efficiently if the physical alignment
happens to be the same,
we can simplify this by standardizing on 64 KB (or 2 MB) explicitly, i.e.,
regardless of granule size, all segments are aligned either to 64 KB, or to
2 MB if CONFIG_DEBUG_ALIGN_RODATA=y. This also means we can drop the Kconfig
dependency of CONFIG_DEBUG_ALIGN_RODATA on CONFIG_ARM64_4K_PAGES.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Keeping .head.text out of the .text mapping buys us very little: its actual
payload is only 4 KB, most of which is padding, but the page alignment may
add up to 2 MB (in case of CONFIG_DEBUG_ALIGN_RODATA=y) of additional
padding to the uncompressed kernel Image.
Also, on 4 KB granule kernels, the 4 KB misalignment of .text forces us to
map the adjacent 56 KB of code without the PTE_CONT attribute, and since
this region contains things like the vector table and the GIC interrupt
handling entry point, this region is likely to benefit from the reduced TLB
pressure that results from PTE_CONT mappings.
So remove the alignment between the .head.text and .text sections, and use
the [_text, _etext) rather than the [_stext, _etext) interval for mapping
the .text segment.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Apart from the arm64/linux and EFI header data structures, there is nothing
in the .head.text section that must reside at the beginning of the Image.
So let's move it to the .init section where it belongs.
Note that this involves some minor tweaking of the EFI header, primarily
because the address of 'stext' no longer coincides with the start of the
.text section. It also requires a couple of relocated symbol references
to be slightly rewritten or their definition moved to the linker script.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Before restricting virt_to_page() to the linear mapping, ensure that
the text patching code does not use it to resolve references into the
core kernel text, which is mapped in the vmalloc area.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The translation performed by virt_to_page() is only valid for linear
addresses, and kernel symbols are no longer in the linear mapping.
So perform the __pa() translation explicitly, which does the right
thing in either case, and only then translate to a struct page offset.
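A hedged sketch of the explicit translation (phys_to_page() here is the
arm64 helper; the wrapper name is illustrative):

static struct page *kernel_addr_to_page(const void *addr)
{
	/* __pa() does the right thing for both linear-map and
	 * kernel-image addresses; virt_to_page() does not. */
	return phys_to_page(__pa(addr));
}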
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This removes the relocate_initrd() implementation and invocation, which are
no longer needed now that the placement of the initrd is guaranteed to be
covered by the linear mapping.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, we check two pointers: cpu_ops and cpu_suspend on every idle
state entry. These pointer checks can be avoided:
If cpu_ops has not been registered, arm_cpuidle_init() will return
-EOPNOTSUPP, so arm_cpuidle_suspend() will never have a chance to
run. In other words, the cpu_ops check can be avoided.
Similarly, the cpu_suspend check can be avoided in this hot path by
moving it into arm_cpuidle_init().
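A hedged sketch of the resulting shape (cpu_ops and the callback names
follow the commit text):

int arm_cpuidle_init(unsigned int cpu)
{
	const struct cpu_operations *ops = cpu_ops[cpu];

	/* Do the one-time checks here, off the hot path. */
	if (!ops || !ops->cpu_suspend || !ops->cpu_init_idle)
		return -EOPNOTSUPP;

	return ops->cpu_init_idle(cpu);
}

int arm_cpuidle_suspend(int index)
{
	int cpu = smp_processor_id();

	/* No NULL checks: arm_cpuidle_init() already vetted them. */
	return cpu_ops[cpu]->cpu_suspend(index);
}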
I measured the time of 4096 runs from the arm_cpuidle_suspend entry
point to the cpu_psci_cpu_suspend entry point. The HW platform is a
Marvell BG4CT STB board.
1. only one shell, no other process, hot-unplug secondary cpus, execute
the following cmd
while true
do
sleep 0.2
done
before the patch: 1581220ns
after the patch: 1579630ns
reduced by 0.1%
2. only one shell, no other process, hot-unplug secondary cpus, execute
the following cmd
while true
do
md5sum /tmp/testfile
sleep 0.2
done
NOTE: the testfile size should be larger than L1+L2 cache size
before the patch: 1961960ns
after the patch: 1912500ns
reduced by 2.5%
So the more complex the system load, the bigger the improvement.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
There are some new cpu features which can be identified by id_aa64mmfr2;
this patch appends all of its fields.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Set a default event->overflow_handler in perf_event_alloc() so we don't
need to check event->overflow_handler in __perf_event_overflow().
Following commits can give a different default overflow_handler.
Initial idea comes from Peter:
http://lkml.kernel.org/r/20130708121557.GA17211@twins.programming.kicks-ass.net
Since the default value of event->overflow_handler is not NULL, existing
'if (!overflow_handler)' checks need to be changed.
is_default_overflow_handler() is introduced for this.
No extra performance overhead is introduced into the hot path because in the
original code we still need to read this handler from memory. A conditional
branch is avoided so actually we remove some instructions.
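A hedged sketch of the helper (in mainline this is a small macro or
inline around perf_event_output):

static inline bool is_default_overflow_handler(struct perf_event *event)
{
	/* The default handler installed by perf_event_alloc(). */
	return event->overflow_handler == perf_event_output;
}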
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <pi3orama@163.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1459147292-239310-3-git-send-email-wangnan0@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
To use the ARMv8 PMU related register defines from the KVM code, we move
the relevant definitions to asm/perf_event.h header file and rename them
with prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
KASAN needs to know whether the allocation happens in an IRQ handler.
This lets us strip everything below the IRQ entry point to reduce the
number of unique stack traces needed to be stored.
Move the definition of __irq_entry to <linux/interrupt.h> so that the
users don't need to pull in <linux/ftrace.h>. Also introduce the
__softirq_entry macro which is similar to __irq_entry, but puts the
corresponding functions to the .softirqentry.text section.
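A hedged sketch of the two annotations as section attributes:

#define __irq_entry \
	__attribute__((__section__(".irqentry.text")))
#define __softirq_entry \
	__attribute__((__section__(".softirqentry.text")))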
Signed-off-by: Alexander Potapenko <glider@google.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The KASLR code incorrectly expects the contents of x18 to be preserved
across a call into C code, and uses it to stash the contents of SCTLR_EL1
before enabling the MMU. If the MMU needs to be disabled again to create
the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is
likely to crash the system if x18 has been clobbered by kasan_early_init()
or kaslr_early_init(). So use x22 instead, which is not in use so far in
head.S.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm[64] perf updates from Will Deacon:
"I have another mixed bag of ARM-related perf patches here.
It's about 25% CPU and 75% interconnect, but with drivers/bus/
languishing without an obvious maintainer or tree, Olof and I agreed
to keep all of these PMU patches together. I suspect a whole load of
code from drivers/bus/arm-* can be moved under drivers/perf/, so
that's on the radar for the future.
Summary:
- Initial support for ARMv8.1 CPU PMUs
- Support for the CPU PMU in Cavium ThunderX
- CPU PMU support for systems running 32-bit Linux in secure mode
- Support for the system PMU in ARM CCI-550 (Cache Coherent Interconnect)"
* tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (26 commits)
drivers/perf: arm_pmu: avoid NULL dereference when not using devicetree
arm64: perf: Extend ARMV8_EVTYPE_MASK to include PMCR.LC
arm-cci: remove unused variable
arm-cci: don't return value from void function
arm-cci: make private functions static
arm-cci: CoreLink CCI-550 PMU driver
arm-cci500: Rearrange PMU driver for code sharing with CCI-550 PMU
arm-cci: CCI-500: Work around PMU counter writes
arm-cci: Provide hook for writing to PMU counters
arm-cci: Add helper to enable PMU without synchornising counters
arm-cci: Add routines to save/restore all counters
arm-cci: Get the status of a counter
arm-cci: write_counter: Remove redundant check
arm-cci: Delay PMU counter writes to pmu::pmu_enable
arm-cci: Refactor CCI PMU enable/disable methods
arm-cci: Group writes to counter
arm-cci: fix handling cpumask_any_but return value
arm-cci: simplify sysfs attr handling
drivers/perf: arm_pmu: implement CPU_PM notifier
arm64: dts: Add Cavium ThunderX specific PMU
...
Commit f80fb3a3d5 ("arm64: add support for kernel ASLR") missed a
DSB necessary to complete I-cache maintenance in the primary boot path,
and hence stale instructions may still be present in the I-cache and may
be executed until the I-cache maintenance naturally completes.
Since commit 8ec4198743 ("arm64: mm: ensure patched kernel text is
fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior to a CPU's MMU having been enabled, arbitrary lines may
have been fetched from the PoC into I-caches. We never patch text
expected to be executed with the MMU off. Thus, it is unnecessary to
perform broadcast I-cache maintenance in the primary boot path.
This patch reduces the scope of the I-cache maintenance to the local
CPU, and adds the missing DSB with similar scope, matching prior
maintenance in the primary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The implementation of macro inv_entry refers to its 'el' argument without
the required leading backslash, which results in an undefined symbol
'el' to be passed into the kernel_entry macro rather than the index of
the exception level as intended.
This undefined symbol strangely enough does not result in build failures,
although it is visible in vmlinux:
$ nm -n vmlinux |head
U el
0000000000000000 A _kernel_flags_le_hi32
0000000000000000 A _kernel_offset_le_hi32
0000000000000000 A _kernel_size_le_hi32
000000000000000a A _kernel_flags_le_lo32
.....
However, it does result in incorrect code being generated for invalid
exceptions taken from EL0, since the argument check in kernel_entry
assumes EL1 if its argument does not equal '0'.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull EFI updates from Ingo Molnar:
"The main changes are:
- Use separate EFI page tables when executing EFI firmware code.
This isolates the EFI context from the rest of the kernel, which
has security and general robustness advantages. (Matt Fleming)
- Run regular UEFI firmware with interrupts enabled. This is already
the status quo under other OSs. (Ard Biesheuvel)
- Various x86 EFI enhancements, such as the use of non-executable
attributes for EFI memory mappings. (Sai Praneeth Prakhya)
- Various arm64 UEFI enhancements. (Ard Biesheuvel)
- ... various fixes and cleanups.
The separate EFI page tables feature got delayed twice already,
because it's an intrusive change and we didn't feel confident about
it - third time's the charm we hope!"
* 'efi-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
x86/mm/pat: Fix boot crash when 1GB pages are not supported by the CPU
x86/efi: Only map kernel text for EFI mixed mode
x86/efi: Map EFI_MEMORY_{XP,RO} memory region bits to EFI page tables
x86/mm/pat: Don't implicitly allow _PAGE_RW in kernel_map_pages_in_pgd()
efi/arm*: Perform hardware compatibility check
efi/arm64: Check for h/w support before booting a >4 KB granular kernel
efi/arm: Check for LPAE support before booting a LPAE kernel
efi/arm-init: Use read-only early mappings
efi/efistub: Prevent __init annotations from being used
arm64/vmlinux.lds.S: Handle .init.rodata.xxx and .init.bss sections
efi/arm64: Drop __init annotation from handle_kernel_image()
x86/mm/pat: Use _PAGE_GLOBAL bit for EFI page table mappings
efi/runtime-wrappers: Run UEFI Runtime Services with interrupts enabled
efi: Reformat GUID tables to follow the format in UEFI spec
efi: Add Persistent Memory type name
efi: Add NV memory attribute
x86/efi: Show actual ending addresses in efi_print_memmap
x86/efi/bgrt: Don't ignore the BGRT if the 'valid' bit is 0
efivars: Use to_efivar_entry
efi: Runtime-wrapper: Get rid of the rtc_lock spinlock
...
Pull ARM updates from Russell King:
"Another mixture of changes this time around:
- Split XIP linker file from main linker file to make it more
maintainable, and various XIP fixes, and clean up a resulting
macro.
- Decompressor cleanups from Masahiro Yamada
- Avoid printing an error for a missing L2 cache
- Remove some duplicated symbols in System.map, and move
vectors/stubs back into kernel VMA
- Various low priority fixes from Arnd
- Updates to allow bus match functions to return negative errno
values, touching some drivers and the driver core. Greg has acked
these changes.
- Virtualisation platform updates from Jean-Philippe Brucker.
- Security enhancements from Kees Cook
- Rework some Kconfig dependencies and move PSCI idle management code
out of arch/arm into drivers/firmware/psci.c
- ARM DMA mapping updates, touching media, acked by Mauro.
- Fix places in ARM code which should be using virt_to_idmap() so
that Keystone2 can work.
- Fix Marvell Tauros2 to work again with non-DT boots.
- Provide a delay timer for ARM Orion platforms"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (45 commits)
ARM: 8546/1: dma-mapping: refactor to fix coherent+cma+gfp=0
ARM: 8547/1: dma-mapping: store buffer information
ARM: 8543/1: decompressor: rename suffix_y to compress-y
ARM: 8542/1: decompressor: merge piggy.*.S and simplify Makefile
ARM: 8541/1: decompressor: drop redundant FORCE in Makefile
ARM: 8540/1: decompressor: use clean-files instead of extra-y to clean files
ARM: 8539/1: decompressor: drop more unneeded assignments to "targets"
ARM: 8538/1: decompressor: drop unneeded assignments to "targets"
ARM: 8532/1: uncompress: mark putc as inline
ARM: 8531/1: turn init_new_context into an inline function
ARM: 8530/1: remove VIRT_TO_BUS
ARM: 8537/1: drop unused DEBUG_RODATA from XIP_KERNEL
ARM: 8536/1: mm: hide __start_rodata_section_aligned for non-debug builds
ARM: 8535/1: mm: DEBUG_RODATA makes no sense with XIP_KERNEL
ARM: 8534/1: virt: fix hyp-stub build for pre-ARMv7 CPUs
ARM: make the physical-relative calculation more obvious
ARM: 8512/1: proc-v7.S: Adjust stack address when XIP_KERNEL
ARM: 8411/1: Add default SPARSEMEM settings
ARM: 8503/1: clk_register_clkdev: remove format string interface
ARM: 8529/1: remove 'i' and 'zi' targets
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
"Here are the main arm64 updates for 4.6. There are some relatively
intrusive changes to support KASLR, the reworking of the kernel
virtual memory layout and initial page table creation.
Summary:
- Initial page table creation reworked to avoid breaking large block
mappings (huge pages) into smaller ones. The ARM architecture
requires break-before-make in such cases to avoid TLB conflicts but
that's not always possible on live page tables
- Kernel virtual memory layout: the kernel image is no longer linked
to the bottom of the linear mapping (PAGE_OFFSET) but at the bottom
of the vmalloc space, allowing the kernel to be loaded (nearly)
anywhere in physical RAM
- Kernel ASLR: position independent kernel Image and modules being
randomly mapped in the vmalloc space with the randomness
provided by UEFI (efi_get_random_bytes() patches merged via the
arm64 tree, acked by Matt Fleming)
- Implement relative exception tables for arm64, required by KASLR
(initial code for ARCH_HAS_RELATIVE_EXTABLE added to lib/extable.c
but the actual x86 conversion deferred to 4.7 because of the merge
dependencies)
- Support for the User Access Override feature of ARMv8.2: this
allows uaccess functions (get_user etc.) to be implemented using
LDTR/STTR instructions. Such instructions, when run by the kernel,
perform unprivileged accesses adding an extra level of protection.
The set_fs() macro is used to "upgrade" such instruction to
privileged accesses via the UAO bit
- Half-precision floating point support (part of ARMv8.2)
- Optimisations for CPUs with or without a hardware prefetcher (using
run-time code patching)
- copy_page performance improvement to deal with 128 bytes at a time
- Sanity checks on the CPU capabilities (via CPUID) to prevent
incompatible secondary CPUs from being brought up (e.g. weird
big.LITTLE configurations)
- valid_user_regs() reworked for better sanity check of the
sigcontext information (restored pstate information)
- ACPI parking protocol implementation
- CONFIG_DEBUG_RODATA enabled by default
- VDSO code marked as read-only
- DEBUG_PAGEALLOC support
- ARCH_HAS_UBSAN_SANITIZE_ALL enabled
- Erratum workaround for the Cavium ThunderX SoC
- set_pte_at() fix for PROT_NONE mappings
- Code clean-ups"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (99 commits)
arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
arm64: kasan: Use actual memory node when populating the kernel image shadow
arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission
arm64: Fix misspellings in comments.
arm64: efi: add missing frame pointer assignment
arm64: make mrs_s prefixing implicit in read_cpuid
arm64: enable CONFIG_DEBUG_RODATA by default
arm64: Rework valid_user_regs
arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
arm64: KVM: Move kvm_call_hyp back to its original localtion
arm64: mm: treat memstart_addr as a signed quantity
arm64: mm: list kernel sections in order
arm64: lse: deal with clobbered IP registers after branch via PLT
arm64: mm: dump: Use VA_START directly instead of private LOWEST_ADDR
arm64: kconfig: add submenu for 8.2 architectural features
arm64: kernel: acpi: fix ioremap in ACPI parking protocol cpu_postboot
arm64: Add support for Half precision floating point
arm64: Remove fixmap include fragility
arm64: Add workaround for Cavium erratum 27456
arm64: mm: Mark .rodata as RO
...
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
"One of the largest releases for KVM... Hardly any generic
changes, but lots of architecture-specific updates.
ARM:
- VHE support so that we can run the kernel at EL2 on ARMv8.1 systems
- PMU support for guests
- 32bit world switch rewritten in C
- various optimizations to the vgic save/restore code.
PPC:
- enabled KVM-VFIO integration ("VFIO device")
- optimizations to speed up IPIs between vcpus
- in-kernel handling of IOMMU hypercalls
- support for dynamic DMA windows (DDW).
s390:
- provide the floating point registers via sync regs;
- separated instruction vs. data accesses
- dirty log improvements for huge guests
- bugfixes and documentation improvements.
x86:
- Hyper-V VMBus hypercall userspace exit
- alternative implementation of lowest-priority interrupts using
vector hashing (for better VT-d posted interrupt support)
- fixed guest debugging with nested virtualizations
- improved interrupt tracking in the in-kernel IOAPIC
- generic infrastructure for tracking writes to guest
memory - currently its only use is to speed up the legacy shadow
paging (pre-EPT) case, but in the future it will be used for
virtual GPUs as well
- much cleanup (LAPIC, kvmclock, MMU, PIT), including ubsan fixes"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (217 commits)
KVM: x86: remove eager_fpu field of struct kvm_vcpu_arch
KVM: x86: disable MPX if host did not enable MPX XSAVE features
arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit
arm64: KVM: vgic-v3: Reset LRs at boot time
arm64: KVM: vgic-v3: Do not save an LR known to be empty
arm64: KVM: vgic-v3: Save maintenance interrupt state only if required
arm64: KVM: vgic-v3: Avoid accessing ICH registers
KVM: arm/arm64: vgic-v2: Make GICD_SGIR quicker to hit
KVM: arm/arm64: vgic-v2: Only wipe LRs on vcpu exit
KVM: arm/arm64: vgic-v2: Reset LRs at boot time
KVM: arm/arm64: vgic-v2: Do not save an LR known to be empty
KVM: arm/arm64: vgic-v2: Move GICH_ELRSR saving to its own function
KVM: arm/arm64: vgic-v2: Save maintenance interrupt state only if required
KVM: arm/arm64: vgic-v2: Avoid accessing GICH registers
KVM: s390: allocate only one DMA page per VM
KVM: s390: enable STFLE interpretation only if enabled for the guest
KVM: s390: wake up when the VCPU cpu timer expires
KVM: s390: step the VCPU timer while in enabled wait
KVM: s390: protect VCPU cpu timer with a seqcount
KVM: s390: step VCPU cpu timer during kvm_run ioctl
...
Pull cpu hotplug updates from Thomas Gleixner:
"This is the first part of the ongoing cpu hotplug rework:
- Initial implementation of the state machine
- Runs all online and prepare down callbacks on the plugged cpu and
not on some random processor
- Replaces busy loop waiting with completions
- Adds tracepoints so the states can be followed"
More detailed commentary on this work from an earlier email:
"What's wrong with the current cpu hotplug infrastructure?
- Asymmetry
The hotplug notifier mechanism is asymmetric versus the bringup and
teardown. This is mostly caused by the notifier mechanism.
- Largely undocumented dependencies
While some notifiers use explicitly defined notifier priorities,
we have quite some notifiers which use numerical priorities to
express dependencies without any documentation as to why.
- Control processor driven
Most of the bringup/teardown of a cpu is driven by a control
processor. While it is understandable that preparatory steps,
like idle thread creation, memory allocation for and initialization
of essential facilities needs to be done before a cpu can boot,
there is no reason why everything else must run on a control
processor. Before this patch series, bringup looks like this:
Control CPU                     Booting CPU
do preparatory steps
kick cpu into life
                                do low level init
sync with booting cpu           sync with control cpu
bring the rest up
- All or nothing approach
There is no way to do partial bringups. That's something which is
really desired because we waste, e.g. at boot, a substantial amount of
time just busy waiting until the cpu comes to life. That's stupid
as we could very well do preparatory steps and the initial IPI for
other cpus and then go back and do the necessary low level
synchronization with the freshly booted cpu.
- Minimal debuggability
Due to the notifier based design, it's impossible to switch between
two stages of the bringup/teardown back and forth in order to test
the correctness. So in many hotplug notifiers the cancel
mechanisms are either nonexistent or completely untested.
- Notifier [un]registering is tedious
To [un]register notifiers we need to protect against hotplug at
every callsite. There is no mechanism to ensure that bringup/teardown
callbacks are issued on the online cpus, so every caller needs to
do it itself. That also includes error rollback.
What's the new design?
The base of the new design is a symmetric state machine, where both
the control processor and the booting/dying cpu execute a well
defined set of states. Each state is symmetric in the end, except
for some well defined exceptions, and the bringup/teardown can be
stopped and reversed at almost all states.
So the bringup of a cpu will look like this in the future:
Control CPU                     Booting CPU
do preparatory steps
kick cpu into life
                                do low level init
sync with booting cpu           sync with control cpu
                                bring itself up
The synchronization step does not require the control cpu to wait.
That mechanism can be done asynchronously via a worker or some
other mechanism.
The teardown can be made very similar, so that the dying cpu cleans
up and brings itself down. Cleanups which need to be done after
the cpu is gone, can be scheduled asynchronously as well.
There is a long way to go, as we need to refactor the notion of when a
cpu is available. Today we set the cpu online right after it comes
out of the low level bringup, which is not really correct.
The proper mechanism is to set it to available first, i.e. cpu local
threads, like softirqd, the hotplug thread etc. can be scheduled on that
cpu, and once it has finished all booting steps, it's set to online, so
general workloads can be scheduled on it. The reverse happens on
teardown. First thing to do is to forbid scheduling of general
workloads, then teardown all the per cpu resources and finally shut it
off completely.
This patch series implements the basic infrastructure for this at the
core level. This includes the following:
- Basic state machine implementation with well defined states, so
ordering and prioritization can be expressed.
- Interfaces to [un]register state callbacks
This invokes the bringup/teardown callback on all online cpus with
the proper protection in place and [un]installs the callbacks in
the state machine array.
For callbacks which have no particular ordering requirement we have
a dynamic state space, so that drivers don't have to register an
explicit hotplug state.
If a callback fails, the code automatically does a rollback to the
previous state.
- Sysfs interface to drive the state machine to a particular step.
This is only partially functional today. Full functionality and
therefore testability will be achieved once we have converted all
existing hotplug notifiers over to the new scheme.
- Run all CPU_ONLINE/DOWN_PREPARE notifiers on the booting/dying
processor:
   Control CPU                        Booting CPU
   do preparatory steps
   kick cpu into life
                                      do low level init
   sync with booting cpu              sync with control cpu
   wait for boot
                                      bring itself up
                                      Signal completion to control cpu
In a previous step of this work we've done a full tree mechanical
conversion of all hotplug notifiers to the new scheme. The balance
is a net removal of about 4000 lines of code.
This is not included in this series, as we decided to take a
different approach. Instead of mechanically converting everything
over, we will do a proper overhaul of the usage sites one by one so
they nicely fit into the symmetric callback scheme.
I decided to do that after I looked at the ugliness of some of the
converted sites and figured out that their hotplug mechanism is
completely buggered anyway. So there is no point in doing a
mechanical conversion first, as we need to go through the usage
sites one by one again in order to achieve fully symmetric and
testable behaviour"
* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
cpu/hotplug: Document states better
cpu/hotplug: Fix smpboot thread ordering
cpu/hotplug: Remove redundant state check
cpu/hotplug: Plug death reporting race
rcu: Make CPU_DYING_IDLE an explicit call
cpu/hotplug: Make wait for dead cpu completion based
cpu/hotplug: Let upcoming cpu bring itself fully up
arch/hotplug: Call into idle with a proper state
cpu/hotplug: Move online calls to hotplugged cpu
cpu/hotplug: Create hotplug threads
cpu/hotplug: Split out the state walk into functions
cpu/hotplug: Unpark smpboot threads from the state machine
cpu/hotplug: Move scheduler cpu_online notifier to hotplug core
cpu/hotplug: Implement setup/removal interface
cpu/hotplug: Make target state writeable
cpu/hotplug: Add sysfs state interface
cpu/hotplug: Hand in target state to _cpu_up/down
cpu/hotplug: Convert the hotplugged cpu work to a state machine
cpu/hotplug: Convert to a state machine for the control processor
cpu/hotplug: Add tracepoints
...
Pull ram resource handling changes from Ingo Molnar:
"Core kernel resource handling changes to support NVDIMM error
injection.
This tree introduces a new I/O resource type, IORESOURCE_SYSTEM_RAM,
for System RAM while keeping the current IORESOURCE_MEM type bit set
for all memory-mapped ranges (including System RAM) for backward
compatibility.
With this resource flag it no longer takes a strcmp() loop through the
resource tree to find "System RAM" resources.
The new resource type is then used to extend ACPI/APEI error injection
facility to also support NVDIMM"
* 'core-resources-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
ACPI/EINJ: Allow memory error injection to NVDIMM
resource: Kill walk_iomem_res()
x86/kexec: Remove walk_iomem_res() call with GART type
x86, kexec, nvdimm: Use walk_iomem_res_desc() for iomem search
resource: Add walk_iomem_res_desc()
memremap: Change region_intersects() to take @flags and @desc
arm/samsung: Change s3c_pm_run_res() to use System RAM type
resource: Change walk_system_ram() to use System RAM type
drivers: Initialize resource entry to zero
xen, mm: Set IORESOURCE_SYSTEM_RAM to System RAM
kexec: Set IORESOURCE_SYSTEM_RAM for System RAM
arch: Set IORESOURCE_SYSTEM_RAM flag for System RAM
ia64: Set System RAM type and descriptor
x86/e820: Set System RAM type and descriptor
resource: Add I/O resource descriptor
resource: Handle resource flags properly
resource: Add System RAM resource type
Functions which the compiler has instrumented for KASAN place poison on
the stack shadow upon entry and remove this poison prior to returning.
In the case of cpuidle, CPUs exit the kernel a number of levels deep in
C code. Any instrumented functions on this critical path will leave
portions of the stack shadow poisoned.
If CPUs lose context and return to the kernel via a cold path, a prior
context saved in __cpu_suspend_enter is restored; the instrumented calls
made between that save and the actual exit of the kernel are forgotten,
and the poison they placed in the stack shadow area is never removed.
Thus, (depending on stackframe layout) subsequent calls to instrumented
functions may hit this stale poison, resulting in (spurious) KASAN
splats to the console.
To avoid this, clear any stale poison from the idle thread for a CPU
prior to bringing it online.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The prologue of the EFI entry point pushes x29 and x30 onto the stack but
fails to create the stack frame correctly by omitting the assignment of x29
to the new value of the stack pointer. So fix that.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 0f54b14e76 ("arm64: cpufeature: Change read_cpuid() to use
sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
register names, to allow manual assembly of registers unknown by the
toolchain, using tables in sysreg.h.
This interacts poorly with commit 42b5573403 ("efi/arm64: Check
for h/w support before booting a >4 KB granular kernel"), which is
currently queued via the tip tree, and uses read_cpuid without a SYS_
prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages
are selected.
To avoid this issue when trees are merged, move the required SYS_
prefixing into read_cpuid, and revert all of the updated callsites to
pass plain register names. This effectively reverts the bulk of commit
0f54b14e76.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We validate pstate using PSR_MODE32_BIT, which is part of the
user-provided pstate (and cannot be trusted). Also, we conflate
validation of AArch32 and AArch64 pstate values, making the code
difficult to reason about.
Instead, validate the pstate value based on the associated task. The
task may or may not be current (e.g. when using ptrace), so this must be
passed explicitly by callers. To avoid circular header dependencies via
sched.h, is_compat_task is pulled out of asm/ptrace.h.
To make the code possible to reason about, the AArch64 and AArch32
validation is split into separate functions. Software must respect the
RES0 policy for SPSR bits, and thus the kernel mirrors the hardware
policy (RAZ/WI) for bits as-yet unallocated. When these acquire an
architected meaning, writes may be permitted (potentially with additional
validation).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Let the non-boot cpus call into idle with the corresponding hotplug state, so
the hotplug core can handle the further bringup. That's a first step in
converting the boot side of the hotplugged cpus to do all the synchronization
with the other side through the state machine. For now it'll only start the
hotplug thread and kick off the full bringup of the cpu.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Rik van Riel <riel@redhat.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Commit 7175f0591e ("arm64: perf: Enable PMCR long cycle counter bit")
added initial support for a 64-bit cycle counter enabled using PMCR.LC.
Unfortunately, that patch doesn't extend ARMV8_EVTYPE_MASK, so any
attempts to set the enable bit are ignored by armv8pmu_pmcr_write.
This patch extends the mask to include the new bit.
Signed-off-by: Will Deacon <will.deacon@arm.com>
With ARMv8.1 VHE, the architecture is able to (almost) transparently
run the kernel at EL2, despite being written for EL1.
This patch takes care of the "almost" part, mostly preventing the kernel
from dropping from EL2 to EL1, and setting up the HYP configuration.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
When the kernel is running in HYP (with VHE), it is necessary to
include EL2 events if the user requests counting kernel or
hypervisor events.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The fault decoding process (including computing the IPA in the case
of a permission fault) would be much better done in C code, as we
have a reasonable infrastructure to deal with the VHE/non-VHE
differences.
Let's move the whole thing to C, including the workaround for
erratum 834220, and just patch the odd ESR_EL2 access remaining
in hyp-entry.S.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Add a new ARM64_HAS_VIRT_HOST_EXTN feature to indicate that the
CPU has the ARMv8.1 VHE capability.
This will be used to trigger kernel patching in KVM.
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
When secondary cpus are booted through the ACPI parking protocol, the
booted cpu should check that FW has correctly cleared its mailbox entry
point value to make sure the boot process was correctly executed.
The entry point check is carried out in the cpu_ops->cpu_postboot method,
which is executed by secondary cpus when entering the kernel with irqs disabled.
The ACPI parking protocol cpu_ops maps/unmaps the mailboxes on the
primary CPU to trigger secondary boot in the cpu_ops->cpu_boot method
and on secondary processors to carry out FW checks on the booted CPU
to verify the boot protocol was successfully executed in the
cpu_ops->cpu_postboot method.
Therefore, the cpu_ops->cpu_postboot method is forced to ioremap/unmap the
mailboxes, which is wrong in that ioremap cannot safely be carried out
with irqs disabled.
To fix this issue, this patch reshuffles the code so that the mailboxes
are still mapped after the boot processor executes the cpu_ops->cpu_boot
method for a given cpu, and the VA at which a mailbox is mapped for a given
cpu is stashed in the per-cpu data struct so that secondary cpus can
retrieve them in the cpu_ops->cpu_postboot and complete the required
FW checks.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Tested-by: Loc Ho <lho@apm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Loc Ho <lho@apm.com>
Cc: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARMv8.2 extensions [1] include an optional feature which supports
half-precision (16-bit) floating point/asimd data processing
instructions. This patch adds support for detecting this feature and
exposing it to userspace via HWCAPs
[1] https://community.arm.com/groups/processors/blog/2016/01/05/armv8-a-architecture-evolution
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI
instructions may cause the icache to become corrupted if it contains
data for a non-current ASID.
This patch implements the workaround (which invalidates the local
icache when switching the mm) by using code patching.
Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently the .rodata section is still executable when DEBUG_RODATA
is enabled. Change this so that .rodata is actually read-only and
non-executable. Also add the .rodata section to the mem_init banner.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: added vm_struct vmlinux_rodata in map_kernel()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we have a clear understanding of the sign of a feature,
rename the routines to reflect the sign, so that it is not misused.
The cpuid_feature_extract_field() now accepts a 'sign' parameter.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Use the appropriate accessor for the feature field by keeping
track of the sign of the feature.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
There is confusion about whether the values of a feature are signed
or not in ARM. This is not clearly mentioned in the ARM ARM either.
We have treated most of the fields as signed so far, and marked the
rest as unsigned explicitly. This is fixed in the ARM ARM and will be
rolled out soon.
Here is the criteria in a nutshell:
1) The fields, which are either signed or unsigned, use increasing
numerical values to indicate an increase in functionality. Thus, if a value
of 0x1 indicates the presence of some instructions, then the 0x2 value will
indicate the presence of those instructions plus some additional instructions
or functionality.
2) For ID field values where the value 0x0 defines that a feature is not present,
the number is an unsigned value.
3) For some features where the feature was made optional or removed after the
start of the definition of the architecture, the value 0x0 is used to
indicate the presence of a feature, and 0xF indicates the absence of the
feature. In these cases, the fields are, in effect, holding signed values.
So with these rules applied, we have only the following fields which are signed and
the rest are unsigned.
a) ID_AA64PFR0_EL1: {FP, ASIMD}
b) ID_AA64MMFR0_EL1: {TGran4K, TGran64K}
c) ID_AA64DFR0_EL1: PMUVer (0xf - PMUv3 not implemented)
d) ID_DFR0_EL1: PerfMon
e) ID_MMFR0_EL1: {InnerShr, OuterShr}
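To make these rules concrete, a minimal standalone sketch of the two
extraction flavours (mirroring the idea behind cpuid_feature_extract_field()
taking a 'sign' parameter; the helper names here are illustrative):

#include <stdint.h>

/* signed fields (e.g. PMUVer): sign-extend, so a raw 0xf extracts
 * as -1, i.e. "PMUv3 not implemented" */
static inline int64_t extract_signed(uint64_t reg, int shift, int width)
{
        return (int64_t)(reg << (64 - width - shift)) >> (64 - width);
}

/* unsigned fields: plain shift-and-mask, 0x0 means "not present" */
static inline uint64_t extract_unsigned(uint64_t reg, int shift, int width)
{
        return (reg >> shift) & ((1ULL << width) - 1);
}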
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Correct the feature bit entries for:
ID_DFR0
ID_MMFR0
to fix the default safe value for some of the bits.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add a hook for checking whether a secondary CPU has the
features already used by the kernel during early boot, based
on the boot CPU, and plug in the check for the ASID size.
The ID_AA64MMFR0_EL1:ASIDBits field determines the size of the mm context
id and is used early in boot to make decisions. The value is
picked up from the boot CPU and the check cannot be delayed until other
CPUs are up. If a secondary CPU has a smaller size than that of the boot
CPU, things will break horribly and the usual SANITY check is not good
enough to prevent the system from crashing. So, crash the system with
enough information.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We verify the capabilities of the secondary CPUs only when
hotplug is enabled. The CPUs activated at boot time do not
go through the verification, which is skipped by checking whether
the system-wide capabilities have been initialised or not.
This patch removes the capability check dependency on CONFIG_HOTPLUG_CPU,
to make sure that all the secondary CPUs go through the check.
The boot time activated CPUs will still skip the system wide
capability check. The plan is to hook in a check for CPU features
used by the kernel at early boot up, based on the Boot CPU values.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
A secondary CPU could fail to come online due to insufficient
capabilities and could simply die or loop in the kernel.
E.g. a CPU with no support for the selected kernel PAGE_SIZE
loops in the kernel with the MMU turned off, and a hotplugged CPU
which doesn't have one of the advertised system capabilities
will die during activation.
There is no way to synchronise the status of the failing CPU
back to the master. This patch solves the issue by adding a
field to the secondary_data which can be updated by the failing
CPU. If the secondary CPU fails even before turning the MMU on,
it updates the status in a special variable reserved in the head.text
section to make sure that the update can be cache invalidated safely
without possible sharing of the cache write-back granule.
Here are the possible states:
-1. CPU_MMU_OFF - Initial value set by the master CPU; this value
indicates that the CPU could not turn the MMU on, hence the status
could not be reliably updated in the secondary_data. Instead, the
CPU has updated the status @ __early_cpu_boot_status.
0. CPU_BOOT_SUCCESS - CPU has booted successfully.
1. CPU_KILL_ME - CPU has invoked cpu_ops->die, signalling the
master CPU to synchronise by issuing cpu_ops->cpu_kill.
2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die() and is instead
looping in the kernel. This information could be used by, say,
kexec to check if it is really safe to do a kexec reboot.
3. CPU_PANIC_KERNEL - CPU detected some serious issue which
requires the kernel to crash immediately. The secondary CPU cannot
call panic() until it has initialised the GIC. This flag can
be used to instruct the master to do so.
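A standalone sketch of these states and how the master CPU might act on
them (the values and handling below are illustrative, not the kernel's
exact definitions):

#include <stdio.h>

enum {
        CPU_MMU_OFF         = -1, /* real status is @ __early_cpu_boot_status */
        CPU_BOOT_SUCCESS    = 0,
        CPU_KILL_ME         = 1,  /* master should issue cpu_ops->cpu_kill */
        CPU_STUCK_IN_KERNEL = 2,  /* e.g. kexec must treat reboot as unsafe */
        CPU_PANIC_KERNEL    = 3,  /* master panics on the secondary's behalf */
};

static void report_secondary(unsigned int cpu, long status)
{
        switch (status) {
        case CPU_BOOT_SUCCESS:
                printf("CPU%u: booted\n", cpu);
                break;
        case CPU_KILL_ME:
                printf("CPU%u: asking to be killed\n", cpu);
                break;
        case CPU_STUCK_IN_KERNEL:
                printf("CPU%u: stuck in the kernel\n", cpu);
                break;
        case CPU_PANIC_KERNEL:
                printf("CPU%u: hit a fatal condition, panic\n", cpu);
                break;
        default:
                printf("CPU%u: failed with the MMU off (%ld)\n", cpu, status);
        }
}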
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[catalin.marinas@arm.com: conflict resolution]
[catalin.marinas@arm.com: converted "status" from int to long]
[catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch moves cpu_die_early to smp.c, where it fits better.
No functional changes, except for adding the necessary checks
for CONFIG_HOTPLUG_CPU.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Or in other words, make fail_incapable_cpu() reusable.
We use fail_incapable_cpu() to kill, early during bringup, a secondary
CPU which doesn't have the system-advertised capabilities.
This patch makes the routine more generic, so that it can kill any
secondary booting CPU, getting rid of the dependency on the capability
struct. This can be used by checks which are not necessarily attached to
a capability struct (e.g. cpu ASIDBits).
In the process, rename the function to cpu_die_early() to better
match its functionality. It will be moved to arch/arm64/kernel/smp.c
later.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Adds a routine which can be used to park CPUs (spinning in kernel)
when they can't be killed.
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
provided by the bootloader, randomize the placement of RAM inside the
linear region if sufficient space is available. For instance, on a 4KB
granule/3 levels kernel, the linear region is 256 GB in size, and we can
choose any 1 GB aligned offset that is far enough from the top of the
address space to fit the distance between the start of the lowest memblock
and the top of the highest memblock.
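The selection itself reduces to simple arithmetic; a standalone sketch
under the constraints above (the function name and parameters are
illustrative):

#include <stdint.h>

#define SZ_1G (1ULL << 30)

/* choose a 1 GB aligned offset for RAM inside the linear region,
 * leaving room for the span between lowest and highest memblock */
static uint64_t pick_linear_offset(uint64_t linear_size, uint64_t ram_span,
                                   uint64_t seed)
{
        uint64_t slots;

        if (ram_span >= linear_size)
                return 0;                      /* no room to randomize */

        slots = (linear_size - ram_span) / SZ_1G;
        if (!slots)
                return 0;

        return (seed % slots) * SZ_1G;
}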
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This adds support for KASLR, based on entropy provided by
the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
of the address space (VA_BITS) and the page size, the entropy in the
virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
4 levels), with the side note that displacements that result in the kernel
image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
granule kernels, respectively) are not allowed, and will be rounded up to
an acceptable value.
If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
randomized independently from the core kernel. This makes it less likely
that the location of core kernel data structures can be determined by an
adversary, but causes all function calls from modules into the core kernel
to be resolved via entries in the module PLTs.
If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
randomized by choosing a page aligned 128 MB region inside the interval
[_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
entropy (depending on page size), independently of the kernel randomization,
but still guarantees that modules are within the range of relative branch
and jump instructions (with the caveat that, since the module region is
shared with other uses of the vmalloc area, modules may need to be loaded
further away if the module region is exhausted).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This implements CONFIG_RELOCATABLE, which links the final vmlinux
image with a dynamic relocation section, allowing the early boot code
to perform a relocation to a different virtual address at runtime.
This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Instead of using absolute addresses for both the exception location
and the fixup, use offsets relative to the exception table entry values.
Not only does this cut the size of the exception table in half, it is
also a prerequisite for KASLR, since absolute exception table entries
are subject to dynamic relocation, which is incompatible with the sorting
of the exception table that occurs at build time.
This patch also introduces the _ASM_EXTABLE preprocessor macro (which
exists on x86 as well) and its _asm_extable assembly counterpart, as
shorthands to emit exception table entries.
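The scheme is easiest to see in a standalone sketch: each entry stores
32-bit offsets relative to its own fields, so the table needs no dynamic
relocations and is half the size of an absolute-address table:

#include <stdint.h>

struct exception_table_entry {
        int32_t insn;   /* offset from &entry->insn to the faulting insn */
        int32_t fixup;  /* offset from &entry->fixup to the fixup code */
};

static inline uintptr_t ex_insn_addr(const struct exception_table_entry *e)
{
        return (uintptr_t)&e->insn + e->insn;
}

static inline uintptr_t ex_fixup_addr(const struct exception_table_entry *e)
{
        return (uintptr_t)&e->fixup + e->fixup;
}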
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Before implementing KASLR for arm64 by building a self-relocating PIE
executable, we have to ensure that values we use before the relocation
routine is executed are not subject to dynamic relocation themselves.
This applies not only to virtual addresses, but also to values that are
supplied by the linker at build time and relocated using R_AARCH64_ABS64
relocations.
So instead, use assemble-time constants, or force the use of static
relocations by folding the constants into the instructions.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Unfortunately, the current way of using the linker to emit build time
constants into the Image header will no longer work once we switch to
the use of PIE executables. The reason is that such constants are emitted
into the binary using R_AARCH64_ABS64 relocations, which are resolved at
runtime, not at build time, and the places targeted by those relocations
will contain zeroes before that.
So refactor the endian swapping linker script constant generation code so
that it emits the upper and lower 32-bit words separately.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This adds support for emitting PLTs at module load time for relative
branches that are out of range. This is a prerequisite for KASLR, which
may place the kernel and the modules anywhere in the vmalloc area,
making it more likely that branch target offsets exceed the maximum
range of +/- 128 MB.
In this version, I removed the distinction between relocations against
.init executable sections and ordinary executable sections. The reason
is that it is hardly worth the trouble, given that .init.text usually
does not contain that many far branches, and this version now only
reserves PLT entry space for jump and call relocations against undefined
symbols (since symbols defined in the same module can be assumed to be
within +/- 128 MB).
For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
built with -mcmodel=large gives the following relocation counts:
                   relocs  branches  unique  !local
   .text            3925      3347     518     219
   .init.text         11         8       7       1
   .exit.text          4         4       4       1
   .text.unlikely     81        67      36      17
('unique' means branches to unique type/symbol/addend combos, of which
!local is the subset referring to undefined symbols)
IOW, we are only emitting a single PLT entry for the .init sections, and
we are better off just adding it to the core PLT section instead.
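A standalone sketch of the range check that decides whether a jump or
call relocation needs a PLT entry (AArch64 B/BL immediates cover
+/- 128 MB; the helper name is illustrative):

#include <stdbool.h>
#include <stdint.h>

#define SZ_128M (128LL * 1024 * 1024)

static bool branch_in_range(uint64_t pc, uint64_t target)
{
        int64_t offset = (int64_t)(target - pc);

        return offset >= -SZ_128M && offset < SZ_128M;
}

/* when !branch_in_range(), the module loader redirects the branch to a
 * PLT entry emitted alongside the module, which loads the full 64-bit
 * target address and branches to it */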
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The EFI stub is typically built into the decompressor (x86, ARM) so none
of its symbols are annotated as __init. However, on arm64, the stub is
linked into the kernel proper, and the code is __init annotated at the
section level by prepending all names of SHF_ALLOC sections with '.init'.
This results in section names like .init.rodata.str1.8 (for string literals)
and .init.bss (which is tiny), both of which can be moved into the .init.data
output section.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1455712566-16727-6-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory.
This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of
the kernel Image. As a result, virtual address references
below PAGE_OFFSET are correctly mapped onto physical references
into the kernel Image regardless of where it sits in memory.
Special care needs to be taken for dealing with memory limits passed
via mem=, since the generic implementation clips memory top down, which
may clip the kernel image itself if it is loaded high up in memory. To
deal with this case, we simply add back the memory covering the kernel
image, which may result in more memory to be retained than was passed
as a mem= parameter.
Since mem= should not be considered a production feature, a panic notifier
handler is installed that dumps the memory limit at panic time if one was
set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
the symbolic virtual base of the kernel region, i.e., the kernel's virtual
offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
equal to PAGE_OFFSET, but in the future, it will be moved below it once
we move the kernel virtual mapping out of the linear mapping.
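In code form this is just the following (a sketch of the definition as
described here):

#define KIMAGE_VADDR    (PAGE_OFFSET)
/* the kernel's virtual base is then KIMAGE_VADDR + TEXT_OFFSET */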
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This function was introduced by previous commits implementing UAO.
However, it can be replaced with task_thread_info() in
uao_thread_switch() or get_fs() in do_page_fault() (the latter being
called only on the current context, so no need for using the saved
pt_regs).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If a CPU supports both Privileged Access Never (PAN) and User Access
Override (UAO), we don't need to disable/re-enable PAN around all
copy_to_user()-like calls.
UAO alternatives cause these calls to use the 'unprivileged' load/store
instructions, which are overridden to be the privileged kind when
fs==KERNEL_DS.
This patch changes the copy_to_user() calls to have their PAN toggling
depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
If both features are detected, PAN will be enabled, but the copy_to_user()
alternatives will not be applied. This means PAN will be enabled all the
time for these functions. If only PAN is detected, the toggling will be
enabled as normal.
This will save the time taken to disable/re-enable PAN, and allow us to
catch copy_to_user() accesses that occur with fs==KERNEL_DS.
Futex and swp-emulation code continue to hang their PAN toggling code on
ARM64_HAS_PAN.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
CPU feature code uses the desc field as a test to find the end of the list,
this means every entry must have a description. This generates noise for
entries in the list that aren't really features, but combinations of them.
e.g.
> CPU features: detected feature: Privileged Access Never
> CPU features: detected feature: PAN and not UAO
These combination features are needed for corner cases with alternatives,
where cpu features interact.
Change all walkers of the arm64_features[] and arm64_hwcaps[] lists to test
'matches' not 'desc', and only print 'desc' if it is non-NULL.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by : Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
'User Access Override' is a new ARMv8.2 feature which allows the
unprivileged load and store instructions to be overridden to behave in
the normal way.
This patch converts {get,put}_user() and friends to use ldtr*/sttr*
instructions - so that they can only access EL0 memory, then enables
UAO when fs==KERNEL_DS so that these functions can access kernel memory.
This allows user space's read/write permissions to be checked against the
page tables, instead of testing addr<USER_DS, then using the kernel's
read/write permissions.
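A minimal sketch of the unprivileged load these helpers switch to
(illustrative only; the real uaccess routines also attach an exception
table entry for fault handling):

#include <stdint.h>

static inline uint8_t load_user_byte(const void *addr)
{
        uint64_t val;

        /* LDTRB: unprivileged load, checked against EL0 permissions.
         * With fs == KERNEL_DS the kernel sets PSTATE.UAO, overriding
         * it to behave like a normal privileged load. */
        asm volatile("ldtrb %w0, [%1]" : "=r" (val) : "r" (addr));

        return val;
}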
Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARMv8.1 increases the PMU event number space to 16 bits, so increase
the EVTYPE mask.
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
With the long cycle counter bit (LC) disabled, the cycle counter is not
working on the ThunderX SOC (ThunderX only implements AArch64).
Also, according to the documentation, LC == 0 is deprecated.
To keep the code simple the patch does not introduce 64 bit wide counter
functions. Instead writing the cycle counter always sets the upper
32 bits so overflow interrupts are generated as before.
Original patch from Andrew Pinski <Andrew.Pinski@caviumnetworks.com>
Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The implemented Cortex-A57 events are, strictly speaking, not
A57 specific. They are ARM recommended implementation defined events
and can be found on other ARMv8 SOCs like Cavium ThunderX too.
Therefore rename these events to allow using them in other
implementations too.
Signed-off-by: Jan Glauber <jglauber@cavium.com>
[will: capitalisation and ordering]
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
cpu feature boiler plate used by the actual features in later patches.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Older assemblers may not have support for newer feature registers. To get
round this, sysreg.h provides a 'mrs_s' macro that takes a register
encoding and generates the raw instruction.
Change read_cpuid() to use mrs_s in all cases so that new registers
don't have to be a special case. Including sysreg.h means we need to move
the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__
to avoid syntax errors in vmlinux.lds.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Although the arm64 vDSO is cleanly separated by code/data with the
code being read-only in userspace mappings, the code page is still
writable from the kernel. There have been exploits (such as
http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
from a bad kernel write to full root.
Prevent this specific exploit on arm64 by putting the vDSO code page
in read-only memory as well.
Before the change:
[ 3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b6000 1752K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b6000-0xffffffc000c00000 2344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
After:
[ 3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b8000 1760K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b8000-0xffffffc000c00000 2336K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.
Signed-off-by: David Brown <david.brown@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Most CPUs have a hardware prefetcher which generally performs better
without explicit prefetch instructions issued by software, however
some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch
instructions.
This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to
allow our library code to make use of explicit prefetch instructions
during things like copy routines only when the CPU does not have the
capability to perform the prefetching itself.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The SBBR and ACPI specifications allow ACPI based systems that do not
implement PSCI (eg systems with no EL3) to boot through the ACPI parking
protocol specification[1].
This patch implements the ACPI parking protocol CPU operations, and adds
code that eases parsing of the parking protocol data structures to the
ARM64 SMP initialization carried out at the same time as cpus enumeration.
To wake up the CPUs from the parked state, this patch implements a
wakeup IPI for ARM64 (ie arch_send_wakeup_ipi_mask()) that mirrors the
ARM one, so that a specific IPI is sent for wake-up purposes in order
to distinguish it from other IPI sources.
Given the current ACPI MADT parsing API, the patch implements a glue
layer that helps passing MADT GICC data structure from SMP initialization
code to the parking protocol implementation somewhat overriding the CPU
operations interfaces. This is to avoid creating a completely transparent
DT/ACPI CPU operations layer that would require creating opaque
structure handling for CPUs data (DT represents CPU through DT nodes, ACPI
through static MADT table entries), which seems overkill given that ACPI
on ARM64 mandates only two booting protocols (PSCI and parking protocol),
so there is no need for further protocol additions.
Based on the original work by Mark Salter <msalter@redhat.com>
[1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Loc Ho <lho@apm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
[catalin.marinas@arm.com: Added WARN_ONCE(!acpi_parking_protocol_valid() on the IPI]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently we have separate ALIGN_DEBUG_RO{,_MIN} directives to align
_etext and __init_begin. While we ensure that __init_begin is
page-aligned, we do not provide the same guarantee for _etext. This is
not problematic currently as the alignment of __init_begin is sufficient
to prevent issues when we modify permissions.
Subsequent patches will assume page alignment of segments of the kernel
we wish to map with different permissions. To ensure this, move _etext
after the ALIGN_DEBUG_RO_MIN for the init section. This renders the
prior ALIGN_DEBUG_RO irrelevant, and hence it is removed. Likewise,
upgrade to ALIGN_DEBUG_RO_MIN(PAGE_SIZE) for _stext.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
During boot we leave the idmap in place until paging_init, as we
previously had to wait for the zero page to become allocated and
accessible.
Now that we have a statically-allocated zero page, we can uninstall the
idmap much earlier in the boot process, making it far easier to spot
accidental use of physical addresses. This also brings the cold boot
path in line with the secondary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We currently open-code the removal of the idmap and restoration of the
current task's MMU state in a few places.
Before introducing yet more copies of this sequence, unify these to call
a new helper, cpu_uninstall_idmap.
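The helper consolidates the sequence described above; a sketch of its
shape (the kernel version also resets TCR_EL1.T0SZ, omitted here):

static inline void cpu_uninstall_idmap(void)
{
        struct mm_struct *mm = current->active_mm;

        cpu_set_reserved_ttbr0();       /* point TTBR0 at the reserved page */
        local_flush_tlb_all();

        if (mm != &init_mm)
                cpu_switch_mm(mm->pgd, mm); /* restore the task's MMU state */
}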
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently the zero page is set up in paging_init, and thus we cannot use
the zero page earlier. We use the zero page as a reserved TTBR value
from which no TLB entries may be allocated (e.g. when uninstalling the
idmap). To enable such usage earlier (as may be required for invasive
changes to the kernel page tables), and to minimise the time that the
idmap is active, we need to be able to use the zero page before
paging_init.
This patch follows the example set by x86, by allocating the zero page
at compile time, in .bss. This means that the zero page itself is
available immediately upon entry to start_kernel (as we zero .bss before
this), and also means that the zero page takes up no space in the raw
Image binary. The associated struct page is allocated in bootmem_init,
and remains unavailable until this time.
Outside of arch code, the only users of empty_zero_page assume that the
empty_zero_page symbol refers to the zeroed memory itself, and that
ZERO_PAGE(x) must be used to acquire the associated struct page,
following the example of x86. This patch also brings arm64 inline with
these assumptions.
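A sketch of the compile-time allocation and the accessor split described
above (following x86's example; the exact annotations may differ):

/* the zeroed memory itself, usable as soon as .bss has been cleared */
unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
        __page_aligned_bss;

/* the associated struct page must still be obtained via ZERO_PAGE() */
#define ZERO_PAGE(vaddr)        virt_to_page(empty_zero_page)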
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Changes introduced in the upstream version of libfdt pulled in by commit
91feabc2e2 ("scripts/dtc: Update to upstream commit b06e55c88b9b") use
the strnlen() function, which isn't currently available to the EFI
namespace. Add it to the EFI namespace to avoid a linker error.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rob Herring <robh@kernel.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Switching between stacks is only valid if we are tracing ourselves while on the
irq_stack, so it is only valid when in current and non-preemptible context;
otherwise it is just zeroed off.
Fixes: 132cd887b5 ("arm64: Modify stack trace and dump for use with irq_stack")
Acked-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARM64 PSCI kernel interfaces that initialize idle states and implement
the suspend API to enter them are generic and can be shared with the
ARM architecture.
To achieve that goal, this patch moves ARM64 PSCI idle management
code to drivers/firmware, so that the interface to initialize and
enter idle states can actually be shared by ARM and ARM64 arches
back-ends.
The ARM generic CPUidle implementation also requires the definition of
a cpuidle_ops section entry for the kernel to initialize the CPUidle
operations at boot based on the enable-method (ie ARM64 has the
statically initialized cpu_ops counterparts for that purpose); therefore
this patch also adds the required section entry on CONFIG_ARM for PSCI so
that the kernel can initialize the PSCI CPUidle back-end when PSCI is
the probed enable-method.
On ARM64 this patch provides no functional change.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arch/arm64]
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jisheng Zhang <jszhang@marvell.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
force_sig_info can sleep under an -rt kernel, so attempting to send a
breakpoint SIGTRAP with interrupts disabled yields the following BUG:
BUG: sleeping function called from invalid context at
/kernel-source/kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 551, name: test.sh
CPU: 5 PID: 551 Comm: test.sh Not tainted 4.1.13-rt13 #7
Hardware name: Freescale Layerscape 2085a RDB Board (DT)
Call trace:
dump_backtrace+0x0/0x128
show_stack+0x24/0x30
dump_stack+0x80/0xa0
___might_sleep+0x128/0x1a0
rt_spin_lock+0x2c/0x40
force_sig_info+0xcc/0x210
brk_handler.part.2+0x6c/0x80
brk_handler+0xd8/0xe8
do_debug_exception+0x58/0xb8
This patch fixes the problem by ensuring that interrupts are enabled
prior to sending the SIGTRAP if they were already enabled in the user
context.
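A sketch of the fixed path (the shape follows the description above;
helper and field names are the usual kernel ones):

static void send_user_sigtrap(int si_code)
{
        struct pt_regs *regs = current_pt_regs();
        siginfo_t info = {
                .si_signo = SIGTRAP,
                .si_code  = si_code,
                .si_addr  = (void __user *)instruction_pointer(regs),
        };

        /* force_sig_info() may sleep on -rt, so only send with IRQs
         * enabled, which is safe iff the interrupted user context had
         * them enabled too */
        if (interrupts_enabled(regs))
                local_irq_enable();

        force_sig_info(SIGTRAP, &info, current);
}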
Reported-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arm64 generates asm/pci-bridge.h, which merely includes the now-empty
asm-generic/pci-bridge.h. Stop generating asm/pci-bridge.h, and stop
including it.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Set IORESOURCE_SYSTEM_RAM in flags of resource ranges with
"System RAM", "Kernel code", "Kernel data", and "Kernel bss".
Note that:
- IORESOURCE_SYSRAM (i.e. modifier bit) is set in flags when
IORESOURCE_MEM is already set. IORESOURCE_SYSTEM_RAM is defined
as (IORESOURCE_MEM|IORESOURCE_SYSRAM).
- Some archs do not set 'flags' for children nodes, such as
"Kernel code". This patch does not change 'flags' in this
case.
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mips@linux-mips.org
Cc: linux-mm <linux-mm@kvack.org>
Cc: linux-parisc@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/1453841853-11383-7-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The Performance Monitors extension is an optional feature of the
AArch64 architecture; therefore, in order to access Performance
Monitors registers safely, the kernel should detect the architected
PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field
before accessing them.
This patch implements a guard by reading the ID_AA64DFR0_EL1 register
PMUVer field to detect the architected PMU presence and prevent accessing
PMU system registers if the Performance Monitors extension is not
implemented in the core.
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Fixes: 60792ad349 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit e8f3010f73 ("arm64/efi: isolate EFI stub from the kernel
proper") isolated the EFI stub code from the kernel proper by prefixing
all of its symbols with __efistub_, and selectively allowing access to
core kernel symbols from the stub by emitting __efistub_ aliases for
functions and variables that the stub can access legally.
As an unintended side effect, these aliases are emitted into the
kallsyms symbol table, which means they may turn up in backtraces,
e.g.,
...
PC is at __efistub_memset+0x108/0x200
LR is at fixup_init+0x3c/0x48
...
[<ffffff8008328608>] __efistub_memset+0x108/0x200
[<ffffff8008094dcc>] free_initmem+0x2c/0x40
[<ffffff8008645198>] kernel_init+0x20/0xe0
[<ffffff8008085cd0>] ret_from_fork+0x10/0x40
The backtrace in question has nothing to do with the EFI stub, but
simply returns one of the several aliases of memset() that have been
recorded in the kallsyms table. This is undesirable, since it may
suggest to people who are not aware of this that the issue they are
seeing is somehow EFI related.
So hide the __efistub_ aliases from kallsyms, by emitting them as
absolute linker symbols explicitly. The distinction between those
and section relative symbols is completely irrelevant to these
definitions, and to the final link we are performing when these
definitions are being taken into account (the distinction is only
relevant to symbols defined inside a section definition when performing
a partial link), and so the resulting values are identical to the
original ones. Since absolute symbols are ignored by kallsyms, this
will result in these values being omitted from its symbol table.
After this patch, the backtrace generated from the same address looks
like this:
...
PC is at __memset+0x108/0x200
LR is at fixup_init+0x3c/0x48
...
[<ffffff8008328608>] __memset+0x108/0x200
[<ffffff8008094dcc>] free_initmem+0x2c/0x40
[<ffffff8008645198>] kernel_init+0x20/0xe0
[<ffffff8008085cd0>] ret_from_fork+0x10/0x40
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
During code generation, we used to BUG_ON unknown/unsupported encoding
or invalid parameters.
Instead, now we report these as errors and simply return the
instruction AARCH64_BREAK_FAULT. Users of these codegen helpers should
check for and handle this failure condition as appropriate.
Otherwise, unhandled codegen failure will result in trapping at
run-time due to AARCH64_BREAK_FAULT, which is arguably better than a
BUG_ON.
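Caller-side, the new failure convention looks like this (a sketch;
aarch64_insn_gen_branch_imm() is one of the helpers in question, while
emit() stands in for whatever the caller does with a valid instruction):

u32 insn = aarch64_insn_gen_branch_imm(pc, target,
                                       AARCH64_INSN_BRANCH_NOLINK);
if (insn == AARCH64_BREAK_FAULT)
        return -EINVAL;         /* reject instead of emitting a trap */

emit(insn, ctx);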
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
"PPC changes will come next week.
- s390: Support for runtime instrumentation within guests, support of
248 VCPUs.
- ARM: rewrite of the arm64 world switch in C, support for 16-bit VM
identifiers. Performance counter virtualization missed the boat.
- x86: Support for more Hyper-V features (synthetic interrupt
controller), MMU cleanups"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (115 commits)
kvm: x86: Fix vmwrite to SECONDARY_VM_EXEC_CONTROL
kvm/x86: Hyper-V SynIC timers tracepoints
kvm/x86: Hyper-V SynIC tracepoints
kvm/x86: Update SynIC timers on guest entry only
kvm/x86: Skip SynIC vector check for QEMU side
kvm/x86: Hyper-V fix SynIC timer disabling condition
kvm/x86: Reorg stimer_expiration() to better control timer restart
kvm/x86: Hyper-V unify stimer_start() and stimer_restart()
kvm/x86: Drop stimer_stop() function
kvm/x86: Hyper-V timers fix incorrect logical operation
KVM: move architecture-dependent requests to arch/
KVM: renumber vcpu->request bits
KVM: document which architecture uses each request bit
KVM: Remove unused KVM_REQ_KICK to save a bit in vcpu->requests
kvm: x86: Check kvm_write_guest return value in kvm_write_wall_clock
KVM: s390: implement the RI support of guest
kvm/s390: drop unpaired smp_mb
kvm: x86: fix comment about {mmu,nested_mmu}.gva_to_gpa
KVM: x86: MMU: Use clear_page() instead of init_shadow_page_table()
arm/arm64: KVM: Detect vGIC presence at runtime
...
Merge tag 'for-linus-4.5-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen updates from David Vrabel:
"Xen features and fixes for 4.5-rc0:
- Stolen ticks and PV wallclock support for arm/arm64
- Add grant copy ioctl to gntdev device"
* tag 'for-linus-4.5-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/gntdev: add ioctl for grant copy
x86/xen: don't reset vcpu_info on a cancelled suspend
xen/gntdev: constify mmu_notifier_ops structures
xen/grant-table: constify gnttab_ops structure
xen/time: use READ_ONCE
xen/x86: convert remaining timespec to timespec64 in xen_pvclock_gtod_notify
xen/x86: support XENPF_settime64
xen/arm: set the system time in Xen via the XENPF_settime64 hypercall
xen/arm: introduce xen_read_wallclock
arm: extend pvclock_wall_clock with sec_hi
xen: introduce XENPF_settime64
xen/arm: introduce HYPERVISOR_platform_op on arm and arm64
xen: rename dom0_op to platform_op
xen/arm: account for stolen ticks
arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
missing include asm/paravirt.h in cputime.c
xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
Pull ARM updates from Russell King:
- UEFI boot and runtime services support for ARM from Ard Biesheuvel
and Roy Franz.
- DT compatibility with old atags booting protocol for Nokia N900
devices from Ivaylo Dimitrov.
- PSCI firmware interface using new arm-smc calling convention from
Jens Wiklander.
- Runtime patching for udiv/sdiv instructions for ARMv7 CPUs that
support these instructions from Nicolas Pitre.
- L2x0 cache updates from Dirk B and Linus Walleij.
- Randconfig fixes from Arnd Bergmann.
- ARMv7M (nommu) updates from Ezequiel Garcia.
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (34 commits)
ARM: 8481/2: drivers: psci: replace psci firmware calls
ARM: 8480/2: arm64: add implementation for arm-smccc
ARM: 8479/2: add implementation for arm-smccc
ARM: 8478/2: arm/arm64: add arm-smccc
ARM: 8494/1: mm: Enable PXN when running non-LPAE kernel on LPAE processor
ARM: 8496/1: OMAP: RX51: save ATAGS data in the early boot stage
ARM: 8495/1: ATAGS: move save_atags() to arch/arm/include/asm/setup.h
ARM: 8452/3: PJ4: make coprocessor access sequences buildable in Thumb2 mode
ARM: 8482/1: l2x0: make it possible to disable outer sync from DT
ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI
ARM: 8487/1: Remove IPI_CALL_FUNC_SINGLE
ARM: 8485/1: cpuidle: remove cpu parameter from the cpuidle_ops suspend hook
ARM: 8484/1: Documentation: l2c2x0: Mention separate controllers explicitly
ARM: 8483/1: Documentation: l2c: Rename l2cc to l2c2x0
ARM: 8477/1: runtime patch udiv/sdiv instructions into __aeabi_{u}idiv()
ARM: 8476/1: VDSO: use PTR_ERR_OR_ZERO for vma check
ARM: 8453/2: proc-v7.S: don't locate temporary stack space in .text section
ARM: add UEFI stub support
ARM: wire up UEFI init and runtime support
ARM: only consider memblocks with NOMAP cleared for linear mapping
...
Merge tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm[64] perf updates from Will Deacon:
"In the past, I have funnelled perf updates through the respective
architecture trees, but now that the arm/arm64 perf driver has been
largely consolidated under drivers/perf/, it makes more sense to send
a separate pull, particularly as I'm listed as maintainer for all the
files involved. I offered the branch to arm-soc, but Arnd suggested
that I just send it to you directly.
So, here is the arm/arm64 perf queue for 4.5. The main features are
described below, but the most useful change is from Drew, which
advertises our architected event mapping in sysfs so that the perf
tool is a lot more user friendly and no longer requires the use of
magic hex constants for profiling common events.
- Support for the CPU PMU in Cortex-A72
- Add sysfs entries to describe the architected events and their
mappings for PMUv{1-3}"
* tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: perf: add support for Cortex-A72
arm64: perf: add format entry to describe event -> config mapping
ARM: perf: add format entry to describe event -> config mapping
arm64: kernel: enforce pmuserenr_el0 initialization and restore
arm64: perf: Correct Cortex-A53/A57 compatible values
arm64: perf: Add event descriptions
arm64: perf: Convert event enums to #defines
arm: perf: Add event descriptions
arm: perf: Convert event enums to #defines
drivers/perf: kill armpmu_register
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Here is the core arm64 queue for 4.5. As you might expect, the
Christmas break resulted in a number of patches not making the final
cut, so 4.6 is likely to be larger than usual. There's still some
useful stuff here, however, and it's detailed below.
The EFI changes have been Reviewed-by Matt and the memblock change got
an "OK" from akpm.
Summary:
- Support for a separate IRQ stack, although we haven't reduced the
size of our thread stack just yet since we don't have enough data
to determine a safe value
- Refactoring of our EFI initialisation and runtime code into
drivers/firmware/efi/ so that it can be reused by arch/arm/.
- Ftrace improvements when unwinding in the function graph tracer
- Document our silicon errata handling process
- Cache flushing optimisation when mapping executable pages
- Support for hugetlb mappings using the contiguous hint in the pte"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (45 commits)
arm64: head.S: use memset to clear BSS
efi: stub: define DISABLE_BRANCH_PROFILING for all architectures
arm64: entry: remove pointless SPSR mode check
arm64: mm: move pgd_cache initialisation to pgtable_cache_init
arm64: module: avoid undefined shift behavior in reloc_data()
arm64: module: fix relocation of movz instruction with negative immediate
arm64: traps: address fallout from printk -> pr_* conversion
arm64: ftrace: fix a stack tracer's output under function graph tracer
arm64: pass a task parameter to unwind_frame()
arm64: ftrace: modify a stack frame in a safe way
arm64: remove irq_count and do_softirq_own_stack()
arm64: hugetlb: add support for PTE contiguous bit
arm64: Use PoU cache instr for I/D coherency
arm64: Defer dcache flush in __cpu_copy_user_page
arm64: reduce stack use in irq_handler
arm64: mm: ensure that the zero page is visible to the page table walker
arm64: Documentation: add list of software workarounds for errata
arm64: mm: place __cpu_setup in .text
arm64: cmpxchg: Don't incldue linux/mmdebug.h
arm64: mm: fold alternatives into .init
...
Currently we use an open-coded memzero to clear the BSS. As it is a
trivial implementation, it is sub-optimal.
Our optimised memset doesn't use the stack, is position-independent, and
for the memzero case can use DC ZVA to clear large blocks
efficiently. In __mmap_switched the MMU is on and there are no live
caller-saved registers, so we can safely call an uninstrumented memset.
This patch changes __mmap_switched to use memset when clearing the BSS.
We use the __pi_memset alias so as to avoid any instrumentation in all
kernel configurations.
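The new sequence is assembly, but it is roughly equivalent to the
following C (a sketch; __bss_start/__bss_stop are the linker-provided
section bounds):
	extern char __bss_start[], __bss_stop[];
	extern void *__pi_memset(void *s, int c, size_t n);
	/* Clear the BSS via the uninstrumented, position-independent alias. */
	__pi_memset(__bss_start, 0, __bss_stop - __bss_start);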
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In work_pending, we may skip work if the stacked SPSR value represents
anything other than an EL0 context. We then immediately invoke the
kernel_exit 0 macro as part of ret_to_user, assuming a return to EL0.
This is somewhat confusing.
We use work_pending as part of the ret_to_user/ret_fast_syscall state
machine. We only use ret_fast_syscall in the return from an SVC issued
from EL0. We use ret_to_user for return from EL0 exception handlers and
also for return from ret_from_fork in the case the task was not a kernel
thread (i.e. it is a user task).
Thus in all cases the stacked SPSR value must represent an EL0 context,
and the check is redundant. This patch removes it, along with the now
unused no_work_pending label.
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Compilers may engage the improbability drive when encountering shifts
by a distance that is a multiple of the size of the operand type. Since
the required bounds check is very simple here, we can get rid of all the
fuzzy masking, shifting and comparing, and use the documented bounds
directly.
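For illustration (a hedged sketch, not the exact patch hunk): shifting a
64-bit value by 64 bits is undefined behaviour in C, so a bounds check
built from shifts can misbehave, whereas the documented bounds can be
tested directly:
	/* Undefined when len == 8: the shift distance equals the operand width. */
	u64 mask = (1ULL << (len * 8)) - 1;
	/* Instead, compare against the documented bounds of the field: */
	if (sval < S32_MIN || sval > U32_MAX)	/* e.g. for a 32-bit field */
		return -ERANGE;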
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The test whether a movz instruction with a signed immediate should be
turned into a movn instruction (i.e., when the immediate is negative)
is flawed, since the value of imm is always positive. Also, the
subsequent bounds check is incorrect since the limit update never
executes, due to the fact that the imm_type comparison will always be
false for negative signed immediates.
Let's fix this by performing the sign test on sval directly, and
replacing the bounds check with a simple comparison against U16_MAX.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: tidied up use of sval, renamed MOVK enum value to MOVKZ]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Switch to a generic interface for issuing SMC/HVC based on the ARM SMC
Calling Convention. Remove the now unused psci-call.S.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Adds implementation for arm-smccc and enables CONFIG_HAVE_SMCCC.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cortex-A72 has a PMUv3 implementation that is compatible with the PMU
implemented by Cortex-A57.
This patch hooks up the new compatible string so that the Cortex-A57
event mappings are used.
Signed-off-by: Will Deacon <will.deacon@arm.com>
It's all very well providing an events directory to userspace that
details our events in terms of "event=0xNN", but if we don't define how
to encode the "event" field in the perf attr.config, then it's a waste
of time.
This patch adds a single format entry to describe that the event field
occupies the bottom 10 bits of our config field on ARMv8 (PMUv3).
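The entry itself is tiny; a minimal sketch (the attribute and array names
here are illustrative) looks like:
	PMU_FORMAT_ATTR(event, "config:0-9");	/* event lives in config[9:0] */
	static struct attribute *armv8_pmuv3_format_attrs[] = {
		&format_attr_event.attr,
		NULL,
	};
With this in place, userspace can spell out events explicitly, e.g.
something like perf stat -e armv8_pmuv3/event=0x11/ (PMU name illustrative).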
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit ac7b406c1a ("arm64: Use pr_* instead of printk") was a fairly
mindless s/printk/pr_*/ change driven by a complaint from checkpatch.
As is usual with such changes, this has led to some odd behaviour on
arm64:
* syslog now picks up the "pr_emerg" line from dump_backtrace, but not
the actual trace, which leads to a bunch of "kernel:Call trace:"
lines in the log
* __{pte,pmd,pgd}_error print at KERN_CRIT, as opposed to KERN_ERR
which is used by other architectures.
This patch restores the original printk behaviour for dump_backtrace
and downgrades the pgtable error macros to KERN_ERR.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Function graph tracer modifies a return address (LR) in a stack frame
to hook a function return. This will result in many useless entries
(return_to_handler) showing up in
a) a stack tracer's output
b) perf call graph (with perf record -g)
c) dump_backtrace (at panic et al.)
For example, in case of a),
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
$ echo 1 > /proc/sys/kernel/stack_trace_enabled
$ cat /sys/kernel/debug/tracing/stack_trace
Depth Size Location (54 entries)
----- ---- --------
0) 4504 16 gic_raise_softirq+0x28/0x150
1) 4488 80 smp_cross_call+0x38/0xb8
2) 4408 48 return_to_handler+0x0/0x40
3) 4360 32 return_to_handler+0x0/0x40
...
In case of b),
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
$ perf record -e mem:XXX:x -ag -- sleep 10
$ perf report
...
| | |--0.22%-- 0x550f8
| | | 0x10888
| | | el0_svc_naked
| | | sys_openat
| | | return_to_handler
| | | return_to_handler
...
In case of c),
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
$ echo c > /proc/sysrq-trigger
...
Call trace:
[<ffffffc00044d3ac>] sysrq_handle_crash+0x24/0x30
[<ffffffc000092250>] return_to_handler+0x0/0x40
[<ffffffc000092250>] return_to_handler+0x0/0x40
...
This patch replaces such entries with real addresses preserved in
current->ret_stack[] at unwind_frame(). This way, we can cover all
the cases.
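The substitution in unwind_frame() looks roughly like this (a sketch; the
per-frame graph index field is assumed):
	#ifdef CONFIG_FUNCTION_GRAPH_TRACER
		if (tsk->ret_stack &&
		    frame->pc == (unsigned long)return_to_handler)
			/* Replace the hooked LR with the saved original. */
			frame->pc = tsk->ret_stack[frame->graph--].ret;
	#endif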
Reviewed-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[will: fixed minor context changes conflicting with irq stack bits]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Function graph tracer modifies a return address (LR) in a stack frame
to hook a function's return. This will result in many useless entries
(return_to_handler) showing up in a call stack list.
We will fix this problem in a later patch ("arm64: ftrace: fix a stack
tracer's output under function graph tracer"). But since real return
addresses are saved in ret_stack[] array in struct task_struct,
unwind functions need to be notified of, in addition to a stack pointer
address, which task is being traced in order to find out real return
addresses.
This patch extends unwind functions' interfaces by adding an extra
argument of a pointer to task_struct.
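After this change the interfaces look like the following (a sketch):
	extern int unwind_frame(struct task_struct *tsk,
				struct stackframe *frame);
	extern void walk_stackframe(struct task_struct *tsk,
				    struct stackframe *frame,
				    int (*fn)(struct stackframe *, void *),
				    void *data);
Passing tsk == current keeps the existing behaviour for callers that walk
their own stack.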
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Function graph tracer modifies a return address (LR) in a stack frame by
calling ftrace_prepare_return() in a traced function's function prologue.
The current code does this modification before preserving an original
address at ftrace_push_return_trace() and there is always a small window
of inconsistency when an interrupt occurs.
This doesn't matter once an interrupt stack is introduced, because the
stack tracer won't be invoked in an interrupt context. But it would be
better to proactively minimize such a window by moving the LR modification
after ftrace_push_return_trace().
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
sysrq_handle_reboot() re-enables interrupts while on the irq stack. The
irq_stack implementation wrongly assumed this would only ever happen
via the softirq path, allowing it to update irq_count late, in
do_softirq_own_stack().
This means if an irq occurs in sysrq_handle_reboot(), during
emergency_restart() the stack will be corrupted, as irq_count wasn't
updated.
Lose the optimisation. Instead of moving the adding/subtracting of
irq_count into irq_stack_entry/irq_stack_exit, remove it entirely, and
compare sp_el0 (struct thread_info) with sp & ~(THREAD_SIZE - 1). This
tells us whether we are on a task stack; if so, we can safely switch to
the irq stack.
Finally, remove do_softirq_own_stack(); we don't need it anymore.
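In C terms, the new test is simply the following (a hedged sketch of what
the entry assembly checks):
	static inline bool on_task_stack(unsigned long sp, unsigned long sp_el0)
	{
		/*
		 * sp_el0 caches current's struct thread_info, i.e. the task
		 * stack base; masking sp gives the base of whatever stack we
		 * are currently on.
		 */
		return (sp & ~(THREAD_SIZE - 1)) == sp_el0;
	}
If the test fails we are already on the irq stack and must not switch again.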
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use get_thread_info macro]
Signed-off-by: Will Deacon <will.deacon@arm.com>
The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
Current kernel code resets that register value iff the core pmu device is
correctly probed in the kernel. On platforms with missing DT pmu nodes (or
disabled perf events in the kernel), the pmu is not probed, therefore the
pmuserenr_el0 register is not reset in the kernel, which means that its
value retains the reset value that is architecturally UNKNOWN (system
may run with e.g. pmuserenr_el0 == 0x1, which means that PMU counter access
is available at EL0, which must be disallowed).
This patch adds code that resets pmuserenr_el0 on cold boot and restores
it on core resume from shutdown, so that the pmuserenr_el0 setup is
always enforced in the kernel.
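The reset itself amounts to a single register write from the cpu
setup/resume paths (illustrative):
	/* Disable EL0 access to the PMU counters until perf re-enables it. */
	asm volatile("msr pmuserenr_el0, xzr");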
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
This necessarily duplicates paravirt.h and paravirt.c from ARM.
The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching needed.
This allows us to make use of steal_account_process_tick for stolen
ticks accounting.
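The interface is minimal; a sketch of the duplicated pieces:
	struct pv_time_ops {
		unsigned long long (*steal_clock)(int cpu);
	};
	extern struct pv_time_ops pv_time_ops;
	static inline u64 paravirt_steal_clock(int cpu)
	{
		return pv_time_ops.steal_clock(cpu);
	}
A hypervisor backend (Xen here) fills in steal_clock, and the scheduler's
steal time accounting reaches it via steal_account_process_tick().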
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
The code for switching to irq_stack stores three pieces of information on
the stack: fp+lr, as a fake stack frame (that lets us walk back onto the
interrupted task's stack frame), and the address of the struct pt_regs that
contains the register values from kernel entry (which dump_backtrace()
will print in any stack trace).
To reduce this, we store fp, and the pointer to the struct pt_regs.
unwind_frame() can recognise this as the irq_stack dummy frame (as it only
appears at the top of the irq_stack), and use the struct pt_regs values
to find the missing interrupted link-register.
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
As we've now rewritten most of our code-base in C, most of the
KVM-specific code in asm-offset.c is useless. Delete-time again!
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Having the system register numbers as #defines has been a pain
since day one, as the ordering is pretty fragile, and moving
things around leads to renumbering and epic conflict resolutions.
Now that we're mostly accessing the sysreg file in C, an enum is
a much better type to use, and we can clean things up a bit.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Currently we treat the alternatives separately from other data that's
only used during initialisation, using separate .altinstructions and
.altinstr_replacement linker sections. These are freed for general
allocation separately from .init*. This is problematic as:
* We do not remove execute permissions, as we do for .init, leaving the
memory executable.
* We pad between them, making the kernel Image binary up to PAGE_SIZE
bytes larger than necessary.
This patch moves the two sections into the contiguous region used for
.init*. This saves some memory, ensures that we remove execute
permissions, and allows us to remove some code made redundant by this
reorganisation.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently we place an ALIGN_DEBUG_RO between text and data for the .text
and .init sections, and depending on configuration each of these may
result in up to SECTION_SIZE bytes worth of padding (for
DEBUG_ALIGN_RODATA).
We make no distinction between the text and data in each of these
sections at any point when creating the initial page tables in head.S.
We also make no distinction when modifying the tables; __map_memblock,
fixup_executable, mark_rodata_ro, and fixup_init only work at section
granularity. Thus this padding is unnecessary.
For the split between init text and data we impose a minimum alignment of
16 bytes, but this is also unnecessary. The init data is output
immediately after the padding, before any symbols are defined, so this is
not required to keep a symbol for a linker section array correctly
associated with the data. Any objects within the section will be given
at least their usual alignment regardless.
This patch removes the redundant padding.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On entry from el0, we save all the registers on the kernel stack, and
restore them before returning. x29 remains unchanged when we call out
to C code, which will store x29 as the frame-pointer on the stack.
Instead, write 0 into x29 after entry from el0, to avoid any risk of
tracing into user space.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When unwind_frame() reaches the bottom of the irq_stack, the last fp
points to the original task stack. unwind_frame() uses
IRQ_STACK_TO_TASK_STACK() to find the sp value. If either value is
wrong, we may end up walking a corrupt stack.
Check these values are sane by testing if they are both on the stack
pointed to by current->stack.
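A sketch of such a sanity check (the helper name is illustrative):
	static bool on_stack(struct task_struct *tsk, unsigned long addr)
	{
		unsigned long low = (unsigned long)task_stack_page(tsk);
		return addr >= low && addr < low + THREAD_SIZE;
	}
Both the fp and the recovered sp must pass this test before the unwinder
follows them.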
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
irq_stack is a per_cpu variable that needs to be accessed from entry.S.
Use an assembler macro instead of the unreadable details.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This refactors the EFI init and runtime code that will be shared
between arm64 and ARM so that it can be built for both archs.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This splits off the early EFI init and runtime code that
- discovers the EFI params and the memory map from the FDT, and installs
the memblocks and config tables.
- prepares and installs the EFI page tables so that UEFI Runtime Services
can be invoked at the virtual address installed by the stub.
This will allow it to be reused for 32-bit ARM.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change the EFI memory reservation logic to use memblock_mark_nomap()
rather than memblock_reserve() to mark UEFI reserved regions as
occupied. In addition to reserving them against allocations done by
memblock, this will also prevent them from being covered by the linear
mapping.
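A sketch of the idea (is_reserve_region() stands in for the driver's
actual reservation test):
	if (is_reserve_region(md))	/* UEFI region the OS must not touch */
		memblock_mark_nomap(md->phys_addr,
				    md->num_pages << EFI_PAGE_SHIFT);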
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Running with CONFIG_DEBUG_SPINLOCK=y can trigger a BUG with the new IRQ
stack code:
BUG: spinlock lockup suspected on CPU#1
This is due to the IRQ_STACK_TO_TASK_STACK macro incorrectly retrieving
the task stack pointer stashed at the top of the IRQ stack.
Sayeth James:
| Yup, this is what is happening. Its an off-by-one due to broken
| thinking about how the stack works. My broken thinking was:
|
| > top ------------
| > | dummy_lr | <- irq_stack_ptr
| > ------------
| > | x29 |
| > ------------
| > | x19 | <- irq_stack_ptr - 0x10
| > ------------
| > | xzr |
| > ------------
|
| But the stack-pointer is decreased before use. So it actually looks
| like this:
|
| > ------------
| > | | <- irq_stack_ptr
| > top ------------
| > | dummy_lr |
| > ------------
| > | x29 | <- irq_stack_ptr - 0x10
| > ------------
| > | x19 |
| > ------------
| > | xzr | <- irq_stack_ptr - 0x20
| > ------------
|
| The value being used as the original stack is x29, which in all the
| tests is sp but without the current frames data, hence there are no
| missing frames in the output.
|
| Jungseok Lee picked it up with a 32bit user space because aarch32
| can't use x29, so it remains 0 forever. The fix he posted is correct.
This patch fixes the macro and adds some of this wisdom to a comment,
so that the layout of the IRQ stack is well understood.
Cc: James Morse <james.morse@arm.com>
Reported-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
entry.S is modified to switch to the per_cpu irq_stack during el{0,1}_irq.
irq_count is used to detect recursive interrupts on the irq_stack; it is
updated late, by do_softirq_own_stack() when called on the irq_stack, before
__do_softirq() re-enables interrupts to process softirqs.
do_softirq_own_stack() is added by this patch, but does not yet switch
stack.
This patch adds the dummy stack frame and data needed by the previous
stack tracing patches.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch allows unwind_frame() to traverse from interrupt stack to task
stack correctly. It requires data from a dummy stack frame, created
during irq_stack_entry(), added by a later patch.
A similar approach is taken to modify dump_backtrace(), which expects to
find struct pt_regs underneath any call to functions marked __exception.
When on an irq_stack, the struct pt_regs is stored on the old task stack,
the location of which is stored in the dummy stack frame.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[james.morse: merged two patches, reworked for per_cpu irq_stacks, and
no alignment guarantees, added irq_stack definitions]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We need to figure out how to manage struct thread_info data once an IRQ
stack is introduced. Under the current thread_info calculation logic, the
struct thread_info contents would have to be copied to the IRQ stack on
every context switch, which is too expensive an approach to keep
supporting.
Instead, this patch uses sp_el0, which is an unused scratch register in
EL1 context. Using sp_el0 not only simplifies the management, but also
avoids a large increase in text section size: the masking operations with
THREAD_SIZE scattered across many places are no longer needed for a
statically allocated IRQ stack.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Make sure to clear out any ptrace singlestep state when a ptrace(2)
PTRACE_DETACH call is made on arm64 systems.
Otherwise, the previously ptraced task will die off with a SIGTRAP
signal if the debugger just previously singlestepped the ptraced task.
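The fix boils down to clearing the singlestep state in the arch
ptrace_disable() hook, which core ptrace invokes on detach (a sketch):
	void ptrace_disable(struct task_struct *child)
	{
		/* Drop any pending hardware singlestep configuration. */
		user_disable_single_step(child);
	}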
Cc: <stable@vger.kernel.org>
Signed-off-by: John Blackwood <john.blackwood@ccur.com>
[will: added comment to justify why this is in the arch code]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bring the linker script in line with the recent increase of
L1_CACHE_BYTES to 128. Replace the hardcoded value of 64 with the
symbolic constant.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: fix up RW_DATA_SECTION as well]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
There is no need to worry about module or __init text disappearing,
because ftrace has a module notifier that is called when a module is
being unloaded, before its text goes away. That notifier grabs the
ftrace_lock mutex and removes the module's functions from the ftrace
list, so ftrace will no longer make any modifications to that module's
text; the updates that make functions traced or not are likewise done
under the ftrace_lock mutex.
And by now, __init section code should not be modified by ftrace anyway,
because it is blacklisted in recordmcount.c and ignored by ftrace.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
For ftrace on arm64, kstop_machine, which is hugely disruptive to a
running system, is not needed to convert nops to ftrace calls or back.
The instructions to be modified (NOP, B or BL) are all covered by the
architecture's rules for "concurrent modification and execution of
instructions": they can be executed by one thread of execution while
being modified by another thread of execution, without requiring
explicit synchronization.
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
These functions/variables are not needed after booting, so mark them
as __init or __initdata.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- Build fix when !CONFIG_UID16 (the patch is touching generic files but
it only affects arm64 builds; submitted by Arnd Bergmann)
- EFI fixes to deal with early_memremap() returning NULL and correctly
mapping run-time regions
- Fix CPUID register extraction of unsigned fields (not to be
sign-extended)
- ASID allocator fix to deal with long-running tasks over multiple
generation roll-overs
- Revert support for marking page ranges as contiguous PTEs (it leads
to TLB conflicts and requires additional non-trivial kernel changes)
- Proper early_alloc() failure check
- Disable KASan for 48-bit VA and 16KB page configuration (the pgd is
larger than the KASan shadow memory)
- Update the fault_info table (original descriptions based on early
engineering spec)
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: efi: fix initcall return values
arm64: efi: deal with NULL return value of early_memremap()
arm64: debug: Treat the BRPs/WRPs as unsigned
arm64: cpufeature: Track unsigned fields
arm64: cpufeature: Add helpers for extracting unsigned values
Revert "arm64: Mark kernel page ranges contiguous"
arm64: mm: keep reserved ASIDs in sync with mm after multiple rollovers
arm64: KASAN depends on !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
arm64: efi: correctly map runtime regions
arm64: mm: fix fault_info table xFSC decoding
arm64: fix building without CONFIG_UID16
arm64: early_alloc: Fix check for allocation failure
Even though initcall return values are typically ignored, the
prototype is to return 0 on success or a negative errno value on
error. So fix the arm_enable_runtime_services() implementation to
return 0 on conditions that are not in fact errors, and return a
meaningful error code otherwise.
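A sketch of the resulting shape (conditions abbreviated; assume the usual
efi_enabled()/efi_runtime_disabled() helpers):
	static int __init arm_enable_runtime_services(void)
	{
		if (!efi_enabled(EFI_BOOT)) {
			pr_info("EFI services will not be available.\n");
			return 0;	/* nothing to do: not an error */
		}
		if (efi_runtime_disabled()) {
			pr_info("EFI runtime services will be disabled.\n");
			return 0;	/* explicitly requested: not an error */
		}
		/* ...set up the runtime services, returning a negative
		 * errno value on genuine failure... */
		return 0;
	}
	early_initcall(arm_enable_runtime_services);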
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add NULL return value checks to two invocations of early_memremap()
in the UEFI init code. For the UEFI configuration tables, we just
warn since we have a better chance of being able to report the issue
in a way that can actually be noticed by a human operator if we don't
abort right away. For the UEFI memory map, however, all we can do is
panic() since we cannot proceed without a description of memory.
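A sketch of the two policies (variable names illustrative):
	config_tables = early_memremap(efi.systab->tables, table_size);
	if (!config_tables) {
		pr_warn("Unable to map EFI config table array.\n");
		return -ENOMEM;		/* warn, report, carry on */
	}
	memmap.map = early_memremap(memmap.phys_map, map_size);
	if (!memmap.map)
		panic("Unable to map EFI memory map.\n");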
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Some of the feature bits have unsigned values and need
to be treated accordingly to avoid errors. Add the property
to the feature bits and use the appropriate field extraction helpers.
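A sketch of the unsigned counterpart to the existing signed extractor:
	static inline u64
	cpuid_feature_extract_unsigned_field_width(u64 features, int field,
						   int width)
	{
		return (features >> field) & ((1ULL << width) - 1);
	}
Signed extraction would instead sign-extend from the field's top bit,
misinterpreting values with that bit set as negative.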
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The kernel may use a page granularity of 4K, 16K, or 64K depending on
configuration.
When mapping EFI runtime regions, we use memrange_efi_to_native to round
the physical base address of a region down to a kernel page boundary,
and round the size up to a kernel page boundary, adding the residue left
over from rounding down the physical base address. We do not round down
the virtual base address.
In __create_mapping we account for the offset of the virtual base from a
granule boundary, adding the residue to the size before rounding the
base down to said granule boundary.
Thus we account for the residue twice, and when the residue is non-zero
will cause __create_mapping to map an additional page at the end of the
region. Depending on the memory map, this page may be in a region we are
not intended/permitted to map, or may clash with a different region that
we wish to map. In typical cases, mapping the next item in the memory
map will overwrite the erroneously created entry, as we sort the memory
map in the stub.
As __create_mapping can cope with base addresses which are not page
aligned, we can instead rely on it to map the region appropriately, and
simplify efi_virtmap_init by removing the unnecessary code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults
when a Stage 1 permission fault or device alignment fault should
have been reported.
This patch implements the workaround (which is to validate that the
Stage-1 translation actually succeeds) by using code patching.
Cc: stable@vger.kernel.org
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
As previously reported, some userspace applications depend on the bogomips
value shown by /proc/cpuinfo. Although there is much less legacy impact on
aarch64 than arm, it does break libvirt.
This patch reverts commit 326b16db9f ("arm64: delay: don't bother
reporting bogomips in /proc/cpuinfo"), but with some tweak due to
context change and without the pr_info().
Fixes: 326b16db9f ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo")
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.12+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As pointed out by Russell King in response to the proposed ARM version
of this code, the sequence to switch between the UEFI runtime mapping
and current's actual userland mapping (and vice versa) is potentially
unsafe, since it leaves a time window between the switch to the new
page tables and the TLB flush where speculative accesses may hit on
stale global TLB entries.
So instead, use non-global mappings, and perform the switch via the
ordinary ASID-aware context switch routines.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The function graph tracer adds instrumentation that is required to trace
both entry and exit of a function. In particular the function graph
tracer updates the "return address" of a function in order to insert
a trace callback on function exit.
Kernel power management functions like cpu_suspend() are called upon
power down entry with functions called "finishers" that are in turn
called to trigger the power down sequence, but they may not return to the
kernel through the normal return path.
When the core resumes from low-power it returns to the cpu_suspend()
function through the cpu_resume path, which leaves the trace stack frame
set up by the function tracer in an inconsistent state upon return to the
kernel when tracing is enabled.
This patch fixes the issue by pausing/resuming the function graph
tracer on the thread executing cpu_suspend() (i.e. the function call that
subsequently triggers the "suspend finishers"), so that the function graph
tracer state is kept consistent across functions that enter power down
states and never return, by effectively disabling the graph tracer while
they are executing.
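The fix brackets the suspend entry, roughly as follows (a hedged sketch;
the inner entry point name is assumed):
	pause_graph_tracing();
	ret = __cpu_suspend_enter(arg, fn);	/* may come back via cpu_resume */
	unpause_graph_tracing();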
Fixes: 819e50e25d ("arm64: Add ftrace support")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add additional information about the ARM architected hardware events
to make counters self describing. This makes the hardware PMUs easier
to use as perf list contains possible events instead of users having
to refer to documentation like the ARM TRMs.
Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The enums are not necessary and this allows the event values to be
used to construct static strings at compile time.
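As an illustration (a sketch using the generic PMU_EVENT_ATTR_STRING
helper; the constant's name and placement are illustrative), a #define can
be stringified into the sysfs event description at compile time:
	#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES	0x11
	PMU_EVENT_ATTR_STRING(cpu_cycles, armv8_event_attr_cpu_cycles,
			      "event=" __stringify(ARMV8_PMUV3_PERFCTR_CPU_CYCLES));
With an enum, __stringify() would yield the identifier rather than its
value, so the conversion is a prerequisite for the event description
patches.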
Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes and clean-ups from Catalin Marinas:
"Here's a second pull request for this merging window with some
fixes/clean-ups:
- __cmpxchg_double*() return type fix to avoid truncation of a long
to int and subsequent logical "not" in cmpxchg_double()
misinterpreting the operation success/failure
- BPF fixes for mod and div by zero
- Fix compilation with STRICT_MM_TYPECHECKS enabled
- VDSO build fix without libgcov
- Some static and __maybe_unused annotations
- Kconfig clean-up (FRAME_POINTER)
- defconfig update for CRYPTO_CRC32_ARM64"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: suspend: make hw_breakpoint_restore static
arm64: mmu: make split_pud and fixup_executable static
arm64: smp: make of_parse_and_init_cpus static
arm64: use linux/types.h in kvm.h
arm64: build vdso without libgcov
arm64: mark cpus_have_hwcap as __maybe_unused
arm64: remove redundant FRAME_POINTER kconfig option and force to select it
arm64: fix R/O permissions of FDT mapping
arm64: fix STRICT_MM_TYPECHECKS issue in PTE_CONT manipulation
arm64: bpf: fix mod-by-zero case
arm64: bpf: fix div-by-zero case
arm64: Enable CRYPTO_CRC32_ARM64 in defconfig
arm64: cmpxchg_dbl: fix return value type
hw_breakpoint_restore is only used within suspend.c, so it can be
declared static.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
of_parse_and_init_cpus is only called from within smp.c, so it can be
declared static.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On a cross-toolchain without glibc support, libgcov may not be
available, and attempting to build an arm64 kernel with GCOV
enabled then results in a build error:
/home/arnd/cross-gcc/lib/gcc/aarch64-linux/5.2.1/../../../../aarch64-linux/bin/ld: cannot find -lgcov
We don't really want to link libgcov into the vdso anyway, so
this patch just disables GCOV in the vdso directory, just as
we do for most other architectures.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
cpus_have_hwcap() is defined as a 'static' function and only used in
one place, which is inside an #ifdef, so we get a warning when
the only user is disabled:
arch/arm64/kernel/cpufeature.c:699:13: warning: 'cpus_have_hwcap' defined but not used [-Wunused-function]
This marks the function as __maybe_unused, so the compiler knows that
it can drop the function definition without warning about it.
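A sketch of the annotation (the function body is reduced to a stub here):
	static bool __maybe_unused cpus_have_hwcap(unsigned long hwcap)
	{
		return !!(elf_hwcap & hwcap);	/* illustrative body */
	}
Unlike wrapping the definition in a matching #ifdef, __maybe_unused keeps
the code visible to the compiler in all configurations, so it cannot
bit-rot silently.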
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 37b01d53ce ("arm64/HWCAP: Use system wide safe values")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC driver updates from Olof Johansson:
"As we've enabled multiplatform kernels on ARM, and greatly done away
with the contents under arch/arm/mach-*, there's still need for
SoC-related drivers to go somewhere.
Many of them go in through other driver trees, but we still have
drivers/soc to hold some of the "doesn't fit anywhere" lowlevel code
that might be shared between ARM and ARM64 (or just in general makes
sense to not have under the architecture directory).
This branch contains mostly such code:
- Drivers for qualcomm SoCs for SMEM, SMD and SMD-RPM, used to
communicate with power management blocks on these SoCs for use by
clock, regulator and bus frequency drivers.
- Allwinner Reduced Serial Bus driver, again used to communicate with
PMICs.
- Drivers for ARM's SCPI (System Control Processor). Not to be
confused with PSCI (Power State Coordination Interface). SCPI is
used to communicate with the assistant embedded cores doing power
management, and we have yet to see how many of them will implement
this for their hardware vs abstracting in other ways (or not at all
like in the past).
- To make confusion between SCPI and PSCI more likely, this release
also includes an update of PSCI to interface version 1.0.
- Rockchip support for power domains.
- A driver to talk to the firmware on Raspberry Pi"
* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (57 commits)
soc: qcom: smd-rpm: Correct size of outgoing message
bus: sunxi-rsb: Add driver for Allwinner Reduced Serial Bus
bus: sunxi-rsb: Add Allwinner Reduced Serial Bus (RSB) controller bindings
ARM: bcm2835: add mutual inclusion protection
drivers: psci: make PSCI 1.0 functions initialization version dependent
dt-bindings: Correct paths in Rockchip power domains binding document
soc: rockchip: power-domain: don't try to print the clock name in error case
soc: qcom/smem: add HWSPINLOCK dependency
clk: berlin: add cpuclk
ARM: berlin: dts: add CLKID_CPU for BG2Q
ARM: bcm2835: Add the Raspberry Pi firmware driver
soc: qcom: smem: Move RPM message ram out of smem DT node
soc: qcom: smd-rpm: Correct the active vs sleep state flagging
soc: qcom: smd: delete unneeded of_node_put
firmware: qcom-scm: build for correct architecture level
soc: qcom: smd: Correct SMEM items for upper channels
qcom-scm: add missing prototype for qcom_scm_is_available()
qcom-scm: fix endianess issue in __qcom_scm_is_call_available
soc: qcom: smd: Reject send of too big packets
soc: qcom: smd: Handle big endian CPUs
...
Here's the "big" driver core updates for 4.4-rc1. Primarily a bunch of
debugfs updates, with a smattering of minor driver core fixes and
updates as well.
All have been in linux-next for a long time.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
iEYEABECAAYFAlY6ePQACgkQMUfUDdst+ymNTgCgpP0CZw57GpwF/Hp2L/lMkVeo
Kx8AoKhEi4iqD5fdCQS9qTfomB+2/M6g
=g7ZO
-----END PGP SIGNATURE-----
Merge tag 'driver-core-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core updates from Greg KH:
"Here's the "big" driver core updates for 4.4-rc1. Primarily a bunch
of debugfs updates, with a smattering of minor driver core fixes and
updates as well.
All have been in linux-next for a long time"
* tag 'driver-core-4.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
debugfs: Add debugfs_create_ulong()
of: to support binding numa node to specified device in devicetree
debugfs: Add read-only/write-only bool file ops
debugfs: Add read-only/write-only size_t file ops
debugfs: Add read-only/write-only x64 file ops
debugfs: Consolidate file mode checks in debugfs_create_*()
Revert "mm: Check if section present during memory block (un)registering"
driver-core: platform: Provide helpers for multi-driver modules
mm: Check if section present during memory block (un)registering
devres: fix a for loop bounds check
CMA: fix CONFIG_CMA_SIZE_MBYTES overflow in 64bit
base/platform: assert that dev_pm_domain callbacks are called unconditionally
sysfs: correctly handle short reads on PREALLOC attrs.
base: soc: siplify ida usage
kobject: move EXPORT_SYMBOL() macros next to corresponding definitions
kobject: explain what kobject's sd field is
debugfs: document that debugfs_remove*() accepts NULL and error values
debugfs: Pass bool pointer to debugfs_create_bool()
ACPI / EC: Fix broken 64bit big-endian users of 'global_lock'
Merge tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
"Quite a new features are included this time.
First off, the Collaborative Processor Performance Control interface
(version 2) defined by ACPI will now be supported on ARM64 along with
a cpufreq frontend for CPU performance scaling.
Second, ACPI gets a new infrastructure for the early probing of IRQ
chips and clock sources (along the lines of the existing similar
mechanism for DT).
Next, the ACPI core and the generic device properties API will now
support a recently introduced hierarchical properties extension of the
_DSD (Device Specific Data) ACPI device configuration object. If the
ACPI platform firmware uses that extension to organize device
properties in a hierarchical way, the kernel will automatically handle
it and make those properties available to device drivers via the
generic device properties API.
It will also be possible to build ACPICA's AML interpreter
debugger into the kernel now and use that to diagnose AML-related
problems more efficiently. In the future, this should make it
possible to single-step AML execution and do similar things.
Interesting stuff, although somewhat experimental at this point.
Finally, the PM core gets a new mechanism that can be used by device
drivers to distinguish between suspend-to-RAM (based on platform
firmware support) and suspend-to-idle (or other variants of system
suspend the platform firmware is not involved in) and possibly
optimize their device suspend/resume handling accordingly.
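As a rough illustration, a driver might key its suspend path off this
mechanism. The sketch below assumes the pm_suspend_via_firmware() helper
this mechanism provides; the mydrv_* names are purely hypothetical:

  #include <linux/device.h>
  #include <linux/suspend.h>

  static int mydrv_full_quiesce(struct device *dev)
  {
          /* ... device-specific quiescing would go here ... */
          return 0;
  }

  /* Hypothetical driver callback; only the pm_suspend_via_firmware()
   * check is taken from the mechanism described above. */
  static int mydrv_suspend(struct device *dev)
  {
          if (pm_suspend_via_firmware()) {
                  /* Platform firmware will power things down for us,
                   * so only minimal quiescing is needed here. */
                  return 0;
          }

          /* Suspend-to-idle: quiesce the device fully ourselves. */
          return mydrv_full_quiesce(dev);
  }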
In addition to that, some existing features are re-organized quite
substantially.
First, the ACPI-based handling of PCI host bridges on x86 and ia64 is
unified and the common code goes into the ACPI core (so as to reduce
code duplication and eliminate non-essential differences between the
two architectures in that area).
Second, the Operating Performance Points (OPP) framework is
reorganized to make the code easier to find and follow.
Next, the cpufreq core's sysfs interface is reorganized to get rid of
the "primary CPU" concept for configurations in which the same
performance scaling settings are shared between multiple CPUs.
Finally, some interfaces that aren't necessary any more are dropped
from the generic power domains framework.
On top of the above we have some minor extensions, cleanups and bug
fixes in multiple places, as usual.
Specifics:
- ACPICA update to upstream revision 20150930 (Bob Moore, Lv Zheng).
The most significant change is to allow the AML debugger to be
built into the kernel. On top of that there is an update related
to the NFIT table (the ACPI persistent memory interface) and a few
fixes and cleanups.
- ACPI CPPC2 (Collaborative Processor Performance Control v2) support
along with a cpufreq frontend (Ashwin Chaugule).
This can only be enabled on ARM64 at this point.
- New ACPI infrastructure for the early probing of IRQ chips and
clock sources (Marc Zyngier).
- Support for a new hierarchical properties extension of the ACPI
_DSD (Device Specific Data) device configuration object allowing
the kernel to handle hierarchical properties (provided by the
platform firmware this way) automatically and make them available
to device drivers via the generic device properties interface
(Rafael Wysocki).
- Generic device properties API extension to obtain the index of
a given string value in an array of strings, along the lines of
of_property_match_string(), but working for all of the supported
firmware node types, and support for the "dma-names" device
property based on it (Mika Westerberg).
- ACPI core fix to parse the MADT (Multiple APIC Description Table)
entries in the order expected by platform firmware (and mandated by
the specification) to avoid confusion on systems with more than 255
logical CPUs (Lukasz Anaczkowski).
- Consolidation of the ACPI-based handling of PCI host bridges on x86
and ia64 (Jiang Liu).
- ACPI core fixes to ensure that the correct IRQ number is used to
represent the SCI (System Control Interrupt) in the cases when it
has been re-mapped (Chen Yu).
- New ACPI backlight quirk for Lenovo IdeaPad S405 (Hans de Goede).
- ACPI EC driver fixes (Lv Zheng).
- Assorted ACPI fixes and cleanups (Dan Carpenter, Insu Yun, Jiri
Kosina, Rami Rosen, Rasmus Villemoes).
- New mechanism in the PM core allowing drivers to check if the
platform firmware is going to be involved in the upcoming system
suspend or if it has been involved in the suspend the system is
resuming from at the moment (Rafael Wysocki).
This should allow drivers to optimize their suspend/resume handling
in some cases and the changes include a couple of users of it (the
i8042 input driver, PCI PM).
- PCI PM fix to prevent runtime-suspended devices with PME enabled
from being resumed during system suspend even if they aren't
configured to wake up the system from sleep (Rafael Wysocki).
- New mechanism to report the number of a wakeup IRQ that woke up the
system from sleep last time (Alexandra Yates).
- Removal of unused interfaces from the generic power domains
framework and fixes related to latency measurements in that code
(Ulf Hansson, Daniel Lezcano).
- cpufreq core sysfs interface rework to make it handle CPUs that
share performance scaling settings (represented by a common cpufreq
policy object) more symmetrically (Viresh Kumar).
This should help to simplify the CPU offline/online handling among
other things.
- cpufreq core fixes and cleanups (Viresh Kumar).
- intel_pstate fixes related to the Turbo Activation Ratio (TAR)
mechanism on client platforms which causes the turbo P-states range
to vary depending on platform firmware settings (Srinivas
Pandruvada).
- intel_pstate sysfs interface fix (Prarit Bhargava).
- Assorted cpufreq driver (imx, tegra20, powernv, integrator) fixes
and cleanups (Bai Ping, Bartlomiej Zolnierkiewicz, Shilpasri G
Bhat, Luis de Bethencourt).
- cpuidle mvebu driver cleanups (Russell King).
- OPP (Operating Performance Points) framework code reorganization to
make it more maintainable (Viresh Kumar).
- Intel Broxton support for the RAPL (Running Average Power Limits)
power capping driver (Amy Wiles).
- Assorted power management code fixes and cleanups (Dan Carpenter,
Geert Uytterhoeven, Geliang Tang, Luis de Bethencourt, Rasmus
Villemoes)"
* tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (108 commits)
cpufreq: postfix policy directory with the first CPU in related_cpus
cpufreq: create cpu/cpufreq/policyX directories
cpufreq: remove cpufreq_sysfs_{create|remove}_file()
cpufreq: create cpu/cpufreq at boot time
cpufreq: Use cpumask_copy instead of cpumask_or to copy a mask
cpufreq: ondemand: Drop unnecessary locks from update_sampling_rate()
PM / Domains: Merge measurements for PM QoS device latencies
PM / Domains: Don't measure ->start|stop() latency in system PM callbacks
PM / clk: Fix broken build due to non-matching code and header #ifdefs
ACPI / Documentation: add copy_dsdt to ACPI format options
ACPI / sysfs: correctly check failing memory allocation
ACPI / video: Add a quirk to force native backlight on Lenovo IdeaPad S405
ACPI / CPPC: Fix potential memory leak
ACPI / CPPC: signedness bug in register_pcc_channel()
ACPI / PAD: power_saving_thread() is not freezable
ACPI / PM: Fix incorrect wakeup IRQ setting during suspend-to-idle
ACPI: Using correct irq when waiting for events
ACPI: Use correct IRQ when uninstalling ACPI interrupt handler
cpuidle: mvebu: disable the bind/unbind attributes and use builtin_platform_driver
cpuidle: mvebu: clean up multiple platform drivers
...
- "genirq: Introduce generic irq migration for cpu hotunplugged" patch
merged from tip/irq/for-arm to allow the arm64-specific part to be
upstreamed via the arm64 tree
- CPU feature detection reworked to cope with heterogeneous systems
where CPUs may not have exactly the same features. The features
reported by the kernel via internal data structures or ELF_HWCAP are
delayed until all the CPUs are up (and before user space starts)
- Support for 16KB pages, with the additional bonus of a 36-bit VA
space, though the latter only depending on EXPERT
- Implement native {relaxed, acquire, release} atomics for arm64
- New ASID allocation algorithm which avoids IPI on roll-over, together
with TLB invalidation optimisations (using local vs global where
feasible)
- KASan support for arm64
- EFI_STUB clean-up and isolation for the kernel proper (required by
KASan)
- copy_{to,from,in}_user optimisations (sharing the memcpy template)
- perf: moving arm64 to the arm32/64 shared PMU framework
- L1_CACHE_BYTES increased to 128 to accommodate Cavium hardware
- Support for the contiguous PTE hint on kernel mapping (16 consecutive
entries may be able to use a single TLB entry)
- Generic CONFIG_HZ now used on arm64
- defconfig updates
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJWOkmIAAoJEGvWsS0AyF7x4GgQAINU3NePjFFvWZNCkqobeH9+
jFKwtXamIudhTSdnXNXyYWmtRL9Krg3qI4zDQf68dvDFAZAze2kVuOi1yPpCbpFZ
/j/afNyQc7+PoyqRAzmT+EMPZlcuOA84Prrl1r3QWZ58QaFeVk/6ZxrHunTHxN0x
mR9PIXfWx73MTo+UnG8FChkmEY6LmV4XpemgTaMR9FqFhdT51OZSxDDAYXOTm4JW
a5HdN9OWjjJ2rhLlFEaC7tszG9B5doHdy2tr5ge/YERVJzIPDogHkMe8ZhfAJc+x
SQU5tKN6Pg4MOi+dLhxlk0/mKCvHLiEQ5KVREJnt8GxupAR54Bat+DQ+rP9cSnpq
dRQTcARIOyy9LGgy+ROAsSo+NiyM5WuJ0/WJUYKmgWTJOfczRYoZv6TMKlwNOUYb
tGLCZHhKPM3yBHJlWbQykl3xmSuudxCMmjlZzg7B+MVfTP6uo0CRSPmYl+v67q+J
bBw/Z2RYXWYGnvlc6OfbMeImI6prXeE36+5ytyJFga0m+IqcTzRGzjcLxKEvdbiU
pr8n9i+hV9iSsT/UwukXZ8ay6zH7PrTLzILWQlieutfXlvha7MYeGxnkbLmdYcfe
GCj374io5cdImHcVKmfhnOMlFOLuOHphl9cmsd/O2LmCIqBj9BIeNH2Om8mHVK2F
YHczMdpESlJApE7kUc1e
=3six
-----END PGP SIGNATURE-----
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- "genirq: Introduce generic irq migration for cpu hotunplugged" patch
merged from tip/irq/for-arm to allow the arm64-specific part to be
upstreamed via the arm64 tree
- CPU feature detection reworked to cope with heterogeneous systems
where CPUs may not have exactly the same features. The features
reported by the kernel via internal data structures or ELF_HWCAP are
delayed until all the CPUs are up (and before user space starts)
- Support for 16KB pages, with the additional bonus of a 36-bit VA
space, though the latter depends on EXPERT
- Implement native {relaxed, acquire, release} atomics for arm64
- New ASID allocation algorithm which avoids IPI on roll-over, together
with TLB invalidation optimisations (using local vs global where
feasible)
- KASan support for arm64
- EFI_STUB clean-up and isolation for the kernel proper (required by
KASan)
- copy_{to,from,in}_user optimisations (sharing the memcpy template)
- perf: moving arm64 to the arm32/64 shared PMU framework
- L1_CACHE_BYTES increased to 128 to accommodate Cavium hardware
- Support for the contiguous PTE hint on kernel mapping (16 consecutive
entries may be able to use a single TLB entry)
- Generic CONFIG_HZ now used on arm64
- defconfig updates
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (91 commits)
arm64/efi: fix libstub build under CONFIG_MODVERSIONS
ARM64: Enable multi-core scheduler support by default
arm64/efi: move arm64 specific stub C code to libstub
arm64: page-align sections for DEBUG_RODATA
arm64: Fix build with CONFIG_ZONE_DMA=n
arm64: Fix compat register mappings
arm64: Increase the max granular size
arm64: remove bogus TASK_SIZE_64 check
arm64: make Timer Interrupt Frequency selectable
arm64/mm: use PAGE_ALIGNED instead of IS_ALIGNED
arm64: cachetype: fix definitions of ICACHEF_* flags
arm64: cpufeature: declare enable_cpu_capabilities as static
genirq: Make the cpuhotplug migration code less noisy
arm64: Constify hwcap name string arrays
arm64/kvm: Make use of the system wide safe values
arm64/debug: Make use of the system wide safe value
arm64: Move FP/ASIMD hwcap handling to common code
arm64/HWCAP: Use system wide safe values
arm64/capabilities: Make use of system wide safe value
arm64: Delay cpu feature capability checks
...
Pull EFI changes from Ingo Molnar:
"The main changes in this cycle were:
- further EFI code generalization to make it more workable for ARM64
- various extensions, such as 64-bit framebuffer address support,
UEFI v2.5 EFI_PROPERTIES_TABLE support
- code modularization simplifications and cleanups
- new debugging parameters
- various fixes and smaller additions"
* 'core-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
efi: Fix warning of int-to-pointer-cast on x86 32-bit builds
efi: Use correct type for struct efi_memory_map::phys_map
x86/efi: Fix kernel panic when CONFIG_DEBUG_VIRTUAL is enabled
efi: Add "efi_fake_mem" boot option
x86/efi: Rename print_efi_memmap() to efi_print_memmap()
efi: Auto-load the efi-pstore module
efi: Introduce EFI_NX_PE_DATA bit and set it from properties table
efi: Add support for UEFIv2.5 Properties table
efi: Add EFI_MEMORY_MORE_RELIABLE support to efi_md_typeattr_format()
efifb: Add support for 64-bit frame buffer addresses
efi/arm64: Clean up efi_get_fdt_params() interface
arm64: Use core efi=debug instead of uefi_debug command line parameter
efi/x86: Move efi=debug option parsing to core
drivers/firmware: Make efi/esrt.c driver explicitly non-modular
efi: Use the generic efi.memmap instead of 'memmap'
acpi/apei: Use appropriate pgprot_t to map GHES memory
arm64, acpi/apei: Implement arch_apei_get_mem_attributes()
arm64/mm: Add PROT_DEVICE_nGnRnE and PROT_NORMAL_WT
acpi, x86: Implement arch_apei_get_mem_attributes()
efi, x86: Rearrange efi_mem_attributes()
...
Pull irq updates from Thomas Gleixner:
"The irq departement delivers:
- Rework the irqdomain core infrastructure to accomodate ACPI based
systems. This is required to support ARM64 without creating
artificial device tree nodes.
- Sanitize the ACPI based ARM GIC initialization by making use of the
new firmware independent irqdomain core
- Further improvements to the generic MSI management
- Generalize the irq migration on CPU hotplug
- Improvements to the threaded interrupt infrastructure
- Allow the migration of "chained" low level interrupt handlers
- Allow optional force masking of interrupts in disable_irq[_nosync]
- Support for two new interrupt chips - Sigh!
- A larger set of errata fixes for ARM gicv3
- The usual pile of fixes, updates, improvements and cleanups all
over the place"
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (71 commits)
Document that IRQ_NONE should be returned when IRQ not actually handled
PCI/MSI: Allow the MSI domain to be device-specific
PCI: Add per-device MSI domain hook
of/irq: Use the msi-map property to provide device-specific MSI domain
of/irq: Split of_msi_map_rid to reuse msi-map lookup
irqchip/gic-v3-its: Parse new version of msi-parent property
PCI/MSI: Use of_msi_get_domain instead of open-coded "msi-parent" parsing
of/irq: Use of_msi_get_domain instead of open-coded "msi-parent" parsing
of/irq: Add support code for multi-parent version of "msi-parent"
irqchip/gic-v3-its: Add handling of PCI requester id.
PCI/MSI: Add helper function pci_msi_domain_get_msi_rid().
of/irq: Add new function of_msi_map_rid()
Docs: dt: Add PCI MSI map bindings
irqchip/gic-v2m: Add support for multiple MSI frames
irqchip/gic-v3: Fix translation of LPIs after conversion to irq_fwspec
irqchip/mxs: Add Alphascale ASM9260 support
irqchip/mxs: Prepare driver for hardware with different offsets
irqchip/mxs: Panic if ioremap or domain creation fails
irqdomain: Documentation updates
irqdomain/msi: Use fwnode instead of of_node
...
Now that we added special handling to the C files in libstub, move
the one remaining arm64 specific EFI stub C file to libstub as
well, so that it gets the same treatment. This should prevent future
changes from resulting in binaries that may execute incorrectly in
UEFI context.
With efi-entry.S the only remaining EFI stub source file under
arch/arm64, we can also simplify the Makefile logic somewhat.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
A kernel built with DEBUG_RO_DATA && !CONFIG_DEBUG_ALIGN_RODATA doesn't
have .text aligned to a page boundary, though fixup_executable works at
page-granularity thanks to its use of create_mapping. If .text is not
page-aligned, the first page it exists in may be marked non-executable,
leading to failures when an attempt is made to execute code in said
page.
This patch upgrades ALIGN_DEBUG_RO and ALIGN_DEBUG_RO_MIN to force page
alignment for DEBUG_RO_DATA && !CONFIG_DEBUG_ALIGN_RODATA kernels,
ensuring that all sections with specific RWX permission requirements are
mapped with the correct permissions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Laura Abbott <laura@labbott.name>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: da141706ae ("arm64: add better page protections to arm64")
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The current arm64 Image relocation code in the UEFI stub assumes that
the dram_base argument it receives is always a multiple of 2 MB. In
reality, it is simply the lowest start address of all RAM entries in
the UEFI memory map, which means it could be any multiple of 4 KB.
Since the arm64 kernel Image needs to reside TEXT_OFFSET bytes beyond
a 2 MB aligned base, or it will fail to boot, make sure we round dram_base
to 2 MB before using it to calculate the relocation address.
Fixes: e38457c361 ("arm64: efi: prefer AllocatePages() over efi_low_alloc() for vmlinux")
Reported-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Timur Tabi <timur@codeaurora.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
enable_cpu_capabilities is only called from within cpufeature.c, so it
can be declared static.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This reverts commit e306dfd06f.
With this patch applied, we were the only architecture making this sort
of adjustment to the PC calculation in the unwinder. This causes
problems for ftrace, where the PC values are matched against the
contents of the stack frames in the callchain and fail to match any
records after the address adjustment.
Whilst there has been some effort to change ftrace to workaround this,
those patches are not yet ready for mainline and, since we're the odd
architecture in this regard, let's just step in line with other
architectures (like arch/arm/) for now.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit dd006da216 ("arm64: mm: increase VA range of identity map")
introduced a mechanism to extend the virtual memory map range
to support arm64 systems with system RAM located at very high offset,
where the identity mapping used to enable/disable the MMU requires
additional translation levels to map the physical memory at an equal
virtual offset.
The kernel detects at boot time the tcr_el1.t0sz value required by the
identity mapping and sets up the tcr_el1.t0sz register field accordingly,
any time the identity map is required in the kernel (i.e. when enabling
the MMU).
After enabling the MMU, in the cold boot path the kernel resets
tcr_el1.t0sz to its default value (i.e. the actual configuration value
for the system virtual address space) so that the memory space
translated by ttbr0_el1 is restored as expected.
Commit dd006da216 ("arm64: mm: increase VA range of identity map")
also added code to set up the tcr_el1.t0sz value when the kernel resumes
from low-power states with the MMU off through cpu_resume(), in order to
use the identity mapping to enable the MMU, but it failed to add the
code required to restore tcr_el1.t0sz to its default value once the core
returns to the kernel with the MMU enabled. As a result, the kernel may
end up running with a tcr_el1.t0sz value set up for the identity
mapping, which can be lower than the value required by the actual
virtual address space, resulting in an erroneous set-up.
This patch adds code in the resume path that restores the tcr_el1.t0sz
default value upon core resume, mirroring the cold boot path behaviour
and therefore fixing the issue.
Cc: <stable@vger.kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: dd006da216 ("arm64: mm: increase VA range of identity map")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If the STXR instruction fails in the SWP emulation code, we leave *data
overwritten with the loaded value, therefore corrupting the data written
by a subsequent, successful attempt.
This patch re-jigs the code so that we only write back to *data once we
know that the update has happened.
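A minimal sketch of the fixed ordering, using a bare LDXR/STXR loop
(illustrative only; the kernel's actual emulation adds user-access
handling and retry limits):

  #include <linux/types.h>

  static inline int swp_word(u32 *data, u32 *addr)
  {
          u32 loaded, failed;

          do {
                  asm volatile(
                  "       ldxr    %w0, [%2]\n"      /* load current value   */
                  "       stxr    %w1, %w3, [%2]\n" /* try to store *data   */
                  : "=&r" (loaded), "=&r" (failed)
                  : "r" (addr), "r" (*data)
                  : "memory");
          } while (failed);

          /* Write back only once the store-exclusive has succeeded. */
          *data = loaded;
          return 0;
  }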
Cc: <stable@vger.kernel.org>
Fixes: bd35a4adc4 ("arm64: Port SWP/SWPB emulation support from arm")
Reported-by: Shengjiu Wang <shengjiu.wang@freescale.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We have been getting away with using a void* for the physical
address of the UEFI memory map, since, even on 32-bit platforms
with 64-bit physical addresses, no truncation takes place if the
memory map has been allocated by the firmware (which only uses
1:1 virtually addressable memory), which is usually the case.
However, commit:
0f96a99dab ("efi: Add "efi_fake_mem" boot option")
adds code that clones and modifies the UEFI memory map, and the
clone may live above 4 GB on 32-bit platforms.
This means our use of void* for struct efi_memory_map::phys_map has
graduated from 'incorrect but working' to 'incorrect and
broken', and we need to fix it.
So redefine struct efi_memory_map::phys_map as phys_addr_t, and
get rid of a bunch of casts that are now unneeded.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: izumi.taku@jp.fujitsu.com
Cc: kamezawa.hiroyu@jp.fujitsu.com
Cc: linux-efi@vger.kernel.org
Cc: matt.fleming@intel.com
Link: http://lkml.kernel.org/r/1445593697-1342-1-git-send-email-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'firmware/psci-1.0' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux into next/drivers
This pull request contains patches that enable PSCI 1.0 firmware
features for arm/arm64 platforms:
- Lorenzo Pieralisi adds support for the PSCI_FEATURES call, manages
various 1.0 specifications updates (power state id and functions return
values) and provides PSCI v1.0 DT bindings
- Sudeep Holla implements PSCI v1.0 system suspend support to enable PSCI
based suspend-to-RAM
* tag 'firmware/psci-1.0' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux:
drivers: firmware: psci: add system suspend support
drivers: firmware: psci: define more generic PSCI_FN_NATIVE macro
drivers: firmware: psci: add PSCI v1.0 DT bindings
drivers: firmware: psci: add extended stateid power_state support
drivers: firmware: psci: add PSCI_FEATURES call
drivers: firmware: psci: move power_state handling to generic code
drivers: firmware: psci: add INVALID_ADDRESS return value
Signed-off-by: Olof Johansson <olof@lixom.net>
The hwcap string arrays used for generating the contents of
/proc/cpuinfo are currently arrays of non-const pointers.
There's no need for these pointers to be mutable, so this patch makes
them const so that they can be moved to .rodata.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Use the system wide value of ID_AA64DFR0 to make safer decisions.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The FP/ASIMD is detected in fpsimd_init(), which is built-in
unconditionally. Let's move the hwcap handling to the central place.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Extend struct arm64_cpu_capabilities to handle the HWCAP detection
and make use of the system wide value of the feature registers for
a reliable set of HWCAPs.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we can reliably read the system wide safe value for a
feature register, use that to compute the system capability.
This patch also replaces the 'feature-register-specific'
methods with a generic routine to check the capability.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
At the moment we run through the arm64_features capability list for
each CPU and set the capability if one of the CPU supports it. This
could be problematic in a heterogeneous system with differing capabilities.
Delay the CPU feature checks until all the enabled CPUs are up (i.e.
smp_cpus_done()), so that we can make better decisions based on the
overall system capability. Once we decide and advertise the capabilities,
the alternatives can be applied. From this state, we cannot roll back
a feature to disabled based on the values from a newly hotplugged CPU,
due to the runtime patching and other reasons. So, for all new CPUs, we
need to make sure that they have the established system capabilities;
failing that, we bring the CPU down, preventing it from coming online.
Once the capabilities are decided, any new CPU booting up goes through
verification to ensure that it has all the enabled capabilities, and
also invokes the respective enable() method on the CPU.
The CPU errata checks are not delayed and are still executed per CPU
to detect the respective capabilities. If we ever come across a non-errata
capability that needs to be checked on each CPU, we could introduce it via
a new capability table (or a flag), which can be processed per CPU.
The next patch will make the feature checks use the system wide
safe value of a feature register.
NOTE: The enable() methods associated with the capability is scheduled
on all the CPUs (which is the only use case at the moment). If we need
a different type of 'enable()' which only needs to be run once on any CPU,
we should be able to handle that when needed.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: static variable and coding style fixes]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
check_cpu_capabilities runs through a given list of caps and
checks if the system has the cap, updates the system capability
bitmap and also runs any enable() methods associated with them.
All of this is not quite obvious from the name 'check'. This
patch splits check_cpu_capabilities into two parts:
1) update_cpu_capabilities
=> Runs through the given list and updates the system
wide capability map.
2) enable_cpu_capabilities
=> Runs through the given list and invokes enable() (if any)
for the caps enabled on the system.
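Sketched in C, the split might look like this (the struct shape is
simplified for illustration; cpus_set_cap()/cpus_have_cap() are the
arm64 helpers, everything else is illustrative):

  /* Illustrative shape only; the real struct lives in cpufeature.h. */
  struct cap {
          int id;
          bool (*matches)(const struct cap *);
          void (*enable)(void *);
  };

  static void update_cpu_capabilities(const struct cap *caps)
  {
          for (; caps->matches; caps++)
                  if (caps->matches(caps))
                          cpus_set_cap(caps->id);   /* system-wide map */
  }

  static void enable_cpu_capabilities(const struct cap *caps)
  {
          for (; caps->matches; caps++)
                  if (cpus_have_cap(caps->id) && caps->enable)
                          caps->enable(NULL);       /* run the hook */
  }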
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Suggested-by: Catalin Marinas <catalin.marinsa@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Make use of the system wide safe register to decide the support
for mixed endian.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add an API for reading the safe CPUID value across the
system from the new infrastructure.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch consolidates the CPU Sanity check to the new infrastructure.
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds an infrastructure to keep track of the CPU feature
registers on the system. For each register, the infrastructure keeps
track of the system wide safe value of the feature bits. It also tracks
which fields of a register should be matched strictly across all
the CPUs on the system for the SANITY check infrastructure.
The feature bits are classified into the following three types depending on
the implication of the possible values. This information is used to
decide the safe value for a feature.
LOWER_SAFE - The smaller value is safer
HIGHER_SAFE - The bigger value is safer
EXACT - We can't decide between the two, so a predefined safe_value is used.
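In code, the arbitration described above could look roughly like this
(a self-contained sketch; the real infrastructure works per bit-field,
with shift/width metadata per register):

  enum ftr_type { FTR_EXACT, FTR_LOWER_SAFE, FTR_HIGHER_SAFE };

  /* Sketch: arbitrate one feature field between two CPUs' values. */
  static s64 ftr_safe_value(enum ftr_type type, s64 new, s64 cur,
                            s64 safe_val)
  {
          switch (type) {
          case FTR_LOWER_SAFE:
                  return new < cur ? new : cur;  /* smaller is safer */
          case FTR_HIGHER_SAFE:
                  return new > cur ? new : cur;  /* bigger is safer  */
          case FTR_EXACT:
          default:
                  return safe_val;  /* can't arbitrate: predefined value */
          }
  }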
This infrastructure will be later used to make better decisions for:
- Kernel features (e.g, KVM, Debug)
- SANITY Check
- CPU capability
- ELF HWCAP
- Exposing CPU Feature register to userspace.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: whitespace fix]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch moves the /proc/cpuinfo handling code:
arch/arm64/kernel/{setup.c to cpuinfo.c}
No functional changes
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Move the mixed endian support detection code to cpufeature.c
from cpuinfo.c. This also moves the update_cpu_features()
used by mixed endian detection code, which will get more
functionality.
Also moves the ID register field shifts to asm/sysreg.h,
where all the useful definitions will end up in later patches.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch moves the CPU feature detection code from
arch/arm64/kernel/{setup.c to cpufeature.c}
The plan is to consolidate all the CPU feature handling
in cpufeature.c.
Apart from changing pr_fmt from "alternatives" to "cpu features",
there are no functional changes.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
At the moment the boot CPU stores the cpuinfo long before the
PERCPU areas are initialised by the kernel. This could be problematic
as the non-boot CPU data structures might get copied with the data
from the boot CPU, giving us no chance to detect if a particular CPU
updated its cpuinfo. This patch delays the boot cpu store to
smp_prepare_boot_cpu().
This also removes setup_processor(), which no longer does any
meaningful work.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are
up, i.e., smp_cpus_done(). This is in preparation for detecting the
common features across the CPUs and creating a consistent ELF HWCAP
for the system.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
At early boot, we print the CPU version/revision. On a heterogeneous
system, we could have different types of CPUs. Print the CPU info for
all active CPUs, with the secondary CPUs printing the message only when
they come online.
Also, remove the redundant 'revision' information, which doesn't
make any sense without the 'variant' field.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Unlike perf callchain relying on walk_stackframe(), dump_backtrace()
has its own backtrace logic. A major difference between them is the
moment a symbol is recorded. Perf writes down a symbol *before*
calling unwind_frame(), but dump_backtrace() prints it out *after*
unwind_frame(). As a result, the last valid symbol cannot be hooked
in the case of dump_backtrace(). This patch addresses the issue by
synchronising dump_backtrace() with the perf callchain.
A simple test and its results are as follows:
- crash trigger
$ sudo echo c > /proc/sysrq-trigger
- current status
Call trace:
[<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
[<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
[<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
[<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
[<fffffe00001f2638>] __vfs_write+0x44/0x104
[<fffffe00001f2e60>] vfs_write+0x98/0x1a8
[<fffffe00001f3730>] SyS_write+0x50/0xb0
- with this change
Call trace:
[<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
[<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
[<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
[<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
[<fffffe00001f2638>] __vfs_write+0x44/0x104
[<fffffe00001f2e60>] vfs_write+0x98/0x1a8
[<fffffe00001f3730>] SyS_write+0x50/0xb0
[<fffffe00000939ec>] el0_svc_naked+0x20/0x28
Note that this patch does not cover a case where MMU is disabled. The
last stack frame of swapper, for example, has PC in a form of physical
address. Unfortunately, a simple conversion using phys_to_virt() cannot
cover all scenarios since PC is retrieved from LR - 4, not LR. It is
too big a tradeoff to change both head.S and unwind_frame() for only a
few symbols in *.S. Thus, this hunk does not take care of that case.
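A rough sketch of the reordered loop (the types follow the arm64
unwinder, the function name is made up):

  struct stackframe { unsigned long fp, sp, pc; };
  extern int unwind_frame(struct stackframe *frame);
  extern void dump_backtrace_entry(unsigned long where);

  static void walk_and_print(struct stackframe frame)
  {
          unsigned long where = frame.pc;

          while (1) {
                  /* Record the symbol *before* unwinding, so the
                   * last valid frame is not lost. */
                  dump_backtrace_entry(where);
                  if (unwind_frame(&frame) < 0)
                          break;
                  where = frame.pc;
          }
  }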
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently, if cpuidle is disabled or not supported, powertop reports
zero wakeups and zero events. This is because the cpu_idle tracepoints
are missing.
This patch makes the cpu_idle tracepoints always available even if
cpuidle is disabled or not supported.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the page size to the arm64 kernel image header
so that one can infer the PAGESIZE used by the kernel. This will
be helpful for diagnosing failures to boot a kernel with a page size
not supported by the CPU.
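For illustration, the page size could then be recovered from the image
header's flags field like this (the bit layout follows the arm64 boot
documentation of that era; treat the exact encoding, and the helper
name, as assumptions):

  #include <linux/types.h>

  /* Assumed layout: bits 1-2 of the header 'flags' encode the page
   * size as 0 = unspecified, 1 = 4K, 2 = 16K, 3 = 64K. */
  static unsigned long image_page_size(u64 flags)
  {
          switch ((flags >> 1) & 3) {
          case 1: return 0x1000;   /* 4K  */
          case 2: return 0x4000;   /* 16K */
          case 3: return 0x10000;  /* 64K */
          default: return 0;       /* unspecified */
          }
  }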
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ensure that the selected page size is supported by the CPU(s). If it
isn't, park the CPU.
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
At the moment, we only support a maximum of a 3-level page table for
swapper. With a 48-bit VA, 64K pages need only 3 levels and 4K pages
use section mapping. Add support for a 4-level page table for swapper,
as needed by 16K pages.
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Move the kernel pagetable (both swapper and idmap) definitions
from the generic asm/page.h to a new file, asm/kernel-pgtable.h.
This is mostly a cosmetic change, to clean up the asm/page.h to
get rid of the arch specific details which are not needed by the
generic code.
Also renames the symbols to prevent conflicts, e.g.:
BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARMv7 does not have a PC alignment exception. ARMv8 AArch32
user space, however, can produce one. Add a handler so that we
do not dump an unexpected stack trace in the logs.
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'efi-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into core/efi
Pull v4.4 EFI updates from Matt Fleming:
- Make the EFI System Resource Table (ESRT) driver explicitly
non-modular by ripping out the module_* code since Kconfig doesn't
allow it to be built as a module anyway. (Paul Gortmaker)
- Make the x86 efi=debug kernel parameter, which enables EFI debug
code and output, generic and usable by arm64. (Leif Lindholm)
- Add support to the x86 EFI boot stub for 64-bit Graphics Output
Protocol frame buffer addresses. (Matt Fleming)
- Detect when the UEFI v2.5 EFI_PROPERTIES_TABLE feature is enabled
in the firmware and set an efi.flags bit so the kernel knows when
it can apply more strict runtime mapping attributes - Ard Biesheuvel
- Auto-load the efi-pstore module on EFI systems, just like we
currently do for the efivars module. (Ben Hutchings)
- Add "efi_fake_mem" kernel parameter which allows the system's EFI
memory map to be updated with additional attributes for specific
memory ranges. This is useful for testing the kernel code that handles
the EFI_MEMORY_MORE_RELIABLE memmap bit even if your firmware
doesn't include support. (Taku Izumi)
Note: there is a semantic conflict between the following two commits:
8a53554e12 ("x86/efi: Fix multiple GOP device support")
ae2ee627dc ("efifb: Add support for 64-bit frame buffer addresses")
I fixed up the interaction in the merge commit, changing the type of
current_fb_base from u32 to u64.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch adds arch specific code for kernel address sanitizer
(see Documentation/kasan.txt).
1/8 of the kernel address space is reserved for shadow memory. There
was no big enough hole for this, so the virtual addresses for the
shadow were stolen from the vmalloc area.
At the early boot stage the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as a
read-only zero shadow for some memory that KASan currently doesn't
track (vmalloc).
After mapping the physical memory, pages for the shadow memory are
allocated and mapped.
Functions like memset/memmove/memcpy do a lot of memory accesses. If a
bad pointer is passed to one of these functions, it is important to
catch it. The compiler's instrumentation cannot do this since these
functions are written in assembly.
KASan replaces the memory functions with manually instrumented
variants. The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them. The
original functions also have aliases with a '__' prefix in their names,
so we can call the non-instrumented variants if needed.
Some files are built without kasan instrumentation (e.g. mm/slub.c).
For those, the original mem* functions are replaced (via #define) with
the prefixed variants to disable memory access checks for such files.
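As a sketch, the instrumented wrapper looks roughly like this
(simplified; check_memory_region stands for the mm/kasan check helper
and __memcpy for the uninstrumented alias, both declared here only to
keep the sketch self-contained):

  #include <linux/types.h>

  void *__memcpy(void *dst, const void *src, size_t len);
  void check_memory_region(unsigned long addr, size_t size, bool write);

  /* The strong, instrumented definition overrides the weak assembly
   * implementation; __memcpy remains the uninstrumented original. */
  void *memcpy(void *dst, const void *src, size_t len)
  {
          check_memory_region((unsigned long)src, len, false); /* reads  */
          check_memory_region((unsigned long)dst, len, true);  /* writes */

          return __memcpy(dst, src, len);
  }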
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since arm64 does not use a builtin decompressor, the EFI stub is built
into the kernel proper. So far, this has been working fine, but actually,
since the stub is in fact a PE/COFF relocatable binary that is executed
at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
should not be seamlessly sharing code with the kernel proper, which is a
position dependent executable linked at a high virtual offset.
So instead, separate the contents of libstub and its dependencies, by
putting them into their own namespace by prefixing all of its symbols
with __efistub. This way, we have tight control over what parts of the
kernel proper are referenced by the stub.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As we now have a common debug infrastructure between core and arm64 efi,
drop the bit of the interface passing verbose output flags around.
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Now that we have an efi=debug command line option in the core code, use
this instead of the arm64-specific uefi_debug option.
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
A prior commit, which detects the hw breakpoint ABI behaviour based on
the target state, missed the asm/compat.h include, so the build fails
with !CONFIG_COMPAT.
Fixes: 8f48c06290 ("arm64: hw_breakpoint: use target state to determine ABI behaviour")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As the firmware (or the hypervisor) may have disabled SRE access,
check that SRE can actually be enabled before declaring that we
do have that capability.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Contrary to what was originally expected, EL3 firmware can (for whatever
reason) disable GICv3 system register access. In this case, the kernel
explodes very early.
Work around this by testing if the SRE bit sticks or not. If it doesn't,
abort the GICv3 setup, and pray that the firmware has passed a DT that
doesn't contain a GICv3 node.
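The probe boils down to checking whether the SRE bit sticks, roughly
like this (a sketch along the lines of the gic-v3 code, using the
arm64 gic_read_sre()/gic_write_sre() accessors):

  #include <linux/irqchip/arm-gic-v3.h>

  static bool sre_is_usable(void)
  {
          u32 val = gic_read_sre();

          if (val & ICC_SRE_EL1_SRE)
                  return true;            /* already enabled */

          val |= ICC_SRE_EL1_SRE;
          gic_write_sre(val);
          val = gic_read_sre();

          /* If the bit did not stick, firmware has disabled SRE. */
          return !!(val & ICC_SRE_EL1_SRE);
  }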
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
When a CPU is disabled, all of its IRQs are migrated to another CPU. In
some cases the new affinity differs from the old one, so the old
affinity needs to be updated; however, if irq_set_affinity()'s return
value is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix
this by using irq_do_set_affinity().
Moreover, migrating interrupts is a core code matter, so use the
generic function irq_migrate_all_off_this_cpu() from
kernel/irq/migration.c to migrate the interrupts.
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The Cortex-A57 PMU supports a few events outside of the required PMUv3
set that are rather useful.
This patch adds the event map data for said events.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The Cortex-A53 PMU supports a few events outside of the required PMUv3
set that are rather useful.
This patch adds the event map data for said events.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that the arm_pmu framework has been factored out to drivers/perf we
can make use of it for arm64, gaining support for heterogeneous PMUs
and unifying the two codebases before they diverge further.
The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching
the style previously applied to the 32-bit PMUs.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 hw_breakpoint interface is slightly less flexible than its
32-bit counterpart, thanks to some changes in the architecture rendering
unaligned watchpoint addresses obsolete for AArch64.
However, in a multi-arch environment (i.e. debugging a 32-bit target
with a 64-bit GDB under a 64-bit kernel), we need to provide a feature
compatible interface to GDB in order for debugging to function correctly.
This patch adds a new helper, is_compat_bp, to our hw_breakpoint
implementation which changes the interface behaviour based on the
architecture of the debug target as opposed to the debugger itself.
This allows debugging to function as expected for multi-arch
configurations without relying on deprecated architectural behaviours
when debugging native applications.
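The helper itself is tiny; roughly (a sketch relying on the perf
event's target task, much like the description above):

  #include <linux/compat.h>
  #include <linux/perf_event.h>
  #include <linux/sched.h>

  static bool is_compat_bp(struct perf_event *bp)
  {
          struct task_struct *tsk = bp->hw.target;

          /* Per-task breakpoints follow the target's personality;
           * per-CPU breakpoints (no target) use the native ABI. */
          return tsk && is_compat_thread(task_thread_info(tsk));
  }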
Cc: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
mm_cpumask isn't actually used for anything on arm64, so remove all the
code trying to keep it up-to-date.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Our current switch_mm implementation suffers from a number of problems:
(1) The ASID allocator relies on IPIs to synchronise the CPUs on a
rollover event
(2) Because of (1), we cannot allocate ASIDs with interrupts disabled
and therefore make use of a TIF_SWITCH_MM flag to postpone the
actual switch to finish_arch_post_lock_switch
(3) We run context switch with a reserved (invalid) TTBR0 value, even
though the ASID and pgd are updated atomically
(4) We take a global spinlock (cpu_asid_lock) during context-switch
(5) We use h/w broadcast TLB operations when they are not required
(e.g. in flush_context)
This patch addresses these problems by rewriting the ASID algorithm to
match the bitmap-based arch/arm/ implementation more closely. This in
turn allows us to remove much of the complications surrounding switch_mm,
including the ugly thread flag.
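A heavily simplified sketch of such a bitmap-based allocator with
generations (locking, reserved ASIDs and the deferred local TLB flush
of the real code are elided; constants are illustrative):

  #include <linux/bitmap.h>
  #include <linux/bitops.h>

  #define ASID_BITS       16
  #define NUM_ASIDS       (1UL << ASID_BITS)

  static unsigned long asid_generation = NUM_ASIDS; /* gen in high bits */
  static DECLARE_BITMAP(asid_map, NUM_ASIDS);

  static unsigned long new_context(unsigned long old)
  {
          unsigned long asid = old & (NUM_ASIDS - 1);

          /* Fast path: the old ASID is from the current generation. */
          if (old && !((old ^ asid_generation) >> ASID_BITS))
                  return old;

          /* Try to keep the same ASID number under the new generation. */
          if (asid && !test_and_set_bit(asid, asid_map))
                  return asid_generation | asid;

          asid = find_next_zero_bit(asid_map, NUM_ASIDS, 1);
          if (asid == NUM_ASIDS) {
                  /* Roll over: bump the generation; CPUs then flush
                   * their TLBs locally instead of being IPI'd. */
                  asid_generation += NUM_ASIDS;
                  bitmap_clear(asid_map, 0, NUM_ASIDS);
                  asid = 1;
          }
          set_bit(asid, asid_map);
          return asid_generation | asid;
  }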
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
There are a number of places where a single CPU is running with a
private page-table and we need to perform maintenance on the TLB and
I-cache in order to ensure correctness, but do not require the operation
to be broadcast to other CPUs.
This patch adds local variants of flush_tlb_all and __flush_icache_all
to support these use-cases and updates the callers respectively.
__local_flush_icache_all also implies an isb, since it is intended to be
used synchronously.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When booting a kernel without an initrd, the kernel reports that it
moves -1 bytes worth, having gone through the motions with initrd_start
equal to initrd_end:
Moving initrd from [4080000000-407fffffff] to [9fff49000-9fff48fff]
Prevent this by bailing out early when the initrd size is zero (i.e. we
have no initrd), avoiding the confusing message and other associated
work.
Fixes: 1570f0d7ab ("arm64: support initrd outside kernel linear map")
Cc: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When running the kprobe test on an arm64 rt kernel, it reports the warning below:
root@qemu7:~# modprobe kprobe_example
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 484, name: modprobe
CPU: 0 PID: 484 Comm: modprobe Not tainted 4.1.6-rt5 #2
Hardware name: linux,dummy-virt (DT)
Call trace:
[<ffffffc0000891b8>] dump_backtrace+0x0/0x128
[<ffffffc000089300>] show_stack+0x20/0x30
[<ffffffc00061dae8>] dump_stack+0x1c/0x28
[<ffffffc0000bbad0>] ___might_sleep+0x120/0x198
[<ffffffc0006223e8>] rt_spin_lock+0x28/0x40
[<ffffffc000622b30>] __aarch64_insn_write+0x28/0x78
[<ffffffc000622e48>] aarch64_insn_patch_text_nosync+0x18/0x48
[<ffffffc000622ee8>] aarch64_insn_patch_text_cb+0x70/0xa0
[<ffffffc000622f40>] aarch64_insn_patch_text_sync+0x28/0x48
[<ffffffc0006236e0>] arch_arm_kprobe+0x38/0x48
[<ffffffc00010e6f4>] arm_kprobe+0x34/0x50
[<ffffffc000110374>] register_kprobe+0x4cc/0x5b8
[<ffffffbffc002038>] kprobe_init+0x38/0x7c [kprobe_example]
[<ffffffc000084240>] do_one_initcall+0x90/0x1b0
[<ffffffc00061c498>] do_init_module+0x6c/0x1cc
[<ffffffc0000fd0c0>] load_module+0x17f8/0x1db0
[<ffffffc0000fd8cc>] SyS_finit_module+0xb4/0xc8
Convert patch_lock to a raw lock to avoid this issue.
Although the problem was found on an rt kernel, the fix should be
applicable to the mainline kernel too.
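The shape of the fix, as a sketch (the body of the critical section is
elided and the function name is invented):

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(patch_lock);

  static void insn_write_sketch(void)
  {
          unsigned long flags;

          /* A raw spinlock stays a true spinning lock on PREEMPT_RT,
           * so it never sleeps in this non-preemptible context. */
          raw_spin_lock_irqsave(&patch_lock, flags);
          /* ... map the text page and write the instruction ... */
          raw_spin_unlock_irqrestore(&patch_lock, flags);
  }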
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
It's a bit odd that debugfs_create_bool() takes 'u32 *' as an argument,
when all it needs is a boolean pointer.
It would be better to update this API to make it accept 'bool *'
instead, as that will make it more consistent and often more
convenient. On top of that, a bool takes just a byte.
That required updates to all user sites as well, in the same commit
updating the API. The regmap core was also using
debugfs_{read|write}_file_bool() directly, and its variable types were
updated to bool as well.
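After the change, a caller simply hands over a bool (the directory and
flag names here are invented for illustration):

  #include <linux/debugfs.h>

  static bool mydrv_enabled = true;   /* previously had to be a u32 */

  static void mydrv_debugfs_init(void)
  {
          struct dentry *dir = debugfs_create_dir("mydrv", NULL);

          /* The updated API takes 'bool *' directly. */
          debugfs_create_bool("enabled", 0644, dir, &mydrv_enabled);
  }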
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pull EFI fixes from Ingo Molnar:
"Two EFI fixes: one for x86, one for ARM, fixing a boot crash bug that
can trigger under newer EFI firmware"
* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions
x86/efi: Fix boot crash by mapping EFI memmap entries bottom-up at runtime, instead of top-down
Functions implemented on arm64 to check if a power_state parameter
is valid and if the power_state implies context loss are not
arm64 specific and should be moved to generic code so that they
can be reused on arm systems too.
This patch moves the functions handling the power_state parameter
to generic PSCI firmware layer code.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Jisheng Zhang <jszhang@marvell.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
The new Properties Table feature introduced in UEFIv2.5 may
split memory regions that cover PE/COFF memory images into
separate code and data regions. Since these regions only differ
in the type (runtime code vs runtime data) and the permission
bits, but not in the memory type attributes (UC/WC/WT/WB), the
spec does not require them to be aligned to 64 KB.
Since the relative offset of PE/COFF .text and .data segments
cannot be changed on the fly, this means that we can no longer
pad out those regions to be mappable using 64 KB pages.
Unfortunately, there is no annotation in the UEFI memory map
that identifies data regions that were split off from a code
region, so we must apply this logic to all adjacent runtime
regions whose attributes only differ in the permission bits.
So instead of rounding each memory region to 64 KB alignment at
both ends, only round down regions that are not directly
preceded by another runtime region with the same type
attributes. Since the UEFI spec does not mandate that the memory
map be sorted, this means we also need to sort it first.
Note that this change will result in all EFI_MEMORY_RUNTIME
regions whose start addresses are not aligned to the OS page
size to be mapped with executable permissions (i.e., on kernels
compiled with 64 KB pages). However, since these mappings are
only active during the time that UEFI Runtime Services are being
invoked, the window for abuse is rather small.
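In outline, the new rounding rule could be sketched like this
(illustrative types; the real code works on sorted efi_memory_desc_t
entries and compares only the memory type attributes, not the
permission bits):

  #include <linux/types.h>

  struct region { u64 start, end, type_attr; };

  /* 'map' is assumed sorted by start address; 'type_attr' holds only
   * the UC/WC/WT/WB memory type bits. */
  static void round_down_runtime(struct region *map, int n)
  {
          int i;

          for (i = 0; i < n; i++) {
                  bool adjoins = i > 0 &&
                                 map[i - 1].end == map[i].start &&
                                 map[i - 1].type_attr == map[i].type_attr;

                  /* Only round down regions not directly preceded by
                   * a runtime region with the same type attributes. */
                  if (!adjoins)
                          map[i].start &= ~(u64)0xffff; /* 64 KB align */
          }
  }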
Tested-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Rutland <mark.rutland@arm.com> [UEFI 2.4 only]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Reviewed-by: Mark Salter <msalter@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org> # v4.0+
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1443218539-7610-3-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Seeing the 'of' characters in a symbol that is being called from
ACPI seems to freak out people. So let's do a bit of pointless
renaming so that these folks do feel at home.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
It is now absolutely trivial to convert the arch timer driver to
use ACPI probing, just like its DT counterpart.
Let's enjoy another crapectomy.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Now that we have a basic infrastructure to register irqchips and
call them on discovery of a matching entry in MADT, convert the
GIC driver to this new probing method.
It ends up being a code deletion party, which is a rather good thing.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This patch implements a workaround for Cavium ThunderX erratum 23154.
The GICv3 of ThunderX requires a modified sequence for reading the IAR
status to ensure data synchronization. Since this is in the fast path
and called for each interrupt, runtime patching via jump labels is used
for the smallest overhead (a no-op in the unaffected case). This is the
same technique as used for tracepoints.
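Conceptually, the patched fast path looks like this (a sketch; the
erratum-specific read sequence lives behind the helper):

  static struct static_key is_cavium_thunderx = STATIC_KEY_INIT_FALSE;

  static u64 gic_read_iar(void)
  {
          /* patched to a branch only on affected parts, a no-op elsewhere */
          if (static_key_false(&is_cavium_thunderx))
                  return gic_read_iar_cavium_thunderx();

          return gic_read_iar_common();
  }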
Signed-off-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1442869119-1814-3-git-send-email-rric@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cortex-A53 processors <= r0p4 are affected by erratum #843419 which can
lead to a memory access using an incorrect address in certain sequences
headed by an ADRP instruction.
There is a linker fix to generate veneers for ADRP instructions, but
this doesn't work for kernel modules which are built as unlinked ELF
objects.
This patch adds a new config option for the erratum which, when enabled,
builds kernel modules with the mcmodel=large flag. This uses absolute
addressing for all kernel symbols, thereby removing the use of ADRP as
a PC-relative form of addressing. The ADRP relocs are removed from the
module loader so that we fail to load any potentially affected modules.
Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When saving/restoring the VFP registers from a compat (AArch32)
signal frame, we rely on the compat registers forming a prefix of the
native register file and therefore make use of copy_{to,from}_user to
transfer between the native fpsimd_state and the compat_vfp_sigframe.
Unfortunately, this doesn't work so well in a big-endian environment.
Our fpsimd save/restore code operates directly on 128-bit quantities
(Q registers) whereas the compat_vfp_sigframe represents the registers
as an array of 64-bit (D) registers. The architecture packs the compat D
registers into the Q registers, with the least significant bytes holding
the lower register. Consequently, we need to swap the 64-bit halves when
converting between these two representations on a big-endian machine.
This patch replaces the __copy_{to,from}_user invocations in our
compat VFP signal handling code with explicit __put_user loops that
operate on 64-bit values and swap them accordingly.
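A sketch of the approach, using a union to name the two 64-bit halves
of each 128-bit Q register in host byte order:

  union __fpsimd_vreg {
          __uint128_t     raw;
          struct {
  #ifdef __AARCH64EB__
                  u64     hi;
                  u64     lo;
  #else
                  u64     lo;
                  u64     hi;
  #endif
          };
  };

  for (i = 0; i < num_regs; i += 2) {
          union __fpsimd_vreg vreg = { .raw = fpsimd->vregs[i >> 1] };

          /* D[i] is always the low half, D[i + 1] the high half */
          __put_user_error(vreg.lo, &frame->ufp.fpregs[i], err);
          __put_user_error(vreg.hi, &frame->ufp.fpregs[i + 1], err);
  }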
Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We have a couple of CPU hotplug notifiers for resetting the CPU debug
state to a sane value when a CPU comes online.
This patch ensures that we mask out CPU_TASKS_FROZEN so that we don't
miss any online events occurring due to suspend/resume.
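In practice this is a one-line change to each notifier's action check,
e.g. (names illustrative):

  static int os_lock_notify(struct notifier_block *self,
                            unsigned long action, void *data)
  {
          /* also match CPU_ONLINE_FROZEN events from suspend/resume */
          if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
                  smp_call_function_single((unsigned long)data,
                                           clear_os_lock, NULL, 1);

          return NOTIFY_OK;
  }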
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When entering the kernel at EL2, we fail to initialise the MDCR_EL2
register which controls debug access and PMU capabilities at EL1.
This patch ensures that the register is initialised so that all traps
are disabled and all the PMU counters are available to the host. When a
guest is scheduled, KVM takes care to configure trapping appropriately.
Cc: <stable@vger.kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Table 8 of UEFI 2.5 section 2.3.6.1 defines mappings from EFI
memory types to MAIR attribute encodings for arm64.
If the physical address has memory attributes defined by EFI
memmap as EFI_MEMORY_[UC|WC|WT], return the appropriate page
protection type according to the UEFI spec. Otherwise, return
PAGE_KERNEL.
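The resulting helper is a straightforward cascade over the EFI
attributes (a sketch, assuming the PROT_* encodings of the arm64 MAIR
setup):

  static inline pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
  {
          u64 attr = efi_mem_attributes(addr);

          if (attr & EFI_MEMORY_UC)
                  return __pgprot(PROT_DEVICE_nGnRnE);
          if (attr & EFI_MEMORY_WC)
                  return __pgprot(PROT_NORMAL_NC);
          if (attr & EFI_MEMORY_WT)
                  return __pgprot(PROT_NORMAL_WT);

          return PAGE_KERNEL;
  }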
Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
[ Small stylistic tweaks. ]
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1441372302-23242-2-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The use of mem= could leave part or all of the initrd outside of the
kernel linear map. This will lead to an error when unpacking the initrd
and a probable failure to boot. This patch catches that situation and
relocates the initrd to be fully within the linear map.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
- Support for new architectural features introduced in ARMv8.1:
* Privileged Access Never (PAN) to catch user pointer dereferences in
the kernel
* Large System Extension (LSE) for building scalable atomics and locks
(depends on locking/arch-atomic from tip, which is included here)
* Hardware Dirty Bit Management (DBM) for updating clean PTEs
automatically
- Move our PSCI implementation out into drivers/firmware/, where it can
be shared with arch/arm/. RMK has also pulled this component branch
and has additional patches moving arch/arm/ over. MAINTAINERS is
updated accordingly.
- Better BUG implementation based on the BRK instruction for trapping
- Leaf TLB invalidation for unmapping user pages
- Support for PROBE_ONLY PCI configurations
- Various cleanups and non-critical fixes, including:
* Always flush FP/SIMD state over exec()
* Restrict memblock additions based on range of linear mapping
* Ensure *(LIST_POISON) generates a fatal fault
* Context-tracking syscall return no longer corrupts return value when
not forced on.
* Alternatives patching synchronisation/stability improvements
* Signed sub-word cmpxchg compare fix (tickled by HAVE_CMPXCHG_LOCAL)
* Force SMP=y
* Hide direct DCC access from userspace
* Fix EFI stub memory allocation when DRAM starts at 0x0
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
arm64: flush FP/SIMD state correctly after execve()
arm64: makefile: fix perf_callchain.o kconfig dependency
arm64: set MAX_MEMBLOCK_ADDR according to linear region size
of/fdt: make memblock maximum physical address arch configurable
arm64: Fix source code file path in comments
arm64: entry: always restore x0 from the stack on syscall return
arm64: mdscr_el1: avoid exposing DCC to userspace
arm64: kconfig: Move LIST_POISON to a safe value
arm64: Add __exception_irq_entry definition for function graph
arm64: mm: ensure patched kernel text is fetched from PoU
arm64: alternatives: ensure secondary CPUs execute ISB after patching
arm64: make ll/sc __cmpxchg_case_##name asm consistent
arm64: dma-mapping: Simplify pgprot handling
arm64: restore cpu suspend/resume functionality
ARM64: PCI: do not enable resources on PROBE_ONLY systems
arm64: cmpxchg: truncate sub-word signed types before comparison
arm64: alternative: put secondary CPUs into polling loop during patch
arm64/Documentation: clarify wording regarding memory below the Image
arm64: lse: fix lse cmpxchg code indentation
arm64: remove redundant object file list
...
Pull ARM development updates from Russell King:
"Included in this update:
- moving PSCI code from ARM64/ARM to drivers/
- removal of some architecture internals from global kernel view
- addition of software based "privileged no access" support using the
old domains register to turn off the ability for kernel
loads/stores to access userspace. Only the proper accessors will
be usable.
- addition of early fixup support for early console
- re-addition (and reimplementation) of OMAP special interconnect
barrier
- removal of finish_arch_switch()
- only expose cpuX/online in sysfs if hotpluggable
- a number of code cleanups"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (41 commits)
ARM: software-based priviledged-no-access support
ARM: entry: provide uaccess assembly macro hooks
ARM: entry: get rid of multiple macro definitions
ARM: 8421/1: smp: Collapse arch_cpu_idle_dead() into cpu_die()
ARM: uaccess: provide uaccess_save_and_enable() and uaccess_restore()
ARM: mm: improve do_ldrd_abort macro
ARM: entry: ensure that IRQs are enabled when calling syscall_trace_exit()
ARM: entry: efficiency cleanups
ARM: entry: get rid of asm_trace_hardirqs_on_cond
ARM: uaccess: simplify user access assembly
ARM: domains: remove DOMAIN_TABLE
ARM: domains: keep vectors in separate domain
ARM: domains: get rid of manager mode for user domain
ARM: domains: move initial domain setting value to asm/domains.h
ARM: domains: provide domain_mask()
ARM: domains: switch to keeping domain value in register
ARM: 8419/1: dma-mapping: harmonize definition of DMA_ERROR_CODE
ARM: 8417/1: refactor bitops functions with BIT_MASK() and BIT_WORD()
ARM: 8416/1: Feroceon: use of_iomap() to map register base
ARM: 8415/1: early fixmap support for earlycon
...
Pull locking and atomic updates from Ingo Molnar:
"Main changes in this cycle are:
- Extend atomic primitives with coherent logic op primitives
(atomic_{or,and,xor}()) and deprecate the old partial APIs
(atomic_{set,clear}_mask())
The old ops were incoherent with incompatible signatures across
architectures and with incomplete support. Now every architecture
supports the primitives consistently (by Peter Zijlstra)
- Generic support for 'relaxed atomics':
- _acquire/release/relaxed() flavours of xchg(), cmpxchg() and {add,sub}_return()
- atomic_read_acquire()
- atomic_set_release()
This came out of porting qwrlock code to arm64 (by Will Deacon)
- Clean up the fragile static_key APIs that were causing repeat bugs,
by introducing a new one:
DEFINE_STATIC_KEY_TRUE(name);
DEFINE_STATIC_KEY_FALSE(name);
which define a key of different types with an initial true/false
value.
Then allow:
static_branch_likely()
static_branch_unlikely()
to take a key of either type and emit the right instruction for the
case. To be able to know the 'type' of the static key we encode it
in the jump entry (by Peter Zijlstra)
- Static key self-tests (by Jason Baron)
- qrwlock optimizations (by Waiman Long)
- small futex enhancements (by Davidlohr Bueso)
- ... and misc other changes"
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
jump_label/x86: Work around asm build bug on older/backported GCCs
locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
locking/qrwlock: Make use of _{acquire|release|relaxed}() atomics
locking/qrwlock: Implement queue_write_unlock() using smp_store_release()
locking/lockref: Remove homebrew cmpxchg64_relaxed() macro definition
locking, asm-generic: Add _{relaxed|acquire|release}() variants for 'atomic_long_t'
locking, asm-generic: Rework atomic-long.h to avoid bulk code duplication
locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations
locking, compiler.h: Cast away attributes in the WRITE_ONCE() magic
locking/static_keys: Make verify_keys() static
jump label, locking/static_keys: Update docs
locking/static_keys: Provide a selftest
jump_label: Provide a self-test
s390/uaccess, locking/static_keys: employ static_branch_likely()
x86, tsc, locking/static_keys: Employ static_branch_likely()
locking/static_keys: Add selftest
locking/static_keys: Add a new static_key interface
locking/static_keys: Rework update logic
locking/static_keys: Add static_key_{en,dis}able() helpers
...
Merge tag 'clk-for-linus-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk updates from Michael Turquette:
"The clk framework changes for 4.3 are mostly updates to existing
drivers and the addition of new clock drivers. Stephen Boyd has also
done a lot of subsystem-wide driver clean-ups (thanks!). There are
also fixes to the framework core and changes to better split clock
provider drivers from clock consumer drivers"
* tag 'clk-for-linus-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: (227 commits)
clk: s5pv210: add missing call to samsung_clk_of_add_provider()
clk: pistachio: correct critical clock list
clk: pistachio: Fix PLL rate calculation in integer mode
clk: pistachio: Fix override of clk-pll settings from boot loader
clk: pistachio: Fix 32bit integer overflows
clk: tegra: Fix some static checker problems
clk: qcom: Fix MSM8916 prng clock enable bit
clk: Add missing header for 'bool' definition to clk-conf.h
drivers/clk: appropriate __init annotation for const data
clk: rockchip: register pll mux before pll itself
clk: add bindings for the Ux500 clocks
clk/ARM: move Ux500 PRCC bases to the device tree
clk: remove duplicated code with __clk_set_parent_after
clk: Convert __clk_get_name(hw->clk) to clk_hw_get_name(hw)
clk: Constify clk_hw argument to provider APIs
clk: Hi6220: add stub clock driver
dt-bindings: clk: Hi6220: Document stub clock driver
dt-bindings: arm: Hi6220: add doc for SRAM controller
clk: atlas7: fix pll missed divide NR in fraction mode
clk: atlas7: fix bit field and its root clk for coresight_tpiu
...
When a task calls execve(), its FP/SIMD state is flushed so that
none of the original program state is observable by the incoming
program.
However, since this flushing consists of setting the in-memory copy
of the FP/SIMD state to all zeroes, the CPU field is set to CPU 0 as
well, which indicates to the lazy FP/SIMD preserve/restore code that
the FP/SIMD state does not need to be reread from memory if the task
is scheduled again on CPU 0 without any other tasks having entered
userland (or used the FP/SIMD in kernel mode) on the same CPU in the
meantime. If this happens, the FP/SIMD state of the old program will
still be present in the registers when the new program starts.
So set the CPU field to the invalid value of NR_CPUS when performing
the flush, by calling fpsimd_flush_task_state().
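The helper is tiny: NR_CPUS can never match smp_processor_id(), so the
next return to userspace is forced to reload the registers from memory
(a sketch):

  void fpsimd_flush_task_state(struct task_struct *t)
  {
          /* an impossible CPU id defeats the lazy-restore short cut */
          t->thread.fpsimd_state.cpu = NR_CPUS;
  }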
Cc: <stable@vger.kernel.org>
Reported-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Reported-by: Janet Liu <janet.liu@spreadtrum.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") incorrectly resolved a conflict on arch/arm64/kernel/Makefile
which resulted in a partial revert of 52da443ec4 ("arm64: perf: factor
out callchain code"), leading to perf_callchain.o depending on
CONFIG_HW_PERF_EVENTS instead of CONFIG_PERF_EVENTS.
This patch restores the kconfig dependency for perf_callchain.o.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'signed-kvm-ppc-next' of git://github.com/agraf/linux-2.6 into kvm-queue
Patch queue for ppc - 2015-08-22
Highlights for KVM PPC this time around:
- Book3S: A few bug fixes
- Book3S: Allow micro-threading on POWER8
We have a micro-optimisation on the fast syscall return path where we
take care to keep x0 live with the return value from the syscall so that
we can avoid restoring it from the stack. The benefit of doing this is
fairly suspect, since we will be restoring x1 from the stack anyway
(which lives adjacent in the pt_regs structure) and the only additional
cost is saving x0 back to pt_regs after the syscall handler, which could
be seen as a poor man's prefetch.
More importantly, this causes issues with the context tracking code.
The ct_user_enter macro ends up branching into C code, which is free to
use x0 as a scratch register and consequently leads to us returning junk
back to userspace as the syscall return value. Rather than special case
the context-tracking code, this patch removes the questionable
optimisation entirely.
Cc: <stable@vger.kernel.org>
Cc: Larry Bassel <larry.bassel@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since commit 8a14849 (arm64: KVM: Switch vgic save/restore to
alternative_insn) vgic_sr_vectors is not used anymore, so remove
remaining leftovers and kill the structure.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Since 906c55579a ("timekeeping: Copy the shadow-timekeeper over the
real timekeeper last") it has become possible on arm64 to:
- Obtain a CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE timestamp
via syscall.
- Subsequently obtain a timestamp for the same clock ID via VDSO which
predates the first timestamp (by one jiffy).
This is because arm64's update_vsyscall is deriving the coarse time
using the __current_kernel_time interface, when it should really be
using the timekeeper object provided to it by the timekeeping core.
It happened to work before only because __current_kernel_time would
access the same timekeeper object which had been passed to
update_vsyscall. This is no longer the case.
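The fix is to derive the coarse fields from the timekeeper passed in,
rather than from __current_kernel_time(); a sketch of the relevant
assignments:

  /* inside update_vsyscall(struct timekeeper *tk) */
  vdso_data->xtime_coarse_sec  = tk->xtime_sec;
  vdso_data->xtime_coarse_nsec = tk->tkr_mono.xtime_nsec >>
                                 tk->tkr_mono.shift;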
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This function may copy the si_addr_lsb, si_lower and si_upper fields to
user mode when they haven't been initialized, which can leak kernel
stack data to user mode.
Just checking the value of si_code is insufficient because the same
si_code value is shared between multiple signals. This is solved by
checking the value of si_signo in addition to si_code.
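For the fault case this means gating the optional fields on si_signo
too, e.g. (a sketch):

  /* si_addr_lsb is only meaningful for SIGBUS machine-check errors */
  if (from->si_signo == SIGBUS &&
      (from->si_code == BUS_MCEERR_AR || from->si_code == BUS_MCEERR_AO))
          err |= __put_user(from->si_addr_lsb, &to->si_addr_lsb);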
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This function can leak kernel stack data when the user siginfo_t has a
positive si_code value. The top 16 bits of si_code describe which fields
in the siginfo_t union are active, but they are treated inconsistently
between copy_siginfo_from_user32, copy_siginfo_to_user32 and
copy_siginfo_to_user.
copy_siginfo_from_user32 is called from rt_sigqueueinfo and
rt_tgsigqueueinfo in which the user has full control over the top 16 bits
of si_code.
This fixes the following information leaks:
x86: 8 bytes leaked when sending a signal from a 32-bit process to
itself. This leak grows to 16 bytes if the process uses x32.
(si_code = __SI_CHLD)
x86: 100 bytes leaked when sending a signal from a 32-bit process to
a 64-bit process. (si_code = -1)
sparc: 4 bytes leaked when sending a signal from a 32-bit process to a
64-bit process. (si_code = any)
parisc and s390 have similar bugs, but they are not vulnerable because
rt_[tg]sigqueueinfo have checks that prevent sending a positive si_code
to a different process. These bugs are also fixed for consistency.
Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The arm64 booting document requires that the bootloader has cleaned the
kernel image to the PoC. However, when a CPU re-enters the kernel due to
either a CPU hotplug "on" event or resuming from a low-power state (e.g.
cpuidle), the kernel text may in fact be dirty at the PoU due to things
like alternative patching or even module loading.
Thanks to I-cache speculation with the MMU off, stale instructions could
be fetched prior to enabling the MMU, potentially leading to crashes
when executing regions of code that have been modified at runtime.
This patch addresses the issue by ensuring that the local I-cache is
invalidated immediately after a CPU has enabled its MMU but before
jumping out of the identity mapping. Any stale instructions fetched from
the PoC will then be discarded and refetched correctly from the PoU.
Patching kernel text executed prior to the MMU being enabled is
prohibited, so the early entry code will always be clean.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order to guarantee that the patched instruction stream is visible to
a CPU, that CPU must execute an isb instruction after any related cache
maintenance has completed.
The instruction patching routines in kernel/insn.c get this right for
things like jump labels and ftrace, but the alternatives patching omits
it entirely leaving secondary cores in a potential limbo between the old
and the new code.
This patch adds an isb following the secondary polling loop in the
alternatives patching.
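Concretely, the secondary side of the patching callback now ends with
the barrier (a sketch; variable names illustrative):

  /* secondary CPUs spin here while the boot CPU patches the text */
  while (!READ_ONCE(patched))
          cpu_relax();
  isb();  /* synchronise this CPU's instruction stream with the new code */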
Signed-off-by: Will Deacon <will.deacon@arm.com>
To enable sharing with arm, move the core PSCI framework code to
drivers/firmware. This results in a minor gain in lines of code, but
this will quickly be amortised by the removal of code currently
duplicated in arch/arm.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since we've already stepped away from ENABLE being a JMP and DISABLE
being a NOP with the branch_default bits, and are going to make it even worse,
rename it to make it all clearer.
This way we don't mix multiple levels of logic attributes, but have a
plain 'physical' name for what the current instruction patching status
of a jump label is.
This is a first step in removing the naming confusion that has led to
a stream of avoidable bugs such as:
a833581e37 ("x86, perf: Fix static_key bug in load_mm_cr4()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
[ Beefed up the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
accidentally retained code for !CONFIG_SMP in cpu_resume function. This
resulted in the hash index being zeroed in x7 after proper computation,
which is then used to get the cpu context pointer while resuming.
This patch removes the remnant code and restores the cpu suspend/resume
functionality.
Fixes: 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On ARM64 PROBE_ONLY PCI systems resources are not currently claimed,
therefore they can't be enabled since they do not have a valid
parent pointer; this in turn prevents enabling PCI devices on
ARM64 PROBE_ONLY systems, causing PCI devices initialization to
fail.
To solve this issue, resources must be claimed when devices are
added on PROBE_ONLY systems, which ensures that the resource hierarchy
is validated and the resource tree is sane, but this requires changes
in the ARM64 resource management that can adversely affect existing
PCI set-ups (claiming resources on !PROBE_ONLY systems might break
existing ARM64 PCI platform implementations).
As a temporary solution in preparation for a proper resources claiming
implementation in ARM64 core, to enable PCI PROBE_ONLY systems on ARM64,
this patch adds a pcibios_enable_device() arch implementation that
simply prevents enabling resources on PROBE_ONLY systems (mirroring ARM
behaviour).
This is always a safe thing to do because on PROBE_ONLY systems the
configuration space set-up can be considered immutable, and it is in
preparation of proper resource claiming that would finally validate
the PCI resources tree in the ARM64 arch implementation on PROBE_ONLY
systems.
For !PROBE_ONLY systems resources enablement in pcibios_enable_device()
on ARM64 is implemented as in current PCI core, leaving the behaviour
unchanged.
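The hook itself is short; essentially the shape of the patch is:

  int pcibios_enable_device(struct pci_dev *dev, int mask)
  {
          /* never touch resources on PROBE_ONLY systems */
          if (pci_has_flag(PCI_PROBE_ONLY))
                  return 0;

          return pci_enable_resources(dev, mask);
  }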
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When patching the kernel text with alternatives, we may end up patching
parts of the stop_machine state machine (e.g. atomic_dec_and_test in
ack_state) and consequently corrupt the instruction stream of any
secondary CPUs.
This patch passes the cpu_online_mask to stop_machine, forcing all of
the CPUs into our own callback which can place the secondary cores into
a dumb (but safe!) polling loop whilst the patching is carried out.
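The callback gives the boot CPU the patching work and parks everyone
else; a sketch (names close to, but not guaranteed to match, the actual
routine):

  static int __apply_alternatives_multi_stop(void *unused)
  {
          static int patched;

          if (smp_processor_id()) {
                  /* secondary: spin until the boot CPU has finished */
                  while (!READ_ONCE(patched))
                          cpu_relax();
          } else {
                  __apply_alternatives(&kernel_alternatives);
                  WRITE_ONCE(patched, 1);
          }

          return 0;
  }

  void __init apply_alternatives_all(void)
  {
          /* force every online CPU into the callback above */
          stop_machine(__apply_alternatives_multi_stop, NULL,
                       cpu_online_mask);
  }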
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
#ifdefs") forces SMP on arm64. To build the necessary objects for SMP,
they were added to the arm64-obj-y rule in arch/arm64/kernel/Makefile,
without removing the arm64-obj-$(CONFIG_SMP) rule.
Remove redundant object file list depending on always-yes CONFIG_SMP in
arch/arm64/kernel/Makefile.
Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 4b3dc9679c ("arm64: force CONFIG_SMP=y and remove redundant
and therfore can not be selected anymore.
Remove dead #ifdef-block depending on UP_LATE_INIT in
arch/arm64/kernel/setup.c
Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de>
[will: kill do_post_cpus_up_work altogether]
Signed-off-by: Will Deacon <will.deacon@arm.com>
lib/list_sort.c defines a 'struct debug_el', where "el" is presumably a
contraction of "element". This conflicts with 'enum debug_el' in our
asm/debug-monitors.h header file, where "el" stands for Exception Level.
The result is a build failure when targeting allmodconfig, so rename our
enum to 'dbg_active_el' to be slightly more explicit about what it is.
Signed-off-by: Will Deacon <will.deacon@arm.com>
At boot, the UTF-16 UEFI vendor string is copied from the system
table into a char array with a size of 100 bytes. However, this
size of 100 bytes is also used for memremapping() the source,
which may not be sufficient if the vendor string exceeds 50
UTF-16 characters, and the placement of the vendor string inside
a 4 KB page happens to leave the end unmapped.
So use the correct '100 * sizeof(efi_char16_t)' for the size of
the mapping.
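A sketch of the corrected mapping; the destination buffer is measured
in UTF-16 characters, and so must the mapping be:

  char vendor[100] = "unknown";
  efi_char16_t *c16;
  int i;

  /* map 100 * sizeof(efi_char16_t) bytes, not 100 bytes */
  c16 = early_memremap(efi_to_phys(efi.systab->fw_vendor),
                       sizeof(vendor) * sizeof(efi_char16_t));
  for (i = 0; i < sizeof(vendor) - 1 && c16[i]; i++)
          vendor[i] = c16[i];
  vendor[i] = 0;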
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f84d02755f ("arm64: add EFI runtime services")
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
cpuid_feature_extract_field takes care of the fiddly ID register
field sign-extension, so use that instead of rolling our own version.
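For reference, the helper shifts a standard 4-bit ID register field to
the top of the register and arithmetic-shifts it back down,
sign-extending it in the process (a sketch matching the header of the
time):

  static inline int __attribute_const__
  cpuid_feature_extract_field(u64 features, int field)
  {
          /* 'field' is the bit position of the 4-bit ID field */
          return (s64)(features << (64 - 4 - field)) >> (64 - 4);
  }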
Signed-off-by: Will Deacon <will.deacon@arm.com>
Rework the cpufeature detection to support ISAR0 and use that for
detecting the presence of LSE atomics.
Signed-off-by: Will Deacon <will.deacon@arm.com>
On CPUs which support the LSE atomic instructions introduced in ARMv8.1,
it makes sense to use them in preference to ll/sc sequences.
This patch introduces runtime patching of atomic_t and atomic64_t
routines so that the call-site for the out-of-line ll/sc sequences is
patched with an LSE atomic instruction when we detect that
the CPU supports it.
If binutils is not recent enough to assemble the LSE instructions, then
the ll/sc sequences are inlined as though CONFIG_ARM64_LSE_ATOMICS=n.
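Conceptually each call site carries both sequences, and the boot-time
patcher selects one; an illustrative sketch using the ALTERNATIVE macro
(symbol names assumed):

  static inline void atomic_add(int i, atomic_t *v)
  {
          register int w0 asm ("w0") = i;
          register atomic_t *x1 asm ("x1") = v;

          asm volatile(ALTERNATIVE(
          /* default: branch to the out-of-line ll/sc implementation */
          "       bl      __ll_sc_atomic_add",
          /* patched in when ARM64_HAS_LSE_ATOMICS is detected */
          "       stadd   %w[i], %[v]",
          ARM64_HAS_LSE_ATOMICS)
          : [i] "+r" (w0), [v] "+Q" (v->counter)
          : "r" (x1)
          : "x30");
  }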
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add a CPU feature for the LSE atomic instructions, so that they can be
patched in at runtime when we detect that they are supported.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM v8.1 architecture introduces new atomic instructions to the A64
instruction set for things like cmpxchg, so advertise their availability
to userspace using a hwcap.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The generic slowpath WARN implementation prints a backtrace, but
the report_bug() based implementation does not, opting to print the
registers instead which is generally not as useful.
Ideally, report_bug() should be fixed to make the behaviour more
consistent, but in the meantime this patch generates a backtrace
directly from the arm64 backend instead so that this functionality
is not lost with the migration to report_bug().
As a side-effect, the backtrace will be outside the oops end
marker, but that's hard to avoid without modifying generic code.
This patch can go away if report_bug() grows the ability in the
future to generate a backtrace directly or call an arch hook at the
appropriate time.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently, the minimal default BUG() implementation from asm-
generic is used for arm64.
This patch uses the BRK software breakpoint instruction to generate
a trap instead, similarly to most other arches, with the generic
BUG code generating the dmesg boilerplate.
This allows bug metadata to be moved to a separate table and
reduces the amount of inline code at BUG and WARN sites. This also
avoids clobbering any registers before they can be dumped.
To mitigate the size of the bug table further, this patch makes
use of the existing infrastructure for encoding addresses within
the bug table as 32-bit offsets instead of absolute pointers.
(Note that this limits the kernel size to 2GB.)
Traps are registered at arch_initcall time for aarch64, but BUG
has minimal real dependencies and it is desirable to be able to
generate bug splats as early as possible. This patch redirects
all debug exceptions caused by BRK directly to bug_handler() until
the full debug exception support has been initialised.
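In outline, BUG() reduces to a single BRK plus out-of-line metadata; a
much simplified sketch (the real macro also records file/line in a
__bug_table entry, stored as a 32-bit offset rather than an absolute
pointer, which is where the 2GB limit comes from):

  /* simplified sketch: BUG_BRK_IMM is the immediate reserved for BUG */
  #define BUG() do {                                              \
          asm volatile("brk %0" : : "i" (BUG_BRK_IMM));           \
          unreachable();                                          \
  } while (0)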
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The way the KGDB_DYN_BRK_INS_BYTEx macros are declared is more
complex than it needs to be. Also, the macros are only used in one
place, which is arch-specific anyway.
This patch refactors the macros to simplify them, and exposes an
argument so that we can have a single macro instead of 4.
As a side effect, this patch also fixes some anomalous spellings of
"KGDB".
These changes alter the compile-time types of some integer constants
that are harmless but trigger truncation warnings in gcc when
assigning to 32-bit variables. This patch adds an explicit cast
for the affected cases.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The naming of DBG_ESR_VAL_BRK is inconsistent with the way other
similar macros are named.
This patch makes the naming more consistent, and appends "64"
as a reminder that this ESR pattern only matches from AArch64
state.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A small change to the patch_map() function: use set_fixmap_offset() to
make the code clearer.
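For comparison, a sketch of the before/after shape:

  /* before */
  set_fixmap(fixmap, page_to_phys(page));
  return (void *)(__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));

  /* after */
  return (void *)set_fixmap_offset(fixmap,
                  page_to_phys(page) + (uintaddr & ~PAGE_MASK));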
Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When allocating memory for the kernel image, try the AllocatePages()
boot service to obtain memory at the preferred offset of
'dram_base + TEXT_OFFSET', and only revert to efi_low_alloc() if that
fails. This is the only way to allocate at the base of DRAM if DRAM
starts at 0x0, since efi_low_alloc() refuses to allocate at 0x0.
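A sketch of the allocation order (names and the fallback alignment are
illustrative):

  efi_physical_addr_t alloc_addr = dram_base + TEXT_OFFSET;
  unsigned long fallback_addr;

  /* try the preferred address first; this may legitimately be 0x0 */
  status = efi_call_early(allocate_pages, EFI_ALLOCATE_ADDRESS,
                          EFI_LOADER_DATA, nr_pages, &alloc_addr);
  if (status != EFI_SUCCESS)
          /* efi_low_alloc() refuses to allocate at 0x0 */
          status = efi_low_alloc(sys_table, kernel_memsize, SZ_2M,
                                 &fallback_addr);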
Tested-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since both CONFIG_ACPI and CONFIG_OF are enabled when booting using ACPI
tables on ARM64 platforms, we get few device tree warnings which are not
valid for ACPI boot. We can use of_have_populated_dt to check if the
device tree is populated or not before throwing out those errors.
This patch uses of_have_populated_dt to remove non legitimate device
tree warning when booting using ACPI tables.
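i.e. warnings of this kind become conditional on a DT actually being
present, along the lines of:

  /* only complain about missing DT properties when booted with a DT */
  if (of_have_populated_dt())
          pr_err("%s: missing enable-method property\n",
                 dn->full_name);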
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
'Privileged Access Never' is a new ARMv8.1 feature which prevents
privileged code from accessing any virtual address where read or write
access is also permitted at EL0.
This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
helpers temporarily to permit access.
This will catch kernel bugs where user memory is accessed directly.
'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
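Concretely, the uaccess helpers bracket the access with PSTATE.PAN
writes that are patched to nops on CPUs without the feature (a sketch):

  /* inside get_user()/put_user(): a nop unless ARM64_HAS_PAN is set */
  asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
                  CONFIG_ARM64_PAN));
  /* ... the ldr/str to the user pointer goes here ... */
  asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                  CONFIG_ARM64_PAN));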
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When a new cpu feature is available, the cpu feature bits will have some
initial value, which is incremented when the feature is updated.
This patch changes 'register_value' to be 'min_field_value', and checks
the feature bits value (interpreted as a signed int) is greater than this
minimum.
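The resulting check is a signed comparison against the minimum;
essentially the shape of the new helper:

  static bool
  feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
  {
          int val = cpuid_feature_extract_field(reg, entry->field_pos);

          /* signed compare: the field may legitimately be negative */
          return val >= entry->min_field_value;
  }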
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch adds an 'enable()' callback to cpu capability/feature
detection, allowing features that require some setup or configuration
to get this opportunity once the feature has been detected.
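A sketch of the hook and where it is invoked (struct trimmed to the
relevant fields):

  struct arm64_cpu_capabilities {
          const char *desc;
          u16 capability;
          bool (*matches)(const struct arm64_cpu_capabilities *);
          void (*enable)(void);   /* optional post-detection setup */
          /* ... */
  };

  /* in the capability check loop, once a capability is detected */
  if (caps[i].enable)
          caps[i].enable();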
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1
register.
This patch moves this function into a header file.
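For reference, the helper is a simple read-modify-write of SCTLR_EL1
(a sketch matching the function being moved):

  static inline void config_sctlr_el1(u32 clear, u32 set)
  {
          u32 val;

          asm volatile("mrs %0, sctlr_el1" : "=r" (val));
          val &= ~clear;
          val |= set;
          asm volatile("msr sctlr_el1, %0" : : "r" (val));
  }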
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Convert the dynamic patching for ARM64_WORKAROUND_845719 over to
the newly added alternative assembler macros.
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Most of the cache events an architecture might support do not map well
to those provided by the ARM architecture, and as such most entries in
the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid
physical event identifier, the *_UNSUPPORTED macros expand to a non-zero
value and thus each unsupported event must be explicitly initialised as
such. This leads to large diffs when adding support for a new CPU, and
makes it difficult to spot the important information.
This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED
macros to initialise all entries to *_UNSUPPORTED before overriding this
for the specific events we actually support, resulting in a significant
source code reduction.
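With designated range initialisers doing the bulk work, a map only has
to spell out the events that are actually supported, e.g. (event names
illustrative of the armv8 map):

  #define PERF_MAP_ALL_UNSUPPORTED \
          [0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

  static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
          PERF_MAP_ALL_UNSUPPORTED,
          [PERF_COUNT_HW_CPU_CYCLES]   = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
          [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
  };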
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Nobody seems to be producing !SMP systems anymore, so this is just
becoming a source of kernel bugs, particularly if people want to use
coherent DMA with non-shared pages.
This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
code in the process.
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently bundle the callchain handling code with the PMU code,
despite the fact the two are distinct, and the former can be useful even
in the absence of the latter.
Follow the example of arch/arm and factor the callchain handling into
its own file dependent on CONFIG_PERF_EVENTS rather than
CONFIG_HW_PERF_EVENTS.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The compat ptrace interface allows access to the TLS register, hardware
breakpoints and watchpoints, syscall number. However, a native task
using the native ptrace interface to debug compat tasks (e.g. multi-arch
gdb) only has access to the general and VFP register sets. The compat
ptrace interface cannot be accessed from a native task.
This patch adds a new user_aarch32_ptrace_view which contains the TLS,
hardware breakpoint/watchpoint and syscall number regsets in addition to
the existing GPR and VFP regsets. This view is backwards compatible with
the previous kernels. Core dumping of 32-bit tasks and compat ptrace are
not affected since the original user_aarch32_view is preserved.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Yao Qi <yao.qi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch_find_n_match_cpu_physical_id parses the device tree to get the
device node for a given logical cpu index. However, since ARM PMUs get
probed after the CPU device nodes are stashed while registering the
cpus, we can use of_cpu_device_node_get to avoid another DT parse.
This patch replaces arch_find_n_match_cpu_physical_id with
of_cpu_device_node_get to reuse the stashed value directly instead.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARM64 pmu prints an error message in event_init() when
no hardware PMU is available. This is pretty annoying, as it keeps
printing the message for every single attempt, unnecessarily flooding
the kernel logs. The return code is sufficient for
the user to figure out the reason.
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This is a preparatory patch for moving irq_data struct members.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 0c8c0f03e3 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
moved the thread_struct to the bottom of task_struct. As a result, the
offset is now too large to be used in an immediate add on arm64 with
some kernel configs:
arch/arm64/kernel/entry.S: Assembler messages:
arch/arm64/kernel/entry.S:588: Error: immediate out of range
arch/arm64/kernel/entry.S:597: Error: immediate out of range
This patch calculates the offset using an additional register instead of
an immediate offset.
Fixes: 0c8c0f03e3 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Finally advertise the KVM capability for SET_GUEST_DEBUG. Once arm
support is added this check can be moved to the common
kvm_vm_ioctl_check_extension() code.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This introduces a level of indirection for the debug registers. Instead
of using the sys_regs[] directly we store registers in a structure in
the vcpu. The new kvm_arm_reset_debug_ptr() sets the debug ptr to the
guest context.
Because we no longer give the sys_regs offset for the sys_reg_desc->reg
field, but instead the index into a debug-specific struct we need to
add a number of additional trap functions for each register. Also as the
generic user-space access code no longer works, we have
introduced a new pair of function pointers to the sys_reg_desc structure
to override the generic code when needed.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This is a precursor for later patches which will need to do more to
setup debug state before entering the hyp.S switch code. The existing
functionality for setting mdcr_el2 has been moved out of hyp.S and now
uses the value kept in vcpu->arch.mdcr_el2.
As the assembler previously masked and preserved MDCR_EL2.HPMN, I've
had to add a mechanism to save the value of mdcr_el2 as a per-cpu
variable during the initialisation code. The kernel never sets this
number so we are assuming the bootcode has set up the correct value
here.
This also moves the conditional setting of the TDA bit from the hyp code
into the C code which is currently used for the lazy debug register
context switch code.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This file doesn't use the clk provider APIs. Remove the include.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
We currently set x27 in compat_sys_sigreturn_wrapper and
compat_sys_rt_sigreturn_wrapper, similarly to what we do with r8/why on
32-bit ARM, in an attempt to prevent sigreturns from being restarted.
However, on arm64 we have always used pt_regs::syscallno for syscall
restarting (for both native and compat tasks), and x27 is never
inspected again before being overwritten in kernel_exit.
This patch removes the pointless register assignments.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently we enable debug exceptions before reading ESR_EL1 in both
el0_inv and el1_inv. If a debug exception is taken before we read
ESR_EL1, the value will have been corrupted.
As el*_inv is typically fatal, an intervening debug exception results in
misleading debug information being logged to the console, but is not
otherwise harmful.
As with the other entry paths, we can use the ESR_EL1 value stashed
earlier in the exception entry (in x25 for el0_sync{,_compat}, and x1
for el1_sync), giving us better error reporting in this case.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For those parts of the arm64 ACPI code that need to check GICC subtables
in the MADT, use the new BAD_MADT_GICC_ENTRY macro instead of the previous
BAD_MADT_ENTRY. The new macro takes into account differences in the size
of the GICC subtable that the old macro did not; this caused failures even
though the subtable entries are valid.
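The new macro validates the subtable against the GICC-specific length,
along these lines (a sketch):

  #define BAD_MADT_GICC_ENTRY(entry, end)                            \
          (!(entry) ||                                               \
           (unsigned long)(entry) + sizeof(*(entry)) > (end) ||      \
           (entry)->header.length != ACPI_MADT_GICC_LENGTH)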
Fixes: aeb823bbac ("ACPICA: ACPI 6.0: Add changes for FADT table.")
Signed-off-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes (and cleanups) from Catalin Marinas:
"Various arm64 fixes:
- suspicious RCU usage warning
- BPF (out of bounds array read and endianness conversion)
- perf (of_node usage after of_node_put, cpu_pmu->plat_device
assignment)
- huge pmd/pud check for value 0
- rate-limiting should only take unhandled signals into account
Clean-up:
- incorrect use of pgprot_t type
- unused header include
- __init annotation to arm_cpuidle_init
- pr_debug instead of pr_error for disabled GICC entries in
ACPI/MADT"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Fix show_unhandled_signal_ratelimited usage
ARM64 / SMP: Switch pr_err() to pr_debug() for disabled GICC entry
arm64: cpuidle: add __init section marker to arm_cpuidle_init
arm64: Don't report clear pmds and puds as huge
arm64: perf: fix unassigned cpu_pmu->plat_device when probing PMU PPIs
arm64: perf: Don't use of_node after putting it
arm64: fix incorrect use of pgprot_t variable
arm64/hw_breakpoint.c: remove unnecessary header
arm64: bpf: fix endianness conversion bugs
arm64: bpf: fix out-of-bounds read in bpf2a64_offset()
ARM64: smp: Fix suspicious RCU usage with ipi tracepoints
Commit 86dca36e6b introduced ratelimited usage for
'unhandled_signal' messages.
The commit checks the ratelimit irrespective of whether
the signal is handled or not, which is wrong and leads
to false reports like the below in dmesg :
__do_user_fault: 127 callbacks suppressed
Do the ratelimit check only if the signal is unhandled.
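In other words, the two checks swap places at the call sites (a
sketch):

  /* before: consumed a ratelimit slot even for handled signals */
  if (show_unhandled_signals_ratelimited() && unhandled_signal(tsk, sig))
          pr_info("...");

  /* after: only unhandled signals touch the ratelimit state */
  if (unhandled_signal(tsk, sig) && show_unhandled_signals_ratelimited())
          pr_info("...");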
Fixes: 86dca36e6b ("arm64: use private ratelimit state along with show_unhandled_signals")
Cc: Vladimir Murzin <Vladimir.Murzin@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
It is normal for firmware to present GICC entries (processors) with the
disabled flag set in the ACPI MADT. Taking a system of 16 cpus as an
example, the firmware may present 8 cpus enabled and another 8 disabled
in the MADT; the disabled cpus can be hot-added later.
Firmware may also present more cpus than the hardware actually has,
with the unused ones disabled, making them easy to enable when the
hardware gains such cpus; this keeps the firmware code scalable.
So disabled cpus in the MADT are not an error; switch pr_err()
to pr_debug() to make the boot a little quieter by default.
Since the hwid for a disabled cpu is often invalid, and we check for an
invalid hwid first in the code, cpus intended to be hot-added later
would be filtered out and not counted as possible cpus. So move the
enabled check before the hwid one, to prepare the code to count
disabled cpus when cpu hot-plug is introduced.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
It is not needed after booting, this patch moves the arm_cpuidle_init()
function to the __init section.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Add "make xenconfig" to assist in generating configs for Xen guests.
- Preparatory cleanups necessary for supporting 64 KiB pages in ARM
guests.
- Automatically use hvc0 as the default console in ARM guests.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQEcBAABAgAGBQJVkpoqAAoJEFxbo/MsZsTRu3IH/2AMPx2i65hoSqfHtGf3sz/z
XNfcidVmOElFVXGaW83m0tBWMemT5LpOGRfiq5sIo8xt/8xD2vozEkl/3kkf3RrX
EmZDw3E8vmstBdBTjWdovVhNenRc0m0pB5daS7wUdo9cETq1ag1L3BHTB3fEBApO
74V6qAfnhnq+snqWhRD3XAk3LKI0nWuWaV+5HsmxDtnunGhuRLGVs7mwxZGg56sM
mILA0eApGPdwyVVpuDe0SwV52V8E/iuVOWTcomGEN2+cRWffG5+QpHxQA8bOtF6O
KfqldiNXOY/idM+5+oSm9hespmdWbyzsFqmTYz0LvQvxE8eEZtHHB3gIcHkE8QU=
=danz
-----END PGP SIGNATURE-----
Merge tag 'for-linus-4.2-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen updates from David Vrabel:
"Xen features and cleanups for 4.2-rc0:
- add "make xenconfig" to assist in generating configs for Xen guests
- preparatory cleanups necessary for supporting 64 KiB pages in ARM
guests
- automatically use hvc0 as the default console in ARM guests"
* tag 'for-linus-4.2-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
block/xen-blkback: s/nr_pages/nr_segs/
block/xen-blkfront: Remove invalid comment
block/xen-blkfront: Remove unused macro MAXIMUM_OUTSTANDING_BLOCK_REQS
arm/xen: Drop duplicate define mfn_to_virt
xen/grant-table: Remove unused macro SPP
xen/xenbus: client: Fix call of virt_to_mfn in xenbus_grant_ring
xen: Include xen/page.h rather than asm/xen/page.h
kconfig: add xenconfig defconfig helper
kconfig: clarify kvmconfig is for kvm
xen/pcifront: Remove usage of struct timeval
xen/tmem: use BUILD_BUG_ON() in favor of BUG_ON()
hvc_xen: avoid uninitialized variable warning
xenbus: avoid uninitialized variable warning
xen/arm: allow console=hvc0 to be omitted for guests
arm,arm64/xen: move Xen initialization earlier
arm/xen: Correctly check if the event channel interrupt is present
Commit d795ef9aa8 ("arm64: perf: don't warn about missing
interrupt-affinity property for PPIs") added a check for PPIs so that
we avoid parsing the interrupt-affinity property for these naturally
affine interrupts.
Unfortunately, this check can trigger an early (successful) return and
we will not assign the value of cpu_pmu->plat_device. This patch fixes
the issue.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
It's possible, albeit unlikely, that using the of_node here will
reference freed memory. Call of_node_put() after printing the
name to be safe.
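i.e. the put moves after the last use of the node (a sketch):

  pr_warn("failed to parse interrupt-affinity[%d] for %s\n",
          i, dn->name);
  of_node_put(dn);        /* safe: dn is not referenced past this point */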
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'libnvdimm-for-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm
Pull libnvdimm subsystem from Dan Williams:
"The libnvdimm sub-system introduces, in addition to the
libnvdimm-core, 4 drivers / enabling modules:
NFIT:
Instantiates an "nvdimm bus" with the core and registers memory
devices (NVDIMMs) enumerated by the ACPI 6.0 NFIT (NVDIMM Firmware
Interface table).
After registering NVDIMMs the NFIT driver then registers "region"
devices. A libnvdimm-region defines an access mode and the
boundaries of persistent memory media. A region may span multiple
NVDIMMs that are interleaved by the hardware memory controller. In
turn, a libnvdimm-region can be carved into a "namespace" device and
bound to the PMEM or BLK driver which will attach a Linux block
device (disk) interface to the memory.
PMEM:
Initially merged in v4.1 this driver for contiguous spans of
persistent memory address ranges is re-worked to drive
PMEM-namespaces emitted by the libnvdimm-core.
In this update the PMEM driver, on x86, gains the ability to assert
that writes to persistent memory have been flushed all the way
through the caches and buffers in the platform to persistent media.
See memcpy_to_pmem() and wmb_pmem().
BLK:
This new driver enables access to persistent memory media through
"Block Data Windows" as defined by the NFIT. The primary difference
of this driver to PMEM is that only a small window of persistent
memory is mapped into system address space at any given point in
time.
Per-NVDIMM windows are reprogrammed at run time, per-I/O, to access
different portions of the media. BLK-mode, by definition, does not
support DAX.
BTT:
This is a library, optionally consumed by either PMEM or BLK, that
converts a byte-accessible namespace into a disk with atomic sector
update semantics (prevents sector tearing on crash or power loss).
The sinister aspect of sector tearing is that most applications do
not know they have an atomic sector dependency. At least today's
disks rarely ever tear sectors, and if they do, one almost certainly
gets a CRC error on access. NVDIMMs will always tear, and always
silently. Until an application is audited to be robust in the
presence of sector tearing, the usage of BTT is recommended.
Thanks to: Ross Zwisler, Jeff Moyer, Vishal Verma, Christoph Hellwig,
Ingo Molnar, Neil Brown, Boaz Harrosh, Robert Elliott, Matthew Wilcox,
Andy Rudoff, Linda Knippers, Toshi Kani, Nicholas Moulin, Rafael
Wysocki, and Bob Moore"
* tag 'libnvdimm-for-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm: (33 commits)
arch, x86: pmem api for ensuring durability of persistent memory updates
libnvdimm: Add sysfs numa_node to NVDIMM devices
libnvdimm: Set numa_node to NVDIMM devices
acpi: Add acpi_map_pxm_to_online_node()
libnvdimm, nfit: handle unarmed dimms, mark namespaces read-only
pmem: flag pmem block devices as non-rotational
libnvdimm: enable iostat
pmem: make_request cleanups
libnvdimm, pmem: fix up max_hw_sectors
libnvdimm, blk: add support for blk integrity
libnvdimm, btt: add support for blk integrity
fs/block_dev.c: skip rw_page if bdev has integrity
libnvdimm: Non-Volatile Devices
tools/testing/nvdimm: libnvdimm unit test infrastructure
libnvdimm, nfit, nd_blk: driver for BLK-mode access persistent memory
nd_btt: atomic sector updates
libnvdimm: infrastructure for btt devices
libnvdimm: write blk label set
libnvdimm: write pmem label set
libnvdimm: blk labels and namespace instantiation
...
John Stultz reported an RCU splat on ARM with ipi trace events
enabled. It looks like the same problem exists on ARM64.
At this point in the IPI handling path we haven't called
irq_enter() yet, so RCU doesn't know that we're about to exit
idle and properly warns that we're using RCU from an idle CPU.
Use trace_ipi_entry_rcuidle() instead of trace_ipi_entry() so
that RCU is informed about our exit from idle.
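A sketch of the change in the IPI handler (the surrounding context is
illustrative; the _rcuidle tracepoint variants wrap the event in
RCU_NONIDLE() so they are safe to use before irq_enter()):

    /* arch/arm64/kernel/smp.c: handle_IPI(), roughly */
    if ((unsigned)ipinr < NR_IPI) {
            trace_ipi_entry_rcuidle(ipi_types[ipinr]);
            __inc_irq_stat(cpu, ipi_irqs[ipinr]);
    }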
Cc: John Stultz <john.stultz@linaro.org>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org> # 3.17+
Fixes: 45ed695ac1 ("ARM64: add IPI tracepoints")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
printk_ratelimit() shares its ratelimiting state with other callers, which
may lead to scenarios where, by the time we want to print out debug
information, we are already rate-limited and nothing appears in dmesg - this
makes exception-trace a rather poor helper in debugging.
Additionally, we have imbalance with some messages limited with global
ratelimit state and other messages limited with their private state
defined via pr_*_ratelimited().
To address this inconsistency show_unhandled_signals_ratelimited()
macro is introduced and caller sites are converted to use it.
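A minimal sketch of such a macro, assuming the usual ratelimit helpers; it
gives exception-trace a private ratelimit state instead of the shared
printk_ratelimit() one:

    #include <linux/ratelimit.h>

    #define show_unhandled_signals_ratelimited()                        \
    ({                                                                  \
            static DEFINE_RATELIMIT_STATE(_rs,                         \
                                          DEFAULT_RATELIMIT_INTERVAL,  \
                                          DEFAULT_RATELIMIT_BURST);    \
            bool __show = false;                                        \
            if (show_unhandled_signals && __ratelimit(&_rs))            \
                    __show = true;                                      \
            __show;                                                     \
    })

Callers then test show_unhandled_signals_ratelimited() where they previously
tested show_unhandled_signals && printk_ratelimit().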
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When building the kernel with a bare-metal (ELF) toolchain, the -shared
option may not be passed down to collect2, resulting in silent corruption
of the vDSO image (in particular, the DYNAMIC section is omitted).
The effect of this corruption is that the dynamic linker fails to find
the vDSO symbols and libc is instead used for the syscalls that we
intended to optimise (e.g. gettimeofday). Functionally, there is no
issue as the sigreturn trampoline is still intact and located by the
kernel.
This patch fixes the problem by explicitly passing -shared to the linker
when building the vDSO.
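The fix is essentially a one-line Makefile change; a sketch of the idea (the
exact flag spelling in arch/arm64/kernel/vdso/Makefile is illustrative):

    # Some bare-metal (ELF) toolchains neglect to pass -shared down to
    # collect2, so tell the linker explicitly when building the vDSO.
    ccflags-y += -Wl,-shared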
Cc: <stable@vger.kernel.org>
Reported-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
Reported-by: James Greenhalgh <james.greenhalgh@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch renames __cpu_suspend to cpu_suspend so that it's aligned
with ARM32. It also removes the redundant wrapper created.
This is in preparation to implement generic PSCI system suspend using
the cpu_{suspend,resume} which now has the same interface on both ARM
and ARM64.
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We check against compat_sp, but print out arm64's sp - fix it.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 6c81fe7925 ("arm64: enable context tracking") did not
update el0_sp_pc to use ct_user_exit, but this appears to have been
unintentional. In commit 6ab6463aeb ("arm64: adjust el0_sync so
that a function can be called") we made x0 available, and in the return
to userspace we call ct_user_enter in the kernel_exit macro.
Due to this, we currently don't correctly inform RCU of the user->kernel
transition, and may erroneously account for time spent in the kernel as
if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
is enabled.
As we do record the kernel->user transition, a userspace application
making accesses from an unaligned stack pointer can demonstrate the
imbalance, provoking the following warning:
------------[ cut here ]------------
WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
Modules linked in:
CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
Hardware name: ARM Juno development board (r0) (DT)
Call trace:
[<ffffffc000089914>] dump_backtrace+0x0/0x124
[<ffffffc000089a48>] show_stack+0x10/0x1c
[<ffffffc0005b3cbc>] dump_stack+0x84/0xc8
[<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0
[<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20
[<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4
[<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114
[<ffffffc00008561c>] el1_preempt+0x4/0x28
[<ffffffc0001b8040>] exit_files+0x38/0x4c
[<ffffffc0000b5b94>] do_exit+0x430/0x978
[<ffffffc0000b614c>] do_group_exit+0x40/0xd4
[<ffffffc0000c0208>] get_signal+0x23c/0x4f4
[<ffffffc0000890b4>] do_signal+0x1ac/0x518
[<ffffffc000089650>] do_notify_resume+0x5c/0x68
---[ end trace 963c192600337066 ]---
This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
correcting the context tracking for this case.
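A sketch of the fix (the surrounding instructions are illustrative of the
entry.S code at the time):

    el0_sp_pc:
            // Stack or PC alignment exception handling
            mrs     x26, far_el1
            // enable interrupts before calling the main handler
            enable_dbg_and_irq
            ct_user_exit                    // tell RCU we have left userspace
            mov     x0, x26
            mov     x1, x25
            mov     x2, sp
            bl      do_sp_pc_abort
            b       ret_to_user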
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Fixes: 6c81fe7925 ("arm64: enable context tracking")
Cc: <stable@vger.kernel.org> # v3.17+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
So far, we configured the world-switch by having a small array
of pointers to the save and restore functions, depending on the
GIC used on the platform.
Loading these values each time is a bit silly (they never change),
and it makes sense to rely on the instruction patching instead.
This leads to a nice cleanup of the code.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add a new item to the feature set (ARM64_HAS_SYSREG_GIC_CPUIF)
to indicate that we have a system register GIC CPU interface.
This will help KVM switch to alternative instruction patching.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When building without CONFIG_HOTPLUG_CPU, GCC complains (rightly) that
psci_tos_resident_on is unused:
arch/arm64/kernel/psci.c:61:13: warning: ‘psci_tos_resident_on’ defined but not used [-Wunused-function]
static bool psci_tos_resident_on(int cpu)
As it's only ever used when CONFIG_HOTPLUG_CPU is selected, let's move
it into the existing ifdef.
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Mark: write commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
FPSIMD currently doesn't handle CPU hotplug, which introduces a bug across a
CPU down/up cycle: afterwards, the FPSIMD hardware registers hold their
default values rather than any process's fpsimd context. Setting the CPU's
fpsimd_state to NULL on CPU_DEAD forces the context to be reloaded for the
next thread, removing any chance of the reload being skipped.
Without this, if process A is the last user process on CPU N before the CPU
goes down and the first user process on the same CPU N after it comes back
up, A's fpsimd_state.cpu is still the current cpu id and
per_cpu(fpsimd_last_state) still points at A's fpsimd_state, so the kernel
will not reload the context on the return to user space.
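A sketch of the hotplug hook this adds (names and notifier plumbing are
illustrative): on CPU_DEAD, invalidate the per-cpu tracking pointer so the
next user thread on that CPU is forced to reload its context:

    static int fpsimd_cpu_hotplug_notifier(struct notifier_block *nfb,
                                           unsigned long action, void *hcpu)
    {
            unsigned int cpu = (long)hcpu;

            switch (action) {
            case CPU_DEAD:
            case CPU_DEAD_FROZEN:
                    /* force a context reload on the next return to user */
                    per_cpu(fpsimd_last_state, cpu) = NULL;
                    break;
            }
            return NOTIFY_OK;
    }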
Signed-off-by: Janet Liu <janet.liu@spreadtrum.com>
Signed-off-by: Xiongshan An <xiongshan.an@spreadtrum.com>
Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
[catalin.marinas@arm.com: some mostly cosmetic clean-ups]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
A kernel thread's default fpsimd state is zero. When forking a thread whose
parent is a kernel thread, we save the hardware context into the parent's
fpsimd state, even though that hardware context belongs to a user process.
Because kernel threads don't use fpsimd this doesn't introduce a correctness
issue, but it does add a little unnecessary cost.
Signed-off-by: Janet Liu <janet.liu@spreadtrum.com>
Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
the fast path. It's then possible to have TIF_SYSCALL_TRACE added in
the middle of the syscall, but ret_fast_syscall doesn't check this flag
again. This causes a ptrace syscall-exit-stop to be missed.
For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
Now the completion of the fork should have a syscall-exit-stop.
Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
fast exit path. Do the same on arm64.
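A sketch of the resulting fast exit path (labels and macros are illustrative
of entry.S at the time): re-read the thread flags with interrupts disabled,
and fall back to the trace-exit path if any syscall-work bit has appeared:

    ret_fast_syscall:
            disable_irq                     // irqs off so flags can't change
            ldr     x1, [tsk, #TI_FLAGS]    // re-check for syscall tracing
            and     x2, x1, #_TIF_SYSCALL_WORK
            cbnz    x2, ret_fast_syscall_trace
            and     x2, x1, #_TIF_WORK_MASK
            cbnz    x2, fast_work_pending
            enable_step_tsk x1, x2
            kernel_exit 0, ret = 1
    ret_fast_syscall_trace:
            enable_irq                      // irqs back on for the slow path
            b       __sys_trace_return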
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Josh Stone <jistone@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
asm/alternative-asm.h and asm/alternative.h are extremely similar,
and really deserve to live in the same file (as this makes further
modifications a bit easier).
Fold the content of alternative-asm.h into alternative.h, and
update the few users.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since all branches are PC-relative on AArch64, these instructions
cannot be used as an alternative with the simplistic approach
we currently have (the immediate has been computed from
the .altinstr_replacement section, and ends up being completely off
if the target is outside of the replacement sequence).
This patch handles the branch instructions in a different way,
using the insn framework to recompute the immediate, and generate
the right displacement in the above case.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The workaround for erratum 845719 is currently using
a branch between two alternate sequences, which is
quite fragile, and that we are going to break as we
rework the alternative code.
This patch reworks the workaround to fit in a single
alternative sequence. The generated code itself is
unchanged.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In order to deal with branches located in alternate sequences,
but pointing to the main kernel text, it is required to extract
the relative displacement encoded in the instruction, and to be
able to update said instruction with a new offset (once it is
known).
For this, we introduce three new helpers:
- aarch64_insn_is_branch_imm is a predicate indicating if the
instruction is an immediate branch
- aarch64_get_branch_offset returns a signed value representing
the byte offset encoded in a branch instruction
- aarch64_set_branch_offset takes an instruction and an offset,
and returns the corresponding updated instruction.
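Sketched prototypes and an illustrative use (the rebase_branch() function
here is hypothetical, showing how the helpers compose):

    bool aarch64_insn_is_branch_imm(u32 insn);
    s32 aarch64_get_branch_offset(u32 insn);
    u32 aarch64_set_branch_offset(u32 insn, s32 offset);

    /* rebase a branch found at 'src' so it still reaches its target
     * when the instruction is run at 'dst' instead */
    static u32 rebase_branch(u32 insn, unsigned long src, unsigned long dst)
    {
            s32 offset = aarch64_get_branch_offset(insn);

            return aarch64_set_branch_offset(insn, src + offset - dst);
    }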
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Two cleanups of the asm function cpu_resume():
- The global variable sleep_idmap_phys always points to idmap_pg_dir,
so we can just use that value directly in the CPU resume path.
- Unclutter the load of sleep_save_sp::save_ptr_stash_phys.
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit ea8c2e1124 ("arm64: Extend the idmap to the whole kernel
image") changed the early page table code so that the entire kernel
Image is covered by the identity map. This allows functions that
need to enable or disable the MMU to reside anywhere in the kernel
Image.
However, this change has the unfortunate side effect that the Image
cannot cross a physical 512 MB alignment boundary anymore, since the
early page table code cannot deal with the Image crossing a /virtual/
512 MB alignment boundary.
So instead, reduce the ID map to a single page, that is populated by
the contents of the .idmap.text section. Only three functions reside
there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
If new code is introduced that needs to manipulate the MMU state, it
should be added to this section as well.
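The mechanics, roughly (the example function is a placeholder; linker-script
symbol names are illustrative): code is tagged into .idmap.text, and the
linker script keeps the section on a page of its own:

    /* in .S files */
            .pushsection ".idmap.text", "ax"
    ENTRY(example_mmu_helper)
            /* MMU-manipulating code lives here */
            ret
    ENDPROC(example_mmu_helper)
            .popsection

    /* in vmlinux.lds.S */
            . = ALIGN(PAGE_SIZE);
            __idmap_text_start = .;
            *(.idmap.text)
            __idmap_text_end = .;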
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently, the FDT blob needs to be in the same 512 MB region as
the kernel, so that it can be mapped into the kernel virtual memory
space very early on using a minimal set of statically allocated
translation tables.
Now that we have early fixmap support, we can relax this restriction,
by moving the permanent FDT mapping to the fixmap region instead.
This way, the FDT blob may be anywhere in memory.
This also moves the vetting of the FDT to mmu.c, since the early
init code in head.S does not handle mapping of the FDT anymore.
At the same time, fix up some comments in head.S that have gone stale.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since commit a4780adeef ("ARM: 7735/2: Preserve the user r/w register
TPIDRURW on context switch and fork"), arch/arm/ has context switched
the user-writable TLS register, so do the same for compat tasks running
under the arm64 kernel.
Reported-by: André Hentschel <nerv@dawncrow.de>
Tested-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently, Xen is initialized/discovered in an initcall. This doesn't
allow us to support earlyprintk or choosing the preferred console when
running on Xen.
The current function xen_guest_init is now split into two parts:
- xen_early_init: Check if there is a Xen node in the device tree
and setup domain type
- xen_guest_init: Retrieve the information from the device node and
initialize Xen (grant table, shared page...)
The former is called in setup_arch, while the latter is an initcall.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Will Deacon <will.deacon@arm.com>
ACPI 6.0 formalizes e820-type-7 and efi-type-14 as persistent memory.
Mark it "reserved" and allow it to be claimed by a persistent memory
device driver.
This definition is in addition to the Linux kernel's existing type-12
definition that was recently added in support of shipping platforms with
NVDIMM support that predate ACPI 6.0 (which now classifies type-12 as
OEM reserved).
Note, /proc/iomem can be consulted for differentiating legacy
"Persistent Memory (legacy)" E820_PRAM vs standard "Persistent Memory"
E820_PMEM.
Cc: Boaz Harrosh <boaz@plexistor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jens Axboe <axboe@fb.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Tested-by: Toshi Kani <toshi.kani@hp.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The 32-bit ARM port doesn't have ACPI headers, and conditionally
including them is going to look horrendous. In preparation for sharing
the PSCI invocation code with 32-bit, move the acpi_psci_* function
declarations and definitions such that the PSCI client code need not
include ACPI headers.
While it would seem like we could simply hide the ACPI includes in
psci.h, the ACPI headers have hilarious circular dependencies which make
this infeasible without reorganising most of ACPICA. So rather than
doing that, move the acpi_psci_* prototypes into psci.h.
The psci_acpi_init function is made dependent on CONFIG_ACPI (with a
stub implementation in asm/psci.h) such that it need not be built for
32-bit ARM or kernels without ACPI support. The currently missing __init
annotations are added to the prototypes in the header.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
A PSCI 1.0 implementation may choose to use the new extended StateID
format, the presence of which may be queried via the PSCI_FEATURES call.
The layout of this new StateID format is incompatible with the existing
format, and so to handle both we must abstract attempts to parse the
fields.
In preparation for PSCI 1.0 support, this patch introduces
psci_power_state_loses_context and psci_power_state_is_valid functions
to query information from a PSCI power state, which is no longer
decomposed (and hence the pack/unpack functions are removed). As it is
no longer decomposed, it is now passed round as an opaque u32 token.
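A sketch of such accessors for the original StateID format (mask names per
the PSCI 0.2 uapi header; the bodies are illustrative):

    #include <uapi/linux/psci.h>

    static bool psci_power_state_loses_context(u32 state)
    {
            return state & PSCI_0_2_POWER_STATE_TYPE_MASK;
    }

    static bool psci_power_state_is_valid(u32 state)
    {
            const u32 valid_mask = PSCI_0_2_POWER_STATE_ID_MASK |
                                   PSCI_0_2_POWER_STATE_TYPE_MASK |
                                   PSCI_0_2_POWER_STATE_AFFL_MASK;

            return !(state & ~valid_mask);
    }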
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Software resident in the secure world (a "Trusted OS") may cause CPU_OFF
calls for the CPU it is resident on to be denied. Such a denial would be
fatal for the kernel, and so we must detect when this can happen before
the point of no return.
This patch implements Trusted OS detection for PSCI 0.2+ systems, using
MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU. When a trusted OS is detected
as resident on a particular CPU, attempts to hot unplug that CPU will be
denied early, before they can prove fatal.
Trusted OS migration is not implemented by this patch. Implementation of
migratable UP trusted OSs seems unlikely, and the right policy for
migration is unclear (and will likely differ across implementations). As
such, it is likely that migration will require cooperation with Trusted
OS drivers.
PSCI implementations prior to 0.2 do not provide the facility to detect
the presence of a Trusted OS, nor the CPU any such OS is resident on, so
without additional information it is not possible to handle Trusted OSs
with PSCI 0.1.
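A sketch of the detection (function and variable names illustrative): query
MIGRATE_INFO_TYPE, and if a uniprocessor Trusted OS is reported, record the
CPU returned by MIGRATE_INFO_UP_CPU so that hot-unplug of that CPU can be
refused early:

    static int resident_cpu = -1;

    static bool psci_tos_resident_on(int cpu)
    {
            return cpu == resident_cpu;
    }

    static void __init psci_init_migrate(void)
    {
            unsigned long cpuid;
            int type, cpu;

            type = psci_ops.migrate_info_type();
            if (type == PSCI_0_2_TOS_MP || type == PSCI_RET_NOT_SUPPORTED)
                    return; /* no UP Trusted OS to worry about */

            cpuid = psci_migrate_info_up_cpu();
            cpu = get_logical_index(cpuid & MPIDR_HWID_BITMASK);
            if (cpu >= 0)
                    resident_cpu = cpu;
    }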
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
PSCI_VERSION and MIGRATE_INFO_UP_CPU return unsigned values, with
the latter returning a 64-bit value. However, the PSCI invocation
functions have prototypes returning int.
This patch upgrades the invocation functions to return unsigned long,
with a new typedef to keep things legible. As PSCI_VERSION cannot return
a negative value, the erroneous check against PSCI_RET_NOT_SUPPORTED is
also removed. The unrelated psci_initcall_t typedef is moved closer to
its first user, to avoid confusion with the invocation functions.
In preparation for sharing the code with ARM, unsigned long is used in
preference of u64. In the SMC32 calling convention, the relevant fields
will be 32 bits wide.
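The resulting invocation prototype, roughly, so both calling conventions can
share one type:

    typedef unsigned long (psci_fn)(unsigned long, unsigned long,
                                    unsigned long, unsigned long);

    static psci_fn *invoke_psci_fn;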
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
PSCI 0.1 did not define canonical IDs for CPU_ON, CPU_OFF, CPU_SUSPEND,
or MIGRATE, and so these need to be provided when using firmware
compliant to PSCI 0.1.
However, functions introduced in 0.2 or later have canonical IDs, and
these cannot be provided via DT. There's no need to indirect the IDs via
a table; they can be used directly at callsites (and already are for
SYSTEM_OFF and SYSTEM_RESET).
This patch removes the unnecessary function ID indirection for
AFFINITY_INFO and MIGRATE_INFO_TYPE.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
cpu_kill currently returns one for success and zero for failure, which
is unlike all the other cpu_operations, which return zero for success
and an error code upon failure. This difference is unnecessarily
confusing.
Make cpu_kill consistent with the other cpu_operations.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
This commit replaces the open-coded CPU-offline notification with new
common code. In particular, this change avoids calling scheduler code
using RCU from an offline CPU that RCU is ignoring. This is a minimal
change. A more intrusive change might invoke the cpu_check_up_prepare()
and cpu_set_state_online() functions at CPU-online time, which would
allow onlining to throw an error if the CPU did not go offline properly.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The code that initializes cpus on arm64 is currently split in two
different code paths that carry out DT and ACPI cpus initialization.
Most of the code executing SMP initialization is common and should
be merged to reduce discrepancies between ACPI and DT initialization
and to have code initializing cpus in a single common place in the
kernel.
This patch refactors arm64 SMP cpus initialization code to merge
ACPI and DT boot paths in a common file and to create sanity
checks that can be reused by both boot methods.
Current code assumes PSCI is the only available boot method
when arm64 boots with ACPI; this can be easily extended if/when
the ACPI parking protocol is merged into the kernel.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM64 CPU operations such as cpu_init and cpu_init_idle take
a struct device_node pointer as a parameter, which corresponds to
the device tree node of the logical cpu on which the operation
has to be applied.
With the advent of ACPI on arm64, where MADT static table entries
are used to initialize cpus, the device tree node parameter
in cpu_ops hooks becomes useless when booting with ACPI, since
in that case cpu device tree nodes are not present and can not be
used for cpu initialization.
The current cpu_init hook requires a struct device_node pointer
parameter because it is called while parsing the device tree to
initialize CPUs, when the cpu_logical_map (that is used to match
a cpu node reg property to a device tree node) for a given logical
cpu id is not set up yet. This means that the cpu_init hook cannot
rely on the of_get_cpu_node function to retrieve the device tree
node corresponding to the logical cpu id passed in as parameter,
so the cpu device tree node must be passed in as a parameter to fix
this catch-22 dependency cycle.
This patch reshuffles the cpu_logical_map initialization code so
that the cpu_init cpu_ops hook can safely use the of_get_cpu_node
function to retrieve the cpu device tree node, removing the need for
the device tree node pointer parameter.
In the process, the patch removes device tree node parameters
from all cpu_ops hooks, in preparation for SMP DT/ACPI cpus
initialization consolidation.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The documented semantics of flush_cache_all are not possible to provide
for arm64 (short of flushing the entire physical address space by VA),
and there are currently no users; KVM uses VA maintenance exclusively,
cpu_reset is never called, and the only two users outside of arch code
cannot be built for arm64.
While cpu_soft_reset and related functions (which call flush_cache_all)
were thought to be useful for kexec, their current implementations only
serve to mask bugs. For correctness kexec will need to perform
maintenance by VA anyway to account for system caches, line migration,
and other subtleties of the cache architecture. As the extent of this
cache maintenance will be kexec-specific, it should probably live in the
kexec code.
This patch removes flush_cache_all, and related unused components,
preventing further abuse.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Geoff Levand <geoff@infradead.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark the PMU interrupts as non-threadable, as is the case with
arch/arm: d9c3365 ARM: 7813/1: Mark pmu interupt IRQF_NO_THREAD
Acked-by: Will Deacon <will.deacon@arm.com>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit d795ef9aa8 ("arm64: perf: don't warn about missing
interrupt-affinity property for PPIs") added a check for PPIs so that
we avoid parsing the interrupt-affinity property for these naturally
affine interrupts.
Unfortunately, this check can trigger an early (successful) return and
we will leak the irqs array. This patch fixes the issue by reordering
the code so that the check is performed before any independent
allocation.
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This reverts most of commit fef7f2b201.
It turns out that there are a couple of problems with the way we're
fixing up branch instructions used as part of alternative instruction
sequences:
(1) If the branch target is also in the alternative sequence, we'll
generate a branch into the .altinstructions section which actually
gets freed.
(2) The calls to aarch64_insn_{read,write} bring an awful lot more
code into the patching path (e.g. taking locks, poking the fixmap,
invalidating the TLB) which isn't actually needed for the early
patching run under stop_machine, but makes the use of alternative
sequences extremely fragile (as we can't patch code that could be
used by the patching code).
Given that no code actually requires alternative patching of immediate
branches, let's remove this support for now and revisit it when we've
got a user. We leave the updated size check, since we really do require
the sequences to be the same length.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
With commit d5efd9cc9c ("arm64: pmu: add support for interrupt-affinity
property"), we print a warning when we find a PMU SPI with a missing
missing interrupt-affinity property in a pmu node. Unfortunately, we
pass the wrong (NULL) device node to of_node_full_name, resulting in
unhelpful messages such as:
hw perfevents: Failed to parse <no-node>/interrupt-affinity[0]
This patch fixes the name to that of the pmu node.
Fixes: d5efd9cc9c (arm64: pmu: add support for interrupt-affinity property)
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
PPIs are affine by nature, so the interrupt-affinity property is not
used and therefore we shouldn't print a warning in its absence.
Reported-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull initial ACPI support for arm64 from Will Deacon:
"This series introduces preliminary ACPI 5.1 support to the arm64
kernel using the "hardware reduced" profile. We don't support any
peripherals yet, so it's fairly limited in scope:
- MEMORY init (UEFI)
- ACPI discovery (RSDP via UEFI)
- CPU init (FADT)
- GIC init (MADT)
- SMP boot (MADT + PSCI)
- ACPI Kconfig options (dependent on EXPERT)
ACPI for arm64 has been in development for a while now and hardware
has been available that can boot with either FDT or ACPI tables. This
has been made possible by both changes to the ACPI spec to cater for
ARM-based machines (known as "hardware-reduced" in ACPI parlance) but
also a Linaro-driven effort to get this supported on top of the Linux
kernel. This pull request is the result of that work.
These changes allow us to initialise the CPUs, interrupt controller,
and timers via ACPI tables, with memory information and cmdline coming
from EFI. We don't support a hybrid ACPI/FDT scheme. Of course,
there is still plenty of work to do (a serial console would be nice!)
but I expect that to happen on a per-driver basis after this core
series has been merged.
Anyway, the diff stat here is fairly horrible, but splitting this up
and merging it via all the different subsystems would have been
extremely painful. Instead, we've got all the relevant Acks in place
and I've not seen anything other than trivial (Kconfig) conflicts in
-next (for completeness, I've included my resolution below). Nearly
half of the insertions fall under Documentation/.
So, we'll see how this goes. Right now, it all depends on EXPERT and
I fully expect people to use FDT by default for the immediate future"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (31 commits)
ARM64 / ACPI: make acpi_map_gic_cpu_interface() as void function
ARM64 / ACPI: Ignore the return error value of acpi_map_gic_cpu_interface()
ARM64 / ACPI: fix usage of acpi_map_gic_cpu_interface
ARM64: kernel: acpi: honour acpi=force command line parameter
ARM64: kernel: acpi: refactor ACPI tables init and checks
ARM64: kernel: psci: let ACPI probe PSCI version
ARM64: kernel: psci: factor out probe function
ACPI: move arm64 GSI IRQ model to generic GSI IRQ layer
ARM64 / ACPI: Don't unflatten device tree if acpi=force is passed
ARM64 / ACPI: additions of ACPI documentation for arm64
Documentation: ACPI for ARM64
ARM64 / ACPI: Enable ARM64 in Kconfig
XEN / ACPI: Make XEN ACPI depend on X86
ARM64 / ACPI: Select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64
clocksource / arch_timer: Parse GTDT to initialize arch timer
irqchip: Add GICv2 specific ACPI boot support
ARM64 / ACPI: Introduce ACPI_IRQ_MODEL_GIC and register device's gsi
ACPI / processor: Make it possible to get CPU hardware ID via GICC
ACPI / processor: Introduce phys_cpuid_t for CPU hardware ID
ARM64 / ACPI: Parse MADT for SMP initialization
...
Merge tag 'cpumask-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull final removal of deprecated cpus_* cpumask functions from Rusty Russell:
"This is the final removal (after several years!) of the obsolete
cpus_* functions, prompted by their mis-use in staging.
With these function removed, all cpu functions should only iterate to
nr_cpu_ids, so we finally only allocate that many bits when cpumasks
are allocated offstack"
* tag 'cpumask-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (25 commits)
cpumask: remove __first_cpu / __next_cpu
cpumask: resurrect CPU_MASK_CPU0
linux/cpumask.h: add typechecking to cpumask_test_cpu
cpumask: only allocate nr_cpumask_bits.
Fix weird uses of num_online_cpus().
cpumask: remove deprecated functions.
mips: fix obsolete cpumask_of_cpu usage.
x86: fix more deprecated cpu function usage.
ia64: remove deprecated cpus_ usage.
powerpc: fix deprecated CPU_MASK_CPU0 usage.
CPU_MASK_ALL/CPU_MASK_NONE: remove from deprecated region.
staging/lustre/o2iblnd: Don't use cpus_weight
staging/lustre/libcfs: replace deprecated cpus_ calls with cpumask_
staging/lustre/ptlrpc: Do not use deprecated cpus_* functions
blackfin: fix up obsolete cpu function usage.
parisc: fix up obsolete cpu function usage.
tile: fix up obsolete cpu function usage.
arm64: fix up obsolete cpu function usage.
mips: fix up obsolete cpu function usage.
x86: fix up obsolete cpu function usage.
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Here are the core arm64 updates for 4.1.
Highlights include a significant rework to head.S (allowing us to boot
on machines with physical memory at a really high address), an AES
performance boost on Cortex-A57 and the ability to run a 32-bit
userspace with 64k pages (although this requires said userspace to be
built with a recent binutils).
The head.S rework spilt over into KVM, so there are some changes under
arch/arm/ which have been acked by Marc Zyngier (KVM co-maintainer).
In particular, the linker script changes caused us some issues in
-next, so there are a few merge commits where we had to apply fixes on
top of a stable branch.
Other changes include:
- AES performance boost for Cortex-A57
- AArch32 (compat) userspace with 64k pages
- Cortex-A53 erratum workaround for #845719
- defconfig updates (new platforms, PCI, ...)"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (39 commits)
arm64: fix midr range for Cortex-A57 erratum 832075
arm64: errata: add workaround for cortex-a53 erratum #845719
arm64: Use bool function return values of true/false not 1/0
arm64: defconfig: updates for 4.1
arm64: Extract feature parsing code from cpu_errata.c
arm64: alternative: Allow immediate branch as alternative instruction
arm64: insn: Add aarch64_insn_decode_immediate
ARM: kvm: round HYP section to page size instead of log2 upper bound
ARM: kvm: assert on HYP section boundaries not actual code size
arm64: head.S: ensure idmap_t0sz is visible
arm64: pmu: add support for interrupt-affinity property
dt: pmu: extend ARM PMU binding to allow for explicit interrupt affinity
arm64: head.S: ensure visibility of page tables
arm64: KVM: use ID map with increased VA range if required
arm64: mm: increase VA range of identity map
ARM: kvm: implement replacement for ld's LOG2CEIL()
arm64: proc: remove unused cpu_get_pgd macro
arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
arm64: remove __calc_phys_offset
arm64: merge __enable_mmu and __turn_mmu_on
...
Pull exec domain removal from Richard Weinberger:
"This series removes execution domain support from Linux.
The idea behind exec domains was to support different ABIs. The
feature was never complete nor stable. Let's rip it out and make the
kernel signal handling code less complicated"
* 'exec_domain_rip_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (27 commits)
arm64: Removed unused variable
sparc: Fix execution domain removal
Remove rest of exec domains.
arch: Remove exec_domain from remaining archs
arc: Remove signal translation and exec_domain
xtensa: Remove signal translation and exec_domain
xtensa: Autogenerate offsets in struct thread_info
x86: Remove signal translation and exec_domain
unicore32: Remove signal translation and exec_domain
um: Remove signal translation and exec_domain
tile: Remove signal translation and exec_domain
sparc: Remove signal translation and exec_domain
sh: Remove signal translation and exec_domain
s390: Remove signal translation and exec_domain
mn10300: Remove signal translation and exec_domain
microblaze: Remove signal translation and exec_domain
m68k: Remove signal translation and exec_domain
m32r: Remove signal translation and exec_domain
m32r: Autogenerate offsets in struct thread_info
frv: Remove signal translation and exec_domain
...
Merge tag 'pm+acpi-4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI updates from Rafael Wysocki:
"These are mostly fixes and cleanups all over, although there are a few
items that sort of fall into the new feature category.
First off, we have new callbacks for PM domains that should help us to
handle some issues related to device initialization in a better way.
There also is some consolidation in the unified device properties API
area allowing us to use that interface for accessing data coming from
platform initialization code in addition to firmware-provided data.
We have some new device/CPU IDs in a few drivers, support for new
chips and a new cpufreq driver too.
Specifics:
- Generic PM domains support update including new PM domain callbacks
to handle device initialization better (Russell King, Rafael J
Wysocki, Kevin Hilman)
- Unified device properties API update including a new mechanism for
accessing data provided by platform initialization code (Rafael J
Wysocki, Adrian Hunter)
- ARM cpuidle update including ARM32/ARM64 handling consolidation
(Daniel Lezcano)
- intel_idle update including support for the Silvermont Core in the
Baytrail SOC and for the Airmont Core in the Cherrytrail and
Braswell SOCs (Len Brown, Mathias Krause)
- New cpufreq driver for Hisilicon ACPU (Leo Yan)
- intel_pstate update including support for the Knights Landing chip
(Dasaratharaman Chandramouli, Kristen Carlson Accardi)
- QorIQ cpufreq driver update (Tang Yuantian, Arnd Bergmann)
- powernv cpufreq driver update (Shilpasri G Bhat)
- devfreq update including Tegra support changes (Tomeu Vizoso,
MyungJoo Ham, Chanwoo Choi)
- powercap RAPL (Running-Average Power Limit) driver update including
support for Intel Broadwell server chips (Jacob Pan, Mathias Krause)
- ACPI device enumeration update related to the handling of the
special PRP0001 device ID allowing DT-style 'compatible' property
to be used for ACPI device identification (Rafael J Wysocki)
- ACPI EC driver update including limited _DEP support (Lan Tianyu,
Lv Zheng)
- ACPI backlight driver update including a new mechanism to allow
native backlight handling to be forced on non-Windows 8 systems and
a new quirk for Lenovo Ideapad Z570 (Aaron Lu, Hans de Goede)
- New Windows Vista compatibility quirk for Sony VGN-SR19XN (Chen Yu)
- Assorted ACPI fixes and cleanups (Aaron Lu, Martin Kepplinger,
Masanari Iida, Mika Westerberg, Nan Li, Rafael J Wysocki)
- Fixes related to suspend-to-idle for the iTCO watchdog driver and
the ACPI core system suspend/resume code (Rafael J Wysocki, Chen Yu)
- PM tracing support for the suspend phase of system suspend/resume
transitions (Zhonghui Fu)
- Configurable delay for the system suspend/resume testing facility
(Brian Norris)
- PNP subsystem cleanups (Peter Huewe, Rafael J Wysocki)"
* tag 'pm+acpi-4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (74 commits)
ACPI / scan: Fix NULL pointer dereference in acpi_companion_match()
ACPI / scan: Rework modalias creation when "compatible" is present
intel_idle: mark cpu id array as __initconst
powercap / RAPL: mark rapl_ids array as __initconst
powercap / RAPL: add ID for Broadwell server
intel_pstate: Knights Landing support
intel_pstate: remove MSR test
cpufreq: fix qoriq uniprocessor build
ACPI / scan: Take the PRP0001 position in the list of IDs into account
ACPI / scan: Simplify acpi_match_device()
ACPI / scan: Generalize of_compatible matching
device property: Introduce firmware node type for platform data
device property: Make it possible to use secondary firmware nodes
PM / watchdog: iTCO: stop watchdog during system suspend
cpufreq: hisilicon: add acpu driver
ACPI / EC: Call acpi_walk_dep_device_list() after installing EC opregion handler
cpufreq: powernv: Report cpu frequency throttling
intel_idle: Add support for the Airmont Core in the Cherrytrail and Braswell SOCs
intel_idle: Update support for Silvermont Core in Baytrail SOC
PM / devfreq: tegra: Register governor on module init
...
arch/arm64/kernel/signal.c: In function ‘handle_signal’:
arch/arm64/kernel/signal.c:290:22: warning: unused variable ‘thread’ [-Wunused-variable]
Fixes: arm64: Remove signal translation and exec_domain
Reported-by: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
As execution domain support is gone we can remove
signal translation from the signal code and remove
exec_domain from thread_info.
Signed-off-by: Richard Weinberger <richard@nod.at>
Register MIDR_EL1 is masked to get variant and revision fields, then
compared against midr_range_min and midr_range_max when checking
whether CPU is affected by any particular erratum. However, variant
and revision fields in MIDR_EL1 are separated by 16 bits, so the min
and max of midr range should be constructed accordingly, otherwise
the patch will not be applied when the variant field is non-zero.
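For example (field layout per the architecture; the erratum bound shown is
illustrative): with the variant at MIDR bits [23:20] and the revision at
bits [3:0], a range covering r0p0 through r1p2 must shift the variant up:

    #define MIDR_VARIANT_SHIFT      20

    /* r0p0 */
    .midr_range_min = 0,
    /* r1p2: variant 1 in bits [23:20], revision 2 in bits [3:0] */
    .midr_range_max = (1 << MIDR_VARIANT_SHIFT) | 2,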
Cc: stable@vger.kernel.org # 3.19+
Acked-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Bo Yan <byan@nvidia.com>
[will: use MIDR_VARIANT_SHIFT to construct upper bound]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When running a compat (AArch32) userspace on Cortex-A53, a load at EL0
from a virtual address that matches the bottom 32 bits of the virtual
address used by a recent load at (AArch64) EL1 might return incorrect
data.
This patch works around the issue by writing to the contextidr_el1
register on the exception return path when returning to a 32-bit task.
This workaround is patched in at runtime based on the MIDR value of the
processor.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since the only caller of acpi_parse_gic_cpu_interface() doesn't
need the return value, make it have a void return type to avoid
introducing subtle bugs, and update the comments of the function
accordingly.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
MADT scanning will stop when it gets an error from the handler,
acpi_map_gic_cpu_interface(), on arm64. However, we need to
find all of the enabled CPUs so that SMP initialization can work
properly. So, if an error occurs in this case, ignore it for
now so that we can find all of the enabled CPUs.
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
As we detect more architectural features at runtime, it makes
sense to reuse the existing framework whilst avoiding calling
a feature an erratum...
This patch extracts the core capability parsing, moves it into
a new file (cpufeature.c), and lets the CPU errata detection code
use it.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since all immediate branches are PC-relative on AArch64, these
instructions cannot be used as an alternative with the simplistic
approach we currently have (the immediate has been computed from
the .altinstr_replacement section, and ends up being completely off
if we insert it directly).
This patch handles the b and bl instructions in a different way,
using the insn framework to recompute the immediate, and generate
the right displacement.
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Patching an instruction sometimes requires extracting the immediate
field from this instruction. To facilitate this, and avoid
potential duplication of code, add aarch64_insn_decode_immediate
as the reciprocal to aarch64_insn_encode_immediate.
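The new helper mirrors the existing encode side, roughly:

    u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
                                      u32 insn, u64 imm);
    u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type,
                                      u32 insn);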
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
An upcoming patch will depend on tai_ns() and NMI-safe ktime_get_raw_fast(),
so merge timers/core here in a separate topic branch until it's all cooked
and timers/core is merged upstream.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In preparation of adding another tkr field, rename this one to
tkr_mono. Also rename tk_read_base::base_mono to tk_read_base::base,
since the structure is not specific to CLOCK_MONOTONIC and the mono
name got added to the tk_read_base instance.
Lots of trivial churn.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150319093400.344679419@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
acpi_parse_gic_cpu_interface calls acpi_map_gic_cpu_interface by both
passing a 32-bit value in the u8 enabled parameter and then subsequently
ignoring its return value.
Sort it out.
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If acpi=force is passed on the command line, it forces ACPI to be
the only available boot method, hence it must be left enabled even
if the initialization and sanity checks on ACPI tables fail.
This patch refactors ACPI initialization to prevent disabling ACPI
if acpi=force is passed on the command line.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Current ACPI init code on ARM64 relies on acpi_table_parse() API to
check if the FADT is present and to carry out sanity checks on that.
The handler passed to the acpi_table_parse() function and used to
carry out the parsing on the requested table returns a value that is
ignored by the acpi_table_parse() function, so it is not possible
to propagate errors back to the acpi_table_parse() caller through
the handler.
This forces the ARM64 ACPI init code to have disable_acpi() calls
scattered all over the place, which makes the code unwieldy and not
easy to follow.
This patch refactors the ARM64 ACPI init code by creating a
self-contained function (ie acpi_fadt_sanity_check()) that carries
out the required checks on the FADT and returns an adequate value
to the caller. This allows creating a common error path that disables
ACPI, making the code more readable and easier to change should
further FADT checks be added in the future.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
PSCI v0.2+ allows the kernel to probe the PSCI firmware version.
This patch replaces the default initialization of PSCI v0.2+
functions with code that allows probing PSCI firmware version
and initializes PSCI functions accordingly.
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
PSCI v0.2+ versions provide a specific PSCI call (PSCI_VERSION) to
detect the PSCI version at run-time. Current PSCI v0.2 init code
carries out the version probing in the PSCI 0.2 DT init function,
but the version probing does not depend on DT, so it can be factored
out to make it available for other boot mechanisms (ie ACPI) to
reuse. The psci_probe() probing function can be easily extended
to add detection and initialization of PSCI functions defined in
PSCI versions >0.2.
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The code deployed to implement the GSI to linux IRQ number mapping on
arm64 turns out to be generic enough that it can be moved to ACPI core
code, along with its respective config option ACPI_GENERIC_GSI,
selectable on architectures that can reuse the same code.
The current ACPI IRQ mapping code is not integrated in the kernel IRQ
domain infrastructure; in particular there is no way to look up the
IRQ domain associated with a particular interrupt controller, so this
first version of the generic GSI code carries out the GSI<->IRQ mapping
relying on the IRQ default domain, which is supposed to always be set
on a specific architecture in case the domain structure passed to the
irq_create/find_mapping() functions is missing.
This patch moves the arm64 acpi functions that implement the gsi mappings:
acpi_gsi_to_irq()
acpi_register_gsi()
acpi_unregister_gsi()
to ACPI core code. Since the generic GSI<->domain mapping is based on IRQ
domains, it can be extended as soon as a way to map an interrupt
controller to an IRQ domain is implemented for ACPI in the IRQ domain
layer.
x86 and ia64 code for GSI mappings cannot rely on the generic GSI
layer at present for legacy reasons, so they do not select the
ACPI_GENERIC_GSI config options and keep relying on their arch
specific GSI mapping layer.
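A minimal sketch of the default-domain lookup described above, using
the acpi_gsi_to_irq() entry point named earlier (error handling
elided); passing NULL selects the IRQ default domain:

  int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
  {
  	/*
  	 * Only the default domain is supported at present, so pass
  	 * NULL as the irq_domain parameter.
  	 */
  	*irq = irq_find_mapping(NULL, gsi);

  	return 0;
  }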
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The policy is that once we pass acpi=force in the early param, we
will not unflatten the device tree, even if ACPI ends up disabled
because ACPI table init failed. Fix the code by checking both
acpi_disabled and param_acpi_force before the device tree is
unflattened.
CC: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
ACPI reduced hardware mode is disabled by default, but ARM64
can only run properly in ACPI hardware reduced mode, so select
ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64.
If the firmware is not using hardware reduced ACPI mode, we
will disable ACPI to avoid nightmares such as accessing
registers which are not available on ARM64.
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the information presented by the GTDT (Generic Timer Description
Table) to initialize the arch timer (not memory-mapped).
CC: Daniel Lezcano <daniel.lezcano@linaro.org>
CC: Thomas Gleixner <tglx@linutronix.de>
Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
An ACPI kernel uses the MADT table for proper GIC initialization. It
needs to parse the GIC-related subtables, collect CPU interface and
distributor addresses and call the driver initialization function
(which is hardware abstraction agnostic). FDT initializes GICv1/2 in
a similar way.
NOTE: This commit allows basic GICv1/2 functionality to be initialized.
While a simple GICv2 init call is used for now, any further GIC features
require generic infrastructure for proper ACPI irqchip initialization.
That mechanism, and stacked irqdomains to support the GICv2
MSI/virtualization extension, GICv3/4 and its ITS, are considered as
next steps.
CC: Jason Cooper <jason@lakedaemon.net>
CC: Marc Zyngier <marc.zyngier@arm.com>
CC: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce ACPI_IRQ_MODEL_GIC, which is needed for ARM64 as the GIC is
used, and then register a device's gsi with the core IRQ subsystem.
acpi_register_gsi() is similar to the DT-based irq_of_parse_and_map();
since a gsi is unique in the system, use the hwirq number directly
for the mapping.
We are going to implement stacked domains when GICv2m, GICv3, ITS
support are added.
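A hedged sketch of the mapping path: since the gsi is system-unique it
doubles as the hwirq number, and NULL again selects the default domain
(trigger/polarity programming and error handling abbreviated):

  int acpi_register_gsi(struct device *dev, u32 gsi, int trigger,
  		      int polarity)
  {
  	unsigned int irq;

  	/* The gsi is used directly as the hwirq number */
  	irq = irq_create_mapping(NULL, gsi);
  	if (!irq)
  		return -EINVAL;

  	/* Trigger/polarity configuration of the GIC omitted here */
  	return irq;
  }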
CC: Marc Zyngier <marc.zyngier@arm.com>
Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The MADT contains the MPIDR information which is essential for
SMP initialization. Parse the GIC cpu interface structures to
get the MPIDR value, map it to cpu_logical_map(), and add each
enabled cpu with a valid MPIDR into cpu_possible_map.
ACPI 5.1 only has two explicit methods to boot up SMP, PSCI and the
Parking protocol, but the Parking protocol is currently only specified
for ARMv7, so make PSCI the only SMP boot protocol until the ACPI spec
or the Parking protocol spec is updated. Parking protocol patches for
SMP boot will be sent upstream when the new version of the Parking
protocol is ready.
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
There are two flags: PSCI_COMPLIANT and PSCI_USE_HVC. When set,
the former signals to the OS that the firmware is PSCI compliant.
The latter selects the appropriate conduit for PSCI calls by
toggling between Hypervisor Calls (HVC) and Secure Monitor Calls
(SMC).
The FADT contains this information in ACPI 5.1. The FADT is parsed
during ACPI table init and copied to struct acpi_gbl_FADT, so use
the flags in struct acpi_gbl_FADT for PSCI init.
Since ACPI 5.1 doesn't support self-defined PSCI function IDs, only
PSCI 0.2+ is supported in ACPI.
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If the early boot methods of acpi are happy that we have valid ACPI
tables and acpi=force has been passed, then do not unflatten the
devicetree, effectively disabling further hardware probing from DT.
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This implements the following policy to decide whether ACPI should
be used to boot the system:
- acpi=off: ACPI will not be used to boot the system, even if there is
no alternative available (e.g., device tree is empty)
- acpi=force: only ACPI will be used to boot the system; if that fails,
there will be no fallback to alternative methods (such as device tree)
- otherwise, ACPI will be used as a fallback if the device tree turns out
to lack a platform description; the heuristic to decide this is whether
/chosen is the only node present at depth 1
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Rafael J. Wysocki <rjw@rjwysocki.net>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
CONFIG_ACPI depends on CONFIG_PCI on x86 and ia64. In the ARM64
server world we will have PCIe in most cases, but some systems may
not, so making CONFIG_ACPI depend on CONFIG_PCI on ARM64 will
satisfy both.
In that case, we need some arch-dependent PCI functions to access
the config space before the PCI root bridge is created, and
pci_acpi_scan_root() to create the PCI root bus. So introduce some
stub functions here to make the ACPI core compile, and revisit them
later when they are implemented on ARM64.
CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
As we want to get ACPI tables to parse and then use the information
for system initialization, we should get the RSDP (Root System
Description Pointer) first; it then locates the Extended Root
Description Table (XSDT) which contains all the 64-bit physical
addresses that point to the other boot-time tables.
Introduce acpi.c and its related header files in this patch to provide
the fundamental extern variables and functions needed by the ACPI
core, and then get the boot-time tables as needed.
- asm/acenv.h for arch-specific ACPICA environments and
implementation; it is needed unconditionally by the ACPI core;
- asm/acpi.h for arch-specific variables and functions needed by
the ACPI driver core;
- acpi.c for the ARM64-related ACPI implementation for the ACPI
driver core;
acpi_boot_table_init() is introduced to get the RSDP and boot-time
tables. It will be called in setup_arch() before paging_init(), so we
should use the early_memremap() mechanism here to get the RSDP and
all the table pointers.
The FADT Major.Minor version was introduced in ACPI 5.1; it is the
same as the ACPI version.
In ACPI 5.1, some major gaps are fixed for ARM, such as updates in
the MADT table for GIC and SMP init. Without those updates we cannot
get the MPIDR for SMP init or the GICv2/3 related init information,
so we can't boot arm64 ACPI properly with table versions predating 5.1.
If the firmware provides ACPI tables with an ACPI version less than
5.1, the OS has no way to retrieve the configuration data necessary
to init the SMP boot protocol and the GIC properly, so disable ACPI
if we get an FADT table with a version less than 5.1 when
acpi_boot_table_init() is called.
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
guarantee that the kernel Image is clean, not invalid in the caches, and
therefore we might read a stale value once the MMU is enabled.
This patch ensures we invalidate the corresponding cacheline after the
write, as we do for all other data written before we set SCTLR_EL1.{C,M},
guaranteeing that the value will be visible later. We rely on the DSBs
in __create_page_tables to complete the maintenance.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Historically, the PMU devicetree bindings have expected SPIs to be
listed in order of *logical* CPU number. This is problematic for
bootloaders, especially when the boot CPU (logical ID 0) isn't listed
first in the devicetree.
This patch adds a new optional property, interrupt-affinity, to the
PMU node which allows the interrupt affinity to be described using
a list of phandles to CPU nodes, with each entry in the list
corresponding to the SPI at the same index in the interrupts property.
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
After writing the page tables, we use __inval_cache_range to invalidate
any stale cache entries. Strongly Ordered memory accesses are not
ordered w.r.t. cache maintenance instructions, and hence explicit memory
barriers are required to provide this ordering. However,
__inval_cache_range was written to be used on Normal Cacheable memory
once the MMU and caches are on, and does not have any barriers prior to
the DC instructions.
This patch adds a DMB between the page tables being written and the
corresponding cachelines being invalidated, ensuring that the
invalidation makes the new data visible to subsequent cacheable
accesses. A barrier is not required before the prior invalidate as we do
not access the page table memory area prior to this, and earlier
barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
ordering w.r.t. any stores performed prior to entering Linux.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca74e ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
With this change the cpuidle-arm64.c file calls the same function name
for both ARM and ARM64.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
The page size and the number of translation levels, and hence the supported
virtual address range, are build-time configurables on arm64 whose optimal
values are use case dependent. However, in the current implementation, if
the system's RAM is located at a very high offset, the virtual address range
needs to reflect that merely because the identity mapping, which is only used
to enable or disable the MMU, requires the extended virtual range to map the
physical memory at an equal virtual offset.
This patch relaxes that requirement, by increasing the number of translation
levels for the identity mapping only, and only when actually needed, i.e.,
when system RAM's offset is found to be out of reach at runtime.
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Rework of the KVM HYP bounce page from Ard Biesheuvel. Subsequent arm64
idmap rework depends on this, so merge it here with Marc Zyngier's
blessing (kvm-arm co-maintainer).
init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which
contains kernel mappings) and is used as the active_mm for the idle
thread.
When restoring the pgd after an EFI call, we write current->active_mm
into TTBR0. If the current task is actually the idle thread (e.g. when
initialising the EFI RTC before entering userspace), then the TLB can
erroneously populate itself with junk global entries as a result of
speculative table walks.
When we do eventually return to userspace, the task can end up hitting
these junk mappings leading to lockups, corruption or crashes.
This patch fixes the problem in the same way as the CPU suspend code by
ensuring that we never switch to the init_mm in efi_set_pgd and instead
point TTBR0 at the zero page. A check is also added to cpu_switch_mm to
BUG if we get passed swapper_pg_dir.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f3cdfd239d ("arm64/efi: move SetVirtualAddressMap() to UEFI stub")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
According to the arm64 boot protocol, registers x1 to x3 should be
zero upon kernel entry, and non-zero values are reserved for future
use. This future use is going to be problematic if we never enforce
the current rules, so start enforcing them now, by emitting a warning
if non-zero values are detected.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This removes the function __calc_phys_offset and all open coded
virtual to physical address translations using the offset kept
in x28.
Instead, just use absolute or PC-relative symbol references as
appropriate when referring to virtual or physical addresses,
respectively.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Enabling of the MMU is split into two functions, with an align and
a branch in the middle. On arm64, the entire kernel Image is ID mapped
so this is really not necessary, and we can just merge it into a
single function.
Also replace an open-coded adrp/add pair referring to __enable_mmu
with adr_l.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Replace the confusing virtual/physical address arithmetic with a simple
PC-relative reference.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This removes the confusing __switch_data object from head.S,
and replaces it with standard PC-relative references to the
various symbols it encapsulates.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The global processor_id is assigned the MIDR_EL1 value of the boot
CPU in the early init code, but is never referenced afterwards.
As the relevance of the MIDR_EL1 value of the boot CPU is debatable
anyway, especially under big.LITTLE, let's remove it before anyone
starts using it.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
struct cpu_table is an artifact left from the (very) early days of
the arm64 port, and its only real use is to allow the most beautiful
"AArch64 Processor" string to be displayed at boot time.
Really? Yes, really.
Let's get rid of it. In order to avoid another BogoMips-gate, the
aforementioned string is preserved.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The perf core implicitly rejects events spanning multiple HW PMUs, as in
these cases the event->ctx will differ. However this validation is
performed after pmu::event_init() is called in perf_init_event(), and
thus pmu::event_init() may be called with a group leader from a
different HW PMU.
The ARM64 PMU driver does not take this fact into account, and when
validating groups assumes that it can call to_arm_pmu(event->pmu) for
any HW event. When the event in question is from another HW PMU this is
wrong, and results in dereferencing garbage.
This patch updates the ARM64 PMU driver to first test for and reject
events from other PMUs, moving the to_arm_pmu and related logic after
this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with
a CCI PMU present:
Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
PC is at 0x0
LR is at validate_event+0x90/0xa8
pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
sp : ffffffc07b0a3ba0
[< (null)>] (null)
[<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
[<ffffffc00015d870>] perf_try_init_event+0x34/0x70
[<ffffffc000164094>] perf_init_event+0xe0/0x10c
[<ffffffc000164348>] perf_event_alloc+0x288/0x358
[<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
Code: bad PC value
Also cleans up the code to use the arm_pmu only when we know
that we are dealing with an arm pmu event.
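A sketch of the fixed validation path, abbreviated from the driver:
the event->pmu comparison must come before any to_arm_pmu() conversion:

  static int validate_event(struct pmu *pmu,
  			  struct pmu_hw_events *hw_events,
  			  struct perf_event *event)
  {
  	struct arm_pmu *armpmu;

  	if (is_software_event(event))
  		return 1;

  	/* Reject groups spanning multiple HW PMUs (e.g. CPU + CCI) */
  	if (event->pmu != pmu)
  		return 0;

  	armpmu = to_arm_pmu(event->pmu);	/* now known to be ours */
  	/* ... counter availability checks against hw_events ... */
  	return 1;
  }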
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Ziljstra (Intel) <peterz@infradead.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The HYP init bounce page is a runtime construct that ensures that the
HYP init code does not cross a page boundary. However, this is something
we can do perfectly well at build time, by aligning the code appropriately.
For arm64, we just align to 4 KB, and enforce that the code size is less
than 4 KB, regardless of the chosen page size.
For ARM, the whole code is less than 256 bytes, so we tweak the linker
script to align at a power-of-2 upper bound of the code size.
Note that this also fixes a benign off-by-one error in the original bounce
page code, where a bounce page would be allocated unnecessarily if the code
was exactly 1 page in size.
On ARM, it also fixes an issue with very large kernels reported by Arnd
Bergmann, where stub sections with linker emitted veneers could erroneously
trigger the size/alignment ASSERT() in the linker script.
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arm mmap2 syscall takes the offset in units of 4K, thus with 64K pages
the offset needs to be scaled to units of pages.
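The fix itself lives in the compat syscall wrapper; a C rendering of
the scaling it performs might look as follows, assuming a build where
PAGE_SHIFT exceeds the 4K unit (12 bits) that mmap2 uses:

  #if PAGE_SHIFT > 12
  asmlinkage long compat_sys_mmap2(unsigned long addr, unsigned long len,
  				 unsigned long prot, unsigned long flags,
  				 unsigned long fd, unsigned long off_4k)
  {
  	/* The 4K-unit offset must describe whole pages */
  	if (off_4k & ((1 << (PAGE_SHIFT - 12)) - 1))
  		return -EINVAL;

  	/* Rescale from 4K units to page units */
  	return sys_mmap_pgoff(addr, len, prot, flags, fd,
  			      off_4k >> (PAGE_SHIFT - 12));
  }
  #endif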
Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
[will: removed redundant lr parameter, localised PAGE_SHIFT #if check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
We currently don't log the boot mode for arm64 as we do for arm, and
without KVM the user is provided with no indication as to which mode(s)
CPUs were booted in, which can seriously hinder debugging in some cases.
Add logging to the boot path once all CPUs are up. Where CPUs are
mismatched in violation of the boot protocol, WARN and set a taint (as
we do for other CPU feature mismatches), given that the
firmware/bootloader is buggy and should be fixed.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 828e9834e9 ("arm64: head: create a new function for setting
the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value
replacing uses of zero. However it failed to update __boot_cpu_mode
appropriately.
A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[1],
and a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[0].
Later, is_hyp_mode_mismatched() determines there to be a mismatch if
__boot_cpu_mode[0] != __boot_cpu_mode[1].
If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to
BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value
of zero, and is_hyp_mode_mismatched will erroneously determine that the
boot modes are mismatched. This hasn't been a problem so far, but later
patches which will make use of is_hyp_mode_mismatched() expect it to
work correctly.
This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing
the erroneous mismatch detection when all CPUs are booted at EL1.
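Expressed as a C initialiser (head.S uses .long directives), the fixed
flag array and the mismatch check look roughly like this:

  u32 __boot_cpu_mode[2] = { BOOT_CPU_MODE_EL2, BOOT_CPU_MODE_EL1 };

  static inline bool is_hyp_mode_mismatched(void)
  {
  	/* Equal values mean every CPU booted in the same mode */
  	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
  }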
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Currently we only perform alternative patching for kernels built with
CONFIG_SMP, as we call apply_alternatives_all() in smp.c, which is only
built for CONFIG_SMP. Thus !SMP kernels may not have necessary
alternatives patched in.
This patch ensures that we call apply_alternatives_all() once all CPUs
are booted, even for !SMP kernels, by having the smp_init_cpus() stub
call this for !SMP kernels via up_late_init. A new wrapper,
do_post_cpus_up_work, is added so we can hook other calls here later
(e.g. boot mode logging).
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: e039ee4ee3 ("arm64: add alternative runtime patching")
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Another one for the big head.S spring cleaning: the label should
be after the .align or it may point to the padding.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If UEFI Runtime Services are available, they are preferred over direct
PSCI calls or other methods to reset the system.
For the reset case, we need to hook into machine_restart(), as the
arm_pm_restart function pointer may be overwritten by modules.
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The native (64-bit) sigval_t union contains sival_int (32-bit) and
sival_ptr (64-bit). When a compat application invokes a syscall that
takes a sigval_t value (as part of a larger structure, e.g.
compat_sys_mq_notify, compat_sys_timer_create), the compat_sigval_t
union is converted to the native sigval_t with sival_int overlapping
with either the least or the most significant half of sival_ptr,
depending on endianness. When the corresponding signal is delivered to a
compat application, on big endian the current (compat_uptr_t)sival_ptr
cast always returns 0 since sival_int corresponds to the top part of
sival_ptr. This patch fixes copy_siginfo_to_user32() so that sival_int
is copied to the compat_siginfo_t structure.
Cc: <stable@vger.kernel.org>
Reported-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com>
Tested-by: Bamvor Jian Zhang <bamvor.zhangjian@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Patch 2f896d5866 ("arm64: use fixmap for text patching") changed
the way we patch the kernel text, using a fixmap when the kernel or
modules are flagged as read only.
Unfortunately, a flaw in the logic makes it fall over when patching
modules without CONFIG_DEBUG_SET_MODULE_RONX enabled:
[...]
[ 32.032636] Call trace:
[ 32.032716] [<fffffe00003da0dc>] __copy_to_user+0x2c/0x60
[ 32.032837] [<fffffe0000099f08>] __aarch64_insn_write+0x94/0xf8
[ 32.033027] [<fffffe000009a0a0>] aarch64_insn_patch_text_nosync+0x18/0x58
[ 32.033200] [<fffffe000009c3ec>] ftrace_modify_code+0x58/0x84
[ 32.033363] [<fffffe000009c4e4>] ftrace_make_nop+0x3c/0x58
[ 32.033532] [<fffffe0000164420>] ftrace_process_locs+0x3d0/0x5c8
[ 32.033709] [<fffffe00001661cc>] ftrace_module_init+0x28/0x34
[ 32.033882] [<fffffe0000135148>] load_module+0xbb8/0xfc4
[ 32.034044] [<fffffe0000135714>] SyS_finit_module+0x94/0xc4
[...]
This is triggered by the use of virt_to_page() on a module address,
which ends up pointing to Nowhereland if you're lucky, or corrupts
your precious data if not.
This patch fixes the logic by mimicking what is done on arm:
- If we're patching a module and CONFIG_DEBUG_SET_MODULE_RONX is set,
use vmalloc_to_page().
- If we're patching the kernel and CONFIG_DEBUG_RODATA is set,
use virt_to_page().
- Otherwise, use the provided address, as we can write to it directly.
Tested on 4.0-rc1 as a KVM guest.
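A condensed sketch of the resulting page-lookup logic (names follow
the kernel's patch_map() helper; the body is abbreviated):

  static void *patch_map(void *addr, int fixmap)
  {
  	unsigned long uintaddr = (uintptr_t)addr;
  	bool module = !core_kernel_text(uintaddr);
  	struct page *page;

  	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
  		page = vmalloc_to_page(addr);
  	else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
  		page = virt_to_page(addr);
  	else
  		return addr;		/* directly writable */

  	/* Map the page through the fixmap and patch via that alias */
  	set_fixmap(fixmap, page_to_phys(page));
  	return (void *)(__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));
  }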
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
An arm64 allmodconfig fails to build with GCC 5 due to __asmeq
assertions in the PSCI firmware calling code firing due to mcount
preambles breaking our assumptions about register allocation of function
arguments:
/tmp/ccDqJsJ6.s: Assembler messages:
/tmp/ccDqJsJ6.s:60: Error: .err encountered
/tmp/ccDqJsJ6.s:61: Error: .err encountered
/tmp/ccDqJsJ6.s:62: Error: .err encountered
/tmp/ccDqJsJ6.s:99: Error: .err encountered
/tmp/ccDqJsJ6.s:100: Error: .err encountered
/tmp/ccDqJsJ6.s:101: Error: .err encountered
This patch fixes the issue by moving the PSCI calls out-of-line into
their own assembly files, which are safe from the compiler's meddling
fingers.
Reported-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The vdso implementation of clock_getres currently returns 0 (success)
whenever a null timespec is provided by the caller, regardless of the
clock id supplied.
This behavior is incorrect. It should fall back to syscall when an
unrecognized clock id is passed, even when the timespec argument is
null. This ensures that clock_getres always returns an error for
invalid clock ids.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ftrace_enable_ftrace_graph_caller and ftrace_disable_ftrace_graph_caller
should replace a B (jump) instruction and not a BL (call) instruction.
Commit 9f1ae7596aad ("arm64: Correct ftrace calls to
aarch64_insn_gen_branch_imm()") had a typo and used
AARCH64_INSN_BRANCH_LINK instead of AARCH64_INSN_BRANCH_NOLINK.
Either instruction will work, as the link register is saved/restored
across the branch but this better matches the intention of the code.
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
For instrumenting global variables, KASan needs to shadow the memory
backing modules. So on module loading we will need to allocate shadow
memory and map it at the address in the shadow region that corresponds
to the address returned by module_alloc().
__vmalloc_node_range() could be used for this purpose, except it puts a
guard hole after the allocated area. A guard hole in shadow memory
would be a problem, because at some future point we might need shadow
memory at an address occupied by the guard hole, and so could fail to
allocate shadow for a module_alloc() allocation.
Now we have the VM_NO_GUARD flag disabling the guard page, so we need
to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags'
to the __vmalloc_node_range() function.
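A hedged sketch of the module-shadow allocation this enables;
shadow_start and shadow_size are illustrative, and the argument order
follows the new signature described above:

  /* Allocate module shadow without a trailing guard hole, so that
   * adjacent shadow can still be mapped later. */
  void *p = __vmalloc_node_range(shadow_size, 1, shadow_start,
  			       shadow_start + shadow_size,
  			       GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL,
  			       VM_NO_GUARD, NUMA_NO_NODE,
  			       __builtin_return_address(0));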
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM update from Paolo Bonzini:
"Fairly small update, but there are some interesting new features.
Common:
Optional support for adding a small amount of polling on each HLT
instruction executed in the guest (or equivalent for other
architectures). This can improve latency up to 50% on some
scenarios (e.g. O_DSYNC writes or TCP_RR netperf tests). This
also has to be enabled manually for now, but the plan is to
auto-tune this in the future.
ARM/ARM64:
The highlights are support for GICv3 emulation and dirty page
tracking
s390:
Several optimizations and bugfixes. Also a first: a feature
exposed by KVM (UUID and long guest name in /proc/sysinfo) before
it is available in IBM's hypervisor! :)
MIPS:
Bugfixes.
x86:
Support for PML (page modification logging, a new feature in
Broadwell Xeons that speeds up dirty page tracking), nested
virtualization improvements (nested APICv---a nice optimization),
usual round of emulation fixes.
There is also a new option to reduce latency of the TSC deadline
timer in the guest; this needs to be tuned manually.
Some commits are common between this pull and Catalin's; I see you
have already included his tree.
Powerpc:
Nothing yet.
The KVM/PPC changes will come in through the PPC maintainers,
because I haven't received them yet and I might end up being
offline for some part of next week"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (130 commits)
KVM: ia64: drop kvm.h from installed user headers
KVM: x86: fix build with !CONFIG_SMP
KVM: x86: emulate: correct page fault error code for NoWrite instructions
KVM: Disable compat ioctl for s390
KVM: s390: add cpu model support
KVM: s390: use facilities and cpu_id per KVM
KVM: s390/CPACF: Choose crypto control block format
s390/kernel: Update /proc/sysinfo file with Extended Name and UUID
KVM: s390: reenable LPP facility
KVM: s390: floating irqs: fix user triggerable endless loop
kvm: add halt_poll_ns module parameter
kvm: remove KVM_MMIO_SIZE
KVM: MIPS: Don't leak FPU/DSP to guest
KVM: MIPS: Disable HTW while in guest
KVM: nVMX: Enable nested posted interrupt processing
KVM: nVMX: Enable nested virtual interrupt delivery
KVM: nVMX: Enable nested apic register virtualization
KVM: nVMX: Make nested control MSRs per-cpu
KVM: nVMX: Enable nested virtualize x2apic mode
KVM: nVMX: Prepare for using hardware MSR bitmap
...
If an attacker can cause a controlled kernel stack overflow, overwriting
the restart block is a very juicy exploit target. This is because the
restart_block is held in the same memory allocation as the kernel stack.
Moving the restart block to struct task_struct prevents this exploit by
making the restart_block harder to locate.
Note that there are other fields in thread_info that are also easy
targets, at least on some architectures.
It's also a decent simplification, since the restart code is more or less
identical on all architectures.
[james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Richard Weinberger <richard@nod.at>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
"arm64 updates for 3.20:
- reimplementation of the virtual remapping of UEFI Runtime Services
in a way that is stable across kexec
- emulation of the "setend" instruction for 32-bit tasks (user
endianness switching trapped in the kernel, SCTLR_EL1.E0E bit set
accordingly)
- compat_sys_call_table implemented in C (from asm) and made it a
constant array together with sys_call_table
- export CPU cache information via /sys (like other architectures)
- DMA API implementation clean-up in preparation for IOMMU support
- macros clean-up for KVM
- dropped some unnecessary cache+tlb maintenance
- CONFIG_ARM64_CPU_SUSPEND clean-up
- defconfig update (CPU_IDLE)
The EFI changes going via the arm64 tree have been acked by Matt
Fleming. There is also a patch adding sys_*stat64 prototypes to
include/linux/syscalls.h, acked by Andrew Morton"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (47 commits)
arm64: compat: Remove incorrect comment in compat_siginfo
arm64: Fix section mismatch on alloc_init_p[mu]d()
arm64: Avoid breakage caused by .altmacro in fpsimd save/restore macros
arm64: mm: use *_sect to check for section maps
arm64: drop unnecessary cache+tlb maintenance
arm64:mm: free the useless initial page table
arm64: Enable CPU_IDLE in defconfig
arm64: kernel: remove ARM64_CPU_SUSPEND config option
arm64: make sys_call_table const
arm64: Remove asm/syscalls.h
arm64: Implement the compat_sys_call_table in C
syscalls: Declare sys_*stat64 prototypes if __ARCH_WANT_(COMPAT_)STAT64
compat: Declare compat_sys_sigpending and compat_sys_sigprocmask prototypes
arm64: uapi: expose our struct ucontext to the uapi headers
smp, ARM64: Kill SMP single function call interrupt
arm64: Emulate SETEND for AArch32 tasks
arm64: Consolidate hotplug notifier for instruction emulation
arm64: Track system support for mixed endian EL0
arm64: implement generic IOMMU configuration
arm64: Combine coherent and non-coherent swiotlb dma_ops
...
Merge tag 'efi-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into x86/efi
Pull EFI updates from Matt Fleming:
" - Move efivarfs from the misc filesystem section to pseudo filesystem,
since that's a more logical and accurate place - Leif Lindholm
- Update efibootmgr URL in Kconfig help - Peter Jones
- Improve accuracy of EFI guid function names - Borislav Petkov
- Expose firmware platform size in sysfs for the benefit of EFI boot
loader installers and other utilities - Steve McIntyre
- Cleanup __init annotations for arm64/efi code - Ard Biesheuvel
- Mark the UIE as unsupported for rtc-efi - Ard Biesheuvel
- Fix memory leak in error code path of runtime map code - Dan Carpenter
- Improve robustness of get_memory_map() by removing assumptions on the
size of efi_memory_desc_t (which could change in future spec
versions) and querying the firmware instead of guessing about the
memmap size - Ard Biesheuvel
- Remove superfluous guid unparse calls - Ivan Khoronzhuk
- Delete unnecessary chosen@0 DT node FDT code since was duplicated
from code in drivers/of and is entirely unnecessary - Leif Lindholm
There's nothing super scary, mainly cleanups, and a merge from Ricardo who
kindly picked up some patches from the linux-efi mailing list while I
was out on annual leave in December.
Perhaps the biggest risk is the get_memory_map() change from Ard, which
changes the way that both the arm64 and x86 EFI boot stub build the
early memory map. It would be good to have it bake in linux-next for a
while.
"
Signed-off-by: Ingo Molnar <mingo@kernel.org>
ARM64_CPU_SUSPEND config option was introduced to make code providing
context save/restore selectable only on platforms requiring power
management capabilities.
Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
in turn is set by the SUSPEND config option.
The introduction of CPU_IDLE for arm64 requires that code configured
by ARM64_CPU_SUSPEND (context save/restore) should be compiled in
in order to enable the CPU idle driver to rely on CPU operations
carrying out context save/restore.
The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is
therefore forced to select ARM64_CPU_SUSPEND, even if its dependencies
(ie PM_SLEEP) may have failed, which is not a clean way of handling
the kernel configuration option.
For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
and makes the context save/restore dependent on CPU_PM, which is selected
whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies
in the process.
This way, code previously configured through ARM64_CPU_SUSPEND is
compiled in whenever a power management subsystem requires it to be
present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
expected on ARM64 kernels.
The cpu_suspend and cpu_init_idle CPU operations are added only if
CPU_IDLE is selected, since they are CPU_IDLE specific methods and
should be grouped and defined accordingly.
PSCI CPU operations are updated to reflect the introduced changes.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As with x86, mark the sys_call_table const such that it will be placed
in the .rodata section. This will cause attempts to modify the table
(accidental or deliberate) to fail when strict page permissions are in
place. In the absence of strict page permissions, there should be no
functional change.
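A sketch of the declaration after the change (table contents
abbreviated): an array of const pointers, so it lands in .rodata:

  #undef __SYSCALL
  #define __SYSCALL(nr, sym)	[nr] = sym,

  void * const sys_call_table[__NR_syscalls] __aligned(4096) = {
  	[0 ... __NR_syscalls - 1] = sys_ni_syscall,
  #include <asm/unistd.h>
  };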
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch moves the sys_rt_sigreturn_wrapper prototype to
arch/arm64/kernel/sys.c and removes the asm/syscalls.h header.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Unlike the sys_call_table[], the compat one was implemented in sys32.S
making it impossible to notice discrepancies between the number of
compat syscalls and the __NR_compat_syscalls macro, the latter having to
be defined in asm/unistd.h as including asm/unistd32.h would cause
conflicts on __NR_* definitions. With this patch, incorrect
__NR_compat_syscalls values will result in a build-time error.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Commit 9a46ad6d6d "smp: make smp_call_function_many() use logic
similar to smp_call_function_single()" has unified the way to handle
single and multiple cross-CPU function calls. Now only one interrupt
is needed for architecture specific code to support generic SMP function
call interfaces, so kill the redundant single function call interrupt.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Emulate deprecated 'setend' instruction for AArch32 bit tasks.
setend [le/be] - Sets the endianness of EL0
On systems with CPUs which support mixed endian at EL0, the hardware
support for the instruction can be enabled by setting the SCTLR_EL1.SED
bit. Like the other emulated instructions it is controlled by an entry in
/proc/sys/abi/. For more information see :
Documentation/arm64/legacy_instructions.txt
The instruction is emulated by setting/clearing the SPSR_EL1.E bit, which
will be reflected in the PSTATE.E in AArch32 context.
This patch also restores the native endianness for the execution of signal
handlers, since the process could have changed the endianness.
Note: All CPUs on the system must have mixed endian support at EL0. Once the
handler is registered, hotplugging a CPU which doesn't support mixed endian,
could lead to unexpected results/behavior in applications.
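A sketch of the A32 handler (names and the exact hook plumbing are
illustrative): the emulation just flips the E bit in the saved PSTATE
and retires the instruction.

static int a32_setend_handler(struct pt_regs *regs, u32 big_endian)
{
	if (big_endian)
		regs->pstate |= COMPAT_PSR_E_BIT;	/* setend be */
	else
		regs->pstate &= ~COMPAT_PSR_E_BIT;	/* setend le */

	regs->pc += 4;		/* retire the emulated A32 instruction */
	return 0;
}

The Thumb variant would be identical apart from advancing the PC by 2
bytes.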
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As of now each insn_emulation has a cpu hotplug notifier that
enables/disables the CPU feature bit for the functionality. This
patch re-arranges the code, such that there is only one notifier
that runs through the list of registered emulation hooks and runs
their corresponding set_hw_mode.
We do nothing when a CPU is dying as we will set the appropriate bits
as it comes back online based on the state of the hooks.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>
[catalin.marinas@arm.com: fix pr_warn compilation error]
[catalin.marinas@arm.com: remove unnecessary "insn" check]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch keeps track of the mixed endian EL0 support across
the system and provides helper functions to export it. The status
is a boolean indicating whether all the CPUs on the system support
mixed endian at EL0.
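A sketch of the helpers, assuming a decode helper
id_aa64mmfr0_mixed_endian_el0() for the relevant ID_AA64MMFR0_EL1
fields:

static bool mixed_endian_el0 = true;

bool system_supports_mixed_endian_el0(void)
{
	return mixed_endian_el0;
}

void update_mixed_endian_el0_support(struct cpuinfo_arm64 *info)
{
	/* AND the per-CPU capability into the system-wide status. */
	mixed_endian_el0 &= id_aa64mmfr0_mixed_endian_el0(info->reg_id_aa64mmfr0);
}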
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add the necessary call to of_iommu_init.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that the create_mapping() code in mm/mmu.c is able to support
setting up kernel page tables at initcall time, we can move the whole
virtmap creation to arm64_enable_runtime_services() instead of having
a distinct stage during early boot. This also allows us to drop the
arm64-specific EFI_VIRTMAP flag.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add page protections for arm64 similar to those in arm.
This is for security reasons to prevent certain classes
of exploits. The current method:
- Map all memory as either RWX or RW. We round to the nearest
section to avoid creating page tables before everything is mapped
- Once everything is mapped, if either end of the RWX section should
not be X, we split the PMD and remap as necessary
- When initmem is to be freed, we change the permissions back to
RW (using stop machine if necessary to flush the TLB)
- If CONFIG_DEBUG_RODATA is set, the sections intended to be read-only
are marked read-only.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When kernel text is marked as read only, it cannot be modified directly.
Use a fixmap to modify the text instead in a similar manner to
x86 and arm.
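A minimal sketch of the mechanism, assuming a dedicated FIX_TEXT_POKE0
fixmap slot: map a writable alias of the page holding the target
instruction, patch through the alias, then tear it down.

static DEFINE_RAW_SPINLOCK(patch_lock);

static int patch_insn_write(void *addr, u32 insn)
{
	unsigned long flags;
	void *waddr;

	raw_spin_lock_irqsave(&patch_lock, flags);
	/* Writable alias of the (read-only) page containing addr. */
	waddr = (void *)set_fixmap_offset(FIX_TEXT_POKE0, __pa(addr));
	*(__le32 *)waddr = cpu_to_le32(insn);
	clear_fixmap(FIX_TEXT_POKE0);
	raw_spin_unlock_irqrestore(&patch_lock, flags);

	return 0;	/* callers still perform cache maintenance on addr */
}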
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ICC_SRE_EL1 is a system register allowing msr/mrs accesses to the
GIC CPU interface for EL1 (guests). Currently we force it to 0, but
for proper GICv3 support we have to allow guests to use it (depending
on their selected virtual GIC model).
So add ICC_SRE_EL1 to the list of saved/restored registers on a
world switch, but actually disallow a guest to change it by only
restoring a fixed, once-initialized value.
This value depends on the GIC model userland has chosen for a guest.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
When booting with EFI, we acquire the EFI memory map after parsing the
early params. This unfortunately renders the option useless as we call
memblock_enforce_memory_limit (which uses memblock_remove_range behind
the scenes) before we've added any memblocks. We end up removing
nothing, then adding all of memory later when efi_init calls
reserve_regions.
Instead, we can log the limit and apply this later when we do the rest
of the memblock work in memblock_init, which should work regardless of
the presence of EFI. At the same time we may as well move the early
parameter into arm64's mm/init.c, close to arm64_memblock_init.
Any memory which must be mapped (e.g. for use by EFI runtime services)
must be mapped explicitly rather than relying on the linear mapping,
which may be truncated as a result of a mem= option passed on the kernel
command line.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When remapping the UEFI memory map using ioremap_cache(), we
have to deal with potential failure. Note that, even if the
common case is for ioremap_cache() to return the existing linear
mapping of the memory map, we cannot rely on that to be always the
case, e.g., in the presence of a mem= kernel parameter.
At the same time, remove a stale comment and move the memmap code
together.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This ensures all stub components are freed when the kernel proper is
done booting, by prefixing the names of all ELF sections that have
the SHF_ALLOC attribute with ".init". This approach ensures that even
implicitly emitted allocated data (like initializer values and string
literals) are covered.
At the same time, remove some __init annotations in the stub that have
now become redundant, and add the __init annotation to handle_kernel_image
which will now trigger a section mismatch warning without it.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
To aid the developer when something triggers an unexpected exception,
decode the ESR_ELx.EC field when logging an ESR_ELx value. This doesn't
tell the developer the specifics of the exception encoded in the
remaining IL and ISS bits, but it can be helpful to distinguish between
exception classes (e.g. SError and a data abort) without having to
manually decode the field, which can be tiresome.
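A sketch of the decode (abbreviated; the real table names every
architected EC value):

static const char *const esr_class_str[] = {
	[0 ... ESR_ELx_EC_MAX]	= "UNRECOGNIZED EC",
	[ESR_ELx_EC_UNKNOWN]	= "Unknown/Uncategorized",
	[ESR_ELx_EC_SVC64]	= "SVC (AArch64)",
	[ESR_ELx_EC_DABT_CUR]	= "DABT (current EL)",
	[ESR_ELx_EC_SERROR]	= "SError",
};

const char *esr_get_class_string(u32 esr)
{
	return esr_class_str[esr >> ESR_ELx_EC_SHIFT];
}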
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Now that we have common ESR_ELx_* macros, move the core arm64 code over
to them.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
This patch adds support for cacheinfo on ARM64.
On ARMv8, the cache hierarchy can be identified through Cache Level ID
(CLIDR) register while the cache geometry is provided by Cache Size ID
(CCSIDR) register.
Since the architecture doesn't provide any way of detecting the CPUs
sharing a particular cache, the device tree is used for this purpose.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we have moved the call to SetVirtualAddressMap() to the stub,
UEFI has no use for the ID map, so we can drop the code that installs
ID mappings for UEFI memory regions.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Now that we are calling SetVirtualAddressMap() from the stub, there is no
need to reserve boot-only memory regions, which implies that there is also
no reason to free them again later.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
In order to support kexec, the kernel needs to be able to deal with the
state of the UEFI firmware after SetVirtualAddressMap() has been called.
To avoid having separate code paths for non-kexec and kexec, let's move
the call to SetVirtualAddressMap() to the stub: this will guarantee us
that it will only be called once (since the stub is not executed during
kexec), and ensures that the UEFI state is identical between kexec and
normal boot.
This implies that the layout of the virtual mapping needs to be created
by the stub as well. All regions are rounded up to a naturally aligned
multiple of 64 KB (for compatibility with 64k pages kernels) and recorded
in the UEFI memory map. The kernel proper reads those values and installs
the mappings in a dedicated set of page tables that are swapped in during
UEFI Runtime Services calls.
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Pull perf fixes from Ingo Molnar:
"Mostly tooling fixes, but also some kernel side fixes: uncore PMU
driver fix, user regs sampling fix and an instruction decoder fix that
unbreaks PEBS precise sampling"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/uncore/hsw-ep: Handle systems with only two SBOXes
perf/x86_64: Improve user regs sampling
perf: Move task_pt_regs sampling into arch code
x86: Fix off-by-one in instruction decoder
perf hists browser: Fix segfault when showing callchain
perf callchain: Free callchains when hist entries are deleted
perf hists: Fix children sort key behavior
perf diff: Fix to sort by baseline field by default
perf list: Fix --raw-dump option
perf probe: Fix crash in dwarf_getcfi_elf
perf probe: Fix to fall back to find probe point in symbols
perf callchain: Append callchains only when requested
perf ui/tui: Print backtrace symbols when segfault occurs
perf report: Show progress bar for output resorting
On x86_64, at least, task_pt_regs may be only partially initialized
in many contexts, so x86_64 should not use it without extra care
from interrupt context, let alone NMI context.
This will allow x86_64 to override the logic and will supply some
scratch space to use to make a cleaner copy of user regs.
Tested-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: chenggang.qcg@taobao.com
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/e431cd4c18c2e1c44c774f10758527fb2d1025c4.1420396372.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The early ioremap support introduced by patch bf4b558eba
("arm64: add early_ioremap support") failed to add a call to
early_ioremap_reset() at an appropriate time. Without this call,
invocations of early_ioremap etc. that are done too late will go
unnoticed and may cause corruption.
This is exactly what happened when the first user of this feature
was added in patch f84d02755f ("arm64: add EFI runtime services").
The early mapping of the EFI memory map is unmapped during an early
initcall, at which time the early ioremap support is long gone.
Fix by adding the missing call to early_ioremap_reset() to
setup_arch(), and move the offending early_memunmap() to right after
the point where the early mapping of the EFI memory map is last used.
Fixes: f84d02755f ("arm64: add EFI runtime services")
Cc: <stable@vger.kernel.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On next-20150105, defconfig compilation breaks with:
arch/arm64/kernel/smp_spin_table.c:80:2: error: implicit declaration of function ‘ioremap_cache’ [-Werror=implicit-function-declaration]
arch/arm64/kernel/smp_spin_table.c:92:2: error: implicit declaration of function ‘writeq_relaxed’ [-Werror=implicit-function-declaration]
arch/arm64/kernel/smp_spin_table.c:101:2: error: implicit declaration of function ‘iounmap’ [-Werror=implicit-function-declaration]
Fix by including asm/io.h, which contains definitions or prototypes
for these macros or functions.
This second version incorporates a comment from Mark Rutland
<mark.rutland@arm.com> to keep the includes in alphabetical order
by filename.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
On next-20150105, defconfig compilation breaks with:
arch/arm64/kernel/module.c:408:4: error: implicit declaration of function ‘apply_alternatives’ [-Werror=implicit-function-declaration]
Fix by including asm/alternative.h, where the apply_alternatives()
prototype is declared.
This second version incorporates a comment from Mark Rutland
<mark.rutland@arm.com> to keep the includes in alphabetical order
by filename.
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We don't currently check a number of registers exposed to AArch32 guests
(MVFR{0,1,2}_EL1 and ID_DFR0_EL1), despite the fact these describe
AArch32 feature support exposed to userspace and KVM guests similarly to
AArch64 registers which we do check. We do not expect these registers to
vary across a set of CPUs.
This patch adds said registers to the cpuinfo framework and sanity
checks. No sanity check failures have been observed on a current ARMv8
big.LITTLE platform (Juno).
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The current logic in arm64 pci_bus_assign_domain_nr() is flawed in that
depending on the host controller configuration for a platform and the
initialization sequence, core code may end up allocating PCI domain numbers
from both DT and the generic domain counter, which would result in PCI
domain allocation aliases/errors.
Fix the logic behind the PCI domain number assignment and move the
resulting code to the PCI core so the same domain allocation logic is used
on all platforms that select CONFIG_PCI_DOMAINS_GENERIC.
[bhelgaas: tidy changelog]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
CC: Rob Herring <robh+dt@kernel.org>
CC: Catalin Marinas <catalin.marinas@arm.com>
On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0
page tables on boot or to the active_mm mappings belonging to user space
processes; it must never be set to swapper_pg_dir page table mappings.
When a CPU is booted its active_mm is set to init_mm even though its
TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies
that when __cpu_suspend is triggered the active_mm can point at
init_mm even if the current TTBR0_EL1 register contains the reserved
TTBR0_EL1 mappings.
Therefore, the mm save and restore executed in __cpu_suspend might
turn out to be erroneous: if current->active_mm corresponds to init_mm,
on resume from low power the code ends up restoring in TTBR0_EL1 the
init_mm mappings, which are global and can cause speculatively fetched
TLB entries to be propagated to user space.
This patch fixes the issue by checking the active_mm pointer before
restoring the TTBR0 mappings. If the current active_mm == &init_mm,
the code sets the TTBR0_EL1 to the reserved TTBR0 mapping instead of
switching back to the active_mm, which is the expected behaviour
corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.
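The shape of the fix, as a sketch of the tail of the resume path (the
helper name here is illustrative):

static void cpu_suspend_restore_mm(void)
{
	struct mm_struct *mm = current->active_mm;

	if (mm == &init_mm)
		cpu_set_reserved_ttbr0();	/* never expose init_mm's global mappings */
	else
		cpu_switch_mm(mm->pgd, mm);	/* restore the user mappings */
}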
Fixes: 95322526ef ("arm64: kernel: cpu_{suspend/resume} implementation")
Cc: <stable@vger.kernel.org> # 3.14+: 18ab7db
Cc: <stable@vger.kernel.org> # 3.14+: 714f599
Cc: <stable@vger.kernel.org> # 3.14+: c3684fb
Cc: <stable@vger.kernel.org> # 3.14+
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Given that my availability next week is likely to be poor, here are
three arm64 fixes to resolve some issues introduced by features merged
last week. I was going to wait until -rc1, but it doesn't make much
sense to sit on fixes.
Fix some fallout introduced during the merge window:
- Build failure when PM_SLEEP is disabled but CPU_IDLE is enabled
- Compiler warning from page table dumper w/ 48-bit VAs
- Erroneous page table truncation in reported dump"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: mm: dump: don't skip final region
arm64: mm: dump: fix shift warning
arm64: psci: Fix build breakage without PM_SLEEP
Pull percpu updates from Tejun Heo:
"Nothing interesting. A patch to convert the remaining __get_cpu_var()
users, another to fix non-critical off-by-one in an assertion and a
cosmetic conversion to lockless_dereference() in percpu-ref.
The back-merge from mainline is to receive lockless_dereference()"
* 'for-3.19' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu: Replace smp_read_barrier_depends() with lockless_dereference()
percpu: Convert remaining __get_cpu_var uses in 3.18-rcX
percpu: off by one in BUG_ON()
Fix build failure of defconfig when PM_SLEEP is disabled (e.g. by
disabling SUSPEND) and CPU_IDLE enabled:
arch/arm64/kernel/psci.c:543:2: error: unknown field ‘cpu_suspend’ specified in initializer
.cpu_suspend = cpu_psci_cpu_suspend,
^
arch/arm64/kernel/psci.c:543:2: warning: initialization from incompatible pointer type [enabled by default]
arch/arm64/kernel/psci.c:543:2: warning: (near initialization for ‘cpu_psci_ops.cpu_prepare’) [enabled by default]
make[1]: *** [arch/arm64/kernel/psci.o] Error 1
The cpu_operations.cpu_suspend field exists only if ARM64_CPU_SUSPEND is
defined, not CPU_IDLE.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Here's the usual mixed bag of arm64 updates, also including some
related EFI changes (Acked by Matt) and the MMU gather range cleanup
(Acked by you).
Changes include:
- support for alternative instruction patching from Andre
- seccomp from Akashi
- some AArch32 instruction emulation, required by the Android folks
- optimisations for exception entry/exit code, cmpxchg, pcpu atomics
- mmu_gather range calculations moved into core code
- EFI updates from Ard, including long-awaited SMBIOS support
- /proc/cpuinfo fixes to align with the format used by arch/arm/
- a few non-critical fixes across the architecture"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (70 commits)
arm64: remove the unnecessary arm64_swiotlb_init()
arm64: add module support for alternatives fixups
arm64: perf: Prevent wraparound during overflow
arm64/include/asm: Fixed a warning about 'struct pt_regs'
arm64: Provide a namespace to NCAPS
arm64: bpf: lift restriction on last instruction
arm64: Implement support for read-mostly sections
arm64: compat: align cacheflush syscall with arch/arm
arm64: add seccomp support
arm64: add SIGSYS siginfo for compat task
arm64: add seccomp syscall for compat task
asm-generic: add generic seccomp.h for secure computing mode 1
arm64: ptrace: allow tracer to skip a system call
arm64: ptrace: add NT_ARM_SYSTEM_CALL regset
arm64: Move some head.text functions to executable section
arm64: jump labels: NOP out NOP -> NOP replacement
arm64: add support to dump the kernel page tables
arm64: Add FIX_HOLE to permanent fixed addresses
arm64: alternatives: fix pr_fmt string for consistency
arm64: vmlinux.lds.S: don't discard .exit.* sections at link-time
...
Currently the kernel patches all necessary instructions once at boot
time, so modules are not covered by this.
Change the apply_alternatives() function to take a beginning and an
end pointer and introduce a new variant (apply_alternatives_all()) to
cover the existing use case for the static kernel image section.
Add a module_finalize() function to arm64 to check for an
alternatives section in a module and patch only the instructions from
that specific area.
Since that module code is not touched before the module
initialization has ended, we don't need to halt the machine before
doing the patching in the module's code.
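A sketch of the hook: walk the module's section headers and hand just
its .altinstructions payload to apply_alternatives().

int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
		    struct module *me)
{
	const Elf_Shdr *s, *se;
	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
		if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) {
			apply_alternatives((void *)s->sh_addr, s->sh_size);
			return 0;
		}
	}

	return 0;
}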
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If the overflow threshold for a counter is set above or near the
0xffffffff boundary then the kernel may lose track of the overflow
causing only events that occur *after* the overflow to be recorded.
Specifically the problem occurs when the value of the performance counter
overtakes its original programmed value due to wrap around.
Typical solutions to this problem are either to avoid programming in
values likely to be overtaken or to treat the overflow bit as the 33rd
bit of the counter.
It's somewhat fiddly to refactor the code to correctly handle the 33rd bit
during irqsave sections (context switches for example) so instead we take
the simpler approach of avoiding values likely to be overtaken.
We set the limit to half of max_period because this matches the limit
imposed in __hw_perf_event_init(). This causes a doubling of the interrupt
rate for large threshold values, however even with a very fast counter
ticking at 4GHz the interrupt rate would only be ~1Hz.
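The clamp itself is small; a sketch of the period-programming path
(the helper name is illustrative):

static u64 clamp_period(struct arm_pmu *armpmu, u64 left)
{
	/*
	 * Cap the period at half the counter range so the counter can
	 * never overtake its programmed start value before the
	 * overflow interrupt has been handled.
	 */
	if (left > (armpmu->max_period >> 1))
		left = armpmu->max_period >> 1;

	return left;
}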
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Building arm64.allmodconfig leads to the following warning:
usb/gadget/function/f_ncm.c:203:0: warning: "NCAPS" redefined
#define NCAPS (USB_CDC_NCM_NCAP_ETH_FILTER | USB_CDC_NCM_NCAP_CRC_MODE)
^
In file included from /home/build/work/batch/arch/arm64/include/asm/io.h:32:0,
from /home/build/work/batch/include/linux/clocksource.h:19,
from /home/build/work/batch/include/clocksource/arm_arch_timer.h:19,
from /home/build/work/batch/arch/arm64/include/asm/arch_timer.h:27,
from /home/build/work/batch/arch/arm64/include/asm/timex.h:19,
from /home/build/work/batch/include/linux/timex.h:65,
from /home/build/work/batch/include/linux/sched.h:19,
from /home/build/work/batch/arch/arm64/include/asm/compat.h:25,
from /home/build/work/batch/arch/arm64/include/asm/stat.h:23,
from /home/build/work/batch/include/linux/stat.h:5,
from /home/build/work/batch/include/linux/module.h:10,
from /home/build/work/batch/drivers/usb/gadget/function/f_ncm.c:19:
arch/arm64/include/asm/cpufeature.h:27:0: note: this is the location of the previous definition
#define NCAPS 2
So add an ARM64 prefix to avoid such problems.
Reported-by: Olof's autobuilder <build@lixom.net>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Update handling of cacheflush syscall with changes made in arch/arm
counterpart:
- return error to userspace when flushing syscall fails
- split user cache-flushing into interruptible chunks
- don't bother rounding to nearest vma
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
[will: changed internal return value from -EINTR to 0 to match arch/arm/]
Signed-off-by: Will Deacon <will.deacon@arm.com>
secure_computing() is called first in syscall_trace_enter() so that
a system call can be aborted quickly, without any subsequent syscall
tracing, if seccomp rules deny that system call.
On a compat task, the syscall numbers for system calls allowed in
seccomp mode 1 differ from those on native tasks, and so the
__NR_seccomp_xxx_32 values need to be redefined.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
SIGSYS is primarily used in secure computing to notify tracer of syscall
events. This patch allows signal handler on compat task to get correct
information with SA_SIGINFO specified when this signal is delivered.
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If tracer modifies a syscall number to -1, this traced system call should
be skipped with a return value specified in x0.
This patch implements this semantics.
Please note:
* syscall entry tracing and syscall exit tracing (ftrace tracepoint and
audit) are always executed, if enabled, even when skipping a system call
(that is, -1).
In this way, we can avoid a potential bug where audit_syscall_entry()
might be called without audit_syscall_exit() at the previous system call
being called, that would cause OOPs in audit_syscall_entry().
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
[will: fixed up conflict with blr rework]
Signed-off-by: Will Deacon <will.deacon@arm.com>
This regset is intended to be used to get and set a system call number
while tracing.
There was some discussion about possible approaches to do so:
(1) modify x8 register with ptrace(PTRACE_SETREGSET) indirectly,
and update regs->syscallno later on in syscall_trace_enter(), or
(2) define a dedicated regset for this purpose as on s390, or
(3) support ptrace(PTRACE_SET_SYSCALL) as on arch/arm
Thinking of the fact that user_pt_regs doesn't expose 'syscallno' to
tracer as well as that secure_computing() expects a changed syscall number,
especially the case of -1, to be visible before this function returns in
syscall_trace_enter(), (1) doesn't work well.
We will take (2) since it looks much cleaner.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The head.text section is intended to be run at early bootup
before any of the regular kernel mappings have been setup.
Parts of head.text may be freed back into the buddy allocator
due to TEXT_OFFSET so for security requirements this memory
must not be executable. The suspend/resume/hotplug code path
requires some of these head.S functions to run however which
means they need to be executable. Support these conflicting
requirements by moving the few head.text functions that need
to be executable to the text section which has the appropriate
page table permissions.
Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In the arm64 arch_static_branch implementation we place an A64 NOP into
the instruction stream and log relevant details to a jump_entry in a
__jump_table section. Later this may be replaced with an immediate
branch without link to the code for the unlikely case.
At init time, the core calls arch_jump_label_transform_static to
initialise the NOPs. On x86 this involves inserting the optimal NOP for
a given microarchitecture, but on arm64 we only use the architectural
NOP, and hence replace each NOP with the exact same NOP. This is
somewhat pointless.
Additionally, at module load time we don't call jump_label_apply_nops to
patch the optimal NOPs in, unlike other architectures, but get away with
this because we only use the architectural NOP anyway. A later notifier
will patch NOPs with branches as required.
Similarly to x86 commit 11570da1c5 (x86/jump-label: Do not bother
updating NOPs if they are correct), we can avoid patching NOPs with
identical NOPs. Given that we only use a single NOP encoding, this means
we can NOP-out the body of arch_jump_label_transform_static entirely. As
the default __weak arch_jump_label_transform_static implementation
performs a patch, we must use an empty function to achieve this.
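The override is then simply an empty function (sketch):

void arch_jump_label_transform_static(struct jump_entry *entry,
				      enum jump_label_type type)
{
	/*
	 * We always use the architected A64 NOP, so there is never a
	 * better NOP to patch in over the one already there.
	 */
}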
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Jiang Liu <liuj97@gmail.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
.exit.* sections may be subject to patching by the new alternatives
framework and so shouldn't be discarded at link-time. Without this patch,
such a section will result in the following linker error:
`.exit.text' referenced in section `.altinstructions' of
drivers/built-in.o: defined in discarded section `.exit.text' of
drivers/built-in.o
Signed-off-by: Will Deacon <will.deacon@arm.com>
The fixmap API was originally added for arm64 for
early_ioremap purposes. It can be used for other purposes too
so move the initialization from ioremap to somewhere more
generic. This makes it obvious where the fixmap is being set
up and allows for a cleaner implementation of __set_fixmap.
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The function cpu_resume currently lives in the .data section.
There's no reason for it to be there since we can use relative
instructions without a problem. Move a few cpu_resume data
structures out of the assembly file so the .data annotation
can be dropped completely and cpu_resume ends up in the read
only text section.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The hyp stub vectors are currently loaded using adr. This
instruction has a +/- 1MB range for the loading address. If
the alignment for sections is changed the address may be more
than 1MB away, resulting in relocation errors. Switch to using
adrp for getting the address to ensure we aren't affected by the
location of the __hyp_stub_vectors.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
handle_arch_irq isn't actually text, it's just a function pointer.
It doesn't need to be stored in the text section and doing so
causes problems if we ever want to make the kernel text read only.
Declare handle_arch_irq as a proper function pointer stored in
the data section.
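A sketch of the C side replacing the .text literal:

/* Lives in .data; the exception entry code loads it like any pointer. */
void (*handle_arch_irq)(struct pt_regs *) = NULL;

void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
{
	if (handle_arch_irq)
		return;
	handle_arch_irq = handle_irq;
}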
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
While we currently expect self-hosted debug support to be identical
across CPUs, we don't currently sanity check this.
This patch adds logging of the ID_AA64DFR{0,1}_EL1 values and associated
sanity checking code.
It's not clear to me whether we need to check PMUVer, TraceVer, and
DebugVer, as we don't currently rely on these fields at all.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
A missing newline in the WARN_TAINT_ONCE string results in ugly and
somewhat difficult to read output in the case of a sanity check failure,
as the next print does not appear on a new line:
Unsupported CPU feature variation.Modules linked in:
This patch adds the missing newline, fixing the output formatting.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
It seems that Cortex-A53 r0p4 added support for AIFSR and ADFSR, and
ID_MMFR0.AuxReg has been updated accordingly to report this fact. As
Cortex-A53 could be paired with CPUs which do not implement these
registers (e.g. all current revisions of Cortex-A57), this may trigger a
sanity check failure at boot.
The AuxReg value describes the availability of the ACTLR, AIFSR, and
ADFSR registers, which are only of use to 32-bit guest OSs, and have
IMPLEMENTATION DEFINED contents. Given the nature of these registers it
is likely that KVM will need to trap accesses regardless of whether the
CPUs are heterogeneous.
This patch masks out the ID_MMFR0.AuxReg value from the sanity checks,
preventing spurious warnings at boot time.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The only requirement the scheduler has on cluster IDs is that they must
be unique. When enumerating the topology based on MPIDR information the
kernel currently generates cluster IDs by using the first level of
affinity above the core ID (either level one or two depending on if the
core has multiple threads) however the ARMv8 architecture allows for up
to three levels of affinity. This means that an ARMv8 system may
contain cores whose MPIDRs are identical except for affinity level
three, which with the current code will cause us to report multiple
cores with the same identification to the scheduler, in violation of its
uniqueness requirement.
Ensure that we do not violate the scheduler requirements on systems
that use all the affinity levels by incorporating both affinity levels
two and three into the cluster ID when the cores are not threaded.
While no currently known hardware uses multi-level clusters it is better
to program defensively, this will help ease bringup of systems that have
them and will ensure that things like distribution install media do not
need to be respun to replace kernels in order to deploy such systems.
In the worst case the system will work but perform suboptimally until
a kernel modified to handle the new topology better is installed; in the
best case this will be an adequate description of such topologies for
the scheduler to perform well.
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Not all of the errata we have workarounds for apply necessarily to all
SoCs, so people compiling a kernel for one very specific SoC may not
need to patch the kernel.
Introduce a new submenu in the "Platform selection" menu to allow
people to turn off certain bugs if they are not affected. By default
all of them are enabled.
Normal users or distribution kernels shouldn't bother to deselect any
bugs here, since the alternatives framework will take care of
patching them in only if needed.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: moved kconfig menu under `Kernel Features']
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM erratum 832075 applies to certain revisions of Cortex-A57,
one of the workarounds is to change device loads into using
load-aquire semantics.
This is achieved using the alternatives framework.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The ARM errata 819472, 826319, 827319 and 824069 define the same
workaround for these hardware issues in certain Cortex-A53 parts.
Use the new alternatives framework and the CPU MIDR detection to
patch "cache clean" into "cache clean and invalidate" instructions if
an affected CPU is detected at runtime.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: add __maybe_unused to squash gcc warning]
Signed-off-by: Will Deacon <will.deacon@arm.com>
After each CPU has been started, we iterate through a list of
CPU features or bugs to detect CPUs which need (or could benefit
from) kernel code patches.
For each feature/bug there is a function which checks if that
particular CPU is affected. We will later provide some more generic
functions for common things like testing for certain MIDR ranges.
We do this for every CPU to cover big.LITTLE systems properly as
well.
If a certain feature/bug has been detected, the capability bit will
be set, so that later the call to apply_alternatives() will trigger
the actual code patching.
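A sketch of the per-CPU walk; arm64_errata is the table of entries, and
the cpus_have_cap()/cpus_set_cap() accessors belong to the feature-bits
infrastructure sketched further below:

struct arm64_cpu_capabilities {
	const char *desc;
	u16 capability;
	bool (*matches)(const struct arm64_cpu_capabilities *);
	u32 midr_model;
	u32 midr_range_min, midr_range_max;
};

extern const struct arm64_cpu_capabilities arm64_errata[];

void check_local_cpu_errata(void)
{
	const struct arm64_cpu_capabilities *caps = arm64_errata;

	for (; caps->matches; caps++) {
		if (caps->matches(caps)) {
			if (!cpus_have_cap(caps->capability))
				pr_info("enabling workaround for %s\n",
					caps->desc);
			cpus_set_cap(caps->capability);
		}
	}
}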
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
With a blatant copy of some x86 bits we introduce the alternative
runtime patching "framework" to arm64.
This is quite basic for now and we only provide the functions we need
at this time.
This is connected to the newly introduced feature bits.
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
To take note of whether at least one CPU in the system needs a bug
workaround or would benefit from a code optimization, we create a new
bitmap to hold (artificial) feature bits.
Since elf_hwcap is part of the userland ABI, we keep it alone and
introduce a new data structure for that (along with some accessors).
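A sketch of the bitmap and its accessors (ARM64_NCAPS caps the number
of feature bits):

DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);

static inline bool cpus_have_cap(unsigned int num)
{
	if (num >= ARM64_NCAPS)
		return false;
	return test_bit(num, cpu_hwcaps);
}

static inline void cpus_set_cap(unsigned int num)
{
	if (!WARN_ON(num >= ARM64_NCAPS))
		set_bit(num, cpu_hwcaps);
}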
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
update_insn_emulation_mode() returns 0 on success, so we should be
treating any non-zero values as failure, rather than the other way
around. Otherwise, writes to the sysctl file controlling the emulation
are ignored and immediately rolled back.
Reported-by: Gene Hackmann <ghackmann@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce an event to trace the usage of emulated instructions. The
trace event is intended to help identify and encourage the migration
of legacy software using the emulation features.
Use this event to trace usage of swp and CP15 barrier emulation.
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The CP15 barrier instructions (CP15ISB, CP15DSB and CP15DMB) are
deprecated in the ARMv7 architecture, superseded by ISB, DSB and DMB
instructions respectively. Some implementations may provide the
ability to disable the CP15 barriers by disabling the CP15BEN bit in
SCTLR_EL1. If not enabled, the encodings for these instructions become
undefined.
To support legacy software using these instructions, this patch
registers hooks (see the sketch after this list) to:
* emulate CP15 barriers and warn the user about their use
* toggle CP15BEN in SCTLR_EL1
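A sketch of the first of these hooks; is_cp15_isb() is a hypothetical
decode helper, and the real handler also honours the AArch32 condition
code:

static int cp15barrier_handler(struct pt_regs *regs, u32 instr)
{
	pr_warn_ratelimited("\"%s\" (%d) uses deprecated CP15 Barrier instruction\n",
			    current->comm, (int)current->pid);

	if (is_cp15_isb(instr))		/* hypothetical decode helper */
		isb();
	else				/* CP15DSB and CP15DMB */
		dsb(sy);

	regs->pc += 4;			/* retire the emulated instruction */
	return 0;
}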
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The SWP instruction was deprecated in the ARMv6 architecture. The
ARMv7 multiprocessing extensions mandate that SWP/SWPB instructions
are treated as undefined from reset, with the ability to enable them
through the System Control Register SW bit. With ARMv8, the option to
enable these instructions through System Control Register was dropped
as well.
To support legacy applications using these instructions, port the
emulation of the SWP and SWPB instructions from the arm port to arm64.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Typically, providing support for legacy instructions requires
emulating the behaviour of instructions whose encodings have become
undefined. If the instructions haven't been removed from the
architecture, there may be an option in the implementation to turn
on/off the support for these instructions.
Create common infrastructure to support legacy instruction
emulation. In addition to emulation, also provide an option to support
hardware execution when supported. The default execution mode (one of
undef, emulate, hw execution) depends on the state of the
instruction (deprecated or obsolete) in the architecture and can be
specified at the time of registering the instruction handlers. The
runtime state of the emulation can be controlled by writing to
individual nodes in sysctl. The expected default behaviour is
documented as part of this patch.
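A sketch of the descriptor each emulation registers; status picks the
default mode, and set_hw_mode() toggles any hardware support (e.g.
CP15BEN):

enum legacy_insn_status {
	INSN_DEPRECATED,
	INSN_OBSOLETE,
};

enum insn_emulation_mode {
	INSN_UNDEF,	/* only valid for obsolete instructions */
	INSN_EMULATE,
	INSN_HW,	/* only valid if the hardware supports it */
};

struct insn_emulation_ops {
	const char		*name;
	enum legacy_insn_status	status;
	struct undef_hook	*hooks;
	int			(*set_hw_mode)(bool enable);
};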
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Port support for AArch32 instruction condition code checking from arm
to arm64.
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add support to register hooks for undefined instructions. The handlers
will be called when the undefined instruction and the processor state
(as contained in pstate) match criteria used at registration.
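A sketch of the interface: a hook fires when the faulting instruction
and the saved PSTATE match under the registered masks.

struct undef_hook {
	struct list_head node;
	u32 instr_mask;
	u32 instr_val;
	u64 pstate_mask;
	u64 pstate_val;
	int (*fn)(struct pt_regs *regs, u32 instr);
};

void register_undef_hook(struct undef_hook *hook);
void unregister_undef_hook(struct undef_hook *hook);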
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit a469abd0f8 (ARM: elf: add new hwcap for identifying atomic
ldrd/strd instructions) introduces HWCAP_LPAE for 32-bit ARM
applications. As LPAE is always present on arm64, report the
corresponding compat HWCAP to user space.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.11+
Signed-off-by: Will Deacon <will.deacon@arm.com>
The push/pop instructions can be suboptimal when saving/restoring large
amounts of data to/from the stack, for example on entry/exit from the
kernel. This is because:
(1) They act on descending addresses (i.e. the newly decremented sp),
which may defeat some hardware prefetchers
(2) They introduce an implicit dependency between each instruction, as
the sp has to be updated in order to resolve the address of the
next access.
This patch removes the push/pop instructions from our kernel entry/exit
macros in favour of ldp/stp plus offset.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Using an explicit adr instruction to set the link register to point at
ret_fast_syscall/ret_to_user can defeat branch and return stack predictors.
Instead, use the standard calling instructions (bl, blr) and have an
unconditional branch as the following instruction.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit d7a49086f2 (arm64: cpuinfo: print info for all CPUs)
attempted to clean up /proc/cpuinfo, but due to concerns regarding
further changes was reverted in commit 5e39977edf (Revert "arm64:
cpuinfo: print info for all CPUs").
There are two major issues with the arm64 /proc/cpuinfo format
currently:
* The "Features" line describes (only) the 64-bit hwcaps, which is
problematic for some 32-bit applications which attempt to parse it. As
the same names are used for analogous ISA features (e.g. aes) despite
these generally being architecturally unrelated, it is not possible to
simply append the 64-bit and 32-bit hwcaps in a manner that might not
be misleading to some applications.
Various potential solutions have appeared in vendor kernels. Typically
the format of the Features line varies depending on whether the task
is 32-bit.
* Information is only printed regarding a single CPU. This does not
match the ARM format, and does not provide sufficient information in
big.LITTLE systems where CPUs are heterogeneous. The CPU information
printed is queried from the current CPU's registers, which is racy
w.r.t. cross-cpu migration.
This patch attempts to solve these issues. The following changes are
made:
* When a task with a LINUX32 personality attempts to read /proc/cpuinfo,
the "Features" line contains the decoded 32-bit hwcaps, as with the
arm port. Otherwise, the decoded 64-bit hwcaps are shown. This aligns
with the behaviour of COMPAT_UTS_MACHINE and COMPAT_ELF_PLATFORM. In
the absence of compat support, the Features line is empty.
The set of hwcaps injected into a task's auxval are unaffected.
* Properties are printed per-cpu, as with the ARM port. The per-cpu
information is queried from pre-recorded cpu information (as used by
the sanity checks).
* As with the previous attempt at fixing up /proc/cpuinfo, the hardware
field is removed. The only users so far are 32-bit applications tied
to particular boards, so no portable applications should be affected,
and this should prevent future tying to particular boards.
The following differences remain:
* No model_name is printed, as this cannot be queried from the hardware
and cannot be provided in a stable fashion. Use of the CPU
{implementor,variant,part,revision} fields is sufficient to identify a
CPU and is portable across arm and arm64.
* The following system-wide properties are not provided, as they are not
possible to provide generally. Programs relying on these are already
tied to particular (32-bit only) boards:
- Hardware
- Revision
- Serial
No software has yet been identified for which these remaining
differences are problematic.
Cc: Greg Hackmann <ghackmann@google.com>
Cc: Ian Campbell <ijc@hellion.org.uk>
Cc: Serban Constantinescu <serban.constantinescu@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: cross-distro@lists.linaro.org
Cc: linux-api@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When experimenting with patches to provide kprobes support for aarch64,
SMP machines would hang when inserting breakpoints into kernel code.
The hangs were caused by a race condition in the code called by
aarch64_insn_patch_text_sync(). The first processor in the
aarch64_insn_patch_text_cb() function would patch the code while other
processors were still entering the function and incrementing the
cpu_count field. This resulted in some processors never observing the
exit condition and so never exiting the function. Thus, processors in the
system hung.
The first processor to enter the patching function performs the
patching and signals that the patching is complete with an increment
of the cpu_count field. When all the processors have incremented the
cpu_count field, the cpu_count will be num_online_cpus()+1 and they
will return to normal execution.
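A sketch of the fixed rendezvous: the first CPU in performs the
patching, then bumps the counter a final time to release the waiters.

struct aarch64_insn_patch {
	void		**text_addrs;
	u32		*new_insns;
	int		insn_cnt;
	atomic_t	cpu_count;
};

static int aarch64_insn_patch_text_cb(void *arg)
{
	int i, ret = 0;
	struct aarch64_insn_patch *pp = arg;

	/* The first CPU to arrive becomes the master. */
	if (atomic_inc_return(&pp->cpu_count) == 1) {
		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
							     pp->new_insns[i]);
		/* Release the waiting CPUs with an additional increment. */
		atomic_inc(&pp->cpu_count);
	} else {
		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
			cpu_relax();
		isb();
	}

	return ret;
}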
Fixes: ae16480785 ("arm64: introduce interfaces to hotpatch kernel and module code")
Signed-off-by: William Cohen <wcohen@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
While efi-entry.S mentions that efi_entry() will have relocated the
kernel image, it actually means that efi_entry will have placed a copy
of the kernel in the appropriate location, and until this is branched to
at the end of efi_entry.S, all instructions are executed from the
original image.
Thus while the flush in efi_entry.S does ensure that the copy is visible
to noncacheable accesses, it does not guarantee that this is true for
the image instructions are being executed from. This could have
disastrous effects when the MMU and caches are disabled if the image
has not been naturally evicted to the PoC.
Additionally, due to a missing dsb following the ic ialluis, the new
kernel image is not necessarily clean in the I-cache when it is branched
to, with similar potentially disastrous effects.
This patch adds additional flushing to ensure that the currently
executing stub text is flushed to the PoC and is thus visible to
noncacheable accesses. As it is placed after the instruction cache
maintenance for the new image and __flush_dcache_area already contains a
dsb, we do not need to add a separate barrier to ensure completion of
the icache maintenance.
Comments are updated to clarify the situation with regard to the two
images and the maintenance required for both.
Fixes: 3c7f255039
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Joel Schopp <joel.schopp@amd.com>
Reviewed-by: Roy Franz <roy.franz@linaro.org>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ian Campbell <ijc@hellion.org.uk>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- enable bpf syscall for compat
- cpu_suspend fix when checking the idle state type
- defconfig update
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: defconfig: update defconfig for 3.18
arm64: compat: Enable bpf syscall
arm64: psci: fix cpu_suspend to check idle state type for index
Replace ldr xN, =<symbol> with adrp/add or adrp/ldr [as appropriate]
in the implementation of _mcount(), which may be called very often.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In certain debugging scenarios it's useful to know the physical ID (i.e.
the MPIDR_EL1.Aff* fields) of the boot CPU, but we don't currently log
this as we do for 32-bit ARM kernels.
This patch makes the kernel log the physical ID of the boot CPU early in
the boot process. The CPU logical map initialisation is folded in to
smp_setup_processor_id (which contrary to its name is also called by UP
kernels). This is called before setup_arch, so should not adversely
affect existing cpu_logical_map users.
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This installs the machine name as recorded by setup_machine_fdt()
as dump stack arch description. This results in the string to be
included in call stack dumps, as is shown here:
...
Bad mode in Synchronous Abort handler detected, code 0x84000005
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.18.0-rc2+ #548
> Hardware name: linux,dummy-virt (DT)
task: ffffffc07c870000 ti: ffffffc07c878000 task.ti: ffffffc07c878000
PC is at 0x0
...
Note that systems that support DMI/SMBIOS may override this later.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Optimize memcpy_{from,to}io() and memset_io() by transferring in
64-bit chunks as much as possible, with minimized barrier usage. This
simple optimization brings higher throughput compared to the current
byte-by-byte read and write with a barrier in the loop. The code's
skeleton is taken from the powerpc implementation.
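A sketch of the resulting shape for the read side: byte accesses until
the source is aligned, 64-bit accesses for the bulk, bytes for the
tail.

void __memcpy_fromio(void *to, const volatile void __iomem *from,
		     size_t count)
{
	/* Head: bytes until the source is 8-byte aligned. */
	while (count && !IS_ALIGNED((unsigned long)from, 8)) {
		*(u8 *)to = __raw_readb(from);
		from++;
		to++;
		count--;
	}

	/* Bulk: 64-bit reads with no per-access barrier. */
	while (count >= 8) {
		*(u64 *)to = __raw_readq(from);
		from += 8;
		to += 8;
		count -= 8;
	}

	/* Tail: remaining bytes. */
	while (count) {
		*(u8 *)to = __raw_readb(from);
		from++;
		to++;
		count--;
	}
}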
Link: http://lkml.kernel.org/p/20141020133304.GH23751@e104818-lin.cambridge.arm.com
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Trilok Soni <tsoni@codeaurora.org>
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This sets the DMI string, containing system type, serial number,
firmware version etc. as dump stack arch description, so that oopses
and other kernel stack dumps automatically have this information
included, if available.
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform. Things like serial
numbers, physical layout of the ports, build configuration data, and the like.
Signed-off-by: Yi Li <yi.li@linaro.org>
Tested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
The EFI_CONFIG_TABLES bit already gets set by efi_config_init(),
so there is no reason to set it again after this function returns
successfully.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Instead of reserving the memory regions based on which types we know
need to be reserved, consider only regions of the following types as
free for general use by the OS:
EFI_LOADER_CODE
EFI_LOADER_DATA
EFI_BOOT_SERVICES_CODE
EFI_BOOT_SERVICES_DATA
EFI_CONVENTIONAL_MEMORY
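A sketch of the resulting classification (the helper name here is
illustrative, not the exact one in the patch):

    static int __init is_usable_memory(efi_memory_desc_t *md)
    {
        switch (md->type) {
        case EFI_LOADER_CODE:
        case EFI_LOADER_DATA:
        case EFI_BOOT_SERVICES_CODE:
        case EFI_BOOT_SERVICES_DATA:
        case EFI_CONVENTIONAL_MEMORY:
            return 1;   /* free for general use by the OS */
        default:
            return 0;   /* everything else stays reserved */
        }
    }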
Note that this also fixes a problem with the original code, which would
misidentify a EFI_RUNTIME_SERVICES_DATA region as not reserved if it
does not have the EFI_MEMORY_RUNTIME attribute set. However, it is
perfectly legal for the firmware not to request a virtual mapping for
EFI_RUNTIME_SERVICES_DATA regions that contain configuration tables, in
which case the EFI_MEMORY_RUNTIME attribute would not be set.
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Change our PE/COFF header to use the minimum file alignment of
512 bytes (0x200), as mandated by the PE/COFF spec v8.3.
Also update the linker script so that the Image file itself is a
round multiple of FileAlignment.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Position independent AArch64 code needs to be linked and loaded at the
same relative offset from a 4 KB boundary, or adrp/add and adrp/ldr
pairs will not work correctly. (This is how PC relative symbol
references with a 4 GB reach are emitted)
We need to declare this in the PE/COFF header, otherwise the PE/COFF
loader may load the Image and invoke the stub at an offset which
violates this rule.
Reviewed-by: Roy Franz <roy.franz@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
After the EFI stub has done its business, it jumps into the kernel by
branching to offset #0 of the loaded Image, which is where it expects
to find the header containing a 'branch to stext' instruction.
However, the UEFI spec 2.1.1 states the following regarding PE/COFF
image loading:
"A UEFI image is loaded into memory through the LoadImage() Boot
Service. This service loads an image with a PE32+ format into memory.
This PE32+ loader is required to load all sections of the PE32+ image
into memory."
In other words, it is /not/ required to load parts of the image that are
not covered by a PE/COFF section, so it may not have loaded the header
at the expected offset, as it is not covered by any PE/COFF section.
So instead, jump to 'stext' directly, which is at the base of the
PE/COFF .text section, by supplying a symbol 'stext_offset' to
efi-entry.o which contains the relative offset of stext into the Image.
Also replace other open coded calculations of the same value with a
reference to 'stext_offset'.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This fix rectifies the psci cpu_suspend implementation to check the
PSCI power state parameter type field associated with the requested idle
state index.
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
During the 3.18 merge period additional __get_cpu_var uses were
added. The patch converts these to this_cpu_ptr().
Signed-off-by: Christoph Lameter <cl@linux.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- enable 48-bit VA space now that KVM has been fixed, together with a
couple of fixes for pgd allocation alignment and initial memblock
current_limit. There is still a dependency on !ARM_SMMU which needs
to be updated as it uses the page table manipulation macros of the
host kernel
- eBPF fixes following changes/conflicts during the merging window
- Compat types affecting compat_elf_prpsinfo
- Compilation error on UP builds
- ASLR fix when /proc/sys/kernel/randomize_va_space == 0
- DT definitions for CLCD support on ARMv8 model platform
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Fix memblock current_limit with 64K pages and 48-bit VA
arm64: ASLR: Don't randomise text when randomise_va_space == 0
arm64: vexpress: Add CLCD support to the ARMv8 model platform
arm64: Fix compilation error on UP builds
Documentation/arm64/memory.txt: fix typo
net: bpf: arm64: minor fix of type in jited
arm64: bpf: add 'load 64-bit immediate' instruction
arm64: bpf: add 'shift by register' instructions
net: bpf: arm64: address randomize and write protect JIT code
arm64: mm: Correct fixmap pagetable types
arm64: compat: fix compat types affecting struct compat_elf_prpsinfo
arm64: Align less than PAGE_SIZE pgds naturally
arm64: Allow 48-bits VA space without ARM_SMMU
When the user asks to turn off ASLR by writing "0" to
/proc/sys/kernel/randomize_va_space, there should not be
any randomization of the mmap base, stack, VDSO, libs, text and heap.
Currently arm64 violates this behaviour by randomising text.
Fix this by defining a constant ELF_ET_DYN_BASE. The randomisation of
mm->mmap_base is done by setup_new_exec -> arch_pick_mmap_layout ->
mmap_base -> mmap_rnd.
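The fix itself boils down to defining the constant, along these lines:

    /* fixed load base for ET_DYN binaries; no arch randomisation here */
    #define ELF_ET_DYN_BASE  (2 * TASK_SIZE_64 / 3)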
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull x86 EFI updates from Peter Anvin:
"This patchset falls under the "maintainers that grovel" clause in the
v3.18-rc1 announcement. We had intended to push it late in the merge
window since we got it into the -tip tree relatively late.
Many of these are relatively simple things, but there are a couple of
key bits, especially Ard's and Matt's patches"
* 'x86-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
rtc: Disable EFI rtc for x86
efi: rtc-efi: Export platform:rtc-efi as module alias
efi: Delete the in_nmi() conditional runtime locking
efi: Provide a non-blocking SetVariable() operation
x86/efi: Adding efi_printks on memory allocationa and pci.reads
x86/efi: Mark initialization code as such
x86/efi: Update comment regarding required phys mapped EFI services
x86/efi: Unexport add_efi_memmap variable
x86/efi: Remove unused efi_call* macros
efi: Resolve some shadow warnings
arm64: efi: Format EFI memory type & attrs with efi_md_typeattr_format()
ia64: efi: Format EFI memory type & attrs with efi_md_typeattr_format()
x86: efi: Format EFI memory type & attrs with efi_md_typeattr_format()
efi: Introduce efi_md_typeattr_format()
efi: Add macro for EFI_MEMORY_UCE memory attribute
x86/efi: Clear EFI_RUNTIME_SERVICES if failing to enter virtual mode
arm64/efi: Do not enter virtual mode if booting with efi=noruntime or noefi
arm64/efi: uefi_init error handling fix
efi: Add kernel param efi=noruntime
lib: Add a generic cmdline parse function parse_option_str
...
Pull audit updates from Eric Paris:
"So this change across a whole bunch of arches really solves one basic
problem. We want to audit when seccomp is killing a process. seccomp
hooks in before the audit syscall entry code. audit_syscall_entry
took as an argument the arch of the given syscall. Since the arch is
part of what makes a syscall number meaningful it's an important part
of the record, but it isn't available when seccomp shoots the
syscall...
For most arch's we have a better way to get the arch (syscall_get_arch)
So the solution was two fold: Implement syscall_get_arch() everywhere
there is audit which didn't have it. Use syscall_get_arch() in the
seccomp audit code. Having syscall_get_arch() everywhere meant it was
a useless flag on the stack and we could get rid of it for the typical
syscall entry.
The other changes inside the audit system aren't grand, fixed some
records that had invalid spaces. Better locking around the task comm
field. Removing some dead functions and structs. Make some things
static. Really minor stuff"
* git://git.infradead.org/users/eparis/audit: (31 commits)
audit: rename audit_log_remove_rule to disambiguate for trees
audit: cull redundancy in audit_rule_change
audit: WARN if audit_rule_change called illegally
audit: put rule existence check in canonical order
next: openrisc: Fix build
audit: get comm using lock to avoid race in string printing
audit: remove open_arg() function that is never used
audit: correct AUDIT_GET_FEATURE return message type
audit: set nlmsg_len for multicast messages.
audit: use union for audit_field values since they are mutually exclusive
audit: invalid op= values for rules
audit: use atomic_t to simplify audit_serial()
kernel/audit.c: use ARRAY_SIZE instead of sizeof/sizeof[0]
audit: reduce scope of audit_log_fcaps
audit: reduce scope of audit_net_id
audit: arm64: Remove the audit arch argument to audit_syscall_entry
arm64: audit: Add audit hook in syscall_trace_enter/exit()
audit: x86: drop arch from __audit_syscall_entry() interface
sparc: implement is_32bit_task
sparc: properly conditionalize use of TIF_32BIT
...
Merge tag 'restart-handler-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull restart handler infrastructure from Guenter Roeck:
"This series was supposed to be pulled through various trees using it,
and I did not plan to send a separate pull request. As it turns out,
the pinctrl tree did not merge with it, is now upstream, and uses it,
meaning there are now build failures.
Please pull this series directly to fix those build failures"
* tag 'restart-handler-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
arm/arm64: unexport restart handlers
watchdog: sunxi: register restart handler with kernel restart handler
watchdog: alim7101: register restart handler with kernel restart handler
watchdog: moxart: register restart handler with kernel restart handler
arm: support restart through restart handler call chain
arm64: support restart through restart handler call chain
power/restart: call machine_restart instead of arm_pm_restart
kernel: add support for kernel restart handler call chain
Pull irq updates from Thomas Gleixner:
"The irq departement delivers:
- a cleanup series to get rid of mindlessly copied code.
- another bunch of new pointlessly different interrupt chip drivers.
Adding homebrewn irq chips (and timers) to SoCs must provide a
value add which is beyond the imagination of mere mortals.
- the usual SoC irq controller updates, IOW my second cat herding
project"
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (44 commits)
irqchip: gic-v3: Implement CPU PM notifier
irqchip: gic-v3: Refactor gic_enable_redist to support both enabling and disabling
irqchip: renesas-intc-irqpin: Add minimal runtime PM support
irqchip: renesas-intc-irqpin: Add helper variable dev = &pdev->dev
irqchip: atmel-aic5: Add sama5d4 support
irqchip: atmel-aic5: The sama5d3 has 48 IRQs
Documentation: bcm7120-l2: Add Broadcom BCM7120-style L2 binding
irqchip: bcm7120-l2: Add Broadcom BCM7120-style Level 2 interrupt controller
irqchip: renesas-irqc: Add binding docs for new R-Car Gen2 SoCs
irqchip: renesas-irqc: Add DT binding documentation
irqchip: renesas-intc-irqpin: Document SoC-specific bindings
openrisc: Get rid of handle_IRQ
arm64: Get rid of handle_IRQ
ARM: omap2: irq: Convert to handle_domain_irq
ARM: imx: tzic: Convert to handle_domain_irq
ARM: imx: avic: Convert to handle_domain_irq
irqchip: or1k-pic: Convert to handle_domain_irq
irqchip: atmel-aic5: Convert to handle_domain_irq
irqchip: atmel-aic: Convert to handle_domain_irq
irqchip: gic-v3: Convert to handle_domain_irq
...
Pull timer fixes from Ingo Molnar:
"Main changes:
- Fix the deadlock reported by Dave Jones et al
- Clean up and fix nohz_full interaction with arch abilities
- nohz init code consolidation/cleanup"
* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
nohz: nohz full depends on irq work self IPI support
nohz: Consolidate nohz full init code
arm64: Tell irq work about self IPI support
arm: Tell irq work about self IPI support
x86: Tell irq work about self IPI support
irq_work: Force raised irq work to run on irq work interrupt
irq_work: Introduce arch_irq_work_has_interrupt()
nohz: Move nohz full init call to tick init
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- eBPF JIT compiler for arm64
- CPU suspend backend for PSCI (firmware interface) with standard idle
states defined in DT (generic idle driver to be merged via a
different tree)
- Support for CONFIG_DEBUG_SET_MODULE_RONX
- Support for unmapped cpu-release-addr (outside kernel linear mapping)
- set_arch_dma_coherent_ops() implemented and bus notifiers removed
- EFI_STUB improvements when base of DRAM is occupied
- Typos in KGDB macros
- Clean-up to (partially) allow kernel building with LLVM
- Other clean-ups (extern keyword, phys_addr_t usage)
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (51 commits)
arm64: Remove unneeded extern keyword
ARM64: make of_device_ids const
arm64: Use phys_addr_t type for physical address
aarch64: filter $x from kallsyms
arm64: Use DMA_ERROR_CODE to denote failed allocation
arm64: Fix typos in KGDB macros
arm64: insn: Add return statements after BUG_ON()
arm64: debug: don't re-enable debug exceptions on return from el1_dbg
Revert "arm64: dmi: Add SMBIOS/DMI support"
arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
of: amba: use of_dma_configure for AMBA devices
arm64: dmi: Add SMBIOS/DMI support
arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
arm64:mm: initialize max_mapnr using function set_max_mapnr
setup: Move unmask of async interrupts after possible earlycon setup
arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
arm64: pageattr: Correctly adjust unaligned start addresses
net: bpf: arm64: fix module memory leak when JIT image build fails
arm64: add PSCI CPU_SUSPEND based cpu_suspend support
arm64: kernel: introduce cpu_init_idle CPU operation
...
In case the EFI runtime is disabled via the noefi kernel cmdline,
arm64_enter_virtual_mode should error out.
At the same time, move early_memunmap(memmap.map, mapsize) to the
beginning of the function, or it will leak early memory.
Signed-off-by: Dave Young <dyoung@redhat.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
There's an early memmap leak in the uefi_init error path; fix it and
slightly tune the error handling code.
Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: Mark Salter <msalter@redhat.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
of_device_ids (i.e. compatible strings and the respective data) are not
supposed to change at runtime. All functions working with of_device_ids
provided by <linux/of.h> work with const of_device_ids. So mark the
only non-const struct in arch/arm64 as const, too.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch changes the __init_end address to a
page-aligned address, so that free_initmem() can
free the whole .init section. If the end
address is not page aligned, it will be rounded down to
a page-aligned address, and the tail unaligned page
will then not be freed.
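The reason is visible in the generic freeing helper, which only ever
frees whole pages, roughly:

    /* free_reserved_area() rounds inwards to page boundaries: */
    start = (void *)PAGE_ALIGN((unsigned long)start);
    end   = (void *)((unsigned long)end & PAGE_MASK);
    /* an unaligned __init_end therefore leaves its tail page allocated */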
Signed-off-by: wang <yalin.wang2010@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use the generic PCI domain and OF functions to provide support for PCI
on arm64.
[bhelgaas: Change comments to use generic PCI, not just PCIe. Nothing at
this level is PCIe-specific.]
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Implementing a restart handler in a module doesn't make sense as there
would be no guarantee that the module is loaded when a restart is needed.
Unexport arm_pm_restart to ensure that no one gets the idea to do it
anyway.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The kernel core now supports a restart handler call chain to restart the
system. Call it if arm_pm_restart is not set.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonas Jensen <jonas.jensen@gmail.com>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Some of the KGDB macros used for generating the BRK instructions had the
wrong spelling for DBG and KGDB abbreviations.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Following a recent series of enhancements to the insn code the ARMv8
allnoconfig build has been generating a large number of warnings in the
form of:
arch/arm64/kernel/insn.c:689:8: warning: 'insn' may be used uninitialized in this function [-Wmaybe-uninitialized]
This is because BUG() and related macros can be compiled out so we get
execution paths which normally result in a panic compiling out to noops
instead.
I wasn't able to immediately identify a sensible return value to use in
these cases so just return AARCH64_BREAK_FAULT - this is all "should
never happen" code so hopefully it never has a practical impact.
Signed-off-by: Mark Brown <broonie@kernel.org>
[catalin.marinas@arm.com: AARCH64_BREAK_FAULT definition contributed by Daniel Borkmann]
[catalin.marinas@arm.com: replace return 0 with AARCH64_BREAK_FAULT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 tree added calls to audit_syscall_entry() and rightly included
the syscall number. The interface has since been changed to not need
the syscall number. As such, arm64 should no longer pass that value.
Signed-off-by: Eric Paris <eparis@redhat.com>
This patch adds auditing functions on entry to or exit from
every system call invocation.
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When returning from a debug exception taken from EL1, we unmask debug
exceptions after handling the exception. This is crucial for debug
exceptions taken from EL0, so that any kernel work on the ret_to_user
path can be debugged by kgdb.
However, when returning back to EL1 the only thing left to do is to
restore the original register state before the exception return. If
single-step has been enabled by the debug exception handler, we will
get stuck in an infinite debug exception loop, since we will take the
step exception as soon as we unmask debug exceptions.
This patch avoids unmasking debug exceptions on the debug exception
return path when the exception was taken from EL1.
Fixes: 2a2830703a (arm64: debug: avoid accessing mdscr_el1 on fault paths where possible)
Cc: <stable@vger.kernel.org> #3.16+
Reported-by: David Long <dave.long@linaro.org>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform. Things like serial
numbers, physical layout of the ports, build configuration data, and the like.
This has been tested by dmidecode and lshw tools.
This patch adds the call to dmi_scan_machine() to arm64_enter_virtual_mode(),
as that is the point where the EFI Configuration Tables are registered as
being available. It needs to be in an early_initcall anyway as dmi_id_init(),
which is an arch_initcall itself, depends on dmi_scan_machine() having been
called already.
Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The aarch64_insn_gen_branch_imm() function takes an enum as the last
argument rather than a bool. It happens to work because
AARCH64_INSN_BRANCH_LINK matches 'true', but it is better to use the
actual type.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The kernel wants to enable reporting of asynchronous interrupts (i.e.
System Errors) as early as possible. But if this happens too early, then
any pending System Error on initial entry into the kernel may never be
reported where a user can see it. This situation will occur if the kernel
is configured with CONFIG_PANIC_ON_OOPS set and (by default or command
line) enabled: the kernel will panic as intended, but the associated
logging messages indicating this failure condition will remain only in
the kernel ring buffer and never be flushed out to the (not yet
configured) console. Therefore, this patch moves the enabling of
asynchronous interrupts during early setup to as early as reasonable,
but after parsing any earlycon parameters.
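The resulting ordering in setup_arch() is, schematically:

    parse_early_param();    /* may register an earlycon console */
    local_async_enable();   /* now unmask SError, so a pending one
                               can actually reach the console */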
Signed-off-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM64 irq work self-IPI support depends on __smp_cross_call pointing to
some relevant IRQ controller operations. This information should be
available after the call to init_IRQ().
Let's implement arch_irq_work_has_interrupt() accordingly.
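The arm64 helper can then be as simple as this sketch:

    static inline bool arch_irq_work_has_interrupt(void)
    {
        /* non-NULL once an irqchip has registered via init_IRQ() */
        return !!__smp_cross_call;
    }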
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* cpuidle:
arm64: add PSCI CPU_SUSPEND based cpu_suspend support
arm64: kernel: introduce cpu_init_idle CPU operation
arm64: kernel: refactor the CPU suspend API for retention states
Documentation: arm: define DT idle states bindings
This patch implements the cpu_suspend cpu operations method through
the PSCI CPU SUSPEND API. The PSCI implementation translates the idle state
index passed by the cpu_suspend core call into a valid PSCI state according to
the PSCI states initialized at boot through the cpu_init_idle() CPU
operations hook.
The PSCI CPU suspend operation hook checks if the PSCI state is a
standby state. If it is, it calls the PSCI suspend implementation
straight away, without saving any context. If the state is a power
down state, the kernel calls the __cpu_suspend API (which saves the CPU
context) and passes the PSCI suspend finisher as a parameter, so that PSCI
can be called by the __cpu_suspend implementation after saving and flushing
the context, as the last function before power down.
For power down states, entry point is set to cpu_resume physical address,
that represents the default kernel execution address following a CPU reset.
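Schematically, the suspend hook's dispatch looks like this (a simplified
sketch; error checking omitted):

    static int cpu_psci_cpu_suspend(unsigned long index)
    {
        struct psci_power_state *state = __this_cpu_read(psci_power_state);

        /* standby state: no context is lost, call PSCI directly */
        if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
            return psci_ops.cpu_suspend(state[index - 1], 0);

        /* power down state: save context, then run the finisher */
        return __cpu_suspend(index, psci_suspend_finisher);
    }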
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The CPUidle subsystem on ARM64 machines requires the idle states
implementation back-end to initialize idle state parameters upon
boot. This patch adds a hook in the CPU operations structure that
should be initialized by the CPU operations back-end in order to
provide a function that initializes cpu idle states.
This patch also adds the infrastructure to arm64 kernel required
to export the CPU operations based initialization interface, so
that drivers (ie CPUidle) can use it when they are initialized
at probe time.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
CPU suspend is the standard kernel interface to be used to enter
low-power states on ARM64 systems. Current cpu_suspend implementation
by default assumes that all low power states are losing the CPU context,
so the CPU registers must be saved and cleaned to DRAM upon state
entry. Furthermore, the current cpu_suspend() implementation assumes
that if the CPU suspend back-end method returns when called, this has
to be considered an error regardless of the return code (which can be
successful) since the CPU was not expected to return from a code path that
is different from cpu_resume code path - eg returning from the reset vector.
All in all this means that the current API does not cope well with
low-power states that preserve the CPU context when entered (ie retention
states): the context is saved for nothing on state entry for those
states, and a successful state entry can return as a normal function
return, which is considered an error by the current CPU suspend
implementation.
This patch refactors the cpu_suspend() API so that it can be split into
two separate functionalities. The arm64 cpu_suspend API just provides
a wrapper around CPU suspend operation hook. A new function is
introduced (for architecture code use only) for states that require
context saving upon entry:
__cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
__cpu_suspend() saves the context on function entry and calls the
so called suspend finisher (ie fn) to complete the suspend operation.
The finisher is not expected to return, unless it fails in which case
the error is propagated back to the __cpu_suspend caller.
The API refactoring results in the following pseudo code call sequence for a
suspending CPU, when triggered from a kernel subsystem:
/*
 * int cpu_suspend(unsigned long idx)
 * @idx: idle state index
 */
{
    -> cpu_suspend(idx)
       |---> CPU operations suspend hook called, if present
             |--> if (retention_state)
                  |--> direct suspend back-end call (eg PSCI suspend)
                  else
                  |--> __cpu_suspend(idx, &back_end_finisher);
}
By refactoring the cpu_suspend API this way, the CPU operations back-end
has a chance to detect whether idle states require state saving or not
and can call the required suspend operations accordingly either through
simple function call or indirectly through __cpu_suspend() which carries out
state saving and suspend finisher dispatching to complete idle state entry.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Nathan reports that we leak TLS information from the parent context
during an exec, as we don't clear the TLS registers when flushing the
thread state.
This patch updates the flushing code so that we:
(1) Unconditionally zero the tpidr_el0 register (since this is fully
context switched for native tasks and zeroed for compat tasks)
(2) Zero the tp_value state in thread_info before clearing the
tpidrro_el0 register for compat tasks (since this is only writable
by the set_tls compat syscall and therefore not fully switched).
A missing compiler barrier is also added to the compat set_tls syscall.
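Put together, the flush logic reads roughly as follows:

    static void tls_thread_flush(void)
    {
        asm ("msr tpidr_el0, xzr");          /* (1) always zeroed */

        if (is_compat_task()) {
            current->thread.tp_value = 0;    /* (2) clear saved state... */
            barrier();                       /* ...strictly before... */
            asm ("msr tpidrro_el0, xzr");    /* ...zeroing the register */
        }
    }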
Cc: <stable@vger.kernel.org>
Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate logical (shifted register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (3 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (2 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate data-processing (1 source) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate add/subtract (shifted register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate move wide (immediate) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate bitfield instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate add/subtract (immediate) instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate load/store pair instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate load/store (register offset)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate conditional branch (immediate)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate unconditional branch (register)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Introduce function to generate compare & branch (immediate)
instructions.
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The global register current_stack_pointer holds the current stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The current soft_restart() and setup_restart implementations incorrectly
assume that the compiler will not spill/fill values to/from the stack.
However this assumption seems to be wrong, as revealed by the disassembly
of the currently existing code (v3.16) built with Linaro GCC 4.9-2014.05.
ffffffc000085224 <soft_restart>:
ffffffc000085224: a9be7bfd stp x29, x30, [sp,#-32]!
ffffffc000085228: 910003fd mov x29, sp
ffffffc00008522c: f9000fa0 str x0, [x29,#24]
ffffffc000085230: 94003d21 bl ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234: 94003b33 bl ffffffc000093f00 <flush_cache_all>
ffffffc000085238: 94003dfa bl ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c: 94003b31 bl ffffffc000093f00 <flush_cache_all>
ffffffc000085240: b0003321 adrp x1, ffffffc0006ea000 <reset_devices>
ffffffc000085244: f9400fa0 ldr x0, [x29,#24] ----> spilled addr
ffffffc000085248: f942fc22 ldr x2, [x1,#1528] ----> global memstart_addr
ffffffc00008524c: f0000061 adrp x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250: 91290021 add x1, x1, #0xa40
ffffffc000085254: 8b010041 add x1, x2, x1
ffffffc000085258: d2c00802 mov x2, #0x4000000000 // #274877906944
ffffffc00008525c: 8b020021 add x1, x1, x2
ffffffc000085260: d63f0020 blr x1
...
Here the compiler generates memory accesses after the cache is disabled,
loading stale values for the spilled value and global variable. As we cannot
control when the compiler will access memory we must rewrite the
functions in assembly to stash values we need in registers prior to
disabling the cache, avoiding the use of memory.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If we cannot relocate the kernel Image to its preferred offset of base of DRAM
plus TEXT_OFFSET, instead relocate it to the lowest available 2 MB boundary plus
TEXT_OFFSET. We may lose a bit of memory at the low end, but we can still
proceed normally otherwise.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The static memory footprint of a kernel Image at boot is larger than the
Image file itself. Things like .bss data and initial page tables are allocated
statically but populated dynamically so their content is not contained in the
Image file.
However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.
Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In certain cases the cpu-release-addr of a CPU may not fall in the
linear mapping (e.g. when the kernel is loaded above this address due to
the presence of other images in memory). This is problematic for the
spin-table code as it assumes that it can trivially convert a
cpu-release-addr to a valid VA in the linear map.
This patch modifies the spin-table code to use a temporary cached
mapping to write to a given cpu-release-addr, enabling us to support
addresses regardless of whether they are covered by the linear mapping.
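The write path then follows this pattern (a simplified sketch; error
handling omitted):

    /* works whether or not the address is in the linear map */
    release_addr = ioremap_cache(cpu_release_addr[cpu],
                                 sizeof(*release_addr));
    writeq_relaxed(__pa(secondary_holding_pen), release_addr);
    __flush_dcache_area((__force void *)release_addr,
                        sizeof(*release_addr));
    sev();                       /* wake the waiting secondary */
    iounmap(release_addr);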
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ardb: added (__force void *) cast]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
VIPT caches are non-aliasing if the index is derived from address bits that
are always equal between VA and PA. Classifying these as aliasing results in
unnecessary flushing which may hurt performance.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This adds helper functions and #defines to <asm/cachetype.h> to read the
line size and the number of sets from the level 1 instruction cache.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arm64 interrupt migration code on cpu offline calls
irqchip.irq_set_affinity() with the argument force=true. Originally
this argument had no effect because it was not used by any interrupt
chip driver and there was no semantics defined.
This changed with commit 01f8fa4f01 ("genirq: Allow forcing cpu
affinity of interrupts") which made the force argument useful to route
interrupts to not yet online cpus without checking the target cpu
against the cpu online mask. The following commit ffde1de640
("irqchip: gic: Support forced affinity setting") implemented this for
the GIC interrupt controller.
As a consequence the cpu offline irq migration fails if CPU0 is
offlined, because CPU0 is still set in the affinity mask and the
validation against the cpu online mask is skipped due to the force
argument being true. The following first_cpu(mask) selection always
selects CPU0 as the target.
Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced
irq_set_affinity") intended to fix the above mentioned issue but
introduced another issue where affinity can be migrated to a wrong
CPU due to the unconditional copy of cpu_online_mask.
As with arm, solve the issue by calling irq_set_affinity() with
force=false from the CPU offline irq migration code, so the GIC driver
validates the affinity mask against the CPU online mask and therefore
removes CPU0 from the possible target candidates. Also revert the
changes done in commit 601c942176d8 as they are no longer needed.
Tested on Juno platform.
Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced
irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
All the arm64 irqchip drivers have been converted to handle_domain_irq,
making it possible to remove the handle_IRQ stub entirely.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-26-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
In order to limit code duplication, convert the architecture specific
handle_IRQ to use the generic __handle_domain_irq function.
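After the conversion, the arch stub is essentially a one-line wrapper:

    void handle_IRQ(unsigned int irq, struct pt_regs *regs)
    {
        __handle_domain_irq(NULL, irq, false, regs);
    }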
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
It turns out that vendors are relying on the format of /proc/cpuinfo,
and we've even spotted out-of-tree hacks attempting to make it look
identical to the format used by arch/arm/. That means we can't afford to
churn this interface in mainline, so revert the recent reformatting of
the file for arm64 pending discussions on the list to find out what
people actually want.
This reverts commit d7a49086f2.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now arm64 defers reloading FPSIMD state, but this optimization also
introduces a bug after the cpu resumes from low power mode.
After the cpu has been powered off, software needs to set the cpu's
fpsimd_last_state to NULL, so that a reload of the FPSIMD state is
forced for the thread. Otherwise there is a chance that both conditions
for skipping the reload hold: the task's fpsimd_state.cpu field contains
the id of the current cpu, and the cpu's fpsimd_last_state per-cpu
variable points to the task's fpsimd_state, so the kernel will skip
reloading the context on return to userland.
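The fix amounts to invalidating the per-cpu pointer when the cpu comes
back, roughly:

    /* in the fpsimd CPU PM notifier */
    case CPU_PM_EXIT:
        if (current->mm)
            set_thread_flag(TIF_FOREIGN_FPSTATE);
        this_cpu_write(fpsimd_last_state, NULL);
        break;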
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leo Yan <leoy@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The current perf_regs code relies on sp and pc sitting just off the end
of the pt_regs->regs array. This is ugly and fragile, so this patch
checks for these registers explicitly and returns the appropriate field.
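A sketch of the explicit check (compat handling omitted):

    u64 perf_reg_value(struct pt_regs *regs, int idx)
    {
        if (idx == PERF_REG_ARM64_SP)
            return regs->sp;
        if (idx == PERF_REG_ARM64_PC)
            return regs->pc;
        return regs->regs[idx];
    }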
Acked-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
copy_{to,from}_user return the number of bytes remaining on failure, not
an error code.
This patch returns -EFAULT when the copy operation didn't complete,
rather than expose the number of bytes not copied directly to userspace.
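Callers therefore follow the usual pattern of converting a non-zero
remainder into an error code:

    if (copy_to_user(ubuf, &kbuf, sizeof(kbuf)))
        return -EFAULT;    /* some bytes were not copied */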
Signed-off-by: Will Deacon <will.deacon@arm.com>
I'm not sure what I was on when I wrote this, but when iterating over
the hardware watchpoint array (hbp_watch_array), our index is off by
ARM_MAX_BRP, so we walk off the end of our thread_struct...
... except, a dodgy condition in the loop means that it never executes
at all (bp cannot be NULL).
This patch fixes the code so that we remove the bp check and use the
correct index for accessing the watchpoint structures.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Remove an unused local variable from head.S. It seems this was never
used, even from the initial commit
9703d9d7f7 (arm64: Kernel booting and
initialisation), and is a leftover from a previous implementation
of __calc_phys_offset.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Pull x86 fixes from Ingo Molnar:
"A couple of EFI fixes, plus misc fixes all around the map"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
efi/arm64: Store Runtime Services revision
firmware: Do not use WARN_ON(!spin_is_locked())
x86_32, entry: Clean up sysenter_badsys declaration
x86/doc: Fix the 'tlb_single_page_flush_ceiling' sysconfig path
x86/mm: Fix sparse 'tlb_single_page_flush_ceiling' warning and make the variable read-mostly
x86/mm: Fix RCU splat from new TLB tracepoints
"efi" global data structure contains "runtime_version" field which must
be assigned in order to use it later in Runtime Services virtual calls
(virt_efi_* functions).
Before this patch "runtime_version" was unassigned (0), so each
Runtime Service virtual call that checks revision would fail.
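The fix is a one-line assignment performed while the system table is
still accessible, along the lines of:

    efi.runtime_version = efi.systab->hdr.revision;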
Signed-off-by: Semen Protsenko <semen.protsenko@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
For some reason, the audit patches didn't make it out of -next this
merge window, so revert our temporary hack and let the audit guys deal
with fixing up -next.
This reverts commit 2a8f45b040.
Signed-off-by: Will Deacon <will.deacon@arm.com>
UEFI provides its own method for marking regions to reserve, via the
memory map which is also used to initialise memblock. So when using the
UEFI memory map, ignore any memreserve entries present in the DT.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
the embedded EFI stub is executed in place. The EFI stub relocates the
Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
the kernel proper.
In AArch64, PC relative symbol references are emitted using adrp/add or
adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
separate :lo12: relocation. This implicitly assumes that the code will
always be executed at the same relative offset with respect to a 4 kB
boundary, or the references will point to the wrong address.
This means we should link the kernel at a 4 kB aligned base address in
order to remain compatible with the base address the UEFI loader uses
when doing the initial load of Image. So update the code that generates
TEXT_OFFSET to choose a multiple of 4 kB.
At the same time, update the code so it chooses from the interval [0..2MB)
as the author originally intended.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This removes an unfortunately placed semi-colon resulting in all instruction
caches being classified as AIVIVT.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull IPI tracepoints for ARM from Steven Rostedt:
"Nicolas Pitre added generic tracepoints for tracing IPIs and updated
the arm and arm64 architectures. It required some minor updates to
the generic tracepoint system, so it had to wait for me to implement
them"
* tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ARM64: add IPI tracepoints
ARM: add IPI tracepoints
tracepoint: add generic tracepoint definitions for IPI tracing
tracing: Do not do anything special with tracepoint_string when tracing is disabled
Pull arch signal handling cleanup from Richard Weinberger:
"This patch series moves all remaining archs to the get_signal(),
signal_setup_done() and sigsp() functions.
Currently these archs use open coded variants of the said functions.
Further, unused parameters get removed from get_signal_to_deliver(),
tracehook_signal_handler() and signal_delivered().
At the end of the day we save around 500 lines of code."
* 'signal-cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (43 commits)
powerpc: Use sigsp()
openrisc: Use sigsp()
mn10300: Use sigsp()
mips: Use sigsp()
microblaze: Use sigsp()
metag: Use sigsp()
m68k: Use sigsp()
m32r: Use sigsp()
hexagon: Use sigsp()
frv: Use sigsp()
cris: Use sigsp()
c6x: Use sigsp()
blackfin: Use sigsp()
avr32: Use sigsp()
arm64: Use sigsp()
arc: Use sigsp()
sas_ss_flags: Remove nested ternary if
Rip out get_signal_to_deliver()
Clean up signal_delivered()
tracehook_signal_handler: Remove sig, info, ka and regs
...
Merge more incoming from Andrew Morton:
"Two new syscalls:
memfd_create in "shm: add memfd_create() syscall"
kexec_file_load in "kexec: implementation of new syscall kexec_file_load"
And:
- Most (all?) of the rest of MM
- Lots of the usual misc bits
- fs/autofs4
- drivers/rtc
- fs/nilfs
- procfs
- fork.c, exec.c
- more in lib/
- rapidio
- Janitorial work in filesystems: fs/ufs, fs/reiserfs, fs/adfs,
fs/cramfs, fs/romfs, fs/qnx6.
- initrd/initramfs work
- "file sealing" and the memfd_create() syscall, in tmpfs
- add pci_zalloc_consistent, use it in lots of places
- MAINTAINERS maintenance
- kexec feature work"
* emailed patches from Andrew Morton <akpm@linux-foundation.org: (193 commits)
MAINTAINERS: update nomadik patterns
MAINTAINERS: update usb/gadget patterns
MAINTAINERS: update DMA BUFFER SHARING patterns
kexec: verify the signature of signed PE bzImage
kexec: support kexec/kdump on EFI systems
kexec: support for kexec on panic using new system call
kexec-bzImage64: support for loading bzImage using 64bit entry
kexec: load and relocate purgatory at kernel load time
purgatory: core purgatory functionality
purgatory/sha256: provide implementation of sha256 in purgaotory context
kexec: implementation of new syscall kexec_file_load
kexec: new syscall kexec_file_load() declaration
kexec: make kexec_segment user buffer pointer a union
resource: provide new functions to walk through resources
kexec: use common function for kimage_normal_alloc() and kimage_crash_alloc()
kexec: move segment verification code in a separate function
kexec: rename unusebale_pages to unusable_pages
kernel: build bin2c based on config option CONFIG_BUILD_BIN2C
bin2c: move bin2c in scripts/basic
shm: wait for pins to be released when sealing
...
The core mm code will provide a default gate area based on
FIXADDR_USER_START and FIXADDR_USER_END if
!defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR).
This default is only useful for ia64. arm64, ppc, s390, sh, tile, 64-bit
UML, and x86_32 have their own code just to disable it. arm, 32-bit UML,
and x86_64 have gate areas, but they have their own implementations.
This gets rid of the default and moves the code into ia64.
This should save some code on architectures without a gate area: it's now
possible to inline the gate_area functions in the default case.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle]
Acked-by: Richard Weinberger <richard@nod.at> [for um]
Acked-by: Will Deacon <will.deacon@arm.com> [for arm64]
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The strings used to list IPIs in /proc/interrupts are reused for tracing
purposes.
While at it, the code is slightly cleaned up so the ipi_types array
indices are no longer offset by IPI_RESCHEDULE whose value is 0 anyway.
Link: http://lkml.kernel.org/p/1406318733-26754-5-git-send-email-nicolas.pitre@linaro.org
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull second round of KVM changes from Paolo Bonzini:
"Here are the PPC and ARM changes for KVM, which I separated because
they had small conflicts (respectively within KVM documentation, and
with 3.16-rc changes). Since they were all within the subsystem, I
took care of them.
Stephen Rothwell reported some snags in PPC builds, but they are all
fixed now; the latest linux-next report was clean.
New features for ARM include:
- KVM VGIC v2 emulation on GICv3 hardware
- Big-Endian support for arm/arm64 (guest and host)
- Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
And for PPC:
- Book3S: Good number of LE host fixes, enable HV on LE
- Book3S HV: Add in-guest debug support
This release drops support for KVM on the PPC440. As a result, the
PPC merge removes more lines than it adds. :)
I also included an x86 change, since Davidlohr tied it to an
independent bug report and the reporter quickly provided a Tested-by;
there was no reason to wait for -rc2"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (122 commits)
KVM: Move more code under CONFIG_HAVE_KVM_IRQFD
KVM: nVMX: fix "acknowledge interrupt on exit" when APICv is in use
KVM: nVMX: Fix nested vmexit ack intr before load vmcs01
KVM: PPC: Enable IRQFD support for the XICS interrupt controller
KVM: Give IRQFD its own separate enabling Kconfig option
KVM: Move irq notifier implementation into eventfd.c
KVM: Move all accesses to kvm::irq_routing into irqchip.c
KVM: irqchip: Provide and use accessors for irq routing table
KVM: Don't keep reference to irq routing table in irqfd struct
KVM: PPC: drop duplicate tracepoint
arm64: KVM: fix 64bit CP15 VM access for 32bit guests
KVM: arm64: GICv3: mandate page-aligned GICV region
arm64: KVM: GICv3: move system register access to msr_s/mrs_s
KVM: PPC: PR: Handle FSCR feature deselects
KVM: PPC: HV: Remove generic instruction emulation
KVM: PPC: BOOKEHV: rename e500hv_spr to bookehv_spr
KVM: PPC: Remove DCR handling
KVM: PPC: Expose helper functions for data/inst faults
KVM: PPC: Separate loadstore emulation from priv emulation
KVM: PPC: Handle magic page in kvmppc_ld/st
...
Pull timer and time updates from Thomas Gleixner:
"A rather large update of timers, timekeeping & co
- Core timekeeping code is year-2038 safe now for 32bit machines.
Now we just need to fix all in kernel users and the gazillion of
user space interfaces which rely on timespec/timeval :)
- Better cache layout for the timekeeping internal data structures.
- Proper nanosecond based interfaces for in kernel users.
- Tree wide cleanup of code which wants nanoseconds but does hoops
and loops to convert back and forth from timespecs. Some of it
definitely belongs into the ugly code museum.
- Consolidation of the timekeeping interface zoo.
- A fast NMI safe accessor to clock monotonic for tracing. This is a
long standing request to support correlated user/kernel space
traces. With proper NTP frequency correction it's also suitable
for correlation of traces across separate machines.
- Checkpoint/restart support for timerfd.
- A few NOHZ[_FULL] improvements in the [hr]timer code.
- Code move from kernel to kernel/time of all time* related code.
- New clocksource/event drivers from the ARM universe. I'm really
impressed that despite an architected timer in the newer chips, SoC
manufacturers insist on inventing new and differently broken SoC
specific timers.
[ Ed. "Impressed"? I don't think that word means what you think it means ]
- Another round of code move from arch to drivers. Looks like most
of the legacy mess in ARM regarding timers is sorted out except for
a few obnoxious strongholds.
- The usual updates and fixlets all over the place"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (114 commits)
timekeeping: Fixup typo in update_vsyscall_old definition
clocksource: document some basic timekeeping concepts
timekeeping: Use cached ntp_tick_length when accumulating error
timekeeping: Rework frequency adjustments to work better w/ nohz
timekeeping: Minor fixup for timespec64->timespec assignment
ftrace: Provide trace clocks monotonic
timekeeping: Provide fast and NMI safe access to CLOCK_MONOTONIC
seqcount: Add raw_write_seqcount_latch()
seqcount: Provide raw_read_seqcount()
timekeeping: Use tk_read_base as argument for timekeeping_get_ns()
timekeeping: Create struct tk_read_base and use it in struct timekeeper
timekeeping: Restructure the timekeeper some more
clocksource: Get rid of cycle_last
clocksource: Move cycle_last validation to core code
clocksource: Make delta calculation a function
wireless: ath9k: Get rid of timespec conversions
drm: vmwgfx: Use nsec based interfaces
drm: i915: Use nsec based interfaces
timekeeping: Provide ktime_get_raw()
hangcheck-timer: Use ktime_get_ns()
...
- Fixes and code refactoring for stage2 kvm MMU unmap_range
- Support unmapping IPAs on deleting memslots for arm and arm64
- Support MMIO mappings in stage2 faults
- KVM VGIC v2 emulation on GICv3 hardware
- Big-Endian support for arm/arm64 (guest and host)
- Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
- Detect non page-aligned GICV regions and bail out (plugs guest-can-crash host bug)
Merge tag 'kvm-arm-for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm
KVM/ARM New features for 3.17 include:
- Fixes and code refactoring for stage2 kvm MMU unmap_range
- Support unmapping IPAs on deleting memslots for arm and arm64
- Support MMIO mappings in stage2 faults
- KVM VGIC v2 emulation on GICv3 hardware
- Big-Endian support for arm/arm64 (guest and host)
- Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
Conflicts:
virt/kvm/arm/vgic.c [last minute cherry-pick from 3.17 to 3.16]
Pull EFI changes from Ingo Molnar:
"Main changes in this cycle are:
- arm64 efi stub fixes, preservation of FP/SIMD registers across
firmware calls, and conversion of the EFI stub code into a static
library - Ard Biesheuvel
- Xen EFI support - Daniel Kiper
- Support for autoloading the efivars driver - Lee, Chun-Yi
- Use the PE/COFF headers in the x86 EFI boot stub to request that
the stub be loaded with CONFIG_PHYSICAL_ALIGN alignment - Michael
Brown
- Consolidate all the x86 EFI quirks into one file - Saurabh Tangri
- Additional error logging in x86 EFI boot stub - Ulf Winkelvos
- Support loading initrd above 4G in EFI boot stub - Yinghai Lu
- EFI reboot patches for ACPI hardware reduced platforms"
* 'x86-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (31 commits)
efi/arm64: Handle missing virtual mapping for UEFI System Table
arch/x86/xen: Silence compiler warnings
xen: Silence compiler warnings
x86/efi: Request desired alignment via the PE/COFF headers
x86/efi: Add better error logging to EFI boot stub
efi: Autoload efivars
efi: Update stale locking comment for struct efivars
arch/x86: Remove efi_set_rtc_mmss()
arch/x86: Replace plain strings with constants
xen: Put EFI machinery in place
xen: Define EFI related stuff
arch/x86: Remove redundant set_bit(EFI_MEMMAP) call
arch/x86: Remove redundant set_bit(EFI_SYSTEM_TABLES) call
efi: Introduce EFI_PARAVIRT flag
arch/x86: Do not access EFI memory map if it is not available
efi: Use early_mem*() instead of early_io*()
arch/ia64: Define early_memunmap()
x86/reboot: Add EFI reboot quirk for ACPI Hardware Reduced flag
efi/reboot: Allow powering off machines using EFI
efi/reboot: Add generic wrapper around EfiResetSystem()
...
Changes include:
- Context tracking support (NO_HZ_FULL) which narrowly missed 3.16
- vDSO layout rework following Andy's work on x86
- TEXT_OFFSET fuzzing for bootloader testing
- /proc/cpuinfo tidy-up
- Preliminary work to support 48-bit virtual addresses, but this is
currently disabled until KVM has been ported to use it (the patches
do, however, bring some nice clean-up)
- Boot-time CPU sanity checks (especially useful on heterogeneous
systems)
- Support for syscall auditing
- Support for CC_STACKPROTECTOR
- defconfig updates
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Once again, Catalin's off on holiday and I'm looking after the arm64
tree. Please can you pull the following arm64 updates for 3.17?
Note that this branch also includes the new GICv3 driver (merged via a
stable tag from Jason's irqchip tree), since there is a fix for older
binutils on top.
Changes include:
- context tracking support (NO_HZ_FULL) which narrowly missed 3.16
- vDSO layout rework following Andy's work on x86
- TEXT_OFFSET fuzzing for bootloader testing
- /proc/cpuinfo tidy-up
- preliminary work to support 48-bit virtual addresses, but this is
currently disabled until KVM has been ported to use it (the patches
do, however, bring some nice clean-up)
- boot-time CPU sanity checks (especially useful on heterogeneous
systems)
- support for syscall auditing
- support for CC_STACKPROTECTOR
- defconfig updates"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (55 commits)
arm64: add newline to I-cache policy string
Revert "arm64: dmi: Add SMBIOS/DMI support"
arm64: fpsimd: fix a typo in fpsimd_save_partial_state ENDPROC
arm64: don't call break hooks for BRK exceptions from EL0
arm64: defconfig: enable devtmpfs mount option
arm64: vdso: fix build error when switching from LE to BE
arm64: defconfig: add virtio support for running as a kvm guest
arm64: gicv3: Allow GICv3 compilation with older binutils
arm64: fix soft lockup due to large tlb flush range
arm64/crypto: fix makefile rule for aes-glue-%.o
arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL
arm64: Fix barriers used for page table modifications
arm64: Add support for 48-bit VA space with 64KB page configuration
arm64: asm/pgtable.h pmd/pud definitions clean-up
arm64: Determine the vmalloc/vmemmap space at build time based on VA_BITS
arm64: Clean up the initial page table creation in head.S
arm64: Remove asm/pgtable-*level-types.h files
arm64: Remove asm/pgtable-*level-hwdef.h files
arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS
arm64: mm: Implement 4 levels of translation tables
...
The main thing is the changes to the ftrace function callback infrastructure. It's introducing a
way to allow different functions to call directly different trampolines
instead of all calling the same "mcount" one.
The only user of this for now is the function graph tracer, which always
had a different trampoline, but the function tracer trampoline was called
and did basically nothing, and then the function graph tracer trampoline
was called. The difference now, is that the function graph tracer
trampoline can be called directly if a function is only being traced by
the function graph trampoline. If function tracing is also happening on
the same function, the old way is still done.
The accounting for this takes up more memory when function graph tracing
is activated, as it needs to keep track of which functions it uses.
I have a new way that won't take as much memory, but it's not ready yet
for this merge window, and will have to wait for the next one.
Another big change was the removal of the ftrace_start/stop() calls that
were used by the suspend/resume code that stopped function tracing when
entering into suspend and resume paths. The stop of ftrace was done
because there was some function that would crash the system if one called
smp_processor_id()! The stop/start was a big hammer to solve the issue
at the time, which was when ftrace was first introduced into Linux.
Now ftrace has better infrastructure to debug such issues, and I found
the problem function and labeled it with "notrace" and function tracing
can now safely be activated all the way down into the guts of suspend
and resume.
Other changes include clean ups of uprobe code.
Clean up of the trace_seq() code.
And other various small fixes and clean ups to ftrace and tracing.
Merge tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
"This pull request has a lot of work done. The main thing is the
changes to the ftrace function callback infrastructure. It's
introducing a way to allow different functions to call directly
different trampolines instead of all calling the same "mcount" one.
The only user of this for now is the function graph tracer, which
always had a different trampoline, but the function tracer trampoline
was called and did basically nothing, and then the function graph
tracer trampoline was called. The difference now, is that the
function graph tracer trampoline can be called directly if a function
is only being traced by the function graph trampoline. If function
tracing is also happening on the same function, the old way is still
done.
The accounting for this takes up more memory when function graph
tracing is activated, as it needs to keep track of which functions it
uses. I have a new way that won't take as much memory, but it's not
ready yet for this merge window, and will have to wait for the next
one.
Another big change was the removal of the ftrace_start/stop() calls
that were used by the suspend/resume code that stopped function
tracing when entering into suspend and resume paths. The stop of
ftrace was done because there was some function that would crash the
system if one called smp_processor_id()! The stop/start was a big
hammer to solve the issue at the time, which was when ftrace was first
introduced into Linux. Now ftrace has better infrastructure to debug
such issues, and I found the problem function and labeled it with
"notrace" and function tracing can now safely be activated all the way
down into the guts of suspend and resume.
Other changes include clean ups of uprobe code, clean up of the
trace_seq() code, and other various small fixes and clean ups to
ftrace and tracing"
* tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (57 commits)
ftrace: Add warning if tramp hash does not match nr_trampolines
ftrace: Fix trampoline hash update check on rec->flags
ring-buffer: Use rb_page_size() instead of open coded head_page size
ftrace: Rename ftrace_ops field from trampolines to nr_trampolines
tracing: Convert local function_graph functions to static
ftrace: Do not copy old hash when resetting
tracing: let user specify tracing_thresh after selecting function_graph
ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()
tracing: Remove function_trace_stop and HAVE_FUNCTION_TRACE_MCOUNT_TEST
s390/ftrace: remove check of obsolete variable function_trace_stop
arm64, ftrace: Remove check of obsolete variable function_trace_stop
Blackfin: ftrace: Remove check of obsolete variable function_trace_stop
metag: ftrace: Remove check of obsolete variable function_trace_stop
microblaze: ftrace: Remove check of obsolete variable function_trace_stop
MIPS: ftrace: Remove check of obsolete variable function_trace_stop
parisc: ftrace: Remove check of obsolete variable function_trace_stop
sh: ftrace: Remove check of obsolete variable function_trace_stop
sparc64,ftrace: Remove check of obsolete variable function_trace_stop
tile: ftrace: Remove check of obsolete variable function_trace_stop
ftrace: x86: Remove check of obsolete variable function_trace_stop
...
Due to a missing newline in the I-cache policy detection log output,
it's possible to get some rather unfortunate output at boot time:
CPU1: Booted secondary processor
Detected VIPT I-cache on CPU1CPU2: Booted secondary processor
Detected VIPT I-cache on CPU2CPU3: Booted secondary processor
Detected VIPT I-cache on CPU3CPU4: Booted secondary processor
Detected PIPT I-cache on CPU4CPU5: Booted secondary processor
Detected PIPT I-cache on CPU5Brought up 6 CPUs
SMP: Total of 6 processors activated.
This patch adds the missing newline to the format string, cleaning up
the output.
Fixes: 59ccc0d41b ("arm64: cachetype: report weakest cache policy")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This reverts commit a28e3f4b90.
Ard and Yi Li report that this patch is broken by design, so revert it
and let them sort it out for 3.18 instead.
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit 190f1ca85d ("arm64: add support for kernel mode NEON in interrupt
context") introduced a typing error in fpsimd_save_partial_state ENDPROC.
This patch fixes the typing error.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: byungchul.park <byungchul.park@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Our break hooks are used to handle brk exceptions from kgdb (and potentially
kprobes if that code ever resurfaces), so don't bother calling them if
the BRK exception comes from userspace.
This prevents userspace from trapping to a kdb shell on systems where
kgdb is enabled and active.
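A minimal sketch of the idea (siginfo handling abbreviated; treat the exact function names and placement as assumptions based on the description above):

    static int brk_handler(unsigned long addr, unsigned int esr,
                           struct pt_regs *regs)
    {
            siginfo_t info = {
                    .si_signo = SIGTRAP,
                    .si_code  = TRAP_BRKPT,
                    .si_addr  = (void __user *)instruction_pointer(regs),
            };

            if (user_mode(regs)) {
                    /* BRK from EL0: deliver SIGTRAP, never run kernel break hooks */
                    force_sig_info(SIGTRAP, &info, current);
            } else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
                    pr_warn("Unexpected kernel BRK exception at EL1\n");
                    return -EFAULT;
            }

            return 0;
    }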
Cc: <stable@vger.kernel.org>
Reported-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Building a kernel with CPU_BIG_ENDIAN fails if there are stale objects
from a !CPU_BIG_ENDIAN build. Due to a missing FORCE prerequisite on an
if_changed rule in the VDSO Makefile, we attempt to link a stale LE
object into the new BE kernel.
According to Documentation/kbuild/makefiles.txt, FORCE is required for
if_changed rules and forgetting it is a common mistake, so fix it by
'Forcing' the build of vdso. This patch fixes build errors like these:
arch/arm64/kernel/vdso/note.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/note.o
arch/arm64/kernel/vdso/sigreturn.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/sigreturn.o
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
GICv3 introduces new system registers accessible with the full msr/mrs
syntax (e.g. mrs x0, Sop0_op1_CRn_CRm_op2). However, only recent
binutils understand the new syntax. This patch introduces msr_s/mrs_s
assembly macros which generate the equivalent instructions above and
converts the existing GICv3 code (both drivers/irqchip/ and
arch/arm64/kernel/).
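The trick, roughly (a sketch; the exact bit positions are recalled from the patch and should be treated as illustrative), is to encode the register operand numerically and emit the raw instruction word via .inst, which any assembler accepts:

    /* encode op0-op2/CRn/CRm into the operand bits of a raw mrs/msr */
    #define sys_reg(op0, op1, crn, crm, op2) \
            ((((op0) - 2) << 19) | ((op1) << 16) | ((crn) << 12) | \
             ((crm) << 8) | ((op2) << 5))

    asm(
    "       .macro  mrs_s, rt, sreg\n"
    "       .inst   0xd5300000|(\\sreg)|(\\rt)\n"   /* raw mrs encoding */
    "       .endm\n"
    "       .macro  msr_s, sreg, rt\n"
    "       .inst   0xd5100000|(\\sreg)|(\\rt)\n"   /* raw msr encoding */
    "       .endm\n"
    );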
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
* tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux:
irqchip: gic-v3: Initial support for GICv3
irqchip: gic: Move some bits of GICv2 to a library-type file
Conflicts:
arch/arm64/Kconfig
This is a temporary patch to be able to compile the kernel in linux-next
where the audit_syscall_* API has been changed. To be reverted once the
proper arm64 fix can be applied.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The members of the new struct are the ones required for the new NMI-safe
accessor to clock monotonic. In order to reuse the existing
timekeeping code and to make the update of the fast NMI-safe
timekeepers a simple memcpy, use the struct for the timekeeper as well
and convert all users.
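Conceptually (a hedged sketch; the member list and the tk_fast_mono naming are assumptions based on the description), the fast-timekeeper update then reduces to two struct copies guarded by a seqcount latch:

    struct tk_read_base {
            struct clocksource      *clock;
            cycle_t                 cycle_last;
            u32                     mult;
            u32                     shift;
            u64                     xtime_nsec;
            ktime_t                 base_mono;
    };

    static void update_fast_timekeeper(struct tk_read_base *tkr)
    {
            struct tk_read_base *base = tk_fast_mono.base;  /* assumed */

            raw_write_seqcount_latch(&tk_fast_mono.seq);
            base[0] = *tkr;         /* a plain struct copy, i.e. a memcpy */
            raw_write_seqcount_latch(&tk_fast_mono.seq);
            base[1] = *tkr;
    }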
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
cycle_last was added to the clocksource to support the TSC
validation. We moved that to the core code, so we can get rid of the
extra copy.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
This patch allows support for 3 levels of page tables with 64KB page
configuration allowing 48-bit VA space. The pgd is no longer a full
PAGE_SIZE (PTRS_PER_PGD is 64) and (swapper|idmap)_pg_dir are not fully
populated (pgd_alloc falls back to kzalloc).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
This patch adds a create_table_entry macro which is used to populate pgd
and pud entries, also reducing the number of arguments for
create_pgd_entry.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
Rather than having several Kconfig options, define int
ARM64_PGTABLE_LEVELS which will be also useful in converting some of the
pgtable macros.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
This patch implements 4 levels of translation tables, since 3 levels
of page tables with 4KB pages cannot support the 40-bit physical address
space described in [1], due to the following issue.
The restriction is that the kernel logical memory map with 4KB pages
and 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the
RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel
fails to create a mapping for this region in the map_mem function,
since __phys_to_virt for this region overflows the address space.
If an SoC design follows [1], more than 32GB of RAM would be placed
starting at 544GB; even a 64GB system is supposed to use the region from
544GB to 576GB for only 32GB of RAM. The natural solution is to enable
4 levels of page tables to avoid hacking __virt_to_phys and
__phys_to_virt. However, it is recommended that 4 levels of page tables
be enabled only if the memory map is too sparse or there is around
512GB of RAM.
References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C
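The overflow is easy to see with a little arithmetic (a sketch in comment form; constants taken from the description above):

    /*
     * With 4KB pages and 3 levels, the linear map spans 2^38 bytes (256GB):
     * PAGE_OFFSET = 0xffffffc000000000 .. 0xffffffffffffffff, and
     *
     *   __phys_to_virt(x) = (x - PHYS_OFFSET) + PAGE_OFFSET
     *
     * For RAM at 544GB..1024GB (PHYS_OFFSET = 544GB, i.e. a 480GB span),
     * the last byte would need
     *
     *   0xffffffc000000000 + 480GB  >  2^64 - 1
     *
     * since 480GB exceeds the 256GB window, so the virtual address wraps
     * and map_mem cannot create the mapping.
     */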
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
The early_ioremap_init() function already handles fixmap pte
initialisation, so upgrade this to cover all of pud/pmd/pte and remove
one page from swapper_pg_dir.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform: things like serial
numbers, physical layout of the ports, build configuration data, and the like.
This has been tested with the dmidecode and lshw tools.
Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If we cannot resolve the virtual address of the UEFI System Table, its
physical offset must be missing from the virtual memory map, and there
is really no point in proceeding with installing the virtual memory map
and the runtime services dispatch table. So back out gracefully.
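Sketching the bail-out (efi_lookup_mapped_addr is the generic helper; the surrounding error handling is an assumption):

    /* resolve the System Table's physical address via the virtual memmap */
    efi.systab = (void *)efi_lookup_mapped_addr(efi_system_table);
    if (!efi.systab) {
            pr_err("Failed to remap EFI System Table -- buggy firmware?\n");
            return -1;      /* back out: don't install runtime services */
    }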
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
This patch changes both x86 and arm64 efistub implementations
from #including shared .c files under drivers/firmware/efi to
building shared code as a static library.
The x86 code uses a stub built into the boot executable which
uncompresses the kernel at boot time. In this case, the library is
linked into the decompressor.
In the arm64 case, the stub is part of the kernel proper so the library
is linked into the kernel proper as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.
arm64 was broken anyway, as it had an ifdef testing
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST which is only set if
the arch supports the code (which it obviously did not), and
it was testing a non-existent ftrace_trace_stop instead of
function_trace_stop.
Link: http://lkml.kernel.org/r/20140627124421.GP26276@arm.com
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently reading /proc/cpuinfo will result in information being read
out of the MIDR_EL1 of the current CPU, and the information is not
associated with any particular logical CPU number.
This is problematic for systems with heterogeneous CPUs (i.e.
big.LITTLE) where MIDR fields will vary across CPUs, and the output will
differ depending on the executing CPU.
This patch reorganises the code responsible for /proc/cpuinfo to print
information per-cpu. In the process, we perform several cleanups:
* Property names are coerced to lower-case (to match "processor" as per
glibc's expectations).
* Property names are simplified and made to match the MIDR field names.
* Revision is changed to hex as with every other field.
* The meaningless Architecture property is removed.
* The ripe-for-abuse Machine field is removed.
The features field (a human-readable representation of the hwcaps)
remains printed once, as this is expected to remain in use as the
globally supported CPU features. To enable the possibility of adding
per-cpu HW feature information later, this is printed before any
CPU-specific information.
Comments are added to guide userspace developers in the right direction
(using the hwcaps provided in auxval). Hopefully where userspace
applications parse /proc/cpuinfo rather than using the readily available
hwcaps, they limit themselves to reading said first line.
If CPU features differ from each other, the previously installed sanity
checks will give us some advance notice with warnings and
TAINT_CPU_OUT_OF_SPEC. If we are lucky, we will never see such systems.
Rework will be required in many places to support such systems anyway.
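A condensed sketch of the per-cpu loop (field names assumed to follow the arm64 cpuinfo code; the MIDR accessor macros are the standard asm/cputype.h ones):

    static int c_show(struct seq_file *m, void *v)
    {
            int i;

            for_each_online_cpu(i) {
                    /* MIDR recorded per-cpu at boot, not read from the executing CPU */
                    u32 midr = per_cpu(cpu_data, i).reg_midr;

                    seq_printf(m, "processor\t: %d\n", i);
                    seq_printf(m, "CPU implementer\t: 0x%02x\n",
                               MIDR_IMPLEMENTOR(midr));
                    seq_printf(m, "CPU variant\t: 0x%x\n", MIDR_VARIANT(midr));
                    seq_printf(m, "CPU part\t: 0x%03x\n", MIDR_PARTNUM(midr));
                    seq_printf(m, "CPU revision\t: %d\n", MIDR_REVISION(midr));
            }
            return 0;
    }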
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: remove machine_name as it is no longer reported]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Unexpected variation in certain system register values across CPUs is an
indicator of potential problems with a system. The kernel expects CPUs
to be mostly identical in terms of supported features, even in systems
with heterogeneous CPUs, with uniform instruction set support being
critical for the correct operation of userspace.
To help detect issues early where hardware violates the expectations of
the kernel, this patch adds simple runtime sanity checks on important ID
registers in the bring up path of each CPU.
Where CPUs are fundamentally mismatched, set TAINT_CPU_OUT_OF_SPEC.
Given that the kernel assumes CPUs are identical feature wise, let's not
pretend that we expect such configurations to work. Supporting such
configurations would require massive rework, and hopefully they will
never exist.
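A minimal sketch of one such check (the helper name is hypothetical; the taint flag is the one named above):

    static void check_reg(const char *name, u64 boot_val, u64 cur_val)
    {
            if (boot_val == cur_val)
                    return;

            pr_warn("Unexpected variation in %s: boot CPU 0x%016llx, this CPU 0x%016llx\n",
                    name, boot_val, cur_val);
            add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
    }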
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In big.LITTLE systems, the I-cache policy may differ across CPUs, and
thus we must always meet the most stringent maintenance requirements of
any I-cache in the system when performing maintenance to ensure
correctness. Unfortunately this requirement is not met as we always look
at the current CPU's cache type register to determine the maintenance
requirements.
This patch causes the I-cache policy of all CPUs to be taken into
account for icache_is_aliasing and icache_is_aivivt. If any I-cache in
the system is aliasing or AIVIVT, the respective function will return
true. At boot each CPU may set flags to identify that at least one
I-cache in the system is aliasing and/or AIVIVT.
The now unused and potentially misleading icache_policy function is
removed.
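The accessors then reduce to flag tests (a sketch; flag and variable names as best recalled from the arm64 cachetype code):

    #define ICACHEF_ALIASING        0
    #define ICACHEF_AIVIVT          1
    extern unsigned long __icache_flags;

    /* true if *any* CPU brought up so far has an aliasing I-cache */
    static inline int icache_is_aliasing(void)
    {
            return test_bit(ICACHEF_ALIASING, &__icache_flags);
    }

    static inline int icache_is_aivivt(void)
    {
            return test_bit(ICACHEF_AIVIVT, &__icache_flags);
    }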
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Several kernel subsystems need to know details about CPU system register
values, sometimes for CPUs other than that they are executing on. Rather
than hard-coding system register accesses and cross-calls for these
cases, this patch adds logic to record various system register values at
boot-time. This may be used for feature reporting, firmware bug
detection, etc.
Separate hooks are added for the boot and hotplug paths to enable
one-time initialisation and cold/warm boot value mismatch detection in
later patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Suspend init function must be marked as __init, since it is not needed
after the kernel has booted. This patch moves the cpu_suspend_init()
function to the __init section.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
PSCI init functions must be marked as __init so that they are freed
by the kernel upon boot.
This patch marks the PSCI init functions as such since they need not
be persistent in the kernel address space after the kernel has booted.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
PSCI CPU operations have to be enabled on UP kernels so that calls
like e.g. cpu_suspend can be made functional on UP too.
This patch reworks the PSCI CPU operations so that they can be
enabled on UP systems.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Andy pointed out that binutils generates additional sections in the vdso
image (e.g. section string table) which, if our .text section gets big
enough, could cross a page boundary and end up screwing up the location
where the kernel expects to put the data page.
This patch solves the issue in the same manner as x86_32, by moving the
data page before the code pages.
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
_install_special_mapping replaces install_special_mapping and removes
the need to detect special VMA in arch_vma_name.
This patch moves the vdso and compat vectors page code over to the new
API.
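For the compat vectors page, the new API looks roughly like this (vm_special_mapping is the generic structure; the arm64 variable names here are assumptions):

    static struct page *vectors_pagelist[1];

    static const struct vm_special_mapping compat_vectors_spec = {
            .name   = "[vectors]",  /* the VMA carries its own name now */
            .pages  = vectors_pagelist,
    };

    /* in the mmap setup path; no arch_vma_name() hook required */
    vma = _install_special_mapping(mm, AARCH32_VECTORS_BASE, PAGE_SIZE,
                                   VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
                                   &compat_vectors_spec);
    ret = PTR_ERR_OR_ZERO(vma);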
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The VDSO datapage doesn't need to be executable (no code there) or
CoW-able (the kernel writes the page, so a private copy is totally
useless).
This patch moves the datapage into its own VMA, identified as "[vvar]"
in /proc/<pid>/maps.
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch fixes the following checkpatch complaint by using pr_*
instead of printk.
WARNING: printk() should include KERN_ facility level
Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* Remove a duplicate copy of linux_banner from the arm64 EFI stub
which, apart from reducing code duplication, also stops the arm64 stub
being rebuilt every time make is invoked - Ard Biesheuvel
* Fix the EFI fdt code to not report a boot error if UEFI is
unavailable since booting without UEFI parameters is a valid use case
for non-UEFI platforms - Catalin Marinas
* Include a .bss section in the EFI boot stub PE/COFF headers to fix a
memory corruption bug - Michael Brown
Merge tag 'efi-urgent' into x86/urgent
* Remove a duplicate copy of linux_banner from the arm64 EFI stub
which, apart from reducing code duplication, also stops the arm64 stub
being rebuilt every time make is invoked - Ard Biesheuvel
* Fix the EFI fdt code to not report a boot error if UEFI is
unavailable since booting without UEFI parameters is a valid use case
for non-UEFI platforms - Catalin Marinas
* Include a .bss section in the EFI boot stub PE/COFF headers to fix a
memory corruption bug - Michael Brown
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers "only").
Also, we only save/restore them when MDSCR_EL1 has debug enabled,
or when we've flagged the debug registers as dirty. It means that
most of the time, we only save/restore MDSCR_EL1.
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.
Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__ in order to use that file from assembly code.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Introduce the GICv3 world switch code used to save/restore the
GICv3 context.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Move the GICv2 world switch code into its own file, and add the
necessary indirection to the arm64 switch code.
Also introduce a new type field to the vgic_params structure.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
In order to make way for the GICv3 registers, move the v2-specific
registers to their own structure.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text offset
(even zeroing it).
This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) upon a build of the
kernel. It is recommended that distribution kernels enable randomization
to test bootloaders such that any compliance issues can be fixed early.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently the kernel Image is stripped of everything past the initial
stack, and at runtime the memory is initialised and used by the kernel.
This makes the effective minimum memory footprint of the kernel larger
than the size of the loaded binary, though bootloaders have no mechanism
to identify how large this minimum memory footprint is. This makes it
difficult to choose safe locations to place both the kernel and other
binaries required at boot (DTB, initrd, etc), such that the kernel won't
clobber said binaries or other reserved memory during initialisation.
Additionally when big endian support was added the image load offset was
overlooked, and is currently of an arbitrary endianness, which makes it
difficult for bootloaders to make use of it. It seems that bootloaders
aren't respecting the image load offset at present anyway, and are
assuming that offset 0x80000 will always be correct.
This patch adds an effective image size to the kernel header which
describes the amount of memory from the start of the kernel Image binary
which the kernel expects to use before detecting memory and handling any
memory reservations. This can be used by bootloaders to choose suitable
locations to load the kernel and/or other binaries such that the kernel
will not clobber any memory unexpectedly. As before, memory reservations
are required to prevent the kernel from clobbering these locations
later.
Both the image load offset and the effective image size are forced to be
little-endian regardless of the native endianness of the kernel to
enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
which wish to make use of the load offset can inspect the effective
image size field for a non-zero value to determine if the offset is of a
known endianness. To enable software to determine the endianness of the
kernel as may be required for certain use-cases, a new flags field (also
little-endian) is added to the kernel header to export this information.
The documentation is updated to clarify these details. To discourage
future assumptions regarding the value of text_offset, the value at this
point in time is removed from the main flow of the documentation (though
kept as a compatibility note). Some minor formatting issues in the
documentation are also corrected.
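The resulting header layout, sketched as a C struct (field names illustrative; widths and endianness per the description above):

    struct arm64_image_header {
            __le32  code0;          /* executable code */
            __le32  code1;          /* executable code */
            __le64  text_offset;    /* image load offset, little endian */
            __le64  image_size;     /* effective Image size, little endian */
            __le64  flags;          /* bit 0: 0 = LE kernel, 1 = BE kernel */
            __le64  res2;           /* reserved */
            __le64  res3;           /* reserved */
            __le64  res4;           /* reserved */
            __le32  magic;          /* 0x644d5241, "ARM\x64" */
            __le32  res5;           /* reserved */
    };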
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
bootloaders may use portions of this memory below the kernel and we do
not parse the memory reservation list until after the MMU has been
enabled. As such we may clobber some memory a bootloader wishes to have
preserved.
To enable the use of all of this memory by bootloaders (when the
required memory reservations are communicated to the kernel) it is
necessary to move our initial page tables elsewhere. As we currently
have an effectively unbound requirement for memory at the end of the
kernel image for .bss, we can place the page tables here.
This patch moves the initial page table to the end of the kernel image,
after the BSS. As they do not consist of any initialised data they will
be stripped from the kernel Image as with the BSS. The BSS clearing
routine is updated to stop at __bss_stop rather than _end so as to not
clobber the page tables, and memory reservations made redundant by the
new organisation are removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't
span any page boundary, which simplifies the idmap and spares us
requiring an additional page table to map half of the function. In
keeping with other important requirements in architecture code, this
fact is undocumented.
Additionally, as the function consists of three instructions totalling
12 bytes with no literal pool data, a smaller alignment of 16 bytes
would be sufficient.
This patch reduces the alignment to 16 bytes and documents the
underlying reason for the alignment. This reduces the required alignment
of the entire .head.text section from 64 bytes to 16 bytes, though it
may still be aligned to a larger value depending on TEXT_OFFSET.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds auditing functions on entry to or exit from
every system call invocation.
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds __NR_* definitions to asm/unistd32.h, moves the
__NR_compat_* definitions to asm/unistd.h and removes all the explicit
unistd32.h includes apart from the one building the compat syscall
table. The aim is to have the compat __NR_* definitions available but
without colliding with the native syscall definitions (required by
lib/compat_audit.c to avoid duplicating the audit header files between
native and compat).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Make calls to ct_user_enter when the kernel is exited
and ct_user_exit when the kernel is entered (in el0_da,
el0_ia, el0_svc, el0_irq and all of the "error" paths).
These macros expand to function calls which will only work
properly if el0_sync and related code has been rearranged
(in a previous patch of this series).
The calls to ct_user_exit are made after hw debugging has been
enabled (enable_dbg_and_irq).
The call to ct_user_enter is made at the beginning of the
kernel_exit macro.
This patch is based on earlier work by Kevin Hilman.
Save/restore optimizations were also done by Kevin.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To implement the context tracker properly on arm64,
a function call needs to be made after debugging and
interrupts are turned on, but before the lr is changed
to point to ret_to_user(). If the function call
is made after the lr is changed the function will not
return to the correct place.
For similar reasons, defer the setting of x0 so that
it doesn't need to be saved around the function call
(save far_el1 in x26 temporarily instead).
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
arm64 currently lacks support for -fstack-protector. Add
functionality similar to arm's to detect stack corruption.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Create cpu topology based on MPIDR. When hardware sets MPIDR to sane
values, this method will always work. Therefore it should also work well
as the fallback method. [1]
When we have multiple processing elements in the system, we create
the cpu topology by mapping each affinity level (from lowest to highest)
to threads (if they exist), cores, and clusters.
[1] http://www.spinics.net/lists/arm-kernel/msg317445.html
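A sketch of the mapping (the MPIDR accessors are the standard asm/cputype.h macros; the topology structure follows the generic ARM code):

    void store_cpu_topology(unsigned int cpuid)
    {
            struct cpu_topology *t = &cpu_topology[cpuid];
            u64 mpidr = read_cpuid_mpidr();

            if (mpidr & MPIDR_MT_BITMASK) {
                    /* multithreaded cores: Aff0 = thread, higher levels shift up */
                    t->thread_id  = MPIDR_AFFINITY_LEVEL(mpidr, 0);
                    t->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 1);
                    t->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 2);
            } else {
                    t->thread_id  = -1;
                    t->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 0);
                    t->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);
            }
    }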
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Zi Shen Lim <zlim@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The Generic Interrupt Controller (version 3) offers services that are
similar to GICv2, with a number of additional features:
- Affinity routing based on the CPU MPIDR (ARE)
- System register for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)
This patch adds preliminary support for GICv3 with ARE and SRE,
non-secure mode only. It relies on higher exception levels to grant ARE
and SRE access.
Support for LPI and ITS will be added at a later time.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
In order to move from the #include "../../../xxxxx.c" anti-pattern used
by both the x86 and arm64 versions of the stub to a static library
linked into either the kernel proper (arm64) or a separate boot
executable (x86), there is some preparatory work required.
This patch does the following:
- move forward declarations of functions shared between the arch
specific and the generic parts of the stub to include/linux/efi.h
- move forward declarations of functions shared between various .c files
of the generic stub code to a new local header file called "efistub.h"
- add #includes to all .c files which were formerly relying on the
#includor to include the correct header files
- remove all static modifiers from functions which will need to be
externally visible once we move to a static library
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
This moves definitions depended upon both by code under arch/arm64/boot
and under drivers/firmware/efi to <asm/efi.h>. This is in preparation of
turning the stub code under drivers/firmware/efi into a static library.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
As EFI_ERROR is not a UEFI result code but a local invention only
intended to allow get_dram_base() to signal failure, we should not use
it elsewhere.
Replace with EFI_LOAD_ERROR.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
According to the UEFI spec section 2.3.6.4, the use of FP/SIMD
instructions is allowed, and should adhere to the AAPCS64 calling
convention, which states that 'only the bottom 64 bits of each value
stored in registers v8-v15 need to be preserved' (section 5.1.2).
This applies equally to UEFI Runtime Services called by the kernel, so
make sure the FP/SIMD register file is preserved in this case. We do this
by enabling the wrappers for UEFI Runtime Services (CONFIG_EFI_RUNTIME_WRAPPERS)
and inserting calls to kernel_neon_begin() and kernel_neon_end() into
these wrappers.
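The wrapper shape, sketched (the exact macro in asm/efi.h may differ; kernel_neon_begin/end are the real API named above):

    #define efi_call_virt(f, ...)                                   \
    ({                                                              \
            efi_status_t __s;                                       \
                                                                    \
            kernel_neon_begin();    /* preserve FP/SIMD registers */\
            __s = efi.systab->runtime->f(__VA_ARGS__);              \
            kernel_neon_end();                                      \
            __s;                                                    \
    })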
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
The shared efistub code for ARM and arm64 contains a local copy of
linux_banner, allowing it to be referenced from separate executables
such as the ARM decompressor. However, this introduces a dependency on
generated header files, causing unnecessary rebuilds of the stub itself
and, in case of arm64, vmlinux which contains it.
On arm64, the copy is not actually needed since we can reference the
original symbol directly, and as it turns out, there may be better ways
to deal with this for ARM as well, so let's remove it from the shared
code. If it still needs to be reintroduced for ARM later, it should live
under arch/arm anyway and not in shared code.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
The CurrentEL system register reports the Current Exception Level
of the CPU. It doesn't say anything about the stack handling, and
yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h.
It works by chance because PSR_MODE_EL2t happens to match the right
bits, but that's otherwise a very bad idea. Just check for the EL
value instead.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently, the prstatus note in a core file of an aarch32 process has
an empty register set. As a result, aarch32 core files created by a v8
kernel are not very useful.
It happens because the compat_gpr_get and compat_gpr_set functions can
copy register values to/from either kbuf or ubuf. The ELF core file
collection function fill_thread_core_info calls compat_gpr_get
with kbuf set and ubuf set to 0, but the current compat_gpr_get and
compat_gpr_set functions only handle the copy to/from ubuf.
The fix is to handle kbuf and ubuf as two separate cases, in a similar
way to other functions such as user_regset_copyout and user_regset_copyin.
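In compat_gpr_get, that amounts to the familiar regset pattern (a short sketch; local variable names assumed):

    if (kbuf) {
            /* in-kernel caller, e.g. ELF core dump collection */
            memcpy(kbuf, &reg, sizeof(reg));
            kbuf += sizeof(reg);
    } else {
            /* userspace caller, e.g. ptrace */
            if (copy_to_user(ubuf, &reg, sizeof(reg)))
                    return -EFAULT;
            ubuf += sizeof(reg);
    }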
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Our compat PTRACE_POKEUSR implementation simply passes the user data to
regset_copy_from_user after some simple range checking. Unfortunately,
the data in question has already been copied to the kernel stack by this
point, so the subsequent access_ok check fails and the ptrace request
returns -EFAULT. This causes problems tracing fork() with older versions
of strace.
This patch briefly changes the fs to KERNEL_DS, so that the access_ok
check passes even with a kernel address.
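The pattern is the usual temporary fs override (a sketch; the regset call is abbreviated and its argument list assumed):

    mm_segment_t old_fs = get_fs();

    set_fs(KERNEL_DS);      /* data sits on the kernel stack, widen access_ok() */
    ret = copy_regset_from_user(child, view, regset, 0, count, kdata);
    set_fs(old_fs);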
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The value of ESR has already been stored in x1 and should be passed
directly to the do_sp_pc_abort function; "MOV x1, x25" is an extra
operation and causes do_sp_pc_abort to get the wrong value of ESR.
Signed-off-by: ChiaHao <andy.jhshiu@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
- Optimised assembly string/memory routines (based on the AArch64 Cortex
Strings library contributed to glibc but re-licensed under GPLv2)
- Optimised crypto algorithms making use of the ARMv8 crypto extensions
(together with kernel API for using FPSIMD instructions in interrupt
context)
- Ftrace support
- CPU topology parsing from DT
- ESR_EL1 (Exception Syndrome Register) exposed to user space signal
handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
- 1GB section linear mapping if applicable
- Barriers usage clean-up
- Default pgprot clean-up
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next
Pull arm64 updates from Catalin Marinas:
- Optimised assembly string/memory routines (based on the AArch64
Cortex Strings library contributed to glibc but re-licensed under
GPLv2)
- Optimised crypto algorithms making use of the ARMv8 crypto extensions
(together with kernel API for using FPSIMD instructions in interrupt
context)
- Ftrace support
- CPU topology parsing from DT
- ESR_EL1 (Exception Syndrome Register) exposed to user space signal
handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
- 1GB section linear mapping if applicable
- Barriers usage clean-up
- Default pgprot clean-up
Conflicts as per Catalin.
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits)
arm64: kernel: initialize broadcast hrtimer based clock event device
arm64: ftrace: Add system call tracepoint
arm64: ftrace: Add CALLER_ADDRx macros
arm64: ftrace: Add dynamic ftrace support
arm64: Add ftrace support
ftrace: Add arm64 support to recordmcount
arm64: Add 'notrace' attribute to unwind_frame() for ftrace
arm64: add __ASSEMBLY__ in asm/insn.h
arm64: Fix linker script entry point
arm64: lib: Implement optimized string length routines
arm64: lib: Implement optimized string compare routines
arm64: lib: Implement optimized memcmp routine
arm64: lib: Implement optimized memset routine
arm64: lib: Implement optimized memmove routine
arm64: lib: Implement optimized memcpy routine
arm64: defconfig: enable a few more common/useful options in defconfig
ftrace: Make CALLER_ADDRx macros more generic
arm64: Fix deadlock scenario with smp_send_stop()
arm64: Fix machine_shutdown() definition
arm64: Support arch_irq_work_raise() via self IPIs
...
Pull ARM64 EFI update from Peter Anvin:
"By agreement with the ARM64 EFI maintainers, we have agreed to make
-tip the upstream for all EFI patches. That is why this patchset
comes from me :)
This patchset enables EFI stub support for ARM64, like we already have
on x86"
* 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
arm64: efi: only attempt efi map setup if booting via EFI
efi/arm64: ignore dtb= when UEFI SecureBoot is enabled
doc: arm64: add description of EFI stub support
arm64: efi: add EFI stub
doc: arm: add UEFI support documentation
arm64: add EFI runtime services
efi: Add shared FDT related functions for ARM/ARM64
arm64: Add function to create identity mappings
efi: add helper function to get UEFI params from FDT
doc: efi-stub.txt updates for ARM
lib: add fdt_empty_tree.c
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm into next
Pull KVM updates from Paolo Bonzini:
"At over 200 commits, covering almost all supported architectures, this
was a pretty active cycle for KVM. Changes include:
- a lot of s390 changes: optimizations, support for migration, GDB
support and more
- ARM changes are pretty small: support for the PSCI 0.2 hypercall
interface on both the guest and the host (the latter acked by
Catalin)
- initial POWER8 and little-endian host support
- support for running u-boot on embedded POWER targets
- pretty large changes to MIPS too, completing the userspace
interface and improving the handling of virtualized timer hardware
- for x86, a larger set of changes is scheduled for 3.17. Still, we
have a few emulator bugfixes and support for running nested
fully-virtualized Xen guests (para-virtualized Xen guests have
always worked). And some optimizations too.
The only missing architecture here is ia64. It's not a coincidence
that support for KVM on ia64 is scheduled for removal in 3.17"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (203 commits)
KVM: add missing cleanup_srcu_struct
KVM: PPC: Book3S PR: Rework SLB switching code
KVM: PPC: Book3S PR: Use SLB entry 0
KVM: PPC: Book3S HV: Fix machine check delivery to guest
KVM: PPC: Book3S HV: Work around POWER8 performance monitor bugs
KVM: PPC: Book3S HV: Make sure we don't miss dirty pages
KVM: PPC: Book3S HV: Fix dirty map for hugepages
KVM: PPC: Book3S HV: Put huge-page HPTEs in rmap chain for base address
KVM: PPC: Book3S HV: Fix check for running inside guest in global_invalidates()
KVM: PPC: Book3S: Move KVM_REG_PPC_WORT to an unused register number
KVM: PPC: Book3S: Add ONE_REG register names that were missed
KVM: PPC: Add CAP to indicate hcall fixes
KVM: PPC: MPIC: Reset IRQ source private members
KVM: PPC: Graciously fail broken LE hypercalls
PPC: ePAPR: Fix hypercall on LE guest
KVM: PPC: BOOK3S: Remove open coded make_dsisr in alignment handler
KVM: PPC: BOOK3S: Always use the saved DAR value
PPC: KVM: Make NX bit available with magic page
KVM: PPC: Disable NX for old magic page using guests
KVM: PPC: BOOK3S: HV: Add mixed page-size support for guest
...
Merge tag 'tty-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty into next
Pull tty/serial driver updates from Greg KH:
"Here is the big tty / serial driver pull request for 3.16-rc1.
A variety of different serial driver fixes and updates and additions,
nothing huge, and no real major core tty changes at all.
All have been in linux-next for a while"
* tag 'tty-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (84 commits)
Revert "serial: imx: remove the DMA wait queue"
serial: kgdb_nmi: Improve console integration with KDB I/O
serial: kgdb_nmi: Switch from tasklets to real timers
serial: kgdb_nmi: Use container_of() to locate private data
serial: cpm_uart: No LF conversion in put_poll_char()
serial: sirf: Fix compilation failure
console: Remove superfluous readonly check
console: Use explicit pointer type for vc_uni_pagedir* fields
vgacon: Fix & cleanup refcounting
ARM: tty: Move HVC DCC assembly to arch/arm
tty/hvc/hvc_console: Fix wakeup of HVC thread on hvc_kick()
drivers/tty/n_hdlc.c: replace kmalloc/memset by kzalloc
vt: emulate 8- and 24-bit colour codes.
printk/of_serial: fix serial console cessation part way through boot.
serial: 8250_dma: check the result of TX buffer mapping
serial: uart: add hw flow control support configuration
tty/serial: at91: add interrupts for modem control lines
tty/serial: at91: use mctrl_gpio helpers
tty/serial: Add GPIOLIB helpers for controlling modem lines
ARM: at91: gpio: implement get_direction
...
On platforms implementing CPU power management, the CPUidle subsystem
can allow CPUs to enter idle states where the local timer logic is lost on power
down. To keep the software timers functional the kernel relies on an
always-on broadcast timer to be present in the platform to relay the
interrupt signalling the timer expiries.
For platforms implementing CPU core gating that do not implement an always-on
HW timer or implement it in a broken way, this patch adds code to initialize
the kernel hrtimer based clock event device upon boot (which can be chosen as
tick broadcast device by the kernel).
It relies on a dynamically chosen CPU to be always powered-up. This CPU then
relays the timer interrupt to CPUs in deep-idle states through its HW local
timer device.
Having an always-on CPU has implications for the platform's power
management capabilities and makes CPUidle suboptimal, since at least one
CPU is always kept in a shallow idle state by the kernel to relay timer
interrupts; it does, however, leave the kernel with a functional system
and some working power management capabilities.
The hrtimer based clock event device is unconditionally registered, but
has the lowest possible rating such that any broadcast-capable HW clock
event device present will be chosen in preference as the tick broadcast
device.
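A sketch of the registration point, assuming the generic helper
tick_setup_hrtimer_broadcast() and the arm64 time_init() call site:

    void __init time_init(void)
    {
            /* ... architected timer initialisation ... */
            tick_setup_hrtimer_broadcast();  /* lowest-rating fallback */
    }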
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch allows system call entry or exit to be traced as ftrace events,
i.e. sys_enter_*/sys_exit_*, if CONFIG_FTRACE_SYSCALLS is enabled.
Those events appear and can be controlled under
${sysfs}/tracing/events/syscalls/
Please note that we can't trace compat system calls here because
AArch32 mode does not share the same syscall table with AArch64.
Just define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS in order to avoid unexpected
results (bogus syscalls reported, or even a hang).
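A sketch of the opt-out, following the pattern the core syscall tracer
checks for in the architecture's asm/ftrace.h:

    #define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS

    static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
    {
            /* AArch32 tasks use a different syscall table: don't trace. */
            return is_compat_task();
    }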
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
CALLER_ADDRx returns the caller's address at the specified level in the
call stack. These macros are used by several tracers, such as irqsoff and
preemptoff. Strangely, however, they are referenced even without FTRACE.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch allows "dynamic ftrace" if CONFIG_DYNAMIC_FTRACE is enabled.
Here we can turn on and off tracing dynamically per-function base.
On arm64, this is done by patching single branch instruction to _mcount()
inserted by gcc -pg option. The branch is replaced to NOP initially at
kernel start up, and later on, NOP to branch to ftrace_caller() when
enabled or branch to NOP when disabled.
Please note that ftrace_caller() is a counterpart of _mcount() in case of
'static' ftrace.
More details on architecture specific requirements are described in
Documentation/trace/ftrace-design.txt.
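A sketch of the validate-then-patch step, assuming the arm64 insn helpers
aarch64_insn_read() and aarch64_insn_patch_text_nosync(); the function
name here is hypothetical:

    static int ftrace_modify_code_sketch(unsigned long pc, u32 old, u32 new)
    {
            u32 replaced;

            /* Only patch if the call site still holds what we expect. */
            if (aarch64_insn_read((void *)pc, &replaced))
                    return -EFAULT;
            if (replaced != old)
                    return -EINVAL;

            /* Write the new instruction and synchronise the I-side. */
            return aarch64_insn_patch_text_nosync((void *)pc, new);
    }

Enabling generates old = NOP and new = a branch-and-link to
ftrace_caller(); disabling swaps the two.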
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
This patch implements the arm64-specific parts needed to support function
tracers, such as function (CONFIG_FUNCTION_TRACER), function_graph
(CONFIG_FUNCTION_GRAPH_TRACER) and the function profiler
(CONFIG_FUNCTION_PROFILER).
With the 'function' tracer, all the functions in the kernel are traced with
timestamps in ${sysfs}/tracing/trace. If the function_graph tracer is
specified, a call graph is generated.
The kernel must be compiled with the -pg option so that _mcount() is inserted
at the beginning of functions. This function is called on every function
entry as long as tracing is enabled.
In addition, the function_graph tracer also needs to be able to probe a
function's exit. ftrace_graph_caller() and return_to_handler do this by
faking the link register's value to intercept the function's return path.
More details on architecture specific requirements are described in
Documentation/trace/ftrace-design.txt.
Reviewed-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
walk_stackframe() calls unwind_frame(), and if walk_stackframe() is
"notrace", unwind_frame() should also be "notrace".
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change the arm64 linker script ENTRY() command to define _text as the
kernel entry point.
The arm64 boot protocol specifies that the kernel must be entered at the
beginning of the kernel image. The existing ENTRY() command defined the
symbol stext as the entry point, which emitted an incorrect entry point,
but would not cause a runtime error because the existing entry code
immediately jumps to stext.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Booting a kernel with CONFIG_EFI enabled on a non-EFI system caused
an oops with the current UEFI support code.
Add the required test to prevent this.
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
If one process calls sys_reboot and that process then stops other
CPUs while those CPUs are within a spin_lock() region we can
potentially encounter a deadlock scenario like below.
CPU 0                   CPU 1
-----                   -----
                        spin_lock(my_lock)
smp_send_stop()
 <send IPI>             handle_IPI()
                         disable_preemption/irqs
                          while(1);
 <PREEMPT>
spin_lock(my_lock) <--- Waits forever
We shouldn't attempt to run any other tasks after we send a stop IPI to
a CPU, so disable preemption so that this task runs to completion. We use
local_irq_disable() here for cross-arch consistency with x86.
Based-on-work-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch ports most of commit 19ab428f4b "ARM: 7759/1: decouple CPU
offlining from reboot/shutdown" by Stephen Warren from arch/arm to
arch/arm64.
machine_shutdown() is a hook for kexec. Add a comment saying so, since
it isn't obvious from the function name.
Halt, power-off, and restart have different requirements regarding stopping
secondary CPUs than kexec has. The former simply require the secondary
CPUs to be quiesced somehow, whereas kexec requires them to be
completely non-operational, so that no matter where the kexec target
images are written in RAM, they won't influence operation of the
secondary CPUs, which could happen if the CPUs were still executing some
kind of spin loop. To this end, modify machine_halt, power_off, and
restart to call smp_send_stop() directly, rather than calling
machine_shutdown().
In machine_shutdown(), replace the call to smp_send_stop() with a call
to disable_nonboot_cpus(). This completely disables all but one CPU,
thus satisfying the kexec requirements a couple paragraphs above.
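A sketch of the resulting split described above:

    /* kexec path: CPUs must be fully offline. */
    void machine_shutdown(void)
    {
            disable_nonboot_cpus();
    }

    /* reboot/halt/power-off paths: quiescing is enough. */
    void machine_restart(char *cmd)
    {
            smp_send_stop();
            /* ... hand control to firmware / reset hardware ... */
    }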
Signed-off-by: Arun KS <getarunks@gmail.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Support for arch_irq_work_raise() was missing from
arm64 (a prerequisite for FULL_NOHZ).
This patch is based on the arm32 patch ARM 7872/1.
commit bf18525fd7
Author: Stephen Boyd <sboyd@codeaurora.org>
Date: Tue Oct 29 20:32:56 2013 +0100
ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
By default, IRQ work is run from the tick interrupt (see
irq_work_run() in update_process_times()). When we're in full
NOHZ mode, restarting the tick requires the use of IRQ work and
if the only place we run IRQ work is in the tick interrupt we
have an unbreakable cycle. Implement arch_irq_work_raise() via
self IPIs to break this cycle and get the tick started again.
Note that we implement this via IPIs which are only available on
SMP builds. This shouldn't be a problem because full NOHZ is only
supported on SMP builds anyway.
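A minimal sketch of the self-IPI approach, assuming an IPI_IRQ_WORK
message wired into the SMP IPI demux (whose handler calls
irq_work_run() on receipt) and the smp_cross_call() dispatcher used
inside the arch SMP code:

    void arch_irq_work_raise(void)
    {
    #ifdef CONFIG_SMP
            smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    #endif
    }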
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add support for parsing the explicit topology bindings to discover the
topology of the system.
Since it is not currently clear how to map multi-level clusters for the
scheduler all leaf clusters are presented to the scheduler at the same
level. This should be enough to provide good support for current systems.
Signed-off-by: Mark Brown <broonie@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As a legacy of the way 32-bit ARM did things, the topology code uses a null
topology map by default and then overwrites it later, mapping cores with no
information to a cluster by themselves. In order to make it simpler to
reset things as part of recovering from parse failures in firmware
information, directly set this configuration on init. A core will always be
its own sibling, so there should be no risk of confusion with firmware
provided information.
Signed-off-by: Mark Brown <broonie@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm into upstream
FPSIMD register bank context switching and crypto algorithms
optimisations for arm64 from Ard Biesheuvel.
* tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
arm64: pull in <asm/simd.h> from asm-generic
arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
arm64/crypto: AES using ARMv8 Crypto Extensions
arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
arm64: add support for kernel mode NEON in interrupt context
arm64: defer reloading a task's FPSIMD state to userland resume
arm64: add abstractions for FPSIMD state manipulation
asm-generic: allow generic unaligned access if the arch supports it
Conflicts:
arch/arm64/include/asm/thread_info.h
PSCIv0.2 adds a new function called AFFINITY_INFO, which
can be used to query if a specified CPU has actually gone
offline. Calling this function via cpu_kill ensures that
a CPU has quiesced after a call to cpu_die. This helps
prevent the CPU from doing arbitrary bad things when data
or instructions are clobbered (as happens with kexec)
in the window between a CPU announcing that it is dead
and said CPU leaving the kernel.
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
The PSCIv0.2 spec defines standard values of function IDs
and introduces a few new functions. Detect version of PSCI
and appropriately select the right PSCI functions.
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Some kernel files may include both linux/compat.h and asm/compat.h directly
or indirectly. Since both header files contain is_compat_task() under
!CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail.
Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mountfs.c
(do_mountfs.c may read asm/compat.h via asm/ftrace.h once ftrace is
implemented).
So this patch proactively
1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h
2) replaces asm/compat.h with linux/compat.h in kernel/*.c,
but asm/compat.h is still necessary in ptrace.c and process.c because
they use is_compat_thread().
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As done in arm, this change makes it easy to confirm that we invoke the
syscall-related hooks, including the syscall tracepoint, audit and seccomp
(which will be implemented later), in the correct order; that is, operations
are undone on exit in the opposite order to which they were done on entry.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently syscall_trace() is called only for ptrace.
With additional TIF_xx flags defined, it is now called in all the cases
of audit, ftrace and seccomp in addition to ptrace.
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since mdscr_el1 is part of the debug register group, it is highly likely
to be trapped by a hypervisor to prevent virtual machines from debugging
(buggering?) each other. Unfortunately, this absolutely destroys our
performance, since we access the register on many of our low-level
fault handling paths to keep track of the various debug state machines.
This patch removes our dependency on mdscr_el1 in the case that debugging
is not being used. More specifically we:
- Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and
avoid disabling step in the MDSCR when we don't need to.
MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from
userspace.
- Ensure debug exceptions are re-enabled on *all* exception entry
paths, even the debug exception handling path (where we re-enable
exceptions after invoking the handler). Since we can now rely on
MDSCR_EL1.SS being cleared by the entry code, exception handlers can
usually enable debug immediately before enabling interrupts.
- Remove all debug exception unmasking from ret_to_user and
el1_preempt, since we will never get here with debug exceptions
masked.
This results in a slight change to kernel debug behaviour, where we now
step into interrupt handlers and data aborts from EL1 when debugging the
kernel, which is actually a useful thing to do. A side-effect of this is
that it *does* potentially prevent stepping off {break,watch}points when
there is a high-frequency interrupt source (e.g. a timer), so a debugger
would need to use either breakpoints or manually disable interrupts to
get around this issue.
With this patch applied, guest performance is restored under KVM when
debug register accesses are trapped (and we get a measurable performance
increase on the host on Cortex-A57 too).
Cc: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 01f8fa4f01d8("genirq: Allow forcing cpu affinity of interrupts")
enabled the forced irq_set_affinity which previously refused to route an
interrupt to an offline cpu.
Commit ffde1de64012("irqchip: Gic: Support forced affinity setting")
implements this force logic and disables the cpu online check for GIC
interrupt controller.
When __cpu_disable calls migrate_irqs, it disables the current cpu in
cpu_online_mask and uses forced irq_set_affinity to migrate the IRQs
away from the cpu but passes affinity mask with the cpu being offlined
also included in it.
When calling irq_set_affinity with force == true in a cpu hotplug path,
the caller must ensure that the cpu being offlined is not present in the
affinity mask or it may be selected as the target CPU, leading to the
interrupt not being migrated.
This patch uses cpu_online_mask when using forced irq_set_affinity so
that the IRQs are properly migrated away.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
set_cpu_boot_mode_flag is used to identify which exception levels are
encountered across the system by CPUs trying to enter the kernel. The
basic algorithm is: if a CPU is booting at EL2, it will set a flag at
an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
Otherwise, a flag is set at an offset of zero into the same cacheline.
This enables us to check that all CPUs booted at the same exception
level.
This cacheline is written with the stage-1 MMU off (that is, via a
strongly-ordered mapping) and will bypass any clean lines in the cache,
leading to potential coherence problems when the variable is later
checked via the normal, cacheable mapping of the kernel image.
This patch reworks the broken flushing code so that we:
(1) Use a DMB to order the strongly-ordered write of the cacheline
against the subsequent cache-maintenance operation (by-VA
operations only hazard against normal, cacheable accesses).
(2) Use a single dc ivac instruction to invalidate any clean lines
containing a stale copy of the line after it has been updated.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When calling our low-level barrier macros directly, we can often suffice
with more relaxed behaviour than the default "all accesses, full system"
option.
This patch updates the users of dsb() to specify the option which they
actually require.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The primary aim of this patchset is to remove the pgprot_default and
prot_sect_default global variables and rely strictly on predefined
values. The original goal was to be able to run SMP kernels on UP
hardware by not setting the Shareability bit. However, it is unlikely to
see UP ARMv8 hardware and even if we do, the Shareability bit is no
longer assumed to disable cacheable accesses.
A side effect is that the device mappings now have the Shareability
attribute set. The hardware, however, should ignore it since Device
accesses are always Outer Shareable.
Following the removal of the two global variables, there is some PROT_*
macro reshuffling and cleanup, including the __PAGE_* macros (replaced
by PAGE_*).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
This information is useful for instruction emulators to detect
read/write and access size without having to decode the faulting
instruction. The current patch exports it via sigcontext (struct
esr_context) and is only valid for SIGSEGV and SIGBUS.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch removes the aux_context structure (and the containing file)
to allow the placement of the _aarch64_ctx end magic based on the
context stored on the signal stack.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault
was caused by a write access and applications like Qemu rely on such
information being provided in sigcontext. This patch introduces the
ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly
in compat sigcontext.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The synchronisation with the boot thread already happens in __cpu_up()
via wait_for_completion_timeout(). In addition, __cpu_up() calls are
protected by the cpu_add_remove_lock mutex and already serialised.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The hardware provides the maximum cache line size in the system via the
CTR_EL0.CWG bits. This patch implements the cache_line_size() function
to read such information, together with a sanity check if the statically
defined L1_CACHE_BYTES is smaller than the hardware value.
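A sketch of the derivation: CTR_EL0.CWG (bits [27:24]) encodes log2 of
the maximum writeback granule in 4-byte words, so the line size is
4 << CWG bytes, with 0 meaning "not reported" (the function name below
is hypothetical):

    static inline int cache_line_size_sketch(void)
    {
            u32 cwg = (read_cpuid_cachetype() >> 24) & 0xf;

            /* Fall back to the compile-time value if CWG is 0. */
            return cwg ? 4 << cwg : L1_CACHE_BYTES;
    }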
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
This patch modifies kernel_neon_begin() and kernel_neon_end(), so
they may be called from any context. To address the case where only
a couple of registers are needed, kernel_neon_begin_partial(u32) is
introduced which takes as a parameter the number of bottom 'n' NEON
q-registers required. To mark the end of such a partial section, the
regular kernel_neon_end() should be used.
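Hypothetical usage, for an algorithm that only clobbers q0-q3:

    kernel_neon_begin_partial(4);   /* preserve only the bottom 4 q-regs */
    /* ... NEON computation using q0-q3 ... */
    kernel_neon_end();              /* regular end call */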
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
If a task gets scheduled out and back in again and nothing has touched
its FPSIMD state in the mean time, there is really no reason to reload
it from memory. Similarly, repeated calls to kernel_neon_begin() and
kernel_neon_end() will preserve and restore the FPSIMD state every time.
This patch defers the FPSIMD state restore to the last possible moment,
i.e., right before the task returns to userland. If a task does not return to
userland at all (for any reason), the existing FPSIMD state is preserved
and may be reused by the owning task if it gets scheduled in again on the
same CPU.
This patch adds two more functions to abstract away from straight FPSIMD
register file saves and restores:
- fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
- fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
There are two tacit assumptions in the FPSIMD handling code that will no longer
hold after the next patch that optimizes away some FPSIMD state restores:
- the FPSIMD registers of this CPU contain the userland FPSIMD state of
  task 'current';
- when switching to a task, its FPSIMD state will always be restored from
  memory.
This patch adds the following functions to abstract away from straight FPSIMD
register file saves and restores:
- fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
- fpsimd_update_current_state -> replace current's FPSIMD state
Where necessary, the signal handling and fork code are updated to use the above
wrappers instead of poking into the FPSIMD registers directly.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Recently, the default DMA ops have been changed to non-coherent for
alignment with 32-bit ARM platforms (and DT files). This patch adds bus
notifiers to be able to set the coherent DMA ops (with no cache
maintenance) for devices explicitly marked as coherent via the
"dma-coherent" DT property.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit d57c33c5da (add generic fixmap.h) added (among other
similar things) set_fixmap_io to deal with early ioremap of devices.
More recently, commit bf4b558eba (arm64: add early_ioremap support)
converted the arm64 earlyprintk to use set_fixmap_io. A side effect of
this conversion is that my virtual machines have stopped booting when
I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel.
Turns out that the new earlyprintk code doesn't care at all about
sub-page offsets, and just assumes that the earlyprintk device will
be page-aligned. Obviously, that doesn't play well with the above example.
Further investigation shows that set_fixmap_io uses __set_fixmap instead
of __set_fixmap_offset. A fix is to introduce a set_fixmap_offset_io that
uses the latter, and to remove the superfluous call to fix_to_virt
(which only returns the value that set_fixmap_io has already given us).
With this applied, my VMs are back in business. Tested on a Cortex-A57
platform with kvmtool as platform emulation.
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds PE/COFF header fields to the start of the kernel
Image so that it appears as an EFI application to UEFI firmware.
An EFI stub is included to allow direct booting of the kernel
Image.
Signed-off-by: Mark Salter <msalter@redhat.com>
[Add support in PE/COFF header for signed images]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
This patch adds EFI runtime support for arm64. This runtime support allows
the kernel to access various EFI runtime services provided by EFI firmware.
Things like reboot, real time clock, EFI boot variables, and others.
This functionality is supported for little endian kernels only. The UEFI
firmware standard specifies that the firmware be little endian. A future
patch is expected to add support for big endian kernels running with
little endian firmware.
Signed-off-by: Mark Salter <msalter@redhat.com>
[ Remove unnecessary cache/tlb maintenance. ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Clock providers should be initialized before clocksource_of_init();
otherwise, clocksource initialization can fail to get the clock.
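A sketch of the resulting ordering, assuming the common time_init() call
site:

    void __init time_init(void)
    {
            of_clk_init(NULL);       /* register DT clock providers first */
            clocksource_of_init();   /* clocksources can now get their clocks */
    }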
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Chanho Min <chanho.min@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Sending a SIGTRAP to a user task after execution of a BRK instruction at
EL0 is fundamental to the way in which software breakpoints work and
doesn't deserve a warning to be logged in dmesg. Whilst the warning can
be justified from EL1, do_debug_exception will already do the right thing,
so simply remove the code altogether.
Cc: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
Reported-by: Kyrylo Tkachov <kyrylo.tkachov@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Now that we have equivalent earlycon support, arm64's earlyprintk code
can be removed.
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull second set of arm64 updates from Catalin Marinas:
"A second pull request for this merging window, mainly with fixes and
docs clarification:
- Documentation clarification on CPU topology and booting
requirements
- Additional cache flushing during boot (needed in the presence of
external caches or under virtualisation)
- DMA range invalidation fix for non cache line aligned buffers
- Build failure fix with !COMPAT
- Kconfig update for STRICT_DEVMEM"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Fix DMA range invalidation for cache line unaligned buffers
arm64: Add missing Kconfig for CONFIG_STRICT_DEVMEM
arm64: fix !CONFIG_COMPAT build failures
Revert "arm64: virt: ensure visibility of __boot_cpu_mode"
arm64: Relax the kernel cache requirements for boot
arm64: Update the TCR_EL1 translation granule definitions for 16K pages
ARM: topology: Make it clear that all CPUs need to be described
Merge second patch-bomb from Andrew Morton:
- the rest of MM
- zram updates
- zswap updates
- exit
- procfs
- exec
- wait
- crash dump
- lib/idr
- rapidio
- adfs, affs, bfs, ufs
- cris
- Kconfig things
- initramfs
- small amount of IPC material
- percpu enhancements
- early ioremap support
- various other misc things
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (156 commits)
MAINTAINERS: update Intel C600 SAS driver maintainers
fs/ufs: remove unused ufs_super_block_third pointer
fs/ufs: remove unused ufs_super_block_second pointer
fs/ufs: remove unused ufs_super_block_first pointer
fs/ufs/super.c: add __init to init_inodecache()
doc/kernel-parameters.txt: add early_ioremap_debug
arm64: add early_ioremap support
arm64: initialize pgprot info earlier in boot
x86: use generic early_ioremap
mm: create generic early_ioremap() support
x86/mm: sparse warning fix for early_memremap
lglock: map to spinlock when !CONFIG_SMP
percpu: add preemption checks to __this_cpu ops
vmstat: use raw_cpu_ops to avoid false positives on preemption checks
slub: use raw_cpu_inc for incrementing statistics
net: replace __this_cpu_inc in route.c with raw_cpu_inc
modules: use raw_cpu_write for initialization of per cpu refcount.
mm: use raw_cpu ops for determining current NUMA node
percpu: add raw_cpu_ops
slub: fix leak of 'name' in sysfs_slab_add
...
Add support for early IO or memory mappings which are needed before the
normal ioremap() is usable. This also adds fixmap support for permanent
fixed mappings such as that used by the earlyprintk device register
region.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot
values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc.
The new fixmap and early_ioremap support also needs to use these macros
before paging_init() is called. This patch moves the init_mem_pgprot()
call out of paging_init() and into setup_arch() so that pgprot_default
gets initialized in time for fixmap and early_ioremap.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'cpu-hotplug-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull CPU hotplug notifiers registration fixes from Rafael Wysocki:
"The purpose of this single series of commits from Srivatsa S Bhat
(with a small piece from Gautham R Shenoy) touching multiple
subsystems that use CPU hotplug notifiers is to provide a way to
register them that will not lead to deadlocks with CPU online/offline
operations as described in the changelog of commit 93ae4f978c ("CPU
hotplug: Provide lockless versions of callback registration
functions").
The first three commits in the series introduce the API and document
it and the rest simply goes through the users of CPU hotplug notifiers
and converts them to using the new method"
* tag 'cpu-hotplug-3.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (52 commits)
net/iucv/iucv.c: Fix CPU hotplug callback registration
net/core/flow.c: Fix CPU hotplug callback registration
mm, zswap: Fix CPU hotplug callback registration
mm, vmstat: Fix CPU hotplug callback registration
profile: Fix CPU hotplug callback registration
trace, ring-buffer: Fix CPU hotplug callback registration
xen, balloon: Fix CPU hotplug callback registration
hwmon, via-cputemp: Fix CPU hotplug callback registration
hwmon, coretemp: Fix CPU hotplug callback registration
thermal, x86-pkg-temp: Fix CPU hotplug callback registration
octeon, watchdog: Fix CPU hotplug callback registration
oprofile, nmi-timer: Fix CPU hotplug callback registration
intel-idle: Fix CPU hotplug callback registration
clocksource, dummy-timer: Fix CPU hotplug callback registration
drivers/base/topology.c: Fix CPU hotplug callback registration
acpi-cpufreq: Fix CPU hotplug callback registration
zsmalloc: Fix CPU hotplug callback registration
scsi, fcoe: Fix CPU hotplug callback registration
scsi, bnx2fc: Fix CPU hotplug callback registration
scsi, bnx2i: Fix CPU hotplug callback registration
...
Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:
arch/arm64/kernel/perf_regs.c: In function ‘perf_reg_abi’:
arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function ‘is_compat_thread’
arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name ‘compat_uptr_t’
This is due to some recent arm64 perf commits with compat support:
commit 23c7d70d55:
ARM64: perf: add support for frame pointer unwinding in compat mode
commit 2ee0d7fd36:
ARM64: perf: add support for perf registers API
Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not
defined and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch
allows the arm64 kernel to build with and without CONFIG_COMPAT.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With system caches for the host OS or architected caches for guest OS we
cannot easily guarantee that there are no dirty or stale cache lines for
the areas of memory written by the kernel during boot with the MMU off
(therefore non-cacheable accesses).
This patch adds the necessary cache maintenance during boot and relaxes
the booting requirements.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull ARM64 updates from Catalin Marinas:
- KGDB support for arm64
- PCI I/O space extended to 16M (in preparation of PCIe support
patches)
- Dropping ZONE_DMA32 in favour of ZONE_DMA (we only need one for the
time being), together with swiotlb late initialisation to correctly
setup the bounce buffer
- DMA API cache maintenance support (not all ARMv8 platforms have
hardware cache coherency)
- Crypto extensions advertising via ELF_HWCAP2 for compat user space
- Perf support for dwarf unwinding in compat mode
- asm/tlb.h converted to the generic mmu_gather code
- asm-generic rwsem implementation
- Code clean-up
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (42 commits)
arm64: Remove pgprot_dmacoherent()
arm64: Support DMA_ATTR_WRITE_COMBINE
arm64: Implement custom mmap functions for dma mapping
arm64: Fix __range_ok macro
arm64: Fix duplicated Kconfig entries
arm64: mm: Route pmd thp functions through pte equivalents
arm64: rwsem: use asm-generic rwsem implementation
asm-generic: rwsem: de-PPCify rwsem.h
arm64: enable generic CPU feature modalias matching for this architecture
arm64: smp: make local symbol static
arm64: debug: make local symbols static
ARM64: perf: support dwarf unwinding in compat mode
ARM64: perf: add support for frame pointer unwinding in compat mode
ARM64: perf: add support for perf registers API
arm64: Add boot time configuration of Intermediate Physical Address size
arm64: Do not synchronise I and D caches for special ptes
arm64: Make DMA coherent and strongly ordered mappings not executable
arm64: barriers: add dmb barrier
arm64: topology: Implement basic CPU topology support
arm64: advertise ARMv8 extensions to 32-bit compat ELF binaries
...
Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:
  get_online_cpus();

  for_each_online_cpu(cpu)
          init_cpu(cpu);

  register_cpu_notifier(&foobar_cpu_notifier);

  put_online_cpus();
This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).
Instead, the correct and race-free way of performing the callback
registration is:
  cpu_notifier_register_begin();

  for_each_online_cpu(cpu)
          init_cpu(cpu);

  /* Note the use of the double underscored version of the API */
  __register_cpu_notifier(&foobar_cpu_notifier);

  cpu_notifier_register_done();
Fix the debug-monitors code in arm64 by using this latter form of callback
registration.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:
  get_online_cpus();

  for_each_online_cpu(cpu)
          init_cpu(cpu);

  register_cpu_notifier(&foobar_cpu_notifier);

  put_online_cpus();
This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).
Instead, the correct and race-free way of performing the callback
registration is:
  cpu_notifier_register_begin();

  for_each_online_cpu(cpu)
          init_cpu(cpu);

  /* Note the use of the double underscored version of the API */
  __register_cpu_notifier(&foobar_cpu_notifier);

  cpu_notifier_register_done();
Fix the hw-breakpoint code in arm64 by using this latter form of callback
registration.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Make smp_spin_table_cpu_postboot() static, because this function
is used only in this file.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Make local symbols static, because these are used only in this
file.
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When profiling a 32-bit application, user space callchain unwinding
using the frame pointer is performed in compat mode. The code is taken
from the AArch32 implementation and adapted to work on AArch64.
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch implements the functions required for the perf registers API,
allowing the perf tool to interface kernel register dumps with libunwind
in order to provide userspace backtracing.
Compat mode is also supported.
Only the general purpose user space registers are exported, i.e.:
PERF_REG_ARM_X0,
...
PERF_REG_ARM_X28,
PERF_REG_ARM_FP,
PERF_REG_ARM_LR,
PERF_REG_ARM_SP,
PERF_REG_ARM_PC
and not the PERF_REG_ARM_V* registers.
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add basic CPU topology support to arm64, based on the existing pre-v8
code and some work done by Mark Hambleton. This patch does not
implement any topology discovery support since that should be based on
information from firmware, it merely implements the scaffolding for
integration of topology support in the architecture.
No locking of the topology data is done since it is only modified during
CPU bringup with external serialisation from the SMP code.
The goal is to separate the architecture hookup for providing topology
information from the DT parsing in order to ease review and avoid
blocking the architecture code (which will be built on by other work)
with the DT code review by providing something simple and basic.
Following patches will implement support for interpreting topology
information from MPIDR and for parsing the DT topology bindings for ARM,
similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: removed CONFIG_CPU_TOPOLOGY, always on if SMP]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This adds support for advertising the presence of ARMv8 Crypto
Extensions in the AArch32 execution state to 32-bit ELF binaries
running in 32-bit compat mode under the arm64 kernel.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add support for the ELF auxv entry AT_HWCAP2 when running 32-bit
ELF binaries in compat mode.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
psci_init() is written to return err code if something goes wrong. However,
the single user, setup_arch(), doesn't care about it. Moreover, every error
path is supplied with a clear message which is enough for pleasant debugging.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since 652af89979 "arm64: factor out spin-table boot method", a psci
prefix has been introduced in the messages. We have a common pr_fmt, so
clean them up.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Remove some unnecessary bits that were apparently carried over from
another architecture's implementation:
- No need to get_page() the vdso text/data - these are part of the
kernel image.
- No need for ClearPageReserved on the vdso text.
- No need to vmap the first text page to check the ELF header - this
can be done through &vdso_start.
Also some minor cleanup:
- Use kcalloc for vdso_pagelist array allocation.
- Don't print on allocation failure, slab/slub will do that for us.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change the soft_restart() routine to call cpu_reset() at its identity mapped
physical address.
The cpu_reset() routine must be called at its identity mapped physical address
so that when the MMU is turned off the instruction pointer will be at the correct
location in physical memory.
Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch changes the idmap page table creation during boot to cover
the whole kernel image, allowing functions like cpu_reset() to be safely
called with the physical address.
This patch also simplifies the create_block_map asm macro to no longer
take an idmap argument and always use the phys/virt/end parameters. For
the idmap case, phys == virt.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The processor debug state PSTATE.D is only unmasked in the clear_os_lock
SMP call for secondary CPUs, so debug state is still masked in the normal
kernel context. With this patch, debug state is unmasked on secondary boot
instead, so that these CPUs run with debug exceptions enabled in the normal
kernel context. KGDB tests now pass on multicore systems.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add KGDB software step debugging support for EL1 debug
in AArch64 mode.
KGDB registers a step debug handler with the debug monitor. On receiving
the 'step' command from the GDB tool, the target enables software step
debugging and the step address is updated in the ELR. Software step
debugging is disabled when the 'continue' command is received.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add KGDB support for kernel debugging.
With this patch, basic KGDB debugging is possible. The GDB register
layout is updated, and the GDB tool can establish a connection with the
target and set/clear breakpoints.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add macros to enable and disable PSTATE.D for debugging. The macros
local_dbg_save and local_dbg_restore are moved to the irqflags.h file.
KGDB boot tests fail because PSTATE.D is masked; unmask it to support
debugging.
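A sketch of the two macros in the irqflags.h style (in the DAIF mask
immediate, #8 is the D bit):

    #define local_dbg_save(flags)                                     \
            do {                                                      \
                    typecheck(unsigned long, flags);                  \
                    asm volatile(                                     \
                    "mrs    %0, daif        // local_dbg_save\n"      \
                    "msr    daifset, #8"                              \
                    : "=r" (flags) : : "memory");                     \
            } while (0)

    #define local_dbg_restore(flags)                                  \
            do {                                                      \
                    typecheck(unsigned long, flags);                  \
                    asm volatile("msr daif, %0" : : "r" (flags) : "memory"); \
            } while (0)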
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The core idle loop now takes care of it.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: linux-pm@vger.kernel.org
Cc: linaro-kernel@lists.linaro.org
Link: http://lkml.kernel.org/n/tip-wk9vpc8dsn46s12pl602ljpo@git.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The frame PC value in the unwind code used to just take the saved LR
value and use that. That's incorrect as a stack trace, since it shows
the return path stack, not the call path stack.
In particular, it shows faulty information in case the bl is done as
the very last instruction of one label, since the return point will be
in the next label. That can easily be seen with tail calls to panic(),
which is marked __noreturn and thus doesn't have anything useful after it.
Easiest here is to just correct the unwind code and do a -4, to get the
actual call site for the backtrace instead of the return site.
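A minimal sketch of the idea (not the exact kernel diff): derive the
frame PC from the saved LR minus one instruction, so the trace points
at the call site rather than the return site:
	static unsigned long frame_pc_from_lr(unsigned long lr)
	{
		/* every AArch64 instruction is 4 bytes */
		return lr - 4;
	}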
Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Linux requires a number of atomic operations to provide full barrier
semantics, that is no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.
On arm64, these operations have been incorrectly implemented as follows:
// A, B, C are independent memory locations
<Access [A]>
// atomic_op (B)
1: ldaxr x0, [B] // Exclusive load with acquire
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
<Access [C]>
The assumption here being that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).
Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.
The simple way to fix this is to implement the same algorithm as ARMv7
using explicit barriers:
<Access [A]>
// atomic_op (B)
dmb ish // Full barrier
1: ldxr x0, [B] // Exclusive load
<op(B)>
stxr w1, x0, [B] // Exclusive store
cbnz w1, 1b
dmb ish // Full barrier
<Access [C]>
but this has the undesirable effect of introducing *two* full barrier
instructions. A better approach is actually the following, non-intuitive
sequence:
<Access [A]>
// atomic_op (B)
1: ldxr x0, [B] // Exclusive load
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
dmb ish // Full barrier
<Access [C]>
The simple observations here are:
- The dmb ensures that no subsequent accesses (e.g. the access to C)
can enter or pass the atomic sequence.
- The dmb also ensures that no prior accesses (e.g. the access to A)
can pass the atomic sequence.
- Therefore, no prior access can pass a subsequent access, or
vice-versa (i.e. A is strictly ordered before C).
- The stlxr ensures that no prior access can pass the store component
of the atomic operation.
The only tricky part remaining is the ordering between the ldxr and the
access to A, since the absence of the first dmb means that we're now
permitting re-ordering between the ldxr and any prior accesses.
From an (arbitrary) observer's point of view, there are two scenarios:
1. We have observed the ldxr. This means that if we perform a store to
[B], the ldxr will still return older data. If we can observe the
ldxr, then we can potentially observe the permitted re-ordering
with the access to A, which is clearly an issue when compared to
the dmb variant of the code. Thankfully, the exclusive monitor will
save us here since it will be cleared as a result of the store and
the ldxr will retry. Notice that any use of a later memory
observation to imply observation of the ldxr will also imply
observation of the access to A, since the stlxr/dmb ensure strict
ordering.
2. We have not observed the ldxr. This means we can perform a store
and influence the later ldxr. However, that doesn't actually tell
us anything about the access to [A], so we've not lost anything
here either when compared to the dmb variant.
This patch implements this solution for our barriered atomic operations,
ensuring that we satisfy the full barrier requirements where they are
needed.
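A minimal sketch of the resulting pattern for a full-barrier atomic
add (illustrative kernel-style inline asm; the real implementation
lives in the atomics headers):
	static inline int atomic_add_return_sketch(int i, int *counter)
	{
		int result;
		unsigned long tmp;
		asm volatile(
		"1:	ldxr	%w0, %2\n"		/* exclusive load */
		"	add	%w0, %w0, %w3\n"
		"	stlxr	%w1, %w0, %2\n"		/* store-release */
		"	cbnz	%w1, 1b\n"
		"	dmb	ish"			/* trailing full barrier */
		: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
		: "Ir" (i)
		: "memory");
		return result;
	}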
Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Update wall-to-monotonic fields in the VDSO data page
unconditionally. These are used to service CLOCK_MONOTONIC_COARSE,
which is not guarded by use_syscall.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When __kernel_clock_gettime is called with a CLOCK_MONOTONIC_COARSE or
CLOCK_REALTIME_COARSE clock id, it returns incorrectly to whatever the
caller has placed in x2 ("ret x2" to return from the fast path). Fix
this by saving x30/LR to x2 only in code that will call
__do_get_tspec, restoring x30 afterward, and using a plain "ret" to
return from the routine.
Also: while the resulting tv_nsec value for CLOCK_REALTIME and
CLOCK_MONOTONIC must be computed using intermediate values that are
left-shifted by cs_shift (x12, set by __do_get_tspec), the results for
coarse clocks should be calculated using unshifted values
(xtime_coarse_nsec is in units of actual nanoseconds). The current
code shifts intermediate values by x12 unconditionally, but x12 is
uninitialized when servicing a coarse clock. Fix this by setting x12
to 0 once we know we are dealing with a coarse clock id.
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Whilst the text segment for our VDSO is marked as PT_LOAD in the ELF
headers, it is mapped by the kernel and not actually subject to
demand-paging. ld doesn't realise this, and emits a p_align field of 64k
(the maximum supported page size), which conflicts with the load address
picked by the kernel on 4k systems, which will be 4k aligned. This
causes GDB to fail with "Failed to read a valid object file image from
memory" when attempting to load the VDSO.
This patch passes the -n option to ld, which prevents it from aligning
PT_LOAD segments to the maximum page size.
Cc: <stable@vger.kernel.org>
Reported-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Kyle McMartin <kyle@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull ARM64 patches from Catalin Marinas:
- Build fix with DMA_CMA enabled
- Introduction of PTE_WRITE to distinguish between writable but clean
and truly read-only pages
- FIQs enabling/disabling clean-up (they aren't used on arm64)
- CPU resume fix for the per-cpu offset restoring
- Code comment typos
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: mm: Introduce PTE_WRITE
arm64: mm: Remove PTE_BIT_FUNC macro
arm64: FIQs are unused
arm64: mm: fix the function name in comment of cpu_do_switch_mm
arm64: fix build error if DMA_CMA is enabled
arm64: kernel: fix per-cpu offset restore on resume
arm64: mm: fix the function name in comment of __flush_dcache_area
arm64: mm: use ubfm for dcache_line_size
So any FIQ handling is superfluous at the moment. The functions to
disable/enable FIQs are kept around in case someone needs them in the
future, but existing calling sites, including arch_cpu_idle_prepare(),
may go for now.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The introduction of the percpu offset optimisation through tpidr_el1 in
commit 7158627686f02319c50c8d9d78f75d4c8 ("arm64: percpu: implement
optimised pcpu access using tpidr_el1")
requires cpu_{suspend/resume} to restore the tpidr_el1 register upon resume
so that percpu variables can be addressed correctly when a CPU comes out
of reset from warm-boot.
This patch fixes cpu_{suspend}/{resume} tpidr_el1 restoration on resume, by
calling the set_my_cpu_offset C API, as it is done on primary and secondary
CPUs on cold boot, so that, even if the register used to store the percpu
offset is changed, the save and restore of general purpose registers does not
have to be updated.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull ARM64 updates from Catalin Marinas:
- CPU suspend support on top of PSCI (firmware Power State Coordination
Interface)
- jump label support
- CMA can now be enabled on arm64
- HWCAP bits for crypto and CRC32 extensions
- optimised percpu using tpidr_el1 register
- code cleanup
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (42 commits)
arm64: fix typo in entry.S
arm64: kernel: restore HW breakpoint registers in cpu_suspend
jump_label: use defined macros instead of hard-coding for better readability
arm64, jump label: optimize jump label implementation
arm64, jump label: detect %c support for ARM64
arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions
arm64: move encode_insn_immediate() from module.c to insn.c
arm64: introduce interfaces to hotpatch kernel and module code
arm64: introduce basic aarch64 instruction decoding helpers
arm64: dts: Reduce size of virtio block device for foundation model
arm64: Remove unused __data_loc variable
arm64: Enable CMA
arm64: Warn on NULL device structure for dma APIs
arm64: Add hwcaps for crypto and CRC32 extensions.
arm64: drop redundant macros from read_cpuid()
arm64: Remove outdated comment
arm64: cmpxchg: update macros to prevent warnings
arm64: support single-step and breakpoint handler hooks
ARM64: fix framepointer check in unwind_frame
ARM64: check stack pointer in get_wchan
...
Commit 64681787 (arm64: let the core code deal with preempt_count)
changed the code but left the comments unchanged; fix them.
Signed-off-by: Neil Zhang <zhangwm@marvell.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When a CPU resumes from low-power, it restores HW breakpoint and
watchpoint slots through a CPU PM notifier. Since we want to enable
debugging as early as possible in the resume path, the mdscr content
is restored along with the general purpose registers in the cpu_suspend
API and debug exceptions are reenabled when cpu_suspend returns. Since
the CPU PM notifier is run after a CPU has been resumed, we cannot
expect the HW breakpoint registers to contain sane values until the
notifier is run, since their content is unknown at reset; this means
that the CPU might run with debug exceptions enabled and mdscr restored,
but with HW breakpoint registers containing junk values that can
trigger spurious debug exceptions.
This patch fixes the current HW breakpoint restore by moving the HW
breakpoint register restoration into the cpu_suspend API, before debug
exceptions are enabled. This way, as soon as the cpu_suspend function
returns, the kernel can resume debugging with sane values in the HW
breakpoint registers.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Introduce aarch64_insn_gen_{nop|branch_imm}() helper functions, which
will be used to implement jump label on ARM64.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The encode_insn_immediate() function will be used by other
instruction-manipulation functions, so move it into insn.c and rename
it aarch64_insn_encode_immediate().
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Introduce three interfaces to patch kernel and module code:
aarch64_insn_patch_text_nosync():
patch code without synchronization, it's caller's responsibility
to synchronize all CPUs if needed.
aarch64_insn_patch_text_sync():
patch code and always synchronize with stop_machine()
aarch64_insn_patch_text():
patch code and synchronize with stop_machine() if needed
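A hedged usage sketch of these interfaces; patch_nop() is an
illustrative wrapper, not a kernel function:
	#include <asm/insn.h>
	static int patch_nop(void *addr)
	{
		u32 insn = aarch64_insn_gen_nop();
		/* one instruction; the helper picks the synchronization */
		return aarch64_insn_patch_text(&addr, &insn, 1);
	}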
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The __data_loc variable is an unused leftover from the 32-bit arm
implementation. Remove the variable and adjust the __mmap_switched
startup routine accordingly.
Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Advertise the optional cryptographic and CRC32 instructions to
user space where present. Several hwcap bits [3-7] are allocated.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
[bit 2 is taken now so use bits 3-7 instead]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Code referenced in the comment has moved to arch/arm64/kernel/cputable.c
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
AArch64 single-step and breakpoint debug exceptions will be used by
multiple debug frameworks such as kprobes and kgdb.
This patch implements the hooks for those frameworks to register
their own handlers for handling breakpoint and single-step events.
It also reworks the debug exception handler in entry.S (do_dbg) to
route the software breakpoint (BRK64) exception to do_debug_exception().
Signed-off-by: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We need at least 24 bytes above the frame pointer.
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
get_wchan() is lockless. The task may wake up at any time and change its
own stack; thus each next stack frame may be overwritten and filled with
random stuff.
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch implements the word-at-a-time interface for arm64 using the
same algorithm as ARM. We use the fls64 macro, which expands to a clz
instruction via a compiler builtin. Big-endian configurations make use
of the implementation from asm-generic.
With this implemented, we can replace our byte-at-a-time strnlen_user
and strncpy_from_user functions with the optimised generic versions.
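A minimal sketch of the underlying zero-byte test (illustrative, not
the kernel's exact word-at-a-time interface): the mask has the high
bit set for the first zero byte (and possibly for bytes above it), and
locating that bit with fls64() compiles down to a clz instruction:
	static inline unsigned long has_zero_byte(unsigned long word)
	{
		return (word - 0x0101010101010101UL) &
		       ~word & 0x8080808080808080UL;
	}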
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch implements optimised percpu variable accesses using the
el1 r/w thread register (tpidr_el1) along the same lines as arch/arm/.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add support for irq registration when the pmu interrupt is percpu.
Signed-off-by: Vinayak Kale <vkale@apm.com>
Signed-off-by: Tuan Phan <tphan@apm.com>
[will: tidied up cross-calling to pass &irq]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We currently try to emit .comment twice, once in STABS_DEBUG, and once
in the line immediately following it. As the two section definitions are
identical, the latter is redundant and can be dropped.
This patch drops the redundant .comment section definition.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 8f34a1da35 ("arm64: ptrace: use HW_BREAKPOINT_EMPTY type for
disabled breakpoints") fixed an issue with GDB trying to zero breakpoint
control registers. The problem there is that the arch hw_breakpoint code
will attempt to create a (disabled), execute breakpoint of length 0.
This will fail validation and report unexpected failure to GDB. To avoid
this, we treated disabled breakpoints as HW_BREAKPOINT_EMPTY, but that
seems to have broken with recent kernels, causing watchpoints to be
treated as TYPE_INST in the core code and returning ENOSPC for any
further breakpoints.
This patch fixes the problem by prioritising the `enable' field of the
breakpoint: if it is cleared, we simply update the perf_event_attr to
indicate that the thing is disabled and don't bother changing either the
type or the length. This reinforces the behaviour that the breakpoint
control register is essentially read-only apart from the enable bit
when disabling a breakpoint.
Cc: <stable@vger.kernel.org>
Reported-by: Aaron Liu <liucy214@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the required makefile and kconfig entries to enable PM
for arm64 systems.
The kernel relies on the cpu_{suspend}/{resume} infrastructure to
properly save the context for a CPU and put it to sleep, hence this
patch adds the config option required to enable the
cpu_{suspend}/{resume} API.
In order to rely on the CPU PM implementation for saving and restoring
CPU subsystems like the GIC and PMU, the arch Kconfig must also be
augmented to select the CONFIG_CPU_PM option when SUSPEND or CPU_IDLE
kernel implementations are selected.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
When CPU idle is enabled, the architectural idle call should go through
the idle subsystem to allow CPUs to enter idle states defined
by the platform CPU idle back-end operations.
This patch, mirroring other archs' behaviour, adds the CPU idle call to the
architectural arch_cpu_idle implementation for arm64.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
On platforms with power management capabilities, timers that are shut
down when a CPU enters deep C-states must be emulated using an always-on
timer and a timer IPI to relay the timer IRQ to target CPUs on an SMP
system.
This patch enables the generic clockevents broadcast infrastructure for
arm64, by providing the required Kconfig entries and adding the timer
IPI infrastructure.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
When a CPU is shut down, either through CPU idle or suspend to RAM, the
content of the HW breakpoint registers must be reset or restored to
proper values when the CPU resumes from low power states. This patch
adds debug register restore operations to the HW breakpoint control
function and implements a CPU PM notifier that restores the content of
the HW breakpoint registers, allowing proper suspend/resume operations.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Most of the code executed to install and uninstall breakpoints is
common and can be factored out into a function that provides the
requested implementation based on a runtime operation type.
This patch creates a common function that can be used to
install/uninstall breakpoints and defines the set of operations that
can be carried out through it.
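A minimal sketch of the factoring, with illustrative names rather than
the exact kernel ones:
	struct perf_event;
	enum hw_bp_ops {
		HW_BP_INSTALL,
		HW_BP_UNINSTALL,
	};
	static int hw_bp_control(struct perf_event *bp, enum hw_bp_ops ops)
	{
		switch (ops) {
		case HW_BP_INSTALL:
			/* program value/control registers, enable the slot */
			break;
		case HW_BP_UNINSTALL:
			/* clear the control register to disable the slot */
			break;
		}
		return 0;
	}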
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
When a CPU enters a low power state, its FP register content is lost.
This patch adds a notifier to save the FP context on CPU shutdown
and restore it on CPU resume. The context is saved and restored only
if the suspending thread is not a kernel thread, mirroring the current
context switch behaviour.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Kernel subsystems like CPU idle and suspend to RAM require a generic
mechanism to suspend a processor, save its context and put it into
a quiescent state. The cpu_{suspend}/{resume} implementation provides
such a framework through a kernel interface that allows saving/restoring
registers, flushing the context to DRAM and suspending/resuming to/from
low-power states where processor context may be lost.
The CPU suspend implementation relies on the suspend protocol registered
in CPU operations to carry out a suspend request after context is
saved and flushed to DRAM. The cpu_suspend interface:
int cpu_suspend(unsigned long arg);
allows passing an opaque parameter that is handed over to the suspend CPU
operations back-end so that it can take action according to the
semantics attached to it. The arg parameter allows suspend to RAM and CPU
idle drivers to communicate with suspend protocol back-ends; it requires
standardization so that the interface can be reused seamlessly across
systems, paving the way for generic drivers.
Context memory is allocated on the stack, whose address is stashed in a
per-cpu variable to keep track of it and passed to core functions that
save/restore the registers required by the architecture.
Even though, upon successful execution, the cpu_suspend function shuts
down the suspending processor, the warm boot resume mechanism, based
on the cpu_resume function, makes the resume path operate as a
cpu_suspend function return, so that cpu_suspend can be treated as a C
function by the caller, which simplifies coding the PM drivers that rely
on the cpu_suspend API.
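A hedged usage sketch: a CPU idle driver passing a platform-defined
state index through the opaque argument (PLATFORM_DEEP_IDLE is a
made-up value for illustration):
	#define PLATFORM_DEEP_IDLE	2
	static int enter_deep_idle(void)
	{
		/* on success, execution resumes here after warm boot,
		   as if cpu_suspend() had returned normally */
		return cpu_suspend(PLATFORM_DEEP_IDLE);
	}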
Upon context save, the minimal amount of memory is flushed to DRAM so
that it can be retrieved when the MMU is off and caches are not searched.
The suspend CPU operation, depending on the required operations (e.g. CPU
vs cluster shutdown), is in charge of flushing the cache hierarchy either
implicitly (by calling firmware implementations like PSCI) or explicitly
by executing the required cache maintenance functions.
Debug exceptions are disabled during cpu_{suspend}/{resume} operations
so that debug registers can be saved and restored properly preventing
preemption from debug agents enabled in the kernel.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
On ARM64 SMP systems, cores are identified by their MPIDR_EL1 register.
The MPIDR_EL1 guidelines in the ARM ARM do not provide strict enforcement of
the MPIDR_EL1 layout, only recommendations that, if followed, split the
MPIDR_EL1 on ARM 64-bit platforms into four affinity levels. In multi-cluster
systems like big.LITTLE, if the affinity guidelines are followed, the
MPIDR_EL1 cannot be considered a linear index. This means that the
association between a logical CPU in the kernel and the HW CPU identifier
becomes somewhat more complicated, requiring methods like hashing to
associate a given MPIDR_EL1 with a CPU logical index in order for the
look-up to be carried out in an efficient and scalable way.
This patch provides a function in the kernel that, starting from the
cpu_logical_map, implements collision-free hashing of MPIDR_EL1 values by
checking all significant bits of the MPIDR_EL1 affinity level bitfields.
The hashing can then be carried out through bit shifting and ORing; the
resulting hash algorithm is a collision-free though not minimal hash that can
be executed with a few assembly instructions. The mpidr_el1 is filtered
through an mpidr mask that is built by checking all bits that toggle in the
set of MPIDR_EL1s corresponding to possible CPUs. Bits that do not toggle
carry no information, so they do not contribute to the resulting hash.
Pseudo code:
/* check all bits that toggle, so they are required */
for (i = 1, mpidr_el1_mask = 0; i < num_possible_cpus(); i++)
mpidr_el1_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));
/*
* Build shifts to be applied to aff0, aff1, aff2, aff3 values to hash the
* mpidr_el1
* fls() returns the last bit set in a word, 0 if none
* ffs() returns the first bit set in a word, 0 if none
*/
fs0 = mpidr_el1_mask[7:0] ? ffs(mpidr_el1_mask[7:0]) - 1 : 0;
fs1 = mpidr_el1_mask[15:8] ? ffs(mpidr_el1_mask[15:8]) - 1 : 0;
fs2 = mpidr_el1_mask[23:16] ? ffs(mpidr_el1_mask[23:16]) - 1 : 0;
fs3 = mpidr_el1_mask[39:32] ? ffs(mpidr_el1_mask[39:32]) - 1 : 0;
ls0 = fls(mpidr_el1_mask[7:0]);
ls1 = fls(mpidr_el1_mask[15:8]);
ls2 = fls(mpidr_el1_mask[23:16]);
ls3 = fls(mpidr_el1_mask[39:32]);
bits0 = ls0 - fs0;
bits1 = ls1 - fs1;
bits2 = ls2 - fs2;
bits3 = ls3 - fs3;
aff0_shift = fs0;
aff1_shift = 8 + fs1 - bits0;
aff2_shift = 16 + fs2 - (bits0 + bits1);
aff3_shift = 32 + fs3 - (bits0 + bits1 + bits2);
u32 hash(u64 mpidr_el1) {
	u64 l[4];
	u64 mpidr_el1_masked = mpidr_el1 & mpidr_el1_mask;
	l[0] = mpidr_el1_masked & 0xff;
	l[1] = mpidr_el1_masked & 0xff00;
	l[2] = mpidr_el1_masked & 0xff0000;
	l[3] = mpidr_el1_masked & 0xff00000000;
	return (l[0] >> aff0_shift | l[1] >> aff1_shift |
		l[2] >> aff2_shift | l[3] >> aff3_shift);
}
The hashing algorithm relies on the inherent properties set in the ARM ARM
recommendations for the MPIDR_EL1. Exotic configurations, where for instance
the MPIDR_EL1 values at a given affinity level have large holes, can end up
requiring big hash tables since the compression of values that can be achieved
through shifting is somewhat crippled when holes are present. The kernel
warns if the number of buckets of the resulting hash table exceeds the
number of possible CPUs by a factor of 4, which is a symptom of a very
sparse HW MPIDR_EL1 configuration.
The hash algorithm is quite simple and can easily be implemented in assembly
code, to be used in code paths where the kernel virtual address space is
not set up (i.e. cpu_resume) and instruction and data fetches are strongly
ordered, so code must be compact and must carry out few data accesses.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
The refactoring of el2_setup split the code setting up EL2 and the code
detecting the CPU boot mode into separate chunks. This allows the code
that sets up EL2 to run in an endian-independent way - i.e. before the
endianness is set up in the respective sctlr registers.
This patch brings secondary_entry up-to-date so that CPUs entering the
kernel through this code path set up EL2 and the cpu boot mode properly.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The current breakpoint instruction checking code for A32 is not endian
clean. Fix this with appropriate byte-swapping when retrieving
instructions.
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
On a BE system the wrong half of the X registers is retrieved/written
when attempting to get/set the value of aarch32 registers through
ptrace.
Ensure that types are the correct width so that the relevant
casting occurs.
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The asynchronous aborts are generally fatal for the kernel but they can
be masked via the pstate A bit. If a system error happens while in
kernel mode, it won't be visible until returning to user space. This
patch enables this kind of abort early to help identify the cause.
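A minimal sketch, assuming a helper along these lines exists: clearing
PSTATE.A (bit 2 of the daifclr immediate) unmasks asynchronous aborts:
	static inline void local_async_enable(void)
	{
		asm volatile("msr	daifclr, #4" : : : "memory");
	}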
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit f27dde8dee (sched: Add NEED_RESCHED to the preempt_count)
introduced the use of bit 31 in preempt_count for obscure scheduling
purposes.
This causes interrupts taken from EL0 to hit the (open coded) BUG when
this flag is flipped while handling the interrupt (we compare the
values before and after, and kill the kernel if they are different).
The fix is to stop messing with the preempt count entirely, as this
is already being dealt with in the generic code (irq_enter/irq_exit).
Tested on a dual A53 FPGA running cyclictest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull ARM updates from Russell King:
"Included in this series are:
1. BE8 (modern big endian) changes for ARM from Ben Dooks
2. big.Little support from Nicolas Pitre and Dave Martin
3. support for LPAE systems with all system memory above 4GB
4. Perf updates from Will Deacon
5. Additional prefetching and other performance improvements from Will.
6. Neon-optimised AES implementation from Ard.
7. A number of smaller fixes scattered around the place.
There is a rather horrid merge conflict in tools/perf - I was never
notified of the conflict because it originally occurred between Will's
tree and other stuff. Consequently I have a resolution which Will
forwarded me, which I'll forward on immediately after sending this
mail.
The other notable thing is I'm expecting some build breakage in the
crypto stuff on ARM only with Ard's AES patches. These were merged
into a stable git branch which others had already pulled, so there's
little I can do about this. The problem is caused because these
patches have a dependency on some code in the crypto git tree - I
tried requesting a branch I can pull to resolve these, and all I got
each time from the crypto people was "we'll revert our patches then"
which would only make things worse since I still don't have the
dependent patches. I've no idea what's going on there or how to
resolve that, and since I can't split these patches from the rest of
this pull request, I'm rather stuck with pushing this as-is or
reverting Ard's patches.
Since it should "come out in the wash" I've left them in - the only
build problems they seem to cause at the moment are with randconfigs,
and since it's a new feature anyway. However, if by -rc1 the
dependencies aren't in, I think it'd be best to revert Ard's patches"
I resolved the perf conflict roughly as per the patch sent by Russell,
but there may be some differences. Any errors are likely mine. Let's
see how the crypto issues work out..
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (110 commits)
ARM: 7868/1: arm/arm64: remove atomic_clear_mask() in "include/asm/atomic.h"
ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg().
ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h
ARM: 7871/1: amba: Extend number of IRQS
ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise()
ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
ARM: 7878/1: nommu: Implement dummy early_paging_init()
ARM: 7876/1: clear Thumb-2 IT state on exception handling
ARM: 7874/2: bL_switcher: Remove cpu_hotplug_driver_{lock,unlock}()
ARM: footbridge: fix build warnings for netwinder
ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu
ARM: fix misplaced arch_virt_to_idmap()
ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown
ARM: 7847/1: mcpm: Factor out logical-to-physical CPU translation
ARM: 7869/1: remove unused XSCALE_PMU Kconfig param
ARM: 7864/1: Handle 64-bit memory in case of 32-bit phys_addr_t
ARM: 7863/1: Let arm_add_memory() always use 64-bit arguments
ARM: 7862/1: pcpu: replace __get_cpu_var_uses
ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code
...
Merge first patch-bomb from Andrew Morton:
"Quite a lot of other stuff is banked up awaiting further
next->mainline merging, but this batch contains:
- Lots of random misc patches
- OCFS2
- Most of MM
- backlight updates
- lib/ updates
- printk updates
- checkpatch updates
- epoll tweaking
- rtc updates
- hfs
- hfsplus
- documentation
- procfs
- update gcov to gcc-4.7 format
- IPC"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (269 commits)
ipc, msg: fix message length check for negative values
ipc/util.c: remove unnecessary work pending test
devpts: plug the memory leak in kill_sb
./Makefile: export initial ramdisk compression config option
init/Kconfig: add option to disable kernel compression
drivers: w1: make w1_slave::flags long to avoid memory corruption
drivers/w1/masters/ds1wm.c: use dev_get_platdata()
drivers/memstick/core/ms_block.c: fix unreachable state in h_msb_read_page()
drivers/memstick/core/mspro_block.c: fix attributes array allocation
drivers/pps/clients/pps-gpio.c: remove redundant of_match_ptr
kernel/panic.c: reduce 1 byte usage for print tainted buffer
gcov: reuse kbasename helper
kernel/gcov/fs.c: use pr_warn()
kernel/module.c: use pr_foo()
gcov: compile specific gcov implementation based on gcc version
gcov: add support for gcc 4.7 gcov format
gcov: move gcov structs definitions to a gcc version specific file
kernel/taskstats.c: return -ENOMEM when alloc memory fails in add_del_listener()
kernel/taskstats.c: add nla_nest_cancel() for failure processing between nla_nest_start() and nla_nest_end()
kernel/sysctl_binary.c: use scnprintf() instead of snprintf()
...
Pull vfs updates from Al Viro:
"All kinds of stuff this time around; some more notable parts:
- RCU'd vfsmounts handling
- new primitives for coredump handling
- files_lock is gone
- Bruce's delegations handling series
- exportfs fixes
plus misc stuff all over the place"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (101 commits)
ecryptfs: ->f_op is never NULL
locks: break delegations on any attribute modification
locks: break delegations on link
locks: break delegations on rename
locks: helper functions for delegation breaking
locks: break delegations on unlink
namei: minor vfs_unlink cleanup
locks: implement delegations
locks: introduce new FL_DELEG lock flag
vfs: take i_mutex on renamed file
vfs: rename I_MUTEX_QUOTA now that it's not used for quotas
vfs: don't use PARENT/CHILD lock classes for non-directories
vfs: pull ext4's double-i_mutex-locking into common code
exportfs: fix quadratic behavior in filehandle lookup
exportfs: better variable name
exportfs: move most of reconnect_path to helper function
exportfs: eliminate unused "noprogress" counter
exportfs: stop retrying once we race with rename/remove
exportfs: clear DISCONNECTED on all parents sooner
exportfs: more detailed comment for path_reconnect
...
Use more appropriate NUMA_NO_NODE instead of -1 in all archs' module_alloc()
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'devicetree-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull devicetree updates from Rob Herring:
"DeviceTree updates for 3.13. This is a bit larger pull request than
usual for this cycle with lots of clean-up.
- Cross arch clean-up and consolidation of early DT scanning code.
- Clean-up and removal of arch prom.h headers. Makes arch specific
prom.h optional on all but Sparc.
- Addition of interrupts-extended property for devices connected to
multiple interrupt controllers.
- Refactoring of DT interrupt parsing code in preparation for
deferred probe of interrupts.
- ARM cpu and cpu topology bindings documentation.
- Various DT vendor binding documentation updates"
* tag 'devicetree-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (82 commits)
powerpc: add missing explicit OF includes for ppc
dt/irq: add empty of_irq_count for !OF_IRQ
dt: disable self-tests for !OF_IRQ
of: irq: Fix interrupt-map entry matching
MIPS: Netlogic: replace early_init_devtree() call
of: Add Panasonic Corporation vendor prefix
of: Add Chunghwa Picture Tubes Ltd. vendor prefix
of: Add AU Optronics Corporation vendor prefix
of/irq: Fix potential buffer overflow
of/irq: Fix bug in interrupt parsing refactor.
of: set dma_mask to point to coherent_dma_mask
of: add vendor prefix for PHYTEC Messtechnik GmbH
DT: sort vendor-prefixes.txt
of: Add vendor prefix for Cadence
of: Add empty for_each_available_child_of_node() macro definition
arm/versatile: Fix versatile irq specifications.
of/irq: create interrupts-extended property
microblaze/pci: Drop PowerPC-ism from irq parsing
of/irq: Create of_irq_parse_and_map_pci() to consolidate arch code.
of/irq: Use irq_of_parse_and_map()
...
Pull timer changes from Ingo Molnar:
"Main changes in this cycle were:
- Updated full dynticks support.
- Event stream support for architected (ARM) timers.
- ARM clocksource driver updates.
- Move arm64 to using the generic sched_clock framework & resulting
cleanup in the generic sched_clock code.
- Misc fixes and cleanups"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (50 commits)
x86/time: Honor ACPI FADT flag indicating absence of a CMOS RTC
clocksource: sun4i: remove IRQF_DISABLED
clocksource: sun4i: Report the minimum tick that we can program
clocksource: sun4i: Select CLKSRC_MMIO
clocksource: Provide timekeeping for efm32 SoCs
clocksource: em_sti: convert to clk_prepare/unprepare
time: Fix signedness bug in sysfs_get_uname() and its callers
timekeeping: Fix some trivial typos in comments
alarmtimer: return EINVAL instead of ENOTSUPP if rtcdev doesn't exist
clocksource: arch_timer: Do not register arch_sys_counter twice
timer stats: Add a 'Collection: active/inactive' line to timer usage statistics
sched_clock: Remove sched_clock_func() hook
arch_timer: Move to generic sched_clock framework
clocksource: tcb_clksrc: Remove IRQF_DISABLED
clocksource: tcb_clksrc: Improve driver robustness
clocksource: tcb_clksrc: Replace clk_enable/disable with clk_prepare_enable/disable_unprepare
clocksource: arm_arch_timer: Use clocksource for suspend timekeeping
clocksource: dw_apb_timer_of: Mark a few more functions as __init
clocksource: Put nodes passed to CLOCKSOURCE_OF_DECLARE callbacks centrally
arm: zynq: Enable arm_global_timer
...
The non-IPI interrupts are displayed only for the online cpus from
show_interrupts in kernel/irq/proc.c before calling arch_show_interrupts().
As a result, the column headers and the IPI count don't match if any
CPU is offline.
This patch fixes show_ipi_list to display IPIs for online CPUs only.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ARM architecture reference specifies that the IT state bits in the
PSR must be all zeros in ARM mode or behavior is unspecified. If an ARM
function is registered as a signal handler, and that signal is delivered
inside a block of instructions following an IT instruction, some of the
instructions at the beginning of the signal handler may be skipped if
the IT state bits of the Program Status Register are not cleared by the
kernel.
Signed-off-by: T.J. Purtell <tj@mobisocial.us>
[catalin.marinas@arm.com: code comment and commit log updated]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Relocations that require an instruction immediate to be re-encoded must
ensure that the instruction pattern is represented in a little-endian
format for the manipulation code to work correctly.
This patch converts the loaded instruction into native endianness prior
to encoding and then converts back to little-endian byte order before
updating memory.
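A minimal sketch of the byte-order handling (illustrative, not the
exact module loader code):
	#include <asm/byteorder.h>
	static void patch_imm(__le32 *place, u32 imm_bits)
	{
		u32 insn = le32_to_cpu(*place);	/* to native endianness */
		insn |= imm_bits;		/* re-encode the immediate field */
		*place = cpu_to_le32(insn);	/* back to little-endian memory */
	}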
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
preempt_count is defined as an int. Oddly enough, we access it
as a 64bit value. Things become interesting when running a BE
kernel, and looking at the current CPU number, which is stored
as an int next to preempt_count. Like in a per-cpu interrupt
handler, for example...
Using a 32bit access fixes the issue for good.
Cc: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit 53ae3acd (arm64: Only enable local interrupts after the CPU
is marked online) moved the enabling of the GIC after the CPUs are
marked online.
This has some interesting effects:
[...]
[<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
[<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
[<ffffffc0000c8728>] resched_task+0x84/0xc0
[<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
[<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
[<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
[<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
[<ffffffc0000cae6c>] default_wake_function+0x10/0x18
[<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
[<ffffffc0000c7784>] complete+0x48/0x64
[<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
[...]
Here, we end-up calling gic_raise_softirq without having initialized
the interrupt controller for this CPU. While this goes unnoticed
with GICv2 (the distributor is always accessible), it explodes with
GICv3.
The fix is to move the call to notify_cpu_starting before we set
the secondary CPU online.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The .data section in the arm64 linker script currently lacks a
definition for page-aligned data. This leads to a .page_aligned
section being placed between the end of data and start of bss.
This patch corrects that by using the generic RW_DATA_SECTION
macro which includes support for page-aligned data.
Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit e8765b265a (arm64: read enable-method for CPU0) introduced
checks for the enable method on CPU0 (to be later used with CPU
suspend). However, if the kernel is compiled for UP and a DT file is
used with a method like 'spin-table', Linux complains about 'invalid
enable method'. This patch turns it into an 'unsupported enable method'
warning.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Once the cpu_logical_map for any logical cpu is populated with the
corresponding physical identifier (i.e. mpidr), its device node can
be retrieved using the DT helper 'of_get_cpu_node'. Currently the
device tree parsing code to get the boot cpu node is duplicated in
'cpu_read_bootcpu_ops'.
This patch replaces the code parsing the device tree for the boot
cpu with of_get_cpu_node.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
OF/DT core library provides architecture specific hook to match the
logical cpu index with the corresponding physical identifier.
On ARM64, the MPIDR_EL1 contains specific bitfields (MPIDR_EL1.Aff{3..0})
which uniquely identify a CPU, in addition to some non-identifying
information and reserved bits. The ARM cpu binding defines the 'reg'
property to only contain the affinity bits, and any cpu nodes with other
bits set in their 'reg' entry are skipped.
This patch overrides the weak definition of arch_match_cpu_phys_id
with ARM64 specific version using MPIDR_EL1.Aff{3..0} as cpu physical
identifiers.
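A minimal sketch of the override: the physical id from the DT 'reg'
property is matched against the affinity bits recorded in the
cpu_logical_map at boot:
	#include <linux/types.h>
	#include <asm/smp_plat.h>
	bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
	{
		return phys_id == cpu_logical_map(cpu);
	}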
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is the ARM part of Christoph's patchset cleaning up the various
uses of __get_cpu_var across the tree.
The idea is to convert __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations
that use the offset. Thereby address calculations are avoided and fewer
registers are used when code is generated.
[will: fixed debug ref counting checks and pcpu array accesses]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch updates the barrier semantics in the kuser helper functions
to take advantage of the ARMv8 additions to AArch32, which are
guaranteed to be available in situations where these functions will be
called.
Note that this slightly changes the cmpxchg functions in that they are
no longer necessarily full barriers if they return 1. However, the
documentation only states they include their own barriers "as needed",
not that they are obligated to act as a full barrier for the caller.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
CC: Matthew Leach <matthew.leach@arm.com>
CC: Dave Martin <dave.martin@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch fixes the ARMV8_EVTYPE_* macros, since the evtCount (event
number) field width is 10 bits in the event selection register.
Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently when CPUs are brought online via a spin-table, the address
they should jump to is written to the cpu-release-addr in the kernel's
native endianness. As the kernel may switch endianness, secondaries
might read the value byte-reversed from what was intended, and they
would jump to the wrong address.
As the only current arm64 spin-table implementations are
little-endian, tighten the arm64 spin-table definition such that
the value written to cpu-release-addr is _always_ little-endian
regardless of the endianness of any CPU. If a spinning CPU is
operating big-endian, it must byte-reverse the value before jumping to
handle this.
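A minimal sketch of the rule, with illustrative names: the store is
always little-endian and is flushed so a spinning CPU with caches off
can see it:
	#include <linux/types.h>
	#include <asm/cacheflush.h>
	static void set_cpu_release_addr(__le64 *release_addr, u64 entry)
	{
		*release_addr = cpu_to_le64(entry);
		__flush_dcache_area(release_addr, sizeof(*release_addr));
	}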
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The endianness of memory accesses at EL2 and EL1 are configured by
SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted,
the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must
ensure that they are set before performing any memory accesses.
This patch ensures that SCTLR_EL{2,1} are configured appropriately at
boot for kernels of either endianness.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
[catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently, the code for setting the __cpu_boot_mode flag is munged in
with el2_setup. This makes things difficult on a BE bringup, as a
memory access has to occur before el2_setup, which is the place where
we'd like to set the endianness for the current EL.
Create a new function for setting __cpu_boot_mode and have el2_setup
return the mode in which the CPU booted. Also define a new constant in
virt.h, BOOT_CPU_MODE_EL1, for readability.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently the sigreturn compat code is copied to an offset in the
vectors table. When using a BE kernel this data will be stored in the
wrong endianness, so when returning from a signal on a 32-bit BE system,
arbitrary code will be executed.
Instead of declaring the code inside a struct and copying that, use
the assembler's .byte directives to store the code in the correct
endianness regardless of platform endianness.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 port contains wrappers for arm32 syscalls that pass 64-bit
values. These wrappers concatenate two registers to hold a 64-bit
value in a single X register. On BE, however, the lower and higher
words are swapped.
Create a new assembler macro, regs_to_64, that swaps the registers in
the orr instruction when assembling for BE systems.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
uname -m reports the machine field from the current utsname, which should
reflect the endianness of the system.
This patch reports ELF_PLATFORM for the field, so that everything appears
consistent from userspace.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for using PSCI CPU_OFF calls for CPU hotplug.
With this code it is possible to hot unplug CPUs with "psci" as their
boot-method, as long as there's an appropriate cpu_off function id
specified in the psci node.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds the basic infrastructure necessary to support
CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug
support will depend on an implementation's cpu_operations (e.g. PSCI).
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may
tell us something useful (i.e. how to hotplug it back on), so we must
read it along with the enable-methods for all the other CPUs. Even
on UP the enable-method may tell us useful information (e.g. if a core
has some mechanism that might be usable for cpuidle), so we should
always read it.
This patch factors out the reading of the enable method, and ensures
that CPU0's enable method is read regardless of whether the kernel is
built with SMP support.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The arm64 kernel has an internal holding pen, which is necessary for
some systems where we can't bring CPUs online individually and must hold
multiple CPUs in a safe area until the kernel is able to handle them.
The current SMP infrastructure for arm64 is closely coupled to this
holding pen, and alternative boot methods must launch CPUs into the pen,
where they sit before they are launched into the kernel proper.
With PSCI (and possibly other future boot methods), we can bring CPUs
online individually, and need not perform the secondary_holding_pen
dance. Instead, this patch factors the holding pen management code out
to the spin-table boot method code, as it is the only boot method
requiring the pen.
A new entry point for secondaries, secondary_entry, is added for other
boot methods to use; it bypasses the holding pen and its associated
overhead when bringing CPUs online. The smp.pen.text section is also
removed, as the pen can live in head.text without problem.
The cpu_operations structure is extended with two new functions,
cpu_boot and cpu_postboot, for bringing a cpu into the kernel and
performing any post-boot cleanup required by a bootmethod (e.g.
resetting the secondary_holding_pen_release to INVALID_HWID).
Documentation is added for cpu_operations.
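A hedged sketch of the extended structure; cpu_boot and cpu_postboot
follow the description above, the remaining fields are illustrative:
	struct device_node;
	struct cpu_operations {
		const char	*name;
		int		(*cpu_init)(struct device_node *, unsigned int);
		int		(*cpu_prepare)(unsigned int);
		int		(*cpu_boot)(unsigned int);	/* bring the cpu into the kernel */
		void		(*cpu_postboot)(void);		/* e.g. reset the pen release */
	};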
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For hotplug support, we're going to want a place to store operations
that do more than bring CPUs online, and it makes sense to group these
with our current smp_enable_ops. For cpuidle support, we'll want to
group additional functions, and we may want them even for UP kernels.
This patch renames smp_enable_ops to the more general cpu_operations,
and pulls the definitions out of smp code such that they can be used in
UP kernels. While we're at it, fix up instances of the cpu parameter to
be an unsigned int, drop the init markings and rename the *_cpu
functions to cpu_* to reduce future churn when cpu_operations is
extended.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The functions in psci.c are only used from smp_psci.c, and smp_psci
cannot function without psci.c. Additionally psci.c is built when !SMP,
where it's expected that cpu_suspend may be useful.
This patch unifies the two files, removing pointless duplication and
paving the way for PSCI support in UP systems.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This function may be called from loadable modules, so it needs
exporting.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Loc Ho <lho@apm.com>
Pull more timekeeping items for v3.13 from John Stultz:
* Small cleanup in the clocksource code.
* Fix for rtc-pl031 to let it work with alarmtimers.
* Move arm64 to using the generic sched_clock framework & resulting
cleanup in the generic sched_clock code.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Convert arm64 to use the common of_flat_dt_get_machine_name function.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Register with the generic sched_clock framework now that it
supports 64 bits. This fixes two problems with the current
sched_clock support for machines using the architected timers.
First off, we don't subtract the start value from subsequent
sched_clock calls so we can potentially start off with
sched_clock returning gigantic numbers. Second, there is no
support for suspend/resume handling so problems such as discussed
in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock() during
suspend, 2012-10-23) can happen without this patch. Finally, it
allows us to move the sched_clock setup into drivers clocksource
out of the arch ports.
Cc: Christopher Covington <cov@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Create a weak version of early_init_dt_add_memory_arch which uses
memblock. This will unify all architectures except ones with custom
memory bank structs.
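The weak default is essentially a thin memblock wrapper; a rough
sketch:

void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
{
	/* Trim to page granularity and register the bank with memblock. */
	base &= PAGE_MASK;
	size &= PAGE_MASK;
	memblock_add(base, size);
}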
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Acked-by: Grant Likely <grant.likely@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: microblaze-uclinux@itee.uq.edu.au
Cc: linux@lists.openrisc.net
Cc: devicetree@vger.kernel.org
Convert arm64 to use new early_init_dt_scan function.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
If a context switch happens while fpsimd_flush_thread() is executing,
stale values in the FPSIMD registers will be saved into the current
thread's fpsimd_state by fpsimd_thread_switch(). That may leave the
new process with an invalid initial FPSIMD state, so disable
preemption while executing fpsimd_flush_thread().
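The fix amounts to bracketing the flush with a preempt-disabled
section; roughly:

void fpsimd_flush_thread(void)
{
	/* A preemption here would let fpsimd_thread_switch() save the
	 * stale register contents over the state reinitialised below. */
	preempt_disable();
	memset(&current->thread.fpsimd_state, 0,
	       sizeof(struct fpsimd_state));
	fpsimd_load_state(&current->thread.fpsimd_state);
	preempt_enable();
}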
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for configuring the event stream frequency
and enabling it.
It also adds the hwcap and compat-specific definitions so that
userspace can detect this event stream feature.
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Under arm64 elf_hwcap is a 32 bit quantity, but it is stored in
a 64 bit auxiliary ELF field and glibc reads hwcap as 64 bit.
This patch widens elf_hwcap to be 64 bit.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When a task crashes and we print debugging information, ensure that
compat tasks show the actual AArch32 LR and SP registers rather than the
AArch64 ones.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux
Pull device tree core updates from Grant Likely:
"Generally minor changes. A bunch of bug fixes, particularly for
initialization and some refactoring. Most notable change is feeding
the entire flattened tree into the random pool at boot. May not be
significant, but shouldn't hurt either"
Tim Bird questions whether the boot time cost of the random feeding may
be noticeable. And "add_device_randomness()" is definitely not some
speed demon of a function.
* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
of/platform: add error reporting to of_amba_device_create()
irq/of: Fix comment typo for irq_of_parse_and_map
of: Feed entire flattened device tree into the random pool
of/fdt: Clean up casting in unflattening path
of/fdt: Remove duplicate memory clearing on FDT unflattening
gpio: implement gpio-ranges binding document fix
of: call __of_parse_phandle_with_args from of_parse_phandle
of: introduce of_parse_phandle_with_fixed_args
of: move of_parse_phandle()
of: move documentation of of_parse_phandle_with_args
of: Fix missing memory initialization on FDT unflattening
of: consolidate definition of early_init_dt_alloc_memory_arch()
of: Make of_get_phy_mode() return int i.s.o. const int
include: dt-binding: input: create a DT header defining key codes.
of/platform: Staticize of_platform_device_create_pdata()
of: Specify initrd location using 64-bit
dt: Typo fix
OF: make of_property_for_each_{u32|string}() use parameters if OF is not enabled
TCR.TBI0 can be used to cause hardware address translation to ignore the
top byte of userspace virtual addresses. Whilst not especially useful in
standard C programs, this can be used by JITs to `tag' pointers with
various pieces of metadata.
This patch enables this bit for AArch64 Linux, and adds a new file to
Documentation/arm64/ which describes some potential caveats when using
tagged virtual addresses.
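A toy userspace illustration of what TBI0 permits (arm64-only;
tag_pointer() is made up for this example, and note the documented
caveats, e.g. pointers passed across the syscall boundary should
remain untagged):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 56	/* hardware ignores bits [63:56] of user VAs */

static void *tag_pointer(void *p, uint8_t tag)
{
	uintptr_t v = (uintptr_t)p & ~(0xffUL << TAG_SHIFT);
	return (void *)(v | ((uintptr_t)tag << TAG_SHIFT));
}

int main(void)
{
	int *p = malloc(sizeof(*p));
	int *q = tag_pointer(p, 0x2a);	/* same memory, tag 0x2a */

	*q = 42;			/* translation ignores the tag */
	printf("%d\n", *p);		/* prints 42 */
	free(p);			/* pass the untagged pointer back */
	return 0;
}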
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We always use a timer-backed delay loop for arm64, so don't bother
reporting a bogomips value which appears to confuse some people.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Most architectures use the same implementation. Collapse the common ones
into a single weak function that can be overridden.
Signed-off-by: Grant Likely <grant.likely@linaro.org>
The current vmlinux.lds.S places the notes sections between the
end of rw data and start of bss. This means that _edata doesn't
really point to the end of data. Since notes are read-only, this
patch moves them to the read-only segment so that _edata does
point to the end of initialized rw data.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
do_undefinstr() has to be called with interrupts disabled since it may
read the instruction from the user address space which could lead to a
data abort and subsequent might_sleep() warning in do_page_fault().
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Expand the arm64 image header to allow for co-existence with the
PE/COFF header required by the EFI stub. The PE/COFF format
requires the "MZ" header to be at offset 0, and the offset
to the PE/COFF header to be at offset 0x3c. The image
header is expanded to allow 2 instructions at the beginning
to accommodate a benign instruction at offset 0 that includes
the "MZ" header, a magic number, and the offset to the PE/COFF
header.
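A sketch of the resulting header layout (field names follow the arm64
booting documentation of this era; treat as illustrative):

struct arm64_image_header {
	u32 code0;		/* executable; doubles as the "MZ" magic */
	u32 code1;		/* executable code */
	u64 text_offset;	/* image load offset */
	u64 res0;		/* reserved */
	u64 res1;		/* reserved */
	u64 res2;		/* reserved */
	u64 res3;		/* reserved */
	u64 res4;		/* reserved */
	u32 magic;		/* 0x644d5241, "ARM\x64" */
	u32 res5;		/* offset 0x3c: room for the PE/COFF header offset */
};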
Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add <asm/neon.h> containing kernel_neon_begin/kernel_neon_end function
declarations and corresponding definitions in fpsimd.c.
These are needed to wrap uses of NEON in kernel mode. The names are
identical to the ones used in arm/ so code using intrinsics or
vectorized by GCC can be shared between arm and arm64.
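Usage follows the arm pattern; a minimal sketch (the worker function
here is hypothetical):

#include <asm/neon.h>

static void do_neon_work(void)
{
	/* Save the current task's FPSIMD state before touching NEON. */
	kernel_neon_begin();
	/* ... NEON intrinsics or GCC auto-vectorised code ... */
	kernel_neon_end();
}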
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a port of f2fe09b055 ("ARM: 7663/1: perf: fix ARMv7 EVTYPE_MASK
to include NSH bit") to arm64, which fixes the broken evtype mask to
include the NSH bit, allowing profiling at EL2.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a port of cb2d8b342a ("ARM: 7698/1: perf: fix group validation
when using enable_on_exec") to arm64, which fixes the event validation
checking so that events in the OFF state are still considered when
enable_on_exec is true.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a port of c95eb3184e ("ARM: 7809/1: perf: fix event validation
for software group leaders") to arm64, which fixes a panic in the arm64
perf backend found as a result of Vince's fuzzing tool.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a port of d9f966357b ("ARM: 7810/1: perf: Fix array out of
bounds access in armpmu_map_hw_event()") to arm64, which fixes an oops
in the arm64 perf backend found as a result of Vince's fuzzing tool.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Written by Catalin Marinas, tested by APM on the storm platform. This is
needed because of failures encountered when running the SpecWeb
benchmark test.
Signed-off-by: Feng Kan <fkan@apm.com>
Acked-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Commit ff70130 (arm64: use common reboot infrastructure) converted the
arm_pm_restart declaration to the new reboot infrastructure but missed
the actual definition.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 fixes from Catalin Marinas:
- Post -rc1 update to the common reboot infrastructure.
- Fixes (user cache maintenance fault handling, !COMPAT compilation,
CPU online and interrupt handling).
* tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
arm64: use common reboot infrastructure
arm64: mm: don't treat user cache maintenance faults as writes
arm64: add '#ifdef CONFIG_COMPAT' for aarch32_break_handler()
arm64: Only enable local interrupts after the CPU is marked online
Commit 7b6d864b48 (reboot: arm: change reboot_mode to use enum
reboot_mode) changed the way reboot is handled on arm, which has a
direct impact on arm64 as we share the reset driver on the VE platform.
The obvious fix is to move arm64 to use the same infrastructure.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: removed reboot_mode = REBOOT_HARD default setting]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
There is a slight chance that (timer) interrupts are triggered before a
secondary CPU has been marked online, with implications for softirq
thread affinity.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Kirill Tkhai <tkhai@yandex.ru>
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/arm64 uses of the __cpuinit macros from
all C files. Currently arm64 does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Finally plug KVM/arm64 into the config system, making it possible
to enable KVM support on AArch64 CPUs.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull ARM64 updates from Catalin Marinas:
"Main features:
- KVM and Xen ports to AArch64
- Hugetlbfs and transparent huge pages support for arm64
- Applied Micro X-Gene Kconfig entry and dts file
- Cache flushing improvements
For arm64 huge pages support, there are x86 changes moving part of
arch/x86/mm/hugetlbpage.c into mm/hugetlb.c to be re-used by arm64"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (66 commits)
arm64: Add initial DTS for APM X-Gene Storm SOC and APM Mustang board
arm64: Add defines for APM ARMv8 implementation
arm64: Enable APM X-Gene SOC family in the defconfig
arm64: Add Kconfig option for APM X-Gene SOC family
arm64/Makefile: provide vdso_install target
ARM64: mm: THP support.
ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
ARM64: mm: HugeTLB support.
ARM64: mm: Move PTE_PROT_NONE bit.
ARM64: mm: Make PAGE_NONE pages read only and no-execute.
ARM64: mm: Restore memblock limit when map_mem finished.
mm: thp: Correct the HPAGE_PMD_ORDER check.
x86: mm: Remove general hugetlb code from x86.
mm: hugetlb: Copy general hugetlb code from x86 to mm.
x86: mm: Remove x86 version of huge_pmd_share.
mm: hugetlb: Copy huge_pmd_share from x86 to mm.
arm64: KVM: document kernel object mappings in HYP
arm64: KVM: MAINTAINERS update
arm64: KVM: userspace API documentation
arm64: KVM: enable initialization of a 32bit vcpu
...
With this change, we no longer lose the innermost entry in the user-mode
part of the call chain. See also the x86 port, which includes the ip,
and the corresponding change in arch/arm.
Signed-off-by: Jed Davis <jld@mozilla.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The HYP mode world switch in all its glory.
Implements save/restore of host/guest registers, EL2 trapping,
IPA resolution, and additional services (tlb invalidation).
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The software breakpoint handlers are hooked in directly from ptrace,
which makes it difficult to add additional handlers for things like
kprobes and kgdb.
This patch moves the handling code into debug-monitors.c, where we can
dispatch to different debug subsystems more easily.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
As is done for other architectures, sort the exception table at
build-time rather than during boot.
Since sortextable appears to be a standalone C program relying on the
host elf.h to provide EM_AARCH64, I've had to add a conditional check in
order to allow cross-compilation on machines that aren't running a
bleeding-edge libc-dev.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Under arm64, we will calibrate the delay loop statically using a known
timer frequency, so delete read_current_timer(), or it will cause
a compile error with allmodconfig.
The related errors:
ERROR: "read_current_timer" [lib/rbtree_test.ko] undefined!
ERROR: "read_current_timer" [lib/interval_tree_test.ko] undefined!
ERROR: "read_current_timer" [fs/ext4/ext4.ko] undefined!
ERROR: "read_current_timer" [crypto/tcrypt.ko] undefined!
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add the necessary infrastructure for identity-mapped HYP page
tables. Idmap-ed code must be in the ".hyp.idmap.text" linker
section.
The rest of the HYP ends up in ".hyp.text".
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Rather than completely killing the kernel if we receive an esr value we
can't deal with in the el0 handlers, send the process a SIGILL and log
the esr value in the hope that we can debug it. If we receive a bad esr
from el1, we'll die() as before.
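A simplified sketch of the resulting shape of bad_mode() (details in
the tree differ):

asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
{
	siginfo_t info;
	void __user *pc = (void __user *)instruction_pointer(regs);

	pr_crit("Bad mode in %s handler detected, code 0x%08x\n",
		handler[reason], esr);

	/* A bad esr from el1 is still fatal. */
	if (!user_mode(regs))
		die("Oops - bad mode", regs, 0);

	/* From el0: log it and give the process a chance to handle SIGILL. */
	info.si_signo = SIGILL;
	info.si_errno = 0;
	info.si_code  = ILL_ILLOPC;
	info.si_addr  = pc;
	force_sig_info(info.si_signo, &info, current);
}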
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Currently, if a compat process reads or writes from/to a disabled
cp15/cp14 register, the trap is not handled by the el0_sync_compat
handler, and the kernel will head to bad_mode, where it will die(), and
oops(). For 64 bit processes, disabled system register accesses are
currently treated as unhandled instructions.
This patch modifies entry.S to treat these unhandled traps as undefined
instructions, sending a SIGILL to userspace. This gives processes a
chance to handle this and stop using inaccessible registers, and
prevents further issues in the kernel as a result of the die().
Reported-by: Johannes Jensen <Johannes.Jensen@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Currently user faults (page, undefined instruction) are always reported
even though the user may have a signal handler for them. This patch adds
an unhandled_signal() check together with printk_ratelimit() for these
cases.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The of_platform_populate() is currently invoked at device_initcall()
level. There are however drivers that use platform_driver_probe()
directly and they need the devices to be populated. This patch makes
of_platform_populate() an arch_initcall().
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Benoit Lecardonnel <Benoit.Lecardonnel@synopsys.com>
Tested-by: Benoit Lecardonnel <Benoit.Lecardonnel@synopsys.com>
During boot, we take the debug OS lock before interrupts are enabled.
This is required to prevent clearing of PSTATE.D on the interrupt entry
path, which could result in spurious debug exceptions before we've got
round to resetting things like the hardware breakpoint registers to a
sane state.
A problem with this approach is that taking the OS lock prevents an
external JTAG debugger from debugging the system, which is especially
irritating during boot, where JTAG debugging can be most useful.
This patch clears mdscr_el1 rather than taking the lock, clearing the
MDE and KDE bits and preventing self-hosted hardware debug exceptions
from occurring.
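The write itself is trivial; roughly (helper name illustrative):

static inline void mdscr_disable_debug(void)
{
	/* MDSCR_EL1 = 0 clears MDE and KDE: no self-hosted hardware
	 * debug exceptions, and the OS lock is left untouched. */
	asm volatile("msr mdscr_el1, xzr");
	isb();
}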
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
When compiling with allmodconfig, early_console is already defined as an
extern global pointer. We need to let it point to the object we intend
(as arm32 does).
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull stray syscall bits from Al Viro:
"Several syscall-related commits that were missing from the original"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal:
switch compat_sys_sysctl to COMPAT_SYSCALL_DEFINE
unicore32: just use mmap_pgoff()...
unify compat fanotify_mark(2), switch to COMPAT_SYSCALL_DEFINE
x86, vm86: fix VM86 syscalls: use SYSCALL_DEFINEx(...)
Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 update from Catalin Marinas:
- Since drivers/irqchip/irq-gic.c no longer has dependencies on arm32
specifics (the 'gic' branch merged), it can be enabled on arm64.
- Enable arm64 support for poweroff/restart (for code under
drivers/power/reset/).
- Fixes (dts file, exception handling, bitops)
* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
arm64: Treat the bitops index argument as an 'int'
arm64: Ignore the 'write' ESR flag on cache maintenance faults
arm64: dts: fix #address-cells for foundation-v8
arm64: vexpress: Add support for poweroff/restart
arm64: Enable support for the ARM GIC interrupt controller
This patch adds the arm_pm_poweroff definition expected by the
vexpress-poweroff.c driver and enables the latter for arm64.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Pawel Moll <pawel.moll@arm.com>
Merge tag 'cleanup-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC late cleanups from Arnd Bergmann:
"These are cleanups and smaller changes that either depend on earlier
feature branches or came in late during the development cycle. We
normally try to get all cleanups early, so these are the exceptions:
- A follow-up on the clocksource reworks, hopefully the last time we
need to merge clocksource subsystem changes through arm-soc.
A first set of patches was part of the original 3.10 arm-soc
cleanup series because of interdependencies with timer drivers now
moved out of arch/arm.
- Migrating the SPEAr13xx platform away from using auxdata for DMA
channel descriptions towards using information in device tree,
based on the earlier SPEAr multiplatform series
- A few follow-ups on the Atmel SAMA5 support and other changes for
Atmel at91 based on the larger at91 reworks.
- Moving the armada irqchip implementation to drivers/irqchip
- Several OMAP cleanups following up on the larger series already
merged in 3.10."
* tag 'cleanup-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (50 commits)
ARM: OMAP4: change the device names in usb_bind_phy
ARM: OMAP2+: Fix mismerge for timer.c between ff931c82 and da4a686a
ARM: SPEAr: conditionalize SMP code
ARM: arch_timer: Silence debug preempt warnings
ARM: OMAP: remove unused variable
serial: amba-pl011: fix !CONFIG_DMA_ENGINE case
ata: arasan: remove the need for platform_data
ARM: at91/sama5d34ek.dts: remove not needed compatibility string
ARM: at91: dts: add MCI DMA support
ARM: at91: dts: add i2c dma support
ARM: at91: dts: set #dma-cells to the correct value
ARM: at91: suspend both memory controllers on at91sam9263
irqchip: armada-370-xp: slightly cleanup irq controller driver
irqchip: armada-370-xp: move IRQ handler to avoid forward declaration
irqchip: move IRQ driver for Armada 370/XP
ARM: mvebu: move L2 cache initialization in init_early()
devtree: add binding documentation for sp804
ARM: integrator-cp: convert use CLKSRC_OF for timer init
ARM: versatile: use OF init for sp804 timer
ARM: versatile: add versatile dtbs to dtbs target
...
Pull compat cleanup from Al Viro:
"Mostly about syscall wrappers this time; there will be another pile
with patches in the same general area from various people, but I'd
rather push those after both that and vfs.git pile are in."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal:
syscalls.h: slightly reduce the jungles of macros
get rid of union semop in sys_semctl(2) arguments
make do_mremap() static
sparc: no need to sign-extend in sync_file_range() wrapper
ppc compat wrappers for add_key(2) and request_key(2) are pointless
x86: trim sys_ia32.h
x86: sys32_kill and sys32_mprotect are pointless
get rid of compat_sys_semctl() and friends in case of ARCH_WANT_OLD_COMPAT_IPC
merge compat sys_ipc instances
consolidate compat lookup_dcookie()
convert vmsplice to COMPAT_SYSCALL_DEFINE
switch getrusage() to COMPAT_SYSCALL_DEFINE
switch epoll_pwait to COMPAT_SYSCALL_DEFINE
convert sendfile{,64} to COMPAT_SYSCALL_DEFINE
switch signalfd{,4}() to COMPAT_SYSCALL_DEFINE
make SYSCALL_DEFINE<n>-generated wrappers do asmlinkage_protect
make HAVE_SYSCALL_WRAPPERS unconditional
consolidate cond_syscall and SYSCALL_ALIAS declarations
teach SYSCALL_DEFINE<n> how to deal with long long/unsigned long long
get rid of duplicate logics in __SC_....[1-6] definitions
show_regs() is inherently arch-dependent but it does make sense to print
generic debug information, and some archs already do, albeit in slightly
different forms. This patch introduces a generic function to print debug
information from show_regs() so that different archs print out the same
information and it's much easier to modify what's printed.
show_regs_print_info() prints out the same debug info as dump_stack()
does plus task and thread_info pointers.
* Archs which didn't print debug info now do.
alpha, arc, blackfin, c6x, cris, frv, h8300, hexagon, ia64, m32r,
metag, microblaze, mn10300, openrisc, parisc, score, sh64, sparc,
um, xtensa
* Already prints debug info. Replaced with show_regs_print_info().
The printed information is superset of what used to be there.
arm, arm64, avr32, mips, powerpc, sh32, tile, unicore32, x86
* s390 is special in that it used to print arch-specific information
along with the generic debug info. Heiko and Martin think that the
arch-specific extra isn't worth keeping an s390-specific
implementation. Converted to use the generic version.
Note that now all archs print the debug info before actual register
dumps.
An example BUG() dump follows.
kernel BUG at /work/os/work/kernel/workqueue.c:4841!
invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.9.0-rc1-work+ #7
Hardware name: empty empty/S3992, BIOS 080011 10/26/2007
task: ffff88007c85e040 ti: ffff88007c860000 task.ti: ffff88007c860000
RIP: 0010:[<ffffffff8234a07e>] [<ffffffff8234a07e>] init_workqueues+0x4/0x6
RSP: 0000:ffff88007c861ec8 EFLAGS: 00010246
RAX: ffff88007c861fd8 RBX: ffffffff824466a8 RCX: 0000000000000001
RDX: 0000000000000046 RSI: 0000000000000001 RDI: ffffffff8234a07a
RBP: ffff88007c861ec8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffffffff8234a07a
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff88007dc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff88015f7ff000 CR3: 00000000021f1000 CR4: 00000000000007f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
ffff88007c861ef8 ffffffff81000312 ffffffff824466a8 ffff88007c85e650
0000000000000003 0000000000000000 ffff88007c861f38 ffffffff82335e5d
ffff88007c862080 ffffffff8223d8c0 ffff88007c862080 ffffffff81c47760
Call Trace:
[<ffffffff81000312>] do_one_initcall+0x122/0x170
[<ffffffff82335e5d>] kernel_init_freeable+0x9b/0x1c8
[<ffffffff81c47760>] ? rest_init+0x140/0x140
[<ffffffff81c4776e>] kernel_init+0xe/0xf0
[<ffffffff81c6be9c>] ret_from_fork+0x7c/0xb0
[<ffffffff81c47760>] ? rest_init+0x140/0x140
...
v2: Typo fix in x86-32.
v3: CPU number dropped from show_regs_print_info() as
dump_stack_print_info() has been updated to print it. s390
specific implementation dropped as requested by s390 maintainers.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Chris Metcalf <cmetcalf@tilera.com> [tile bits]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon bits]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Both dump_stack() and show_stack() are currently implemented by each
architecture. show_stack(NULL, NULL) dumps the backtrace for the
current task as does dump_stack(). On some archs, dump_stack() prints
extra information - pid, utsname and so on - in addition to the
backtrace while the two are identical on other archs.
The usages in arch-independent code of the two functions indicate
show_stack(NULL, NULL) should print out bare backtrace while
dump_stack() is used for debugging purposes when something went wrong,
so it does make sense to print additional information on the task which
triggered dump_stack().
There's no reason to require archs to implement two separate but mostly
identical functions. It leads to unnecessary, subtle inconsistencies.
This patch expands the dummy fallback dump_stack() implementation in
lib/dump_stack.c such that it prints out debug information (taken from
x86) and invokes show_stack(NULL, NULL) and drops arch-specific
dump_stack() implementations in all archs except blackfin. Blackfin's
dump_stack() does something wonky that I don't understand.
Debug information can be printed separately by calling
dump_stack_print_info() so that arch-specific dump_stack()
implementation can still emit the same debug information. This is used
in blackfin.
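The resulting generic implementation is tiny; roughly:

/* lib/dump_stack.c */
void dump_stack(void)
{
	dump_stack_print_info(KERN_DEFAULT);
	show_stack(NULL, NULL);
}
EXPORT_SYMBOL(dump_stack);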
This patch brings the following behavior changes.
* On some archs, an extra level in backtrace for show_stack() could be
printed. This is because the top frame was determined in
dump_stack() on those archs while generic dump_stack() can't do that
reliably. It can be compensated by inlining dump_stack() but not
sure whether that'd be necessary.
* Most archs didn't use to print debug info on dump_stack(). They do
now.
An example WARN dump follows.
WARNING: at kernel/workqueue.c:4841 init_workqueues+0x35/0x505()
Hardware name: empty
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.9.0-rc1-work+ #9
0000000000000009 ffff88007c861e08 ffffffff81c614dc ffff88007c861e48
ffffffff8108f50f ffffffff82228240 0000000000000040 ffffffff8234a03c
0000000000000000 0000000000000000 0000000000000000 ffff88007c861e58
Call Trace:
[<ffffffff81c614dc>] dump_stack+0x19/0x1b
[<ffffffff8108f50f>] warn_slowpath_common+0x7f/0xc0
[<ffffffff8108f56a>] warn_slowpath_null+0x1a/0x20
[<ffffffff8234a071>] init_workqueues+0x35/0x505
...
v2: CPU number added to the generic debug info as requested by s390
folks and dropped the s390 specific dump_stack(). This loses %ksp
from the debug message which the maintainers think isn't important
enough to keep the s390-specific dump_stack() implementation.
dump_stack_print_info() is moved to kernel/printk.c from
lib/dump_stack.c. Because linkage is per object file,
dump_stack_print_info() living in the same lib file as the generic
dump_stack() means that archs which implement a custom dump_stack()
- at this point, only blackfin - can't use dump_stack_print_info(),
as that will bring in the generic version of dump_stack() too. The
v1 patch broke the build on blackfin due to this issue; the build
breakage was reported by Fengguang Wu.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390 bits]
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon bits]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 update from Catalin Marinas:
"Main features:
- Versatile Express SoC (model) support - DT files and Kconfig
entries (there are no arch/arm64/mach-* directories). The bulk of
the code has already been moved to drivers/ as part of the ARM SoC
clean-up.
- Basic multi-cluster support (CPU logical map initialised from the
DT)
- Simple earlyprintk support for UART 8250/16550 and FastModel
console output
- Optimised kernel library bitops and string functions.
- Automatic initialisation of the irqchip and clocks via DT"
* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (26 commits)
arm64: Use acquire/release semantics instead of explicit DMB
arm64: klib: bitops: fix unpredictable stxr usage
arm64: vexpress: Enable ARMv8 RTSM model (SoC) support
arm64: vexpress: Add dts files for the ARMv8 RTSM models
arm64: Survive invalid cpu enable-methods
arm64: mm: Correct show_pte behaviour
arm64: Fix compat types affecting struct compat_stat
arm64: Execute DSB during thread switching for TLB/cache maintenance
arm64: compiling issue, need add include/asm/vga.h file
arm64: smp: honour #address-size when parsing CPU reg property
arm64: Define cmpxchg64 and cmpxchg64_local for outside use
arm64: Define readq and writeq for driver module using
arm64: Fix task tracing
arm64: add explicit symbols to ESR_EL1 decoding
arm64: Use irqchip_init() for interrupt controller initialisation
arm64: psci: Use the MPIDR values from cpu_logical_map for cpu ids.
arm64: klib: Optimised atomic bitops
arm64: klib: Optimised string functions
arm64: klib: Optimised memory functions
arm64: head: match all affinity levels in the pen of the secondaries
...
Currently, if you pass the kernel a dtb where a cpu node has an
unsupported enable-method property (e.g. "not-psci"), it'll explode
horribly, as it iterates over the enable_ops array incorrectly. It
increments the pointer *at* the current element, rather than
incrementing the pointer *to* the current element. As the first two
elements pointed to structures that were contiguous in memory, this
happened to be equivalent. However the third element is NULL, so when
the list is exhausted, smp_get_enable_ops generates the wrong pointer,
and dereferences an arbitrary portion of memory, which currently happens
to contain zero.
This patch fixes this by indirecting the pointer one level, so we
iterate over the array elements correctly, avoiding the panic.
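A sketch of the bug and the fix (names per the smp code of this era):

static const struct smp_enable_ops *enable_ops[] = {
	&smp_spin_table_ops,
	&smp_psci_ops,
	NULL,
};

/* Broken: 'ops' is the pointer stored *at* the first slot, so 'ops++'
 * walks through memory from inside that structure, not along the array. */
static const struct smp_enable_ops * __init get_ops_broken(const char *name)
{
	const struct smp_enable_ops *ops = enable_ops[0];

	while (ops) {
		if (!strcmp(name, ops->name))
			return ops;
		ops++;
	}
	return NULL;
}

/* Fixed: indirect one level and iterate over the array elements. */
static const struct smp_enable_ops * __init smp_get_enable_ops(const char *name)
{
	const struct smp_enable_ops **ops = enable_ops;

	while (*ops) {
		if (!strcmp(name, (*ops)->name))
			return *ops;
		ops++;
	}
	return NULL;
}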
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The DSB following TLB or cache maintenance ops must be run on the same
CPU. With kernel preemption enabled or for user-space cache maintenance
this may not be the case. This patch adds an explicit DSB in the
__switch_to() function.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
For systems where the top 32-bits of the MPIDR are all zero, we should
allow the device-tree to specify an #address-size of 0x1 for the CPU reg
property and then zero extend the value there.
Without this patch, kvmtool breaks with the recent mpidr parsing code
introduced in 4c7aa00213 ("arm64: kernel: initialise cpu_logical_map
from the DT").
Acked-by: Javi Merino <javi.merino@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'clksrc-cleanup-for-3.10-part2' of git://sources.calxeda.com/kernel/linux into late/clksrc
This is the 2nd part of ARM timer clean-ups for 3.10. This series has
the following changes:
- Add sched_clock selection logic to select the highest frequency clock
- Use full 64-bit arch timer counter for sched_clock
- Convert arch timer, sp804 and integrator-cp timers to CLKSRC_OF and
adapt all users to use clocksource_of_init
* tag 'clksrc-cleanup-for-3.10-part2' of git://sources.calxeda.com/kernel/linux:
devtree: add binding documentation for sp804
ARM: integrator-cp: convert use CLKSRC_OF for timer init
ARM: versatile: use OF init for sp804 timer
ARM: versatile: add versatile dtbs to dtbs target
ARM: vexpress: remove extra timer-sp control register clearing
ARM: dts: vexpress: disable CA9 core tile sp804 timer
ARM: vexpress: remove sp804 OF init
ARM: highbank: use OF init for sp804 timer
ARM: timer-sp: convert to use CLKSRC_OF init
OF: add empty of_device_is_available for !OF
ARM: convert arm/arm64 arch timer to use CLKSRC_OF init
ARM: make machine_desc->init_time default to clocksource_of_init
ARM: arch_timer: use full 64-bit counter for sched_clock
ARM: make sched_clock just call a function pointer
ARM: sched_clock: allow changing to higher frequency counter
Signed-off-by: Olof Johansson <olof@lixom.net>
This has a nasty set of conflicts with the exynos MCT code, which was
moved in a separate branch, and then fixed up when merged in, but still
conflicts a bit here. It should have been sorted out by this merge though.
For accurate accounting, call contextidr_thread_switch() before a
task is scheduled rather than after, when the 'next' variable has a
different meaning since we have switched stacks.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The ESR_EL1 decoding process is a bit cryptic, and KVM also has
a need for the same constants.
Add a new esr.h file containing the appropriate exception classes
constants, and change entry.S to use it. Fix a small bug in the
EL1 breakpoint check while we're at it.
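For flavour, a few of the resulting definitions (the exception class
sits in ESR_EL1[31:26]; values from the architecture, list heavily
trimmed):

#define ESR_EL1_EC_SHIFT	(26)

#define ESR_EL1_EC_UNKNOWN	(0x00)	/* unknown/uncategorized */
#define ESR_EL1_EC_SVC64	(0x15)	/* SVC from AArch64 */
#define ESR_EL1_EC_DABT_EL0	(0x24)	/* data abort from a lower EL */
#define ESR_EL1_EC_DABT_EL1	(0x25)	/* data abort, same EL */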
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This converts arm and arm64 to use CLKSRC_OF DT based initialization for
the arch timer. A new function arch_timer_arch_init is added to allow for
arch specific setup.
This has a side effect of enabling sched_clock on omap5 and exynos5. There
should not be any reason not to use the arch timers for sched_clock.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Simon Horman <horms@verge.net.au>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-omap@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
This patch uses the generic irqchip_init() function for initialising the
interrupt controller on arm64. It also adds several definitions required
by the ARM GIC irqchip driver but does not enable ARM_GIC yet.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With the (re)introduction of cpu_logical_map in arm64 we switch to
the use of MPIDR values to identify CPUs. Update the psci code to
do that.
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch implements the AArch64-specific atomic bitops functions using
exclusive memory accesses to avoid locking.
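The underlying pattern is a load-exclusive/store-exclusive retry loop;
a C sketch of set_bit (the in-tree versions live in assembly under
arch/arm64/lib/):

static inline void sketch_set_bit(int nr, volatile unsigned long *addr)
{
	unsigned long tmp, mask = 1UL << (nr & 63);
	volatile unsigned long *p = addr + (nr >> 6);
	unsigned int fail;

	asm volatile(
	"1:	ldxr	%0, %2\n"	/* load-exclusive the word */
	"	orr	%0, %0, %3\n"	/* set the bit */
	"	stxr	%w1, %0, %2\n"	/* store fails if exclusivity was lost */
	"	cbnz	%w1, 1b\n"	/* retry until the store succeeds */
	: "=&r" (tmp), "=&r" (fail), "+Q" (*p)
	: "r" (mask));
}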
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch introduces AArch64-specific memory functions (memcpy,
memmove, memchr, memset). These functions are not optimised for any CPU
implementation but can be used as a starting point once hardware is
available.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The reg property of the cpu nodes in the DT now contains all the
affinity levels (MPIDR[39:32] and MPIDR[23:0]), and that's what
boot_secondary() writes in the pen, so increase the mask in
secondary_holding_pen accordingly.
Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When booting the kernel, the cpu logical id map must be initialised
using device tree data passed by FW or through an embedded blob.
This patch parses the reg property in device tree "cpu" nodes,
retrieves the corresponding CPUs' hardware identifiers (MPIDR) and
initialises the cpu logical map accordingly.
The device tree HW identifiers are considered valid if all CPU nodes
contain a "reg" property, there are no duplicate "reg" entries and the
DT defines a CPU node whose "reg" property defines affinity levels
that match those of the boot CPU.
The primary CPU is assigned cpu logical number 0 to keep the current
convention valid.
Based on a0ae024050 (ARM: kernel: add
device tree init map function).
Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change the prototype of write_pen_release() accordingly and clarify
that it holds the hardware id of the secondary CPU that is going to boot.
This is in preparation of getting HWIDs parsed from the DT.
Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for using earlyprintk with 8250/16550 UART
ports. The 8250/16550 UART can either have 8-bit or 32-bit aligned
registers which is HW vendor dependent.
Kernel args for 8-bit aligned regs: earlyprintk=uart8250-8bit,<phys_address>
Kernel args for 32-bit aligned regs: earlyprintk=uart8250-32bit,<phys_address>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Enable early_printk to use the FastModel semihosting to output
the early kernel messages. Works both for host and guest kernels.
To use this feature, pass "early_printk=smh" to the kernel.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
__atomic_hash is only defined when SMP is enabled, but arm64ksyms.c
exports it even in the UP case.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Pull signal/compat fixes from Al Viro:
"Fixes for several regressions introduced in the last signal.git pile,
along with fixing bugs in truncate and ftruncate compat (on just about
anything biarch at least one of those two had been done wrong)."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal:
compat: restore timerfd settime and gettime compat syscalls
[regression] braino in "sparc: convert to ksignal"
fix compat truncate/ftruncate
switch lseek to COMPAT_SYSCALL_DEFINE
lseek() and truncate() on sparc really need sign extension
Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
Pull module update from Rusty Russell:
"The sweeping change is to make add_taint() explicitly indicate whether
to disable lockdep, but it's a mechanical change."
* tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux:
MODSIGN: Add option to not sign modules during modules_install
MODSIGN: Add -s <signature> option to sign-file
MODSIGN: Specify the hash algorithm on sign-file command line
MODSIGN: Simplify Makefile with a Kconfig helper
module: clean up load_module a little more.
modpost: Ignore ARC specific non-alloc sections
module: constify within_module_*
taint: add explicit flag to show whether lock dep is still OK.
module: printk message when module signature fail taints kernel.
Pull signal handling cleanups from Al Viro:
"This is the first pile; another one will come a bit later and will
contain SYSCALL_DEFINE-related patches.
- a bunch of signal-related syscalls (both native and compat)
unified.
- a bunch of compat syscalls switched to COMPAT_SYSCALL_DEFINE
(fixing several potential problems with missing argument
validation, while we are at it)
- a lot of now-pointless wrappers killed
- a couple of architectures (cris and hexagon) forgot to save
altstack settings into sigframe, even though they used the
(uninitialized) values in sigreturn; fixed.
- microblaze fixes for delivery of multiple signals arriving at once
- saner set of helpers for signal delivery introduced, several
architectures switched to using those."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (143 commits)
x86: convert to ksignal
sparc: convert to ksignal
arm: switch to struct ksignal * passing
alpha: pass k_sigaction and siginfo_t using ksignal pointer
burying unused conditionals
make do_sigaltstack() static
arm64: switch to generic old sigaction() (compat-only)
arm64: switch to generic compat rt_sigaction()
arm64: switch compat to generic old sigsuspend
arm64: switch to generic compat rt_sigqueueinfo()
arm64: switch to generic compat rt_sigpending()
arm64: switch to generic compat rt_sigprocmask()
arm64: switch to generic sigaltstack
sparc: switch to generic old sigsuspend
sparc: COMPAT_SYSCALL_DEFINE does all sign-extension as well as SYSCALL_DEFINE
sparc: kill sign-extending wrappers for native syscalls
kill sparc32_open()
sparc: switch to use of generic old sigaction
sparc: switch sys_compat_rt_sigaction() to COMPAT_SYSCALL_DEFINE
mips: switch to generic sys_fork() and sys_clone()
...
Merge tag 'virt' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM virtualization changes:
"This contains parts of the ARM KVM support that have dependencies on
other patches merged through the arm-soc tree. In combination with
patches coming through Russell's tree, this will finally add full
support for the kernel based virtual machine on ARM, which has been
awaited for some time now.
Further, we now have a separate platform for virtual machines and qemu
booting that is used by both Xen and KVM, separating these from the
Versatile Express reference implementation. Obviously, this new
platform is multiplatform capable so it can be combined with existing
machines in the same kernel."
* tag 'virt' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (38 commits)
ARM: arch_timer: include linux/errno.h
arm: arch_timer: add missing inline in stub function
ARM: KVM: arch_timers: Wire the init code and config option
ARM: KVM: arch_timers: Add timer world switch
ARM: KVM: arch_timers: Add guest timer core support
ARM: KVM: Add VGIC configuration option
ARM: KVM: VGIC initialisation code
ARM: KVM: VGIC control interface world switch
ARM: KVM: VGIC interrupt injection
ARM: KVM: vgic: retire queued, disabled interrupts
ARM: KVM: VGIC virtual CPU interface management
ARM: KVM: VGIC distributor handling
ARM: KVM: VGIC accept vcpu and dist base addresses from user space
ARM: KVM: Initial VGIC infrastructure code
ARM: KVM: Keep track of currently running vcpus
KVM: ARM: Introduce KVM_ARM_SET_DEVICE_ADDR ioctl
ARM: gic: add __ASSEMBLY__ guard to C definitions
ARM: gic: define GICH offsets for VGIC support
ARM: gic: add missing distributor defintions
ARM: mach-virt: fixup machine descriptor after removal of sys_timer
...
Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 patches from Catalin Marinas:
- SMP support for the PSCI booting protocol (power state coordination
interface).
- Simple earlyprintk support.
- Platform devices populated by default from the DT (SoC-agnostic).
- CONTEXTIDR support (used by external trace tools).
* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
arm64: mm: update CONTEXTIDR register to contain PID of current process
arm64: atomics: fix grossly inconsistent asm constraints for exclusives
arm64: compat: use compat_uptr_t type for compat_ucontext.uc_link
arm64: Select ARCH_WANT_FRAME_POINTERS
arm64: Add kvm_para.h and xor.h generic headers
arm64: SMP: enable PSCI boot method
arm64: psci: add support for PSCI invocations from the kernel
arm64: SMP: rework the SMP code to be enabling method agnostic
arm64: perf: add guest vs host discrimination
arm64: add COMPAT_PSR_*_BIT flags
arm64: Add simple earlyprintk support
arm64: Populate the platform devices
pm_idle() on arm64 was a synonym for default_idle(),
so remove it and invoke default_idle() directly.
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
This patch is a port of 575320d62 ("ARM: 7445/1: mm: update CONTEXTIDR
register to contain PID of current process") from ARM that introduces a
new Kconfig option which, when enabled, causes the kernel to write the
PID of the current task into the CONTEXTIDR register on context switch.
This is useful when analysing hardware trace, since writes to this
register can be configured to emit an event into the trace stream.
Signed-off-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: contextidr_thread_switch() moved to mmu_context.h]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
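A minimal sketch of the switch hook described above, assuming the
upstream names contextidr_thread_switch() and task_pid_nr(); the ISB
ensures the trace stream observes the new PID before the switch
completes:

  /* Sketch: called on context switch when the Kconfig option is set. */
  static inline void contextidr_thread_switch(struct task_struct *next)
  {
  	asm(
  	"	msr	contextidr_el1, %0\n"	/* publish PID for trace */
  	"	isb"
  	:
  	: "r" (task_pid_nr(next)));
  }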
struct compat_ucontext * is a 64-bit pointer, so we need to use a
compat_uptr_t instead to avoid declaring a structure incompatible with
what AArch32 userspace expects.
Cc: <stable@vger.kernel.org>
Reported-by: Edmund Grimley-Evans <Edmund.Grimley-Evans@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
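A hedged sketch of the layout issue: a raw pointer field would be eight
bytes wide in the kernel's view and shift every subsequent member, so
the compat structure stores a compat_uptr_t instead (members after
uc_link abbreviated):

  struct compat_ucontext {
  	compat_ulong_t	uc_flags;
  	compat_uptr_t	uc_link;	/* 32-bit user pointer, as AArch32 expects */
  	/* ... uc_stack, uc_mcontext, uc_sigmask ... */
  };

Conversions at the user/kernel boundary then go through compat_ptr()
and ptr_to_compat() rather than plain casts.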
The arch_timer driver supports a superset of the functionality of the
arm_generic driver, and is not tied to a particular arch.
This patch moves arm64 to use the arch_timer driver, gaining additional
functionality in doing so, and removes the (now unused) arm_generic
driver. Timer-related hooks specific to arm64 are moved into
arch/arm64/kernel/time.c.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Wire the PSCI implementation into the SMP secondary startup
code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for the Power State Coordination Interface
defined by ARM, allowing Linux to request CPU-centric power-management
operations from firmware implementing the PSCI protocol.
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Marc: s/u32/u64/ in the relevant spots, and switch from an initcall
to a simpler init function]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
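The conduit to firmware is a single SMC with the PSCI function ID in
x0 and arguments in x1-x3; a simplified sketch of the invocation helper
(names and calling pattern follow the upstream style, details elided):

  /* Sketch: trap to firmware; the result comes back in x0. */
  static noinline int __invoke_psci_fn_smc(u64 function_id, u64 arg0,
  					   u64 arg1, u64 arg2)
  {
  	register u64 x0 asm("x0") = function_id;
  	register u64 x1 asm("x1") = arg0;
  	register u64 x2 asm("x2") = arg1;
  	register u64 x3 asm("x3") = arg2;

  	asm volatile("smc	#0"
  		     : "+r" (x0)
  		     : "r" (x1), "r" (x2), "r" (x3));

  	return x0;
  }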
In order to introduce PSCI support, let the SMP code handle
multiple enabling methods. This also allows CPUs to be booted
using different methods (though this feels a bit weird...).
In the process, move the spin-table code to its own file.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add minimal guest support to perf, so that it can distinguish whether
the PMU interrupt was taken in the host or the guest, and collect
some very basic information (guest PC, user vs kernel mode).
This is not feature complete though, as it doesn't support backtracing
in the guest.
Based on the x86 implementation, tested with KVM/arm64.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds support for the "earlyprintk=" parameter on the kernel
command line. The format is:
earlyprintk=<name>[,<addr>][,<options>]
where <name> is the name of the (UART) device, e.g. "pl011", <addr> is
the I/O address. The <options> aren't currently used.
The mapping of the earlyprintk device is done very early during kernel
boot and there are restrictions on which functions it can call. A
special early_io_map() function is added which creates the mapping from
the pre-defined EARLY_IOBASE to the device I/O address passed via the
kernel parameter. The pgd entry corresponding to EARLY_IOBASE is
pre-populated in head.S during kernel boot.
Only PL011 is currently supported and it is assumed that the interface
is already initialised by the boot loader before the kernel is started.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
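For example, a PL011 at a hypothetical base address of 0x1c090000
would be requested with:

  earlyprintk=pl011,0x1c090000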
This patch adds a device_initcall() to populate the platform devices
(of_default_bus_match_table). This allows SoC implementations that do
not require earlier initcalls to avoid any platform-specific code under
arch/arm64.
GIC and generic timer initialisation is done via FDT and CPU notifiers
independently of the SoC code.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
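The SoC-agnostic population amounts to a single initcall; a sketch
matching the description above:

  static int __init arm64_device_init(void)
  {
  	/* create platform devices for the default DT bus types */
  	of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
  	return 0;
  }
  device_initcall(arm64_device_init);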
Fix up all callers as they were before, making one change: an
unsigned module taints the kernel, but doesn't turn off lockdep.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This patch is an arm64 version of ce73ec6db4 ("powerpc/vdso: Remove
redundant locking in update_vsyscall_tz()").
Timezone data is not protected, so the sequence counter is not required
to ensure consistency. Furthermore, having multiple paths updating the
counter leads to a race between update_vsyscall and update_vsyscall_tz,
so remove the timezone sequence counting from both the kernel and the
vdso.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
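With the sequence counting gone, the timezone update reduces to two
plain stores into the vdso data page; a sketch assuming the arm64
vdso_data field names:

  void update_vsyscall_tz(void)
  {
  	/* timezone data is not seqcount-protected by design */
  	vdso_data->tz_minuteswest = sys_tz.tz_minuteswest;
  	vdso_data->tz_dsttime	  = sys_tz.tz_dsttime;
  }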
This fixes up all of the smaller arches that had __dev* markings for
their platform-specific drivers.
CONFIG_HOTPLUG is going away as an option. As a result, the __dev*
markings need to be removed.
This change removes the use of __devinit, __devexit_p, __devinitdata,
__devinitconst, and __devexit from these drivers.
Based on patches originally written by Bill Pemberton, but redone by me
in order to handle some of the coding style issues better, by hand.
Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Bob Liu <lliubbo@gmail.com>
Cc: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Myron Stowe <myron.stowe@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Thierry Reding <thierry.reding@avionic-design.de>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Yong Zhang <yong.zhang0@gmail.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Jan Glauber <jang@linux.vnet.ibm.com>
Cc: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Cc: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This function is used by sparc, powerpc, tile and arm64 for compat support.
The patch adds a generic implementation with a wrapper for PowerPC to do
the u32->int sign extension.
The reason for a single patch covering powerpc, tile, sparc and arm64 is
to keep it bisectable, otherwise kernel building may fail with mismatched
function declarations.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com> [for tile]
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull big execve/kernel_thread/fork unification series from Al Viro:
"All architectures are converted to new model. Quite a bit of that
stuff is actually shared with architecture trees; in such cases it's
literally shared branch pulled by both, not a cherry-pick.
A lot of ugliness and black magic is gone (-3KLoC total in this one):
- kernel_thread()/kernel_execve()/sys_execve() redesign.
We don't do syscalls from kernel anymore for either kernel_thread()
or kernel_execve():
kernel_thread() is essentially clone(2) with callback run before we
return to userland, the callbacks either never return or do
successful do_execve() before returning.
kernel_execve() is a wrapper for do_execve() - it doesn't need to
do transition to user mode anymore.
As a result kernel_thread() and kernel_execve() are
arch-independent now - they live in kernel/fork.c and fs/exec.c
resp. sys_execve() is also in fs/exec.c and it's completely
architecture-independent.
- daemonize() is gone, along with its parts in fs/*.c
- struct pt_regs * is no longer passed to do_fork/copy_process/
copy_thread/do_execve/search_binary_handler/->load_binary/do_coredump.
- sys_fork()/sys_vfork()/sys_clone() unified; some architectures
still need wrappers (ones with callee-saved registers not saved in
pt_regs on syscall entry), but the main part of those suckers is in
kernel/fork.c now."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (113 commits)
do_coredump(): get rid of pt_regs argument
print_fatal_signal(): get rid of pt_regs argument
ptrace_signal(): get rid of unused arguments
get rid of ptrace_signal_deliver() arguments
new helper: signal_pt_regs()
unify default ptrace_signal_deliver
flagday: kill pt_regs argument of do_fork()
death to idle_regs()
don't pass regs to copy_process()
flagday: don't pass regs to copy_thread()
bfin: switch to generic vfork, get rid of pointless wrappers
xtensa: switch to generic clone()
openrisc: switch to use of generic fork and clone
unicore32: switch to generic clone(2)
score: switch to generic fork/vfork/clone
c6x: sanitize copy_thread(), get rid of clone(2) wrapper, switch to generic clone()
take sys_fork/sys_vfork/sys_clone prototypes to linux/syscalls.h
mn10300: switch to generic fork/vfork/clone
h8300: switch to generic fork/vfork/clone
tile: switch to generic clone()
...
Conflicts:
arch/microblaze/include/asm/Kbuild
In order to be able to reuse the save-restore code in KVM, move
it to a pair of macros, similar to what the 32bit code does.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The architecture doesn't mandate any reset value for vttbr_el2.
Better set it to a known value before some HYP code gets confused.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If booted in EL2, install a dummy hypervisor whose only purpose
is to be replaced by a full-fledged one.
A minimal API allows the caller to:
- obtain the current HYP vectors (__hyp_get_vectors)
- set new HYP vectors (__hyp_set_vectors)
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To be able to signal the availability of EL2 to other parts of
the kernel, record the boot mode.
Once booted, two predicates indicate if HYP mode is available,
and if not, whether this is due to a boot mode mismatch or not.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
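A sketch of the two predicates, assuming the entry level of the boot
CPU and of the secondaries is recorded in a small __boot_cpu_mode[]
array as described:

  extern u32 __boot_cpu_mode[2];	/* [0]: boot CPU, [1]: secondaries */

  static inline bool is_hyp_mode_available(void)
  {
  	return __boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 &&
  	       __boot_cpu_mode[1] == BOOT_CPU_MODE_EL2;
  }

  static inline bool is_hyp_mode_mismatched(void)
  {
  	/* some CPUs entered at EL2, others did not */
  	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
  }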
This macro is also useful to other bits defining vectors (hypervisor
stub, KVM...).
Move it to a common location.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We want to use the virtual counter at EL0, as the physical counter
may not track the current clocksource for guests running under a
hypervisor.
This patch updates the vdso and generic timer driver to use the virtual
counter. The kernel EL2 entry code is also updated to ensure that the
virtual offset is initialised to zero.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Shifting the nanosecond component of the computed timespec early can
lead to sub-ns inaccuracies when using the truncated value as input to
further arithmetic for things like conversions to monotonic time.
This patch defers the timespec shifting until after the final value has
been computed.
Reported-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
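In hedged pseudo-C terms (variable names illustrative), the change
moves the right shift past the additions so that no fractional
nanoseconds are discarded early:

  /* before: each term truncated to whole ns before summing */
  nsec  = ((cycles - cycle_last) * mult) >> shift;
  nsec += xtime_nsec >> shift;

  /* after: accumulate in shifted fixed-point units, shift once */
  nsec   = (cycles - cycle_last) * mult;
  nsec  += xtime_nsec;
  nsec >>= shift;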
In preparation for sub-ns precision in the vdso timespec maths, change
the __do_get_tspec register allocation so that we return the clocksource
shift value instead of the unused xtime tspec.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When returning coarse realtime values from clock_gettime, we must still
check the sequence counter to ensure that the kernel does not update
the vdso datapage whilst we are loading the coarse timespec as this
could potentially result in time appearing to go backwards.
This patch delays the coarse realtime check until after we have loaded
successfully from the vdso datapage. This does mean that we always load
the wtm timespec, but conditionalising the load and adding an extra
sequence test is unlikely to buy us anything other than messy code,
particularly as the sequence test implies a read barrier.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The generic timer clocksource has 56 bits of precision and as such must
be masked appropriately after we have read it. The current mask
generated by a movn instruction is off by 4 bits, so we accidentally
include the top 4 bits in the final value.
This patch fixes the broken mask.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
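The arithmetic, sketched in C with the movn operands made explicit
(names illustrative):

  u64 good_mask = ~(0xff00ULL << 48);	/* 0x00ffffffffffffff: bits [55:0] */
  u64 bad_mask  = ~(0x0ff0ULL << 48);	/* 0xf00fffffffffffff: top 4 bits leak */

  cycles &= good_mask;			/* keep only the 56 valid bits */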
There's no reason to mark compat_get_sigframe inline explicitly, so
remove the annotation and let the compiler decide what's best.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We only have one type of frame (rt_sigframe) for arm64, so just return
that type directly and dispense with the framesize argument, which is
presumably a hangover from code copied from arch/arm/.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
setup_return is a void function, so make compat_setup_return look the
same rather than unconditionally returning 0.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
To allow debuggers to unwind through signal frames, we create a fake
stack unwinding prologue containing the link register and frame pointer
of the interrupted context. The signal frame is then offset by 16 bytes
to make room for the two saved registers which are pushed onto the frame
of the *interrupted* context, rather than placed directly above the
signal stack.
This doesn't work when an alternative signal stack is set up for a SEGV
handler, which is raised in response to RLIMIT_STACK being reached. In
this case, we try to push the unwinding prologue onto the full stack and
subsequently take a fault which we fail to resolve, causing setup_return
to return -EFAULT and handle_signal to force_sigsegv on the current task.
This patch fixes the problem by including the unwinding prologue as part
of the rt_sigframe definition, which is populated during setup_sigframe,
ensuring that it always ends up on the signal stack.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
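With the fix, the fake prologue lives inside the frame itself, so it is
allocated wherever the frame is, alternative signal stack included; a
sketch of the resulting layout:

  struct rt_sigframe {
  	struct siginfo info;
  	struct ucontext uc;
  	u64 fp;		/* frame pointer of the interrupted context */
  	u64 lr;		/* link register of the interrupted context */
  };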
Backmerge from mainline commit that introduced a trivial conflict in
arch/arm64/kernel/process.c - a bunch of functions removed next to the
place where kernel_thread() used to be.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Commit 149c24151e ("ARM: SMP: use a timing out completion for cpu
hotplug") modified arm's CPU up path to use completions. It seems that
we only got half of this patch for arm64, so add the missing call to
complete.
Reported-by: Jon Brawn <jon.brawn@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
struct user_fp does not exist for arm64, so use struct user_fpsimd_state
instead for the ELF core dumping definitions. Furthermore, since we use
regset-based core dumping, we do not need definitions for dump_task_regs
and dump_fpu.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
We currently use a fake event encoding (0xFF) to indicate CPU cycles so
that we don't waste an event counter and can target the hardware cycle
counter instead.
The problem with this approach is that the event space defined by the
architecture permits an implementation to allocate 0xFF for some other
event.
This patch uses the architected cycle counter encoding (0x11) so that
we avoid potentially clashing with event encodings on future CPU
implementations.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
If a debugger tries to zero a hardware debug control register, the
kernel will try to infer both the type and length of the breakpoint
in order to sanity-check against the requested regset type. This will
fail because the encoding will appear as a zero-length breakpoint.
This patch changes the control register setting so that disabled
breakpoints are treated as HW_BREAKPOINT_EMPTY and no further
sanity-checking is required.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The user_hwdebug_state structure contains implicit padding to conform to
the alignment requirements of the AArch64 ABI (namely that aggregates
must be aligned to their most aligned member).
This patch fixes the ptrace functions operating on struct
user_hwdebug_state so that the padding is handled correctly.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
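A sketch of the structure in question, with the ABI's implicit padding
written out as explicit pad fields that the ptrace code must now copy
around rather than across:

  struct user_hwdebug_state {
  	__u32	dbg_info;
  	__u32	pad;		/* aggregate is 8-byte aligned */
  	struct {
  		__u64	addr;
  		__u32	ctrl;
  		__u32	pad;	/* ditto, per element */
  	} dbg_regs[16];
  };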
According to Documentation/arm64/booting.txt, the kernel image must be
loaded at a pre-defined offset from the start of RAM so that the kernel
can calculate PHYS_OFFSET based on this address. If the DT contains
memory blocks below this PHYS_OFFSET, report them and ignore the
corresponding memory range.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
With commit 576094b7 (time: Introduce new GENERIC_TIME_VSYSCALL) the old
update_vsyscall() prototype is no longer available. This patch updates
the arm64 port.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: John Stultz <john.stultz@linaro.org>
This patch converts the arm64 port to use the generic sys_execve()
implementation removing the arm64-specific (compat_)sys_execve_wrapper()
functions.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch enables CONFIG_GENERIC_KERNEL_EXECVE on arm64 and removes the
arm64-specific implementation of kernel_execve().
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch enables CONFIG_GENERIC_KERNEL_THREAD on arm64, changes
copy_thread to cope with kernel thread creation and adapts
ret_from_fork accordingly. The arm64-specific kernel_thread
implementation is no longer needed.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-uapi' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 uapi disintegration from Catalin Marinas:
"UAPI headers for arm64 together with some clean-up to make it
possible:
- Do not export the COMPAT_* definitions to user
- Simplify the compat unistd32.h definitions and remove the
__SYSCALL_COMPAT guard
- Disintegrate the arch/arm64/include/asm/* headers"
* tag 'arm64-uapi' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
UAPI: (Scripted) Disintegrate arch/arm64/include/asm
arm64: Do not export the compat-specific definitions to the user
arm64: Do not include asm/unistd32.h in asm/unistd.h
arm64: Remove unused definitions from asm/unistd32.h
getname() is intended to copy pathname strings from userspace into a
kernel buffer. The result is just a string in kernel space. It would
however be quite helpful to be able to attach some ancillary info to
the string.
For instance, we could attach some audit-related info to reduce the
amount of audit-related processing needed. When auditing is enabled,
we could also call getname() on the string more than once and not
need to recopy it from userspace.
This patchset converts the getname()/putname() interfaces to return
a struct instead of a string. For now, the struct just tracks the
string in kernel space and the original userland pointer for it.
Later, we'll add other information to the struct as it becomes
convenient.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
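Initially the struct carries just the two pointers mentioned above; a
sketch:

  struct filename {
  	const char		*name;	/* string copied into kernel space */
  	const char __user	*uptr;	/* original userland pointer */
  };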
This patch adds #ifdef __KERNEL__ guards around the COMPAT_* definitions
to avoid exporting them to userspace. AArch32 users requiring the kernel
headers must use those generated with ARCH=arm.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
This patch only includes asm/unistd32.h where necessary and removes its
inclusion in the asm/unistd.h file. The __SYSCALL_COMPAT guard is
dropped.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
This patch removes the compat __NR_* definitions from the unistd32.h
file and only keeps those that are used by the AArch64 kernel with a new
__NR_compat_* prefix. The additional wrapper definitions in
arch/arm64/kernel/sys32.S have been removed and the actual wrapper names
included in the asm/unistd32.h file.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
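A few illustrative entries of the renamed definitions (values follow
the ARM EABI numbering):

  #define __NR_compat_restart_syscall	0
  #define __NR_compat_exit		1
  #define __NR_compat_sigreturn		119
  #define __NR_compat_rt_sigreturn	173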
The task_work_run() implementation had the side effect of enabling
interrupts. With commit ac3d0da8 (task_work: Make task_work_add()
lockless), interrupts are no longer enabled, revealing the bug in the
arch code. This patch enables interrupts explicitly before calling
do_notify_resume().
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The generic implementation of compat_sys_sendfile() was introduced
by commit 8f9c0119. This patch removes the arm64 implementation in
favour of the generic one.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a preparatory patch for the introduction of NT_SIGINFO elf note.
Make the location of compat_siginfo_t uniform across eight architectures
which have it. Now it can be pulled in by including asm/compat.h or
linux/compat.h.
Most of the copies are verbatim. compat_uid[32]_t had to be replaced by
__compat_uid[32]_t. compat_uptr_t had to be moved up before
compat_siginfo_t in asm/compat.h on a several architectures (tile already
had it moved up). compat_sigval_t had to be relocated from linux/compat.h
to asm/compat.h.
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Amerigo Wang <amwang@redhat.com>
Cc: "Jonathan M. Foote" <jmfoote@cert.org>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Pedro Alves <palves@redhat.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The use of regsets has removed the need for many private ptrace requests,
so remove the corresponding definitions from the user-visible ptrace.h
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds Makefile and Kconfig files required for building an
AArch64 kernel.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
This patch adds support for the ARM generic timers with A64 instructions
for accessing the timer registers. It uses the physical counter as the
clock source and the virtual counter as sched_clock.
The timer frequency can be specified via DT or read from the CNTFRQ_EL0
register. The physical counter is also accessible from user space
allowing fast gettimeofday() implementation.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
This patch adds support for loadable modules. Loadable modules are
loaded 64MB below the kernel image due to branch relocation restrictions
(see Documentation/arm64/memory.txt).
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
This patch adds udelay, memory and bit operations together with the
ksyms exports.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
This patch adds support for the AArch64 performance counters.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
This patch adds support for FP/ASIMD register bank saving and restoring
during context switch and FP exception handling to generate SIGFPE.
There are 32 128-bit registers and the context switching is currently
done non-lazily. Benchmarks on real hardware are required before
implementing lazy FP state saving/restoring.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
This patch adds support for 32-bit applications. The vectors page is a
binary blob mapped into the application user space at 0xffff0000 (the
AArch64 toolchain does not support compilation of AArch32 code). Full
compatibility with ARMv7 user space is supported. The use of deprecated
ARMv7 functionality (SWP, CP15 barriers) has been disabled by default on
AArch64 kernels and unaligned LDM/STM is not supported.
Please note that only the ARM 32-bit EABI is supported, so no OABI
compatibility.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
This patch adds support for signal handling. The sigreturn is done via
VDSO, introduced by a previous patch. The SA_RESTORER is still defined
as it is required for 32-bit (compat) support but it is not to be used
for 64-bit applications.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
This patch adds VDSO support for 64-bit applications. The VDSO code is
currently used for sys_rt_sigreturn() and optimised gettimeofday()
(using the user-accessible generic counter).
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>