xdp_umem.c had overlapping changes between the 64-bit math fix
for the calculation of npgs and the removal of the zerocopy
memory type which got rid of the chunk_size_nohdr member.
The mlx5 Kconfig conflict is a case where we just take the
net-next copy of the Kconfig entry dependency, as it takes on
the ESWITCH dependency via one level of indirection, which is
what the conflicting change in 'net' is trying to ensure.
Signed-off-by: David S. Miller <davem@davemloft.net>
Support for Clang's Shadow Call Stack in the kernel
(Sami Tolvanen and Will Deacon)
* for-next/scs:
arm64: entry-ftrace.S: Update comment to indicate that x18 is live
scs: Move DEFINE_SCS macro into core code
scs: Remove references to asm/scs.h from core code
scs: Move scs_overflow_check() out of architecture code
arm64: scs: Use 'scs_sp' register alias for x18
scs: Move accounting into alloc/free functions
arm64: scs: Store absolute SCS stack pointer value in thread_info
efi/libstub: Disable Shadow Call Stack
arm64: scs: Add shadow stacks for SDEI
arm64: Implement Shadow Call Stack
arm64: Disable SCS for hypervisor code
arm64: vdso: Disable Shadow Call Stack
arm64: efi: Restore register x18 if it was corrupted
arm64: Preserve register x18 when CPU is suspended
arm64: Reserve register x18 from general allocation with SCS
scs: Disable when function graph tracing is enabled
scs: Add support for stack usage debugging
scs: Add page accounting for shadow call stack allocations
scs: Add support for Clang's Shadow Call Stack (SCS)
KVM CPU errata rework
(Andrew Scull and Marc Zyngier)
* for-next/kvm/errata:
KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h
arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}
Support for Branch Target Identification (BTI) in user and kernel
(Mark Brown and others)
* for-next/bti: (39 commits)
arm64: vdso: Fix CFI directives in sigreturn trampoline
arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
arm64: bti: Fix support for userspace only BTI
arm64: kconfig: Update and comment GCC version check for kernel BTI
arm64: vdso: Map the vDSO text with guarded pages when built for BTI
arm64: vdso: Force the vDSO to be linked as BTI when built for BTI
arm64: vdso: Annotate for BTI
arm64: asm: Provide a mechanism for generating ELF note for BTI
arm64: bti: Provide Kconfig for kernel mode BTI
arm64: mm: Mark executable text as guarded pages
arm64: bpf: Annotate JITed code for BTI
arm64: Set GP bit in kernel page tables to enable BTI for the kernel
arm64: asm: Override SYM_FUNC_START when building the kernel with BTI
arm64: bti: Support building kernel C code using BTI
arm64: Document why we enable PAC support for leaf functions
arm64: insn: Report PAC and BTI instructions as skippable
arm64: insn: Don't assume unrecognized HINTs are skippable
arm64: insn: Provide a better name for aarch64_insn_is_nop()
arm64: insn: Add constants for new HINT instruction decode
arm64: Disable old style assembly annotations
...
ACPI and IORT updates
(Lorenzo Pieralisi)
* for-next/acpi:
ACPI/IORT: Remove the unused __get_pci_rid()
ACPI/IORT: Fix PMCG node single ID mapping handling
ACPI: IORT: Add comments for not calling acpi_put_table()
ACPI: GTDT: Put GTDT table after parsing
ACPI: IORT: Add extra message "applying workaround" for off-by-1 issue
ACPI/IORT: work around num_ids ambiguity
Revert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"
ACPI/IORT: take _DMA methods into account for named components
BPF JIT optimisations for immediate value generation
(Luke Nelson)
* for-next/bpf:
bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates
bpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates
arm64: insn: Fix two bugs in encoding 32-bit logical immediates
Addition of new CPU ID register fields and removal of some benign sanity checks
(Anshuman Khandual and others)
* for-next/cpufeature: (27 commits)
KVM: arm64: Check advertised Stage-2 page size capability
arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
arm64/cpufeature: Introduce ID_MMFR5 CPU register
arm64/cpufeature: Introduce ID_DFR1 CPU register
arm64/cpufeature: Introduce ID_PFR2 CPU register
arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
arm64/cpufeature: Drop open encodings while extracting parange
arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
arm64: cpufeature: Group indexed system register definitions by name
arm64: cpufeature: Extend comment to describe absence of field info
arm64: drop duplicate definitions of ID_AA64MMFR0_TGRAN constants
arm64: cpufeature: Add an overview comment for the cpufeature framework
...
Minor documentation tweaks for silicon errata and booting requirements
(Rob Herring and Will Deacon)
* for-next/docs:
arm64: silicon-errata.rst: Sort the Cortex-A55 entries
arm64: docs: Mandate that the I-cache doesn't hold stale kernel text
Minor Kconfig cleanups
(Geert Uytterhoeven)
* for-next/kconfig:
arm64: cpufeature: Add "or" to mitigations for multiple errata
arm64: Sort vendor-specific errata
Miscellaneous updates
(Ard Biesheuvel and others)
* for-next/misc:
arm64: mm: Add asid_gen_match() helper
arm64: stacktrace: Factor out some common code into on_stack()
arm64: Call debug_traps_init() from trap_init() to help early kgdb
arm64: cacheflush: Fix KGDB trap detection
arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()
arm64: kexec_file: print appropriate variable
arm: mm: use __pfn_to_section() to get mem_section
arm64: Reorder the macro arguments in the copy routines
efi/libstub/arm64: align PE/COFF sections to segment alignment
KVM: arm64: Drop PTE_S2_MEMATTR_MASK
arm64/kernel: Fix range on invalidating dcache for boot page tables
arm64: set TEXT_OFFSET to 0x0 in preparation for removing it entirely
arm64: lib: Consistently enable crc32 extension
arm64/mm: Use phys_to_page() to access pgtable memory
arm64: smp: Make cpus_stuck_in_kernel static
arm64: entry: remove unneeded semicolon in el1_sync_handler()
arm64/kernel: vmlinux.lds: drop redundant discard/keep macros
arm64: drop GZFLAGS definition and export
arm64: kexec_file: Avoid temp buffer for RNG seed
arm64: rename stext to primary_entry
Perf PMU driver updates
(Tang Bin and others)
* for-next/perf:
pmu/smmuv3: Clear IRQ affinity hint on device removal
drivers/perf: hisi: Permit modular builds of HiSilicon uncore drivers
drivers/perf: hisi: Fix typo in events attribute array
drivers/perf: arm_spe_pmu: Avoid duplicate printouts
drivers/perf: arm_dsu_pmu: Avoid duplicate printouts
Pointer authentication updates and support for vmcoreinfo
(Amit Daniel Kachhap and Mark Rutland)
* for-next/ptr-auth:
Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'
arm64/crash_core: Export KERNELPACMASK in vmcoreinfo
arm64: simplify ptrauth initialization
arm64: remove ptrauth_keys_install_kernel sync arg
SDEI cleanup and non-critical fixes
(James Morse and others)
* for-next/sdei:
firmware: arm_sdei: Document the motivation behind these set_fs() calls
firmware: arm_sdei: remove unused interfaces
firmware: arm_sdei: Put the SDEI table after using it
firmware: arm_sdei: Drop check for /firmware/ node and always register driver
SMCCC updates and refactoring
(Sudeep Holla)
* for-next/smccc:
firmware: smccc: Fix missing prototype warning for arm_smccc_version_init
firmware: smccc: Add function to fetch SMCCC version
firmware: smccc: Refactor SMCCC specific bits into separate file
firmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead
firmware: smccc: Add the definition for SMCCCv1.2 version/error codes
firmware: smccc: Update link to latest SMCCC specification
firmware: smccc: Add HAVE_ARM_SMCCC_DISCOVERY to identify SMCCC v1.1 and above
vDSO cleanup and non-critical fixes
(Mark Rutland and Vincenzo Frascino)
* for-next/vdso:
arm64: vdso: Add --eh-frame-hdr to ldflags
arm64: vdso: use consistent 'map' nomenclature
arm64: vdso: use consistent 'abi' nomenclature
arm64: vdso: simplify arch_vdso_type ifdeffery
arm64: vdso: remove aarch32_vdso_pages[]
arm64: vdso: Add '-Bsymbolic' to ldflags
With ARMv8.5-GTG, the hardware (or more likely a hypervisor) can
advertise the supported Stage-2 page sizes.
Let's check this at boot time.
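A rough sketch of the kind of check this enables (the TGRAN_2 field name
below is an assumption, not necessarily the exact kernel macro; the
ARMv8.5-GTG encoding of 0 = as stage-1, 1 = unsupported, 2 = supported is
architectural):
```c
/*
 * Illustrative sketch only: refuse to enable Stage-2 if the hardware (or
 * hypervisor) says the kernel's PAGE_SIZE is not usable at Stage-2.
 */
static int check_stage2_page_size(u64 mmfr0)
{
	unsigned int tgran_2;

	/* ID_AA64MMFR0_TGRAN_2_SHIFT is assumed for illustration. */
	tgran_2 = cpuid_feature_extract_unsigned_field(mmfr0,
						ID_AA64MMFR0_TGRAN_2_SHIFT);
	switch (tgran_2) {
	case 1:		/* Explicitly not supported at Stage-2 */
		pr_err("PAGE_SIZE is not supported at Stage-2\n");
		return -EINVAL;
	case 0:		/* Follow the corresponding Stage-1 field */
	case 2:		/* Explicitly advertised as supported */
		return 0;
	default:
		return -EINVAL;
	}
}
```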
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
If boot_secondary() succeeded but cpu_online() still reported the CPU as
offline in __cpu_up(), -EIO used to be returned; since commit d22b115cbf
("arm64/kernel: Simplify __cpu_up() by bailing out early"), 0 is returned
instead.
As a result, bringup_wait_for_ap() makes the primary core wait for a long
time, which may cause a boot failure.
Return -EIO again under the same conditions.
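A simplified sketch of the intended error path (not the verbatim kernel
code; the timeout value and surrounding details are elided):
```c
/*
 * Sketch of __cpu_up() after this fix: if the firmware call succeeded but
 * the secondary never showed up as online, report -EIO so the caller does
 * not assume a successful bringup.
 */
static int __cpu_up(unsigned int cpu, struct task_struct *idle)
{
	int ret = boot_secondary(cpu, idle);

	if (ret) {
		pr_err("CPU%u: failed to boot: %d\n", cpu, ret);
		return ret;
	}

	/* Wait for the secondary to mark itself online (details elided). */
	wait_for_completion_timeout(&cpu_running, msecs_to_jiffies(1000));
	if (cpu_online(cpu))
		return 0;

	pr_crit("CPU%u: failed to come online\n", cpu);
	return -EIO;	/* previously dropped by d22b115cbf */
}
```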
Fixes: d22b115cbf ("arm64/kernel: Simplify __cpu_up() by bailing out early")
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
Tested-by: Yuji Ishikawa <yuji2.ishikawa@toshiba.co.jp>
Acked-by: Will Deacon <will@kernel.org>
Cc: Gavin Shan <gshan@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200527233457.2531118-1-nobuhiro1.iwamatsu@toshiba.co.jp
[catalin.marinas@arm.com: return -EIO at the end of the function]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
More EFI changes for v5.8:
- Rename pr_efi/pr_efi_err to efi_info/efi_err, and use them consistently
- Simplify and unify initrd loading
- Parse the builtin command line on x86 (if provided)
- Implement printk() support, including support for wide character strings
- Some fixes for issues introduced by the first batch of v5.8 changes
- Fix a missing prototypes warning
- Simplify GDT handling in early mixed mode thunking code
- Some other minor fixes and cleanups
Conflicts:
drivers/firmware/efi/libstub/efistub.h
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The MSCC bug fix in 'net' had to be slightly adjusted because the
register accesses are done slightly differently in net-next.
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a missing prototype warning by adding a forward declaration
for the PE/COFF entrypoint, and while at it, align the function
name between the x86 and ARM versions of the stub.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Daniel reports that the .cfi_startproc is misplaced for the sigreturn
trampoline, which causes LLVM's unwinder to misbehave:
| I run into this with LLVM’s unwinder.
| This combination was always broken.
This prompted Dave to question our use of CFI directives more generally,
and I ended up going down a rabbit hole trying to figure out how this
very poorly documented stuff gets used.
Move the CFI directives so that the "mysterious NOP" is included in
the .cfi_{start,end}proc block and add a bunch of comments so that I
can save myself another headache in future.
Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reported-by: Dave Martin <dave.martin@arm.com>
Reported-by: Daniel Kiss <daniel.kiss@arm.com>
Tested-by: Daniel Kiss <daniel.kiss@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
For better or worse, GDB relies on the exact instruction sequence in the
VDSO sigreturn trampoline in order to unwind from signals correctly.
Commit c91db232da ("arm64: vdso: Convert to modern assembler annotations")
unfortunately added a BTI C instruction to the start of __kernel_rt_sigreturn,
which breaks this check. Thankfully, it's also not required, since the
trampoline is called from a RET instruction when returning from the signal
handler.
Remove the unnecessary BTI C instruction from __kernel_rt_sigreturn,
and do the same for the 32-bit VDSO as well for good measure.
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Fixes: c91db232da ("arm64: vdso: Convert to modern assembler annotations")
Signed-off-by: Will Deacon <will@kernel.org>
Quoth the man page:
```
If the tracee was restarted by PTRACE_SYSCALL or PTRACE_SYSEMU, the
tracee enters syscall-enter-stop just prior to entering any system
call (which will not be executed if the restart was using
PTRACE_SYSEMU, regardless of any change made to registers at this
point or how the tracee is restarted after this stop).
```
The parenthetical comment is currently true on x86 and powerpc,
but not currently true on arm64. arm64 re-checks the _TIF_SYSCALL_EMU
flag after the syscall entry ptrace stop. However, at this point,
it reflects which method was used to re-start the syscall
at the entry stop, rather than the method that was used to reach it.
Fix that by recording the original flag before performing the ptrace
stop, bringing the behavior in line with documentation and x86/powerpc.
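A simplified sketch of the idea (the ptrace-stop helper below is
hypothetical; the real code lives in the arm64 syscall entry tracing
path):
```c
/*
 * Sketch only: capture TIF_SYSCALL_EMU *before* the ptrace stop, so the
 * decision to skip the syscall reflects how the tracee reached this stop
 * rather than how it was restarted afterwards.
 */
static int report_syscall_entry_sketch(struct pt_regs *regs)
{
	bool syscall_emu = test_thread_flag(TIF_SYSCALL_EMU);	/* snapshot */

	if (do_ptrace_stop(regs))	/* hypothetical: tracer may restart us */
		return -1;		/* tracer asked to abort the syscall */

	/* PTRACE_SYSEMU: never execute the syscall, regardless of restart. */
	return syscall_emu ? -1 : 0;
}
```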
Fixes: f086f67485 ("arm64: ptrace: add support for syscall emulation")
Cc: <stable@vger.kernel.org> # 5.3.x-
Signed-off-by: Keno Fischer <keno@juliacomputing.com>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Bin Lu <Bin.Lu@arm.com>
[catalin.marinas@arm.com: moved 'flags' bit masking]
[catalin.marinas@arm.com: changed 'flags' type to unsigned long]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
APEI is unable to do all of its error handling work in nmi-context, so
it defers non-fatal work onto the irq_work queue. arch_irq_work_raise()
sends an IPI to the calling cpu, but this is not guaranteed to be taken
before returning to user-space.
Unless the exception interrupted a context with irqs-masked,
irq_work_run() can run immediately. Otherwise return -EINPROGRESS to
indicate ghes_notify_sea() found some work to do, but it hasn't
finished yet.
With this, apei_claim_sea() returning '0' means this external-abort was
also notification of a firmware-first RAS error, and that APEI has
processed the CPER records.
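A minimal sketch of how a caller is expected to treat that return value
(the fallback helper is hypothetical):
```c
/*
 * Sketch: in the synchronous external abort path, '0' from
 * apei_claim_sea() means APEI has already consumed the CPER records for
 * this firmware-first RAS error, so no further reporting is needed.
 */
static int do_sea_sketch(struct pt_regs *regs, unsigned long addr)
{
	if (apei_claim_sea(regs) == 0)
		return 0;	/* handled as a firmware-first error */

	/* Not claimed (or still -EINPROGRESS): fall back to normal reporting. */
	return report_unhandled_sea(regs, addr);	/* hypothetical helper */
}
```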
Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
A new kgdb feature will soon land (kgdb_earlycon) that lets us run
kgdb much earlier. In order for everything to work properly it's
important that the break hook is set up by the time we process
"kgdbwait".
Right now the break hook is set up in debug_traps_init() and that's
called from arch_initcall(). That's a bit too late since
kgdb_earlycon really needs things to be setup by the time the system
calls dbg_late_init().
We could fix this by adding call_break_hook() into early_brk64() and
that works fine. However, it's a little ugly. Instead, let's just
add a call to debug_traps_init() straight from trap_init(). There's
already a documented dependency between trap_init() and
debug_traps_init() and this makes the dependency more obvious rather
than just relying on a comment.
NOTE: this solution isn't early enough to let us select the
"ARCH_HAS_EARLY_DEBUG" KConfig option that is introduced by the
kgdb_earlycon patch series. That would only be set if we could do
breakpoints when early params are parsed. This patch only enables
"late early" breakpoints, AKA breakpoints when dbg_late_init() is
called. It's expected that this should be fine for most people.
It should also be noted that if you crash you can still end up in kgdb
earlier than debug_traps_init(). Since you don't need breakpoints to
debug a crash that's fine.
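Roughly, the change boils down to something like the following (sketch,
not the exact code; other BRK hooks registered in trap_init() are
elided):
```c
/*
 * Sketch: call debug_traps_init() directly from trap_init() instead of
 * relying on an arch_initcall(), so the BRK hooks are in place by the
 * time dbg_late_init() processes "kgdbwait".
 */
void __init trap_init(void)
{
	register_kernel_break_hook(&bug_break_hook);
	/* ... other BRK hooks ... */
	debug_traps_init();	/* was: arch_initcall(debug_traps_init) */
}
```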
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200513160501.1.I0b5edf030cc6ebef6ab4829f8867cdaea42485d8@changeid
Signed-off-by: Will Deacon <will@kernel.org>
The Shadow Call Stack pointer is held in x18, so update the ftrace
entry comment to indicate that it cannot be safely clobbered.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Defining static shadow call stacks is not architecture-specific, so move
the DEFINE_SCS() macro into the core header file.
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
There is nothing architecture-specific about scs_overflow_check() as
it's just a trivial wrapper around scs_corrupted().
For parity with task_stack_end_corrupted(), rename scs_corrupted() to
task_scs_end_corrupted() and call it from schedule_debug() when
CONFIG_SCHED_STACK_END_CHECK is enabled, which better reflects its
purpose as a debug feature to catch inadvertent overflow of the SCS.
Finally, remove the unused scs_overflow_check() function entirely.
This has absolutely no impact on architectures that do not support SCS
(currently arm64 only).
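For reference, the scheduler-side check ends up looking roughly like this
(sketch, not the verbatim kernel/sched/core.c code):
```c
/*
 * Sketch: schedule_debug() gains a shadow-call-stack end check alongside
 * the existing kernel stack check when CONFIG_SCHED_STACK_END_CHECK=y.
 */
static inline void schedule_debug_sketch(struct task_struct *prev)
{
#ifdef CONFIG_SCHED_STACK_END_CHECK
	if (task_stack_end_corrupted(prev))
		panic("corrupted stack end detected inside scheduler\n");

	if (task_scs_end_corrupted(prev))
		panic("corrupted shadow stack detected inside scheduler\n");
#endif
}
```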
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
x18 holds the SCS stack pointer value, so introduce a register alias to
make this easier to read in assembly code.
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Storing the SCS information in thread_info as a {base,offset} pair
introduces an additional load instruction on the ret-to-user path,
since the SCS stack pointer in x18 has to be converted back to an offset
by subtracting the base.
Replace the offset with the absolute SCS stack pointer value instead
and avoid the redundant load.
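In other words, the thread_info layout becomes roughly the following
(sketch; field names assumed from the arm64 headers):
```c
struct thread_info_sketch {
	/* ... unchanged fields ... */
#ifdef CONFIG_SHADOW_CALL_STACK
	void *scs_base;	/* base of the shadow call stack */
	void *scs_sp;	/* absolute SCS stack pointer, loaded straight into x18 */
#endif
};
```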
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Move the bpf verifier trace check into the new switch statement in
HEAD.
Resolve the overlapping changes in hinic, where bug fixes overlap
the addition of VF support.
Signed-off-by: David S. Miller <davem@davemloft.net>
This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
If we detect a corrupted x18, restore the register before jumping back
to potentially SCS instrumented code. This is safe, because the wrapper
is called with preemption disabled and a separate shadow stack is used
for interrupt handling.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Several actions are not mitigations for a single erratum, but for
multiple errata. However, printing a line like
CPU features: detected: ARM errata 1165522, 1530923
may give the false impression that all of the listed errata have been
detected. This can confuse the user, who may think his Cortex-A55 is
suddenly affected by a Cortex-A76 erratum.
Add "or" to all descriptions for mitigations for multiple errata, to
make it clear that only one or more of the errata printed are
applicable, and not necessarily all of them.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200512145255.5520-1-geert+renesas@glider.be
Signed-off-by: Will Deacon <will@kernel.org>
Recently the arm64 kernel gained support for the Armv8.3-A Pointer
Authentication feature. If this feature is enabled in the kernel and the
hardware supports address authentication, return addresses are signed and
stored on the stack to protect against ROP-style attacks, so kdump now
produces dumps containing signed LR values on the stack.
Any analysis tool working on such a dump may need the kernel PAC mask
from vmcoreinfo in order to recover the correct return addresses for
stack tracing and to resolve symbol names.
This patch is similar to commit ec6e822d1a ("arm64: expose user PAC
bit positions via ptrace") which exposes pac mask information via ptrace
interfaces.
The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so macros
like ptrauth_kernel_pac_mask can be used unguarded. This config protection
is confusing as the pointer authentication feature may be missing at
runtime even though this config is present.
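A sketch of the export (simplified relative to what the arm64
arch_crash_save_vmcoreinfo() actually records; treat the exact format
string as an assumption):
```c
/*
 * Sketch: publish the kernel-space PAC mask in vmcoreinfo so post-mortem
 * tools can strip the pointer-authentication signature from saved LRs.
 */
void arch_crash_save_vmcoreinfo(void)
{
	VMCOREINFO_NUMBER(VA_BITS);
	vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
			      system_supports_address_auth() ?
			      ptrauth_kernel_pac_mask() : 0);
}
```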
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1589202116-18265-1-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
This patch fixes two issues present in the current function for encoding
arm64 logical immediates when using the 32-bit variants of instructions.
First, the code does not correctly reject an all-ones 32-bit immediate,
and returns an undefined instruction encoding.
Second, the code incorrectly rejects some 32-bit immediates that are
actually encodable as logical immediates. The root cause is that the code
uses a default mask of 64-bit all-ones, even for 32-bit immediates.
This causes an issue later on when the default mask is used to fill the
top bits of the immediate with ones, shown here:
	/*
	 * Pattern: 0..01..10..01..1
	 *
	 * Fill the unused top bits with ones, and check if
	 * the result is a valid immediate (all ones with a
	 * contiguous ranges of zeroes).
	 */
	imm |= ~mask;
	if (!range_of_ones(~imm))
		return AARCH64_BREAK_FAULT;
To see the problem, consider an immediate of the form 0..01..10..01..1,
where the upper 32 bits are zero, such as 0x80000001. The code checks
if ~(imm | ~mask) contains a range of ones: the incorrect mask yields
1..10..01..10..0, which fails the check; the correct mask yields
0..01..10..0, which succeeds.
The fix for both issues is to generate a correct mask based on the
instruction immediate size, and use the mask to check for all-ones,
all-zeroes, and values wider than the mask.
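The essence of the fix can be illustrated with a standalone helper (an
illustration of the mask handling only, not the kernel's encoder itself);
the same mask is what later gets OR-ed in via 'imm |= ~mask', which is
what makes 32-bit patterns such as 0x80000001 pass range_of_ones():
```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustration: derive the mask from the element size (32 or 64) so that
 * a 32-bit all-ones value is rejected and bits above the element size
 * are caught.
 */
static bool logical_imm_precheck(uint64_t imm, unsigned int esz)
{
	/* esz is 32 or 64 depending on the instruction variant. */
	uint64_t mask = (esz == 64) ? ~0ULL : ((1ULL << esz) - 1);

	/* Reject all-zeroes, all-ones and values wider than the mask. */
	return imm != 0 && imm != mask && (imm & ~mask) == 0;
}
```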
Currently, arch/arm64/kvm/va_layout.c is the only user of this function,
which uses 64-bit immediates and therefore won't trigger these bugs.
We tested the new code against llvm-mc with all 1,302 encodable 32-bit
logical immediates and all 5,334 encodable 64-bit logical immediates.
Fixes: ef3935eeeb ("arm64: insn: Add encoder for bitwise operations using literals")
Suggested-by: Will Deacon <will@kernel.org>
Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200508181547.24783-2-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
The kernel is responsible for mapping the vDSO into userspace processes,
including mapping the text section as executable. Handle the mapping of
the vDSO for BTI similarly, mapping the text section as guarded pages so
the BTI annotations in the vDSO become effective when they are present.
This will mean that we can have BTI active for the vDSO in processes that
do not otherwise support BTI. This should not be an issue for any expected
use of the vDSO.
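Concretely, the mapping ends up along these lines (sketch only; the flag
and helper names are assumptions based on the arm64 vDSO code, and error
handling is omitted):
```c
/*
 * Sketch: add the guarded-page attribute to the vDSO text mapping when
 * the CPU supports BTI, so the BTI annotations in the vDSO take effect.
 */
static struct vm_area_struct *map_vdso_text_sketch(struct mm_struct *mm,
						   unsigned long addr,
						   unsigned long len,
						   struct vm_special_mapping *sm)
{
	/* Guarded pages only when the CPU actually implements BTI. */
	unsigned long gp_flags = system_supports_bti() ? VM_ARM64_BTI : 0;

	return _install_special_mapping(mm, addr, len,
					VM_READ | VM_EXEC | gp_flags |
					VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
					sm);
}
```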
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-12-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
When the kernel and hence vDSO are built with BTI enabled force the linker
to link the vDSO as BTI. This will cause the linker to warn if any of the
input files do not have the BTI annotation, ensuring that we don't silently
fail to provide a vDSO that is built and annotated for BTI when the
kernel is being built with BTI.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-11-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>