Christophe Leroy
de1cd07906
powerpc/32s: Use SPRN_SPRG_SCRATCH2 in DSI prolog
...
Use SPRN_SPRG_SCRATCH2 as an alternative scratch register in
the early part of DSI prolog in order to avoid clobbering
SPRN_SPRG_SCRATCH0/1 used by other prologs.
The 603 doesn't like a jump from DataLoadTLBMiss to the 10 nops
that now sit at the beginning of the DSI exception as a result of
the feature section. To work around this, add a jump as the alternative.
This also avoids fetching 10 nops for nothing.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/f9f8df2a2be93568768ef1ac793639f7914cf103.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:32 +11:00
Christophe Leroy
6285f9cff5
powerpc/32: Simplify EXCEPTION_PROLOG_1 macro
...
Make the code more readable with a clear CONFIG_VMAP_STACK
section and a clear non-CONFIG_VMAP_STACK section.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/c0f16cf432d22fc80097264d94649460d3dd761d.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:32 +11:00
Christophe Leroy
c4a22611bf
powerpc/603: Use SPRN_SDR1 to store the pgdir phys address
...
On the 603, SDR1 is not used.
In order to free SPRN_SPRG2, use SPRN_SDR1 to store the pgdir
phys addr.
But only some bits of SDR1 can be used (0xffff01ff).
As the pgdir is 4k aligned, rotate it by 4 bits to the left.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/7370574b49d8476878ce5480726197993cb76108.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:31 +11:00
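A minimal user-space C sketch of the alignment argument above (the 0xffff01ff mask comes from the commit text; the rotation helper is illustrative): rotating any 4k-aligned 32-bit physical address left by 4 bits keeps every significant bit inside the usable SDR1 bits.
  #include <assert.h>
  #include <stdint.h>

  static uint32_t rotl32(uint32_t x, unsigned int n)
  {
          return (x << n) | (x >> (32 - n));
  }

  int main(void)
  {
          /* Worst case: every address bit set, still 4k aligned. */
          uint32_t pgdir = 0xfffff000;
          /* Bits of SDR1 usable for storage, per the commit message. */
          uint32_t usable = 0xffff01ff;

          /* The rotated pgdir must not touch any reserved SDR1 bit. */
          assert((rotl32(pgdir, 4) & ~usable) == 0);
          return 0;
  }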
Christophe Leroy
7b107a71e7
powerpc/32s: Fix an FTR_SECTION_ELSE
...
An FTR_SECTION_ELSE sits in the middle of a
BEGIN_MMU_FTR_SECTION/ALT_MMU_FTR_SECTION_END_IFSET pair.
Change it to MMU_FTR_SECTION_ELSE.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/61790f1a91692950a6bb5bb53d6d514d9bcdad74.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:31 +11:00
Christophe Leroy
035b19a15a
powerpc/32s: Always map kernel text and rodata with BATs
...
Since commit 2b279c0348 ("powerpc/32s: Allow mapping with BATs with
DEBUG_PAGEALLOC"), there is no real situation where mapping without
BATs is required.
In order to simplify memory handling, always map kernel text
and rodata with BATs, even when the "nobats" kernel parameter is set.
Also fix the 603 TLB miss exceptions, which no longer require the
kernel page table when DEBUG_PAGEALLOC is enabled.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/da51f7ec632825a4ce43290a904aad61648408c0.1606285013.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:31 +11:00
Athira Rajeev
91668ab7db
powerpc/perf: MMCR0 control for PMU registers under PMCC=00
...
PowerISA v3.1 introduces a new control bit (PMCCEXT) for restricting
access to group B PMU registers in problem state when
MMCR0 PMCC=0b00. In problem state, when MMCR0 PMCC=0b00,
setting Monitor Mode Control Register bit 54 (MMCR0 PMCCEXT)
restricts read permission on the group B Performance Monitor
Registers (SIER, SIAR, SDAR and MMCR1). When this bit is set to zero,
the group B registers remain readable. On other platforms (like power9),
the older behaviour is retained and the group B PMU SPRs are readable.
This patch adds support for the MMCR0 PMCCEXT bit on power10 by enabling
it during boot and in the PMU event enable/disable callback functions.
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/1606409684-1589-8-git-send-email-atrajeev@linux.vnet.ibm.com
2020-12-04 01:01:29 +11:00
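A hedged C sketch of the idea (not the actual diff): MMCR0_PMCCEXT is assumed to be the bit-54 mask this series introduces; cpu_has_feature() and mtspr() are the existing kernel helpers.
  #include <asm/reg.h>            /* mtspr(), SPRN_MMCR0 */
  #include <asm/cputable.h>       /* cpu_has_feature(), CPU_FTR_ARCH_31 */

  /* Sketch: OR in PMCCEXT whenever MMCR0 is written while (re)enabling the
   * PMU on an ISA v3.1 CPU, so problem state cannot read SIER/SIAR/SDAR/MMCR1
   * while PMCC=0b00.
   */
  static inline void pmu_write_mmcr0(unsigned long mmcr0)
  {
          if (cpu_has_feature(CPU_FTR_ARCH_31))
                  mmcr0 |= MMCR0_PMCCEXT;         /* assumed bit-54 mask */
          mtspr(SPRN_MMCR0, mmcr0);
  }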
Aneesh Kumar K.V
ec0f9b98f7
powerpc/book3s64/pkeys: Optimize KUAP and KUEP feature disabled case
...
If FTR_BOOK3S_KUAP is disabled, the kernel will continue to run with the same AMR
value it was entered with. Hence there is a high chance that
we can return without restoring the AMR value. This also helps when
applications are not using the pkey feature: in that case, different
applications all have the same AMR value and we can avoid restoring
AMR there too.
Also avoid isync() when it is not really needed.
Do the same for IAMR.
null-syscall benchmark results:
With smap/smep disabled:
  Without patch:  957.95 ns  2778.17 cycles
  With patch:     858.38 ns  2489.30 cycles
With smap/smep enabled:
  Without patch: 1017.26 ns  2950.36 cycles
  With patch:    1021.51 ns  2962.44 cycles
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-23-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:28 +11:00
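A rough C sketch of the fast path described above (not the kernel's actual exit code, which lives in the KUAP entry/exit macros): only write the SPR, and only pay the isync, when the value actually has to change.
  #include <asm/reg.h>    /* mfspr()/mtspr(), SPRN_AMR */
  #include <asm/mmu.h>    /* mmu_has_feature() */
  #include <asm/synch.h>  /* isync() */

  static inline void restore_user_amr(unsigned long saved_amr)
  {
          /* Neither KUAP nor pkeys: the kernel never touched AMR. */
          if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP) &&
              !mmu_has_feature(MMU_FTR_PKEY))
                  return;

          /* Same value (e.g. the app never used pkeys): skip mtspr + isync. */
          if (mfspr(SPRN_AMR) != saved_amr) {
                  mtspr(SPRN_AMR, saved_amr);
                  isync();
          }
  }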
Aneesh Kumar K.V
48a8ab4eeb
powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.
...
Now that the kernel correctly stores/restores the userspace AMR/IAMR values, avoid
manipulating AMR and IAMR from the kernel on behalf of userspace.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-15-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:26 +11:00
Aneesh Kumar K.V
edc541ecaa
powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one.
...
We will remove thread.amr/iamr/uamor in a later patch.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-14-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:26 +11:00
Aneesh Kumar K.V
d5fa30e699
powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec
...
On fork, we inherit from the parent and on exec, we should switch to default_amr values.
Also, avoid changing the AMR register value within the kernel. The kernel now runs with
different AMR values.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-13-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:26 +11:00
Aneesh Kumar K.V
f643fcab74
powerpc/book3s64/pkeys: Inherit correctly on fork.
...
The child's thread.kuap value is inherited from the parent in copy_thread_tls(). We still
need to make sure that when the child returns from fork in the kernel, it starts with the
kernel default AMR value.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-12-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:25 +11:00
Aneesh Kumar K.V
8e560921b5
powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel
...
This prepares the kernel to operate with a different AMR/IAMR value than userspace.
For this, AMR/IAMR need to be saved and restored on entry and return from the
kernel.
With KUAP we modify the kernel AMR when accessing user addresses from the kernel
via the copy_to/from_user interfaces. We don't need to modify the IAMR value in
a similar fashion.
If MMU_FTR_PKEY is enabled we need to save AMR/IAMR in pt_regs on entering the
kernel from userspace. If not, we can assume that AMR/IAMR is not modified
from userspace.
We need to save AMR if we have the MMU_FTR_BOOK3S_KUAP feature enabled and we are
interrupted within the kernel. This is required so that if we get interrupted
within copy_to/from_user we continue with the right AMR value.
If we have MMU_FTR_BOOK3S_KUEP enabled we need to restore IAMR on
return to userspace, because the kernel will be running with a different
IAMR value.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-11-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:25 +11:00
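A hedged C sketch of the entry-side half of this (the real code sits in the interrupt entry assembly/macros; the regs->amr and regs->iamr field names are assumptions based on the commit text):
  #include <asm/reg.h>    /* mfspr(), SPRN_AMR, SPRN_IAMR */
  #include <asm/mmu.h>    /* mmu_has_feature(), MMU_FTR_PKEY */
  #include <asm/ptrace.h>

  static inline void kuap_save_user_amr(struct pt_regs *regs)
  {
          /* Without pkeys, userspace cannot have changed AMR/IAMR, so the
           * values we entered with are the ones to leave with. */
          if (!mmu_has_feature(MMU_FTR_PKEY))
                  return;

          regs->amr  = mfspr(SPRN_AMR);   /* assumed pt_regs fields */
          regs->iamr = mfspr(SPRN_IAMR);
  }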
Aneesh Kumar K.V
d7df77e890
powerpc/exec: Set thread.regs early during exec
...
In later patches, during exec, we would like to access the default regs.amr to
control access to the user mapping. Having thread.regs set early makes the
code changes simpler.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-10-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:25 +11:00
Aneesh Kumar K.V
227ae62552
powerpc/book3s64/kuap/kuep: Add PPC_PKEY config on book3s64
...
The config CONFIG_PPC_PKEY is used to select the base support that is
required for PPC_MEM_KEYS, KUAP, and KUEP. Adding this dependency
reduces the code complexity (in terms of #ifdefs) and enables us to
move some of the initialization code to pkeys.c.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201127044424.40686-4-aneesh.kumar@linux.ibm.com
2020-12-04 01:01:24 +11:00
Nicholas Piggin
4a869531dd
powerpc/64s: Remove "Host" from MCE logging
...
"Host" caused machine check is printed when the kernel sees a MCE
hit in this kernel or userspace, and "Guest" if it hit one of its
guests. This is confusing when a guest kernel handles a hypervisor-
delivered MCE, it also prints "Host".
Just remove "Host". "Guest" is adequate to make the distinction.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201128070728.825934-8-npiggin@gmail.com
2020-12-04 01:01:23 +11:00
Nicholas Piggin
82f70a0510
powerpc/64s/pseries: Add ERAT specific machine check handler
...
Don't treat ERAT MCEs as SLB MCEs: don't save the SLB, and use a specific
ERAT flush to recover.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201128070728.825934-7-npiggin@gmail.com
2020-12-04 01:01:23 +11:00
Nicholas Piggin
0ce2382657
powerpc/64s/powernv: Allow KVM to handle guest machine check details
...
KVM has strategies to perform machine check recovery. If an MCE hits
in a guest, have the low-level handler just decode and save the MCE,
but not try to recover anything, so KVM can deal with it.
The host does not own SLBs and does not need to report the SLB state
in case of a multi-hit for example, or know about the virtual memory
map of the guest.
UE and memory poisoning of guest pages in the host is one thing that
is possibly not completely robust at the moment, but this too needs
to go via KVM (possibly via the guest and back out to host via hcall)
rather than being handled at a low level in the host handler.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201128070728.825934-3-npiggin@gmail.com
2020-12-04 01:01:22 +11:00
Srikar Dronamraju
a21d1becaa
powerpc: Reintroduce is_kvm_guest() as a fast-path check
...
Introduce a static branch that would be set during boot if the OS
happens to be a KVM guest. Subsequent checks to see if we are on KVM
will rely on this static branch. This static branch would be used in
vcpu_is_preempted() in a subsequent patch.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com >
Acked-by: Waiman Long <longman@redhat.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201202050456.164005-4-srikar@linux.vnet.ibm.com
2020-12-04 01:01:22 +11:00
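A short C sketch of the fast-path check (the static key name and the init hook below are illustrative; check_kvm_guest() is the slow, device-tree based helper introduced by the rename patch in the next entry):
  #include <linux/jump_label.h>
  #include <linux/init.h>
  #include <linux/types.h>

  DEFINE_STATIC_KEY_FALSE(kvm_guest);

  static inline bool is_kvm_guest(void)
  {
          /* Patched to a plain nop/branch at runtime, so bare metal pays
           * no memory load on every vcpu_is_preempted() call. */
          return static_branch_unlikely(&kvm_guest);
  }

  static void __init kvm_guest_detect(void)
  {
          if (check_kvm_guest())          /* slow one-off detection at boot */
                  static_branch_enable(&kvm_guest);
  }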
Srikar Dronamraju
16520a858a
powerpc: Rename is_kvm_guest() to check_kvm_guest()
...
We want to reuse the is_kvm_guest() name in a subsequent patch but
with a new body. Hence rename is_kvm_guest() to check_kvm_guest(). No
additional changes.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com >
Acked-by: Waiman Long <longman@redhat.com >
Signed-off-by: kernel test robot <lkp@intel.com > # int -> bool fix
[mpe: Fold in fix from lkp to use true/false not 0/1]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201202050456.164005-3-srikar@linux.vnet.ibm.com
2020-12-04 01:01:21 +11:00
Srikar Dronamraju
92cc6bf01c
powerpc: Refactor is_kvm_guest() declaration to new header
...
Only code/declaration movement, in anticipation of doing a KVM-aware
vcpu_is_preempted(). No additional changes.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com >
Acked-by: Waiman Long <longman@redhat.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201202050456.164005-2-srikar@linux.vnet.ibm.com
2020-12-04 01:01:21 +11:00
Nicholas Piggin
bf13718bc5
powerpc: show registers when unwinding interrupt frames
...
It's often useful to know the register state for interrupts in
the stack frame. In the example below (with this patch applied),
the important information is the state of the page fault.
A blatant case like this should probably have the page fault regs
passed down to the warning instead, but quite often there are
less obvious cases where an interrupt shows up that might give
some more clues.
The downside is longer and more complex bug output.
Bug: Write fault blocked by AMR!
WARNING: CPU: 0 PID: 72 at arch/powerpc/include/asm/book3s/64/kup-radix.h:164 __do_page_fault+0x880/0xa90
Modules linked in:
CPU: 0 PID: 72 Comm: systemd-gpt-aut Not tainted
NIP: c00000000006e2f0 LR: c00000000006e2ec CTR: 0000000000000000
REGS: c00000000a4f3420 TRAP: 0700
MSR: 8000000000021033 <SF,ME,IR,DR,RI,LE> CR: 28002840 XER: 20040000
CFAR: c000000000128be0 IRQMASK: 3
GPR00: c00000000006e2ec c00000000a4f36c0 c0000000014f0700 0000000000000020
GPR04: 0000000000000001 c000000001290f50 0000000000000001 c000000001290f80
GPR08: c000000001612b08 0000000000000000 0000000000000000 00000000ffffe0f7
GPR12: 0000000048002840 c0000000016e0000 c00c000000021c80 c000000000fd6f60
GPR16: 0000000000000000 c00000000a104698 0000000000000003 c0000000087f0000
GPR20: 0000000000000100 c0000000070330b8 0000000000000000 0000000000000004
GPR24: 0000000002000000 0000000000000300 0000000002000000 c00000000a5b0c00
GPR28: 0000000000000000 000000000a000000 00007fffb2a90038 c00000000a4f3820
NIP [c00000000006e2f0] __do_page_fault+0x880/0xa90
LR [c00000000006e2ec] __do_page_fault+0x87c/0xa90
Call Trace:
[c00000000a4f36c0] [c00000000006e2ec] __do_page_fault+0x87c/0xa90 (unreliable)
[c00000000a4f3780] [c000000000e1c034] do_page_fault+0x34/0x90
[c00000000a4f37b0] [c000000000008908] data_access_common_virt+0x158/0x1b0
--- interrupt: 300 at __copy_tofrom_user_base+0x9c/0x5a4
NIP: c00000000009b028 LR: c000000000802978 CTR: 0000000000000800
REGS: c00000000a4f3820 TRAP: 0300
MSR: 800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24004840 XER: 00000000
CFAR: c00000000009aff4 DAR: 00007fffb2a90038 DSISR: 0a000000 IRQMASK: 0
GPR00: 0000000000000000 c00000000a4f3ac0 c0000000014f0700 00007fffb2a90028
GPR04: c000000008720010 0000000000010000 0000000000000000 0000000000000000
GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000001
GPR12: 0000000000004000 c0000000016e0000 c00c000000021c80 c000000000fd6f60
GPR16: 0000000000000000 c00000000a104698 0000000000000003 c0000000087f0000
GPR20: 0000000000000100 c0000000070330b8 0000000000000000 0000000000000004
GPR24: c00000000a4f3c80 c000000008720000 0000000000010000 0000000000000000
GPR28: 0000000000010000 0000000008720000 0000000000010000 c000000001515b98
NIP [c00000000009b028] __copy_tofrom_user_base+0x9c/0x5a4
LR [c000000000802978] copyout+0x68/0xc0
--- interrupt: 300
[c00000000a4f3af0] [c0000000008074b8] copy_page_to_iter+0x188/0x540
[c00000000a4f3b50] [c00000000035c678] generic_file_buffered_read+0x358/0xd80
[c00000000a4f3c40] [c0000000004c1e90] blkdev_read_iter+0x50/0x80
[c00000000a4f3c60] [c00000000045733c] new_sync_read+0x12c/0x1c0
[c00000000a4f3d00] [c00000000045a1f0] vfs_read+0x1d0/0x240
[c00000000a4f3d50] [c00000000045a7f4] ksys_read+0x84/0x140
[c00000000a4f3da0] [c000000000033a60] system_call_exception+0x100/0x280
[c00000000a4f3e10] [c00000000000c508] system_call_common+0xf8/0x2f8
Instruction dump:
eae10078 3be0000b 4bfff890 60420000 792917e1 4182ff18 3c82ffab 3884a5e0
3c62ffab 3863a6e8 480ba891 60000000 <0fe00000> 3be0000b 4bfff860 e93c0938
Signed-off-by: Nicholas Piggin <npiggin@gmail.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201107023305.2384874-1-npiggin@gmail.com
2020-12-04 01:01:21 +11:00
Youling Tang
a21df7a1d6
powerpc: Use common STABS_DEBUG and DWARF_DEBUG and ELF_DETAILS macro
...
Use the common STABS_DEBUG, DWARF_DEBUG and ELF_DETAILS macro rules for
the linker script.
Signed-off-by: Youling Tang <tangyouling@loongson.cn >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/1606460857-2723-1-git-send-email-tangyouling@loongson.cn
2020-12-04 01:01:20 +11:00
Jordan Niethe
fe18a35e68
powerpc/64: Fix an EMIT_BUG_ENTRY in head_64.S
...
Commit 63ce271b5e ("powerpc/prom: convert PROM_BUG() to standard
trap") added an EMIT_BUG_ENTRY for the trap after the branch to
start_kernel(). The EMIT_BUG_ENTRY was for the address "0b", but the
trap was not labeled with "0". Hence the address used for the bug entry is in
relative_toc(), where the previous "0" label is. Label the trap as "0" so
the correct address is used.
Fixes: 63ce271b5e ("powerpc/prom: convert PROM_BUG() to standard trap")
Signed-off-by: Jordan Niethe <jniethe5@gmail.com >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/20201130004404.30953-1-jniethe5@gmail.com
2020-12-04 01:01:20 +11:00
Christophe Leroy
676155ab23
powerpc/vdso: Remove VDSO32_LBASE and VDSO64_LBASE
...
VDSO32_LBASE and VDSO64_LBASE are 0. Remove them to simplify code.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/6c4d6570d886bbe1cc471e8ca01602e4b4d9beb5.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:19 +11:00
Christophe Leroy
e90903203d
powerpc/vdso: Remove DBG()
...
DBG() is not used anymore. Remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/e11a9b50e709f197bb3aa2ed1d80d2dee8714afc.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:19 +11:00
Christophe Leroy
23c4ceaf1a
powerpc/vdso: Remove vdso_ready
...
There is no way to get out of vdso_init() prematurely anymore.
Remove vdso_ready, as it will always be 1.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/0e1e18c6329b848aa3edeeba76509b4d76182e7d.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:19 +11:00
Christophe Leroy
a4ccd64acb
powerpc/vdso: Remove vdso_setup()
...
vdso_fixup_features() cannot fail anymore and that's
the only function called by vdso_setup().
vdso_setup() has become trivial and can be removed.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/11522eec6140f510a8c89c63cbb739277d097fdc.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:19 +11:00
Christophe Leroy
67a354051d
powerpc/vdso: Remove lib32_elfinfo and lib64_elfinfo
...
lib32_elfinfo and lib64_elfinfo are not used anymore, remove them.
Also remove vdso32_kbase and vdso64_kbase while removing the
last use.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/01ac65abf22f0428f8f764525a7d84459c54d806.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:19 +11:00
Christophe Leroy
6ed613ad57
powerpc/vdso: Remove symbol section information in struct lib32/64_elfinfo
...
The members related to the symbol section in struct lib32_elfinfo and
struct lib64_elfinfo are not used anymore, remove them.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/b779e5b7cc0354e2f87fd407fe5b02f4a8a73825.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:18 +11:00
Christophe Leroy
e113f8ef1c
powerpc/vdso: Remove unused text member in struct lib32/64_elfinfo
...
The text member in struct lib32_elfinfo and struct lib64_elfinfo
is not used, remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/f53dcc9bb1946a7854d15b34d03d3d2e2003848c.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:18 +11:00
Christophe Leroy
5cda7c7549
powerpc/vdso: Remove vdso_patches[] and associated functions
...
vdso_patches[] is now empty; remove it, along with all
the functions that depend on it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/27d75debd6e4ddeaffe1d66ffed1e7526684a004.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:18 +11:00
Christophe Leroy
899367ea50
powerpc/vdso: Remove runtime generated sigtramp offsets
...
Signal trampoline offsets are now generated at buildtime.
Runtime generated offsets are not used anymore, remove them.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/7c192d35a437151837cf4c48aeccb42380d6daac.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:18 +11:00
Christophe Leroy
49bf59fd03
powerpc/vdso: Remove __kernel_datapage_offset
...
__kernel_datapage_offset is not used anymore, remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/ddb5c746bec4e1a026d7c85243213a1876ef844f.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:18 +11:00
Christophe Leroy
b7fe9c15b5
powerpc/vdso: Remove vdso32_pages and vdso64_pages
...
vdso32_pages and vdso64_pages are not used anymore.
Remove them.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/bce021f616cbaf39dfb5766cf7ef114adcb918d9.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:18 +11:00
Christophe Leroy
0fc980db9a
powerpc/vdso: Merge __kernel_sync_dicache_p5() into __kernel_sync_dicache()
...
__kernel_sync_dicache_p5() is an alternative to
__kernel_sync_dicache() for when the CPU has CPU_FTR_COHERENT_ICACHE.
Remove this alternative function and merge
__kernel_sync_dicache_p5() into __kernel_sync_dicache() using a
standard CPU feature fixup.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/4c7dcc6544882761b2b0249d7a8ec2c3a8088cb5.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:17 +11:00
Christophe Leroy
ed07f6353d
powerpc/vdso: Use builtin symbols to locate fixup section
...
Add builtin symbols to locate fixup section and use them
instead of locating sections through elf headers at runtime.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/2954526981859ca1ccfcfc7a7c4263920e9ddfcb.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:17 +11:00
Christophe Leroy
91bf695596
powerpc/vdso: Retrieve sigtramp offsets at buildtime
...
This is copied from arm64.
Instead of using runtime-generated signal trampoline offsets,
get the offsets at build time.
If the trampoline doesn't exist, the build will fail, so there is
no need to check in the VDSO whether the trampoline exists or not.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/f8bfd6812c3e3678b1cdb4d55a52f9eb022b40d3.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:17 +11:00
Christophe Leroy
550e6074c1
powerpc/vdso: Remove unused \tmp param in __get_datapage()
...
The \tmp param is not used anymore, remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/4b13f897dcccce8ae03c031a4598cf26b32e2f1c.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:17 +11:00
Christophe Leroy
591857b635
powerpc/vdso: Simplify __get_datapage()
...
The VDSO datapage and the text pages are always located immediately
next to each other, so the offset can be hardcoded without an indirection
through __kernel_datapage_offset.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/b08f5ef99d64cfc38f79b7ad5310d9b4d2479eeb.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:17 +11:00
Christophe Leroy
511157ab64
powerpc/vdso: Move vdso datapage up front
...
Move the vdso datapage in front of the VDSO area,
before the VDSO text.
This will allow removing the __kernel_datapage_offset symbol
and simplifying __get_datapage() in the following patches.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/b68c99b6e8ee0b1d99bfa4c7e34c359fc1bc1000.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:17 +11:00
Christophe Leroy
c102f07667
powerpc/vdso: Replace vdso_base by vdso
...
All other architectures but s390 use a void pointer named 'vdso'
to reference the VDSO mapping.
In a following patch, the VDSO data page will be put in front of
the text, so vdso_base will then no longer point to the VDSO text.
To avoid confusion between vdso_base and the VDSO text, rename vdso_base
to vdso and make it a void __user *.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/8e6cefe474aa4ceba028abb729485cd46c140990.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:16 +11:00
Christophe Leroy
526a9c4a72
powerpc/vdso: Provide vdso_remap()
...
Provide vdso_remap() through _install_special_mapping() and
drop arch_remap().
This adds a test of the size and returns -EINVAL if the size
is not correct.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/373c66f768fa9cc8890f3b55462209a98c522326.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:16 +11:00
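A hedged C sketch of the size check this adds, using the generic vm_special_mapping mremap hook (the vdso_size variable is illustrative; the context.vdso field name follows the vdso_base to vdso rename described above):
  #include <linux/mm.h>
  #include <linux/mm_types.h>
  #include <linux/sched.h>
  #include <linux/errno.h>

  extern unsigned long vdso_size;         /* illustrative */

  static int vdso_mremap(const struct vm_special_mapping *sm,
                         struct vm_area_struct *new_vma)
  {
          unsigned long new_size = new_vma->vm_end - new_vma->vm_start;

          /* The VDSO may be moved, but never shrunk or grown. */
          if (new_size != vdso_size)
                  return -EINVAL;

          /* Remember the new location of the mapping. */
          current->mm->context.vdso = (void __user *)new_vma->vm_start;
          return 0;
  }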
Christophe Leroy
c1bab64360
powerpc/vdso: Move to _install_special_mapping() and remove arch_vma_name()
...
Copied from commit 2fea7f6c98 ("arm64: vdso: move to
_install_special_mapping and remove arch_vma_name").
Use the new _install_special_mapping() API added by
commit a62c34bd2a ("x86, mm: Improve _install_special_mapping
and fix x86 vdso naming"), which obsoletes install_special_mapping().
Also remove arch_vma_name(), as the name is handled by the new API.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: kernel test robot <lkp@intel.com >
[mpe: Squash fix to use PTR_ERR_OR_ZERO() from lkp]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/e7e5dfe0f93234e31051f2a610b4b07f50b0082f.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:16 +11:00
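A hedged C sketch of the new-API call (function and variable names are illustrative): the mapping name replaces arch_vma_name(), and PTR_ERR_OR_ZERO() folds the error handling.
  #include <linux/mm.h>
  #include <linux/err.h>

  static struct vm_special_mapping vdso_mapping = {
          .name = "[vdso]",               /* shown in /proc/<pid>/maps */
  };

  static int map_vdso(struct mm_struct *mm, unsigned long addr,
                      unsigned long size, struct page **pages)
  {
          struct vm_area_struct *vma;

          vdso_mapping.pages = pages;
          vma = _install_special_mapping(mm, addr, size,
                                         VM_READ | VM_EXEC |
                                         VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
                                         &vdso_mapping);
          return PTR_ERR_OR_ZERO(vma);
  }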
Christophe Leroy
b2df3f60b4
powerpc/vdso: Simplify arch_setup_additional_pages() exit
...
To simplify arch_setup_additional_pages() exit, rename
it __arch_setup_additional_pages() and create a caller
arch_setup_additional_pages() which does the locking.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/603c1d039d3f928ee95e547fcd2219fcf4c3b514.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:16 +11:00
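A C sketch of the split described above (the inner function name comes from the commit text; the outer function is the standard binfmt hook):
  #include <linux/binfmts.h>
  #include <linux/mm.h>
  #include <linux/errno.h>
  #include <linux/sched.h>

  int __arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp);

  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
  {
          struct mm_struct *mm = current->mm;
          int rc;

          if (mmap_write_lock_killable(mm))
                  return -EINTR;

          /* The inner function may now return from any point without
           * having to worry about dropping the mmap lock first. */
          rc = __arch_setup_additional_pages(bprm, uses_interp);

          mmap_write_unlock(mm);
          return rc;
  }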
Christophe Leroy
7461a4f79b
powerpc/vdso: Use VDSO size in arch_setup_additional_pages()
...
In arch_setup_additional_pages(), instead of using the number of VDSO
pages and recalculating the VDSO size, use the VDSO size directly.
As vdso_ready is set, vdso_pages can't be 0, so just remove the test.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/4edfa548c3885a430b765335dc720105716e273f.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:16 +11:00
Christophe Leroy
4fe0e3c172
powerpc/vdso: Remove unnecessary ifdefs in vdso_pagelist initialization
...
No need for all those #ifdefs around the pagelist initialisation;
use IS_ENABLED() instead, and GCC will discard the unused static variables.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/f9333432e329b1fcbbbf846cb1cd4a1c4127a60b.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:16 +11:00
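A hedged before/after C sketch of the cleanup (setup_pages() and the pagelist/start/pages symbols are purely illustrative names; IS_ENABLED() is the generic kconfig test):
  #include <linux/kconfig.h>      /* IS_ENABLED() */
  #include <linux/init.h>

  /* Before: each block was wrapped in #ifdef CONFIG_VDSO32 / CONFIG_PPC64.
   * After: plain C conditions; GCC drops the dead branch together with any
   * static variable only that branch referenced. */
  static void __init vdso_pagelist_init(void)
  {
          if (IS_ENABLED(CONFIG_VDSO32))
                  setup_pages(vdso32_pagelist, &vdso32_start, vdso32_pages);

          if (IS_ENABLED(CONFIG_PPC64))
                  setup_pages(vdso64_pagelist, &vdso64_start, vdso64_pages);
  }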
Christophe Leroy
3cf6382541
powerpc/vdso: Refactor 32 bits and 64 bits pages setup
...
The setup of the VDSO pages is identical for the 32-bit VDSO and
the 64-bit VDSO.
Refactor that setup.
And use &vdsoXX_start, which is a synonym of vdsoXX_kbase.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/269ffb54c37fc1d46128f77d7a39f88ef4a9957d.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:15 +11:00
Christophe Leroy
35c1c7c0bc
powerpc/vdso: Remove NULL termination element in vdso_pagelist
...
No need for a NULL last element in the pagelists; install_special_mapping()
knows how long the list is.
Remove that element.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/e58d95ab859e3cbc9bae3c9ce2959e17d2864f5d.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:15 +11:00
Christophe Leroy
abcdbd039e
powerpc/vdso: Remove get_page() in vdso_pagelist initialization
...
Partly copied from commit 16fb1a9bec ("arm64: vdso: clean up
vdso_pagelist initialization").
No need to get_page() the vdso text/data - these are part of the
kernel image.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/9d14540bd10832b6c9519d74fb5728fdc4974b36.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:15 +11:00
Christophe Leroy
1bb30b7a45
powerpc/vdso: Rename syscall_map_32/64 to simplify vdso_setup_syscall_map()
...
Today the vdso_data structure has:
- syscall_map_32[] and syscall_map_64[] on PPC64
- syscall_map_32[] on PPC32
On PPC32, syscall_map_32[] is populated using sys_call_table[].
On PPC64, syscall_map_64[] is populated using sys_call_table[]
and syscall_map_32[] is populated using compat_sys_call_table[].
To simplify vdso_setup_syscall_map(),
- On PPC32 rename syscall_map_32[] into syscall_map[],
- On PPC64 rename syscall_map_64[] into syscall_map[],
- On PPC64 rename syscall_map_32[] into compat_syscall_map[].
That way, syscall_map[] gets populated using sys_call_table[] and
compat_syscall_map[] gets populated using compat_sys_call_table[].
Also define an empty compat_syscall_map[] on PPC32 to avoid ifdefs.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu >
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au >
Link: https://lore.kernel.org/r/472734be0d9991eee320a06824219a5b2663736b.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 01:01:15 +11:00
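A C sketch of vdso_setup_syscall_map() after the rename (the vdso_data field names follow the commit text; the MSB-first one-bit-per-syscall packing is assumed here):
  #include <linux/syscalls.h>
  #include <asm/vdso_datapage.h>
  #include <asm/unistd.h>

  static void __init vdso_setup_syscall_map(void)
  {
          unsigned int i;

          for (i = 0; i < NR_syscalls; i++) {
                  /* One bit per syscall, set when the entry is implemented. */
                  if (sys_call_table[i] != (void *)&sys_ni_syscall)
                          vdso_data->syscall_map[i >> 5] |=
                                          0x80000000UL >> (i & 0x1f);
                  /* compat_syscall_map[] is empty on PPC32, so no #ifdef. */
                  if (IS_ENABLED(CONFIG_COMPAT) &&
                      compat_sys_call_table[i] != (void *)&sys_ni_syscall)
                          vdso_data->compat_syscall_map[i >> 5] |=
                                          0x80000000UL >> (i & 0x1f);
          }
  }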