__hyp_stub_install duplicates quite a bit of safe_svcmode_maskall
by forcing the CPU back to SVC. This is unnecessary, as
safe_svcmode_maskall is called just after.

Furthermore, the way we build SPSR_hyp is buggy, as we fail to mask
interrupts, leading to interesting behaviours on TC2 + UEFI.

The fix is to simply remove this code and rely on safe_svcmode_maskall
to do the right thing.
Cc: <stable@vger.kernel.org>
Reviewed-by: Dave Martin <dave.martin@linaro.org>
Reported-by: Harry Liebel <harry.liebel@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Non-T variants of ARMv4 do not support the bx instruction.
However, __hyp_stub_install is always called from the same
instruction set used to build the bulk of the kernel, so bx should
not be necessary.

This patch uses the traditional "mov pc" instead of bx.
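
Illustratively (hypothetical label; this is not the patched function
itself), the ARMv4-safe return looks like:

        .arm
        .globl  example_return          @ hypothetical symbol, for illustration
example_return:
        @ Caller and callee share the kernel's instruction set, so no
        @ interworking is needed on return:
        mov     pc, lr                  @ rather than "bx lr", absent on non-T ARMv4
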
Cc: <stable@vger.kernel.org>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
[will: fixed up remaining bx instruction]
Signed-off-by: Will Deacon <will.deacon@arm.com>

If booting in HYP mode, it makes sense to enable access to the
physical timers, so the kernel can use them directly.
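
CNTHCTL is a PL2 register, so this has to be done while still in HYP
mode. A sketch of granting PL1 access to the physical counter and timer
(the CP15 encoding is architectural; the register choice is
illustrative):

        mrc     p15, 4, r7, c14, c1, 0  @ CNTHCTL
        orr     r7, r7, #3              @ PL1PCEN | PL1PCTEN
        mcr     p15, 4, r7, c14, c1, 0  @ CNTHCTL
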
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

The zImage loader needs to turn on the MMU in order to take
advantage of caching while decompressing the zImage. Running this
in hyp mode would require the LPAE page table format to be
supported; to avoid this complexity, this patch switches out of hyp
mode, and returns back to hyp mode just before booting the kernel.

This implementation assumes that the Hyp mode view of memory and the
PL1 view of memory are coherent, provided that the MMU and caches
are off in both, as required by the boot protocol. The zImage
decompression code must drain the write buffer on completion anyway, and
entry into Hyp mode should flush any prefetch buffer, avoiding hazards
associated with local write buffers and the pipeline.
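
Roughly, the shape of this dance is sketched below. Labels, register
assignments and the vector-table layout are illustrative rather than
the exact in-tree loader code, and the sketch assumes the kernel's hyp
stub (which provides __hyp_set_vectors) has been linked in and
installed, as the decompressor arranges:

        .arch_extension virt            @ the hvc instruction needs this

        @ -- at loader entry --
        mrs     r9, cpsr                @ remember which mode we were entered in
        @ ... leave HYP for SVC here (not shown), so the MMU and caches can
        @     be enabled with the classic short-descriptor page tables ...

        @ ... decompress the kernel, then disable the MMU/caches and
        @     clean/invalidate as the boot protocol requires ...

        @ -- just before calling the kernel --
        and     r0, r9, #0x1f           @ MODE_MASK
        cmp     r0, #0x1a               @ were we entered in HYP_MODE?
        bne     enter_kernel            @ no: call the kernel from SVC
        adr     r0, hyp_reentry_vectors
        bl      __hyp_set_vectors       @ point HVBAR at the table below
        hvc     #0                      @ trap up; resumes at the HVC slot
        b       .                       @ not reached

enter_kernel:
        mov     r0, #0                  @ boot protocol: r0 must be zero
        mov     pc, r4                  @ r4 = kernel entry point (assumed)

        .align  5                       @ HVBAR must be 32-byte aligned
hyp_reentry_vectors:                    @ hypothetical name for this sketch
        .rept   5
        b       .                       @ unused HYP vector slots
        .endr
        b       enter_kernel            @ HVC from SVC lands here, now in HYP
        b       .                       @ IRQ
        b       .                       @ FIQ
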
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

This patch does two things:

  * Ensure that asynchronous aborts are masked at kernel entry.
    The bootloader should be masking these anyway, but this reduces
    the damage window just in case it doesn't.

  * Enter svc mode via exception return to ensure that CPU state is
    properly serialised. This does not matter when switching from
    an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
    parlance), but it potentially does matter when switching from
    another privileged mode such as hyp mode.

This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.
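
A minimal ARM-state sketch of such a switch, with numeric constants
(0x13 = SVC mode, 0x1a = HYP mode, 0x1f = mode mask, 0x40/0x80/0x100 =
F/I/A bits); it assumes an assembler with Virtualization Extensions
support and entry in either SVC or HYP as allowed above. The in-tree
version of this logic is the safe_svcmode_maskall macro referred to in
the first patch:

        .arch_extension virt            @ for msr ELR_hyp / eret

        mrs     r0, cpsr
        and     r1, r0, #0x1f           @ current mode
        bic     r0, r0, #0x1f           @ build the target CPSR:
        orr     r0, r0, #0xd3           @   SVC mode, IRQs and FIQs masked,
        orr     r0, r0, #0x100          @   asynchronous aborts masked too
        msr     spsr_cxsf, r0           @ state to resume in
        adr     lr, 1f                  @ address to resume at
        cmp     r1, #0x1a               @ were we entered in HYP mode?
        bne     0f
        msr     ELR_hyp, lr             @ HYP: serialising exception return
        eret
0:      movs    pc, lr                  @ SVC: exception return via SPSR
1:      @ now in SVC with A, I and F masked
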
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>