Merge tag 'kvmarm-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 updates for 6.7

 - Generalized infrastructure for 'writable' ID registers, effectively
   allowing userspace to opt out of certain vCPU features for its guest

 - Optimization for vSGI injection, opportunistically compressing MPIDR
   to vCPU mapping into a table

 - Improvements to KVM's PMU emulation, allowing userspace to select
   the number of PMCs available to a VM

 - Guest support for memory operation instructions (FEAT_MOPS)

 - Cleanups to handling feature flags in KVM_ARM_VCPU_INIT, squashing
   bugs and getting rid of useless code

 - Changes to the way the SMCCC filter is constructed, avoiding wasted
   memory allocations when not in use

 - Load the stage-2 MMU context at vcpu_load() for VHE systems, reducing
   the overhead of errata mitigations

 - Miscellaneous kernel and selftest fixes
Commit 45b890f768 by Paolo Bonzini, 2023-10-31 16:37:07 -04:00
64 changed files with 3024 additions and 1182 deletions


@ -3422,6 +3422,8 @@ return indicates the attribute is implemented. It does not necessarily
indicate that the attribute can be read or written in the device's
current state. "addr" is ignored.
.. _KVM_ARM_VCPU_INIT:
4.82 KVM_ARM_VCPU_INIT
----------------------
@ -6140,6 +6142,56 @@ writes to the CNTVCT_EL0 and CNTPCT_EL0 registers using the SET_ONE_REG
interface. No error will be returned, but the resulting offset will not be
applied.
.. _KVM_ARM_GET_REG_WRITABLE_MASKS:
4.139 KVM_ARM_GET_REG_WRITABLE_MASKS
-------------------------------------------
:Capability: KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES
:Architectures: arm64
:Type: vm ioctl
:Parameters: struct reg_mask_range (in/out)
:Returns: 0 on success, < 0 on error
::
#define KVM_ARM_FEATURE_ID_RANGE 0
#define KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8)
struct reg_mask_range {
__u64 addr; /* Pointer to mask array */
__u32 range; /* Requested range */
__u32 reserved[13];
};
This ioctl copies the writable masks for a selected range of registers to
userspace.
The ``addr`` field is a pointer to the destination array where KVM copies
the writable masks.
The ``range`` field indicates the requested range of registers.
``KVM_CHECK_EXTENSION`` for the ``KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES``
capability returns the supported ranges, expressed as a set of flags. Each
flag's bit index represents a possible value for the ``range`` field.
All other values are reserved for future use and KVM may return an error.
The ``reserved[13]`` array is reserved for future use and must be zero,
otherwise KVM may return an error.
KVM_ARM_FEATURE_ID_RANGE (0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Feature ID range is defined as the AArch64 System register space with
op0==3, op1=={0, 1, 3}, CRn==0, CRm=={0-7}, op2=={0-7}.
The mask array pointed to by ``addr`` is indexed by the macro
``KVM_ARM_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2)``, allowing userspace
to know which fields can be changed for the system register described by
``op0, op1, crn, crm, op2``. KVM rejects ID register values that describe a
superset of the features supported by the system.
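A minimal userspace sketch of querying this range might look as follows
(assuming an existing VM file descriptor ``vm_fd`` and the usual
``<sys/ioctl.h>``/``<linux/kvm.h>`` includes; error handling is elided)::

  __u64 masks[KVM_ARM_FEATURE_ID_RANGE_SIZE] = { 0 };
  struct reg_mask_range range = {
      .addr  = (__u64)masks,                /* destination mask array */
      .range = KVM_ARM_FEATURE_ID_RANGE,    /* Feature ID register space */
      /* .reserved[] is implicitly zeroed, as required */
  };

  if (ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range) < 0)
      return -1;                            /* error handling up to the caller */

  /* ID_AA64PFR0_EL1 is op0==3, op1==0, CRn==0, CRm==4, op2==0 */
  __u64 pfr0_writable = masks[KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 4, 0)];

Any bit set in ``pfr0_writable`` identifies a field of ``ID_AA64PFR0_EL1``
that userspace may modify before the VM runs.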
5. The kvm_run structure
========================


@ -11,3 +11,4 @@ ARM
hypercalls
pvtime
ptp_kvm
vcpu-features


@ -0,0 +1,48 @@
.. SPDX-License-Identifier: GPL-2.0
===============================
vCPU feature selection on arm64
===============================
KVM/arm64 provides two mechanisms that allow userspace to configure
the CPU features presented to the guest.
KVM_ARM_VCPU_INIT
=================
The ``KVM_ARM_VCPU_INIT`` ioctl accepts a bitmap of feature flags
(``struct kvm_vcpu_init::features``). Features enabled by this interface are
*opt-in* and may change/extend UAPI. See :ref:`KVM_ARM_VCPU_INIT` for complete
documentation of the features controlled by the ioctl.
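For example, a minimal sketch of enabling one opt-in feature (assuming
``vm_fd`` and ``vcpu_fd`` already exist, with error handling elided)::

  struct kvm_vcpu_init init;

  ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);     /* fills in .target, zeroes .features */
  init.features[0] |= 1U << KVM_ARM_VCPU_PSCI_0_2;   /* opt in to PSCI v0.2+ */
  ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);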
Otherwise, all CPU features supported by KVM are described by the architected
ID registers.
The ID Registers
================
The Arm architecture specifies a range of *ID Registers* that describe the set
of architectural features supported by the CPU implementation. KVM initializes
the guest's ID registers to the maximum set of CPU features supported by the
system. The ID register values may be VM-scoped in KVM, meaning that the
values could be shared for all vCPUs in a VM.
KVM allows userspace to *opt out* of certain CPU features described by the ID
registers by writing values to them via the ``KVM_SET_ONE_REG`` ioctl. The ID
registers are mutable until the VM has started, i.e. userspace has called
``KVM_RUN`` on at least one vCPU in the VM. Userspace can discover which fields
are mutable in the ID registers using the ``KVM_ARM_GET_REG_WRITABLE_MASKS`` ioctl.
See the :ref:`ioctl documentation <KVM_ARM_GET_REG_WRITABLE_MASKS>` for more
details.
Userspace is allowed to *limit* or *mask* CPU features according to the rules
outlined by the architecture in DDI0487J.a D19.1.3 'Principles of the ID
scheme for fields in ID registers'. KVM does not allow ID register values that
exceed the capabilities of the system.
.. warning::
It is **strongly recommended** that userspace modify the ID register values
before accessing the rest of the vCPU's CPU register state. KVM may use the
ID register values to control feature emulation. Interleaving ID register
modification with other system register accesses may lead to unpredictable
behavior.
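A minimal sketch of such an opt-out (assuming ``vcpu_fd`` refers to an
initialized vCPU that has not yet run; the register and field below are only
an illustration, and ``KVM_ARM_GET_REG_WRITABLE_MASKS`` should be used to
confirm that the targeted field is actually writable)::

  __u64 val;
  struct kvm_one_reg reg = {
      .id   = ARM64_SYS_REG(3, 0, 0, 6, 0),   /* ID_AA64ISAR0_EL1 */
      .addr = (__u64)&val,
  };

  ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);      /* read KVM's default value */
  val &= ~(0xfULL << 60);                     /* e.g. clear the RNDR field */
  ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);      /* must happen before KVM_RUN */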


@ -59,6 +59,13 @@ Groups:
It is invalid to mix calls with KVM_VGIC_V3_ADDR_TYPE_REDIST and
KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attributes.
Note that to obtain reproducible results (the same VCPU being associated
with the same redistributor across a save/restore operation), VCPU creation
order, redistributor region creation order, as well as the respective
interleaving of VCPU and region creation, MUST be preserved. Any change in
any of these orderings may result in a different vcpu_id/redistributor
association, resulting in a VM that will fail to run at restore time.
Errors:
======= =============================================================


@ -102,7 +102,9 @@
#define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
#define HCRX_GUEST_FLAGS \
(HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
(cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
#define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
/* TCR_EL2 Registers bits */


@ -54,6 +54,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2);
int kvm_inject_nested_irq(struct kvm_vcpu *vcpu);
static inline bool vcpu_has_feature(const struct kvm_vcpu *vcpu, int feature)
{
return test_bit(feature, vcpu->kvm->arch.vcpu_features);
}
#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
@ -62,7 +67,7 @@ static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
#else
static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
return test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features);
return vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
}
#endif
@ -465,7 +470,7 @@ static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
{
return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
return __vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
}
static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
@ -565,12 +570,6 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
vcpu_set_flag((v), e); \
} while (0)
static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
{
return test_bit(feature, vcpu->arch.features);
}
static __always_inline void kvm_write_cptr_el2(u64 val)
{
if (has_vhe() || has_hvhe())


@ -78,7 +78,7 @@ extern unsigned int __ro_after_init kvm_sve_max_vl;
int __init kvm_arm_init_sve(void);
u32 __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
void kvm_reset_vcpu(struct kvm_vcpu *vcpu);
void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
struct kvm_hyp_memcache {
@ -158,6 +158,16 @@ struct kvm_s2_mmu {
phys_addr_t pgd_phys;
struct kvm_pgtable *pgt;
/*
* VTCR value used on the host. For a non-NV guest (or a NV
* guest that runs in a context where its own S2 doesn't
* apply), its T0SZ value reflects that of the IPA size.
*
* For a shadow S2 MMU, T0SZ reflects the PARange exposed to
* the guest.
*/
u64 vtcr;
/* The last vcpu id that ran on each physical CPU */
int __percpu *last_vcpu_ran;
@ -202,12 +212,34 @@ struct kvm_protected_vm {
struct kvm_hyp_memcache teardown_mc;
};
struct kvm_mpidr_data {
u64 mpidr_mask;
DECLARE_FLEX_ARRAY(u16, cmpidr_to_idx);
};
static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
{
unsigned long mask = data->mpidr_mask;
u64 aff = mpidr & MPIDR_HWID_BITMASK;
int nbits, bit, bit_idx = 0;
u16 index = 0;
/*
* If this looks like RISC-V's BEXT or x86's PEXT
* instructions, it isn't by accident.
*/
nbits = fls(mask);
for_each_set_bit(bit, &mask, nbits) {
index |= (aff & BIT(bit)) >> (bit - bit_idx);
bit_idx++;
}
return index;
}
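/*
 * For example: with mpidr_mask == 0x103 (only Aff bits 0, 1 and 8 differ
 * across vCPUs), an affinity value of 0x103 has bits {0, 1, 8} set and is
 * packed into index 0b111 == 7, so cmpidr_to_idx[] needs just
 * 2^hweight(mpidr_mask) == 8 entries instead of one per possible MPIDR.
 */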
struct kvm_arch {
struct kvm_s2_mmu mmu;
/* VTCR_EL2 value for this VM */
u64 vtcr;
/* Interrupt controller */
struct vgic_dist vgic;
@ -239,15 +271,16 @@ struct kvm_arch {
#define KVM_ARCH_FLAG_VM_COUNTER_OFFSET 5
/* Timer PPIs made immutable */
#define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE 6
/* SMCCC filter initialized for the VM */
#define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED 7
/* Initial ID reg values loaded */
#define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 8
#define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 7
unsigned long flags;
/* VM-wide vCPU feature set */
DECLARE_BITMAP(vcpu_features, KVM_VCPU_MAX_FEATURES);
/* MPIDR to vcpu index mapping, optional */
struct kvm_mpidr_data *mpidr_data;
/*
* VM-wide PMU filter, implemented as a bitmap and big enough for
* up to 2^10 events (ARMv8.0) or 2^16 events (ARMv8.1+).
@ -257,6 +290,9 @@ struct kvm_arch {
cpumask_var_t supported_cpus;
/* PMCR_EL0.N value for the guest */
u8 pmcr_n;
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
@ -574,9 +610,6 @@ struct kvm_vcpu_arch {
/* Cache some mmu pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
/* feature flags */
DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
u64 vsesr_el2;
@ -1025,7 +1058,7 @@ int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
extern unsigned int __ro_after_init kvm_arm_vmid_bits;
int __init kvm_arm_vmid_alloc_init(void);
void __init kvm_arm_vmid_alloc_free(void);
void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
void kvm_arm_vmid_clear_active(void);
static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
@ -1078,6 +1111,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
struct kvm_arm_copy_mte_tags *copy_tags);
int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
struct kvm_arm_counter_offset *offset);
int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm,
struct reg_mask_range *range);
/* Guest/host FPSIMD coordination helpers */
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
@ -1109,8 +1144,8 @@ static inline bool kvm_set_pmuserenr(u64 val)
}
#endif
void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu);
void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu);
int __init kvm_set_ipa_limit(void);


@ -93,6 +93,8 @@ void __timer_disable_traps(struct kvm_vcpu *vcpu);
void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt);
void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt);
#else
void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu);
void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu);
void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt);
void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt);
void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt);
@ -111,11 +113,6 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
void __sve_restore_state(void *sve_pffr, u32 *fpsr);
#ifndef __KVM_NVHE_HYPERVISOR__
void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
#endif
u64 __guest_enter(struct kvm_vcpu *vcpu);
bool kvm_host_psci_handler(struct kvm_cpu_context *host_ctxt, u32 func_id);


@ -150,9 +150,9 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
*/
#define KVM_PHYS_SHIFT (40)
#define kvm_phys_shift(kvm) VTCR_EL2_IPA(kvm->arch.vtcr)
#define kvm_phys_size(kvm) (_AC(1, ULL) << kvm_phys_shift(kvm))
#define kvm_phys_mask(kvm) (kvm_phys_size(kvm) - _AC(1, ULL))
#define kvm_phys_shift(mmu) VTCR_EL2_IPA((mmu)->vtcr)
#define kvm_phys_size(mmu) (_AC(1, ULL) << kvm_phys_shift(mmu))
#define kvm_phys_mask(mmu) (kvm_phys_size(mmu) - _AC(1, ULL))
#include <asm/kvm_pgtable.h>
#include <asm/stage2_pgtable.h>
@ -224,16 +224,41 @@ static inline void __clean_dcache_guest_page(void *va, size_t size)
kvm_flush_dcache_to_poc(va, size);
}
static inline size_t __invalidate_icache_max_range(void)
{
u8 iminline;
u64 ctr;
asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
"movk %0, #0, lsl #16\n"
"movk %0, #0, lsl #32\n"
"movk %0, #0, lsl #48\n",
ARM64_ALWAYS_SYSTEM,
kvm_compute_final_ctr_el0)
: "=r" (ctr));
iminline = SYS_FIELD_GET(CTR_EL0, IminLine, ctr) + 2;
return MAX_DVM_OPS << iminline;
}
static inline void __invalidate_icache_guest_page(void *va, size_t size)
{
if (icache_is_aliasing()) {
/* any kind of VIPT cache */
/*
* VPIPT I-cache maintenance must be done from EL2. See comment in the
* nVHE flavor of __kvm_tlb_flush_vmid_ipa().
*/
if (icache_is_vpipt() && read_sysreg(CurrentEL) != CurrentEL_EL2)
return;
/*
* Blow the whole I-cache if it is aliasing (i.e. VIPT) or the
* invalidation range exceeds our arbitrary limit on invalidations by
* cache line.
*/
if (icache_is_aliasing() || size > __invalidate_icache_max_range())
icache_inval_all_pou();
} else if (read_sysreg(CurrentEL) != CurrentEL_EL1 ||
!icache_is_vpipt()) {
/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
else
icache_inval_pou((unsigned long)va, (unsigned long)va + size);
}
}
void kvm_set_way_flush(struct kvm_vcpu *vcpu);
@ -299,7 +324,7 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu,
struct kvm_arch *arch)
{
write_sysreg(arch->vtcr, vtcr_el2);
write_sysreg(mmu->vtcr, vtcr_el2);
write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
/*


@ -2,13 +2,14 @@
#ifndef __ARM64_KVM_NESTED_H
#define __ARM64_KVM_NESTED_H
#include <asm/kvm_emulate.h>
#include <linux/kvm_host.h>
static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
{
return (!__is_defined(__KVM_NVHE_HYPERVISOR__) &&
cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features));
vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2));
}
extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);


@ -21,13 +21,13 @@
* (IPA_SHIFT - 4).
*/
#define stage2_pgtable_levels(ipa) ARM64_HW_PGTABLE_LEVELS((ipa) - 4)
#define kvm_stage2_levels(kvm) VTCR_EL2_LVLS(kvm->arch.vtcr)
#define kvm_stage2_levels(mmu) VTCR_EL2_LVLS((mmu)->vtcr)
/*
* kvm_mmu_cache_min_pages() is the number of pages required to install
* a stage-2 translation. We pre-allocate the entry level page table at
* the VM creation.
*/
#define kvm_mmu_cache_min_pages(kvm) (kvm_stage2_levels(kvm) - 1)
#define kvm_mmu_cache_min_pages(mmu) (kvm_stage2_levels(mmu) - 1)
#endif /* __ARM64_S2_PGTABLE_H_ */


@ -270,6 +270,8 @@
/* ETM */
#define SYS_TRCOSLAR sys_reg(2, 1, 1, 0, 4)
#define SYS_BRBCR_EL2 sys_reg(2, 4, 9, 0, 0)
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
@ -484,6 +486,7 @@
#define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0)
#define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1)
#define SYS_SCTLR2_EL2 sys_reg(3, 4, 1, 0, 3)
#define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0)
#define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1)
#define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2)
@ -497,10 +500,15 @@
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1)
#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
#define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0)
#define SYS_SPSR_irq sys_reg(3, 4, 4, 3, 0)
#define SYS_SPSR_abt sys_reg(3, 4, 4, 3, 1)
#define SYS_SPSR_und sys_reg(3, 4, 4, 3, 2)
#define SYS_SPSR_fiq sys_reg(3, 4, 4, 3, 3)
#define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1)
#define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0)
#define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1)
@ -514,6 +522,18 @@
#define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0)
#define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0)
#define SYS_MPAMHCR_EL2 sys_reg(3, 4, 10, 4, 0)
#define SYS_MPAMVPMV_EL2 sys_reg(3, 4, 10, 4, 1)
#define SYS_MPAM2_EL2 sys_reg(3, 4, 10, 5, 0)
#define __SYS__MPAMVPMx_EL2(x) sys_reg(3, 4, 10, 6, x)
#define SYS_MPAMVPM0_EL2 __SYS__MPAMVPMx_EL2(0)
#define SYS_MPAMVPM1_EL2 __SYS__MPAMVPMx_EL2(1)
#define SYS_MPAMVPM2_EL2 __SYS__MPAMVPMx_EL2(2)
#define SYS_MPAMVPM3_EL2 __SYS__MPAMVPMx_EL2(3)
#define SYS_MPAMVPM4_EL2 __SYS__MPAMVPMx_EL2(4)
#define SYS_MPAMVPM5_EL2 __SYS__MPAMVPMx_EL2(5)
#define SYS_MPAMVPM6_EL2 __SYS__MPAMVPMx_EL2(6)
#define SYS_MPAMVPM7_EL2 __SYS__MPAMVPMx_EL2(7)
#define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0)
#define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1)
@ -562,24 +582,49 @@
#define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1)
#define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2)
#define SYS_SCXTNUM_EL2 sys_reg(3, 4, 13, 0, 7)
#define __AMEV_op2(m) (m & 0x7)
#define __AMEV_CRm(n, m) (n | ((m & 0x8) >> 3))
#define __SYS__AMEVCNTVOFF0n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0x8, m), __AMEV_op2(m))
#define SYS_AMEVCNTVOFF0n_EL2(m) __SYS__AMEVCNTVOFF0n_EL2(m)
#define __SYS__AMEVCNTVOFF1n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0xA, m), __AMEV_op2(m))
#define SYS_AMEVCNTVOFF1n_EL2(m) __SYS__AMEVCNTVOFF1n_EL2(m)
#define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3)
#define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0)
#define SYS_CNTHP_TVAL_EL2 sys_reg(3, 4, 14, 2, 0)
#define SYS_CNTHP_CTL_EL2 sys_reg(3, 4, 14, 2, 1)
#define SYS_CNTHP_CVAL_EL2 sys_reg(3, 4, 14, 2, 2)
#define SYS_CNTHV_TVAL_EL2 sys_reg(3, 4, 14, 3, 0)
#define SYS_CNTHV_CTL_EL2 sys_reg(3, 4, 14, 3, 1)
#define SYS_CNTHV_CVAL_EL2 sys_reg(3, 4, 14, 3, 2)
/* VHE encodings for architectural EL0/1 system registers */
#define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0)
#define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0)
#define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2)
#define SYS_SCTLR2_EL12 sys_reg(3, 5, 1, 0, 3)
#define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0)
#define SYS_TRFCR_EL12 sys_reg(3, 5, 1, 2, 1)
#define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6)
#define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0)
#define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1)
#define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2)
#define SYS_TCR2_EL12 sys_reg(3, 5, 2, 0, 3)
#define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0)
#define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1)
#define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0)
#define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1)
#define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0)
#define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0)
#define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0)
#define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0)
#define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0)
#define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0)
#define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0)
#define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1)
#define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7)
#define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0)
#define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0)
#define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1)


@ -333,7 +333,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
* This is meant to avoid soft lock-ups on large TLB flushing ranges and not
* necessarily a performance improvement.
*/
#define MAX_TLBI_OPS PTRS_PER_PTE
#define MAX_DVM_OPS PTRS_PER_PTE
/*
* __flush_tlb_range_op - Perform TLBI operation upon a range
@ -413,12 +413,12 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
/*
* When not using TLB range ops, we can handle up to
* (MAX_TLBI_OPS - 1) pages;
* (MAX_DVM_OPS - 1) pages;
* When using TLB range ops, we can handle up to
* (MAX_TLBI_RANGE_PAGES - 1) pages.
*/
if ((!system_supports_tlb_range() &&
(end - start) >= (MAX_TLBI_OPS * stride)) ||
(end - start) >= (MAX_DVM_OPS * stride)) ||
pages >= MAX_TLBI_RANGE_PAGES) {
flush_tlb_mm(vma->vm_mm);
return;
@ -451,7 +451,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
{
unsigned long addr;
if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) {
if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) {
flush_tlb_all();
return;
}


@ -9,10 +9,9 @@
#include <linux/list.h>
#include <asm/esr.h>
#include <asm/ptrace.h>
#include <asm/sections.h>
struct pt_regs;
#ifdef CONFIG_ARMV8_DEPRECATED
bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn);
#else
@ -101,4 +100,55 @@ static inline unsigned long arm64_ras_serror_get_severity(unsigned long esr)
bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned long esr);
void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr);
static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned long esr)
{
bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION;
bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A;
int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr);
int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr);
int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr);
unsigned long dst, src, size;
dst = regs->regs[dstreg];
src = regs->regs[srcreg];
size = regs->regs[sizereg];
/*
* Put the registers back in the original format suitable for a
* prologue instruction, using the generic return routine from the
* Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH.
*/
if (esr & ESR_ELx_MOPS_ISS_MEM_INST) {
/* SET* instruction */
if (option_a ^ wrong_option) {
/* Format is from Option A; forward set */
regs->regs[dstreg] = dst + size;
regs->regs[sizereg] = -size;
}
} else {
/* CPY* instruction */
if (!(option_a ^ wrong_option)) {
/* Format is from Option B */
if (regs->pstate & PSR_N_BIT) {
/* Backward copy */
regs->regs[dstreg] = dst - size;
regs->regs[srcreg] = src - size;
}
} else {
/* Format is from Option A */
if (size & BIT(63)) {
/* Forward copy */
regs->regs[dstreg] = dst + size;
regs->regs[srcreg] = src + size;
regs->regs[sizereg] = -size;
}
}
}
if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE)
regs->pc -= 8;
else
regs->pc -= 4;
}
#endif


@ -505,6 +505,38 @@ struct kvm_smccc_filter {
#define KVM_HYPERCALL_EXIT_SMC (1U << 0)
#define KVM_HYPERCALL_EXIT_16BIT (1U << 1)
/*
* Get feature ID registers userspace writable mask.
*
* From DDI0487J.a, D19.2.66 ("ID_AA64MMFR2_EL1, AArch64 Memory Model
* Feature Register 2"):
*
* "The Feature ID space is defined as the System register space in
* AArch64 with op0==3, op1=={0, 1, 3}, CRn==0, CRm=={0-7},
* op2=={0-7}."
*
* This covers all currently known R/O registers that indicate
* anything useful feature wise, including the ID registers.
*
* If we ever need to introduce a new range, it will be described as
* such in the range field.
*/
#define KVM_ARM_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2) \
({ \
__u64 __op1 = (op1) & 3; \
__op1 -= (__op1 == 3); \
(__op1 << 6 | ((crm) & 7) << 3 | (op2)); \
})
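/*
 * For example, ID_AA64PFR0_EL1 (op0==3, op1==0, CRn==0, CRm==4, op2==0)
 * maps to index (0 << 6) | (4 << 3) | 0 == 32; op1==1 and op1==3 encodings
 * map to the second and third blocks of 64 entries, for a total of
 * KVM_ARM_FEATURE_ID_RANGE_SIZE == 3 * 8 * 8 mask slots.
 */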
#define KVM_ARM_FEATURE_ID_RANGE 0
#define KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8)
struct reg_mask_range {
__u64 addr; /* Pointer to mask array */
__u32 range; /* Requested range */
__u32 reserved[13];
};
#endif
#endif /* __ARM_KVM_H__ */


@ -516,53 +516,7 @@ void do_el1_fpac(struct pt_regs *regs, unsigned long esr)
void do_el0_mops(struct pt_regs *regs, unsigned long esr)
{
bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION;
bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A;
int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr);
int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr);
int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr);
unsigned long dst, src, size;
dst = pt_regs_read_reg(regs, dstreg);
src = pt_regs_read_reg(regs, srcreg);
size = pt_regs_read_reg(regs, sizereg);
/*
* Put the registers back in the original format suitable for a
* prologue instruction, using the generic return routine from the
* Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH.
*/
if (esr & ESR_ELx_MOPS_ISS_MEM_INST) {
/* SET* instruction */
if (option_a ^ wrong_option) {
/* Format is from Option A; forward set */
pt_regs_write_reg(regs, dstreg, dst + size);
pt_regs_write_reg(regs, sizereg, -size);
}
} else {
/* CPY* instruction */
if (!(option_a ^ wrong_option)) {
/* Format is from Option B */
if (regs->pstate & PSR_N_BIT) {
/* Backward copy */
pt_regs_write_reg(regs, dstreg, dst - size);
pt_regs_write_reg(regs, srcreg, src - size);
}
} else {
/* Format is from Option A */
if (size & BIT(63)) {
/* Forward copy */
pt_regs_write_reg(regs, dstreg, dst + size);
pt_regs_write_reg(regs, srcreg, src + size);
pt_regs_write_reg(regs, sizereg, -size);
}
}
}
if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE)
regs->pc -= 8;
else
regs->pc -= 4;
arm64_mops_reset_regs(&regs->user_regs, esr);
/*
* If single stepping then finish the step before executing the


@ -453,7 +453,7 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
timer_ctx->irq.level);
if (!userspace_irqchip(vcpu->kvm)) {
ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
timer_irq(timer_ctx),
timer_ctx->irq.level,
timer_ctx);
@ -936,7 +936,7 @@ void kvm_timer_sync_user(struct kvm_vcpu *vcpu)
unmask_vtimer_irq_user(vcpu);
}
int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = vcpu_timer(vcpu);
struct timer_map map;
@ -980,8 +980,6 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
soft_timer_cancel(&map.emul_vtimer->hrtimer);
if (map.emul_ptimer)
soft_timer_cancel(&map.emul_ptimer->hrtimer);
return 0;
}
static void timer_context_init(struct kvm_vcpu *vcpu, int timerid)


@ -205,6 +205,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
if (is_protected_kvm_enabled())
pkvm_destroy_hyp_vm(kvm);
kfree(kvm->arch.mpidr_data);
kvm_destroy_vcpus(kvm);
kvm_unshare_hyp(kvm, kvm + 1);
@ -317,6 +318,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES:
r = kvm_supported_block_sizes();
break;
case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
r = BIT(0);
break;
default:
r = 0;
}
@ -367,7 +371,6 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
/* Force users to call KVM_ARM_VCPU_INIT */
vcpu_clear_flag(vcpu, VCPU_INITIALIZED);
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
@ -438,9 +441,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
* We might get preempted before the vCPU actually runs, but
* over-invalidation doesn't affect correctness.
*/
if (*last_ran != vcpu->vcpu_id) {
if (*last_ran != vcpu->vcpu_idx) {
kvm_call_hyp(__kvm_flush_cpu_context, mmu);
*last_ran = vcpu->vcpu_id;
*last_ran = vcpu->vcpu_idx;
}
vcpu->cpu = cpu;
@ -448,7 +451,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
kvm_vgic_load(vcpu);
kvm_timer_vcpu_load(vcpu);
if (has_vhe())
kvm_vcpu_load_sysregs_vhe(vcpu);
kvm_vcpu_load_vhe(vcpu);
kvm_arch_vcpu_load_fp(vcpu);
kvm_vcpu_pmu_restore_guest(vcpu);
if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
@ -472,7 +475,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_arch_vcpu_put_debug_state_flags(vcpu);
kvm_arch_vcpu_put_fp(vcpu);
if (has_vhe())
kvm_vcpu_put_sysregs_vhe(vcpu);
kvm_vcpu_put_vhe(vcpu);
kvm_timer_vcpu_put(vcpu);
kvm_vgic_put(vcpu);
kvm_vcpu_pmu_restore_host(vcpu);
@ -578,6 +581,57 @@ static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
return vcpu_get_flag(vcpu, VCPU_INITIALIZED);
}
static void kvm_init_mpidr_data(struct kvm *kvm)
{
struct kvm_mpidr_data *data = NULL;
unsigned long c, mask, nr_entries;
u64 aff_set = 0, aff_clr = ~0UL;
struct kvm_vcpu *vcpu;
mutex_lock(&kvm->arch.config_lock);
if (kvm->arch.mpidr_data || atomic_read(&kvm->online_vcpus) == 1)
goto out;
kvm_for_each_vcpu(c, vcpu, kvm) {
u64 aff = kvm_vcpu_get_mpidr_aff(vcpu);
aff_set |= aff;
aff_clr &= aff;
}
/*
* A significant bit can be either 0 or 1, and will only appear in
* aff_set. Use aff_clr to weed out the useless stuff.
*/
mask = aff_set ^ aff_clr;
nr_entries = BIT_ULL(hweight_long(mask));
/*
* Don't let userspace fool us. If we need more than a single page
* to describe the compressed MPIDR array, just fall back to the
* iterative method. Single vcpu VMs do not need this either.
*/
if (struct_size(data, cmpidr_to_idx, nr_entries) <= PAGE_SIZE)
data = kzalloc(struct_size(data, cmpidr_to_idx, nr_entries),
GFP_KERNEL_ACCOUNT);
if (!data)
goto out;
data->mpidr_mask = mask;
kvm_for_each_vcpu(c, vcpu, kvm) {
u64 aff = kvm_vcpu_get_mpidr_aff(vcpu);
u16 index = kvm_mpidr_index(data, aff);
data->cmpidr_to_idx[index] = c;
}
kvm->arch.mpidr_data = data;
out:
mutex_unlock(&kvm->arch.config_lock);
}
/*
* Handle both the initialisation that is being done when the vcpu is
* run for the first time, as well as the updates that must be
@ -601,6 +655,8 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
if (likely(vcpu_has_run_once(vcpu)))
return 0;
kvm_init_mpidr_data(kvm);
kvm_arm_vcpu_init_debug(vcpu);
if (likely(irqchip_in_kernel(kvm))) {
@ -801,8 +857,7 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
}
if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
kvm_pmu_handle_pmcr(vcpu,
__vcpu_sys_reg(vcpu, PMCR_EL0));
kvm_vcpu_reload_pmu(vcpu);
if (kvm_check_request(KVM_REQ_RESYNC_PMU_EL0, vcpu))
kvm_vcpu_pmu_restore_guest(vcpu);
@ -950,7 +1005,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
* making a thread's VMID inactive. So we need to call
* kvm_arm_vmid_update() in non-preemptible context.
*/
kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid);
if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) &&
has_vhe())
__load_stage2(vcpu->arch.hw_mmu,
vcpu->arch.hw_mmu->arch);
kvm_pmu_flush_hwstate(vcpu);
@ -1134,27 +1192,23 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
bool line_status)
{
u32 irq = irq_level->irq;
unsigned int irq_type, vcpu_idx, irq_num;
int nrcpus = atomic_read(&kvm->online_vcpus);
unsigned int irq_type, vcpu_id, irq_num;
struct kvm_vcpu *vcpu = NULL;
bool level = irq_level->level;
irq_type = (irq >> KVM_ARM_IRQ_TYPE_SHIFT) & KVM_ARM_IRQ_TYPE_MASK;
vcpu_idx = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK;
vcpu_idx += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1);
vcpu_id = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK;
vcpu_id += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1);
irq_num = (irq >> KVM_ARM_IRQ_NUM_SHIFT) & KVM_ARM_IRQ_NUM_MASK;
trace_kvm_irq_line(irq_type, vcpu_idx, irq_num, irq_level->level);
trace_kvm_irq_line(irq_type, vcpu_id, irq_num, irq_level->level);
switch (irq_type) {
case KVM_ARM_IRQ_TYPE_CPU:
if (irqchip_in_kernel(kvm))
return -ENXIO;
if (vcpu_idx >= nrcpus)
return -EINVAL;
vcpu = kvm_get_vcpu(kvm, vcpu_idx);
vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
if (!vcpu)
return -EINVAL;
@ -1166,17 +1220,14 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
if (!irqchip_in_kernel(kvm))
return -ENXIO;
if (vcpu_idx >= nrcpus)
return -EINVAL;
vcpu = kvm_get_vcpu(kvm, vcpu_idx);
vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id);
if (!vcpu)
return -EINVAL;
if (irq_num < VGIC_NR_SGIS || irq_num >= VGIC_NR_PRIVATE_IRQS)
return -EINVAL;
return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level, NULL);
return kvm_vgic_inject_irq(kvm, vcpu, irq_num, level, NULL);
case KVM_ARM_IRQ_TYPE_SPI:
if (!irqchip_in_kernel(kvm))
return -ENXIO;
@ -1184,12 +1235,36 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
if (irq_num < VGIC_NR_PRIVATE_IRQS)
return -EINVAL;
return kvm_vgic_inject_irq(kvm, 0, irq_num, level, NULL);
return kvm_vgic_inject_irq(kvm, NULL, irq_num, level, NULL);
}
return -EINVAL;
}
static unsigned long system_supported_vcpu_features(void)
{
unsigned long features = KVM_VCPU_VALID_FEATURES;
if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1))
clear_bit(KVM_ARM_VCPU_EL1_32BIT, &features);
if (!kvm_arm_support_pmu_v3())
clear_bit(KVM_ARM_VCPU_PMU_V3, &features);
if (!system_supports_sve())
clear_bit(KVM_ARM_VCPU_SVE, &features);
if (!system_has_full_ptr_auth()) {
clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features);
clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features);
}
if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
clear_bit(KVM_ARM_VCPU_HAS_EL2, &features);
return features;
}
static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
const struct kvm_vcpu_init *init)
{
@ -1204,12 +1279,25 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
return -ENOENT;
}
if (features & ~system_supported_vcpu_features())
return -EINVAL;
/*
* For now make sure that both address/generic pointer authentication
* features are requested by the userspace together.
*/
if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features) !=
test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features))
return -EINVAL;
/* Disallow NV+SVE for the time being */
if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features) &&
test_bit(KVM_ARM_VCPU_SVE, &features))
return -EINVAL;
if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features))
return 0;
if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
return -EINVAL;
/* MTE is incompatible with AArch32 */
if (kvm_has_mte(vcpu->kvm))
return -EINVAL;
@ -1226,7 +1314,23 @@ static bool kvm_vcpu_init_changed(struct kvm_vcpu *vcpu,
{
unsigned long features = init->features[0];
return !bitmap_equal(vcpu->arch.features, &features, KVM_VCPU_MAX_FEATURES);
return !bitmap_equal(vcpu->kvm->arch.vcpu_features, &features,
KVM_VCPU_MAX_FEATURES);
}
static int kvm_setup_vcpu(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
int ret = 0;
/*
* When the vCPU has a PMU, but no PMU is set for the guest
* yet, set the default one.
*/
if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu)
ret = kvm_arm_set_default_pmu(kvm);
return ret;
}
static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
@ -1239,21 +1343,21 @@ static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
mutex_lock(&kvm->arch.config_lock);
if (test_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags) &&
!bitmap_equal(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES))
kvm_vcpu_init_changed(vcpu, init))
goto out_unlock;
bitmap_copy(vcpu->arch.features, &features, KVM_VCPU_MAX_FEATURES);
/* Now we know what it is, we can reset it. */
ret = kvm_reset_vcpu(vcpu);
if (ret) {
bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
goto out_unlock;
}
bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
ret = kvm_setup_vcpu(vcpu);
if (ret)
goto out_unlock;
/* Now we know what it is, we can reset it. */
kvm_reset_vcpu(vcpu);
set_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags);
vcpu_set_flag(vcpu, VCPU_INITIALIZED);
ret = 0;
out_unlock:
mutex_unlock(&kvm->arch.config_lock);
return ret;
@ -1278,7 +1382,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
if (kvm_vcpu_init_changed(vcpu, init))
return -EINVAL;
return kvm_reset_vcpu(vcpu);
kvm_reset_vcpu(vcpu);
return 0;
}
static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
@ -1629,6 +1734,13 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
return kvm_vm_set_attr(kvm, &attr);
}
case KVM_ARM_GET_REG_WRITABLE_MASKS: {
struct reg_mask_range range;
if (copy_from_user(&range, argp, sizeof(range)))
return -EFAULT;
return kvm_vm_ioctl_get_reg_writable_masks(kvm, &range);
}
default:
return -EINVAL;
}
@ -2341,6 +2453,18 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
unsigned long i;
mpidr &= MPIDR_HWID_BITMASK;
if (kvm->arch.mpidr_data) {
u16 idx = kvm_mpidr_index(kvm->arch.mpidr_data, mpidr);
vcpu = kvm_get_vcpu(kvm,
kvm->arch.mpidr_data->cmpidr_to_idx[idx]);
if (mpidr != kvm_vcpu_get_mpidr_aff(vcpu))
vcpu = NULL;
return vcpu;
}
kvm_for_each_vcpu(i, vcpu, kvm) {
if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu))
return vcpu;


@ -648,15 +648,80 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
SR_TRAP(SYS_APGAKEYLO_EL1, CGT_HCR_APK),
SR_TRAP(SYS_APGAKEYHI_EL1, CGT_HCR_APK),
/* All _EL2 registers */
SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0),
sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV),
SR_TRAP(SYS_BRBCR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VPIDR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VMPIDR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_SCTLR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_ACTLR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_SCTLR2_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_HCR_EL2,
SYS_HCRX_EL2, CGT_HCR_NV),
SR_TRAP(SYS_SMPRIMAP_EL2, CGT_HCR_NV),
SR_TRAP(SYS_SMCR_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_TTBR0_EL2,
SYS_TCR2_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VTTBR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VTCR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VNCR_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_HDFGRTR_EL2,
SYS_HAFGRTR_EL2, CGT_HCR_NV),
/* Skip the SP_EL1 encoding... */
SR_TRAP(SYS_SPSR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_ELR_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1),
sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV),
SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0),
sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV),
/* Skip SPSR_irq, SPSR_abt, SPSR_und, SPSR_fiq */
SR_TRAP(SYS_AFSR0_EL2, CGT_HCR_NV),
SR_TRAP(SYS_AFSR1_EL2, CGT_HCR_NV),
SR_TRAP(SYS_ESR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VSESR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_TFSR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_FAR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_HPFAR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_PMSCR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_MAIR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_AMAIR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_MPAMHCR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_MPAMVPMV_EL2, CGT_HCR_NV),
SR_TRAP(SYS_MPAM2_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_MPAMVPM0_EL2,
SYS_MPAMVPM7_EL2, CGT_HCR_NV),
/*
* Note that the spec. describes a group of MEC registers
* whose access should not trap, therefore skip the following:
* MECID_A0_EL2, MECID_A1_EL2, MECID_P0_EL2,
* MECID_P1_EL2, MECIDR_EL2, VMECID_A_EL2,
* VMECID_P_EL2.
*/
SR_RANGE_TRAP(SYS_VBAR_EL2,
SYS_RMR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_VDISR_EL2, CGT_HCR_NV),
/* ICH_AP0R<m>_EL2 */
SR_RANGE_TRAP(SYS_ICH_AP0R0_EL2,
SYS_ICH_AP0R3_EL2, CGT_HCR_NV),
/* ICH_AP1R<m>_EL2 */
SR_RANGE_TRAP(SYS_ICH_AP1R0_EL2,
SYS_ICH_AP1R3_EL2, CGT_HCR_NV),
SR_TRAP(SYS_ICC_SRE_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_ICH_HCR_EL2,
SYS_ICH_EISR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_ICH_ELRSR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_ICH_VMCR_EL2, CGT_HCR_NV),
/* ICH_LR<m>_EL2 */
SR_RANGE_TRAP(SYS_ICH_LR0_EL2,
SYS_ICH_LR15_EL2, CGT_HCR_NV),
SR_TRAP(SYS_CONTEXTIDR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_TPIDR_EL2, CGT_HCR_NV),
SR_TRAP(SYS_SCXTNUM_EL2, CGT_HCR_NV),
/* AMEVCNTVOFF0<n>_EL2, AMEVCNTVOFF1<n>_EL2 */
SR_RANGE_TRAP(SYS_AMEVCNTVOFF0n_EL2(0),
SYS_AMEVCNTVOFF1n_EL2(15), CGT_HCR_NV),
/* CNT*_EL2 */
SR_TRAP(SYS_CNTVOFF_EL2, CGT_HCR_NV),
SR_TRAP(SYS_CNTPOFF_EL2, CGT_HCR_NV),
SR_TRAP(SYS_CNTHCTL_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_CNTHP_TVAL_EL2,
SYS_CNTHP_CVAL_EL2, CGT_HCR_NV),
SR_RANGE_TRAP(SYS_CNTHV_TVAL_EL2,
SYS_CNTHV_CVAL_EL2, CGT_HCR_NV),
/* All _EL02, _EL12 registers */
SR_RANGE_TRAP(sys_reg(3, 5, 0, 0, 0),
sys_reg(3, 5, 10, 15, 7), CGT_HCR_NV),


@ -30,6 +30,7 @@
#include <asm/fpsimd.h>
#include <asm/debug-monitors.h>
#include <asm/processor.h>
#include <asm/traps.h>
struct kvm_exception_table_entry {
int insn, fixup;
@ -265,6 +266,22 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault);
}
static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
{
*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2);
write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR);
/*
* Finish potential single step before executing the prologue
* instruction.
*/
*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
return true;
}
static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);


@ -197,7 +197,8 @@
#define PVM_ID_AA64ISAR2_ALLOW (\
ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \
ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) \
ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | \
ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) \
)
u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);


@ -129,8 +129,8 @@ static void prepare_host_vtcr(void)
parange = kvm_get_parange(id_aa64mmfr0_el1_sys_val);
phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange);
host_mmu.arch.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val,
id_aa64mmfr1_el1_sys_val, phys_shift);
host_mmu.arch.mmu.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val,
id_aa64mmfr1_el1_sys_val, phys_shift);
}
static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot);
@ -235,7 +235,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
unsigned long nr_pages;
int ret;
nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT;
nr_pages = kvm_pgtable_stage2_pgd_size(mmu->vtcr) >> PAGE_SHIFT;
ret = hyp_pool_init(&vm->pool, hyp_virt_to_pfn(pgd), nr_pages, 0);
if (ret)
return ret;
@ -295,7 +295,7 @@ int __pkvm_prot_finalize(void)
return -EPERM;
params->vttbr = kvm_get_vttbr(mmu);
params->vtcr = host_mmu.arch.vtcr;
params->vtcr = mmu->vtcr;
params->hcr_el2 |= HCR_VM;
/*


@ -303,7 +303,7 @@ static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm,
{
hyp_vm->host_kvm = host_kvm;
hyp_vm->kvm.created_vcpus = nr_vcpus;
hyp_vm->kvm.arch.vtcr = host_mmu.arch.vtcr;
hyp_vm->kvm.arch.mmu.vtcr = host_mmu.arch.mmu.vtcr;
}
static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
@ -483,7 +483,7 @@ int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
}
vm_size = pkvm_get_hyp_vm_size(nr_vcpus);
pgd_size = kvm_pgtable_stage2_pgd_size(host_mmu.arch.vtcr);
pgd_size = kvm_pgtable_stage2_pgd_size(host_mmu.arch.mmu.vtcr);
ret = -ENOMEM;


@ -192,6 +192,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
[ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low,
[ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low,
[ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth,
[ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
};
static const exit_handler_fn pvm_exit_handlers[] = {
@ -203,6 +204,7 @@ static const exit_handler_fn pvm_exit_handlers[] = {
[ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low,
[ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low,
[ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth,
[ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
};
static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)


@ -1314,7 +1314,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
KVM_PGTABLE_WALK_HANDLE_FAULT |
KVM_PGTABLE_WALK_SHARED);
if (!ret)
if (!ret || ret == -EAGAIN)
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
return ret;
}
@ -1511,7 +1511,7 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
kvm_pgtable_force_pte_cb_t force_pte_cb)
{
size_t pgd_sz;
u64 vtcr = mmu->arch->vtcr;
u64 vtcr = mmu->vtcr;
u32 ia_bits = VTCR_EL2_IPA(vtcr);
u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;


@ -137,12 +137,12 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
NOKPROBE_SYMBOL(__deactivate_traps);
/*
* Disable IRQs in {activate,deactivate}_traps_vhe_{load,put}() to
* Disable IRQs in __vcpu_{load,put}_{activate,deactivate}_traps() to
* prevent a race condition between context switching of PMUSERENR_EL0
* in __{activate,deactivate}_traps_common() and IPIs that attempt to
* update PMUSERENR_EL0. See also kvm_set_pmuserenr().
*/
void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
static void __vcpu_load_activate_traps(struct kvm_vcpu *vcpu)
{
unsigned long flags;
@ -151,7 +151,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
local_irq_restore(flags);
}
void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
static void __vcpu_put_deactivate_traps(struct kvm_vcpu *vcpu)
{
unsigned long flags;
@ -160,6 +160,19 @@ void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
local_irq_restore(flags);
}
void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu)
{
__vcpu_load_switch_sysregs(vcpu);
__vcpu_load_activate_traps(vcpu);
__load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
}
void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
{
__vcpu_put_deactivate_traps(vcpu);
__vcpu_put_switch_sysregs(vcpu);
}
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
@ -170,6 +183,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
[ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low,
[ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low,
[ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth,
[ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
};
static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
@ -214,17 +228,11 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
sysreg_save_host_state_vhe(host_ctxt);
/*
* ARM erratum 1165522 requires us to configure both stage 1 and
* stage 2 translation for the guest context before we clear
* HCR_EL2.TGE.
*
* We have already configured the guest's stage 1 translation in
* kvm_vcpu_load_sysregs_vhe above. We must now call
* __load_stage2 before __activate_traps, because
* __load_stage2 configures stage 2 translation, and
* __activate_traps clear HCR_EL2.TGE (among other things).
* Note that ARM erratum 1165522 requires us to configure both stage 1
* and stage 2 translation for the guest context before we clear
* HCR_EL2.TGE. The stage 1 and stage 2 guest context has already been
* loaded on the CPU in kvm_vcpu_load_vhe().
*/
__load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);
__activate_traps(vcpu);
__kvm_adjust_pc(vcpu);


@ -52,7 +52,7 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt)
NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
/**
* kvm_vcpu_load_sysregs_vhe - Load guest system registers to the physical CPU
* __vcpu_load_switch_sysregs - Load guest system registers to the physical CPU
*
* @vcpu: The VCPU pointer
*
@ -62,7 +62,7 @@ NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe);
* and loading system register state early avoids having to load them on
* every entry to the VM.
*/
void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
struct kvm_cpu_context *host_ctxt;
@ -92,12 +92,10 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
__sysreg_restore_el1_state(guest_ctxt);
vcpu_set_flag(vcpu, SYSREGS_ON_CPU);
activate_traps_vhe_load(vcpu);
}
/**
* kvm_vcpu_put_sysregs_vhe - Restore host system registers to the physical CPU
* __vcpu_put_switch_sysregs - Restore host system registers to the physical CPU
*
* @vcpu: The VCPU pointer
*
@ -107,13 +105,12 @@ void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu)
* and deferring saving system register state until we're no longer running the
* VCPU avoids having to save them on every exit from the VM.
*/
void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu)
void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt;
struct kvm_cpu_context *host_ctxt;
host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
deactivate_traps_vhe_put(vcpu);
__sysreg_save_el1_state(guest_ctxt);
__sysreg_save_user_state(guest_ctxt);


@ -11,18 +11,25 @@
#include <asm/tlbflush.h>
struct tlb_inv_context {
unsigned long flags;
u64 tcr;
u64 sctlr;
struct kvm_s2_mmu *mmu;
unsigned long flags;
u64 tcr;
u64 sctlr;
};
static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
struct tlb_inv_context *cxt)
{
struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
u64 val;
local_irq_save(cxt->flags);
if (vcpu && mmu != vcpu->arch.hw_mmu)
cxt->mmu = vcpu->arch.hw_mmu;
else
cxt->mmu = NULL;
if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
/*
* For CPUs that are affected by ARM errata 1165522 or 1530923,
@ -66,10 +73,13 @@ static void __tlb_switch_to_host(struct tlb_inv_context *cxt)
* We're done with the TLB operation, let's restore the host's
* view of HCR_EL2.
*/
write_sysreg(0, vttbr_el2);
write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
isb();
/* ... and the stage-2 MMU context that we switched away from */
if (cxt->mmu)
__load_stage2(cxt->mmu, cxt->mmu->arch);
if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
/* Restore the registers to what they were */
write_sysreg_el1(cxt->tcr, SYS_TCR);


@ -133,12 +133,10 @@ static bool kvm_smccc_test_fw_bmap(struct kvm_vcpu *vcpu, u32 func_id)
ARM_SMCCC_SMC_64, \
0, ARM_SMCCC_FUNC_MASK)
static void init_smccc_filter(struct kvm *kvm)
static int kvm_smccc_filter_insert_reserved(struct kvm *kvm)
{
int r;
mt_init(&kvm->arch.smccc_filter);
/*
* Prevent userspace from handling any SMCCC calls in the architecture
* range, avoiding the risk of misrepresenting Spectre mitigation status
@ -148,14 +146,25 @@ static void init_smccc_filter(struct kvm *kvm)
SMC32_ARCH_RANGE_BEGIN, SMC32_ARCH_RANGE_END,
xa_mk_value(KVM_SMCCC_FILTER_HANDLE),
GFP_KERNEL_ACCOUNT);
WARN_ON_ONCE(r);
if (r)
goto out_destroy;
r = mtree_insert_range(&kvm->arch.smccc_filter,
SMC64_ARCH_RANGE_BEGIN, SMC64_ARCH_RANGE_END,
xa_mk_value(KVM_SMCCC_FILTER_HANDLE),
GFP_KERNEL_ACCOUNT);
WARN_ON_ONCE(r);
if (r)
goto out_destroy;
return 0;
out_destroy:
mtree_destroy(&kvm->arch.smccc_filter);
return r;
}
static bool kvm_smccc_filter_configured(struct kvm *kvm)
{
return !mtree_empty(&kvm->arch.smccc_filter);
}
static int kvm_smccc_set_filter(struct kvm *kvm, struct kvm_smccc_filter __user *uaddr)
@ -184,13 +193,14 @@ static int kvm_smccc_set_filter(struct kvm *kvm, struct kvm_smccc_filter __user
goto out_unlock;
}
if (!kvm_smccc_filter_configured(kvm)) {
r = kvm_smccc_filter_insert_reserved(kvm);
if (WARN_ON_ONCE(r))
goto out_unlock;
}
r = mtree_insert_range(&kvm->arch.smccc_filter, start, end,
xa_mk_value(filter.action), GFP_KERNEL_ACCOUNT);
if (r)
goto out_unlock;
set_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags);
out_unlock:
mutex_unlock(&kvm->arch.config_lock);
return r;
@ -201,7 +211,7 @@ static u8 kvm_smccc_filter_get_action(struct kvm *kvm, u32 func_id)
unsigned long idx = func_id;
void *val;
if (!test_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags))
if (!kvm_smccc_filter_configured(kvm))
return KVM_SMCCC_FILTER_HANDLE;
/*
@ -387,7 +397,7 @@ void kvm_arm_init_hypercalls(struct kvm *kvm)
smccc_feat->std_hyp_bmap = KVM_ARM_SMCCC_STD_HYP_FEATURES;
smccc_feat->vendor_hyp_bmap = KVM_ARM_SMCCC_VENDOR_HYP_FEATURES;
init_smccc_filter(kvm);
mt_init(&kvm->arch.smccc_filter);
}
void kvm_arm_teardown_hypercalls(struct kvm *kvm)
@ -554,7 +564,7 @@ int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
bool wants_02;
wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
wants_02 = vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2);
switch (val) {
case KVM_ARM_PSCI_0_1:


@ -135,6 +135,9 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
* volunteered to do so, and bail out otherwise.
*/
if (!kvm_vcpu_dabt_isvalid(vcpu)) {
trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
kvm_vcpu_get_hfar(vcpu), fault_ipa);
if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
&vcpu->kvm->arch.flags)) {
run->exit_reason = KVM_EXIT_ARM_NISV;
@ -143,7 +146,6 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
return 0;
}
kvm_pr_unimpl("Data abort outside memslots with no valid syndrome info\n");
return -ENOSYS;
}


@ -892,7 +892,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
mmu->vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
if (mmu->pgt != NULL) {
kvm_err("kvm_arch already initialized?\n");
@ -1067,7 +1067,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
phys_addr_t addr;
int ret = 0;
struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
struct kvm_s2_mmu *mmu = &kvm->arch.mmu;
struct kvm_pgtable *pgt = mmu->pgt;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
KVM_PGTABLE_PROT_R |
(writable ? KVM_PGTABLE_PROT_W : 0);
@ -1080,7 +1081,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) {
ret = kvm_mmu_topup_memory_cache(&cache,
kvm_mmu_cache_min_pages(kvm));
kvm_mmu_cache_min_pages(mmu));
if (ret)
break;
@ -1298,28 +1299,8 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
if (sz < PMD_SIZE)
return PAGE_SIZE;
/*
* The address we faulted on is backed by a transparent huge
* page. However, because we map the compound huge page and
* not the individual tail page, we need to transfer the
* refcount to the head page. We have to be careful that the
* THP doesn't start to split while we are adjusting the
* refcounts.
*
* We are sure this doesn't happen, because mmu_invalidate_retry
* was successful and we are holding the mmu_lock, so if this
* THP is trying to split, it will be blocked in the mmu
* notifier before touching any of the pages, specifically
* before being able to call __split_huge_page_refcount().
*
* We can therefore safely transfer the refcount from PG_tail
* to PG_head and switch the pfn from a tail page to the head
* page accordingly.
*/
*ipap &= PMD_MASK;
kvm_release_pfn_clean(pfn);
pfn &= ~(PTRS_PER_PMD - 1);
get_page(pfn_to_page(pfn));
*pfnp = pfn;
return PMD_SIZE;
@ -1431,7 +1412,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (fault_status != ESR_ELx_FSC_PERM ||
(logging_active && write_fault)) {
ret = kvm_mmu_topup_memory_cache(memcache,
kvm_mmu_cache_min_pages(kvm));
kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
if (ret)
return ret;
}
@ -1747,7 +1728,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
}
/* Userspace should not be able to register out-of-bounds IPAs */
VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->kvm));
VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->arch.hw_mmu));
if (fault_status == ESR_ELx_FSC_ACCESS) {
handle_access_fault(vcpu, fault_ipa);
@ -2021,7 +2002,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
* Prevent userspace from creating a memory region outside of the IPA
* space addressable by the KVM guest IPA space.
*/
if ((new->base_gfn + new->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT))
return -EFAULT;
hva = new->userspace_addr;


@ -123,7 +123,7 @@ static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
if (host_kvm->created_vcpus < 1)
return -EINVAL;
pgd_sz = kvm_pgtable_stage2_pgd_size(host_kvm->arch.vtcr);
pgd_sz = kvm_pgtable_stage2_pgd_size(host_kvm->arch.mmu.vtcr);
/*
* The PGD pages will be reclaimed using a hyp_memcache which implies


@ -60,6 +60,23 @@ static u32 kvm_pmu_event_mask(struct kvm *kvm)
return __kvm_pmu_event_mask(pmuver);
}
u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
{
u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
kvm_pmu_event_mask(kvm);
u64 pfr0 = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL2, pfr0))
mask |= ARMV8_PMU_INCLUDE_EL2;
if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr0))
mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
ARMV8_PMU_EXCLUDE_NS_EL1 |
ARMV8_PMU_EXCLUDE_EL3;
return mask;
}
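As a rough worked example (editorial illustration, not part of the patch): for a guest whose ID_AA64PFR0_EL1 view advertises neither EL2 nor EL3, and assuming kvm_pmu_event_mask() yields the 16-bit event ID space of FEAT_PMUv3p1 and later (GENMASK(15, 0)), the returned mask is ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | 0xffff = 0xc000ffff; the NS_* and EL2/EL3 filter bits only become writable in the event type registers once the corresponding exception levels are visible to the guest.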
/**
* kvm_pmc_is_64bit - determine if counter is 64bit
* @pmc: counter context
@ -72,7 +89,7 @@ static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
{
u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0);
u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc));
return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) ||
(pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC));
@ -250,7 +267,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
{
u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT;
u64 val = kvm_vcpu_read_pmcr(vcpu) >> ARMV8_PMU_PMCR_N_SHIFT;
val &= ARMV8_PMU_PMCR_N_MASK;
if (val == 0)
@ -272,7 +289,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
if (!kvm_vcpu_has_pmu(vcpu))
return;
if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val)
return;
for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
@ -324,7 +341,7 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
{
u64 reg = 0;
if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
@ -348,7 +365,7 @@ static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
pmu->irq_level = overflow;
if (likely(irqchip_in_kernel(vcpu->kvm))) {
int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu,
pmu->irq_num, overflow, pmu);
WARN_ON(ret);
}
@ -426,7 +443,7 @@ static void kvm_pmu_counter_increment(struct kvm_vcpu *vcpu,
{
int i;
if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E))
return;
/* Weed out disabled counters */
@ -569,7 +586,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
{
struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) &&
(__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx));
}
@ -584,6 +601,7 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
struct perf_event *event;
struct perf_event_attr attr;
u64 eventsel, reg, data;
bool p, u, nsk, nsu;
reg = counter_index_to_evtreg(pmc->idx);
data = __vcpu_sys_reg(vcpu, reg);
@ -610,13 +628,18 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
!test_bit(eventsel, vcpu->kvm->arch.pmu_filter))
return;
p = data & ARMV8_PMU_EXCLUDE_EL1;
u = data & ARMV8_PMU_EXCLUDE_EL0;
nsk = data & ARMV8_PMU_EXCLUDE_NS_EL1;
nsu = data & ARMV8_PMU_EXCLUDE_NS_EL0;
memset(&attr, 0, sizeof(struct perf_event_attr));
attr.type = arm_pmu->pmu.type;
attr.size = sizeof(attr);
attr.pinned = 1;
attr.disabled = !kvm_pmu_counter_is_enabled(pmc);
attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
attr.exclude_user = (u != nsu);
attr.exclude_kernel = (p != nsk);
attr.exclude_hv = 1; /* Don't count EL2 events */
attr.exclude_host = 1; /* Don't count host events */
attr.config = eventsel;
@ -657,18 +680,13 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx)
{
struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, select_idx);
u64 reg, mask;
u64 reg;
if (!kvm_vcpu_has_pmu(vcpu))
return;
mask = ARMV8_PMU_EVTYPE_MASK;
mask &= ~ARMV8_PMU_EVTYPE_EVENT;
mask |= kvm_pmu_event_mask(vcpu->kvm);
reg = counter_index_to_evtreg(pmc->idx);
__vcpu_sys_reg(vcpu, reg) = data & mask;
__vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm);
kvm_pmu_create_perf_event(pmc);
}
@ -717,10 +735,9 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
* It is still necessary to get a valid cpu, though, to probe for the
* default PMU instance as userspace is not required to specify a PMU
* type. In order to uphold the preexisting behavior KVM selects the
* PMU instance for the core where the first call to the
* KVM_ARM_VCPU_PMU_V3_CTRL attribute group occurs. A dependent use case
* would be a user with disdain of all things big.LITTLE that affines
* the VMM to a particular cluster of cores.
* PMU instance for the core during vcpu init. A dependent use
* case would be a user with disdain of all things big.LITTLE that
* affines the VMM to a particular cluster of cores.
*
* In any case, userspace should just do the sane thing and use the UAPI
* to select a PMU type directly. But, be wary of the baggage being
@ -786,6 +803,17 @@ u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
return val & mask;
}
void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu)
{
u64 mask = kvm_pmu_valid_counter_mask(vcpu);
kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu));
__vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask;
__vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask;
__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask;
}
int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
{
if (!kvm_vcpu_has_pmu(vcpu))
@ -874,6 +902,52 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
return true;
}
/**
* kvm_arm_pmu_get_max_counters - Return the max number of PMU counters.
* @kvm: The kvm pointer
*/
u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
{
struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
/*
* The arm_pmu->num_events considers the cycle counter as well.
* Ignore that and return only the general-purpose counters.
*/
return arm_pmu->num_events - 1;
}
static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu)
{
lockdep_assert_held(&kvm->arch.config_lock);
kvm->arch.arm_pmu = arm_pmu;
kvm->arch.pmcr_n = kvm_arm_pmu_get_max_counters(kvm);
}
/**
* kvm_arm_set_default_pmu - No PMU set, get the default one.
* @kvm: The kvm pointer
*
* The observant among you will notice that the supported_cpus
* mask does not get updated for the default PMU even though it
* is quite possible the selected instance supports only a
* subset of cores in the system. This is intentional, and
* upholds the preexisting behavior on heterogeneous systems
* where vCPUs can be scheduled on any core but the guest
* counters could stop working.
*/
int kvm_arm_set_default_pmu(struct kvm *kvm)
{
struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu();
if (!arm_pmu)
return -ENODEV;
kvm_arm_set_pmu(kvm, arm_pmu);
return 0;
}
static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
{
struct kvm *kvm = vcpu->kvm;
@ -893,7 +967,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
break;
}
kvm->arch.arm_pmu = arm_pmu;
kvm_arm_set_pmu(kvm, arm_pmu);
cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus);
ret = 0;
break;
@ -916,23 +990,6 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
if (vcpu->arch.pmu.created)
return -EBUSY;
if (!kvm->arch.arm_pmu) {
/*
* No PMU set, get the default one.
*
* The observant among you will notice that the supported_cpus
* mask does not get updated for the default PMU even though it
* is quite possible the selected instance supports only a
* subset of cores in the system. This is intentional, and
* upholds the preexisting behavior on heterogeneous systems
* where vCPUs can be scheduled on any core but the guest
* counters could stop working.
*/
kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
if (!kvm->arch.arm_pmu)
return -ENODEV;
}
switch (attr->attr) {
case KVM_ARM_VCPU_PMU_V3_IRQ: {
int __user *uaddr = (int __user *)(long)attr->addr;
@ -1072,3 +1129,15 @@ u8 kvm_arm_pmu_get_pmuver_limit(void)
ID_AA64DFR0_EL1_PMUVer_V3P5);
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp);
}
/**
* kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU
* @vcpu: The vcpu pointer
*/
u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
{
u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) &
~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
}


@ -73,11 +73,8 @@ int __init kvm_arm_init_sve(void)
return 0;
}
static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
{
if (!system_supports_sve())
return -EINVAL;
vcpu->arch.sve_max_vl = kvm_sve_max_vl;
/*
@ -86,8 +83,6 @@ static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
* kvm_arm_vcpu_finalize(), which freezes the configuration.
*/
vcpu_set_flag(vcpu, GUEST_HAS_SVE);
return 0;
}
/*
@ -170,20 +165,9 @@ static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu));
}
static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
static void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
{
/*
* For now make sure that both address/generic pointer authentication
* features are requested by the userspace together and the system
* supports these capabilities.
*/
if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
!test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) ||
!system_has_full_ptr_auth())
return -EINVAL;
vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH);
return 0;
}
/**
@ -204,10 +188,9 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
* disable preemption around the vcpu reset as we would otherwise race with
* preempt notifiers which also call put/load.
*/
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
struct vcpu_reset_state reset_state;
int ret;
bool loaded;
u32 pstate;
@ -224,29 +207,16 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
if (loaded)
kvm_arch_vcpu_put(vcpu);
/* Disallow NV+SVE for the time being */
if (vcpu_has_nv(vcpu) && vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) {
ret = -EINVAL;
goto out;
}
if (!kvm_arm_vcpu_sve_finalized(vcpu)) {
if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) {
ret = kvm_vcpu_enable_sve(vcpu);
if (ret)
goto out;
}
if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE))
kvm_vcpu_enable_sve(vcpu);
} else {
kvm_vcpu_reset_sve(vcpu);
}
if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
if (kvm_vcpu_enable_ptrauth(vcpu)) {
ret = -EINVAL;
goto out;
}
}
if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) ||
vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC))
kvm_vcpu_enable_ptrauth(vcpu);
if (vcpu_el1_is_32bit(vcpu))
pstate = VCPU_RESET_PSTATE_SVC;
@ -255,11 +225,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
else
pstate = VCPU_RESET_PSTATE_EL1;
if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
ret = -EINVAL;
goto out;
}
/* Reset core registers */
memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
@ -294,12 +259,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
}
/* Reset timer */
ret = kvm_timer_vcpu_reset(vcpu);
out:
kvm_timer_vcpu_reset(vcpu);
if (loaded)
kvm_arch_vcpu_load(vcpu, smp_processor_id());
preempt_enable();
return ret;
}
u32 get_kvm_ipa_limit(void)


@ -379,7 +379,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1);
u32 sr = reg_to_encoding(r);
if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
@ -719,14 +719,9 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
u64 mask = BIT(ARMV8_PMU_CYCLE_IDX);
u8 n = vcpu->kvm->arch.pmcr_n;
/* No PMU available, any PMU reg may UNDEF... */
if (!kvm_arm_support_pmu_v3())
return 0;
n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
n &= ARMV8_PMU_PMCR_N_MASK;
if (n)
mask |= GENMASK(n - 1, 0);
@ -746,8 +741,12 @@ static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
/* This thing will UNDEF, who cares about the reset value? */
if (!kvm_vcpu_has_pmu(vcpu))
return 0;
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK;
__vcpu_sys_reg(vcpu, r->reg) &= kvm_pmu_evtyper_mask(vcpu->kvm);
return __vcpu_sys_reg(vcpu, r->reg);
}
@ -762,17 +761,15 @@ static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 pmcr;
u64 pmcr = 0;
/* No PMU available, PMCR_EL0 may UNDEF... */
if (!kvm_arm_support_pmu_v3())
return 0;
/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
if (!kvm_supports_32bit_el0())
pmcr |= ARMV8_PMU_PMCR_LC;
/*
* The value of the PMCR.N field is included when the
* vCPU register is read via kvm_vcpu_read_pmcr().
*/
__vcpu_sys_reg(vcpu, r->reg) = pmcr;
return __vcpu_sys_reg(vcpu, r->reg);
@ -822,7 +819,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
* Only update writeable bits of PMCR (continuing into
* kvm_pmu_handle_pmcr() as well)
*/
val = __vcpu_sys_reg(vcpu, PMCR_EL0);
val = kvm_vcpu_read_pmcr(vcpu);
val &= ~ARMV8_PMU_PMCR_MASK;
val |= p->regval & ARMV8_PMU_PMCR_MASK;
if (!kvm_supports_32bit_el0())
@ -830,7 +827,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
kvm_pmu_handle_pmcr(vcpu, val);
} else {
/* PMCR.P & PMCR.C are RAZ */
val = __vcpu_sys_reg(vcpu, PMCR_EL0)
val = kvm_vcpu_read_pmcr(vcpu)
& ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C);
p->regval = val;
}
@ -879,7 +876,7 @@ static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
{
u64 pmcr, val;
pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
pmcr = kvm_vcpu_read_pmcr(vcpu);
val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
kvm_inject_undefined(vcpu);
@ -988,12 +985,45 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
kvm_vcpu_pmu_restore_guest(vcpu);
} else {
p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK;
p->regval = __vcpu_sys_reg(vcpu, reg);
}
return true;
}
static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 val)
{
bool set;
val &= kvm_pmu_valid_counter_mask(vcpu);
switch (r->reg) {
case PMOVSSET_EL0:
/* CRm[1] being set indicates a SET register, and CLR otherwise */
set = r->CRm & 2;
break;
default:
/* Op2[0] being set indicates a SET register, and CLR otherwise */
set = r->Op2 & 1;
break;
}
if (set)
__vcpu_sys_reg(vcpu, r->reg) |= val;
else
__vcpu_sys_reg(vcpu, r->reg) &= ~val;
return 0;
}
static int get_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 *val)
{
u64 mask = kvm_pmu_valid_counter_mask(vcpu);
*val = __vcpu_sys_reg(vcpu, r->reg) & mask;
return 0;
}
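For context, the CRm[1]/Op2[0] tests above work because each SET/CLR pair differs only in those encoding bits; the values below are the architectural encodings, quoted here for illustration and not taken from this series:

    PMCNTENSET_EL0   op0=3 op1=3 CRn=9 CRm=12 Op2=1    Op2[0]=1 -> SET
    PMCNTENCLR_EL0   op0=3 op1=3 CRn=9 CRm=12 Op2=2    Op2[0]=0 -> CLR
    PMINTENSET_EL1   op0=3 op1=0 CRn=9 CRm=14 Op2=1    Op2[0]=1 -> SET
    PMINTENCLR_EL1   op0=3 op1=0 CRn=9 CRm=14 Op2=2    Op2[0]=0 -> CLR
    PMOVSSET_EL0     op0=3 op1=3 CRn=9 CRm=14 Op2=3    CRm[1]=1 -> SET
    PMOVSCLR_EL0     op0=3 op1=3 CRn=9 CRm=12 Op2=3    CRm[1]=0 -> CLR

PMOVSSET/PMOVSCLR share Op2=3, which is why they are the special case keyed on CRm[1].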
static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
@ -1103,6 +1133,51 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
return true;
}
static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
u64 *val)
{
*val = kvm_vcpu_read_pmcr(vcpu);
return 0;
}
static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
u64 val)
{
u8 new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
struct kvm *kvm = vcpu->kvm;
mutex_lock(&kvm->arch.config_lock);
/*
* The vCPU can't have more counters than the PMU hardware
* implements. Ignore this error to maintain compatibility
* with the existing KVM behavior.
*/
if (!kvm_vm_has_ran_once(kvm) &&
new_n <= kvm_arm_pmu_get_max_counters(kvm))
kvm->arch.pmcr_n = new_n;
mutex_unlock(&kvm->arch.config_lock);
/*
* Ignore writes to RES0 bits, read only bits that are cleared on
* vCPU reset, and writable bits that KVM doesn't support yet.
* (i.e. only PMCR.N and bits [7:0] are mutable from userspace)
* The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU.
* But, we leave the bit as it is here, as the vCPU's PMUver might
* be changed later (NOTE: the bit will be cleared on first vCPU run
* if necessary).
*/
val &= ARMV8_PMU_PMCR_MASK;
/* The LC bit is RES1 when AArch32 is not supported */
if (!kvm_supports_32bit_el0())
val |= ARMV8_PMU_PMCR_LC;
__vcpu_sys_reg(vcpu, r->reg) = val;
return 0;
}
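A minimal userspace sketch of how the new PMCR_EL0.N handling is meant to be driven (illustrative only: vcpu_fd, nr_counters and the local PMCR_* defines are assumptions, error handling is elided, and the write has to land before the VM first runs, on a vCPU created with the PMUv3 feature):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <asm/kvm.h>                            /* ARM64_SYS_REG() */

    #define PMCR_EL0_ID     ARM64_SYS_REG(3, 3, 9, 12, 0)   /* PMCR_EL0 */
    #define PMCR_N_SHIFT    11
    #define PMCR_N_MASK     (0x1fULL << PMCR_N_SHIFT)

    /* Ask KVM to expose only nr_counters general-purpose counters to the guest. */
    static int limit_pmu_counters(int vcpu_fd, __u64 nr_counters)
    {
        __u64 pmcr;
        struct kvm_one_reg reg = {
            .id   = PMCR_EL0_ID,
            .addr = (__u64)(unsigned long)&pmcr,
        };

        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
            return -1;

        pmcr &= ~PMCR_N_MASK;                       /* rewrite only PMCR.N */
        pmcr |= nr_counters << PMCR_N_SHIFT;

        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }

Per set_pmcr() above, a value of N larger than kvm_arm_pmu_get_max_counters() is silently ignored rather than rejected.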
/* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
#define DBG_BCR_BVR_WCR_WVR_EL1(n) \
{ SYS_DESC(SYS_DBGBVRn_EL1(n)), \
@ -1216,8 +1291,14 @@ static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
/* Some features have different safe value type in KVM than host features */
switch (id) {
case SYS_ID_AA64DFR0_EL1:
if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT)
switch (kvm_ftr.shift) {
case ID_AA64DFR0_EL1_PMUVer_SHIFT:
kvm_ftr.type = FTR_LOWER_SAFE;
break;
case ID_AA64DFR0_EL1_DebugVer_SHIFT:
kvm_ftr.type = FTR_LOWER_SAFE;
break;
}
break;
case SYS_ID_DFR0_EL1:
if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT)
@ -1228,7 +1309,7 @@ static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
return arm64_ftr_safe_value(&kvm_ftr, new, cur);
}
/**
/*
* arm64_check_features() - Check if a feature register value constitutes
* a subset of features indicated by the idreg's KVM sanitised limit.
*
@ -1338,7 +1419,6 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
if (!cpus_have_final_cap(ARM64_HAS_WFXT))
val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS);
break;
case SYS_ID_AA64MMFR2_EL1:
val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
@ -1373,6 +1453,13 @@ static inline bool is_id_reg(u32 id)
sys_reg_CRm(id) < 8);
}
static inline bool is_aa32_id_reg(u32 id)
{
return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
sys_reg_CRm(id) <= 3);
}
static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
@ -1469,14 +1556,21 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
return val;
}
#define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit) \
({ \
u64 __f_val = FIELD_GET(reg##_##field##_MASK, val); \
(val) &= ~reg##_##field##_MASK; \
(val) |= FIELD_PREP(reg##_##field##_MASK, \
min(__f_val, (u64)reg##_##field##_##limit)); \
(val); \
})
static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
u64 val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
/* Limit debug to ARMv8.0 */
val &= ~ID_AA64DFR0_EL1_DebugVer_MASK;
val |= SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, DebugVer, IMP);
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8);
/*
* Only initialize the PMU version if the vCPU was configured with one.
@ -1496,6 +1590,7 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 val)
{
u8 debugver = SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, val);
u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val);
/*
@ -1515,6 +1610,13 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
/*
* ID_AA64DFR0_EL1.DebugVer is one of those awkward fields with a
* nonzero minimum safe value.
*/
if (debugver < ID_AA64DFR0_EL1_DebugVer_IMP)
return -EINVAL;
return set_id_reg(vcpu, rd, val);
}
@ -1528,6 +1630,8 @@ static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu,
if (kvm_vcpu_has_pmu(vcpu))
val |= SYS_FIELD_PREP(ID_DFR0_EL1, PerfMon, perfmon);
val = ID_REG_LIMIT_FIELD_ENUM(val, ID_DFR0_EL1, CopDbg, Debugv8p8);
return val;
}
@ -1536,6 +1640,7 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
u64 val)
{
u8 perfmon = SYS_FIELD_GET(ID_DFR0_EL1, PerfMon, val);
u8 copdbg = SYS_FIELD_GET(ID_DFR0_EL1, CopDbg, val);
if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) {
val &= ~ID_DFR0_EL1_PerfMon_MASK;
@ -1551,6 +1656,9 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
if (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3)
return -EINVAL;
if (copdbg < ID_DFR0_EL1_CopDbg_Armv8)
return -EINVAL;
return set_id_reg(vcpu, rd, val);
}
@ -1791,8 +1899,8 @@ static unsigned int el2_visibility(const struct kvm_vcpu *vcpu,
* HCR_EL2.E2H==1, and only in the sysreg table for convenience of
* handling traps. Given that, they are always hidden from userspace.
*/
static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
static unsigned int hidden_user_visibility(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
return REG_HIDDEN_USER;
}
@ -1803,7 +1911,7 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
.reset = rst, \
.reg = name##_EL1, \
.val = v, \
.visibility = elx2_visibility, \
.visibility = hidden_user_visibility, \
}
/*
@ -1817,11 +1925,14 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
* from userspace.
*/
/* sys_reg_desc initialiser for known cpufeature ID registers */
#define ID_SANITISED(name) { \
#define ID_DESC(name) \
SYS_DESC(SYS_##name), \
.access = access_id_reg, \
.get_user = get_id_reg, \
.get_user = get_id_reg \
/* sys_reg_desc initialiser for known cpufeature ID registers */
#define ID_SANITISED(name) { \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
@ -1830,15 +1941,22 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
/* sys_reg_desc initialiser for known cpufeature ID registers */
#define AA32_ID_SANITISED(name) { \
SYS_DESC(SYS_##name), \
.access = access_id_reg, \
.get_user = get_id_reg, \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = aa32_id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = 0, \
}
/* sys_reg_desc initialiser for writable ID registers */
#define ID_WRITABLE(name, mask) { \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = id_visibility, \
.reset = kvm_read_sanitised_id_reg, \
.val = mask, \
}
/*
* sys_reg_desc initialiser for architecturally unallocated cpufeature ID
* register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
@ -1860,9 +1978,7 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
* RAZ for the guest.
*/
#define ID_HIDDEN(name) { \
SYS_DESC(SYS_##name), \
.access = access_id_reg, \
.get_user = get_id_reg, \
ID_DESC(name), \
.set_user = set_id_reg, \
.visibility = raz_visibility, \
.reset = kvm_read_sanitised_id_reg, \
@ -1961,7 +2077,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
// DBGDTR[TR]X_EL0 share the same encoding
{ SYS_DESC(SYS_DBGDTRTX_EL0), trap_raz_wi },
{ SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 },
{ SYS_DESC(SYS_DBGVCR32_EL2), trap_undef, reset_val, DBGVCR32_EL2, 0 },
{ SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
@ -1980,7 +2096,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.set_user = set_id_dfr0_el1,
.visibility = aa32_id_visibility,
.reset = read_sanitised_id_dfr0_el1,
.val = ID_DFR0_EL1_PerfMon_MASK, },
.val = ID_DFR0_EL1_PerfMon_MASK |
ID_DFR0_EL1_CopDbg_MASK, },
ID_HIDDEN(ID_AFR0_EL1),
AA32_ID_SANITISED(ID_MMFR0_EL1),
AA32_ID_SANITISED(ID_MMFR1_EL1),
@ -2014,11 +2131,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.get_user = get_id_reg,
.set_user = set_id_reg,
.reset = read_sanitised_id_aa64pfr0_el1,
.val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, },
.val = ~(ID_AA64PFR0_EL1_AMU |
ID_AA64PFR0_EL1_MPAM |
ID_AA64PFR0_EL1_SVE |
ID_AA64PFR0_EL1_RAS |
ID_AA64PFR0_EL1_GIC |
ID_AA64PFR0_EL1_AdvSIMD |
ID_AA64PFR0_EL1_FP), },
ID_SANITISED(ID_AA64PFR1_EL1),
ID_UNALLOCATED(4,2),
ID_UNALLOCATED(4,3),
ID_SANITISED(ID_AA64ZFR0_EL1),
ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
ID_HIDDEN(ID_AA64SMFR0_EL1),
ID_UNALLOCATED(4,6),
ID_UNALLOCATED(4,7),
@ -2029,7 +2152,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.get_user = get_id_reg,
.set_user = set_id_aa64dfr0_el1,
.reset = read_sanitised_id_aa64dfr0_el1,
.val = ID_AA64DFR0_EL1_PMUVer_MASK, },
.val = ID_AA64DFR0_EL1_PMUVer_MASK |
ID_AA64DFR0_EL1_DebugVer_MASK, },
ID_SANITISED(ID_AA64DFR1_EL1),
ID_UNALLOCATED(5,2),
ID_UNALLOCATED(5,3),
@ -2039,9 +2163,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_UNALLOCATED(5,7),
/* CRm=6 */
ID_SANITISED(ID_AA64ISAR0_EL1),
ID_SANITISED(ID_AA64ISAR1_EL1),
ID_SANITISED(ID_AA64ISAR2_EL1),
ID_WRITABLE(ID_AA64ISAR0_EL1, ~ID_AA64ISAR0_EL1_RES0),
ID_WRITABLE(ID_AA64ISAR1_EL1, ~(ID_AA64ISAR1_EL1_GPI |
ID_AA64ISAR1_EL1_GPA |
ID_AA64ISAR1_EL1_API |
ID_AA64ISAR1_EL1_APA)),
ID_WRITABLE(ID_AA64ISAR2_EL1, ~(ID_AA64ISAR2_EL1_RES0 |
ID_AA64ISAR2_EL1_APA3 |
ID_AA64ISAR2_EL1_GPA3)),
ID_UNALLOCATED(6,3),
ID_UNALLOCATED(6,4),
ID_UNALLOCATED(6,5),
@ -2049,9 +2178,23 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_UNALLOCATED(6,7),
/* CRm=7 */
ID_SANITISED(ID_AA64MMFR0_EL1),
ID_SANITISED(ID_AA64MMFR1_EL1),
ID_SANITISED(ID_AA64MMFR2_EL1),
ID_WRITABLE(ID_AA64MMFR0_EL1, ~(ID_AA64MMFR0_EL1_RES0 |
ID_AA64MMFR0_EL1_TGRAN4_2 |
ID_AA64MMFR0_EL1_TGRAN64_2 |
ID_AA64MMFR0_EL1_TGRAN16_2)),
ID_WRITABLE(ID_AA64MMFR1_EL1, ~(ID_AA64MMFR1_EL1_RES0 |
ID_AA64MMFR1_EL1_HCX |
ID_AA64MMFR1_EL1_XNX |
ID_AA64MMFR1_EL1_TWED |
ID_AA64MMFR1_EL1_VH |
ID_AA64MMFR1_EL1_VMIDBits)),
ID_WRITABLE(ID_AA64MMFR2_EL1, ~(ID_AA64MMFR2_EL1_RES0 |
ID_AA64MMFR2_EL1_EVT |
ID_AA64MMFR2_EL1_FWB |
ID_AA64MMFR2_EL1_IDS |
ID_AA64MMFR2_EL1_NV |
ID_AA64MMFR2_EL1_CCIDX)),
ID_SANITISED(ID_AA64MMFR3_EL1),
ID_UNALLOCATED(7,4),
ID_UNALLOCATED(7,5),
@ -2116,9 +2259,11 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* PMBIDR_EL1 is not trapped */
{ PMU_SYS_REG(PMINTENSET_EL1),
.access = access_pminten, .reg = PMINTENSET_EL1 },
.access = access_pminten, .reg = PMINTENSET_EL1,
.get_user = get_pmreg, .set_user = set_pmreg },
{ PMU_SYS_REG(PMINTENCLR_EL1),
.access = access_pminten, .reg = PMINTENSET_EL1 },
.access = access_pminten, .reg = PMINTENSET_EL1,
.get_user = get_pmreg, .set_user = set_pmreg },
{ SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi },
{ SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 },
@ -2166,14 +2311,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_CTR_EL0), access_ctr },
{ SYS_DESC(SYS_SVCR), undef_access },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr,
.reset = reset_pmcr, .reg = PMCR_EL0 },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
.reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr },
{ PMU_SYS_REG(PMCNTENSET_EL0),
.access = access_pmcnten, .reg = PMCNTENSET_EL0 },
.access = access_pmcnten, .reg = PMCNTENSET_EL0,
.get_user = get_pmreg, .set_user = set_pmreg },
{ PMU_SYS_REG(PMCNTENCLR_EL0),
.access = access_pmcnten, .reg = PMCNTENSET_EL0 },
.access = access_pmcnten, .reg = PMCNTENSET_EL0,
.get_user = get_pmreg, .set_user = set_pmreg },
{ PMU_SYS_REG(PMOVSCLR_EL0),
.access = access_pmovs, .reg = PMOVSSET_EL0 },
.access = access_pmovs, .reg = PMOVSSET_EL0,
.get_user = get_pmreg, .set_user = set_pmreg },
/*
* PM_SWINC_EL0 is exposed to userspace as RAZ/WI, as it was
* previously (and pointlessly) advertised in the past...
@ -2201,7 +2349,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr,
.reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 },
{ PMU_SYS_REG(PMOVSSET_EL0),
.access = access_pmovs, .reg = PMOVSSET_EL0 },
.access = access_pmovs, .reg = PMOVSSET_EL0,
.get_user = get_pmreg, .set_user = set_pmreg },
{ SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 },
{ SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 },
@ -2380,18 +2529,28 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(VTTBR_EL2, access_rw, reset_val, 0),
EL2_REG(VTCR_EL2, access_rw, reset_val, 0),
{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
{ SYS_DESC(SYS_DACR32_EL2), trap_undef, reset_unknown, DACR32_EL2 },
EL2_REG(HDFGRTR_EL2, access_rw, reset_val, 0),
EL2_REG(HDFGWTR_EL2, access_rw, reset_val, 0),
EL2_REG(SPSR_EL2, access_rw, reset_val, 0),
EL2_REG(ELR_EL2, access_rw, reset_val, 0),
{ SYS_DESC(SYS_SP_EL1), access_sp_el1},
{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
/* AArch32 SPSR_* are RES0 if trapped from a NV guest */
{ SYS_DESC(SYS_SPSR_irq), .access = trap_raz_wi,
.visibility = hidden_user_visibility },
{ SYS_DESC(SYS_SPSR_abt), .access = trap_raz_wi,
.visibility = hidden_user_visibility },
{ SYS_DESC(SYS_SPSR_und), .access = trap_raz_wi,
.visibility = hidden_user_visibility },
{ SYS_DESC(SYS_SPSR_fiq), .access = trap_raz_wi,
.visibility = hidden_user_visibility },
{ SYS_DESC(SYS_IFSR32_EL2), trap_undef, reset_unknown, IFSR32_EL2 },
EL2_REG(AFSR0_EL2, access_rw, reset_val, 0),
EL2_REG(AFSR1_EL2, access_rw, reset_val, 0),
EL2_REG(ESR_EL2, access_rw, reset_val, 0),
{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
{ SYS_DESC(SYS_FPEXC32_EL2), trap_undef, reset_val, FPEXC32_EL2, 0x700 },
EL2_REG(FAR_EL2, access_rw, reset_val, 0),
EL2_REG(HPFAR_EL2, access_rw, reset_val, 0),
@ -2438,14 +2597,15 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
if (p->is_write) {
return ignore_write(vcpu, p);
} else {
u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT);
u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1);
u32 el3 = !!SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr);
p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) |
(((dfr >> ID_AA64DFR0_EL1_BRPs_SHIFT) & 0xf) << 24) |
(((dfr >> ID_AA64DFR0_EL1_CTX_CMPs_SHIFT) & 0xf) << 20)
| (6 << 16) | (1 << 15) | (el3 << 14) | (el3 << 12));
p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) |
(SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr) << 24) |
(SYS_FIELD_GET(ID_AA64DFR0_EL1, CTX_CMPs, dfr) << 20) |
(SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, dfr) << 16) |
(1 << 15) | (el3 << 14) | (el3 << 12));
return true;
}
}
@ -3572,6 +3732,65 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
return write_demux_regids(uindices);
}
#define KVM_ARM_FEATURE_ID_RANGE_INDEX(r) \
KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(r), \
sys_reg_Op1(r), \
sys_reg_CRn(r), \
sys_reg_CRm(r), \
sys_reg_Op2(r))
static bool is_feature_id_reg(u32 encoding)
{
return (sys_reg_Op0(encoding) == 3 &&
(sys_reg_Op1(encoding) < 2 || sys_reg_Op1(encoding) == 3) &&
sys_reg_CRn(encoding) == 0 &&
sys_reg_CRm(encoding) <= 7);
}
int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *range)
{
const void *zero_page = page_to_virt(ZERO_PAGE(0));
u64 __user *masks = (u64 __user *)range->addr;
/* Only feature id range is supported, reserved[13] must be zero. */
if (range->range ||
memcmp(range->reserved, zero_page, sizeof(range->reserved)))
return -EINVAL;
/* Wipe the whole thing first */
if (clear_user(masks, KVM_ARM_FEATURE_ID_RANGE_SIZE * sizeof(__u64)))
return -EFAULT;
for (int i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
const struct sys_reg_desc *reg = &sys_reg_descs[i];
u32 encoding = reg_to_encoding(reg);
u64 val;
if (!is_feature_id_reg(encoding) || !reg->set_user)
continue;
/*
* For ID registers, we return the writable mask. Other feature
* registers return a full 64bit mask. That's not necessarily
* compliant with a given revision of the architecture, but the
* RES0/RES1 definitions allow us to do that.
*/
if (is_id_reg(encoding)) {
if (!reg->val ||
(is_aa32_id_reg(encoding) && !kvm_supports_32bit_el0()))
continue;
val = reg->val;
} else {
val = ~0UL;
}
if (put_user(val, (masks + KVM_ARM_FEATURE_ID_RANGE_INDEX(encoding))))
return -EFAULT;
}
return 0;
}
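A hedged sketch of the intended userspace flow for the new ioctl (vm_fd is assumed to come from KVM_CREATE_VM, error handling elided); the masks are then used to decide which ID-register bits may be rewritten with KVM_SET_ONE_REG before the vCPUs run:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>
    #include <asm/kvm.h>

    static __u64 feat_masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];

    static int fetch_writable_masks(int vm_fd)
    {
        struct reg_mask_range range;

        memset(&range, 0, sizeof(range));           /* reserved[] must be zero */
        range.range = KVM_ARM_FEATURE_ID_RANGE;
        range.addr  = (__u64)(unsigned long)feat_masks;

        return ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
    }

    /*
     * e.g. the writable bits of ID_AA64DFR0_EL1 (op0=3, op1=0, CRn=0, CRm=5, op2=0):
     *
     *   __u64 dfr0_mask = feat_masks[KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 5, 0)];
     */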
int __init kvm_sys_reg_table_init(void)
{
struct sys_reg_params params;


@ -136,6 +136,31 @@ TRACE_EVENT(kvm_mmio_emulate,
__entry->vcpu_pc, __entry->instr, __entry->cpsr)
);
TRACE_EVENT(kvm_mmio_nisv,
TP_PROTO(unsigned long vcpu_pc, unsigned long esr,
unsigned long far, unsigned long ipa),
TP_ARGS(vcpu_pc, esr, far, ipa),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( unsigned long, esr )
__field( unsigned long, far )
__field( unsigned long, ipa )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->esr = esr;
__entry->far = far;
__entry->ipa = ipa;
),
TP_printk("ipa %#016lx, esr %#016lx, far %#016lx, pc %#016lx",
__entry->ipa, __entry->esr,
__entry->far, __entry->vcpu_pc)
);
TRACE_EVENT(kvm_set_way_flush,
TP_PROTO(unsigned long vcpu_pc, bool cache),
TP_ARGS(vcpu_pc, cache),


@ -166,7 +166,7 @@ static void print_header(struct seq_file *s, struct vgic_irq *irq,
if (vcpu) {
hdr = "VCPU";
id = vcpu->vcpu_id;
id = vcpu->vcpu_idx;
}
seq_printf(s, "\n");
@ -212,7 +212,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
" %2d "
"\n",
type, irq->intid,
(irq->target_vcpu) ? irq->target_vcpu->vcpu_id : -1,
(irq->target_vcpu) ? irq->target_vcpu->vcpu_idx : -1,
pending,
irq->line_level,
irq->active,
@ -224,7 +224,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq,
irq->mpidr,
irq->source,
irq->priority,
(irq->vcpu) ? irq->vcpu->vcpu_id : -1);
(irq->vcpu) ? irq->vcpu->vcpu_idx : -1);
}
static int vgic_debug_show(struct seq_file *s, void *v)


@ -23,7 +23,7 @@ static int vgic_irqfd_set_irq(struct kvm_kernel_irq_routing_entry *e,
if (!vgic_valid_spi(kvm, spi_id))
return -EINVAL;
return kvm_vgic_inject_irq(kvm, 0, spi_id, level, NULL);
return kvm_vgic_inject_irq(kvm, NULL, spi_id, level, NULL);
}
/**


@ -378,6 +378,12 @@ static int update_affinity(struct vgic_irq *irq, struct kvm_vcpu *vcpu)
return ret;
}
static struct kvm_vcpu *collection_to_vcpu(struct kvm *kvm,
struct its_collection *col)
{
return kvm_get_vcpu_by_id(kvm, col->target_addr);
}
/*
* Promotes the ITS view of affinity of an ITTE (which redistributor this LPI
* is targeting) to the VGIC's view, which deals with target VCPUs.
@ -391,7 +397,7 @@ static void update_affinity_ite(struct kvm *kvm, struct its_ite *ite)
if (!its_is_collection_mapped(ite->collection))
return;
vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr);
vcpu = collection_to_vcpu(kvm, ite->collection);
update_affinity(ite->irq, vcpu);
}
@ -679,7 +685,7 @@ int vgic_its_resolve_lpi(struct kvm *kvm, struct vgic_its *its,
if (!ite || !its_is_collection_mapped(ite->collection))
return E_ITS_INT_UNMAPPED_INTERRUPT;
vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr);
vcpu = collection_to_vcpu(kvm, ite->collection);
if (!vcpu)
return E_ITS_INT_UNMAPPED_INTERRUPT;
@ -887,7 +893,7 @@ static int vgic_its_cmd_handle_movi(struct kvm *kvm, struct vgic_its *its,
return E_ITS_MOVI_UNMAPPED_COLLECTION;
ite->collection = collection;
vcpu = kvm_get_vcpu(kvm, collection->target_addr);
vcpu = collection_to_vcpu(kvm, collection);
vgic_its_invalidate_cache(kvm);
@ -1121,7 +1127,7 @@ static int vgic_its_cmd_handle_mapi(struct kvm *kvm, struct vgic_its *its,
}
if (its_is_collection_mapped(collection))
vcpu = kvm_get_vcpu(kvm, collection->target_addr);
vcpu = collection_to_vcpu(kvm, collection);
irq = vgic_add_lpi(kvm, lpi_nr, vcpu);
if (IS_ERR(irq)) {
@ -1242,21 +1248,22 @@ static int vgic_its_cmd_handle_mapc(struct kvm *kvm, struct vgic_its *its,
u64 *its_cmd)
{
u16 coll_id;
u32 target_addr;
struct its_collection *collection;
bool valid;
valid = its_cmd_get_validbit(its_cmd);
coll_id = its_cmd_get_collection(its_cmd);
target_addr = its_cmd_get_target_addr(its_cmd);
if (target_addr >= atomic_read(&kvm->online_vcpus))
return E_ITS_MAPC_PROCNUM_OOR;
if (!valid) {
vgic_its_free_collection(its, coll_id);
vgic_its_invalidate_cache(kvm);
} else {
struct kvm_vcpu *vcpu;
vcpu = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd));
if (!vcpu)
return E_ITS_MAPC_PROCNUM_OOR;
collection = find_collection(its, coll_id);
if (!collection) {
@ -1270,9 +1277,9 @@ static int vgic_its_cmd_handle_mapc(struct kvm *kvm, struct vgic_its *its,
coll_id);
if (ret)
return ret;
collection->target_addr = target_addr;
collection->target_addr = vcpu->vcpu_id;
} else {
collection->target_addr = target_addr;
collection->target_addr = vcpu->vcpu_id;
update_affinity_collection(kvm, its, collection);
}
}
@ -1382,7 +1389,7 @@ static int vgic_its_cmd_handle_invall(struct kvm *kvm, struct vgic_its *its,
if (!its_is_collection_mapped(collection))
return E_ITS_INVALL_UNMAPPED_COLLECTION;
vcpu = kvm_get_vcpu(kvm, collection->target_addr);
vcpu = collection_to_vcpu(kvm, collection);
vgic_its_invall(vcpu);
return 0;
@ -1399,23 +1406,21 @@ static int vgic_its_cmd_handle_invall(struct kvm *kvm, struct vgic_its *its,
static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its,
u64 *its_cmd)
{
u32 target1_addr = its_cmd_get_target_addr(its_cmd);
u32 target2_addr = its_cmd_mask_field(its_cmd, 3, 16, 32);
struct kvm_vcpu *vcpu1, *vcpu2;
struct vgic_irq *irq;
u32 *intids;
int irq_count, i;
if (target1_addr >= atomic_read(&kvm->online_vcpus) ||
target2_addr >= atomic_read(&kvm->online_vcpus))
/* We advertise GITS_TYPER.PTA==0, making the address the vcpu ID */
vcpu1 = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd));
vcpu2 = kvm_get_vcpu_by_id(kvm, its_cmd_mask_field(its_cmd, 3, 16, 32));
if (!vcpu1 || !vcpu2)
return E_ITS_MOVALL_PROCNUM_OOR;
if (target1_addr == target2_addr)
if (vcpu1 == vcpu2)
return 0;
vcpu1 = kvm_get_vcpu(kvm, target1_addr);
vcpu2 = kvm_get_vcpu(kvm, target2_addr);
irq_count = vgic_copy_lpi_list(kvm, vcpu1, &intids);
if (irq_count < 0)
return irq_count;
@ -2258,7 +2263,7 @@ static int vgic_its_restore_ite(struct vgic_its *its, u32 event_id,
return PTR_ERR(ite);
if (its_is_collection_mapped(collection))
vcpu = kvm_get_vcpu(kvm, collection->target_addr);
vcpu = kvm_get_vcpu_by_id(kvm, collection->target_addr);
irq = vgic_add_lpi(kvm, lpi_id, vcpu);
if (IS_ERR(irq)) {
@ -2573,7 +2578,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
coll_id = val & KVM_ITS_CTE_ICID_MASK;
if (target_addr != COLLECTION_NOT_MAPPED &&
target_addr >= atomic_read(&kvm->online_vcpus))
!kvm_get_vcpu_by_id(kvm, target_addr))
return -EINVAL;
collection = find_collection(its, coll_id);


@ -27,7 +27,8 @@ int vgic_check_iorange(struct kvm *kvm, phys_addr_t ioaddr,
if (addr + size < addr)
return -EINVAL;
if (addr & ~kvm_phys_mask(kvm) || addr + size > kvm_phys_size(kvm))
if (addr & ~kvm_phys_mask(&kvm->arch.mmu) ||
(addr + size) > kvm_phys_size(&kvm->arch.mmu))
return -E2BIG;
return 0;
@ -339,13 +340,9 @@ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
{
int cpuid;
cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >>
KVM_DEV_ARM_VGIC_CPUID_SHIFT;
cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr);
if (cpuid >= atomic_read(&dev->kvm->online_vcpus))
return -EINVAL;
reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid);
reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid);
reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
return 0;


@ -1013,35 +1013,6 @@ int vgic_v3_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr)
return 0;
}
/*
* Compare a given affinity (level 1-3 and a level 0 mask, from the SGI
* generation register ICC_SGI1R_EL1) with a given VCPU.
* If the VCPU's MPIDR matches, return the level0 affinity, otherwise
* return -1.
*/
static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu)
{
unsigned long affinity;
int level0;
/*
* Split the current VCPU's MPIDR into affinity level 0 and the
* rest as this is what we have to compare against.
*/
affinity = kvm_vcpu_get_mpidr_aff(vcpu);
level0 = MPIDR_AFFINITY_LEVEL(affinity, 0);
affinity &= ~MPIDR_LEVEL_MASK;
/* bail out if the upper three levels don't match */
if (sgi_aff != affinity)
return -1;
/* Is this VCPU's bit set in the mask ? */
if (!(sgi_cpu_mask & BIT(level0)))
return -1;
return level0;
}
/*
* The ICC_SGI* registers encode the affinity differently from the MPIDR,
@ -1052,6 +1023,38 @@ static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu)
((((reg) & ICC_SGI1R_AFFINITY_## level ##_MASK) \
>> ICC_SGI1R_AFFINITY_## level ##_SHIFT) << MPIDR_LEVEL_SHIFT(level))
static void vgic_v3_queue_sgi(struct kvm_vcpu *vcpu, u32 sgi, bool allow_group1)
{
struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, sgi);
unsigned long flags;
raw_spin_lock_irqsave(&irq->irq_lock, flags);
/*
* An access targeting Group0 SGIs can only generate
* those, while an access targeting Group1 SGIs can
* generate interrupts of either group.
*/
if (!irq->group || allow_group1) {
if (!irq->hw) {
irq->pending_latch = true;
vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
} else {
/* HW SGI? Ask the GIC to inject it */
int err;
err = irq_set_irqchip_state(irq->host_irq,
IRQCHIP_STATE_PENDING,
true);
WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
}
} else {
raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
}
vgic_put_irq(vcpu->kvm, irq);
}
/**
* vgic_v3_dispatch_sgi - handle SGI requests from VCPUs
* @vcpu: The VCPU requesting a SGI
@ -1062,83 +1065,46 @@ static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu)
* This will trap in sys_regs.c and call this function.
* This ICC_SGI1R_EL1 register contains the upper three affinity levels of the
* target processors as well as a bitmask of 16 Aff0 CPUs.
* If the interrupt routing mode bit is not set, we iterate over all VCPUs to
* check for matching ones. If this bit is set, we signal all, but not the
* calling VCPU.
*
* If the interrupt routing mode bit is not set, we iterate over the Aff0
* bits and signal the VCPUs matching the provided Aff{3,2,1}.
*
* If this bit is set, we signal all, but not the calling VCPU.
*/
void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1)
{
struct kvm *kvm = vcpu->kvm;
struct kvm_vcpu *c_vcpu;
u16 target_cpus;
unsigned long target_cpus;
u64 mpidr;
int sgi;
int vcpu_id = vcpu->vcpu_id;
bool broadcast;
unsigned long c, flags;
u32 sgi, aff0;
unsigned long c;
sgi = (reg & ICC_SGI1R_SGI_ID_MASK) >> ICC_SGI1R_SGI_ID_SHIFT;
broadcast = reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT);
target_cpus = (reg & ICC_SGI1R_TARGET_LIST_MASK) >> ICC_SGI1R_TARGET_LIST_SHIFT;
sgi = FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg);
/* Broadcast */
if (unlikely(reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT))) {
kvm_for_each_vcpu(c, c_vcpu, kvm) {
/* Don't signal the calling VCPU */
if (c_vcpu == vcpu)
continue;
vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
}
return;
}
/* We iterate over affinities to find the corresponding vcpus */
mpidr = SGI_AFFINITY_LEVEL(reg, 3);
mpidr |= SGI_AFFINITY_LEVEL(reg, 2);
mpidr |= SGI_AFFINITY_LEVEL(reg, 1);
target_cpus = FIELD_GET(ICC_SGI1R_TARGET_LIST_MASK, reg);
/*
* We iterate over all VCPUs to find the MPIDRs matching the request.
* If we have handled one CPU, we clear its bit to detect early
* if we are already finished. This avoids iterating through all
* VCPUs when most of the times we just signal a single VCPU.
*/
kvm_for_each_vcpu(c, c_vcpu, kvm) {
struct vgic_irq *irq;
/* Exit early if we have dealt with all requested CPUs */
if (!broadcast && target_cpus == 0)
break;
/* Don't signal the calling VCPU */
if (broadcast && c == vcpu_id)
continue;
if (!broadcast) {
int level0;
level0 = match_mpidr(mpidr, target_cpus, c_vcpu);
if (level0 == -1)
continue;
/* remove this matching VCPU from the mask */
target_cpus &= ~BIT(level0);
}
irq = vgic_get_irq(vcpu->kvm, c_vcpu, sgi);
raw_spin_lock_irqsave(&irq->irq_lock, flags);
/*
* An access targeting Group0 SGIs can only generate
* those, while an access targeting Group1 SGIs can
* generate interrupts of either group.
*/
if (!irq->group || allow_group1) {
if (!irq->hw) {
irq->pending_latch = true;
vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
} else {
/* HW SGI? Ask the GIC to inject it */
int err;
err = irq_set_irqchip_state(irq->host_irq,
IRQCHIP_STATE_PENDING,
true);
WARN_RATELIMIT(err, "IRQ %d", irq->host_irq);
raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
}
} else {
raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
}
vgic_put_irq(vcpu->kvm, irq);
for_each_set_bit(aff0, &target_cpus, hweight_long(ICC_SGI1R_TARGET_LIST_MASK)) {
c_vcpu = kvm_mpidr_to_vcpu(kvm, mpidr | aff0);
if (c_vcpu)
vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1);
}
}
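To make the rewritten walk concrete, a worked example (field layout per the GICv3 architecture, values invented): an ICC_SGI1R_EL1 write with IRM=0, Aff3=Aff2=0, Aff1=1, SGI ID 5 and TargetList=0b0101 makes the loop visit Aff0 values 0 and 2, so kvm_mpidr_to_vcpu() is asked for MPIDRs 0x100 and 0x102 and SGI 5 is queued on the two matching vCPUs, instead of scanning every vCPU as the old code did.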


@ -422,7 +422,7 @@ retry:
/**
* kvm_vgic_inject_irq - Inject an IRQ from a device to the vgic
* @kvm: The VM structure pointer
* @cpuid: The CPU for PPIs
* @vcpu: The CPU for PPIs or NULL for global interrupts
* @intid: The INTID to inject a new state to.
* @level: Edge-triggered: true: to trigger the interrupt
* false: to ignore the call
@ -436,24 +436,22 @@ retry:
* level-sensitive interrupts. You can think of the level parameter as 1
* being HIGH and 0 being LOW and all devices being active-HIGH.
*/
int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
bool level, void *owner)
int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
unsigned int intid, bool level, void *owner)
{
struct kvm_vcpu *vcpu;
struct vgic_irq *irq;
unsigned long flags;
int ret;
trace_vgic_update_irq_pending(cpuid, intid, level);
ret = vgic_lazy_init(kvm);
if (ret)
return ret;
vcpu = kvm_get_vcpu(kvm, cpuid);
if (!vcpu && intid < VGIC_NR_PRIVATE_IRQS)
return -EINVAL;
trace_vgic_update_irq_pending(vcpu ? vcpu->vcpu_idx : 0, intid, level);
irq = vgic_get_irq(kvm, vcpu, intid);
if (!irq)
return -EINVAL;


@ -135,10 +135,11 @@ void kvm_arm_vmid_clear_active(void)
atomic64_set(this_cpu_ptr(&active_vmids), VMID_ACTIVE_INVALID);
}
void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
{
unsigned long flags;
u64 vmid, old_active_vmid;
bool updated = false;
vmid = atomic64_read(&kvm_vmid->id);
@ -156,17 +157,21 @@ void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
if (old_active_vmid != 0 && vmid_gen_match(vmid) &&
0 != atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
old_active_vmid, vmid))
return;
return false;
raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
/* Check that our VMID belongs to the current generation. */
vmid = atomic64_read(&kvm_vmid->id);
if (!vmid_gen_match(vmid))
if (!vmid_gen_match(vmid)) {
vmid = new_vmid(kvm_vmid);
updated = true;
}
atomic64_set(this_cpu_ptr(&active_vmids), vmid);
raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
return updated;
}
/*


@ -96,7 +96,7 @@ struct arch_timer_cpu {
int __init kvm_timer_hyp_init(bool has_gic);
int kvm_timer_enable(struct kvm_vcpu *vcpu);
int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu);
void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu);
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu);
void kvm_timer_sync_user(struct kvm_vcpu *vcpu);
bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu);


@ -13,7 +13,6 @@
#define ARMV8_PMU_CYCLE_IDX (ARMV8_PMU_MAX_COUNTERS - 1)
#if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM)
struct kvm_pmc {
u8 idx; /* index into the pmu->pmc array */
struct perf_event *perf_event;
@ -63,6 +62,7 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
u64 select_idx);
void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu);
int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr);
int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
@ -77,7 +77,7 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
void kvm_vcpu_pmu_resync_el0(void);
#define kvm_vcpu_has_pmu(vcpu) \
(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
/*
* Updates the vcpu's view of the pmu events for this cpu.
@ -101,7 +101,11 @@ void kvm_vcpu_pmu_resync_el0(void);
})
u8 kvm_arm_pmu_get_pmuver_limit(void);
u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
int kvm_arm_set_default_pmu(struct kvm *kvm);
u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm);
u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu);
#else
struct kvm_pmu {
};
@ -168,12 +172,32 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {}
static inline u8 kvm_arm_pmu_get_pmuver_limit(void)
{
return 0;
}
static inline u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
{
return 0;
}
static inline void kvm_vcpu_pmu_resync_el0(void) {}
static inline int kvm_arm_set_default_pmu(struct kvm *kvm)
{
return -ENODEV;
}
static inline u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
{
return 0;
}
static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu)
{
return 0;
}
#endif
#endif


@ -26,7 +26,7 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu)
* revisions. It is thus safe to return the latest, unless
* userspace has instructed us otherwise.
*/
if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) {
if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2)) {
if (vcpu->kvm->arch.psci_version)
return vcpu->kvm->arch.psci_version;


@ -375,8 +375,8 @@ int kvm_vgic_map_resources(struct kvm *kvm);
int kvm_vgic_hyp_init(void);
void kvm_vgic_init_cpu_hardware(void);
int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
bool level, void *owner);
int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
unsigned int intid, bool level, void *owner);
int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, unsigned int host_irq,
u32 vintid, struct irq_ops *ops);
int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int vintid);


@ -234,9 +234,12 @@
/*
* Event filters for PMUv3
*/
#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31)
#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30)
#define ARMV8_PMU_INCLUDE_EL2 (1U << 27)
#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31)
#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30)
#define ARMV8_PMU_EXCLUDE_NS_EL1 (1U << 29)
#define ARMV8_PMU_EXCLUDE_NS_EL0 (1U << 28)
#define ARMV8_PMU_INCLUDE_EL2 (1U << 27)
#define ARMV8_PMU_EXCLUDE_EL3 (1U << 26)
/*
* PMUSERENR: user enable reg


@ -1200,6 +1200,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_COUNTER_OFFSET 227
#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228
#define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229
#define KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 230
#ifdef KVM_CAP_IRQ_ROUTING
@ -1571,6 +1572,7 @@ struct kvm_s390_ucas_mapping {
#define KVM_ARM_MTE_COPY_TAGS _IOR(KVMIO, 0xb4, struct kvm_arm_copy_mte_tags)
/* Available with KVM_CAP_COUNTER_OFFSET */
#define KVM_ARM_SET_COUNTER_OFFSET _IOW(KVMIO, 0xb5, struct kvm_arm_counter_offset)
#define KVM_ARM_GET_REG_WRITABLE_MASKS _IOR(KVMIO, 0xb6, struct reg_mask_range)
/* ioctl for vm fd */
#define KVM_CREATE_DEVICE _IOWR(KVMIO, 0xe0, struct kvm_create_device)

tools/arch/arm64/include/.gitignore (new file)

@ -0,0 +1 @@
generated/


@ -0,0 +1,26 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __ASM_GPR_NUM_H
#define __ASM_GPR_NUM_H
#ifdef __ASSEMBLY__
.irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
.equ .L__gpr_num_x\num, \num
.equ .L__gpr_num_w\num, \num
.endr
.equ .L__gpr_num_xzr, 31
.equ .L__gpr_num_wzr, 31
#else /* __ASSEMBLY__ */
#define __DEFINE_ASM_GPR_NUMS \
" .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \
" .equ .L__gpr_num_x\\num, \\num\n" \
" .equ .L__gpr_num_w\\num, \\num\n" \
" .endr\n" \
" .equ .L__gpr_num_xzr, 31\n" \
" .equ .L__gpr_num_wzr, 31\n"
#endif /* __ASSEMBLY__ */
#endif /* __ASM_GPR_NUM_H */

File diff suppressed because it is too large


@ -0,0 +1,38 @@
# SPDX-License-Identifier: GPL-2.0
ifeq ($(top_srcdir),)
top_srcdir := $(patsubst %/,%,$(dir $(CURDIR)))
top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir)))
top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir)))
top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir)))
endif
include $(top_srcdir)/tools/scripts/Makefile.include
AWK ?= awk
MKDIR ?= mkdir
RM ?= rm
ifeq ($(V),1)
Q =
else
Q = @
endif
arm64_tools_dir = $(top_srcdir)/arch/arm64/tools
arm64_sysreg_tbl = $(arm64_tools_dir)/sysreg
arm64_gen_sysreg = $(arm64_tools_dir)/gen-sysreg.awk
arm64_generated_dir = $(top_srcdir)/tools/arch/arm64/include/generated
arm64_sysreg_defs = $(arm64_generated_dir)/asm/sysreg-defs.h
all: $(arm64_sysreg_defs)
@:
$(arm64_sysreg_defs): $(arm64_gen_sysreg) $(arm64_sysreg_tbl)
$(Q)$(MKDIR) -p $(dir $@)
$(QUIET_GEN)$(AWK) -f $^ > $@
clean:
$(Q)$(RM) -rf $(arm64_generated_dir)
.PHONY: all clean


@ -0,0 +1,308 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2012 ARM Ltd.
*/
#ifndef __PERF_ARM_PMUV3_H
#define __PERF_ARM_PMUV3_H
#include <assert.h>
#include <asm/bug.h>
#define ARMV8_PMU_MAX_COUNTERS 32
#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1)
/*
* Common architectural and microarchitectural event numbers.
*/
#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001
#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004
#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005
#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006
#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007
#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008
#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009
#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A
#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B
#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C
#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D
#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E
#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F
#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010
#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011
#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012
#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018
#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019
#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A
#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B
#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C
#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D
#define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020
#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021
#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022
#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023
#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024
#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025
#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C
#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D
#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E
#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F
#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030
#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031
#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033
#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034
#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037
#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039
#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x003A
#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B
#define ARMV8_PMUV3_PERFCTR_STALL 0x003C
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F
/* Statistical profiling extension microarchitectural events */
#define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000
#define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001
#define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002
#define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003
/* AMUv1 architecture events */
#define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004
#define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005
/* long-latency read miss events */
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B
/* Trace buffer events */
#define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C
#define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E
/* Trace unit events */
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B
/* additional latency from alignment events */
#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020
#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021
#define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022
/* Armv8.5 Memory Tagging Extension events */
#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024
#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025
#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026
/* ARMv8 recommended implementation defined event types */
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065
#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066
#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A
#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C
#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D
#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E
#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F
#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070
#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071
#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072
#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073
#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074
#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075
#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076
#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077
#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078
#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079
#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A
#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C
#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D
#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E
#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081
#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082
#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083
#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084
#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086
#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087
#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088
#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F
#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090
#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8
/*
* Per-CPU PMCR: config reg
*/
#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */
#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */
#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */
#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug */
#define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */
#define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
#define ARMV8_PMU_PMCR_N_MASK 0x1f
#define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */
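/*
 * Example: the number of implemented event counters can be extracted as
 * (PMCR_EL0 >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK, as the
 * vpmu_counter_access selftest below does.
 */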
/*
* PMOVSR: counters overflow flag status reg
*/
#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */
#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK
/*
* PMXEVTYPER: Event selection reg
*/
#define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */
#define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */
/*
* Event filters for PMUv3
*/
#define ARMV8_PMU_EXCLUDE_EL1 (1U << 31)
#define ARMV8_PMU_EXCLUDE_EL0 (1U << 30)
#define ARMV8_PMU_INCLUDE_EL2 (1U << 27)
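/*
 * Example: counting retired instructions at EL0 but not EL1 can be expressed
 * as (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED) written to
 * PMEVTYPER<n>_EL0, as done in the vpmu_counter_access selftest below.
 */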
/*
* PMUSERENR: user enable reg
*/
#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */
#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */
#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
/* PMMIR_EL1.SLOTS mask */
#define ARMV8_PMU_SLOTS_MASK 0xff
#define ARMV8_PMU_BUS_SLOTS_SHIFT 8
#define ARMV8_PMU_BUS_SLOTS_MASK 0xff
#define ARMV8_PMU_BUS_WIDTH_SHIFT 16
#define ARMV8_PMU_BUS_WIDTH_MASK 0xf
/*
* Accessors for PMEV{CNTR,TYPER}<n>_EL0: the counter index is encoded in
* the register name, so a runtime index has to be dispatched through a
* switch that expands the given case_macro once per possible index.
*/
#define PMEVN_CASE(n, case_macro) \
case n: case_macro(n); break
#define PMEVN_SWITCH(x, case_macro) \
do { \
switch (x) { \
PMEVN_CASE(0, case_macro); \
PMEVN_CASE(1, case_macro); \
PMEVN_CASE(2, case_macro); \
PMEVN_CASE(3, case_macro); \
PMEVN_CASE(4, case_macro); \
PMEVN_CASE(5, case_macro); \
PMEVN_CASE(6, case_macro); \
PMEVN_CASE(7, case_macro); \
PMEVN_CASE(8, case_macro); \
PMEVN_CASE(9, case_macro); \
PMEVN_CASE(10, case_macro); \
PMEVN_CASE(11, case_macro); \
PMEVN_CASE(12, case_macro); \
PMEVN_CASE(13, case_macro); \
PMEVN_CASE(14, case_macro); \
PMEVN_CASE(15, case_macro); \
PMEVN_CASE(16, case_macro); \
PMEVN_CASE(17, case_macro); \
PMEVN_CASE(18, case_macro); \
PMEVN_CASE(19, case_macro); \
PMEVN_CASE(20, case_macro); \
PMEVN_CASE(21, case_macro); \
PMEVN_CASE(22, case_macro); \
PMEVN_CASE(23, case_macro); \
PMEVN_CASE(24, case_macro); \
PMEVN_CASE(25, case_macro); \
PMEVN_CASE(26, case_macro); \
PMEVN_CASE(27, case_macro); \
PMEVN_CASE(28, case_macro); \
PMEVN_CASE(29, case_macro); \
PMEVN_CASE(30, case_macro); \
default: \
WARN(1, "Invalid PMEV* index\n"); \
assert(0); \
} \
} while (0)
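/*
 * Illustrative use, mirroring the vpmu_counter_access selftest:
 *
 * #define RETURN_READ_PMEVCNTRN(n) return read_sysreg(pmevcntr##n##_el0)
 * static unsigned long read_pmevcntrn(int n)
 * {
 *	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
 *	return 0;
 * }
 */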
#endif

View File

@ -443,6 +443,15 @@ drm_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/drm_ioctl.sh
# Create output directory if not already present
_dummy := $(shell [ -d '$(beauty_ioctl_outdir)' ] || mkdir -p '$(beauty_ioctl_outdir)')
arm64_gen_sysreg_dir := $(srctree)/tools/arch/arm64/tools
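# Generate the arm64 sysreg definition headers under
# tools/arch/arm64/include/generated/ before building objects that include them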
arm64-sysreg-defs: FORCE
$(Q)$(MAKE) -C $(arm64_gen_sysreg_dir)
arm64-sysreg-defs-clean:
$(call QUIET_CLEAN,arm64-sysreg-defs)
$(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) clean > /dev/null
$(drm_ioctl_array): $(drm_hdr_dir)/drm.h $(drm_hdr_dir)/i915_drm.h $(drm_ioctl_tbl)
$(Q)$(SHELL) '$(drm_ioctl_tbl)' $(drm_hdr_dir) > $@
@ -716,7 +725,9 @@ endif
__build-dir = $(subst $(OUTPUT),,$(dir $@))
build-dir = $(or $(__build-dir),.)
prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders $(drm_ioctl_array) \
prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders \
arm64-sysreg-defs \
$(drm_ioctl_array) \
$(fadvise_advice_array) \
$(fsconfig_arrays) \
$(fsmount_arrays) \
@ -1125,7 +1136,7 @@ endif # BUILD_BPF_SKEL
bpf-skel-clean:
$(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS)
clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean
clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean
$(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-iostat $(LANG_BINDINGS)
$(Q)find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
$(Q)$(RM) $(OUTPUT).config-detected

View File

@ -345,7 +345,7 @@ CFLAGS_rbtree.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ET
CFLAGS_libstring.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))"
CFLAGS_hweight.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))"
CFLAGS_header.o += -include $(OUTPUT)PERF-VERSION-FILE
CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/
CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/ -I$(srctree)/tools/arch/arm64/include/generated/
$(OUTPUT)util/argv_split.o: ../lib/argv_split.c FORCE
$(call rule_mkdir)

View File

@ -17,6 +17,15 @@ else
ARCH_DIR := $(ARCH)
endif
ifeq ($(ARCH),arm64)
arm64_tools_dir := $(top_srcdir)/tools/arch/arm64/tools/
GEN_HDRS := $(top_srcdir)/tools/arch/arm64/include/generated/
CFLAGS += -I$(GEN_HDRS)
$(GEN_HDRS): $(wildcard $(arm64_tools_dir)/*)
$(MAKE) -C $(arm64_tools_dir)
endif
LIBKVM += lib/assert.c
LIBKVM += lib/elf.c
LIBKVM += lib/guest_modes.c
@ -146,10 +155,12 @@ TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test
TEST_GEN_PROGS_aarch64 += aarch64/psci_test
TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs
TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter
TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
TEST_GEN_PROGS_aarch64 += demand_paging_test
TEST_GEN_PROGS_aarch64 += dirty_log_test
@ -257,13 +268,18 @@ $(TEST_GEN_OBJ): $(OUTPUT)/%.o: %.c
$(SPLIT_TESTS_TARGETS): %: %.o $(SPLIT_TESTS_OBJS)
$(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) $^ $(LDLIBS) -o $@
EXTRA_CLEAN += $(LIBKVM_OBJS) $(TEST_DEP_FILES) $(TEST_GEN_OBJ) $(SPLIT_TESTS_OBJS) cscope.*
EXTRA_CLEAN += $(GEN_HDRS) \
$(LIBKVM_OBJS) \
$(SPLIT_TESTS_OBJS) \
$(TEST_DEP_FILES) \
$(TEST_GEN_OBJ) \
cscope.*
x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ))))
$(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c
$(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c $(GEN_HDRS)
$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
$(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S
$(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S $(GEN_HDRS)
$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
# Compile the string overrides as freestanding to prevent the compiler from
@ -273,8 +289,10 @@ $(LIBKVM_STRING_OBJ): $(OUTPUT)/%.o: %.c
$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c -ffreestanding $< -o $@
x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS))))
$(SPLIT_TESTS_OBJS): $(GEN_HDRS)
$(TEST_GEN_PROGS): $(LIBKVM_OBJS)
$(TEST_GEN_PROGS_EXTENDED): $(LIBKVM_OBJS)
$(TEST_GEN_OBJ): $(GEN_HDRS)
cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib ..
cscope:

View File

@ -146,8 +146,8 @@ static bool vcpu_aarch64_only(struct kvm_vcpu *vcpu)
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), val);
return el0 == ID_AA64PFR0_ELx_64BIT_ONLY;
el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
return el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY;
}
int main(void)

View File

@ -116,12 +116,12 @@ static void reset_debug_state(void)
/* Reset all bcr/bvr/wcr/wvr registers */
dfr0 = read_sysreg(id_aa64dfr0_el1);
brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS), dfr0);
brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), dfr0);
for (i = 0; i <= brps; i++) {
write_dbgbcr(i, 0);
write_dbgbvr(i, 0);
}
wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS), dfr0);
wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), dfr0);
for (i = 0; i <= wrps; i++) {
write_dbgwcr(i, 0);
write_dbgwvr(i, 0);
@ -418,7 +418,7 @@ static void guest_code_ss(int test_cnt)
static int debug_version(uint64_t id_aa64dfr0)
{
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), id_aa64dfr0);
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0);
}
static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
@ -539,14 +539,14 @@ void test_guest_debug_exceptions_all(uint64_t aa64dfr0)
int b, w, c;
/* Number of breakpoints */
brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS), aa64dfr0) + 1;
brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), aa64dfr0) + 1;
__TEST_REQUIRE(brp_num >= 2, "At least two breakpoints are required");
/* Number of watchpoints */
wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS), aa64dfr0) + 1;
wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), aa64dfr0) + 1;
/* Number of context aware breakpoints */
ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_CTX_CMPS), aa64dfr0) + 1;
ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_CTX_CMPs), aa64dfr0) + 1;
pr_debug("%s brp_num:%d, wrp_num:%d, ctx_brp_num:%d\n", __func__,
brp_num, wrp_num, ctx_brp_num);

View File

@ -96,14 +96,14 @@ static bool guest_check_lse(void)
uint64_t isar0 = read_sysreg(id_aa64isar0_el1);
uint64_t atomic;
atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS), isar0);
atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0);
return atomic >= 2;
}
static bool guest_check_dc_zva(void)
{
uint64_t dczid = read_sysreg(dczid_el0);
uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_DZP), dczid);
uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid);
return dzp == 0;
}
@ -135,8 +135,8 @@ static void guest_at(void)
uint64_t par;
asm volatile("at s1e1r, %0" :: "r" (guest_test_memory));
par = read_sysreg(par_el1);
isb();
par = read_sysreg(par_el1);
/* PAR_EL1.F (bit 0) indicates whether the AT was successful */
GUEST_ASSERT_EQ(par & 1, 0);
@ -196,7 +196,7 @@ static bool guest_set_ha(void)
uint64_t hadbs, tcr;
/* Skip if HA is not supported. */
hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS), mmfr1);
hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1);
if (hadbs == 0)
return false;
@ -842,6 +842,7 @@ static void help(char *name)
.name = SCAT2(ro_memslot_no_syndrome, _access), \
.data_memslot_flags = KVM_MEM_READONLY, \
.pt_memslot_flags = KVM_MEM_READONLY, \
.guest_prepare = { _PREPARE(_access) }, \
.guest_test = _access, \
.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \
.expected_events = { .fail_vcpu_runs = 1 }, \
@ -865,6 +866,7 @@ static void help(char *name)
.name = SCAT2(ro_memslot_no_syn_and_dlog, _access), \
.data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \
.pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \
.guest_prepare = { _PREPARE(_access) }, \
.guest_test = _access, \
.guest_test_check = { _test_check }, \
.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \
@ -894,6 +896,7 @@ static void help(char *name)
.data_memslot_flags = KVM_MEM_READONLY, \
.pt_memslot_flags = KVM_MEM_READONLY, \
.mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \
.guest_prepare = { _PREPARE(_access) }, \
.guest_test = _access, \
.uffd_data_handler = _uffd_data_handler, \
.uffd_pt_handler = uffd_pt_handler, \

View File

@ -0,0 +1,481 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* set_id_regs - Test for setting ID registers from userspace.
*
* Copyright (c) 2023 Google LLC.
*
*
* Test that KVM supports setting ID registers from userspace and handles the
* feature set correctly.
*/
#include <stdint.h>
#include "kvm_util.h"
#include "processor.h"
#include "test_util.h"
#include <linux/bitfield.h>
enum ftr_type {
FTR_EXACT, /* Use a predefined safe value */
FTR_LOWER_SAFE, /* Smaller value is safe */
FTR_HIGHER_SAFE, /* Bigger value is safe */
FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */
FTR_END, /* Mark the last ftr bits */
};
#define FTR_SIGNED true /* Value should be treated as signed */
#define FTR_UNSIGNED false /* Value should be treated as unsigned */
struct reg_ftr_bits {
char *name;
bool sign;
enum ftr_type type;
uint8_t shift;
uint64_t mask;
int64_t safe_val;
};
struct test_feature_reg {
uint32_t reg;
const struct reg_ftr_bits *ftr_bits;
};
#define __REG_FTR_BITS(NAME, SIGNED, TYPE, SHIFT, MASK, SAFE_VAL) \
{ \
.name = #NAME, \
.sign = SIGNED, \
.type = TYPE, \
.shift = SHIFT, \
.mask = MASK, \
.safe_val = SAFE_VAL, \
}
#define REG_FTR_BITS(type, reg, field, safe_val) \
__REG_FTR_BITS(reg##_##field, FTR_UNSIGNED, type, reg##_##field##_SHIFT, \
reg##_##field##_MASK, safe_val)
#define S_REG_FTR_BITS(type, reg, field, safe_val) \
__REG_FTR_BITS(reg##_##field, FTR_SIGNED, type, reg##_##field##_SHIFT, \
reg##_##field##_MASK, safe_val)
#define REG_FTR_END \
{ \
.type = FTR_END, \
}
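/*
 * For example, REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, AES, 0)
 * produces an entry named "ID_AA64ISAR0_EL1_AES" built from the
 * ID_AA64ISAR0_EL1_AES_{SHIFT,MASK} definitions in the generated sysreg
 * headers.
 */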
static const struct reg_ftr_bits ftr_id_aa64dfr0_el1[] = {
S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, PMUVer, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, DebugVer, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_dfr0_el1[] = {
S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_DFR0_EL1, PerfMon, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_DFR0_EL1, CopDbg, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64isar0_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RNDR, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TLB, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TS, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, FHM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, DP, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM4, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM3, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA3, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RDM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TME, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, ATOMIC, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, CRC32, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA2, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA1, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, AES, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64isar1_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, LS64, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, XS, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, I8MM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, DGH, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, BF16, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, SPECRES, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, SB, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, FRINTTS, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, LRCPC, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, FCMA, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, JSCVT, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, DPB, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64isar2_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, BC, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, RPRES, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, WFxT, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, CSV3, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, CSV2, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, DIT, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, SEL2, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64mmfr0_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ECV, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, EXS, 0),
S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN4, 0),
S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN64, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN16, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGENDEL0, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, SNSMEM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGEND, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ASIDBITS, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, PARANGE, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64mmfr1_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, TIDCP1, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, AFP, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, ETS, 0),
REG_FTR_BITS(FTR_HIGHER_SAFE, ID_AA64MMFR1_EL1, SpecSEI, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, PAN, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, LO, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, HPDS, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, HAFDBS, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64mmfr2_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, E0PD, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, BBM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, TTL, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, AT, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, ST, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, VARange, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, IESB, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, LSM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, UAO, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, CnP, 0),
REG_FTR_END,
};
static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, I8MM, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SM4, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SHA3, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, BF16, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, BitPerm, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, AES, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SVEver, 0),
REG_FTR_END,
};
#define TEST_REG(id, table) \
{ \
.reg = id, \
.ftr_bits = &((table)[0]), \
}
static struct test_feature_reg test_regs[] = {
TEST_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0_el1),
TEST_REG(SYS_ID_DFR0_EL1, ftr_id_dfr0_el1),
TEST_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0_el1),
TEST_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1_el1),
TEST_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2_el1),
TEST_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0_el1),
TEST_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0_el1),
TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1),
TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1),
TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1),
};
#define GUEST_REG_SYNC(id) GUEST_SYNC_ARGS(0, id, read_sysreg_s(id), 0, 0);
static void guest_code(void)
{
GUEST_REG_SYNC(SYS_ID_AA64DFR0_EL1);
GUEST_REG_SYNC(SYS_ID_DFR0_EL1);
GUEST_REG_SYNC(SYS_ID_AA64ISAR0_EL1);
GUEST_REG_SYNC(SYS_ID_AA64ISAR1_EL1);
GUEST_REG_SYNC(SYS_ID_AA64ISAR2_EL1);
GUEST_REG_SYNC(SYS_ID_AA64PFR0_EL1);
GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1);
GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1);
GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1);
GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1);
GUEST_DONE();
}
/* Return a safe value for the given ftr_bits and ftr value */
uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
{
uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
if (ftr_bits->sign == FTR_UNSIGNED) {
switch (ftr_bits->type) {
case FTR_EXACT:
ftr = ftr_bits->safe_val;
break;
case FTR_LOWER_SAFE:
if (ftr > 0)
ftr--;
break;
case FTR_HIGHER_SAFE:
if (ftr < ftr_max)
ftr++;
break;
case FTR_HIGHER_OR_ZERO_SAFE:
if (ftr == ftr_max)
ftr = 0;
else if (ftr != 0)
ftr++;
break;
default:
break;
}
} else if (ftr != ftr_max) {
switch (ftr_bits->type) {
case FTR_EXACT:
ftr = ftr_bits->safe_val;
break;
case FTR_LOWER_SAFE:
if (ftr > 0)
ftr--;
break;
case FTR_HIGHER_SAFE:
if (ftr < ftr_max - 1)
ftr++;
break;
case FTR_HIGHER_OR_ZERO_SAFE:
if (ftr != 0 && ftr != ftr_max - 1)
ftr++;
break;
default:
break;
}
}
return ftr;
}
/* Return an invalid value for the given ftr_bits and ftr value */
uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
{
uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
if (ftr_bits->sign == FTR_UNSIGNED) {
switch (ftr_bits->type) {
case FTR_EXACT:
ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1);
break;
case FTR_LOWER_SAFE:
ftr++;
break;
case FTR_HIGHER_SAFE:
ftr--;
break;
case FTR_HIGHER_OR_ZERO_SAFE:
if (ftr == 0)
ftr = ftr_max;
else
ftr--;
break;
default:
break;
}
} else if (ftr != ftr_max) {
switch (ftr_bits->type) {
case FTR_EXACT:
ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1);
break;
case FTR_LOWER_SAFE:
ftr++;
break;
case FTR_HIGHER_SAFE:
ftr--;
break;
case FTR_HIGHER_OR_ZERO_SAFE:
if (ftr == 0)
ftr = ftr_max - 1;
else
ftr--;
break;
default:
break;
}
} else {
ftr = 0;
}
return ftr;
}
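/*
 * Worked example: for an unsigned FTR_LOWER_SAFE field currently reading 2,
 * get_safe_value() returns 1 and get_invalid_value() returns 3.
 */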
static void test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg,
const struct reg_ftr_bits *ftr_bits)
{
uint8_t shift = ftr_bits->shift;
uint64_t mask = ftr_bits->mask;
uint64_t val, new_val, ftr;
vcpu_get_reg(vcpu, reg, &val);
ftr = (val & mask) >> shift;
ftr = get_safe_value(ftr_bits, ftr);
ftr <<= shift;
val &= ~mask;
val |= ftr;
vcpu_set_reg(vcpu, reg, val);
vcpu_get_reg(vcpu, reg, &new_val);
TEST_ASSERT_EQ(new_val, val);
}
static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg,
const struct reg_ftr_bits *ftr_bits)
{
uint8_t shift = ftr_bits->shift;
uint64_t mask = ftr_bits->mask;
uint64_t val, old_val, ftr;
int r;
vcpu_get_reg(vcpu, reg, &val);
ftr = (val & mask) >> shift;
ftr = get_invalid_value(ftr_bits, ftr);
old_val = val;
ftr <<= shift;
val &= ~mask;
val |= ftr;
r = __vcpu_set_reg(vcpu, reg, val);
TEST_ASSERT(r < 0 && errno == EINVAL,
"Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
vcpu_get_reg(vcpu, reg, &val);
TEST_ASSERT_EQ(val, old_val);
}
static void test_user_set_reg(struct kvm_vcpu *vcpu, bool aarch64_only)
{
uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
struct reg_mask_range range = {
.addr = (__u64)masks,
};
int ret;
/* KVM should return error when reserved field is not zero */
range.reserved[0] = 1;
ret = __vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
TEST_ASSERT(ret, "KVM doesn't check invalid parameters.");
/* Get writable masks for feature ID registers */
memset(range.reserved, 0, sizeof(range.reserved));
vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
for (int i = 0; i < ARRAY_SIZE(test_regs); i++) {
const struct reg_ftr_bits *ftr_bits = test_regs[i].ftr_bits;
uint32_t reg_id = test_regs[i].reg;
uint64_t reg = KVM_ARM64_SYS_REG(reg_id);
int idx;
/* Get the index to masks array for the idreg */
idx = KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(reg_id), sys_reg_Op1(reg_id),
sys_reg_CRn(reg_id), sys_reg_CRm(reg_id),
sys_reg_Op2(reg_id));
for (int j = 0; ftr_bits[j].type != FTR_END; j++) {
/* Skip aarch32 registers on an aarch64-only system, since they are RAZ/WI. */
if (aarch64_only && sys_reg_CRm(reg_id) < 4) {
ksft_test_result_skip("%s on AARCH64 only system\n",
ftr_bits[j].name);
continue;
}
/* Make sure the feature field is writable */
TEST_ASSERT_EQ(masks[idx] & ftr_bits[j].mask, ftr_bits[j].mask);
test_reg_set_fail(vcpu, reg, &ftr_bits[j]);
test_reg_set_success(vcpu, reg, &ftr_bits[j]);
ksft_test_result_pass("%s\n", ftr_bits[j].name);
}
}
}
static void test_guest_reg_read(struct kvm_vcpu *vcpu)
{
bool done = false;
struct ucall uc;
uint64_t val;
while (!done) {
vcpu_run(vcpu);
switch (get_ucall(vcpu, &uc)) {
case UCALL_ABORT:
REPORT_GUEST_ASSERT(uc);
break;
case UCALL_SYNC:
/* Make sure the written values are seen by guest */
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(uc.args[2]), &val);
TEST_ASSERT_EQ(val, uc.args[3]);
break;
case UCALL_DONE:
done = true;
break;
default:
TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
}
}
}
int main(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
bool aarch64_only;
uint64_t val, el0;
int ftr_cnt;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES));
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
/* Check for AARCH64 only system */
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
aarch64_only = (el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY);
ksft_print_header();
ftr_cnt = ARRAY_SIZE(ftr_id_aa64dfr0_el1) + ARRAY_SIZE(ftr_id_dfr0_el1) +
ARRAY_SIZE(ftr_id_aa64isar0_el1) + ARRAY_SIZE(ftr_id_aa64isar1_el1) +
ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) +
ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) +
ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) -
ARRAY_SIZE(test_regs);
ksft_set_plan(ftr_cnt);
test_user_set_reg(vcpu, aarch64_only);
test_guest_reg_read(vcpu);
kvm_vm_free(vm);
ksft_finished();
}

View File

@ -0,0 +1,670 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* vpmu_counter_access - Test vPMU event counter access
*
* Copyright (c) 2023 Google LLC.
*
* This test checks if the guest can see the same number of PMU event
* counters (PMCR_EL0.N) that userspace sets, if the guest can access
* those counters, and if the guest is prevented from accessing any
* other counters.
* It also checks if the userspace accesses to the PMU registers honor the
* PMCR.N value that's set for the guest.
* This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
*/
#include <kvm_util.h>
#include <processor.h>
#include <test_util.h>
#include <vgic.h>
#include <perf/arm_pmuv3.h>
#include <linux/bitfield.h>
/* The max number of the PMU event counters (excluding the cycle counter) */
#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
/* The cycle counter bit position that's common among the PMU registers */
#define ARMV8_PMU_CYCLE_IDX 31
struct vpmu_vm {
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
int gic_fd;
};
static struct vpmu_vm vpmu_vm;
struct pmreg_sets {
uint64_t set_reg_id;
uint64_t clr_reg_id;
};
#define PMREG_SET(set, clr) {.set_reg_id = set, .clr_reg_id = clr}
static uint64_t get_pmcr_n(uint64_t pmcr)
{
return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
}
static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
{
*pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
}
static uint64_t get_counters_mask(uint64_t n)
{
uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
if (n)
mask |= GENMASK(n - 1, 0);
return mask;
}
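/*
 * e.g. get_counters_mask(3) == BIT(ARMV8_PMU_CYCLE_IDX) | GENMASK(2, 0),
 * i.e. event counters 0-2 plus the cycle counter.
 */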
/* Read PMEVCNTR<n>_EL0 through PMXEVCNTR_EL0 */
static inline unsigned long read_sel_evcntr(int sel)
{
write_sysreg(sel, pmselr_el0);
isb();
return read_sysreg(pmxevcntr_el0);
}
/* Write PMEVCNTR<n>_EL0 through PMXEVCNTR_EL0 */
static inline void write_sel_evcntr(int sel, unsigned long val)
{
write_sysreg(sel, pmselr_el0);
isb();
write_sysreg(val, pmxevcntr_el0);
isb();
}
/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
static inline unsigned long read_sel_evtyper(int sel)
{
write_sysreg(sel, pmselr_el0);
isb();
return read_sysreg(pmxevtyper_el0);
}
/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
static inline void write_sel_evtyper(int sel, unsigned long val)
{
write_sysreg(sel, pmselr_el0);
isb();
write_sysreg(val, pmxevtyper_el0);
isb();
}
static inline void enable_counter(int idx)
{
uint64_t v = read_sysreg(pmcntenset_el0);
write_sysreg(BIT(idx) | v, pmcntenset_el0);
isb();
}
static inline void disable_counter(int idx)
{
/* PMCNTENCLR_EL0 is write-1-to-clear; only clear the requested counter */
write_sysreg(BIT(idx), pmcntenclr_el0);
isb();
}
static void pmu_disable_reset(void)
{
uint64_t pmcr = read_sysreg(pmcr_el0);
/* Reset all counters, disabling them */
pmcr &= ~ARMV8_PMU_PMCR_E;
write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
isb();
}
#define RETURN_READ_PMEVCNTRN(n) \
return read_sysreg(pmevcntr##n##_el0)
static unsigned long read_pmevcntrn(int n)
{
PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
return 0;
}
#define WRITE_PMEVCNTRN(n) \
write_sysreg(val, pmevcntr##n##_el0)
static void write_pmevcntrn(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
isb();
}
#define READ_PMEVTYPERN(n) \
return read_sysreg(pmevtyper##n##_el0)
static unsigned long read_pmevtypern(int n)
{
PMEVN_SWITCH(n, READ_PMEVTYPERN);
return 0;
}
#define WRITE_PMEVTYPERN(n) \
write_sysreg(val, pmevtyper##n##_el0)
static void write_pmevtypern(int n, unsigned long val)
{
PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
isb();
}
/*
* The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
* accessors that test cases will use. Each accessor either reads/writes
* PMEV{CNTR,TYPER}<n>_EL0 directly (i.e. {read,write}_pmev{cnt,type}rn()),
* or reads/writes them through PMXEV{CNTR,TYPER}_EL0
* (i.e. {read,write}_sel_ev{cnt,type}r()).
*
* This is used to test that combinations of those accessors provide
* consistent behavior.
*/
struct pmc_accessor {
/* A function to be used to read PMEVCNTR<n>_EL0 */
unsigned long (*read_cntr)(int idx);
/* A function to be used to write PMEVCNTR<n>_EL0 */
void (*write_cntr)(int idx, unsigned long val);
/* A function to be used to read PMEVTYPER<n>_EL0 */
unsigned long (*read_typer)(int idx);
/* A function to be used to write PMEVTYPER<n>_EL0 */
void (*write_typer)(int idx, unsigned long val);
};
struct pmc_accessor pmc_accessors[] = {
/* test with all direct accesses */
{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
/* test with all indirect accesses */
{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
/* read with direct accesses, and write with indirect accesses */
{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
/* read with indirect accesses, and write with direct accesses */
{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
};
/*
* Convert a pointer of pmc_accessor to an index in pmc_accessors[],
* assuming that the pointer is one of the entries in pmc_accessors[].
*/
#define PMC_ACC_TO_IDX(acc) (acc - &pmc_accessors[0])
#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \
{ \
uint64_t _tval = read_sysreg(regname); \
\
if (set_expected) \
__GUEST_ASSERT((_tval & mask), \
"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
_tval, mask, set_expected); \
else \
__GUEST_ASSERT(!(_tval & mask), \
"tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \
_tval, mask, set_expected); \
}
/*
* Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
* are set or cleared as specified in @set_expected.
*/
static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
{
GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
}
/*
* Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
* to the specified counter (@pmc_idx) can be read/written as expected.
* When @set_op is true, it tries to set the bit for the counter in
* those registers by writing the SET registers (the bit won't be set
* if the counter is not implemented though).
* Otherwise, it tries to clear the bits in the registers by writing
* the CLR registers.
* Then, it checks if the values indicated in the registers are as expected.
*/
static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
{
uint64_t pmcr_n, test_bit = BIT(pmc_idx);
bool set_expected = false;
if (set_op) {
write_sysreg(test_bit, pmcntenset_el0);
write_sysreg(test_bit, pmintenset_el1);
write_sysreg(test_bit, pmovsset_el0);
/* The bit will be set only if the counter is implemented */
pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0));
set_expected = (pmc_idx < pmcr_n) ? true : false;
} else {
write_sysreg(test_bit, pmcntenclr_el0);
write_sysreg(test_bit, pmintenclr_el1);
write_sysreg(test_bit, pmovsclr_el0);
}
check_bitmap_pmu_regs(test_bit, set_expected);
}
/*
* Tests for reading/writing registers for the (implemented) event counter
* specified by @pmc_idx.
*/
static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
{
uint64_t write_data, read_data;
/* Disable all PMCs and reset all PMCs to zero. */
pmu_disable_reset();
/*
* Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
*/
/* Make sure that the bit in those registers is set to 0 */
test_bitmap_pmu_regs(pmc_idx, false);
/* Test if setting the bit in those registers works */
test_bitmap_pmu_regs(pmc_idx, true);
/* Test if clearing the bit in those registers works */
test_bitmap_pmu_regs(pmc_idx, false);
/*
* Tests for reading/writing the event type register.
*/
/*
* Set the event type register to an arbitrary value just for testing
* of reading/writing the register.
* The Arm ARM says that for events 0x0000 to 0x003F,
* the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
* the value written to the field even when the specified event
* is not supported.
*/
write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
acc->write_typer(pmc_idx, write_data);
read_data = acc->read_typer(pmc_idx);
__GUEST_ASSERT(read_data == write_data,
"pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
/*
* Tests for reading/writing the event count register.
*/
read_data = acc->read_cntr(pmc_idx);
/* The count value must be 0, as it is disabled and reset */
__GUEST_ASSERT(read_data == 0,
"pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx",
pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
write_data = read_data + pmc_idx + 0x12345;
acc->write_cntr(pmc_idx, write_data);
read_data = acc->read_cntr(pmc_idx);
__GUEST_ASSERT(read_data == write_data,
"pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
}
#define INVALID_EC (-1ul)
uint64_t expected_ec = INVALID_EC;
static void guest_sync_handler(struct ex_regs *regs)
{
uint64_t esr, ec;
esr = read_sysreg(esr_el1);
ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
__GUEST_ASSERT(expected_ec == ec,
"PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx; EC expected: 0x%lx",
regs->pc, esr, ec, expected_ec);
/* skip the trapping instruction */
regs->pc += 4;
/* Use INVALID_EC to indicate an exception occurred */
expected_ec = INVALID_EC;
}
/*
* Run the given operation that should trigger an exception with the
* given exception class. The exception handler (guest_sync_handler)
* will reset expected_ec to INVALID_EC and skip
* the instruction that trapped.
*/
#define TEST_EXCEPTION(ec, ops) \
({ \
GUEST_ASSERT(ec != INVALID_EC); \
WRITE_ONCE(expected_ec, ec); \
dsb(ish); \
ops; \
GUEST_ASSERT(expected_ec == INVALID_EC); \
})
/*
* Tests for reading/writing registers for the unimplemented event counter
* specified by @pmc_idx (>= PMCR_EL0.N).
*/
static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
{
/*
* Reading/writing the event count/type registers should cause
* an UNDEFINED exception.
*/
TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx));
TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0));
TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx));
TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0));
/*
* The bit corresponding to the (unimplemented) counter in
* {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers should be RAZ.
*/
test_bitmap_pmu_regs(pmc_idx, 1);
test_bitmap_pmu_regs(pmc_idx, 0);
}
/*
* The guest is configured with PMUv3 with @expected_pmcr_n number of
* event counters.
* Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
* if reading/writing PMU registers for implemented or unimplemented
* counters works as expected.
*/
static void guest_code(uint64_t expected_pmcr_n)
{
uint64_t pmcr, pmcr_n, unimp_mask;
int i, pmc;
__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
"Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
pmcr = read_sysreg(pmcr_el0);
pmcr_n = get_pmcr_n(pmcr);
/* Make sure that PMCR_EL0.N indicates the value userspace set */
__GUEST_ASSERT(pmcr_n == expected_pmcr_n,
"Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
expected_pmcr_n, pmcr_n);
/*
* Make sure that (RAZ) bits corresponding to unimplemented event
* counters in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers are reset
* to zero.
* (NOTE: bits for implemented event counters are reset to UNKNOWN)
*/
unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
check_bitmap_pmu_regs(unimp_mask, false);
/*
* Tests for reading/writing PMU registers for implemented counters.
* Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
*/
for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
for (pmc = 0; pmc < pmcr_n; pmc++)
test_access_pmc_regs(&pmc_accessors[i], pmc);
}
/*
* Tests for reading/writing PMU registers for unimplemented counters.
* Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
*/
for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
}
GUEST_DONE();
}
#define GICD_BASE_GPA 0x8000000ULL
#define GICR_BASE_GPA 0x80A0000ULL
/* Create a VM that has one vCPU with PMUv3 configured. */
static void create_vpmu_vm(void *guest_code)
{
struct kvm_vcpu_init init;
uint8_t pmuver, ec;
uint64_t dfr0, irq = 23;
struct kvm_device_attr irq_attr = {
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
.addr = (uint64_t)&irq,
};
struct kvm_device_attr init_attr = {
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
.attr = KVM_ARM_VCPU_PMU_V3_INIT,
};
/* The test creates the vpmu_vm multiple times. Ensure a clean state */
memset(&vpmu_vm, 0, sizeof(vpmu_vm));
vpmu_vm.vm = vm_create(1);
vm_init_descriptor_tables(vpmu_vm.vm);
for (ec = 0; ec < ESR_EC_NUM; ec++) {
vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
guest_sync_handler);
}
/* Create vCPU with PMUv3 */
vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
vcpu_init_descriptor_tables(vpmu_vm.vcpu);
vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
GICD_BASE_GPA, GICR_BASE_GPA);
__TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
"Failed to create vgic-v3, skipping");
/* Make sure that PMUv3 support is indicated in the ID register */
vcpu_get_reg(vpmu_vm.vcpu,
KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
"Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
/* Initialize vPMU */
vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
}
static void destroy_vpmu_vm(void)
{
close(vpmu_vm.gic_fd);
kvm_vm_free(vpmu_vm.vm);
}
static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
{
struct ucall uc;
vcpu_args_set(vcpu, 1, pmcr_n);
vcpu_run(vcpu);
switch (get_ucall(vcpu, &uc)) {
case UCALL_ABORT:
REPORT_GUEST_ASSERT(uc);
break;
case UCALL_DONE:
break;
default:
TEST_FAIL("Unknown ucall %lu", uc.cmd);
break;
}
}
static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
{
struct kvm_vcpu *vcpu;
uint64_t pmcr, pmcr_orig;
create_vpmu_vm(guest_code);
vcpu = vpmu_vm.vcpu;
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
pmcr = pmcr_orig;
/*
* Setting a value of PMCR.N larger than the host limit should not modify
* the field, and the write should still return success.
*/
set_pmcr_n(&pmcr, pmcr_n);
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
if (expect_fail)
TEST_ASSERT(pmcr_orig == pmcr,
"PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx\n",
pmcr, pmcr_n);
else
TEST_ASSERT(pmcr_n == get_pmcr_n(pmcr),
"Failed to update PMCR.N to %lu (received: %lu)\n",
pmcr_n, get_pmcr_n(pmcr));
}
/*
* Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
* and run the test.
*/
static void run_access_test(uint64_t pmcr_n)
{
uint64_t sp;
struct kvm_vcpu *vcpu;
struct kvm_vcpu_init init;
pr_debug("Test with pmcr_n %lu\n", pmcr_n);
test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
vcpu = vpmu_vm.vcpu;
/* Save the initial sp to restore it later to run the guest again */
vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
run_vcpu(vcpu, pmcr_n);
/*
* Reset and re-initialize the vCPU, and run the guest code again to
* check if PMCR_EL0.N is preserved.
*/
vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
aarch64_vcpu_setup(vcpu, &init);
vcpu_init_descriptor_tables(vcpu);
vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
run_vcpu(vcpu, pmcr_n);
destroy_vpmu_vm();
}
static struct pmreg_sets validity_check_reg_sets[] = {
PMREG_SET(SYS_PMCNTENSET_EL0, SYS_PMCNTENCLR_EL0),
PMREG_SET(SYS_PMINTENSET_EL1, SYS_PMINTENCLR_EL1),
PMREG_SET(SYS_PMOVSSET_EL0, SYS_PMOVSCLR_EL0),
};
/*
* Create a VM, and check if KVM handles the userspace accesses of
* the PMU register sets in @validity_check_reg_sets[] correctly.
*/
static void run_pmregs_validity_test(uint64_t pmcr_n)
{
int i;
struct kvm_vcpu *vcpu;
uint64_t set_reg_id, clr_reg_id, reg_val;
uint64_t valid_counters_mask, max_counters_mask;
test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
vcpu = vpmu_vm.vcpu;
valid_counters_mask = get_counters_mask(pmcr_n);
max_counters_mask = get_counters_mask(ARMV8_PMU_MAX_COUNTERS);
for (i = 0; i < ARRAY_SIZE(validity_check_reg_sets); i++) {
set_reg_id = validity_check_reg_sets[i].set_reg_id;
clr_reg_id = validity_check_reg_sets[i].clr_reg_id;
/*
* Test if the 'set' and 'clr' variants of the registers
* are initialized based on the number of valid counters.
*/
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val);
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Initial read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
KVM_ARM64_SYS_REG(set_reg_id), reg_val);
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val);
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Initial read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
/*
* Using the 'set' variant, force-set the register to the
* max number of possible counters and test if KVM discards
* the bits for unimplemented counters as it should.
*/
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), max_counters_mask);
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val);
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
KVM_ARM64_SYS_REG(set_reg_id), reg_val);
vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val);
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
}
destroy_vpmu_vm();
}
/*
* Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
* the vCPU to @pmcr_n, which is larger than the host value.
* The attempt should fail as @pmcr_n is too big to set for the vCPU.
*/
static void run_error_test(uint64_t pmcr_n)
{
pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
destroy_vpmu_vm();
}
/*
* Return the default number of implemented PMU event counters excluding
* the cycle counter (i.e. PMCR_EL0.N value) for the guest.
*/
static uint64_t get_pmcr_n_limit(void)
{
uint64_t pmcr;
create_vpmu_vm(guest_code);
vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
destroy_vpmu_vm();
return get_pmcr_n(pmcr);
}
int main(void)
{
uint64_t i, pmcr_n;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
pmcr_n = get_pmcr_n_limit();
for (i = 0; i <= pmcr_n; i++) {
run_access_test(i);
run_pmregs_validity_test(i);
}
for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
run_error_test(i);
return 0;
}

View File

@ -104,6 +104,7 @@ enum {
#define ESR_EC_SHIFT 26
#define ESR_EC_MASK (ESR_EC_NUM - 1)
#define ESR_EC_UNKNOWN 0x0
#define ESR_EC_SVC64 0x15
#define ESR_EC_IABT 0x21
#define ESR_EC_DABT 0x25

View File

@ -518,9 +518,9 @@ void aarch64_get_supported_page_sizes(uint32_t ipa,
err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd));
*ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4), val) != 0xf;
*ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64), val) == 0;
*ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16), val) != 0;
*ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN4), val) != 0xf;
*ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN64), val) == 0;
*ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN16), val) != 0;
close(vcpu_fd);
close(vm_fd);