We leave some printfs because they inform the user the test is being
skipped. QUIET should not disable those. We also leave the printfs
used for help text.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There were a few problems with the way we output "debug" messages.
The first is that we used DEBUG() which is defined when NDEBUG is
not defined, but NDEBUG will never be defined for kselftests
because they rely too much on assert(). The next is that most
of the DEBUG() messages were actually "info" messages, which
users may want to turn off if they just want a silent test that
either completes or asserts. Finally, a debug message output from
a library function, and thus for all tests, was annoying when its
information wasn't interesting for a test.
Rework these messages so debug messages only output when DEBUG
is defined and info messages output unless QUIET is defined.
Also name the functions pr_debug and pr_info and make sure that
when they're disabled we eat all the inputs. The latter avoids
unused variable warnings when the variables were only defined
for the purpose of printing.
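A minimal sketch of what that looks like (the dummy printf-like helper
name is an assumption); the disabled variants still consume their
arguments so nothing ends up unused:

  #include <stdio.h>

  /* Swallows the arguments so disabled output doesn't warn about
   * unused variables.
   */
  static inline void __attribute__((format(printf, 1, 2)))
  _no_printf(const char *format, ...) {}

  #ifdef DEBUG
  #define pr_debug(...) printf(__VA_ARGS__)
  #else
  #define pr_debug(...) _no_printf(__VA_ARGS__)
  #endif

  #ifndef QUIET
  #define pr_info(...) printf(__VA_ARGS__)
  #else
  #define pr_info(...) _no_printf(__VA_ARGS__)
  #endif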
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In order to quantify demand paging performance, time guest execution
during demand paging.
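A minimal sketch of the kind of helper this relies on (name assumed),
applied to clock_gettime(CLOCK_MONOTONIC, ...) samples taken before and
after the vCPU run loop:

  #include <time.h>

  static struct timespec timespec_sub(struct timespec a, struct timespec b)
  {
          struct timespec d;

          d.tv_sec = a.tv_sec - b.tv_sec;
          d.tv_nsec = a.tv_nsec - b.tv_nsec;
          if (d.tv_nsec < 0) {
                  d.tv_sec--;
                  d.tv_nsec += 1000000000;
          }
          return d;
  }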
Signed-off-by: Ben Gardon <bgardon@google.com>
[Move timespec-diff to test_util.h]
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Most VMs have multiple vCPUs, the concurrent execution of which has a
substantial impact on demand paging performance. Add an option to create
multiple vCPUs, each of which accesses a disjoint region of memory.
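Conceptually, the split looks like the following sketch (variable and
helper names are assumptions, not the test's actual API):

  uint64_t vcpu_pages = guest_num_pages / nr_vcpus;
  int i;

  for (i = 0; i < nr_vcpus; i++) {
          uint64_t gva = guest_test_virt_mem +
                         i * vcpu_pages * guest_page_size;

          /* vCPU i only ever touches
           * [gva, gva + vcpu_pages * guest_page_size).
           */
          setup_vcpu_region(vm, i, gva, vcpu_pages);
  }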
Signed-off-by: Ben Gardon <bgardon@google.com>
[guest_code() can't return, use GUEST_ASSERT(). Ensure the number
of guest pages is compatible with the host.]
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently vcpu_args_set is only implemented for x86. This makes writing
tests with multiple vCPUs difficult as each guest vCPU must either a.)
do the same thing or b.) derive some kind of unique token from its
registers or the architecture. To simplify the process of writing tests
with multiple vCPUs for s390 and aarch64, add set args functions for
those architectures.
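For aarch64 this amounts to loading the arguments into x0-x7, roughly as
in the sketch below (assuming the library's existing set_reg() and
ARM64_CORE_REG() helpers):

  #include <stdarg.h>

  void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...)
  {
          va_list ap;
          unsigned int i;

          TEST_ASSERT(num >= 1 && num <= 8,
                      "Unsupported number of args: %u", num);

          va_start(ap, num);
          for (i = 0; i < num; i++)
                  set_reg(vm, vcpuid, ARM64_CORE_REG(regs.regs[i]),
                          va_arg(ap, uint64_t));
          va_end(ap);
  }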
Signed-off-by: Ben Gardon <bgardon@google.com>
[Fixed array index (num => i) and made some style changes.]
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In preparation for supporting multiple vCPUs in the demand paging test,
pass arguments to the vCPU in a consolidated global struct instead of
syncing multiple globals.
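A sketch of the shape of that struct (field names and MAX_VCPUS are
assumptions):

  struct vcpu_args {
          uint64_t gva;       /* base of this vCPU's memory region */
          uint64_t pages;     /* number of guest pages to touch */
          struct kvm_vm *vm;
  };

  /* One entry per vCPU, filled in by main() and read by the vCPU threads. */
  static struct vcpu_args vcpu_args[MAX_VCPUS];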
Signed-off-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add an argument to allow the demand paging test to work on larger and
smaller guest sizes.
Signed-off-by: Ben Gardon <bgardon@google.com>
[Rewrote parse_size() to simplify it and provide the user more flexibility as
to how sizes are input. Also fixed size overflow assert.]
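A condensed sketch of such a parser, accepting an optional k/m/g suffix
(the selftests' version handles more input formats and guards against
overflow):

  #include <ctype.h>
  #include <stdlib.h>

  static uint64_t parse_size(const char *arg)
  {
          char *end;
          uint64_t size = strtoull(arg, &end, 0);

          switch (tolower(*end)) {
          case 'g':
                  size <<= 10;
                  /* fall through */
          case 'm':
                  size <<= 10;
                  /* fall through */
          case 'k':
                  size <<= 10;
                  break;
          case '\0':
                  break;
          default:
                  TEST_ASSERT(false, "Invalid size suffix: '%s'", end);
          }
          return size;
  }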
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When running the demand paging test with the -u option, the User Fault
FD handler essentially adds an arbitrary delay to page fault resolution.
To enable better simulation of a real demand paging scenario, add a
configurable delay to the UFFD handler.
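The delay itself can be as simple as the following sketch inside the
handler thread, before the fault is resolved (the variable name is an
assumption):

  if (uffd_delay_usec)
          usleep(uffd_delay_usec);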
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The demand paging test is currently a simple page access test which, while
potentially useful, doesn't add much versus the existing dirty logging
test. To improve the demand paging test, add a basic userfaultfd demand
paging implementation.
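A condensed sketch of the fault-resolution step in the handler thread
(error handling trimmed, helper and variable names assumed); the real
test drives this from a poll() loop:

  #include <linux/userfaultfd.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  static void handle_uffd_fault(int uffd, void *src_page, uint64_t page_size)
  {
          struct uffd_msg msg;
          struct uffdio_copy copy;

          if (read(uffd, &msg, sizeof(msg)) != sizeof(msg) ||
              msg.event != UFFD_EVENT_PAGEFAULT)
                  return;

          copy.src = (uint64_t)src_page;
          copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
          copy.len = page_size;
          copy.mode = 0;
          ioctl(uffd, UFFDIO_COPY, &copy);
  }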
Signed-off-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While userfaultfd, KVM's demand paging implementation, is not specific
to KVM, having a benchmark for its performance will be useful for
guiding performance improvements to KVM. As a first step towards creating
a userfaultfd demand paging test, create a simple memory access test,
based on dirty_log_test.
Reviewed-by: Oliver Upton <oupton@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Guests and hosts don't have to have the same page size. This means
calculations are necessary when selecting the number of guest pages
to allocate in order to ensure the number is compatible with the
host. Provide utilities to help with those calculations and apply
them where appropriate.
We also revert commit bffed38d4f ("kvm: selftests: aarch64:
dirty_log_test: fix unaligned memslot size") and then use
vm_adjust_num_guest_pages() there instead.
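The core of the calculation is rounding the guest page count up so the
region size is a multiple of both page sizes, as in this sketch (helper
name assumed; both page sizes are powers of two):

  static uint64_t adjust_num_guest_pages(uint64_t guest_page_size,
                                         uint64_t host_page_size,
                                         uint64_t num_guest_pages)
  {
          uint64_t align = guest_page_size > host_page_size ?
                           guest_page_size : host_page_size;
          uint64_t size = num_guest_pages * guest_page_size;

          size = (size + align - 1) & ~(align - 1);
          return size / guest_page_size;
  }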
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This array will allow us to easily translate modes to their parameter
values.
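A sketch of the table's shape, indexed by the vm_guest_mode enum (field
names and the exact entries are assumptions):

  struct vm_guest_mode_params {
          unsigned int pa_bits;
          unsigned int va_bits;
          unsigned int page_size;
          unsigned int page_shift;
  };

  static const struct vm_guest_mode_params vm_guest_mode_params[] = {
          [VM_MODE_P52V48_4K]  = { 52, 48, 0x1000,  12 },
          [VM_MODE_P52V48_64K] = { 52, 48, 0x10000, 16 },
          [VM_MODE_P40V48_4K]  = { 40, 48, 0x1000,  12 },
          [VM_MODE_P40V48_64K] = { 40, 48, 0x10000, 16 },
  };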
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We're going to want this name in the library code, so use a shorter
name in the tests.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
BITS_PER_LONG and friends are provided by linux/bitops.h
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
I'm not sure how we ended up using printf instead of fprintf in
virt_dump(). Fix it.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add the basic infrastructure needed to test AMD nested SVM.
This is largely copied from the KVM unit test infrastructure.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
get_gdt_base() and get_idt_base() only return the base address
of the descriptor tables. Soon we will need to get the size as well.
Change the prototype of those functions so that they return
the whole desc_ptr struct instead of the address field.
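After the change, the accessors look roughly like this sketch (the struct
layout follows the usual sgdt/sidt memory format; exact names in
processor.h may differ):

  struct desc_ptr {
          uint16_t size;
          uint64_t address;
  } __attribute__((packed));

  static inline struct desc_ptr get_gdt(void)
  {
          struct desc_ptr gdt;

          __asm__ __volatile__("sgdt %[gdt]" : [gdt] "=m"(gdt));
          return gdt;
  }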
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Wei Huang <wei.huang2@amd.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Pull KVM updates from Paolo Bonzini:
"This is the first batch of KVM changes.
ARM:
- cleanups and corner case fixes.
PPC:
- Bugfixes
x86:
- Support for mapping DAX areas with large nested page table entries.
- Cleanups and bugfixes here too. A particularly important one is a
fix for FPU load when the thread has TIF_NEED_FPU_LOAD. There is
also a race condition which could be used in guest userspace to
exploit the guest kernel, for which the embargo expired today.
- Fast path for IPI delivery vmexits, shaving about 200 clock cycles
from IPI latency.
- Protect against "Spectre-v1/L1TF" (bring data in the cache via
speculative out of bound accesses, use L1TF on the sibling
hyperthread to read it), which unfortunately is an even bigger
whack-a-mole game than SpectreV1.
Sean continues his mission to rewrite KVM. In addition to a sizable
number of x86 patches, this time he contributed a pretty large
refactoring of vCPU creation that affects all architectures but should
not have any visible effect.
s390 will come next week together with some more x86 patches"
* tag 'kvm-5.6-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (204 commits)
x86/KVM: Clean up host's steal time structure
x86/KVM: Make sure KVM_VCPU_FLUSH_TLB flag is not missed
x86/kvm: Cache gfn to pfn translation
x86/kvm: Introduce kvm_(un)map_gfn()
x86/kvm: Be careful not to clear KVM_VCPU_FLUSH_TLB bit
KVM: PPC: Book3S PR: Fix -Werror=return-type build failure
KVM: PPC: Book3S HV: Release lock on page-out failure path
KVM: arm64: Treat emulated TVAL TimerValue as a signed 32-bit integer
KVM: arm64: pmu: Only handle supported event counters
KVM: arm64: pmu: Fix chained SW_INCR counters
KVM: arm64: pmu: Don't mark a counter as chained if the odd one is disabled
KVM: arm64: pmu: Don't increment SW_INCR if PMCR.E is unset
KVM: x86: Use a typedef for fastop functions
KVM: X86: Add 'else' to unify fastop and execute call path
KVM: x86: inline memslot_valid_for_gpte
KVM: x86/mmu: Use huge pages for DAX-backed files
KVM: x86/mmu: Remove lpage_is_disallowed() check from set_spte()
KVM: x86/mmu: Fold max_mapping_level() into kvm_mmu_hugepage_adjust()
KVM: x86/mmu: Zap any compound page when collapsing sptes
KVM: x86/mmu: Remove obsolete gfn restoration in FNAME(fetch)
...
Local IRQs are reset by a normal cpu reset. The initial cpu reset and
the clear cpu reset, as supersets of the normal reset, both clear the
IRQs too.
Let's inject an interrupt to a vCPU before calling a reset and see if
it is gone after the reset.
We choose to inject only an emergency interrupt at this point and can
extend the test to other types of IRQs later.
Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
[minor fixups]
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Link: https://lore.kernel.org/r/20200131100205.74720-7-frankja@linux.ibm.com
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Sync msr-index.h to pull in recent renames of the IA32_FEATURE_CONTROL
MSR definitions. Update KVM's VMX selftest and turbostat accordingly.
Keep the full name in turbostat's output to avoid breaking someone's
workflow, e.g. if a script is looking for the full name.
While using the renamed defines is by no means necessary, do the sync
now to avoid leaving a landmine that will get stepped on the next time
msr-index.h needs to be refreshed for some other reason.
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20191221044513.21680-4-sean.j.christopherson@intel.com
The kernel's version of msr-index.h was pulled wholesale into tools by
commit
444e2ff34d ("tools arch x86: Grab a copy of the file containing the MSR numbers"),
Use the common msr-index.h instead of manually redefining everything in
a KVM-only header.
Note, a few MSR-related definitions remain in processor.h because they
are not covered by msr-index.h, including the awesomely named
APIC_BASE_MSR, which refers to starting index of the x2APIC MSRs, not
the actual MSR_IA32_APICBASE, which *is* defined by msr-index.h.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20191221044513.21680-3-sean.j.christopherson@intel.com
The mis-spellings are found by checkpatch.pl, so fix them.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename the NMI-window exiting related definitions to match the latest
Intel SDM. No functional changes.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename interrupt-window exiting related definitions to match the
latest Intel SDM. No functional changes.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In the kvm_create_max_vcpus test, remove an unneeded local
variable in the loop that adds vCPUs to the VM.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ensure that IA32_XSS appears in KVM_GET_MSR_INDEX_LIST if it can be set
to a non-zero value.
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Change-Id: Ia2d644f69e2d6d8c27d7e0a7a45c2bf9c42bf5ff
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 204c91eff7 ("KVM: selftests: do not blindly clobber registers in
guest asm") was intended to make the test more gcc-proof; however, the result
is exactly the opposite: on newer gccs (e.g. 8.2.1) the test breaks with
==== Test Assertion Failure ====
x86_64/sync_regs_test.c:168: run->s.regs.regs.rbx == 0xBAD1DEA + 1
pid=14170 tid=14170 - Invalid argument
1 0x00000000004015b3: main at sync_regs_test.c:166 (discriminator 6)
2 0x00007f413fb66412: ?? ??:0
3 0x000000000040191d: _start at ??:?
rbx sync regs value incorrect 0x1.
Apparently, the compiler is still free to play games with registers even
when they have variables attached.
Rewrite the guest code with 'asm volatile', embedding the ucall there and
making sure rbx is preserved.
Fixes: 204c91eff7 ("KVM: selftests: do not blindly clobber registers in guest asm")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vmx_dirty_log_test fails on AMD and this is no surprise as it is VMX
specific. Bail early when nested VMX is unsupported.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vmx_* tests require VMX and three of them implement the same check. Move it
to the vmx library.
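The shared check is essentially the following (sketch assuming the
library's kvm_get_supported_cpuid_entry() helper and the CPUID_VMX bit
definition):

  void nested_vmx_check_supported(void)
  {
          struct kvm_cpuid_entry2 *entry = kvm_get_supported_cpuid_entry(1);

          if (!(entry->ecx & CPUID_VMX)) {
                  fprintf(stderr, "nested VMX not enabled, skipping test\n");
                  exit(KSFT_SKIP);
          }
  }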
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
vmx_set_nested_state_test() checks if VMX is supported twice: in the very
beginning (and skips the whole test if it's not) and before doing
test_vmx_nested_state(). One should be enough.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Similar to commit 2c57da356800 ("selftests: kvm: fix sync_regs_test with
newer gccs") and commit 204c91eff7 ("KVM: selftests: do not blindly
clobber registers in guest asm"), we had better not rely on gcc leaving
r11 untouched. We can write the simple ucall inline and implement the guest
code entirely as a small assembler function.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Fix the following build error from "make TARGETS=kvm kselftest":
libkvm.a(assert.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
This error is seen when the build is done from the main Makefile using the
kselftest target. In this case KBUILD_CPPFLAGS and CC_OPTION_CFLAGS
are defined.
When the build is invoked using
"make -C tools/testing/selftests/kvm", KBUILD_CPPFLAGS and CC_OPTION_CFLAGS
aren't defined.
There is no need to pass in KBUILD_CPPFLAGS and CC_OPTION_CFLAGS for the
check to determine if --no-pie is necessary, which is the case when these
two aren't defined when "make -C tools/testing/selftests/kvm" runs.
Fix it by simplifying the no-pie-option logic. With this change, both
build variations work.
"make TARGETS=kvm kselftest"
"make -C tools/testing/selftests/kvm"
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When KVM_GET_MSRS fails, the report looks like
==== Test Assertion Failure ====
lib/x86_64/processor.c:1089: r == nmsrs
pid=28775 tid=28775 - Argument list too long
1 0x000000000040a55f: vcpu_save_state at processor.c:1088 (discriminator 3)
2 0x00000000004010e3: main at state_test.c:171 (discriminator 4)
3 0x00007fb8e69223d4: ?? ??:0
4 0x0000000000401287: _start at ??:?
Unexpected result from KVM_GET_MSRS, r: 36 (failed at 194)
and it's not obvious that '194' here is the failed MSR index and that
it's printed in hex. Change that.
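A sketch of the clarified assertion (the exact format string is an
assumption):

  TEST_ASSERT(r == nmsrs,
              "Unexpected result from KVM_GET_MSRS, r: %i (failed MSR was 0x%x)",
              r, r < nmsrs ? list->indices[r] : -1);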
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check that accesses by nested guests are logged according to the
L1 physical addresses rather than L2.
Most of the patch is really adding EPT support to the testing
framework.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After commit e8bb4755eea2 ("KVM: selftests: Split ucall.c into architecture
specific files") selftests which use ucall on x86 started segfaulting, and
apparently gcc is to blame: it "optimizes" the ucall() function, throwing away
the va_start/va_end part, because it thinks the structure is not being used.
Previously, it couldn't do that because there was also an MMIO version and
the decision which particular implementation to use was made at runtime.
With older gccs it's possible to solve the problem by adding 'volatile'
to 'struct ucall', but at least with gcc-8.3 this trick doesn't work.
A 'memory' clobber seems to do the job.
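Concretely, the fix boils down to adding the "memory" clobber to the PIO
trap so gcc knows the ucall structure reachable through RDI is actually
consumed; a sketch (UCALL_PIO_PORT as used by the selftests):

  asm volatile("in %[port], %%al"
               : : [port] "d" (UCALL_PIO_PORT), "D" (&uc) : "rax", "memory");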
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The bit is supposed to be '1' when SMT is not supported or forcefully
disabled and '0' otherwise.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove the duplicated code in run_test() of dirty_log_test because,
after some reordering of functions, we can now directly use the outcome
of vm_create().
Meanwhile, with the new VM_MODE_PXXV48_4K, we can also safely revert
b442324b58, which stuck the x86_64 PA width at 39 bits for
dirty_log_test.
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The naming VM_MODE_P52V48_4K is explicit but unclear when used on
x86_64 machines, because x86_64 machines have various physical
address widths rather than one static value. Here are some examples:
- Intel Xeon E3-1220: 36 bits
- Intel Core i7-8650: 39 bits
- AMD EPYC 7251: 48 bits
All of them use a 48-bit linear address width but have totally
different physical address widths (and most of the older machines
have fewer than 52 bits).
Let's create a new guest mode called VM_MODE_PXXV48_4K for the current
x86_64 tests and make it the default, replacing the old
VM_MODE_P52V48_4K name, because it shows more clearly that the PA
width is not really a constant. Meanwhile, stop assuming all x86
machines have a 52-bit PA width and instead fetch the real
vm->pa_bits from CPUID 0x80000008 at runtime.
This mode is currently used exclusively by x86_64 and no other arch.
As a slight touch-up, move the DEBUG macro from dirty_log_test.c to
kvm_util.h so the library can use it too.
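Fetching the real widths is a one-leaf CPUID query, roughly (sketch
assuming the library's kvm_get_supported_cpuid_entry() helper):

  struct kvm_cpuid_entry2 *entry = kvm_get_supported_cpuid_entry(0x80000008);

  vm->pa_bits = entry->eax & 0xff;         /* physical address bits */
  vm->va_bits = (entry->eax >> 8) & 0xff;  /* linear address bits */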
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since we've just removed the dependency on the vm type in the previous
patch, we can now create the vm much earlier. Note that to move it
earlier we used an approximation of the number of extra pages, but it
should be fine.
This prepares for the follow-up patches to finally remove the
duplication of guest mode parsing.
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rather than passing the vm type from the top level all the way to the end
of vm creation, simply keep it as an internal of the kvm_vm struct and
decide the type in _vm_create(). Several reasons for doing this:
- The vm type is only decided by the physical address width and currently
only used on aarch64, so we have enough information as long as
we're passing vm_guest_mode into _vm_create().
- This removes a loop dependency between vm->type and the creation of
vms. That dependency is why we currently need to parse vm_guest_mode
twice in some tests, once in run_test() and then again in _vm_create().
The follow-up patches will clean that up as well, so we have a
single place to decide guest machine types and so on.
Note that this patch will slightly change the behavior of the aarch64
tests in that previously most vm_create() callers directly passed
type==0 into _vm_create(), but now the type will depend on
vm_guest_mode. However, it shouldn't affect any user because all
vm_create() users on aarch64 use the VM_MODE_DEFAULT guest
mode (which is VM_MODE_P40V48_4K), so in the end the type will still be zero.
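On aarch64 the decision inside _vm_create() then looks roughly like this
sketch (the exact condition is an assumption):

  if (vm->pa_bits != 40)
          vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);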
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>