perf_test_util is used to set up KVM selftests where vCPUs touch a
region of memory. The guest code is implemented in perf_test_util.c
(not the calling selftests). The guest code requires a single
parameter, the vcpu_id, which has to be set by calling
vcpu_args_set(vm, vcpu_id, 1, vcpu_id).
Today all of the selftests that use perf_test_util make this call
themselves. Instead, perf_test_util should just do it, as sketched
below. This saves some code but, more importantly, prevents mistakes:
it is not at all obvious that the call is needed, and failing to make
it results in vCPUs not accessing the right regions of memory.
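A minimal sketch of the idea; the helper name and surrounding loop are
illustrative, not the exact code in perf_test_util.c:

    /* Pass the guest code's only parameter while wiring up each vCPU. */
    static void setup_vcpus_sketch(struct kvm_vm *vm, int nr_vcpus)
    {
            int vcpu_id;

            for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
                    /* ... per-vCPU region setup ... */
                    vcpu_args_set(vm, vcpu_id, 1, vcpu_id);
            }
    }

With the call made here, callers of perf_test_util no longer need to
remember it.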
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210805172821.2622793-1-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Introduce a new option to dirty_log_perf_test: -x number_of_slots. This
causes the test to attempt to split the region of memory into the given
number of slots. If the region cannot be evenly divided, the test will
fail.
This allows testing with more than one slot and therefore measuring
how performance scales with the number of memslots.
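A rough sketch of the intended split; the variable names are
illustrative and the assertion message is not the test's exact wording:

    /* Divide guest memory evenly across the requested slots. */
    uint64_t pages_per_slot = guest_num_pages / nr_slots;

    TEST_ASSERT(guest_num_pages % nr_slots == 0,
                "Guest memory cannot be evenly divided into %d slots",
                nr_slots);

For example, an invocation along the lines of
"./dirty_log_perf_test -v 8 -x 4" would back the test region with four
memslots instead of one.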
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210804222844.1419481-8-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This test measures the performance effects of KVM's access tracking.
Access tracking is driven by the MMU notifiers test_young, clear_young,
and clear_flush_young. These notifiers do not have a direct userspace
API; however, the clear_young notifier can be triggered by marking
pages as idle in /sys/kernel/mm/page_idle/bitmap. This test leverages
that mechanism to enable access tracking on guest memory.
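A minimal sketch of the page_idle write this relies on (pfn lookup via
/proc/self/pagemap and error handling are omitted; the helper is
illustrative, not the test's code):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Mark page frame 'pfn' idle; a later access clears the idle bit. */
    static void mark_pfn_idle(uint64_t pfn)
    {
            /* Each 64-bit word in the bitmap covers 64 page frames. */
            uint64_t mask = 1ULL << (pfn % 64);
            int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);

            pwrite(fd, &mask, sizeof(mask), (pfn / 64) * sizeof(mask));
            close(fd);
    }

Writing the bit marks the page idle, which in turn causes the
clear_young notifier to run for the corresponding guest page.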
To measure performance this test runs a VM with a configurable number
of vCPUs that each touch every page in disjoint regions of memory.
Performance is measured as the time it takes all vCPUs to finish
touching their predefined region.
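A hedged sketch of what "touching" a region means here; the access
width and helper name are illustrative, not the test's code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Each vCPU walks its own disjoint range, one access per page. */
    static void touch_region(char *base, uint64_t pages,
                             uint64_t page_size, bool write)
    {
            uint64_t i;

            for (i = 0; i < pages; i++) {
                    uint64_t *addr = (uint64_t *)(base + i * page_size);

                    if (write)
                            *addr = i;              /* dirties the page */
                    else
                            (void)*(volatile uint64_t *)addr; /* read access */
            }
    }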
Example invocation:
$ ./access_tracking_perf_test -v 8
Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
guest physical test memory offset: 0xffdfffff000
Populating memory : 1.337752570s
Writing to populated memory : 0.010177640s
Reading from populated memory : 0.009548239s
Mark memory idle : 23.973131748s
Writing to idle memory : 0.063584496s
Mark memory idle : 24.924652964s
Reading from idle memory : 0.062042814s
Breaking down the results:
* "Populating memory": The time it takes for all vCPUs to perform the
first write to every page in their region.
* "Writing to populated memory" / "Reading from populated memory": The
time it takes for all vCPUs to write and read to every page in their
region after it has been populated. This serves as a control for the
later results.
* "Mark memory idle": The time it takes for every vCPU to mark every
page in their region as idle through page_idle.
* "Writing to idle memory" / "Reading from idle memory": The time it
takes for all vCPUs to write and read to every page in their region
after it has been marked idle.
This test should be portable across architectures, but it is only
enabled for x86_64 since that's all I have tested.
Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20210713220957.3493520-7-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>