When we fail to emulate an instruction for the guest, we should tell it that
we failed to emulate it, by injecting an illegal instruction exception.
Note that, thanks to the debugging code right above this path, we basically
never get around to telling the guest that we failed. If user space decides
that it wants to ignore the debug exit, however, we at least do "the right
thing" afterwards.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
The e500mc patches left in some debug code that we don't need. Remove it.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
We can't run e500v2 KVM on e500mc kernels, so indicate that by
making the two options mutually exclusive in Kconfig.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
The CONFIG_KVM_E500 option really indicates that we're running on a V2 machine,
not on a machine of the generic E500 class. So indicate that properly and
change the config name accordingly.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
There's always a chance we're unable to read a guest instruction: the guest
could have its TLB mapped execute-only but not readable, or something odd
could happen and our TLB could get flushed. So it's a good idea to be prepared
for that case and have a fallback that allows us to fix things up.
Add fixup code that keeps guest code from potentially crashing our host kernel.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
If we hit any exception whatsoever in the restore path while r1/r2 aren't the
host registers, we don't get a working oops, so it's always a good idea to
restore them as early as possible.
This time there is also a practical reason to do so: we need the host page
fault handler to fix up our guest instruction read code, and for that to work
we need r1/r2 restored.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
When setting MSR for an e500mc guest, we implicitly always set MSR_GS
to make sure the guest is in guest state. Since we have this implicit
rule there, we don't need to explicitly pass MSR_GS to set_msr().
Remove all explicit setters of MSR_GS.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
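A minimal sketch of the convention described above, with purely illustrative
names and bit position (the real e500mc code operates on the vcpu's shadow MSR
state): the setter forces MSR_GS itself, so callers never need to pass it.

#define MSR_GS_BIT  (1ULL << 28)    /* guest-state bit; position illustrative */

static void set_guest_msr(unsigned long long *shadow_msr,
                          unsigned long long new_msr)
{
    /* Always keep the vcpu in guest state, whatever the caller passed. */
    *shadow_msr = new_msr | MSR_GS_BIT;
}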
When one vcpu wants to kick another, it can issue a special IPI instruction
called msgsnd. This patch emulates this instruction, its clearing counterpart
and the infrastructure required to actually trigger that interrupt inside
a guest vcpu.
With this patch, SMP guests on e500mc work.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
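As a rough sketch of the flow this describes (all names are illustrative, not
the in-tree ones): the emulation decodes the doorbell message, then marks a
pending doorbell interrupt on every matching vcpu and kicks it.

struct vcpu_sketch {
    unsigned int pir;          /* processor id used as the doorbell tag */
    unsigned long pending;     /* pending interrupt bits */
};

#define DBELL_PENDING 0x1UL

static void emulate_msgsnd(struct vcpu_sketch *vcpus, int nr_vcpus,
                           unsigned int tag, int broadcast,
                           void (*kick)(struct vcpu_sketch *v))
{
    int i;

    for (i = 0; i < nr_vcpus; i++) {
        if (!broadcast && vcpus[i].pir != tag)
            continue;

        vcpus[i].pending |= DBELL_PENDING;  /* queue the doorbell */
        kick(&vcpus[i]);                    /* wake the target vcpu */
    }
}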
Add processor support for e500mc, using hardware virtualization support
(GS-mode).
Current issues include:
- No support for external proxy (coreint) interrupt mode in the guest.
Includes work by Ashish Kalra <Ashish.Kalra@freescale.com>,
Varun Sethi <Varun.Sethi@freescale.com>, and
Liu Yu <yu.liu@freescale.com>.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
e500mc has a normal PPC FPU, rather than SPE which is found
on e500v1/v2.
Based on code from Liu Yu <yu.liu@freescale.com>.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Chips such as e500mc that implement category E.HV in Power ISA 2.06
provide hardware virtualization features, including a new MSR mode for
guest state. The guest OS can perform many operations without trapping
into the hypervisor, including transitions to and from guest userspace.
Since we can use SRR1[GS] to reliably tell whether an exception came from
guest state, instead of messing around with IVPR, we use DO_KVM similarly
to book3s.
Current issues include:
- Machine checks from guest state are not routed to the host handler.
- The guest can cause a host oops by executing an emulated instruction
in a page that lacks read permission. Existing e500/4xx support has
the same problem.
Includes work by Ashish Kalra <Ashish.Kalra@freescale.com>,
Varun Sethi <Varun.Sethi@freescale.com>, and
Liu Yu <yu.liu@freescale.com>.
Signed-off-by: Scott Wood <scottwood@freescale.com>
[agraf: remove pt_regs usage]
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
DO_KVM will need to identify the particular exception type.
There is an existing set of arbitrary numbers that Linux passes,
but it's an undocumented mess that only loosely corresponds to the
server/classic exception vectors.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
tlbilx is the new, preferred invalidation instruction. It is not
found on e500 prior to e500mc, but there should be no harm in
supporting it on all e500.
Based on code from Ashish Kalra <Ashish.Kalra@freescale.com>.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Rather than invalidate everything when a TLB1 entry needs to be
taken down, keep track of which host TLB1 entries are used for
a given guest TLB1 entry, and invalidate just those entries.
Based on code from Ashish Kalra <Ashish.Kalra@freescale.com>
and Liu Yu <yu.liu@freescale.com>.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
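The bookkeeping idea reduces to something like the sketch below (names are
illustrative, not the in-tree ones): each guest TLB1 entry remembers, as a
bitmap, which host TLB1 slots currently back it, so teardown only touches
those slots instead of flushing everything.

struct gtlb1_ref {
    unsigned long host_map;   /* bit i set => host TLB1 slot i in use */
};

static void gtlb1_invalidate(struct gtlb1_ref *ref,
                             void (*inval_host_slot)(int idx))
{
    while (ref->host_map) {
        int idx = __builtin_ctzl(ref->host_map);

        inval_host_slot(idx);                   /* just this slot */
        ref->host_map &= ref->host_map - 1;     /* clear lowest set bit */
    }
}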
The PID handling is e500v1/v2-specific, and is moved to e500.c.
The MMU sregs code and kvmppc_core_vcpu_translate will be shared with
e500mc, and is moved from e500.c to e500_tlb.c.
Partially based on patches from Liu Yu <yu.liu@freescale.com>.
Signed-off-by: Scott Wood <scottwood@freescale.com>
[agraf: fix bisectability]
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Move vcpu to the beginning of vcpu_e500 to give it appropriate
prominence, especially if more fields end up getting added to the
end of vcpu_e500 (and vcpu ends up in the middle).
Remove gratuitous "extern" and add parameter names to prototypes.
Signed-off-by: Scott Wood <scottwood@freescale.com>
[agraf: fix bisectability]
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Keeping two separate headers for e500-specific things was a
pain, and wasn't even organized along any logical boundary.
There was TLB stuff in <asm/kvm_e500.h> despite the existence of
arch/powerpc/kvm/e500_tlb.h, and nothing in <asm/kvm_e500.h> needed
to be referenced from outside arch/powerpc/kvm.
Signed-off-by: Scott Wood <scottwood@freescale.com>
[agraf: fix bisectability]
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
This is in preparation for merging in the contents of
arch/powerpc/include/asm/kvm_e500.h.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
e500mc will want to do lpid allocation/deallocation here.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
This gives us a place to put load/put actions that correspond to
code that is booke-specific but not specific to a particular core.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
We'll use it on e500mc as well.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Split e500 (v1/v2) and e500mc/e5500 to allow optimization of feature
checks that differ between the two.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Currently 32-bit only cares about this for choice of exception
vector, which is done in core-specific code. However, KVM will
want to distinguish as well.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
Now that we do neither double buffering nor heuristic selection of the
write protection method, these are not needed anymore.
Note: some drivers have their own implementation of set_bit_le() and
making it generic needs a bit of work; so we use test_and_set_bit_le()
and will later replace it with generic set_bit_le().
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
We have seen some problems with the current implementation of
get_dirty_log(), which uses synchronize_srcu_expedited() for updating
dirty bitmaps; e.g. it sometimes gives us latencies on the order of
milliseconds when we use VGA displays.
Furthermore the recent discussion on the following thread
"srcu: Implement call_srcu()"
http://lkml.org/lkml/2012/1/31/211
also motivated us to implement get_dirty_log() without SRCU.
This patch achieves this goal without sacrificing the performance of
either VGA or live migration: in practice the new code is much faster
than the old one unless we have too many dirty pages.
Implementation:
The key part of the implementation is the use of xchg() operation for
clearing dirty bits atomically. Since this allows us to update only
BITS_PER_LONG pages at once, we need to iterate over the dirty bitmap
until every dirty bit is cleared again for the next call.
Although some people may worry about using an atomic memory instruction
many times on a concurrently accessible bitmap, the bitmap is usually
accessed with mmu_lock held and we rarely see concurrent accesses,
so what we need to care about is the pure xchg() overhead.
Another point to note is that we do not use for_each_set_bit() to check
which pages in each BITS_PER_LONG-sized chunk are actually dirty. Instead we
simply use __ffs() in a loop. This is much faster than repeatedly calling
find_next_bit().
Performance:
The dirty-log-perf unit test showed nice improvements, sometimes several
times faster than before, except for some extreme cases; in those cases the
dirty page information is retrieved much faster than userspace can process
it anyway.
For real workloads, both VGA and live migration, we have observed pure
improvements: when the guest was reading a file during live migration,
we originally saw a few ms of latency, but with the new method the
latency was less than 200us.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
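A userspace-style sketch of the loop described above (this is not the
in-tree kvm_vm_ioctl_get_dirty_log(); the kernel's xchg()/__ffs() primitives
are replaced by compiler builtins, and mark_dirty() is a hypothetical
callback standing in for reporting a dirty page):

#include <limits.h>
#include <stddef.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static void collect_dirty_pages(unsigned long *dirty_bitmap, size_t nr_longs,
                                void (*mark_dirty)(size_t page_index))
{
    size_t i;

    for (i = 0; i < nr_longs; i++) {
        unsigned long mask;

        if (!dirty_bitmap[i])
            continue;

        /* xchg(): fetch the dirty bits and clear them for the next call. */
        mask = __atomic_exchange_n(&dirty_bitmap[i], 0UL, __ATOMIC_SEQ_CST);

        /* __ffs() in a loop instead of find_next_bit()/for_each_set_bit(). */
        while (mask) {
            size_t bit = __builtin_ctzl(mask);

            mark_dirty(i * BITS_PER_LONG + bit);
            mask &= mask - 1;   /* clear the lowest set bit */
        }
    }
}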
We dropped such mappings when we enabled dirty logging, and we will never
create new ones until we stop the logging.
For this we introduce a new function which can be used to write protect
a range of PT level pages: although we do not need to care about a range
of pages at this point, the following patch will need this feature to
optimize the write protection of many pages.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
We will use this in the following patch to implement another function
which needs to write protect pages using the rmap information.
Note that there is a small change in debug printing for large pages:
we do not differentiate them from others to avoid duplicating code.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
check_and_clear_guest_paused() does not need to be exported as it isn't used
by any modules; remove the export.
Signed-off-by: Eric B Munson <emunson@mgebm.net>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
S390's kvm_vcpu_stat does not contain a halt_wakeup member.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
A suspended VM can cause spurious soft lockup warnings. To avoid these, the
watchdog now checks if the kernel knows it was stopped by the host and skips
the warning if so. When the watchdog is reset successfully, clear the guest
paused flag.
Signed-off-by: Eric B Munson <emunson@mgebm.net>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
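The watchdog-side check boils down to something like the sketch below; the
helpers are illustrative stand-ins for the kernel's softlockup detection and
the KVM paravirt "guest stopped" check introduced by these patches.

#include <stdbool.h>

bool softlockup_detected(void);          /* assumed: did the watchdog time out? */
bool guest_paused_check_and_clear(void); /* assumed: host-set "VM stopped" flag */
void report_softlockup(void);            /* assumed: print the usual warning */

static void watchdog_tick(void)
{
    if (!softlockup_detected())
        return;

    /* Stopped by the host: looks like a lockup, but isn't one. */
    if (guest_paused_check_and_clear())
        return;

    report_softlockup();
}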
Now that we have a flag that will tell the guest it was suspended, create an
interface for that communication using a KVM ioctl.
Signed-off-by: Eric B Munson <emunson@mgebm.net>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
When a host stops or suspends a VM it will set a flag to show this. The
watchdog will use these functions to determine if a softlockup is real, or the
result of a suspended VM.
Signed-off-by: Eric B Munson <emunson@mgebm.net>
asm-generic changes Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
This flag will be used to check if the vm was stopped by the host when a soft
lockup was detected. The host will set the flag when it stops the guest. On
resume, the guest will check this flag if a soft lockup is detected and skip
issuing the warning.
Signed-off-by: Eric B Munson <emunson@mgebm.net>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
On PowerPC, we sometimes use a waitqueue per core, not per thread,
so we can't always use the vcpu internal waitqueue.
This code has been generalized by Christoffer Dall recently, but
unfortunately broke compilation for PowerPC. At the time the helper
function is defined, struct kvm_vcpu is not declared yet, so we can't
dereference it.
This patch moves all the logic into the generic inline function, by which
point we have all the information necessary.
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
The kvm_vcpu_kick function performs roughly the same functionality on
almost all architectures, so we shouldn't have separate copies.
PowerPC keeps a pointer to an interchangeable waitqueue on the vcpu_arch
structure, and to accommodate this special need a
__KVM_HAVE_ARCH_VCPU_GET_WQ define and an accompanying function
kvm_arch_vcpu_wq have been defined. For all other architectures this
is a generic inline that just returns &vcpu->wq.
Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
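Using only the names given in the commit text, the generic fallback is
essentially the fragment below (the surrounding kvm_host.h plumbing is
omitted); PowerPC instead defines __KVM_HAVE_ARCH_VCPU_GET_WQ and supplies
its own kvm_arch_vcpu_wq() that returns the per-core waitqueue pointer.

#ifndef __KVM_HAVE_ARCH_VCPU_GET_WQ
static inline wait_queue_head_t *kvm_arch_vcpu_wq(struct kvm_vcpu *vcpu)
{
    return &vcpu->wq;
}
#endif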
Deprecated in favour of tracepoints.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Also count fast-path exits.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
kvm_io_bus devices are used for ioeventfd, PIT, PIC, IOAPIC and
coalesced_mmio.
Currently QEMU only emulates one PCI bus; it contains 32 slots and
each slot contains 8 functions, so the maximum number of supported PCI
devices is 1 * 32 * 8 = 256. One virtio-blk takes one iobus device,
while one virtio-net (vhost=on) takes two iobus devices.
The maximum number of coalesced MMIO zones is 100, and each zone
has an iobus device. So 300 io_bus devices are not enough.
Set an upper bound for kvm_io_range to limit userspace.
1000 is a large limit and does not bloat the typical user.
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
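A back-of-the-envelope bound and the registration-time check it implies,
sketched with illustrative names (the exact in-tree constant and check may
differ): 256 PCI functions at up to two iobus devices each, plus 100
coalesced MMIO zones, is already over 600, so 1000 leaves comfortable
headroom.

#include <errno.h>

#define NR_IOBUS_DEVS 1000      /* upper bound picked by this patch */

struct io_bus_sketch {
    int dev_count;
};

static int io_bus_register_dev(struct io_bus_sketch *bus)
{
    /* Refuse to grow past the limit instead of trusting userspace. */
    if (bus->dev_count >= NR_IOBUS_DEVS)
        return -ENOSPC;

    bus->dev_count++;
    return 0;
}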
This patch allows the kvm_io_range array to be resized dynamically.
Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Intel recently released two new features, HLE and RTM.
Refer to http://software.intel.com/file/41417.
This patch exposes them to the guest.
Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
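For reference, both features are reported in CPUID leaf 7 (ECX=0), EBX:
HLE in bit 4 and RTM in bit 11. A sketch of masking the guest-visible leaf
(names are illustrative; the real code builds its supported-feature mask
differently):

#define CPUID7_EBX_HLE (1u << 4)
#define CPUID7_EBX_RTM (1u << 11)

static unsigned int guest_cpuid7_ebx(unsigned int host_ebx)
{
    /* Only advertise what the host CPU actually supports. */
    return host_ebx & (CPUID7_EBX_HLE | CPUID7_EBX_RTM);
}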
Merge tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull two more small regmap fixes from Mark Brown:
- Now we have users for it that aren't running Android it turns out
that regcache_sync_region() is much more useful to drivers if it's
exported for use by modules. Who knew?
- Make sure we don't divide by zero when doing debugfs dumps of
rbtrees, not visible up until now because everything was providing at
least some cache on startup.
* tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: prevent division by zero in rbtree_show
regmap: Export regcache_sync_region()
Pull a few KVM fixes from Avi Kivity:
"A bunch of powerpc KVM fixes, a guest and a host RCU fix (unrelated),
and a small build fix."
* 'kvm-updates/3.4' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: Resolve RCU vs. async page fault problem
KVM: VMX: vmx_set_cr0 expects kvm->srcu locked
KVM: PMU: Fix integer constant is too large warning in kvm_pmu_set_msr()
KVM: PPC: Book3S: PR: Fix preemption
KVM: PPC: Save/Restore CR over vcpu_run
KVM: PPC: Book3S HV: Save and restore CR in __kvmppc_vcore_entry
KVM: PPC: Book3S HV: Fix kvm_alloc_linear in case where no linears exist
KVM: PPC: Book3S: Compile fix for ppc32 in HIOR access code
Merge tag 'sh-for-linus' of git://github.com/pmundt/linux-sh
Pull SuperH fixes from Paul Mundt.
* tag 'sh-for-linus' of git://github.com/pmundt/linux-sh:
sh: fix clock-sh7757 for the latest sh_mobile_sdhi driver
serial: sh-sci: use serial_port_in/out vs sci_in/out.
sh: vsyscall: Fix up .eh_frame generation.
sh: dma: Fix up device attribute mismatch from sysdev fallout.
sh: dwarf unwinder depends on SHcompact.
sh: fix up fallout from system.h disintegration.
Pull security layer fixlet from James Morris.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
sysctl: fix write access to dmesg_restrict/kptr_restrict
Pull ACPI & Power Management patches from Len Brown:
"Two fixes for cpuidle merge-window changes, plus a URL fix in
MAINTAINERS"
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
MAINTAINERS: Update git url for ACPI
cpuidle: Fix panic in CPU off-lining with no idle driver
ACPI processor: Use safe_halt() rather than halt() in acpi_idle_play_dead()
Pull target fixes from Nicholas Bellinger:
"Pull two tcm_fc fabric related fixes for -rc2:
Note that both have been CC'ed to stable, and patch #1 is the
important one that addresses a memory corruption bug related to FC
exchange timeouts + command abort.
Thanks again to MDR for tracking down this issue!"
* '3.4-rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending:
tcm_fc: Do not free tpg structure during wq allocation failure
tcm_fc: Add abort flag for gracefully handling exchange timeout
Avoid freeing a registered tpg structure if an alloc_workqueue call
fails. This fixes a bug where the failure was leaking memory associated
with se_portal_group setup during the original core_tpg_register() call.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Acked-by: Kiran Patil <Kiran.patil@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Add an abort flag and use it to terminate processing when an exchange
is timed out or reset. The abort flag is used in place of the
transport_generic_free_cmd function call in the reset and timeout
cases, because calling that function in that context would free
memory that was in use. The abort flag allows the lifetime to
be managed in a more normal way, while truncating the processing.
This change eliminates a source of memory corruption which
manifested in a variety of ugly ways.
(nab: Drop unused struct fc_exch *ep in ft_recv_seq)
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Acked-by: Kiran Patil <Kiran.patil@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>