author | Avi Kivity | 2012-07-08 17:16:30 +0300 |
---|---|---|
committer | Avi Kivity | 2012-07-09 14:18:59 +0300 |
commit | e676505ac96813e8b93170b1f5e5ffe0cf6a2348 (patch) | |
tree | 413e00fb20937767adf5f5e4ef9b83a9738b266f /arch/x86/kvm/mmu.c | |
parent | 5cfc2aabcb282f4554e7086c9893b386ad6ba9d4 (diff) |
KVM: MMU: Force cr3 reload with two dimensional paging on mov cr3 emulation
Currently the MMU's ->new_cr3() callback does nothing when guest paging
is disabled or when two-dimensional paging (e.g. EPT on Intel) is active.
This means that an emulated write to cr3 can be lost; kvm_set_cr3() will
write vcpu->arch.cr3, but the GUEST_CR3 field in the VMCS will retain its
old value and this is what the guest sees.
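For context, a minimal sketch of the path described above; the function names (kvm_set_cr3(), nonpaging_new_cr3()) are the kernel's, but the bodies are abbreviated from memory and should be read as an illustration of the problem, not the exact source:

/* Sketch of the emulated mov-to-cr3 path: the new value is stored in
 * the software copy and the MMU callback is expected to propagate it.
 */
int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
{
	/* validity checks elided */
	vcpu->arch.cr3 = cr3;           /* software copy is updated ... */
	vcpu->arch.mmu.new_cr3(vcpu);   /* ... but this callback is a no-op */
	return 0;
}

/* With paging disabled, or with TDP (EPT/NPT) active, the callback did
 * nothing, so GUEST_CR3 in the VMCS kept its old value.
 */
static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
{
}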
This bug did not have any effect until now because:
- with unrestricted guest, or with svm, we never emulate a mov cr3 instruction
- without unrestricted guest, and with paging enabled, we also never emulate a
mov cr3 instruction
- without unrestricted guest, but with paging disabled, the guest's cr3 is
ignored until the guest enables paging; at this point the value from arch.cr3
is loaded correctly by the mov cr0 instruction which turns on paging
However, the patchset that enables big real mode causes us to emulate mov cr3
instructions in protected mode sometimes (when guest state is not virtualizable
by vmx); this mov cr3 is effectively ignored and will crash the guest.
The fix is to make nonpaging_new_cr3() call mmu_free_roots() to force a cr3
reload. This is awkward because now all the new_cr3 callbacks do the same
thing, and because mmu_free_roots() is somewhat of an overkill; but fixing
that is more complicated and will be done after this minimal fix.
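To see why freeing the roots is enough, a rough sketch of the reload path follows; kvm_mmu_reload() and kvm_mmu_load() are real helpers in arch/x86/kvm/mmu.h and mmu.c, but the flow is summarized from memory and is an assumption about this era of the code:

/* Sketch: after mmu_free_roots() invalidates root_hpa, the next guest
 * entry (vcpu_enter_guest()) calls kvm_mmu_reload(), notices the missing
 * root, and rebuilds it.
 */
static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu)
{
	if (likely(vcpu->arch.mmu.root_hpa != INVALID_PAGE))
		return 0;   /* roots still valid: nothing to do */

	/* kvm_mmu_load() allocates fresh roots and invokes the vendor
	 * set_cr3() hook, which on VMX rewrites GUEST_CR3, so the value
	 * stored in vcpu->arch.cr3 finally reaches the hardware. */
	return kvm_mmu_load(vcpu);
}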
Observed in the Windows XP 32-bit installer while bringing up secondary vcpus.
Signed-off-by: Avi Kivity <avi@redhat.com>
Diffstat (limited to 'arch/x86/kvm/mmu.c')
-rw-r--r-- | arch/x86/kvm/mmu.c | 2 |
1 file changed, 2 insertions, 0 deletions
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3b53d9e08bfc..569cd66ba24c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -188,6 +188,7 @@ static u64 __read_mostly shadow_dirty_mask;
 static u64 __read_mostly shadow_mmio_mask;
 
 static void mmu_spte_set(u64 *sptep, u64 spte);
+static void mmu_free_roots(struct kvm_vcpu *vcpu);
 
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
 {
@@ -2401,6 +2402,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 
 static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
 {
+	mmu_free_roots(vcpu);
 }
 
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,