Date:    2015-05-11
Subject: Re: [PATCH 8/9] KVM: MMU: fix MTRR update


On 05/08/2015 12:53 AM, Paolo Bonzini wrote:
>
>
> On 30/04/2015 12:24, guangrong.xiao@linux.intel.com wrote:
>> +static void vmx_set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr)
>> +{
>> +        struct mtrr_state_type *mtrr_state = &vcpu->arch.mtrr_state;
>> +        unsigned char mtrr_enabled = mtrr_state->enabled;
>> +        gfn_t start, end, mask;
>> +        int index;
>> +        bool is_fixed = true;
>> +
>> +        if (msr == MSR_IA32_CR_PAT || !enable_ept ||
>> +            !kvm_arch_has_noncoherent_dma(vcpu->kvm))
>> +                return;
>> +
>> +        if (!(mtrr_enabled & 0x2) && msr != MSR_MTRRdefType)
>> +                return;
>> +
>> +        switch (msr) {
>> +        case MSR_MTRRfix64K_00000:
>> +                start = 0x0;
>> +                end = 0x80000;
>> +                break;
>> +        case MSR_MTRRfix16K_80000:
>> +                start = 0x80000;
>> +                end = 0xa0000;
>> +                break;
>> +        case MSR_MTRRfix16K_A0000:
>> +                start = 0xa0000;
>> +                end = 0xc0000;
>> +                break;
>> +        case MSR_MTRRfix4K_C0000 ... MSR_MTRRfix4K_F8000:
>> +                index = msr - MSR_MTRRfix4K_C0000;
>> +                start = 0xc0000 + index * (32 << 10);
>> +                end = start + (32 << 10);
>> +                break;
>> +        case MSR_MTRRdefType:
>> +                is_fixed = false;
>> +                start = 0x0;
>> +                end = ~0ULL;
>> +                break;
>> +        default:
>> +                /* variable range MTRRs. */
>> +                is_fixed = false;
>> +                index = (msr - 0x200) / 2;
>> +                start = (((u64)mtrr_state->var_ranges[index].base_hi) << 32) +
>> +                        (mtrr_state->var_ranges[index].base_lo & PAGE_MASK);
>> +                mask = (((u64)mtrr_state->var_ranges[index].mask_hi) << 32) +
>> +                       (mtrr_state->var_ranges[index].mask_lo & PAGE_MASK);
>> +                mask |= ~0ULL << cpuid_maxphyaddr(vcpu);
>> +
>> +                end = ((start & mask) | ~mask) + 1;
>> +        }
>> +
>> +        if (is_fixed && !(mtrr_enabled & 0x1))
>> +                return;
>> +
>> +        kvm_zap_gfn_range(vcpu->kvm, gpa_to_gfn(start), gpa_to_gfn(end));
>> +}
>
> I think this should all be generic logic, even if it causes some extra
> zaps on AMD. (It's AMD's bug that it doesn't honor MTRRs).

Okay, will move the function to x86.c and kill the callback in x86_ops.
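
Roughly like this - the names and the exact call site below are only a
sketch of the idea, not the final patch:

        /* arch/x86/kvm/x86.c -- sketch only */

        /* was vmx_set_msr_mtrr(); body unchanged except for the EPT check */
        static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr);

        static int set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
        {
                /* ... existing validation and mtrr_state update ... */

                update_mtrr(vcpu, msr);  /* zap the affected gfn range */
                return 0;
        }

With that, the vendor callback added by this patch is not needed at all.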

>
> Even !enable_ept can be handled in a vendor-independent manner, as
> "vcpu->arch.mmu.page_fault == tdp_page_fault".

We can directly use 'tdp_enabled'; it has already been extern-ed. :)
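
So in the generic version the entry check would simply become (sketch,
same names as in the patch above, with only !enable_ept swapped out):

        if (msr == MSR_IA32_CR_PAT || !tdp_enabled ||
            !kvm_arch_has_noncoherent_dma(vcpu->kvm))
                return;

tdp_enabled is set only when the vendor module actually uses TDP, so it
covers both the EPT and NPT cases without peeking into vcpu->arch.mmu.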

