Subject: Re: [PATCH] KVM: MMU: lazily drop large spte
On Fri, Nov 16, 2012 at 12:46:16PM +0800, Xiao Guangrong wrote:
> On 11/16/2012 11:56 AM, Marcelo Tosatti wrote:
> > On Fri, Nov 16, 2012 at 11:39:12AM +0800, Xiao Guangrong wrote:
> >> On 11/16/2012 11:02 AM, Marcelo Tosatti wrote:
> >>> On Thu, Nov 15, 2012 at 07:17:15AM +0800, Xiao Guangrong wrote:
> >>>> On 11/14/2012 10:37 PM, Marcelo Tosatti wrote:
> >>>>> On Tue, Nov 13, 2012 at 04:26:16PM +0800, Xiao Guangrong wrote:
> >>>>>> Hi Marcelo,
> >>>>>>
> >>>>>> On 11/13/2012 07:10 AM, Marcelo Tosatti wrote:
> >>>>>>> On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote:
> >>>>>>>> Do not drop a large spte until it can be replaced by small pages, so that
> >>>>>>>> the guest can happily read memory through it.
> >>>>>>>>
> >>>>>>>> The idea is from Avi:
> >>>>>>>> | As I mentioned before, write-protecting a large spte is a good idea,
> >>>>>>>> | since it moves some work from protect-time to fault-time, so it reduces
> >>>>>>>> | jitter. This removes the need for the return value.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> >>>>>>>> ---
> >>>>>>>> arch/x86/kvm/mmu.c | 34 +++++++++-------------------------
> >>>>>>>> 1 files changed, 9 insertions(+), 25 deletions(-)
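(To make the idea in the changelog concrete before the discussion below: instead of zapping a write-protected large spte, only its writable bit is cleared, so reads keep hitting the large mapping and only the first write fault has to break it up. The following is a simplified, self-contained model of that difference, not the actual patch; the spte bit layout and the two helper names are invented for illustration.)

#include <stdint.h>
#include <stdio.h>

/* Invented, simplified spte bits -- not the kernel's layout. */
#define SPTE_PRESENT   (1ULL << 0)
#define SPTE_WRITABLE  (1ULL << 1)
#define SPTE_LARGE     (1ULL << 2)

/* Old behaviour: a large spte is dropped entirely when it must become
 * read-only, so even the next read access has to fault to rebuild a mapping. */
static void write_protect_by_dropping(uint64_t *spte)
{
        if (*spte & SPTE_LARGE)
                *spte = 0;
        else
                *spte &= ~SPTE_WRITABLE;
}

/* Behaviour sketched by the patch: only clear the writable bit and keep the
 * (now read-only) large mapping in place; reads still hit it, and the first
 * write fault breaks it into small sptes later. */
static void write_protect_lazily(uint64_t *spte)
{
        *spte &= ~SPTE_WRITABLE;
}

int main(void)
{
        uint64_t old_way = SPTE_PRESENT | SPTE_WRITABLE | SPTE_LARGE;
        uint64_t new_way = old_way;

        write_protect_by_dropping(&old_way);
        write_protect_lazily(&new_way);
        printf("old: %#llx (gone, reads fault)  new: %#llx (read-only, reads hit)\n",
               (unsigned long long)old_way, (unsigned long long)new_way);
        return 0;
}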
> >>>>>>>
> >>>>>>> Its likely that other 4k pages are mapped read-write in the 2mb range
> >>>>>>> covered by a read-only 2mb map. Therefore its not entirely useful to
> >>>>>>> map read-only.
> >>>>>>>
> >>>>>>
> >>>>>> It needs a page fault to install a pte even for a read access.
> >>>>>> After the change, that page fault can be avoided.
> >>>>>>
> >>>>>>> Can you measure an improvement with this change?
> >>>>>>
> >>>>>> I have attached a test case that measures the read time.
> >>>>>> It maps 4k pages at first (dirty-logged), then switches to large sptes
> >>>>>> (stops dirty logging), and finally measures the read access time after
> >>>>>> write-protecting the sptes.
> >>>>>>
> >>>>>> Before: 23314111 ns After: 11404197 ns
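(The attached test case is not reproduced here. Purely as an illustration of the kind of measurement described above, a guest-side read-timing loop might look like the sketch below; the buffer size, stride, and timing method are assumptions, not taken from the actual attachment.)

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (64UL << 20)   /* 64MB of guest memory, arbitrary */
#define STRIDE   4096UL         /* one read per 4k page */

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
        volatile unsigned char *buf = malloc(BUF_SIZE);
        uint64_t start, end;
        unsigned long i, sum = 0;

        if (!buf)
                return 1;

        /* Populate the mapping first; this is the point where dirty logging
         * and write protection would be toggled on the host side. */
        memset((void *)buf, 1, BUF_SIZE);

        /* Touch one byte per 4k page and time the reads. */
        start = now_ns();
        for (i = 0; i < BUF_SIZE; i += STRIDE)
                sum += buf[i];
        end = now_ns();

        printf("read time: %llu ns (checksum %lu)\n",
               (unsigned long long)(end - start), sum);
        free((void *)buf);
        return 0;
}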
> >>>>>
> >>>>> Ok, I'm concerned about cases similar to e49146dce8c3dc6f44 (with shadow),
> >>>>> that is:
> >>>>>
> >>>>> - a large page must be destroyed when write-protecting due to a
> >>>>> shadowed page.
> >>>>> - with shadow, it does not make sense to write-protect
> >>>>> large sptes, as mentioned earlier.
> >>>>>
> >>>>
> >>>> This case is removed now; the code at the time e49146dce8c3dc6f44 was applied was:
> >>>> |
> >>>> |                pt = sp->spt;
> >>>> |                for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
> >>>> |                        /* avoid RMW */
> >>>> |                        if (is_writable_pte(pt[i]))
> >>>> |                                update_spte(&pt[i], pt[i] & ~PT_WRITABLE_MASK);
> >>>> |        }
> >>>>
> >>>> The real problem in this code is that it would write-protect a spte even if
> >>>> it is not a last spte, which caused middle-level shadow page tables to be
> >>>> write-protected. So e49146dce8c3dc6f44 added this code:
> >>>> |                if (sp->role.level != PT_PAGE_TABLE_LEVEL)
> >>>> |                        continue;
> >>>> |
> >>>> which fixed this problem.
> >>>>
> >>>> Now, the current code is:
> >>>> |                for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
> >>>> |                        if (!is_shadow_present_pte(pt[i]) ||
> >>>> |                            !is_last_spte(pt[i], sp->role.level))
> >>>> |                                continue;
> >>>> |
> >>>> |                        spte_write_protect(kvm, &pt[i], &flush, false);
> >>>> |                }
> >>>> It only write-protects last sptes, so it allows large sptes to exist.
> >>>> (A large spte can be broken by drop_large_spte() on the page-fault path.)
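(To illustrate that last point: on the page-fault path, when a lower-level table has to be installed under an entry that still holds a read-only large spte, that spte is zapped first. The sketch below is a stand-alone toy model; the bit layout and drop_large_spte_model() are invented and are not KVM's drop_large_spte().)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented, simplified spte bits -- not the kernel's layout. */
#define SPTE_PRESENT  (1ULL << 0)
#define SPTE_LARGE    (1ULL << 2)

static bool is_large_present(uint64_t spte)
{
        return (spte & SPTE_PRESENT) && (spte & SPTE_LARGE);
}

/* What drop_large_spte() conceptually does on the fault path: if the entry
 * where a lower-level page table must be linked still holds a (possibly
 * read-only) large mapping, zap it so 4k sptes can be built underneath.
 * In KVM this would also involve a TLB flush. */
static void drop_large_spte_model(uint64_t *sptep)
{
        if (is_large_present(*sptep))
                *sptep = 0;
}

int main(void)
{
        uint64_t spte = SPTE_PRESENT | SPTE_LARGE;      /* read-only large spte */

        drop_large_spte_model(&spte);
        printf("after fault-path drop: %#llx\n", (unsigned long long)spte);
        return 0;
}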
> >>>>
> >>>>> So I wonder why this part of your patch
> >>>>>
> >>>>> -        if (level > PT_PAGE_TABLE_LEVEL &&
> >>>>> -            has_wrprotected_page(vcpu->kvm, gfn, level)) {
> >>>>> -                ret = 1;
> >>>>> -                drop_spte(vcpu->kvm, sptep);
> >>>>> -                goto done;
> >>>>> -        }
> >>>>>
> >>>>> is necessary (assuming EPT is in use).
> >>>>
> >>>> This is safe; we changed the code to:
> >>>>
> >>>> -                if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
> >>>> +                if ((level > PT_PAGE_TABLE_LEVEL &&
> >>>> +                     has_wrprotected_page(vcpu->kvm, gfn, level)) ||
> >>>> +                    mmu_need_write_protect(vcpu, gfn, can_unsync)) {
> >>>>                          pgprintk("%s: found shadow page for %llx, marking ro\n",
> >>>>                                   __func__, gfn);
> >>>>                          ret = 1;
> >>>>
> >>>> The spte becomes read-only, which ensures the shadowed gfn cannot be changed.
> >>>>
> >>>> Btw, the original code allows creating a readonly spte in this case if !(pte_access & WRITABLE).
> >>>
> >>> Regarding shadow: it should be fine as long as the fault path always deletes
> >>> large mappings when shadowed pages are present in the region.
> >>
> >> It is also safe for the hard mmu; in this patch I added this code:
> >>
> >> @@ -2635,6 +2617,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
> >>                          break;
> >>                  }
> >>
> >> +                drop_large_spte(vcpu, iterator.sptep);
> >> +
> >>
> >> It can delete large mappings like the soft mmu does.
> >>
> >> Anything I missed?
> >>
> >>>
> >>> Ah, unshadowing from reexecute_instruction does not handle
> >>> large pages. I suppose that is what "simplification" refers
> >>> to.
> >>
> >> reexecute_instruction does not directly handle the last spte; it just
> >> removes all shadow pages and then lets the cpu retry the instruction. The
> >> page can become writable when a #PF is encountered again, so a large spte
> >> is fine in this case.
> >
> > While searching for a given "gpa", you don't find the large gfn which is
> > mapping it, right? (That is, searching for gfn 4 fails to find the large
> > read-only "gfn 0".) Unshadowing gfn 4 will keep the large read-only mapping
> > present.
> >
> > 1. large read-write spte to gfn 0
> > 2. shadow gfn 4
> > 3. write-protect large spte pointing to gfn 0
> > 4. write to gfn 4
> > 5. instruction emulation fails
> > 5. unshadow gfn 4
> > 6. refault, do not drop large spte because no pages shadowed
7. refault, then goto 2 (as part of write to gfn 4)
>
> Hmm, it is not true. :)
>
> The large spte can become writable since 'no pages shadowed' (that means
> has_wrprotected_page() can return 0 for this case). No?

What if gfn 4 is a pagetable that is part of the pagedirectory chain used to
map gfn 4? See corrected step 7 above.
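(Background for has_wrprotected_page(): roughly speaking, KVM keeps a per-large-page-region count of how many gfns inside the region are currently shadowed and therefore write-protected, and a large mapping may only be made writable while that count is zero. The following is a simplified stand-alone model of that bookkeeping with invented names; it is not KVM's actual write_count/has_wrprotected_page() implementation.)

#include <stdbool.h>
#include <stdio.h>

/* Simplified model: one counter per large-page region counting how many
 * gfns inside it are currently shadowed (and therefore write-protected). */
struct lpage_region {
        int write_count;
};

static bool has_wrprotected_page_model(struct lpage_region *r)
{
        return r->write_count > 0;
}

static void shadow_gfn(struct lpage_region *r)   { r->write_count++; }
static void unshadow_gfn(struct lpage_region *r) { r->write_count--; }

int main(void)
{
        struct lpage_region region = { .write_count = 0 };

        shadow_gfn(&region);    /* e.g. gfn 4 becomes a shadowed pagetable */
        printf("large spte may be writable: %s\n",
               has_wrprotected_page_model(&region) ? "no" : "yes");

        unshadow_gfn(&region);  /* gfn 4 unshadowed after failed emulation */
        printf("large spte may be writable: %s\n",
               has_wrprotected_page_model(&region) ? "no" : "yes");
        return 0;
}

In the scenario above, unshadowing gfn 4 drops the count back to zero, which is Xiao's point that has_wrprotected_page() can then return 0 and the large spte can become writable again on refault.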


