Subject: Re: [PATCH] KVM: MMU: lazily drop large spte
On 11/14/2012 10:37 PM, Marcelo Tosatti wrote:
> On Tue, Nov 13, 2012 at 04:26:16PM +0800, Xiao Guangrong wrote:
>> Hi Marcelo,
>>
>> On 11/13/2012 07:10 AM, Marcelo Tosatti wrote:
>>> On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote:
>>>> Do not drop a large spte until it can be replaced by small pages, so that
>>>> the guest can happily read memory through it.
>>>>
>>>> The idea is from Avi:
>>>> | As I mentioned before, write-protecting a large spte is a good idea,
>>>> | since it moves some work from protect-time to fault-time, so it reduces
>>>> | jitter. This removes the need for the return value.
>>>>
>>>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>>>> ---
>>>> arch/x86/kvm/mmu.c | 34 +++++++++-------------------------
>>>> 1 files changed, 9 insertions(+), 25 deletions(-)
>>>
>>> It's likely that other 4k pages are mapped read-write in the 2mb range
>>> covered by a read-only 2mb map. Therefore it's not entirely useful to
>>> map read-only.
>>>
>>
>> A page fault is needed to install a pte even for a read access.
>> After the change, that page fault can be avoided.
>>
>>> Can you measure an improvement with this change?
>>
>> I have a test case to measure the read time; it is attached.
>> It maps 4k pages at first (dirty-logged), then switches to large sptes
>> (dirty-logging stopped), and finally measures the read access time after
>> the sptes have been write-protected.
>>
>> Before: 23314111 ns After: 11404197 ns
>
> OK, I'm concerned about cases similar to e49146dce8c3dc6f44 (with shadow),
> that is:
>
> - a large page must be destroyed when write-protecting due to a
>   shadowed page.
> - with shadow, it does not make sense to write-protect
>   large sptes, as mentioned earlier.
>

This case has been removed now. The code at the time e49146dce8c3dc6f44 was applied was:
|
|	pt = sp->spt;
|	for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
|		/* avoid RMW */
|		if (is_writable_pte(pt[i]))
|			update_spte(&pt[i], pt[i] & ~PT_WRITABLE_MASK);
|	}

The real problem with this code is that it write-protected the spte even if
it was not a last spte, which caused middle-level shadow page tables to be
write-protected as well. So e49146dce8c3dc6f44 added this code:
|	if (sp->role.level != PT_PAGE_TABLE_LEVEL)
|		continue;
|
which fixed this problem.

Now, the current code is:
|	for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
|		if (!is_shadow_present_pte(pt[i]) ||
|		      !is_last_spte(pt[i], sp->role.level))
|			continue;
|
|		spte_write_protect(kvm, &pt[i], &flush, false);
|	}
It only write-protects the last-level sptes, so large sptes are allowed to exist.
(A large spte can still be broken by drop_large_spte() on the page-fault path.)
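
For context, the page-fault path mentioned above follows roughly the pattern
sketched here. This is a simplified illustration from memory, not a verbatim
excerpt; the helper names (for_each_shadow_entry(), drop_large_spte(),
is_shadow_present_pte()) are the existing KVM ones, the argument lists are
elided:

	/*
	 * While the fault path walks the shadow page table down to the
	 * target level, any huge mapping found at an intermediate level
	 * is dropped so that a 4k spte can be installed underneath it.
	 */
	for_each_shadow_entry(vcpu, addr, it) {
		if (it.level == level) {
			/* reached the target level: install the final spte */
			mmu_set_spte(vcpu, it.sptep, /* access, gfn, pfn, ... */);
			break;
		}

		/* break a huge mapping covering this range, if one exists */
		drop_large_spte(vcpu, it.sptep);

		if (!is_shadow_present_pte(*it.sptep)) {
			/* allocate and link the next-level shadow page */
			sp = kvm_mmu_get_page(vcpu, /* ... */);
			link_shadow_page(it.sptep, sp);
		}
	}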

> So I wonder why this part of your patch
>
> -	if (level > PT_PAGE_TABLE_LEVEL &&
> -	    has_wrprotected_page(vcpu->kvm, gfn, level)) {
> -		ret = 1;
> -		drop_spte(vcpu->kvm, sptep);
> -		goto done;
> -	}
>
> is necessary (assuming EPT is in use).

This is safe; we change this code to:

-	if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
+	if ((level > PT_PAGE_TABLE_LEVEL &&
+	      has_wrprotected_page(vcpu->kvm, gfn, level)) ||
+	     mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 		pgprintk("%s: found shadow page for %llx, marking ro\n",
 			 __func__, gfn);
 		ret = 1;

The spte becomes read-only, which ensures the shadowed gfn cannot be changed.

Btw, the original code allows creating a readonly spte in this case if !(pte_access & ACC_WRITE_MASK).
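
For completeness, with the hunk above applied the check in set_spte() would
read roughly like this; the last two lines come from the surrounding context
as I remember it, so treat this as a sketch rather than a verbatim excerpt:

	if ((level > PT_PAGE_TABLE_LEVEL &&
	     has_wrprotected_page(vcpu->kvm, gfn, level)) ||
	    mmu_need_write_protect(vcpu, gfn, can_unsync)) {
		pgprintk("%s: found shadow page for %llx, marking ro\n",
			 __func__, gfn);
		ret = 1;
		/*
		 * Clear the writable bits instead of dropping the
		 * (possibly large) spte, so reads keep working through
		 * the existing mapping while writes still fault.
		 */
		pte_access &= ~ACC_WRITE_MASK;
		spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
	}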


