Subject: Re: [Patch v3 4/7] KVM: x86/mmu: Optimize SPTE change for aging gfn range
On Fri, Feb 10, 2023, Vipin Sharma wrote:
>  	} else {
> +		new_spte = mark_spte_for_access_track(iter->old_spte);
> +		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
> +							iter->old_spte, new_spte,
> +							iter->level);
>  		/*
>  		 * Capture the dirty status of the page, so that it doesn't get
>  		 * lost when the SPTE is marked for access tracking.
>  		 */
> -		if (is_writable_pte(new_spte))
> -			kvm_set_pfn_dirty(spte_to_pfn(new_spte));
> -
> -		new_spte = mark_spte_for_access_track(new_spte);
> +		if (is_writable_pte(iter->old_spte))
> +			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));

Moving this block below kvm_tdp_mmu_write_spte() is an unrelated change. Much to
my chagrin, I discovered that past me gave you this code. I still think the change
is correct, but I dropped it for now, mostly because the legacy/shadow MMU has the
same pattern (marks the PFN dirty before setting the SPTE).

I think this might actually be a bug fix, e.g. if the XCHG races with a fast page
fault fix and drops the Writable bit, the CPU could insert a writable entry into the
TLB without KVM invoking kvm_set_pfn_dirty(). But I'm not 100% confident that I'm
not missing something, and _if_ there's a bug then mmu_spte_age() needs the same
fix, so for now, I dropped it.
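
To make the ordering concrete, here's a toy userspace model of the two variants
(C11 atomics; the SPTE_* bits, the global "spte", and the helper names are
stand-ins I made up for illustration, not the real KVM definitions):

/*
 * Toy model only: a 64-bit "SPTE" word plus two made-up flag bits.
 * age_old_ordering() mirrors the existing code (decide on a snapshot,
 * then write back); age_new_ordering() mirrors the patch (write back
 * first, then decide on the value the exchange actually returned).
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_WRITABLE	(1ull << 1)	/* stand-in for the Writable bit */
#define SPTE_ACCESSED	(1ull << 2)	/* stand-in for the Accessed bit */

static _Atomic uint64_t spte = SPTE_ACCESSED;	/* accessed, read-only */
static bool pfn_dirty;				/* stand-in for kvm_set_pfn_dirty() */

static void age_old_ordering(void)
{
	uint64_t snap = atomic_load(&spte);

	if (snap & SPTE_WRITABLE)		/* checks a possibly stale snapshot */
		pfn_dirty = true;

	/*
	 * Window: another CPU's fast page fault can set SPTE_WRITABLE here.
	 * The exchange below clobbers it, pfn_dirty stays false, yet the
	 * guest may have written through a writable TLB entry in the meantime.
	 */
	atomic_exchange(&spte, snap & ~SPTE_ACCESSED);
}

static void age_new_ordering(void)
{
	uint64_t snap = atomic_load(&spte);
	uint64_t old = atomic_exchange(&spte, snap & ~SPTE_ACCESSED);

	if (old & SPTE_WRITABLE)		/* checks what was really there */
		pfn_dirty = true;
}

int main(void)
{
	age_old_ordering();
	age_new_ordering();
	printf("dirty=%d spte=%#llx\n", pfn_dirty,
	       (unsigned long long)atomic_load(&spte));
	return 0;
}

The diff above is the second variant: the is_writable_pte() check runs on
iter->old_spte as returned by kvm_tdp_mmu_write_spte(), not on the value read
before the exchange.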
