Subject: Re: [RFCv2 13/13] KVM: unmap guest memory using poisoned pages
On Mon, Apr 19, 2021, Kirill A. Shutemov wrote:
> On Fri, Apr 16, 2021 at 05:30:30PM +0000, Sean Christopherson wrote:
> > I like the idea of using "special" PTE value to denote guest private memory,
> > e.g. in this RFC, HWPOISON. But I strongly dislike having KVM involved in the
> > manipulation of the special flag/value.
> >
> > Today, userspace owns the gfn->hva translations and the kernel effectively owns
> > the hva->pfn translations (with input from userspace). KVM just connects the
> > dots.
> >
> > Having KVM own the shared/private transitions means KVM is now part owner of the
> > entire gfn->hva->pfn translation, i.e. KVM is effectively now a secondary MMU
> > and a co-owner of the primary MMU. This creates locking madness, e.g. KVM taking
> > mmap_sem for write, mmu_lock under page lock, etc..., and also takes control away
> > from userspace. E.g. userspace strategy could be to use a separate backing/pool
> > for shared memory and change the gfn->hva translation (memslots) in reaction to
> > a shared/private conversion. Automatically swizzling things in KVM takes away
> > that option.
> >
> > IMO, KVM should be entirely "passive" in this process, e.g. the guest shares or
> > protects memory, userspace calls into the kernel to change state, and the kernel
> > manages the page tables to prevent bad actors. KVM simply does the plumbing for
> > the guest page tables.
>
> That's a new perspective for me. Very interesting.
>
> Let's see how it could look:
>
> - KVM only allows poisoned pages (or whatever flag we end up using for
> protection) in the private mappings. SIGBUS otherwise.
>
> - Poisoned pages must be tied to the KVM instance to be allowed in the
> private mappings. Like kvm->id in the current prototype. SIGBUS
> otherwise.
>
> - Pages get poisoned on fault-in if the VMA has a new vmflag set.
>
> - Fault-in of a poisoned page leads to a hwpoison entry. Userspace cannot
> access such pages.
>
> - Poisoned pages produced this way get unpoisoned on free.
>
> - The new VMA flag is set by userspace. mprotect(2)?

Ya, or mmap(), though I'm not entirely sure a VMA flag would suffice. The
notion of the page being private is tied to the PFN, which would suggest "struct
page" needs to be involved.

But fundamentally the private pages are, well, private. They can't be shared
across processes, so I think we could (should?) require the VMA to always be
MAP_PRIVATE. Does that buy us enough to rely on the VMA alone? I.e. is that
enough to prevent userspace and unaware kernel code from acquiring a reference
to the underlying page?
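For illustration only (purely a sketch, not taken from the RFC; VM_GUEST_PRIVATE
and guest_private_vm_ops are names made up for the example): if MAP_PRIVATE is a
hard requirement, the backing file's mmap() handler is one place to enforce it
and to set the opt-in flag.

#include <linux/fs.h>
#include <linux/mm.h>

static int guest_private_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* Private guest pages cannot be shared across processes. */
	if (vma->vm_flags & VM_SHARED)
		return -EINVAL;

	/*
	 * Opt the whole VMA in.  The (hypothetical) fault handler behind
	 * guest_private_vm_ops would allocate pages, mark them with
	 * SetPageHWPoison(), and rely on the core fault code refusing to
	 * map a PageHWPoison() page into userspace page tables.
	 */
	vma->vm_flags |= VM_GUEST_PRIVATE | VM_DONTDUMP;
	vma->vm_ops = &guest_private_vm_ops;

	return 0;
}

That covers the MAP_PRIVATE half; whether it also stops unaware kernel code
from grabbing a reference is still the open question.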

> - Add a new GUP flag to retrieve such pages from the userspace mapping.
> Used only for private mapping population.

> - Shared gfn ranges managed by userspace, based on hypercalls from the
> guest.
>
> - Shared mappings get populated via a normal VMA. Any poisoned pages here
> would lead to SIGBUS.
>
> So far it looks pretty straight-forward.
>
> The only thing that I don't understand is at what point the page gets tied
> to the KVM instance. Currently we do it just before populating shadow
> entries, but it would not work with the new scheme: as we poison pages
> on fault-in, they may never get inserted into shadow entries. That's not
> good as we rely on that info to unpoison the page on free.

Can you elaborate on what you mean by "unpoison"? If the page is never actually
mapped into the guest, then its poisoned status is nothing more than a software
flag, i.e. nothing extra needs to be done on free. If the page is mapped into
the guest, then KVM can be made responsible for reinitializing the page with
keyid=0 when the page is removed from the guest.

The TDX Module prevents mapping the same PFN into multiple guests, so the kernel
doesn't actually have to care _which_ KVM instance(s) is associated with a page,
it only needs to prevent installing valid PTEs in the host page tables.
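For the population side, here is a sketch of what the new GUP flag mentioned
above could look like from KVM's end.  FOLL_GUEST_PRIVATE and
private_hva_to_pfn() are names made up for this sketch; the real plumbing would
presumably live in the existing hva_to_pfn() path.

#include <linux/kvm_host.h>
#include <linux/mm.h>

static kvm_pfn_t private_hva_to_pfn(unsigned long hva)
{
	struct page *page;
	long ret;

	/* The hypothetical flag lets only this path see poisoned pages. */
	ret = get_user_pages_unlocked(hva, 1, &page,
				      FOLL_WRITE | FOLL_GUEST_PRIVATE);
	if (ret != 1)
		return KVM_PFN_ERR_FAULT;

	/* Only software-poisoned pages are allowed into private memslots. */
	if (!PageHWPoison(page)) {
		put_page(page);
		return KVM_PFN_ERR_FAULT;
	}

	return page_to_pfn(page);
}

KVM stays passive that way: it never flips shared/private state itself, it
just refuses to build a private mapping from anything that isn't already
marked.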

> Maybe we should tie the VMA to the KVM instance when setting the vmflag?
> I dunno.
>
> Any comments?
>
> --
> Kirill A. Shutemov
