Subject: Re: [RFCv2 00/16] KVM protected memory extension
> On Oct 19, 2020, at 11:19 PM, Kirill A. Shutemov <kirill@shutemov.name> wrote:

> For removing the userspace mapping, use a trick similar to what NUMA
> balancing does: convert memory that belongs to KVM memory slots to
> PROT_NONE. All existing entries are converted to PROT_NONE with
> mprotect(), and newly faulted-in pages get PROT_NONE from the updated
> vm_page_prot. The new VMA flag -- VM_KVM_PROTECTED -- indicates that
> the pages in the VMA must be treated in a special way in the GUP and
> fault paths. The flag allows GUP to return the page even though it is
> mapped with PROT_NONE, but only if the new GUP flag -- FOLL_KVM -- is
> specified. Any userspace access to the memory would result in SIGBUS.
> Any GUP access without FOLL_KVM would result in -EFAULT.
>
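
Just to check that I'm reading the mechanism right, I'd expect the
GUP-side test to end up looking roughly like the sketch below. This is
only my mental model of the cover letter, not the actual patch; the
helper name is made up, and the real check presumably lives somewhere
in the follow_page()/faultin_page() paths:

#include <linux/mm.h>

/*
 * Hypothetical helper: may this GUP caller pull pages out of a
 * VM_KVM_PROTECTED VMA?  Per the description above, such pages are
 * PROT_NONE for plain userspace access (which gets SIGBUS), and GUP
 * only succeeds when the caller passes FOLL_KVM; without it the
 * caller sees -EFAULT.
 */
static bool kvm_protected_gup_allowed(struct vm_area_struct *vma,
                                      unsigned int gup_flags)
{
        /* Ordinary VMAs are unaffected. */
        if (!(vma->vm_flags & VM_KVM_PROTECTED))
                return true;

        /* Protected VMA: only KVM's own lookups (FOLL_KVM) get the page. */
        return !!(gup_flags & FOLL_KVM);
}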

I definitely like the direction this patchset is going in, and I think
that allowing KVM guests to have memory that is inaccessible to QEMU
is a great idea.

I do wonder, though: do we really want to do this with these PROT_NONE
tricks, or should we actually come up with a way to have a KVM guest map
memory that isn't mapped into QEMU's mm_struct at all? As an example
of the latter, I mean something a bit like this:

https://lkml.kernel.org/r/CALCETrUSUp_7svg8EHNTk3nQ0x9sdzMCU=h8G-Sy6=SODq5GHg@mail.gmail.com
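
To make that concrete, the shape I have in mind is an interface where
the VMM never has a host virtual mapping of the guest-private memory
in the first place, so there is nothing to mprotect() away. A purely
illustrative sketch -- the struct and ioctl names below are invented
for this email, not the interface from the linked thread or any
existing KVM API:

#include <linux/ioctl.h>
#include <linux/kvm.h>

/*
 * Illustrative only: a chunk of guest memory that KVM manages
 * directly and that never shows up in QEMU's page tables, so there
 * is no userspace mapping left to protect.
 */
struct kvm_private_region {
        __u32 slot;               /* memslot this region backs */
        __u32 pad;
        __u64 guest_phys_addr;    /* GPA where it appears to the guest */
        __u64 size;               /* bytes; never mapped in the VMM */
};

/* Invented ioctl number, only to show what the call might look like. */
#define KVM_SET_PRIVATE_REGION _IOW(KVMIO, 0xff, struct kvm_private_region)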

I don't mean to say that the latter is a requirement for any kind of
protected memory like this, but I do think we should understand the
tradeoffs, in terms of what a full implementation looks like, the
effort and time frames involved, and the maintenance burden of
supporting whatever gets merged going forward.
