    Subject: Re: [PATCH, RFC 45/62] mm: Add the encrypt_mprotect() system call for MKTME
    From: Kai Huang <kai.huang@linux.intel.com>

    On Mon, 2019-06-17 at 18:50 -0700, Andy Lutomirski wrote:
    > On Mon, Jun 17, 2019 at 5:48 PM Kai Huang <kai.huang@linux.intel.com> wrote:
    > >
    > >
    > > >
    > > > > And another silly argument: if we had /dev/mktme, then we could
    > > > > possibly get away with avoiding all the keyring stuff entirely.
    > > > > Instead, you open /dev/mktme and you get your own key under the hood.
    > > > > If you want two keys, you open /dev/mktme twice. If you want some
    > > > > other program to be able to see your memory, you pass it the fd.
    > > >
    > > > We still like the keyring because it's one-stop-shopping as the place
    > > > that *owns* the hardware KeyID slots. Those are global resources and
    > > > scream for a single global place to allocate and manage them. The
    > > > hardware slots also need to be shared between any anonymous and
    > > > file-based users, no matter what the APIs for the anonymous side
    > > > look like.
    > >
    > > The MKTME driver (which creates /dev/mktme) can also be the one-stop
    > > shop. I think whether to use the keyring to manage MKTME keys should
    > > be based on whether we need/should take advantage of the existing
    > > key retention service's functionality. For example, with the key
    > > retention service we can revoke/invalidate/set an expiry for a key
    > > (not sure whether MKTME needs those, though), and we have several
    > > keyrings -- the thread-specific keyring, the process-specific
    > > keyring, the user-specific keyring, etc. -- so we can control who
    > > can and cannot find the key. I don't think managing MKTME keys in
    > > the MKTME driver has those advantages.
    > >
    >
    > Trying to evaluate this with the current proposed code is a bit odd, I
    > think. Suppose you create a thread-specific key and then fork(). The
    > child can presumably still use the key regardless of whether the child
    > can nominally access the key in the keyring because the PTEs are still
    > there.

    Right. This is a little bit odd, although virtualization (QEMU, which is
    the main use case of MKTME, at least so far) doesn't use fork().
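
    To make your fork() point concrete, here is a minimal userspace sketch.
    Everything MKTME-specific in it is an assumption: this RFC doesn't
    allocate a syscall number (1000 below is only a placeholder), and the
    key serial is assumed to come from an "mktme" key in the keyring:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Placeholder: the RFC does not allocate a syscall number. */
    #define __NR_encrypt_mprotect 1000

    static long encrypt_mprotect(void *addr, size_t len,
                                 unsigned long prot, int key_serial)
    {
            return syscall(__NR_encrypt_mprotect, addr, len, prot,
                           key_serial);
    }

    int main(void)
    {
            int key_serial = 123;   /* assume: serial of an "mktme" key */
            char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

            /* Parent attaches the MKTME key to the mapping. */
            encrypt_mprotect(buf, 4096, PROT_READ | PROT_WRITE, key_serial);
            strcpy(buf, "secret");

            if (fork() == 0) {
                    /* The child inherited the PTEs (and the KeyID in
                     * them), so the data is readable here whether or not
                     * the child can access the key in the keyring. */
                    printf("child sees: %s\n", buf);
                    _exit(0);
            }
            wait(NULL);
            return 0;
    }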

    >
    > More fundamentally, in some sense, the current code has no semantics.
    > Associating a key with memory and "encrypting" it doesn't actually do
    > anything unless you are attacking the memory bus but you haven't
    > compromised the kernel. There's no protection against a guest that
    > can corrupt its EPT tables, there's no protection against kernel bugs
    > (*especially* if the duplicate direct map design stays), and there
    > isn't even any fd or other object around by which you can only access
    > the data if you can see the key.

    I am not saying that managing the MKTME key/keyID in the key retention
    service is definitely better, but it seems everything you mentioned is
    unrelated to whether we choose the key retention service to manage the
    MKTME key/keyID? Or are you saying that it doesn't matter whether we
    manage the key/keyID in the key retention service or in the MKTME
    driver, since MKTME barely has any security benefit (besides defending
    against physical attack)?

    >
    > I'm also wondering whether the kernel will always be able to be a
    > one-stop shop for key allocation -- if the MKTME hardware gains
    > interesting new uses down the road, who knows how key allocation will
    > work?

    So far I don't have any use case that requires managing the key/keyID
    itself rather than letting the kernel manage keyID allocation. Please
    enlighten us if you see any potential ones.
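
    For completeness, the /dev/mktme model quoted above might look roughly
    like this from userspace. Everything here is hypothetical -- the device
    node, the key-per-open semantics, and mmap() on the fd are just my
    reading of that description:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            /* Hypothetical device: each open() allocates a fresh key
             * under the hood, so two opens mean two keys. */
            int key_a = open("/dev/mktme", O_RDWR);
            int key_b = open("/dev/mktme", O_RDWR);
            if (key_a < 0 || key_b < 0) {
                    perror("open /dev/mktme");
                    return 1;
            }

            /* One plausible reading: memory gets the key by mmap()ing
             * the fd; sharing with another process is just passing the
             * fd around (e.g. over an SCM_RIGHTS message). */
            void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, key_a, 0);
            if (p == MAP_FAILED)
                    perror("mmap");

            close(key_b);
            close(key_a);
            return 0;
    }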

    Thanks,
    -Kai
