Subject: Re: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory
On Tue, Nov 29, 2022 at 02:21:39PM +0300, Kirill A. Shutemov wrote:
> On Mon, Nov 28, 2022 at 06:06:32PM -0600, Michael Roth wrote:
> > On Tue, Oct 25, 2022 at 11:13:37PM +0800, Chao Peng wrote:
> > > From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> > >
> >
> > <snip>
> >
> > > +static struct file *restrictedmem_file_create(struct file *memfd)
> > > +{
> > > +	struct restrictedmem_data *data;
> > > +	struct address_space *mapping;
> > > +	struct inode *inode;
> > > +	struct file *file;
> > > +
> > > +	data = kzalloc(sizeof(*data), GFP_KERNEL);
> > > +	if (!data)
> > > +		return ERR_PTR(-ENOMEM);
> > > +
> > > +	data->memfd = memfd;
> > > +	mutex_init(&data->lock);
> > > +	INIT_LIST_HEAD(&data->notifiers);
> > > +
> > > +	inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb);
> > > +	if (IS_ERR(inode)) {
> > > +		kfree(data);
> > > +		return ERR_CAST(inode);
> > > +	}
> > > +
> > > +	inode->i_mode |= S_IFREG;
> > > +	inode->i_op = &restrictedmem_iops;
> > > +	inode->i_mapping->private_data = data;
> > > +
> > > +	file = alloc_file_pseudo(inode, restrictedmem_mnt,
> > > +				 "restrictedmem", O_RDWR,
> > > +				 &restrictedmem_fops);
> > > +	if (IS_ERR(file)) {
> > > +		iput(inode);
> > > +		kfree(data);
> > > +		return ERR_CAST(file);
> > > +	}
> > > +
> > > +	file->f_flags |= O_LARGEFILE;
> > > +
> > > +	mapping = memfd->f_mapping;
> > > +	mapping_set_unevictable(mapping);
> > > +	mapping_set_gfp_mask(mapping,
> > > +			     mapping_gfp_mask(mapping) & ~__GFP_MOVABLE);
> >
> > Is this supposed to prevent migration of pages being used for the
> > restrictedmem/shmem backend?
>
> Yes, my bad. I expected it to prevent migration, but it does not.
>
> Looks like we need to bump the refcount in restrictedmem_get_page() and
> drop it again once KVM no longer uses the page.

restrictedmem_get_page() does take a reference, but KVM later drops it
via kvm_release_pfn_clean() after populating the secondary page table
entry. One option would be to let the user feature (e.g. TDX/SEV) do the
get_page()/put_page() itself while populating the secondary page table
entry; AFAICS this requirement also comes from those features.
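
Roughly something like the sketch below (just an illustration, not the
actual TDX/SEV code; install_secondary_pte()/zap_secondary_pte() are
made-up placeholders for the arch-specific mapping helpers):

/*
 * Sketch: the feature code keeps the reference taken by
 * restrictedmem_get_page() for as long as the secondary page table
 * entry exists, instead of dropping it right away via
 * kvm_release_pfn_clean().
 */
static int restricted_map_gfn(struct kvm *kvm, struct file *file,
			      pgoff_t offset, gfn_t gfn)
{
	struct page *page;
	int order, ret;

	/* Returns with a reference held on success. */
	ret = restrictedmem_get_page(file, offset, &page, &order);
	if (ret)
		return ret;

	/* Placeholder for installing the secondary page table entry. */
	ret = install_secondary_pte(kvm, gfn, page_to_pfn(page));
	if (ret) {
		put_page(page);
		return ret;
	}

	/* Keep the reference until the entry is zapped. */
	return 0;
}

static void restricted_unmap_gfn(struct kvm *kvm, gfn_t gfn,
				 struct page *page)
{
	zap_secondary_pte(kvm, gfn);	/* placeholder */
	put_page(page);			/* balances restrictedmem_get_page() */
}

The elevated refcount then also keeps migration from moving the page
while the secondary mapping exists, and goes away together with it.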

Chao
>
> Chao, could you adjust it?
>
> --
> Kiryl Shutsemau / Kirill A. Shutemov
