Subject: Re: [RFC v2 PATCH 01/13] mm/shmem: Introduce F_SEAL_GUEST

On 11/19/21 14:47, Chao Peng wrote:
> +static void guest_invalidate_page(struct inode *inode,
> +				   struct page *page, pgoff_t start, pgoff_t end)
> +{
> +	struct shmem_inode_info *info = SHMEM_I(inode);
> +
> +	if (!info->guest_ops || !info->guest_ops->invalidate_page_range)
> +		return;
> +
> +	start = max(start, page->index);
> +	end = min(end, page->index + thp_nr_pages(page)) - 1;
> +
> +	info->guest_ops->invalidate_page_range(inode, info->guest_owner,
> +					       start, end);
> +}

The lack of protection against a concurrent unregister (that is, the
KVM instance going away while the callback runs) makes the API quite
awkward to use; the usual way to handle this is with
refcount_inc_not_zero (as done by kvm_get_kvm_safe).
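
For reference, kvm_get_kvm_safe boils down to (modulo the exact
signature):

/* Take a reference only if the VM is not already being destroyed. */
bool kvm_get_kvm_safe(struct kvm *kvm)
{
	return refcount_inc_not_zero(&kvm->users_count);
}

The sketch below relies on the same idea, just hidden behind
get_guest_owner/put_guest_owner callbacks.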

Can you use the shmem_inode_info spinlock to protect against this? If
register/unregister take the spinlock, the invalidate and fallocate can
take a reference under the same spinlock, like this:

if (!info->guest_ops)
	return;

spin_lock(&info->lock);
ops = info->guest_ops;
if (!ops) {
	spin_unlock(&info->lock);
	return;
}

/* Calls kvm_get_kvm_safe. */
r = ops->get_guest_owner(info->guest_owner);
spin_unlock(&info->lock);
if (r < 0)
	return;

start = max(start, page->index);
end = min(end, page->index + thp_nr_pages(page)) - 1;

ops->invalidate_page_range(inode, info->guest_owner,
			   start, end);
ops->put_guest_owner(info->guest_owner);
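
To complete the picture, register/unregister only need to publish and
clear the pointers under the same spinlock. A minimal sketch (function
names, argument types and error codes here are just placeholders, not
necessarily what the patch uses):

int shmem_register_guest(struct inode *inode, void *owner,
			 const struct guest_ops *ops)
{
	struct shmem_inode_info *info = SHMEM_I(inode);
	int ret = 0;

	spin_lock(&info->lock);
	if (info->guest_ops) {
		ret = -EBUSY;
	} else {
		info->guest_owner = owner;
		info->guest_ops = ops;
	}
	spin_unlock(&info->lock);
	return ret;
}

void shmem_unregister_guest(struct inode *inode)
{
	struct shmem_inode_info *info = SHMEM_I(inode);

	spin_lock(&info->lock);
	info->guest_ops = NULL;
	info->guest_owner = NULL;
	spin_unlock(&info->lock);
}

Once unregister has cleared the pointers under the lock, a concurrent
invalidate or fallocate either already took its reference (so the owner
stays alive until put_guest_owner) or observes NULL and backs off.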

Considering that you have to take a mutex anyway in patch 13, and that
the critical section here is very small, the extra indirect calls are
cheaper than walking the vm_list; and it makes the API clearer.
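
On the KVM side the two extra callbacks would then be trivial wrappers;
a sketch, assuming guest_owner is simply the struct kvm pointer (names
made up):

static int kvm_guest_get_owner(void *owner)
{
	/* Fails if the VM is already being destroyed. */
	return kvm_get_kvm_safe(owner) ? 0 : -ENOENT;
}

static void kvm_guest_put_owner(void *owner)
{
	kvm_put_kvm(owner);
}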

Paolo
