Subject: Re: [PATCH 6/8] hugetlb: add vma based lock for pmd sharing
From: Mike Kravetz <mike.kravetz@oracle.com>
Date: 29 Aug 2022
On 08/27/22 17:30, Miaohe Lin wrote:
> On 2022/8/25 1:57, Mike Kravetz wrote:
> > Allocate a rw semaphore and hang off vm_private_data for
> > synchronization use by vmas that could be involved in pmd sharing. Only
> > add infrastructure for the new lock here. Actual use will be added in
> > subsequent patch.
> >
> > Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
>
> <snip>
>
> > +static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
> > +{
> > + /*
> > + * Only present in sharable vmas. See comment in
> > + * __unmap_hugepage_range_final about the neeed to check both
>
> s/neeed/need/
>
> > + * VM_SHARED and VM_MAYSHARE in free path
>
> I think there might be a wrong check around this patch. As the above comment says, we
> need to check both flags, so should we do something like below instead?
>
> if ((vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) != (VM_MAYSHARE | VM_SHARED))
>
> > + */

Thanks. I will update.
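
For illustration, a minimal sketch of the free path with both flags
checked (based on your suggestion above, not the final patch) might be:

static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
{
	/* Only present in sharable vmas; both flags must be set. */
	if (!vma ||
	    (vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) !=
				(VM_MAYSHARE | VM_SHARED))
		return;

	/* kfree(NULL) is a no-op, so no NULL check is needed. */
	kfree(vma->vm_private_data);
	vma->vm_private_data = NULL;
}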

> > + if (!vma || !(vma->vm_flags & (VM_MAYSHARE | VM_SHARED)))
> > + return;
> > +
> > + if (vma->vm_private_data) {
> > + kfree(vma->vm_private_data);
> > + vma->vm_private_data = NULL;
> > + }
> > +}
> > +
> > +static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
> > +{
> > + struct rw_semaphore *vma_sema;
> > +
> > + /* Only establish in (flags) sharable vmas */
> > + if (!vma || !(vma->vm_flags & VM_MAYSHARE))
> > + return;
> > +
> > + /* Should never get here with non-NULL vm_private_data */
>
> We can get here with non-NULL vm_private_data when called from hugetlb_vm_op_open during fork?

Right!

In fork, we allocate a new semaphore in hugetlb_dup_vma_private and then,
shortly after, call hugetlb_vm_op_open.

It works as is, and I can update the comment. However, I wonder if we should
just clear vm_private_data in hugetlb_dup_vma_private and let hugetlb_vm_op_open
do the allocation.
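
A rough sketch of that alternative, ignoring whatever else
hugetlb_dup_vma_private needs to do with per-vma reservations, could be:

void hugetlb_dup_vma_private(struct vm_area_struct *vma)
{
	/*
	 * Do not inherit the parent's lock pointer at fork time;
	 * hugetlb_vm_op_open runs right after this and would then
	 * allocate a fresh semaphore via hugetlb_vma_lock_alloc
	 * for sharable vmas.
	 */
	vma->vm_private_data = NULL;
}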

>
> Also, there's one missing change in a comment:
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d0617d64d718..4bc844a1d312 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -863,7 +863,7 @@ __weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
>   * faults in a MAP_PRIVATE mapping. Only the process that called mmap()
>   * is guaranteed to have their future faults succeed.
>   *
> - * With the exception of reset_vma_resv_huge_pages() which is called at fork(),
> + * With the exception of hugetlb_dup_vma_private() which is called at fork(),
>   * the reserve counters are updated with the hugetlb_lock held. It is safe
>   * to reset the VMA at fork() time as it is not in use yet and there is no
>   * chance of the global counters getting corrupted as a result of the values.
>
>
> Otherwise this patch looks good to me. Thanks.

Will update. Thank you!
--
Mike Kravetz
