Subject: Re: [PATCH 1/2] hugetlbfs: extend hugetlb_vma_lock to private VMAs
On Fri, 2023-09-22 at 09:44 -0700, Mike Kravetz wrote:
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f906c5fa4d09..8f3d5895fffc 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -372,6 +372,11 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
>                 struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
>  
>                 __hugetlb_vma_unlock_write_put(vma_lock);
> +       } else if (__vma_private_lock(vma)) {
> +               struct resv_map *resv_map = vma_resv_map(vma);
> +
> +               /* no free for anon vmas, but still need to unlock */
> +               up_write(&resv_map->rw_sema);
>         }
>  }
>

Nice catch. I'll add that.
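
For reference, the lock side in this series is expected to take the
matching down_write() on resv_map->rw_sema for private VMAs. A rough
sketch of what hugetlb_vma_lock_write() could look like (just my
untested reading of the approach; the __vma_shareable_lock() /
__vma_private_lock() helpers and the rw_sema field in struct resv_map
are assumed from the series, not copied from the actual patch):

void hugetlb_vma_lock_write(struct vm_area_struct *vma)
{
	if (__vma_shareable_lock(vma)) {
		/* shared mappings: per-VMA lock hangs off vm_private_data */
		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;

		down_write(&vma_lock->rw_sema);
	} else if (__vma_private_lock(vma)) {
		/* private VMAs: the lock lives in the reserve map instead */
		struct resv_map *resv_map = vma_resv_map(vma);

		down_write(&resv_map->rw_sema);
	}
}

That symmetry is what makes the up_write() in the unlock/free path
above necessary: there is nothing to free for anon VMAs, but the
semaphore taken at lock time still has to be released.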

I was still trying to reproduce the bug here.

The libhugetlbfs code compiles once the offending bits are
commented out, but the misaligned_offset test wasn't causing
any trouble on my test VM here.

Given the potential negative impact of moving from a
per-VMA lock to a per-backing-address_space lock, I'll
keep the 3 patches separate, and in the order they are
in now.

Let me go spin and test v2.

--
All Rights Reversed.
