Subject: Re: [PATCH 7/8] hugetlb: create hugetlb_unmap_file_folio to unmap single file folio
On 2022/8/25 1:57, Mike Kravetz wrote:
> Create the new routine hugetlb_unmap_file_folio that will unmap a single
> file folio. This is refactored code from hugetlb_vmdelete_list. It is
> modified to do locking within the routine itself and check whether the
> page is mapped within a specific vma before unmapping.
>
> This refactoring will be put to use and expanded upon in a subsequent
> patch adding vma specific locking.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
> fs/hugetlbfs/inode.c | 123 +++++++++++++++++++++++++++++++++----------
> 1 file changed, 94 insertions(+), 29 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index e83fd31671b3..b93d131b0cb5 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -371,6 +371,94 @@ static void hugetlb_delete_from_page_cache(struct page *page)
> delete_from_page_cache(page);
> }
>
> +/*
> + * Called with i_mmap_rwsem held for inode based vma maps. This makes
> + * sure vma (and vm_mm) will not go away. We also hold the hugetlb fault
> + * mutex for the page in the mapping. So, we can not race with page being
> + * faulted into the vma.
> + */
> +static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
> + unsigned long addr, struct page *page)
> +{
> + pte_t *ptep, pte;
> +
> + ptep = huge_pte_offset(vma->vm_mm, addr,
> + huge_page_size(hstate_vma(vma)));
> +
> + if (!ptep)
> + return false;
> +
> + pte = huge_ptep_get(ptep);
> + if (huge_pte_none(pte) || !pte_present(pte))
> + return false;
> +
> + if (pte_page(pte) == page)
> + return true;

I'm wondering whether the pte entry could change after we check it, since
huge_pte_lock is not held here. But I think holding i_mmap_rwsem in write
mode should give us that guarantee, e.g. a migration entry is changed back
to a huge pte entry while i_mmap_rwsem is held in read mode.
Or am I missing something?
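
If the lockless check ever turns out to be insufficient, one option would
be to do the check under the page table lock instead. A minimal sketch of
what that could look like (hugetlb_vma_maps_page_locked is hypothetical
and not part of this patch, just to illustrate the idea):

static bool hugetlb_vma_maps_page_locked(struct vm_area_struct *vma,
					 unsigned long addr, struct page *page)
{
	struct hstate *h = hstate_vma(vma);
	spinlock_t *ptl;
	pte_t *ptep, pte;
	bool ret = false;

	ptep = huge_pte_offset(vma->vm_mm, addr, huge_page_size(h));
	if (!ptep)
		return false;

	/* Hold the page table lock so the entry cannot change under us. */
	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
	pte = huge_ptep_get(ptep);
	if (pte_present(pte) && pte_page(pte) == page)
		ret = true;
	spin_unlock(ptl);

	return ret;
}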

> +
> + return false;
> +}
> +
> +/*
> + * Can vma_offset_start/vma_offset_end overflow on 32-bit arches?
> + * No, because the interval tree returns us only those vmas
> + * which overlap the truncated area starting at pgoff,
> + * and no vma on a 32-bit arch can span beyond the 4GB.
> + */
> +static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
> +{
> + if (vma->vm_pgoff < start)
> + return (start - vma->vm_pgoff) << PAGE_SHIFT;
> + else
> + return 0;
> +}
> +
> +static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
> +{
> + unsigned long t_end;
> +
> + if (!end)
> + return vma->vm_end;
> +
> + t_end = ((end - vma->vm_pgoff) << PAGE_SHIFT) + vma->vm_start;
> + if (t_end > vma->vm_end)
> + t_end = vma->vm_end;
> + return t_end;
> +}
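
Just to double check my reading of the arithmetic here (assuming 4K base
pages): with vma->vm_pgoff == 100 and a truncate starting at pgoff 300,
vma_offset_start() returns (300 - 100) << 12, i.e. the byte offset of file
page 300 relative to vm_start. Note the asymmetry between the two helpers:
vma_offset_start() returns an offset relative to vm_start, while
vma_offset_end() adds vm_start back in and clamps to vm_end, so it returns
an absolute address. Might be worth a comment so callers don't mix the
two up.
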
> +
> +/*
> + * Called with hugetlb fault mutex held. Therefore, no more mappings to
> + * this folio can be created while executing the routine.
> + */
> +static void hugetlb_unmap_file_folio(struct hstate *h,
> + struct address_space *mapping,
> + struct folio *folio, pgoff_t index)
> +{
> + struct rb_root_cached *root = &mapping->i_mmap;
> + struct page *page = &folio->page;
> + struct vm_area_struct *vma;
> + unsigned long v_start;
> + unsigned long v_end;
> + pgoff_t start, end;
> +
> + start = index * pages_per_huge_page(h);
> + end = ((index + 1) * pages_per_huge_page(h));

It seems the outer parentheses are unneeded?
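
Also, to sanity check the index arithmetic: with a 2MB hstate (512 base
pages per huge page, assuming 4K base pages), index 3 gives start = 1536
and end = 2048, i.e. the folio covers file offsets [1536, 2048) in base
page units, which is presumably what the i_mmap interval tree lookup is
fed below.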

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks,
Miaohe Lin

