Subject: Re: [PATCH 07/12] hugetlb: perform vmemmap restoration on a list of pages


    On 2023/8/26 03:04, Mike Kravetz wrote:
    > When removing hugetlb pages from the pool, we first create a list
    > of removed pages and then free those pages back to low level allocators.
    > Part of the 'freeing process' is to restore vmemmap for all base pages
    > if necessary. Pass this list of pages to a new routine
    > hugetlb_vmemmap_restore_folios() so that vmemmap restoration can be
    > performed in bulk.
    >
    > Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
    > ---
    > mm/hugetlb.c | 3 +++
    > mm/hugetlb_vmemmap.c | 8 ++++++++
    > mm/hugetlb_vmemmap.h | 6 ++++++
    > 3 files changed, 17 insertions(+)
    >
    > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
    > index 3133dbd89696..1bde5e234d5c 100644
    > --- a/mm/hugetlb.c
    > +++ b/mm/hugetlb.c
    > @@ -1833,6 +1833,9 @@ static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
    > {
    > struct folio *folio, *t_folio;
    >
    > + /* First restore vmemmap for all pages on list. */
    > + hugetlb_vmemmap_restore_folios(h, list);
    > +
    > list_for_each_entry_safe(folio, t_folio, list, lru) {
    > update_and_free_hugetlb_folio(h, folio, false);
    > cond_resched();
    > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
    > index 147018a504a6..d5e6b6c76dce 100644
    > --- a/mm/hugetlb_vmemmap.c
    > +++ b/mm/hugetlb_vmemmap.c
    > @@ -479,6 +479,14 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
    > return ret;
    > }
    >

    Because it is a void function, I'd like to add a comment here, something like:

        This function only tries to restore the vmemmap pages of each folio
        on the list; it does not guarantee that the restoration will succeed
        after it returns.
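    For illustration, a rough sketch of how that might read above the new
    routine (the body is just what this patch adds; the comment wording is
    only a suggestion):

        /*
         * This function only tries to restore the vmemmap pages of each
         * folio on the list; it does not guarantee that the restoration
         * will succeed after it returns.
         */
        void hugetlb_vmemmap_restore_folios(const struct hstate *h,
                                            struct list_head *folio_list)
        {
                struct folio *folio;

                list_for_each_entry(folio, folio_list, lru)
                        hugetlb_vmemmap_restore(h, &folio->page);
        }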

    Thanks.

    > +void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
    > +{
    > + struct folio *folio;
    > +
    > + list_for_each_entry(folio, folio_list, lru)
    > + hugetlb_vmemmap_restore(h, &folio->page);
    > +}
    > +
    > /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
    > static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
    > {
    > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
    > index 036494e040ca..b7074672ceb2 100644
    > --- a/mm/hugetlb_vmemmap.h
    > +++ b/mm/hugetlb_vmemmap.h
    > @@ -12,6 +12,7 @@
    >
    > #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
    > int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
    > +void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list);
    > void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
    > void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
    >
    > @@ -44,6 +45,10 @@ static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
    > return 0;
    > }
    >
    > +static inline void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
    > +{
    > +}
    > +
    > static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
    > {
    > }
