 
    From: James Houghton <jthoughton@google.com>
    Date: Tue, 18 Jul 2023
    Subject: Re: [PATCH v2 1/2] hugetlb: Do not clear hugetlb dtor until allocating vmemmap
    On Mon, Jul 17, 2023 at 5:50 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
    >
    > Freeing a hugetlb page and releasing base pages back to the underlying
    > allocator such as buddy or cma is performed in two steps:
    > - remove_hugetlb_folio() is called to remove the folio from hugetlb
    >   lists, get a ref on the page and remove the hugetlb destructor. This
    >   all must be done under the hugetlb lock. After this call, the page
    >   can be treated as a normal compound page or a collection of base
    >   size pages.
    > - update_and_free_hugetlb_folio() is called to allocate vmemmap if
    >   needed, and the free routine of the underlying allocator is called
    >   on the resulting page. We cannot hold the hugetlb lock here.
    >
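    For context, the sequence being described is roughly this on the
    caller side (my simplified sketch, not the exact upstream code):

        spin_lock_irq(&hugetlb_lock);
        /* Step 1: unlink from hugetlb lists; requires hugetlb_lock. */
        remove_hugetlb_folio(h, folio, false);
        spin_unlock_irq(&hugetlb_lock);

        /* Step 2: may allocate vmemmap, so the lock must be dropped. */
        update_and_free_hugetlb_folio(h, folio, true);
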
    > One issue with this scheme is that a memory error could occur between
    > these two steps. In this case, the memory error handling code treats
    > the old hugetlb page as a normal compound page or collection of base
    > pages. It will then try to SetPageHWPoison(page) on the page with an
    > error. If the page with the error is a tail page without vmemmap, a
    > write error will occur when trying to set the flag.
    >
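    To spell out the race window described above (my reading of the
    commit message):

        CPU 0 (freeing path)             CPU 1 (memory error handling)
        ----------------------------     ------------------------------
        remove_hugetlb_folio()
          /* dtor cleared; folio now
             looks like an ordinary
             compound page */
                                         memory_failure()
                                           SetPageHWPoison() on a tail
                                           page whose struct page has
                                           no vmemmap => write fault
        update_and_free_hugetlb_folio()
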
    > Address this issue by modifying remove_hugetlb_folio() and
    > update_and_free_hugetlb_folio() such that the hugetlb destructor is not
    > cleared until after allocating vmemmap. Since clearing the destructor
    > requires holding the hugetlb lock, the clearing is done in
    > remove_hugetlb_folio() if the vmemmap is present. This saves a
    > lock/unlock cycle. Otherwise, the destructor is cleared in
    > update_and_free_hugetlb_folio() after allocating vmemmap.
    >
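    If I'm reading the diff below correctly, the new placement of the
    dtor clearing is (condensed sketch of the two call sites):

        __remove_hugetlb_folio():          /* hugetlb_lock already held */
        	if (!folio_test_hugetlb_vmemmap_optimized(folio))
        		__clear_hugetlb_destructor(h, folio);

        __update_and_free_hugetlb_folio(): /* after vmemmap is restored */
        	if (folio_test_hugetlb(folio)) {
        		spin_lock_irq(&hugetlb_lock);
        		__clear_hugetlb_destructor(h, folio);
        		spin_unlock_irq(&hugetlb_lock);
        	}
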
    > Note that this will leave hugetlb pages in a state where they are marked
    > free (by a hugetlb-specific page flag) and have a ref count. This is not
    > a normal state. The only code that would notice is the memory error
    > code, and it is set up to retry in such a case.
    >
    > A subsequent patch will create a routine to do bulk processing of
    > vmemmap allocation. This will eliminate a lock/unlock cycle for each
    > hugetlb page in the case where we are freeing a large number of pages.
    >
    > Fixes: ad2fa3717b74 ("mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page")
    > Cc: <stable@vger.kernel.org>
    > Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
    > ---
    >  mm/hugetlb.c | 90 ++++++++++++++++++++++++++++++++++++++--------------
    >  1 file changed, 66 insertions(+), 24 deletions(-)
    >
    > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
    > index 64a3239b6407..4a910121a647 100644
    > --- a/mm/hugetlb.c
    > +++ b/mm/hugetlb.c
    > @@ -1579,9 +1579,37 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
    >  						unsigned int order) { }
    >  #endif
    >
    > +static inline void __clear_hugetlb_destructor(struct hstate *h,
    > +						struct folio *folio)
    > +{
    > +	lockdep_assert_held(&hugetlb_lock);
    > +
    > +	/*
    > +	 * Very subtle
    > +	 *
    > +	 * For non-gigantic pages set the destructor to the normal compound
    > +	 * page dtor. This is needed in case someone takes an additional
    > +	 * temporary ref to the page, and freeing is delayed until they drop
    > +	 * their reference.
    > +	 *
    > +	 * For gigantic pages set the destructor to the null dtor. This
    > +	 * destructor will never be called. Before freeing the gigantic
    > +	 * page destroy_compound_gigantic_folio will turn the folio into a
    > +	 * simple group of pages. After this the destructor does not
    > +	 * apply.
    > +	 *
    > +	 */

    Is it correct and useful to add a
    WARN_ON_ONCE(folio_test_hugetlb_vmemmap_optimized(folio)) here?
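
    Something like this is what I have in mind (just a sketch on top of
    this patch, not tested):

        static inline void __clear_hugetlb_destructor(struct hstate *h,
        					      struct folio *folio)
        {
        	lockdep_assert_held(&hugetlb_lock);

        	/* The vmemmap should always be present by this point. */
        	WARN_ON_ONCE(folio_test_hugetlb_vmemmap_optimized(folio));

        	if (hstate_is_gigantic(h))
        		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
        	else
        		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
        }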

    Feel free to add:

    Acked-by: James Houghton <jthoughton@google.com>

    > +	if (hstate_is_gigantic(h))
    > +		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
    > +	else
    > +		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
    > +}
    > +
    >  /*
    > - * Remove hugetlb folio from lists, and update dtor so that the folio appears
    > - * as just a compound page.
    > + * Remove hugetlb folio from lists.
    > + * If vmemmap exists for the folio, update dtor so that the folio appears
    > + * as just a compound page. Otherwise, wait until after allocating vmemmap
    > + * to update dtor.
    >   *
    >   * A reference is held on the folio, except in the case of demote.
    >   *
    > @@ -1612,31 +1640,19 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
    >  	}
    >
    >  	/*
    > -	 * Very subtle
    > -	 *
    > -	 * For non-gigantic pages set the destructor to the normal compound
    > -	 * page dtor. This is needed in case someone takes an additional
    > -	 * temporary ref to the page, and freeing is delayed until they drop
    > -	 * their reference.
    > -	 *
    > -	 * For gigantic pages set the destructor to the null dtor. This
    > -	 * destructor will never be called. Before freeing the gigantic
    > -	 * page destroy_compound_gigantic_folio will turn the folio into a
    > -	 * simple group of pages. After this the destructor does not
    > -	 * apply.
    > -	 *
    > -	 * This handles the case where more than one ref is held when and
    > -	 * after update_and_free_hugetlb_folio is called.
    > -	 *
    > -	 * In the case of demote we do not ref count the page as it will soon
    > -	 * be turned into a page of smaller size.
    > +	 * We can only clear the hugetlb destructor after allocating vmemmap
    > +	 * pages. Otherwise, someone (memory error handling) may try to write
    > +	 * to tail struct pages.
    > +	 */
    > +	if (!folio_test_hugetlb_vmemmap_optimized(folio))
    > +		__clear_hugetlb_destructor(h, folio);
    > +
    > +	/*
    > +	 * In the case of demote we do not ref count the page as it will soon
    > +	 * be turned into a page of smaller size.
    >  	 */
    >  	if (!demote)
    >  		folio_ref_unfreeze(folio, 1);
    > -	if (hstate_is_gigantic(h))
    > -		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
    > -	else
    > -		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
    >
    >  	h->nr_huge_pages--;
    >  	h->nr_huge_pages_node[nid]--;
    > @@ -1728,6 +1744,19 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
    >  		return;
    >  	}
    >
    > +	/*
    > +	 * If needed, clear hugetlb destructor under the hugetlb lock.
    > +	 * This must be done AFTER allocating vmemmap pages in case there is an
    > +	 * attempt to write to tail struct pages as in memory poison.
    > +	 * It must be done BEFORE PageHWPoison handling so that any subsequent
    > +	 * memory errors poison individual pages instead of head.
    > +	 */
    > +	if (folio_test_hugetlb(folio)) {
    > +		spin_lock_irq(&hugetlb_lock);
    > +		__clear_hugetlb_destructor(h, folio);
    > +		spin_unlock_irq(&hugetlb_lock);
    > +	}
    > +
    >  	/*
    >  	 * Move PageHWPoison flag from head page to the raw error pages,
    >  	 * which makes any healthy subpages reusable.
    > @@ -3604,6 +3633,19 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
    >  		return rc;
    >  	}
    >
    > +	/*
    > +	 * The hugetlb destructor could still be set for this folio if vmemmap
    > +	 * was actually allocated above. The ref count on all pages is 0.
    > +	 * Therefore, nobody should attempt access. However, before destroying
    > +	 * compound page below, clear the destructor. Unfortunately, this
    > +	 * requires a lock/unlock cycle.
    > +	 */
    > +	if (folio_test_hugetlb(folio)) {
    > +		spin_lock_irq(&hugetlb_lock);
    > +		__clear_hugetlb_destructor(h, folio);
    > +		spin_unlock_irq(&hugetlb_lock);
    > +	}
    > +
    >  	/*
    >  	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
    >  	 * sizes as it will not ref count folios.
    > --
    > 2.41.0
    >
