Subject: Re: [PATCH] mm: hugetlb: fix hugetlb allocation failure when handling freed or in-use hugetlb

On 2/5/2024 5:31 PM, Michal Hocko wrote:
> On Mon 05-02-24 11:54:17, Baolin Wang wrote:
>> When handling a freed or an in-use hugetlb, we should ignore the failure of
>> alloc_buddy_hugetlb_folio() so that the old hugetlb can still be dissolved
>> successfully, since the newly allocated hugetlb is not used in these two cases.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/hugetlb.c | 18 ++++++++++++------
>>  1 file changed, 12 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 9d996fe4ecd9..212ab331d355 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3042,9 +3042,8 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>>  	 * under the lock.
>>  	 */
>>  	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
>> -	if (!new_folio)
>> -		return -ENOMEM;
>> -	__prep_new_hugetlb_folio(h, new_folio);
>> +	if (new_folio)
>> +		__prep_new_hugetlb_folio(h, new_folio);
>
> Is there any reason why you haven't moved the allocation to the only
> branch that actually needs it? I know that we hold the hugetlb lock, but you

Nope, just did a simple patch to ignore the allocation failure.

> could have easily dropped the lock, allocated a page and then jumped back via
> goto retry. This would actually save an allocation.

Yes, will do. Thanks.

> Something like this:
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ed1581b670d4..db5f72b94422 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3029,21 +3029,9 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  {
>  	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
>  	int nid = folio_nid(old_folio);
> -	struct folio *new_folio;
> +	struct folio *new_folio = NULL;
>  	int ret = 0;
> 
> -	/*
> -	 * Before dissolving the folio, we need to allocate a new one for the
> -	 * pool to remain stable. Here, we allocate the folio and 'prep' it
> -	 * by doing everything but actually updating counters and adding to
> -	 * the pool. This simplifies and let us do most of the processing
> -	 * under the lock.
> -	 */
> -	new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
> -	if (!new_folio)
> -		return -ENOMEM;
> -	__prep_new_hugetlb_folio(h, new_folio);
> -
>  retry:
>  	spin_lock_irq(&hugetlb_lock);
>  	if (!folio_test_hugetlb(old_folio)) {
> @@ -3073,6 +3061,15 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>  			cond_resched();
>  			goto retry;
>  		} else {
> +
> +			if (!new_folio) {
> +				spin_unlock_irq(&hugetlb_lock);
> +				new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL);
> +				if (!new_folio)
> +					return -ENOMEM;
> +				__prep_new_hugetlb_folio(h, new_folio);
> +				goto retry;
> +			}
>  			/*
>  			 * Ok, old_folio is still a genuine free hugepage. Remove it from
>  			 * the freelist and decrease the counters. These will be
> @@ -3100,9 +3097,11 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
> 
>  free_new:
>  	spin_unlock_irq(&hugetlb_lock);
> -	/* Folio has a zero ref count, but needs a ref to be freed */
> -	folio_ref_unfreeze(new_folio, 1);
> -	update_and_free_hugetlb_folio(h, new_folio, false);
> +	if (new_folio) {
> +		/* Folio has a zero ref count, but needs a ref to be freed */
> +		folio_ref_unfreeze(new_folio, 1);
> +		update_and_free_hugetlb_folio(h, new_folio, false);
> +	}
> 
>  	return ret;
>  }
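
A minimal, self-contained userspace sketch of the pattern suggested above
(allocate the replacement only on the branch that actually needs it, dropping
the lock around the allocation and retrying). A pthread mutex and malloc()
stand in for hugetlb_lock and alloc_buddy_hugetlb_folio(); replace_item() and
the two helper predicates are illustrative names only, not from mm/hugetlb.c:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct item { int data; };

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder checks on the pool state, evaluated under pool_lock. */
static bool item_already_gone(struct item *old) { (void)old; return false; }
static bool item_still_in_use(struct item *old) { (void)old; return false; }

static int replace_item(struct item *old, struct item **slot)
{
	struct item *new_item = NULL;
	int ret = 0;

retry:
	pthread_mutex_lock(&pool_lock);
	if (item_already_gone(old) || item_still_in_use(old)) {
		/* Nothing to replace; any allocation would have been wasted. */
		goto unlock;
	}

	if (!new_item) {
		/*
		 * Only now is a replacement needed: drop the lock, allocate,
		 * and re-check the pool state from scratch.
		 */
		pthread_mutex_unlock(&pool_lock);
		new_item = malloc(sizeof(*new_item));
		if (!new_item)
			return -ENOMEM;
		new_item->data = 0;
		goto retry;
	}

	/* The old item is still there and free: swap in the replacement. */
	*slot = new_item;
	new_item = NULL;
unlock:
	pthread_mutex_unlock(&pool_lock);
	free(new_item);	/* No-op if the replacement was used or never allocated. */
	return ret;
}

int main(void)
{
	struct item old = { .data = 1 };
	struct item *slot = &old;

	return replace_item(&old, &slot) ? EXIT_FAILURE : EXIT_SUCCESS;
}

With the allocation deferred like this, an allocation failure can only be hit
on the path that genuinely needs the replacement, which is what makes the
retry cheaper than allocating up front.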
