Subject: Re: [PATCH 2/8] migrate_pages: separate hugetlb folios migration

"Huang, Ying" <ying.huang@intel.com> writes:

> [snip]
>
>>
>>>>> @@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>> nr_retry_pages = 0;
>>>>>
>>>>> list_for_each_entry_safe(folio, folio2, from, lru) {
>>>>> + if (folio_test_hugetlb(folio)) {
>>>>
>>>> How do we hit this case? Shouldn't migrate_hugetlbs() have already moved
>>>> any hugetlb folios off the from list?
>>>
>>> Retried hugetlb folios will be kept in the from list.
>>
>> Couldn't migrate_hugetlbs() remove the failing retried pages from the
>> list on the final pass? That seems cleaner to me.
>
> To do that, we need to go through the folio list again to remove all
> hugetlb pages. It could be time-consuming in some cases. So I think
> that it's better to keep this.

Why? Couldn't we test pass == 9 in migrate_hugetlbs() and remove the
folio from the list when it fails the final retry? In any case, if it's
still on the list due to failed retries we have already passed over it
10 times, so the extra loop hardly seems like a problem.
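
Something along these lines in the -EAGAIN handling of
migrate_hugetlbs(), i.e. give up on the last pass rather than leaving
the folio behind (untested sketch only; I'm assuming the retry loop
runs for 10 passes and guessing at the stats/counter names, which may
not match what the series actually uses):

		case -EAGAIN:
			if (pass == 9) {
				/*
				 * Final retry failed: count it as failed and
				 * hand the folio back via ret_folios so no
				 * hugetlb folio is ever left on the from list
				 * for migrate_pages() to skip over.
				 */
				nr_failed++;
				stats->nr_failed_pages += nr_pages;
				list_move_tail(&folio->lru, ret_folios);
				break;
			}
			retry++;
			nr_retry_pages += nr_pages;
			break;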

- Alistair

> Best Regards,
> Huang, Ying
>
>>>>> + list_move_tail(&folio->lru, &ret_folios);
>>>>> + continue;
>>>>> + }
>>>>> +
>>>>> /*
>>>>> * Large folio statistics is based on the source large
>>>>> * folio. Capture required information that might get
>>>>> * lost during migration.
>>>>> */
>>>>> - is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>>>>> + is_large = folio_test_large(folio);
>>>>> is_thp = is_large && folio_test_pmd_mappable(folio);
>>>>> nr_pages = folio_nr_pages(folio);
>>>>> +
>>>>> cond_resched();
>>>>>
>>>>> - if (folio_test_hugetlb(folio))
>>>>> - rc = unmap_and_move_huge_page(get_new_page,
>>>>> - put_new_page, private,
>>>>> - &folio->page, pass > 2, mode,
>>>>> - reason,
>>>>> - &ret_folios);
>>>>> - else
>>>>> - rc = unmap_and_move(get_new_page, put_new_page,
>>>>> - private, folio, pass > 2, mode,
>>>>> - reason, &ret_folios);
>>>>> + rc = unmap_and_move(get_new_page, put_new_page,
>>>>> + private, folio, pass > 2, mode,
>>>>> + reason, &ret_folios);
>>>>> /*
>>>>> * The rules are:
>>>>> - * Success: non hugetlb folio will be freed, hugetlb
>>>>> - * folio will be put back
>>>>> + * Success: folio will be freed
>>>>> * -EAGAIN: stay on the from list
>>>>> * -ENOMEM: stay on the from list
>>>>> * -ENOSYS: stay on the from list
>>>>> @@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>> stats.nr_thp_split += is_thp;
>>>>> break;
>>>>> }
>>>>> - /* Hugetlb migration is unsupported */
>>>>> } else if (!no_split_folio_counting) {
>>>>> nr_failed++;
>>>>> }
