Subject: Re: [PATCH 2/2] mm: migrate: Try again if THP split is failed due to page refcnt
From: Baolin Wang <baolin.wang@linux.alibaba.com>

On 10/21/2022 3:21 AM, Yang Shi wrote:
> On Thu, Oct 20, 2022 at 2:33 AM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>>
>>
>> On 10/20/2022 4:24 PM, Huang, Ying wrote:
>>> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>>>
>>>> When creating a virtual machine, we use memfd_create() to get a file
>>>> descriptor that can be used to create shared memory mappings with
>>>> mmap(), and the mmap() call sets the MAP_POPULATE flag to allocate
>>>> physical pages for the virtual machine up front.
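
(For illustration, roughly what that allocation path looks like from user
space; this is a sketch, not QEMU's actual code, and the buffer name and
error handling are made up:)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stddef.h>

/* Illustrative sketch only: back guest RAM with a memfd and prefault it
 * with MAP_POPULATE, which is where CMA pages can end up in the mapping.
 */
static void *alloc_guest_ram(size_t size)
{
        int fd = memfd_create("guest-ram", 0);
        void *mem;

        if (fd < 0)
                return NULL;
        if (ftruncate(fd, size) < 0) {
                close(fd);
                return NULL;
        }
        mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_POPULATE, fd, 0);
        close(fd);              /* the mapping keeps the memfd alive */
        return mem == MAP_FAILED ? NULL : mem;
}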
>>>>
>>>> When allocating physical pages for the guest, the host can fall back
>>>> to CMA pages when over half of the zone's free memory is in the CMA
>>>> area.
>>>>
>>>> In the guest OS, when an application wants to do DMA transfers, our
>>>> QEMU calls the VFIO_IOMMU_MAP_DMA ioctl to longterm-pin the pages and
>>>> create IOMMU mappings for them. However, we found that this longterm
>>>> pinning sometimes fails.
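
(Again for illustration, a minimal sketch of that pinning step from user
space; the function name and parameters are assumptions, only the
vfio_iommu_type1_dma_map ioctl usage is the real interface:)

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Illustrative sketch: 'container' is an open VFIO container fd, and
 * vaddr/iova/size describe the guest memory to pin and map. The kernel
 * longterm-pins the backing pages here; CMA pages must first be migrated
 * out, and that is the migration that fails in the report above.
 */
static int vfio_map_dma(int container, void *vaddr, uint64_t iova,
                        uint64_t size)
{
        struct vfio_iommu_type1_dma_map map;

        memset(&map, 0, sizeof(map));
        map.argsz = sizeof(map);
        map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
        map.vaddr = (uintptr_t)vaddr;
        map.iova  = iova;
        map.size  = size;

        return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}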
>>>>
>>>> After some investigation, we found that the pages used for the DMA
>>>> mapping can include CMA pages, and these CMA pages can make the
>>>> longterm pin fail because they could not be migrated out of the CMA
>>>> area. The migration failure may be caused by a temporary reference
>>>> count or by a memory allocation failure. As a result, the
>>>> VFIO_IOMMU_MAP_DMA ioctl returns an error and the application fails
>>>> to start.
>>>>
>>>> In one migration failure case I observed (which is not easy to
>>>> reproduce), the 'thp_migration_fail' count was 1 and the
>>>> 'thp_split_page_failed' count was also 1.
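
(Those counters come from /proc/vmstat; a quick, hedged sketch of sampling
them around a failing run, nothing QEMU-specific:)

#include <stdio.h>
#include <string.h>

/* Sketch: dump the two THP counters mentioned above from /proc/vmstat,
 * e.g. before and after a failing VFIO_IOMMU_MAP_DMA call.
 */
int main(void)
{
        FILE *f = fopen("/proc/vmstat", "r");
        char line[128];

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, "thp_migration_fail", 18) ||
                    !strncmp(line, "thp_split_page_failed", 21))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}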
>>>>
>>>> That means that, when migrating a THP located in the CMA area, a new
>>>> THP could not be allocated due to memory fragmentation, so the THP
>>>> was split instead. However, the THP split also failed, probably
>>>> because of a temporary reference count on the THP. The temporary
>>>> reference count can be caused by dropping page caches (I observed a
>>>> drop-caches operation on the system), but the shmem page cache pages
>>>> could not be dropped because they were already dirty at that time.
>>>>
>>>> Especially for a THP split failure caused by a temporary reference
>>>> count, we can try again to mitigate the migration failure in this
>>>> case, as discussed previously [1].
>>>
>>> Does the patch solve your problem?
>>
>> The problem is not easy to reproduce, and I will test this patch on our
>> products. However, I think this is a likely way for the migration to
>> fail, and it needs to be addressed to mitigate the failure.
>
> You may try to trace all migrations across your fleet (or just pick
> some sample machines, this should make data analysis easier) and
> filter the migration by reasons, for example, MR_LONGTERM_PIN, then
> compare the migration success rate before and after the patch. It
> should be a good justification. But it may need some work on data
> aggregation, processing and analysis; not sure how feasible it is.

IMO, migration for MR_LONGTERM_PIN is very rare in this case, so we can
observe the longterm-pin migration failure directly: once it is observed,
the application will have been aborted. However, like I said before, the
problem is not easy to reproduce :(

Anyway, we'll test these two patches on our products.
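
(A rough sketch of what the tracing suggested above could look like, using
the migrate:mm_migrate_pages tracepoint and a filter on its raw 'reason'
field. The tracefs path and the numeric value assumed for MR_LONGTERM_PIN
should be checked against the running kernel:)

#include <stdio.h>

static int write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        fclose(f);
        return 0;
}

int main(void)
{
        const char *ev = "/sys/kernel/tracing/events/migrate/mm_migrate_pages";
        char path[256];

        /* MR_LONGTERM_PIN is assumed to be 7 here; check enum
         * migrate_reason for the kernel actually being traced.
         */
        snprintf(path, sizeof(path), "%s/filter", ev);
        if (write_str(path, "reason == 7\n"))
                return 1;
        snprintf(path, sizeof(path), "%s/enable", ev);
        if (write_str(path, "1\n"))
                return 1;

        /* Matching events (with their per-call success/failure counts) can
         * then be read from /sys/kernel/tracing/trace_pipe.
         */
        return 0;
}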

>>>> [1] https://lore.kernel.org/all/470dc638-a300-f261-94b4-e27250e42f96@redhat.com/
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> ---
>>>>   mm/huge_memory.c |  4 ++--
>>>>   mm/migrate.c     | 18 +++++++++++++++---
>>>>   2 files changed, 17 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index ad17c8d..a79f03b 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -2666,7 +2666,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>>>>           * split PMDs
>>>>           */
>>>>          if (!can_split_folio(folio, &extra_pins)) {
>>>> -                ret = -EBUSY;
>>>> +                ret = -EAGAIN;
>>>>                  goto out_unlock;
>>>>          }
>>>>
>>>> @@ -2716,7 +2716,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>>>>                  xas_unlock(&xas);
>>>>                  local_irq_enable();
>>>>                  remap_page(folio, folio_nr_pages(folio));
>>>> -                ret = -EBUSY;
>>>> +                ret = -EAGAIN;
>>>>          }
>>>>
>>>> out_unlock:
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 8e5eb6e..55c7855 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -1506,9 +1506,21 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>                          if (is_thp) {
>>>>                                  nr_thp_failed++;
>>>>                                  /* THP NUMA faulting doesn't split THP to retry. */
>>>> -                                if (!nosplit && !try_split_thp(page, &thp_split_pages)) {
>>>> -                                        nr_thp_split++;
>>>> -                                        break;
>>>> +                                if (!nosplit) {
>>>> +                                        rc = try_split_thp(page, &thp_split_pages);
>>>> +                                        if (!rc) {
>>>> +                                                nr_thp_split++;
>>>> +                                                break;
>>>> +                                        } else if (reason == MR_LONGTERM_PIN &&
>>>> +                                                   rc == -EAGAIN) {
>>>
>>> In case reason != MR_LONGTERM_PIN, you change the return value of
>>> migrate_pages(). So you need to use another variable for the return
>>> value.
>>
>> Good catch, will fix in next version. Thanks for your comments.
