Subject: Re: [PATCH v8 7/8] hugetlb: batch TLB flushes when freeing vmemmap

Hi, Mike,

On 10/18/2023 7:31 PM, Mike Kravetz wrote:
> From: Joao Martins <joao.m.martins@oracle.com>
>
> Now that a list of pages is deduplicated at once, the TLB
> flush can be batched for all vmemmap pages that got remapped.
>
[..]

>
> @@ -719,19 +737,28 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>
> list_for_each_entry(folio, folio_list, lru) {
> int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
> - &vmemmap_pages);
> + &vmemmap_pages,
> + VMEMMAP_REMAP_NO_TLB_FLUSH);
>
> /*
> * Pages to be freed may have been accumulated. If we
> * encounter an ENOMEM, free what we have and try again.
> + * This can occur in the case that both splitting fails
> + * halfway and head page allocation also failed. In this
> + * case __hugetlb_vmemmap_optimize() would free memory
> + * allowing more vmemmap remaps to occur.
> */
> if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
> + flush_tlb_all();
> free_vmemmap_page_list(&vmemmap_pages);
> INIT_LIST_HEAD(&vmemmap_pages);
> - __hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
> + __hugetlb_vmemmap_optimize(h, &folio->page,
> + &vmemmap_pages,
> + VMEMMAP_REMAP_NO_TLB_FLUSH);
> }
> }
>
> + flush_tlb_all();

It seems that if folio_list is empty, we would still pay for a TLB flush here.
Perhaps it's worth checking for an empty list up front and returning early?
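Something along these lines is what I had in mind (just a rough sketch reusing
the function and helper names from your hunk above, not a tested patch):

	void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
	{
		struct folio *folio;
		LIST_HEAD(vmemmap_pages);

		/* Nothing to remap, so skip the unconditional flush_tlb_all(). */
		if (list_empty(folio_list))
			return;

		list_for_each_entry(folio, folio_list, lru) {
			/* ... existing per-folio optimize logic ... */
		}

		flush_tlb_all();
		free_vmemmap_page_list(&vmemmap_pages);
	}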

thanks,
-jane

> free_vmemmap_page_list(&vmemmap_pages);
> }
>
