Subject: Re: [PATCH v2 09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
On 12.02.24 11:56, David Hildenbrand wrote:
> On 12.02.24 11:32, Ryan Roberts wrote:
>> On 12/02/2024 10:11, David Hildenbrand wrote:
>>> Hi Ryan,
>>>
>>>>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>>>>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
>>>>>   {
>>>>> -    struct mmu_gather_batch *batch;
>>>>> -
>>>>> -    for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
>>>>> -        struct encoded_page **pages = batch->encoded_pages;
>>>>> +    struct encoded_page **pages = batch->encoded_pages;
>>>>> +    unsigned int nr, nr_pages;
>>>>>
>>>>> +    /*
>>>>> +     * We might end up freeing a lot of pages. Reschedule on a regular
>>>>> +     * basis to avoid soft lockups in configurations without full
>>>>> +     * preemption enabled. The magic number of 512 folios seems to work.
>>>>> +     */
>>>>> +    if (!page_poisoning_enabled_static() && !want_init_on_free()) {
>>>>
>>>> Is the performance win really worth 2 separate implementations keyed off this?
>>>> It seems a bit fragile, in case any other operations that are proportional to
>>>> size get added to the free path in future. Why not just always do the
>>>> conservative version?
>>>
>>> I really don't want to iterate over all entries in the "sane" common case. We
>>> already do that twice:
>>>
>>> a) free_pages_and_swap_cache()
>>>
>>> b) release_pages()
>>>
>>> Only the latter really is required, and I'm planning on removing the one in (a)
>>> to move it into (b) as well.
>>>
>>> So I keep it separate to confine any unnecessary overhead to the setups that
>>> are already terribly slow.
>>>
>>> No need to iterate a page full of entries if it can be easily avoided.
>>> Especially, no need to degrade the common order-0 case.
>>
>> Yeah, I understand all that. But given this is all coming from an array (so
>> easy to prefetch?) and will presumably all fit in the cache for the common case,
>> at least, so it's hot for (a) and (b), does separating this out really make a
>> measurable performance difference? If yes, then absolutely this optimization
>> makes sense. But if not, I think it's a bit questionable.
>
> I primarily added it because
>
> (a) we learned that each cycle counts during mmap() just like it does
> during fork().
>
> (b) Linus was similarly concerned about optimizing out another batching
> walk in c47454823bd4 ("mm: mmu_gather: allow more than one batch of
> delayed rmaps"):
>
> "it needs to walk that array of pages while still holding the page table
> lock, and our mmu_gather infrastructure allows for batching quite a lot
> of pages. We may have thousands of pages queued up for freeing, and we
> wanted to walk only the last batch if we then added a dirty page to the
> queue."
>
> So if it matters enough for reducing the time we hold the page table
> lock, it surely adds "some" overhead in general.
>
>
>>
>> You're the boss though, so if your experience tells you this is necessary, then
>> I'm ok with that.
>
> I did not do any measurements myself; I just went with my intuition as
> described above. After all, it's all pretty straightforward (keeping the
> existing logic, we need a new one either way) and not that much code.
> 
> So unless there are strong opinions, I'd just leave the common case as
> it was, and let the odd case be special.

I think we can just reduce the code duplication easily:

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index d175c0f1e2c8..99b3e9408aa0 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,18 +91,21 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 }
 #endif
 
-static void tlb_batch_pages_flush(struct mmu_gather *tlb)
-{
-	struct mmu_gather_batch *batch;
+/*
+ * We might end up freeing a lot of pages. Reschedule on a regular
+ * basis to avoid soft lockups in configurations without full
+ * preemption enabled. The magic number of 512 folios seems to work.
+ */
+#define MAX_NR_FOLIOS_PER_FREE	512
 
-	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct encoded_page **pages = batch->encoded_pages;
+static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
+{
+	struct encoded_page **pages = batch->encoded_pages;
+	unsigned int nr, nr_pages;
 
-		while (batch->nr) {
-			/*
-			 * limit free batch count when PAGE_SIZE > 4K
-			 */
-			unsigned int nr = min(512U, batch->nr);
+	while (batch->nr) {
+		if (!page_poisoning_enabled_static() && !want_init_on_free()) {
+			nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr);
 
 			/*
 			 * Make sure we cover page + nr_pages, and don't leave
@@ -111,14 +114,39 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 			if (unlikely(encoded_page_flags(pages[nr - 1]) &
 				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
 				nr++;
+		} else {
+			/*
+			 * With page poisoning and init_on_free, the time it
+			 * takes to free memory grows proportionally with the
+			 * actual memory size. Therefore, limit based on the
+			 * actual memory size and not the number of involved
+			 * folios.
+			 */
+			for (nr = 0, nr_pages = 0;
+			     nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE;
+			     nr++) {
+				if (unlikely(encoded_page_flags(pages[nr]) &
+					     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+					nr_pages += encoded_nr_pages(pages[++nr]);
+				else
+					nr_pages++;
+			}
+		}
 
-			free_pages_and_swap_cache(pages, nr);
-			pages += nr;
-			batch->nr -= nr;
+		free_pages_and_swap_cache(pages, nr);
+		pages += nr;
+		batch->nr -= nr;
 
-			cond_resched();
-		}
+		cond_resched();
 	}
+}
+
+static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
+
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
+		__tlb_batch_free_encoded_pages(batch);
 	tlb->active = &tlb->local;
 }

--
2.43.0
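
Purely as an illustration of why the else branch limits by pages rather than
folios (an untested user-space toy, not kernel code; the "marker entry" below
is a simplified stand-in for ENCODED_PAGE_BIT_NR_PAGES_NEXT /
encoded_nr_pages()): with 2 MiB folios, a 512-entry limit can still cover
hundreds of MiB that poisoning/init_on_free has to touch between
cond_resched() calls, while the page-based limit keeps each chunk in the
vicinity of MAX_NR_FOLIOS_PER_FREE pages.

/*
 * Toy model (user space) of the two batching limits. Each large folio is
 * encoded as a pair of entries: a marker entry followed by an entry carrying
 * the page count, loosely mimicking how the kernel chains two array slots.
 */
#include <stdio.h>

#define MAX_NR_FOLIOS_PER_FREE 512

struct toy_entry {
	int has_nr_pages_next;	/* the next entry holds the page count */
	unsigned int nr_pages;	/* only meaningful on that next entry */
};

/* Common case: limit by the number of array entries only. */
static unsigned int limit_by_folios(unsigned int batch_nr)
{
	return batch_nr < MAX_NR_FOLIOS_PER_FREE ?
	       batch_nr : MAX_NR_FOLIOS_PER_FREE;
}

/* Poisoning/init_on_free case: limit by the number of pages covered. */
static unsigned int limit_by_pages(const struct toy_entry *e,
				   unsigned int batch_nr,
				   unsigned int *pages_covered)
{
	unsigned int nr, nr_pages = 0;

	for (nr = 0; nr < batch_nr && nr_pages < MAX_NR_FOLIOS_PER_FREE; nr++) {
		if (e[nr].has_nr_pages_next)
			nr_pages += e[++nr].nr_pages;	/* consume the count entry */
		else
			nr_pages++;
	}
	*pages_covered = nr_pages;
	return nr;
}

int main(void)
{
	/* 1024 entries: each pair encodes one 512-page (2 MiB) folio. */
	struct toy_entry batch[1024];
	unsigned int pages = 0;

	for (unsigned int i = 0; i < 1024; i += 2) {
		batch[i].has_nr_pages_next = 1;
		batch[i].nr_pages = 0;
		batch[i + 1].has_nr_pages_next = 0;
		batch[i + 1].nr_pages = 512;
	}

	printf("folio limit: %u entries per chunk\n", limit_by_folios(1024));
	printf("page  limit: %u entries (%u pages) per chunk\n",
	       limit_by_pages(batch, 1024, &pages), pages);
	return 0;
}

With that batch, the folio limit selects 512 entries per chunk (256 folios,
512 MiB worth of pages freed back to back), while the page limit stops after a
single folio (2 MiB) before the next cond_resched().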

--
Cheers,

David / dhildenb

