Subject: Re: [PATCH] mm, memcg: reclaim more aggressively before high allocator throttling
Chris Down writes:
>>I believe I have asked in another email in this thread. Could you explain
>>why enforcing the requested target (memcg_nr_pages_over_high) is
>>insufficient for the problem you are dealing with? That would make sense
>>to me for large targets, while keeping a relatively reasonable semantic
>>for the throttling - i.e. proportional to the memory demand rather than
>>the excess.
>
>memcg_nr_pages_over_high is related to the charge size. As such, if
>you're way over memory.high as a result of transient reclaim failures,
>but the majority of your charges are small, it's going to be hard to make
>meaningful progress:
>
>1. Most nr_pages will be MEMCG_CHARGE_BATCH, which is not enough to help;
>2. Large allocations will only get a single reclaim attempt to succeed.
>
>As such, in many cases we're either doomed to successfully reclaim a
>paltry amount of pages, or fail to reclaim a lot of pages. Asking
>try_to_free_pages() to deal with those huge allocations is generally
>not reasonable, regardless of the specifics of why it doesn't work in
>this case.

Oh, I somehow elided the "enforcing" part of your proposal. Still, even if
large allocations are reclaimed fully, there's no guarantee we end up back
below memory.high, because a single other large allocation which fails to
reclaim can knock us out of whack again.
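
For contrast, here is a rough userspace model of the behaviour being argued
for: retry reclaim against the whole excess, with a bounded retry budget,
and only throttle once that budget is spent. reclaim_some_pages() is a
hypothetical stand-in for memcg reclaim, and the usage/high numbers are
invented; the real implementation lives in mm/memcontrol.c and differs in
detail.

	#include <stdio.h>
	#include <stdlib.h>

	#define MAX_RECLAIM_RETRIES	16	/* the kernel uses a similar retry budget */

	/* Hypothetical stand-in: frees anywhere from 0 to 'want' pages per attempt. */
	static unsigned long reclaim_some_pages(unsigned long want)
	{
		return (unsigned long)rand() % (want + 1);
	}

	int main(void)
	{
		unsigned long high  = 100000;	/* memory.high, in pages */
		unsigned long usage = 131072;	/* current usage, well over high */
		int retries = MAX_RECLAIM_RETRIES;

		while (usage > high && retries--) {
			/* Target the whole excess, not just one allocation's size. */
			usage -= reclaim_some_pages(usage - high);
		}

		if (usage > high)
			printf("still %lu pages over memory.high: throttle the allocator\n",
			       usage - high);
		else
			printf("back under memory.high after reclaim\n");
		return 0;
	}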
