Subject: Re: Low TCP throughput due to vmpressure with swap enabled
From: Shakeel Butt
Date: Wed, 7 Dec 2022
On Wed, Dec 07, 2022 at 01:53:00PM +0100, Johannes Weiner wrote:
[...]
>
> I don't mind doing that if necessary, but looking at the code I don't
> see why it would be.
>
> The socket code sets protocol memory pressure on allocations that run
> into limits, and clears pressure on allocations that succeed and
> frees. Why shouldn't we do the same thing for memcg?
>

I think you are right. Let's go with whatever you have for now as this
will reduce vmpressure dependency.
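
To make sure we are talking about the same thing, my rough reading of
that direction is below. This is only an untested sketch, not your
actual patch: it reuses the existing jiffies-based
memcg->socket_pressure window, the two set/clear helpers are made up
for the sketch, and the cgroup v1 tcpmem branch of
mem_cgroup_charge_skmem() is omitted.

/* hypothetical helper: open a pressure window, like vmpressure() does today */
static void mem_cgroup_set_socket_pressure(struct mem_cgroup *memcg)
{
	WRITE_ONCE(memcg->socket_pressure, jiffies + HZ);
}

/* hypothetical helper: close the window immediately */
static void mem_cgroup_clear_socket_pressure(struct mem_cgroup *memcg)
{
	WRITE_ONCE(memcg->socket_pressure, jiffies);
}

bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
			     gfp_t gfp_mask)
{
	if (try_charge(memcg, gfp_mask, nr_pages) == 0) {
		/* charge succeeded: lift the pressure signal */
		mem_cgroup_clear_socket_pressure(memcg);
		mod_memcg_state(memcg, MEMCG_SOCK, nr_pages);
		return true;
	}

	/* charge ran into the limit: throttle the socket layer */
	mem_cgroup_set_socket_pressure(memcg);
	return false;
}

That mirrors what the protocol code does with tcp_memory_pressure,
just at memcg granularity.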

However, I think there are still open issues that need to be addressed
in the future:

1. Unlike TCP memory accounting, memcg has to account/charge user
memory, kernel memory and tcp/netmem. So, it might make more sense to
enter the pressure state in the try_charge_memcg() function. This means
that charging user memory or kernel memory can also put the memcg under
socket pressure.
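
To illustrate (sketch only; the hook point is a guess, the elided parts
stand for the existing code, and only the function and field names are
from the current tree):

static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
			    unsigned int nr_pages)
{
	/* ... existing fast path: consume_stock(), page_counter_try_charge() ... */

	/*
	 * Whenever any charge - user pages, kernel memory or skmem -
	 * fails the counter charge and drops into the reclaim/retry
	 * slow path, open the socket pressure window too, so the
	 * network stack backs off along with everything else.
	 */
	mem_cgroup_set_socket_pressure(memcg);

	/* ... existing slow path: try_to_free_mem_cgroup_pages(), retry, force, fail ... */
}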

2. On the RX path, the memcg charge can succeed due to the GFP_ATOMIC
flag even when the memcg is over its limit. Should we reset the
pressure state in that case? (See the sketch after point 3.)

3. On the uncharge path, unlike the network stack, should we
unconditionally reset the socket pressure state?
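
Concretely, for points 2 and 3 the places in question would be
something like the following (again just a sketch reusing the helpers
from above; cgroup v1 tcpmem branch omitted):

/*
 * Point 2: on the RX path mem_cgroup_charge_skmem() is called with
 * GFP_ATOMIC from softirq context, so try_charge() can force the
 * charge through even though the memcg is at its limit. Should such
 * a forced "success" clear the pressure state like a normal
 * successful charge would?
 */

void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
{
	mod_memcg_state(memcg, MEMCG_SOCK, -nr_pages);
	refill_stock(memcg, nr_pages);

	/*
	 * Point 3: should every uncharge lift the pressure signal,
	 * even if the memcg is still at or above its limit?
	 */
	mem_cgroup_clear_socket_pressure(memcg);
}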

Shakeel
