Date:    Thu, 8 Dec 2022 00:31:36 +0000
Subject: Re: Low TCP throughput due to vmpressure with swap enabled
From:    Shakeel Butt <>
On Wed, Dec 07, 2022 at 01:53:00PM +0100, Johannes Weiner wrote:
[...]
>
> I don't mind doing that if necessary, but looking at the code I don't
> see why it would be.
>
> The socket code sets protocol memory pressure on allocations that run
> into limits, and clears pressure on allocations that succeed and
> frees. Why shouldn't we do the same thing for memcg?
>
I think you are right. Let's go with what you have for now, as it will reduce the dependency on vmpressure.
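For reference, the socket-side behaviour described above looks roughly like the following. This is a simplified paraphrase of net/core/sock.c, not the exact code: the memcg charge, per-socket buffer checks and reclaim details are elided.

/*
 * Simplified paraphrase of the protocol memory accounting in
 * net/core/sock.c (memcg charging, per-socket checks and error
 * handling elided).
 */

/* Charge side, called when a socket's forward allocation runs out: */
int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
{
	long allocated;

	sk_memory_allocated_add(sk, amt);
	allocated = sk_memory_allocated(sk);

	/* Under the low limit: clear the pressure signal. */
	if (allocated <= sk_prot_mem_limits(sk, 0)) {
		sk_leave_memory_pressure(sk);
		return 1;
	}

	/* Over the pressure limit: set the pressure signal. */
	if (allocated > sk_prot_mem_limits(sk, 1))
		sk_enter_memory_pressure(sk);

	/* Over the hard limit: fail the allocation. */
	if (allocated > sk_prot_mem_limits(sk, 2))
		goto suppress_allocation;

	/* ... further per-socket checks may still suppress it ... */
	return 1;

suppress_allocation:
	sk_memory_allocated_sub(sk, amt);
	return 0;
}

/* Uncharge side: leave pressure once usage drops back under the low limit. */
void __sk_mem_reduce_allocated(struct sock *sk, int amount)
{
	sk_memory_allocated_sub(sk, amount);

	if (sk_under_memory_pressure(sk) &&
	    sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0))
		sk_leave_memory_pressure(sk);
}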
However, I think there are still open issues that need to be addressed in the future (a rough sketch illustrating them follows the list):

1. Unlike TCP memory accounting, memcg has to account/charge user memory, kernel memory and tcp/netmem. So, it might make more sense to enter the pressure state in the try_charge_memcg() function. That way, charging user memory or kernel memory could also put the memcg under socket pressure.

2. On the RX path, the memcg charge can succeed due to the GFP_ATOMIC flag. Should we reset the pressure state in that case?

3. On the uncharge path, should we unconditionally reset the socket pressure state, unlike what the network stack does?
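To make these concrete, here is a rough sketch of how the charge/uncharge paths could drive the signal, with the three questions marked where they come up. This is illustrative only, not a patch: the boolean-style sk_pressure flag and the *_sketch names are made up for illustration, and the actual indicator upstream is currently the jiffies-based memcg->socket_pressure written from vmpressure.

/*
 * Illustrative sketch only, not a tested patch.  Assumes a hypothetical
 * boolean "sk_pressure" flag on struct mem_cgroup; the real indicator
 * and its semantics may differ.
 */

/* Charge side: a failed charge signals pressure, a successful one clears it. */
bool mem_cgroup_charge_skmem_sketch(struct mem_cgroup *memcg,
				    unsigned int nr_pages, gfp_t gfp_mask)
{
	/*
	 * Open issue #1: doing the set/clear inside try_charge_memcg()
	 * instead would let user and kernel memory charges that hit the
	 * limit put the memcg under socket pressure as well.
	 */
	if (try_charge(memcg, gfp_mask, nr_pages) == 0) {
		/*
		 * Open issue #2: on the RX path this may have succeeded
		 * only because GFP_ATOMIC pushed past the limit; should
		 * that really clear the pressure state?
		 */
		WRITE_ONCE(memcg->sk_pressure, false);	/* hypothetical flag */
		return true;
	}

	WRITE_ONCE(memcg->sk_pressure, true);		/* hypothetical flag */
	return false;
}

/*
 * Uncharge side.  Open issue #3: clear unconditionally here, or only once
 * usage has dropped back under the limit, as the socket code does?
 */
void mem_cgroup_uncharge_skmem_sketch(struct mem_cgroup *memcg,
				      unsigned int nr_pages)
{
	/* ... give the pages back (refill_stock()/page_counter_uncharge()) ... */
	WRITE_ONCE(memcg->sk_pressure, false);		/* hypothetical flag */
}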
Shakeel