Subject: Re: [PATCH v6 00/19] The new cgroup slab memory controller
On Tue, Jun 16, 2020 at 06:46:56PM -0700, Shakeel Butt wrote:
> On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > This is v6 of the slab cgroup controller rework.
> >
> > The patchset moves the accounting from the page level to the object
> > level. It allows slab pages to be shared between memory cgroups,
> > which leads to a significant win in slab utilization (up to 45%)
> > and a corresponding drop in the total kernel memory footprint.
>
> Is this based on just SLUB or does this have a similar impact on SLAB as well?

The numbers for SLAB are less impressive than for SLUB (I guess SLUB's
per-cpu partial lists add to the problem), but they are also in the
double digits of percent.
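
To make the object-level idea concrete, here is a toy userspace model
(not the kernel code; the toy_* names below are made up purely for
illustration). With page-level accounting a slab page belongs to exactly
one cgroup, so every cgroup needs its own set of caches; with
object-level accounting each object slot carries its own owner, so
objects charged to different cgroups can live on the same page:

#include <stdio.h>

/* Toy model of a slab page: one owner slot per object instead of one
 * owner per page.  Names are illustrative, not the kernel's.
 */
#define OBJS_PER_PAGE 8

struct toy_cgroup {
	const char *name;
	long charged_bytes;
};

struct toy_slab_page {
	size_t obj_size;
	struct toy_cgroup *owner[OBJS_PER_PAGE];	/* NULL == free slot */
};

/* Take a free slot, record the owner and charge the object's size. */
static int toy_alloc(struct toy_slab_page *page, struct toy_cgroup *cg)
{
	for (int i = 0; i < OBJS_PER_PAGE; i++) {
		if (!page->owner[i]) {
			page->owner[i] = cg;
			cg->charged_bytes += page->obj_size;
			return i;
		}
	}
	return -1;	/* page is full */
}

int main(void)
{
	struct toy_cgroup a = { "cgroup-A", 0 };
	struct toy_cgroup b = { "cgroup-B", 0 };
	struct toy_slab_page page = { .obj_size = 512 };

	/* Objects charged to two different cgroups share one slab page. */
	toy_alloc(&page, &a);
	toy_alloc(&page, &b);
	toy_alloc(&page, &a);

	printf("%s charged: %ld bytes\n", a.name, a.charged_bytes);
	printf("%s charged: %ld bytes\n", b.name, b.charged_bytes);
	return 0;
}

The real accounting keeps per-object cgroup references on the slab page
side, which is what lets all memory cgroups use the same set of caches
and pages.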

>
> > The reduced number of unmovable slab pages should also have a positive
> > effect on memory fragmentation.
>
> That would be awesome. We have seen fragmentation get very bad under
> system (or node) level memory pressure. Is it the same for you?

Well, we haven't had any specific problems with fragmentation, but
generally speaking, reducing the amount of unmovable memory by ~40%
should have a positive effect.
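
For a rough feel of where the reduction comes from (made-up numbers,
just the arithmetic, nothing measured): with per-cgroup caches every
cgroup pins its own partially filled, unmovable slab pages for every
cache it touches, while with shared caches those tails collapse into a
single set. The sketch below only counts those duplicated tail pages,
not the whole slab footprint:

#include <stdio.h>

/* Back-of-the-envelope arithmetic with made-up numbers: how many
 * partially filled (unmovable) slab pages are pinned when every cgroup
 * has its own copy of every cache vs. a single shared set of caches.
 */
int main(void)
{
	const int cgroups = 500;	/* hypothetical number of cgroups */
	const int caches = 80;		/* hypothetical caches each cgroup touches */
	const double avg_partial = 1.5;	/* avg. partially filled pages per cache */

	double per_cgroup = (double)cgroups * caches * avg_partial;
	double shared = (double)caches * avg_partial;

	printf("per-cgroup caches: ~%.0f partial slab pages pinned\n",
	       per_cgroup);
	printf("shared caches:     ~%.0f partial slab pages pinned\n",
	       shared);
	printf("reduction in duplicated tail pages: %.1f%%\n",
	       100.0 * (per_cgroup - shared) / per_cgroup);
	return 0;
}

The ~40% above is about the total unmovable slab footprint, where fully
used pages dominate; the duplicated tails are the part that goes away
when the caches are shared.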

>
> >
> > The patchset makes the slab accounting code simpler: there is no more
> > need for the complicated dynamic creation and destruction of per-cgroup
> > slab caches; all memory cgroups use a global set of shared slab caches.
> > The lifetime of slab caches is no longer tied to the lifetime
> > of memory cgroups.
> >
> > The more precise accounting does require more CPU; however, in practice
> > the difference seems to be negligible. We've been using the new slab
> > controller in Facebook production for several months with different
> > workloads and haven't seen any noticeable regressions. What we've seen
> > are memory savings on the order of 1 GB per host (varying heavily with
> > the actual workload, size of RAM, number of CPUs, memory pressure, etc.).
> >
> > The third version of the patchset added yet another step towards
> > simplifying the code: sharing of slab caches between accounted and
> > non-accounted allocations. It comes with significant upsides (most
> > noticeably, the complete elimination of dynamic slab cache creation)
> > but not without some regression risk, so this change sits on top of
> > the patchset rather than being fully merged in. That way, in the
> > unlikely event of a noticeable performance regression, it can be
> > reverted separately.
> >
>
> Have you performed any [perf] testing on SLAB with this patchset?

The accounting part is the same for SLAB and SLUB, so there should be no
significant difference. I've checked that it compiles, boots, and passes
kselftests, and that the memory savings are there.
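
In case anyone wants to repeat the memory-savings check, the quickest
signal is the slab counters in /proc/meminfo on an otherwise identical
workload, with and without the series. A trivial standalone reader
(nothing patchset-specific about it):

#include <stdio.h>
#include <string.h>

/* Print the slab-related counters from /proc/meminfo; comparing them
 * between a patched and an unpatched kernel under the same workload
 * gives a first-order view of the slab footprint.
 */
int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Slab:", 5) ||
		    !strncmp(line, "SReclaimable:", 13) ||
		    !strncmp(line, "SUnreclaim:", 11))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Per-cgroup numbers are available in memory.stat as well, if a finer
breakdown is needed.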

Thanks!
