Subject: Re: [LKP] Re: [mm/memcg] bd0b230fe1: will-it-scale.per_process_ops -22.7% regression

On Thu, Nov 12, 2020 at 11:43:45AM -0500, Waiman Long wrote:
> >>We tried the below patch to make 'page_counter' cacheline-aligned.
> >> diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> >> index bab7e57..9efa6f7 100644
> >> --- a/include/linux/page_counter.h
> >> +++ b/include/linux/page_counter.h
> >> @@ -26,7 +26,7 @@ struct page_counter {
> >> /* legacy */
> >> unsigned long watermark;
> >> unsigned long failcnt;
> >> -};
> >> +} ____cacheline_internodealigned_in_smp;
> >>and with it, the -22.7% performance change turns into a small -1.7%, which
> >>confirms that the regression is caused by the change in data alignment.
> >>
> >>After the patch, the size of 'page_counter' increases from 104 bytes to
> >>128 bytes, and the size of 'mem_cgroup' increases from 2880 bytes to
> >>3008 bytes (with our kernel config). Another major data structure which
> >>contains 'page_counter' is 'hugetlb_cgroup', whose size will change
> >>from 912 bytes to 1024 bytes.
> >>
> >>Should we make these page_counters aligned to reduce cacheline conflicts?
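
The size arithmetic above can be reproduced in isolation with a minimal
user-space sketch. The structs below are stand-ins sized to the reported
104 bytes, not the real kernel definitions, and 64 bytes is assumed for
the (internode) cacheline size:

	#include <stdio.h>

	/* assumption: 64-byte cachelines, as on typical x86-64 */
	#define CACHELINE_SIZE	64

	/* stand-in sized like the reported 'struct page_counter':
	 * 13 * 8 = 104 bytes */
	struct pc_plain {
		unsigned long words[13];
	};

	/* same payload, padded and aligned to a cacheline boundary,
	 * mimicking what ____cacheline_internodealigned_in_smp does */
	struct pc_aligned {
		unsigned long words[13];
	} __attribute__((aligned(CACHELINE_SIZE)));

	int main(void)
	{
		/* prints: plain=104 aligned=128
		 * (sizeof rounds up to the alignment) */
		printf("plain=%zu aligned=%zu\n",
		       sizeof(struct pc_plain), sizeof(struct pc_aligned));
		return 0;
	}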
> >I would rather focus on a more effective mem_cgroup layout. It is very
> >likely that we are just stumbling over two counters here.
> >
> >Could you try to add cache alignment to the counters after 'memory' and
> >see which one makes the difference? I do not expect memsw to be the one,
> >because that one is used together with the main counter. But who knows,
> >maybe the way it crosses the cache line has exactly this effect. Hard to
> >tell without other numbers.
> >
> >Btw. it would be great to see what the effect is on cgroup v2 as well.
> >
> >Thanks for pursuing this!
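
The bisection Michal suggests could look something like the hypothetical
diff below, annotating one counter at a time and re-running the benchmark.
The field names and comments roughly follow the post-bd0b230fe1 layout of
'struct mem_cgroup'; the exact hunk context may differ:

 --- a/include/linux/memcontrol.h
 +++ b/include/linux/memcontrol.h
 @@ struct mem_cgroup {
  	/* Accounted resources */
  	struct page_counter memory;		/* Both v1 & v2 */
  
  	union {
  		struct page_counter swap;	/* v2 only */
  		struct page_counter memsw;	/* v1 only */
  	};
  
  	/* Legacy consumer-oriented counters */
 -	struct page_counter kmem;		/* v1 only */
 +	/* candidate under test: push 'kmem' to a cacheline boundary */
 +	struct page_counter kmem ____cacheline_internodealigned_in_smp;
  	struct page_counter tcpmem;		/* v1 only */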
>
> The contention may be in the page counters themselves, or it may be in other
> fields below the page counters. The cacheline alignment will cause
> "high_work", just after the page counters, to start at a cacheline boundary.
> I will try removing the cacheline alignment in the page counter and adding
> it to high_work to see if there is any change in performance. If there is no
> change, the performance problem is not in the page counters.
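
A sketch of that variant, on top of reverting the page_counter.h alignment
above, and assuming 'high_work' directly follows the last page counter in
'struct mem_cgroup' as described:

 --- a/include/linux/memcontrol.h
 +++ b/include/linux/memcontrol.h
 @@ struct mem_cgroup {
  	struct page_counter kmem;		/* v1 only */
  	struct page_counter tcpmem;		/* v1 only */
  
 -	struct work_struct high_work;
 +	/* align the first field after the counters, not the counters */
 +	struct work_struct high_work ____cacheline_internodealigned_in_smp;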

Yes, that's a good spot to check. I even suspect it could be other members of
'struct mem_cgroup' that affect the benchmark, as we've seen some other
performance bumps which are possibly related to it too.

Thanks,
Feng
