Subject: Re: [linus:master] [mm] f1a7941243: unixbench.score -19.2% regression

On Tue, Jan 31, 2023 at 05:45:21AM +0000, Matthew Wilcox wrote:
[...]
> > I ran perf and it seems like the percpu counter allocation is the
> > additional cost with this patch. See the report below. However, when I
> > made spawn a bit more sophisticated by adding an mmap() of a GiB, the
> > page table copy became the significant cost and there was no difference
> > with or without the given patch.
> >
> > I am now wondering whether this fork ping-pong is really an important
> > enough workload that we should revert the patch, or whether we should
> > ignore it for now and work on improving the performance of the
> > __alloc_percpu_gfp() code.
> >
> >
> > - 90.97% 0.06% spawn [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
> > - 90.91% entry_SYSCALL_64_after_hwframe
> > - 90.86% do_syscall_64
> > - 80.03% __x64_sys_clone
> > - 79.98% kernel_clone
> > - 75.97% copy_process
> > + 46.04% perf_event_init_task
> > - 21.50% copy_mm
> > - 10.05% mm_init
> > ----------------------> - 8.92% __percpu_counter_init
> > - 8.67% __alloc_percpu_gfp
> > - 5.70% pcpu_alloc
>
> 5.7% of our time spent in pcpu_alloc seems excessive. Are we contending
> on pcpu_alloc_mutex perhaps? Also, are you doing this on a 4-socket
> machine like the kernel test robot ran on?

I ran on a 2-socket machine. I am not sure about pcpu_alloc_mutex
contention, but I doubt it because I ran a single instance of the spawn
test, i.e. a single fork ping-pong.
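
For reference, the spawn ping-pong I am running boils down to the loop
below. This is a minimal userspace sketch rather than the actual
UnixBench spawn source, and the populated 1 GiB mmap() is the optional
variation mentioned in the quote above; drop it to get the plain loop.

/* Minimal sketch of a spawn-style fork ping-pong; not the UnixBench code. */
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/*
	 * Optional: a populated 1 GiB anonymous mapping so that the fork
	 * path has real page tables to copy. Remove for the plain loop.
	 */
	void *buf = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	for (int i = 0; i < 100000; i++) {
		pid_t pid = fork();

		if (pid < 0)
			return 1;
		if (pid == 0)
			_exit(0);	/* child exits immediately */
		waitpid(pid, NULL, 0);	/* parent reaps, then forks again */
	}
	return 0;
}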

>
> We could cut down the number of calls to pcpu_alloc() by a factor of 4
> by having a pcpu_alloc_bulk() that would allocate all four RSS counters
> at once.
>
> Just throwing out ideas ...

Thanks, I will take a stab at pcpu_alloc_bulk() and will share the
result tomorrow.
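
Roughly the direction I have in mind, as a hypothetical sketch only (not
a real patch; the percpu_counter_init_bulk() name, signature and the
omitted hotplug handling are placeholders): do a single
__alloc_percpu_gfp() sized for all the counters and slice it up, so that
mm_init() makes one trip through pcpu_alloc() instead of four.

#include <linux/percpu.h>
#include <linux/percpu_counter.h>

/* Hypothetical bulk init: one percpu allocation shared by 'nr' counters. */
static int percpu_counter_init_bulk(struct percpu_counter *fbc, int nr,
				    gfp_t gfp)
{
	s32 __percpu *counters;
	int i;

	/* One pcpu_alloc() for all counters instead of one per counter. */
	counters = __alloc_percpu_gfp(nr * sizeof(*counters),
				      sizeof(*counters), gfp);
	if (!counters)
		return -ENOMEM;

	for (i = 0; i < nr; i++) {
		raw_spin_lock_init(&fbc[i].lock);
		fbc[i].count = 0;
		fbc[i].counters = (void __percpu *)counters +
				  i * sizeof(*counters);
		/* CONFIG_HOTPLUG_CPU list hookup omitted in this sketch */
	}
	return 0;
}

The caller in mm_init() could then replace its per-counter loop with one
bulk call covering all NR_MM_COUNTERS counters (the destroy side would
need to free only the first counter's percpu area).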

thanks,
Shakeel
