Subject: Re: [PATCH v2 3/5] sched, timer: Use atomics in thread_group_cputimer to improve scalability
On Wed, 2015-04-29 at 10:38 -0400, Rik van Riel wrote:
> On 04/28/2015 04:00 PM, Jason Low wrote:
> > While running a database workload, we found a scalability issue with itimers.
> >
> > Much of the problem was caused by the thread_group_cputimer spinlock.
> > Each time we account for group system/user time, we need to obtain a
> > thread_group_cputimer's spinlock to update the timers. On larger systems
> > (such as a 16 socket machine), more than 30% of total time was spent
> > trying to obtain this kernel lock to update these group timer stats.
> >
> > This patch converts the timers to 64 bit atomic variables and uses
> > atomic add to update them without a lock. With this patch, the percent
> > of total time spent updating thread group cputimer timers was reduced
> > from 30% down to less than 1%.
> >
> > Note: On 32 bit systems using the generic 64 bit atomics, this causes
> > sample_group_cputimer() to take locks 3 times instead of just 1 time.
> > However, we tested this patch on a 32 bit ARM system using the
> > generic atomics and did not find the overhead to be much of an issue.
> > An explanation for why this isn't an issue is that 32 bit systems usually
> > have small numbers of CPUs, and the cacheline contention from the extra
> > spinlock acquisitions taken periodically is not really noticeable on such systems.
>
> I don't see 32 bit systems ever getting so many CPUs
> that this becomes an issue :)

Yeah, the generic 64 bit atomics are meant to be used on systems with
(<=4 or so) CPUs.
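
For anyone following along, below is a rough illustrative sketch of the
conversion, not the actual patch; the example_* names are made up. Each
counter that used to be updated under the per-thread-group spinlock
becomes an atomic64_t, and the accounting paths update it with
atomic64_add(), so writers no longer serialize on a single lock:

#include <linux/atomic.h>
#include <linux/types.h>

struct example_group_cputimer {
	atomic64_t	utime;			/* accumulated user time */
	atomic64_t	stime;			/* accumulated system time */
	atomic64_t	sum_exec_runtime;	/* accumulated exec runtime */
	int		running;
};

/* Lockless update path: each CPU adds its deltas with atomic RMW ops. */
static void example_account_group_times(struct example_group_cputimer *ct,
					u64 user_delta, u64 sys_delta)
{
	atomic64_add(user_delta, &ct->utime);
	atomic64_add(sys_delta, &ct->stime);
}

/*
 * Sampling path: each field is read individually.  On 64 bit systems no
 * lock is taken at all; with the generic 64 bit atomics on 32 bit
 * systems, each atomic64_read() internally takes one spinlock, which is
 * where the "3 times instead of 1" note above comes from.
 */
static void example_sample_group_cputimer(struct example_group_cputimer *ct,
					  u64 *ut, u64 *st, u64 *rt)
{
	*ut = atomic64_read(&ct->utime);
	*st = atomic64_read(&ct->stime);
	*rt = atomic64_read(&ct->sum_exec_runtime);
}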

> > Signed-off-by: Jason Low <jason.low2@hp.com>
>
> Acked-by: Rik van Riel <riel@redhat.com>

Thanks!
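
As an aside, here is a rough sketch of how the generic 64 bit atomic
fallback behaves on 32 bit architectures without native 64 bit atomics
(modeled loosely on lib/atomic64.c; the example_* names are made up and
the details are simplified). Every atomic64 operation takes one spinlock
from a small hashed array, so sampling three atomic64_t fields takes
three lock/unlock pairs where the old code took one:

#include <linux/spinlock.h>
#include <linux/atomic.h>
#include <linux/types.h>
#include <linux/cache.h>

#define EXAMPLE_NR_LOCKS 16

/* One small, shared array of spinlocks covers every atomic64_t. */
static raw_spinlock_t example_locks[EXAMPLE_NR_LOCKS];

static void example_atomic64_init(void)
{
	int i;

	for (i = 0; i < EXAMPLE_NR_LOCKS; i++)
		raw_spin_lock_init(&example_locks[i]);
}

static raw_spinlock_t *example_lock_addr(const atomic64_t *v)
{
	unsigned long addr = (unsigned long)v;

	/* Hash the variable's address so unrelated atomics spread across locks. */
	addr >>= L1_CACHE_SHIFT;
	return &example_locks[addr & (EXAMPLE_NR_LOCKS - 1)];
}

static void example_atomic64_add(s64 a, atomic64_t *v)
{
	unsigned long flags;
	raw_spinlock_t *lock = example_lock_addr(v);

	/* One lock/unlock pair per 64 bit operation on these systems. */
	raw_spin_lock_irqsave(lock, flags);
	v->counter += a;
	raw_spin_unlock_irqrestore(lock, flags);
}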


