Date:    2013-10-10
From:    Ingo Molnar
Subject: Re: [PATCH 0/6] Optimize the cpu hotplug locking -v2

* Andrew Morton <akpm@linux-foundation.org> wrote:

> On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra <peterz@infradead.org> wrote:
>
> > The current cpu hotplug lock is a single global lock; therefore
> > excluding hotplug is a very expensive proposition even though it is
> > a rare occurrence under normal operation.
> >
> > There is a desire for a more lightweight implementation of
> > {get,put}_online_cpus() from both the NUMA scheduling and the
> > -RT side.
> >
> > The current hotplug lock is a full reader-preference lock -- and thus
> > supports reader recursion. However, since we're making the read-side
> > lock much cheaper, the expectation is that it will also be used far
> > more, which in turn would lead to writer starvation.
> >
> > Therefore the proposed new lock is completely fair, albeit somewhat
> > expensive on the write side. This in turn means that we need a
> > per-task nesting count to support reader recursion.
>
> This is a lot of code and a lot of new complexity. It needs some pretty
> convincing performance numbers to justify its inclusion, no?
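
[ To picture the per-task nesting count: a rough userspace model, with
  made-up names and a pthread rwlock standing in for the fair hotplug
  lock -- a sketch only, not the patch code: ]

  #include <pthread.h>

  static pthread_rwlock_t fake_hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;

  /* per-task nesting count: only the outermost get/put touch the lock */
  static __thread int fake_hotplug_nest;

  static void fake_get_online_cpus(void)
  {
          if (fake_hotplug_nest++ == 0)
                  pthread_rwlock_rdlock(&fake_hotplug_lock);
  }

  static void fake_put_online_cpus(void)
  {
          if (--fake_hotplug_nest == 0)
                  pthread_rwlock_unlock(&fake_hotplug_lock);
  }

  int main(void)
  {
          fake_get_online_cpus();
          fake_get_online_cpus();         /* recursive read: no second rdlock */
          fake_put_online_cpus();
          fake_put_online_cpus();         /* released only by the outermost put */
          return 0;
  }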

Should be fairly straightforward to test: the sys_sched_getaffinity() and
sys_sched_setaffinity() syscalls both make use of
get_online_cpus()/put_online_cpus(), so a testcase frobbing affinities on
N CPUs in parallel ought to demonstrate scalability improvements pretty
nicely.

[ It's not just about affinities: in particular sys_sched_getaffinity()
also gets used as a NR_CPUS runtime detection method in apps, so it
matters to regular non-affine loads as well. ]
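
[ A minimal sketch of such a testcase -- one thread per online CPU, each
  hammering the two affinity syscalls; the names and iteration count
  below are arbitrary: ]

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  #define ITERATIONS 100000

  static void *frob_affinity(void *arg)
  {
          long cpu = (long)arg;
          cpu_set_t set;
          int i;

          for (i = 0; i < ITERATIONS; i++) {
                  CPU_ZERO(&set);
                  CPU_SET(cpu, &set);
                  /* both syscalls go through get/put_online_cpus() */
                  if (sched_setaffinity(0, sizeof(set), &set))
                          perror("sched_setaffinity");
                  if (sched_getaffinity(0, sizeof(set), &set))
                          perror("sched_getaffinity");
          }
          return NULL;
  }

  int main(void)
  {
          long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
          pthread_t *threads = calloc(nr_cpus, sizeof(*threads));
          long i;

          for (i = 0; i < nr_cpus; i++)
                  pthread_create(&threads[i], NULL, frob_affinity, (void *)i);
          for (i = 0; i < nr_cpus; i++)
                  pthread_join(threads[i], NULL);

          free(threads);
          return 0;
  }

Building with "cc -O2 -pthread" and comparing "perf stat" runs before and
after the series should show the read-side difference.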

Thanks,

Ingo

