Subject: Re: Plumbers: Tweaking scheduler policy micro-conf RFP
* Peter Zijlstra <peterz@infradead.org> [2012-05-15 17:35:41]:

> On Tue, 2012-05-15 at 17:05 +0200, Vincent Guittot wrote:
> > On 15 May 2012 15:00, Peter Zijlstra <peterz@infradead.org> wrote:
> > > On Tue, 2012-05-15 at 14:57 +0200, Vincent Guittot wrote:

[snip]

> But really short, look at kernel/sched/core.c:default_topology[]
>
> I'd like to get rid of sd_init_* into a single function like
> sd_numa_init(), this would mean all archs would need to do is provide a
> simple list of ever increasing masks that match their topology.

You are suggesting that each arch provide sched/core with a list of
masks, one per sched domain level that needs to be built. An
SDTL_SHARE_XXX flag would also be passed per mask so that the SD
flags for that domain can be derived from it.
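
To make sure I'm reading this right, here is a rough sketch of what
such a per-arch list could look like. The struct layout, the
sdtl_flags field and the flag values are placeholders for this
discussion, not existing code; only the mask helpers are the ones
default_topology[] already uses:

/*
 * Rough sketch only.  Each arch hands sched/core an ordered list of
 * ever increasing cpumasks plus a hint about what is shared at that
 * level, and a single sd_init() derives the SD_* flags from the hint.
 */

/* Placeholder values for the SDTL_SHARE_* hints. */
#define SDTL_SHARE_CORE		0x01	/* SMT siblings */
#define SDTL_SHARE_CACHE	0x02	/* LLC domain (typically multi-core) */
#define SDTL_SHARE_MEMORY	0x04	/* NUMA node (typically socket) */
#define SDTL_SHARE_POWERLINE	0x08	/* power domain (typically socket) */

struct sched_domain_topology_level {
	const struct cpumask *(*mask)(int cpu);	/* span at this level */
	int sdtl_flags;				/* SDTL_SHARE_* hints */
};

/* What the generic x86 list might then reduce to: */
static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask,		SDTL_SHARE_CORE },
#endif
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask,	SDTL_SHARE_CACHE },
#endif
	{ cpu_cpu_mask,		SDTL_SHARE_MEMORY },
	{ NULL, },
};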

> To aid this we can add some SDTL_flags, initially I was thinking of:
>
> SDTL_SHARE_CORE -- aka SMT
> SDTL_SHARE_CACHE -- LLC cache domain (typically multi-core)
> SDTL_SHARE_MEMORY -- NUMA-node (typically socket)
>
> The 'performance' policy is typically to spread over shared resources so
> as to minimize contention on these.
>
> If you want to add some power we need some extra flags, maybe something
> like:
>
> SDTL_SHARE_POWERLINE -- power domain (typically socket)

Let me take the case of a two-socket, quad-core, HT x86 box (Nehalem):

SDTL_SHARE_POWERLINE should be passed along with the cpumask that
sd_init_CPU / cpu_cpu_mask covers today. The number of domains we
build per-cpu will then depend on both the topology and the
sched_powersavings settings.
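
For that box the level list might then look something like the
following (purely illustrative, reusing the placeholder struct and
flags from the sketch above). The power domain coincides with the
socket, so the POWERLINE hint rides on the same mask that
cpu_cpu_mask gives us today, and sd_init() can translate it into
power-savings SD flags, or ignore it, based on the sched_powersavings
setting:

/*
 * Illustrative only: 2 sockets x 4 cores x 2 threads (Nehalem),
 * using the placeholder struct and SDTL_SHARE_* values above.
 */
static struct sched_domain_topology_level nehalem_topology[] = {
	{ cpu_smt_mask,		SDTL_SHARE_CORE },	/* HT siblings */
	{ cpu_coregroup_mask,	SDTL_SHARE_CACHE },	/* cores sharing the LLC */
	{ cpu_cpu_mask,		SDTL_SHARE_MEMORY |
				SDTL_SHARE_POWERLINE },	/* socket == node == power domain */
	{ NULL, },
};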

--Vaidy


