Subject: Re: [PATCH 2/3] sched/fair: Move hot load_avg into its own cacheline
On Wed, Nov 25, 2015 at 02:09:39PM -0500, Waiman Long wrote:
> +++ b/kernel/sched/sched.h
> @@ -248,7 +248,12 @@ struct task_group {
>  	unsigned long shares;
>
>  #ifdef CONFIG_SMP
> -	atomic_long_t load_avg;
> +	/*
> +	 * load_avg can be heavily contended at clock tick time, so put
> +	 * it in its own cacheline separated from the fields above which
> +	 * will also be accessed at each tick.
> +	 */
> +	atomic_long_t load_avg ____cacheline_aligned;

Same as with the other patch; this only works if the structure itself is
cacheline aligned, which I don't think it is.
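
To make that concrete, here's a minimal user-space sketch (not kernel
code; the 64-byte line size and the field names are made up for
illustration). The attribute only rounds the member's offset up to a
cacheline boundary within the struct; the member's absolute address is
still only as aligned as whatever base the allocator hands back:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <stddef.h>

#define CACHELINE_BYTES 64
#define ____cacheline_aligned __attribute__((__aligned__(CACHELINE_BYTES)))

/* Stand-in for struct task_group; fields are illustrative. */
struct tg_like {
	unsigned long shares;
	long load_avg ____cacheline_aligned;
};

int main(void)
{
	/* Fixed by the attribute: always a multiple of 64. */
	printf("offsetof(load_avg) = %zu\n",
	       offsetof(struct tg_like, load_avg));

	/*
	 * Not fixed by the attribute: malloc() here (like a plain
	 * kzalloc() in the kernel) need not return a 64-byte aligned
	 * base, so this can print a nonzero remainder, in which case
	 * load_avg still shares its cacheline with neighbouring data.
	 */
	struct tg_like *tg = malloc(sizeof(*tg));
	printf("&tg->load_avg %% 64 = %zu\n",
	       (size_t)((uintptr_t)&tg->load_avg % CACHELINE_BYTES));

	free(tg);
	return 0;
}

So on top of the member annotation you'd also have to guarantee the
allocation itself is cacheline aligned, e.g. by carving task_group out
of a dedicated kmem_cache created with a cache_line_size() alignment
(kmem_cache_create()'s third argument), rather than relying on kzalloc().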

>  #endif
>  #endif
>
> --
> 1.7.1
>

