Subject: Re: [PATCH 2/4] cpumask: Make cpumask_any() truly random
On Tue, Apr 14, 2020 at 12:19:56PM -0400, Steven Rostedt wrote:

> > +/**
> > + * cpumask_any - pick a "random" cpu from *srcp
> > + * @srcp: the input cpumask
> > + *
> > + * Returns >= nr_cpu_ids if no cpus set.
> > + */
> > +int cpumask_any(const struct cpumask *srcp)
> > +{
> > +	int next, prev;
> > +
> > +	/* NOTE: our first selection will skip 0. */
> > +	prev = __this_cpu_read(distribute_cpu_mask_prev);
> > +
> > +	next = cpumask_next(prev, srcp);
> > +	if (next >= nr_cpu_ids)
> > +		next = cpumask_first(srcp);
> > +
> > +	if (next < nr_cpu_ids)
> > +		__this_cpu_write(distribute_cpu_mask_prev, next);
>
> Do we care if this gets preempted and migrated to a new CPU, so that we read
> "prev" from distribute_cpu_mask_prev on one CPU and write it back on another
> CPU?

I don't think we do; that just adds to the randomness ;-). But you do
raise a good point: the __this_cpu_*() ops assume preemption is already
disabled, which is true of the one existing cpumask_any_and_distribute()
caller, but is no longer true after patch 1, and this patch repeats the
mistake.

So either we need to disable preemption across the function or
transition to this_cpu_*() ops.
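
To make the second option concrete, here is a minimal sketch (not the
actual follow-up patch) of cpumask_any() using the preemption-safe
this_cpu_*() accessors. It assumes the distribute_cpu_mask_prev per-cpu
variable introduced earlier in the series; the declaration is repeated
here only to keep the sketch self-contained. The read/write pair can
still straddle a migration, which, per the above, only adds to the
randomness:

#include <linux/cpumask.h>
#include <linux/percpu.h>

/* Already exists in lib/cpumask.c in the real series. */
static DEFINE_PER_CPU(int, distribute_cpu_mask_prev);

int cpumask_any(const struct cpumask *srcp)
{
	int next, prev;

	/* this_cpu_read() is safe in preemptible context */
	prev = this_cpu_read(distribute_cpu_mask_prev);

	/* NOTE: our first selection will skip 0. */
	next = cpumask_next(prev, srcp);
	if (next >= nr_cpu_ids)
		next = cpumask_first(srcp);

	/*
	 * We may have been migrated since the read above; the write then
	 * lands in another CPU's slot, which is harmless here.
	 */
	if (next < nr_cpu_ids)
		this_cpu_write(distribute_cpu_mask_prev, next);

	return next;
}

On x86 the this_cpu_*() variants compile to the same segment-prefixed
instructions as __this_cpu_*(), while the generic fallback briefly
disables preemption around each access, so the cost of this option
should be negligible either way.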
