Date: Fri, 13 Apr 2018 13:19:00 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting
On Fri, Apr 13, 2018 at 12:08:48PM +0100, Patrick Bellasi wrote:
> On 13-Apr 11:46, Peter Zijlstra wrote:
> > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote:
> > > +static inline void uclamp_cpu_get(struct task_struct *p, int cpu, int clamp_id)
> > > +{
> > > +	struct uclamp_cpu *uc_cpu = &cpu_rq(cpu)->uclamp[clamp_id];
> > > +	int clamp_value;
> > > +	int group_id;
> > > +
> > > +	/* Get task's specific clamp value */
> > > +	clamp_value = p->uclamp[clamp_id].value;
> > > +	group_id = p->uclamp[clamp_id].group_id;
> > > +
> > > +	/* No task specific clamp values: nothing to do */
> > > +	if (group_id == UCLAMP_NONE)
> > > +		return;
> > > +
> > > +	/* Increment the current group_id */
> >
> > That I think qualifies being called a bad comment.
>
> my bad :/
>
> > > +	uc_cpu->group[group_id].tasks += 1;
> > > +
> > > +	/* Mark task as enqueued for this clamp index */
> > > +	p->uclamp_group_id[clamp_id] = group_id;
> >
> > Why exactly do we need this? we got group_id from @p in the first place.
>
> The idea is to back-annotate on the task the group in which it has
> been refcounted. That allows a much simpler and less racy refcount
> decrements at dequeue/migration time.
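[To make the back-annotation argument concrete: a minimal sketch of what the
dequeue-side counterpart could look like if it consumes the value recorded at
enqueue. The uclamp_cpu_put() name and the exact fields are assumptions for
illustration only, not code quoted from the series.]

static inline void uclamp_cpu_put(struct task_struct *p, int cpu, int clamp_id)
{
	struct uclamp_cpu *uc_cpu = &cpu_rq(cpu)->uclamp[clamp_id];
	int group_id;

	/*
	 * Use the group_id back-annotated at enqueue time, not the current
	 * p->uclamp[clamp_id].group_id, which may have changed since.
	 */
	group_id = p->uclamp_group_id[clamp_id];

	/* Task was not refcounted at enqueue: nothing to do */
	if (group_id == UCLAMP_NONE)
		return;

	/* Release the reference taken by uclamp_cpu_get() */
	uc_cpu->group[group_id].tasks -= 1;

	/* Mark task as no longer enqueued for this clamp index */
	p->uclamp_group_id[clamp_id] = UCLAMP_NONE;
}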
I'm not following; the only possible reason for having this second copy of group_id is when your original value (p->uclamp[clamp_id].group_id) can change between enqueue and dequeue.
Why can this happen?
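[For the sake of illustration, the kind of interleaving that would make the
back-annotation necessary is sketched below; whether the series actually
permits this is exactly the question being asked here. The sched_setattr()
path is a hypothetical trigger, not taken from the patch.]

/*
 *   enqueue_task(p)                  p->uclamp[id].group_id == 2
 *     uclamp_cpu_get(p, cpu, id)     uc_cpu->group[2].tasks++
 *
 *   sched_setattr(p, ...)            p->uclamp[id].group_id = 5
 *
 *   dequeue_task(p)
 *     uclamp_cpu_put(p, cpu, id)     without the back-annotated group_id this
 *                                    would decrement uc_cpu->group[5].tasks,
 *                                    leaving group 2 with a leaked count
 */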
> That's also why we have a single call-back, uclamp_task_update(),
> for both enqueue/dequeue. Depending on the check performed by
> uclamp_task_affects() we know if we have to get or put the refcounter.
But that check is _completely_ redundant, because you already _know_ from being in the en/de-queue path. So having that single callback is actively harmful (and confusing).
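[The structure implied by that remark would be, roughly, direct get/put calls
from the two paths. A sketch under the assumption of separate UCLAMP_MIN /
UCLAMP_MAX clamp indexes and a uclamp_cpu_put() counterpart; simplified, not
code from the thread.]

static void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
{
	/* ... existing enqueue work ... */

	/* Enqueue path: by construction this is always a "get" */
	uclamp_cpu_get(p, cpu_of(rq), UCLAMP_MIN);
	uclamp_cpu_get(p, cpu_of(rq), UCLAMP_MAX);
}

static void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
{
	/* ... existing dequeue work ... */

	/* Dequeue path: by construction this is always a "put" */
	uclamp_cpu_put(p, cpu_of(rq), UCLAMP_MIN);
	uclamp_cpu_put(p, cpu_of(rq), UCLAMP_MAX);
}

[No uclamp_task_affects()-style check is needed to decide between get and put,
since the call site already encodes which operation applies.]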