Subject: Re: [RFC 2/4] sched/core: Set nr_lat_sensitive counter at various scheduler entry/exit points
On Fri, May 08, 2020 at 04:45:16PM +0530, Parth Shah wrote:
> Hi Pavan,
>
> Thanks for going through this patch-set.
>
> On 5/8/20 2:03 PM, Pavan Kondeti wrote:
> > Hi Parth,
> >
> > On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote:
> >> Monitor tasks at:
> >> 1. wake_up_new_task() - forked tasks
> >>
> >> 2. set_task_cpu() - task migrations, Load balancer
> >>
> >> 3. __sched_setscheduler() - set/unset latency_nice value
> >> Increment the nr_lat_sensitive count on the CPU when a task is marked
> >> with latency_nice == -20. Similarly, decrement the counter when the
> >> task is re-marked with a latency_nice value greater than -20.
> >>
> >> 4. finish_task_switch() - dying task
> >>
> >
> >
> >> Signed-off-by: Parth Shah <parth@linux.ibm.com>
> >> ---
> >>  kernel/sched/core.c  | 30 ++++++++++++++++++++++++++++--
> >>  kernel/sched/sched.h |  5 +++++
> >>  2 files changed, 33 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >> index 2d8b76f41d61..ad396c36eba6 100644
> >> --- a/kernel/sched/core.c
> >> +++ b/kernel/sched/core.c
> >> @@ -1744,6 +1744,11 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
> >>  	trace_sched_migrate_task(p, new_cpu);
> >>  
> >>  	if (task_cpu(p) != new_cpu) {
> >> +		if (task_is_lat_sensitive(p)) {
> >> +			per_cpu(nr_lat_sensitive, task_cpu(p))--;
> >> +			per_cpu(nr_lat_sensitive, new_cpu)++;
> >> +		}
> >> +
> >
> > Since we can come here without rq locks, there is a possibility
> > of a race and incorrect updates can happen. Since the counters
> > are used to prevent C-states, we don't want that to happen.
>
> I did try using task_lock(p) wherever we change the refcount and where
> the latency_nice value is set. There I was using nr_lat_sensitive with
> an atomic_t type.
>
> After much thought about optimizing it, and since we hold the rq lock
> here anyway, I decided not to use any lock here and to see whether the
> scheduler community has a well-known solution for this :-)
>
> But in brief, using an atomic_t nr_lat_sensitive and task_lock(p) when
> changing the refcount should solve the problem, right?
>
> If you or anyone else has a solution for this kind of pattern, it would
> surely be helpful.
>
I am not sure task_lock() can help here, because we are operating on the
counter on a per-CPU basis. Maybe a cmpxchg-based loop would work here to
make sure the increment/decrement happens atomically.
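
Just to illustrate, a minimal sketch of such a cmpxchg loop might look like
this (the helper name is hypothetical and not part of this patch, and it
assumes nr_lat_sensitive stays a plain per-CPU int):

/*
 * Hypothetical helper, not in this patch: retry until the cmpxchg
 * succeeds, so a concurrent update of the same per-CPU counter from
 * another CPU cannot be lost.
 */
static void nr_lat_sensitive_add(int cpu, int delta)
{
	int *ptr = &per_cpu(nr_lat_sensitive, cpu);
	int old;

	do {
		old = READ_ONCE(*ptr);
	} while (cmpxchg(ptr, old, old + delta) != old);
}

Alternatively, making the counter a per-CPU atomic_t and using
atomic_inc()/atomic_dec(), as you suggest above, avoids the explicit loop.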

> >
> >>  		if (p->sched_class->migrate_task_rq)
> >>  			p->sched_class->migrate_task_rq(p, new_cpu);
> >>  		p->se.nr_migrations++;

[...]

> >> @@ -4732,8 +4749,17 @@ static void __setscheduler_params(struct task_struct *p,
> >>  	p->normal_prio = normal_prio(p);
> >>  	set_load_weight(p, true);
> >>  
> >> -	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
> >> +	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
> >> +		if (p->state != TASK_DEAD &&
> >> +		    attr->sched_latency_nice != p->latency_nice) {
> >> +			if (attr->sched_latency_nice == MIN_LATENCY_NICE)
> >> +				per_cpu(nr_lat_sensitive, task_cpu(p))++;
> >> +			else if (task_is_lat_sensitive(p))
> >> +				per_cpu(nr_lat_sensitive, task_cpu(p))--;
> >> +		}
> >> +
> >>  		p->latency_nice = attr->sched_latency_nice;
> >> +	}
> >>  }
> >
> > There is a potential race here due to which we can mess up the refcount.
> >
> > - A latency sensitive task is marked TASK_DEAD
> > <snip>
> > - sched_setattr() is called on the task to clear the latency nice value.
> >   Since we check the task state here, we skip the decrement.
> > - The task is finally context switched out, and we skip the decrement
> >   again since it is no longer a latency sensitive task.
>
> If the task is already marked TASK_DEAD, then we should have already
> decremented its refcount in finish_task_switch().
> Am I missing something?

There is a window (the context switch and the dropping of the rq lock)
between marking a task DEAD (in do_task_dead()) and dropping the ref
counter (in finish_task_switch()) during which we can get here and skip
the decrement because the task is marked DEAD.
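
To spell out the interleaving (illustrative timeline):

  dying task's CPU                     sched_setattr() caller
  ----------------                     ----------------------
  do_task_dead():
    p->state = TASK_DEAD
  __schedule() drops the rq lock
                                       __setscheduler_params():
                                         p->state == TASK_DEAD, so the
                                         decrement is skipped, but
                                         p->latency_nice is still updated
  finish_task_switch():
    task_is_lat_sensitive(p) is now
    false, decrement skipped again

So the counter is left with a stale increment.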

Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
