Subject: Re: [BUG] schedutil governor produces regular max freq spikes because of lockup detector watchdog threads
From: Leonard Crestez
Date: 2018-01-08
On Mon, 2018-01-08 at 15:14 +0000, Patrick Bellasi wrote:
> On 08-Jan 15:20, Leonard Crestez wrote:
> > On Mon, 2018-01-08 at 09:31 +0530, Viresh Kumar wrote:
> > > On 05-01-18, 23:18, Rafael J. Wysocki wrote:
> > > > On Fri, Jan 5, 2018 at 9:37 PM, Leonard Crestez wrote:
> > > > >
> > > > > When using the schedutil governor together with the softlockup
> > > > > detector, all CPUs go to their maximum frequency on a regular
> > > > > basis. This seems to be because the watchdog creates an RT
> > > > > thread on each CPU, and this causes regular kicks with:
> > > > >
> > > > >     cpufreq_update_this_cpu(rq, SCHED_CPUFREQ_RT);
> > > > >
> > > > > The schedutil governor responds to this by immediately setting
> > > > > the maximum CPU frequency, which is very undesirable.
> > > > >
> > > > > The issue can be fixed by this patch from Android:
> > > > >
> > > > > The patch stalled in a long discussion about how difficult it
> > > > > is for cpufreq to deal with RT and how some RT users might just
> > > > > disable cpufreq. It is indeed hard, but if the system
> > > > > experiences regular power kicks from a common debug feature,
> > > > > users will end up disabling schedutil instead.
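
(For context, the schedutil decision in question is sketched below in
simplified, paraphrased form; the exact code varies between kernel
versions, so treat this as an approximation rather than a verbatim
quote. Any update tagged with an RT or DL flag bypasses the normal
utilization-based formula and requests the maximum frequency.)

    /*
     * Simplified, paraphrased sketch of schedutil's single-CPU update
     * path from kernels of this era (not verbatim; iowait boost and
     * rate limiting details trimmed).
     */
    static void sugov_update_single(struct update_util_data *hook,
                                    u64 time, unsigned int flags)
    {
            struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu,
                                                    update_util);
            struct sugov_policy *sg_policy = sg_cpu->sg_policy;
            struct cpufreq_policy *policy = sg_policy->policy;
            unsigned long util, max;
            unsigned int next_f;

            if (!sugov_should_update_freq(sg_policy, time))
                    return;

            if (flags & SCHED_CPUFREQ_RT_DL) {
                    /* RT/DL kick: jump straight to the maximum frequency */
                    next_f = policy->cpuinfo.max_freq;
            } else {
                    /* Normal path: scale frequency with utilization */
                    sugov_get_util(&util, &max, sg_cpu->cpu);
                    next_f = get_next_freq(sg_policy, util, max);
            }
            sugov_update_commit(sg_policy, time, next_f);
    }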

> > > > Patrick has a series of patches dealing with this problem area AFAICS,
> > > > but we are currently integrating material from Juri related to
> > > > deadline tasks.
> > >
> > > I am not sure if Patrick's patches would solve this problem at
> > > all, as we still go to max for RT, and the RT task is created by
> > > the softlockup detector somehow.

> > I assume you're talking about the series starting with
> > "[PATCH v3 0/6] cpufreq: schedutil: fixes for flags updates"
> >
> > I checked and they have no effect on this particular issue (not
> > surprising).

> Yeah, that series was addressing the same issue, but for one specific
> RT thread: the one used by schedutil to change the frequency.
> For all other RT threads the intended behavior was still to go
> to max... moreover, those patches have been superseded by a different
> solution recently proposed by Peter:
>
>    20171220155625.lopjlsbvss3qgb4d@hirez.programming.kicks-ass.net
>
> As Viresh and Rafael suggested, we should eventually consider a
> different scheduling class and/or execution context for the watchdog.
> Maybe a generalization of Juri's proposed SCHED_FLAG_SUGOV flag for
> DL tasks could be useful:
>
>    20171204102325.5110-4-juri.lelli@redhat.com
>
> Although that solution is already considered "gross" and thus perhaps
> it does not make sense to keep adding special DL tasks.
>
> Another possible alternative to tagging an RT task as special is to
> use an API similar to the one proposed by the util_clamp RFC:
>
>    20170824180857.32103-1-patrick.bellasi@arm.com
>
> which would allow defining the maximum utilization that a properly
> configured RT task can request.

Marking the watchdog as somehow "not important for performance" would
probably work, though I guess it will take a while to reach a stable
solution.
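
For illustration, with a util_clamp style interface the watchdog
threads could be capped so they never request extra performance. The
sketch below is hypothetical: SCHED_FLAG_UTIL_CLAMP and the
sched_util_min/sched_util_max fields are assumed names modeled on the
direction of that RFC, not an API that exists today.

    /*
     * Hypothetical sketch only: capping a watchdog thread's
     * utilization with a util_clamp style sched_setattr() extension.
     * The flag and the util fields are assumptions based on the RFC,
     * not a merged interface.
     */
    struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_FIFO,
            .sched_priority = MAX_RT_PRIO - 1,
            .sched_flags    = SCHED_FLAG_UTIL_CLAMP,
            .sched_util_min = 0,
            .sched_util_max = 0,    /* "needs no frequency boost at all" */
    };

    /* p == a per-cpu watchdog task; in-kernel sched_setattr() form.
     * schedutil could then ignore it when picking a frequency. */
    sched_setattr(p, &attr);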

BTW, in the current version it seems the kick happens *after* the RT
task executes. It seems very likely that cpufreq will go back down
before an RT task executes again, so how does it help? Unless most of
the workload is RT. But even in that case, aren't you better off with
regular scaling, since schedutil will notice that utilization is high
anyway?

Scaling the frequency up first would make more sense, except such
operations can have very high latencies anyway.

Viresh suggested earlier moving the watchdog to DL, but apparently
per-cpu threads are not supported; sched_setattr() fails on this check:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/sched/core.c#n4167
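
For reference, this appears to be the root-domain affinity check in
__sched_setscheduler() (quoted from kernels around v4.15, lightly
trimmed), which a thread pinned to a single CPU can never satisfy:

    /*
     * From __sched_setscheduler() in kernel/sched/core.c (~v4.15,
     * trimmed): a SCHED_DEADLINE task must be allowed to run on every
     * CPU of its root domain, so per-cpu pinned kthreads are rejected.
     */
    if (dl_bandwidth_enabled() && dl_policy(policy)) {
            cpumask_t *span = rq->rd->span;

            /*
             * Don't allow tasks with an affinity mask smaller than
             * the entire root_domain to become SCHED_DEADLINE. We
             * will also fail if there's no bandwidth available.
             */
            if (!cpumask_subset(span, &p->cpus_allowed) ||
                rq->rd->dl_bw.bw == 0) {
                    task_rq_unlock(rq, p, &rf);
                    return -EPERM;
            }
    }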

--
Regards,
Leonard
