Subject: wakeup_affine_weight() is b0rked - was Re: [PATCH 2/2] sched/fair: Scale wakeup granularity relative to nr_running
From: Mike Galbraith <>
Date: Sun, 03 Oct 2021 05:07:58 +0200
On Wed, 2021-09-22 at 07:22 +0200, Mike Galbraith wrote:
> On Tue, 2021-09-21 at 11:36 +0100, Mel Gorman wrote:
> > On Tue, Sep 21, 2021 at 05:52:32AM +0200, Mike Galbraith wrote:
> > >
> > > Preemption does rapidly run into diminishing return as load climbs for
> > > a lot of loads, but as you know, it's a rather sticky wicket because
> > > even when over-committed, preventing light control threads from slicing
> > > through (what can be a load's own work crew of) hogs can seriously
> > > injure performance.
> > >
> >
> > Turning this into a classic Rob Peter To Pay Paul problem. We don't know
> > if there is a light control thread that needs to run or not that affects
> > overall performance. It all depends on whether that control thread needs
> > to make progress for the overall workload or whether there are a mix of
> > workloads resulting in overloading.
>
> WRT overload, and our good buddies Peter and Paul :)  I added...
>
> 	if (gran >= sysctl_sched_latency >> 1)
> 		trace_printk("runnable:%d preempt disabled\n", cfs_rq->nr_running);
>
> ...to watch, and met the below when I.. logged in.
>
> homer:..debug/tracing # tail -20 trace
>              X-2229    [002] d..5.    60.690322: wakeup_gran: runnable:9 preempt disabled
>              X-2229    [002] d..5.    60.690325: wakeup_gran: runnable:10 preempt disabled
>              X-2229    [002] d..5.    60.690330: wakeup_gran: runnable:11 preempt disabled
>              X-2229    [002] d..5.    60.690363: wakeup_gran: runnable:13 preempt disabled
>              X-2229    [002] d..5.    60.690377: wakeup_gran: runnable:14 preempt disabled
>              X-2229    [002] d..5.    60.690390: wakeup_gran: runnable:15 preempt disabled
>              X-2229    [002] d..5.    60.690404: wakeup_gran: runnable:16 preempt disabled
>              X-2229    [002] d..5.    60.690425: wakeup_gran: runnable:9 preempt disabled
>       ksmserver-2694    [003] d..3.    60.690432: wakeup_gran: runnable:6 preempt disabled
>       ksmserver-2694    [003] d..3.    60.690436: wakeup_gran: runnable:7 preempt disabled
>              X-2229    [002] d..5.    60.690451: wakeup_gran: runnable:6 preempt disabled
>              X-2229    [002] d..5.    60.690465: wakeup_gran: runnable:7 preempt disabled
>            kmix-2736    [000] d..3.    60.690491: wakeup_gran: runnable:6 preempt disabled
>              X-2229    [004] d..5.    92.889635: wakeup_gran: runnable:6 preempt disabled
>              X-2229    [004] d..5.    92.889675: wakeup_gran: runnable:6 preempt disabled
>              X-2229    [004] d..5.    92.889863: wakeup_gran: runnable:6 preempt disabled
>              X-2229    [004] d..5.    92.889944: wakeup_gran: runnable:6 preempt disabled
>              X-2229    [004] d..5.    92.889957: wakeup_gran: runnable:7 preempt disabled
>              X-2229    [004] d..5.    92.889968: wakeup_gran: runnable:8 preempt disabled
>  QXcbEventQueue-2740    [000] d..4.    92.890025: wakeup_gran: runnable:6 preempt disabled
> homer:..debug/tracing #
>
> Watching 'while sleep 1; do clear;tail trace; done' with nothing but a
> kbuild running is like watching top.  There's enough stacking during
> routine use of my desktop box that it runs into the tick granularity
> wall pretty much continuously, so 'overload' may want redefining.
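(For clarity, that instrumentation sits in wakeup_gran(); a rough sketch of the placement, with the patch's actual nr_running scaling elided behind a hypothetical scaled_wakeup_gran() helper and the cfs_rq parameter assumed from the patch under test:)

	static unsigned long
	wakeup_gran(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		/*
		 * 'gran' as computed by the patch under test; it grows with
		 * cfs_rq->nr_running (scaling details elided, see Mel's patch).
		 * scaled_wakeup_gran() is a hypothetical stand-in helper.
		 */
		unsigned long gran = scaled_wakeup_gran(cfs_rq);

		/*
		 * Log the moment scaling pushes gran past half the latency
		 * target, i.e. wakeup preemption is effectively disabled.
		 */
		if (gran >= sysctl_sched_latency >> 1)
			trace_printk("runnable:%d preempt disabled\n",
				     cfs_rq->nr_running);

		return calc_delta_fair(gran, se);
	}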
I looked into that crazy stacking depth...
static int
wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
		   int this_cpu, int prev_cpu, int sync)
{
	s64 this_eff_load, prev_eff_load;
	unsigned long task_load;

	this_eff_load = cpu_load(cpu_rq(this_cpu));
	                ^^^^^^^^^^^^^^^^^^^^^^^^^^^ the butler didit!
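(For reference, the comparison that follows in mainline fair.c of that era looks roughly like this; check your tree for the exact version:)

	if (sync) {
		unsigned long current_load = task_h_load(current);

		/* sync wakeup: discount the waker, it's about to sleep */
		if (current_load > this_eff_load)
			return this_cpu;

		this_eff_load -= current_load;
	}

	task_load = task_h_load(p);

	this_eff_load += task_load;
	if (sched_feat(WA_BIAS))
		this_eff_load *= 100;
	this_eff_load *= capacity_of(prev_cpu);

	prev_eff_load = cpu_load(cpu_rq(prev_cpu));
	prev_eff_load -= task_load;
	if (sched_feat(WA_BIAS))
		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
	prev_eff_load *= capacity_of(this_cpu);

	if (sync)
		prev_eff_load += 1;

	/* pull to this_cpu when its effective load compares favourably */
	return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits;
	}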
That's pretty darn busted as it sits. Between load updates, X, or any other waker of many, can stack wakees to a ludicrous depth. Tracing kbuild vs firefox playing a youtube clip, I watched X stack 20 of the zillion firefox minions while their previous CPUs all had 1 lousy task running but a cpu_load() higher than the cpu_load() of X's CPU. Most of those prev_cpus were where X had left them when it migrated. Each and every crazy depth migration was wake_affine_weight() deciding we should pull based on crappy data. As instantaneous load on the waker CPU blew through the roof in my trace snapshot, its cpu_load() did finally budge.. a tiny bit.. downward. No idea where the stack would have topped out, my tracing_off() limit was 20.
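(For reference, cpu_load() here is just the PELT load average of the root cfs_rq, a geometrically decaying average rather than an instantaneous runnable count, which is why it lags so badly while X machine-guns wakeups; from mainline fair.c of that era:)

	static unsigned long cpu_load(struct rq *rq)
	{
		return cfs_rq_load_avg(&rq->cfs);
	}

	static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
	{
		return cfs_rq->avg.load_avg;
	}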
Hohum, my box grew a WA_INST companion to its SIS_MIN_LAT cache-cold task distribulator feature ;-)  Not particularly lovely, but it knocks over the leaning tower of minions.
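(The patch itself isn't in this message; a minimal sketch of the idea, assuming a new WA_INST sched_feat with placement and threshold purely illustrative, would gate the pull on instantaneous rq->nr_running at the top of wake_affine_weight():)

	/*
	 * Hypothetical WA_INST gate, a sketch only: consult instantaneous
	 * runnable counts before trusting decayed cpu_load() numbers, so
	 * one waker can't stack wakees between PELT updates.  WA_INST
	 * would need a matching entry in kernel/sched/features.h.
	 */
	if (sched_feat(WA_INST)) {
		unsigned int this_nr = cpu_rq(this_cpu)->nr_running;
		unsigned int prev_nr = cpu_rq(prev_cpu)->nr_running;

		/*
		 * Decline the affine pull when the waker's CPU is already
		 * visibly busier; nr_cpumask_bits means "no preference".
		 */
		if (this_nr > prev_nr + 1)
			return nr_cpumask_bits;
	}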
-Mike