Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6
> I'll continue thinking about it but whatever chance there is of
> improving it while keeping CPU balancing, NUMA balancing and wake affine
> consistent with each other, I think there is no chance with the
> inconsistent logic used in the vanilla code :(

Thank you, Mel!


On Thu, Mar 12, 2020 at 10:47 PM Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Thu, Mar 12, 2020 at 05:54:29PM +0100, Jirka Hladky wrote:
> > >
> > > I find it unlikely that this is common, because who acquires such a large
> > > machine and then uses only a tiny percentage of it?
> >
> >
> > I generally agree, but I also want to make the point that AMD has made these
> > large systems much more affordable with their EPYC CPUs. The 8-NUMA-node
> > server we are using costs under $8k.
> >
> >
> >
> > > This is somewhat of a dilemma. Without the series, the load balancer and
> > > NUMA balancer use very different criteria on what should happen and
> > > results are not stable.
> >
> >
> > Unfortunately, I also see instability with the series applied. This is again
> > the sp_C test with 8 threads, executed on a dual-socket AMD 7351 (EPYC Naples)
> > server with 8 NUMA nodes. With the series applied, the runtime varies from
> > 86 to 165 seconds! Could we do something about it? A runtime of 86
> > seconds would be acceptable; if we could stabilize this case and get a
> > consistent runtime of around 80 seconds, the problem would be gone.
> >
> > Do you see similar instability of results on your HW for sp_C
> > with low thread counts?
> >
>
> I saw something similar but observed that it depended on whether the
> worker tasks got spread wide or not, which partially came down to luck.
> The question is whether it's possible to pick a point where we spread wide
> and can recover quickly enough when tasks need to remain close, without
> knowledge of the future. Putting a balancing limit on tasks that
> recently woke would be one option, but that could also cause persistent
> improper balancing for tasks that wake frequently.
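
A toy userspace sketch of that "balancing limit on recently woken tasks" idea, purely for illustration: every name in it (toy_task, wake_cooldown_ns, allow_migration) is invented here, and nothing below comes from the series or from mainline. It also shows the failure mode mentioned above: a task that wakes more often than the cooldown expires never becomes eligible for migration.

/*
 * Toy sketch (not kernel code): skip load balancing for tasks that
 * woke very recently, so wake-affinity placement is not undone
 * immediately.  All identifiers are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_task {
	const char *name;
	uint64_t last_wake_ns;		/* timestamp of the most recent wakeup */
};

/* Tasks woken within this window are skipped by the balancer. */
static const uint64_t wake_cooldown_ns = 2000000ULL;	/* 2 ms */

static bool allow_migration(const struct toy_task *t, uint64_t now_ns)
{
	/* Recently woken: leave it where wake affinity placed it. */
	return (now_ns - t->last_wake_ns) >= wake_cooldown_ns;
}

int main(void)
{
	uint64_t now = 10000000ULL;	/* pretend "current time": 10 ms */

	struct toy_task batch = { "batch worker",   1000000ULL };	/* woke 9 ms ago   */
	struct toy_task waker = { "frequent waker", 9500000ULL };	/* woke 0.5 ms ago */

	/*
	 * The batch task is eligible to be spread wide; the frequent waker
	 * is not, and if it keeps waking faster than the cooldown expires
	 * it stays ineligible indefinitely - the "persistent improper
	 * balancing" risk noted above.
	 */
	printf("%s: migration %s\n", batch.name,
	       allow_migration(&batch, now) ? "allowed" : "deferred");
	printf("%s: migration %s\n", waker.name,
	       allow_migration(&waker, now) ? "allowed" : "deferred");
	return 0;
}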
>
> > Runtime with this series applied:
> > $ grep "Time in seconds" *log
> > sp.C.x.defaultRun.008threads.loop01.log: Time in seconds = 125.73
> > sp.C.x.defaultRun.008threads.loop02.log: Time in seconds = 87.54
> > sp.C.x.defaultRun.008threads.loop03.log: Time in seconds = 86.93
> > sp.C.x.defaultRun.008threads.loop04.log: Time in seconds = 165.98
> > sp.C.x.defaultRun.008threads.loop05.log: Time in seconds = 114.78
> >
> > For comparison, here are vanilla kernel results:
> > $ grep "Time in seconds" *log
> > sp.C.x.defaultRun.008threads.loop01.log: Time in seconds = 59.83
> > sp.C.x.defaultRun.008threads.loop02.log: Time in seconds = 67.72
> > sp.C.x.defaultRun.008threads.loop03.log: Time in seconds = 63.62
> > sp.C.x.defaultRun.008threads.loop04.log: Time in seconds = 55.01
> > sp.C.x.defaultRun.008threads.loop05.log: Time in seconds = 65.20
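
For reference, computed from the figures above: with the series the five runs average about 116.2 s with a spread of 86.93-165.98 s, while vanilla averages about 62.3 s with a spread of 55.01-67.72 s, i.e. roughly a 1.9x slowdown on average and a much wider run-to-run variation.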
> >
> >
> >
> > > In *general*, I found that the series won a lot more than it lost across
> > > a spread of workloads and machines but unfortunately it's also an area
> > > where counter-examples can be found.
> >
> >
> > OK, fair enough. I understand that there will always be trade-offs when
> > making changes to the scheduler like this. And I agree that the cases with
> > higher system load (where the series is helpful) outweigh the performance
> > drops for low thread counts. I was hoping that it would be possible to
> > improve the low-thread-count results while keeping the gains for the other
> > scenarios :-) But let's be realistic - I would be happy with fixing the
> > extreme case mentioned above. The other cases, where the performance drop
> > is about 20%, are OK with me and are outweighed by the gains elsewhere.
> >
>
> I'll continue thinking about it but whatever chance there is of
> improving it while keeping CPU balancing, NUMA balancing and wake affine
> consistent with each other, I think there is no chance with the
> inconsistent logic used in the vanilla code :(
>
> --
> Mel Gorman
> SUSE Labs
>


--
-Jirka
