Date: Fri, 23 Apr 2021
From: Mel Gorman
Subject: Re: [PATCH 00/10] sched/fair: wake_affine improvements
On Fri, Apr 23, 2021 at 04:01:29PM +0530, Srikar Dronamraju wrote:
> > The series also oopses a *lot* and didn't get through a run of basic
> > workloads on x86 on any of three machines. An example oops is
> >
>
> Can you pass me your failing config? I have somehow not been seeing this
> either on x86 or on Powerpc on multiple systems.

The machines have since moved on to testing something else (Rik's patch
for newidle) but the attached config should be close enough.

> Also if possible cat /proc/schedstat and cat
> /proc/sys/kernel/sched_domain/cpu0/domain*/name
>

For the vanilla kernel, /proc/sys/kernel/sched_domain/cpu0/domain*/name reports:

SMT
MC
NUMA

> > [ 137.770968] BUG: unable to handle page fault for address: 000000000001a5c8
> > [ 137.777836] #PF: supervisor read access in kernel mode
> > [ 137.782965] #PF: error_code(0x0000) - not-present page
> > [ 137.788097] PGD 8000004098a42067 P4D 8000004098a42067 PUD 4092e36067 PMD 40883ac067 PTE 0
> > [ 137.796261] Oops: 0000 [#1] SMP PTI
> > [ 137.799747] CPU: 0 PID: 14913 Comm: GC Slave Tainted: G E 5.12.0-rc8-llcfallback-v1r1 #1
> > [ 137.809123] Hardware name: SGI.COM C2112-4GP3/X10DRT-P-Series, BIOS 2.0a 05/09/2016
> > [ 137.816765] RIP: 0010:cpus_share_cache+0x22/0x30
> > [ 137.821396] Code: fc ff 0f 0b eb 80 66 90 0f 1f 44 00 00 48 63 ff 48 63 f6 48 c7 c0 c8 a5 01 00 48 8b 0c fd 00 59 9d 9a 48 8b 14 f5 00 59 9d 9a <8b> 14 02 39 14 01 0f 94 c0 c3 0f 1f 40 00 0f 1f 44 00 00 41 57 41
>
> IP says cpus_share_cache, which takes two ints.
> RAX is 000000000001a5c8 and the panic says
> "unable to handle page fault for address: 000000000001a5c8",
> so it must have faulted on "per_cpu(sd_llc_id, xx_cpu)".
>

More than likely. I didn't look closely because the intent was to schedule
tests to get some data and do the review later when I had time. tbench
partially completed but oopsed at high thread counts. Another load failed
completely and I didn't test beyond that, but tbench at high thread counts
should be reproducible.
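
For reference, the function being decoded above is roughly the following
in mainline (a sketch of cpus_share_cache() from kernel/sched/core.c
around v5.12, not the patched tree from this series):

	bool cpus_share_cache(int this_cpu, int that_cpu)
	{
		/*
		 * Two per-CPU reads of sd_llc_id. If either CPU index is
		 * bogus, the per-CPU base is bad and the load faults.
		 */
		return per_cpu(sd_llc_id, this_cpu) ==
		       per_cpu(sd_llc_id, that_cpu);
	}

The faulting address matches the 0x1a5c8 immediate loaded into RAX in the
Code: dump, i.e. the sd_llc_id per-CPU offset with an apparently bad
per-CPU base, which is consistent with the analysis quoted above.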

--
Mel Gorman
SUSE Labs
[Attachment: config (application/x-gzip)]