Subject: Re: BUG: using __this_cpu_read() in preemptible [00000000] code: mm_percpu_wq/7
On 08/16/2017 04:20 PM, Tejun Heo wrote:
> Hello,
>
> On Wed, Aug 16, 2017 at 11:13:07AM +0200, Heiko Carstens wrote:
>> [ 5968.010352] WARNING: CPU: 54 PID: 7 at kernel/workqueue.c:2041 process_one_work+0x6d4/0x718
>>
>> (I don't remember we have seen the warning above in the first report) and then
>>
>> [ 5968.010913] Kernel panic - not syncing: preempt check
>> [ 5968.010919] CPU: 54 PID: 7 Comm: mm_percpu_wq Tainted: G W 4.13.0-rc4-dirty #3
>> [ 5968.010923] Hardware name: IBM 3906 M03 703 (z/VM 6.4.0)
>> [ 5968.010927] Workqueue: mm_percpu_wq vmstat_update
>> [ 5968.010933] Call Trace:
>> [ 5968.010937] ([<0000000000113fbe>] show_stack+0x8e/0xe0)
>> [ 5968.010942] [<0000000000a514be>] dump_stack+0x96/0xd8
>> [ 5968.010947] [<000000000014302a>] panic+0x102/0x248
>> [ 5968.010952] [<00000000007836d8>] check_preemption_disabled+0xf8/0x110
>> [ 5968.010956] [<00000000002ee8e2>] refresh_cpu_vm_stats+0x1b2/0x400
>> [ 5968.010961] [<00000000002ef8be>] vmstat_update+0x2e/0x98
>> [ 5968.010965] [<0000000000166374>] process_one_work+0x3d4/0x718
>> [ 5968.010970] [<000000000016708c>] rescuer_thread+0x214/0x390
>> [ 5968.010974] [<000000000016edbc>] kthread+0x16c/0x180
>> [ 5968.010978] [<0000000000a7273a>] kernel_thread_starter+0x6/0xc
>> [ 5968.010983] [<0000000000a72734>] kernel_thread_starter+0x0/0xc
>>
>> On cpu 54 we have mm_percpu_wq with:
>>
>> nr_cpus_allowed = 0x1,
>> cpus_allowed = {
>> bits = {0x4, 0x0, 0x0, 0x0}
>> },
>>
>> We also have CONFIG_NR_CPUS=256, so the above translates to cpu 2, which
>> obviously is not cpu 54 and explains the preempt check warning.
>
> Looks like the same issue Paul was hitting.
>
> http://lkml.kernel.org/r/1501541603-4456-3-git-send-email-paulmck@linux.vnet.ibm.com
>
> Can you see whether the above patch helps?
>
> Thanks.
>

Hello,

Please excuse my late response; I had run into another kernel panic
that stopped my test case execution every time, but I have now managed
to work around that problem. The results with the patch look good: with
it applied I was not able to reproduce the problem within 24 hours of
runtime, whereas previously I could trigger it within 2-4 hours.

Kind regards

André
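
P.S. For anyone decoding the cpus_allowed value quoted above: the
following is a minimal userspace sketch (illustrative only, not kernel
code). The bitmap words {0x4, 0x0, 0x0, 0x0} are copied from the dump,
and the loop simply scans for set bits the same way the kernel's
cpumask helpers walk the bitmap:

/* Minimal userspace sketch: decode the cpus_allowed bitmap from the dump. */
#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

int main(void)
{
	/* Values copied from the crash dump: bits = {0x4, 0x0, 0x0, 0x0}. */
	unsigned long bits[4] = { 0x4, 0x0, 0x0, 0x0 };
	/* 4 words of 64-bit longs = 256 bits, matching CONFIG_NR_CPUS=256. */
	unsigned long nr_cpus = 4 * BITS_PER_LONG;

	for (unsigned long cpu = 0; cpu < nr_cpus; cpu++) {
		if (bits[cpu / BITS_PER_LONG] & (1UL << (cpu % BITS_PER_LONG)))
			printf("allowed cpu: %lu\n", cpu);
	}
	return 0;
}

Compiled and run, this prints "allowed cpu: 2": the worker was bound to
CPU 2 only, yet it was executing on CPU 54 with preemption enabled,
which is why the preempt check fires.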
