Date: 2008-02-23
Subject: copy_user_generic_string and trace_hardirqs_on_thunk loop in qemu ??

Hi

I'm looking for some help with this problem - while doing some other
work, my test script seems to get stuck in a loop in a really weird place.

Everything happens with the kernel running in qemu.

When I run dmsetup status, dmsetup occasionally starts taking 100% CPU
and cannot be killed. This happens only when there is high disk load
generated by some other code (which has some known bugs in its locks
and semaphores).
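
(For reference, "dmsetup status" essentially boils down to a
DM_TABLE_STATUS ioctl on /dev/mapper/control. Below is a minimal C
sketch of the kind of loop my test script drives - the device name
"test-dev" is just a placeholder, and the buffer size matches the
~16KB copy mentioned further down. The kernel copies data_size bytes
from this buffer, which is the copy_from_user visible in the traces.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dm-ioctl.h>

int main(void)
{
	/* union keeps the buffer correctly aligned for struct dm_ioctl */
	union {
		struct dm_ioctl dmi;
		char buf[16 * 1024];
	} u;
	int fd = open("/dev/mapper/control", O_RDWR);

	if (fd < 0) {
		perror("open /dev/mapper/control");
		return 1;
	}

	for (;;) {	/* the "dmsetup status" loop */
		memset(&u, 0, sizeof(u));
		u.dmi.version[0] = DM_VERSION_MAJOR;
		u.dmi.version[1] = DM_VERSION_MINOR;
		u.dmi.version[2] = DM_VERSION_PATCHLEVEL;
		u.dmi.data_size = sizeof(u);		/* room for the reply */
		u.dmi.data_start = sizeof(struct dm_ioctl);
		strncpy(u.dmi.name, "test-dev", sizeof(u.dmi.name) - 1);

		if (ioctl(fd, DM_TABLE_STATUS, &u.dmi) < 0) {
			perror("DM_TABLE_STATUS");
			break;
		}
	}

	close(fd);
	return 0;
}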

But from the task trace I always get this call trace for the running dmsetup task:

[ 140.106102] Call Trace:
[ 140.106102] [<ffffffff812eda0e>] ? thread_return+0x8a/0x50c
[ 140.106102] [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 140.106102] [<ffffffff8100cdfc>] ? restore_args+0x0/0x30
[ 140.106102] [<ffffffff8117b677>] ? copy_user_generic_string+0x17/0x40
[ 140.106102] [<ffffffff88044534>] ? :dm_mod:copy_params+0x94/0xf0
[ 140.106102] [<ffffffff81047e21>] ? __capable+0x11/0x30
[ 140.106102] [<ffffffff880446f9>] ? :dm_mod:ctl_ioctl+0x169/0x260
[ 140.106102] [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 140.106102] [<ffffffff8804481d>] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20

[ 141.324472] Call Trace:
[ 141.324472] [<ffffffff812eda0e>] ? thread_return+0x8a/0x50c
[ 141.324472] [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 141.324472] [<ffffffff880433e0>] ? :dm_mod:table_status+0x0/0x90
[ 141.324472] [<ffffffff8100cdfc>] ? restore_args+0x0/0x30
[ 141.324472] [<ffffffff8117b677>] ? copy_user_generic_string+0x17/0x40
[ 141.324472] [<ffffffff88044534>] ? :dm_mod:copy_params+0x94/0xf0
[ 141.324472] [<ffffffff81047e21>] ? __capable+0x11/0x30
[ 141.324472] [<ffffffff880446f9>] ? :dm_mod:ctl_ioctl+0x169/0x260
[ 141.324472] [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 141.324472] [<ffffffff8804481d>] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
[ 141.324472] [<ffffffff810f63b2>] ? compat_sys_ioctl+0x182/0x3d0

[ 417.993881] Call Trace:
[ 417.993881] [<ffffffff812eda0e>] ? thread_return+0x8a/0x50c
[ 417.993881] [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[ 417.993881] [<ffffffff812f1859>] ? error_exit+0x0/0xb8
[ 417.993881] [<ffffffff8117b677>] ? copy_user_generic_string+0x17/0x40
[ 417.993881] [<ffffffff88044534>] ? :dm_mod:copy_params+0x94/0xf0
[ 417.993881] [<ffffffff81047e21>] ? __capable+0x11/0x30
[ 417.993881] [<ffffffff880446f9>] ? :dm_mod:ctl_ioctl+0x169/0x260
[ 417.993881] [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a

When I put a printk before the call to copy_params in dm-ioctl.c to
print the passed parameters (about 16KB are copied) and add another
printk after the call finishes, it really is the copy_from_user call
that stalls between these two printks.
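
(Moving the printks inside copy_params pins it on the second
copy_from_user. The function body below is paraphrased from memory of
the 2.6-era drivers/md/dm-ioctl.c, so details may differ slightly; the
placement of the printks is what matters:)

static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl **param)
{
	struct dm_ioctl tmp, *dmi;

	/* first copy just the fixed-size header */
	if (copy_from_user(&tmp, user, sizeof(tmp) - sizeof(tmp.data)))
		return -EFAULT;

	if (tmp.data_size < (sizeof(tmp) - sizeof(tmp.data)))
		return -EINVAL;

	dmi = vmalloc(tmp.data_size);
	if (!dmi)
		return -ENOMEM;

	printk(KERN_DEBUG "dm: copying %u bytes from user\n",
	       tmp.data_size);			/* always printed */

	/* the ~16KB copy - this is where the task stalls */
	if (copy_from_user(dmi, user, tmp.data_size)) {
		vfree(dmi);
		return -EFAULT;
	}

	printk(KERN_DEBUG "dm: copy done\n");	/* not printed while hung */

	*param = dmi;
	return 0;
}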

When I run a second dmsetup command in parallel, the first copy
finishes, I can see 'done' printed, and the dmsetup status loop starts
running again.

The machine is 64-bit and the qemu kernel is 64-bit; the dmsetup
application is 32-bit (but I also checked with a 64-bit build - no
difference).

The kernel is compiled with:
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT=y
CONFIG_DEBUG_PREEMPT=y

Does anyone have an explanation for this weird behaviour?
Is the problem in qemu-kvm, kvm, or the Linux kernel?

Zdenek

