Subject: Another preempt folding issue?
Hi Peter,

I am currently looking at a weird issue that manifests itself when trying to run
kvm-enabled qemu on an i386 host (v3.13 kernel; potentially important, the CPU is
64-bit capable, so qemu-system-x86_64 gets called). Sooner or later this causes
softlockup messages on the host. I tracked this down to __vcpu_run in
arch/x86/kvm/x86.c, which contains a loop that in this case never seems to make
progress or exit.

What I found is that vcpu_enter_guest will exit quickly, without causing the loop
to exit, when need_resched() is true. Looking at a crash dump I took, this was
the case (thread_info->flags had TIF_NEED_RESCHED set). So, after the immediate
return, __vcpu_run executes the following code:

	if (need_resched()) {
		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
		kvm_resched(vcpu); // now cond_resched();
		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
	}

kvm_resched() basically ends up doing a cond_resched(), which now checks the
preempt_count to be 0. If it is zero it will do the reschedule; otherwise it
just does nothing. Looking at the percpu variables in the dump, I saw that
the preempt_count was 0x80000000 (actually it was 0x80110000, but that was me
triggering the kexec crashdump with sysrq-c).

I saw that there have been some changes in the upstream kernel and picked the
following patches:
1) x86, acpi, idle: Restructure the mwait idle routines
2) x86, idle: Use static_cpu_has() for CLFLUSH workaround, add barriers
3) sched/preempt: Fix up missed PREEMPT_NEED_RESCHED folding
4) sched/preempt/x86: Fix voluntary preempt for x86

Patches 1) and 2) were picked as dependencies of 3) (to get the mwait function
correct and into the other file). Finally, 4) fixes up 3). [Maybe worth
suggesting these for 3.13.y stable.]

Still, with all of those applied I got the softlockup. Since I knew from the dump
that something is wrong with the folding, I took the pragmatic approach and added
the following:

	if (need_resched()) {
		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
+		preempt_fold_need_resched();
		kvm_resched(vcpu); // now cond_resched();
		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
	}

And this lets the kvm guest run without the softlockups! However, I am less than
convinced that this is the right thing to do. Somehow, something done when
converting the preempt_count into a percpu variable has caused at least the i386
side to get into this mess (there has not been any whining about 64-bit). I just
fail to see what.
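
For reference, my understanding of the helper I am calling above (quoted from
memory rather than from the tree, so treat it as a sketch) is that it simply
re-does the missed fold:

	/* sketch from memory -- see include/linux/preempt.h for the real thing */
	static __always_inline void preempt_fold_need_resched(void)
	{
		if (tif_need_resched())
			set_preempt_need_resched();
	}

i.e. if TIF_NEED_RESCHED is set on the current task, propagate that into the
percpu preempt_count so that a later cond_resched() actually notices it.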

-Stefan
