Subject: Deadlock between cpu_hotplug_begin and cpu_add_remove_lock

This arises out of a tester's report that offlining a CPU never
completed on a system under test. That was on a POWER8 running a
3.10.x kernel, but the issue is still present in mainline AFAICS.

What I found when I looked at the system was this:

* There was a ppc64_cpu process stuck inside cpu_hotplug_begin(),
called from _cpu_down(), from cpu_down(). This process was holding
the cpu_add_remove_lock mutex, since cpu_down() calls
cpu_maps_update_begin() before calling _cpu_down(). It was stuck
there because cpu_hotplug.refcount == 1.

* There was an mdadm process trying to acquire the cpu_add_remove_lock
mutex inside register_cpu_notifier(), called from
raid5_alloc_percpu() in drivers/md/raid5.c. That process had
previously called get_online_cpus(), which is why cpu_hotplug.refcount
was 1.

Result: deadlock.
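In userspace terms the cycle can be reduced to the following
(illustrative only; the pthread objects merely stand in for
cpu_add_remove_lock and cpu_hotplug.refcount, and the two threads
play ppc64_cpu and mdadm). Build it with gcc -pthread and both
threads block forever at the same two points:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t add_remove_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t refcount_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  refcount_zero   = PTHREAD_COND_INITIALIZER;
static int refcount;			/* cpu_hotplug.refcount */

/* mdadm's path: get_online_cpus(), then register_cpu_notifier() */
static void *mdadm_side(void *unused)
{
	pthread_mutex_lock(&refcount_lock);
	refcount++;			/* get_online_cpus() */
	pthread_mutex_unlock(&refcount_lock);

	sleep(1);			/* let ppc64_cpu grab the mutex */

	/* register_cpu_notifier() -> cpu_maps_update_begin() */
	pthread_mutex_lock(&add_remove_lock);	/* blocks forever */
	pthread_mutex_unlock(&add_remove_lock);
	return NULL;
}

/* ppc64_cpu's path: cpu_down() */
static void *ppc64_cpu_side(void *unused)
{
	/* cpu_maps_update_begin() */
	pthread_mutex_lock(&add_remove_lock);

	/* cpu_hotplug_begin(): wait for readers to drain */
	pthread_mutex_lock(&refcount_lock);
	while (refcount > 0)	/* never drops: the holder is about to
				 * block on add_remove_lock */
		pthread_cond_wait(&refcount_zero, &refcount_lock);
	pthread_mutex_unlock(&refcount_lock);

	pthread_mutex_unlock(&add_remove_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	puts("expect a deadlock; neither join ever returns");
	pthread_create(&b, NULL, mdadm_side, NULL);
	usleep(100 * 1000);		/* ensure refcount == 1 first */
	pthread_create(&a, NULL, ppc64_cpu_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}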

Thus it seems that the following code is not safe:

get_online_cpus();
register_cpu_notifier(&...);
put_online_cpus();

There are a few different places that do that sort of thing; besides
drivers/md/raid5.c, there are instances in arch/x86/kernel/cpu,
arch/x86/oprofile, drivers/cpufreq/acpi-cpufreq.c,
drivers/oprofile/nmi_timer_int.c and kernel/trace/ring_buffer.c.

My question is this: is it reasonable to call register_cpu_notifier()
inside a get/put_online_cpus() block? If so, the deadlock needs to be
fixed; if not, the callers need to be fixed, and the restriction
should be documented.
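
For what it's worth, if the restriction stands, one ordering that
avoids taking cpu_add_remove_lock under get_online_cpus() would be to
register the notifier first and only then set up the CPUs that are
already online. A sketch, not a tested patch -- my_notifier and
my_setup_cpu() are placeholders, and my_setup_cpu() has to be
idempotent, because a CPU that comes online between the two steps is
seen by both the notifier callback and the loop:

register_cpu_notifier(&my_notifier);

get_online_cpus();
for_each_online_cpu(cpu)
	my_setup_cpu(cpu);	/* must tolerate running twice */
put_online_cpus();

The converse order (set up first, register afterwards) can instead
miss a CPU that comes online in the window between put_online_cpus()
and register_cpu_notifier().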

Regards,
Paul.

