Subject: Re: [Xen-devel] [patch 1/4] hotplug: Prevent alloc/free of irq descriptors during cpu up/down
On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> On 07/14/2015 01:32 PM, Thomas Gleixner wrote:
> > On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> > > On 07/14/2015 11:44 AM, Thomas Gleixner wrote:
> > > > On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> > > > > > Prevent allocation and freeing of interrupt descriptors across cpu
> > > > > > hotplug.
> > > > > This breaks Xen guests that allocate interrupt descriptors in
> > > > > .cpu_up().
> > > > And where exactly does XEN allocate those descriptors?
> > > xen_cpu_up()
> > > xen_setup_timer()
> > > bind_virq_to_irqhandler()
> > > bind_virq_to_irq()
> > > xen_allocate_irq_dynamic()
> > > xen_allocate_irqs_dynamic()
> > > irq_alloc_descs()
> > >
> > >
> > > There is also a similar path via xen_cpu_up() -> xen_smp_intr_init()
> > Sigh.
> >
> > > >
> > > > > Any chance this locking can be moved into arch code?
> > > > No.
> > The issue here is that all architectures need that protection, and only
> > Xen does irq allocations in cpu_up.
> >
> > So moving that protection into architecture code is not really an
> > option.
> >
> > > > > Otherwise we will need to have something like arch_post_cpu_up()
> > > > > after the lock is released.
> > I'm not sure that this will work. You probably want to do this in the
> > cpu prepare stage, i.e. before calling __cpu_up().
>
> For PV guests (the ones that use xen_cpu_up()) it will work either before or
> after __cpu_up(). At least my (somewhat limited) testing didn't show any
> problems so far.
>
> However, HVM CPUs use xen_hvm_cpu_up(), and if you read the comments there
> you will see that xen_smp_intr_init() needs to be called before
> native_cpu_up() but xen_init_lock_cpu() (which eventually calls
> irq_alloc_descs()) needs to be called after.
>
> I think I can split xen_init_lock_cpu() so that the part that needs to be
> called after will avoid going into irq core code. And then the rest will go
> into arch_cpu_prepare().
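
To make the failure mode concrete: sparse_irq_lock is a plain,
non-recursive mutex, so an irq_alloc_descs() call from inside the
locked bringup section simply blocks on the lock its own caller
already holds. A minimal userspace model of that pattern (illustration
only; a pthread mutex stands in for the kernel mutex, and the model_*
names are made up):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t sparse_irq_lock;

/* Stands in for irq_alloc_descs(), which takes sparse_irq_lock. */
static void model_irq_alloc_descs(void)
{
	int err = pthread_mutex_lock(&sparse_irq_lock);

	if (err)	/* EDEADLK: this context already holds the lock */
		printf("irq_alloc_descs: %s\n", strerror(err));
	else
		pthread_mutex_unlock(&sparse_irq_lock);
}

/* Stands in for _cpu_up() with the generic protection in place. */
static void model_cpu_up(void)
{
	pthread_mutex_lock(&sparse_irq_lock);	/* irq_lock_sparse() */
	model_irq_alloc_descs();	/* __cpu_up() -> xen_cpu_up() -> ... */
	pthread_mutex_unlock(&sparse_irq_lock);	/* irq_unlock_sparse() */
}

int main(void)
{
	/*
	 * Error-checking type so the self-deadlock reports EDEADLK
	 * instead of hanging forever, as the kernel mutex does.
	 */
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&sparse_irq_lock, &attr);

	model_cpu_up();
	return 0;
}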

I think we should revisit this for 4.3. For 4.2 we can do the trivial
variant and move the locking into native_cpu_up(), i.e. x86 only. x86
is the only arch on which such wreckage has been seen in the wild, but
we should have that protection for all archs in the long run.

Patch below should fix the issue.

Thanks,

tglx
---
commit d4a969314077914a623f3e2c5120cd2ef31aba30
Author: Thomas Gleixner <tglx@linutronix.de>
Date: Tue Jul 14 22:03:57 2015 +0200

genirq: Revert sparse irq locking around __cpu_up() and move it to x86 for now

Boris reported that the sparse_irq protection around __cpu_up() in the
generic code causes a regression on Xen. Xen allocates interrupt
descriptors (and more) in the xen_cpu_up() function, so it deadlocks
on sparse_irq_lock.

There is no simple fix for this and we really should have the
protection for all architectures, but for now the only solution is to
move it to x86 where actual wreckage due to the lack of protection has
been observed.

Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Fixes: a89941816726 'hotplug: Prevent alloc/free of irq descriptors during cpu up/down'
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: xiao jin <jin.xiao@intel.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index d3010aa79daf..b1f3ed9c7a9e 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -992,8 +992,17 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)

 	common_cpu_up(cpu, tidle);
 
+	/*
+	 * We have to walk the irq descriptors to setup the vector
+	 * space for the cpu which comes online. Prevent irq
+	 * alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
 	err = do_boot_cpu(apicid, cpu, tidle);
+
 	if (err) {
+		irq_unlock_sparse();
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
 		return -EIO;
 	}
@@ -1011,6 +1020,8 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 		touch_nmi_watchdog();
 	}
 
+	irq_unlock_sparse();
+
 	return 0;
 }

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6a374544d495..5644ec5582b9 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -527,18 +527,9 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
 		goto out_notify;
 	}
 
-	/*
-	 * Some architectures have to walk the irq descriptors to
-	 * setup the vector space for the cpu which comes online.
-	 * Prevent irq alloc/free across the bringup.
-	 */
-	irq_lock_sparse();
-
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
 
-	irq_unlock_sparse();
-
 	if (ret != 0)
 		goto out_notify;
 	BUG_ON(!cpu_online(cpu));
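
For reference, the helpers the patch moves are just thin wrappers
around the sparse_irq_lock mutex in kernel/irq/irqdesc.c (quoting from
memory, the CONFIG_SPARSE_IRQ variant):

static DEFINE_MUTEX(sparse_irq_lock);

void irq_lock_sparse(void)
{
	mutex_lock(&sparse_irq_lock);
}

void irq_unlock_sparse(void)
{
	mutex_unlock(&sparse_irq_lock);
}

That's why taking them in native_cpu_up() is fine on its own, while
taking them around a __cpu_up() which itself ends up in
irq_alloc_descs() is not: the mutex is not recursive.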
