Subject: Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
From: Steven Rostedt <rostedt@goodmis.org>
On Fri, 2013-03-01 at 11:17 +0100, Thomas Gleixner wrote:

> > Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> > kernel/futex.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/futex.c b/kernel/futex.c
> > index 9e26e87..2b676a2 100644
> > --- a/kernel/futex.c
> > +++ b/kernel/futex.c
> > @@ -562,16 +562,17 @@ void exit_pi_state_list(struct task_struct *curr)
> >
> > spin_lock(&hb->lock);
> >
> > - raw_spin_lock_irq(&curr->pi_lock);
> > /*
> > * We dropped the pi-lock, so re-check whether this
> > * task still owns the PI-state:
> > */
>
> Did you read and understand this comment?
>
> The logic here is
>
> raw_spin_lock_irq(&curr->pi_lock);
> next = head->next;
> raw_spin_unlock_irq(&curr->pi_lock);
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
> if (head->next != next)
>
> We must drop pi_lock before locking the hash bucket lock. That opens a
> window for another task to modify the list. So we must relock pi_lock
> and verify whether head->next is unmodified. If it changed, we need to
> reevaluate.
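
For reference, the loop skeleton this dance lives in looks roughly like
this (condensed from the exit_pi_state_list() hunks quoted in this
thread; the pi_state and hash bucket lookup details are omitted):

	raw_spin_lock_irq(&curr->pi_lock);
	while (!list_empty(head)) {
		next = head->next;
		/* pick up pi_state and hash bucket hb from next */
		raw_spin_unlock_irq(&curr->pi_lock);

		spin_lock(&hb->lock);

		raw_spin_lock_irq(&curr->pi_lock);
		/*
		 * We dropped the pi-lock, so re-check whether this
		 * task still owns the PI-state:
		 */
		if (head->next != next) {
			/* this unlock under pi_lock is what breaks
			 * on RT, see the fix below */
			spin_unlock(&hb->lock);
			continue;
		}
		/* detach the pi_state, then drop both locks and
		 * retake pi_lock for the next iteration */
	}
	raw_spin_unlock_irq(&curr->pi_lock);
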
>
> > if (head->next != next) {
> > spin_unlock(&hb->lock);
> > + raw_spin_lock_irq(&curr->pi_lock);
> > continue;
> > }
> >
> > + raw_spin_lock_irq(&curr->pi_lock);
> > WARN_ON(pi_state->owner != curr);
> > WARN_ON(list_empty(&pi_state->list));
> > list_del_init(&pi_state->list);
>
> So both your patch description and your patch are patently wrong.
> Correct solution below.
>
> Thanks,
>
> tglx
> ---
> futex: Ensure lock/unlock symmetry versus pi_lock and hash bucket lock
>
> In exit_pi_state_list() we have the following locking construct:
>
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
>
> ...
> spin_unlock(&hb->lock);
>
> In !RT this works, but on RT the migrate_enable() function, which is
> called from spin_unlock(), sees atomic context due to the held pi_lock
> and just decrements the migrate_disable_atomic counter of the
> task. The next call to migrate_disable() then sees the counter being
> negative and issues a warning. That check should be in
> migrate_enable() already.
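
Schematically, pieced together from the description above (not the
actual RT implementation), the debug accounting looks like:

	void migrate_enable(void)
	{
		if (in_atomic()) {
			/* spin_unlock() ran in atomic context, e.g.
			 * with pi_lock held: only adjust the counter */
			current->migrate_disable_atomic--;
			return;
		}
		/* really re-enable migration */
	}

	void migrate_disable(void)
	{
		if (in_atomic()) {
			current->migrate_disable_atomic++;
			return;
		}
		/* the unbalanced unlock above left the counter
		 * negative, so this warns: */
		WARN_ON_ONCE(current->migrate_disable_atomic);
		/* really disable migration */
	}
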
>
> Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
> pi_lock afterwards. This is safe as the loop code reevaluates head
> again under pi_lock.
>
> Reported-by: Yong Zhang <yong.zhang@windriver.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>
> diff --git a/kernel/futex.c b/kernel/futex.c
> index f15f0e4..c795c9c 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -568,7 +568,9 @@ void exit_pi_state_list(struct task_struct *curr)
> * task still owns the PI-state:
> */
> if (head->next != next) {

Can we add the following comment here:

/*
 * Normal spin_lock() and matching spin_unlock() must not be
 * inside a raw_spin_lock() or any raw preempt disabled
 * context.
 */

Or something similar. Otherwise, people looking at this may think that
the unlock and relock are unnecessary. Or they may just not understand
it in general. Things like this deserve comments, or we ourselves may
forget why we did it ;-)

We probably should document somewhere (if we haven't already; I haven't
looked) that if a spin_trylock() is taken inside a preempt disabled
section, the entire lock-held region, all the way to the matching
unlock, must stay inside that preempt disabled section. No normal
spin_lock()s, and their matching unlocks, should be in any preempt
disabled section, although a preempt disabled section may exist between
the two.
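
To illustrate (schematic, not real code), what bit us in
exit_pi_state_list() is the asymmetric variant:

	/* Bad on RT: sleeping lock taken outside the atomic
	 * section, but released inside it */
	spin_lock(&hb->lock);			/* migrate_disable() */
	raw_spin_lock_irq(&curr->pi_lock);	/* atomic from here */
	spin_unlock(&hb->lock);			/* migrate_enable() in atomic context */
	raw_spin_unlock_irq(&curr->pi_lock);

	/* OK: the preempt disabled section nests entirely between
	 * the sleeping lock/unlock pair */
	spin_lock(&hb->lock);
	raw_spin_lock_irq(&curr->pi_lock);
	raw_spin_unlock_irq(&curr->pi_lock);
	spin_unlock(&hb->lock);
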

-- Steve

> + raw_spin_unlock_irq(&curr->pi_lock);
> spin_unlock(&hb->lock);
> + raw_spin_lock_irq(&curr->pi_lock);
> continue;
> }
>



