Subject: TASK_DEAD && ttwu() again (Was: ensure arch_ptrace/ptrace_request can never race with SIGKILL)
To avoid confusion: this is not connected to the ptrace_freeze_traced()
changes...

With or without these changes, there is another problem: a spurious
wakeup from try_to_wake_up(TASK_NORMAL), which doesn't necessarily see
the "right" task->state.

As for ptrace_stop() this is purely theoretical, but I thought that
perhaps it makes sense to extract the "mb + unlock_wait(pi_lock)" code
from do_exit() into a generic helper, say
set_current_state_sync_because_we_cant_tolerate_a_wrong_wakeup().
But when I look at this code again, I am not sure it is right.
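For concreteness, such a helper would roughly be the current do_exit()
sequence lifted out (a sketch only; the name is the one suggested above,
and, as discussed below, the sequence itself may be incomplete):

	static inline void
	set_current_state_sync_because_we_cant_tolerate_a_wrong_wakeup(long state)
	{
		/* the mb + unlock_wait code quoted from do_exit() below */
		smp_mb();
		raw_spin_unlock_wait(&current->pi_lock);
		current->state = state;
	}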

Let me recall the problem. To oversimplify, we have

	try_to_wake_up(task, state)
	{
		lock(task->pi_lock);

		if (task->state & state)
			task->state = TASK_RUNNING;

		unlock(task->pi_lock);
	}

And this means that a task doing

	current->state = STATE_1;
	// no schedule() in between
	current->state = STATE_2;
	schedule();

can actually be woken up by try_to_wake_up(STATE_1) even if it already
sleeps in STATE_2.
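
For concreteness, a possible interleaving (a sketch only, using the
simplified try_to_wake_up() above):

	the task itself:	current->state = STATE_1;
	the waker:		lock(task->pi_lock);
	the waker:		// sees ->state == STATE_1, the check passes
	the task itself:	current->state = STATE_2;
	the task itself:	schedule();	// sleeps in STATE_2
	the waker:		task->state = TASK_RUNNING;	// spurious wakeup
	the waker:		unlock(task->pi_lock);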

Usually this is fine, any wait_event-like code should be careful and
recheck its condition anyway. But sometimes we can't afford the false
wakeup, and that is why do_exit() roughly does

	do_exit()
	{
		// down_read(mmap_sem) can do this without schedule()
		current->state = TASK_UNINTERRUPTIBLE;
		current->state = TASK_RUNNING;

		mb();

		spin_unlock_wait(current->pi_lock);

		current->state = TASK_DEAD;
		schedule();
	}

This should ensure that any try_to_wake_up() which takes ->pi_lock after
the unlock_wait() can't see state == TASK_UNINTERRUPTIBLE, while a
try_to_wake_up() which already holds ->pi_lock is waited for, and I think
this works.

But. Somehow we missed the fact (I think) that we also need to serialize
unlock_wait() and "state = TASK_DEAD"; nothing orders this plain store
against the unlock_wait(). The code above can be reordered,

	do_exit()
	{
		// down_read(mmap_sem) can do this without schedule()
		current->state = TASK_UNINTERRUPTIBLE;
		current->state = TASK_RUNNING;

		mb();

		current->state = TASK_DEAD;
		// !!! ttwu() can change ->state here !!!
		spin_unlock_wait(current->pi_lock);

		schedule();
	}

and we have the same problem again. So _I think_ we need another
mb() after unlock_wait()?
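
IOW, it seems the exit path needs both barriers (a sketch, matching the
patch at the end of this message):

	current->state = TASK_RUNNING;

	mb();
	spin_unlock_wait(current->pi_lock);
	mb();	// don't let "state = TASK_DEAD" leak before unlock_wait()

	current->state = TASK_DEAD;
	schedule();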

And, afaics, in theory we can't simply move the current mb() down, after
unlock_wait(): we still need a barrier between the "state = TASK_RUNNING"
store and the unlock_wait() itself. (Again, only in theory; if nothing
else we should have the implicit barriers after we played with ->state in
the past.)

Or perhaps we should modify ttwu_do_wakeup() to not blindly set RUNNING,
say, cmpxchg(old_state, RUNNING). But this is not simple/nice.
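
For concreteness, that would mean replacing the blind
"p->state = TASK_RUNNING" in ttwu_do_wakeup() with something like this
(a sketch only; old_state is hypothetical here and would have to be the
value the waker actually matched under ->pi_lock):

	if (cmpxchg(&p->state, old_state, TASK_RUNNING) != old_state)
		return;		/* ->state was changed in between, don't wake */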

Or did I miss something?

Oleg.


--- x/kernel/exit.c
+++ x/kernel/exit.c
@@ -869,8 +869,15 @@ void do_exit(long code)
 	 * To avoid it, we have to wait for releasing tsk->pi_lock which
 	 * is held by try_to_wake_up()
 	 */
+
 	smp_mb();
 	raw_spin_unlock_wait(&tsk->pi_lock);
+	/*
+	 * The first mb() ensures that, after it, try_to_wake_up() must see
+	 * state == TASK_RUNNING. We need another one to ensure that we set
+	 * TASK_DEAD only after ->pi_lock is really unlocked.
+	 */
+	smp_mb();
 
 	/* causes final put_task_struct in finish_task_switch(). */
 	tsk->state = TASK_DEAD;

