Date: Wed, 12 Sep 2018 18:33:35 +0200
From: Oleg Nesterov <>
Subject: [PATCH 1/2] introduce for_each_process_thread_break() and for_each_process_thread_continue()
Usage:
	rcu_read_lock();
	for_each_process_thread(p, t) {
		do_something_slow(p, t);

		if (SPENT_TOO_MUCH_TIME) {
			for_each_process_thread_break(p, t);
			rcu_read_unlock();
			schedule();
			rcu_read_lock();
			for_each_process_thread_continue(&p, &t);
		}
	}
	rcu_read_unlock();
This looks similar to rcu_lock_break(), but works much better; the next patch changes check_hung_uninterruptible_tasks() to use these new helpers. My real target, though, is show_state_filter(), which can trivially lead to a lockup.
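For illustration only (not part of this patch, untested): show_state_filter() could end up looking roughly like the sketch below, with need_resched() standing in for the SPENT_TOO_MUCH_TIME condition above. The loop body is simplified, and whether every caller of show_state_filter() can actually sleep at that point is a separate question.

	void show_state_filter(unsigned long state_filter)
	{
		struct task_struct *g, *p;

		rcu_read_lock();
		for_each_process_thread(g, p) {
			/* simplified body, the real function does more */
			touch_nmi_watchdog();
			if (state_filter_match(state_filter, p))
				sched_show_task(p);

			if (need_resched()) {
				for_each_process_thread_break(g, p);
				rcu_read_unlock();
				cond_resched();
				rcu_read_lock();
				for_each_process_thread_continue(&g, &p);
			}
		}
		rcu_read_unlock();
	}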
Compared to rcu_lock_break(), for_each_process_thread_continue() never gives up: it relies on the fact that both the process and the thread lists are sorted by the task->start_time key. So, for example, even if both the leader and the thread are already dead, we can find the next alive process and continue.
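To make the argument concrete, here is a toy model (an ordinary sorted list, not kernel code; struct node and resume_point() are made up for illustration):

	/* Toy model only: a list kept sorted by a monotonically growing key. */
	struct node {
		u64 key;
		struct list_head entry;
	};

	/* Return the last node with ->key <= cursor_key, or NULL if none. */
	static struct node *resume_point(struct list_head *head, u64 cursor_key)
	{
		struct node *n, *prev = NULL;

		list_for_each_entry(n, head, entry) {
			if (n->key > cursor_key)
				break;
			prev = n;
		}
		return prev;
	}

Returning the predecessor rather than the first newer entry matches how _continue() is used: the for_each_process/for_each_thread iterators advance past the returned task before the loop body runs again.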
Strictly speaking, the for_each_process/for_each_thread loops in _continue() could "SPENT_TOO_MUCH_TIME" by themselves, so perhaps we will add another "max_scan" argument later or do something else (a hypothetical sketch follows). But at least they cannot livelock under heavy fork/exit loads; they are bounded by PID_MAX_DEFAULT in the worst case.
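Purely as a sketch of that idea (the helper name and the max_scan parameter are made up, nothing like this is in the patch), the bounded scan for the process-list case could look like:

	/* Hypothetical sketch only, not part of this patch. */
	static struct task_struct *find_resume_leader(u64 start_time, unsigned int max_scan)
	{
		struct task_struct *prev = &init_task, *next;

		for_each_process(next) {
			/* stop at the first newer task, or when the budget runs out */
			if (next->start_time > start_time || !max_scan--)
				break;
			prev = next;
		}
		return prev;
	}

If the budget runs out we simply return the last candidate found, so the caller may re-visit some tasks, but _continue() itself does a bounded amount of work.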
NOTE: it seems that, contrary to the comment, task_struct->start_time is not really monotonic, and this should probably be fixed. Until then, _continue() might skip more threads with the same ->start_time than necessary.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 include/linux/sched/signal.h | 10 ++++++++++
 kernel/exit.c                | 42 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)
diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 1be3572..1c957d4 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -565,6 +565,16 @@ extern bool current_is_single_threaded(void);
 #define for_each_process_thread(p, t)	\
 	for_each_process(p) for_each_thread(p, t)
 
+static inline void
+for_each_process_thread_break(struct task_struct *p, struct task_struct *t)
+{
+	get_task_struct(p);
+	get_task_struct(t);
+}
+
+extern void
+for_each_process_thread_continue(struct task_struct **, struct task_struct **);
+
 typedef int (*proc_visitor)(struct task_struct *p, void *data);
 void walk_process_tree(struct task_struct *top, proc_visitor, void *);
 
diff --git a/kernel/exit.c b/kernel/exit.c
index 0e21e6d..71380c7 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -319,6 +319,48 @@ void rcuwait_wake_up(struct rcuwait *w)
 	rcu_read_unlock();
 }
 
+void for_each_process_thread_continue(struct task_struct **p_leader,
+				      struct task_struct **p_thread)
+{
+	struct task_struct *leader = *p_leader, *thread = *p_thread;
+	struct task_struct *prev, *next;
+	u64 start_time;
+
+	if (pid_alive(thread)) {
+		/* mt exec could change the leader */
+		*p_leader = thread->group_leader;
+	} else if (pid_alive(leader)) {
+		start_time = thread->start_time;
+		prev = leader;
+
+		for_each_thread(leader, next) {
+			if (next->start_time > start_time)
+				break;
+			prev = next;
+		}
+
+		*p_thread = prev;
+	} else {
+		start_time = leader->start_time;
+		prev = &init_task;
+
+		for_each_process(next) {
+			if (next->start_time > start_time)
+				break;
+			prev = next;
+		}
+
+		*p_leader = prev;
+		/* a new thread can come after that, but this is fine */
+		*p_thread = list_last_entry(&prev->signal->thread_head,
+					    struct task_struct,
+					    thread_node);
+	}
+
+	put_task_struct(leader);
+	put_task_struct(thread);
+}
+
 /*
  * Determine if a process group is "orphaned", according to the POSIX
  * definition in 2.2.2.52.  Orphaned process groups are not to be affected
--
2.5.0