Subject: [PATCHSET 0/2] PF_IO_WORKER signal tweaks
Hi,

I've been trying to ensure that we do the right thing wrt signals and
PF_IO_WORKER threads, and I think there are two cases we need to handle
explicitly:

1) Just don't allow signals to them in general. We do mask everything
as blocked, outside of SIGKILL, so things like wants_signal() will
never return true for them. But it's still possible to send them a
signal via (ultimately) group_send_sig_info(). This will then deliver
the signal to the original io_uring-owning task, which seems a bit
unexpected. So just don't allow them in general; a rough sketch of
both cases follows after case 2.

2) STOP is done a bit differently, and we should not allow that either.
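
For reference, the two cases boil down to checks along the lines of the
sketch below. This is just an untested illustration with made-up helper
names (io_worker_ignores_signal(), io_worker_rejects_stop()), not the
actual hunks from the two patches:

/*
 * Untested sketch of cases 1 and 2 above. The helpers are illustrative
 * only; where such checks would actually sit in kernel/signal.c is a
 * separate question.
 */
#include <linux/sched.h>
#include <linux/signal.h>

/*
 * Case 1: refuse any signal aimed directly at a PF_IO_WORKER thread,
 * so the group_send_sig_info() path never queues it against the worker.
 * SIGKILL stays deliverable, matching the existing blocked mask.
 */
static inline bool io_worker_ignores_signal(struct task_struct *t, int sig)
{
	return (t->flags & PF_IO_WORKER) && sig != SIGKILL;
}

/*
 * Case 2: never let a PF_IO_WORKER take part in job control stop, i.e.
 * refuse the stop signals (SIGSTOP, SIGTSTP, SIGTTIN, SIGTTOU) as well.
 */
static inline bool io_worker_rejects_stop(struct task_struct *t, int sig)
{
	return sig_kernel_stop(sig) && (t->flags & PF_IO_WORKER);
}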

Outside of that, I've been looking at same_thread_group(). This will
currently return true for an io_uring task and its IO workers, since
they do share ->signal. From looking at the kernel users of this, that
actually seems OK for the cases I checked. One is accounting related,
which we obviously want, and others are related to permissions between
tasks. FWIW, I ran with the below and didn't observe any ill effects,
but I'd like someone to actually think about and verify that PF_IO_WORKER
same_thread_group() usage is sane.

diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 3f6a0fcaa10c..a580bc0f8aa3 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -667,10 +667,17 @@ static inline bool thread_group_leader(struct task_struct *p)
 	return p->exit_signal >= 0;
 }
 
+static inline
+bool same_thread_group_account(struct task_struct *p1, struct task_struct *p2)
+{
+	return p1->signal == p2->signal;
+}
+
 static inline
 bool same_thread_group(struct task_struct *p1, struct task_struct *p2)
 {
-	return p1->signal == p2->signal;
+	return same_thread_group_account(p1, p2) &&
+		!((p1->flags | p2->flags) & PF_IO_WORKER);
 }
 
 static inline struct task_struct *next_thread(const struct task_struct *p)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 5f611658eeab..625110cacc2a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -307,7 +307,7 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
 	 * those pending times and rely only on values updated on tick or
	 * other scheduler action.
	 */
-	if (same_thread_group(current, tsk))
+	if (same_thread_group_account(current, tsk))
		(void) task_sched_runtime(current);
 
	rcu_read_lock();
