From: Valentin Schneider <>
Subject: [PATCH v2 2/3] x86/intel_rdt: Plug task_work vs task_struct {rmid,closid} update race
Date: Mon, 23 Nov 2020 02:24:32 +0000
Upon moving a task to a new control / monitor group, said task's {closid, rmid} fields are updated *after* triggering the move_myself() task_work callback. If the triggering thread gets preempted, or if the targeted task was already on its way to return to userspace, then move_myself() might be executed before the relevant task's {closid, rmid} fields have been updated.
Update the task_struct's {closid, rmid} tuple *before* invoking task_work_add(). Highlight the required ordering with a pair of comments.
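As a rough userspace analogy of the ordering this patch enforces (an illustrative sketch only, not kernel code: move_task(), target(), work_pending and refcount are made-up stand-ins for the resctrl machinery, using POSIX threads and C11 atomics), the following program contrasts the buggy order (queue the callback, then write the new value) with the fixed order (write the value, take the reference, then queue the callback):

/*
 * Userspace analogy of the {closid, rmid} vs task_work ordering.
 * Illustrative sketch only, not kernel code.  Build: cc -O2 -pthread race.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int work_pending;   /* stands in for task_work_add()        */
static atomic_int closid;         /* stands in for tsk->closid            */
static atomic_int refcount;       /* stands in for rdtgrp->waitcount      */
static int programmed_closid;     /* value "move_myself()" ends up using  */

/* The target task about to return to userspace: run pending work ASAP. */
static void *target(void *arg)
{
	(void)arg;
	while (!atomic_load(&work_pending))
		;  /* spin until the callback is queued */
	programmed_closid = atomic_load_explicit(&closid, memory_order_relaxed);
	return NULL;
}

static int move_task(int fixed)
{
	pthread_t t;

	atomic_store(&work_pending, 0);
	atomic_store_explicit(&closid, 1, memory_order_relaxed);  /* old group */
	pthread_create(&t, NULL, target, NULL);

	if (fixed) {
		/*
		 * Patched order: publish the new value, take the reference,
		 * then queue the work.
		 */
		atomic_store_explicit(&closid, 2, memory_order_relaxed);
		atomic_fetch_add(&refcount, 1);  /* like atomic_inc(&rdtgrp->waitcount) */
		atomic_store(&work_pending, 1);  /* like task_work_add()                */
	} else {
		/*
		 * Buggy order: queue the work first; the write below can lose
		 * the race with the target running its callback.
		 */
		atomic_store(&work_pending, 1);
		atomic_store_explicit(&closid, 2, memory_order_relaxed);
	}

	pthread_join(t, NULL);
	return programmed_closid;
}

int main(void)
{
	printf("buggy ordering: programmed closid %d (wanted 2)\n", move_task(0));
	printf("fixed ordering: programmed closid %d (wanted 2)\n", move_task(1));
	return 0;
}

With the buggy order the first line can print 1 when the target wins the race; with the fixed order the seq_cst store/load pairing on work_pending guarantees the callback sees the new value. In the patch itself, the comments added below point at atomic_inc(&rdtgrp->waitcount) followed by task_work_add() as providing the equivalent guarantee.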
Fixes: e02737d5b826 ("x86/intel_rdt: Add tasks files")
Reviewed-by: James Morse <James.Morse@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 34 ++++++++++++++++----------
 1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index b6b5b95df833..f62d81104fd0 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -524,6 +524,8 @@ static void move_myself(struct callback_head *head)
 	 * If resource group was deleted before this task work callback
 	 * was invoked, then assign the task to root group and free the
 	 * resource group.
+	 *
+	 * See pairing atomic_inc() in __rdtgroup_move_task()
 	 */
 	if (atomic_dec_and_test(&rdtgrp->waitcount) &&
 	    (rdtgrp->flags & RDT_DELETED)) {
@@ -553,14 +555,32 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 	callback = kzalloc(sizeof(*callback), GFP_KERNEL);
 	if (!callback)
 		return -ENOMEM;
-	callback->work.func = move_myself;
+
+	init_task_work(&callback->work, move_myself);
 	callback->rdtgrp = rdtgrp;
 
+	/*
+	 * For ctrl_mon groups move both closid and rmid.
+	 * For monitor groups, can move the tasks only from
+	 * their parent CTRL group.
+	 */
+	if (rdtgrp->type == RDTCTRL_GROUP)
+		tsk->closid = rdtgrp->closid;
+	tsk->rmid = rdtgrp->mon.rmid;
+
 	/*
 	 * Take a refcount, so rdtgrp cannot be freed before the
 	 * callback has been invoked.
+	 *
+	 * Also ensures above {closid, rmid} writes are observed by
+	 * move_myself(), as it can run immediately after task_work_add().
+	 * Otherwise old values may be loaded, and the move will only actually
+	 * happen at the next context switch.
+	 *
+	 * Pairs with atomic_dec() in move_myself().
 	 */
 	atomic_inc(&rdtgrp->waitcount);
+
 	ret = task_work_add(tsk, &callback->work, TWA_RESUME);
 	if (ret) {
 		/*
@@ -571,18 +591,6 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 		atomic_dec(&rdtgrp->waitcount);
 		kfree(callback);
 		rdt_last_cmd_puts("Task exited\n");
-	} else {
-		/*
-		 * For ctrl_mon groups move both closid and rmid.
-		 * For monitor groups, can move the tasks only from
-		 * their parent CTRL group.
-		 */
-		if (rdtgrp->type == RDTCTRL_GROUP) {
-			tsk->closid = rdtgrp->closid;
-			tsk->rmid = rdtgrp->mon.rmid;
-		} else if (rdtgrp->type == RDTMON_GROUP) {
-			tsk->rmid = rdtgrp->mon.rmid;
-		}
 	}
 	return ret;
 }
--
2.27.0