Date: Wed, 18 May 2022
From: Lee Jones
Subject: Re: [REPORT] Use-after-free Read in __fdget_raw in v5.10.y
On Tue, 17 May 2022, Jens Axboe wrote:

> On 5/17/22 7:00 AM, Lee Jones wrote:
> > On Tue, 17 May 2022, Jens Axboe wrote:
> >
> >> On 5/17/22 6:36 AM, Lee Jones wrote:
> >>> On Tue, 17 May 2022, Jens Axboe wrote:
> >>>
> >>>> On 5/17/22 6:24 AM, Lee Jones wrote:
> >>>>> On Tue, 17 May 2022, Jens Axboe wrote:
> >>>>>
> >>>>>> On 5/17/22 5:41 AM, Lee Jones wrote:
> >>>>>>> Good afternoon Jens, Pavel, et al.,
> >>>>>>>
> >>>>>>> Not sure if you are presently aware, but there appears to be a
> >>>>>>> use-after-free issue affecting the io_uring worker driver (fs/io-wq.c)
> >>>>>>> in Stable v5.10.y.
> >>>>>>>
> >>>>>>> The full syzbot report can be seen below [0].
> >>>>>>>
> >>>>>>> The C-reproducer has been placed below that [1].
> >>>>>>>
> >>>>>>> I had great success running this reproducer in an infinite loop.
> >>>>>>>
> >>>>>>> My colleague reverse-bisected the fixing commit to:
> >>>>>>>
> >>>>>>> commit fb3a1f6c745ccd896afadf6e2d6f073e871d38ba
> >>>>>>> Author: Jens Axboe <axboe@kernel.dk>
> >>>>>>> Date: Fri Feb 26 09:47:20 2021 -0700
> >>>>>>>
> >>>>>>> io-wq: have manager wait for all workers to exit
> >>>>>>>
> >>>>>>> Instead of having to wait separately on workers and manager, just have
> >>>>>>> the manager wait on the workers. We use an atomic_t for the reference
> >>>>>>> here, as we need to start at 0 and allow increment from that. Since the
> >>>>>>> number of workers is naturally capped by the allowed nr of processes,
> >>>>>>> and that uses an int, there is no risk of overflow.
> >>>>>>>
> >>>>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> >>>>>>>
> >>>>>>> fs/io-wq.c | 30 ++++++++++++++++++++++--------
> >>>>>>> 1 file changed, 22 insertions(+), 8 deletions(-)
> >>>>>>
> >>>>>> Does this fix it:
> >>>>>>
> >>>>>> commit 886d0137f104a440d9dfa1d16efc1db06c9a2c02
> >>>>>> Author: Jens Axboe <axboe@kernel.dk>
> >>>>>> Date: Fri Mar 5 12:59:30 2021 -0700
> >>>>>>
> >>>>>> io-wq: fix race in freeing 'wq' and worker access
> >>>>>>
> >>>>>> Looks like it didn't make it into 5.10-stable, but we can certainly
> >>>>>> rectify that.
> >>>>>
> >>>>> Thanks for your quick response Jens.
> >>>>>
> >>>>> This patch doesn't apply cleanly to v5.10.y.
> >>>>
> >>>> This is probably why it never made it into 5.10-stable :-/
> >>>
> >>> Right. It doesn't apply at all unfortunately.
> >>>
> >>>>> I'll have a go at back-porting it. Please bear with me.
> >>>>
> >>>> Let me know if you run into issues with that and I can help out.
> >>>
> >>> I think the dependency list is too big.
> >>>
> >>> Too much has changed that was never back-ported.
> >>>
> >>> Actually the list of patches pertaining to fs/io-wq.c alone isn't so
> >>> bad, I did start to back-port them all but some of the big ones have
> >>> fs/io_uring.c changes incorporated and that list is huge (256 patches
> >>> from v5.10 to the fixing patch mentioned above).
> >>
> >> The problem is that 5.12 went to the new worker setup, and this patch
> >> landed after that even though it also applies to the pre-native workers.
> >> Hence the dependency chain isn't really as long as it seems, probably
> >> just a few patches backporting the change references and completions.
> >>
> >> I'll take a look this afternoon.
> >
> > Thanks Jens. I really appreciate it.
>
> Can you see if this helps? Untested...

What base does this apply against, please?

I tried Mainline and v5.10.116 and both failed.
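
To make sure I understand what I'm porting: the pattern here, as I read
it, matches what the upstream commit message describes. Live workers are
counted with a plain atomic_t (refcount_t would WARN on an increment from
zero), and the last worker to exit fires a completion that the manager
waits on. A minimal sketch of that pattern — my reading of it, not the
patch itself:

        /* counter legitimately starts at 0, hence atomic_t, not refcount_t */
        atomic_t worker_refs = ATOMIC_INIT(0);
        DECLARE_COMPLETION(worker_done);

        /* creation path: take a reference before the worker can run */
        atomic_inc(&worker_refs);

        /* every exit path, including failed creation, drops it */
        if (atomic_dec_and_test(&worker_refs))
                complete(&worker_done);  /* last one out wakes the manager */

Please correct me if I have the intent wrong.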

> diff --git a/fs/io-wq.c b/fs/io-wq.c
> index 3d5fc76b92d0..35af489bcaf6 100644
> --- a/fs/io-wq.c
> +++ b/fs/io-wq.c
> @@ -125,6 +125,9 @@ struct io_wq {
>          refcount_t refs;
>          struct completion done;
>
> +        atomic_t worker_refs;
> +        struct completion worker_done;
> +
>          struct hlist_node cpuhp_node;
>
>          refcount_t use_refs;
> @@ -250,8 +253,8 @@ static void io_worker_exit(struct io_worker *worker)
>          raw_spin_unlock_irq(&wqe->lock);
>
>          kfree_rcu(worker, rcu);
> -        if (refcount_dec_and_test(&wqe->wq->refs))
> -                complete(&wqe->wq->done);
> +        if (atomic_dec_and_test(&wqe->wq->worker_refs))
> +                complete(&wqe->wq->worker_done);
>  }
>
>  static inline bool io_wqe_run_queue(struct io_wqe *wqe)
> @@ -695,9 +698,13 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
>          worker->wqe = wqe;
>          spin_lock_init(&worker->lock);
>
> +        atomic_inc(&wq->worker_refs);
> +
>          worker->task = kthread_create_on_node(io_wqe_worker, worker, wqe->node,
>                                  "io_wqe_worker-%d/%d", index, wqe->node);
>          if (IS_ERR(worker->task)) {
> +                if (atomic_dec_and_test(&wq->worker_refs))
> +                        complete(&wq->worker_done);
>                  kfree(worker);
>                  return false;
>          }
> @@ -717,7 +724,6 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
>          if (index == IO_WQ_ACCT_UNBOUND)
>                  atomic_inc(&wq->user->processes);
>
> -        refcount_inc(&wq->refs);
>          wake_up_process(worker->task);
>          return true;
>  }
> @@ -822,17 +828,18 @@ static int io_wq_manager(void *data)
>                  task_work_run();
>
>  out:
> -        if (refcount_dec_and_test(&wq->refs)) {
> -                complete(&wq->done);
> -                return 0;
> -        }
>          /* if ERROR is set and we get here, we have workers to wake */
> -        if (test_bit(IO_WQ_BIT_ERROR, &wq->state)) {
> -                rcu_read_lock();
> -                for_each_node(node)
> -                        io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
> -                rcu_read_unlock();
> -        }
> +        rcu_read_lock();
> +        for_each_node(node)
> +                io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
> +        rcu_read_unlock();
> +
> +        if (atomic_read(&wq->worker_refs))
> +                wait_for_completion(&wq->worker_done);
> +
> +        if (refcount_dec_and_test(&wq->refs))
> +                complete(&wq->done);
> +
>          return 0;
>  }
>
> @@ -1135,6 +1142,9 @@ struct io_wq *io_wq_create(unsigned bounded, struct io_wq_data *data)
>
>          init_completion(&wq->done);
>
> +        init_completion(&wq->worker_done);
> +        atomic_set(&wq->worker_refs, 0);
> +
>          wq->manager = kthread_create(io_wq_manager, wq, "io_wq_manager");
>          if (!IS_ERR(wq->manager)) {
>                  wake_up_process(wq->manager);
> @@ -1179,11 +1189,6 @@ static void __io_wq_destroy(struct io_wq *wq)
>          if (wq->manager)
>                  kthread_stop(wq->manager);
>
> -        rcu_read_lock();
> -        for_each_node(node)
> -                io_wq_for_each_worker(wq->wqes[node], io_wq_worker_wake, NULL);
> -        rcu_read_unlock();
> -
>          wait_for_completion(&wq->done);
>
>          for_each_node(node)
>
>
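
In case it helps anyone else reviewing the thread, below is a crude
userspace analogue I put together of the lifetime bug being closed here.
The names (fake_wq and friends) are mine and no kernel APIs are involved;
the point is only the ordering: the owner must wait for the worker count
to hit zero before freeing the shared state, otherwise an exiting worker
can still dereference freed memory, which is the shape of the
use-after-free syzbot flagged. Builds with cc -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_wq {
        atomic_int worker_refs;         /* analogue of wq->worker_refs */
        pthread_mutex_t lock;
        pthread_cond_t worker_done;     /* stands in for struct completion */
        atomic_int work_done;           /* shared state the workers touch */
};

static void *worker(void *arg)
{
        struct fake_wq *wq = arg;

        wq->work_done++;                /* worker dereferences shared state */
        if (atomic_fetch_sub(&wq->worker_refs, 1) == 1) {
                /* last worker out signals the owner, like complete() */
                pthread_mutex_lock(&wq->lock);
                pthread_cond_signal(&wq->worker_done);
                pthread_mutex_unlock(&wq->lock);
        }
        return NULL;
}

int main(void)
{
        struct fake_wq *wq = calloc(1, sizeof(*wq));
        pthread_t threads[4];
        int i;

        pthread_mutex_init(&wq->lock, NULL);
        pthread_cond_init(&wq->worker_done, NULL);
        atomic_store(&wq->worker_refs, 4);
        for (i = 0; i < 4; i++)
                pthread_create(&threads[i], NULL, worker, wq);

        /* the fix's ordering: wait for the worker count to reach zero ... */
        pthread_mutex_lock(&wq->lock);
        while (atomic_load(&wq->worker_refs))
                pthread_cond_wait(&wq->worker_done, &wq->lock);
        pthread_mutex_unlock(&wq->lock);

        for (i = 0; i < 4; i++)
                pthread_join(threads[i], NULL);

        /* ... and only then free the shared state */
        printf("%d units of work done, freeing wq\n",
               atomic_load(&wq->work_done));
        free(wq);
        return 0;
}

Move free(wq) above the wait and join and you get the same shape of bug
the reproducer tickles in io-wq.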

--
Lee Jones [李琼斯]
Principal Technical Lead - Developer Services
Linaro.org │ Open source software for Arm SoCs
Follow Linaro: Facebook | Twitter | Blog
