Subject: Re: [REPORT] Use-after-free Read in __fdget_raw in v5.10.y

On Wed, 18 May 2022, Jens Axboe wrote:

> On 5/18/22 10:34 AM, Lee Jones wrote:
> > On Wed, 18 May 2022, Jens Axboe wrote:
> >
> >> On 5/18/22 09:39, Lee Jones wrote:
> >>> On Wed, 18 May 2022, Jens Axboe wrote:
> >>>
> >>>> On 5/18/22 9:14 AM, Lee Jones wrote:
> >>>>> On Wed, 18 May 2022, Jens Axboe wrote:
> >>>>>
> >>>>>> On 5/18/22 6:54 AM, Jens Axboe wrote:
> >>>>>>> On 5/18/22 6:52 AM, Jens Axboe wrote:
> >>>>>>>> On 5/18/22 6:50 AM, Lee Jones wrote:
> >>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
> >>>>>>>>>
> >>>>>>>>>> On 5/17/22 7:00 AM, Lee Jones wrote:
> >>>>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>> On 5/17/22 6:36 AM, Lee Jones wrote:
> >>>>>>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> On 5/17/22 6:24 AM, Lee Jones wrote:
> >>>>>>>>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On 5/17/22 5:41 AM, Lee Jones wrote:
> >>>>>>>>>>>>>>>>> Good afternoon Jens, Pavel, et al.,
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Not sure if you are presently aware, but there appears to be a
> >>>>>>>>>>>>>>>>> use-after-free issue affecting the io_uring worker driver (fs/io-wq.c)
> >>>>>>>>>>>>>>>>> in Stable v5.10.y.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> The full syzbot report can be seen below [0].
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> The C-reproducer has been placed below that [1].
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> I had great success running this reproducer in an infinite loop.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> My colleague reverse-bisected the fixing commit to:
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> commit fb3a1f6c745ccd896afadf6e2d6f073e871d38ba
> >>>>>>>>>>>>>>>>> Author: Jens Axboe <axboe@kernel.dk>
> >>>>>>>>>>>>>>>>> Date: Fri Feb 26 09:47:20 2021 -0700
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> io-wq: have manager wait for all workers to exit
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Instead of having to wait separately on workers and manager, just have
> >>>>>>>>>>>>>>>>> the manager wait on the workers. We use an atomic_t for the reference
> >>>>>>>>>>>>>>>>> here, as we need to start at 0 and allow increment from that. Since the
> >>>>>>>>>>>>>>>>> number of workers is naturally capped by the allowed nr of processes,
> >>>>>>>>>>>>>>>>> and that uses an int, there is no risk of overflow.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> fs/io-wq.c | 30 ++++++++++++++++++++++--------
> >>>>>>>>>>>>>>>>> 1 file changed, 22 insertions(+), 8 deletions(-)
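As an aside: the scheme the commit message describes is a plain refcount-plus-completion pattern. A minimal sketch of that pattern, with hypothetical names (this is not the actual fs/io-wq.c code):

#include <linux/atomic.h>
#include <linux/completion.h>

/*
 * Sketch of "manager waits for all workers": each live worker holds
 * one reference, and the last worker to drop its reference fires the
 * completion the manager sleeps on. All names here are illustrative.
 */
struct wq_state {
	atomic_t worker_refs;		/* starts at 0, one ref per worker */
	struct completion worker_done;	/* fired by the last exiting worker */
};

static void wq_state_init(struct wq_state *wq)
{
	atomic_set(&wq->worker_refs, 0);
	init_completion(&wq->worker_done);
}

static void worker_created(struct wq_state *wq)
{
	atomic_inc(&wq->worker_refs);
}

static void worker_exiting(struct wq_state *wq)
{
	/* last worker out wakes the manager */
	if (atomic_dec_and_test(&wq->worker_refs))
		complete(&wq->worker_done);
}

static void manager_wait_for_workers(struct wq_state *wq)
{
	/*
	 * Completions are persistent, so even if the last worker exits
	 * between this check and the wait, the wait returns at once.
	 */
	if (atomic_read(&wq->worker_refs))
		wait_for_completion(&wq->worker_done);
}

Since the worker count is bounded by the number of allowed processes, which itself fits in an int, the atomic_t cannot realistically overflow; that is the point the commit message makes.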
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Does this fix it:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> commit 886d0137f104a440d9dfa1d16efc1db06c9a2c02
> >>>>>>>>>>>>>>>> Author: Jens Axboe <axboe@kernel.dk>
> >>>>>>>>>>>>>>>> Date: Fri Mar 5 12:59:30 2021 -0700
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> io-wq: fix race in freeing 'wq' and worker access
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Looks like it didn't make it into 5.10-stable, but we can certainly
> >>>>>>>>>>>>>>>> rectify that.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Thanks for your quick response Jens.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> This patch doesn't apply cleanly to v5.10.y.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> This is probably why it never made it into 5.10-stable :-/
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Right. It doesn't apply at all unfortunately.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I'll have a go at back-porting it. Please bear with me.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Let me know if you run into issues with that and I can help out.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I think the dependency list is too big.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Too much has changed that was never back-ported.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Actually, the list of patches pertaining to fs/io-wq.c alone isn't
> >>>>>>>>>>>>> so bad; I did start to back-port them all, but some of the big ones
> >>>>>>>>>>>>> have fs/io_uring.c changes incorporated, and that list is huge (256
> >>>>>>>>>>>>> patches from v5.10 to the fixing patch mentioned above).
> >>>>>>>>>>>>
> >>>>>>>>>>>> The problem is that 5.12 went to the new worker setup, and this patch
> >>>>>>>>>>>> landed after that even though it also applies to the pre-native workers.
> >>>>>>>>>>>> Hence the dependency chain isn't really as long as it seems, probably
> >>>>>>>>>>>> just a few patches backporting the change references and completions.
> >>>>>>>>>>>>
> >>>>>>>>>>>> I'll take a look this afternoon.
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks Jens. I really appreciate it.
> >>>>>>>>>>
> >>>>>>>>>> Can you see if this helps? Untested...
> >>>>>>>>>
> >>>>>>>>> What base does this apply against please?
> >>>>>>>>>
> >>>>>>>>> I tried Mainline and v5.10.116 and both failed.
> >>>>>>>>
> >>>>>>>> It's against 5.10.116, so that's puzzling. Let me double check I sent
> >>>>>>>> the right one...
> >>>>>>>
> >>>>>>> Looks like I sent the one from the wrong directory, sorry about that.
> >>>>>>> This one should be better:
> >>>>>>
> >>>>>> Nope, both are the right one. Maybe your mailer is mangling the patch?
> >>>>>> I'll attach it gzip'ed here in case that helps.
> >>>>>
> >>>>> Okay, that applied, thanks.
> >>>>>
> >>>>> Unfortunately, I am still able to crash the kernel in the same way.
> >>>>
> >>>> Alright, maybe it's not enough. I can't get your reproducer to crash,
> >>>> unfortunately. I'll try on a different box.
> >>>
> >>> You need to have fuzzing and kasan enabled.
> >>
> >> I do have kasan enabled. What's fuzzing?
> >
> > CONFIG_KCOV
>
> Ah ok - I don't think that's needed for this.
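As a side note for anyone else trying to reproduce this: the kernel must be built with KASAN for the use-after-free to be reported at all, and syzkaller's fuzzing additionally relies on coverage instrumentation via KCOV. A minimal config fragment would be something like:

CONFIG_KASAN=y
CONFIG_KCOV=y

though, as noted above, KCOV should only matter for the fuzzing itself, not for triggering the crash with the C reproducer.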
>
> Looking a bit deeper at this, I'm now convinced your bisect went off the
> rails at some point, probably because this can be timing-specific.
>
> Can you try with this patch?
>
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index 4330603eae35..3ecf71151fb1 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -4252,12 +4252,8 @@ static int io_statx(struct io_kiocb *req, bool force_nonblock)
>  	struct io_statx *ctx = &req->statx;
>  	int ret;
>
> -	if (force_nonblock) {
> -		/* only need file table for an actual valid fd */
> -		if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD)
> -			req->flags |= REQ_F_NO_FILE_TABLE;
> +	if (force_nonblock)
>  		return -EAGAIN;
> -	}
>
>  	ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask,
>  		       ctx->buffer);
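For context on what the hunk does, if I'm reading it right: the old force_nonblock path flagged requests with an invalid dfd as REQ_F_NO_FILE_TABLE so the io-wq worker would skip pinning the file table; always returning -EAGAIN instead means the punted request always runs with the file table properly referenced, which lines up with the use-after-free read in __fdget_raw.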

This does appear to solve the issue. :)

Thanks so much for working on this.

What are the next steps?

Are you able to submit this to Stable?

--
Lee Jones [李琼斯]
Principal Technical Lead - Developer Services
Linaro.org │ Open source software for Arm SoCs
Follow Linaro: Facebook | Twitter | Blog
