Date: 2022-05-18
Subject: Re: [REPORT] Use-after-free Read in __fdget_raw in v5.10.y
From: Jens Axboe <axboe@kernel.dk>
On 5/18/22 9:14 AM, Lee Jones wrote:
> On Wed, 18 May 2022, Jens Axboe wrote:
>
>> On 5/18/22 6:54 AM, Jens Axboe wrote:
>>> On 5/18/22 6:52 AM, Jens Axboe wrote:
>>>> On 5/18/22 6:50 AM, Lee Jones wrote:
>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>
>>>>>> On 5/17/22 7:00 AM, Lee Jones wrote:
>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>>>
>>>>>>>> On 5/17/22 6:36 AM, Lee Jones wrote:
>>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>>>>>
>>>>>>>>>> On 5/17/22 6:24 AM, Lee Jones wrote:
>>>>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>>>>>>>
>>>>>>>>>>>> On 5/17/22 5:41 AM, Lee Jones wrote:
>>>>>>>>>>>>> Good afternoon Jens, Pavel, et al.,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Not sure if you are presently aware, but there appears to be a
>>>>>>>>>>>>> use-after-free issue affecting the io_uring worker driver (fs/io-wq.c)
>>>>>>>>>>>>> in Stable v5.10.y.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The full syzbot report can be seen below [0].
>>>>>>>>>>>>>
>>>>>>>>>>>>> The C-reproducer has been placed below that [1].
>>>>>>>>>>>>>
>>>>>>>>>>>>> I had great success running this reproducer in an infinite loop.
>>>>>>>>>>>>>
>>>>>>>>>>>>> My colleague reverse-bisected the fixing commit to:
>>>>>>>>>>>>>
>>>>>>>>>>>>> commit fb3a1f6c745ccd896afadf6e2d6f073e871d38ba
>>>>>>>>>>>>> Author: Jens Axboe <axboe@kernel.dk>
>>>>>>>>>>>>> Date: Fri Feb 26 09:47:20 2021 -0700
>>>>>>>>>>>>>
>>>>>>>>>>>>> io-wq: have manager wait for all workers to exit
>>>>>>>>>>>>>
>>>>>>>>>>>>> Instead of having to wait separately on workers and manager, just have
>>>>>>>>>>>>> the manager wait on the workers. We use an atomic_t for the reference
>>>>>>>>>>>>> here, as we need to start at 0 and allow increment from that. Since the
>>>>>>>>>>>>> number of workers is naturally capped by the allowed nr of processes,
>>>>>>>>>>>>> and that uses an int, there is no risk of overflow.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>>>>>>>>>>>>
>>>>>>>>>>>>> fs/io-wq.c | 30 ++++++++++++++++++++++--------
>>>>>>>>>>>>> 1 file changed, 22 insertions(+), 8 deletions(-)
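
For illustration, here is a minimal userspace sketch of the pattern that
commit message describes: an atomic reference count starting at zero, with
the manager waiting for the last worker to drop its reference. The
pthread/stdatomic plumbing and all names below are illustrative stand-ins,
not the actual atomic_t/completion code in fs/io-wq.c.

/*
 * Illustrative userspace analogue of "manager waits for all workers";
 * hypothetical names, not the real fs/io-wq.c symbols.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int worker_refs;	/* starts at 0, one ref per worker */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg)
{
	/* ... process work items ... */

	/* last worker out wakes the manager */
	if (atomic_fetch_sub(&worker_refs, 1) == 1) {
		pthread_mutex_lock(&done_lock);
		pthread_cond_signal(&done_cond);
		pthread_mutex_unlock(&done_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[4];

	for (int i = 0; i < 4; i++) {
		atomic_fetch_add(&worker_refs, 1);
		pthread_create(&tids[i], NULL, worker, NULL);
	}

	/* manager waits on the workers, not the other way around */
	pthread_mutex_lock(&done_lock);
	while (atomic_load(&worker_refs) != 0)
		pthread_cond_wait(&done_cond, &done_lock);
	pthread_mutex_unlock(&done_lock);

	for (int i = 0; i < 4; i++)
		pthread_join(tids[i], NULL);
	puts("all workers exited");
	return 0;
}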
>>>>>>>>>>>>
>>>>>>>>>>>> Does this fix it:
>>>>>>>>>>>>
>>>>>>>>>>>> commit 886d0137f104a440d9dfa1d16efc1db06c9a2c02
>>>>>>>>>>>> Author: Jens Axboe <axboe@kernel.dk>
>>>>>>>>>>>> Date: Fri Mar 5 12:59:30 2021 -0700
>>>>>>>>>>>>
>>>>>>>>>>>> io-wq: fix race in freeing 'wq' and worker access
>>>>>>>>>>>>
>>>>>>>>>>>> Looks like it didn't make it into 5.10-stable, but we can certainly
>>>>>>>>>>>> rectify that.
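
As a rough sketch of the class of race the commit title refers to (not the
kernel code; all names below are hypothetical): the manager tears down and
frees the wq structure while a worker can still dereference it. The general
shape of the fix is to keep wq alive until every worker is known to be done
with it, for example via reference counting as in the patch quoted above.

/* Hypothetical sketch of the race class; not the kernel code. */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct wq {
	int exit;		/* workers poll this */
};

static void *worker(void *arg)
{
	struct wq *wq = arg;

	while (!wq->exit)	/* use-after-free if wq was already freed */
		usleep(1000);
	return NULL;
}

int main(void)
{
	struct wq *wq = calloc(1, sizeof(*wq));
	pthread_t t;

	pthread_create(&t, NULL, worker, wq);

	/*
	 * Buggy teardown: the "manager" frees wq without making sure the
	 * worker has observed exit and stopped touching it. A correct
	 * version would set wq->exit, then wait for the worker (e.g.
	 * pthread_join, or a refcount) before freeing.
	 */
	wq->exit = 1;
	free(wq);		/* worker may still dereference wq here */

	pthread_join(t, NULL);
	return 0;
}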
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your quick response Jens.
>>>>>>>>>>>
>>>>>>>>>>> This patch doesn't apply cleanly to v5.10.y.
>>>>>>>>>>
>>>>>>>>>> This is probably why it never made it into 5.10-stable :-/
>>>>>>>>>
>>>>>>>>> Right. It doesn't apply at all unfortunately.
>>>>>>>>>
>>>>>>>>>>> I'll have a go at back-porting it. Please bear with me.
>>>>>>>>>>
>>>>>>>>>> Let me know if you run into issues with that and I can help out.
>>>>>>>>>
>>>>>>>>> I think the dependency list is too big.
>>>>>>>>>
>>>>>>>>> Too much has changed that was never back-ported.
>>>>>>>>>
>>>>>>>>> Actually, the list of patches pertaining to fs/io-wq.c alone isn't
>>>>>>>>> so bad; I did start to back-port them all, but some of the big ones
>>>>>>>>> have fs/io_uring.c changes incorporated, and that list is huge (256
>>>>>>>>> patches from v5.10 to the fixing patch mentioned above).
>>>>>>>>
>>>>>>>> The problem is that 5.12 went to the new worker setup, and this patch
>>>>>>>> landed after that even though it also applies to the pre-native workers.
>>>>>>>> Hence the dependency chain isn't really as long as it seems, probably
>>>>>>>> just a few patches backporting the change references and completions.
>>>>>>>>
>>>>>>>> I'll take a look this afternoon.
>>>>>>>
>>>>>>> Thanks Jens. I really appreciate it.
>>>>>>
>>>>>> Can you see if this helps? Untested...
>>>>>
>>>>> What base does this apply against please?
>>>>>
>>>>> I tried Mainline and v5.10.116 and both failed.
>>>>
>>>> It's against 5.10.116, so that's puzzling. Let me double check I sent
>>>> the right one...
>>>
>>> Looks like I sent the one from the wrong directory, sorry about that.
>>> This one should be better:
>>
>> Nope, both are the right one. Maybe your mailer is mangling the patch?
>> I'll attach it gzip'ed here in case that helps.
>
> Okay, that applied, thanks.
>
> Unfortunately, I am still able to crash the kernel in the same way.

Alright, maybe it's not enough. I can't get your reproducer to crash,
unfortunately. I'll try on a different box.

--
Jens Axboe
