Subject: Re: [syzbot] KASAN: use-after-free Read in io worker handle work
From: Pavel Begunkov
Date: 2021-05-23
On 5/22/21 1:55 AM, Pavel Begunkov wrote:
> On 5/21/21 9:45 AM, Zhang, Qiang wrote:
> [...]
>> It looks like
>> thread iou-wrk-28796 in io-wq (A) accesses a wqe on the wait queue (data->hash->wait), but that wqe has already been freed by the destruction of another io-wq (B).
>>
>> Should we remove the wqe from the wait queue (data->hash->wait) only after waiting for all iou-wrk threads of the io-wq to exit? That would prevent a wqe belonging to this io-wq from still sitting on the (data->hash->wait) queue when it is released.
>
> The guess looks reasonable; it's likely a problem.
> Not sure about the diff, though; it seems racy, but I need to
> take a closer look to say for sure.

It looks sensible, please send a patch. (A sketch of the resulting ordering follows the quoted diff, for reference.)


>> Looking forward to your opinion.
>>
>> --- a/fs/io-wq.c
>> +++ b/fs/io-wq.c
>> @@ -1003,13 +1003,17 @@ static void io_wq_exit_workers(struct io_wq *wq)
>>  		struct io_wqe *wqe = wq->wqes[node];
>>  
>>  		io_wq_for_each_worker(wqe, io_wq_worker_wake, NULL);
>> -		spin_lock_irq(&wq->hash->wait.lock);
>> -		list_del_init(&wq->wqes[node]->wait.entry);
>> -		spin_unlock_irq(&wq->hash->wait.lock);
>>  	}
>>  	rcu_read_unlock();
>>  	io_worker_ref_put(wq);
>>  	wait_for_completion(&wq->worker_done);
>> +	for_each_node(node) {
>> +		struct io_wqe *wqe = wq->wqes[node];
>> +
>> +		spin_lock_irq(&wq->hash->wait.lock);
>> +		list_del_init(&wq->wqes[node]->wait.entry);
>> +		spin_unlock_irq(&wq->hash->wait.lock);
>> +	}
>>  	put_task_struct(wq->task);
>>  	wq->task = NULL;
>>  }
>
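For reference, applying the quoted hunk gives roughly the following ordering in io_wq_exit_workers(). This is a sketch reconstructed only from the diff context above (code before the hunk is elided), with comments added here to explain why the ordering matters:

/*
 * Sketch of io_wq_exit_workers() with the proposed change applied,
 * reconstructed from the quoted hunk; earlier setup is elided.
 */
static void io_wq_exit_workers(struct io_wq *wq)
{
	int node;

	/* ... early returns / exit start elided ... */

	rcu_read_lock();
	for_each_node(node) {
		struct io_wqe *wqe = wq->wqes[node];

		/* wake every worker so it can observe the exit state */
		io_wq_for_each_worker(wqe, io_wq_worker_wake, NULL);
	}
	rcu_read_unlock();
	io_worker_ref_put(wq);
	/* no iou-wrk thread of this io-wq runs past this point */
	wait_for_completion(&wq->worker_done);

	/*
	 * Unhook the wqes from the shared hash waitqueue only now:
	 * a still-running worker could have re-added its wqe to
	 * data->hash->wait, which would leave a dangling entry on the
	 * waitqueue once the wqe is freed and another io-wq wakes it.
	 */
	for_each_node(node) {
		spin_lock_irq(&wq->hash->wait.lock);
		list_del_init(&wq->wqes[node]->wait.entry);
		spin_unlock_irq(&wq->hash->wait.lock);
	}

	put_task_struct(wq->task);
	wq->task = NULL;
}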

--
Pavel Begunkov
