    Subject: [PATCH 5.10 07/77] io_uring: don't take uring_lock during iowq cancel
    From: Pavel Begunkov <asml.silence@gmail.com>

    commit 792bb6eb862333658bf1bd2260133f0507e2da8d upstream.

    [ 97.866748] a.out/2890 is trying to acquire lock:
    [ 97.867829] ffff8881046763e8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_wq_submit_work+0x155/0x240
    [ 97.869735]
    [ 97.869735] but task is already holding lock:
    [ 97.871033] ffff88810dfe0be8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_enter+0x3f0/0x5b0
    [ 97.873074]
    [ 97.873074] other info that might help us debug this:
    [ 97.874520] Possible unsafe locking scenario:
    [ 97.874520]
    [ 97.875845] CPU0
    [ 97.876440] ----
    [ 97.877048] lock(&ctx->uring_lock);
    [ 97.877961] lock(&ctx->uring_lock);
    [ 97.878881]
    [ 97.878881] *** DEADLOCK ***
    [ 97.878881]
    [ 97.880341] May be due to missing lock nesting notation
    [ 97.880341]
    [ 97.881952] 1 lock held by a.out/2890:
    [ 97.882873] #0: ffff88810dfe0be8 (&ctx->uring_lock){+.+.}-{3:3}, at: __x64_sys_io_uring_enter+0x3f0/0x5b0
    [ 97.885108]
    [ 97.885108] stack backtrace:
    [ 97.890457] Call Trace:
    [ 97.891121] dump_stack+0xac/0xe3
    [ 97.891972] __lock_acquire+0xab6/0x13a0
    [ 97.892940] lock_acquire+0x2c3/0x390
    [ 97.894894] __mutex_lock+0xae/0x9f0
    [ 97.901101] io_wq_submit_work+0x155/0x240
    [ 97.902112] io_wq_cancel_cb+0x162/0x490
    [ 97.904126] io_async_find_and_cancel+0x3b/0x140
    [ 97.905247] io_issue_sqe+0x86d/0x13e0
    [ 97.909122] __io_queue_sqe+0x10b/0x550
    [ 97.913971] io_queue_sqe+0x235/0x470
    [ 97.914894] io_submit_sqes+0xcce/0xf10
    [ 97.917872] __x64_sys_io_uring_enter+0x3fb/0x5b0
    [ 97.921424] do_syscall_64+0x2d/0x40
    [ 97.922329] entry_SYSCALL_64_after_hwframe+0x44/0xa9

    While holding uring_lock, e.g. during inline execution, an async cancel
    request may attempt cancellations through io_wq_submit_work, which may
    in turn try to take the same lock again. Delay the cancellation to
    task_work, so it runs from a clean context and we don't have to worry
    about locking.
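
    The underlying pattern is a plain AA deadlock: the same task tries to
    take a mutex it already holds, which is exactly what the lockdep report
    above shows. As a minimal userspace sketch of both the bug and the
    fix-by-deferral (pthreads, illustrative names only -- this is not
    kernel or io_uring code):

        /* aa_deadlock.c -- build with: cc aa_deadlock.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;

        /* One-slot deferred-work queue, standing in for task_work. */
        static void (*deferred_cb)(void);

        static void cancel_work(void)
        {
                /* Needs ring_lock; calling this while holding it deadlocks. */
                pthread_mutex_lock(&ring_lock);
                puts("cancellation done under ring_lock");
                pthread_mutex_unlock(&ring_lock);
        }

        int main(void)
        {
                pthread_mutex_lock(&ring_lock);
                /*
                 * Buggy shape: calling cancel_work() here would be
                 * lock(ring_lock) inside lock(ring_lock).
                 * Fixed shape: queue it, and run it only after the lock
                 * is dropped, i.e. from a "clean context".
                 */
                deferred_cb = cancel_work;
                pthread_mutex_unlock(&ring_lock);

                if (deferred_cb)
                        deferred_cb();  /* safe: ring_lock no longer held */
                return 0;
        }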

    Cc: <stable@vger.kernel.org> # 5.5+
    Fixes: c07e6719511e ("io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()")
    Reported-by: Abaci <abaci@linux.alibaba.com>
    Reported-by: Hao Xu <haoxu@linux.alibaba.com>
    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    [Lee: In v5.10 the first hunk of the original patch solves a different
    (double free) issue. Only the first hunk of the original patch is
    relevant to v5.10, and it is only relevant to v5.10]
    Reported-by: syzbot+59d8a1f4e60c20c066cf@syzkaller.appspotmail.com
    Signed-off-by: Lee Jones <lee.jones@linaro.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    fs/io_uring.c | 2 ++
    1 file changed, 2 insertions(+)

    --- a/fs/io_uring.c
    +++ b/fs/io_uring.c
    @@ -2075,7 +2075,9 @@ static void io_req_task_cancel(struct ca
     	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
     	struct io_ring_ctx *ctx = req->ctx;
     
    +	mutex_lock(&ctx->uring_lock);
     	__io_req_task_cancel(req, -ECANCELED);
    +	mutex_unlock(&ctx->uring_lock);
     	percpu_ref_put(&ctx->refs);
      }
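
    For reference, this is how io_req_task_cancel() reads with the hunk
    applied; the signature is reconstructed from the truncated hunk header
    and the container_of() call, so treat it as a sketch of the v5.10
    function rather than a verbatim quote:

        static void io_req_task_cancel(struct callback_head *cb)
        {
                struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
                struct io_ring_ctx *ctx = req->ctx;

                /* Taken from task_work, a context where, per the commit
                 * message above, the lock is not already held. */
                mutex_lock(&ctx->uring_lock);
                __io_req_task_cancel(req, -ECANCELED);
                mutex_unlock(&ctx->uring_lock);
                percpu_ref_put(&ctx->refs);
        }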

