Date: Sat, 7 May 2022 10:26:07 +0100
Subject: Re: [PATCH 3/5] io_uring: let fast poll support multishot
From: Pavel Begunkov <>
On 5/6/22 23:02, Jens Axboe wrote:
> On 5/6/22 11:19 AM, Pavel Begunkov wrote:
>> On 5/6/22 08:01, Hao Xu wrote:
>>> From: Hao Xu <howeyxu@tencent.com>
>>>
>>> For operations like accept, multishot is a useful feature, since we can
>>> reduce a number of accept sqe. Let's integrate it to fast poll, it may
>>> be good for other operations in the future.
>>>
>>> Signed-off-by: Hao Xu <howeyxu@tencent.com>
>>> ---
>>>  fs/io_uring.c | 41 ++++++++++++++++++++++++++---------------
>>>  1 file changed, 26 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index 8ebb1a794e36..d33777575faf 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -5952,7 +5952,7 @@ static void io_poll_remove_entries(struct io_kiocb *req)
>>>   * either spurious wakeup or multishot CQE is served. 0 when it's done with
>>>   * the request, then the mask is stored in req->cqe.res.
>>>   */
>>> -static int io_poll_check_events(struct io_kiocb *req, bool locked)
>>> +static int io_poll_check_events(struct io_kiocb *req, bool *locked)
>>>  {
>>>  	struct io_ring_ctx *ctx = req->ctx;
>>>  	int v;
>>> @@ -5981,17 +5981,26 @@ static int io_poll_check_events(struct io_kiocb *req, bool locked)
>>>  	/* multishot, just fill an CQE and proceed */
>>>  	if (req->cqe.res && !(req->apoll_events & EPOLLONESHOT)) {
>>> -		__poll_t mask = mangle_poll(req->cqe.res & req->apoll_events);
>>> -		bool filled;
>>> -
>>> -		spin_lock(&ctx->completion_lock);
>>> -		filled = io_fill_cqe_aux(ctx, req->cqe.user_data, mask,
>>> -					 IORING_CQE_F_MORE);
>>> -		io_commit_cqring(ctx);
>>> -		spin_unlock(&ctx->completion_lock);
>>> -		if (unlikely(!filled))
>>> -			return -ECANCELED;
>>> -		io_cqring_ev_posted(ctx);
>>> +		if (req->flags & REQ_F_APOLL_MULTISHOT) {
>>> +			io_tw_lock(req->ctx, locked);
>>> +			if (likely(!(req->task->flags & PF_EXITING)))
>>> +				io_queue_sqe(req);
>>
>> That looks dangerous, io_queue_sqe() usually takes the request
>> ownership and doesn't expect that someone, i.e.
>> io_poll_check_events(), may still be actively using it.
>
> I took a look at this, too. We do own the request at this point, but
Right, but we don't pass the ownership into io_queue_sqe(). IOW, it can
potentially free the request / use task-work / etc. inside, and then we
return to io_poll_check_events() with a broken req.
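
To make the hazard concrete, here is a minimal userspace sketch (toy names
like consume_request(), not the real io_uring code) of a callee that takes
ownership and may free the object on its failure path while the caller keeps
touching it afterwards:

/*
 * Toy userspace model (hypothetical names, not the real io_uring code)
 * of the ownership problem: consume_request() stands in for
 * io_queue_sqe() and frees the request itself on its failure path,
 * while the caller, standing in for io_poll_check_events(), keeps
 * dereferencing the same pointer afterwards.
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_req {
	int res;
	int flags;
};

/* Takes ownership of @req: on failure it completes and frees it. */
static int consume_request(struct toy_req *req, int fail)
{
	if (fail) {
		free(req);	/* analogue of io_req_complete_failed() */
		return -1;
	}
	req->res = 0;
	return 0;
}

int main(void)
{
	struct toy_req *req = calloc(1, sizeof(*req));

	if (!req)
		return 1;

	consume_request(req, 1);	/* fails and frees req */
	/* The caller still believes it owns req: use-after-free... */
	printf("flags after queueing: %d\n", req->flags);
	free(req);			/* ...and a double free on top */
	return 0;
}

That is essentially the pattern if io_queue_sqe() hits a failure path while
io_poll_check_events() carries on with the same req.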
> it's still on the poll list. If io_accept() fails, then we do run the
> poll_clean.
>
>> E.g. io_accept() fails on fd < 0, return an error, io_queue_sqe() ->
>> io_queue_async() -> io_req_complete_failed() kills it. Then
>> io_poll_check_events() and polling in general carry on using the freed
>> request => UAF. Didn't look at it too carefully, but there might be other
>> similar cases.
>
> But we better have done poll_clean() before returning the error. What am
> I missing here?
One scenario I'd be worried about is sth like:
io_apoll_task_func()                       |
-> io_poll_check_events()                  |
   // 1st iteration                        |
   -> io_queue_sqe()                       |
                                           | poll cancel()
                                           | -> set IO_POLL_CANCEL_FLAG
      -> io_accept() fails                 |
      -> io_poll_clean()                   |
      -> io_req_complete_failed()          |
   // 2nd iteration finds IO_POLL_CANCEL_FLAG |
   return -ECANCELLED                      |
-> io_req_complete_failed(req, ret)        |
The problem in this example is the double io_req_complete_failed() call.
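
As a rough model of that (hypothetical userspace code, not the actual kernel
paths), assume a completion helper that must only ever run once per request;
the queueing failure inside the 1st iteration completes it, and the
cancellation result from the 2nd iteration completes it again:

/*
 * Rough model of the double completion (hypothetical userspace code):
 * toy_complete_failed() must run at most once per request. The failure
 * inside the "1st iteration" completes the request; the caller then
 * completes it again when the "2nd iteration" reports cancellation.
 */
#include <assert.h>
#include <stdbool.h>

struct toy_req {
	bool completed;
};

static void toy_complete_failed(struct toy_req *req, int err)
{
	(void)err;
	assert(!req->completed);	/* completing twice is a bug */
	req->completed = true;
}

static int toy_poll_check_events(struct toy_req *req, bool cancelled)
{
	/* 1st iteration: queueing fails and completes the request */
	toy_complete_failed(req, -1);

	/* 2nd iteration: the cancel flag is observed */
	if (cancelled)
		return -1;	/* analogue of returning -ECANCELED */
	return 0;
}

int main(void)
{
	struct toy_req req = { .completed = false };
	int ret = toy_poll_check_events(&req, true);

	if (ret < 0)
		toy_complete_failed(&req, ret);	/* second completion: assert fires */
	return 0;
}

In the real code the second io_req_complete_failed() would act on a request
that has already been completed and put, hence the concern above.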
--
Pavel Begunkov