Date: Wed, 6 May 2015 11:14:10 +0800
Subject: Re: [PATCH 2/2] block: loop: avoiding too many pending per work I/O
From: Ming Lei <>
On Wed, May 6, 2015 at 12:55 AM, Tejun Heo <tj@kernel.org> wrote:
> Hello, Ming.
>
> On Tue, May 05, 2015 at 10:46:10PM +0800, Ming Lei wrote:
>> On Tue, May 5, 2015 at 9:59 PM, Tejun Heo <tj@kernel.org> wrote:
>> > It's a bit weird to hard code this to 16 as this effectively becomes a
>> > hidden bottleneck for concurrency.  For cases where 16 isn't a good
>> > value, hunting down what's going on can be painful as it's not visible
>> > anywhere.  I still think the right knob to control concurrency is
>> > nr_requests for the loop device.  You said that for linear IOs, it's
>> > better to have higher nr_requests than concurrency but can you
>> > elaborate why?
>>
>> I mean, in case of sequential IO, the IO may hit page cache a bit easier,
>> so handling the IO may be quite quick, then it is often more efficient to
>> handle them in one same context (such as, handle one by one from IO
>> queue) than from different contexts (scheduled from different worker
>> threads). And that can be made by setting a bigger nr_requests (queue_depth).
>
> Ah, so, it's about the queueing latency.  Blocking the issuer from
> get_request side for the same level of concurrency would incur a lot
> longer latency before the next IO can be dispatched.  The arbitrary 16
> is still bothering but for now it's fine I guess, but we need to
> revisit the whole thing including WQ_HIGHPRI thing.  Maybe it made
> sense when we had only one thread servicing all IOs but w/ high
> concurrency I don't think it's a good idea.
Yes, I was thinking about it too, but concurrency can improve random I/O throughput a lot in my tests.
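Just to illustrate what the cap buys us, the idea is roughly like the sketch
below; the struct and function names (example_lo, example_queue_io,
MAX_PENDING_PER_WORK) are made up for illustration and are not the actual
drivers/block/loop.c code:

/*
 * Rough, untested sketch of a per-work pending cap; names are invented,
 * not the actual loop driver code.
 */
#include <linux/atomic.h>
#include <linux/workqueue.h>

#define MAX_PENDING_PER_WORK	16	/* the hard-coded limit under discussion */

struct example_lo {
	atomic_t		 pending;	/* I/Os queued but not yet handled;
						 * the worker does atomic_dec() after
						 * finishing each one */
	struct workqueue_struct	*wq;		/* per-device WQ_HIGHPRI workqueue */
	struct work_struct	 serial_work;	/* fallback: drain I/Os one by one */
};

/* Heavily simplified ->queue_rq() side. */
static void example_queue_io(struct example_lo *lo, struct work_struct *io_work)
{
	if (atomic_inc_return(&lo->pending) <= MAX_PENDING_PER_WORK) {
		/* Few I/Os pending: give each its own work item so random
		 * I/O can be served from several worker threads at once. */
		queue_work(lo->wq, io_work);
	} else {
		/* Cap exceeded: push everything through one serialized work
		 * item, which is cheaper for sequential I/O that mostly hits
		 * the page cache and completes quickly. */
		queue_work(lo->wq, &lo->serial_work);
	}
}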
Also, I have patches to use aio/dio for loop; with them one thread is enough, and both the double caching and the high context-switch overhead can be avoided.
I will post them later for review.
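The effect is similar to what a userspace program gets by submitting
O_DIRECT I/O through libaio: a single thread keeps a whole queue of I/Os in
flight while the page cache is bypassed, so nothing is cached twice.  A
rough, untested userspace sketch (the backing file path and queue depth are
arbitrary, just for illustration):

/*
 * Userspace analogy only: async O_DIRECT reads via libaio.
 * Build with: gcc aio_dio.c -o aio_dio -laio
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QD 16		/* I/Os kept in flight by the single thread */
#define BS 4096		/* O_DIRECT needs aligned buffers and sizes */

int main(void)
{
	struct iocb iocbs[QD], *iocbps[QD];
	struct io_event events[QD];
	io_context_t ctx = 0;
	int fd, i, done = 0;

	/* O_DIRECT bypasses the page cache, so data is not cached twice */
	fd = open("/tmp/backing.img", O_RDONLY | O_DIRECT);
	if (fd < 0 || io_setup(QD, &ctx) < 0) {
		fprintf(stderr, "setup failed\n");
		return 1;
	}

	for (i = 0; i < QD; i++) {
		void *buf;
		if (posix_memalign(&buf, BS, BS))
			return 1;
		io_prep_pread(&iocbs[i], fd, buf, BS, (long long)i * BS);
		iocbps[i] = &iocbs[i];
	}

	/* one submission queues all QD requests; no worker threads involved */
	if (io_submit(ctx, QD, iocbps) != QD) {
		fprintf(stderr, "io_submit failed\n");
		return 1;
	}

	/* reap completions as they arrive */
	while (done < QD) {
		int n = io_getevents(ctx, 1, QD - done, events, NULL);
		if (n < 0)
			break;
		done += n;
	}

	io_destroy(ctx);
	close(fd);
	return 0;
}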
> Please feel free to add
>
>  Acked-by: Tejun Heo <tj@kernel.org>
Thanks for your review and ack!
Thanks,
Ming Lei