Date: 2015-05-04
Subject: Re: [PATCH v6] block: loop: avoiding too many pending per work I/O
From: Ming Lei
On Sun, May 3, 2015 at 9:52 AM, Tejun Heo <tj@kernel.org> wrote:
> Hello,
>
> On Sat, May 02, 2015 at 10:56:20PM +0800, Ming Lei wrote:
>> > Maybe just cap max_active to NR_OF_LOOP_DEVS * 16 or something? But I don't know,
>>
>> It might not work, because loop devices can be nested (as with the
>> Fedora live CD), and in theory max_active would have to be set to
>> loop's queue depth * nr_loop, otherwise there is still a possibility
>> of hanging.
>>
>> That is why this patch is introduced.
>
> If loop devices can be stacked, then regardless of what you do with
> nr_active, it may deadlock. There needs to be a rescuer per nesting
> level (or just one per device). This means that the current code is
> broken.

Yes.
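
As an aside, one way to get "one rescuer per device" is to give each
loop device its own WQ_MEM_RECLAIM workqueue, since every such
workqueue carries a dedicated rescuer thread. A rough sketch only
(the struct layout and function name below are hypothetical, not the
code in this patch):

#include <linux/workqueue.h>

struct loop_device {
        int lo_number;
        struct workqueue_struct *wq;    /* per-device, has a rescuer */
        /* ... */
};

static int loop_prepare_queue(struct loop_device *lo)
{
        /*
         * WQ_MEM_RECLAIM gives this workqueue its own rescuer, so each
         * nesting level of stacked loop devices can still make forward
         * progress under memory pressure.
         */
        lo->wq = alloc_workqueue("loop%d",
                                 WQ_MEM_RECLAIM | WQ_UNBOUND, 0,
                                 lo->lo_number);
        return lo->wq ? 0 : -ENOMEM;
}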

>> > How many concurrent workers are we talking about, and why are we
>> > capping per-queue concurrency from the worker-pool side instead of
>> > the command-tag side?
>>
>> I think there is a performance advantage in making the queue depth a
>> bit larger, because it helps keep the queue pipeline full. Also,
>> queue depth usually means how many requests the hardware can queue,
>> which is somewhat different from per-queue concurrency.
>
> I'm not really following. Can you please elaborate?

In the loop-mq case, a bigger queue_depth often gives better
performance for sequential reads/writes that hit the page cache:
those requests complete very quickly, so it is better to run them as
a batch inside a single invocation of the work function. Simply
decreasing the queue depth would hurt performance in this case.
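
To illustrate the batching I mean, the work function can drain
everything queued so far in one invocation, along these lines (a
simplified sketch: the list/lock field names are invented, and
loop_handle_cmd() stands in for the real per-command I/O path):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct loop_cmd {
        struct list_head list;          /* one pending command */
};

struct loop_device {
        spinlock_t lock;
        struct list_head pending;       /* commands queued so far */
        struct work_struct work;
};

static void loop_handle_cmd(struct loop_cmd *cmd);  /* the actual I/O */

static void loop_queue_work_fn(struct work_struct *work)
{
        struct loop_device *lo = container_of(work,
                                              struct loop_device, work);
        LIST_HEAD(batch);

        /* Grab the whole backlog in one shot ... */
        spin_lock_irq(&lo->lock);
        list_splice_init(&lo->pending, &batch);
        spin_unlock_irq(&lo->lock);

        /*
         * ... and service it back to back, so quick page-cache
         * reads/writes are handled as a batch in a single work
         * invocation instead of paying one worker wakeup per request.
         */
        while (!list_empty(&batch)) {
                struct loop_cmd *cmd = list_first_entry(&batch,
                                                struct loop_cmd, list);
                list_del_init(&cmd->list);
                loop_handle_cmd(cmd);
        }
}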

Thanks,
Ming Lei

