Date: 2012-10-26
From: Ni zhan Chen <nizhan.chen@gmail.com>
Subject: Re: [PATCH] mm: readahead: remove redundant ra_pages in file_ra_state
On 10/26/2012 11:28 AM, YingHang Zhu wrote:
> On Fri, Oct 26, 2012 at 10:30 AM, Ni zhan Chen <nizhan.chen@gmail.com> wrote:
>> On 10/26/2012 09:27 AM, Fengguang Wu wrote:
>>> On Fri, Oct 26, 2012 at 11:25:44AM +1100, Dave Chinner wrote:
>>>> On Thu, Oct 25, 2012 at 10:58:26AM +0800, Fengguang Wu wrote:
>>>>> Hi Chen,
>>>>>
>>>>>> But how can the bdi-related ra_pages reflect different files'
>>>>>> readahead windows? These files may be read sequentially, randomly,
>>>>>> and so on.
>>>>> It's simple: sequential reads will get ra_pages readahead size while
>>>>> random reads will not get readahead at all.
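
(For reference, this split is made by page_cache_sync_readahead() in
mm/readahead.c; the following is a sketch of that function as it looked in
kernels of this era, quoted from memory with the comments expanded:)

void page_cache_sync_readahead(struct address_space *mapping,
                               struct file_ra_state *ra, struct file *filp,
                               pgoff_t offset, unsigned long req_size)
{
        /* readahead is disabled entirely for this file */
        if (!ra->ra_pages)
                return;

        /* random access: read exactly what was asked for, no readahead */
        if (filp && (filp->f_mode & FMODE_RANDOM)) {
                force_page_cache_readahead(mapping, filp, offset, req_size);
                return;
        }

        /* sequential access: let the window grow up to ra->ra_pages */
        ondemand_readahead(mapping, ra, filp, false, offset, req_size);
}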
>>>>>
>>>>> Talking about the below chunk: it might hurt someone who explicitly
>>>>> takes advantage of the behavior, however the ra_pages*2 seems more
>>>>> like a hack than a general solution to me: if the user needs
>>>>> POSIX_FADV_SEQUENTIAL to double the max readahead window size to
>>>>> improve IO performance, then why not just increase bdi->ra_pages and
>>>>> benefit all reads? One may argue that it offers some differential
>>>>> behavior to specific applications, however it may also act as a
>>>>> counter-optimization: if root has already tuned bdi->ra_pages to the
>>>>> optimal size, the doubled readahead size will only cost more memory
>>>>> and perhaps IO latency.
>>>>>
>>>>> --- a/mm/fadvise.c
>>>>> +++ b/mm/fadvise.c
>>>>> @@ -87,7 +86,6 @@ SYSCALL_DEFINE(fadvise64_64)(int fd, loff_t offset, loff_t len, int advice)
>>>>>  		spin_unlock(&file->f_lock);
>>>>>  		break;
>>>>>  	case POSIX_FADV_SEQUENTIAL:
>>>>> -		file->f_ra.ra_pages = bdi->ra_pages * 2;
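
(For context, an application opts into the POSIX_FADV_SEQUENTIAL path above
with a call like the following. This is a minimal illustrative userspace
snippet, not part of the patch; the file name is made up.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("big_input.dat", O_RDONLY); /* hypothetical input file */

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /*
         * Hint that the whole file (offset 0, len 0 == to EOF) will be
         * read sequentially. Before this patch the kernel reacted by
         * doubling the file's readahead window via f_ra.ra_pages.
         */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
        /* ... sequential reads of the file would follow here ... */
        close(fd);
        return 0;
}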
>>>> I think we really have to reset file->f_ra.ra_pages here as it is
>>>> not a set-and-forget value. e.g. shrink_readahead_size_eio() can
>>>> reduce ra_pages as a result of IO errors. Hence if you have had io
>>>> errors, telling the kernel that you are now going to do sequential
>>>> IO should reset the readahead to the maximum ra_pages value
>>>> supported....
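
(A reset along the lines Dave suggests might look like this in mm/fadvise.c;
a sketch, not a tested patch:)

	case POSIX_FADV_SEQUENTIAL:
		/* Restore the full per-bdi window, undoing any shrink
		 * applied by shrink_readahead_size_eio() after IO errors. */
		file->f_ra.ra_pages = bdi->ra_pages;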
>>> Good point!
>>>
>>> .... but wait .... this patch removes file->f_ra.ra_pages in all other
>>> places too, so there will be no file->f_ra.ra_pages to be reset here...
>>
>> In his patch,
>>
>>
>> static void shrink_readahead_size_eio(struct file *filp,
>> 					struct file_ra_state *ra)
>> {
>> -	ra->ra_pages /= 4;
>> +	spin_lock(&filp->f_lock);
>> +	filp->f_mode |= FMODE_RANDOM;
>> +	spin_unlock(&filp->f_lock);
>>
>> As the example in the comment above this function shows, the read may
>> still be sequential, and switching to FMODE_RANDOM directly will waste
>> IO bandwidth.
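
(The comment Chen refers to sits above shrink_readahead_size_eio() in
mm/filemap.c and reads roughly as follows:)

/*
 * CD/DVDs are error prone. When a medium error occurs, the driver may fail
 * a _large_ part of the i/o request. Imagine the worst scenario:
 *
 *      ---R__________________________________________B__________
 *         ^ reading here                             ^ bad block(s)
 *
 * read(R) => miss => readahead(R...B) => media error => frustrating retries
 * => failing the whole request => read(R) => read(R+1) =>
 * readahead(R+1...B+1) => bang => read(R+2) => read(R+3) =>
 * readahead(R+3...B+2) => bang => read(R+3) => read(R+4) => ...
 *
 * It is going insane. Fix it by quickly scaling down the readahead size.
 */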
> I've considered this. On the first try I modified file_ra_state.size and
> file_ra_state.async_size directly, like
>
> file_ra_state.async_size = 0;
> file_ra_state.size /= 4;
>
> but as I commented here, we cannot predict whether the bad sectors will
> trash the readahead window; maybe the sectors following the current one
> are fine for normal readahead. It's hard to know, so the current approach
> gives us a chance to slow down gently.

Then when will the FMODE_RANDOM flag set here be checked? Will it
influence ra->ra_pages?

>
> Thanks,
> Ying Zhu
>>> Thanks,
>>> Fengguang
>>>


