Subject: Re: [PATCH -mm -v4 3/5] mm, swap: VMA based swap readahead
On Wed, 13 Sep 2017 10:40:19 +0900 Minchan Kim <minchan@kernel.org> wrote:

> Every zram user, e.g., low-end Android devices, has set page-cluster
> to 0 to disable swap readahead, because zram has no seek cost and
> works as a synchronous IO operation, so if we read ahead multiple
> pages, the swap fault latency would be (4K * readahead window size).
> IOW, readahead is meaningful only if it doesn't hurt the faulted
> page's latency.
>
> However, this patch introduces an additional knob,
> /sys/kernel/mm/swap/vma_ra_max_order, alongside page-cluster. That
> means existing users who have disabled swap readahead still get
> readahead until they become aware of the new knob and modify their
> scripts/code to disable vma_ra_max_order as well as page-cluster.
>
> I say it's a *regression* and wanted to fix it, but Huang's opinion
> is that it's not a functional regression, so userspace should fix it
> by itself.
> Please see the details of the discussion at
> http://lkml.kernel.org/r/%3C1505183833-4739-4-git-send-email-minchan@kernel.org%3E
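
To put a number on the quoted latency claim: page-cluster is
logarithmic, so the readahead window is 2^page-cluster pages. With the
default page-cluster=3 a fault reads 2^3 = 8 pages, i.e. 8 * 4K = 32K
of synchronous IO, versus a single 4K page with page-cluster=0. A
hedged sketch of that arithmetic in C (illustrative only; it ignores
zram decompression and allocation costs):

	/* bytes read per fault for a given page-cluster value,
	 * assuming 4K pages */
	static inline unsigned long ra_window_bytes(unsigned int page_cluster)
	{
		return (1UL << page_cluster) * 4096;
	}

	/* page_cluster == 0 -> 4096 bytes; page_cluster == 3 -> 32768 bytes */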

hm, tricky problem. I do agree that linking the physical and virtual
readahead schemes in the proposed fashion is unfortunate. I also agree
that breaking existing setups (a bit) is unfortunate.

Would it help if, when page-cluster is written to zero, we do

printk_once("physical readahead disabled, virtual readahead still
enabled. Disable virtual readahead via
/sys/kernel/mm/swap/vma_ra_max_order").

Or something like that. It's pretty lame, but it should help alert the
zram-readahead-disabling people to the issue?
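
Concretely, a minimal sketch of that, assuming a dedicated sysctl
handler for vm.page-cluster (the handler name here is hypothetical;
in-tree the knob currently goes through a generic proc_dointvec_minmax
entry in kernel/sysctl.c):

	static int page_cluster_sysctl_handler(struct ctl_table *table,
			int write, void __user *buffer, size_t *lenp,
			loff_t *ppos)
	{
		int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);

		/* warn once when physical readahead is turned off while
		 * VMA based (virtual) readahead may still be active */
		if (!ret && write && page_cluster == 0)
			printk_once(KERN_INFO "physical readahead disabled, "
				"virtual readahead still enabled; disable it "
				"via /sys/kernel/mm/swap/vma_ra_max_order\n");

		return ret;
	}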
