From: Huang Ying <ying.huang@intel.com>
Subject: [PATCH -mm -v4 2/5] mm, swap: Fix swap readahead marking

In the original implementation, it is possible that pages already in the
swap cache (not newly read ahead) are marked as readahead pages. This
makes the swap readahead statistics wrong and influences the swap
readahead algorithm too.

This is fixed by marking a page as a readahead page only if it is newly
allocated and read from the disk.

When testing with linpack, the swap readahead hit rate increased from
~66% to ~86% after the fix.
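To make the idea concrete, here is a minimal userspace sketch (not kernel
code, and not part of the patch): a lookup-or-allocate helper reports
through an out-parameter whether the entry is new, mirroring the
page_allocated out-parameter of __read_swap_cache_async() in the patch
below, and only newly allocated entries are marked and counted as
readahead. The cache_entry, lookup_or_alloc() and readahead() names are
made up for illustration.

/*
 * Illustrative userspace sketch: only entries that the readahead loop
 * itself allocates are marked and counted; existing cache hits are
 * left untouched.
 */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_SLOTS 16

struct cache_entry {
	bool present;		/* already in the "swap cache" */
	bool readahead;		/* marked as brought in by readahead */
};

static struct cache_entry cache[CACHE_SLOTS];
static unsigned long swap_ra_events;

/*
 * Look up a slot; if it is missing, allocate it and report that through
 * *newly_allocated.
 */
static struct cache_entry *lookup_or_alloc(unsigned long offset,
					   bool *newly_allocated)
{
	struct cache_entry *e = &cache[offset % CACHE_SLOTS];

	*newly_allocated = !e->present;
	e->present = true;
	return e;
}

static void readahead(unsigned long fault_offset, unsigned long nr)
{
	unsigned long off;

	for (off = fault_offset; off < fault_offset + nr; off++) {
		bool newly_allocated;
		struct cache_entry *e = lookup_or_alloc(off, &newly_allocated);

		/*
		 * Only entries we allocated ourselves count as readahead;
		 * pre-existing cache entries keep their state.
		 */
		if (newly_allocated && off != fault_offset) {
			e->readahead = true;
			swap_ra_events++;
		}
	}
}

int main(void)
{
	cache[3].present = true;	/* simulate an existing swap-cache page */

	readahead(1, 4);		/* fault at 1, read ahead offsets 1..4 */

	printf("SWAP_RA events: %lu\n", swap_ra_events);	/* 2, not 3 */
	return 0;
}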

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
---
 mm/swap_state.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index d1bdb31cab13..a901afe9da61 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -498,7 +498,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	unsigned long start_offset, end_offset;
 	unsigned long mask;
 	struct blk_plug plug;
-	bool do_poll = true;
+	bool do_poll = true, page_allocated;
 
 	mask = swapin_nr_pages(offset) - 1;
 	if (!mask)
@@ -514,14 +514,18 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		page = read_swap_cache_async(swp_entry(swp_type(entry), offset),
-						gfp_mask, vma, addr, false);
+		page = __read_swap_cache_async(
+			swp_entry(swp_type(entry), offset),
+			gfp_mask, vma, addr, &page_allocated);
 		if (!page)
 			continue;
-		if (offset != entry_offset &&
-		    likely(!PageTransCompound(page))) {
-			SetPageReadahead(page);
-			count_vm_event(SWAP_RA);
+		if (page_allocated) {
+			swap_readpage(page, false);
+			if (offset != entry_offset &&
+			    likely(!PageTransCompound(page))) {
+				SetPageReadahead(page);
+				count_vm_event(SWAP_RA);
+			}
 		}
 		put_page(page);
 	}
-- 
2.11.0