 
Subject: [PATCH 06/12] mm/vmscan: Optimise shrink_page_list for non-PMD-sized folios
    A large folio which is smaller than a PMD does not need to do the extra
    work in try_to_unmap() of trying to split a PMD entry.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    ---
    mm/vmscan.c | 3 ++-
    1 file changed, 2 insertions(+), 1 deletion(-)

    diff --git a/mm/vmscan.c b/mm/vmscan.c
    index 45665874082d..3181bf2f8a37 100644
    --- a/mm/vmscan.c
    +++ b/mm/vmscan.c
@@ -1754,7 +1754,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = PageSwapBacked(page);
 
-			if (unlikely(PageTransHuge(page)))
+			if (PageTransHuge(page) &&
+			    thp_order(page) >= HPAGE_PMD_ORDER)
 				flags |= TTU_SPLIT_HUGE_PMD;
 
 			try_to_unmap(page, flags);
    --
    2.34.1
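
For readers outside the kernel tree, the following standalone userspace C sketch (not the kernel code itself) illustrates the condition the patch introduces: only a folio whose order is at least HPAGE_PMD_ORDER can be mapped by a PMD, so only those need TTU_SPLIT_HUGE_PMD; smaller large folios skip that splitting work in try_to_unmap(). The helper name needs_pmd_split() is invented for illustration, and the value 9 for HPAGE_PMD_ORDER is an assumption (2MiB PMD with 4KiB base pages, as on x86-64); the real constant is architecture-dependent.

/*
 * Standalone sketch, not kernel code.  Shows when a large folio would
 * need TTU_SPLIT_HUGE_PMD under the logic added by this patch.
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER 9	/* assumption: 2MiB PMD / 4KiB base pages */

/* Hypothetical helper mirroring the new check in shrink_page_list(). */
static bool needs_pmd_split(unsigned int folio_order)
{
	/* An order-0 page is not a large folio at all. */
	if (folio_order == 0)
		return false;
	/* Only a PMD-sized (or larger) folio can be mapped by a PMD. */
	return folio_order >= HPAGE_PMD_ORDER;
}

int main(void)
{
	unsigned int orders[] = { 0, 2, 4, 9 };
	size_t i;

	for (i = 0; i < sizeof(orders) / sizeof(orders[0]); i++)
		printf("order %u -> %s TTU_SPLIT_HUGE_PMD\n", orders[i],
		       needs_pmd_split(orders[i]) ? "set" : "skip");
	return 0;
}

Built with a plain cc invocation, this prints that orders 2 and 4 (typical non-PMD-sized large folios) skip the flag while order 9 sets it, which is the saving the commit message describes.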