Subject: Re: [PATCH v2] mm: remove redundant lru_add_drain() prior to unmapping pages
On Thu, Dec 14, 2023 at 03:59:00PM -0800, Jianfeng Wang wrote:
> On 12/14/23 3:00 PM, Matthew Wilcox wrote:
> > On Thu, Dec 14, 2023 at 02:27:17PM -0800, Jianfeng Wang wrote:
> >> When unmapping VMA pages, pages will be gathered in batch and released by
> >> tlb_finish_mmu() if CONFIG_MMU_GATHER_NO_GATHER is not set. The function
> >> tlb_finish_mmu() is responsible for calling free_pages_and_swap_cache(),
> >> which calls lru_add_drain() to drain cached pages in folio_batch before
> >> releasing gathered pages. Thus, it is redundant to call lru_add_drain()
> >> before gathering pages, if CONFIG_MMU_GATHER_NO_GATHER is not set.
> >>
> >> Remove lru_add_drain() prior to gathering and unmapping pages in
> >> exit_mmap() and unmap_region() if CONFIG_MMU_GATHER_NO_GATHER is not set.
> >>
> >> Note that the page unmapping process in the OOM killer (e.g., in
> >> __oom_reap_task_mm()) also uses tlb_finish_mmu() and does not have a
> >> redundant lru_add_drain(). So, this commit makes the code more consistent.
> >
> > Shouldn't we put this in __tlb_gather_mmu() which already has the
> > CONFIG_MMU_GATHER_NO_GATHER ifdefs? That would presumably help with, e.g.,
> > zap_page_range_single() too.
> >
>
> Thanks. It makes sense to me.
> This commit is motivated by a workload that uses mmap/munmap heavily.
> Since the mmu_gather feature is also used by hugetlb, madvise, mprotect,
> etc., I prefer to have another standalone commit (following this one)
> that moves lru_add_drain() into __tlb_gather_mmu() to unify these cases,
> so that no redundant lru_add_drain() calls are made when using mmu_gather.

That's not normally the approach we take.
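For illustration only, a rough sketch of what centralising the drain in
__tlb_gather_mmu() could look like. The function body below is an assumption
made for the example, not a quote of the current mm/mmu_gather.c; only
lru_add_drain(), tlb_finish_mmu() and free_pages_and_swap_cache() are taken
from the discussion above.

	/*
	 * Sketch only: centralise the drain so exit_mmap(), unmap_region(),
	 * zap_page_range_single() and other mmu_gather users no longer need
	 * their own lru_add_drain() calls before gathering pages.
	 */
	static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
				     bool fullmm)
	{
		tlb->mm = mm;
		tlb->fullmm = fullmm;

	#ifdef CONFIG_MMU_GATHER_NO_GATHER
		/*
		 * No gather batch: pages are freed one at a time, so the
		 * per-CPU folio batches still have to be drained up front,
		 * as the call sites do today.
		 */
		lru_add_drain();
	#endif
		/*
		 * With batching, tlb_finish_mmu() ends up calling
		 * free_pages_and_swap_cache(), which drains the folio batches
		 * right before releasing the gathered pages, so no drain is
		 * needed here.
		 */

		/* ... remaining mmu_gather initialisation as in the kernel ... */
	}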
