From: Liam R. Howlett <Liam.Howlett@oracle.com>
Subject: Re: [PATCH v7 00/70] Introducing the Maple Tree
Date: Tue, 19 Apr 2022
* Yu Zhao <yuzhao@google.com> [220414 12:57]:
> On Thu, Apr 14, 2022 at 7:57 AM Liam Howlett <liam.howlett@oracle.com> wrote:
> >
> > * Andrew Morton <akpm@linux-foundation.org> [220414 02:51]:
> > > On Mon, 4 Apr 2022 14:35:26 +0000 Liam Howlett <liam.howlett@oracle.com> wrote:
> > >
> > > > Please add this patch set to your branch. They are based on v5.18-rc1.
> > >
> > > Do we get a nice [0/n] cover letter telling us all about this?
> > >
> > > I have that all merged up and it compiles.
> > >
> > > https://lkml.kernel.org/r/20220402094550.129-1-lipeifeng@oppo.com and
> > > https://lkml.kernel.org/r/20220412081014.399-1-lipeifeng@oppo.com are
> > > disabled for now.
> > >
> > >
> > > Several patches were marked
> > >
> > > From: Liam
> > > Signed-off-by: Matthew
> > > Signed-off-by: Liam
> > >
> > > Which makes me wonder whether the attribution was correct. Please
> > > double-check.
> >
> > I'll have a look, thanks.
> >
> > >
> > >
> > >
> > > I made a lame attempt to fix up mglru's get_next_vma(), and it probably
> > > wants a revisit in the maple-tree world anyway. Please check this and
> > > send me a better version ;)
> >
> > What you have below will function, but there is probably a more maple
> > way of doing things. Happy to help get the sap flowing - it is that
> > time of the year after all ;)
>
> Thanks. Please let me know when the more maple way is ready. I'll test with it.

Here is a patch to replace the fixup below. I suspect we could do
better, but for now I just used a VMA_ITERATOR.
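
In case a sketch helps review, the intended shape of the iteration is
roughly the following (a sketch only, reusing the names from the patch
below; mm and addr are stand-ins for the mm_struct and start address,
locking elided):

	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, addr);	/* iteration starts at addr */

	for_each_mte_vma(vmi, vma) {
		/*
		 * VMAs come back in ascending address order and any gap
		 * after addr is skipped, so vma may start well past addr.
		 */
	}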

This could be put into d2035fb88f9f "mm: multi-gen LRU: support page
table walks" if multi-gen LRU goes in after the maple tree.


>
> Also I noticed, for the end address to walk_page_range(), Matthew used
> -1 and you used ULONG_MAX in the maple branch; Andrew used TASK_SIZE
> below. Having a single value throughout would be great.
>
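
(Side note on that: -1 and ULONG_MAX are the same value once converted
to unsigned long, i.e.

	BUILD_BUG_ON((unsigned long)-1 != ULONG_MAX);	/* never triggers */

so the only real outlier is TASK_SIZE, the arch-specific top of the user
address space. Since these walk ops don't appear to have a ->pte_hole
callback, walk_page_range() only visits VMAs and any of the three end
values behaves the same here.)
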
> > > --- a/mm/vmscan.c~mglru-vs-maple-tree
> > > +++ a/mm/vmscan.c
> > > @@ -3704,7 +3704,7 @@ static bool get_next_vma(struct mm_walk
> > >  
> > >  	while (walk->vma) {
> > >  		if (next >= walk->vma->vm_end) {
> > > -			walk->vma = walk->vma->vm_next;
> > > +			walk->vma = find_vma(walk->mm, walk->vma->vm_end);
> > >  			continue;
> > >  		}
> > >  
> > > @@ -3712,7 +3712,7 @@ static bool get_next_vma(struct mm_walk
> > >  			return false;
> > >  
> > >  		if (should_skip_vma(walk->vma->vm_start, walk->vma->vm_end, walk)) {
> > > -			walk->vma = walk->vma->vm_next;
> > > +			walk->vma = find_vma(walk->mm, walk->vma->vm_end);
> > >  			continue;
> > >  		}
> > >  
> > > @@ -4062,7 +4062,7 @@ static void walk_mm(struct lruvec *lruve
> > > 		/* the caller might be holding the lock for write */
> > > 		if (mmap_read_trylock(mm)) {
> > > 			unsigned long start = walk->next_addr;
> > > -			unsigned long end = mm->highest_vm_end;
> > > +			unsigned long end = TASK_SIZE;
> > >  
> > > 			err = walk_page_range(mm, start, end, &mm_walk_ops, walk);

From 34af6dd5b84ecbe3a0b20f98acb4034d5708685c Mon Sep 17 00:00:00 2001
From: Andrew Morton <akpm@linux-foundation.org>
Date: Thu, 14 Apr 2022 12:16:57 -0700
Subject: [PATCH 1/3] mm/vmscan: Use VMA_ITERATOR in get_next_vma()

The next vma may actually be many VMAs away, so use the VMA_ITERATOR to
continue searching from vm_end onwards.
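
Also walk to ULONG_MAX in walk_mm(): mm->highest_vm_end goes away with
the maple tree, and walk_page_range() stops after the last VMA in any
case.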

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 mm/vmscan.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d4a7d2bd276d..0f5c53996365 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3697,24 +3697,21 @@ static bool get_next_vma(struct mm_walk *walk, unsigned long mask, unsigned long
 			 unsigned long *start, unsigned long *end)
 {
 	unsigned long next = round_up(*end, size);
+	VMA_ITERATOR(vmi, walk->mm, walk->vma->vm_end);
 
 	VM_BUG_ON(mask & size);
 	VM_BUG_ON(*start >= *end);
 	VM_BUG_ON((next & mask) != (*start & mask));
 
-	while (walk->vma) {
-		if (next >= walk->vma->vm_end) {
-			walk->vma = walk->vma->vm_next;
+	for_each_mte_vma(vmi, walk->vma) {
+		if (next >= walk->vma->vm_end)
 			continue;
-		}
 
 		if ((next & mask) != (walk->vma->vm_start & mask))
 			return false;
 
-		if (should_skip_vma(walk->vma->vm_start, walk->vma->vm_end, walk)) {
-			walk->vma = walk->vma->vm_next;
+		if (should_skip_vma(walk->vma->vm_start, walk->vma->vm_end, walk))
 			continue;
-		}
 
 		*start = max(next, walk->vma->vm_start);
 		next = (next | ~mask) + 1;
@@ -4062,7 +4059,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_
 		/* the caller might be holding the lock for write */
 		if (mmap_read_trylock(mm)) {
 			unsigned long start = walk->next_addr;
-			unsigned long end = mm->highest_vm_end;
+			unsigned long end = ULONG_MAX;
 
 			err = walk_page_range(mm, start, end, &mm_walk_ops, walk);

--
2.34.1