Subject: Re: [RFC 2/6] mm/migrate_pages: split unmap_and_move() to _unmap() and _move()
On Mon, Sep 26, 2022 at 6:52 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Alistair Popple <apopple@nvidia.com> writes:
>
> > Yang Shi <shy828301@gmail.com> writes:
> >
> >> On Mon, Sep 26, 2022 at 2:37 AM Alistair Popple <apopple@nvidia.com> wrote:
> >>>
> >>>
> >>> Huang Ying <ying.huang@intel.com> writes:
> >>>
> >>> > This is a preparation patch to batch the page unmapping and moving
> >>> > for normal pages and THPs.
> >>> >
> >>> > In this patch, unmap_and_move() is split into migrate_page_unmap() and
> >>> > migrate_page_move(), so that _unmap() and _move() can be batched in
> >>> > separate loops later. To pass some information between unmap and
> >>> > move, the otherwise unused newpage->mapping and newpage->private
> >>> > fields are used.
> >>>
> >>> This looks like it could cause a deadlock between two threads migrating
> >>> the same pages if force == true && mode != MIGRATE_ASYNC, because
> >>> migrate_page_unmap() will call lock_page() while holding the locks on
> >>> other pages in the list. The two threads could therefore deadlock if
> >>> their lists contain the same pages in a different order.
> >>
> >> It seems unlikely to me since a page has to be isolated from the LRU
> >> before migration. LRU isolation is atomic, so the two threads are
> >> unlikely to see the same pages on both lists.
> >
> > Oh thanks! That is a good point, and I agree that since LRU isolation
> > is atomic the two threads won't see the same pages. migrate_vma_setup()
> > does LRU isolation after locking the page, which is why the potential
> > exists there. We could potentially switch that around, but given that
> > ZONE_DEVICE pages aren't on an LRU it wouldn't help much.
> >
> >> But there might be other cases which may incur a deadlock, for example,
> >> filesystem writeback IIUC. Some filesystems may lock a bunch of pages
> >> and then write them back in a batch. The same pages may be on the
> >> migration list while also being dirty and seen by writeback. I'm not
> >> sure whether I'm missing something that could prevent such a deadlock
> >> from happening.
> >
> > I'm not overly familiar with that area but I would assume any filesystem
> > code doing this would already have to deal with deadlock potential.
>
> Thank you very much for pointing this out. I think the deadlock is a
> real issue. In any case, we shouldn't forbid other places in the kernel
> from locking 2 pages at the same time.
>
> The simplest solution is to batch page migration only if mode ==
> MIGRATE_ASYNC. Then we may consider falling back to non-batch mode if
> mode != MIGRATE_ASYNC and the page trylock fails.

Seems like so...
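
FWIW, a rough sketch of how that could look in the batched unmap pass
(pseudocode only, not the actual patch; trylock_page(),
list_for_each_entry_safe() and MIGRATE_ASYNC are real, but the
surrounding structure and variable names are just assumed from the
existing migrate_pages() code):

	/* first pass: unmap in a batch, never sleeping on the page lock */
	list_for_each_entry_safe(page, page2, from, lru) {
		if (!trylock_page(page)) {
			if (mode == MIGRATE_ASYNC)
				continue;	/* async callers just retry later */
			/*
			 * Sleeping in lock_page() here, while other pages in
			 * the batch are already locked, is what opens the
			 * ABBA window (e.g. against writeback locking the
			 * same pages in a different order).  Fall back to
			 * the existing single-page unmap_and_move() path
			 * for this page instead (call elided in this sketch).
			 */
			continue;
		}
		/* lock taken without blocking: do migrate_page_unmap() here */
	}
	/* second pass: migrate_page_move() for everything unmapped above */

That way lock_page() is only ever called without other page locks held,
and the batching is simply skipped where it could block.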

>
> Best Regards,
> Huang, Ying
>
> [snip]
