From: Haiyue Wang <haiyue.wang@intel.com>
Date: Mon, 15 Aug 2022
Subject: RE: [PATCH v4 1/2] mm: migration: fix the FOLL_GET failure on following huge page
> -----Original Message-----
> From: Alistair Popple <apopple@nvidia.com>
> Sent: Monday, August 15, 2022 12:29
> To: linux-mm@kvack.org; linux-kernel@vger.kernel.org; Wang, Haiyue <haiyue.wang@intel.com>
> Cc: akpm@linux-foundation.org; david@redhat.com; linmiaohe@huawei.com; Huang, Ying
> <ying.huang@intel.com>; songmuchun@bytedance.com; naoya.horiguchi@linux.dev; alex.sierra@amd.com; Wang,
> Haiyue <haiyue.wang@intel.com>
> Subject: Re: [PATCH v4 1/2] mm: migration: fix the FOLL_GET failure on following huge page
>
> On Monday, 15 August 2022 11:59:08 AM AEST Haiyue Wang wrote:
> > Not all huge page APIs support the FOLL_GET option, so __NR_move_pages
> > will fail to get the page node information for huge pages.
>
> I think you should be explicit in the commit message about which functions do
> not support FOLL_GET as it's not obvious what support needs to be added before
> this fix can be reverted.

Yes, makes sense, will add them in the new patch.
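
For reference, the failing path is the status-only mode of move_pages(2)
(nodes == NULL), which ends up in do_pages_stat_array(). A minimal userspace
reproducer sketch (hypothetical test code, not part of this patch; assumes
x86_64 with a 1GB hugetlb page reserved, e.g. hugepagesz=1G hugepages=1,
and linking with -lnuma):

#define _GNU_SOURCE
#include <numaif.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* 30 == log2(1GB), MAP_HUGE_SHIFT == 26 */
#endif

int main(void)
{
	size_t len = 1UL << 30;	/* one 1GB huge page */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
			  -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	*(volatile char *)addr = 1;	/* fault the huge page in */

	void *pages[1] = { addr };
	int status[1] = { -1 };

	/* nodes == NULL: query node info instead of migrating */
	if (move_pages(0, 1, pages, NULL, status, 0) != 0) {
		perror("move_pages");
		return 1;
	}

	/* without the fix, status[0] is -ENOENT instead of a node id */
	printf("node = %d\n", status[0]);
	return 0;
}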

>
> Thanks.
>
> - Alistair
>
> > This is a temporary solution to mitigate the side effect of the racing fix.
> >
> > After FOLL_GET support for following huge pages is added, this fix can be
> > reverted safely.
> >
> > Fixes: 4cd614841c06 ("mm: migration: fix possible do_pages_stat_array racing with memory offline")
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > ---
> >  mm/migrate.c | 10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 6a1597c92261..581dfaad9257 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -1848,6 +1848,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> >
> >  	for (i = 0; i < nr_pages; i++) {
> >  		unsigned long addr = (unsigned long)(*pages);
> > +		unsigned int foll_flags = FOLL_DUMP;
> >  		struct vm_area_struct *vma;
> >  		struct page *page;
> >  		int err = -EFAULT;
> > @@ -1856,8 +1857,12 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> >  		if (!vma)
> >  			goto set_status;
> >
> > +		/* Not all huge page follow APIs support 'FOLL_GET' */
> > +		if (!is_vm_hugetlb_page(vma))
> > +			foll_flags |= FOLL_GET;
> > +
> >  		/* FOLL_DUMP to ignore special (like zero) pages */
> > -		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
> > +		page = follow_page(vma, addr, foll_flags);
> >
> >  		err = PTR_ERR(page);
> >  		if (IS_ERR(page))
> > @@ -1865,7 +1870,8 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> >
> >  		if (page && !is_zone_device_page(page)) {
> >  			err = page_to_nid(page);
> > -			put_page(page);
> > +			if (foll_flags & FOLL_GET)
> > +				put_page(page);
> >  		} else {
> >  			err = -ENOENT;
> >  		}
> >
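
The subtle part of the last hunk: follow_page() elevates the page refcount
only when FOLL_GET is in the flags, so put_page() must be skipped when no
reference was taken. The pairing rule, as an illustrative fragment (not
additional patch code):

	page = follow_page(vma, addr, foll_flags);
	if (!IS_ERR_OR_NULL(page)) {
		err = page_to_nid(page);
		if (foll_flags & FOLL_GET)
			put_page(page);	/* drop only a reference we actually took */
	}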

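On the question of what support needs to be added before this fix can be
reverted: a rough sketch (an assumption about the direction, not the actual
follow-up patch) of how a helper such as follow_huge_pud() could honor
FOLL_GET by taking a real reference with try_grab_page(); it assumes the
int-returning try_grab_page() that returns 0 on success:

/*
 * Sketch only: teach follow_huge_pud() to take a page reference when
 * FOLL_GET is requested, so do_pages_stat_array() could go back to
 * passing FOLL_GET unconditionally.
 */
struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
			     pud_t *pud, int flags)
{
	struct page *page;

	if (WARN_ON_ONCE(flags & FOLL_PIN))	/* FOLL_PIN stays unsupported */
		return NULL;

	page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);

	/* take a reference only when the caller asked for one */
	if ((flags & FOLL_GET) && try_grab_page(page, flags))
		page = NULL;	/* ref could not be taken; treat as not found */

	return page;
}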