 
Subject: Re: [PATCH] mm,memory_hotplug: Explicitly pass the head to isolate_huge_page
On Tue 12-02-19 14:45:49, Oscar Salvador wrote:
> On Tue, Feb 12, 2019 at 09:33:29AM +0100, Michal Hocko wrote:
> > >
> > > if (PageHuge(page)) {
> > > 	struct page *head = compound_head(page);
> > > -	pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
> > > 	if (compound_order(head) > PFN_SECTION_SHIFT) {
> > > 		ret = -EBUSY;
> > > 		break;
> > > 	}
> >
> > Why are we doing this, btw?
>
> I assume you are referring to:
>
> > if (compound_order(head) > PFN_SECTION_SHIFT) {
> > 	ret = -EBUSY;
> > 	break;
> > }

yes.

> I thought it was in case we stumble upon a gigantic page, and commit
> c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
> confirms it.
>
> But I am not really sure whether the above condition still holds on powerpc.
> I wanted to check it, but it is a bit trickier there than on x86_64 because
> of the different hugetlb sizes.
> Could it be that the above condition is false, yet the order of that
> hugetlb page still goes beyond MAX_ORDER? That is something I have to check.
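
To make the numbers concrete, here is a minimal user-space sketch of the
arithmetic behind that check. The values are assumptions for a common x86_64
config (PAGE_SHIFT = 12, SECTION_SIZE_BITS = 27, i.e. 128MiB sections), not
something taken from this thread:

	#include <stdio.h>

	/* Assumed x86_64 defaults; other arches (e.g. powerpc) differ. */
	#define PAGE_SHIFT		12	/* 4KiB base pages */
	#define SECTION_SIZE_BITS	27	/* 128MiB memory sections */
	#define PFN_SECTION_SHIFT	(SECTION_SIZE_BITS - PAGE_SHIFT)	/* 15 */

	int main(void)
	{
		unsigned int order_2m = 21 - PAGE_SHIFT;	/* 2MiB hugetlb: order 9 */
		unsigned int order_1g = 30 - PAGE_SHIFT;	/* 1GiB gigantic: order 18 */

		/* The check bails out when the page is larger than a section. */
		printf("2MiB page: order %u > %u? %s\n", order_2m, PFN_SECTION_SHIFT,
		       order_2m > PFN_SECTION_SHIFT ? "yes -> -EBUSY" : "no");
		printf("1GiB page: order %u > %u? %s\n", order_1g, PFN_SECTION_SHIFT,
		       order_1g > PFN_SECTION_SHIFT ? "yes -> -EBUSY" : "no");
		return 0;
	}

With those defaults only the 1GiB gigantic page trips the check, which matches
the gigantic-page reading above; on powerpc both the hugetlb sizes and the
section size differ, hence the uncertainty.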

This check doesn't make much sense in principle. Why should we bail out
based on the section size? We are offlining a pfn range. All we care
about is whether the hugetlb page is migratable.
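
For illustration, a sketch of what keying the bail-out on migratability
instead might look like. hugepage_migration_supported(), page_hstate() and
isolate_huge_page() are existing helpers in trees of this era, but this hunk
is only a sketch for the discussion, not the patch under review:

	if (PageHuge(page)) {
		struct page *head = compound_head(page);

		/* Bail out only if this hugetlb page cannot be migrated. */
		if (!hugepage_migration_supported(page_hstate(head))) {
			ret = -EBUSY;
			break;
		}
		/* Pass the head, as the patch in $SUBJECT does. */
		isolate_huge_page(head, &source);
		continue;
	}
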
--
Michal Hocko
SUSE Labs
