Subject: Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
From: David Hildenbrand <>
Date: Thu, 24 Oct 2019 10:51:28 +0200
On 24.10.19 10:42, Michal Hocko wrote:
> On Wed 23-10-19 12:03:51, David Hildenbrand wrote:
>>> Do you see any downsides?
>>
>> The only downside I see is that we get more false negatives on
>> has_unmovable_pages(), eventually resulting in the offlining stage after
>> isolation looping forever (as some PageOffline() pages are not movable
>> (especially, XEN balloon, HyperV balloon), there won't be progress).
>>
>> I somewhat don't like forcing everybody that uses PageOffline() (especially
>> all users of balloon compaction) to implement memory notifiers just to avoid
>> that. Maybe we even want to use PageOffline() in the future in the core
>> (e.g., for memory holes instead of PG_reserved or similar).
>
> There is only a handful of those and we need to deal with them anyway.
> If you do not want to enforce them to create their own notifiers then we
> can accommodate the hotplug code. __test_page_isolated_in_pageblock resp.
Yeah, I would prefer the offlining code to be able to deal with that without notifier changes for all users.
> the call chain up can distinguish temporary and permanent failures
> (EAGAIN vs. EBUSY). The current state, where we always return EBUSY and
> keep retrying forever, is not optimal at all, right?

Very right!

> A referenced PageOffline could be an example of EBUSY; all other
> failures, where we are effectively waiting for pages to finally get
> freed, would be EAGAIN.
We have to watch out for PageOffline() pages that are actually movable
(balloon compaction). But that doesn't sound too hard.

> It is a bit late in the process because a large portion of the work has
> been done already, but this doesn't sound like something to lose sleep
> over.
Right. I'll look into that to find out if this would work, and see if I can reproduce what I described at all (theoretical thoughts) :)
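Something like the following is roughly what I have in mind (just an
untested sketch; the helper name and where exactly it would hook into
__test_page_isolated_in_pageblock() / the offlining retry loop are made
up):

/*
 * Sketch only: classify an isolated PageOffline() page so the
 * offlining code can distinguish transient from permanent failures.
 */
static int page_offline_isolation_status(struct page *page)
{
	/*
	 * Movable PageOffline() pages (balloon compaction) can still
	 * be migrated, so a failure here is only temporary.
	 */
	if (__PageMovable(page))
		return -EAGAIN;

	/*
	 * A PageOffline() page that is still referenced (e.g., XEN
	 * balloon, Hyper-V balloon) won't go away by retrying:
	 * permanent failure.
	 */
	if (page_count(page))
		return -EBUSY;

	/* Reference count of 0: safe to skip while offlining. */
	return 0;
}

The retry loop in the offlining code could then keep retrying on
-EAGAIN but fail fast on -EBUSY instead of looping forever.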
Again, thanks for looking into this Michal!
--
Thanks,
David / dhildenb