Subject: Re: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map
On Thu, 2020-10-29 at 09:54 +0200, Mike Rapoport wrote:
> __kernel_map_pages() on arm64 will also bail out if rodata_full is
> false:
> void __kernel_map_pages(struct page *page, int numpages, int enable)
> {
> 	if (!debug_pagealloc_enabled() && !rodata_full)
> 		return;
>
> 	set_memory_valid((unsigned long)page_address(page), numpages,
> 			 enable);
> }
>
> So using set_direct_map() to map back pages removed from the direct
> map
> with __kernel_map_pages() seems safe to me.

Heh, one of us must have a simple boolean error in our head. I hope
it's not me! :) I'll try one more time.

__kernel_map_pages() will bail out if rodata_full is false **AND**
debug page alloc is off. So it will only bail under conditions where
there could be nothing unmapped on the direct map.

Equivalent logic would be:
	if (!(debug_pagealloc_enabled() || rodata_full))
		return;

Or:
	if (debug_pagealloc_enabled() || rodata_full)
		set_memory_valid(blah)

So if either is on, the existing code will try to re-map. But the
set_direct_map_*() helpers only actually do anything if rodata_full is
on. So switching hibernate to set_direct_map() will cause the remap to
be missed for the debug page alloc case with !rodata_full.

It also breaks normal debug page alloc usage with !rodata_full for
similar reasons after patch 3: the pages would never get unmapped.
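
To summarize the combinations as I understand them:

	debug_pagealloc  rodata_full  __kernel_map_pages()   set_direct_map_*()
	off              off          bails (nothing to do)  no-op (fine)
	off              on           remaps                 remaps
	on               off          remaps                 no-op  <-- broken
	on               on           remaps                 remaps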

