SubjectRe: [PATCH 1/1] mm: compaction: avoid fast_isolate_around() to set pageblock_skip on reserved pages
On Wed, Nov 25, 2020 at 04:38:16PM -0500, Andrea Arcangeli wrote:
> On Wed, Nov 25, 2020 at 11:04:14PM +0200, Mike Rapoport wrote:
> > I think the very root cause is how e820__memblock_setup() registers
> > memory with memblock:
> >
> > if (entry->type == E820_TYPE_SOFT_RESERVED)
> > memblock_reserve(entry->addr, entry->size);
> >
> > if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
> > continue;
> >
> > memblock_add(entry->addr, entry->size);
> >
> > From that point the system has an inconsistent view of RAM in
> > memblock.memory and memblock.reserved, which is then translated to
> > memmap etc.
> >
> > Unfortunately, simply adding all RAM to memblock is not possible as
> > there are systems for which "the addresses listed in the reserved
> > range must never be accessed, or (as we discovered) even be reachable
> > by an active page table entry" [1].
> >
> > [1] https://lore.kernel.org/lkml/20200528151510.GA6154@raspberrypi/
>
> It looks like what's missing is a memblock_reserve call, which I don't
> think would interfere at all with the issue above since it won't create
> a direct mapping and it'll simply invoke the second stage that wasn't
> invoked here.
>
> I guess this would have a better chance of having the second
> initialization stage run in reserve_bootmem_region, and it would likely
> solve the problem without breaking E820_TYPE_RESERVED, which is known
> by the kernel:
>
> > if (entry->type == E820_TYPE_SOFT_RESERVED)
> > memblock_reserve(entry->addr, entry->size);
> >
>
> + if (entry->type == 20)
> + memblock_reserve(entry->addr, entry->size);
>
> > if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
> > continue;
> >
>
> This is however just to show the problem, I didn't check what type 20
> is.

I think it's invented by your BIOS vendor :)

> To me it doesn't look like the root cause though, the root cause is
> that if you don't call memblock_reserve the page->flags remain
> uninitialized.

I didn't mean that the root cause is that we don't call
memblock_reserve(). I meant that the root cause is the inconsistency in
the memory representation.

On most architectures, memblock.memory represents the entire RAM in the
system and memblock.reserved represents memory regions that were
reserved either by the firmware or by the kernel during early boot.

On x86 the memory that firmware reserved for its use is never considered
memory and some of the reserved memory types are never registered with
memblock at all.

As memblock data is used to initialize the memory map, we end up with
some page structs not being properly initialized.
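
As a rough illustration (a simplified userspace model, not kernel code;
the pfn layout and the two-stage split are invented for the example), if
memmap initialization only walks memblock.memory, a range the firmware
kept out of it never gets its struct pages touched:

/* toy model: memmap init walks memblock.memory only, so pfns that were
 * never memblock_add()ed (e.g. the "type 20" range) stay uninitialized */
#include <stdio.h>

struct region { unsigned long start, end; };
struct page   { int initialized; };

#define NR_PFNS 16

/* invented layout: pfns 0-7 and 12-15 are E820_TYPE_RAM, pfns 8-11 are
 * the firmware-reserved "type 20" range that never reaches memblock */
static struct region memory[] = { { 0, 8 }, { 12, 16 } };

static struct page memmap[NR_PFNS];	/* zeroed == "uninitialized" */

int main(void)
{
	unsigned long pfn;
	unsigned int i;

	/* first stage: memmap init walking memblock.memory */
	for (i = 0; i < sizeof(memory) / sizeof(memory[0]); i++)
		for (pfn = memory[i].start; pfn < memory[i].end; pfn++)
			memmap[pfn].initialized = 1;

	/* nothing ever reserved pfns 8-11, so no second stage runs on them */
	for (pfn = 0; pfn < NR_PFNS; pfn++)
		if (!memmap[pfn].initialized)
			printf("pfn %lu: struct page left uninitialized\n", pfn);

	return 0;
}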

> I think page_alloc.c needs to be more robust and detect at least if
> holes within zones (but ideally all pfn_valid struct pages in the
> system, even those beyond the end of the zone) aren't being initialized
> in the second stage, without relying on the arch code to remember to
> call memblock_reserve.

I agree that page_alloc.c needs to be more robust, but it still needs
to rely on some data supplied by the arch to know where valid memory is.
With SPARSEMEM, pfn_valid() only says where the memmap exists; it does
not necessarily mean there is an actual page frame behind a valid pfn.
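
Schematically, and with an arbitrary section size, the check behaves
like this toy model: every pfn that falls into a present section reports
"valid", whether or not a page frame actually backs it.

/* toy model of SPARSEMEM pfn_valid(): validity is tracked per section,
 * so a pfn inside a hole of a present section still reports "valid" */
#include <stdbool.h>
#include <stdio.h>

#define SECTION_SHIFT	7	/* 128 pfns per section, arbitrary */
#define NR_SECTIONS	4

static bool section_present[NR_SECTIONS] = { true, true, false, true };

static bool pfn_valid(unsigned long pfn)
{
	unsigned long section = pfn >> SECTION_SHIFT;

	return section < NR_SECTIONS && section_present[section];
}

int main(void)
{
	/* assume pfn 130 lies in a hole of section 1: still "valid" */
	printf("pfn_valid(130) = %d\n", pfn_valid(130));
	/* section 2 has no memmap at all */
	printf("pfn_valid(300) = %d\n", pfn_valid(300));
	return 0;
}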

> In fact it's not clear why memblock_reserve even exists, that
> information can be calculated reliably by page_alloc as a function of
> memblock.memory alone by walking all nodes and all zones. It doesn't
> even seem to help in destroying the direct mapping,
> reserve_bootmem_region just initializes the struct pages so it doesn't
> need a special memblock.reserved to find those ranges.

memblock_reserve() is there to allow architectures to mark memory
regions as busy so that this memory won't be handed to the buddy
allocator as free pages. It could be memory that the firmware reported
as reserved, memory occupied by the kernel image and initrd, or the
early memory allocations the kernel does before the page allocator is
up.
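
To put it another way (a minimal sketch of the memblock-to-buddy
handover, with an invented layout): only ranges that are in
memblock.memory and not in memblock.reserved get released as free pages,
so without the reserved list the kernel image and every early allocation
would land on the free lists.

/* toy model of the memblock -> buddy handover: only memory that is not
 * in memblock.reserved is released as free pages */
#include <stdbool.h>
#include <stdio.h>

struct region { unsigned long start, end; };

/* invented layout: all RAM known to memblock */
static struct region memory[] = { { 0, 32 } };
/* kernel image + one early allocation, reserved before buddy is up */
static struct region reserved[] = { { 4, 8 }, { 20, 22 } };

static bool is_reserved(unsigned long pfn)
{
	unsigned int i;

	for (i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++)
		if (pfn >= reserved[i].start && pfn < reserved[i].end)
			return true;
	return false;
}

int main(void)
{
	unsigned long pfn;
	unsigned int i;

	for (i = 0; i < sizeof(memory) / sizeof(memory[0]); i++)
		for (pfn = memory[i].start; pfn < memory[i].end; pfn++)
			if (!is_reserved(pfn))
				printf("pfn %lu goes to the buddy free lists\n", pfn);
	return 0;
}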

> In fact it's scary that code then does stuff like this, trusting that
> memblock.reserved is nearly complete information (which it obviously
> isn't, given that type 20 doesn't get queued and I got that type 20 on
> all my systems):
>
> for_each_reserved_mem_region(i, &start, &end) {
> if (addr >= start && addr_end <= end)
> return true;
> }
>
> That code in irq-gic-v3-its.c should stop using
> for_each_reserved_mem_region and instead check
> pfn_valid(addr >> PAGE_SHIFT) and then
> PageReserved(pfn_to_page(addr >> PAGE_SHIFT)).

I think that for coldplugged CPUs this code runs before the memmap is
set up, so neither pfn_valid() nor PageReserved() is available yet.

> At best memory.reserved should be calculated automatically by
> page_alloc.c based on the zone_start_pfn/zone_end_pfn and not passed
> in by the e820 caller; instead of adding the memblock_reserve call for
> type 20 we should delete the memblock_reserve function.

memory.reserved cannot be calculated automatically. It represents all
the memory allocations made before the page allocator is up. And as
memblock_reserve() is the most basic way to reserve memory early at
boot, we cannot really delete it ;-)

As for e820 and type 20, unless it is in memblock, page_alloc.c has no
way to properly initialize memmap for it. It can continue to guess, like
it does with init_unavailable_memory().

> Thanks,
> Andrea
>

--
Sincerely yours,
Mike.
