Date: 1 May 2024
From: Mike Rapoport <rppt@kernel.org>
Subject: Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
On Mon, Apr 15, 2024 at 10:22:53AM -0700, Kees Cook wrote:
> On Fri, Apr 12, 2024 at 06:19:40PM -0400, Steven Rostedt wrote:
> > On Fri, 12 Apr 2024 23:59:07 +0300
> > Mike Rapoport <rppt@kernel.org> wrote:
> >
> > > On Tue, Apr 09, 2024 at 04:41:24PM -0700, Kees Cook wrote:
> > > > On Tue, Apr 09, 2024 at 07:11:56PM -0400, Steven Rostedt wrote:
> > > > > On Tue, 9 Apr 2024 15:23:07 -0700
> > > > > Kees Cook <keescook@chromium.org> wrote:
> > > > >
> > > > > > Do we need to involve e820 at all? I think it might be possible to just
> > > > > > have pstore call request_mem_region() very early? Or does KASLR make
> > > > > > that unstable?
> > > > >
> > > > > Yeah, would that give the same physical memory each boot, and can we
> > > > > guarantee that KASLR will not map the kernel over the previous location?
> > > >
> > > > Hm, no, for physical memory it needs to get excluded very early, which
> > > > means e820.
> > >
> > > Whatever memory is reserved in arch/x86/kernel/e820.c happens after
> > > KASLR, so to begin with, a new memmap parameter should also be added to
> > > parse_memmap() in arch/x86/boot/compressed/kaslr.c to ensure the same
> > > physical address will be available after KASLR.
> >
> > But doesn't KASLR only affect virtual memory, not physical memory?
>
> KASLR for x86 (and other archs, like arm64) does both physical and virtual
> base randomization.
>
> > This just makes sure the physical memory it finds will not be used by the
> > system. Then ramoops does the mapping via vmap(), I believe, to get a
> > virtual address to access the physical address.
>
> I was assuming, since you were in the e820 code, that it was
> manipulating that before KASLR chose a location. But if not, yeah, Mike
> is right -- you need to make sure this is getting done before
> decompress_kernel().
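
(As an aside on the ramoops mapping mentioned above: a minimal, untested
sketch of how a driver can map an already reserved, page-aligned physical
range with vmap(), roughly what fs/pstore/ram_core.c does for normal memory.
The helper name and the alignment assumptions are mine, not the actual
ramoops code.)

/*
 * Illustration only: give a reserved, page-aligned physical range a
 * contiguous kernel virtual mapping.
 */
static void *map_reserved_range(phys_addr_t paddr, size_t size)
{
	unsigned int i, page_count = size >> PAGE_SHIFT;
	struct page **pages;
	void *vaddr;

	pages = kmalloc_array(page_count, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < page_count; i++)
		pages[i] = pfn_to_page((paddr >> PAGE_SHIFT) + i);

	/* VM_MAP + PAGE_KERNEL: a normal cacheable kernel mapping */
	vaddr = vmap(pages, page_count, VM_MAP, PAGE_KERNEL);
	kfree(pages);

	return vaddr;
}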

Right now KASLR can handle up to 4 memmap regions, and parse_memmap() in
arch/x86/boot/compressed/kaslr.c should be updated to recognize the new
memmap type.
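
For illustration, this is roughly what handling the proposed nn*align:name
suffix could look like next to the existing suffix handling there; the helper
below is hypothetical, not the actual patch, and just mirrors the
memparse()-based parsing the file already uses:

/* Hypothetical parser for "memmap=nn*align:name", sketch only */
static int parse_memmap_named(char *p, u64 *size, u64 *align,
			      const char **name)
{
	char *oldp = p;

	*size = memparse(p, &p);
	if (p == oldp || *p != '*')
		return -EINVAL;

	*align = memparse(p + 1, &p);
	if (*p != ':' || !*(p + 1))
		return -EINVAL;

	/* the label is how the region is found again on the next boot */
	*name = p + 1;
	return 0;
}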

But I think it's better to add a new kernel parameter, as I suggested in
another email, and teach mem_avoid_memmap() in kaslr.c to deal with it; it
should deal with crashkernel=size@offset as well, btw.
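
To make the idea concrete, something along these lines in kaslr.c would keep
the physical randomization away from a fixed size@offset region, whether it
comes from crashkernel= or from a new parameter. mem_avoid[] and struct
mem_vector already exist there; MEM_AVOID_RESERVED is a made-up slot name
for this sketch, not an existing enum value:

/*
 * Sketch only: record a fixed "size@offset" region so the randomized
 * physical base never lands on it.
 */
static void mem_avoid_fixed_region(char *args)
{
	char *p = args;
	u64 size, offset;

	size = memparse(p, &p);
	if (p == args || *p != '@' || !size)
		return;

	offset = memparse(p + 1, &p);

	mem_avoid[MEM_AVOID_RESERVED].start = offset;
	mem_avoid[MEM_AVOID_RESERVED].size = size;
}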

> --
> Kees Cook

--
Sincerely yours,
Mike.
