From: Kirill A. Shutemov
Date: Mon, 3 Sep 2018
Subject: Re: [PATCH 2/2] x86/mm/KASLR: Adjust the vmemmap size according to paging mode
On Mon, Sep 03, 2018 at 03:47:18PM +0800, Baoquan He wrote:
> On 09/02/18 at 11:52pm, Kirill A. Shutemov wrote:
> > On Thu, Aug 30, 2018 at 11:25:12PM +0800, Baoquan He wrote:
> > > Hi Kirill,
> > >
> > > I made a new version according to your suggestion, with one small
> > > difference: I didn't make 1TB the default, but instead calculate the
> > > size from the actual memory and then align it up to a 1TB boundary.
> > > I also noticed kcore is printing more entries than before; at first I
> > > thought my code caused it, but it turned out to have been touched by
> > > other people.
> > >
> > > Any comment about this? I can change accordingly.
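[Editor's note: a minimal userspace sketch of the sizing approach described
above, not the actual patch: compute the vmemmap size for the actual physical
address space, then round it up to a 1TB boundary. The 64TB of physical
address space, the 64-byte struct page, and all names are illustrative
assumptions.]

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT	12
	#define TB_SHIFT	40

	int main(void)
	{
		uint64_t phys_tb = 64;		/* assumed physical address space, in TB */
		uint64_t page_struct_sz = 64;	/* assumed sizeof(struct page) */

		/* One struct page per 4KB page of physical address space. */
		uint64_t vmemmap_bytes = (phys_tb << (TB_SHIFT - PAGE_SHIFT)) * page_struct_sz;

		/* Round the needed virtual space up to a whole number of TB. */
		uint64_t vmemmap_tb = (vmemmap_bytes + (1ULL << TB_SHIFT) - 1) >> TB_SHIFT;

		printf("vmemmap needs %llu TB of virtual space\n",
		       (unsigned long long)vmemmap_tb);
		return 0;
	}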
> >
> > Looks good to me.
> >
> > But there's a corner case: when struct page is unreasonably large,
> > vmemmap_size will be way too large. We probably have to report an error if
> > we cannot fit the vmemmap properly into the virtual memory layout.
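[Editor's note: a rough sketch of that kind of error reporting, comparing a
computed vmemmap size against the reserved virtual area; the helper and
variable names are hypothetical, not taken from the patch.]

	#include <linux/init.h>
	#include <linux/kernel.h>

	/*
	 * Sketch only: check_vmemmap_fits(), vmemmap_size and
	 * vmemmap_area_size are made-up names for illustration.
	 */
	static void __init check_vmemmap_fits(unsigned long vmemmap_size,
					      unsigned long vmemmap_area_size)
	{
		/* Refuse to continue if the vmemmap cannot fit in its reserved area. */
		if (vmemmap_size > vmemmap_area_size)
			panic("vmemmap needs %lu bytes, only %lu bytes reserved\n",
			      vmemmap_size, vmemmap_area_size);
	}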
>
> Hmm, sizeof(struct page) surely can't exceed a whole page, otherwise boot
> wouldn't get past vmemmap initialization. Apart from that, we need to think
> about how much of the virtual memory layout the vmemmap can be allowed
> to occupy.
>
> If KASAN is enabled and KASLR disabled:
> 4-level: 1TB + 1TB hole (2TB)
> 5-level: 512TB + 2304TB hole (2.75PB)
>
> If KASAN is disabled and KASLR enabled:
> 4-level: 1TB + 1TB hole + 16TB (18TB)
> 5-level: 512TB + 2304TB hole + 8PB (10.75PB)
>
> So, as you can see, if we add a check in the memory KASLR code, we only need
> to consider the KASLR-enabled case. We probably don't need to worry about
> the 5-level case, since ~10.75PB is even bigger than the maximum physical
> RAM mapping size. For 4-level, rounding 18TB up to a power of two gives
> 32TB, 32 times the current 1TB area; with the usual 64 bytes for
> sizeof(struct page), that leaves room for up to 64*32 == 2048 bytes per
> struct page, so checking against PAGE_SIZE/4 (1024 bytes) stays safely
> inside that. So we can add a check like the one below, what do you think?
> Or any other idea?

Looks reasonable to me.

But I would put the BUILD_BUG_ON() in generic code. If your struct page is
more than 1/4 of PAGE_SIZE, something is horribly broken.
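[Editor's note: a sketch of what such a generic check might look like; where
it would live and the helper name are assumptions for illustration, not part
of the actual patch.]

	#include <linux/init.h>
	#include <linux/build_bug.h>	/* BUILD_BUG_ON() */
	#include <linux/mm.h>		/* struct page, PAGE_SIZE */

	/*
	 * Sketch only: a compile-time sanity check in generic mm code rather
	 * than in the x86 KASLR code.  The helper name is made up.
	 */
	static void __init sanity_check_struct_page_size(void)
	{
		/* A struct page larger than a quarter of a page is horribly broken. */
		BUILD_BUG_ON(sizeof(struct page) > PAGE_SIZE / 4);
	}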

> diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> index 1db8e166455e..776ec759a87c 100644
> --- a/arch/x86/mm/kaslr.c
> +++ b/arch/x86/mm/kaslr.c
> @@ -90,6 +90,7 @@ void __init kernel_randomize_memory(void)
> BUILD_BUG_ON(vaddr_start >= vaddr_end);
> BUILD_BUG_ON(vaddr_end != CPU_ENTRY_AREA_BASE);
> BUILD_BUG_ON(vaddr_end > __START_KERNEL_map);
> + BUILD_BUG_ON(sizeof(struct page ) > PAGE_SIZE/4);

Nitpick: redundant space before ')'.

>
> if (!kaslr_memory_enabled())
> return;
>
>
> For 5-level paging mode, we may not need to worry about that, since the
> virtual memory map and KASAN shadow areas are laid out like this:
>
> ***4-level***
> ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
> ... unused hole ...
> ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
> ... unused hole ...
>
>
>
> ***5-level***
> ffd4000000000000 - ffd5ffffffffffff (=49 bits) virtual memory map (512TB)
> ... unused hole ...
> ffdf000000000000 - fffffc0000000000 (=53 bits) kasan shadow memory (8PB)
>
> >
> > --
> > Kirill A. Shutemov

--
Kirill A. Shutemov
