Subject: Re: [PATCH 0/3] Early use of boot service memory
On Fri, Nov 15, 2013 at 11:16:25AM -0800, H. Peter Anvin wrote:
> On 11/15/2013 10:46 AM, H. Peter Anvin wrote:
> > On 11/15/2013 10:30 AM, Vivek Goyal wrote:
> >>
> >> I agree taking assistance of hypervisor should be useful.
> >>
> >> One reason we use kdump for VMs too is that it makes life simple. There
> >> is no difference in how we configure, start and manage crash dumps
> >> on bare metal or inside a VM. And in practice I have not heard of many
> >> failures of kdump in VM environments.
> >>
> >> So while reliability remains a theoretical concern, in practice it
> >> has not been a real one, and that's one reason I think we have
> >> not seen a major push for an alternative method in VM environments.
> >>
> >
> > Another reason, again, is that it doesn't sit on all that memory.
> >
>
> This led me to a potentially interesting idea. If we can tell the
> hypervisor about which memory blocks belong to kdump, we can still use
> kdump in its current form with only a few hypervisor calls thrown in.
>
> One set of calls would mark memory ranges as belonging to kdump. This
> would (a) make them protected,

This sounds good. We already have arch hooks to map/unmap crash kernel
ranges, crash_map_reserved_pages() and crash_unmap_reserved_pages(). x86
should be able to use these hooks to tell the hypervisor to remove mappings
for certain physical ranges and remap them when needed. s390 already does
some magic there.
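
Roughly, something like this (just a sketch; crashk_res and the weak hooks
exist today, but hv_map_range()/hv_unmap_range() are hypothetical hypercall
wrappers standing in for whatever interface the hypervisor would actually
expose):

	#include <linux/ioport.h>
	#include <linux/kexec.h>
	#include <linux/types.h>

	/* Hypothetical hypercall wrappers -- placeholders only, not a
	 * real API; shown just to illustrate where the calls would go. */
	extern void hv_map_range(phys_addr_t start, resource_size_t size);
	extern void hv_unmap_range(phys_addr_t start, resource_size_t size);

	/* Called before the crash kernel segments are written: make the
	 * reserved range accessible (and backed by RAM) again. */
	void crash_map_reserved_pages(void)
	{
		hv_map_range(crashk_res.start, resource_size(&crashk_res));
	}

	/* Called once loading is done: tell the hypervisor the range will
	 * not be touched until a crash happens, so it can drop the mapping
	 * and reclaim the physical pages behind it. */
	void crash_unmap_reserved_pages(void)
	{
		hv_unmap_range(crashk_res.start, resource_size(&crashk_res));
	}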

> and (b) tell the hypervisor that these
> memory ranges will not be accessed and don't need to occupy physical RAM.

I am not sure we need to do anything here. I am assuming that most of
the crashkernel memory has not been touched and does not occupy physical
memory until a crash actually happens. We will probably touch only 20-30MB
of crashkernel memory during kernel load, and that should ultimately make
its way to swap at some point.

And if that's true, then reserving an extra 72M due to crashkernel=X,high
should not be a big issue in KVM guests. It will still be an issue on
physical servers, though.
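
For reference, the reservation being discussed looks something like this on
the kernel command line (the high size is only an example; the 72M low part
is roughly what gets reserved by default when only ,high is given):

	crashkernel=256M,high    <- crash kernel memory reserved above 4G
	crashkernel=72M,low      <- low memory kept for swiotlb/DMA buffers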

Thanks
Vivek

