Date: Wed, 2 Apr 2008
From: Anthony Liguori
Subject: Re: [PATCH RFC] hotplug-memory: refactor online_pages to separate zone growth from page onlining
Dave Hansen wrote:
> On Wed, 2008-04-02 at 14:03 -0700, Jeremy Fitzhardinge wrote:
>
>> Dave Hansen wrote:
>> No, not in a Xen direct-pagetable guest. The guest actually sees real
>> hardware page numbers (mfns) when the hypervisor gives it a page. By
>> the time the hypervisor gives it a page reference, it is already
>> guaranteeing that the page is available for guest use. The only thing
>> that we could do is prevent the guest from mapping the page, but that
>> doesn't really achieve much.
>>
>
> Oh, once we've let Linux establish ptes to it, we've required that the
> hypervisor have it around? How does that work with the balloon driver?
> Do we destroy the ptes when giving balloon memory back to the
> hypervisor?
>
> If we're talking about i386, then we're set. We don't map the hot-added
> memory at all because we only add highmem on i386. The only time we map
> these pages is *after* we actually allocate them when they get mapped
> into userspace or used as vmalloc() or they're kmap()'d.
>
>
>> I think we're getting off track here; this is a lot of extra complexity
>> to justify allowing usermode to use /sys to online a chunk of hotplugged
>> memory.
>>
>
> Either that, or we're going to develop the entire Xen/kvm memory hotplug
> architecture around the soon-to-be-legacy i386 limitations. :)
>

s:Xen/kvm:Xen:g

We don't need anything special for KVM. Bare metal memory hotplug
should be sufficient, provided the userspace udev scripts are properly
configured to online hot-added memory automatically.
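
For illustration only, a minimal sketch of what that userspace side
might look like (the rule file name here is hypothetical, and the
exact udev match syntax can vary between udev versions):

    # e.g. /etc/udev/rules.d/40-memory-hotplug.rules (hypothetical name)
    # Online any memory section that appears in the offline state.
    SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"

which amounts to doing, by hand, for each hot-added section N:

    echo online > /sys/devices/system/memory/memoryN/state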

Regards,

Anthony Liguori

> -- Dave
>
>


