Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of hotplug memory
On 23.07.20 14:23, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>> Add an extra option to add_memory_resource that overrides the memory
>>> hotplug online behavior in order to force onlining of memory from
>>> add_memory_resource unconditionally.
>>>
>>> This is required for the Xen balloon driver, which must run the
>>> online page callback in order to correctly process the newly added
>>> memory region. Note that this is an unpopulated region that is used
>>> by Linux either to hotplug RAM or to map foreign pages from other
>>> domains; hence memory hotplug on Xen can be triggered even without
>>> the user explicitly requesting it, as part of the normal operation
>>> of the OS when attempting to map memory from a different domain.
>>>
>>> Setting a different default value of memhp_default_online_type when
>>> attaching the balloon driver is not a robust solution, as the user (or
>>> distro init scripts) could still change it and thus break the Xen
>>> balloon driver.
>>
>> I think we discussed this a couple of times before (even triggered by my
>> request), and this is the responsibility of user space to configure.
>> Usually distros have udev rules to online memory automatically. In
>> particular, user space should be able to configure *how* to online memory.
>
> Note (as per the commit message) that in the specific case I'm
> referring to, the memory hotplugged by the Xen balloon driver will be
> an unpopulated range to be used internally by certain Xen subsystems,
> like the xen-blkback or the privcmd drivers. The addition of such
> blocks of (unpopulated) memory can happen without the user explicitly
> requesting it, and hence without the user even being aware that such a
> hotplug process is taking place. To be clear: no actual RAM will be
> added to the system.
>
> Failure to online such blocks using the Xen specific online handler
> (which does not hand the memory back to the allocator in any way)
> will result in the system getting stuck and malfunctioning.
>
>> It's the admin/distro responsibility to configure this properly. In case
>> this doesn't happen (or as you say, users change it), bad luck.
>>
>> E.g., virtio-mem takes care not to add more memory in case it is not
>> getting onlined. I remember Hyper-V has similar code to at least wait a
>> bit for memory to get onlined.
>
> I don't think virtio or Hyper-V use the hotplug system in the same way
> as Xen; as said, this is done to add unpopulated memory regions that
> will be used by Xen drivers on the system to map foreign memory (from
> other domains).
>
> Maybe this should somehow use a different mechanism to hotplug such
> empty memory blocks? I don't mind doing this differently, but I would
> need some pointers. Allowing user-space to change a (seemingly
> unrelated) parameter and as a result produce failures on Xen drivers
> is not an acceptable solution IMO.
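
For reference, the "Xen specific online handler" mentioned above is the
balloon driver's online_page callback, which puts freshly onlined pages on
the ballooned-pages list instead of releasing them to the buddy allocator.
Roughly (paraphrased from drivers/xen/balloon.c, details vary by kernel
version):

static void xen_online_page(struct page *page, unsigned int order)
{
        unsigned long i, size = 1UL << order;
        unsigned long start_pfn = page_to_pfn(page);

        mutex_lock(&balloon_mutex);
        for (i = 0; i < size; i++) {
                /* keep the page for later foreign mappings rather than
                 * freeing it to the buddy allocator */
                balloon_append(pfn_to_page(start_pfn + i));
        }
        mutex_unlock(&balloon_mutex);
}

/* registered in balloon_init() via set_online_page_callback(&xen_online_page) */

The callback only runs once the new block actually gets onlined, which is
why the driver depends on onlining happening at all.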

Maybe we can use the same approach as Xen PV-domains: pre-allocate a
region in the memory map to be used for mapping foreign pages. For the
kernel it will look like pre-ballooned memory, so it will create struct
page for the region (which is what we are after), but it won't give the
memory to the allocator.
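
Very roughly, and completely untested (the names below are made up,
balloon_add_region() is currently static to drivers/xen/balloon.c so it
would need exporting or open-coding, and a real implementation would
presumably want an unpopulated range like the PV extra-memory regions
rather than stealing populated RAM), that could look something like:

#include <linux/memblock.h>
#include <linux/pfn.h>
#include <linux/sizes.h>

/* Made-up names, for illustration only. */
static phys_addr_t foreign_map_base;
#define FOREIGN_MAP_SIZE        SZ_64M          /* example size */

/*
 * 1) Early in boot, while memblock is still live (e.g. from the Xen
 *    arch setup code): carve a range out of the memory map so the buddy
 *    allocator never sees it.  struct pages are still created for the
 *    range because it stays inside the covered memmap.
 */
void __init xen_reserve_foreign_map_space(void)
{
        foreign_map_base = memblock_phys_alloc(FOREIGN_MAP_SIZE, PAGE_SIZE);
}

/*
 * 2) Later, from balloon_init(): hand the range to the balloon driver as
 *    if it were ballooned-out memory, the same way the PV xen_extra_mem
 *    regions are handled today.  alloc_xenballooned_pages() can then hand
 *    the pages out for foreign mappings, and they never reach the
 *    allocator.
 */
static void __init xen_add_foreign_map_region(void)
{
        if (foreign_map_base)
                balloon_add_region(PFN_DOWN(foreign_map_base),
                                   FOREIGN_MAP_SIZE >> PAGE_SHIFT);
}

That would bypass the memory hotplug path (and with it the onlining
policy) entirely for this use case.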


Juergen
