Subject: Re: vmemmap alloc failure in hot_add_req()
From: David Hildenbrand
Date: 2021-06-17

> It does look like this kernel configuration has
> CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y.

Okay, so then it's most likely really more of an issue with fragmented
physical memory -- which is suboptimal but not a showstopper in your setup.

(there are still cases where memory onlining can fail, especially with
kasan running, but these are rather corner cases)

>
>> If it's not getting onlined, you can easily spot after hotplug, e.g. via
>> "lsmem", that there are quite a few offline memory blocks.
>>
>> Note that the x86_64 code will fall back from populating huge pages to
>> populating base pages for the vmemmap; this can happen easily when under
>> memory pressure.
>
> Not sure if it is relevant, but this warning can show up within a
> minute of startup without me doing anything in particular.

I remember that Hyper-V will start with a certain (configured) boot VM
memory size and, once the guest is up and running, use the guest's
memory stats to decide whether to add (hotplug) or remove (balloon
inflate) memory from the VM.

So this could just be Hyper-V trying to apply its heuristics.
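
Purely as an illustrative sketch -- this is not hv_balloon's actual
reporting logic -- the kind of guest-side statistic involved can be
approximated from /proc/meminfo, e.g. in Python:

#!/usr/bin/env python3
# Illustrative only: derive a rough "memory pressure" figure from
# /proc/meminfo, the sort of guest-side stat a host could use to decide
# between hot-adding memory and inflating the balloon.

def meminfo_kib():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # MemTotal/MemAvailable are in kB
    return info

info = meminfo_kib()
used = info["MemTotal"] - info["MemAvailable"]
print("approximate memory pressure: %.1f%%" % (100.0 * used / info["MemTotal"]))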

>
>> If adding memory were to fail completely, you'd see another "hot_add
>> memory failed error is ..." error message from Hyper-V in the kernel
>> log. If that doesn't show up, it's simply suboptimal, but hotplugging
>> memory still succeeded.
>
> I did notice that from the code in hv_balloon.c, but I do not think I
> have ever seen that message in my logs.

Okay, so at least hotplugging memory is working.
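
If you ever want to double-check which blocks ended up online, the same
information "lsmem" shows can be read straight from sysfs. A minimal
sketch in Python (assuming the standard
/sys/devices/system/memory/memoryN/state layout):

#!/usr/bin/env python3
# Minimal sketch: list memory blocks that are not (yet) online, based on
# the per-block "state" files that lsmem also reads.

import glob
import os

offline = []
for state_file in glob.glob("/sys/devices/system/memory/memory*/state"):
    with open(state_file) as f:
        state = f.read().strip()
    if state != "online":
        offline.append(os.path.basename(os.path.dirname(state_file)))

if offline:
    print("offline memory blocks:", ", ".join(sorted(offline)))
else:
    print("all memory blocks are online")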

--
Thanks,

David / dhildenb
