Subject: Re: [RFC 0/1] memfd: Support mapping to zero page on reading

On 22.12.21 13:33, Peng Liang wrote:
> Hi all,
>
> Recently we have been working on implementing CRIU [1] for QEMU based
> on Steven's work [2]. It uses memfd to allocate guest memory so that
> the memory can be restored (inherited) in the new QEMU process.
> However, memfd allocates a new page on read, whereas anonymous memory
> maps the zero page on read. For QEMU, memfd may therefore cause all
> memory to be allocated during migration, because QEMU reads every page
> during migration. That can lead to OOM if memory over-commit is
> enabled, which is usually the case in public clouds.

Hi,

it's the exact same problem as migrating a VM just after inflating the
balloon, or after reporting free memory to the hypervisor via
virtio-balloon free page reporting.

Even populating the shared zeropage still wastes CPU time and, more
importantly, memory for page tables. Further, you'll end up reading the
whole page only to discover that you just populated the shared
zeropage, which is far from optimal. Instead of doing that dance, just
check whether there is anything worth reading at all.

You could simply check whether a page is actually populated before
going ahead and reading it for migration. I discussed exactly that with
Dave Gilbert recently.

For anonymous memory it's pretty straightforward via
/proc/self/pagemap. For files you can use lseek() with
SEEK_DATA/SEEK_HOLE.
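
For the anonymous case, a rough and untested sketch (the helper name is
made up; in a pagemap entry, bit 63 means "present" and bit 62 means
"swapped"):

#include <stdint.h>
#include <unistd.h>

/*
 * Untested sketch: returns 1 if the page backing vaddr is populated
 * (present in RAM or swapped out), 0 if it was never touched, -1 on
 * error. pagemap_fd is an fd for /proc/self/pagemap, which holds one
 * 64-bit entry per virtual page.
 */
static int page_is_populated(int pagemap_fd, uintptr_t vaddr)
{
        uint64_t entry;
        off_t off = vaddr / sysconf(_SC_PAGESIZE) * sizeof(entry);

        if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
                return -1;
        return !!(entry & ((1ULL << 63) | (1ULL << 62)));
}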

https://lkml.kernel.org/r/20210923064618.157046-2-tiberiu.georgescu@nutanix.com

Contains some details. There was also a discussion about eventually
adding a better bulk interface, should that turn out to be necessary
for performance.
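
For the file/memfd case, lseek() lets you skip holes (ranges that were
never written) completely; roughly like this (again untested, the
function and callback names are made up, error handling omitted):

#define _GNU_SOURCE     /* for SEEK_DATA/SEEK_HOLE */
#include <unistd.h>

/*
 * Untested sketch: invoke cb() for every populated extent of a memfd.
 * tmpfs/shmem supports SEEK_DATA/SEEK_HOLE, so never-written ranges are
 * skipped without touching them. lseek() returns -1 (ENXIO) once only a
 * trailing hole is left.
 */
static void for_each_populated_range(int memfd,
                                     void (*cb)(off_t start, off_t end))
{
        off_t data = 0, hole;

        while ((data = lseek(memfd, data, SEEK_DATA)) >= 0) {
                hole = lseek(memfd, data, SEEK_HOLE);
                cb(data, hole);
                data = hole;
        }
}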

--
Thanks,

David / dhildenb
