    Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
    On Thu, Mar 03, 2016 at 06:44:24PM +0800, Liang Li wrote:
    > The current QEMU live migration implementation marks all of the
    > guest's RAM pages as dirty in the ram bulk stage, and processing
    > all of these pages takes quite a lot of CPU cycles.
    >
    > From the guest's point of view, the content of free pages does not
    > matter. We can make use of this fact and skip processing the free
    > pages in the ram bulk stage, which saves a lot of CPU cycles,
    > reduces the network traffic significantly and noticeably speeds up
    > the live migration process.
    >
    > This patch set is the QEMU side implementation.
    >
    > The virtio-balloon is extended so that QEMU can get the free pages
    > information from the guest through virtio.
    >
    > After getting the free pages information (a bitmap), QEMU can use
    > it to filter out the guest's free pages in the ram bulk stage.
    > This makes the live migration process much more efficient.
    >
    > This RFC version doesn't take post-copy and RDMA into
    > consideration; both of them could probably benefit from this PV
    > solution with some extra modifications.
    >
    > Performance data
    > ================
    >
    > Test environment:
    >
    > CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
    > Host RAM: 64GB
    > Host Linux Kernel: 4.2.0 Host OS: CentOS 7.1
    > Guest Linux Kernel: 4.5-rc6 Guest OS: CentOS 6.6
    > Network: X540-AT2 with 10 Gigabit connection
    > Guest RAM: 8GB
    >
    > Case 1: Idle guest just boots:
    > ============================================
    >                      |  original |     pv
    > --------------------------------------------
    > total time (ms)      |      1894 |    421
    > --------------------------------------------
    > transferred ram (KB) |    398017 | 353242
    > ============================================
    >
    >
    > Case 2: The guest ran a memory-consuming workload, which was
    > terminated just before the live migration.
    > ============================================
    >                      |  original |     pv
    > --------------------------------------------
    > total time (ms)      |      7436 |    552
    > --------------------------------------------
    > transferred ram (KB) |   8146291 | 361375
    > ============================================
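
    The filtering step described in the cover letter essentially amounts to
    masking the migration dirty bitmap with the guest-provided free-page
    bitmap before the bulk stage starts. A minimal sketch, assuming a flat
    one-bit-per-page layout and hypothetical names (this is not the actual
    patch code):

        #include <stddef.h>

        #define BITS_PER_LONG (sizeof(unsigned long) * 8)

        /*
         * dirty_bitmap: bit set => page still has to be sent.
         * free_bitmap:  bit set => guest reports the page as free.
         * Clearing the reported-free pages from the dirty bitmap makes
         * the bulk stage skip them entirely.
         */
        static void filter_out_free_pages(unsigned long *dirty_bitmap,
                                          const unsigned long *free_bitmap,
                                          size_t nr_pages)
        {
            size_t i, nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

            for (i = 0; i < nr_longs; i++) {
                dirty_bitmap[i] &= ~free_bitmap[i];
            }
        }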

    Both cases look very artificial to me. Normally you migrate VMs which
    have started long ago and which can't have their services terminated
    before the migration, so I wouldn't expect any useful amount of free
    pages obtained this way.

    OTOH I don't see why you can't just inflate the balloon before the
    migration, and really optimize the amount of transferred data this way?
    With the recently proposed VIRTIO_BALLOON_S_AVAIL you can have a fairly
    good estimate of the optimal balloon size, and with the recently merged
    balloon deflation on OOM it's a safe thing to do without exposing the
    guest workloads to OOM risks.

    Roman.
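
    Roman's suggestion can be driven entirely from the management layer:
    read the guest's reported available memory, shrink the balloon target
    accordingly, and only then start the migration. A rough sketch using
    libvirt's virDomainSetMemory(), which sets the balloon target in KiB;
    the helper name and the headroom policy are made up for illustration,
    and avail_bytes is assumed to already hold the guest's
    VIRTIO_BALLOON_S_AVAIL value in bytes:

        #include <libvirt/libvirt.h>

        /*
         * Shrink the guest to roughly its working set before migration.
         * total_bytes: current guest RAM size.
         * avail_bytes: memory the guest reports as available
         *              (VIRTIO_BALLOON_S_AVAIL), obtained elsewhere.
         */
        static int shrink_balloon_before_migration(virDomainPtr dom,
                                                   unsigned long long total_bytes,
                                                   unsigned long long avail_bytes)
        {
            /*
             * Keep some headroom so the guest is not squeezed right to
             * the edge; deflate-on-OOM in the guest covers the case where
             * the estimate is still too aggressive.
             */
            unsigned long long headroom = avail_bytes / 8;
            unsigned long long target = total_bytes - avail_bytes + headroom;

            /* virDomainSetMemory() expects the new target in KiB. */
            return virDomainSetMemory(dom, target / 1024);
        }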
