    Subject: Re: [PATCH v10 3/6] fs/proc/task_mmu: Implement IOCTL to get and/or clear info about PTEs


    > On Feb 28, 2023, at 7:55 AM, Peter Xu <peterx@redhat.com> wrote:
    >
    > On Mon, Feb 27, 2023 at 11:09:12PM +0000, Nadav Amit wrote:
    >>
    >>
    >>> On Feb 27, 2023, at 1:18 PM, Peter Xu <peterx@redhat.com> wrote:
    >>>
    >>> On Thu, Feb 23, 2023 at 05:11:11PM +0000, Nadav Amit wrote:
    >>>> From my experience with UFFD, proper ordering of events is crucial, although it
    >>>> is not always done well. Therefore, we should aim for improvement, not
    >>>> regression. I believe that utilizing the pagemap-based mechanism for WP'ing
    >>>> might be a step in the wrong direction. I think that it would have been better
    >>>> to emit a 'UFFD_FEATURE_WP_ASYNC' WP-log, ordered with UFFD #PFs and other
    >>>> events. The 'UFFD_FEATURE_WP_ASYNC' log may not need to wake waiters on the
    >>>> file descriptor unless the log is full.
    >>>
    >>> Yes this is an interesting question to think about..
    >>>
    >>> Keeping the data in the pgtable has one advantage: it doesn't need any
    >>> complexity to maintain a log, and there is no possibility of "log full".
    >>
    >> I understand your concern, but I think that eventually it might be simpler
    >> to maintain, since the logic of how to process the log is moved to userspace.
    >>
    >> At the same time, handling inputs from both pagemap and uffd handlers, and
    >> keeping them in sync, would not be easy for userspace.
    >
    > I do not expect a common uffd-wp async user to provide a fault handler at
    > all. In my imagination it's in most cases used standalone from other uffd
    > modes, meaning all the faults will still be handled by the kernel. Here
    > we only leverage the accuracy of userfaultfd compared to soft-dirty, so
    > these are not really "user" faults.

    If that is the only use-case, it might make sense. But I guess most users
    would use some library rather than the syscalls directly, so slightly
    complicating the API for better generality may be reasonable.

    >
    >>
    >> But yes, allocation on the heap for userfaultfd_wait_queue-like entries would
    >> be needed, and there are some issues of ordering the events (I think all #PF
    >> and other events should be ordered regardless) and how not to traverse all
    >> async-userfaultfd_wait_queue’s (except those that block if the log is full)
    >> when a wakeup is needed.
    >
    > Will there be an ordering requirement for an async mode? Considering it
    > should be async to whatever else, I would think it's not a problem, but
    > maybe I missed something.

    You may be right, but I am not sure. I am still not sure what use-cases are
    targeted in this patch-set. For the CRIU checkpoint use-case (when the app is
    not running), I guess the current interface makes sense. But if there are
    use-cases in which you do care about UFFD events, this can become an
    issue.

    But even for some obvious use-cases, this might be the wrong interface
    because of major performance issues. If we think about incremental copying of
    modified pages (a-la pre-copy live-migration, or creating point-in-time
    snapshots), it seems to me much more efficient for the application to consume
    a log than to traverse all the page-tables.
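
    (To make the comparison concrete, a hypothetical sketch of that pre-copy
    loop against a log; read_dirty_log() and the helpers below do not exist
    and only stand in for whatever log-drain interface would be added. The
    point is that each pass costs O(pages dirtied), not O(pages tracked).)

    struct dirty_rec { unsigned long addr; };

    /* Hypothetical interfaces -- for the sake of argument only. */
    extern int read_dirty_log(int uffd, struct dirty_rec *recs, int max);
    extern void copy_page_to_destination(unsigned long addr);
    extern void write_protect_page(int uffd, unsigned long addr);

    static void precopy_pass(int uffd)
    {
            struct dirty_rec recs[256];
            int n, i;

            /* Drain only the pages dirtied since the last pass. */
            while ((n = read_dirty_log(uffd, recs, 256)) > 0) {
                    for (i = 0; i < n; i++) {
                            copy_page_to_destination(recs[i].addr);
                            write_protect_page(uffd, recs[i].addr); /* re-arm */
                    }
            }
    }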


    >>
    >>>
    >>> If a "log full" condition is possible, then the next question is whether
    >>> we should let the worker wait for the monitor when the monitor is not fast
    >>> enough to collect the data. It adds a slight dependency between the two
    >>> threads, which I think can make tracking harder or impossible in
    >>> latency-sensitive workloads.
    >>
    >> Again, I understand your concern. But the model that I propose is not new.
    >> It is used with PML (page-modification logging) and KVM, and IIRC there is
    >> a similar interface between KVM and QEMU to provide this information. There
    >> are endless other examples of similar producer-consumer mechanisms that
    >> might lead to stalls in extreme cases.
    >
    > Yes, I'm not against thinking of using similar structures here. It's just
    > that it's definitely more complicated: at the least we need yet another
    > interface to set up the rings and define their semantics.
    >
    > Note that although Muhammad is defining another new interface here too for
    > pagemap, I don't think it's strictly needed for uffd-wp async mode. One
    > can use uffd-wp async mode with PM_UFFD_WP, which the current pagemap
    > interface exposes already.
    >
    > So what Muhammad is proposing here are two things to me: (1) uffd-wp async,
    > plus (2) a new pagemap interface (which will closely work with (1) only if
    > we need atomicity on get-dirty and reprotect).
    >
    > Defining a new interface for uffd-wp async mode will be something extra, so
    > IMHO besides the heap allocation for the rings, we need to also justify
    > whether that is needed. That's why I think it's fine to go with what
    > Muhammad proposed, because it's a minimal changeset at least for userfaultfd
    > to support an async mode, and anything else can be done on top if necessary.
    >
    > Going back a bit to the "lead to stalls in extreme cases" above, I also
    > want to mention that the VM use case is slightly different: dirty tracking
    > is only heavily used during migration afaict, and that's a short period. Not
    > a lot of people will complain that performance degrades during that period,
    > because it's just rare. And even without the ring, the perf is really bad
    > during migration anyway... especially when huge pages are used to back
    > the guest RAM.
    >
    > Here it's slightly different to me: it's about tracking dirty pages during
    > any possible workload, and it can be monitored periodically and frequently.
    > So IMHO it's stricter than a VM use case, where migration is the only
    > period in which it's used.

    I still don't get the use-cases. "Monitored periodically and frequently" is
    not a use-case. And as I said before, frequent monitoring is actually more
    performant with a log than with scanning all the page-tables.
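
    (For contrast, a sketch of the scan-based approach with the existing
    pagemap interface and the PM_UFFD_WP bit -- bit 57 of each 64-bit
    pagemap entry; 4 KiB pages assumed. Every pass touches every tracked
    page, dirty or not.)

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    #define PM_UFFD_WP (1ULL << 57)

    static void scan_range(unsigned long start, unsigned long npages,
                           void (*on_dirty)(unsigned long addr))
    {
            int fd = open("/proc/self/pagemap", O_RDONLY);
            uint64_t ent;
            unsigned long i;

            for (i = 0; i < npages; i++) {
                    unsigned long addr = start + i * 4096;
                    off_t off = (addr / 4096) * sizeof(ent);

                    pread(fd, &ent, sizeof(ent), off);
                    if (!(ent & PM_UFFD_WP))  /* WP bit gone => page was written */
                            on_dirty(addr);
            }
            close(fd);
    }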

    >
    >>
    >>>
    >>> The other thing is we can also make the log "never full" by making it
    >>> a bitmap covering all registered ranges, but I don't know whether
    >>> it'll be worth the effort either.
    >>
    >> I do not see a benefit in a half-log, half-scan approach. It tries to take
    >> the data-structure of one format and combine it with another.
    >
    > What I'm saying here is not half-log / half-scan, but using a single bitmap
    > to store which pages are dirty, just like KVM_GET_DIRTY_LOG. I think it
    > avoids the "stall" issue above.

    Oh, I never went into the KVM details before - stupid me. If that is what
    was eventually proven to work for KVM/QEMU, then it really sounds like
    the pagemap solution that Muhammad proposed.
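
    (For reference, what that KVM interface looks like from userspace: the
    caller supplies a bitmap with one bit per page of the memslot, and
    KVM_GET_DIRTY_LOG fills it while resetting the kernel-side state, so
    "log full" cannot happen. A sketch, error handling omitted.)

    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static void *fetch_dirty_bitmap(int vm_fd, unsigned int slot,
                                    unsigned long slot_pages)
    {
            /* One bit per page, rounded up to 64-bit words. */
            void *bitmap = calloc((slot_pages + 63) / 64, 8);

            struct kvm_dirty_log log = {
                    .slot = slot,
                    .dirty_bitmap = bitmap,
            };
            ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log); /* fetch and clear */
            return bitmap;
    }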

    Still, not conflating pagemap with userfaultfd (and especially
    uffd-wp) can be beneficial. Linus already threw some comments here and
    there about disliking uffd-wp, and I'm not sure adding uffd-wp-specific
    stuff to pagemap would be welcomed.

    Anyhow, thanks for all the explanations. In the end, I understand that
    using bitmaps can be more efficient than a log if the bits are condensed.
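
    To put numbers on that (assuming 4 KiB pages and 8-byte log entries): a
    bitmap costs a fixed 32 KiB per tracked GiB (2^18 pages, one bit each),
    while a log costs 8 bytes per dirtied page. The bitmap is smaller once
    more than ~4096 pages per GiB, about 1.6% of the range, are dirtied
    between harvests.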
