Subject: Re: [PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
    On Thu, Aug 01, 2019 at 01:02:18PM +0800, Jason Wang wrote:
    >
    > > On 2019/8/1 3:30 AM, Jason Gunthorpe wrote:
    > > On Wed, Jul 31, 2019 at 09:28:20PM +0800, Jason Wang wrote:
    > > > > On 2019/7/31 8:39 PM, Jason Gunthorpe wrote:
    > > > > On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote:
    > > > > > We used to use RCU to synchronize the MMU notifier with the worker.
    > > > > > This leads to calling synchronize_rcu() in invalidate_range_start().
    > > > > > But on a busy system, many factors may slow down synchronize_rcu(),
    > > > > > which makes it unsuitable to be called in an MMU notifier.
    > > > > >
    > > > > > One solution is SRCU, but its overhead is obvious because of the
    > > > > > expensive full memory barrier. Another choice is seqlock, but it
    > > > > > doesn't provide a way for readers and writers to synchronize with
    > > > > > each other. The last choice is the vq mutex, but then we need to
    > > > > > deal with the worst case where the MMU notifier is blocked waiting
    > > > > > for swap-in to finish.
    > > > > >
    > > > > > So this patch switches to using a counter to track whether or not
    > > > > > the map is in use. The counter is increased when the vq starts or
    > > > > > finishes using the map. This means that when it is even, we're sure
    > > > > > there are no readers and the MMU notifier is synchronized. When it
    > > > > > is odd, there is a reader and we need to wait for the counter to
    > > > > > become even again before we are synchronized.
    > > > > You just described a seqlock.
    > > >
    > > > Kind of, see my explanation below.
    > > >
    > > >
    > > > > We've been talking about providing this as some core service from mmu
    > > > > notifiers because nearly every use of this API needs it.
    > > >
    > > > That would be very helpful.
    > > >
    > > >
    > > > > IMHO this gets the whole thing backwards; the common pattern is to
    > > > > protect the 'shadow pte' data with a seqlock (usually open coded),
    > > > > such that the mmu notifier side has the write side of that lock and
    > > > > the read side is consumed by the thread accessing or updating the SPTE.
    > > >
    > > > Yes, I've considered something like that. But the problem is that the
    > > > mmu notifier (writer) needs to wait for the vhost worker to finish
    > > > reading before it can do things like setting dirty pages and unmapping
    > > > pages. It looks to me like seqlock doesn't provide anything like this.
    > > The seqlock is usually used to prevent a 2nd thread from accessing the
    > > VA while it is being changed by the mm. ie you use something seqlocky
    > > instead of the ugly mmu_notifier_unregister/register cycle.
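
    For example, the usual open-coded shape of that pattern is something
    like this (hypothetical names; the write side is assumed to be
    serialized by the notifier's own locking):

        static seqcount_t spte_seq = SEQCNT_ZERO(spte_seq);

        /* mmu notifier (write side): invalidate the cached translation */
        static void spte_invalidate(void)
        {
                write_seqcount_begin(&spte_seq);
                /* ... tear down or refresh the shadow pte here ... */
                write_seqcount_end(&spte_seq);
        }

        /* accessor (read side): retry if the mapping changed under us */
        static void spte_access(void)
        {
                unsigned int seq;

                do {
                        seq = read_seqcount_begin(&spte_seq);
                        /* ... read through the shadow pte ... */
                } while (read_seqcount_retry(&spte_seq, seq));
        }

    Note the read side here only detects a concurrent change and retries;
    it does not make the writer wait for the reader.
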
    >
    >
    > Yes, so we have two mappings:
    >
    > [1] vring address to VA
    > [2] VA to PA
    >
    > And we have several readers and writers:
    >
    > 1) set_vring_num_addr(): writer of both [1] and [2]
    > 2) MMU notifier: reader of [1] writer of [2]
    > 3) GUP: reader of [1] writer of [2]
    > 4) memory accessors: reader of [1] and [2]
    >
    > Fortunately, 1), 3) and 4) are already synchronized through vq->mutex. We
    > only need to deal with synchronization between 2) and each of the rest:
    >
    > Sync between 1) and 2): for mapping [1], I do
    > mmu_notifier_unregister/register. This helps avoid holding any lock while
    > doing the overlap check.

    I suspect you could have done this with an RCU technique instead of
    register/unregister.
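
    Something along these lines, say, where the map pointer itself is
    published with RCU (hypothetical names; the writer is the ioctl path,
    so the synchronize_rcu() cost stays out of the notifier):

        static struct vring_map __rcu *map;    /* mapping [1]: ring -> VA */

        /* writer: set_vring_num_addr(), already serialized by vq->mutex */
        static void map_replace(struct vring_map *new)
        {
                struct vring_map *old;

                old = rcu_dereference_protected(map, 1);
                rcu_assign_pointer(map, new);
                synchronize_rcu();      /* wait out any notifier readers */
                kfree(old);
        }

        /* reader: the MMU notifier doing its overlap check */
        static void map_check_range(unsigned long start, unsigned long end)
        {
                struct vring_map *m;

                rcu_read_lock();
                m = rcu_dereference(map);
                if (m) {
                        /* ... check [start, end) against m ... */
                }
                rcu_read_unlock();
        }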

    > Sync between 2) and 4): for mapping [1], both are readers, so no
    > synchronization is needed. For mapping [2], synchronize through RCU (or
    > something similar to seqlock).

    You can't really use a seqlock; seqlocks are collision-retry locks,
    and the semantic here is that invalidate_range_start *MUST* not
    continue until the thread doing #4 above is guaranteed to be no longer
    touching the memory.

    This must be a proper barrier, like a spinlock, mutex, or
    synchronize_rcu.
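
    For example (hypothetical names), making both sides take the same
    sleeping lock is what gives invalidate_range_start that guarantee:

        static DEFINE_MUTEX(map_lock);

        /* memory accessor (#4): holds the lock while touching the map */
        static void map_access(void)
        {
                mutex_lock(&map_lock);
                /* ... read/write guest memory through the map ... */
                mutex_unlock(&map_lock);
        }

        /* invalidate_range_start(): acquiring the lock is the barrier */
        static void map_invalidate(void)
        {
                mutex_lock(&map_lock);
                /* no accessor can still be inside the map at this point */
                /* ... mark the map stale, set dirty pages, etc. ... */
                mutex_unlock(&map_lock);
        }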

    And, again, you can't re-invent a spinlock with open coding and get
    something better.

    Jason
