    Subject: Re: WARNING in __mmdrop

    On 2019/7/23 6:27 PM, Michael S. Tsirkin wrote:
    >> Yes, since there could be multiple concurrent invalidation requests. We need
    >> to count them to make sure we don't pin the wrong pages.
    >>
    >>
    >>> I also wonder about ordering. kvm has this:
    >>> /*
    >>> * Used to check for invalidations in progress, of the pfn that is
    >>> * returned by pfn_to_pfn_prot below.
    >>> */
    >>> mmu_seq = kvm->mmu_notifier_seq;
    >>> /*
    >>> * Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
    >>> * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
    >>> * risk the page we get a reference to getting unmapped before we have a
    >>> * chance to grab the mmu_lock without mmu_notifier_retry() noticing.
    >>> *
    >>> * This smp_rmb() pairs with the effective smp_wmb() of the combination
    >>> * of the pte_unmap_unlock() after the PTE is zapped, and the
    >>> * spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
    >>> * mmu_notifier_seq is incremented.
    >>> */
    >>> smp_rmb();
    >>>
    >>> does this apply to us? Can't we use a seqlock instead so we do
    >>> not need to worry?
    >> I'm not familiar with kvm MMU internals, but we do everything under the
    >> mmu_lock.
    >>
    >> Thanks
    > I don't think this helps at all.
    >
    > There's no lock between checking the invalidate counter and
    > __get_user_pages_fast() within vhost_map_prefetch(). So it's possible
    > that __get_user_pages_fast() reads PTEs speculatively before the
    > invalidate counter is read.
    >
    > --
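    For context, the pattern quoted above boils down to this check/retry
    loop (a simplified sketch modeled on the KVM fault handlers of that
    era, not verbatim code):

        retry:
            mmu_seq = kvm->mmu_notifier_seq;
            /* Order the seq read before the PTE reads in get_user_pages(). */
            smp_rmb();

            pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);

            spin_lock(&kvm->mmu_lock);
            if (mmu_notifier_retry(kvm, mmu_seq)) {
                    /*
                     * An invalidation ran after mmu_seq was sampled: the
                     * page may already be unmapped, so drop the reference
                     * and try again.
                     */
                    spin_unlock(&kvm->mmu_lock);
                    kvm_release_pfn_clean(pfn);
                    goto retry;
            }
            /* ... install the mapping under mmu_lock ... */
            spin_unlock(&kvm->mmu_lock);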


    In vhost_map_prefetch() we do:

            spin_lock(&vq->mmu_lock);

            ...

            err = -EFAULT;
            if (vq->invalidate_count)
                    goto err;

            ...

            npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                            uaddr->write, pages);

            ...

            spin_unlock(&vq->mmu_lock);

    Is this not sufficient?
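    The invalidation path takes the same lock before touching the counter;
    roughly (a simplified sketch of the notifier callback in the patch,
    with the unmap details elided):

            static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq,
                                                  unsigned long start,
                                                  unsigned long end)
            {
                    spin_lock(&vq->mmu_lock);
                    ++vq->invalidate_count;
                    /* ... mark dirty and tear down any map in [start, end) ... */
                    spin_unlock(&vq->mmu_lock);
            }

    So the counter check, the pinning, and the counter increment are all
    serialized by vq->mmu_lock.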

    Thanks
