Date: Mon, 29 Jul 2019
From: Michael S. Tsirkin
Subject: Re: WARNING in __mmdrop
On Mon, Jul 29, 2019 at 10:24:43PM +0800, Jason Wang wrote:
>
> On 2019/7/29 4:59 PM, Michael S. Tsirkin wrote:
> > On Mon, Jul 29, 2019 at 01:54:49PM +0800, Jason Wang wrote:
> > > On 2019/7/26 9:49 PM, Michael S. Tsirkin wrote:
> > > > > > Ok, let me retry if necessary (but I do remember I ended up with
> > > > > > deadlocks last try).
> > > > > Ok, I played a little with this, and it works so far. Will do more
> > > > > testing tomorrow.
> > > > >
> > > > > One reason could be that I switched from get_user_pages_fast() to
> > > > > __get_user_pages_fast(), which doesn't need mmap_sem.
> > > > >
> > > > > Thanks
> > > > OK that sounds good. If we also set a flag to make
> > > > vhost_exceeds_weight exit, then I think it will be all good.
> > >
> > > After some experiments, I came up with two methods:
> > >
> > > 1) switch to vq->mutex; then we must take the vq lock during range
> > > checking (but I don't see an obvious slowdown for 16 vcpus + 16 queues).
> > > Setting a flag during the weight check should work, but it still can't
> > > address the worst case: waiting for the page to be swapped in. Is this
> > > acceptable?
> > >
> > > 2) keep the current RCU but replace synchronize_rcu() with
> > > vhost_work_flush(). The worst case is the same as in 1), but we can
> > > check the range without holding any locks.
> > >
> > > Which one do you prefer?
> > >
> > > Thanks
> > I would rather we start with 1 and switch to 2 after we
> > can show some gain.
> >
> > But the worst case needs to be addressed.
>
>
> Yes.
>
>
> > How about sending a signal to
> > the vhost thread? We will need to fix up error handling (I think that
> > at the moment it will error out in that case, handling it as EFAULT -
> > and we don't want to drop packets if we can help it, and surely don't
> > want to enter any error state. In particular it might be especially
> > tricky if we wrote into userspace memory and are now trying to log the
> > write. I guess we can disable the optimization if logging is enabled?).
>
>
> This may work but requires a lot of changes.

I agree.

> And actually it's the price of
> using vq mutex.

Not sure what's meant here.

> Actually, the critical section should be rather small, e.g.
> just inside the memory accessors.

Also true.

>
> I wonder whether or not we could just do the synchronization ourselves, like:
>
> static inline void vhost_inc_vq_ref(struct vhost_virtqueue *vq)
> {
>         int ref = READ_ONCE(vq->ref);
>
>         WRITE_ONCE(vq->ref, ref + 1);
>         smp_rmb();
> }
>
> static inline void vhost_dec_vq_ref(struct vhost_virtqueue *vq)
> {
>         int ref = READ_ONCE(vq->ref);
>
>         smp_wmb();
>         WRITE_ONCE(vq->ref, ref - 1);
> }
>
> static inline void vhost_wait_for_ref(struct vhost_virtqueue *vq)
> {
>         while (READ_ONCE(vq->ref));
>         mb();
> }

Looks good, but I'd like to think of a strategy/existing lock that lets us
block properly as opposed to spinning; that would be more friendly to
e.g. the realtime patch.
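
For instance, something along these lines (only a sketch, untested: it assumes
vq->ref becomes an atomic_t and adds a hypothetical vq->ref_wait waitqueue,
initialized with init_waitqueue_head() at vq setup):

static inline void vhost_inc_vq_ref(struct vhost_virtqueue *vq)
{
        atomic_inc(&vq->ref);
        /* order the ref increment before the accesses it protects */
        smp_mb__after_atomic();
}

static inline void vhost_dec_vq_ref(struct vhost_virtqueue *vq)
{
        /*
         * atomic_dec_and_test() is fully ordered, so the protected
         * accesses are visible before the ref is seen to drop.
         */
        if (atomic_dec_and_test(&vq->ref))
                wake_up(&vq->ref_wait);
}

static inline void vhost_wait_for_ref(struct vhost_virtqueue *vq)
{
        /* sleep instead of spinning until all in-flight users are gone */
        wait_event(vq->ref_wait, !atomic_read(&vq->ref));
}

wait_event() keeps the waiter off the CPU entirely, which should be friendlier
to realtime than a busy loop.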

>
> Or using smp_load_acquire()/smp_store_release() instead?
>
> Thanks

These are cheaper on x86, yes.
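
On the dec/wait side a direct substitution could look like this (again just a
sketch reusing the proposed vq->ref field; note the inc side would still want
a full smp_mb() after its store, since release/acquire alone don't order the
increment against the loads that follow it):

static inline void vhost_dec_vq_ref(struct vhost_virtqueue *vq)
{
        int ref = READ_ONCE(vq->ref);

        /* release: order the protected accesses before dropping the ref */
        smp_store_release(&vq->ref, ref - 1);
}

static inline void vhost_wait_for_ref(struct vhost_virtqueue *vq)
{
        /* acquire pairs with the release in vhost_dec_vq_ref() */
        while (smp_load_acquire(&vq->ref))
                cpu_relax();
}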

> >
