Date: Mon, 12 Aug 2019
From: Michael S. Tsirkin
Subject: Re: [PATCH V5 0/9] Fixes for vhost metadata acceleration
On Mon, Aug 12, 2019 at 10:44:51AM +0800, Jason Wang wrote:
>
> > On 2019/8/11 1:52 AM, Michael S. Tsirkin wrote:
> > On Fri, Aug 09, 2019 at 01:48:42AM -0400, Jason Wang wrote:
> > > Hi all:
> > >
> > > This series tries to fix several issues introduced by the metadata
> > > acceleration series. Please review.
> > >
> > > Changes from V4:
> > > - switch to using a spinlock to synchronize the MMU notifier with accessors
> > >
> > > Changes from V3:
> > > - remove the unnecessary patch
> > >
> > > Changes from V2:
> > > - use seqlock helpers to synchronize the MMU notifier with the vhost worker
> > >
> > > Changes from V1:
> > > - try not to use RCU to synchronize the MMU notifier with the vhost worker
> > > - set dirty pages after no readers
> > > - return -EAGAIN only when we find the range is overlapped with
> > > metadata
> > >
> > > Jason Wang (9):
> > > vhost: don't set uaddr for invalid address
> > > vhost: validate MMU notifier registration
> > > vhost: fix vhost map leak
> > > vhost: reset invalidate_count in vhost_set_vring_num_addr()
> > > vhost: mark dirty pages during map uninit
> > > vhost: don't do synchronize_rcu() in vhost_uninit_vq_maps()
> > > vhost: do not use RCU to synchronize MMU notifier with worker
> > > vhost: correctly set dirty pages in MMU notifiers callback
> > > vhost: do not return -EAGAIN for non blocking invalidation too early
> > >
> > > drivers/vhost/vhost.c | 202 +++++++++++++++++++++++++-----------------
> > > drivers/vhost/vhost.h | 6 +-
> > > 2 files changed, 122 insertions(+), 86 deletions(-)
> > This generally looks more solid.
> >
> > But this amounts to a significant overhaul of the code.
> >
> > At this point how about we revert 7f466032dc9e5a61217f22ea34b2df932786bbfc
> > for this release, and then re-apply a corrected version
> > for the next one?
>
>
> If possible, considering we've actually disabled the feature, how about
> just queuing those patches for the next release?
>
> Thanks

Sorry if I was unclear. My idea is:
1. I revert the disabled code
2. You send a patch re-adding it with all the fixes squashed
3. Maybe optimizations on top right away?
4. We queue *that* for next and see what happens.

And the advantage over the patchy approach is that the current patches
are hard to review. E.g. it's not reasonable to ask the RCU guys to review
the whole of vhost for RCU usage, but it's much more reasonable to ask
them about a specific patch.
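
For readers skimming the V5 changelog, the "switch to using a spinlock to
synchronize the MMU notifier with accessors" item can be pictured with a
minimal user-space model. This is only a sketch of the locking shape, not
the vhost code; demo_map, worker_read() and invalidate_count are
hypothetical names invented for the example.

/*
 * Illustrative user-space model only -- NOT the vhost implementation.
 * A spinlock serializes an MMU-notifier-style invalidation path against
 * workers that use a cached mapping.
 */
#include <pthread.h>
#include <stdio.h>

struct demo_map {
	int *addr;			/* cached translation; NULL when dropped */
};

static pthread_spinlock_t map_lock;
static struct demo_map map;
static int invalidate_count;		/* invalidations in flight */
static int backing = 42;		/* stands in for guest memory */

/* Worker-side accessor: only touches the map under the lock. */
static int worker_read(int *out)
{
	int ret = -1;

	pthread_spin_lock(&map_lock);
	if (!invalidate_count && map.addr) {
		*out = *map.addr;
		ret = 0;
	}
	pthread_spin_unlock(&map_lock);
	return ret;			/* caller falls back to a slow path */
}

/* Invalidation side, mirroring invalidate_range_start()/end() pairing. */
static void invalidate_start(void)
{
	pthread_spin_lock(&map_lock);
	invalidate_count++;
	map.addr = NULL;		/* drop the cached translation */
	pthread_spin_unlock(&map_lock);
}

static void invalidate_end(void)
{
	pthread_spin_lock(&map_lock);
	invalidate_count--;
	pthread_spin_unlock(&map_lock);
}

int main(void)
{
	int val;

	pthread_spin_init(&map_lock, PTHREAD_PROCESS_PRIVATE);
	map.addr = &backing;

	if (!worker_read(&val))
		printf("fast path read: %d\n", val);

	invalidate_start();
	if (worker_read(&val))
		printf("invalidation pending: take the uaccess slow path\n");
	invalidate_end();

	pthread_spin_destroy(&map_lock);
	return 0;
}

Compared with the RCU and seqlock variants tried in earlier revisions of
the series, the spinlock turns the invalidation path into a plain critical
section, at the cost of every accessor taking the lock.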


--
MST
