Date: 2018-09-27
Subject: Re: [PATCH 3/3] virtio-pmem: Add virtio pmem driver

On Thu, Sep 27, 2018 at 6:07 AM Pankaj Gupta <pagupta@redhat.com> wrote:
[..]
> > We are plugging a VIRTIO based flush callback for the virtio_pmem driver. If the pmem
> > driver (pmem_make_request) has to queue requests, we have to plug "blk_mq_ops"
> > callbacks for the corresponding VIRTIO vqs. AFAICU there is no existing multiqueue
> > code merged for the pmem driver yet, though I could see patches by Dave upstream.
> >
>
> I thought about this. With the current infrastructure, "make_request" releases the spinlock
> and puts the current thread/task to sleep. All other threads are free to call 'make_request'/flush
> and similarly wait after releasing the lock.

Which lock are you referring to?

> This actually works like a queue of threads
> waiting for notifications from the host.
>
> The current pmem code does not have multiqueue support, and I am not sure if the core pmem
> code needs it. Adding multiqueue support just for virtio-pmem and not for pmem in the same
> driver will be confusing or require a lot of tweaking.

Why does the pmem driver need to be converted to multiqueue support?

> Could you please give your suggestions on this?

I was expecting that flush requests that cannot be completed
synchronously be placed on a queue and have bio_endio() called at a
future time. I.e. use bio_chain() to manage the async portion of the
flush request. This causes the guest block layer to just assume the
bio was queued and will be completed at some point in the future.
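For what it's worth, a rough sketch of that shape might look like the code below. This
is hypothetical, not the actual driver: the names (virtio_pmem_device,
virtio_pmem_flush_req, virtio_pmem_queue_flush, virtio_pmem_flush_done) are made up for
illustration, and the calls assume the 4.19-era block and virtio APIs
(bio_alloc(gfp, nr), bio_copy_dev(), bio_chain(), virtqueue_add_sgs()).

#include <linux/bio.h>
#include <linux/blk_types.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>

struct virtio_pmem_device {		/* hypothetical per-device state */
	struct virtqueue *req_vq;	/* flush request virtqueue */
	spinlock_t lock;		/* serializes virtqueue access */
};

struct virtio_pmem_flush_req {		/* one in-flight host flush */
	__le32 resp;			/* host writes flush status here */
	struct bio *child;		/* chained bio, completed below */
};

/*
 * Called from the make_request path for a REQ_PREFLUSH bio. The parent
 * bio is not completed here: a child is chained to it and handed to the
 * host, so the block layer just sees a queued request. The parent cannot
 * actually complete until the chained child is ended.
 */
static int virtio_pmem_queue_flush(struct virtio_pmem_device *vpmem,
				   struct bio *parent)
{
	struct virtio_pmem_flush_req *req;
	struct scatterlist sg, *sgs[1];
	struct bio *child;
	unsigned long flags;
	int err;

	req = kmalloc(sizeof(*req), GFP_ATOMIC);
	if (!req)
		return -ENOMEM;
	child = bio_alloc(GFP_ATOMIC, 0);
	if (!child) {
		kfree(req);
		return -ENOMEM;
	}
	bio_copy_dev(child, parent);
	child->bi_opf = REQ_PREFLUSH;
	bio_chain(child, parent);	/* parent waits for child to end */
	req->child = child;

	sg_init_one(&sg, &req->resp, sizeof(req->resp));
	sgs[0] = &sg;			/* single device-writable buffer */

	spin_lock_irqsave(&vpmem->lock, flags);
	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 0, 1, req, GFP_ATOMIC);
	if (!err)
		virtqueue_kick(vpmem->req_vq);
	spin_unlock_irqrestore(&vpmem->lock, flags);

	if (err) {
		/* complete the chained child with an error instead */
		child->bi_status = BLK_STS_IOERR;
		bio_endio(child);
		kfree(req);
	}
	return err;
}

/* Virtqueue callback: the host has acknowledged the flush. */
static void virtio_pmem_flush_done(struct virtqueue *vq)
{
	struct virtio_pmem_device *vpmem = vq->vdev->priv;
	struct virtio_pmem_flush_req *req;
	unsigned long flags;
	unsigned int len;

	spin_lock_irqsave(&vpmem->lock, flags);
	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
		/* a real driver would check req->resp and set bi_status */
		bio_endio(req->child);	/* completes child, then parent */
		kfree(req);
	}
	spin_unlock_irqrestore(&vpmem->lock, flags);
}

In this sketch the submitting path keeps processing the parent bio (data payload, FUA,
etc.) and ends it as usual; because of the bio_chain() the parent only really completes
once the host's flush acknowledgement arrives and the child is ended from the callback.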
