Subject: Re: [Qemu-devel] [RFC v2 1/2] virtio: add pmem driver
On Thu, Apr 26, 2018 at 11:44:59AM -0400, Pankaj Gupta wrote:
> > > +	int err;
> > > +
> > > +	sg_init_one(&sg, buf, sizeof(buf));
> > > +
> > > +	err = virtqueue_add_outbuf(vpmem->req_vq, &sg, 1, buf, GFP_KERNEL);
> > > +
> > > +	if (err) {
> > > +		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
> > > +		return;
> > > +	}
> > > +
> > > +	virtqueue_kick(vpmem->req_vq);
> >
> > Is any locking necessary? Two CPUs must not invoke virtio_pmem_flush()
> > at the same time. Not sure if anything guarantees this, maybe you're
> > relying on libnvdimm but I haven't checked.
>
> I thought about this to some extent and wanted to go ahead with a simple version first:
>
> - I think inode-level locking is still there for requests on a single file.
> - For multiple files, our aim is just to flush the backend block image.
> - Even if there is a collision between virtqueue read/write entries, it should just trigger a QEMU fsync.
>   We just want the most recent flush to ensure guest writes are synced properly.
>
> Important point here: we are doing an fsync of the entire block image for the guest virtual disk.

I don't understand your answer. Is locking necessary or not?

From the virtqueue_add_outbuf() documentation:

* Caller must ensure we don't call this with other virtqueue operations
* at the same time (except where noted).
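
For illustration, a minimal sketch of one way to satisfy that
requirement, assuming a hypothetical req_lock mutex added to struct
virtio_pmem (these names are illustrative, not from the patch):

#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

struct virtio_pmem {
	struct virtio_device *vdev;
	struct virtqueue *req_vq;
	struct mutex req_lock;	/* hypothetical: serializes req_vq access */
};

static void virtio_pmem_flush(struct virtio_pmem *vpmem, void *buf,
			      size_t len)
{
	struct scatterlist sg;
	int err;

	sg_init_one(&sg, buf, len);

	/*
	 * Hold the lock across both add_outbuf and kick so no other CPU
	 * touches req_vq concurrently, per the virtqueue_add_outbuf()
	 * documentation quoted above.
	 */
	mutex_lock(&vpmem->req_lock);
	err = virtqueue_add_outbuf(vpmem->req_vq, &sg, 1, buf, GFP_KERNEL);
	if (!err)
		virtqueue_kick(vpmem->req_vq);
	mutex_unlock(&vpmem->req_lock);

	if (err)
		dev_err(&vpmem->vdev->dev,
			"failed to send command to virtio pmem device\n");
}

Whether a mutex, a spinlock with GFP_ATOMIC, or serialization provided
by libnvdimm is the right choice is exactly the open question here.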

Stefan