Subject: Re: [PATCH 0/1] vhost: add vhost_blk driver

On Sun, Nov 4, 2018 at 10:00 PM Jason Wang <jasowang@redhat.com> wrote:

> > # fio num-jobs
> > # A: bare metal over block
> > # B: bare metal over file
> > # C: virtio-blk over block
> > # D: virtio-blk over file
> > # E: vhost-blk bio over block
> > # F: vhost-blk kiocb over block
> > # G: vhost-blk kiocb over file
> > #
> > # jobs  A      B      C      D      E      F      G
> >   16    1480k  1506k  101k   102k   1346k  1202k  566k

> Hi:
>
> Thanks for the patches.
>
> This is not the first attempt for having vhost-blk:
>
> - Badari's version: https://lwn.net/Articles/379864/
>
> - Asias' version: https://lwn.net/Articles/519880/
>
> It's better to describe the differences (kiocb vs. bio? performance?).
> E.g., if my memory is correct, Asias said it didn't give much improvement
> compared with userspace qemu.
>
> And what's more important, I believe we tend to use virtio-scsi nowadays.
> So what are the advantages of vhost-blk over vhost-scsi?

Hi,

Yes, I saw both. Frankly, my implementation is not that different;
the whole thing is only about twice the LOC of vhost/test.c.

I posted my numbers (see the 16-job case in the quoted text above):
IOPS goes from ~100k to ~1.2M and almost reaches the physical limit
of the backend.
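
For reference, the runs were of this shape (a sketch only: the block
size, I/O pattern, queue depth, and target device here are
illustrative guesses, not the exact parameters of the posted runs;
only --numjobs corresponds to the first column of the table):

  fio --name=randread --filename=/dev/vda --direct=1 --rw=randread \
      --bs=4k --ioengine=libaio --iodepth=32 --numjobs=16 \
      --runtime=60 --time_based --group_reporting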

submit_bio() is a bit faster, but it can't be used for disk images
stored on a file system. I have a submit_bio() implementation as well.
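
To make the trade-off concrete, here is a rough sketch of the two
submission paths (the demo_* helpers are made-up names, not the
actual patch code; only the kernel APIs used are real):

  #include <linux/bio.h>
  #include <linux/blkdev.h>
  #include <linux/fs.h>
  #include <linux/uio.h>

  /* bio path: build a bio and feed it to the block layer directly.
   * Bypasses the VFS, so it only works on raw block devices.
   */
  static void demo_submit_bio(struct block_device *bdev, struct page *page,
                              sector_t sector, bio_end_io_t *done)
  {
          struct bio *bio = bio_alloc(GFP_KERNEL, 1);

          bio_set_dev(bio, bdev);
          bio->bi_iter.bi_sector = sector;
          bio->bi_opf = REQ_OP_READ;
          bio_add_page(bio, page, PAGE_SIZE, 0);
          bio->bi_end_io = done;          /* async completion callback */
          submit_bio(bio);
  }

  /* kiocb path: go through the file's read_iter op, the way the
   * loop driver does.  Slightly slower, but works for image files
   * on any filesystem as well as for block devices.
   */
  static ssize_t demo_submit_kiocb(struct file *file, struct kiocb *iocb,
                                   struct iov_iter *iter, loff_t pos)
  {
          iocb->ki_filp = file;
          iocb->ki_pos = pos;
          iocb->ki_flags = IOCB_DIRECT;   /* assumption: O_DIRECT backend */
          return call_read_iter(file, iocb, iter);
  }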

The storage industry is shifting away from SCSI, which has a scaling
problem. I can compare vhost-scsi vs. vhost-blk if you are curious.

Thanks!
--
wbr, Vitaly
