Subject: Re: [PATCH 0/3] Provide more fine grained control over multipathing
> Moreover, I also wanted to point out that fabrics array vendors are
> building products that rely on standard nvme multipathing (and probably
> multipathing over dispersed namespaces as well), and keeping a knob that
> will keep nvme users with dm-multipath will probably not help them
> educate their customers as well... So there is another angle to this.

As a vendor building an NVMe-oF storage array, I can say that clarity
around how Linux wants to handle NVMe multipath would definitely be
appreciated. It would be great if we could all converge on the upstream
native driver, but right now it doesn't look adequate: having only a
single active path is not the best way to use a multi-controller storage
system. Unfortunately, it looks like we're headed toward a world where
people have to write separate "best practices" documents for RHEL, SLES,
and other distributions.

We plan to implement all the fancy NVMe standards like ANA, but it seems
there is still a requirement to let the host side choose policies about
how to use paths (round-robin vs. least queue depth, for example). Even
in the modern SCSI world with VPD pages and ALUA, there are still knobs
that are needed. Maybe NVMe will be different and we can find defaults
that work in all cases, but I have to admit I'm skeptical...
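
To make the policy question concrete, here is a minimal userspace sketch
contrasting the two selection policies mentioned above. It is only an
illustration of the idea, not kernel code: "struct path", select_rr()
and select_lqd() are hypothetical names, not existing driver or nvme-cli
interfaces.

#include <stddef.h>
#include <stdio.h>

struct path {
	const char *name;	/* controller/port identifier */
	unsigned int qdepth;	/* outstanding I/Os on this path */
};

/* Round-robin: rotate through usable paths regardless of load. */
static size_t select_rr(size_t npaths, size_t *last)
{
	*last = (*last + 1) % npaths;
	return *last;
}

/* Least queue depth: pick the path with the fewest outstanding I/Os. */
static size_t select_lqd(const struct path *paths, size_t npaths)
{
	size_t best = 0;

	for (size_t i = 1; i < npaths; i++)
		if (paths[i].qdepth < paths[best].qdepth)
			best = i;
	return best;
}

int main(void)
{
	struct path paths[] = {
		{ "ctrl0", 2 }, { "ctrl1", 9 }, { "ctrl2", 5 },
	};
	size_t last = 0;

	/* With uneven load the policies disagree: round-robin advances
	 * to ctrl1, least-queue-depth picks the lightly loaded ctrl0. */
	printf("round-robin: %s\n", paths[select_rr(3, &last)].name);
	printf("least-queue-depth: %s\n", paths[select_lqd(paths, 3)].name);
	return 0;
}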

- R.
