 
Subject: [PATCH 5.8 352/633] RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
    From: Jason Gunthorpe <jgg@nvidia.com>

    [ Upstream commit a40c20dabdf9045270767c75918feb67f0727c89 ]

It is possible for a single SGL to span an aligned boundary, e.g. if the
SGL is

    61440 -> 90112

    Then the length is 28672, which currently limits the block size to
    32k. With a 32k page size the two covering blocks will be:

    32768->65536 and 65536->98304

However, the correct answer is a 128K block size, which spans the whole
28672 bytes in a single block.

Instead of limiting based on length, figure out which high IOVA bits don't
change between the start and end addresses: the lowest such bit gives the
highest useful page size.
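To make the arithmetic concrete: for the 61440 -> 90112 range, virt XOR
(virt + length - 1) = 61440 ^ 90111 = 0x1AFFF, whose highest set bit is
bit 16, so every bit from bit 17 up is constant across the range and one
2^17 = 128K block covers it. A minimal standalone C sketch of that
computation (bits_needed() is a hypothetical stand-in for the kernel's
bits_per(); nothing here is part of the patch):

#include <stdio.h>

/* Number of bits needed to represent n; stands in for the kernel's
 * bits_per() helper.
 */
static unsigned int bits_needed(unsigned long long n)
{
	unsigned int bits = 0;

	while (n) {
		bits++;
		n >>= 1;
	}
	return bits ? bits : 1;
}

int main(void)
{
	unsigned long long virt = 61440;		/* start of range */
	unsigned long long length = 28672;		/* 90112 - 61440  */
	unsigned long long last = virt + length - 1;	/* 90111          */

	/* Old scheme: cap at roundup_pow_of_two(length), which is
	 * 32768 (32k) for this example.
	 */
	printf("old cap: %llu\n", 1ULL << bits_needed(length - 1));

	/* New scheme: first bit position above every bit that changes
	 * across the range; virt ^ last = 0x1AFFF, so bits_needed()
	 * returns 17 and the cap is 131072 (128K).
	 */
	printf("new cap: %llu\n", 1ULL << bits_needed(virt ^ last));

	return 0;
}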

    Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
    Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
    Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    drivers/infiniband/core/umem.c | 9 +++++++--
    1 file changed, 7 insertions(+), 2 deletions(-)

    diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
    index 82455a1392f1d..1173b8cbe92b5 100644
    --- a/drivers/infiniband/core/umem.c
    +++ b/drivers/infiniband/core/umem.c
@@ -156,8 +156,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 		return 0;
 
 	va = virt;
-	/* max page size not to exceed MR length */
-	mask = roundup_pow_of_two(umem->length);
+	/* The best result is the smallest page size that results in the minimum
+	 * number of required pages. Compute the largest page size that could
+	 * work based on VA address bits that don't change.
+	 */
+	mask = pgsz_bitmap &
+	       GENMASK(BITS_PER_LONG - 1,
+		       bits_per((umem->length - 1 + virt) ^ virt));
 	/* offset into first SGL */
 	pgoff = umem->address & ~PAGE_MASK;
 
    --
    2.25.1
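
For readers tracing the new mask initialization, here is a userspace model
of just that expression, assuming 64-bit longs; GENMASK64() and bits_per64()
imitate the kernel's GENMASK() and bits_per(), and the supported-page-size
bitmap is an invented example, not from the patch:

#include <stdio.h>
#include <stdint.h>

/* Bits l..h set, like the kernel's GENMASK(); 64-bit only. */
#define GENMASK64(h, l) \
	((~0ULL >> (63 - (h))) & (~0ULL << (l)))

/* Number of bits needed to represent n, like the kernel's bits_per(). */
static unsigned int bits_per64(uint64_t n)
{
	unsigned int bits = 0;

	while (n) {
		bits++;
		n >>= 1;
	}
	return bits ? bits : 1;
}

int main(void)
{
	/* Invented example: HW supports 4K, 64K, 2M and 1G pages. */
	uint64_t pgsz_bitmap = (1ULL << 12) | (1ULL << 16) |
			       (1ULL << 21) | (1ULL << 30);
	uint64_t virt = 61440, length = 28672;
	uint64_t mask;

	/* The patched initialization: keep only the supported page sizes
	 * at least as large as the block that covers the whole range.
	 */
	mask = pgsz_bitmap &
	       GENMASK64(63, bits_per64((length - 1 + virt) ^ virt));

	/* Lowest surviving bit = smallest supported page size that maps
	 * the range with a single page (2M for this bitmap).
	 */
	printf("single-page size: %llu\n",
	       (unsigned long long)(mask & -mask));
	return 0;
}

This starting mask is only a cap: the SGL walk that follows in
ib_umem_find_best_pgsz() can still force a smaller page size when the DMA
addresses are less aligned.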

