    Subject: [PATCH v7 21/36] drm: v3d: fix common struct sg_table related issues
    The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
    returns the number of entries created in the DMA address space. However,
    the subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg()
    must be made with the original number of entries passed to dma_map_sg().
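
    For reference, a minimal sketch of that calling convention (the 'dev'
    pointer, the DMA direction and the error handling below are illustrative
    placeholders, not code from this patch):

        #include <linux/dma-mapping.h>
        #include <linux/scatterlist.h>

        static int example_map(struct device *dev, struct sg_table *sgt)
        {
                int nents;

                /* dma_map_sg() returns the number of DMA segments created */
                nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
                if (nents == 0)
                        return -ENOMEM;
                sgt->nents = nents;

                /* ... use the mapping ... */

                /* sync and unmap still take the original number of CPU entries */
                dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
                dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
                return 0;
        }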

    struct sg_table is a common structure used for describing a non-contiguous
    memory buffer, widely used in the DRM and graphics subsystems. It consists
    of a scatterlist with memory pages and DMA addresses (the sgl entry), as
    well as the number of scatterlist entries: CPU pages (the orig_nents entry)
    and DMA mapped pages (the nents entry).
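
    The relevant part of the structure looks roughly like this (see
    include/linux/scatterlist.h):

        struct sg_table {
                struct scatterlist *sgl;        /* the list */
                unsigned int nents;             /* number of mapped entries */
                unsigned int orig_nents;        /* original size of list */
        };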

    It turned out that it was a common mistake to misuse the nents and
    orig_nents entries, calling DMA-mapping functions with the wrong number of
    entries or ignoring the number of mapped entries returned by the
    dma_map_sg() function.

    To avoid such issues, let's use the common dma-mapping wrappers operating
    directly on struct sg_table objects and use scatterlist page iterators
    where possible. This, almost always, hides references to the nents and
    orig_nents entries, making the code robust, easier to follow and
    copy/paste safe.
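
    As an illustration only (not part of this patch, with a hypothetical 'dev'
    pointer and DMA direction), a driver using the sg_table wrappers and the
    page iterator would look roughly like this:

        #include <linux/device.h>
        #include <linux/dma-mapping.h>
        #include <linux/scatterlist.h>

        static int example_map_sgtable(struct device *dev, struct sg_table *sgt)
        {
                struct sg_dma_page_iter dma_iter;
                int ret;

                /* the wrapper keeps nents/orig_nents consistent internally */
                ret = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
                if (ret)
                        return ret;

                /* walk the mapped buffer page by page, no nents bookkeeping */
                for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
                        dma_addr_t dma_addr = sg_page_iter_dma_address(&dma_iter);

                        dev_dbg(dev, "mapped page at %pad\n", &dma_addr);
                }

                dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
                return 0;
        }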

    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Reviewed-by: Eric Anholt <eric@anholt.net>
    ---
    drivers/gpu/drm/v3d/v3d_mmu.c | 13 ++++++-------
    1 file changed, 6 insertions(+), 7 deletions(-)

    diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c
    index 3b81ea28c0bb..5a453532901f 100644
    --- a/drivers/gpu/drm/v3d/v3d_mmu.c
    +++ b/drivers/gpu/drm/v3d/v3d_mmu.c
    @@ -90,18 +90,17 @@ void v3d_mmu_insert_ptes(struct v3d_bo *bo)
             struct v3d_dev *v3d = to_v3d_dev(shmem_obj->base.dev);
             u32 page = bo->node.start;
             u32 page_prot = V3D_PTE_WRITEABLE | V3D_PTE_VALID;
    -        unsigned int count;
    -        struct scatterlist *sgl;
    +        struct sg_dma_page_iter dma_iter;
     
    -        for_each_sg(shmem_obj->sgt->sgl, sgl, shmem_obj->sgt->nents, count) {
    -                u32 page_address = sg_dma_address(sgl) >> V3D_MMU_PAGE_SHIFT;
    +        for_each_sgtable_dma_page(shmem_obj->sgt, &dma_iter, 0) {
    +                dma_addr_t dma_addr = sg_page_iter_dma_address(&dma_iter);
    +                u32 page_address = dma_addr >> V3D_MMU_PAGE_SHIFT;
                     u32 pte = page_prot | page_address;
                     u32 i;
     
    -                BUG_ON(page_address + (sg_dma_len(sgl) >> V3D_MMU_PAGE_SHIFT) >=
    +                BUG_ON(page_address + (PAGE_SIZE >> V3D_MMU_PAGE_SHIFT) >=
                            BIT(24));
    -
    -                for (i = 0; i < sg_dma_len(sgl) >> V3D_MMU_PAGE_SHIFT; i++)
    +                for (i = 0; i < PAGE_SIZE >> V3D_MMU_PAGE_SHIFT; i++)
                             v3d->pt[page++] = pte + i;
             }

    --
    2.17.1