    Subject: [PATCH 3.16 006/254] iommu/vt-d: Fix scatterlist offset handling
    3.16.55-rc1 review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Robin Murphy <robin.murphy@arm.com>

    commit 29a90b70893817e2f2bb3cea40a29f5308e21b21 upstream.

    The intel-iommu DMA ops fail to correctly handle scatterlists where
    sg->offset is greater than PAGE_SIZE - the IOVA allocation is computed
    appropriately based on the page-aligned portion of the offset, but the
    mapping is set up relative to sg->page, which means it fails to actually
    cover the whole buffer (and in the worst case doesn't cover it at all):

                       (sg->dma_address + sg->dma_len) ----+
                sg->dma_address ---------+                 |
               iov_pfn------+            |                 |
                            |            |                 |
                            v            v                 v
                   iova:    a        b        c        d        e        f
                            |--------|--------|--------|--------|--------|
                                         <...calculated....>
                            [__________mapped__________]
                    pfn:    0        1        2        3        4        5
                            |--------|--------|--------|--------|--------|
                            ^            ^                 ^
                            |            |                 |
               sg->page ----+            |                 |
                sg->offset --------------+                 |
                       (sg->offset + sg->length) ----------+
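
    (Illustration only, not part of the patch: the mismatch above can be checked with plain user-space arithmetic, using made-up example values - 4 KiB pages, an sg->offset of one and a half pages and an sg->length of two pages. The nrpages() helper below just mirrors the aligned_nrpages() calculation.)

    /* Stand-alone sketch of the old arithmetic; all values are examples. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    /* aligned_nrpages()-style count: pages needed for the buffer, based on
     * the sub-page part of the offset plus the length. */
    static unsigned long nrpages(unsigned long offset, unsigned long length)
    {
        return ((offset & ~PAGE_MASK) + length + PAGE_SIZE - 1) >> PAGE_SHIFT;
    }

    int main(void)
    {
        unsigned long offset = PAGE_SIZE + PAGE_SIZE / 2; /* sg->offset > PAGE_SIZE */
        unsigned long length = 2 * PAGE_SIZE;             /* sg->length */
        uint64_t iova = 0x100000;                         /* iov_pfn << PAGE_SHIFT */

        /* Old code: the mapping covers nrpages() pages from the IOVA base... */
        uint64_t mapped_end = iova + (nrpages(offset, length) << PAGE_SHIFT);
        /* ...but the DMA address handed back includes the full offset. */
        uint64_t dma_addr = iova + offset;
        uint64_t dma_end  = dma_addr + length;

        printf("mapped [%#llx, %#llx), dma [%#llx, %#llx), overrun %llu bytes\n",
               (unsigned long long)iova, (unsigned long long)mapped_end,
               (unsigned long long)dma_addr, (unsigned long long)dma_end,
               (unsigned long long)(dma_end - mapped_end));
        return 0;
    }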

    As a result, the caller ends up overrunning the mapping into whatever
    lies beyond, which usually goes badly:

    [ 429.645492] DMAR: DRHD: handling fault status reg 2
    [ 429.650847] DMAR: [DMA Write] Request device [02:00.4] fault addr f2682000 ...

    Whilst this is a fairly rare occurrence, it can happen from the result
    of intermediate scatterlist processing such as scatterwalk_ffwd() in the
    crypto layer. Whilst that particular site could be fixed up, it still
    seems worthwhile to bring intel-iommu in line with other DMA API
    implementations in handling this robustly.

    To that end, fix the intel_map_sg() path to line up the mapping
    correctly (in units of MM pages rather than VT-d pages to match the
    aligned_nrpages() calculation) regardless of the offset, and use
    sg_phys() consistently for clarity.
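
    (Again illustration only, not kernel code: the same example numbers run through the fixed arithmetic, where only the sub-page part of the offset feeds sg->dma_address and the mapping is anchored at the page-aligned physical start of the buffer, so the DMA range stays inside the mapped region.)

    /* Stand-alone sketch of the fixed arithmetic; all values are examples. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    int main(void)
    {
        unsigned long offset = PAGE_SIZE + PAGE_SIZE / 2; /* sg->offset > PAGE_SIZE */
        unsigned long length = 2 * PAGE_SIZE;             /* sg->length */
        uint64_t page_phys   = 0x40000000;                /* page_to_phys(sg_page(sg)) */
        uint64_t iova        = 0x100000;                  /* iov_pfn << PAGE_SHIFT */

        unsigned long pgoff  = offset & ~PAGE_MASK;       /* sub-page part only */
        unsigned long npages = (pgoff + length + PAGE_SIZE - 1) >> PAGE_SHIFT;

        uint64_t dma_addr    = iova + pgoff;              /* fixed sg->dma_address */
        uint64_t dma_end     = dma_addr + length;
        uint64_t mapped_end  = iova + ((uint64_t)npages << PAGE_SHIFT);
        uint64_t phys_start  = (page_phys + offset) - pgoff; /* sg_phys(sg) - pgoff */

        /* The DMA range now lies entirely within the mapped IOVA region... */
        assert(dma_end <= mapped_end);
        /* ...and the mapping starts at the page the buffer actually begins in. */
        assert(phys_start == ((page_phys + offset) & PAGE_MASK));

        printf("mapped [%#llx, %#llx), dma [%#llx, %#llx), phys base %#llx\n",
               (unsigned long long)iova, (unsigned long long)mapped_end,
               (unsigned long long)dma_addr, (unsigned long long)dma_end,
               (unsigned long long)phys_start);
        return 0;
    }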

    Reported-by: Harsh Jain <Harsh@chelsio.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Reviewed by: Ashok Raj <ashok.raj@intel.com>
    Tested by: Jacob Pan <jacob.jun.pan@intel.com>
    Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    ---
    drivers/iommu/intel-iommu.c | 8 +++++---
    1 file changed, 5 insertions(+), 3 deletions(-)

    --- a/drivers/iommu/intel-iommu.c
    +++ b/drivers/iommu/intel-iommu.c
    @@ -2008,10 +2008,12 @@ static int __domain_mapping(struct dmar_
                     uint64_t tmp;
     
                     if (!sg_res) {
    +                        unsigned int pgoff = sg->offset & ~PAGE_MASK;
    +
                             sg_res = aligned_nrpages(sg->offset, sg->length);
    -                        sg->dma_address = ((dma_addr_t)iov_pfn << VTD_PAGE_SHIFT) + sg->offset;
    +                        sg->dma_address = ((dma_addr_t)iov_pfn << VTD_PAGE_SHIFT) + pgoff;
                             sg->dma_length = sg->length;
    -                        pteval = page_to_phys(sg_page(sg)) | prot;
    +                        pteval = (sg_phys(sg) - pgoff) | prot;
                             phys_pfn = pteval >> VTD_PAGE_SHIFT;
                     }
     
    @@ -3345,7 +3347,7 @@ static int intel_nontranslate_map_sg(str
     
             for_each_sg(sglist, sg, nelems, i) {
                     BUG_ON(!sg_page(sg));
    -                sg->dma_address = page_to_phys(sg_page(sg)) + sg->offset;
    +                sg->dma_address = sg_phys(sg);
                     sg->dma_length = sg->length;
             }
             return nelems;