Subject: [PATCH 5.18 828/879] Revert "mm/cma.c: remove redundant cma_mutex lock"
    From: Dong Aisheng <aisheng.dong@nxp.com>

    commit 60a60e32cf91169840abcb4a80f0b0df31708ba7 upstream.

This reverts commit a4efc174b382fcdb, which introduced a regression: when
multiple processes allocate DMA memory in parallel by calling
dma_alloc_coherent(), the allocation sometimes fails as follows:

    Error log:
    cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
    cma: number of available pages:
    3@125+20@172+12@236+4@380+32@736+17@2287+23@2473+20@36076+99@40477+108@40852+44@41108+20@41196+108@41364+108@41620+
    108@42900+108@43156+483@44061+1763@45341+1440@47712+20@49324+20@49388+5076@49452+2304@55040+35@58141+20@58220+20@58284+
    7188@58348+84@66220+7276@66452+227@74525+6371@75549=> 33161 free of 81920 total pages
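For context, a minimal reproducer sketch of the failure mode (not part of
the patch; `dev` is assumed to be a device whose coherent allocations are
backed by the shared CMA area, and the worker is illustrative only):

#include <linux/dma-mapping.h>
#include <linux/kthread.h>

/* Each worker allocates and frees a 148-page coherent buffer, the
 * request size that fails in the log above.  Running several of these
 * in parallel against a CMA-backed device can hit the -EBUSY (-16)
 * failure even though plenty of CMA memory is still free. */
static int cma_alloc_worker(void *data)
{
	struct device *dev = data;
	dma_addr_t handle;
	void *buf;

	buf = dma_alloc_coherent(dev, 148 * PAGE_SIZE, &handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;		/* fails intermittently under contention */
	dma_free_coherent(dev, 148 * PAGE_SIZE, buf, handle);
	return 0;
}

/* e.g. from module init:
 *	for (i = 0; i < 4; i++)
 *		kthread_run(cma_alloc_worker, dev, "cma-worker-%d", i);
 */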

When the issue happened, we saw that there were still 33161 free CMA pages
(129M) and plenty of free slots in the CMA bitmap large enough for the 148
pages we wanted to allocate.

When dumping memory info, we found that there was also ~342M of normal
memory, but only 1352K of CMA memory left in the buddy system, while many
pageblocks were isolated.

    Memory info log:
    Normal free:351096kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB
    active_anon:98060kB inactive_anon:98948kB active_file:60864kB inactive_file:31776kB
    unevictable:0kB writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
    bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB lowmem_reserve[]: 0 0 0
    Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI) 65*64kB (UMCI)
    36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI) 4*2048kB (MI) 8*4096kB (EI)
    8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M) = 489288kB

The root cause of this issue is that since commit a4efc174b382 ("mm/cma.c:
remove redundant cma_mutex lock"), CMA supports concurrent memory
allocation. It is possible that the memory range process A is trying to
allocate has already been isolated by process B's allocation during memory
migration.

The problem here is that the memory range isolated during one allocation
by start_isolate_page_range() can be much bigger than the size we actually
want to allocate, because the range is aligned to MAX_ORDER_NR_PAGES.
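As an illustration (not part of the patch), the widening works roughly
like this; upstream alloc_contig_range() uses pfn_max_align_down() /
pfn_max_align_up(), which also take pageblock_nr_pages into account:

#include <linux/align.h>
#include <linux/mmzone.h>

/* Sketch of the alignment applied before isolation: a request for
 * `count` pages at `pfn` pins the whole surrounding
 * MAX_ORDER_NR_PAGES-aligned window against concurrent allocators.
 * With 4K pages and max_order 14, a 148-page (592K) request can
 * isolate an 8192-page (32M) window. */
static void isolation_window(unsigned long pfn, unsigned long count,
			     unsigned long *start, unsigned long *end)
{
	*start = ALIGN_DOWN(pfn, MAX_ORDER_NR_PAGES);
	*end = ALIGN(pfn + count, MAX_ORDER_NR_PAGES);
}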

Taking an ARMv7 platform with 1G memory as an example, when
MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and the CMA area is
relatively small (e.g. 128M), there are only 4 MAX_ORDER slots, so it is
very likely that all CMA memory has already been isolated by other
processes when one process tries to allocate memory using
dma_alloc_coherent(). Since the current CMA code scans the whole available
CMA memory only once, dma_alloc_coherent() can easily fail due to
contention with other processes.
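The slot arithmetic for that example, spelled out (illustrative only,
assuming 4K pages; the helper name is hypothetical):

#include <linux/mmzone.h>
#include <linux/sizes.h>

/* max_order 14 => MAX_ORDER_NR_PAGES = 1 << 13 = 8192 pages = 32M */
static unsigned long cma_max_order_slots(void)
{
	unsigned long slot_pages = MAX_ORDER_NR_PAGES;		/* 8192  */
	unsigned long cma_pages = SZ_128M >> PAGE_SHIFT;	/* 32768 */

	/* 32768 / 8192 = 4: four concurrent cma_alloc() calls can
	 * isolate the entire CMA area, so a fifth caller finds every
	 * range busy and fails with -EBUSY. */
	return cma_pages / slot_pages;
}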

This patch simply falls back to the original method of using cma_mutex to
make alloc_contig_range() run sequentially, avoiding the issue.

    Link: https://lkml.kernel.org/r/20220509094551.3596244-1-aisheng.dong@nxp.com
    Link: https://lore.kernel.org/all/20220315144521.3810298-2-aisheng.dong@nxp.com/
    Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
    Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Acked-by: David Hildenbrand <david@redhat.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: <stable@vger.kernel.org> [5.11+]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    mm/cma.c | 4 +++-
    1 file changed, 3 insertions(+), 1 deletion(-)

    --- a/mm/cma.c
    +++ b/mm/cma.c
@@ -37,6 +37,7 @@
 
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);
 
 phys_addr_t cma_get_base(const struct cma *cma)
 {
@@ -468,9 +469,10 @@ struct page *cma_alloc(struct cma *cma,
 		spin_unlock_irq(&cma->lock);
 
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
+		mutex_lock(&cma_mutex);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
 				GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
-
+		mutex_unlock(&cma_mutex);
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
 			break;
