Subject: Re: [PATCH 3/3] dma-pool: limit DMA and DMA32 zone size pools
On Wed 17-08-22 08:06:47, Christoph Hellwig wrote:
> Limit the sizing of the atomic pools for allocations from
> ZONE_DMA and ZONE_DMA32 based on the number of pages actually
> present in those zones instead of the total memory.
>
> Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
> Reported-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Yes, this is a much more reasonable estimation, especially for the DMA
zone. I still believe some downscaling would make sense, but as said
earlier that is nothing to really insist on. You also likely want a
pr_warn for a non-populated DMA zone, as pointed out in the other
thread [1]; something along the lines of the sketch below.
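
To illustrate, a rough sketch of what I have in mind, reusing
nr_managed_pages() and nr_zone_dma_pages from your patch; the exact
placement and message wording are only an example, not something I
insist on:

#ifdef CONFIG_ZONE_DMA
        nr_zone_dma_pages = nr_managed_pages(ZONE_DMA);
        if (nr_zone_dma_pages) {
                atomic_pool_dma = __dma_atomic_pool_init(
                                calculate_pool_size(nr_zone_dma_pages),
                                GFP_DMA);
        } else {
                /* sketch only: flag a configured but unpopulated ZONE_DMA */
                pr_warn("ZONE_DMA is enabled but has no managed pages, skipping GFP_DMA atomic pool\n");
        }
#endif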

Other than that looks good to me. FWIW
Acked-by: Michal Hocko <mhocko@suse.com>

[1] https://lore.kernel.org/all/YvyN/vI7+cTv0geS@dhcp22.suse.cz/?q=http%3A%2F%2Flkml.kernel.org%2Fr%2FYvyN%2FvI7%2BcTv0geS%40dhcp22.suse.cz

> ---
> kernel/dma/pool.c | 40 +++++++++++++++++++++++++++++-----------
> 1 file changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 5b07b0379a501..f629c6dfd8555 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -193,28 +193,46 @@ static unsigned long calculate_pool_size(unsigned long zone_pages)
>         return max_t(unsigned long, nr_pages << PAGE_SHIFT, SZ_128K);
> }
>
> +#if defined(CONFIG_ZONE_DMA) || defined(CONFIG_ZONE_DMA32)
> +static unsigned long __init nr_managed_pages(int zone_idx)
> +{
> +        struct pglist_data *pgdat;
> +        unsigned long nr_pages = 0;
> +
> +        for_each_online_pgdat(pgdat)
> +                nr_pages += zone_managed_pages(&pgdat->node_zones[zone_idx]);
> +        return nr_pages;
> +}
> +#endif /* CONFIG_ZONE_DMA || CONFIG_ZONE_DMA32 */
> +
> static int __init dma_atomic_pool_init(void)
> {
> +        unsigned long nr_zone_dma_pages, nr_zone_dma32_pages;
> +
> +#ifdef CONFIG_ZONE_DMA
> +        nr_zone_dma_pages = nr_managed_pages(ZONE_DMA);
> +        if (nr_zone_dma_pages)
> +                atomic_pool_dma = __dma_atomic_pool_init(
> +                                calculate_pool_size(nr_zone_dma_pages),
> +                                GFP_DMA);
> +#endif
> +#ifdef CONFIG_ZONE_DMA32
> +        nr_zone_dma32_pages = nr_managed_pages(ZONE_DMA32);
> +        if (nr_zone_dma32_pages)
> +                atomic_pool_dma32 = __dma_atomic_pool_init(
> +                                calculate_pool_size(nr_zone_dma32_pages),
> +                                GFP_DMA32);
> +#endif
>         /*
>          * If coherent_pool was not used on the command line, default the pool
>          * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
>          */
>         if (!atomic_pool_size)
>                 atomic_pool_size = calculate_pool_size(totalram_pages());
> -
> -        INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
> -
>         atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size,
>                                                     GFP_KERNEL);
> -        if (has_managed_dma()) {
> -                atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
> -                                GFP_KERNEL | GFP_DMA);
> -        }
> -        if (IS_ENABLED(CONFIG_ZONE_DMA32)) {
> -                atomic_pool_dma32 = __dma_atomic_pool_init(atomic_pool_size,
> -                                GFP_KERNEL | GFP_DMA32);
> -        }
>
> +        INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
>         dma_atomic_pool_debugfs_init();
>         return 0;
> }
> --
> 2.30.2

--
Michal Hocko
SUSE Labs
