Subject: Re: [PATCH v2 7/8] nouveau/dmem: Evict device private memory during release
From: Lyude Paul <lyude@redhat.com>
Date: 2022-09-28
Reviewed-by: Lyude Paul <lyude@redhat.com>

On Wed, 2022-09-28 at 22:01 +1000, Alistair Popple wrote:
> When the module is unloaded or a GPU is unbound from the module it is
> possible for device private pages to still be mapped in currently
> running processes. This can lead to hangs and RCU stall warnings when
> unbinding the device, as memunmap_pages() will wait in an uninterruptible
> state until all device pages have been freed, which may never happen.
>
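For context, the hang comes from memunmap_pages() itself: it kills the
pagemap's percpu refcount and then sleeps, uninterruptibly, until the
release callback has run for the last device page. Paraphrasing the
mm/memremap.c logic (a sketch, not the verbatim code):

	/* Stop new references to the pagemap from being taken. */
	percpu_ref_kill(&pgmap->ref);
	/*
	 * Sleeps until the release callback completes pgmap->done,
	 * which only happens once every device-private page has been
	 * freed. If a process still has such pages mapped, this
	 * never returns.
	 */
	wait_for_completion(&pgmap->done);
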
> Fix this by migrating device mappings back to normal CPU memory prior to
> freeing the GPU memory chunks and associated device private pages.
>
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Cc: Lyude Paul <lyude@redhat.com>
> Cc: Ben Skeggs <bskeggs@redhat.com>
> Cc: Ralph Campbell <rcampbell@nvidia.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> ---
> drivers/gpu/drm/nouveau/nouveau_dmem.c | 48 +++++++++++++++++++++++++++-
> 1 file changed, 48 insertions(+)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 65f51fb..5fe2091 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -367,6 +367,52 @@ nouveau_dmem_suspend(struct nouveau_drm *drm)
>  	mutex_unlock(&drm->dmem->mutex);
>  }
>
> +/*
> + * Evict all pages mapping a chunk.
> + */
> +static void
> +nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
> +{
> +	unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
> +	unsigned long *src_pfns, *dst_pfns;
> +	dma_addr_t *dma_addrs;
> +	struct nouveau_fence *fence;
> +
> +	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
> +	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
> +	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
> +
> +	migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
> +			npages);
> +
> +	for (i = 0; i < npages; i++) {
> +		if (src_pfns[i] & MIGRATE_PFN_MIGRATE) {
> +			struct page *dpage;
> +
> +			/*
> +			 * __GFP_NOFAIL because the GPU is going away and there
> +			 * is nothing sensible we can do if we can't copy the
> +			 * data back.
> +			 */
> +			dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
> +			dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
> +			nouveau_dmem_copy_one(chunk->drm,
> +					migrate_pfn_to_page(src_pfns[i]), dpage,
> +					&dma_addrs[i]);
> +		}
> +	}
> +
> +	nouveau_fence_new(chunk->drm->dmem->migrate.chan, false, &fence);
> +	migrate_device_pages(src_pfns, dst_pfns, npages);
> +	nouveau_dmem_fence_done(&fence);
> +	migrate_device_finalize(src_pfns, dst_pfns, npages);
> +	kfree(src_pfns);
> +	kfree(dst_pfns);
> +	for (i = 0; i < npages; i++)
> +		dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
> +	kfree(dma_addrs);
> +}
> +
>  void
>  nouveau_dmem_fini(struct nouveau_drm *drm)
>  {
> @@ -378,8 +424,10 @@ nouveau_dmem_fini(struct nouveau_drm *drm)
>  	mutex_lock(&drm->dmem->mutex);
>
>  	list_for_each_entry_safe(chunk, tmp, &drm->dmem->chunks, list) {
> +		nouveau_dmem_evict_chunk(chunk);
>  		nouveau_bo_unpin(chunk->bo);
>  		nouveau_bo_ref(NULL, &chunk->bo);
> +		WARN_ON(chunk->callocated);
>  		list_del(&chunk->list);
>  		memunmap_pages(&chunk->pagemap);
>  		release_mem_region(chunk->pagemap.range.start,
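
The ordering above is the important part, and it generalizes beyond
nouveau: eviction has to happen before memunmap_pages(), so that no
device-private page can still be mapped by the time the refcount wait
starts. A minimal sketch of that shape for a hypothetical driver (the
mydrv_* names are made up for illustration):

	static void mydrv_pagemap_fini(struct mydrv *drv)
	{
		/*
		 * Migrate all data back to system memory first, so no
		 * device-private page remains mapped anywhere.
		 */
		mydrv_evict_all(drv);
		/*
		 * Only now is memunmap_pages() guaranteed to finish:
		 * eviction freed every device page, so the internal
		 * wait_for_completion() cannot block forever.
		 */
		memunmap_pages(&drv->pagemap);
		release_mem_region(drv->pagemap.range.start,
				   range_len(&drv->pagemap.range));
	}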

--
Cheers,
Lyude Paul (she/her)
Software Engineer at Red Hat
