Subject: [PATCH] staging: kpc2000: kpc_dma: Convert get_user_pages() --> pin_user_pages()
In 2019, we introduced pin_user_pages*() and now we are converting
get_user_pages*() callers to the new API where appropriate. See [1]
and [2] for more information.

When pin_user_pages() returns a partial count of pinned pages, those
pages were not unpinned in the error path. Fix that as part of this
patch.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
https://lwn.net/Articles/807108/
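
For reference, a minimal sketch of the pattern this conversion follows
(illustrative only; pin_user_buffer(), uaddr and page_count are made-up
names and not part of this driver):

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Illustrative only -- not part of this patch. Shows the
 * pin_user_pages() error-handling pattern used here: a partial pin
 * must be undone with unpin_user_pages() on the pages that actually
 * got pinned before failing.
 */
static int pin_user_buffer(unsigned long uaddr, int page_count,
			   struct page **pages)
{
	long rv;

	mmap_read_lock(current->mm);
	rv = pin_user_pages(uaddr, page_count, FOLL_TOUCH | FOLL_WRITE,
			    pages, NULL);
	mmap_read_unlock(current->mm);

	if (rv != page_count) {
		/* Unpin whatever was partially pinned, then bail out. */
		if (rv > 0)
			unpin_user_pages(pages, rv);
		return -EFAULT;
	}
	return 0;
}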

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>
---
Hi,

I have compile-tested this, but I am unable to run-time test it, so any
testing help is much appreciated.

drivers/staging/kpc2000/kpc_dma/fileops.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/kpc2000/kpc_dma/fileops.c b/drivers/staging/kpc2000/kpc_dma/fileops.c
index 8975346..29bab13 100644
--- a/drivers/staging/kpc2000/kpc_dma/fileops.c
+++ b/drivers/staging/kpc2000/kpc_dma/fileops.c
@@ -48,6 +48,7 @@ static int kpc_dma_transfer(struct dev_private_data *priv,
u64 card_addr;
u64 dma_addr;
u64 user_ctl;
+ int nr_pages = 0;

ldev = priv->ldev;

@@ -76,13 +77,15 @@ static int kpc_dma_transfer(struct dev_private_data *priv,

// Lock the user buffer pages in memory, and hold on to the page pointers (for the sglist)
mmap_read_lock(current->mm); /* get memory map semaphore */
- rv = get_user_pages(iov_base, acd->page_count, FOLL_TOUCH | FOLL_WRITE | FOLL_GET, acd->user_pages, NULL);
+ rv = pin_user_pages(iov_base, acd->page_count, FOLL_TOUCH | FOLL_WRITE, acd->user_pages, NULL);
mmap_read_unlock(current->mm); /* release the semaphore */
if (rv != acd->page_count) {
- dev_err(&priv->ldev->pldev->dev, "Couldn't get_user_pages (%ld)\n", rv);
+ dev_err(&priv->ldev->pldev->dev, "Couldn't pin_user_pages (%ld)\n", rv);
+ nr_pages = rv;
goto err_get_user_pages;
}

+ nr_pages = acd->page_count;
// Allocate and setup the sg_table (scatterlist entries)
rv = sg_alloc_table_from_pages(&acd->sgt, acd->user_pages, acd->page_count, iov_base & (PAGE_SIZE - 1), iov_len, GFP_KERNEL);
if (rv) {
@@ -189,10 +192,9 @@ static int kpc_dma_transfer(struct dev_private_data *priv,
sg_free_table(&acd->sgt);
err_dma_map_sg:
err_alloc_sg_table:
- for (i = 0 ; i < acd->page_count ; i++)
- put_page(acd->user_pages[i]);
-
err_get_user_pages:
+ if (nr_pages > 0)
+ unpin_user_pages(acd->user_pages, nr_pages);
kfree(acd->user_pages);
err_alloc_userpages:
kfree(acd);
@@ -217,8 +219,7 @@ void transfer_complete_cb(struct aio_cb_data *acd, size_t xfr_count, u32 flags)

dma_unmap_sg(&acd->ldev->pldev->dev, acd->sgt.sgl, acd->sgt.nents, acd->ldev->dir);

- for (i = 0 ; i < acd->page_count ; i++)
- put_page(acd->user_pages[i]);
+ unpin_user_pages(acd->user_pages, acd->page_count);

sg_free_table(&acd->sgt);

--
1.9.1