Subject: Re: [PATCH v15 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
On Wed, Jan 20, 2021 at 08:06:08PM +0200, Mike Rapoport wrote:
> +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> +{
> +	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> +	struct gen_pool *pool = ctx->pool;
> +	unsigned long addr;
> +	struct page *page;
> +	int err;
> +
> +	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
> +	if (!page)
> +		return -ENOMEM;

Does cma_alloc() zero the pages it allocates? If not, where do we avoid
leaking kernel memory to userspace?
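
[Editorial note, not part of the original mail: a minimal sketch only. Assuming
cma_alloc() returns the pages without clearing them, one way to avoid exposing
stale kernel data would be to zero every page of the chunk before the range is
published to the gen_pool. secretmem_clear_pages() is a hypothetical helper
name; clear_highpage() is the existing kernel primitive for zeroing one page.]

#include <linux/highmem.h>
#include <linux/mm_types.h>

/* Hypothetical helper: zero each page of a freshly allocated CMA chunk. */
static void secretmem_clear_pages(struct page *page, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++)
		clear_highpage(page + i);
}

Such a helper would slot in right after the cma_alloc() call quoted above,
before the chunk is made available for allocation to userspace.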
