From: Dan Magenheimer
Subject: RE: [PATCH 4/4] zsmalloc: zsmalloc: align cache line size
> From: Minchan Kim [mailto:minchan@kernel.org]
> Subject: Re: [PATCH 4/4] zsmalloc: zsmalloc: align cache line size
>
> On 05/08/2012 11:00 PM, Dan Magenheimer wrote:
>
> >> From: Minchan Kim [mailto:minchan@kernel.org]
> >>> zcache can potentially create a lot of pools, so the latter will save
> >>> some memory.
> >>
> >>
> >> Dumb question.
> >> Why should we create a pool per user?
> >> What's the problem if there is only one pool in the system?
> >
> > zcache doesn't use zsmalloc for cleancache pages today, but
> > that's Seth's plan for the future. Then, if there is a
> > separate pool for each cleancache pool, when a filesystem
> > is umount'ed it isn't necessary to walk through and delete
> > all pages one-by-one, which could take quite a while.
> >
> > ramster needs one pool for each client (i.e. machine in the
> > cluster) for frontswap pages for the same reason, and
> > later, for cleancache pages, one per mounted filesystem
> > per client.
>
> Fair enough.
>
> Then, how about slab-like interfaces such as these?
>
> 1. zs_handle zs_malloc(size_t size, gfp_t flags) - share one pool among many subsystems (like kmalloc)
> 2. zs_handle zs_malloc_pool(struct zs_pool *pool, size_t size) - use the caller's own pool (like kmem_cache_alloc)
>
> Any thoughts?

Seems fine to me.
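
For reference, here is a minimal sketch of the slab-style split proposed
above, in the spirit of kmalloc() vs. kmem_cache_alloc(). The default-pool
plumbing and the way zs_malloc() forwards to zs_malloc_pool() are
assumptions for illustration only, not the zsmalloc implementation of the
time:

/*
 * Sketch only -- not the real zsmalloc API.  It assumes one module-wide
 * default pool backing zs_malloc(), the way common slab caches back
 * kmalloc(), while zs_malloc_pool() lets a caller (zcache, ramster)
 * allocate from its own pool so the whole pool can be torn down at once.
 */
#include <linux/gfp.h>
#include <linux/types.h>

struct zs_pool;
typedef unsigned long zs_handle;	/* 0 means allocation failure */

static struct zs_pool *zs_default_pool;	/* shared by all zs_malloc() users */

/* Per-pool allocation, analogous to kmem_cache_alloc(). */
zs_handle zs_malloc_pool(struct zs_pool *pool, size_t size)
{
	/* ... existing zsmalloc allocation path, run against 'pool' ... */
	return 0;
}

/* Pool-less allocation, analogous to kmalloc(); 'flags' would be passed
 * down to the page allocator inside the pool (elided here). */
zs_handle zs_malloc(size_t size, gfp_t flags)
{
	return zs_malloc_pool(zs_default_pool, size);
}

Either way the handle stays opaque to callers, so a pool-less variant
could be layered on later without touching existing per-pool users.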

> But some subsystems may not want their own pool, so as not to waste memory unnecessarily.

Are you using zsmalloc for something else in the kernel? I'm
wondering what other subsystem would have randomly sized allocations
that are always less than a page.

Thanks,
Dan
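
As an illustration of the umount point above: with a pool per mounted
filesystem, teardown becomes a single pool destruction rather than a walk
over every stored page. The hook names below are hypothetical;
zs_create_pool()/zs_destroy_pool() are assumed to take roughly their
2012-era staging-tree signatures:

/* Hypothetical cleancache-side hooks; only the zs_* calls are real. */
#include <linux/gfp.h>

/* At mount / cleancache init: give this filesystem its own pool. */
static struct zs_pool *zcache_fs_init_pool(void)
{
	return zs_create_pool("zcache-cleancache", GFP_KERNEL);
}

/* At umount: every object stored for this filesystem goes away in one
 * call, rather than being found and freed one-by-one. */
static void zcache_fs_destroy_pool(struct zs_pool *pool)
{
	zs_destroy_pool(pool);
}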

