 
From: Uladzislau Rezki
Date: Sun, 26 Dec 2021
Subject: Re: [PATCH] mm/util.c: Make kvfree() safe for calling while holding spinlocks
On Sat, Dec 25, 2021 at 10:58:29PM +0000, Matthew Wilcox wrote:
> On Sat, Dec 25, 2021 at 07:54:12PM +0100, Uladzislau Rezki wrote:
> > +static void drain_vmap_area(struct work_struct *work)
> > +{
> > + if (mutex_trylock(&vmap_purge_lock)) {
> > + __purge_vmap_area_lazy(ULONG_MAX, 0);
> > + mutex_unlock(&vmap_purge_lock);
> > + }
> > +}
> > +
> > +static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);
>
> Presumably if the worker fails to get the mutex, it should reschedule
> itself? And should it even trylock or just always lock?
>
mutex_trylock() makes no sense here. The worker should just always take
the lock; otherwise we can miss the point at which to purge. I agree
with your opinion.
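Roughly like this (an untested sketch; everything else stays as in the
posted patch):

static void drain_vmap_area(struct work_struct *work)
{
	/*
	 * Take the lock unconditionally, so the drain cannot be
	 * skipped while a concurrent purge holds vmap_purge_lock.
	 */
	mutex_lock(&vmap_purge_lock);
	__purge_vmap_area_lazy(ULONG_MAX, 0);
	mutex_unlock(&vmap_purge_lock);
}

static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);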

>
> This kind of ties into something I've been wondering about -- we have
> a number of places in the kernel which cache 'freed' vmalloc allocations
> in order to speed up future allocations of the same size. Kind of like
> slab. Would we be better off trying to cache frequent allocations
> inside vmalloc instead of always purging them?
>
Hm... some sort of caching would be good, though it will require some
time to think through all the details and the design itself. We could
cache VAs instead of purging them, up to some point or threshold: keep
them in our data structures, associate each with a cache based on its
size, and reuse them later in alloc_vmap_area().

All of that concerns "vmap_area" caching. Another option is to cache
the "vm_struct", which includes the "vmap_area" plus the pages that
back the mapping. That is a higher level of caching, and I am not sure
an implementation would be as straightforward.
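Just to make the "cache by size" idea concrete, here is a rough,
hypothetical sketch. Every identifier apart from "struct vmap_area"
and its fields is invented for illustration; alloc_vmap_area() would
consult such a cache before doing its normal search:

#define NR_VA_CACHES	16	/* one free-list per size order */

struct va_size_cache {
	spinlock_t lock;
	struct list_head head;	/* cached, not-yet-purged VAs */
};

static struct va_size_cache va_caches[NR_VA_CACHES];

static struct vmap_area *va_cache_get(unsigned long size)
{
	unsigned int idx = min_t(unsigned int, ilog2(size), NR_VA_CACHES - 1);
	struct va_size_cache *c = &va_caches[idx];
	struct vmap_area *va = NULL;

	spin_lock(&c->lock);
	if (!list_empty(&c->head)) {
		va = list_first_entry(&c->head, struct vmap_area, list);
		if (va->va_end - va->va_start >= size)
			list_del_init(&va->list);
		else
			va = NULL;	/* bucket hit, but area too small */
	}
	spin_unlock(&c->lock);

	return va;	/* NULL: fall back to the regular search */
}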

--
Vlad Rezki
