Subject: Re: [PATCH v2 0/3] vmalloc enhancements
On Tue, Feb 12, 2019 at 12:34:09PM -0800, Andrew Morton wrote:
> On Tue, 12 Feb 2019 13:47:24 -0500 Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> > On Tue, Feb 12, 2019 at 09:56:45AM -0800, Roman Gushchin wrote:
> > > The patchset contains a few changes to the vmalloc code, which lead
> > > to some performance gains and code simplification.
> > >
> > > Also, it exports a number of pages, used by vmalloc(),
> > > in /proc/meminfo.
> > >
> > > Patch (1) removes some redundancy in __vunmap().
> > > Patch (2) separates memory allocation and data initialization
> > > in alloc_vmap_area().
> > > Patch (3) adds a vmalloc counter to /proc/meminfo.
> > >
> > > v1->v2:
> > > - rebased on top of current mm tree
> > > - switch from atomic to percpu vmalloc page counter
> >
> > I don't understand what prompted this change to percpu counters.

I *think* I see some performance difference, but it's barely measurable
in my setup. Also, as I remember, Matthew was asking why not use percpu here.
So if everybody prefers a global atomic, I'm fine with either approach.
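To make the tradeoff concrete, here is a rough sketch of the two variants
under discussion (illustrative only, not the actual patch; the symbol names
nr_vmalloc_pages / pcpu_nr_vmalloc_pages are made up, the kernel primitives
are real):

#include <linux/atomic.h>
#include <linux/percpu.h>

/* Variant 1: single global atomic, as in v1 of the series. */
static atomic_long_t nr_vmalloc_pages;

static void vmalloc_pages_add(unsigned int nr_pages)
{
	atomic_long_add(nr_pages, &nr_vmalloc_pages);
}

unsigned long vmalloc_nr_pages(void)
{
	return atomic_long_read(&nr_vmalloc_pages);
}

/* Variant 2: raw percpu counter, as in v2 of the series. */
static DEFINE_PER_CPU(unsigned long, pcpu_nr_vmalloc_pages);

static void vmalloc_pages_add_pcpu(unsigned int nr_pages)
{
	this_cpu_add(pcpu_nr_vmalloc_pages, nr_pages);
}

/* Read side for /proc/meminfo: has to walk every possible CPU. */
unsigned long vmalloc_nr_pages_pcpu(void)
{
	unsigned long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu(pcpu_nr_vmalloc_pages, cpu);

	return sum;
}

The write side of variant 2 avoids a shared atomic, but the read side is
exactly the for_each_possible_cpu() loop being objected to below.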

> >
> > All writers already write to vmap_area_lock and vmap_area_list, so it's
> > not really saving much. The for_each_possible_cpu() loop for /proc/meminfo,
> > on the other hand, is troublesome.
>
> percpu_counters would fit here. They have probably-unneeded locking
> but I expect that will be acceptable.
>
> And they address the issues with for_each_possible_cpu() avoidance, CPU
> hotplug and transient negative values.

Not sure, because percpu_counters are based on dynamic percpu allocations,
which use vmalloc under the hood.
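For reference, this is roughly what the percpu_counter approach would look
like, and where the dependency comes in (again just a sketch with made-up
names; percpu_counter_init() allocates its per-cpu storage through the
dynamic percpu allocator, whose chunks are mapped with vmalloc):

#include <linux/gfp.h>
#include <linux/percpu_counter.h>

static struct percpu_counter vmalloc_pages;

static int __init vmalloc_counter_init(void)
{
	/*
	 * Goes through the dynamic percpu allocator and therefore
	 * vmalloc -- the circular dependency noted above.
	 */
	return percpu_counter_init(&vmalloc_pages, 0, GFP_KERNEL);
}

static void vmalloc_pages_add(unsigned int nr_pages)
{
	/* Batches per cpu; folds into the shared count under fbc->lock. */
	percpu_counter_add(&vmalloc_pages, nr_pages);
}

unsigned long vmalloc_nr_pages(void)
{
	/*
	 * Cheap read of the shared count, clamped at zero, so
	 * /proc/meminfo never sees a transient negative value.
	 */
	return percpu_counter_read_positive(&vmalloc_pages);
}

It does solve the for_each_possible_cpu(), hotplug and negative-value
problems on the read side, but only once the allocator it is supposed
to account for is already up.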

Thanks!
