Date: Thu, 27 Mar 2014
From: Tejun Heo
Subject: Re: [PATCH 2/2] mm/percpu.c: don't bother to re-walk the pcpu_slot list if nobody free space since we last drop pcpu_lock
On Thu, Mar 27, 2014 at 07:06:03PM +0800, Jianyu Zhan wrote:
> Presently, after the first walk of the pcpu_slot list fails to find
> a chunk to allocate from, we drop the pcpu_lock spinlock and go
> allocate a new chunk. Then we re-acquire the pcpu_lock and, hoping
> that someone has freed space for us in the meantime (we still hold
> pcpu_alloc_mutex throughout, so only freeing or reclaiming can
> happen), we do a full rewalk of the pcpu_slot list.
>
> However, if nobody freed space, this full rewalk is wasted work, and
> we eventually fall back to the new chunk anyway.
>
> Since we hold pcpu_alloc_mutex, only the freeing or reclaiming paths
> can touch pcpu_slot (touching it only requires holding pcpu_lock),
> so we can maintain a pcpu_slot_stat bitmap recording whether, during
> the window in which we did not hold pcpu_lock, anybody freed space
> into any slot we are interested in. If so, we walk just those slots;
> if not, we allocate from the newly-allocated, fully-free chunk.
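
For illustration, here is a minimal user-space C sketch of the scheme
the description proposes. It is not the actual mm/percpu.c code:
NR_SLOTS, mark_slot_dirty() and find_retry_slot() are hypothetical
stand-ins for a per-slot "freed while we dropped the lock" bitmap, and
the real kernel code would manipulate such a bitmap under pcpu_lock
(e.g. with set_bit()/test_and_clear_bit()).

/*
 * Minimal user-space sketch of the proposed scheme, not the real
 * mm/percpu.c code.  NR_SLOTS, mark_slot_dirty() and find_retry_slot()
 * are hypothetical names.
 */
#include <stdio.h>

#define NR_SLOTS 64                  /* stand-in for pcpu_nr_slots */

/* Bit n set => slot n gained free space since the lock was dropped. */
static unsigned long long slot_dirty;

/* Freeing/reclaiming path: record that @slot gained space. */
static void mark_slot_dirty(int slot)
{
        slot_dirty |= 1ULL << slot;
}

/*
 * Retry path after re-acquiring the lock: visit only slots whose bit
 * is set instead of re-walking all of them.  Returns a candidate slot
 * (and clears its bit), or -1 meaning "nobody freed anything, use the
 * new chunk directly".
 */
static int find_retry_slot(int start)
{
        for (int slot = start; slot < NR_SLOTS; slot++) {
                if (slot_dirty & (1ULL << slot)) {
                        slot_dirty &= ~(1ULL << slot);
                        return slot;
                }
        }
        return -1;
}

int main(void)
{
        mark_slot_dirty(7);          /* a free into slot 7 while unlocked */

        int slot = find_retry_slot(3);
        if (slot >= 0)
                printf("re-scan only slot %d\n", slot);
        else
                printf("no slot dirtied: fall back to the new chunk\n");
        return 0;
}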

The patch probably needs to be refreshed on top of percpu/for-3.15.
Hmmm... I'm not sure whether the added complexity is worthwhile. It's
a fairly cold path. Can you show how helpful this optimization is?

Thanks.

--
tejun

