Subject: Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg
On Thu, Dec 12, 2013 at 05:46:02PM +0000, Christoph Lameter wrote:
> On Wed, 11 Dec 2013, Dave Hansen wrote:
> >
> > The write-argument to cmpxchg_double() must be 16-byte aligned.
> > We used to align 'struct page' itself in order to guarantee this,
> > but that wastes 8 bytes per page. Instead, we take 8 bytes
> > internal to the page before page->counters and move freelist
> > between there and the existing 8 bytes after counters. That way,
> > no matter how 'struct page' itself is aligned, we can ensure that
> > we have a 16-byte area with which to do this cmpxchg.
> Well this adds additional branching to the fast paths.

The branch should be predictable. And compare the cost of a branch
(near nothing on a modern out-of-order CPU with low-IPC code like
this, when predicted) to the cost of a cache miss (due to a larger
struct page).
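
To make the trade-off concrete, here is a minimal user-space sketch
of the trick (hypothetical names, not the actual struct page layout);
the alignment test in freelist_slot() is the branch being discussed:

#include <stdint.h>

/*
 * Sketch only, not the real layout: an 8-byte slot on each side
 * of 'counters'. The freelist pointer lives in whichever slot
 * forms a 16-byte aligned pair with 'counters', so the
 * cmpxchg_double() operand is always aligned no matter how the
 * struct itself is aligned.
 */
struct fake_page {
	unsigned long slot_before;	/* 8 bytes before counters */
	unsigned long counters;
	unsigned long slot_after;	/* 8 bytes after counters */
};

/*
 * The branch in question: the members are 8-byte aligned, so
 * &counters is either 0 or 8 mod 16 and exactly one pairing is
 * 16-byte aligned. If &counters is aligned, the pair is
 * (counters, slot_after); otherwise slot_before starts it.
 */
void **freelist_slot(struct fake_page *p)
{
	if (((uintptr_t)&p->counters & 15) == 0)
		return (void **)&p->slot_after;
	return (void **)&p->slot_before;
}

Whichever slot is chosen, the 16-byte freelist/counters pair starts
on a 16-byte boundary, so the struct no longer needs to be
over-aligned; the cost is repeating this (well-predicted) test on
every freelist access.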


--
Speaking for myself only
