Date: Thu, 12 Dec 2013 15:40:38 -0800
From: Andi Kleen <>
Subject: Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg
On Thu, Dec 12, 2013 at 05:46:02PM +0000, Christoph Lameter wrote:
> On Wed, 11 Dec 2013, Dave Hansen wrote:
> >
> > The write-argument to cmpxchg_double() must be 16-byte aligned.
> > We used to align 'struct page' itself in order to guarantee this,
> > but that wastes 8 bytes per page. Instead, we take 8 bytes
> > internal to the page before page->counters and move freelist
> > between there and the existing 8 bytes after counters. That way,
> > no matter how 'struct page' itself is aligned, we can ensure that
> > we have a 16-byte area with which to do this cmpxchg.
>
> Well this adds additional branching to the fast paths.
The branch should be predictable. Compare the cost of a branch (near nothing on a modern OOO CPU with low-IPC code like this, when predicted) to the cost of a cache miss (due to a larger struct page).
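
To make the trade-off concrete, here is a rough userspace sketch (not the actual patch; the struct layout and names are made up) of the kind of check involved: depending on where the structure lands, either the slot before counters or counters itself sits on a 16-byte boundary, and freelist goes in whichever slot completes the aligned pair.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout, roughly mirroring the idea in the patch: an
 * 8-byte slot before counters and one after it.  Exactly one of the
 * pairs {lo, counters} / {counters, hi} starts on a 16-byte boundary,
 * whatever the (8-byte) alignment of the struct itself. */
struct fake_page {
	unsigned long flags;
	void *freelist_lo;	/* freelist lives here when this slot is 16-byte aligned */
	unsigned long counters;
	void *freelist_hi;	/* ... otherwise it lives here */
};

/* The branch in question: pick the 16-byte-aligned pair. */
static void *aligned_pair(struct fake_page *page)
{
	if (((uintptr_t)&page->freelist_lo & 15) == 0)
		return &page->freelist_lo;	/* pair = {freelist_lo, counters} */
	return &page->counters;			/* pair = {counters, freelist_hi} */
}

int main(void)
{
	/* Try both 8-byte alignments of the struct to show that a
	 * 16-byte-aligned pair exists either way. */
	static char buf[2 * sizeof(struct fake_page) + 8] __attribute__((aligned(16)));
	struct fake_page *p0 = (struct fake_page *)buf;		/* 16-byte aligned */
	struct fake_page *p1 = (struct fake_page *)(buf + 8);	/* 8 mod 16 */

	printf("p0 at %p, pair at %p\n", (void *)p0, aligned_pair(p0));
	printf("p1 at %p, pair at %p\n", (void *)p1, aligned_pair(p1));
	return 0;
}

Whether that extra compare is worth it versus keeping 'struct page' padded out is exactly the branch-vs-cache-miss question above.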
-Andi
-- ak@linux.intel.com -- Speaking for myself only