Date: Thu, 12 Dec 2013 17:46:02 +0000
From: Christoph Lameter <>
Subject: Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg
On Wed, 11 Dec 2013, Dave Hansen wrote:
> The write-argument to cmpxchg_double() must be 16-byte aligned.
> We used to align 'struct page' itself in order to guarantee this,
> but that wastes 8 bytes per page.  Instead, we take 8 bytes
> internal to the page before page->counters and move freelist
> between there and the existing 8 bytes after counters.  That way,
> no matter how 'struct page' itself is aligned, we can ensure that
> we have a 16-byte area with which to do this cmpxchg.
Well, this adds additional branching to the fast paths.
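[Editor's note: the following is a minimal user-space sketch, not the actual patch. It only illustrates why, given two 8-byte slots surrounding counters, exactly one of the two adjacent pairs is naturally 16-byte aligned, and where the extra branch in the fast path comes from. The struct and field names (fake_page, slot_before, slot_after) are made up for illustration.]

/*
 * Sketch: a page-like structure with an 8-byte slot on each side of
 * 'counters'.  Whichever of {slot_before, counters} or
 * {counters, slot_after} starts on a 16-byte boundary is the pair a
 * cmpxchg_double()-style operation could target.  Selecting the pair
 * at runtime is the branch referred to above.
 */
#include <stdio.h>
#include <stdint.h>

struct fake_page {
	unsigned long slot_before;	/* 8 bytes just before counters */
	unsigned long counters;		/* packed object counts */
	unsigned long slot_after;	/* 8 bytes just after counters */
};

/* Return the start of the 16-byte-aligned pair that contains 'counters'. */
static unsigned long *aligned_pair(struct fake_page *p)
{
	/*
	 * If 'counters' itself is 16-byte aligned, use {counters, slot_after};
	 * otherwise 'slot_before' must be 16-byte aligned (the struct is
	 * 8-byte aligned), so use {slot_before, counters}.
	 */
	if (((uintptr_t)&p->counters % 16) == 0)
		return &p->counters;
	return &p->slot_before;
}

int main(void)
{
	/* Two 24-byte structs in an array: their alignment mod 16 alternates. */
	static struct fake_page pages[2];

	for (int i = 0; i < 2; i++) {
		unsigned long *pair = aligned_pair(&pages[i]);
		printf("page %d: aligned pair starts at %p (offset mod 16 = %lu)\n",
		       i, (void *)pair, (unsigned long)((uintptr_t)pair % 16));
	}
	return 0;
}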