Subject: Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg
On 12/12/2013 09:46 AM, Christoph Lameter wrote:
> On Wed, 11 Dec 2013, Dave Hansen wrote:
>> The write-argument to cmpxchg_double() must be 16-byte aligned.
>> We used to align 'struct page' itself in order to guarantee this,
>> but that wastes 8 bytes per page. Instead, we take the 8 bytes
>> internal to the page before page->counters and move freelist
>> between there and the existing 8 bytes after counters. That way,
>> no matter how 'struct page' itself is aligned, we can ensure that
>> we have a 16-byte area with which to do this cmpxchg.
>
> Well this adds additional branching to the fast paths.

I don't think it *HAS* to inherently. The branching really comes from
swapping the _order_ of the arguments to cmpxchg(), since their order in
memory changes. Essentially, depending on how 'struct page' itself is
aligned, we do one of:

| flags | freelist | counters |          |
| flags |          | counters | freelist |

I did this so I wouldn't have to make a helper for ->counters. But, if
we also move counters around, we can do:

| flags | counters | freelist |          |
| flags |          | counters | freelist |

I believe we can do all of that with plain pointer arithmetic and masks,
so it won't cost any branches.
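
For illustration, here is a minimal user-space sketch of that mask trick
(this is not the kernel patch itself; 'struct my_page' and pair_for() are
made-up stand-ins, with the candidate slots collapsed into an array).
Whichever way the structure lands, rounding up past ->flags to the next
16-byte boundary picks out the 16-byte { counters, freelist } pair without
a branch:

#include <stdint.h>
#include <stdio.h>

struct my_page {
	unsigned long flags;	/* offset 0 */
	unsigned long slot[3];	/* two adjacent slots hold { counters, freelist } */
};

static unsigned long *pair_for(struct my_page *page)
{
	uintptr_t p = (uintptr_t)page + sizeof(page->flags);

	/* Round up to the next 16-byte boundary -- no branches. */
	return (unsigned long *)((p + 15) & ~(uintptr_t)15);
}

int main(void)
{
	/* Force both possible alignments of the structure. */
	_Alignas(16) unsigned char buf[sizeof(struct my_page) + 8];
	struct my_page *a = (struct my_page *)buf;	 /* 16-byte aligned */
	struct my_page *u = (struct my_page *)(buf + 8); /* 8 mod 16 */

	printf("aligned:   pair at offset %td\n",
	       (char *)pair_for(a) - (char *)a);	/* prints 16 */
	printf("unaligned: pair at offset %td\n",
	       (char *)pair_for(u) - (char *)u);	/* prints 8 */
	return 0;
}

The address computation itself has no branch; whether the compiler keeps
the surrounding fast paths branch-free is of course something that would
need checking on real builds.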


