Date: Mon, 30 Nov 2015 23:00:35 -0500
From: Waiman Long <>
Subject: Re: [RFC PATCH 3/3] sched/fair: Use different cachelines for readers and writers of load_avg
On 11/30/2015 05:29 PM, Peter Zijlstra wrote:
> On Mon, Nov 30, 2015 at 02:13:32PM -0500, Waiman Long wrote:
>>> This would only work if the structure itself is allocated with cacheline
>>> alignment, and looking at sched_create_group(), we use a plain kzalloc()
>>> for this, which doesn't guarantee any sort of alignment beyond machine
>>> word size IIRC.
>> With a RHEL 6 derived .config file, the size of the task_group structure was
>> 460 bytes on a 32-bit x86 kernel. Adding a ____cacheline_aligned tag
>> increased the size to 512 bytes. So it did make the structure a multiple of
>> the cacheline size. With both slub and slab, the allocated task group
>> pointers from kzalloc() in sched_create_group() were all multiples of 0x200.
>> So they were properly aligned for the ____cacheline_aligned tag to work.
> Not sure we should rely on sl*b doing the right thing here.
> KMALLOC_MIN_ALIGN is explicitly set to sizeof(long long). If you want
> explicit alignment, one should use KMEM_CACHE().
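For reference, a minimal sketch of the KMEM_CACHE() approach Peter suggests.
The struct, field layout, and cache name below are made up for illustration
and are not the actual sched/fair code:

#include <linux/slab.h>
#include <linux/cache.h>
#include <linux/atomic.h>
#include <linux/init.h>
#include <linux/errno.h>

struct task_group_example {
	atomic_long_t load_avg ____cacheline_aligned; /* writer-heavy field on its own line */
	unsigned long other_fields[32];               /* placeholder for the rest */
} ____cacheline_aligned;

static struct kmem_cache *tg_example_cache;

static int __init tg_example_cache_init(void)
{
	/*
	 * KMEM_CACHE() passes __alignof__(struct task_group_example) to
	 * kmem_cache_create(), so the cacheline alignment requested above
	 * is honoured for every object, independent of sl*b internals.
	 */
	tg_example_cache = KMEM_CACHE(task_group_example, 0);
	return tg_example_cache ? 0 : -ENOMEM;
}

The plain kzalloc() in sched_create_group() would then become
kmem_cache_zalloc(tg_example_cache, GFP_KERNEL).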
I think the current kernel uses power-of-2 kmem caches to satisfy kmalloc() requests, except for sizes of 192 bytes or less, where some non-power-of-2 kmem caches are available. Given that the task_group structure is large enough with FAIR_GROUP_SCHED enabled, we shouldn't hit the case where the allocated buffer is not cacheline aligned.
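The size-class selection is roughly the following (a userspace sketch of my
reasoning; it deliberately ignores the extra 96- and 192-byte caches and any
slab debug options):

#include <stdio.h>

/*
 * Simplified kmalloc size-class selection: round the request up to the
 * next power of two. SLUB normally lays objects of a power-of-2 cache
 * out at object-size strides from a page-aligned slab, so a 512-byte
 * object ends up 512-byte aligned, matching the 0x200-aligned pointers
 * observed above.
 */
static unsigned int kmalloc_bucket(unsigned int size)
{
	unsigned int b = 8;

	while (b < size)
		b <<= 1;
	return b;
}

int main(void)
{
	printf("460 bytes -> %u-byte cache\n", kmalloc_bucket(460));
	printf("512 bytes -> %u-byte cache\n", kmalloc_bucket(512));
	return 0;
}

Both the 460-byte and 512-byte cases land in the 512-byte cache, which is
why the allocated pointers came out cacheline aligned either way.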
Cheers,
Longman