Subject: Re: regression caused by cgroups optimization in 3.17-rc2
On 09/04/2014 01:27 PM, Dave Hansen wrote:
> On 09/04/2014 07:27 AM, Michal Hocko wrote:
>> Ouch. free_pages_and_swap_cache completely kills the uncharge batching
>> because it reduces it to PAGEVEC_SIZE batches.
>>
>> I think we really do not need the PAGEVEC_SIZE batching anymore. We are
>> already batching at the tlb_gather layer, and that one is bounded, so I
>> think the change below should be safe, but I have to think about it some
>> more. There is a risk of prolonged lru_lock wait times, but the number of
>> pages is limited to 10k and the heavy work is done outside the lock. If
>> this really turns out to be a problem, we can tear the LRU part and the
>> actual freeing/uncharging apart into separate functions in this path.
>>
>> Could you test with this half-baked patch, please? Unfortunately I didn't
>> get a chance to test it myself.
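
For reference, the idea amounts to dropping the PAGEVEC_SIZE loop from
free_pages_and_swap_cache() so the whole tlb_gather batch reaches
release_pages() in a single call. A minimal sketch, assuming the 3.17-era
mm/swap_state.c helpers (free_swap_cache() and the three-argument
release_pages()); the actual patch may differ in detail:

	/*
	 * Sketch only: walk the whole batch at once instead of chopping
	 * it into PAGEVEC_SIZE pieces, so the uncharge batching set up
	 * by the caller survives all the way into release_pages().
	 */
	void free_pages_and_swap_cache(struct page **pages, int nr)
	{
		struct page **pagep = pages;
		int i;

		lru_add_drain();
		for (i = 0; i < nr; i++)
			free_swap_cache(pagep[i]);
		release_pages(pagep, nr, false);
	}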
>
> 3.16 settled out at about 11.5M faults/sec before the regression. This
> patch gets it back up to about 10.5M, which is good. The top spinlock
> contention in the kernel is still from the resource counter code via
> mem_cgroup_commit_charge(), though.
>
> I'm running Johannes' patch now.

This looks pretty good. The region where it plateaus (above 80 threads,
where hyperthreading kicks in) might be a bit slower than it was in
3.16, but that could easily be due to other things.

> https://www.sr71.net/~dave/intel/bb.html?1=3.16.0-rc4-g67b9d76/&2=3.17.0-rc3-g57b252f

Feel free to add my Tested-by:


