Subject: Re: [PATCH v2 00/19] Fixes for sched/numa_balancing
On Wed, Jun 20, 2018 at 10:32:41PM +0530, Srikar Dronamraju wrote:
> Srikar Dronamraju (19):
> sched/numa: Remove redundant field.
> sched/numa: Evaluate move once per node
> sched/numa: Simplify load_too_imbalanced
> sched/numa: Set preferred_node based on best_cpu
> sched/numa: Use task faults only if numa_group is not yet setup
> sched/debug: Reverse the order of printing faults
> sched/numa: Skip nodes that are at hoplimit
> sched/numa: Remove unused task_capacity from numa_stats
> sched/numa: Modify migrate_swap to accept additional params
> sched/numa: Restrict migrating in parallel to the same node.
> sched/numa: Remove numa_has_capacity
> sched/numa: Use group_weights to identify if migration degrades locality
> sched/numa: Move task_placement closer to numa_migrate_preferred

I took the above, but left the below for next time.

> sched/numa: Stop multiple tasks from moving to the cpu at the same time
> mm/migrate: Use xchg instead of spinlock
> sched/numa: Updation of scan period need not be in lock
> sched/numa: Detect if node actively handling migration
> sched/numa: Pass destination cpu as a parameter to migrate_task_rq
> sched/numa: Reset scan rate whenever task moves across nodes
