Subject: Re: [PATCH v2 2/3] sched/numa: expose per-task pages-migration-failure
From: 王贇 <>
Date: Mon, 2 Dec 2019 10:22:15 +0800
Hi, Peter,
This has been acked by Mel Gorman. Since it is not closely related to the rest of the series, would you like to pick this one up now?
Regards,
Michael Wang
On 2019/11/27 9:50 AM, 王贇 wrote:
> NUMA balancing will try to migrate pages between nodes, which
> could be caused by memory policy or NUMA group aggregation, but
> the page migration can also fail, e.g. when the target node runs
> out of memory.
>
> Since this is critical to performance, the admin should know how
> serious the problem is and take action before it causes too much
> performance damage. Thus this patch exposes the counter as
> 'migfailed' in '/proc/PID/sched'.
>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Michal Koutný <mkoutny@suse.com>
> Suggested-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
> ---
>  kernel/sched/debug.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
> index f7e4579e746c..73c4809c8f37 100644
> --- a/kernel/sched/debug.c
> +++ b/kernel/sched/debug.c
> @@ -848,6 +848,7 @@ static void sched_show_numa(struct task_struct *p, struct seq_file *m)
>  	P(total_numa_faults);
>  	SEQ_printf(m, "current_node=%d, numa_group_id=%d\n",
>  			task_node(p), task_numa_group_id(p));
> +	SEQ_printf(m, "migfailed=%lu\n", p->numa_faults_locality[2]);
>  	show_numa_stats(p, m);
>  	mpol_put(pol);
>  #endif
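For reference, a minimal userspace sketch of how an admin tool might
consume the new counter. This is not part of the patch; it assumes the
patch is applied and that the kernel exposes the NUMA section of
/proc/PID/sched (i.e. built with NUMA balancing and sched debug
support). The program name and structure are purely illustrative; it
simply scans for the "migfailed=" line that the patched
sched_show_numa() emits.

/* migfailed.c - read the per-task migfailed counter (illustrative) */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	unsigned long migfailed;
	FILE *fp;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/sched", argv[1]);
	fp = fopen(path, "r");
	if (!fp) {
		perror("fopen");
		return 1;
	}

	while (fgets(line, sizeof(line), fp)) {
		/* Matches the line printed by the patched sched_show_numa() */
		if (sscanf(line, "migfailed=%lu", &migfailed) == 1) {
			printf("pid %s: migfailed=%lu\n", argv[1], migfailed);
			fclose(fp);
			return 0;
		}
	}

	fclose(fp);
	fprintf(stderr, "migfailed not found (patch not applied?)\n");
	return 1;
}

A tool like this could poll the value periodically; a steadily growing
migfailed for a task suggests the target nodes are short on memory, so
NUMA balancing keeps failing to move the task's pages.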