Subject: Re: [PATCH -mm 0/3] fix numa vs kvm scalability issue
On Tue, Feb 18, 2014 at 05:12:43PM -0500, riel@redhat.com wrote:
> The NUMA scanning code can end up iterating over many gigabytes
> of unpopulated memory, especially in the case of a freshly started
> KVM guest with lots of memory.
>
> This results in the mmu notifier code being called even when
> there are no mapped pages in a virtual address range. The amount
> of time wasted can be enough to trigger soft lockup warnings
> with very large (>2TB) KVM guests.
>
> This patch moves the mmu notifier call to the pmd level, which
> covers 1GB of address space on x86-64. Furthermore, the mmu
> notifier code is only called from the address in the PMD where
> present mappings are first encountered.
>
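For reference, the shape of that change in change_pmd_range() looks
roughly like this (a simplified sketch of the approach described above;
locking and the actual protection-change work are elided):

	static unsigned long change_pmd_range(struct vm_area_struct *vma,
			pud_t *pud, unsigned long addr, unsigned long end,
			pgprot_t newprot, int dirty_accountable, int prot_numa)
	{
		struct mm_struct *mm = vma->vm_mm;
		unsigned long pages = 0;
		unsigned long next;
		unsigned long mni_start = 0;	/* 0: notifier not called yet */
		pmd_t *pmd = pmd_offset(pud, addr);

		do {
			next = pmd_addr_end(addr, end);

			/* Unpopulated pmd: skip without invoking the notifier. */
			if (!pmd_trans_huge(*pmd) && pmd_none_or_clear_bad(pmd))
				continue;

			/*
			 * First present mapping in this range: start the
			 * invalidate here instead of at the beginning of a
			 * possibly empty 1GB region.
			 */
			if (!mni_start) {
				mni_start = addr;
				mmu_notifier_invalidate_range_start(mm, mni_start, end);
			}

			/* ... change protections on this pmd ... */
		} while (pmd++, addr = next, addr != end);

		/* Only end the invalidate if one was actually started. */
		if (mni_start)
			mmu_notifier_invalidate_range_end(mm, mni_start, end);

		return pages;
	}
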
> The hugetlbfs code is left alone for now; hugetlb mappings are
> not relocatable, so the NUMA code skips them and they should
> never trigger this problem to begin with.
>
> The series also adds a cond_resched to task_numa_work, to
> fix another potential latency issue.
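
The cond_resched() would land in the per-vma chunk loop of
task_numa_work(), something like this abridged sketch (surrounding
setup and error handling are trimmed):

	do {
		start = max(start, vma->vm_start);
		end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
		end = min(end, vma->vm_end);
		nr_pte_updates += change_prot_numa(vma, start, end);

		if (nr_pte_updates)
			pages -= (end - start) >> PAGE_SHIFT;

		start = end;
		if (pages <= 0)
			goto out;

		/*
		 * Changing protections on a large vma can take a while;
		 * give the scheduler a chance to run other tasks between
		 * chunks instead of tripping the soft lockup detector.
		 */
		cond_resched();
	} while (end != vma->vm_end);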

Andrew, I'll pick up the first kernel/sched/ patch; do you want the
other two mm/ patches?

