Date: Wed, 19 Feb 2014 09:59:17 +0100
From: Peter Zijlstra <>
Subject: Re: [PATCH -mm 0/3] fix numa vs kvm scalability issue
On Tue, Feb 18, 2014 at 05:12:43PM -0500, riel@redhat.com wrote:
> The NUMA scanning code can end up iterating over many gigabytes
> of unpopulated memory, especially in the case of a freshly started
> KVM guest with lots of memory.
>
> This results in the mmu notifier code being called even when
> there are no mapped pages in a virtual address range. The amount
> of time wasted can be enough to trigger soft lockup warnings
> with very large (>2TB) KVM guests.
>
> This patch moves the mmu notifier call to the pmd level, which
> represents 1GB areas of memory on x86-64. Furthermore, the mmu
> notifier code is only called from the address in the PMD where
> present mappings are first encountered.
>
> The hugetlbfs code is left alone for now; hugetlb mappings are
> not relocatable, and as such are left alone by the NUMA code,
> and should never trigger this problem to begin with.
>
> The series also adds a cond_resched to task_numa_work, to
> fix another potential latency issue.
Andrew, I'll pick up the first kernel/sched/ patch; do you want the other two mm/ patches?
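A minimal sketch of the idea in the cover letter above: defer the
mmu_notifier_invalidate_range_start() call until the first present
mapping in the range is found, so a fully unpopulated range never
triggers the notifier at all. mmu_notifier_invalidate_range_{start,end}()
are the real kernel names, but everything else here (the stub mm_struct,
pmd_maps_something(), change_pmd_range_sketch()) is a simplified
stand-in for illustration, not the actual mm/ code from the series.

	#include <stdbool.h>

	/* Stand-in for the kernel's struct mm_struct. */
	struct mm_struct { int dummy; };

	#define PMD_SIZE  (2UL << 20)	/* 2MB per pmd entry on x86-64 */
	#define PMD_MASK  (~(PMD_SIZE - 1))

	/* Stubs for the real mmu notifier calls. */
	static void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
			unsigned long start, unsigned long end)
	{ (void)mm; (void)start; (void)end; }

	static void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
			unsigned long start, unsigned long end)
	{ (void)mm; (void)start; (void)end; }

	/* Hypothetical helper: does this pmd entry map any pages?
	 * The real code walks the page tables instead. */
	static bool pmd_maps_something(struct mm_struct *mm, unsigned long addr)
	{ (void)mm; (void)addr; return false; }

	static void change_pmd_range_sketch(struct mm_struct *mm,
					    unsigned long addr, unsigned long end)
	{
		unsigned long mni_start = 0;	/* 0 == notifier not started yet */
		unsigned long next;

		for (; addr < end; addr = next) {
			/* Round up to the next pmd boundary, capped at end. */
			next = (addr + PMD_SIZE) & PMD_MASK;
			if (next > end || next <= addr)
				next = end;

			/* Unpopulated pmd: skip without touching the notifier. */
			if (!pmd_maps_something(mm, addr))
				continue;

			/* First present mapping: start the notifier from here. */
			if (!mni_start) {
				mni_start = addr;
				mmu_notifier_invalidate_range_start(mm, mni_start, end);
			}

			/* ... change protections on this pmd range here ... */
		}

		/* Only close the notifier if we ever opened it. */
		if (mni_start)
			mmu_notifier_invalidate_range_end(mm, mni_start, end);
	}

The cover letter's other fix, the cond_resched() in task_numa_work(),
is orthogonal to this: it bounds scheduling latency when the scan
itself runs long, rather than avoiding needless notifier calls.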