From: Mel Gorman <>
Subject: [PATCH 0/3] TLB flush multiple pages per IPI v5
Date: Mon, 8 Jun 2015 13:50:51 +0100
Changelog since V4
o Rebase to 4.1-rc6

Changelog since V3
o Drop batching of TLB flush from migration
o Redo how larger batching is managed
o Batch TLB flushes when writable entries exist
When unmapping a page it is necessary to flush the TLB. If that page was accessed by another CPU then an IPI is used to flush the remote CPU's TLB. That is a lot of IPIs if kswapd is scanning and unmapping >100K pages per second.
There is already a window between when a page is unmapped and when it is TLB flushed. This series simply widens that window so that multiple pages can be flushed using a single IPI. This *should* be safe, or the kernel is hosed already, but I've cc'd the x86 maintainers and some of the Intel folk for comment.
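To make that concrete, here is a minimal sketch of the idea rather than the patches themselves; defer_tlb_flush() is a hypothetical stand-in for the batching, while ptep_get_and_clear(), pte_pfn() and try_to_unmap_flush() are the real interfaces involved:

/*
 * Sketch only: instead of ptep_clear_flush() issuing a per-page flush
 * (one IPI round per page), the PTE clear happens immediately and the
 * flush is deferred. defer_tlb_flush() is hypothetical; the window it
 * widens already exists between the clear and the flush.
 */
pteval = ptep_get_and_clear(mm, address, ptep);	/* no TLB flush yet */
defer_tlb_flush(mm, pte_pfn(pteval));		/* remember the page */

/* ... unmap more pages under the same, now wider, window ... */

try_to_unmap_flush();		/* one IPI covers the whole batch */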
Patch 1 simply made the rest of the series easier to write as ftrace could identify all the senders of TLB flush IPIs.
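For illustration, the instrumentation amounts to roughly the following (a sketch, not the patch itself): the x86 remote-flush path is tagged with the existing tlb_flush tracepoint, with TLB_REMOTE_SEND_IPI as the new reason:

/*
 * Sketch: in the x86 path that sends flush IPIs to remote CPUs,
 * record the event and how many pages it covers so ftrace can
 * attribute every sender.
 */
if (end == TLB_FLUSH_ALL)
	trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
else
	trace_tlb_flush(TLB_REMOTE_SEND_IPI,
			(end - start) >> PAGE_SHIFT);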
Patch 2 collects a list of PFNs and sends one IPI to flush them all (sketched below).
Patch 3 tracks when there potentially are writable TLB entries that need to be batched differently.
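A minimal sketch of how patches 2 and 3 fit together; the structure layout and the names BATCH_TLBFLUSH_SIZE, queue_tlb_flush() and flush_tlb_batch() are illustrative assumptions, not the patches' exact interface:

/*
 * Sketch only -- illustrative names. Each unmapped page records its
 * PFN and the CPUs that may hold a stale TLB entry; a single IPI
 * later flushes the accumulated batch.
 */
#define BATCH_TLBFLUSH_SIZE 32		/* assumed batch size */

struct tlbflush_unmap_batch {
	struct cpumask cpumask;		/* CPUs possibly caching stale entries */
	unsigned long nr_pages;
	unsigned long pfns[BATCH_TLBFLUSH_SIZE];
	bool writable;			/* patch 3: batch has writable entries */
};

static void queue_tlb_flush(struct tlbflush_unmap_batch *batch,
			    struct mm_struct *mm, unsigned long pfn,
			    bool pte_was_writable)
{
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
	batch->pfns[batch->nr_pages++] = pfn;
	/* Writable entries must be flushed before the page is freed
	 * or written back, hence the separate tracking (patch 3). */
	batch->writable |= pte_was_writable;
	if (batch->nr_pages == BATCH_TLBFLUSH_SIZE)
		flush_tlb_batch(batch);	/* hypothetical: one IPI, then reset */
}

The point is amortisation: the cpumask accumulates every CPU that might hold a stale entry, so one flush covers all the pages in the batch instead of one IPI round per page.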
The performance impact is documented in the changelogs, but in the optimistic case on a 4-socket machine the full series reduces interrupts from 900K/second to 60K/second.
 arch/x86/Kconfig                |   1 +
 arch/x86/include/asm/tlbflush.h |   2 +
 arch/x86/mm/tlb.c               |   1 +
 include/linux/init_task.h       |   8 +++
 include/linux/mm_types.h        |   1 +
 include/linux/rmap.h            |   3 +
 include/linux/sched.h           |  15 +++++
 include/trace/events/tlb.h      |   3 +-
 init/Kconfig                    |   8 +++
 kernel/fork.c                   |   5 ++
 kernel/sched/core.c             |   3 +
 mm/internal.h                   |  15 +++++
 mm/rmap.c                       | 119 +++++++++++++++++++++++++++++++++++++++-
 mm/vmscan.c                     |  30 +++++++++-
 14 files changed, 210 insertions(+), 4 deletions(-)
--
2.3.5