Subject: Re: [PATCH] mm: cache largest vma
From: Davidlohr Bueso <>
Date: Fri, 01 Nov 2013 14:11:31 -0700
On Fri, 2013-11-01 at 16:38 -0400, KOSAKI Motohiro wrote:
> (11/1/13 4:17 PM), Davidlohr Bueso wrote:
> > While caching the last used vma already does a nice job avoiding
> > having to iterate the rbtree in find_vma, we can improve. After
> > studying the hit rate on a load of workloads and environments,
> > it was seen that it was around 45-50% - constant for a standard
> > desktop system (gnome3 + evolution + firefox + a few xterms),
> > and multiple java related workloads (including Hadoop/terasort),
> > and aim7, which indicates it's better than the 35% value documented
> > in the code.
> >
> > By also caching the largest vma, that is, the one that contains
> > most addresses, there is a steady 10-15% hit rate gain, putting
> > it above the 60% region. This improvement comes at a very low
> > overhead for a miss. Furthermore, systems with !CONFIG_MMU keep
> > the current logic.
>
> I'm slightly surprised this cache makes 15% hit. Which application
> get a benefit? You listed a lot of applications, but I'm not sure
> which is highly depending on largest vma.
Well, I chose the largest vma because it contains the most addresses, so it gives us the greatest chance that the faulted address we look up is already covered by a cached vma.
The 15% improvement was with Hadoop. According to my notes, the hit rate was ~48% with the baseline kernel and increased to ~63% with this patch.
In any case, I didn't measure the rates at per-task granularity, but at a general system level. When a system is first booted, I can see that the mmap_cache access rate becomes the determining factor, and adding a workload doesn't change it much. One exception to this was a kernel build, where we go from a ~50% to a ~89% hit rate on a vanilla kernel.
Thanks,
Davidlohr
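[Editor's note: for readers less familiar with find_vma(), below is a minimal, self-contained userspace sketch of the lookup order being discussed: check the last-used vma cache first, then the cached largest vma, and only then fall back to the full search (the rbtree walk in the real kernel). The struct layouts, the list-based slow_find_vma() fallback and the main() driver are simplified stand-ins for illustration, not the actual patch.]

/*
 * Simplified stand-ins for mm_struct / vm_area_struct; the real code
 * keeps vmas in an rbtree, but a sorted singly linked list is enough
 * to show the cache-first lookup order.
 */
#include <stdio.h>
#include <stddef.h>

struct vma {
        unsigned long vm_start;   /* first address of the mapping       */
        unsigned long vm_end;     /* one past the last address          */
        struct vma *next;         /* next mapping, sorted by address    */
};

struct mm {
        struct vma *mmap;         /* head of the sorted vma list        */
        struct vma *mmap_cache;   /* last vma that satisfied a lookup   */
        struct vma *largest_vma;  /* vma spanning the most addresses    */
};

/* Fallback path: linear walk standing in for the kernel's rbtree search. */
static struct vma *slow_find_vma(struct mm *mm, unsigned long addr)
{
        struct vma *v;

        for (v = mm->mmap; v; v = v->next)
                if (addr < v->vm_end)
                        return v;
        return NULL;
}

/* Return the first vma with vm_end > addr, trying both caches first. */
static struct vma *find_vma(struct mm *mm, unsigned long addr)
{
        struct vma *v = mm->mmap_cache;

        /* Hit on the last-used vma? */
        if (v && v->vm_start <= addr && addr < v->vm_end)
                return v;

        /* Miss: the largest vma covers the most addresses, try it next. */
        v = mm->largest_vma;
        if (v && v->vm_start <= addr && addr < v->vm_end) {
                mm->mmap_cache = v;
                return v;
        }

        /* Both caches missed: do the full (rbtree) search and recache. */
        v = slow_find_vma(mm, addr);
        if (v)
                mm->mmap_cache = v;
        return v;
}

int main(void)
{
        struct vma small = { 0x1000, 0x2000, NULL };
        struct vma large = { 0x4000, 0x9000, NULL };
        struct mm mm = { &small, NULL, &large };

        small.next = &large;

        /* The first lookup misses both caches; the second hits largest_vma. */
        printf("0x1500 -> vma [%lx,%lx)\n",
               find_vma(&mm, 0x1500)->vm_start, find_vma(&mm, 0x1500)->vm_end);
        printf("0x5000 -> vma [%lx,%lx)\n",
               find_vma(&mm, 0x5000)->vm_start, find_vma(&mm, 0x5000)->vm_end);
        return 0;
}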