    Subject: Re: [PATCH v4 3/3] mm/free_pcppages_bulk: prefetch buddy while not holding lock
    On 03/01/2018 03:00 PM, Michal Hocko wrote:
    > On Thu 01-03-18 14:28:45, Aaron Lu wrote:
    >> When a page is freed back to the global pool, its buddy is checked
    >> to see whether a merge is possible. This requires accessing the
    >> buddy's page structure, and that access can take a long time if it
    >> is cache cold.
    >>
    >> This patch prefetches the to-be-freed page's buddy outside of
    >> zone->lock, in the hope that accessing the buddy's page structure
    >> later under zone->lock will be faster. Since we *always* do buddy
    >> merging and check an order-0 page's buddy when it goes into the main
    >> allocator, the cacheline will always be brought in, i.e. the
    >> prefetched data will never go unused.
    >>
    >> In the meantime, there are two concerns:
    >> 1. the prefetch could potentially evict existing cachelines,
    >>    especially from the L1D cache, since it is not huge;
    >> 2. there is some additional instruction overhead, namely calculating
    >>    the buddy pfn twice.
    >>
    >> For 1, it's hard to say: this microbenchmark shows a good result,
    >> but the actual benefit of the patch will be workload/CPU dependent.
    >> For 2, since the calculation is an XOR of two local variables, the
    >> cycles spent are expected, in many cases, to be offset by reduced
    >> memory latency later. This is especially true on NUMA machines,
    >> where multiple CPUs contend on zone->lock and the most time-consuming
    >> part under zone->lock is waiting for the 'struct page' cachelines of
    >> the to-be-freed pages and their buddies.
    >>
    >> Test with will-it-scale/page_fault1 full load:
    >>
    >> kernel        Broadwell(2S)   Skylake(2S)     Broadwell(4S)   Skylake(4S)
    >> v4.16-rc2+     9034215         7971818        13667135        15677465
    >> patch2/3       9536374 +5.6%   8314710 +4.3%  14070408 +3.0%  16675866 +6.4%
    >> this patch    10338868 +8.4%   8544477 +2.8%  14839808 +5.5%  17155464 +2.9%
    >> Note: this patch's improvement percentage is measured against patch2/3.
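
    If I read the patch right, the mechanism amounts to roughly the
    following sketch (not the exact diff; helper names and the exact call
    site may differ):

        #include <linux/mm.h>           /* page_to_pfn() */
        #include <linux/prefetch.h>     /* prefetch() */

        static inline void prefetch_buddy(struct page *page)
        {
                unsigned long pfn = page_to_pfn(page);
                unsigned long buddy_pfn = pfn ^ (1 << 0); /* order-0 buddy */
                struct page *buddy = page + (buddy_pfn - pfn);

                /* warm the buddy's struct page before zone->lock is taken */
                prefetch(buddy);
        }

    with the call made per page in free_pcppages_bulk(), outside the
    locked section.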
    >
    > I am really surprised that this has such a big impact.

    It's even stranger to me. Struct page is 64 bytes these days, exactly
    a cache line. Unless that has changed, Intel CPUs prefetch the "buddy"
    cache line (the one that forms an aligned 128-byte block with the line
    being touched), which is exactly the order-0 buddy's struct page!
    Maybe that implicit prefetching stops at L2 while an explicit prefetch
    goes all the way to L1; I can't remember. Would that make such a
    difference? It would be nice to do some perf tests with cache counters
    to see what is really going on...
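
    A quick back-of-the-envelope check of that pairing (hypothetical pfn
    value, assuming struct page is exactly 64 bytes in a flat vmemmap
    array):

        unsigned long pfn = 0x12346;              /* any pfn; even here */
        unsigned long buddy_pfn = pfn ^ 1;        /* its order-0 buddy */
        unsigned long off = pfn * 64;             /* offset into vmemmap */
        unsigned long buddy_off = buddy_pfn * 64;
        /*
         * pfn and pfn ^ 1 differ only in bit 0, so off / 128 always
         * equals buddy_off / 128: the two struct pages share one aligned
         * 128-byte block, the unit the adjacent-line prefetcher pulls in.
         */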

    Vlastimil

    > Is this a win on
    > other architectures as well?
    >
    >> [changelog stolen from Dave Hansen's and Mel Gorman's comments]
    >> https://lkml.org/lkml/2018/1/24/551
    >
    > Please use http://lkml.kernel.org/r/<msg-id> for references because
    > lkml.org is quite unstable. It would be
    > http://lkml.kernel.org/r/148a42d8-8306-2f2f-7f7c-86bc118f8ccd@intel.com
    > here.
    >
