Subject: Re: [PATCH] of: Rework and simplify phandle cache to use a fixed size
On Wed, Dec 11, 2019 at 5:23 PM Rob Herring <robh@kernel.org> wrote:
>
> The phandle cache was added to speed up of_find_node_by_phandle() by
> avoiding walking the whole DT to find a matching phandle. The
> implementation has several shortcomings:
>
> - The cache is designed to work on a linear set of phandle values.
>   This is true for dtc-generated DTs, but not for other cases such as
>   Power.
> - The cache isn't enabled until of_core_init() and a typical system
>   may see hundreds of calls to of_find_node_by_phandle() before that
>   point.
> - The cache is freed and re-allocated when the number of phandles
>   changes.
> - It takes a raw spinlock around a memory allocation which breaks on
>   RT.
>
> Change the implementation to a fixed size and use hash_32() as the
> cache index. This greatly simplifies the implementation. It avoids
> the need for any re-alloc of the cache and taking a reference on nodes
> in the cache. We only have a single source of removing cache entries,
> which is of_detach_node().
>
> Using hash_32() removes any assumption on phandle values, improving
> the hit rate for non-linear phandle values. The effect on linear values
> using hash_32() is about a 10% collision rate. The chances of thrashing
> on colliding values seem to be low.
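
For anyone skimming the commit message, here is a minimal, stand-alone
user-space sketch of the scheme described above. It is illustrative only:
the struct, the 128-entry size and the open-coded multiplier stand in for
struct device_node, the real cache size and hash_32() from linux/hash.h,
and none of the names below are the actual kernel code.

#include <stdint.h>

#define CACHE_BITS 7                       /* 128 entries, as in the numbers below */
#define CACHE_SIZE (1u << CACHE_BITS)
#define GOLDEN_RATIO_32 0x61C88647u        /* multiplier used by hash_32() */

struct node {
        uint32_t phandle;
        /* ... the rest of struct device_node ... */
};

static struct node *cache[CACHE_SIZE];     /* fixed size: no realloc, usable before slab */

static uint32_t cache_hash(uint32_t handle)
{
        /* approximates hash_32(handle, CACHE_BITS): spreads arbitrary
         * phandle values (linear or not) over the 128 slots */
        return (uint32_t)(handle * GOLDEN_RATIO_32) >> (32 - CACHE_BITS);
}

static struct node *lookup(uint32_t handle, struct node *(*slow_path)(uint32_t))
{
        uint32_t idx = cache_hash(handle);
        struct node *np = NULL;

        if (cache[idx] && cache[idx]->phandle == handle)
                np = cache[idx];           /* cache hit */

        if (!np) {
                np = slow_path(handle);    /* miss: walk the tree as before */
                if (np)
                        cache[idx] = np;   /* colliding handles simply overwrite each other */
        }
        return np;
}

The point is just that a statically sized array needs no allocation (and so
no locking around one) and works from the very first call, and a colliding
phandle merely evicts the previous entry rather than growing the cache.
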
>
> To compare performance, I used a RK3399 board which is a pretty typical
> system. I found that just measuring boot time as done previously is
> noisy and may be impacted by other things. Also, bringing up secondary
> cores causes some issues with measuring, so I booted with 'nr_cpus=1'.
> With no caching, calls to of_find_node_by_phandle() take about 20124 us
> for 1248 calls. There are an additional 288 calls before timekeeping is
> up. Using the average time per hit/miss with the cache, we can calculate
> these calls to take 690 us (277 hits / 11 misses) with a 128-entry cache
> and 13319 us with no cache or an uninitialized cache.
>
> Comparing the 3 implementations, the time spent in
> of_find_node_by_phandle() is:
>
> no cache:        20124 us (+ 13319 us)
> 128-entry cache:  5134 us (+ 690 us)
> current cache:     819 us (+ 13319 us)
>
> We could move the allocation of the cache earlier to improve the
> current cache, but that just further complicates the situation as it
> needs to be after slab is up, so we can't do it when unflattening (which
> uses memblock).
>
> Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Segher Boessenkool <segher@kernel.crashing.org>
> Cc: Frank Rowand <frowand.list@gmail.com>
> Signed-off-by: Rob Herring <robh@kernel.org>
> ---
> drivers/of/base.c       | 133 ++++++++--------------------------------
> drivers/of/dynamic.c    |   2 +-
> drivers/of/of_private.h |   4 +-
> drivers/of/overlay.c    |  10 ---
> 4 files changed, 28 insertions(+), 121 deletions(-)

[...]

> -        if (phandle_cache) {
> -                if (phandle_cache[masked_handle] &&
> -                    handle == phandle_cache[masked_handle]->phandle)
> -                        np = phandle_cache[masked_handle];
> -                if (np && of_node_check_flag(np, OF_DETACHED)) {
> -                        WARN_ON(1); /* did not uncache np on node removal */
> -                        of_node_put(np);
> -                        phandle_cache[masked_handle] = NULL;
> -                        np = NULL;
> -                }
> +        if (phandle_cache[handle_hash] &&
> +            handle == phandle_cache[handle_hash]->phandle)
> +                np = phandle_cache[handle_hash];
> +        if (np && of_node_check_flag(np, OF_DETACHED)) {
> +                WARN_ON(1); /* did not uncache np on node removal */

BTW, I don't think this check is even valid. If we failed to uncache
the node when it was detached, then we could be accessing np after it
has been freed.
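
To spell that out with a deliberately broken, stand-alone example (made-up
names, not the kernel code): the phandle comparison dereferences the stale
cache pointer before any OF_DETACHED-style check runs, so the WARN_ON can
only fire after the use-after-free has already happened.

#include <stdint.h>
#include <stdlib.h>

struct fake_node {
        uint32_t phandle;
        unsigned long flags;               /* stands in for OF_DETACHED bookkeeping */
};

static struct fake_node *cache[128];

static struct fake_node *lookup(uint32_t handle, uint32_t hash)
{
        struct fake_node *np = NULL;

        if (cache[hash] && cache[hash]->phandle == handle)  /* (1) reads freed memory */
                np = cache[hash];
        if (np && (np->flags & 1)) {                        /* (2) reads freed memory again */
                cache[hash] = NULL;                         /* too late: the UAF was (1) */
                np = NULL;
        }
        return np;
}

int main(void)
{
        struct fake_node *n = malloc(sizeof(*n));

        if (!n)
                return 1;
        n->phandle = 42;
        n->flags = 0;
        cache[7] = n;          /* cached ... */
        free(n);               /* ... then freed without being uncached */

        lookup(42, 7);         /* both checks above now touch freed memory */
        return 0;
}
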

Rob
