Subject: Re: [PATCH] of: Rework and simplify phandle cache to use a fixed size
From: Jon Hunter <jonathanh@nvidia.com>
Date: 2020-04-14
On 14/04/2020 16:00, Rob Herring wrote:
> +Karol
>
> On Mon, Jan 13, 2020 at 5:12 AM Jon Hunter <jonathanh@nvidia.com> wrote:
>>
>>
>> On 10/01/2020 23:50, Rob Herring wrote:
>>> On Tue, Jan 7, 2020 at 4:22 AM Jon Hunter <jonathanh@nvidia.com> wrote:
>>>>
>>>> Hi Rob,
>>>>
>>>> On 11/12/2019 23:23, Rob Herring wrote:
>>>>> The phandle cache was added to speed up of_find_node_by_phandle() by
>>>>> avoiding walking the whole DT to find a matching phandle. The
>>>>> implementation has several shortcomings:
>>>>>
>>>>> - The cache is designed to work on a linear set of phandle values.
>>>>>   This is true for dtc-generated DTs, but not for other cases such as
>>>>>   Power.
>>>>> - The cache isn't enabled until of_core_init(), and a typical system
>>>>>   may see hundreds of calls to of_find_node_by_phandle() before that
>>>>>   point.
>>>>> - The cache is freed and re-allocated when the number of phandles
>>>>>   changes.
>>>>> - It takes a raw spinlock around a memory allocation, which breaks on
>>>>>   RT.
>>>>>
>>>>> Change the implementation to a fixed size and use hash_32() as the
>>>>> cache index. This greatly simplifies the implementation. It avoids
>>>>> the need for any re-allocation of the cache and for taking a
>>>>> reference on nodes in the cache. The only source of cache-entry
>>>>> removal is of_detach_node().
>>>>>
>>>>> Using hash_32() removes any assumption about phandle values, which
>>>>> improves the hit rate for non-linear phandle values. For linear
>>>>> values, hash_32() produces about a 10% collision rate; the chance of
>>>>> thrashing on colliding values appears to be low. (A condensed sketch
>>>>> of this scheme follows the quoted patch description below.)
>>>>>
>>>>> To compare performance, I used an RK3399 board, which is a pretty
>>>>> typical system. I found that just measuring boot time as done
>>>>> previously is noisy and may be impacted by other things. Bringing up
>>>>> secondary cores also causes some issues with measuring, so I booted
>>>>> with 'nr_cpus=1'. With no caching, calls to of_find_node_by_phandle()
>>>>> take about 20124 us for 1248 calls. There are an additional 288 calls
>>>>> before timekeeping is up. Using the average time per hit/miss with
>>>>> the cache, we can calculate these calls to take 690 us (277 hits / 11
>>>>> misses) with a 128-entry cache and 13319 us with no cache or an
>>>>> uninitialized cache.
>>>>>
>>>>> Comparing the 3 implementations the time spent in
>>>>> of_find_node_by_phandle() is:
>>>>>
>>>>> no cache:        20124 us (+ 13319 us)
>>>>> 128-entry cache:  5134 us (+ 690 us)
>>>>> current cache:     819 us (+ 13319 us)
>>>>>
>>>>> We could move the allocation of the cache earlier to improve the
>>>>> current cache, but that just further complicates things, as the
>>>>> allocation needs to happen after slab is up, so we can't do it when
>>>>> unflattening (which uses memblock).
>>>>>
>>>>> Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>>>>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>>>>> Cc: Segher Boessenkool <segher@kernel.crashing.org>
>>>>> Cc: Frank Rowand <frowand.list@gmail.com>
>>>>> Signed-off-by: Rob Herring <robh@kernel.org>
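
[To make the scheme described in the quoted patch above concrete, here is
a condensed sketch of a fixed-size, hash_32()-indexed cache. It is only a
sketch, not the patch itself: devtree_lock locking, the OF_DETACHED check,
and the of_detach_node() invalidation hook are omitted, and
for_each_of_allnodes() is the internal node iterator from
drivers/of/of_private.h.]

  #include <linux/hash.h>
  #include <linux/of.h>

  #define OF_PHANDLE_CACHE_BITS   7
  #define OF_PHANDLE_CACHE_SZ     BIT(OF_PHANDLE_CACHE_BITS)  /* 128 entries */

  static struct device_node *phandle_cache[OF_PHANDLE_CACHE_SZ];

  static u32 of_phandle_cache_hash(phandle handle)
  {
          /* hash_32() spreads arbitrary phandle values across the table,
           * so no linearity assumption is needed. */
          return hash_32(handle, OF_PHANDLE_CACHE_BITS);
  }

  struct device_node *of_find_node_by_phandle(phandle handle)
  {
          struct device_node *np = NULL;

          if (!handle)
                  return NULL;

          /* Fast path: check the single slot this phandle hashes to. */
          np = phandle_cache[of_phandle_cache_hash(handle)];
          if (np && handle == np->phandle)
                  return of_node_get(np);

          /* Slow path: walk the whole tree, then cache the match.
           * A collision simply overwrites the previous entry. */
          for_each_of_allnodes(np)
                  if (np->phandle == handle) {
                          phandle_cache[of_phandle_cache_hash(handle)] = np;
                          break;
                  }

          return of_node_get(np);
  }

[Because the table never grows, there is no re-allocation and no
allocation under a raw spinlock, and an entry only needs to be dropped
when of_detach_node() removes its node.]
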
>>>>
>>>> With next-20200106 I have noticed a regression on Tegra210 where it
>>>> appears that only one of the eMMC devices is being registered. Bisect
>>>> points to this patch, and reverting it on top of next fixes the
>>>> problem. That is as far as I have got so far, so if you have any
>>>> ideas, please let me know. Unfortunately, there do not appear to be
>>>> any obvious errors in the boot log.
>>>
>>> I guess that's tegra210-p2371-2180.dts because none of the others have
>>> 2 SD hosts enabled. I don't see anything obvious though. Are you doing
>>> any runtime mods to the DT?
>>
>> I have noticed that the bootloader is doing some runtime mods, so I am
>> checking whether this is the cause. I will let you know, but that is
>> most likely it, seeing as I cannot find anything wrong with this change
>> itself.
>
> Did you figure out the problem here? Karol sees a similar problem on
> Tegra210 with the gpu node regulator.
>
> It looks like /external-memory-controller@7001b000 has a duplicate
> phandle. Comparing the dtb in the filesystem with what the kernel
> gets, that node is added by the bootloader. So the bootloader is
> definitely creating a broken dtb.

Yes, it was caused by the bootloader, u-boot, incorrectly copying some
nodes. After preventing u-boot from doing that, it was fine. There are
some u-boot environment variables [0] that you can try clearing to
prevent this. Alternatively, using an upstream u-boot should also work.
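
[For anyone debugging a similar report, it can help to scan the dtb the
kernel actually receives for duplicated phandles. Below is a rough sketch
using libfdt; /sys/firmware/fdt is the kernel's view of the blob, while
the table size and buffer size are arbitrary assumptions.]

  /* dup-phandle.c: report duplicated phandle values in a flattened DT.
   * Build against libfdt, e.g.: cc dup-phandle.c -lfdt -o dup-phandle
   */
  #include <stdio.h>
  #include <stdint.h>
  #include <libfdt.h>

  static uint32_t seen[4096];     /* assumes < 4096 phandles in the blob */
  static int nseen;

  static void scan(const void *fdt, int node)
  {
          const fdt32_t *prop;
          int child, len, i;

          prop = fdt_getprop(fdt, node, "phandle", &len);
          if (!prop)
                  prop = fdt_getprop(fdt, node, "linux,phandle", &len);
          if (prop && len == sizeof(fdt32_t)) {
                  uint32_t ph = fdt32_to_cpu(*prop);

                  for (i = 0; i < nseen; i++)
                          if (seen[i] == ph)
                                  printf("duplicated phandle 0x%x at %s\n",
                                         ph, fdt_get_name(fdt, node, NULL));
                  if (nseen < 4096)
                          seen[nseen++] = ph;
          }

          fdt_for_each_subnode(child, fdt, node)
                  scan(fdt, child);
  }

  int main(int argc, char **argv)
  {
          static char buf[1 << 20];       /* assumes the dtb fits in 1 MiB */
          const char *path = argc > 1 ? argv[1] : "/sys/firmware/fdt";
          FILE *f = fopen(path, "rb");
          int root;

          if (!f || fread(buf, 1, sizeof(buf), f) == 0)
                  return 1;
          if (fdt_check_header(buf))
                  return 1;
          root = fdt_path_offset(buf, "/");
          if (root < 0)
                  return 1;
          scan(buf, root);
          return 0;
  }

[When dtc is available, decompiling the blob (dtc -I dtb -O dts) should
also flag duplicated phandles via its built-in checks.]
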

Cheers
Jon

[0] https://elinux.org/Jetson/TX1_Upstream_Kernel#Upstream_Linux_kernel

--
nvpublic
