    Subject: Re: [PATCH] x86/mm: use max memory block size with unaligned memory end
    On Thu, Jun 04, 2020 at 09:22:03AM +0200, David Hildenbrand wrote:
    > On 04.06.20 05:54, Daniel Jordan wrote:
    > > Some of our servers spend 14 out of the 21 seconds of kernel boot
    > > initializing memory block sysfs directories and then creating symlinks
    > > between them and the corresponding nodes. The slowness happens because
    > > the machines get stuck with the smallest supported memory block size on
    > > x86 (128M), which results in 16,288 directories to cover the 2T of
    > > installed RAM, and each of these paths does a linear search of the
    > > memory blocks for every block id, with atomic ops at each step.
    >
    > With 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in xarray
    > to accelerate lookup") merged by Linus today (strange, I thought this
    > had long been upstream)

    Ah, thanks for pointing this out! It was only posted to LKML, so I missed it.

    > all linear searches should be gone and at least
    > the performance observation in this patch no longer applies.

    As far as the performance numbers stated go, that's certainly true, but this
    patch on top still improves kernel boot by 7%. It's a savings of half a
    second -- I'll take it.

    IMHO the root cause of this is really the small block size. Building a cache
    on top to avoid iterating over tons of small blocks seems like papering over
    the problem, especially when one of the two affected paths in boot is a
    cautious check that might be ready to be removed by now[0]:

    static int init_memory_block(struct memory_block **memory,
                                 unsigned long block_id, unsigned long state)
    {
            ...
            mem = find_memory_block_by_id(block_id);
            if (mem) {
                    put_device(&mem->dev);
                    return -EEXIST;
            }
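
    For context, after that commit the lookup called above is a constant-time
    xarray load rather than a walk over every memory_block device. As best I
    recall the commit, it's along these lines (a sketch, not verbatim):

    static struct memory_block *find_memory_block_by_id(unsigned long block_id)
    {
            struct memory_block *mem;

            /* O(1) lookup in the memory_blocks xarray cache */
            mem = xa_load(&memory_blocks, block_id);
            if (mem)
                    get_device(&mem->dev);
            return mem;
    }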

    Anyway, I guess I'll redo the changelog and post again.

    > The memmap init should nowadays consume most time.

    Yeah, but of course it's not as bad as it was, now that it's fully parallelized.
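
    (Concretely, deferred_init_memmap() now hands PFN ranges to padata worker
    threads; a rough sketch of the job setup from that series, with names from
    memory:)

    struct padata_mt_job job = {
            .thread_fn   = deferred_init_memmap_chunk, /* inits one PFN range */
            .fn_arg      = zone,
            .start       = spfn,
            .size        = epfn - spfn,
            .align       = PAGES_PER_SECTION,
            .min_chunk   = PAGES_PER_SECTION,
            .max_threads = max_threads,
    };

    padata_do_multithreaded(&job);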

    [0] https://lore.kernel.org/linux-mm/a8e96df6-dc6d-037f-491c-92182d4ada8d@redhat.com/
