From: Atish Patra <atishp@atishpatra.org>
Subject: Re: [PATCH 3/4] RISC-V: Fix L1_CACHE_BYTES for RV32
    On Thu, Jan 14, 2021 at 11:59 PM Geert Uytterhoeven
    <geert@linux-m68k.org> wrote:
    >
    > Hi Atish,
    >
    > On Thu, Jan 14, 2021 at 10:11 PM Atish Patra <atishp@atishpatra.org> wrote:
    > > On Thu, Jan 14, 2021 at 11:46 AM Palmer Dabbelt <palmer@dabbelt.com> wrote:
    > > > On Thu, 14 Jan 2021 10:33:01 PST (-0800), atishp@atishpatra.org wrote:
    > > > > On Wed, Jan 13, 2021 at 9:10 PM Palmer Dabbelt <palmer@dabbelt.com> wrote:
    > > > >>
    > > > >> On Thu, 07 Jan 2021 01:26:51 PST (-0800), Atish Patra wrote:
    > > > >> > SMP_CACHE_BYTES/L1_CACHE_BYTES should be defined as 32 instead of
    > > > >> > 64 for RV32. Otherwise, there will be a hole of 32 bytes with each
    > > > >> > memblock allocation if it is requested to be aligned to SMP_CACHE_BYTES.
    > > > >> >
    > > > >> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
    > > > >> > ---
    > > > >> > arch/riscv/include/asm/cache.h | 4 ++++
    > > > >> > 1 file changed, 4 insertions(+)
    > > > >> >
    > > > >> > diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
    > > > >> > index 9b58b104559e..c9c669ea2fe6 100644
    > > > >> > --- a/arch/riscv/include/asm/cache.h
    > > > >> > +++ b/arch/riscv/include/asm/cache.h
    > > > >> > @@ -7,7 +7,11 @@
    > > > >> > #ifndef _ASM_RISCV_CACHE_H
    > > > >> > #define _ASM_RISCV_CACHE_H
    > > > >> >
    > > > >> > +#ifdef CONFIG_64BIT
    > > > >> > #define L1_CACHE_SHIFT 6
    > > > >> > +#else
    > > > >> > +#define L1_CACHE_SHIFT 5
    > > > >> > +#endif
    > > > >> >
    > > > >> > #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
    > > > >>
    > > > >> Should we not instead just
    > > > >>
    > > > >> #define SMP_CACHE_BYTES L1_CACHE_BYTES
    > > > >>
    > > > >> like a handful of architectures do?
    > > > >>
    > > > >
    > > > > The generic code already defines it that way in include/linux/cache.h
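
    For reference, the generic fallback in include/linux/cache.h (paraphrased
    from memory, so double-check the tree) only applies when an architecture
    does not define SMP_CACHE_BYTES itself:

        #ifndef SMP_CACHE_BYTES
        #define SMP_CACHE_BYTES L1_CACHE_BYTES
        #endif

    so adjusting L1_CACHE_SHIFT is enough for SMP_CACHE_BYTES to follow.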
    > > > >
    > > > >> The cache size is sort of fake here, as we don't have any non-coherent
    > > > >> mechanisms, but IIRC we wrote somewhere that it's recommended to have 64-byte
    > > > >> cache lines in RISC-V implementations as software may assume that for
    > > > >> performance reasons. Not really a strong reason, but I'd prefer to just make
    > > > >> these match.
    > > > >>
    > > > >
    > > > > If it is documented somewhere in the kernel, we should update that. I
    > > > > think SMP_CACHE_BYTES being 64 actually degrades performance, as memory
    > > > > blocks become fragmented with a 32-byte gap wherever SMP_CACHE_BYTES is
    > > > > used as an alignment requirement.
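
    As a rough illustration of the padding in question (a standalone sketch,
    not code from this patch; the 32-byte object size is just an example):
    aligning each allocation to SMP_CACHE_BYTES makes consecutive 32-byte
    objects start 64 bytes apart on a 64-byte configuration, leaving a
    32-byte hole each time.

        /* Standalone illustration: waste per 32-byte allocation when the
         * allocation start is rounded up to the alignment. */
        #include <stdio.h>

        #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

        int main(void)
        {
                unsigned long size = 32;        /* example object size */
                unsigned long a;

                for (a = 32; a <= 64; a *= 2)
                        printf("align %2lu: stride %2lu, hole %2lu bytes\n",
                               a, ALIGN_UP(size, a), ALIGN_UP(size, a) - size);
                return 0;
        }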
    > > >
    > > > I don't buy that: if you're trying to align to the cache size then the gaps are
    > > > the whole point. IIUC the 64-byte cache lines come from DDR, not XLEN, so
    > > > there's really no reason for these to be different between the base ISAs.
    > > >
    > >
    > > Got your point. I noticed this while fixing the resource tree issue,
    > > where the SMP_CACHE_BYTES alignment was unintentional but was causing
    > > the problem. The real issue was solved by another patch in this series,
    > > though.
    > >
    > > Just to clarify, if the allocation function intends to allocate
    > > consecutive memory, it should use 32 instead of SMP_CACHE_BYTES.
    > > This will lead to an #ifdef in the code.
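
    A sketch of what such a call site could look like (hypothetical, not a
    patch that was posted; RES_ALIGN and res are made-up names, and
    memblock_alloc() takes a size and an alignment):

        /* Hypothetical sketch only: use a fixed 32-byte alignment on RV32
         * instead of SMP_CACHE_BYTES, at the cost of an #ifdef. */
        #ifdef CONFIG_64BIT
        #define RES_ALIGN       SMP_CACHE_BYTES
        #else
        #define RES_ALIGN       32
        #endif

                res = memblock_alloc(sizeof(*res), RES_ALIGN);
                if (!res)
                        panic("%s: Failed to allocate %zu bytes\n",
                              __func__, sizeof(*res));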
    > >
    > > > > In addition to that, Geert Uytterhoeven mentioned a panic on vex32
    > > > > without this patch. I didn't see anything in QEMU, though.
    > > >
    > > > Something like that is probably only going to show up on real hardware; QEMU
    > > > doesn't really do anything with the cache line size. That said, as there's
    > > > nothing in our kernel now related to non-coherent memory, there really should
    > > > only be a performance issue (at least until we have non-coherent systems).
    > > >
    > > > I'd bet that the change is just masking some other bug, either in the software
    > > > or the hardware. I'd prefer to root cause this rather than just working around
    > > > it, as it'll probably come back later and in a more difficult way to find.
    > > >
    > >
    > > Agreed. @Geert Uytterhoeven Can you do a further analysis of the panic
    > > you were seeing?
    > > We may need to change the alignment requirement to 32 for RV32 manually
    > > somewhere in the code.
    >
    > My findings were in
    > https://lore.kernel.org/linux-riscv/CAMuHMdWf6K-5y02+WJ6Khu1cD6P0n5x1wYQikrECkuNtAA1pgg@mail.gmail.com/
    >
    > Note that when the memblock.reserved list kept increasing, it kept on
    > adding the same entry to the list. But that was fixed by "[PATCH 1/4]
    > RISC-V: Do not allocate memblock while iterating reserved memblocks".
    >
    > After that, only the (reproducible) "Unable to handle kernel paging
    > request at virtual address 61636473" was left, always at the same place.
    > No idea where the actual corruption happened.
    >

    Yes, I was asking about this panic. I don't have the LiteX FPGA setup to
    reproduce it either.
    Can you take a look at the epc & ra to figure out exactly where the fault is?

    That will help us understand the real cause of this panic.

    > Gr{oetje,eeting}s,
    >
    > Geert
    >
    > --
    > Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
    >
    > In personal conversations with technical people, I call myself a hacker. But
    > when I'm talking to journalists I just say "programmer" or something like that.
    > -- Linus Torvalds



    --
    Regards,
    Atish
