Subject: [PATCH 5.19 0993/1157] powerpc/32: Call mmu_mark_initmem_nx() regardless of data block mapping.
    From: Christophe Leroy <christophe.leroy@csgroup.eu>

    [ Upstream commit 980bbf7ca72012d317617fcdbfabe8708e4cef29 ]

mark_initmem_nx() calls either mmu_mark_initmem_nx() or
set_memory_attr(), depending on what v_block_mapped() returns
for _sinittext.

    But we can now handle text and data independently, so that
    text may be mapped by block even when data is mapped by pages.

On the 8xx, for instance, 32 Mbytes of memory are pinned in the
TLB at startup. So the pinned entries need to go away for sinittext.

In the next patch a BAT will be set to also cover sinittext on book3s/32.
So it will also be necessary to call mmu_mark_initmem_nx() even when
data above sinittext is not mapped with BATs.

    As this is highly dependent on the platform, call mmu_mark_initmem_nx()
    regardless of data block mapping. Then the platform will know what to
    do.
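
For reference, the resulting generic flow then simply calls the platform
hook first and only falls back to per-page attributes when init text is
not block mapped. Below is a condensed sketch of mark_initmem_nx() after
the change, reconstructed from the pgtable_32.c hunk further down
(illustration only, not part of the diff):

/* Sketch of mark_initmem_nx() after this patch; context reconstructed
 * from the pgtable_32.c hunk below, illustration only. */
void mark_initmem_nx(void)
{
	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
				 PFN_DOWN((unsigned long)_sinittext);

	/* Always let the platform decide how to handle its block mappings. */
	mmu_mark_initmem_nx();

	/* Fall back to per-page attributes when init text is not block mapped. */
	if (!v_block_mapped((unsigned long)_sinittext)) {
		set_memory_nx((unsigned long)_sinittext, numpages);
		set_memory_rw((unsigned long)_sinittext, numpages);
	}
}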

Modify the 8xx mmu_mark_initmem_nx() so that the inittext mapping is
changed only when pagealloc debug and kfence are not active; otherwise
inittext is mapped with standard pages. And don't do anything on kernel
text, which is already mapped with PAGE_KERNEL_TEXT.
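
Similarly, the 8xx hook ends up roughly as sketched below; the first two
locals sit above the hunk shown further down and are reconstructed here
only for context (assumed, not visible in the diff):

/* Sketch of the 8xx mmu_mark_initmem_nx() after this patch; the etext8
 * and sinittext locals are assumed from code outside the hunk below. */
void mmu_mark_initmem_nx(void)
{
	unsigned long etext8 = ALIGN(__pa(_etext), SZ_8M);	/* assumed */
	unsigned long sinittext = __pa(_sinittext);		/* assumed */
	unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
	unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);

	/* Kernel text is already mapped with PAGE_KERNEL_TEXT, so leave it
	 * alone. Only remap init text, and only when pagealloc debug and
	 * kfence are not active; otherwise it stays on standard pages. */
	if (!debug_pagealloc_enabled_or_kfence())
		mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);

	mmu_pin_tlb(block_mapped_ram, false);
}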

Fixes: da1adea07576 ("powerpc/8xx: Allow STRICT_KERNEL_RWX with pinned TLB")
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/db3fc14f3bfa6215b0786ef58a6e2bc1e1f964d7.1655202804.git.christophe.leroy@csgroup.eu
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    arch/powerpc/mm/nohash/8xx.c | 4 ++--
    arch/powerpc/mm/pgtable_32.c | 6 +++---
    2 files changed, 5 insertions(+), 5 deletions(-)

    diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
    index 27f9186ae374..1ee08c3efe5b 100644
    --- a/arch/powerpc/mm/nohash/8xx.c
    +++ b/arch/powerpc/mm/nohash/8xx.c
@@ -179,8 +179,8 @@ void mmu_mark_initmem_nx(void)
 	unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
 	unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);
 
-	mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, false);
-	mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
+	if (!debug_pagealloc_enabled_or_kfence())
+		mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
 
 	mmu_pin_tlb(block_mapped_ram, false);
 }
    diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
    index a56ade39dc68..3ac73f9fb5d5 100644
    --- a/arch/powerpc/mm/pgtable_32.c
    +++ b/arch/powerpc/mm/pgtable_32.c
@@ -135,9 +135,9 @@ void mark_initmem_nx(void)
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
-	if (v_block_mapped((unsigned long)_sinittext)) {
-		mmu_mark_initmem_nx();
-	} else {
+	mmu_mark_initmem_nx();
+
+	if (!v_block_mapped((unsigned long)_sinittext)) {
 		set_memory_nx((unsigned long)_sinittext, numpages);
 		set_memory_rw((unsigned long)_sinittext, numpages);
 	}
    --
    2.35.1

