Subject: [PATCH 4.4 01/50] arm64: account for sparsemem section alignment when choosing vmemmap offset
    4.4-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

    commit 36e5cd6b897e17d03008f81e075625d8e43e52d0 upstream.

    Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
    region") fixed an issue where the struct page array would overflow into the
    adjacent virtual memory region if system RAM was placed so high up in
    physical memory that its addresses were not representable in the build time
    configured virtual address size.

    However, the fix failed to take into account that the vmemmap region needs
    to be relatively aligned with respect to the sparsemem section size, so that
    a sequence of page structs corresponding with a sparsemem section in the
    linear region appears naturally aligned in the vmemmap region.

    So round up vmemmap to sparsemem section size. Since this essentially moves
    the projection of the linear region up in memory, also revert the reduction
    of the size of the vmemmap region.
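
    To make the alignment argument concrete, here is a minimal standalone
    userspace sketch (not part of the patch, and not kernel code). It
    re-creates SECTION_ALIGN_DOWN locally, and PAGE_SHIFT, SECTION_SIZE_BITS,
    the struct page size and the RAM base below are purely illustrative
    assumptions. With a RAM base that is not section aligned, subtracting the
    raw memstart pfn puts the first section's page structs below
    VMEMMAP_START, while subtracting the section-aligned pfn keeps every
    section's page structs at a section-aligned offset inside the vmemmap
    region:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative stand-ins; the real values depend on the kernel config. */
    #define PAGE_SHIFT          12   /* 4 KiB pages */
    #define SECTION_SIZE_BITS   30   /* 1 GiB sparsemem sections */
    #define PFN_SECTION_SHIFT   (SECTION_SIZE_BITS - PAGE_SHIFT)
    #define PAGES_PER_SECTION   (1ULL << PFN_SECTION_SHIFT)
    #define PAGE_SECTION_MASK   (~(PAGES_PER_SECTION - 1))
    #define SECTION_ALIGN_DOWN(pfn)  ((pfn) & PAGE_SECTION_MASK)
    #define STRUCT_PAGE_SIZE    64   /* stand-in for sizeof(struct page) */

    int main(void)
    {
        /* RAM base 512 MiB into a 1 GiB section, i.e. not section aligned. */
        uint64_t memstart_pfn = 0x8020000000ULL >> PAGE_SHIFT;
        uint64_t section_pfn  = SECTION_ALIGN_DOWN(memstart_pfn);

        /*
         * pfn_to_page(pfn) is vmemmap + pfn, so the byte offset of a
         * struct page from VMEMMAP_START is
         * (pfn - subtracted_base) * sizeof(struct page).  The old vmemmap
         * definition subtracts memstart_pfn, the new one subtracts the
         * section-aligned pfn.
         */
        int64_t old_off = ((int64_t)section_pfn - (int64_t)memstart_pfn) * STRUCT_PAGE_SIZE;
        int64_t new_off = ((int64_t)section_pfn - (int64_t)section_pfn) * STRUCT_PAGE_SIZE;

        printf("first section's page structs, old vmemmap: %+lld bytes from VMEMMAP_START\n",
               (long long)old_off);
        printf("first section's page structs, new vmemmap: %+lld bytes from VMEMMAP_START\n",
               (long long)new_off);
        printf("per-section page array size: %llu bytes\n",
               (unsigned long long)(PAGES_PER_SECTION * STRUCT_PAGE_SIZE));
        return 0;
    }

    With these assumed numbers the old definition places the first section's
    entries 8 MiB below VMEMMAP_START, whereas the new definition places them
    exactly at VMEMMAP_START and every later section at a multiple of the
    16 MiB per-section array size above it.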

    Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
    Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
    Tested-by: David Daney <david.daney@cavium.com>
    Tested-by: Robert Richter <rrichter@cavium.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/include/asm/pgtable.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -40,7 +40,7 @@
  * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
  *	fixed mappings and modules
  */
-#define VMEMMAP_SIZE	ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
+#define VMEMMAP_SIZE	ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
 
 #ifndef CONFIG_KASAN
 #define VMALLOC_START	(VA_START)
@@ -52,7 +52,8 @@
 #define VMALLOC_END	(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define VMEMMAP_START	(VMALLOC_END + SZ_64K)
-#define vmemmap		((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
+#define vmemmap		((struct page *)VMEMMAP_START - \
+			 SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
 
 #define FIRST_USER_ADDRESS	0UL

