    Subject: [PATCH v7 21/41] mm: Add guard pages around a shadow stack.
    From: Yu-cheng Yu <yu-cheng.yu@intel.com>

    The x86 Control-flow Enforcement Technology (CET) feature includes a new
    type of memory called shadow stack. This shadow stack memory has some
    unusual properties, which require some core mm changes to function
    properly.

    The architecture of shadow stack constrains the ability of userspace to
    move the shadow stack pointer (SSP) in order to prevent corrupting or
    switching to other shadow stacks. The RSTORSSP instruction can move the
    SSP to a different shadow stack, but it requires a specially placed token
    in order to do this. However, the architecture does not prevent the SSP
    from being incremented until it wanders onto an adjacent shadow stack. To
    prevent this in software, enforce guard pages at the beginning of shadow
    stack vmas, such that there will always be a gap between adjacent shadow
    stacks.

    Make the gap big enough so that no userspace SSP-changing operation
    (besides RSTORSSP) can move the SSP from one stack to the next. The SSP
    can be incremented or decremented by CALL, RET and INCSSP. CALL and RET
    can move the SSP by a maximum of 8 bytes, at which point the shadow
    stack would be accessed.

    The INCSSP instruction can also increment the shadow stack pointer. It
    is the shadow stack analog of an instruction like:

    addq $0x80, %rsp
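
    (For comparison, INCSSPQ takes a register holding the number of 8-byte
    elements to "pop", so a rough shadow stack equivalent of the above, in
    AT&T syntax, would be a sketch like:

    movl $0x10, %eax	# 0x80 bytes / 8 bytes per element
    incsspq %rax

    with the register and constant chosen purely for illustration.)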

    However, there is one important difference between an ADD on %rsp and
    INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
    of the first and last elements that were "popped". It can be thought of
    as acting like this:

    READ_ONCE(*(u64 *)ssp);       // read+discard top element on stack
    ssp += nr_to_pop * 8;         // move the shadow stack
    READ_ONCE(*(u64 *)(ssp - 8)); // read+discard last popped stack element

    The maximum distance INCSSP can move the SSP is 2040 bytes, and doing so
    reads the memory of the last element "popped", 8 bytes below the new SSP.
    Therefore a single page gap will be enough to prevent any operation from
    shifting the SSP to an adjacent stack, since the accesses would have to
    land in the gap at least once, causing a fault.
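
    To make the arithmetic concrete, here is a sketch of the invariant; the
    macro name SHSTK_MAX_INCSSP_BYTES is made up for illustration and is not
    part of this patch:

    /*
     * INCSSPQ pops at most 255 eight-byte elements and reads the first and
     * last popped slots, while CALL and RET move the SSP by only 8 bytes
     * and access the slot they touch. None of these can step over a
     * 4096-byte guard page without an access landing inside it.
     */
    #define SHSTK_MAX_INCSSP_BYTES	(255 * 8)	/* 2040 bytes */

    static_assert(SHSTK_MAX_INCSSP_BYTES < 4096,
    	      "a single guard page covers the largest INCSSPQ step");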

    This could be accomplished by using VM_GROWSDOWN, but this has a
    downside. That behavior would allow shadow stacks to grow, which is
    unneeded and adds a strange difference from how most regular stacks work.

    Tested-by: Pengfei Xu <pengfei.xu@intel.com>
    Tested-by: John Allen <john.allen@amd.com>
    Tested-by: Kees Cook <keescook@chromium.org>
    Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
    Co-developed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
    Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
    Cc: Kees Cook <keescook@chromium.org>

    ---
    v5:
    - Fix typo in commit log

    v4:
    - Drop references to 32 bit instructions
    - Switch to generic code to drop __weak (Peterz)

    v2:
    - Use __weak instead of #ifdef (Dave Hansen)
    - Only have start gap on shadow stack (Andy Luto)
    - Create stack_guard_start_gap() to not duplicate code
    in an arch version of vm_start_gap() (Dave Hansen)
    - Improve commit log partly with verbiage from (Dave Hansen)

    Yu-cheng v25:
    - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.
    ---
    include/linux/mm.h | 31 ++++++++++++++++++++++++++-----
    1 file changed, 26 insertions(+), 5 deletions(-)

    diff --git a/include/linux/mm.h b/include/linux/mm.h
    index 097544afb1aa..6a093daced88 100644
    --- a/include/linux/mm.h
    +++ b/include/linux/mm.h
    @@ -3107,15 +3107,36 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
     	return mtree_load(&mm->mm_mt, addr);
     }
     
    +static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
    +{
    +	if (vma->vm_flags & VM_GROWSDOWN)
    +		return stack_guard_gap;
    +
    +	/*
    +	 * Shadow stack pointer is moved by CALL, RET, and INCSSPQ.
    +	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
    +	 * and touches the first and the last element in the range, which
    +	 * triggers a page fault if the range is not in a shadow stack.
    +	 * Because of this, creating 4-KB guard pages around a shadow
    +	 * stack prevents these instructions from going beyond.
    +	 *
    +	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
    +	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
    +	 */
    +	if (vma->vm_flags & VM_SHADOW_STACK)
    +		return PAGE_SIZE;
    +
    +	return 0;
    +}
    +
     static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
     {
    +	unsigned long gap = stack_guard_start_gap(vma);
     	unsigned long vm_start = vma->vm_start;
     
    -	if (vma->vm_flags & VM_GROWSDOWN) {
    -		vm_start -= stack_guard_gap;
    -		if (vm_start > vma->vm_start)
    -			vm_start = 0;
    -	}
    +	vm_start -= gap;
    +	if (vm_start > vma->vm_start)
    +		vm_start = 0;
     	return vm_start;
     }

    --
    2.17.1
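
    For readers who want to poke at the gap logic in isolation, below is a
    small standalone userspace model of the two helpers added above. It is a
    sketch only: the VM_* flag values, the stack_guard_gap default, and the
    two-field vm_area_struct are stand-ins for this model, not the kernel's
    definitions.

    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SIZE	4096UL
    #define VM_GROWSDOWN	0x1UL		/* placeholder flag bit */
    #define VM_SHADOW_STACK	0x2UL		/* placeholder flag bit */

    /* Stand-in for the kernel's tunable guard gap below ordinary stacks. */
    static unsigned long stack_guard_gap = 256UL * PAGE_SIZE;

    struct vm_area_struct {
    	unsigned long vm_start;
    	unsigned long vm_flags;
    };

    static unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
    {
    	if (vma->vm_flags & VM_GROWSDOWN)
    		return stack_guard_gap;

    	/* A shadow stack only needs a single guard page below it. */
    	if (vma->vm_flags & VM_SHADOW_STACK)
    		return PAGE_SIZE;

    	return 0;
    }

    static unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
    	unsigned long gap = stack_guard_start_gap(vma);
    	unsigned long vm_start = vma->vm_start - gap;

    	/* Clamp to 0 if subtracting the gap wrapped around zero. */
    	if (vm_start > vma->vm_start)
    		vm_start = 0;
    	return vm_start;
    }

    int main(void)
    {
    	struct vm_area_struct shstk = {
    		.vm_start = 0x7f0000002000UL,
    		.vm_flags = VM_SHADOW_STACK,
    	};

    	/* The shadow stack vma reserves exactly one page below itself. */
    	assert(vm_start_gap(&shstk) == shstk.vm_start - PAGE_SIZE);
    	printf("guarded start: %#lx\n", vm_start_gap(&shstk));
    	return 0;
    }

    Note how only VM_GROWSDOWN vmas keep the large, growable-stack gap; a
    VM_SHADOW_STACK vma gets just the single page needed to stop CALL, RET
    and INCSSP from stepping onto a neighboring shadow stack.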