    From: Andy Lutomirski
    Date: Fri, 26 Jan 2018
    Subject: Re: [PATCH v2 1/2] x86/mm/64: Fix vmapped stack syncing on very-large-memory 4-level systems
    On Fri, Jan 26, 2018 at 10:51 AM, Kirill A. Shutemov
    <kirill@shutemov.name> wrote:
    > On Thu, Jan 25, 2018 at 01:12:14PM -0800, Andy Lutomirski wrote:
    >> Neil Berrington reported a double-fault on a VM with 768GB of RAM that
    >> uses large amounts of vmalloc space with PTI enabled.
    >>
    >> The cause is that load_new_mm_cr3() was never fixed to take the
    >> 5-level pgd folding code into account, so, on a 4-level kernel, the
    >> pgd synchronization logic compiles away to exactly nothing.
    >
    > Ouch. Sorry for this.
    >
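    For reference, why the sync compiles away: on a 4-level kernel the
    p4d level is folded into the pgd by include/asm-generic/pgtable-nop4d.h,
    which makes pgd_none() a compile-time constant zero. A minimal sketch,
    illustrative rather than the exact mainline code:

	/* Folded-p4d definition on 4-level kernels: */
	static inline int pgd_none(pgd_t pgd)	{ return 0; }

	/* ...so a sync that checks only the pgd level is dead code: */
	if (pgd_none(*pgd))		/* constant 0 with a folded p4d */
		set_pgd(pgd, *pgd_ref);	/* eliminated entirely */
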
    >>
    >> Interestingly, the problem doesn't trigger with nopti. I assume this
    >> is because the kernel is mapped with global pages if we boot with
    >> nopti. The sequence of operations when we create a new task is that
    >> we first load its mm while still running on the old stack (which
    >> crashes if the old stack is unmapped in the new mm unless the TLB
    >> saves us), then we call prepare_switch_to(), and then we switch to the
    >> new stack. prepare_switch_to() pokes the new stack directly, which
    >> will populate the mapping through vmalloc_fault(). I assume that
    >> we're getting lucky on non-PTI systems -- the old stack's TLB entry
    >> stays alive long enough to make it all the way through
    >> prepare_switch_to() and switch_to() so that we make it to a valid
    >> stack.
    >>
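    As a sketch of that sequence (paraphrased from memory of
    arch/x86/include/asm/switch_to.h around that time; details may
    differ), the poke that populates the mapping looks roughly like:

	/* Touching one byte of the new task's vmap stack faults it in
	 * via vmalloc_fault() while we still run on a valid stack. */
	static inline void prepare_switch_to(struct task_struct *prev,
					     struct task_struct *next)
	{
	#ifdef CONFIG_VMAP_STACK
		READ_ONCE(*(unsigned char *)next->thread.sp);
	#endif
	}
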
    >> Fixes: b50858ce3e2a ("x86/mm/vmalloc: Add 5-level paging support")
    >> Cc: stable@vger.kernel.org
    >> Reported-and-tested-by: Neil Berrington <neil.berrington@datacore.com>
    >> Signed-off-by: Andy Lutomirski <luto@kernel.org>
    >> ---
    >>  arch/x86/mm/tlb.c | 34 +++++++++++++++++++++++++++++-----
    >>  1 file changed, 29 insertions(+), 5 deletions(-)
    >>
    >> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
    >> index a1561957dccb..5bfe61a5e8e3 100644
    >> --- a/arch/x86/mm/tlb.c
    >> +++ b/arch/x86/mm/tlb.c
    >> @@ -151,6 +151,34 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
    >>  	local_irq_restore(flags);
    >>  }
    >>
    >> +static void sync_current_stack_to_mm(struct mm_struct *mm)
    >> +{
    >> +	unsigned long sp = current_stack_pointer;
    >> +	pgd_t *pgd = pgd_offset(mm, sp);
    >> +
    >> +	if (CONFIG_PGTABLE_LEVELS > 4) {
    >
    > Can we have
    >
    > if (PTRS_PER_P4D > 1)
    >
    > here instead? This way I wouldn't need to touch the code again for
    > boot-time switching support.
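
    For concreteness, the suggested check would slot into the new helper
    like this (a sketch based on the patch above; only the condition
    differs from what was posted):

	static void sync_current_stack_to_mm(struct mm_struct *mm)
	{
		unsigned long sp = current_stack_pointer;
		pgd_t *pgd = pgd_offset(mm, sp);

		if (PTRS_PER_P4D > 1) {
			/* Real 5-level paging: sync at the pgd level. */
			if (unlikely(pgd_none(*pgd)))
				set_pgd(pgd, *pgd_offset_k(sp));
		} else {
			/* 4-level: the pgd is folded; the real top level
			 * is the p4d, so sync one level down. */
			p4d_t *p4d = p4d_offset(pgd, sp);

			if (unlikely(p4d_none(*p4d)))
				set_p4d(p4d, *p4d_offset(pgd_offset_k(sp), sp));
		}
	}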

    Want to send a patch?

    (Also, I haven't noticed a patch to fix up the SYSRET checking for
    boot-time switching. Have I just missed it?)

    --Andy
