Subject: Re: [PATCHv11 05/16] x86/uaccess: Provide untagged_addr() and remove tags before address check

On 10/24/22 17:17, Kirill A. Shutemov wrote:
> untagged_addr() is a helper used by the core-mm to strip tag bits and
> bring the address to its canonical shape. It only handles userspace
> addresses. The untagging mask is stored in mmu_context and will be set
> on enabling LAM for the process.
>
> The tags must not be included in the check of whether it is okay to access
> the userspace address.
>
> Strip tags in access_ok().
>
> get_user() and put_user() don't use access_ok(), but check access
> against TASK_SIZE directly in assembly. Strip tags before calling into
> the assembly helper.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Tested-by: Alexander Potapenko <glider@google.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> arch/x86/include/asm/mmu.h | 3 +++
> arch/x86/include/asm/mmu_context.h | 11 ++++++++
> arch/x86/include/asm/uaccess.h | 42 +++++++++++++++++++++++++++---
> arch/x86/kernel/process.c | 3 +++
> 4 files changed, 56 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
> index 002889ca8978..2fdb390040b5 100644
> --- a/arch/x86/include/asm/mmu.h
> +++ b/arch/x86/include/asm/mmu.h
> @@ -43,6 +43,9 @@ typedef struct {
>
> /* Active LAM mode: X86_CR3_LAM_U48 or X86_CR3_LAM_U57 or 0 (disabled) */
> unsigned long lam_cr3_mask;
> +
> + /* Significant bits of the virtual address. Excludes tag bits. */
> + u64 untag_mask;
> #endif
>
> struct mutex lock;
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index 69c943b2ae90..5bd3d46685dc 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -100,6 +100,12 @@ static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
> static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
> {
> mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
> + mm->context.untag_mask = oldmm->context.untag_mask;
> +}
> +
> +static inline void mm_reset_untag_mask(struct mm_struct *mm)
> +{
> + mm->context.untag_mask = -1UL;
> }
>
> #else
> @@ -112,6 +118,10 @@ static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
> static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
> {
> }
> +
> +static inline void mm_reset_untag_mask(struct mm_struct *mm)
> +{
> +}
> #endif
>
> #define enter_lazy_tlb enter_lazy_tlb
> @@ -138,6 +148,7 @@ static inline int init_new_context(struct task_struct *tsk,
> mm->context.execute_only_pkey = -1;
> }
> #endif
> + mm_reset_untag_mask(mm);
> init_new_context_ldt(mm);
> return 0;
> }
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index 8bc614cfe21b..c6062c07ccd2 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -7,6 +7,7 @@
> #include <linux/compiler.h>
> #include <linux/instrumented.h>
> #include <linux/kasan-checks.h>
> +#include <linux/mm_types.h>
> #include <linux/string.h>
> #include <asm/asm.h>
> #include <asm/page.h>
> @@ -21,6 +22,30 @@ static inline bool pagefault_disabled(void);
> # define WARN_ON_IN_IRQ()
> #endif
>
> +#ifdef CONFIG_X86_64
> +/*
> + * Mask out tag bits from the address.
> + *
> + * Magic with the 'sign' allows untagging userspace pointers without any branches
> + * while leaving kernel addresses intact.
> + */
> +#define untagged_addr(mm, addr) ({ \
> + u64 __addr = (__force u64)(addr); \
> + s64 sign = (s64)__addr >> 63; \
> + __addr &= (mm)->context.untag_mask | sign; \
> + (__force __typeof__(addr))__addr; \
> +})
> +
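
As an aside for anyone skimming the thread, here is a rough standalone sketch
of what the sign trick does. The mask below is a hypothetical LAM_U57-style
value picked for illustration, not something taken from the patch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical untag mask: tag bits 62:57 cleared, all other bits kept. */
#define UNTAG_MASK (~0x7e00000000000000ULL)

static uint64_t untag(uint64_t addr)
{
	/* Arithmetic shift: 0 for user addresses, all-ones for kernel ones. */
	int64_t sign = (int64_t)addr >> 63;

	return addr & (UNTAG_MASK | sign);
}

int main(void)
{
	uint64_t user = 0x1234567890ULL | (0x2aULL << 57); /* tagged user pointer */
	uint64_t kern = 0xffffffff81000000ULL;             /* kernel address */

	printf("%016" PRIx64 " -> %016" PRIx64 "\n", user, untag(user)); /* tag stripped */
	printf("%016" PRIx64 " -> %016" PRIx64 "\n", kern, untag(kern)); /* left intact */
	return 0;
}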

I think this implementation is correct, but I'm wondering if there are
any callers of untagged_addr that actually need to preserve kernel
addresses. Are there? (There certainly *were* back when we had set_fs().)

I'm also mildly uneasy about a potential edge case. Naively, one would
expect:

untagged_addr(current->mm, addr) + size ==
untagged_addr(current->mm, addr + size)

at least for an address that is valid enough to be potentially
dereferenced. This isn't true any more for a size that overflows into the
tag bit range.
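
To make the overflow case concrete, reusing the hypothetical LAM_U57-style
untag() sketch from above with made-up numbers:

	uint64_t tag  = 0x2aULL << 57;
	uint64_t addr = tag | ((1ULL << 57) - 0x1000); /* tagged, near the top of the 57-bit range */
	uint64_t size = 0x2000;

	/* untag(addr) + size  == (1ULL << 57) + 0x1000                  */
	/* untag(addr + size)  == 0x1000, because the carry spills into
	 *                       bit 57, which untagging then strips     */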

I *think* we're okay though -- __access_ok requires that addr <= limit -
size, so any range that overflows into tag bits will be rejected even if
the entire range consists of valid (tagged) user addresses.
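
The check in question looks roughly like this (a simplified sketch, not the
exact kernel code):

static bool range_ok(uint64_t addr, uint64_t size, uint64_t limit)
{
	/* Any range whose end would spill past 'limit' is rejected,
	 * including one that would carry into the tag bits. */
	return size <= limit && addr <= limit - size;
}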

So:

Acked-by: Andy Lutomirski <luto@kernel.org>
