    Subject: Re: [PATCH v0.9.1 2/6] mm, x86/uaccess: add userspace atomic helpers
    On Mon, Nov 22, 2021 at 01:13:23PM -0800, Peter Oskolkov wrote:

    > +static inline int __try_cmpxchg_user_32(u32 *uval, u32 __user *uaddr,
    > +					u32 oldval, u32 newval)

    That's a 'try_cmpxchg' function but *NOT* the try_cmpxchg semantics;
    please don't do that. Fixed in the below.
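
    For reference, the try_cmpxchg() contract is: return a bool for success,
    and on failure write the value actually found back into the old-value
    pointer, so the caller can retry without a separate re-read. A minimal
    sketch of that contract built on plain cmpxchg() (helper name made up):

    	static inline bool sketch_try_cmpxchg32(u32 *ptr, u32 *old, u32 new)
    	{
    		u32 cur = cmpxchg(ptr, *old, new);	/* returns the value found */

    		if (cur == *old)
    			return true;

    		*old = cur;	/* report what was there instead */
    		return false;
    	}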

    > +int cmpxchg_user_64(u64 __user *uaddr, u64 *curr_val, u64 new_val)
    > +{
    > +	int ret = -EFAULT;
    > +	u64 __old = *curr_val;
    > +
    > +	/* Validate proper alignment. */
    > +	if (unlikely(((unsigned long)uaddr % sizeof(*uaddr)) ||
    > +			((unsigned long)curr_val % sizeof(*curr_val))))
    > +		return -EINVAL;
    > +
    > +	if (unlikely(!access_ok(uaddr, sizeof(*uaddr))))
    > +		return -EFAULT;
    > +
    > +	pagefault_disable();
    > +
    > +	while (true) {
    > +		ret = -EFAULT;
    > +		if (!user_access_begin(uaddr, sizeof(*uaddr)))
    > +			break;
    > +
    > +		ret = __try_cmpxchg_user_64(curr_val, uaddr, __old, new_val);
    > +		user_access_end();
    > +
    > +		if (!ret) {
    > +			ret = *curr_val == __old ? 0 : -EAGAIN;
    > +			break;
    > +		}
    > +
    > +		if (fix_pagefault((unsigned long)uaddr, true, sizeof(*uaddr)) < 0)
    > +			break;
    > +	}
    > +
    > +	pagefault_enable();
    > +	return ret;
    > +}

    Please, just read what you wrote. This scored *really* high on the
    WTF'o'meter.

    That is aside from:

    - that user_access_begin() includes access_ok().
    - the fact that having SMAP *inside* a cmpxchg loop is ridiculous.
    - that you write cmpxchg inside a loop, but it isn't actually a cmpxchg-loop.

    No, the real problem is:

    - you *DISABLE* pagefaults
    - you force the exception handler
    - you manually fix up the fault

    while you could've just done the op and let the fault handler do its
    thing. That whole function is pointless.



    So as a penance for not having looked at this before I wrote you the
    replacement. The asm-goto-output variant isn't actually compile tested,
    but the old complicated thing is. Also, I'm >.< close to merging the
    series that kills .fixup for x86, but the fixup (pun intended) should be
    trivial.
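
    (For reference, hedging since that series isn't merged yet: with .fixup
    gone, the error path in the non-asm-goto variant below should collapse to
    a single extable annotation, roughly

    	"1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"
    	CC_SET(z)
    	"2:\n"
    	_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %[errout])

    i.e. the "3: mov %[efault], %[errout]; jmp 2b" stanza and the [efault]
    input simply go away.)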

    Usage can be gleaned from the bigger patch I sent you in reply to 0/ but
    TL;DR:

    	if (!user_access_begin(uptr, sizeof(u64)))
    		return -EFAULT;

    	unsafe_get_user(old, uptr, Efault);
    	do {
    		new = func(old);
    	} while (!__try_cmpxchg_user(uptr, &old, new, Efault));

    	user_access_end();

    	return 0;

    Efault:
    	user_access_end();
    	return -EFAULT;


    Then, if called within pagefault_disable(), it'll get -EFAULT when the
    access faults; if called without it, it'll just take the fault and try to
    fix it up if at all possible.
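
    Spelled out as a complete function, assuming the macros below (the helper
    name and the increment op are made up, purely illustrative):

    	/* Hypothetical example: atomically increment a u64 in userspace. */
    	static int user_atomic_inc64(u64 __user *uptr)
    	{
    		u64 old, new;

    		if (!user_access_begin(uptr, sizeof(u64)))
    			return -EFAULT;

    		unsafe_get_user(old, uptr, Efault);
    		do {
    			new = old + 1;
    		} while (!__try_cmpxchg_user(uptr, &old, new, Efault));

    		user_access_end();
    		return 0;

    	Efault:
    		user_access_end();
    		return -EFAULT;
    	}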


    ---
    diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
    index 33a68407def3..909c48083c4f 100644
    --- a/arch/x86/include/asm/uaccess.h
    +++ b/arch/x86/include/asm/uaccess.h
    @@ -341,6 +341,37 @@ do { \
    		     : [umem] "m" (__m(addr))				\
    		     : : label)

    +#define __try_cmpxchg_user_asm(itype, _ptr, _pold, _new, label)	({ \
    +	bool success;							\
    +	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
    +	__typeof__(*(_ptr)) __old = *_old;				\
    +	__typeof__(*(_ptr)) __new = (_new);				\
    +	asm_volatile_goto("\n"						\
    +		     "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
    +		     _ASM_EXTABLE_UA(1b, %l[label])			\
    +		     : CC_OUT(z) (success),				\
    +		       [ptr] "+m" (*_ptr),				\
    +		       [old] "+a" (__old)				\
    +		     : [new] "r" (__new)				\
    +		     : "memory", "cc"					\
    +		     : label);						\
    +	if (unlikely(!success))						\
    +		*_old = __old;						\
    +	likely(success); })
    +
    +
    +#define __xchg_user_asm(itype, _ptr, _val, label)	({		\
    +	__typeof__(*(_ptr)) __ret = (_val);				\
    +	asm_volatile_goto("\n"						\
    +		     "1: " LOCK_PREFIX "xchg"itype" %[var], %[ptr]\n"	\
    +		     _ASM_EXTABLE_UA(1b, %l[label])			\
    +		     : [var] "+r" (__ret),				\
    +		       [ptr] "+m" (*(_ptr))				\
    +		     :							\
    +		     : "memory", "cc"					\
    +		     : label);						\
    +	__ret; })
    +
    #else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT

    #ifdef CONFIG_X86_32
    @@ -411,8 +442,83 @@ do { \
    		     : [umem] "m" (__m(addr)),				\
    		       [efault] "i" (-EFAULT), "0" (err))

    +#define __try_cmpxchg_user_asm(itype, _ptr, _pold, _new, label)	({ \
    +	int __err = 0;							\
    +	bool success;							\
    +	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold);		\
    +	__typeof__(*(_ptr)) __old = *_old;				\
    +	__typeof__(*(_ptr)) __new = (_new);				\
    +	asm volatile("\n"						\
    +		     "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
    +		     CC_SET(z)						\
    +		     "2:\n"						\
    +		     ".pushsection .fixup,\"ax\"\n"			\
    +		     "3: mov %[efault], %[errout]\n"			\
    +		     "   jmp 2b\n"					\
    +		     ".popsection\n"					\
    +		     _ASM_EXTABLE_UA(1b, 3b)				\
    +		     : CC_OUT(z) (success),				\
    +		       [errout] "+r" (__err),				\
    +		       [ptr] "+m" (*_ptr),				\
    +		       [old] "+a" (__old)				\
    +		     : [new] "r" (__new),				\
    +		       [efault] "i" (-EFAULT)				\
    +		     : "memory", "cc");					\
    +	if (unlikely(__err))						\
    +		goto label;						\
    +	if (unlikely(!success))						\
    +		*_old = __old;						\
    +	likely(success); })
    +
    +#define __xchg_user_asm(itype, _ptr, _val, label)	({		\
    +	int __err = 0;							\
    +	__typeof__(*(_ptr)) __ret = (_val);				\
    +	asm volatile("\n"						\
    +		     "1: " LOCK_PREFIX "xchg"itype" %[var], %[ptr]\n"	\
    +		     "2:\n"						\
    +		     ".pushsection .fixup,\"ax\"\n"			\
    +		     "3: mov %[efault], %[errout]\n"			\
    +		     "   jmp 2b\n"					\
    +		     ".popsection\n"					\
    +		     _ASM_EXTABLE_UA(1b, 3b)				\
    +		     : [ptr] "+m" (*(_ptr)),				\
    +		       [var] "+r" (__ret),				\
    +		       [errout] "+r" (__err)				\
    +		     : [efault] "i" (-EFAULT)				\
    +		     : "memory", "cc");					\
    +	if (unlikely(__err))						\
    +		goto label;						\
    +	__ret; })
    +
    #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT

    +extern void __try_cmpxchg_user_wrong_size(void);
    +extern void __xchg_user_wrong_size(void);
    +
    +#define __try_cmpxchg_user(_ptr, _oldp, _nval, _label)	({	\
    +	__typeof__(*(_ptr)) __ret;					\
    +	switch (sizeof(__ret)) {					\
    +	case 4:	__ret = __try_cmpxchg_user_asm("l", (_ptr), (_oldp),	\
    +					       (_nval), _label);	\
    +		break;							\
    +	case 8:	__ret = __try_cmpxchg_user_asm("q", (_ptr), (_oldp),	\
    +					       (_nval), _label);	\
    +		break;							\
    +	default: __try_cmpxchg_user_wrong_size();			\
    +	}								\
    +	__ret; })
    +
    +#define __xchg_user(_ptr, _nval, _label)	({			\
    +	__typeof__(*(_ptr)) __ret;					\
    +	switch (sizeof(__ret)) {					\
    +	case 4:	__ret = __xchg_user_asm("l", (_ptr), (_nval), _label);	\
    +		break;							\
    +	case 8:	__ret = __xchg_user_asm("q", (_ptr), (_nval), _label);	\
    +		break;							\
    +	default: __xchg_user_wrong_size();				\
    +	}								\
    +	__ret; })
    +
    /* FIXME: this hack is definitely wrong -AK */
    struct __large_struct { unsigned long buf[100]; };
    #define __m(x) (*(struct __large_struct __user *)(x))
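
    __xchg_user() follows the same label-based convention: it returns the
    value that was in the user word and jumps to the label on a fault. A
    minimal usage sketch (helper name made up):

    	/* Hypothetical example: unconditionally store a new value, return the old. */
    	static int user_xchg32(u32 __user *uptr, u32 new, u32 *old)
    	{
    		if (!user_access_begin(uptr, sizeof(*uptr)))
    			return -EFAULT;

    		*old = __xchg_user(uptr, new, Efault);
    		user_access_end();
    		return 0;

    	Efault:
    		user_access_end();
    		return -EFAULT;
    	}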