Subject: Re: [kernel-hardening] [PATCH v8 3/3] x86/refcount: Implement fast refcount overflow protection
From: Li Kun <>
Date: Tue, 25 Jul 2017 20:03:08 +0800
Hi Kees,
on 2017/7/25 2:35, Kees Cook wrote:
> +static __always_inline __must_check
> +int __refcount_add_unless(refcount_t *r, int a, int u)
> +{
> +	int c, new;
> +
> +	c = atomic_read(&(r->refs));
> +	do {
> +		if (unlikely(c == u))
> +			break;
> +
> +		asm volatile("addl %2,%0\n\t"
> +			REFCOUNT_CHECK_LT_ZERO
> +			: "=r" (new)
> +			: "0" (c), "ir" (a),
> +			  [counter] "m" (r->refs.counter)
> +			: "cc", "cx");
Here, when the result is LT_ZERO, you will saturate r->refs.counter
itself, which makes the first atomic_try_cmpxchg(&(r->refs), &c, new)
bound to fail. Maybe we can just saturate the value of the variable
"new" instead? (A rough sketch follows the quoted function.)
> +
> +	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));
> +
> +	return c;
> +}
> +
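Just to illustrate the idea, a rough plain-C sketch, ignoring the asm
and exception machinery of your patch. I am assuming INT_MIN / 2 as the
saturation value the exception handler writes; the names below are only
illustrative, not taken from the series:

static __always_inline __must_check
int __refcount_add_unless(refcount_t *r, int a, int u)
{
	int c, new;

	c = atomic_read(&(r->refs));
	do {
		if (unlikely(c == u))
			break;

		new = c + a;
		/*
		 * If the add went negative, saturate the local copy
		 * rather than writing r->refs.counter from the
		 * exception path, so the first atomic_try_cmpxchg()
		 * can still succeed and installs the saturated value
		 * itself.
		 */
		if (unlikely(new < 0))
			new = INT_MIN / 2;	/* assumed saturation value */

	} while (!atomic_try_cmpxchg(&(r->refs), &c, new));

	return c;
}

This way the cmpxchg loop does not have to go around one extra time
just to observe the value the exception handler already wrote to memory.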
-- 
Best Regards
Li Kun