Subject: Re: [PATCH v11 01/12] add support for Clang's Shadow Call Stack (SCS)
On Mon, Apr 20, 2020 at 06:17:28PM +0100, Will Deacon wrote:
> > +ifdef CONFIG_SHADOW_CALL_STACK
> > +CC_FLAGS_SCS := -fsanitize=shadow-call-stack
> > +KBUILD_CFLAGS += $(CC_FLAGS_SCS)
> > +export CC_FLAGS_SCS
> > +endif
>
> CFLAGS_SCS would seem more natural to me, although I see ftrace does it this
> way.

Right, I followed ftrace's example here.

> > +config SHADOW_CALL_STACK
> > + bool "Clang Shadow Call Stack"
> > + depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
> > + help
> > + This option enables Clang's Shadow Call Stack, which uses a
> > + shadow stack to protect function return addresses from being
> > + overwritten by an attacker. More information can be found in
> > + Clang's documentation:
> > +
> > + https://clang.llvm.org/docs/ShadowCallStack.html
> > +
> > + Note that security guarantees in the kernel differ from the ones
> > + documented for user space. The kernel must store addresses of shadow
> > + stacks used by other tasks and interrupt handlers in memory, which
> > + means an attacker capable of reading and writing arbitrary memory
> > + may be able to locate them and hijack control flow by modifying
> > + shadow stacks that are not currently in use.
>
> Shouldn't some of this depend on CC_IS_CLANG?

Sure, I'll add CC_IS_CLANG here in the next version. Note that we do
check for compiler support before selecting ARCH_SUPPORTS_SHADOW_CALL_STACK.
The flags are architecture-specific, so the check is done in the arch Kconfig.

> > +config SHADOW_CALL_STACK_VMAP
> > + bool "Use virtually mapped shadow call stacks"
> > + depends on SHADOW_CALL_STACK
> > + help
> > + Use virtually mapped shadow call stacks. Selecting this option
> > + provides better stack exhaustion protection, but increases per-thread
> > + memory consumption as a full page is allocated for each shadow stack.
>
> Given that this feature applies only to arm64 kernels built with clang, it
> feels weird to further segment that userbase with another config option.
> Does Android enable SHADOW_CALL_STACK_VMAP? If not, maybe we should ditch
> it for now and add it when we have a user.

Android doesn't enable the VMAP option right now due to increased memory
overhead. I'll drop it from v12.

> > +/*
> > + * A random number outside the kernel's virtual address space to mark the
> > + * end of the shadow stack.
> > + */
> > +#define SCS_END_MAGIC 0xaf0194819b1635f6UL
>
> This seems like it might be arm64-specific. Why not choose something based
> off CONFIG_ILLEGAL_POINTER_VALUE (see linux/poison.h)?

Sure, I'll use POISON_POINTER_DELTA here.
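
For example (the low bits are arbitrary; POISON_POINTER_DELTA comes
from linux/poison.h and is derived from CONFIG_ILLEGAL_POINTER_VALUE,
or is 0 on architectures that don't define one):

  #include <linux/poison.h>

  /* An address that can never point into the kernel's address space. */
  #define SCS_END_MAGIC	(0x5f6UL + POISON_POINTER_DELTA)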

> > +static inline void *__scs_base(struct task_struct *tsk)
>
> Please avoid using 'inline' in C files unless there's a good reason not
> to let the compiler figure it out.

Ack.

> > +{
> > + /*
> > + * To minimize risk the of exposure, architectures may clear a
>
> Should be "the risk of exposure".

Thanks.

> > + * The shadow call stack is aligned to SCS_SIZE, and grows
> > + * upwards, so we can mask out the low bits to extract the base
> > + * when the task is not running.
> > + */
> > + return (void *)((unsigned long)task_scs(tsk) & ~(SCS_SIZE - 1));
>
> Could we avoid forcing this alignment if we stored the SCS pointer as a
> (base,offset) pair instead? That might be friendlier on the allocations
> later on.

The idea is to avoid storing the current task's shadow stack address in
memory, which is why I would rather not store the base address either.
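
The alignment is what makes that workable: since each stack is
SCS_SIZE-aligned, the base can always be recomputed from whatever bits
of the pointer remain, so it never needs its own slot in memory.
Roughly (with a made-up helper name for illustration):

  /*
   * Any pointer into an SCS_SIZE-aligned shadow stack masks down to
   * the base, wherever the task last left it pointing.
   */
  static unsigned long scs_base_of(unsigned long scs_ptr)
  {
  	return scs_ptr & ~(SCS_SIZE - 1);
  }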

> > +static inline void scs_set_magic(void *s)
> > +{
> > + *scs_magic(s) = SCS_END_MAGIC;
>
> You added task_set_scs() for this sort of thing, so I'm not convinced you
> need this extra helper.

Agreed, I'll drop this.

> > + if (s)
> > + scs_set_magic(s);
> > + /* TODO: poison for KASAN, unpoison in scs_free */
>
> We don't usually commit these. What's missing?

At the time, KASAN didn't support poisoning vmalloc'ed memory, but it
looks like that was fixed a while back.
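
In that case, the TODO could be resolved roughly as follows (a
simplified sketch without the percpu cache, not the eventual patch):
poison the stack at allocation so KASAN flags stray instrumented
accesses to it (the compiler's own shadow stack stores aren't
instrumented), and unpoison it again before freeing:

  #include <linux/kasan.h>
  #include <linux/vmalloc.h>

  static void *scs_alloc(int node)
  {
  	void *s = vmalloc_node(SCS_SIZE, node);

  	if (!s)
  		return NULL;

  	*scs_magic(s) = SCS_END_MAGIC;

  	/* Catch unintentional reads/writes to the shadow stack. */
  	kasan_poison_vmalloc(s, SCS_SIZE);
  	return s;
  }

  static void scs_free(void *s)
  {
  	/* Make the region accessible again before it's returned. */
  	kasan_unpoison_vmalloc(s, SCS_SIZE);
  	vfree(s);
  }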

> > +static int scs_cleanup(unsigned int cpu)
> > +{
> > + int i;
> > + void **cache = per_cpu_ptr(scs_cache, cpu);
> > +
> > + for (i = 0; i < NR_CACHED_SCS; i++) {
> > + vfree(cache[i]);
> > + cache[i] = NULL;
> > + }
>
> Hmm, can this run concurrently with another CPU doing a stack allocation
> with this_cpu_cmpxchg()? It probably works out on arm64 thanks to the use
> of atomics, but we shouldn't be relying on that in core code.

This is essentially identical to the code in kernel/fork.c. Anyway, all
of this code goes away with the VMAP option.
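
For context, the free path parks stacks in percpu cache slots the same
way kernel/fork.c caches thread stacks, roughly:

  void scs_free(void *s)
  {
  	int i;

  	/*
  	 * Try to park the stack in an empty percpu cache slot; fall
  	 * back to vfree() when the cache is full.
  	 */
  	for (i = 0; i < NR_CACHED_SCS; i++)
  		if (this_cpu_cmpxchg(scs_cache[i], NULL, s) == NULL)
  			return;

  	vfree(s);
  }

Assuming scs_cleanup() is registered at a control-CPU hotplug state
like fork.c's free_vm_stack_cache(), it only touches the cache of a
CPU that is already offline and can't be running this loop.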

> > +void __init scs_init(void)
> > +{
> > + scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
> > + 0, NULL);
> > + WARN_ON(!scs_cache);
>
> Memory allocation failure should be noisy enough without this.

Sure, I'll remove the warning.

> > +void scs_task_reset(struct task_struct *tsk)
> > +{
> > + /*
> > + * Reset the shadow stack to the base address in case the task
> > + * is reused.
> > + */
> > + task_set_scs(tsk, __scs_base(tsk));
> > +}
>
> Why isn't this in the header?

> > +bool scs_corrupted(struct task_struct *tsk)
> > +{
> > + unsigned long *magic = scs_magic(__scs_base(tsk));
> > +
> > + return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
> > +}
>
> Same here.

I'll move both to the header file.
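
Moved into the header, the two become static inlines along these lines
(assuming __scs_base() and scs_magic() are visible there as well):

  static inline void scs_task_reset(struct task_struct *tsk)
  {
  	/* Reset the shadow stack to its base in case the task is reused. */
  	task_set_scs(tsk, __scs_base(tsk));
  }

  static inline bool scs_corrupted(struct task_struct *tsk)
  {
  	unsigned long *magic = scs_magic(__scs_base(tsk));

  	return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
  }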

> > +void scs_release(struct task_struct *tsk)
> > +{
> > + void *s;
> > +
> > + s = __scs_base(tsk);
> > + if (!s)
> > + return;
> > +
> > + WARN_ON(scs_corrupted(tsk));
> > +
> > + task_set_scs(tsk, NULL);
>
> Aren't we about to free the task here? What does clearing the scs pointer
> achieve?

True, it doesn't achieve much; it just leaves one fewer shadow stack
pointer in memory. I'll drop this from the next version.
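
With that gone, the function reduces to something like:

  void scs_release(struct task_struct *tsk)
  {
  	void *s = __scs_base(tsk);

  	if (!s)
  		return;

  	WARN_ON(scs_corrupted(tsk));
  	scs_free(s);
  }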

Sami
