Subject: Re: [PATCH -tip 18/32] kernel/entry: Add support for core-wide protection of kernel-mode
Hi Peter,

On Tue, Nov 24, 2020 at 05:09:06PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 17, 2020 at 06:19:48PM -0500, Joel Fernandes (Google) wrote:
> > Core-scheduling prevents hyperthreads in usermode from attacking each
> > other, but it does not do anything about one of the hyperthreads
> > entering the kernel for any reason. This leaves the door open for MDS
> > and L1TF attacks with concurrent execution sequences between
> > hyperthreads.
> >
> > This patch therefore adds support for protecting all syscall and IRQ
> > kernel-mode entries. Care is taken to track the outermost usermode exit
> > and entry using per-cpu counters. In cases where one of the hyperthreads
> > enters the kernel, no additional IPIs are sent. Further, IPIs are avoided
> > when not needed - for example, idle and non-cookie HTs do not need to be
> > forced into kernel mode.
> >
> > More information about attacks:
> > For MDS, it is possible for syscalls, IRQ and softirq handlers to leak
> > data to either host or guest attackers. For L1TF, it is possible to leak
> > to guest attackers. There is no possible mitigation involving flushing
> > of buffers to avoid this, since the execution of attacker and victim
> > happens concurrently on 2 or more HTs.
>
> Oh gawd; this is horrible...

Yeah, it's the same issue we discussed earlier this year :)

> > +bool sched_core_wait_till_safe(unsigned long ti_check)
> > +{
> > +        bool restart = false;
> > +        struct rq *rq;
> > +        int cpu;
> > +
> > +        /* We clear the thread flag only at the end, so no need to check for it. */
> > +        ti_check &= ~_TIF_UNSAFE_RET;
> > +
> > +        cpu = smp_processor_id();
> > +        rq = cpu_rq(cpu);
> > +
> > +        if (!sched_core_enabled(rq))
> > +                goto ret;
> > +
> > +        /* Down grade to allow interrupts to prevent stop_machine lockups.. */
> > +        preempt_disable();
> > +        local_irq_enable();
> > +
> > +        /*
> > +         * Wait till the core of this HT is not in an unsafe state.
> > +         *
> > +         * Pair with raw_spin_lock/unlock() in sched_core_unsafe_enter/exit().
> > +         */
> > +        while (smp_load_acquire(&rq->core->core_unsafe_nest) > 0) {
> > +                cpu_relax();
> > +                if (READ_ONCE(current_thread_info()->flags) & ti_check) {
> > +                        restart = true;
> > +                        break;
> > +                }
> > +        }
>
> What's that ACQUIRE for?

I was concerned about something like the below on a weakly-ordered arch:

Kernel                            Attacker
------                            --------
write unsafe=1

kernel code does stores           while (unsafe == 1); (ACQUIRE)
        ^                                 ^
        | needs to be ordered             | needs to be ordered
        v                                 v
write unsafe=0 (RELEASE)          Attacker code.


Here, I want the accesses made by kernel code to be ordered with respect to
the write to the unsafe nesting counter, so that those accesses cannot be
reordered past the counter update and be observed by the attacker code
afterwards.

It could be argued that this is a theoretical concern, but I wanted to play it
safe. In particular, the existing uarch buffer flushing before entering the
attacker code may already make it impossible in practice for the attacker to do
anything bad, even without the additional memory barriers.
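
For illustration, here is a minimal userspace C11 sketch of the acquire/release
pairing I have in mind. It is purely an analogy, not the patch code:
kernel_work stands in for the kernel-mode accesses and unsafe_nest for
rq->core->core_unsafe_nest.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int unsafe_nest = 1;
static int kernel_work;

/* Stand-in for kernel-mode accesses followed by the "unsafe exit". */
static void *kernel_side(void *arg)
{
        (void)arg;
        kernel_work = 42;       /* accesses that must complete before the exit */
        /* RELEASE: the store above cannot be reordered after this write. */
        atomic_store_explicit(&unsafe_nest, 0, memory_order_release);
        return NULL;
}

/* Stand-in for the sibling spinning until the core is no longer unsafe. */
static void *waiter_side(void *arg)
{
        (void)arg;
        /* ACQUIRE: code after the loop cannot be reordered before the load. */
        while (atomic_load_explicit(&unsafe_nest, memory_order_acquire) > 0)
                ;
        printf("observed %d\n", kernel_work);   /* guaranteed to print 42 */
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, kernel_side, NULL);
        pthread_create(&b, NULL, waiter_side, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Without the RELEASE on the writer side (or the ACQUIRE on the spinning side), a
weakly-ordered machine could let the protected accesses drift past the counter
update, which is exactly the window I want to close.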

> > +
> > +        /* Upgrade it back to the expectations of entry code. */
> > +        local_irq_disable();
> > +        preempt_enable();
> > +
> > +ret:
> > +        if (!restart)
> > +                clear_tsk_thread_flag(current, TIF_UNSAFE_RET);
> > +
> > +        return restart;
> > +}
>
> So if TIF_NEED_RESCHED gets set, we'll break out and reschedule, cute.

Thanks.

> > +        /* Do nothing more if the core is not tagged. */
> > +        if (!rq->core->core_cookie)
> > +                goto unlock;
> > +
> > +        for_each_cpu(i, smt_mask) {
> > +                struct rq *srq = cpu_rq(i);
> > +
> > +                if (i == cpu || cpu_is_offline(i))
> > +                        continue;
> > +
> > +                if (!srq->curr->mm || is_task_rq_idle(srq->curr))
> > +                        continue;
> > +
> > +                /* Skip if HT is not running a tagged task. */
> > +                if (!srq->curr->core_cookie && !srq->core_pick)
> > +                        continue;
> > +
> > +                /*
> > +                 * Force sibling into the kernel by IPI. If work was already
> > +                 * pending, no new IPIs are sent. This is Ok since the receiver
> > +                 * would already be in the kernel, or on its way to it.
> > +                 */
> > +                irq_work_queue_on(&srq->core_irq_work, i);
>
> Why irq_work though? Why not smp_send_reschedule(i)?

I tried that approach. The main issue is avoiding a new IPI when a previous one
has already been sent; I needed some mechanism (shared state) to detect that an
IPI is still pending so that a second one is not sent on top of it.

With irq_work, I get irq_work_is_busy(), which helps with exactly that. With
the other approaches I tried, I ran into issues with CSD locking, and irq_work
already implements the CSD locking the way I wanted. So, to avoid reinventing
the wheel, I stuck with irq_work and it has worked well.
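
For reference, the pattern boils down to something like the rough sketch below
(illustrative only; the names are made up, and the real patch keeps one
irq_work per runqueue in srq->core_irq_work rather than a single global one):

#include <linux/irq_work.h>

static void core_kick_fn(struct irq_work *work)
{
        /* Intentionally empty: taking the IRQ entry itself is the point. */
}

static struct irq_work core_kick;

static void core_kick_init(void)
{
        init_irq_work(&core_kick, core_kick_fn);
}

static void kick_sibling(int cpu)
{
        /*
         * irq_work_queue_on() claims the work item first and returns false
         * if it was already pending, so a second kick before the earlier
         * IPI has been handled does not raise another IPI; the sibling is
         * already in the kernel or has one on the way.
         */
        irq_work_queue_on(&core_kick, cpu);
}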

thanks,

- Joel
