Date: 1 May 2022
From: Jason A. Donenfeld
Subject: Re: is "premature next" a real world rng concern, or just an academic exercise?
Hi Ted,

That's a useful analysis; thanks for that.

On Sat, Apr 30, 2022 at 05:49:55PM -0700, tytso wrote:
> On Wed, Apr 27, 2022 at 03:58:51PM +0200, Jason A. Donenfeld wrote:
> >
> > 3) More broadly speaking, what kernel infoleak is actually acceptable to
> > the degree that anybody would feel okay in the first place about the
> > system continuing to run after it's been compromised?
>
> A one-time kernel infoleak where this might seem most likely is one
> where memory is read while the system is suspended/hibernated, or if
> you have a VM which is frozen and then replicated. A related version
> is one where a VM is getting migrated from one host to another, and
> the attacker is able to grab the system memory from the source "host"
> after the VM is migrated to the destination "host".

You've identified ~two places where compromises happen, but in neither
case is it an attack that can simply be repeated by re-running
`./sploit > state`.

1) Virtual machines:

It seems like after a VM state compromise during migration, or during
snapshotting, the name of the game is getting entropy into the RNG in a
usable way _as soon as possible_, and not delaying that. This is
Nadia's point. There's an inherent tension between waiting some amount
of time in order to use all available entropy at once -- the premature
next requirement -- and using everything you can as fast as you can,
because your output stream is compromised/duplicated, which is very bad
and should be mitigated ASAP at any expense.

[I'm also CC'ing Tom Ristenpart, who's been following this thread, as he
did some work regarding VM snapshots and compromise, and what RNG
recovery in that context looks like, and arrived at pretty similar
points.]

You mentioned virtio-rng as a mitigation for this. That works, but only
if the data read from it are actually used rather quickly, so
/waiting/ to use it is suboptimal.
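
To make "used rather quickly" concrete, userspace can already do the
immediate version of this -- it's essentially what rngd does -- by
reading virtio-rng's /dev/hwrng and crediting the input pool via the
RNDADDENTROPY ioctl. A minimal sketch (needs CAP_SYS_ADMIN; the buffer
size and the full entropy credit below are illustrative):

/* Read fresh bytes from virtio-rng's /dev/hwrng and credit them to
 * the kernel's input pool immediately, rather than letting them sit. */
#include <fcntl.h>
#include <linux/random.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	struct {	/* layout-compatible with struct rand_pool_info */
		int entropy_count;	/* credit, in bits */
		int buf_size;		/* bytes in buf[] */
		unsigned char buf[64];
	} ent;
	int hwrng = open("/dev/hwrng", O_RDONLY);
	int rnd = open("/dev/random", O_WRONLY);

	if (hwrng < 0 || rnd < 0)
		return 1;
	if (read(hwrng, ent.buf, sizeof(ent.buf)) != sizeof(ent.buf))
		return 1;
	ent.entropy_count = sizeof(ent.buf) * 8;
	ent.buf_size = sizeof(ent.buf);
	/* Mixes into the input pool and credits entropy in one step. */
	return ioctl(rnd, RNDADDENTROPY, &ent) < 0;
}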

One of the things added for 5.18 is this new "vmgenid" driver, which
responds to fork/snapshot notifications from hypervisors, so that VMs
can do something _immediately_ upon resumption/migration/etc. That's
probably the best general solution to that problem.
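
The driver itself is tiny: the hypervisor exposes a 16-byte generation
ID in ACPI memory and sends a notify event when it changes. Condensed
from drivers/virt/vmgenid.c (ACPI mapping and error handling elided;
state->next_id points at the hypervisor-updated buffer, VMGENID_SIZE
is 16), the notify path is roughly:

static void vmgenid_notify(struct acpi_device *device, u32 event)
{
	struct vmgenid_state *state = acpi_driver_data(device);
	u8 old_id[VMGENID_SIZE];

	memcpy(old_id, state->this_id, sizeof(old_id));
	memcpy(state->this_id, state->next_id, sizeof(state->this_id));
	if (!memcmp(old_id, state->this_id, sizeof(old_id)))
		return;
	/* The ID changed: we're in a fork or a restored snapshot.
	 * Mix the unique ID in and reseed immediately. */
	add_vmfork_randomness(state->this_id, sizeof(state->this_id));
}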

Though vmgenid is supported by QEMU, VMware, Hyper-V, and hopefully soon
Firecracker, there'll still be people who don't have it for one reason
or another (and it has to be enabled manually in QEMU with `-device
vmgenid,guid=auto`; perhaps I should send a patch adding that to some
default machine types). Maybe that's their problem, but I take your
point that we can still try to be less bad than otherwise by using more
entropy more often, rather than delaying as the premature next model
requirements would have us do.

2) Suspend / hibernation:

This is kind of the same situation as virtual machines, but the
particulars are a little bit different:

- There's no hypervisor giving us new seed material on resumption like
we have with VM snapshots and vmgenid; but

- We also always know when it happens, because it's not transparent to
the OS, so at least we can attempt to do something immediately like
we do with the vmgenid driver.

Fortunately, most systems that are doing suspend or hibernation these
days also have a RDRAND-like thing. It seems like a good idea for me to
add a PM notifier that mixes into the pool both ktime_get_boottime_ns()
and ktime_get(), along with whatever event type info comes from the
notifier block (suspend vs hibernate vs whatever else), to account for
the amount of time spent in the sleeping state, and then immediately
reseeds the crng, which will pull in a bunch of RDSEED/RDRAND/RDTSC
values. This way, on resumption, the system is always in a good place.
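
As a sketch (untested; crng_reseed() here stands in for whatever the
internal immediate-reseed entry point ends up being, and I'm using
ktime_get_ns() for the ktime_get() reading):

static int random_pm_notification(struct notifier_block *nb,
				  unsigned long action, void *data)
{
	/* Timestamps plus the event type account for time spent
	 * asleep and distinguish suspend/hibernate/etc. */
	u64 stamps[] = { ktime_get_boottime_ns(), ktime_get_ns(), action };

	add_device_randomness(stamps, sizeof(stamps));

	if (action == PM_POST_SUSPEND || action == PM_POST_HIBERNATION ||
	    action == PM_POST_RESTORE)
		crng_reseed(); /* pulls in fresh RDSEED/RDRAND/RDTSC */
	return 0;
}

static struct notifier_block pm_notifier = {
	.notifier_call = random_pm_notification,
};

/* ... and in the RNG's init path: */
register_pm_notifier(&pm_notifier);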

I did this years ago in WireGuard -- clearing key material before
suspend -- and there are some details around autosuspend (see
wg_pm_notification() in drivers/net/wireguard/device.c), but it's not
that hard to get right, so I'll give it a stab and send a patch.
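
For reference, the general shape of that code is roughly the following
(condensed from memory; see the actual wg_pm_notification() for the
real thing -- the autosuspend guard is the detail worth copying):

static int wg_pm_notification(struct notifier_block *nb,
			      unsigned long action, void *data)
{
	/* Machines that suspend constantly as part of normal
	 * operation shouldn't clear keys on every cycle. */
	if (IS_ENABLED(CONFIG_PM_AUTOSLEEP) || IS_ENABLED(CONFIG_ANDROID))
		return 0;
	if (action != PM_HIBERNATION_PREPARE && action != PM_SUSPEND_PREPARE)
		return 0;
	/* ... then walk every device and peer, zeroing ephemeral
	 * keypairs with memzero_explicit() ... */
	return 0;
}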

> But if the attacker can actually obtain internal state from one
> reconstituted VM, and use that to attack another reconstituted VM, and
> the attacker also knows what the nonce or time seed that was used so
> that different reconstituted VMs will have unique CRNG streams, this
> might be a place where the "premature next" attack might come into
> play.

This is the place where it matters, I guess. It's also where the
tradeoffs from Nadia's argument come into play. System state gets
compromised during VM migration / hibernation. It comes back online and
starts doling out compromised random numbers. Worst case scenario is
there's no RDRAND or vmgenid or virtio-rng, and we've just got the good
old interrupt handler mangling cycle counters. Choices: A) recover from
the compromise /slowly/ in order to mitigate premature next, or B)
recover from the compromise /quickly/ in order to prevent things like
nonce reuse.

What is more likely? That an attacker who compromised this state at one
point in time doesn't have the means to do it again elsewhere in the
pipeline, will use a high-bandwidth /dev/urandom output stream to mount
a premature next attack, and is going after a high-value target that
inexplicably doesn't have RDRAND/vmgenid/virtio-rng enabled? Or that
Nadia's group (or that large building in Utah) will get an Internet tap
and simply start looking for repeated nonces to break?

Jason
