From: Jann Horn <jannh@google.com>
Subject: [PATCH] exit: Put an upper limit on how often we can oops
Date: Mon, 7 Nov 2022
Many Linux systems are configured to not panic on oops; but allowing an
attacker to oops the system **really** often can make even bugs that look
completely unexploitable (like NULL dereferences and such) exploitable: if
each crash leaves a refcount elevated by one, or leaves a lock taken in read
mode, the affected counter eventually overflows.
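
To illustrate the pattern (a made-up sketch, not code from this patch or any
real kernel bug; the struct and function names are hypothetical):

	/* An object with an open-coded, non-saturating 32-bit refcount. */
	struct foo {
		atomic_t refs;	/* plain atomic_t: wraps around at 2^32 */
	};

	void foo_use(struct foo *f)
	{
		atomic_inc(&f->refs);	/* take a reference */
		do_stuff(f);		/* if this oopses, the task dies here... */
		atomic_dec(&f->refs);	/* ...and the matching put never runs */
	}

Each such oops leaks one reference; after enough of them the count wraps to
zero and the object can be freed while still in use, turning a "harmless"
oops into a use-after-free. refcount_t saturates instead of wrapping, which
is why the open-coded counters are the interesting targets.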

The most interesting counters for this are 32 bits wide (like open-coded
refcounts that don't use refcount_t). (The ldsem reader count on 32-bit
platforms is just 16 bits, but probably nobody cares about 32-bit platforms
that much nowadays.)

So let's panic the system if the kernel is constantly oopsing.

The speed of oopsing 2^32 times probably depends on several factors, like
how long the stack trace is and which unwinder you're using; an empirically
important one is whether the console is showing a graphical environment or
a text console that oopses get printed to.
In a quick single-threaded benchmark, oopsing in a vfork() child with a very
short stack trace takes only ~510 microseconds per run when a graphical
console is active; switching to a text console that oopses are printed to
slows it down around 87x, to ~45 milliseconds per run.
(Adding more threads makes this faster, but the actual oops printing
happens under &die_lock on x86, so parallelism buys maybe a factor of
around 2 before any further improvement gets eaten up by lock contention.)
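
(As a rough cross-check of the numbers below: at ~510 microseconds per oops, a
single thread needs 2^32 * 510us ≈ 2.2 million seconds ≈ 25 days, and the ~2x
speedup available before &die_lock contention kicks in brings that down to
roughly 12-13 days.)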

It looks like it would take around 8-12 days to overflow a 32-bit counter
with repeated oopsing on a multi-core x86 system running a graphical
environment; both Seth (with a distro kernel on normal hardware in a
standard configuration) and I (in an x86 VM) got numbers in that ballpark.

12 days isn't *that* short on a desktop system, and you'd likely need much
longer on a typical server (assuming that people don't run graphical
desktop environments on their servers). This is also a *very* noisy and
violent approach to exploiting the kernel, and it seems to take orders of
magnitude longer on some machines, probably because things like EFI pstore
slow it down a ton when active.

Signed-off-by: Jann Horn <jannh@google.com>
---
I picked 10000 here to also provide safety for the ldsem code on 32-bit
systems, but you could also argue that the real fix there is to make
ldsem more robust, and that the limit should be something like 2^31...

An alternative approach would be to always let make_task_dead() take the
do_task_dead() path and never exit; but that would probably be a more
disruptive change?

@Kees should this go through your tree? (After waiting a while for
the inevitable bikeshedding on whether the default limit should be closer
to 10000 or 2^31.)
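
For reference, once this is applied, the limit can be inspected and changed
at runtime through the new sysctl; a hypothetical session (10000 is the
default from this patch, 50000 is just an example value):

  # sysctl kernel.oops_limit
  kernel.oops_limit = 10000
  # echo 50000 > /proc/sys/kernel/oops_limit

Since the handler is plain proc_dointvec, any int value can be written.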

 Documentation/admin-guide/sysctl/kernel.rst |  7 +++++++
 include/linux/panic.h                       |  1 +
 kernel/exit.c                               | 22 +++++++++++++++++++++
 kernel/sysctl.c                             |  7 +++++++
 4 files changed, 37 insertions(+)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 98d1b198b2b4c..09713f25b3d62 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -667,6 +667,13 @@ This is the default behavior.
 an oops event is detected.
 
 
+oops_limit
+==========
+
+Number of kernel oopses after which the kernel should panic when
+``panic_on_oops`` is not set.
+
+
 osrelease, ostype & version
 ===========================
 
diff --git a/include/linux/panic.h b/include/linux/panic.h
index c7759b3f20452..5b3e029fe1eb0 100644
--- a/include/linux/panic.h
+++ b/include/linux/panic.h
@@ -21,6 +21,7 @@ extern int panic_on_oops;
 extern int panic_on_unrecovered_nmi;
 extern int panic_on_io_nmi;
 extern int panic_on_warn;
+extern int oops_limit;
 
 extern unsigned long panic_on_taint;
 extern bool panic_on_taint_nousertaint;
diff --git a/kernel/exit.c b/kernel/exit.c
index 35e0a31a0315c..827ceffbfa432 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -872,8 +872,17 @@ void __noreturn do_exit(long code)
 	do_task_dead();
 }
 
+/*
+ * The default value should be high enough to not crash a system that randomly
+ * crashes its kernel from time to time, but low enough to at least not permit
+ * overflowing 32-bit refcounts or the ldsem writer count.
+ */
+int oops_limit = 10000;
+
 void __noreturn make_task_dead(int signr)
 {
+	static atomic_t oops_count = ATOMIC_INIT(0);
+
 	/*
 	 * Take the task off the cpu after something catastrophic has
 	 * happened.
@@ -897,6 +906,19 @@ void __noreturn make_task_dead(int signr)
 		preempt_count_set(PREEMPT_ENABLED);
 	}
 
+	/*
+	 * Every time the system oopses, if the oops happens while a reference
+	 * to an object was held, the reference leaks.
+	 * If the oops doesn't also leak memory, repeated oopsing can cause
+	 * reference counters to wrap around (if they're not using refcount_t).
+	 * This means that repeated oopsing can make unexploitable-looking bugs
+	 * exploitable.
+	 * To make sure this can't happen, place an upper bound on how often the
+	 * kernel may oops without panic().
+	 */
+	if (atomic_inc_return(&oops_count) >= READ_ONCE(oops_limit))
+		panic("Oopsed too often (oops_limit is %d)", oops_limit);
+
 	/*
 	 * We're taking recursive faults here in make_task_dead. Safest is to just
 	 * leave this task alone and wait for reboot.
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 188c305aeb8b7..63370aa4c078f 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1866,6 +1866,13 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
+	{
+		.procname	= "oops_limit",
+		.data		= &oops_limit,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 	{
 		.procname	= "panic_print",
 		.data		= &panic_print,
base-commit: f0c4d9fc9cc9462659728d168387191387e903cc
--
2.38.1.431.g37b22c650d-goog
