Subject: [PATCH 2/2 v3] lib/vsprintf: Initialize vsprintf's pointer hash once the random core is ready.
The printk code invokes vsnprintf() in order to compute the complete
string before adding it to its buffer. This happens in an IRQ-off
region, which leads to a warning on PREEMPT_RT in the random code if
the format string contains a %p for pointer printing. The reason is
that the random core acquires locks which become sleeping locks on
PREEMPT_RT and must not be acquired with interrupts or preemption
disabled.
By default pointers are hashed, which requires a random value on the
first invocation (either by printk or whichever other user comes
first).
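
(Not part of the patch, just a quick illustration of why that random
value is needed, assuming the usual %p/%px semantics documented in
printk-formats.rst: a plain %p prints a hashed per-boot identifier,
which is what requires the siphash key, while %px bypasses the hashing
entirely.)

        /* Hypothetical snippet, not from this patch. */
        void *obj = kmalloc(16, GFP_KERNEL);

        pr_info("hashed:   %p\n", obj);   /* per-boot hashed id, needs ptr_key */
        pr_info("unhashed: %px\n", obj);  /* raw address, no random key needed */
        kfree(obj);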

One could argue that there is no need for printk to disable interrupts
during the vsprintf() invocation, which would fix the problem just
mentioned. However, printk itself can be invoked in a context with
interrupts already disabled, which would lead to the very same problem.

Move the initialization of ptr_key into a worker and schedule it from
subsys_initcall(). This happens early, but after the workqueue
subsystem is ready. Use get_random_bytes() to retrieve the random value
if the RNG core is ready; otherwise re-schedule the worker in two
seconds and try again.
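
(For context, a rough sketch of the caller-visible behaviour, assuming
the existing ptr_to_id() fallback in lib/vsprintf.c stays as is: while
__ptr_to_hashval() still returns an error, a hashed %p is printed as
the "(ptrval)" / "(____ptrval____)" placeholder rather than a real
value.)

        /* Hypothetical early-boot output before/after the key is filled. */
        pr_info("before key: %p\n", obj);  /* -> "before key: (____ptrval____)" */
        pr_info("after key:  %p\n", obj);  /* -> hashed per-boot value */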

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v2…v3:
- schedule a worker every two seconds if the RNG core is not ready.

lib/vsprintf.c | 46 +++++++++++++++++++++++++++-------------------
1 file changed, 27 insertions(+), 19 deletions(-)

--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -751,31 +751,39 @@ static int __init debug_boot_weak_hash_e
early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);

static bool filled_random_ptr_key;
+static siphash_key_t ptr_key __read_mostly;
+static void fill_ptr_key_workfn(struct work_struct *work);
+static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
+
+static void fill_ptr_key_workfn(struct work_struct *work)
+{
+ if (!rng_is_initialized()) {
+ queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ * 2);
+ return;
+ }
+
+ get_random_bytes(&ptr_key, sizeof(ptr_key));
+
+ /* Pairs with smp_rmb() before reading ptr_key. */
+ smp_wmb();
+ WRITE_ONCE(filled_random_ptr_key, true);
+}
+
+static int __init vsprintf_init_hashval(void)
+{
+ fill_ptr_key_workfn(NULL);
+ return 0;
+}
+subsys_initcall(vsprintf_init_hashval)

/* Maps a pointer to a 32 bit unique identifier. */
static inline int __ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
{
- static siphash_key_t ptr_key __read_mostly;
unsigned long hashval;

- if (!READ_ONCE(filled_random_ptr_key)) {
- static bool filled = false;
- static DEFINE_SPINLOCK(filling);
- unsigned long flags;
-
- if (!rng_is_initialized() ||
- !spin_trylock_irqsave(&filling, flags))
- return -EAGAIN;
-
- if (!filled) {
- get_random_bytes(&ptr_key, sizeof(ptr_key));
- /* Pairs with smp_rmb() before reading ptr_key. */
- smp_wmb();
- WRITE_ONCE(filled_random_ptr_key, true);
- filled = true;
- }
- spin_unlock_irqrestore(&filling, flags);
- }
+ if (!READ_ONCE(filled_random_ptr_key))
+ return -EBUSY;
+
/* Pairs with smp_wmb() after writing ptr_key. */
smp_rmb();
