Subject: Re: [PATCHv5] random: Make /dev/random wait for input_pool initialized
Hi Theodore,

On 2/21/19 1:32 AM, Theodore Y. Ts'o wrote:
> Hi Bernd,
>
> I've been looking at your patch, and as far as the core part of what
> you're patching, I think this patch below is a conceptually simpler
> way of fixing the original core problem. Please take a look and let
> me know what you think.
>
> There are some other auxiliary things that your patch is trying to do
> where I'm not sure what you're trying to do, and I'm not at all sure
> it's correct. Those things should really get separated out as
> separate patches.
>

Yes, that is true. The patch just kept growing each time I looked at the code.
Especially the jiffies wrap-around issue is an independent thing.

>> Another minor tweak this patch makes is that when the primary
>> CRNG is periodically reseeded, we reserve twice the amount
>> of read_wakeup_threshold for fairness reasons, to keep
>> /dev/random readable when it is not accessed by user mode.
>
> I'm not sure what you're trying to do/fix here.
>

It is the select function that is not working.

What I am trying to fix is that you can select on /dev/random for
readability, with the guarantee that at least one process will then
be able to read at least one byte from the device, and as long as
nobody reads that byte, the device stays readable.
Additionally, once the first byte has been read from /dev/random,
or /dev/random has reported itself readable via select, /dev/urandom
should also be guaranteed to be fully initialized.
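
To illustrate the guarantee I mean, here is a minimal user-space sketch
(plain POSIX, not taken from any patch): once select() reports
/dev/random readable, the following one-byte read() must not block, and
the descriptor should stay readable until that byte is consumed.

#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
	unsigned char byte;
	fd_set rfds;
	int fd = open("/dev/random", O_RDONLY);

	if (fd < 0)
		return 1;
	FD_ZERO(&rfds);
	FD_SET(fd, &rfds);
	/* Block until the kernel reports /dev/random readable. */
	if (select(fd + 1, &rfds, NULL, NULL, NULL) == 1) {
		/* The guarantee under discussion: this one-byte read
		 * should now succeed without blocking. */
		ssize_t n = read(fd, &byte, 1);
		printf("read returned %zd\n", n);
	}
	close(fd);
	return 0;
}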

Additionally I want the CRNG to be initialized as fast as possible,
and once that is done, /dev/random to become usable as soon as
possible. The re-seeding of the CRNG must also not create race
conditions that invalidate the readability guarantee given by select.

Currently, on a low-entropy system, entropy_avail counts from 0 to 128,
at which point the CRNG initializes; then again from 0 to 128..256, at
which point the CRNG reseeds; then again from 0 to 128..256, at which
point the CRNG reseeds; and so on. Each time entropy_avail is in the
range 0..63, select indicates !ready and then ready again, so it
toggles all the time. That is mainly what I wanted to fix.
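
The toggling is easy to observe from user space with a small loop like
this (my own illustration, not part of any patch), watching the entropy
estimate the kernel exports:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Print the kernel's entropy estimate once per second.  On a
	 * low-entropy system it repeatedly climbs and then drops when
	 * the CRNG (re)seeds, which is what makes select() toggle. */
	for (;;) {
		FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
		int bits = -1;

		if (f) {
			if (fscanf(f, "%d", &bits) != 1)
				bits = -1;
			fclose(f);
		}
		printf("entropy_avail: %d\n", bits);
		sleep(1);
	}
	return 0;
}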

> - Ted
>
> From 9275727af22bc08958bad45d604038aa8e9b6766 Mon Sep 17 00:00:00 2001
> From: Theodore Ts'o <tytso@mit.edu>
> Date: Wed, 20 Feb 2019 16:06:38 -0500
> Subject: [PATCH] random: only read from /dev/random after its pool has received 128 bits
>
> Immediately after boot, we allow reads from /dev/random before its
> entropy pool has been fully initialized. Fix this so that we don't
> allow this until the blocking pool has received 128 bits.
>

My observation with a previous attempt (v3) of my patch is that on a
system where only interrupt randomness is available, it typically takes
2 minutes to accomplish the initial seeding of the CRNG; if from there
one has to wait until there are 128 bits of entropy in the blocking
pool, it will take much too long. That patch was actually buggy and only
ensured 80 bits of entropy in the blocking pool, but even that took 6.5
minutes, which felt to me like an absolutely unacceptable waiting time.

Therefore I would like to keep that down to the 2-3 minutes the CRNG
takes for its initialization. From there I thought: even if the entropy
count of the input_pool drops to zero, the information content is still
there, and after adding the next 64 bits of raw entropy it would be fine
to extract 8 bytes from the input_pool and feed them to the blocking
pool; that would make 6 bytes of output, which should be completely
unpredictable. That is the approach taken in my v5 patch.
If that is not good enough, maybe extract 128 bits from urandom and
inject them into the blocking pool without incrementing the
blocking_pool's entropy count.
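
A rough kernel-side sketch of that second option (hypothetical, not part
of v5; it reuses get_random_bytes()/mix_pool_bytes() from
drivers/char/random.c and deliberately does not call
credit_entropy_bits()):

/* Hypothetical helper: seed the blocking pool from the CRNG once the
 * CRNG is ready, without crediting any entropy to the pool. */
static void seed_blocking_pool_from_crng(void)
{
	__u8 buf[16];	/* 128 bits of CRNG output */

	if (!crng_ready())
		return;
	get_random_bytes(buf, sizeof(buf));
	/* Mix in, but leave blocking_pool.entropy_count untouched. */
	mix_pool_bytes(&blocking_pool, buf, sizeof(buf));
	memzero_explicit(buf, sizeof(buf));
}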

> We do this by repurposing the initialized flag in the entropy pool
> struct, and use the initialized flag in the blocking pool to indicate
> whether it is safe to pull from the blocking pool.
>
> To do this, we needed to rework when we decide to push entropy from the
> input pool to the blocking pool, since the initialized flag for the
> input pool was used for this purpose. To simplify things, we no
> longer use the initialized flag for that purpose, nor do we use the
> entropy_total field any more.
>
> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
> ---
> drivers/char/random.c | 44 +++++++++++++++++------------------
> include/trace/events/random.h | 13 ++++-------
> 2 files changed, 27 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 47ac7cd20fb1..e247c45b2772 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -470,7 +470,6 @@ struct entropy_store {
> unsigned short add_ptr;
> unsigned short input_rotate;
> int entropy_count;
> - int entropy_total;
> unsigned int initialized:1;
> unsigned int last_data_init:1;
> __u8 last_data[EXTRACT_SIZE];
> @@ -643,7 +642,7 @@ static void process_random_ready_list(void)
> */
> static void credit_entropy_bits(struct entropy_store *r, int nbits)
> {
> - int entropy_count, orig;
> + int entropy_count, orig, has_initialized = 0;
> const int pool_size = r->poolinfo->poolfracbits;
> int nfrac = nbits << ENTROPY_SHIFT;
>
> @@ -698,23 +697,25 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
> entropy_count = 0;
> } else if (entropy_count > pool_size)
> entropy_count = pool_size;
> + if ((r == &blocking_pool) && !r->initialized &&
> + (entropy_count >> ENTROPY_SHIFT) > 128)
> + has_initialized = 1;
> if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
> goto retry;
>
> - r->entropy_total += nbits;
> - if (!r->initialized && r->entropy_total > 128) {
> + if (has_initialized)
> r->initialized = 1;
> - r->entropy_total = 0;
> - }
>
> trace_credit_entropy_bits(r->name, nbits,
> - entropy_count >> ENTROPY_SHIFT,
> - r->entropy_total, _RET_IP_);
> + entropy_count >> ENTROPY_SHIFT, _RET_IP_);
>
> if (r == &input_pool) {
> int entropy_bits = entropy_count >> ENTROPY_SHIFT;
> + struct entropy_store *other = &blocking_pool;
>
> - if (crng_init < 2 && entropy_bits >= 128) {
> + if (crng_init < 2) {
> + if (entropy_bits < 128)
> + return;
> crng_reseed(&primary_crng, r);
> entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
> }
> @@ -725,20 +726,14 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits)
> wake_up_interruptible(&random_read_wait);
> kill_fasync(&fasync, SIGIO, POLL_IN);
> }
> - /* If the input pool is getting full, send some
> - * entropy to the blocking pool until it is 75% full.
> + /* If the input pool is getting full, and the blocking
> + * pool has room, send some entropy to the blocking
> + * pool.
> */
> - if (entropy_bits > random_write_wakeup_bits &&
> - r->initialized &&
> - r->entropy_total >= 2*random_read_wakeup_bits) {

This condition (entropy_total >= 2*random_read_wakeup_bits) was not really
bad, since it prevented all arriving entropy from being transferred from
the input pool to the blocking pool. It appeared to extract around 50%
of the entropy, leaving some for CRNG reseeding:

	if (crng_ready() &&
	    (time_after(crng_global_init_time, crng->init_time) ||
	     time_after(jiffies, crng->init_time + CRNG_RESEED_INTERVAL)))
		crng_reseed(crng, crng == &primary_crng ? &input_pool : NULL);

This reseed can only be done if >= 128 bits of entropy are in the
input_pool, so we should avoid extracting everything from the input
pool, except when a user process is reading from /dev/random; that
request should of course be satisfied as fast as possible.


> - struct entropy_store *other = &blocking_pool;
> -
> - if (other->entropy_count <=
> - 3 * other->poolinfo->poolfracbits / 4) {
> - schedule_work(&other->push_work);
> - r->entropy_total = 0;
> - }
> - }
> + if (!work_pending(&other->push_work) &&
> + (ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
> + (ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
> + schedule_work(&other->push_work);

push_to_pool will transfer chunks of random_read_wakeup_bits.

I think push_to_pool should also match this change.

I like that this path is controllable via random_write_wakeup_bits;
that would be lost with this change.
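
For reference, push_to_pool currently looks roughly like this
(paraphrased from drivers/char/random.c, so details may differ); it
moves one random_read_wakeup_bits-sized chunk per invocation:

static void push_to_pool(struct work_struct *work)
{
	struct entropy_store *r = container_of(work, struct entropy_store,
					       push_work);
	BUG_ON(!r);
	/* Transfer one chunk of random_read_wakeup_bits bits. */
	_xfer_secondary_pool(r, random_read_wakeup_bits / 8);
	trace_push_to_pool(r->name, r->entropy_count >> ENTROPY_SHIFT,
			   r->pull->entropy_count >> ENTROPY_SHIFT);
}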

> }
> }
>
> @@ -1553,6 +1548,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
> int large_request = (nbytes > 256);
>
> trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
> + if (!r->initialized && r->pull) {
> + xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
> + if (!r->initialized)
> + return 0;

Did you want to do that in push_to_pool?

> + }
> xfer_secondary_pool(r, nbytes);
> nbytes = account(r, nbytes, 0, 0);
>


The second part of _random_read() does not match this change:


	wait_event_interruptible(random_read_wait,
		ENTROPY_BITS(&input_pool) >=
		random_read_wakeup_bits);
	if (signal_pending(current))
		return -ERESTARTSYS;

and will go into a busy loop, when
ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits, right?
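
What I would expect here instead is something along these lines (just a
sketch, assuming the blocking pool's initialized flag becomes the gate;
this is not taken from either patch):

	wait_event_interruptible(random_read_wait,
		blocking_pool.initialized &&
		ENTROPY_BITS(&input_pool) >=
		random_read_wakeup_bits);
	if (signal_pending(current))
		return -ERESTARTSYS;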

The select is basically done here; I think this should not indicate read
readiness before the blocking pool is initialized, so this needs to be
changed, right?

random_poll(struct file *file, poll_table * wait)
{
	__poll_t mask;
	poll_wait(file, &random_read_wait, wait);
	poll_wait(file, &random_write_wait, wait);
	mask = 0;
	if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
		mask |= EPOLLIN | EPOLLRDNORM;
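
A minimal sketch of the readiness check I have in mind instead (again
assuming the blocking pool's initialized flag gates readability; not
from either patch):

	if (blocking_pool.initialized &&
	    ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
		mask |= EPOLLIN | EPOLLRDNORM;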

That said, thinking about my patch, this part is most likely subject to change:

	num = extract_entropy(r, &buf, 32, 16, crng_ready()
			      ? random_read_wakeup_bits / 4 : 0);

This makes the input_pool entropy_avail hover between 128..384 bits on a
system with very little entropy, in the normal case where
random_read_wakeup_bits=64, while the goal I actually want to achieve
(keeping the readability condition in random_poll true at all times)
would only require

	num = extract_entropy(r, &buf, 32, 16, crng_ready()
			      ? (random_read_wakeup_bits + 7) / 8 : 0);

The former version just leaves a few more bits in the input_pool; the
reason I did it that way is simply to keep it simple (save one add) and
to ensure the ready condition in random_poll does not change due to the
CRNG reseeding.
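
In numbers (assuming the reserved argument of extract_entropy() is
counted in bytes): with random_read_wakeup_bits = 64, the first form
reserves 64/4 = 16 bytes = 128 bits, while the second form reserves
(64+7)/8 = 8 bytes = 64 bits, which is just enough to keep
ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits true.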

What might also be changed is to inject random data from the CRNG, once
it is initialized, into the blocking_pool, to make that device start up
faster, or at least not much slower than before. As I said, I did not
regard that as absolutely necessary, but I might change my mind about
that.


I would be happy to hear your thoughts on how to proceed.


Bernd.
