Date: 2014-07-23
Subject: Reading large amounts from /dev/urandom broken
From: Andrey Utkin
Dear developers, please check bugzilla ticket
https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
issue, but starting with comment #3).

Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
this was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
with the chunk

nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));

which the commit message describes as an "additional paranoia check to
prevent overly large count values to be passed into urandom_read()".
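
For reference, this is where the 33554431 figure comes from, assuming
ENTROPY_SHIFT is 3 (its value in drivers/char/random.c at the time):

    INT_MAX >> (ENTROPY_SHIFT + 3)
        = 2147483647 >> 6
        = 33554431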

I don't know why people pull such large amounts of data from urandom,
but given that two bug reports about this came in today, it is
evidently done in practice.
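
As a minimal sketch (not taken from the bug reports themselves), the
truncation can be observed with a single large read() against
/dev/urandom; on an affected kernel the call is expected to return at
most 33554431 bytes:

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            size_t want = 64UL * 1024 * 1024;  /* 64 MiB, well above the clamp */
            char *buf = malloc(want);
            ssize_t got;
            int fd = open("/dev/urandom", O_RDONLY);

            if (!buf || fd < 0) {
                    perror("setup");
                    return 1;
            }

            /* One large read(); on an affected kernel this stops short
               at 33554431 bytes instead of filling the buffer. */
            got = read(fd, buf, want);
            printf("requested %zu bytes, read() returned %zd\n", want, got);

            close(fd);
            free(buf);
            return 0;
    }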

--
Andrey Utkin

