On Sat, Dec 20, 2014 at 7:55 PM, Jim Cheetham <jim@gonzul.net> wrote:
> Hi Bill.
>
> I think that rngd is useful because it adds a level of protection
> against failure; if the data is not dirty enough, the source is
> rejected.

I hope to support rngd in the future, but that requires code changes to
rngd, and I didn't want to hold up releasing my TRNG while rngd is
updated. Also, enhancements to rngd can take years to propagate to the
Debian stable distro.

I use Keccak-1600 (SHA3) to whiten the data I write to the Linux entropy
pool, so rngd would never detect a failure. Also, its entropy estimate is
very poor, as are ent and the other common entropy estimators. I have a
far more accurate estimator in the health checker, and I drop any 512-bit
sample that has less than 400 bits of "surprise" entropy. Each 512-bit
sample is written with a single ioctl, and I credit the kernel's entropy
estimate with the amount I measured or the predicted average "surprise"
entropy, whichever is lower. This lets the Linux entropy pool recover to
a secure state after each write, with one remaining problem: I think
reads of compromised data can continue until all 512 written bits have
been mixed in. As an option, I can generate any multiple of 256 bits from
the Keccak sponge. The most secure method seems to be writing 4096 bits
after every 512-bit sample read from the TRNG, so that the entire state
of the entropy pool is securely scrambled. That's how I would use it, but
it does slow things down. The Linux rng code is not very fast.
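For concreteness, the "surprise" check is roughly this kind of thing --
this is only an illustrative sketch, not my actual health checker. It
predicts each bit from the previous 8 bits and sums -log2() of the
probability assigned to the bit that actually occurred; the 8-bit context
length and the smoothing are assumptions, only the 512-bit sample size
and 400-bit cutoff are the real numbers:

    /* Sketch: estimate "surprise" entropy of one 512-bit sample. */
    #include <math.h>
    #include <stdint.h>

    #define HISTORY_BITS 8       /* assumed context length */
    #define SAMPLE_BITS 512
    #define MIN_SURPRISE 400.0   /* drop samples estimated below this */

    static uint32_t onesCount[1u << HISTORY_BITS];
    static uint32_t totalCount[1u << HISTORY_BITS];

    double surpriseEntropy(const uint8_t sample[SAMPLE_BITS / 8]) {
        double entropy = 0.0;
        uint32_t context = 0;
        for (uint32_t i = 0; i < SAMPLE_BITS; i++) {
            uint32_t bit = (sample[i >> 3] >> (i & 7)) & 1;
            /* Smoothed estimate of P(bit | previous HISTORY_BITS bits). */
            double pOne = (onesCount[context] + 1.0) /
                          (totalCount[context] + 2.0);
            double p = bit ? pOne : 1.0 - pOne;
            entropy -= log2(p);
            onesCount[context] += bit;
            totalCount[context]++;
            context = ((context << 1) | bit) & ((1u << HISTORY_BITS) - 1);
        }
        return entropy;   /* caller drops the sample if < MIN_SURPRISE */
    }

A stuck or heavily biased source scores far below the 400-bit cutoff and
gets rejected.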
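The pool write itself is a single RNDADDENTROPY ioctl on /dev/random,
which mixes the bytes in and credits the entropy counter in one call. A
minimal sketch, with an assumed helper name and parameters (the Keccak
whitening and most error handling omitted):

    /* Sketch: feed one whitened 512-bit sample to the kernel pool. */
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/random.h>

    #define SAMPLE_BYTES 64u   /* 512 bits per sample */

    int addEntropy(int randomFd, const unsigned char sample[SAMPLE_BYTES],
                   int measuredBits, int predictedAvgBits) {
        struct rand_pool_info *info =
            malloc(sizeof(struct rand_pool_info) + SAMPLE_BYTES);
        if (info == NULL) {
            return -1;
        }
        /* Credit the lower of the measured surprise and the predicted
         * average, so the kernel's estimate stays conservative. */
        info->entropy_count = measuredBits < predictedAvgBits ?
                              measuredBits : predictedAvgBits;
        info->buf_size = SAMPLE_BYTES;
        memcpy(info->buf, sample, SAMPLE_BYTES);
        int result = ioctl(randomFd, RNDADDENTROPY, info);
        free(info);
        return result;
    }

Here randomFd is a descriptor opened on /dev/random for writing, and
RNDADDENTROPY requires CAP_SYS_ADMIN, so the daemon runs as root.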
> On Sun, Dec 21, 2014 at 10:23 AM, Bill Cox <waywardgeek@gmail.com> wrote:
> > Basically anyone using /dev/urandom effectively is
> > mounting a denial of service attack against people who need true random
> > data.
>
> Certainly that's what I used to believe, but it is not the conclusion to
> draw from the description Thomas gave. So I asked Ted Ts'o, and I've
> got a whole load more detail from him that I need to get into my head
> now. When I've done that I'll get back to you :-)
>
> -jim

Thanks. I would like to know more about this issue.

Bill