<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Jul 8, 2015 at 2:47 AM, phred53 <span dir="ltr"><<a href="mailto:phred53@hotmail.com" target="_blank">phred53@hotmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I don't pretend to really understand but I couldn't see how CRC16, or for that matter any deterministic algorithm, "post processing" of the random data could _increase_ entropy, but ent said it did. In addition I can't help but be suspicious of any "improvements" in dieharder test results after whitening; didn't seem to be the case for OneRNG.<br></blockquote><div><br></div><div>There is no way to increase entropy as a post-process. However, you can squeeze all the predictability out of a TRNG-generated bit stream if you fully compress it knowing how the TRNG works, creating a truly random stream. Cryptographic whitening with a CPRNG is then needed only to confound attackers who may find correlations in your compressed data that you did not know about. Since we need cryptographic whitening anyway, we might as well skip the compression step and let the CPRNG compress the stream at the right compression ratio for us.</div><div><br></div><div>In my driver, I output no more bits from the CPRNG than the amount of entropy provably written to it, after dividing by a small safety margin (1.03) to account for component tolerances (1%) and the small non-ideal behavior of the TRNG (typically &lt; 0.2%). This compresses out all the predictability while cryptographically whitening the stream.</div><div><br></div><div>Any TRNG that simply whitens a bit stream of n bits to generate n bits is dangerous. Any non-open-source TRNG probably does this, as it lets the vendor advertise a higher bit-rate of random data than it actually generates, which means they make more money. 
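To make the danger concrete, here is a quick sketch (my own illustration in Python, not OneRNG code or anything from my driver): whiten a stream that contains almost zero real entropy, and a Shannon-entropy estimator like ent still reports it as nearly ideal.

```python
# Illustration only: a "whitened" stream with almost no true entropy
# still looks perfect to a Shannon-entropy estimator like ent.
import hashlib
import math
from collections import Counter

seed = 42  # pretend this tiny value is all the entropy the device has
stream = b"".join(
    hashlib.sha256(seed.to_bytes(4, "big") + i.to_bytes(4, "big")).digest()
    for i in range(4096)
)  # 128 KiB of deterministic, fully predictable "random" output

n = len(stream)
counts = Counter(stream)
shannon = -sum(c / n * math.log2(c / n) for c in counts.values())
print(round(shannon, 3))  # very close to the ideal 8.0 bits per byte
```

No statistical test on the output alone can tell this apart from a genuine random stream; only knowing the source design (or the seed) reveals the cheat.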
In fact, they can cheat and have a PRNG output N times the entropy they actually collect, and no one will ever know the difference, except maybe an attacker who reverse-engineers the device and learns how to PWN your cryptographic keys. This is why being open-source, like OneRNG, is so important.<br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
There is a Crypto StackExchange discussion about entropy here -<br>
<a href="http://crypto.stackexchange.com/questions/10404/estimating-bits-of-entropy" rel="noreferrer" target="_blank">http://crypto.stackexchange.com/questions/10404/estimating-bits-of-entropy</a><br>
<br>
which appears to indicate that entropy is really about the physics of the device; measuring a bit stream doesn't tell one the true entropy, and I would add particularly if whitened.<br></blockquote><div><br></div><div>The device will operate according to the physics that creates the entropy in the first place. I prefer thermal noise, since this is provably present, with no way to get rid of it. The more common way is to use avalanche noise from a Zener diode, but the entropy generated varies from device to device, and with both the temperature and age of the device. This is one reason why good Zener TRNGs use two Zener sources rather than one.</div>
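On the two-source point, here is a back-of-the-envelope sketch (my own, assuming the two Zener streams are independent and simply XORed together, which may not match any particular device's mixer). By the piling-up lemma, XORing independent bits with biases e1 and e2 (where P(1) = 0.5 + e) leaves a residual bias of magnitude 2*e1*e2, so the combined stream stays nearly unbiased even when one diode drifts with temperature or age:

```python
# Sketch: why two independent noise sources beat one (assumes simple
# XOR combining, which may differ from any real TRNG's mixer).
# For independent bits with P(1) = 0.5 + e1 and P(1) = 0.5 + e2, the
# XOR of the two has P(1) = 0.5 - 2*e1*e2 (piling-up lemma), so the
# residual bias magnitude is 2*e1*e2.

def xor_bias(e1, e2):
    """Residual bias magnitude of the XOR of two independent biased bits."""
    return 2.0 * e1 * e2

healthy = 0.01   # a good Zener source: 1% bias
degraded = 0.10  # an aged or overheated source drifting to 10% bias
print(round(xor_bias(healthy, degraded), 6))  # 0.002, i.e. 0.2% combined bias
```

Even with one source badly degraded, the combined bias is smaller than either source's alone, which is exactly the robustness you want against device-to-device variation.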
My conclusion is that OneRNG's entropy is what you basically state and what Shannon entropy (ent) claims for raw mode bit streams. Assuming my capture utility set the mode and captured data correctly, these are the Shannon entropy (ent) results for 15MB files from the three raw modes -<br></blockquote><div><br></div><div>Ent is probably pretty good at estimating randomness from the raw output of one OneRNG Zener source at a time. It looks like you measured about 6.6 bits of estimated entropy per byte, with very low serial correlation. This means the OneRNG is probably waiting long enough between samples for ent to do a decent job of estimating entropy.</div><div><br></div><div>I would be interested in testing OneRNG output. I use an entropy estimator that does a much better job of compressing out correlations between nearby bits, and so far it always outperforms ent. I've been able to prove that ent over-estimates entropy by as much as 15% for some TRNG output. Could you send me a copy of your raw data?</div><div><br></div><div>With a Zener source like OneRNG, and ent's common over-estimation of entropy, I'd use a safety margin on the compression rate in the CPRNG of at least 25%, just to be safe.</div><div><br></div><div>One thing I would _not_ do is use data from the OneRNG or any other TRNG to generate cryptographic keys or nonces before cryptographically whitening the output. I'd use it to feed /dev/random instead. With its CRC mixing of two streams, each of which probably carries at least 0.5 bits of entropy per bit, the bits fed into /dev/random probably have close to 1 bit of entropy each, and with cryptographic whitening you should be in good shape.</div><div><br></div><div>Bill<br></div></div></div></div>