[onerng talk] Dieharder Test Failed with 256MB/512MB samples

Paul Campbell paul at taniwha.com
Thu Oct 4 01:06:20 BST 2018


On Thursday, 4 October 2018 12:37:37 PM NZDT Jim Cheetham wrote:
> I'd normally wait for Paul to answer you, but I didn't want you to think
> we're ignoring your email 

sorry, I'm quite snowed under with other work at the moment - I hadn't 
forgotten to reply. I'd second what Jim has just said, and add a couple of 
things:

1) we fully admit that our RNG is not perfect. NO RNG is perfect; instead, the 
trick is to quantify the imperfection and deal with it appropriately.
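
For illustration, here's a rough sketch of one common way to quantify it: a 
most-common-value min-entropy estimate over a raw sample, in the spirit of 
NIST SP 800-90B. This is just an example, not part of the OneRNG tooling, and 
"sample.bin" is a placeholder file name for raw device output:

    # Most-common-value min-entropy estimate, in bits per byte.
    # "sample.bin" is a hypothetical file of raw (un-whitened) output.
    from collections import Counter
    import math

    with open("sample.bin", "rb") as f:
        data = f.read()

    p_max = max(Counter(data).values()) / len(data)  # freq. of most common byte
    h_min = -math.log2(p_max)                        # min-entropy, bits/byte
    print(f"estimated min-entropy: {h_min:.3f} bits/byte (8.0 is perfect)")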

Running the output through a whitener (a software RNG) is a good way to deal 
with temporal issues. We use an avalanche diode - it generates big, small, and 
in-between avalanches (pulses) - the big ones occasionally swamp the high 
frequency data (the small ones) with low frequency data ... (this is the 
temporal thing I worry most about)
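
To illustrate the whitening idea - this is just a generic hash-based sketch, 
not our actual pipeline (rngd does its own conditioning), and the file names 
are placeholders:

    # Generic hash-based whitener sketch: consume raw bytes in blocks
    # and emit SHA-256 digests, smoothing over stretches where the big
    # low-frequency pulses dominate. Illustrative only.
    import hashlib

    BLOCK = 64  # raw bytes consumed per 32-byte digest (2:1 compression)

    with open("raw.bin", "rb") as src, open("white.bin", "wb") as dst:
        while True:
            block = src.read(BLOCK)
            if len(block) < BLOCK:
                break                   # drop any short tail
            dst.write(hashlib.sha256(block).digest())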

We deal with not being perfect by telling the kernel RNG (through rngd) that 
we are generating less than 1 bit of entropy per bit, so it reads more data 
from the OneRNG - you can tweak this by editing /etc/onerng.conf 
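
The bookkeeping behind that is simple: at a credit of e bits of entropy per 
output bit, rngd has to read 1/e times as much raw data. A rough sketch (the 
0.875 credit below is a made-up example, not the value we actually ship):

    # Entropy accounting sketch: how much raw data must be read when
    # each output bit is credited with e < 1 bits of entropy.
    # The 0.875 credit is an illustrative assumption only.
    import math

    entropy_per_bit = 0.875   # assumed credit, bits of entropy per output bit
    bits_wanted = 4096        # entropy to add to the kernel pool, in bits

    raw_bits = math.ceil(bits_wanted / entropy_per_bit)
    print(f"read {raw_bits} raw bits (~{raw_bits // 8} bytes) "
          f"to credit {bits_wanted} bits")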

2) there was a bug we fixed quite recently (in the latest software release) on 
some devices that resulted in some stretches of bad data - make sure you are 
using the latest code, which flushes the output pool before you use the device 
(this problem tended to make it fail badly, not slightly)

	Paul

