[onerng talk] Dieharder Test Failed with 256MB/512MB samples
Paul Campbell
paul at taniwha.com
Thu Oct 4 05:15:39 BST 2018
On Thursday, 4 October 2018 4:43:05 PM NZDT Victor Sun (孫國偉) wrote:
> My onerng.conf has the default values:
> ONERNG_START_RNGD="1"
> ONERNG_MODE_COMMAND="cmd0"
> ONERNG_VERIFY_FIRMWARE="1"
> ONERNG_AES_WHITEN="1"
> ONERNG_URANDOM_RESEED="0"
> ONERNG_ENTROPY=".93750"
>
> I captured the file with this command:
> /sbin/onerng.sh daemon ttyACM0
> dd if=/dev/random of=/media/sf_Downloads/OneRNG bs=256 count=4M
> iflag=fullblock &
>
> OneRNG is a 1024 MB file, and I split it into 256 MB x 4 / 512 MB x 2 /
> 1024 MB x 1 for the dieharder tests
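The capture-and-split procedure above can be sketched roughly as follows (file names, paths, and the use of dieharder's raw-file generator are my assumptions, not taken from the original commands):

```shell
# Capture 1 GiB from /dev/random (256-byte blocks x 4M blocks), as above:
dd if=/dev/random of=OneRNG bs=256 count=4M iflag=fullblock

# Split the capture into the sample sizes being tested
# (GNU split; -d gives numeric suffixes):
split -b 256M -d OneRNG OneRNG.256M.   # four 256 MB pieces
split -b 512M -d OneRNG OneRNG.512M.   # two 512 MB pieces

# Run the full dieharder battery over each piece as a raw binary file
# (-g 201 selects the file_input_raw generator):
for f in OneRNG.256M.*; do
    dieharder -a -g 201 -f "$f"
done
```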
By running it through /dev/random you are, IMHO, measuring the quality of the
kernel CSPRNG far more than that of the OneRNG - I suspect that if you repeated
the test without a OneRNG attached you'd get roughly the same (random) results,
just an awful lot slower.
(Bear in mind when testing random systems like this: you are sampling random
data, so results will differ every run, including the occasional random test
failure. Theoretically there's some vanishingly small chance you'll get a file
full of 0s and it would still be random (insert that XKCD cartoon here). The
larger the sample you take, the closer you're likely to get to the system's
actual performance.)
BTW ONERNG_ENTROPY is the correction factor mentioned in the previous email -
the actual value we've measured from real OneRNGs is better than 0.93750, but
we're deliberately being a little paranoid here. If you want to be even more
paranoid, you can reduce this value towards 0 - at 0.5, rngd will pull 2 bits
from the OneRNG for every bit /dev/random produces.
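The arithmetic behind that factor can be sketched like this (the exact way rngd credits entropy internally is an assumption; this only shows the ratio implied by the setting):

```shell
# ONERNG_ENTROPY credits e bits of entropy per raw bit from the device,
# so the pool consumes 1/e raw OneRNG bits per credited bit:
for e in 0.93750 0.5; do
    awk -v e="$e" \
        'BEGIN { printf "ENTROPY=%s: %.2f raw bits per credited bit\n", e, 1/e }'
done
```

So the default 0.93750 costs only about 7% extra raw data, while 0.5 doubles the draw from the device.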
The way we test OneRNGs is essentially what you're doing, except that we kill
rngd and then dd directly from /dev/ttyACM0 rather than from /dev/random
(which passes everything through the kernel CSPRNG).
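That raw-device test might look something like this (device path, sample size, and output file name are assumptions; adjust to your setup):

```shell
# Stop rngd so nothing else drains the device:
sudo killall rngd

# Capture raw bytes straight from the OneRNG, bypassing the kernel CSPRNG:
dd if=/dev/ttyACM0 of=onerng-raw.bin bs=256 count=1M iflag=fullblock

# Feed the capture to dieharder as a raw binary file (-g 201 = file_input_raw):
dieharder -a -g 201 -f onerng-raw.bin
```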
Paul
More information about the Discuss mailing list