[onerng talk] review of RNGs

Bill Cox waywardgeek at gmail.com
Mon Jul 13 16:51:13 BST 2015


On Sun, Jul 12, 2015 at 9:37 PM, Philipp Gühring <pg at futureware.at> wrote:

> > > Yes, oddly enough things I think OneRNG is good at:
> > > * Entropy Sources (e.g. Avalanche/RF)
>
> Hmm, Multiple-Multiple Choice? Or a free text-field?


A free text field should be good enough.


>
> > > ** Configurable by user (i.e. choose which sources to use)
>
>
> > > * Raw output available (i.e. mode with no whitening)
>
> In that case, 2 submissions, one for the raw output and one for the
> filtered output might be a good idea.


This would make the submissions far more interesting, especially if the
data were made available for analysis.


>
> > > ** Whitening method (e.g. CRC16, AES)
>
> von-Neumann filter. Again, multiple-multiple choice of freetext?


This probably should be free text.  There are too many possible options.

How to auto-analyze the raw data is an interesting problem.  Reporting
ent's estimate of entropy/bit and the 1/0 bias might be good.  Pass/fail on
the Diehard tests would tell us whether the "raw" data were actually raw:
"passing" the Diehard tests with raw data should be serious cause for
concern.
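As a rough sketch of what such auto-analysis could report, here is a toy
Python approximation of ent's entropy/bit and 1/0 bias figures (not ent
itself, just the same idea):

```python
import math
from collections import Counter

def bit_stats(data: bytes):
    """Rough stand-in for ent's report: Shannon entropy per bit
    (from the byte-value histogram) and the fraction of 1 bits."""
    n = len(data)
    counts = Counter(data)
    h_byte = -sum(c / n * math.log2(c / n) for c in counts.values())
    ones = sum(bin(b).count("1") for b in data)
    return h_byte / 8, ones / (8 * n)   # (entropy/bit, 1/0 bias)
```

A badly broken raw stream shows up immediately: a constant byte scores 0
entropy/bit, while uniformly distributed bytes approach 1.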

Another interesting stat might be raw bits/second generated.  It is
important to know what the TRNG is doing with the raw data.  If the raw
data is not very random, yet the whitened output bits are generated at the
same rate as (or faster than) the raw bits, that's another red flag.
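That sanity check amounts to comparing the whitened output rate against the
entropy budget of the raw stream.  A sketch (the names and the entropy
estimate are illustrative, not from any real driver):

```python
def whitening_red_flag(raw_bits_per_sec: float,
                       whitened_bits_per_sec: float,
                       entropy_per_raw_bit: float) -> bool:
    """True if the device emits more whitened bits per second than
    the raw stream's entropy can justify -- the red flag above."""
    return whitened_bits_per_sec > raw_bits_per_sec * entropy_per_raw_bit
```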

It might be interesting to allow TRNG builders to upload custom entropy
estimators that give more accurate (meaning lower) estimates of entropy/byte
than ent.  For example, my Infinite Noise TRNG is modeled as having 2
states (which correspond to the 2 hold caps).  One state is generated from
the other plus noise every clock cycle.  In ideal operation you can filter
out most of the non-randomness using an N-bit predictor that estimates the
probability of seeing a 1 given the N prior bits.  I use 2 predictors, one
for even and one for odd bits, because they are generated by the two
different switched-cap circuits in an alternating pattern.
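The predictor idea can be sketched like this: a single N-bit-context
predictor with Laplace smoothing, scoring each bit by -log2 of its
predicted probability.  (A simplification: the actual Infinite Noise driver
keeps separate even/odd predictors and runs online.)

```python
import math
from collections import defaultdict

def predictor_entropy(bits, n=8):
    """Estimate entropy/bit by predicting each bit from the previous
    N bits.  Predictable streams score well below 1 bit/bit."""
    counts = defaultdict(lambda: [1, 1])   # Laplace-smoothed [zeros, ones]
    history, mask, total = 0, (1 << n) - 1, 0.0
    for b in bits:
        zeros, ones = counts[history]
        p = (ones if b else zeros) / (zeros + ones)
        total -= math.log2(p)
        counts[history][b] += 1
        history = ((history << 1) | b) & mask
    return total / len(bits)
```

An alternating 010101... stream, which looks perfectly balanced to a simple
1/0 bias test, scores near zero entropy/bit here.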

With this predictor, I am able to detect that the actual entropy/byte is
15% lower than reported by the ent program.  I am also able to verify that
the circuit is operating at the expected entropy generation rate, and stop
output if the operation is either too unpredictable or too predictable.  I
allow +/- 3% variation from ideal operation.  It would be interesting to
run similar analysis on other raw data sets from different TRNGs.
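That health gate amounts to a simple window check on the measured entropy
rate (an illustrative sketch; the real driver's logic and thresholds may
differ):

```python
def entropy_health_ok(measured: float, expected: float,
                      tolerance: float = 0.03) -> bool:
    """True if the measured entropy rate is within +/- 3% of the
    model's prediction.  Too high suggests a fault or an injected
    signal; too low suggests a stuck or degraded source."""
    return abs(measured - expected) <= tolerance * expected
```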

Bill