OpenBSD's use of randomness

Jim Cheetham jim at gonzul.net
Fri Jan 31 01:05:44 GMT 2014


Not completely on topic, but I've been talking to Theo from OpenBSD recently, and he is of course very confident that their philosophy for using random data is better than others'. He admits, though, that no independent academic investigation has been done ... yet ;-)

http://www.openbsd.org/crypto.html doesn't really have the details. Essentially, he describes the system as consuming random data almost constantly: nearly every time the kernel, libc, or some other part of their core code generates something 'unique' (a memory allocation address, for example), it draws from the random pool. Every time a process fork()s, it gets a new libc state with a fresh 4k pool of random data made available. At the same time, the OS collects interrupt-related data as input to the entropy collectors, and the PRNG is reseeded occasionally. His overall feeling is that this 'constant usage' improves the quality of the whole system, and that the output of the system PRNG is therefore of very high quality, which reduces his need to have hardware RNGs feeding it. I'm not sure that conclusion is valid, but I don't know how to evaluate it.
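The fork()-and-reseed behaviour described above can be sketched roughly as follows. This is a toy illustration only, not OpenBSD's implementation (their arc4random is a ChaCha20-based C design inside libc); the class and method names here are mine, and os.urandom stands in for the kernel entropy pool:

```python
import os

class ForkAwarePool:
    """Toy sketch: a userland random pool that is thrown away and
    refilled from the kernel whenever a fork is detected, so parent
    and child never share random state. Illustration only."""

    POOL_SIZE = 4096  # mirrors the 4k pool mentioned above

    def __init__(self):
        self._reseed()

    def _reseed(self):
        # Refill the whole pool from the kernel's entropy source.
        self._pool = os.urandom(self.POOL_SIZE)
        self._pos = 0
        self._pid = os.getpid()

    def getbytes(self, n):
        # A fork changes the pid; if we see a new pid (or the pool is
        # exhausted), discard the inherited state and reseed.
        if os.getpid() != self._pid or self._pos + n > self.POOL_SIZE:
            self._reseed()
        out = self._pool[self._pos:self._pos + n]
        self._pos += n
        return out

rng = ForkAwarePool()
token = rng.getbytes(16)
```

In a real implementation the pid check is not enough on its own (pids can be reused), which is one reason OpenBSD handles this inside libc with kernel cooperation rather than in application code.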

(This is probably a bad representation of the situation; we were in a bar both times he explained it to me.)

-jim
