Reminds me of one of my favorite commit logs from the LibreSSL project:

"Do not feed RSA private key information to the random subsystem as entropy. It might be fed to a pluggable random subsystem…. What were they thinking?!"

http://opensslrampage.org/post/83007010531/well-even-if-time-isnt-random-your-rsa
I can't understand why OpenSSL continues to use its own PRNG implementation when we have /dev/urandom and CryptGenRandom, which are known to be good. That's basically what BoringSSL does (although if you have RDRAND, its output gets filtered through a ChaCha20 instance).

I'm pretty sure OpenSSL doesn't even reseed its PRNG on Windows unless the calling application does it, so I'm not sure how that's safe either. Looking at applications that use OpenSSL, like OpenVPN, I don't see any calls to the PRNG init function to ensure it has enough entropy. I'm not sure what the security impact of that is.
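For what it's worth, "just ask the kernel" is only a few lines on most platforms. A minimal sketch for POSIX systems (the os_random_bytes name is just for illustration; on Windows you'd call CryptGenRandom/BCryptGenRandom instead, and on Linux the getrandom() syscall avoids the file descriptor entirely):

    /* Sketch: pull key material straight from the OS CSPRNG
       instead of maintaining a userland PRNG state. */
    #include <stdio.h>
    #include <stdlib.h>

    int os_random_bytes(unsigned char *buf, size_t len) {
        FILE *f = fopen("/dev/urandom", "rb");
        if (f == NULL)
            return -1;
        size_t got = fread(buf, 1, len, f);
        fclose(f);
        return got == len ? 0 : -1;
    }

    int main(void) {
        unsigned char key[32];  /* 256-bit key */
        if (os_random_bytes(key, sizeof key) != 0) {
            fprintf(stderr, "no entropy source\n");
            return 1;
        }
        for (size_t i = 0; i < sizeof key; i++)
            printf("%02x", key[i]);
        printf("\n");
        return 0;
    }

No seeding, no reseeding, no application-visible state to get wrong, which is exactly the argument for deferring to the OS.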
I actually have several TB of OpenSSL PRNG output (256-bit samples) that I've been analyzing. It's a fairly modest sample size, but I've come across some interesting patterns at the bit level. As a matter of fact, writing efficient analytics for binary 256-bit patterns is kind of a pain in the ass.
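Curious what kind of patterns you're seeing. For anyone who wants to poke at something similar, a naive per-bit-position frequency count is only a short C program (this assumes the dump is raw back-to-back 256-bit blocks; at TB scale you'd want this vectorized and parallelized):

    /* Hypothetical sketch: count 1-bits at each of the 256 positions
       across a file of raw 256-bit samples. A healthy PRNG should
       give a frequency near 0.5 at every position. */
    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s samples.bin\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        uint64_t ones[256] = {0};   /* 1-bit count per position */
        uint64_t samples = 0;
        unsigned char block[32];    /* one 256-bit sample */
        while (fread(block, 1, sizeof block, f) == sizeof block) {
            for (int i = 0; i < 256; i++)
                ones[i] += (block[i / 8] >> (7 - (i % 8))) & 1;
            samples++;
        }
        fclose(f);
        for (int i = 0; i < 256; i++)
            printf("bit %3d: %.6f\n", i,
                   samples ? (double)ones[i] / samples : 0.0);
        return 0;
    }

That only catches gross per-position bias, of course; correlations between positions need something more like the NIST or dieharder test batteries.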