I've read Ted Ts'o dismissing the "/dev/urandom for everything camp" somewhere lately.
Together with his appearance here on HN, I very much doubt he's going to change his position anytime soon.
He thinks the criticism of Linux's random device is purely academic. Maybe he's right, and he's certainly far better qualified than I am, but I'm still very unimpressed by how the Linux guys just dismiss any discussion of changing the random device.
Well, they implemented getrandom quickly. Let's take it as a promising sign.
It's very clear that /dev/urandom will never be changed in a way that it could possibly block; Ts'o considers that breaking userspace. I could possibly see /dev/random adopting the BSD behavior, but more likely everyone who cares will switch to getrandom, and it won't be an issue anymore.
Because if an application expected that /dev/urandom never blocks and it suddenly starts blocking, the application might not behave as expected (performance degradation, race conditions, resource starvation, etc.).
The proposed change is certainly not to make urandom sporadically block the way /dev/random does. It's to make urandom block at boot if the RNG hasn't been seeded at all.
The entropy pool and the blocking reads of /dev/random serve as a safeguard to ensure the output can't be predicted: if, for example, an attacker exhausted a system's entropy pool, it's conceivable (though highly unlikely with today's technology) that he could predict the output of a /dev/urandom that hasn't been reseeded in a long time. Doing that would also require the attacker to suppress the system's ability to collect more entropy, which is astronomically improbable.
This implies that urandom needs to be "reseeded" periodically in order to maintain its security.
My understanding is that if urandom has been seeded with 256 bits of entropy, then it's impossible for any attacker to ever predict urandom in any circumstance. If you don't know the seed, then predicting urandom is as impossible as decrypting AES-256 ciphertext that was encrypted with a random 256-bit key. Which is to say, computationally infeasible.
Is that correct? If so, what sort of C code or startup shell script would you recommend running in order to ensure 256 bits of entropy have been collected by the kernel and used as a seed for urandom?
Your understanding is pretty much correct. To quote DJB [1]:
> Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
> (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
> (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
> For a cryptographer this doesn't even pass the laugh test.
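As for the "startup shell script" half of the question: a minimal sketch, assuming Linux's /proc interface. It polls the kernel's entropy estimate until it reports at least 256 bits, with an illustrative 60-second give-up bound so a misbehaving system doesn't hang boot forever. (On kernels 5.18 and later, entropy_avail simply reports 256 once the CRNG is initialized, so this returns immediately.)

```shell
#!/bin/sh
# Sketch: wait at boot until the kernel reports >= 256 bits of entropy.
# /proc/sys/kernel/random/entropy_avail is Linux-specific; the 60-second
# timeout is an arbitrary illustrative bound, not a recommendation.
tries=0
while [ "$(cat /proc/sys/kernel/random/entropy_avail)" -lt 256 ]; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
        echo "gave up waiting for entropy" >&2
        exit 1
    fi
    sleep 1
done
echo "entropy pool reports >= 256 bits"
```

Note this only checks the kernel's own estimate; the cleaner fix, as discussed elsewhere in this thread, is getrandom(2), which blocks until seeded without any polling.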
Very nice. It's quite interesting how much better the /dev/random implementations seem to be on FreeBSD and OpenBSD than on Linux. On Linux you have to choose between long-term blocking on information-theoretic estimates and completely non-blocking sources that can produce insecure output. Wonder if they'll ever fix this.
urandom does not produce insecure output for userland applications once the system is booted up. A very annoying quirk in Linux random(4) means that distributions need to be careful about services at boot, but Linux applications should virtually always use urandom.
Indeed, the link you gave pretty much nails my exact issue with it:
> FreeBSD’s kernel crypto RNG doesn’t block regardless of whether you use /dev/random or urandom. Unless it hasn’t been seeded, in which case both block. This behavior, unlike Linux’s, makes sense. Linux should adopt it.
The system call is arguably better, sure, but it should also be quite possible to fix /dev/urandom to just block when unseeded so we can finally put this issue to rest.
NetBSD has a sysctl that returns random data and cannot fail, which is what arc4random uses. OpenBSD has its getentropy syscall now (added in 5.6). I don't know of a FreeBSD syscall.
That's great news, Linux may finally have an RNG where we don't have to choose between poor speed and poor security. However, it is still quite possible to fix /dev/urandom for existing code which runs at boot, which has caused many security issues on routers and such. Glad to see progress is being made though.
I'm just going to keep replying to comments like this to point out that Linux applications are not actually forced to choose between random hangs (which is the real problem with random(4)) and security. Linux applications should just use urandom.
Actually they should "just use getrandom." Reasons: 1) urandom doesn't block if it's not initialized (which can happen on embedded devices right after boot), while getrandom blocks only then and never again. 2) getrandom provides resilience against file descriptor exhaustion attacks.
Except that's blatantly incorrect, as anything running before the RNG is seeded will be completely insecure. Sure, that's good enough for generating SSL keys after a system has been running for a while, but what about those SSH keys generated on first boot?
This has been a huge problem in the embedded device world. We need something reliably secure, not just secure after a while. /dev/urandom does not meet this criterion. Currently the best reliable option for Linux applications is to seed a CSPRNG from /dev/random and run it in usermode, which can also be quite error prone. This is why I think this syscall is a very good improvement over the current state of things.
This is why Linux distributions save seed files and seed the RNG at bootup. The RNG isn't "secure after a while". The instant the RNG is seeded, it's secure.
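For anyone unfamiliar with the seed-file scheme, here's a hedged sketch of the classic approach; the path and byte count are illustrative (systemd, for example, uses /var/lib/systemd/random-seed):

```shell
#!/bin/sh
# Sketch of the classic distro seed-file scheme.  SEED defaults to a
# local path for illustration; a real boot script would use a fixed
# location on persistent storage.
SEED="${SEED:-./random-seed}"

# At boot: write the previous shutdown's seed into the pool.  Writing to
# /dev/urandom mixes the bytes in but (deliberately) does not credit
# entropy to the kernel's estimator.
[ -f "$SEED" ] && cat "$SEED" > /dev/urandom

# At shutdown (and again right after seeding): save fresh bytes for the
# next boot, readable only by root.
umask 077
dd if=/dev/urandom of="$SEED" bs=512 count=1 2>/dev/null
```

The scheme obviously can't help on the very first boot of a device, which is the embedded-world failure mode being discussed above.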
You don't have to take my word for it; look at the design of the NaCl library --- that's Bernstein, Schwabe, and Lange --- for an example of "just use urandom".
And if there's no RNG seed available? We're talking about the first boot of a system here. You're going to have to wait for sufficient entropy to be available to the kernel or start reading junk. This is why the embedded devices hit many problems.
Sure, /dev/urandom is a good general strategy, I'm not disagreeing with you there, but it's not perfect and it very easily could be made better by simply blocking when needed to gather more entropy. getrandom() seems like a much better solution, providing an RNG without these idiosyncrasies.
Most developers are not going to be writing code that has to compete in a race against the operating system's RNG seeding process. (I have to explain this to PHP programmers all the time.) And the ones who are, ought to be made aware of the danger on Linux.
Patching /dev/urandom to block if it's not seeded on boot seems like a winning strategy to me. Why aren't we doing that?
Because it could break existing boot scripts/programs that read from /dev/urandom. Linus doesn't consider "[...] the ones who are [writing code that runs at boot], ought to be made aware of the danger" as a solution.