
This question has been asked several times, but something is still not clear to me from the other answers.

I have a few servers running a custom-built Linux kernel (minimal driver modules etc.) with no disks attached (everything is NAS-based). The kernel's entropy_avail is most often around 180-200 and can drop as low as 140 at other times. My applications use /dev/urandom (a Java application that uses SecureRandom(), which internally reads /dev/urandom).
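For reference, this is roughly how I observe those figures and how the application obtains its randomness (a simplified sketch, not the actual application code):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.SecureRandom;

    public class EntropyCheck {
        public static void main(String[] args) throws Exception {
            // Kernel's current estimate of available entropy, in bits.
            String avail = Files.readString(
                    Paths.get("/proc/sys/kernel/random/entropy_avail")).trim();
            System.out.println("entropy_avail = " + avail);

            // The default SecureRandom on Linux (NativePRNG) typically reads
            // its random bytes from /dev/urandom.
            SecureRandom rng = new SecureRandom();
            byte[] buf = new byte[16];
            rng.nextBytes(buf);
            System.out.println("got " + buf.length + " random bytes");
        }
    }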

  • From this nice writeup, it seems that the /dev/urandom stream mostly depends on the seed file generated at installation time (and on later re-seeding at boot time). Does that mean the entropy_avail figure has no impact on the numbers generated from /dev/urandom? The question is: do the /dev/urandom random numbers depend in any way on the entropy available?
  • If yes, what is an acceptable lower limit for the entropy available in the system? (200? 256? There are plenty of figures being quoted out there.)
  • If no, then how does that square with the man page, which says:

    A read from the /dev/urandom device will not block waiting for more entropy. If there is not sufficient entropy, a pseudorandom number generator is used to create the requested bytes.

Isn't that contradictory?
  • The significance of entropy_avail: we understand from the man page that the full entropy pool is 4096 bits, which implies 2^4096 possibilities for the internal state. So if entropy_avail is 140, does that shrink to 2^140? That still seems like a huge number to me; at what point should I start worrying?
  • In my case, as you can see, entropy_avail is probably lower than what is observed on a normal desktop system. Should I consider software entropy-gathering tools (haveged, rngd, etc.) or some specific hardware to help improve it? Would that actually affect the output of /dev/urandom?
Alavalathi

2 Answers


Entropy is required in the following sense: if a PRNG has only n bits of entropy, then this means that it has (conceptually) only 2^n possible internal states, and thus could be broken through brute-force enumeration of these 2^n states, provided that n is low enough for such an attack to be feasible.
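To put numbers on "low enough", here is a rough back-of-the-envelope sketch (the figure of 10^12 guesses per second is an assumed attacker speed, purely for illustration):

    public class BruteForceEstimate {
        public static void main(String[] args) {
            double guessesPerSecond = 1e12;              // assumed attacker speed
            double secondsPerYear = 365.0 * 24 * 3600;
            for (int n : new int[] {40, 64, 128, 140}) {
                double states = Math.pow(2, n);          // 2^n possible internal states
                double years = states / guessesPerSecond / secondsPerYear;
                System.out.printf("n = %3d bits -> ~%.2e years to enumerate%n", n, years);
            }
        }
    }

At 40 bits the search takes about a second, at 64 bits several months, and at 128 bits or more it is out of reach for the foreseeable future.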

Then things become complex, because the "entropy level" reported by the kernel is NOT the entropy. Not the one I talk about above.

From a cryptographic point of view, the algorithm that computes the output of both /dev/random and /dev/urandom is a (supposedly) cryptographically secure PRNG. What matters for practical security is the accumulated entropy of the internal state. Barring cryptographic weaknesses in that PRNG (none is known right now), that entropy can only increase or remain constant over time. Indeed, "entropy" can also be described as "that which the attacker does not know", and if the PRNG is cryptographically secure then, by definition, observing gigabytes of output yields only negligible information about the internal state. That's what cryptographically secure means.
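To make the "internal state is never exposed" point concrete, here is a deliberately simplified hash-based sketch; it is not the kernel's actual construction, only an illustration of the principle:

    import java.security.MessageDigest;

    // Simplified PRNG sketch: the internal state is never output directly,
    // only a one-way function of it.
    public class SketchPrng {
        private byte[] state;      // this is where the accumulated entropy lives
        private long counter;

        public SketchPrng(byte[] seed) {
            this.state = seed.clone();
        }

        // Mixing in fresh hardware events can only add to what the attacker
        // does not know; it never removes anything.
        public void mixIn(byte[] event) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(state);
            md.update(event);
            state = md.digest();
        }

        // Output is SHA-256(state || counter); observing outputs should yield
        // no practical information about the state itself.
        public byte[] nextBlock() throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(state);
            md.update(java.nio.ByteBuffer.allocate(8).putLong(counter++).array());
            return md.digest();
        }
    }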

Therefore, if /dev/urandom had 200 bits of entropy at some point since last boot, then it still has 200 bits of entropy, or even more.

From the point of view of whoever wrote that code (and the dreaded corresponding man page), entropy is "depleted" upon use. This is the stance of someone who assumes, for the sake of the argument, that the PRNG is not cryptographically secure, and is in fact somehow equivalent to simply outputting the internal state as is. From that point of view, if /dev/random started with n bits of entropy and outputs k bits, then it now has n-k bits of entropy.
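Under that view, the bookkeeping is nothing more than a credit/debit counter, roughly like this (a hypothetical sketch of the accounting, not the kernel's actual code):

    // "Entropy depletion" bookkeeping: credit the counter on hardware events,
    // debit it on reads, and (for /dev/random) block when the estimate runs out.
    public class DepletionAccounting {
        private static final int POOL_BITS = 4096;   // full pool size from the man page
        private int entropyBits = 0;                 // the "entropy_avail" analogue

        public synchronized void creditFromHardwareEvent(int estimatedBits) {
            entropyBits = Math.min(POOL_BITS, entropyBits + estimatedBits);
        }

        // /dev/random-style read: refuse (i.e. block) unless the estimate covers it.
        public synchronized boolean tryDebitForRead(int requestedBits) {
            if (entropyBits < requestedBits) {
                return false;                        // caller would block here
            }
            entropyBits -= requestedBits;            // n - k bits are deemed "left"
            return true;
        }
    }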

However, this point of view is not ultimately tenable, because while it is based on the assumption that the PRNG is utterly broken and a no-operation, it is also based, at the same time, on the assumption that the PRNG is still cryptographically secure enough to turn the "hardware entropy" (the sources of data elements that are assumed to be somewhat random) into a nice uniform sequence of bits. In short, the notion of entropy depletion makes sense only under the extreme assumption that the PRNG is utterly weak, but under that very assumption the estimate of how much entropy is really there is completely off.

In essence, that point of view is self-contradictory. Unfortunately, /dev/random implements a blocking strategy that relies on this flawed entropy estimate, which is quite inconvenient.

/dev/urandom never blocks, regardless of how much "hardware entropy" has been gathered since last boot. However, in "normal" Linux installations, a random seed is inserted early in the boot process; that seed was saved upon the previous boot, and is renewed immediately after insertion. That seed mostly extends the entropy of /dev/urandom across reboots. So the assertion becomes: if /dev/urandom had 200 bits of entropy at any point since the OS was first installed, then it still has 200 bits of entropy.
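Conceptually, that boot-time seed handling amounts to something like the following (a sketch only; real distributions do this in init scripts or systemd's random-seed service, and the seed path below is illustrative):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class SeedFileSketch {
        // Illustrative location; real systems use e.g. /var/lib/systemd/random-seed.
        private static final Path SEED = Paths.get("/var/lib/random-seed");
        private static final Path URANDOM = Paths.get("/dev/urandom");

        public static void main(String[] args) throws Exception {
            // 1. Early in boot: feed the seed saved by the previous boot into the
            //    pool. Writing to /dev/urandom mixes the data in (it does not
            //    credit entropy_avail).
            if (Files.exists(SEED)) {
                Files.write(URANDOM, Files.readAllBytes(SEED), StandardOpenOption.WRITE);
            }
            // 2. Immediately save a fresh seed so the same one is never reused.
            byte[] fresh = new byte[512];
            try (InputStream in = Files.newInputStream(URANDOM)) {
                int off = 0;
                while (off < fresh.length) {
                    int r = in.read(fresh, off, fresh.length - off);
                    if (r < 0) break;
                    off += r;
                }
            }
            Files.write(SEED, fresh);
        }
    }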

This behaviour can still be somewhat troublesome in some specific cases, e.g. diskless boot. The booting machine may need some randomness before having access to its files (e.g. to establish an IPsec context needed to reach the server that contains said files). A better implementation of /dev/urandom would block until a sufficient amount of hardware entropy has been gathered (e.g. 128 bits), but would then produce bits "forever", without implementing any sort of entropy depletion. This is precisely what FreeBSD's /dev/urandom does. And this is good.
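That policy could be sketched like this (a hypothetical helper, not an actual FreeBSD or Linux interface):

    // Block only until the pool has been seeded once; after that, never block
    // and never "deplete" anything.
    public class BlockOnceRng {
        private final Object lock = new Object();
        private boolean seeded = false;
        private int gatheredBits = 0;

        // Called as hardware events trickle in, with a rough entropy estimate.
        public void addHardwareEvent(byte[] event, int estimatedBits) {
            synchronized (lock) {
                // ... mix 'event' into the CSPRNG state here ...
                gatheredBits += estimatedBits;
                if (gatheredBits >= 128) {       // threshold from the text
                    seeded = true;
                    lock.notifyAll();
                }
            }
        }

        // Blocks only before the first seeding, then produces bytes forever.
        public byte[] read(int n) throws InterruptedException {
            synchronized (lock) {
                while (!seeded) {
                    lock.wait();
                }
            }
            byte[] out = new byte[n];
            // ... fill 'out' from the CSPRNG (e.g. the hash-based sketch above) ...
            return out;
        }
    }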


Summary: don't worry. If the PRNG used in the kernel is cryptographically secure, as it seems to be, then the "entropy_avail" count is meaningless. If the PRNG used in the kernel is not cryptographically secure, then the "entropy_avail" count is still flawed, and you are in deep trouble anyway.

Note that VM snapshots break entropy: after a restore, the VM will always start from the state that was saved in the snapshot, and will diverge only through the accumulation of fresh hardware events (which can be tricky in a VM, since the VM hardware is not true hardware). The kernel's "entropy_avail" counter, and /dev/random's blocking behaviour, change nothing about that. A VM snapshot/restore is a much more plausible security vulnerability for the system PRNG than the academic, purely theoretical scenario that "entropy_avail" tries to capture (and actually fails to).

Thomas Pornin
  • Thank you. I was about to ask how to know the quality of the random numbers in my _current_ system, whose _x bits_ of entropy is unknown to me. But from your last paragraph it is clear that the entropy level doesn't really matter for a PRNG which is cryptographically secure. Thanks again! – Alavalathi Apr 21 '15 at 09:21

There is the hardware-based RDRAND instruction on Ivy Bridge Intel processors. If that is available (i.e. the chip has the instruction and does not have the RDRAND hardware bug cover-up), then I think Linux does automatically use it, meaning you should get very large amounts of true random numbers very fast.

  • This is not correct. While Linux does use it, it is only used to augment the existing random number generator. In terms of how quickly the entropy pool is "filled" and how quickly it can generate randomness, the presence or absence of RDRAND makes no difference. – forest May 19 '18 at 01:59