
So my question is: lacking a legitimate (hardware) source of entropy, is it reasonable to routinely augment /dev/random using something like rng-tools (fed from /dev/urandom) or haveged?

Which is more dangerous: having programs depend on and/or block waiting for /dev/random's shallow entropy pool, or having (arguably) less robust randomness introduced purely by software?

And when I say "routinely", I mean "all my servers, starting when booted, running all the time, just to be safe."

I'm not questioning whether /dev/urandom is sufficiently strong - as per the cites above, almost everybody agrees it's fine (well, not everybody, but still). I want to be certain that using a daemon like rngd or haveged to work randomness back into /dev/random - even if they're based on /dev/urandom like rngd would be in most cases - doesn't introduce weaknesses. (I'm not comfortable with just mknod'ing the problem away, for maintenance and transparency reasons.)

(While this question could be considered a dupe of Is it safe to use rng-tools on a virtual machine?, that answer seems to fly against the widespread reputable voices saying /dev/urandom is sufficient, so this is in a sense seeking clarification as to where vulnerability would be introduced (if agreeing with that answer) or whether it is in fact introduced (if not).)

(related reading - Potter and Wood's BlackHat presentation "Managing and Understanding Entropy Usage" is what got me thinking about this)

gowenfawr
    These were exactly the reasons that the getrandom syscall was introduced https://lwn.net/Articles/605828/ – paj28 Aug 13 '15 at 16:21
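
For reference, getrandom() blocks only until the kernel pool has been initialized once after boot and thereafter behaves like /dev/urandom. A minimal sketch using Python's os.getrandom wrapper (assumes Python 3.6+ on Linux):

```python
# Minimal getrandom() sketch (Python 3.6+ on Linux). It blocks only until the
# kernel pool has been initialized once after boot, then never again.
import os

key = os.getrandom(32)          # post-initialization, behaves like /dev/urandom
print(key.hex())

# GRND_NONBLOCK makes it fail instead of waiting if the pool isn't ready yet.
try:
    early = os.getrandom(32, os.GRND_NONBLOCK)
except BlockingIOError:
    early = None                # plausible only very early during boot
```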

3 Answers


The short answer is that feeding /dev/random with the output of /dev/urandom will not decrease security. To make things clearer (the comments indicate that I was not precise enough): my point is that feeding /dev/random with the output of /dev/urandom is harmless, though it does not increase security either; it is also rather useless, except as a way to easily support applications that insist on using /dev/random and that block at inopportune times for no good reason.


To make things simple, let's define a working model of the functioning of /dev/random and /dev/urandom:

  • There is an internal state p that consists of k bytes (for some integer k).
  • Additional entropy x is injected by replacing p with H(p,x), where H is a "sort of" hash function. For instance, the current state is concatenated with the new entropy, the result being hashed, and the hash output is the new p.
  • Output is produced by using p as input to a CSPRNG; the first k bytes are the new value of p, and the subsequent bytes are the output for that run.
  • /dev/random differs from /dev/urandom in that it will occasionally refuse to output bytes until new entropy is injected (or gathered from the hardware).

(The above is a conceptual model which is close enough to the actual implementation for the purposes of the current discussion.)
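
A toy rendering of that model, with SHA-256 standing in for both the mixing function H and the CSPRNG, and k fixed at 32 bytes (nothing like the kernel's actual code; purely an illustration of the state update and output steps):

```python
# Toy rendering of the conceptual model above. SHA-256 stands in for both the
# mixing function H and the CSPRNG; this is not the kernel implementation.
import hashlib

K = 32                # size of the internal state p, in bytes
state = bytes(K)      # p (an all-zero start, just for the sketch)

def inject(x: bytes) -> None:
    """Entropy injection: p <- H(p, x)."""
    global state
    state = hashlib.sha256(state + x).digest()

def output(n: int) -> bytes:
    """Run the CSPRNG on p: the first K bytes of its stream become the new p,
    and the next n bytes are returned to the caller."""
    global state
    stream = b""
    counter = 0
    while len(stream) < K + n:
        stream += hashlib.sha256(state + counter.to_bytes(8, "big")).digest()
        counter += 1
    state, out = stream[:K], stream[K:K + n]
    return out

inject(b"interrupt timings, packet arrival jitter, ...")  # the x of the model
print(output(16).hex())
```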

The security of the whole thing depends on the entropy of p; roughly speaking, on how much p is unknown to attackers. Attackers know that p has size k, so they may try to guess p by brute force, which has a cost of about 2^(8k-1) on average. The CSPRNG is deemed cryptographically secure because analysis of many bytes of the output does not yield information on other output bytes -- in particular, it does not yield information on p (either before or after the run).

We now suppose that x is chosen by the attacker. As long as H behaves like a secure hash function, H(p,x) has the same entropy as p -- the attacker still has no information on p after the operation. Strictly speaking, there can be a "space reduction" effect, down to a space of about 2^(8k/2) if the attacker is allowed to do the "entropy feeding" trick 2^(8k/2) times. If k is large enough (say 32, for a 256-bit internal state), the remaining space size is still large enough for security, and this situation cannot be attained anyway.

Since the CSPRNG does not yield information on p, the output of that CSPRNG cannot be worse for security than an attacker-controlled x. This is the part which would be most tricky to formalize in an academic way (this would take a few pages to write it down properly). Intuitively, the CSPRNG, when using k bytes of input and taking k bytes of output, behaves like a random oracle.

Therefore, feeding /dev/random from the output of /dev/urandom won't reduce security.

This of course relies on the idea that the state-update function (for entropy injection) and the CSPRNG are both "ideal" functions. If they are not, then you are doomed anyway (the autoblocking behaviour of /dev/random would not save you in that case).
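
For concreteness, here is roughly what the "feed /dev/urandom back into /dev/random" setup amounts to in practice: an rngd-like loop that reads /dev/urandom and credits the bytes back into the kernel pool. This is a sketch, not rngd's actual code; it assumes Linux, root privileges, and the RNDADDENTROPY ioctl (the request number below is derived from the generic ioctl encoding), and claiming the injected bytes as full entropy is exactly the step whose harmlessness is argued above.

```python
#!/usr/bin/env python3
# Sketch of an rngd-like feeder whose "entropy source" is just /dev/urandom.
# Linux-specific, needs root, and purely illustrative -- this is not rngd's code.
import fcntl
import os
import struct
import time

# RNDADDENTROPY = _IOW('R', 0x03, struct rand_pool_info); the struct header is
# two C ints (8 bytes), which yields the well-known value 0x40085203 on Linux.
RNDADDENTROPY = (1 << 30) | (8 << 16) | (ord("R") << 8) | 0x03

CHUNK = 64  # bytes injected per iteration (arbitrary choice for this sketch)

def credit_entropy(fd: int, data: bytes) -> None:
    # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }
    # entropy_count is in bits; claiming len(data)*8 bits is precisely the
    # "is this harmful?" question discussed above.
    fcntl.ioctl(fd, RNDADDENTROPY,
                struct.pack("ii", len(data) * 8, len(data)) + data)

if __name__ == "__main__":
    fd = os.open("/dev/random", os.O_WRONLY)
    while True:
        credit_entropy(fd, os.urandom(CHUNK))  # os.urandom() reads /dev/urandom
        time.sleep(1)
```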

Thomas Pornin
    Your argument is that mixing functions _generate_ new entropy? I don't buy it. When you feed `/dev/urandom` into `/dev/random`, the entropy estimator will say "ooo, that looks random" and increase its entropy counter, which is incorrect since you have not actually introduced anything new. – Mike Ounsworth Aug 13 '15 at 16:11
  • Ah, sorry, your argument is actually that mixing functions don't _reduce_ entropy. Fine. In that case, what do you gain by doing it? – Mike Ounsworth Aug 13 '15 at 16:14
    I am not totally up to date on the Linux rng stuff, but afaik if you put too much entropy in, some parts are discarded. So it might be possible to make the system ignore "real" entropy by always feeding it pseudorandom data and, while not decreasing the entropy (i.e. entropy at time t+1 is not less than at time t), it will lead to a system that has much less entropy than a well-functioning one (i.e. the "broken" system has the same entropy at time t+1 as at time t, while a working system would have **more**). – Josef Aug 13 '15 at 16:28
    @MikeOunsworth: the question is not about whether feeding `/dev/random` with `/dev/urandom` is a _good_ idea; only whether it is harmful. As long as `/dev/urandom` is secure, this is harmless (and useless, except if you want to transparently support applications that insist on using `/dev/random`, and stalling at inopportune times, for no good reason). – Thomas Pornin Aug 13 '15 at 17:13
  • @ThomasPornin lol ok. It's worth noting though that backdooring `/dev/random` like that will break FIPS 140-2 for any software running on your system that relies on `/dev/random` for its randomness, if that's a thing you care about. – Mike Ounsworth Aug 13 '15 at 17:16
  • @ThomasPornin Thank you - you got to the crux of the matter, which is wondering if I can support stupid applications which die on /dev/random without weakening my system's security. – gowenfawr Aug 14 '15 at 12:32
  • @ThomasPornin Sorry to bother you like this here, but do you have the time to look at my entropy question here? https://security.stackexchange.com/questions/96370/a-simple-question-about-entropy-and-random-data The existing answer looks dubious to me. How can 6-sided die tossed 100 times have 2.5806473 bit entropy? – cryptonamus Aug 16 '15 at 08:52
  • @gowenfawr why are you opposed to [mknod](http://security.stackexchange.com/a/14399/3365) or [otherwise change](http://superuser.com/a/563866/134112) /dev/random to access /dev/urandom ? – Josef Aug 17 '15 at 14:23
    @Josef history teaches us that he who alters basic OS constructs pays for it sooner (e.g., symlinks aren't immune to some of the treatment special files quietly handle) or later (what do you mean the default installer doesn't have my specially kinked devices set up for me?). If you have to mess with something as basic as devices, better to do it in an externalized way whose failure mode is a return to the default behavior. – gowenfawr Aug 17 '15 at 14:26
  • @gowenfawr the failure mode of the udev rules **is** return to default behaviour. If the custom rule is not used, you will have the "normal" /dev/random with the blocking problems but otherwise working. If the rule is applied, you have the symlink. Only `/dev/eerandom` (the real `/dev/random`) will be missing without the custom rule, but you probably don't want to use that anyway. – Josef Aug 17 '15 at 14:37
  • In Linux, the well-known weak point of /dev/urandom occurs when it is used without sufficient *initial* random seeding. /dev/random doesn't have *that* problem (never mind that it may go overboard throttling after that). Without knowing how rng-tools actually works, it seems it would be enough to have an rng-service which waited until sufficient entropy (say 256 bits) was detected once, and only then started using /dev/urandom. – Craig Hicks Apr 27 '18 at 02:45

Unfortunately, FIPS and Common Criteria are among those that prefer /dev/random. [EDIT: I can't find a citation for it, but see the bottom for an unofficial source.] So if you want your software to be FIPS / CC certified, it cannot use /dev/urandom. It must prove that it keeps an accurate estimate of its entropy, and it must block when the entropy runs low.


Brief background:

All Deterministic Random Number Generators are only as random as their seed. Entropy is an estimate of how random (or unexpected, or unpredictable) your seed is. In order to increase the entropy of your RNG, you need to mix in fresh randomness from an outside source. If you have access to unpredictable sources of data (often timing on human input devices, or thermal noise), then by all means, feed it into /dev/random to increase the entropy of its seed.
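
As a trivial illustration of "only as random as their seed", here is Python's non-cryptographic PRNG, used purely to show the determinism (a real DRBG has the same property):

```python
# A deterministic generator is fully determined by its seed: same seed, same output.
import random

a = random.Random(1234)   # not crypto-strength; just illustrating determinism
b = random.Random(1234)
print([a.randrange(256) for _ in range(8)])
print([b.randrange(256) for _ in range(8)])  # identical sequence
```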

The Linux kernel will automatically mix in as many sources of randomness as it has access to: timing of packets, timing of keystrokes, randomness from the process scheduler, and attached hardware random number generators (which are starting to be included on consumer motherboards). So that's great.

The only case I'm aware of where /dev/random is known to be weak is on headless VMs, where there is literally nothing to draw on for unpredictability.
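
If you want to see the kernel's own bookkeeping, its entropy estimate is exposed through procfs. A quick sketch (Linux-specific; on recent kernels the accounting behind these numbers has been reworked, so treat them as rough indicators):

```python
# Read the kernel's own entropy bookkeeping from procfs (Linux only).
def read_proc_int(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

avail = read_proc_int("/proc/sys/kernel/random/entropy_avail")
pool = read_proc_int("/proc/sys/kernel/random/poolsize")
print(f"kernel estimates {avail} of {pool} bits of entropy available")
```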


Alright, let's get to the actual question:

I want to be certain that using a daemon like rngd or haveged to work randomness back into /dev/random - even if they're based on /dev/urandom like rngd would be in most cases - doesn't introduce weaknesses.

My question is: where are rngd, haveged, and /dev/urandom getting their randomness from? Is it really new randomness coming from outside the machine, or is it just a re-hash of /dev/random?

I can't speak to rngd or haveged, but I know that on Linux /dev/urandom shares an RNG with /dev/random (see this awesome rant, from which I've borrowed an image below), and on FreeBSD /dev/urandom is literally a pointer back to /dev/random, so in all likelihood you're just feeding /dev/random back to itself.

I'm not an expert enough to know if this introduces weaknesses or not, but it certainly isn't doing any good.

[Diagram from the linked article: the structure of the Linux RNG, with /dev/random and /dev/urandom drawing from the same internal entropy pool]


The only mention of "no /dev/urandom" that I can find in FIPS docs is in a draft from 2014 that was never released to the public. It includes the footnote:

Note2: The /dev/urandom generator is assumed to provide no entropy unless it is specifically instrumented to ensure a minimum of 112-bits of available entropy at all times.

It then proceeds to list ways in which you can guarantee that entropy. This is a lot less rigid than I had been led to believe. Cool, today I learned!

Mike Ounsworth
    fyi: `haveged` is based on the [HAVEGE](https://www.irisa.fr/caps/projects/hipsor/) algorithm and uses internal processor state as a randomness source. It works quite well, but there can be problems in virtualised environments. – Josef Aug 13 '15 at 16:18

Don't do fishy things like feeding hogwash data into /dev/random. If you want, you can replace /dev/random with a symlink to /dev/urandom.

You can use haveged or something like that, but don't use anything without any external input. (Also, especially with haveged, take care if you run on virtualised hardware.)

/dev/random is quite robust against input with bad entropy, but if you feed it too much nonsense, who knows what happens?

/dev/urandom is well known and tested. If you link /dev/random to it, you probably won't have problems. If you do fishy things, you might have problems!

Also: What would you want to be responsible for: "The first secure connection to our server after reboot takes almost a second!!!"/"Generating a lot of certificates is slow" or "The thing you did with our random stuff is totally broken! All our keys are broken, all our data is stolen! WHAT HAVE YOU DONE?"

Decide wisely!

Josef
    `but if you feed it too much nonsense, who knows what happens?` absolutely nothing. It uses a mixing function that does not reduce entropy (as @ThomasPornin's answer explained). You could write from `/dev/zero` constantly, and nothing bad would happen. – forest Dec 19 '17 at 09:02
  • @forest with the current implementation, assuming no bugs are present. Sure, this is intended behavior. But the implementation of /dev/random in Linux has been changed a few times already. Also, the given scenario is not a common one. If you feed a lot of garbage into /dev/random, there might (now or later) be a bug decreasing your security. There is no advantage in doing it, therefore you shouldn't. There have been problems with predictable keys after boot on some devices. Feeding back urandom might "show" you there is enough entropy but leave you vulnerable to this, for example. – Josef Dec 19 '17 at 13:28
  • @forest also see my [comment to the other answer](https://security.stackexchange.com/questions/96741/is-it-worth-augmenting-dev-random-entropy-in-software/96750?noredirect=1#comment166507_96747). Even if nothing bad can happen, according to your definition, it could be that you prevent something good (more entropy added from "real" randomness sources) from happening! Assume a system has x bits of entropy at time t, and in normal operation would have 2*x bits at time t+δ. If with your modification the system only has (2*x)*ε (ε<<1) entropy at t+δ, I call it a problem. – Josef Dec 19 '17 at 13:35
    It's no more likely for a bug to be present when used that way than when used any other way. Furthermore, it does not prevent "real" randomness from being added. Trickle reseeding hasn't been a thing since the 2.x days. For a long time now, entropy events have been added unconditionally. At worst, it would change the behavior of `/dev/random` to be non-blocking, which would be no different than using `/dev/urandom` directly. As long as it is initially seeded with good randomness, it will work just fine. – forest Dec 19 '17 at 13:53
    As for the randomness driver being changed often, the last major change was in 4.8. I'm familiar with both the pre-4.8 and newer version of the driver and can say that the behavior has not changed such that feeding "bad" randomness into it would be dangerous at all. In fact, it is no different from simply using the `RNDADDENTROPY` ioctl haphazardly. Literally all it will do is cause the blocking pool to unblock at the wrong time (and waste your CPU cycles since the SHA-1 mixing operation is not optimized for some godforsaken reason). – forest Dec 19 '17 at 13:56