55

Which way of additionally feeding the /dev/random entropy pool would you suggest for producing random passwords? Or is there perhaps a better way to create fully random passwords locally?

Scott Pack
tkit
  • I just faced this problem while trying to generate GPG keys on a virtual server. I found that downloading a large ISO (say a Linux distribution DVD) adds entropy pretty quickly. – Mark E. Haase Jul 31 '14 at 18:06

9 Answers

54

You should use /dev/urandom, not /dev/random. The two differences between /dev/random and /dev/urandom are (I am talking about Linux here):

  • /dev/random might be theoretically better in the context of an information-theoretically secure algorithm. This is the kind of algorithm which is secure against today's technology, and also tomorrow's technology, and technology used by aliens, and God's own iPad as well. Information-theoretically secure algorithms are secure against infinite computing power. Needless to say, such algorithms are pretty rare, and if you were using one, you would know it. Also, this is a "might": internally, /dev/random uses conventional hash functions, so chances are that it would have weaknesses anyway if attacked with infinite power (nothing to worry about for Earth-based attackers, though).

  • /dev/urandom will not block, while /dev/random may do so. /dev/random maintains a counter of "how much entropy it still has" under the assumption that every bit it has produced is a lost entropy bit. Blocking induces very real issues, e.g. a server which fails to boot after an automated install because it is stalling on its SSH server key creation (yes, I have seen that). /dev/urandom uses a cryptographically strong pseudo-random number generator, so it will not block, ever.

So you want to use /dev/urandom, and stop worrying about this entropy business.
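
As a quick illustration (a sketch, assuming a Linux system with /proc mounted), you can check the kernel's entropy estimate and see that reads from /dev/urandom never stall:

# Show the kernel's current entropy estimate (Linux-specific path).
cat /proc/sys/kernel/random/entropy_avail
# Read 16 bytes from /dev/urandom: this returns immediately, always.
head -c 16 /dev/urandom | od -An -tx1
# The same read from /dev/random may stall on an idle machine until
# the kernel has gathered enough fresh entropy.
head -c 16 /dev/random | od -An -tx1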

Now you may want to worry about entropy if you are writing the Linux installer. The trick is that /dev/urandom never blocks, ever, even when it should: /dev/urandom is secure as long as it has received enough bytes of "initial entropy" since the last boot (32 random bytes are enough). A normal Linux installation will create a random seed (from /dev/random) upon installation and save it on the disk. Upon each reboot, the seed is read, fed into /dev/urandom, and a new seed immediately generated (from /dev/urandom) to replace it. This guarantees that /dev/urandom always has enough initial entropy to produce cryptographically strong randomness, perfectly sufficient for any mundane cryptographic job, including password generation. The only critical point is during installation: the installer must get some entropy from /dev/random, which may block. This issue also occurs with live CDs and other variants with no read-write permanent storage area. In those situations, you may want to find some source of entropy to ensure that /dev/random is well fed and will not block.
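
The boot-time seed handling described above looks roughly like the following sketch (the seed file path is illustrative and varies between distributions; note that writing to /dev/urandom mixes the data into the pool without increasing the kernel's entropy estimate):

#!/bin/sh
# Minimal sketch of the boot-time seed dance (path is illustrative).
SEED=/var/lib/random-seed
# At boot: feed the saved seed into the pool...
if [ -f "$SEED" ]; then
    cat "$SEED" > /dev/urandom
fi
# ...then immediately replace it, so the same seed is never reused.
umask 077
dd if=/dev/urandom of="$SEED" bs=32 count=1 2>/dev/null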

The operating system itself, and more precisely the kernel, is in the right place to gather entropy from hardware events, since it handles the hardware. So there is relatively little that you can use for entropy that the kernel does not already use. One of those remaining sources is webcam data: a webcam, even facing a blank wall, will output data with thermal noise, and since it outputs lots of data, it is a good entropy gatherer. So just grab a few frames from the webcam, hash them with a secure hash function (SHA-256), and write the result into /dev/urandom. This is still massive overkill.
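
A sketch of that webcam idea, assuming a V4L2 device at /dev/video0 and ffmpeg installed (both are assumptions; any frame grabber would do):

#!/bin/sh
# Grab a few webcam frames, hash them with SHA-256, and stir the
# digest into the kernel pool. Writing to /dev/urandom mixes the data
# in but does not raise the kernel's entropy estimate.
ffmpeg -loglevel quiet -f v4l2 -i /dev/video0 -frames:v 5 -f rawvideo - \
    | sha256sum | cut -d ' ' -f 1 > /dev/urandom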

AviD
Thomas Pornin
  • I should -1 you for having the audacity to suggest that any deity would use an iPad. Everyone knows God uses the Holy PSP. – Polynomial Apr 30 '12 at 09:29
  • Your god, maybe. My god uses an iPad, and furthermore has instructed me that so should you. – jakev Sep 30 '12 at 23:17
  • God has nothing to do with it. Does anyone else see the problem of people booting virtual machines that are all cloned from the same "seed" image, over a not-that-long span of time? – James Andino Dec 11 '13 at 03:30
  • It should be noted that for any system that can't save state (diskless workstations, and routers which have no writable disk), /dev/random can be starved of entropy when first starting up. It is normal for Linux and all systems to have a low amount of entropy at startup, but this is solved by saving a random seed from the last boot. When you have nowhere to save the seed, you'll have an initial lack of entropy. This is rather a corner case, but it is still important for some to know about /dev/random. – Steve Sether Dec 31 '15 at 15:38
25

You can feed it with white noise from your sound chip, if present. See this article: http://www.linuxfromscratch.org/hints/downloads/files/entropy.txt
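
If you do not want the article's full setup, a rough sketch of the same idea (assuming an ALSA system with arecord available) is to hash a second of sound-input noise into the pool:

#!/bin/sh
# Record one second of raw audio noise, hash it, and mix the digest
# into the kernel pool (this does not credit the entropy estimate).
arecord -q -d 1 -f S16_LE -r 44100 -t raw \
    | sha256sum | cut -d ' ' -f 1 > /dev/urandom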

Henri
  • You *could* do that, I suppose, but why bother? There is no reason to; it's unnecessary. The kernel already feeds /dev/random and /dev/urandom with sufficient entropy for these purposes. Save your time for something that will actually improve security. Or, to put it another way: the question asked whether we would suggest adding extra entropy. The best answer is no, there's no need to add extra entropy; just go ahead and use /dev/urandom as is. – D.W. Jan 17 '11 at 05:57
  • @D.W. From my experience with VPS servers, where there is no real external entropy source like a keyboard or mouse, the entropy pool gets very low and it seems to affect things like SSL. It might still work, but it feels like things run more slowly. After installing haveged (see another answer below), things were running much more smoothly. Perhaps there was something else I could have fixed, or I did something wrong, but I'm not sure you can always rely on your kernel as your entropy source... – Yoav Aner Apr 30 '12 at 09:28
  • If I remember correctly, a few of the major entropy sources are CPU registers that get modified very frequently to reasonably random values. Unfortunately, virtualisation negatively impacts the randomness of those registers due to more predictable scheduling of threads. A solution is to have a HRNG on the bare-metal server, then make it available to the VMs. – Polynomial Apr 30 '12 at 10:21
  • @YoavAner, the reason you are having problems is probably because you are using `/dev/random`. Don't do that. You should use `/dev/urandom`. Then you won't have those problems -- and it will be secure. See [Feeding /dev/random entropy pool?](http://security.stackexchange.com/q/89/971), [Is a rand from /dev/urandom secure for a login key?](http://security.stackexchange.com/q/3936/971), [Pseudo Random Generator is not initialized from the (entropy pool)?](http://security.stackexchange.com/q/14292/971). Short version: use /dev/urandom, not /dev/random. – D.W. Apr 30 '12 at 22:18
  • Thanks @D.W. I know that urandom should solve this, but I'm not sure all components on my system necessarily use it. For example, it must have been some component of the lighttpd web server, or openssl, or who-knows-what, that was acting up. – Yoav Aner Apr 30 '12 at 22:31
  • Thanks, @YoavAner. I set up a separate question to try to identify any configuration changes needed to avoid this situation: [What do I need to configure, to make sure my software uses /dev/urandom?](http://security.stackexchange.com/q/14386/971). – D.W. Apr 30 '12 at 22:58
  • Nice idea, and an impressive list of products, but to be honest, I still prefer installing one component (haveged) and not having to worry about it. I doubt haveged's entropy is less secure than that of urandom, but I don't have the knowledge or expertise to evaluate this. – Yoav Aner May 01 '12 at 08:38
13

I know of the audio entropy daemon, and of HAVEGE, which is used by the haveged daemon; try them out.
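
For example, on a Debian-style system (the package name and commands are assumptions; details vary by distribution):

# Install the haveged daemon, then watch the effect on the kernel's
# entropy estimate.
apt-get install haveged
cat /proc/sys/kernel/random/entropy_avail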

krempita
  • Those are unnecessary. The kernel already feeds /dev/random and /dev/urandom with sufficient entropy for these purposes. – D.W. Jan 17 '11 at 05:53
  • @D.W. It does not; they did something with the (Linux) kernel so it no longer works the way it used to (it gets depleted very fast)... – tkit Jul 14 '11 at 12:58
  • @pootzko, I don't believe it. I suspect you are misinterpreting what you are seeing. /dev/urandom never gets depleted. – D.W. Jul 15 '11 at 05:57
  • First of all, I tested all this (monitoring entropy while using different methods; when the entropy drops rapidly to some very low value, it means it got depleted). Second, I read a lot about all this at the time of asking the question - the kernel handled this differently some time ago. Third, these are your words: "/dev/random has some issues: it blocks, it depletes the entropy pool" :)) – tkit Jul 15 '11 at 06:38
  • +1 for haveged. From personal experience, primarily with virtual servers, which do not have a keyboard or mouse attached and can't easily have a microphone connected either, haveged really makes sure things run smoothly. How strong its PRNG is, it's hard for me to say, but it sounds reasonably safe (particularly compared to just relying on urandom). – Yoav Aner Apr 30 '12 at 09:22
  • /dev/urandom doesn't get depleted, but the entropy from /dev/random may. Generating a lot of crypto keys or making a lot of SSL connections both can chew up a lot of entropy from /dev/random. Haveged at least adds some more entropy to /dev/random, so that you don't have to rely on the PRNG in /dev/urandom. – Gene Gotimer Oct 17 '14 at 21:53
  • @CoverosGene /dev/urandom and /dev/random use the same PRNG. – Viktor Dahl Jun 14 '15 at 18:44
  • @ViktorDahl /dev/random blocks unless it has enough entropy to avoid relying solely on the PRNG. It really just uses the PRNG as a mixing function. – Gene Gotimer Jun 15 '15 at 17:49
  • Most useful answer here. I don't know about 4 years ago, but today it is very easy to deplete the kernel pool on servers, especially if you want some kind of prediction resistance, so this is very useful. I did some tests with haveged and it is very good; it can raise my entropy from 0 to 1300 in less than a second. I'm looking to increase this even more. – Freedo Jul 02 '15 at 05:53
8

The best value I've seen in a hardware randomness device is the Simtec Entropy Key.
It has a number of safeguards built in to protect against failure and attacks. For example, it runs the FIPS 140-2 randomness tests on each 20 kbit batch, shutting itself off if a statistically significant number of tests fail. I got one when I was doing a lot of key generation for DNSSEC research, and it greatly sped up my work. It passes all the dieharder tests. (Note: always test your randomness streams periodically, no matter what the vendor tells you. ;-)
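
For instance, one way to follow that advice, assuming the dieharder package is installed (-a runs the whole test battery, -g 200 reads raw bytes from stdin; it consumes a lot of data, so expect a long run):

# Pipe the randomness stream under test into the dieharder battery.
cat /dev/urandom | dieharder -a -g 200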

spinkham
  • HW randomness is probably overkill. /dev/urandom is perfectly adequate for this application, without anything else fed in. – D.W. Jan 17 '11 at 05:53
7

1) You don't need to add any more entropy to /dev/random, to use it for passwords. The system already does that for you.

2) To generate a random password, it's better to use /dev/urandom, not /dev/random. (/dev/random has some issues: it blocks, it depletes the entropy pool in a way that may cause other users of /dev/random to block. /dev/urandom is the better general-purpose interface.)

3) Here's a simple script I use to generate a random password. You're welcome to use it.

#!/bin/sh
# Make a 48-bit password (8 characters, 6 bits per char).
# dd reads one 512-byte block from /dev/urandom; base64 encodes it,
# head keeps the first line of the encoded output, and cut takes 8 of
# its characters (positions 4-11) as the password.
dd if=/dev/urandom count=1 2>/dev/null | base64 | head -1 | cut -c4-11
Yuri
D.W.
  • Your 1) and 2) are opposite to each other -> /dev/random depletes... That is exactly why I asked how to additionally feed it. – tkit Jul 14 '11 at 12:56
  • You are confusing two things. /dev/random is *secure* enough for password generation without additional entropy - so feeding stuff into /dev/random is not needed for security, but it might be needed for performance. /dev/urandom is *also* secure enough for passwords, but it does not have the performance problem /dev/random has, so it should be preferred. – Nakedible Aug 03 '11 at 09:19
2

I use a combination of data sources and a good hashing algorithm to generate random data.

On a web server you can combine server data (hardware, software, performance), client data (user-agent, request time, cookies, URL variables, whatever you can gather) and some external data (e.g. from random.org), mix everything with, say, sha1(mixed_data + time + some_secret_key), and you get fairly unpredictable bits of random data.
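
As a rough sketch of that mixing step (all names here are illustrative, and while this answer suggests SHA-1, a newer hash such as SHA-256 would be a better choice today):

#!/bin/sh
# Illustrative only: combine several data sources with a timestamp and
# a secret key, then hash the mixture into an unpredictable token.
SERVER_DATA="$(uptime; df)"                  # server state (illustrative)
CLIENT_DATA="$HTTP_USER_AGENT$REMOTE_ADDR"   # per-request data (illustrative)
SECRET_KEY="replace-with-a-real-secret"      # hypothetical secret
printf '%s%s%s' "$SERVER_DATA$CLIENT_DATA" "$(date +%s%N)" "$SECRET_KEY" \
    | sha1sum | cut -d ' ' -f 1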

You could also consider using P2PEG to easily collect entropy from clients and server.

DUzun
1

Passwords, if they are short, are always crackable by brute force if the speed or number of tries is not limited. If, on the other hand, tries are limited (e.g. interactive login), even a small amount of entropy is basically uncrackable: the number of tries required becomes prohibitive very quickly. For example, a password with just 30 bits of entropy, attacked through a login that allows 10 guesses per second, would take 2^30 / 10 seconds, i.e. over three years, to search exhaustively.

So, there should be no cases where getting really good entropy for passwords would matter.

So just use /dev/urandom; it's more than good enough.

The other answers given here are good comments on how to keep your /dev/random supplied with enough entropy, though, if you need it.

Nakedible
  • This answer is not accurate in practice, or is worded in a confusing way. In practice, one can choose passwords with sufficient entropy that they are not crackable by brute force. Or, to put it another way, in practice attackers are always limited in how many tries they can make, simply by the limited time available to them. I agree with the advice to use /dev/urandom instead of /dev/random, but the justification isn't right. – D.W. Jan 17 '11 at 05:48
-2

GUChaos.c retrieves random numbers from random.org and changes them on the fly through a substitution cipher before feeding /dev/random.

S.L. Barth
justin
  • [Do **not** simply trust (let’s just call them) “services” like random.org for cryptographic purposes.](http://crypto.stackexchange.com/q/1619/12164) – e-sushi Jan 21 '16 at 03:10
-2

It is odd to see so many recommendations to use /dev/urandom instead of /dev/random, because when /dev/random is depleted, /dev/urandom keeps reusing the last entropy, which is seriously insecure for long-term critical parameters.

Buktop
    You should read these recommendations! Especially [Thomas Pornin's](http://security.stackexchange.com/questions/89/feeding-dev-random-entropy-pool/7074#7074). You're misunderstanding entropy. Reading from `/dev/random` or `/dev/urandom` doesn't deplete entropy (at least not on a scale that would matter: reading 2^n bits depletes at most n bits of entropy). – Gilles 'SO- stop being evil' Aug 13 '13 at 20:40
  • Do you really think that the entropy an operating system may obtain is inexhaustible? – Buktop Aug 13 '13 at 21:12
  • "Generating true entropy in a computer is fairly difficult because nothing, outside of quantum physics, is random. The Linux kernel uses keyboard, mouse, network, and disc activities, with a cryptographic algorithm (SHA1), to generate data for the /dev/random device. One of the problems with this is that the input is not constant, so the kernel entropy pool can easily become empty. The /dev/random device is called a "blocking device". This means if the entropy pool is empty applications trying to use /dev/random will have to wait, indefinitely, until something refills the pool." – Buktop Aug 13 '13 at 21:13
  • "When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered." – Buktop Aug 13 '13 at 21:14
  • "A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver." – Buktop Aug 13 '13 at 21:15
  • In practice, yes. Once you've filled in the entropy pool once, it would take more than the lifetime of the hardware to deplete it. – Gilles 'SO- stop being evil' Aug 13 '13 at 21:17
  • Keep in mind that /dev/random and /dev/urandom provide only 160 bits of entropy for one request. – Buktop Aug 13 '13 at 21:20
  • Now work out how long it takes to deplete 160 bits of entropy. – Gilles 'SO- stop being evil' Aug 13 '13 at 21:21
  • We may mix those bits by hashing (for example) and relatively securely produce 2^48 blocks of 512 bits each. However, I would recommend using 128 bits of entropy to produce just one RSA key (3072 bits) and refreshing the entropy to generate the next key. Of course it costs, and it makes sense to implement only if you are hiding something that costs at least the same. – Buktop Aug 15 '13 at 20:40