10

Our application needs to generate secure keys for long-term storage on AWS using OpenSSL. But I've seen some instances where entropy falls dangerously low on AWS machines, and this doesn't appear to be 100% safe: people have reported low entropy in some situations.

What is the easiest way to ensure you are generating strong keys on AWS?

For example, is there a way to:

  • block execution until there is sufficient entropy, as a fail-safe
  • seed machines with more/higher quality entropy
  • supplement AWS with some other piece of hardware to provide a good source of randomness

I've seen random.org but using an external web service doesn't seem appropriate or fast enough for a production environment.

kalina
Brian Armstrong
  • If you plan on keeping the key active for a long time, you shouldn't concern yourself too much with how fast your key-generation system is: if you need 5 additional seconds to create a key that will be secure for 5 years, that's insignificant. – Stephane Oct 22 '13 at 08:43

4 Answers

5

This "low entropy" mantra is a dud. See this answer for details. The executive summary is: some software, in particular /dev/random, may report a "low entropy" and block for reasons which make no practical sense. The idea of entropy being "exhausted" comes from a mathematical model which has only tenuous links with reality, and may (theoretically) provide any security gain only when the randomness is used for "information theoretically secure" algorithms. Usual algorithms such as RSA or AES are not of these class.

So what you need to do is the following:

  • OpenSSL should normally use /dev/urandom, avoiding the problem. If the problem occurs, it will manifest itself as "blocking" behaviour (key generation takes an abnormal amount of time, with no CPU used).
  • If the problem occurs, "refill" /dev/random with fresh pseudo-randomness, which is good enough (see the sketch after this list). Something like: dd if=/dev/urandom of=/dev/random bs=1024 count=64
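As a minimal sketch (assuming a Linux instance; the 1000-bit threshold and the 64 KiB of data are arbitrary illustrative choices, not values from the answer), you could check the kernel's entropy estimate and only perform the refill when it runs low:

    #!/bin/sh
    # Check the kernel's current entropy estimate (in bits).
    avail=$(cat /proc/sys/kernel/random/entropy_avail)
    echo "entropy_avail: $avail bits"

    # If the estimate is low, mix 64 KiB of pseudo-randomness back into
    # the pool, as suggested above. Caveat: writing to /dev/random mixes
    # data into the pool, but on many kernels it does not credit the
    # entropy counter; the RNDADDENTROPY ioctl would be needed for that.
    if [ "$avail" -lt 1000 ]; then
        dd if=/dev/urandom of=/dev/random bs=1024 count=64
    fi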

There is, however, another issue which is not so easy to deal with. Cloud systems are virtual machines, which may run concurrently with other virtual machines on the same hardware. When two execution threads run on two cores of the same CPU, even if they relate to distinct virtual machines, they share the level-1 cache and may potentially spy on each other. The technique involves doing memory accesses in an array which uses the same cache lines as the target code, and reconstructing the target secrets by noticing when some array elements have been evicted from cache.

Working prototypes have been demonstrated in lab conditions. Whether they may be applied in a practical setup depends on a lot of parameters, and is currently unknown. However, it is plausible. The conclusion is that if you do something serious with keys, then you should strive to run on your own hardware, or at least obtain some guarantee from the cloud provider that your virtual machines will not share the same hardware servers as virtual machines from other customers.

Tom Leek
3

Old thread, but it's a good question, and the few answers here are not always specific to AWS machines or even cloud VMs, as mentioned in the question. So I'll suggest taking a look at a recommendation from the Lemur team that applies to wider scenarios: "The amount of effort you wish to expend ensuring that Lemur has good entropy to draw from is up to your specific risk tolerance and how Lemur is configured. If you wish to generate more entropy for your system we would suggest you take a look at the following resources:" and they proceed to recommend haveged.
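For example, on a Debian- or Ubuntu-based instance, installing and enabling the daemon might look like this (a sketch; package and service names assume the distribution's standard haveged packaging):

    # Install the haveged entropy daemon.
    sudo apt-get update && sudo apt-get install -y haveged

    # Start it now and on every subsequent boot.
    sudo systemctl enable --now haveged

    # Check the effect on the kernel's entropy estimate.
    cat /proc/sys/kernel/random/entropy_avail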

For those interested in reading more about the issues with low entropy, the following paper, An Analysis of OpenSSL's Random Number Generator, and its references are worth a read.

The ability to generate high entropy random numbers is crucial to the generation of secret keys, initialization vectors, and other values that the security of cryptographic operations depends on.

This article, Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances, about Ubuntu's efforts towards improving entropy in the cloud, describes the issues and possible solutions (pollen) in much more detail:

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs. Computers are inherently deterministic, and thus bad at generating randomness. Real hardware can provide quality entropy, but virtual machines are basically clones of one another.

The Ubuntu solution still may not meet the OP's requirements, as it is an "entropy-as-a-service" solution; however, according to the developer, it is "fast, efficient, and scalable."
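On Ubuntu, the client side of this service is the pollinate tool. A rough sketch of manually seeding an instance from Ubuntu's default entropy server (pollinate normally runs once per boot on cloud images, so this may be a no-op on an already-seeded instance):

    # Install the client (preinstalled on Ubuntu cloud images).
    sudo apt-get install -y pollinate

    # Fetch a seed over TLS from the default entropy server and feed it
    # into the kernel's pool.
    sudo pollinate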

To move from theory to practice, check out Prangster if you are interested in assessing how good your PRNG is:

Now our goal is to determine the seed that produced a given sample of pseudorandom output, and in doing so, conclude with certainty the use of an insecure PRNG and prepare to attack the application that uses it. This is the primary function of the Prangster tool; all it needs is the output and the right PRNG and alphabet.

michaelok
  • How do you respond to the top-voted answer? If it is limited, can you describe why? – schroeder Jun 23 '17 at 17:14
  • It referenced another answer which wasn't specific to the scenario from the OP's question. I've updated my answer to explain that, and added another possible solution. – michaelok Jun 24 '17 at 21:50
1

In cloud environments like AWS, you are in a bit of a sticky situation if you want to generate high-quality random numbers locally on your instance, because of issues like the ones @michaelok raised.

Instead, you should consider AWS Key Management Service - this is exactly what it was designed for. It wraps a strong Hardware Security Module with a simple API: you just call an API like GenerateRandom. The API is protected by the same IAM security model as all AWS services.
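A quick sketch with the AWS CLI (the 32-byte length and the output handling are illustrative; it assumes credentials permitted to call kms:GenerateRandom):

    # Ask KMS's HSM-backed generator for 32 random bytes. The API
    # returns them base64-encoded in the Plaintext field.
    aws kms generate-random --number-of-bytes 32 \
        --output text --query Plaintext | base64 --decode > key.bin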

I'm sure other clouds have similar functionality.

schroeder
eddydee123
0

There are a lot of misconceptions and myths about /dev/random and /dev/urandom. It's not true in general that using /dev/urandom is always good enough and that /dev/random is only for the paranoid.

The key here is the total amount of entropy collected during the lifespan of the instance before you start to generate numbers. That's what matters most. After the instance is spawned (from an AMI), the random generator needs to accumulate enough entropy to start producing unguessable numbers.

Once there is enough entropy, you can generate pretty much "any" amount of crypto-strong data from /dev/urandom. If the entropy collected is not enough, however, then there is no other solution than to get it first.
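One way to act on this (a sketch, assuming a reasonably modern Linux kernel; the exact log message varies across kernel versions) is to hold off key generation until the kernel reports that its generator has been seeded:

    #!/bin/sh
    # Kernels since about 4.8 log "random: crng init done" once enough
    # entropy has been gathered after boot; wait for that before
    # generating long-term keys.
    until dmesg | grep -q 'crng init done'; do
        echo "waiting for the kernel RNG to initialize..."
        sleep 1
    done

    # Now generate the long-term key.
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem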

That /dev/random decrements an entropy counter is a little bit paranoid. It is a must-have property of a good random generator that you cannot guess the next generated number even if you publicize any amount of previously generated numbers. To achieve that, generators use exactly the same constructs you find in encryption algorithms (AES, SHA, etc.). Therefore, if you don't trust your random generator, you cannot trust your encryption either.

Viliam