void
vncRandomBytes(unsigned char *bytes)
{
    int i;
    unsigned int seed = (unsigned int) time(0) + getpid() + rand();

    srand(seed);
    for (i = 0; i < CHALLENGESIZE; i++) {
        bytes[i] = (unsigned char)(rand() & 255);
    }
}

Can the above code be considered a secure random string generator?

user236501
  • No, it fails in both relevant ways. 1) Your seed is small and guessable. At best 32 bits of entropy, probably less. 2) The PRNG algo sucks. You can probably predict future outputs by observing a few bytes. – CodesInChaos Jan 18 '13 at 17:30
  • Take a look at my answer about VNC: http://security.stackexchange.com/questions/26155/what-are-realvnc-5-0-authentication-protocol-security-limitations/26164#26164 – F. Hauri - Give Up GitHub Jan 19 '13 at 12:42

2 Answers


No, sorry, your generator is awfully bad. You seed it with values which do not have much entropy (the PID fits in 16 bits...) or are highly guessable (the "current time" is not exactly a secret value; everybody can know it with remarkable accuracy). And the generator hiding under rand() is known to be breakable (it depends on the actual operating system, but many systems use a linear congruential generator, which can easily be inverted).

You will get something which vaguely looks randomish at first sight, but it will not successfully resist attempts at cracking it, i.e. predicting future output of the generator. Resisting cracking attempts from smart attackers is what security is about. Your generator might be appropriate for some non-security-related usages, though (although rand() is known to be quite bad at those too).
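
To make the "guessable seed" point concrete, here is a rough sketch (illustrative only, not part of the original answer) of how an attacker who has observed a few challenge bytes, and who knows roughly when they were produced, could brute-force the seed; the function recover_seed and its window parameter are hypothetical names:

#include <stdlib.h>

/* The seed is roughly time + pid + rand(), so its search space is tiny.
 * Try every candidate seed in a window around the attacker's estimate
 * and check it against the observed challenge bytes. */
int recover_seed(const unsigned char *observed, size_t n,
                 unsigned int approx_seed, unsigned int window,
                 unsigned int *found)
{
    for (unsigned int s = approx_seed - window; s <= approx_seed + window; s++) {
        srand(s);
        size_t i;
        for (i = 0; i < n; i++) {
            if ((unsigned char)(rand() & 255) != observed[i])
                break;
        }
        if (i == n) {       /* every observed byte matched this seed */
            *found = s;
            return 1;
        }
    }
    return 0;
}

A successful match lets the attacker re-run the generator and predict every remaining byte of the challenge.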

For correct random number generation, you need:

  1. an initial seed with enough entropy;
  2. a cryptographically secure PRNG which will expand that seed into a long stream of bytes which will be indistinguishable from randomness (even in the eyes of a smart, powerful individual with big computers who is intent on predicting the next random byte).

The initial seed must come from some hardware events ("physical" randomness), because software is, down to the transistor level, deterministic. The operating system is in an ideal position to gather physical events (that's its job, after all). Therefore, use the OS; a minimal sketch of the /dev/urandom case follows the list below. This means:

  • On Unix-like systems (like Linux, FreeBSD, Solaris, MacOS X...), use /dev/urandom (not /dev/random; see this answer for details).
  • On Windows (Win32), call CryptGenRandom().
  • On Java, use java.security.SecureRandom.
  • On .NET, use System.Security.Cryptography.RNGCryptoServiceProvider.
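
For instance, here is a minimal sketch of the Unix case (illustrative only, not part of the original answer; the function name secure_random_bytes is hypothetical and error handling is reduced to a return code):

#include <stdio.h>

/* Fill `len` bytes from the kernel CSPRNG via /dev/urandom.
 * Returns 0 on success, -1 on failure; on failure the caller should
 * abort rather than fall back to rand(). */
int secure_random_bytes(unsigned char *bytes, size_t len)
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL)
        return -1;
    size_t got = fread(bytes, 1, len, f);
    fclose(f);
    return (got == len) ? 0 : -1;
}

On Windows, the same buffer would be filled with CryptGenRandom() instead.
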
Thomas Pornin
  • 2
    It's probably not "his" PRNG. Seems to be from a software called TightVNC: http://fossies.org/dox/tightvnc-1.3.10_unixsrc/vncauth_8c_source.html – CodesInChaos Jan 18 '13 at 19:06
  • Yeah, this is found in the VNC source code; I was just wondering whether this random challenge is secure enough or not. – user236501 Jan 19 '13 at 03:32

Unfortunately this will not generate high-entropy material. Pretty much any kind of pseudo-random generation in software (such as rand()) is not going to be suitable for generating good quality random bytes.

Most operating systems come with some kind of maximum-entropy stream, backed by entropy-generating hardware; for example, /dev/random on Unix-based machines. Reading from that stream is the closest you are likely to get to 1 bit of entropy per bit of data.

Edit: My esteemed colleagues have pointed out that usage of /dev/urandom is acceptable for high-entropy byte generation. It doesn't block in the way that /dev/random does, so it performs far better if you need to generate a large number of bytes.

lynks
  • Prefer `/dev/urandom` for most applications. *Much* faster, and the security difference after the initial seeding is rather theoretical. – CodesInChaos Jan 18 '13 at 17:47
  • @CodesInChaos `urandom` is for non-blocking reads; it is not a maximum-entropy stream. `random` blocks until the entropy pool is ready to supply another random byte. `urandom` should not be used for key generation. – lynks Jan 18 '13 at 17:49
  • 2
    After initial seeding the difference between them is rather theoretical. You will find no test that can distinguish them with reasonable computational power. Unfortunately both are flawed, in opposite ways. `urandom` doesn't block when it has insufficient initial entropy, and `random` blocks when it has lots of entropy, because it assumes that reading data depletes entropy. In practice you want a source that blocks until it has sufficient initial entropy, and then never blocks again. | I'd only use `/dev/random` for long-term keys, but not for normal keys. – CodesInChaos Jan 18 '13 at 17:54
  • I second @CodesInChaos; use `/dev/urandom`, not `/dev/random`. The idea of `/dev/random` being somewhat better than `/dev/urandom` is a myth coming from the (wrong) idea that entropy is like gasoline. See [this answer](http://security.stackexchange.com/a/3939/655) for details. – Thomas Pornin Jan 18 '13 at 18:01
  • Reading a 256 bit key from `/dev/urandom` will happen so quickly that the entropy pool will hardly have a chance to change during the process. What ends up happening is you generate your entire key from a seed that equals the size of the entropy pool. – lynks Jan 18 '13 at 18:01
  • @lynks "generate your entire key from a seed that equals the size of the entropy pool" And how is that bad? Once you have filled `urandom` with 256 bits of entropy that's enough forever. I only worry about those times where `urandom` wasn't filled yet but still doesn't block or when it's reset to an earlier state(during boot, after hibernation, VM cloning, after crashes etc.) – CodesInChaos Jan 18 '13 at 18:04
  • @CodesInChaos I'm happy to stand corrected on this one, I'm just repeating what I was taught in the past. I remember being told that the entropy pool is never large enough to perform well as a 'snapshot'. – lynks Jan 18 '13 at 18:08