The commonly used secure RNGs, like Linux's /dev/random, ChaCha20, or RdRand, work well for many conventional cases. However, they're far from idiot-proof. Say you do something funny like failing to set up your real-time clock before generating random numbers at boot. If you don't understand how that affects your RNG, someone who does might walk off with your private key. There's little room for error here, because a small amount of non-randomness can compromise an entire cryptographic protocol like key generation.
While naïve roll-your-own random number generators and physical interference with hardware make for good discussions, most RNG vulnerabilities in the news, like the Debian issue you mentioned, are not caused by them. The biggest issues I've seen repeatedly are developers thinking they have a good source of entropy to seed the generator when they actually don't, mistakenly allowing the state of the generator to be discovered and exploited, or failing to rigorously test the generator itself. The NSA doesn't need to backdoor your key generation if you're one of the 0.75% of TLS clients using low-entropy keys. In short, developers ignore the few warnings they get, if any, and assume their RNG will work in any application.
What's entropy and where do I get some?
Since a computer program produces the same outputs given the same inputs, it must read from a source of entropy (unpredictable data) in the operating system or hardware. Nowadays we have things like the RdRand instruction, which can generate tens or hundreds of MB of entropy every second. However, devices with hardware random number generators, like the Ferranti Mark 1 in 1951 or the Intel 82802 Firmware Hub in 1999, were the exception rather than the rule until the 2010s.
So historically, random number generators relied on relatively slow entropy sources like human input or machine timings, and legacy systems might offer almost no built-in functions with good sources of entropy. Linux's /dev/random, for example, may use startup clock time, the timing of human input devices, disk timings, IRQ timings, and even modification of the entropy pool by other threads.
In many ways random number generators are fragile because these standard ways of getting entropy are not fool-proof. Anything that makes these entropy sources predictable or limited will compromise your RNG, for example:
- The Debian bug you noted used only the Process ID for entropy.
- If you use a headless, pre-configured operating system that generates keys at boot, many of Linux's entropy sources could be predictable (source).
- Android's Java Cryptography Architecture was found to require explicit initialization from a good entropy source on some devices.
- Requesting random numbers too quickly in Linux can drain the entropy pool faster than it is replenished, leading to blocking reads from /dev/random or to weaker, less-random fallbacks.
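To make the first bullet concrete, here's a minimal sketch of why PID-only entropy is fatal. Python's seedable `random` module stands in for the vulnerable generator (the actual Debian bug was in OpenSSL); the point is that the entire seed space can be enumerated:

```python
import random

# Sketch of the Debian failure mode: if the only entropy is a process
# ID (at most 32768 values on classic Linux), an attacker can simply
# enumerate every possible "random" key.
def weak_key(pid):
    return random.Random(pid).getrandbits(128)

target = weak_key(12345)   # the victim's key; the PID is "secret"

# Brute force the entire PID space to recover it.
recovered = next(pid for pid in range(1, 32768) if weak_key(pid) == target)
print(recovered)  # 12345
```

A 128-bit key sounds unguessable, but here only 32767 distinct keys can ever exist, so the search finishes in well under a second.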
Figuring out the state and lack of reseeding
Often RNGs don't get new entropy with every function call like /dev/random does. Sometimes you can't get enough entropy fast enough, or you don't trust the entropy source completely. So instead the RNG is seeded with a known source of entropy, then produces independent values from that seed. However, when someone figures out the internal state of the generator things go poorly, leading to everything from cloning smart cards to cheating a slot machine in Vegas.
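Here's a quick sketch of why state disclosure is game over, using Python's `random` module (a Mersenne Twister) as the illustrative generator — the `getstate` call stands in for whatever leak the attacker actually uses:

```python
import random

# Sketch: once the internal state of a non-cryptographic PRNG leaks,
# every future output is predictable. Python's `random` is a Mersenne
# Twister, so its full state can be captured and replayed exactly.
victim = random.Random()
leaked_state = victim.getstate()    # stand-in for a memory disclosure

attacker = random.Random()
attacker.setstate(leaked_state)     # attacker clones the generator

predicted = [attacker.getrandbits(32) for _ in range(5)]
actual = [victim.getrandbits(32) for _ in range(5)]
print(predicted == actual)  # True
```

No further interaction with the victim is needed: the clone produces the victim's future outputs forever, or until the victim reseeds.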
A buffer overflow or similar attack can reveal the state of the random number generator. Learning the state may also be possible with a brute-force attack, especially if the algorithm is known and is reversible, can be computed quickly, or has a known plaintext. This was the case for issues with Windows XP, the Dropbear SSH library, XorShift128+ in Chrome, and the Mersenne Twister algorithm, among many others.
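As a sketch of the "can be computed quickly" brute-force case: if a generator is seeded from a coarse timestamp and the attacker knows roughly when it ran, the effective seed space is tiny. Python's `random` again stands in for the weak generator, and the one-hour search window is an illustrative choice:

```python
import random
import time

# Sketch: a token derived from a second-granularity timestamp seed.
gen_time = int(time.time())                      # the "secret" seed
token = random.Random(gen_time).getrandbits(64)

# The attacker only knows the generation time to within an hour, so
# they try every seed in a +/- 1 hour window around their estimate.
guess = gen_time + 1234                          # rough estimate
recovered = next(
    t for t in range(guess - 3600, guess + 3600)
    if random.Random(t).getrandbits(64) == token
)
print(recovered == gen_time)  # True
```

That's only 7200 candidate seeds — with a fast, known algorithm, the search is instantaneous.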
Requiring advanced mitigations for these known-state attacks makes an RNG fragile. The best way to mitigate known-state attacks is to avoid vulnerable algorithms in the first place (i.e., use a CSPRNG). This question also explains in more detail exactly what makes a good RNG secure. However, even CSPRNGs sometimes have weaknesses (for example, the RNG vulnerability in the Linux 2.6.10 kernel). So defense in depth requires mitigations like keeping separate states for random number generators (perhaps one per user), refreshing the seed frequently, and thorough protections against side-channel attacks and buffer overflows.
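In Python, for instance, the "don't use a vulnerable algorithm" advice mostly boils down to drawing from the operating system's CSPRNG via the `secrets` module, rather than the seedable, cloneable Mersenne Twister in `random`. A minimal sketch:

```python
import secrets

# `secrets` reads the OS CSPRNG (getrandom()/urandom under the hood),
# so there is no application-level state for an attacker to capture
# and replay, unlike the `random` module.
session_key = secrets.token_bytes(32)    # 256 bits of key material
reset_token = secrets.token_urlsafe(16)  # e.g. for password-reset links

print(len(session_key))  # 32
```

The trade-off is that you give up reproducibility — which is exactly the property an attacker exploits in a seedable generator.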
Passing blame between developers and users
Often these RNGs are fragile because of miscommunication of limitations between library developers or OS creators, who can't design a fool-proof system, and users, who expect one. Linux, for example, forces users to choose between the high-latency /dev/random and the potentially low-entropy /dev/urandom. As another example, PHP prior to 5.3 had no support for strong PRNGs on Windows through interfaces such as mcrypt_create_iv(), and prior to 7.0 it didn't have a good built-in CSPRNG.
Difficulty in Detection
A popular talking point when discussing random numbers is that, for a truly random sequence, every possibility is equally likely and there are an infinite number of potential patterns. So how can you look at a sequence and say it isn't random? (relevant Dilbert)
In reality, detecting patterns in random numbers is a mature, if imperfect, field, and the question of whether non-randomness can be detected has been addressed since M.G. Kendall and B. Babington-Smith's 1938 paper. You can demonstrate that specific kinds of patterns are not significantly more likely to appear than random chance would allow. For example, I can check whether the digit 1 is more common than the other digits, with thresholds determined by a chi-squared test. As long as the tested patterns are at least remotely likely and you check a long enough run of generated numbers, the odds of a false positive are low. While some hidden issues with random number generators can go undetected for years, if you've done basic cryptanalysis and then apply industrial-grade testing as covered in this question, you can't go too wrong.
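Here's a minimal sketch of that digit-frequency check in Python. The 10,000-digit sample and the 20% bias are illustrative choices; 16.92 is the chi-squared critical value for 9 degrees of freedom at the 0.05 significance level:

```python
import random

# Chi-squared goodness-of-fit statistic for digit frequencies: sums
# (observed - expected)^2 / expected over the ten digits.
def chi_squared_digits(digits):
    expected = len(digits) / 10
    return sum((digits.count(d) - expected) ** 2 / expected
               for d in range(10))

rng = random.Random(0)  # fixed seed so the demo is repeatable
uniform = [rng.randrange(10) for _ in range(10_000)]
# A stream where the digit 1 appears ~28% of the time instead of 10%.
biased = [1 if rng.random() < 0.2 else rng.randrange(10)
          for _ in range(10_000)]

# The uniform stream's statistic should sit near its expectation of 9;
# the biased stream's lands in the thousands, far above 16.92.
print(f"uniform: {chi_squared_digits(uniform):.1f}")
print(f"biased:  {chi_squared_digits(biased):.1f}")
```

Note that this test only catches the specific bias it looks for — a stream like 0, 1, 2, …, 9, 0, 1, … passes a digit-frequency test perfectly, which is why test suites combine many independent checks.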
However, designers may also just underestimate their attackers (how were you supposed to predict people would reverse-engineer and time your slot machine?). Worse, sometimes the random-number generator or entropy generation is never inspected by an expert, and only the outcome of the RNG's use is examined, like when PS3 firmware signatures were signed with a constant "random" output.
At the end of the day, the issue here is similar to that in much of cybersecurity: you have a very complex set of protocols, requirements, and devices for random numbers. As always, if you don't understand the complexity, you're vulnerable to an attacker who does.