64

In Information and IT Security there is a nasty tendency for specific "best practices" to become inviolable golden rules, which then leads to people recommending that they be applied regardless of whether they are appropriate for a given situation (similar to Cargo Cult Programming).

A good example of this is the common approach to password policies, which applies a one-size-fits-all set of rules: an 8-character minimum length, high complexity requirements, a history of the last 12 passwords to stop re-use, lockout after 3 incorrect attempts, and 30-day rotation.

The 30-day rotation is intended to lower the window of opportunity for an attacker to use a stolen password. However, it is likely to lead users to choose sequential passwords, meaning that if an attacker can crack one instance they can easily work out the others, actually reversing the intended security benefit.

The high length and complexity requirements are intended to stop brute-force attacks, but each kind of brute force has a better mitigation. Online brute-force attacks are better addressed with a combination of sensible lockout policies and intrusion detection. Offline brute force usually occurs when an attacker has compromised the database containing the passwords, and is better mitigated by using a good storage mechanism (e.g. bcrypt, PBKDF2). An unintended side effect of complexity requirements is that users will find one pattern that satisfies them, and the risk of users writing passwords down increases.
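As a sketch of the storage point above, a password can be hashed with a deliberately slow, salted key-derivation function from Python's standard library (the iteration count and function names here are illustrative; tune the count to your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a storage hash with PBKDF2-HMAC-SHA256 and a random per-user salt."""
    if salt is None:
        salt = os.urandom(16)  # the salt is not secret; store it alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=600_000):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

The point of the iteration count is to make each guess expensive for an attacker who has stolen the database, which a single fast hash does not.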

The 3-incorrect-attempt lockout policy is intended to stop online brute-force attacks, but setting the threshold too low increases account lockouts and overloads helpdesks, and also creates a denial-of-service risk (many online systems have easily guessed username structures like firstname.lastname, so it's easy to lock users out).

What are other examples of Cargo-Cult security which commonly get applied inappropriately?

Noam M
  • 107
  • 1
  • 8
Rory McCune
  • 61,541
  • 14
  • 140
  • 221
  • 18
    This sounds suspiciously like a discussion question, Rory. – tylerl Sep 16 '13 at 16:46
  • Sorry, Rory. This is indeed a discussion question. The closest closure reason is opinion-based. One way I could see this is as a CW at best. – Adi Sep 16 '13 at 16:50
  • 31
    @tylerl Adnan oh come on, live a little. Break some rules, pee in the shower. – rook Sep 16 '13 at 16:59
  • 9
    @Rook: It's all pipes! What's the difference?! – Scott Pack Sep 16 '13 at 20:56
  • @tylerl Yeah, but an objectively (mostly) answerable discussion question, with especially useful answers about the pitfalls that people fall into in information security. – Chris Cirefice Aug 04 '17 at 03:07

8 Answers

47
  • Closed source is more secure than open source, since with open source attackers can view the source code and find and exploit vulnerabilities. While I'm not claiming this is always false, with open source software it's at least possible for outside experts to review the software looking for gaping vulnerabilities/backdoors and then publicly patch them. With closed source software that simply isn't possible without painstakingly disassembling the binary. And while you and most attackers may not have access to the source code, there likely exist powerful attackers (e.g., the US gov't) who may be able to obtain the source code or inject secret vulnerabilities into it.

  • Sending data over a network is secret if you encrypt the data. Encryption needs to be authenticated to prevent an attacker from altering your data, and you need to verify the identity of the other party you are sending information to, or else a man-in-the-middle can intercept and alter your traffic. Even with authentication and identification, encryption often leaks information. You talk to a server over HTTPS? Network eavesdroppers (anyone at your ISP) know exactly how much traffic you sent, to what IP address, and the size of each response (e.g., you can fingerprint various webpages based on the size of all the resources transferred). Furthermore, especially with AJAX web sites, the information you type in often leads to a server response that's identifiable by its traffic patterns. See Side-Channel Leaks in Web Applications.
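To illustrate the "encryption needs to be authenticated" point, here is a minimal sketch (not the scheme from the cited paper; function names are illustrative) of attaching and verifying an HMAC tag over a ciphertext with Python's standard library. In practice, prefer an authenticated mode such as AES-GCM rather than composing primitives yourself:

```python
import hashlib
import hmac

def tag_message(key, ciphertext):
    """Compute an authentication tag over the ciphertext (encrypt-then-MAC order)."""
    return hmac.new(key, ciphertext, hashlib.sha256).digest()

def verify_message(key, ciphertext, tag):
    """Reject any message whose tag does not match -- this detects tampering
    that plain (unauthenticated) encryption would silently accept."""
    return hmac.compare_digest(tag_message(key, ciphertext), tag)
```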

  • Weak Password Reset Questions - How was Sarah Palin's email hacked? A person went through the password reset procedure and could answer every question correctly from publicly available information. What password reset questions would a Facebook acquaintance be able to figure out?

  • System X is unbreakable -- it uses 256-bit AES encryption and would take a billion ordinary computers a million billion billion billion billion billion years to likely crack. Yes, it can't be brute-forced as that would require ~2^256 operations. But the password could be reused or in a dictionary of common passwords. Or you stuck a keylogger on the computer. Or you threatened someone with a $5 wrench and they told you the password. Side-channel attacks exist. Maybe the random number generator was flawed. Timing attacks exist. Social engineering attacks exist. These are generally the weakest links.

  • This weak practice is good enough for us; we don't have time to wait to do things securely. The US government doesn't need to worry about encrypting the video feeds from their drones -- who would know to listen on the right carrier frequencies? Besides, encryption boxes would be heavy and costly -- why bother?

  • Quantum computers can quickly solve exponential-time problems and will break all encryption methods. People read popular science articles on quantum computers and hear they are these mystical super-powerful entities that will harness the computing power of a near-infinite number of parallel universes to do anything. It's only partly true. Quantum computers will allow factoring and discrete logarithms to be done in polynomial time O(n^3) via Shor's algorithm, rendering RSA, DSA, and encryption based on those trap-door functions easily breakable. Similarly, quantum computers can use Grover's algorithm to brute-force a password that should take O(2^N) time in only O(2^(N/2)) time, effectively halving the key length of a symmetric key -- granted, Grover's algorithm is known to be asymptotically optimal for quantum computers, so don't expect further improvement.
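The quadratic speed-up in the last bullet can be checked with a line of arithmetic: Grover search over N = 2^k keys takes on the order of sqrt(N) = 2^(k/2) evaluations, so a 128-bit key offers roughly 64-bit security against a quantum adversary:

```python
import math

classical_ops = 2 ** 128                 # brute force over all AES-128 keys
grover_ops = math.isqrt(classical_ops)   # Grover: ~sqrt(N) evaluations

# The quadratic speed-up halves the effective key length, not the work factor
assert grover_ops == 2 ** 64
```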

techraf
  • 9,149
  • 11
  • 44
  • 62
dr jimbob
  • 38,936
  • 8
  • 92
  • 162
  • 17
    I'd add a corollary to your first, and that is "Open source is more secure than closed source because there are lots of people scrutinizing the code for issues." – Xander Sep 16 '13 at 17:25
  • @Xander - I'm not sure I'm willing to go that far (even though in practice, I agree, and if I was choosing between open and closed source for something high-security, I'll prefer the open version as I can review it for major holes). There are some examples of major security holes existing in open source code for years. The true weakness of open source is that attackers could deliberately introduce subtly flawed code that slips past the maintainers. Probably best would be to have in-house closed source developed by teams of security experts/auditors (you can view, but no one else can). – dr jimbob Sep 16 '13 at 17:47
  • 9
    "My computer is invulnerable because it isn't/has never been/will not ever be connected to the internet." – AJMansfield Sep 16 '13 at 19:35
  • 3
    @drjimbob Or software with open source but completely developed in-house without using a mess of contributors (c.f. Red Hat Enterprise Linux). – ithisa Sep 16 '13 at 20:32
  • 2
    @Xander Case in point: [ProFTP](http://www.zdnet.com/blog/security/open-source-proftpd-hacked-backdoor-planted-in-source-code/7787) – Phil Sep 16 '13 at 20:44
  • 4
    I'd add another corollary: *popular* open source is more secure than closed, I could create my own crypt suite and open source it, but that is likely to be insecure because no one would care about it enough to try and improve it – ratchet freak Sep 17 '13 at 08:59
  • "effectively halving the security of a symmetric key". Not quite. It is an exponential decrease. A 64-bit key is much more than just "half as secure" as a 128-bit key. A 2048-bit key is also much less secure than a 4096-bit one, but that is still nearly impossible to bruteforce. – Anorov Sep 19 '13 at 07:43
  • @Anorov - Sorry, I guess you were confused by my poor word choice: "effectively halves the key size of a symmetric key". Grover's algorithm allows quantum computer to reduce brute forcing AES-128 to an 2^64 time operation; so instead of N=O(2^128) to brute-force it takes sqrt(N)=O(2^64) time (quadratic speed-up). Second, note the *symmetric* key part (RSA is asymmetric). 2048-bit RSA/DSA key against quantum computers is only 8 times stronger than 4096-bit key due to Shor algorithm. Grover's algorithm doesn't make sense for RSA - you brute force by factoring -- not trying every key. – dr jimbob Sep 19 '13 at 13:06
  • @Anorov - Against conventional computers, factoring with the L(1/3) general number field sieve means 4096/2048/1024-bit RSA keys are equivalent to 156/117/87-bit symmetric keys, so yes, RSA-4096 should be ~2^39 times harder than RSA-2048. Granted, against an L(1/4) sieve for factoring, which some experts believe may be publicly discovered soon based on [recent discrete log work](http://eprint.iacr.org/2013/095), 4096/2048/1024-bit RSA reduces to 96/75/59-bit symmetric keys. – dr jimbob Sep 19 '13 at 13:12
  • @ratchetfreak You mean something like that obscure OpenSSL heartbleed thing, don't you? – Pavel Apr 28 '16 at 13:21
  • Some of the more popular open source out there has proven in the last couple years to have glaring security issues despite being open and having the **possibility** of lots of people scrutinizing it. Some of it has even had severe issues that we find have been there for a decade or more. Though there is the ability for it to get patched fairly immediately once the issue is found. **ALL** open and closed source software has issues and that fact will never go away. – Fiasco Labs Apr 29 '16 at 03:54
28

Some examples:

  • Bigger keys. 4096-bit RSA, 256-bit AES... more bits are always better. (See the comments: there is no point in having keys bigger than the size which ensures the "cannot break it at all" status, but bigger keys imply network and CPU overhead, sometimes in large amounts.)

  • Automatic enforcement of "safe functions" like snprintf() instead of sprintf() (it won't do much good unless the programmer tests for possible truncation, and it won't prevent using a user-provided string as a format string). Extra points for strncpy(), which does not do what most people seem to assume (in particular, it does not guarantee a final '\0').

  • "Purity of the Security Manager". As an application of the separation of duties and roles, all "security-related" decisions should be taken by a specialist in security, who is distinct from the project designers and developers. Often taken to the misguided extreme, where the guy who decides what network ports should be left open on any firewall has no knowledge whatsoever about the project, and deliberately refuses to learn anything in that respect, because independent decision is more important than informed decision.

Tom Leek
  • 170,038
  • 29
  • 342
  • 480
  • 5
    Could you elaborate on why a larger key may not be better than a smaller key? – Phil Sep 16 '13 at 18:31
  • 6
    Larger keys imply higher bandwidth and CPU usage, and may imply interoperability issues as well; beyond the sizes where the key, by itself, is "unbreakable with Earth-based technology", extra length is just dead weight. RSA-2048 and AES-128 are already at that point. – Tom Leek Sep 16 '13 at 18:46
  • 1
    I see. If I were you, I'd add that to your answer. – Phil Sep 16 '13 at 20:37
  • 2
    I agree with your sentiment that more bits isn't always better -- [sometimes it's weaker: reduced-round AES-256 is weaker against some attacks than AES-128](http://eprint.iacr.org/2009/374.pdf). But when the extra milliseconds of CPU time are not an issue (e.g., encrypting a few files), more bits don't necessarily hurt and may protect against advances in attack methods. RSA-2048 is comparable to 112-bit strength assuming an L(1/3) = e^O(N^1/3) general number field sieve, though [recent math breakthroughs reduced that to L(1/4) = e^O(N^1/4) for discrete log](http://eprint.iacr.org/2013/095). – dr jimbob Sep 17 '13 at 05:45
  • 2
    Some expect this will be extended to factoring, which would leave RSA-2048 only ~75-bit equivalent strength (making the sketchy assumption of same constants as 112-bit analysis in our big-O analysis) while RSA-4096 would be ~96-bit strong. RSA-768 was broken once with a distributed effort and had 76-bit of security using the old L(1/3) sieve. Similarly, one could choose AES-256, if you fear your adversaries will build a quantum computer at some point and be able to break AES-128 in 2^64 time with Grover's algorithm. Again, agree *always* use bigger is wrong, but sometimes it makes sense. – dr jimbob Sep 17 '13 at 05:48
  • 1
    @drjimbob that's the point of cargo culting, ignoring the reasons or context that make a given statement valid.... – AviD Sep 18 '13 at 07:38
  • And that's why `strlcpy` should be preferred over `strncpy` when copying whole strings. – Anorov Sep 19 '13 at 07:46
24

I'll add my own appsec examples that I have seen while consulting:

  • "I'll email you an encrypted zip and include the password in the same email..." This has happened to me more than once. A locked door won't stay locked if you leave the key in the door.
  • "But you couldn't have gotten SQL injection and SMTP injection, we called sanitize() on everything!" There is no way to make a variable safe for every use; you need to use the right sanitisation routine for the job.
  • "We cannot be hacked because we only use XXX platform/language/OS". Every platform has security problems, period.
  • "We have a yearly security assessment, you won't be able to find anything." Frequency != Quality. Having frequent assessments is a good thing, but this does not guarantee anything!
  • "We have a WAF, which means we don't have to actually patch anything." Yeah, so this happens... I had a client that didn't patch known CSRF vulnerabilities, because they assumed the WAF would be able to stop these attacks. (No WAF can do this. I once found a WAF that claimed it could "prevent all of the owasp top 10", and the WAF's HTTP management interface was vulnerable to CSRF.)
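A sketch of the "right sanitisation routine for the job" point above: the same input needs different handling in different contexts. HTML output wants escaping, while SQL wants parameterised queries, and neither substitutes for the other (the input string below is illustrative):

```python
import html
import sqlite3

user_input = "<script>alert(1)</script>' OR '1'='1"

# HTML context: escape metacharacters before rendering
safe_html = html.escape(user_input)
assert "<script>" not in safe_html

# SQL context: never build queries by string concatenation --
# bind the value as a parameter instead
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
assert rows == []  # the malicious string matches nothing instead of injecting
```

Note that running `html.escape()` on a value destined for SQL (or vice versa) would accomplish nothing, which is exactly why a single `sanitize()` call is a false comfort.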
rook
  • 47,004
  • 10
  • 94
  • 182
  • Wouldn't it be possible for a WAF to inject & verify CSRF tokens automatically? – SLaks Sep 16 '13 at 21:41
  • @SLaks I haven't seen anything like that. – rook Sep 17 '13 at 03:10
  • I wonder why... – SLaks Sep 17 '13 at 04:08
  • 1
    Regarding your first point, most of the corporates I've dealt with have to do this to get round the filters in their email system. The scanners (GMail included) won't allow you to send zip files with exe files in them, so you need to encrypt the zip file to send it (although personally I just rename the zip file to zop, then GMail leaves it alone!) – Matthew Steeples Sep 19 '13 at 09:17
  • 1
    *WAF* = Web Application Firewall, for those (like me) not familiar with the acronym. – Chris Cirefice Aug 04 '17 at 03:01
18
  • Passwords must be salted and hashed before storing in the database. SHA-1 is a good fit, SHA-512 is perfect.

I still hear that one from many security professionals, security training, and current security guides.
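A sketch of what current guidance recommends instead: a deliberately slow, memory-hard KDF such as scrypt (exposed in Python's standard library via OpenSSL) rather than a single pass of SHA-512, however well salted. The parameters below are illustrative:

```python
import hashlib
import os

password = b"hunter2"
salt = os.urandom(16)

# One SHA-512 pass is fast by design -- an attacker can test billions of
# guesses per second against a leaked database, salt or no salt.
fast_hash = hashlib.sha512(salt + password).digest()

# scrypt is deliberately slow and memory-hard, so each guess costs the
# attacker real resources (n/r/p here are examples; tune for your hardware).
slow_hash = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```

The salt defeats precomputed rainbow tables either way; it is the work factor that the "SHA-512 is perfect" advice misses.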

AviD
  • 72,708
  • 22
  • 137
  • 218
  • 9
    512 > 1. Bigger is better right? –  Sep 17 '13 at 06:12
  • 1
    Correct me if I'm wrong, but with the collision vulnerabilities in SHA-1 wouldn't any of the SHA-2 algorithms (SHA-256, SHA-512, etc) be better? – Hyppy Apr 22 '15 at 12:24
  • 2
    @Hyppy That's the point, these are "cargo cults" - i.e. not correct. As to your point - SHA-2 is not acceptable for password protection either - see http://security.stackexchange.com/a/31846/33 – AviD Apr 22 '15 at 12:58
14

Using SSL only for the login page rather than all the authenticated areas of a website.
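One concrete consequence of the above: if only the login page uses SSL, the session cookie is replayed over plain HTTP on every later request and can be sniffed. Marking the cookie Secure (a stdlib sketch; cookie name and value are illustrative) makes the browser refuse to send it over HTTP at all, which is also why the rest of the site then needs HTTPS:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["secure"] = True    # never sent over plain HTTP
cookie["session"]["httponly"] = True  # not readable from JavaScript

header = cookie.output(header="Set-Cookie:")
# header now carries both the Secure and HttpOnly flags
```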

Shurmajee
  • 7,335
  • 5
  • 28
  • 59
  • 8
    Or even worse, the login page uses http, but embeds a https iframe or posts to a https target. Trivial to use practically undetectable SSL Strip. (Stackexchange OpenID, I'm looking at you). – CodesInChaos Sep 18 '13 at 07:38
  • Oh man, this one really bugs me! I have also seen websites that are HTTPS, but the login iframe is HTTP. Like wtf, what developer came up with that brilliant piece of work?? – Chris Cirefice Aug 04 '17 at 03:03
11

Just one, but it's a biggie: "Information Security is a technology problem, that can be fixed with technology."

Graham Hill
  • 15,474
  • 37
  • 63
4

In order to prevent people finding out whether certain users exist in the system, hiding whether the password was incorrect or the username was invalid during a failed login attempt... while at the same time offering a password reset form that does leak this information.
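A minimal sketch of keeping the two paths consistent (function names and messages are illustrative): both the login failure and the reset form must return the same response whether or not the account exists, otherwise hardening one path is pointless:

```python
KNOWN_USERS = {"alice"}  # stand-in for a real user database

def login_error(username):
    # Same message for "no such user" and "wrong password"
    return "Invalid username or password."

def reset_response(username):
    # Do NOT reveal whether the account exists; deliver the link out of band
    return "If that account exists, a reset link has been emailed."
```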

Numeron
  • 2,485
  • 3
  • 15
  • 19
2

"Our website can't be hacked because we are using SSL." Sir, that just makes it easier to exploit if it's vulnerable, because even your IDS/IPS is rendered useless by the SSL stream.

void_in
  • 5,541
  • 1
  • 21
  • 28
  • 3
    Your intrusion detection/prevention system could have copies of your private keys and be able to decrypt incoming/outgoing TLS traffic. – dr jimbob Sep 16 '13 at 17:59
  • @drjimbob Most of the time that is not the case. And even with the private key, SSL is not something that is going to protect the website against any kind of exploitation at the application level. – void_in Sep 16 '13 at 18:30
  • Well, TLS can prevent say network eavesdroppers from listening in on passwords, session cookies, as well as the URL/data that's being accessed. Many sites that claim to be "hacked" merely have someone using an admin account with a captured password and defacing the content. Agree, TLS is not in any way a catch-all -- it only prevents network eavesdropping/tampering (and really you should be using TLSv1.2, though it's fine for servers to accept TLSv1.1 from old browsers; you really shouldn't use any SSL or TLSv1.0). Agree, it doesn't protect against SQL injection, CSRF, XSS, buffer overflow, etc. – dr jimbob Sep 16 '13 at 22:09