10

I tried to explain that security by obscurity should not be used, but let's say I was challenged!

The answer I received was to list the security-by-obscurity practices that I know of, as a kind of bad-practice list that should not be followed. It is indeed quite difficult to define security by obscurity, but I am not convinced that listing bad practices is the correct approach either.

Therefore my questions:

  • Does anyone have a list of security-by-obscurity solutions that I can cite?
  • Wouldn't it be better to define requirements that rule out security by obscurity? But how should those requirements be defined? I was wondering whether someone knows what the proper requirements would be to disallow security by obscurity.
Jeff Ferland
Phoenician-Eagle
  • Can you clarify the specific problem you're trying to solve? From your other comments it seems that some of your requirements lead to a hard problem addressed by this question: [Storing private asymmetric key in application binary?](http://security.stackexchange.com/questions/1711/storing-private-asymmetric-key-in-application-binary). Note also this question: [The valid role of obscurity](http://security.stackexchange.com/questions/2430/the-valid-role-of-obscurity) – nealmcb Mar 07 '11 at 15:38

4 Answers

9

An example is the Content Scramble System (CSS), designed to prevent DVDs from being copied. While the scrambling algorithm was unknown, this worked. Then a teenager called Jon Lech Johansen figured it out, wrote DeCSS, and the protection was broken.

As @GrahamLee points out, security mechanisms which need to be relied on must still work even when known about by the bad guys.

These days cryptography provides security in so many applications that it is critical it be implemented correctly. Because the mathematics is beyond most people, one school of thought advocates publishing the algorithms and letting people the world over try to find flaws. The upside: many eyes. The downside: if the algorithm is already deployed, an attacker may find a flaw and be able to exploit it, so public review before implementation is the most useful.

The other viewpoint is to hide the algorithm so attackers do not know how it works. The problem is that it has been proven time and time again that relying on security through obscurity fails, because at some point the bad guys discover the secret.

It is worth reading this post by Bruce Schneier, a well-known opponent of security through obscurity and the author of various crypto algorithms that have been vetted by the community, as an example of where obscurity has worked, but only because the criminals did not learn of the plan.

I don't know if it is possible to really answer the last part of your question without more context. For example, if this is code developed by third parties you could demand a code review, but what you could really look for is a third party that publishes its algorithms and code as open source.

Rory Alsop
  • Thanks! Regarding the context: it is for our own developers, not a third-party application. Let's take for example a password that is required by a server in case of automatic restart. This password should be stored somewhere on the server, because the server needs to access it automatically without an admin typing it in. Where should this password be stored in a way that avoids security by obscurity? – Phoenician-Eagle Jan 12 '11 at 15:30
  • @Paul - I think it must be a misunderstanding as to the meaning of security through obscurity. There is no way to "store the password to avoid security through obscurity" - what you need to do is aim to store the password somewhere that has access controls preventing users from accessing it, for example an area in a database or filesystem only available to the system account. It may help if this location is not known to attackers, but don't rely on it - protect it! – Rory Alsop Jan 12 '11 at 18:48
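The advice in that comment can be sketched concretely. A minimal, POSIX-only sketch (the file name and password here are made up for illustration): create the secret file with owner-only permissions from the start, so there is never a window in which it is world-readable.

```python
import os
import stat
import tempfile

def store_secret(path, secret):
    """Write a secret so that only the owning (service) account can read it."""
    # O_CREAT with mode 0o600 applies owner-only permissions atomically,
    # avoiding a race between creating the file and chmod-ing it afterwards.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(secret)

# Illustrative usage with a throwaway directory and a made-up password.
path = os.path.join(tempfile.mkdtemp(), "restart_password")
store_secret(path, "s3cret-restart-password")
mode = stat.S_IMODE(os.stat(path).st_mode)
```

The point is not that the location is hidden: it is that the operating system's access controls enforce who may read it, which holds even if an attacker knows exactly where the file lives.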
7

The canonical counterexample to security-through-obscurity is Kerckhoffs's principle: cryptographic system designers should assume that the system will be captured by the enemy, so it should still be secure when everything about it except the key is known.

Notice, though, that this doesn't say obscurity is bad and must never be used: it says that obscurity cannot be the only defence, because it is fragile. Obscurity can still slow down or dissuade unskilled attackers.
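A minimal illustration of Kerckhoffs's principle using only Python's standard library (the message and key below are made up): HMAC-SHA256 is a fully public algorithm, and the security of the authentication tag rests entirely on the secrecy of the key.

```python
import hashlib
import hmac
import secrets

# Everything about the algorithm (HMAC-SHA256) is public knowledge;
# the only secret is the 32-byte key.
key = secrets.token_bytes(32)
message = b"transfer $100 to account 42"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Anyone holding the key can verify the tag...
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())

# ...but an attacker who knows the algorithm, the message, and the tag,
# yet not the key, cannot produce a matching forgery.
forged = hmac.new(b"attacker-guess", message, hashlib.sha256).hexdigest()
```

Capturing the whole system minus the key gains the attacker essentially nothing, which is exactly the property the principle demands.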

  • 2
    Correct, so can I consider this as a possible requirement: your security mechanisms shall not break even when identified by attackers? – Phoenician-Eagle Jan 12 '11 at 15:25
  • 1
    @Paul: that's a good requirement. Assume that attackers can reverse-engineer your binary, and discover your proprietary algorithms. Data must still be protected in this case. –  Jan 12 '11 at 15:38
6

Obscurity is that which cannot be quantified.

Proper security comes with cost estimates. We say that a 128-bit encryption key is secure because we can estimate how much it would cost (in dedicated processors and electrical power, and ultimately in dollars) to find the key by exhaustive search (trying all possible 128-bit keys). When the cost is much higher than what an attacker would be willing to spend, and, in particular, when it is much higher than what any attacker with earth-based technology could possibly spend, then we have achieved security.
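Such an estimate can be sketched in a few lines. The attacker throughput figures below are assumptions chosen purely for illustration (10^12 keys per second per device, a million devices in parallel), not measured numbers:

```python
# Exhaustive search over a 128-bit keyspace.
keyspace = 2 ** 128

# Assumed attacker capability (illustrative only): 10^12 keys/s per
# device, 10^6 devices running in parallel.
keys_per_second = 10 ** 12 * 10 ** 6
seconds_per_year = 365 * 24 * 3600

years = keyspace / (keys_per_second * seconds_per_year)
# On the order of 10^13 years even for this absurdly powerful attacker,
# which is what lets us call the key "secure" with a number attached.
```

It is precisely this kind of arithmetic that a secret algorithm does not admit: there is no keyspace to count.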

When such an estimate is not possible, then it is security by obscurity. For instance, assume that you have some sort of encryption system in software which you keep secret. How secret is that software? It is written on some hard disks. It was developed somewhere; its source code exists, stored somewhere. How difficult is it for an attacker to recover the algorithm? Stored files leak in many places, e.g. old discarded computers, stolen laptops, indiscretions from subcontractors (the source code lives in files, but also in the brains of some programmers)... and if the attacker can get hold of the binary, he can disassemble it, a process which is not immediate but is limited only by the wit of the attacker.

The point of all this is that while making some code secret sure makes the task harder for the attacker, it is very difficult to say how much harder it gets, when you want to express it in dollars.

So here is my answer to your question: to prevent implementers from using security by obscurity, mandate that they should produce detailed justified cost estimates for attacks.

Thomas Pornin
  • I can't see how that is feasible? I mean developers (in principle) are not competent enough to know the attacks, or the settings in which the customers will be using the application, and then estimate the cost! Please clarify – Phoenician-Eagle Jan 12 '11 at 15:51
  • @Paul: "developers (in principle) are not competent enough". I think you have nailed it in a very concise way. If you let people take decisions in areas where they are not competent, then you are doomed. There is no way a checklist of this-is-obscurity-that's-bad will turn incompetent developers into good security system designers. Requiring people to document the attack cost for the solutions they design is a way to see whether the proponent has at least some understanding of what he is doing. – Thomas Pornin Jan 12 '11 at 17:12
  • Please can you show an example of a justified cost estimate for attacks? – Phoenician-Eagle Jan 12 '11 at 17:33
  • Examples would include eg for an online bank - if the cost of an outage for 2 hours is $5Million (a real example) what percentage of that will the business expend on reducing the likelihood of attack by 80%. Often it works out to be between 2 and 15% depending on the organisation. If the access controls are expected to reduce this likelihood by 95% you'd get better uptake. Part of the calculation will also want to estimate cost to reduce impact assuming a successful attack, and some organisations will spend more here depending on risk appetite. – Rory Alsop Jan 12 '11 at 21:18
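The arithmetic in that last comment can be made explicit. The $5M outage cost comes from the comment; the baseline attack likelihood is an assumed figure added purely for illustration:

```python
# Hypothetical: a 2-hour outage costs $5M; assume a 10% annual
# probability of a successful attack as the baseline.
outage_cost = 5_000_000
annual_attack_probability = 0.10
expected_annual_loss = outage_cost * annual_attack_probability   # $500,000/yr

# A control expected to reduce that likelihood by 80% is worth,
# at most, the expected loss it avoids each year.
likelihood_reduction = 0.80
justified_annual_spend = expected_annual_loss * likelihood_reduction
```

A proposal backed by numbers like these can be argued about; "the attacker won't find it" cannot, which is the whole point of the answer above.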
1

If you're trying to construct secure coding guidelines for in-house developers, that wheel's already been invented. OWASP, for instance, has a broad overview which includes injunctions against relying on security through obscurity; and CERT has highly detailed standards for C and C++.

user502
  • No, sorry, I am not looking for secure programming guidelines. Those already exist; what I am afraid of is having keys stored in some jar file on the assumption that NO reverse engineering is possible! – Phoenician-Eagle Jan 12 '11 at 15:52
  • 1
    @Paul - always assume reverse engineering is possible. All code that runs outside a secured environment should be considered vulnerable whether it runs in a browser, on a 3rd party server etc. Securing code by encryption is fine if you protect your keys. Anti-piracy solutions are always being thought up, and always broken if they rely on a code in an application the user possesses. Google for any broken computer game anti-piracy mechanism to see how bad it is! Even the ones which phone home can be broken by rerouting or subverting that call... – Rory Alsop Jan 12 '11 at 21:14
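To see why "keys in a jar file" offers no protection, here is a crude Python equivalent of the Unix `strings` tool run against a simulated binary artifact (the file name and embedded key are made up for illustration):

```python
import os
import re
import tempfile

def extract_strings(path, min_len=8):
    """Crude `strings`: find runs of printable ASCII in a binary file."""
    with open(path, "rb") as f:
        data = f.read()
    pattern = rb"[ -~]{%d,}" % min_len   # min_len+ consecutive printable bytes
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Simulate a shipped artifact with a key buried among "opaque" bytes.
blob = os.urandom(128) + b"API_KEY=hunter2-not-actually-secret" + os.urandom(128)
path = os.path.join(tempfile.mkdtemp(), "app.bin")
with open(path, "wb") as f:
    f.write(blob)

found = extract_strings(path)
```

No disassembler is needed: anything stored as plain bytes inside an artifact the attacker possesses should be treated as public.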