
I need to demonstrate that security through obscurity fails in two ways in the following scenario.

I have a secret KEY.

User A gets MessageX = SecretTransformation(KEY, SecretValue1);

User B gets MessageY = SecretTransformation(KEY, SecretValue2);

SecretTransformation() is not a standard cryptographic function.

Now, security through obscurity contradicts the principle of open design and Shannon's maxim. We also know that this security will fail as soon as an attacker manages to retrieve one of the "secrets".

What I think can additionally be demonstrated is that:

If an attacker manages to retrieve MessageX and MessageY without knowing any of the secrets, he can perform some cryptanalysis attack and either retrieve the KEY or work out how SecretTransformation functions.

This should hold, and the attack should get stronger, as the number of messages the attacker is able to collect grows: MessageX, MessageY, ..., MessageN. I hope I've explained myself. I'd like to be pointed to an example like the one-time pad, where if you XOR more than one message with the same key, an attacker can XOR the ciphertexts together and retrieve the secret.
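The one-time-pad reuse I'm referring to can be sketched in a few lines. The key and messages below are made up purely for illustration; the point is that the attacker needs only the two ciphertexts, not the key:

```python
# "Two-time pad": reusing one XOR key for two messages leaks their XOR.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\xc0\xde\x42\x99\xa5\x01\x7f\x2b\x64\x88"  # illustrative key

message_x = b"ATTACK DAWN "
message_y = b"RETREAT NOW "

cipher_x = xor_bytes(message_x, key)  # MessageX = P1 XOR KEY
cipher_y = xor_bytes(message_y, key)  # MessageY = P2 XOR KEY

# An attacker holding only the two ciphertexts recovers P1 XOR P2,
# because the key cancels out: (P1 ^ K) ^ (P2 ^ K) = P1 ^ P2.
leak = xor_bytes(cipher_x, cipher_y)
assert leak == xor_bytes(message_x, message_y)

# If the attacker then guesses or learns one plaintext, the other
# plaintext AND the key both fall out immediately.
recovered_y = xor_bytes(leak, message_x)
assert recovered_y == message_y
recovered_key = xor_bytes(cipher_x, message_x)
assert recovered_key == key
```

With N ciphertexts under the same key, the attacker gets the pairwise XOR of every pair of plaintexts, which is why the attack only gets stronger as messages accumulate.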

Here is an image of the suggested schema: http://s17.postimage.org/mdgij13un/bad_Transformation.png

Thank you!

NoobTom
  • So what's your question exactly? – MaxSan Mar 29 '12 at 12:59
  • I need a law, a conjecture, or a theorem stating that if you have N messages "scrambled" with a non-random secret value, you can retrieve knowledge of the algorithm used to scramble them or of one of the secret values. It is no different from cryptanalysis attacks; I just cannot find a rule or an example for this... or at least nothing comes to mind. – NoobTom Mar 29 '12 at 13:05
  • For the first case we also have Kerckhoffs's principle: a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. – NoobTom Mar 29 '12 at 13:06
  • http://en.wikipedia.org/wiki/Kerckhoffs%27s_principle – NoobTom Mar 29 '12 at 13:07
  • let me add an image to make things easier – NoobTom Mar 29 '12 at 13:10
  • http://s17.postimage.org/mdgij13un/bad_Transformation.png – NoobTom Mar 29 '12 at 13:10
  • I don't think you will find a law, because most methods of encrypting a message would not fail no matter how many encrypted/unencrypted messages you had to compare. Of course you can fail to implement something the correct way, and even AES+512 will fail to protect your data. You mention it's not a standard method, so what is it exactly? Anything non-standard and not proven by MATH to be secure is not secure, period. – Ramhound Mar 29 '12 at 13:13
  • What you're asking may be provable for a specific encryption algorithm, but encryption mechanisms are extremely diverse, and such a premise can certainly not be proven to hold true in every case. – tylerl Mar 30 '12 at 03:05
  • Thank you for your comments. The transformation, as I mentioned, is not an approved standard method. The purpose of this transformation is to mangle the key so that the final user is "unable" to know the key used in his application. I wanted to find a theoretical answer, different from empirical reverse-engineering techniques. That's why I was wondering if there was such a law. The transformation takes a constant KEY and a SecretValue, which is NOT a truly random value; rather, it is an ID, like the cpu_ID. (And yes, the CPU ID is not always unique.) – NoobTom Mar 30 '12 at 11:26
  • @tylerl I think you are right; maybe it can be proved for a specific algorithm, but it cannot be generalized so easily. – NoobTom Mar 30 '12 at 11:30
  • @NoobTom - What you want does not exist. If knowing what multiple messages are can reveal the key, then the scheme used to encrypt said messages is flawed and should not be used. – Ramhound Mar 30 '12 at 12:41

4 Answers


There is no such law. You mention that you think a bunch of things are true, but I do not think those things are in fact true, so I think you need to re-examine your premises.

"Security through obscurity" does not mean that a system is necessarily insecure. It is a bad strategy and a bad idea for the security of your system to rely upon the secrecy of your system. Historically, most systems that have relied upon security through obscurity have proven to provide poor security.

But hiding the details of the system doesn't magically somehow transform a secure system into an insecure system. If you encrypt with PGP, but just don't tell anyone you're doing so, this doesn't somehow make PGP insecure. There is no law that says all systems that fail to disclose their details are necessarily insecure.

Iszi
D.W.
  • Thank you for your answer, but I wasn't looking for a law stating that hiding something makes the system insecure. What I was trying to find is something that states that, if you have N transformed messages, you can retrieve information about the algorithm or the values used in the transformations. – NoobTom Mar 30 '12 at 11:22
  • @NoobTom - The law you want does not exist, because, honestly, any encryption method for which such a law held would be useless. – Ramhound Mar 30 '12 at 12:42

"Security Through Obscurity" is a bit of a loaded term that means a lot less than it sounds like it does. In a certain sense, nearly all security is gained through obscurity. For example, your password is only secure to the extent that it isn't publicly known. The key to real security, though, is to make your secrets easy to protect.

Certainly you could make your encryption password public and the algorithm secret, and you may enjoy a certain amount of security for some time. However, algorithms are notoriously easy to reverse-engineer. They can be obfuscated by compiler tricks and clever tactics, but a decent hacker with a moderate amount of caffeine can usually tackle any such challenge in a single day; two if he gets distracted.

In contrast, good encryption algorithms are built not only to protect the payload, but also the key, even in extremely adverse conditions, such as "known plaintext" or "chosen plaintext" attacks, and often even take specific measures to protect against side-channel attacks. All of these measures are intended toward protecting a specific class of secret, and have a proven track record of doing so.

Now, certainly there is no harm in keeping your algorithm secret. It may not afford any additional security, but it's not going to harm your security either. And there's no sense disclosing more information than you have to. But there's a huge difference between keeping the algorithm secret because you can, and keeping it secret because your security depends on it. Of all the secrets you could keep, this one is among the easiest for an attacker to derive. So it would be wise to plan your security accordingly.

tylerl
  • Thank you for your answer; it does offer an understanding of the security-through-obscurity idea. It is true that there is always something to be kept secret, but that secret is supposed to be something random, like a cryptographic key. If you keep secret knowledge that is not random but has a logical structure, then the information that constitutes that logic can be retrieved in one way or another, in little or a lot of time... but it can be reconstructed. – NoobTom Mar 30 '12 at 11:29
  • You cannot reconstruct an AES+512 private key from the public key, even if you have 10 billion messages encrypted with said private key. The only known flaw in AES is brute force; if the algorithm breaks down in other ways, then it's flawed. – Ramhound Mar 30 '12 at 12:45
  • @Ramhound: Not sure what you're referring to, but AES is symmetric. No private/public keys. – tylerl Mar 30 '12 at 14:42
  • A key aspect of good security is that for each threat model, there must be at least one layer where one can say both "X is secure" and "the security of X is sufficient to prevent the threat." Generating a random bunch of bits in a way that an adversary won't be able to guess, in cases where 99.999% of such bunches are just as good as any others, is a lot easier than trying to come up with a usable algorithm that an adversary won't be able to guess. – supercat May 30 '18 at 23:02
  • -1 this is a [common misunderstanding](https://security.stackexchange.com/a/44096/165253) of the meaning of obscurity. A system is not utilizing obscurity if the only things kept secret are the keys or any other relevant credentials. Essentially, security through obscurity is when the _implementation_ or _techniques_ are kept secret for protection. The operating system I run is open source and anyone can get an exact copy of it, but not anyone can log into _my_ system because the key, not the implementation, is secret. That is not obscurity. – forest Jun 01 '18 at 01:35
  • @forest yup. You'll notice that it was me who asked the question that Thomas responded to with his _obscurity_ vs _secrecy_ insight, and that it was later that same year. My answer here wasn't exactly wrong, but it lacked the clarity that Thomas was later able to provide. – tylerl Jun 09 '18 at 16:27

I'd look at the case of the Enigma (naval Enigma in particular) in World War 2 for a demonstration of how a cryptographically insecure system was broken in large part because the attackers were able to get a large number of encrypted messages and to exploit non-random bits of those messages in order to work out how the system operated.

During World War 2, Germany used a machine called the Enigma to encrypt messages to submarines and to troops in the field, using what was essentially an exquisitely complicated substitution cipher. The Allies, working out of Bletchley Park, were able to intercept most of these messages since they were broadcast over radio. Over the course of years, they were able to exploit the fact that they had a rather large corpus of encrypted messages to search for patterns in the output, which could be used to work out other bits of the algorithm, until they were eventually able to break the cipher. With only very basic computers, this was a painstaking process; there is a decent step-by-step breakdown of how Enigma was broken in a lecture by Tony Sale, *Bigrams, Trigrams and Naval Enigma*.

The Enigma story also points out the danger of relying on security through obscurity in other ways. During the war, the Allies were able to steal a handful of Enigma machines from German subs before they sank, which greatly simplified the code breakers' work. In the same way, when real systems rely on the attacker not knowing how the system works, a determined attacker will invariably be able to get whatever information they need to complete the attack through other channels (social engineering and general information leakage when people leave a company, for example).

Justin Cave

Consider a cryptosystem and its vulnerabilities

When you have the following secret components of the system:

  • Secret KEYs (usually one per user/use of the system).
  • SecretTransformation (only one singleton secret shared with all endpoints).

Compare these two scenarios

  1. Security of a secret key shared with only the interested parties.

  2. Security of the SecretTransformation: deployed to ALL parties, users, installers, contractors, ISVs, DVD printers, developers, reviewers, sales people, and hackers accessing any of the systems/backups where the code has been, past/present/future.

Then compare what happens if each of the above is compromised.

  1. A secret key is re-generated and shared with the interested parties.

  2. The complete cryptosystem needs to be redeveloped and deployed through all the same weak-points.

So, succinctly:

  1. A Cryptosystem with only the Secret Key being secret has a minimum number of points of vulnerability equal to the number of places with the key. All parties have a good understanding of and responsibility for securing the secret keys. The scope of each key is limited by design.
  2. A cryptosystem with a SecretTransformation being secret has literally thousands of potential weak points any of which require a massive amount of work to recover from.

So why would you choose to develop a system with so many weak points?

A cryptosystem with a SecretTransformation MAY be OK until broken, but an open algorithm will ALWAYS be better.

Andrew Russell