
I don't have any experience or scientific knowledge in security; I just wanted to ask whether this is possible, because I am interested in it.

What if I encrypt data so that every password decrypts it, but only the right one produces the real data instead of pointless clutter? The same could be done with a login: false login details lead to fake, dummy accounts, and only the right login details get you to the real account.

Wouldn't this be a far better method of encryption, because you couldn't just try out passwords but would have to look at the outcome of each attempt to see whether it was the right one?
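In code, I imagine the idea would look something like this (just an illustrative sketch with made-up data and names, since I don't know how this is really done):

```
# Illustrative sketch of the idea only (made-up data and names): every login
# "succeeds", but only the right credentials return the real account data.
import hashlib

USERS = {"alice": hashlib.sha256(b"real-password").hexdigest()}
REAL_DATA = {"alice": {"balance": 1204.55}}

def fake_account(username: str, password: str) -> dict:
    # Deterministic but fake data, so the same wrong guess always looks the same.
    seed = int(hashlib.sha256((username + password).encode()).hexdigest(), 16)
    return {"balance": round((seed % 500000) / 100, 2)}

def login(username: str, password: str) -> dict:
    if USERS.get(username) == hashlib.sha256(password.encode()).hexdigest():
        return REAL_DATA[username]
    return fake_account(username, password)   # no error message, just plausible dummy data

print(login("alice", "real-password"))   # the real account
print(login("alice", "wrong-guess"))     # a plausible dummy account
```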

Pabru
Tweakimp
  • Sounds like a [one-time pad](https://en.wikipedia.org/wiki/One-time_pad). – Mark Buffalo Jul 12 '16 at 23:56
  • A [honeypot](https://en.wikipedia.org/wiki/Honeypot_(computing)) or a [spamtrap](https://en.wikipedia.org/wiki/Spamtrap) is a system whose purpose is to waste the bot's time. – joeytwiddle Jul 13 '16 at 05:23
  • It would only work online, and you shouldn't be able to brute-force online passwords in the first place... – dandavis Jul 13 '16 at 08:45
  • Well, when trying to brute-force encrypted text that's what you *have* to do. There's no magic bell that tells you when you got the deciphering right, you have to look at the output and check. If someone encrypts a zipped file and you only check if your attempt produces plain text you'll never be able to decipher the text even trying out all possible keys. – Bakuriu Jul 13 '16 at 09:32
  • @Bakuriu: With poor encryption systems, you can just decrypt the first four bytes (precisely called the magic number, [I'm not kidding](https://en.wikipedia.org/wiki/File_format#Magic_number) ;) !) and check if you get, for instance, a correct ZIP file header (not that I think that ZIP actually provides such a poor system, but I was just reusing your example). With good encryption systems, you must first decipher the *whole* file, and only after that will you be able to check if the decryption key is indeed the right one. – WhiteWinterWolf Jul 13 '16 at 10:05
  • @WhiteWinterWolf All you say is obvious but you didn't get the point of my comment. Re-read it. – Bakuriu Jul 13 '16 at 10:27
  • @Bakuriu: most practical encryption systems include a hash, which tells you when you've got the decryption right. Also, some encryption schemes, e.g. full disk encryption, are designed so you can do random access efficiently, so you don't need to decrypt the entire volume to know you've got the right decryption key. – Lie Ryan Jul 13 '16 at 12:26
  • @LieRyan I don't see how your comment is relevant. As I said: if the **only check** you do to see if the decryption is correct is checking if the result **as plaintext ASCII** is intelligible, then you will never be able to decrypt a zipped encrypted message. Period. There are no hashes in this scenario. My point is that by definition decrypting via brute force requires you to look at the "decrypted data" and decide whether you got it right or not, and sometimes it is easy, other times it's hard or impossible to do. – Bakuriu Jul 13 '16 at 14:36
  • Don't forget your users. Can your encryption scheme deal with the "stupid software, it produces garbage when decrypting my files once in a while" problem? If you only have a few technical users who know enough, it might be usable. But if you're targeting a lot of uneducated users, for your own sanity's sake, make a "Wrong password!" dialog box :) – Luaan Jul 13 '16 at 15:14
  • Worth mentioning that Vim produces garbage when using the wrong encryption key on encrypted files. It's easy to tell, though, since the output ends up full of control characters. – Brian McCutchon Jul 16 '16 at 02:23
  • @LieRyan In a properly built crypto system, data is encrypted and then the MAC (which uses a hash) is applied. If you do it the other way around, MAC [Hash] then encrypt, then yes, you can use the MAC to tell if you have a valid decrypt. This makes the brute force attack easier, which is why modern systems don't do it this way (FYI, that is a well-known bug in SSL). See https://moxie.org/blog/the-cryptographic-doom-principle/ for more details. – Walter Jul 17 '16 at 18:45
  • This should happen randomly instead of consistently. E.g. at ~100 bad logins a false positive is presented. – Brian Risk Jul 18 '16 at 14:52
  • In addition to the excellent answers, take a look at the relatively new Honey Encryption: https://en.wikipedia.org/wiki/Honey_Encryption – Aventinus Oct 23 '16 at 11:43
  • There is a way to deal with contributions that are spam, flames etc. on public forums that reminds me of the OP's suggestion: the idea is not to tell the poster that his post has been deemed unacceptable by the forum software, but rather to display his post to him, and only to him, as if it had been accepted. He'll be the only one to see his post, but he'll think he succeeded. – Out of Band Feb 11 '17 at 00:49

9 Answers


The answer always depends on your threat model. Security is always a balance between security and usability. Your approach inconveniences the hackers trying to break into the account, but it also inconveniences a user who merely mistypes their password. If the fake account is believable enough to fool an attacker, it may also be believable enough to fool a valid user, and that could be very bad.

This may be desirable in extremely high risk environments. If you had to store nuclear secrets out in the open on the internet, having every failed password lead you to an account that has access to fake documents which don't actually reveal national secrets could be quite powerful. However, for most cases it is unnecessary.

You also have to consider the alternatives. A very popular approach is to lock the account out after N attempts, which basically stops all brute force attempts cold, and has usability behaviors that most users are willing to accept.
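For comparison, here is a minimal sketch of that lock-out alternative (illustrative names only; a real system would persist the counters server-side and pair this with proper password hashing):

```
# Lock the account after N failed attempts (sketch; in-memory counters for brevity).
MAX_ATTEMPTS = 5
failed_attempts: dict[str, int] = {}

def try_login(username: str, password: str, check_credentials) -> str:
    if failed_attempts.get(username, 0) >= MAX_ATTEMPTS:
        return "account locked, contact support"
    if check_credentials(username, password):
        failed_attempts[username] = 0          # reset the counter on success
        return "login ok"
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return "wrong username or password"
```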

Cort Ammon
  • Thank you for your answer. Actually I wanted to know whether it is possible, not whether it is usable; maybe there was something I didn't see that would render my idea pointless. – Tweakimp Jul 12 '16 at 21:35
  • When it comes to possibility, the only limit would be how realistic a dummy account you are interested in taking the time to create. An interesting place to look for data on that is deniable encryption. In deniable encryption, you create a dummy partition, and make it mathematically impossible for the attacker to prove any other partition exists. That community has shown a great deal of interest in how to make the dummy partition look legitimate. – Cort Ammon Jul 12 '16 at 21:39
  • Such an account can also be used as a honeypot that trips security sensors when someone logs in there so you can detect an attacker early. – Johnny Jul 13 '16 at 01:05
  • The problem with honeypotting a login attempt is the valid user won't realize these aren't real plans so they build nukes that don't work right. I hope your fake plans don't break the safety checks. Most effective method I've seen is to force users to wait longer and longer between login attempts. Force them to slow down and think about what they're typing. That way when they hit the N attempt limit it's not a surprise. – candied_orange Jul 13 '16 at 11:24
  • @CandiedOrange - it's a dummy account with fake information. No legitimate user would ever log in to it. If someone steals the plans and builds a non-working nuke, then the system is working as intended. – Johnny Jul 13 '16 at 20:08
  • @Johnny I think there might be some confusion. I translated the OP's question as "every invalid login should be successful, but lead to a dummy account" Thus it would be highly plausible for a user to mistakenly type in their password wrong and arrive at such an account. If the honeypots are instead set up so that *most* of the invalid logins let you know the login was invalid, but a few honeypots are put in place with various usernames, then you would be correct. No user would ever log into such a honeypot by mistake. – Cort Ammon Jul 13 '16 at 20:33
  • A combination where, after a few failed attempts (3?), you are brought to a fake account rather than locked out of your account would certainly be interesting. As a side note, I once worked at a place where the user table PK was user+password. – Wayne Werner Jul 13 '16 at 22:00
  • "and has usability behaviors that most users are willing to accept" ...for reasonably high values of N. =) – jpmc26 Jul 14 '16 at 00:21
  • For low values of N, it's a DOS attack. And a DOS attack compromises availability, the A in the CIA triad of information security. – Damian Yerrick Jul 14 '16 at 02:27
  • @DamianYerrick That is very true. The DOS issues are one of the main reasons I only listed it as a popular alternative, not a solution. In some environments that risk is low enough to accept. In other environments, it's completely unacceptable. – Cort Ammon Jul 14 '16 at 03:22

Fooling an attacker with false positives isn't a bad idea, and it's not new. The following may interest you.

Cryptographic Camouflage

CA Technologies has patented a technology known as Cryptographic Camouflage.

A sensitive point in public key cryptography is how to protect the private key. We outline a method of protecting private keys using cryptographic camouflage. Specifically, we do not encrypt the private key with a password that is too long for exhaustive attack. Instead, we encrypt it so that only one password will decrypt it correctly, but many passwords will decrypt it to produce a key that looks valid enough to fool an attacker. For certain applications, this method protects a private key against dictionary attack, as a smart card does, but entirely in software.

This isn't exactly what you are talking about (they're protecting a key, not access) but the concept is the same. You foil a brute force attack by making it difficult or impossible to determine if you've actually cracked the code.
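To make the concept concrete, here is my own toy illustration of the camouflage idea (not CA's patented scheme): it only works because the protected secret is a raw random key with no internal structure an offline attacker could check.

```
# Toy illustration of the camouflage idea (NOT CA's actual scheme): every
# candidate password yields output of the same shape, so a wrong guess
# cannot be rejected offline.
from hashlib import sha256
import os

def transform(blob: bytes, password: str) -> bytes:
    pad = sha256(password.encode()).digest()[: len(blob)]
    return bytes(b ^ p for b, p in zip(blob, pad))   # XOR is its own inverse

secret = os.urandom(32)                      # e.g. a raw 256-bit private key
stored = transform(secret, "correct-password")

right = transform(stored, "correct-password")
wrong = transform(stored, "guess1234")
# Both results are 32 random-looking bytes; nothing in `stored` tells an
# offline attacker which one is the real key. Only using the recovered key
# (e.g. against the known public key, in an online check) distinguishes them.
assert right == secret and wrong != secret and len(wrong) == len(right)
```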

Mousetrap

In 1984, Michael Crichton (author of Andromeda Strain and many others) wrote a short story centered around a hacker who thought he was stealing top secret files. He had guessed the right password, but unbeknownst to him, the computer was actually authenticating him not by looking at his password but at the speed and manner in which he used the keyboard and mouse-- sort of a biometric authentication mechanism. He failed authentication. But the computer didn't tell him he failed-- instead, it presented him with a false copy of the secret documents, which he then downloaded and attempted to sell on the black market.

Again, this is not exactly the same as what you are asking, but it demonstrates (in fiction, anyway) the use of false positives to thwart an attack.

Glorfindel
John Wu

To give you a straight answer: yes, it is possible to reduce the effectiveness of brute-force attacks, and it can be done the way you suggested, but it shouldn't be. You can get very similar results just by implementing timing delays between each failed attempt and the next guess. Also (just for your knowledge), very sophisticated technologies along these lines have already been designed and implemented for this exact purpose. Products like canaries, honeypots and honey docs all deliver similar things: fake environments, devices, servers, accounts, etc.
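As a sketch of the timing-delay idea (illustrative only; real deployments would track failures per account or IP in persistent storage):

```
# Increasing delay after each failed attempt for a given account (sketch).
import time

def login_with_backoff(check_credentials, username: str, password: str,
                       failures: dict[str, int]) -> bool:
    n = failures.get(username, 0)
    time.sleep(min(2 ** n, 300))         # 1s, 2s, 4s, ... capped at 5 minutes
    if check_credentials(username, password):
        failures[username] = 0
        return True
    failures[username] = n + 1
    return False
```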

  • Ok, but what about encrypted files? You can't put a delay between failed decryption tries, can you? – Tweakimp Jul 13 '16 at 04:50
  • I don't think you can really make enough fake sets of files in an encrypted container to reliably fool an automated cracking tool either. If the attacker has the possibility of an offline attack, you must rely on other hardening techniques, like key derivation functions that are slow, to limit the number of tries per unit of time. – Simon Lindgren Jul 13 '16 at 08:32
  • @Tweakimp, you may be interested in "TrueCrypt hidden volume." That sounds like what you're describing. (But it only has two passwords, not infinite.) – Wildcard Jul 15 '16 at 03:45
  • @Wildcard Thank you, I will look at it. Two sounds way smaller than infinite though ;) – Tweakimp Jul 15 '16 at 06:21
  • @SimonLindgren That is, until quantum computing becomes a reality and the tries per second reach new heights. Then the key derivation functions would also have to become more expensive. – user4317867 Jul 16 '16 at 17:08

The effect is tiny

Let's suppose that your system transforms a practical brute-force attack from decrypting just the first four bytes (realistically, the first, much larger block, but whatever) into having to decrypt the full, e.g., four gigabytes of encrypted data, making brute-force attempts approximately a billion, or 2^30, times slower.

Now, that might seem like a big difference to you, but the effect is actually tiny compared to the alternatives. A factor of "a billion times slower" is simply not that much in the world of cryptography. Why bother with added complexity that might fail to achieve the intended slowdown or introduce new bugs, if simply adding an extra 30 bits to the encryption key length does the same thing, and increasing the key size from e.g. 128 bits (if that's not already enough) to 256 bits provides an incomparably larger effect?
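A quick back-of-the-envelope comparison (illustrative numbers, not benchmarks):

```
# Forcing an attacker to decrypt 4 GiB instead of a single 16-byte block per
# guess slows each guess by roughly the ratio of the data sizes.
full_volume = 4 * 2**30                  # 4 GiB of ciphertext per guess
one_block = 16                           # one 128-bit AES block per guess
slowdown = full_volume // one_block
print(f"per-guess slowdown: ~2^{slowdown.bit_length() - 1}")       # ~2^28

# The same work factor comes from ~28 extra key bits, while moving from a
# 128-bit to a 256-bit key multiplies the search space by 2^128.
print(f"extra factor from 128-bit -> 256-bit keys: 2^{256 - 128}")
```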

Peteris

Most of this has already been said; I just want to offer another perspective.

Imagine you tried to secure a house with this technique: if an intruder keeps working at the door for some time, you eventually let him into a cellar room.

The question is, would you want an intruder in there at all? The intruder will almost certainly realize after some time that he didn't get what he wanted and will try to get further from there. And you would have to maintain the extra security for the cellar you prepared.

So in a way, you only increase the amount of work for yourself to fool (inexperienced) attackers for some time.

gpinkas
  • Because if I did it in a good way, the intruder wouldn't know (for some time) that he was in the fake room, and couldn't even be sure he was in the right one when he got in. Also, looking in the room takes time, so you can't just test the door for a key but have to look in the room every time, which takes up far more time when brute-forcing your way through. – Tweakimp Jul 13 '16 at 11:36
  • Yes, I can see what you mean. My point is: Setting up and securing the fake room in a good way requires your time. Time you could better invest in securing the "front door" in the first place. :) – gpinkas Jul 13 '16 at 12:14
  • Proper access control for legitimate authenticated users is already a necessity anyway; I can't see how it would be any extra work for a fake account. – Michelle Jul 14 '16 at 13:11
  • That's right, but securing the whole access control layer is a lot harder than securing the login screen. It's a necessity, but it's nearly impossible to have perfect security in place. With this scheme you let a possible attacker get one step closer into your system. In most attacks, gaining access to ANY account in a system is the first step towards gaining root privileges. – gpinkas Jul 14 '16 at 13:41
  • This is a nice recent example of privilege escalation: http://www.theregister.co.uk/2015/07/22/os_x_root_hole/ – gpinkas Jul 14 '16 at 13:44
  • @gpinkas: Letting someone in past the front gate is bad. Having fake front gates in addition to the real one, *all of which are fully outside the real security wall*, may or may not be worth the effort, but shouldn't affect security. The primary useful aspect I can see to such things is that if fake gates outnumber real gates by 1,000:1 or so, and trigger alarms when breached, it's likely that an intruder will trigger alarms before gaining access to anything important. – supercat Jul 15 '16 at 15:13

Sounds like you're talking about a form of "deniable encryption" or "plausible deniability" in the context of crypto; that is, an alternative secret that decrypts to plausible but non-authentic plaintext. See https://en.wikipedia.org/wiki/Deniable_encryption for details.

But strictly speaking, if someone has the capability to bruteforce your ciphertext, they will potentially discover all plausible plaintexts, and then, based on any knowledge they already have about the context, they will be able to decide which of the plaintexts is the authentic original one. The first part can be done by pseudo-AIs, but the second part still needs a human.
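In the extreme case of a one-time pad, every plaintext of the right length is reachable; a minimal sketch (illustrative messages only):

```
# With a one-time pad, for ANY candidate plaintext of the right length there
# is a key that "decrypts" the ciphertext to it, so the ciphertext alone
# cannot tell you which plaintext is authentic.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"Meet 3pm Wed"
real_key = os.urandom(len(real_msg))
ciphertext = xor(real_msg, real_key)

decoy = b"Meet 9am Tue"                  # any plausible message of equal length
fake_key = xor(ciphertext, decoy)        # a key an adversary could never rule out
assert xor(ciphertext, fake_key) == decoy
```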

bernz
  • Using a one-time pad, every possible message is equally probable; you have no way of knowing what the original was without the original key. As an example, "Meet 3pm Wed" is equally as probable as "Meet 9am Tue" or "Done my task" - context doesn't always help. – Dezza Jul 14 '16 at 11:03

The problem with keys is that they exist as data, not as running code. Even in the CA and Crichton examples, what happens is that an out-of-band procedure provides a plausible-looking response for each decryption attempt. At the level of the ciphertext itself and raw brute-force attempts, this is mathematically impossible.

m.kin

For remote access, as others have said, simple lockouts and delays can work.

For passwords, what you have is a one-way hash. To validate the password, you re-hash it, and compare the two hashes. Having more than one simple password produce a valid match against a single hash is considered undesirable: it means the hash is weak, and has "collisions".
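A minimal sketch of that hash-and-compare check (illustrative; a real system should use a slow KDF such as bcrypt, scrypt or Argon2 rather than a bare hash):

```
# Store only a salted hash; re-hash the candidate password and compare.
import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)

def verify(candidate: str) -> bool:
    # constant-time comparison of the two hashes
    return hmac.compare_digest(hash_password(candidate, salt), stored)

assert verify("correct horse battery staple")
assert not verify("password123")
```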

So it's likely that you are interested in encrypted drives.

What you describe -- fake, "outer" drives filled with fake data protecting the encrypted "inner" drive -- is possible, and has been done in TrueCrypt (which sadly has since died).

The following is my own naive understanding, and some or all may be wrong. I never used this feature, but considered it interesting.

Truecrypt allowed you to specify a second password, which would unlock a "layer" of the encrypted drive (might've been limited to one outer container, I forget). This had clear problems; the outer drives were unaware of the inner ones, which were stored in the "empty space" of the encrypted outer drives. So changes in the outer ones could destroy the inner drives. Also, the datestamps on the inner drives were not automatically updated when you accessed the encrypted drive. So someone with access to your machine could tell when you'd last modified the encrypted drive's file, and could compare those datestamps to the last-modified times on the encrypted drive, and immediately tell that you'd been using it more recently, so there must be an inner drive.

But the idea was, you have the outer drive have an easy-to-guess password, like password123, put some vaguely secret stuff in there, and that would make your opponents think they had got into your encrypted drive.

Anything less - anything which just returned garbage (random noise equivalent to an unformatted drive) - would have been trivial to get around by checking for a "magic string" on the decrypted drive that would be required on any real drive but unlikely in a garbage drive.

Same with encrypted documents: most filetypes have magic strings, so if you know what filetype is contained, then any scrambling that's done can be brute-forced to find all ways that produce the magic string.

That doesn't mean it's a bad idea, though - if the magic string is, say, "jfif", then only about one in 16 million passwords will result in that magic string. But if the key space is, say, 2^1024, then they've only reduced that to 2^1000 - which, sure, is certainly 16 million times faster to crack, but will still take effectively forever.

Casual password typos wouldn't make someone think they'd decrypted the file, but simply looking for the magic string wouldn't be enough.
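To illustrate the magic-string check (a toy cipher only, not how any real container format works):

```
# An offline cracker can reject most wrong keys cheaply by checking the first
# decrypted block for a known file-format magic string.
from hashlib import sha256

def toy_cipher(block: bytes, password: str) -> bytes:
    # XOR with a password-derived keystream; stands in for a real cipher here.
    keystream = sha256(password.encode()).digest()
    return bytes(b ^ k for b, k in zip(block, keystream))

JPEG_MAGIC = b"\xff\xd8\xff"                   # start of every JPEG file
header = JPEG_MAGIC + b"\xe0\x00\x10JFIF"      # first bytes of a typical JPEG
ciphertext = toy_cipher(header, "hunter2")     # encrypted with the real password

def candidate_is_plausible(password: str) -> bool:
    return toy_cipher(ciphertext, password).startswith(JPEG_MAGIC)

assert candidate_is_plausible("hunter2")       # right key: magic string appears
assert not candidate_is_plausible("hunter3")   # wrong key: garbage, with overwhelming probability
```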

Dewi Morgan
  • You're missing one point: To unlock the "outer" drive, you were supposed to type *both* keys. That way you could make changes without destroying the inner drive. But when an attacker [threatened you with a wrench](https://xkcd.com/538/), you could just tell him the outer key and it *would* successfully open the drive, with no evidence that there was an inner drive—and hence, also, no way to protect the inner drive. – Wildcard Jul 15 '16 at 03:51
  • @wildcard Ah, OK. Though sadly, the timestamps on the "outer" drive will probably still make it fairly clear that you were actually only active on the inner, unless you're careful to mount and access both, possibly with a script on the inner to manipulate the outer. – Dewi Morgan Jul 20 '16 at 01:26

Something like this was actually done in some versions of the RAR compression software (in the early days; I'm not sure if it still works like this). An encrypted archive would be decrypted by any password entered, but a wrong password would result in gibberish output. It was done to prevent brute-forcing of passwords, which at the time was feasible for ZIP archives, which immediately returned a "wrong password" error.

Tom
  • Actually, with zip not all passwords lead to the "wrong password" error. That quick reject happens for most of them (and thus a user mistyping "always" encounters it), but if you are brute-forcing the passwords, you will get lots of false positives on the crc32 (it is based on a 1-byte or 2-byte crc check) that need to be verified either by fully extracting the file and then realising the extracted crc32 doesn't match or - if you are lucky - by looking at known plaintext at the beginning of the file to be extracted. – Ángel Jul 16 '16 at 00:13
  • Maybe this is new? I'm talking 20 years ago. – Tom Jul 16 '16 at 10:07
  • I am talking about traditional zip encryption, ie. the one that was available 20 years ago. – Ángel Jul 16 '16 at 23:00
  • I think I know precisely what you mean: I also remember trying to recover my password for a large archive; WinRAR would first "extract" the whole archive before telling me that the password was wrong. I think this should be linked to an era where proper [key stretching](https://en.wikipedia.org/wiki/Key_stretching) methods were not common; WinRAR used this method to slow down brute force attacks. – WhiteWinterWolf Jul 17 '16 at 08:14
  • Such methods make brute-force efficiency directly dependent on the archive size, which is not a good thing: small archives will be less protected, and adding useless files would increase the security; it's just too hacky. Hence proper key stretching methods, which allow the password-checking process to take a constant time, independent of the archive size: small archives get the same level of protection against brute force as larger ones, the level depending only on the chosen algorithm and parameters, which is obviously cleaner and therefore safer. – WhiteWinterWolf Jul 17 '16 at 08:16