15

I've read that password complexity requirements make passwords easier to guess via brute force attack. The reasoning is that the complexity requirements actually reduce the search space for brute force guessing by eliminating possible passwords. I would take this idea further by suggesting that passwords generated by human users tend to meet complexity requirements exactly. That is, if the requirement is that the password must be at least 8 characters long and contain at least 1 numeric character, then the passwords that people tend to come up with will be exactly 8 characters long and will contain exactly 1 numeric character. This would have the effect of reducing the search space so far as to make it almost trivial to guess passwords that are created in this way.

My question is this:

Is it true that password complexity requirements have the effect of making passwords easier to guess by reducing the number of allowed passwords, and thereby reducing overall security? Extra points for links to detailed analysis and discussion.

This is an important question because if it is true then it is our responsibility as programmers to push back on product management when these kinds of requirements are proposed.

  • 4
    $0.02 You're nuts if you think a requirement for 8 characters including one digit reduces the search space "so far as to make it almost trivial to guess passwords that are created in this way". You do know what "power of 2" means, right? –  Apr 26 '12 at 16:33
  • Note that if a brute force attack consists of null terminated strings, saying that the first 8 characters must be > null does not actually make much difference to the search space. –  Apr 26 '12 at 16:55
  • I've wondered the same thing. But you have to take the worst case example of the user that chooses a short password that is in the dictionary. There are a few hundred thousand words in the dictionary, but about 2×10^14 different alphanumeric combinations of 8 mixed-case letters and numbers. That goes down to roughly 8×10^13 if it can have one and only one number. [One of my favorite xkcd comics on this issue.](http://xkcd.com/936/) –  Apr 26 '12 at 16:32

5 Answers

14

The short answer is yes, but not really. Password complexity requirements do reduce the search space slightly by limiting the number of possible passwords. However, users don't select passwords uniformly over the set of all possible passwords (they're biased toward words, dates, etc...).

Combinatorial Analysis

To keep things simple, let's assume we've asked the user for an eight-character, case-insensitive alphanumeric password (/^[a-z0-9]{8}$/ if you'd like).

Without a complexity requirement, there are 36^8 possible passwords. This is approximately 2.8 trillion possibilities.

Let's say you then impose the requirement that exactly one character is a digit. There are just ten choices of digit, but it could appear in any of the eight positions. This leaves 26^7*10*8 possibilities, or approximately 640 billion possibilities.

Of course, users are unlikely to use strange symbols in their passwords without a complexity requirement. If numbers are our stand-in for weird symbols in this case, that means each position really only has 26 likely options. This leaves 26^8, or approximately 208 billion possible passwords. The search space under the complexity requirement is more than 3x larger than the space of passwords a biased user would actually pick without it.
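For anyone who wants to check the arithmetic, here is a small Python sketch that reproduces the three counts above; nothing in it goes beyond the numbers already stated.

```python
# Reproduce the counts from the analysis above for an 8-character,
# case-insensitive alphanumeric password (26 letters + 10 digits).
LETTERS, DIGITS, LENGTH = 26, 10, 8

# No complexity requirement: any of the 36 characters in each position.
unrestricted = (LETTERS + DIGITS) ** LENGTH

# Requirement: exactly one character is a digit. Choose the digit's
# position (8 ways), the digit itself (10 ways), and letters elsewhere.
exactly_one_digit = LENGTH * DIGITS * LETTERS ** (LENGTH - 1)

# Biased users who never use digits: letters only in every position.
letters_only = LETTERS ** LENGTH

print(f"unrestricted:        {unrestricted:,}")        # ~2.8 trillion
print(f"exactly one digit:   {exactly_one_digit:,}")   # ~640 billion
print(f"letters only (bias): {letters_only:,}")        # ~209 billion
print(f"ratio: {exactly_one_digit / letters_only:.2f}x bigger with the requirement")
```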

Summary

So yes, complexity requirements reduce the space of possible passwords. However, they help when there is a known bias toward a small subset of the possible passwords. Real complexity requirements don't limit users to a single special character, and there are many more than ten special characters possible. The benefit of real complexity requirements (including punctuation, unbounded length, and "at least one" requirements) versus allowing users to input any (possibly biased) password is much greater than what I outlined here.

9

The best way to defend against brute force attacks is a slow authentication response. If a single guess takes 1-5 seconds to validate and respond, the time required for a brute-force attack to be successful quickly becomes too long to be practical on a large scale.
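One common way to make each guess expensive is a deliberately slow password hash. Here is a minimal sketch using PBKDF2 from Python's standard library; this is an illustration rather than a recommendation of a specific scheme, and the iteration count is a made-up example value you would tune to your own hardware.

```python
import hashlib
import hmac
import os

# Illustrative value; tune it so one verification takes a noticeable
# fraction of a second on your hardware.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage alongside the user record."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Recompute the slow hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)
```

A slow hash also raises the cost of offline guessing if the hash database leaks, which a simple response delay on the login endpoint does not.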

However, databases of stolen passwords have shown that some passwords are far more likely than others. Attacks based on common passwords or common words are therefore also effective.

Some complexity rules can reduce the effectiveness of these types of password attacks by eliminating common words and passwords, but often the same sites have easily abused password recovery utilities.

And of course, users may use the same password elsewhere, and have the other account compromised. Password security is not a simple problem with a simple solution. The weakest element of a security system defines the security of the entire system, and software systems have many elements, including human elements, that are vulnerable.

  • A slow authentication response is great for preventing brute force logins, but does nothing to protect users when password hashes have been leaked. This happens surprisingly frequently, and the only real defense at that point is a large space of possible passwords. –  Apr 27 '12 at 13:09
  • 1
    Great point. And of course no password is resistant to social engineering tactics: phishing, impersonation, identity theft, etc., in which the user actually gives the password to another party they believe to be trustworthy. –  Apr 27 '12 at 17:33
  • Maybe this is a case for requiring users to make passwords they can't possibly remember :) –  Apr 27 '12 at 19:51
  • 1
    Or a case for two-factor security. Log in with a password to have a second password texted to your phone. Or password plus biometric. Password plus recognizing a picture of your coworkers. Password plus secure dongle. Or - just use the lost password recovery, and attack that. –  Apr 27 '12 at 21:11
4

The answer is that it's not that simple. It's important to understand all of the approaches to attacking passwords before you can say that complexity is good or bad, as the different requirements are intended to deter different types of attacks.

As you point out, given two equally long passwords from the same character set, password complexity requirements do reduce the overall search space. For example, given a 1 character ASCII password that must include a letter vs a 1 character ASCII password that may be anything, it's easier to guess the first password.

But password character requirements should not be used to the exclusion of length. Consider the minimum strength of the following sets of password requirements:

  1. 8 printable ASCII characters = roughly 100^8 = 10^16 possible values
  2. 16 digits (0-9) = 10^16 possible values
  3. 9 printable ASCII characters with at least 1 number, 1 letter (of any case) and 1 symbol = (10 numbers) × (52 letters) × (30 symbols) × 100^6 ≈ 10^16

[NOTE: I know that 100 is not the true number, it just makes the math easier]
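For concreteness, this short sketch computes the three rough sizes using the same ~100-character simplification; the figures are order-of-magnitude estimates, exactly as in the list above.

```python
# Order-of-magnitude sizes of the three requirement sets above,
# using the author's simplification of ~100 printable ASCII characters.
PRINTABLE = 100

set1 = PRINTABLE ** 8                 # 8 printable ASCII characters
set2 = 10 ** 16                       # 16 digits
set3 = 10 * 52 * 30 * PRINTABLE ** 6  # 9 chars, >=1 digit, letter, symbol (rough)

for name, size in [("8 printable chars", set1),
                   ("16 digits", set2),
                   ("9 mixed chars", set3)]:
    print(f"{name:18s} ~10^{len(str(size)) - 1} possibilities")
```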

So we see that the three different password requirements are equally strong in terms of brute-force resistance (make any one of the requirement sets longer and it becomes stronger than the others). But we haven't done anything about dictionary attacks. In a dictionary attack, we simply test a list of well-known words and see if any of them is the password.

Password requirement 1 does nothing to prevent dictionary attacks, and 3 does little. Under requirement 1 you can use the word "prevents", which is extremely likely to be found in a dictionary. You can look at real password lists that have been leaked on the internet (or create your own popular service and collect passwords) and discover popular passwords to optimize your attack for the most likely values (ex. if everyone in New York picks a variation of "Yankee1!", then your attack will start by trying variations of "Yankee1!").

So the next natural thing to do is to prevent passwords that are dictionary words. This reduces the number of valid passwords, but it also removes an optimization available to the attacker. Make the password a little longer and you can keep the no-dictionary-words password equally strong or stronger.

But this does nothing for leet-speak passwords. You won't find P4$$w0rd in a dictionary, but it's a pretty standard variation that brute-forcing tools test. Take every word in the dictionary, run it through leet speak, then prepend, append, and insert one or two other characters at every location.
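To make the "run it through leet speak" step concrete, here is a minimal sketch of that kind of mangling rule. The substitution table and suffixes are illustrative and tiny compared to the rule sets real cracking tools ship with.

```python
from itertools import product

# Illustrative leet-speak substitutions; real tools use far larger rule sets.
SUBS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def leet_variants(word: str):
    """Yield every combination of the substitutions above for one word."""
    pools = [SUBS.get(c.lower(), c) for c in word]
    for combo in product(*pools):
        yield "".join(combo)

# One dictionary word plus a handful of suffixes already yields a couple of
# hundred candidates, which is a trivial amount of work for an attacker.
candidates = {variant + suffix
              for variant in leet_variants("password")
              for suffix in ("", "1", "!", "123")}
print(len(candidates), "guesses derived from a single word")
```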

Password complexity requirements can then be introduced to restrict these types of attacks - multiple symbols, multiple numbers, and a (yet again) longer password. Something that requires a password management tool to track. It's hard to brute force (because it's ever longer) and it's hard to optimize (because it's not in a dictionary or like anything in a dictionary).

But what happens when you break a particular password, and the user changes to a new one? Users often try to do the easy thing, and pick an easy-to-remember password - one very similar to the previous one. P4$$w0rd1 becomes P4$$w0rd2. But the attacker will simply add P4$$w0rd1 to his dictionary, and he'll automatically test for P4$$w0rd2. So re-use of passwords is often disallowed.
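A re-use check is usually implemented against a history of stored hashes; here is a minimal sketch of the idea, with an assumed PBKDF2 helper and an example iteration count. Note its limitation: because only hashes are kept, it blocks exact re-use but cannot see that P4$$w0rd2 is a trivial variation of P4$$w0rd1, which is exactly the weakness described above.

```python
import hashlib
import hmac

def slow_hash(password: str, salt: bytes) -> bytes:
    """Illustrative key derivation; the iteration count is an example value."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def violates_history(new_password: str, history: list[tuple[bytes, bytes]]) -> bool:
    """history holds (salt, hash) pairs of the user's previous passwords."""
    return any(hmac.compare_digest(slow_hash(new_password, salt), old_hash)
               for salt, old_hash in history)
```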

But the heart of the issue is, "Is it suitable to the organization's needs?" You need a password that is strong enough to be secure for as long as it is useful (ex. until it is changed or the account is disabled), given all the other compensating controls that we haven't even started talking about. If the brute-forcing speed is slow, or you only get a few chances, weaker passwords are okay (ex. an IronKey allows something like 17 attempts and requires physical access, so a password from a 10^16 space is more than strong enough).
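As a back-of-the-envelope check on "strong enough for as long as it is useful", here is a quick calculation with made-up guess rates: a throttled online attack versus an offline attack against leaked hashes.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
search_space = 10 ** 16  # the rough figure from the requirement sets above

# Guess rates are illustrative assumptions, not measurements.
for label, guesses_per_second in [("online, throttled", 10),
                                  ("offline, GPU rig ", 10 ** 10)]:
    years = search_space / guesses_per_second / SECONDS_PER_YEAR
    print(f"{label} ~{years:.3g} years to exhaust the space")
```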

atk
1

Yes, password complexity requirements reduce the search space and therefore make guessing passwords easier, but at the same time they can also increase security. Why? To understand this, one needs to understand that there is no such thing as a secure (or insecure) password. A password is only secure or insecure in relation to our assumptions about the attacker's method.


To understand this, consider the following example. If you choose at random a string of letters of 0 <= length < n from an alphabet of size d, there are exactly

d^0 + d^1 + ... + d^{n-1} = (d^n - 1)/(d-1)

possibilities. If you have picked a string of length l < n, and the attacker chooses a string of length < n uniformly at random, with probability (d-1)/(d^n - 1) he will pick your password. Note that this probability depends not on your password length but on the maximum length the attacker considers.

Thus we have an apparent paradox: the more passwords the attacker checks, the more secure your password becomes. Of course, that's nonsense. Indeed, if an attacker tries to crack your password, he won't just pick one password at random, he will go through all of them in a sequence.


So the proper way of asking about the security of your password is: how likely is it that an attacker guesses your password within k guesses?

If you assume your attacker goes through all passwords in lexicographic order, the password zzz...z is objectively the most secure password, and should be used by every user. This is ridiculous, but only as ridiculous as our assumption.


A more reasonable approach is to assume that the attacker goes through all strings of length < n in a random order. With a probability of

1 - (N-1)/N * (N-2)/(N-1) * ... * (N-k)/(N-k+1) = 1 - (N-k)/N = k/N

he will have found your password (assuming the attacker never repeats a guess), where

N = (d^n - 1)/(d-1)

is the number of all strings of length < n.

Again, this number doesn't depend at all on the length l of your password. Thus, the empty password is just as secure as every other password.
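You can check this numerically: under the uniform-random-attacker assumption, the chance of being found within k guesses is k/N for every password in the space, regardless of its length. A small sketch, with example values for d, n and k:

```python
# Alphabet size d, maximum length n (exclusive), number of guesses k.
d, n, k = 26, 9, 10 ** 6

# N = d^0 + d^1 + ... + d^(n-1) = (d^n - 1) / (d - 1)
N = (d ** n - 1) // (d - 1)

# Probability that k distinct uniform guesses hit any one fixed password.
hit_probability = k / N

print(f"N = {N:,}")
print(f"P(found within {k:,} guesses) = {hit_probability:.3e}")
```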

Again, this is ridiculous: the attacker probably will not be so stupid as to test all possibilities in a uniformly random way. He will probably go through a small subset of special cases first (passwords of small length, same-letter passwords, ...) and then test the rest of them in a uniform random order.

To summarize: 'the security of a password' is only a subjective quantity you assign to it, based on your beliefs about the attacker. Since most of us believe attackers check 'simple passwords' first, those are subjectively insecure. So insecure, in fact, that it is reasonable to decrease the search space a bit by excluding these passwords.

BlenderBender
  • "A more reasonable approach is to assume is that the attacker goes through all strings of length < n in a random order." No, that's not reasonable. You should take a look at the algorithms used by password crackers (like oclHashcat). In reality, you should assume that the brute-force algorithms will try the password from the most likely patterns to the less likely. Rules reducing the search space can be taken into account by the brute-forcing software. – A. Hersean Apr 05 '17 at 12:14
  • It's not reasonable, but it's more reasonable than assuming the attacker simply goes through all passwords in lexicographic order. I also explain in my answer why it's not reasonable. – BlenderBender Apr 05 '17 at 23:18
1

What I would like to add to your question is: do password complexity requirements reduce security by limiting the search space theoretically, or practically?

As far as I know, from a technical, hands-on point of view, password complexity policies DO NOT exclude character sets; they require the use of some of them. With a complexity policy you force users to include characters from several different character sets, e.g.: the password must use at least three numbers (the digit set), one capital letter (the uppercase set), two symbols (the special-character set) and one small letter (the lowercase set).

You're not enforcing exclusions like "You should not use spaces in your password" or "You are not allowed to use numbers in your password", so you're not reducing the character space.
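As a sketch of what such an inclusion-only check might look like, here is a small example; the specific thresholds mirror the example above (three digits, one capital, two symbols, one lowercase letter) and are illustrative only.

```python
import string

def meets_policy(password: str) -> bool:
    """Inclusion-only policy from the example above: nothing is forbidden,
    certain character classes are merely required."""
    digits  = sum(c in string.digits for c in password)
    uppers  = sum(c in string.ascii_uppercase for c in password)
    lowers  = sum(c in string.ascii_lowercase for c in password)
    symbols = sum(c in string.punctuation for c in password)
    return digits >= 3 and uppers >= 1 and symbols >= 2 and lowers >= 1

# Spaces (or any other character) are still allowed; they just don't count
# toward the required classes.
print(meets_policy("n0 Spaces banned: 123!?"))  # True
```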

In conclusion, I think password complexity does not reduce the search space.

But from a practical, real-world point of view, the fact is:
We are working with humans, and most of them are too lazy to think up, generate and memorize a good (or even an average) password.
In practice, the passwords actually in use belong to a very limited search space: small letters and numbers (even for supposedly randomly generated ones), and not even the full space of those. The real search space is much smaller in the real world.
The real search space is the sum of common-password databases plus your users' laziness:
computer names + 123, the name of the service running on the host + 12345, and many more weak passwords.
You have no idea how many places I have successfully pen-tested with exactly these passwords: remote desktops, FTP servers, root accounts, domain admins, etc. And I'm not talking about SOHO setups; I'm talking about enterprise organizations, firewall appliance vendors, and so on.
For a well-known example, take COMODO: one of the major things that helped the intruder reach its goal was a weak password.
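For illustration only, this is roughly what such a target-specific guess list looks like; the seed names and suffixes here are invented, and the point is how small the list is.

```python
# Names an attacker already knows (hostnames, services, the company itself),
# combined with the usual suffixes. All values here are invented examples.
seeds = ["fileserver01", "ftpadmin", "acmecorp"]
suffixes = ["", "1", "123", "12345", "!", "2012"]

wordlist = [seed + suffix for seed in seeds for suffix in suffixes]
print(len(wordlist), "guesses in the entire list")
```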

What I want to say is that without password complexity policies, the real space is not used. I refer you to an example I read long ago in "Perfect Passwords":

If the password possibility space is the whole world, 98% of the passwords are in the area of 1 square meter of your backyard!

So in my opinion, password complexity does not reduce the search space, because it does not exclude any character sets; it actually INCREASES the effective search space by enforcing the use of character sets that are not normally used.

P.S.: as a tip on password complexity, a good password complexity enforcement policy must take the entropy of the password seriously.
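As a rough illustration of what "taking entropy seriously" can mean, here is the naive character-class entropy estimate; it is an optimistic upper bound, since human-chosen passwords are far from uniformly random.

```python
import math
import string

def naive_entropy_bits(password: str) -> float:
    """length * log2(pool size) for the character classes actually present.
    This is an optimistic upper bound, not a measure of real-world strength."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(round(naive_entropy_bits("Tr0ub4dor&3"), 1))  # ~72 bits by this naive measure
```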

Sam
  • Well, it does reduce the search space for a brute force. If you don't know that a password policy exists then you have to try everything, but if you know that, for example, the password must be 8 characters long and has to have a number, then you don't need to search passwords that are shorter and/or have no number. So in plain brute-force terms it does reduce the search space. – My1 Jan 25 '16 at 17:09