
We know that to slow down password cracking in case of a password database leak, passwords should be saved only in a hashed format. And not only that, but hashed with a strong and slow function that allows varying the number of rounds.

Often algorithms like PBKDF2, bcrypt and scrypt are recommended for this, with bcrypt seemingly getting the loudest votes, e.g. here, here and here.

But what about the SHA256 and SHA512 based hashes implemented in at least glibc (description, specification) and used by default at least on some Linux distributions for regular login accounts? Is there some reason not to use them, or to otherwise prefer bcrypt over the SHA-2 based hashes?

Of course bcrypt is significantly older (1999) and thus more established, but the SHA-2 hashes are already nine years old by now (2007), and scrypt is a bit younger still (2009) yet seems to be mentioned more often. Is it just accepted practice, or is there some other reason? Are there any known weaknesses in the SHA-2 based hashes, or has anyone looked?

Note: I specifically mean the multi-round password hashes described by the linked documents and marked with the codes $5$ and $6$ in crypt hashes, not a single round of the plain SHA256 or SHA512 hash functions.
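For reference, such a hash can be produced with the system crypt(3); here is a minimal sketch using Python's Unix-only `crypt` module (deprecated since Python 3.11 and removed in 3.13), where the rounds value is only an example:

```python
# Minimal sketch: produce a $6$ (SHA-512-crypt) hash with an explicit
# rounds value via the system crypt(3).  The rounds value is illustrative.
import crypt

salt = crypt.mksalt(crypt.METHOD_SHA512, rounds=100_000)
print(crypt.crypt("correct horse battery staple", salt))
# e.g. $6$rounds=100000$<16-char salt>$<86-char hash>
```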

I have seen the question this is marked as a possible duplicate of. The answers there have no mention of the SHA256-crypt / SHA512-crypt hashes, which is what I am looking for.

ilkkachu
  • Possible duplicate of [How to securely hash passwords?](http://security.stackexchange.com/questions/211/how-to-securely-hash-passwords) – sethmlarson Aug 08 '16 at 12:37
  • The "weakness" in hashes that don't have a significant number of rounds and aren't memory-intensive is their speed and ease of implementation on a GPU/specialized hardware. Did you read the answers that you linked? They're better than any answer I could give you. – sethmlarson Aug 08 '16 at 12:39
  • @Oasiscircle, well, the specification I linked to states "The default number of rounds for both algorithms is 5,000." and "maximum for N = 999,999,999". If that is not significant enough compared to bcrypt, I'd be happy to see that as an answer. – ilkkachu Aug 08 '16 at 12:43
  • SHA functions are not memory intensive like Blowfish (which is what bcrypt is built with) and thus can more easily be parallelized on a GPU/specialized hardware. That's the reason why having more rounds with SHA is far less significant than more rounds of bcrypt. Side note, I cannot view your specification document from where I am currently. – sethmlarson Aug 08 '16 at 12:47
  • @ilkkachu No, iterated hashing is not good enough. Attackers can run SHA in parallel on a GPU and get significant speedup. SHA is also used in bitcoin mining so there are plenty of ASICs that can bruteforce it. – Navin Aug 08 '16 at 19:32
  • @Navin, and still PBKDF2, which is likely built on the SHA1 or SHA2 family, is usually listed as the first in the list of good password hashing functions, as in [here](http://security.stackexchange.com/a/31846/118457), so again, the question is, what is the difference here? – ilkkachu Aug 08 '16 at 19:40
  • @ilkkachu PBKDF2 can be used with any keyed HMAC. You can build a keyed HMAC with SHA, but you should not do that in new software! – Navin Aug 08 '16 at 19:54
  • @Navin, interesting note about SHA being used for bitcoin mining, but as far as I know, most ASICs are built to perform `sha256(sha256(x))`, so a Bitcoin mining ASIC may not be too helpful. It could be helpful if multiple iterations are used. (I.e., if number of iterations is even, perform all iterations on ASIC. Else, perform floor(iterationCount/2) iterations on ASIC and one iteration on GPU/CPU.) However, this ASIC advantage is defeated if the hashing scheme is something like `sha256(sha256(x)+Salt)`, because a mining ASIC cannot add data to the digest between first and second calculations. – Spencer D Aug 10 '16 at 17:23
  • @ilkkachu - I felt I had a good answer but others disagree, that's OK. I'll simply agree with the 2 comments Seth made above on Aug 8 '16. – Rob Jun 24 '17 at 00:03

5 Answers

72

The main reason to use a specific password hashing function is to make life harder for attackers, or, more accurately, to prevent them from making their own life easier (when compared to that of the defender). In particular, the attacker may want to compute more hashes per second (i.e. try more passwords per second) with a given budget by using a GPU.

SHA-256, in particular, benefits a lot from being implemented on a GPU. Thus, if you use SHA-256-crypt, attackers will be at more of an advantage than if you use bcrypt, which is hard to implement efficiently on a GPU.

See this answer for some discussion of bcrypt vs PBKDF2. Though SHA-256-crypt is not PBKDF2, it is similar enough in its performance behaviour on GPU, so the same conclusions apply.

The case for SHA-512 is a bit less clear because existing GPUs are much better at 32-bit arithmetic than 64-bit, and SHA-512 uses mostly 64-bit operations. It is still expected that modern GPUs allow more hashes per second than CPUs (for a given budget) with SHA-512-crypt, which again points at bcrypt as the better choice.

Thomas Pornin
  • "Though SHA-256-crypt is not PBKDF2, it is similar enough in its performance behaviour on GPU, so the same conclusions apply." -- this is precisely what I was looking for. – ilkkachu Aug 11 '16 at 15:26
  • For brute-force attacks, this can be compensated by adding just one character to the password. And supercompensated by two characters already. – neverMind9 Mar 09 '19 at 05:31
  • So basically, it's bcrypt's being ancient that makes it more secure. Not often that you hear that in infosec/tech. – Hashim Aziz Jan 16 '21 at 00:59
38

The SHA-2 family of hashes was designed to be fast; bcrypt was designed to be slow. Both are considered robust. With enough rounds or a high enough work factor, either one can take longer than the other, but I would lean towards the one that was designed to be slow. (If server load is an issue, the work factor is adjustable.)

Additionally, I would lean towards bcrypt because it is usually a compiled implementation (C or C++).

The multi-round SHA can easily be implemented in a high-level language, at least for the iteration loop, if not also for the hash itself. High-level languages are less efficient at basic mathematical operations, reducing the number of rounds your production hardware can complete per millisecond.

While both algorithms can be implemented in either high- or low-level languages, or a hybrid, with bcrypt the available options mean you are more likely to land on an efficient implementation. (This puts you on a more even playing field with the attacker.)

Regarding your specific example from the /etc/shadow file, you are likely getting a low-level (efficient) implementation either way (SHA or bcrypt). In this case I would suggest consulting the OS documentation and tuning the rounds (work factor) based on the speed of your hardware versus how strong you would like the hash to be.
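For example, a rough way to pick a rounds value is to time a few candidates on the actual hardware. A quick sketch using Python's Unix-only `crypt` module (removed in Python 3.13); the candidate values are only examples:

```python
# Time SHA-512-crypt ($6$) at a few candidate rounds settings and pick the
# largest one whose per-login latency the server can tolerate.
import crypt
import time

password = "correct horse battery staple"

for rounds in (5_000, 50_000, 500_000):
    salt = crypt.mksalt(crypt.METHOD_SHA512, rounds=rounds)
    start = time.perf_counter()
    crypt.crypt(password, salt)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"rounds={rounds}: {elapsed_ms:.1f} ms per hash")
```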

scrypt (with a great enough work factor) has the added benefit of extra RAM/memory requirements (not just CPU), making it more GPU-resistant than SHA, bcrypt or PBKDF2.

Edit: Thanks to Thomas for pointing out that BCrypt is more GPU-resistant than SHA-2, and that SHA-2 and PBKDF2 are practically equivalent in this regard.

700 Software
  • It must be noted that scrypt uses a configurable amount of memory that depends on how fast it must complete. If you use scrypt on a busy authentication server and must compute a password hash within less than 5 ms or so, then scrypt cannot use much RAM and turns out to be _less_ GPU-resistant than bcrypt. Scrypt was really meant for hard disk encryption (so 1 to 5 seconds computation time); it is not necessarily appropriate for all other situations. This is in fact what prompted the [Password Hashing Competition](https://password-hashing.net/). – Thomas Pornin Aug 09 '16 at 12:14
  • I think implementation details are a bit of a different question than the question of the relative merits of the algorithms, and they would be system-dependent too. e.g. on OpenBSD you'd be very likely to have `$2y$` supported by the system's libc, but not so on all Linuxes. As for the libraries of popular programming languages, I _hope_ they'd have C implementations of _any_ (password) hashes they support, but I can't be sure without checking. – ilkkachu Aug 11 '16 at 15:43
  • Part of your question *"Is there some reason not to use them"* relates very much to how well it was implemented. SHA-256 was not designed to be a 'password hash', so language-specific libraries may not be as seriously compelled to implement it in native C. This also makes installs easier for certain users who do not have the right compilers set up. Also it's tempting for a developer to write their own iteration for rounds. I like to include implementation details in an answer to prevent future visitors from jumping to a conclusion. – 700 Software Aug 11 '16 at 19:14
7

Note: I'm looking at this question after this edit was made, and taking it into account:

Note: I specifically mean the multi-round password hashes described by the linked documents and marked with the codes $5$ and $6$ in crypt hashes, not a single round of the plain SHA256 or SHA512 hash functions.


Looking at the long, 22-step algorithm in the link you provided, I'd rather flip the question around: why would you prefer to use this instead of PBKDF2 with HMAC-SHA2? Because, at least as presented:

  • The definition of PBKDF2 looks much simpler. This is because it is more modular—it defers most of its work to an externally-supplied pseudo-random function. This is normally instantiated with HMAC, which in turn defers most of its work to an external hash function like SHA-1 or SHA-2.
  • This means that the security of PBKDF2 should be easier to analyze.

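To illustrate how little there is to it, here is a minimal sketch of deriving a hash with PBKDF2-HMAC-SHA256 using Python's standard library; the salt, iteration count and output length are only illustrative:

```python
# PBKDF2 defers all of the cryptographic work to HMAC-SHA-256; the caller
# only chooses the salt, the iteration count and the output length.
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

derived = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(derived.hex())
```
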
In contrast, the algorithm in the document you provide lists a ton of steps whose motivation is harder to understand. For example:

11. For each bit of the binary representation of the length of the
    password string up to and including the highest 1-digit, starting
    from the lowest bit position (numeric value 1):

    a) for a 1-digit add digest B to digest A

    b) for a 0-digit add the password string

    NB: this step differs significantly from the MD5 algorithm.  It
    adds more randomness.

It adds more randomness? How does it do this? Why does this step exist at all—is SHA-2 not adding sufficient randomness? If SHA-2 isn't random enough, why use it in the first place? And doesn't this step introduce secret-dependent branching into the algorithm, raising the question of possible timing attacks against it?
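To make that last concern concrete, here is roughly what the quoted step does; a paraphrased sketch with placeholder names, not a full SHA-crypt implementation:

```python
import hashlib

def mix_by_length_bits(a_ctx, digest_b: bytes, password: bytes) -> None:
    # Step 11, paraphrased: walk the bits of len(password) from the lowest
    # up to the highest set bit; a 1-bit mixes digest B into digest A's
    # context, a 0-bit mixes in the password bytes instead.
    cnt = len(password)
    while cnt > 0:
        if cnt & 1:
            a_ctx.update(digest_b)
        else:
            a_ctx.update(password)
        cnt >>= 1

# Example call: a_ctx would be the running SHA-512 context for digest A,
# digest_b the finished digest B from the earlier steps (dummy value here).
mix_by_length_bits(hashlib.sha512(), b"\x00" * 64, b"hunter2")
```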

I'm not by any means saying that the algorithms you link are insecure. It's just that:

  • The work factor they introduce comes down to the same thing that PBKDF2-HMAC-SHA2 would do (a large number of SHA-2 iterations);
  • They look very broadly similar to what you'd have if you unrolled a PBKDF2-HMAC-SHA2 implementation, but with additional complexity whose purpose I don't understand;
  • So, at least as presented in those documents, I find it harder to gain confidence in their design than I do for PBKDF2.

EDIT: After I wrote all this I went and did a bit of research into this algorithm to try and understand it better. First, from the question's own "description" and "specification" links, we learn that the algorithm was derived from an older MD5-based one by making relatively minor modifications.

This older MD5-based algorithm appears to be the one that Poul-Henning Kamp wrote for FreeBSD-2.0 in 1994, which he no longer considers safe. In the first link (where he tells the history of the function), he mentions that glibc adopted his function as well. He also links to Provos and Mazières' 1999 paper on bcrypt and mentions that it expressed some disapproval, and funnily enough they highlighted the same step that caught my attention above:

MD5 crypt hashes the password and salt in a number of different combinations to slow down the evaluation speed. Some steps in the algorithm make it doubtful that the scheme was designed from a cryptographic point of view—for instance, the binary representation of the password length at some point determines which data is hashed, for every zero bit the first byte of the password and for every set bit the first byte of a previous hash computation.

But I think this explains the motivation of the newer functions that you ask about: they are a very minimal modification of an older function that predates most of the modern password hash functions, whose design has been called into question but is likely not fundamentally broken, just pointlessly complex.

Luis Casillas
  • I've wondered about the slightly eh, convoluted structure too, though we'd have to ask the author about that. Thank you for actually addressing the function I referred to, which I'm starting to feel is more unknown than I assumed. As I said, it's the default password hash for some Linux systems, so I didn't _choose_ to use it per se but it makes it somewhat interesting to learn more about. (The note you quoted was in the question from the beginning, for a reason.) – ilkkachu Aug 08 '16 at 23:10
  • @ilkkachu: I've updated my answer with some extra material you may be interested in reading. – Luis Casillas Aug 11 '16 at 00:09
4

The SHA-2 family itself is not necessarily bad. By design, there are not really any security flaws that make bcrypt or scrypt preferable. However, the issue that many security experts have with SHA is that it is too fast and does not require much memory. In comparison, a hashing function like scrypt is much slower and more expensive, so to speak.

Scrypt requires a decent amount of memory to calculate, and largely as a result of needing so much memory, it also requires a lot of computational time compared to SHA. This answer from the Bitcoin Stack Exchange site sums up the advantages of scrypt rather nicely: What features of scrypt() make Tenebrix GPU-resistant? In essence, scrypt is designed to be slow and memory-intensive, and GPUs do not like that. A GPU generally does not have enough memory to hold the working state of many scrypt computations at once without resorting to execution-blocking methods (where the GPU stalls its threads because values can only be fetched from shared memory one at a time), so GPUs cannot provide any large benefit over CPUs in terms of processing time. Bcrypt is similar.
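As a rough illustration with Python's standard library (hashlib.scrypt requires OpenSSL 1.1+; the parameters below are only examples), the memory cost is set explicitly through n and r, roughly 128 * r * n bytes:

```python
# scrypt's cost knobs: n (CPU/memory cost), r (block size), p (parallelism).
# n=2**14, r=8 needs on the order of 16 MiB per hash, which is what makes
# cheap GPU parallelism hard.
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())
```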

Bcrypt is tried and true for password hashing. It has been around for 17 years, and it still gets the job done. However, one day GPU technology will advance to the point where it can calculate bcrypt faster and more efficiently than a CPU. Technology is always evolving and developing, so it will happen eventually. When that day comes, bcrypt will no longer be such a great choice for password hashing, and cryptographers and security experts will need to replace it with a similar algorithm that is slower and more memory-intensive than the existing bcrypt. Maybe that will be scrypt, but who's to say? So, why is SHA frowned upon?

SHA is generally discouraged not because of security flaws, but because of its speed and how well it maps onto a GPU. Someone with unlimited machines/computational power could crack any kind of hashing algorithm, whether it be SHA, bcrypt, or scrypt, but that is theoretical. In practice, an attacker will not have an unlimited number of machines to crack hashes on, so the more you can slow that attacker down, the more difficult it will be for them to crack a password. There is a budget limit for everything, and password cracking is no exception: the attacker can only crack passwords as fast as their budget allows (the best technology they can purchase, the cost to run that technology such as electricity, and so on).

Of course, you could run enough rounds of SHA to severely slow down an attacker, but why not just use bcrypt at that point? Among other things, as GPU technology advances in the near future, you will need to keep adding rounds/iterations to your SHA calculations to keep them slower than bcrypt, whereas bcrypt will remain unfazed right up to the point where GPU technology allows bcrypt itself to be calculated efficiently. Therefore, bcrypt is the preferable choice where possible, not because SHA is insecure, but because SHA is too computationally efficient.
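For comparison, using bcrypt directly is just as short; a minimal sketch assuming the third-party `bcrypt` package, with an illustrative cost factor of 12 (each increment doubles the work):

```python
import bcrypt

password = b"correct horse battery staple"

# gensalt's cost factor is the log2 of the number of key-setup iterations.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed)                            # e.g. b"$2b$12$..."
print(bcrypt.checkpw(password, hashed))  # True
```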

Spencer D
3

Well, one of the features of bcrypt is that it is a very slow algorithm. This slowness offers additional security against brute-force attacks. But for that same reason, it may not be the best solution for, say, heavily used web servers, because an already loaded machine will have to do that heavy computation on each and every login.

But as long as your system can accept it, that additional time actually hinders brute-force attackers.

Serge Ballesta
  • `sha256crypt` is also a slow hash. – Stephen Touset Aug 09 '16 at 01:24
  • Why would anyone design a cipher algorithm to be slow (Blowfish)? – eckes Apr 29 '17 at 02:44
  • @eckes It's the key schedule of Blowfish that is slow, not the rest of the cipher. The key schedule is a one-time setup cost that is incurred each time a new key is used and is equivalent to about 521 encryptions. What bcrypt does is extract that key schedule and modify it a bit to make it more useful as a KDF. – forest Mar 07 '19 at 08:57