
Google recently announced "the first practical technique for generating a collision" for SHA-1. I'm currently planning a password hashing implementation for a website, and historically we have used PBKDF2, as that's the default tool in our framework of choice, .NET.

They've specifically condemned SHA-1 for TLS certificates, but does this condemnation also apply to password hashing for websites? Is PBKDF2 no longer a safe pick given this revelation?
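For reference, a minimal sketch of the sort of thing we have in mind, using the framework's built-in Rfc2898DeriveBytes (which is PBKDF2 with HMAC-SHA1 by default); the salt size, iteration count, output length, and password here are placeholders, not recommendations:

```csharp
using System;
using System.Security.Cryptography;

// .NET's built-in PBKDF2: Rfc2898DeriveBytes defaults to HMAC-SHA1.
// Salt size, iteration count and output length are placeholders only.
byte[] salt = new byte[16];
using (var rng = RandomNumberGenerator.Create())
    rng.GetBytes(salt);                          // per-password random salt

using (var kdf = new Rfc2898DeriveBytes("hunter2", salt, 100000))
{
    byte[] hash = kdf.GetBytes(20);              // 20 bytes = one SHA-1 output
    Console.WriteLine(Convert.ToBase64String(hash));
}
```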

andrewb
  • I will wait for smarter people than I to answer your question, but I would point out that the .NET implementation of PBKDF2 was considered to be sub-optimal (unless they've changed it recently). – AviD Apr 20 '17 at 08:51
  • @AviD: could you add more detail to your comment, i.e. why exactly it was considered sub-optimal and ideally also a source of this claim? Based on this one could decide if this is relevant to SHA-1 and thus to this question or not. – Steffen Ullrich Apr 20 '17 at 08:56
  • 1
    Also, I realized this has been answered in depth a couple years ago, but with an answer that is durable even for now. Closing as duplicate. – AviD Apr 20 '17 at 08:57
  • @SteffenUllrich there is [this comment on the duplicate question](https://security.stackexchange.com/questions/93435/is-rfc2898derivebytes-using-hmac-sha1-still-considered-secure-enough-for-hashi/93440#comment288633_93435): `the .net implementation is slow compared to a good implementation, widening the gap between the defenders performance and the attackers performance. Thus lowering security compared to a defender that uses a good implementation that allows them to choose a higher iteration count. – CodesInChaos` – AviD Apr 20 '17 at 11:28
  • @AviD ah right, good find. Unfortunately it didn't come up in the suggested related questions, as I wrote PBKDF2 and they used the specific .NET function name. There is one change since that question, though: the collision attacks are no longer just theoretical. The outcome is the same, however. – andrewb Apr 20 '17 at 22:56
  • True, but @ThomasPornin does say explicitly that collisions are irrelevant in this context anyway. – AviD Apr 21 '17 at 09:18
  • In the context of iterated HMAC, a sub-optimal implementation performs unneeded repetitions of the compression step. This means the defender has an unoptimized, slow method and therefore uses fewer iterations than would otherwise be possible (a rough calibration sketch follows these comments). – eckes Apr 21 '17 at 16:14
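To make that iteration-count trade-off concrete, here is a rough calibration sketch along the lines eckes describes: probe how fast the installed implementation actually is, then scale the iteration count to a time budget. The 100 ms budget, probe size, and password are illustrative assumptions.

```csharp
using System;
using System.Diagnostics;
using System.Security.Cryptography;

// Calibrate the PBKDF2 iteration count against a per-login time budget,
// so a slower implementation directly translates into a lower safe count.
byte[] salt = new byte[16];
using (var rng = RandomNumberGenerator.Create())
    rng.GetBytes(salt);

const int probeIterations = 10000;
var sw = Stopwatch.StartNew();
using (var kdf = new Rfc2898DeriveBytes("probe-password", salt, probeIterations))
    kdf.GetBytes(20);                            // time a fixed-size probe run
sw.Stop();

// Scale the probe to roughly fill an assumed 100 ms per-login budget.
int calibrated = (int)(probeIterations * 100.0 / Math.Max(1, sw.ElapsedMilliseconds));
Console.WriteLine($"~{calibrated} iterations fit in 100 ms on this machine");
```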

1 Answer

The use of SHA-1 inside PBKDF2 or other password hashing methods is not affected by the collision attack demonstrated by Google. A collision attack only finds two different inputs that hash to the same value; it does not help recover an input from its hash. For functions like PBKDF2, what matters is that the underlying hash cannot simply be reversed, i.e. that it is infeasible to construct an input from a given output (a preimage attack), so an attacker cannot reconstruct passwords from exfiltrated password "hashes" after a system compromise. Apart from that, SHA-1 is applied over several thousand iterations inside a single run of PBKDF2, which makes the attacker's job even harder should some kind of (probably very slow) preimage attack against SHA-1 ever appear.
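To illustrate that iteration structure, here is a simplified sketch of how PBKDF2 derives one output block by chaining HMAC-SHA1 calls, following RFC 2898. The demo salt, password, and iteration count are illustrative, and this is not a replacement for the framework's implementation:

```csharp
using System;
using System.Security.Cryptography;

// Simplified sketch of one PBKDF2 output block (RFC 2898):
//   U1 = HMAC(password, salt || INT(1));  Uj = HMAC(password, U(j-1));
//   block = U1 xor U2 xor ... xor Uc
// A single derivation thus chains thousands of HMAC-SHA1 calls.
byte[] Pbkdf2FirstBlock(byte[] password, byte[] salt, int iterations)
{
    using var hmac = new HMACSHA1(password);

    // salt || INT(1): 4-byte big-endian block index appended to the salt
    byte[] saltWithIndex = new byte[salt.Length + 4];
    Buffer.BlockCopy(salt, 0, saltWithIndex, 0, salt.Length);
    saltWithIndex[salt.Length + 3] = 1;

    byte[] u = hmac.ComputeHash(saltWithIndex);   // U1
    byte[] block = (byte[])u.Clone();

    for (int i = 1; i < iterations; i++)
    {
        u = hmac.ComputeHash(u);                  // Uj = HMAC(password, U(j-1))
        for (int k = 0; k < block.Length; k++)
            block[k] ^= u[k];                     // XOR-accumulate into the block
    }
    return block;
}

byte[] demoSalt = new byte[16];
Console.WriteLine(Convert.ToBase64String(
    Pbkdf2FirstBlock(System.Text.Encoding.UTF8.GetBytes("demo"), demoSalt, 10000)));
```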

Steffen Ullrich
  • However, it is a good idea to migrate to PBKDF2 with SHA-512, as it also slows down ASIC brute-forcing and avoids the kind of compliance/risk discussion raised in the original question (see the sketch after these comments). Unfortunately, implementations of this are not very widespread. – eckes Apr 21 '17 at 16:12
  • Just to highlight this: in some cases, progress on the speed of finding collisions is so fundamental that it also affects other applications of the hash. In the Google case I think it was purely down to advances in brute force. – eckes Apr 21 '17 at 16:16
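As a sketch of the migration eckes suggests: newer frameworks expose an Rfc2898DeriveBytes overload that takes a HashAlgorithmName (available in .NET Framework 4.7.2+ and .NET Core 2.0+). The salt size, iteration count, output length, and password below are illustrative assumptions:

```csharp
using System;
using System.Security.Cryptography;

// PBKDF2 with HMAC-SHA512 instead of the HMAC-SHA1 default.
// Requires the HashAlgorithmName overload (.NET Framework 4.7.2+ / .NET Core 2.0+).
byte[] salt = new byte[16];
using (var rng = RandomNumberGenerator.Create())
    rng.GetBytes(salt);

using (var kdf = new Rfc2898DeriveBytes(
    "hunter2", salt, 100000, HashAlgorithmName.SHA512))
{
    byte[] hash = kdf.GetBytes(32);   // e.g. 32 bytes of derived key material
    Console.WriteLine(Convert.ToBase64String(hash));
}
```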