133

Google and Yubico just announced the availability of cryptographic security tokens following the FIDO U2F specification. Is this just another 2FA option, or is this significantly better than solutions such as SecureID and TOTP?

Specifically:

  • In what way is U2F fundamentally different from OTP?
  • How does U2F affect the feasibility of phishing attacks in comparison to OTP systems?
  • How feasible are non-interactive attacks against U2F (e.g. brute-force, etc)?
  • Can I safely use a single U2F token with multiple independent services?
  • How does U2F stack up against other commercial offerings? Are there better solutions available?
Philipp
tylerl
  • We need to distinguish between the U2F protocol and current U2F devices. The U2F *protocol* protects you from phishing with all devices, as long as the user agent isn't compromised. But since current U2F devices don't have a screen, you can be phished when the user agent is compromised: it can make you believe you are logging in to service A, you press the button, but actually you log in to service B. This can't be fixed by any protocol; the device needs a screen. But the U2F protocol doesn't prevent devices from offering this additional security. This has also been [pointed out by Mathieu Stephan](http://goo.gl/bBPvM0). – user10008 Nov 16 '14 at 06:45
  • @user10008 there is, by definition, no way to protect against a compromised user agent. Even if you can protect the login step, a man-in-the-browser attack can simply wait silently until authentication has completed successfully and then insert its malicious traffic using the properly authenticated session. This is outside the scope of any authentication system, as the authentication needn't be compromised. A screen on the device can't offer meaningful protection here. – tylerl Nov 16 '14 at 11:22
  • Yes it can. If there is a screen, and I'm at a possibly compromised computer (e.g. internet cafe), I know whether I give access to my online banking (which I would never access from the internet cafe) or to some unimportant account. – user10008 Nov 16 '14 at 11:37
  • @user10008, eh. ok. That's not really a credible scenario, though. Certainly not one that warrants special design considerations and a 10x increase in hardware costs. The smart user would not knowingly use a compromised workstation *at all*, and a sensible security posture would disallow it completely, not try to accommodate it safely. – tylerl Nov 16 '14 at 23:35
  • While I still think it's indeed very probable, I agree with the cost argument you make. A low price tag is one of U2F's advantages. As an alternative solution to the problem, I can think of making smartphones (which have screens) act as U2F keys (or relays) for untrusted workstations. I think U2F is something very great, and that the step from manually entered decimal codes to binary wire transmissions was a very good decision. – user10008 Nov 17 '14 at 03:21
  • @tylerl seeing that the device asks for confirmation about logging in on a different service could reveal a compromise. – Natanael Nov 20 '14 at 19:52
  • Client certs have existed for decades and are so much better than this. – André Borie Sep 02 '15 at 16:20
  • @AndréBorie this has been discussed, and no, client certs don't fill the same role. Client certs have their place, but this isn't it. – tylerl Sep 02 '15 at 17:19
  • @user10008 for the attacker to succeed they must both control the computer you're using _and_ know your bank password through other means. It's conceivable but it's a pretty bad situation. – poolie Sep 13 '15 at 20:00
  • This question has been fairly well answered, but since it was asked, Google has published a paper on this subject that answers all of the questions in the original post, and many more. See http://fc16.ifca.ai/preproceedings/25_Lang.pdf – mti2935 Apr 13 '17 at 19:58

6 Answers

93

The answers I've gotten have been good, but I wanted to provide a bit more depth, going specifically into why the system exists at all, which should explain a bit more about what it's good for.

Disclaimer: While I now work for Google, I knew nothing about this project at the time this answer was written. Everything reported here was gathered from public sources. This post is my own opinions, observations, and commentary, and does not represent the opinions, views, or intentions of Google.

Though it's worth pointing out that I've been using this and tinkering with it for quite some time now, and as someone who has dealt a lot with social engineering and account takeovers, I am disproportionately impressed with what has been accomplished here.

Why something new was needed

Think about this: Google deployed two-factor authentication a long time ago. This is a company that cares deeply about security, and theirs has been top notch. And while they were already using the best technology available, the additional security that U2F delivers above traditional 2-factor is so significant that it was worth the company's time and money to design, develop, deploy, and support a replacement system that they don't even themselves sell. Yes, it's a very socially-conscious move of them to go down this road, but it's not only about the community. Google also did it because they, themselves, need the security that U2F alone provides. A lot of people trust Google with their most precious information, and some in dangerous political environments even trust Google with their lives. Google needs the security to be able to deliver on that trust.

It comes down to phishing. Phishing is a big deal. It's extremely common and super effective. For attacks against hardened targets, phishing and similar techniques are really an attacker's best bet, and attackers know it. And more importantly:

Our phishing protection is laughable. We have two-factor auth, but the implementations offer little defense. Common systems such as SecurID, Google Authenticator, email, phone, and SMS loops -- all of these systems offer no protection at all against time-of-use phishing attacks. A one-time-password is still a password, and it can be disclosed to an attacker.

And this isn't just theoretical. We've seen these attacks actually carried out. Attackers do, in fact, capture second-factor responses sent to phishing sites and immediately play them on the real login page. This actually happens, right now.

So say you're Google. You've deployed the best protections available and you can see that they're not sufficient. What do you do? Nobody else is solving this problem for you; you've got to figure it out.

The solution is easy; adoption is the real issue

Creating a second-factor solution that can't be phished is surprisingly simple. All you have to do is involve the browser. In the case of U2F, the device creates a public/private key pair for each site and burns the site's identity into the "Key Handle" that the site is supposed to use to request authentication. Then, that site identity is verified by the browser each time before any authentication is attempted. The site identity can even be tied to a specific TLS public key. And since it's a challenge-response protocol, replay is not possible either. And if the server accidentally leaks your "Key Handle" in a database breach, it still doesn't affect your security or reveal your identity. Employing this device effectively eliminates phishing as a possibility, which is a big deal to a security-sensitive organization.
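To make that concrete, here is a minimal sketch in Python (using the pyca/cryptography library) of the per-origin key model described above. This is not the real U2F wire protocol; `OriginBoundToken` and its methods are names invented for illustration.

```python
# A toy model of the per-origin key scheme described above. This is not the
# real U2F wire format; all names here are invented for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class OriginBoundToken:
    def __init__(self):
        self._keys = {}  # origin -> private key

    def register(self, origin: str):
        # A fresh key pair per origin: nothing links your identity across
        # services, and a server-side breach reveals only a public key.
        private_key = ec.generate_private_key(ec.SECP256R1())
        self._keys[origin] = private_key
        return private_key.public_key()  # the server stores this

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The browser supplies the *actual* origin, so a phishing site can
        # never obtain a signature that the genuine site will accept.
        if origin not in self._keys:
            raise KeyError("unknown origin")
        return self._keys[origin].sign(challenge, ec.ECDSA(hashes.SHA256()))
```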

Neither the crypto nor its application is new. Both are well-understood and trusted. The technology was never the difficulty, the difficulty is adoption. But Google is one of only a small number of players in a position to overcome the barriers that typically hold solutions like this back. Since Google makes the most popular browser, they can make sure that it's compatible by default. Since they make the most popular mobile OS, they can make sure that it works as well. And since they run the most popular email service, they can make sure that this technology has a relevant use case.

More Open than Necessary

Of course Google could have leveraged that position to give themselves a competitive advantage in the market, but they didn't. And that's pretty cool. Everyone needs this level of protection, including Yahoo and Microsoft with their competing email offerings. What's cool is that it was designed so that even competitors can safely make it their own. Nothing about the technology is tied to Google -- even the hardware is completely usage-agnostic.

The system was designed with the assumption that you wouldn't use it just for Google. A key feature of the protocol is that at no point does the token ever identify itself. In fact the specifications state that this design was chosen to prevent the possibility of creating a "supercookie" that could be used to track you between colluding services.

So you can get a single token and safely use it not only on Gmail, but also on any other service that supports U2F. This gives you a lot more reason to put down the money for one. And since Yubico published reference implementations of the server software in PHP, Java, and Python, getting authentication up and running on your own server is safely within the reach of even small shops.

tylerl
  • So why not SSL client cert? That should mitigate MITM as well. – phoeagon Mar 02 '15 at 15:56
  • @phoeagon: Been tried; regular certs are complicated to use, easy to lose (people forget to export them before reinstalling, attackers can easily export them, etc), and the way TLS uses them _was_ recently found vulnerable to similar attacks. (I can't remember at all where exactly I've seen that though.) – user1686 Apr 19 '15 at 13:07
  • @grawity I guess it's a little bit safer to guard against Superfish. True that you can't guarantee confidentiality if either side is compromised, but given the fact that users' PCs are much more vulnerable, it might add tons of grunt work to implement it. (And yes, unfortunately, grunt work for users too.) – phoeagon Apr 20 '15 at 11:30
  • This is a fantastic take on phishing. I'm quite annoyed with "solutions" that propose to consume user time in order to detect phishing as they're mentally and economically taxing. – Steve Dodier-Lazaro May 13 '15 at 19:18
  • @grawity Here's an excellent article detailing the problems with client certificates: http://www.browserauth.net/tls-client-authentication – Ajedi32 Sep 29 '15 at 14:10
  • "All you have to do is involve the browser..." is a little misleading, as this process can be (and is) performed by user-agents that are not browsers. – Dori Feb 01 '16 at 11:02
  • @grawity, Recently they're trying to remake TLS too, thus the present TLS vulnerabilities are not **inherent** and can be fixed in the future. While hardware tokens can be more secure and can make sense in "high-stakes" situations, it isn't convincing that it is a feasible solution, nor a feasible alternative, **in general** for the masses. – Pacerier Apr 18 '16 at 17:15
  • @tylerl, How does this solution match up with RSA's SecurID http://www.cnet.com/news/rsa-cyberattack-could-put-customers-at-risk/ – Pacerier Apr 18 '16 at 17:16
  • @tylerl, Btw, do you mean to say that you don't work on the security branch of Google, or do you mean to say that you work on the security branch, but not specifically on the u2f branch? – Pacerier Apr 19 '16 at 13:54
  • @Pacerier How about: "As of the writing of this answer, I haven't contributed to the security key project." :) – tylerl Apr 20 '16 at 16:29
  • @Pacerier RSA's SecurID is just an OTP device. – Jonathan Cross Apr 29 '20 at 12:28
36

U2F can use an encrypted channel based on public-key crypto to ensure that ONLY the right server can get the one-time token. This means that plugging it in while on a phishing site accomplishes nothing - the attacker can't get into your account. Instead they have to rely on technical attacks like XSS and local malware.

It is supposed to be able to hide the fact that you're using the same device for multiple services, so somebody who controls both site A and site B can't see that you used the same device on both. It is supposed to be secure.

It seems to be the best option available now mainly because of the ongoing standardization process and the wide support and momentum for it.

From the FIDO spec

During registration with an online service, the user's client device creates a new key pair. It retains the private key and registers the public key with the online service. Authentication is done by the client device proving possession of the private key to the service by signing a challenge. The client's private keys can be used only after they are unlocked locally on the device by the user. The local unlock is accomplished by a user–friendly and secure action such as swiping a finger, entering a PIN, speaking into a microphone, inserting a second–factor device or pressing a button.

Natanael
24

I have not yet fully explored the spec. But:

  1. In what way is U2F fundamentally different from OTP?
    U2F is not using an OTP. It is really about site authentication and using possession of a private key as a factor.

  2. How does U2F affect the feasibility of phishing attacks in comparison to OTP systems?
    Time-bound OTP systems do an excellent job of combating phishing (stealing credentials) because they are hard to steal. U2F is really meant to combat MiTM attacks.

  3. How feasible are non-interactive attacks against U2F (e.g. brute-force, etc)?
    Brute-force attacks would not really be the way to go. You would want to steal the keys - either from the server or the client. How it handles malware, etc., is key. Implementation will be very important.

  4. Can I safely use a single U2F token with multiple independent services?
    Sure - that's why public/private keys are better than shared secrets.

  5. How does U2F stack up against other commercial offerings? Are there better solutions available?
    I can only speak to ours, which is in both our commercial and open-source versions. The main difference is that we store a hash of the targeted site's SSL cert in the authentication server and deliver it along with an OTP encrypted by the auth server's private key. Before the user gets the OTP, the software token fetches the target site's cert over the user's connection, hashes it, and compares the two. If they match, the OTP is presented, copied to the clipboard, and the browser is launched to the URL. If they don't, an error is given (the cert check is sketched below).

    So, there's no change to the server or browser needed. The keys are stored on a separate server from the web server. The OTP is part of the process (though it can be removed/hidden). It's open-source. On the other hand, U2F does have momentum, despite being a 'pay-to-play' consortium. U2F is available on some 'secure' hardware offerings. Ours can be implemented on them (e.g. a crypto-USB drive). YMMV.

    More info on WiKID's mutual auth is here: https://www.wikidsystems.com/learn-more/technology/mutual_authentication and a how-to here: http://www.howtoforge.com/prevent_phishing_with_mutual_authentication.
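As a rough illustration of the cert-comparison step described above (not WiKID's actual code; the expected hash is assumed to have been stored in the authentication server beforehand):

```python
# A minimal sketch of the certificate-hash check described above, assuming
# a known-good SHA-256 hash of the site's cert was stored earlier.
# Illustrative only; this is not WiKID's actual implementation.
import hashlib
import ssl

def site_cert_matches(host: str, expected_sha256_hex: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))  # fetched over the user's connection
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == expected_sha256_hex
```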

nowen
  • A big thank you for disclosing your relationship to a commercial product. It really helps provide relevance and context. – schroeder Oct 22 '14 at 14:46
  • I'm hopeful you're not being sarcastic. It's not easy contributing sometimes, even releasing open-source software. "Consider the source" is important. Everyone is biased in some way. I will say that I don't think there are many solutions targeting this area. Mainly b/c the banking industry in NA is not a very good market for vendors - there are relatively few buyers. – nowen Oct 23 '14 at 15:14
  • No! Totally appreciative! We get people shilling their own stuff and it creates complications. The Internet really needs to develop that Sarcasm font. – schroeder Oct 23 '14 at 15:17
  • More than that, your answer has really helped me and enabled me to talk to people in my company about U2F. – schroeder Oct 23 '14 at 15:19
  • In light of the answer by the OP, which strongly focuses on phishing, I think your (2) (which currently seems to say that U2F adds nothing over Time-bound OTP systems) needs modification. As the OP says, "Attackers do, in fact, capture second-factor responses sent to phishing sites and immediately play them on the real login page." See for example (from recently) https://www.seancassidy.me/lostpass.html which phishes for the TOTP (Google Authenticator) code as well. – ShreevatsaR Jan 21 '16 at 00:39
17

I just read some of the specs because I wanted to know if the device stores the actual (private) keys. I can try to answer some of the questions.

OTPs are simply one-time tokens, while U2F is based on public-key cryptography; more specifically, the Yubico FIDO U2F key seems to use elliptic curve cryptography.

U2F should help protect against phishing attacks, since you have to confirm each transaction by manual intervention, i.e. pushing the button. Stealing the keys would require stealing the device itself, which might be more difficult than stealing OTP PINs, depending on where people store those. Of course, both approaches are still somewhat vulnerable to MitM attacks; if an attacker can somehow get between you and the service and interfere in an ongoing session, there's not much that can be done. Traffic should be encrypted, endpoints verified, and no one should have full access to your computer; the obvious stuff.

I suppose the feasibility of breaking U2F keys would depend on the strength of the public-key algorithms implemented on the specific hardware key. The Yubico key seems to use ECDSA on the NIST P-256 elliptic curve. Judge for yourself whether the number of bits (and the source of the curve) is sufficiently secure and reliable...

The overview document from FIDO mentions "Inexpensive U2F Devices" that don't store the actual private keys, but instead store them encrypted (wrapped) inside the "Key Handle", which is the identifier that links private and public keys together and is sent out to remote services. So if I understand correctly, the remote service gets both the public key (as-is) and the private key (encrypted inside the Key Handle), and the security really stands or falls with the strength of the encryption performed on the hardware device; the remote site holds your private keys! In a way, it's the reverse of storing a user's session encrypted in a cookie: here the remote site keeps the keys, where the private-key part is encrypted and in theory can only be decrypted by the hardware device. Interestingly, the Yubico device itself seems to be such a device, i.e. the keys leave the device instead of being contained in it.

I understand the economical and ease-of-use motivations – storing a lot of key pairs on the chips in these kinds of devices would be more expensive – but I'm not sure I like this approach.

So to get back to your question about using the tokens with multiple independent services: the device can generate as many pairs as it wants, since the key pairs are saved on the services themselves. When you log in, the device unwraps the private key and creates the signature. The signed message contains the origin, so the key should only work for that specific service.

For highly secure purposes, it would be better to use a device that stores the private keys (or generates them) in a way that they can't be retrieved at all and never leave the device. I don't know anything about the electronic side of these devices, but I assume it would require a pretty sophisticated attacker to steal and then crack the physical hardware to obtain the keys, assuming the device uses the same chips as modern smart cards, SIMs, and other forms of hardware crypto.


wvh
  • +1 for mentioning the private key is *not* stored on the Yubikey for economical reasons. Do you know any FIDO compliant device that stores the private keys instead of the Key Handle? – Morgan Courbet Dec 27 '17 at 13:54
  • I don't understand why this is even economical. Aren't private keys under a megabyte and wouldn't most people use this with fewer than two dozen sites? Storage isn't THAT expensive... – Stephen Jan 31 '19 at 02:24
  • The amount of persistent storage you could fit on a typical Yubikey would still be quite limited, and it would further complicate the device. Because there is no storage on the device itself it could actually last you a lifetime's worth of services instead of suddenly refusing to register a new keypair at a seemingly random time. There is a single unique private key on each device used in the generation of the keypair for each service. That key can not be retrieved and figuring out the next key to be generated is thus extremely difficult even with obvious physical compromise of a key. – TwoD Apr 06 '19 at 23:36
10

A U2F token implements a challenge response algorithm using public key cryptography. It provides two functions: registering a new origin and computing the response to a challenge.

Thus, it does not implement One Time Password (OTP) generation.

Registering a New Origin

(An origin string identifies the remote system, e.g. the hostname of the remote server.)

When registering a new origin, the token takes the origin string as input and returns

  • a newly generated public key (KA),
  • an attestation certificate (i.e. the attestation public key and a signature over KA using the attestation private key),
  • a key handle (H), and
  • a signature (over the origin, KA, and H) made with the newly created private key

The attestation key pair is shared between a group of devices produced by the same vendor and is usually signed by a well-known CA. The key handle is a string that identifies KA. All of these items are sent to the origin.
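A schematic of that registration flow might look like the following sketch. Attestation handling is omitted; `token_keys` models the token's internal table, and all names here are invented rather than taken from the specification.

```python
# Schematic of the registration step described above; attestation is omitted.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def register(token_keys: dict, origin: str):
    private_key = ec.generate_private_key(ec.SECP256R1())
    key_handle = os.urandom(32)                  # H: opaque reference to the key
    token_keys[key_handle] = (private_key, origin)
    ka = private_key.public_key().public_bytes(  # KA, as an uncompressed EC point
        serialization.Encoding.X962,
        serialization.PublicFormat.UncompressedPoint)
    # Signature over the origin, KA, and H with the newly created private key.
    signature = private_key.sign(origin.encode() + ka + key_handle,
                                 ec.ECDSA(hashes.SHA256()))
    return ka, key_handle, signature             # all sent to the origin
```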

Signing the Challenge

When signing a challenge (i.e. generating the response) the token takes the origin string, challenge data (containing session information), and a key handle as input.

  • If the origin does not match the origin at key handle generation time, an error is returned.
  • If the key handle is unknown, an error is returned.
  • Otherwise, the signature over the challenge data and the value of an internal transaction counter is computed (using the private key referenced by the key handle) and returned along with the counter value.
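Continuing the same toy model, the signing step could be sketched like this (in practice the counter lives in token hardware, and the exact message layout is simplified here rather than being the real U2F format):

```python
# Schematic of the signing step described above: the token refuses unknown
# key handles and mismatched origins, then signs the challenge plus counter.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_challenge(token_keys: dict, counter: int, origin: str,
                   challenge: bytes, key_handle: bytes):
    entry = token_keys.get(key_handle)
    if entry is None:
        raise ValueError("unknown key handle")
    private_key, registered_origin = entry
    if origin != registered_origin:
        raise ValueError("origin mismatch")  # this check is what defeats phishing
    counter += 1                             # lets the server spot cloned tokens
    message = challenge + counter.to_bytes(4, "big")
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
    return signature, counter
```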

Possible Token Implementations

  1. A valid U2F token implementation has a potentially large writable associative array where key handles are mapped to private keys and origin information. This array must not leave the token and should therefore be protected against being read out.

    The U2F specification does not allow the reuse of private keys for different origins; thus, a large array is needed.

  2. Alternatively, a U2F token implementation without any read-write memory is also possible: instead of storing the private key and the origin inside the token, the token symmetrically encrypts them with an internal key (K0). The result is then simply returned as the key handle. In a sane hardware design, K0 can't leave the token. In other words, the private key and origin string are stored externally - they are distributed to the origins for use as key handles - which is fine as long as the encryption can't be broken. (A toy version of this is sketched below.)

Basically, most available U2F tokens implement the second method and are thus relatively inexpensive to produce (starting around 5 € at Amazon: search for 'FIDO U2F'). The Yubikey U2F is an example of such an implementation.
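A toy version of that wrapping scheme, using AES-GCM from the pyca/cryptography library (all names are invented; a real token derives and guards K0 in hardware):

```python
# Toy illustration of method 2 above: the "key handle" is just the private
# key and origin, encrypted under a device-internal key K0.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K0 = AESGCM.generate_key(bit_length=256)  # never leaves the token

def make_key_handle(private_key_bytes: bytes, origin: str) -> bytes:
    nonce = os.urandom(12)
    # Binding the origin as associated data means a handle registered for
    # site A cannot be unwrapped when presented with site B's origin.
    return nonce + AESGCM(K0).encrypt(nonce, private_key_bytes, origin.encode())

def unwrap_key_handle(key_handle: bytes, origin: str) -> bytes:
    nonce, ciphertext = key_handle[:12], key_handle[12:]
    # Raises InvalidTag if the handle was forged or the origin doesn't match.
    return AESGCM(K0).decrypt(nonce, ciphertext, origin.encode())
```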

Attacks

Under normal circumstances, brute-force attacks should be infeasible. One such attack would be trying to brute-force the origin-specific private key when you know the public key.

  • Assuming that an inexpensive U2F token is in use, one could also try to brute-force the internal key (K0) when you know an origin-specific key handle. This is only feasible if the token vendor made a design mistake - for example, if the vendor ships each token with the same internal key and that key somehow leaks.
  • Or, if the internal key K0 is different for each token but can't be reinitialized by the end user, is retained by the vendor, and is (voluntarily or involuntarily) shared with another party, then that party needs less effort to brute-force a key handle that originated from a token produced by that vendor.
  • Another risk would be a weak symmetric encryption implementation inside the token, making it easier to brute-force the encrypted data in the key handle H.

Some phishing and man-in-the-middle scenarios are defeated because the U2F token verifies the origin, and session-specific data is used as the challenge.

maxschlepzig
6

I think it is very bad that the user of the token does not see what action he/she actually agrees to by pressing the button on the token. A user with an infected OS on a public, untrusted PC could otherwise unwittingly give a malicious program access to his own bank account instead of logging in to Facebook.

However, the U2F protocol does contain information about the current action (URI, AppID, and optional TLS channel ID). Before you start using these devices, I think it makes sense to wait for the appearance of U2F tokens with a small LCD screen that will display this info (at least the AppID) and then allow you to reject the action if it proves to be not what you expect.