
There are many articles on the internet criticising JavaScript cryptography in the browser.

They raise some fair points, but I'd like to analyse them from a 2017 perspective and hear your opinions. To do so, I'll define a possible architecture for an encrypted notes site so you can raise possible problems and solutions regarding its security:

  1. Of course, we will be using SSL.

  2. The first time a user logs in to our notes app, we send them our public key. This key will be used to verify the authenticity of our crypto.js script. The public key will be stored in the user's browser.

  3. A checker.js script is downloaded and stored as well. This script will never change and it will be in charge of checking the integrity of crypto.js and (2).

  4. In (2) and (3) we establish a Trust On First Use (TOFU) relationship between the user and our site. Both the public key and checker.js are cached using a service worker or similar. We will be using SRI as well to try to maximise integrity.

  5. Even though we are using SSL, a MITM attack could happen while downloading (2) and (3), so we could offer a way to check that the public key and checker.js are not compromised. For example, by comparing hashes of the local copies of (2) and (3) with the real ones on our site or on a third-party site. This solution is not perfect, and this is probably the weak link, but I believe that a similar attack could be performed on desktop apps.

  6. On first login, we also send the user their private key. This private key will be used to encrypt and sign the notes, and it is itself sent encrypted.

  7. The key required to decrypt (6) is sent via email to the user. In this way we establish a two-channel authentication.

  8. Using Web Crypto we decrypt (6) with (7). In this way (6) is never stored decrypted in the browser, and it is not accessible by JavaScript thanks to the Web Crypto API.

  9. Now we can start with the functionality of our web app: creating encrypted notes. To do so, the user writes a note and clicks the save button. The server sends crypto.js signed with the server's private key (see 2).

  10. The signature is verified using the public key downloaded in (2) with (3), and if it is correct, the note is encrypted (see the sketch after this list). If checker.js was modified, SRI should stop this process.

  11. The note is sent back to the server and stored.

  12. Depending on the functionality required, the server either deletes the user's private key and keeps only the public one, or retains both.
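
To make (2), (9) and (10) more concrete, here is a rough sketch of how checker.js might verify crypto.js with the Web Crypto API. The algorithm choice (RSA-PSS) and all function and variable names here are hypothetical illustrations, not fixed parts of the design:

```javascript
// Hypothetical sketch: verify crypto.js against the public key cached in (2).
async function verifyCryptoJs(scriptText, signature, storedPublicKeyJwk) {
  // Import the public key that was stored on first use.
  const publicKey = await crypto.subtle.importKey(
    'jwk',
    storedPublicKeyJwk,
    { name: 'RSA-PSS', hash: 'SHA-256' }, // algorithm choice is an assumption
    false,
    ['verify'],
  );
  // Check the detached signature sent alongside crypto.js in (9).
  return crypto.subtle.verify(
    { name: 'RSA-PSS', saltLength: 32 },
    publicKey,
    signature, // ArrayBuffer received with the script
    new TextEncoder().encode(scriptText),
  );
}
```

Only if the returned promise resolves to true would crypto.js be executed and the note encrypted (10).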

What do you think about this setup?

I am not happy about (5), but the same thing can happen with native software: it is not the first time we have seen compromised installers, a MITM attack can happen while downloading an installer, and native code signing is not perfect.

Do you think that web crypto is still strictly worse than native crypto?

Any suggestions that could improve browser cryptography?

user1204395
  • malicious browser extensions in the middle of every step of that process? – schroeder Nov 16 '17 at 15:47
  • Yes, malicious browser extensions can be a problem, just as malware is a problem for native apps. However, we don't consider crypto in C++ a bad idea because of keyloggers, so I don't think we should consider crypto in the browser a bad idea because of extensions (and that is actually a point in some of the articles I linked). – user1204395 Nov 16 '17 at 16:03
  • "What do you think about this setup?" It is overly complicated. – A. Hersean Nov 16 '17 at 16:15
  • I think that's a great question but I find it a bit unfortunate that you combine it with the description of some particular system since answers will point out possible flaws in that instead of addressing the broader point of whether in-browser crypto is still bad in 2017. – Arminius Nov 16 '17 at 16:19
  • I thought it would be easier to discuss whether web crypto is still bad if I came up with an example that already solves some problems mentioned in many articles. But yes, please follow @Arminius' advice and discuss browser crypto in general, not only this example. Thanks! – user1204395 Nov 16 '17 at 17:08
  • @user1204395 malicious browser extensions won't get caught by AV; a keylogger can – schroeder Nov 16 '17 at 17:30
  • "Both the public key and "checker.js" are cached using a service worker or similar" The web server can still replace the service worker though, right? I'm not really sure what the point of all this is if you still have to trust the web server. – Ajedi32 Nov 16 '17 at 19:58
  • @Ajedi32 yes, you need to trust the web server just as you do with any native solution; for example, you need to trust your favourite PGP plugin not to upload your private key or push an evil update to all users. It is a fair point, but I don't think this is a disadvantage of web vs. native. – user1204395 Nov 17 '17 at 10:40
  • @user1204395 If you trust the web server then all this extra complexity is pointless. A normal web app which transmits all data over TLS would work just as well, and be just as difficult to compromise. – Ajedi32 Nov 17 '17 at 16:12
  • @Ajedi32, yes, but you can use web crypto to encrypt local files that are not stored on a server. The general question is whether web crypto is viable right now. – user1204395 Nov 18 '17 at 00:22
  • @A.Hersean this, a thousand times this. Every time something comes up with client-side JS encryption, it's always something like: "Could we use _overly complicated client-side setup_ to replace _very simple server-side setup_?", and my answer is always "WHY?" – BgrWorker Nov 20 '17 at 10:29
  • This question is old, but I just wanted to mention that it's harder to control side-channel vulnerabilities in JavaScript than in lower-level languages, which is especially important for cryptography. – Steve Aug 24 '18 at 06:27
  • @Arminius I couldn't agree more. There is a similar question at https://security.stackexchange.com/questions/133277/problems-with-in-browser-crypto which partially addresses this. As I see it, the problem that still remains (as of 2020) is the 'chicken and egg' problem with browser crypto (as coined in the article by NCC Group) - i.e. If you can't trust the server with your secrets, then how can you trust the server to serve you secure js crypto code? – mti2935 Jul 29 '20 at 18:29

3 Answers


The main issue with cryptography in web pages is that, because the code you're executing is loaded from a web server, that server has full control over what that code is and can change it every time you refresh the page. Unless you manually inspect the code you're running every time you load a new page on that site (preferably before that code is actually executed), you have no way of knowing what that code will actually do.

The Web Cryptography API can mitigate these risks somewhat by securely storing cryptographic keys in a way that scripts running on the page cannot access, but all the operations that can be performed with those keys (decrypting, signing, etc.) will still be available to those (potentially malicious) scripts.
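
For illustration, here is a minimal sketch of that distinction, assuming an AES-GCM key (run inside an async function):

```javascript
// The raw key bytes can never be read back by any script...
const key = await crypto.subtle.generateKey(
  { name: 'AES-GCM', length: 256 },
  false, // extractable: false, so exportKey() will reject
  ['encrypt', 'decrypt'],
);
// await crypto.subtle.exportKey('raw', key); // rejects: key is not extractable

// ...but any script on the page, malicious or not, can still USE the key:
const iv = crypto.getRandomValues(new Uint8Array(12));
const ciphertext = await crypto.subtle.encrypt(
  { name: 'AES-GCM', iv },
  key,
  new TextEncoder().encode('secret note'),
);
```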

As long as you do trust the server not to behave maliciously, cryptography in the browser can be quite useful. But in many applications where cryptography is used, that level of trust in a remote server you do not control is unacceptable.

For your scheme in particular:

  1. Of course, we will be using SSL

This is good. Without SSL, all later security measures would be pointless because an attacker could simply replace your code with their own and do whatever they want with the user's data.

  2. The first time a user logs in to our notes app, we send them our public key. This key will be used to verify the authenticity of our "crypto.js" script. The public key will be stored in the user's browser.

This seems pointless. TLS already sends the client your server's public key and uses it to verify the authenticity of all scripts you load over that connection. There's no reason to do the same thing all over again in JavaScript.

  3. A "checker.js" script is downloaded and stored as well. This script will never change and it will be in charge of checking the integrity of "crypto.js" and (2).

This is also pointless, because there's no way to enforce your requirement that "This script will never change". You could send a Cache-Control header with a long max-age, but there's no guarantee the user agent will always respect that value; caching is not intended to be relied upon for security.

  4. In (2) and (3) we establish a Trust On First Use (TOFU) relationship between the user and our site. Both the public key and "checker.js" are cached using a service worker or similar.

Just to be clear: caching those files with service workers has no impact on the security of the overall system. When the user later comes back to your site the browser will check with the server to see whether the service worker has updated and install the new version if it has. So the server still has full control of the code running in the user's browser. There's no "Trust On First Use (TOFU) relationship" here.
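
For reference, the kind of service worker the question describes might look like the hypothetical sketch below; note that nothing in it stops the server from shipping a replacement worker on a later visit:

```javascript
// sw.js -- hypothetical worker caching checker.js on the first visit.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('tofu-v1').then((cache) => cache.addAll(['/checker.js'])),
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request)),
  );
});
// The browser periodically re-downloads sw.js itself and installs any
// byte-different version, so the server retains full control.
```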

  5. Even though we are using SSL, a MITM attack could happen while downloading (2) and (3), so we could offer a way to check that the public key and "checker.js" are not compromised.

That's a nice gesture, but as I previously stated, even if those files are not currently compromised, the server or a MITM (who somehow managed to compromise your TLS connection) can easily update those files at any time to compromise them without the user noticing, so I don't really see the point of this feature.

  6. On first login, we also send the user their private key. This private key will be used to encrypt and sign the notes, and it is itself sent encrypted.

  7. The key required to decrypt (6) is sent via email to the user. In this way we establish a two-channel authentication.

  8. Using Web Crypto ( https://www.w3.org/TR/WebCryptoAPI/ ) we decrypt (6) with (7). In this way (6) is never stored decrypted in the browser, and it is not accessible by JavaScript thanks to the Web Crypto API.

Implementing this would require that the server have access to a plaintext version of the user's private key. Depending on exactly what you're using these keys for, that could be problematic if the server is ever compromised. Instead, you should consider using the Web Crypto API to generate a private-public key pair on the user's device, and have the browser send the public portion of that key to the server. That way the server never has access to the user's private key.
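
A sketch of that alternative, inside an async function; the ECDSA parameters and the /register-key endpoint are illustrative assumptions:

```javascript
// Generate the key pair on the user's device; the private key never leaves it.
const keyPair = await crypto.subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' },
  false, // the private key is not extractable
  ['sign', 'verify'],
);

// Only the public half is ever sent to the server.
const publicJwk = await crypto.subtle.exportKey('jwk', keyPair.publicKey);
await fetch('/register-key', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(publicJwk),
});

// keyPair.privateKey can be persisted locally (e.g. in IndexedDB) without
// its raw bytes ever being visible to JavaScript.
```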

  9. Now we can start with the functionality of our web app: creating encrypted notes. To do so, the user writes a note and clicks the save button. The server sends "crypto.js" signed with the server's private key (see 2).

  10. The signature is verified using the public key downloaded in (2) with (3), and if it is correct, the note is encrypted. If "checker.js" was modified, SRI should stop this process.

Unless you're loading checker.js from an untrusted third-party server, Subresource Integrity is unnecessary in this scenario. Anyone who can compromise your server or its connection to the client to modify checker.js can also modify the values of the subresource integrity hashes so that the browser will accept the modified script without complaint. Or they could just modify the page to not load checker.js at all, and use a completely different script of their own making instead. Either way, subresource integrity doesn't help.

  11. The note is sent back to the server and stored.

That's fine as long as you fix the issue I already mentioned with 6, 7, and 8 so the server doesn't have the keys needed to decrypt the user's files. If you're fine with the server having the keys to access the user's files, there's no need for client-side crypto at all; just let the server handle the encryption.

  12. Depending on the functionality required, the server either deletes the user's private key and keeps only the public one, or retains both.

Or, as I suggested, just don't give the server the user's key in the first place. Other than that though, this part is fine security-wise, in that it prevents the server from accessing the user's files while the user is not using the site.

Once the user visits the site though, the user's browser will load code from that server which will have the ability to use the user's keys to decrypt the user's notes. So for the average user, accessing their notes without giving your server the ability to read them is impossible.

There are also some usability issues with this implementation, as it means users will not be able to sign into their account from a new browser and still have access to their notes. A better implementation would be to derive users' crypto keys from their passwords using a key derivation algorithm like PBKDF2 (available via the Web Cryptography API) with a high work factor. This would allow them to access their notes from any browser. (But would still have all the same security downsides mentioned in my comments above.)
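
A sketch of that derivation; the salt handling and iteration count are illustrative assumptions:

```javascript
// Derive a per-user AES key from the password; the same password and salt
// yield the same key in any browser.
async function deriveNoteKey(password, salt /* Uint8Array */) {
  const baseKey = await crypto.subtle.importKey(
    'raw',
    new TextEncoder().encode(password),
    'PBKDF2',
    false,
    ['deriveKey'],
  );
  return crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 310000, hash: 'SHA-256' },
    baseKey,
    { name: 'AES-GCM', length: 256 },
    false, // the derived key need not be extractable either
    ['encrypt', 'decrypt'],
  );
}
```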

Ajedi32

The things that really stand out to me are 6 and 7. They are by far the parts of this description that make me cringe the most.

The entire point of setting up TOFU is that there is two-way trust. Of course, first-use trust has its own issues, and I believe you already outlined most of those cases, which, though less likely to occur, are possible.

But you are telling me the site will generate a private key for me, hand me that key encrypted, and then give me a way to decrypt that private key via email? Basically, sending me via email the means to decrypt my means to decrypt.

I mean, when I use a service I generally look for equal exposure. I don't want the website to be a single point of failure for anything that I do. It also creates a situation where messages meant for me can be decrypted by anyone with sysadmin access to the user-generated private keys, which means I can't trust it.

It completely undercuts the whole point of asymmetric cryptography, especially since creating my own private key and sending the server the public key is a simple matter. Even for users who aren't technically inclined, it could be built into the client. There is absolutely no reason, IMO, for another party to create a private key for me, or for that key ever to touch the internet.

I'll let others answer the other points; I think 6 and 7 are the most dangerous, barring the MITM issue you already mentioned in the OP.

Nalaurien
  • Fair point, and I absolutely agree. I didn't mention what you suggest (generating a private/public key pair in the browser and storing only the public key on the server) in order to keep the "notes app" as simple as possible. While storing the private key could be useful in company environments, where the admin can legally decrypt everything, it is not acceptable for end users. Thanks a lot for your answer! – user1204395 Nov 16 '17 at 16:15

The only context where I do client-side cryptography is hashing client credentials (password, credit card info, etc.) so that the server does not know them. The server does not need to know the plaintext password it has to check for validity: it applies its own hashing to the password string it receives, and as long as the client sends the same hash, the scheme works fine. A valid password gets authenticated while the server remains agnostic of the client's plaintext password and of the client-side hashing algorithm.

I used to encrypt/decrypt usernames in the past, when some of our clients did not follow our recommendation to run the system over HTTPS/TLS. Encrypting usernames is pointless over an SSL connection, because the server already knows and stores the username. My point is this: I don't recommend client-side crypto unless the context makes the server agnostic to the client's secret data. Its purpose is to protect the client's secret from the server, or to protect the server from a vulnerability of holy scale. In all other contexts, don't deviate from the main road (HTTPS, TLS, etc.).
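
A minimal sketch of that client-side hashing; salting with the username is an illustrative assumption, and the server still applies its own salted, slow hash to whatever it receives:

```javascript
// Hash the credential in the browser so the plaintext never leaves it.
async function hashCredential(username, password) {
  const data = new TextEncoder().encode(`${username}:${password}`);
  const digest = await crypto.subtle.digest('SHA-256', data);
  // Hex-encode the digest for transport.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}
```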

P.S. Some users are stupid enough to use words from their usernames in their passwords, particularly in records-management and document-management contexts. My favourite attack is a timing attack on usernames, followed by using the harvested usernames as a precious resource for a dictionary attack on the passwords if a direct timing attack on the passwords fails. It is interesting that few people in the document/records management software world care about timing attacks on usernames.