
Recently there have been quite some discussions about the security approach of ProtonMail. Since it does its crypto client-side, loading the JavaScript code in the user's browser, then as far as I know, even if that code is published somewhere on the internet, there is no guarantee that it has not been manipulated by a malicious entity with admin access to the server before the user actually runs it.

So, generally speaking, the question is: how can I develop open source software and let the end user verify that the code behind the running service is the same as the published code?

In the case of compiled software I can use signed reproducible builds, but in the case of interpreted code (for example JavaScript, as in ProtonMail), what can I do?

From my very basic knowledge of programming and cryptography, I would try to solve this by adding to the published code the fingerprint of, let's say, each source file. That fingerprint should also be signed by the developer. At that point, when the user downloads the code while accessing the web service, he can calculate the fingerprint and compare it against the published one. Is this a viable approach? Am I missing something?
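
For concreteness, here is roughly the workflow I have in mind, using sha256sum and gpg purely as stand-in tools (the exact commands and file names are just an assumption on my part, not an established scheme):

# Developer side: fingerprint every source file and sign the resulting manifest
sha256sum src/*.js > manifest.txt
gpg --detach-sign --armor manifest.txt    # produces manifest.txt.asc

# User side: verify the developer's signature, then check the downloaded files
gpg --verify manifest.txt.asc manifest.txt
sha256sum -c manifest.txt                 # reports any file whose hash does not match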

Thanks in advance!

P.S. I have already read some other questions like this one, and I think they still do not fully answer this question.

hwktest

2 Answers


From MDN web docs:

Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match.

The idea is to generate a hash of your web app files (e.g. JavaScript files) using a command such as openssl or shasum, choosing one of the currently allowed hash functions (sha256, sha384, or sha512, which are also the allowed prefixes), and then embed the generated digest in the script tag served to the user through the integrity attribute. The browser then compares the fetched script against the expected hash and verifies that there is a match.

If the script doesn't match its associated integrity value, the browser will refuse to execute it, indicating that it is not the same source code, whether because of a network error or unexpected file manipulation.

Example of generating a hash digest for FILENAME.js using the openssl command:

cat FILENAME.js | openssl dgst -sha384 -binary | openssl base64 -A

Before the FILENAME.js script is executed on the user's side, you need to embed the generated hash in the integrity attribute of your script tag, so that the browser can validate the script's hash against the expected one.
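
For example, the script tag could look like the following (the src URL is a placeholder, and the integrity value would be the output of the openssl command above, prefixed with the hash function name):

<script src="https://example.com/FILENAME.js"
        integrity="sha384-BASE64_DIGEST_FROM_COMMAND_ABOVE"
        crossorigin="anonymous"></script>

The crossorigin="anonymous" attribute is needed when the script is served from a different origin, so that the browser is allowed to check the integrity of the cross-origin response.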

For more info:

https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity

  • This only helps if the first document (which references all these resources and uses SRI) is already verified. In other words, SRI provides derived trust but does not provide the initial trust. If the attacker manages to provide the wrong initial document, SRI will not help. – Steffen Ullrich Dec 09 '18 at 21:21
  • @SteffenUllrich If I understand your comment correctly: if the attacker manages to provide the wrong initial document, then it will have a different hash value and the browser will stop executing it, because it does not match the expected hash value. This will detect tampering in the code, and only verified code will run. Yes, it is derived trust, but it can be used to prevent man-in-the-middle modification, for example. – Hussain Mahfoodh Dec 10 '18 at 06:15
  • If the attacker can perform a man-in-the-middle attack or compromise the server, he can control both the initial document and the included parts. This means he can control both the hashes included for SRI and the scripts served, and make sure that the hash of the script and the hash given for SRI match even though the script is not the original one. So SRI is not sufficient, since it does not protect the initial document too. – Steffen Ullrich Dec 10 '18 at 06:22
  • @SteffenUllrich I understand your point. This can be mitigated through server-side validation, where the attacker has no access to tamper with the code. For sure, extra effort is needed on the developer's side. – Hussain Mahfoodh Dec 10 '18 at 06:35
  • How would server-side validation help against man-in-the-middle attacks? How would it help against a compromised server? If one could trust both the server (not compromised) and the transport (no MITM), then the problem would already be solved, even without SRI. The main point is that the client cannot trust what it receives. SRI only solves the problem when the client already has some initial trust, i.e. it is typically used for including scripts from an untrusted third-party source within a trusted main source. – Steffen Ullrich Dec 10 '18 at 07:14
  • @SteffenUllrich Yes, you are correct. My answer was based on a non-compromised server. For sure, a defence-in-depth mechanism is needed. To be honest, I can only think of digital signatures and PKI to support your comment, which is a valid point. – Hussain Mahfoodh Dec 10 '18 at 08:38

Short answer: you can't.

I presume you are specifically talking about applications delivered via a browser.

The SSL certificate on your server proves the code came from there. And as Hussain suggests, using Subresource Integrity both reinforces the validity of the origin and allows for caching and CDN issues, but there is nothing to prevent upstream tampering with the code.

Microsoft's signed ActiveX looks like it was designed to address the integrity of the full delivery chain, but we know how effective that has proved to be. Most browsers still support running signed Java code, but it is now 2018.

From my very basic knowledge of programming and cryptography,

What you propose is a credible approach (although see my previous comments about Java and ActiveX). But currently, in order to implement it for on-demand applications, you would need to make changes to the way both HTML and HTTP work. Either that, or develop your own protocol and client.

I suppose it would be possible using a custom header and a browser extension, but you still need to think about how a client would know when it is connecting to a service implementing this kind of elevated security, compared with the way it needs to treat the rest of the internet.
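
As a purely hypothetical sketch of the extension side (the pinned manifest, URL and function names below are made up for illustration): a content script could re-fetch each external script, hash it with the Web Crypto API, and compare the result against a manifest shipped with the extension. This only detects tampering after the fact and does not solve where the initial trust comes from, which is exactly the problem discussed in the comments above.

// Hypothetical content script: audit external scripts against pinned hashes.
// The manifest below would have to be distributed and trusted out of band.
const PINNED_HASHES = {
  "https://example.com/app.js": "sha256-PLACEHOLDER_HEX_DIGEST"
};

async function sha256Hex(buffer) {
  const digest = await crypto.subtle.digest("SHA-256", buffer);
  return [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}

async function auditScripts() {
  for (const script of document.querySelectorAll("script[src]")) {
    const url = script.src;
    if (!(url in PINNED_HASHES)) continue;            // only audit pinned scripts
    const body = await fetch(url).then(r => r.arrayBuffer());
    const hash = "sha256-" + await sha256Hex(body);
    if (hash !== PINNED_HASHES[url]) {
      console.warn("Pinned hash mismatch for", url);  // flag it, but the script has already run
    }
  }
}

auditScripts();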

Using an HTML5 app certainly changes the landscape a lot and reduces the attack surface, but still does not solve the problem.

(BTW, if your JavaScript is not being compiled then you are doing it wrong.)

symcbean