7

I came across this issue when we implemented a new security solution. Said solution has its own root CA certificate and creates certificates for HTTPS web pages "on the fly". Each HTTPS page you visit now has an "instant" certificate that is issued by the security solution's CA instead of the site's original one.

What you are now effectively doing is communicating with the security solution, which acts as a proxy: it "breaks open" the TLS traffic, inspects it, and then (re-)establishes an encrypted connection to the target web server.
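
For illustration, here is a rough Python sketch of how this looks from the client side: it prints who issued the certificate a server actually presents (assuming, as in our environment, that the proxy's CA is already in the system trust store; `example.com` is just a placeholder host). Behind the intercepting proxy, the issuer is the security solution's CA rather than a public CA:

```python
# Minimal sketch: show which CA issued the certificate the server presents.
# Behind an intercepting proxy this is the proxy's CA, not a public one.
# Assumes the proxy's CA is already trusted by the system store.
import socket
import ssl

def show_issuer(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed dict of the validated leaf cert
            issuer = dict(pair[0] for pair in cert["issuer"])
            print(f"{host}: issued by {issuer.get('organizationName')} "
                  f"/ {issuer.get('commonName')}")

show_issuer("example.com")  # placeholder host
```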

Now, what if it wasn't a security solution, but a malicious actor who did this? This would be a very easy and convenient way to perform man-in-the-middle attacks. Is the installation of a CA certificate in the browser really the only thing between secure TLS connections and a MITM nightmare?

We have several (failed?) mechanisms which might prevent this, such as DANE or DNS CAA, but as it seems, none of those actually is being used by modern browsers.

Is there a way (in 2022) to prevent someone from simply creating a TLS certificate and posing as another party, the way DANE or DNS CAA were supposed to? I'm talking about actually preventing a client from connecting to a server that serves the wrong certificate, not just monitoring issuance like Certificate Transparency does.
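
For reference, this is roughly what querying those records looks like, as a sketch using the third-party dnspython package (`example.com` is a placeholder). CAA constrains which CAs may *issue* for a domain, and a TLSA record is what a DANE-aware client would compare the served certificate against -- but browsers perform neither check:

```python
# Sketch: look up the CAA and DANE/TLSA records for a domain using the
# third-party "dnspython" package. CAA constrains *issuance* by CAs;
# TLSA is what a DANE-aware client would check the served cert against.
import dns.resolver

def show_caa_and_tlsa(domain: str) -> None:
    try:
        for rr in dns.resolver.resolve(domain, "CAA"):
            print(f"CAA  {domain}: {rr.to_text()}")
    except dns.resolver.NoAnswer:
        print(f"CAA  {domain}: no record")

    tlsa_name = f"_443._tcp.{domain}"  # TLSA name for HTTPS on port 443
    try:
        for rr in dns.resolver.resolve(tlsa_name, "TLSA"):
            print(f"TLSA {tlsa_name}: {rr.to_text()}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"TLSA {tlsa_name}: no record")

show_caa_and_tlsa("example.com")  # placeholder domain
```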

Toby Speight
  • 1,226
  • 9
  • 17
Bebef
  • 79
  • 1
  • 2
  • WRT, 'Each https page you visit now has an "instant" certificate that is issued by the security solution's CA' - Is the security solution's CA's certificate installed in the users' browsers? – mti2935 Jan 05 '22 at 17:14
  • _"We have several (failed?) mechanisms which might prevent this, such as DANE or DNS CAA, but as it seems, none of those actually is being used by modern browsers."_ - CAA is not _supposed_ to be checked by browsers, it's [explicitly a mechanisms for CAs](https://security.stackexchange.com/a/180905/99775). I'm unsure where DANE falls wrt end-point checking. – marcelm Jan 06 '22 at 12:38
  • 10
    The "security solution" you're describing *is* a MITM. – Joseph Sible-Reinstate Monica Jan 06 '22 at 17:14

2 Answers

16

What you describe is the normal way corporate firewalls or antivirus products inspect HTTPS traffic. By default, browsers will block access to sites presenting such proxy-issued certificates, and users are not supposed to click through the warnings. Instead, the CA of the proxy needs to be imported as trusted into the browser/system, which is usually done automatically in corporate environments or when installing a local antivirus product.
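
As a small illustration of that trust decision from the client side, here is a sketch (with `corporate-proxy-ca.pem` and `intranet.example` as hypothetical placeholders for the proxy's root certificate and some intercepted site):

```python
# Sketch: whether an intercepted connection verifies depends only on which
# CA bundle the client trusts. "corporate-proxy-ca.pem" and
# "intranet.example" are hypothetical placeholders.
import socket
import ssl

def verifies(host: str, cafile: str = None) -> bool:
    # cafile=None -> the default system trust store;
    # otherwise only the CAs in that file are trusted.
    context = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.SSLCertVerificationError:
        return False

# Behind the intercepting proxy, the first call fails unless the proxy CA was
# added to the system store; the second succeeds because we trust it explicitly.
print(verifies("intranet.example"))
print(verifies("intranet.example", cafile="corporate-proxy-ca.pem"))
```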

Is there a way (in 2022) to prevent someone from just creating a TLS certificate and posing as another party ...

There is no way to prevent others from creating arbitrary certificates signed by a CA that most users do not trust. But such certificates will not be accepted by sane clients, so there is no actual risk here to address.

SSL interception by an explicitly trusted party instead serves an accepted purpose. While it breaks end-to-end encryption and thus reduces security in that respect, it provides an actual security benefit by inspecting content to protect against malware and similar threats. And it does not allow an arbitrary attacker to become a man in the middle, since such an attacker has no access to the private key of the trusted CA.

Steffen Ullrich
  • 190,458
  • 29
  • 381
  • 434
  • 11
    One potential **gotcha** I saw some years ago on a system using *Blue Coat* to inspect HTTPS traffic was that it implicitly validated self-signed certificates. From within a corporate environment running *Blue Coat*, I browsed to a web site using a self-signed certificate and received no browser warning, as the browser saw only the validated *BC* certificate. It's pretty easy to see how this could be leveraged. Hopefully this has been corrected by now; I haven't re-checked. – user10216038 Jan 06 '22 at 00:12
  • 8
    @user10216038: There were several papers in recent years regarding the security impact of trusted SSL interception, which also highlighted several more or less broken implementations - see https://security.stackexchange.com/questions/160846/ and https://security.stackexchange.com/questions/158009/ for some links. While the situation might be better now due to such research, there are likely still broken implementations out there. – Steffen Ullrich Jan 06 '22 at 07:04
  • No kidding. The last one that I saw was still a really broken implementation. I think the only way these can be made safe is to change the protocol so that they can still present the original certificate (say, by including it inside the dynamic certificate). – Joshua Jan 06 '22 at 20:20
  • @Joshua - You can't, since if the real endpoint is contacted via the real cert the MITM has no way to read the information, which defeats the whole point. – Clockwork-Muse Jan 06 '22 at 23:05
  • @Clockwork-Muse: I wrote "change the protocol" for a reason. Legit MITMs should be trustworthy enough to not cheat here. – Joshua Jan 07 '22 at 03:30
  • @Joshua - Changing the protocol doesn't magically make the MITM implementation "safe". Fixing a "trust all self-signed" bug is trivial compared to whatever it is you imagine working. Keep in mind that the only thing that determines a "legitimate" MITM is whether the root cert is trusted by the client, and that deploying them _doesn't_ require a new protocol. Also, modern TLS was designed specifically to prevent interception by eavesdropping the handshake - MITM implementations sidestep that – Clockwork-Muse Jan 07 '22 at 04:01
  • @Clockwork-Muse: The point is, either you have "trust all self-signed" or "don't trust all self-signed" where neither one of them is correct. The MITM box doesn't know nor can it know which roots are trustworthy to what. I have actual use cases where traceable certificates _do not suffice by virtue of being traceable_. – Joshua Jan 07 '22 at 04:06
  • @Clockwork-Muse: *"Also, modern TLS was designed specifically to prevent interception by eavesdropping the handshake"* - TLS 1.3 primarily hardens against passive inspection in that a) certificates are encrypted and thus no longer visible when passively sniffing, which as a side-effect means that inspecting middleboxes either loose information or must switch to active MITM and b) RSA key exchange is no longer there, which as side-effect makes it impossible to passively inspect encrypted traffic in front of some server by simply sharing the servers certificate. Active MITM is mainly unchanged – Steffen Ullrich Jan 07 '22 at 06:54
  • 1
    @user10216038 Always test your new HTTPS sniffing tool by purposely loading websites with bad certificates, for example: https://badssl.com/ (a small sketch of such a check follows these comments) – Ferrybig Jan 07 '22 at 12:02
  • @SteffenUllrich As I mentioned below, the only difference between "Corporate Security" and a "real MITM" is an "official" SSL certificate. Once you have that, you can intercept traffic with no one really noticing (unless they explicitly check the certificate in the browser, but who does that?). – Bebef Jan 13 '22 at 07:18
  • @Bebef: I'm not sure what you mean with *official*. The point is not being *official*, but *trusted by the client*. It can be a private CA, which is usually the case in corporate environments. It can be a CA unique to the client device which is usually the case with SSL intercepting antivirus. – Steffen Ullrich Jan 13 '22 at 07:29
  • @SteffenUllrich What I mean is: a CA that already comes with your device. Nothing extra installed. When you get a certificate signed by a CA that is already on the device (factory settings), all you need is to intercept the IP traffic and you can break open SSL, which would be a MITM attack. – Bebef Jan 13 '22 at 10:19
  • @Bebef: Certificates for inspection by corporate proxies and antivirus are extra installed on top of what the OS provides. Thus they might already come with the device you got if pre-installed by the administrators (usually in an automated way), or they might get added later, for example when installing some antivirus. In this case they don't come with the device. But none of these are part of what one usually considers factory settings, i.e. coming from the vendor, not from the administrators. – Steffen Ullrich Jan 13 '22 at 10:23
  • @SteffenUllrich Yes, exactly. This is why I wanted to point out a scenario where you could successfully attack a PRIVATE device (say, an iPhone right out of the box) with a MITM attack, and all you need to break the SSL connection is a valid certificate for the destination. – Bebef Jan 13 '22 at 11:27
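
Picking up the badssl.com suggestion from the comments above, here is a minimal sketch of such a check. Run from behind a TLS-inspecting proxy, every one of these deliberately broken sites should still fail verification; if one is accepted, the proxy is hiding bad upstream certificates:

```python
# Sketch: probe a few badssl.com endpoints that serve deliberately broken
# certificates. Run from behind a TLS-inspecting proxy, every one of these
# should still FAIL verification; if one is accepted, the proxy is hiding
# bad upstream certificates (the Blue Coat gotcha described above).
import socket
import ssl

BAD_HOSTS = [
    "expired.badssl.com",
    "self-signed.badssl.com",
    "untrusted-root.badssl.com",
    "wrong.host.badssl.com",
]

context = ssl.create_default_context()
for host in BAD_HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                print(f"{host}: ACCEPTED -- the proxy is hiding a bad certificate!")
    except ssl.SSLCertVerificationError as err:
        print(f"{host}: rejected as expected ({err})")
```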
13

Is the installation of a CA certificate in the browser really the only thing between secure TLS connections and a MITM nightmare?

Installing a CA certificate in the browser (or the OS-level trust store that the browser uses) is ultimate trust. It means "if the bearer of this cert attests that I'm talking to foo.com, then I'm talking to foo.com". Yes, that puts the holder of that certificate in the position to MITM everything — but only for devices that place trust in that cert. In order to MITM someone this way you either have to control their device sufficiently to install your cert in its trust store, or else subvert one of the big-name CAs that everyone trusts by default.

[Corollary: you definitely don't have any privacy on a work-issued laptop. Don't use it to read your personal mail, check your bank balance, or anything.]
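
One application-level way to opt out of that trust model is certificate pinning: the application records a fingerprint of the expected certificate out of band and refuses the connection when the presented certificate doesn't match, regardless of which CAs the OS trusts. A rough sketch, where `EXPECTED_CERT_SHA256` is a hypothetical pin you would have recorded for your own service:

```python
# Sketch: certificate pinning in an application. The connection is refused
# when the server's leaf certificate doesn't match a fingerprint recorded
# out of band, regardless of which CAs the OS or browser trusts.
# EXPECTED_CERT_SHA256 is a hypothetical placeholder value.
import hashlib
import socket
import ssl

EXPECTED_CERT_SHA256 = "0" * 64  # replace with the real fingerprint of your service

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    context = ssl.create_default_context()
    sock = socket.create_connection((host, port), timeout=5)
    try:
        tls = context.wrap_socket(sock, server_hostname=host)
        der_cert = tls.getpeercert(binary_form=True)   # leaf cert, DER-encoded
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != EXPECTED_CERT_SHA256:
            raise ssl.SSLError(f"certificate pin mismatch for {host}: got {fingerprint}")
        return tls
    except Exception:
        sock.close()
        raise
```

Pinning only works for services you control or clients that ship their own pins, so it doesn't generalize to the whole web, but it is how a dedicated client can refuse even "trusted" interception.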

hobbs
  • 640
  • 4
  • 8
  • A key thing that IMO would improve this answer is that trust in a certificate is established via some kind of side channel. We trust CA certificates because they come pre-installed on the CD stamped with the holographic Windows logo. Trust in a self-signed certificate is established the same way. Ideally this means distributing it via a physical side channel. If you have to transmit it via an insecure channel, you could verify its hash via a side channel, like verifying the hash over the phone before a MITM attacker would have time to create their own key with a hash collision (a fingerprint sketch follows these comments). – Tech Inquisitor Jan 07 '22 at 19:07
  • Well, I just used the work laptop as an example, as this is where I first observed this behaviour. For a successful MITM attack, all you need is a valid certificate for the target site. This is a thing that [has happened in the past](https://blog.mozilla.org/security/2011/08/29/fraudulent-google-com-certificate/) and may happen again. The Signal desktop client seems to do it right and refuses to connect. I'm still puzzled how browsers will accept "wrong" certificates "just like that". Or would a browser actually check the CAA record if the root wasn't installed in the browser? – Bebef Jan 13 '22 at 07:07
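
To make the side-channel idea from the comments concrete, a tiny sketch that prints the fingerprint of a received CA certificate so it can be compared out of band (e.g. over the phone) before importing it; `received-root-ca.pem` is a hypothetical file name:

```python
# Sketch: fingerprint a CA certificate received over an insecure channel so
# it can be verified out of band (e.g. read aloud over the phone) before it
# is imported into a trust store. "received-root-ca.pem" is hypothetical.
import hashlib
import ssl

with open("received-root-ca.pem") as f:
    der = ssl.PEM_cert_to_DER_cert(f.read())

print("SHA-256 fingerprint:", hashlib.sha256(der).hexdigest())
```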