22

I just finished reading through this paper by Georgiev et al., which demonstrates a wide range of serious security flaws in SSL certificate validation in various non-browser software, libraries, and middleware, including EC2's Java library, PayPal's merchant library, osCommerce, ZenCart, etc.

The abstract is pretty impressive:

> We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware—including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android—and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack.

From what I can tell, if the faults and proposed attacks are realistic, the consequences are far-reaching and rather disconcerting.

I don't pretend to fully understand the theory behind the attacks they describe, which is why I'm asking this question: what's the real potential impact here? Is there a real, serious risk, or is all of this an exercise in FUD?

Polynomial

4 Answers

21

All the attacks listed in the article are real and serious, and they all match the same generic pattern: the server's certificate is not properly validated, i.e., not verified to belong to the intended server as specified in the relevant standards.

Is it a flaw in SSL? No. The protocol itself is fine. If the correct, genuine server public key is known, then the SSL tunnel mechanism fulfills the security properties that are expected of it (provided that the implementation is correct and a not-too-old version is used, which means TLS 1.1 or later -- see this answer).

Is it a flaw in X.509? Yes and no. X.509 is known to be complex. In fact, the whole notion of certification is complex, and X.509 tries to tackle it head-on. Through a long history of dubious technological choices (let me just say "UTCTime" and "TeletexString" -- if you do not know what these mean, then, trust me, you are happier that way), X.509 arguably makes the complexity even higher. However, there is something inherently hard in doing PKI properly. This is not new information; have a look at a previous version of the standard from early 1999, more than 13 years ago: it already showed all the tricky elements (in particular revocation checks).

Is it a flaw in SSL libraries? To some extent, yes. The article's authors point out that the libraries are under-documented, the defaults are vulnerable, and the APIs atrocious. They are entirely right. It is hard to use these libraries properly unless you already have a fairly precise notion of how SSL works.
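As a concrete illustration, here is a minimal Java sketch of the trust-all anti-pattern that the paper reports finding throughout vulnerable code (the class and method names here are illustrative, not taken from any specific SDK):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.cert.X509Certificate;

public class TrustAllAntiPattern {

    // ANTI-PATTERN -- do not use. This TrustManager accepts any
    // certificate chain from any server, so an active man-in-the-middle
    // with a self-signed certificate is accepted just like the real server.
    public static SSLContext insecureContext() throws Exception {
        TrustManager[] trustEverything = { new X509TrustManager() {
            @Override
            public void checkClientTrusted(X509Certificate[] chain, String authType) {
                // empty: no validation whatsoever
            }
            @Override
            public void checkServerTrusted(X509Certificate[] chain, String authType) {
                // empty: any server certificate is "trusted"
            }
            @Override
            public X509Certificate[] getAcceptedIssuers() {
                return new X509Certificate[0];
            }
        }};
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustEverything, null);
        return ctx;
    }
}
```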

Is it a flaw in the common development model? Definitely. We often say: "Do not invent or implement your own crypto! Use existing libraries." The intent is good, but nowhere near sufficient. Security architecture design is hard. It takes a lot of specialized knowledge to make a secure application, and it begins at the structural level. Unfortunately, security-aware developers are a very small minority. For the vast majority of developers, secure application design is an alien concept. Notably, secure design works backwards with regard to software development as it is usually done: most developers concentrate on making normal things happen under normal conditions, while security focuses on preventing bad things from happening under abnormal conditions (maliciously abnormal, not just statistical bad luck).

Requiring that architecture design be reserved to security specialists is not practical (though it would guarantee me against unemployment until my death). As far as I know, there is no known solution for this -- it is akin to producing bug-free software, with the added problem that security involves an adversary who actively tries to trigger the bugs. Some people argue for making developers legally responsible for security holes. Indeed, if "hidden flaws" in the software industry were treated the same way they are in the automobile industry, then there would be far fewer bugs -- and much less software, too.

Thomas Pornin
7

The impact is that man-in-the-middle attacks are possible on such systems. For answers to the question "Is MitM a real serious risk?" see:

Without SSL, what vantage point does one need to MITM non-SSL'd HTTP?

and

Are "man in the middle" attacks extremely rare?

TL;DR: Yes, this is a "real serious risk".

David Wachtfogel
5

A typical attack could be set up using a fake open WiFi service. I'd say it's quite realistic to expect people to try to connect to any open WiFi network they can find when sitting in a pub, or in any similar situation.

Especially on mobile devices, warning messages seem to be even more obscure and easier to ignore than on desktop software (and in a pub, a bit of alcohol in the user's bloodstream may encourage warnings to be ignored even more readily). It would also be quite realistic to assume that some apps poll their server regularly, even when the device is in the user's pocket: badly programmed applications could very well leak credentials or other information this way.

A recent BBC News article (Android apps 'leak' personal details) points to this paper: Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security.

Blaming Android this way sounds a bit alarmist, but unfortunately it does reflect the state of a number of applications. Android's situation probably stems from its Java legacy, where host name verification isn't done by default and you need to write your own code for it if you're using an SSLSocket directly.
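A minimal sketch of that default behaviour (the host name is just a placeholder): with a raw `SSLSocket`, the handshake validates the certificate chain against the trust store, but nothing checks that the certificate was actually issued for the host you asked for.

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class NoHostnameCheck {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            // Verifies the certificate CHAIN against the default trust store,
            // but performs no host name verification: a MITM presenting a
            // valid certificate for a different name would still be accepted.
            socket.startHandshake();
        }
    }
}
```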

The situation may be a little better in the iOS family, because it seems that Apple rejects apps that disable certificate verification (I don't have any personal experience with this). I'm not sure whether anonymous cipher suites are allowed, though, which would be inconsistent with such a rejection policy.

I think a sizeable number of developers, though certainly not all, consider certificates to be complicated. Admittedly, certificates are a bit of a pain to deal with, but if you manage their use properly, it's not much harder to set up a test CA for development than it is to set up other forms of dummy data for unit tests (let's assume you don't need CRL/OCSP in a test environment, although ideally you'd have that too).
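For instance, a sketch of that approach, assuming a trust store file that contains only your test CA's certificate (the path and password parameters are placeholders): build an `SSLContext` that trusts the test CA, rather than disabling validation in test code.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TestCaTrust {

    // Builds an SSLContext that trusts only the certificates in a dedicated
    // test trust store (e.g. one containing the test CA), so test code gets
    // real certificate validation instead of none at all.
    public static SSLContext forTestCa(String trustStorePath, char[] password)
            throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf = TrustManagerFactory
                .getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}
```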

Whether you'd want to blame the library or the developers is arguable.

Not all libraries are equal, and not all target the same categories of users/developers.

One of the big reasons why SSL/TLS stacks don't verify the host name by default is that the host name verification mechanism depends on the application protocol used on top. One of the most common is certainly HTTPS (RFC 2818, Section 3.1). This has often been used as a reason not to implement host name verification at the stack level. Things are changing, though: RFC 6125 attempts to unify the host name verification method across all protocols. It's not widely implemented yet, but it's not very different from the HTTPS behaviour anyway. In addition, Java 7 now has a way to verify the host name via specific SSLParameters (relying on an X509ExtendedTrustManager), which makes it more convenient to switch on host name verification even when using an SSLSocket directly (without the extra layer of HttpsURLConnection).
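Here is what that looks like in practice, as a sketch (again, the host name is a placeholder); compare it with the unverified `SSLSocket` example earlier:

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HostnameCheckedSocket {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            SSLParameters params = socket.getSSLParameters();
            // Java 7+: tells the default (X509Extended)TrustManager to also
            // match the certificate against the host name, following the
            // HTTPS rules of RFC 2818.
            params.setEndpointIdentificationAlgorithm("HTTPS");
            socket.setSSLParameters(params);
            // Now fails if the certificate doesn't match "example.com".
            socket.startHandshake();
        }
    }
}
```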

The same reasoning shouldn't apply to higher-level libraries, such as those that provide HTTPS access. Unfortunately, some users will want to disable certificate verification one way or another anyway. (It's also unfortunate that some libraries don't seem to want to fix the problem.)

In my experience on Stack Overflow, a number of questions are about ignoring certificate errors (here is one from today, again). Answers that suggest disabling trust management altogether get accepted and are sometimes rather highly upvoted. (Sometimes you even get downvotes for suggesting the right thing to do, although, admittedly, that doesn't quite answer the question as asked.)

I'm not sure whether a better default library behaviour would help in this case, since those who ask just want to ignore the errors. Considering the Java example, most people complain about the exception they get when the remote host isn't trusted. There are far fewer questions about how to verify the host name (which isn't done by default when you use SSLSockets directly).

I'm not sure how representative the SO community is, but some people just want a quick fix, at least to get going with the rest of the application. My guess is that even when the issue makes it onto the "fix-before-release" list, in some cases it never actually gets fixed, because the failure is then silent (and the "uses SSL/TLS" box is still ticked).

Certainly, the pressure for early releases, combined with a seeming lack of understanding of certificates, contributes to this sort of problem.

Not all players in the field are helpful, either (e.g. some major Certification Authorities). If you're a developer who doesn't know much about certificates and you try to learn from the documentation on a CA's site, it's often difficult. (The classic example is that most CAs imply they provide certificates that perform "256-bit encryption", which is misleading at best. Others have FAQs that are full of mistakes and/or misleading information. Others let you buy "an SSL" (see bottom of page).)

Bruno
  • The paper talks about *Hostname Verification*. Can't we simply bypass a correctly implemented Hostname Verification by spoofing the Host Name of a legitimate server? – Rahil Arora Mar 23 '14 at 00:09
  • @RahilArora, well, that's the point of hostname verification: to make sure the host hasn't been spoofed. If you assume that the certificate was only issued to the legitimate host name (as should be ensured by the CA), then someone who spoofs the host name won't be able to present you with the right cert. Hence, the client needs to verify that the host name in the cert matches the name requested. – Bruno Mar 23 '14 at 00:12
  • The paper assumes that the MITM possesses a valid cert of a legitimate server (but NOT the private key). The attacker can mislead the client into connecting to a malicious server instead. However, the client should refuse to accept the cert from the malicious server because of the mismatch between the name on the cert and the domain to which the client is connecting. But what if we spoof the domain itself? Will the client still refuse to accept the certificate? How does the client retrieve the legitimate server's domain? Is it even possible to spoof it? – Rahil Arora Mar 23 '14 at 00:23
  • 2
    @RahilArora, hostname verification is about checking that you're talking to the server you intended, and not an imposter, even if the imposter presents a legitimate certificate issued to its own name (with its own private key). Let's say `goodguy.com` (at 10.0.0.1) and `badguy.com` (at 10.0.0.2) both have a valid cert, and the client wants to connect to `goodguy.com`, but is redirected by a MITM attack that spoofs the name and points `goodguy.com` to 10.0.0.2. The attacker will show a valid cert, but for `badguy.com`. This cert will not fail the PKI verification, but it will fail the name verification. – Bruno Mar 23 '14 at 00:28
  • Okay. Got it now. I was considering the scenario where `badguy.com` presents the cert of `goodguy.com` to the client after spoofing the hostname. But since `badguy.com` does not have the private key which was used to sign the cert for `goodguy.com`, it will not be able to decrypt handshake messages from the client (which will be encrypted using `goodguy.com`'s public key). Please correct me if I am still missing something. – Rahil Arora Mar 23 '14 at 00:39
  • @RahilArora, "*But since badguy.com does not have the private key which was used to sign the cert for goodguy.com*". Not sure you meant it the way you wrote it here. The private key used to sign the cert for `goodguy.com` is that of the CA. If `badguy.com` has the private key of the CA, it can re-issue any cert it wants (a very bad scenario). Assuming that's not the case, the private key `badguy.com` doesn't have is the private key that matches the public key in `goodguy.com`'s cert (again, this isn't the key with which the cert was signed). Indeed, this will prevent the handshake to work. – Bruno Mar 23 '14 at 00:43
  • Oops. My bad. I actually meant the private key that matches the public key in `goodguy.com`'s cert. :) – Rahil Arora Mar 23 '14 at 00:53
  • I've just started to learn about practical crypto systems and, despite a lot of effort, I'm finding them very confusing (but fascinating at the same time). Hopefully this confusion will go away as I learn more about them. Thanks a lot for your help. :) – Rahil Arora Mar 23 '14 at 00:54
4

If you do not authenticate the server in some way, any active attacker can impersonate it. This means that SSL without authentication (such as certificate validation) is totally broken against active attackers.

When using SSL with a server certificate, the question is whether an attacker can obtain a certificate that the client accepts for the server the attacker wants to impersonate. If he can, you lose. These attacks are practical and relatively easy to execute. Validating server certificates is essential for security.
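By way of contrast, a correctly behaving client needs no special code at all; here is a sketch with Java's `HttpsURLConnection` (the URL is a placeholder), which checks both the certificate chain and the host name by default:

```java
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class DefaultValidation {
    public static void main(String[] args) throws Exception {
        // HttpsURLConnection validates the certificate chain AND the host
        // name by default; an active MITM without a matching, CA-issued
        // certificate makes this fail with an SSLHandshakeException
        // instead of silently proceeding.
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/").openConnection();
        System.out.println("Response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```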

CodesInChaos