
I was reading this article about a new attack against TLS called Lucky Thirteen. It claims to allow repeatable MitM attacks against HTTPS connections.

It's described as being fairly impractical to actually carry out:

  • Each tweak attempt causes a TLS session to terminate, which may be both noticeable and time-consuming.
  • Each tweaked session needs to have the same plaintext at the same packet location for the tweaking to be done exhaustively.
  • The authors needed 2^7 repetitions of an exhaustive set of 2^16 tweaks (that's over eight million dud TLS sessions!) to produce enough statistically-significant timing data for a reliable result.

But other voices seem to think it may become a practical, repeatable attack against HTTPS.

I was wondering:

  • How difficult would it be to take this out of a lab environment onto a known LAN, or from a known LAN to a WAN?
  • How difficult and how necessary would it be to defend against this attack given that I don't necessarily have control over the software? Are there methods of randomly padding and delaying HTTPS responses to avoid timing attacks?
  • How does this interact with current attacks against SSL/TLS such as BEAST? Does it provide an amplifying effect?

And most importantly:

  • How important is this result when considering future implementations?
Bob Watson

3 Answers


I point you to this blog post by Matthew Green, which gives a good description of the attack. In particular, this quote:

But there's no way this will work on TLS! It'll kill the session! Please recall that I described this as a practical attack on Datagram TLS (DTLS) -- and as a more theoretical one on TLS itself.* There's a reason for this.

The reason is that TLS (and not DTLS) includes one more countermeasure I haven't mentioned yet: anytime a record fails to decrypt (due to a bad MAC or padding error), the TLS server kills the session. DTLS does not do this, which makes this attack borderline practical. (Though it still takes millions of packet queries to execute.)

The standard TLS 'session kill' feature would appear to stop padding oracle attacks, since they require the attacker to make many, many decryption attempts. Killing the session limits the attacker to one decryption -- and intuitively that would seem to be the end of it.

But actually, this turns out not to be true.

You see, one of the neat things about padding oracle attacks is that they can work across different sessions (keys), provided that (a) your victim is willing to re-initiate the session after it drops, and (b) the secret plaintext appears in the same position in each stream. Fortunately the design of browsers and HTTPS lets us satisfy both of these requirements.

To make a target browser initiate many connections, you can feed it some custom Javascript that causes it to repeatedly connect to an SSL server (as in the CRIME attack). Note that the Javascript doesn't need to come from the target webserver -- it can even be served on an unrelated non-HTTPS page, possibly running in a different tab. So in short: this is pretty feasible.

Moreover, thanks to the design of the HTTP(S) protocol, each of these connections will include cookies at a known location in the HTTP stream. While you may not be able to decrypt the rest of the stream, these cookie values are generally all you need to break into somebody's account.

Thus the only practical limitation on such a cookie attack is the time it takes for the server to re-initiate all of these connections. TLS handshakes aren't fast, and this attack can take tens of thousands (or millions!) of connections per byte. So in practice the TLS attack would probably take days. In other words: don't panic.
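The "cookies at a known location" point can be made concrete with a small sketch. This is purely illustrative (the host name, paths, and cookie name are invented): because a browser formats each HTTPS request the same way, a session cookie sits at the same byte offset in every connection's plaintext.

```python
# Illustrative only: hypothetical request builder showing that a browser-style
# HTTP request places the Cookie header at a predictable byte offset.
def request(path: str, cookie: str) -> bytes:
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: example.com\r\n"
            f"Cookie: session={cookie}\r\n"
            f"\r\n").encode()

# Two separate sessions; with same-length paths the secret cookie value
# lands at exactly the same offset in each plaintext stream.
r1 = request("/a", "SECRET01")
r2 = request("/b", "SECRET02")
off1 = r1.index(b"session=")
off2 = r2.index(b"session=")
```

This fixed offset is what lets the attacker aim the padding-oracle machinery at the same plaintext bytes across millions of fresh sessions.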

On the other hand, don't get complacent either. The authors propose some clever optimizations that could take the TLS attack into the realm of the feasible (for TLS) in the near future.

You ask:

How difficult would it be to take this out of a lab environment onto a known LAN, or from a known LAN to a WAN?

As the attack involves measuring very small timing differences, it might be slightly difficult to exploit outside a lab environment.

How difficult and how necessary would it be to defend against this attack given that I don't necessarily have control over the software? Are there methods of randomly padding and delaying HTTPS responses to avoid timing attacks?

There appears to be little an end-user can do to defend against this attack. System administrators can help mitigate this by updating their SSL implementations, switching to the RC4 ciphersuite (as a temporary measure) and using AEAD ciphersuites like AES-GCM.

How does this interact with current attacks against SSL/TLS such as BEAST? Does it provide an amplifying effect?

As the team behind the attack suggested, it can be enhanced by combining it with BEAST-style techniques.

How important is this result when considering future implementations?

In terms of theory, the attack is really nothing new; it is common knowledge that you should always encrypt first and only then apply the MAC.
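The encrypt-then-MAC construction can be sketched with Python's standard library. The stream cipher below is a toy stand-in for illustration only (a real implementation would use an established cipher); the point is the ordering: the MAC covers the ciphertext and is verified, in constant time, before any decryption or padding handling occurs, so there is no padding oracle to time.

```python
import hashlib
import hmac
import os

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher for illustration only: SHA-256 in counter mode.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ k for x, k in zip(data, out))

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: the tag authenticates the nonce and ciphertext.
    nonce = os.urandom(16)
    ct = _keystream_xor(enc_key, nonce, plaintext)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    # Constant-time comparison; reject before touching the plaintext at all.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return _keystream_xor(enc_key, nonce, ct)
```

Because a forged or tampered record is rejected on the MAC check alone, decryption (and any padding logic) never runs on attacker-controlled ciphertext, which removes the timing side channel that MAC-then-encrypt exposes.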


When the attack was first described in 2003 (the "bad padding" oracle through timing analysis), the intended attack scenario was an email client (e.g. Outlook Express) which connected regularly (e.g. once per minute) to the server to check whether new mail had arrived (the POP protocol cannot notify you of new mail; the client has to poll). Each connection would begin with the same authentication phase, so the target password sat at a deterministic, reproducible location in the stream; and Outlook Express is notoriously bad at error reporting (it is either silent, or it just updates a long-standing error popup which the user has been trained to ignore). This made it a good setup for the attack.

An important point to make is that such attacks must occur near the decryption point, where the "interesting data" (the password) is about to be decrypted. In the mail server scenario, this is near the server, not the client.

The "Lucky Thirteen" paper adds two new data points:

  1. It points out that the common defense against timing attacks (namely, when the padding is wrong, act as if it was good and compute the MAC nonetheless) can leak a bit (because the "assumed padding" does not have the exact length of the "good padding"). Where the initial attack of 2003 used delays of about 1 millisecond, the new leak is about one thousand times shorter, about 1 microsecond.

  2. It demonstrates that in lab conditions (100 Mbit/s Ethernet with only one switch between target and attacker, and millions of measurements) timing differences down to about 1 microsecond can be resolved.
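The second point rests on a statistical fact: averaging many measurements shrinks random jitter roughly as 1/sqrt(n), so a microsecond-scale signal can be recovered from jitter two orders of magnitude larger. The toy simulation below illustrates this; all numbers are illustrative, not taken from the paper.

```python
import random
import statistics

random.seed(1)  # reproducible toy run

def observe(leak_us: float, n: int, jitter_us: float = 100.0) -> list:
    # Simulated response times: a fixed timing leak buried in Gaussian
    # network jitter a hundred times larger.
    return [leak_us + random.gauss(0.0, jitter_us) for _ in range(n)]

# One code path takes 1 microsecond longer than the other.
fast = observe(0.0, 1_000_000)
slow = observe(1.0, 1_000_000)

# With a million samples per side, the standard error of each mean is about
# 100 / sqrt(1e6) = 0.1 us, so the 1 us difference stands out clearly.
diff_us = statistics.fmean(slow) - statistics.fmean(fast)
print(f"estimated leak: {diff_us:.2f} us")
```

This is also why adding random noise alone is a weak countermeasure: it raises the number of samples the attacker needs, but does not stop the averaging from eventually isolating the signal.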

The first point is of course interesting. I would claim, however, that the second point has this fundamental flaw: if the attacker can get this close to the target, then he can win in many other ways. Indeed, timing attacks are about extracting information from a closed system through time-based data leaks. We cryptographers tend to concentrate on the cryptographic layer, because that's our job, and cryptography is all about concentration of secrets: the "key" is the essence of secrecy, and a very valuable target. However, the whole point of encryption is to protect sensitive data and any processing of confidential data can leak some of it through timing.

In a complete data processing stack, the SSL/TLS layer sits between the low-level TCP/IP stack and the "application" which uses the confidential data in various ways. Since decryption occurs in TLS, the TCP/IP layer sees only encrypted chunks and thus has nothing to leak. However, it would be overoptimistic, verging on the preposterous, to believe that leaks may occur only in the TLS layer. The complete application code is potentially just as vulnerable to timing attacks. While an attack on TLS itself is more newsworthy, I claim that attacks on the application code are much more likely to be devastating.
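A concrete example of such an application-layer leak (illustrative, not from the answer above): comparing a secret token with an early-exit loop leaks, through timing, how long a prefix of the attacker's guess is correct. This is exactly why constant-time comparison helpers exist in standard libraries.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: the loop stops at the first mismatching byte,
    # so the running time reveals the length of the matching prefix.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    # Constant-time alternative from the stdlib: running time does not
    # depend on where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)
```

An attacker who can submit guesses and time the responses can recover a token byte by byte against `leaky_equal`, while `safe_equal` gives no such foothold.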

To sum up, the "lucky thirteen" attack is interesting but not very realistic. With regards to timing attacks, the TLS layer is only the tip of the iceberg. To abuse the metaphor a bit further, worrying about the "lucky thirteen" is a bit like worrying about corrosion on board the Titanic: a valid concern in an abstract sort of way, but not as pressing as other issues related to boating.

Thomas Pornin
  • "any processing of confidential data can leak some of it through timing" Assuming this was really meant to be an absolute statement, curious - are timing attacks not eliminated by having constant processing time? – levant pied Oct 07 '22 at 13:28

As far as defense, under Linux one could do something like this:

tc qdisc add dev eth0 root netem delay 3ms 1ms

to add a random delay to a network interface as a basic defense against timing attacks, as mentioned in Kaminsky's 2012 Black Ops presentation.

Of course there is a performance trade-off there, but a few milliseconds are trivial in many situations.

Cory J
    One limitation of this kind of defense (add some random noise) is that an attacker who can gather enough samples can average many observations. This makes the noise disappear, leaving just the signal. In other words, adding random delays doesn't make the attack impossible, it just makes it take longer. So it's not a complete solution. Then again, all of the other existing defenses are band-aids as well, as described in Matt Green's blog post. – D.W. Feb 19 '13 at 06:17
  • A simple and working way to stop all timing attacks is to record the start time and, on error, always delay the response until a fixed deadline T is reached, where T is a constant much bigger than the longest processing time. Thus all errors result in a response after the same constant delay T. – bebbo Sep 16 '14 at 15:58