
In XKCD #936, a rate of 1000 password guesses/second is proposed as a "plausible attack on a weak remote web service". Under what assumptions is this rate plausible? It seems much too high to me. I would expect TCP handshakes, or waiting on server responses, to be the dominant factor, but I don't know enough about networking to justify these expectations.

My question is related to "XKCD #936: Short complex password, or long dictionary passphrase?", but is not a duplicate, because no one there questions or explains the 1000 guesses/second assumption.

The only statistics I was able to find are for the THC-Hydra remote password cracking tool. See "STATISTICS" in http://www.thc.org/thc-hydra/README. But the services considered there (FTP, IMAP, POP3, Telnet) are presumably not "weak".

ntc2
    I was running Hydra against port 22 over a VPN connection and getting over 2000 guesses per minute. No special optimizations involved. Why guess how many teeth the horse has? Go look in its mouth ... – schroeder Sep 27 '12 at 15:13
  • @schroeder: thanks for the stats! You're right that I should try an experiment, but I wasn't sure what to try. The (roughly) 30 guesses/second outcome of your experiment, while much better than the stats in the Hydra README, is still **way** under 1000 guesses/second. But, now that I have Thomas' scenario, I could try something like that. – ntc2 Sep 28 '12 at 00:02

1 Answer


Note that the THC-Hydra statistics were made against a machine which is described as running SuSE Linux 7.2. In the history of the S.u.S.E / SuSE / SUSE / openSUSE Linux distributions, this brings us back to mid-2001, so this benchmark was probably done on a machine of that era. Technology has improved since, and computing speeds have grown.

A typical "weak remote web service" will use HTTP basic authentication, possibly within an SSL tunnel (i.e. that's HTTPS, not HTTP). Each password guess then entails the following: sending an HTTP header which contains the presumed password (say 200 bytes for the header) and then obtaining an error response (a 401 status code, with a header which will probably be less streamlined for size than what the attacker chooses; say 1 kB of header). Since HTTP supports keep-alive, the TCP connection (or SSL/TLS tunnel) can be immediately reused for another attempt; in particular, this means that there will be no noticeable per-guess overhead from SSL. 1000 attempts per second then requires a bandwidth of about 200 kB/s to the server and 1 MB/s from the server; that's about 10 Mbit/s, and even cheap hosting services offer that much. On local networks, 100 Mbit/s or more is the norm. No bandwidth problem here.
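The arithmetic above can be checked in a few lines; the header sizes are the same illustrative figures assumed in the paragraph, not measurements:

```python
# Back-of-the-envelope bandwidth check for 1000 guesses/second.
# REQUEST_BYTES and RESPONSE_BYTES are the assumed header sizes from above.
GUESSES_PER_SEC = 1000
REQUEST_BYTES = 200      # HTTP request header carrying one password guess
RESPONSE_BYTES = 1024    # 401 error response header (~1 kB)

upstream = GUESSES_PER_SEC * REQUEST_BYTES      # bytes/s to the server
downstream = GUESSES_PER_SEC * RESPONSE_BYTES   # bytes/s from the server

print(f"to server:   {upstream / 1000:.0f} kB/s")
print(f"from server: {downstream / 1e6:.2f} MB/s")
print(f"total:       {(upstream + downstream) * 8 / 1e6:.1f} Mbit/s")
```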

Of course, the attacker should try to parallelize in order to avoid latency. There is no need to wait for the server's answer before sending another request. Parallelization, here, means opening a hundred connections to the target server, and running them in parallel. Operating systems, nowadays, can handle many thousands of open connections; one hundred will not starve the machine.
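The structure of such a parallelized loop can be sketched with the standard library. Here `check()` is a local stand-in for the real HTTP round-trip (send the basic-auth header, read the 401), so the sketch runs without a network; the wordlist and the "correct" password are made-up placeholders:

```python
# Sketch of the parallelization idea: many workers, each of which would,
# in a real attack, hold one persistent keep-alive connection and work
# through its share of the wordlist.
from concurrent.futures import ThreadPoolExecutor

CORRECT = "hunter2"          # placeholder secret held by the "server"

def check(password):
    """Stand-in for one keep-alive HTTP request/response round-trip."""
    return password == CORRECT

def try_chunk(passwords):
    """One worker: in a real attack this would open one connection
    (one TCP/TLS handshake total) and reuse it for every guess."""
    for pw in passwords:
        if check(pw):
            return pw
    return None

wordlist = ["123456", "password", "letmein", "hunter2", "qwerty"]
workers = 100
# Interleave the wordlist across the workers:
chunks = [wordlist[i::workers] for i in range(workers)]
with ThreadPoolExecutor(max_workers=workers) as pool:
    hits = [pw for pw in pool.map(try_chunk, chunks) if pw]
print("found:", hits)   # prints: found: ['hunter2']
```

With 100 connections in flight, a round-trip latency of 100 ms per guess still yields 1000 guesses per second overall.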

Server-side, since all attempts target the same user, the (hashed) password will be in RAM (it is normally stored in a file or database, but RAM is used as a cache for frequently accessed data, and during a password-cracking session that data element will be very frequently accessed). Since we are talking about a weak remote web service, the server will store passwords "as is" or hashed with a simple hash function, not something like bcrypt. Rehashing each incoming password will use negligible CPU (even an anemic PC can compute millions of hashes per second, so 1000 hashes per second is nothing). Similarly, parsing the HTTP headers will not use much CPU.
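You can verify the "millions of hashes per second" claim on your own machine; even with the per-call overhead of a pure Python loop, a simple hash like SHA-256 runs far faster than the 1000/s the server would need:

```python
# Rough check that re-hashing 1000 incoming passwords per second is
# negligible: count SHA-256 hashes per second in a plain Python loop.
import hashlib
import time

N = 200_000
start = time.perf_counter()
for i in range(N):
    hashlib.sha256(b"password%d" % i).digest()
elapsed = time.perf_counter() - start
rate = N / elapsed
print(f"{rate:,.0f} SHA-256 hashes/second")
```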

So 1000 password guesses per second seems a highly feasible figure; actually, it should not be too hard to exceed it by a large margin. Note the important points in the discussion above:

  • The attacker can parallelize, since he has many passwords to try.
  • Authentication occurs very early in the processing of an incoming request, so the normal processing costs of requests (for whatever functionality the web service provides) do not apply.
  • Using a slow hashing function (like bcrypt) can be effective in such a situation.
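To see how much the last point buys the defender, time one evaluation of a deliberately slow KDF. The standard library has no bcrypt, so scrypt stands in for it here; the salt and cost parameters are illustrative:

```python
# One deliberately slow hash caps the server-side guess rate: if each
# verification costs tens of milliseconds, 1000 guesses/second per core
# becomes impossible regardless of network speed.
import hashlib
import time

start = time.perf_counter()
dk = hashlib.scrypt(b"guess", salt=b"16-byte-salt-val", n=2**14, r=8, p=1)
per_hash = time.perf_counter() - start
print(f"one scrypt hash: {per_hash * 1000:.0f} ms "
      f"-> at most ~{1 / per_hash:.0f} guesses/second/core")
```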
Thomas Pornin
  • Thanks! Assuming ping is a good way to measure round-trip latency, I'm getting latencies from 10 - 100 ms for servers in the US. This supports your suggestion that 100 parallel connections is enough (since keep-alive got us down to a single round-trip per guess). – ntc2 Sep 28 '12 at 00:34
  • With pipelining, you don't even need a roundtrip of latency. You send off a few requests *without waiting*, in a single TCP packet. – derobert Oct 02 '12 at 22:22
  • It seems like in general, attempts/second will be extremely bimodal: either you're attacking a remote service at 10-1000/second (ideally your rate should be something like 10 per hour per IP or whatever) or you're trying to crack something local and can fly at billions/second. – Nick T Mar 23 '15 at 15:44