
The issue

I am currently designing the backend for a SPA (Single Page Application), which I'm planning to construct in a fairly RESTful manner. The backend will ideally be just a thin layer between the client and the database. Almost all data in the database will be keyed to a specific user, which is why I need some form of authentication system.

Since we live in a universe where humans are fairly predictable and machines can be really really fast we obviously have to make password hashing and password verification slow (normally using key stretching schemes such as bcrypt or whatnot). This is all fine and dandy, but it complicates life for us poor souls who want to design fast and snappy applications using a RESTful backend, because ideally such a backend would not need to store any session data, but just authenticate every single request individually (using for example Basic Access Authentication).

However, this would mean hashing the user's password for every single request, which would add a painful penalty to every request (and possibly make DDoS attacks easier). Of course this issue can be solved by caching user credentials in memory at the backend, but that solution doesn't feel very clean and raises a few other issues, such as handling cache retirement and forcing the client to store credentials in plain text.

So in short, my issue is that I need some form of system to handle the authentication of users in a snappy way, but ideally still being able to avoid any server side state.

My proposed solution

First of all, the traffic between the client and the server will be encrypted using bog-standard TLS, so we should not be vulnerable to eavesdropping or man-in-the-middle attacks.

The solution that I have thought of is to issue a token to the client upon initial successful authentication (using credentials sent in plain text over TLS) which contains the necessary information for the server to authenticate a user. This token would be calculated in the following way:

key = a random key generated when the server starts, never to be shared
nonce = just some random bytes grabbed out of the air

token = nonce + timestamp + user_id + HMAC(key, nonce + timestamp + user_id)

This would allow the server to check whether the token is valid by simply validating the HMAC for every request, which is very cheap to do (about as cheap as a lookup in a hash table, but entirely sidestepping the need for a hash table in the first place). If the token is valid and has not expired (the timestamp is not allowed to be too old) I let the request proceed, and let the database handle the authorisation.
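A minimal sketch of the scheme in Python (the encoding, field separators, and the one-day lifetime are illustrative choices, not part of the question):

```python
import hashlib
import hmac
import os
import time

SERVER_KEY = os.urandom(32)      # generated when the server starts, never shared
TOKEN_LIFETIME = 60 * 60 * 24    # assumed lifetime: one day, in seconds

def issue_token(user_id):
    """Build token = nonce : timestamp : user_id : HMAC(key, payload)."""
    nonce = os.urandom(8).hex()                  # random bytes grabbed out of the air
    timestamp = str(int(time.time()))
    payload = f"{nonce}:{timestamp}:{user_id}"
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{mac}"

def verify_token(token):
    """Return the user_id if the token is authentic and unexpired, else None."""
    try:
        payload, mac = token.rsplit(":", 1)
        nonce, timestamp, user_id = payload.split(":", 2)
        issued = int(timestamp)
    except ValueError:
        return None
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):   # constant-time comparison
        return None
    if time.time() - issued > TOKEN_LIFETIME:    # reject stale timestamps
        return None
    return user_id
```

Note the use of `hmac.compare_digest` rather than `==` for the MAC check, to avoid leaking information through timing.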

This feels like an efficient and secure way to authenticate users; it also requires surprisingly few lines of code to implement and totally avoids any kind of state in the backend (the server application would use a constant amount of memory throughout its lifespan).

My true question

Now this scheme may seem totally smashing at first sight, but after looking at it a while I see two potential issues that bring up questions:

  1. There is no way to retire a token, except waiting for it to expire. Is this actually an issue, or is it me being paranoid? Seeing as the token must be stored client-side, security is immediately compromised if a malicious party gains access to a client before the token expires. Sure, the danger is contained to the timespan that the token is valid, but for this SPA that timespan might be counted in days.

  2. How do we scale? The secret server side key would have to be shared amongst all servers in the cluster, but how would we do this securely, and when should we retire a server side key?

There might also be other issues, but these are the two glaring ones that stood out to me. What are your thoughts on this scheme and these issues?

Note that this question is not a general question about secure token generation or session management (as that has been answered many times here), but about the peripheral issues regarding this specific scheme, issues that I have not seen discussed.

Fors
  • possible duplicate of [Token-based authentication - Securing the token](http://security.stackexchange.com/questions/19676/token-based-authentication-securing-the-token). In response to the second part, there are usually existing tools (or hardware) to manage such keys. Given our current understanding of the math behind them, sufficiently large keys don't 'go bad' and become too easy to figure out after a while - usually they're expired to limit _exposure_. Side note: you can/should use separate sub-keys for both the SSL connection and the HMAC, if you weren't planning on that already. – Clockwork-Muse Jun 01 '15 at 13:17
  • The accepted answer to that question suggests a scheme which is basically identical to the one in my question, but it doesn't touch the possible issues, which is what I am interested in. – Fors Jun 01 '15 at 13:21
  • http://hackingdistributed.com/2014/05/16/macaroons-are-better-than-cookies/ – Natanael Jun 03 '15 at 09:59
  • Those macaroons sounds very interesting indeed, and might indeed be very close to what I'd end up doing anyway. And let's not forget that they have a tasty name. Direct link to the paper: http://static.googleusercontent.com/media/research.google.com/sv//pubs/archive/41892.pdf – Fors Jun 03 '15 at 14:54

2 Answers


Implementing strong encryption does add a performance penalty, but it should be negligible for a human user: consider a round trip from the web browser to the database taking 600 ms without encryption versus 700 ms with it. I do believe the 100 ms increment is an exaggeration, but it illustrates the point I am trying to make: a human cannot tell the difference of 100 ms. You may also use perceived-performance techniques, such as showing a spinner to indicate that something is in fact happening.

As for specific attacks:

  • Brute force: the delay actually helps you; an attacker would need an extra century to finish working through the search space.
  • DoS and DDoS: packet analysers and flood-control mechanisms should be in place anyway. These measures mitigate rather than prevent such attacks; you have to accept a certain level of risk in these cases.

You haven't mentioned what happens when a token expires: is the user forced to log in again? If not, then the inability to expire a token does pose a greater risk, since anyone able to steal that token can keep the "session" alive for as long as they need. This, however, is how most software works: it gives the user the option to log out and otherwise assumes they remain logged in for a certain period of inactivity, renewing the expiration time with every request. So for your case, I would run a couple of use cases, determine the ideal session lifetime, and adjust the timeout accordingly. (Side note: be sure to test that the token and other temporary credentials are destroyed on both the client and the server.)

Another option is to force a renewal of the token every so often, possibly transparently to the user (not requiring manual authentication), just refreshing the token. This would invalidate any duplicated session, with the disadvantage that users who open more than one tab may be logged out, which can hinder the user experience.
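Such a sliding renewal could be as simple as checking, on each authenticated request, whether a still-valid token is past some fraction of its lifetime and, if so, issuing a fresh one alongside the response. A sketch (the one-day lifetime and half-life threshold are assumptions for illustration):

```python
import time

TOKEN_LIFETIME = 60 * 60 * 24          # assumed one-day token lifetime, in seconds
RENEW_AFTER = TOKEN_LIFETIME // 2      # refresh once the token is past half its life

def should_renew(token_timestamp, now=None):
    """True if the token is still valid but old enough to be refreshed."""
    now = time.time() if now is None else now
    age = now - token_timestamp
    return RENEW_AFTER < age <= TOKEN_LIFETIME
```

The server would then attach the fresh token to the response (for example in a header) and the client would simply start using it.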

As for scaling, you could pin a session to a specific server, which would allow each server in your system to use a different key.

Purefan
  • The time penalty may very well be negligible for the human user if there was just one user, but the performance penalty is _not_ negligible for the server, seeing as the server might have to handle thousands of requests per second. And a re-login would be required when a token expires, yes. An initial negotiation of which server to use is an interesting idea and might well be a good idea from a security standpoint (and it can be done without storing any server side state). +1 for that idea. – Fors Jun 01 '15 at 14:37

I think it's not a good idea to design your own token generation algorithm unless you really know what you are doing. You may want to check out this library: https://github.com/firebase?utf8=%E2%9C%93&query=token

  1. This is generally how web apps work today anyway, but you can still keep a dictionary of manually invalidated tokens in memory, checked in a centralized service before requests are forwarded to your app.

  2. Why not use sticky sessions to avoid sharing?

fips