14

I started working on an app that connects to a RESTful service for authentication and data. The user POSTs the username and password to the /token endpoint. Once they log in successfully, they get a bearer token that they then append to the Authorization header in subsequent calls to protected resources.

My question is: what prevents users from intercepting their regular POST from the app (getting the token) and then possibly sending a bunch of POST requests (using something like Postman or Fiddler) to create a large number of fake posts or articles or whatever else the app does?

What are some possible ways of protecting against this? Does the fact that the traffic to the service will eventually go via TLS make this a non-issue?

u9kV-6J
  • 145
  • 1
  • 7
  • 6
    It's not possible as the legitimate client (your app) is running within the user's (attacker in this case) control. They have full access to the device's memory and your app's binary and given enough time and effort will reverse-engineer whatever obfuscation/protection you implement. – André Borie Nov 13 '17 at 16:38
  • You can rate limit or ban any automated account, though I would be very cautious about the exact implementation of this. Alternatively you could implement a feature to flag spammy or fake posts. – HopefullyHelpful Nov 13 '17 at 21:07
  • From what I can see online, it seems like a lot of apps use this type of OAuth grant (specifically the resource owner password credentials grant, in cases where you control both ends: app and server). Even though imho it seems to be somewhat of a bad practice (due to the drawbacks discussed here), it is used widely. – u9kV-6J Nov 13 '17 at 21:38
  • 1
    The token identifies the user. You could also opt to identify the application. Do something like put a private key in the app and sign each request with the key. The server can then validate the signature. Sniffing is useless as the key is never transmitted. However, you do run the risk of someone extracting the key from the app somehow, but there are things you can do to make this difficult. – phemmer Nov 14 '17 at 13:29
  • 1
    @Patrick - He said TLS. So signing with a key in the application makes absolutely no difference - since OP is asking about his users. A malicious user could easily extract the key in the same way they can easily MitM their own TLS connections. – Hector Nov 14 '17 at 16:47
  • There's a very confusing typo in the first sentence of the second paragraph ("form" should be "from") but the stupid thing won't let me edit it because it's less than six characters. – micheal65536 Nov 14 '17 at 17:55
  • @Hector Yes he said TLS. And since it wasn't otherwise stated, I assume this means normal (a la HTTPS) encryption, not client-side certs. Signing with a client-side key/cert is very different. Without it there is no key to extract, and MitMing applications is trivial. These are completely different scenarios. – phemmer Nov 14 '17 at 20:24
  • 1
    @Patrick - I was replying to your suggestion to sign requests in the application. Which still suffers from the problem that the user can easily enough extract the key from the app and send requests as they like. The reason I brought up TLS is that you then can't MitM the connection unless you control the client. So it offers exactly the same protection/issues as your suggested client keys, since the user does control the client. Ergo adding client keys achieves nothing. – Hector Nov 14 '17 at 20:57
  • 1
    How will you prevent the user from creating a bunch of fake posts by *pasting them into the box and pressing the post button* over and over? – user253751 Nov 15 '17 at 04:59
  • @immibis captcha could help with that – u9kV-6J Nov 15 '17 at 12:11
  • @ska-dev How do you stop the human from solving the captcha over and over? – user253751 Nov 15 '17 at 21:28
  • App auth helps with this - using an embedded client certificate to only authorize certain applications to make requests on behalf of a user. This can be broken too, but requires considerably more effort than a simple MitM capture. – brandonscript Nov 16 '17 at 00:12

10 Answers

29

My question is what prevents users from intercepting their regular POST from the app (getting the token) and then possibly sending a bunch of POST requests (using something like Postman or Fiddler) to create a large number of fake posts or articles or whatever else the app does.

Nothing.

Does the fact that the traffic to the service will eventually go via TLS make this a non-issue?

This makes no difference at all.

What are some possible ways of protecting against this?

The most common one is rate limiting, i.e. if someone posts at a much higher rate than anticipated, reject the post. There are several approaches to this: when did they last post, a rolling average over N minutes, etc. If you don't want false positives resulting in users losing post content, make them re-authenticate to continue.
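As a sketch, a sliding-window variant of this might look like the following (the 5-posts-per-10-minutes threshold and the function names are assumptions, not anything prescribed by the answer):

```python
import time
from collections import defaultdict, deque

MAX_POSTS = 5          # hypothetical limit...
WINDOW_SECONDS = 600   # ...per rolling 10-minute window

_recent_posts = defaultdict(deque)  # user_id -> timestamps of recent posts

def allow_post(user_id: str) -> bool:
    """Return True if this user may post now, False if rate limited."""
    now = time.time()
    window = _recent_posts[user_id]
    # Drop timestamps that have fallen out of the rolling window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_POSTS:
        return False  # over the limit: reject, or demand a captcha/re-auth
    window.append(now)
    return True
```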

Another approach is captchas, i.e. trying to make the user prove they are human.

Another is attempting to detect automatically generated content using spam filters or AI.

Hector
  • 10,923
  • 3
  • 41
  • 44
  • Captchas will only help in preventing users of the app from rapidly/accidentally submitting multiple post requests, and won't help in any way in preventing someone from sending POST requests from external tools such as Fiddler, for example (once they are able to intercept the token). Right? – u9kV-6J Nov 13 '17 at 17:25
  • 3
    @ska-dev No - you can either require one captcha per request for certain request types (either server-side, or using the captcha to give a request token to the client which is required on the restricted request), or you can integrate it with rate limiting - i.e. a captcha is required to remove limiting after 5 posts in 10 minutes. For services with radically different user usage you can integrate an average request rate for the user to reduce false positives. – Hector Nov 13 '17 at 20:29
  • 2
    Captchas are being broken as we speak. – Worse_Username Nov 14 '17 at 15:55
  • 3
    @Worse_Username - They are also continually evolving. Old captcha systems are basically useless. Modern versions of advanced captchas like reCAPTCHA are fairly reliable. For example in this recent paper - http://www.cs.columbia.edu/~polakis/papers/sivakorn_eurosp16.pdf - whilst they had a >70% success rate it took an average of 19.2 seconds to defeat. At that speed automated attacks will post no faster than a human could. – Hector Nov 14 '17 at 16:05
  • @Hector unless they increase the throughput through parallelization. – Worse_Username Nov 14 '17 at 16:07
  • @Worse_Username - Which is trivial to defeat server side. Only allow one active captcha per user. – Hector Nov 14 '17 at 16:13
  • @Hector sounds like round-about way to do rate-limiting to me. If you go with rate-limiting, why even keep the captcha? – Worse_Username Nov 14 '17 at 16:24
  • @Worse_Username - You would ideally use both. Higher than expected frequency (either site wide or per user metrics) triggers a captcha allowing legitimate users to still post while filtering out all but the most advanced automated attacks. You can then have additional metrics which lead to temporary user or IP blocks - for example higher than humanly feasible post rate, <0.5s successful captcha response, high captcha fail rate etc. – Hector Nov 14 '17 at 16:32
  • The UX tradeoffs of some of the suggestions may not be worth it. "Only allow one active captcha per user" will mess up power users who keep several documents open in tabs, and the use of CAPTCHA in the first place may exclude users with disabilities in violation of applicable disability discrimination laws. – Damian Yerrick Nov 15 '17 at 04:56
  • @DamianYerrick - this comes down to balance and use case. Power users often have multiple pages open, but usually most are for reference rather than the same post form. If captchas are only triggered above a certain post rate this isn't an issue at all - especially if you profile users' behavior to increase limits for long-term high-frequency posters. Admins have to adjust based on userbase and usage model. As for disability discrimination, again, by only applying this to high-post-rate users most of this is mitigated. Most captcha vendors additionally support different types - like audio. – Hector Nov 15 '17 at 08:22
12

My question is what prevents users from intercepting their regular POST from the app

Nothing.

Does the fact that the traffic to the service will eventually go via TLS make this a non-issue?

If you make it for a mobile platform (Android/iOS), that makes it much harder (but not impossible).

If you make it for the browser, this doesn't add much protection.

What are some possible ways from protecting from this?

It is hard to protect against automated requests, but one thing you could do is rate limiting.

Anders
  • 65,052
  • 24
  • 180
  • 218
Ruben_NL
  • 119
  • 3
  • 6
    And certainly nothing stops them from using an emulator to run the app and capture it on the computer... – corsiKa Nov 13 '17 at 19:40
  • 7
    I don't know for iOS but it's very easy on Android. The Packet Capture app on the play store can do it without root. – GrecKo Nov 13 '17 at 21:00
  • 1
    @GrekKo On iOS, you can mess with the proxy settings and install the mitmproxy CA cert. If the app has app transport security, [a jailbreak tweak](https://github.com/nabla-c0d3/ssl-kill-switch2) can fix that. If there is jailbreak detection...you get the point. It's not your computer anymore. :P – Andrew Sun Nov 14 '17 at 07:51
  • @AndrewSun - assuming they are even actually using the app at all. Users can easily enough decompile it and look at what it does. – Hector Nov 14 '17 at 11:15
6

My question is what prevents users from intercepting their regular POST from the app (getting the token) and then possibly sending a bunch of POST requests (using something like Postman or Fiddler) to create a large number of fake posts or articles or whatever else the app does.

What are some possible ways of protecting against this?

You don't. That is, you don't protect against this - from the perspective of authentication and authorization, there's no attack happening here, just perfectly legitimate traffic.

The problem instead is "How do I prevent users from spamming my service?" (or similar), and that's completely orthogonal to the question of authentication tokens. A user could similarly spam things manually through the app.

Rate limiting by user account, rate limiting by IP address, using cookies or device identifiers to tie multiple accounts together to rate limit by device, term blacklists, spam heuristics, etc. are all common methods to deal with spam. But whatever the actual thing is that you're trying to prevent, that's what you should be looking into - not preventing users from modifying things client-side (which they will always be able to do).

Xiong Chiamiov
  • 9,402
  • 2
  • 35
  • 78
  • You could add a requirement for proof of work, but that does slow down legitimate users and you still need to receive and check the messages. – eckes Nov 15 '17 at 07:07
4

The token you give to the client should contain a signed expiration time that is verified server-side (e.g., limited to the typical user session time you'd expect for your app). This won't prevent the re-posting, but it will limit the period within which it could be done after authentication. On expiration the user will have to re-authenticate. This is commonly implemented using JSON Web Tokens (JWT).
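As a sketch, signed expiration with JWT might look like the following, using the PyJWT library (the one-hour lifetime and the secret are placeholders):

```python
import datetime
import jwt  # PyJWT

SECRET = "server-side-secret"  # placeholder; load from secure config in practice

def issue_token(user_id: str) -> str:
    # The signed "exp" claim limits the token to a typical session length.
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    # Raises jwt.ExpiredSignatureError once the token is past its lifetime,
    # forcing the client to re-authenticate.
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
    return payload["sub"]
```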

However, you're talking about malicious misuse by a legitimate user (unless an attacker has already compromised the legitimate user's device and can intercept the traffic in the clear) - such misuse is very difficult to prevent without making the app almost unusable, as others noted (e.g., by making users authenticate on every request). Storing credentials on the device and silently re-sending them is a BIG NO-NO.

sleske
  • 1,642
  • 12
  • 22
Sasha K
  • 67
  • 3
  • Yes, I have it set up with an expiration time that is verified on the server side, as in JWT. Thanks for the second part of your answer; that was my main concern with this question. – u9kV-6J Nov 13 '17 at 18:46
  • 1
    Shortening the token lifetime won't get you closer to your goals. Nothing prevents the malicious user from scraping your token service too and generating tokens from his bot. – Thibault D. Nov 14 '17 at 11:57
  • 2
    The bot script could re-authenticate automatically when the token expires. – micheal65536 Nov 14 '17 at 17:54
2

Correct - a session token alone does not ensure that a hacker can't intercept a packet and reuse the token for their own purposes... at least as long as the token is valid. Most session tokens have a time limit, although that is dependent upon the authorization method used to validate the token on the server end.

Timing is one way to guard against this, although it isn't foolproof. Since you're writing the app, you should have a reasonable idea of how quickly the app can be operated, and an expected rate of service calls. If your app, in typical use, can't submit a service call more than once a second, and the service receives 100 requests in one second, that's clearly a hacker at work.

However, this assumes a hacker will bombard your service with requests. A hacker could figure that out after a few failures, and lower their rate of requests. As soon as they see the service rejecting what they believe to be a valid session token, they'll start looking at the obvious, like timing.

Another way to guard against this is to require SSL to access the service. That will make the packets, and the authorization token, difficult to extract. They'll need a lot more than a packet sniffer. A particularly knowledgeable hacker might try to probe into the app's binary, but that's a lot of work, especially on mobile platforms. You would have to purchase an SSL certificate for the server, but that's cheap insurance.

A method I've been experimenting with is to append a sequence number to the session token, hashed so it doesn't look like a sequence number. The authorization service maintains a count that gets incremented every time a token is validated. It strips off the sequence bytes before validating the token, and then checks the sequence number.

The client is expected to start at zero when it initially receives the session token, and increment the appended count by one every time a call is made. Thus, when the server last received sequence 250, and another one comes in that's sequence 135... ignore the request, maybe lock out the source IP, and send the admins a notice that a hack attempt may be in progress.
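A rough sketch of the server side of such a scheme, with the sequence-number masking simplified to a truncated HMAC (all names here are illustrative, not the author's actual implementation):

```python
import hashlib
import hmac

SEQ_KEY = b"per-session-secret"   # placeholder; established at login
_expected_seq = {}                # session token -> next sequence number expected

def mask_seq(seq: int) -> str:
    # Both sides compute this, so on the wire it doesn't look like a counter.
    return hmac.new(SEQ_KEY, str(seq).encode(), hashlib.sha256).hexdigest()[:16]

def check_request(token: str, seq_mac: str) -> bool:
    # The server keeps its own count and checks the masked value against it.
    expected = _expected_seq.get(token, 0)
    if not hmac.compare_digest(mask_seq(expected), seq_mac):
        return False  # out of sequence: ignore, maybe alert the admins
    _expected_seq[token] = expected + 1
    return True
```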

However, that adds some complexity to the client app, and also could run afoul of dropped packets or dropped return packets. Just something I've been experimenting with.

And, yes, a hacker might eventually be able to figure that out, but only after a lot of false starts... giving the admins some warning that an intrusion attempt is under way.

tj1000
  • 131
  • 1
1

Plenty of other people have said "nothing" in response to the first question on "what prevents.." and it's true; ultimately nothing really defeats someone absolutely determined.

You also asked for strategies that can counter this; tj1000 touched on it, and I thought I'd toss a similar idea into the fray, based on work I used to do with credit card terminals.

Way back when I was a junior dev, I was handed a task that was deemed too hard to be worth solving by the pro devs (I guess it kinda shows what I was paid): we had thousands of credit card terminals that called in over an old pre-ISDN link, did some auth, recorded a transaction, got an approve or decline from the server, and moved on to the next transaction. The cute part is that there was never another follow-up message from the terminal if the transaction was voided after we'd approved it (this was in the days of signatures, before user identity was pre-authed by a chip and PIN), but there didn't need to be.

These transactions were protected and confirmed by what was termed a MAC - a message authentication code. Built into the hardware of the terminal, by the manufacturer, was a hash key unique to that terminal. The manufacturer would share the hashing key with us, and when the terminal appeared, presenting its unique ID, we could look up the hash key. The terminal would hash the message bytes it formed, appending half of the resulting hash to the message; the other half would be used to update the hash key for the next message. At the server side, we'd carry out the same hashing to know whether the message had been tampered with. If we came to the same hash result, we'd also know to roll the hash key on with the same half residue we had - but we'd keep track of the previous hash key too.

The next time a message came in, one of two things was the case. If the previous transaction had succeeded and was to be accumulated into the daily totals, the terminal would use its newly rolled hash key to hash the latest message. If the previous transaction was rolled back (user cancelled, bad signature, etc.), the terminal would re-use the previous hash key. By hashing the message with the latest rolled key and finding no match, but hashing with the previous key and finding a match, we knew the fate of the previous transaction was failure, and we'd remove it from the daily totals.
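A toy sketch of that rolling-key idea, server side (the hash, the key-rolling rule, and the MAC length are all illustrative; the real APACS scheme differs in detail):

```python
import hashlib
import hmac

def roll(key: bytes, message: bytes) -> tuple[bytes, bytes]:
    """Hash the message under the current key; return (mac, next_key)."""
    digest = hashlib.sha256(key + message).digest()
    return digest[:16], digest[16:]

class Server:
    def __init__(self, initial_key: bytes):
        self.key = initial_key   # key expected if the last transaction stood
        self.prev_key = None     # key expected if the last transaction was voided

    def receive(self, message: bytes, mac: bytes) -> str:
        expected, next_key = roll(self.key, message)
        if hmac.compare_digest(expected, mac):
            self.prev_key, self.key = self.key, next_key
            return "ok: previous transaction stood"
        if self.prev_key is not None:
            expected, next_key = roll(self.prev_key, message)
            if hmac.compare_digest(expected, mac):
                # Terminal re-used the old key: the previous transaction
                # was voided, so remove it from the daily totals.
                self.key = next_key
                return "ok: previous transaction was voided"
        return "reject: keys out of sync - try the initial key"
```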

Hash keys would occasionally go out of sync; when this happened, neither of our stored keys would produce a matching hash for the message. There was one more key to try - the initial key (supervisor users could reset the key to initial, and some users seemed to do this upon any problem, believing it to be something like a reboot - it seldom was, and it caused more problems than it solved). If the initial key worked out, we couldn't say for sure what had happened to the previous transaction, but we usually accumulated them (charged people's accounts) on the theory that people would complain if they weren't refunded when due, but not if they were refunded for something they'd bought.

If the initial key didn't work out, then the terminal effectively became useless, as the loss of key-rolling sync meant no more messages could work. We didn't have the authority to tell the terminal to reset its own key, but we could put a message on the display imploring the user to do so.

Long story short, you don't have to use the same token, if you're concerned that tokens will be captured and replayed as a stored password alternative. Others have pointed to options of making tokens expire after a time; this method is essentially token expiry after every request (similar to another mention about appending a sequential number to a token), with a known, internal way of calculating a new token on each side that has to be performed in step.

If you're interested in the boring details of the way the credit card world does it in the UK, look up APACS 70 Standard Book 2 and Book 5. These aren't freely available, alas - you have to be a member to receive a copy of new publications - but you might find the content of old versions of them floating around the web.

Caius Jard
  • 168
  • 3
1

So... there's nothing you can do to secure the client machine to keep the token secure on their end, but there are some best practices to JWT security:

  1. Implement refresh tokens and issue short-lived access tokens - this adds complexity to any attack, plus the way I've seen refresh tokens implemented is with a SQL table... which you can manage (e.g. instantly booting a user by revoking their refresh token).

  2. Delete the token from the client on logout (this one's obvious).

  3. You can encrypt the token, but since the encryption is reversible this is no guarantee - again, it just makes an attack harder.

  4. Use a secret key to sign your tokens and rotate it as needed (e.g. per release). Changing the secret key invalidates all prior tokens; see the sketch below.
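A minimal sketch of point 4, again assuming the PyJWT library (the secrets and the per-release rotation trigger are placeholders):

```python
import jwt  # PyJWT

OLD_SECRET = "release-41-secret"  # placeholder values:
NEW_SECRET = "release-42-secret"  # rotated, e.g., with each release

token = jwt.encode({"sub": "alice"}, OLD_SECRET, algorithm="HS256")

# After rotation, tokens signed with the old secret no longer verify:
try:
    jwt.decode(token, NEW_SECRET, algorithms=["HS256"])
except jwt.InvalidSignatureError:
    print("old token rejected - user must log in again")
```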

Also, read: https://auth0.com/blog/refresh-tokens-what-are-they-and-when-to-use-them/

If you need further security, you need more mechanisms to secure the client specifically: whitelisting user IPs, ensuring users have AV and anti-phishing training, etc...

RandomUs1r
  • 145
  • 4
1

TL;DR a skilled user satisfied with a reasonable spam rate will still get through - if by no other means, then by entering their spam by hand.


Have the app authenticate each request independently. It's definitely not foolproof, and it makes for increased traffic, but it's doable.

One way: a message post is no longer a single POST but a GET followed by the POST. The GET supplies a nonce (it could be the current timestamp), and the POST must supply the nonce together with, e.g., the MD5 of the nonce salted with an app secret. Of course you need to store the issued nonces to avoid replay attacks.
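A minimal sketch of that GET-then-POST flow (the names are illustrative; MD5 is kept only because the answer names it - an HMAC over SHA-256 would be the better choice in practice):

```python
import hashlib
import os

APP_SECRET = b"baked-into-the-app"  # placeholder; shipped inside the client
_issued_nonces = set()              # server-side store of outstanding nonces

def get_nonce() -> str:
    # Handler for the GET: issue and remember a fresh nonce.
    nonce = os.urandom(16).hex()
    _issued_nonces.add(nonce)
    return nonce

def check_post(nonce: str, proof: str) -> bool:
    # Handler for the POST: the nonce must be one we issued (and unused),
    # and the proof must be MD5(nonce + app secret).
    if nonce not in _issued_nonces:
        return False  # unknown or already-used nonce: possible replay
    _issued_nonces.discard(nonce)  # single use
    expected = hashlib.md5(nonce.encode() + APP_SECRET).hexdigest()
    return proof == expected
```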

You can also supply a session nonce at login and use that for the whole session (of course, the user can intercept it and copy it into a second spamming app once he's broken the app secret out of the app). This isn't a significant improvement over the authentication cookie, though.

Or you can silently add a validation to all app requests, in the form of the current timestamp plus a hash of said timestamp salted with the app secret. Uniqueness can then be guaranteed by preventing two posts in the same second, client-side.

The server then verifies that the timestamp isn't too far from now(), and that the hash is matched. You can even omit the timestamp if it is acceptable for the server to bruteforce it (for timestamp = now()-60 to now()+60; if hash(secret+timestamp)...).
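The server side of that timestamp variant might be sketched like this (illustrative names; the +/-60-second window comes from the answer's own pseudocode):

```python
import hashlib
import time

APP_SECRET = b"baked-into-the-app"  # placeholder, as above

def check_signed_request(sig: str) -> bool:
    # The client computed hash(secret + timestamp) but omitted the timestamp;
    # the server brute-forces it over a +/-60 second window, per the answer's
    # "for timestamp = now()-60 to now()+60" pseudocode.
    now = int(time.time())
    for ts in range(now - 60, now + 61):
        if sig == hashlib.sha256(APP_SECRET + str(ts).encode()).hexdigest():
            return True
    return False
```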

Now, the user has to hack the secret out of your app, which is harder than just intercepting the network traffic. If strings/data are easily recognizable in the app, you can add a little security through obscurity by sending the timestamp, and the hash of the secret plus the timestamp of seven seconds before (1). This requires the user to reverse engineer the app altogether.


(1) I once broke a salting scheme by testing all sequences from 1 to 32 bytes in a program, prefixed and suffixed to the timestamp in hex, binary and decimal, with a set of separators including the no-separator, and verifying whether the result was the response to the challenge. This allowed me to avoid debugging the binary at all, which I wouldn't have been able to do; it took twenty minutes to set up, and two to run. If the timestamp had been obfuscated by adding a known constant, I wouldn't have succeeded. If the constant was large, it wouldn't be practical even knowing the trick.

LSerni
  • 22,670
  • 4
  • 51
  • 60
  • Re: your salting scheme attack. I've seen a similar approach of hooking the random functions so they always return 0. This way you can identify the segments of a key that are random. – formicophobia Nov 15 '17 at 13:18
1

As others have touched on, there is basically nothing you can do once the code is running on a user's device. If it's running in the browser, reverse engineering the token process is trivial. If it's running in an app, you can make it much more difficult to reverse-engineer, but it's still possible.

For mobile apps, the attacker has two options:

1) Reverse engineer the protocol. The simplest approach is sniffing network traffic. There are obfuscations you can put in place to make that harder (cert pinning, MAC + secret key + rotation - see the pinning sketch after these two options), but any determined attacker will eventually break through them. However, if you use cert pinning and/or the secret-key approach, the attacker will not be able to simply sniff packets to reverse engineer the protocol. He will need to decompile the binary in order to disable cert pinning and/or locate the secret key in memory.

2) Treat the application as a blackbox and automate interaction with it. This could be done with a farm of physical devices (much easier if jailbroken iOS / rooted Android), or with a farm of emulators. From an attacker's perspective, this could be a more reliable approach as it would be resilient to any updates you push.
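As a concrete illustration of the cert pinning mentioned in option 1, here is a rough fingerprint-pinning sketch (the fingerprint constant is a placeholder, and a real mobile app would use the platform's own pinning facilities rather than raw sockets):

```python
import hashlib
import socket
import ssl

# Placeholder: the SHA-256 fingerprint of the API server's certificate,
# baked into the client at build time.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection; refuse it unless the server cert matches the pin."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch - possible MitM")
    return sock
```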

To guard against #1 (binary decompilation) you have many options, but they all come down to increasing the difficulty of reverse-engineering it, not preventing it altogether. The low-hanging fruit is binary obfuscation and debugger detection. For obfuscation, you can obfuscate the symbol tables, and/or hide the encryption logic in mundane looking functions (or write it directly in assembly). For debugger detection, there are different techniques for determining if a debugger is running; if you catch one, you can crash the app or play games with the attacker.

Guarding against #2 (emulator/device farm) is a bit harder, but again, you can make the attacker's job more difficult. One option is to check whether the device is jailbroken (this would also defend against #1) and crash the app if it is. Another option, for Android at least, is Google's attestation service (SafetyNet). That should prevent the emulator-farm scenario, but not a physical farm. For iOS, Apple released a similar DeviceCheck API in iOS 11, although it's much more limited in scope.

If you want examples of this in the wild, check out Snapchat.app, which implements many of these features.

formicophobia
  • 515
  • 4
  • 9
0

If an app is designed to authenticate (username/password) once and then employ a token system for REST service authorization, then that app has effectively created its own back-door access.

I am sure that this solution is going to get me flogged, but you basically need to authenticate into every single RESTful endpoint.

You can choose to pester your users and ask for a password upon every single REST request, but this might doom your app to failure.

The other option is to store the user's credentials in the app's memory upon log-in and send the credentials silently to the REST endpoints.

MonkeyZeus
  • 517
  • 3
  • 10
  • 7
    Authenticating every request with a password makes no difference - this can be automated on every request. Storing the password in the client and re-sending it is identical to storing a session token, except it requires hashing the password on every request, which is expensive for the server. – Hector Nov 13 '17 at 20:34