
As a general rule, security through obscurity is very bad, and even security and obscurity is still bad. This is because ciphertext = AES(plaintext) is already very secure. There is no need to add a custom layer on top of this, and more code means more bugs.

However, what about the biggest weakness of security: users and their passwords? We have password reuse, weak passwords, passwords stored as plaintext, keystroke loggers, and more. Security measures that strengthen passwords (two-factor authentication, password requirements, 3-try lockouts, education, captchas, forced changes, etc.) trade away usability for security and so should be deployed only when necessary.

Tricking the bots

There are web crawlers that scrape usernames and guess common (or stolen) passwords. So what are potential ways to confuse the bots?

  1. Change the words. Change "login" to something like "welcome" or combine image letters and text letters next to each other. Add honeypot links to trap the bots.

  2. Have the client perform a minor JavaScript proof-of-work to obtain a credential-specific hash (see the sketch after this list). Bots would have to both know to run the code and submit a proof-of-work for each guess, which would get expensive. Both the password and hash are encrypted when sent to the server, of course.

  3. Use browser fingerprinting. This is a cat-and-mouse game, but any reduction in bot attacks is welcome. There is a risk of blocking legitimate browsers.
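
As a rough illustration of idea 2, here is a minimal hashcash-style proof-of-work sketch in Python. The difficulty value, function names, and challenge format are illustrative assumptions, not a standard:

    import hashlib
    import itertools

    DIFFICULTY = 20  # require 20 leading zero bits; tune so solving takes ~1s

    def solve(challenge: str) -> int:
        """Client side: find a nonce such that sha256(challenge + nonce)
        starts with DIFFICULTY zero bits. This is the expensive part."""
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return nonce

    def verify(challenge: str, nonce: int) -> bool:
        """Server side: a single hash, so checking a submitted proof is
        cheap even though producing one is expensive."""
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

The server would issue a fresh random challenge per login attempt (bound to the submitted username), so every guess costs the bot real CPU time while verification stays cheap.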

The goal is to use obscurity to reduce the number of attacks and to reduce DDoS loads. The industry standard web protocols and algorithms will not change. Is such anti-bot obscurity a good auxiliary defense?

  • So your question essentially is "How can I prevent crawling through bots/automated attacks in general?" – Karen Baudesson Nov 09 '22 at 23:27
  • 4) don't allow weak passwords 5) don't allow passwords at all – user253751 Nov 10 '22 at 12:45
  • Google's reCAPTCHA has bot detection: https://www.google.com/recaptcha/about/ There is also Cloudflare, which has a very good bot-detection feature: https://www.cloudflare.com/ – pcalkins Nov 15 '22 at 21:35

2 Answers


So what are potential ways to confuse the bots?

Wrong question.

If your web application lets a bot do brute-force attacks, your web application is broken.

Do what Unix login has been doing since the dark ages: slow down repeated login attempts. Adding just 1 cumulative second to each attempt after the 3rd will barely ever be noticed by regular users, but trying out the most common 1000 passwords now suddenly takes almost a week.
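
A minimal sketch of that cumulative delay, assuming an in-memory counter keyed by username (real code would persist this and also key it by source address):

    import time
    from collections import defaultdict

    failed_attempts: dict[str, int] = defaultdict(int)

    def throttled_login(username: str, check_password) -> bool:
        # No delay for the first 3 tries, then 1s, 2s, 3s, ...
        time.sleep(max(0, failed_attempts[username] - 2))
        if check_password(username):
            failed_attempts[username] = 0  # reset on success
            return True
        failed_attempts[username] += 1
        return False

With these numbers, 1000 guesses cost the sum of 1 to 997 seconds of waiting, i.e. roughly 5.8 days.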

If you combine that with a simple blacklist to ensure that your users can't choose any of the top 1000 (or 2000 or 5000) passwords, you almost guarantee that the bot will also waste that week for nothing.
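
The blacklist check itself is only a few lines; the file name below is a placeholder for whatever top-N list you ship (e.g. one of the publicly available common-password lists):

    # Reject any password appearing on a common-password list,
    # checked at registration and at password change.
    with open("common-passwords.txt", encoding="utf-8") as f:
        BLACKLIST = {line.strip().lower() for line in f}

    def password_allowed(candidate: str) -> bool:
        return candidate.lower() not in BLACKLIST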

Or you could simply blacklist any IP for an hour after 10 failed attempts, and a day (or a call to support) after 20 failed attempts. Again, regular users will only very rarely be impacted, and if so it will be an inconvenience rather than a complete block. But trying out 1000 passwords now takes close to two months.
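
A sketch of that escalating block, using the thresholds from the paragraph above; production code would keep this state in the firewall or a shared store rather than in application memory:

    import time

    failures: dict[str, int] = {}
    blocked_until: dict[str, float] = {}

    def register_failure(ip: str) -> None:
        failures[ip] = failures.get(ip, 0) + 1
        if failures[ip] >= 20:
            blocked_until[ip] = time.time() + 86400  # one day
        elif failures[ip] >= 10:
            blocked_until[ip] = time.time() + 3600   # one hour

    def is_blocked(ip: str) -> bool:
        return time.time() < blocked_until.get(ip, 0.0)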

Plus, of course, you simply shouldn't allow 1000 attempts on a password at all. Force a password change through 2FA or another secure channel you have with the actual user after 100 failed attempts.

The only thing that's left for the bot now is to slow down so much that its attempts fall in between successful user logins, so the clock is always reset. That'll take weeks, months or years depending on the password quality. Here's one of the rare use cases where forcing a password change every 3 months or so is actually useful, because then the bot can never know whether the current password is one it already tried and rejected last week.

The goal is to use obscurity to reduce the number of attacks and to reduce DDoS loads.

If bots trying to log in cause you DoS issues, your web application is seriously broken. If you are really bombarded with thousands of login attempts per second, then your defense is even easier: block any IP trying to log in multiple times per second; that definitely isn't a human user. You can block them at the firewall or even at the router, so they never hit the web application.

– Tom
  • It's not really "brute force" if the bot checks the 64 most common passwords (over an hour or so) and also tries known passwords from compromised usernames on other sites. Or if the users are infected with a keystroke logger that automatically figures out when they are entering a password. – Kevin Kostlan Nov 15 '22 at 20:47
  • @KevinKostlan don't allow your users to pick "password" or "12345678" as their password. I mention a blacklist. Compromised passwords are an issue, but the OP doesn't mention them as his threat scenario, nor does he mention keyloggers. These two are different scenarios with different answers. – Tom Nov 16 '22 at 05:07

However, what about the biggest weakness of security: users and their passwords?

This is just your opinion. This is not an absolute truth. Every system can have its own attack vectors. Users and passwords may not be an issue at all.

Security measures ... (two-factor, password requirements, ... ) ... should be deployed only when necessary.

This is just your opinion. This is not an absolute truth. Thousands of companies and millions of users are happy with that. The 2nd factor in 2FA can be as easy as a swipe in the MS Authenticator app. Requirements for password complexity can actually be very easy to fulfill. Millions of users use password managers that generate passwords of the needed complexity and enter them automatically where needed.
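
For example, generating a policy-compliant password is one call to a CSPRNG; this sketch uses Python's standard secrets module, and the length and alphabet are arbitrary choices:

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        # Password-manager style: uniform random draws from a CSPRNG.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))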

  1. Change the words. Change "login" to something like "welcome" or combine image letters and text letters next to each other. Add honeypot links to trap the bots.

Obscure names will make management of your API more complicated and more error prone.

Bots can enumerate all your endpoints. You can prevent enumeration by using random names, e.g. 128-bit GUID strings. But the management of such an API and its integration with any applications will be hard and error prone.
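
For illustration: generating such a random name is trivial, and managing it is the hard part (the route below is a hypothetical example):

    import uuid

    # A hard-to-guess login path; now every client, document, and
    # monitoring rule has to be kept in sync with this opaque string.
    LOGIN_PATH = f"/{uuid.uuid4()}"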

If some attacker wants to attack specifically your application, they will know all the endpoints.

  2. Have the client perform a minor JavaScript proof-of-work to obtain a credential-specific hash.

Effective proof-of-work protection can be hard to implement. Bots can just send random hashes to your service without doing any computation; you cannot prevent that, so your service still has to check every such request.

If some attacker wants to attack specifically your application, they will know what algorithms you use and will try to optimize them, e.g. by reusing partial computation results across multiple data sets.

  3. Use browser fingerprinting.

You cannot force bots to execute the fingerprinting code that you want. Bots can send any random data as a fingerprint. You cannot distinguish whether this data was produced by your code or generated randomly.

What can you do?

It depends. There is no single answer that fits all cases. But consider the following measures (some of which you don't like much, as you wrote):

  • Accept that on the Internet there will always be somebody whose bots will try to attack your application.

  • Lock accounts for a few minutes or hours after 3-5 failed login attempts. This will prevent brute-forcing.

  • Use 2FA. Even if a bot has guessed the password, this will prevent it from logging in.

  • Use a WAF (web application firewall).

  • If you cannot use a WAF, analyze traffic and block IPs that try to brute-force passwords. This will save computation resources.

  • Do logging, especially for critical operations like login, and analyze these logs regularly (a minimal sketch follows below).
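
As a sketch of that last point, using Python's standard logging module (the file name, format, and fields are just one reasonable choice):

    import logging

    logging.basicConfig(
        filename="auth.log",
        format="%(asctime)s %(levelname)s %(message)s",
        level=logging.INFO,
    )

    def log_login(username: str, ip: str, success: bool) -> None:
        level = logging.INFO if success else logging.WARNING
        logging.log(level, "login user=%s ip=%s success=%s",
                    username, ip, success)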

– mentallurg