
I'm receiving a lot of requests like these:

[server] zcat access*gz | grep 404 | awk '{print $1" "$7}' | sort | uniq -c | less
:
      2 103.193.242.137 /MCLi.php
      1 103.193.242.137 /Moxin.PHP
      1 103.193.242.137 /MyAdmin/index.php
      1 103.193.242.137 /MyAdmin/scripts/db___.init.php
      1 103.193.242.137 /MyAdmin/scripts/setup.php
      1 103.193.242.137 /MySQLAdmin/index.php
      1 103.193.242.137 /PMA/index.php
      1 103.193.242.137 /PMA/scripts/db___.init.php
      1 103.193.242.137 /PMA/scripts/setup.php
      1 103.193.242.137 /PMA2/index.php
      1 103.193.242.137 /Pings.php
      1 103.193.242.137 /SQL/index.php
      1 103.193.242.137 /Skri.php
      1 103.193.242.137 /Ss.php
      1 103.193.242.137 /Updata.php
      1 103.193.242.137 /WWW/phpMyAdmin/index.php
:

The hosted server is a plain and simple HTML/CSS server. The server is Nginx.

Are these requests going to have any impact on the server?

Should I use rate limiting? Will rate limiting impact SEO bots?

Pallav Jha
  • It depends. The answers will be opinion based. I would close the question in such form. – mentallurg Dec 28 '19 at 10:36
  • Does this answer your question? [Strange requests to web server](https://security.stackexchange.com/questions/40291/strange-requests-to-web-server) – Polynomial Jan 02 '20 at 23:35

2 Answers


Are these requests going to have any impact on the server?

Everything has some impact, but whether this impact is relevant depends on how many requests there are, how much your server can handle, and whether these requests trigger some kind of expensive action on the server; none of this is known here.

But in general such "noise" is very common on the internet, so you should be prepared to handle such requests, and to handle them as cheaply as possible. If your server is configured to serve only static pages and just responds with 404 to these requests, that should be cheap enough.
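As a minimal sketch (the location pattern is an assumption for illustration), a static-only nginx server can answer every PHP probe with a bare 404 without ever touching the filesystem:

    # static-only site: no PHP exists here, so reject any *.php request
    # immediately with a 404 instead of looking for a file on disk
    location ~* \.php$ {
        return 404;
    }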

Should I use rate limit?

I don't think this helps in these cases, since it will only result in another error response to the client, which is about the same effort as the 404 response. The client will likely care about this error as little as it cared about the 404 response, i.e. it will not be slowed down.
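For reference, a minimal sketch of what nginx rate limiting would look like here (the zone name and limits are assumptions, not recommendations); requests over the limit just receive another error response, 503 by default:

    # in the http block: track clients by IP, allow 5 requests per second
    limit_req_zone $binary_remote_addr zone=scanners:10m rate=5r/s;

    # in the server or location block: apply the limit, allow short bursts
    limit_req zone=scanners burst=10;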

What I do is redirect such requests to a different port on the same domain and have an iptables DROP rule on that port. Many clients will actually follow these redirects and try to access the new URL without success, i.e. they will run into a timeout after a while since they cannot connect to the new port (due to the DROP rule). This will slow them down.
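A minimal sketch of that setup, assuming port 444 as the dead port and a pattern covering the probe paths shown in the question (both are illustrative choices, not part of the answer):

    # nginx: redirect known scanner paths to a port nothing listens on
    location ~* ^/(PMA|MyAdmin|phpMyAdmin|SQL) {
        return 301 http://$host:444$request_uri;
    }

    # iptables: silently drop packets to that port, so clients following
    # the redirect hang until their connect timeout expires
    iptables -A INPUT -p tcp --dport 444 -j DROP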

Steffen Ullrich

Define "Too much".

Nginx will handle these missing URLs very efficiently, so if they are actually impacting your server's performance, something is very wrong.

Depending on your traffic profile, rate limiting could have quite a lot of impact on your server capacity, and it's difficult to tune so that it is effective. Also, the requests you've shown above are looking for vulnerabilities in your website. Rate limiting works well for DoS-type attacks, not for attacks targeting vulnerabilities.

If you are exposing off-the-shelf applications on your webserver, then it might be a good idea to add some defences, but I suggest fail2ban is a more appropriate mechanism for this. Note that you don't have to use the default option of blocking an IP address via iptables; there are various ways of doing this in nginx, but you might start by looking at the geo module.
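As a starting point, a hedged fail2ban sketch that bans an IP after repeated 404s on PHP-style probes (the filter name, regex, log path, and thresholds are all assumptions to adapt):

    # /etc/fail2ban/filter.d/nginx-scanners.conf
    [Definition]
    failregex = ^<HOST> .* "(GET|POST|HEAD) .*\.php.*" 404

    # /etc/fail2ban/jail.local
    [nginx-scanners]
    enabled  = true
    filter   = nginx-scanners
    logpath  = /var/log/nginx/access.log
    maxretry = 5
    bantime  = 3600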

symcbean