On my system (Ubuntu Server 18.04, if that matters) I have two servers. They are behind an Nginx reverse proxy (i.e. accessing service.mywebsite.com internally proxies the request to 127.0.0.1:servicePort).

I have one server which is responsible for authenticating the user and generating access tokens. Basically, it stores an 'auth token' cookie in the client browser, which is shared with the other server (same subdomain). (let's call it the authenticator)

My other server should serve its service based on that 'auth token'. (let's call it the service)

What I imagined was to have the service connect to the authenticator with a simple TCP socket. The service would send the user's 'auth token' to the authenticator, and the authenticator would respond 'yeah okay, this user is myUserName, it has permission to use your service'.
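
The exchange described above could be sketched roughly like this, assuming a trivial send-token/receive-verdict protocol. The message format, port choice, and in-memory token store are all illustrative assumptions, not a recommendation:

```python
import socket
import threading

# Hypothetical in-memory token store; a real authenticator would check the
# cookie value against its own session state or database.
TOKENS = {"abc123": "myUserName"}

def handle_one_request(server_sock):
    """Authenticator side: answer a single token-check request."""
    conn, _ = server_sock.accept()
    with conn:
        token = conn.recv(1024).decode().strip()
        user = TOKENS.get(token)
        # Reply with the user name on success, or a denial.
        conn.sendall((f"OK {user}" if user else "DENY").encode())

def ask_authenticator(port, token):
    """Service side: send the user's auth token, return the verdict."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(token.encode())
        return s.recv(1024).decode()
```

Binding the authenticator to ("127.0.0.1", port) rather than ("0.0.0.0", port) at least keeps it unreachable from outside the machine.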

Question 1: Is the simple fact that I (as the service) am connecting to 127.0.0.1 (the authenticator) enough to trust whatever its answer may be?

Question 2: Is there a simpler/safer method than TCP to communicate between the authenticator and the service? (They will stay on the same system, and should be separate executables.)

Gilles 'SO- stop being evil'
Sinder
    As usual, this question can't be answered until you tell us something: What's your threat model? Protecting against remote attackers is very different from protecting against someone who can compromise your network stack. – Nic Aug 16 '19 at 20:16
  • Also, quick question: Why are you using a server hosted locally accessed from your website instead of, say, a hardware token, a remote server, or a custom protocol handler? The latter are very well-supported, won't get you any issues with HTTPS, and don't open up an uncontrollable attack surface against the user's machine. (Custom protocols can be disabled by the user; accessing domains which secretly point to `localhost`... can't) – Nic Aug 16 '19 at 20:21
  • @NicHartley I'm assuming that my server is not compromised. If that was the case, even the database/authenticator communication would be a threat – Sinder Aug 16 '19 at 20:31
  • @NicHartley What are you calling a "hardware token", and what kind of protocol handler are you talking about? That's exactly the kind of thing I may be looking for to avoid using plain sockets – Sinder Aug 16 '19 at 20:32
  • WRT custom protocols: Something like [this](https://developers.google.com/web/updates/2011/06/Registering-a-custom-protocol-handler). Although on re-reading, I think I've misunderstood your question, actually. To be clear: Are you talking about redirecting to localhost _on the connecting client's machine_, or localhost _on the proxy_? I thought you were saying the former, but I think you might actually be saying the latter, in which case... please ignore my second comment, it's not relevant here. – Nic Aug 16 '19 at 20:44
  • @NicHartley I am not talking about any client redirection. I may not have been clear. The connection flow I am talking about is client → nginx → service → authenticator. nginx, service and authenticator are all running on the same machine, same IP. – Sinder Aug 16 '19 at 20:50
  • Ah, yep, I misunderstood, then -- I thought you were talking about redirecting to the client's localhost, which is a very common mistake to make. My apologies for misreading your post. It's very clear; I think I just misread "reverse proxy" as "DNS" or something. – Nic Aug 16 '19 at 21:00

2 Answers

Communicating through 127.0.0.1 can be thought of as just another IPC mechanism, but one that re-uses existing protocols. Just like shared memory or UNIX domain sockets or pipes, it's one of countless ways that two processes can communicate on a single system. If you trust that the processes on your system have not been compromised, then you can "blindly" trust connections going through 127.0.0.1.

forest

If you know that the IP address at the other end of a TCP socket is 127.0.0.1, this guarantees that either the system administrator has configured the firewall to redirect this particular connection, or the other end of the TCP socket is a process running on the same machine. So if you trust your server machine as a whole, you can trust 127.0.0.1. However, there are advantages to not using a TCP socket, for defense in depth.

You need to be careful about how you implement the localhost check. Localhost is 127.0.0.1 until the day it isn't, for example because you switch to a version of some library that uses IPv6 by default, or because you decide to add some form of forwarding proxy to the mix to allow you to run the two services on different machines or containers. If you start using a proxy, be careful to perform the check in the right place. And of course you must make sure never to host anything else that could be dodgy on the same machine (though why would you, in these days of VMs and containers).

Knowing that you're talking with the same machine only tells you that some process on the same machine is at the other end of the connection. It doesn't tell you that it's the right process. Under normal operation, presumably, both processes are running. But if something wrong happens, such as one process crashing after running out of memory due to a denial-of-service attack, the port would be free for another process to listen. And at any time, any local process can connect to a running server. This requires the attacker to be able to run some process locally, but it could be some unprivileged process that would otherwise not be able to do much. So while relying on 127.0.0.1 isn't a vulnerability, it leaves you open to privilege escalation.

If you can, use a Unix socket instead. Unix and TCP sockets work the same way except for how you specify the address to connect to or listen on, so it wouldn't require much code change. A Unix socket can have permissions, or can be created by the parent supervisor, so that nothing else can connect to it. With a Unix socket, you have the guarantee not only that the other end of the socket is running on the same machine, but that it's the expected process. This leaves you vulnerable only to a security breach in the authenticator service or the main service, rather than a breach in anything running on the same machine.
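
A minimal sketch of the Unix-socket variant, assuming Linux. The socket path is an arbitrary example; on Linux you can additionally ask the kernel for the peer's credentials via SO_PEERCRED:

```python
import os
import socket
import struct

SOCK_PATH = "/tmp/authenticator.sock"  # example path; pick one both services can reach

def make_unix_server(path=SOCK_PATH):
    """Listen on a Unix socket that only the owning user may connect to."""
    try:
        os.unlink(path)  # remove a stale socket file from a previous run
    except FileNotFoundError:
        pass
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    os.chmod(path, 0o600)  # filesystem permissions gate who may connect
    srv.listen(1)
    return srv

def peer_credentials(conn):
    """Linux-specific: return (pid, uid, gid) of the process on the other end."""
    data = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize("3i"))
    return struct.unpack("3i", data)
```

The service side is unchanged apart from the connect call: socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) followed by connect(SOCK_PATH) instead of a host/port pair.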

Gilles 'SO- stop being evil'