
In the near future, I will have about 50 remote computers to manage. These will be physical PCs running Debian 11, distributed all over the country. They will automatically perform a special kind of measurement repeatedly and upload the results over the internet to two web servers.

I have already implemented the system, but there is a huge problem that must be solved before deployment: I will have to be able to manage the remote computers (fix bugs, install updates, etc.), but they will be unreachable over the network once deployed.

My first idea was to make the remote machines automatically connect to the web servers to create a reverse SSH tunnel. This doesn't work because, from the internet, the servers are only accessible via HTTPS (company rule).

My second idea was to apply the Reverse Shell (RSH) technique (source). I could write a custom RSH client for the remote machines and extend the web servers with RSH server functionality. The RSH clients would run as services, periodically request commands from the RSH servers, execute the commands, and send back the results. I could then SSH to one of the web servers and issue commands to any number of remote machines.
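For illustration, the client's main loop could be as simple as the following sketch. The server address, the /rsh/commands and /rsh/results endpoints, and the JSON format are placeholders I made up for this example, not part of the existing system, and the TLS context would be replaced by the mutually authenticated one described below:

```python
# Minimal sketch of the RSH client's poll loop (placeholder endpoints and JSON).
import json
import ssl
import subprocess
import time
import urllib.request

SERVER = "https://rsh.example.com"      # placeholder address for one of the web servers

# Placeholder TLS context; the real client would use the fingerprint-pinned,
# mutually authenticated context sketched further below.
CTX = ssl.create_default_context()

def fetch_commands():
    """Ask the RSH server for pending commands (hypothetical JSON endpoint)."""
    with urllib.request.urlopen(f"{SERVER}/rsh/commands", context=CTX, timeout=30) as resp:
        return json.load(resp)          # e.g. [{"id": 1, "argv": ["apt-get", "-y", "upgrade"]}]

def post_result(cmd_id, result):
    """Send a command's output back to the RSH server (hypothetical endpoint)."""
    req = urllib.request.Request(
        f"{SERVER}/rsh/results/{cmd_id}",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, context=CTX, timeout=30):
        pass

while True:
    try:
        for cmd in fetch_commands():
            done = subprocess.run(cmd["argv"], capture_output=True, timeout=300)
            post_result(cmd["id"], {
                "exit_code": done.returncode,
                "stdout": done.stdout.decode(errors="replace"),
                "stderr": done.stderr.decode(errors="replace"),
            })
    except Exception:
        pass                            # log and retry on the next cycle
    time.sleep(60)                      # poll period
```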

But can this be made safe? I mean, I'd like to be sure that nobody else can send commands to the remote machines to steal measurement data, ruin the system, etc.

The channel is HTTPS, but the web servers only have self-signed certificates. I have read that MITM attacks can be avoided by making the RSH clients check the server certificates' fingerprints (source). I also plan to install a separate self-signed certificate on every RSH client and have the RSH servers check the client certificates' fingerprints. Commands would only be sent and processed if this mutual, fingerprint-based authentication succeeds. Before the system is deployed, the actual fingerprints could be installed wherever they need to be checked.
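A minimal sketch of the client-side check, assuming the certificate/key paths and the pinned SHA-256 value below are placeholders that would be provisioned before deployment:

```python
# Hedged sketch of client-side fingerprint pinning for a self-signed server
# certificate; paths and the pinned value are placeholders, not real data.
import hashlib
import http.client
import ssl

PINNED_SERVER_SHA256 = "0123abcd..."  # hex SHA-256 of the server cert, recorded before deployment

def open_pinned_connection(host: str, port: int = 443) -> http.client.HTTPSConnection:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False        # self-signed cert: pin its fingerprint instead
    ctx.verify_mode = ssl.CERT_NONE
    # Present this machine's own certificate so the RSH server can pin it in turn.
    ctx.load_cert_chain(certfile="/etc/rsh-client/client.pem",
                        keyfile="/etc/rsh-client/client.key")

    conn = http.client.HTTPSConnection(host, port, context=ctx, timeout=30)
    conn.connect()
    der = conn.sock.getpeercert(binary_form=True)   # DER bytes of the server cert
    if der is None or hashlib.sha256(der).hexdigest() != PINNED_SERVER_SHA256:
        conn.close()
        raise ssl.SSLError("server certificate fingerprint mismatch")
    return conn
```

The server side would mirror this: request a client certificate during the TLS handshake and only accept commands/results on connections whose client-certificate fingerprint matches one of the fingerprints provisioned before deployment.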

Is this approach sound enough to keep the remote computers from being turned into a zombie network by an attacker? If not, what would be a good solution for managing unreachable, unattended remote PCs?

kol

1 Answer


If done properly, the combination of HTTPS and self-signed certificates should ensure the confidentiality and integrity of the communication between clients and servers.

However, implementing your own "secure" RSH software system seems like a bad idea from a security perspective. Are you certain that you will be able to do so without introducing a bunch of vulnerabilities yourself? Are you sure that any "smart" IDS/IPS or other protections that are in place won't screw you over? Will you be able to commit to maintaining this system for as long as deemed necessary, whatever that might involve?

Also, is manually configuring each remote device really what you want? The first thing that popped into my mind when I read this was to use Puppet to orchestrate everything.

Alan Verresen
  • Yes, I also think mutual certificate authentication can make RSH safe -- but I also have a bad feeling about it... It might just be its bad reputation (RSH backdoors). I'd be happy to use something else, but I don't have any ideas for what. Regarding installing the remote machines before deployment: I will use a declarative, scripted approach, probably Ansible. My problem is their remote management *after* deployment, when they are unreachable (no SSH or anything to these machines). – kol Feb 18 '22 at 16:41
  • Yes, that's why I mentioned Puppet. Unlike using Ansible to push commands to the remote devices, Puppet lets the remote devices pull their configuration from the Puppet master over HTTPS. – Alan Verresen Feb 18 '22 at 17:07