54

The most straightforward way to install Node.js on Ubuntu or Debian seems to be NodeSource, whose installation instructions say to run:

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -

This clashes with some basic security rules I learned long ago, such as "be suspicious of downloads" and "be cautious with sudo". But those rules are from long ago, and nowadays it seems like everyone is doing this...well, at least it has 350 upvotes on askubuntu.com.

As I read various opinions on other sites, I'm finding that some people think curl-pipe-sudo-bash is unsafe, some think it's just as safe as any other practical installation method, and others explore the problem without giving a decisive opinion.

Since there's no clear consensus from other sites, I'm asking here: Is curl-pipe-sudo-bash a reasonably safe installation method, or does it carry unnecessary risks that can be avoided by some other method?

Krubo
  • This makes you trust the server you downloaded from -- note that normally, you don't need to trust the server, because if you're downloading an RPM or deb from your distro, it's signed, so you can just trust the signature to ensure that you have a genuine package *even if an attacker controls the mirror/server you downloaded it from*, or if that attacker controls your ISP and is substituting their own host, etc. – Charles Duffy Jul 13 '19 at 16:03
  • Note too that it's very possible to detect *whether* code is being piped to bash (via timing analysis), so folks can give different download results for code being saved for inspection vs code being run directly. – Charles Duffy Jul 13 '19 at 16:05
  • Only if preceded by `curl ...| less` – waltinator Jul 15 '19 at 00:40
  • http://blog.taz.net.au/2018/03/07/brawndo-installer/ - it's got what users crave. `alias brawndo='curl $1 | sudo bash'` – cas Jul 15 '19 at 04:13
  • While you asked specifically regarding "safety" (and this is security.se after all), I'd like to mention that there might be other interesting factors besides safety when evaluating an installation method (examples: can you find out later what was installed? Can you uninstall easily and reliably? Are you notified about security updates? Can you install different versions of the same software on one system?) – oliver Jul 15 '19 at 12:50
  • @cas, `curl $1` doesn't look at an argument to the `brawndo` alias, it looks at your shell's current argument list, which for interactive shells is usually empty. You probably want a function: `brawndo() { curl "$1" | sudo bash; }` -- or, to pass arguments past the first to the received script: `brawndo() { local url; url=$1; shift; curl "$url" | sudo bash -s "$@"; }` (of course, all that is said with my shell hat on; with my security hat, don't do any of this). – Charles Duffy Jul 15 '19 at 15:00
  • @CharlesDuffy - yeah, it should be a function, not an alias (as it was in my blog post). wasn't thinking when i retyped it. still, stupid mistakes just emphasise the point that all forms of brawndo-installer are a stupid mistake :) – cas Jul 16 '19 at 00:36

6 Answers

40

It's about as safe as any other standard[1] installation method as long as you:

  • Use HTTPS (and reject certificate errors)
  • Are confident in your certificate trust store
  • Trust the server you're downloading from

You can, and should, separate the steps out -- download the script[2], inspect it, and see if it's doing anything fishy before running the script you downloaded[3]. This is a good idea. It won't hurt anything if you do it, and you might catch a compromise, which you can report to the source and the community at large. Be prepared to dig through quite a lot of Bash, if my experience with such things is any indicator. You can also try 'expanding' it -- separately downloading any scripts it would fetch and tweaking the main script to call those local copies -- if you're particularly worried about evil servers, but at some point you have to decide to just use a different server if you trust the first one so little.
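As a minimal sketch of that separation (setup_node.sh is just an arbitrary local filename; -f makes curl fail outright on HTTP errors instead of saving an error page):

curl -fsSL https://deb.nodesource.com/setup_12.x -o setup_node.sh
less setup_node.sh          # read it; look for anything fishy
sudo bash setup_node.sh     # run it only once you're satisfied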

Be aware that if the server (deb.nodesource.com) is compromised, you basically have no recourse. Many package managers offer to verify GPG signatures on packages, and even though a fundamental part of the keysigning architecture is broken, this does still by and large work. You can manually specify the CA for wget and curl, though this only proves you're really connecting to that server, not that the server is serving safe code or that it's legitimate code from the creators.[4]
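For what it's worth, manually pinning the CA looks something like this (nodesource-ca.pem is a hypothetical local copy of the CA certificate you expect the server's certificate to chain to):

curl --cacert nodesource-ca.pem -fsSL https://deb.nodesource.com/setup_12.x -o setup_node.sh
wget --ca-certificate=nodesource-ca.pem https://deb.nodesource.com/setup_12.x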

If you're worried about arbitrary code execution, APT definitely allows that (packages can ship maintainer scripts that run as root at install time), and I'm fairly confident both Homebrew and Yum do as well. So, comparatively, it's not unsafe. This method allows greater visibility; you know precisely what's happening: a file is being downloaded, and then interpreted by Bash as a script. Odds are good you have enough knowledge already to start investigating the script. At worst, the Bash may call another language you don't know, or download and run a compiled executable, but even those actions can be noticed beforehand and, if you're so inclined, investigated.
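If you want to see that for yourself, you can pull apart a .deb before installing it (a sketch; nodejs as the package name is just an example):

apt-get download nodejs                  # fetch the .deb without installing it
dpkg-deb -e nodejs_*.deb control-dir     # extract control data, incl. maintainer scripts
less control-dir/postinst                # this script runs as root at install time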

As a side note, given that a lot of the time you need to install things with sudo, I don't see its use here as any special concern. It's mildly disconcerting, yes, but no more so than sudo apt install ....


[1]: There are significantly safer package managers, of course -- I'm only talking about standard ones like APT and yum.

[2]: ...while being careful with your copy/pastes, naturally. If you don't know why you should be careful with your copy/pastes, consider this HTML: Use this command: <code>echo 'Hello<span style="font-size: 0">, Evil</span>!'</code>. To be safe, try pasting into a (GUI) text editor, and ensure you copied what you think you did. If you didn't, then stop trusting that server immediately.

[3]: You can actually detect whether the script is just being downloaded or being downloaded-and-executed, because interpreting a script with Bash takes a different amount of time than saving it to a file, and Linux's pipe system can "back up", which can make those timing differences visible to the server. If you ran the exact curl | sudo bash command they gave, your examination is (at least if it's a malicious server...) meaningless.
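If you want to convince yourself the backpressure is real, something like this illustrates it (the URL is a stand-in; the loop just consumes the download one line per second):

curl -s https://example.com/install.sh | while read -r line; do sleep 1; done

Once the pipe buffer (about 64 KiB on Linux) fills, curl stalls, so the server sees the file being consumed at the reader's pace -- exactly the signal a malicious server uses to tell "saving to disk" apart from "executing".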

[4]: Then again, it looks like NodeSource is creating some sort of custom installer, which wouldn't be signed by the Node team anyway, so... I'm not convinced that it's less safe in this particular case.

Nic
  • Upvoted, but you did miss a few important considerations. 1) Make sure the source of the download is trustworthy (not some fly-by-night domain - HTTPS is free these days, and never did mean a domain wasn't malicious - or a writable file in some cloud or anything). 2) Bear in mind that you're trusting the server absolutely, which is not necessary. Linux package managers (for example) usually support and sometimes require a GPG signature or similar, so even if somebody compromised the server and replaced the package, it would get rejected. Bash has no such protection. – CBHacking Jul 12 '19 at 23:47
  • @CBHacking I addressed the first point in the second to last paragraph, but I'll make it more prominent. You're right, that's important. For the second, good point, and also a question: How do you _get_ the public keys, out of curiosity? From the [normal network](https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d6955275f)? That's a serious question, not some sarcastic setup; has that issue been fixed, or will trying to use the global keyserver network still make things die? – Nic Jul 13 '19 at 00:24
  • Getting the GPG public keys is still kind of a mess, yeah. You could treat the keys that come with the OS as the start of a trust chain, but I don't think that's how it's done in practice. It at least requires more effort from the attacker, though; they need to not only replace the package, but also replace the keyfile. PKI-based code signing is debatably more secure - at least you can check who issued (signed) the cert and see if you trust them, which is sort of theoretically possible with GPG but in practice basically never happens - but the FOSS community doesn't generally go in for that. – CBHacking Jul 13 '19 at 00:54
  • @CBHacking Or they could upload a fake package entirely. I took another look at the source for this example and it looks like NodeSource is making a custom Node installer. They could, at least theoretically, package and sign whatever malware they wanted. You're definitely right in the general case (which is why I edited my answer) but in this _specific case_ I don't think it reduces security. – Nic Jul 13 '19 at 03:30
  • @CBHacking: Most Linux distros ship with a default set of trusted key roots for the package managers. Even though this usually uses GPG, the trust mechanism is set up essentially like a PKI. Using PKI wouldn't really be more secure. – Lie Ryan Jul 13 '19 at 12:25
  • As a reminder, you should be careful if you're copying text into your terminal: http://thejh.net/misc/website-terminal-copy-paste ; Also IDN homograph attacks might make it a bit more difficult to verify server identity. – Larkeith Jul 13 '19 at 23:29
  • @Larkeith I kinda assumed people knew not to copy/paste blindly already. You can also `font-size: 0`, etc. You can even do it without leaving an immediately visible trace in the HTML -- try running this JS in your console, then copying anything on the page you ran it on: `document.oncopy = e => { e.clipboardData.setData('text/plain', 'echo Hello'); e.preventDefault(); }`. I'll add a note, though, in case people actually don't know. – Nic Jul 14 '19 at 00:24
  • _It won't hurt anything if you do it_ - I disagree. It's far more likely that your text editor can be compromised. Did you remember to unset `LESSOPEN` et al? – forest Jul 14 '19 at 07:17
  • @NicHartley, ...also, note that rpm and dpkg aren't the only competing package formats out there. Consider Nix (wherein all builds are run in a networkless sandbox with only access to their declared dependencies; wherein packages aren't allowed to create setuid files; wherein all software is addressed by a hash of its sources, dependencies and build steps) as an alternative that does far better than any of them, and thus far, *far* better than the `curl | bash` travesty. – Charles Duffy Jul 14 '19 at 13:27
  • @CharlesDuffy Good point. There are more secure installation systems. Nix even has NodeJS. I'm unable to edit right now, but by all means add that to the answer, or write your own. Given that the question was "how secure is curl piped to bash", other nonstandard installation methods are worth mentioning. – Nic Jul 14 '19 at 14:37
  • @CharlesDuffy After some thinking, I'm... really not sure how to incorporate that into this answer. If you'd like to give it a shot, please do -- that's a valuable point and it shouldn't be relegated to a comment. Either that, or write up a proper answer, so I can upvote it. – Nic Jul 15 '19 at 00:56
  • @NicHartley, ...I've put an answer together; your feedback on what I covered/missed/probably should have left out would be welcome. – Charles Duffy Jul 15 '19 at 12:12
17

There are three major security features you'd want to look at when comparing curl ... | bash installation to a Unix distribution packaging system like apt or yum.

The first is ensuring that you are requesting the correct file(s). Apt does this by keeping its own mapping of package names to more complex URLs; the package name for the OCaml package manager is just opam, offering fairly easy verification. By contrast, if I use opam's curl/shell installation method, I need to verify the URL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh, using my personal knowledge that raw.githubusercontent.com is a well-run site (owned and run by GitHub) that is unlikely to have its certificate compromised, that it is indeed the correct site for downloading raw content from GitHub projects, that the ocaml GitHub account is indeed the vendor whose software I want to install, and that opam/master/shell/install.sh is the correct path to the software I want. This isn't terribly difficult, but you can see the opportunities for human error here (as compared to verifying apt-get install opam) and how they could be magnified with even less clear sites and URLs. In this particular case, too, an independent compromise of either of the two vendors above (GitHub and the OCaml project) could compromise the download without the other being able to do much about it.

The second security feature is confirming that the file you got is actually the correct one for the name above. The curl/shell method relies solely on the security provided by HTTPS, which could be compromised on the server side (unlikely so long as the server operator takes great care) and on the client side (far more frequent than you'd think in this age of TLS interception). By contrast, apt generally downloads via HTTP (thus rendering the entire TLS PKI irrelevant) and checks the integrity of downloads via a PGP signature, which is considerably easier to secure (because the secret keys don't need to be online, etc.).

The third is ensuring that, once you have the correct files from the vendor, the vendor itself is not distributing malicious files. This comes down to how reliable the vendor's packaging and review processes are. In particular, I'd tend to trust the official Debian or Ubuntu teams that sign release packages to have produced a better-vetted product, both because that's the primary job of those teams and because they're doing an extra layer of review on top of what the upstream vendor did.

There's also an additional sometimes-valuable feature provided by packaging systems such as apt that may or may not be provided by systems using the curl/shell install procedure: an audit of installed files. Because apt, yum, etc. keep hashes for most of the files supplied by a package, it's possible to check an existing package installation (using programs such as debsums or rpm -V) to see if any of those installed files have been modified.
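For example (opam as the package name is illustrative):

debsums -s opam    # Debian/Ubuntu: print only files whose checksums no longer match
rpm -V opam        # RPM systems: report files changed since installation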

The curl/shell install method can offer a couple of potential advantages over using a packaging system such as apt or yum:

  1. You're generally getting a much more recent version of the software and, especially if it's a packaging system itself (such as pip or Haskell Stack), it may do regular checks (when used) to see if it's up-to-date and offer an update system.

  2. Some systems allow you to do a non-root (i.e., in your home directory, owned by you) install of the software. For example, while the opam binary installed by the above install.sh is put into /usr/local/bin/ by default (requiring sudo access on many systems), there's no reason you can't put it in ~/.local/bin/ or similar, thus never giving the install script or subsequent software any root access at all. This has the advantage of ensuring that a root compromise is avoided, though it does make it easier for software you run later to tamper with the installed copy you're using.
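A sketch of that unprivileged pattern (how you point the script at ~/.local/bin varies by installer -- a prompt, an environment variable, or a flag -- so treat that last step as an assumption about this particular script):

mkdir -p ~/.local/bin
curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh -o opam-install.sh
sh opam-install.sh    # inspect first; run without sudo and direct it to ~/.local/bin instead of /usr/local/bin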

cjs
  • You have missed one _disadvantage_: your local package manager doesn’t know about software installed this way. So any automated checks/downloads for patched versions won’t work. – Gaius Jul 15 '19 at 10:30
  • Actually, many software systems where this is both a recommended and typical installation know how to do their own update checks and updates. Often they are (usual language-specific) packaging systems themselves (e.g., pip, rvm, Haskell Stack). But this is certainly something to check and keep in mind for whatever particular system you install this way! – cjs Jul 15 '19 at 13:52
7

"Reasonably Safe" depends on your goalposts, but curl | bash is well behind state-of-the-art.

Let's take a look at the kind of verification one might want:

  • Ensuring that someone malicious at your ISP can't do a man-in-the-middle to feed you arbitrary code.
  • Ensuring that you're getting the same binaries the author published
  • Ensuring you're getting the same binaries that someone downloading the same filename also got.
  • Ensuring that the binaries you download reflect a specific, auditable set of sources and build steps, and can be reproduced from same.
  • Separating installing software from running software -- if you're installing software to be run by an untrusted, low-privileged user, no high-privileged account should be put at risk in the process.

With curl | sudo bash, you get only the first, if that; with rpm or dpkg you get some of them; with nix, you can get all of them.

  • Using curl to download via https, you have some safety against a man-in-the-middle attacker, insofar as that attacker can't forge a certificate and key that's valid for the remote site. (You don't have safety against an attacker who broke into the remote server, or one who has access to the local CA your company put into all CA store lists on corporate-owned hardware so they could MITM outgoing SSL connections for intentional "security" purposes!)

    This is the only threat model that curl | sudo bash is sometimes successful at protecting you against.

  • Ensuring that you're getting the same binaries the author published can be done with a digital signature by that author (Linux distributions typically distribute a keychain of OpenPGP keys belonging to individuals authorized to publish packages to that distribution, or have a key they use for packages they built themselves, and use access control measures to restrict which authors are able to get packages into their build systems).

    Deployed correctly, rpm or dpkg gives you this safety; curl | bash does not. (A verification sketch follows this list.)

  • Ensuring that requesting the same name always returns the same binaries is trickier, if an authorized author's key could have been captured. This can be accomplished, however, if the content you're downloading is hash-addressed; to publish different content under the same name, an attacker would need either to break the hash function or to decouple the name-to-hash lookup from the file's actual contents (a substitution that is trivially detected if what's published is the hash of the binary itself).

    Moving to hash-addressed build publication has two possible approaches:

    • If the hash is of the outputs of the build, an attacker's easiest approach is to find the mechanism by which the end-user looked up that hash and replace it with a malicious value -- so the point-of-attack moves, but the vulnerability itself does not.

    • If the hash is of the inputs to the build, checking that the output of the build genuinely matches those inputs requires more work (namely, rerunning the build!) to be done to check, but that check becomes far harder to evade.

    The latter approach is the better one, even though it's expensive to check and puts extra work on the folks doing software packaging (to deal with any places where the author of a program folded timestamps or other non-reproducible elements into the build process itself).

    Dealing with malicious authors is not in the security model that rpm or dpkg tries to address, and of course, curl | bash doesn't do anything about it either.

  • Separating installation from runtime is a matter of designing the serialization format up-front without dangerous features -- not supporting setuid or setgid bits, not supporting install-time unsandboxed run scripts with arbitrary code, etc. curl | sudo bash gives you no protection here, but rpm and dpkg also don't. nix, by contrast, lets any unprivileged user install software into the store -- but the NAR serialization format it uses won't represent setuid or setgid bits, content in the store is unreferenced by any user account that doesn't explicitly request it (or by a piece of software that depends on it), and cases where software needs setuid privileges require explicit out-of-band administrative action before those bits actually get set.

    Only oddball, niche, specialty software installation methods like nix get this right.
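To make those points concrete (package names are placeholders; dpkg-sig is a separate Debian tool rather than part of dpkg itself, and the nix line assumes a standard nixpkgs channel):

rpm --checksig package.rpm       # verify the package's embedded OpenPGP signature
dpkg-sig --verify package.deb    # the same idea for a signed .deb
nix-env -iA nixpkgs.nodejs       # nix: unprivileged, hash-addressed installation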

Charles Duffy
  • Keep in mind that unless the keys for the specific project come with your distro -- not impossible, obviously, but not necessarily likely -- you'll have to get them from somewhere. That "somewhere" currently defaults to a broken keyserver network. It would be worth mentioning other trusted methods. Also, you can effectively client-side pin a specific certificate with `curl`, to remove that one attack (not the others). Finally, `curl | bash` is more likely to be up-to-date than anything from a package manager, precisely _because_ it's so uncontrolled. Probably worth a mention if nothing else. – Nic Jul 15 '19 at 14:15
  • Aside from those, which amount to minor nitpicks, I like this answer for offering a safer alternative to even "normal" installation methods, which even the currently highest-voted answer doesn't do. If only the author could have figured out how to incorporate it smoothly. – Nic Jul 15 '19 at 14:16
  • Oh oops, you do address my first point, I just missed that paragraph. Sorry. Ignore that bit. – Nic Jul 15 '19 at 14:19
  • I do agree that keeping things up-to-date is a pain; [the backlog of PRs awaiting review/merge to nixpkgs is extensive](https://github.com/NixOS/nixpkgs/pulls), and the (very!) high bar to getting a commit bit helps keep it that way. – Charles Duffy Jul 15 '19 at 15:07
6

Submitting an answer to my own question. Not sure if this is the best answer, but I'm hoping other answers will address these points.

Running curl {something} | sudo bash - on Linux is about as safe as downloading something on Windows and right-clicking "run as administrator". One can argue that this is 'reasonably safe', but, as a recent xkcd suggests, nobody really knows how bad computer security is these days. In any event, this method is NOT as safe as other installation methods.

All safer methods include a step to verify the download's integrity before installing anything, and there's no good reason to skip this step. Installers like apt have some form of this step built in. The goal is to ensure that what you have downloaded is what the publisher intended. This doesn't guarantee that the software is free of its own vulnerabilities, but it should at least protect against simple attacks that replace the download with malware. The essence is simply to verify the MD5 and SHA256 checksums posted by the software publisher (a sketch follows the list below). Some further improvements are possible:

  • It's better to get these checksums via a different network path, such as by calling a friend in another country, which would protect against MITM attacks.
  • It's better to get the checksums at least a day earlier/later, which would protect in case the publisher's website was briefly taken over but the takeover was stopped within a day.
  • It's better to verify the checksums themselves using GPG, which would protect in case the publisher's website was compromised but their GPG private key wasn't.
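A minimal sketch of the basic verification, assuming the publisher posts a SHA256SUMS file plus a detached GPG signature for it (those filenames are conventions, not something every publisher provides):

sha256sum -c SHA256SUMS                  # check the download against the published list
gpg --verify SHA256SUMS.asc SHA256SUMS   # check the list itself against the publisher's key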

One side comment: Some sites say you should download the sh script and then inspect it before running it. Unfortunately, this gives a false sense of security unless you vet the script with a practically impossible level of precision. The shell script is probably a few hundred lines, and very tiny changes (such as an obfuscated one-character change to a URL) can convert a shell script into a malware installer.

Krubo
  • "equally safe as downloading something on Windows and right-clicking run as administrator" - While that comparison may be true, it's _not common_ for Windows software vendors to instruct users to start their installer that way. – aroth Jul 14 '19 at 12:21
  • @aroth Good point: Even Windows vendors are moving to (somewhat) safer install methods nowadays. – Krubo Jul 15 '19 at 02:34
2

One option would be to attempt some behavioural analysis of the result by running the curl command separately to fetch a copy of whatever the script is.

Then run it in a Linux VM and watch what outbound connections happen, etc. You could even run file integrity monitoring on the system and see what's altered when the script runs.
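A crude sketch of that monitoring inside a disposable VM (file names are placeholders, and hashing all of / is slow -- this is illustrative, not a proper file integrity monitoring tool):

sudo find / -xdev -type f -exec sha256sum {} + > /tmp/before.txt
sudo bash install.sh
sudo find / -xdev -type f -exec sha256sum {} + > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt    # what did the script change?
ss -tupn                               # what connections is it holding open?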

Ultimately, the important context is that this behaviour could lead to compromise, but it isn't especially worse than many of the other methods by which people get software. Even with the behavioural analysis I mentioned above, you're limited by the secondary sources the script may retrieve from, which could be dynamic too -- but so are the dependencies of real software, so at some level you have to trust the source not to link something bad.

pacifist
  • Running curl separately won't necessarily give the same result. There's been software developed to detect whether there's a shell on the other end by looking at how quickly chunks of the script are retrieved by the downloading end, and inject malicious code at the end of the file if-and-only-if the timing patterns look like it's being directly piped to a shell; see https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/ – Charles Duffy Jul 14 '19 at 11:48
  • @CharlesDuffy If you run the script in the VM, then run _the script you downloaded_ again on your main machine, assuming they can't detect a VM then it should be safe. (Big assumption, I know, but at some point you should just stop trusting that server and find another installation method...) – Nic Jul 14 '19 at 17:01
  • You may avoid things due to careless scripts, but a malicious one will not make it obvious that it is doing malicious things. You will just have the rootkit in both the VM and your system afterwards, e.g. when it activates itself with a delay of three days, waits for incoming connections instead of creating connections itself, or uses other hiding techniques. It could be worth trying tools like [maybe](https://github.com/p-e-w/maybe), but note that they may not provide the best security (i.e. their sandbox may not be perfect) either. – allo Jul 15 '19 at 08:47
1

No, it's not as safe. Your download can fail in the middle.

If your download fails in the middle then you'll have run a partial script, which can potentially fail to do some operations that it was supposed to do (cleanup, configuration, etc.).

It's not likely if the script is small or your connection is fast, but it's possible, especially on a slow connection.
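Script authors can defend against this by making sure nothing executes until the last byte has arrived -- commonly by wrapping all the work in a function that is only called on the script's final line (a widely used pattern; this sketch is illustrative):

main() {
    echo "doing the actual install work here"
}
main "$@"    # a truncated download never reaches this line, so nothing runs

If the download is cut off mid-function, Bash reports a syntax error on the unterminated function instead of running half of its body.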

This is an example of the difference between safety and security. :)

user541686