
Since the beginning of the Russia-Ukraine war, a new kind of software has emerged, so-called "protestware".

In the most benign case, the devs only add some (personal) statements about the war or uncensored information to their repositories, or display them when the application starts. Since GitHub and other platforms are not banned in Russia, this can help reach users there and provide them with news.

The Open Source Initiative wrote in a blog post that it is OK to add a personal statement or commit messages with information about the war in order to reach users with uncensored information.

But there are also projects that add malicious behavior. One example is the node-ipc package, which deleted files depending on the user's geolocation. The affected versions were even assigned their own CVE (CVE-2022-23812), rated with a CVSS score of 9.8.

From a security perspective, it's best practice to install the latest version, which should fix security issues but not introduce new ones as a "feature".

But the node-ipc incident showed that any maintainer or developer can add malicious behavior to their software as a political statement.
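
For reference, a minimal sketch of how one can check whether a project pulls in node-ipc at all and pin an exact version instead of a semver range (assuming npm 7 or newer; the pinned version below is the one advisories at the time reported as unaffected, so verify it against current advisories before relying on it):

```sh
# Show every copy of node-ipc in the dependency tree, including transitive ones.
npm ls node-ipc --all

# Pin an exact version: --save-exact writes "9.2.1" instead of "^9.2.1" to
# package.json, so future installs don't silently pick up new releases.
npm install node-ipc@9.2.1 --save-exact
```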

Question:

  • New software versions can be used as a political statement. As a user, should I be concerned about political messages in software?
  • What should I do to mitigate malicious behavior?
    • I can't review the code of all used libraries and applications.
    • A lot of users do not have the knowledge to understand the code.
Manfred Kaiser
  • Any unexpected and undesired behaviour from software is a problem, protest or not. – schroeder Mar 27 '22 at 08:44
  • "What should I do to mitigate malicious behavior?" - this applies equally well to any type of malicious behavior with open-source software (or closed-source software), not just "protestware". There are a number of existing questions on that: [How can you be sure open-source code isn't malicious?](https://security.stackexchange.com/q/192553) [Are security scrutinies conducted by independent agencies on open-source software?](https://security.stackexchange.com/q/229314) [How can Linux be secure if it allows for open source contributions?](https://security.stackexchange.com/q/185701) – NotThatGuy Mar 27 '22 at 15:54
  • It's not new. Notepad++ has been doing this for a long time. Version 6.7.4 auto-typed a "#JeSuisCharlie" statement when you first launched it, for example. – OrangeDog Mar 28 '22 at 10:37
  • Security issues arguably enter new versions all the time, mostly as bugs but still... For me I think "What should I do to mitigate malicious behavior" is a question that applies regardless of whether security issues appear on purpose or not. – kutschkem Mar 28 '22 at 13:49
  • Can you explain a technical difference that singles out "protestware" from anything else unwelcome? – Robbie Goodwin Apr 02 '22 at 19:05

3 Answers


Political statements in software can be a concern for a few reasons:

  • They may result in the software being banned in your country, so you should plan for that eventuality.
  • They may result in the software being targeted (for example, the Notepad++ GitHub has been repeatedly spammed by Chinese accounts over its various version names). And this may turn into more dangerous attacks which could compromise the software.
  • They may indicate that the author is more likely to make actual changes to the software down the line.
  • They suggest that the software is probably developed by an individual, which can make it more fragile and susceptible to various issues.

But if the software actively does something malicious, then it's not "protestware". It's just malware. So you should treat it the same as if the software decided to bundle a password stealer/cryptominer/ransomware/etc - using your existing supply chain and dependency management processes.
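
As a rough sketch, that baseline might look like the following for a Node project (standard npm commands; which audit threshold you enforce is a policy choice, not a given):

```sh
# Install exactly what the committed lockfile specifies -- no silent upgrades.
npm ci

# Check the installed tree against the public advisory database
# (the malicious node-ipc versions are covered by CVE-2022-23812).
npm audit

# In CI, fail the build on known-vulnerable dependencies.
npm audit --audit-level=high
```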

The author(s) should also be blacklisted in your internal processes so that you don't use anything they have written (or anything that depends on them) again.
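
For illustration, a hypothetical sketch of such a check; the denylist.txt file (one npm username per line) is an assumed convention your org would maintain, not an established tool, and this only inspects direct dependencies:

```sh
# Hypothetical: flag direct dependencies whose maintainer list matches an
# entry in an org-maintained denylist.txt. Maintainer names can change, so
# treat this as a starting point rather than a complete control.
while read -r author; do
  for pkg in $(jq -r '.dependencies // {} | keys[]' package.json); do
    if npm view "$pkg" maintainers 2>/dev/null | grep -qi "$author"; then
      echo "WARNING: $pkg lists denylisted maintainer '$author'"
    fi
  done
done < denylist.txt
```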

It's also worth noting that this doesn't really have anything to do with "open source". Adding political messages to software is just as easy with closed source projects, and adding malicious code is much easier, because it's harder to detect. Because of this, a lot of organisations are advising against using software from unfriendly countries (such as the FCC recently stating that Kaspersky is considered an "unacceptable risk to national security").

Gh0stFish
    Big +1. I have seriously considered writing a browser extension that flags all projects with commit access for GitHub users who put malware - no matter how targeted or sincerely well-intentioned or quickly removed - into their code or carry out similar supply-chain attacks, with a warning to the users about it. Possibly also extend that to the GitHub users who downvoted the malware report and/or upvoted the author's defense / deflection / outright lies about what he'd done (about 3% of total reactions). Open source often has no corporate reputations, but maybe it should have personal ones. – CBHacking Mar 27 '22 at 13:32
    The main reason I haven't is that maintaining the list of who is and isn't on it seems like a nightmare if it ever got big, and a mostly-useless effort if it didn't, and there are already systems to monitor supply chains such as Snyk. The reason I'm tempted anyway is that a lot of individual devs aren't using such systems, and I really wish there was a good way to publicly document "missing stairs" in the open source community. – CBHacking Mar 27 '22 at 13:36
    With that said, a caveat regarding the proprietary vs. open source situation: proprietary software is usually less composed of many small pieces developed individually. While a company could certainly decide to ship malware, it would take the agreement of more people (management, the actual developers, the peer reviewers, and probably anybody else who interfaces with or dogfoods that software internally... which might be a huge list) to do so and ship it without the word getting out. Lots of open source projects are one-person affairs where there are no checks on what goes into them. – CBHacking Mar 29 '22 at 00:44
  • @CBHacking that's true to some extent - but then almost all propriety software bundles a whole load of third party dependencies, and most companies aren't really doing much management of those. – Gh0stFish Mar 29 '22 at 08:10

What should I do to mitigate malicious behavior?

I can't review the code of all used libraries and applications.

Agreed. Very, very few shops can afford to review all dependencies in depth.

But that is, IMHO, no excuse to (automatically) pull untested and unverified dependencies:

  • When you get a new dependency, you should at least smoke test it.
  • When you upgrade your package-lock.json, check which packages changed (see the sketch below): in most situations you can't verify them all in detail, but you can at least run internal tests of your software and watch for any obvious malicious behavior.
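
A minimal sketch of that review step, assuming npm 7 or newer (node-ipc and the version numbers are only illustrative targets):

```sh
# See which locked versions changed before committing an upgrade.
git diff package-lock.json

# Compare the published contents of two versions of a suspicious package.
npm diff --diff=node-ipc@9.2.1 --diff=node-ipc@10.1.1
```
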
Martin

Obviously yes

... but not because it is protestware.

Here's the bottom line: open-source software is something your org doesn't control. The people who wrote it have no legal obligations to your org for the simple and obvious reason that neither they nor your org have undertaken to form a relationship by which you could hold each other accountable. That is true whether or not the software in question is "protestware."

The truth is that anybody who uses software they didn't write is putting their fate in someone else's hands. The only thing that sets protestware apart is that we think we know the reason the software doesn't do what it claims to do -- that reason being that the actual authors have deliberately broken their software as an act of political protest. But significant breakage can happen even when everyone is trying to do a good job: I remember back in 2016 there was an innocent problem with a widely-used library (the left-pad incident) that ended up breaking everyone's webpack builds for something like 24 hours.

A person doesn't need a war to justify that kind of action. They don't even need a reasonable belief: there are plenty of very smart programmers out there who are also tin-foil-hat crazy. And the other side of that coin is that there are some organizations that a sane person would be justified in subverting. None of this changes the fact that every organization is 100% responsible for taking appropriate safeguards to prevent outsiders from interfering, deliberately or otherwise, with the pursuit of that org's objectives.

The onus is, and always has been, on consumers of third-party software to take precautions against the possibility that the software they consume may change in a way they don't like. That's true whether or not their goals diverge from the goals of the random outsiders whose software they consume. The "advent" of protestware does not change that fact whatsoever.

Tom
    There's nothing about this which is specific to open source software. – Gh0stFish Mar 29 '22 at 15:34
  • @Gh0stFish Closed-source software tends to come with transfers of money and legal accountability. – user253751 Mar 29 '22 at 18:53
    @user253751 open source does not mean free, and free does not mean open source - they're two completely different things. And all software comes with license agreements - whether or not you pay for it makes no difference. – Gh0stFish Mar 29 '22 at 19:00
  • @user253751 Examples that come to mind immediately of closed-source might include nvidia drivers, wifi firmware, multitude of packages distributed binary only (way too many examples to list just in bioinformatics). Closed-source and accountability don't go together. – doneal24 Mar 30 '22 at 01:56
  • @doneal24 ... other examples include Windows, Oracle, Salesforce and SAP HANA. – user253751 Mar 30 '22 at 08:10
  • @Gh0stFish While what you say is technically true, it's a staple of open-source license agreements (and very uncommon for closed-source ones) to explicitly dismiss liability. – Egor Hans Mar 30 '22 at 09:37
  • @EgorHans Companies like Microsoft or Oracle are absolutely not going to accept liability for any issues caused by their software, or for removing or breaking functionality that you use or rely on - they'd have gone broke years ago if they did. And unlike smaller projects, they are much more capable of fighting off any claim that you try and make against them. – Gh0stFish Mar 30 '22 at 10:26
    @EgorHans You might look at section 9d in the Windows 10 EULA as an example. You may not recover damages unless required by law, and then only up to the amount you paid for the product. Similar for other products I looked at. – doneal24 Mar 30 '22 at 12:16
  • @doneal24 Fair point, they do dismiss liability to some extent. Still, a sufficiently large customer (talking B2B license agreements in particular) would definitely be able to enforce some warranties that the GPL excludes to begin with. – Egor Hans Mar 30 '22 at 12:20
    @EgorHans Both the MS EULA and the GPL explicitly excludes any warranties so I'm not sure of the difference you're pointing out here. What customer is big enough to bully Microsoft or Oracle? If they're that big, the possible damages are huge and no company would want to take them on. – doneal24 Mar 30 '22 at 12:36
  • @doneal24 and if they're big enough to take on Microsoft or Oracle in court, they can definitely bully a small open source developer who probably hasn't ever read the GPL, and may not be able to afford a lawyer. Even if the developer is 100% in the right, big companies can drag out a legal case until the developer runs out of money and has to give in. – Gh0stFish Mar 30 '22 at 13:06
  • The big difference between issues caused by removing or breaking functionality and damage caused by provable malicious intent is that no contract or agreement will protect from the latter. A developer would not even be free to remove their own software if doing so would cause harm that the developer is aware of and intends to happen. – Xerkus Apr 14 '22 at 05:53