
A year or so ago, I set up this system which, whenever Composer (that's PHP's package/dependency manager) fetched new updates to my few (but critically required) third-party libraries, made a copy of the Composer directory and opened up WinMerge to display the differences for me to go through manually.
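For context, the mechanism was roughly a small wrapper script along these lines -- the directory names, and the assumption that both composer and WinMergeU are on the PATH, are placeholders rather than my exact setup:

    <?php
    // Rough sketch of the update-review wrapper (Windows is assumed).
    $vendorDir   = __DIR__ . '/vendor';          // Composer's package directory
    $snapshotDir = __DIR__ . '/vendor_previous'; // copy taken before updating

    // Snapshot the current vendor tree so there is something to diff against.
    exec(sprintf('xcopy /E /I /Y "%s" "%s"', $vendorDir, $snapshotDir));

    // Let Composer fetch whatever updates are available.
    passthru('composer update');

    // Open WinMerge on the old and new trees for a manual review.
    exec(sprintf('WinMergeU /r "%s" "%s"', $snapshotDir, $vendorDir));

WinMerge's /r switch does a recursive folder comparison, so every changed file in the vendor tree shows up in a single window.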

Initially, I thought I was oh-so-clever for doing this, and eagerly looked at every new change to verify that they didn't add some evil malware code.

Then, as the updates kept coming in regularly, I paid less and less attention...

Until today, when I got so sick of it that I removed the entire mechanism and went back to blindly letting it fetch whatever it wants, without me verifying anything manually.

It's bad enough to do this for my own code (mostly to make sure I didn't leave something temporary in by mistake when working on a bunch of different files in my system), but doing this for others' massive code trees is just hellish.

They would frequently edit many different files -- it wasn't just one line in one file per update or anything. I just couldn't keep doing it; it's untenable.

But now, I obviously feel bad for no longer doing it. And it makes me wonder: does anyone in the entire world really do this? Considering how little effort people seem to put into almost everything, I cannot imagine that many people sit there and go through all the updates.

And let's be honest: even if I did continue doing this, I might not even catch a subtly introduced bug or hole, which could creep in gradually through many small changes over time. And of course the rest of my computer is full of uncontrolled proprietary nightmare blobs which do unknown stuff around the clock...

T R
  • I look through all the direct front-end updates to our system at work. At least, for the script we serve and that I think could mess things up; I tend to gloss over stuff like jquery updates figuring that enough eyes are on it that mine aren't needed. – dandavis Dec 03 '20 at 09:44

2 Answers


Reviewing diffs between versions of open-source libraries and packages seems very excessive to me. The amount of time it takes, along with the necessary knowledge of the architecture and implementation details of the project, makes it of very limited utility. The general idea of vetting external software isn't excessive, though. Various organizations that I've worked with have had different policies or expectations for how to handle third-party dependencies.

Some of the strategies that I've seen, used in various combinations, include:

  • Regular risk assessments. For commercial products, this would be a vendor assessment. For open-source products, evaluating the release cycle, testing, issue tracking, maintenance, and support for the product against needs. A component that doesn't meet needs can be scheduled for replacement based on risk.
  • Static analysis of open-source software components. Some vendors offer free scanning for open-source projects, so it could be a review of that report. It could also be using the organization's static analysis tools to scan the included open-source libraries.
  • Software composition analysis to track the versions of third-party dependencies and monitor for reported vulnerabilities (a minimal Composer-based sketch follows this list).
  • System-wide scanning and testing. If the third-party dependency has a vulnerability that is exploitable, it may be detected when testing the entire system.
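Since the question is about Composer, a minimal example of the composition-analysis idea is Composer's built-in audit command (Composer 2.4 and later), which checks the installed packages against the public security advisory database and exits with a non-zero status when it finds known advisories. The CI-style wrapper below is only an illustrative sketch, assuming composer is on the PATH:

    <?php
    // Illustrative sketch: fail a build step when `composer audit` reports
    // known advisories for the installed packages (requires Composer 2.4+).
    passthru('composer audit', $exitCode);

    if ($exitCode !== 0) {
        fwrite(STDERR, "Known vulnerabilities reported for installed dependencies.\n");
        exit(1);
    }

    echo "No known advisories for the current dependency set.\n";

In most CI systems you could also just run composer audit directly and let the non-zero exit code fail the step; the wrapper only makes the gate explicit.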
Thomas Owens
  • SonarCloud is one such tool which is free for public projects - I am not associated with the company. Just providing an example here since we use it at work. – Chethan S. Dec 09 '20 at 06:55

No individual can do it for their own projects, as you have discovered. But I'm sure that the security teams in Google, Facebook, Apple, Amazon and Microsoft do indeed vet all external software changes.

Mike Scott
  • Why do you think anyone would Waste Precious Time (TM) instead of Doing Actual Work (R)? – tomash Dec 23 '20 at 21:21