I'm sure I'm not the only one who is scared to death of blindly trusting random strangers with all my data. Yet that's effectively what I do if, for example, I use this library, which I'm about to: https://github.com/jfcherng/php-diff

It is impossible for me to know whether this (or any other) library does only what it claims, or how good the author's own security practices are. Tomorrow, his account could be compromised and upload an update that adds malware behaviour to his originally clean library, and then Composer will pull it down.
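One partial mitigation (it doesn't solve the trust problem, but it narrows the window): Composer only installs a newer release if your version constraint allows it, so pinning an exact version and committing `composer.lock` means a compromised future release is not pulled down automatically. A minimal sketch of a `composer.json` with an exact pin (the version number here is illustrative, not necessarily a real release of that library):

```json
{
    "require": {
        "jfcherng/php-diff": "6.10.0"
    }
}
```

With the lock file committed, `composer install` reproduces exactly the recorded versions; only an explicit `composer update` moves you to a newer release, which at least makes the upgrade a deliberate step rather than a silent one.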

I wouldn't mind using a slightly outdated version if that version was "verified" somehow by a third party. However, I've never encountered such a thing.

Is there really no group of security-aware people who go around and "vet" libraries such as the one I've linked to above, marking specific releases as "OK" so that I can use them without (as much) fear? Yes, I have to trust that group, but it's at least another entity that claims to have done some kind of real vetting.

I almost feel like I'm missing something important. As in, nobody in their right mind would be using these GitHub projects like this, or something. Yet I have to. It is impossible for me to go through others' code (unless very trivial), but others claim/seem to be able to, so it seems reasonable that they don't just do it locally for themselves, but report their trusted "seal of approval" somehow to the world.

There must be companies out there who use open source projects and do have some paid guy going through the code before they just put it into production on their enterprise mainframes, right? Wouldn't it be a great way for them to give something back without having to donate money or actual code, by simply sending a signal saying: "We, Entity X, believe that version Y of library Z is clean, and will be using this in production."?

If this were standardized in some way, GitHub (or others) could display a little list of gold stars/badges next to each version, showing which trusted companies/groups have vetted the code.

Is there something I'm missing? Why isn't this (apparently) a thing?

2 Answers

There is a major difference between "looks good for my own use case" and a more general code audit which can be applied to many use cases. And this difference has associated costs - why should companies bear these additional costs?

There are sometimes publicly financed or company-sponsored real audits for critical software, for example crypto libraries like OpenSSL or Botan. But don't expect any library you happen to want to use to undergo such an audit. Many of these libraries would likely not survive one anyway, because they are not designed with security as a major concern. Or a proper audit is simply not realistic because of a lack of proper documentation (code documentation, design documentation, ...) and an unwillingness or lack of time on the developers' part to invest a significant share of their own time in the assistance a proper audit needs (creating documentation, design discussions, fixes, ...).

"We, Entity X, believe that version Y of library Z is clean, and will be using this in production."?

I guess the legal departments of many companies would block such a public statement because they fear backlash if a problem is later found in these libraries and someone claims they only used the library because they trusted this statement. A discovered problem would also shine a bad light on the capabilities of the auditing company, which might result in customers no longer trusting its products. Therefore companies are usually very careful before making statements like these, which drives the cost of such an audit even higher.

Steffen Ullrich

The basic question you have is whether you can trust a specific piece of software. Independent verification is just one way of increasing that trust. There are other ways, easier and less costly, which is one of the reasons why such independent verifications are not "a thing".

Here are some points that should influence your decision to trust software:

  • Formal verification using Common Criteria (versions of z/OS are EAL4)
  • Verification, like has been done for the openssl library
  • Is it produced by a reputable vendor (Oracle, Microsoft, Red Hat, etc. — one that you trust)?
  • Is it backed or distributed by a reputable vendor?
  • How long has the software existed?
  • Is there a list of bug-fixes? How bad were those bugs?
  • Is it generally used or are you the first user?
  • Are there questions about this software on the Internet, with answers?
  • Or are there warnings about this software?
  • Can I do my own verification and are there no (obvious) problems?
  • Testing, from pen-tests to functional tests
  • Do the author(s) respond to questions?
  • Do the author(s) use the software themselves?

None of these is a guarantee that the software is bug-free, safe or secure. Otherwise, we would never have to patch our systems again! And none of them is actually required. But it may explain why there is not such a focus on independent verification. You must do your own risk analysis. And you should watch the security advisories.
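On the "watch the security advisories" point: for Composer-based projects this can be partly automated. Composer (since 2.4) ships an `audit` command that checks your locked dependencies against known security-advisory databases. A sketch of the workflow (exact output varies by Composer version):

```
# Install exactly what composer.lock records, rather than
# resolving versions anew
composer install

# Check the locked packages against known security advisories
composer audit
```

This does not replace a code audit — it only catches problems that have already been reported — but it is a cheap, repeatable check you can run in CI.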

Ljm Dullaart