
I tried to find a resource about how one verifies a service's security and its claims when it is not open source. Take 1Password, for example: they describe many good approaches that they use to secure passwords. But without access to the source code, how can a user like me verify that those security measures are in place, or that no metadata or secrets are sent along the way?

Is there a standard out there for such cases? Is that what security audits are for? If so, are those security companies simply considered trusted, so that at the end of the day we are trusting people's word and not code or math?

Do people just trust such services? If so, are documents such as security whitepapers nothing more than advertisements, or is there something to verify?

Note: I use 1Password as an example, as they seem trusted, are widely used around here, and are often compared favorably to similar services.

– Okoba (edited by schroeder)
  • How would you verify the security of open source? OK, you've got your 1000MB + 1000MB of Firefox/Google Chromium sources, and vulnerabilities in both are discovered every week. The same applies to the Linux kernel, the OpenSSL library, etc. etc. etc. – Artem S. Tashkinov Feb 06 '23 at 15:24
  • What is your point? Those libraries are investigated thoroughly every day, and people can read the code and, for example, see whether OpenSSL verifies certificates correctly and does not have a backdoor for a specific certificate. I am not starting a discussion about open vs. closed source. My question is mainly about the verification process for claims about a security system today. – Okoba Feb 06 '23 at 15:33
  • At some point, there's got to be trust. Auditors are one way of establishing a company is doing what they say. Is it foolproof? No. But look at things you trust every day and probably never give a second thought to, like your car or an airplane. You trust someone has audited the manufacturing process or the maintenance guidelines. – kenlukas Feb 06 '23 at 15:34
  • @Okoba: *"Those libraries are investigated thoroughly every day"* - No. Just because it is possible in theory does not mean that it is actually done. It is a lot of effort - who pays for this? *"... and does not have a backdoor for a specific certificate"* - Based on this comment, you don't seem to be asking about security in general (like security issues caused by bugs - which is most of it), but more about malicious behavior (backdoors etc.). Many open source projects had critical bugs for years and nobody noticed (like log4j). – Steffen Ullrich Feb 06 '23 at 15:36
  • *people can read the code and, for example, see if OpenSSL verifies the certificate correctly* - yeah, really? Then why was that exact issue only resolved recently? You seem to **believe** that open source **guarantees** 1) multiple eyes 2) no vulnerabilities. In reality, neither is true. There's zero guarantee whatsoever. Backdoors - yeah, those are *somewhat* easier to spot, except when they aren't, considering modern code complexity. The problem is ... even that's not guaranteed: https://www.theverge.com/2021/4/30/22410164/linux-kernel-university-of-minnesota-banned-open-source – Artem S. Tashkinov Feb 06 '23 at 15:36
  • @SteffenUllrich I am asking about the general case, but yes, one of the big security concerns in such a system is backdoors too. I am not implying that open source is good. I am asking how one checks the claims of those service providers. Is it a trust-based system, or is there something else going on? – Okoba Feb 06 '23 at 15:40
  • Hundreds of Python, Ruby on Rails, and NPM libraries have been backdoored over the past decade. Not only that, some websites and organizations have been hacked this way. – Artem S. Tashkinov Feb 06 '23 at 15:42
  • It's a trust-based system, yes. For me, when something is used by governments of the world and three-letter US agencies (NSA/FBI/CIA), I can trust it. When something is an industry standard, e.g. Adobe products, I can trust it. Other than that, you could use SandBoxie+ (which I love), run it in a VM, or run it on a separate PC. The best way to treat *any* software is *not* to trust it explicitly. Have a backup plan for everything. Probably the only thing I trust in software is encryption algorithms, because they are math, not software. But even they can be misused. – Artem S. Tashkinov Feb 06 '23 at 15:44
  • Check the recent news on LastPass. A lot of food for thought. They did misuse encryption. https://www.cnet.com/tech/services-and-software/lastpass-customers-need-to-change-all-of-their-passwords/ – Artem S. Tashkinov Feb 06 '23 at 15:47
  • @SteffenUllrich "who pays for this?" If there is some kind of backdoor/bugdoor in Open Source software, then black hat hackers can monetize it by exploiting it, meaning that they have motivation to do so, unless the software is too unpopular. Also, it's possible to run some kind of static code analyzer on published code in order to use possible bugdoors. – KarmaPeasant Feb 06 '23 at 16:33
  • "But, without access to the source code, how can a user like me verify that those security measures are in place" But even if they gave source code that you (or somebody whom you trust) could easily inspect, they could have easily be running modified malicious version on their servers. – KarmaPeasant Feb 06 '23 at 17:15
  • As you can imagine, this type of question has been asked here many times. It is a complex problem, and there are no simple solutions to complex problems. And you ask questions in the general case, but each company and each product is different; you can't make blanket claims for every product in the world. – schroeder Feb 06 '23 at 17:18
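To make the static-analyzer idea mentioned above concrete: such a check can be as simple as walking a program's syntax tree and flagging risky constructs (real tools such as Bandit for Python work on this principle, with far more rules). A toy, hypothetical sketch, not a real audit tool:

```python
import ast

# Call names that are common vehicles for injected or hidden behavior.
SUSPICIOUS = {"eval", "exec", "compile", "__import__"}


def flag_suspicious_calls(source: str, filename: str = "<module>"):
    """Toy static analyzer: report calls to suspicious built-ins."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS:
                findings.append((filename, node.lineno, node.func.id))
    return findings


sample = "data = eval(user_input)"  # hypothetical one-line "project"
print(flag_suspicious_calls(sample))  # -> [('<module>', 1, 'eval')]
```

A hit from such a tool is only a lead, not proof of a bugdoor - which is exactly why audits still need paid human time, as discussed in the comments above.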

1 Answer


> Do people just trust such services?

Basically it boils down to trust. There is no magic oracle that can tell you for sure whether a complex piece of software is secure or not - no matter if it is closed source or open source.

With open source there is trust that nobody will hide a backdoor in the software, based on the belief that it would be easily detected. This belief could be wrong, since a well-designed backdoor might not actually be distinguishable from a bug. Often there is also trust that there will be fewer bugs, since "everybody can audit and fix the code". But in reality such audits happen far less often than would be useful, since they are costly (they might not take money directly, but time and knowledge, i.e. opportunity costs) - and as a result, critical bugs are still found in open source software after sitting there for many years. There is also usually trust that a downloaded binary actually reflects the published source code, i.e. that one could in theory build it from scratch. But rarely does anyone actually verify this belief.
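To make the "backdoor indistinguishable from a bug" point concrete, here is a contrived sketch (hypothetical code, not from 1Password or any real product) where a token check contains a flaw that reads like an ordinary oversight but acts as a skeleton key:

```python
import hashlib
import hmac


def check_token(stored: bytes, supplied: bytes) -> bool:
    # Looks like a routine comparison, but slicing the stored token to the
    # supplied length means an *empty* input always "matches".
    # A reviewer cannot tell whether this is a bug or a planted backdoor.
    return stored[: len(supplied)] == supplied


def check_token_fixed(stored: bytes, supplied: bytes) -> bool:
    # Constant-time comparison of fixed-length digests closes the hole.
    return hmac.compare_digest(
        hashlib.sha256(stored).digest(), hashlib.sha256(supplied).digest()
    )


assert check_token(b"s3cret", b"")  # empty token accepted: the "bugdoor"
assert not check_token_fixed(b"s3cret", b"")
```

The "build it from scratch" check itself is mechanically simple, assuming the project supports reproducible builds; the hard part is that few projects do, and few users bother. A minimal sketch, with hypothetical file names:

```python
import hashlib
from pathlib import Path


def sha256_of(path: str) -> str:
    # Stream the file so large binaries need not fit in memory.
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


# "vendor-release.bin" is the downloaded binary; "local-build.bin" is what
# you compiled yourself from the published sources (both names hypothetical).
if sha256_of("vendor-release.bin") == sha256_of("local-build.bin"):
    print("download matches your own build of the published source")
else:
    print("mismatch: the build is not reproducible, or the binary differs")
```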

With closed source it is mainly about trust too. It is trust in the good track record of a specific company. It is trust that the company has an interest in providing secure software, at least if its business model is built around such a reputation. And it is trust in the company's marketing.

In more security-sensitive environments, more is needed than just trusting the vendor's marketing (closed source) or having the theoretical ability (but usually not the knowledge and/or time) to thoroughly inspect the source code (open source). In these cases additional trust factors might help, for example in the form of external certifications. There are companies that specialize in auditing software, both in design and implementation, in doing penetration tests, etc. If independent companies with a good reputation review the software and its design and attest that it is fine, this is usually more convincing than the claims of the vendor alone.

– Steffen Ullrich
  • "who pays for [audits]?" If there is some kind of backdoor/bugdoor in Open Source software, then black hat hackers can monetize it by exploiting it, meaning that they have motivation to do so, unless the software is too unpopular. Also, it's possible to run some kind of static code analyzer on published code in order to use possible bugdoors. – KarmaPeasant Feb 06 '23 at 17:17
  • Thank you for the answer. – Okoba Feb 07 '23 at 08:00
  • @KarmaPeasant: Bugs and backdoors in closed source are similarly lucrative targets for hackers - and it is very common that these are detected and exploited no matter whether source code is available. – Steffen Ullrich Feb 07 '23 at 08:21
  • @SteffenUllrich It's easier for a hacker to find vulnerabilities in open source code, by virtue of having access to the source (except in cases when the source code of proprietary software leaks, like the recent case with Yandex). Although proprietary software is frequently commercial, so hacking it can be more rewarding for hackers. Also, my thesis here is not that Open Source software provides better security, but that inspection of Open Source software can be rewarded, just not in an obvious way. Meaning that if it's popular enough, there will likely be eyes that inspect it. – KarmaPeasant Feb 07 '23 at 08:33
  • @KarmaPeasant: *"Meaning that if it's popular enough, there likely will be eyes who will inspect it."* - and exactly this is true for closed source too. If the return of investment (i.e. reward vs. effort) is large enough, then there will be eyes with the right knowledge behind them. – Steffen Ullrich Feb 07 '23 at 08:37
  • @SteffenUllrich You miss my point here. It doesn't matter whether it's true for closed source software or not; I am not making comparisons here. In your answer you pointed to a lack of motivation to inspect Open Source code (who is going to pay for the audit?). I merely tried to refute that point, no more, no less. – KarmaPeasant Feb 07 '23 at 08:44
  • @KarmaPeasant: I understand your point. But the context of my statement is that it is often claimed that open source is more secure because more eyes can look at it. My point is that just the option of looking at it does not mean that *enough* people with the right knowledge are actually doing so. Sure, there is *some* motivation to do this, but is it sufficient, and does it actually provide more security than closed source *in practice* and not only in theory? – Steffen Ullrich Feb 07 '23 at 09:06
  • @SteffenUllrich As far as I can tell, the general statement you are concerned with can be neither proved nor disproved, and the best we can do is suspend judgement and say that we don't know (even in theory). To correctly compare two things that have either trait A or trait non-A and conclude which is better, the other traits must be equal, but they are not (in general). At best, arguments for/against the security of open/closed software can serve as rules of thumb for evaluating specific software. – KarmaPeasant Feb 07 '23 at 09:37