
Hypothetical scenario:

  • An organisation whose users rely on its service's zero-knowledge cryptography receives a vulnerability disclosure from a research institution.

  • The disclosure describes multiple vulnerabilities with medium-to-high impact on the users of the service, along with various sensible recommendations for fixes. The vulnerabilities are of a nature that requires the service provider to act maliciously to exploit them. Nevertheless, they undermine the end-to-end cryptographic security model and could allow the organisation to see private user data. The researchers will give a reasonable amount of time, e.g. 1 year, to fix the high-impact issues before they publish their results.

Should users rely on the privacy of their data with a product like this?

What is a more recommended approach to address a serious vulnerability disclosure? Do all the fixes recommended by the cryptographers at the research institute need to be followed exactly (assuming they have more experience and better knowledge than the organisation)?

Ali
  • This scenario is not all that uncommon. There are many service providers that claim to run 'zero knowledge' (or 'zero access') services using end-to-end encryption; but a close look shows that they are susceptible to the 'browser crypto chicken-and-egg problem' (a minimal sketch of this follows these comments). See https://security.stackexchange.com/questions/221738/can-protonmail-access-my-passwords-and-hence-my-secrets and https://security.stackexchange.com/questions/229477/how-does-sync-com-provide-zero-knowledge-for-web-application-upload for more info. – mti2935 Mar 30 '22 at 11:48
  • IMHO if you *truly* want end-to-end encryption and to be 100% sure that the server provider cannot access anything, then you **cannot** have them implement the client. The client should be implemented by yourself or by some *different* party that you trust to *only* connect to the servers via the end-to-end protocol... if the server provider also provides the clients, end-to-end encryption doesn't really add much; you have to give the provider the same level of trust as with standard encryption, since their client could snatch the decrypted messages and do whatever it wants with them. – GACy20 Mar 31 '22 at 12:54
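To make the 'browser crypto chicken-and-egg problem' from these comments concrete, here is a minimal sketch (an illustration under assumed names, not any particular provider's code) of the honest flow a 'zero knowledge' web client is supposed to follow: derive a key from the passphrase locally and upload only ciphertext. Because the provider also ships the client code, nothing in the protocol stops a malicious build of that same script from uploading the passphrase as well.

```typescript
// Minimal sketch of an honest "zero knowledge" client flow (Node.js crypto, TypeScript).
// The function name and parameters are illustrative assumptions, not a real provider's API.
import { scryptSync, randomBytes, createCipheriv } from "crypto";

function encryptForUpload(passphrase: string, plaintext: Buffer) {
  const salt = randomBytes(16);                  // per-user random salt
  const key = scryptSync(passphrase, salt, 32);  // key derived client-side only
  const iv = randomBytes(12);                    // AES-GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag();
  // In the honest flow only salt, iv, tag and ciphertext ever leave the machine;
  // a malicious build shipped by the provider could add one network call that
  // exfiltrates `passphrase` before this point, and the user would not notice.
  return { salt, iv, tag, ciphertext };
}
```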

2 Answers


Certain parts of this question concern matters of opinion. Those are out of scope for Security.SE.


Well, in this highly "hypothetical" scenario, I would expect the researchers to publish their findings, highlighting the way in which the service failed to live up to its organization's promises, the recommendations the researchers made, and what the organization did instead. That's just standard disclosure practice: document the vulnerability (including justification for the claimed impact), the recommendation, and the end result (or current status, for things not fully remedied by publication; ideally you update the publication / release follow-ups as things change going forward).

I can't promise anything about how any part of the security community will respond - we're not a hive mind, and even if we were, you haven't given enough information to say for sure - but historically, similar disclosures have been well-received, and sometimes brought some censure on the organization if they were perceived as failing to ensure security, or lying to the public. It depends to some extent on how severely the promises are broken, e.g. Zoom's "end to end encryption" was initially nothing of the sort and this got some negative attention (for Zoom, not the researchers who pointed it out), whereas when somebody managed to extract keys from an older version of Skype this was seen as academically interesting work but went basically unnoticed at large (it was mostly reverse engineering work, and had little application for attacking Skype users even if the version had still been current).

Some things that will probably impact the reception of the disclosure:

  • The severity of the findings. "A malicious insider with direct access to the servers can change the public keys they send" is a less-severe finding than "All user traffic is TLS-terminated at the gateway and flows in plain text through the rest of the server infrastructure". (A sketch of how users can detect the former, via out-of-band key fingerprint verification, follows this list.)
  • The degree to which the organization's fix works. There's no obligation at all to follow the specific recommendation of external researchers; the only obligation is to not put users at risk (and not make false claims). The fix will be better received the more it achieves that, and vice versa. A "fix" that doesn't actually address the problem will likely be viewed in a significantly negative light.
  • The severity of any findings left unresolved.
  • The timeline of private disclosure, remediation(s) if any, and public disclosure. Some people in the security community are in favor of immediate public disclosure so they can work around or avoid the issue themselves, others in favor of not disclosing unfixed issues at all unless the vendor shows no interest in fixing them, with probably the largest number favoring a disclosure timeline of some degree of aggressiveness. Everybody (except, I guess, black hats) loves to see a quick turn-around on a fix; nobody wants to see a company ignore legitimately exploitable issues.
  • The degree, if any, to which it looks like the researchers are seeking attention / business, rather than trying to get a real problem addressed. Individuals will be all over the place on their feelings and thresholds for this one, but it is generally not appreciated when a researcher is perceived as raising a big stink about something that isn't actually a serious issue, or as publishing marketing material in the guise of notable research.
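As an aside on the first bullet above: the usual mitigation for a provider-side key-substitution attack is to let users verify key fingerprints out of band, so that a swapped public key becomes detectable. Below is a hedged sketch of that check; the fingerprint format and helper names are assumptions for illustration, not any specific product's API.

```typescript
// Hypothetical sketch: detecting server-side key substitution by comparing the
// public key the service serves against a fingerprint verified out of band
// (QR code, phone call, printed safety number, ...).
import { createHash } from "crypto";

// SHA-256 fingerprint of the raw (DER-encoded) public key bytes.
function fingerprint(publicKeyDer: Buffer): string {
  return createHash("sha256").update(publicKeyDer).digest("hex");
}

// True only if the served key matches the fingerprint the user pinned through an
// independent channel; a malicious insider swapping keys would fail this check.
function verifyRecipientKey(publicKeyDer: Buffer, pinnedFingerprintHex: string): boolean {
  return fingerprint(publicKeyDer) === pinnedFingerprintHex;
}
```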

What is a more recommended approach to address a serious vulnerability disclosure?

Triage it: anything that poses an imminent threat gets fixed ASAP, while anything that doesn't, but still impacts defense in depth, is fixed as schedules permit. Generally speaking, this is the recommended approach.
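One minimal way to picture that triage, with purely illustrative severity buckets and deadlines (these numbers are assumptions, not a standard):

```typescript
// Hypothetical triage policy: map each finding to a remediation target based on
// whether it is imminently exploitable or "only" weakens defense in depth.
type Finding = { id: string; imminentThreat: boolean };

function remediationDeadlineDays(finding: Finding): number {
  // Imminent threats are fixed ASAP; assume a short hard deadline here.
  if (finding.imminentThreat) return 7;
  // Defense-in-depth issues are scheduled into normal release planning.
  return 90;
}

// Example: order a disclosure's findings so the urgent ones are handled first.
const findings: Finding[] = [
  { id: "key-substitution-by-insider", imminentThreat: false },
  { id: "plaintext-past-the-gateway", imminentThreat: true },
];
findings.sort((a, b) => remediationDeadlineDays(a) - remediationDeadlineDays(b));
```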

Do all the fixes recommended by the cryptographers at the research institute need to be followed exactly (assuming they have more experience and better knowledge than the organisation)?

No. Even if the assumption is valid (which it may or may not be), outsiders usually don't have as complete an understanding of the system - its architecture, purpose, protections, future goals, etc. - as the developer. It's a very rare vulnerability that can only be solved in a single way, and even if there is only one known best way, that doesn't mean other ways are inadequate. It is the researcher's job to explain the vulnerability to the developer well enough that the developer can understand whether a proposed fix will suffice.

CBHacking
  • @Bob If that's the case, and the hypothetical organisation handles their cryptography poorly, that'll be evident in their response to the issues raised by the researchers - either the actual fix they implement being poor, or the way they go about responding showing a lack of understanding. Everyone else will judge the situation based on that. The researchers saying, in advance of any response from the organisation, that the organisation must implement the researchers' recommendations exactly to the letter or the fix will be inadequate comes across as a bit presumptuous. – Kayndarr Mar 31 '22 at 02:51

I think the answer is rather simple: does the company want their security to be broken in real life? If they don't want that to happen, they should fix all known vulnerabilities that seem realistic.

Also the "time or money to fix anything" not invested may become a boomerang if people are affected: there could be penalties to pay for disclosing information to the public that may even ruin the company at the end.

IMHO I would not do business with such a company if it can be avoided.

U. Windl
  • I see this question as being more about how the disclosure should be handled, but this answer seems to be just addressing whether the organization should fix the issue. – barbecue Mar 31 '22 at 13:27
  • Sorry, the questions are: "Should users rely on the privacy of their data with a product like this? What is a more recommended approach to address a serious vulnerability disclosure? Do all recommended fixes need to be followed exactly from the cryptographers at the research institute (assuming they have more experience and better knowledge than the organisation)?" I think my answer applies! – U. Windl Mar 31 '22 at 13:35
  • Actual steps are required, not this apparently. – Sir Muffington Mar 31 '22 at 20:48