Certain parts of this question concern matters of opinion. Those are out of scope for Security.SE.
Well, in this highly "hypothetical" scenario, I would expect the researchers to publish their findings, highlighting the way the service failed to live up to its organization's promises, the recommendations the researchers made, and what the organization did instead. That's just standard disclosure practice: document the vulnerability (including justification for the claimed impact), the recommendation, and the end result (or the current status, for anything not fully remedied by the time of publication; ideally, you update the publication or release follow-ups as things change going forward).
I can't promise anything about how any part of the security community will respond - we're not a hive mind, and even if we were, you haven't given enough information to say for sure - but historically, similar disclosures have been well received, and have sometimes brought censure on the organization when it was perceived as failing to ensure security or as lying to the public. It depends to some extent on how severely the promises are broken. For example, Zoom's "end-to-end encryption" was initially nothing of the sort, and that drew negative attention (for Zoom, not for the researchers who pointed it out), whereas when somebody managed to extract keys from an older version of Skype, it was seen as academically interesting work but went largely unnoticed outside the research community (it was mostly reverse engineering, and had little application for attacking Skype users even if that version had still been current).
Some things that will probably impact the reception of the disclosure:
- The severity of the findings. "A malicious insider with direct access to the servers can change the public keys they send" is a less severe finding than "All user traffic is TLS-terminated at the gateway and flows in plain text through the rest of the server infrastructure". (A short sketch after this list illustrates the key-substitution case.)
- The degree to which the organization's fix works. There's no obligation at all to follow the specific recommendation of external researchers; the only obligation is to not put users at risk (and not make false claims). The fix will be better received the more it achieves that, and vice versa. A "fix" that doesn't actually address the problem will likely be viewed in a significantly negative light.
- The severity of any findings left unresolved.
- The timeline of private disclosure, remediation(s) if any, and public disclosure. Some people in the security community favor immediate public disclosure so they can work around or avoid the issue themselves; others favor not disclosing unfixed issues at all unless the vendor shows no interest in fixing them; probably the largest number favor a disclosure deadline of some degree of aggressiveness. Everybody (except, I guess, black hats) loves to see a quick turnaround on a fix; nobody wants to see a company ignore legitimately exploitable issues.
- The degree, if any, to which it looks like the researchers are seeking attention or business, rather than trying to get a real problem addressed. Individuals will be all over the place on their feelings and thresholds for this one, but it is generally not appreciated when a researcher is perceived as raising a big stink about something that isn't actually a serious issue, or as publishing marketing material in the guise of notable research.
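To make the key-substitution example from the first bullet concrete, here is a minimal, purely illustrative sketch (assuming PyNaCl is installed; the names `honest_lookup` and `malicious_lookup` are hypothetical, not any real service's API) of why a key directory that lies about public keys can silently read "end-to-end encrypted" messages - and why doing so still requires an active insider rather than mere passive access:

```python
# Minimal sketch, assuming PyNaCl (`pip install pynacl`).
# If the server distributes public keys and an insider can substitute a key
# they control, "end-to-end" encryption is silently undermined for targeted
# conversations. honest_lookup / malicious_lookup are hypothetical helpers.
from nacl.public import PrivateKey, Box

bob_sk = PrivateKey.generate()      # Bob's real keypair
insider_sk = PrivateKey.generate()  # the malicious insider's keypair
alice_sk = PrivateKey.generate()    # Alice's keypair

def honest_lookup(user):
    # An honest key directory returns the user's real public key.
    return bob_sk.public_key

def malicious_lookup(user):
    # A malicious insider returns a key they control instead.
    return insider_sk.public_key

message = b"meet at noon"

# Case 1: honest directory -- only Bob can decrypt Alice's message.
ct = Box(alice_sk, honest_lookup("bob")).encrypt(message)
assert Box(bob_sk, alice_sk.public_key).decrypt(ct) == message

# Case 2: malicious directory -- the insider decrypts, then re-encrypts to
# Bob so neither party notices (a classic active man-in-the-middle). The
# insider also hands Bob the substituted key as "Alice's", so Bob's decrypt
# below uses the insider's public key without realizing it.
ct = Box(alice_sk, malicious_lookup("bob")).encrypt(message)
intercepted = Box(insider_sk, alice_sk.public_key).decrypt(ct)
forwarded = Box(insider_sk, bob_sk.public_key).encrypt(intercepted)
assert Box(bob_sk, insider_sk.public_key).decrypt(forwarded) == message
print("insider read:", intercepted)
```

The point of the sketch is only the contrast in the bullet: this attack needs an insider actively serving false keys (and is detectable by out-of-band key verification), whereas plaintext behind a TLS-terminating gateway exposes everything passively.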
What is a more recommended approach to address a serious vulnerability disclosure?
Triage it: anything that poses an imminent threat gets fixed ASAP, while anything that doesn't, but still impacts defense in depth, gets fixed as schedules permit. Generally speaking, that's the recommended approach.
Do all recommended fixes need to be followed exactly from the cryptographers at the research institute (assuming they have more experience and better knowledge than the organisation)?
No. Even if the assumption is valid (which it may or may not be), outsiders usually don't have as complete an understanding of the system - its architecture, purpose, protections, future goals, etc. - as the developer. It's a very rare vulnerability that can only be solved in a single way, and even if there is only one known best way, that doesn't mean other ways are inadequate. It is the researcher's job to explain the vulnerability to the developer well enough that the developer can understand whether a proposed fix will suffice.