This question is, perhaps, a tad too abstract for a Q&A site. But it's been playing on my mind for a while, so I'm throwing it out there.
Modern systems seem to be increasingly centralised. An average user in the '90s would buy a computer from one manufacturer, run an OS from Microsoft, and then rely on a range of other suppliers for basic functionality: compression utilities, web browsers, and even memory extenders for the OS itself in the days of DOS.
Even getting applications was a pretty decentralised process: you might use a search engine to find a company's website and then download an executable from it.
Today is of course very different: the OS builds in most of the basic functionality and often ties it to the vendor's online services. Even finding applications from other suppliers happens through that same vendor's 'app store'.
Anyway, on to the question: as a security rule of thumb, does centralisation increase the damage of exploits but decrease their probability?
I wonder this for the following reasons:
- Downloading .exe files from a range of sites is malware-prone, as anyone who fixed a friend's computer in the early 2000s will know...
- However, if a centralised 'app store' were thoroughly compromised, the breach could affect every single application it distributes.
- Building more functionality into the OS reduces the opportunity for third parties to introduce security holes into the average consumer OEM package for that OS.
- But unifying all applications as built-in parts of the OS means that if, say, the build process for the OS were breached, the entire thing would be compromised. (This is overly simplistic; I'm not genuinely suggesting, for example, that Windows Media Centre is compiled alongside the actual NT kernel.)
In short, centralised systems seem to safeguard newbies against trivial security issues. But isn't the damage from a centralised supplier being hacked far greater than from a single small supplier being broken into?
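To make the trade-off concrete, here's a toy expected-loss sketch in Python. Every number in it is invented purely for illustration; the point is only the shape of the comparison, not the values:

```python
# Toy expected-loss model. All numbers below are made up for illustration;
# "risk" is treated here as expected damage = P(breach) * users affected.

n_small_suppliers = 100     # decentralised world: many independent vendors
p_small_breach = 0.05       # yearly chance any one small vendor is breached
users_per_small = 1_000     # users harmed when one small vendor is breached

p_central_breach = 0.005    # a hardened central store is breached far less often
users_central = 1_000_000   # ...but a single breach hits the whole user base

# Decentralised: many independent, small, frequent losses
decentralised_loss = n_small_suppliers * p_small_breach * users_per_small

# Centralised: one rare, catastrophic loss
centralised_loss = p_central_breach * users_central

print(f"decentralised expected yearly damage: {decentralised_loss:,.0f} users")
print(f"centralised expected yearly damage:   {centralised_loss:,.0f} users")
# Both come out at 5,000 users/year with these made-up inputs, yet the risk
# profiles differ: the centralised case concentrates the loss into rare
# catastrophes (same mean, far higher variance), which is the trade-off above.
```

So even when the two models are equal in expectation, they aren't equivalent in practice; part of what I'm asking is whether frequent small losses are preferable to rare catastrophic ones, and whether a central supplier's lower breach probability actually holds up empirically.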