
While it is theoretically possible to thoroughly examine the source code of Open Source Software to check for backdoors (neglecting a Ken Thompson-style hack), and given sufficient knowledge of Electrical Engineering one can probably figure out what a given observable circuit can do, how can one ever be sure an Integrated Circuit does what it is supposed to do and nothing else (at least not on purpose)?

As an example, how can one be sure a TPM chip doesn't actually call your local security agency when it feels like it, e.g. via an integrated GSM modem or a bridged Ethernet port?

And even if the schematics were Open Hardware, how could one be sure the manufacturer (who probably won't let you supervise the super-secret production of your individual chip) doesn't add their own "optimizations"?

Tobias Kienzler
    This is a very good question, but I don't know if there is a perfect answer (although the great bear will probably prove me wrong :-)). It all ends up being about trust models and incremental trust. You have to place trust in manufacturers somewhere, or else avoid all technology. We rely on contractual and reputational drivers to keep manufacturers honest; various incidents have shown that this may be a bit misguided, but what else can we do without being able to review all circuitry and code ourselves? – Rory Alsop Jan 13 '14 at 11:11

4 Answers


Theoretically, to ascertain what a chip does, you break it apart and reverse-engineer it. In practice, this is nigh impossible to do. Indeed, even for software, where you have the actual source code, you cannot guarantee that the code really always does what you believe it does (otherwise we would be able to produce bug-free code).

This is not a new problem, and intelligence agencies (an oxymoron) have stumbled upon it many times. When the CIA wants to ascertain that the computers at the White House are not full of Chinese-controlled backdoors, what do they do? Well, they certainly have a look at the chips for anything obvious (an integrated GSM modem has a minimum size; it can be seen with an X-ray scan of the chip). However, ultimately, they rely on classic investigation methods which have demonstrated their efficiency since the days of Julius Caesar: tracking the source of each component, who designed them, who produced them, who transported them, and so on, with background checks on all involved individuals and audits of the procedures. This is not very different from "certified software" (e.g. Common Criteria), for which design, specification, developers' backgrounds and development methodologies are inspected.

One way to see it is that the hardware is not evil -- people are. So check the people, not the hardware.

In the case of the CIA, this means that they will much prefer chips from Taiwan over chips from mainland China.

Tom Leek

Just to make things clear.

It seems to me there are two distinct questions here: "Do I have to trust my manufacturer?" and "Can a TPM be malicious?".

Here are some comments about the second one:

A TPM simply can't do those kinds of things; it's a passive/dumb device. It is typically connected via a standard bus (LPC). While the LPC bus does support DMA via the LDRQ# signal, the TPM does not have access to that line. In other words, it cannot drive the DMA engine, nor can it communicate with other devices by itself. Any attack the TPM could pull off would have to be passive, such as a side-channel attack.
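To make the "passive" point concrete, here is a minimal sketch, assuming a Linux machine that exposes a TPM 2.0 as /dev/tpm0: the host opens the device and initiates every exchange, and the chip only ever answers what it is asked (here, a TPM2_GetRandom command requesting 8 bytes).

    /* Minimal sketch: a discrete TPM is passive because the host drives
     * every transaction. Assumes Linux with a TPM 2.0 at /dev/tpm0. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* TPM2_GetRandom(8): tag TPM_ST_NO_SESSIONS (0x8001), total
         * command size 12, command code 0x017B, bytesRequested = 8;
         * all fields big-endian per the TPM 2.0 spec. */
        unsigned char cmd[] = { 0x80, 0x01, 0x00, 0x00, 0x00, 0x0C,
                                0x00, 0x00, 0x01, 0x7B, 0x00, 0x08 };
        unsigned char resp[64];

        int fd = open("/dev/tpm0", O_RDWR);
        if (fd < 0) { perror("open /dev/tpm0"); return 1; }

        /* Nothing happens on the bus until the host writes a command;
         * the TPM can only respond to what it is asked. */
        if (write(fd, cmd, sizeof cmd) != (ssize_t)sizeof cmd) {
            perror("write"); close(fd); return 1;
        }
        ssize_t n = read(fd, resp, sizeof resp);
        printf("TPM answered with %zd bytes\n", n);
        close(fd);
        return 0;
    }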

Newer TPMs implemented by Intel actually run as an application within the Platform Controller Hub (formerly the Memory Controller Hub, a.k.a. the Northbridge). They run on top of the Intel Management Engine, alongside AMT, and are bundled under the vPro flagship. You can see Intel's ME as an operating system running in ring -3: it runs on a completely separate CPU (not your main CPU) and has complete access to your system memory (via Intel UMA). Therefore, someone could argue that those iTPMs (Integrated TPMs; in fact, any application running on the ME) have the capability of being active and doing the kind of thing you describe.

At that point the question is: could someone backdoor Intel ME/AMT? Yes, it's possible. Unlikely, but possible. You would need to exploit it, or you would need the ME signing keys. Also, back to your first question: could your manufacturer backdoor Intel ME/AMT? Same answer.

P.S. At some point the story behind BadBIOS turned into questioning whether this kind of thing (i.e. a very powerful and portable exploit) was actually happening.

northox

I gave a talk at Black Hat a few years ago (actually 10 now) that revisited Trusting Trust: http://www.blackhat.com/presentations/bh-usa-04/bh-us-04-maynor.pdf

I followed up with an article written for Linux Journal in 2005: http://www.linuxjournal.com/article/7839

I've been researching this topic for almost 15 years now, and I can tell you the takeaway from the Thompson article is that unless you verify EVERY component in your environment, you can't. While logic tells you a trojaned piece of hardware or software might have a suspicious binary blob that spawns a reverse shell, in reality it could be much more subtle. The 2005 article highlighted the example of creating a stub for strncpy that actually uses strcpy. If you were to look at the symbols post-compile, everything would look right, but anywhere you think a buffer overflow has been stopped by strncpy now becomes an attack vector.
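As a hypothetical sketch of the kind of stub described above (not the article's actual code): a shared object built with this definition (e.g. gcc -shared -fPIC -fno-builtin) and interposed via LD_PRELOAD would leave the symbol table looking perfectly normal while quietly discarding the length bound.

    /* Trojaned stub: exports the strncpy symbol but ignores the bound
     * and falls through to strcpy, so every "bounded" copy in the
     * victim program becomes an unbounded one. */
    #include <string.h>

    char *strncpy(char *dest, const char *src, size_t n)
    {
        (void)n;                  /* the safety bound is thrown away... */
        return strcpy(dest, src); /* ...so callers can overflow after all */
    }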

When it comes to hardware, this process is even harder because you can't just run strings on the binary. A combination of IP concerns and laws keeps a lot of the lower-level operations of something like a mobile device secret. Jailbreaking has had some success in opening this black box, but not a ton.

Check out this blog post I wrote in reaction to the iPhone SMS vulnerability in 2009: http://blog.erratasec.com/2009/07/heres-how-we-do-that-voodoo-that-we-do.html#.UtVBoHk6JFw A lot of the knee-jerk reactions to this bug were to have AT&T disable the SMS plan for the iPhone, and people felt safe. In reality, even if you don't have an SMS plan, your phone receives special SMS updates from the carrier that do things like network tuning, tower updates, etc. A backdoor in the device could be something as simple as pushing a tower update to your phone telling it the IMSI catcher in your neighborhood is an official carrier tower and it's OK to use it.

In short, unless you manufacture every component, write all the software, own the phone company, and are able to pass favorable laws for research, you can't know if your device is backdoored.

dmaynor
  • I wasn't thinking of phones specifically, though with proper end-to-end encryption and authentication the phone company could at least be as untrustworthy as you wished - assuming you manage to trust your hardware enough that it cannot be backdoored. – Tobias Kienzler Jan 17 '14 at 11:07
  • (1) (unfaked) strncpy stops an immediate overflow but using the unterminated result as a string often causes other errors including overflows; for 'bounded strcpy' you need sprintf (inefficient and fiddly) or snprintf (inefficient and only C99) or strlcpy (common but not standard) or strcpy_s (only C11 and optional) or =0+strncat (looks silly) (2) I hope you mean IMSI 'catcher', and 'it's ok to use it'; having the phone carrier injecting drugs into my body seems like a _really_ bad idea – dave_thompson_085 Mar 05 '17 at 13:45

I cannot fully answer "how to trust?", but I want to contribute at least one helpful idea. As already stated in the other answers, there are essentially two options:

  1. You make everything yourself (the IC); this obviously gives you a good basis for trust.
  2. You take something already made, then analyze and test it.

While these options excel at yielding complete trust, they are obviously impracticable: the price to pay for that trust, in time and material effort, is huge.

The idea I want to add, as a limited alternative to the two options above, may depend on sacrificing some comfort or functionality. The logic of the idea is this:

  1. Since we are unable to look inside components (e.g. the TPM chip, the Intel AMT ring -3 processor, ...), we cannot trust them. We cannot look inside because of the complexity and miniaturization of the components.
  2. For the very same reason (complexity and size), it is also impracticable to replace them with self-made substitutes.
  3. Provided the components have some manageable physical separation (i.e. they are separate chips) as well as some functional separation (i.e. their functionality is not "everything"), it becomes possible to modify the hardware so that components are separated from each other by a logic interface of your own, and hence trusted, design and control.

To take this from the abstract to a more practical level, let me illustrate with an example.

Assume you have a malicious IC in your system which, as suggested in your question, would use GSM or some other communication channel to send data to an attacker. Given that we satisfy point 3, and that your IC's functionality does not depend on communication (and is hence functionally separable from it) and is physically separable from any communication device (let us assume the only available GSM modem is an inserted USB device), then you can trust your IC not to be sending data to the attacker by keeping the USB device physically, and thereby functionally, disconnected.

As in software (see, for instance, LSMs like AppArmor), the idea is to limit each IC, by isolation/separation, to only the necessary functional connectedness.

While it may be impossible to manufacture a complex IC yourself, it is much more feasible to build a far less complex one that merely connects components, via hardware switches, on demand (i.e. under software control) according to their current need. By limiting the available connectedness, some trust can be gained.
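As a minimal sketch of what the software side of such a trusted gatekeeper could look like, assume the untrusted modem's power rail runs through a switch driven by GPIO 17 of a simple, auditable controller; the device path and pin number below are illustrative assumptions, not any real product's interface.

    /* Sketch of "connect on demand": a trusted, simple controller cuts
     * the power rail of the untrusted USB modem via a GPIO-driven
     * switch. Uses the legacy Linux sysfs GPIO interface and assumes
     * pin 17 is already exported and configured as an output. */
    #include <stdio.h>

    static int set_modem_power(int on)
    {
        FILE *f = fopen("/sys/class/gpio/gpio17/value", "w");
        if (!f) { perror("gpio17"); return -1; }
        fprintf(f, "%d", on ? 1 : 0); /* 1 = rail connected, 0 = isolated */
        fclose(f);
        return 0;
    }

    int main(void)
    {
        set_modem_power(0); /* default state: modem physically isolated */

        /* Only when the user explicitly requests communication: */
        set_modem_power(1); /* ...connect, transfer data... */
        set_modem_power(0); /* ...and isolate again afterwards. */
        return 0;
    }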

humanityANDpeace