
One of the most valuable functions of a Trusted Platform Module (TPM) chip is its ability to seal a private key under the hash of the code that will use it. This means that one can create a private key which can only be read by a piece of code that hashes to a certain value.

By using this technology, we can essentially emulate a smart card in software: we can create a private key that can never be read; it's only possible to ask the hardware to sign a message using this key.
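To make this concrete, here is a minimal, toy C model of that behaviour: a "chip" that holds a private key together with the code measurement it was sealed under, and refuses to sign when different code is running. Every name and the "crypto" in it are stand-ins of my own, not a real TPM API (a real TPM uses PCR-based policies and real algorithms):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy model of sealing: a "chip" holds a private key plus the code
 * measurement it was sealed under, and only signs while the currently
 * running code measures the same. The hash and "signature" below are
 * stand-ins; a real TPM uses PCRs, SHA-256, and real signatures. */

#define HASH_LEN 32

typedef struct {
    uint8_t private_key[32];      /* never exposed outside the chip */
    uint8_t sealed_to[HASH_LEN];  /* code hash the key is sealed under */
} sealed_key;

/* Stand-in for measuring the running code (in reality: hashes of the
 * boot chain accumulated into PCRs). */
static void measure_code(uint8_t out[HASH_LEN], const char *code)
{
    memset(out, 0, HASH_LEN);
    for (size_t i = 0; code[i]; i++)
        out[i % HASH_LEN] ^= (uint8_t)code[i];
}

/* Sign only if the current measurement matches the sealing policy. */
static int tpm_sign(const sealed_key *k, const uint8_t current[HASH_LEN],
                    const uint8_t *msg, size_t len, uint8_t sig[HASH_LEN])
{
    if (memcmp(k->sealed_to, current, HASH_LEN) != 0)
        return -1;  /* different code is running: key is unusable */
    for (size_t i = 0; i < HASH_LEN; i++)       /* stand-in signature */
        sig[i] = k->private_key[i] ^ (len ? msg[i % len] : 0);
    return 0;
}

int main(void)
{
    sealed_key k = { .private_key = { 1, 2, 3 } };  /* toy key */
    measure_code(k.sealed_to, "trusted wallet code");

    uint8_t good[HASH_LEN], bad[HASH_LEN], sig[HASH_LEN];
    measure_code(good, "trusted wallet code");
    measure_code(bad, "tampered wallet code");

    printf("trusted code:  %s\n",
           tpm_sign(&k, good, (const uint8_t *)"msg", 3, sig) ? "refused" : "signed");
    printf("tampered code: %s\n",
           tpm_sign(&k, bad, (const uint8_t *)"msg", 3, sig) ? "refused" : "signed");
    return 0;
}
```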

To me, this seems like a huge step forward in IT security. It seems primarily useful for mobile devices that are used for payments; Bitcoin wallets, for example, are an obvious application of this technology.

ARM chips include a feature called TrustZone. Does this technology allow doing the above -- sealing a private key under a code hash?

runeks

2 Answers


Yes and no.

Strictly speaking, TrustZone is only a processor feature that provides isolation between tasks via the MMU and the memory bus. You can think of it as a poor man's virtualization: there's just the hypervisor (the TZ secure world) and the regular operating system (the TZ normal world). This architecture allows sensitive data to be manipulated outside the reach of the regular OS, but there's a major hurdle: TrustZone in itself does not provide any way to store data. So you can create a key in the secure world but not store it anywhere.
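As a rough illustration of that boundary, here is how normal-world kernel code typically calls into the secure world on AArch64: the SMC (Secure Monitor Call) instruction traps to the secure monitor, which switches worlds. The register usage follows the ARM SMC Calling Convention, but the service ID and the signing service are invented for the example:

```c
#include <stdint.h>

/* Sketch of a normal-world call into TrustZone secure-world firmware
 * on AArch64. Per the ARM SMC Calling Convention, the function ID goes
 * in x0 and arguments in x1..x6; the result comes back in x0. SMC can
 * only be executed at EL1 or higher, i.e. from the kernel, not from
 * user space. The service ID below is invented. */

#define TZ_SVC_SIGN_MESSAGE 0x82000042UL  /* hypothetical service ID */

static inline uint64_t smc_call(uint64_t fid, uint64_t a1,
                                uint64_t a2, uint64_t a3)
{
    register uint64_t x0 __asm__("x0") = fid;
    register uint64_t x1 __asm__("x1") = a1;
    register uint64_t x2 __asm__("x2") = a2;
    register uint64_t x3 __asm__("x3") = a3;

    /* A production implementation must also clobber x4..x17 per the
     * calling convention; this is trimmed for readability. */
    __asm__ volatile("smc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x3)
                     : "memory");
    return x0;
}

/* Ask the secure world to sign a message with a key it never reveals.
 * Real code would pass buffers through pre-agreed shared memory. */
static inline uint64_t tz_sign(uint64_t msg_addr, uint64_t msg_len,
                               uint64_t sig_addr)
{
    return smc_call(TZ_SVC_SIGN_MESSAGE, msg_addr, msg_len, sig_addr);
}
```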

All high-end ARM processors (such as found on most smartphones and tablets) have TrustZone (it's part of the core processor architecture), but it takes more to make it useful. Some processors include additional features that make TZ useful, in particular a way to store a key. This can take the form of some write-once memory (e.g. fuses, typically a few hundred bits thereof) that is only accessible to TZ secure world code. With a protected runtime environment plus a cryptographic key that is only known to this environment, you can build a TPM-like framework to store and manipulate confidential data including signature keys. The Trusted Computing Group is working on it.
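As a sketch of how a few hundred fuse bits become general-purpose secure storage: the secure world derives a storage key from the fuse-backed secret and uses it to wrap keys that then live, encrypted, on ordinary flash. The XOR "crypto" below is a placeholder showing only the data flow; a real implementation would use a proper KDF and authenticated encryption:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy sketch of turning a small fuse-backed secret into sealed
 * storage, as a TrustZone secure world might. The XOR "crypto" only
 * illustrates the data flow; a real implementation would use a KDF
 * such as HKDF and authenticated encryption (e.g. AES-GCM). */

/* Device-unique secret burned into fuses; on real hardware, readable
 * only by secure-world code. Toy value, rest zero. */
static const uint8_t FUSE_SECRET[32] = { 0x5a, 0xc3, 0x17 };

/* Stand-in KDF: derive a purpose-specific key from the fuse secret. */
static void derive_storage_key(uint8_t out[32], const char *label)
{
    size_t n = strlen(label);
    for (size_t i = 0; i < 32; i++)
        out[i] = FUSE_SECRET[i] ^ (uint8_t)label[i % n];
}

/* Stand-in for (un)wrapping: the blob can sit on ordinary flash,
 * useless without the fuse-derived key that never leaves the secure
 * world. XOR is its own inverse, so this both wraps and unwraps. */
static void xor_wrap(uint8_t *dst, const uint8_t *src, size_t len,
                     const uint8_t storage_key[32])
{
    for (size_t i = 0; i < len; i++)
        dst[i] = src[i] ^ storage_key[i % 32];
}

int main(void)
{
    uint8_t storage_key[32];
    derive_storage_key(storage_key, "signing-key-storage-v1");

    uint8_t signing_key[32] = { 9, 9, 9 };  /* the secret to protect */
    uint8_t blob[32], recovered[32];
    xor_wrap(blob, signing_key, 32, storage_key);   /* to flash */
    xor_wrap(recovered, blob, 32, storage_key);     /* from flash */

    printf("round-trip %s\n",
           memcmp(recovered, signing_key, 32) == 0 ? "ok" : "failed");
    return 0;
}
```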

This has been used in several mobile devices, though information (especially reliable information) is scarce. Microsoft's Surface RT tablet is based on an ARM processor and has no discrete TPM chip, but has a BitLocker implementation that relies on a firmware TPM, apparently using TrustZone. Several Android devices by Motorola have security features that use code in TrustZone (of course, using a protected environment is no help if the code that you put there has security holes). You can find proposals for security architectures leveraging TrustZone both in ARM promotional literature and in academic publications.

So with TrustZone and a bit more, you can indeed build a system architecture where a key can be stored in a way that cannot be extracted through purely software means. Hardware means are another matter (unlike smartcards, smartphone processors are not designed to self-destruct when someone scrapes off the chip's packaging).

Gilles 'SO- stop being evil'
  • If the code running inside TrustZone mode (including, for example, a firmware-based TPM) is updatable, couldn't someone update the firmware to break the virtual TPM? Even if it was the device or OS manufacturer - being compelled by court order, for example. – Ian Boyd Feb 22 '16 at 12:45
  • @IanBoyd You can only trust the code as much as you can trust the author of the code. The firmware is unable to differentiate “desirable update fixing a security bug” from “undesirable update adding a backdoor”. Court-ordered updates are a non-issue really: if you don't trust the device/OS manufacturer's update, why are you trusting their original code? – Gilles 'SO- stop being evil' Feb 22 '16 at 13:10
  • But my question isn't about trusting the firmware. My question is can the firmware updates be detected? I would hope that the design is: a hard-coded security chip, with an internal private key, first *reads* the contents of firmware, hashing it, and adding it to an internal state-of-the-system hash. If the firmware is ever updated, the TPM-style chain-of-trust can detect it. And because it starts with non-modifiable code that is embedded in a chip with a unique hardware bound key, no OS manufacturer can change the firmware without destroying any persisted user-mode keys. – Ian Boyd Feb 22 '16 at 14:38
  • @IanBoyd Sure, firmware updates need to be signed, and any update should be detectable through TPM measurements (however, this is only true if you trust the firmware to report genuine measurements). But changing measurements does not destroy user data! That would be a possible design choice, but a highly uncommon one, since it would make it impossible to update the firmware without destroying existing data. Updates of secure firmware are a desirable thing, because security bugs do happen. – Gilles 'SO- stop being evil' Feb 22 '16 at 18:15
  • @IanBoyd Destroying user data would be pointless except in the unusual scenario where you trusted the firmware provider up to a point in time, but no longer do. To handle this, it would be better to give the device owner some control (which they could have in theory but often don't in practice) over the updates that get applied. – Gilles 'SO- stop being evil' Feb 22 '16 at 18:16
  • We don't *really* destroy the user data. If you update the firmware of your Microsoft Surface Pro, you have no way to recover the BitLocker encryption key. That was why I had to dig out the hardcopy of the encryption key that BitLocker forced me to create. The key is encrypted and stored on the hard drive. It is *"sealed"* using the TPM state of the system. Updating the firmware changes that state, so you cannot decrypt the key - nothing is really destroyed. That technology, already in PCs, should be available on ARM. My iOS master key should be *"sealed"* in the trusted-computing sense. – Ian Boyd Feb 22 '16 at 18:32
  • @IanBoyd Once again, it's not a matter of being able to do this but of choosing. Destroying user data is extremely unfriendly, and does not in fact help security except in a contrived scenario (where you trusted Microsoft in 2015 but not in 2016). Remember [AviD's rule of usability](http://security.stackexchange.com/questions/6095/xkcd-936-short-complex-password-or-long-dictionary-passphrase/6116#6116): security at the expense of usability comes at the expense of security. If your security system erases user data, users will store their data elsewhere. – Gilles 'SO- stop being evil' Feb 22 '16 at 18:42
  • Fortunately, the Surface Pro, using the TPM for key storage, does not destroy user data. Does TrustZone provide the same trust that a TPM does? – Ian Boyd Feb 22 '16 at 18:45

From: https://web.archive.org/web/20151116162016/http://www.liwenhaosuper.com/blog/2014/05/26/tee-and-arm-trustzone/

Could ARM TrustZone be used as TPM directly? Does ARM TrustZone provide secure key storage?

I am afraid not. The problem is the lack of secure storage: the TrustZone specification doesn't provide any mechanism to implement secure storage. However, the TrustZone feature of assigning a specific peripheral to secure-world-only access is the key point; it is up to the SoC vendors or the TEE developers to decide which peripheral is used as the secure storage medium.
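To illustrate what "assigning a peripheral to secure world access only" can look like in practice, here is a hypothetical sketch of secure boot code programming a TrustZone protection controller (TZPC); every address and register name is invented, since the details differ per SoC:

```c
#include <stdint.h>

/* Hypothetical sketch of dedicating a peripheral to the secure world.
 * On a typical SoC, a TrustZone protection controller (TZPC) or
 * similar bus-level filter decides which peripherals respond to
 * non-secure transactions. All addresses, register names, and the
 * choice of fuse block below are invented: as the quote says, the
 * actual wiring is up to each SoC vendor. */

#define TZPC_BASE         0x10001000UL          /* hypothetical */
#define TZPC_DECPROT0_CLR (TZPC_BASE + 0x08)    /* clear bit = secure-only */
#define EFUSE_CTRL_BIT    (1u << 4)             /* hypothetical fuse block */

static inline void mmio_write32(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

/* Run once by secure-world boot code. Afterwards, normal-world
 * accesses to the fuse controller fault or read as zero, so only the
 * secure world can reach the key material stored there. */
void lock_fuse_block_to_secure_world(void)
{
    mmio_write32(TZPC_DECPROT0_CLR, EFUSE_CTRL_BIT);
}
```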

superboy