
We have a need to store security keys on a mobile device, i.e. in iOS and Android consumer apps. The app uses those keys in an offline scenario to 'self-validate' the signatures of its data. (It can also phone home from time to time to conduct a more robust check.) We're considering how we might store these keys on the mobile device in such a way that:

  • They can only be read by our application
  • They cannot be copied to another device

The obvious places that come to mind are the Android Keystore and the iOS Keychain. But how secure is this approach from attack? Some naive Googling turns up a surprisingly wide range of answers to this question; it would be good to understand and quantify the effort required to break open these secure storage areas.

Carlos P
    "how secure is this approach from attack" is not really answerable. It all depends on the attack scenario you're trying to protect against. Generally speaking, it is impossible to protect a client device from itself. As such, the best approach usually is to use the default key storage mechanism and be aware of its limitation: if they are unacceptable in the context of your app, then you should reconsider your plans for releasing altogether. – Stephane Mar 13 '17 at 09:53

1 Answer


The app uses those keys in an offline scenario to 'self validate' the signatures of its data

I'm not sure about the exact circumstances you have in mind, but if your application needs to validate externally provided data, as opposed to data it produced itself, the simplest approach would be to use public-key cryptography to sign the data. This way, you'd only need to store the public key on the device, which isn't security critical and doesn't have to be protected (edit: see nolandda's comment below for an explanation of why this is completely wrong; you do need to protect the public key!).
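As a rough illustration in Kotlin (the helper name, the choice of ECDSA with SHA-256 and the DER-encoded key format are my assumptions, not anything specified in the question), on-device verification against an embedded public key might look something like this:

    import java.security.KeyFactory
    import java.security.Signature
    import java.security.spec.X509EncodedKeySpec

    // Hypothetical helper: checks that `data` was signed by the holder of the
    // private key matching `encodedPublicKey` (an X.509/DER-encoded EC public key).
    fun verifySignature(data: ByteArray, signature: ByteArray, encodedPublicKey: ByteArray): Boolean {
        val publicKey = KeyFactory.getInstance("EC")
            .generatePublic(X509EncodedKeySpec(encodedPublicKey))
        return Signature.getInstance("SHA256withECDSA").run {
            initVerify(publicKey)
            update(data)
            verify(signature)
        }
    }

The catch, as nolandda points out in the comments below, is where encodedPublicKey comes from: it has to live somewhere the attacker can't rewrite (e.g. compiled into the app and covered by the platform's code signing), not in a file the attacker can simply swap for a key of his own.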

But how secure is this approach from attack?

What kind of attack? Are you up against the resources of a nation state, or a lone hacker? What do you know about the underlying devices? Are you trying to protect missile launch codes, or client phone numbers?

How difficult it is to extract a key from the Android Keystore varies; for example, if the underlying device has a hardware security module, extracting the key material is much harder than if the key is only protected in software.
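To make this concrete, here's a minimal Kotlin sketch (the key alias "my_signing_key" is a placeholder of mine) that generates a non-exportable signing key in the Android Keystore and asks whether it actually ended up in secure hardware:

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyInfo
    import android.security.keystore.KeyProperties
    import java.security.KeyFactory
    import java.security.KeyPairGenerator

    // Generates an EC signing key inside the Android Keystore and reports
    // whether the key material is kept outside the Android OS (TEE/StrongBox).
    fun generateKeyAndCheckHardware(): Boolean {
        val generator = KeyPairGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"
        )
        generator.initialize(
            KeyGenParameterSpec.Builder("my_signing_key", KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .build()
        )
        val keyPair = generator.generateKeyPair()

        // Ask the keystore for metadata about the key we just created.
        val factory = KeyFactory.getInstance(keyPair.private.algorithm, "AndroidKeyStore")
        val keyInfo = factory.getKeySpec(keyPair.private, KeyInfo::class.java)
        return keyInfo.isInsideSecureHardware
    }

On devices without a TEE or StrongBox this returns false, and the key is only as safe as the software keystore (and ultimately the OS) protecting it.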

If your Android device or Apple device has telco capability (e.g. it is a phone), you might be open to attacks from the baseband processor, depending on how well your phone isolates the baseband from the application platform and who your attacker is. So that matters, too.

Without knowing what exactly you're dealing with, it's impossible to give a meaningful answer.

You should also consider that attacking the keystore might not be the easiest attack. The keystore is probably well secured, and flaws in its security are taken seriously and fixed by the smart people at Google and Apple. IMO it's more likely that your app has security flaws. If an attacker gains access to your app, e.g. manages to execute his own code in your app's process, he will also be able to use the cryptographic keys to encrypt, decrypt or whatever else you use them for, and it probably won't matter whether he can access the key material itself, because he can make use of the keys just like your app can.
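To illustrate the point with a sketch (signing is just an example operation, and it reuses the hypothetical "my_signing_key" alias from above): with a Keystore-backed key the app never sees the raw key material at all, yet any code running inside the app's process can still use it.

    import java.security.KeyStore
    import java.security.PrivateKey
    import java.security.Signature

    // Signs `data` with a key that lives in the Android Keystore. The key bytes
    // never leave the keystore, but anything executing in this process
    // (including injected attacker code) can produce valid signatures the same way.
    fun signWithKeystoreKey(data: ByteArray): ByteArray {
        val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        val privateKey = keyStore.getKey("my_signing_key", null) as PrivateKey
        return Signature.getInstance("SHA256withECDSA").run {
            initSign(privateKey)
            update(data)
            sign()
        }
    }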

Another attack might center on the process/app isolation of Android or iOS. Once an attacker manages to breach this isolation, or gains root privileges on your phone, he'll again very likely be able to make use of the keys wherever they are stored, even if he can't get at the keys themselves.

All that said, I'd add that IMO it's very hard to secure a consumer mobile phone platform, and if you're dealing with any kind of really important data (think e-voting systems or systems where human life is at stake), I'd be uncomfortable trusting such a platform to keep my data safe.

Out of Band
    "you'd only need to store the public key on the device, which isn't security critical and doesn't have to be protected." This is incorrect. When a public key is being used to validate data that an attacker wants to modify then the attacker will be interested in replacing the public key with her own public key (which she also has the private half of). This will allow her to convince your system to validate any data she cares to sign. – nolandda Nov 17 '17 at 17:48
  • Oh. You're right, of course! Can't believe I was this stupid. Thanks for pointing this out. – Out of Band Nov 25 '17 at 09:40