3

Our application needs to store a password in a database. It's not possible to store a hash, because the password is needed to access other data that are protected by this password.

Currently, the password is stored encrypted with a key that is part of the code. We are aware that there is no way to store the password in a 100% secure way, but what would improve security and why? For example, would a salt add security?

Restrictions:

  • The password is originally entered by a user into the application
  • Access to the password is required in non-interactive mode (so cannot be unlocked by a user)
  • Only solutions on the application level can be applied

Please consider the above restrictions, especially that I am looking for an application level solution. This means that it is not feasible to access os level internals, create additional services outside the application, or even set up separate servers.

Given that the password must be encrypted by the application, a secret is needed to do so, and this secret can only be part of the application, as I see it. Therefore, it looks like there are two possible scenarios:

  1. An attacker gains access to the stored (encrypted) password, but does not know the secret used to encrypt it.

  2. An attacker gains access to both the stored (encrypted) password and the secret in the application.

It looks like case 2 can be neglected, as there is obviously no way to protect the password in this scenario, is there? Can case 1 be considered safe, or is there a way to make it safer?

not2savvy
    Similar to https://security.stackexchange.com/questions/12332/where-to-store-a-server-side-encryption-key/12334 – mti2935 Jul 31 '20 at 00:26
  • Yes, the answer is to encrypt the password. The real question, however, is where to store the cryptographic key. The answer to that will probably be platform-specific. – John Wu Mar 28 '21 at 20:39
  • To really be secure you need to require the user to enter their "master" password each time. This would be used as part of the key/secret and would never be stored. – pcalkins Mar 23 '22 at 16:44
  • @pcalkins Restriction 2: Access to the password is required in non-interactive mode (so cannot be unlocked by a user) – not2savvy Mar 23 '22 at 16:46
  • If you must store it securely on the local side you will need access to os level stuff... (though I suppose a server could generate/send keys?) not sure why that's a problem here? The executable should be able to access things that will at least make the stored value as secure as the machine itself. For example the "DPAPI" solution mentioned. This will prevent the key from being exposed in your code. If the attacker can run executables on the machine all bets are off, though. – pcalkins Mar 23 '22 at 17:43
  • @pcalkins The application has access to persistent storage through an abstraction layer. However, the concrete type of storage to be used (local filesystem, database server, remote service, whatever) is configured by the user. While I understand that the storage can be secured in many ways (access control, encrypted database columns etc.), such options are transparent to the application. This question is about what the application itself can do to improve security. – not2savvy Mar 23 '22 at 19:13
  • Can you please describe the architecture of your system more clearly? Several things aren't clear, like where the "other data" are located, what platform(s) and permissions this needs to run on / with, and why you can't use platform-level security features. – CBHacking Mar 26 '22 at 13:06
  • @CBHacking It does not matter where the other data is stored (in fact, it may be in any place, as it is configurable), but we need a password to access these data. One example would be a keystore file that is protected by something like a passphrase. Our app runs within an architecture that provides only said abstraction layer. Maybe it's best to think of our app as a plugin to a larger cross-platform application that handles all the system level things. I'm sorry if my explanations aren't clear enough. – not2savvy Mar 26 '22 at 13:33

4 Answers

1

A salt does not help you, because a salt is applied to a cryptographic hash function, which isn't suitable for your scenario, as you have correctly analyzed. Your goal is to make access to the credentials (or the key to decrypt the credentials) as difficult as reasonably possible, given your limitations.
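To make the hash-vs-encryption distinction concrete: a salt hardens a one-way hash, and a one-way hash is exactly what you can't use here, because the plaintext must be recoverable later. A small Python sketch (stdlib only; the function name is illustrative):

```python
import hashlib
import secrets

def salted_hash(password: bytes, salt: bytes) -> bytes:
    # PBKDF2 is one-way: the password cannot be recovered from the digest,
    # which is exactly why it can't be used when the plaintext is needed later.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

salt = secrets.token_bytes(16)
digest = salted_hash(b"hunter2", salt)

# The salt only ensures that identical passwords hash differently:
assert salted_hash(b"hunter2", secrets.token_bytes(16)) != digest
# Same password + same salt is reproducible (this is how verification works):
assert salted_hash(b"hunter2", salt) == digest
```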

The following approaches are commonly used:

  • Depending on your operating system, you have different options to store credentials locally (e.g. the DPAPI from Microsoft).

  • A modern approach would be using a vault (e.g. from Vault Project) that manages the secure distribution of credentials in your infrastructure. In such a scenario, the credentials are stored on a central server that manages the access rights.

  • In addition, there are hardware-based and token-based keystores available, ranging from a simple smart card to a full-blown HSM.
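The vault approach boils down to plain HTTP against the secrets server. A Python sketch against Vault's KV v2 read endpoint; the address, token, and secret path are illustrative, and a real deployment would authenticate with a short-lived token from an auth method rather than a hard-coded value:

```python
import json
import urllib.request

# Illustrative values; a real deployment gets these from its environment.
VAULT_ADDR = "http://127.0.0.1:8200"
VAULT_TOKEN = "dev-token"

def secret_request(path: str) -> urllib.request.Request:
    # Build (but don't send) a read request against Vault's KV v2 endpoint.
    req = urllib.request.Request(f"{VAULT_ADDR}/v1/secret/data/{path}")
    req.add_header("X-Vault-Token", VAULT_TOKEN)
    return req

def read_secret(path: str) -> dict:
    # The JSON response nests the key/value pairs under data.data.
    with urllib.request.urlopen(secret_request(path)) as resp:
        return json.load(resp)["data"]["data"]
```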

All of those options are viable, and none of them is 100% secure. If someone is able to gain control of your application, they can do whatever the application can do - including accessing your credentials.

Demento
  • Could you please explain why a salt cannot be used with encryption? – not2savvy Mar 02 '20 at 23:25
  • @not2savvy - Salts are used for hashing, to reduce their vulnerability against rainbow table attacks. The length of the plaintext drives the size of the required rainbow tables and therefore influences their viability. For encryption (symmetric and asymmetric alike) the size of the plaintext doesn't matter, as long as you are using a state-of-the-art algorithm with appropriate modes. Therefore, a salt does not add additional security for encryption. – Demento Mar 03 '20 at 18:02
1

The usual answer here is that what you seem to be seeking is impossible and not worth pursuing. Just trust the OS-level security (access controls on files and processes, etc.) to prevent unauthorized access to the "other data" if it's on the same machine; if said "other data" is external, store the credential in a file (readable only to the user) in the user's profile.

There are some things you can do, but if OS key storage isn't an option, none of them is more than a speed bump to an attacker; they provide no actual security.

  1. Randomly generate a unique encryption key for every installation, rather than using a static key. This generated key still needs to be stored somewhere - probably just in a file - but that location can be different from where the credential is stored. The unique key adds an extra step (though no actual security) to decrypting the credential, and it avoids shipping a static key in the binary (which is generally bad).
  2. Have a long-running process that prompts the user for the password when they log in / start the app, then keeps the password in RAM. This makes it slightly harder for an attacker to extract the password from a running machine, since it's not written to disk (at least, not outside the swap data), but if they can run arbitrary code as you-the-user - which is assumed if they can access your files - then there are ways to extract it (depending on the platform). It does keep the password safe while the machine is off, but it also adds a hassle to logging into the machine / starting the app, and the app can't run completely unattended (somebody has to log in at some point). This is analogous to how tools like gpg-agent work.
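Option 1 can be sketched in a few lines of Python. The file path and function names are illustrative, and the HMAC-based keystream is used only to keep the sketch dependency-free; a real implementation should use a vetted AEAD cipher (e.g. AES-GCM from a proper crypto library) instead:

```python
import hashlib
import hmac
import os
import secrets

KEY_FILE = "install.key"  # illustrative; kept separate from the credential store

def install_key() -> bytes:
    # Generate a unique random key on first run; reuse it afterwards.
    if not os.path.exists(KEY_FILE):
        with open(KEY_FILE, "wb") as f:
            f.write(secrets.token_bytes(32))
    with open(KEY_FILE, "rb") as f:
        return f.read()

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # HMAC-SHA256 in counter mode as a keystream: a sketch, not production crypto.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hmac.new(key, nonce + block.to_bytes(4, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

def encrypt(password: bytes) -> bytes:
    # A fresh random nonce per encryption, stored alongside the ciphertext.
    nonce = secrets.token_bytes(16)
    return nonce + xor_stream(install_key(), nonce, password)

def decrypt(blob: bytes) -> bytes:
    return xor_stream(install_key(), blob[:16], blob[16:])
```

Note that this only moves the problem: anyone who can read both the key file and the stored blob can decrypt, which is exactly scenario 2 from the question.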

You have a lot more options if you can use platform features. Almost every platform (desktop Linux being the notable exception) has a standard, built-in way to store or at least encrypt secrets, available to all apps. Windows has the Credential Vault and also DPAPI, MacOS and iOS have the Keychain, Android has its own Keychain-like thing. Even on desktop Linux there's usually going to be one or the other of KWallet or GNOME Keyring running, or you can fake it yourself with gpg (all of these Linux options require the user to "unlock" them with a password after logging in, but many apps use them already; the other platforms unlock automatically via the user's login password). Storing the password in such a system is usually not any protection against malware running with the same privileges as the process, but it provides additional protection against cross-user malware and offline attacks.

CBHacking
  • I'm aware that there is no perfect solution here (see my question "We are aware that there is no way to store the password in a 100% secure way"). My question is if there is a way to _improve_ a non-perfect solution, at least for some scenarios (e.g. scenario 2 in my question). – not2savvy Mar 26 '22 at 13:38
  • +1 for the list of OS specific approaches in your last paragraph. – not2savvy Mar 26 '22 at 13:41
0

There are a couple ways this problem is approached. One is to configure the system to connect to a PKCS#11 server, and to use the HSM to decrypt the master password when the application service is started. (This becomes another chicken-and-egg problem when you realize you have to store the PKCS#11 credentials in a file.)

Another approach is to require a human to enter an "unsealing" password every time the service is started. While this can be more secure, it can't provide automatic recovery in the case of a failure.
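The unsealing step is essentially a key derivation at startup: only a salt is persisted, while the passphrase and the derived master key live in RAM. A sketch assuming a scrypt KDF (the function name and parameters are illustrative):

```python
import hashlib

def unseal(passphrase: str, salt: bytes) -> bytes:
    # Derive the master key from the operator's passphrase at service start.
    # Only the salt is stored on disk; the passphrase is never persisted.
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26, dklen=32)
```

The same passphrase and salt always yield the same key, so the service can decrypt its data after every restart, but a restart still requires a human.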

Yet another approach is to store your master key in an on-board Hardware Security Module (HSM), such as a TPM chip. You would have to consult your OS documentation for information about using this approach.

John Deters
  • Thank you for the suggestions. However, HSM and user action cannot be used. I've updated my question with relevant restrictions. – not2savvy Mar 02 '20 at 22:05
0

Answer withdrawn for now.

I'm leaving it up in case somebody else finds the question expecting an answer to the question I thought it was asking.

I thought you were talking about a client+server+external-server model, where the user needs the server to act on their behalf when the user isn't logged in, and the action the server takes requires accessing a third-party data store with the user's password. It seems this is mistaken.


If you're worried about somebody compromising your server (either an external attacker or a malicious / compromised insider), one option to mitigate that risk is to use a tokenizer service. You create a separate server (typically but not always a web service), with a very restricted API. This tokenizer server generally only allows one request from the Internet at large: store a secret (such as a password). The caller supplies a password to the tokenizer, and the tokenizer returns a unique, completely random, opaque blob of sufficient length (usually at least 128 bits) to the caller. The caller then passes that opaque blob to your main API server, in place of their actual password.

When your main service wants to use the user's password, it can't access it directly. The tokenizer will never expose a secret to anyone except a specifically delimited list of external endpoints (the consumers of the secrets, i.e. the services that the passwords are used to access). The API server takes the user's opaque blob (the one generated by the tokenizer) and passes it - along with a request that the service wants to send to an external endpoint - to the tokenizer using a private, secure, internal-only endpoint of the tokenizer. The tokenizer takes the request from the main service and looks for the opaque blob (the "token") in its internal DB (which is separate from, and not accessible by, the main service!). If the token is present, the tokenizer uses a key (typically stored in a vault / key management service, and also not accessible to the main service) to decrypt the corresponding secret (password). The tokenizer then replaces the opaque blob token in the request with the actual password and, if the request is for an approved destination, relays the now-password-including request to the external service. When the response comes back to the tokenizer, it relays that response to the main service.

The tokenizer service needs to be extremely locked down. No one person is allowed to deploy code to it, nobody is allowed to log into it, it has the bare minimum of functionality and ideally no external libraries, etc. The idea here is that, even if the main service is completely compromised, the attacker can't get into the Tokenizer and thus steal the passwords. They might still be able to use the passwords by relaying through the tokenizer, but if all they get is a dump of the main DB and key storage, that's still not enough because the tokenizer won't relay requests from anybody else.

In pseudocode:

Main API server:

[@WebService, scope=Internet]
void ProcessStoreTokenizedPassword(Request req, Response res) {
  User u = req.GetUser()
  // If no authenticated user, fail
  (TokenBlob tb, ExternalService es) = req.Parse<StoreTokenizedPassword>()
  // If request formatted incorrectly, fail
  mainApiDb.tokenizedPasswordsTable.Insert(u, tb, es)
}

[@WebService, scope=Internet]
void ProcessDoOperationOnExternalService(Request req, Response res) {
  User u = req.GetUser()
  // If no authenticated user, fail
  (ExternalService es, Operation o) = req.Parse<ActionOnExternalService>()
  // If request formatted incorrectly, fail
  TokenBlob tb = mainApiDb.tokenizedPasswordsTable.Select(u, es)
  // If no stored token for that user to that external service, fail

  // Create a request template for the tokenizer
  TemplateRequest tr = new TemplateRequest(es, o, tb)
  // Get the tokenizer to relay the request out, and the response back
  Response externalRes = tokenizerService.Proxy(tr)
  // Do stuff with the response, or if relaying failed for some reason, fail
}

Tokenizer service

[@WebService, scope=Internet]
void ProcessCreateToken(Request req, Response res) {
  //  Note that we don't care about user auth; anybody can call this
  (Secret s, ExternalService es) = req.Parse<CreateToken>()
  // If either value missing or invalid, fail

  // Generate a random token blob, encrypt the password, and store them
  TokenBlob tb = new TokenBlob(secureRandom.Generate(256))
  EncryptedBlob eb = s.Encrypt(tokenizerKey)
  tokenizerDb.encryptedPasswordsTable.Insert(tb, eb, es)

  // Return the token
  res.Send(tb)
}

[@WebService, scope=Private]
void ProcessRelayOperation(Request req, Response res) {
  // Verify that the request comes from the main API service, else fail

  TemplateRequest tr = req.Parse<RelayOperation>()
  // If the request is malformed, fail
  ExternalService es = tr.es
  Operation o = tr.o
  TokenBlob tb = tr.tb

  // Get the password corresponding to this token blob from the DB
  EncryptedBlob eb = tokenizerDb.encryptedPasswordsTable.Select(tb, es)
  // If there's no encrypted password for that token,
  // or it wasn't created for use with this external service, fail
  Secret s = eb.Decrypt(tokenizerKey)
  // Replace all instances of the token blob in the operation with the real password
  o.Replace(tb, s)
  Request extReq = new Request(es)
  Response extRes = extReq.Send(o)
  // Relay the result of the external call - succeed or fail - to the main server
  // But first make sure the secret didn't get reflected in the response for some reason
  extRes.Replace(s, tb)
  res.Send(extRes)
}

Note that I'm still eliding a lot of stuff, like exactly where the tokenizerKey is stored and how it's protected. Basically follow the advice in other answers, except for this key it's scoped only to this one super-limited service. You might also have a few other internal endpoints to do things like delete tokens that are no longer needed (e.g. because the user deleted their account from the main service or rotated their credential for a given external service). You might also want a cleanup function that deletes tokens which aren't in the main service DB, in case somebody tries to spam the tokenizer with a bunch of junk tokens.

CBHacking
  • By "main API server", do you mean the application or service that I need the password for? – not2savvy Mar 23 '22 at 15:08
  • No, the "main API server" is *your* service, the one that is collecting and storing the user's passwords for accessing a third-party service (the "external service"), and presumably doing something useful with them. If your application is entirely local (runs on user hardware with no server component), then this approach can still work (if users trust the tokenizer) but that's a sort of weird situation because usually in that case, the user would just enter the password on demand, or store it in a local keychain/vault on their machine/network. – CBHacking Mar 23 '22 at 15:22
  • Then who is the caller in "The caller then passes that opaque blob to your main API server"? If that is the place where the data is stored that our application needs the password to access them, then this isn't feasible, because it is not possible to change the way in which to access those data to anything else than with said password. – not2savvy Mar 23 '22 at 15:28
  • @not2savvy "The caller" in that sentence is the application (webapp, thick client, whatever) that is interacting with your server. I don't understand the rest of the comment; the data is presumably stored externally (on a third-party service where the user's password is used to gain access). In a system where you don't have a "main API server" but do have a tokenizer (I realize this is unlikely), the user would put the password in the tokenizer, get back an opaque token blob, and put that opaque blob into the app. – CBHacking Mar 26 '22 at 12:58
  • Understood. And yes, since there is no tokenizer, the solution cannot be applied in our case. – not2savvy Mar 26 '22 at 13:27