29

PrivateSky is a website that promises encrypted "cloud-like" secure information exchange. They promise that nobody except the sender and receiver can see your data. After testing it yesterday, I do not understand how this is possible.

Let me explain step by step what I did. Assume I have two email addresses, A and B. I want to send an encrypted message from A to B. So I proceeded as follows:

  1. I created a PrivateSky account for my email address A.
  2. I clicked on the link I received on A, to confirm my email address.
  3. Back on PrivateSky, I created a PIN for A.
  4. Logged in as A on PrivateSky, I sent an encrypted message to B (without first creating a PrivateSky account for B).
  5. I created a PrivateSky account for my email address B.
  6. I clicked on the link I received on B, to confirm my email address.
  7. Back on PrivateSky, I created a PIN for B.
  8. My message from A appeared in the "Inbox" on B's PrivateSky account.

Again, how can they not see my data? There is no information that I sent directly from A to B; all communication went through PrivateSky. Also, B "received" the message before creating a PrivateSky account, so how could the encryption key be securely transmitted from A to B?

user1202136
  • 595
  • 4
  • 8

4 Answers

37

This is Brian Spector; I'm the CEO of CertiVox.

Thanks for starting this discussion on PrivateSky, our secure information exchange service. I’ll do my best to answer the intelligent comments and criticisms on this thread.

First, let's refute the statement that we are "lying". We are not; indeed, we are trying to be as transparent as possible about how we do what we do. It's hard to argue with that kind of blanket statement, so rather than go there ("so Mr. Jones, when did you stop beating your wife?"), let's just go through the intelligent points on this thread, as there are quite a few.

Second, I'll do my best to walk the reader through, step by step, how our integrated key management and two-factor authentication work.

I invite the folks on this thread who haven't read the white paper to please do so. Everything I am writing here is already laid out in detail in that document, which is publicly available on our website. But since no one reads white papers….

In that spirit of transparency, we also open-source our MIRACL SDK, freely available from our website. MIRACL powers PrivateSky, and the same cryptographic processes in use in PrivateSky are available as libraries in the MIRACL SDK, so you can review the code. MIRACL is in use by hundreds of companies around the globe and has a fairly decent reputation for very fast elliptic curve cryptography optimized for constrained environments. It is trusted by many organizations, and we take that trust extremely seriously; it's our bread and butter.

To correct Lucas’ statement here:

Probably the simple answer is: it's not.

If I read correctly they just seem to have used a form of symmetric encryption to authenticate yourself to the system after which you can access your private keys to encrypt data (the private keys are encrypted with your pin).

That's not correct; let's clear that one up first. The SkyPin that logs you into PrivateSky with your email address and 4-digit PIN is an elliptic curve authenticated key agreement protocol that uses two factors of authentication. It is based upon an IEEE P1363.3 draft standard, Wang's protocol, combined with our chief cryptographer Dr. Michael Scott's protocol for remote login with a simple token and a 4-digit PIN using elliptic curve bilinear pairings.

Note that Mike's protocol has been peer reviewed extensively since its first publication in 2002 and has withstood over a decade of cryptanalysis; we've published an updated version, this time adapted for use with Wang's protocol.

Of the two factors, the first is a mathematical token stored in your browser, locked to the domain of our authentication service; the second is your PIN. It works like this: the SkyPin Pad you use is actually performing a reconstitution of an identity-based encryption key issued in the ID of your email address, which we issue to you when you register. During registration, the SkyPin Pad applies an equation to the IBE key you were issued, using the 4 digits you select. This process is performed locally; we don't know what those 4 digits are. At that point, the result becomes what we call a "token". The token is stored in your browser's HTML5 local storage, not session storage (i.e., not a cookie), and is locked to our authentication service's domain.
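
To make that concrete, here is a deliberately simplified sketch of the extract-a-PIN-from-a-key idea. It is emphatically not our implementation: it uses plain integer arithmetic modulo a prime in place of the pairing-friendly elliptic curve group, and the `hash_to_element` helper is a hypothetical stand-in for hashing an email address to a curve point.

```python
# Toy sketch (NOT the real implementation): splitting an identity-based key into
# a browser token plus a 4-digit PIN. PrivateSky/MIRACL do this with points on a
# pairing-friendly elliptic curve; plain integers mod a prime only show the shape.
import hashlib
import secrets

P = 2**255 - 19                                  # toy modulus, stand-in for the group order

def hash_to_element(identity: str) -> int:
    """Hypothetical hash-to-group-element, standing in for hashing an email to a curve point."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % P

# --- registration (done once, locally in the browser) ---
master_secret = secrets.randbelow(P)             # held only by the (distributed) PKG
identity = "alice@example.com"
issued_key = (master_secret * hash_to_element(identity)) % P   # IBE key issued to Alice

pin = 1234                                       # chosen by the user, never sent to the server
token = (issued_key - pin * hash_to_element(identity)) % P     # stored in HTML5 local storage

# --- login (every time, locally in the browser) ---
entered_pin = 1234
reconstituted = (token + entered_pin * hash_to_element(identity)) % P
assert reconstituted == issued_key               # only the correct PIN restores the key
```

The property to take away is that the browser only ever stores the token; the full key exists transiently, in memory, once the correct PIN is entered.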

The next time you log in, the SkyPin Pad is served from our web server (over TLS, on a domain secured by DNSSEC) and runs locally in your browser. The SkyPin Pad takes as input your 4 digits and reverses the equation, reconstituting the identity-based encryption key you were issued when you registered; again, this is performed locally. This is Scott's protocol. The next step is an authenticated key agreement based upon Wang's protocol, whereby encrypted nonces tied to the IDs of the server and of the individual are exchanged, so the two parties authenticate mutually. At the end of the protocol, both the server and the individual have identified themselves to each other, and each side has at its disposal an AES 128-bit key, the session key. Hence the term: authenticated key agreement protocol.

So no, this system does not use symmetric keys to authenticate users. It's a two-factor authentication system based upon elliptic curve cryptography, running in JavaScript and tailored for HTML5-compatible browsers, that performs authenticated key agreement.
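
If the "both sides end up with the same AES key without ever sending it" part sounds odd, here is an analogous sketch using a plain ECDH exchange and the Python `cryptography` package. This is not Wang's pairing-based protocol and carries no identity binding or second factor; it only shows how a Diffie-Hellman-style agreement leaves each end holding the same 128-bit session key.

```python
# Analogous sketch (not Wang's protocol): client and server each derive the same
# AES-128 session key without that key ever crossing the wire.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates an ephemeral key pair and sends only the public half.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the other's public key...
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())

def session_key(shared_secret: bytes) -> bytes:
    """Derive a 128-bit AES session key from the raw shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"toy session key").derive(shared_secret)

# ...and both arrive at the same AES-128 key.
assert session_key(client_shared) == session_key(server_shared)
```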

Lucas is also incorrect with this statement (note, Lucas, that generally you don't decrypt with public keys; you verify signatures with public keys and encrypt with public keys):

It still might be decrypted with a the other persons public key browserside. But then again everyone has a public key available, they just need to fetch it from the server and apply it by the browser. – Lucas Kauffman5 hours ago

And by association Rasmus is correct with this statement (thank you for reading the white paper, btw):

According to the white-paper they use SK-KEM, which is an identity-based encryption scheme. This explains why you do not need the public key of B in order to send an encrypted message to him (the email-address is the public key).

In an exponent inversion system like the variant of SK-KEM that we are using, it's actually a key encapsulation mechanism, whereby the content encryption key is encapsulated (i.e., encrypted) using a protocol whose inputs are the global public key of the system and the email address of the recipient. The message is signed using the ECCSI protocol with the user's private key issued from our private key generator. The user employs a different private key, also issued from our private key generator, to decrypt a message, using the SAKKE protocol (a variant of SK-KEM). The cool thing about SAKKE is that it also has the ability to federate private key generators: assuming a user has access to the public parameters of another's private key generator, user A, whose private key was issued by PKG A, can communicate with user B, whose private key was issued by PKG B.

Both ECCSI and SAKKE are part of an IETF informational draft called MIKEY-SAKKE, which is being standardized to provide voice and data encryption over 3G networks. MIKEY-SAKKE has strict guidelines on the operation of the PKG, covering such nuances as master key rotation (every 30 days) and packet assembly.

Up until recently, it was assumed that you could not distribute the private key generator in an exponent inversion system. Thankfully, Smart and others devised a multiparty computation protocol that made it possible. Building further on that work, Smart and Geisler published a paper called "Distributing the Key Distribution Centre in Sakai–Kasahara Based Systems".

Effectively, you can split the PKG into distributed private key generators, or D-PKGs. The nice thing about this is that after the initial setup, the master key doesn't live anywhere in whole; it is split into thirds. Only two of the three nodes need to be available to serve shares of the private key to the intended party. The master node is taken offline and discarded after setup. Assuming you have built processes to do this, shares of private keys in such a system can be created dynamically as they are requested.
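
For readers who want a feel for the "split into thirds, any two suffice" property, here is a minimal 2-of-3 Shamir-style sharing over a toy prime field. It illustrates threshold sharing in general, not the Smart–Geisler construction; in particular, our D-PKGs never reconstruct the master key in one place, whereas this toy recovers it explicitly.

```python
# Hedged sketch: 2-of-3 secret sharing of a master key with a degree-1 polynomial.
# Any two shares recover the key; a single share reveals nothing on its own.
import secrets

P = 2**127 - 1                                    # toy prime field (Python 3.8+ for pow(x, -1, P))

def split_2_of_3(master_key: int):
    """Split master_key into 3 shares as points on the line f(x) = master_key + a1*x."""
    a1 = secrets.randbelow(P)                     # random slope
    return [(x, (master_key + a1 * x) % P) for x in (1, 2, 3)]

def recover(share_a, share_b) -> int:
    """Lagrange interpolation at x = 0 using any two shares."""
    (xa, ya), (xb, yb) = share_a, share_b
    la = xb * pow(xb - xa, -1, P) % P             # Lagrange coefficient for share_a
    lb = xa * pow(xa - xb, -1, P) % P             # Lagrange coefficient for share_b
    return (ya * la + yb * lb) % P

master = secrets.randbelow(P)
s1, s2, s3 = split_2_of_3(master)
assert recover(s1, s3) == master                  # any two of the three shares suffice
assert recover(s2, s3) == master
```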

Rasmus makes this statement:

Usually identity-based encryption relies on a trusted third party. So their claim that they cannot decrypt the messages themselves is based on their having placed the master private key and the associated private key generator inside what basically amounts to a hardware security module.

And Lucas makes this statement:

Yes, it seems they just put the trust somewhere else. They can't decrypt it their selves, but their claim that no-one else can seems to be false

Rasmus is partially correct, and Lucas is not correct. It's actually several hardware security modules, none of which possesses the master key in whole. We take additional steps: the D-PKG nodes go into HSMs that are tamper-proof and tamper-resistant, with integrity verification upon boot-up of the VM, and they carry no state (other than their share of the master key and the code to distribute shares upon authenticated request). We cannot access the shares of the D-PKGs without ruining some expensive kit. But let's assume we could reconstitute the shares of the PKG.

Because we are following the MIKEY-SAKKE operational guidelines, these D-PKGs are only in operation for one month; the master key and the private keys issued from it are only good for one month, and the entire key population is rotated every month.

In PrivateSky, the user is asked to create a unique passphrase. This is used to create a 32-byte long-lived ECDH private key (and the corresponding public key), which can also be used in an ECIES setting. The user's ECDH public key is stored in our directory. The D-PKGs have an ECDH key pair that is not long-lived; it lasts only for the month. When new keys are issued, a DH shared secret between the D-PKG's monthly private key and the user's ECDH public key is derived and used to encrypt the newly issued ECCSI and SAKKE private keys for that user. This happens for every user. The encrypted keys are stored within our directory for the user so they can decrypt old messages outside of the 30-day master key rotation window.
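
Here is a hedged sketch of that escrow idea using the Python `cryptography` package: scrypt derives a 32-byte key from the passphrase, X25519 stands in for the actual ECDH curve, and AES-GCM stands in for the wrapping scheme. The helper names, salt handling and parameters are illustrative only, not what PrivateSky ships.

```python
# Sketch: passphrase-derived long-lived ECDH key, used with the D-PKG's monthly
# key pair to wrap newly issued private keys so only the user can unwrap them later.
import os
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def long_lived_user_key(passphrase: str, salt: bytes) -> X25519PrivateKey:
    """Derive a 32-byte long-lived ECDH private key from the user's passphrase."""
    seed = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase.encode())
    return X25519PrivateKey.from_private_bytes(seed)

salt = os.urandom(16)
user_priv = long_lived_user_key("correct horse battery staple", salt)
dpkg_month_priv = X25519PrivateKey.generate()        # the D-PKG's key pair for this month

def wrap_key(shared: bytes, issued_private_key: bytes) -> bytes:
    """Encrypt a newly issued ECCSI/SAKKE private key under a DH-derived AES-256 key."""
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"key wrap").derive(shared)
    nonce = os.urandom(12)
    return nonce + AESGCM(kek).encrypt(nonce, issued_private_key, None)

# The D-PKG wraps the monthly issued key for the user's directory entry...
issued_sakke_key = os.urandom(32)                    # placeholder for the real issued key
blob = wrap_key(dpkg_month_priv.exchange(user_priv.public_key()), issued_sakke_key)

# ...and later only the user, with their passphrase-derived key, can unwrap it.
shared = user_priv.exchange(dpkg_month_priv.public_key())
kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"key wrap").derive(shared)
assert AESGCM(kek).decrypt(blob[:12], blob[12:], None) == issued_sakke_key
```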

The point being, after the 30-day window in which the shares of the master key are in use, the VMs are destroyed, the key material inside the HSMs is wiped, and we start over again.

So let’s outline the general attack scenarios:

Inside the 30-day window, in order for us to decrypt the data (or turn over keys when we are served with requests for information):

  • We would need to carefully hack an HSM validated to FIPS 140-2 Level 3 without destroying the unit, twice, in two different data centers.

Outside the 30-day window:

  • We would need to crack AES-256 in order to decrypt the archived private keys, OR
  • Someone would have to crack the AES-128 key for every single message.

You are probably wondering, why in the hell have these guys gone to such great lengths to construct such a system? Especially in light of this admission:

user1202136 makes this statement:

Thanks for clarifying. So instead of trusting a software company, now you trust a hardware company and that the software company actually uses the provided hardware. It seems to me that you need to trust quite a few people when compared to PGP.

That's correct. Let me ask this in a more direct way: on a trust scale, is it better to use PGP or PrivateSky if you want to be 100% sure that no one can see your data? Answer: there is no question that creating your own private key via a system such as PGP is better as a trust model. No question at all.

There is only one problem. PGP and the current state of the art are too damn hard for the general population at large to use. Not the people on this board, mind you; I mean my daughter, or her friends, or my parents.

In PrivateSky, using a PIN, a passphrase and your email address, we can get you close, with two-factor authentication as a bonus. And you don’t have to learn anything. You don’t have to do anything different to use PrivateSky than you would any other web property around. That’s why we built it.

The people at CertiVox sincerely believe that just because we want to communicate something through the Internet doesn't mean the government, the NSA, Dropbox, Google or any media company has the right to use my family's information or mine in the largest data mining scam ever perpetrated on the human race.

From a business perspective, our architecture also accomplishes the following. We will be served with requests for information from authorities; that's a fact of life when you run a SaaS business. Thankfully, in the UK and the EU, there is due process and law for this. How we comply, and our ability to prove the extent of our compliance, rests with the architecture we develop. If your data is accessible to us in the clear, then we have to turn it over. If it's not, then we still have to turn it over. But if what we turn over is encrypted and we don't possess the keys, then what good is the data (it's encrypted), and what good is serving a FISA warrant or EU equivalent on us? Complying with requests for information is really, really expensive for a young company. Not being a target for such requests is a competitive edge.

Which leads me to my next point. The team at CertiVox has made a bet that there is a market for a secure information exchange service whose business model depends on NOT seeing your data. We believe (maybe naively) that there are enough people and organizations out there who are concerned about the erosion of privacy but don't have the technical expertise to do anything about it. Dropbox needs to see your data to de-duplicate. Google needs to see your data to serve you ads. We don't do either and never will.

To that end, we will soon introduce the capability for anyone to run their OWN distributed private key generators and register these on our system (through the SAKKE federation capabilities), completely taking us out of the loop. You control your own population of users, and you can rotate master keys at will, revoke users, etc. That WILL give you the equivalent of PGP's trust model.

Lastly, let's go through the steps of how our integrated key management system works. We call this Incognito Keys. You can download an infographic PDF that is a visual diagram of the following explanation.

To encrypt and sign a message:

  1. Alice logs into PrivateSky with her SkyPin and email address. The two-factor authenticated key agreement protocol produces an AES 128 bit key for use at the browser end and at the authentication server’s end.
  2. The authentication server passes the AES key to our “black boxes”, the D-PKG nodes.
  3. The D-PKG nodes assemble the shares of Alice’s ECCSI key and encrypt these with the AES key.
  4. Alice receives the encrypted ECCSI key (shares), decrypts the shares with her AES key, assembles the ECCSI key from the shares, and is ready to go.
  5. Alice writes a message to Bob and encrypts the message with an AES key, called the content encryption key.
  6. Alice encapsulates the content encryption key with inputs of the Global Public Key and Bob’s email address and appends the encrypted / encapsulated key to the message.
  7. Alice signs the message with her ECCSI key.

To decrypt and verify a message (a runnable sketch of both flows follows this list):

  1. Bob logs into PrivateSky with his SkyPin and email address. The two-factor authenticated key agreement protocol produces an AES 128 bit key for use at the browser end and at the authentication server’s end.
  2. The authentication server passes the AES key to our “black boxes”, the D-PKG nodes.
  3. The D-PKG nodes assemble the shares of Bob’s SAKKE private key and encrypt these with the AES key.
  4. Bob receives the encrypted SAKKE private key (shares), decrypts the shares with his AES session key, assembles the SAKKE private key, and is ready to go.
  5. Bob receives the message in his portal, which contains the encapsulated content encryption key.
  6. Bob verifies Alice’s signature, and de-encapsulates the content encryption key with his SAKKE private key.
  7. Bob uses the content encryption key to decrypt the message.
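
Here is a hedged, end-to-end sketch of the two flows above in Python. SAKKE and ECCSI aren't available in standard libraries, so this stands in X25519 for the identity-based key encapsulation and Ed25519 for the ECCSI signature (in the real system Alice needs only Bob's email address and the global public key, not a fetched public key), and it uses the `cryptography` package. It shows the shape of steps 5–7 on each side, not our implementation.

```python
# Sketch of hybrid encrypt-and-sign / verify-and-decrypt. Stand-ins: X25519 for
# SAKKE encapsulation, Ed25519 for the ECCSI signature, AES-GCM for the content cipher.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-ins for the keys the D-PKGs would issue after SkyPin login (steps 1-4).
alice_signing_key = Ed25519PrivateKey.generate()     # stand-in for Alice's ECCSI key
bob_decryption_key = X25519PrivateKey.generate()     # stand-in for Bob's SAKKE key

# --- Alice: encrypt and sign (steps 5-7) ---
cek = AESGCM.generate_key(bit_length=128)            # fresh content encryption key
nonce = os.urandom(12)
ciphertext = AESGCM(cek).encrypt(nonce, b"Hi Bob, this is private.", None)

# Encapsulate the CEK "to Bob": ephemeral ECDH in place of SAKKE encapsulation.
# (The ephemeral public key would travel with the message in a real protocol.)
eph = X25519PrivateKey.generate()
wrap_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"cek wrap").derive(eph.exchange(bob_decryption_key.public_key()))
wrap_nonce = os.urandom(12)
encapsulated_cek = wrap_nonce + AESGCM(wrap_key).encrypt(wrap_nonce, cek, None)

signature = alice_signing_key.sign(nonce + ciphertext + encapsulated_cek)

# --- Bob: verify and decrypt (steps 5-7 of the second list) ---
alice_signing_key.public_key().verify(signature, nonce + ciphertext + encapsulated_cek)

unwrap_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                  info=b"cek wrap").derive(bob_decryption_key.exchange(eph.public_key()))
recovered_cek = AESGCM(unwrap_key).decrypt(encapsulated_cek[:12], encapsulated_cek[12:], None)
print(AESGCM(recovered_cek).decrypt(nonce, ciphertext, None))   # b'Hi Bob, this is private.'
```

The essential property carried over from the real system is the hybrid structure: the message is always protected by a fresh content encryption key, and only that small key is encapsulated to the recipient.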

So there you have it. You can argue with the math, the implementation or our intentions, but hopefully we have made all three clear.

Thanks for reading all the way through, and please sign up for PrivateSky! You can see all of this working in action for yourself.

Brian

Glorfindel
  • 2,263
  • 6
  • 19
  • 30
Brian Spector
  • 346
  • 1
  • 2
  • 6
  • Sorry, my several links were not accepted because I don't have enough points yet. All of that information I referenced is publicly available. – Brian Spector Mar 30 '12 at 11:53
  • The link to the infographic is here: http://privatesky.me/storage/documents/PrivateSky-SIX-Incognito-Keys.pdf – Brian Spector Mar 30 '12 at 11:53
  • [PrivateSky](http://privatesky.me) – Brian Spector Mar 30 '12 at 15:55
  • 2
    If a user of yours was a target of an investigation, a government may forbid you from doing your 30 day rotation of your D-PKG instances, and require you to turn them over for forensic analysis. A well-resourced government may be able to bypass the HSM and recover the private keys. Is there anything you do to protect against that? Also, the decryption runs client side in JavaScript. What protection is there against malicious JavaScript being injected (by you, by an XSS attack, or by someone who gets their hands on a root cert and MITMs your web site) and stealing the user's private keys? – Brian Campbell Mar 30 '12 at 17:41
  • In general I was wrong :p – Lucas Kauffman Mar 30 '12 at 19:25
  • and my sincere apologies for the criticism, I should have looked longer for the white paper. – Lucas Kauffman Mar 30 '12 at 19:40
  • 1
    @BrianSpector You should have enough rep now to put your links back in if you like ;) – Zhaph - Ben Duguid Mar 30 '12 at 21:02
  • All links to documents on privatesky.me are dead. – Marek Sebera Apr 14 '14 at 18:17
  • Yes, we took PrivateSky down. You can read why on the CertiVox Wikipedia entry page. – Brian Spector May 29 '14 at 15:05
12

According to the white-paper they use SK-KEM, which is an identity-based encryption scheme. This explains why you do not need the public key of B in order to send an encrypted message to him (the email-address is the public key).

Usually identity-based encryption relies on a trusted third party. So their claim that they cannot decrypt the messages themselves is based on their having placed the master private key and the associated private key generator inside what basically amounts to a hardware security module.

Rasmus Faber
  • 397
  • 2
  • 11
  • 2
    Thanks for clarifying. So instead of trusting a software company, now you trust a hardware company and that the software company actually uses the provided hardware. It seems to me that you need to trust quite a few people when compared to PGP. – user1202136 Mar 29 '12 at 09:07
  • 2
    Yes, it seems they just put the trust somewhere else. They can't decrypt it their selves, but their claim that no-one else can seems to be false. – Lucas Kauffman Mar 29 '12 at 09:08
  • 2
    @LucasKauffman : the same applies to those (conspiracy) theories that say the NSA has some code inside Intel processors, and when you try to generate keys inside it, it'll choose weak random numbers, etc... – woliveirajr Mar 29 '12 at 12:37
7

To answer Brian Campbell's questions / statements point by point:

If a user of yours was a target of an investigation, a government may forbid you from doing your 30 day rotation of your D-PKG instances, and require you to turn them over for forensic analysis.

It may happen, but it's highly unlikely in the EU and the UK (not so unlikely in other domiciles). Also, as mentioned previously, because they are fire-and-forget code blocks in the HSMs, because they carry no state, not even the whole master secret, and because the HSMs are validated to FIPS 140-2 Level 3, it's going to be a pretty hard nut to crack. Further, stopping us from doing a rotation may constitute a violation of the Computer Misuse Act in the UK, so if it did happen, at least it would make for great legal theatre.

A well-resourced government may be able to bypass the HSM and recover the private keys. Is there anything you do to protect against that?

Could you please provide more detail? Many of us on the team, myself included, have worked extensively for commercial HSM vendors. We're not aware of any technical way this could be accomplished against a properly manufactured HSM validated to Level 3. If you know of one in detail, then please do say so.

When the private keys are escrowed and protected with an AES-256 key that can only be created by you, we are relying on AES-256 to do its job. If that's cracked, then we've all got bigger issues to worry about than PrivateSky.

Also, the decryption runs client side in JavaScript. What protection is there against malicious JavaScript being injected (by you, by an XSS attack, or by someone who gets their hands on a root cert and MITMs your web site) and stealing the user's private keys?

First, we don't use ANY certificates in our protocols; this is called certificateless cryptography. (Note that we don't use the same system described in the wiki article, but we believe we are resistant to Type 1 and Type 2 adversaries.) We do use server-side TLS certificates to prevent MITM attacks on the secured portal, and we also secure our domain with DNSSEC to prevent cache poisoning. The front-of-house authentication solution, SkyPin, could actually operate without TLS (if we chose to let it), but it can't operate without DNSSEC. DNSSEC is the much more critical component, simply because the authenticated key agreement protocol is the same kind of protocol that trades big numbers in the clear every day (SSL / TLS) whenever a Diffie-Hellman key agreement session is established. That's one of the accepted ways an SSL / TLS session can be established. It sounds like magic: you trade big numbers back and forth in the clear between server and client, and even a man in the middle won't know the session key at the conclusion of the protocol. That's what SkyPin is doing (at a really high level).
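
For anyone who hasn't seen that trick before, here is the classic (unauthenticated) Diffie-Hellman exchange with deliberately tiny numbers. Real TLS, and SkyPin's authenticated variant, use enormous groups and add mutual authentication on top; the tiny values below are purely for illustration.

```python
# Toy Diffie-Hellman: everything exchanged is public, yet an eavesdropper who sees
# only the exchanged values still can't compute the shared session secret.
p, g = 23, 5                      # public prime and generator (tiny for illustration)

client_secret = 6                 # never leaves the client
server_secret = 15                # never leaves the server

client_sends = pow(g, client_secret, p)    # 8  -- visible to a man in the middle
server_sends = pow(g, server_secret, p)    # 19 -- visible to a man in the middle

# Each side combines what it received with its own secret...
client_shared = pow(server_sends, client_secret, p)
server_shared = pow(client_sends, server_secret, p)
assert client_shared == server_shared == 2  # ...and both arrive at the same secret value
```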

So with the authentication process secured by SSL / TLS, and more importantly DNSSEC, we feel like what you are describing is, while possible, really remote. Of the things you mentioned, here is the most likely way a user would be compromised:

  • Malicious JavaScript / XSS attack

We can see this happening pretty easily if you had a malicious browser plug-in AND host malware that was doing screen recording. If the browser plug-in actually attacked the browser in such a way that it became possible to steal items from local storage (a catastrophic attack on a browser) without them being locked to a domain, AND your host computer was infected with malware that was recording your screen, then it would be theoretically possible for an attacker to steal your SkyToken out of the browser's storage, record your PIN input (although we have already taken steps to mitigate this scenario), and then, with the same attack, re-enter the PIN in another browser and perform the authenticated key agreement protocol, posing as you, on another computer, and gain access to your account. Keep in mind that BOTH of these things need to happen without your knowledge. That's a tall order if you have at least host anti-virus running, but it is possible. The attacker would also need to bypass our fraud screening at the login process; meaning, the attacker can't just take your PIN / token out of your machine in Peoria and start logging into the service from China straight away. We screen for geolocation, among other things.

  • The root cert of our TLS cert vendor is compromised and you browse to another site thinking it's PrivateSky.

Unfortunately, there have been WAY TOO MANY reported incidents like this recently. This is something we do worry about, a lot, and it's actually why we implemented SkyPin. If we were only logging you in with a username and password into this secured portal, without DNSSEC on our domain, we'd be vulnerable to the same attack that Google experienced courtesy of the Revolutionary Guards last year.

Unless you see the SkyPin prompt, don't log in to PrivateSky. So, as the inverse of the attack outlined above, pulling this attack off would require the attacker to compromise the DNSSEC protocol, compromise the root cert of our TLS cert vendor, AND compromise our servers in order to perform a MITM attack and gain access to your private keys as they are generated dynamically and sent down to you. As stated previously in the first answer (and again, it's in the white paper), the shares of the private keys are themselves encrypted at the D-PKG with the session key that is only available in your browser as a consequence of going through the SkyPin authenticated key agreement protocol with our authentication service. You would need to actually authenticate to us, the attacker would need to steal the session key out of your browser via some malware on the host, AND it would also mean compromising our TLS vendor's root cert and DNSSEC. I'm not saying it's not possible, but it would require some serious, serious state-sponsored resources.

  • Malicious javascript injected by CertiVox / PrivateSky

We actually worry about this too. I'm not going to reveal what our internal control processes are, but we are pretty diligent about it. Keep in mind, though, that our business "bet", as mentioned previously, is that there is a market for a service like ours where the key differentiator is NOT seeing your data. So in this scenario, it's an internal rogue employee screwing with the code. Possible, but unlikely considering the safeguards we do have in place.

Brian Spector
  • 346
  • 1
  • 2
  • 6
  • 2
    Very impressive responses. Never heard of PrivateSky before, but will look into it. Good to see such an engagement with the community and your motivation (especially in light of other software giants who seem less inclined to protect users' privacy)! I have to admit though, the name sounds a little scary rather than reassuring... perhaps because it reminds me of [skynet](https://en.wikipedia.org/wiki/Skynet_(Terminator)). – Yoav Aner Apr 14 '12 at 12:58
2

As an update to the answer I provided to Brian Campbell: two security researchers, one of whom is Aldo Cortesi, a fantastic information security pro, pointed out that PrivateSky IS vulnerable to a catastrophic compromise of our SSL certificate vendor.

As Aldo said (Aldo, I hope you don’t mind me quoting you),

“…PrivateSky is not resistant to attack if the server is malicious or if there's an SSL cert compromise…”

Here is how the SSL cert attack could work (this is a gross simplification, but generally correct):

  1. A government / state-sponsored entity must first compromise the certificate vendor (Globalsign or Verisign) that we use to secure our SSL sessions (this is not trivial; I believe security has been upgraded at both these vendors following the Diginotar attack).
  2. A government entity / state sponsored entity would set up a tap on the network connection between PrivateSky and the user, to facilitate a MITM attack after choosing their attack victim.
  3. After the user logs in, the user’s browser sends a request to PrivateSky for a message, and the attacker slips in malicious lines of JavaScript into the response.
  4. This malicious JavaScript could then decrypt a message and send the contents through an AJAX call to a server that records the contents of the message in a query-string log file.

Aldo goes on to say,

“Two things need to be said about this. First, this doesn't necessarily strike at the heart of PrivateSky's value proposition. The product is resistant to a range of other problems that plague conventional services - bulk user data leaks in case of a compromise, legal pressure to disclose client information, and so forth.

Second, in my opinion making a 100% web-based product that is resistant to malicious servers and SSL interception is impractical at the moment, definitely requires something "external" to the app itself (like a browser extension), and comes at a big cost of user convenience. When I made cryp.sr, I was trying to approach exactly this problem, and I did it by checking the integrity of the "package" of data from the server with a cryptographic hash, in a browser extension, with valid hashes distributed through a totally independent channel. See the (long) blog posts I wrote on this here:

http://corte.si/posts/security/hostproof.html

http://corte.si/posts/security/crypsr.html

If there was a browser-only way to solve this problem, it would certainly be ground-breaking. I'm fairly convinced that there's no way to do this, though.

Something else I'd like to point out is that although we know that the SSL certificate trust system is broken, it's not a good idea to try to counteract this at the application level. There are other solutions that security-conscious clients could use to fix this. One possibility is client-side certificate pinning, where the user specifies that a cert has to be signed with a certain public key to be valid. Chrome has nascent support for this, used for all the Google properties through their STS module:

http://dev.chromium.org/sts

Another possibility is Convergence, an alternative cert validation scheme recently released by Moxie, a very eminent security researcher:

http://convergence.io/

It's not clear whether Convergence will take off, but it provides good, robust protection against SSL interception, completely independently of the conventional SSL web of signing trust. So, my basic point is that whatever Certivox does, it's probably not worth trying to repair or compensate for the possibility of SSL interception at the HTTP app level.

Another thing we have to keep in mind is the relative probabilities of the types of attack we're talking about here. At the moment, only a very few entities have the ability to create valid-looking malicious SSL certificates - basically, we're talking about intelligence agencies and perhaps a few other folks who have compromised a trusted signatory. The ability to do this is jealously guarded and used extremely sparingly.

By its nature, it's possible to detect when an SSL certificate like this is being used for interception, simply because the signing identities will necessarily change (which is what cert pinning detects). So when they are used they are used on a very small scale, in a very targeted way, against specific individuals or organizations. When this procedure is deviated from and fake certs are amateurishly used on a large scale, they inevitably get detected and the extremely valuable signing asset is ruined (see recent actions by Iran, and the consequent discovery of the Diginotar hack). So, compared to this, a compromise of the Certivox servers is an order of magnitude more likely, and a compromise of the client's desktop environment a many orders of magnitude more likely again. The only people who need to be concerned about malicious certificates are those "special" clients who might be targeted by the types of people who have these capabilities. These people should certainly use the SSL-level solutions I mention above - everybody else can relax.”

I suppose this would probably be an opportune time to mention that we will be introducing mobile clients very soon.

As Aldo continues to say, “An IOS app can trivially build in certificate pinning, the development environment is not susceptible to injection problems, the execution environment is much more secure, and no executable code needs to be downloaded from the Certivox server. It also solves the chicken-and-egg problem, since the application is distributed separately from the Certivox servers, and uses a separate authentication and validation process (Apple's app store, in this case). An IOS app like this really could (with some provisos) meet the strong claim made by Certivox at the moment - that is, that nobody (including Certivox) could read client data.”

What Aldo is referring to in the last sentence is that one security researcher has taken huge issue with our use of the word "can't" in our marketing materials (I should say it's the only opinion like this we've encountered) and has accused us of being purposely "misleading", given that we are a web application where we "could" see customers' data if we operated the system in a nefarious, malicious manner. As we have said all along, we are not operating in that manner, and the result is that we are not storing users' encrypted data in any manner that would give us potential access (hence our use of the word "can't").

As I have pointed out in this thread, you do have to trust the folks who run PrivateSky not to act in a malicious or nefarious manner during operation, and this may be an inappropriate amount of trust for some people to extend. PGP / S/MIME, or any other system whereby you create your initial private / public key pair yourself and that key is totally under your control, is still optimal in this regard (however difficult it is for the average user to get their head around).

I’m going to start a separate blog / thread detailing this situation and asking for comments, suggestions and feedback. The opinions of the folks on the board are important to us so I hope you will engage in the dialog.

Finally, I’d love to get some comments going again on this thread on another point:

First, once the mobile and desktop clients are introduced and the web app is merely a convenient UI, should we give customers the ability to shut off access to the web app, closing off this potential leak of client data from a catastrophic compromise of our certificate vendor? (This assumes your messages and files are being exchanged via one of the installed clients, where this kind of compromise doesn't exist.) I should also note that even an XSS attack (as I have already posted above) could leak user data, but that would be our major-league screw-up. What is most worrying about this cert-compromise attack is that it is fully outside of PrivateSky's control, however remote it is and however much it requires state-sponsored resources.

Secondly, what client would you like to see first in the context of closing off this loophole?
  • Mac OS X
  • iOS
  • Android
  • Windows
  • Linux
  • Windows Phone

Note: our first client will be an Outlook plugin that will be introduced shortly.

Finally, I should also point out that we will soon introduce the ability for anyone on the PrivateSky system to run their own exponent inversion distributed master key servers, which obviates the whole issue if access to the PrivateSky portal is restricted and messages are only encrypted and decrypted locally on installed clients. You only have to trust yourself at that point.

Cheers, Brian

Brian Spector
  • 346
  • 1
  • 2
  • 6