18

On the Internet, I can find several statements made over the years claiming that serving an X.509 CRL over HTTPS is bad practice, because either

  1. it causes a chicken-and-egg problem when checking the TLS certificate, or
  2. it is simply a waste of resources, given that the CRL is by definition signed by a CA and is a non-confidential artifact.

However:

  1. The recommended alternative is OCSP (possibly combined with the various types of stapling). However, OCSP responses are also signed by a revocable entity, whose status needs to be checked too, so I don't understand how that solves the chicken-and-egg problem. Perhaps there isn't actually any chicken-and-egg problem, if we assume the PKI design takes into account the need to avoid circular dependencies for status checks.

  2. CRLs are signed, but they still don't provide replay protection within the validity period of the CRL itself. In other words, you cannot reliably issue an emergency update to a CRL before its validity period ends, because a powerful attacker can realistically intercept all outgoing plain-HTTP requests and replace the new emergency CRL with the old one for that period. HTTPS would at least prevent that, up to a point.

I have also read that certain implementations (Microsoft's) simply refuse to retrieve CRLs over HTTPS: that might be a pragmatically good reason to avoid it. However, in that way we are also perpetuating a questionable practice.

Am I mistaken somewhere?
Isn't it time to revisit the prejudice against distributing CRLs over HTTPS? Especially considering that there is a strong HTTPS-only trend ongoing, and it is not unreasonable to predict that plain-HTTP requests might be blocked at the network level in certain environments.

StackzOfZtuff
  • Related: Similar question for OCSP: https://serverfault.com/questions/867307/online-certificate-status-protocol-ocsp-and-port-80 – StackzOfZtuff Sep 01 '21 at 09:12

5 Answers

12

I generally agree with Steffen Ullrich; I just want to add a few cents, since the OP references my own answer, which I consider proper and valid.

However OCSP responses are also signed by a revocable entity

It's not a revocable entity. OCSP signing certificates include the id-pkix-ocsp-nocheck certificate extension, which instructs clients not to check this particular certificate for revocation. Often, OCSP signing certificates don't include CDP (CRL Distribution Points) or AIA (OCSP access method) extensions at all. This trick removes the chicken-and-egg problem for the OCSP signing certificate.
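
To illustrate, here is a toy sketch (not a real X.509 parser; the extension sets and client policy are simplified for illustration, though the OID itself is the real id-pkix-ocsp-nocheck OID from RFC 6960) of how a client can decide, from the extension list alone, that an OCSP signing certificate must not itself be revocation-checked:

```python
# id-pkix-ocsp-nocheck, as defined in RFC 6960 (and RFC 2560 before it)
ID_PKIX_OCSP_NOCHECK = "1.3.6.1.5.5.7.48.1.5"

def needs_revocation_check(extension_oids):
    """Return False if the certificate carries id-pkix-ocsp-nocheck,
    i.e. the client is told to trust it without any CRL/OCSP lookup."""
    return ID_PKIX_OCSP_NOCHECK not in extension_oids

# An OCSP signing cert with the extension: no lookup, so no recursion.
print(needs_revocation_check({"2.5.29.15", ID_PKIX_OCSP_NOCHECK}))  # False
# An ordinary end-entity cert: revocation must still be checked.
print(needs_revocation_check({"2.5.29.15", "2.5.29.31"}))  # True
```

A real client would of course extract the OIDs from the parsed certificate rather than from a hand-written set.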

CRLs are signed, but they still don't provide replay protection within the period of validity of the CRL itself.

That's correct. But this problem isn't solved within the current X.509 profile: there is no means of providing an immediate revocation response (i.e. immediately detecting that a particular certificate has been revoked). The revocation infrastructure relies heavily on caching, and even a stapled OCSP response may not be very up to date; there may be newer responses, but the server hasn't obtained them yet and still uses a cached OCSP response to staple into TLS. As Steffen said, the only way to improve this is to use short-lived CRLs/OCSP responses and clients that support CDP polling to detect whether there is a newer CRL than the one stored in the client's local cache.
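
As a sketch of the short-lived-CRL point (the helper and the timings below are hypothetical, not any particular library's API): the client can only bound staleness by the CRL's thisUpdate/nextUpdate window, regardless of the transport used to fetch it:

```python
from datetime import datetime, timedelta, timezone

def crl_is_fresh(this_update, next_update, now):
    """Hypothetical cache check: a CRL is usable only inside its
    [thisUpdate, nextUpdate) validity window."""
    return this_update <= now < next_update

now = datetime(2021, 1, 6, 12, 0, tzinfo=timezone.utc)
# A short-lived CRL narrows the replay window to hours instead of days.
this_update = now - timedelta(hours=2)
next_update = this_update + timedelta(hours=8)   # 8-hour validity
print(crl_is_fresh(this_update, next_update, now))  # True
# After nextUpdate the client must fetch again -- this bound, not the
# transport, is what limits how long an old CRL can be replayed.
print(crl_is_fresh(this_update, next_update, now + timedelta(hours=7)))  # False
```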

I read also that certain implementations simply refuse to retrieve CRLs over HTTPS

Microsoft CryptoAPI behaves exactly this way with respect to the CDP extension. It won't even try to connect to a URL with an HTTPS scheme, and it will fail the URL if the server requests TLS negotiation. I don't know about other crypto frameworks, so this statement may not apply to other tools.

As a bottom line: I still don't see many good reasons to implement CDP over HTTPS. CRLs are publicly distributable content -- no reason for confidentiality. They are digitally signed -- no reason for extra signing. The fact that a MiTM can modify CRL content over plain HTTP to purposely invalidate the CRL signature isn't mitigated by TLS: a MiTM can arbitrarily tamper with TLS traffic to force the client to reject the data. Same effect in both cases -- denial of service -- and TLS doesn't solve this problem.

What can be reasonable is to hide your activity history from a man on the wire (e.g. your ISP), but even that is questionable.

Crypt32
  • When I designed my custom signature stack I had a solution for replay attacks that it kind of bugs me wasn't used there. The server accepted 10 bytes from the client, signed the response including the 10 bytes from the client, and sent the whole thing back. The server's signing cert was traceable in its own response so the client didn't have to loop. No replay possible. – Joshua Jan 06 '21 at 18:44
  • @Joshua sounds too computationally expensive for scale – Hagen von Eitzen Jan 06 '21 at 21:47
  • @HagenvonEitzen: According to my benchmarks, it's faster than HTTPS connection startup cost. – Joshua Jan 06 '21 at 21:50
  • Ironically, the two top answers reference each other and hence have a circular dependency :P – user9123 Jan 07 '21 at 21:36
9

... chicken-and-egg problem ...

There is no real chicken-and-egg problem. Revocation (no matter if CRL or OCSP or something else) is only one part of the certificate validation, and it can still be better to do 95% (i.e. HTTPS w/o revocation check) than doing 0% (plain HTTP).

... don't provide replay protection ... you cannot reliably issue an emergency update to the CRL before its period of validity ends ...

While HTTPS helps against replay of an old CRL by an attacker, it does not make sure that the new CRL actually reaches the client. An attacker can still simply deny the connection, and from the perspective of the client that is fine, since the old CRL is still valid. Thus HTTPS is not the method to enable emergency updates to a CRL either. Apart from that, the client might not even check for an update if the current CRL is still valid. The correct mechanism here is to use shorter expiration times.

Note: see the excellent answer from Crypt32 for a deeper explanation with more technical details.

Steffen Ullrich
0

TLS/HTTPS clients normally cache the results of OCSP/CRL requests. When a client sends a request to a CA and establishes an HTTPS connection, most of the time it uses the cached result of the certificate validation. That's why no recursion occurs: no chicken-and-egg problem. Of course, the client should consider the cases when the cached result is not available or has expired, and should prevent recursion. In such a case the client uses an HTTPS connection based on a certificate that was not validated. But this has exactly the same security as a plain HTTP connection, and only in this case does HTTPS have no advantage over HTTP.

If you use HTTP for OCSP/CRL, then any party that can read your traffic (your Internet provider, any proxy servers on the way) can see which certificates were requested, when, and in what order, and thus can get insights into your browsing history. If you use HTTPS, then only the CA knows which certificates you have requested. Thus using HTTPS does make sense.

TLDR:

  1. The naive answer on the Microsoft site is not correct. Dealing with recursion is trivial. RFC 2560 reminds implementers to keep this in mind: Relying parties ... MUST be prepared for the possibility that this will result in unbounded recursion.
  2. The chicken-and-egg scenario does not cause any real problem, because of the caching of OCSP/CRL results and because dealing with recursion is trivial.
  3. Using HTTPS does make sense because it hides your browsing history.
mentallurg
  • Your first statement is wrong. Chicken-and-egg is a real problem when the CDP endpoint is protected by the same issuer. When there is no cached CRL, you can't get it and eventually end up in an infinite loop which is stopped only by a validation timeout. – Crypt32 Jan 06 '21 at 11:33
  • @Crypt32: I suppose **you have not read the answer**. Otherwise you would see that: *" In such case the client uses an HTTPS connection based on a certificate that was not validated."* What is wrong here? – mentallurg Jan 06 '21 at 12:04
  • @Crypt32: *"Of course the client should consider the cases when cache result is not available or expired and should prevent recursion."* - This is how clients deal with that. It is **trivial**. Why do you say it wrong? – mentallurg Jan 06 '21 at 12:06
  • I have read your answer. You say that client **\*should\*** avoid recursion, but how? There is no standard that would cover this use case. OCSP has solved this question by using `id-pkix-ocsp-nocheck` extension, but it is part of OCSP (RFC 6960) profile and not part of CRL profile (RFC 5280). This is why I'm not satisfied with this part of your answer. – Crypt32 Jan 06 '21 at 12:19
  • @Crypt32: How should one avoid recursion? It is trivial. You just add an "if" in the proper place in the implementation of the TLS logic. If you see that you are establishing a connection to an OCSP/CRL provider and the certificate issuer is the same provider, then you just skip that validation. Meaning, you trust the certificate for the connection to the OCSP/CRL provider. What is not clear here? I described that in the next sentence: *In such case the client uses an HTTPS connection based on a certificate that was not validated.* There is no magic: yes, in that case one trusts the certificate without validation. – mentallurg Jan 06 '21 at 12:23
  • `then you just skip such validation.` -- why? Is there any standard that suggests behaving this way? You are missing my point: I agree that recursion should be avoided. But how -- no one knows for sure. You suggest skipping validation if both have the same issuer. That's ok for you, but maybe I would go another route and permanently fail cert validation. As long as there is no standard mechanism to solve circular recursion for this use case -- everything is opinion-based. – Crypt32 Jan 06 '21 at 12:30
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/118112/discussion-between-mentallurg-and-crypt32). – mentallurg Jan 06 '21 at 12:35
  • Using HTTPS would not hide your browsing history. If you're concerned about ISPs and proxies as you say in your answer, then they can simply sniff your original HTTPS request URL instead of going through the trouble of checking your OCSP request and cross referencing the serial number of the certificate. There is an edge case where all ISPs and proxies are trusted between you and your target website, but the CA/VA ISP and any proxies aren't, but that would probably be a very rare case. Using it for CRL provides even less privacy as the only info there is which CA your target site is using. – garethTheRed Jan 06 '21 at 12:47
  • "Dealing with recursion is trivial" is subjective. It requires making your HTTP/LDAP/etc stacks and your PKI stacks have intimate knowledge about each other instead of relying on public API. Windows and .NET both solve this by having a recursion limit of 0: don't allow TLS/PKI to be involved in background fetches. – bartonjs Jan 06 '21 at 17:58
  • @bartonjs: 1) You are right. Any opinion is subjective. Some developers find even implementing merge sort hard. 2) I don't see why mutual dependency is needed. It is sufficient if only PKI knows that. In the implementation of PKI you check if you are within another call with the same goal. In such case you understand that you should not continue because it will cause a loop. 3) What API do you mean? API for HTTP connections? API for PKI? Anything else? How is it relevant? – mentallurg Jan 06 '21 at 23:02
  • The amount of "browsing history" leaked through CRL over HTTP is fairly minimal, as the only thing that is exposed is the root CA you're interested in. Nobody ever sees the domain that you're accessing, just that you're now accessing a domain signed by Symantec. This has long been the main privacy argument against (unstapled) OCSP, which has to send the domain to the OCSP server. – TooTea Jan 07 '21 at 10:18
  • @garethTheRed Right, and if your browser does asynchronously prefetch new CRLs for commonly used CAs, the CRL fetch is not even linked to any particular HTTPS connection. Anyone sniffing the CRL download thus only sees that you're sometimes accessing sites signed by that CA, which is way less information than just sniffing the server hello on your HTTPS accesses to the sites themselves. The only case where this may matter is with S/MIME email, because there's no companion TLS connection to sniff there. – TooTea Jan 07 '21 at 10:22
0

I also agree with Steffen Ullrich, but I can see it from Microsoft's perspective. And no, I don't work for Microsoft, nor have I asked any of their employees.

Suppose an attacker can be a MitM. What attacks are possible?

  • Via HTTP: the attacker can replay an old CRL and have the victim think that the certificate is still valid. The victim won't notice the difference and will accept a revoked certificate.
  • Via HTTPS: the attacker can deny the connection, but the client can become aware (and suspicious) of this, and the attacker at this point cannot necessarily serve a valid old CRL.
  • Via HTTPS: if and only if the attacker has pwned the TLS key can they set up a MitM that responds with an old CRL. That falls back into the HTTP case.

So from my point of view, HTTPS helps make the attack more complicated. A certificate gets revoked not only when its key is being actively used by someone else.

usr-local-ΕΨΗΕΛΩΝ
  • Sorry, I don't understand this answer. Your first sentence says that you understand Microsoft's perspective (why it doesn't accept CRLs over HTTPS), but the rest of the answer concludes that "https helps making the attack more complicated". So, why do you understand Microsoft's perspective? – ruakh Jan 07 '21 at 23:52
0

As root policy file is some script HTTPS URL to link to internal IIS server for copy of root certificates as crt and crl. On other according to Microsoft paperwork crt or crl must be empty. Not sure what is exactly needs to presented.

IlyaS
  • This is very difficult to understand. I'm not even sure if it answers the question. Can you rephrase it? – schroeder Jan 13 '23 at 15:41