109

So, I am designing a door authentication system (I can't really go into more detail) for our school, so that only authenticated persons can go through a certain internal door. The school holds that its inner workings should be kept secret, so that no one can reverse engineer it. I maintain that this would be security through obscurity, which is bad. I have designed the system so that knowing how it works wouldn't help you get in; only having the key would, in accordance with Kerckhoffs's principle. They still maintain that fewer people, not more, should know about it.
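
To give a rough idea of what I mean (a minimal sketch, not the actual design; the HMAC-based challenge-response and the names here are purely illustrative), everything rests on the shared key rather than on the protocol being secret:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of a keyed challenge-response check.
# Knowing this code does not help an attacker; only possession of the
# shared secret key (held by the door controller and by the badge) does.

def new_challenge() -> bytes:
    """The door controller issues a fresh random challenge (nonce)."""
    return secrets.token_bytes(16)

def badge_response(key: bytes, challenge: bytes) -> bytes:
    """The badge proves it holds the key by MACing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def door_accepts(key: bytes, challenge: bytes, response: bytes) -> bool:
    """The controller recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Publishing something like this wouldn't help an attacker: without the key, valid responses cannot be produced.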

Is it a good idea to go with a closed design? If not, exactly why?

P.S. They only told me it was a secret after I had designed most of it and told a bunch of people, if that makes a difference (I had no idea they wanted it to be a secret).
P.P.S. Although the door is not exposed to the outside, the room it protects contains more valuable stuff than the rest of the school, so I would expect adversaries to be able to reverse engineer the system anyway if we go with a closed design instead of an open one. I'm just not sure how to communicate that.

Canadian Luke
PyRulez
  • 133
    Let me guess... Your school is located in Scotland in an old castle. The headmaster is a quirky old bearded type. The room has a mirror... Looks like some unemployed writer has already spilled the beans about the security system. – Deer Hunter Feb 02 '16 at 21:00
  • 6
    The reason for opening up the inner workings of a system is usually to get *free* peer review. In a school context, I highly doubt much security feedback would come in; more likely, _someone told someone, who told someone, who then got caught in that room_. – Sebb Feb 02 '16 at 21:58
  • 38
    `They hold that its inner working should be kept a secret, so that no one can reverse engineer it.` The reason for stuff being reverse engineered? Because it's unknown... – WernerCD Feb 02 '16 at 21:59
  • 21
    I would question why the school is designing their own door authentication system when there are already a wide variety of commercial products out there that do the same thing (that the school is probably already using to secure its doors). If you have something too secret for just a card-access or a PIN pad to protect, add a biometrics device to the "secure" room and tightly control who is enrolled to use that device. – Johnny Feb 03 '16 at 05:09
  • 59
    Google "layered security". Anyone serious about security will implement layered security, of which secrecy/obscurity is not only a legitimate layer, it is an important layer (though it should not be the CRITICAL layer). It's why really good admins configure their servers to not leak what software version they use. It increases the workload of the attacker since he needs to probe your design first before he can take a crack at it. – slebetman Feb 03 '16 at 09:53
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/35279/discussion-on-question-by-pyrulez-my-school-wants-to-keep-the-details-of-our-doo). – Rory Alsop Feb 04 '16 at 12:34
  • 3
    The more people who know how it works, the more probable it is that someone will find a vulnerability and tell you (or use it), as happens with open source. I think it should be public; the secret should be a hidden camera, so you will notice when someone finds a vulnerability. – Nanoc Feb 04 '16 at 15:13
  • @Nanoc See multiple comments in this thread, and the chunks that have been moved to chat. (And please post further discussion in one of those chat rooms.) Publicizing only makes the design more available to other people. It doesn't guarantee that those people will actually review it, or what their level of competency is in doing so. More importantly, you also have no guarantee of their intentions. – Iszi Feb 04 '16 at 16:07
  • no one can reverse-engineer it if it is public... – njzk2 Feb 04 '16 at 16:23
  • 1
    I am not sure why someone would need to design a custom door authentication system. Surely there must be plenty of off-the-shelf systems available that would suit your school's needs? – njzk2 Feb 04 '16 at 16:26
  • 2
    Seriously questioning why you are designing such an important security door instead of buying it from one of the many highly qualified and professional manufacturers. – SnakeDoc Feb 04 '16 at 16:29
  • 1
    Instead of keeping it a secret, publish a reward for cracking it. That ensures that anyone who manages to get in would rather get the reward and the glory than the contents. – BonsaiOak Feb 05 '16 at 14:48
  • Tl;dr: Keeping the details non-public is not a _bad_ idea. – keshlam Feb 06 '16 at 02:57
  • @DeerHunter Ah dang it. I knew I should have used hashing instead. – PyRulez Feb 06 '16 at 20:05
  • "They hold that its inner working should be kept a secret, so that no one can reverse engineer it" - Um...reverse engineering is _precisely_ for when the inner workings are a secret. If the inner workings are known/published, that's not reverse engineering. More like regular engineering. Though overbuilding the door seems counterproductive; doesn't matter how secure your door is if I just break down one of the walls and waltz in through the hole. What you ought to be designing is a vault. – aroth Feb 09 '16 at 11:27
  • A secure system needs to stay secure even when I know about its inner workings. Think about KeePass, which is open source and used to store sensitive information. Of course the data on the ID card/keys/... might be private, but it would be far better to design it to be un-fakeable (nothing is truly un-fakeable, but you can design it to be very hard to fake). – BlueWizard Feb 11 '16 at 12:40
  • (Side note: un-fakeable things do exist; see quantum money for more information.) – BlueWizard Feb 11 '16 at 12:40
  • I recommend you protect the valuables with conventional methods too. For example, put them in a metal case and padlock it, because when your concept fails you will want a second layer that is less secure in theory but better tested. – BlueWizard Feb 11 '16 at 12:43
  • @JonasDralle Even if they do fake it, they won't know if it's right without actually putting it in the lock (in this specific case). – PyRulez Feb 11 '16 at 12:57

8 Answers

244

Obscurity isn't a bad security measure to have in place. Relying upon obscurity, as your sole or most substantial security measure, is more or less a cardinal sin.

Kerckhoffs's principle is perhaps the most oft-cited argument against security through obscurity. However, if the system is already properly secure according to that principle, invoking it as an objection here is misguided.

The principle, paraphrased, says "assume the enemy knows everything about the design of your security systems". This does not in any way imply "tell the enemy everything about your system, because they know anyway".

If the system is already well-designed according to Kerckhoff's Principle, the worst that can be said about applying security through obscurity to it is that it adds little to no value. At the same time, for such a system, there is equally little to no harm in protecting the confidentiality of its design.

Iszi
  • 113
    "Security through obscurity isn't bad." Thank you for saying what some people will not admit. – Brad Bouchard Feb 02 '16 at 21:55
  • 7
    "little to no harm in protecting the confidentiality of its design" is not really true. as mentioned by Tom Leek education is important at a school, also by releasing a design people might point out flaws. – Sam Feb 02 '16 at 22:03
  • 42
    I would rephrase the first paragraph as "Obscurity isn't bad. *Relying* upon obscurity for security, as if it alone is enough, is more or less a cardinal sin." I think this would be a more plain wording of what you mean. – jpmc26 Feb 02 '16 at 22:03
  • 21
    @jpmc26 "Obscurity isn't security, it's just a thing" – PyRulez Feb 02 '16 at 22:29
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/35262/discussion-on-answer-by-iszi-my-school-wants-to-keep-the-details-of-our-door-aut). – Rory Alsop Feb 03 '16 at 23:19
  • @BradBouchard Your comment has lost relevance – Kos Feb 04 '16 at 19:48
  • @Kos There. Semi-fixed it. Still no longer a direct quote, but at least there's a sentence in there that it roughly maps to as a paraphrase. – Iszi Feb 04 '16 at 19:55
  • 4
    Any time you require a password to access a resource, you are relying on security through obscurity. – alexw Feb 04 '16 at 23:15
  • @Kos Not sure what you mean by that? – Brad Bouchard Feb 05 '16 at 00:28
  • Good thing I wrote my question the way I did. Originally I was going to ask "Security through obscurity is terrible. How can I explain that?", but I thought that was loaded. – PyRulez Feb 05 '16 at 14:07
  • @alexw Arguably, yes. But without a key of some form - which will, almost invariably and inevitably, be human-reproducible with sufficient knowledge and skill - there is no way for *anyone* to access the system without allowing *everyone* to access the system. Therefore the system could be fully secure for CI, but completely fail A. Or it will exceed A and therefore fail CI. (From the "CIA triad".) Another common paraphrasing of Kerckoff's Principle specifically exempts keys (of which passwords are one type): "A system should remain secure, even if the enemy knows everything except the key." – Iszi Feb 05 '16 at 19:19
  • @BradBouchard ""Security through obscurity isn't bad." Thank you for saying what some people will not admit. " Try telling that to forensic analysts. Anomalies go straight to top of the 'check-me' report. – cremefraiche Feb 06 '16 at 22:59
  • @cremefraiche What I was focusing on in my comment was the fact that obscurity as another layer of security isn't bad. People around here just love to throw out the "obscurity" statement like its a blanket in regard to any question even partially dealing with obscurity and I'm tired of that. It's just another layer, even if it is a weaker layer. – Brad Bouchard Feb 07 '16 at 03:28
  • @BradBouchard I understand you are saying that obscurity as another layer isn't bad, and I think that is a dangerous notion. Obfuscating data leaves telltale signs unless done extremely well. Any publicly available obfuscation tool leaves signatures that are not only easily detectable by modern forensics software, but trigger instant _red flags_. When it comes to data that is important enough to consider securing and obscuring, I would absolutely refrain from the latter. – cremefraiche Feb 07 '16 at 04:42
  • @alexw To go that route, surely any security system is relying on sufficient obscurity for contemporary adversaries. – OJFord Feb 07 '16 at 05:25
  • @cremefraiche There aren't many systems, if any, that don't rely on some form of obscurity; that's not bad. It's bad when that obscurity is the cornerstone of their security. See the comment about military tanks in another answer or comment elsewhere in this thread. I'd find it for you but I'm not sure where it was. It's one of the better explanations about obscurity I've heard. – Brad Bouchard Feb 07 '16 at 14:30
  • @cremefraiche I went and looked; see Cary Bondoc's answer below. – Brad Bouchard Feb 07 '16 at 14:36
  • @cremefraiche Also, we really shouldn't be commenting here any more. We were asked by the mods to move this discussion to chat, so please do so as I'll look for your continued talk there. I am open to what you have to say as I think all people should be. I can always be "wrong" and have my mind changed on something. – Brad Bouchard Feb 07 '16 at 14:45
  • There are a few problems with "obscurity as a security layer" that I don't think I'll be able to express well enough to be convincing in this thread, but one complication is that it adds an overhead cost in maintaining said obscurity. If a decision is made that "X Document" or "A, B, C details" are confidential, these details themselves now require security in an effort to ensure their obscurity. Efforts have to be made to ensure firewalls are in place, that audits are done to check for leaks, etc. If the OP got fired for disclosing the design, doing that would impose yet another cost. – Anthony Feb 08 '16 at 12:02
  • Versus, say, setting a document to "Top Secret". Classifying the document alone would not really protect some military secret (like the specs of some spy plane). But having a protocol that says "documents classified with this level need to be behind this many layers of *real* security" protects that document if that protocol is followed. Shoving secret documents into a hiding place doesn't protect either the subject of the secret or the document describing it. But the document can be secured as well as the subject of the document, without confusion that one is a layer of the other. – Anthony Feb 08 '16 at 12:08
  • `[Kerckhoff's Principle] does not in any way imply "tell the enemy everything about your system, because they know anyway".` No; it's more like the old "if you outlaw guns, only outlaws will have guns" argument: if you keep the details of your system secret, only people with a motivation to crack your system will end up with the details of your system. Notably, this means that people who might be willing to help improve your system will have a difficult time doing so. – Mason Wheeler Feb 08 '16 at 22:27
103

Keeping the design secret does not make the door insecure per se; however, believing that it adds to security is a dangerous delusion.

If revealing the details of the authentication system would allow breaking it, then that system is pure junk and should be discarded. Therefore, if you did your job properly, then revealing the details should be harmless. If you did not do your job properly, then fire yourself, and go hire someone competent.

In general, publishing system details promotes external review, which feeds the break-fix cycle and ultimately improves security. In a school context, not publishing system details harms security, because schools are full of students, and students are known to be nosy anarchists who will be especially attracted to reverse engineering anything that is kept secret from them. It is well known that in a student computer room, the best way to keep security incidents low is to give the root/Administrator password to a couple of the students -- when a student wants to dabble with computer security, giving him full access removes all incentive to break things, AND turns him into a free police auxiliary who monitors the other students.

Also, detailing the inner workings of a security system could be a highly pedagogical endeavour. I heard that in some schools they actually practice pedagogy, at least occasionally. Your school might want to give it a shot.

Tom Leek
  • 5
    "If you did not do your job properly, then fire yourself." And it's better to figure that out before implementing it by revealing the design, correct? (I'm really confident in the security design, but still a good precaution, nonetheless.) – PyRulez Feb 02 '16 at 20:46
  • 22
    @PyRulez: the operational notion is "review". You want some extra analysis by other people. An open publication can help a lot in getting free reviews. – Tom Leek Feb 02 '16 at 20:47
  • 10
    Do you have a source for the "give the root/Administrator password to a couple of the students"? I'd like to read more about that. – Johannes Kuhn Feb 03 '16 at 08:03
  • I'm really dubious about giving high school students the root/admin password. They'll just install video games on the computer. – Nelson Feb 03 '16 at 08:39
  • 9
    @Nelson how is that any different from NOT giving them the root password? – Aron Feb 03 '16 at 09:02
  • 2
    @JohannesKuhn I think Tom is confused by [Incompatible Time Share](https://en.wikipedia.org/wiki/Incompatible_Timesharing_System)'s anti hacking feature. Many grey hatters will not try to do something when they already know it can be easily done. – Aron Feb 03 '16 at 09:03
  • 3
    @JohannesKuhn: it was the method used where I was schooled; the sysadmin observed the students' activities, and gave the root password to the most advanced. It worked marvels. (And, on a more general basis, this is called a "meritocracy" and is an efficient management method -- in the 13th century, it allowed Genghis Khan to conquer the World.) – Tom Leek Feb 03 '16 at 13:27
  • 6
    +1 for "students are known to be nosy anarchists" – Sidney Feb 03 '16 at 15:46
  • So in summary, PyRulez should publish the design, and if any flaw whatever is found in it (either by review of the published design or by attack on the actual implemented system) should quit? Seems like quite high stakes to place on the ability to 100% beat a large team of nosy anarchists. Or, to avoid such high stakes, just give the students access to the petty cash box, and then you don't have to resign because you've defined keeping them out not to be part of your job ;-) – Steve Jessop Feb 03 '16 at 17:00
29

You have already received several excellent answers, though @TomLeek's and @Iszi's answers (both excellent, btw) seem to be in direct contradiction.

They both make excellent points: on the one hand, keeping the design secret will not make the system secure, whereas reviewing it publicly will enable you to (possibly) find certain vulnerabilities you had not considered; on the other hand, it doesn't really hurt to keep the design secret, as long as that is not a key factor in the design's security.

Both sides are absolutely correct - sometimes.

I think it would be fair to say that both sides in the general argument would agree that keeping the design secret does not directly increase security at all.
In the worst case, it merely hides security weaknesses (which may or may not be a good thing, depending on who you consider it to be most hidden from).
In the best case (where there are no trivial vulnerabilities that would be exposed by publishing the design), it still does not increase security - but it does minimize the attack surface.

Minimizing attack surface (even without the presence of a vulnerability) is definitely a good thing; however, this needs to be weighed and traded off against the benefits of publishing (namely, being reviewed by additional sets of eyes) and the downsides of keeping it secret - e.g. the temptation to rely on it as a security control (the ever-popular security by obscurity), as a form of security theater.

It is also worth noting that, as @Tikiman alluded to, merely publishing the design is not enough to ensure it is reviewed - especially by those who are capable of finding the vulnerabilities and who are also inclined to disclose them to you. In many cases, a published design would only be reviewed by malicious individuals with illicit intent, so you would not achieve the expected benefit. Moreover, one often does not even know whether one's design falls into the aforementioned best case or worst case.

So, bottom line - as in so many things in security, the straight answer is still: It Depends.

There is a definite trade-off here to be considered - if this was a complex cryptosystem the answer would be clear; if this was an implementation-heavy typical enterprise system, a different answer would be clear.

My leaning in this case is as @Tom said, but for the secondary reasons mentioned - partly the anarchic user base, and mostly the pedagogical goal.

Note that these are actually not really security considerations - at least not directly.

(Oh and as to @Tikiman's point - the pedagogy involved here means that you can actually ensure the design is reviewed, at the least by the entire class ;-) )

Iszi
AviD
  • While you're at it, don't forget to expound on Tikiman163's answer. – PyRulez Feb 02 '16 at 22:31
  • @PyRulez I added a comment or two, such as they are... Though my intent wasn't a point-by-point rebuttal of other answers, but to contrast both sides of the obscurity/opensource debate... – AviD Feb 02 '16 at 22:41
14

This article by Daniel Miessler is great!

It states that

Security by Obscurity is bad, but obscurity when added as a layer on top of other controls can be absolutely legitimate.

With that concept in mind, a much better question would be

Is adding obscurity the best use of my resources given the controls I have in place, or would I be better off adding a different (non-obscurity-based) control?

We can also use the analogy of camouflage as obscurity being another layer of security:

A powerful example of this is camouflage. Consider an armored tank such as the M-1. The tank is equipped with some of the most advanced armor ever used, and has been shown repeatedly to be effective in actual real-world battle.

So, given this highly effective armor, would the danger to the tank somehow increase if it were to be painted the same color as its surroundings? Or how about in the future when we can make the tank completely invisible? Did we reduce the effectiveness of the armor? No, we didn’t. Making something harder to see does not make it easier to attack if or when it is discovered. This is a fallacy that simply must end.

When the goal is to reduce the number of successful attacks, starting with solid, tested security and adding obscurity as a layer does yield an overall benefit to the security posture. Camouflage accomplishes this on the battlefield, and PK/SPA accomplish this when protecting hardened services.

Emphasis mine.

Iszi's comment is great also; he states that it is much better if we change the word "adding" to "enforcing", so in summary it will look like this:

Summary:

Security by obscurity is bad, but security enforced with obscurity as a layer on top of other controls can be absolutely legitimate. Assuming that you are safe on the battlefield just because your tank is painted the same color as its surroundings is plain nonsense. But making your tank's defenses great and enforcing the paint that grants you the camouflage ability as another layer of protection is great!
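
A software analogue of that layering (a hypothetical sketch, not from the article; the "wake sequence" and names are made up): the keypad still relies on a real secret code, and the obscure procedure only adds friction for an attacker who doesn't know it.

```python
import hmac

# Hypothetical keypad check: real security (the secret code, compared in
# constant time) plus an obscurity layer (an undocumented "wake" sequence).
# The obscurity is not what keeps the door secure; it only adds friction.

WAKE_SEQUENCE = "*hold2s"   # obscure, assumed-unpublished procedural detail
SECRET_CODE = b"492817"     # the actual secret (the real control)

def door_opens(wake: str, code: bytes) -> bool:
    knows_procedure = (wake == WAKE_SEQUENCE)             # obscurity layer
    has_secret = hmac.compare_digest(code, SECRET_CODE)   # real control
    return knows_procedure and has_secret
```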

Cary Bondoc
  • 6
    Doesn't this analogy break down if you account for the reviews you can have by opening the design? – João Portela Feb 03 '16 at 14:01
  • 3
    Maybe in the question of "Is adding obscurity" you should change "adding" for "enforcing". If a single person designs a systems solely in their head, the design is naturally obscure without any additional effort. Even after the designer has documented the system in detail, the design remains reasonably obscure with practically zero additional effort so long as it remains solely within the designer's possession. Putting the design in a shared-access repository, however, threatens (but does not automatically break) the obscurity of the system. That's when effort is needed to enforce it. – Iszi Feb 03 '16 at 15:19
  • 1
    @JoãoPortela The risk to any design, in its current implementation, can only be *increased* by publishing details of that design. Tank armor is really a great analogy, albeit using camouflage as the obscurity layer in the analogy might not be best. Consider a nation already at war. Their tanks are deployed at the front lines in active combat. With open-source design, the enemy has practically equal information as you do when it comes to finding flaws in the tanks' construction or the armor's chemistry. And it's very possible that the enemy could dedicate more resources to finding them. – Iszi Feb 03 '16 at 15:28
  • If an enemy-exploitable flaw is found in the tanks, your combat effectiveness is only really compromised if the enemy in fact knows of the flaw. While Kerckoff says you have to assume this is possible, actually having the designs of the tanks public means you have to assume this is *likely*. Meanwhile, the timeline to come up with a fix or workaround for such flaws - then recall and upgrade/replace the fleet - will likely be measured in *months*. – Iszi Feb 03 '16 at 15:31
  • 1
    Obscurity when it is the only protection is better than no protection at all. Believing that obscurity in this case keeps you safe may be fatal though. – gnasher729 Feb 03 '16 at 16:19
  • 1
    Adding camouflage to the M-1 is done as an extra "field" layer of security that would never be considered "part of" the actual tank's security specs during testing or engineering or design or even "battle readiness" (I suppose that might be off. It wouldn't be considered battle ready without camo, but the camo also wouldn't be seen as boosting the tank's expected battlefield performance.) But most information-based security that uses some obscurity final layer **does** affect the interpretation of it's "true" security. People see the "camo" and think "now _THAT's_ secure!" – Anthony Feb 08 '16 at 12:23
  • 1
    @Anthony Well, *stealth* technology has now been built into a number of defense platforms at a very foundational level. In the most extreme cases--like the USAF's now-retired F-117 stealth light bomber and the B-2 bomber--keeping their exact positions "obscure" from enemy radar is basically the entire reason the planes exist/ed and were/are effective. And a great many platforms in R & D in many countries are building in "signature management" technologies, to widely varying degrees, for some helpful degree of low observability. (Although not relying *only* on that for survivability.) – mostlyinformed Feb 09 '16 at 06:11
  • I was thinking of stealth bombers too while writing that. But that almost seems more like obfuscation as a tactic, not as a security measure. I'm sure the pilot feels safer, but assumes that at any moment he may become visible. It also has me thinking of steganography, which might be argued to be an obscurity layer. But somehow clandestine/sneaky maneuvers feel like a separate or dedicated effort apart from security. – Anthony Feb 09 '16 at 07:11
  • The main issue I think I have is that obscurity as a security measure has the immediate feeling of "secure" like hiding under my blanket. This is naive for security managers or any IT person to assume, but we can then talk about critical versus supplemental, etc. But politicians and other decision makers like the OPs school staff want and often demand obscurity because it looks and feels good to them. – Anthony Feb 09 '16 at 07:18
  • 1
    They aren't interested or concerned with the geeky depths of the issue. These are the same guys who insist we MUST change our password every 90 days but keep their password on a post it in a desk drawer. – Anthony Feb 09 '16 at 07:19
  • 1
    I think that "enforced" should actually be "reinforced". As in, "security **reinforced** with obscurity as a layer on top of other controls can be absolutely legitimate". The obscurity isn't a mechanism by which your security parameters are realized (or "enforced"). It's a layer that's added on top of other security mechanisms to increase (or "reinforce") your overall level of security. – aroth Feb 09 '16 at 11:44
8

While not really answering your question, this might serve as an argument to present to your school.

I would consider someone getting access to an authorized key/identity the real risk. People are sloppy, use bad passwords, and write secrets down all the time. A teacher at my school, ages ago, once left the keys to the entire school in a student bathroom.

If I wanted to get into that room I wouldn't even bother trying to find a security hole in the software; I'd steal the key, try tampering with the physical lock, or use some other external method.

Or, as a friend of mine said as he was asked by the principal how he would go about hacking the school system in order to destroy data; "I'd use a baseball bat".

axl
3

I have a long explanation that may seem to wander, so I'll give a shortened answer first, then justify it. Short answer: this is security through obscurity, but that likely isn't a problem here because of the small number of people who will ever come into contact with the system. So it probably isn't worth having an argument over.

You are correct in your assertion that keeping the system design a secret is security through obscurity (STO). The primary reason that STO is a bad idea is that a system whose inner workings are not initially known can, in all cases, be reverse engineered through careful observation and the proper application of social engineering. If you are the only person who understands how a system works, you are the only person who can verify its integrity. Therefore, if there is a potentially bypassable flaw in your design and someone else reverse engineers your design and discovers it, they can exploit it more easily than if you had not kept your design secret. They are also more likely to be able to keep their discovery and illicit use a secret.

This is because if you make your design public knowledge, more people will examine it, and the more people who examine a design, the more likely someone is to discover an existing flaw and tell you about it. A design flaw may not even be in the general concept, but in the specific implementation, such as the opportunity for a buffer overflow in the implementation of an otherwise secure algorithm (see the sketch below). The primary reason cryptographic primitives are published is that, by making everyone aware of the algorithms, others may review them. After a large number of individuals have done so, you can be reasonably assured that your design is secure. The difficulty is that because you're making a design for a school, only a very small number of people are likely to view your design, very few of whom are likely to understand it. The fewer people who view your design, the more likely it is that anyone who discovers a flaw won't report it.
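
As a purely hypothetical illustration (not the OP's system, and using a timing leak rather than a buffer overflow), a design that is perfectly sound on paper can still be undermined by one implementation detail that a second pair of eyes would likely catch:

```python
import hmac

# Hypothetical door-controller check: the *design* (compare a secret code)
# is fine, but the first implementation leaks information through timing.

def check_code_naive(secret: bytes, attempt: bytes) -> bool:
    # BAD: returns as soon as a byte differs, so the response time reveals
    # how many leading bytes of the attempt were correct.
    if len(secret) != len(attempt):
        return False
    for s, a in zip(secret, attempt):
        if s != a:
            return False
    return True

def check_code_fixed(secret: bytes, attempt: bytes) -> bool:
    # Better: comparison time does not depend on where a mismatch occurs.
    return hmac.compare_digest(secret, attempt)
```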

Unless you have access to a large community of security professionals willing to review your design, letting them have their way may be roughly equal in terms of actual security.

Tikiman163
  • 1
    I think this-- the number of parties-- is the critical point. An attacker needs to know (a) the password or (b) the exploitable design flaw. If there are a million Acme keypads out there, then it will be easier to find (b), if it exists, because the attacker can just buy and take apart ten Acme keypads. In that case, Acme corp. are better off publishing the design, so they don't get complacent, and have a better chance of learning about any flaw right away. But if there's only one Acme keypad in existence, it's feasible, and worthwhile, to keep its design flaws as secret as its password. – bobtato Feb 03 '16 at 14:35
  • 2
    This answer is very flawed in that it assumes publicizing the design will increase the number of people who *are* reviewing it, and who are both non-hostile and competent. It certainly makes the system more *accessible* to review by many people. But you cannot know how many people actually *will* bother to review it. Or how many of those people will even be knowledgeable, skillful, and thorough enough in their review to find any flaws. Or what the intentions will be of the people who do find flaws. – Iszi Feb 03 '16 at 15:49
  • Iszi, I very specifically addressed the fact that his design is not likely to be reviewed if he does publish it. If you're going to refute someone, please don't do it by agreeing with their assessment. At the very least, try to fully read what you're commenting on before commenting; you could have easily realized you're not pointing out a flaw by reading only the first paragraph, where I stated that STO isn't bad in this case. – Tikiman163 Feb 04 '16 at 16:25
  • @Tikiman163 You do state that towards the end but the beginning also strongly implies the opposite so the answer should be improved. In its current state, I am not convinced it adds anything to the discussion. – Relaxed Feb 05 '16 at 22:53
0

One obvious goal would be that the door authentication system cannot be hacked by the students.

If we assume that the authentication system is somewhat, but not extremely, difficult to hack, and that there are students with some, but not advanced, hacking skills, then it is quite possible that the added difficulty posed by making the details of the system unavailable is just enough to put it out of reach of that group (the students).

On the other hand, once the system is so secure that cracking it is much more difficult than finding or reverse engineering the workings of the system, keeping the details secret is not very useful anymore.

gnasher729
0

If I were responsible for the door, I would have the same requirement about confidentiality. If the world (and your work) were perfect, obscurity would indeed be useless.

But just think of what actually happens in the real world. I assume you did your job the best you could, using state-of-the-art algorithms. But the devil hides in the details, and an implementation detail can weaken security. It happened to an SSL library not that long ago, even though the algorithms themselves were secure.
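
As a made-up example (not your system; the names and the challenge-response structure are assumptions for illustration), a single careless line can undermine an otherwise sound protocol, and it is exactly the kind of thing a peer reviewer catches:

```python
import random
import secrets

# Hypothetical: the protocol (fresh challenge, keyed response) is sound,
# but generating the challenge with a predictable PRNG lets an attacker
# anticipate it and replay a previously captured response.

def weak_challenge() -> bytes:
    return random.getrandbits(128).to_bytes(16, "big")  # predictable PRNG

def strong_challenge() -> bytes:
    return secrets.token_bytes(16)                       # cryptographically random
```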

What you want when disclosing the details of your security system is for peers to review your implementation and warn you of possible flaws before they are discovered and used by attackers. If you know experts in security systems and have them review your work, I think it would be seen as good practice even in your school - IMHO it should be, at least. But if you just publish it openly, your first readers will more likely be the very people the door security was built against! And you certainly do not want them to be the first to review your work for possible flaws.

TL;DR: If you build a general security system that could have a large audience and that you do not use immediately in production, you should publish it as broadly as you can to get good reviews. But if you build a dedicated implementation that will immediately be used for real security, only disclose the details to trusted experts, to avoid helping attackers find possible flaws.

Serge Ballesta