Talk:Security through obscurity
Straw man
Security through obscurity is a straw man. I think the phrase is notable; however, it exists only through criticism. I think the article needs to be rewritten in that light. Zeth (talk) 22:52, 5 January 2008 (UTC)
This article has mistakes.
For example, if somebody stores a spare key under the doormat in case they are locked out of the house, then they are relying on security through obscurity. The theoretical security vulnerability is that anybody could break into the house by unlocking the door using the spare key. However, the house owner believes that the location of the key is not known to the public, and that a burglar is unlikely to find it. In this instance, since burglars often know likely hiding places, some assert that the house owner would be poorly advised to hide the key this way.
This example is incorrect. The security method M is hiding a key in a hidden place. The key K is where that place is. Thus, for this example, the key K would be the location: under the doormat.
 M = hide the house key at secret location K
 K = under the doormat
 the system as deployed = M(K)
In symmetric encryption algorithms, the key K is always secret. Security through obscurity is when the method M is kept secret, not the key K.
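A minimal Python sketch of that distinction (the HMAC-SHA256 construction and the sample values are illustrative assumptions, not from the article): the method M is completely public, and security rests only on the secrecy of K.

 import hashlib, hmac, secrets

 key = secrets.token_bytes(32)        # K: the only secret in the system
 message = b"meet at the usual place"

 # M(K, message): the method is fully public; an attacker who reads this
 # code still cannot forge a valid tag for a new message without K.
 tag = hmac.new(key, message, hashlib.sha256).digest()

 # Verification uses the same public method; security rests entirely on K.
 assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())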
- No, the "method" is "hiding the key under the doormat". This is correct in the sense that it is illustrating that security through merely assuming no one knows about a method is insufficient. If you can propose a better illustration, please edit as such (maybe using a key in the example is confusing?). -- Joseph Lorenzo Hall 20:30, 4 February 2006 (UTC)
- It seems to me that every method relies on a certain degree of obscurity, since there is always a key that must not be given to the public. I understand that some systems need more secrecy than just a key. But in the key-under-the-mat example, what difference is there between, say, a combination lock (hundreds of possibilities) and hiding the key in a flower pot if you have a lot of flower pots? An attacker can always try all the possibilities, whether they are numbers or hiding places (under the mat, in a flower pot, on top of the window ledge...).
- An example is Windows CD keys: a lot of them were leaked. How is that fundamentally different from other security flaws being leaked? AtikuX 08:21, 1 October 2007 (UTC)
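A toy Python sketch of the "try all the possibilities" point (the hiding spots and lock below are made-up examples): to an attacker, lock combinations and hiding places are both just finite search spaces, differing only in size.

 def brute_force(search_space, is_correct):
     # The attacker's work is bounded by the size of the space,
     # whether its elements are numbers or hiding places.
     for candidate in search_space:
         if is_correct(candidate):
             return candidate
     return None

 combinations = range(1000)                                # a 3-digit lock
 hiding_spots = ["doormat", "flower pot", "window ledge"]  # a far smaller space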
So fix it... By the way, it has another probable mistake: I don't think "many eyes make bugs shallow" is Linus's, but rather comes from the one who wrote The Cathedral and the Bazaar. Can someone who knows change it? 00 tux 07:16, 18 March 2006 (UTC) I was wrong: even though it was "formulated and named by Eric S. Raymond in his essay" (per the Linus's law page), the attribution is correct... 00 tux
POV in article
- It is important to separate description of the practice of security through obscurity from criticism of it.
- Operators of systems that rely on security by obscurity often keep the fact that their system is broken secret, so as not to destroy confidence in their service or product.
That seems POV to me.
Also, the beginning of the article says that it's a controversial security practice. I thought that it was a pejorative term, and that people who actually practice it call it something else. -- Khym Chanur 07:52, Nov 20, 2003 (UTC)
- Sorry to notice this comment so tardily. The problem noted seems to center around the meaning of 'broken'. Since a cryptosystem is designed to provide security (whichever aspect(s) are the design intent), if it fails to do so, it's broken by the only relevant test -- failure to do as intended. In engineering, an engine breaks when it doesn't work any more; same thing here, in principle. That I don't know the details of the security breach is, I think, irrelevant. You might, and so learn what was to have been securely held, even if the rest of us remain in pathetic ignorance.
- Thus, I would argue there is no POV around 'broken' in the sentence quoted. ww 15:32, 12 May 2004 (UTC)
- The assertion "Operators of systems that rely on security by obscurity often keep the fact that their system is broken secret, so as not to destroy confidence in their service or product." appears convoluted. If this assertion translates to "providers often misrepresent their security products", it would be nice to see a list of these so that purchasers can be wary, or contact their state's attorney general. If this assertion translates to "corporations often use security techniques that they know are imperfect", we have an interesting starting point. In the latter case, the status quo lingo appears POV.
security of meaning through obscurity of phrase successful!
I give up. What is meant here by 'their gain'? From the article:
specifically, many forms of cryptography are so widely known that preventing their gain by a national government would likely be impossible; the RSA algorithm has actually been memorized in detail by most graduating computer science students.
ww 15:21, 12 May 2004 (UTC)
- Maybe specifically, many algorithms are so widely known that preventing any national government from learning them would likely be impossible; the RSA algorithm has actually been memorized in detail by most graduating computer science students. ? — Matt 15:26, 12 May 2004 (UTC)
- Matt, could be, but this is still seriously incoherent. Needs rephrasing. ww 15:31, 12 May 2004 (UTC)
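As an aside on the memorization claim: textbook RSA really is small enough to carry in one's head. A toy Python sketch with deliberately tiny primes (the specific numbers are illustrative; real keys use primes hundreds of digits long):

 p, q = 61, 53
 n, phi = p * q, (p - 1) * (q - 1)
 e = 17                      # public exponent, coprime to phi
 d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

 m = 42                      # a message, encoded as a number < n
 c = pow(m, e, n)            # encrypt: c = m^e mod n
 assert pow(c, d, n) == m    # decrypt: m = c^d mod n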
steganography
In looking at the text on the "useless" DMCA, I sense some straying from the topic of this article. I think the point is that systems should use good security rather than just obscurity. Going down the slippery slope to argue about legal and legislative tactics leads to a whole bunch of stuff that dilutes the value of the article, I fear.
Furthermore, I think we need some text on cases where obscurity is in fact good engineering practice (e.g., steganography). And that article needs to refer to this one.... --NealMcB 21:05, 2004 Jul 19 (UTC)
- I think it's fine to mention the DMCA, because it's an example of how legislation is used (or is claimed to be used...) to enforce the obscurity of exploits ("security through obscurity") -- perfectly on-topic as far as I can see. I agree, however, that the article needs some balance in its treatment; security through obscurity isn't universally bad. Most people running Linux are far safer from attack on the Internet than people running Windows; why? A big component is that Linux is obscure, so there are fewer viruses, worms, and script kiddies out there that target Linux (and yes, Linux arguably has better intrinsic security). — Matt 02:19, 20 Jul 2004 (UTC)
- The problem with this is that the "obscurity" here is actually more related to confusion about the secret. Cryptographers understand that the secret in a system should be as simple as possible, because this means that you can change it easily, and also that you can study it carefully. So, for example, a user's password should be the secret, not their login name. Passwords are easy to change; login names are not as easy. When we discuss "obscurity" of OSes (meaning Linux vs. Windows), this is not the same kind of obscurity. In fact, in this context Linux is less obscure, because anyone who wants to can see exactly how each part of the system works. In Windows, the secret is (for example) not only your password, but the software that takes your password and uses it.
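A small Python sketch of that design rule (the PBKDF2 construction and parameters are illustrative assumptions): the login name and the algorithm are public, while the password is the sole secret and can be rotated without redesigning anything.

 import hashlib, hmac, os

 def store_password(password: str):
     # The algorithm and salt are not secrets; the password alone is,
     # and it is trivial to change without touching the rest of the system.
     salt = os.urandom(16)
     digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
     return salt, digest

 def verify(password: str, salt: bytes, digest: bytes) -> bool:
     candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
     return hmac.compare_digest(candidate, digest)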
- Matt, I would reverse your phrasing. Linux is not arguably more secure than Windows (modulo incompetent configuration), it is so. Having administered both in production environments, my experience is that there is little room for argument on this. Not if you go by the relative amount of effort (and success or lack thereof) in 'securing' them. And it is only arguably due to Linux' relative obscurity -- same reasoning. Linux source is published for all, after all. This is hardly obscurity!!! Incompetence of attacker is not even remotely comparable to attempted intentional obscurity of design.
- As for balance in the article, I think that it's tough to argue that security through obscurity is sensible in the case of steganography while not defensible in the case of poor crypto design. It's an apples and oranges thing. Steganography is not design obscurity that some folks are hoping won't be discovered, and which when lost will allow a successful attack; it's deliberate obfuscation of information upon which confidentiality depends. Not commensurate concepts, really. Comments? Anyone? ww 14:59, 20 Jul 2004 (UTC)
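To make the distinction concrete, a toy Python sketch of steganography (the least-significant-bit scheme is an illustrative assumption): the design is entirely public, and what is obscured is the existence of the message, not the workings of the system.

 def hide(cover: bytes, secret_bits) -> bytes:
     # Embed one secret bit into the least significant bit of each cover byte.
     return bytes((b & 0xFE) | bit for b, bit in zip(cover, secret_bits))

 def reveal(stego: bytes, n: int) -> list:
     # Recover the first n embedded bits.
     return [b & 1 for b in stego[:n]]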
- You can't say Windows is "obscure"; all you need to do is look it up on MSDN, and you will find plenty of information on the inner workings of Windows. Internet Explorer, for example, is more of an API than a program, as other programs such as MMC, Compiled Help, and any 3rd-party app can use the calls in IE. IE is just an API that Explorer calls; there is no true standalone exe file (iexplore.exe calls Explorer and the API DLL). All these API calls are well documented. So, other than the Explorer code, IE is completely documented. While it's not open source in the normal sense, you can still see all the procedure calls it makes, and with tools like Process Explorer you can see in real time when it calls them. Just for the Windows OS, the wealth of official documentation takes up more than 2 DVDs. Then you have 3rd-party documentation such as that from www.sysinternals.com, which is supported and recommended by Microsoft.
- What Matt was talking about is that it's a more obscure OS in that it has fewer users.
- As for the "Having administered both in production environments, my experience is that there is little room for argument on this", what did you do for security on the Windows systems? Did you assign NTFS permissions limiting read, write, and execute permissions? Limit the accounts services run on? Drop rights from groups? And what did you do for the Linux desktops? And what did you mean by security? Virus attacks and malware because users were using IE without restrictions enabled for the Internet zone? 216.54.146.100 15:10, 6 March 2006 (UTC)
[edit] "Advantages and disadvantages of security by obscurity"
That section contains no "Advantages". Shouldn't it be re-worded?
-- Agreed. Maybe include the story about the American army using Native American languages for communication to obscure plans from the enemy. That way this entry is also dragged slightly out of the computer-only corner.
- That was only partly for confidentiality and was understood to be weak. It was stronger for authentication: anyone wanting to understand the Navajo language could study and learn it (just like any other language) and many (including Germans) did so. But (just as with other languages) it was far more difficult for adult non-native speakers to learn to speak Navajo without an accent. Since there were at most very few (probably zero) native Navajo working for the Germans, a Navajo code talker, hearing an unaccented Navajo voice coming from the other end of a conversation, knew he was talking to a US army unit. Phr 23:26, 11 February 2006 (UTC)
- Say what? Navajo was used exclusively in the Pacific theater. The Germans never tried breaking the code. Check out code talkers#Cryptographic_properties. I think it's an excellent example of SbO, since it relied on Navajo being unknown outside the United States and was very vulnerable to any Navajo speaker falling into the hands of the Japanese. —Preceding unsigned comment added by 85.179.43.109 (talk) 12:08, 31 May 2008 (UTC)
- Actually, since the US had used American Indian languages in WWI, Hitler was expecting something similar in the coming war. German linguists were instructed to learn American Indian languages during the '30s against such an eventuality. So the comment above is correct to this extent, if wrong otherwise. The second comment is correct: Navajo wasn't used in Europe in WWII. ww (talk) 23:03, 31 May 2008 (UTC)
My big rewrite
I've just rewritten this page and tried to remove the weasel words. In some places I found them unnecessary, as the headings already use the word "argument," which suggests something not self-evident. In other places, I needed to change the text to read more like an argument (as in "If you believe X, then you can conclude Y,") rather than an assertion (as in "Some people say X"). I also moved some arguments from one section to another. For instance, the phrase "the frequency and severity of the consequences have been rather less severe than for proprietary (ie, secret) software" argues against security through obscurity, not for it. I moved all the historical examples together. Finally, I added summary sentences of my own design at the beginning of some paragraphs.
I deleted only two passages outright:
- "Designers, vendors, or executives actually believe they have ensured security by keeping the design of the system secret. It appears to be difficult for those who approach security in this way to have enough perspective to realise they are inviting trouble, sometimes very big trouble."
- "Others find this line of argument out of synch with reality, and suggest that the public would be better served if the accusers were to specify who has committed fraud."
The first sounds like it amounts to the tautology "those who like security by obscurity are those who like security by obscurity." Regarding the second, I failed to understand how to be more specific about the parties committing fraud.
This article should still cite more sources and verify some of its facts. I tried to preserve all the facts presented in the original, which I don't claim are correct, so if you find errors, you know what to do. Using text from old versions might reintroduce weasel words, so I would correct factual errors individually.
On the other hand, cranks and honest people alike will deny that they have axes to grind, and you may think I'm one of the former, trying to disguise my agenda among all the reorganization. In case you do decide to go back to text from old versions, I want to point out my few objective edits so you can at least retain them. I corrected "posession" to "possession" and changed capitalized words like "OR", "MORE", and "PLUS" to italic versions. I expanded "ASAP", too.
Cheers. --Officiallyover 05:36, 14 March 2006 (UTC)
- I'll take a look at this later today... it looks good at first blush. We might want to insert {{fact}} tags where cites are needed. -- Joebeone (Talk) 21:44, 14 March 2006 (UTC)
- Sorry to have taken so long to re-read this article. One major thing stands out that I don't particularly like. The notion of the "'key' had now changed" is quite confusing. In all of these cases the key didn't change at all. What did change? The assumption that the key is kept safe... or essentially the power that the key may have given you, given the lock system you are using (for example, you could be using a variety of different house keys, from simple to complex, and if a burglar can locate the key, the security in all these cases has now been normalized, or made the same; it's essentially as if you have a latch (no key) on the door). From a pedagogical point of view, I think that all instances of "the key has changed" in this article should be replaced with better language. I can take a shot if you'd like (while preserving the rest of the text). Good job, otherwise! -- Joebeone (Talk) 02:53, 20 March 2006 (UTC)
- Certainly take a shot at it. Thank you for your comments and for taking the time to review the changes. --Officiallyover 08:08, 20 March 2006 (UTC)
Referenced
I added a bunch of references and hope that's enough to justify my removing the unreferenced tag. -- Joebeone (Talk) 22:31, 11 May 2006 (UTC)
- I don't want to make any vanity edits, so I'll propose this here and let someone else sort it out. There aren't many treatments of an opposing approach to security -- as I've called it, a "security through visibility" approach -- in existence. Most commentary on "security through obscurity" simply discusses the failures of that de facto methodology, and doesn't address the inverse methodology itself. An article I've written titled Security through visibility does deal with the subject, however, in contrast with the notion of "security through obscurity". It might provide a useful reference item to add to the "external links" section of the article here. Apparently, at least the guys at Second Life thought it qualified as an authoritative resource for their Open Source FAQ. —The preceding unsigned comment was added by Apotheon (talk • contribs) 00:45, 18 April 2007 (UTC).
- Sorry about that. I just forgot to sign it. -- Apotheon 21:16, 19 April 2007 (UTC)
Biased Structure (more POV)
The article is structured in a manner that's unnecessarily biased against this approach. It starts with Arguments Against rather than In Favor Of, which is a very unusual way to structure any article about a competing strategy (starting with advantages is more standard form and gives the disadvantages section more material to rebut). --Mahemoff 17:28, 21 October 2006 (UTC)
- It may be notable, in this context, that the term "security through obscurity" was likely coined as a pejorative term for certain security practices that were not clearly identified as a "competing strategy" until they were criticized for their (claimed) fallacious predicates. Or something like that. -- Apotheon 14:53, 9 May 2007 (UTC)
Origins
I removed the part about Apollo. The earliest reference I could find in usenet was Crispin, 1984. The true origin is no doubt lost in obscurity. —The preceding unsigned comment was added by Rees11 (talk • contribs) 20:26, 12 January 2007 (UTC).
Running network services on a non-standard port
The current examples of security through obscurity are somewhat... obscure. At least for knowledgeable computer users, the most common example of security through obscurity is running network services on a non-standard port. I think this is a particularly good example, because it demonstrates both the pros and cons of the practice. As the only security measure (say, running a default-login telnet on port 12345) it's clearly insufficient and foolish, yet combined with existing methods it could be beneficial.
As a personal anecdote (though I believe I'm not alone in this), I get hundreds of SSH login attempts every few hours or so, and therefore my log file is filled with failed login attempts, drowning out other messages (yes, I know how to use grep). After switching to a non-standard port, I've yet to see anything in days. In addition to more readable logs, not having to establish and tear down connections saves memory and CPU cycles (granted, in my case the waste is insignificant). —The preceding unsigned comment was added by Madoka (talk • contribs) 01:21, 15 February 2007 (UTC).
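A rough Python sketch of why this protection is shallow against a determined attacker (the host and port range are placeholders): a non-standard port costs a targeted attacker only one TCP sweep, even though it silences the bots that probe port 22 alone.

 import socket

 def find_open_ports(host: str, ports=range(1, 65536), timeout=0.2):
     # Obscurity via port number falls to a single scan of the port space.
     open_ports = []
     for port in ports:
         with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
             s.settimeout(timeout)
             if s.connect_ex((host, port)) == 0:
                 open_ports.append(port)
     return open_ports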
condensed long paragraph
Here's the original text of the now-condensed long paragraph. Much of it was removed in the condensation, as it confused Bernstein's case with Zimmermann's and attributed (with citation) a motive to Zimmermann that is incorrect. Zimmermann was concerned to retain the civil liberties of private discussion in an increasingly electronic world and saw quality crypto as a means to that end.
- In the mid-1990s Phil Zimmermann published the source code for PGP, which is commonly considered a military-grade cryptosystem, in book form, in order to taunt the U.S. government into prosecuting him for violating the law that classifies all secure cryptosystems as munitions, which can't be distributed without a government license. Zimmermann's counterargument is that publishing a book is protected by the First Amendment, even though any reader could cut the pages from a $60 book and run them through a scanner, then use an OCR program to create a text file. The resulting source files could be compiled using open-source development systems, such as the GNU C Compiler, on nearly any type of computer, anywhere in the world. He was confident that the NSA would attack PGP with little success--even their supercomputers would take eons to factor the product of two very large primes--and that the program would see wide distribution and use, and that the Supreme Court would not send him to prison. In any case PGP's security is based on number theory and the cost of computation, not on obscure design.
If the editor would like to restore some of the removed material, please discuss it here. ww 16:05, 27 March 2007 (UTC)
Historical Notes
The "Historical Notes" section was lifted almost verbatim from the New Hacker's Dictionary. Maybe it should be credited or removed. Rees11 19:57, 28 March 2007 (UTC)
Security through obscurity in non-computer contexts?
This topic is relevant to overall national security, not just computer security, and therefore should be expanded. For example, would classifying the internal procedures used by the Transportation Security Administration be necessary in order to protect against terror attack, or does it actually prevent vulnerabilities from being addressed? 69.140.164.142 04:45, 23 April 2007 (UTC)
security [ "by" | "through" ] obscurity?
I wonder which is the best way to express this concept, from an English linguistic point of view (but not limited to it). ast pronounced it "security by obscurity", but he also spelled "Kerckhoff" without the trailing "s", so I doubt he is right about it this time... --ThG 19:50, 16 July 2007 (UTC)
Security through rarity?
I have seen two uses of the term "security through obscurity." The first is using secrecy for security. This seems to be the meaning used in formal literature and the intended meaning in this article.
However, I have seen a second use of the term: that the technology is so uncommon that it is not a target of cracking. Perhaps a better name for this argument may be "security through rarity." The argument is sometimes used (perhaps wrongly) to explain why Windows systems are more often the targets of malware than Linux or Macintosh systems. It is argued that this may result because the uncommonness makes it a de facto secret to an amateur cracker (despite possibly being open source), because the bad guys don't have the tools to develop and test their exploits, because existing exploits would be less available to an amateur cracker, or because crackers have less motivation to target a system that would have less impact.
I have seen significant use of the latter meaning of "security through obscurity," even if it is inaccurate. Perhaps there needs to be an additional section in this article to address security through rarity, or even its own article. 68.191.176.60 21:29, 24 July 2007 (UTC)
- There needs to be some coverage of this. I assume that security through rarity is much more common than deliberate security through obscurity. Since rarity and obscurity are sometimes synonyms, it's confusing not to have this mentioned. Also, the two are presumably linked in that they should both offer resistance through deterrence of effort. --Wragge 19:49, 23 October 2007 (UTC)
- I've added a basic section, using the most common Google term I could find for the concept ("minority"). It's not great though: if people could find a better term or improve the section, that'd be nice. I've tried to avoid writing anything there that wasn't totally obvious to anyone with a brain, but it might still count as "original research" - not sure. DewiMorgan (talk) 18:05, 21 February 2008 (UTC)
Historical addition
I seem to recall reading about a big debate in the locksmithing community in the 18th or 19th century regarding this subject. If anyone has a good reference on that and can add a section, that would be great. Gigs (talk) 15:51, 23 December 2007 (UTC)