The Center for Education and Research in Information Assurance and Security (CERIAS)

Security Through Obscurity

This was originally written for Dave Farber’s IP list.

I take some of the blame for helping to spread “no security through obscurity,” first with some talks on COPS (developed with Dan Farmer) in 1990, and then in the first edition of Practical Unix Security (with Simson Garfinkel) in 1991. None of us originated the term, but I know we helped popularize it with those items.

The origin of the phrase is arguably from one of Kerckhoffs’s principles for strong cryptography: that there should be no need for the cryptographic algorithm to be secret, and it can be safely disclosed to your enemy. The point there is that the strength of a cryptographic mechanism that depends on the secrecy of the algorithm is poor; to use Schneier’s term, it is brittle: Once the algorithm is discovered, there is no (or minimal) protection left, and once broken it cannot be repaired. Worse, if an attacker manages to discover the algorithm without disclosing that discovery, then she can exploit it over time before it can be fixed.
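
To make the contrast concrete, here is a minimal sketch in Python using only the standard library (the key and message are illustrative placeholders): the algorithm, HMAC-SHA256, is completely public, and all of the protection rests in the randomly generated key.

    import hashlib
    import hmac
    import secrets

    # The key is the only secret; assume the enemy knows everything else,
    # including that HMAC-SHA256 is the algorithm in use.
    key = secrets.token_bytes(32)
    message = b"wire transfer: $100 to account 42"

    tag = hmac.new(key, message, hashlib.sha256).digest()

    # The design loses nothing by being public; forging a valid tag without
    # the key means defeating the primitive itself.
    assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())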

The mapping to OS vulnerabilities is somewhat analogous: if your security depends only (or primarily) on keeping a vulnerability secret, then that security is brittle—once the vulnerability is disclosed, the system becomes more vulnerable. And, analogously, if an attacker knows the vulnerability and hides that discovery, he can exploit it when desired.

However, the usual intent behind the current use of the phrase “security through obscurity” is not correct. One goal of securing a system is to increase the work factor for the opponent, with a secondary goal of increasing the likelihood of detecting when an attack is undertaken. By that definition, obscurity and secrecy do provide some security because they increase the work factor an opponent must expend to successfully attack your system. The obscurity may also help expose an attacker because it will require some probing to penetrate the obscurity, thus allowing some instrumentation and advanced warning.
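
As a sketch of that second point (illustrative only, and not anything from the original essay), consider a decoy listener on an arbitrary non-standard port: legitimate users have no reason to touch it, so any connection to it is almost certainly a probe and can serve as advance warning. The port number and log file name below are made-up examples.

    import logging
    import socket

    DECOY_PORT = 5922          # hypothetical unused port; nothing legitimate listens here
    logging.basicConfig(filename="probe_warnings.log", level=logging.WARNING,
                        format="%(asctime)s %(message)s")

    def watch_decoy_port(port: int = DECOY_PORT) -> None:
        """Accept connections on a decoy port and log each one as a probable probe."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))
            srv.listen()
            while True:
                conn, (source_host, source_port) = srv.accept()
                logging.warning("probe on decoy port %d from %s:%d", port, source_host, source_port)
                conn.close()

    if __name__ == "__main__":
        watch_decoy_port()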

In point of fact, most of our current systems have “security through obscurity” and it works! Every potential vulnerability in the codebase that has yet to be discovered by (or revealed to) someone who might exploit it is not yet a realized vulnerability. Thus, our security (protection, actually) is better because of that “obscurity”! In many (most?) cases, there is little or no danger to the general public until some yahoo publishes the vulnerability and an exploit far and wide.

Passwords are a form of secret (obscurity) that provide protection. Classifying or obfuscating a codebase can increase the work factor for an attacker, thus providing additional security. This is commonly done in military systems and with commercial trade secrets, whereby details are kept hidden to limit access and increase the work factor for an attacker.
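
A rough work-factor calculation makes the point about passwords concrete; the guessing rate below is an assumed figure purely for illustration, since real cracking rates vary enormously.

    import math

    ALPHABET = 94                 # printable ASCII characters, excluding the space
    LENGTH = 12                   # length of a randomly chosen password
    GUESSES_PER_SECOND = 1e10     # assumed offline cracking rate (illustrative)

    keyspace = ALPHABET ** LENGTH
    expected_guesses = keyspace / 2          # on average, half the space is searched
    years = expected_guesses / GUESSES_PER_SECOND / (3600 * 24 * 365)

    print(f"keyspace is about 2^{math.log2(keyspace):.1f}")
    print(f"expected time at the assumed rate: {years:.1e} years")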

The problem occurs when a flaw is discovered and the owners/operators attempt to maintain (indefinitely) the sanctity of the system by stopping disclosure of the flaw. That is not generally going to work for long, especially in the face of determined foes. The owners/operators should realize that there is no (indefinite) security in keeping the flaw secret.

The solution is to design the system from the start so it is highly robust, with multiple levels of protection. That way, a discovered flaw can be tolerated even if it is disclosed, until it is fixed or otherwise protected. Few consumer systems are built this way.

Bottom line: “security through obscurity” actually works in many cases and is not, in itself, a bad thing. Security for the population at large is often damaged by people who claim to be defending systems by publishing flaws and exploits in an attempt to force fixes. But vendors and operators (and lawyers) should not depend on secrecy as primary protection.

 

Comments

Posted by Ed Felten
on Friday, September 5, 2008 at 06:35 AM

Another way to justify keeping keys rather than algorithms secret is that you can quantify precisely the odds that an adversary will be able to guess a (randomly chosen) key, but there’s no hope of quantifying the probability that he’ll guess what algorithm you’re using. 

And if you can quantify a risk, then you can reason about whether that particular risk is acceptable in the context of your overall system.
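
That quantification is simple to state: for a uniformly random b-bit key, the probability that an adversary succeeds within g guesses is exactly min(1, g / 2^b), so the risk can be written down and reasoned about. A small Python sketch of the arithmetic:

    def guess_probability(key_bits: int, guesses: int) -> float:
        """Probability of guessing a uniformly random key within `guesses` attempts."""
        return min(1.0, guesses / float(2 ** key_bits))

    # For example, a trillion guesses against a 128-bit key:
    print(guess_probability(128, 10**12))   # roughly 2.9e-27: a quantifiable, negligible risk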

Posted by Simson Garfinkel
on Tuesday, September 9, 2008 at 02:25 AM

We’ve seen a lot of cases where the lack of security has dramatically increased vulnerabilities. Although the Open Source community likes to argue that released source code is always more secure than secret (“obscure”) source code, that’s only true if people actually take the time to audit the code, find the vulnerabilities, and then get the vulnerabilities fixed.

Frequently the difference between code that is obscure and code that is not is the difference between a crash and a remote exploit.

Posted by SEO Services
on Thursday, December 18, 2008 at 01:17 PM

Talking about open source: I have some portals using Joomla, and let me tell you that at least once a week someone tries to crash my sites. It does not matter how good the security is, since the code is available to everyone. There is never a secret in open source.

Posted by helen
on Friday, December 26, 2008 at 03:28 PM

nice post dude

Posted by Omar
on Saturday, December 27, 2008 at 01:54 AM

I don’t think there is security out there that is good enough. My sites are constantly being attacked. I’ll keep checking back here to see if something I’ve not tried pops up, and I’ll keep trying.

Posted by Freddie
on Saturday, December 27, 2008 at 06:47 AM

Not sure how important this may be, but these are good points.

Posted by Nashville SEO
on Wednesday, January 14, 2009 at 12:45 AM

So much of the open source software out there has become a bullseye for hackers.  The main link I have found is that they often look for the line of text at the bottom of these sites that reads something like, “Powered by such and such, v.2.3”. The hackers’ automated scripts will scan for that line (which a lot of sites fail to remove), and then the automated system knows how to hack that site.

So the best tip I can offer is for people to remove all of the default text that is embedded in the sites, upon purchase.

Posted by accident claims
on Tuesday, January 27, 2009 at 09:46 AM

It is common practice nowadays to pay someone to try to crash your site/server just to check the security measures. If someone does manage to, then all existing loopholes, at least the ones found, are closed. This is also better known as ethical hacking. As for open source, well, there’s nothing really that can be done about it.

Posted by Modular Display Systems
on Thursday, February 19, 2009 at 01:02 PM

Maintaining a secure environment on Unix and Unix-like operating systems depends on the design of those operating systems, but vigilance through user and administrative practices is also important for maintaining security. I usually scan the server with a port scanner or vulnerability assessment tool to determine what unnecessary services are running on a system, then disable the services that are not required by any necessary applications.
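
As a sketch of that kind of audit (illustrative only; a real assessment would use a dedicated tool such as nmap, and only against hosts you are authorized to test), the following checks a short list of common service ports and reports which ones answer. The target host and port list are placeholders.

    import socket

    HOST = "127.0.0.1"                                    # placeholder target
    COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 3306, 5432]

    def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
        """Return the subset of `ports` that accept a TCP connection on `host`."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    found.append(port)
        return found

    if __name__ == "__main__":
        for port in open_ports(HOST, COMMON_PORTS):
            print(f"port {port} is open; is the service behind it actually needed?")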

Posted by MTS
on Friday, February 20, 2009 at 10:43 AM

Amazingly simple concept. I will implement these strategies on my own website immediately! I believe security is a huge concern and should be easier to moderate.

Posted by Anonymous Proxy
on Thursday, December 17, 2009 at 09:54 PM

We use this technique with our code base. We obfuscate the code so that the variable names are meaningless to a hacker. The reason we do it is not to stop the determined hacker, but to stop the less determined ones. That may sound odd, but obfuscation is similar to the concept of encryption. Both are delaying tactics and can be defeated. However, if you can delay your opponents long enough, you win time to change the game again.
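
A toy before-and-after example (not the commenter’s actual tooling) shows the idea: the two functions below behave identically, but the second gives someone reading a leaked or decompiled copy far less to work with. Real obfuscators do this rewriting automatically across a whole codebase.

    def compute_loyalty_discount(order_total: float, years_as_customer: int) -> float:
        """Readable version: the intent is obvious from the names."""
        if years_as_customer >= 5:
            return order_total * 0.90
        return order_total

    def f1(a: float, b: int) -> float:
        """Obfuscated version: same behavior, no semantic hints."""
        return a * 0.90 if b >= 5 else a

    assert compute_loyalty_discount(100.0, 6) == f1(100.0, 6)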

Posted by Mar Han
on Thursday, February 4, 2010 at 02:56 PM

We used OpenMRS software, a Tomcat server, and a MySQL database for a research project, and now I wish we didn’t. We will redevelop on Microsoft’s web server (IIS) and database (SQL Server).

Posted by puppy care
on Thursday, June 24, 2010 at 09:37 AM

The thing about most hackers that go after Open Source programs is that they will go for the path of least resistance. 

Just a few minor changes from the default install is usually enough to send them along to the next unsuspecting site owner.

But you’re right, leaving the “powered by” text in place is the same thing as leaving your car unlocked and putting a neon sign in front of it to advertise the fact.
