Security Through Obscurity

This was originally written for Dave Farber’s IP list.

I take some of the blame for helping to spread “no security through obscurity,” first with some talks on COPS (developed with Dan Farmer) in 1990, and then in the first edition of Practical Unix Security (with Simson Garfinkel) in 1991. None of us originated the term, but I know we helped popularize it through those works.

The origin of the phrase is arguably one of Kerckhoffs’s principles for strong cryptography: there should be no need for the cryptographic algorithm to be secret, and it can be safely disclosed to your enemy. The point is that a cryptographic mechanism whose strength depends on the secrecy of its algorithm is weak; to use Schneier’s term, it is brittle: once the algorithm is discovered, there is little or no protection left, and once broken it cannot be repaired. Worse, if an attacker manages to discover the algorithm without disclosing that discovery, she can exploit it over time before it can be fixed.
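To make that brittleness concrete, here is a toy sketch (illustrative only, not real cryptography; the XOR constant, key, and function names are all invented for this example) contrasting a cipher whose only secret is its algorithm with one whose secret is a replaceable key:

```python
# Toy sketch, NOT real cryptography: contrasting algorithm-secrecy with
# key-secrecy. All constants and names here are invented for illustration.

# "Secret algorithm" cipher: the constant 0x5A *is* the design. Once the
# algorithm leaks, every message ever protected by it is exposed, and the
# scheme cannot be repaired -- it is brittle.
def secret_algorithm_encrypt(plaintext: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in plaintext)

# Keyed cipher (same toy XOR, but parameterized): the algorithm can be
# published, per Kerckhoffs's principle. If a key leaks, you rotate the
# key; the published design itself survives.
def keyed_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

msg = b"attack at dawn"
broken_forever = secret_algorithm_encrypt(msg)      # one leak, total loss
recoverable = keyed_encrypt(msg, key=b"rotate-me")  # leak -> change the key
```

The design point is the asymmetry in repair: the first scheme has nothing left to change after disclosure, while the second degrades only until the key is rotated.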

The mapping to OS vulnerabilities is somewhat analogous: if your security depends only (or primarily) on keeping a vulnerability secret, then that security is brittle—once the vulnerability is disclosed, the system becomes more vulnerable. And, analogously, if an attacker knows the vulnerability and hides that discovery, he can exploit it when desired.

However, the usual intent behind the current use of the phrase “security through obscurity” is not correct. One goal of securing a system is to increase the work factor for the opponent, with a secondary goal of increasing the likelihood of detecting when an attack is undertaken. By that definition, obscurity and secrecy do provide some security because they increase the work factor an opponent must expend to successfully attack your system. Obscurity may also help expose an attacker, because penetrating it requires some probing, which allows for instrumentation and advance warning.
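As a small illustration of that instrumentation idea, the following sketch (the port number and names are arbitrary assumptions, not anything from the original) listens on an unadvertised port and treats any connection as a probe worth logging:

```python
# A minimal tripwire sketch. Assumption: port 9191 is unused and
# undocumented on this host, so legitimate users never touch it and any
# connection is probing -- exactly the advance warning obscurity can buy.
import datetime
import socket

TRIPWIRE_PORT = 9191  # invented for illustration

def watch_tripwire() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", TRIPWIRE_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            conn.close()
            # In practice you would alert or log centrally; printing
            # suffices for the sketch.
            print(f"{datetime.datetime.now().isoformat()} "
                  f"probe from {addr}:{port}")

if __name__ == "__main__":
    watch_tripwire()
```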

In point of fact, most of our current systems have “security through obscurity,” and it works! Every potential vulnerability in the codebase that has yet to be discovered by (or revealed to) someone who might exploit it is not yet a realized vulnerability. Thus, our security (protection, actually) is better because of that “obscurity”! In many (most?) cases, there is little or no danger to the general public until some yahoo publishes the vulnerability and an exploit far and wide.

Passwords are a form of secret (obscurity) that provides protection. Classifying or obfuscating a codebase can increase the work factor for an attacker, thus providing additional security. This approach is common in military systems and commercial trade secrets, where details are kept hidden to limit access and to increase the work factor for an attacker.
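To put rough numbers on that work factor, here is a back-of-the-envelope sketch (it assumes a uniformly random password, which real passwords rarely are):

```python
# Back-of-the-envelope work factor for guessing a secret.
# Assumption: the password is drawn uniformly at random.
import math

def expected_guesses(alphabet_size: int, length: int) -> float:
    """Expected brute-force guesses: half the keyspace on average."""
    return alphabet_size ** length / 2

def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

# Example: 12 random characters drawn from 62 alphanumerics.
print(f"{expected_guesses(62, 12):.3e} expected guesses")  # ~1.613e+21
print(f"{entropy_bits(62, 12):.1f} bits of entropy")       # ~71.5 bits
```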

The problem occurs when a flaw is discovered and the owners/operators attempt to maintain (indefinitely) the sanctity of the system by stopping disclosure of the flaw. That is not generally going to work for long, especially in the face of determined foes. The owners/operators should realize that there is no (indefinite) security in keeping the flaw secret.

The solution is to design the system from the start so it is highly robust, with multiple levels of protection. That way, a discovered flaw can be tolerated, even if it is disclosed, until it is fixed or otherwise mitigated. Few consumer systems are built this way.
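Here is a minimal sketch of that layered idea (the users, tokens, and network rule are invented for illustration): each request must independently pass several checks, so a flaw disclosed in any one layer is tolerated by the others:

```python
# A minimal defense-in-depth sketch. All data and rules below are invented
# for illustration; the point is that breaking one layer is not enough.
from dataclasses import dataclass

TOKENS = {"alice": "s3cret"}      # layer 1 data: authentication
GRANTS = {"alice": {"/reports"}}  # layer 2 data: authorization
TRUSTED_PREFIX = "10."            # layer 3 rule: internal network only

@dataclass
class Request:
    user: str
    token: str
    resource: str
    source_ip: str

def handle(req: Request) -> str:
    layers = (
        TOKENS.get(req.user) == req.token,             # who are you?
        req.resource in GRANTS.get(req.user, set()),   # may you do this?
        req.source_ip.startswith(TRUSTED_PREFIX),      # trusted segment?
    )
    # All layers must hold; a flaw in any single one is contained.
    return "granted" if all(layers) else "denied"

print(handle(Request("alice", "s3cret", "/reports", "10.0.0.7")))  # granted
print(handle(Request("alice", "s3cret", "/reports", "8.8.8.8")))   # denied
```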

Bottom line: “security through obscurity” actually works in many cases and is not, in itself, a bad thing. Security for the population at large is often damaged by people who, claiming to defend the systems, publish flaws and exploits in an attempt to force fixes. But vendors and operators (and lawyers) should not depend on secrecy as primary protection.