The Center for Education and Research in Information Assurance and Security (CERIAS)

Rethinking computing insanity, practice and research

[A portion of this essay appeared in the October 2008 issue of Information Security magazine. My thanks to Dave Farber for a conversation that spurred me to post this expanded version.]

[Small typos corrected in April 2010.]

I’d like to repeat (portions of) a theme I have been speaking about for over a decade. I’ll start by taking a long view of computing.

Fifty years ago, IBM introduced the first all-transistor computer (the 7000 series). Transistors were approximately $60 apiece (in current dollars). Secondary storage was about 10 cents per byte (also in current dollars) and had a density of approximately 2000 bits per cubic inch. According to Wikipedia, a working IBM 7090 system with a full 32K of memory (the capacity of the machine) cost about $3,000,000 to purchase—over $21,000,000 in current dollars. Software, peripherals, and maintenance all cost more. Rental of a system (maintenance included) could be well over $500,000 per month (in 1958 dollars). Other vendors soon brought their own transistorized systems to market, at similar costs.

These early computing systems came without an operating system. However, the costs of having such a system sit idle between jobs (and during I/O) led the field to develop operating systems that supported sharing of hardware to maximize utilization. It also led to the development of user accounts for cost accounting. And all of these soon led to development of security features to ensure that the sharing didn’t go too far, and that accounting was not disabled or corrupted. As the hardware evolved and became more capable, the software also evolved and took on new features.

Costs and capabilities of computing hardware have changed by a factor of tens of millions in five decades. Currently, transistors cost less than 1/7800 of a cent apiece in modern CPU chips (Intel Itanium). Assuming I didn’t drop a decimal place, that is a drop in price of more than 7 orders of magnitude. Ed Lazowska made a presentation a few years ago in which he noted that the number of grains of rice harvested worldwide in 2004 was on the order of a quintillion—10 raised to the 18th power. But in 2004 roughly that many transistors were also manufactured, and transistor production has grown faster than the rice harvest ever since. We have more transistors being produced and fielded each year than all the grains of rice harvested in all the countries of the world. Isn’t that amazing?

Storage also changed drastically. We have gone from core memory to semiconductor memory. And in secondary storage we have gone from drum memory to disks to SSDs. If we look at consumer disk storage, it is now common to get storage density of better than 500GB per cubic inch at a cost of less than $.20 per GB (including enclosure and controller)—a price drop of more than 8 orders of magnitude. Of course, weight, size, speed, noise, heat, power, and other factors have all also undergone major changes. To think of it another way, that same presentation by Professor Lazowska noted that the computerized greeting cards you can buy at the store to record and play back a message to music have more computing power and memory in them than some of those multi-million-dollar computers of the 1950s, all for under $10.
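Those order-of-magnitude claims are easy to sanity-check. Here is a quick back-of-the-envelope calculation in Python, using only the rough prices quoted above; treat it as a sketch, since the inputs are approximations rather than exact figures:

    # Rough check of the price drops described above.
    import math

    # Transistors: ~$60 apiece in 1958 (in current dollars) versus
    # less than 1/7800 of a cent apiece in a modern CPU.
    old_transistor_cost = 60.0            # dollars per transistor
    new_transistor_cost = 0.01 / 7800     # dollars per transistor
    print(math.log10(old_transistor_cost / new_transistor_cost))  # about 7.7

    # Disk storage: ~10 cents per byte then, versus ~$0.20 per GB now.
    old_storage_cost = 0.10               # dollars per byte
    new_storage_cost = 0.20 / 1e9         # dollars per byte
    print(math.log10(old_storage_cost / new_storage_cost))        # about 8.7

The results (roughly 7.7 and 8.7) are consistent with the “more than 7” and “more than 8” orders of magnitude cited above.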

Yet, despite these incredible transformations, the operating systems, databases, languages, and more that we use are still basically the designs we came up with in the 1960s to make the best use of limited, expensive, shared equipment. More to the theme of this blog, overall information security is almost certainly worse now than it was in the 1960s. We’re still suffering from problems known for decades, and systems are still being built with intrinsic weaknesses, yet now we have more to lose with more valuable information coming online every week.

Why have we failed to make appreciable progress with the software? In part, it is because we’ve been busy trying to advance on every front. In part, it is because it is simpler to replace the underlying hardware with something faster, thus getting a visible performance gain; this helps mask the ongoing lack of quality and of progress toward genuinely new ideas. As well, the field of computing (in both development and application) moves at an incredible pace, and few have the time or inclination to step back and re-examine first principles. That includes old habits such as prizing “small” code, even to the point of leaving out internal consistency checks and error handling. (Y2K was not a one-time fluke—it’s an instance of an institutional bad habit.)

Another such habit is trying to build every system to be capable of performing every task. There is a general lack of awareness that security needs differ across applications and environments; instead, people seek uniformity of OS, hardware architecture, programming languages and beyond, all with maximal flexibility and capacity. Ostensibly, this uniformity reduces purchase, training, and maintenance costs, but that reasoning fails to take risks and operational needs into account. Such attitudes would be clearly nonsensical if applied to almost any other area of technology, so it is perplexing that they are still rampant in IT.

For instance, imagine buying a single model of commercial speedboat and assuming it will be adequate for bass fishing, auto ferries, arctic icebreakers, Coast Guard rescues, oil tankers, and deep water naval interdiction—so long as we add on a few aftermarket items and enable a few options. Fundamentally, we understand that this is untenable and that we need to architect a vessel from the keel upwards to tailor it for specific needs, and to harden it against specific dangers. Why can’t we see that the same is true for computing? Why do we not understand that the commercial platform used at home to store Aunt Bee’s pie recipes is NOT equally suitable for weapons control, health care records management, real-time utility management, storage of financial transactions, and more? Trying to support everything in one system results in huge, unwieldy software running on incredibly complex hardware, all requiring dozens of external packages to attempt to shore up the inherent problems introduced by the complexity. That bloated software, in turn, requires ever more complex hardware to support it, driving up complexity, cost, and power consumption.

The situation is unlikely to improve until we, as a society, start valuing good security and quality over the lifetime of our IT products. We need to design systems to enforce behavior within each specific configuration, not continually tinker with general systems to stop each new threat. Firewalls, IDS, antivirus, DLP, and even virtual machine “must-have” products are used because the underlying systems aren’t trustworthy—as we keep discovering with increasing pain. A better approach would be to determine exactly what we want supported in each environment, build systems to those more minimal specifications only, and then ensure they are not used for anything beyond those limitations. By having a defined, crafted set of applications we want to run, it becomes easier to deny execution to anything we don’t want; to use some current terminology, that’s “whitelisting” as opposed to “blacklisting.” This approach to design is also craftsmanship—using the right tools for each task at hand, as opposed to treating all problems the same because all we have is a single tool, no matter how good that tool may be. After all, you may have the finest quality multitool money can buy, with dozens of blades and screwdrivers and pliers. But you would never dream of building a house (or a government agency) using that multitool. Sure, it does a lot of things passably, but it is far from ideal for expertly doing most complex tasks.
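To make the whitelisting/blacklisting distinction concrete, here is a minimal sketch in Python of an allow-list execution check. It is purely illustrative: the hash values are hypothetical placeholders, and a real implementation would live in the operating system or another enforcement layer rather than in application code.

    # Allow-list ("whitelist") execution control: a binary may run only if
    # its cryptographic hash appears in a known-good set. Contrast with a
    # deny list ("blacklist"), which must enumerate every known-bad program.
    import hashlib

    # Hypothetical allow list: hashes of the few applications this
    # purpose-built system is intended to run.
    ALLOWED_SHA256 = {
        "placeholder-hash-of-approved-app-1",
        "placeholder-hash-of-approved-app-2",
    }

    def may_execute(path: str) -> bool:
        """Return True only if the binary at 'path' is on the allow list."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in ALLOWED_SHA256

The point of the contrast: a deny list has to keep up with an open-ended set of bad programs, while an allow list only has to enumerate the small, defined set of applications the system was built to run, which is exactly what a minimal, purpose-built system makes feasible.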

Managers will make the argument that using a single, standard component means it can be produced, acquired, and operated more cheaply than if there are many different versions. That is often correct insofar as direct costs are concerned. However, it fails to include secondary costs, such as the cost of total failure and exposure, and the cost of the “bridge” and “add-on” components needed to make a general-purpose item suitable for each task. Smaller and more directed systems need to be patched and upgraded far less often than large, all-inclusive systems because they have less to go wrong and don’t change as often. There is also a defensive benefit to the resulting diversity: attackers need to work harder to penetrate a given system because they don’t know what is running. Taken to an extreme, having a single solution also reduces or eliminates real innovation, as there is no incentive for radical new approaches; with a single platform, the only viable approach is to make small, incremental changes built to the common format. This introduces a hidden burden on progress that is well understood in historical terms—radical new improvements seldom result from staying with the masses in the mainstream.

Therein lies the challenge, for researchers and policy-makers. The current cyber security landscape is a major battlefield. We are under constant attack from criminals, vandals, and professional agents of governments. There is such an urgent, large-scale need to simply bring current systems up to some bare minimum that it could soak up far more resources than we have to throw at the problems. The result is a huge sense of urgency to find ways to “fix” the current infrastructure. Not only is this where the bulk of the resources is going, but this flow of resources and attention also fixes the focus of our research establishment on these issues. When that happens, there is great pressure to direct research towards the current environment, and towards projects with tangible results. Program managers are encouraged to go this way because they want to show they are good stewards of the public trust by helping solve major problems. CIOs and CTOs are less willing to try outlandish ideas, and cringe at even the notion of replacing their current infrastructure, broken as it may be. So, researchers go where the money is—tried and true, incremental, “safe” research.

We have crippled our research community as a result. There are too few resources devoted to far-ranging ideas that may not have immediate results. Even if the program managers encourage vision, review panels are quick to quash it. The recent history of DARPA is one that has shifted towards immediate results from industry and away from vision, at least in computing. NSF, DOE, NIST and other agencies have also shortened their horizons, despite claims to the contrary. Recommendations for action (including the recent CSIS Commission report to the President) continue this by posing the problem as how to secure the current infrastructure rather than asking how we can build and maintain a trustable infrastructure to replace what is currently there.

Some of us see how knowledge of the past combined with future research can help us have more secure systems. The challenge continues to be convincing enough people that “cheap” is not the same as “best,” and that we can afford to do better. Let’s see some real innovation in building and deploying new systems, languages, and even networks. After all, we no longer need to fit in 32K of memory on a $21 million computer. Let’s stop optimizing the wrong things, and start focusing on discovering and building the right solutions to problems rather than continuing to try to answer the same tired (and wrong) questions. We need a major sustained effort in research into new operating systems and architectures, new software engineering methods, new programming languages and systems, and more, some with a (nearly) clean-slate starting point. Small failures should be encouraged, because they indicate people are trying risky ideas. Then we need a sustained effort to transition good ideas into practice.

I’ll conclude with a quote that many people attribute to Albert Einstein, but I have seen multiple citations to its use by John Dryden in the 1600s in his play “The Spanish Friar”:

  “Insanity: doing the same thing over and over again expecting different results.”

What we have been doing in cyber security has been insane. It is past time to do something different.

[Added 12/17: I was reminded that I made a post last year that touches on some of the same themes; it is here.]

Comments

Posted by Toby
on Monday, December 15, 2008 at 04:09 PM

As someone who is generally a fan of security revisionism, I find the following statement troubling:

“A better approach would be to determine exactly what we want supported in each environment, build systems to those more minimal specifications only, and then ensure they are not used for anything beyond those limitations. “

The only reason we /have/ a major problem with security is /because/ machines have been used beyond the functions for which they were originally imagined. This has underpinned /all/ innovation in info tech fields since the 50s. We should not abandon it in favour of locked-down machines, in the name of security.

There is no need to be so strict. One need only abandon flexibility if one assumes that the Orange Book or something else is the only path to a solution. Instead, I would argue we need systems built heavily around the idea of least privilege on an architecture that supports delegation and the creation of composable security enforcing abstractions—the latter are the only way to achieve the former whilst allowing the sort of flexibility needed to ensure innovation is not prevented.

Posted by spaf
on Monday, December 15, 2008 at 04:41 PM

Toby,
I don’t suggest doing away with all flexible systems in favor of fixed ones.  I suggest that the security-critical ones are the best candidates for minimization.

The Orange Book was okay for some things, but still is based on a large, monolithic OS model, and one without networks.  Further, it assumes that a multi-level system is the desired state.  I don’t buy that, either.

I think we need a variety of systems but we are only researching a very narrow range.

Posted by Jerry Sheehan
on Monday, December 15, 2008 at 06:01 PM

Spaf,

Great blog post.  I was curious what you thought of the debate within the IETF about perhaps not fixing the Kaminsky bug, in order to force migration to DNSSEC?

Jerry

Posted by Jennifer Kurtz
on Monday, December 15, 2008 at 10:10 PM

How interesting that the march to conformity in software solutions creates more widespread vulnerability to malefactors, much the same way botanical monocultures leave a plant species more vulnerable to pests.

When will we learn?

Vive la difference!

Thank you for a clear, thoughtful commentary on the state of the practice.

Posted by spaf
on Monday, December 15, 2008 at 10:19 PM

Jerry,

Where there are pressing dangers, we should act. 

I do not believe it is good practice to “force” people to install changes—we don’t know what their capabilities or environments might be.

In this case, I think we should pursue all three options—fix the flaw where we can, push harder to roll out DNSSEC, and do some basic thinking about whether there are better ways to handle the whole name system.

—spaf

Posted by Andy Balinsky
on Tuesday, December 16, 2008 at 02:57 PM

Do you really think that code size optimization is what is responsible for the lack of security checks? In my (anecdotal) experience, it is more usually speed of getting code to market, or lack of education of the developer, not a desire to have the code be a few lines shorter or a few modules smaller. In other words, I don’t think it is a problem caused by the history of computing limitations, but rather by the conditions of the current marketplace. Economic forces (or perceptions thereof) that lead product managers to choose features over emphasis on quality (and security checks) seem more influential to me.

Posted by spaf
on Tuesday, December 16, 2008 at 03:45 PM

Andy,

There is no single factor.  I gave size as an example that has been shown to be a factor, and I still hear it mentioned in some places as a concern.  After all, if the code is twice as long (because of defensive programming) then it may take twice as long to bring to market.  Thus, managers may stress keeping the code small as a way to shorten time to market.  This reinforces a mindset about size that may originally have come about because of technical limitations.

Economic forces are a factor, and as I noted, the economics of early machine costs was a primary factor. 

Lack of metrics and responsibility for failures contribute to poor decision-making, certainly.  But there is no simple answer.

The basic point I’m trying to make is that we are stuck in a rut of making bad choices because of things that came before.  We need to step back and rethink a lot if we are going to improve things.
—spaf

Posted by Aaron
on Wednesday, December 17, 2008 at 03:43 AM

Definitely an interesting read; it looks like computers have come a long way.

Posted by Scott B in DC
on Wednesday, December 17, 2008 at 12:19 PM

Gene:

If you read the CSIS Commission report to the President you will see that even they do not get it. The report makes the statement that they think there should not be an infosec distinction between systems that are designated for national security and other government systems—the rules should be the same for both. So I guess we have to protect the USDA systems in the same manner that we protect the NSA!

Another area missed is that there are no recommendations to hold anyone accountable for product safety issues. Consumers can sue a product manufacturer for a defect that causes harm; why can’t we sue a computer company or an online business when they’re hacked and disclose personally identifiable information because of their own bad configuration (see Best Buy and TJX)?

While the report is interesting, it is neither eye-opening nor enlightening. It leaves too much open to interpretation (e.g., fix FISMA, but not how to fix FISMA) and does not identify how the political will can be mustered to get Congress to pass the necessary laws—try debating a member of Congress about the Federal Acquisition Regulations and tell me if the blank look is from confusion or disinterest!

Leadership comes from the top. Let’s see if the new administration will provide the leadership that even the Commission on Cyberspace is lacking!

Scott

Posted by Ashish Kundu
on Thursday, December 18, 2008 at 02:20 AM

Thank you for such an insightful article, Spaf. A few reasons that I would like to add:

(1) We (the research community included) have not built practical programming models/IDEs that make the development of secure systems easier. Even apart from security, there is no such system that facilitates development of one important kind of software, namely operating systems. Why is that important in this context? Because a highly advanced and integrated development environment (with an advanced programming model built into it) makes the job of a programmer/tester/debugger easier, and programmers love things that make their job easier. For development of secure applications, such a security-specific environment could pull up certified cryptographic APIs, (re)use standard components, check for security holes and ...

(2) We need to look at practical security; sometimes we place too much emphasis on provable security, semantic security, or formal models of security. They are important but not sufficient. (Here is what Prof. Denning says: http://www.cs.georgetown.edu/~denning/infosec/award.html.) We need to fill in the gaps in a system.

(3) Most programmers, managers, and CIOs/CTOs are either unaware of many security issues or are busy making money from the snowballing effect of “insanity.” As long as they are not robbed themselves, they do not care if their software at a bank leads to others being robbed of money or identity, for example.

(4) As your other article on InfoSec education mentions, we need to improve on that front, and security must become a required course, not an elective, for everyone who goes into software.

Anyway, thanks for such a nice article.

Posted by Acai
on Wednesday, June 3, 2009 at 02:47 PM

I think with this new age of technology, the government must step up and face the challenges of cyber-criminals. This not only includes the U.S., but other parts of the world as well. When delicate things like electricity, security systems, and vital information are at risk, it is of extreme importance.
