A popular saying is that “Reliable software does what it is supposed to do. Secure software does that and nothing else” (Ivan Arce). However, how do we get there, and can we claim that we have achieved the practice of an engineering science? The plethora of vulnerabilities found every year (thousands, and that’s just in software that matters or is publicly known) suggests not. Does that mean that we don’t know how, or that it is just not put into practice for reasons of ignorance, education, costs, market pressures, or something else?
The distinction between artisanal work and engineering work is well expressed in the SEI (Software Engineering Institute) work on capability maturity models. Levels of maturity range from 1 to 5:
- Level 1, Initial: ad-hoc, individual efforts and heroics
- Level 2, Repeatable
- Level 3, Defined
- Level 4, Managed
- Level 5, Optimizing (science)
Artisanal work is individual work, entirely dependent on the (unique) skills of the individual and personal level of organization. Engineering work aims to be objective, independent from one individual’s perception and does not require unique skills. It should be reproducible, predictable and systematic.
In this context, it occurred to me that the security community often suggests methods that have artisanal characteristics. We are also somewhat hypocritical (in the academic sense of the term: not deceitful, just not thinking things through critically enough). The methods suggested to increase security actually rely on practices we decry. What am I talking about? I am talking about black lists.
A common design error is to create a list of “bad” inputs, bad characters, or other undesirable things. This is a black list; it often fails because the enumeration is incomplete, or because removing bad characters from the input can itself produce another bad input that is not caught (and so on, recursively). More often than not, there turns out to be a way to circumvent or fool the black list mechanism. Black lists also fail because they are based on previous experience, and only enumerate *known* bad input. The recommended practice is to create white lists, which enumerate known good input; everything else is rejected.
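As a concrete illustration (the sanitizer functions below are invented for this sketch), a black list filter that strips a known-bad token can be defeated by input that reassembles the token from the pieces left behind, while a white list accepts only input it can positively validate:

```python
import re

def blacklist_sanitize(s):
    # Black list: remove the one known-bad token and hope nothing else is bad
    return s.replace("<script>", "")

def whitelist_sanitize(s):
    # White list: accept only characters positively known to be safe
    return s if re.fullmatch(r"[A-Za-z0-9 ]*", s) else None

# Stripping the inner "<script>" splices the outer pieces back together,
# reconstructing exactly the token the black list tried to remove
print(blacklist_sanitize("<scr<script>ipt>"))   # prints <script>

# The white list rejects the same input outright
print(whitelist_sanitize("<scr<script>ipt>"))   # prints None
```

Note that the bypass needs no knowledge the defender lacks, only the observation that the filter's output is not re-checked.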
When I teach secure programming, I go through often-repeated mistakes and show students how to avoid them. Books on secure programming show lists upon lists of “sins” and errors to avoid. Those are black lists that we are in effect creating in the minds of readers and students! It doesn’t stop there. Recommended development methods (solutions for repeated mistakes) also often take the form of black lists. For example, risk assessment and threat modeling require expert artisans to imagine, based on past experience, what the likely avenues of attack are, and what damage and other consequences might follow. The results of those activities depend on unique skill sets, are irreproducible (ask different people and you will get different answers), and attempt to enumerate known bad things. They build black lists into the design of software development projects.
Risk assessment and threat modeling are appropriate for insurance purposes in the physical world, because the laws of physics and gravity on earth aren’t going to change tomorrow. The experience is collected at geographical, geological and national levels, tabulated and analyzed for decades. However, in software engineering, black lists are doomed to failure, because they are based on past experience, and need to face intelligent attackers inventing new attacks. How good can that be for the future of secure software engineering?
Precious few people emphasize development and software configuration methods that result (with guarantees) in the creation of provably correct code. This naturally leads to formal methods (and languages like SPARK and the correctness-by-construction approach), but not necessarily so. For example, I was recently educated on the existence of a software solution called AppArmor (SUSE Linux, Crispin Cowan et al.). This solution is based on fairly fine-grained capabilities, granting an application only its known required capabilities; all others are denied. This corresponds to building a white list of what an application is allowed to do; the developers even say that it can contain and limit a process running as root. It may still be possible for some malicious activity to take place within the limits of the granted capabilities (if an application is compromised), but its scope is greatly limited. The white list can be developed simply by exercising an application through its normal states and functions, in normal usage. The list of capabilities is then frozen and provides protection against unexpected conditions.
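To give a flavor of what such a white list looks like in practice (the application path and the specific file rules below are made up for illustration, not taken from a real profile), an AppArmor profile enumerates exactly the accesses an application may perform, and everything unlisted is denied:

```
# Hypothetical AppArmor profile for an imaginary /usr/bin/myapp:
# anything not explicitly listed here is denied to the process.
/usr/bin/myapp {
  /usr/bin/myapp mr,       # map and read its own binary
  /etc/myapp/config r,     # read its configuration file
  /var/log/myapp.log w,    # write to its log
  network inet stream,     # open TCP sockets, and nothing else
}
```

Even if the process is subverted, it cannot open files or sockets outside this list, which is what confines a compromised root process.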
We need to come up with more white list methods for both development and configuration, and move away from black lists. This is the only way that secure software development will become secure software engineering.
Edit (4/16/06): Someone pointed out the site http://blogs.msdn.com/threatmodeling/ to me. It is interesting because it shows awareness of the challenge of getting from an art to a science. It also attempts to abstract the expert knowledge into an “attack library”, which makes its black list nature explicit. However, they don’t openly acknowledge the limitations of black lists. While we don’t currently have a white list design methodology that can replace threat modeling (it is useful!), it’s regrettable that the best everyone can come up with is a black list.
Also, it occurred to me since writing this post that AppArmor isn’t quite a pure white list methodology, strictly speaking. Instead of being a list of known *safe* capabilities, it is a list of *required* capabilities. The difference is that the list of required capabilities, due to the granularity of capabilities and the complexity emerging from composing different capabilities together, is a superset of what is safe for the application to be able to do. What to call it then? I am thinking of “permissive white list” for a white list that allows more than necessary, vs a “restrictive white list” for a white list that possibly prevents some safe actions, and an “exact white list” when the white list matches exactly what is safe to do, no more and no less.
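In set terms (the capability names below are invented for illustration), the three flavors differ only in how the permitted set relates to the set of actions that are actually safe for the application:

```python
# Invented capability names; "safe" is what the application can safely do
safe = {"read_config", "write_log", "open_socket"}

permissive_whitelist = safe | {"write_config"}   # superset: allows more than is safe
restrictive_whitelist = safe - {"open_socket"}   # subset: blocks some safe actions
exact_whitelist = set(safe)                      # matches the safe set exactly

assert permissive_whitelist > safe    # strictly allows extra actions
assert restrictive_whitelist < safe   # strictly forbids some safe actions
assert exact_whitelist == safe
```

By this classification, a profile built from observed normal usage is a permissive white list: composing coarse capabilities admits more than the application strictly needs to do safely.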
CNET has published an excellent resource for protecting oneself from identity theft. The site includes an ID theft FAQ with many good tips, a roundtable debate, and a few little multimedia gems.
One of my favorite pieces is the pie graph in the sidebar that illustrates risks to ID theft. The most prevalent risks still come from offline.
I gave an ID Theft talk several months ago, and the audience was looking for any way to protect themselves online, to the point of absurdity. But when I suggested that they cut down on all the stuff they carry in their wallets and/or purses, they nearly revolted: “What if I need to do XYZ and don’t have my ID/credit card/library card/customer card/social security card/insurance card/etc.?”
To me, this illustrates that we have a long way to go in educating users about risks. It also illustrates that we need to push back from all the noise created in the infotainment industries, who are perpetuating the online myth and ignoring the brick-and-mortar threats.
A recent study by the US Justice Department notes that households headed by individuals between the ages of 18 and 24 are the most likely to experience identity theft. The report does not investigate why this age group is more susceptible, so I’ve started a list:
- Willingness To Share Information: If myspace, facebook, and the numerous blog sites like livejournal are any indication, younger adults tend to be more open about providing personal information. While these sites may not be used by identity thieves, they nonetheless illustrate students’ willingness to divulge intimate details of their personal lives. Students might be more forthcoming with their SSN, account information and credit card numbers than are their elders.
- Financial Inexperience: Many college students are out on their own for the first time. Many also are in “control” of their finances for the first time. With that lack of experience comes a lack of experience with and knowledge about tracking expenditures and balancing checkbooks. College students are an easier target for identity thieves who can ring up several purchases before being noticed.
- Access to Credit: A walk around campus during the first few weeks of the year also reveals another contributing factor. Students are lured into applying for credit cards by attractive young men and women handing out free T-shirts and other junk. It is not unusual for a college freshman to have three or four credit cards with limits of $1000 to $5000.
- Lost Credit Cards and Numbers: This might be a stretch, but I know many college students who periodically lose their wallets, purses, etc., and who do not act quickly to cancel their debit and credit cards. I also know many who have accidentally left a campus bar without closing their tab. It would be trivial to get access to someone else’s card at these establishments. Along with this reason comes access to friends’ and roommates’ cards.
I’m sure there are many more contributing factors. What interests me is determining the appropriate role of the university in helping to prevent identity theft among this age group. Most colleges and universities now engage in information security awareness and training initiatives with the goal of protecting the university’s infrastructure and the privacy of information covered by regulations such as FERPA, HIPAA, and so on. Should higher education institutions extend infosec awareness campaigns so that they deal with issues of personal privacy protection and identity theft? What are the benefits to universities? What are their responsibilities to their students?
The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest. Topics covered include spyware, phishing, and patching. The winning video, Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers to online computing as well as the steps one can take to protect his or her computer and personal information.
The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest. In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.
Ars Technica’s Eric Bangeman posted a pointer and commentary about a case in Illinois where a WiFi piggybacker got caught and fined. This is apparently the third conviction in the US (two in Florida and this one) in the last 9 months. The Rockford Register reports:
In a prepared statement, Winnebago County State’s Attorney Paul Logli said, “With the increasing use of wireless computer equipment, the people of Winnebago County need to know that their computer systems are at risk. They need to use encryption or what are known as firewalls to protect their data, much the same way locks protect their homes.”
Firewall? I guess they didn’t prepare the statement enough, but the intent is clear. Still, it seems that the focus is on the consumer’s responsibility to lock down their network, ignoring the fact that the equipment that’s churned out by manufacturers is far too difficult to secure in the best of circumstances, let alone when you have legacy gear that won’t support WPA. Eric seems to agree:
Personally, I keep my home network locked down, and with consumer-grade WAPs so easy to administer, there’s really no excuse for leaving them running with the default (open) settings.
“Easy” is very relative. It’s “easy” for guys like us, and probably a lot of the Ars audience, but try standing in the networking hardware aisle at Best Buy for about 15 minutes and listen to the questions most customers ask. As I’ve touched on before, expecting them to secure their setups is just asking for trouble.