[If you want to skip my recollection and jump right to the announcement that is the reason for this post, go here.]
Back in about 1990 I was approached by an eager undergrad who had recently come to Purdue University. A mutual acquaintance (hi, Rob!) had recommended that the student connect with me for a project. We chatted for a bit and at first it wasn't clear exactly what he might be able to do. He had some experience coding, and was working in the campus computing center, but had no background in the more advanced topics in computing (yet).
Well, it just so happened that a few months earlier, my honeypot Sun workstation had recorded a very sophisticated (for the time) attack, which resulted in an altered shared library with a back door in place. The attack was stealthy, and the new library had the same dates, size, and simple hash value as the original. (The attack was part of a larger series of attacks, eventually documented in "@Large: The Strange Case of the World's Biggest Internet Invasion" by David H. Freedman and Charles C. Mann.)
I had recently been studying message digest functions and had a hunch that they might provide better protection for systems than a simple "ls -1 | diff - old" comparison. However, I wanted to get some operational sense of the potential for collisions in the digests. So, I tasked the student with devising some tests to run many files through a version of the digest to see if there were any collisions. He wrote a program to generate some random files, and all seemed okay based on that. I suggested he look for a different collection -- something larger. He took my advice a little too much to heart. It seems he had a part-time job running backup jobs on the main shared instructional computers at the campus computing center. He decided to run the program over the entire file system to look for duplicates. Which he did one night after backups were complete.
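That experiment can be sketched in a few lines of Python. This is an illustrative reconstruction, not the student's actual program; SHA-256 here is a stand-in for the early digest function we were testing at the time:

```python
import hashlib
import os

def find_collisions(root):
    """Hash every readable file under root; report any pair of files
    whose contents differ but whose digests match (a true collision)."""
    seen = {}          # digest -> first path seen with that digest
    collisions = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue   # unreadable or vanished file; skip it
            if digest in seen:
                # Same digest: only a collision if the bytes differ.
                with open(seen[digest], "rb") as a, open(path, "rb") as b:
                    if a.read() != b.read():
                        collisions.append((seen[digest], path))
            else:
                seen[digest] = path
    return collisions
```

A true collision would mean two files with different bytes but identical digests; for a sound cryptographic digest, none should turn up even across hundreds of thousands of files, which is exactly the result the student reported.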
The next day (as I recall) he reported to me that there were no unexpected collisions over many hundreds of thousands of files. That was a good result!
The bad result was that running his program over the file system had changed the access time of every file on the system, so the backups the next evening vastly exceeded the existing tape archive and all the spares! This led directly to the student having a (pointed) conversation with the director of the center, and thereafter, unemployment. I couldn't leave him in that position mid-semester, so I found a little money and hired him as an assistant. I then put him to work coding up my idea of how to use message digests to detect changes and intrusions in a computing system. Over the next year, he coded up my design, and we ran repeated, modified "cleanroom" tests of his software. Only when they all passed did we release the first version of Tripwire.
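The core idea behind Tripwire can be sketched in a similar way: record a baseline of sizes, timestamps, and cryptographic digests, then re-verify later. This is a simplified illustration with my own naming, not Tripwire's actual code. (Note that merely reading a file to hash it is what updates the access times -- the very effect that ballooned those backups.)

```python
import hashlib
import os

def snapshot(paths):
    """Record size, mtime, and a cryptographic digest for each file.
    An attacker can forge size and timestamps, but not the digest."""
    baseline = {}
    for path in paths:
        st = os.stat(path)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        baseline[path] = (st.st_size, st.st_mtime, digest)
    return baseline

def verify(baseline):
    """Return the paths whose current digest no longer matches the baseline."""
    changed = []
    for path, (_, _, old_digest) in baseline.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != old_digest:
                changed.append(path)
    return changed
```

Because the digest depends on the file's entire contents, an attacker who preserves the size, dates, and a simple checksum -- as in the honeypot attack above -- still cannot make a modified library match the recorded baseline without finding a collision.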
That is how I met Gene Kim.
Gene went on to grad school elsewhere, then a start-up, and finally got the idea to start the commercial version of Tripwire with Wyatt Starnes; Gene served as CTO, Wyatt as CEO. Their subsequent hard work, and that of hundreds of others who have worked at the company over the years, resulted in great success: the software has become one of the most widely used change detection & IDS systems in history, as well as inspiring many other products.
Gene became more active in the security scene, and was especially intrigued with issues of configuration management, compliance, and overall system visibility, and with their connections to security and correctness. Over the years he has spoken with thousands of customers and experts in the industry, and heard both best-practice and horror stories involving integrity management, version control, and security. This led to projects, workshops, panel sessions, and eventually to his lead authorship of "Visible Ops Security: Achieving Common Security and IT Operations Objectives in 4 Practical Steps" (Gene Kim, Paul Love, George Spafford), and some other, related works.
His passion for the topic only grew. He was involved in standards organizations, won several awards for his work, and even helped make the B-Sides conferences a going concern. A few years ago, he left his position at Tripwire to begin work on a book to better convey the principles he knew could make a huge difference in how IT is managed in organizations big and small.
I read an early draft of that book a little over a year ago (late 2011). It was a bit rough -- Gene is bright and enthusiastic, but was not quite writing to the level of J.K. Rowling or Stephen King. Still, it was clear that he had the framework of a reasonable narrative to present major points about good, bad, and excellent ways to manage IT operations, and how to transform them for the better. He then obtained input from a number of people (I think he ignored mine), added some co-authors, and performed a major rewrite of the book. The result is a much more readable and enjoyable story -- a cross between a case study and a detective novel, with a dash of H. P. Lovecraft and DevOps thrown in.
The official launch date of the book, "The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win" (Gene Kim, Kevin Behr, George Spafford), is Tuesday, January 15, but you can preorder it before then on (at least) Amazon.
The book is worth reading if you have a stake in operations at a business using IT. If you are a C-level executive, you should most definitely take time to read the book. Consultants, auditors, designers, educators...there are some concepts in there for everyone.
But you don't have to take only my word for it -- see the effusive praise of tech luminaries who have read the book.
So, Spaf sez, get a copy and see how you can transform your enterprise for the better.
(Oh, and I have never met the George Spafford who is a coauthor of the book. We are undoubtedly distant cousins, especially given how uncommon the name is. That Gene would work with two different Spaffords over the years is one of those cosmic quirks Vonnegut might write about. But Gene isn't Vonnegut, either.)
So, as a postscript.... I've obviously known Gene for over 20 years, and am very fond of him, as well as happy for his continuing success. However, I have had a long history of kidding him, which he has taken with incredible good nature. I am sure he's saving it all up to get me some day....
When Gene and his publicist asked if I could provide some quotes to use for his book, I wrote the first of the following. For some reason, this never made it onto the WWW site. So, they asked me again, and I wrote the second of the following -- which they also did not use.
So, not to let a good review (or two) go to waste, I have included them here for you. If nothing else, it should convince others not to ask me for a book review.
But, despite the snark (who, me?) of these gag reviews, I definitely suggest you get a copy of the book and think about the ideas expressed therein. Gene and his coauthors have really produced a valuable, readable work that will inform -- and maybe scare -- anyone involved with organizational IT.
Based on my long experience in academia, I can say with conviction that this is truly a book, composed of an impressive collection of words, some of which exist in human languages. Although arranged in a largely random order, there are a few sentences that appear to have both verbs and nouns. I advise that you immediately buy several copies and send them to people -- especially people you don't like -- and know that your purchase is helping keep some out of the hands of the unwary and potentially innocent. Under no circumstances, however, should you read the book before driving or operating heavy machinery. This work should convince you that Gene Kim is a visionary (assuming that your definition of "vision" includes "drug-induced hallucination").
I picked up this new book -- The Phoenix Project, by Gene Kim et al. -- and could not put it down. You probably hear people say that about books in which they are engrossed. But I mean this literally: I happened to be reading it on my Kindle while repairing some holiday ornaments with superglue. You might say that the book stuck with me for a while.
There are people who will tell you that Gene Kim is a great author and raconteur. Those people, of course, are either trapped in Mr. Kim's employ or they drink heavily. Actually, one of those conditions invariably leads to the other, along with uncontrollable weeping, and the anguished rending of garments. Notwithstanding that, Mr. Kim's latest assault on les belles-lettres does indeed prompt this reviewer to some praise: I have not had to charge my health spending account for a zolpidem refill since I received the advance copy of the book! (Although it may be why I now need risperidone.)
I must warn you, gentle reader, that despite my steadfast sufferance in reading, I never encountered any mention of an actual Phoenix. I skipped ahead to the end, and there was no mention there, either. Neither did I notice any discussion of a massive conflagration nor of Arizona, either of which might have supported the reference to Phoenix. This is perhaps not so puzzling when one recollects that Mr. Kim's train of thought often careens off the rails with any random, transient manifestation corresponding to the meme "Ooh, a squirrel!" Rather, this work is more emblematic of a bus of thought, although it is the short bus, at that.
Despite my personal trauma, I must declare the book as a fine yarn: not because it is unduly tangled (it is), but because my kitten batted it about for hours with the evident joy usually limited to a skein of fine yarn. I have found over time it is wise not to argue with cats or women. Therefore, appease your inner kitten and purchase a copy of the book. Gene Kim's court-appointed guardians will thank you. Probably.
(Congratulations Gene, Kevin and George!)
I have a set of keywords registered with Google Alerts that result in a notification whenever they show up in a new posting. This helps me keep track of some particular topics of interest.
One of them popped up recently with a link to a review and some comments about a book I co-authored (Practical Unix & Internet Security, 3rd Edition). The latest revision is over 6 years old, but still seems to be popular with many security professionals; some of the specific material is out of date, but much of the general material is still applicable and is likely to be applicable for many years yet to come. At the time we wrote the first edition of the book there were only one or two books on computer security, so we included more material to make this a useful text and reference.
In general, I don't respond to reviews of my work unless there is an error of fact, and not always even then. If people like the book, great. If they don't, well, they're entitled to their opinions -- no matter how ignorant and ill-informed they may be.
This particular posting included reviews from Amazon that must have been posted about the 2nd edition of the book, nearly a decade old, although their dates as listed on this site make it look like they are recent. I don't recall seeing all of the reviews before this.
One of the responses in this case was somewhat critical of me rather than the book: the text by James Rothschadl. I'm not bothered by his criticism of my knowledge of security issues. Generally, hackers who specialize in the latest attacks dismiss anyone not versed in their tools as ignorant, so I have heard this kind of criticism before. It is still the case that the "elite" hackers who specialize in the latest penetration tools think that they are the most informed about all things security. Sadly, some decision-makers believe this too, much to their later regret, usually because they depend on penetration analysis as their primary security mechanism.
What triggered this blog posting was when I read the comments that included the repetition of erroneous information originally in the book Underground by Suelette Dreyfus. In that book, Ms. Dreyfus recounted the exploits of various hackers and miscreants -- according to them. One such claim, made by a couple of hackers, was that they had broken into my account circa 1990. I do not think Ms. Dreyfus sought independent verification of this, because the story is not completely correct. Despite this, some people have gleefully pointed this out as "Spaf got hacked."
There are two problems with this tale. First, the computer account they broke into was on the CS department machines at Purdue. It was not a machine I administered (nor did I have administrator rights on it) -- it was a shared faculty machine. Thus, the perps succeeded in getting into a machine run by university staff that happened to have my account name but which I did not maintain. That particular instance came about because of a machine crash, after which the staff restored the system from an older backup tape. A security patch had been applied between the backup and the crash, and the staff didn't realize that the patch needed to be reapplied after the restore.
But that isn't the main problem with this story: rather, the account they broke into wasn't my real account! My real account was on another machine that they didn't find. Instead, the account they penetrated was a public "decoy" account that was instrumented to detect such behavior, and that contained "bait" files. For instance, the perps downloaded a copy of what they thought was the Internet Worm source code. It was actually a copy of the code with key parts missing, and some key variables and algorithms changed such that it would partially compile but not run correctly. No big deal.
Actually, I got log information on the whole event. It was duly provided to law enforcement authorities, and I seem to recall that it helped lead to the arrest of one of them (but I don't recall the details about whether there was a prosecution -- it was 20 years ago, after all).
At least 3 penetrations of the decoy account in the early 1990s provided information to law enforcement agencies, as well as inspired my design of Tripwire. I ran decoys for several years (and may be doing so to this day). I always had a separate, locked-down account for personal use, and even now keep certain sensitive files encrypted on removable media that is only mounted when the underlying host is offline. I understand the use of defense-in-depth, and the use of different levels of protection for different kinds of information. I have great confidence in the skills of our current system admins. Still, I administer a second set of controls on some systems. But I also realize that those defenses may not be enough against really determined, resourced attacks. So, if someone wants to spend the time and effort to get in, fine, but they won't find much of interest -- and they may be providing data for my own research in the process!
Cyber seems to be one of the buzzwords in Washington these days, with the recent botnet attacks generating a lot of extra noise. This has included at least one rather bellicose response from a US Representative who either is reading much more interesting information than the rest of us, or is not reading anything at all.
Meanwhile, in the background, various bits of legislation are being worked on by several committees in both the House and Senate to address various aspects of the perceived problems. Two notable instances are legislation proposed by Senator Rockefeller and others that followed closely after my testimony before their committee. I have heard that at least one of these pieces of proposed legislation is being revised, and will be reintroduced. Back in April, I sent comments on both proposed bills to committee staff, but never heard a response. I hope my input had some impact.
It occurred to me that I did not blog about the legislation or my comments. So, to correct that oversight, you will find the enclosed, which are my original comments with some newer perspective gained over the last few months. You can find the text of these bills via Thomas.
(I will post a follow-up when I see what the revised bills are like.)
The National Cybersecurity Advisor Act of 2009, S. 778
This proposed legislation, cosponsored by Senators Snowe, Bayh and Nelson, was a bit of a puzzle to me when it was introduced. The timing was such that the President's 60-day review report had not yet been delivered, and so it seemed premature to me. However, in retrospect, the 60-day review didn't end up suggesting a powerful office within the EOP for cyber, and so this bill was right on target.
The bill would establish an office of National Cybersecurity Advisor , with the head of that office reporting directly to the President. That person would have authority to hire consultants, consult with any Federal agency, approve clearances of personnel related to cyber, and have access to all classified programs relating to cyber. More importantly, the advisor "...shall review and approve all cybersecurity-related budget requests submitted to the Office of Management and Budget" and would "...serve as the principal advisor to the President for all cybersecurity-related matters." Both of these would be an improvement over the suggestions in the final 60-day review.
The bill has had two readings and has been referred to the Committee on Homeland Security and Governmental Affairs.
(I note that the 60 day review would have been delivered to the President on April 9. It is now more than 3 months later, and still no appointment of the cybersecurity cheerleader proposed by that document.)
The National Cybersecurity Act of 2009, S 773
This was also introduced before the 60-day review was released. It contains 23 sections. It has been read twice and referred to the Committee on Commerce, Science, and Transportation. It also is cosponsored by Snowe, Nelson and Bayh.
Sec. 1: Title And Table Of Contents.
Pro forma material.
Sec 2: Findings
This is a section devoted to bits of information that justify the bill. Several people are cited for things they have said on the topics; I was not one of them, although Purdue was mentioned in point 13, and the PITAC report I helped prepare was listed in point 14.
Sec 3: Cybersecurity Advisory Panel
This section defines the creation of a high-level, Presidential advisory panel. The panel will be composed of individuals from a broad cross-section of society, and will provide the President with advice on strategy, trends, priorities, and civil liberties related to cyber security. The panel will be required to provide a report at least once every 2 years.
This looks to be well-designed and potentially very useful. Panels such as this depend on the alacrity with which a President appoints appropriate members, whether those members actually get something useful done, and whether the President heeds their advice. But at least this framework is off to a good start.
Sec 4. Real-time Cybersecurity Dashboard
The Secretary of Commerce is mandated to develop a "real-time dashboard" within a year. This dashboard is supposed to show the cybersecurity status and vulnerability information of all networks managed by the Department of Commerce.
This is quite puzzling. It isn't clear to me why this is restricted to Commerce, although notes I have from staff indicate that the intent is to serve as a pilot for other parts of government. But that isn't the end of the puzzle. Who is supposed to view this dashboard? What do they do after they see something on it? And what the heck does it really measure? (Hopefully not a dynamic FISMA score!)
Of course, I can't help noting that having one location to collect and display vulnerability information is a very bad idea.
Sec 5. State And Regional Cybersecurity Enhancement Program
This section describes the creation of a set of centers around the country to assist small businesses with cybersecurity. It is modeled on the Hollings Manufacturing Extension Partnership (MEP) and would be run by the Department of Commerce. The centers would receive up to 1/2 of their initial funding from the Federal government, with the rest to come from states, regional groups, and fees paid by members. The centers would provide expertise and resources to small companies.
Although I have some misgivings about this, it is the best suggestion I have seen yet on how to get cybersecurity technology out to small businesses in an affordable manner. I was not familiar with this program and had suggested something similar to our agricultural extension model, so this is in keeping with that. The questions I have are whether these will attract the necessary funding and talent to be viable. But it is probably worth the experiment.
Sec 6. NIST Standards Development And Compliance
This section sets out that, within a year, the Secretary of Commerce will establish a research plan for security metrics, establish a whole set of metrics and compliance measures for vulnerabilities and testing, set all these as standards, and apply them to all vendors and government systems. This will also constrain acceptable configurations, and provide accreditation of suppliers.
Whew! This is way off base. We don't know how to do many of these things, and I fear that setting a deadline will mean that a number of poor standards and requirements will be established. Not only that, having a set of uniform configurations (and required compliance to them) is a sure way to weaken our security rather than strengthen it -- diversity and uncertainty have protective effects when used appropriately. Requiring everyone to code the same way, and configure only approved systems the same way is not going to be helpful -- except to the bad guys.
This is also a good way to kill innovation in an area (software development and security deployment) where innovation is badly needed.
This is a bad idea.
Sec 7. Licensing And Certification Of Cybersecurity Professionals
This provision requires Commerce to develop a national licensing and certification program for cybersecurity professionals. Within 3 years, it would be unlawful to provide security services to any government or national security system without the certification.
This is worse than section 6! We don't know yet what the appropriate skills are for professionals. In fact, there are a wide range of skills, not all of which are needed by each person.
The result of this, if it gets enacted, is either that we will have a least-common denominator for skills that will get taught by a lot of training organizations that will enrich them but do nothing for the nation, or the bar will be set so high that we will have a shortage of qualified personnel. Either way, it may also stifle enhanced and unconventional training that could produce new talent.
I have been working as an educator in this field for two decades. This section presents an awful idea.
Sec 8. Review Of NTIA Domain Name Contracts
Basically requires the Advisory Panel (Sec 3) to review any contract renewal with ICANN, and gives it veto authority.
Reasonable. It doesn't address some of the problems with ICANN, but it isn't clear that Congress can do that.
Sec 9. Secure Domain Name Addressing System
Within 3 years, the Commerce Department must come up with a strategy and schedule to implement DNSSEC, and the President must require all agencies and departments to follow that plan.
Probably reasonable, and with a more realistic timetable than some of the other sections.
Sec 10. Promoting Cybersecurity Awareness
Basically, the Secretary of Commerce is charged with finding ways to increase public awareness of cybersecurity. Not a bad idea, but the real issue occurs when budgets are allocated. Commerce gets stuck with lots of unfunded mandates, and I don't see this as ranking up there with, say, maintaining the nation's atomic clocks or evaluating the next digital signature standard. So, if the budgets are cramped, this won't happen.
Sec 11. Federal Cybersecurity Research And Development
This directs NSF to provide more funding towards some specific hard research issues (assurance, attribution, insider threat, privacy protection, etc.), and to help ensure that students get some training in secure code production techniques (although that is a somewhat nebulous concept). It also authorizes significant new funding levels for research, establishment of centers, and funding traineeships.
Overall, I think the intent is good. The issue is once again one of appropriations each year to fund these initiatives. If "new" funding is available, that is great. However, if this ends up eating into other research thrusts, it is generally not good for the community as a whole.
It is also the case that when substantial blocks of money are made available, suddenly "experts" come out of the woodwork to compete for it. New ideas and new blood are needed in the area, but it is almost certain that a significant part of this will not accomplish what is intended, although what is accomplished may still have value. I would hope that the NSF doesn't try to address this by tying funds to the Centers of Excellence (sic).
Sec 12. Federal Cyber Scholarship-For-Service Program
The NSF Scholarship for Service program would be expanded in size and scope, and codified in law. The program grew out of an idea I presented to Congress back in 1997. It has functioned well, although it has not attracted large numbers of students, for a variety of reasons. The expansion of the program in this draft bill doesn't really change the nature of the program, so I would be very surprised if the 1000 students per year would actually matriculate. I suppose the numbers might get pumped up if more schools participated, but we don't have the faculty or educational materials nationally to do that. Thus, I have reservations about this, too.
Sec 13. Cybersecurity Competition And Challenge
This would direct NIST to set up national competitions at different levels for cybersecurity. There is also authorization to solicit for and award prize money to winners.
I can see where this might increase interest in the field, and bring more people out to solve problems. However, the majority of challenges held in the field right now are "hacking into the opposing server" challenges, and I have contended over the years that such an approach should not be encouraged. If we are looking for employees of cyber military groups, this might be okay. But hack challenges don't really recognize the well-rounded and adept defenders and researchers. Attack challenges also don't tend to engage women, who are already badly underrepresented in the field.
So, this is another qualified "maybe" section: good intent, but a lot depends on implementation.
Sec 14. Public-Private Clearinghouse
This establishes Commerce as the home of vulnerability and threat information for government systems and critical private infrastructure. Commerce also has to come up with methods and standards for protecting and sharing this information.
Hmmm, I thought DHS was supposed to be doing all this now?
Sec 15. Cybersecurity Risk Management Report
The President is supposed to come up with a report on the feasibility of a risk and insurance market for cyber risk. The report is also supposed to include the feasibility of including that risk in bond ratings.
I've often said that if we could get the insurance industry engaged, we might well see some progress in private sector security. However, without some liability for companies (above and beyond loss risk) it still might not be enough. This bill doesn't touch the liability issue, which is likely to be a third rail issue for any legislation.
Sec 16. Legal Framework Review And Report
This section of the bill would mandate review of existing law that touches on cyber, and require recommendations for any necessary changes. This includes the ECPA, the Privacy Act, FISMA, and others. This would be a very good idea. The review would be delivered to Congress. At that point, there is no way to predict what might happen, but a review is definitely needed.
Sec 17. Authentication And Civil Liberties Report
Briefly mandates study of a national identification and authentication program, including the civil liberties issues associated therewith.
This is another touchy topic. There are many groups advocating for strongly authenticated ID, but there are also reasons to proceed with caution. Performing an in-depth study is probably worthwhile, but I'd prefer to see the National Academies tasked with it than an agency of government.
Sec 18. Cybersecurity Responsibilities And Authority
This would give the President authority to disconnect government or critical infrastructure systems in the event of an emergency. It would also grant authority for mapping systems, setting standards, monitoring performance, and other activities to protect and defend national-interest systems. It also allows the President to designate an agency or organization to be in charge during any cyber incident – presumably including Department of Defense agencies.
This has been controversial because of the "disconnect" provision. It isn't clear to me that there are situations that would be helped by a disconnect, although I can certainly imagine some that might be made worse by disconnection. I'm not sure that the current infrastructure would even allow disconnection! So, on balance, if it were left out I don't think it would matter, but it might make some people less nervous.
Most of the other parts of the section seem reasonable.
Sec 19. Quadrennial Cyber Review
Every four years there would need to be a review of cybersecurity posture, strategy, partnerships, threats, and so on. The Advisory Panel (Sec 3) would be involved. "The review shall include a comprehensive examination of the cyber strategy, force structure, modernization plans, infrastructure, budget plan, the Nation's ability to recover from a cyberemergency, and other elements of the cyber program and policies with a view toward determining and expressing the cyber strategy of the United States and establishing a revised cyber program for the next 4 years." Wow!
This is modeled after the Defense Department's review of the same name, I assume. It would be a tremendous amount of work, and might be a huge distraction. However, it also might help to highlight some of the shortfalls and dangers in a way that would be useful for policymakers.
One consideration from the DoD side: structuring reporting in this way tends to move planning from annual or biennial cycles to quadrennial or octennial cycles. In a fast-moving field such as cyber, this might well be counterproductive.
Sec 20. Joint Intelligence Threat Assessment
It states "The Director of National Intelligence and the Secretary of Commerce shall submit to the Congress an annual assessment of, and report on, cybersecurity threats to and vulnerabilities of critical national information, communication, and data network infrastructure."
Well, that's reasonable. Hmm, where is DHS?
Sec 21. International Norms And Cybersecurity Deterrence Measures
The President is directed to work with foreign governments to increase engagement and cooperation in cybersecurity.
We can hardly argue with that!
Sec 22. Federal Secure Products And Services Acquisitions Board
This would establish a board to set and review requirements for Federal acquisitions to ensure that cybersecurity standards are met.
My comments on section 6 hold here as well.
Sec 23. Definitions
Assorted definitions to interpret other parts of the bill.
S. 778 seems like a reasonable idea, although it isn't clear that enough responsibility is given to the position. Merging it with S. 773 might be reasonable, with many of the tasks in S. 773 currently delegated to the President instead delegated to the new position.
S. 773 is best where it encourages new development, reporting, education, and response. Unfortunately, some of the restrictions and mandates, especially Sections 6 and 7, make the bill more toxic than helpful.
The new funding required to carry everything out would be in the many hundreds of millions of dollars per year. Most of that is explicitly authorized in this legislation, but corresponding appropriation is not a certainty...and given the current economic climate, it is unlikely. Thus, there are some things contained in here that would end up as unfunded mandates on a few agencies (such as NIST) that are already laboring under a huge taskload with insufficient resources.
No mention is made of bolstering law enforcement at any level to help deal with cybersecurity issues. That is unfortunate, because it is one place where some immediate impact could definitely be made. However, given the way this will wend through committees, that is not unexpected. Commerce gets the bill first, so they get the direction.
DHS isn't mentioned anywhere. Again, that may be because of the path the bill will take through committees. However, I can't help but think it also has to do with the way that DHS has screwed up in this whole arena.
Overall, this bill evidences a great deal of careful thought and deep concern. There are many great ideas in here, as well as a few flawed ones. I have my fingers crossed that the rumored revision addresses the flaws and results in something that can get passed into law. Even a pared-down law consisting of sections 3, 5, 9, 10, 11, 12, 16 and 21 would have a lot of positive impact.
IE 7’s protected mode deserves recognition as a genuine security effort, but CanSecWest proved that it didn’t isolate Flash well enough. It isn’t clear whether a configuration issue was involved, but that hardly matters: most people won’t configure it correctly either. IE 7’s protected mode is a collection of good measures, such as applying least privilege and separation of privilege and intercepting system API calls, but it is difficult to verify and explain how it all fits together and to be sure that there are no gaps. More importantly, it relies heavily on the slippery slope of asking the user to grant higher permissions appropriately and correctly. We know where that leads: nearly everything gets granted, and the security is defeated.
Someone not only thought of a proper security architecture for web browsers but did it (see “Secure web browsing with the OP web browser” by Chris Grier, Shuo Tang, and Samuel T. King). There’s a browser kernel, and everything else is well compartmentalized and isolated. Similarly to the best operating system architectures for security, the kernel is very small (1221 lines of code), has limited functionality, and doesn’t run plug-ins inside kernel space (I’d love to have no drivers in my OS kernel as well…). It’s not clear if it’s a minimal or “true” micro-kernel—the authors steer clear of that discussion. Even malicious hosted ads (e.g., Yahoo! has had repeated experiences with this) are quarantined with a “provider domain policy”. This is an interesting read, and very encouraging. I’d love to play with it, but I can’t find a download.
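The appeal of the OP design can be caricatured in a few lines. This is a toy sketch, not the actual OP code, and every name in it (BrowserKernel, the "fetch" message, the component names) is invented for illustration: components never call one another directly; every message passes through a small kernel that checks an explicit policy, so an unlisted sender such as a malicious plug-in is stopped at the kernel boundary.

```python
# Toy sketch (hypothetical names, not the actual OP browser code) of a
# browser-kernel architecture: all inter-component traffic is routed
# through a tiny kernel that enforces a message-passing policy.

class BrowserKernel:
    """Minimal message router; components can only talk through it."""

    def __init__(self):
        self.components = {}
        # Policy entries: (sender, message type, target) triples allowed.
        self.policy = set()

    def register(self, name, handler):
        self.components[name] = handler

    def allow(self, sender, msg_type, target):
        self.policy.add((sender, msg_type, target))

    def send(self, sender, msg_type, target, payload):
        # Every message is checked against the policy before delivery.
        if (sender, msg_type, target) not in self.policy:
            raise PermissionError(f"{sender} may not send {msg_type} to {target}")
        return self.components[target](msg_type, payload)

kernel = BrowserKernel()
kernel.register("network", lambda t, p: f"fetched {p}")
kernel.register("renderer", lambda t, p: f"rendered {p}")

# The renderer is explicitly allowed to ask the network component for pages...
kernel.allow("renderer", "fetch", "network")
print(kernel.send("renderer", "fetch", "network", "http://example.com"))

# ...but a plug-in with no policy entry is refused at the kernel.
try:
    kernel.send("plugin", "fetch", "network", "http://evil.example")
except PermissionError as e:
    print("blocked:", e)
```

The point of the design is that the trusted computing base is only the small router plus the policy; a compromised component can still emit bad messages, but only along edges the policy already permits.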
Recently, the McAfee Corporation released their latest Virtual Criminology Report. Personnel from CERIAS helped provide some of the research for the report.
The report makes interesting reading, and you might want to download a copy. You will have to register to get a copy, however (that’s McAfee, not CERIAS).
The editors concluded that there are 3 major trends in computer security and computer crime:
Certainly, anyone following the news and listening to what we’ve been saying here will recognize these trends. All are natural consequences of increased connectivity and increased presence of valued information and resources online, coupled with weak security and largely ineffectual law enforcement. If value is present and there is little or no protection, and if there is also little risk of being caught and punished, then there is going to be a steady increase in system abuse.
I’ve posted links on my tumble log to a number of recent news articles on computer crime and espionage. It’s clear that there is a lot of misuse occurring, and that we aren’t seeing it all.
[posted with ecto]
The “VMM Detection Myths and Realities” paper has been heavily reported and discussed before. It considers whether a theoretical piece of software could detect if it is running inside a Virtual Machine Monitor (VMM). An undetectable VMM would be “transparent”. Many arguments are made against the practicality or the commercial viability of a VMM that could provide performance, stealth and reproducible, consistent timings. The arguments are interesting and reasonably convincing that it is currently infeasible to absolutely guarantee undetectability.
However, I note that the authors argue from essentially the same position as atheists arguing that there is no God. They argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both physical and in software development effort. This is reasonable, because the VMM has to fail only once in preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex. However, this is not a hermetic, formal proof that such a VMM is impossible and cannot exist; a new breakthrough technology, or an "alien science-fiction," god-like technology, might make it possible.
Then the authors argue that with the spread of virtualization, it will become a moot point for malware to try to detect whether it is running inside a virtual machine. One might be tempted to remark: doesn't that argument cut the other way as well, making it a moot point for an operating system or a security tool to try to detect whether it is running inside a malicious VMM?
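To make the detection side of the argument concrete, one classic heuristic is timing: instructions that trap to a monitor run far slower under a VMM than on bare metal. The sketch below is hypothetical and uses synthetic cycle counts rather than real measurements (a real detector would time something like CPUID in a tight loop); the function name, threshold, and figures are all invented for illustration.

```python
import statistics

# Hedged sketch of a timing-based VMM detection heuristic. The numbers
# below are synthetic: a real detector would measure the latency of an
# instruction that traps to the monitor (e.g. CPUID) many times.

def looks_virtualized(timings, baseline, factor=10.0):
    """Return True if the median observed latency greatly exceeds a
    known bare-metal baseline (in the same units, e.g. cycles)."""
    return statistics.median(timings) > factor * baseline

bare_metal = [102, 98, 101, 99, 100]          # hypothetical cycle counts
under_vmm = [2100, 1980, 2250, 2050, 1990]    # traps make each run ~20x slower

print(looks_virtualized(bare_metal, baseline=100))  # False
print(looks_virtualized(under_vmm, baseline=100))   # True
```

This also illustrates why the paper's argument is only probabilistic: a VMM that could fake the timing source consistently would defeat this particular check, which is exactly the cat-and-mouse dynamic discussed above.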
McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then. The idea is to integrate security functions within the virtual machine monitor. Malware nowadays prevents the installation of security tools and interferes with them as much as possible. If malware is successfully confined inside a virtual machine, and the security tools are operating from outside that scope, this could make it impossible for an attacker to disable security tools. I really like that idea.
The security tools could reasonably expect to run directly on the hardware or on an unvirtualized host OS; because of this, VMM detection isn't a moot point for the defender. However, the presentation did not discuss whether the McAfee security suite would attempt to detect if the VMM itself had been virtualized by an attacker. Also, would it be possible to detect a "bad" VMM if the McAfee security tools themselves ran inside a virtualized environment on top of the "good" VMM? Perhaps it would need more hooks into the VMM to do this; many, in fact, to try to catch all the possible ways in which a malicious VMM can fail to hide itself properly. What is the cost of all these detection attempts, which must be executed regularly? Aren't they prohibitive, making strong detection of a malicious VMM impractical? In the end, I believe this may be yet another race that depends on how much effort each side is willing to put into cloaking and detection. Practical detection is almost as hard as practical hiding, and the detection cost has to be paid on every machine, all the time.
Microsoft’s Singularity project attempts to create an OS and execution environment that is secure by design and simpler. What strikes me is how it resembles the “white list” approach I’ve been talking about. “Singularity” is about constructing secure systems with statements (“manifests”) in a provable manner. It states what processes do and what may happen, instead of focusing on what must not happen.
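The "white list" flavor of the manifest idea can be sketched in a few lines. This is an invented illustration, not Singularity's actual manifest language (which is far richer and statically verified): each program declares the only operations it may perform, and the checker refuses anything not declared, rather than trying to enumerate everything that must not happen.

```python
# Hypothetical sketch of manifest-based permission checking, in the
# spirit of (but not resembling the syntax of) Singularity manifests:
# each process declares up front the complete set of operations it may
# perform; anything undeclared is rejected by default.

MANIFEST = {
    "logger": {"open_file", "write_file"},
    "mailer": {"open_socket", "send_message"},
}

def check(process, operation):
    """Allow an operation only if the process's manifest declares it."""
    allowed = MANIFEST.get(process, set())
    if operation not in allowed:
        raise PermissionError(f"{process!r} manifest does not declare {operation!r}")
    return True

print(check("logger", "write_file"))   # declared, so permitted

try:
    check("logger", "open_socket")     # undeclared, refused by default
except PermissionError as e:
    print("denied:", e)
```

The contrast with conventional defenses is the default: a blacklist fails open when the attacker invents a behavior nobody listed, while a manifest fails closed.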
Last year I thought that virtualization and security could provide a revolution; now I think it’s more of the same “keep building defective systems and defend them vigorously”, just somewhat stronger. Even if I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally stable than yet another arms race. In the meantime, though, “secure virtualization” should help, and expect lots of marketing about it.
[tags]cyber warfare, cyber terrorism, cyber crime, Estonia[/tags]
I am frequently asked about the likelihood of cyber war or cyber terrorism. I’m skeptical of either being a stand-alone threat, as neither is likely to serve the goals of those who would actually wage warfare or commit terrorism.
The incidents in Estonia earlier this year were quite newsworthy and brought more people out claiming it was cyber terrorism or cyber warfare. Nonsense! It wasn’t terrorism, because it didn’t terrorize anyone—although it did annoy the heck out of many. And as far as warfare goes, nothing was accomplished politically, and the “other side” was never even formally identified.
Basically, in Estonia there was a massive outbreak of cyber vandalism and cyber crime.
Carolyn Duffy Marsan did a nice piece in Network World on this topic. She interviewed a number of people, and wrote it up clearly. I especially like it because she quoted me correctly! You can check out the article here: How close is World War 3.0? - Network World. I think it represents the situation quite appropriately.
[As a humorous aside, I happened to do a search on the Network World site to see if another interview had appeared without me hearing about it. I found this item that had appeared in December of 2006 and I didn’t know about it until now! Darn, and to think I could have started recruiting minions in January. ]
[tags]news, cell phones, reports, security vulnerabilities, hacking, computer crime, research priorities, forensics, wiretaps[/tags]
The Greek Cell Phone Incident
A great story involving computers and software, even though the main hack was against cell phones:
IEEE Spectrum: The Athens Affair. From this we can learn all sorts of lessons about how to conduct a forensic investigation, retention of logs, wiretapping of phones, and more.
Now, imagine VoIP, 802.11 networking, vulnerabilities in routers, and more: the possibilities get even more interesting. I suspect that there’s a lot more eavesdropping going on than most of us imagine, and certainly more than we discover.
NRC Report Released
Last week, the National Research Council announced the release of a new report: Towards a Safer and More Secure Cyberspace. The report is notable in a number of ways, and should be read carefully by anyone interested in cyber security. I think the authors did a great job with the material, and they listened to input from many sources.
There are 2 items I specifically wish to note:
Evolution of Computer Crime
Speaking of my alleged expertise at augury, I noted something in the news recently that confirmed a prediction I made nearly 8 years ago at a couple of invited talks: that online criminals would begin to compete for “turf.” The evolution of online crime is such that the “neighborhood” where criminals operate overlaps with others. If you want the exclusive racket on phishing, DDOS extortion, and other such criminal behavior, you need to eliminate (or absorb) the competition in your neighborhood. But what does that imply when your “turf” is the world-wide Internet?
The next step is seeing some of this spill over into the physical world. Some of the criminal element online is backed up by more traditional organized crime in “meat space.” They will have no compunction about threatening—or disabling—the competition if they locate them in the real world. And they may well do that because they also have developed sources inside law enforcement agencies and they have financial resources at their disposal. I haven’t seen this reported in the news (yet), but I imagine it happening within the next 2-3 years.
Of course, 8 years ago, most of my audiences didn’t believe that we’d see significant crime on the net—they didn’t see the possibility. They were more worried about casual hacking and virus writing. As I said above, however, one only needs to study human nature and history, and the inevitability of some things becomes clear, even if the mechanisms aren’t yet apparent.
The Irony Department
GAO reported a little over a week ago that DHS had over 800 attacks on their computers in two years. Note that this is a count of detected attacks. One top person in DC (who will remain nameless) referred to DHS as “A train wreck crossed with a nightmare, run by inexperienced political hacks” when describing things like TSA, the DHS cyber operations, and other notable problems. For years I (and many others) have been telling people in government that they need to set an example for the rest of the country when it comes to cyber security. It seems they’ve been listening; we were simply negligent in failing to specify that the example should be a good one.
[posted with ecto]
One of our students who works in biometrics passed along two interesting article links. This article describes how a password-protected, supposedly very secure USB memory stick was almost trivially hacked. This second article by the same author describes how a USB stick protected by a biometric was also trivially hacked. I’m not in a position to recreate the procedure described on those pages, so I can’t say for certain that the reality is as presented. (NB: simply because something is on the WWW doesn’t mean it is true, accurate, or complete. The rumor earlier this week about a delay in the iPhone release is a good example.) However, the details certainly ring true.
We have a lot of people who are “security experts” or who are marketing security-related products who really don’t understand what security is all about. Security is about reducing risk of untoward events in a given system. To make this work, one needs to actually understand all the risks, the likelihood of them occurring, and the resultant losses. Securing one component against obvious attacks is not sufficient. Furthermore, failing to think about relatively trivial physical attacks is a huge loophole—theft, loss or damage of devices is simple, and the skills to disassemble something to get at the components inside is certainly not a restricted “black art.” Consider the rash of losses and thefts of disks (and enclosing laptops) we have seen over the last year or two, with this one being one of the most recent.
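The "likelihood and loss" point above can be made with back-of-the-envelope arithmetic. The figures below are entirely invented for illustration, but the structure is the standard one: expected annual loss is likelihood times loss, so a mundane physical threat with a modest probability can dwarf an exotic cryptographic attack with an astronomically small one.

```python
# Hedged sketch of an annualized-loss comparison. All likelihoods and
# loss figures below are hypothetical, chosen only to show the shape of
# the calculation: risk = likelihood * resultant loss.

threats = {
    "break strong crypto": {"likelihood": 1e-9, "loss": 1_000_000},
    "stolen laptop/disk":  {"likelihood": 0.05, "loss": 250_000},
    "lost USB stick":      {"likelihood": 0.10, "loss": 50_000},
}

# Rank threats by expected loss per year, highest first.
ranked = sorted(threats.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["loss"],
                reverse=True)

for name, t in ranked:
    ale = t["likelihood"] * t["loss"]  # annualized loss expectancy
    print(f"{name:20s} expected loss/yr ~ ${ale:,.2f}")
```

With these (made-up) numbers, device theft dominates the ranking by several orders of magnitude, which is exactly why "securing one component against obvious attacks" is not sufficient.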
Good security takes into account people, events, environment, and the physical world. Poor security is usually easy to circumvent by attacking one of those avenues. Despite publicity to the contrary, not all security problems are caused by weak encryption and buffer overflows!
[posted with ecto]
[tags]security marketplace, firewalls, IDS, security practices, RSA conference[/tags]
As I’ve written here before, I believe that most of what is being marketed for system security is misguided and less than sufficient. This has been the theme of several of my invited lectures over the last couple of years, too. Unless we come to realize that current “defenses” are really attempts to patch fundamentally faulty designs, we will continue to fail and suffer losses. Unfortunately, the business community is too fixated on the idea that there are quick fixes to really investigate (or support) the kinds of long-term, systemic R&D that is needed to really address the problems.
Thus, I found the RSA conference and exhibition earlier this month to be (again) discouraging this year. The speakers basically kept to a theme that (their) current solutions would work if they were consistently applied. The exhibition had hundreds of companies displaying wares that were often indistinguishable except for the color of their T-shirts—anti-virus, firewalls (wireless or wired), authentication and access control, IDS/IPS, and vulnerability scanning. There were a couple of companies that had software testing tools, but only 3 of those, and none marketing suites of software engineering tools. A few companies had more novel solutions—I was particularly impressed by a few that I saw, such as the policy and measurement-based offerings by CoreTrace, ProofSpace, and SignaCert. (In the interest of full disclosure, SignaCert is based around one of my research ideas and I am an advisor to the company.) There were also a few companies with some slick packaging of older ideas (Yoggie being one such example) that still don’t fix underlying problems, but that make it simpler to apply some of the older, known technologies.
I wasn’t the only one who felt that RSA didn’t have much new to offer this year, either.
When there is a vendor-oriented conference that has several companies marketing secure software development suites that other companies are using (not merely programs to find flaws in C and Java code), when there are booths dedicated to secured mini-OS systems for dedicated tasks, and when there are talks scheduled about how to think about limiting functionality of future offerings so as to minimize new threats, then I will have a sense that the market is beginning to move in the direction of maturity. Until then, there are too many companies selling snake oil and talismans—and too many consumers who will continue to buy those solutions because they don’t want to give up their comfortable but dangerous behaviors. And any “security” conference that has Bill Gates as keynote speaker—renowned security expert that he is—should be a clue about what is more important for the conference attendees: real security, or marketing.
Think I am too cynical? Watch the rush into VoIP technologies continue, and a few years from now look at the amount of phishing, fraud, extortion and voice-spam we will have over VoIP, and how the market will support VoIP-enabled versions of some of the same solutions that were in Moscone Center this year. Or count the number of people who will continue to mail around Word documents, despite the growing number of zero-day and unpatched exploits in Word. Or any of several dozen current and predictable dangers that aren’t “glitches”—they are the norm. If you really pay attention to what happens, then maybe you’ll become cynical, too.
If not, there’s always next year’s RSA Conference.