It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems. Virtualization supports functions such as these:
Compare it to the list of operating system duties:
The similarity suggests that virtualization solutions compete with operating systems. Setting aside the capability to run legacy or entirely different operating systems simultaneously, I now believe that part of their success must come from operating systems not satisfying these needs well enough. Typical operating systems lack security, reliability, and ease of maintenance. They have drivers in kernel space; Windows Vista thankfully now runs some in user space, and Linux is moving in that direction. The complexity is staggering, and it is reflected in the security guidance: hardening guides and “benchmarks” (essentially evaluations of configuration settings) are long and complex. The attempt to solve the federal IT maintenance and compliance problem created the SCAP and XCCDF standards, which are currently ambiguously specified, buggy, and very complex. The result of all this is intensive, stressful, and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.
It looks as if we have sinking boats, so we are putting them inside a bigger, more powerful boat: virtualization. In reality, though, virtualization typically depends on another full-blown operating system.
VMware ESX Server runs its own OS with drivers. Xen and the offerings based on it have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation). Microsoft’s “Hyper-V” requires a full-blown Windows operating system to run it. So what we are really doing is exchanging one untrusted OS for another that we are supposed to trust more for some reason. This other OS also needs patches, configuration, and maintenance. Now we have multiple OSes to maintain! What did we gain? We don’t trust OSes, but we trust “virtualization” that depends on more OSes? At least ESX is “only” 50 MB, simpler and smaller than the others, but its number of defects per MB of binary code, as measured by patches issued, is not convincing.
I am no longer convinced that a virtualization solution plus a guest OS is significantly more secure or functional than a single well-designed OS could be, in theory. Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don’t trust operating systems enough to let them stand on their own. The practice of wiping and reinstalling an OS after an application or an account is compromised, or of deploying a new image by default, suggests that there is little trust in the depth provided by current OSes.
As for ease of management and availability vs. patching, I don’t see why operating systems could not be managed just as smartly as ESX is, migrating applications as necessary; ESX is an operating system anyway. I believe that all the special things a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done just as well by a trustworthy, properly designed OS; there may be a thesis or two in figuring out how to implement them back in an operating system.
What virtualization vendors are really providing is a clever way to smoothly replace one operating system with another. This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista: virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest. Of course, even with a better OS we would still need virtualization for testbeds like ReAssure, and for legacy applications. Perhaps ReAssure could help test new, better operating systems.
(This text is the essence of my presentation on the virtualization panel at the 2008 CERIAS symposium.)
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level. ACM Operating Systems Review, 41
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure? Computer, 39
Least privilege is the idea of giving a subject or process only the privileges it needs to complete a task. Compartmentalization is a technique for separating code into parts to which least privilege can be applied, so that if one part is compromised, the attacker does not gain full access. Why does this get confused all the time with separation of privilege? Separation of privilege is breaking up a *single* privilege amongst multiple, independent components or people, so that agreement among several of them, or collusion, is necessary to perform an action (e.g., dual signature checks). So, if an authentication system has various biometric components, a component that evaluates a token, and another component that evaluates some knowledge or capability, and all have to agree for authentication to occur, then that is separation of privilege. It is essentially a logical “AND” operation; in its simplest form, a system would check several conditions before granting approval for an operation. Bishop uses the example of “su” or “sudo”: a user (or attacker of a compromised process) needs to know the appropriate password, and the user needs to be in a special group.

A related, but not identical, concept is that of majority voting systems. Redundant systems have to agree, hopefully outvoting a defective system. If there were no voting, i.e., if all of the systems always had to agree, it would be separation of privilege. OpenSSH’s UsePrivilegeSeparation option is *not* an implementation of privilege separation by that definition; it simply runs compartmentalized code using least privilege on each compartment.
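The distinction can be made concrete with a minimal sketch. All names here are hypothetical illustrations, not any real API: separation of privilege is a logical AND over independent checks, while majority voting tolerates a dissenting minority.

```python
# Hypothetical illustration of the concepts discussed above.

def separation_of_privilege(checks):
    """A single privilege is granted only if ALL independent checks
    agree -- a logical AND (e.g., biometric AND token AND password)."""
    return all(check() for check in checks)

def majority_vote(checks):
    """Redundant components vote; a defective minority can be outvoted.
    Weaker than separation of privilege, which requires unanimity."""
    votes = [check() for check in checks]
    return sum(votes) > len(votes) / 2

# Example: three independent authentication components.
biometric_ok = lambda: True
token_ok     = lambda: True
password_ok  = lambda: False   # e.g., the attacker lacks the password

checks = [biometric_ok, token_ok, password_ok]
print(separation_of_privilege(checks))  # False: one check fails, access denied
print(majority_vote(checks))            # True: 2 of 3 agree, the failure is outvoted
```

The same three components produce opposite decisions under the two policies, which is exactly why conflating the terms matters.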
I learned this week that the information security world lost another of our lights in 2007: Bob Baldwin. This may have been more generally known, but a few people I contacted were also surprised and saddened by the news.
His contributions to the field were wide-ranging. In addition to his published research results he also built tools that a generation of students and researchers found to be of great value. These included the Kuang tool for vulnerability analysis, which we included in the first edition of COPS, and the Crypt-Breaker’s Workbench (CBW), which is still in use.
What follows is a (slightly edited) obituary sent to me by Bob’s wife, Anne. There was also an obituary in the fall 2007 issue of Cryptologia.
Robert W Baldwin
May 19, 1957 – August 21, 2007
Robert W. Baldwin of Palo Alto passed away at home with his wife at his side on August 21, 2007. Bob was born in Newton, Massachusetts and graduated from Memorial High School in Madison, Wisconsin and Yorktown High School in Arlington, Virginia. He attended the Massachusetts Institute of Technology, where he received BS and MS degrees in Computer Science and Electrical Engineering in 1982 and a Ph.D. in Computer Science in 1987. A leading researcher and practitioner in computer security, Bob was employed by Oracle, Tandem Computers, and RSA Security before forming his own firm, PlusFive Consulting. His most recent contribution was the development of security engineering for digital theaters. Bob was fascinated with cryptology and made frequent contributions to Cryptologia as an author, reviewer, and mentor.
Bob was a loving and devoted husband and father who touched the hearts and minds of many. He is well remembered by his positive attitude and everlasting smile. Bob is survived by his wife, Anne Wilson, two step-children, Sean and Jennifer Wilson of Palo Alto and his two children, Leila and Elise Baldwin of Bellevue, Washington. He is also survived by his parents, Bob and Janice Baldwin of Madison, Wisconsin; his siblings: Jean Grossman of Princeton, N.J., Richard Baldwin of Lausanne, Switzerland, and Nancy Kitsos of Wellesley, MA.; and six nieces and nephews.
In lieu of flowers, gifts in memory of Robert W. Baldwin may be made to a charity of the donor’s choice, to the Recht Brain Tumor Research Laboratory at Stanford Comprehensive Cancer Center, Office of Medical Development, 2700 Sand Hill Road, Menlo Park, CA 94025, Attn: Janice Flowers-Sonne, or to the loving caretakers at the Hospice of the Valley, 1510 E. Flower Street, Phoenix, AZ 85014-5656.
Last year, Adobe forced Microsoft to pull PDF creation support from Office 2007 under the threat of a lawsuit, while asking them to “charge more” for Office. What stops Adobe from interfering with OpenOffice? In January 2007, Adobe released the full PDF (Portable Document Format) specification in a bid to make PDF an ISO standard. People believe that “anyone may create applications that read and write PDF files without having to pay royalties to Adobe Systems”, but that’s not quite true. These applications must conform to the specification as decided by Adobe. Applications that are too permissive or somehow irk Adobe could possibly be made illegal, including open source ones, at least in the US. It is unclear how much control Adobe still has (obviously enough for the Yahoo deal), or will still have when and if PDF becomes an ISO standard. Being an ISO standard does not necessarily make PDF compatible with free software. If part of the point of free software is being able to change it so that it is fully loyal to you, then isn’t it a contradiction for free software to implement standards that mandate and enforce mixed loyalties?
Finally, my purchase of the full version of Adobe Acrobat for MacOS X was a usability disaster; you’ll need to apply duress to make me use Acrobat again. I say it’s time to move on to safer ground, from security, legal, and code quality perspectives, ISO standard or not.
How then can we safely transmit and receive documents that are more than plain text? HTML, PostScript, and rich text (RTF) are alternatives that have fallen into disuse in favor of PDF, for various reasons I will not analyze here. Two alternatives seemed promising, DVI files and Microsoft XPS, but a bit of research shows that both have significant shortcomings.
TeX (DVI): TeX is a typesetting system used to produce DVI (device-independent file format) files. TeX is used mostly in academia, by computer scientists, mathematicians, and UNIX enthusiasts. There are many TeX editors with various levels of sophistication; OpenOffice, for example, can export documents to .tex files, so you can even use a common WYSIWYG editor. TeX files can be created and managed on Windows, MacOS X, and Linux. TeX files do not include images but have tags referencing them as separate files, so you have to manage the images separately. Windows has DVI viewers, such as YAP and DVIWIN.
However, in my tests OpenOffice lost references to embedded images, producing TeX tags containing errors (“[Warning: Image not found]”). The PDF export of the same file worked perfectly. Even if the TeX export worked, you would still have a bunch of files to send instead of a single document. You would then need to produce a DVI file in a second step, using some other program.
Even if OpenOffice’s support for DVI were better, there are other problems. I have found many downloadable DVI documents that could not be displayed in Ubuntu using “evince”; they produced the error “Unable to open document—DVI document has incorrect format”. After installing the “advi” program (which may have installed some fonts as well), some became viewable using both evince and advi. DVI files do not support embedded fonts; if the end user does not have the correct fonts, your document will not be displayed properly.
Another issue is that of orphaned images. Images are missing from DVI downloads such as this one; at some point they were available as a separate download, but are not anymore. This is a significant shortcoming. It can be side-stepped by converting DVI documents to PDF, but that defeats our purpose.
Microsoft XPS: XPS (XML Paper Specification) documents embed all the fonts used, so XPS documents will behave more predictably than DVI ones. XPS also has the advantage of being an open specification. Despite that, there is no support for it yet in Linux; visiting Microsoft’s XPS web site and clicking on the “get an XPS viewer” link results in the message “This OS is not supported”.
It seems, however, that Microsoft may be just as intent on keeping control of XPS as Adobe is of PDF; the “community promise for XPS” contains an implicit threat should your software not comply “with all of the required parts of the mandatory provisions of the XPS Document Format”. These strings negate some of the advantages that XPS might have had over PDF.
For XPS to become competitive, it must become supported on alternative operating systems such as Linux and the BSDs. This may not happen, simply because Microsoft is actively antagonizing Linux and open source developers with vague and threatening patent claims, as well as people interested in open standards with shady lobbying moves and “voting operations” at standards organizations (Microsoft: you need public support and goodwill for XPS to “win” this one). The advantages of XPS may also not be evident to users comfortable in a world of TeX, PostScript, and no-charge PDF tools. The confusion between open formats and open standards, and the question of exactly how much control Adobe still has and will have when and if PDF becomes an ISO standard, do not help. Companies offering XPS products are also limiting their possibilities by not offering Linux versions, at least of the viewers, even without support.
In conclusion, PDF viewers have become risky examples of mixed-loyalty software. It is my personal opinion that risk-averse industries and free software enthusiasts should steer clear of the PDF standard, but there are currently no practical replacements. XPS faces extreme adoption problems, not simply due to the PDF installed base, but also due to the ill will generated by Microsoft’s tactics. I wish that DVI were enhanced with embedded fonts and images, better portability, and better integration within tools like OpenOffice, and that this became an often-requested feature for the OpenOffice folks. I don’t expect DVI handlers to be absolutely perfect (e.g., CVE-2002-0836), but the reduced feature set and absence of certain attack vectors should mean less complexity, fewer risks, and greater loyalty to the computer owner.
1. ISS, Multiple vendor products URI handling command execution, October 2007. http://www.iss.net/threats/276.html
2. Robert Daniel, Adobe-Yahoo plan places ads on PDF documents, November 2007. http://www.marketwatch.com/news/story/adobe-yahoo-partner-place-ads/story.aspx?guid=%7B903F1845-0B05-4741-8633-C6D72EE11F9A%7D
3. Bogdan Popa, Yahoo Infects Users’ Computers with Trojans - Using a simple advert distributed by Right Media, September 2007. http://news.softpedia.com/news/Yahoo-Infects-Users-039-Computers-With-Trojans-65202.shtml
4. Kurt Foss, Web site editor illustrates how Mac OS X can circumvent PDF security, March 2002. http://www.planetpdf.com/mainpage.asp?webpageid=1976
5. Nate Mook, Microsoft to Drop PDF Support in Office, June 2006. http://www.betanews.com/article/Microsoft_to_Drop_PDF_Support_in_Office/1149284222
6. Adobe Press release, Adobe to Release PDF for Industry Standardization, January 2007. http://www.adobe.com/aboutadobe/pressroom/pressreleases/200701/012907OpenPDFAIIM.html
7. Eric Schechter, Free TeX software available for Windows computers, November 2007. http://www.math.vanderbilt.edu/~schectex/wincd/list_tex.htm
8. Jonathan Allen, The wide ranging impact of the XML Paper Specification, November 2006. http://www.infoq.com/news/2006/11/XPS-Released
9. Microsoft, Community Promise for XPS, January 2007. http://www.microsoft.com/whdc/xps/xpscommunitypromise.mspx
10. Kim Haverblad, Microsoft buys the Swedish vote on OOXML, August 2007. http://www.os2world.com/content/view/14868/1/
As a beginning Linux user, I only recently realized that few people are aware, or care, that they are breaking U.S. law by using unlicensed codecs. Even fewer know that the codecs they use are unlicensed, or what to do about it. Warning dialogs (e.g., in Ubuntu) provide no practical alternative to installing the codecs, and are an unwelcome interruption to workflow. Those warnings are easily forgotten afterwards, perhaps despite good intentions to correct the situation. Due to software patents in the U.S., codecs for everything from sound to movies, such as H.264, need to be licensed, regardless of how unpalatable the law may be and of how unfair this situation is to U.S. and Canadian citizens compared to those of other countries. This affects open source players such as Totem, Amarok, MPlayer, and Rhythmbox. The CERIAS security seminars, for example, use H.264. The issue of unlicensed codecs in Linux was brought up by Adrian Kingsley-Hughes, who was heavily criticized for not knowing about, or not mentioning, fluendo.com and other ways of obtaining licensed codecs.
So, as I like Ubuntu and want to do the legal thing, I went to the Fluendo site and purchased the “mega-bundle” of codecs. After installing them, I tried to play a CERIAS security seminar and was presented with a prompt to install three packs of codecs that require licensing. Then I realized that the Fluendo set of codecs didn’t include H.264! Using Fluendo software is only a partial solution. When contacted, Fluendo said that support for H.264, AAC, and WMS will be released “soon”.
Another suggestion is using QuickTime for Windows under Wine. I was able to do this after much work; it’s far from being as simple as running Synaptic, in part because Apple’s web site was uncooperative and the latest version of QuickTime, 7.2, does not work under Wine. However, when I got it to work with an earlier version of QuickTime, it worked only for a short while. Now it just displays “Error -50: an unknown error occurred” when I attempt to play a CERIAS security seminar.
VideoLAN Player vs MPEG LA
The VideoLAN FAQ explains why VideoLAN doesn’t license the codecs and suggests contacting MPEG LA. I did just that, and was told that they were unwilling to let me pay for a personal-use license; instead, I should “choose a player from a licensed supplier (or insist that the supplier you use become licensed by paying applicable royalties)”. I wish that an “angel” (a charity?) could intercede and obtain codec licenses on users’ behalf, perhaps over the objections of the developers, but that’s unlikely to happen.
What to do
Essentially, free software users are the ball in a game of ping-pong between free software authors and licensors. Many users are oblivious to this no-man’s-land they somehow live in, but people concerned about legitimacy can easily be put off by it. Businesses in particular will be concerned about liabilities. I conclude that Adrian was right to flag the Linux codec situation. It is a handicap for computer users in the U.S. compared to those in countries where licensing codecs isn’t an issue.
One solution would be to give up Ubuntu (for example) and get a Linux distribution that bundles licensed codecs, such as Linspire (based on Ubuntu), despite the heavily criticized deal it made with Microsoft. This isn’t about being anti-Microsoft, but about divided loyalties. Free software, for me, isn’t about getting software for free, even though that’s convenient; it’s about appreciating the greater assurances that free software provides with regard to divided loyalties and the likelihood of software that is disloyal by design. Linspire may now have, or may in the future acquire, other interests besides those of its users. That the deal is part of a vague but threatening patent attack on Linux by Microsoft also makes Linspire unappealing. Linspire is cheap, so cost isn’t an issue; after all, getting the incomplete set of codecs from Fluendo ($40) cost me almost as much as the full version of Linspire ($49) would have. Regardless, Linspire may be an acceptable compromise for many businesses. Another advantage of Linspire is that it bundles a licensed DVD player as well (note that the DMCA, and DVD CCA license compliance, are separate issues from licensing codecs such as H.264).
Another possibility is to keep around an old Mac, or to use lab computers, until Fluendo releases the missing codecs. Even if CERIAS were to switch to Theora just to please me, the problem would surface again later. So, there are options, but they aren’t optimal.
As I write this, I’m sitting in a review of some university research in cybersecurity. I’m hearing about some wonderful work (and no, I’m not going to identify it further). I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas. What strikes me about these efforts—representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars—is that the vast majority of these efforts have been applied to problems we already know how to solve.
Let me recast this as an analogy in medicine. We have a crisis of cancer in the population. As a result, we are investing huge amounts of personnel effort and money into how to remove diseased portions of lungs, and administer radiation therapy. We are developing terribly expensive cocktails of drugs to treat the cancer…drugs that sometimes work, but make everyone who takes them really ill. We are also investing in all sorts of research to develop new filters for cigarettes. And some funding agencies are sponsoring workshops to generate new ideas on how to develop radical new therapies such as lung transplants. Meanwhile, nothing is being spent to reduce tobacco use; if anything, the government is one of the largest purchasers of tobacco products! Insane, isn’t it? Yes, some of the work is great science, and it might lead to some serendipitous discoveries to treat liver cancer or maybe even heart disease, but it still isn’t solving the underlying problems. It is palliative, with an intent to be curative—but we aren’t appropriately engaging prevention!
Oh, and second-hand smoke endangers many of us, too.
We know how to prevent many of our security problems—least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.
Instead of building trustworthy systems (note—I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.
We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.
Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.
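As a small, hypothetical illustration of what such intrinsic features buy (using Python only as a convenient stand-in for any bounds-checked language): an off-by-one write is refused by the runtime instead of silently corrupting adjacent memory, as it can in C.

```python
# Python bounds-checks every indexed access, so a classic off-by-one
# error surfaces immediately as an exception rather than as silent
# memory corruption.

buffer = [0] * 8           # an 8-element buffer, valid indices 0..7

try:
    buffer[8] = 42         # writing one element past the end
except IndexError:
    print("out-of-range write refused by the runtime")
```

No separate overflow-scanning tool is needed to catch this class of bug; the language rules it out by construction.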
And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used—and not used. Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose. As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them. If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.
I’m not trying to claim there aren’t worthwhile topics for open research—there are. I’m simply disheartened that we are not using so much of what we already know how to do, and continue to strive for patches and add-ons to make up for it.
In many parts of India, cows are sacred and cannot be harmed. They wander everywhere in villages, with their waste products fouling the streets and creating a public health problem. However, the only solution that local people are able to visualize is to hire more people to shovel effluent. Meanwhile, the cows multiply, the people feed them, and the problem gets worse. People from outside are able to visualize solutions, but the locals don’t want to employ them.
Metaphorically speaking, we need to put down our shovels and get rid of our sacred cows—maybe even get some recipes for meatloaf.
Let’s start using what we know instead of continuing to patch the broken, insecure, and dangerous infrastructure that we currently have. Will it be easy? No, but neither is quitting smoking! The results, though, are ultimately going to provide us some real benefit, if we can exert the requisite willpower.
[Don’t forget to check out my tumble log!]
Over the past decade or so, the entertainment industry has supported a continuing series of efforts to increase the enforcement of copyright laws, a lengthening of copyright terms, and very significant enforcement efforts against individuals. Included in this mess was the DMCA—the Digital Millennium Copyright Act—which has a number of very technology-unfriendly aspects.
One result of this copyright madness is lawsuits against individuals found to have file-sharing software on their systems, along with copies of music files. Often the owners of these systems don’t even realize that their software is publishing the music files on their systems. It also seems the case that many people don’t understand copyright and do not realize that downloading (or uploading) music files is against the law. Unfortunately, the entertainment industry has chosen to seek draconian remedies from individuals who may not be involved in more than incidental (or accidental) sharing of files. One recent example is a case where penalties have been declared that may bankrupt someone who didn’t set out to hurt the music industry. I agree with comments by Rep. Rick Boucher that the damages are excessive, even though (in general) the behavior of file sharers is wrong and illegal.
Another recent development is a provision in the recently introduced “College Access and Opportunity Act of 2007” (HR 3746; use Thomas to find the text). Sec. 484(f) contains language that requires schools to put technology in place to prevent copyright violations, and to inform the Secretary of Education what those plans and technologies are. This is ridiculous, as it singles out universities instead of ISPs in general, and forces them to expend resources on misbehavior by students that they are already otherwise attempting to control. It is unlikely to make any real dent in the problem because it doesn’t address the underlying causes. Even more to the point, no existing technology can reliably detect only those shared files whose copyright prohibits such sharing. Encryption, inflation/compression, translation into other formats, and transfer in discontinuous pieces can all be employed to fool monitoring software. Instead, it is simply another cost and burden on higher ed.
We need to re-examine copyright. One aspect in particular that we need to examine is “fair use.” The RIAA, MPAA, and similar associations are trying to lock up content so that any use at all requires paying them additional funds. This is clearly silly, but their arguments to date have been persuasive to legislators. However, the traditional concept of “fair use” is important to keep intact, especially for those of us in academia. A recent report outlines that fair use is actually quite important: approximately one-sixth of the US economy is related to companies and organizations that involve “fair use.” That is well worth noting. Further restrictions on copyright, and particularly on fair use, are clearly not in society’s best interest.
Copyright has served—and continues to serve—valid purposes. However, with digital media and communications it is necessary to rethink the underlying business models. When everyone becomes a criminal, what purpose does the law serve?
Also, check out my new “tumble log.” I update it with short items and links more often than I produce long posts here.
I was recently interviewed by Gary McGraw for his Silver Bullet interview series. He elicited my comments on a number of topics, including security testing, ethical hacking, and why security is difficult. If you like any of my blog postings, you might find the interview of some interest. If not, you might find some of the other interviews of interest; mine was #18 in the series.
A news story that hit the wires last week was that someone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior. The press and blogosphere seemed to treat this as surprising. They shouldn’t have.
I have been speaking and writing for nearly two decades on this general issue, as have others (William Hugh Murray, a pioneer and thought leader in security, is one who comes to mind). Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids. First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit. Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled. (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way—that behavior does not establish the bona fides for real expertise. If anything, it establishes a disregard for the community it endangers.)
More importantly, people who demonstrate a questionable level of trustworthiness and judgment at any point by committing criminal acts present a risk later on. Certainly it is possible that they will learn the error of their ways and reform. However, it is also the case that they may slip later and revert to their old ways. Putting some of them in positions of trust with access to items of value is almost certainly too much temptation. This has been established time and again in studies of criminals of all types, especially those who commit fraud. So why would a prudent manager take a risk when better alternatives are available?
Even worse, circulating stories of criminals who end up as highly-paid consultants are counterproductive, even if they are rarely true. That is the kind of story that may tempt some without strong ethics to commit crimes as a shortcut to fame and riches. Additionally, it is insulting to the individuals who work hard, study intently, and maintain a high standard of conduct in their careers—hiring criminals basically states that the honest, hardworking real experts are fools. Is that the message we really want to put forward?
Luckily, most responsible managers now understand, even if the press and general public don’t, that criminals are simply that: criminals. They may have served their sentences, which now makes them former criminals, but not innocent ones. Pursuing criminal activity is not, and should not be, a job qualification or career path in civilized society. There are many historical precedents we can turn to, including the hiring of pirates as privateers and of train robbers as train guards. Some took the opportunity to go straight, but the instances of those who abused the trust and made off with what they were protecting illustrate that it is a big risk to take. It is also something we have learned to avoid, and it is long past time for those of us in computing to get with the program.
So, what of the argument that there aren’t enough real experts, or they cost too much to hire? Well, what is their real value? If society wants highly-trained and trustworthy people to work in security, then society needs to devote more resources to support the development of curriculum and professional standards. And it needs to provide reasonable salaries to those people, both to encourage and reward their behavior and expertise. We’re seeing more of that now than a dozen years ago, but it is still the case that too many managers (and government officials) want security on the cheap, and then act surprised when they get hacked. I suppose they also buy their Rolex and Breitling watches for $50 from some guy in a parking lot and then act surprised and violated when the watch stops a week later. What were they really expecting?
Lots of new papers were added this week—more than we can list here. Check the Reports and Papers Archive for more.