Posts in Kudos, Opinions and Rants

Who ya gonna call?

This morning I received an email, sent (I assume) to a list of people. The subject of the email was “Computer Hacker’s service needed” and the contents indicated that the sender was seeking someone to trace an enclosed email back to its sender. The email in question? The pernicious spam email purporting to be from someone who has been given a contract to murder the recipient, but who, on reflection, will not do the deed if offered a sum of money.

This form of spam is well-known in most of the security and law enforcement communities, and there have been repeated advisories and warnings issued to the public. For instance, Snopes has an article on it because it is so widespread as to have urban legend status. The scam dates back at least to 2006, and is sometimes made to seem more authentic by including some personalized information (usually taken from online sources). A search using the terms “hitman scam spammer” returns over 200,000 links, most of the top ones being stories in news media and user alert sites. The FBI has published several alerts about this family of frauds, too. This is not a rare event.

However, it is not the author’s having missed those stories that prompts this post. After all, none of us can be aware of everything being done online.

Rather, I am troubled that someone would ostensibly take the threat seriously, and as a follow-up, seek a “hacker” to trace the email back to its sender rather than report it to law enforcement authorities.

One wonders: if the same person were to receive the same note on paper, by surface mail, would he seek the services of someone adept at breaking into mailboxes to seek out the author? Even if he did that, what would it accomplish? Purportedly, the author of the note is a criminal with some experience and compatriots (these emails, and this one in particular, always refer to a gang that is watching the recipient). What the heck is the recipient going to do with someone—and his gang—who probably doesn’t live anywhere nearby?

Perhaps the “victim” might know (or suspect) it is a scam, but is trying to aid the authorities by tracing the email? But why spend your own money to do something that law enforcement is perhaps better equipped to do? Plus, a “hacker” is not necessarily going to use legal methods that will allow the authorities to use the results. Perhaps even more to the point, the “hacker” may not want to be exposed to the authorities—especially if they regularly break the law to find people!

Perhaps the victim already consulted law enforcement, was told it was a scam, but doesn’t believe it? Well, some additional research should be convincing. Plus, the whole story simply isn’t credible. If the victim really does have a streak of paranoia and a guilty conscience, then perhaps this is plausible. In that case, however, whoever is hired would likewise be viewed with suspicion, and any report made is going to be doubted by the victim. So, there is no real closure here.

Even worse, if a “hacker” is found who is willing to break the rules and the laws to trace back email, what is to say that he (or she) isn’t going to claim to have found the purported assassin, report that he’s real and that the price has gone up, and offer to serve as an intermediary? Once the money is paid, the problem is pronounced “fixed.” This is a classic form of scam, too—usually played on the gullible by “mystics” who claim that the victim is cursed and can only be cured by a complicated ritual involving a lot of money offered to “the spirits.”

Most important—if someone is hired, and that person breaks the law, then the person hiring that “hacker” can also be charged under the law. Hiring someone to break the law is illegal. And having announced his intentions to this mailing list, the victim has very limited claims of ignorance at this point.

At the heart of this, I am simply bewildered how someone would attempt to find a “hacker”—whose skill set would be unknown, whose honesty is probably already in question, and whose allegiances are uncertain—to track down the source of a threat rather than go to legitimate law enforcement. I can’t imagine a reasonable person (outside of the movies) receiving a threatening letter or phone call then seeking to hire a stranger to trace it back rather than calling in the authorities.

Of course, that is why these online scams—and other scams such as the “419” scams—continue to work: people don’t think to contact appropriate authorities. And when some fall for it, it encourages the spammers to keep on—increasing the pool of victims.

(And yes, I am ignoring the difficulty of actually tracing email back to a source: that isn’t the point of this particular post.)

 

Barack Obama, National Security, and Me

[Update 7/17: Video of the Senator’s opening remarks and the panel session (2 parts) are now online at this site. I have also added a few links.]


This story (somewhat long) is about Senator Barack Obama’s summit session at Purdue University today (Wednesday, July 16) on security challenges for the 21st century. I managed to attend, took notes, and even got my name mentioned. Here’s the full story.

Prelude

Monday night, I received email from a colleague here at Purdue asking if I could get her a ticket to see Senator Obama on campus. I was more than a little puzzled — I knew of no visit from the Senator, and I especially didn’t know why she thought I might have a ticket (although there are people around here who frequently ask me for unusual things).

Another exchange of email resulted in the discovery that the Senator was coming to Purdue today (the 16th of July) with a panel to hold a summit meeting on security issues for the 21st century. Cyber security was going to be one of the topics. The press was told that Purdue was chosen because of the leading role our researchers have in various areas of public safety and national security — including the leading program in cyber security — although some ascribed political motives as the primary reason for the location.

I found it rather ironic that security would be given as the reason for being at Purdue, and yet those of us most involved with those security centers had not been told about the summit or given invitations. It appears that the organizers gave a small number of tickets to the university, and those were distributed to administrators rather than faculty and students working in the topic areas.

I found this all very interesting, and expressed as much in email to several friends and colleagues — including several who I knew had some (indirect) link to the Senator’s campaign. I had faint hope of getting a ticket, but was more interested in simply getting word back that there had been a misfire in the organization of the event.

Late last night (I was in the office until 6:30) I got a call from someone associated with the Obama campaign. He apologized for the lack of an invitation, and informed me that a ticket would be waiting for me at the desk the next day.

The Event

I went over to the Purdue Union at 11:30; the official event was to start at 12. I encountered a number of Purdue administrators in the crowd. Security was apparent for the event, including metal detectors at the door run by uniformed officers, some of whom I believe were with the Secret Service uniformed division. The officers everywhere were polite and cheerful, but watchful. I found a seat in the back of the North Ballroom with about 500 other guests…and nearly as many members of the press, entourage, ushers, protection detail, and so on.

I won’t try to summarize everything said by the Senator and panel — you can find the full video here (in two parts). I will provide some impressions of specific things that were said.

The event started almost on time (noon) with Senator Evan Bayh introducing Senator Barack Obama. Sen. Obama then read from a prepared set of remarks. His comments really resonated with the crowd (I encourage you to follow the link to read them). His comment about how we have been “fighting the last war” is particularly appropriate.

He made some very nice comments about Senator Richard Lugar, the other Senator from Indiana. Senator Lugar is a national asset in foreign policy, and both Senators Obama and Bayh (and former Senator Nunn) had nothing but good things to say about him — and all have worked with him on disarmament and peace legislation. One of the lighter moments was when Senator Obama said that Senator Lugar was a great man in every way except that he was a Republican!

Early in his statement, he deviated from his script as reproduced in the paper, and dropped my name as he was talking about cyber security. I was very surprised. He referred to me as one of the nation’s leading experts in cyber security when he mentioned Purdue being in the lead in this area. Wow! I guess someone I sent my email to pushed the right button (although my colleagues and our students deserve the recognition, as much or more than I do).

His further comments on officially designating the cyber infrastructure as a strategic asset are important for policy & legal reasons, and his comments on education and research also seemed right on. It was a strong opening, and there was obviously a lot in his remarks for a number of different audiences, including the press.

Panel Part I

The first 1/3 of the panel discussion was on nuclear weapons issues. The experts present to talk on the issue were (former) Senator Sam Nunn (who joked that in Indiana everyone thought his last name was actually Nunn-Lugar), Senator Bayh, and Dr. Graham Allison, the director of the Belfer Center at Harvard. There was considerable discussion about the proliferation of nuclear materials, the need for cooperation with other countries rather than ignoring them (viz. North Korea and Iran), and the control of fissionable material.

There were some statements that I found to be a bit of hyperbole: for instance, the statement that a single bomb could be made by terrorists to destroy a whole city. Not to minimize the potential damage, but without sophisticated nation-state assistance and machining, a crude fission weapon is about all that a terrorist group could manage, and it wouldn’t be that large or that easy to build. A few tens of kilotons of fission explosion could definitely ruin your day, but a detonation at ground level wouldn’t destroy a whole city of any size. (Lafayette, IN would be mostly destroyed by one, but that isn’t a major city.) Plutonium is too dangerous to handle, so over 100 pounds of U-235 (or U-233) would be needed, machined appropriately, for such a weapon. Without accelerators and specially shaped charges & containers, getting fission fast enough and long enough is difficult and… well, there is a very serious threat, and the nuances may be lost on the average crowd, but the focus on terrorists building a significant bomb seemed wrong to me.

There were some excellent remarks made about opportunity cost. For instance, the one figure that stood out was that we could fully fund the Nunn-Lugar initiative, and some other plans to secure loose nuclear materials around the world over the next 4 years, by spending the equivalent of 1 month of what we now spend in Iraq. The war in Iraq is breeding terrorists and making enemies for the US, while securing loose nukes would help protect generations to come. As both a taxpayer and a parent (as well as someone immersed in defense issues), I know where I would prefer to see the money spent!

One other number given is that currently less than 1/4 of 1% of the defense budget is spent on containing nuclear materials, despite it being a declared priority of President Bush. Professor Allison said that despite grade inflation at Harvard, the President still gets an “F” in this area.

Another interesting factoid stated was that about 10% of the lights in the US are powered by electricity generated from reprocessed fissile material taken from Russian nukes rendered safe under the Nunn-Lugar initiative. That sounds high to me given the amount of nuclear power generated in the US, but even if off by a factor of 10, darned impressive.

Panel Part II

The second part of the panel was on bio weapons. The panelists were Dr. Tara O’Toole of the Center for Biosecurity at Pitt, and Dr. David Relman of Stanford. Their discussion was largely what I expected, about how bio-weapons can be produced by rogue actors as well as rogue states. They made the usual references to plague (with a funny interchange about prairie dogs being carriers, and keeping the Senator’s campaign away from them), anthrax and Ebola.

Again, there was a bit of exaggeration in the dialog. It was pointed out that there has still been no apprehension of the perpetrator of the 2001 anthrax attacks. It was then stated that the anthrax in the envelope sent to Senator Daschle was enough to kill a billion people. No mention was made of how impossible it would be to meter and deliver such doses in a manner that could actually achieve that. In fact, there was no discussion of the difficulty of weaponizing most biological agents, which limits their use as a targeted weapon over a large area. And no mention at all was made of chemical weapons.

The conclusion here was that investment in better research and international cooperation was key. The statement was made that better integration of electronic health records would be important, too, although some studies I recall indicate that their utility is probably not so great as some would hope. It was also concluded that benefits in faster medical response and better vaccine production would help in non-crisis times as well. I don’t think we can argue too much with that, although the whole issue of how we pay for medicine and health issues looms large.

Panel Part III

The last panel featured Alan Wade, former CIO of the CIA, and Paul Kurtz of Good Harbor Consulting, speaking on the cyber threat. I’ve known Paul for years, and he is a great person to talk on these issues.

The fact that cyber technology is universal and ubiquitous was highlighted. So was the asymmetry inherent in the area. Some mention was made about how nothing has been done by the current administration until very recently. Sadly, that is clearly the case. The National Strategy in 2002, the PITAC report in 2005, and the CSTB report in 2007 (to name 3 examples) all generated no response. As a member of the PITAC that helped write the 2005 report, I was shocked at the lack of Federal investment and the inaction we documented (I knew it was bad, but didn’t realize until then how bad it was); the reaction from the White House was to dissolve the committee rather than address the real problems highlighted in the report. As one of today’s panelists put it — the current administration’s response has been “…late, fragmented, and inadequate.” Amen.

I was disappointed that so much was said about terrorism and denial of service. Paul did join in near the end to point out that alteration of critical data was a big concern, but there was no mention of alteration of critical services, theft of intellectual property, threats to privacy, or other more prominent threats. Terrorism online is not the biggest threat we face, and we have a major crisis in progress that doesn’t involve denial of service. We need to ensure that our policymakers understand the scope of the threat.

On the plus side, Senator Obama reiterated how he sees cyber as a national resource and critical infrastructure. He wants to appoint a national coordinator to help move protection forward. (If he is elected I hope he doesn’t put the position in DHS!)

Paul pointed out the need for more funds for education and research. He also made a very kind remark, mentioning me by name, and saying how we were a world-class resource built with almost no funding. That’s not quite true, but sadly not far off. I have chafed for years at how much more we could do with even modest on-going support that wasn’t tied to specific research projects….

Conclusions

I was really quite impressed with the scope of the discussion, given the time and format, and with the expertise of the panelists. Senator Obama was engaged, attentive, and several of his comments and questions displayed more than a superficial knowledge of the material in each area. Given our current President referring to “the Internets” and Senator McCain cheerfully admitting he doesn’t know how to use a computer, it was refreshing and hopeful that Senator Obama knows what terms such as “fission” and “phishing” mean. And he can correctly pronounce “nuclear”! His comments didn’t appear to be rehearsed — I think he really does “get it.”

(Before someone picks on me too much…. I believe Senator McCain is an honorable man, a dedicated public servant, and a genuine American hero. I am grateful to have people like him intent on serving the public. However, based on his comments to the press and online, I think he is a generation out of date on current technology and important related issues. That isn’t a comment related to his age, per se, but to his attitude. I’d welcome evidence that I am mistaken.)

Senator Obama is a great orator. I also noticed how his delivery picked up speed for the press (his opening remarks) but became more conversational during the panel.

Senator Obama kept bringing the panel back to suggestions about what could be done to protect the nation. I appreciated that focus on the goal. He also kept returning to the idea that problems are better solved early, and that investments made without an imminent threat are a form of insurance — the cost of clean-up is far greater than that of some prudent investment early on. He also repeatedly mentioned the need to be competitive in science and technology, and how important support for education is — and will be.

After the session was over, I didn’t get a chance to meet any of the campaign staff, or say hello to Paul. I did get about 90 seconds with Senator Bayh and invited him to visit. After my name had been mentioned about 3 times by panelists and Senator Obama, he sort of recognized it when I introduced myself. We’ll see if he follows up. I’ve visited his office and Senator Lugar’s, repeatedly, and neither has ever bothered to follow up to see what we’re doing or whether they could help.

Several people in the audience commented on my name being mentioned. I’m more than a little embarrassed that they didn’t refer to CERIAS and my colleagues; in fact, I was the only Purdue person mentioned by name during the entire 2 hours, and then it happened multiple times. I’m not sure if that’s good or not — we’ll see. However, as P.T. Barnum said, there’s no such thing as bad publicity … so long as they spell my name correctly. None of the local or national press seem to have picked it up, however, so even spelling isn’t an issue.

The press, in fact, hasn’t seemed to focus on the substance of the summit at all. I’ve read about 15 accounts so far, and all have focused on his choice of VP or the status of the campaign. It is so discouraging! These are topics of great importance that are not well understood by the public, and the press simply ignores them. Good thing Angelina Jolie gave birth earlier in the week, or the summit wouldn’t have made the press at all.

I wish more of the population would take the time to listen to prolonged discussions like this. Fifteen-second sound bites too often serve as the sole input for most voters. And even then, too many are insufficiently educated (or motivated) to understand even the most basic concepts. I wonder if more than 5 people will even bother to read a post this long — most people want blogs a single page in length.

As for my own political opinions and voting choices, well, I’m not going to use an official Purdue system to proselytize about items other than cyber security, education, research and Purdue. You can certainly ask me if you see me. Now, if only I had confidence in the electronic voting equipment that so many of us are going to be forced to use in November (hint: I’m chair of the USACM).

Last Tongue-in-Cheek Word

And no, I’m not particularly interested in the VP position.

Virtualization Is Successful Because Operating Systems Are Weak

It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems.  Virtualization supports functions such as these:

  1. Availability
    • Minimized downtime for patching OSes and applications
    • Restart a crashed OS or server

  2. Scalability
    • More or different images as demand changes

  3. Isolation and compartmentalization
  4. Better hardware utilization
  5. Hardware abstraction for OSes
    • Support legacy platforms

Compare it to the list of operating system duties:

  1. Availability
    • Minimized downtime for patching applications
    • Restart crashed applications

  2. Scalability
    • More or different processes as demand changes

  3. Isolation and compartmentalization
    • Protected memory
    • Accounts, capabilities

  4. Better hardware utilization (with processes)
  5. Hardware abstraction for applications

The similarity suggests that virtualization solutions compete with operating systems.  I now believe that part of their success must be because operating systems do not satisfy these needs well enough, even setting aside the capability to run legacy or entirely different operating systems simultaneously.  Typical operating systems lack security, reliability and ease of maintenance.  They have drivers in kernel space;  Windows Vista thankfully now supports user-space drivers, and Linux is moving in that direction.  The complexity is staggering.  This is reflected in the security guidance:  hardening guides and “benchmarks” (essentially evaluations of configuration settings) are long and complex.  The attempt to solve the federal IT maintenance and compliance problem created the SCAP and XCCDF standards, which are currently ambiguously specified, buggy and very complex.  The result of all this is intensive, stressful and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.

What it looks like is that we have sinking boats, so we’re putting them inside a bigger, more powerful boat, virtualization.  In reality, virtualization typically depends on another, full-blown operating system. 
VMware ESX Server runs its own OS with drivers.  Xen, and offerings based on it, have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation).  Microsoft’s “Hyper-V” requires a full-blown Windows operating system to run it.  So what we’re really doing is exchanging an untrusted OS for another one that we are supposed to trust more, for some reason.  This other OS also needs patches, configuration and maintenance.  Now we have multiple OSes to maintain!  What did we gain?  We don’t trust OSes, but we trust “virtualization” that depends on more OSes?  At least ESX is “only” 50 MB, simpler and smaller than the others, but the number of defects per MB of binary code, as measured by patches issued, is not convincing.

I’m now not convinced that a virtualization solution + guest OS is significantly more secure or functional than just one well-designed OS could be, in theory.  Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don’t trust operating systems enough to let them stand on their own.  The practice of wiping and reinstalling an OS after an application or an account is compromised, or deploying a new image by default suggests that there is little trust in the depth provided by current OSes. 

As for ease of management and availability vs. patching, I don’t see why operating systems couldn’t be managed in a smart manner just as ESX is, migrating applications as necessary.  ESX is an operating system anyway…  I believe that all the special things a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done just as well by a trustworthy, properly designed OS;  there may be a thesis or two in figuring out how to implement them back in an operating system.

What virtualization vendors are really selling is a clever way to smoothly replace one operating system with another. This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista:  virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest.  Of course, even with a better OS we’d still need virtualization for testbeds like ReAssure, and for legacy applications.  Perhaps ReAssure could even help test new, better operating systems.
(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS symposium).

Related reading:
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level.  ACM Operating Systems Review, 41
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure?  Computer, 39

Confusion of Separation of Privilege and Least Privilege

Least privilege is the idea of giving a subject or process only the privileges it needs to complete a task.  Compartmentalization is a technique for separating code into parts to which least privilege can be applied, so that if one part is compromised, the attacker does not gain full access.  Why does this get confused all the time with separation of privilege?  Separation of privilege is breaking up a *single* privilege amongst multiple, independent components or people, so that agreement among multiple parties, or collusion, is necessary to perform an action (e.g., dual signature checks).  So, if an authentication system has various biometric components, a component that evaluates a token, and another component that evaluates some knowledge or capability, and all have to agree for authentication to occur, then that is separation of privilege.  It is essentially a logical “AND” operation;  in its simplest form, a system would check several conditions before granting approval for an operation.  Bishop uses the example of “su” or “sudo”:  a user (or attacker of a compromised process) needs to know the appropriate password, and the user needs to be in a special group.  A related, but not identical, concept is that of majority voting systems.  Redundant systems have to agree, hopefully outvoting a defective system.  If there were no voting, i.e., if all of the systems always had to agree, it would be separation of privilege.  OpenSSH’s UsePrivilegeSeparation option is *not* an implementation of privilege separation by that definition;  it simply runs compartmentalized code using least privilege on each compartment.
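The “AND” nature of separation of privilege can be sketched in a few lines of Python. This is a toy illustration only; the checker functions and the factor values are hypothetical stand-ins, not a real authentication API:

```python
# Toy sketch of separation of privilege: an action is approved only if
# several *independent* checks all agree (a logical AND). All names and
# values here are hypothetical, for illustration only.

def biometric_ok(user):
    # Stand-in for a fingerprint match against enrolled data.
    return user.get("fingerprint") == "enrolled-print"

def token_ok(user):
    # Stand-in for validating a hardware token's one-time code.
    return user.get("token_code") == "123456"

def knowledge_ok(user):
    # Stand-in for a password or passphrase check.
    return user.get("password") == "correct horse"

def authenticate(user):
    # Separation of privilege: every independent component must agree.
    checks = (biometric_ok, token_ok, knowledge_ok)
    return all(check(user) for check in checks)

alice = {"fingerprint": "enrolled-print",
         "token_code": "123456",
         "password": "correct horse"}
mallory = {"fingerprint": "enrolled-print",   # one stolen factor...
           "token_code": "000000",
           "password": "guess"}

print(authenticate(alice))    # all three independent checks agree
print(authenticate(mallory))  # a single compromised factor is not enough
```

Note the contrast with a majority-voting scheme: replacing `all(...)` with something like `sum(check(user) for check in checks) >= 2` would give redundancy with voting, while requiring that every independent check agree is what makes this separation of privilege.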

Another untimely passing

[tags]obituary,cryptography,Bob Baldwin,kuang, CBW,crypt-breaker’s workbench[/tags]

I learned this week that the information security world lost another of our lights in 2007: Bob Baldwin. This may have been more generally known, but a few people I contacted were also surprised and saddened by the news.

His contributions to the field were wide-ranging. In addition to his published research results he also built tools that a generation of students and researchers found to be of great value. These included the Kuang tool for vulnerability analysis, which we included in the first edition of COPS, and the Crypt-Breaker’s Workbench (CBW), which is still in use.

What follows is a (slightly edited) obituary sent to me by Bob’s wife, Anne. There was also an obituary in the fall 2007 issue of Cryptologia.

Robert W Baldwin

May 19, 1957 – August 21, 2007

Robert W. Baldwin of Palo Alto passed away at home with his wife at his side on August 21, 2007. Bob was born in Newton, Massachusetts and graduated from Memorial High School in Madison, Wisconsin and Yorktown High School in Arlington, Virginia. He attended the Massachusetts Institute of Technology, where he received BS and MS degrees in Computer Science and Electrical Engineering in 1982 and a Ph.D. in Computer Science in 1987. A leading researcher and practitioner in computer security, Bob was employed by Oracle, Tandem Computers, and RSA Security before forming his own firm, PlusFive Consulting. His most recent contribution was the development of security engineering for digital theaters. Bob was fascinated with cryptology and made frequent contributions to Cryptologia as an author, reviewer, and mentor.

Bob was a loving and devoted husband and father who touched the hearts and minds of many. He is well remembered by his positive attitude and everlasting smile. Bob is survived by his wife, Anne Wilson, two step-children, Sean and Jennifer Wilson of Palo Alto and his two children, Leila and Elise Baldwin of Bellevue, Washington. He is also survived by his parents, Bob and Janice Baldwin of Madison, Wisconsin; his siblings: Jean Grossman of Princeton, N.J., Richard Baldwin of Lausanne, Switzerland, and Nancy Kitsos of Wellesley, MA.; and six nieces and nephews.

In lieu of flowers, gifts in memory of Robert W. Baldwin may be made to a charity of the donor’s choice, to the Recht Brain Tumor Research Laboratory at Stanford Comprehensive Cancer Center, Office of Medical Development, 2700 Sand Hill Road, Menlo Park, CA 94025, Attn: Janice Flowers-Sonne, or to the loving caretakers at the Hospice of the Valley, 1510 E. Flower Street. Phoenix, AZ 85014-5656.

 

Looking for Trustworthy Alternatives to Adobe PDFs

There was a day when PDFs were the safe, portable alternative to Microsoft Word documents.  There was no chance of macro-virus infections, and emails to Spaf with PDFs didn’t bounce back as they did if you sent him a Word document.  It then became clear that PDFs had adopted mixed loyalties by locking features down and phoning home.  Embedded content caused security issues in PDF viewers (CVE-2007-0047, CVE-2007-0046, CVE-2007-0045, CVE-2005-1306, CVE-2004-1598, CVE-2004-0194, CVE-2003-0434), including a virus using JavaScript as a distribution vector (CVE-2003-0284).  Can you call a document viewer safe when it stands in such company as Skype, Mozilla Firefox, Thunderbird, Netscape Navigator, Microsoft Outlook, and Microsoft Outlook Express [1], with a CVSS score above 9 (CVE-2007-5020)?  How about PDFs that can dynamically retrieve Yahoo ads over the internet [2], when Yahoo has recently been tricked into distributing trojans in advertisements [3]?  Fully functional PDF viewers are now about as safe and loyal (under your control) as your web browser with full scripting enabled.  That may be good enough for some people, but it clearly falls short for risk-averse industries.  It is not enough to fix vulnerabilities quickly;  people saying that there’s no bug-free software are also missing the point.  The point is that it is desirable to have a conservative but functional-enough document viewer that does not paint a bullseye on itself by attempting to do too much and be everything to everyone.  This can be stated succinctly as “avoid unnecessary complexity” and “be loyal to the computer owner”.

Whereas it might be possible to use a PDF viewer with limited functionality and not supporting attack vectors, the format has become tainted—in the future more and more people will require you to be able to read their flashy PDF just as some webmasters now deny you access if you don’t have JavaScript enabled.  Adobe has patents on PDF and is intent on keeping control and conformance to specifications;  Apple’s MacOS X PDF viewer (“Preview”) initially allowed printing of secured PDFs to unsecured PDFs [4].  That was quickly fixed, for obvious reasons.  This is as it should be, but it highlights that you are not free to make just any application that manipulates PDFs.

Last year Adobe forced Microsoft to pull PDF creation support from Office 2007 under the threat of a lawsuit, while asking them to “charge more” for Office [5].  What stops Adobe from interfering with OpenOffice?  In January 2007, Adobe released the full PDF (Portable Document Format) specification in a bid to make PDF an ISO standard [6].  People believe that “anyone may create applications that read and write PDF files without having to pay royalties to Adobe Systems”, but that’s not quite true.  These applications must conform to the specification as decided by Adobe.  Applications that are too permissive or that somehow irk Adobe could possibly be made illegal, including open source ones, at least in the US.  It is unclear how much control Adobe still has (obviously enough for the Yahoo deal) and will retain when and if PDF becomes an ISO standard.  Being an ISO standard does not necessarily make PDF compatible with free software.  If part of the point of free software is being able to change it so that it is fully loyal to you, then isn’t it a contradiction for free software to implement standards that mandate and enforce mixed loyalties?

Finally, my purchase of the full version of Adobe Acrobat for MacOS X was a usability disaster;  you’ll need to apply duress to make me use Acrobat again.  I say it’s time to move on to safer ground, from security, legal, and code quality perspectives, ISO standard or not. 

How then can we safely transmit and receive documents that are more than plain text?  HTML, PostScript, and Rich Text Format (RTF) are alternatives that have fallen into disuse in favor of PDF, for various reasons I will not analyze here.  Two alternatives seemed promising, DVI files and Microsoft XPS, but a bit of research shows that they both have significant shortcomings.

TeX (DVI): TeX is a typesetting system used to produce DVI (device-independent file format) files.  TeX is used mostly in academia, by computer scientists, mathematicians, and UNIX enthusiasts.  There are many TeX editors with various levels of sophistication;  for example, OpenOffice can export documents to .tex files, so you can even use a common WYSIWYG editor.  TeX files can be created and managed on Windows [7], MacOS X, and Linux.  TeX files do not include images but have tags referencing them as separate files, so you have to manage them separately.  Windows has DVI viewers, such as YAP and DVIWIN.

However, in my tests OpenOffice lost references to embedded images, producing TeX tags containing errors (“[Warning: Image not found]”).  The PDF export of the same file worked perfectly.  Even if the TeX export worked, you would still have a bunch of files instead of a single document to send.  You would then need to produce a DVI file in a second step, using some other program.

Even if OpenOffice’s support of DVI were better, there are other problems.  I have found many downloadable DVI documents that could not be displayed in Ubuntu using “evince”;  they produced the error “Unable to open document—DVI document has incorrect format”.  After installing the “advi” program (which may have installed some fonts as well), some became viewable using both evince and advi.  DVI files do not support embedded fonts;  if the end user does not have the correct fonts, your document will not be displayed properly.
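Some of those “incorrect format” failures can be diagnosed without a viewer at all.  A standard DVI file begins with the `pre` opcode (byte value 247) followed by the format identifier 2, so a two-byte check already separates plausible DVI files from mislabeled ones.  This is a trivial helper of my own, not part of evince or advi:

```python
# Minimal DVI header sanity check: a standard DVI file starts with the
# `pre` opcode (byte 247) followed by the format identifier byte 2.
def looks_like_dvi(data: bytes) -> bool:
    return len(data) >= 2 and data[0] == 247 and data[1] == 2
```

A file that fails this check was never a standard DVI file to begin with; one that passes but still won’t display is more likely a font or version problem in the viewer.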

Another issue is that of orphaned images.  Images are missing from DVI downloads such as this one;  at some point they were available as a separate download, but aren’t anymore.  This is a significant shortcoming, which can be side-stepped by converting DVI documents to PDF;  however, that defeats our purpose.

Microsoft XPS: XPS (XML Paper Specification) documents embed all the fonts used, so XPS documents will behave more predictably than DVI ones.  XPS also has the advantage that

“it is a safe format. Unlike Word documents and PDF files, which can contain macros and JavaScript respectively, XPS files are fixed and do not support any embedded code. The inability to make documents that can literally change their own content makes this a preferable archive format for industries where regulation and compliance is a way of life” [8].

Despite XPS being an open specification, there is no support for it yet in Linux.  Visiting Microsoft’s XPS web site and clicking on the “get an XPS viewer” link results in the message “This OS is not supported”.

It seems, however, that Microsoft may be just as intent on keeping control of XPS as Adobe is of PDF;  the “community promise for XPS” contains an implicit threat should your software not comply “with all of the required parts of the mandatory provisions of the XPS Document Format” [9].  These attached strings negate some of the advantages that XPS might have had over PDF.

For XPS to become competitive, it must be supported on alternative operating systems such as Linux and the BSDs.  This may not happen, simply because Microsoft is actively antagonizing Linux and open source developers with vague and threatening patent claims, as well as people interested in open standards with shady lobbying moves and “voting operations” [10] at standards organizations (Microsoft: you need public support and goodwill for XPS to “win” this one).  The advantages of XPS may also not be evident to users comfortable in a world of TeX, PostScript, and no-charge PDF tools.  The confusion about open formats vs. open standards, and about exactly how much control Adobe has and will retain if PDF becomes an ISO standard, does not help.  Companies offering XPS products are also limiting their prospects by not offering Linux versions, at least of the viewers, even without support.

In conclusion, PDF viewers have become risky examples of mixed-loyalty software.  It is my personal opinion that risk-averse industries and free software enthusiasts should steer clear of the PDF standard, but there are currently no practical replacements.  XPS faces extreme adoption problems, not simply due to the PDF installed base, but also due to the ill will generated by Microsoft’s tactics.  I wish that DVI were enhanced with embedded fonts and images, better portability, and better integration within tools like OpenOffice, and that this became an often-requested feature for the OpenOffice folks.  I don’t expect DVI handlers to be absolutely perfect (e.g., CVE-2002-0836), but the reduced feature set and absence of certain attack vectors should mean less complexity, fewer risks, and greater loyalty to the computer owner.

1. ISS, Multiple vendor products URI handling command execution, October 2007.  http://www.iss.net/threats/276.html

2. Robert Daniel, Adobe-Yahoo plan places ads on PDF documents, November 2007.  http://www.marketwatch.com/news/story/adobe-yahoo-partner-place-ads/story.aspx?guid=%7B903F1845-0B05-4741-8633-C6D72EE11F9A%7D

3. Bogdan Popa, Yahoo Infects Users’ Computers with Trojans - Using a simple advert distributed by Right Media, September 2007.  http://news.softpedia.com/news/Yahoo-Infects-Users-039-Computers-With-Trojans-65202.shtml

4. Kurt Foss, Web site editor illustrates how Mac OS X can circumvent PDF security, March 2002.  http://www.planetpdf.com/mainpage.asp?webpageid=1976

5. Nate Mook, Microsoft to Drop PDF Support in Office, June 2006.  http://www.betanews.com/article/Microsoft_to_Drop_PDF_Support_in_Office/1149284222

6. Adobe Press release, Adobe to Release PDF for Industry Standardization, January 2007.  http://www.adobe.com/aboutadobe/pressroom/pressreleases/200701/012907OpenPDFAIIM.html

7. Eric Schechter, Free TeX software available for Windows computers, November 2007.  http://www.math.vanderbilt.edu/~schectex/wincd/list_tex.htm

8. Jonathan Allen, The wide ranging impact of the XML Paper Specification, November 2006.  http://www.infoq.com/news/2006/11/XPS-Released

9. Microsoft, Community Promise for XPS, January 2007.  http://www.microsoft.com/whdc/xps/xpscommunitypromise.mspx

10. Kim Haverblad, Microsoft buys the Swedish vote on OOXML, August 2007.  http://www.os2world.com/content/view/14868/1/

Legit Linux Codecs In the U.S.

As a beginner Linux user, I only recently realized that few people are aware, or care, that they are breaking U.S. law by using unlicensed codecs.  Even fewer know that the codecs they use are unlicensed, or what to do about it.  Warning dialogs (e.g., in Ubuntu) provide no practical alternative to installing the codecs, and are an unwelcome interruption to workflow.  Those warnings are easily forgotten afterwards, perhaps despite good intentions to correct the situation.  Due to software patents in the U.S., codecs for everything from audio to video, such as H.264, need to be licensed, regardless of how unpalatable the law may be, and of how unfair this situation is to U.S. and Canadian citizens compared to other countries.  This impacts open source players such as Totem, Amarok, MPlayer, and Rhythmbox.  The CERIAS security seminars, for example, use H.264.  The issue of unlicensed codecs in Linux was brought up by Adrian Kingsley-Hughes, who was heavily criticized for not knowing about, or not mentioning, fluendo.com and other ways of obtaining licensed codecs.

Fluendo Codecs
So, as I like Ubuntu and want to do the legal thing, I went to the Fluendo site and purchased the “mega-bundle” of codecs.  After installing them, I tried to play a CERIAS security seminar.  I was presented with a prompt to install three packs of codecs that require licensing.  Then I realized that the Fluendo set of codecs didn’t include H.264!  Using Fluendo software is only a partial solution.  When contacted, Fluendo said that support for H.264, AAC and WMS would be released “soon”.

Wine
Another suggestion is using Quicktime for Windows under Wine.  I was able to do this, after much work;  it’s far from being as simple as running Synaptic, in part due to Apple’s web site being uncooperative and the latest version of Quicktime, 7.2, not working under Wine.  However, when I got it to work with an earlier version of Quicktime, it worked only for a short while.  Now it just displays “Error -50: an unknown error occurred” when I attempt to play a CERIAS security seminar. 

VideoLAN Player vs MPEG LA
The VideoLAN FAQ explains why VideoLAN doesn’t license the codecs, and suggests contacting MPEG LA.  I did just that, and was told that they were unwilling to let me pay for a personal use license.  Instead, I should “choose a player from a licensed supplier (or insist that the supplier you use become licensed by paying applicable royalties)”.  I wish that an “angel” (a charity?) could intercede and obtain licenses for codecs in their name, perhaps over the objections of the developers, but that’s unlikely to happen.

What to do
Essentially, free software users are the ball in a game of ping-pong between free software authors and licensors.  Many users are oblivious to this no man’s land they somehow live in,  but people concerned about legitimacy can easily be put off by it.  Businesses in particular will be concerned about liabilities.  I conclude that Adrian was right in flagging the Linux codec situation.  It is a handicap for computer users in the U.S. compared to countries where licensing codecs isn’t an issue.

One solution would be to give up Ubuntu (for example) and get a Linux distribution that bundles licensed codecs, such as Linspire (based on Ubuntu), despite the heavily criticized deal Linspire made with Microsoft.  This isn’t about being anti-Microsoft, but about divided loyalties.  Free software, for me, isn’t about getting software for free, even though that’s convenient.  It’s about appreciating the greater assurances that free software provides with regard to divided loyalties and the likelihood of software that is disloyal by design.  Linspire may now have, or may later acquire, other interests in mind besides those of its users.  This deal being part of a vague but threatening patent attack on Linux by Microsoft also makes Linspire unappealing.  Linspire is cheap, so cost isn’t an issue;  after all, getting the incomplete set of codecs from Fluendo ($40) cost me almost as much as the full version of Linspire ($49) would have.  Regardless, Linspire may be an acceptable compromise for many businesses.  Another advantage of Linspire is that it bundles a licensed DVD player as well (note that the DMCA, and DVD CCA license compliance, are separate issues from licensing codecs such as H.264).

Another possibility is to keep around an old Mac, or use lab computers, until Fluendo releases the missing codecs.  Even if CERIAS were to switch to Theora just to please me, the problem would surface again later.  So, there are options, but they aren’t optimal.

Solving some of the Wrong Problems

[tags]cybersecurity research[/tags]
As I write this, I’m sitting in a review of some university research in cybersecurity.  I’m hearing about some wonderful work (and no, I’m not going to identify it further).  I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas.  What strikes me about these efforts—representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars—is that the vast majority of these efforts have been applied to problems we already know how to solve.

Let me recast this as an analogy in medicine.  We have a crisis of cancer in the population.  As a result, we are investing huge amounts of personnel effort and money into how to remove diseased portions of lungs, and administer radiation therapy.  We are developing terribly expensive cocktails of drugs to treat the cancer…drugs that sometimes work, but make everyone who takes them really ill.  We are also investing in all sorts of research to develop new filters for cigarettes.  And some funding agencies are sponsoring workshops to generate new ideas on how to develop radical new therapies such as lung transplants.  Meanwhile, nothing is being spent to reduce tobacco use; if anything, the government is one of the largest purchasers of tobacco products!  Insane, isn’t it?  Yes, some of the work is great science, and it might lead to some serendipitous discoveries to treat liver cancer or maybe even heart disease, but it still isn’t solving the underlying problems.  It is palliative, with an intent to be curative—but we aren’t appropriately engaging prevention!

Oh, and second-hand smoke endangers many of us, too.

We know how to prevent many of our security problems—least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.

Instead of building trustworthy systems (note—I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used—and not used.  Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose.  As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them.  If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.

I’m not trying to claim there aren’t worthwhile topics for open research—there are.  I’m simply disheartened that we are not using so much of what we already know how to do, and continue to strive for patches and add-ons to make up for it.

In many parts of India, cows are sacred and cannot be harmed.  They wander everywhere in villages, with their waste products fouling the streets and creating a public health problem.  However, the only solution that local people are able to visualize is to hire more people to shovel effluent.  Meanwhile, the cows multiply, the people feed them, and the problem gets worse.  People from outside are able to visualize solutions, but the locals don’t want to employ them.

Metaphorically speaking, we need to put down our shovels and get rid of our sacred cows—maybe even get some recipes for meatloaf. grin

Let’s start using what we know instead of continuing to patch the broken, insecure, and dangerous infrastructure that we currently have.  Will it be easy?  No, but neither is quitting smoking!  But the results are ultimately going to provide us some real benefit, if we can exert the requisite willpower.

[Don’t forget to check out my tumble log!]

Some comments on Copyright and on Fair Use

[tags]copyright,DMCA,RIAA,MPAA,sharing,downloading,fair use[/tags]

Over the past decade or so, the entertainment industry has supported a continuing series of efforts to increase the enforcement of copyright laws, lengthen copyright terms, and mount very significant enforcement actions against individuals.  Included in this mess was the DMCA—the Digital Millennium Copyright Act—which has a number of very technology-unfriendly aspects.

One result of this copyright madness is lawsuits against individuals found to have file-sharing software on their systems, along with copies of music files.  Often the owners of these systems don’t even realize that their software is publishing the music files on their systems. It also seems the case that many people don’t understand copyright and do not realize that downloading (or uploading) music files is against the law.  Unfortunately, the entertainment industry has chosen to seek draconian remedies from individuals who may not be involved in more than incidental (or accidental) sharing of files.  One recent example is a case where penalties have been declared that may bankrupt someone who didn’t set out to hurt the music industry.  I agree with comments by Rep. Rick Boucher that the damages are excessive, even though (in general) the behavior of file sharers is wrong and illegal.

Another recent development is a provision in the recently introduced “College Access and Opportunity Act of 2007” (HR 3746; use Thomas to find the text).  Sec. 484(f) contains language that requires schools to put technology into place to prevent copyright violations, and to inform the Secretary of Education what those plans and technologies are.  This is ridiculous: it singles out universities instead of ISPs in general, and forces them to expend resources on student misbehavior they are already otherwise attempting to control.  It is unlikely to make any real dent in the problem because it doesn’t address the underlying causes.  Even more to the point, no existing technology can reliably detect only those shared files whose copyright prohibits such sharing.  Encryption, inflation/compression, translation into other formats, and transfer in discontinuous pieces can all be employed to fool monitoring software.  Instead, it is simply another cost and burden on higher ed.

We need to re-examine copyright.  One aspect in particular we need to examine is “fair use.”  The RIAA, MPAA and similar associations are trying to lock up content so that any use at all requires paying them additional funds.  This is clearly silly, but their arguments to date have been persuasive to legislators.  However, the traditional concept of “fair use” is important to keep intact—especially for those of us in academia.  A recent report outlines how important fair use actually is: approximately 1/6 of the US economy is related to companies and organizations that depend on “fair use.”  That is well worth noting.  Further restrictions on copyright—and particularly on fair use—are clearly not in society’s best interest.

Copyright has served—and continues to serve—valid purposes.  However, with digital media and communications it is necessary to rethink the underlying business models.  When everyone becomes a criminal, what purpose does the law serve?


Also, check out my new “tumble log.”  I update it with short items and links more often than I produce long posts here.

[posted with ecto]

Spaf Gets Interviewed

[tags]interview,certification[/tags]
I was recently interviewed by Gary McGraw for his Silver Bullet interview series.  He elicited my comments on a number of topics, including security testing, ethical hacking, and why security is difficult.  If you like any of my blog postings, you might find the interview of some interest.  But if not, you might find some of the other interviews of interest – mine was #18 in the series.