Posts in Kudos, Opinions and Rants

Legit Linux Codecs In the U.S.

As a beginner Linux user, I only recently realized that few people are aware, or care, that they are breaking U.S. law by using unlicensed codecs.  Even fewer know that the codecs they use are unlicensed, or what to do about it.  Warning dialogs (e.g., in Ubuntu) provide no practical alternative to installing the codecs, and are an unwelcome interruption to workflow.  Those warnings are easily forgotten afterwards, perhaps despite good intentions to correct the situation.  Due to software patents in the U.S., audio and video codecs such as h.264 need to be licensed, regardless of how unpalatable the law may be and of how unfair the situation is to U.S. and Canadian citizens compared to those in other countries.  This affects open source players such as Totem, Amarok, MPlayer and Rhythmbox.  The CERIAS security seminars, for example, use h.264.  The issue of unlicensed codecs in Linux was brought up by Adrian Kingsley-Hughes, who was heavily criticized for not knowing about, or not mentioning, fluendo.com and other ways of obtaining licensed codecs.

Fluendo Codecs
So, as I like Ubuntu and want to do the legal thing, I went to the Fluendo site and purchased the “mega-bundle” of codecs.  After installing them, I tried to play a CERIAS security seminar.  I was presented with a prompt to install 3 packs of codecs that require licensing.  Then I realized that the Fluendo set of codecs didn’t include h.264!  Using Fluendo software is only a partial solution.  When contacted, Fluendo said that support for h.264, AAC and WMS would be released “soon”.

Wine
Another suggestion is using QuickTime for Windows under Wine.  I was able to do this, after much work; it’s far from being as simple as running Synaptic, in part because Apple’s web site is uncooperative and the latest version of QuickTime, 7.2, does not work under Wine.  However, when I got an earlier version of QuickTime to work, it worked only for a short while.  Now it just displays “Error -50: an unknown error occurred” when I attempt to play a CERIAS security seminar.

VideoLAN Player vs MPEG LA
The VideoLAN FAQ explains why VideoLAN doesn’t license the codecs, and suggests contacting MPEG LA.  I did just that, and was told that they were unwilling to let me pay for a personal use license.  Instead, I should “choose a player from a licensed supplier (or insist that the supplier you use become licensed by paying applicable royalties)”.  I wish that an “angel” (a charity?) could intercede and obtain licenses for codecs in their name, perhaps over the objections of the developers, but that’s unlikely to happen.

What to do
Essentially, free software users are the ball in a game of ping-pong between free software authors and licensors.  Many users are oblivious to this no man’s land they somehow live in,  but people concerned about legitimacy can easily be put off by it.  Businesses in particular will be concerned about liabilities.  I conclude that Adrian was right in flagging the Linux codec situation.  It is a handicap for computer users in the U.S. compared to countries where licensing codecs isn’t an issue.

One solution would be to give up Ubuntu (for example) and switch to a Linux distribution that bundles licensed codecs, such as Linspire (based on Ubuntu), despite the heavily criticized deal Linspire made with Microsoft.  This isn’t about being anti-Microsoft, but about divided loyalties.  Free software, for me, isn’t about getting software for free, even though that’s convenient.  It’s about appreciating the greater assurances that free software provides with regard to divided loyalties and the likelihood of software that is disloyal by design.  Linspire may now have, or may later acquire, interests in mind besides those of its users.  The deal being part of a vague but threatening patent attack on Linux by Microsoft also makes Linspire unappealing.  Linspire is cheap, so cost isn’t an issue; after all, getting the incomplete set of codecs from Fluendo ($40) cost me almost as much as getting the full version of Linspire ($49) would have.  Regardless, Linspire may be an acceptable compromise for many businesses.  Another advantage of Linspire is that it bundles a licensed DVD player as well (note that the DMCA, and DVD CCA license compliance, are separate issues from licensing codecs such as h.264).

Another possibility is to keep around an old Mac, or use lab computers, until Fluendo releases the missing codecs.  Even if CERIAS were to switch to Theora just to please me, the problem would surface again later.  So, there are options, but they aren’t optimal.

Solving some of the Wrong Problems

[tags]cybersecurity research[/tags]
As I write this, I’m sitting in a review of some university research in cybersecurity.  I’m hearing about some wonderful work (and no, I’m not going to identify it further).  I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas.  What strikes me about these efforts—representative of work by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars—is that the vast majority have been applied to problems we already know how to solve.

Let me recast this as an analogy in medicine.  We have a crisis of cancer in the population.  As a result, we are investing huge amounts of personnel effort and money into how to remove diseased portions of lungs, and administer radiation therapy.  We are developing terribly expensive cocktails of drugs to treat the cancer…drugs that sometimes work, but make everyone who takes them really ill.  We are also investing in all sorts of research to develop new filters for cigarettes.  And some funding agencies are sponsoring workshops to generate new ideas on how to develop radical new therapies such as lung transplants.  Meanwhile, nothing is being spent to reduce tobacco use; if anything, the government is one of the largest purchasers of tobacco products!  Insane, isn’t it?  Yes, some of the work is great science, and it might lead to some serendipitous discoveries to treat liver cancer or maybe even heart disease, but it still isn’t solving the underlying problems.  It is palliative, with an intent to be curative—but we aren’t appropriately engaging prevention!

Oh, and second-hand smoke endangers many of us, too.

We know how to prevent many of our security problems—least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.
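To make the first of those concrete, the core of “least privilege” in a network daemon fits in a handful of lines.  Below is a minimal POSIX-style sketch in C; the port number and the “nobody” account are placeholders, and the error handling is deliberately terse.  It is an illustration of the principle, not production code.

    /* least_priv.c -- a sketch of "least privilege": acquire the one
     * resource that needs root (a port below 1024), then permanently
     * drop to an unprivileged account before doing any real work. */
    #include <stdio.h>
    #include <unistd.h>
    #include <pwd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        /* Step 1: the only operation that actually requires root. */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);                /* a privileged port */
        if (s < 0 || bind(s, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            perror("socket/bind");
            return 1;
        }

        /* Step 2: permanently give up root -- group first, then user. */
        struct passwd *pw = getpwnam("nobody");   /* placeholder account */
        if (pw == NULL || setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("privilege drop failed");
            return 1;
        }

        /* Step 3: everything past this point runs unprivileged, so a
         * later compromise of the process yields far less to an attacker. */
        printf("serving as uid %d, not as root\n", (int) getuid());
        /* ... listen() and accept() here ... */
        close(s);
        return 0;
    }

Note that the ordering matters: the group id has to be dropped before the user id, or the process no longer has the privilege to finish the job.  None of this is new; it is exactly the kind of practice we have known about for decades.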

Instead of building trustworthy systems (note—I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.
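To be concrete about what those scanning tools hunt for, here is a toy C sketch of the classic unchecked copy next to a bounded version.  In a memory-safe language the equivalent out-of-bounds write would raise an error rather than silently overwriting adjacent memory.

    /* overflow.c -- the classic unchecked copy that code scanners look
     * for, alongside the bounded alternative. */
    #include <stdio.h>
    #include <string.h>

    static void risky(const char *input)
    {
        char buf[16];
        strcpy(buf, input);   /* no length check: overflows for long input */
        printf("risky:   %s\n", buf);
    }

    static void bounded(const char *input)
    {
        char buf[16];
        snprintf(buf, sizeof(buf), "%s", input);   /* truncates, never overflows */
        printf("bounded: %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        const char *input = (argc > 1) ? argv[1] : "hello";
        bounded(input);   /* safe for input of any length */
        risky(input);     /* undefined behavior once input exceeds 15 characters */
        return 0;
    }

The difference is one line, which is rather the point: it is cheaper to write the second form in the first place than to pay for tools to go looking for the first form after the fact.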

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used—and not used.  Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose.  As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them.  If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.

I’m not trying to claim there aren’t worthwhile topics for open research—there are.  I’m simply disheartened that we use so little of what we already know how to do, and continue to strive for patches and add-ons to make up for it.

In many parts of India, cows are sacred and cannot be harmed.  They wander everywhere in villages, with their waste products fouling the streets and creating a public health problem.  However, the only solution that local people are able to visualize is to hire more people to shovel effluent.  Meanwhile, the cows multiply, the people feed them, and the problem gets worse.  People from outside are able to visualize solutions, but the locals don’t want to employ them.

Metaphorically speaking, we need to put down our shovels and get rid of our sacred cows—maybe even get some recipes for meatloaf. grin

Let’s start using what we know instead of continuing to patch the broken, insecure, and dangerous infrastructure that we currently have.  Will it be easy?  No, but neither is quitting smoking!  But the results are ultimately going to provide us some real benefit, if we can exert the requisite willpower.

[Don’t forget to check out my tumble log!]

Some comments on Copyright and on Fair Use

[tags]copyright,DMCA,RIAA,MPAA,sharing,downloading,fair use[/tags]

Over the past decade or so, the entertainment industry has supported a continuing series of efforts to increase the enforcement of copyright laws, lengthen copyright terms, and pursue very significant enforcement actions against individuals.  Included in this mess was the DMCA—the Digital Millennium Copyright Act—which has a number of very technology-unfriendly aspects.

One result of this copyright madness is lawsuits against individuals found to have file-sharing software on their systems, along with copies of music files.  Often the owners of those systems don’t even realize that the software is publishing their music files.  It also seems that many people don’t understand copyright and do not realize that downloading (or uploading) music files is against the law.  Unfortunately, the entertainment industry has chosen to seek draconian remedies from individuals who may not be involved in more than incidental (or accidental) sharing of files.  One recent example is a case where penalties have been awarded that may bankrupt someone who didn’t set out to hurt the music industry.  I agree with comments by Rep. Rick Boucher that the damages are excessive, even though (in general) the behavior of file sharers is wrong and illegal.

Another recent development is a provision in the recently introduced “College Access and Opportunity Act of 2007” (HR 3746; use Thomas to find the text).  Sec. 484(f) contains language that requires schools to put technology in place to prevent copyright violations, and to inform the Secretary of Education about what those plans and technologies are.  This is ridiculous: it singles out universities instead of ISPs in general, and forces them to expend resources policing misbehavior by students that they are already otherwise attempting to control.  It is unlikely to make any real dent in the problem because it doesn’t address the underlying causes.  Even more to the point, no existing technology can reliably detect only those shared files whose copyright prohibits such sharing.  Encryption, compression or inflation, translation into other formats, and transfer in discontinuous pieces can all be employed to fool monitoring software.  Instead, it is simply another cost and burden on higher ed.

We need to re-examine copyright.  One aspect in particular we need to examine is “fair use.”  The RIAA, MPAA and similar associations are trying to lock up content so that any use at all requires paying them additional funds.  This is clearly silly, but their arguments to date have been persuasive to legislators.  However, the traditional concept of “fair use” is important to keep intact—especially for those of us in academia.  A recent report shows that fair use is actually quite important: approximately one-sixth of the US economy involves companies and organizations that depend on “fair use.”  It is well worth noting.  Further restrictions on the use of copyrighted material—and particularly on fair use—are clearly not in society’s best interest.

Copyright has served—and continues to serve—valid purposes.  However, with digital media and communications it is necessary to rethink the underlying business models.  When everyone becomes a criminal, what purpose does the law serve?


Also, check out my new “tumble log.”  I update it with short items and links more often than I produce long posts here.

[posted with ecto]

Spaf Gets Interviewed

[tags]interview,certification[/tags]
I was recently interviewed by Gary McGraw for his Silver Bullet interview series.  He elicited my comments on a number of topics, including security testing, ethical hacking, and why security is difficult.  If you like any of my blog postings, you might find the interview of some interest.  But if not, you might find some of the other interviews of interest – mine was #18 in the series.

What did you really expect?

[tags]reformed hackers[/tags]
A news story that hit the wires last week was that someone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior.  The press and blogosphere seemed to treat this as surprising.  They shouldn’t have.

I have been speaking and writing for nearly two decades on this general issue, as have others (William Hugh Murray, a pioneer and thought leader in security,  is one who comes to mind).  Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids.  First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit.  Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled.  (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way—that behavior does not establish the bona fides for real expertise.  If anything, it establishes a disregard for the community it endangers.)

More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on.  Certainly it is possible that they will learn the error of their ways and reform.  However, it is also the case that they may slip later and revert to their old ways.  Putting some of them in situations of trust with access to items of value is almost certainly too much temptation.  This has been established time and again in studies of criminals of all types, especially those who commit fraud.  So, why would a prudent manager take a risk when better alternatives are available?

Even worse, circulating stories of criminals who end up as highly-paid consultants are counterproductive, even if they are rarely true.  That is the kind of story that may tempt some without strong ethics to commit crimes as a shortcut to fame and riches.  Additionally, it is insulting to the individuals who work hard, study intently, and maintain a high standard of conduct in their careers—hiring criminals basically states that the honest, hardworking real experts are fools.  Is that the message we really want to put forward?

Luckily, most responsible managers now understand, even if the press and general public don’t, that criminals are simply that—criminals.  They may have served their sentences, which now makes them former criminals…but not innocent.  Pursuing criminal activity is not—and should not be—a job qualification or career path in civilized society.  There are many, many historical precedents we can turn to, including hiring pirates as privateers and train robbers as train guards.  Some took the opportunity to go straight, but the instances of those who abused that trust and made off with what they were protecting illustrate that it is a big risk to take.  It also is something we have learned to avoid.  It is long past time for those of us in computing to get with the program.

So, what of the argument that there aren’t enough real experts, or they cost too much to hire?  Well, what is their real value? If society wants highly-trained and trustworthy people to work in security, then society needs to devote more resources to support the development of curriculum and professional standards.  And it needs to provide reasonable salaries to those people, both to encourage and reward their behavior and expertise.  We’re seeing more of that now than a dozen years ago, but it is still the case that too many managers (and government officials) want security on the cheap, and then act surprised when they get hacked.  I suppose they also buy their Rolex and Breitling watches for $50 from some guy in a parking lot and then act surprised and violated when the watch stops a week later.  What were they really expecting?

This Week at CERIAS

Lots of new papers added this week—more than we can list here. Check the Reports and Papers Archive for more.

CERIAS Reports & Papers

CERIAS Weblogs

Cyberwar

[tags]cyber warfare, cyber terrorism, cyber crime, Estonia[/tags]
I am frequently asked about the likelihood of cyber war or cyber terrorism.  I’m skeptical of either being a stand-alone threat, as neither is likely to serve the goals of those who would actually wage warfare or commit terrorism.

The incidents in Estonia earlier this year were quite newsworthy and brought more people out claiming it was cyber terrorism or cyber warfare.  Nonsense!  It wasn’t terrorism, because it didn’t terrorize anyone—although it did annoy the heck out of many.  And as far as warfare goes, nothing was accomplished politically, and the “other side” was never even formally identified.

Basically, in Estonia there was a massive outbreak of cyber vandalism and cyber crime.

Carolyn Duffy Marsan did a nice piece in Network World on this topic.  She interviewed a number of people, and wrote it up clearly.  I especially like it because she quoted me correctly!  You can check out the article here: How close is World War 3.0? - Network World.  I think it represents the situation quite appropriately.

[As a humorous aside, I happened to do a search on the Network World site to see if another interview had appeared without me hearing about it.  I found this item that had appeared in December of 2006 and I didn’t know about it until now!  Darn, and to think I could have started recruiting minions in January. grin]

Fun video

[tags]the Internet[/tags]
Satire is sometimes a great way to get a point across.  Or multiple points.  I think this little clip is incredibly funny and probably insightful.

Items In the news

[tags]news, cell phones, reports, security vulnerabilities, hacking, computer crime, research priorities, forensics, wiretaps[/tags]
The Greek Cell Phone Incident
A great story involving computers and software, even though the main hack was against cell phones:
IEEE Spectrum: The Athens Affair.  From it we can learn all sorts of lessons about conducting a forensic investigation, retaining logs, wiretapping phones, and more.

Now, imagine VoIP and 802.11 networking and vulnerabilities in routers and… the possibilities get even more interesting.  I suspect that there’s a lot more eavesdropping going on than most of us imagine, and certainly more than we discover.

NRC Report Released
Last week, the National Research Council announced the release of a new report: Towards a Safer and More Secure Cyberspace.  The report is notable in a number of ways, and should be read carefully by anyone interested in cyber security.  I think the authors did a great job with the material, and they listened to input from many sources.

There are 2 items I specifically wish to note:

  1. I really dislike the “Cyber Security Bill of Rights” listed in the report.  It isn’t that I dislike the goals they represent—those are great.  The problem is that I dislike the “bill of rights” notion attached to them.  After all, who says they are “rights”?  By what provenance are they granted?  And to what extremes would we go to enforce them?  I believe the goals are sound, and we should definitely work towards them, but let’s not call them “rights.”
  2. Check out Appendix B.  Note all the other studies that have been done in recent years pointing out that we are in for greater and greater problems unless we start making some changes.  I’ve been involved with several of those efforts as an author—including the PITAC report, the Infosec Research Council Hard Problems list, and the CRA Grand Challenges. Maybe the fact that I had no hand in authoring this report means it will be taken seriously, unlike all the rest. grin  More to the point, people who put off the pain and expense of trying to fix things because “Nothing really terrible has happened yet” do not understand history, human nature, or the increasing drag on the economy and privacy from current problems.  The trends are fairly clear in this report: things are not getting better.

Evolution of Computer Crime
Speaking of my alleged expertise at augury, I noted something in the news recently that confirmed a prediction I made nearly 8 years ago at a couple of invited talks: that online criminals would begin to compete for “turf.”  The evolution of online crime is such that the “neighborhood” where criminals operate overlaps with others.  If you want the exclusive racket on phishing, DDOS extortion, and other such criminal behavior, you need to eliminate (or absorb) the competition in your neighborhood.  But what does that imply when your “turf” is the world-wide Internet?

The next step is seeing some of this spill over into the physical world.  Some of the criminal element online is backed up by more traditional organized crime in “meat space.”  They will have no compunction about threatening—or disabling—the competition if they locate them in the real world.  And they may well do that because they also have developed sources inside law enforcement agencies and they have financial resources at their disposal.  I haven’t seen this reported in the news (yet), but I imagine it happening within the next 2-3 years.

Of course, 8 years ago, most of my audiences didn’t believe that we’d see significant crime on the net—they didn’t see the possibility.  They were more worried about casual hacking and virus writing.  As I said above, however, one only needs to study human nature and history, and the inevitability of some things becomes clear, even if the mechanisms aren’t yet apparent.

The Irony Department
GAO reported a little over a week ago that DHS suffered over 800 attacks on its computers in two years.  I note that the report covers only detected attacks.  I had one top person in DC (who will remain nameless) describe DHS as “a train wreck crossed with a nightmare, run by inexperienced political hacks,” referring to things like TSA, the DHS cyber operations, and other notable problems.  For years I (and many others) have been telling people in government that they need to set an example for the rest of the country when it comes to cyber security.  It seems they’ve been listening, and we’ve been negligent.  From now on, we need to stress that they need to set a good example.

[posted with ecto]

Complexity, virtualization, security, and an old approach

[tags]complexity,security,virtualization,microkernels[/tags]
One of the key properties that works against strong security is complexity.  Complexity poses problems in a number of ways.  The more complexity in an operating system, for instance, the more difficult it is for those writing and maintaining it to understand how it will behave under extreme circumstances.  Complexity makes it difficult to understand what is needed, and thus to write fault-free code.  Complex systems are more difficult to test and to prove properties about.  Complex systems are more difficult to patch properly when faults are found, usually because of the difficulty of ensuring that there are no side effects.  Complex systems can have backdoors and Trojan code implanted that are more difficult to find.  Complex operations tend to have more failure modes.  Complex operations may also have longer windows where race conditions can be exploited.  Complex code also tends to be bigger than simple code, and that means more opportunity for accidents, omissions and the manifestation of coding errors.

Simply put, complexity creates problems.

Saltzer and Schroeder identified this in their classic 1975 paper, “The Protection of Information in Computer Systems,” in the Proceedings of the IEEE.  They listed “economy of mechanism” as their first design principle for secure systems.

Some of the biggest problems we have now in security (and arguably, in computing) are caused by “feature creep” as we continue to expand systems to add new features.  Yes, those new features add new capabilities, but often the additions are foisted off on everyone whether they want them or not.  Thus, everyone has to suffer the consequences of the next expanded release of Linux, Windows (Vista), Oracle, and so on.  Many of the new features are there as legitimate improvements for everyone, but some are of interest to only a minority of users, and others are simply there because the designers thought they might be nifty.  And besides, why would someone upgrade unless there were lots of new features?

Of course, this has secondary effects on complexity in addition to the obvious complexity of a system with new features.  One example has to do with backwards compatibility.  Because customers are unlikely to upgrade to the new, improved product if it means they have to throw out their old applications and data, the software producers need to provide extra code for compatibility with legacy systems.  This is often not straightforward—it adds new complexity.

Another form of complexity has to do with hardware changes.  The increase in software complexity has been one motivating factor for hardware designers, and has been for quite some time.  Back in the 1960s when systems began to support time sharing, virtual memory became a necessity, and the hardware mechanisms for page and segment tables needed to be designed into systems to maintain reasonable performance.  Now we have systems with more and more processes running in the background to support the extra complexity of our systems, so designers are adding extra processing cores and support for process scheduling.

Yet another form of complexity is involved with the user interface.  The typical user (and especially the support personnel) now has to master many new options and features, and understand all of their interactions.  This is increasingly difficult for someone of even above-average ability.  It is no wonder that the average home user has myriad problems using these systems!

Of course, the security implications of all this complexity have been obvious for some time.  Rather than address the problem head-on by reducing the complexity and changing development methods (e.g., use safer tools and systems, with more formal design), we have recently seen a trend towards virtualization.  The idea is that we confine our systems (operating systems, web services, databases, etc.) in a virtual environment supported by an underlying hypervisor.  If the code breaks…or someone breaks it…the virtualization contains the problems.  At least, in theory.  And now we have vendors providing chipsets with even more complicated instruction sets to support the approach.  But this is simply adding yet more complexity.  And that can’t be good in the long run.  Already attacks have been formulated to take advantage of these added “features.”

We lose many things as we make systems more complex.  Besides security and correctness, we also end up paying for resources we don’t use.  And we are also paying for power and cooling for chips that are probably more powerful than we really need.  If our software systems weren’t doing so much, we wouldn’t need quite so much power “under the hood” in the hardware.

Although one example is hardly proof of this general proposition, consider the results presented in 86 Mac Plus Vs. 07 AMD DualCore.  A 21-year-old system beat a current top-of-the-line system on the majority of a set of operations that a typical user might perform during a work session.  On your current system, do a “ps” or run the task manager.  How many of those processes are really contributing to the tasks you want to carry out?  Look at the memory in use—how much of it is really needed for those tasks?
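For readers who want to make that census concrete, here is a rough Linux-only sketch in C that walks /proc, counts processes, and totals their resident memory.  The parsing is deliberately simplistic; it is meant only to prompt the question of what all those processes are actually doing for you.

    /* proc_census.c -- count processes and total their resident memory
     * by walking /proc (Linux-specific; kernel threads report no VmRSS). */
    #include <stdio.h>
    #include <ctype.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *proc = opendir("/proc");
        if (proc == NULL) {
            perror("opendir /proc");
            return 1;
        }

        long nprocs = 0, total_rss_kb = 0;
        struct dirent *de;
        while ((de = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char) de->d_name[0]))
                continue;                 /* only numeric entries are processes */
            nprocs++;

            char path[64], line[256];
            snprintf(path, sizeof(path), "/proc/%s/status", de->d_name);
            FILE *f = fopen(path, "r");
            if (f == NULL)
                continue;                 /* the process may already have exited */
            while (fgets(line, sizeof(line), f) != NULL) {
                long kb;
                if (sscanf(line, "VmRSS: %ld kB", &kb) == 1) {
                    total_rss_kb += kb;   /* resident memory for this process */
                    break;
                }
            }
            fclose(f);
        }
        closedir(proc);

        printf("%ld processes, roughly %ld MB resident in total\n",
               nprocs, total_rss_kb / 1024);
        return 0;
    }

Run it on an “idle” desktop and compare the totals with what you believe you asked the machine to do.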

Perhaps I can be accused of being a reactionary (a nice word meaning “old fart”), but I remember running Unix in 32K of memory.  I wrote my first full-fledged operating system, with processes, a file system, network and communication drivers, all in 40K.  I remember the community’s efforts in the 1980s and early 1990s to build microkernels.  I remember the concept of RISC having a profound impact on the field as people saw how much faster a chip could be if it didn’t need to support complexity in the instruction set.  How did we get from there to here?

Perhaps the time is nearly right to have another revolution of minimalism.  We have people developing low-power chips and tiny operating systems for sensor-based applications.  Perhaps they can show the rest of us some old ideas made new.

And for security?  Well, I’ve been trying for several years to build a system (Poly^2) that minimizes the OS to provide increased security.  To date, I haven’t had much luck in getting sufficient funding to really construct a proper prototype; I currently have some funding from NSF to build a minimal version, but the funding won’t allow anything close to a real implementation.  What I’m trying to show is too contrary to conventional wisdom.  It isn’t of interest to the software or hardware vendors because it is so contrary to their business models, and the idea is so foreign to most of the reviewers at funding agencies, who are used to building ever more complex systems.

Imagine a system with several dozen (or hundred) processor cores.  Do we need process scheduling and switching support if we have a core for each active process?  Do we need virtual memory support if we have a few gigabytes of memory available per core?  Back in the 1960s we couldn’t imagine such a system, and no nation or company could afford to build one.  But now that wouldn’t even be particularly expensive compared to many modern systems.  How much simpler, faster, and more secure would such a system be?  In 5 years we may be able to buy such a system on a single chip—will we be ready to use it, or will we still be chasing 200 million line operating systems down virtual rat holes?

So, I challenge my (few) readers to think about minimalism.  If we reduce the complexity of our systems what might we accomplish?  What might we achieve if we threw out the current designs and started over from a new beginning and with our current knowledge and capabilities?

[Small typo fixed 6/21—thanks cfr]

Copyright © 2007 by E. H. Spafford
[posted with ecto]