The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Disloyal Software


Disloyal software surrounds us.  This is software running on devices or computers you own, yet serving interests other than yours.  Examples include DVD player firmware that insists on making you watch the silly FBI warning or prevents you from skipping “splashes” or previews, and pop-up or pop-under advertising windows in web browsers.  When people discuss malware or categories of software, there is usually little consideration of disloyal software (I found this interesting discussion of Trusted Computing).  Some of it is perfectly legal; some protects legal rights.  At the other extreme, rootkits can subvert entire computers against their owners.  The question is, when can you trust possibly disloyal software, and when does it become malware, such as the Sony CD copy prevention rootkit?

Who’s in Control
Loyalty is a question of perspective: ownership versus control.  An employer providing laptops and computers to employees doesn’t want them to install things that could create liability or compromise the machine; from the employee’s perspective, the software is restrictive, but justifiably so.  From the perspective of someone privately owning a computer, a lower likelihood of disloyalty is an advantage of free software (as in the FSF free software definition).  The developers won’t benefit from implementing restrictions or features that run counter to the interests of the user.  If one does, someone somewhere will likely remove that restriction for the benefit of all.  Of course, this doesn’t address the possibility of cleverly hidden capabilities (such as backdoors) or compromised source code repositories.

This leads to questions about control of many other devices, such as game consoles and media players like the iPod.  Why does my iPod, using Apple-provided software, not allow me to copy music files to another computer?  It shouldn’t matter which computer as long as I’m not violating copyrights; perhaps it’s the same computer that ripped the CDs, after the hard drive died or was upgraded, or the new computer I just bought.  By using the iPod as a storage device instead of a music player, such copies can be made with Apple software, but music files in the “play” section can’t be copied out.  This restriction is utterly silly, as it accomplishes nothing but annoying owners, and I’m glad that Ubuntu Linux allows direct access to the music files.

DMCA
Some firmware implements copyright protection measures, and modifying it to remove those protections is made illegal by the DMCA.  As modifying consoles (“modding”) is often done for that purpose, the act of modding has become suspicious in itself.  Someone modding a DVD player simply to bypass annoying splash screens, without affecting copy protection mechanisms, would have a hard time defending herself.  This has a chilling effect on recycling perfectly good hardware with better software.  For example, I think Microsoft would still be selling large quantities of the original Xbox if the compiled XBMC media player software weren’t also illegal for most people, due to licensing issues with the Microsoft compiler.  The DMCA helps law enforcement and copyright holders, but has negative effects as well (see Wikipedia).  Disloyal devices are distasteful, and the current law heavily favors copyright owners.  Of course, it’s not clear-cut, especially for devices that have responsibilities towards multiple entities, such as cell phones.  I recommend watching Ron Buskey’s security seminar about cell phones.

Web Me Up
If you think you’re using only free software, you’re wrong every time you use the web and allow scripting.  Perhaps the ultimate disloyal software is what web sites push to your browser.  Active content (JavaScript, Flash, etc.) on web pages can glue you in place and restrict what you can do and how, or deploy adversarial behaviors (e.g., pop-unders or browser attacks).  Every time you visit a web page nowadays, you download and run software that is not free:

* It is often impractical to access the content of the page, or even basic form functionality, without running the software, so you do not have the freedom to run or not run it as a practical choice (in theory you do have a choice, but the penalties for choosing the alternative can be significant).

* It is difficult to study, given how scripts can load further active content from other sites in a chain-like fashion, creating a large spaghetti that can be changed at any time (see the sketch after this list).

* There is no point in redistributing copies, as the copies running on the web sites you need to use won’t change.

* Releasing your “improvements” to the public would almost certainly violate copyrights. Even if you made useful improvements, the web site owners could change how their site works regularly, thus foiling your efforts.

Most of the above is true even if the scripts you are made to run in a browser are free software from the web developers’ point of view; the delivery method taints them.
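To make the chain-loading point concrete, here is a minimal sketch in TypeScript; the URLs and the loadScript helper are hypothetical, not taken from any real site:

```typescript
// Minimal sketch of script chain-loading (hypothetical URLs).
function loadScript(src: string): void {
  const tag = document.createElement("script");
  tag.src = src; // fetched and executed with the page's full privileges
  document.head.appendChild(tag);
}

// The page you visit pulls in an ad broker...
loadScript("https://ads.broker.example/serve.js");

// ...and serve.js, once running, can use the same pattern to pull code
// from yet another origin, which can pull from another, and so on.
// Any link in the chain can serve different code on your next visit.
```

No person, and no tool, can meaningfully audit what actually runs, because each link in the chain is controlled by a different party and can change between visits.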

Give me some AIR
The Adobe Integrated Runtime (“AIR”) is interesting because it has the potential to free web technologies such as HTML, Flash, and JavaScript by allowing them to be used in a free, open-source way.  CERIAS webmaster Ed Finkler developed the “Spaz” application with it, and licensed it under the New BSD license.  I say “potential” only because AIR can be used to load software dynamically as well, with all the problems of web scripting.  It’s a question of control and trust.  I can’t trust possibly malicious code that I am forced to run on my machine just to access a web page I happen to visit.  However, I may trust static code that is free software not to be disloyal by design.  If it is disloyal, it is possible to fix it and redistribute the improved code.  AIR could deliver that, as Ed demonstrated.

The problem with AIR is that I will have to trust a web developer with the security of my desktop.  AIR has two sandboxes: the Classic Sandbox, which behaves like a web browser, and the Application Sandbox, which is comparable to server-side applications except that the code runs locally (see the AIR security FAQ).  The Application Sandbox allows local file operations that are typically forbidden to web browsers, while removing some of the more dangerous web browser functionality.  Although the security model makes sense as a foundation, actual security is entirely up to whoever writes the code that runs in the Application Sandbox.  People who have no qualms about pushing code to my browser and forcing me to turn on scripting, thereby making me vulnerable to attacks from sites I visit subsequently, to malicious ads, or to code injected into their own site, can’t be trusted to care whether my desktop is compromised through their code, or to be competent to prevent it.
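As a rough sketch of what that means in practice, here is what Application Sandbox code could do, assuming the AIR HTML/JavaScript aliases of the day (the global `air` object exposed by AIRAliases.js); the filename and content are hypothetical:

```typescript
// Provided by the AIR runtime via AIRAliases.js, not by any browser;
// declared here only so the sketch type-checks.
declare const air: any;

// Code in the Application Sandbox can read and write local files,
// which ordinary in-browser scripts are never allowed to do.
const file = air.File.applicationStorageDirectory.resolvePath("notes.txt");
const stream = new air.FileStream();
stream.open(file, air.FileMode.WRITE);
stream.writeUTFBytes("written by 'web' code running outside the browser jail");
stream.close();
```

The same handful of calls that make a desktop-quality application possible also make a careless or compromised one far more dangerous than anything confined to a browser.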

Even the security FAQ for AIR downplays significant risks.  For example, it says “The damage potential from an injection attack in a given website is directly proportional to the value of the website itself. As such, a simple website such as an unauthenticated chat or crossword site does not have to worry much about injection attacks as much as any damage would be annoying at most.”  This completely ignores scripting-based attacks against the browsers themselves, such as those performed by the well-known MPack and IcePack malware kits.  In addition, there will probably be both implementation and design vulnerabilities found in AIR itself.

Either way, AIR is a development to watch.

P.S. (10/16): What if AIR attracts the kind of people who are responsible for flooding the National Vulnerability Database with PHP server application vulnerabilities?  Server applications are notoriously difficult to write securely.  Code they write for the Application Sandbox could be just as buggy, except that instead of a few compromised servers, there could be a large number of compromised personal computers…

Solving some of the Wrong Problems


[tags]cybersecurity research[/tags]
As I write this, I’m sitting in a review of some university research in cybersecurity.  I’m hearing about some wonderful work (and no, I’m not going to identify it further).  I also recently received a solicitation for an upcoming workshop to develop “game changing” cybersecurity research ideas.  What strikes me about these efforts, which are representative of work by hundreds of people over decades and the expenditure of perhaps hundreds of millions of dollars, is that the vast majority have been applied to problems we already know how to solve.

Let me recast this as an analogy in medicine.  We have a crisis of cancer in the population.  As a result, we are investing huge amounts of personnel effort and money into how to remove diseased portions of lungs, and administer radiation therapy.  We are developing terribly expensive cocktails of drugs to treat the cancer…drugs that sometimes work, but make everyone who takes them really ill.  We are also investing in all sorts of research to develop new filters for cigarettes.  And some funding agencies are sponsoring workshops to generate new ideas on how to develop radical new therapies such as lung transplants.  Meanwhile, nothing is being spent to reduce tobacco use; if anything, the government is one of the largest purchasers of tobacco products!  Insane, isn’t it?  Yes, some of the work is great science, and it might lead to some serendipitous discoveries to treat liver cancer or maybe even heart disease, but it still isn’t solving the underlying problems.  It is palliative, with an intent to be curative—but we aren’t appropriately engaging prevention!

Oh, and second-hand smoke endangers many of us, too.

We know how to prevent many of our security problems—least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.
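To make one of those principles concrete, here is a minimal sketch of least privilege as an API design choice, in TypeScript with hypothetical names: a component receives only the narrow capability it needs, never ambient authority over the whole filesystem.

```typescript
import { readFileSync } from "node:fs";

// A narrow, read-only capability: its holder can read one log file
// and do nothing else.
interface LogReader {
  readLog(): string;
}

// Only the trusted boundary touches the filesystem and binds the path.
function makeLogReader(path: string): LogReader {
  return { readLog: () => readFileSync(path, "utf8") };
}

// The analysis component sees only the capability, so a bug in it
// cannot delete, rewrite, or even discover other files.
function countErrors(reader: LogReader): number {
  return reader.readLog().split("\n").filter((l) => l.includes("ERROR")).length;
}

console.log(countErrors(makeLogReader("/var/log/app.log")));
```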

Instead of building trustworthy systems (note—I’m not referring to making existing systems trustworthy, which I don’t think can succeed), we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools that look for buffer overflows and type mismatches in existing code, and merrily continue to produce more software of questionable quality.
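A toy illustration of the difference (nothing here comes from any particular scanner or codebase): in a type-safe, bounds-checked language, the classic overflow cannot corrupt adjacent memory in the first place, so there is nothing left for an after-the-fact tool to find.

```typescript
// An 8-byte buffer in a memory-safe language.
const buf = new Uint8Array(8);

// The canonical C overflow would smash whatever lives past the buffer.
// Here the out-of-range write is simply discarded by the runtime.
buf[100] = 42;
console.log(buf[100]); // undefined: the write never landed anywhere

// Reads past the end yield undefined rather than leaking neighboring
// memory, and the type system nudges you to handle that case.
const bytes: number[] = [];
console.log((bytes[0] ?? 0) + 1); // 1, not garbage from the stack
```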

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used—and not used.  Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose.  As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them.  If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.

I’m not trying to claim there aren’t worthwhile topics for open research—there are.  I’m simply disheartened that we are not using so much of what we already know how to do, and continue to strive for patches and add-ons to make up for it.

In many parts of India, cows are sacred and cannot be harmed.  They wander everywhere in villages, with their waste products fouling the streets and creating a public health problem.  However, the only solution that local people are able to visualize is to hire more people to shovel effluent.  Meanwhile, the cows multiply, the people feed them, and the problem gets worse.  People from outside are able to visualize solutions, but the locals don’t want to employ them.

Metaphorically speaking, we need to put down our shovels and get rid of our sacred cows—maybe even get some recipes for meatloaf. grin

Let’s start using what we know instead of continuing to patch the broken, insecure, and dangerous infrastructure that we currently have.  Will it be easy?  No, but neither is quitting smoking.  The results, though, are ultimately going to provide us some real benefit, if we can exert the requisite willpower.

[Don’t forget to check out my tumble log!]

Some comments on Copyright and on Fair Use


[tags]copyright,DMCA,RIAA,MPAA,sharing,downloading,fair use[/tags]

Over the past decade or so, the entertainment industry has supported a continuing series of efforts to increase the enforcement of copyright laws, lengthen copyright terms, and pursue very significant enforcement actions against individuals.  Included in this mess was the DMCA, the Digital Millennium Copyright Act, which has a number of very technology-unfriendly aspects.

One result of this copyright madness is lawsuits against individuals found to have file-sharing software on their systems, along with copies of music files.  Often the owners of these systems don’t even realize that the software is publishing the music files on their machines.  It also seems to be the case that many people don’t understand copyright and do not realize that downloading (or uploading) music files is against the law.  Unfortunately, the entertainment industry has chosen to seek draconian remedies from individuals who may not be involved in more than incidental (or accidental) sharing of files.  One recent example is a case where penalties have been assessed that may bankrupt someone who didn’t set out to hurt the music industry.  I agree with comments by Rep. Rick Boucher that the damages are excessive, even though (in general) the behavior of file sharers is wrong and illegal.

Another recent development is a provision in the recently introduced “College Access and Opportunity Act of 2007” (HR 3746; use Thomas to find the text).  Sec. 484(f) contains language that requires schools to put technology in place to prevent copyright violations, and to inform the Secretary of Education about what those plans and technologies are.  This is ridiculous: it singles out universities instead of ISPs in general, and forces them to expend resources on student misbehavior they are already attempting to control.  It is unlikely to make any real dent in the problem because it doesn’t address the underlying causes.  Even more to the point, no existing technology can reliably detect only those shared files whose copyright prohibits such sharing.  Encryption, inflation/compression, translation into other formats, and transfer in discontinuous pieces can all be employed to fool monitoring software.  Instead, it is simply another cost and burden on higher ed.
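A minimal sketch of why such monitoring fails, using Node’s built-in crypto module and made-up data: any reversible transformation of the bytes, real encryption or even a toy XOR, defeats a fingerprint match while the recipient can trivially undo it.

```typescript
import { createHash } from "node:crypto";

// Fingerprint-based monitoring matches hashes of known infringing files.
const song = Buffer.from("the same copyrighted bytes every monitor knows");
const knownFingerprint = createHash("sha256").update(song).digest("hex");

// Any reversible transformation -- encryption, compression, re-encoding,
// or even this toy XOR -- yields bytes the monitor has never seen.
const disguised = song.map((b) => b ^ 0x5a);
const observed = createHash("sha256").update(disguised).digest("hex");

console.log(knownFingerprint === observed); // false: no match, no detection
// The recipient XORs with 0x5a again and recovers the original exactly.
```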

We need to re-examine copyright.  One aspect in particular we need to examine is “fair use.”  The RIAA, MPAA, and similar associations are trying to lock up content so that any use at all requires paying them additional funds.  This is clearly silly, but their arguments to date have been persuasive to legislators.  However, the traditional concept of “fair use” is important to keep intact, especially for those of us in academia.  A recent report shows that fair use is actually quite important: approximately one sixth of the US economy is related to companies and organizations that depend on “fair use.”  That is well worth noting.  Further restrictions on copyright, and particularly on fair use, are clearly not in society’s best interest.

Copyright has served—and continues to serve—valid purposes.  However, with digital media and communications it is necessary to rethink the underlying business models.  When everyone becomes a criminal, what purpose does the law serve?


Also, check out my new “tumble log.”  I update it with short items and links more often than I produce long posts here.

[posted with ecto]

Spaf Gets Interviewed


[tags]interview,certification[/tags]
I was recently interviewed by Gary McGraw for his Silver Bullet interview series.  He elicited my comments on a number of topics, including security testing, ethical hacking, and why security is difficult.  If you like any of my blog postings, you might find the interview of some interest.  But if not, you might find some of the other interviews of interest – mine was #18 in the series.

What did you really expect?


[tags]reformed hackers[/tags]
A news story that hit the wires last week was that someone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior.  The press and blogosphere seemed to treat this as surprising.  They shouldn’t have.

I have been speaking and writing for nearly two decades on this general issue, as have others (William Hugh Murray, a pioneer and thought leader in security,  is one who comes to mind).  Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids.  First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit.  Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled.  (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way—that behavior does not establish the bona fides for real expertise.  If anything, it establishes a disregard for the community it endangers.)

More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on.  Certainly it is possible that they will learn the error of their ways and reform.  However, it is also the case that they may slip later and revert to their old ways.  Putting some of them in situations of trust with access to items of value is almost certainly too much temptation.  This has been established time and again in studies of criminals of all types, especially those who commit fraud.  So, why would a prudent manager take a risk when better alternatives are available?

Even worse, circulating stories of criminals who end up as highly-paid consultants are counterproductive, even if they are rarely true.  That is the kind of story that may tempt some without strong ethics to commit crimes as a shortcut to fame and riches.  Additionally, it is insulting to the individuals who work hard, study intently, and maintain a high standard of conduct in their careers—hiring criminals basically states that the honest, hardworking real experts are fools.  Is that the message we really want to put forward?

Luckily, most responsible managers now understand, even if the press and general public don’t, that criminals are simply that—criminals.  They may have served their sentences, which makes them former criminals…but not innocent.  Pursuing criminal activity is not—and should not be—a job qualification or career path in civilized society.  There are many historical precedents we can turn to, including hiring pirates as privateers and train robbers as train guards.  Some took the opportunity to go straight, but the instances of those who abused trust and made off with what they were protecting illustrate that it is a big risk to take.  It is also something we have learned to avoid.  It is long past time for those of us in computing to get with the program.

So, what of the argument that there aren’t enough real experts, or that they cost too much to hire?  Well, what is their real value?  If society wants highly trained and trustworthy people to work in security, then society needs to devote more resources to support the development of curricula and professional standards.  And it needs to provide reasonable salaries to those people, both to encourage and to reward their behavior and expertise.  We’re seeing more of that now than a dozen years ago, but it is still the case that too many managers (and government officials) want security on the cheap, and then act surprised when they get hacked.  I suppose they also buy their Rolex and Breitling watches for $50 from some guy in a parking lot, and then act surprised and violated when the watch stops a week later.  What were they really expecting?