CERIAS - Center for Education and Research in Information Assurance and Security

Purdue University - Discovery Park

Disloyal Software


Disloyal software surrounds us.  This is software running on devices or computers you own, yet serving interests other than yours.  Examples are DVD firmware that insists on making you watch the silly FBI warning or prevents you from skipping “splashes” or previews, or pop-up and pop-under advertisement browser windows.  When people discuss malware or categories of software, there is usually little consideration for disloyal software (I found this interesting discussion of Trusted Computing).  Some of it is perfectly legal; some protects legal rights.  At the other extreme, rootkits can subvert entire computers against their owners.  The question is, when can you trust possibly disloyal software, and when does it become malware, such as the Sony CD copy prevention rootkit?

Who’s in Control
Loyalty is a question of perspective in ownership vs. control.  An employer providing laptops and computers to employees doesn’t want them to install things that could be liabilities or compromise the computer.  The employee is then using software that is restrictive, but justifiably so.  From the perspective of someone privately owning a computer, a lesser likelihood of disloyalty is an advantage of free software (as in the FSF free software definition).  The developers won’t benefit from implementing restrictions or from developing software that acts counter to the interests of the user.  If one does, someone somewhere will likely remove that restriction for the benefit of all.  Of course, this doesn’t address the possibility of cleverly hidden capabilities (such as backdoors) or compromised source code repositories.

This leads to questions of control of many other devices, such as game consoles and media players like the iPod.  Why does my iPod, using Apple-provided software, not allow me to copy music files to another computer?  It shouldn’t matter which computer, as long as I’m not violating copyrights;  possibly it’s the same computer that ripped the CDs, because the hard drive died or was upgraded, or it’s a new computer I just bought.  By using the iPod as a storage device instead of a music player, such copies can be made with Apple software, but music files in the “play” section can’t be copied out.  This restriction is utterly silly, as it accomplishes nothing but annoying owners, and I’m glad that Ubuntu Linux allows direct access to the music files.

DMCA
Some firmware implements copyright protection measures, and modifying it to remove those protections is made illegal by the DMCA.  As modifying consoles (“modding”) is often done for that purpose, the act of “modding” has become suspicious in itself.  Someone modding a DVD player simply to be able to bypass annoying splash screens, without affecting copy protection mechanisms, would have a hard time defending herself.  This has a chilling effect on the recycling of perfectly good hardware with better software.  For example, I think Microsoft would still be selling large quantities of the original XBox if the compiled XBMC media player software weren’t also illegal for most people, due to licensing issues with the Microsoft compiler.  The DMCA helps law enforcement and copyright holders, but has negative effects as well (see Wikipedia).  Disloyal devices are distasteful, and the current law heavily favors copyright owners.  Of course, it’s not clear-cut, especially for devices that have responsibilities towards multiple entities, such as cell phones.  I recommend watching Ron Buskey’s security seminar about cell phones.

Web Me Up
If you think you’re using only free software, you’re wrong every time you use the web and allow scripting.  Potentially the ultimate disloyal software is what web sites push to your browser.  Active content (JavaScript, Flash, etc.) on web pages can glue you in place and restrict what you can do and how, or deploy adversarial behaviors (e.g., pop-unders or browser attacks).  Every time you visit a web page nowadays, you download and run software that is not free:

* It is often impractical to access the content of the page, or even basic form functionality, without running the software, so you do not have the freedom to run or not run it as a practical choice (in theory you do have a choice, but the penalties for choosing the alternative can be significant).

* It is difficult to study, given how some code can load other active content from other sites in a chain-like fashion, creating a large spaghetti that can be changed at any time.

* There is no point in redistributing copies, as the copies running from the web sites you need to use won’t change.

* Releasing your “improvements” to the public would almost certainly violate copyrights. Even if you made useful improvements, the web site owners could change how their site works regularly, thus foiling your efforts.

Most of the above is true even if the scripts you are made to run in a browser were free software from the point of view of the web developers;  the delivery method taints them.
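The chain-loading problem from the second point can be made concrete with a small sketch. This is a hypothetical illustration, not code from any real site: the “remote” sources are plain strings standing in for third-party servers, and `loadScript` stands in for the `<script src=…>` insertion a browser would perform. The point is that each hop in the chain is decided by a different party, and each runs with the full authority of the page.

```javascript
// Hypothetical sketch of chained dynamic script loading ("spaghetti").
// In a browser, loadScript would insert a <script src=...> tag; here the
// remote servers are simulated with strings so the sketch runs in Node.

const remoteSources = {
  // The page's first-party script pulls in an ad script...
  "https://ads.example/a.js":
    'loadScript("https://tracker.example/b.js"); loaded.push("a");',
  // ...which in turn pulls in a tracker from yet another party,
  // and either source could serve different code tomorrow.
  "https://tracker.example/b.js": 'loaded.push("b");',
};

const loaded = [];  // records which scripts actually ran, in order

function loadScript(url) {
  // The fetched code runs with the same authority as the page itself.
  const code = remoteSources[url];
  eval(code);
}

loadScript("https://ads.example/a.js");
console.log(loaded);  // b.js runs mid-way through a.js: [ 'b', 'a' ]
```

Auditing what you are “agreeing” to run means auditing every server in that chain, at the moment of every visit, which is what makes such software impractical to study.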

Give me some AIR
The Adobe Integrated Runtime (“AIR”) is interesting because it has the potential to free web technologies such as HTML, Flash and JavaScript, by allowing them to be used in a free, open source way.  CERIAS webmaster Ed Finkler developed the “Spaz” application with it, and licensed it under the New BSD license.  I say “potential” only because AIR can be used to dynamically load software as well, with all the problems of web scripting.  It’s a question of control and trust.  I can’t trust possibly malicious code that I am forced to run on my machine just to access a web page I happen to visit.  However, I may trust static code that is free software not to be disloyal by design.  If it is disloyal, it is possible to fix it and redistribute the improved code.  AIR could deliver that, as Ed demonstrated.

The problem with AIR is that I will have to trust a web developer with the security of my desktop.  AIR has two sandboxes: the Classic Sandbox, which is like a web browser, and the Application Sandbox, which is compared to server-side applications except that the code runs locally (see the AIR security FAQ).  The Application Sandbox allows local file operations that are typically forbidden to web browsers, but omits some of the more dangerous web browser functionality.  While the technological security model makes sense as a foundation, its actual security is entirely up to whoever writes the code that runs in the Application Sandbox.  People who have no qualms about pushing code to my browser and forcing me to turn on scripting, thus making me vulnerable to attacks from sites I will visit subsequently, to malicious ads, or to code injected into their site, can’t be trusted to care if my desktop is compromised through their code, or to be competent enough to prevent it.
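The trust boundary between the two sandboxes can be sketched in a few lines. This is a deliberately simplified illustration, not AIR’s actual API (the names `bridge`, `readFile`, and the file paths are invented for the example): privileged code exposes a helper to less-trusted script, and if that helper does not validate its input, the untrusted side effectively inherits the privileged capability.

```javascript
// Hypothetical sketch of the sandbox-bridge trust problem.
// None of these names come from AIR; they only illustrate the pattern
// of privileged code exposing a function to untrusted code.

// Stand-in for local files only privileged code should reach.
const fileSystem = { "/home/user/notes.txt": "private notes" };

// Privileged side (think: Application Sandbox) exposes this helper.
const bridge = {
  readFile(path) {
    // No validation: whatever path the caller names is read and returned.
    return fileSystem[path];
  },
};

// Untrusted side (think: Classic Sandbox, or script injected into it)
// only ever sees `bridge`, yet reaches local data right through it.
const leaked = bridge.readFile("/home/user/notes.txt");
console.log(leaked); // prints "private notes"
```

Whether the real Application Sandbox code validates what crosses the bridge is exactly the part left to the developer, which is why the foundation being sound is not enough.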

Even the security FAQ for AIR downplays significant risks.  For example, it says “The damage potential from an injection attack in a given website is directly proportional to the value of the website itself. As such, a simple website such as an unauthenticated chat or crossword site does not have to worry much about injection attacks as much as any damage would be annoying at most.”  This completely ignores scripting-based attacks against the browsers themselves, such as those performed by the well-known malware kits MPack and IcePack.  In addition, there will probably be both implementation and design vulnerabilities found in AIR itself.

Either way, AIR is a development to watch.

P.S. (10/16): What if AIR attracts the kind of people that are responsible for flooding the National Vulnerability Database with PHP server application vulnerabilities?  Server applications are notoriously difficult to write securely.  Code that they would write for the application sandbox could be just as buggy, except that instead of a few compromised servers, there could be a large quantity of compromised personal computers…

Comments

Posted by stacy
on Tuesday, October 16, 2007 at 03:52 AM

I agree completely. Of course, your message would seem more credible if the web site it was posted on didn’t use JavaScript (at least the basic functionality works with scripting disabled).

Posted by Pascal Meunier
on Tuesday, October 16, 2007 at 04:21 AM

Thanks Stacy.  I am comfortable with this use of JavaScript because you may visit and access everything without scripting, as far as I know.

Posted by Ed Finkler
on Tuesday, October 16, 2007 at 04:40 AM

@stacy: One of the things we encourage at CERIAS is discourse and freedom of thought. We consider differing points of view from within and without when deciding what technologies we implement. That doesn’t mean we’ll always do what a particular person wants.

Regarding our use of Javascript, I don’t believe any aspect of the site intended for general use would be unusable with JS disabled entirely—if you find one, please let us know!

@pascal: I have trouble understanding the distinction you seem to be making between a desktop application written for AIR vs a desktop application written using any other technology.  Any program written in a number of different languages and technologies could suffer the same problems that a Javascript-based app would.  I disagree with the notion that web scripting is, as a technology, more dangerous than any other dynamic scripting technology.

If you have *ever* installed a program to which you did not have source code access, or you have installed an open-source app before doing a security audit on the source, you’re making a decision to trust the developer. Are you *sure* they’re not doing anything dangerous or nefarious?

So really, it’s a matter of who you decide to trust. As a web developer myself, I’m not a big fan of what seems like a general mistrust of web devs on your part, but I can also understand the concern—because of the relatively shallow learning curve, someone without a good understanding of security issues can create a functional web application. It is, I believe, inevitable, that this shallow learning curve will move into desktop app development as well.  I also believe that the connection between desktop apps and web-based apps will continue to grow stronger.  I generally believe this is a Good Thing, but it also makes the issue of “who do I trust” a more serious concern.

I don’t know that I have a solution for this, but I do think that establishing some criteria to evaluate the “trustworthiness” of an application and a development team would be a good start. One thing you don’t mention that’s along those lines is that AIR requires a code cert to be distributed with their apps, and throws a fairly loud warning if the cert is not from a known, approved cert vendor.  While it’s by no means a be-all end-all, it does provide more of a “trust warning” than I see *any* time I install an application on Windows or OS X—neither of those require any kind of code signature, AFAIK.

Posted by Pascal Meunier
on Tuesday, October 16, 2007 at 07:00 AM

Ed, you say you have trouble understanding the distinction I’m making, but you mostly do. 

First, please understand that I did not mean to insult those web developers who do care about security.  However, I think that I have very good reasons to distrust the average web developer, security-wise, based on past records, on their typical security posture and on their lack of aversion to risk.  For example, PHP-related issues alone fill 25% of the National Vulnerability Database as of 10/15/07.  Web developers tend to be early adopters, which means they like new, shiny but unproven (security-wise) technology, which conflicts with the security desires of even moderately risk-averse people.  Many web developers also have a tendency to force others to take on the same risks that they are willing to take;  this is the issue of forcing client-side scripting technologies on visitors by otherwise denying the use of a resource, regardless of how necessary those technologies are.  Web developers who require client-side scripting, and so disregard security for shininess, are common in banking and investment web sites, for example, where it is inappropriate IMO.  Speaking strictly about web browsers, unrelated to AIR for the moment, I think I no longer need to emphasize how web client-side scripting is an attack vector: the facts are plain to see now but ignored by most web developers, and the warnings have been ignored since the beginning.

Do the certificates you mention evaluate security skills or audit security stances, and the assurance provided by development processes?  Will they enable the average person to make enlightened choices, or will it just be another dialog to click through?

Now let’s see what AIR does.  You take the task of writing secure servers, which is notoriously difficult, and you entrust it to web developers or new developers who will write what is functionally server software running on everyone’s desktops (I am speaking about code running in the Application Sandbox).  Understand that I am not advocating elitism, but pointing out that a very difficult task is potentially given to novices, or to people who may not be as concerned about security as I am.  In effect, you will take that 25% of the NVD entries, and you will move them from a few servers that are probably watched and defended, to a multitude of desktops in the hands of users.  Then, you have an attack vector, client-side scripting running in the Classic Sandbox, that will be able to mercilessly attack code running in the Application Sandbox without any intervening firewall or other security measures that servers typically have.  This could be compounded by a multitude of versions and software variants of both remote and local code;  if I worked at MITRE on the CVE effort, or for the NVD, the idea of tracking these would give me nightmares.  It has disaster written all over it.

You may think that I sound like a mainframe operator whining about desktop deployments.  I have no doubt that AIR will be successful, though, because ignoring security, it sounds great, and besides that’s the kind of world we live in.  That shouldn’t make any of my points less true.  I wish that every web developer reading this would make my day and surprise me with fully networked but reasonably secure AIR software using the Application Sandbox.  A framework emphasizing security for creating code in the Application Sandbox, along with code checking tools or even formal verification software sounds like a great idea to me, and something that would calm my fears.

To top it all off, let me make the general point that technologies that may enable the same functionality can nevertheless vary widely in how tedious they are to use safely, how easy it is to make security mistakes, how difficult it may be to find security mistakes, or how difficult it may be to get assurance that the application is reasonably secure.  This doesn’t mean that products developed with different technologies may not be equally secure in theory.  It means that better software development processes, better skills, and more time and expense may be required to achieve parity.  It’s probably safe to say that, security-wise, using JavaScript will always be a handicap compared to, for example, using SPARK and its associated development methodology.

Posted by stacy
on Tuesday, October 16, 2007 at 07:07 AM

@Ed: I guess I should have been more clear. When I said “the basic functionality works with scripting disabled” I was acknowledging that, for my purposes (I haven’t strayed far from the blogs), the CERIAS site works without JavaScript enabled. Actually, this site adjusts to my personal web surfing habits quite well; I have my monitor pivoted to portrait mode. Try it some time; it is amazing how poorly some sites render in that orientation.
Your comments on trust are quite correct. At the risk of starting a religious war, one of the reasons I use FreeBSD is that I like their release process; it gives me a reason to trust them and their code (and no, I have not audited the code). But in the world of web apps, the issue goes far beyond trusting the developers. I have to trust the entire ‘system’, and that system is made up of computers that are outside of your and my control. Unfortunately there isn’t much (besides RFC 3514 *grin*) that helps distinguish the malicious components of the system from the benign ones.

Posted by Pascal Meunier
on Tuesday, October 16, 2007 at 07:21 AM

Oh, I almost forgot.  Another issue is the remarkable bugginess and unreliability of code produced by Adobe, be it Acrobat or the Flash plugin.  For example, this year I bought the full version of the Acrobat suite for MacOS X.  The purchasing and installation process was painful.  Then, after hours of trying to do a simple task that takes one minute in Ubuntu Linux with free tools, I gave up on it and threw it away because it was in effect unusable. 

Please forgive me for having doubts about the quality of the AIR environment itself, if not now, then in the future.  Perhaps different people are involved, but enterprise culture has a way of interfering with talent.

Posted by Ed Finkler
on Wednesday, October 17, 2007 at 04:13 AM

@stacy: Cool, I just wanted to make sure that you didn’t think there was functionality beyond the “basic” kind that you were missing.  I personally am interested in what the kids call “Rich Internet Applications” these days, mainly because the ability to design more effective interfaces is very appealing, but I also have to be conscious that a lot of visitors to the CERIAS site won’t have scripting enabled.  That’s why we’ve generally avoided it here, and will need to make sure any apps we create at least degrade gracefully.

Posted by Ed Finkler
on Wednesday, October 17, 2007 at 04:50 AM

@Pascal:

“Ed, you say you have trouble understanding the distinction I’m making, but you mostly do.”

Well, yes, I really should have said “I disagree with the distinction you’re making.”

“Do the certificates you mention evaluate security skills or audit security stances, and the assurance provided by development processes? Will they enable the average person to make enlightened choices, or will it just be another dialog to click through?”

Likely the latter.  However, I did *not* make a case that code signing is all that should be done.  I pointed it out because, although you paint AIR with the same broad brush strokes that you do other “disloyal” software, it actually is putting more effort into accountability than most ways of deploying desktop apps.

I can’t disagree that many—maybe most—web developers are not as security-conscious as they should be.  However, I think that is changing in general, and specifically in the communities that I am most active in (PHP especially).

There are a lot of assumptions made in this post.  I’d encourage you to communicate with the communities and companies of which you are skeptical—not simply to confirm your suspicions, but to engage, encourage, and identify actions to be taken that address the problems you bring up.  I found both the AIR engineers and Adobe executives I spoke with at MAX *very* aware of and receptive to security concerns, and anxious to hear feedback and new ideas.

Posted by Pascal Meunier
on Wednesday, October 17, 2007 at 05:39 AM

Ed, let me try again.  It’s a given that the Classic Sandbox will be running hostile, malicious code locally, attacking both AIR itself and the code running in the Application Sandbox.  You agree that there is a basis for, in “broad brush strokes”, not trusting the people who wrote the code running in the Application Sandbox for security.  I don’t trust AIR either, given the track record of web browsers, and Adobe’s. 
If even you do not see why that gives me shivers compared to, say, an IM chat application written in “C” with chat logging turned on, I give up.  Perhaps I can’t express myself correctly, convincingly, or I’m just plain wrong.  Whichever it is, I don’t see the point in bothering these people if I can’t convince at least you.
