Posts by pmeunier

The Secunia Personal Software Inspector

So you have all the patches from Microsoft applied automatically, Firefox updates itself as well as its extensions... But do you still have vulnerable, outdated software? Last weekend I decided to try the Secunia Personal Software Inspector, which is free for personal use, on my home gaming computer. The Secunia PSI helps find software that falls through the cracks of the auto-update capabilities. I was pleasantly surprised. It has a polished normal interface as well as an informative advanced interface. It ran quickly and found obsolete versions of Adobe Flash installed concurrently with newer ones, and pointed out that Firefox wasn't quite up-to-date as the latest patch hadn't been applied.

When I made the Cassandra system years ago, I was also dreaming of something like this. It is limited to finding vulnerable software by version, not configuration, and giving links to fixes; so it doesn't help harden a system to the degree that some computer security benchmarks can. However, those security benchmarks can decrease the convenience of using a computer, so they require judgment. It can also be time-consuming and moderately complex to figure out what you need to do to improve the benchmark results. By contrast, the PSI is so easy to install and use that it should be considered by anyone capable of installing software updates, or anyone managing a family member's computer. The advanced interface also pointed out that there were still issues with Internet Explorer and with Firefox for which no fixes were available. I may use Opera instead until these issues get fixed. It is unfortunate that it runs only on Windows, though.

The Secunia Personal Software Inspector is not endorsed by Purdue University CERIAS; the above are my personal opinions. I do not own any shares or interests in Secunia.
Edit: fixed the link, thanks Brett!

ReAssure 1.20 Release

A new version of the ReAssure testbed software, 1.20, is now available on the project web site. This version features a rewritten reservation manager that is multi-threaded, object-oriented, better commented, tested with PyLint, and responds to more queries from the web interface. The supporting serial switch communication library (soobml) was rewritten to be thread-safe and object-oriented, and now supports multiple switches. Experiments are also started and stopped with much greater time precision.

One small comment on PyLint: we allowed line lengths of 100. Lines of 80 characters are cramped when trying to provide meaningful error messages and referencing objects and invoking methods that have long, meaningful names.

Our plans for the next release are to support user control of whether experimental PCs are allowed internet access. Currently only a specifically designated experimental PC is allowed access, for containment reasons.

Thanks to Ed Cates (CERIAS staff) for providing system administration services and helping with ReAssure. This work is supported by the National Science Foundation under Grant No. 0420906. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Beware SQL libraries missing prepared statement support

Just because your library or framework allows you to specify an SQL query and the data separately doesn't mean that it sends the data to the database separately from the code.

Imagine this scenario. You read that prepared statements are a good way to avoid SQL injection, because the database is given code and data explicitly and separately. You choose a database that supports prepared statements. The library you use also seems to support them, as you can pass SQL code and data as two separate arguments. However... internally, the library just constructs a string and sends that to the database, without using the database's prepared statement support!

An example is the library "pyPGSQL", which supports PostgreSQL in Python. It has an "execute" command taking a query and parameters as separate arguments. However, internally it constructs a string to send off (after escaping the parameters, so it *shouldn't* be vulnerable):

    self.res = self.conn.conn.query(_qstr % parms)

The point is that escaping on the client side, while most likely OK, isn't as robust as letting the database handle the data separately, by using prepared statements. This particularity of pyPGSQL has been known since 2003 (forum answer). However, it's good to point it out again, as I had started writing a program using pyPGSQL, thinking that my code would be fine; it's possible that others have done the same. pyPGSQL doesn't claim to support prepared statements (the absence of a "prepare" instruction should have been a clue!). Nevertheless, it surprises me that there still exist libraries that don't support prepared statements and don't say so with an unmistakable, large warning. I found surprisingly few Python libraries supporting prepared statements:
  • py-postgresql (unfortunately requires Python 3; was pg_proboscis for Python 2)
  • Cristian Gafton's python-pgsql (not thread-safe)
I am not going to discuss why attempting to escape data in an SQL statement is complex and error-prone, and why prepared statements are a more secure alternative; this has been addressed elsewhere and supported by vulnerability announcements. Have you checked if and how your library or framework really uses prepared statements, or does it just look like it might be using them?
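For contrast, here is what code/data separation looks like with a driver that binds parameters natively. This is a sketch I wrote for illustration using Python's built-in sqlite3 module (not one of the PostgreSQL libraries discussed above); sqlite3 passes the parameters through SQLite's C binding API rather than splicing them into the query string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

# The SQL text and the data are handed over separately; the driver
# binds the values through SQLite's C API instead of building a string.
evil = "x'; DROP TABLE users; --"
conn.execute("INSERT INTO users (name, role) VALUES (?, ?)", (evil, "guest"))

# The hostile string was stored as plain data; the table still exists.
row = conn.execute("SELECT name FROM users WHERE role = ?",
                   ("guest",)).fetchone()
print(row[0])  # the literal string, never executed as SQL
```

The question to ask of any driver is the same one posed above: does `execute(query, params)` actually transmit `params` out of band, or does it quietly reassemble a string?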

P.S.: Note that the work-around proposed in the forum link above does not provide the security of prepared statements properly supported by a library.

P.P.S.: To clarify, I haven't demonstrated an SQL injection vulnerability in pyPGSQL. It's not about the performance penalty either. It's about escaping done by the client library (the basic implementation of bind parameters without using the database's support) being a second-rate security solution to explicitly telling the database "here is the code. Now here's the data" (prepared statements). It's about decreasing code complexity and reducing chances for "misunderstandings" (and configuration, e.g., encoding, discrepancies). It's about assurance and choosing safer technologies and architectures.

P(3)S.: Why did I expect prepared statement support? Because the Python DB API 2.0 specification for the "execute" method suggests that implementations should use prepared statements, at least internally:
            "A reference to the operation will be retained by the
            cursor.  If the same operation object is passed in again,
            then the cursor can optimize its behavior.  This is most
            effective for algorithms where the same operation is used,
            but different parameters are bound to it (many times)."

One more thought is that I should be more positive and congratulate the people working on Python 3 for fixing this long-standing problem. They deserve kudos!

Bad JavaScript, no CVE for you!

I'm flabbergasted to see Adobe release an advisory for a critical issue, using everything (BID & a "Vulnerability identifier") but a CVE identifier. I'm not surprised either that JavaScript support in Acrobat was involved in making its exploitation possible. Once again security folks tell people to "turn off JavaScript". It once seemed plausible to do in browsers, but these days even Purdue University makes it mandatory to enable JavaScript, as the tools we rely on for teaching (e.g., Blackboard) and other official Purdue pages don't work properly without it. Even the help system (!) doesn't work, because the help link that could be a plain HTML link is actually implemented in JavaScript (and they also check the Referer header to mitigate CSRF attacks, so no disabling that either). How long will it be before PDF documents can't be read without enabling JavaScript?

Overloaded Return Values Cause Bugs

In my secure programming class I have denounced the overloading of return values as a bad practice, and recently I came across a new, concrete example of a bug (possibly a vulnerability) that results from this widespread practice.  A search in the NVD and some googling didn’t reveal any mention of similar issues, but I have probably just missed them (I have trouble imagining that nobody has pointed this out before).  In any case it may be worth repeating with this example.

The practice is to return a negative value to indicate an error, whereas a positive value has meaning, e.g., a length.  Imagine a function looking like this:

int bad_idea(char *buf, unsigned int size) {
    int length;

    if (<some_error_condition>) {
        length = -ERROR_CODE;
    } else {
        length = size;  // substitute any operations that could overflow the signed int
    }
    return length;
}

If size exceeds INT_MAX, the assignment silently converts it to a negative value, so this function can return spurious error values.  Under the right conditions this could result in at least a DoS (imagine that this is a security-related function, e.g., for authentication).  I suggest using separate channels to return error codes and meaningful values.  Doing this lowers complexity in the assignment and meaning of that return value by removing the multiplexing.  As a result:

  • There is an increased complexity in the function prototype, but the decreased ambiguity inside the function is beneficial.  When the rest of the code uses unsigned integers, the likelihood of a signed/unsigned integer conversion mistake or an overflow is high.  In the above example, the function is also defective because it is unable to process correctly some allowed inputs (because the input is unsigned and the output needs to be able to return the same range), so in reality there is no choice but to decouple the error codes from the semantic results (length).  This discrepancy is easier to catch when the ambiguity of the code is decreased.

  • It does away with the bad practice of keeping the same variable for two uses:  assigning error codes and negative values to a “length” is jarring;

  • It disambiguates the purpose and meaning of checking the “returned” values (I’m including the ones passed by reference, loosely using the word “returned”).  Is it to check for an error or is it a semantic check that’s part of the business logic?

Integer overflows are a well-known problem;  however in this case they are more a symptom of conflicting requirements.  The incompatible constraints of having to return a negative integer for errors and an unsigned integer otherwise are really to blame;  the type mismatch (“overflow”) is inevitable given those.  My point is that the likelihood of developers getting confused and having bugs in their code, for example not realizing that they have incompatible constraints, is higher when the return value has a dual purpose.
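To make the fix concrete, here is a hypothetical decoupled version (the function name and choice of error code are mine, for illustration): the int return value carries only the status, and the length travels through an out-parameter that keeps the full unsigned range.

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical rewrite of bad_idea: status and length use separate
 * channels.  Returns 0 on success or a negative error code on failure;
 * the length (full unsigned range, no conversion) goes through *out_len. */
int better_idea(const char *buf, size_t size, size_t *out_len) {
    if (buf == NULL)       /* stand-in for <some_error_condition> */
        return -EINVAL;
    *out_len = size;       /* no signed/unsigned mismatch, no overflow */
    return 0;
}
```

The caller first checks the return value for an error, then uses *out_len; the two checks are now unambiguous and cannot be confused with each other.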


Edit (10/13):  It just occurred to me that vsnprintf and snprintf are in trouble, in both BSD and Linux.  They return an int, but take an unsigned integer (size_t) as a size input, AND are supposed to return either the number of printed characters or a negative number if there’s an error.  Even if the size can be represented as a signed integer, they return the number of characters that *would* have been printed if the size was infinite, so specifying a small size isn’t a fix.  So it *might* be possible to make these functions seemingly return errors when none happened.
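The overshoot behavior is easy to demonstrate.  Because snprintf returns the length the output *would* have had, a caller must compare the return value against the buffer size (and check for a negative value) instead of trusting it as a count of bytes written.  A small sketch, using only standard C:

```c
#include <stdio.h>

/* snprintf returns the number of characters the full output would have
 * needed (excluding the terminating NUL), even when the buffer is
 * smaller; only the truncated part is actually written. */
int demo(void) {
    char small[8];
    int n = snprintf(small, sizeof small, "%s", "a longer string");
    /* n is 15 here, not 7: a caller must compare n against sizeof small
     * before using it as the length of what is actually in the buffer. */
    return n;
}
```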

Note(10/14): I’ve been checking the actual code for snprintf.  In FreeBSD, it checks both the passed size and the return value against INT_MAX and appears safe.

Cassandra Lost Its Feed From Secunia

I discovered that our XML feed from Secunia had been disabled, coinciding with a re-organization of their web site.  So, the information from Secunia contained in Cassandra is now out of date.  Hopefully the feed can be re-established soon, but I didn’t get an answer from Secunia over the weekend.

An IPMI House of Cards

We’ve recently added features using IPMI to our ReAssure testbed, for example to support reimaging of our Sun experimental PCs and rebooting into a Live CD, so that researchers can run any OS they want on our testbed.  IPMI stands for the “Intelligent Platform Management Interface”, so we have a dedicated, isolated network on which commands are sent to cards in the experimental PCs.  An OS running in these cards can provide status responses and can perform power management actions, for example a power cycle that will reboot the computer.  This is supposed to be useful if the OS running in the computer locks up, for example.  So, we were hoping that we’d need fewer trips to the facility where the experimental PCs are hosted, have greater reliability and that we’d have more convenient management capabilities.

However, what we got was more headaches.  Some IPMI cards failed entirely;  as we had daisy-chained them, the IPMI cards of the other PCs became inaccessible.  Others simply locked up, requiring a trip to the facility even though the OS on the computer was fine…  One of them sometimes responds to status commands and sometimes not at all, seemingly at random.  The result is that using the IPMI cards actually made ReAssure less reliable and more maintenance-intensive, because the reliability-enhancing component was so unreliable!  The irony.

I don’t know if we’ve just been unlucky, but now I’m keeping an eye out for a way to make this more reliable, or for an alternative that doesn’t introduce even more problems.  Finding one seems rather unlikely, as I’ve discovered that even though the LAN interface is standard, the physical side of those cards isn’t;  AFAIK you can’t take a generic IPMI card and install it, as it needs to be a proprietary solution from the hardware vendor (e.g., you need a Tyan card for a Tyan motherboard, a Sun IPMI card for a Sun computer, etc…).  So if the IPMI solution provided by your hardware vendor has flaws, you’re stuck with it;  it’s not like a NIC that you can replace with one from any vendor.  I don’t know of any way to replace the software on the IPMI cards either, in a manner similar to how you can replace the bad firmware of consumer routers with better open source software.

I suppose that the lessons from this story are that:

  • You can’t make something more reliable by adding low-quality components in a “backup” role, because then you need to maintain them as well and make sure that they’ll work when they’re needed;
  • It’s not because something is on a separate card that it is more reliable;
  • IPMI is a weak standard—only the exposed interfaces are standardized, for example enabling the development of OpenIPMI (from the managed OS side) and IPMItools (LAN interface), but the middle of the “sandwich” isn’t—the implementations and parts are proprietary, incompatible between vendors, inflexible and fragile;
  • Proprietary, non-standard solutions prevent choosing better components.

ReAssure 1.10 Released

This new release of our testbed software provides users with full control of experimental PCs instead of being limited to running VMware images:

  • Experimental PCs can be rebooted at will

  • There is a LiveCD in the experimental PCs, which will take a root password that you specify before rebooting the PC

  • Users are now able to replace the operating system installed by default on experimental PCs, and gain full control

  • The host operating system for VMware is restored after an experiment.

This facilitates experiments with other virtualization technologies (e.g., Xen), or with operating systems or software that don’t interact in the desired manner with VMware.

When compared with other testbeds such as Deter, the differences are that:

  • You should be able to run anything on ReAssure that is compatible with the hardware; 

  • You may try to attack the ReAssure testbed itself; 

  • Malicious software should have great difficulty escaping the testbed (if not using exp01 and exp02, the computers set aside for updating images); 

  • Your experiments using VMware images are portable; 

  • You can take VMware snapshots; 

As before, you can still:

  • Use complex network topologies for your experiments, with high bandwidth available on each link (Gbit Ethernet)

  • Extend reservations or stop experiments at will;

  • Use ISO images and VMware appliances; 

  • Share image files

  • Cooperate remotely with other people, and give them access to the PCs in one of your experiments

  • Update your images from two of our experimental PCs that allow connections to the outside (exp01 and exp02)

Under the hood changes:

  • The switch management now uses a UNIX domain server instead of a script started by cron.  This increases the responsiveness of the system, allows checking the state of the switch directly in real time, and allows self-test results to be displayed on the web interface (for administrators).

  • The upload mechanism now uses a UNIX domain server instead of a script started by cron.  This increases the responsiveness of the system and allows self-test results to be displayed on the web interface (for administrators).

  • The power state of the experimental PCs is controlled via IPMI (Intelligent Platform Management Interface) on an isolated network

Visit the project home page, the testbed management interface itself, or download the open source software.  The ReAssure testbed was developed using an MRI grant from NSF (No. 0420906). 

RuxSeed v. 1.0 Released:  A Ruby Open Source XCCDF Loader

I am happy to announce that ruxseed v. 1.0 is now available on SourceForge. Ruxseed processes the XCCDF documents used for SCAP (NIST Security Content Automation Protocol) checklists. It performs benchmark resolution, i.e., the 6 “Loading” steps. Given an XCCDF document, it returns a resolved benchmark in the form of a REXML tree. The project also contains a number of tests that might be useful to someone developing an XCCDF product.

This release enables work on more complex XCCDF processing, such as tailoring and compliance checking. If you would be interested in that functionality, and are willing to test or contribute code or test cases, please contact me.

Finally, Somebody “Gets” Secure Web Browsing and Does It The Right Way

I’ve ranted before about how insecure web browsers are, because they trust themselves, their libraries and user-added plug-ins too much.  At a very high level, they have responsibilities that can be likened to those of operating systems, because they run potentially dangerous code from different sources (users vs. web sites) and need to do it separately from each other and from root (the user account running the browser), i.e., securely.  The web browsers of today look as ridiculous to me as the thought of using Windows 95 to run enterprise servers.  Run an insecure plugin, get owned (e.g., Quicktime).  Enable JavaScript, VBScript, ActiveX or Java, get owned.  Get owned because the web browser depends on libraries with vulnerabilities more than six months old (a year old, depending on how you count), and the whole thing collapses like a house of cards.  As long as they are internally so open and naive, web browsers will keep having shameful security records and be unworthy of our trust. 

IE 7’s protected mode deserves to be acknowledged as a security effort, but CanSecWest proved that it didn’t isolate Flash well enough.  It’s not clear whether a configuration issue was involved, but I don’t care; most people won’t configure it correctly either.  IE 7’s protected mode is a collection of good measures, such as applying least privilege and separation of privilege, and intercepting system API calls, but it is difficult to verify and explain how it all fits together, and to be sure that there are no gaps.  More importantly, it relies heavily on the slippery slope of asking the user to appropriately and correctly grant higher permissions.  We know where that leads: most everything gets granted and the security is defeated.

Someone not only thought of a proper security architecture for web browsers but did it (see “Secure web browsing with the OP web browser” by Chris Grier, Shuo Tang, and Samuel T. King).  There’s a browser kernel, and everything else is well compartmentalized and isolated.  Similarly to the best operating system architectures for security, the kernel is very small (1221 lines of code), has limited functionality, and doesn’t run plug-ins inside kernel space (I’d love to have no drivers in my OS kernel as well…).  It’s not clear if it’s a minimal or “true” micro-kernel—the authors steer clear of that discussion.  Even malicious hosted ads (e.g., Yahoo! has had repeated experiences with this) are quarantined with a “provider domain policy”.  This is an interesting read, and very encouraging.  I’d love to play with it, but I can’t find a download.