"A reference to the operation will be retained by the
cursor. If the same operation object is passed in again,
then the cursor can optimize its behavior. This is most
effective for algorithms where the same operation is used,
but different parameters are bound to it (many times)."
In my secure programming class I have denounced the overloading of return values as a bad practice, and recently I discovered a new, concrete example of a bug (possibly a vulnerability) that results from this widespread practice. A search in the NVD and some googling didn’t reveal any mention of similar issues anywhere, but I have probably just missed them (I have trouble imagining that nobody else has pointed this out before). In any case, it may be worth repeating with this example.
The practice is to return a negative value to indicate an error, whereas a positive value has meaning, e.g., a length. Imagine a function looking like this:
int bad_idea(char *buf, unsigned int size) {
    int length;
    if (<some_error_condition>) {
        length = -ERROR_CODE;
    } else {
        /* Implicit unsigned-to-signed conversion: any size above
           INT_MAX makes length negative, indistinguishable from an
           error code. Substitute any operation that could overflow
           the signed int. */
        length = size;
    }
    return length;
}
This function can return bogus error values: any size greater than INT_MAX is silently converted to a negative int, which the caller will interpret as an error that never happened. Under the right conditions this could result in at least a DoS (imagine that this is a security-related function, e.g., for authentication). I suggest using separate channels to return error codes and meaningful values (a sketch follows the list below). Doing this lowers complexity in the assignment and meaning of the return value by removing the multiplexing. As a result:
The function prototype becomes more complex, but the decreased ambiguity inside the function is worth it. When the rest of the code uses unsigned integers, the likelihood of a signed/unsigned conversion mistake or an overflow is high. In the above example, the function is also defective because it cannot correctly process all allowed inputs: the input ranges over unsigned int, but the signed return type can only represent values up to INT_MAX. So in reality there is no choice but to decouple the error codes from the semantic results (the length). This discrepancy is easier to catch when the ambiguity of the code is decreased;
It does away with the bad practice of keeping the same variable for two uses: assigning negative error codes to a “length” is jarring;
It disambiguates the purpose and meaning of checking the “returned” values (I’m including the ones passed by reference, loosely using the word “returned”). Is it to check for an error or is it a semantic check that’s part of the business logic?
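As promised above, here is a minimal sketch of what that separation could look like; the names and the zero-on-success convention are mine, not a prescribed interface:

/* Sketch: the return value carries only success or failure;
   the length travels through a dedicated output parameter. */
int better_idea(char *buf, size_t size, size_t *length_out) {
    if (<some_error_condition>) {
        return -ERROR_CODE;    /* error channel: return value only */
    }
    *length_out = size;        /* data channel: can't be mistaken for an error */
    return 0;                  /* success */
}

The caller checks the return value for errors and reads *length_out only on success; no legitimate length can masquerade as an error code, and the full unsigned range remains representable.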
Integer overflows are a well-known problem; however in this case they are more a symptom of conflicting requirements. The incompatible constraints of having to return a negative integer for errors and an unsigned integer otherwise are really to blame; the type mismatch (“overflow”) is inevitable given those. My point is that the likelihood of developers getting confused and having bugs in their code, for example not realizing that they have incompatible constraints, is higher when the return value has a dual purpose.
Edit (10/13): It just occurred to me that vsnprintf and snprintf are in trouble, in both BSD and Linux. They return an int, but take an unsigned integer (size_t) as the size input, AND are supposed to return either the number of printed characters or a negative number if there’s an error. Even if the size can be represented as a signed integer, they return the number of characters that *would* have been printed if the size were unlimited, so specifying a small size isn’t a fix. So it *might* be possible to make these functions seemingly return errors when none happened.
Note(10/14): I’ve been checking the actual code for snprintf. In FreeBSD, it checks both the passed size and the return value against INT_MAX and appears safe.
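To make the hazard concrete, here is a sketch of the checks a caller must perform to disentangle the cases packed into that one return value (C99 semantics assumed; the function name and format string are made up):

#include <stdio.h>

/* Sketch: unpacking snprintf's overloaded return value. */
int format_checked(char *buf, size_t size) {
    int n = snprintf(buf, size, "%d bottles of beer", 99);
    if (n < 0)
        return -1;        /* snprintf reported an encoding error */
    if ((size_t)n >= size)
        return -1;        /* truncated: n is the length that *would*
                             have been written with unlimited space */
    return n;             /* success: exactly n characters written */
}

POSIX additionally requires snprintf to fail with EOVERFLOW when the value to return would exceed INT_MAX, but portable code can’t count on every implementation honoring that.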
I discovered that our XML feed from Secunia had been disabled, coinciding with a re-organization of their web site. So, the information from Secunia contained in Cassandra is now out of date. Hopefully it can be re-established soon, but I didn’t get an answer from Secunia over the weekend.
We’ve recently added features using IPMI to our ReAssure testbed, for example to support reimaging our Sun experimental PCs and rebooting into a Live CD, so that researchers can run any OS they want on our testbed. IPMI stands for “Intelligent Platform Management Interface”: we have a dedicated, isolated network on which commands are sent to management cards in the experimental PCs. An OS running on these cards can report status and perform power management actions, for example a power cycle that reboots the computer. This is supposed to be useful if the OS running on the computer locks up, for example. So we were hoping for fewer trips to the facility where the experimental PCs are hosted, greater reliability, and more convenient management capabilities.
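To give an idea of what these out-of-band commands look like, here is a typical exchange using the open-source ipmitool client (the address and credentials are made up, and this isn’t necessarily what ReAssure runs):

# Ask the management card for the power state of an experimental PC
ipmitool -I lan -H 192.0.2.10 -U admin -P secret chassis power status

# Power-cycle the machine even if its host OS is hung
ipmitool -I lan -H 192.0.2.10 -U admin -P secret chassis power cycle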
However, what we got was more headaches. Some IPMI cards failed entirely; as we had daisy-chained them, the IPMI cards of the other PCs became inaccessible. Others simply locked up, requiring a trip to the facility even though the OS on the computer was fine… One of them sometimes responds to status commands and sometimes not at all, seemingly at random. The result is that the IPMI cards actually made ReAssure less reliable and more maintenance-intensive, because the reliability-enhancing component was itself so unreliable! The irony. I don’t know if we’ve just been unlucky, but now I’m keeping an eye out for a way to make this more reliable, or for an alternative, hoping that it doesn’t introduce even more problems. That is rather unlikely, as I’ve discovered that even though the LAN interface is standard, the physical side of those cards isn’t; AFAIK you can’t take a generic IPMI card and install it, it needs to be a proprietary solution from the hardware vendor (e.g., you need a Tyan card for a Tyan motherboard, a Sun IPMI card for a Sun computer, etc…). So if the IPMI solution provided by your hardware vendor has flaws, you’re stuck with it; it’s not like a NIC that you can replace with one from any vendor. I don’t know of any way to replace the software on the IPMI cards either, in a manner similar to how you can replace the bad firmware of consumer routers with better open source software. I suppose that the lessons from this story are that:
A component added to increase reliability can end up decreasing it, if the component itself is unreliable;
Proprietary, vendor-locked management hardware leaves you without recourse when it is flawed, because you can replace neither the card nor its software.
This new release of our testbed software provides users with full control of experimental PCs instead of being limited to running VMware images:
Experimental PCs can be rebooted at will
There is a LiveCD in the experimental PCs, which will take a root password that you specify before rebooting the PC
Users are now able to replace the operating system installed by default on experimental PCs, and gain full control
The host operating system for VMware is restored after an experiment.
This facilitates experiments with other virtualization technologies (e.g., Xen), or with operating systems or software that don’t interact in the desired manner with VMware.
When compared with other testbeds such as Deter, the differences are that:
You should be able to run anything on ReAssure that is compatible with the hardware;
You may try to attack the ReAssure testbed itself;
Malicious software should have great difficulty escaping the testbed (if not using exp01 and exp02, the computers set aside for updating images);
Your experiments using VMware images are portable;
You can take VMware snapshots;
As before, you can still:
Use complex network topologies for your experiments, with high bandwidth on each link (Gbit Ethernet);
Extend reservations or stop experiments at will;
Use ISO images and VMware appliances;
Share image files;
Cooperate remotely with other people, and give them access to the PCs in one of your experiments;
Update your images from two of our experimental PCs that allow connections to the outside (exp01 and exp02).
Under-the-hood changes:
The switch management now uses a daemon listening on a UNIX domain socket instead of a script started by cron. This makes the system more responsive, allows checking the state of the switch directly in real time, and lets self-test results be displayed on the web interface (for administrators).
The upload mechanism likewise now uses a UNIX domain socket daemon instead of a cron-started script, with the same responsiveness and self-test reporting benefits (a minimal sketch of this daemon pattern follows the list).
The power state of the experimental PCs is controlled via IPMI (Intelligent Platform Management Interface) over an isolated network.
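For the curious, here is a minimal sketch of the daemon pattern mentioned above; the socket path and one-word protocol are invented for illustration, error handling is omitted, and this is not the actual ReAssure code:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void) {
    struct sockaddr_un addr;
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/var/run/switchd.sock", sizeof addr.sun_path - 1);
    unlink(addr.sun_path);                  /* remove any stale socket */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 5);

    for (;;) {                              /* answer requests as they arrive */
        char cmd[128];
        int client = accept(srv, NULL, NULL);
        ssize_t n = read(client, cmd, sizeof cmd - 1);
        if (n > 0) {
            cmd[n] = '\0';
            if (strcmp(cmd, "status") == 0) /* real-time state query */
                write(client, "up\n", 3);
        }
        close(client);
    }
}

Unlike a cron-started script, the daemon answers the moment a request arrives, which is where the responsiveness gain comes from.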
Visit the project home page, the testbed management interface itself, or download the open source software. The ReAssure testbed was developed using an MRI grant from NSF (No. 0420906).
I am happy to announce that ruxseed v. 1.0 is now available on SourceForge. Ruxseed processes XCCDF documents used for SCAP (NIST Security Content Automation Protocol) checklists. It performs benchmark resolution, i.e., the 6 “Loading” steps. Given an XCCDF document, it returns a resolved benchmark in the form of a REXML tree. The project also contains a number of tests that might be useful to someone developing an XCCDF product.
This release enables work on more complex XCCDF processing, such as tailoring and compliance checking. If you would be interested in that functionality, and are willing to test or contribute code or test cases, please contact me.
I’ve ranted before about how insecure web browsers are, because they trust themselves, their libraries, and user-added plug-ins too much. At a very high level, their responsibilities can be likened to those of operating systems: they run potentially dangerous code from different sources (users vs web sites) and need to do so separately from each other and from “root” (the user account running the browser), i.e., securely. The web browsers of today look as ridiculous to me as the thought of using Windows 95 to run enterprise servers. Run an insecure plugin, get owned (e.g., QuickTime). Enable JavaScript, VBScript, ActiveX, or Java, get owned. Get owned because the web browser depends on libraries with vulnerabilities more than six months old (a year old, depending on how you count), and the whole thing collapses like a house of cards. As long as they are internally so open and naive, web browsers will keep having shameful security records and be unworthy of our trust.
IE 7’s protected mode needs to be acknowledged as a security effort, but CanSecWest proved that it didn’t isolate Flash well enough. It’s not clear if a configuration issue was involved, but I don’t care; most people won’t configure it correctly either. IE 7’s protected mode is a collection of good measures, such as applying least privilege and separation of privilege, and intercepting system API calls, but it is difficult to verify and explain how it all fits together and to be sure that there are no gaps. More importantly, it relies heavily on the slippery slope of asking the user to appropriately and correctly grant higher permissions. We know where that leads: almost everything gets granted, and the security is defeated.
Someone not only thought of a proper security architecture for web browsers but actually built it (see “Secure web browsing with the OP web browser” by Chris Grier, Shuo Tang, and Samuel T. King). There’s a browser kernel, and everything else is well compartmentalized and isolated. As in the best operating system architectures for security, the kernel is very small (1221 lines of code), has limited functionality, and doesn’t run plug-ins inside kernel space (I’d love to have no drivers in my OS kernel as well…). It’s not clear whether it’s a minimal or “true” micro-kernel; the authors steer clear of that discussion. Even malicious hosted ads (e.g., Yahoo! has had repeated experiences with these) are quarantined with a “provider domain policy”. This is an interesting read, and very encouraging. I’d love to play with it, but I can’t find a download.