The Center for Education and Research in Information Assurance and Security (CERIAS)

Overloaded Return Values Cause Bugs

In my secure programming class I have denounced the overloading of return values as a bad practice, and recently I discovered a new, concrete example of a bug (possibly a vulnerability) that results from this widespread practice.  A search in the NVD and some googling didn’t reveal any mention of similar issues anywhere, but I have probably just missed them (I have trouble imagining that nobody else has pointed this out before).  In any case it may be worth repeating with this example.

The practice is to return a negative value to indicate an error, whereas a positive value has meaning, e.g., a length.  Imagine a function looking like this:

int bad_idea(char *buf, unsigned int size) {
    int length;

    if (<some_error_condition>) {
        length = -ERROR_CODE;
    } else {
        length = size;  // substitute any operations that could overflow the signed int
    }
    return length;
}
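
To see the failure mode concretely, here is a minimal sketch (the function name is mine) of what happens on the success path when the size exceeds INT_MAX.  Strictly speaking the out-of-range conversion is implementation-defined, but on common two’s-complement platforms it wraps around:

```c
#include <limits.h>

/* Same shape as the success branch of the function above: the unsigned
   size is funneled into a signed return value.  When size > INT_MAX the
   conversion is implementation-defined; on common two's-complement
   platforms it wraps to a negative number, which a caller cannot
   distinguish from an error code. */
int convert_size(unsigned int size) {
    int length = (int)size;
    return length;
}
```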

This function can return spurious error values: if size exceeds INT_MAX, the assignment overflows the signed int, and the resulting negative value is indistinguishable from a legitimate error code.  Under the right conditions this could result in at least a DoS (imagine that this is a security-related function, e.g., for authentication).  I suggest using separate channels to return error codes and meaningful values.  Doing this lowers complexity in the assignment and meaning of that return value by removing the multiplexing.  As a result:

  • There is increased complexity in the function prototype, but the decreased ambiguity inside the function is beneficial.  When the rest of the code uses unsigned integers, the likelihood of a signed/unsigned conversion mistake or an overflow is high.  In the above example, the function is also defective because it cannot correctly process some allowed inputs (the input is unsigned, and the output needs to be able to return the same range), so in reality there is no choice but to decouple the error codes from the semantic results (length).  This discrepancy is easier to catch when the ambiguity of the code is decreased.

  • It does away with the bad practice of keeping the same variable for two uses:  assigning error codes and negative values to a “length” is jarring;

  • It disambiguates the purpose and meaning of checking the “returned” values (I’m including the ones passed by reference, loosely using the word “returned”).  Is it to check for an error or is it a semantic check that’s part of the business logic?

Integer overflows are a well-known problem;  however in this case they are more a symptom of conflicting requirements.  The incompatible constraints of having to return a negative integer for errors and an unsigned integer otherwise are really to blame;  the type mismatch (“overflow”) is inevitable given those.  My point is that the likelihood of developers getting confused and having bugs in their code, for example not realizing that they have incompatible constraints, is higher when the return value has a dual purpose.
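
As a sketch of the decoupled approach (the names and the stand-in error condition are hypothetical): the status travels in the return value while the length travels through an out-parameter, so neither channel is overloaded and the full unsigned range is preserved:

```c
#include <stddef.h>

typedef enum { STATUS_OK = 0, STATUS_ERROR = 1 } status_t;

/* Hypothetical rewrite of the earlier function: error codes and the
   length use separate channels, so no signed/unsigned multiplexing
   (and no possibility of that overflow) remains. */
status_t better_idea(const char *buf, size_t size, size_t *out_length) {
    if (buf == NULL) {            /* stand-in for <some_error_condition> */
        return STATUS_ERROR;
    }
    *out_length = size;           /* full unsigned range preserved */
    return STATUS_OK;
}
```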


Edit (10/13):  It just occurred to me that vsnprintf and snprintf are in trouble, in both BSD and Linux.  They return an int, but take an unsigned integer (size_t) as a size input, AND are supposed to return either the number of printed characters or a negative number if there’s an error.  Even if the size can be represented as a signed integer, they return the number of characters that *would* have been printed if the size were infinite, so specifying a small size isn’t a fix.  So it *might* be possible to make these functions seemingly return errors when none happened.

Note(10/14): I’ve been checking the actual code for snprintf.  In FreeBSD, it checks both the passed size and the return value against INT_MAX and appears safe.
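
The behavior in question is easy to observe with a small C99 sketch (the wrapper function name is mine): snprintf reports the length the full output *would* have had, not how many characters were actually stored, so its return value routinely exceeds the size passed in.

```c
#include <stdio.h>

/* Per C99, snprintf returns the number of characters that would have
   been written had the buffer been large enough (excluding the NUL),
   even when the output is truncated. */
int truncated_length(void) {
    char buf[8];                               /* room for 7 chars + NUL */
    return snprintf(buf, sizeof buf, "%s", "hello, world");
}
```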

Cassandra Lost Its Feed From Secunia

I discovered that our XML feed from Secunia had been disabled, coinciding with a re-organization of their web site.  So, the information contained in Cassandra from Secunia is now out of date.  Hopefully it can be re-established soon, but I didn’t get an answer from Secunia over the weekend.

An IPMI House of Cards

We’ve recently added features using IPMI to our ReAssure testbed, for example to support reimaging our Sun experimental PCs and rebooting them into a Live CD, so that researchers can run any OS they want on our testbed.  IPMI stands for “Intelligent Platform Management Interface”; we have a dedicated, isolated network on which commands are sent to management cards in the experimental PCs.  An OS running on these cards can provide status responses and perform power management actions, for example a power cycle that reboots the computer.  This is supposed to be useful if the OS running on the computer locks up.  So, we were hoping for fewer trips to the facility where the experimental PCs are hosted, greater reliability, and more convenient management capabilities.

However, what we got was more headaches.  Some IPMI cards failed entirely;  as we had daisy-chained them, the IPMI cards of the other PCs became inaccessible.  Others simply locked up, requiring a trip to the facility even though the OS on the computer was fine…  One of them sometimes responds to status commands and sometimes not at all, seemingly at random.  The result is that using the IPMI cards actually made ReAssure less reliable and more maintenance-intensive, because the reliability-enhancing component was so unreliable!  The irony.

I don’t know if we’ve just been unlucky, but now I’m keeping an eye out for a way to make that more reliable, or for an alternative that doesn’t introduce even more problems.  That is rather unlikely, as I’ve discovered that even though the LAN interface is standard, the physical side of those cards isn’t;  AFAIK you can’t take a generic IPMI card and install it; it needs to be a proprietary solution from the hardware vendor (e.g., you need a Tyan card for a Tyan motherboard, a Sun IPMI card for a Sun computer, etc…).  So if the IPMI solution provided by your hardware vendor has flaws, you’re stuck with it;  it’s not like a NIC that you can replace with one from any vendor.  I don’t know of any way to replace the software on the IPMI cards either, in the way you can replace the bad firmware of consumer routers with better open source software.  I suppose that the lessons from this story are that:

  • You can’t make something more reliable by adding low-quality components in a “backup” role, because then you need to maintain them as well and make sure that they’ll work when they’re needed;
  • Putting something on a separate card doesn’t by itself make it more reliable;
  • IPMI is a weak standard—only the exposed interfaces are standardized, for example enabling the development of OpenIPMI (from the managed OS side) and IPMItools (LAN interface), but the middle of the “sandwich” isn’t—the implementations and parts are proprietary, incompatible between vendors, inflexible and fragile;
  • Proprietary, non-standard solutions prevent choosing better components.

Take 5 Minutes to Help Privacy Research!

This is from our colleagues at NCSU, and is time-critical. Please take 5 minutes to fill out this (simple) survey. It will help an NSF-funded privacy project. And “Thank you” from CERIAS, too!

ThePrivacyPlace.Org Privacy Survey is Underway!

Researchers at ThePrivacyPlace.Org are conducting an online survey about privacy policies and user values. The survey is supported by an NSF ITR grant (National Science Foundation Information Technology Research) and was first offered in 2002. We are offering the survey again in 2008 to reveal how user values have changed over the intervening years. The survey results will help organizations ensure their website privacy practices are aligned with current consumer values.

The URL is: http://theprivacyplace.org/currentsurvey

We need to attract several thousand respondents, and would be most appreciative if you would consider helping us get the word out about the survey, which takes about 5 to 10 minutes to complete. The results will be made available via our project website (http://www.theprivacyplace.org/).

Prizes include $100 Amazon.com gift certificates sponsored by Intel Co. and IBM gifts.

On behalf of the research staff at ThePrivacyPlace.Org, thank you!

Who ya gonna call?

This morning I received an email, sent to a list of people (I assume). The subject of the email was “Computer Hacker’s service needed” and the contents indicated that the sender was seeking someone to trace back to the sender of some enclosed email. The email in question? The pernicious spam email purporting to be from someone who has been given a contract to murder the recipient, but on reflection will not do the deed if offered a sum of money.

This form of spam is well-known in most of the security and law enforcement communities, and there have been repeated advisories and warnings issued to the public. For instance, Snopes has an article on it because it is so widespread as to have urban legend status. The scam dates back at least to 2006, and is sometimes made to seem more authentic by including some personalized information (usually taken from online sources). A search using the terms “hitman scam spammer” returns over 200,000 links, most of the top ones being stories in news media and user alert sites. The FBI has published several alerts about this family of frauds, too. This is not a rare event.

However, it is not that the author of the email missed those stories that prompts this post. After all, it is not the case that each of us can be aware of everything being done online.

Rather, I am troubled that someone would ostensibly take the threat seriously, and as a follow-up, seek a “hacker” to trace the email back to its sender rather than report it to law enforcement authorities.

One wonders whether, had the same person received the same note on paper by surface mail, he would seek the services of someone adept at breaking into mail boxes to seek out the author. Even if he did that, what would it accomplish? Purportedly, the author of the note is a criminal with some experience and compatriots (these emails, and this one in particular, always refer to a gang that is watching the recipient). What the heck is the recipient going to do with someone—and his gang—who probably doesn’t live anywhere nearby?

Perhaps the “victim” might know (or suspect) it is a scam, but is trying to aid the authorities by tracing the email? But why spend your own money to do something that law enforcement is perhaps better equipped to do? Plus, a “hacker” is not necessarily going to use legal methods that will allow the authorities to use the results. Perhaps even more to the point, the “hacker” may not want to be exposed to the authorities—especially if they regularly break the law to find people!

Perhaps the victim already consulted law enforcement and was told it was a scam, but doesn’t believe it? Well, some additional research should be convincing. Plus, the whole story simply isn’t credible. However, if the victim really does have a streak of paranoia and a guilty conscience, then perhaps this is plausible. However, in this case, whoever is hired would likewise be viewed with suspicion, and any report made is going to be doubted by the victim. So, there is no real closure here.

Even worse, if a “hacker” is found who is willing to break the rules and the laws to trace back email, what is to say that he (or she) isn’t going to claim to have found the purported assassin, that he’s real, and that the price has gone up, but that the “hacker” is willing to serve as an intermediary? Once the money is paid, the problem is pronounced “fixed.” This is a classic form of scam too—usually played on the gullible by “mystics” who claim that the victim is cursed and can only be cured by a complicated ritual involving a lot of money offered to “the spirits.”

Most important—if someone is hired, and that person breaks the law, then the person hiring that “hacker” can also be charged under the law. Hiring someone to break the law is illegal. And having announced his intentions to this mailing list, the victim has very limited claims of ignorance at this point.

At the heart of this, I am simply bewildered how someone would attempt to find a “hacker”—whose skill set would be unknown, whose honesty is probably already in question, and whose allegiances are uncertain—to track down the source of a threat rather than go to legitimate law enforcement. I can’t imagine a reasonable person (outside of the movies) receiving a threatening letter or phone call then seeking to hire a stranger to trace it back rather than calling in the authorities.

Of course, that is why these online scams—and other scams such as the “419 scams”—continue to work: people don’t think to contact appropriate authorities. And when some fall for it, it encourages the spammers to keep on, increasing the pool of victims.

(And yes, I am ignoring the difficulty of actually tracing email back to a source: that isn’t the point of this particular post.)