CERIAS Blog
Center for Education and Research in Information Assurance and Security, Purdue University

Firefox’s Super Cookies


Given all the noise that was made about cookies and programs that look for “spy cookies”, the silence about DOM storage is a little surprising.  DOM storage allows web sites to store all kinds of information in a persistent manner on your computer, much like cookies but with greater capacity and efficiency.  Another way that web sites store information about you is Adobe’s Flash local storage;  this seems to be a highly popular option (e.g., YouTube stores statistics about you that way), and it is better known.  Web applications such as pandora.com will even deny you access if you turn it off in the Flash management page.  If you’re curious, look at the contents of “~/.macromedia/Flash_Player/#SharedObjects/”, but most of it is not human readable. 
I wonder why DOM storage isn’t used much after being available for a whole year;  I haven’t been able to find any web site or web application making use of it so far, besides a proof of concept for taking notes.  Yet, it probably will be (ab)used, given enough time.  There is no user interface in Firefox for viewing this information, deleting it, or managing it in a meaningful way.  All you can do is turn it on or off by going to the “about:config” URL, typing “storage” in the filter, and setting the preference to true or false.  Compare this to what you can do about cookies…  I’m not suggesting that anyone worry about it, but I think that we should have more control over what is stored and how, and the curious or paranoid should be able to view and audit the contents without needing the tricks below.  Flash local storage should also be auditable, but I haven’t found a way to do it easily.
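
In the meantime, a crude inspection is possible from the shell.  This is only a sketch, assuming a Linux system with the standard find and strings utilities:  the first command lists the “.sol” files in which Flash keeps local shared objects, and the second dumps whatever printable fragments they contain (the binary encoding itself stays unreadable this way):

find ~/.macromedia/Flash_Player/#SharedObjects/ -name '*.sol' -print
find ~/.macromedia/Flash_Player/#SharedObjects/ -name '*.sol' -exec strings {} \;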

Auditing DOM storage.  To find out what information web sites store on your computer using DOM storage (if any), you first need to find where your Firefox profile is stored.  In Linux, this would be under “~/.mozilla/firefox/”.  There you should find a file named “webappsstore.sqlite” (you can locate it with “find ~/.mozilla/firefox -name webappsstore.sqlite”).  To view its contents in human-readable form, install sqlite3 (in Ubuntu, you can find and install it with Synaptic).  Then, the command:
echo 'select * from webappsstore;' | sqlite3 webappsstore.sqlite

will print contents such as (warning, there could potentially be a lot of data stored):
cerias.purdue.edu|test|asdfasdf|0|homes.cerias.purdue.edu

Other SQL commands can be used to delete specific entries, change them, or even add new ones.  If you are a programmer, you should know better than to trust these values!  They are no more secure than cookies. 
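
For example, sticking with the same one-liner style, a site’s entries can be removed with a delete statement.  The column name “domain” below is an assumption inferred from the sample output above;  run “.schema webappsstore” in sqlite3 to confirm the actual column names before relying on them:

echo "delete from webappsstore where domain = 'cerias.purdue.edu';" | sqlite3 webappsstore.sqlite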

Speculations on Teaching Secure Programming


I have taught secure programming for several years, and along the way I developed a world view of how teaching it differs from teaching other subjects.  Some of the following are inferences from uncontrolled observations;  others are simply opinions or mere speculation.  I present this world view here, hoping that it will generate some discussion and that flaws in it will be corrected. 

As with other fields, software security can be studied from several different aspects, such as secure software engineering, secure coding at a technical level, architecture, procurement, configuration and deployment.  As in other fields, effective software security teaching depends on the audience—its needs, its current state and capabilities, and its potential for learning.  Learning techniques such as repetition are useful, and students can ultimately benefit from organized, abstracted thought on the subject.  However, teaching software security is different from teaching other subjects because it is not just teaching facts (data), “how to” (skills), and theories and models (knowledge), but also a mindset and the capability to repeatably derive and achieve a form of wisdom in varied, even new, situations.  It’s not just a question of the technologies used or the degree of technological acumen, but of behavioral psychology, economics, motivation and humor.

Behavioral Psychology—Security is somewhat of a habit, an attitude, a way of thinking and life.  You won’t become a secure programmer just because you learned of a new vulnerability, exploit or security trick today, although it may help and have a cumulative effect.  Attacking requires opportunistic, lateral, experimental thinking, with exciting rewards upon success.  It somewhat resembles the capability to create humor by taking something out of the context for which it was created and subjecting it to new, unexpected conditions.  I am also surprised sometimes by the amount of perseverance and dedication attackers demonstrate.  Defending requires vigilance and systematic, careful, most often tedious labor and thought, which are rewarded slowly by “uptime” or long-term peace.  They are different, yet understanding one is a great advantage to the other.  To excel at both simultaneously is difficult, requires practice, and is probably not achievable by everyone.  I note that undergraduate computer science rewards passing tests, sometimes including provided software tests for assignments, which are closer to immediate rewards upon success or immediate failure, with no long-term consequences or requirements.  On top of that, assignments are most often evaluated solely on achieving functionality, not on preventing unintended side effects or disallowing unintended behavior.  I suspect that this produces graduates with learned behaviors unfavorable to security.  The problem with behaviors is that you may know better than what you’re doing, but you do it anyway.  Economics may provide some limited justification.

Economics—Many people know that doing things securely is “better”, and that they ought to, but it costs.  People are “naturally optimizing” (lazy)—they won’t do something if there’s no perceived need for it, or if they can delay paying the costs or ultimately pay only the necessary ones (“late security”, as in “late binding”).  This is where patches stand;  vulnerability disclosures and patches are remotely possible costs to be weighed against the perceived opportunity costs of delays and additional production expenses.  Isolated occurrences of exploits and vulnerability disclosures may be dismissed as bad luck, accidents, or something that happens to other projects.  Intense scrutiny of some works may be necessary to demonstrate to a product’s team that their software engineering methods and security results are flawed.  There is plenty of evidence that these attempts at evading costs don’t work well and often backfire. 
Even if change is desired, students can graduate with negligible knowledge of the best practices presented in the 2007 SOAR on Software Security Assurance.  Computer science programs are strained by the large amount of knowledge that needs to be taught;  perhaps software engineering should be spun off, just as electrical engineering was spun off from physics.  Companies that need software engineers, and ultimately our economy, would be better served by that than by getting students who were just told to “go and create a program that does this and that”.  While I was revising these thoughts, “Crosstalk” published some opinions on the use of Java for teaching computer science, with a title lamenting “where are the software engineers of tomorrow?”  I think that there is just not enough teaching time to educate people to become both good computer scientists and good software engineers, and the result satisfies the need for neither.  Even if new departments aren’t created, two different degrees should probably be offered.

Motivation—For many, attempts to teach software security will go in one ear and out the other unless consequences are demonstrated.  Most people need to be shown the exploits that a flaw enables to believe that it is a serious flaw.  This resembles how a kid may ignore warnings about burns and hot things until a burn is experienced.  Even as teenagers and adults, every summer some people have to re-learn that sunscreen is needed, and the possibility of skin cancer is too remote a consideration for others.  So, security teaching needs to contain a lot of anecdotes and examples of bad things that happened.  I like to show real code in class and analyze the mistakes that were made;  that approach seems to get the interest of undergraduates.  At a later stage, this will evolve from “security prevents bad things” to “with security you can do this safely”.  Keeping secure programming topical, by discussing current events in class, can make it even more interesting and exciting.

Repetition—Repeated experiences reinforce learning.  Security-focused code scanners repeat and reinforce good coding practice, as long as the warnings are not allowed to be ignored.  Code audits reinforce the message, this time coming from peers, and so result in peer pressure and the risk of shame.  They are great in a company, but I am ambivalent about using code audits by other students, due to the risk of humiliation—humiliation is not appropriate while learning, for many reasons.  Also, the students doing the audit may not be competent yet, by definition, and I’m not sure how I would grade the activity.  Code audits by the teacher do not scale well.  This leaves scanners.  I have looked into this and tried some commercial code scanners, but what I’ve seen are systems that are unmanageable for classroom use and that don’t catch some of the flaws I wish they would. 

Organization and abstraction—Whereas showing exploits and attacks is good for beginners, more advanced students will want to move away from blacklists of things not to do (e.g., “Deadly Sins”) toward good practices, assurance, and formal methods.  I made a presentation on the subject almost two years ago.

In conclusion, teaching secure programming differs from teaching typical subjects because of how the knowledge is utilized;  it needs to change behaviors and attitudes, and it benefits from different tools and activities.  It is also interesting in how it connects with morality.  While these characteristics aren’t unique in the entire body of human knowledge, they present interesting challenges.

Confusion of Separation of Privilege and Least Privilege


Least privilege is the idea of giving a subject or process only the privileges it needs to complete a task.  Compartmentalization is a technique for separating code into parts to which least privilege can be applied, so that if one part is compromised, the attacker does not gain full access.  Why does this get confused all the time with separation of privilege?  Separation of privilege is breaking up a *single* privilege amongst multiple, independent components or people, so that agreement among several parties, or collusion, is necessary to perform an action (e.g., dual signature checks).  So, if an authentication system has various biometric components, a component that evaluates a token, and another component that evaluates some knowledge or capability, and all have to agree for authentication to occur, then that is separation of privilege.  It is essentially a logical “AND” operation;  in its simplest form, a system would check several conditions before granting approval for an operation.  Bishop uses the example of “su” or “sudo”:  a user (or attacker of a compromised process) needs to know the appropriate password, and the user needs to be in a special group.

A related, but not identical, concept is that of majority voting systems.  Redundant systems have to agree, hopefully outvoting a defective system.  If there were no voting, i.e., if all of the systems always had to agree, it would be separation of privilege.  OpenSSH’s UsePrivilegeSeparation option is *not* an implementation of separation of privilege by that definition;  it simply runs compartmentalized code using least privilege on each compartment.
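
To make the “AND” concrete, here is a minimal sketch in shell, in the spirit of the “su” example:  the privileged action is performed only if two independent conditions both hold.  This is an illustration, not a real authentication mechanism;  the hard-coded password and the helper names are hypothetical.

#!/bin/sh
# Separation of privilege: BOTH independent conditions must hold
# (a logical AND) before the privileged action is allowed.
user="$1"

in_special_group() {
    # condition 1: the user belongs to the "wheel" group
    id -Gn "$user" 2>/dev/null | grep -qw wheel
}

knows_password() {
    # condition 2: the user can supply a secret
    # (hypothetical stand-in for a real password check)
    printf 'Password: '
    read -r reply
    [ "$reply" = "let-me-in" ]
}

if knows_password && in_special_group; then
    echo "privileged action allowed"
else
    echo "denied" >&2
    exit 1
fi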

ReAssure Version 1.01 Released


As the saying goes, version 1.0 always has bugs, and ReAssure was no exception.  Version 1.01 is a bug-fix release for broken links and the like;  there were no security issues.  Download the source code in Ruby here, or try it there.  ReAssure is the virtualization (VMware and UML) experimental testbed built for containment and networking security experiments.  There are two computers for creating and updating images, and of course you can use VMware appliances.  The other 19 computers are hooked to a Gbit switch configured on the fly according to the network topology you specify, with images being transferred, set up and started automatically.  Remote access is through ssh for the host OS, and through NX (think VNC) or the VMware console for the guest OS.

Another untimely passing



I learned this week that the information security world lost another of our lights in 2007: Bob Baldwin. This may have been more generally known, but a few people I contacted were also surprised and saddened by the news.

His contributions to the field were wide-ranging. In addition to his published research results he also built tools that a generation of students and researchers found to be of great value. These included the Kuang tool for vulnerability analysis, which we included in the first edition of COPS, and the Crypt-Breaker’s Workbench (CBW), which is still in use.

What follows is a (slightly edited) obituary sent to me by Bob’s wife, Anne.  There was also an obituary in the fall 2007 issue of Cryptologia.

Robert W Baldwin

May 19, 1957 – August 21, 2007

Robert W. Baldwin of Palo Alto passed away at home with his wife at his side on August 21, 2007. Bob was born in Newton, Massachusetts and graduated from Memorial High School in Madison, Wisconsin and Yorktown High School in Arlington, Virginia. He attended the Massachusetts Institute of Technology, where he received BS and MS degrees in Computer Science and Electrical Engineering in 1982 and a Ph.D. in Computer Science in 1987. A leading researcher and practitioner in computer security, Bob was employed by Oracle, Tandem Computers, and RSA Security before forming his own firm, PlusFive Consulting. His most recent contribution was the development of security engineering for digital theaters. Bob was fascinated with cryptology and made frequent contributions to Cryptologia as an author, reviewer, and mentor.

Bob was a loving and devoted husband and father who touched the hearts and minds of many. He is well remembered for his positive attitude and everlasting smile. Bob is survived by his wife, Anne Wilson; two step-children, Sean and Jennifer Wilson of Palo Alto; and his two children, Leila and Elise Baldwin of Bellevue, Washington. He is also survived by his parents, Bob and Janice Baldwin of Madison, Wisconsin; his siblings: Jean Grossman of Princeton, NJ, Richard Baldwin of Lausanne, Switzerland, and Nancy Kitsos of Wellesley, MA; and six nieces and nephews.

In lieu of flowers, gifts in memory of Robert W. Baldwin may be made to a charity of the donor’s choice, to the Recht Brain Tumor Research Laboratory at Stanford Comprehensive Cancer Center, Office of Medical Development, 2700 Sand Hill Road, Menlo Park, CA 94025, Attn: Janice Flowers-Sonne, or to the loving caretakers at the Hospice of the Valley, 1510 E. Flower Street. Phoenix, AZ 85014-5656.