Posts in General

Open Source Outclassing Home Router Vendor’s Firmware

I’ve had an interesting new experience these last few months.  I was faced with a choice: return a home wireless router yet again and try a different model or brand, or try an open source firmware replacement.  If one is to believe reviews on sites like Amazon and Newegg, all home wireless routers have significant flaws, so the return-and-exchange game could have kept going for a while.  The second Linksys device I bought (the most expensive on the display!) had the QoS features I wanted but crashed every day and had to be rebooted, even with the latest vendor-provided firmware.  It was hardly better than the Verizon-provided Westell modem, which sometimes had to be rebooted several times per day despite having simpler firmware.  That was an indication of poor code quality, and quite likely of security problems (beyond the obvious availability issues). 

I then heard about DD-WRT, an alternative firmware released under the GPL.  There are other alternative firmwares as well, but I chose this one simply because it supported the Linksys router;  I’m not sure which of the alternatives is the best.  For several months now, not only has the device demonstrated 100% availability with v.24 (RC5), but it supports more advanced security features and is more polished.  I expected difficulties because it is beta software, but had none.  Neither CERIAS nor I am endorsing DD-WRT, and I don’t care whether my home router is running vendor-provided or open source firmware, as long as it is a trustworthy and reliable implementation of the features I want.  Yet, I am amazed that open source firmware has outclassed the firmware of an expensive (for a home router) model from a recognized and trusted brand.  Perhaps home router vendors should give up their proprietary, low-quality development efforts, fund or otherwise contribute to projects like DD-WRT, and install that firmware as the default.  A similar suggestion can be made if the software development is already outsourced.  I believe it might save their customers a lot of grief, and lower the return rates on their products.

Firefox’s Super Cookies

Given all the noise that was made about cookies and programs that look for “spy cookies”, the silence about DOM storage is a little surprising.  DOM storage allows web sites to store all kinds of information in a persistent manner on your computer, much like cookies but with greater capacity and efficiency.  Another way that web sites store information about you is Adobe’s Flash local storage;  this seems to be a highly popular option (e.g., YouTube stores statistics about you that way), and it’s better known.  Web applications such as pandora.com will even deny you access if you turn it off at the Flash management page.  If you’re curious, see the contents of “~/.macromedia/Flash_Player/#SharedObjects/”, but most of it is not human readable. 
I wonder why DOM storage isn’t used much after being available for a whole year;  I haven’t been able to find any web site or web application making use of it so far, besides a proof of concept for taking notes.  Yet, it probably will be (ab)used, given enough time.  There is no user interface in Firefox for viewing this information, deleting it, or managing it in a meaningful way.  All you can do is turn it on or off by going to the “about:config” URL, typing “storage” in the filter, and setting the value to true or false.  Compare this to what you can do about cookies…  I’m not suggesting that anyone worry about it, but I think that we should have more control over what is stored and how, and the curious or paranoid should be able to view and audit the contents without needing the tricks below.  Flash local storage should also be auditable, but I haven’t found a way to do it easily.

Auditing DOM storage.  To find out what information web sites store on your computer using DOM storage (if any), you need to find where your Firefox profile is stored.  In Linux, this would be “~/.mozilla/firefox/”.  You should find a file named “webappsstore.sqlite”.  To view the contents in human readable form, install sqlite3;  in Ubuntu you can use Synaptic to search for sqlite3 and get it installed.  Then, the command:
echo 'select * from webappsstore;' | sqlite3 webappsstore.sqlite

will print contents such as (warning, there could potentially be a lot of data stored):
cerias.purdue.edu|test|asdfasdf|0|homes.cerias.purdue.edu

Other SQL commands can be used to delete specific entries or change them, or even add new ones.  If you are a programmer, you should know better than to trust these values!  They are not any more secure than cookies. 
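The same audit can be scripted with Python’s built-in sqlite3 module instead of the command-line tool.  The sketch below is mine, not from the original post: the column names (domain, key, value, secure, owner) are an assumption inferred from the pipe-separated sample output above, and the helper names are hypothetical.  Verify the schema against your own webappsstore.sqlite, and work on a copy of the file rather than the live profile.

```python
import os
import sqlite3
import tempfile

def audit(db_path):
    """Return every DOM storage entry as a list of row tuples."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT * FROM webappsstore").fetchall()
    finally:
        conn.close()

def purge_domain(db_path, domain):
    """Delete every entry stored by one domain."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("DELETE FROM webappsstore WHERE domain = ?", (domain,))
        conn.commit()
    finally:
        conn.close()

# Demo against a throwaway database mimicking the assumed schema, so no
# real profile data is touched.
demo_db = os.path.join(tempfile.mkdtemp(), "webappsstore.sqlite")
conn = sqlite3.connect(demo_db)
conn.execute("CREATE TABLE webappsstore "
             "(domain TEXT, key TEXT, value TEXT, secure INTEGER, owner TEXT)")
conn.execute("INSERT INTO webappsstore VALUES "
             "('cerias.purdue.edu', 'test', 'asdfasdf', 0, "
             "'homes.cerias.purdue.edu')")
conn.commit()
conn.close()

for row in audit(demo_db):
    print(row)
purge_domain(demo_db, "cerias.purdue.edu")
print(len(audit(demo_db)), "entries remain")
```

To inspect the real file, pass its path (under your Firefox profile directory) to audit().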

Speculations on Teaching Secure Programming

I have taught secure programming for several years, and along the way I developed a world view of how teaching it differs from teaching other subjects.  Some of the following are inferences from uncontrolled observations; others are simply opinions or mere speculation.  I present this world view here, hoping that it will generate some discussion and that flaws in it will be corrected. 

As with other fields, software security can be studied from several different aspects, such as secure software engineering, secure coding at a technical level, architecture, procurement, configuration and deployment.  Similarly to other fields, effective software security teaching depends on the audience—its needs, its current state and capabilities, and its potential for learning.  Learning techniques such as repetition are useful, and students can ultimately benefit from organized, abstracted thought on the subject.  However, teaching software security is different from teaching other subjects because it is not just teaching facts (data), “how to” (skills) and theories and models (knowledge), but also a mindset and the capability to repeatably derive and achieve a form of wisdom in various, even new situations.  It’s not just a question of the technologies used or the degree of technological acumen, but of behavioral psychology, economy, motivation and humor.

Behavioral Psychology—Security is somewhat of a habit, an attitude, a way of thinking and life.  You won’t become a secure programmer just because you learned of a new vulnerability, exploit or security trick today, although it may help and have a cumulative effect.  Attacking requires opportunistic, lateral, experimental thinking with exciting rewards upon success.  It somewhat resembles the capability to create humor by taking something out of the context for which it was created and subjecting it to new, unexpected conditions.  I am also surprised sometimes by the amount of perseverance and dedication attackers demonstrate.  Defending requires vigilance and systematic, careful, most often tedious labor and thought, which are rewarded slowly by “uptime” or long-term peace.  They are different, yet understanding one is a great advantage to the other.  To excel at both simultaneously is difficult, requires practice, and is probably not achievable by everyone.  I note that undergraduate computer science rewards passing tests, including, sometimes, instructor-provided software tests for assignments, which are closer to immediate rewards upon success or immediate failure, with no long-term consequences or requirements.  On top of that, assignments are most often evaluated solely on achieving functionality, and not on preventing unintended side-effects or disallowing other behaviors.  I suspect that this produces graduates with learned behaviors unfavorable to security.  The problem with behaviors is that you may know better than what you’re doing, but you do it anyway.  Economy may provide some limited justification.

Economy—Many people know that doing things securely is “better”, and that they ought to, but it costs.  People are “naturally optimizing” (lazy)—they won’t do something if there’s no perceived need for it, or if they can delay paying the costs or ultimately pay only the necessary ones (“late security” as in “late binding”).  This is where patches stand;  vulnerability disclosures and patches are remotely possible costs to be weighed against the perceived opportunity costs of delays and additional production expenses.  Isolated occurrences of exploits and vulnerability disclosures may be dismissed as bad luck, accidents or something that happens only to other projects.  An intense scrutiny of some works may be necessary to demonstrate to a product’s team that their software engineering methods and security results are flawed.  There is plenty of evidence that these attempts at evading costs don’t work well and often backfire. 
Even if change is desired, students can graduate with negligible knowledge of the best practices presented in the SOAR on Software Security Assurance 2007.  Computer science programs are strained by the large amount of knowledge that needs to be taught; perhaps software engineering should be spun off, just as electrical engineering was spun off from physics.  Companies that need software engineers, and ultimately our economy, would be better served by that than by getting students who were just told to “go and create a program that does this and that”.  While I was revising these thoughts, “Crosstalk” published some opinions on the use of Java for teaching computer science, but the title laments “where are the software engineers of tomorrow?”  I think that there is just not enough teaching time to educate people to become both good computer scientists and good software engineers, and the result satisfies the need for neither.  Even if new departments aren’t created, two different degrees should probably be offered.

Motivation—For many, teaching software security will go in one ear and out the other unless consequences are demonstrated.  Most people need to be shown the exploits that a flaw enables before they believe that it is a serious flaw.  This resembles how a kid may ignore warnings about burns and hot things until a burn is experienced.  Even as teenagers and adults, every summer some people have to re-learn that sunscreen is needed, and the possibility of skin cancer is too remote a consideration for others.  So, security teaching needs to contain many anecdotes and examples of bad things that happened.  I like to show real code in class and analyze the mistakes that were made;  that approach seems to hold the interest of undergraduates.  At a later stage, this will evolve from “security prevents bad things” to “with security you can do this safely”.  Keeping secure programming current by discussing recent events in class can make it even more interesting and exciting.

Repetition—Repeated experiences reinforce learning.  Security-focused code scanners repeat and reinforce good coding practice, as long as the warnings are not allowed to be ignored.  Code audits reinforce the message, this time coming from peers, and so result in peer pressure and the risk of shame.  They are great in a company, but I am ambivalent about using code audits by other students, due to the risk of humiliation—humiliation is not appropriate while learning, for many reasons.  Also, the students doing the audit may not be competent yet, by definition, and I’m not sure how I would grade the activity.  Code audits by the teacher do not scale well.  This leaves scanners.  I have been looking into it and I tried some commercial code scanners, but what I’ve seen are systems that are unmanageable for classroom use and don’t catch some of the flaws I wish they would. 
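To make concrete what a security-focused code scanner does at its simplest, here is a toy pattern-matching sketch; it is my illustration, not one of the commercial tools mentioned above.  Real scanners add parsing, data-flow and taint analysis; the patterns and advice strings below are illustrative only.

```python
import re

# Toy blacklist of risky C library calls, mapped to remediation advice.
# A real scanner's rule set is far larger and context-sensitive.
RISKY_CALLS = {
    r"\bgets\s*\(":    "gets() cannot bound its input; use fgets()",
    r"\bstrcpy\s*\(":  "strcpy() may overflow the destination; use strlcpy()/snprintf()",
    r"\bsprintf\s*\(": "sprintf() may overflow the destination; use snprintf()",
    r"\bsystem\s*\(":  "system() invites command injection; validate or avoid",
}

def scan(source):
    """Return (line number, advice) pairs for risky calls found in C source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in RISKY_CALLS.items():
            if re.search(pattern, line):
                findings.append((lineno, advice))
    return findings

sample = 'char buf[16];\ngets(buf);\nstrcpy(buf, argv[1]);\n'
for lineno, advice in scan(sample):
    print(f"line {lineno}: {advice}")
```

Even this naive version shows the pedagogical value: the warning arrives immediately, attached to the offending line, every time the student compiles.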

Organization and abstraction—Whereas showing exploits and attacks is good for the beginner, more advanced students will want to move away from black lists of things not to do (e.g., “Deadly Sins”) to good practices, assurance, and formal methods.  I made a presentation on the subject almost two years ago.

In conclusion, teaching secure programming differs from typical subject matters because of how the knowledge is utilized;  it needs to change behaviors and attitudes;  and it benefits from different tools and activities.  It is interesting in how it connects with morality.  Whereas these characteristics aren’t unique in the entire body of human knowledge, they present interesting challenges.

ReAssure Version 1.01 Released

As the saying goes, version 1.0 always has bugs, and ReAssure was no exception.  Version 1.01 is a bug-fix release for broken links and the like;  there were no security issues.  Download the source code in Ruby here, or try it there.  ReAssure is the virtualization (VMware and UML) experimental testbed built for containment and networking security experiments.  There are two computers for creating and updating images, and of course you can use VMware appliances.  The other 19 computers are hooked to a Gbit switch configured on the fly according to the network topology you specify, with images being transferred, set up and started automatically.  Remote access is through ssh for the host OS, and through NX (think VNC) or the VMware console for the guest OS.

Another untimely passing

[tags]obituary,cryptography,Bob Baldwin,kuang, CBW,crypt-breaker’s workbench[/tags]

I learned this week that the information security world lost another of our lights in 2007: Bob Baldwin. This may have been more generally known, but a few people I contacted were also surprised and saddened by the news.

His contributions to the field were wide-ranging. In addition to his published research results he also built tools that a generation of students and researchers found to be of great value. These included the Kuang tool for vulnerability analysis, which we included in the first edition of COPS, and the Crypt-Breaker’s Workbench (CBW), which is still in use.

What follows is a (slightly edited) obituary sent to me by Bob’s wife, Anne. There was also an obituary in the fall 2007 issue of Cryptologia.

Robert W Baldwin

May 19, 1957 – August 21, 2007

Robert W. Baldwin of Palo Alto passed away at home with his wife at his side on August 21, 2007. Bob was born in Newton, Massachusetts and graduated from Memorial High School in Madison, Wisconsin and Yorktown High School in Arlington, Virginia. He attended the Massachusetts Institute of Technology, where he received BS and MS degrees in Computer Science and Electrical Engineering in 1982 and a Ph.D. in Computer Science in 1987. A leading researcher and practitioner in computer security, Bob was employed by Oracle, Tandem Computers, and RSA Security before forming his own firm, PlusFive Consulting. His most recent contribution was the development of security engineering for digital theaters. Bob was fascinated with cryptology and made frequent contributions to Cryptologia as an author, reviewer, and mentor.

Bob was a loving and devoted husband and father who touched the hearts and minds of many. He is well remembered by his positive attitude and everlasting smile. Bob is survived by his wife, Anne Wilson, two step-children, Sean and Jennifer Wilson of Palo Alto and his two children, Leila and Elise Baldwin of Bellevue, Washington. He is also survived by his parents, Bob and Janice Baldwin of Madison, Wisconsin; his siblings: Jean Grossman of Princeton, N.J., Richard Baldwin of Lausanne, Switzerland, and Nancy Kitsos of Wellesley, MA.; and six nieces and nephews.

In lieu of flowers, gifts in memory of Robert W. Baldwin may be made to a charity of the donor’s choice, to the Recht Brain Tumor Research Laboratory at Stanford Comprehensive Cancer Center, Office of Medical Development, 2700 Sand Hill Road, Menlo Park, CA 94025, Attn: Janice Flowers-Sonne, or to the loving caretakers at the Hospice of the Valley, 1510 E. Flower Street. Phoenix, AZ 85014-5656.


Passing of a Pioneer

On November 18, 2007, noted computer pioneer James P. Anderson, Jr., died at his home in Pennsylvania. Jim, 77, had finally retired in August.

Jim, born in Easton, Pennsylvania, graduated from Penn State with a degree in Meteorology. From 1953 to 1956 he served in the U.S. Navy as a Gunnery Officer and later as a Radio Officer. This later service sparked his initial interest in cryptography and information security.

Jim was unaware in 1956, when he took his first job at Univac Corporation, that his career in computers had begun. Hired by John Mauchly to program meteorological data, Dr. Mauchly soon became a family friend and mentor. In 1959, Jim went to Burroughs Corporation as manager of the Advanced Systems Technology Department in the Research Division, where he explored issues of compilation, parallel computing, and computer security. While there, he conceived of and was one of the patent holders of one of the first multiprocessor systems, the D-825. After being manager of Systems Development at Auerbach Corporation from 1964 to 1966, Jim formed an independent consulting firm, James P. Anderson Company, which he maintained until his retirement.

Jim's contributions to information security involved both the abstract and the practical. He is generally credited with the invention and explication of the reference monitor (in 1972) and audit trail-based intrusion detection (in 1980). He was involved in many broad studies in information security needs and vulnerabilities. This included participation on the 1968 Defense Science Board Task Force on Computer Security that produced the "Ware Report", defining the technical challenges of computer security. He was then the deputy chair and editor of a follow-on report to the U.S. Air Force in 1972. That report, widely known as "The Anderson Report", defined the research agenda in information security for well over a decade. Jim was also deeply involved in the development of a number of other seminal standards, policies and over 200 reports including BLACKER, the TCSEC (aka "The Orange Book"), TNI, and other documents in "The Rainbow Series".

Jim consulted for major corporations and government agencies, conducting reviews of security policy and practice. He had long-standing consulting arrangements with computer companies, defense and intelligence agencies and telecommunication firms. He was a mentor and advisor to many in the community who went on to prominence in the field of cyber security. Jim is well remembered for his very practical and straightforward analyses, especially in his insights about how operational security lapses could negate strong computing safeguards, and about the poor quality design and coding of most software products.

Jim eschewed public recognition of his many accomplishments, preferring that his work speak for itself. His accomplishments have long been known within the community, and in 1990 he was honored with the NIST/NCSC (NSA) National Computer Systems Security Award, generally considered the most prestigious award in the field. In his acceptance remarks Jim observed that success in computer security design would be when its results were used with equal ease and confidence by average people as well as security professionals - a state we have yet to achieve.

Jim had broad interests, deep concerns, great insight and a rare willingness to operate out of the spotlight. His sense of humor and patience with those earnestly seeking knowledge were greatly admired, as were his candid responses to the clueless and self-important.

With the passing of Jim Anderson the community has lost a friend, mentor and colleague, and the field of cyber security has lost one of its founding fathers.

Jim is survived by his wife, Patty, his son Jay, daughter Beth and three grandchildren. In lieu of other recognition, people may make donations to their favorite charities in memory of Jim.

[Update 01/03/2008 from Peter Denning:]

I noted a comment that Jim is credited with the reference monitor. He told me once that he credits that to a paper I wrote with Scott Graham for the 1972 SJCC and said that paper was the first he'd seen using the actual term. I told him that I got the concept (not the term) from Jack Dennis at MIT. Jack probably got it from the ongoing Project MAC discussions. Where it came from before that, I do not know. It might be better to say that Jim recognized the fundamental importance of reference monitor for computer security practice and stumped endlessly for its adoption.

Computer Security Outlook

Recently, the McAfee Corporation released their latest Virtual Criminology Report.  Personnel from CERIAS helped provide some of the research for the report.
The report makes interesting reading, and you might want to download a copy.  You will have to register to get a copy, however (that’s McAfee, not CERIAS).

The editors concluded that there are 3 major trends in computer security and computer crime:

  1. An increasing level and sophistication of nation-state sponsored espionage and (some) sabotage.
  2. An increasing sophistication in criminal threats to individuals and businesses.
  3. An increasing market for exploits and attack methods.

Certainly, anyone following the news and listening to what we’ve been saying here will recognize these trends.  All are natural consequences of increased connectivity and increased presence of valued information and resources online, coupled with weak security and largely ineffectual law enforcement.  If value is present and there is little or no protection, and if there is also little risk of being caught and punished, then there is going to be a steady increase in system abuse.

I’ve posted links on my tumble log to a number of recent news articles on computer crime and espionage.  It’s clear that there is a lot of misuse occurring, and that we aren’t seeing it all.


Another Round on Passwords

[tags]passwords, security practices[/tags]
The EDUCAUSE security mailing list has yet another discussion of password policies.  I’ve blogged about this general issue several times in the past, but maybe it is worth revisiting.

Someone on the list wrote:

Here is my question - does anyone have the data on how many times a hack (attack) has occurred associated to breaking the “launch codes” from outside of the organization?  The last information I gleaned from the FBI reports (several years ago) indicated that 70 percent of hackings (attacks) were internal.

My most recent experience with intrusions has had nothing to do with a compromised password, rather an exploit of some vulnerability in the OS, database, or application.

I replied:

I track these things, and I cannot recall the last time I saw any report of an incident caused by a guessed password.  Most common incidents are phishing, trojans, snooping, physical theft of sensitive media, and remote exploitation of bugs.

People devote huge amounts of effort to passwords because it is one of the few things they think they can control. 

Picking stronger passwords won’t stop phishing.  It won’t stop users downloading trojans.  It won’t stop capture of sensitive transmissions.  It won’t bring back a stolen laptop (although if the laptop has proper encryption it *might* protect the data).  And passwords won’t ensure that patches are in place or that flaws are absent.

Creating and forcing strong password policies is akin to being the bosun ensuring that everyone on the Titanic has locked their staterooms before they abandon ship.  It doesn’t stop the ship from sinking or save any lives, but it sure does make him look like he’s doing something important…

That isn’t to say that we should be cavalier about setting passwords.  It is important to try to set strong passwords, but once reasonably good ones are set in most environments the attacks are going to come from other places—password sniffing, exploitation of bugs in the software, and implantation of trojan software.

As a field, we spend waaaaay too much time and resources on palliative measures rather than fundamental cures.  In most cases, fiddling with password rules is a prime example.  A few weeks ago, I blogged about a related issue.

Security should be based on sound risk assessment, and in most environments weak passwords don’t present the most significant risk.
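A back-of-the-envelope calculation (mine, not from the original exchange) illustrates the point about diminishing returns.  The assumed rate of 10^9 guesses per second is an arbitrary offline-cracking figure; the exact number doesn’t matter, since none of these keyspaces do anything against phishing, trojans, or sniffing.

```python
# Time to exhaust a password keyspace at an assumed offline guessing rate.
# The rate below is a hypothetical round number, not a measured figure.
def years_to_exhaust(alphabet_size, length, guesses_per_second=1e9):
    keyspace = alphabet_size ** length
    seconds = keyspace / guesses_per_second
    return seconds / (3600 * 24 * 365)

policies = {
    "8 chars, lowercase only (26 symbols)":       (26, 8),
    "8 chars, mixed case + digits (62 symbols)":  (62, 8),
    "12 chars, lowercase only (26 symbols)":      (26, 12),
}
for name, (alphabet, length) in policies.items():
    print(f"{name}: {years_to_exhaust(alphabet, length):.3g} years")
```

Length helps far more than composition rules, yet even the best of these falls to a patient offline attacker, while a phisher bypasses all three in one email.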

Gazing in the Crystal Ball

[tags]future technology, cyber security predictions, malware, bots, privacy, cyber crime[/tags]
Four times in the last month I have been contacted by people asking my predictions for future cyber security threats and protections.  One of those instances will be as I serve on a panel at the Information Security Decisions Conference in Chicago next week; we’ll be talking about the future of infosec. 

Another instance when I was contacted was by the people at Information Security magazine for their upcoming 10th anniversary issue.  I was interviewed back in 2002, and my comments were summarized in a “crystal ball” article.  Some of those predictions were more like trend predictions, but I think I did pretty well.  Most happened, and a couple may yet come to pass (I didn’t say they would all happen in 5 years!). I had a conversation with one of the reporters for the Nov 2007 issue, and provided some more observations looking forward.

After answering several of these requests, I thought it might be worthwhile to validate my views.  So, I wrote up a list of things I see happening in security as we go forward.  Then I polled what I thought was a small set of colleagues; through an accident of mail aliases, a larger group of experts got my query.  (The mailer issue may be fodder for a future blog post.)  I got about 20 thoughtful replies from some real experts and deep thinkers in the field.

What was interesting is that while reading the replies, I found only a few minor differences from what I had already written!  Either that means I have a pretty good view of what’s coming, or else the people I asked are all suffering under the same delusions. 

Of course, none of us made predictions as are found in supermarket tabloids, along the lines of “Dick Cheney will hack into computers running unpatched Windows XP at the Vatican in February in an attempt to impress Britney Spears.”  Although we might generate some specific predictions like that, I don’t think our crystal balls have quite the necessary resolution.  Plus, I’m sure the Veep’s plans along those lines are classified, and we might end up in Gitmo for revealing them.  Nonetheless, I’d like to predict that I will win the Powerball Lottery, but will be delayed collecting the payout because Adriana Lima has become so infatuated with me, she has abducted me.  Yes, I’d like to predict that, but I think the Cheney prediction might be more likely….

But seriously, here are some of my predictions/observations of where we’re headed with cyber security.  (I’m not going to name the people who responded to my poll, because when I polled them I said nothing about attributing their views in public; I value my friends’ privacy as much or more than their insights!  However, my thanks again to those who responded.) 

If all of these seem obvious to you, then you are probably working in cyber security or have your own crystal ball.

Threats
Expect attack software to be the dominant threat in the coming few years.  As a trend, we will continue to see fewer overt viruses and worm programs as attacks, but continuing threats that hijack machines with bots, trojans, and browser subversion. Threats that self-modify to avoid detection, and threats that attack back against defenders will make the situation even more challenging.  It will eventually be too difficult to tell if a system is compromised and disinfect it—the standard protocol will be to reformat and reinstall upon any question.

Spam, pop-up ads, and further related advertising abuses will grow worse (as difficult as that is to believe), and will continue to mask more serious threats.  The ties between spam and malware will increase.  Organized crime will become more heavily involved in both because of the money to be made coupled with the low probability of prosecution.

Extortion based on threats to integrity, availability, or exposure of information will become more common as systems are invaded and controlled remotely.  Extortion of government entities may be threatened based on potential attacks against infrastructure controls.  These kinds of losses will infrequently be revealed to the public.

Theft of proprietary information will increase as a lucrative criminal activity.  Particularly targeted will be trade secret formulations and designs, customer lists, and supply chain details.  The insider threat will grow here, too.

Expect attacks against governmental systems, and especially law enforcement systems, as criminals seek to remove or damage information about themselves and their activities.

Protections
Fads will continue and will seem useful to early adopters, but as greater roll-out occurs, deficiencies will be found that will make them less effective—or possibly even worse than what they replace.  Examples include overconfident use of biometrics and over-reliance on virtualization to protect systems.  Mistaken reliance on encryption as a solution will also be a repeated theme.

We will continue to see huge expenditures on R&D to retrofit security onto fundamentally broken technologies rather than on re-engineering systems according to sound security principles.  Governments and many companies will continue to stress the search for “new” ideas without adequately applying older, proven techniques that might be somewhat inconvenient even though effective.

There will be continued development of protection technologies out of proportion to technologies that will enable us to identify and punish the criminals.  It will be a while before the majority of people catch on that passive defense alone is not enough and begin to appropriately capitalize investigation and law enforcement.  We will see more investment in scattered private actions well before we see governments stepping up.

White-listing and integrity management solutions will become widely used by informed security professionals as they become aware of how impossible it is to detect all bad software and behavior (blacklisting).  Meanwhile, because of increasing stealth and sophistication of attacks, many victims will not realize that their traditional IDS/anti-virus solutions based on blacklists have failed to protect them. 

White-listing will also obviate the competition among some vendors to buy vulnerabilities, and solve the difficulty of identifying zero-day attacks, because it is not designed to trigger on those items.  However, it may be slow to be adopted because so much has been invested in traditional blacklist technologies: firewalls, IDS/NIDS/IPS, antivirus, etc.
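The contrast can be sketched in a few lines: a white-list permits only content whose cryptographic digest is already approved, instead of trying to enumerate everything bad.  This is my minimal illustration of the idea, not any vendor’s product; the “program” bytes are hypothetical stand-ins for vetted binaries.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Digest used as the integrity fingerprint of a program."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for a vetted binary; a real deployment would enroll the hashes
# of every approved executable on the system.
approved_program = b"#!/bin/sh\necho hello\n"
ALLOW_LIST = {sha256(approved_program)}

def may_execute(content: bytes) -> bool:
    """Allow execution only if the content's digest is on the allow list."""
    return sha256(content) in ALLOW_LIST

print(may_execute(approved_program))               # True: enrolled binary
print(may_execute(b"#!/bin/sh\nrm -rf /tmp/x\n"))  # False: unknown code
```

Note why zero-days stop mattering for this check: a never-before-seen attack tool is simply not on the list, so there is no signature to buy or update.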

Greater emphasis will be placed on positive identity management, both online and in the physical world.  Coupled with access control, this will provide some solutions but further erode privacy.  Thus, it is uncertain how widely these technologies will be embraced.  TSA and too much of the general public will still believe that showing a picture ID somehow improves security, so the way ahead in authentication/identification is uncertain.

Personnel
We will continue to see more people using sensitive systems, but not enough people trained in cyber protection.  This will continue some current trends such as people with questionable qualifications calling themselves “experts,” and more pressure for certifications and qualifications to demonstrate competence (and more promotion of questionable certifications to meet that need).

Many nations will face difficulties finding appropriately educated and vetted experts who are also capable of getting national-level clearances.  Industry may also find it difficult to find enough trained individuals without criminal records, which will lead to greater reliance on outsourcing.  It will also mean that we will continue to see instances where poorly-informed individuals mistakenly think that single technologies will solve all their problems—with firewalls and encryption being two prime examples.

Personnel for after-the-fact investigations (both law enforcement and civil) will be in high demand and short supply.

Much greater emphasis needs to be placed on educating the end-user population about security and privacy, but this will not receive sufficient support or attention. 

The insider threat will become more pronounced because systems are mostly still being designed and deployed with only perimeter defenses.

Milieu
Crime, identity theft, and violations of privacy will increasingly become part of public consciousness.  This will likely result in reduction of trust in on-line services.  This may also negatively impact development of new services and products, but there will still be great adoption of new technologies despite their unknown risk models; VoIP is an example.

Some countries will become known as havens for computer criminals.  International pressure will increase on those countries to become “team players” in catching the criminals.  This will not work well in those countries where the government has financial ties to the criminals or has a political agenda in encouraging them.  Watch for the first international action (financial embargo?) on this issue within the next five years.

We will see greater connectivity, more embedded systems, and less obvious perimeters.  This will require a change in how we think about security (push it into the devices and away from network core, limit functionality), but the changes will be slow in coming.  Advertisers and vendors will resist these changes because some of their revenue models would be negatively impacted.

Compliance rules and laws will drive some significant upgrades and changes, but not all will be appropriate as the technology changes.  Some compliance requirements may actually expose organizations to attack.  Related to compliance, the enforcement of external rights (e.g., copyright using DRM) will lead to greater complexity in systems, more legal wrangling, and increased user dissatisfaction with some IT products.

More will be spent in the US on DRM enforcement and attempts to restrict access to online pictures of naked people than is likely to be spent on cybersecurity research.  More money will be spent by the US government ensuring that people don’t take toothpaste in carry-on luggage on airplanes than will be spent on investigating and prosecuting computer fraud and violations of spam laws.

Government officials will continue to turn to industry for “expert advice”—listening to the same people who have built multinational behemoths by marketing the unsafe products that got us into this mess already.  (It’s the same reason they consult the oil executives on how to solve global warming.)  Not surprisingly, the recommendations will all be for strongly worded statements and encouragement, but not real change in behavior.

We will see growing realization that massive data stores, mirroring, RAID, backups and more mean that data never really goes away.  This will be a boon to some law enforcement activities, a terrible burden for companies in civil lawsuits, and a continuing threat to individual privacy.  It will also present a growing challenge to reconcile different versions of the same data in some meaningful way.  Purposeful pollution of the data stores around the world will be conducted by some individuals to make the collected data so conflicted and ambiguous that it cannot be used.

Overall bottom line:  things are going to get worse before they get better, and it may be a while before they do.


Hypocritical Security Conference Organizers

Every once in a while, I receive spam for security conferences I’ve never heard of, let alone attended.  Typically the organizers of these conferences are faculty members, professors, or government agency employees who should know better than to hire companies to spam for them.  I suppose that hiring a third party provides plausible deniability.  It’s hypocritical.  To be fair, I once received an apology for a spamming, which demonstrated that those involved understood integrity.

It’s true that it’s only a minor annoyance.  But, if you can’t trust someone for small things, should you trust them for important ones?