Do we need a new Internet?
Short answer: "Almost certainly, no."
Longer answer:
The blogosphere is abuzz with comments on John Markoff's Saturday NY Times piece, Do We Need a New Internet? John got some comments from me about the topic a few weeks back. Unfortunately, I don't think a new Internet will solve the problems we are facing.
David Akin, a journalist/blogger, commented nicely on John's post. In it, he quoted one of my posts to Dave Farber's IP list, which I then turned into a longer post in this blog. Basically, I noted that the Internet itself is not the biggest problem. Rather, it is the endpoints, the policies, the economics, and the legal environment that make things so difficult. It is akin to blaming the postal service because people manage to break into our houses by slipping their arms through the mail slots, or because we leave the door unlocked "just in case" a package is going to be delivered.
Consider that some estimates of losses as a result of computer crime and fraud run to many billions of dollars per year. (Note my recent post on a part of this.) Consider how much money is repeatedly spent on reissuing credit and debit cards because of loss of card info, restoring systems from backups, and trying to remove spyware, bots, viruses, and the like. Consider how much is spent on defensive mechanisms that only work in limited cases -- anti-virus, IDS, firewalls, DLP, and whatever the latest fad might be.
What effect does all that have on global finances? It is certainly a major drag on the economy. This was one of the conclusions (albeit described as "friction") of the CSTB report Toward a Safer and More Secure Cyberspace, which did not seem to get much attention upon release.
Now, think about the solutions being put forward, such as putting all your corporate assets and sensitive records "out in the cloud" somewhere, on servers that are likely less well-protected or isolated than the ones being regularly compromised at the banks and card processors. But it will look cheaper because organizations won't need to maintain resources in-house. And it is already being hyped by companies, and seemingly being promoted by the NSF and CCC as "the future." Who can resist the future?
Next, stir in the economic conditions where any talk is going to be dismissed immediately as "crazy" if it involves replacing infrastructure with something that (initially) costs more, or that needs more than a minor change of business processes. And let's not forget that when the economy goes bad, more criminal behavior is likely as people seek value wherever they can find it.
The institutional responses from government and big vendors will be more of the same: update the patches, and apply another layer of gauze.
I have long argued that we should carefully re-examine some of the assumptions underlying what we do rather than blindly continue doing the same things. People are failing to understand that many important things have changed since we first started building computing artifacts! That means we might have better solutions if we really thought about the underlying problems from first principles.
I recently suggested this rethinking of basic assumptions to a few senior leaders in computing research (who shall remain nameless, at least within this posting) and was derided for not thinking about "new frontiers" for research. There is a belief among some in the research community (especially at the top universities) that the only way we (as a community; or perhaps more pointedly, they and their students) will get more research funding and premier publications is by pushing "new" ideas. This is partly the fault of the government agencies and companies, which aren't willing to support revisiting basic ideas and concepts because they want fixes to their existing systems now!
One part of Markoff's article that makes sense is about the research team making something that is effectively "plug compatible" with existing systems. That is roughly where a longer-term solution lies. If we can go back and devise more secure systems and protocols, we don't need to deploy them everywhere at once: we can gradually phase them in, exactly as we do with periodic refreshes of current systems. There is not necessarily an impassable divide between what we need and what we can afford.
I'm sorry to say that I don't see necessary changes occurring any time soon. It would upset too much of the status quo for too many parties. Thus, the situation isn't going to get better -- it's going to get worse -- probably much worse. When we finally get around to addressing the problems, it will be more expensive and traumatic than it needed to be.
As I noted before:
"Insanity: doing the same thing over and over again expecting different results."
Of course, my continued efforts to make this point could be branded insane. ;-)
An Aside
Over a decade ago, I gave several talks where I included the idea of having multiple "service network" layers on top of the Internet -- effectively VPNs. One such network would be governed by rules similar to those of the current Internet. A second would use cryptographic means to ensure that every packet was identified. This would be used for commercial transactions. Other such virtual networks would have different ground rules on authentication, anonymity, protocols and content. There would be contractual obligations to be followed to participate, and authorities could revoke keys and access for cause. Gateways would regulate which "networks" organizations could use. The end result would be a set of virtual networks on the Internet at large, similar to channels on a cable service. Some would be free-for-all and allow anonymous posting, but others would be much more regulated, because that is what is needed for some financial and government transactions.
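The idea can be sketched as a toy model. Everything here is an illustrative assumption rather than a proposed wire format: each authenticated "service network" has its own key, every packet on it carries a verifiable tag, and the open network carries packets untagged.

```python
import hashlib
import hmac
import os

# Toy model only: network names, key handling, and the tag format are
# illustrative assumptions, not a design. An HMAC stands in for whatever
# cryptographic identification a real deployment would use.

NETWORK_KEYS = {"commerce": os.urandom(32)}  # the "open" network has no key

def tag_packet(network, payload):
    key = NETWORK_KEYS.get(network)
    if key is None:            # open, anonymous network: send as-is
        return payload
    mac = hmac.new(key, payload, hashlib.sha256).digest()
    return mac + payload       # prepend the 32-byte tag

def accept_packet(network, packet):
    """Return the payload if the packet is acceptable on this network."""
    key = NETWORK_KEYS.get(network)
    if key is None:
        return packet          # open network: everything is accepted
    mac, payload = packet[:32], packet[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(mac, expected) else None
```

A gateway enforcing the contractual rules of each virtual network would sit where `accept_packet` does: the open channel admits anything, while the "commerce" channel drops any packet whose tag does not verify.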
I remember one audience at an early SANS conference at the time was so hostile to the idea that members began shouting objections before I could even finish my talk. I also couldn't find a venue willing to publish a speculative essay on the topic (although I admit I only tried 2-3 places before giving up). The general response was that it would somehow cut out the possibility for anonymous and experimental behavior because no one would want to use the unauthenticated channels. It was reminiscent of the controversy when I was the lead in the Usenet "Great Renaming."
The problem, of course, is that if we try to support conflicting goals such as absolute anonymity and strong authentication on the same network we will fail at one or the other (or both). We can easily find situations where one or the other property (as simply two examples of properties at stake) is needed. So long as we continue to try to apply patches onto such a situation before reconsidering the basic assumptions, we will continue to have unhappy failures.
But as a bottom line, I simply want to note that there is more than one way to "redesign the Internet" but the biggest problems continue to be the users and their expectations, not the Internet itself.
A Modest Proposal
Yesterday and today I was reading repeated news stories about the pending bailout -- much of it intended to prop up companies with failed business models and incompetent management. Also distressing are the stories of extravagant bonuses for financial managers who are likely responsible for creating some of the same economic mess that is causing so much turmoil in world markets.
Running through my mind was also a re-reading of the recent statement by Norman Augustine before the U.S. House Democratic Steering and Policy Committee (it's short and a great read -- check it out). I definitely resonate with his comments about how we should invest in our technology and research to ensure that our country has a future.
And I was thinking about how can we reward a spirit of honest hard work rather than a sense of entitlement, and avoid putting money into industries where greed and incompetence have led to huge disasters, where those same miscreants are using the full weight of political pressure to try to get huge chunks of bailout money to continue their reign of error.
And all this came together when I saw a story about the lack of medical treatment and high rate of suicides for returning military after extended tours in the battlefield. And then I read this story and this story about the homeless -- just two out of many recent stories.
Why can't we direct a little of our national wealth into a new GI Bill, similar to the original in 1944? Provide money so that our men and women who are returning from honorable service to the country can get the counseling and medical care they need. And then, ship them off to colleges for a degree. If they show real promise and/or have a degree already, then cover a graduate degree.
These are people who volunteered years out of their lives to serve the interests of the rest of us. They were willing to put their lives on the line for us. And some died. And others have suffered severe physical and psychological trauma. They have shown they are able to focus, sacrifice, and work hard. My experience over the last two decades has shown me that most veterans and active-duty military personnel make good students for those reasons.
Service doesn't provide intellectual ability, certainly, and not all can excel, but I am certain that many (if not most) can do well given the chance. And if regular college isn't the right fit, then a vocational education program or appropriate apprenticeship should be covered.
Money should be allocated for additional counseling and tutoring for these students, too. They are likely to have a greater range of physical and psychological needs than the usual student population, and we should address that. And money will need to be allocated to help provide the facilities to house and teach these students.
While we're at it, maybe the same should be offered to those who have provided other service, such as in the AmeriCorps or Peace Corps? And perhaps those who take new jobs helping rebuild the nation's infrastructure. I'm not a politician or economist, so I'm not sure what we should do for details, but the basic idea would be that someone who gives 4 years of service to the country should get 2-4 years of college tuition, fees, room and board.
We might also want to structure it so that degrees in the STEM (Science, Technology, Engineering and Math) disciplines have some form of extra preference or encouragement, although we should not discourage any valid course of study -- except we should definitely not fund any degrees in finance!
Then maybe give a tax credit to any companies that hire these people after they graduate.
And make this good for anyone who served since, oh, 2001, and for anyone who signs up for one of the covered services before, say, 2015. If those dates don't make sense, then pick others. But extend it forward to help encourage people to join some of the covered services -- they could certainly use more people -- and start far enough back.
Yes, I know there are currently educational benefits and health benefits for our veterans, but I am suggesting something more comprehensive for both, and for possibly a larger group. I'm suggesting that we invest in our future by making sure we do our utmost to patch up the injuries suffered in duty to our fellows, give them a shot at a better future. And that better shot is not to turn them out into our cities where there are no jobs, where the homeless live, where drugs and street crime may be rampant.
The whole process would give a huge boost of education to the workforce we want to have for the future. We don't need more assembly line workers. We need engineers, technologists, scientists and more. Not only will we be educating them, but the endorsement we would be making about the importance of education, and the care for those who serve, will pay back indirectly for decades to come. It worked in the 40s and 50s, and led to huge innovations and a surge in the economy.
Will it all be expensive? Undoubtedly. But I'm guessing it is far less than what is in the budget bills for bank bailouts and propping up failed industrial concerns.
And when it is done, we will have something to show for our investment -- far more than simply some rebuilt roads and a re-emergence of predatory lending.
But as I said, I'm not a politician, so what do I know?
Update: I have learned that there is a new GI bill, passed last year, which addresses some of the items I suggested above. Great! It doesn't cover quite the breadth of what I suggested, and only covers the military. Somehow, I missed this when I did my web search....
Customer (dis)service
As our technology becomes more complex, it is often shipped with flaws and missing features. The evolution of the Internet coupled with a "must ship" attitude has led to a number of interesting business practices. One in particular, remote updates/patching, presents some interesting reliability issues.
One of the best known versions of remote patching is the software update function, currently found in many computer applications, and in most common operating systems. In its usual form, this is a system that can download patches or whole new software artifacts to address a newly-discovered security vulnerability. Some systems are automated, but most require manual intervention. The current systems generally only involve security fixes and no functionality improvements -- the functionality improvements may or may not be bundled in less frequent updates (service packs), or they may be deferred into a major revision that requires additional payment.
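To make the reliability concern concrete, here is a minimal sketch of the basic safeguard an updater can apply before installing anything: refuse an image whose digest does not match one published through a separate, trusted channel. The digest in the usage below is computed for the example, not a real vendor value.

```python
import hashlib
import hmac

# Minimal sketch, not any vendor's actual mechanism: an updater should
# refuse to apply a downloaded image unless its digest matches the one
# obtained from a trusted, separately delivered manifest.

def verify_update(image, expected_sha256_hex):
    actual = hashlib.sha256(image).hexdigest()
    # compare_digest avoids leaking where the comparison diverges
    return hmac.compare_digest(actual, expected_sha256_hex)
```

A real mechanism would verify a public-key signature over the manifest as well, but even this simple check prevents installing an image corrupted in transit.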
Many of us working in security and reliability have expressed concern about these updates, because if the update mechanism is somehow hijacked by an attacker, it can be used to quickly distribute malware to large numbers of systems at once. There have also been examples where updates accidentally deleted critical files or provided faulty configurations, thus disabling or degrading many, many machines at once (for example this one and this one). Most vendors have elaborate systems in place to test and verify such patches, and they have plans in place to quickly respond if something goes wrong.
Now, we're seeing the same concerns begin to occur with consumer goods that aren't primarily intended as interactive computers. I can speak from personal (unfortunate) experience that at least one major vendor appears clueless and not customer-friendly.
I recently purchased 2 Samsung Blu-Ray DVD players: a BD-P2500, and a BD-P1500. Both have Internet connections for firmware updates and Blu-Ray Live. The BD-P2500 also supports live streaming of Netflix content.
A couple of days after Christmas, the 2500 froze up. I could not get it to respond to anything, including the factory reset code. I contacted Samsung and was given information to send the player in for service -- it was still within warranty. They've had it for nearly 2 weeks with a status of "waiting for parts." It has now been broken longer than it was working, with still no prognosis about when it might be returned.
No problem -- I still have the other player I can use, right?
The 1500 came up with an on-screen message early in the week that a firmware update was available. Having had experience with downloads and upgrades of OS components, I waited a couple of days before doing anything. When I initiated the download, it completed without error, according to the display. However, after completion, it too was dead -- no response to anything, including reset codes. So, I called Samsung again. The problem was escalated in customer service. This is what I was told:
- There was a bad update put on the servers, and many players that got the download have frozen up
- They do not have a fix for it at the current time and have no idea when one will be available
- I should check their WWW site once a week to see when an update is available. "It should almost certainly be within a month."
- Even though it is their fault for putting up a bad firmware update, if I am required to send in the player, it is now out of warranty for service so it is my own expense.
It seems fairly clear that Samsung has a major problem in testing and assurance, and a surprising lack of concern for customer support. It also sounds like they don't have much of a handle on what it will take to fix a locked-up player.
I wonder how many other people around the world are stuck with non-functional players and a vague answer about the fix? It could well be in the thousands. And the best they can offer us is to check the WWW site once a week to see when they are ready for us to pay to install a fix to a problem they caused in the first place!
As someone who works in security and reliability, I can see all sorts of interesting problems here involving updates to consumer appliances. The problems are magnified by incomplete or incompetent responses from the vendors. It certainly suggests that consumers should press vendors to ship things that work correctly and don't require updates -- or that at least have a fail-safe state that allows recovery! Imagine losing the use of your TV, phone, refrigerator or car indefinitely because of a faulty update caused by the vendor. For those with malice in mind, this would be a great way to harm a company -- and maybe to extort some money as "protection."
As a consumer, I'm rather angry. I don't expect to buy anything else made by Samsung, and I certainly won't recommend them to anyone else. You may choose to use this as a cautionary tale in your own pursuit of consumer items and choose another vendor that is more careful with their updates, and more considerate of customers who have paid for their products. And if you have one of the frozen players with some idea how to recover it to working condition, I'd be interested in hearing about it.
Sadly, caveat emptor.
Update 01/19/09: Samsung is shipping me a replacement for my bricked P2500. It left their plant on Friday, surface UPS. So, that will be a 3-week turnaround.
Meanwhile, I called the service number again about the P1500 and pressed until they escalated me to "executive response." (Third or fourth level customer service, I guess.) I kept reminding them that it was their firmware update that caused the problem. After 30 minutes on the phone, I must have worn them down sufficiently: they extended the warranty through this week, and are providing me the shipping information to send it in for service under warranty. Hooray!
Unlike last week, the personnel I talked with today were uniformly helpful and informative. I wonder if they have had enough complaints that there has been a change in policy? Or did I just get two really bad service reps in a row last week?
Nonetheless, the bad updates and the lack of a failsafe are really poor design.
Follow-up on the CA Hack
Yesterday, I posted a long entry on the recent news about how some researchers obtained a "rogue" certificate from one of the Internet Certificate Authorities. There are some points I missed in the original post that should be noted.
- The authors of the exploit have a very readable, interesting description of what they did and why it worked. I should have included a link to it in the original posting, but forgot to edit it in. The interested reader should definitely see that article online, including the animations.
- There are other ways this attack can be defeated, certainly, but they are stop-gap measures. I didn't explain them because I view them as nothing more than quick patches. However, if you are forced to continue to use MD5 and you issue certificates, then it is important to randomize the certificate serial number that is issued, and to insert a random delay in the validity time field. Both will introduce enough unpredictable bits to make this particular attack against the CA infeasible given current technology.
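As a minimal sketch, the two mitigations just described amount to injecting unpredictable bits into the fields the CA signs, so an attacker cannot precompute a colliding certificate. Field sizes and names here are illustrative, not taken from any standard.

```python
import secrets

# Hedged sketch of the stop-gap mitigations: make the to-be-signed bytes
# unpredictable to the requester. Sizes below are illustrative assumptions.

def random_serial(bits=64):
    # Unpredictable serial number: the attacker cannot guess in advance
    # what bytes the CA will sign.
    return secrets.randbits(bits)

def randomized_not_before(requested_epoch, max_delay_seconds=1024):
    # A small random delay in the validity start time adds further
    # unpredictable bits to the signed fields.
    return requested_epoch + secrets.randbelow(max_delay_seconds)
```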
- I suggested that vendors use another hash algorithm, and gave SHA-1 as an example. SHA-2 would be better, as SHA-1 has been shown to have a theoretical weakness similar to MD5's, although it has proven more resistant to attack to date. Use of SHA-1 could possibly result in a similar problem within a few years (or, as suggested in the final part of my post, within a few weeks if a breakthrough occurs). However, use of SHA-1 would still be preferable to MD5!
- MD5 is not "broken" in a complete way. There are several properties of a message digest that are valuable, including collision resistance: that it is infeasible to end up with two inputs giving the same hash value. To the best of my knowledge, MD5 has only been shown to be susceptible to collisions where the attacker picks both inputs so as to produce identical hash values. The stronger property of preimage resistance -- given an arbitrary hash output H, an attacker cannot construct an input that produces H -- still holds for MD5. Thus, applications that depend on this property (including many signing applications and integrity tools) are apparently still okay.
- One of our recent PhD grads, William Speirs, worked on defining hash functions for his PhD dissertation. His dissertation, Dynamic Cryptographic Hash Functions, is available online for those interested in seeing it.
I want to reiterate that there are more fundamental issues of trust involved than which hash function is used. The whole nature of certificates is based on how much we trust the certificate authorities that issue them, and on the correctness of the software that verifies those certificates and then shows us the results. If an authority is careless or rogue, then the certificates may be technically valid but not match our expectations for validity. If our local software (such as a WWW browser) incorrectly validates a certificate, or presents the results incorrectly, we may trust a certificate we shouldn't. Even such mundane issues as having one's system set to the correct time and date can be important: the authors of this particular hack demonstrated that by backdating their rogue certificate.
My continuing message to the community is to not lose sight of those things we assume. Sometimes, changes in the world around us render those assumptions invalid, and everything built on them becomes open to question. If we forget those assumptions -- and our chains of trust built on them -- we will continue to be surprised by the outcomes.
That is perhaps fitting to state (again) on the last day of the year. Let me observe that as human beings we sometimes take things for granted in our lives. Spend a few moments today (and frequently, thereafter) to pause and think about the things in your life that you may be taking for granted: family, friends, love, health, and the wonder of the world around you. Then as is your wont, celebrate what you have.
Best wishes for a happy, prosperous, safe -- and secure -- 2009.
[12/31/08 Addition]: Steve Bellovin has noted that transition to the SHA-2 hash algorithm in certificates (and other uses) would not be simple or quick. He has written a paper describing the difficulties and that paper is online.
A Serious Threat to Online Trust
There are several news stories now appearing (e.g., Security News) about a serious flaw in how certificates used in online authentication are validated. Ed Felten gives a nice summary of how this affects online WWW site authentication in his Freedom to Tinker blog posting. Brian Krebs also has his usual readable coverage of the problem in his Washington Post article. Steve Bellovin has some interesting commentary, too, about the legal climate.
Is there cause to be concerned? Yes, but not necessarily about what is being covered in the media. There are other lessons to be learned from this.
Short tutorial
First, for the non-geek reader, I'll briefly explain certificates.
Think about how, online, I can assure myself that the party at the other end of a link is really who they claim to be. What proof can they offer, considering that I don't have a direct link? Remember that an attacker can send any bits down the wire to me and may have access to faster computers than I do.
I can't base my decision on how the WWW pages appear, or embedded images. Phishing, for instance, succeeds because the phishers set up sites with names and graphics that look like the real banks and merchants, and users trust the visual appearance. This is a standard difficulty for people -- understanding the difference between identity (claiming who I am) and authentication (proving who I am).
In the physical world, we do this by using identity tokens that are issued by trusted third parties. Driver's licenses and passports are two of the most common examples. To get one, we need to produce sufficient proof of identity to a third party to meet its standards of proof. Then, the third party issues a document that is very difficult to forge (almost nothing is impossible to forge or duplicate -- but some things require so much time and expense that it isn't worthwhile). Because the criteria for proof of identity and the strength of construction of the document are known, various other parties will accept the document as "proof" of identity. Of course, other problems occur that I'm not going to address -- this USACM whitepaper (of which I was principal author) touches on many of them.
Now, in the online world we cannot issue or see physical documents. Instead, we use certificates. We do this by putting together an electronic document that gives the information we want some entity to certify as true about us. The format of this certificate is generally fixed by standards, the most common being the X.509 suite. This document is sent to an organization known as a Certificate Authority (CA), usually along with a fee. The certificate authority is presumably well-known, and performs a check (to its own standards) that the information in the document is correct and has the right form. The CA then calculates a digital hash value of the data, and creates a digital signature of that hash value. This is added to the certificate and sent back to the user. It is the equivalent of putting a signature on a license and then sealing it in plastic. Any alteration of the data will change the digital hash, and a third party will find that the new hash and the hash value signed with the key of the CA don't match. The reason this works is that the hash function and encryption algorithm used are presumed to be so computationally difficult to subvert that forgery is basically not possible.
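The hash-then-sign step can be sketched in a few lines. This is a toy: an HMAC under a secret key stands in for the CA's public-key signature, so the example needs nothing beyond the standard library. The point it shows is why any alteration of the certificate data is detected.

```python
import hashlib
import hmac
import os

# Toy sketch of hash-then-sign. The HMAC is a stand-in for the CA's
# real public-key signature; the key below is purely illustrative.

CA_KEY = os.urandom(32)  # stand-in for the CA's private signing key

def ca_sign(cert_data):
    digest = hashlib.sha256(cert_data).digest()              # hash the data
    return hmac.new(CA_KEY, digest, hashlib.sha256).digest() # "sign" the hash

def ca_verify(cert_data, signature):
    digest = hashlib.sha256(cert_data).digest()
    expected = hmac.new(CA_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

Changing even one byte of the certificate data changes the digest, so the signature over the original digest no longer verifies.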
As an example of a certificate, if you visit "https://www.cerias.purdue.edu" you can click on the little padlock icon that appears somewhere in the browser window frame (this is browser dependent) to view details of the CERIAS SSL certificate.
You can get more details on all this by reading the referenced Wikipedia pages, and by reading chapters 5 & 7 in Web Security, Privacy and Commerce.
Back to the hack
In summary, some CAs have been negligent about updating their certificate signing mechanisms in the wake of news, published back in 2004, that MD5 is weak. The result is that malicious parties can generate and obtain a certificate "authenticating" them as someone else. What makes it worse is that the root certificates of most of these CAs are "built in" to browser and application trust lists to simplify look-up of new certificates. Thus, most people using standard WWW browsers can be fooled into thinking they have connected to real, valid sites -- even though they are connecting to rogue sites.
The approach is simple enough: a party constructs two certificates. One is for the false identity she wishes to claim, and the other is real. She crafts the contents of the certificate so that the MD5 hash of the two, in canonical format, is the same. She submits the real identity certificate to the authority, which verifies her bona fides, and returns the certificate with the MD5 hash signed with the CA private key. Our protagonist then copies that signature to the false certificate, which has the same MD5 hash value and thus the same digital signature, and proceeds with her impersonation!
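The mechanics of that signature transplant can be demonstrated with a deliberately weak toy hash (a 16-bit truncation of SHA-256 stands in for MD5's broken collision resistance, and an HMAC stands in for the CA's signature; all names here are illustrative). Because the signature covers only the hash, it transfers to any document with the same hash value.

```python
import hashlib
import hmac
import os

def weak_hash(data):
    # 16-bit truncation of SHA-256: a stand-in for a digest whose
    # collision resistance has failed, weak enough to collide by search.
    return hashlib.sha256(data).digest()[:2]

CA_KEY = os.urandom(32)  # stand-in for the CA's private signing key

def ca_sign(cert):
    return hmac.new(CA_KEY, weak_hash(cert), hashlib.sha256).digest()

def ca_verify(cert, sig):
    return hmac.compare_digest(ca_sign(cert), sig)

def craft_colliding_pair(real_prefix, rogue_prefix):
    # Birthday-style search: with a 16-bit hash, a modest number of
    # candidates on each side yields a cross-collision.
    rogue_by_hash = {}
    for i in range(100_000):
        rogue = rogue_prefix + str(i).encode()
        rogue_by_hash.setdefault(weak_hash(rogue), rogue)
    for i in range(100_000):
        real = real_prefix + str(i).encode()
        match = rogue_by_hash.get(weak_hash(real))
        if match is not None:
            return real, match
    raise RuntimeError("no collision found; raise the search bound")
```

The attacker submits the "real" certificate for signing, then attaches the returned signature to the rogue one; `ca_verify` accepts both, which is exactly the failure exploited against MD5.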
What makes this worse is that the false certificate she crafts is for a secondary certificate authority. She can publish it in appropriate places, and is now able to mint as many false certificates as she wishes -- and they will all have signatures that verify in the chain of trust back to the issuer! She can even issue these new certificates using a stronger hash algorithm than MD5!
What makes this even worse is that it has been known for years that MD5 is weak, yet some CAs have continued to use it! Particularly unfortunate is the realization that Lenstra, Wang and de Weger described how this could be done back in 2005. Methinks that may be grounds for some negligence lawsuits if anyone gets really burned by this....
And adding to the complexity of all this is the issue of certificates in use for other purposes. For example, certificates are used with encrypted S/MIME email to digitally sign messages. Certificates are used to sign ActiveX controls for Microsoft software. Certificates are used to verify the information on many identity cards, including (I believe) government-issued Common Access Cards (CAC). Certificates also provide identification for secured instant messaging sessions (e.g., iChat). There may be many other sensitive uses because certificates are a "known" mechanism. Cloud computing services, software updates, and more may be based on these same assumptions. Some of these services may accept and/or use certificates issued by these deficient CAs.
Fixes
Fixing this is not trivial. Certainly, all CAs need to start issuing certificates based on other message digests, such as SHA-1. However, this will take time and effort, and may not take effect before the problem can be exploited by attackers. Responsible issuers will cease to issue MD5-based certificates until they get this fixed, but that has an economic impact some may not wish to incur.
We can try to educate end-users about this, but the problem is so bound up in technical details that the average person won't know how to make a determination about valid certificates. It might even cause more harm by leading people to distrust valid certificates by mistake!
It is not possible to simply say that all existing applications will no longer accept certificates rooted at those CAs, or will not accept certificates based on MD5: there are too many extant, valid certificates in place to do that. Eventually, those certificates will expire, and be replaced. That will eventually take care of the problem -- perhaps within the space of the next 18 months or so (most certificates are issued for only a year at a time, in part for reasons such as this).
Vendors of applications, and especially WWW browsers, need to give careful thought about updates to their software to flag MD5-based certificates as deserving of special attention. This may or may not be a worthwhile approach, for the reason given above: even with a warning, too few people will be able to know what to do.
Bigger issue
We base a huge amount of trust on certificates and encryption. History has shown how easy it is to get implementations and details wrong. History has also shown how quickly things can be destabilized with advances in technology.
In particular, too many people and organizations take for granted the assumptions on which this vast certificate system is based. For instance, we assume that the hash/digest functions in use are computationally difficult to reverse or cause collisions. We also assume that certain mathematical functions underlying public/private key encryption are too difficult to reverse or "brute force." However, all it takes is some new insight or analysis, or maybe new, affordable technology (e.g., practical quantum computing, or massively parallel computing) to violate those assumptions.
If you look at the way our systems are constructed, too little thought is given to what happens to existing infrastructure when something breaks. Designs can include compensating and recovery code, but doing so requires some cost in space or time. However, all too often people are willing to avoid that investment by putting off the danger to "if and when that happens." Thus we get situations such as the Y2K problems and the issues here with potentially rogue CAs.
(I'll note as an aside, that when I designed the original version of Tripwire and what became the Signacert product, I specifically included simultaneous use of several different message digest functions in different families for this very reason. I knew it was a matter of time before one or two were broken. I still believe that it is beyond reason to find files that will match multiple, different algorithms simultaneously.)
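The multi-digest idea is easy to sketch. The particular algorithm set below is illustrative (not the set any version of Tripwire actually shipped with): record digests from several families for each file, so that a break in one algorithm alone does not defeat the integrity check.

```python
import hashlib

# Sketch of simultaneous use of multiple digest families. The algorithm
# list is an illustrative assumption, not Tripwire's actual configuration.
ALGORITHMS = ("md5", "sha1", "sha256")

def fingerprint(data):
    """Record one digest per algorithm for the given file contents."""
    return {name: hashlib.new(name, data).hexdigest() for name in ALGORITHMS}

def unchanged(data, recorded):
    """A file passes only if every recorded digest still matches, so an
    attacker must find a simultaneous collision in all families."""
    current = fingerprint(data)
    return all(current[name] == recorded[name] for name in ALGORITHMS)
```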
Another issue is the whole question of whom we trust, and for what. As noted in the USACM whitepaper, authentication is always relative to a third party. How much do we trust those third parties? How much trust have we invested in the companies and agencies issuing certificates? Are they properly verifying identities? How good is their internal security? How do we know, and how much is at risk from our trust in those entities?
Let me leave you with a final thought. How do we know that this problem has not already been quietly exploited? The basic concept has been in the open literature for years. The general nature of this attack on certificates has been known for well over a decade, if not two. Given the technical and infrastructure resources available to national agencies and organized criminals, and given the motivation to use this hack selectively and quietly, how can we know that it is not already being used?
[Added 12/31/2008]: A follow-up post to this one is available in the blog.