I created a YouTube channel a while back, and began uploading my videos and linking in videos of me that were online. Yes, it’s a dedicated Spaf channel! However, I’m not on camera eating Tide pods, or doing odd skateboard stunts. This is a set of videos with my research and views over the years on information (cyber) security, research, education, and policies.
There are two playlists under the channel — one for interviews that people have conducted with me over the years, and the other being various conference and seminar talks.
One of the seminar talks was one I did at Bellcore on the Internet Worm — about 6 weeks after it occurred (yes, that’s 1988)! Many of my observations and recommendations in that talk seem remarkably current — which I don’t think is necessarily a good observation about how current practice has (not) evolved.
My most recent talk/video is a redo of my keynote address at the 2017 CISSE conference held in June, 2017 in Las Vegas. The talk specifically addresses what I see as the needs in current information security education. CISSE was unable to record it at the time, so I redid it for posterity based on the speaker notes. It only runs about 35 minutes long (there were no introductions or Q&A to field) so it is a quicker watch than being at the conference!
I think there are some other goodies in all of those videos, including views of my bow ties over the years, plus some of my predictions (most of which seem to have been pretty good). However, I am putting these out without having carefully reviewed them — there may be some embarrassing goofs among the (few) pearls of wisdom. It is almost certain that many things changed away from the operational environment that existed at the time I gave some of these talks, so I’m sure some comments will appear “quaint” in retrospect. However, I decided that I would share what I could because someone, somewhere, might find these of value.
If you know of a recording I don’t have linked in to one of the lists, please let me know.
Comments appreciated. Give it a look!
[This is posted on behalf of the three students listed below. This is yet another example of bad results when speed takes precedence over doing things safely. Good work by the students! --spaf]
As a part of an INSuRE project at Purdue University, PhD Information Security student Robert Morton and Computer Science seniors Austin Klasa and Daniel Sokoler conducted an observational study of Google's QUIC protocol (Quick UDP Internet Connections, pronounced "quick"). The team found that QUIC leaked the length of the password, potentially allowing eavesdroppers to bypass authentication in popular services such as Google Mail (G-mail). The team named the vulnerability Ring-Road and is currently trying to quantify the potential damage.
During the initial stages of the research, the Purdue team found that the Internet has been transformed over the last five years by a new suite of performance-improving communication protocols such as SPDY, HTTP/2, and QUIC. These new protocols are being rapidly adopted to increase the speed and performance of applications on the Internet. More than 10% of the top 1 million websites are already using some of these technologies, including many of the 10 highest-traffic sites.
While these new protocols have improved speed, the Purdue team focused on determining if any major security issues arose from using QUIC. The team was astonished to find that Google's QUIC protocol leaks the exact length of sensitive information when transmitted over the Internet. This could allow an eavesdropper to learn the exact length of someone's password when signing into a website. In part, this negates the purpose of the underlying encryption, which is designed to keep data confidential -- including its length.
In practice, the Purdue team found QUIC leaks the exact length of passwords sent to commonly used services such as Google's E-mail, or G-mail. The Purdue team then created a proof-of-concept exploit to demonstrate the potential damage:
Step 1 - The team sniffed a target network to identify the password length from QUIC.
Step 2 - The team optimized a password dictionary to the identified password length.
Step 3 - The team automated an online attack to bypass authentication into G-mail.
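The three steps above can be sketched in code. This is a minimal, hypothetical illustration of the first two steps only; all names, framing-overhead values, and the toy wordlist are assumptions for the sketch, not details from the team's actual tooling:

```python
# Hypothetical sketch of Steps 1 and 2 above. The overhead constants and
# wordlist are illustrative assumptions, not measurements from real QUIC traffic.

# Step 1: infer the password length from an observed encrypted record size.
# AES-GCM ciphertext is the same length as the plaintext, plus a fixed
# 16-byte authentication tag, so known framing overhead can be subtracted.
AUTH_TAG_LEN = 16
FRAMING_OVERHEAD = 42  # assumed fixed per-record header bytes for this sketch

def infer_password_length(observed_record_len: int) -> int:
    """Recover the plaintext password length from a sniffed record size."""
    return observed_record_len - FRAMING_OVERHEAD - AUTH_TAG_LEN

# Step 2: prune a dictionary to candidates of exactly the inferred length,
# shrinking the search space for the subsequent online guessing attack.
def prune_dictionary(words, length):
    return [w for w in words if len(w) == length]

wordlist = ["letmein", "password", "hunter2", "trustno1", "correcthorse"]
observed = FRAMING_OVERHEAD + AUTH_TAG_LEN + 8  # what the sniffer sees on the wire
n = infer_password_length(observed)
candidates = prune_dictionary(wordlist, n)
print(n, candidates)  # the 5-word list collapses to the two 8-character entries
```

The point of the sketch is the pruning ratio: even with a crude length oracle, an online attack (Step 3) needs far fewer guesses than a blind dictionary run.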
The Purdue team believes the root cause of this problem came when Google decided to use a particular encryption method in QUIC: the Advanced Encryption Standard Galois/Counter Mode (AES-GCM). AES-GCM is a mode of encryption often adopted for its speed and performance. By default, AES-GCM ciphertext is the same length as the original plaintext. For short communications such as passwords, exposing the length can be damaging when combined with other contextual clues to bypass authentication, and therein lies the problem.
Conclusion

In summary, there seems to be an inherent trade-off between speed and security. As new protocols emerge on the Internet, these new technologies should be thoroughly tested for security vulnerabilities in a real-world environment. Google has been informed of this vulnerability and is currently working to identify a patch to protect their users. As Google works to create a fix, we recommend that users and system administrators disable QUIC in Chrome and on their servers by visiting this link. We also recommend -- independent of this issue -- that users consider enabling two-step verification on their G-mail accounts, for added protection, as described here. The Purdue team will be presenting their talk and proof-of-concept exploit against G-mail at the upcoming CERIAS Symposium on 18 April 2017.
Additional Information

To learn more, please visit ringroadbug.com and check out the video of our talk, "Making the Internet Fast Again...At the Cost of Security," at the CERIAS Symposium on 18 April 2017.
Acknowledgements

This research is part of the Information Security Research and Education (INSuRE) project. The project was under the direction of Dr. Melissa Dark and Dr. John Springer, assisted by technical directors who are part of the Information Assurance Directorate of the National Security Agency.
INSuRE is a partnership between successful and mature Centers of Academic Excellence in Information Assurance Research (CAE-R) and the National Security Agency (NSA), the Department of Homeland Security, and other federal and state agencies and laboratories to design, develop, and test a cybersecurity research network. INSuRE is a self-organizing, cooperative, multi-disciplinary, multi-institutional, and multi-level collaborative research project that can include both unclassified and classified research problems in cybersecurity.
This work was funded under NSF grant award No. 1344369. Robert Morton, the PhD Information Security student, is supported under the Scholarship For Service (SFS) Fellowship NSF grant award No. 1027493.
Disclaimers

Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, CERIAS, Purdue University, or the National Security Agency.
Over the last month or two I have received several invitations to go speak about cyber security. Perhaps the up-tick in invitations is because of the allegations by Edward Snowden and their implications for cyber security. Or maybe it is because news of my recent awards has caught their attention. It could be it is simply to hear about something other than the (latest) puerile behavior by too many of our representatives in Congress and I'm an alternative chosen at random. Whatever the cause, I am tempted to accept many of these invitations on the theory that if I refuse too many invitations, people will stop asking, and then I wouldn't get to meet as many interesting people.
As I've been thinking about what topics I might speak about, I've been looking back through the archive of talks I've given over the last few decades. It's a reminder of how many things we, as a field, knew about long ago that have been ignored by the vendors and authorities. It's also depressing to realize how little impact I, personally, have had on the practice of information security during my career. But it has also led me to reflect on some anniversaries this year (that happens to us old folk). I'll mention three in particular here, and may use others in some future blogs.
In early November of 1988 the world awoke to news of the first major, large-scale Internet incident. Some self-propagating software had spread around the nascent Internet, causing system crashes, slow-downs, and massive uncertainty. It was really big news. Dubbed the "Internet Worm," it served as an inspiration for many malware authors and vandals, and a wake-up call for security professionals. I recall very well giving talks on the topic for the next few years to many diverse audiences about how we must begin to think about structuring systems to be resistant to such attacks.
Flash forward to today. We don't see the flashy, widespread damage of worm programs any more, such as what Nimda and Code Red caused. Instead, we have more stealthy botnets that infiltrate millions of machines and use them for spam, DDoS, and harassment. The problem has gotten larger and worse, although in a manner that hides some of its magnitude from the casual observer. However, the damage is there; don't try to tell the folks at Saudi Aramco or Qatar's RasGas that network malware isn't a concern any more! Worrisomely, experts working with SCADA systems around the world are increasingly warning how vulnerable they might be to similar attacks in the future.
Computer viruses and malware of all sorts first notably appeared "in the wild" in 1982. By 1988 there were about a dozen in circulation. Those of us advocating for more care in design, programming and use of computers were not heeded in the head-long rush to get computing available on every desktop (and more) at the lowest possible cost. Thus, we now have (literally) tens of millions of distinct versions of malware known to security companies, with millions more appearing every year. And unsafe practices are still commonplace -- 25 years after that Internet Worm.
For the second anniversary, consider 10 years ago. The Computing Research Association, with support from the NSF, convened a workshop of experts in security to consider some Grand Challenges in information security. It took a full three days, but we came up with four solid Grand Challenges; the full report -- and, possibly, the video -- are worth reviewing.
I would argue -- without much opposition from anyone knowledgeable, I daresay -- that we have not made any measurable progress against any of these goals, and have probably lost ground in at least two.
Why is that? Largely economics, and a poor understanding of what good security involves. The economics aspect is that no one really cares about security -- enough. If security were important, companies would really invest in it. However, they don't want to part with all the legacy software and systems they have, so instead they keep stumbling forward and hope someone comes up with magic fairy dust they can buy to make everything better.
The government doesn't really care about good security, either. We've seen that the government is allegedly spending quite a bit on intercepting communications and implanting backdoors into systems, which is certainly not making our systems safer. And the DOD has a history of huge investment into information warfare resources, including buying and building weapons based on unpatched, undisclosed vulnerabilities. That's offense, not defense. Funding for education and advanced research is probably two orders of magnitude below what it really should be if there was a national intent to develop a secure infrastructure.
As far as understanding security goes, too many people still think that the ability to patch systems quickly is somehow the approach to security nirvana, and that constructing layers and layers of add-on security measures is the path to enlightenment. I no longer cringe when I hear someone who is adept at crafting system exploits referred to as a "cyber security expert," but so long as that is accepted as what the field is all about there is little hope of real progress. As J.R.R. Tolkien once wrote, "He that breaks a thing to find out what it is has left the path of wisdom." So long as people think that system penetration is a necessary skill for cyber security, we will stay on that wrong path.
And that is a great segue into the last of my three anniversary recognitions. Consider this quote (one of my favorites) from 1973 -- 40 years ago -- from a USAF report, Preliminary Notes on the Design of Secure Military Computer Systems, by a then-young Roger Schell:
…From a practical standpoint the security problem will remain as long as manufacturers remain committed to current system architectures, produced without a firm requirement for security. As long as there is support for ad hoc fixes and security packages for these inadequate designs and as long as the illusory results of penetration teams are accepted as demonstrations of a computer system security, proper security will not be a reality.
That was something we knew 40 years ago. To read it today is to realize that the field of practice hasn't progressed in any appreciable way in four decades, except that we are now also stressing the wrong skills in developing the next generation of expertise.
Maybe I'll rethink that whole idea of going to give talks on security and simply send them each a video loop of me banging my head against a wall.
PS -- happy 10th annual National Cyber Security Awareness Month -- a freebie fourth anniversary! But consider: if cyber security were really important, wouldn't we be aware of that every month? The fact that we need to promote awareness of it is proof it isn't taken seriously. Thanks, DHS!
Now, where can I find a good wall that doesn't already have dents from my forehead...?
Yes, I have been quiet (here) over the last few months, and have a number of things to comment on. This hiatus is partly because of schedule, partly because I had my laptop stolen, and partly health reasons. However, I'm going to try to start back into adding some items here that might be of interest.
To start, here is one item that I found while cleaning out some old disks: a briefing I gave at the NSA Research division in 1994. I then gave it, with minor updates, to the DOD CIO Council (or whatever their name was/is -- the CNSS group?), the Federal Infosec Research Council, and the Critical Infrastructure Commission in 1998. In it, I spoke to what I saw as the biggest challenges in protecting government systems, and what were the major research challenges of the time.
I have no software to read the 1994 version of the talk any more, but the 1998 version was successfully imported into PowerPoint. I cleaned up the fonts and gave it a different background (the old version was fugly), and that prettier version is available for download. (Interesting that back then it was "state of the art" :-)
I won't editorialize on the content slide by slide, other than to note that I could give this same talk today and it would still be current. You will note that many of the research agenda items have been echoed in other reports over the succeeding years. I won't claim credit for that, but there may have been some influences from my work.
Nearly 16 years have passed by, largely wasted, because the attitude within government is still largely one of "with enough funding we can successfully patch the problems." But as I've quoted in other places, insanity is doing the same thing over and over again and expecting different results. So long as we believe that simple incremental changes to the existing infrastructure, and simply adding more funding for individual projects, is going to solve the problems then the problems will not get addressed -- they will get worse. It is insane to think that pouring ever more funding into attempts to "fix" current systems is going to succeed. Some of it may help, and much of it may produce some good research, but overall it will not make our infrastructure as safe as it should be.
Yesterday, Admiral (ret) Mike McConnell, the former Director of National Intelligence in the US, said in a Senate committee hearing that if there were a cyberwar today, the US would lose. That may not be quite the correct way of putting it, but we certainly would not come out of it unharmed and able to claim victory. What's more, any significant attack on the cyberinfrastructure of the US would have global repercussions because of the effects on the world's economy, communications, trade, and technology that are connected by the cyber infrastructure in the US.
As I have noted elsewhere, we need to do things differently. I have prepared and circulated a white paper among a few people in DC about one approach to changing the way we fund some of the research and education in the US in cybersecurity. I have had some of them tell me it is too radical, or too different, or doesn't fit in current funding programs. Exactly! And that is why I think we should try those things -- because doing more of the same in the current funding programs simply is not working.
But 15 years from now, I expect to run across these slides and my white paper, and sadly reflect on over three decades where we did not step up to really deal with the challenges. Of course, by then, there may be no working computers on which to read these!
Short answer: "Almost certainly, no."
Longer answer:
The blogosphere is abuzz with comments on John Markoff's Saturday NY Times piece, Do We Need a New Internet? John got some comments from me about the topic a few weeks back. Unfortunately, I don't think a new Internet will solve the problems we are facing.
David Akin, a journalist/blogger, commented nicely on John's post. In it, he quoted one of my posts to Dave Farber's IP list, which I then turned into a longer post in this blog. Basically, I noted that the Internet itself is not the biggest problem. Rather, it is the endpoints, the policies, the economics, and the legal environment that make things so difficult. It is akin to trying to blame the postal service because people manage to break into our houses by slipping their arms through the mail slots, or because we leave the door unlocked "just in case" a package is going to be delivered.
Consider that some estimates of losses as a result of computer crime and fraud are in the many billions of dollars per year. (Note my recent post on a part of this.) Consider how much money is repeatedly spent on reissuing credit and debit cards because of loss of card info, restoring systems from backups, and trying to remove spyware, bots, viruses, and the like. Consider how much is spent on defensive mechanisms that only work in limited cases -- anti-virus, IDS, firewalls, DLP, and whatever the latest fad might be.
What effect does that have on global finances? It is certainly a major drag on the economy. This was one of the conclusions (albeit described as "friction") of the CSTB report Toward a Safer and More Secure Cyberspace, which did not seem to get much attention upon release.
Now, think about the solutions being put forward, such as putting all your corporate assets and sensitive records "out in the cloud" somewhere, on servers that are likely less well-protected or isolated than the ones being regularly compromised at the banks and card processors. But it will look cheaper because organizations won't need to maintain resources in-house. And it is already being hyped by companies, and seemingly being promoted by the NSF and CCC as "the future." Who can resist the future?
Next, stir in the economic conditions where any talk is going to be dismissed immediately as "crazy" if it involves replacing infrastructure with something that (initially) costs more, or that needs more than a minor change of business processes. And let's not forget that when the economy goes bad, more criminal behavior is likely as people seek value wherever they can find it.
The institutional responses from government and big vendors will be more of the same: update the patches, and apply another layer of gauze.
I have long argued that we should carefully re-examine some of the assumptions underlying what we do rather than blindly continue doing the same things. People are failing to understand that many important things have changed since we first started building computing artifacts! That means we might have better solutions if we really thought about the underlying problems from first principles.
I recently suggested this rethinking of basic assumptions to a few senior leaders in computing research (who shall remain nameless, at least within this posting) and was derided for not thinking about "new frontiers" for research. There is a belief among some in the research community (especially at the top universities) that the only way we (as a community; or perhaps more pointedly, them and their students) will get more funding for research and that we (again, the royal "we") will get premier publications is by pushing "new" ideas. This is partly a fault of the government agencies and companies, which aren't willing to support revisiting basic ideas and concepts because they want fixes to their existing systems now!
One part that makes sense from Markoff's article is about the research team making something that is effectively "plug compatible" with existing systems. That is roughly where a longer-term solution lies. If we can go back and devise more secure systems and protocols, we don't need to deploy them everywhere at once: we gradually phase them in, exactly as we do periodic refreshes of current systems. There is not necessarily an impassible divide between what we need and what we can afford.
I'm sorry to say that I don't see necessary changes occurring any time soon. It would upset too much of the status quo for too many parties. Thus, the situation isn't going to get better -- it's going to get worse -- probably much worse. When we finally get around to addressing the problems, it will be more expensive and traumatic than it needed to be.
As I noted before:
"Insanity: doing the same thing over and over again expecting different results."
Of course, my continued efforts to make this point could be branded insane. ;-)
An Aside
Over a decade ago, I gave several talks where I included the idea of having multiple "service network" layers on top of the Internet -- effectively VPNs. One such network would be governed by rules similar to those of the current Internet. A second would use cryptographic means to ensure that every packet was identified. This would be used for commercial transactions. Other such virtual networks would have different ground rules on authentication, anonymity, protocols and content. There would be contractual obligations to be followed to participate, and authorities could revoke keys and access for cause. Gateways would regulate which "networks" organizations could use. The end result would be a set of virtual networks on the Internet at large, similar to channels on a cable service. Some would be free-for-all and allow anonymous posting, but others would be much more regulated, because that is what is needed for some financial and government transactions.
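The gateway idea above can be illustrated with a toy model. This is a minimal sketch of per-overlay admission rules; the overlay names, policy fields, and revocation set are all hypothetical, invented for the illustration rather than taken from any actual proposal:

```python
# Toy model of the "service network" overlays described above: one open,
# anonymous-friendly overlay, and one commerce overlay where every packet
# must carry a valid (non-revoked) cryptographic identity.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    network: str      # which virtual overlay this packet is bound for
    signed: bool      # does it carry a cryptographic identity?
    payload: str

# Per-overlay ground rules (illustrative): the open overlay allows anonymous
# traffic; the commerce overlay requires every packet to be identified.
POLICIES = {
    "open":     {"require_identity": False},
    "commerce": {"require_identity": True},
}

REVOKED = {"mallory"}  # keys an authority has revoked for cause

def gateway_admit(pkt: Packet, sender_key: Optional[str]) -> bool:
    """Admit a packet onto its overlay only if it meets that overlay's rules."""
    policy = POLICIES.get(pkt.network)
    if policy is None:
        return False  # unknown overlay: drop
    if policy["require_identity"]:
        return pkt.signed and sender_key is not None and sender_key not in REVOKED
    return True

print(gateway_admit(Packet("open", False, "hello"), None))        # anonymous OK
print(gateway_admit(Packet("commerce", False, "buy"), None))      # rejected
print(gateway_admit(Packet("commerce", True, "buy"), "alice"))    # identified OK
print(gateway_admit(Packet("commerce", True, "buy"), "mallory"))  # revoked key
```

The design point is that the conflicting requirements -- anonymity versus strong authentication -- never have to coexist on the same overlay; each virtual network enforces only its own contract.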
I remember one audience at an early SANS conference at the time was so hostile to the idea that members began shouting objections before I could even finish my talk. I also couldn't find a venue willing to publish a speculative essay on the topic (although I admit I only tried 2-3 places before giving up). The general response was that it would somehow cut out the possibility for anonymous and experimental behavior, because no one would want to use the unauthenticated channels. It was reminiscent of the controversy when I was the lead in the Usenet "Great Renaming."
The problem, of course, is that if we try to support conflicting goals such as absolute anonymity and strong authentication on the same network we will fail at one or the other (or both). We can easily find situations where one or the other property (as simply two examples of properties at stake) is needed. So long as we continue to try to apply patches onto such a situation before reconsidering the basic assumptions, we will continue to have unhappy failures.
But as a bottom line, I simply want to note that there is more than one way to "redesign the Internet" but the biggest problems continue to be the users and their expectations, not the Internet itself.