In IT security ("cybersecurity") today, there is a powerful herd mentality. In part, this is because it is driven by an interest in shiny new things. We see this with the massive pile-on to new technologies when they gain buzzword status: e.g., threat intelligence, big data, blockchain/bitcoin, AI, zero trust. The more they are talked about, the more others think they need to be adopted, or at least considered. Startups and some vendors add to the momentum with heavy marketing of their products in that space. Vendor conferences such as the yearly RSA conference are often built around the latest buzzwords. And sadly, too few people with in-depth knowledge of computing and real security are listened to about the associated potential drawbacks. The result is usually additional complexity in the enterprise without significant new benefits — and often with other vulnerabilities, plus expenses to maintain them.
Managers are often particularly victimized by these fads as a result of long-standing deficiencies in the security space: we have no sound definition of security that encompasses desired security properties, and therefore we have no metrics to measure them. If a manager cannot get a numeric value or comparison of how a new technology may make things better versus its cost, the decision is often made on "best practice." Unfortunately, "best practice" is also challenging to define, especially when there is so much talk and excitement from people vending the next shiny new thing. Additionally, enterprise needs are seldom identical, so "best" may not be uniform. If the additional siren call of "See how it will save you money!" is heard, it is nearly impossible to resist, even if the "savings" are only near-term or downright illusory.
This situation is complicated because so much of what we use is defective, broken, or based on improperly understood principles. Thus, to attempt to secure it (really, to gain greater confidence in it), we prefer solutions that sprinkle magic pixie dust on top, because they don't require sacrificing the sunk cost inherent in all the machines and software already in use. Magic fairy dust is shiny, too, and usually available at a lower (initial) cost than actually fixing the underlying problems. That is why we have containers on VMs on systems with multiple levels of hypervisor behind firewalls and IPSs, turtles all the way down, while the sunk costs keep getting larger. This is also why patching and pen testing are seen as central security practices: they are the flying buttresses of security architecture these days.
The lack of a proper definition and metrics has been known for a while. In part, the old Rainbow Series from the NCSC (NSA) was about this. The authors realized the difficulty of defining "secure" and instead spoke of "trusted." The series established a set of features and levels of trust assurance in products to meet DOD needs. However, that reflected the DOD notion of security at the time, so issues of resilience and availability (among others) weren't really addressed. That is one reason why the Rainbow Series was eventually deprecated: the commercial marketplace found it didn't apply to its needs.
Defining security principles is a hard problem, and is really in the grand challenge space for security research. It was stated as such 16 years ago in the CRA security Grand Challenges report (see #3). Defining accompanying metrics is not likely to be simple either, but we really need to do it, or we will continue to run up against the same problems. If the only qualities we can reliably measure for systems are speed and cost, decisions will be heavily weighted towards solutions that provide those at the expense of maintainability, security, reliability, and even correctness. Corporations and governments are heavily biased towards solutions that promise financial results in the next year (or next quarter) simply because those are easily measured and understood.
I've written and spoken about this topic before (see here and here for instance). But it has come to the forefront of my thinking over the last year, as I have been on sabbatical. Two recent issues have reinforced that:
I hope to write some more on the issues around defining security and bucking the "conventional wisdom" once I am fully recovered from my sabbatical. There should be no shortage of material. In the meantime, I invite you to look at the cloud paper cited above and provide your comments below.
A recent visit and conversation with Steve Crocker prompted me to think about how little the current security landscape has really changed from the past. I started looking through some of my archives, and that was what prompted my recent post here: Things are not getting better.
I posted that, and it generated a fair bit of comment over on LinkedIn, which then led me to make some comments about how the annual RSA conference doesn't reflect some of the real problems I worry about, and to wonder about attendance. That, in turn, reminded me of a presentation I started giving about 6 years ago (when I was still invited to give talks at various places). It needed one editorial correction, and it is still valid today. I think it outlines some of the current problematic aspects of security in the commercial space and in security research. Here it is: Rethinking Security. This is a set of presentation slides without speaker notes or an audio recording of me presenting them, but I think you'll get the ideas from it.
Coincident with this, an essay I wrote with Steven Furnell, of the University of Plymouth in the UK, appeared on the British Computer Society's online list. It describes how some things we've known about for 30 years are still problems in deployed security. Here's that column: The Morris worm at 30.
Steve and I are thinking about putting something together to provide an overview of our 80+ combined years of experience with security and privacy. As I delve further into my archives, I may repost more here. You may also be interested in videos of some of my past talks, which I wrote about in this blog last year.
In the meantime, continue to build connected home thermostats and light bulbs that spy on the residents, and network-connected shoes that fail in ways preventing owners from being able to wear them, among other abominations. I'll be here, living in the past, trying to warn you.
PS. The 20th CERIAS Symposium is approaching! Consider attending. More details are online.
[This is posted on behalf of the three students listed below. This is yet another example of bad results when speed takes precedence over doing things safely. Good work by the students! --spaf]
As a part of an INSuRE project at Purdue University, PhD Information Security student Robert Morton and Computer Science seniors Austin Klasa and Daniel Sokoler conducted an observational study of Google's QUIC protocol (Quick UDP Internet Connections, pronounced "quick"). The team found that QUIC leaked the length of a user's password, potentially allowing eavesdroppers to bypass authentication in popular services such as Google Mail (Gmail). The team named the vulnerability Ring-Road and is currently trying to quantify the potential damage.
During the initial stages of the research, the Purdue team found that the Internet has been transformed over the last five years by a new suite of performance-improving communication protocols such as SPDY, HTTP/2, and QUIC. These new protocols are being rapidly adopted to increase the speed and performance of applications on the Internet. More than 10% of the top 1 million websites are already using some of these technologies, including many of the 10 highest-traffic sites.
While these new protocols have improved speed, the Purdue team focused on determining whether any major security issues arose from using QUIC. The team was astonished to find that Google's QUIC protocol leaks the exact length of sensitive information when it is transmitted over the Internet. This could allow an eavesdropper to learn the exact length of someone's password when that person signs into a website. In part, this negates the purpose of the underlying encryption, which is designed to keep data confidential, including its length.
In practice, the Purdue team found that QUIC leaks the exact length of passwords sent to commonly used services such as Gmail. The Purdue team then created a proof-of-concept exploit to demonstrate the potential damage (a sketch of the dictionary step appears after the list):
Step 1 - The team sniffed a target network to identify the password length from QUIC.
Step 2 - The team optimized a password dictionary to the identified password length.
Step 3 - The team automated an online attack to bypass authentication into Gmail.
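To make Step 2 concrete, here is a minimal Python sketch of the dictionary-filtering idea. This is an illustration, not the team's actual tooling; the wordlist filename and the observed length are assumptions made for the example.

```python
# Hypothetical sketch of Step 2: shrinking a password dictionary once an
# eavesdropper has learned the exact password length from sniffed traffic.
# "rockyou.txt" and the observed length of 9 are illustrative assumptions.

def filter_by_length(wordlist_path, observed_length):
    """Yield only candidate passwords that match the leaked length."""
    with open(wordlist_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.rstrip("\n")
            if len(candidate) == observed_length:
                yield candidate

if __name__ == "__main__":
    OBSERVED_LENGTH = 9  # length inferred from QUIC traffic (assumed)
    candidates = list(filter_by_length("rockyou.txt", OBSERVED_LENGTH))
    print(f"{len(candidates)} candidates of length {OBSERVED_LENGTH}")
```

Knowing the exact length collapses the candidate space to a single slice of the dictionary, which is what makes the automated online guessing in Step 3 far more efficient.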
The Purdue team believes the root cause of this problem lies in Google's decision to use a particular encryption method in QUIC: the Advanced Encryption Standard in Galois/Counter Mode (AES-GCM). AES-GCM is a mode of encryption often adopted for its speed and performance. By default, AES-GCM ciphertext is the same length as the original plaintext. For short communications such as passwords, exposing the length can be damaging when combined with other contextual clues to bypass authentication, and therein lies the problem.
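The length-preservation property is easy to demonstrate. Below is a short, self-contained sketch using the Python cryptography package (an assumption for illustration; it is not part of QUIC itself) showing that an AES-GCM ciphertext is exactly the plaintext length plus a fixed 16-byte authentication tag, so an observer who subtracts the constant overhead recovers the plaintext length.

```python
# Demonstration that AES-GCM output length tracks plaintext length exactly:
# ciphertext = plaintext + 16-byte authentication tag. This illustrates the
# general property, not QUIC's actual packet framing.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

for password in (b"cat", b"hunter2", b"correcthorse"):
    nonce = os.urandom(12)                      # fresh 96-bit nonce per message
    ct = aesgcm.encrypt(nonce, password, None)  # no associated data
    # Subtracting the constant 16-byte tag reveals the plaintext length.
    print(len(password), len(ct), len(ct) - 16)
```

Because the overhead is constant, padding-free modes such as AES-GCM expose exact message lengths, whereas block-padded modes only round them up to a block boundary; that is why short secrets such as passwords are especially exposed here.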
Conclusion

In summary, there seems to be an inherent trade-off between speed and security. As new protocols emerge on the Internet, they should be thoroughly tested for security vulnerabilities in real-world environments. Google has been informed of this vulnerability and is currently working to identify a patch to protect its users. As Google works to create a fix, we recommend that users and system administrators disable QUIC in Chrome and on their servers by visiting this link. We also recommend, independent of this issue, that users consider enabling two-step verification on their Gmail accounts for added protection, as described here. The Purdue team will be presenting their talk and proof-of-concept exploit against Gmail at the upcoming CERIAS Symposium on 18 April 2017.
Additional Information

To learn more, please visit ringroadbug.com and check out the video of our talk, "Making the Internet Fast Again... At the Cost of Security," at the CERIAS Symposium on 18 April 2017.
Acknowledgements

This research is part of the Information Security Research and Education (INSuRE) project. The project was conducted under the direction of Dr. Melissa Dark and Dr. John Springer, assisted by technical directors from the Information Assurance Directorate of the National Security Agency.
INSuRE is a partnership between successful and mature Centers of Academic Excellence in Information Assurance Research (CAE-R) and the National Security Agency (NSA), the Department of Homeland Security, and other federal and state agencies and laboratories to design, develop, and test a cybersecurity research network. INSuRE is a self-organizing, cooperative, multi-disciplinary, multi-institutional, and multi-level collaborative research project that can include both unclassified and classified research problems in cybersecurity.
This work was funded under NSF grant award No. 1344369. Robert Morton, the PhD Information Security student, is supported under the Scholarship For Service (SFS) Fellowship NSF grant award No. 1027493.
Disclaimers

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, CERIAS, Purdue University, or the National Security Agency.
Over the past couple of months I’ve been giving an evolving talk on why we don’t yet have secure systems, despite over 50 years of work in the field. I first gave this at an NSF futures workshop, and will give it a few more times this summer and fall.
As I was last reviewing my notes, it occurred to me that many of the themes I've spoken about have appeared in past posts here in the blog, and are things I've been talking about for nearly my entire career. It's disappointing how little progress I've seen on so many fronts. The products on the market, and the "experts" who get paid big salaries to be corporate and government advisors and who get excessive press coverage, are also depressing.
My current thinking is to write a series of blog posts to summarize my thinking on this general topic. I’m not sure how many I’ll write, but I have a list of probable topics already in mind. They break out roughly into (in approximate order of presentation):
Each of these will be of moderate length, with some references and links to material to read. If you’re interested in a preview, I recommend looking at some of my recent talks archived on YouTube, some of my past blog posts here, and oral histories of various pioneers in the field of infosec done by the Babbage Institute (including, perhaps, my own).
I’ll start with the first posting sometime in the next few days, after I get a little more caught up from my vacation. But I thought I’d make this post, first, to solicit feedback on ideas that people might like me to add to the list.
My first post will be about the definition of security — and why part of the problem is that we can’t very well fix something that we can’t reliably define and thus obviously don’t completely understand.
I have continued to update my earlier post about women in cybersecurity. Recent additions include links to some scholarship opportunities offered by ACSA and the (ISC)2 Foundation. Both scholarship opportunities have deadlines in the coming weeks, so look at them soon if you are interested.
The 15th Annual Security Symposium is less than a month away! Registration is still open but filling quickly. If you register for the Symposium, or for the 9th ICCWS held immediately prior, you can get a discount on the other event. Thus, you should think about attending both and saving on the registration costs! See the link for more details.
I periodically post an item to better define my various social media presences. If you follow me (Spaf) and either wonder why I post in multiple venues, or want to read even more of my musings, then take a look at it.
I ran across one of my old entries in this blog — from October 2007 — that had predictions for the future of the field. In rereading them, I think I did pretty well, although some of the predictions were rather obvious. What do you think?
Sometime in the next week or so (assuming the polar vortex and ice giants don’t get me) I will post some of my reflections on the RSA 2014 conference. However, if you want a sneak peek at what I think about what I saw on the display floor and after listening to some of the talks, you can read another of my old blog entries — things haven’t changed much.