The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

50 Years, and Lessons (Not) Learned


Recently, I had cause to reflect on some of what I have done in my career. As one result, I posted a blog entry about how many programming languages I have learned.

As I was writing that up, it struck me that this is an anniversary year: I wrote my first computer program 50 years ago!

I don’t recall the exact program, but it was in Fortran 66, punched onto cards, and run on a Burroughs mainframe (as I recall, it was a B5700). I was in high school at the time, and enrolled in the advanced math track, so I was offered the opportunity to take an experimental computer course in place of shop class.

Thus, I don’t think I ever got to build that clunky birdhouse in woodworking shop. However, I did get to experiment with checking my pre-calc homework on the computer, and I kept all my fingers. I suspect my programs were as clunky as the birdhouses, although it wasn’t as obvious to everyone else. Taking the course also helped cement my nerd status, ensuring wedgies and no dates for the remainder of my high school career. (This was a result that extended well beyond high school, unfortunately.)

It was a few years later, in college, that I got to do any programming again, this time in BASIC on an HP 3000 and assembly on an Altair 8800. However, the prior experience in Fortran gave me a head start over everyone else in the class, and I never really looked back. My first CS advisor was a member of the Fortran 77 standards committee, so I also circled back around to Fortran before I got my bachelor's degree.

All of that experience (and more) was tumbling around in my head when the time came to produce a lecture title and abstract. The result is the title and abstract below. I gave the talk earlier this week in the University of Maryland, Baltimore County (UMBC) UCYBR Distinguished Lecture Series.

If you’re curious, you can view the recorded lecture. (I have some other presentations on my YouTube channel page, including one from 1989, back when I had hair.)


Cyber Lessons, Learned and Unlearned

Dr. Eugene Spafford is a professor with an appointment in Computer Science at Purdue University, where he has served on the faculty since 1987. He is also a professor of Philosophy (courtesy), a professor of Communication (courtesy), a professor of Electrical and Computer Engineering (courtesy), and a professor of Political Science (courtesy). He serves on a number of advisory and editorial boards. Spafford's current research interests are primarily in the areas of information security, computer crime investigation, and information ethics. He is generally recognized as one of the senior leaders in the field of computing.

Among other things, Spaf (as he is known to his friends, colleagues, and students) is Executive Director Emeritus of the Purdue CERIAS (Center for Education and Research in Information Assurance and Security) and was the founder and director of the (superseded) COAST Laboratory. He is Editor-in-Chief of the Elsevier journal Computers & Security, the oldest journal in the field of information security and the official outlet of IFIP TC-11.

Spaf has been a student and researcher in computing for over 40 years, 35 of which have been in security-related areas. During that time, computing has evolved from mainframes to the Internet of Things. Of course, along with these changes in computing have been changes in technology, access, and both how we use and misuse computing resources. Who knows what the future holds?

In this UCYBR talk, Spaf will reflect upon this evolution and trends and discuss what he sees as significant "lessons learned" from history. Will we learn from our past? Or are we destined to repeat history (again!) and never break free from the many cybersecurity challenges that continue to impact our world?

Riffing on the Ph.D. Degree


I recently was having a discussion with someone about the Ph.D. option for a degree here. The person said “I don’t want a Ph.D. because I don’t ever intend to do research at a university.” Thus began a conversation about how the Ph.D. may be a requirement for most faculty positions, but the degree is not a sentence to such a position! Furthermore, not all faculty positions are primarily research positions.

As an example, of the 23 Ph.D. graduates for whom I have been primary (co)advisor to date, 11 have spent some time as faculty members, but only four are still full-time faculty. Six of them currently reside outside the U.S., and six (an overlapping group) have started their own companies. Seven are C-level executives, and another 10 are in senior director/partner-type positions. It is certainly not the case that they are all doing academic research at a university!

The Ph.D. is a way of learning how to focus on a narrow problem, develop a comprehensive plan to solve it, and then present the problem and its solution in a formal, convincing manner. Thus, completing a Ph.D. is a way to hone time management and research skills, dive into an area of interest, and prove one’s capability to manage a big task.  That is useful not only for academic research, but for managing projects, running an agency, and solving problems in “the real world.”

I’m proud of all of these graduates for what they did while completing their degrees and then going on to do interesting and important things in their careers. Here’s a list with mention of their most recent position:

  • Hiralal Agrawal; 1991; Senior Research Scientist, Perspecta Labs.
  • Hsin (Sean) Pan; 1993; Senior Director, Foxconn.
  • Steve J. Chapin; 1993; Lead Cyber Security Researcher, Lawrence Livermore National Laboratories.
  • Chonchanok Viravan; 1994; President of Pathanasomdoon Co, Ltd. (Thailand).
  • Sandeep Kumar; 1995; Staff Engineer, VMware, CA.
  • Christoph Schuba; 1997; Senior Security Architect, Apple Computer.
  • Ivan Krsul; 1998; President, Arte Xacta (La Paz, Bolivia).
  • Diego Zamboni; 2001; Enterprise Architect, Swisscom (Switzerland).
  • Wenliang (Kevin) Du; 2001; Professor, Syracuse University.
  • Thomas Daniels; 2002; Associate Teaching Professor, Iowa State University.
  • Ben Kuperman; 2004; Senior Manager of Software Development, Adobe.
  • Florian Buchholz; 2005; Professor, James Madison University.
  • James Early; 2005; Senior Software Engineer, Good Uncle.
  • Paul D. Williams; 2005; Senior Vice President and Chief Security Officer, Teradata.
  • Brian Carrier; 2006; CTO and Head of Digital Forensics, Basis Technology.
  • Rajeev Gopalakrishna; 2006; Independent Consulting Researcher.
  • Serdar Cabuk; 2006; Partner, Deloitte Denmark.
  • Maja Pusara Jankovic; 2007; Senior Consultant, Ab Initio.
  • Dannie Stanley; 2014; Associate Professor, Taylor University.
  • Mohammed Almeshekah; 2015; Founder and Managing Partner of Outliers Venture Capital (Saudi Arabia).
  • Kelley Misata; 2016 (INSC); CEO and Founder, Sightline Security Corporation.
  • Jeff Avery; 2017; Senior Principal Cyber Systems Engineer, Northrop Grumman.
  • Christopher Gutierrez; 2017; Research Scientist, Intel Corporation.

I am working with five Ph.D. advisees currently. Four of them are employed outside of academia and intend to stay in those positions after getting their degrees.

If you’re interested in getting a Ph.D. (or an MS) at Purdue related to cyber security, take a look at our information page.

(As a matter of trivia, even though the majority of my former students didn’t go into university positions, there are at least 53 more people who received the Ph.D. with one of the above 23 as primary advisor. Maybe we should start a “Spaf number,” similar to the Erdős number? A sketch of how one might compute it follows.)
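
For fun, here is a minimal sketch, in Python, of how such a number could be computed. The advising graph below is entirely made up (the names and links are placeholders, not my actual advisee tree); a “Spaf number” is then just the breadth-first distance from the root of the advising tree, in the spirit of the Erdős number.

    from collections import deque

    # Hypothetical advising graph: advisor -> list of Ph.D. advisees.
    # These names are placeholders, not the actual advising tree.
    ADVISEES = {
        "Spaf": ["A", "B", "C"],
        "A": ["D", "E"],
        "B": ["F"],
    }

    def spaf_numbers(root="Spaf"):
        """Assign each person the length of the shortest advising
        chain back to the root (the root itself gets 0)."""
        dist = {root: 0}
        queue = deque([root])
        while queue:
            advisor = queue.popleft()
            for student in ADVISEES.get(advisor, []):
                if student not in dist:  # first visit = shortest chain
                    dist[student] = dist[advisor] + 1
                    queue.append(student)
        return dist

    print(spaf_numbers())
    # {'Spaf': 0, 'A': 1, 'B': 1, 'C': 1, 'D': 2, 'E': 2, 'F': 2}

In this scheme, the 23 direct advisees would have Spaf number 1, and the 53 or more academic “grandchildren” would have Spaf number 2.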

 

So you have to learn a 3rd programming language?


I recently found myself in a conversation where someone made a comment about "Being so old I've programmed in Pascal!" I'm considerably older than that person, and actually did some of my first programming on plugboards and punchcards. I declined the opportunity to point that out at the time.

Upon some reflection, I realize I've had the opportunity (and sometimes the necessity) to use many, many different languages during my 48 years of programming. I used to find it empowering and instructive to try different programming paradigms and approaches, so I actively sought out new ones. As my workload and schedule have evolved, I have not picked up many new ones in recent years. I'd like to learn Swift and Rust (for example), but I'll need to carve out the time and obtain a compiler first.

For grins, I thought I'd make a list of the programming languages in which I wrote at least one non-trivial program, where "non-trivial" means that it had subroutines/functions/methods/etc. I may have left a few out, but... (you can find most of these documented on Wikipedia if you haven't run across them before).

  • 80x86 assembler
  • 6502 assembler
  • 8080 assembler
  • ABC
  • Ada
  • Algol 68
  • Algol W
  • APL
  • AppleScript
  • awk/sed
  • bash
  • BASIC
  • bc
  • C (original and ANSI)
  • C++
  • Cobol
  • Common LISP
  • COMPASS
  • csh
  • dc
  • Eiffel
  • Emacs LISP
  • Euclid
  • flex/lex
  • Forth
  • Fortran 77
  • Fortran II
  • Fortran IV
  • HTML
  • Java
  • JavaScript
  • JCL
  • ksh
  • LISP
  • M4
  • MATLAB
  • MIX
  • Modula 2
  • Modula 3
  • MS-DOS Batch
  • nroff/troff
  • Oberon
  • Pascal
  • Perl
  • PHP
  • PL/I
  • PL/M
  • PostScript
  • Pr1me assembler
  • Prolog
  • Python
  • Ratfor
  • RPG
  • Simula
  • Smalltalk
  • SNOBOL
  • tcl/tk
  • TeX/LaTeX
  • VAX assembler

I also wrote one small program in Intercal, to prove to myself that I could. I never worked up the courage to tackle Malbolge.

I've also written and debugged patches in microcode on several machines, but I won't claim that I really mastered any of the associated languages.

There may be a few I left out plus dialect/version variations, but that is almost 60 languages as is. I'm sure there are people who have programmed in more; those of us who have been around for a while have needed to adapt.

I don't program very much anymore. I occasionally will whip up a ksh or Perl script, and very rarely, a C program. Those are sort of my default programming tools. If I needed to, I suppose a weekend or two with some language manuals would get me somewhat back up to speed with these others. Thankfully, no one has a pressing need for me to write code for anything, although I'm still pretty good at debugging (errors tend to be the same in any language). I have written four complete compilers and three full operating systems using some of these languages, including one each in assembly language. Thankfully, that is also not on my agenda to do again.

So when "kids these days..." complain about having to learn a 3rd programming language for class, well, I am amused.

Sturm und Drang and Hacking and Twitter


Last week, an article appeared in the Washington Examiner that contained a couple of quotes from me. The context of how the quotes were obtained is explained below.

Apparently, some people took exception to aspects of the article and/or my quotes. This was all manifested on Twitter, with outrage, some ad hominem attacks, bombastic comments about unfollowing me, and more. After all, it is Twitter, and what would Twitter be without outrage and posturing?

(Interestingly, despite some unfollows, my Twitter account as of Sunday has more followers than before this happened. Draw your own conclusions about that. As for me, I don't care much how many people follow or not -- I still post things there I decide I want to post.)

I decided it might be worth a short post on how the quotes came about and perhaps addressing a few things from the article.

How the Quotes Came to Be

Earlier in the week, I received a request to contact a reporter. This is not unusual. I regularly am asked by the press to comment on cybersecurity, privacy, cybercrime, and so forth. The university encourages that kind of outreach. I generally try to provide useful background to reporters.

I called the reporter. He told me he was working on a story but couldn't share details. He gave me a very vague description -- basically, that he had some evidence that someone working in cybersecurity for one of the presidential campaigns had a history of associating with racist organizations, trolling, and breaking into computers. He wanted to know what I thought of that.

As I expressed to him, if true, I thought that was a poor choice. I explained that generally speaking, someone in such a position should have been more thoroughly vetted. He then outlined how the person likely had a history of hacking into other people's accounts and asked me what I thought. I stated -- with that as context -- that people with that kind of history are usually a poor choice for positions of trust. A history of breaking the law suggests they may be (note: may) more likely to do it again, thus posing a risk to their employer. Furthermore, I noted that a past that is concealed from the employer opens up the possibility of extortion. Both of these imply an "insider" risk. Given the high stakes of this election cycle coupled with foreign interference, that seemed like a real problem.

My conversation with the reporter was over 20 minutes in length. He quoted two of my statements in the published article. This should not be a surprise to anyone who has ever spoken to a reporter...or to anyone who has written for the press. Lots of material isn't used, including material that may set useful bounds on what is published.

Hacking

Unfortunately, "hacking" and "hacker" have divergent meanings. One usage means someone who explores systems and capabilities, often finding new and unexpected features or problems. A second usage means someone who breaks into systems without permission, illegally, often causing harm. This dichotomy has been a problem for over 30 years now, and we still haven't resolved it in general usage. There have been attempts to qualify the term ("white hat" and "black hat," terms which have other problems), and using labels such as "ethical hacking," which implies everything else is not ethical. These are not satisfactory solutions.

In the conversation with the reporter, he was continually using "hacking" in the pejorative sense, such as "hacking into other people's computers without their consent." My replies were to that usage and in that context.

To be clear, I understand the difference. I have taught and worked with people who are hackers in a positive sense. At one time, when I had more free time and less arthritis in my hands, I did my own share of system hacking. When performed with care and consent, the hobbyist/exploratory form of hacking is often fun and educational. Hacking of others' systems without consent, to cause damage or harm, is a bad thing.

The people who take umbrage over use of "hacking" should pay close attention to context to moderate their blood pressure. Furthermore, they should realize that 30 years of use by journalists to denote unauthorized access means the general public understands only that one definition of "hacking," no matter how practitioners define it. It is now similar to any malware being labeled a "computer virus" -- it is unlikely that the term will ever get a more precise definition for public use.

Ethics

I have worked in the area of professional ethics for over 3 decades. I wrote one of the first articles on the ethics of computer intrusion and contributed to many textbooks in the area. I helped develop the last two iterations of ACM's Code of Professional Ethics. I am chair of ACM's Committee on Publishing Ethics & Plagiarism. I have lectured on the topic at major companies and government agencies. I teach aspects of ethics in classes. It isn't simply a word to me.

Professional ethics have a vital role in defining a profession. They help practitioners distinguish among choices. They help guide us in knowing the difference between what we can do and what we should do. Every major professional organization, across multiple professions, has some form of professional code of behavior.

In the context of this issue, breaking (hacking!) into other people's systems without permission is unethical. It is also usually illegal. Trolling people in the form described to me by the reporter is unethical and harmful. And being a bigot is wrong, although an all-too-common evil in society today.

Those of us who work in computing -- and especially in security-related positions -- should be very concerned about how we are viewed by the public. If we want to be trusted, then we need to act in a trustworthy manner. Ethical behavior and knowledge of the law are important, and distinguish professionals from everyone else.

It is in this context that I made this comment: "People who are well respected don't come from trolling or hacking groups. There's been a culture shift there. Companies don't want to hire people with sketchy backgrounds." That is true. The companies I work with -- banks, aerospace, defense, telecommunications -- do not want people who have a history of breaking into systems (note the version of "hacking" here) or abusing others. It is a liability for them. It is also evidence of poor judgment and a willingness to do unethical things, at least at some time in the past. Those activities are grounds for termination from many positions. A history of those things is often an automatic disqualification from hiring -- and is questioned as a standard part of polygraph exams. (No, I'm not going to have a side conversation about polygraph exam accuracy here, but you can see one of my blog posts from 2006.)

Can people who did unethical things reform? Of course! Sometimes people do foolish things and later regret and repent. However, it is also the case that people who do foolish and illegal things usually deny they did them, or they claim to have reformed so they can get a shot at doing them again. Whether one accepts the apparent reformation of the individual is a matter of faith (religious or otherwise) and risk management. As I noted, "Somebody who shows up with red flags would not be allowed to occupy a position of sensitivity." Maybe this denies someone reformed and talented a position. However, it also is a matter of practical risk reduction and is part of the standard of due care by organizations dealing with information of great value.

The Person in the Article

I was never given the name or specifics of the person mentioned in the article during the interview. I only learned her name after the article appeared. To my knowledge, I have never met her. I have no personal knowledge of her activities. I made no statements attributing any activities to her. So, if you are a friend of hers and bent out of shape because of the article, you really shouldn't take it out on me.

Bottom Line

TL;DR. People will bluster and posture on Twitter. I was quoted as saying some things that set a few people off, either because they don't pay attention to context, don't understand how insider threats are minimized, or perhaps because they are friends of the person the article is about. I guess it is also possible they don't like the venue or the political campaign. Whatever the reason, I don't care if people unfollow me, but if people are abusive in their comments I will block them. However, the people who want to try to understand the overall context may find the above useful.

Meanwhile, here is some reading for you:

  1. ACM Code of Professional Ethics
  2. IEEE Code of Ethics
  3. ISSA Code of Ethics
  4. ISC2 Code of Ethics
  5. ISACA Code of Professional Ethics

Summary of July 15th, 2020 Purdue Seminar on Control System Cyber Security


Guest Blog by Joe Weiss, Applied Control Solutions, LLC

On Wednesday, July 15, 2020, I gave a one-hour presentation on control system cyber security for the Purdue University Summer Seminar Series.

Summary

The statistics from the call include:

There were 183 pre-registrations, of which 119 attended. The registrations were from 16 countries: Australia, Austria, Brazil, China, Germany, India, Israel, Kuwait, Lithuania, Mexico, Netherlands, New Zealand, Saudi Arabia, Singapore, UK, and US. Actual attendees were from India, Israel, Kuwait, Lithuania, Mexico, Netherlands, Saudi Arabia, Singapore, UK, and US.

For those unable to attend, the recording will be available on the Purdue CERIAS website at: https://ceri.as/weiss

After 20 years, control system cyber security has made significant strides in monitoring and securing OT (control system) networks. However, over the same 20 years, it has made minimal strides in monitoring or securing the actual control system devices (e.g., process sensors, actuators, drives, analyzers) and the lower-level device networks, which is where you go “boom in the night”. Much of this is because of the culture clash between the engineering community, who understand the devices but generally have been removed from control system cyber security efforts, and the network security personnel, who do not understand control system devices or control system processes. The impact of the culture gap is at least partially due to network security’s erroneous assumptions (the first of which is illustrated in the sketch following this list):

  • process sensor input to all OT networks is uncompromised, authenticated, and correct, so that securing the network packets is sufficient to protect the control systems and physical processes,
  • control system devices can only be accessed from the Ethernet (IP) network,
  • all control system anomalies can be found from the Ethernet (IP) network,
  • network vulnerabilities directly correspond to physical system impacts,
  • cyber security frameworks can be directly applied to control system cyber security, and
  • cyber security is about zero trust.
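
As a toy illustration of the first assumption, here is a minimal Python sketch of my own (not a model of any real protocol or product; the sensor tag, key, and values are invented). A compromised field device can produce a report that passes a network-layer integrity check while carrying a physically false value; only a process-level plausibility check, here a comparison against a redundant sensor, notices the problem.

    import hmac, hashlib, json

    KEY = b"shared-network-key"  # hypothetical key; a compromised device holds it too

    def sign(report):
        payload = json.dumps(report, sort_keys=True).encode()
        return hmac.new(KEY, payload, hashlib.sha256).digest()

    def network_check(report, tag):
        """What network monitoring validates: the packet, not the physics."""
        return hmac.compare_digest(sign(report), tag)

    def process_check(report, redundant_reading, tolerance=2.0):
        """A crude process-level sanity check against a redundant sensor."""
        return abs(report["temp_c"] - redundant_reading) <= tolerance

    # A compromised field device signs a physically false reading with a valid key.
    false_report = {"sensor": "TT-101", "temp_c": 20.0}  # the process is actually ~80 C
    tag = sign(false_report)

    print(network_check(false_report, tag))   # True  -- the packet is "valid"
    print(process_check(false_report, 80.4))  # False -- the physics disagrees

Securing the packets, in other words, says nothing about whether the process variable inside them is true.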

Q&A

There were 10 questions raised that I did not have a chance to answer on the webinar. I thought the questions and answers would be of general interest.

1). Q: Joe, this is great. You said "Our information sharing doesn't work." What do you think needs to be improved, and how would you improve it?

Answer: Information sharing on cyber network vulnerabilities is addressed in DHS and vendor disclosures, as well as in the industry ISACs. The information sharing that is missing is about actual cyber-related incidents. NERC Lessons Learned don’t address incidents as being cyber-related. The various industry ISACs have not addressed cyber incidents occurring within their industries. To date, sharing on control system incidents has most often been by engineers who have seen incidents that were out of the ordinary. My old conference (the ICS Cyber Security Conference, which no longer exists) served as an informal information-sharing vehicle for engineers to discuss real control system cyber-related incidents. Unfortunately, I don’t believe the government can help because of private organizations’ concerns about making these incidents public. I wrote about this in my book, Protecting Industrial Control Systems from Electronic Threats, in Chapter 9, “Information Sharing and Disclosure”. I will have more to say about this subject in a separate blog at www.controlglobal.com/unfettered.

2). Q: What is your view on the Executive Order for going back to analog systems? We are all driving toward zero carbon and digitalization. How do we achieve the balance between them?

Answer: Hardwired analog systems with no “intelligence”, such as electromechanical float switches as part of a hard-wired relay ladder logic system, would be independent of a cyber attack from external threat agents, including hardware backdoors. However, adding any intelligence and modern communication capabilities would make the analog systems as vulnerable as digital systems to a backdoor sensor attack. Both smart and dumb systems would be potentially vulnerable with respect to a physical, hands-on insider attack. That is the reason for defense-in-depth. The only way to get the balance between zero carbon and digitalization (or manufacturing and digitalization) is to have engineering and network security work the issues together.

3). Q: What approach do we take to secure level 0 and level 1 equipment?

Answer: I mentioned in my presentation that a major misconception is that all process sensor communications traverse the Ethernet (IP) network and that network monitoring can identify any sensor anomalies. Consequently, there is a need to develop control system policies and procedures, and to use existing equipment (as well as network) monitoring technologies. However, existing equipment or network monitoring technologies likely will not be sufficient to identify counterfeit devices or address hardware backdoors. This would most likely require new sensor monitoring technology that addresses the “physics” of the sensors, which would be the input to both equipment and network monitoring. This new sensor monitoring technology has been proven in laboratory testing against multiple sensor vendors. In addition, there needs to be industry recognition that the requirements within standards like ISA 62443 apply to the entire control system, Level 0 through the DMZ. Part of this understanding is that the control system and its network are owned by engineering personnel (operations, engineering support, maintenance) rather than by IT personnel, who should serve in a support role as a service provider.

4). Q: So, to keep validating the low-level sensor data in real time, we will need to know the algorithm that computes the result (e.g., temperature, pressure, etc.), but manufacturers may not wish to share their proprietary algorithms. Then, what can be done?

Answer: The sensors need to be identified by physics “fingerprinting” (see above). This would identify counterfeits as well as any sensor deviations, agnostically. That is, it will identify deviations whether they come from sensor drift, loose screws, calibration issues, hardware problems, or cyber compromises. Once a deviation is identified, there are available tools that should be capable of determining the cause. It is a shame that in 2020 we still don’t know when to use our diagnostic toolboxes because of the lack of awareness.
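
Real physics-based fingerprinting works on the raw, high-rate signal characteristics of a specific sensor. As a greatly simplified stand-in (my own illustration, not the laboratory technology mentioned above; the window size and threshold are arbitrary), the Python sketch below flags any sample that deviates from a sensor’s learned baseline, without knowing whether the cause is drift, a loose screw, a calibration error, or a compromise.

    from collections import deque
    from statistics import mean, stdev

    class BaselineMonitor:
        """Flag samples far outside a sensor's learned baseline.
        Cause-agnostic: drift, hardware faults, and spoofing all
        surface as deviations; diagnosis is a separate step."""

        def __init__(self, window=500, threshold=4.0):
            self.samples = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, value):
            """Return True if `value` is anomalous relative to the baseline."""
            anomalous = False
            if len(self.samples) >= 30:  # need enough history for a baseline
                mu, sigma = mean(self.samples), stdev(self.samples)
                if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                    anomalous = True
            if not anomalous:
                self.samples.append(value)  # only learn from normal-looking data
            return anomalous

    monitor = BaselineMonitor()
    for v in [80.1, 79.9, 80.0] * 20:  # healthy readings establish the baseline
        monitor.observe(v)
    print(monitor.observe(80.05))  # False: within the baseline
    print(monitor.observe(95.0))   # True: a deviation; the cause is unknown

Detection of this kind only says that something deviated; the diagnostic toolbox must then determine why.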

5). Q: Could you also have an engineer's SOC rather than an IT/OT SOC? They would focus on the engineering aspects.

Answer: Without being flippant, that is the control room or control center.

6). Q: How to mitigate supply chain risks?

Answer: This is a very complex question because supply chain risks can range from an entire device to software, to firmware, to microprocessors, as well as integration of multiple instances of these. It requires the cooperation of procurement, physical security, cyber security, test shops/labs, engineering, and maintenance. Sensor monitoring to detect counterfeit or hardware backdoors would be a critical piece of the solution. Asset owners should also require their product and service providers to comply with standards like ISA 62443-2-4 and then to vet them against those requirements. I would be happy to further discuss my thoughts offline.

7). Q: Two questions: Is there any validated OT architecture that may hinder the possibility of backdoor attacks, where the device would look for a master to trigger?

Answer: I don’t think so, as the backdoor could bypass the OT architecture – that is the reason for the Presidential Executive Order.

8). Q: I had a question about the levels. Do you think there is an advantage in separating Level 0 devices into continuously monitored devices (PLCs, HMIs) and smart I/O devices (IO-Link-based devices, EtherNet/IP devices/Profinet devices)?

Answer: Two years ago, we created a special ISA99 working group – ISA99.04-07 – to address Level 0,1 devices. The working group concurred that “legacy” Level 0,1 devices cannot be secured to current cyber security requirements. Additionally, the Purdue Reference Model was identified as being out-of-date for addressing modern control system device technology for cyber and communication capabilities, as there are no longer clear distinctions between levels, even for the same device. There is an advantage to segregating sensors based on the zone in which they are located. Each zone should have its own security requirements based on risk, with countermeasures unique to that zone. For instance, a safety-instrumented system (SIS) involves sensors, a logic solver, and final elements, as well as an engineering workstation. Having a SIS zone makes it easier to accomplish least privilege from both a physical and a logical access perspective.
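
To make the zone idea concrete, here is a minimal Python sketch in the spirit of ISA 62443 zones and conduits. The plant model is entirely hypothetical (the zone names, security levels, and device names are invented): traffic is allowed within a zone or over an explicitly defined conduit, and denied otherwise.

    # Hypothetical zones-and-conduits model; names and levels are invented.
    zones = {
        "SIS":  {"security_level": 4,
                 "members": {"sis-sensor-1", "logic-solver", "sis-valve"}},
        "BPCS": {"security_level": 2,
                 "members": {"plc-1", "hmi-1", "temp-sensor-7"}},
    }

    # Conduits: the only permitted inter-zone communication paths.
    conduits = {("BPCS", "SIS")}  # a single, tightly controlled path

    def zone_of(device):
        for name, zone in zones.items():
            if device in zone["members"]:
                return name
        raise KeyError(device)

    def allowed(src, dst):
        """Least privilege: same-zone traffic, or an explicitly defined conduit."""
        a, b = zone_of(src), zone_of(dst)
        return a == b or (a, b) in conduits

    print(allowed("plc-1", "hmi-1"))         # True: same zone
    print(allowed("hmi-1", "logic-solver"))  # True: over the defined conduit
    print(allowed("logic-solver", "plc-1"))  # False: no SIS -> BPCS conduit

Each zone can then carry its own risk-based requirements, and auditing least privilege reduces to reviewing the conduit set.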

9). Q: Are control device companies taking any action to certify programmable hardware electronics to validate that no malicious logic is included in the logic or printed circuit hardware?

Answer: I think that is the ultimate intent of ISASecure and commercial test/certification companies. The devices certified to date are controllers and master stations. None of the Level 0,1 devices has completed cyber security certifications.

10). Q: Another question I had was about a recent change in the industry direction, to put all devices on the IP network now. I bring new machines to our plant, and 100% of our machines have an Ethernet network and a NAT gateway to expose devices.

Answer: Unfortunately, that is becoming a common practice, especially with Industry 4.0 and other digital upgrade initiatives. However, I believe there is a real need to question whether safety devices should be part of the overall integrated plant Ethernet network. Moreover, I think there needs to be a reassessment of the need to connect control system devices directly to the Internet without some form of proxy to shield them from public access.

-Joe Weiss