Willis H. Ware, a highly respected and admired pioneer in the fields of computing security and privacy, passed away on November 22, 2013, at age 93. Born August 31, 1920, Mr. Ware received a BSEE from the University of Pennsylvania (1941) and an SM in EE from MIT (1942). He worked on classified radar and IFF (identify friend or foe) electronic systems during WWII. After the war he received his Ph.D. in EE from Princeton University (1951) while working for John von Neumann at the Institute for Advanced Study, building an early computer system.
Upon receiving his Ph.D., Dr. Ware took a position with North American Aviation (now part of Boeing). After a year, he joined the RAND Corporation (in 1952), where he stayed for the remaining 40 years of his career, and thereafter as an emeritus computer scientist. His first task at RAND was helping to build the "Johnniac," an early computer system. During his career at RAND he advanced to senior leadership positions, eventually becoming chairman of the Computer Science Department.
Willis was influential in many aspects of computing. As an educator, he initiated and taught one of the first computing courses, at UCLA, and wrote some of the field's first textbooks. In professional circles, he was involved in the early activities of the ACM, and was the founding president of AFIPS (the American Federation of Information Processing Societies). From 1958 to 1959 he served as chairman of the IRE Group on Computers, a forerunner of today's IEEE Computer Society. He served as Vice Chair of IFIP TC 11 from 1985 to 1994. At the time of his death he was still serving as a member of the EPIC Advisory Board.
Dr. Ware chaired several influential studies, including one in 1967 that produced a groundbreaking and transformational report for ARPA (now DARPA), known thereafter as "The Ware Report." To this day, some of the material in that report can be applied to better understand and protect the security of computing systems. The follow-on work to that study eventually led, albeit somewhat indirectly, to the development of the NCSC "Rainbow Series" of publications. (The NCSC, the National Computer Security Center, was a public-facing portion of the NSA, serving as an office for improving security in commercial products.)
In 1972, Dr. Ware was tapped to chair the Advisory Committee on Automated Personal Data Systems for the Secretary of HEW (now HHS). That report, and Willis's subsequent paper, "Records, Computers, and the Rights of Citizens," established the first version of the Code of Fair Information Practices. That, in turn, significantly influenced the Privacy Act of 1974, and many subsequent versions of fair information practices. The Privacy Act mandated the creation of the Privacy Protection Study Commission, of which Dr. Ware was vice chair.
Willis was the first chairman of the Information Security and Privacy Advisory Board, created by the Computer Security Act of 1987. He remained chairman of that board for 11 years following its establishment. Over the years, Dr. Ware served on many other advisory boards, including the US Air Force Scientific Advisory Board, the NSA Scientific Advisory Board, and over 30 National Research Council boards and committees.
Willis Ware was one of the most honored professionals in computing. He was a Member of the National Academy of Engineering, and was a Fellow of the AAAS, of the IEEE, and of the ACM — perhaps the first person to accrue all four honors. He was a recipient of the IEEE Centennial Medal in 1984, the IEEE Computer Pioneer Award in 1993, and a USAF Exceptional Civilian Service Medal in 1979. He was the recipient of the NIST/NSA National Computer System Security Award in 1989, the IFIP Kristian Beckman Award in 1999, a lifetime achievement award from the Electronic Privacy Information Center (2012), and was inducted into the Cyber Security Hall of Fame in 2013.
Dr. Willis H. Ware was truly a pioneer computer scientist, an early innovator in computing education, one of the founders of the field of computer security, and an early proponent of the need to understand appropriate use of computing and the importance of privacy. His dedication to the field and the public interest was both exceptional and seminal.
The RAND Corporation posted an in memoriam piece on its website.
(Any updates or corrections will be posted here as they become available.)
Update 10/26: included acronym expansions of IFF and NCSC, along with links for NCSC and HHS. Added small grammatical corrections.
Update 10/29: added the note and link to the RAND Corporation in memoriam piece.
On October 9th, 2013, I delivered one of the keynote addresses at the ISSA International Conference. I included a number of observations on computing, security, education, hacking, malware, women in computing, and the future of cyber security.
You can see a recording of my talk on YouTube or view it here. You might find it somewhat amusing. See the old guy with the bow tie ramble on.
(If you work in cyber security, you should think about joining the ISSA.)
(Also, if you didn't know, I have two other blogs. One blog is a Tumblr blog feed of various media stories about security, privacy and cybercrime. The other blog is about various personal items that aren't really related to CERIAS, or even necessarily to cyber security — some serious, some not so much.)
Over the last month or two I have received several invitations to go speak about cyber security. Perhaps the uptick in invitations is because of the allegations by Edward Snowden and their implications for cyber security. Or maybe it is because news of my recent awards has caught people's attention. Or it could simply be that audiences want to hear about something other than the (latest) puerile behavior by too many of our representatives in Congress, and I'm an alternative chosen at random. Whatever the cause, I am tempted to accept many of these invitations, on the theory that if I refuse too many, people will stop asking, and then I wouldn't get to meet as many interesting people.
As I've been thinking about what topics I might speak about, I've been looking back through the archive of talks I've given over the last few decades. It's a reminder of how many things we, as a field, knew long ago that have nonetheless been ignored by vendors and authorities. It's also depressing to realize how little impact I, personally, have had on the practice of information security during my career. But it has also led me to reflect on some anniversaries this year (that happens to us old folk). I'll mention three in particular here, and may use others in some future blogs.
In early November of 1988 the world awoke to news of the first major, large-scale Internet incident. Some self-propagating software had spread around the nascent Internet, causing system crashes, slow-downs, and massive uncertainty. It was really big news. Dubbed the "Internet Worm," it served as an inspiration for many malware authors and vandals, and a wake-up call for security professionals. I recall very well giving talks on the topic for the next few years to many diverse audiences about how we must begin to think about structuring systems to be resistant to such attacks.
Flash forward to today. We don't see the flashy, widespread damage of worm programs anymore, such as that caused by Nimda and Code Red. Instead, we have stealthier botnets that infiltrate millions of machines and use them for spam, DDoS, and harassment. The problem has gotten larger and worse, although in a manner that hides some of its magnitude from the casual observer. However, the damage is there; don't try to tell the folks at Saudi Aramco or Qatar's RasGas that network malware isn't a concern anymore! Worrisomely, experts working with SCADA systems around the world are increasingly warning how vulnerable those systems might be to similar attacks in the future.
Computer viruses and malware of all sorts first notably appeared "in the wild" in 1982. By 1988 there were about a dozen in circulation. Those of us advocating more care in the design, programming, and use of computers were not heeded in the headlong rush to get computing onto every desktop (and more) at the lowest possible cost. Thus, we now have (literally) tens of millions of distinct versions of malware known to security companies, with millions more appearing every year. And unsafe practices are still commonplace -- 25 years after that Internet Worm.
For the second anniversary, consider 10 years ago. The Computing Research Association, with support from the NSF, convened a workshop of experts in security to consider some Grand Challenges in information security. It took a full three days, but we came up with four solid Grand Challenges (it is worth reading the full report and possibly watching the video):
I would argue -- without much opposition from anyone knowledgeable, I daresay -- that we have not made any measurable progress against any of these goals, and have probably lost ground in at least two.
Why is that? Largely economics, and a poor understanding of what good security involves. The economic aspect is that no one really cares about security -- enough. If security were important, companies would really invest in it. However, they don't want to part with all the legacy software and systems they have, so instead they keep stumbling forward and hope someone comes up with magic fairy dust they can buy to make everything better.
The government doesn't really care about good security, either. We've seen that the government is allegedly spending quite a bit on intercepting communications and implanting backdoors into systems, which is certainly not making our systems safer. And the DOD has a history of huge investment into information warfare resources, including buying and building weapons based on unpatched, undisclosed vulnerabilities. That's offense, not defense. Funding for education and advanced research is probably two orders of magnitude below what it really should be if there was a national intent to develop a secure infrastructure.
As far as understanding security goes, too many people still think that the ability to patch systems quickly is somehow the approach to security nirvana, and that constructing layers and layers of add-on security measures is the path to enlightenment. I no longer cringe when I hear someone who is adept at crafting system exploits referred to as a "cyber security expert," but so long as that is accepted as what the field is all about there is little hope of real progress. As J.R.R. Tolkien once wrote, "He that breaks a thing to find out what it is has left the path of wisdom." So long as people think that system penetration is a necessary skill for cyber security, we will stay on that wrong path.
And that is a great segue into the last of my three anniversary recognitions. Consider this quote (one of my favorite) from 1973 -- 40 years ago -- from a USAF report, Preliminary Notes on the Design of Secure Military Computer Systems, by a then-young Roger Schell:
…From a practical standpoint the security problem will remain as long as manufacturers remain committed to current system architectures, produced without a firm requirement for security. As long as there is support for ad hoc fixes and security packages for these inadequate designs and as long as the illusory results of penetration teams are accepted as demonstrations of a computer system security, proper security will not be a reality.
That was something we knew 40 years ago. To read it today is to realize that the field of practice hasn't progressed in any appreciable way in four decades, except that we are now also stressing the wrong skills in developing the next generation of expertise.
Maybe I'll rethink that whole idea of going to give talks on security and simply send them each a video loop of me banging my head against a wall.
PS -- happy 10th annual National Cyber Security Awareness Month -- a freebie fourth anniversary! But consider: if cyber security were really important, wouldn't we be aware of that every month? The fact that we need to promote awareness of it is proof it isn't taken seriously. Thanks, DHS!
Now, where can I find a good wall that doesn't already have dents from my forehead...?
In the June 17, 2013 online interview with Edward Snowden, there was this exchange:
I simply thought I'd point out a statement of mine that first appeared in print in 1997 on page 9 of Web Security & Commerce (1st edition, O'Reilly, 1997, S. Garfinkel & G. Spafford):
Secure web servers are the equivalent of heavy armored cars. The problem is, they are being used to transfer rolls of coins and checks written in crayon by people on park benches to merchants doing business in cardboard boxes from beneath highway bridges. Further, the roads are subject to random detours, anyone with a screwdriver can control the traffic lights, and there are no police.
I originally came up with an abbreviated version of this quote during an invited presentation at SuperComputing 95 (December of 1995) in San Diego. The quote at that time was everything up to the "Further...." and was in reference to using encryption, not secure WWW servers.
A great deal of what people are surprised about now should not be a surprise -- some of us have been lecturing about elements of it for decades. I think Cassandra was a cyber security professor....
[Added 9/10: This also reminded me of a post from a couple of years ago. The more things change....]
Last post, we wrote about the NSA's secret program to obtain and then analyze the telephone metadata relating to foreign espionage and terrorism by obtaining the telephone metadata relating to everyone. In this post, we will discuss a darker, but somewhat less troubling, program called PRISM. As described in public media via leaked PowerPoint slides, PRISM and its progeny constitute a program that permits the NSA, with the approval of the super-secret Foreign Intelligence Surveillance Court (FISC), to obtain "direct access" to the servers of Internet companies (e.g., AOL, Google, Microsoft, Skype, and Dropbox) to search for information related to foreign terrorism – or, more accurately, terrorism and espionage by "non-US persons."
Whether you believe that PRISM is a wonderful program narrowly designed to protect Americans from terrorist attacks, or a massive government conspiracy to gather intimate information to thwart Americans' political views, or even a conspiracy to run a false-flag operation to start a space war against alien invaders, what the program actually is, and how it is regulated, depends on how the program operates. When Sir Isaac Newton published his work Opticks in 1704, he described how a prism could be used to – well, shed some light on the nature of electromagnetic radiation. Whether you believe that the Booz Allen leaker was a hero, or whether you believe that he should be given the full Theon Greyjoy treatment for treason, there is little doubt that he has sparked a necessary conversation about the nature of privacy and data mining. President Obama is right when he says that, to achieve the proper balance, we need to have a conversation. And to have a conversation, we have to have some knowledge of the programs we are discussing.
Unlike the telephony metadata, the PRISM programs involve a different character of information, obtained in a potentially different manner. As reported, the PRISM programs involve not only metadata (header, source, location, destination, etc.) but also content information (e-mails, chats, messages, stored files, photographs, videos, audio recordings, and even interception of voice and video Skype calls.)
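The metadata/content split described above is concrete in any email message. As a rough illustration (a toy message, nothing specific to PRISM), Python's standard `email` module separates the two cleanly -- though note that even here the line blurs: the Subject is technically a "header" that plainly carries content:

```python
from email import message_from_string

# A made-up message used only to illustrate the header/content split.
raw = """From: alice@example.com
To: bob@example.com
Date: Mon, 10 Jun 2013 09:00:00 -0000
Subject: lunch?

Meet at noon by the fountain.
"""

msg = message_from_string(raw)

# "Header" information -- roughly what courts treat as non-content metadata.
metadata = {k: msg[k] for k in ("From", "To", "Date", "Subject")}

# The body -- content information, traditionally given stronger protection.
content = msg.get_payload()

print(metadata)
print(content.strip())
```

Who wrote to whom, and when, is recoverable without ever touching the body -- which is exactly why metadata alone is so revealing.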
Courts (including the FISA Court) treat content information differently from "header" information. For example, when the government investigated the ricin-laced letters sent to President Obama and NYC Mayor Michael Bloomberg, it reportedly used the U.S. Postal Service's Mail Isolation Control and Tracking (MICT) system, which photographs the outside of every letter or parcel sent through the mails -- metadata. When Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which among other things established procedures for law enforcement agencies to get access to both "traffic" (non-content) and content information, the FBI took the position that it could, without a wiretap order, engage in what it called "post-cut-through dialed digit extraction" -- that is, when you call your bank and it prompts you to enter your bank account number and password, the FBI wanted to "extract" that information as "traffic," not "content." So the lines between "content" and "non-content" may be blurry. Moreover, with enough context, we can infer content. As Justice Sotomayor observed in the 2012 GPS privacy case:
… it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. E.g., Smith, 442 U.S., at 742, 99 S.Ct. 2577; United States v. Miller, 425 U.S. 435, 443, 96 S.Ct. 1619, 48 L.Ed.2d 71 (1976). This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.
But the PRISM program is clearly designed to focus on content. Thus, the part of the Supreme Court's holding in Smith v. Maryland that people have no expectation of privacy in the numbers they call does not apply to PRISM-type information. Right?
Again, not so fast.
Simple question. Do you have a reasonable expectation of privacy in the contents of your e-mail?
Short answer: Yes.
Longer answer: No.
Better answer: vis-à-vis whom, and for what purposes? You see, privacy is not black and white. It is multispectral – you know, like light through a triangular piece of glass.
When the government was conducting a criminal investigation of the manufacturer of Enzyte (smiling Bob and his gigantic – um – putter), it subpoenaed his e-mails from, among others, Yahoo! The key word here is subpoena – not search warrant. Now that's the thing about data and databases -- if information exists, it can be subpoenaed. In fact, a Florida man has now demanded production of cell location data from – you guessed it – the NSA.
But content information is different from other information. And cloud information is different. The telephone records are the phone company's records of how you used its service. The contents of emails and documents stored in the cloud are your records, of which the provider has incidental custody. It would be like the government subpoenaing your landlord for the contents of your apartment (they could, of course, subpoena you for them, but then you would know), or subpoenaing the U-stor-it for the contents of your storage locker (sparking a real storage war). They could, with probable cause and a warrant, search the locker (if you have a warrant, I guess you're going to come in), but a subpoena to a third party is dicey.
So the Enzyte guy had his records subpoenaed. This was done pursuant to the Stored Communications Act, which permits it. The government argued that it didn't need a search warrant to read Enzyte guy's email because -- you guessed it -- he had no expectation of privacy in the contents of his mail. Hell, he stored it unencrypted with a third party. Remember Smith v. Maryland? The phone company case? You trust a third party with your records, you risk exposure. Or as Senator Blutarsky (I-NH?) might opine, "you ()*^#)( up, you trusted us…" (actually Otter said that, with apologies to Animal House fans).
Besides, cloud provider contracts, and email and Internet provider privacy policies, frequently limit the privacy rights of users. In the Enzyte case, the government argued that terms of service permitting the scanning of email contents for viruses or spam (or, in the case of Gmail and others, embedding context-based ads) meant that the user of the email service "consented" to have his or her mail read, and therefore had no privacy rights in the content. ("Yahoo! reserves the right in their sole discretion to pre-screen, refuse, or move any Content that is available via the Service.") Terms of service providing that the ISP would respond to lawful subpoenas made it a "joint custodian" of your email and other records (like your roommate) who could consent to the production of your communications or files. Those policies your employer has that say "employees have no expectation of privacy in their emails or files"? While you thought that meant your boss (and the IT guy) could read your emails, the FBI or NSA may take the position that "no expectation of privacy" means exactly that.
Fortunately, most courts don’t go so far. In general, courts have held that the contents of communications and information stored privately online (not on publicly accessible Facebook or Twitter feeds) are entitled to legal protection even if they are in the hands of potentially untrustworthy third parties. But this is by no means assured.
But clearly the data in the PRISM case is more sensitive, and entitled to a greater level of legal protection, than that in the telephony metadata case. That doesn't mean that the government, with a court order, can't search or obtain it. It means that companies like Google and Facebook probably can't just "give it" to the government. It's not their data.
So the NSA wants to have access to information in a massive database. They may want to read the contents of an email, a file stored on Dropbox, whatever. They may want to track a credit card through the credit card clearing process, or a banking transaction through the interbank funds transfer network. They may want to track travel records – planes, trains or automobiles. All of this information is contained in massive databases or storage facilities held by third parties – usually commercial entities. Banks. VISA/MasterCard. Airlines. Google.
The information can be tremendously useful. The NSA may have lawful authority (a court order) to obtain it. But there is a practical problem: how does the NSA quickly and efficiently seek and obtain this information from a variety of sources without tipping those sources off about the individual searches it is conducting -- information which is itself classified? That appears to be the problem the PRISM programs attempt to solve.
In the telephony program, the NSA “solved” the problem by simply taking custody of the database.
In PRISM, they apparently did not. And that is a good thing. The databases remain in the custody of those who created them.
Here's where it gets dicey – factually.
The reports about PRISM indicate that the NSA had “direct access” to the servers of all of these Internet companies. Reports have been circulating that the NSA had similar “direct access” to financial and credit card databases as well. The Internet companies have all issued emphatic denials. So what gives?
Speculation time. The NSA and the Internet companies could be outright lying. David Drummond, Google's Chief Legal Officer, ain't going to jail for this. Second, they could be reinterpreting the term "direct" access. When General Alexander testified under oath that the NSA did not "collect any type of data on millions of Americans," he took the term "collect" to mean "read" rather than "obtain."
Most likely, however, is that the NSA PRISM program is a protocol for the NSA, with FISC approval, to task the computers at these Internet companies to perform a search. This tasking is most likely indirect. How it works is, at this point, rank speculation. What is likely is that an NSA analyst, say in Honolulu, wants to get the communications (postings, YouTube videos, stored communications, whatever) of Abu Nazir, a non-US person, which are stored on a server in the U.S., or stored on a server in the cloud operated by a US company. The analyst gets "approval" for the "search," by which I mean that a flock of lawyers from the NSA, FBI, and DOJ descend (what is the plural of lawyers? [a "plague"? --spaf]) and review the request to ensure that it asks for info about a non-US person, that it meets the other FISA requirements, that there is minimization, etc. Then the request is transmitted to the FISC for a warrant. Maybe. Or maybe the FISC has approved the searches in bulk (raising the Writ of Assistance issue we described in the previous post). We don't know. But assuming that the FISC approves the "search," the request has to be transmitted to, say, Google, for its lawyers to review, and then the data transmitted back to the NSA. To the analyst in Honolulu, it may look like "direct access." I type in a search, and voilà! Results show up on the screen. It is this process that appears to be within the purview of PRISM. It may be a protocol for effectuating court-approved access to information in a database, not direct access to the database.
Or maybe not. Maybe it is a direct pipe into the servers, which the NSA can task, and from which the NSA can simply suck out the entire database and perform its own data analytics. Doubtful, but who knows? That's the problem with rank speculation. Aliens, anyone?
But we are basing this analysis on what we believe is reasonable to assume.
So, is it legal? Situation murky. Ask again later.
If the FISC approves the search, with a warrant, within the scope of the NSA's authority, on a non-US person, with minimization, then it is legal in the U.S., while probably violating the hell out of most EU and other data privacy laws. But that is the nature of the FISA law and the USA PATRIOT Act, which amended it. As the PowerPoint slides said, most Internet traffic travels through the U.S., which means we have the ability (and, under USA PATRIOT, the authority) to search it.
While the PRISM programs are targeted at much more sensitive content information, if conducted as described above, they actually present fewer domestic legal issues than the telephony metadata case. If they are a dragnet, or if the NSA is actually conducting data mining on these databases to identify potential targets, then there is a bigger issue.
The government has indicated that it may release an unclassified version of at least one FISC opinion related to this subject. That's a good thing. Other redacted legal opinions should also be released so we can have the debate President Obama has called for. And let some light pass through this PRISM.
† Mark Rasch is the former head of the United States Department of Justice Computer Crime Unit, where he helped develop the department's guidelines for computer crimes related to investigations, forensics and evidence gathering. Mr. Rasch is currently a principal with Rasch Technology and Cyberlaw and specializes in computer security and privacy.
‡ Sophia Hannah has a BS degree in Physics with a minor in Computer Science and has worked in scientific research, information technology, and as a computer programmer. She currently manages projects with Rasch Technology and Cyberlaw and researches a variety of topics in cyberlaw.
Rasch Cyberlaw (301) 547-6925 www.raschcyber.com
The NSA programs to retrieve and analyze telephone metadata and Internet communications and files (the former we will call the telephony program; the latter is codenamed PRISM) are at one and the same time narrow, potentially reasonably designed programs aimed at obtaining potentially useful information within the scope of the authority granted by Congress. They are, at one and the same time, perfectly legal and grossly unconstitutional. It's not that we are of two opinions about these programs; it is that the character of these programs is such that they have both characteristics at the same time. Like Schrödinger's cat, they are both alive and dead at once – and further examination destroys the experiment. Let's look at the telephony program first.
Telephone companies, in addition to providing services, collect a host of information about the customer including their name, address, billing and payment information (including payment method, payment history, etc.). When the telephone service is used, the phone company collects records of when, where and how it was used – calls made (or attempted), received, telephone numbers, duration of calls, time of day of calls, location of the phones from which the calls were made, and other information you might find on your telephone bill. In addition, the phone company may collect certain technical information – for example, if you use a cell phone, the location of the cell from which the call was made, and the signal strength to that cell tower or others. From this signal strength, the phone company can tell reasonably precisely where the caller is physically located (whether they are using the phone or not) even if the phone does not have GPS. In fact, that is one of the ways that the Enhanced 911 service can locate callers. The phone company creates these records for its own business purposes. It used to collect this primarily for billing, but with unlimited landline calling, that need has diminished. However, the phone companies still collect this data to do network engineering, load balancing and other purposes. They have data retention and destruction policies which may keep the data for as short as a few days, or as long as several years, depending on the data. Similar “metadata” or non-content information is collected about other uses of the telephone networks, including SMS message headers and routing information. Continuing with the Schrödinger analogy, the law says that this is private and personal information, which the consumer does not own and for which the consumer has no expectation of privacy. Is that clear?
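A single call-detail record of the kind described above can be sketched as a simple data structure. This is purely illustrative -- the field names below are invented, not any carrier's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    # Non-content "metadata" of the kind the post describes.
    # All field names here are hypothetical, not a real carrier schema.
    caller: str        # originating phone number
    callee: str        # dialed phone number
    start: datetime    # when the call began
    duration_s: int    # length of the call in seconds
    cell_id: str       # tower that carried the call
    signal_dbm: int    # signal strength, usable for rough location

rec = CallDetailRecord("555-0101", "555-0199",
                       datetime(2013, 6, 1, 14, 30), 185, "TWR-042", -67)

# Note what is absent: nothing here records what was said on the call.
print(rec.caller, rec.callee, rec.duration_s)
```

Every field is "about" the call rather than "of" the call -- which is the legal distinction the rest of this post turns on.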
Federal law calls this telephone metadata “Consumer Proprietary Network Information” or CPNI. 47 U.S.C. 222 (c)(1) provides that:
Except as required by law or with the approval of the customer, a telecommunications carrier that receives or obtains customer proprietary network information by virtue of its provision of a telecommunications service shall only use, disclose, or permit access to individually identifiable customer proprietary network information in its provision of (A) the telecommunications service from which such information is derived, or (B) services necessary to, or used in, the provision of such telecommunications service, including the publishing of directories.
Surprisingly, the exceptions to this prohibition do not include a specific “law enforcement” or “authorized intelligence activity” exception. Thus, if the disclosure of consumer CPNI to the NSA under the telephony program is “required by law” then the phone company can do it. If not, it can’t.
But wait, there's more. At the same time that the law says that a consumer's telephone metadata is private, it also says that consumers have no expectation of privacy in that data. In a landmark 1979 decision, the United States Supreme Court held that the government could use a simple subpoena (rather than a search warrant) to obtain the telephone billing records of a consumer. See, these aren't the consumer's records. They are the phone company's records. The Court noted, "we doubt that people in general entertain any actual expectation of privacy in the numbers they dial. All telephone users realize that they must 'convey' phone numbers to the telephone company, since it is through telephone company switching equipment that their calls are completed. All subscribers realize, moreover, that the phone company has facilities for making permanent records of the numbers they dial, for they see a list of their long-distance (toll) calls on their monthly bills." The Court went on, "even if petitioner did harbor some subjective expectation that the phone numbers he dialed would remain private, this expectation is not 'one that society is prepared to recognize as reasonable.'"
By trusting the phone company with the records of the call, consumers “assume the risk” that the third party will disclose them. The Court explained, “petitioner voluntarily conveyed to it information that it had facilities for recording and that it was free to record. In these circumstances, petitioner assumed the risk that the information would be divulged to police.” This dichotomy is not surprising. The Supreme Court held that, as a matter of Constitutional law, any time you trust a third party, you run the risk that the information will be divulged. Prosecutors and litigants subpoena third-party information all the time – your phone bills, your medical records, credit card receipts, bank records, surveillance camera data, and records from your mechanic – just about anything. These are not your records, so you can’t complain. At the same time, Congress was concerned with the phone companies’ use of CPNI for marketing purposes without consumer consent, so it imposed statutory restrictions on the disclosure or use of CPNI unless “required by law.”
There is little doubt that telephony metadata can be useful in foreign intelligence and terrorism cases. Hell, it can be useful in any criminal investigation, or for that matter, a civil or administrative case. If the CIA obtains the phone records of, say, Abu Nazir (for Homeland fans) and spots a phone number he has called, the intelligence community, through the NSA, wants to be able to find out who owns that number and whom that person called in turn. The NSA wants this data for precisely the same reason that it is legally protected – phone metadata reveals patterns which can show relationships between people, and help determine who is associated with whom and for what purpose. Metadata and link analysis can help distinguish between a call to mom, a call to a colleague, and a call to a terrorist cell. Context can reveal content – or at least create a strong inference of content. So, in appropriate cases involving terrorism, national security or intelligence involving non-US persons, the NSA should have this data. And indeed, it always has. None of that is new.
If the NSA captured a phone number, say 867-5309, it could demand the records relating to that number from the phone company through an order issued by a special super-secret court called the FISC. The order could say “give the NSA all the records of phone usage of 867-5309 as well as the records of the numbers that it called.” Problem is, that approach is unwieldy and time-consuming, requires a new court order with each query, and in many ways overproduces records. Remember, not only are these terrorism and national security investigations, but the target is a non-US person, usually (but not always) located outside the United States.
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Read that carefully. You would think that it requires a warrant to search, right? Wrong. Actually, courts interpret the comma after the word “violated” as a semi-colon (who says grammar doesn’t matter?). “The people,” which includes but is not limited to U.S. citizens, have a right to be secure against unreasonable searches and seizures (more on the “and” in a minute). Also, warrants have to be issued by neutral magistrates and must specify what is to be seized. So no warrant is needed if the search is “reasonable.” In fact, the vast majority of “searches and seizures” in America are conducted without a warrant. People are searched at airports and borders. No warrant. They are patted down on the streets and in their cars. No warrant. Cops look into their car windows, follow them around, and capture video of them without a warrant. Police airplanes, helicopters (and soon drones) capture images of people in their back yards or on their porches. No warrant. Dogs can sniff for drugs, bombs or contraband. No warrant. And people give consent to search without a warrant all the time. When the police searched the boat for the fugitive Boston bomber, they needed no warrant because of exigent circumstances (and perhaps because the boat’s owner consented). Warrantless searches can be “reasonable” and can pass constitutional muster. That’s one reason Congress created the FISC.
For law enforcement purposes (to catch criminals) the government can get a grand jury subpoena, a search warrant, a “trap and trace” order, a “pen register” order, a Title III wiretap order, or other orders if they can show (depending on the information sought) probable cause or some relevance to the criminal investigation. But for intelligence gathering purposes, the NSA can’t really show “probable cause” to believe that there’s a crime, because often there is not. It’s intelligence gathering. So the Foreign Intelligence Surveillance Act (FISA) created a special secret court to allow the intelligence community to do what the law enforcement community could already do – get information under a court order, but instead of showing that a crime was committed, they had to show that the information related to foreign intelligence.
After September 11, 2001, Congress added terrorism as well. When Congress amended FISA, it allowed the FISA court (FISC) to authorize orders for the production of “books, records or other documents.” Section 215 of the USA PATRIOT Act allowed the FBI to apply for an order to produce materials that assist in an investigation undertaken to protect against international terrorism or clandestine intelligence activities. The act specifically gives an example to clarify what it means by "tangible things": it includes "books, records, papers, documents, and other items." Telephone metadata fits within this description, and that is the statutory hook for the NSA telephony program as we know it.
So the NSA has the authority to seek and obtain (through the FBI and FISC) telephone metadata. It also has a legitimate need to do so. But that’s not exactly what it did here. Instead of getting the records it needed, the NSA decided that it would get all the records of all calls made or received (non-content information) by everyone, at least from Verizon, and most likely from all providers. The demand was updated daily, so every call record was dumped by the phone companies into a massive database operated by the NSA.
Now this is bad. And good. The good part is that, by collecting metadata from all of the phone companies, the NSA could “normalize” and cross-reference the data. A single authorized search of the database could find records from Verizon, AT&T, Sprint, T-Mobile, and possibly Orange, British Telecom, who knows? Rather than having the FISC issue an order to Verizon for a phone record and then, after that is examined, another order to AT&T, having the data all in one place, “pingable” by the NSA, means a single query can find all of the records related to that query.
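To make the “normalize and cross-reference” idea concrete, here is a minimal sketch in Python. The carriers, field names, phone numbers, and record formats are all invented for illustration; no real carrier schema or NSA system is implied.

```python
from collections import defaultdict

# Hypothetical, simplified call detail records as two carriers might
# export them, each with its own field names and number formatting.
verizon_records = [
    {"caller": "(212) 867-5309", "callee": "(703) 555-0142", "ts": "2013-04-01T09:15"},
]
att_records = [
    {"from_num": "7035550142", "to_num": "2128675309", "time": "2013-04-02T18:03"},
]

def normalize(num: str) -> str:
    """Strip formatting so records from different carriers share one key."""
    return "".join(ch for ch in num if ch.isdigit())

# Build a single index over both feeds, keyed by the normalized caller.
index = defaultdict(list)
for r in verizon_records:
    index[normalize(r["caller"])].append(("verizon", r))
for r in att_records:
    index[normalize(r["from_num"])].append(("att", r))

def lookup(number: str):
    """One query returns matching records from every carrier at once."""
    return index[normalize(number)]

print([carrier for carrier, _ in lookup("(212) 867-5309")])
```

The point of the sketch is the single `index`: once both feeds are normalized into one keyed structure, one lookup replaces a sequence of per-carrier court orders.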
So if the FISC authorizes a search for Abu Nazir’s phone records, this process allows the NSA to actually get them. Also, the NSA doesn’t have to provide a court order (which itself would reveal classified information about who they were looking at) to some functionary at Verizon or AT&T (even if that functionary had a security clearance). And Verizon’s database would not have a record of what FISC-authorized searches the NSA conducted – information which itself is highly classified.
Just because the NSA had all of the records does not mean that it looked at them all. In fact, the NSA and FBI established a protocol, apparently approved by the FISC, that restricted how and when they could ping this massive database. So the mere physical transfer of the metadata database from the phone companies to the NSA doesn’t impinge on privacy unless and until the NSA makes a query, and these queries are all authorized by the FISC and are lawful. So what’s the big deal? It’s all good, man.
Not so fast, Mr. Schrödinger. There are two huge legal problems with this program. Undoubtedly, the USA PATRIOT Act authorizes the FISC to order production of “tangible things,” and these records are “tangible things.” But the law does not authorize what are called “general warrants.” A general warrant is a warrant that either fails to specify the items to be searched for or seized, fails to do so with particularity, or is so broad or vague as to permit the person seizing the items almost unfettered discretion in what to take. A warrant which permitted seizure of “all evidence of crimes” or “all evidence of gang activity” would be an unconstitutional general warrant.
It’s important to note that the warrant is “legal” in the sense that it was for information relevant to a crime (or, say, terrorism), that the obtaining of the warrant was authorized by law, that a court issued the warrant, and that the proper procedures were followed. But the warrant is unconstitutional and so is the search and seizure. This is particularly true where the warrant seeks information that relates to First Amendment protected activities like what books we are reading, and with whom we are associating. So when Texas authorized the search and seizure of records relating to “communist activities” (the ism before terrorism) and cops got a warrant to take such books and records, the Supreme Court had no problem finding that the warrant was an unconstitutional “general warrant.”
Even though the FISC warrant to Verizon specified exactly what was to be seized (“everything”), it was undoubtedly a general warrant. Remember, the Fourth Amendment prohibits unreasonable “searches” and “seizures.” A warrant authorizing seizure of all records of millions of people who did nothing wrong, particularly when it is designed to figure out their associations, is about as general as you can get. And that is assuming that the searches, or pings to the database, which happen later, are reasonable.
What’s more, by taking custody of all of these records, the NSA abrogates the document retention and destruction policies of all of the phone companies. We can assume that the NSA keeps these records indefinitely. So long after Verizon decides it doesn’t need to know what cell tower you pinged on July 4, 2005 at 6:15.22 PM EST, the NSA will retain this record. That’s a problem for the NSA because now, instead of subpoenaing Verizon for these records (especially in a criminal case where the defendant has a constitutional right to the records if relevant to a defense), the NSA (or the FBI, which obtained the records for the NSA) can expect to get a subpoena for the records. While the NSA and FBI would undoubtedly claim that the program is classified, clearly my own phone records are not classified. A federal law called the Classified Information Procedures Act provides a mechanism to obtain unclassified versions of classified data. So if you were charged with a crime by the FBI, and the same FBI had records (in this database) that indicated that you did not commit the crime, it would have to search the database and produce the records. And when Verizon tells you that the records are gone, well… it ain’t true anymore.
Even if the “seizure” is pursuant to a general warrant, the government would argue that it is “reasonable” because it is necessary to effectuate the NSA’s function of protecting national security, and its impact on privacy is minimal because the database isn’t “pinged” without court approval. The “collection” of data about tens of millions of Americans doesn’t affect their privacy, the argument goes, especially when the Supreme Court said that they have no privacy rights in this data, and it doesn’t even belong to them. (Even though the Director of National Intelligence testified in March that the NSA did not “collect” any data on millions of Americans.) Besides, the NSA would argue, there is no other way for the government to do this.
What does the NSA do with the records? Here’s where there is an unknown. At present, we do not know what the NSA does with the telephone metadata database. Do they simply query it – e.g., give me all the records of calls made by Abu Nazir – or do they perform data mining, link analysis, and pattern analysis on the database in order to identify potential Abu Nazirs? If the latter, then the NSA is clearly searching the records of millions of Americans. If the former, it is still troubling for a few reasons.
First, the NSA’s authority revolves around non-US persons. While there may be “inadvertent” collection on U.S. persons, the target of the surveillance must be a non-US person for the program to be legal. According to the leaked documents, the NSA took a very liberal interpretation of what this means. First, it determined that as long as there was a 51% chance that the target was a non-US person, the NSA was entitled to obtain records. Second, it may – and we stress "may" – have interpreted its authority as providing that, if the target of the investigation was foreign (again, a 51% chance), then it could obtain records related to calls between two US persons wholly within the US. Finally, it apparently deployed a “two degrees of separation” test. If Abu Nazir (51% foreign) called John Smith’s telephone number, the NSA could look at who Smith (100% US) called within the US (first degree of separation). If Smith called Jones, the NSA could then look at Jones’ call records (second degree of separation). At this point, even if the pinging of the database is authorized by the FISC, we are a long way from Abu Nazir. Toto, I’m afraid we are in Kansas.
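The “two degrees of separation” test is just a bounded walk over a call graph. The sketch below shows how quickly the swept-in population grows; the names and calls are invented, and this is only an illustration of the hop logic described above, not any actual NSA query.

```python
from collections import defaultdict

# Toy call graph: (caller, callee) pairs. "nazir" stands in for the
# assumed (51% foreign) seed; everyone else is a US person.
calls = [
    ("nazir", "smith"),   # seed calls Smith
    ("smith", "jones"),   # first degree of separation
    ("jones", "brown"),   # second degree of separation
    ("brown", "davis"),   # third degree -- beyond the claimed authority
]

graph = defaultdict(set)
for caller, callee in calls:
    graph[caller].add(callee)

def contact_chain(seed: str, hops: int) -> set:
    """Return everyone reachable from `seed` within `hops` calls."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {c for n in frontier for c in graph[n]} - seen
        seen |= frontier
    return seen - {seed}

# Two hops from the seed sweep in Smith and Jones, but not Brown.
print(sorted(contact_chain("nazir", 2)))
```

With real call volumes, each extra hop multiplies the set by the average number of contacts per person, which is why even a two-hop rule reaches far past the original target.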
OK, but what’s the big deal? The seizure of the database is authorized by the FISC, under a statute approved by Congress, with Congressional knowledge and oversight (maybe), and under strict control by the NSA, the FBI and DOJ. Every search of the database is approved by the super-secret court, right? Not so fast, Kemo Sabe. It is highly unlikely that the FISC approves every database search. More likely is that the FBI and NSA have established protocols and procedures designed to ensure that the searches are within their jurisdiction, are designed to find information about terrorism and foreign intelligence, that the targets are (51%) foreign, and that there is a minimization procedure. These protocols – rather than the individual searches themselves – are what are approved by the FISC. The NSA then most likely reports back to the FISC (through the DOJ) about whether there was an “inadvertent disclosure” of information not related to these objectives. So the court most likely does not approve every search.
And that’s another problem. You see, each “search” of the database is – well – a search. That search must be supported by probable cause (in a criminal case, to believe that there’s a crime; in a FISA case, espionage, foreign intelligence or terrorism) and must be approved by a court. Each search. Not the process. We have been down this road before. In fact, this is precisely what led to the American Revolution in general and the Fourth Amendment in particular.
When the British Parliament issued the Navigation Acts imposing tariffs on goods imported into America, many colonists refused to pay them (as Boston lawyer James Otis noted, “taxation without representation is tyranny”). So Parliament authorized King George II to issue what are called “writs of assistance.” This writ, issued by a court, authorized the executive branch (a customhouse officer with the assistance of the sheriff) to search colonists’ houses for unlawfully smuggled items. These writs did not specify what the sheriff could search for or seize, or where he could look. Like the NSA program, the court approved what could be done; the executive had discretion in how to do it. When George II was succeeded by George III (the writs expiring with the death of the King), Parliament reauthorized them under the hated Townshend Acts. James Otis urged resistance, and it was the use of these unspecific writs authorizing searches that galvanized public opinion (and that of John Adams in particular) to urge revolution. It is why the Fourth Amendment demands that a search warrant specify, based on probable cause, the specific place to be searched and the items to be seized. It’s also why writs of assistance are effectively prohibited by the Constitution.
The NSA FISC approved searches would be like a judge in Los Angeles issuing a search warrant to the LAPD which said, “you may search any house as long as you smell marijuana in that house.” While the search may be reasonable, and indeed, if the LAPD had applied for a warrant to search a house after they smelled marijuana a court probably would have issued the warrant, the broad blanket approval of these searches would be more akin to a writ of assistance.
So the NSA digital telephony program, while legal in the sense that it was approved by both Congress and the Foreign Intelligence Surveillance Court, has some serious Constitutional problems.
The phone companies could be on the hook for participating in the program, even though they have immunity and had no choice but to participate. In fact, they could not legally have even disclosed the program. In the FISA amendments, Congress expressly gave the phone companies immunity for making “good faith” disclosures of information pursuant to Section 215.
So why would the phone company be in trouble? The problem is the “good faith” part. In 2012 the Supreme Court looked at the question of when someone (cops in that case) should have immunity for a good faith search pursuant to an unconstitutional warrant. The cops got a warrant for all records of “gang related activity” and all guns in a particular house. The court agreed that the warrant was overbroad, unconstitutional, and should not have been issued. The question was whether the cops, who executed the warrant, should have immunity from civil liability because they acted in “good faith.”
The Supreme Court noted that the fact that they got a warrant at all was one indication that they acted in good faith, but that, “the fact that a neutral magistrate has issued a warrant authorizing the allegedly unconstitutional search or seizure does not end the inquiry into objective reasonableness. Rather, we have recognized an exception allowing suit when ‘it is obvious that no reasonably competent officer would have concluded that a warrant should issue.’” In other words, the cops are generally permitted to rely on the fact that a court issued a search warrant, unless the warrant itself (or the means by which it was procured) is so obviously unconstitutional, overbroad, general or otherwise prohibited that you cannot, in good faith, rely on it. While the Court found that the cops had immunity because the warrant was not so overbroad as to lead to the inevitable conclusion that it was unconstitutional, it is hard to make that same argument where the FISA warrant essentially asked for every record of the phone company. Hard to imagine a broader warrant.
Justice Kagan pointed out that it’s not illegal to be a member of a gang, and that a warrant that authorized seizure of evidence of gang membership per se called for associational records which were protected. Much like the phone logs here. Justices Sotomayor and Ginsburg went further, noting that the fundamental purpose of the Fourth Amendment’s warrant clause is “to protect against all general searches.” Go-Bart Importing Co. v. United States, 282 U. S. 344, 357 (1931).
The Fourth Amendment was adopted specifically in response to the Crown’s practice of using general warrants and writs of assistance to search “suspected places” for evidence of smuggling, libel, or other crimes. Boyd v. United States, 116 U. S. 616–626 (1886). Early patriots railed against these practices as “the worst instrument of arbitrary power” and John Adams later claimed that “the child Independence was born” from colonists’ opposition to their use. Id., at 625 (internal quotation marks omitted).
To prevent the issue of general warrants on “loose, vague or doubtful bases of fact,” Go-Bart Importing Co., 282 U. S., at 357, the Framers established the inviolable principle that should resolve this case: “no Warrants shall issue, but upon probable cause . . . and particularly describing the . . . things to be seized.” U. S. Const., Amdt. 4. That is, the police must articulate an adequate reason to search for specific items related to specific crimes. They found that the search by the police without probable cause was unreasonable even though there was both judicial and executive oversight, and that therefore there should be no immunity because the actions were not in “good faith.” The phone companies run that risk here.
† Mark Rasch is the former head of the United States Department of Justice Computer Crime Unit, where he helped develop the department’s guidelines for computer crime investigations, forensics and evidence gathering. Mr. Rasch is currently a principal with Rasch Technology and Cyberlaw and specializes in computer security and privacy.
‡ Sophia Hannah has a BS degree in Physics with a minor in Computer Science and has worked in scientific research, information technology, and as a computer programmer. She currently manages projects with Rasch Technology and Cyberlaw and researches a variety of topics in cyberlaw.
Rasch Cyberlaw (301) 547-6925 www.raschcyber.com
Wednesday, April 3, 2013
Summary by Gaspar Modelo-Howard
Why do we, as cybersecurity professionals, go to work each day? Mr. Gebhart reflected on this question to start his presentation, suggesting a very clear and concise answer: it is to protect the many things and people that are so important to our lives. Security professionals need to protect families from threats like cyberbullies and identity thieves, guard against risks to financial information, defend new business ideas and our critical infrastructure, and help protect those who protect us, such as law enforcement and first responders. This is why a multidisciplinary approach, such as the one CERIAS follows, as Mr. Gebhart pointed out, is required to come up with the ideas and solutions to achieve our goal as cybersecurity professionals.
In the early days of malware, it could have been considered a nuisance. After all, there were about 17,000 pieces of malware in 1997, and for some people antivirus software could be updated every few months. But malware has been growing at a rapid pace. McAfee stores more than 120M malware samples in its database, up from 80M in 2011. The growth is also fast in the mobile landscape. There were 2K unique pieces of mobile malware in 2011, while last year that number grew to 36K. And as the mobile market becomes more popular and we move from multiple operating systems to just two today, Google’s Android and Apple’s iOS, there will still be room for growth for malware. McAfee’s stats show that (1) Android is the most targeted operating system for malware, (2) many application stores for phones host malware, and (3) half of all iOS phones are jailbroken.
Other trends explain the ever-changing landscape of information technology and therefore security. For example, the growth in the number of devices connected to the Internet and their changing profiles. There are approximately 1B devices today, and that number should reach 50B by 2020. People think about computers and phones when asked which electronic devices are connected to the Internet. But there are many others, such as automobiles, televisions, dishwashers, and refrigerators, being connected every day, helping to put the control of our lives at our fingertips: how much energy we consume, what we eat, and how and with whom we communicate.
So today’s risks are more about the devices and the data stored on them, rather than just malware, and everybody is at risk. At the personal level, there are always reports of attacks aimed at individuals. Mr. Gebhart recounted Operation High Roller, which targeted corporate bank accounts and wealthy people by using a variant of the Zeus Trojan horse. At the business level, he talked about the incident known as Operation Aurora, discovered by McAfee Labs, where attackers were after intellectual property from 150 companies. It is also common nowadays to hear about state-sponsored cyberattacks on businesses. For example, McAfee believes it is one of the most attacked companies in the world (given its position as both a security services provider and a consumer), as it sees many frequent attacks around the world, run by well-funded, professional organizations.
One of the most concerning areas at risk is critical infrastructure, and governments around the world show growing concern about malware. The Stuxnet malware seemed to come out of a spy movie, as it was created as a stealthy, offensive tool to cause harm. The Citadel trojan is another example of how incisive and targeted malware can be, attacking individual organizations while also harvesting credentials and passwords from users. So the malware found nowadays in the wild is more targeted and automated, which explains the growing concern over highly important systems such as critical infrastructure. Additionally, the commercialization of malware keeps increasing. Hackers as a Service (HaaS) and off-the-shelf malware are all too common now, so malicious code and people’s services are openly being sold.
Mr. Gebhart suggested that new partnerships are required to deal with malware; it is no longer only a technical issue. This pointed back to his earlier comment about dealing with cybersecurity through a multidisciplinary approach. An organization’s board should be involved, and new strategies need to be created. Whereas years ago malware was a topic that would involve only a mid-level business manager, now it is a high-level management discussion topic everywhere you go. It is on everybody’s mind, with people not limiting the conversation to the technical aspects of an attack, but also talking about the impact to the business. Today, those who make the decisions for the business must be included in order to defend against malware in a timely manner and to plan for security.
Innovation is also paramount in order to successfully protect systems, and Mr. Gebhart mentioned several current initiatives. For example, companies are increasingly using cloud-based threat intelligence systems to deal with real-time and historical data in increasing quantities. McAfee’s monitoring systems receive about 56B events a month from 120M devices. Many of the events are hashed and sent to their systems in the cloud to determine whether they are malicious or not, allowing McAfee to block (if necessary) similar traffic. The response capabilities have also improved, as there now exist algorithms to classify the events, determine which ones to handle, and respond fast.
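The hash-then-lookup pattern described above can be sketched in a few lines. This is a toy illustration only: the "known bad" hashes are invented, and a real product would query a cloud reputation service rather than a local set.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Fingerprint an observed payload with a cryptographic hash."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of hashes of previously seen malicious samples.
known_bad = {sha256(b"malicious-payload-sample")}

def check_event(payload: bytes) -> str:
    """Hash an event and look it up against known-bad fingerprints.

    Sending only the hash (not the payload) is what lets endpoints
    consult a central reputation database cheaply and privately.
    """
    return "block" if sha256(payload) in known_bad else "allow"

print(check_event(b"malicious-payload-sample"))  # block
print(check_event(b"harmless-document"))         # allow
```

Because the hash is a fixed-size fingerprint, billions of events a month can be matched against the sample database without shipping the underlying files around.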
The DeepSAFE technology is another innovation example, coming from the partnership between McAfee and Intel. The jointly developed technology serves as a foundation for new hardware-assisted security products. Today’s malware detection software sits above the operating system, whereas DeepSAFE operates without such restriction and closer to the hardware, offering a different vantage point to detect, block, and remediate hidden attacks such as Stuxnet and SpyEye.
To close his presentation, Mr. Gebhart reminded the audience not to forget who we are working for, and to protect global access to information and the identities of our users. It is an exciting time to be involved in cybersecurity, with the changing landscapes of information technology and security. Computing has come a long way in the last few decades, but we still have to build trust around it so people can confidently rely on computing.
As Christopher Painter, Coordinator for Cyber Issues within the US Department of State, began his keynote address to the CERIAS Symposium audience he humorously admitted, "Today I’m flying without a net", a PowerPoint presentation net that is. This set the tone for an informal and informative discussion about the changing threat landscape in cyberspace.
In the early 1990s Christopher Painter began his federal career as an Assistant U.S. Attorney in Los Angeles, a time when most people were not that interested in cyber crime and the issues we are facing today were unimaginable. These issues weren’t at the forefront of most people’s minds, which provided Mr. Painter an opportunity to dive in and get involved at all levels of the cyber investigations happening at the time. Mr. Painter led some of the earliest and most infamous cyber crime cases, including the prosecution of Kevin Mitnick, one of the most wanted cyber criminals in the United States.
Through his work leading case and policy discussions of the Computer Crime and Intellectual Property Section of the US Department of Justice, Mr. Painter has become a leading expert in international cyber issues. Of the impressive journey he shared with the CERIAS audience, one of the most notable moments of his career came with President Obama in 2009. Reminding the audience of the campaign hacking incident that raised the awareness of cyber threats to the office of the President, Mr. Painter discussed how the shift in focus on cyber issues was starting to occur. Charged with identifying the gaps in national cyber policies, Mr. Painter led a research initiative comprising over 60 interviews engaging individuals from government, private industry, academia and civil society; the results of this study became the premise for President Obama’s landmark speech on cybersecurity in May 2009.
Over the past 5 years the conversations in cyber security have evolved dramatically. Initially these conversations were so highly technical in nature that government officials handed them to the technical community to find the solutions. Today, with cyber issues expanding beyond domestic boundaries it was quickly realized that in order for solutions to be sustainable they needed the "push" of the senior policy makers and CEOs from the private sectors. As Mr. Painter stated, "We have come a long way even though the challenges continue to mount, we need to remember we still have a long way to go."
Today, the cyber security threat landscape has changed from the days of the "lone gunman hackers" to the now organized, transnational groups. Cyber security professionals are facing mounting challenges in international laws, forensic processes and the introduction of new actors in the arena of bad guys. However, reflecting back again on President Obama’s 2009 speech on cyber security, Mr. Painter recalls the President’s reference to the “economic threat of cyber crime”; an important distinction from merely addressing cyber crime as a security threat to identifying cyber crime as an economic threat to the country.
Public awareness is changing and so are the conversations within the U.S. government. Remembering President Obama’s 2013 State of the Union address, Mr. Painter remarked, “this was to a national audience who are not cyber folks - it is another great example of how the cyber issues have transitioned to be government issues.” This landmark speech resulted in a new surge of collaboration and coordination among government agencies; "This is a big shift in how these groups are running interagency meetings as there is a new commonality and purpose to these issues."
Looking toward the future, the world will continue to grapple with the constantly changing cyber threat landscape and the equities of these issues in the physical world. These are global challenges. As a result, in partnership with the Department of Homeland Security, Mr. Painter and his team are bringing technical information and training to over 100 countries, working to help technologically advancing countries mitigate the increasing and complex cyber threats around the world. Concurrently, they are evaluating key policy issues including: 1. international security - the US has taken the lead in establishing international law through systems that build confidence in transparency; 2. cyber security due diligence - challenging the international community to continue to develop national policies, build institutions and foster the due diligence process; 3. identification of cyber crimes; 4. internet governance - through existing technical organizations and a multi-stakeholder approach; and 5. internet freedom - principles around openness and transparency online.
As the audience processed this remarkable professional journey and the changing landscape of cyber space, Mr. Painter closed his keynote address by illustrating the efforts of him and his team in working closely with other agencies within the US government, the private sector, and academia around the world. They are also actively conducting important dialogues and advancing key cyber issues with governments in Brazil, South Africa, Korea, Japan, and Germany, to name a few, bringing cyber security strategies, the changing landscape, and key policy issues to these countries.
Starting the conversation, Stephen reminded the audience that what makes biometrics such an interesting field is the unpredictability of the humans in the testing and evaluation processes. In traditional biometric testing environments, researchers work with algorithms and established metrics and methodologies. However, as biometric testing moves to operational environments, there are more uncertainties to contend with, which makes the work much harder. Considering these two important testing environments, what biometric researchers are now trying to do is to understand further how a biometric system performs in any environment and to identify what (or who) could be the possible cause of errors.
As Stephen pointed out, there have been several papers addressing how individual error impacts biometric performance and the potential causes of these errors. Some of these errors are now being traced to gaps in biometric testing, including training (e.g., "How do you train someone who is difficult to train or doesn't want to be trained?"), accessibility (e.g., "Are the performance results in an operational environment different from those collected in a lab?"), usability (e.g., "Can the system be used efficiently, effectively and consistently by a large population?"), and the complexities of human factors in biometric testing performance. This raises the question: is the error always subject-centric?
In order to fill in some of these gaps, Stephen and his graduate students are examining the traditional biometric modes and metrics to determine whether they are suitable in today's testing and evaluation environments. During the CERIAS tech talk, Stephen spotlighted the research of three of his graduate students: 1. The Concept of Stability, a thesis by Kevin O'Connor - the examination of fingerprint stability across force levels; 2. The Case of Habituation, by Jacob Hasselgren - quantitatively measuring habituation in biometric testing environments; and 3. Human Biometric Sensor Interaction, highlighting Michael Brokly's research on test administrator errors in biometrics, including the effects of operator training, the workloads of both test administrators and test operators, fatigue, and stress.
The biometrics community continues to investigate these questions in order to understand how the vast array of players in an operational data collection environment impacts performance. In his closing statements, Stephen reiterated the complexities and challenges of biometric testing and how researchers are looking deeper into the factors affecting performance beyond a simple ROC/DET curve.
During the introduction, Professor Spafford discussed Mark Weatherford's experience prior to becoming Deputy Under Secretary for Cybersecurity at DHS. He mentioned that Mr. Weatherford had been CISO of the states of Colorado and California and director of security for the electric power industry. He noted that Mr. Weatherford has won a number of awards and spent many years working in cybersecurity in the Navy.
He also mentioned that, under sequestration rules, Mr. Weatherford was not allowed to travel. Although he wished to attend in person, he could not, so he decided to record a video instead.
Mark Weatherford began his commentary with the "For Want of a Nail" rhyme because he believes it is a good illustration of how to approach the business of security. Mr. Weatherford expressed his appreciation for Professor Spafford, thanking him for how much he has helped advance the field of cybersecurity and develop some of the nation's security leaders.
Mr. Weatherford proceeded to state that "we're in a business where ninety-nine percent secure means you're still one hundred percent vulnerable." As an example, he cited a large mortgage company from 2008, no longer in business, that was concerned about the loss of its clients' information. The company decided to disable the USB ports on thousands of machines to prevent employees from copying data. They missed one machine, which an analyst used to load and sell customers' data over a two-year period.
The cybersecurity threat, DHS's role in cybersecurity, the President's Executive Order on cybersecurity, and the lack of cyber talent across the nation were the four topics that Mr. Weatherford briefly discussed.
President’s Cybersecurity Executive Order (EO):
Mr. Weatherford closed his commentary by stating, "DHS wants to be your partner in cybersecurity whether you're in the government, academia or the private sector. No one can go it alone in this business and be successful, so think of us as partners and colleagues, we really can help."