In the June 17, 2013 online interview with Edward Snowden, there was this exchange:
I simply thought I'd point out a statement of mine that first appeared in print in 1997 on page 9 of Web Security & Commerce (1st edition, O'Reilly, 1997, S. Garfinkel & G. Spafford):
Secure web servers are the equivalent of heavy armored cars. The problem is, they are being used to transfer rolls of coins and checks written in crayon by people on park benches to merchants doing business in cardboard boxes from beneath highway bridges. Further, the roads are subject to random detours, anyone with a screwdriver can control the traffic lights, and there are no police.
I originally came up with an abbreviated version of this quote during an invited presentation at SuperComputing 95 (December of 1995) in San Diego. The quote at that time was everything up to the "Further...." and was in reference to using encryption, not secure WWW servers.
A great deal of what people are surprised about now should not be a surprise -- some of us have been lecturing about elements of it for decades. I think Cassandra was a cyber security professor....
[Added 9/10: This also reminded me of a post from a couple of years ago. The more things change....]
Last post, we wrote about the NSA's secret program to obtain and then analyze the telephone metadata relating to foreign espionage and terrorism by obtaining the telephone metadata relating to everyone. In this post, we will discuss a darker, but somewhat less troubling, program called PRISM. As described in the public media via leaked PowerPoint slides, PRISM and its progeny constitute a program to permit the NSA, with the approval of the super-secret Foreign Intelligence Surveillance Court (FISC), to obtain “direct access” to the servers of internet companies (e.g., AOL, Google, Microsoft, Skype, and Dropbox) to search for information related to foreign terrorism – or more accurately, terrorism and espionage by “non-US persons.”
Whether you believe that PRISM is a wonderful program narrowly designed to protect Americans from terrorist attacks or a massive government conspiracy to gather intimate information to thwart Americans' political views, or even a conspiracy to run a false-flag operation to start a space war against alien invaders, what the program actually is, and how it is regulated, depends on how the program operates. When Sir Isaac Newton published his work Opticks in 1704, he described how a prism could be used to – well, shed some light on the nature of electromagnetic radiation. Whether you believe that the Booz Allen leaker was a hero, or whether you believe that he should be given the full Theon Greyjoy for treason, there is little doubt that he has sparked a necessary conversation about the nature of privacy and data mining. President Obama is right when he says that, to achieve the proper balance, we need to have a conversation. To have a conversation, we have to have some knowledge of the programs we are discussing.
Unlike the telephony metadata, the PRISM programs involve a different character of information, obtained in a potentially different manner. As reported, the PRISM programs involve not only metadata (header, source, location, destination, etc.) but also content information (e-mails, chats, messages, stored files, photographs, videos, audio recordings, and even interception of voice and video Skype calls).
Courts (including the FISA Court) treat content information differently from “header” information. For example, when the government investigated the ricin-laced letters sent to President Obama and NYC Mayor Michael Bloomberg, they reportedly used the U.S. Postal Service's Mail Isolation Control and Tracking (MICT) system, which photographs the outside of every letter or parcel sent through the mails – metadata. When Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which among other things established procedures for law enforcement agencies to get access to both “traffic” (non-content) and content information, the FBI took the position that it could, without a wiretap order, engage in what it called “post-cut-through dialed digit extraction” -- that is, when you call your bank and it prompts you to enter your bank account number and password, the FBI wanted to “extract” that information as “traffic,” not “content.” So the lines between “content” and “non-content” may be blurry. Moreover, with enough context, we can infer content. As Justice Sotomayor observed in the 2012 GPS privacy case:
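A toy sketch may make the FBI's claimed distinction concrete. In reality the split between routing digits and later keypresses is temporal (digits pressed after the call "cuts through" to the called party), not a marker in a string; the "*" separator and all the digits below are invented for illustration:

```python
# Hypothetical illustration of the "post-cut-through dialed digit" problem.
# The "*" stands in for the moment the call connects; real systems
# distinguish the two phases by timing, not by a character.
dialed = "18005551234*4417890123"

# Digits before cut-through route the call: classic non-content "traffic".
# Digits after cut-through may be an account number or PIN: content.
routing, _, post_cut_through = dialed.partition("*")

print(routing)           # what the network needs to place the call
print(post_cut_through)  # what a bank's phone menu reads as your account info
```

The same keypad produces both kinds of data, which is exactly why the "traffic vs. content" line blurs.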
… it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. E.g., Smith, 442 U.S., at 742, 99 S.Ct. 2577; United States v. Miller, 425 U.S. 435, 443, 96 S.Ct. 1619, 48 L.Ed.2d 71 (1976). This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.
But the PRISM program is clearly designed to focus on content. Thus, the Supreme Court's holding in Smith v. Maryland that people have no expectation of privacy in the numbers called, etc., does not apply to the PRISM-type information. Right?
Again, not so fast.
Simple question. Do you have a reasonable expectation of privacy in the contents of your e-mail?
Short answer: Yes.
Longer answer: No.
Better answer: Vis-à-vis whom, and for what purposes? You see, privacy is not black and white. It is multispectral – you know, like light through a triangular piece of glass.
When the government was conducting a criminal investigation of the manufacturer of Enzyte (smiling Bob and his gigantic – um – putter), they subpoenaed his e-mails from, among others, Yahoo! The key word here is subpoena – not search warrant. Now that's the thing about data and databases -- if information exists, it can be subpoenaed. In fact, a Florida man has now demanded production of cell location data from – you guessed it – the NSA.
But content information is different from other information. And cloud information is different. The telephone records are the records of the phone company about how you used their service. The contents of emails and documents stored in the cloud are your records, of which the provider has incidental custody. It would be like the government subpoenaing your landlord for the contents of your apartment (they could, of course, subpoena you for this, but then you would know), or subpoenaing the U-stor-it for the contents of your storage locker (sparking a real storage war). They could, with probable cause and a warrant, search the locker (if you have a warrant, I guess you're gonna come in), but a subpoena to a third party is dicey.
So the Enzyte guy had his records subpoenaed. This was done pursuant to the Stored Communications Act, which permits it. The government argued that they didn't need a search warrant to read Enzyte guy's email, because – you guessed it – he had no expectation of privacy in the contents of his mail. Hell, he stored it unencrypted with a third party. Remember Smith v. Maryland? The phone company case? You trust a third party with your records, you risk exposure. Or as Senator Blutarsky (I. NH?) might opine, “you ()*^#)( up, you trusted us…” (actually Otter said that, with apologies to Animal House fans).
Besides, cloud provider contracts, and email and internet provider privacy policies, frequently limit the privacy rights of users. In the Enzyte case, the government argued that terms of service that permitted scanning of the contents of email for viruses or spam (or in the case of Gmail and others, embedding context-based ads) meant that the user of the email service “consented” to have his or her mail read, and therefore had no privacy rights in the content. (“Yahoo! reserves the right in their sole discretion to pre-screen, refuse, or move any Content that is available via the Service.”) Terms of service which provided that the ISP would respond to lawful subpoenas made the provider a “joint custodian” of your email and other records (like your roommate) who could consent to the production of your communications or files. Those policies your employer has that say, “employees have no expectation of privacy in their emails or files”? While you thought that meant that your boss (and the IT guy) can read your emails, the FBI or NSA may take the position that “no expectation of privacy” means exactly that.
Fortunately, most courts don’t go so far. In general, courts have held that the contents of communications and information stored privately online (not on publicly accessible Facebook or Twitter feeds) are entitled to legal protection even if they are in the hands of potentially untrustworthy third parties. But this is by no means assured.
But clearly the data in the PRISM case is more sensitive and entitled to a greater level of legal protection than that in the telephony metadata case. That doesn't mean that the government, with a court order, can't search or obtain it. It means that companies like Google and Facebook probably can't just “give it” to the government. It's not their data.
So the NSA wants to have access to information in a massive database. They may want to read the contents of an email, a file stored on Dropbox, whatever. They may want to track a credit card through the credit card clearing process, or a banking transaction through the interbank funds transfer network. They may want to track travel records – planes, trains or automobiles. All of this information is contained in massive databases or storage facilities held by third parties – usually commercial entities. Banks. VISA/MasterCard. Airlines. Google.
The information can be tremendously useful. The NSA may have lawful authority (a court order) to obtain it. But there is a practical problem. How does the NSA quickly and efficiently seek and obtain this information from a variety of sources without tipping those sources off about the individual searches it is conducting – information which itself is classified? That appears to be the problem the PRISM programs attempt to solve.
In the telephony program, the NSA “solved” the problem by simply taking custody of the database.
In PRISM, they apparently did not. And that is a good thing. The databases remain in the custody of those who created them.
Here's where it gets dicey – factually.
The reports about PRISM indicate that the NSA had “direct access” to the servers of all of these Internet companies. Reports have been circulating that the NSA had similar “direct access” to financial and credit card databases as well. The Internet companies have all issued emphatic denials. So what gives?
Speculation time. First, the NSA and Internet companies could be outright lying. David Drummond, Google's Chief Legal Officer, ain't going to jail for this. Second, they could be reinterpreting the term “direct” access. When General Alexander testified under oath that the NSA did not “collect any type of data on millions of Americans,” he took the term “collect” to mean “read” rather than “obtain.”
Most likely, however, is that the NSA PRISM program is a protocol for the NSA, with FISC approval, to task the computers at these Internet companies to perform a search. This tasking is most likely indirect. How it works is, at this point, rank speculation. What is likely is that an NSA analyst, say in Honolulu, wants to get the communications (postings, YouTube videos, stored communications, whatever) of Abu Nazir, a non-US person, which are stored on a server in the U.S., or stored on a server in the Cloud operated by a US company. The analyst gets “approval” for the “search,” by which I mean that a flock of lawyers from the NSA, FBI and DOJ descend (what is the plural of lawyers? [ a "plague"? --spaf] ) and review the request to ensure that it asks for info about a non-US person, that it meets the other FISA requirements, that there is minimization, etc. Then the request is transmitted to the FISC for a warrant. Maybe. Or maybe the FISC has approved the searches in bulk (raising the Writ of Assistance issue we described in the previous post). We don't know. But assuming that the FISC approves the “search,” the request has to be transmitted to, say, Google, for their lawyers to review, and then the data transmitted back to the NSA. To the analyst in Honolulu, it may look like “direct access.” I type in a search, and voilà! Results show up on the screen. It is this process that appears to be within the purview of PRISM. It may be a protocol for effectuating court-approved access to information in a database, not direct access to the database.
Or maybe not. Maybe it is a direct pipe into the servers, which the NSA can task, and for which the NSA can simply suck out the entire database and perform their own data analytics. Doubtful, but who knows? That's the problem with rank speculation. Aliens, anyone?
But we are basing this analysis on what we believe is reasonable to assume.
So, is it legal? Situation murky. Ask again later.
If the FISC approves the search, with a warrant, within the scope of the NSA's authority, on a non-US person, with minimization, then it is legal in the U.S., while probably violating the hell out of most EU and other data privacy laws. But that is the nature of the FISA law and the USA PATRIOT Act which amended it. Like the PowerPoint slides said, most internet traffic travels through the U.S., which means we have the ability (and under USA PATRIOT, the authority) to search it.
While the PRISM programs are targeted at much more sensitive content information, if conducted as described above, they actually present fewer domestic legal issues than the telephony metadata case. If they are a dragnet, or if the NSA is actually conducting data mining on these databases to identify potential targets, then there is a bigger issue.
The government has indicated that they may release an unclassified version of at least one FISC opinion related to this subject. That's a good thing. Other redacted legal opinions should also be released so we can have the debate President Obama has called for. And let some light pass through this PRISM.
† Mark Rasch, is the former head of the United States Department of Justice Computer Crime Unit, where he helped develop the department’s guidelines for computer crimes related to investigations, forensics and evidence gathering. Mr. Rasch is currently a principal with Rasch Technology and Cyberlaw and specializes in computer security and privacy.
‡ Sophia Hannah has a BS degree in Physics with a minor in Computer Science and has worked in scientific research, information technology, and as a computer programmer. She currently manages projects with Rasch Technology and Cyberlaw and researches a variety of topics in cyberlaw.
Rasch Cyberlaw (301) 547-6925 www.raschcyber.com
The NSA programs to retrieve and analyze telephone metadata and internet communications and files (the former we will call the telephony program, the latter codenamed PRISM) are at one and the same time narrow and potentially reasonably designed programs aimed at obtaining potentially useful information within the scope of the authority granted by Congress. They are, at one and the same time, perfectly legal and grossly unconstitutional. It’s not that we are of two opinions about these programs. It is that the character of these programs is such that they have both characteristics at the same time. Like Schrödinger’s cat, they are both alive and dead at the same time – and a further examination destroys the experiment. Let’s look at the telephony program first.
Telephone companies, in addition to providing services, collect a host of information about the customer including their name, address, billing and payment information (including payment method, payment history, etc.). When the telephone service is used, the phone company collects records of when, where and how it was used – calls made (or attempted), received, telephone numbers, duration of calls, time of day of calls, location of the phones from which the calls were made, and other information you might find on your telephone bill. In addition, the phone company may collect certain technical information – for example, if you use a cell phone, the location of the cell from which the call was made, and the signal strength to that cell tower or others. From this signal strength, the phone company can tell reasonably precisely where the caller is physically located (whether they are using the phone or not) even if the phone does not have GPS. In fact, that is one of the ways that the Enhanced 911 service can locate callers. The phone company creates these records for its own business purposes. It used to collect this primarily for billing, but with unlimited landline calling, that need has diminished. However, the phone companies still collect this data to do network engineering, load balancing and other purposes. They have data retention and destruction policies which may keep the data for as short as a few days, or as long as several years, depending on the data. Similar “metadata” or non-content information is collected about other uses of the telephone networks, including SMS message headers and routing information. Continuing with the Schrödinger analogy, the law says that this is private and personal information, which the consumer does not own and for which the consumer has no expectation of privacy. Is that clear?
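As a rough sketch, each call generates something like the record below. The class, field names, and values here are hypothetical; actual carrier schemas vary and are far richer:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical CPNI-style call detail record. Real carrier schemas
# differ; every field name and value below is invented for illustration.
@dataclass
class CallDetailRecord:
    caller: str        # originating number
    callee: str        # number dialed (or attempted)
    start: datetime    # when the call began
    duration_s: int    # call length in seconds
    cell_id: str       # tower that served the handset
    signal_dbm: int    # signal strength, usable for location estimates

rec = CallDetailRecord("202-555-0143", "212-555-0187",
                       datetime(2013, 6, 5, 14, 30), 312,
                       "DC-0412", -71)
print(rec.cell_id)  # location-revealing even if the phone's GPS is off
```

Note that no field contains what was said: it is all "metadata," yet the tower and signal fields alone can place the caller on a map.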
Federal law calls this telephone metadata “Consumer Proprietary Network Information” or CPNI. 47 U.S.C. 222 (c)(1) provides that:
Except as required by law or with the approval of the customer, a telecommunications carrier that receives or obtains customer proprietary network information by virtue of its provision of a telecommunications service shall only use, disclose, or permit access to individually identifiable customer proprietary network information in its provision of (A) the telecommunications service from which such information is derived, or (B) services necessary to, or used in, the provision of such telecommunications service, including the publishing of directories.
Surprisingly, the exceptions to this prohibition do not include a specific “law enforcement” or “authorized intelligence activity” exception. Thus, if the disclosure of consumer CPNI to the NSA under the telephony program is “required by law” then the phone company can do it. If not, it can’t.
But wait, there’s more. At the same time that the law says that consumer’s telephone metadata is private, it also says that consumers have no expectation of privacy in that data. In a landmark 1979 decision, the United States Supreme Court held that the government could use a simple subpoena (rather than a search warrant) to obtain the telephone billing records of a consumer. See, these aren’t the consumer’s records. They are the phone company’s records. The Court noted, “we doubt that people in general entertain any actual expectation of privacy in the numbers they dial. All telephone users realize that they must "convey" phone numbers to the telephone company, since it is through telephone company switching equipment that their calls are completed. All subscribers realize, moreover, that the phone company has facilities for making permanent records of the numbers they dial, for they see a list of their long-distance (toll) calls on their monthly bills.” The court went on, “even if petitioner did harbor some subjective expectation that the phone numbers he dialed would remain private, this expectation is not "one that society is prepared to recognize as `reasonable.'”
By trusting the phone company with the records of the call, consumers “assume the risk” that the third party will disclose it. The Court explained, “petitioner voluntarily conveyed to it information that it had facilities for recording and that it was free to record. In these circumstances, petitioner assumed the risk that the information would be divulged to police.” This dichotomy is not surprising. The Supreme Court held that, as a matter of Constitutional law, any time you trust a third party, you run the risk that the information will be divulged. Prosecutors and litigants subpoena third party information all the time – your phone bills, your medical records, credit card receipts, bank records, surveillance camera data, and records from your mechanic – just about anything. These are not your records, so you can’t complain. At the same time, Congress was concerned with phone companies’ use of CPNI for marketing purposes without consumer consent, so it imposed statutory restrictions on the disclosure or use of CPNI unless “required by law.”
There is little doubt that telephony metadata can be useful in foreign intelligence and terrorism cases. Hell, it can be useful in any criminal investigation, or for that matter, a civil or administrative case. But if the CIA obtains the phone records of, say, Abu Nazir (for Homeland fans), and spots a phone number he has called, they, through the NSA, want to be able to find out information about that phone call, and who that person called. The NSA wants this data for precisely the same reason that it is legally protected – phone metadata reveals patterns which can show relationships between people, and help determine who is associated with whom and for what purpose. Metadata and link analysis can help distinguish between a call to mom, a call to a colleague, and a call to a terrorist cell. Context can reveal content – or at least create a strong inference of content. So, in appropriate cases involving terrorism, national security or intelligence involving non-US persons, the NSA should have this data. And indeed, they always have. None of that is new.
If the NSA captured a phone number, say 867-5309, they could demand the records relating to that call from the phone company through an order issued by a special super-secret court called FISC. The order could say “give the NSA all the records of phone usage of 867-5309 as well as the records of the numbers that they called.” Problem is, that is unwieldy, time consuming, requires a new court order with each query, and in many ways overproduces records. Remember, not only are these terrorism and national security investigations, but the target is a non-US person, usually (but not always) located outside the United States.
Which brings us to the Fourth Amendment:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Read that carefully. You would think that it requires a warrant to search, right? Wrong. Actually, Courts interpret the comma after the word “violated” as a semi-colon (who says grammar doesn’t matter?) “The people” which includes but is not limited to U.S. citizens, have a right to be secure against unreasonable searches and seizures (more on the “and” in a minute). Also, warrants have to be issued by neutral magistrates and must specify what is to be seized. So no warrant is needed if the search is “reasonable.” In fact, the vast majority of “searches and seizures” in America are conducted without a warrant. People are searched at airports and borders. No warrant. They are patted down on the streets and in their cars. No warrant. Cops look into their car windows, follow them around, and capture video of them without a warrant. Police airplanes, helicopters (and soon drones) capture images of people in their back yards or porches. No warrant. Dogs can sniff for drugs, bombs or contraband. No warrant. And people give consent to search without a warrant all the time. When the police searched the boat for the fugitive Boston bomber, they needed no warrant because of exigent circumstances (and perhaps because the boat’s owner consented). Warrantless searches can be “reasonable” and can pass constitutional muster. That’s one reason Congress created the FISC.
For law enforcement purposes (to catch criminals) the government can get a grand jury subpoena, a search warrant, a “trap and trace” order, a “pen register” order, a Title III wiretap order, or other orders if they can show (depending on the information sought) probable cause or some relevance to the criminal investigation. But for intelligence gathering purposes, the NSA can’t really show “probable cause” to believe that there’s a crime, because often there is not. It’s intelligence gathering. So the Foreign Intelligence Surveillance Act (FISA) created a special secret court to allow the intelligence community to do what the law enforcement community could already do – get information under a court order, but instead of showing that a crime was committed, they had to show that the information related to foreign intelligence.
After September 11, 2001, Congress added terrorism as well. When Congress amended FISA, it allowed the FISA court (FISC) to authorize orders for the production of “books, records or other documents.” Section 215 of the USA PATRIOT Act allowed the FBI to apply for an order to produce materials that assist in an investigation undertaken to protect against international terrorism or clandestine intelligence activities. The act specifically gives an example to clarify what it means by "tangible things": it includes "books, records, papers, documents, and other items." Telephone metadata fits within this description, including the metadata sought by the NSA telephony program (as we know it).
So the NSA has the authority to seek and obtain (through the FBI and FISC) telephone metadata. It also has a legitimate need to do so. But that’s not exactly what they did here. Instead of getting the records they needed, the NSA decided that it would get all the records of all calls made or received (non-content information) about everyone, at least from Verizon, and most likely from all providers. The demand was updated daily, so every call record was dumped by the phone companies into a massive database operated by the NSA.
Now this is bad. And good. The good part is that, by collecting metadata from all of the phone companies, the NSA could “normalize” and cross-reference the data. A single authorized search of the database could find records from Verizon, AT&T, Sprint, T-Mobile, and possibly Orange, British Telecom, who knows? Rather than having to have the FISC issue an order to Verizon for a phone record, and then after that is examined, another order to AT&T, by having the data all in one place, “pingable” by the NSA, a single query can find all of the records related to that query.
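The mechanics of "normalizing" can be sketched in a few lines. Everything here is invented for illustration: the record formats, the numbers, and the idea that two carriers' exports differ in shape are all assumptions, but they show why one pooled, normalized store answers in a single query what would otherwise take sequential orders to each carrier:

```python
# Hypothetical sketch: pooling normalized call records from two carriers.
# Formats and numbers are invented; real carrier exports differ.
verizon_records = [{"from": "867-5309", "to": "555-1000"}]
att_records = [("555-1000", "555-2000")]  # a different native format

def normalize(source, records):
    """Map each carrier's native format onto one common (from, to) schema."""
    if source == "verizon":
        return [(r["from"], r["to"]) for r in records]
    if source == "att":
        return list(records)
    raise ValueError(f"unknown source: {source}")

pooled = normalize("verizon", verizon_records) + normalize("att", att_records)

def calls_involving(number):
    """One query now spans every carrier's data at once."""
    return [call for call in pooled if number in call]

print(calls_involving("555-1000"))  # hits from both carriers in one pass
```

The query for "555-1000" returns records that originated at two different companies, without either company learning the query was ever run.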
So if the FISC authorizes a search for Abu Nazir’s phone records, this process allows the NSA to actually get them. Also, the NSA doesn’t have to provide a court order (which itself would reveal classified information about who they were looking at) to some functionary at Verizon or AT&T (even if that functionary had a security clearance). And Verizon’s database would not have a record of what FISC authorized searches the NSA conducted – information which itself is highly classified.
Just because the NSA had all of the records does not mean that it looked at them all. In fact, the NSA and FBI established a protocol, apparently approved by the FISC, that restricted how and when they could ping this massive database. So the mere physical transfer of the metadata database from the phone companies to the NSA doesn’t impinge privacy unless and until the NSA makes a query, and these queries are all authorized by the FISC and are lawful. So what’s the big deal? It’s all good, man.
Not so fast Mr. Schrödinger. There are two huge legal problems with this program. Undoubtedly, the USA PATRIOT Act authorizes the FISC to order production of “tangible things” and these records are “tangible things.” But the law does not authorize what are called “general warrants.” A general warrant is a warrant that either fails to specify the items to be searched for or seized, fails to do so with particularity, or is so broad or vague as to permit the person seizing the items almost unfettered discretion in what to take. A warrant which permitted seizure of “all evidence of crimes” or “all evidence of gang activity” would be an unconstitutional general warrant.
It’s important to note that the warrant is “legal” in the sense that it was for information relevant to a crime (or, say, terrorism), that the obtaining of the warrant was authorized by law, that a court issued the warrant, and that the proper procedures were followed. But the warrant is unconstitutional and so is the search and seizure. This is particularly true where the warrant seeks information that relates to First Amendment protected activities like what books we are reading, and with whom we are associating. So when Texas authorized the search and seizure of records relating to “communist activities” (the ism before terrorism) and cops got a warrant to take such books and records, the Supreme Court had no problem finding that the warrant was an unconstitutional “general warrant.”
Even though the FISC warrant to Verizon specified exactly what was to be seized (“everything”) it was undoubtedly a general warrant. Remember, the Fourth Amendment prohibits unreasonable “searches” and “seizures.” A warrant authorizing seizure of all records of millions of people who did nothing wrong, particularly when it is designed to figure out their associations is about as general as you can get. And that is assuming that the searches, or pinging to the database, which happen later, are reasonable.
What’s more, by taking custody of all of these records, the NSA abrogates the document retention and destruction policies of all of the phone companies. We can assume that the NSA keeps these records indefinitely. So long after Verizon decides it doesn’t need to know what cell tower you pinged on July 4, 2005 at 6:15.22 PM EST, the NSA will retain this record. That’s a problem for the NSA because now, instead of subpoenaing Verizon for these records (especially in a criminal case where the defendant has a constitutional right to the records if relevant to a defense), the NSA (or FBI who obtained the records for the NSA) can expect to get a subpoena for the records. While the NSA and FBI would undoubtedly claim that the program is classified, clearly my own phone records are not classified. A federal law called the Classified Information Procedures Act provides a mechanism to obtain unclassified versions of classified data. So if you were charged with a crime by the FBI, and the same FBI had records (in this database) that indicated that you did not commit the crime, they would have to search the database and produce the records. And when Verizon tells you that the records are gone, well… it ain't true anymore.
Even if the “seizure” is pursuant to a general warrant, the government would argue that it is “reasonable” because it is necessary to effectuate the NSA’s function of protecting national security, and its impact on privacy is minimal because the database isn’t “pinged” without court approval. The “collection” of data about tens of millions of Americans doesn’t affect their privacy, especially when the Supreme Court said that they have no privacy rights in this data, and it doesn’t even belong to them. (Even though the Director of National Intelligence testified in March that the NSA did not “collect” any data on millions of Americans.) Besides, the NSA would argue, there is no other way for the government to do this.
What does the NSA do with the records? Here’s where there is an unknown. At present, we do not know what the NSA does with the telephone metadata database. Do they simply query it – e.g., give me all the records of calls made by Abu Nazir – or do they perform data mining, link analysis, and pattern analysis on the database in order to identify potential Abu Nazirs? If the latter, then the NSA is clearly searching the records of millions of Americans. If the former, it is still troubling, for a few reasons.
First, the NSA’s authority revolves around non-US persons. While there may be “inadvertent” collection on U.S. persons, the target of the surveillance must be a non-US person for the program to be legal. According to the leaked documents, the NSA took a very liberal interpretation of what this means. First, it determined that as long as there was a 51% chance that the target was a non-US person, the NSA was entitled to obtain records. Second, it may – and we stress “may” – have interpreted its authority as providing that, if the target of the investigation was foreign (again, a 51% chance), it could obtain records of calls between two US persons wholly within the US. Finally, it apparently deployed a “two degrees of separation” test. If Abu Nazir (51% foreign) called John Smith’s telephone number, the NSA could look at who Smith (100% US) called within the US (first degree of separation). If Smith called Jones, the NSA could then look at Jones’ call records (second degree of separation). At this point, even if the pinging of the database is authorized by the FISC, we are a long way from Abu Nazir. Toto, I’m afraid we are in Kansas.
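To make the fan-out of that “two degrees” test concrete, here is a small illustrative sketch in Python. The names, the call log, and the hop logic are entirely hypothetical – nothing here reflects any actual NSA system. It simply shows how a two-hop query over call records sweeps in people who never spoke to the original target.

```python
# Hypothetical sketch of "two degrees of separation" hop analysis.
# All names and records are invented for illustration only.
from collections import defaultdict

call_log = [  # (caller, callee) pairs -- hypothetical data
    ("AbuNazir", "Smith"),
    ("Smith", "Jones"),
    ("Smith", "Baker"),
    ("Jones", "Carter"),   # a third hop -- beyond the claimed authority
]

# Build an undirected contact graph: a call links both parties.
contacts = defaultdict(set)
for caller, callee in call_log:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def hops(seed, max_hops=2):
    """Return everyone reachable from the seed within max_hops calls."""
    frontier, seen = {seed}, {seed}
    for _ in range(max_hops):
        frontier = {c for person in frontier for c in contacts[person]} - seen
        seen |= frontier
    return seen - {seed}

print(sorted(hops("AbuNazir")))  # Smith (1 hop); Jones and Baker (2 hops)
```

Note the fan-out: Jones and Baker never called Abu Nazir, yet both fall inside the two-hop net, while only the hop limit keeps Carter out. With realistic call volumes (hundreds of contacts per person), two hops from a single seed can reach tens of thousands of people.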
OK, but what’s the big deal? The seizure of the database is authorized by the FISC, under a statute approved by Congress, with Congressional knowledge and oversight (maybe), and under strict control by the NSA, the FBI, and DOJ. Every search of the database is approved by the super-secret court, right? Not so fast, Kemo Sabe. It is highly unlikely that the FISC approves every database search. More likely, the FBI and NSA have established protocols and procedures designed to ensure that the searches are within their jurisdiction, that the searches are designed to find information about terrorism and foreign intelligence, that the targets are (51%) foreign, and that there is a minimization procedure. These protocols – rather than the individual searches themselves – are what the FISC approves. The NSA then most likely reports back to the FISC (through the DOJ) about whether there was an “inadvertent disclosure” of information not related to these objectives. So the court most likely does not approve every search.
And that’s another problem. You see, each “search” of the database is – well – a search. That search must be supported by probable cause (in a criminal case, to believe that there is evidence of a crime; in a FISA case, of espionage, foreign intelligence, or terrorism) and must be approved by a court. Each search. Not the process. We have been down this road before. In fact, this is precisely what led to the American Revolution in general and the Fourth Amendment in particular.
When the British Parliament enacted the Navigation Acts imposing tariffs on goods imported into America, many colonists refused to pay them (as Boston lawyer James Otis noted, “taxation without representation is tyranny”). So Parliament authorized King George II to issue what are called “writs of assistance.” Such a writ, issued by a court, authorized the executive branch (a customhouse officer, with the assistance of the sheriff) to search colonists’ houses for unlawfully smuggled items. These writs did not specify what the sheriff could search for or seize, or where he could look. Like the NSA program, the court approved what could be done; the executive had discretion in how to do it. When George II was succeeded by George III (the writs expiring with the death of the King), Parliament reauthorized them under the hated Townshend Acts. James Otis urged resistance, and it was the use of these unspecific writs authorizing searches that galvanized public opinion (and that of John Adams in particular) toward revolution. It is why the Fourth Amendment demands that a search warrant specify, based on probable cause, the particular place to be searched and the items to be seized. It is also why general warrants like the writs of assistance are effectively prohibited by the Constitution.
The FISC-approved NSA searches would be like a judge in Los Angeles issuing a search warrant to the LAPD that said, “you may search any house as long as you smell marijuana in that house.” Each individual search might be reasonable (indeed, had the LAPD applied for a warrant to search a particular house after smelling marijuana there, a court probably would have issued it), but the broad blanket approval of these searches is more akin to a writ of assistance.
So the NSA digital telephony program, while legal in the sense that it was approved by both Congress and the Foreign Intelligence Surveillance Court, has some serious Constitutional problems.
The phone companies could be on the hook for participating in the program, even though they both have immunity and had no choice but to participate. In fact, they could not legally have even disclosed the program’s existence. In the FISA amendments, Congress expressly gave the phone companies immunity for making “good faith” disclosures of information pursuant to Section 215.
So why would the phone companies be in trouble? The problem is the “good faith” part. In 2012, in Messerschmidt v. Millender, the Supreme Court looked at the question of when someone (cops, in that case) should have immunity for a good-faith search pursuant to an unconstitutional warrant. The cops got a warrant for all records of “gang related activity” and all guns in a particular house. The lower courts agreed that the warrant was overbroad, unconstitutional, and should not have been issued. The question was whether the cops who executed the warrant should have immunity from civil liability because they acted in “good faith.”
The Supreme Court noted that the fact that they got a warrant at all was one indication that they acted in good faith, but that “the fact that a neutral magistrate has issued a warrant authorizing the allegedly unconstitutional search or seizure does not end the inquiry into objective reasonableness. Rather, we have recognized an exception allowing suit when ‘it is obvious that no reasonably competent officer would have concluded that a warrant should issue.’” In other words, the cops are generally permitted to rely on the fact that a court issued a search warrant, unless the warrant itself (or the means by which it was procured) is so obviously unconstitutional, overbroad, general, or otherwise prohibited that you cannot, in good faith, rely on it. While the court found that the cops had immunity because the warrant was not so overbroad as to lead to the inevitable conclusion that it was unconstitutional, it is hard to make that same argument where the FISA warrant essentially asked for every record the phone company has. It is hard to imagine a broader warrant.
Justice Kagan pointed out that it is not illegal to be a member of a gang, and that a warrant authorizing seizure of evidence of gang membership per se called for associational records, which are protected. Much like the phone logs here. Justices Sotomayor and Ginsburg went further, noting that “[t]he fundamental purpose of the Fourth Amendment’s warrant clause is ‘to protect against all general searches.’” Go-Bart Importing Co. v. United States, 282 U. S. 344, 357 (1931).
The Fourth Amendment was adopted specifically in response to the Crown’s practice of using general warrants and writs of assistance to search “suspected places” for evidence of smuggling, libel, or other crimes. Boyd v. United States, 116 U. S. 616–626 (1886). Early patriots railed against these practices as “the worst instrument of arbitrary power” and John Adams later claimed that “the child Independence was born” from colonists’ opposition to their use. Id., at 625 (internal quotation marks omitted).
To prevent the issuance of general warrants on “loose, vague or doubtful bases of fact,” Go-Bart Importing Co., 282 U. S., at 357, the Framers established the inviolable principle that should resolve this case: “no Warrants shall issue, but upon probable cause . . . and particularly describing the . . . things to be seized.” U. S. Const., Amdt. 4. That is, the police must articulate an adequate reason to search for specific items related to specific crimes. The dissenting justices would have found that the search by the police without probable cause was unreasonable even though there was both judicial and executive oversight, and therefore that there should be no immunity because the actions were not taken in “good faith.” The phone companies run that risk here.
† Mark Rasch, is the former head of the United States Department of Justice Computer Crime Unit, where he helped develop the department’s guidelines for computer crimes related to investigations, forensics and evidence gathering. Mr. Rasch is currently a principal with Rasch Technology and Cyberlaw and specializes in computer security and privacy.
‡ Sophia Hannah has a BS degree in Physics with a minor in Computer Science and has worked in scientific research, information technology, and as a computer programmer. She currently manages projects with Rasch Technology and Cyberlaw and researches a variety of topics in cyberlaw.
Rasch Cyberlaw (301) 547-6925 www.raschcyber.com
Let's start with some short mental exercises. Limber up your cerebellum. Stretch out and touch your cognitive centers a few times. Ready?
There's another barn on fire! Quick, get a bucket brigade going -- we need to put the fire out before everything burns. Again. It is getting so tiring watching all our stuff burn while we're trying to run a farm here. Too bad we can only afford the barns constructed of fatwood. But no time to think of that -- a barn's burning again! 3rd time this week!
Hey, you people over there tinkering with designs for sprinkler systems and concrete barns -- cut it out! We can't spare you to do that -- too many barns are burning! And you, stop babbling about investigating and arresting arsonists -- we don't have time or money for that: didn't you hear me? Another barn is burning!
Now, hurry up. We're going to have a contest to find who can pass this pail of water the quickest. Yes, it is a small, leaky pail, but we have a lot of them, so that is what we're going to use in the contest. The winners get to be closest to the flames and have a name tag that says "fire prevention specialist." No, we can't afford larger buckets. And no, you can't go get a hose -- we need you in the line. Damnit! The barn's burning!
Sounds really stupid, doesn't it? Whoever is in charge isn't doing anything to address the underlying problem of poor barn construction. It doesn't really match the notion of what a fire prevention specialist might really do. And it certainly doesn't provide deep career preparation for any of those contestants... it may even condemn them to a future of menial bucket passing because we're putting them on the line with no training or qualification beyond being able to pass a bucket.
Let's try another one.
Imagine that every car and truck in the country has been poorly designed. They almost all leak coolant and burn oil. They're trivial to steal. They are mostly cheap junkers, all built on the same frame with the same engines, accessories, and tires -- even the ones sold to the police and military (actually, they're the same cars, but with different paint). The big automakers are rolling out new models every year that they advertise as being more efficient and reliable, but that is simply hype to get you to buy a new car because the new features also regularly break down. There are a few good models available, but they are quite a bit more expensive; those more expensive ones often (but not always) break down less, are more difficult to steal, and get far better mileage. Their vendors also don't have a yearly model update, and many consumers aren't interested in them because those cars don't take the common size of tire or fuzzy dice for the mirror.
The auto companies have been building this way for decades. They sell their products around the world, and they're a major economic force. Everyone needs a car, and they shell out money for new ones on a regular basis. People grumble about the poor quality and the breakdowns, but other than periodic service bulletins, there are few changes from year to year. Many older, more decrepit cars are on the road because too many people (and companies) cannot afford to buy new ones that they know aren't much better than the old ones. Many people argue -- vociferously -- against any attempt to put safety regulations on the car companies because it might hurt such an important market segment.
A huge commercial enterprise has sprung up around fixing cars and adding on replacement parts that are supposedly more reliable. People pour huge amounts of money into this market because they depend on the cars for work, play, safety, shopping, and many other things. However, there are so many cars, and so many update bulletins and add-ons, there simply aren't enough trained mechanics to keep up -- especially because many of the add-ons don't work, or require continual adjustment.
What to do? Aha! We'll encourage young people in high school and maybe college to become "automotive specialists." We'll publish all sorts of articles with doom and gloom as a result of the shortage of people going into auto repair. We especially need lots more military mechanics.
So...we'll have competitions! We'll offer prizes to the individuals (or teams) that are able to change the oil of last year's model the most quickly, or who can most efficiently hotwire a pickup truck, take it to the garage, change the tires, and return it. The government will support these competitions. They'll get lots of press. Some major professional organizations and even universities will promote these. Of course we'll hire lots of mechanics that way! (Women aren't interested in these kinds of competition? We won't worry about that now. People who are poor with wrenches won't compete? No problem -- we'll fill in with the rest.)
Meanwhile, the government and major companies aren't really doing anything to fix the actual engineering of the automobiles. There are a few comprehensive engineering programs at universities around the country, but minimal focus and resources are applied there, and little is said about applying their knowledge to really fixing transportation. The government, especially the military, simply wants more mechanics and cheaper cars -- overall safety and reliability aren't a major concern.
Pretty stupid, huh? But there does seem to be a trend to these exercises.
Let's try one more.
We have a large population that needs to be fed. They've grown accustomed to cheap, fast-food. Everyone eats at the drive-thru, where they get a burger or compressed chicken by-product or mystery-meat taco. It's filling, and it keeps them going for the day. It also leads to obesity, hypertension, cardiac problems, diabetes, and more. However, no one really blames the fast-food chains, because they are simply providing what people want.
It isn't exactly what people should have, and is it really what everyone wants? No, there are better restaurants with healthy food, but that food is more expensive and many people would go hungry if they had to eat at those places given the current economic model. Of course, if they didn't need to spend so much on medicine and hospital stays, a healthier diet is actually cheaper. Also, those better places aren't easy to find -- small (or no) advertising budgets, for instance.
The government has contracted with the chains for food, and even serves it at every government office and on every military base. The chains thus have a fair amount of political clout so that every time someone raises the issue about how unhealthy the food is, they get muffled by the arguments "But it would be too expensive to eat healthy" and "Most people don't like that other food and can't even find it!"
We have a crisis because the demand for fast food is so great that there aren't enough fry cooks. So, the heads of major military organizations and government agencies observe that we are facing a crisis because, without enough fry cooks, our troops will be overwhelmed by better-fed people from China. Government officials and industry people agree because they can't imagine any better diet (or are so enamored of fried potatoes that they don't want anything else).
How do they address the crisis? By mounting advertising campaigns to encourage young people to enter the exciting world of "cuisine awareness." We make it seem glamorous. Private organizations offer certifications in "soda making" and "ketchup bottle maintenance" that are awarded after 3-day seminars. DOD requires anyone working in food service to have one of these certificates -- and that's basically all. We see educational institutes and small colleges offering special programs in "salad bar maintenance." The generals and admirals keep showing up at meetings proclaiming how important it is that we get more burger-flippers in place before we have a "patty melt Pearl Harbor."
The government launches a program to certify schools as centers of "Cuisine Awareness Excellence" if they can prove they have at least 5 cookbooks in the library, a crockpot, and two faculty who have boiled water. Soon, there are hundreds of places designated with this CAE, from taco trucks and hot dog stands to cordon bleu centers -- but lots are only hot dog stands. None of them are given any recipes, cooks, or financial support, of course -- simply designating them is enough, right?
When all of that isn't seen to be enough, the powers-that-be offer up contests that encourage kids to show up and cook. Those who are able to most quickly defrost a compressed cake of Soylent Red, cook it, stick it in a bun, and serve it up in a bag with fries are declared the winners and given jobs behind someone's grill. Actually, each registered contestant gets a jaunty paper cap and an offer of an immediate job cooking for the military (assuming they are U.S. citizens; after all, we know what those furriners eat sure isn't food!). And gosh, how could they aspire to be anything BUT a fry cook for the next 40 years -- no need to worry about any real education before they take the jobs.
Meanwhile, those studying dietetics, preventative health care, sustainable agriculture, haute cuisine, or other related topics are largely ignored -- not to mention the practicing experts in these fields. The people and places of study for those domains are ignored by the officials, and many of the potential employers in those areas are actually going out of business because of lack of public interest and support. The advice of the experts on how to improve diet is ignored. Find that disconcerting? Here -- have a deep-fried cherry pie and a chocolate ersatz-dairy item drink to make you feel better.
Did you sense a set of common threads (assuming you didn't blow out your cortex in the exercise)?
First, in every case, a mix of short-sighted and ultimately stupid solutions is being undertaken. In each, there are large-scale efforts to address pressing problems that largely ignore fundamental, systemic weaknesses.
Second, there are a set of efforts putatively being made to increase the population of experts, but only with those who know how to address a current, limited problem set. Fancy titles, certificates, and seminars are used to promote these technicians. Meanwhile, longer-term expertise and solutions are being ignored because of the perceived urgency of the immediate problems and a lack of understanding of cost and risk.
Third, longer-term disaster is clearly coming in each case because of secondary problems and growth of the current threats.
Why did this come up with my post and panel on cybersecurity? I would hope that would be obvious, but if not, let me suggest you go back to read my prior post, then read the above examples, again. Then, consider:
One of the most egregious aspects is this last item -- the increasing use of competitions as a way of drawing people to the field. Competitions, by their very nature, stress learned behavior in reaction to current problems that are likely small deviations from past issues. They do not require extensive grounding in multiple fields. Competitions require rapid response instead of careful design and deep thought -- if anything, they discourage people who exhibit slow, deliberate thinking -- discourage them from the contests, and possibly from considering the field itself. If what is being promoted are competitions for the fastest hack on a Wintel platform, how is that going to encourage deep thinkers interested in architecture, algorithms, operating systems, cryptology, or more?
Competitions encourage the mindset of hacking and patching, not of strong design. Competitions encourage the mindset of quick recovery over the gestalt of design-operate-observe-investigate-redesign. Because of the high-profile, high-pressure nature of competitions, they are likely to discourage the philosophical and the careful thinkers. Speed is emphasized over comprehensive and robust approaches. Competitions are also likely to disproportionately discourage women, the shy, and those with expertise in non-mainstream systems. In short, competitions select for a narrow set of skills and proclivities -- and may discourage many of the people we most need in the field to address the underlying problems.
So, the next time you hear some official talk about the need for "cyber warriors" or promoting some new "capture the flag" competition, ask yourself if you want to live in a world where the barns are always catching fire, the cars are always breaking down, nearly everyone eats fast food, and the major focus of "authorities" is attracting more young people to minimally skilled positions that perpetuate that situation...until everything falls apart. The next time you hear about some large government grant that happens to be within 100 miles of the granting agency's headquarters or corporate support for a program of which the CEO is an alumnus but there is no history of excellence in the field, ask yourself why their support is skewed towards building more hot dog stands.
Those of us here at CERIAS, and some of our colleagues with strategic views elsewhere, remind you that expertise is a pursuit and a process, not a competition or a 3-day class, and some of us take it seriously. We wish you would, too.
Your brain may now return to being a couch potato.
[I was recently asked for some thoughts on the issues of professionalization and education of people working in cyber security. I realize I have been asked this many times, and I keep repeating my answers at various levels of specificity. So, here is an attempt to capture some of my thoughts so I can redirect future queries here.]
There are several issues relating to the area of personnel in this field that make issues of education and professional definition more complex and difficult to define. The field has changing requirements and increasing needs (largely because industry and government ignored the warnings some of us were sounding many years ago, but that is another story, oft told -- and ignored).
When I talk about educational and personnel needs, I discuss it metaphorically, using two dimensions. Along one axis is the continuum (with an arbitrary directionality) of science, engineering, and technology. Science is the study of fundamental properties and investigation of what is possible -- and the bounds on that possibility. Engineering is the study of design and building new artifacts under constraints. Technology is the study of how to choose from existing artifacts and employ them effectively to solve problems.
The second axis is the range of pure practice to abstraction. This axis is less linear than the other (which is not exactly linear, either), and I don't yet have a good scale for it. However, conceptually I relate it to applying levels of abstraction and anticipation. At its "practice" end are those who actually put in the settings and read the logs of currently-existing artifacts; they do almost no hypothesizing. Moving the other direction we see increasing interaction with abstract thought, people and systems, including operations, law enforcement, management, economics, politics, and eventually, pure theory. At one end, it is "hands-on" with the technology, and at the other is pure interaction with people and abstractions, and perhaps no contact with the technology.
There are also levels of mastery involved for different tasks, such as articulated in Bloom's Taxonomy of learning. Adding that in would provide more complexity than can fit in this blog entry (which is already too long).
The means of acquiring the necessary expertise varies by position within this field. Many technicians can be effective with simple training, sometimes with at most on-the-job experience. They usually need little or no background beyond everyday practice. Those at the extremes of abstract thought in theory or policy need considerably more background, of the form we generally associate with higher education (although that is not strictly required), often with advanced degrees. And, of course, throughout, people need some innate abilities and motivation for the role they seek; not everyone has the ability, innate or developed, for each task area.
We need the full spectrum of these different forms of expertise, with government and industry currently putting an emphasis on the extremes of the quadrant involving technology/practice -- they have problems, now, and want people to populate the "digital ramparts" to defend them. This emphasis applies to those who operate the IDS and firewalls, but also to those who find ways to exploit existing systems (an area I believe has been overemphasized by government; cf. my old blog post and a recent post by Gary McGraw). Many, if not most, of these people can acquire the needed skills via training -- such as is acquired on the job, in 1-10 day "minicourses" provided by commercial organizations, and in vocational education (e.g., some secondary ed and 2-year degree programs). These kinds of roles are easily designated with testing and course-completion certificates.
Note carefully that there is no value statement being made here -- deeply technical roles are fundamental to civilization as we know it. The plumbers, electricians, EMTs, police, mechanics, clerks, and so on are key to our quality of life. The programs that prepare people for those careers are vital, too.
Of course, there are also careers that are directly located in many other places in the abstract plane illustrated above: scientists, software engineers, managers, policy makers, and even bow tie-wearing professors.
One problem comes about when we try to impose sharply-defined categories on all of this, and say that person X has sufficient mastery of the category to perform tasks A, B, and C that are perceived as part of that category. However, those categories are necessarily shifting, not well-defined, and new needs are constantly arising. For instance, we have someone well trained in selecting and operating firewalls and IDS, but suddenly she is confronted with the need to investigate a possible act of nation-state espionage, determine what was done, and how it happened. Or, she is asked to set corporate policy for use of BYOD without knowledge of all the various job functions and people involved. Continued deployment of mobile and embedded computing will add further shifts. The skills to do most of these tasks are not easily designated, although a combination of certificates and experience may be useful.
Too many (current) educational programs stress only the technology -- and many others include significant technology training components because of pressure by outside entities -- rather than a full spectrum of education and skills. We have a real shortage of people who have any significant insight into the scope of application of policy, management, law, economics, psychology, and the like to cybersecurity, although arguably, those are some of the problems most obvious to those who have the long view. (BTW, that is why CERIAS was founded 15 years ago with faculty in nearly 20 academic departments: "cybersecurity" is not solely a technology issue; this has more recently been recognized by several other universities that are now also treating it holistically.) These other skill areas often require deeper education and repetition of exercises involving abstract thought. It seems that not as many people are naturally capable of mastering these skills. The primary means we use to designate mastery is through postsecondary degrees, although their exact meaning does vary based on the granting institution.
So, consider some of the bottom-line questions of "professionalization" -- what is, exactly, the profession? What purposes does it serve to delineate one or more niche areas, especially in a domain of knowledge and practice that changes so rapidly? Who should define those areas? Do we require some certification to practice in the field? Given the above, I would contend that too many people have too narrow a view of the domain, and they are seeking some way of ensuring competence only for their narrow application needs. There is therefore a risk that imposing "professional certifications" on this field would both serve to further skew the perception of what is involved, and discourage development of some needed expertise. Defining narrow paths or skill sets for "the profession" might well do the same. Furthermore, much of the body of knowledge is heuristics and "best practice" that has little basis in sound science and engineering. Calling someone in the 1600s a "medical professional" because he knew how to let blood, apply leeches, and hack off limbs with a carpenter's saw using assistants to hold down the unanesthetized patient creates a certain cognitive dissonance; today, calling someone a "cyber security professional" based on knowledge of how to configure Windows, deploy a firewall, and install anti-virus programs should probably be viewed as a similar oddity. We need to evolve to where the deployed base isn't so flawed, and we have some knowledge of what security really is -- evolve from the equivalent of "sawbones" to infectious disease specialists.
We have already seen some of this unfortunate side-effect with the DOD requirements for certifications. Now DOD is about to revisit the requirements, because they have found that many people with certifications don't have the skills they (DOD) think they want. Arguably, people who enter careers and seek (and receive) certification are professionals, at least in a current sense of that word. It is not their fault that the employers don't understand the profession and the nature of the field. Also notable are cases of people with extensive experience and education, who exceed the real needs, but are not eligible for employment because they have not paid for the courses and exams serving as gateways for particular certificates -- and cash cows for their issuing organizations. There are many disconnects in all of this. We also saw skew develop in the academic CAE program.
Here is a short parable that also has implications for this topic.
In the early 1900s, officials with the Bell company (telephones) were very concerned. They told officials and the public that there was a looming personnel crisis. They predicted that, at the then-current rate of growth, by the end of the century everyone in the country would need to be a telephone operator or telephone installer. Clearly, this was impossible.
Fast forward to recent times. Those early predictions were correct. Everyone was an installer -- each could buy a phone at the corner store, and plug it into a jack in the wall at home. Or, simpler yet, they could buy cellphones that were already on. And everyone was an operator -- instead of using plugboards and directory assistance, they would use an online service to get a phone number and enter it in the keypad (or speed dial from memory). What happened? Focused research, technology evolution, investment in infrastructure, economics, policy, and psychology (among others) interacted to "shift the paradigm" to one that no longer had the looming personnel problems.
If we devoted more resources and attention to the broadly focused issues of information protection (not "cyber" -- can we put that term to rest?), we might well obviate many of the problems that now require legions of technicians. Why do we have firewalls and IDS? In large part, because the underlying software and hardware were not designed for use in an open environment, and much of what is deployed is terribly buggy and poorly configured. The languages, systems, protocols, and personnel involved in the current infrastructure all need rethinking and reengineering. But so long as the powers-that-be emphasize retaining (and expanding) legacy artifacts and compatibility based on up-front expense instead of overall quality, and on training yet more people to be the "cyber operators" defending those poor choices, we are not going to make the advances necessary to move beyond them (and, to repeat, many of us have been warning about that for decades). And we are never going to have enough "professionals" to keep them safe. We are focusing on the short term and will lose the overall struggle; we need to evolve our way out of the problems, not meet them with an ever-growing band of mercenaries.
The bottom line? We should be very cautious in defining what a "professional" is in this field so that we don't institutionalize limitations and bad practices. And we should do more to broaden the scope of education for those who work in those "professions" to ensure that their focus -- and skills -- are not so limited as to miss important features that should be part of what they do. As one glaring example, think "privacy" -- how many of the "professionals" working in the field have a good grounding and concern about preserving privacy (and other civil rights) in what they do? Where is privacy even mentioned in "cybersecurity"? What else are they missing?
[If this isn't enough of my musings on education, you can read two of my ideas in a white paper I wrote in 2010. Unfortunately, although many in policy circles say they like the ideas, no one has shown any signs of acting as a champion for either.]
[3/2/2013] While at the RSA Conference, I was interviewed by the Information Security Media Group on the topic of cyber workforce. The video is available online.
[If you want to skip my recollection and jump right to the announcement that is the reason for this post, go here.]
Back in about 1990 I was approached by an eager undergrad who had recently come to Purdue University. A mutual acquaintance (hi, Rob!) had recommended that the student connect with me for a project. We chatted for a bit and at first it wasn't clear exactly what he might be able to do. He had some experience coding, and was working in the campus computing center, but had no background in the more advanced topics in computing (yet).
Well, it just so happened that a few months earlier, my honeypot Sun workstation had recorded a very sophisticated (for the time) attack, which resulted in an altered shared library with a back door in place. The attack was stealthy, and the new library had the same dates, size and simple hash value as the original. (The attack was part of a larger series of attacks, and eventually documented in "@Large: The Strange Case of the World's Biggest Internet Invasion" by David H. Freedman and Charles C. Mann.)
I had recently been studying message digest functions and had a hunch that they might provide better protection for systems than a simple "ls -1 | diff - old" comparison. However, I wanted to get some operational sense about the potential for collision in the digests. So, I tasked the student with devising some tests to run many files through a version of the digest to see if there were any collisions. He wrote a program to generate some random files, and all seemed okay based on that. I suggested he look for a different collection -- something larger. He took my advice a little too much to heart. It seems he had a part-time job running backup jobs on the main shared instructional computers at the campus computing center. He decided to run the program over the entire file system to look for duplicates. Which he did one night after backups were complete.
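The collision test he ran can be sketched along these lines. This is my own modern Python illustration, not the original code; the choice of MD5 (a digest of roughly that era) and all names here are mine:

```python
import hashlib

def collision_scan(blobs):
    """Given (name, data) pairs, report any digest shared by two or more
    files with *different* contents -- the 'unexpected' collisions.
    Identical files sharing a digest is expected and not a collision."""
    by_digest = {}  # digest -> {content: [filenames]}
    for name, data in blobs:
        d = hashlib.md5(data).hexdigest()
        by_digest.setdefault(d, {}).setdefault(data, []).append(name)
    # a digest mapping to more than one distinct content is a true collision
    return {d: list(groups.values()) for d, groups in by_digest.items()
            if len(groups) > 1}
```

The key subtlety the test had to respect is visible in the return condition: duplicate files naturally hash alike, so only distinct contents arriving at the same digest would have been cause for concern.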
The next day (as I recall) he reported to me that there were no unexpected collisions over many hundreds of thousands of files. That was a good result!
The bad result was that running his program over the file system had resulted in a change of the access time of every file on the system, so the backups the next evening vastly exceeded the existing tape archive and all the spares! This led directly to the student having a (pointed) conversation with the director of the center, and thereafter, unemployment. I couldn't leave him in that position mid-semester, so I found a little money and hired him as an assistant. I then put him to work coding up my idea of how to use the message digests to detect changes and intrusions into a computing system. Over the next year, he would code up my design, and we would do repeated, modified "cleanroom" tests of his software. Only when they all passed did we release the first version of Tripwire.
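In outline, the design amounts to recording a baseline of digests and later comparing a fresh scan against it. A minimal sketch of that idea, in my own modern rendering with SHA-256 (not Tripwire's actual code, algorithms, or file formats):

```python
import hashlib
import os

def digest_file(path, algo="sha256"):
    """Return the hex digest of a file's contents, read in chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Map every file under root to its content digest -- the baseline."""
    table = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            table[os.path.relpath(path, root)] = digest_file(path)
    return table

def compare(baseline, current):
    """Report files added, removed, or changed since the baseline."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in baseline.keys() & current.keys()
                     if baseline[p] != current[p])
    return added, removed, changed
```

The real system, of course, also had to worry about protecting the baseline itself from tampering and about attributes beyond file contents, but the digest comparison is the heart of it.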
That is how I met Gene Kim.
Gene went on to grad school elsewhere, then a start-up, and finally got the idea to start the commercial version of Tripwire with Wyatt Starnes; Gene served as CTO, Wyatt as CEO. Their subsequent hard work, and that of hundreds of others who have worked at the company over the years, resulted in great success: the software has become one of the most widely used change detection & IDS systems in history, as well as inspiring many other products.
Gene became more active in the security scene, and was especially intrigued with issues of configuration management, compliance, and overall system visibility, and with their connections to security and correctness. Over the years he has spoken with thousands of customers and experts in the industry, and heard both best-practice and horror stories involving integrity management, version control, and security. This led to projects, workshops, panel sessions, and eventually to his lead authorship of "Visible Ops Security: Achieving Common Security and IT Operations Objectives in 4 Practical Steps" (Gene Kim, Paul Love, George Spafford), and some other related works.
His passion for the topic only grew. He was involved in standards organizations, won several awards for his work, and even helped get the B-sides conferences into a going concern. A few years ago, he left his position at Tripwire to begin work on a book to better convey the principles he knew could make a huge difference in how IT is managed in organizations big and small.
I read an early draft of that book a little over a year ago (late 2011). It was a bit rough -- Gene is bright and enthusiastic, but was not quite writing to the level of J.K. Rowling or Stephen King. Still, it was clear that he had the framework of a reasonable narrative to present major points about good, bad, and excellent ways to manage IT operations, and how to transform them for the better. He then obtained input from a number of people (I think he ignored mine), added some co-authors, and performed a major rewrite of the book. The result is a much more readable and enjoyable story -- a cross between a case study and a detective novel, with a dash of H. P. Lovecraft and DevOps thrown in.
The official launch date of the book, "The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win" (Gene Kim, Kevin Behr, George Spafford), is Tuesday, January 15, but you can preorder it before then on (at least) Amazon.
The book is worth reading if you have a stake in operations at a business using IT. If you are a C-level executive, you should most definitely take time to read the book. Consultants, auditors, designers, educators...there are some concepts in there for everyone.
But you don't have to take only my word for it -- see the effusive praise of tech luminaries who have read the book.
So, Spaf sez, get a copy and see how you can transform your enterprise for the better.
(Oh, and I have never met the George Spafford who is a coauthor of the book. We are undoubtedly distant cousins, especially given how uncommon the name is. That Gene would work with two different Spaffords over the years is one of those cosmic quirks Vonnegut might write about. But Gene isn't Vonnegut, either.)
So, as a postscript.... I've obviously known Gene for over 20 years, and am very fond of him, as well as happy for his continuing success. However, I have had a long history of kidding him, which he has taken with incredible good nature. I am sure he's saving it all up to get me some day....
When Gene and his publicist asked if I could provide some quotes to use for his book, I wrote the first of the following. For some reason, this never made it onto the WWW site. So, they asked me again, and I wrote the second of the following -- which they also did not use.
So, not to let a good review (or two) go to waste, I have included them here for you. If nothing else, it should convince others not to ask me for a book review.
But, despite the snark (who, me?) of these gag reviews, I definitely suggest you get a copy of the book and think about the ideas expressed therein. Gene and his coauthors have really produced a valuable, readable work that will inform -- and maybe scare -- anyone involved with organizational IT.
Based on my long experience in academia, I can say with conviction that this is truly a book, composed of an impressive collection of words, some of which exist in human languages. Although arranged in a largely random order, there are a few sentences that appear to have both verbs and nouns. I advise that you immediately buy several copies and send them to people -- especially people you don't like -- and know that your purchase is helping keep some out of the hands of the unwary and potentially innocent. Under no circumstances, however, should you read the book before driving or operating heavy machinery. This work should convince you that Gene Kim is a visionary (assuming that your definition of "vision" includes "drug-induced hallucination").
I picked up this new book -- The Phoenix Project, by Gene Kim, et al. -- and could not put it down. You probably hear people say that about books in which they are engrossed. But I mean this literally: I happened to be reading it on my Kindle while repairing some holiday ornaments with superglue. You might say that the book stuck with me for a while.
There are people who will tell you that Gene Kim is a great author and raconteur. Those people, of course, are either trapped in Mr. Kim's employ or they drink heavily. Actually, one of those conditions invariably leads to the other, along with uncontrollable weeping, and the anguished rending of garments. Notwithstanding that, Mr. Kim's latest assault on les belles-lettres does indeed prompt this reviewer to some praise: I have not had to charge my health spending account for a zolpidem refill since I received the advance copy of the book! (Although it may be why I now need risperidone.)
I must warn you, gentle reader, that despite my steadfast sufferance in reading, I never encountered any mention of an actual Phoenix. I skipped ahead to the end, and there was no mention there, either. Neither did I notice any discussion of a massive conflagration nor of Arizona, either of which might have supported the reference to Phoenix. This is perhaps not so puzzling when one recollects that Mr. Kim's train of thought often careens off the rails with any random, transient manifestation corresponding to the meme "Ooh, a squirrel!" Rather, this work is more emblematic of a bus of thought, although it is the short bus, at that.
Despite my personal trauma, I must declare the book as a fine yarn: not because it is unduly tangled (it is), but because my kitten batted it about for hours with the evident joy usually limited to a skein of fine yarn. I have found over time it is wise not to argue with cats or women. Therefore, appease your inner kitten and purchase a copy of the book. Gene Kim's court-appointed guardians will thank you. Probably.
(Congratulations Gene, Kevin and George!)
As someone who is interested in information security and CERIAS (or why else would you be reading this blog?), you are undoubtedly already aware of the great need for education and research in information/cyber security areas -- the very areas in which we have been a leader for the last 20+ years here at Purdue University.
One aspect of our efforts is an on-going need to attract and retain the very best faculty members possible to provide leadership in all aspects of what we do.
Universities have a mechanism for attracting and retaining the best people: endowed chairs for faculty. These are special designations for positions for leading faculty. The associated endowment provides discretionary funds for travel, research, staff and a salary supplement to support the position. Only a small number of these positions exist in any computing field at universities nationally…and almost none in information/cyber security and privacy. Having one of the oldest and largest programs in this field, Purdue University really should have a few of these positions available to attract and keep the best faculty we can find.
Normally, the endowments for these chairs are provided by generous individuals or foundations who support the university and/or the research area. As a small token of appreciation, the university allows the benefactor(s) to name the chaired position (within reason), thus resulting in something such as the Homer J. Simpson Distinguished Professor of Cyber Security or the Yoyodyne Propulsion Systems Professor of Information Security and Privacy. This name is kept in perpetuity, and is on all stationery and publications of that professor henceforth.
Purdue has just announced a new program to match donations 1:1 for chaired positions with no restrictions. It is thus possible for someone (or a group, company, club or foundation) to endow a distinguished chair at ½ of the usual amount. Further, that amount may be pledged over a three-year period, and the donor(s) still retain full naming rights!
Note that Purdue University is a 501(c)(3) organization and thus donations to support this have potential tax advantages for the donor(s).
We really would like to have CERIAS continue to be the leader in the field of information security. Obtaining at least one (and preferably, several) named chairs in the field, most likely with homes in the CS department, would help us keep that lead, and keep our program strong.
If you are interested in taking part in this great opportunity to help fund one of the first few endowed professorships globally in this important area, please contact me. And if you know of others who might be interested, please pass this along to them. Fields including computer games and graphics have dozens of endowed professorships around the country. Isn't it about time we showed that information security is taken seriously, too?
Sunday, October 2nd, Earl Eugene Schultz, Jr. passed away. Gene probably had suffered an unrecognized stroke about two weeks earlier, and a week later fell down a long escalator at the Minneapolis municipal airport. He was given immediate emergency aid, then hospitalized, but never regained consciousness. Many of his family members were with him during his final days.
What follows is a more formal obituary, based on material provided by his family and others. That is followed by some personal reflections.
Gene was born September 10, 1946, in Chicago to E. Eugene Sr. and Elizabeth Schultz. They moved to California in 1948, and Gene’s sister, Nancy, was born in 1955. The family lived in Lafayette, California. Gene graduated from UCLA, and earned his MS and PhD (in Cognitive Science, 1977) at Purdue University in Indiana.
While at Purdue University, Gene met and married Cathy Brown. They were married for 36 years, and raised three daughters: Sarah, Rachel and Leah.
Gene was an active member of Cornerstone Fellowship, and belonged to a men’s Bible study. His many interests included family, going to his mountain home in Twain Harte, model trains, music, travelling, the outdoors, history, reading and sports.
Gene is survived by his wife of 36 years, Cathy Brown Schultz; father, Gene Schultz, Sr.; sister, Nancy Baker; daughters and their spouses, Sarah and Tim Vanier, Rachel and Duc Nguyen, Leah and Nathan Martin; and two grandchildren, Nola and Drake Nguyen. A memorial service will be held at Cornerstone Fellowship in Livermore, California on Saturday, October 8, 2011 at 1 pm. Donations may be sent to CaringBridge.org under his name, Gene Schultz.
You should also take a few moments to visit this page and learn about the symptoms and response to stroke.
Gene was one of the more notable and accomplished figures in computing security over the last few decades. During the course of his career, Gene was professor of computer science at several universities, including the University of California at Davis and Purdue University, and retired from the University of California at Berkeley. He consulted for a wide range of clients, including U.S. and foreign governments and the banking, petroleum, and pharmaceutical industries. He also managed several information security practices and served as chief technology officer for two companies.
Gene formed and managed the Computer Incident Advisory Capability (CIAC) — an incident response team for the U.S. Department of Energy — from 1986–1992. This was the first formal incident response team, predating the CERT/CC by several years. He also was instrumental in the founding of FIRST — the Forum of Incident Response & Security Teams.
During his 30 years of work in security, Gene authored or co-authored over 120 papers, and five books. He was manager of the I4 program at SRI from 1994–1998. From 2002–2007, he was the Editor-in-Chief of Computers and Security — the oldest journal in computing security — and continued to serve on its editorial board. Gene was also an associate editor of Network Security. He was a member of the accreditation board of the Institute of Information Security Professionals (IISP).
Gene testified as an expert several times before both Senate and House Congressional committees. He also served as an expert advisor to a number of companies and agencies. Gene was a certified SANS instructor, instructor for ISACA, senior SANS analyst, member of the SANS NewsBites editorial board, and co-author of the 2005 and 2006 Certified Information Security Manager preparation materials.
Dr. Schultz was honored numerous times for his research, service, and teaching. Among his many notable awards, Gene received the NASA Technical Excellence Award, Department of Energy Excellence Award, the Vanguard Conference Top Gun Award (for best presenter) twice, the Vanguard Chairman's Award, the ISACA John Kuyers Best Speaker/Best Conference Contributor Award and the National Information Systems Security Conference Best Paper Award. One of only a few Distinguished Fellows of the Information Systems Security Association (ISSA), he was also named to the ISSA Hall of Fame and received ISSA's Professional Achievement and Honor Roll Awards.
As I recall, I first “met” Gene almost 25 years ago, when he was involved with the CIAC and I was involved with network security. We exchanged email about security issues and his time at Purdue. I may have even met him earlier — I can’t recall, exactly. It seems we have been friends forever. We also crossed paths once or twice at conferences, but it was only incidental.
In 1998, I started CERIAS at Purdue. I had contacted personnel at the (now defunct) company Global Integrity while at the National Computer Security Conference that year about supporting the effort at CERIAS. What followed was a wonderful collaboration: Gene was the Director of Research for Global Integrity, and as part of their support for CERIAS they “loaned” Gene to us for several years. Gene, Cathy and Leah moved to West Lafayette, a few houses away from where I lived, and Gene proceeded to help us in research and teaching courses over the next three years while he worked remotely for GI.
The students at Purdue loved Gene, but that seems to have been the case everywhere he taught. Gene had a gift for conveying complex concepts to students, and had incredible patience when dealing with them one-on-one. He came up with great assignments, sprinkled his lectures with interesting stories from his experience, and encouraged the students to try things to see what they might discover. He was inspirational. He was inspirational as a colleague, too, although we both traveled so much that we didn't get to see each other too often.
In 2001 he parted ways with Global Integrity, and moved his family back to California. This was no doubt influenced by the winters they had experienced in Indiana — too much of a reminder of grad student days for Gene and Cathy! I remember one time that we all got together to watch a New Year’s Purdue football bowl appearance, and the snow was so high as to make the roads impassable for a few days. Luckily, we lived near each other and it was only a short walk to warmth, hors d’oeuvres, and wine.
In the following years, Gene and I kept in close touch. We served on a few committees and editorial boards together, regularly saw each other at conferences, and kept the email flowing back and forth. He returned to Purdue and CERIAS several times to conduct seminars and joint research. He was generous with his time to the students and faculty who met with him.
Earlier this year, several of us put together a proposal to a funding agency. In it, we listed Gene as an outside expert to review and advise us on our work. We had room in the budget to pay him almost any fee he requested. But, when I spoke with him on the phone, he indicated he didn’t care if we paid more than his expenses — “I want to help CERIAS students and advance the field” was his rationale.
Since I learned of the news of his accident, and subsequent passing, I have provided some updates and notes to friends, colleagues, former students, and others via social media and email. So many people who knew Gene have responded with stories. There are three elements that are frequently repeated, and from my experience they help to define the man:
Gene Schultz was a wonderful role model, mentor and friend for a huge number of people, including being a husband to a delightful wife for 36 years and father to three wonderful daughters. Our world is a little less bright with him gone, but so very much better that he was with us for the time he was here.
E. Eugene Schultz, Jr., 9/10/46–10/2/11. Requiescat in pace.
I was watching a video today (more on that later) that reminded me of some history. It also brought to mind that too few defenders these days build forensics capture into their systems to help identify intruders. They also don't have active defenses, countermeasures and chaff in place to slow down attackers and provide more warning of problems.
Back in the late 1980s and early 1990s, I quietly built some counterhacking and beaconing tools that I installed in a "fake front" machine on our local network. People who tried to break into it might get surprises and leave me log info about what they were up to, and things they downloaded would not do what they thought or might beacon me to indicate where the code went. This was long before honeypots were formalized, and before firewalls were in common use. Some of my experiences contributed to my writing the first few papers on software forensics (now called digital forensics), the development of Tripwire, and several of my Ph.D. students' thesis topics.
I didn't talk about that work much at the time for a variety of reasons, but I did present some of the ideas to students in classes over the years, and in some closed workshops. Tsutomu Shimomura, Dan Farmer and I traded some of our ideas on occasion, along with a few others; a DOD service branch contracted with a few companies to actually build some tools from my ideas, a few of which made it into commercial products. (And no, I never got any royalties or credit for them, either, or for my early work on firewalls, or security scanning, or.... I didn't apply for patents or start companies, unfortunately. It's interesting to see how much of the commercial industry is based around things I pioneered.)
I now regret not having actually written about my ideas at the time, but I was asked by several groups (including a few government agencies) not to do so because it might give away clues to attackers. A few of those groups were funding my grad students, so I complied. You can find a few hints of the ideas in the various editions of Practical Unix & Internet Security because I shared several of the ideas with my co-author, Simson Garfinkel, who had a lot of clever ideas of his own. He went on to found a company, Sandstorm Enterprises, to build and market some professional tools in roughly this space; I was a minor partner in that company. (Simson has continued to have lots of other great ideas, and is now doing wonderful things with disk forensics as a faculty member at the Naval Postgraduate School.)
Some of the ideas we all had back then continue to be reinvented, along with many new and improved approaches. Back in the 1980s, all my tools were in Unix (SunOS, mostly), but now there are possible options in many other systems, with Windows and Linux being the main problems. Of course, back in the 1980s the Internet wasn't used for commerce, Linux hadn't been developed, and Windows was not the widespread issue it is now. There also wasn't a WWW with its problems of cross-site scripting and SQL injection. Nonetheless, there were plenty of attackers, and more than enough unfound bugs in the software to enable attacks.
For the sake of history, I thought I'd document a few of the things I remember as working well, so the memories aren't lost forever. These are all circa 1989-1993:
There were many other tools and tripwires in place, of course, but the above were some of the most successful.
What does successful mean? Well, they helped me to identify several penetrations in progress, and get info on the attackers. I also identified a few new attacks, including the very subtle library substitution that was documented in @Large: The Strange Case of the World's Biggest Internet Invasion. The substitute with backdoor in place had the identical size, dates and simple checksum as the original so as to evade tools such as COPS and rdist. Most victims never knew they had been compromised. My system caught the attack in progress. I was able to share details with the Sun response team — and thereafter they started using MD5 checksums on their patch releases. That incident also inspired some of my design of Tripwire.
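To see why matching size, dates, and a simple checksum was not enough, consider this toy illustration (contrived bytes of my own devising, not the actual attack): an attacker can pad a substitute so its additive checksum and length match the original, yet any cryptographic digest still exposes the change.

```python
import hashlib

def simple_checksum(data: bytes) -> int:
    """A naive additive checksum of the sort early comparison tools used."""
    return sum(data) & 0xFFFF

# Hypothetical stand-ins for the real and backdoored libraries.
original = b"A" * 32            # byte sum: 65 * 32 = 2080
payload  = b"B" * 30            # byte sum: 66 * 30 = 1980
padding  = bytes([100, 0])      # chosen so the sums match: 1980 + 100 = 2080
tampered = payload + padding

assert len(tampered) == len(original)                          # same size
assert simple_checksum(tampered) == simple_checksum(original)  # same checksum
# ...but a cryptographic digest still exposes the substitution:
assert hashlib.sha256(tampered).digest() != hashlib.sha256(original).digest()
```

Forging a matching additive checksum is a one-line arithmetic exercise; forging a matching MD5 (in that era) was computationally out of reach, which is why the switch to message digests on patch releases mattered.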
In another case, I collected data on some people who had broken into my system to steal the Morris Worm source code. The attacks were documented in the book Underground. The author, Suelette Dreyfus, assisted by Julian Assange (yes, the Wikileaks one), never bothered to contact me to verify what she wrote. The book suggests that my real account was compromised, and source code taken. However, it was the fake account, my security monitors froze the connection after a few minutes, and the software that was accessed was truncated and neutered. Furthermore, the flaws that were exploited to get in were not on my machine — they were on a machine operated by the CS staff. (Dreyfus got several other things wrong, but I'm not going to do a full critique.)
There were a half-dozen other incidents where I was able to identify new attacks (now known as zero-day exploits) and get the details to vendors. But after a while, interest dropped off in attacking my machine as new, more exciting opportunities for the kiddies came into play, such as botnets and DDOS attacks. And maybe the word spread that I didn't keep anything useful or interesting on my system. (I still don't.) It's also the case that I got much more interested in issues that don't involve the hands-on, bits & bytes parts of security — I'm now much more interested in fundamental science and policy aspects. I leave the hands-on aspects to the next generation. So, I'm not really a challenge now — especially as I do not administer my system anymore — it's done by staff.
I was reminded of all this when someone on Twitter posted the URL of a video taken at Notacon 2011 (Funnypots and Skiddy Baiting: Screwing with those that screw with you by Adrian "Iron Geek" Crenshaw). It is amusing and reminded me of the stories, above. It also showed that some of the same techniques we used 20 years ago are still applicable today.
Of course, that is also depressing. Now, nearly 20 years later, lots of things have changed but unfortunately, security is a bigger problem, and law enforcement is still struggling to keep up. Too many intrusions occur without being noticed, and too little information is available to track the perps.
There are a few takeaways from all the above that the reader is invited to consider:
Also, you might watch Iron Geek's video to inspire some other ideas if you are interested in this general area — it's a good starting point. (And another, related and funny post on this general topic is here, but is possibly NSFW.)
In conclusion, I'll close with my 3 rules for successful security:
Yet another breach of information has occurred, this time from the Arizona Department of Public Safety. A large amount of data about law enforcement operations was exposed, as was a considerable amount of personnel information. As someone who has been working in information security and the implications of technology for nearly 30 years, two things come to mind.
First, if a largely uncoordinated group could penetrate the systems and expose all this information, then so could a much more focused, well-financed, and malevolent group — and it would not likely result in postings picked up by the media. Attacks by narcotics cartels, organized crime, terrorists and intelligence agencies are obvious threats; we can only assume that some have already succeeded but not been recognized or publicized. And, as others are noting, this poses a real threat to the physical safety of innocent people. Yes, in any large law enforcement organization there are likely to be some who are corrupt (the claimed reason behind the attack), but that is not reason to attack them all. Some of the people they are arrayed against are far worse.
For example, there are thousands (perhaps tens of thousands) of kidnappings in Mexico for ransom, with many of the hostages killed rather than freed after payment. Take away effective law enforcement in Arizona, and those gangs would expand into the U.S. where they could demand bigger ransoms. The hackers, sitting behind a keyboard removed from gang and street violence, safe from forcible rape, and with enough education to be able to avoid most fraud, find it easy to cherry-pick some excesses to complain about. But the majority of people in the world do not have the education or means to enjoy that level of privileged safety. Compared to police in many third-world countries where extortion and bribes are always required for any protection at all, U.S. law enforcement is pretty good. (As is the UK, which has also recently been attacked.)
Ask yourself what the real agenda is of a group that has so far attacked only law enforcement in some of the more moderate countries and companies without political or criminal agendas, all while showing a total disregard for collateral damage. Ask why these "heroes" aren't seeking to expose some of the data and names of the worst drug cartels, or working to end human trafficking and systematic rape in war zones, or exposing the corruption in some African, South American & Asian governments, or seeking to damage the governments of truly despotic regimes (e.g., North Korea, Myanmar), or interfering with China's online attacks against the Dalai Lama, or leaking memos about willful environmental damage and fraud by some large companies, or seeking to destroy extremist groups (such as al Qaida) that oppress women and minorities and are seeking weapons of mass destruction.
Have you seen one report yet about anything like the above? None of those actions would necessarily be legal, but any one of them would certainly be a better match for the claimed motives. Instead, it is obvious that these individuals and groups are displaying a significant political and moral bias — or blindness: they are ignoring the worst human rights offenders and criminals on the planet. It seems they are after the ego-boosting publicity, and concerned only with themselves. The claims of exposing evil are intended to fool the naive.
In particular, this most recent act of exposing the names and addresses of family members of law enforcement, most of whom are undoubtedly honest people trying to make the world a safer place, is not a matter of "Lulz" — it is potentially enabling extortion, kidnapping, and murder. The worst criminals, to whom money is more important than human life, are always seeking an opportunity to neutralize the police. Attacking family members of law enforcement is common in many countries, including Mexico, and this kind of exposure further enables it now in Arizona. The data breach attacks some of the very people and organizations trying to counter the worst criminal and moral abuses — and, worse, their families.
Claiming, for instance, that the "War on Drugs" created the cartels and is morally equivalent (e.g., response #13 in this) is specious. Laws passed by elected representatives in the U.S. did not cause criminals in Mexico to kidnap poor people, force them to fight to the death for the criminals' amusement, and then force the survivors to act as expendable drug mules. The moral choices by criminals are exactly that — moral choices. The choice to kidnap, rape, or kill someone who objects to your criminal behavior is a choice with clear moral dimensions. So are the choices of various hackers who expose data and deface systems.
When I was growing up, I was the chubby kid with glasses. I didn't do well in sports, and I didn't fit in with the groups that were the "cool kids." I wasn't into drinking myself into a stupor, or taking drugs, or the random vandalism that seemed to be the pastimes of those very same "cool kids." Instead, I was one of the ones who got harassed and threatened, had my homework stolen, and was laughed at. The ones who did it claimed that it was all in fun — this being long before the word "lulz" was invented. But it was clear they were being bullies, and they enjoyed being bullies. It didn't matter if anyone got hurt; it was purely for their selfish enjoyment. Most were cowards, too, because they would not do anything that might endanger them, and when they could, they did things anonymously. The only ones who thought it was funny were the other dysfunctional jerks. Does that sound familiar?
Twenty years ago, I was objecting to the press holding up virus authors as unappreciated geniuses. They were portrayed as heroes, performing experiments and striking blows against the evil computer companies then prominent in the field. Many in the public and press (and even in the computing industry) had a sort of romantic view of them — as modern, swashbuckling, electronic pirates, of the sort seen in movies. Now we can see the billions of dollars in damage wrought by those "geniuses" and their successors with Zeus and Conficker and the rest. The only difference is one of time and degree — the underlying damage and amoral disregard for others is simply visible to more people now. (And, by the way, the pirates off Somalia and in the Caribbean, some of whom simply kill their victims to steal their property, are real pirates, not the fictional, romantic versions in film.)
The next time you see a news article about some group, by whatever name, exposing data from a gaming company or law enforcement agency, think about the real evil left untouched. Think about who might actually be hurt or suffer loss. Think about the perpetrators hiding their identities, attacking the poorly defended, and bragging about how wonderful and noble and clever they are. Then ask if you are someone cheering on the bully or concerned about who is really getting hurt. And ask how others, including the press, are reporting it. All are choices with moral components. What are yours?
I have received several feedback comments to this (along with the hundreds of spam responses). Several were by people using anonymous addresses. We don't publish comments made anonymously or containing links to commercial sites. For this post, I am probably not going to pass through any rants, at least based on what I have seen. Furthermore, I don't have the time (or patience) to do a point-by-point commentary on the same things, again and again. However, I will make a few short comments on what I have received so far.
Several responses appear to be based on the assumption that I don't have the knowledge or background to back up some of my statements. I'm not going to rebut those with a list of specifics. However, people who know what I've been doing over the past few decades (or who bothered to do a little research) — including work with companies, law enforcement, community groups, and government agencies — would hardly accuse me of being an observer with only an academic perspective.
A second common rant is that the government has done some bad things, or the police have done something corrupt, or corporations are greedy, and those facts somehow justify the behavior I described. Does the fact that a bully was knocked around by someone else, and thus became a bully, make it okay if you are the victim? If so, then the fact that the U.S. and U.K. have had terrorist attacks that resulted in overly intrusive laws should make it all okay for you. After all, they had bad things happen to them, so their bad behavior is justified, correct? Wrong. That you became an abuser of others because you were harmed does not make it right. Furthermore, attacks such as the ones I discussed do nothing to fix those problems, but they do have the potential to harm innocent parties as well as give ammunition to those who would pass greater restrictions on freedom. Based on statistics (for the US), a significant number of the people whining about government excess have not voted or bothered to make their opinions known to their elected representatives. The more people remain uninvolved, the more it looks like the population doesn't care about, or approves of, those excesses, including sweetheart deals for corporations and invasions of privacy. Change is possible, but it is not going to occur by posting account details of people subscribed to Sony services, or giving out addresses and names of families of law enforcement officers, or defacing the NPR website. One deals with bullies by confronting them directly.
The third most common rant so far is to claim that it doesn't make any difference, for one reason or another: all the personal information is already out there on the net or will be soon, the government (or the government of another country) or criminals have already captured all that information, it doesn't cost anything, security is illusory, and so on. Again, this misses the point. Being a bully or vandal because you think it won't make any difference doesn't excuse the behavior. That you believe the effects of your behavior will happen anyhow is no reason to hasten those effects. If you believe otherwise, then consider: you are going to die someday, so it doesn't make a difference if you kill yourself, so you might as well do it now. Still there? Then I guess you agree that each act has implications that matter even if the end state is inevitable.
Fourth, some people claim that these attacks are a "favor" to the victims by showing them their vulnerabilities, or that the victims somehow deserved this because their defenses were weak. I addressed these claims in an article published in 2003. In short, blaming the victim is inappropriate. Yes, some may deserve some criticism for not having better defenses, but that does not justify an attack nor serve as a defense for the attackers. It is no favor either. If you are walking down a street at night and are assaulted by thugs who beat you with 2x4s and steal your money, you aren't likely to lie bleeding in the street saying to yourself "Gee, they did me a huge favor by showing I wasn't protected against a brutal assault. I guess I deserved that." Blaming the victim is done by the predators and their supporters to try to justify their behavior. And an intrusion or breach, committed without invitation or consent, is not a favor — it is a crime.
Fifth, if you support anarchy, then that is part of your moral choices. It does not invalidate what I wrote. I believe that doing things simply because they amuse you is a very selfish form of choice, and is the sort of reasoning many murderers, rapists, pedophiles and arsonists use to justify their actions. In an anarchy, they'd be able to indulge to their hearts' content. Lotsa lulz. But don't complain if I and others don't share that view.
I am going to leave it here. As I said, I'm not interested in spending the next few weeks arguing online with people who are trying to justify their behavior as bullies and vandals based on faulty assumptions.