Posts by spaf

Another year, another RSAC

I have attended 10 of the last 15 RSA conferences. I do this to see what’s new in the market, meet up with friends and colleagues I don’t get to see very often, listen to some technical talks, and enjoy a few interesting restaurants and taverns in SF. Thereafter, I usually blog about my impressions (see 2015 and 2014, for example). I think I could reuse my 2015 comments almost unchanged…

There have been some clear trends over the years:

  • The technical talks each year seem more focused on superficial approaches and issues: there seemed to be less technical content, at least in the few I observed. This goes with the rather bizarre featured talks by cast members of CSI: Cyber and Sean Penn — well-known experts on cyber. Not. (Several others told me they thought the same about the sessions.) Talks a decade ago seemed to me to be deeper.
  • This matches some of what I observed at booths. The engineers and sales reps at the booths have little deep knowledge about the field. They know the latest buzzwords and market-speak, but can’t answer some simple questions about security technologies. They don’t know people, terms, or history. More on this later.
  • There was still an evident level of cynicism among booth personnel that surprised me, though less than last year.
  • There seemed to be more companies exhibiting (both sides of Moscone were full). There also seemed to be more that weren’t there last year and are unlikely to be around next year; I estimate that as many as 20% may be one-time wonders.

This year showed some evidence of the effectiveness of new policies against “booth babes.” I talked to a number of women engineers who were more comfortable working at the booths this year. A couple indicated they could dress up a little without being mistaken for “the help.” That is a great step forward, but it needs reinforcement and consistency. At least one company came close to the edge and sparked some backlash.

As I noted above, the majority of people I talked to at vendor booths didn’t seem to have any real background in security beyond a few years of experience with the current market. This is a longer-term trend. The market has been tending more towards patching and remediation of bad software than towards strong design and a really secure posture. It is almost as if vendors have given up trying to fix root causes because few end-users are willing to make the tough (and more expensive) choices. Thus, the solutions are after-the-fact, or intended to wrap broken software rather than fix it. Employees don’t need to actually study the theory and history of security if they’re not going to use it! Of course, not everyone is in that category. There are a number of really strong experts who have extensive background in the field, but it seems to me (subjectively) that the number attending decreases every year.

Related to that, a number of senior people in the field that I normally try to meet with skipped the conference this year. Many of them told me that the conference (and lodging and…) is not worth what they get from attending.

(As a data point, the Turing Award was announced during the first day of the conference. I asked several young people, and they had no idea who Diffie and Hellman were or what they had done. They also didn’t know what the Turing Award was. Needless to say, they also had no idea who I was, which is more or less what I expect, but a change from a decade ago.)

As far as buzzwords go, this year didn’t really have one. Prior years have highlighted “the cloud,” “big data,” and “threat intelligence” (to recap a few). This year I thought there would be more focus on the Internet of Things (IoT), but there wasn’t. If anything, there seemed to be more with “endpoint protection” as the theme. Anti-virus, IDS, and firewalls were not emphasized much on the exhibit floor. Authentication of users and apps was. Phishing is a huge problem, but the solutions presented are either privacy-invasive or involve simulated phishing to (allegedly) train end users. Overall, I didn’t see much that I would consider really novel.

There was one big topic of conversation — the FBI vs. Apple encryption debate. There were panels on it. Presenters mentioned it. It was a topic of conversation at receptions, on the exhibit floor, and more. The overwhelming sentiment that I heard was on Apple’s side of the case. (Interestingly, I recently wrote an editorial in CACM on this general topic — written before the lawsuit was filed.)

Overall, I spent 4 days in SF. My schedule was fairly full, but I left this time with the sense that I hadn’t really spent all that time usefully. I did get to see some friends and former students. I got a fresh supply of T-shirts. I picked up literature for our campus CISO. And I have a few leads for companies that may be interested in donating product to CERIAS — or joining our partner consortium. If a few of those come through then I may change my mind.

If you attended the conference this year, leave a comment with your impressions.

A looming anniversary, and a special offer

It may seem odd to consider June 2016 as January approaches, but I try to think ahead. And June 2016 is a milestone anniversary of sorts. So, I will start with some history, and then an offer to get something special and make a charitable donation at the same time.

In June of 1991, the first edition of Practical Unix Security was published by O’Reilly. That means that June 2016 is the 25th anniversary of the publication of the book. How time flies!

Read the history and think of participating in the special offer to help us celebrate the 25th anniversary of something significant!

History

PUIS v1

In summer of 1990, Dan Farmer wrote the COPS scanner under my supervision. That toolset embodied a fair amount of domain expertise in Unix that I had accumulated in prior years, augmented with items that Dan found in his research. It generated a fair amount of “buzz” because it exposed issues that many people didn’t know about or understand in Unix security. With the growth of Unix deployment (BSD, AT&T, Sun Microsystems, Sequent, Pyramid, HP, DEC, et al) there were many sites adopting Unix for the first time, and therefore many people without the requisite sysadmin and security skills. I thus started getting a great deal of encouragement to write a book on the topic.
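
COPS itself was a set of shell scripts and C programs. Purely as illustration (a hypothetical sketch in modern Python, not COPS code), here is the flavor of one of the simpler checks it automated: walking a system directory and flagging files that anyone on the machine can modify.

    import os
    import stat

    def world_writable(path):
        """True if the permission bits allow writes by 'other' users."""
        try:
            mode = os.stat(path, follow_symlinks=False).st_mode
        except OSError:
            return False  # unreadable or vanished; skip it
        return bool(mode & stat.S_IWOTH)

    # World-writable files under a system directory were (and are) a
    # classic misconfiguration that opens easy paths to privilege.
    for root, _dirs, files in os.walk("/etc"):
        for name in files:
            path = os.path.join(root, name)
            if world_writable(path):
                print("world-writable:", path)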

I consulted with some peers and investigated the deals offered by various publishers, and settled on O’Reilly Books as my first contact. I was using their Nutshell handbooks and liked those books a great deal: I appreciated their approach to getting good information in the hands of readers at a reasonable price. Tim O’Reilly is now known for his progressive views on publishing and pricing, but was still a niche publisher back then.

I contacted Tim, and he directed me to Debby Russell, one of their editors. Debby was in the midst of writing her own book, Computer Security Basics. I told her what I had in mind, and she indicated that only a few days prior she had received a proposal from a well-known tech author, Simson Garfinkel, on the same topic. After a little back-and-forth, Debby introduced us by phone, and we decided we would join forces to write the book. It was a happy coincidence because we each brought something to the effort that made the whole more than the sum of its parts.

That first book was a little painful for me because it was written in FrameMaker to be more easily typeset by the publisher, and I had never used FrameMaker before. Additionally, Simson didn’t have the overhead of preparing and teaching classes, so he really pushed the schedule! I also had my first onset of repetitive stress injury to my hands — something that bothers me occasionally to this day, and has limited me over the years from writing as much as I’d like. I won’t blame the book as the cause, but it didn’t help!

The book was completed in early 1991 and included some of my early work with COPS and Tripwire, plus a section on some experiments with technology for screening networks. I needed a name for what I was doing, and taking a hint from construction work I had done when I was younger, I called it a “firewall.” To the best of our recollection, I was the one who coined that term; I had started speaking about firewalls in tutorials and conferences in at least late 1990, and the term soon became commonplace. (I also described the DMZ structure for using firewalls, although my term for that didn’t catch on.)
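
As a toy illustration only (this is nothing from the book, and far simpler than any real packet filter), the screening idea reduces to a first-match rule table with a default-deny posture:

    # Each rule is (direction, protocol, destination_port, action);
    # a port of None matches any port. Rules are checked first to last.
    RULES = [
        ("in",  "tcp", 25,   "allow"),  # permit inbound mail to the gateway
        ("in",  "tcp", None, "deny"),   # refuse all other inbound TCP
        ("out", "tcp", None, "allow"),  # let inside hosts connect outward
    ]

    def screen(direction, protocol, dest_port):
        """Return the action for a packet, denying anything unmatched."""
        for d, p, port, action in RULES:
            if d == direction and p == protocol and port in (None, dest_port):
                return action
        return "deny"  # default-deny: drop whatever no rule explicitly allows

    assert screen("in", "tcp", 25) == "allow"
    assert screen("in", "tcp", 80) == "deny"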

Anyhow… the book appeared in the summer of 1991 and became a best seller (for its kind; last I heard, over 100,000 copies have been sold in 11 languages, and at least twice that many copies pirated). Thereafter, Simson and I also worked on a book on web security (editions in 1997 and 2002), along with our various other projects.

After several years, we produced a major rewrite and update of the Unix security book to include material on internetworking (a fast-growing topic area). That second edition was published in 1996.

Simson and I had gotten so occupied with other things that we welcomed a third author for the third edition of PUIS (as we came to call it): Alan Schwartz. That edition appeared in 2003 and included many updates, plus extension of the material to cover Linux.

Simson went on to write many other books, started several companies, then went back to school to get his PhD. After a while as an academic, he is now a research scientist with NIST…and occasional tech essayist. Alan has a career as a consultant, author, and professor at the University of Illinois at Chicago. Me? I continued on up through the ranks as a POP (plain ol' professor) at Purdue University, including starting CERIAS.

Given all the various changes, and how busy our lives have become with other things, there are no current plans for a 4th edition of the book. If we were to do one, it would probably need to be split into at least two volumes, given all the changes and issues involved.

Possibly interesting bit of trivia: Simson and I did not know each other prior to late 1990, and we did not meet in person until 1992 — more than a year after the book was published! However, the experience of working together brought about an enduring friendship. I also did not meet Alan until several years after the 3rd edition went to press.

Special Offer

If you have someone (maybe yourself) who you’d like to provide with a special gift, here’s an offer of one that includes a donation to two worthwhile non-profit organizations. (This is in the spirit of my recent bow tie auction for charity.) You can make a difference as well as get something special!

Over the years, Simson, Alan, and I have often been asked to autograph copies of the book. We know there is some continuing interest in this (I was asked again just last week). Furthermore, the 25th anniversary seems like a milestone worth noting with something special. Therefore, we are making this offer.

For a contribution where everything after expenses will go to two worthwhile, non-profit organizations, you will get (at least) an autographed copy of an edition of Practical Unix & Internet Security!! Depending on the amount you include, I may throw in some extras.

The Two Non-Profits

The organizations we’ve chosen are EPIC and the ISSA Education Foundation.

The Electronic Privacy Information Center (EPIC) is a non-partisan, non-profit public interest research center established in 1994 to focus public attention on emerging privacy and civil liberties issues and to protect privacy, freedom of expression, and democratic values in the information age.

The ISSA Education Foundation is a non-profit organization associated with the international Information Systems Security Association (ISSA). It provides scholarships and educational programs on cyber security worldwide.

This offer is thus contributing to two worthwhile organizations — one supporting better security, and one supporting better privacy.

The Offer & Levels

Here are the levels of contribution:

$50
Send your copy of any edition of Practical Unix Security and I (Spaf) will autograph it, then return it to you.
$75
Send your book along with a suggested inscription, and I (Spaf) will use that along with the autograph, subject to the caveats given below.
$100
I will send you a brand new copy of the 3rd edition with my signature.
$125
Simson and I will both sign your book, and if it is the 3rd edition of PUIS, so will Alan. (If it is a first or second edition of PUIS, Alan will sign it if you ask, although he didn't help author those editions.) If you suggest an inscription, one of us will add it, subject to the caveats given below.
$175
We will provide you with a new copy of the book, signed by all 3 authors, with a suitable inscription.
$250
Same as the previous category, plus we'll toss in one of Spaf's bow ties. (If that doesn't make sense, you haven't been to one of his conference talks or distinguished lectures in the last two decades.) This offer is limited to the first 8 at this level.
$300 or more
All of the above plus something...make a suggestion. (No, we won't write your project code or hack into the Federal Reserve for you).
Each multiple of $500
We will send you five brand new copies of the latest edition of PUIS, signed by all 3 authors.

As an additional offer, Simson and Spaf will sign and return your copy of Web Security, Privacy, & Commerce for an additional $25 if you include it with one of the above.

We are making no profit on this offer! Anything above what is spent on shipping will get split evenly between the designated organizations. We are donating our time, gas (to drive to the post office), and ink to do this. We will share images of the receipts with anyone who asks for proof.

Don’t have a copy of the book already? Copies are still for sale via O’Reilly, Amazon, Barnes & Noble, and other likely sources. It should be easy to get (at least) one. As noted above, we are willing to do even that for you if you will pledge a sufficient amount for this fundraising drive.

This is an offer that will not be repeated...unless we happen to be around 25 years from now and decide to do this on the 50th anniversary!

This offer will expire at noon on Feb 1, 2016, so participate soon! If you act quickly, this could make the perfect "white elephant gift" for your annual office gift exchange, or maybe an odd stocking stuffer for that certain someone, but you must order right away! And if you keep it for a few years, you might be able to use it at an upcoming SCA event!

The Fine Print

  • You must provide a check or money order in US dollars with the book(s), made out to “Eugene H. Spafford.” Put “Holiday fundraising offer” in the memo/notes field. Any items sent with an invalid form of payment will not be returned. Alternatively, you can include checks or money orders made out directly to each of the two organizations and we will simply send them on. (The goal is to raise money for these worthwhile organizations and celebrate the anniversary of the book, not to handle a lot of funds!) That will allow you to take the tax deduction, if it is applicable to your circumstances.

  • Ship the book(s) with funds to:

    PUIS Anniversary Offer
    c/o Professor Eugene H. Spafford
    Purdue University CERIAS
    656 Oval Dr
    West Lafayette, IN 47907-2086
  • Be sure to include your return address! Even better, include a pre-made return address label acceptable to the USPS.

  • All signed books will be returned via “book rate” US mail (USPS). If you want something faster, insured, or tracked, then send a pre-paid, self-addressed FedEx or UPS envelope sufficient to hold the book in a padded envelope. You must do the same for shipment outside the United States! We will not be responsible for damage or lost items sent “book rate” in regular mail.
  • If you send a contribution for a custom inscription but don’t suggest one, we will make one up. If you suggest an inscription that we find offensive (e.g., “COBOL Rules!”) or legally problematic (e.g., “I embezzled $100 million”) then we will make up something else.

  • All contribution amounts beyond what is required for shipping expenses will be split evenly between the two organizations. No handling expenses will be charged (insert your own joke here).

And no, we don’t have books with anything like this cover; we wish we did!

PUIS joke cover

Cyber Security in Stasis

This evening, someone pointed out Congressional testimony I gave over 6 years ago. This referenced similar testimony I gave in 2001, and I prepared it using notes from lectures I gave in the early-to-mid 1990s.

What is discouraging is that if I were asked to provide testimony next week, I would only need to change a few numbers in this document and it could be used exactly as is. The problems have not changed, the solutions have not been attempted, and if anything, the lack of leadership in government is worse.

Some of us have been saying the same things for decades. I’m approaching my 3rd decade of this, and I’m a young’un in this space.

If you are interested, read the testimony from 2009 and see what you think.

Privacy, Surveillance, Freedom of Expression, and Purdue University

On September 24 and 25 of this year, Purdue University hosted the second Dawn or Doom symposium. The event — a follow-up to the similarly-named event held last year — was focused on talks, movies, presentations, and more related to advanced technology. In particular, the focus has been on technology that holds great potential to advance society, but also potential for misuse or accident that could cause great devastation.

I was asked to speak this year on the implications of surveillance capabilities. These hold the promise of better use of resources, better marketing, improved health care, and reduced crime. However, those same capabilities also threaten our privacy, decrease some potential for freedom of political action, and create an enduring record of our activities that may be misused.

My talk was videotaped and is now available for viewing. The videographers did not capture my introduction and the first few seconds of my remarks. The remaining 40 or so minutes of me talking about surveillance, privacy, and tradeoffs are there, along with a few audience questions and my answers.

If you are interested, feel free to check it out. Comments welcome, especially if I got something incorrect — I was doing this from memory, and as I get older I find my memory not to be quite as trustworthy as it used to be.

You can find video of most of the other Dawn or Doom 2 events online here. The videos of last year's Dawn or Doom event are also online. I spoke last year about some of the risks of embedding computers everywhere, and giving those systems control over safety-critical decisions without adequate safeguards. That talk, Faster Than Our Understanding, includes some of the same privacy themes as the most recent talk, along with discussion of security and safety issues.

Yes, if you saw the news reports, the Dawn or Doom 2 event is also where this incident involving Barton Gellman occurred. Please note that other than some communication with Mr. Gellman, I played absolutely no role in the taping or erasure of his talk. Those issues are outside my scope of authority and responsibility at the university, and based on past experience, almost no one here listens to my advice even if they solicit it. I had no involvement in any of this, other than as a bystander.

Purdue University issued a formal statement on this incident. Related to that statement, for the record, I don’t view Mr. Gellman’s reporting as “an act of civil disobedience.” I do not believe that activities of the media, as protected by the First Amendment of the US Constitution and by legal precedent, can be viewed as “civil disobedience” any more than can be voting, invoking the right to a jury trial, or treating people equally under the law no matter their genders or skin colors. I also share some of Mr. Gellman’s concerns about the introduction of national security restrictions into the entire academic environment, although I also support the need to keep some sensitive government information out of the public view.

That may provide the topic for my talk next year, if I am invited to speak again.

Why I Don’t Attend Cons

I recently had a couple of students (and former students, and colleagues) ask me if I was attending any of a set of upcoming cons (non-academic/organizational conferences) in the general area of cyber security. That includes everything from the more highly polished Black Hat and DefCon events, to Bsides events, DerbyCon, Circle City Con, et al. (I don’t include the annual RSA Conference in that list, however.)

Twenty-five years ago, as the field was starting up, there were some that I attended. One could argue that some of the early RAID and SANS conferences fit this category, as did some of the National Computer Security Conferences. I even helped organize some of those events, including the 2nd RAID workshop! But that was a long time ago. I don’t attend cons now, and haven’t for decades. There are two main reasons for that.

First is finances. Some of the events are quite expensive to attend — travel, housing, and registration all cost money. As an academic faculty member, and especially as one at a state university, I don’t have a business account covering things like these as expense items. Basically, I would have to pay everything out of pocket, and that isn’t something I can afford to do on a regular (or even sporadic) basis. I manage to scrape up enough to attend the main RSA conference each year, but that is it.

Yes, faculty do sometimes have some funds for conferences. When we have grants from agencies such as NSF or DARPA, they often include travel funds, but usually we target those for places where the publication of our research (and that of our students) gives the most academic credit — IEEE & ACM events, for instance. Sometimes donors will provide some gifts to the university for us to use on things not covered by grants, including travel. And some faculty have made money by spinning off companies and patenting their inventions, so they can use that.

None of that describes my situation. Over the last 20 years I have devoted most of my fund-raising (and spending) efforts to the COAST lab and then CERIAS. When I have had funding for conferences, I have usually spent it on my students first, to allow them to get the professional exposure. There is seldom money left over for me to attend anything. I show up at a few events because I’m invited to speak and the hosts cover the expenses. The few things I’ve invented I’ve mostly put out in the public domain. I suppose it would be great if some donor provided a pot of money to the university for me to use, but I’ve gotten in the habit of spending what I have on junior colleagues and students, so I’m not sure what I’d do with it!

There is also the issue of time. I have finite time (and it seems more compressed as I get older) and there are only so many trips I have time (and energy) to make, even if I could afford more. Several times over the last few years I’ve hit that limit, as I’ve traveled for CERIAS, for ACM, and for some form of advising, back to back to back.

Second, I’m not sure I’d learn much that is useful at most cons. I’ve been working (research, teaching, advising) in security and privacy for 30 years. I think I have a pretty good handle on the fundamentals, and many of the nuances. Most cons present either introductions for newbies, or demonstrations of hacks into existing systems. I don’t need the intros, and the hacks are not at all surprising. There is some great applications engineering work being done by the people involved, but unlike some people, I don’t need to see an explicit demonstration to understand the weaknesses in supply chains, poor authentication, lack of separation, no root of trust, and all the other problems that underlie those hacks. I eventually hear about the presentations after the fact when they get into the news; I can’t recall hearing about any that really surprised me for quite some time now.

I wish leaders in government and business didn’t need to be continually bashed with demonstrations to begin to get the same points about good security, but I’ve been trying to explain these issues for nearly my whole career, and they simply don’t seem to listen after “This will cost more than you are currently spending.” If anything, attending con events simply points out that the message I’ve been trying to convey for so long has not been heard; rather than instructive, cons might well be rather depressing for me.

There’s obviously also a social element to these events — including the more academic and professional conferences — that I am clearly missing out on. I do have a little regret over that. I don’t get to meet some of the young up-and-coming people in the field, on either the research or applied ends of things. I also don’t get to see some of the people I already know as often as I wish I did. However, that gets back to cost and time. And I don’t think too many people have noticed the difference or bemoaned a loss because I wasn’t there, especially as I have gotten older. The current crop of practitioners are all excited by learning the most recent variation on a theme — someone who points out that it is all material we could have predicted (and prevented) isn’t going to fit in. Frankly, I was surprised to hear there was any interest in Jack Daniel’s “Shoulders of Infosec” project by some of the con crowd!

So, do I hate cons? No, not at all! If colleagues or students find them of value and they have the time and resources to attend, then they should go…but they aren’t likely to see me attending.

Proposed Changes in Export Control

The U.S. limits the export of certain high-tech items that might be used inappropriately (from the government’s point of view). This is intended to prevent (or slow) the spread of technologies that could be used in weapons, used in hostile intelligence operations, or used against a population in violation of their rights. Some are obvious, such as nuclear weapons technology and armor piercing shells. Others are clear after some thought, such as missile guidance software and hardware, and stealth coatings. Some are not immediately clear at all, and may have some benign civilian uses too, such as supercomputers, some lasers, and certain kinds of metal alloys.

Recently, there have been some proposed changes to the export regulations for some computing-related items. In what follows, I will provide my best understanding of both the regulations and the proposed changes. This was produced with the help of one of the professional staff at Purdue who works in this area, and also a few people in USACM who provided comments (I haven’t gotten permission to use their names, so they’re anonymous for now). I am not an expert in this area so please do not use this to make important decisions about what is covered or what you can send outside the country! If you see something in what follows that is in error, please let me know so I can correct it. If you think you might have an export issue under this, consult with an appropriate subject matter expert.

Export Control

Some export restrictions are determined, in a general way, as part of treaties (e.g., nuclear non-proliferation). A large number come from the Wassenaar Arrangement — a multinational effort by 41 countries generally considered to be allies of the US, including most of NATO; a few major countries such as China are not members, nor are nominal allies such as Pakistan and Saudi Arabia (to name a few). The Wassenaar group meets regularly to review technology and determine restrictions, and it is up to the member states to pass rules or legislation for themselves. The intent is to help promote international stability and safety, although countries not within Wassenaar might not view it that way.

In the U.S., there are two primary regulatory regimes for exports: ITAR and EAR. ITAR is the International Traffic in Arms Regulations in the Directorate of Defense Trade Controls at the Department of State. ITAR provides restrictions on sale and export of items of primary (or sole) use in military and intelligence operations. The EAR is the Export Administration Regulations in the Bureau of Industry and Security at the Department of Commerce. EAR rules generally cover items that have “dual use” — both military and civilian uses.

These are extremely large, dense, and difficult to understand sets of rules. I had one friend label these as “clear as mud.” After going through them for many hours, I am convinced that mud is clearer!

Items designed explicitly for civilian applications without consideration to military use, or with known dual-use characteristics, are not subject to the ITAR because dual-use and commodity items are explicitly exempted from ITAR rules (see sections 121.1(d) and 120.41(b) of the ITAR). However, being exempt from ITAR does not make an item exempt from the EAR!

If any entity in the US — company, university, or individual — wishes to export an item that is covered under one of these two regimes, that entity must obtain an export license from the appropriate office. The license will specify what can be exported, to what countries, and when. Any export of a controlled item without a license is a violation of Federal law, with potentially severe consequences. What constitutes an export is broader than some people may realize, including:

  • Shipping something outside the U.S. as a sale or gift is an export, even if only a single instance is sent.
  • Sending something outside the U.S. under license, knowing (or suspecting) it will then be sent to a 3rd location is a violation.
  • Providing a controlled item to a foreign-controlled company or organization even if in the U.S. may be an export.
  • Providing keys or passwords that would allow transfer of controlled information or materials to a foreign national is an export.
  • Designing or including a controlled item in something that is not controlled, or something that separately has a license, and then exporting that may be a violation.
  • Giving a non-U.S. person (someone not a citizen or permanent resident) access to an item to examine or use may be an export.
  • Providing software, drawings, pictures, or data on the Internet, on a DVD, on a USB stick, etc., to a non-U.S. person may be an export.

Those last two items are what is known as a “deemed export,” because the item didn’t leave the U.S. but information about it is given to a non-US person. There are many other special cases of export, and nuances (giving control of a spacecraft to a foreign national is prohibited, for example, as are certain forms of reexport). This is all separate from disclosure of classified materials, although if you really want trouble, you can do both at the same time!

This whole export thing may seem a bit extreme or silly, especially when you look at the items involved, but it isn’t — economic and military espionage to get this kind of material and information is a big problem, even at research labs and universities. Countries that don’t have the latest and greatest factories, labs, and know-how are at a disadvantage both militarily and economically. For instance, a country (e.g., Iran) that doesn’t have advanced metallurgy and machining may not be able to make specialized items (e.g., the centrifuges to separate fissionable uranium), so they will attempt to steal or smuggle the technology they need. The next best approach is to get whatever knowledge is needed to recreate the expertise locally. You only need to look at the news over a period of a few months to see many stories of economic theft and espionage, as well as state-sponsored incidents.

This brings us to the computing side of things. High speed computers, advanced software packages, cryptography, and other items all have benign commercial uses. However, they all also have military and intelligence uses. High speed computers can be used in weapons guidance and design systems, advanced software packages can be used to model and refine nuclear weapons and stealth vehicles, and cryptography can be used to hide communications and data. As such, there are EAR restrictions on many of these items. However, because the technology is so ubiquitous and the economic benefit to the U.S. is so significant, the restrictions have been fairly reasonable to date for most items.

Exemptions

Software is a particularly unusual item to regulate. The norm in the community (for much of the world) is to share algorithms and software. By its nature, huge amounts of software can be copied onto small artifacts and taken across the border, or published on an Internet archive. In universities we regularly give students from around the world access to advanced software, and we teach software engineering and cryptography in our classes. Restrictions on these kinds of items would be difficult to enforce and, in some cases, simply silly.

Thus, the BIS export rules contain a number of exemptions that remove some items from control entirely. (In the following, designations such as 5D002 refer to classes of items as specified in the EAR, and 734.8 refers to section 734 paragraph 8.)

  • EAR 734.3(b.3) exempts technology except software classified under 5D002 (primarily cryptography) if it is
    • arising from fundamental research (described in 734.8), or
    • is already published or will be published (described in 734.7), or
    • is educational information (described in 734.9).
    Exempt from 5D002 is any printed source code, including encryption code, or object code whose corresponding source code is otherwise exempt. See also my note below about 740.13(e.3).
  • EAR 734.7 defines publication as appearing in books, journals, or any media that is available for free or at a price not to exceed the cost of distribution; or freely available in libraries; or released at an open conference, meeting, or seminar open to the qualified public; or otherwise made available.
  • EAR 734.8 defines fundamental research as research whose results are ordinarily published in the course of that research.
  • EAR 734.9 defines educational information (which is excluded from the EAR) as information that is released by instruction in catalog courses and associated teaching laboratories of academic institutions. This provision applies to all information, software or technology except certain encryption software, but if the source code of encryption software is publicly available as described in 740.13(e), it can also be considered educational information.
  • We still have some deemed export issues if we are doing research that doesn’t meet the definition of fundamental research (e.g., because it requires signing an NDA or there is a publication restriction in the agreement) and a researcher or recipient involved is not a US person (citizen or permanent resident) employed full time by the university, and with a permanent residence in the US, and is not a national of a D:5 country (Afghanistan, Belarus, Burma, the CAR, China (PRC), the DRC, Côte d’Ivoire, Cuba, Cyprus, Eritrea, Fiji (!), Haiti, Iran, Iraq, DPRK, Lebanon, Liberia, Libya, Somalia, Sri Lanka, Sudan, Syria, Venezuela, Vietnam, Zimbabwe). However, that is easily flagged by contracts officers and should be the norm at most universities or large institutions with a contracts office.
  • EAR 740.13(d) exempts certain mass-market software that is sold retail and installed by end-users so long as it does not contain cryptography with keys longer than 64 bits.

The exemption for publication is interesting. Anyone doing research on controlled items appears to have an exemption under EAR 740.13(e) where they can publish (including posting on the Internet) the source code from research that falls under ECCN 5D002 (typically, cryptography) without restriction, but must notify BIS and NSA of digital publication (email is fine, see 740.13(e.3)); there is no restriction or notification requirement for non-digital print. What is not included is any publication or export (including deemed export) of cryptographic devices or object code not otherwise exempt (object code whose corresponding source code is exempt is itself exempt), or for knowing export to one of the prohibited countries (E:1 from supplement 1 of section 740 — Cuba, Iran, DPRK, Sudan and Syria, although Cuba may have just been removed.)

As part of an effort to harmonize the EAR and ITAR, a proposed revision to both was published on June 3 (80 FR 31505); it has a nice side-by-side chart of some of these exemptions, along with some small suggested changes.

Changes

The Wassenaar group agreed to some changes in December 2013 to include intrusion software and network monitoring items of certain kinds on their export control lists. The E.U. adopted new rules in support of this in October of 2014. On May 20, 2015, the Department of Commerce published — in the Federal Register (80 FR 28853) — a request for comments on its proposed rule to amend the EAR. Specifically, the notice stated:

The Bureau of Industry and Security (BIS) proposes to implement the agreements by the Wassenaar Arrangement (WA) at the Plenary meeting in December 2013 with regard to systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion software; software specially designed or modified for the development or production of such systems, equipment or components; software specially designed for the generation, operation or delivery of, or communication with, intrusion software; technology required for the development of intrusion software; Internet Protocol (IP) network communications surveillance systems or equipment and test, inspection, production equipment, specially designed components therefor, and development and production software and technology therefor. BIS proposes a license requirement for the export, reexport, or transfer (in-country) of these cybersecurity items to all destinations, except Canada. Although these cybersecurity capabilities were not previously designated for export control, many of these items have been controlled for their "information security" functionality, including encryption and cryptanalysis. This rule thus continues applicable Encryption Items (EI) registration and review requirements, while setting forth proposed license review policies and special submission requirements to address the new cybersecurity controls, including submission of a letter of explanation with regard to the technical capabilities of the cybersecurity items. BIS also proposes to add the definition of "intrusion software" to the definition section of the EAR pursuant to the WA 2013 agreements. The comments are due Monday, July 20, 2015.

The actual modifications are considerably more involved than the above paragraph, and you should read the Federal Register notice to see the details.

This proposed change has caused some concern in the computing community, perhaps because the EAR and ITAR are so difficult to understand, and because of the recent pronouncements by the FBI seeking to mandate “back doors” into communications and computing.

The genesis of the proposed changes is stated to match the Wassenaar additions of (basically) methods of building, controlling, and inserting intrusion software; technologies related to the production of intrusion software; and technology for IP network analysis and surveillance, or for the development and testing of same. These are changes to support national security, regional stability, and counterterrorism.

According to the notice, intrusion software includes items that are intended to avoid detection or defeat countermeasures such as address randomization and sandboxing, and exfiltrate data or change execution paths to provide for execution of externally provided instructions. Debuggers, hypervisors, reverse engineering, and other software tools are exempted. Software and technology designed or specially modified for the development, generation, operation, delivery, or communication with intrusion software is controlled — not the intrusion software itself. It is explicitly stated that rootkits and zero-day exploits will presumptively be denied licenses for export.

The proposed changes for networking equipment/systems would require that it have all 5 of the following characteristics to become a controlled item:

  1. Operates on a carrier-class IP network (e.g., a national-grade backbone)
  2. Performs analysis at OSI layer 7
  3. Extracts metadata and content, and indexes what it extracts
  4. Executes searches based on hard selectors (e.g., name, address)
  5. Performs mapping of relational networks among people or groups

Equipment specially designed for QoS, QoE, or marketing is exempt from this classification.
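
To make the conjunctive nature of that test concrete, here is a toy sketch (the field names are invented for illustration, not regulatory language): an item is controlled only if every one of the five characteristics holds, and the carve-out above overrides them all.

    def is_controlled(item):
        """Toy model of the proposed five-part test; field names are invented."""
        if item.get("specially_designed_for_qos_qoe_marketing"):
            return False  # the stated exemption overrides the five-part test
        return all([
            item.get("carrier_class_ip_network"),      # 1. national-grade backbone
            item.get("osi_layer_7_analysis"),          # 2. application-layer analysis
            item.get("extracts_and_indexes_content"),  # 3. extraction plus indexing
            item.get("hard_selector_search"),          # 4. searches on name, address, etc.
            item.get("maps_relational_networks"),      # 5. relational mapping of people
        ])

    # Missing even one characteristic keeps an item out of this category.
    assert not is_controlled({"carrier_class_ip_network": True,
                              "osi_layer_7_analysis": True})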

Two other proposed changes would remove the 740.13(d) exemption for mass-market products, and would cause software that is controlled by one of these new sections and that contains encryption to be dual-listed in two categories. There are other changes for wording, cleaning up typos, and so on.

I don’t believe there are corresponding changes to ITAR because these all naturally fall under the EAR.

Discussion

Although social media has had a number of people posting vitriol and warnings of the impending Apocalypse, I can’t see it in this change. If anything, this could be a good thing — people who are distributing tools to build viruses, botnets, rootkits and the like may now be prosecuted. The firms selling network monitoring equipment that is being used to oppress political and religious protesters in various countries may now be restrained. The changes won’t harm legitimate research and teaching, because the exemptions I listed above will still apply in those cases. There are no new restrictions on defensive tools. There are no new restrictions on cryptography.

Companies and individuals making software or hardware that will fall under these new rules will now have to go through the process of applying for export licenses. Those entities may also find their markets reduced. I suspect that it is a small population that will be subject to such a restriction, and for some of them, given their histories, I’m not at all bothered by the idea.

I have seen some analyses that claim that software to jailbreak cellphones might now be controlled. However, if published online without fee (as is often the case), it would be exempt under 734.7. It arguably is a debugging tool, which is also exempt.

I have also seen claims that any technology for patching would fall under these new rules. Legitimate patching doesn’t seek to avoid detection or defeat countermeasures, which are specifically defined as “techniques designed to ensure the safe execution of code.” Thus, legitimate patching won’t fall within the scope of control.

Jennifer Granick wrote a nice post about the changes. She rhetorically asked at the end whether data loss prevention tools would fall under this new rule. I don’t see how — those tools don’t operate on national grade backbones or index the data they extract. She also posed a question about whether this might hinder research into security vulnerabilities. Given that fundamental research is still exempt under 734.8 as are published results under 734.7, I don’t see the same worry.

The EFF also posted about the proposed rule changes, with some strong statements against them. Again, the concern they stated is about researchers and the tools they use. As I read the EAR and the proposed rule, this is not an issue if the source code for any tools that are exported is published, as per 734.7. The only concern would be if the tools were exported and the source code was not publicly available, i.e., private tools exported. I have no idea how often this happens; in my experience, either the tools are published or else they aren’t shared at all, and neither case runs afoul of the rule. The EFF post also tosses up fuzzing, vulnerability reporting, and jailbreaking as possible problems. Fuzzing tools might possibly be a problem under a strict interpretation of the rules, but the research and publication exemptions would seem to come to the rescue. Jailbreaking I addressed above. I don’t see how reporting vulnerabilities would be export of technology or software for building or controlling intrusion software, so maybe I don’t understand the point.

At first I was concerned about how this rule might affect research at the university, or the work at companies I know about. As I have gotten further into it, I am less and less worried. It seems that there are very reasonable exceptions in place, and I have yet to see a good example of something we might legitimately want to do that would now be prohibited under these rules.

However, your own reading of the proposed rule changes may differ from mine. If so, note the difference in a comment on this essay and I’ll either respond privately or post your comment. Of course, commenting here won’t affect the rule! If you want to do that, you should use the formal comment mechanism listed in the Federal Register notice, on or before July 20, 2015.

Update July 17: The BIS has an (evolving) FAQ on the changes posted online. It makes clear the exemptions I described above. The regulations only cover tools specially designed to design, install, or communicate with intrusion software as they define it. Sharing of vulnerabilities and proof-of-concept exploits is not regulated. Disclosing vulnerabilities is not regulated so long as the sharing does not include tools or technologies to install or operate the exploits.

As per the two blog posts I cite above:

  • research into security vulnerabilities is explicitly exempt so long as it is simply the research
  • export of vulnerability toolkits and intrusion software would be regulated if those tools are not public domain
  • fuzzing is explicitly listed as exempt because it is not specifically for building intrusion software
  • jailbreaking is exempt, as are publicly available tools for jailbreaking; tools to make jailbreaks would likely be regulated.

Look at the FAQ for more detail.

Déjà Vu All Over Again: The Attack on Encryption

Preface

by Spaf
Chair, ACM US Public Policy Council (USACM)

About 20 years ago, there was a heated debate in the US about giving the government guaranteed access to encrypted content via mandatory key escrow. The FBI and other government officials predicted all sorts of gloom and doom if it didn’t happen, including that it would prevent them from fighting crime, especially terrorists, child pornographers, and drug dealers. Various attempts were made to legislate access, including forced key escrow encryption (the “Clipper Chip”). Those efforts didn’t come to pass because eventually enough sensible — and technically literate — people spoke up. Additionally, economic realities made it clear that people weren’t knowingly going to buy equipment with government backdoors built in.

Fast forward to today. In the intervening two decades, the forces of darkness did not overtake us as a result of no restrictions on encryption. Yes, there were some terrorist incidents, but either there was no encryption involved that made any difference (e.g., the Boston Marathon bombing), or there was plenty of other evidence but it was never used to prevent anything (e.g., the 9/11 tragedy). Drug dealers have not taken over the country (unless you consider Starbucks coffee a narcotic). Authorities are still catching and prosecuting criminals, including pedophiles and spies. Notably, even people who are using encryption in furtherance of criminal enterprises, such as Ross “Dread Pirate Roberts” Ulbricht, are being arrested and convicted. In all these years, the FBI has yet to point to anything significant where the use of encryption frustrated their investigations. The doomsayers of the mid-1990s were quite clearly wrong.

However, now in 2015 we again have government officials raising a hue and cry that civilization will be overrun, and law enforcement rendered powerless, unless we pass laws mandating that back doors and known weaknesses be put into encryption on everything from cell phones to email. These arguments have a strong flavor of déjà vu for those of us who were part of the discussion in the 90s. They are even more troubling now, given the scope of government eavesdropping, espionage, and massive data thefts: arguably, encryption is more needed now than it was 20 years ago.

USACM, the Public Policy Council of the ACM, is currently discussing this issue — again. As a group, we made statements against the proposals 20 years ago. (See, for instance, the USACM and IEEE joint letter to Senator McCain in 1997). The arguments in favor of weakening encryption are as specious now as they were 20 years ago; here are a few reasons why:

  • Weakening encryption to catch a small number of “bad guys” puts a much larger community of law-abiding citizens and companies at risk. Strong encryption is needed to help protect data at rest and in transit against criminal interception.
  • A “golden key” or weakened cryptography is likely to be discovered by others. There is a strong community of people working in security — both legitimately and for criminal enterprises — and access to the “key” or methods to exploit the weaknesses will be actively sought. Once found, untold millions of systems will be defenseless — some, permanently.
  • There is no guarantee that the access methods won’t be leaked, even if they are closely held. There are numerous cases of blackmail and bribery of officials leading to leaked information. Those aren’t the only motives, either. Consider Robert Hanssen, Edward Snowden, and Chelsea (Bradley) Manning: three individuals with top security clearances who stole/leaked extremely sensitive and classified information. Those are only the ones publicly identified so far. Human nature and history instruct us that they won’t be the last.
  • As recently disclosed incidents — including data exfiltration from the State Department, IRS, and OPM — have shown, the government isn’t very good at protecting sensitive information. Keys will be high-value targets. How long before the government agencies (and agents) holding them are hacked?
  • Revelations of government surveillance in excess of legal authority, past and recent, suggest that any backdoor capability in the hands of the government may possibly (likely?) be misused. Strong encryption is a form of self-protection.
  • Consumers in other countries aren’t going to want to buy hardware/software that has backdoors built in for the US government. US companies will be at a huge disadvantage in selling into the international marketplace. Alternatively, other governments will demand the same keys/access, ostensibly for their own law enforcement purposes. Companies will need to accede to these requests, thus broadening the scope of potential disclosure, as well as making US data more accessible to espionage by those countries.
  • Cryptography is not a dark art. There are many cryptography systems available online. Criminals and terrorists will simply layer encryption by using other, stronger systems in addition to the mandated, weakened cryptography (a sketch of such layering follows this list). Mandating backdoors will mostly endanger only the law-abiding.
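
To make that last point concrete, here is a minimal sketch of such layering, using the Fernet construction from the Python cryptography package purely as a stand-in for whatever a mandate might cover: an authority holding only the escrowed outer key recovers nothing but more ciphertext.

    from cryptography.fernet import Fernet  # pip install cryptography

    inner_key = Fernet.generate_key()     # held only by the communicating parties
    escrowed_key = Fernet.generate_key()  # stand-in for a mandated, escrowed key

    plaintext = b"meet at the usual place"

    # "Super encryption": encrypt with the private inner key first,
    # then wrap the result in the mandated (escrowed) layer.
    wrapped = Fernet(escrowed_key).encrypt(Fernet(inner_key).encrypt(plaintext))

    # Stripping the escrowed outer layer yields only more ciphertext...
    outer_removed = Fernet(escrowed_key).decrypt(wrapped)
    assert outer_removed != plaintext

    # ...and only the inner key recovers the message.
    assert Fernet(inner_key).decrypt(outer_removed) == plaintext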

There are other reasons, too, including cost, impact on innovation, and more. The essay below provides more rationale. Experts and organizations in the field have recently weighed in on this issue, and (as one of the individuals, and as chair of one of the organizations) I expect we will continue to do so.

With all that as a backdrop, I was reminded of an essay on this topic area by one of USACM’s leaders. It was originally given as a conference address two decades ago, then published in several places, including on the EPIC webpage of information about the 1990s anti-escrow battle. The essay is notable both because it was written by someone with experience in Federal criminal prosecution, and because it is still applicable, almost without change, in today’s debate. Perhaps in 20 more years this will be reprinted yet again, as once more memories dim of the arguments made against government-mandated surveillance capabilities. It is worth reading, and remembering.

The Law Enforcement Argument for Mandatory Key Escrow Encryption: The “Dank” Case Revisited

by Andrew Grosso, Esq.
Chair, USACM Committee on Law

(This article is a revised version of a talk given by the author at the 1996 RSA Data Security Conference, held in San Francisco, California. Mr. Grosso is a former federal prosecutor who now has his own law practice in Washington, D.C. His e-mail address is agrosso@acm.org.)

I would like to start by telling a war story. Some years ago, while I was an Assistant U.S. Attorney, I was asked to try a case which had been indicted by one of my colleagues. For reasons which will become clear, I refer to this case as “the Dank case.”

The defendant was charged with carrying a shotgun. This might not seem so serious, but the defendant had a prior record. In fact, he had six prior convictions, three of which were considered violent felonies. Because of that, this defendant was facing a mandatory fifteen years imprisonment, without parole. Clearly, he needed an explanation for why he was found in a park at night carrying a shotgun. He came up with one.

The defendant claimed that another person, called “Dank,” forced him to carry the gun. “Dank,” it seems, came up to him in the park, put the shotgun in his hands, and then pulled out a handgun and put the handgun to the defendant’s head. “Dank” then forced the defendant to walk from one end of the park to the other, carrying this shotgun. When the police showed up, “Dank” ran away, leaving the defendant holding the bag, or, in this case, the shotgun.

The jurors chose not to believe the defendant’s story, although they spent more time considering it than I would like to admit. After the trial, the defendant’s story became known in my office as “the Dank defense.” As for myself, I referred to it as “the devil made me do it.”

I tell you this story because it reminds me of the federal government’s efforts to justify domestic control of encryption. Instead of “Dank,” it has become “drug dealers made me do it;” or “terrorists made me do it;” or “crypto anarchists made me do it.” There is as much of a rational basis behind these claims as there was behind my defendant’s story of “Dank.” Let us examine some of the arguments the government has advanced.

It is said that wiretapping is indispensable to law enforcement. This is not the case. Many complex and difficult criminal investigations have been successfully concluded, and successfully argued to a jury, where no audio tapes existed of the defendants incriminating themselves. Of those significant cases cited by the government where audio tapes have proved invaluable, such as the John Gotti trial, the tapes were made through means of electronic surveillance other than wiretapping, for example, through the use of consensual monitoring or room bugs. The unfettered use of domestic encryption could have no effect on such surveillance.

It is also said that wiretapping is necessary to prevent crimes. This, also, is not the case. In order to obtain a court order for a wire tap, the government must first possess probable cause that a crime is being planned or is in progress. If the government has such probable cause concerning a crime yet in the planning stages, and has sufficient detail about the plan to tap an individual’s telephone, then the government almost always has enough probable cause to prevent the crime from being committed. The advantage which the government gains by use of a wiretap is the chance to obtain additional evidence which can later be used to convict the conspirators or perpetrators. Although such convictions are desirable, they must not be confused with the ability to prevent the crime.

The value of mandating key escrow encryption is further eroded by the availability of super encryption, that is, using an additional layer of encryption whose key is not available to the government. True, the government’s mandate would make such additional encryption illegal; however, the deterrent effect of such legislation is dubious at best. An individual planning a terrorist act, or engaging in significant drug importation, will be little deterred by prohibitions on the means for encoding his telephone conversations. The result is that significant crimes will not be affected or discouraged.

In a similar vein, the most recent estimate of the national cost of implementing the Digital Telephony law, which requires that commercial telecommunications companies make our nation's communications network wiretap-ready for the government's benefit, is approximately three billion dollars. Three billion dollars will buy an enormous number of police man-hours, officer training, and crime-fighting equipment. It is difficult to see how that amount of money, spent on wiretapping the nation, best serves law enforcement's needs.
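For a rough sense of scale (the hourly figure here is purely my own assumption, not one drawn from any official estimate), the arithmetic is simple:

```python
# Back-of-the-envelope scale check. The $50/hour fully loaded cost of an
# officer-hour is an illustrative assumption, not an official figure.
digital_telephony_cost = 3_000_000_000  # estimated national cost, USD
cost_per_officer_hour = 50              # assumed fully loaded cost, USD/hour

officer_hours = digital_telephony_cost / cost_per_officer_hour
print(f"{officer_hours:,.0f} officer-hours")  # 60,000,000 officer-hours
```

At roughly 2,000 working hours per year, that is on the order of 30,000 officer-years of conventional policing forgone.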

Finally, the extent of the federal government's ability to legislate in this area is limited. Legislation for the domestic control of encryption must be based upon the commerce clause of the U.S. Constitution. That clause would not prohibit an individual in, say, the state of California from purchasing an encryption package manufactured in California, and using that package to encode data on the hard drive of his computer, also located in California. It is highly questionable whether the commerce clause would prohibit the in-state use of an encryption package which had been obtained from out of state, where all the encryption is done in-state and the encrypted data is maintained in-state. Such being the case, the value of domestic control of encryption to law enforcement is doubtful.

Now let us turn to the disadvantages of domestic control of encryption. Intentionally or not, such control would shift the balance that exists between the individual and the state. The individual would no longer be free to conduct his personal life, or his business, free from the risk that the government may be watching every move. More to the point, the individual would be told that he is no longer allowed even to try to conduct his life in such a manner. Under our constitution, it has never been the case that the state has a right to the evidence in a criminal investigation. Rather, under our constitution, the state has the right to pursue such evidence. The distinction is crucial: it is the difference between the operation of a free society and the operation of a totalitarian state.

Our constitution is based upon the concept of ordered liberty. That is, there is a balance between law and order, on the one hand, and the liberty of the individual on the other. This is clearly seen in our country's bill of rights and in the constitutional protections afforded our accused: evidence improperly obtained is suppressed; involuntary custodial interrogation, including torture and any questioning of the accused without a lawyer, is banned; we require unanimous verdicts for convictions; and double jeopardy and bills of attainder are prohibited. In other words, our system of government expressly tolerates a certain level of crime and disorder in order to preserve liberty and individuality. It is difficult to conceive that the same constitution which is prepared to let a guilty man go free, rather than admit an illegally seized murder weapon into evidence at trial, can be interpreted to permit wholesale, nationwide, mandatory surveillance of our nation's telecommunications system for law enforcement purposes. It is impossible that the philosophy upon which our system of government was founded could ever be construed to accept such a regime.

I began this talk with a war story, and I would like to end it with another. While a law student, I had the opportunity to study in London for a year. While there, I took one week and spent it touring the old Soviet Union. The official Soviet tour guide I was assigned was an intelligent woman. As a former Olympic athlete, she had been permitted in the 1960s to travel to England to compete in international tennis matches. At one point in my tour, she asked me why I was studying in London. I told her that I wanted to learn what it was like to live outside of my own country, so I chose to study in a country where I would have little trouble with the language. I noticed a strange expression on her face as I said this. It was not until my tour was over, and I looked back on that conversation, that I realized why my answer had produced that strange look. What I had said to her was that *I* had chosen to go overseas to study; further, I had said that *I* had chosen *where* to go. That I could make such decisions was a right which she and her fellow citizens did not have. Yes, she had visited England, but it was because her government chose her to go, and it was her government which decided where she should go. In her country, at that time, her people had order, but they had no liberty.

In our country, the domestic control of encryption represents a shift in the balance of our liberties. It is a shift not envisioned by our constitution. If ever to be taken, it must be based upon a better defense than what “Dank,” or law enforcement, can provide.

What you can do

Do you care about this issue? If so, consider contacting your elected legislators to tell them what you think, pro or con. Use this handy site to find out how to contact your Representative and Senators.

Interested in being involved with USACM? If so, visit this page. Note that you first need to be a member of ACM but that gets you all sorts of other benefits, too. We are concerned with issues of computing security, privacy, accessibility, digital governance, intellectual property, computing law, and e-voting. Check out our brochure for more information.


† — This blog post is not an official statement of USACM. However, USACM did issue the letter in 1997 and signed the joint letter earlier this year, as cited, so those two documents are official.

Teaching Information Security

Let me recommend an article in ACM Inroads, June 2015, vol 6(2), pp. 64-69. The piece is entitled PLUS ÇA CHANGE, PLUS C'EST LA MÊME CHOSE ("the more things change, the more they stay the same"), and the author is the redoubtable Corey Schou.

Corey has been working in information security education as long as (and maybe longer than) anyone else in the field. What's more, he has been involved in numerous efforts to help define the field and make it more professional.

His essay distills a lot of his thinking about information security (and its name), its content, certification, alternatives, and the history of educational efforts in the area.

If you work in the field in any way (as a teacher, practitioner, policy-maker, or simply a hobbyist), there is probably something in the piece for you.

(And yes, there are several indirect references to me in the piece. Two are clearer than the others — can you spot them? I definitely endorse Corey's conclusions, so perhaps that is why I'm there. grin)

—spaf

Short, Random Thought on Testing

In the late 1980s, around the time the Airbus A340 was introduced (1991), those of us working in software engineering/safety used to exchange a (probably apocryphal) story about how the fly-by-wire avionics software on major commercial airliners was tested.

According to the story, Airbus engineers employed the latest and greatest formal methods, and provided model checking and formal proofs of all of their avionics code. Meanwhile, according to the story, Boeing performed extensive design review and testing, and made all their software engineers fly on the first test flights. The general upshot of the story was that most of us (it seemed) felt more comfortable flying on Boeing aircraft. (It would be interesting to see if that would still be the majority opinion in the software engineering community.)

Today, in a workshop, I was reminded of this story. I realized how poor a security choice that second approach would be, even if it might be a reasonable software test. All it would take is one engineer (or test pilot) willing to sacrifice himself/herself, or a well-concealed attack, or someone near the test field with an air-to-ground missile, and it would be possible to destroy the entire pool of engineers in one fell swoop… as well as the prototype, and possibly (eventually) the company.

Related to recent events, I would also suggest that pen-testing at the wrong time, with insufficient overall knowledge (or with evil intent), could lead to consequences with similar characteristics. Testing on live systems needs to be carefully considered when catastrophic failures are possible.

No grand conclusions here, other than a suggestion to think about how testing interacts with security. The threat to the design organization needs to be part of the landscape — not simply testing the deployed product to protect the end-users.

Two Items of interest

Here are a couple of items of possible interest to some of you.

First, a group of companies, organizations, and notable individuals signed on to a letter to President Obama urging that the government not mandate “back doors” in computing products. I was one of the signatories. You can find a news account about the letter here and you can read the letter itself here. I suggest you read the letter to see the list of signers and the position we are taking.

Second, I've blogged before about The Florentine Deception, the new book by Carey Nachenberg, a senior malware expert who is one of the co-authors of Norton Security. This is an entertaining mystery with some interesting characters and an intricate plot that ultimately involves a very real cyber security threat. It isn't quite in the realm of an Agatha Christie or a Charles Stross, but everyone I know who has read it (me included!) has found it an engrossing read.

So, why am I mentioning Carey's book again? Primarily because Carey is donating all proceeds from sales of the book to a set of worthy charities. Also, it presents a really interesting cyber security issue in an entertaining manner. Plus, I wrote the introduction to the book, explaining a curious "premonition" of its plot device. What device? What premonition? You'll need to buy the book (and thus help contribute to the charities), read it (and be entertained), and then get the answer!

You can see more about the book and order a copy at the website for The Florentine Deception.