Disloyal software surrounds us: software running on devices or computers you own that serves interests other than yours. Examples include DVD player firmware that forces you to watch the silly FBI warning or prevents you from skipping “splashes” or previews, and pop-up and pop-under advertising windows in web browsers. When people discuss malware or categories of software, disloyal software usually receives little consideration (I found this interesting discussion of Trusted Computing). Some of it is perfectly legal; some protects legal rights. At the other extreme, rootkits can subvert entire computers against their owners. The question is, when can you trust possibly disloyal software, and when does it become malware, such as the Sony CD copy-prevention rootkit?
Who’s in Control
Loyalty is a question of perspective: ownership versus control. An employer providing laptops and computers to employees doesn’t want them to install things that could be liabilities or compromise the computer; the employee is using software that is restrictive, but justifiably so. From the perspective of someone privately owning a computer, a lesser likelihood of disloyalty is an advantage of free software (as in the FSF free software definition). The developers won’t benefit from implementing restrictions or developing software that works against the interests of the user, and if one does, someone somewhere will likely remove that restriction for the benefit of all. Of course, this doesn’t address the possibility of cleverly hidden capabilities (such as backdoors) or compromised source code repositories.
This leads to questions of control over many other devices, such as game consoles and media players like the iPod. Why does my iPod, using Apple-provided software, not allow me to copy music files to another computer? It shouldn’t matter which computer, as long as I’m not violating copyrights; perhaps it’s the same computer that ripped the CDs, after the hard drive died or was upgraded, or perhaps it’s the new computer I just bought. By using the iPod as a storage device instead of a music player, such copies can be made with Apple software, but music files in the “play” section can’t be copied out. This restriction is utterly silly, as it accomplishes nothing but annoying owners, and I’m glad that Ubuntu Linux allows direct access to the music files.
DMCA
Some firmware implements copyright protection measures, and modifying it to remove those protections is made illegal by the DMCA. As modifying consoles (“modding”) is often done for that purpose, the act of “modding” has become suspicious in itself. Someone modding a DVD player simply to bypass annoying splash screens, without affecting copy protection mechanisms, would have a hard time defending herself. This has a chilling effect on the recycling of perfectly good hardware with better software. For example, I think Microsoft would still be selling large quantities of the original Xbox if the compiled XBMC media player software weren’t also illegal for most people, due to licensing issues with the Microsoft compiler. The DMCA helps law enforcement and copyright holders, but has negative effects as well (see Wikipedia). Disloyal devices are distasteful, and the current law heavily favors copyright owners. Of course, it’s not clear-cut, especially for devices that have responsibilities toward multiple entities, such as cell phones. I recommend watching Ron Buskey’s security seminar about cell phones.
Web Me Up
If you think you’re using only free software, you’re wrong every time you use the web with scripting enabled. Potentially the ultimate disloyal software is what web sites push to your browser. Active content (JavaScript, Flash, etc.) on web pages can glue you in place and restrict what you can do and how, or deploy adversarial behaviors (e.g., pop-unders or browser attacks). Every time you visit a web page nowadays, you download and run software that is not free:
* It is often impractical to access the content of the page, or even basic form functionality, without running the software, so you do not have the freedom to run or not run it as a practical choice (in theory you do have a choice, but the penalties for choosing the alternative can be significant).
* It is difficult to study, given how some code can load other active content from other sites in a chain-like fashion, creating a large spaghetti that can be changed at any time.
* There is no point in redistributing copies, as the copies running from the web sites you need to use won’t change.
* Releasing your “improvements” to the public would almost certainly violate copyrights. Even if you made useful improvements, the web site owners could change how their site works regularly, thus foiling your efforts.
Most of the above is true even if the scripts you are made to run in a browser were free software from the point of view of the web developers; the delivery method taints them.
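The chain-loading problem described above can be sketched outside a browser. This is a minimal simulation (the URLs and the `network` table are hypothetical; a real page would use `document.createElement("script")` against live servers rather than a lookup table), showing why the full set of code you end up running cannot be determined from the first script alone:

```javascript
// Simulated "network": each script declares further scripts it loads at
// runtime. Any entry could change tomorrow, invalidating any prior study.
const network = {
  "https://site.example/app.js": ["https://ads.example/ads.js"],
  "https://ads.example/ads.js": ["https://tracker.example/t.js"],
  "https://tracker.example/t.js": [],
};

// Follow the chain the way a browser would, recording everything executed.
function traceScripts(url, executed = []) {
  executed.push(url);
  for (const dep of network[url] || []) {
    traceScripts(dep, executed);
  }
  return executed;
}

// One visit to one page pulls in code from three different sites.
console.log(traceScripts("https://site.example/app.js"));
```

Note that nothing in the first script reveals the tracker two hops away; only executing the chain does, which is exactly what makes this software impractical to study before running it.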
Give me some AIR
The Adobe Integrated Runtime (“AIR”) is interesting because it has the potential to free web technologies such as HTML, Flash, and JavaScript by allowing them to be used in a free, open source way. CERIAS webmaster Ed Finkler developed the “Spaz” application with it, and licensed it under the New BSD license. I say “potential” only because AIR can be used to dynamically load software as well, with all the problems of web scripting. It’s a question of control and trust. I can’t trust possibly malicious code that I am forced to run on my machine to access a web page I happen to visit. However, I may trust static code that is free software not to be disloyal by design; if it is disloyal, it is possible to fix it and redistribute the improved code. AIR could deliver that, as Ed demonstrated.
The problem with AIR is that I will have to trust a web developer with the security of my desktop. AIR has two sandboxes: the Classic Sandbox, which is like a web browser, and the Application Sandbox, which is comparable to server-side applications except that the code runs locally (see the AIR security FAQ). The Application Sandbox allows local file operations that are typically forbidden to web browsers, but without some of the more dangerous web browser functionality. Whereas the technological security model makes sense as a foundation, its actual security is entirely up to whoever writes the code that runs in the Application Sandbox. People who have no qualms about pushing code to my browser and forcing me to turn on scripting (making me vulnerable to attacks from sites I visit subsequently, to malicious ads, and to code injected into their site) can’t be trusted to care whether my desktop is compromised through their code, or to be competent enough to prevent it.
Even the security FAQ for AIR downplays significant risks. For example, it says, “The damage potential from an injection attack in a given website is directly proportional to the value of the website itself. As such, a simple website such as an unauthenticated chat or crossword site does not have to worry much about injection attacks as much as any damage would be annoying at most.” This completely ignores scripting-based attacks against the browsers themselves, such as those performed by the well-known malware kits MPack and IcePack. In addition, there will probably be both implementation and design vulnerabilities found in AIR itself.
Either way, AIR is a development to watch.
P.S. (10/16): What if AIR attracts the kind of people who are responsible for flooding the National Vulnerability Database with PHP server application vulnerabilities? Server applications are notoriously difficult to write securely. Code that they write for the Application Sandbox could be just as buggy, except that instead of a few compromised servers, there could be a large number of compromised personal computers…
[tags]cybersecurity research[/tags]
As I write this, I’m sitting in a review of some university research in cybersecurity. I’m hearing about some wonderful work (and no, I’m not going to identify it further). I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas. What strikes me about these efforts—representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars—is that the vast majority of these efforts have been applied to problems we already know how to solve.
Let me recast this as an analogy in medicine. We have a crisis of cancer in the population. As a result, we are investing huge amounts of personnel effort and money into how to remove diseased portions of lungs, and administer radiation therapy. We are developing terribly expensive cocktails of drugs to treat the cancer…drugs that sometimes work, but make everyone who takes them really ill. We are also investing in all sorts of research to develop new filters for cigarettes. And some funding agencies are sponsoring workshops to generate new ideas on how to develop radical new therapies such as lung transplants. Meanwhile, nothing is being spent to reduce tobacco use; if anything, the government is one of the largest purchasers of tobacco products! Insane, isn’t it? Yes, some of the work is great science, and it might lead to some serendipitous discoveries to treat liver cancer or maybe even heart disease, but it still isn’t solving the underlying problems. It is palliative, with an intent to be curative—but we aren’t appropriately engaging prevention!
Oh, and second-hand smoke endangers many of us, too.
We know how to prevent many of our security problems—least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.
Instead of building trustworthy systems (note—I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.
We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don’t support such malware.
Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools to look for buffer overflows and type mismatches in existing code, and merrily continue to produce more questionable quality software.
And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used—and not used. Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don’t speak to fitness for purpose. As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them. If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren’t thinking in terms of fundamental solutions.
I’m not trying to claim there aren’t worthwhile topics for open research—there are. I’m simply disheartened that we are not using so much of what we already know how to do, and continue to strive for patches and add-ons to make up for it.
In many parts of India, cows are sacred and cannot be harmed. They wander everywhere in villages, with their waste products fouling the streets and creating a public health problem. However, the only solution that local people are able to visualize is to hire more people to shovel effluent. Meanwhile, the cows multiply, the people feed them, and the problem gets worse. People from outside are able to visualize solutions, but the locals don’t want to employ them.
Metaphorically speaking, we need to put down our shovels and get rid of our sacred cows—maybe even get some recipes for meatloaf.
Let’s start using what we know instead of continuing to patch the broken, insecure, and dangerous infrastructure that we currently have. Will it be easy? No, but neither is quitting smoking! But the results will ultimately provide us some real benefit, if we can exert the requisite willpower.
[Don’t forget to check out my tumble log!]
[tags]copyright,DMCA,RIAA,MPAA,sharing,downloading,fair use[/tags]
Over the past decade or so, the entertainment industry has supported a continuing series of efforts to increase the enforcement of copyright laws, lengthen copyright terms, and pursue very significant enforcement actions against individuals. Included in this mess was the DMCA—the Digital Millennium Copyright Act—which has a number of very technology-unfriendly aspects.
One result of this copyright madness is lawsuits against individuals found to have file-sharing software on their systems, along with copies of music files. Often the owners of these systems don’t even realize that their software is publishing the music files on their systems. It also seems the case that many people don’t understand copyright and do not realize that downloading (or uploading) music files is against the law. Unfortunately, the entertainment industry has chosen to seek draconian remedies from individuals who may not be involved in more than incidental (or accidental) sharing of files. One recent example is a case where penalties have been declared that may bankrupt someone who didn’t set out to hurt the music industry. I agree with comments by Rep. Rick Boucher that the damages are excessive, even though (in general) the behavior of file sharers is wrong and illegal.
Another recent development is a provision in the recently introduced “College Access and Opportunity Act of 2007” (HR 3746; use Thomas to find the text). Sec 484 (f) contains language that requires schools to put technology into place to prevent copyright violations, and to inform the Secretary of Education about what those plans and technologies are. This is ridiculous, as it singles out universities instead of ISPs in general, and forces them to expend resources on misbehavior by students that they are already otherwise attempting to control. It is unlikely to make any real dent in the problem because it doesn’t address the underlying causes. Even more to the point, no existing technology can reliably detect only those shared files whose copyright prohibits sharing. Encryption, inflation/compression, translation into other formats, and transfer in discontinuous pieces can all be employed to fool monitoring software. Instead, it is simply another cost and burden on higher ed.
We need to re-examine copyright. Another aspect in particular we need to examine is “fair use.” The RIAA, MPAA and similar associations are trying to lock up content so that any use at all requires paying them additional funds. This is clearly silly, but their arguments to date have been persuasive to legislators. However, the traditional concept of “fair use” is important to keep intact—especially for those of us in academia. A recent report outlines that fair use is actually quite important—that approximately 1/6 of the US economy is related to companies and organizations that involve “fair use.” It is well worth noting. Further restrictions on copyright use—and particularly fair use—are clearly not in society’s best interest.
Copyright has served—and continues to serve—valid purposes. However, with digital media and communications it is necessary to rethink the underlying business models. When everyone becomes a criminal, what purpose does the law serve?
Also, check out my new “tumble log.” I update it with short items and links more often than I produce long posts here.
[posted with ecto]
[tags]interview,certification[/tags]
I was recently interviewed by Gary McGraw for his Silver Bullet interview series. He elicited my comments on a number of topics, including security testing, ethical hacking, and why security is difficult. If you like any of my blog postings, you might find the interview of some interest. But if not, you might find some of the other interviews of interest – mine was #18 in the series.
[tags]reformed hackers[/tags]
A news story that hit the wires last week was that someone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior. The press and blogosphere seemed to treat this as surprising. They shouldn’t have.
I have been speaking and writing for nearly two decades on this general issue, as have others (William Hugh Murray, a pioneer and thought leader in security, is one who comes to mind). Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids. First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit. Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled. (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way—that behavior does not establish the bona fides for real expertise. If anything, it establishes a disregard for the community it endangers.)
More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on. Certainly it is possible that they will learn the error of their ways and reform. However, it is also the case that they may slip later and revert to their old ways. Putting some of them in situations of trust with access to items of value is almost certainly too much temptation. This has been established time and again in studies of criminals of all types, especially those who commit fraud. So, why would a prudent manager take a risk when better alternatives are available?
Even worse, circulating stories of criminals who end up as highly-paid consultants are counterproductive, even if they are rarely true. That is the kind of story that may tempt some without strong ethics to commit crimes as a shortcut to fame and riches. Additionally, it is insulting to the individuals who work hard, study intently, and maintain a high standard of conduct in their careers—hiring criminals basically states that the honest, hardworking real experts are fools. Is that the message we really want to put forward?
Luckily, most responsible managers now understand, even if the press and general public don’t, that criminals are simply that—criminals. They may have served their sentences, which now makes them former criminals…but not innocent. Pursuing criminal activity is not—and should not be—a job qualification or career path in civilized society. There are many, many historical examples we can turn to, including those of hiring pirates as privateers and train robbers as train guards. Some took the opportunity to go straight, but the instances of those who abused trust and made off with what they were protecting illustrate that it is a big risk to take. It is also something we have learned to avoid. It is long past time for those of us in computing to get with the program.
So, what of the argument that there aren’t enough real experts, or they cost too much to hire? Well, what is their real value? If society wants highly-trained and trustworthy people to work in security, then society needs to devote more resources to support the development of curriculum and professional standards. And it needs to provide reasonable salaries to those people, both to encourage and reward their behavior and expertise. We’re seeing more of that now than a dozen years ago, but it is still the case that too many managers (and government officials) want security on the cheap, and then act surprised when they get hacked. I suppose they also buy their Rolex and Breitling watches for $50 from some guy in a parking lot and then act surprised and violated when the watch stops a week later. What were they really expecting?
[tags]hacking, national security, China, cyber espionage[/tags]
Over the last week or two there have been several news items based on statements and leaks regarding ongoing cyber espionage. For instance, two articles, one in the British Financial Times and another on CNN, allege that Chinese agents successfully broke into systems at the Pentagon, resulting in a shutdown of unclassified mail systems. The London Times had an article on the Chinese Army making preparations for “Cyber War,” and in New Zealand an official indicated that government systems had been hacked by foreign agents, implying Chinese involvement. An article in today’s Christian Science Monitor noted that China has been attacking German and British government sites and industry, and another article in the Asia-Pacific news mentions France and Australia as targets.
Of course, these kinds of stories aren’t new. There was a story in the Washington Post back in 2005 about alleged Chinese hacking, and another set of stories this past March, including one in USA Today. There seems to be a thread going back to at least 2003, as reported in Time magazine.
Not to be outdone, and perhaps in a classic “Spy vs. Spy” countercharge, a Chinese official complained that their systems had been hacked into and damaged by foreign agents. That could very well be true, but the timing is such that we should be rather skeptical of these claims.
So, what is really going on? Well, it probably is the case that few people know the whole, real story—and it is undoubtedly classified within each country where any part of the story is known. However, there are a few things we know for certain:
Given those 4 observations, we can be reasonably sure that not all the events being discovered are actually government sanctioned; that not all the actors are being accurately identified; and probably only a fraction of the incidents are actually being discovered. The situation is almost certainly worse in some ways than implied by the newspaper accounts.
Some of us have been warning about lax cyber security, especially coupled with poorly designed COTS products, for years. What is surprising is that authorities and the press are viewing these incidents as surprising!
It remains to be seen why so many stories are popping up now. It’s possible that there has been a recent surge in activity, or perhaps some recent change has made it more visible to various parties involved. However, that kind of behavior is normally kept under wraps. That several stories are leaking out, with similar elements, suggests that there may be some kind of political positioning also going on—the stories are being released to create leverage in some other situation.
Cynically, we can conclude that once some deal is concluded everyone will go back to quietly spying on each other and the stories will disappear for a while, only to surface again at some later time when it serves another political purpose. And once again, people will act surprised. If government and industry were really concerned, we’d see a huge surge in spending on defenses and research, and a big push to educate a cadre of cyber defenders. But it appears that the President is going to veto whatever budget bills Congress sends to him, so no help there. And the stories of high-tech espionage have already faded behind media frenzy over accounts about Britney being fat, at least in the US.
So, who is getting violated? In a sense, all of us, and our own governments are doing some of the “hacking” involved. And sadly, that isn’t really newsworthy any more.
Updated 9/14
And here is something interesting from the Air Force that echoes many of the points above.
[posted with ecto]
The role of diversity in helping computer security received attention when Dan Geer was fired from @stake for his politically inconvenient considerations on the subject. Recently, I tried to “increase diversity” by buying a Ubuntu system—that is, a system that would come with Ubuntu pre-loaded. I have used Ubuntu for quite a while now, and it has become my favorite for the desktop for many reasons I won’t expand upon here, despite limitations in the manageability of multiple-monitor support. I wanted a system that would come with Ubuntu pre-loaded so as not to pay for an OS I wouldn’t use, not to support companies that don’t deserve the money, and to be even less of a target than if I had used MacOS X. I wanted a system with a pre-tested, supported Ubuntu installation. I still can’t install 7.04 on a recent Sun machine (dual Opteron) because of problems with the SATA drivers on an AMD64 platform (the computer won’t boot after the upgrade from 6.10). I don’t want another system with only half-supported hardware, or hardware that is sometimes supported and sometimes not as versions change. I suppose I could pay the $250 that Canonical wants for one year of professional support, but there is no guarantee that they could get the hardware to play nicely with 7.04. With a pre-tested system there is no such risk, and there are economies of scale. Trying to get software to play nicely after buying the hardware feels very much like putting the cart before the horse; it’s a reactive approach that conflicts with best practices.
So, encouraged by the news of Dell selling Ubuntu machines, I priced out a machine and monitor. When I requested a quote, I was told that this machine was available only for individual purchase, and that I needed to go on the institutional purchase site if I wanted to buy it with one of my grants. Unfortunately, there wasn’t and still is no Ubuntu machine available for educational purchase on that site. No amount of begging changed Dell’s bizarre business practices. Dell’s representative for Purdue stated that this was due to “supply problems” and that Ubuntu machines may be available for purchase in a few months. Perhaps. The other suggestion was to buy a Dell Precision machine, but they only come with Red Hat Linux (see my point about supporting companies who deserve it), and they use ATI video hardware (ATI has a history of having bad drivers for Linux).
I then looked for desktops from other companies. System76, and apparently nobody else (using internet searches), had what I wanted, except that they were selling only up to 20” monitors. When I contacted them, they kindly and efficiently offered a 24” monitor for purchase, and sent me a quote. I forwarded the quote for purchasing.
After a while, I was notified that System76 wasn’t a registered vendor with Purdue University, and that it costs too much to add a vendor that “is not likely to be much of a repeat vendor” and that Purdue is “unwilling to spend the time/money required to set them up as a new vendor in the purchasing system.” I was also offered the possibility to buy the desktop and monitor separately, and because then the purchase would be done under different purchasing rules and with a credit card, I could buy them from System76 if I wanted… but I would have to pay a 50% surcharge imposed by Purdue (don’t ask, it doesn’t make sense to me).
While Purdue may have good reasons for this from an accounting point of view, I note that educational, institutional purchases are subject to rules and restrictions that limit computing diversity or make it less practical, assuming this is a widespread practice. This negatively impacts computing “macro-security” (security considered on a state-wide scale or larger). I’m not pretending that the policies are new, or that buying a non-mainstream computer hasn’t been problematic in the past. However, the scale of computer security problems has increased over the years, and these policies have an effect on security that they don’t have on other items purchased by Purdue or other institutions. We could benefit from being aware of the unfortunate effects of those purchasing policies; I believe that exemptions for computers would be a good thing.
Edit: I wrote the wrong version numbers for Ubuntu in the original.
Edit (9/14/07): Changed the title from “Ubuntu Linux Computers 50% More Expensive: a Barrier to Computing Diversity” to “Purchasing Policies That Create a Barrier to Computing Diversity”, as it is the policies that are the problem, and the barriers are present against many products, not just Ubuntu Linux.
[tags]network crime, internet video, extortion, streaming video[/tags]
Here’s an interesting story about what people can do if they gain access to streaming video at a poorly-protected site. If someone on the other end of the phone is really convincing, what could she get the victims to do?
FBI: Strip Or Get Bombed Threat Spreads - Local News Story - KPHO Phoenix:
[tags]cyber warfare, cyber terrorism, cyber crime, Estonia[/tags]
I am frequently asked about the likelihood of cyber war or cyber terrorism. I’m skeptical of either being a stand-alone threat, as neither is likely to serve the goals of those who would actually wage warfare or commit terrorism.
The incidents in Estonia earlier this year were quite newsworthy and brought more people out claiming it was cyber terrorism or cyber warfare. Nonsense! It wasn’t terrorism, because it didn’t terrorize anyone—although it did annoy the heck out of many. And as far as warfare goes, nothing was accomplished politically, and the “other side” was never even formally identified.
Basically, in Estonia there was a massive outbreak of cyber vandalism and cyber crime.
Carolyn Duffy Marsan did a nice piece in Network World on this topic. She interviewed a number of people, and wrote it up clearly. I especially like it because she quoted me correctly! You can check out the article here: How close is World War 3.0? - Network World. I think it represents the situation quite appropriately.
[As a humorous aside, I happened to do a search on the Network World site to see if another interview had appeared without my hearing about it. I found this item that had appeared in December of 2006, and I didn’t know about it until now! Darn, and to think I could have started recruiting minions in January.]
So, you watch for advisories, deploy countermeasures (e.g., change firewall and IDS rules) or shut down vulnerable services, patch applications, restore services. You detect compromises, limit damages, assess the damage, repair, recover, and attempt to prevent them again. Tomorrow you start again, and again, and again. Is it worth it? What difference does it make? Who cares anymore?
If you’re sick of it, you may just be getting fatigued.
If you don’t bother defending anymore because you think there’s no point to this endless treadmill, you may be suffering from learned helplessness. Some people even consider that if you only passively wait for patches to be delivered and applied by software update mechanisms, you’re already in the “learned helplessness” category. On the other hand, tracking every vulnerability in the software you use by reading BugTraq, Full Disclosure, etc., the moment it is announced, and running proof-of-concept code on your systems to test it, isn’t for everyone; there are diminishing returns, and one has to balance risk versus energy expenditure, especially when that energy could produce better returns elsewhere. Of course I believe that using Cassandra is an OK middle ground for many, but I’m biased.
The picture may certainly look bleak, with talk of “perpetual zero-days”. However, there are things you can do (of course, as in all lists not every item applies to everyone):
Use the CIS benchmarks, and if evaluation tools are available for your platform, run them. These tools give you a score, and even as silly as some people may think this score is (reducing the number of holes in a ship from 100 to 10 may still sink the ship!), it gives you positive feedback as you improve the security stance of your computers. It’s encouraging, and may lift the feeling that you are sinking into helplessness. If you are a Purdue employee, you have access to CIS Scoring Tools with specialized features (see this news release). Ask if your organization also has access and if not consider asking for it (note that this is not necessary to use the benchmarks).
Use the NIST security checklists (hardening guides and templates). The NIST’s information technology laboratory site has many other interesting security papers to read as well.
Consider using Thunderbird and the Enigmail plugin for GPG, which make handling signed or encrypted email almost painless. Do turn on SSL or TLS-only options to connect to your server (both SMTP and either IMAP or POP) if it supports it. If not, request these features from your provider. Remember, learned helplessness is not making any requests or any attempts because you believe it’s not ever going to change anything. If you can login to the server, you also have the option of SSH tunneling, but it’s more hassle.
Watch CERIAS security seminars on subjects that interest you.
If you’re a software developer or someone who needs to test software, consider using the ReAssure system as a test facility with configurable network environments and collections of VMware images (disclosure: ReAssure is my baby, with lots of help from other CERIAS people like Ed Cates).
Good luck! Feel free to add more ideas as comments.
*A small rant about privacy, which tends to be another area of learned helplessness: why do they need to know? I tend to consider all information that people gather about me, and that they don’t need in order to perform the tasks I want them to do for me, a (perhaps very minor) violation of my privacy, even if it has no measurable effect on my life that I know about (that’s part of the problem: how do I know what effect it has on me?). I like the “need to know” principle, because you don’t know which selected (and possibly out-of-context) or outdated information is going to be used against you later. It’s one of the lessons of life that knowledge about you isn’t always used in legal ways, and even when it is, not everything that’s legal is good or ethical, and not all agents of good or legal causes are ethical, impartial, or possessed of integrity. I find the “you’ve got nothing to hide, do you?” argument extremely stupid and irritating, and it’s not something that can be rebutted in a sentence or two. I’m not against volunteering information for a good cause, and I have done so in the past, but it’s rude to just take it from me without asking and without any explanation, or to subvert my software and computer to do so.