Posts in Secure IT Practices
Word documents being used in new attacks
I have repeatedly pointed out (e.g., this post) to people that sending Word files as attachments is a bad idea. This has been used many, many times to circulate viruses, worms, and more. People continue to push back because (basically) it is convenient for them. How often have we heard that convenience trumps good security (and good sense)?
Now comes this story of yet another attack being spread with Word documents.
There are multiple reasons why I don't accept Word documents in email. This is simply one of the better reasons.
If you want to establish a sound security posture at your organization, one of the things you should mandate is no circulation of executable formats -- either outbound or inbound. ".doc" files are in this category. I am unsure whether the new ".docx" format is fully immune to these kinds of attacks, but it seems ".rtf" is.
Failures in the Supply Chain
[This is derived from a posting of mine to Dave Farber's Interesting People list.]
There is an article in the October Businessweek that describes the problem of counterfeit electronic components being purchased and used in critical Defense-related products.
This is not a new threat. But first let's reflect on the past.
Historically, the military set a number of standards (MIL-SPEC) to ensure that materials they obtained were of an appropriate level of quality, as well as interoperable with other items. The standards helped ensure a consistency for everything from food to boots to tanks to software, as well as ensuring performance standards (quality).
The standards process was not without problems, however. Among issues often mentioned were:
- Standards were sometimes not revised often enough to reflect changes in technology. The result was that the military often had to acquire and use items that were generations behind the commercial marketplace (esp. in software/computers);
- Knowing and complying with so many standards often caused companies considerable extra time and effort in supplying items, thus raising their cost well above comparable commercial equivalents;
- Incompatible standards across military agencies and services, especially when compared with commercial items used by civilian agencies, led to waste and increased cost, and lack of flexibility in implementation;
- Imposition of rigid standards cut down on innovation and rapid development/acquisition/deployment cycles;
- The rigidity and complexity of the standards effectively shut out new vendors, especially small vendors because they could not match the standards-compliance efforts of large, entrenched defense vendors.
Thus, in June of 1994, William Perry, then the Secretary of Defense, issued a set of orders that effectively provided a pathway to move away from those standards and adopt commercial standards and performance goals in their place (cf. the Wikipedia article on MIL-SPEC). One of the rationales expressed at the time, especially regarding computing software and hardware, was that marketplace competition would lead to better-quality products. (Ironically, the lack of vendor-neutral standards then led to a situation where we have large monocultures of software/hardware platforms throughout government, and the resultant lack of meaningful competition has almost certainly not served the goals of better quality and security.)
In some cases, the elimination of standards has indeed helped keep down costs and improve innovation. I have been told, anecdotally, that stealth technology might not have been fielded had those aircraft been forced into the old MIL-SPEC regime. As a matter of cost and speed, however, many MIL-SPEC standards seem to have been abandoned in favor of COTS whenever possible, without proper risk analysis. Only recently have policy-makers begun to realize some of the far-reaching problems that have resulted from the rush to abandon those standards.
As the Businessweek article details, counterfeit items and items with falsified (or poorly conducted) quality control have been finding their way into critical systems, including avionics and weapons control. The current nature of development means that many of those systems are assembled from components and subsystems supplied by other contractors, so a fully-reputable supplier may end up supplying a faulty system because of a component supplied by a vendor with which they have no direct relationship. One notable example of this was the "Cisco Raider" effort from a couple of years ago where counterfeit Cisco router boards were being sold in the US.
As noted in several press articles (such as the ones linked above), there is considerable price motive to supply less capable, "gray market" goods in large bids. The middlemen either do not know or do not care where the parts come from or where they are being used -- they simply know they are making money. The problem is certainly not limited to Defense-related parts, of course. Fake "Rolex" watches that don't keep time, fake designer shoes that fall apart in the rain, and fake drugs that either do nothing or actually cause harm are also part of the "gray market." Adulteration of items or use of prohibited materials is yet another aspect of the problem: think "lead paint" and "melamine" for examples. Of course, this isn't a US-only problem; people around the world are victimized by gray-market, adulterated, and counterfeit goods.
These incidents actually illustrate some of the unanticipated future effects of abandoning strong standards. One of the principal values of the MIL-SPEC standards was that they established a strict chain of accountability for products. I suspect that little thought has been given by policy-makers to the fact that there is considerable flow of items across borders from countries where manufacturing expertise and enforcement of both IP laws and consumer-protection statutes may not be very stringent. Buying goods from countries where IP violations are rampant (if there is little fear over copying DVDs, then there is little fear over stamping locally-produced items as "Cisco"), and where bribes are commonplace, is not a good strategy for uniform quality.
Of course, there are even more problems than simply quality. Not every country and group has the same political and social goals as we do in the US (or any other country -- this is a general argument). As such, if they are in a position to produce and provide items that may be integrated into our defense systems or critical infrastructure, it may be in their interests to produce faulty goods -- or carefully doctored goods. Software with hidden "features" or control components with hidden states could result in catastrophe. That isn't fear-mongering -- we know of cases where this was done, such as to the Soviets in the 1980s. Even if the host country isn't subtly altering the components, it may not have the resources to protect the items being produced from alteration by third parties. After all, if the labor is cheaper in country X, then it will also be cheaper to bribe the technicians and workers to make changes to what they are producing.
The solution is to go back to setting high standards, requiring authentication of the supply chain, and performing better evaluation of random samples. Unfortunately, this is expensive, and we're not in a state nationally where extra expense (except to line the pockets of Big Oil and Banking) is well tolerated by government. Furthermore, it alters the model where many small vendors acting as middlemen are able to get a "piece of the action." Their complaints to elected representatives who may not understand the technical complexities add even further pressure against change.
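One narrow piece of "authentication of the supply chain" can be sketched in code: checking that a delivered component image matches a cryptographic digest published by the original manufacturer. The part numbers, manifest format, and function names below are hypothetical, and a real scheme would use signed manifests rather than a bare digest table:

```python
import hashlib

# Hypothetical manufacturer manifest: part number -> known-good SHA-256 digest.
# (The digest below is the SHA-256 of the toy payload b"123" used for illustration.)
TRUSTED_MANIFEST = {
    "router-board-7301": "a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3",
}

def verify_component(part_number: str, image: bytes) -> bool:
    """Check a delivered image against the manufacturer's published digest."""
    expected = TRUSTED_MANIFEST.get(part_number)
    if expected is None:
        return False  # unknown part: reject rather than trust the middleman
    return hashlib.sha256(image).hexdigest() == expected
```

A check like this catches only tampering and outright counterfeits of digital content; it does nothing for substituted physical parts or falsified quality-control records, which is why random-sample evaluation is still needed.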
In some cases, the risk posed in acquisition of items may warrant subsidizing the re-establishment of some manufacturing domestically (e.g., chip fabs). This doesn't need to be across the board, but it does require judicious risk analysis to determine where the critical points are -- or will be in the future. We must realize that the rapid changes in technology may introduce new patterns of production and acquisition that we should plan for now. For instance, once elements of nanotechnology become security-critical, we need to ensure that we have sufficient sources of controlled, quality production and testing.
I'm not going to hold my breath over change, however. Some of us have been complaining about issues such as this for decades. The usual response is that we are making a big deal out of "rare events" or are displaying xenophobia. The sheer expense frightens off many from even giving it more than a cursory thought. I know I have been dismissed as an "over-imaginative academic" more times than I can count when I point out the weaknesses.
One of the factors that allegedly led to the decline of the Roman empire was the use of lead in pipes, and lead salts to make cheap wine more palatable for the masses. The Romans knew there was a health problem associated with lead, but the vendors saw more profit from using it.
Once we have sufficiently poisoned our own infrastructure to save money and make the masses happier, how long do we last?
[If you are interested in being notified of new entries by spaf on cyber and national security policy issues, you can either subscribe to the RSS feed for this site, or subscribe to the notification list.]
PHPSecInfo talk at OSCON 2008
If you're at OSCON, and you love security, you may or may not enjoy my talk on PHPSecInfo, a security auditing tool for the PHP environment. I'm actually going to try to show some new code, so if you've seen it before, you can see it again – for the first time.
The talk is at 1:45pm Thursday, 07/24/2008.
Virtualization Is Successful Because Operating Systems Are Weak
Claimed benefits of virtualization:

- Availability
  - Minimized downtime for patching OSes and applications
  - Restart a crashed OS or server
- Scalability
  - More or different images as demand changes
- Isolation and compartmentalization
- Better hardware utilization
- Hardware abstraction for OSes
- Support for legacy platforms

Capabilities that operating systems already provide (or should):

- Availability
  - Minimized downtime for patching applications
  - Restart crashed applications
- Scalability
  - More or different processes as demand changes
- Isolation and compartmentalization
  - Protected memory
  - Accounts, capabilities
- Better hardware utilization (with processes)
- Hardware abstraction for applications
VMWare ESX Server runs its own OS with drivers. Xen and offerings based on it have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation). Microsoft's "Hyper-V" requires a full-blown Windows operating system to run it. So what we're really doing is exchanging one untrusted OS for another that we are supposed to trust more for some reason. This other OS also needs patches, configuration, and maintenance. Now we have multiple OSes to maintain! What did we gain? We don't trust OSes, but we trust "virtualization" that depends on more OSes? At least ESX is "only" 50 MB, simpler and smaller than the others, but the number of defects per MB of binary code, as measured by patches issued, is not convincing.
I'm now not convinced that a virtualization solution + guest OS is significantly more secure or functional than just one well-designed OS could be, in theory. Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don't trust operating systems enough to let them stand on their own. The practice of wiping and reinstalling an OS after an application or an account is compromised, or deploying a new image by default suggests that there is little trust in the depth provided by current OSes.
As for ease of management and availability versus patching, I don't see why operating systems couldn't be managed just as intelligently as ESX is, migrating applications as necessary. ESX is an operating system anyway... I believe that all the special things a virtualization solution does for functionality and security, as well as the "new" opportunities being researched, could be done just as well by a trustworthy, properly designed OS; there may be a thesis or two in figuring out how to implement them back in an operating system.
What virtualization vendors are really doing is a clever way to smoothly replace one operating system with another. This may be how an OS monopoly could be dislodged, and perhaps would explain the virtualization-unfriendly clauses in the licensing options for Vista: virtualization could become a threat to the dominance of Windows, if application developers started coding for the underlying OS instead of the guest. Of course, even with a better OS we'd still need virtualization for testbeds like ReAssure, and for legacy applications. Perhaps ReAssure could help test new, better operating systems.
(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS symposium).
Related reading:
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level. ACM Operating Systems Review, 41
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure? Computer, 39
Another Round on Passwords
The EDUCAUSE security mailing list has yet (another) discussion on password policies. I've blogged about this general issue several times in the past, but maybe it is worth revisiting.
Someone on the list wrote:
Here is my question - does anyone have the data on how many times a hack (attack) has occurred associated to breaking the “launch codes” from outside of the organization? The last information I gleaned from the FBI reports (several years ago) indicated that 70 percent of hackings (attacks) were internal.
My most recent experience with intrusions has had nothing to do with a compromised password, but rather an exploit of some vulnerability in the OS, database, or application.
I replied:
I track these things, and I cannot recall the last time I saw any report of an incident caused by a guessed password. Most common incidents are phishing, trojans, snooping, physical theft of sensitive media, and remote exploitation of bugs.
People devote huge amounts of effort to passwords because it is one of the few things they think they can control.
Picking stronger passwords won't stop phishing. It won't stop users downloading trojans. It won't stop capture of sensitive transmissions. It won't bring back a stolen laptop (although if the laptop has proper encryption it *might* protect the data). And passwords won't ensure that patches are in place or that flaws are fixed.
Creating and forcing strong password policies is akin to being the bosun ensuring that everyone on the Titanic has locked their staterooms before they abandon ship. It doesn't stop the ship from sinking or save any lives, but it sure does make him look like he's doing something important.
That isn't to say that we should be cavalier about setting passwords. It is important to try to set strong passwords, but once reasonably good ones are set in most environments the attacks are going to come from other places -- password sniffing, exploitation of bugs in the software, and implantation of trojan software.
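As a rough illustration of what "reasonably good" might mean as a first-pass policy check, here is a character-class entropy estimate. The 60-bit threshold is my own assumption, and this kind of estimate famously overstates the strength of dictionary-based passwords, so it is a sanity check, not a substitute for screening against known-compromised lists:

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Rough estimate: length * log2(size of the character pool used).
    Overestimates strength for dictionary words; first-pass check only."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def is_reasonable(password: str, minimum_bits: float = 60.0) -> bool:
    """Accept passwords whose rough entropy estimate meets the threshold."""
    return estimate_entropy_bits(password) >= minimum_bits
```

Once passwords clear a bar like this, further tightening of the rules buys little; as argued above, the marginal risk shifts to sniffing, software flaws, and trojans.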
As a field, we spend waaaaay too much time and resources on palliative measures rather than fundamental cures. In most cases, fiddling with password rules is a prime example. A few weeks ago, I blogged about a related issue.
Security should be based on sound risk assessment, and in most environments weak passwords don't present the most significant risk.