[tags]Google, spam[/tags]
Today I received email from a google.com address. The sender said he had found me by doing a search on the WWW. He indicated he hoped I wasn’t offended by his sending unsolicited email. However, he had a great offer for me, one that I was uniquely qualified for, and then offered a couple of URLs.
Does that sound familiar?
My first thought was that it was a 419 scam (the usual “I am the son of the crown prince of Nigeria…” letters). However, after checking out the mail headers and the enclosed URLs, it appears to be a (semi) legit letter from a Google recruiter. He was asking if I was open to considering a new, exciting position with Google.
And what exciting new position does the Google recruiter think I’m ideally suited for? Starting system administrator…
And by the way, sending email to “abuse@google.com” gets an automated response that states, in no uncertain terms, that Google never sends spam and that I should take my complaints elsewhere.
Gee, think this is a new career possibility for me?
[posted with ecto]
[tags]cyber security research, PITAC[/tags]
I strongly urge you to read Jim Horning’s blog entry about a recent Congressional hearing on cyber security research—his blog is titled Nothing is as simple as we hope it will be. (Jim posts lots of interesting items—you should add his blog to your list.)
I have been visiting Federal offices and speaking before Congress for almost 20 years, trying to raise awareness of the importance of supporting information security research. More recently, I was a member of the President’s Information Technology Advisory Committee (PITAC). We studied the current funding of cybersecurity research and the magnitude of the problem. Not only was our report largely ignored by both Congress and the President, but the PITAC itself was disbanded. For whatever reason, the current Administration is markedly unsupportive of cyber security research, and might even be classed as hostile to those who draw attention to this lack of support.
Of course, there are many other reports from other august groups that say basically the same thing as the PITAC report. No matter who issued them, Congress and the Executive Branch have largely failed to address the issues.
Thus, it is heartening to read of Chairman Langevin’s comments. However, I’m not going to get my hopes up.
Be sure to also read Dan Geer’s written testimony. It touches on many of the same themes he has spoken about in recent years, including his closing keynote at our annual CERIAS Security Symposium (save the dates—March 19 & 20, 2008—for the next symposium).
Copyright © 2007 by E. H. Spafford
[posted with ecto]
The premise of the “Verified by VISA” program seems fine: require a password to authorize the use of a credit card online, in order to lower credit card fraud (setting aside the problem of having to manage yet another password). However, there were several problems with how I was introduced to the program:
That appeared to me more like a phishing attempt exploiting an XSS vulnerability than anything else. After contacting my bank, I was assured that the program was legitimate. Visa actually has a web site where you can register your card for the program:
https://usa.visa.com/personal/security/vbv/index.html
On that site, most of the links to explanations are broken: I got a “Sorry! The page you’ve requested cannot be found.” message when clicking almost all of them (I found out later that they work if you activate JavaScript). Another issue is that you need to activate JavaScript in order to submit that sensitive information, thereby exposing your browser to exploits against the browser itself and to any XSS exploits (I’m not as worried about the VISA site, which doesn’t host user-submitted content, as I am about the shopping sites). If you are not using NoScript, or if you forget to disable JavaScript afterwards, you expose yourself to exploits from every site you visit later. It’s irresponsible and unnecessary: there was nothing in the JavaScript-activated forms (or in the explanations) that couldn’t have been done with regular HTML. It’s all in the name of security…
A fundamental issue I have with this process is that the commands to reach a higher level of security (the registration) are issued in-band, using the very medium and means (the browser) that are only semi-trusted and are part of the problem we’re trying to solve (I realize that this program also addresses other threats, such as the vulnerability of credit card numbers stored by merchants). Moreover, doing so exposes more sensitive credentials. It is almost like hiring a thief as a courier for the new keys to the building, while also giving him the key to the safe where all the keys are stored.
The Visa program also enables a new kind of attack against credit cards. If criminals get their hands on the last 4 digits of your SSN (or simply guess them; four digits allow only 10,000 possible values to brute-force) along with your credit card number, they could register the card themselves, denying you its use! The motivation for this attack wouldn’t necessarily be financial gain, but causing you grief. I also bet that you would have a harder time proving that fraud occurred, and might get stuck with any charges made by the criminals.
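To put that search space in perspective, here is a minimal sketch (Python, purely for illustration) of just how small a four-digit space is:

```python
from itertools import product

# The last four SSN digits span only 10**4 = 10,000 candidates.
candidates = ["".join(d) for d in product("0123456789", repeat=4)]
print(len(candidates))         # 10000
print(len(candidates) / 3600)  # ~2.8 hours at one guess per second
```

At even one guess per second, an unthrottled form lets an attacker exhaust the entire space in under three hours.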
The correct way to register for this program would be through a trusted channel, such as showing up at your bank in person to choose a password for your credit card, or through registered mail with signatures. However, neither option is available to me (I wonder if some banks offer this service, and if so, whether they are simply using the above web site behind the scenes). There should also be a way to decline participation in the program and to block any future registration of the card.
In conclusion, this poorly executed program had a reverse effect on me: I now distrust my Visa card, and Visa itself, a little bit more.
Update: There doesn’t seem to be a limit on the number of times you can try to register a card, enabling brute-force discovery of someone’s last 4 SSN digits (I tried 20 times; at the end I entered the correct number and it worked, proving that the site still accepted attempts after 20 tries). An attacker can then use the last 4 digits of your SSN elsewhere: say, with retirement accounts at Fidelity and other institutions that accept SSNs as user IDs.
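For contrast, here is a minimal sketch of the attempt-throttling one would expect on such a form. The threshold, lockout window, and in-memory storage are all hypothetical; a real service would persist its counters server-side:

```python
import time

MAX_ATTEMPTS = 5        # hypothetical threshold
LOCKOUT_SECONDS = 3600  # hypothetical lockout window

_failures = {}  # card number -> (attempt count, time of first attempt)

def attempt_allowed(card_number: str) -> bool:
    """Record an attempt; return False once the card is locked out."""
    count, first = _failures.get(card_number, (0, time.time()))
    if time.time() - first >= LOCKOUT_SECONDS:
        count, first = 0, time.time()  # window expired: reset the counter
    if count >= MAX_ATTEMPTS:
        return False
    _failures[card_number] = (count + 1, first)
    return True
```

Even something this crude would have stopped my 20 tries long before the correct digits came up.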
For more fun, I attempted to register my credit card again. I received a message stating that the card was already registered, but I was offered the chance to re-register it anyway and erase my previously entered password simply by entering my name, my complete SSN, and my phone number. Isn’t that great? Now attackers can validate my entire SSN!
It gets worse. I entered an incorrect SSN, and the system accepted it. I was then prompted to enter new passwords. The system accepted the new passwords without blinking… Not only is the design flawed, but the implementation fails to properly perform the checks!
[tags]Windows, MacOS, security flaws, patches, press coverage[/tags]
There’s been a lot of froth in the press about a vulnerability discovered in a “Hack the Mac” contest conducted recently. (Example stories here and here.) I’m not sure where this mini-hysteria is coming from—there isn’t really anything shocking here.
First of all, people shouldn’t be surprised that there are security flaws in Apple products. After all, these are complex software artifacts, and the more code and functionality present, the more likely it is that flaws will be present—including serious flaws leading to security problems. Unless special care is taken in design and construction (not evident in any widely-used system), vulnerabilities are likely to be present.
Given that, the discovery of one serious flaw doesn’t necessarily mean there are hundreds more lurking beneath the surface, or that MacOS X is as bad as (or worse than) some other systems. Those bloggers and journalists who have some vulture genomes seem particularly prone to making sweeping announcements each time an Apple-based flaw (or a Linux bug) is disclosed or a story about vulnerabilities is published. Yes, there are some problems, and there are undoubtedly more yet to be found. That doesn’t mean those systems are inherently dangerous, or even as buggy and difficult to protect as, for example, Windows XP. Drawing such conclusions from one or two data points is not appropriate; by the same logic, these people should conclude that eating at restaurants anywhere in the US is dangerous because someone got food poisoning at a roadside stand in Mexico last year!
To date, there appear to be fewer flaws in Apple products than we have seen in some other software. Apple MacOS X is built on a sturdy base (BSD Unix) and doesn’t carry a huge number of backwards-compatibility features, which are often a source of flaws in other vendors’ products. Apple engineers, too, seem to be a little more careful and savvy about software quality issues than other vendors, at least as evidenced by the relative number of crashes and “blue screen” events in their products. The result is that MacOS X is pretty good right out of the box.
Of course, this particular flaw is not in MacOS X itself, but in Java code that is part of the QuickTime package for WWW browsers. The good news is that it is not really a MacOS problem; the bad news is that it is a serious bug that got widely distributed; and the worse news is that it potentially affects other browsers and operating systems.
I have been troubled by the fact that we (CERIAS, and before that COAST) have been rebuffed in every attempt over the last dozen years to make any contact with security personnel inside Apple. I haven’t seen evidence that they are focused on information security in the way that other major companies such as Sun, HP, and Microsoft are, although the steady patching of flaws that have not yet been widely reported outside the company does seem to indicate some expertise and activity somewhere inside Apple. Problems such as this QuickTime flaw don’t give warm fuzzy feelings about that, however.
Apple users should not be complacent. There are flaws yet to be discovered, and users are often the weakest link. Malware, including viruses, can get into MacOS X and cause problems, although it is unlikely ever to match the number and magnitude of the malware that bedevils Windows boxes (one recent article noted that vendors are getting around 125 new malware signatures a day—the majority undoubtedly for Windows platforms). And, of course, Mac machines (and Linux machines, and…) also host browsers and other software that execute scripts and enable attacks. Those who use MS Word have yet more concerns.
The bottom line: no system is immune to attack. All users should be cautious and informed. Apple systems still appear to be safer than their counterparts running Windows XP (the jury is still out on Vista), and they are definitely easier to maintain and use than similarly secured systems running Linux. You should continue to use the system that is most appropriate for your needs and abilities, including your ability to understand and configure security features to meet your security needs. For now, my personal systems continue to be a MacBook Pro (with XP and Vista running under Parallels) and a Sun Solaris machine. Your own mileage should—and probably will—vary.
[tags]Windows, Office, malware, vulnerabilities[/tags]
This appeared in USA Today yesterday: Cyberspies exploit Microsoft Office. This is yet more support for my earlier post.
So, are you ready to join the movement—stop sending Word documents in email?
Update 4/28: And here is yet another story of how Word files are being used against victims.
[posted with ecto]
[tags]Vista, Windows, security, flaws, Microsoft[/tags]
Update: additions from 4/19 and 4/24 appear at the end.
Back in 2002, Microsoft performed a “security standdown” that Bill Gates publicly stated cost the company over $100 million. That extreme measure was taken because of the numerous security flaws popping up in Microsoft products, steadily chipping away at MS’s reputation, customer safety, and internal resources. (I was told by one MS staffer that responding to a major security flaw often cost close to $1 million in staff time, product changes, customer response, and so on. I don’t know if that is true, but the real figure certainly was/is substantial.)
Without a doubt, people inside Microsoft took the issue seriously. They put all their personnel through a security course, invested heavily in new testing technologies, and even went so far as to convene an advisory board of outside experts (the TCAAB)—including some who have not always been favorably disposed towards MS security efforts. Security of the Microsoft code base suddenly became a Very Big Deal.
Fast forward 5 years: when Vista was released a few months ago, we saw lots of announcements that it was the most secure version of Windows ever, but that claim was never otherwise qualified; a cynic might comment that such an achievement would not be difficult. The user population has become habituated to the monthly release of security patches for existing products, with the occasional emergency patch. Bundling the patches together undoubtedly reduces the overhead of producing them, but it also obscures how many distinct flaws are contained in each patch set. The number of flaws may not have decreased all that much from years past.
Meanwhile, reports from inside MS indicate that there was no comprehensive testing of personnel to see how the security training worked and no follow-on training. The code base for new products has continued to grow, thus opening new possibilities for flaws and misconfiguration. The academic advisory board may still exist, but I can’t find a recent mention of it on the Microsoft web pages, and some of the people I know who were on it (myself included) were dismissed over a year ago. The external research program at MSR that connected with academic institutions doing information security research seems to have largely evaporated—the WWW page for the effort lists John Spencer as contact, and he retired from Microsoft last year. The upcoming Microsoft Research Faculty Summit has 9 research tracks, and none of them are in security.
Microsoft seems to project the attitude that they have solved the security problem.
If that’s so, why are we still seeing significant security flaws that affect not only their old software, but also new software written under the new, extra-special security regime, such as Vista and Longhorn? The ANI flaw and the recent DNS flaw are both glaring examples of major problems that shouldn’t have been in current code: the ANI flaw is very similar to a years-old flaw that was already known inside Microsoft, and the DNS flaw is another buffer overflow!! There are even reports that there may be dozens (or hundreds) of patches awaiting distribution for Vista.
Undoubtedly, the $100 million spent back in 2002 was worth something—the code quality has definitely improved. There is greater awareness inside Microsoft about security and privacy issues. I also know for a fact that there are a lot of bright, talented, and very motivated people inside Microsoft who care about these issues. But questions remain: did Microsoft get its money’s worth? Did it invest wisely, and if so, why are we still seeing so many (and so many silly) security flaws? Why does it seem that security is no longer a priority? What does that portend for Vista, Longhorn, and Office 2007? (And if you read the “standdown” article, one wonders also about Mr. Nash’s posterior.)
I have great respect for many of the things Microsoft has done, and admiration for many of the people who work there. I simply wish they had some upper management who would realize that security (and privacy) are ongoing process needs, not one-time problems to overcome with a “campaign.”
What do you think?
[posted with ecto]
Update 4/19: The TCAAB does still continue to exist, apparently, but with a greater focus on privacy issues than security. I do not know who the current members might be.
Update 4/24: I have heard, informally, from someone inside Microsoft in response to this post. He pointed out several issues that I think are valid and deserve airing here:
Many of my questions still remain unanswered, including Mr. Nash’s condition….
TippingPoint’s Zero Day Initiative (ZDI) provides some interesting data. ZDI made its “disclosure pipeline” public on August 28, 2006. As of today, it lists 49 vulnerabilities from independent researchers, which have been waiting on average 114 days for a fix, plus 12 vulnerabilities from TippingPoint’s own researchers. With those included, the average wait for a fix is 122 days, or about 4 months! Moreover, 56 of the 61 are high-severity vulnerabilities, and they come from high-profile vendors: Microsoft, HP, Novell, Apple, IBM Tivoli, Symantec, Computer Associates, Oracle… Some high-severity issues have been languishing for more than 9 months.
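A quick back-of-the-envelope check of those published averages is revealing. The per-group figure for TippingPoint’s own 12 findings is derived from the published numbers, not stated by ZDI:

```python
independent_n, independent_avg = 49, 114  # days, as published
combined_n, combined_avg = 61, 122        # days, as published

in_house_n = combined_n - independent_n   # the 12 in-house findings
in_house_total = (combined_n * combined_avg
                  - independent_n * independent_avg)  # total days for those 12
print(in_house_total / in_house_n)        # ~154.7 days on average
```

If the published figures are exact, TippingPoint’s own discoveries have been waiting even longer than the independent ones: roughly 155 days on average.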
Hmm. ZDI is supposed to be a “best-of-breed model for rewarding security researchers for responsibly disclosing discovered vulnerabilities.” How is it responsible to take 9 months to fix a known but secret high-severity vulnerability? It’s not directly ZDI’s fault that the vendors are taking so long, but ZDI isn’t providing the vendors much incentive either. This suggests that programs like ZDI’s have a pernicious effect. They buy the information from researchers, who are then forbidden from disclosing the vulnerabilities. More vulnerabilities are found due to the monetary incentive, but only people paying for protection services get any peace of mind. The software vendors don’t care much, as the vulnerabilities remain secret. The rest of us are worse off than before, because more vulnerabilities remain secret for an unreasonable length of time.
Interestingly, this is what was predicted several years ago in Kannan, K. and Telang, R. (2005), “Market for Software Vulnerabilities? Think Again,” Management Science 51, pp. 726–740. Their model predicted worse social consequences from these programs than from no vulnerability handling at all, due to races with crackers, increased vulnerability volume, and unequal protection of targets. This makes another conclusion of the paper interesting and likely valid: CERT/CC offering rewards to vulnerability discoverers should provide the best outcomes, because information would be shared systematically and equally. I would add that CERT/CC is also in a good position to find out whether a vulnerability is being exploited in the wild, in which case it can release an advisory and make the vulnerability information public sooner. A vendor like TippingPoint has a conflict of interest in doing so, because doing so decreases the value of its protection services.
I tip my hat to TippingPoint for making their pipeline information public. However, because they give vendors no deadlines and no incentives to patch responsibly, the very existence of their service, and of similar ones from other vendors, hurts those who don’t subscribe. That’s what makes vulnerability protection services a racket.
[tags]monocultures, compliance, standard configurations, desktops, OMB[/tags]
Another set of news items, and another set of “nyah nyah” emails to me. This time, the press has been covering a memo out of the OMB directing all Federal agencies to adopt a mandatory baseline configuration for Windows machines. My correspondents have misinterpreted the import of this announcement to mean that the government is mandating a standard implementation of Windows on all Federal machines. To the contrary, it is mandating a baseline security configuration for only those machines that are running Windows. Other systems can still be used (and should be).
What’s the difference? Quite a bit. The OMB memo is about ensuring that a standard, secure baseline is the norm on any machine running Windows. Because there are so many configuration options that can be set (and set poorly, from a security standpoint), and because there are so many security add-ons, it has not been uncommon for attacks to succeed because of weak configurations. As noted in the memo, the Air Force pioneered some work in decreeing security baseline configurations: by requiring that certain minimum security settings be in place on every Windows machine, it saw a reduction in incidents.
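To make the idea concrete, here is a toy sketch of what auditing a machine against a mandated baseline amounts to. The setting names and required values are invented for illustration; they are not the actual Air Force or OMB baseline:

```python
# Hypothetical baseline: setting name -> required value.
REQUIRED_BASELINE = {
    "PasswordComplexity": "enabled",
    "AutoRun": "disabled",
    "FirewallState": "on",
}

def audit(actual: dict) -> list:
    """Return (setting, actual value, required value) for each deviation."""
    return [
        (name, actual.get(name), required)
        for name, required in REQUIRED_BASELINE.items()
        if actual.get(name) != required
    ]

print(audit({"PasswordComplexity": "enabled", "AutoRun": "enabled"}))
# [('AutoRun', 'enabled', 'disabled'), ('FirewallState', None, 'on')]
```

The real requirements are of course far more extensive, but the principle is the same: compare each machine’s settings against the mandated values and flag the deviations.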
From the Air Force experience, and other studies, including some great work at NIST to articulate useful policies, we get the OMB memo.
This is actually an excellent idea. Unfortunately, the minimum is perhaps a bit too “minimum.” For instance, replacing IE 6 under XP with Firefox would probably be a step up in security. However, to support common applications and uses, the mandated configuration can only go so far without requiring lots of extra (costly) work or simply breaking things. And if too many things get broken, people will find ways around the secure configuration—after all, they need to get their work done! (This is often overlooked by novice managers focused on “fixing” security.)
Considering the historical problems with Linux and some other systems, and the complexity of their configuration, minimum configurations for those platforms might not be a bad idea, either. However, they are not yet used in large enough numbers to prompt such a policy. Any mechanism or configuration where the complexity is beyond the ken of the average user should have a set, minimum, safe configuration.
Note my use of the term “minimum” repeatedly. If the people in charge of enforcing this new policy prevent clueful people from setting stronger configurations, then that is a huge problem. Furthermore, if there are no provisions for understanding when the minimum configuration might lead to weakness or problems and needs to be changed, that would also be awful. As with any policy, implementation can be good or be terrible.
Of course, mandating the use of Windows (2000, XP, Vista or otherwise) on all desktops would not be a good idea for anyone other than Microsoft and those who know no other system. In fact, mandating the use of ANY OS would be a bad idea. Promoting diversity and heterogeneity is valuable for many reasons, not least of which are:
These advantages are not offset by savings in training or bulk purchasing, as some people would claim. They are second-order effects and difficult to measure directly, but their absence is noted…usually too late.
But what about interoperability? That is where standards and market pressure come to bear. If we have a heterogeneous environment, then the market should help ensure that standards are developed and adhered to so as to support different solutions. That supports competition, which is good for the consumer and the marketplace.
And security with innovation and choice should really be the minimum configuration we all seek.
[posted with ecto]