Posts by pmeunier

Login with Facebook, Google and LinkedIn

Is your management considering logins using Facebook, Google or LinkedIn accounts? What are the risks? One consideration is password policies. I experimented to find out which password policies were actually in effect:
Site | Minimum characters | Reuse? | Trivial? | All lower-case? | Expiration
All three prevented the use of trivial passwords such as 123456. However, all accepted a password consisting only of lower-case letters, and none of the services seems to implement password expiration, at least not in a reasonable time frame (1 year or less). Password expiration is necessary to protect against password-guessing attacks, because given enough time a slow trickle of systematic attempts will succeed. The weaker the other password requirements and protections (e.g., the number of tries allowed per minute), the shorter the expiration period should be. In my opinion, all three have weak password policies overall. However, if you *must* have a "login with your X account" feature, I suggest using Google's service and not the others, at least when considering only password policies. Google has the best policy by far (potentially thousands of times stronger), with 8 characters and no re-use of previous passwords.
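These criteria are easy to encode. Here's a minimal sketch (hypothetical code, not any provider's actual checks) of a policy check covering length, triviality, character classes and re-use:

```python
# Hypothetical password-policy check mirroring the criteria tested above:
# minimum length, not a trivial password, not all lower-case letters,
# and no re-use of previous passwords.
TRIVIAL = {"123456", "password", "qwerty", "abc123"}  # illustrative list only

def policy_violations(password, previous=(), min_length=8):
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < min_length:
        problems.append("too short")
    if password.lower() in TRIVIAL:
        problems.append("trivial")
    if password.isalpha() and password.islower():
        problems.append("all lower-case letters")
    if password in previous:
        problems.append("reuses a previous password")
    return problems
```

A real deployment would check against a much larger trivial-password list and store only hashes of previous passwords, but the structure is the same.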

After 16 login failures, Google presents a captcha. This struck me as a large number, but Facebook allows an even greater number of attempts before blocking (I lost count). On Facebook, you can continue login attempts simply by clearing the Facebook cookies in the browser, which apparently provides an unlimited number of login attempts and is a serious weakness against password-guessing attacks. But then, clearing the browser's cookies also bypasses the Google captcha... How disappointing. LinkedIn is the only one that didn't lose track of login attempts when I cleared browser cookies or used a different browser; after 12 failed attempts, it required answering a captcha. So, if you must have two login services, I suggest Google and LinkedIn, and avoiding Facebook.
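The lesson is that attempt counting must be done server-side, keyed on something the attacker can't reset by clearing cookies. A toy sketch (all names are hypothetical; the threshold of 12 mirrors LinkedIn's observed behavior):

```python
# Hypothetical server-side failure counter. Because the count is keyed on the
# account rather than stored in a browser cookie, clearing cookies or switching
# browsers does not reset it.
from collections import defaultdict

CAPTCHA_THRESHOLD = 12  # failed attempts before a captcha is required

failures = defaultdict(int)

def record_failure(account):
    """Count one failed login attempt against the account, server-side."""
    failures[account] += 1

def captcha_required(account):
    """A captcha is demanded once the account crosses the threshold."""
    return failures[account] >= CAPTCHA_THRESHOLD
```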

Other considerations, such as the security of the login mechanism and trustworthiness of the service, are not addressed here.

Looking for fail2ban++

If you're looking for a worthwhile project, here's something that could benefit most security practitioners. The application "fail2ban" has been extremely useful in blocking sources of undesirable behavior such as brute-force attacks on password mechanisms, spammers (by hooking it up to your mail server's rejection log), and hostile vulnerability scanners. However, it only works for IPv4. The discussions (and patches) I've seen for making it work with IPv6 unfortunately focus on making it understand IPv6 addresses, and miss an important point. With IPv6, entities, even home users, will have large networks at their disposal. As a result, it may be futile to block a single IPv6 address. However, blocking whole IPv6 networks at the same threshold as a single IPv4 user may block legitimate users. I need a program that works like fail2ban but allows progressive blocking, as follows: if undesirable behavior is observed from IP addresses within a network of size N past threshold T(N), block the entire network. This would work with multiple network sizes, starting with singleton IPs and scaling up to large networks, with the threshold increasing, and thus becoming more tolerant, the larger the network is. How the threshold changes with the size of the network should be configurable.
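As a sketch of what I mean (the prefix lengths, thresholds and function names below are all hypothetical, not a real fail2ban patch):

```python
# Sketch of progressive blocking for IPv6: count offenses at several prefix
# lengths simultaneously, with a higher (more tolerant) threshold T(N) for
# larger networks. The prefix lengths and thresholds are illustrative only.
import ipaddress
from collections import defaultdict

# T(N): singleton host (/128), typical site (/64), typical allocation (/48)
THRESHOLDS = {128: 5, 64: 50, 48: 500}

counts = defaultdict(int)

def record_offense(addr):
    """Count one offense against every enclosing prefix; return newly blocked networks."""
    ip = ipaddress.ip_address(addr)
    blocked = []
    for plen, limit in THRESHOLDS.items():
        net = ipaddress.ip_network((ip, plen), strict=False)
        counts[net] += 1
        if counts[net] == limit:  # crossed T(N): block the whole network
            blocked.append(net)
    return blocked
```

With this structure, a single abusive host is blocked quickly, while an entire /48 is only blocked after sustained abuse spread across it, making the configurable T(N) curve the interesting design question.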

A corollary of the above is that when we move to IPv6, as some service providers already have, password strength, and the strength of secrets and applications in general, will have to increase, because we will have to tolerate more undesirable behavior before the threshold for the attacker's network size is reached. That tolerance will likely be much greater than, and at a minimum the same as, what we tolerate on IPv4 for a single address.

Can Your IPv4 Firewall Be Bypassed by IPv6 Traffic?

Do you have a firewall? Maybe it's not as useful as you think it is. I was surprised to discover that IPv6 was enabled on several hosts with default firewall policies of ACCEPT and no rules. This allowed IPv6 traffic to completely bypass the numerous IPv4 rules!

IPv6 makes me queasy security-wise due to features such as making all IPv6 hosts into routers that obey source routing, as well as the excessively eager and accepting autoconfiguration. More recent doesn't imply more secure, especially if it's unmanaged because you don't realize it's ON. The issue is IPv6 being enabled by default in a fully open mode. Not everyone realizes this is happening, as we're very much still thinking in terms of IPv4. Even auditing tools such as Lynis (for Linux/UNIX systems) don't report this; Lynis only checks whether the IPv4 ruleset is empty. There are going to be a lot of security problems because of this. I know it's been so for some time, but awareness lags. I'm not the only one who thinks it's going to be a bumpy ride, as pointed out elsewhere.

You can mitigate this issue in several ways, besides learning how to secure IPv6 (which you'll have to do sometime) and using your plentiful spare time to do so enterprise-wide. Changing all the default IPv6 policies to DROP without adding any ACCEPT rules breaks things. For example, Java applications try IPv6 first by default and take several minutes to finally switch over to IPv4; this can be perceived as broken. If you have Ubuntu on your desktop, you can use ufw, the Uncomplicated FireWall, to configure your firewall with a click of the mouse. When "turned on", it changes the default policy to DROP but also adds rules accepting local traffic on the INPUT and OUTPUT chains (well done and thanks, Canonical and Gufw developers). This allows Java applications to contact local services, for example. You can also disable IPv6 in sysctl.conf (and have Java still work) if you have a recent kernel (e.g., Ubuntu 10):

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

followed by a reboot. You can also do this immediately, which will last only until you reboot (note: sudo alone doesn't work, because the output redirection is performed by your unprivileged shell; you need to do "sudo su -" first):

echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/lo/disable_ipv6

This removes the IPv6 addresses assigned to your network interfaces, and Java then ignores IPv6. If you have an "old" kernel (e.g., the most recent Debian) and need to support Java applications, the above kernel options are not available at this time. However, there are other ways to disable IPv6 on Debian, well documented elsewhere. You can also manually add firewall rules like those created by ufw, as described above.

Making the CWE Top 25, 2010 Edition

As I did last year, I was glad to participate in the making of the CWE Top 25. The 2010 edition was produced more systematically and methodically than last year's. We adjusted the level of abstraction of the entries to be more consistent, precise and actionable. For that purpose, new CWE entries were created, so that we didn't have to include a high-level entry simply because there was no other way to discuss a particular variation of a weakness. There was a formal vote with metrics, with a debate about which metrics to use, how to vote, and how to calculate a final score. We moved the high-level CWE entries that could be described as "Didn't perform good practice X" or "Didn't follow principle Y" into a mitigations section, which specifically addresses what X and Y are and why you should care about them. Those mitigations were then mapped against the Top 25 CWE entries that they affect.

For the metrics, CWE entries were ranked by prevalence (P) and importance (I). We used P × I to calculate scores. That makes sense to me because risk is defined as potential loss × probability of occurrence, so by this formula the CWE rankings are related to the risk those weaknesses pose to your software and business. Last year, the CWEs were not ranked; they instead had "champions" who argued for their inclusion in the Top 25.
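For illustration only (the entries and values below are made up, not the actual Top 25 data), the P × I scoring reduces to:

```python
# Illustrative scoring: rank weaknesses by prevalence (P) times importance (I),
# the risk-like formula described above. Entries and values are invented.
entries = {
    "CWE-79 (XSS)": (3.0, 2.5),            # (prevalence, importance)
    "CWE-89 (SQL injection)": (2.5, 3.2),
    "CWE-120 (buffer copy)": (2.0, 3.0),
}

def rank(entries):
    """Return entry names sorted by descending P * I score."""
    scores = {name: p * i for name, (p, i) in entries.items()}
    return sorted(scores, key=scores.get, reverse=True)
```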

I worked on creating an educational profile, with its own metrics (not alone, of course; it wouldn't have happened without Steve Christey, his team at MITRE, and other CWE participants). The Top 25 now has profiles, so depending on your application and concerns, you may select a profile that ranks entries differently and appropriately. The educational profile used prevalence and importance, but also emphasis. Emphasis relates to how difficult a concept is to explain and understand. Easy concepts can be learned in homework or labs, or are perhaps so trivial that they can be learned in the students' own reading time. Harder concepts deserve more class time, provided that they are important enough. Another factor for emphasis was how much a particular CWE helps in learning others, and its general applicability. So, the educational profile tended to include higher-level weaknesses. Also, it considered all historical time periods for prevalence, whereas the Top 25 focuses on data from the last two years. This is similar to the concept of regression testing -- we don't want problems that have been solved to reappear.

Overall, I have a good feeling about this year's work, and I hope that it will prove useful and practical. I will be looking for examples of its use and experiences with it, and of course I'd love to hear what you think of it. Tell us both the good and the bad -- I'm aware that it's not perfect, and it has some subjective elements, but perhaps comments will be useful for next year's iteration.

Cowed Through DNS

May 2010 will mark the 4th anniversary of our collective cowing by spammers, malware authors and botnet operators. In 2006, spammers squashed Blue Frog. They made the vendor of this service, Blue Security, into lepers, as everyone became afraid of being contaminated by association and becoming a casualty of the spamming war. Blue Frog hit spammers where it counted -- in the revenue stream -- simply by posting complaints to spamvertized web sites. It was effective enough to warrant retaliation. DNS was battered into making Blue Security unreachable. Blue Security's then-paying commercial clients were targeted, destroying the business model; so Blue Security folded [1]. I was stunned that the "bad guys" won by brute force and terror, and the security community either was powerless or let it go. Blue Security was even blamed for some of its actions and its approach. Blaming the victims for daring to organize and attempt to defend people, err, I mean for provoking the aggressor further, isn't new. An open-source project attempting to revive the Blue Frog technology evaporated within the year. The absence of interest and progress since has been scary (or scared) silence.

According to most sources, 90-95% of our email traffic has been spam for years now. Not content with this, spammers subject us to blog spam, friend-me spam, IM spam, and XSS (cross-site scripting) spam. That spam, or browser abuse through XSS, convinces more people to visit links and install malware, thus enrolling computers into botnets. Botnets then enforce our submission by defeating Blue Security-type efforts, and extort money from web-based businesses. We can then smugly blame "those idiots" who unknowingly handed over control of their computers, with a slight air of exasperation. It may also be argued that there's more money to be made selling somewhat effective spam-fighting solutions than by emulating a doomed business model. But in reality, we've been cowed.

I had been hoping that the open-source project could survive despite the lack of a business model; after all, the open-source movement seems like a liberating miracle. However, the DNS problem remained. So, even though I didn't use Blue Frog at the time, I have been hoping for almost 4 years now that DNS would be improved to resist the denial-of-service attacks that took Blue Security offline. I have been hoping that someone else would take up the challenge. However, all we have is modest success at (temporarily?) disabling particular botnets, semi-effective filtering, and mostly ineffective reporting. Since then, spammers have ruled the field practically uncontested.

Did you hear about Comcast's deployment of DNSSEC [2]? It sounds like a worthy improvement; it's DNS with security extensions, or "secure DNS". However, denial-of-service (DoS) prevention is out of scope for DNSSEC! It has no DoS protections, and moreover there are reports of DoS "amplification attacks" exploiting the larger size of DNSSEC-aware responses [3]. Hmm. Integrity is not the only problem with DNS! A search of IEEE Xplore and the ACM Digital Library for "DNS DoS" reveals several relevant papers [4-7], including a DoS-resistant, backwards-compatible replacement for the current DNS from 2004. Another alternative, DNSCurve, has protection for confidentiality, integrity and availability (DoS) [8], has just been deployed by OpenDNS [9], and is being proposed to the IETF DNSEXT working group [10]. This example of leadership suggests possibilities for meaningful challenges to organized internet crime. I will be eagerly watching for signs of progress in this area. We've kept our heads low long enough.
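To see why response size matters, note that an attacker who spoofs the victim's address as the query source turns the resolver into a traffic multiplier; the multiplier is simply response size over query size. A toy calculation (the sizes are illustrative, not measurements):

```python
# Back-of-the-envelope amplification factor for a reflection attack: a small
# spoofed query elicits a much larger response aimed at the victim, so the
# attacker's bandwidth is multiplied by response_bytes / query_bytes.
def amplification_factor(query_bytes, response_bytes):
    return response_bytes / query_bytes

# e.g., a ~60-byte query eliciting a ~3000-byte signed response
factor = amplification_factor(60, 3000)
```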

1. Robert Lemos (2006) Blue Security folds under spammer's wrath. SecurityFocus. Accessed at:
2. Comcast DNSSEC Information Center. Accessed at:
3. Bernstein DJ (2009) High-speed cryptography, DNSSEC, and DNSCurve. Accessed at:
4. Fanglu Guo, Jiawu Chen, Tzi-cker Chiueh (2006) Spoof Detection for Preventing DoS Attacks against DNS Servers. 26th IEEE International Conference on Distributed Computing Systems.
5. Kambourakis G, Moschos T, Geneiatakis D, Gritzalis S (2007) A Fair Solution to DNS Amplification Attacks. Second International Workshop on Digital Forensics and Incident Analysis.
6. Hitesh Ballani, Paul Francis (2008) Mitigating DNS DoS attacks. Proceedings of the 15th ACM Conference on Computer and Communications Security.
7. Venugopalan Ramasubramanian, Emin Gün Sirer (2004) The design and implementation of a next generation name service for the internet. Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications.
8. DNSCurve: Usable security for DNS (2009). Accessed at:
9. Matthew Dempsky (2010) OpenDNS adopts DNSCurve. Accessed at:
10. Matthew Dempsky (2010) [dnsext] DNSCurve Internet-Draft. Accessed at:

Drone “Flaw” Known Since 1990s Was a Vulnerability

"The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn't know how to exploit it, the officials said." Call it what it is: it's a vulnerability that was misclassified (some might argue that it's an exposure, but there is clearly a violation of implicit confidentiality policies). This fiasco is the result of the thinking that there is no vulnerability if there is no threat agent with the capability to exploit a flaw. I argued against Spaf regarding this thinking previously; it is also widespread in the military and industry. I say that people using this operational definition are taking a huge risk if there's a chance that they misunderstood either the flaw, the capabilities of threat agents, present or future, or if their own software is ever updated. I believe that for software that is this important, an academic definition of vulnerability should be used: if it is possible that a flaw could conceptually be exploited, it's not just a flaw, it's a vulnerability, regardless of the (assumed) capabilities of the current threat agents. I maintain that (assuming he exists for the sake of this analogy) Superman is vulnerable to kryptonite, regardless of an (assumed) absence of kryptonite on earth.

The problem is that it is logically impossible to prove a negative, e.g., that there is no kryptonite (or that there is no God, etc.). Likewise, it is logically impossible to prove that there does not exist a threat agent with the capabilities to exploit a given flaw in your software. The counter-argument is then that delivery of the software becomes impractical, as the costs and time required escalate to remove risks that are extremely unlikely. However, this argument is mostly security by obscurity: if you know that something might be exploitable, and you don't fix it because you think no adversary will have the capability to exploit it, then in reality you're hoping that they won't find out or be told how (for the sake of this argument, I'm ignoring brute-force computational capabilities). In addition, exploitability is a thorny problem. It is very difficult to be certain that a flaw in a complex system is not exploitable. Moreover, it may not be exploitable now, but may become so when a software update is performed! I wrote about this in "Classes of vulnerabilities and attacks", in which I discussed the concepts of latent, potential and exploitable vulnerabilities. This is important enough to quote:

"A latent vulnerability consists of vulnerable code that is present in a software unit and would usually result in an exploitable vulnerability if the unit was re-used in another software artifact. However, it is not currently exploitable due to the circumstances of the unit’s use in the software artifact; that is, it is a vulnerability for which there are no known exploit paths. A latent vulnerability can be exposed by adding features or during the maintenance in other units of code, or at any time by the discovery of an exploit path. Coders sometimes attempt to block exploit paths instead of fixing the core vulnerability, and in this manner only downgrade the vulnerability to latent status. This is why the same vulnerability may be found several times in a product or still be present after a patch that supposedly fixed it.

A potential vulnerability is caused by a bad programming practice recognized to lead to the creation of vulnerabilities; however the specifics of its use do not constitute a (full) vulnerability. A potential vulnerability can become exploitable only if changes are made to the unit containing it. It is not affected by changes made in other units of code. For example, a (potential) vulnerability could be contained in the private method of an object. It is not exploitable because all the object’s public methods call it safely. As long as the object’s code is not changed, this vulnerability will remain a potential vulnerability only.

Vendors often claim that vulnerabilities discovered by researchers are not exploitable in normal use. However, they are often proved wrong by proof of concept exploits and automated attack scripts. Exploits can be difficult and expensive to create, even if they are only proof-of-concept exploits. Claiming unexploitability can sometimes be a way for vendors to minimize bad press coverage, delay fixing vulnerabilities and at the same time discredit and discourage vulnerability reports. "
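The "potential vulnerability" case can be illustrated with a small contrived example (hypothetical code): the private helper is unsafe in isolation, but every public caller protects it, so the flaw only becomes exploitable if this class's own code later changes.

```python
# Contrived illustration of a "potential vulnerability": the private helper
# performs no bounds checking and relies entirely on its callers, but every
# public entry point calls it safely. A later edit to this class (e.g., a new
# public method passing an unchecked index) would upgrade it to exploitable.
class Record:
    def __init__(self, fields):
        self._fields = list(fields)

    def _field_at(self, index):
        # Potential vulnerability: no validation of index.
        return self._fields[index]

    def first(self):
        # Public method: guarantees the index it passes is in range.
        if not self._fields:
            return None
        return self._field_at(0)
```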

Discounting or underestimating the capabilities, current and future, of threat agents is similar to vendors' claims that a vulnerability is not really exploitable. We know that this has been proven wrong ad nauseam. Add configuration problems to the use of the "operational definition" of a vulnerability in the military and among their contractors, and you get an endemic potential for military catastrophes.

Talking to the Police All the Time

I started writing this entry while thinking about the "if you have nothing to hide, then you have nothing to fear" fallacy. What do you say to someone who says that they have nothing to hide, or that some information about them is worthless anyway, so they don't care about some violation of their privacy? What do you say to a police officer who says that if you have nothing to hide then you have nothing to fear by answering questions? It implies that if you refuse to answer then you're probably "not innocent". That "pleading the 5th" is now used in light banter as a joking admission of guilt is a sign of how pervasive the fallacy has become. It's followed closely by "where there's smoke there's probably fire" when discussing someone's arrest, trial, or refusal to answer questions. However, in the field of information security, it is an everyday occurrence to encounter people who don't realize the risks to which they are exposed. So why is this fallacy so irritating?

Those kinds of statements expose naïveté or, if intended as manipulation, perversity. It takes a long time to explain the risks, convince others that they are real and that they are really exposed to them, and spell out what the consequences might be. Even if you could somehow manage to explain it convincingly on the spot, chances are that you'll be dismissed as a "privacy nut" before you're done. In addition, you rarely have that kind of time to make a point in a normal discussion. So, the fallacy is often a successful gambit simply because it discourages anyone from trying to explain why it's so silly.

You may buy some time by mentioning anecdotes such as the man falsely accused of arson because, by coincidence, he bought certain things in a store at a certain time (betrayed by his grocery loyalty card) [1]. Or, there's the Indiana woman who bought for her sick family just a little too much medication containing pseudoephedrine, an ingredient used in the manufacture of crystal meth [2]. Possibilities for the misinterpretation of data or the inappropriate enforcement of bad laws are multiplied by the ways in which the data can be obtained. Police can stick a GPS-tracking device on anyone they want without getting a search warrant [3], or routinely use your own phone's GPS [4]. Visiting a web page, regardless of whether you used an automated spider, clicked on a link manually, were perhaps tricked into doing it, or were framed by a malicious or compromised web site, can trigger an FBI raid [5] (remember goatse? Except it's worse, with a criminal record for you). There are also the dumb things people post themselves, for example on Facebook, causing them to lose jobs or job opportunities, or even get arrested [6].

Regardless, people always think that happens only to others, that "they were dumb and I'm not", or that these are isolated incidents. This is why I was delighted to find this video of a law professor explaining why talking to police can be a bad idea [7]. Even though I knew that "everything you say can be used against you", I was surprised to learn that nothing you say can be used in your defense. This asymmetry is a rather convincing argument for exercising 5th Amendment rights. Then there are the chances that, even though you are innocent, due to stress or excitement you will exaggerate or say something stupid. For example, you might say you've never touched a gun in your life -- except you did once, a long time ago as a teen perhaps, and forgot about it, but there's a photo proving that you lied (apparently, that you didn't mean to lie matters little). People say stupid things in less stressful circumstances. Why take the chance? There are also coincidences that look rather damning and bad for you. Police sometimes make mistakes as well. The presentation is well made and very convincing; I recommend viewing it.

There are so many ways in which private information can be misinterpreted and used against you or to your disadvantage, and not just by police. Note that I agree that we need an effective police force; however, there's a line between that and a surveillance society making you afraid to speak your mind in private, afraid to buy certain things at the grocery store, afraid to go somewhere or visit a web site, or afraid of chatting online with your friends, because you never know who will use anything you say or do against you and put it in the wrong context. In effect, you may be speaking to the police all the time without realizing it. Even though, considering each method separately, it can be argued that technically there isn't a violation of the 5th Amendment, the cumulative effect may violate its intent.

Then, after I wrote most of this entry, Google CEO Eric Schmidt declared that "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place" [8]. I'm afraid that's a realistic assessment, even for lawful activities, given the "spying guides" published for law enforcement by the likes of Yahoo!, Verizon, Sprint, Cox, SBC, Cingular, Nextel, GTE and Voicestream, available at Cryptome [9]. The problem is that you'll then live a sad life devoid of personal liberties. The alternative, a shrug and ignorance of the risks, is bliss -- until it happens to you.

[1] Brandon Sprague (2004) Fireman attempted to set fire to house, charges say. Times Snohomish County Bureau, Seattle Times. Accessed at

[2] Mark Nestmann (2009) Yes, You ARE a Criminal…You Just Don't Know it Yet. In "Preserving your privacy and more", November 23 2009. Accessed at

[3] Chris Matyszczyk (2009) Court says police can use GPS to track anyone. Accessed at

[4] Christopher Soghoian (2009) 8 Million Reasons for Real Surveillance Oversight. Accessed at

[5] Declan McCullagh (2009) FBI posts fake hyperlinks to snare child porn suspects. Accessed at:

[6] Mark Nestmann (2009) Stupid Facebook Tricks. In "Preserving your privacy and more", November 27 2009. Accessed at

[7] James Duane (2008) Don't Talk to Police.

[8] Ryan Tate (2009) Google CEO: Secrets Are for Filthy People. Accessed at

[9] Cryptome. Accessed at
Last edited Jan 25, as per emumbert1's suggestion (see comments).

“Verified by VISA”: Still Using SSNs Online, Dropped by PEFCU

I have written before about the "Verified by VISA" program. While shopping online for Thanksgiving this year, I noticed that Verified by Visa scripts were blocked by NoScript, and that I could complete my purchases without authenticating. It was tempting to conclude that the implementation was faulty, but a few phone calls clarified that the Purdue Employees Federal Credit Union had stopped participating in the program. I have ambivalent feelings about this. I'm glad that PEFCU let us escape from the current implementation, with its surprise enrollment based on SSN at the time of purchase and its SSN-based password reset. Yet I wish a password-protection system were in place, because it could significantly improve security (see below). Getting such a system to work is difficult because, in addition to enrolling customers, both banks and merchants have to support it. For the sake of curiosity, I counted the number of participating stores in various countries, as listed on the relevant VISA web sites:
Country | Number of stores
Hong Kong | 20
Multiply this by the fraction of participating banks (data not available for the US), and for a program that started in 2001, that's spotty coverage. Adoption would be better served by getting people to enroll when applying for credit cards, when making a payment, by mail at any time, or in person at their bank. The more people adopt it, the keener stores and banks will be to reduce their risk, as the cost per participating card holder decreases. Ambushing people at the time of an online purchase with an SSN request violates the security principle of psychological acceptability. The online password reset based on entering your SSN, which I had criticized, still exposes people to SSN-guessing risks, and is also the only means to change your password. I wish that VISA would overhaul the implementation and use an acceptable process (e.g., a nonce-protected link sent via email, leading to a page with a security question). The reason I'm interested is that I'd rather have a password-protected credit card, and a single password to manage, than a hundred-plus online shopping accounts that keep my credit card information with varying degrees of (in)security. Using an appropriate choke point would reduce the attack surface, memorization requirements, and identity theft.
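The nonce-protected link I'm suggesting is straightforward to build. Here's a minimal sketch (hypothetical code, not VISA's implementation) of issuing and redeeming a single-use, expiring reset token in place of SSN-based verification:

```python
# Sketch of a nonce-protected password-reset link: a random, single-use,
# expiring token is emailed to the account holder instead of asking for an SSN.
import secrets
import time

TOKEN_TTL = 3600   # seconds the emailed link stays valid (illustrative)
pending = {}       # token -> (account, expiry time)

def issue_reset_token(account, now=None):
    """Create an unguessable nonce for the reset link emailed to the user."""
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(32)
    pending[token] = (account, now + TOKEN_TTL)
    return token

def redeem_reset_token(token, now=None):
    """Return the account if the token is valid and unused; else None.
    pop() makes the token single-use."""
    now = time.time() if now is None else now
    account, expiry = pending.pop(token, (None, 0.0))
    return account if now < expiry else None
```

A production version would also require the security question mentioned above before accepting a new password, and would store tokens server-side in hashed form.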

Firefox Vulnerabilities: Souvenirs of Windows 95

I've been waiting for an announcement of vulnerabilities in Firefox due to popular extensions. I've compared it to Windows 95 before. Yet students often opine that Firefox is more secure than Internet Explorer. It is worth repeating this explanation from the announcement:

"Mozilla doesn't have a security model for extensions and Firefox fully trusts the code of the extensions. There are no security boundaries between extensions and, to make things even worse, an extension can silently modify another extension."

Asking which of Firefox and Internet Explorer is more secure is like asking which of two random peasants is wealthier. Both might be doing their best, and there may be significant differences, but I wouldn't expect either to be a financier. While I'm running with this analogy, let me compare the widespread and often mandatory use of client-side scripts in websites (e.g., JavaScript) to CDOs: both are designed by others with little interest in your security; they leverage your resources for their benefit; and they are opaque, complex, nearly impossible to audit, and therefore untrustworthy. Both have also caused a lot of damage, as having scripting enabled is required for many attacks on browsers. How much smaller would botnets be without scripting? Like CDOs, scripting is a financial affair; it is needed to support advertising and to measure the number of visitors and click-throughs. Scripting will stay with us because there's money involved, and if advertisers had their way, there would be no option to disable plugins and JavaScript, nor would there be extensions like NoScript. To be fair, there are beneficial uses for JavaScript, but it's a tangled mess with a disputable net value. Here's my take on media and advertising:

Every medium supported exclusively by advertising tends to have a net value of zero for viewers and users (viewsers?). This is where radio and TV are right now. If the value was significantly higher than zero, advertisers could wring more profits from it, for example by increasing the duration or number of annoying things, polluting your mind or gathering and exploiting more information. If it was significantly less than zero, then they would lose viewership and therefore revenue.

So, with time, and if advertising is allowed to power all websites through the requirement for scripting and JavaScript, surfing the web will become as pleasant, useful and watchable as TV (with the difference that your TV can't be used -- yet -- to attack people and other nations). I don't mind being locked out of websites that critically depend on advertising revenue -- just as I don't watch TV anymore, because it has become a negative value proposition for me. However, I do mind being needlessly exposed to risks due to other people's decisions when I use other websites. I'm looking forward to the "component directory lockdown" in Firefox 3.6 as a step in the right direction; that's the bright light at the end of the tunnel: some things are improving.

Cassandra Firing GnuPG Blanks

A routine software update (a minor revision number) caused a serious problem. A number of blank messages were sent until we realized that attempts to sign messages with GnuPG from PHP resulted in empty strings. If you received a blank message from Cassandra, you can find out what it was about by logging to the service. Then click on the affected profile name (from the subject of the email), then "Search" and "this month". This will retrieve the latest alerts over an interval of one month for that profile. Messages will not be signed until we figure out a fix. We're sorry for the inconvenience. Edit (Monday 11/2, noon): This has been fixed and emails are signed again. I also added a pre-flight test to detect this condition in the future.