It's confirmed -- Skype is revealing traffic to Microsoft.
A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:
188.8.131.52 - - [30/Apr/2013:19:28:32 +0200]
"HEAD /.../login.html?user=tbtest&password=geheim HTTP/1.1"
[Image: The access is coming from systems which clearly belong to Microsoft. Source: Utrace]
They too had received visits to each of the HTTPS URLs transmitted over Skype, from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for a private cloud-based file-sharing service.
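For the curious, the test is easy to repeat. Here's a minimal sketch in Python of how one might do it, assuming a web server whose access log you can read; the domain and paths are illustrative, not heise's actual setup:

```python
# Sketch: reproduce the heise test. Mint a unique, unguessable HTTPS URL,
# send it over Skype and nowhere else, then watch who fetches it.
import secrets

token = secrets.token_hex(16)
probe_url = f"https://example.net/probe/{token}/login.html"
print("Send this over Skype, and only over Skype:", probe_url)

# Hours later: scan the access log for anyone hitting the token.
with open("/var/log/apache2/access.log") as log:  # illustrative path
    for line in log:
        if token in line:
            print("Probe URL was fetched:", line.strip())
```

If the only copy of the URL went over Skype, any hit in the log came via Skype's scanning.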
Now, the boys & girls at Heise are switched-on, unlike their counterparts on the eastern side of the pond. Notwithstanding, Adam Back of hashcash fame has confirmed the basics: URLs he sent to me over Skype were picked up and probed by Microsoft.
What's going on? Microsoft commented:
In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy:
"Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links."
A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites.
Which means Microsoft can scan ALL messages to ANYONE. Which means they are likely fed into Echelon, either already, or just as soon as someone in the NSA calls in some favours. 10 minutes later they'll be realtimed to support, and from thence to datamining because they're pissed that google's beating the hell out of Microsoft on the Nasdaq.
Or is that exaggeration? Perhaps it's all fine and dandy, as all the NSA is interested in is matching the URLs to jihadist websites. I don't care so much for the towelheads. But, from the manual of citizen control comes this warning:
First they came for the jihadists,
and I didn't speak out because I wasn't a jihadist.
Then they came for the cypherpunks,
and I didn't speak out because I wasn't a cypherpunk.
Then they came for the bloggers,
and I didn't speak out because I wasn't a blogger.
Then they came for me,
and there was no one left to speak for me.
Skype, game over.
Quotes that struck me as on-point: Chris Skinner says of SEPA, the Single Euro Payments Area:
One of the key issues is that when SEPA was envisaged and designed, counterparty credit risk was not top of the agenda; post-Lehman Brothers crash and it is.
What a delight! Oh, to design a payment system without counterparty risk ... Next thing they'll be suggesting payments without theft!
Meanwhile Dan Kaminsky says in delicious counterpoint, commenting on Bitcoin:
But the core technology actually works, and has continued to work, to a degree not everyone predicted. Time to enjoy being wrong. What the heck is going on here?
First of all, yes. Money changes things.
A lot of the slop that permeates most software is much less likely to be present when the developer is aware that, yes, a single misplaced character really could End The World. The reality of most software development is that the consequences of failure are simply nonexistent. Software tends not to kill people and so we accept incredibly fast innovation loops because the consequences are tolerable and the results are astonishing.
BitCoin was simply developed under a different reality.
The stakes weren’t obscured, and the problem wasn’t someone else’s.
They didn’t ignore the engineering reality, they absorbed it and innovated ridiculously.
Welcome to financial cryptography -- that domain where things matter. It is this specialness, that one's code actually matters, that makes it worthwhile.
Meanwhile, from the department of lolz, comes Apple with a new patent -- filed at least.
The basic idea, described in a patent application “Ad-hoc cash dispensing network”, is pretty simple. Create a cash dispensing server at Apple’s datacenter, to which iPhones, iPads and Macs can connect via a specialized app. Need some quick cash right now and there’s no ATM around? Launch the Cash app, and tell it how much you need. The app picks up your location, and sends the request for cash to nearby iPhone users. When someone agrees to front you $20, his location is shown to you on the map. You go to that person, pick up the bill and confirm the transaction on your iPhone. $20 plus a small service fee is deducted from your iTunes account and deposited to the guy who gave you the cash.
The good thing about being an FCer is that you can design that one over beers, and have a good belly laugh for the same price. I don't know how to put it gently, but hey guys, don't do that for real, ok?!
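For the record, here's the over-beers version -- a toy sketch in Python of the flow the patent describes. Every name is hypothetical, there is no security whatsoever, and that is rather the point:

```python
# Toy sketch of the "ad-hoc cash dispensing network" flow.
# Hypothetical names, zero security -- do not do this for real.
from dataclasses import dataclass

@dataclass
class CashRequest:
    requester: str
    amount: float
    location: tuple  # (lat, lon)

def nearby(a, b, radius=0.01):
    # Napkin-quality "nearby" test on raw coordinates.
    return abs(a[0] - b[0]) < radius and abs(a[1] - b[1]) < radius

def match(request, lenders):
    # Broadcast the request to nearby users; first taker wins.
    for lender, loc in lenders.items():
        if nearby(request.location, loc):
            return lender
    return None

def settle(request, lender, fee=0.50):
    # "Deduct from the requester's iTunes account, credit the lender."
    # Left as exercises: fraud, coercion, mugging-by-appointment...
    print(f"Charge {request.requester} ${request.amount + fee:.2f}, "
          f"credit {lender} ${request.amount:.2f}")

req = CashRequest("alice", 20.0, (52.520, 13.400))
lender = match(req, {"bob": (52.521, 13.401), "carol": (48.85, 2.35)})
if lender:
    settle(req, lender)
```

The belly laugh is in the comments.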
All by way of saying, financial cryptography is where it's at!
Ironically, xkcd nails it -- at least one can draw the picture. What instructions would one draw for secure browsing these days?
In the closing days of 2012, another CA was caught out making mistakes:
2012.5 -- A CA issued 2 intermediate roots to two separate customers on 8th August 2011 (Mozilla mail/Mert Özarar). The process that allowed this to happen was discovered later on, fixed, and one of the intermediates was revoked. On 6th December 2012, the remaining intermediate was placed into an MITM context and used to issue an unauthorised certificate for *.google.com (DarkReading). These certificates were detected by Google Chrome's pinning feature, a recent addition. "The unauthorized Google.com certificate was generated under the *.EGO.GOV.TR certificate authority and was being used to man-in-the-middle traffic on the *.EGO.GOV.TR network" (Wired). Actions: vendors revoked the intermediates (Microsoft, Google, Mozilla). Damages: Google will revoke Extended Validation status on the CA in January's distro, and Mozilla froze a new root of the CA that was pending inclusion.
I collect these stories for a CA risk history, which can be useful in risk analysis.
Beyond that, what is there to say? It looks like this CA made a mistake that let some certs slip out. It caught one of them later, not the other. The owner/holder of the cert at some point tried something different, including an MITM. One can see the coverup proceeding from there...
Mistakes happen. This so far is somewhat distinct from issuing root certs for the deliberate practice of MITMing. It is also distinct from the overall risk equation that says that because all CAs can issue certs for your browser, only one compromise is needed, and all CAs are compromised. That is, brittle.
But what is now clear is that the trend started in 2011 is confirmed in 2012 - we have 5+ incidents in each year. For many reasons, the CA business has reached a plateau of aggressive attention. It can now consider itself under attack, after 15 or so years of peace.
In news that might bemuse, Facebook is in the process of turning on SSL for all time. In this it is following google and others. They, meaning google and Co., are in turn following yet others, including EFF and Mozilla.
And those, in their turn, are following Tyler, Amir, Ahmad and yours truly.
We have been pushing for the use of all-authenticated web pages for around 8 years now. The reason is complicated, and it is *nothing to do with wifi*, but it's ok to use that excuse if that is easier to explain. It is really all about phishing, which causes an MITM against a web-user (SSL or not). The reason is this: if we have SSL always on, then we can rely on a whole bunch of other protections to lock in the user: pinning and client certificates spring to mind, but also never forget that the CA was supposed to show the user she was on her own bank, not somewhere else.
But, without SSL always on, solutions were complicated, impossible, or easily tricked. So a deep analysis concluded, back in the mid-2000s, that we had to move the net across to all-SSL: only SSL, for any site with user interaction. (Which since then has become all of them -- remember that surfing in a basic read-only mode was possible in those days...)
A project was born. Slowly, TLS/SNI was advanced. Browsers experimented with new/old SSL display ideas. All browsers upgraded to SSL v3 and then to TLS. Servers followed suit, s.l.o.w.l.y.... SSL v2 got turned off, painfully. Various projects sprang up to report on SSL weaknesses, although they don't report on the absence of SSL, the greatest weakness of them all... OK, small baby steps, let's not rush it. Indeed - the reason my long-suffering readers have to deal with this site in SSL is because of that project. We eat my dogfood.
And, finally, some leaders started doing more widespread SSL.
(For those old timers who remember - this is how it was supposed to be. SSL was supposed to be always on. But back in 1995, it was discovered to be too expensive, so the business folks split the website and broke the security model (again!). Now, there is no such excuse, and google reports somewhere that there was no hit to its performance. 15 years later :) )
This is good news - we have reached a major milestone. I'll leave you with this one thought.
This response all started with phishing. Which started in 2001, and got really going by 2003. Now, if we call Facebook the midpoint of the response ("before FB, you were early, after, you're a laggard!"), we can conclude that the Internet's security lifecycle, or the OODA loop, is a decade long.
Reading the FinServices' tongue-in-cheek prediction that "we should all be using Biometrics," I was struck by an old security aphorism:
Something you know, something you have, something you are.
The idea being that each of these was an independent system, so if we had a weak system in each domain, we could construct a strong system by redundantly combining all three. It wasn't perfect, it was a classical strength-through-redundancy design, but you could be forgiven for thinking it was the holy grail because it was repeated so often by security people.
Meanwhile, life has moved on. And it has moved on to the point where we now have a convergence of these things into one:
the mobile phone
The mobile (or cell, or handy as it is known) is decidedly something you have - and we can imagine Bluetooth protocols to authenticate in a wireless context. We have SMS, RFID and NFC for those who like acronyms.
It is also something you know. The individual knows how to fire up and run the apps on her phone, more so than anyone else - smartphones these days have lots of personality for our users to relish. It is just an application design question how best to show that this woman knows her own phone and others do not. Trivial solutions left to the reader.
Finally -- the phone is something you are. If you don't follow that, you've been in a cave for the last decade. Start your catchup by buying an iPhone and asking your 13 year old to install some apps. Watch that movie _The Social Network_. Install the facebook app, perhaps related to that movie.
The mobile phone is something you know, have and are. It will become the one security Thing of the future. This one Thing has some good aspects, and some bad aspects too. For one, if you lose the One, you're screwed. Even with obvious downsides, users will choose this one option, and as systems providers, we might as well bind with them over it.
With further apologies to J.R.R. Tolkien,
One Thing to rule them all,
One Thing to find them,
One Thing to bring them all
and in the darkness bind them.
One of the essential requirements of any system is that it actually has to work for people, and work enough of the time to make a positive difference. Unfortunately, this effect can be confused with security systems because attacks can either be rare or hidden. In such an environment, we find persistent emphasis on strong branding more than proven security out in the field.
SSL has frequently been claimed to be the world's most successful and most complicated security system - mostly because almost everything in SSL is oriented to relying on certificates, which are their own market-leading complication. It has therefore been suggested (here and in many other places) that SSL's protection is somewhere between mostly harmless, and mildly annoying but useful. Here's more evidence along those lines:
"Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security"
...The most common approach to protect data during communication on the Android platform is to use the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols. To evaluate the state of SSL use in Android apps, we downloaded 13,500 popular free apps from Google’s Play Market and studied their properties with respect to the usage of SSL. In particular, we analyzed the apps’ vulnerabilities against Man-in-the-Middle (MITM) attacks due to the inadequate or incorrect use of SSL.
Some headlines, paraphrased: a noticeable fraction of the apps accepted any certificate or any hostname at all, and were thus trivially open to MITM.
With numbers like these, we can pretty much conclude that SSL is unreliable in that domain - no real user can even come close to relying on its presence.
The essential cause of this is that the secure browsing architecture is too complicated to work. It relies on too many "and that guy has to do all this" exceptions. Worse, the Internet browsing paradigm requires that the system work flawlessly or not at all, which conundrum this paper perhaps reveals as: flawlessly hidden and probably not working either.
This is especially the case in mobile work, where fast time cycles and compact development contexts conspire to make security less of a leading requirement. Critics will complain that the app developers should fix their code, and that SSL works fine when it is properly deployed. Sure, that's true; but as proper deployment consistently doesn't happen, the SSL architecture is at fault for its low rates of reliable deployment. If a security architecture cannot be reliably deployed in adverse circumstances such as mobile, why would you bother?
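To make the failure concrete: the classic mistake the paper catalogues is an app that switches certificate validation off to make an error go away. The paper's subjects are Java/Android, but the same class of mistake translates to any language; here it is in Python terms, as an illustration only:

```python
# The anti-pattern: an "HTTPS" connection that accepts any certificate,
# which is an open invitation to a Man-in-the-Middle.
import ssl
import urllib.request

# BROKEN: no certificate or hostname verification at all.
insecure = ssl._create_unverified_context()

# Correct: verify against the platform's trust store.
secure = ssl.create_default_context()

# Only the second of these deserves to be called HTTPS.
urllib.request.urlopen("https://example.com/", context=secure)
```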
So where do we rate SSL? Mostly harmless, a tax on the Internet world, or a barrier to a better system?
You saw it here first :) Kaspersky has dipped into the payments market with a thing called Safe Money:
A new offering found in Kaspersky Internet Security is Safe Money, Kaspersky Lab's unique technology designed to protect the user's money when shopping and banking online. To keep your cash safe, Kaspersky Internet Security's Safe Money will:
- Automatically activate when visiting most common payment services (PayPal, etc.), banking websites, and you can easily add your own bank or shopping websites to the list.
- *Isolate your payment operations in a special web browser to ensure your transactions aren't monitored*.
- Verify the authenticity of the banking or payment website itself, to ensure the site isn't compromised by malware, or a fake website designed to look authentic.
- Evaluate the security status of your computer, and warn about any existing threats that should be addressed prior to making payments.
- Provide an onscreen Virtual Keyboard when entering your credit card or payment information so you can enter your information with mouse-clicks. This will activate a special program to prevent malware from logging any keystrokes on your physical keyboard.
The #2 tip is the same as the one that's held the #2 spot on this website for years - use one browser for common stuff and another for your online payments. This simple trick builds a good solid firewall between your money and the crooks' hands. What's more, it's easy enough to teach grandma and grandson.
(For those lost for clue, download Chrome and Firefox. I advise using Safari for banking, Firefox for routine stuff and Chrome for google stuff. Whatever you do, keep that banking browser closed and locked down, until it is time to bring it up, switch to privacy mode, and type in the URL by hand.)
Aside from Kaspersky's thumbs-up for the #2, what else can we divine? If Kaspersky, one of the more respected anti-virus providers, has decided to dip its toe into payments protection, this might be a signal that phishing and malware are not reducing. Or, at least, that the perception has not diminished.
Out there in user-land, people don't really trust browsers to do their security, and since GFC-1 they don't really trust banks either. (They've never ever trusted CAs.) This doesn't mean they can voice, explain or argue their mistrust, but it does mean that they feel the need - it is this perception that Kaspersky hopes to sell into.
Chances are, it's a good pick, if only because we're all going to die before any of these providers deals with their cognitive dissonance on trust. Good luck Kaspersky, and hopefully you won't succumb to it either.
Several cases in the USA of online theft via bank account hackery are now resolving. Here's one:
Village View Escrow Inc., which in March 2010 lost nearly $400,000 after its online bank account with Professional Business Bank was taken over by hackers, has reached a settlement with the bank for an undisclosed amount, says Michelle Marsico, Village View's owner and president.
As a result of the settlement, Village View recovered more than the full amount of the funds that had been fraudulently taken from the account, plus interest, the company says in a statement.
And two more:
Two similar cases, PATCO Construction Inc. vs. Ocean Bank and Experi-Metal Inc. vs. Comerica Bank, raised questions about liability and reasonable security, yet each resulted in a different verdict.
In 2010, PATCO sued Ocean Bank for the more than $500,000 it lost in May 2009, after its commercial bank account with Ocean Bank was taken over. PATCO argued that Ocean Bank was not complying with existing FFIEC requirements for multifactor authentication when it relied solely on log-in and password credentials to verify transactions.
Last year, a District Court magistrate found the bank met legal requirements for multifactor authentication and dismissed the suit.
In December 2009, EMI sued Comerica after more than $550,000 in fraudulent wire transfers left EMI's account.
In the EMI ruling, the court found that Comerica should have identified and disallowed the fraudulent transactions, based on EMI's history, which had been limited to transactions with a select group of domestic entities. The court also noted that Comerica's knowledge of phishing attempts aimed at its clients should have caused the bank to be more cautious.
In the ruling, the court required Comerica to reimburse EMI for the more than $560,000 it lost after the bank approved the fraudulent wire transfers.
Here's how it happens. There will be many of these. Many of the victims will sue. Many of the cases will lose.
Those that lose are irrelevant. Those that win will set the scene. Eventually some precedent will be found, either at law or at reputation, that will allow people to trust banks again. Some more commentary.
The reason for the inevitability of this result is simple: society and banks both agree that we don't need banks unless the money is safe.
Online banking isn't safe. It behoves the banks to make it safe. We're in the phase where the courts of law and public opinion are working to get that result.
I don't normally copy directly from others, but the following post by Bruce Schneier provides an introduction to one of the most important topics in Information Security. I have tried long and hard to write about it, but the topic is messy, controversial and hard evidence is thin. Here goes...
Bruce Schneier writes in Forbes about the booming vulnerabilities market:
All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the "equities issue," and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone's communications more secure, or they could exploit the flaw to eavesdrop on others -- while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I've heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.
It's probably worth reading the rest of the article too - I only took the one para talking about the Equities Debate.
What's it about? Well, in a nutshell, the intelligence community debated long and hard about whether to allow the world's infosec infrastructure to be vulnerable, so as to assist spying. Or not.... Is there such a stark choice? The answer to this is a bit difficult to prove, but I'm going to put my money on YES: for the NSA it is either/or. The reason for this is that when the NSA goes for vulnerabilities, there are all sorts of flow-on effects:
Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once these sorts of research slow down, the increasing ignorance hurts us all.
I remember this from my early formative years in University - security work was considered a bad direction to head. As you got into it you found all of these restrictions and regulations. It just had a bad taste, the brighter people were bright enough to go elsewhere.
In general, it seems that for the times the NSA has erred on the side of "YES! let them be weak," we are now counting the cost. If you look back through the last couple of decades, the mantra is very clear: security is an afterthought. That's in large part because almost nobody coming out of the training camps is steeped in it. We got away with this for a decade or two while the Internet was in its benign phase - the 1990s, spam, etc.
But that's all changed now. Chickens now come home to roost. For one example, if you look at the timeline of CA attacks over the last decade, there is a noticeable spike in 2011. For another, look at Stuxnet and Flame as cyberweapons of inspiration.
Which brings costs to everyone.
I personally think the Equity Issue within the NSA is perhaps the single most important information security influence, ever. Their mission is dual: to protect and to listen. By choosing vulnerability over protection, we have all suffered. We are now in the cost-amortisation phase; for the next decade we will suffer a non-benign Internet risk environment.
Next time you read of the US government banging the cyberwar drum in order to rustle up budget for cyberwarriors, ask them if they've re-thought the equity issue, and why we would provide funds for something they created in the first place?
Lynn was there at the beginning of SSL, and sees where I'm going with this. He writes in Comments:
> I have a similar jaundiced view of a lot of the threat model stuff in the mid-late 90s ... Lots of parties had the solution (PKI) and they were using the facade of threat models to narrowly focus in portion of problem needing PKI as solution (they had the answer, now they needed to find the question). The industry was floating business plans on wall street $20B/annum for annual $100 digital certificates for individuals. We had been called in to help wordsmith the cal. state electronic signature legislation ... and the industry was heavily lobbying that the legislation mandate (annual, individual) digital certificates.
Etc, etc. Yes, this is the thing that offends - there is a flaw in pure threat modelling, and that flaw was enough to drive an industry through. We're still paying the dead-weight costs, and will be for some time.
The question is whether the flaw in threat modelling can be repaired by patches like "oh, you should consider the business" or whether we need to recognise that nobody ever does. Or can. Or gets paid to.
In which case we need a change in metaphor. A new paradigm, as they say in the industry.
I'm thinking the latter. Hence, Risk Analysis.
So, you want to know where the leading thinkers are in security today?
Coviello called for the industry to rally together to take the following actions: Change how we think about security. The security industry must stop thinking linearly, "...blindly adding new controls on top of failed models. We need to recognize, once and for all, that perimeter-based defenses and signature-based technologies are past their freshness dates, and acknowledge that our networks will be penetrated. We should no longer be surprised by this," Coviello said.
Can you say, firewalls & SSL? It's so long ago that this metaphor was published by Gunnar that I can't even remember. But here's his firewalls & SSL infosec debt clock, starting 1995.
Google offers $1 million reward to hackers who exploit Chrome
By Dan Goodin | Published February 27, 2012 8:30 PM
Google has pledged cash prizes totaling $1 million to people who successfully hack its Chrome browser at next week's CanSecWest security conference.
Google will reward winning contestants with prizes of $60,000, $40,000, and $20,000 depending on the severity of the exploits they demonstrate on Windows 7 machines running the browser. Members of the company's security team announced the Pwnium contest on their blog on Monday. There is no splitting of winnings, and prizes will be awarded on a first-come-first-served basis until the $1 million threshold is reached.
Now in its sixth year, the Pwn2Own contest at the same CanSecWest conference awards valuable prizes to those who remotely commandeer computers by exploiting vulnerabilities in fully patched browsers and other Internet software. At last year's competition, Internet Explorer and Safari were both toppled but no one even attempted an exploit against Chrome (despite Google offering an additional $20,000 beyond the $15,000 provided by contest organizer Tipping Point).
Chrome is currently the only browser eligible for Pwn2Own never to be brought down. One reason repeatedly cited by contestants for its lack of attention is the difficulty of bypassing Google's security sandbox.
"While we’re proud of Chrome’s leading track record in past competitions, the fact is that not receiving exploits means that it’s harder to learn and improve," wrote Chris Evans and Justin Schuh, members of the Google Chrome security team. "To maximize our chances of receiving exploits this year, we’ve upped the ante. We will directly sponsor up to $1 million worth of rewards."
In the same blog post, the researchers said Google was withdrawing as a sponsor of the Pwn2Own contest after discovering rule changes allowing hackers to collect prizes without always revealing the full details of the vulnerabilities to browser makers.
"Specifically, they do not have to reveal the sandbox escape component of their exploit," a Google spokeswoman wrote in an email to Ars. "Sandbox escapes are very dangerous bugs so it is not in the best interests of user safety to have these kept secret. The whitehat community needs to fix them and study them. Our ultimate goal here is to make the web safer."
In a tweet, Aaron Portnoy, one of the Pwn2Own organizers, took issue with Google's characterization that the rules had changed and said that the contest has never required the disclosure of sandbox escapes.
Ars will have full coverage of Pwn2Own, which commences on Wednesday, March 7.
After a week of fairly strong deliberations, Mozilla has sent out a message to all CAs to clarify that MITM activity is not acceptable.
It would seem that Trustwave might slip through without losing their spot in the root list of major vendors. The reason for this is a combination of: up-front disclosure, a short timeframe within which the subCA was issued and used (at this stage limited to 2011), and the principle of wiser heads prevailing.
That's my assessment at least.
My hope is that this has set the scene. The next discovery will be fatal for that CA. The only way forward for a CA that has issued at any time in the past an MITM-enabled subCA would be the following:
+ up-front disclosure to the public. By that I mean, not privately to Mozilla or other vendors. That won't be good enough. Nobody trusts the secret channels anymore.
+ in the event that this is still going on, a *fast* plan, agreed and committed to vendors, to withdraw completely any of these MITM sub-CAs or similar arrangements. By that I mean *with prejudice* to any customers - breaching contract if necessary.
Any deviation means termination of the root. Guys, you got one free pass at this, and Trustwave used it up. The jaws of Trust are hungry for your response.
That is what I'll be looking for at Mozilla. Unfortunately there is no forum for Google and others, so Mozilla still remains the bellwether for trust in CAs in general.
That's not a compliment; it's more a description of how little trust there is. If there is a desire to create some, that's possibly where we'll see the signs.
The 'new idea' is not difficult. The idea of Convergence is for independent operators (like CAcert or FSFE or FSF) to run servers that cache certificates from sites. Then, when a user browser comes across a new certificate, instead of accepting the fiat declaration from the CA, it gets a "second opinion" from one of these caching sites.
Convergence is best seen as conceptually extending or varying the SSH or TOFU model that has already been tried in browsers through CertPatrol, Trustbar, Petnames and the like.
In the Trust-on-first-use model, we can make a pretty good judgement call that the first time a user comes to a site, she is at low risk. It is only later on when her relationship establishes (think online banking) that her risk rises.
This works because the likelihood of an event is inversely aligned with the cost of mounting the attack. One single MITM might cost X, two might cost X+delta, and so on: it gets more costly as it goes on. In two ways: firstly, maintaining the MITM over time against Alice drives costs up far more dramatically than linear additions of a small delta. In this sense, MITMs are like DOSs: they are easier to mount for brief periods. Secondly, because we don't know of Alice's relationship beforehand, we have to cast a very broad net, so a lot of MITMs are needed to find the minnow that becomes the whale.
First-use-caching or TOFU works then because it forces the attacker into an uneconomic position - the easy attacks are worthless.
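For concreteness, here's a minimal sketch of first-use caching in Python. The storage and names are illustrative; a real implementation lives inside the browser, not beside it:

```python
# Minimal trust-on-first-use (TOFU) sketch: pin a site's certificate
# fingerprint on first sight, and raise the alarm if it later changes.
import hashlib
import json
import socket
import ssl

PINS_FILE = "pins.json"  # illustrative storage

def fingerprint(host, port=443):
    # Fetch the leaf certificate without CA validation -- TOFU replaces it.
    ctx = ssl._create_unverified_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check(host):
    try:
        with open(PINS_FILE) as f:
            pins = json.load(f)
    except FileNotFoundError:
        pins = {}
    seen = fingerprint(host)
    if host not in pins:
        pins[host] = seen  # first use: low risk, so pin it
        with open(PINS_FILE, "w") as f:
            json.dump(pins, f)
        return "pinned on first use"
    if pins[host] == seen:
        return "matches pin"  # the established relationship holds
    return "PIN MISMATCH - possible MITM"  # the costly attack, surfaced

print(check("example.com"))
```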
Convergence then extends that model by using someone else's cache, thus further boxing the attacker in. With a fully developed Convergence network in place, we can see that the attacker has to conduct what amounts to being a perfect MITM closer to the site than any caching server (at least at the threat modelling level).
Which in effect means he owns the site at least at the router level, and if that is true, then he's probably already inside and prefers more sophisticated breaches than mucking around with MITMs.
Thus, the very model of a successful mitigation -- this is a great risk for users to accept if only they were given the chance! It's pretty much ideal on paper.
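Conceptually, the second opinion is nothing more than a fingerprint comparison across independent vantage points. A sketch, with hypothetical notary hosts and a stubbed-out query (a real Convergence notary speaks its own protocol, and caches rather than re-fetches):

```python
# Sketch of the Convergence "second opinion": compare the certificate
# we see with what independent notaries see for the same site.
import hashlib
import socket
import ssl

def seen_fingerprint(host, port=443):
    ctx = ssl._create_unverified_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()

def second_opinion(host, notaries, ask):
    # 'ask(notary, host)' returns the fingerprint that notary caches.
    ours = seen_fingerprint(host)
    votes = [ask(n, host) for n in notaries]
    agree = sum(1 for v in votes if v == ours)
    # The attacker must now MITM closer to the site than every notary.
    return agree > len(notaries) // 2

# Usage with hypothetical notaries; the stub just looks from here.
notaries = ["notary.example.org", "notary2.example.org"]
print(second_opinion("example.com", notaries, lambda n, h: seen_fingerprint(h)))
```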
Now move from paper threat modelling to *the business*. We can ask several questions. Is this better than the fiat or authority model of CAs which is in place now? Well, maybe. Assuming a fully developed network, Convergence is probably in the ballpark. A serious attacker can mount several false nodes, something that was seen in peer2peer networks. But a serious attacker can take over a CA, something we saw in 2011.
Another question is, is it cheaper? Yes, definitely. It means that the entire middle ground of "white label" HTTPS certs as Mozilla now shows them can use Convergence and get approximately the same protection. No need to muck around with CAs. High end merchants will still go for EV because of the branding effect sold to them by vendors.
A final question is whether it will work in the economic sense - is this going to take off? Well, I wish Moxie luck, and I wish it well, but I have my reservations.
Like so many other developments - and I wish I could take the time to lay out all the tall pioneers who provided the high view for each succeeding innovation - where they fall short is that they do not mesh well with the current economic structure of the market.
In particular, one facet of the new market strikes me as overtaking events: the über-CA. In this concept, we re-model the world such that the vendors are the CAs, and the current crop are pushed down (or up) to become sub-CAs. E.g., imagine that Mozilla now creates a root cert and signs individually each root in their root list, and thus turns it into a sub-root list. That's easy enough, although highly offensive to some.
Without thinking of the other ramifications too much, now add Convergence to the über-CA model. If the über-CA has taken on the responsibility, and manages the process end to end, it can also do the Convergence thing in-house. That is, it can maintain its set of servers, do the crawling, do the responding. Indeed, we already know how to do the crawling part; most vendors have had a go at it, just for in-house research.
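Mechanically, the über-CA move is just cross-signing. A sketch using Python's widely-used 'cryptography' package; the key handling, names and validity period are illustrative only:

```python
# Sketch of the über-CA move: the vendor signs each existing root,
# demoting it to a sub-root under the vendor's own über-root.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# The vendor's new über-root key (illustrative 2048-bit RSA).
uber_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
uber_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Vendor Uber-Root")])

def cross_sign(old_root_pem: bytes) -> x509.Certificate:
    old = x509.load_pem_x509_certificate(old_root_pem)
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(old.subject)      # same subject...
        .public_key(old.public_key())   # ...same key...
        .issuer_name(uber_name)         # ...but the issuer is now the vendor
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(uber_key, hashes.SHA256())
    )
```

Chain-building in the browser does the rest: every old root now validates only via the vendor's über-root.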
Why do I think this is relevant? One word - google. If the Convergence idea is good (and I do think it is) then google will have already looked at it, and will have already decided how to do it more efficiently. Google have already taken more steps towards the über-CA with their decision to rewire the certificate flow. Time for a bad haiku.
Google sites are pinned now / All your 'vokes are b'long to us / Cache your certs too, soon.
And who is the world's expert at rhyming data?
Which all goes to say that Convergence may be a good idea, a great one even, but it is being overtaken by other developments. To put it pithily, the market is converging in another direction. 1-2 years ago maybe, yes, as google was still working on the browser at the standards level. Now google are changing the way things are done, and this idea will fall out easily in their development.
(For what it is worth, google are just as likely to make their servers available for other browsers to use anyway, so they could just "run" the Convergence network. Who knows. The google talks to no-one, until it is done, and often not even then.)
As we all know, it's a rite of passage in the security industry to study the SSL business of certificates, and discover that all's not well in the state of Denmark. But the business of CAs and PKI rolled on regardless, seemingly because no threat ever challenged it. Because there was no risk, the system successfully dealt with the threats it had set itself. Which is itself elegant proof that academic critiques and demonstrations and phishing and so forth are not real attacks, and can be ignored entirely...
Last year, we crossed the Rubicon for the SSL business -- and by extension certificates, secure browsing, CAs and the like -- with a series of real attacks against CAs. Examples include the DigiNotar affair, the Iranian affair (attacks on around 5 CAs), and also the lesser known attack a few months back where certificates may have been forged and may have been used in an APT and may have... a lot of things. Nobody's saying.
Either way, the scene is set. The pattern has emerged, the Rubicon is crossed, it gets worse from here on in. A clear and present danger, perhaps? In California, they'd be singing "let's party like it's 2003," the year that SB1386 slid past our resistance and set the scene for an industry debacle in 2005.
But for us long term observers, no party. There will now be a steady series of these shocks, and journalists will write of our brave new world - security but no security.
With one big difference. Unlike the SB1386 breach party, where we can rely on companies not going away (even as our data does), the security system of SSL and certificates is somewhat optional. Companies can and do expose their data in different ways. We can and do invent new systems to secure or mitigate the damage. So while SB1386 didn't threaten the industry so much as briskly kick it around, this is different.
At an attacks level, we've crossed a line, but at a wider systems level, we stand on the line.
Which brings us to this week's news. A CA called Trustwave has just admitted to selling a sub-root for the explicit purpose of MITM'ing. Read about that elsewhere.
Now, we've known that MITMing for fun and profit was going on for a long time. Mozilla's community first learnt of it in the mid 2000s as it was finalising its policy on CAs (a ground-breaking work that I was happy to be involved with). At that time, accusations were circulating against unknown companies listing their roots for the explicit purpose of doing MITMs on unwitting victims. Which raised the hairs, eyebrows and hackles of not a few of us. These accusations have been repeated from time to time, but in each case the "insiders" begged off on the excuse: we cannot break NDA or reputation.
Each time, then, the industry players were likewise able to fob it off. Hard evidence? None. Therefore, it doesn't exist, was the industry's response. We knew as individuals, yet as an industry we knew not.
We are all agreed it does exist and it doesn't. We all have jobs to preserve, and will practice cognitive dissonance to the very end.
Of course this situation couldn't last, because a secret of this magnitude never survives. In this case, the company that sold the MITM sub-root, Trustwave, has looked at 2011, and realised the profit from that one CA isn't worth the risk of the DigiNotar experience (bankruptcy). Their decision is to 'fess up now, take it on the chin, because later may be too late.
Which leads to a dilemma, and we the players have divided, one after the other, onto each side of that dilemma: to pull the root, or not to pull?
That is the question. First, the case for the defence: On the one hand, we applaud the honesty of a CA coming forward and cleaning up house. It's pretty clear that we need our CAs to do this. Otherwise we're not going to get anywhere with this Trust thing. We need to encourage the CAs to work within the system.
Further, if we damage a CA, we damage customers. The cost to lost business is traumatic, and the list of US government agencies that depend on this CA has suddenly become impressive. Just like DigiNotar, it seems, which spread like a wave of mistrust through the government IT departments of the Netherlands. Also, we have to keep an eye on (say) a bigger more public facing CA going down in the aftermath - and the damage to all its customers. And the next, etc.
Is lost business more important than simple faith in those silly certificates? I think lost business is much more important - revenue, jobs, money flowing keeping all of the different parts of the economy going are our most important asset. Ask any politician in USA or Europe or China; this is their number one problem!
Finally, it is pretty clear and accepted that the business purpose to which the sub-root was put was known and tolerated. Although it is uncomfortable to spy on one's employees, it is just business. Organisations own their data systems, have the responsibility to police them, and have advised their people that this is what they are going to do. SSL included, if necessary.
This view has it that Trustwave has done the right thing. Therefore, pass. And, the more positive proponents suggest an amnesty, after which period there is summary execution for the sins - root removal from the list distributed by the browsers. It's important to not cause disruption.
Now the case for the Prosecution! On the other hand, damn spot: the CA clearly broke their promise. Out!
In three ways did they breach the trust. Firstly, it is expressed in the Mozilla policy, and presumably in those of other vendors, that certificates are only issued to people who own/control their domains. This is no light or optional thing -- we rely on the policy because CAs and Mozilla and other vendors and auditors all routinely practice secrecy in this business.
We *must rely on the policy* because they deny us the right to rely on anything else!
Secondly, it is what the public believes in; it is the expectation of any purchaser or user of the product, written or not. It is a simple message, and brooks no complicated exceptions. Either your connection to your online bank is secure, and nobody else can see it, *including your employer or IT department*. Or not.
Try explaining this exception to your grandmother, if the words do not work for you.
Finally, the raison d'être: it is the purpose and even the entire goal of the certificate design to do exactly the opposite. The reason we have CAs like TrustWave is to stop the MITM. If they don't stop the MITM, then *we don't need the heavyweight certificate system*, we don't need CAs, and we don't need Mozilla's root list or that of any other vendor.
We can do security much more cost-effectively if we drop the 100% always-on absolutist MITM protection.
Given this breach of trust, what else can we trust in? Can we trust their promises that the purpose was maintained? That the cert never left the building? That secret traffic wasn't vectored in? That HSMs are worth something and audits ensure all is well in Denmark?
There being two views presented, it has to be said that both views are valid. The players are lining up on either side of the line, but they probably aren't so well aware of where this is going.
Only one view is going to win out. Only one side wins this fight.
And in so doing, in winning, the winner sows the seeds of its own destruction.
Because if you religiously take your worldview, and look at the counter-argument to your preferred position, your thesis crumbles for its fallacies.
The jaws of trust just snapped shut on the players who played too long, too hard, too profitably.
Like the financial system, we are no longer worried about the bankruptcy of one or two banks, or a few defaults by some fly specks on the map of Europe. We are now looking at a change that will ripple out and remove what vestiges of purpose and faith were left in PKI. We are now looking at all the other areas of the business that will be affected; ones that bought into the promise even though they knew they shouldn't have.
Like the financial system, a place of uncanny similarity, each new shock makes us wonder and question. Wasn't all this supposed to be solved? Where are the experts? Where is the trust?
We're about to find out the timeless meaning of Caveat Emptor.
I've long realised that threat modelling isn't quite it.
There's some malignancy in the way the Internet IT Security community approached security in the 1990s that became a cancer in our protocols in the 2000s. Eventually I worked out that the problem with the aphorism What's Your Threat Model (WYTM?) was the absence of a necessary first step - the business model - which lack permitted threat modelling to be de-linked from humanity without anyone noticing.
But I still wasn't quite there, it still felt like wise old men telling me "learn these steps, swallow these pills, don't ask for wisdom."
In my recent risk management work, it has suddenly become clearer. Taking from notes and paraphrasing, let me talk about threats versus risks, before getting to modelling.
A threat is something that threatens, something that can cause harm, in the abstract sense. For example, a bomb could be a threat. So could an MITM, an eavesdropper, or a sniper.
But, separating the abstract from the particular, a bomb does not necessarily cause a problem unless there is a connection to us. Literally, it has to be capable of doing us harm, in a direct sense. For this reason, the methodologists say:
Risk = Threat * Harm
Any random bomb can't hurt me, approximately, but a bomb close to me can. With a direct possibility of harm to us, a threat becomes a risk. The methodologists also say:
Risk = Consequences * Likelihood
That connection or context of likely consequences to us suddenly makes it real, as well as hurtful.
A bomb then is a threat, but just any bomb doesn't present a risk to anyone, to a high degree of reliability. A bomb under my car is now a risk! To move from threats to risks, we need to include places, times, agents, intents, chances of success, possible failures ... *victims* ... all the rest needed to turn the abstract scariness into direct painful harm.
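A toy worked example, with invented numbers, shows why the multiplication matters -- the threat only scores once it is connected to a particular victim:

```python
# Toy worked example of Risk = Consequences * Likelihood.
# All figures are invented for illustration.
threats = {
    # name: (consequences in $, likelihood per year)
    "random bomb, somewhere": (1_000_000, 1e-12),
    "bomb under my car":      (1_000_000, 1e-4),
    "MITM on my banking":     (45_000,    1e-2),
}
for name, (consequences, likelihood) in threats.items():
    print(f"{name:>24}: expected loss ${consequences * likelihood:,.2f}/yr")
```

Same bomb, utterly different risk, depending entirely on the connection to me.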
We need to make it personal.
To turn the threatening but abstract bomb from a threat to a risk, consider a plane, one which you might have a particular affinity to because you're on it or it is coming your way:
⇒ people dying
⇒ financial damage to plane
⇒ reputational damage to airline
⇒ collateral damage to other assets
⇒ economic damage caused by restrictions
⇒ war, military raids and other state-level responses
Lots of risks! Speaking of planes as bombs: I knew someone booked on a plane that ended up in a tower -- except she was late. She sat on the tarmac for hours in the following plane.... The lovely lady called Dolly who cleaned my house had a sister who should have been cleaning a Pentagon office block, but for some reason ... not that day. Another person I knew was destined to go for coffee at ground zero, but woke up late. Oh, and his cousin was a fireman who didn't come home that day.
Which is perhaps to say, that day, those risks got a lot more personal.
We all have our very close stories to tell, but the point here is that risks are personal, threats are just theories.
Let us now turn that around and consider *threat modelling*. By its nature, threat modelling only deals with threats and not risks and it cannot therefore reach out to its users on a direct, harmful level. Threat modelling is by definition limited to theoretical, abstract concerns. It stops before it gets practical, real, personal.
Maybe this all amounts to no more than a lot of fuss about semantics?
To see if it matters, let's look at some examples: If we look at that old saw, SSL, we see rhyme. The threat modelling done for SSL took the rather abstract notions of CIA -- confidentiality, integrity and authenticity -- and ended up inverse-pyramiding on a rather too-perfect threat of MITM -- Man-in-the-Middle.
We can also see, through the lens of threat analysis versus risk analysis, that the notion of creating a protocol to protect any connection -- an explicit choice of the designers -- left them unable to do any risk analysis at all. The notion of protecting certain assets, such as credit cards, as stated in the advertising blurb, was therefore conveniently not part of the analysis (which we knew, because any risk analysis of credit cards reveals different results).
Threat modelling therefore reveals itself to be theoretically sound but not necessarily helpful. It is then no surprise that SSL performed perfectly against its chosen threats, but did little to fend off the risks that users face. Indeed, arguably, as much as it might have stopped some risks, it helped other risks to proceed in natural evolution. Because SSL dealt perfectly with all its chosen threats, it ended up providing a false sense of security against harm-incurring risks (remember SSL & Firewalls?).
OK, that's an old story, and maybe completely and boringly familiar to everyone else? What about the rest? What do we do to fix it?
The challenge might then be to take Internet protocol design from the very plastic, perfect but random tendency of threat modelling and move it forward to the more haptic, consequences-directed chaos of risk modelling.
Or, in other words, we've got to stop conflating threats with risks.
Critics can rush forth and grumble, and let me be the first: Moving to risk modelling is going to be hard, as any Internet protocol at least at the RFC level is generally designed to be deployed across an extremely broad base of applications and users.
Remember IPSec? Do you feel the beat? This might be the reason why we say that only end-to-end security cuts the mustard: because end-to-end implies an application, and this draws in the users, permitting us to do real risk modelling.
It might then be impossible to do security at the level of an Internet-wide, application-free security protocol, a criticism that isn't new to the IETF. Recall the old ISO layer 5, sometimes called "the security layer"?
But this doesn't stop the conclusion: threat modelling will always fail in practice, because by definition, threat modelling stops before practice. The place where users are being exposed and harmed can only be investigated by getting personal - including your users in your model. Threat modelling does not go that far, it does not consider the risks against any particular set of users that will be harmed by those risks in full flight. Threat modelling stops at the theoretical, and must by the law of ignorance fail in the practical.
Risks are where harm is done to users. Risk modelling therefore is the only standard of interest to users.
Along the lines of "The CSO should have an MBA" we have some progress:
City University London will offer a new postgraduate course from September, designed to help information security and risk professionals progress to managerial roles.
The new Masters in Information Security and Risk (MISR) aims to address a gap in the market for IT professionals that can “speak business”. One half of the course is devoted to security strategy, risk management and security architecture, while the other half focuses on putting this into a business-oriented context.
“Talking to people out in the industry, particularly if you go to the financial sector, they say one real problem is they don't have a security person who they can take along to the board meeting without having to act as an interpreter; so we're putting together a programme to address this need,” explained course director Professor Kevin Jones, speaking at the Infosecurity Europe Press Conference in London yesterday.
Readers will know we first published the account of "Man in the Browser" by Philipp Gühring way back when, and followed it up with news that the way forward was dual-channel transaction signing. In short, this meant the bank sending an SMS to your handy mobile cell phone with the transaction details, and a check code to enter if you wanted the transaction to go through.
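The mechanism, in a minimal Python sketch (names are illustrative): the bank binds a one-time code to the exact transaction details, so tampering with the payee or the amount invalidates the code:

```python
# Minimal sketch of dual-channel transaction signing. The SMS carries
# the transaction details plus a code bound to exactly those details.
import hashlib
import hmac
import secrets

BANK_KEY = secrets.token_bytes(32)  # bank-side secret, illustrative

def transaction_code(account, payee, amount):
    msg = f"{account}|{payee}|{amount}".encode()
    mac = hmac.new(BANK_KEY, msg, hashlib.sha256).hexdigest()
    return mac[:6]  # short code, delivered over the second channel

def verify(account, payee, amount, code):
    return hmac.compare_digest(code, transaction_code(account, payee, amount))

# SMS reads: "Pay bob $500? Code: a1b2c3". The user types the code back.
code = transaction_code("alice", "bob", 500)
assert verify("alice", "bob", 500, code)           # genuine transaction
assert not verify("alice", "crook", 45_000, code)  # tampered one fails
# The weak link, as the story below shows: whoever controls the SMS
# channel receives the code.
```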
On the face of it, pretty secure. But at the back of our minds, we knew that this was just an increase in difficulty: a crook could seek to control both channels. And so it comes to pass:
In the days leading up to the fraud being committed, [Craig] had received two strange phone calls. One came through to his office two-to-three days earlier, claiming to be a representative of the Australian Tax Office, asking if he worked at the company. Another went through to his home number when he was at work. The caller claimed to be a client seeking his mobile phone number for an urgent job; his daughter gave out the number without hesitation.
The fraudsters used this information to make a call to Craig’s mobile phone provider, Vodafone Australia, asking for his phone number to be “ported” to a new device.
As the port request was processed, the criminals sent an SMS to Craig purporting to be from Vodafone. The message said that Vodafone was experiencing network difficulties and that he would likely experience problems with reception for the next 24 hours. This bought the criminals time to commit the fraud.
The unintended consequence of the phone being used for transaction signing is that the phone is now worth maybe as much as the fraud you can pull off. Assuming the crooks have already cracked the password for the bank account (something probably picked up on a market for pennies), the crooks are now ready to spend substantial amounts of time to crack the phone. In this case:
Within 30 minutes of the port being completed, and with a verification code in hand, the attackers were spending the $45,000 at an electronics retailer.
Thankfully, the abnormally large transaction raised a red flag within the fraud unit of the Commonwealth Bank before any more damage could be done. The team tried – unsuccessfully – to call Craig on his mobile. After several attempts to contact him, Craig’s bank account was frozen. The fraud unit eventually reached him on a landline.
So what happens now that the crooks walked with $45k of juicy electronics (probably convertible to cash at 50-70% off face over ebay) ?
As is standard practice for online banking fraud in Australia, the Commonwealth Bank has absorbed the hit for its customer and put $45,000 back into Craig's account.
A NSW Police detective contacted Craig on September 15 to ensure the bank had followed through with its promise to reinstate the $45,000. With this condition satisfied, the case was suspended on September 29 pending the bank asking the police to proceed with the matter any further.
One local police investigator told SC that in his long career, a bank has only asked for a suspended online fraud case to be investigated once. The vast majority of cases remain suspended. Further, SC Magazine was told that the police would, in any case, have to weigh up whether it has the adequate resources to investigate frauds involving such small amounts of money.
No attempt was made at a local police level to escalate the Craig matter to the NSW Police Fraud and Cybercrime squad, for the same reasons.
In a paper I wrote in 2008, I stated that for some value below X, police wouldn't lift a finger. The Prosecutor has too much important work to do! What we have here is a very definite floor, below which Internet systems which transmit and protect value are unable to rely on external resources such as the law. Reading more:
But the Commonwealth Bank claims it has forwarded evidence to the NSW and Federal Police forces that could have been used to prosecute the offenders.
The bank’s fraud squad – which had identified the suspect transactions within minutes of the fraud being committed - was able to track down where the criminals spent the stolen money.
A spokesman for the bank said it “dealt with both Federal and State (NSW) Police regarding the incident” and that “both authorities were advised on the availability of CCTV footage” of the offenders spending their ill-gotten gains.
“The Bank was advised by one of the authorities that the offender had left the country – reducing the likelihood of further action by that authority,” the spokesperson said.
This number goes up dramatically once we cross a border. In that paper I suggested $25k; here we have a reported number of $45k.
Why is that important? Because, some systems have implicit guarantees that go like "we do blah and blah and blah, and then you go to the police and all your problems are solved!" Sorry, not if it is too small, where small is surprisingly large. Any such system that handwaves you to the police without clearly indicating the floor of interest ... is probably worthless.
So when would you trust a system that backstopped to the police? I'll stick my neck out and say, if it is beyond your borders, and you're risking >> $100k, then you might get some help. Otherwise, don't bet your money on it.
Two Microsoft researchers have published a paper pouring scorn on claims cyber crime causes massive losses in America. They say it’s just too rare for anyone to be able to calculate such a figure.
Dinei Florencio and Cormac Herley argue that samples used in the alarming research we get to hear about tend to contain a few victims who say they lost a lot of money. The researchers then extrapolate that to the rest of the population, which gives a big total loss estimate – in one case of a trillion dollars per year.
But if these victims are unrepresentative of the population, or exaggerate their losses, they can really skew the results. Florencio and Herley point out that one person or company claiming a $50,000 loss in a sample of 1,000 would, when extrapolated, produce a $10 billion loss for America as a whole. So if that loss is not representative of the pattern across the whole country, your total could be $10 billion too high.
Having read the paper, the above is about right. And sufficient description, as the paper goes on for pages and pages making the same point.
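The extrapolation arithmetic is worth making explicit. A one-liner, assuming a round 200 million adults as the population being extrapolated to (my assumption; any similar figure makes the same point):

```python
# The survey extrapolation step made explicit. The population figure
# is an assumption for illustration.
sample_size = 1_000
one_outlier_loss = 50_000      # a single claimed loss, in $
population = 200_000_000       # assumed number of US adults

mean_loss = one_outlier_loss / sample_size   # $50 per respondent
national_estimate = mean_loss * population
print(f"${national_estimate:,.0f}")          # $10,000,000,000
```

One exaggerating respondent, ten billion dollars of headline.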
Now, I've also been skeptical of the phishing surveys. So, for a long time, I've just stuck to the number of "about a billion a year." And waited for someone to challenge me on it :) Most of the surveys seemed to head in that direction, and what we would hope for would be more useful numbers.
So far, Florencio and Herley aren't providing those numbers. The closest I've seen is the FBI-sponsored report that derives from reported fraud rather than surveys. Which seems to plumb in the direction of 10 billion a year for all identity-related consumer frauds, and a sort of handwavy claim that there is a ratio of 10:1 between all fraud and Internet-related fraud -- which, divided out, lands right back at the billion.
I wouldn't be surprised if the number was really 100 million. But that's still a big number. It's still bigger than the income of Mozilla, which is the 2nd browser by numbers. It's still bigger than the budget of the Anti-Phishing Working Group, an industry-sponsored private thinktank. And CABForum, another industry-only group.
So who benefits from inflated figures? The media, because of the scare stories, and the public and private security organisations and businesses who provide cyber security. The above parliamentary report indicated that in 2009 Australian businesses spent between $1.37 and $1.95 billion in computer security measures. So on the report’s figures, cyber crime produces far more income for those fighting it than those committing it.
Good question from the SMH. The answer is that it isn't in any player's interest to provide better figures. If so (and we can see support from the Silver Bullets structure) what is Florencio and Herley's intent in popping the balloon? They may be academically correct in trying to deflate the security market's obsession with measurable numbers, but without some harder numbers of their own, one wonders what's the point?
What is the real number? Florencio and Herley leave us dangling at that point. Are they setting up to provide those figures one day? Without those forthcoming, I fear the paper is destined to be just more media fodder, as shown in its salacious title. Iow, pointless.
Hopefully numbers are coming. In an industry steeped in Numerology and Silver Bullets, facts and hard numbers are important. Until then, your rough number is as good as mine -- a billion.
Google radically expanded Tuesday its use of bank-level security that prevents Wi-Fi hackers and rogue ISPs from spying on your searches.
Starting Tuesday, logged-in Google users searching from Google’s homepage will be using https://google.com, not http://google.com — even if they simply type google.com into their browsers. The change to encrypted search will happen over several weeks, the company said in a blog post Tuesday.
We have known for a long time that the answer to web insecurity is this: There is only one mode, and it is secure.
(I use the royal we here!)
This is evident in breaches led by phishing, as the users can't see the difference between HTTP and HTTPS. The only solution at several levels is to get rid of HTTP. Entirely!
Simply put, we need SSL everywhere.
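For a site operator, getting rid of HTTP is mostly a one-time chore: answer every plain-HTTP request with a permanent redirect to the HTTPS equivalent. A minimal sketch in Python (the hostname is a placeholder; a real deployment listens on port 80 and runs the HTTPS side properly):

    import http.server

    SITE = "https://www.example.com"   # placeholder; substitute your own site

    class RedirectToHTTPS(http.server.BaseHTTPRequestHandler):
        """Answer every plain-HTTP request with a permanent redirect."""
        def do_GET(self):
            self.send_response(301)    # moved permanently
            self.send_header("Location", SITE + self.path)
            self.end_headers()
        do_HEAD = do_GET

    if __name__ == "__main__":
        # Port 8080 for the sketch; production binds port 80.
        http.server.HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()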
Google are seemingly the only big corporate that have understood and taken this message to heart.
Google has been a leader in adding SSL support to cloud services. Gmail is now encrypted by default, as is the company's new social network, Google+. Facebook and Microsoft's Hotmail make SSL an option a user must choose, while Yahoo Mail has no encryption option beyond its initial sign-in screen.
EFF and CAcert are small organisations that are doing it as and when we can... Together, security-conscious organisations are slowly migrating all their sites to SSL and HTTPS.
RSA's Coviello declares the new threat environment:
"Organisations are defending themselves with the information security equivalent of the Maginot Line as their adversaries easily outflank perimeter defences," Coviello added. "People are the new perimeter contending with zero-day malware delivered through spear-phishing attacks that are invisible to traditional perimeter-based security defences such as antivirus and intrusion detection systems." ®
The recent spate of attacks does not tell us that the defences are weak - this is something we've known for some time. E.g., from 20th April, 2003, "The Maginot Web" said it. Yawn. Taher Elgamal, the guy who did the crypto in SSL at Netscape back in 1994, puts it this way:
How about those certificate authority breaches, like the one against Comodo and the one that wiped out DigiNotar?
It's a combination of PKI and trust models and all that kind of stuff. If there is a business in the world that I can go to and get a digital certificate that says my name is Tim Greene then that business is broken, because I'm not Tim Greene, but I've got a certificate that says this is my name. This is a broken process in the sense that we allowed a business that is broken to get into a trusted circle. The reality is there will always be crooks, somebody will always want to make money in the wrong way. It will continue to happen until the end of time.
Is there a better way than certificate authorities?
The fact that browsers were designed with built-in root keys is unfortunate. That is the wrong thing, but it's very difficult to change that. We should have separated who is trusted from the vendor. ...
What the recent rash of exploits signal is that the attackers are now lined up and deployed against our weak defences:
Coviello said one of the ironies of the attack was that it validated trends in the market that had prompted RSA to buy network forensics and threat analysis firm NetWitness just before the attack.
This is another unfortunate hypothesis in the market for silver bullets: we need real attacks to tell us real security news. OK, now we've got it. Message heard, loud and clear. So, what to do? Coviello goes on:
Security programs need to evolve to be risk-based and agile rather than "conventional" reactive security, he argued.
"The existing perimeter is not enough, which is why we bought NetWitness. The NetWitness technology allowed us to determine damage and carry out remediation very quickly," Coviello said.
The existing perimeter was an old idea - one static defence, and the attacker would walk up, hit it with his head, and go elsewhere in confusion. Build it strong, went the logic, and give the attacker a big headache! ... but the people in charge at the time were steeped in the school of cryptology and computer science, and consequently lacked the essential visibility over the human aspects of security to understand how limiting this concept was, and how the attacker was blessed with sight and an ability to walk around.
Risk management throws out the old binary approach completely. To some extent, it is just in time, as a methodology. But to a large extent, the market place hasn't moved. Like deer in headlights, the big institutions watch the trucks approach, looking at each other for a solution.
Which is what makes these comments by RSA and Taher Elgamal significant. More than others, these people built the old SSL infrastructure. When the people who built it call game over, it's time to pay attention.
We've long documented the failure of PKI and secure browsing to be an effective solution to security needs. Now comes spectacular proof: sites engaged in carding, which is the trading of stolen credit card information, have always protected their trading with SSL certs of the self-signed variety. According to brief searching on 'a bunch of generic "sell CVV", "fullz", "dumps" ones,' conducted informally by Peter Gutmann, some of the CAs are now issuing certificates to carders.
This amounts to a new juvenile culinary phase in the target-rich economy of cyber-crime:
Phisher: I've come to eat your babies!
CA: Oh, yes, you'll be needing a certificate for that, $200 thanks!
Although not scientifically conducted or verified, both Mozilla and the CA concerned indicated that the criminals can have their certs and eat them too. As long as they follow the same conditions as everyone else, that's fine.
Except, it's not. Firstly, it's against the law in most all places to aid & abet a criminal. As Blackstone put it (ex Wikipedia):
"AN accessory after the fact may be, where a person, knowing a felony to have been committed, receives, relieves, comforts, or assists the felon.17 Therefore, to make an accessory ex post facto, it is in the first place requisite that he knows of the felony committed.18 In the next place, he must receive, relieve, comfort, or assist him. And, generally, any assistance whatever given to a felon, to hinder his being apprehended, tried, or suffering punishment, makes the assistor an accessory. As furnishing him with a horse to escape his pursuers, money or victuals to support him, a house or other shelter to conceal him, or open force and violence to rescue or protect him."
The point here made by Blackstone, and translated into the laws of many lands is that the assistance given is not specific, but broad. If we are in some sense assisting in the commission of the crime, then we are accessories. For which there are penalties.
And, these penalties are as if we were the criminals. For those who didn't follow the legal blah blah above, the simple thing is this: it's against the law. Go to jail, do not pass go, do not collect $200.
Secondly, consider the security diaspora. Users were hoping that browsers such as Firefox, IE, etc, would protect them from phishing. The vendors' position, policy-wise and code-wise, is that their security mechanism to protect their users from phishing is to provide PKI certificates, which might evidence some level of verification on your counter-party. This reduces down to a single statement: if you are ripped off by someone who uses a cert against you, you might know something about them.
This protection is (a) ineffective against phishing, as is shown every time a phisher downgrades HTTPS to HTTP, and (b) now shared & available equally to the phishers themselves, to assist them in their crime; the phishers apparently feel that (c) the protection encryption offers against their enemies outweighs the legal threat of some identity being revealed to those same enemies. Users lose.
On an open list on Mozilla's forum, the CA concerned saw no reason to address the situation. Other CAs also seem to issue to equally dodgy sites, so it's not about one CA. The general feeling in the CA-dominated sector is that identity within the cert is sufficient reason to establish a reasonable security protection, notwithstanding that history, logic, evidence and now the phishers themselves show such a claim is about as reasonable and efficacious as selling recipes for marinated babies.
It seems Peter and I stand alone. In some exasperation, I consulted with Mozilla directly. To little end; Mozilla also believe that the phishing community are deserving of certificates; they simply have to ask a CA to be invited into our trusted inner-nursery. I've no doubt that the other vendors will believe and maintain the same, an approach to legal issues sometimes known as the ostrich strategy. The only difference here being that Mozilla maintains an open forum, so it can be embarrassed into a private response, whereas CAs and other vendors can be embarrassed into silence.
Predictions based on theory have been presented. Not good. BitCoin will fundamentally be a bad money because it will be too volatile, according to the laws of economics. You can't price goods in a volatile unit. Rated for speculation only, bubble territory, and quite reasonably likened to a ponzi scheme (if not exactly a ponzi scheme, with a nod to Patrick). Sorry folks, the laws of economics know no bribes, favours, demands.
And so theory comes to practice:
Bitcoin slump follows senators’ threats - Correlation or causation?
By Richard Chirgwin.
As any investment adviser will tell you, it’s a bad idea to put all your eggs in one basket. And if Rick Falkvinge was telling the truth when he said all his savings were now in Bitcoin, he’s been taken down by a third in a day.
Following last week’s call by US senators for an investigation into virtual crypto-currency Bitcoin, its value is slumping.
According to DailyTech, Bitcoins last Friday suffered more than 30 percent depreciation in value – a considerable fall given that the currency’s architecture is designed to inflate its value over time.
The slump could reflect a change in attitude among holders of Bitcoins: if the currency were to become less attractive to pay for illegal drugs, then dealers and their customers would dump their Bitcoins in favour of some other medium.
Being dumped by PayPal won’t have helped either. As DailyTech pointed out, PayPal has a policy against virtual currencies, so by enforcing the policy, PayPal has made it harder to trade Bitcoins.
The threat of regulation may also have sent shivers down Bitcoin-holders’ spines. The easiest regulatory action – although requiring international cooperation – would be to regulate, shut down or tax Bitcoin exchanges such as the now-famous Mt Gox. However, a sufficient slump may well have the same effect as a crackdown: whether Mt Gox is a single speculator or a group of traders, it’s unlikely to have the kind of backing (or even, perhaps, the hedging) that enables “real” currency traders to survive sharp swings in value.
Then the problem of a bubble in digital cash is compounded by theft:
I just got hacked - any help is welcome!
Hi everyone. I am totally devastated today. I just woke up to see a very large chunk of my bitcoin balance gone to the following address:
Transaction date: 6/13/2011 12:52 (EST)
I feel like killing myself now. This get me so f'ing pissed off. If only the wallet file was encrypted on the HD. I do feel like this is my fault somehow for now moving that money to a separate non windows computer. I backed up my wallet.dat file religiously and encrypted it but that does not do me much good when someone or some trojan or something has direct access to my computer somehow.
The dude lost 25,000 BTC which at recent "valuations" on the exchange I calculate as $250,000 to $750,000.
Yeah. When we were building digital cash systems we were *very conscious* that the weak link in the entire process was the user's PC. We did two things: we chose nymous, directly accounted transactions, so that we could freeze the whole thing if we needed to, and we intended to go for secure platforms for large values. Those secure platforms are now cheaply available (they weren't then).
BitCoin went too fast. Now people are learning the lessons.
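On the victim's narrow lament -- "if only the wallet file was encrypted on the HD" -- that much is cheap. A sketch with the pyca/cryptography library (file names illustrative; note this only blunts a simple file-grab, not a trojan watching the live machine):

    import base64, getpass, hashlib
    from cryptography.fernet import Fernet

    # Derive a key from a passphrase. A real design would use a salted,
    # slow KDF such as PBKDF2 or scrypt; bare SHA-256 keeps the sketch short.
    passphrase = getpass.getpass("wallet passphrase: ").encode()
    key = base64.urlsafe_b64encode(hashlib.sha256(passphrase).digest())

    f = Fernet(key)
    with open("wallet.dat", "rb") as src:
        ciphertext = f.encrypt(src.read())
    with open("wallet.dat.enc", "wb") as dst:
        dst.write(ciphertext)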
(Note the above link is no longer working. Luckily I had it cached. I'm not attesting to its accuracy, just its relevance to the debate.)
What to learn from the RSA SecurID breach?
RSA has finally admitted publicly that the March breach into its systems has resulted in the compromise of their SecurID two-factor authentication tokens.
Which points to:
In a letter to customers Monday, the EMC Corp. unit openly acknowledged for the first time that intruders had breached its security systems at defense contractor Lockheed Martin Corp. using data stolen from RSA.
It's a targeted attack across multiple avenues. This is a big shift in the attack profile, and it is perhaps the first serious evidence of the concept of Advanced Persistent Threats (APTs).
What went wrong at the institutional level? Perhaps something like this: with a breach of the single point of failure, we are looking at an industry-wide replacement of all 40 million SecurID tokens.
Which presumably will be a fascinating exercise, and one from which we should be able to learn a lot. It isn't often that we see a SPOF event, and it's a chance to learn just what impact a single point of failure has:
The admission comes in the wake of cyber intrusions into the networks of three US military contractors: Lockheed Martin, L-3 Communications and Northrop Grumman - one of them confirmed by the company, others hinted at by internal warnings and an unusual domain name and password reset process
But one would also be somewhat irresponsible not to ask what happens next. Simply replacing the SecurID fobs and resetting the secret sauce at RSA does not seem to satisfy as *a solution*, although we can understand that a short term hack might be needed.
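Why is a stolen seed database fatal? Because a one-time-password token is just a keyed function of time: whoever holds the seed computes the same codes as the fob. RSA's actual algorithm is proprietary, so here is the open TOTP construction (RFC 6238) as a stand-in; it makes the point just as well:

    import hashlib, hmac, struct, time

    def totp(seed: bytes, step=30, digits=6):
        """One-time code = truncated HMAC(seed, current time window)."""
        counter = int(time.time() // step)
        mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                    # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Whoever holds the seed record computes the same code as the token:
    print(totp(b"seed-record-for-token-12345"))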
Chief (Information) Security Officers everywhere will probably be thinking that we need a little more re-thinking of the old 1990s models. Good luck, guys! You'll probably need a few more breaches to wake up the CEOs, so you can get the backing you need to go beyond "best practices" and start doing the job seriously.
Google researchers say they've devised a way to significantly reduce the time it takes websites to establish encrypted connections with end-user browsers, a breakthrough that could make it less painful for many services to offer the security feature. ....
The finding should come as welcome news to those concerned about online privacy. With the notable exceptions of Twitter, Facebook, and a handful of Google services, many websites send the vast majority of traffic over unencrypted channels, making it easy for governments, administrators, and Wi-Fi hotspot providers to snoop or even modify potentially sensitive communications while in transit. Companies such as eBay have said it's too costly to offer always-on encryption.
The Firesheep extension introduced last year for the Firefox browser drove home just how menacing the risk of unencrypted websites can be.
Is this a case of, NIST taketh away what Google granteth?
It's been a long time since I wrote up my Security Top Tips, and things have changed a bit since then. Here's an update. (You can see the top tips about half way down on the right menu block of the front page.)
Since then, browsing threats have got a fair bit worse. Although browsers have done some work to improve things, their overall efforts have not really resulted in any impact on the threats. Worse, we are now seeing MITBs (man-in-the-browser attacks) being experimented with, and many more attackers getting in on the act.
To cope with this heightened risk to our personal node, I experimented a lot with using private browsing, cache clearing, separate accounts and so forth, and finally hit on a fairly easy method: Use another browser.
That is, use something other than the browser that one uses for browsing. I use Firefox for general stuff, and for a long time I've been worried that it doesn't really do enough in the battle for my user security. Safari is also loaded on my machine (thanks to Top Tip #1). I don't really like using it, as its interface is a little bit weaker than Firefox (especially the SSL visibility) ... but in this role it does very well.
So for some time now, for all my online banking and similar usage, I have been using Safari. These are my actions:
I don't use bookmarks, because that's an easy place for a trojan to look (I'm not entirely sure of that technique, but it seems like an obvious hint).
"Use another browser" creates an ideal barrier between a browsing browser and a security browser, and Safari works pretty well in that role. It's like an air gap, or an app gap, if you like. If you are on Microsoft, you could do the same thing using IE and Firefox, or you could download Chrome.
I've also tested it on my family ... and it is by far the easiest thing to tell them. They get it! Assume your browsing habits are risky, and don't infect your banking. This works well because my family share their computers with kids, and the kids have all been instructed not to use Safari. They get it too! They don't follow the logic, but they do follow the tool name.
What says the popular vote? Try it and let me know. I'd be interested to hear of any cross-browser threats, as well :)
Normally I'd worry when a representative of the US people asked sites to adopt more security ... but here goes:
Senator Charles Schumer is calling on major websites in the United States to make their pages more secure to protect those connecting from Wi-Fi hotspots, various media outlets are reporting.
In a letter sent to Amazon, Twitter, Yahoo, and others, the Senator, a Democrat representing New York, asked the websites to switch to more secure HTTPS pages in order to help prevent people accessing the Internet from public connections in places like restaurants and bookstores from being targeted by hackers and identity thieves.
"As the operator of one of the world's most popular websites, you provide a valuable service to Internet users across America," Schumer wrote, according to Tony Bradley of PCWorld. "With the privilege of serving millions of U.S. citizens, however, comes the responsibility to protect them while they are on your site."
"Free Wi-Fi networks provide hackers, identity thieves and spammers alike with a smorgasbord of opportunities to steal private user information like passwords, usernames, and credit card information," the Senator added. "The quickest and easiest way to shut down this one-stop shop for identity theft is for major websites to switch to secure HTTPS web addresses instead of the less secure HTTP protocol, which has become a welcome mat for would be hackers."
According to a Reuters report on Sunday, Schumer also called standard HTTP protocol "a welcome mat for would-be hackers" and said that programs were available that made hacking into another person's computer and swiping private information easy -- unless a secure protocol was used.
In this case, it's an overall plus. HTTPS everywhere, please!
This is one way in which we can stop a lot of trouble. Somewhat depressing that it has taken this long to filter through, but with friends like NIST, one shouldn't be overly surprised if the golden goose of HTTPS security is cooked to a cinder beforehand.
The biggest this and the bestest that is mostly a waste of time, but once a year it is good to see just how big some of the numbers are. Jim sent in this NY Times article by James Verini, just to show that breaches cost serious money:
According to Attorney General Eric Holder, who last month presented an award to Peretti and the prosecutors and Secret Service agents who brought Gonzalez down, Gonzalez cost TJX, Heartland and the other victimized companies more than $400 million in reimbursements and forensic and legal fees. At last count, at least 500 banks were affected by the Heartland breach.
$400 million costs caused by one small group, or one attacker, and those costs aren't complete or known as yet.
But the extent of the damage is unknown. “The majority of the stuff I hacked was never brought into public light,” Toey told me. One of the imprisoned hackers told me there “were major chains and big hacks that would dwarf TJX. I’m just waiting for them to indict us for the rest of them.” Online fraud is still rampant in the United States, but statistics show a major drop in 2009 from previous years, when Gonzalez was active.
What to make of this? It may well be that one single guy / group caused the lion's share of the breach fraud we saw in the wake of SB1386. Do we breathe a sigh of relief that he's gone for good (20 years?) ... or do we wonder at the basic nature of the attacks used to get in?
The attacks were fairly well described in the article. They were all through apparently PCI compliance-complete institutions. Lots of them. They start from the ho-hum of breaching the secured perimeter through WiFi, right up to the slightly yawnsome SQL injection.
Here's my bet: the ease of this overall approach and the lack of really good security alternatives (firewalls & SSL, anyone?) mean there will be a pause, and then the professionals will move in. And they won't be caught, because they'll move faster than the Feds. Gonzalez was a static target, he wasn't leaving the country. The new professionals will know their OODA.
Read the entire article, and make your own bet :)
Clive throws some security complaints against Apple in comments. He's got a point, but what is that point, exactly? The issues raised have little to do with Apple, per se, as they are all generic and familiar in some sense.
Aside from the futility of trying to damage Apple's teflon brand, the issue here, for me at least, is when & if the perception, costs and activities of Apple's security begin to cross.
We saw this with Microsoft. Throughout the 1990s, they made hay, the sun shone. Security wasn't an issue, the users lapped it all up, paid the cost, asked for more.
So Microsoft did the "right thing" by their shareholders and ignored it. Having seen how much money they made for their shareholders in those days, it is hard to argue they did the "wrong thing". Indeed, this is what I tried to establish in that counter-cultural rant called GP -- that there is an economic rationale to delaying security until we can identify a real enemy.
Early 2000s, the scene started to change. We saw the first signs of phishing in 2001 (if we were watching), and the rising tide of costs to users was starting to feed back into Microsoft's top slot. Hence Bill Gates's famous "I want to spend another dime and spin the company" memo in January 2002, followed by his declaration that "phishing is our problem, not the user's fault."
But it didn't work. Even though we saw a massive internal change, and lots more attention on security, the Vista thing didn't work out. And the titanic of Microsoft perception slowly inched itself up onto an iceberg and spent half a decade sliding under the waters.
For Microsoft, action started in 2002, and changed the company somewhat. By the mid 2000s, activity on security was impressive. But perception was already underwater, irreversibly sinking.
Now we're all looking at Apple. *Only* because they have a bigger market cap than Microsoft. Note that this blog has promoted Macs for a better security experience for many years, and anyone who took that advice won big-time. But now the media has swung its laser-sharp eyeballs across to look at the one that snuck below their impenetrable radar and cheekily stole the crown of publicity from Microsoft. The game's up, the media tell us!
I speak in jest of course; market perception is what the users think, not what the media says. Possibly we will see market perception of Apple's security begin to diminish, as user costs begin to rise. The user-cost argument is still solidly profitable on the Mac's balance sheet, so that'll take some time. Meanwhile, there is little sign that Apple themselves are acting within to improve their impending security nightmare.
The interesting question for the long term, for the business-minded, is when should Apple begin to act? And how?
Just as a postscript, this question is made all the more interesting because, unlike Microsoft, Apple never signals its intentions in advance. And after the move, the script is so well stage-managed that we can't rely on it. So we may never know the answer. Which makes the job of investor in Apple quite a ... perceptionally challenging one :)
Chit-chat around the coffeerooms of crypto-plumbers is disturbed by NIST's campaign to have all the CAs switch up to 2048 bit roots:
On 30/09/10 5:17 PM, Kevin W. Wall wrote:
> Thor Lancelot Simon wrote:
> <...snip...>
> > See below, which includes a handy pointer to the Microsoft and Mozilla
> > policy statements "requiring" CAs to cease signing anything shorter
> > than 2048 bits. These certificates (the end-site ones) have lifetimes
> > of about 3 years maximum. Who here thinks 1280 bit keys will be
> > factored by 2014? *Sigh*.
>
> No one that I know of (unless the NSA folks are hiding their quantum
> computers from us :). But you can blame this one on NIST, not Microsoft
> or Mozilla. They are pushing the CAs to make this happen and I think
> 2014 is one of the important cutoff dates, such as the date that the
> CAs have to stop issuing certs with 1024-bit keys.
>
> I can dig up the NIST URL once I get back to work, assuming anyone
> actually cares.
The world of cryptology has always been plagued by numerology.
Not so much in the tearooms of the pure mathematicians, but all other areas: programming, management, provisioning, etc. It is I think a desperation in the un-endowed to understand something, anything of the topic.
E.g., I might have no clue how RSA works but I can understand that 2048 has to be twice as good as 1024, right? When I hear it is even better than twice, I'm overjoyed!
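And indeed it is far better than twice. Under the usual GNFS heuristic the work to factor a modulus n grows as exp(1.923 * (ln n)^(1/3) * (ln ln n)^(2/3)), and a quick sketch shows 2048 buys roughly 2^30 of extra work over 1024:

    import math

    def gnfs_equivalent_bits(modulus_bits):
        """Symmetric-equivalent strength of an RSA modulus under the
        GNFS heuristic exp(1.923 * (ln n)^(1/3) * (ln ln n)^(2/3))."""
        ln_n = modulus_bits * math.log(2)
        work_nats = 1.923 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
        return work_nats / math.log(2)   # express the exponent in bits

    for k in (512, 1024, 2048):
        print(f"RSA-{k}: ~{gnfs_equivalent_bits(k):.0f}-bit equivalent")
    # Prints roughly 64, 87 and 117: 2048 is ~2^30 times harder than 1024.

Which is exactly the trap: the number is trivially easy to compare, and says nothing about whether factoring was ever your real threat.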
This desperation to be able to talk about it is partly due to having to be part of the business (write some code, buy a cert, make a security decision, sell a product) and partly a sense of helplessness when faced with apparently expert and confident advice. It's not an unfounded fear; experts use their familiarity with the concepts to also peddle other things which are frequently bogus or hopeful or self-serving, so the ignorance leads to bad choices being made.
Those that aren't in the know are powerless, and shown to be powerless.
When something simple comes along and fills that void people grasp onto them and won't let go. Like numbers. As long as they can compare 1024 to 2048, they have a safety blanket that allows them to ignore all the other words. As long as I can do my due diligence as a manager (ensure that all my keys are 2048) I'm golden. I've done my part, prove me wrong! Now do your part!
This is a very interesting problem. Cryptographic numerology diverts attention from the difficult to the trivial. A similar effect happens with absolute security, which we might call "divine cryptography." Managers become obsessed with perfection in one thing, to the extent that they will ignore flaws in another thing. Also, standards, which we might call "beliefs cryptography" for their ability to construct a paper cathedral within which there is room for us all, and our flock, to pray safely inside.
We know divinity doesn't exist, but people demand it. We know that religions war all the time, and those within a religion will discriminate against others, to the loss of us all. We know all this, but we don't; cognitive dissonance makes us so much happier, it should be a drug.
It was into this desperate aching void that the seminal paper by Lenstra and Verheul stepped in to put a framework on the numbers. On the surface, it solved the problem of cross-domain number comparison, e.g., 512 bit RSA compared to 256 bit AES, which had always confused the managers. And to be fair, this observation was a long time coming in the cryptographic world, too, which makes L&V's paper a milestone.
Cryptographic Numerology's star has been on the ascent ever since that paper: As well as solving the cipher-public-key-hash numeric comparison trap, numerology is now graced with academic respectability.
This made it irresistible to large institutions which are required to keep their facade of advice up. NIST, like all the other agencies, followed, but NIST has a couple of powerful forces on it. Firstly, NIST is slightly special, in ways that other agencies represented in keylength.com only wish to be special. NIST, as pushed by the NSA, is protecting primarily US government resources:
This document has been developed by the National Institute of Standards and Technology (NIST) in furtherance of its statutory responsibilities under the Federal Information Security Management Act (FISMA) of 2002, Public Law 107-347. NIST is responsible for developing standards and guidelines, including minimum requirements, for providing adequate information security for all agency operations and assets, but such standards and guidelines shall not apply to national security systems.
That's US, not us. It's not even protecting USA industry. NIST is explicitly targeted by law to protect the multitude of government agencies that make up the beast we know as the Government of the United States of America. That gives it unquestionable credibility.
And, as has been noticed a few times, Mars is on the ascendancy: *cyberwarfare* is the second special force. Whatever one thinks of the mess called cyberwarfare (equity disaster, stuxnet, cryptographic astrology, etc), we can probably agree that if anyone bad is thinking in terms of cracking 1024 bit keys, they'll likely be another nation-state taking aim against the USG agencies. C.f. stuxnet, which is emerging as a state v. state adventure. USG, or one of USG's opposing states, is probably the leading place on the planet that would face a serious 1024 bit threat if one were to emerge.
Hence, NIST is plausibly right in imposing 2048-bit RSA keys into its security model. And they are not bad in the work they do, for their client. Numerology and astrology are in alignment today, if your client is from Washington DC.
However, real or fantastical, this is a threat model that simply doesn't apply to the rest of the world. The sad, sad fact is that NIST's threat model belongs to them, to US, not to us. All of us adopting the NIST security model is like a Taurus following the advice in the Aries section of today's paper. It's not right, however wise it sounds. And if applied without thought, it may reduce our security, not improve it:
> At 1024 bits, it is not. But you are looking
> at a factor of *9* increase in computational
> cost when you go immediately to 2048 bits. At
> that point, the bottleneck for many applications
> shifts, particularly those ...
> ...and suddenly...
> This too will hinder the deployment of "SSL everywhere",...
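That factor-of-9 is not hyperbole: an RSA private-key operation costs roughly the cube of the modulus length, so doubling the key size costs somewhere around 8-9x per operation in theory (less with CRT and multiplication tricks). Easy to check for yourself -- a benchmark sketch, assuming the pyca/cryptography package:

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    for bits in (1024, 2048):
        key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
        start = time.perf_counter()
        for _ in range(200):
            key.sign(b"some message", padding.PKCS1v15(), hashes.SHA256())
        elapsed = time.perf_counter() - start
        print(f"{bits}-bit RSA: {elapsed:.2f}s for 200 signatures")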
When US industry follows NIST, and when worldwide industry follows US industry, and when open source Internet follows industry, we have a classic text-book case of adopting someone else's threat, security and business models without knowing it.
Keep in mind, our threat model doesn't include crunching 1024s. At all, at any time; nobody's ever bothered to crunch 512 in anger against the commercial or private world. So we're pretty darn safe at 1024. But our threat model does include
*attacks on poor security user interfaces in online banking*
That's a clear and present danger. And one of the key, silent, killer causes of that is the sheer rarity of HTTPS. If we can move the industry to "HTTPS everywhere" then we can make a significant difference. To our security.
On the other hand, we can shift to 2048, kill the move to "HTTPS everywhere", and save the US Government from losing sleep over the cyberwarfare it created for itself (c.f., the equity failure).
And that's what's going to happen. Cryptographic Numerology is on a roll, NIST's dice are loaded, our number is up. We have breached the law of unintended consequences, and we are going to be reducing the security of the Internet because of it. Thanks, NIST! Thanks, Mozilla, thanks, Microsoft.
For detailed work and references on Lenstra & Verheul's paper, see http://www.keylength.com/ which includes calculators of many of the various efforts. It's a good paper. They can't be criticised for it in the terms of this post; it's the law of unintended consequences again.
Also, other work by NIST to standardise the PRNG (pseudo-random-number-generator) has to be applauded. The subtlety of what they have done is only becoming apparent after much argumentation: they've unravelled the unprovable entropy problem by unplugging it from the equation.
But they've gone a step further than the earlier leading work by Ferguson and Schneier and the various quiet cryptoplumbers, by turning the PRNG into a deterministic algorithm. Indeed, we can now see something special: NIST has turned the PRNG into a reverse-cycle message digest. Entropy is now the MD's document, and the psuedo-randomness is the cryptographically-secure hash that spills out of the algorithm.
Hey presto! The PRNG is now the black box that provides the one-way expansion of the document. It's not the reverse-cycle air conditioning of the message digest that is exciting here, it's the fact that it is now a new class of algorithms. It can be specified, parameterised, and most importantly for cryptographic algorithms, given test data to prove the coding is correct.
(I use the term reverse-cycle in the sense of air-conditioning. I should also stress that this work took several generations to get to where it is today; including private efforts by many programmers to make sense of PRNGs and entropy by creating various application designs, and a couple of papers by Ferguson and Schneier. But it is the black-boxification by NIST that took the critical step that I'm lauding today.)
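The construction is easy to sketch: seed the state with whatever entropy you have (the "document"), then spill pseudo-randomness out as iterated hashes of that state -- deterministic, and therefore testable against fixed vectors. A toy version of the idea (not the actual NIST SP 800-90 algorithm, which adds reseed counters, personalisation strings and more):

    import hashlib

    class ToyHashDRBG:
        """Toy deterministic PRNG: entropy in, hash stream out."""
        def __init__(self, entropy: bytes):
            self.state = hashlib.sha256(entropy).digest()
            self.counter = 0

        def generate(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                self.counter += 1
                block = self.state + self.counter.to_bytes(8, "big")
                out += hashlib.sha256(block).digest()
            return out[:n]

    # Same entropy in, same stream out -- which is what makes it testable.
    assert ToyHashDRBG(b"seed").generate(32) == ToyHashDRBG(b"seed").generate(32)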
Iran's Bushehr nuclear power plant in Bushehr Port:
"An error is seen on a computer screen of Bushehr nuclear power plant's map in the Bushehr Port on the Persian Gulf, 1,000 kms south of Tehran, Iran on February 25, 2009. Iranian officials said the long-awaited power plant was expected to become operational last fall but its construction was plagued by several setbacks, including difficulties in procuring its remaining equipment and the necessary uranium fuel. (UPI Photo/Mohammad Kheirkhah)"
Compliant? Minor problem? Slight discordance? Conspiracy theory?
(spotted by Steve Bellovin)
Another metaphor (I) that has gained popularity is that infosec is much like war. There are some reasons for this: there is an aggressive attacker out there who is trying to defeat you. Which tends to muck up a lot of statistical, error-based thinking in IT, a lot of business process, and as well, most economic models (e.g., asymmetric information assumes a simple two-party model). Another reason is the current beltway push for essential cyberwarfare divisional budget, although I'd hasten to say that this is not a good reason, just a reason. Which is to say, it's all blather, FUD, and one-upmanship against the Chinese, same as it ever was with Eisenhower's nemesis.
Having said that, infosec isn't like war in many ways. And knowing when and why and how is not a trivial thing. So, drawing from military writings is not without dangers. Consider these laments about applying Sun Tzu's The Art of War to infosec from Steve Tornio and Brian Martin:
In "The Art of War," Sun Tzu's writing addressed a variety of military tactics, very few of which can truly be extrapolated into modern InfoSec practices. The parts that do apply aren't terribly groundbreaking and may actually conflict with other tenets when artificially applied to InfoSec. Rather than accept that Tzu's work is not relevant to modern day Infosec, people tend to force analogies and stretch comparisons to his work. These big leaps are professionals whoring themselves just to get in what seems like a cool reference and wise quote.
"The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable." - The Art of War
The art of Sun Tzu is not a literal quoting and thence a mad rush to build the tool. Art of War was written from the context of a successful general talking to another hopeful general on the general topic of building an army for a set piece nation-to-nation confrontation. It was also very short.
Art of War tends to interlace high level principles with low level examples, and dance very quickly through most of its lessons. Hence it was very easy to misinterpret, and equally easy to "whore oneself for a cool & wise quote."
However, Sun Tzu still stands tall in the face of such disrespect, as it says things like know yourself FIRST, and know the enemy SECOND, which the above essay actually agreed with. And, as if it needs to be said, knowing the enemy does not imply knowing their names, locations, genders, and proclivities:
Do you know your enemy? If you answer 'yes' to that question, you already lost the battle and the war. If you know some of your enemies, you are well on your way to understanding why Tzu's teachings haven't been relevant to InfoSec for over two decades. Do you want to know your enemy? Fine, here you go. your enemy may be any or all of the following:
- 12 y/o student in Ohio learning computers in middle school
- 13 y/o home-schooled girl getting bored with social networks
- 15 y/o kid in Brazil that joined a defacement group
Of course, Sun Tzu also didn't know the sordid details of every soldier's desires; "knowing" isn't biblical, it's capable. Or, knowing their capabilities, and that can be done, we call it risk management. As Jeffrey Carr said:
The reason why you don't know how to assign or even begin to think about attribution is because you are too consumed by the minutia of your profession. ... The only reason why some (OK, many) InfoSec engineers haven't put 2+2 together is that their entire industry has been built around providing automated solutions at the microcosmic level. When that's all you've got, you're right - you'll never be able to claim victory.
Right. Most all InfoSec engineers are hired to protect existing installations. The solution is almost always boxed into the defensive, siege mentality described above, because the alternative, as Dan Geer apparently said, is this:
When you are losing a game that you cannot afford to lose, change the rules. The central rule today has been to have a shield for every arrow. But you can't carry enough shields and you can run faster with fewer anyhow.
The advanced persistent threat, which is to say the offense that enjoys a permanent advantage and is already funding its R&D out of revenue, will win as long as you try to block what he does. You have to change the rules. You have to block his success from even being possible, not exchange volleys of ever better tools designed in response to his. You have to concentrate on outcomes, you have to pre-empt, you have to be your own intelligence agency, you have to instrument your enterprise, you have to instrument your data.
But, at a corporate level, that's simply not allowed. Great ideas, but only the achievable strategy is useful, the rest is fantasy. You can't walk into any company or government department and change the rules of infosec -- that means rebuilding the apps. You can't even get any institution to agree that their apps are insecure; or, you can get silent agreement by embarrassing them in the press, along with being fired!
I speak from pretty good experience of building secure apps, and of looking at other institutional or enterprise apps and packages. The difference is huge. It's the difference between defeating Japan and defeating Vietnam. One was a decision of maximisation, the other of minimisation. It's the difference between engineering and marketing; one is solid physics, the other is facade, faith, FUD, bribes.
It's the difference between setting up a world-beating sigint division, and fixing your own sigsec. The first is a science, and responds well by adding money and people. Think Manhattan, Bletchley Park. The second is a societal norm, and responds only to methods generally classed by the defenders as crimes against humanity and applications. Slavery, colonialism, discrimination, the great firewall of China, if you really believe in stopping these things, then you are heading for war with your own people.
Which might all lead the grumpy anti-Sun Tzu crowd to say, "told you so! This war is unwinnable." Well, not quite. The trick is to decide what winning is; to impose your will on the battleground. This is indeed what strategy is: to impose one's own definition of the battleground on the enemy, and be right about it, which is partly what Dan Geer is getting at when he says "change the rules." A more nuanced view would be: to set the rules that win for you, and to make them the rules you play by.
And, this is pretty easily answered: for a company, winning means profits. As long as your company can conduct its strategy in the face of affordable losses, then it's winning. Think credit cards, which sacrifice a few hundred basis points for the greater good. It really doesn't matter how much of a loss is made, as long as the customer pays for it and leaves a healthy profit over.
Relevance to Sun Tzu? The parable of the Emperor's Concubines!
In summary, it is fair to say that Sun Tzu is one of those texts that are easy to bandy around, but rather hard to interpret. Same as infosec, really, so it is no surprise we see it in that world. Also, war is a very complicated business, and Art of War was really written for that messy discipline ... so it takes somewhat more than a passing familiarity with both to relate them beyond the level of simple metaphor.
And, as we know, metaphors and analogues are descriptive tools, not proofs. Proving them wrong proves nothing more than you're now at least an adolescent.
Finally, even war isn't much like war these days. If one factors in the last decade, there is a clear pattern of unilateral decisions, casus belli at a price, futile targets, and effervescent gains. Indeed, infosec looks more like the low intensity, mission-shy wars in the Eastern theaters than either of them look like Sun Tzu's campaigns.
One of the things that has been pretty much standard in infosec is that the risks earnt (costs incurred!) from owning a Mac have been dramatically lower. I do it, and save, and so do a lot of my peers & friends. I don't collect stats, but here's a comment from Dan Geer from 2005, in "Monoculture on the Back of the Envelope":
Amongst the cognoscenti, you can see this: at security conferences of all sorts you'll find perhaps 30% of the assembled laptops are Mac OS X, and of the remaining Intel boxes, perhaps 50% (or 35% overall) are Linux variants. In other words, while security conferences are bad places to use a password in the clear over a wireless channel, there is approximately zero chance of cascade failure amongst the participants.
I recommend it on the blog front page as the number 1 security tip of all.
Why this is the case is of course a really interesting question. Is it because Macs are inherently more secure, in themselves? The answer seems to be No, not in themselves. We've seen enough evidence to suggest, at an anecdotal level, that when put into a fair fight, the Macs don't do any better than the competition. (Sometimes they do worse, and the competition ensures those results are broadcast widely :)
However, it is still the case that while the security in the Macs isn't great, the result for the user is better -- the costs resulting from breaches, installs, virus slow-downs, etc, remain lower. Which would imply the threats are lower, recalling the old mantra of:
Business model ⇒ threat model ⇒ security model
Now, why is the threat (model) lower? It isn't because the attackers are fans. They generally want money, and money is neutral.
One theory that might explain it is the notion of monoculture.
This idea was captured a while back by Dan Geer and friends in a paper that claimed that Microsoft's dominance threatened the national security of the USA. It certainly threatened someone, as Dan lost his job the day the paper was released.
In brief, monoculture argues that when one platform gains an ascendancy to dominate the market, then we enter a situation of particular vulnerability to that platform. It becomes efficient for all economically-motivated attackers to concentrate their efforts on that one dominant platform and ignore the rest.
In a sense, this is an application of the Religion v. Darwin argument to computer security. Darwin argued that diversity was good for the species as a whole, because singular threats would wipe out singular species. The monoculture critique can also be seen as analogous to Capitalism v. Communism, where the former advances through creative destruction, and the latter stagnates through despotic ignorance.
A lot of us (including me) looked at the monoculture argument and thought it ... simplistic and hopeful. Yet, the idea hangs on ... so the question shifts, for us slower skeptics, to how to prove it?
Apple is quietly wrestling with a security conundrum. How the company handles it could dictate the pace at which cybercriminals accelerate attacks on iPhones and iPads.
Apple is hustling to issue a patch for a milestone security flaw that makes it possible to remotely hack - or jailbreak - iOS, the operating system for iPhones, iPads and iPod Touch.
Apple's new problem is perhaps an early sign that the theory is good. Here we have Apple struggling with hacks on its mobile platform (iPads, iPods, iPhones) and facing a threat which it seemingly hasn't faced on the Macs.
The differentiating factor -- other than the tech stuff -- is that Apple is leading in the mobile market.
IPhones, in particular, have become a pop culture icon in the U.S., and now the iPad has grabbed the spotlight. "The more popular these devices become, the more likely they are to get the attention of attackers," says Joshua Talbot, intelligence manager at Symantec Security Response.
Not dominating like Microsoft used to enjoy, but presenting enough of a nose above the pulpit to get a shot taken. Meanwhile, Macs remain stubbornly stuck at a reported 5% of market share in the computer field, regardless of the security advice. And nothing much happens to them.
If market leadership continues to accrue to Apple in the iP* mobile sector, as the market expects it to, and if security woes continue as well, I'd count that as good evidence.
Perhaps because Dan lost his job, he gets fuller attention. The full cite would be: Daniel Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles P. Pfleeger, John S. Quarterman, Bruce Schneier, "CyberInsecurity: The Cost of Monopoly -- How the Dominance of Microsoft's Products Poses a Risk to Security." Preserved by the inestimable cryptome.org, a forerunner of the now infamous wikileaks.org.
Proof in the sense of scientific method is not possible, because we can't run the experiment. This is economics, not science; we can't run the experiment like real scientists. What we have to do is perhaps pseudo-scientific method: we predict, we wait, and we observe.
 On the other hand, maybe the party is about to end for Macs. News just in:
Security vendor M86 Security says it's discovered that a U.K.-based bank has suffered almost $900,000 (675,000 Euros) in fraudulent bank-funds transfers due to the ZeuS Trojan malware that has been targeting the institution.
Bradley Anstis, vice president of technology strategy at M86 Security, said the security firm uncovered the situation in late July while tracking how one ZeuS botnet had been specifically going after the U.K.-based bank and its customers. The botnet included a few hundred thousand PCs and even about 3,000 Apple Macs, and managed to steal funds from about 3,000 customer accounts through unauthorized transfers equivalent to roughly $892,755.
 I don't believe the 5% market share claim ... I harbour a suspicion that this is some very cunning PR trick in under-reporting by Apple, so as to fly below the radar. If so, I think it's well past its sell-by date since Apple reached the same market cap as Microsoft...
What is curious is that I'll bet most of Wall Street, and practically all of government, notwithstanding the "national security" argument, continue to keep clear of Macs. For those of us who know the trick, this is good. It is good for our security if the governments do not invest in Macs, and keep the monoculture effect positive. Perverse, but who am I to argue with the wisdom in cyber-security circles?
Luther Martin asks this open question:
I have a quick question for you based on some recent discussions. Here's the background.
The first was with a former co-worker who works for the VC division of a large commercial bank. He tells me that his bank really isn't interested in investing in security companies. Why? Apparently, for each $100 of credit card transactions there's about $4 of loss due to bad debt and only about $0.10 of loss due to fraud. So if you're making investments, it's clear where you should put your money.
Next, I was talking with a guy who runs a large credit card processing business. He was complaining about having to spend an extra $6 million on fraud reduction while his annual losses due to fraud are only about $250K.
Finally, I was also talking to some people from a government agency who were proud of the fact that they had reduced losses due to security incidents in their division by $2 million last year. The only problem is that they actually spent $10 million to do this.
So the question is this: are we not spending enough on security or are we spending too much, but on the wrong things?
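Lay the three anecdotes out as arithmetic and the question half answers itself -- a back-of-the-envelope sketch, numbers straight from the stories above:

    # The bank VC's loss mix: bad debt dwarfs fraud by 40x.
    bad_debt_per_100, fraud_per_100 = 4.00, 0.10
    print(bad_debt_per_100 / fraud_per_100)    # 40.0

    # The processor: $24 of spend per $1 of annual fraud.
    print(6_000_000 / 250_000)                 # 24.0

    # The agency: $5 of spend per $1 of loss avoided.
    print(10_000_000 / 2_000_000)              # 5.0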
Things I've seen that are encouraging. Bruce Schneier in Q&A:
Q: We've also seen Secure Sockets Layer (SSL) come under attack, and some experts are saying it is useless. Do you agree?
A: I'm not convinced that SSL has a problem. After all, you don't have to use it. If I log-on to Amazon without SSL the company will still take my money. The problem SSL solves is the man-in-the-middle attack with someone eavesdropping on the line. But I'm not convinced that's the most serious problem. If someone wants your financial data they'll hack the server holding it, rather than deal with SSL.
Right. The essence is that SSL solves the "easy" part of the problem, and leaves open the biggest part. Before the proponents of SSL say, "not our problem," remember that AADS did solve it, as did SOX and a whole bunch of other things. It's called end-to-end, and is well known as being the only worthwhile security. Indeed, I'd say it was simply responsible engineering, except for the fact that it isn't widely practiced.
OK, so this is old news, from around March, but it is worth declaring sanity:
Q: But doesn't SSL give consumers confidence to shop online, and thus spur e-commerce?
A: Well up to a point, but if you wanted to give consumers confidence you could just put a big red button on the site saying 'You're safe'. SSL doesn't matter. It's all in the database. We've got the threat the wrong way round. It's not someone eavesdropping on Eve that's the problem, it's someone hacking Eve's endpoint.
Which is to say, if you are going to do anything to fix the problem, you have to look at the end-points. The only time you should look at the protocol, and the certificates, is to ask how well they are protecting the end-points. Meanwhile, the SSL field continues to be one for security researchers to make headlines over. It's BlackHat time again:
"The point is that SSL just doesn't do what people think it does," says Hansen, an security researcher with SecTheory who often goes by the name RSnake. Hansen split his dumptruck of Web-browsing bugs into three categories of severity: About half are low-level threats, 10 or so are medium, and two are critical. One example...
Many observers in the security world have known this for a while, and everyone else has felt increasingly frustrated and despondent about the promise:
There has been speculation that an organization with sufficient power would be able to get a valid certificate from one of the 170+ certificate authorities (CAs) that are installed by default in the typical browser and could then avoid this alert ....
But how many CAs does the average Internet user actually need? Fourteen! Let me explain. For the past two weeks I have been using Firefox on Windows with a reduced set of CAs. I disabled ALL of them in the browser and re-enabled them one by one as necessary during my normal usage....
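Replicating that experiment outside the browser is straightforward, for the curious. A sketch with Python's ssl module, where "trimmed-cas.pem" is a hypothetical bundle holding only the handful of CAs you chose to keep:

    import socket, ssl

    ctx = ssl.create_default_context(cafile="trimmed-cas.pem")
    with socket.create_connection(("www.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print("chain verified; issuer:", tls.getpeercert()["issuer"])

If the site chains to a CA outside your fourteen, the handshake simply fails -- which is the experiment.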
On the one hand, SSL is the brand of security. On the other hand, it isn't the delivery of security; it simply isn't deployed in secure browsing to provide the user security that was advertised: you are on the site you think you are on. Only as we moved from a benign world to a fraud world, around 2003-2005, has this been shown to matter. Bruce goes on:
Q: So is encryption the wrong approach to take?
A: This kind of issue isn't an authentication problem, it's a data problem. People are recognising this now, and seeing that encryption may not be the answer. We took a World War II mindset to the internet and it doesn't work that well. We thought encryption would be the answer, but it wasn't. It doesn't solve the problem of someone looking over your shoulder to steal your data.
Indeed. Note that comment about the World War II mindset. It is the case that the entire 1990s generation of security engineers were taught from the military text book. The military assumes its nodes -- its soldiers, its computers -- are safe. And, it so happens, that when armies fight armies, they do real-life active MITMs against each other to gain local advantage. There are cases of this happening, and oddly enough, they'll even do it to civilians if they think they can (ask Greenpeace). And the economics is sane, sensible stuff, if we bothered to think about it: in war, the wire is the threat, the nodes are safe.
However, adopting "the wire" as the weakness and Mallory as the Man-In-The-Middle, and Eve as the Eavesdropper as "the threat" in the Internet was a mistake. Even in the early 1990s, we knew that the node was the problem. Firstly, ever since the PC, nodes in commercial computing are controlled by (dumb) users not professional (soldiers). Who download shit from the net, not operate trusted military assets. Secondly, observation of known threats told us where the problems lay: floppy viruses were very popular, and phone-line attacks were about spoofing and gaining entry to an end-point. Nobody was bothering with "the wire," nobody was talking about snooping and spying and listening [*].
The military model was the precise reverse of the Internet's reality.
To conclude. There is no doubt about this in security circles: the SSL threat model was all wrong, and consequently the product was deployed badly.
Where the doubt lies is in how long it will take the software providers to realise that their world is upside down. It can probably only happen when everyone with credibility stands up and says it is so. For this, the posts shown here are very welcome. Let's hear more!
Seen on the net, by Dan Geer:
The design goal for any security system is that the number of failures is small but non-zero, i.e., N>0. If the number of failures is zero, there is no way to disambiguate good luck from spending too much. Calibration requires differing outcomes.
I've been trying for years to figure out a nice way to describe the difference between 0 failures, and some small number N>0 like 1 or 2 or 10 in a population of a million.
Dan might have said it above: If the number of failures is zero, there is no way to disambiguate good luck from spending too much.
Has he nailed it? It's certainly a lot tighter than my long efforts ... Once we get that key piece of information down, we can move on. As he does:
Regulatory compliance, on the other hand, stipulates N==0 failures and is thus neither calibratable nor cost effective. Whether the cure is worse than the disease is an exercise for the reader.
An insight! For regulatory compliance, I'd substitute public compliance, which includes all the media attention and reputation attacks.
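Geer's point is easy to demonstrate with a toy simulation -- a minimal sketch in Python, with purely illustrative failure rates, nothing measured. Both an adequately funded programme and an over-funded one routinely show zero failures over an observation window, so observing a zero tells you nothing; only a small non-zero count lets you calibrate:

```python
import random

def observed_failures(annual_rate, years=10):
    """Count failures over a period, one Bernoulli trial per year."""
    return sum(random.random() < annual_rate for _ in range(years))

random.seed(1)
for label, rate in [("adequate spend, failure rate 0.10", 0.10),
                    ("over-spend,    failure rate 0.01", 0.01)]:
    runs = [observed_failures(rate) for _ in range(10000)]
    p_zero = sum(r == 0 for r in runs) / len(runs)
    print(f"{label}: P(zero failures in 10 years) ~ {p_zero:.2f}")
```

Roughly 35% of the adequately-funded runs and 90% of the over-funded runs come up all-zero: exactly the ambiguity Geer is pointing at.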
Gunnar posts on the continuing sad saga of infosec:
There's been a lot of threads recently about infosec certification, education and training. I believe in training for infosec, I have trained several thousand people myself. Greater knowledge, professionalism and skills definitely help, but are not enough by themselves.
We saw in the case of the Great Recession and in Enron that the skilled, certified accounting and rating professions totally sold out and blessed bogus accounting practices and non-existent earnings.
Right. And this is an area where the predictions of economics are spot on. In Akerlof's seminal paper "The Market for Lemons," he predicts that the asymmetry of information can be helped by institutions. In the economics sense, institutions are non-trading, non-2-party contractual arrangements of long standing that get stuff happening. Professionalism, training, certifications, etc., are all slap-bang within its recommendations.
So why don't they help? There's a simple answer: we aren't in the market for lemons! There's one key flaw: Lemons postulates that the seller knows and the buyer doesn't, and that simply doesn't apply to infosec (criterion #1). In the market for security, the seller knows about his tool, but he doesn't know whether it is fit for the buyer. In contrast, the salesman in Akerlof's market knew perfectly well whether a car was good for the buyer, so the problem really was sharing the secret information from the seller to the buyer. Used car warranties did that, by forcing the seller to reveal his real pricing.
The buyer doesn't really know what he wants, and the seller has no better clue. Indeed, it may be that the buyer has more of a clue, at least sometimes. So professionalism, certification, training and warranties aren't going to be the answer.
Another way of looking at this is that in infosec, in common with all security markets (think defence, crime) there is a third party: the attacker. This is the party that really knows, so knowledge-based solutions without clear incorporation of the aggressor's knowledge aren't going to work. This is why buying the next generation stealth fighter is not really helpful when your attacker is a freedom fighter in an Asian hell-hole with an IED. But it's a lot more exciting to talk about.
Which leads me to one controversial claim. If we can't get useful information from the seller, then the answer is, you've got to find it by yourself. It's your job, do it. And that's really what we mean by professionalism -- knowing when you can outsource something, and knowing when you can't.
That's controversial because legions of infosec product suppliers will think they're out of a job, but that's not quite true. It just requires a shift in thinking, and a willingness to think about the buyer's welfare, not just his wallet. How do we improve the ability of the client to do their job? Which leads right back to education: it is possible to teach better security practices. It's also possible to teach better risk practices. And, it can be done on an organisation-wide basis. Indeed, this is one of the processes that Microsoft adopted in trying to escape their security nightmare: get rid of the security architecture silos and turn the security groups into education groups.
So, from this claim, why the flip into a conundrum: why aren't certifications the answer? It's because certifications /are an institution/ and institutions are captured by one party or another. Usually, the sellers. Again a well-known prediction from economics: institutions to protect the buyer are generally captured by the seller in time (if not in the creation). I think this was Stigler, pointing to finance market regulation, again.
A supplier of certifications needs friends in industry, which means they need to also sell the product of industry. It's hard to make friends selling contrarian advice; it is far more profitable selling middle-of-the-road advice about your partners. "Let's start with SSL + firewalls ..." Nobody's going to say boo, just pass go, just collect the fees. In contrast:
In short, the biggest problem in infosec is integration. Education around security engineering for integration would be most welcome.
That's tough, from an institutional point of view.
E.g., I came across a certification and professional code of conduct that required you to sign up as promoting /best practices/. Yet, best practices are the lowest common denominator, the set of uncontroversial products. We're automatically on the back foot, because we're encouraging an organisation to lower its own standards to best practices, comply with whatever list someone finds off the net, and stop right there. Hopeless!
In an influential paper, Prof Ross Anderson proposes that the _Market for Lemons_ is a good fit for infosec. I disagree, because that market is predicated on the seller being informed, and the buyer not. I suggest the sellers are equally ill-informed, leading to the Market for Silver Bullets.
Microsoft and RSA have just published some commissioned research by Forrester that provides some information, but it doesn't help to separate the positions:
CISOs do not know how effective their security controls actually are. Regardless of information asset value, spending, or number of incidents observed, nearly every company rated its security controls to be equally effective — even though the number and cost of incidents varied widely. Even enterprises with a high number of incidents are still likely to imagine that their programs are “very effective.” We concluded that most enterprises do not actually know whether their data security programs work or not.
Buyers remain uninformed, something we both agree on. Curiously, it isn't an entirely good match for Akerlof, as the buyer of a Lemon is uninformed before, and regrettably over-informed afterwards. No such luck for the CISO.
Which leaves me with an empirical problem: how to show that the sellers are uninformed? I provide some anecdotes in that paper, but we would need more to settle the prediction.
It should be possible to design an experiment to reveal this. For example, and drawing on the above logic, if a researcher were to ask similar questions of both the buyer and the seller, and be able to show lack of correlation between the supplier's claims, and the incident rate, that would say something.
The immediate problem of course is, who would do this? Microsoft and RSA aren't going to, as they are sell-side, and their research was obviously focussed on their needs. Which means, it might be entirely accurate, but might not be entirely complete; they aren't likely to want to clearly measure their own performance.
And, if there is one issue that is extremely prevalent in the world of information security, it is the lack of powerful and independent buy-side institutions who might be tempted to do independent research on the information base of the sellers.
Oh well. Moving on to the other conclusions:
The second and third were also predicted in that paper.
The last is hopefully common sense, but unfortunately as someone used to say, common sense isn't so common. Which brings me to Matt Blaze's rather good analysis of the threats to the net in 1995, as an afterword to Applied Cryptography, the seminal red book by Bruce Schneier:
One of the most dangerous aspects of cryptology (and, by extension, of this book), is that you can almost measure it. Knowledge of key lengths, factoring methods, and cryptanalytic techniques make it possible to estimate (in the absence of a real theory of cipher design) the "work factor" required to break a particular cipher. It's all too tempting to misuse these estimates as if they were overall security metrics for the systems in which they are used. The real world offers the attacker a richer menu of options than mere cryptanalysis; often more worrisome are protocol attacks, Trojan horses, viruses, electromagnetic monitoring, physical compromise, blackmail and intimidation of key holders, operating system bugs, application program bugs, hardware bugs, user errors, physical eavesdropping, social engineering, and dumpster diving, to name just a few.
Right. It must not have escaped anyone by now that the influence of cryptography has been huge, but the success in security has been minimal. Cryptography has not really been shown to secure our interactions that much, having missed the target as many times as it might have hit it. And with the rise of phishing and breaches and MITBs and trojans and so forth, we are now in the presence of evidence that the institutions of strong cryptography have cemented us into a sort of Maginot line mentality. So it may be doing more harm than good, although such a claim would need a lot of research to give it some weight.
I tried to research this a little in Pareto-secure, in which I asked why the measurements of crypto-algorithms received such slavish attention, to the exclusion of so much else. I found an answer, at least, and it was a positive, helpful answer. But the far bigger question remains: what about all the things we can't measure with a bit-ruler?
Matt Blaze listed 10 things in 1996:
In 2010, today more or less, he said "not much has changed." We live in a world where, if MD5 is shown to be a bit flaky because it has fewer bits than SHA1, the vigilantes of the net launch pogroms on the poor thing, committees of bitreaucrats write up how MD5-must-die, and the media breathlessly runs around claiming the sky will fall in. Even though none of this is true, and there is no attack possible at the moment, and when the attack is possible, it is still so unlikely that we can ignore it ... and even if it does happen, the damages will be next to zero.
Meanwhile, if you ask for a user-interface change, because the failure of the user-interface to identify false end-points has directly led to billions of dollars of damages, you can pretty much forget any discussion. For some reason, bit-strength dominates dollar losses, in every conversation.
I used to be one of those who gnashed my teeth at the sheer success of Applied Cryptography, and the consequent generation of crypto-amateurs who believed that the bit count was the beginning and end of all. But that's unfair, as I never got as far as reading the afterword, and the message is there. It looks like Bruce made a slight error, and should have made Matt Blaze's contribution the foreword, not the afterword.
A one-word error in the editorial algorithm! I must write to the committee...
Afterword: rumour has it that the 3rd edition of Applied Cryptography is nearing publication.
This ArsTechnica article explores what happens when the CA-supplied certificate is used as an MITM over some SSL connection to protect online-banking or similar. In the secure-lite model that emerged after the real-estate wars of the mid-1990s, consumers were told to click on their tiny padlock to check the cert:
Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.
A switch in CA is a very significant event. Jane Public might not be able to do much, but if a customer of Verisign's were MITM'd by a cert from Etisalat, this is something that affects Verisign. We might reasonably expect Verisign to be interested in that. As it affects the chrome, and as customers might get annoyed, we might even expect Verisign to treat this as an attack on their good reputation.
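For the watchful, this check can even be automated. Here's a minimal Python sketch of what clicking the padlock amounts to: pull the certificate off the wire and look at who issued it (the hostname is illustrative). Add a cache of last-seen issuers and a comparison, and you have the alarm bell:

```python
import socket, ssl

def issuer_of(host, port=443):
    """Fetch a site's certificate and return its issuer's name fields."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' arrives as a tuple of RDN tuples; flatten it to a dict
    return dict(rdn[0] for rdn in cert["issuer"])

print(issuer_of("www.example.com"))
# e.g. {'countryName': 'US', 'organizationName': '...', 'commonName': '...'}
```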
And that's why putting the brand of the CA onto the chrome is so important: it's the only real way to bring pressure to bear on a CA to get it to lift its game. Security, reputation, sales. These things are all on the line when there is a handle to grasp by the public.
When the public has no handle on what is going on, the deal falls back into the shadows. No security there; in the shadows we find audit, contracts, outsourcing. Got a problem? Shrug. It doesn't affect our sales.
So, what happens when a CA MITM's its own customer?
Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in that should be released shortly that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.
Arguably, this is not an MITM, because the CA is the authority (not the subscriber) ... but exotic legal arguments aside, we clearly don't want it. When it goes on, what we need is software defences: whitelisting, Conspiracy, and the other ideas floating around to do the job.
And, we need the CA-on-the-chrome idea so that the responsibility aspect is established. CAs shouldn't be able to MITM other CAs. If we can establish that, with teeth, then the CA-against-itself case is far easier to deal with.
In a paper, _Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL_, by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years: by accepting an entire root list of 100s of CAs, there is no barrier to any one of them going a little rogue.
Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims about a CA that also works in the lawful intercept business, something covered here on FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:
The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.
(my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:
Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.
Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled-CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that cert can be used against the customers of a CA in Korea, and indeed of all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.
So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there, that the chance of getting away with a little interception is too enticing? Square law says that the chances are say 100 CAs squared, or 10,000 times the chance of any one intercept. As we've reached that number, this indicates that the temptation to resist intercept is good for all except 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words, and in numbers.
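To restate that scratchy maths a little more carefully -- a sketch only, with an invented per-CA probability, since nobody publishes the real one: if each of n roots independently carries some small chance p of being compelled, the chance that at least one rogue root sits in your browser grows roughly as n times p, while the number of CA-against-CA attack pairs grows as n squared:

```python
def p_any_rogue(n, p):
    """Chance that at least one of n independent roots goes rogue."""
    return 1 - (1 - p) ** n

p = 0.001  # illustrative per-CA chance of a compelled cert, not a measured figure
for n in (10, 100, 200):
    print(f"{n:3d} roots: CA-against-CA pairs = {n*n:6d}, "
          f"P(at least one rogue) = {p_any_rogue(n, p):.3f}")
```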
One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.
Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:
Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.
They love Skype, which gives them everything they need without asking them anything. Which should therefore be reason enough to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well in excess of needs there. Browsing, on the other hand, is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid-1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.
In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:
3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in .
I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.
For Mozilla, the reason is a simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource the big security questions to others (known as the "standards groups," such as IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, whether they want to or not. So it only needs a mildly argumentative approach on the part of the others, and Mozilla is restrained from its apparent power.
What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom-line improvement from the CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the many and various groups within Microsoft to move.
Unlike Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However it seems to fit; all of their security work has been directed to market interests. Hence for example their work in identity & authentication (.net, infocard, etc) was all directed at creating the platform for capturing the future market.
What is odd is that all CAs agree that they want their logo on their browser real estate. Big and small. So one would think that there was a unified approach to this, and it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.
But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.
Which all leads to a new group of people trying to solve the problem. The authors present their model as this:
The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
A hash of the certificate.
The country of the issuing CA.
The name of the CA.
The country of the website.
The name of the website.
The entire chain of trust up to the root CA.
When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
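That decision rule is compact enough to sketch in a few lines of Python. This is my reading of the description above, not the authors' code, and the field names and store are illustrative:

```python
cache = {}  # hostname -> (cert_hash, ca_name, ca_country); illustrative store

def check(hostname, cert_hash, ca_name, ca_country):
    """Return True to load silently, False to warn the user."""
    seen = cache.get(hostname)
    if seen is None or seen[0] == cert_hash:
        cache[hostname] = (cert_hash, ca_name, ca_country)
        return True                      # first visit, or unchanged cert
    if seen[1] == ca_name or seen[2] == ca_country:
        cache[hostname] = (cert_hash, ca_name, ca_country)
        return True                      # same CA, or same-country re-issue
    return False                         # cross-country CA switch: warn

assert check("bank.example", "h1", "CA-A", "US")       # first visit
assert check("bank.example", "h2", "CA-B", "US")       # same-country re-issue
assert not check("bank.example", "h3", "CA-C", "CN")   # suspicious switch
```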
This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.
But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actual shipped code and garnered market share are likely to change the security position of users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned, which we already assume not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.
From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.
Say you go to https://addons.mozilla.org and download a popular extension. Maybe NoScript. The download location appears to be over HTTPS. ... (lots of HTTP/HTTPS blah blah) ... Today I had some time, so I’ve decided to play a little with this. ...
Well, Wladimir Palant is now the author of NoScript, and of course, I’m downloading it over HTTPS.
Boom! Mozilla's website uses an EV ("extended validation") certificate to present to the user that this is a high security site, or something. However, it provided the downloads over HTTP, and this was easily manipulable. Hence, NoScript (which I use) was hit by an MITM, and this might explain why Robert Hansen said that the real author of NoScript, Giorgio Maone, was the 9th most dangerous person on the planet.
What's going on here? Well, several things. The promise that the web site makes when it displays the green EV certificate is just that: a promise. The essence of the EV promise is that it is somehow of a stronger quality than say raw unencrypted HTTP.
EV checks identity to the Nth degree, which is why it is called "extended validation." Checking the identity is useful, but only insofar as it is matched by a balance in overall security, so the difference between ordinary certs and EV is mostly irrelevant. In simple security-model terms, that's not where the threats are (or ever were), so they are winding up the wrong knob. In economics terms, EV raises the cost barrier, which is classical price discrimination, and this results in a colourful signal, not a metric or measure. The marketing aligns the signal of green to security, but the attack easily splits security from promise. Worse, the more they wind the knob, the more they drift off topic, as per the silver bullets hypothesis.
If you want to put it in financial or payments context, EV should be like PCI, not like KYC.
But it isn't, it's the gold-card of KYC. So, how easy is it to breach the site itself and render the promise a joke? Well, this is the annoying part. Security practices in the HTTP world today are so woeful that even if you know a lot, and even if you try hard to make it secure, the current practices are terrifically hard to secure. Basically, complexity is the enemy of security, and complexity is too high in the world of websites & browsers.
So CABForum, the promoters of EV, fell into the trap of tightening the identity knob up so much, to increase the promise of security ... but didn't look at all at the real security equation on which it is dependent. So, it is a strong brand over a weak product, it's green paint over wood rot. Worse for them, they won't ever rectify this, because the solutions are out of their reach: they cannot weaken the identity promise without causing the bureaucrats to attack them, and they cannot improve the security of the foundation without destroying the business model.
Which brings us to the real security question. What's the easiest fix? It is:
And that would be the first problem with securing the promise: EV is sold as a simple identity advertisement, it does not enter into the security business at all. So this would make a radical change in the sales process, and it's not clear it would survive the transition.
Secondly, CABForum is a cartel of people who are in the sales business. It's not security-wise but security-talk. So as a practical matter, the institution is not staffed with the people who are going to pick up this ball and run with it. It has a history of missing the target and claiming victory (cf. phishing), so what would change its path now?
That's not to say that EV is all bad, but here is an example of how CABForum succeeded and failed, as illustrative of their difficulties:
EV achieved one thing, in that it put the brand name of the CA on the agenda. HTTPS security at the identity level is worthless unless the brand of the CA is shown. I haven't seen a Microsoft browser in a while, but the mock-ups showed the brand of the CA, so this gives the consumer a chance to avoid the obvious switcheroo from known brand to strange foreign thing, and recent breaches of various low-end commercial CAs indicate there isn't much consistency across the brands. (Or fake injection attacks, if one is worried about MITB.)
So it is essential to any sense of security that the identity of who made the security statement be known to consumers; and CABForum tried to make this happen. Here's one well-known (branded) CA's display of the complete security statement.
But Mozilla was a hold-out. (It is a mystery why Mozilla fights against the complete security statement. I have a feeling it is more of the same problem that infects CABForum, Microsoft and the Internet in general: monoculture. Mozilla is staffed by developers who apparently think that brands are TV adverts or roadside billboards of no value to their users, and that Mozilla's mission is to protect their flock against another mass-media rape of consciousness & humanity by the evil globalised corporations ... rather than appreciating the brand as a handle to a security statement that meshes into an overall security model.)
Which puts the example to the test: although CABForum was capable of winding the identity knob up to 11, and beyond, they were not capable of adjusting the guidelines to change the security model, and force the brand of the CA to be shown on the browser, in some sense. In these terms, what has now happened is that Mozilla is embarrassed, but zero blame falls onto the CA. It was the CA that should have solved the problem in the first place, because the CA is in the sell-security business, and Mozilla is not. So the CA should have adjusted its contracts to control the environment in which the green could be displayed. Oops.
Where do we go from here? Well, we'll probably see a lot of embarrassing attacks on EV ... the brand of EV will wobble, but still sell (mostly because consumers don't believe it anyway, because they know all security marketing is mostly talk [or, do they?], and corporates will try any advertising strategy, because most of them are unprovable anyway). But, gradually the embarrassments will add up to sufficient pressure to push the CABForum to entering the security business. (e.g., PCI not KYC.)
The big question is, how long will this take, and will it actually do any good? If it takes 5-10 years, which I would predict, it will likely be overtaken by events, just as happened with the first round of security world versus phishing. It seems that we are in that old military story where our unfortunate user-soldiers are being shot up on the beaches while the last-war generals are still arguing as to what paint to put on the rusty ships. SNAFU, "situation normal, all futzed up," or in normal language, same as it ever was.
Unusually, someone in the press wrote up a real controversy, called The Great PCI Security Debate of 2010. Exciting stuff! In this case, a bunch of security people including Bill Brenner, Ben Rothke, Anton Chuvakin and other famous names are arguing whether PCI is good or bad. The short story is that we are in the market for silver bullets, and this article nicely lays out the evidence:
Let's just look at the security market: I did a market survey when I was at IBM and there were about 70 different security product technologies, not even counting services. How many of those are required by PCI? It's a tiny subset. No one invests in all 70.
In this market, external factors are the driving force:
But the truth is, when someone determined they had to do something about targeted attacks or data loss prevention for intellectual property, they had a pilot and a budget but their bosses told them to cut it. The reason was, "I might get hacked, but I will get fined." That's a direct quote from a CIO and it's very logical and business focused. But instead of securing their highest-risk priority they're doing the thing that they'll get fined for not doing.
We don't "do security," rather, we avoid exposure to fingerpointing and other embarrassments. By way of hypotheses in the market for silver bullets, we then find ourselves seeking to reduce the exposure to those external costs; this causes the evolution of some form of best practices which is an agreed set that simply ensures you are not isolated by difference. In the case in point, this best practices is PCI.
In other words, security by herding, compliance-seeking behaviour:
One of the things I see within organizations is that there's a hurry-up-and-wait mentality. An organization will push really hard to get compliant. Then, the day the auditor walks out the door they say, "Thank goodness. Now I can wait until next year." So when we talk about compliance driving the wrong mindset, I think the wrong mindset was there to begin with.
It's a difficult proposition to say we're doing compliance instead of security when what I see is they're doing compliance because someone told them to, whereas if no one told them to they'd do nothing. It's like telling your kids to do their homework. If you don't tell them to do the homework they're going to play outside all day.
This is rational, we simply save more money doing that. What to do about it? If one is a consultant, one can sell more services:
There is security outside of PCI and if we as security counselors aren't encouraging customers to look outside PCI then we ourselves are failing the industry because we're not encouraging them to look to good security as opposed to just good PCI compliance. The idea that they fear the auditor and not the attacker really bothers me.
Which is of course rational for the adviser, but not rational for the buyer because more security likely reduces profits in this market. If on the other hand we are trying to make the market more efficient (generally a good goal, as this means it reduces the costs to all players) then the goal is simple: move the market for silver bullets into a market for lemons or limes.
That's easy to say, very hard to do. There's at least one guy who doesn't want that to happen: the attacker. Furthermore, depending on your view of the perversion of incentives in the payment industry, fraud is good for profits because it enables building of margin. Our security adviser has the same perverse incentive: the more fraud, the more jobs. Indeed, everyone is positive about it, except the user, and they get the bill, not the vote.
Ben Rothke: Dan Geer is the Shakespeare of information security, but at the end of the day people are reading Danielle Steel, not Shakespeare.
In the market for silver bullets, you don't need to talk like Shakespeare. Load up on bullets of Steel, or as many other mangled metaphors as you can cram in, and you're good to shoot it out with the rest of 'em.
The primary source was a survey run by an anti-phishing software vendor, so caveats apply. Still interesting!
For more meat on the bigger picture, see this article: Ending the PCI Blame Game. Which reads like a compressed version of this blog! Perhaps, finally, the thing that is staring the financial operators in the face has started to hit home, and they are really ready to sound the alarm.
One of the brief positive spots in the last decade was the California bill requiring that breaches of data be disclosed to affected customers. It took a while, but in 2005 the flood gates opened. Now the FBI reports:
"Of the thousands of cases that we've investigated, the public knows about a handful," said Shawn Henry, assistant director for the Federal Bureau of Investigation's Cyber Division. "There are million-dollar cases that nobody knows about."
That seems to point at a super-iceberg. To some extent this is expected, because companies will search out new methods to bypass the intent of the disclosure laws. And also there is the underlying economics. As has been pointed out by many (or perhaps not many but at least me) the reputation damage probably dwarfs the actual or measurable direct losses to the company and its customers.
Companies that are victims of cybercrime are reluctant to come forward out of fear the publicity will hurt their reputations, scare away customers and hurt profits. Sometimes they don't report the crimes to the FBI at all. In other cases they wait so long that it is tough to track down evidence.
So, avoidance of disclosure is the strategy for all properly managed companies, because they are required to manage the assets of their shareholders to the best interests of the shareholders. If you want a more dedicated treatment leading to this conclusion, have a look at "the market for silver bullets" paper.
They also target corporate executives and other wealthy public figures who it is relatively easy to pursue using public records. The FBI pursues such cases, though they are rarely made public.
Huh. And this outstanding coordinated attack:
A similar approach was used in a scheme that defrauded the Royal Bank of Scotland's RBS WorldPay of more than $9 million. A group, which included people from Estonia, Russia and Moldova, has been indicted for compromising the data encryption used by RBS WorldPay, one of the leading payment processing businesses globally.
The ring was accused of hacking data for payroll debit cards, which enable employees to withdraw their salaries from automated teller machines. More than $9 million was withdrawn in less than 12 hours from more than 2,100 ATMs around the world, the Justice Department has said.
2,100 ATMs! Worldwide! That leaves that USA gang looking somewhat kindergarten, with only 50 ATMs in a handful of cities. No doubt about it, we're now talking serious networked crime, and I'm not referring to the Internet but the network of collaborating, economic agents.
Compromising the data encryption, even. Anyone know the specs? These are important numbers. Did I miss this story, or does it prove the FBI's point?
I've had a rather troubling rash of blog comment failures recently. Not on FC, which seems to be OK ("to me"), but everywhere else. At about four failures in the last couple of days, I'm starting to get annoyed. I like to think that my time in writing blog comments for other blogs is valuable, and sometimes I think for many minutes about the best way to bring a point home.
The one case where I know clearly "it's not just me" is John Robb's blog. This was a *fantastic* blog with great conversation, until a year or two back. It went from dozens of comments to a couple, in one hit, by turning on whatever anti-spam flavour of the month was available in the blog system. I've not been able to comment there since, and I'm not alone.
This is denial of service. To all of us. And this denial of service is the greatest evidence of the failure of Internet security. Yet it is easy, theoretically easy, to avoid. Here, it is avoided by the simplest of tricks; maybe one spam per month comes my way, but if I got spam like others get spam, I'd stop doing the blog. Again, denial of service.
Over on CAcert.org's blog they recently implemented client certs. I'm not 100% convinced that this will eliminate comment spam, but I'm 99.9% convinced. And it is easy to use, and it also (more or less) eliminates that terrible thing called access control, which was delivering another denial of service: the people who could write weren't trusted to write, because the access control system said they had to be access-controlled. Gone, all gone.
According to the blog post on it:
The CAcert-Blog is now fully X509 enabled. From never visited the site before and using a named certificate you can, with one click (log in), register for the site and have author status ready to write your own contribution.
Sounds like a good idea, right? So why don't most people do this? Because they can't. Mostly they can't because they do not have a client certificate. And if they don't have one, there isn't any point in the site owner asking for it. Chicken & egg?
But actually there is another reason why people don't have a client certificate: it is because of all sorts of mumbo jumbo brought up by the SSL / PKIX people, chief amongst which is a claim that we need to know who you are before we can entrust you with a client certificate ... which I will now show to be a fallacy. The reason client certificates work is this:
If you only have a WoT unnamed certificate you can write your article and it will be spam controlled by the PR people (aka editors).
If you had a contributor account and haven’t posted anything yet you have been downgraded to a subscriber (no comment or write a post access) with all the other spammers. The good news is once you log in with a certificate you get upgraded to the correct status just as if you’d registered.
We don't actually need to know who you are. We only need to know that you are not a spammer, and you are going to write a good article for us. Both of these are more or less an equivalent thing, if you think about it; they are a logical parallel to the CAPTCHA or turing test. And we can prove this easily and economically and efficiently: write an article, and you're in.
Or, in certificate terms, we don't need to know who you are, we only need to know you are the same person as last time, when you were good.
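In code, the "same person as last time" test needs nothing more than a stable fingerprint of whatever client certificate is presented. A minimal sketch, with an illustrative in-memory store standing in for the blog's database:

```python
import hashlib

known_good = set()  # fingerprints of certs that previously behaved well

def fingerprint(cert_der: bytes) -> str:
    """A stable identity: the hash of the presented certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def returning_contributor(cert_der: bytes) -> bool:
    return fingerprint(cert_der) in known_good

def mark_good(cert_der: bytes) -> None:
    """Call after a contribution passes editorial review."""
    known_good.add(fingerprint(cert_der))
```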
This works. It is an undeniable benefit:
There is no password authentication any more. The time taken to make sure both behaved reliably was not possible in the time the admins had available.
That's two more pluses right there: no admin de-spamming time lost to us and general society (when there were about 290 in the wordpress click-delete queue) and we get rid of those bl**dy passwords, so another denial of service killed.
Why isn't this more available? The problem comes down to an inherent belief that the above doesn't work. Which is, of course, complete nonsense. Two weeks later, zero comment spam, and I know this will carry on being reliable, because the time taken to get a zero-name client certificate (free, it's just your time involved!) is well in excess of the trick required to comment on this blog.
No matter the *results*: because of the belief that "last-time-good-time" tests are not valuable, the feature of using client certs is not effectively available in the browser. That which I speak of here is so simple to code up that it can actually be triggered from any website (which is how CAs get certs into your browser in the first place: some simple code that causes your browser to do it all). It is basically the creation of a certificate key pair within the browser, with a no-name in it. Commonly called the self-signed certificate or SSC, these things can be put into the browser in about 5 seconds, automatically, on startup, on absence, or whenever.
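For the curious, here is roughly what that five-second, no-name key pair ceremony looks like if done by hand -- a sketch using Python's third-party `cryptography` package; the browser does the moral equivalent internally:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair, then self-sign a certificate over it: no CA,
# no identity claims, just a stable public key to be recognised by.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "anonymous")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                     # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
```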
Recall the aphorism that there is only one mode, and it is secure, and contrast it to SSL: we can see what went wrong. There is an *option* of using a client cert, which is a completely insane choice. The choice of making the client certificate optional within SSL is a decision not only to allow insecurity in the mode, but also to promote insecurity, by practically eliminating the use of client certs (see the chicken & egg problem).
And this is where SSL and the PKIX deliver their greatest harm. It denies simple cryptographic security to a wide audience, in order to deliver ... something else, which it turns out isn't as secure as hoped because everyone selects the wrong option. The denial of service attack is dominating, it's at the level of 99% and beyond: how many blogs do you know that have trouble with comments? How many use SSL at all?
So next time someone asks you, why these effing passwords are causing so much grief in your support department, ask them why they haven't implemented client certs? Or, why the spam problem is draining your life and destroying your social network? Client certs solve that problem.
Footnote: you're probably going to argue that SSCs will be adopted by the spammers' brigade once there is widespread use of this trick. Think for a minute before you post that comment; the answer is right there in front of your nose! Also, you are probably going to mention all these other limitations of the solution. Think for another minute and consider this claim: almost all of the real limitations exist because the solution isn't much used. Again, chicken & egg; see "usage." Or maybe you'll argue that we don't need it now that we have OpenID. That's specious, because we don't actually have OpenID as yet (some few do, not all), and also, the presence of one technology rarely argues against another being needed; only marketing argues like that.
A gang of internet fraudsters used a sophisticated virus to con members of the public into parting with their banking details and stealing £600,000, a court heard today.
Once the 'malicious software' had infected their computers, it waited until users logged on to their accounts, checked there was enough money in them and then insinuated itself into cash transfer procedures.
(also on El Reg.) This breaches the 2-factor authentication system commonly in use because (a) the trojan controls the user's PC, and (b) the authentication scheme that was commonly pushed out over the last decade or so only authenticates the user, not the transaction. So, as the trojan now controls the PC, it is the user. And the real user happily authenticates itself, and the trojan, and the trojan's transactions, and even lies about it!
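The missing piece is easy to state in code: authenticate the transaction, not the session. A MAC computed over the payee and amount, ideally on a device the trojan doesn't control, cannot be reused for a different transfer. A minimal sketch; the key handling and names are illustrative only:

```python
import hmac, hashlib

def transaction_mac(key: bytes, payee: str, amount_cents: int) -> str:
    """MAC over the transaction details themselves, not the login."""
    msg = f"{payee}|{amount_cents}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

key = b"shared-secret-held-on-a-separate-device"
mac = transaction_mac(key, "ACME Ltd", 12500)

# The bank recomputes over the transaction it actually received; a
# trojan-substituted payee or amount fails the check.
assert hmac.compare_digest(mac, transaction_mac(key, "ACME Ltd", 12500))
assert not hmac.compare_digest(mac, transaction_mac(key, "Mule Corp", 857600))
```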
Numbers, more than ordinarily reliable because they have been heard in court:
'In fact as a result of this Trojan virus fraud very many people - 138 customers - were affected in this way with some £600,000 being fraudulently transferred.
'Some of that money, £140,000, was recouped by NatWest after they became aware of this scam.'
This is called Man-in-the-browser, which is a subtle reference to the SSL's vaunted protection against Man-in-the-middle. Unfortunately several things went wrong in this area of security: Adi's 3rd law of security says the attacker always bypasses; one of my unnumbered aphorisms has it that the node is always the threat, never the wire, and finally, the extraordinary success of SSL in the mindspace war blocked any attempts to fix the essential problems. SSL is so secure that nobody dare challenge browser security.
The MITB was first reported in March 2006 and sent a wave of fear through the leading European banks. If customers lost trust in the online banking, this would turn their support / branch employment numbers on their heads. So they rapidly (for banks) developed a counter-attack by moving their confirmation process over to the SMS channel of users' phones. The Man-in-the-browser cannot leap across that air-gap, and the MITB is more or less defeated.
European banks tend to be proactive when it comes to security, and hence their losses are minuscule. Reported recently was something like €400k for a smaller country (7 million people?) for an entire year, for all banks. This one case in the UK is double that, reflecting that British banks and USA banks are reactive on security: although they knew about it, they ignored it.
This could be called the "prove-it" school of security, and it has merit. As we saw with SSL, there never really was much of a threat on the wire; and when it came to the node, we were pretty much defenceless (although a lot of that comes down to one factor: Microsoft Windows). So when faced with FUD from the crypto / security industry, it is very very hard to separate real dangers from made up ones. I felt it was serious; others thought I was spreading FUD! Hence Philipp Güring's paper Concepts against Man-in-the-Browser Attacks, and the episode formed fascinating evidence for the market for silver bullets. The concept is now proven right in practice, but it didn't turn out how we predicted.
What is also interesting is that we now have a good cycle timeline: March 2006 is when the threat first crossed our radars. September 2009 it is in the British courts.
Postscript. More numbers from today's MITB:
A next-generation Trojan recently discovered pilfering online bank accounts around the world kicks it up a notch by avoiding any behavior that would trigger a fraud alert and forging the victim's bank statement to cover its tracks.
The so-called URLZone Trojan doesn't just dupe users into giving up their online banking credentials like most banking Trojans do: Instead, it calls back to its command and control server for specific instructions on exactly how much to steal from the victim's bank account without raising any suspicion, and to which money mule account to send the money. Then it forges the victim's on-screen bank statements so the person and bank don't see the unauthorized transaction.
Researchers from Finjan found the sophisticated attack, in which the cybercriminals stole around 200,000 euro per day during a period of 22 days in August from several online European bank customers, many of whom were based in Germany....
"The Trojan was smart enough to be able to look at the [victim's] bank balance," says Yuval Ben-Itzhak, CTO of Finjan... Finjan found the attackers had lured about 90,000 potential victims to their sites, and successfully infected about 6,400 of them. ...URLZone ensures the transactions are subtle: "The balance must be positive, and they set a minimum and maximum amount" based on the victim's balance, Ben-Itzhak says. That ensures the bank's anti-fraud system doesn't trigger an alert, he says.
And the malware is making the decisions -- and alterations to the bank statement -- in real time, he says. In one case, the attackers stole 8,576 euro, but the Trojan forged a screen that showed the transferred amount as 53.94 euro. The only way the victim would discover the discrepancy is if he logged into his account from an uninfected machine.
Gunnar reports that someone called Robert Garigue died last month. This person I knew not, but his model resonates. Sound bites only from Gunnar's post:
"It's the End of the CISO As We Know It (And I Feel Fine)"...
...First, they miss the opportunity to look at security as a business enabler. Dr. Garigue pointed out that because cars have brakes, we can drive faster. Security as a business enabler should absolutely be the starting point for enterprise information security programs.
Secondly, if your security model reflects some CYA abstraction of reality instead of reality itself your security model is flawed. I explored this endemic myopia...
This rhymes with: "what's your business model?" The bit lacking from most orientations is the enabler, why are we here in the first place? It's not to show the most elegant protocol for achieving C-I-A (confidentiality, integrity, authenticity), but to promote the business.
How do we do that? Well, most technologists don't understand the business, let alone speak the language. And the business folks can't speak the techno-crypto blah blah either, so the blame is fairly shared. Dr. Garigue points us to Charlemagne as a better model:
King of the Franks and Holy Roman Emperor; conqueror of the Lombards and Saxons (742-814) - reunited much of Europe after the Dark Ages.
He set up other schools, opening them to peasant boys as well as nobles. Charlemagne never stopped studying. He brought an English monk, Alcuin, and other scholars to his court - encouraging the development of a standard script.
He set up money standards to encourage commerce, tried to build a Rhine-Danube canal, and urged better farming methods. He especially worked to spread education and Christianity in every class of people.
He relied on Counts, Margraves and Missi Domini to help him.
Margraves - Guard the frontier districts of the empire. Margraves retained, within their own jurisdictions, the authority of dukes in the feudal arm of the empire.
Missi Domini - Messengers of the King.
In other words, the role of the security person is to enable others to learn, not to do, nor to critique, nor to design. In more specific terms, the goal is to bring the team to a better standard, and a better mix of security and business. Garigue's mandate for IT security?
Knowledge of risky things is of strategic value
How to know today tomorrow’s unknown?
How to structure information security processes in an organization so as to identify and address the NEXT categories of risks?
Curious, isn't it! But if we think about how reactive most security thinking is these days, one has to wonder where we would ever get the chance to fight tomorrow's war, today?
The CA and PKI business is busy this week. CAcert, a community Certification Authority, has a special general meeting to resolve the trauma of the collapse of their audit process. Depending on who you ask, my resignation as auditor was either the symptom or the cause.
In my opinion, the process wasn't working, so now I'm switching to the other side of the tracks. I'll work to get the audit done from the inside. Whether it will be faster or easier this way is difficult to say, we only get to run the experiment once.
Meanwhile, Mike Zusman and Alex Sotirov are claiming to have breached the EV green bar thing used by some higher end websites. No details available yet, it's the normal tease before a BlabHat style presentation by academics. Rumour has it that they've exploited weaknesses in the browsers. Some details emerging:
With control of the DNS for the access point, the attackers can establish their machines as men-in-the-middle, monitoring what victims logged into the access point are up to. They can let victims connect to EV SSL sites - turning the address bars green. Subsequently, they can redirect the connection to a DV SSL session under a certificate they have gotten illicitly, but the browser will still show the green bar.
Ah that old chestnut: if you slice your site down the middle and do security on the left and no or lesser security on the right, guess where the attacker comes in? Not the left or the right, but up the middle, between the two. He exploits the gap. Which is why elsewhere, we say "there is only one mode and it is secure."
Aside from that, this is an interesting data point. It might be considered that this is proof that the process is working (following the GP theory), or it might be proof that the process is broken (following the sleeping-dogs-lie model of security).
Although EV represents a good documentation of what the USA/Canada region (not Europe) would subscribe to as "best practices," it fails in some disappointing ways. And in some ways it has made matters worse. Here's one: because the closed proprietary group CA/B Forum didn't really agree to fix the real problems, those real problems are still there. As Extended Validation has held itself up as a sort of gold standard, this means that attackers now have something fun to focus on. We all knew that SSL was sort of facade-ware in the real security game, and didn't bother to mention it. But now that the bigger CAs have bought into the marketing campaign, they'll get a steady stream of attention from academics and press.
I would guess less so from real attackers, because there are easier pickings elsewhere, but maybe I'm wrong:
"From May to June 2009 the total number of fraudulent website URLs using VeriSign SSL certificates represented 26% of all SSL certificate attacks, while the previous six months presented only a single occurrence," Raza wrote on the Symantec Security blogs.
... MarkMonitor found more than 7,300 domains exploited four top U.S. and international bank brands with 16% of them registered since September 2008.
.... But in the latest spate of phishing attempts, the SSL certificates were legitimate because "they matched the URL of the fake pages that were mimicking the target brands," Raza wrote.
VeriSign Inc., which sells SSL certificates, points out that SSL certificate fraud currently represents a tiny percentage of overall phishing attacks. Only two domains, and two VeriSign certificates were compromised in the attacks identified by Symantec, which targeted seven different brands.
"This activity falls well within the normal variability you would see on a very infrequent occurrence," said Tim Callan, a product marketing executive for VeriSign's SSL business unit. "If these were the results of a coin flip, with heads yielding 1 and tails yielding 0, we wouldn't be surprised to see this sequence at all, and certainly wouldn't conclude that there's any upward trend towards heads coming up on the coin."
Well, we hope that nobody's head is flipped in an unsurprising fashion....
It remains to be seen whether this makes any difference. I must admit, I check the green bar on my browser when online banking, but annoyingly it makes me click to see who signed it. For real users, Firefox says that it is the website, which is wrong and annoying, but Mozilla has not shown itself adept at understanding the legal and business side of security. I've heard Safari has been fixed up, so it's probably time to try that again and report sometime.
Then, over to Germany, where a snafu with an HSM ("hardware security module") caused a root key to be lost (also in German). Over in the crypto lists, there are PKI opponents pointing out how this means it doesn't work, and there are PKI proponents pointing out how they should have employed better consultants. Both sides are right of course, so what to conclude?
Test runs with Germany's first-generation electronic health cards and doctors' "health professional cards" have suffered a serious setback. After the failure of a hardware security module (HSM) holding the private keys for the root Certificate Authority (root CA) for the first-generation cards, it emerged that the data had not been backed up. Consequently, if additional new cards are required for field testing, all of the cards previously produced for the tests will have to be replaced, because a new root CA will have to be generated. ... Besides its use in authentication, the root CA is also important for card withdrawal (the revocation service).
The first thing to realise was that this was a test rollout and not the real thing. So the test discovered a major weakness; in that sense it is successful, albeit highly embarrassing because it reached the press.
The second thing is the HSM issue. As we know, PKI is constructed as a hierarchy, or a tree. At the root of the tree is the root key of course. If this breaks, everything else collapses.
Hence there is a terrible fear of the root breaking. This feeds into the wishes of the suppliers of high-security modules, who make hardware that protects the root from being stolen. But, in this case, the HSM broke, and there was no backup. So a protection for one fear -- theft -- resulted in a vulnerability to another fear -- data loss.
A moment's thought and we realise that the HSM has to have a backup. Which has to be at least as good as the HSM. Which means we then have some rather cute conundrums, based on the Alice in Wonderland concept of having one single root except we need multiple single roots... In practice, how do we create the root inside the HSM (for security protection) and get it to another HSM (for recovery protection)?
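There is a conventional answer, and it is worth sketching to show what it costs. Split the root key into shares, Shamir-style, and hand the shares to separate custodians; any quorum can rebuild the key, and no single backup device holds it whole. The sketch below is entirely mine -- parameters, numbers and all -- and real key ceremonies use vetted implementations inside hardware, not a blog snippet:

# Minimal Shamir secret-sharing sketch: split a root key into n shares,
# any k of which reconstruct it. Illustrative only.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret, k, n):
    """Split `secret` into n shares with threshold k."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)     # stand-in for the root key
shares = split(key, k=3, n=5)      # any 3 of 5 custodians suffice
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[2:]) == key

Note what just happened: we answered the one-root-in-one-box conundrum by creating five custodians, a quorum rule and a ceremony manual.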
Serious engineers and architects will be reaching for one word: BRITTLE! And so it is. Yes, it is possible to do this, but only by breaking the hierarchical principle of PKI itself. It is hard to break fundamental principles, and the result is that PKI will always be brittle, the implementations will always have contradictions that are swept under the carpet by the managers, auditors and salesmen. The PKI design is simply not real world engineering, and the only thing that keeps it going is the institutional deadly embrace of governments, standards committees, developers and security companies.
Not the market demand. But not all has been bad in the PKI world. Actually, since the bottoming out of the dotcom collapse, certs have been on the rise, and market demand is present, albeit nothing beyond compliance-driven. Here comes a minor item of success:
VeriSign, Inc. [SNIP] today reported it has topped the 1 billion mark for daily Online Certificate Status Protocol (OCSP) checks.
[SNIP] A key link in the online security chain, OCSP offers the most timely and efficient way for Web browsers to determine whether a Secure Sockets Layer (SSL) or user certificate is still valid or has been revoked. Generally, when a browser initiates an SSL session, OCSP servers receive a query to check to see if the certificate in use is valid. Likewise, when a user initiates actions such as smartcard logon, VPN access or Web authentication, OCSP servers check the validity of the user certificate that is presented. OCSP servers are operated by Certificate Authorities, and VeriSign is the world's leading Certificate Authority.
[SNIP] VeriSign is the EV SSL Certificate provider of choice for more than 10,000 Internet domain names, representing 74 percent of the entire EV SSL Certificate market worldwide.
(In the above, I've snipped the self-serving marketing and one blatant misrepresentation.)
Certificates are static statements. They can be revoked, but the old design of downloading complete lists of all revocations was not really workable (some CAs ship megabyte-sized lists). We now have a new thing whereby if you are in possession of a certificate, you can do an online check of its status, called OCSP.
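For the curious, an OCSP check is just a small HTTP POST. Here is a sketch in Python using the `cryptography` and `requests` packages; the file names and the responder URL are my inventions, and in real life the responder URL is read out of the certificate's AIA extension:

# Sketch of an OCSP status check, assuming you already hold the site's
# certificate and its issuer's certificate as PEM files (names are mine).
import requests  # pip install requests cryptography
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

site = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# Build the OCSP request for this one certificate.
builder = ocsp.OCSPRequestBuilder().add_certificate(site, issuer, hashes.SHA1())
req = builder.build().public_bytes(serialization.Encoding.DER)

# Hypothetical responder URL, hardcoded for the sketch.
resp = requests.post(
    "http://ocsp.example-ca.com",
    data=req,
    headers={"Content-Type": "application/ocsp-request"},
)
status = ocsp.load_der_ocsp_response(resp.content)
print(status.certificate_status)  # e.g. OCSPCertStatus.GOOD or .REVOKED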
The fundamental problem with this, and the reason why it took the industry so long to get around to making revocation a real-time thing, is that once you have that architecture in place, you no longer need certificates. If you know the website, you simply go to a trusted provider and get the public key. The problem with this approach is that it doesn't allow the CA business to sell certificates to web site owners. As it lacks any business model for CAs, the CAs will fight it tooth & nail.
Just another conundrum from the office of security Kafkaism.
Here's another one, this time from the world of code signing. The idea is that updates and plugins can be sent to you with a digital signature. This means variously that the code is good and won't hurt you, or that someone knows who the attacker is, and you can't hurt him. Whatever it means, developers set great store by the apparent ability of the digital signature to protect them from something or other.
But it doesn't work for BlackBerry users. Allegedly, a BlackBerry provider sent a signed code update to all users in the United Arab Emirates:
Yesterday it was reported by various media outlets that a recent BlackBerry software update from Etisalat (a UAE-based carrier) contained spyware that would intercept emails and text messages and send copies to a central Etisalat server. We decided to take a look to find out more.
Whenever a message is received on the device, the Recv class first inspects it to determine if it contains an embedded command — more on this later. If not, it UTF-8 encodes the message, GZIPs it, AES encrypts it using a static key ("EtisalatIsAProviderForBlackBerry"), and Base64 encodes the result. It then adds this bundle to a transmit queue. The main app polls this queue every five seconds using a Timer, and when there are items in the queue to transmit, it calls this function to forward the message to a hardcoded server via HTTP (see below). The call to http.sendData() simply constructs the POST request and sends it over the wire with the proper headers.
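That quoted pipeline is short enough to reconstruct as a sketch. The cipher mode and padding were not reported, so ECB and zero-padding below are my assumptions, not Etisalat's:

# The quoted pipeline, reconstructed: UTF-8 -> gzip -> AES (static key)
# -> Base64. Mode and padding are my assumptions, not from the report.
import base64, gzip
from Crypto.Cipher import AES  # pip install pycryptodome

KEY = b"EtisalatIsAProviderForBlackBerry"  # 32 bytes: an AES-256 key

def bundle(message):
    data = gzip.compress(message.encode("utf-8"))
    data += b"\x00" * (-len(data) % 16)          # pad to AES block size
    ciphertext = AES.new(KEY, AES.MODE_ECB).encrypt(data)
    return base64.b64encode(ciphertext)

print(bundle("a private email, on its way to a central server"))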
Oops! A signed spyware from the provider that copies all your private email and sends it to a server. Sounds simple, but there's a gotcha...
The most alarming part about this whole situation is that people only noticed the malware because it was draining their batteries. The server receiving the initial registration packets (i.e. “Here I am, software is installed!”) got overloaded. Devices kept trying to connect every five seconds to empty the outbound message queue, thereby causing a battery drain. Some people were reporting on official BlackBerry forums that their batteries were being depleted from full charge in as little as half an hour.
So, even though the spyware provider had a way to turn it on and off:
It doesn’t seem to execute arbitrary commands, just packages up device information such as IMEI, IMSI, phone number, etc. and sends it back to the central server, the same way it does for received messages. It also provides a way to remotely enable/disable the spyware itself using the commands “start” and “stop”.
There was something wrong with the design, and everyone's BlackBerry went mad. Two points: first, if you want to spy on your own customers, be careful, and test it. Get quality engineers onto that part, because you are perverting a brittle design, and that is tricky stuff.
Second point. If you want to control a large portion of the population who have these devices, the centralised hierarchy of PKI and its one-root-to-bind-them-all principle would seem to be perfectly designed. Nobody can control it except the center, which puts you in charge. In this case, the center can use its powerful code-signing abilities to deliver whatever it wants you to trust. (You trust what it tells you to trust, of course.)
Which has led some wits to label the CAs as centralised vulnerability partners. Which is odd, because some organisations that should know better than to outsource the keys to their security continue to do so.
But who cares, as long as the work flows for the consultants, the committees, the HSM providers and the CAs?
Over on EC, Adam does a presentation on his new book, co-authored with Andrew Stewart. AFAICS, the basic message in the book is "security sucks, we better start again." Right, no argument there.
Curiously, he's also experimenting with Twitter in the presentations as a "silent" form of interaction (more). It is rather poignant to mix Twitter and security, but I generally like these experiments. The grey hairs don't understand this new stuff, and they have to find out somehow. Somehow and sometime; the only question is whether we are the dinosaurs or the mammals.
Reading through the quotes (standard stuff) I came across this one, unattributed:
I was pretty dismissive of "Black Swan" hype. I stand by that, and don't think we should allow fear of a black swan out there somewhere to prevent us from studying white ones and generalizing about what we can see.
OK, we just saw the Black Swan over on the finance scene, where Wall Street is now turning into Rubble Alley. Why not on the net? Black swans are a name for those areas where our numbers are garbage and our formulas have an occasional tendency (say 1%) to blow up.
Here's why there are no black swans on the net, for my money: there is no unified approach to security. Indeed, there isn't much or anything of security. There are tiny fights by tiny schools, but these are unadopted by the majority. Although there are a million certs out there, they play no real part in the security models of the users. A million OpenPGP keys are used for collecting signatures, not for securing data. Although there are hundreds of millions of lines of security code out there, now including fresh new Vista!, they are mostly ignored or bypassed or turned off or any other of the many Kerckhoffsian modes of failure.
The vast majority of the net is insecure. We ain't got no security, we don't fear no black swan. We're about as low as we can get. If I look at the most successful security product of all time, Skype, it's showing around 10 million users right now. Facebook, Myspace, youtube, google, you name them, they *all* do an order of magnitude better than Skype's ugly duckling waddle.
Why is the state of security so dire? Well, we could ask the DHS. Now, these guys are probably authoritative in at least a negative sense, because they actually were supposed to secure the USA government infrastructure. Here's what one guy says, thanks to Todd who passed this on:
The official in charge of coordinating the U.S. government's cybersecurity operations has quit, saying the expanding control of the National Security Agency over the nation's computer security efforts poses "threats to our democratic processes."
"Even from a security standpoint," Rod Beckstrom, the head of the Department of Homeland Security's National Cyber Security Center, told United Press International, "it is unwise to hand over the security of all government networks to a single organization."
"If our founding fathers were taking part in this debate (about the future organization of the government's cybersecurity activities) there is no doubt in my mind they would support a separation of security powers among different (government) organizations, in line with their commitment to checks and balances."
In a letter to Homeland Security Secretary Janet Napolitano last week, Beckstrom said the NSA "dominates most national cyber efforts" and "effectively controls DHS cyber efforts through detailees, technology insertions and the proposed move" of the NCSC to an NSA facility at the agency's Fort Meade, Md., headquarters.
It's called "the equity debate" for reasons obscure. Basically, the mission of the NSA is to breach our security. The theory has it that the NSA did this (partly) by ensuring that our security -- the security of the entire net -- was flaky enough for them to get in. Now we all pay the price, as the somewhat slower but more incisive criminal economy takes its tax.
Quite how we get from that NSA mission to where we are now is a rather long walk, and to be fair, the evidence is a bit scattered and tenuous. Unsurprisingly, the above resignation does not quite "spill the beans," thus preserving the beanholder's good name. But it is certainly good to see someone come out and say: these guys are ruining the party for all of us.
Those of us who are impacted by the world of security suffer under a sort of love-hate relationship with the word; so much of it is how we build applications, but so much of what is labelled security out there in the rest of the world is utter garbage.
So we tend to spend a lot of our time reverse-engineering popular security thought and finding the security bugs in it. I think I've found another one. Consider this very concise and clear description from Frank Stajano, who has published a draft book section seeking comments:
The viewpoint we shall adopt here, which I believe is the only one leading to robust system security engineering, is that security is essentially risk management. In the context of an adversarial situation, and from the viewpoint of the defender, we identify assets (things you want to protect, e.g. the collection of magazines under your bed), threats (bad things that might happen, e.g. someone stealing those magazines), vulnerabilities (weaknesses that might facilitate the occurrence of a threat, e.g. the fact that you rarely close the bedroom window when you go out), attacks (ways a threat can be made to happen, e.g. coming in through the open window and stealing the magazines—as well as, for good measure, that nice new four-wheel suitcase of yours to carry them away with) and risks (the expected loss caused by each attack, corresponding to the value of the asset involved times the probability that the attack will occur). Then we identify suitable safeguards (a priori defences, e.g. welding steel bars across the window to prevent break-ins) and countermeasures (a posteriori defences, e.g. welding steel bars to the window after a break-in has actually occurred, or calling the police). Finally, we implement the defences that are still worth implementing after evaluating their effectiveness and comparing their (certain) cost with the (uncertain) risk they mitigate.
(my emphases.) That's a good description of how the classical security world sees it. We start by saying, "What's your threat model?" Then out of that we build a security model to deal with those threats. The security model then incorporates some knowledge of risks to manage the tradeoffs.
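The arithmetic in that definition is worth making concrete, if only to show how tidy it looks on paper. A toy calculation, with every number invented by me:

# Stajano's risk arithmetic, made concrete with invented numbers:
# risk = asset value x probability of the attack; implement a defence
# only if its certain cost is below the uncertain loss it removes.
magazines_value = 500.0        # the asset, in whatever currency
p_burglary = 0.02              # annual probability of the attack
risk = magazines_value * p_burglary            # expected annual loss: 10.0

steel_bars_cost = 80.0         # amortised annual cost of the safeguard
p_burglary_with_bars = 0.001
residual_risk = magazines_value * p_burglary_with_bars

# The safeguard is worth it only if its cost < the risk reduction.
worth_it = steel_bars_cost < (risk - residual_risk)
print(risk, residual_risk, worth_it)   # 10.0 0.5 False -- don't weld

Neat, tidy, and entirely at the mercy of those invented inputs.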
The bit that's missing is the business. Instead of asking "What's your threat model?" as the first question, it should be "What's your business model?" Security asks that last, and only partly, through questions like "what are the risks?"
Calling security "risk management" then is a sort of nod to the point that security has a purpose within business; and by focussing on some risks, this allows the security modellers to preserve their existing model while tying it to the business. But it is still backwards; it is still seeking to add risks at the end, and will still result in "security" being just the annoying monkey on the back.
Instead, the first question should be "What's your business model?"
This unfortunately opens Pandora's box, because that implies that we can understand a business model. Assuming it is the case that your CISO understands a business model, it does rather imply that the only security we should be pushing is that which is from within. From inside the business, that is. The job of the security people is not therefore to teach and build security models, but to improve the abilities of the business people to incorporate good security as they are doing their business.
Which perhaps brings us full circle to the popular claim that the best security is that which is built in from the beginning.
I think this is why the best engineers who've done great security things start from the top; from the customer, the product, the market. They know that in order to secure something, they had better know what the something is before even attempting to add a cryptosec layer over it.
Which is to say, security cannot be a separate discipline. It can be a separate theory, a bit like statics is a theory from civil engineering, or triage is a part of medicine. You might study it in university, but you don't get a job in it; every practitioner needs some basic security. If you are a specialist in security, your job is more or less to teach it to practitioners. The alternative is to ask the practitioners to teach you about the product, which doesn't seem sensible.
The next question on unwinding secrecy is how to actually do it. It isn't as trivial as it sounds. Perhaps this is because the concept of "need-to-know" is so well embedded in the systems and managerial DNA that it takes a long time to root it out.
At LISA I was asked how to do this, but I don't have much of an answer. Here's what I have observed:
"I can't see why document X is secret, it seems wrong. Therefore, in 1 month, I intend to publish it. If there is any real reason, let me know before then."This protocol avoids the endless discussions as to why and whether.
Well, that's what I have thought about so far. I am sure there is more.
Have a read of this. Quick summary: Altimo thinks Telenor may be using espionage tactics to cause problems.
Altimo alleges the interception of emails and tapping of telephone calls, surveillance of executives and shareholders, and payments to journalists to write damaging articles.
So instead of getting its knickers in a knot (court case or whatever) Altimo simply writes to Telenor and suggests that this is going on, and asks for confirmation that they know nothing about it, do not endorse it, etc.
Who ya bluffin?
...Andrei Kosogov, Altimo's chairman, wrote an open letter to Telenor's chairman, Harald Norvik, asking him to explain what Telenor's role has been and "what activity your agents have directed at Altimo". He said that he was "reluctant to believe" that Mr Norvik or his colleagues would have sanctioned any of the activities complained of.
.... Mr Kosogov said he first wrote to Telenor in October asking if the company knew of the alleged campaign, but received no reply. In yesterday's letter to Mr Norvik, Mr Kosogov writes: "We would welcome your reassurance that Telenor's future dealings with Altimo will be conducted within a legal and ethical framework."
Think about it: this open disclosure locks down Telenor completely. It draws a firm line in time, and also gives Telenor a face-saving way to back out of any "exuberance" it might previously have "endorsed." If indeed Telenor does not take this chance to stop the activity, it would be negligent. If it is later found out that Telenor's board of directors knew, then it becomes a slam-dunk in court. And, if Telenor is indeed innocent of any action, it engages them in the fight to also chase the perpetrator. The bluff is called, as it were.
This is good use of game theory. Note also that the Advisory Board of Altimo includes some high-powered people:
Evidence of an alleged campaign was contained in documents sent to each member of Altimo's advisory board some time before October. The board is chaired by ex-GCHQ director Sir Francis Richards, and includes Lord Hurd, a former UK Foreign Secretary, and Sir Julian Horn-Smith, a founder of Vodafone.
We could speculate that those players -- the spooks and mandarins -- know how powerful open disclosure is in locking down the options of nefarious players. A salutary lesson!
One of the things that I've gradually come to believe is that secrecy in anything is more likely to be a danger to you and yours than a help. The reasons for this are many, but they boil down to one proposition:
There are no good reasons for secrecy, only less bad ones. If we accept that proposition, and start unwinding the secrecy so common in organisations today, there appear to be two questions: how far to open up, and how do we do it?
How far to open up appears to be a personal-organisational issue, and perhaps the easiest thing to do is to look at some examples. I've seen three in recent days which I'd like to share.
First the Intelligence agencies: in the USA, they are now winding back the concept of "need-to-know" and replacing it with "responsibility-to-share".
Implementing Intellipedia Within a "Need to Know" Culture
Sean Dennehy, Chief of Intellipedia Development, Directorate of Intelligence, U.S. Central Intelligence Agency
Sean will share the technical and cultural changes underway at the CIA involving the adoption of wikis, blogs, and social bookmarking tools. In 2005, Dr. Calvin Andrus published The Wiki and The Blog: Toward a Complex Adaptive Intelligence Community. Three years later, a vibrant and rapidly growing community has transformed how the CIA aggregates, communicates, and organizes intelligence information. These tools are being used to improve information sharing across the U.S. intelligence community by moving information out of traditional channels.
The way they are doing this is to run a community-wide suite of social network tools: blogs, wikis, youtube-copies, etc. The access is controlled at the session level by the username/password/TLS, and at the person level by sponsoring. The latter means that even contractors can be sponsored in to access the tools, and all sorts of people in the field can contribute directly to the collection of information.
The big problem that this switch has is that not only is intelligence information controlled by "need to know," but it is also controlled in horizontal layers. For the sake of this discussion, there are three: TOP SECRET / SECRET / UNCLASSIFIED-CONTROLLED. The intel community's solution to this is to have 3 separate networks in parallel, one for each, and to control access to each of these. So in effect, contractors might be easily sponsored into the lowest level, but less likely into the others.
What happens in practice? The best coverage is found in the network that has the largest number of people, which of course is the lowest, UNCLASSIFIED-CONTROLLED network. So, regardless of the intention, most of the good stuff is found in there, and where higher layer stuff adds value, there are little pointers embedded to how to find it.
In a nutshell, the result is that anyone who is "in" can see most everything, and modify everything. Anyone who is "out" cannot. Hence, a spectacular success if the mission was to share; it seems so obvious that one wonders why they didn't do it before.
As it turns out, the second example is quite similar: Google. A couple of chaps from there explained to me around the dinner table that the process is basically this: everyone inside google can talk about any project to any other insider. But, one should not talk about projects to outsiders (presumably there are some exceptions). It seems that SEC (Securities and Exchange Commission in USA) provisions for a public corporation lead to some sensitivity, and rather than try and stop the internal discussion, google chose to make it very simple and draw a boundary at the obvious place.
The third example is CAcert. In order to deal with various issues, the Board chose to take it totally open last year. This means that all the decisions, all the strategies, all the processes should be published and discussable to all. Some things aren't out there, but they should be; if an exception is needed it must be argued and put into policies.
The curious thing is why CAcert did not choose to set a boundary at some point, like google and the intelligence agencies. Unlike google, there is no regulator to say "you must not reveal inside info of financial import." Unlike the CIA, CAcert is not engaging in a war with an enemy where the bad guys might be tipped off to some secret mission.
However, CAcert does have other problems, and it has one problem that tips it in the balance of total disclosure: the presence of valuable and tempting privacy assets. These seem to attract a steady stream of interested parties, and some of these parties are after private gain. I have now counted 4 attempts to do this in my time related to CAcert, and although each had their interesting differences, they each in their own way sought to employ CAcert's natural secrecy to their own advantage. From a commercial perspective, this was fairly obvious, as the interested parties sought to keep their negotiations confidential, and this allowed them to pursue the sales process and sell the insiders without wiser heads putting a stop to it. To the extent that there are incentives for various agencies to insert different agendas into the inner core, the CA needs a way to manage that process.
How to defend against that? Well, one way is to let the enemy of your enemy know who we are talking to. Let's take a benign example which happened (sort of): a USB security stick manufacturer might want to ship extra stuff, like CAcert's roots, on the stick. Does he want the negotiations to be private because other competitors might deal for equal access, or does he want it private because wiser heads will figure out that he is really after CAcert's customer list? CAcert might care more about one than the other, but they are both threats to someone. As the managers aren't smart enough to see every angle, every time, they need help. One defence is many eyeballs, and this is something that CAcert does have available to it. Perhaps if sufficient info of the emerging deal is published, then the rest of the community can figure it out. Perhaps, if the enemy's enemy notices what is going on, he can explain the tactic.
A more poignant example might be someone seeking to pervert the systems and get some false certificates issued. In order to deal with those, CAcert's evolving Security Manual says all conflicts of interest have to be declared broadly and in advance, so that we can all mull over them and watch for how these might be a problem. This serves up a dilemma to the secret attacker: either keep private and lie, and risk exposure later on, or tell all upfront and lose the element of surprise.
This method, if adopted, would involve sacrifices. It means that any agency that is looking to impact the systems is encouraged to open up, and this really puts the finger on them: are they trying to help us or themselves? Also, it means that all people in critical roles might have to sacrifice their privacy. This latter sacrifice, if made, is to preserve the privacy of others, and it is the greater for it.
I keep having the same discussion in various places, and keep coming back to this eloquent description of where we are:
Gunnar's full blog post is here ... it includes some other things, but nothing quite so poignant.
Web Security hasn't moved since 1995. Oh well.
One of the dilemmas that the browser security UI people have is that they have to deal with two different groups at the same time. One is the people who can work with the browser, and the other is those who blindly click when told to. The security system known as secure browsing seems to be designed for both groups at the same time, thus leading to bad results. For example, Dan Kaminsky counted another scalp when he found, back in April, that ISPs are doing MITMs on their customers:
The rub comes when a user is asking for a nonexistent subdomain of a real website, such as http://webmale.google.com, where the subdomain webmale doesn't exist (unlike, say, mail in mail.google.com). In this case, the Earthlink/Barefruit ads appear in the browser, while the title bar suggests that it's the official Google site.
The hacker could, for example, send spam e-mails to Earthlink subscribers with a link to a webpage on money.paypal.com. Visiting that link would take the victim to the hacker's site, and it would look as though they were on a real PayPal page.
That's a subtle attack, one which the techies can understand but the ordinary users cannot. Here's a simpler one (hat-tip to Duane), a straight phish:
Dear Wilmington Trust Banking Member,
Due to the high number of fraud attempts and phishing scams, it has been decided to implement EV SSL Certification on this Internet Banking website.
The use of EV SSL certification works with high security Web browsers to clearly identify whether the site belongs to the company or is another site imitating that company’s site.
It has been introduced to protect our clients against phishing and other online fraudulent activities. Since most Internet related crimes rely on false identity, WTDirect went through a rigorous validation process that meets the Extended Validation guidelines.
Please Update your account to the new EV SSL certification by Clicking here.
Please enter your User ID and Password and then click Go.
(Failure to verify account details correctly will lead to account suspension)
This is a phish email seen in the wild. We here -- the techies -- all know what's wrong with this attack, but can you explain it to your grandma? What is being attacked here is the brand of EV rather than the technology. In effect, the more ads that relate EV to security in a simplistic fashion, the better this attack works.
To evade this, the banks have to promote better practices amongst their clients, and they have to include the user into the protocol. But they are not being helped, it seems.
On the one hand, a commonly held belief among security developers is that users cannot be bothered with security, ignore any attempts, and therefore it is best not to bother them with it. Much as I like the mantra of "there is only one mode, and it is secure," it isn't going to work in the web case unless we do the impossible: drop HTTP and make HTTPS mandatory, solve the browser universality issue (note Chrome's efforts), and unify the security models end-to-end. As the group of Internet people who can follow that is vanishingly small, this is a non-starter.
On the other not-so-helpful hand, the pushers of certificates often find themselves caught between the horns of pushing more product (at a higher price) and providing the basic needs of the customers. I recently came across two cases at both extremes of the market: at the unpaid end of the market, the ability of a company to conveniently populate 50,000 desktops is, it is claimed, more important than meeting expectations about verification of contents. The problem with this approach is if people see strong expectations on one side, and casual promises on another equivalent side, the entire brand suffers.
And, at the expensive EV extreme of the market, the desire to sell a product can sometimes lead to exaggerated claims, as seen in a recent advertisement, above (hat-tip to Philipp). This advert in the German paper press includes perhaps over-broad or over-high claims. Translated from the German:
The latest and best in terms of online security. And in green, too.
Visible security for your site, from a company your customers trust.
It is quite simple: a green address bar means that your site is secure.
These claims are easy to make, and they may help sell certs. But they also help the above phishing attack, as they clearly make simple connections between security and EV.
"EV means security, gotta get me some of that! Click!" FTR, check the translator.
What would be somewhat more productive than sweeping marketing claims is to decide what it is that the green thing really meant.
I have heard one easy answer: EV means whatever the CA/Browser Forum Guidelines says it means. OK, but not so fast: I do not recall that it said anywhere that your site has to be secure! I admit to not having read it for a while, but the above advert seems entirely at odds with a certificate claim.
Further, because that document is long and involved, something clearer and shorter would be useful if there are any pretensions in saying anything to customers.
If you don't want to say anything to customers, then the Guidelines are fine. But then, what is the point of certs?
The above is just about EV, but that's just a foil for the wider problem: no CA makes this easy, as yet. I believe it to be an important task for certificate authorities to come up with a simple claim that they can make to users. Something clear, something a user can take to court.
Indeed, it would be good for all CAs and all browsers to have their claims clearly presented to users, as the users are being told that there is a benefit. To them. Frequently, for years and years now. And now they are at risk, both of losing that benefit and because of that benefit.
What is that benefit? Can you show it? Can you state it? Can you present it to your grandma? And, will you stand behind her, in court, and explain it to the judge?
Superb post by Mark on what I think is the biggest problem we have in security:

One thing you learn in consulting is that no matter what anyone tells you when you start a project about what problem you are trying to solve, it is always a people problem. The single biggest problem in security is too many breakers, not enough builders. Please understand, I am not saying that breakers are not useful; we need them, and we need them to continue to get better so we can build more resilient systems. But the industry is about 90% breaking and 10% building, and that's plain bad.
It’s still predominantly made up of an army of skilled hackers focused on better ways to break systems apart and find new ways to exploit vulnerabilities than “security architects” who are designing secure components, protocols and ultimately secure systems.
Hear hear! And why is this? One easy answer: breaking something is a solid metric. It's either broken or not, in general. Any journo can understand it.
On the other hand, building is too diffuse a signal. There is no easy number, there is no binary result. It takes a business focus over decades to understand that one architecture delivers more profits for users and corporates alike than another, and by then the architects have moved on, so even then, the result may not be clear.
Let's take an old example. Back around 1996, a couple of bored uni students cracked Netscape's secure browsing. The first time was by crunching the 40 bit crypto using the idle lab computers, and the second time was by predicting the less-than-random numbers injected into the protocol. These students were lauded in the press for having done something grand.
They then went on to a much harder task, and were almost never heard of again. What was that harder task? Building secure and usable systems. One of them tried to build a secure communications platform, another is trying to build a secure computing platform. So far, they have not succeeded at either task, but these are much harder tasks.
The true heroes back in the mid-1990s were the Netscape engineers who got something going and delivered it to the public, not the kids who scratched the paint off it. The breaches mentioned above were jokes, and bad ones at that, because they distracted attention from what was really being built. Case in point: even today, if we had twice as much 40-bit crypto as we do 128-bit crypto, we'd probably be twice as secure, because the attack of relevance simply isn't the bored uni student; it is the patient phisher.
If you recall names in this story, recall them for what they tried to build, and not for what they broke.
Frequent browsers will recall that top tip number 1 for every user out there is to buy a Mac, for several reasons.
Well, this isn't exactly a reason but more of a bonus (likely there are hardening tips for other popular operating systems as well). However, it is a useful resource for those people who really want more than a standard user install, without the compromises!
(Note, many of the hardening tips are beyond normal users, so seek experienced advice before following them slavishly.)
I have in the past presented the strawman that your CISO needs an MBA. Nobody has yet succeeded in knocking it down, and it is proving surprisingly resilient. Yet more evidence comes from Bruce Schneier's blog post of yesterday:
Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.
It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.
It's a good idea in theory, but it's mostly bunk in practice.
Bunk is wrong. Let's drill down. It works this way: NPV (net present value) and ROI (its lesser cousin) are a mathematical tool for choosing between alternate projects. Keep the notion of comparison tightly in your mind.
The tools measure the money going in versus the money going out in a neutral way. They are entirely neutral between projects because NPV is just mathematics, and the same mathematics is used for each project. (See the top part of Richard's post.)
Obviously, any result from the model depends totally on the inputs, so a great deal of care and theory is needed to supply those proper inputs. And it is here that security projects have the trouble, in that we don't have a good view as to how to predict attack costs. To be clear, there is no controversy about the inputs being a big problem.
But, assuming we have the theory, the process and the inputs, we can, again in principle, measure fairly across all projects.
That's how it works. As you can see above, we do not make a distinction between investment, savings, costs, returns or profits. Why not? Because the NPV model and the numbers don't, either.
What then goes wrong with security people when they say ROI doesn't apply to security?
Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.
The bottom line is that security saves money; it does not create money.
It seems that they seize on the words investment and returns, etc., and realise that the words differ from costs and savings. In conceptual or balance-sheet terms, they do differ, but here's the catch: to the models of NPV and ROI, it's all the same. In this sense, we could say that the title of ROI is a misnomer, or that there are several meanings to the word "investment" and you've seized on the wrong one.
If you are good at maths, consider it as simply a model that deals equally well with negative numbers as well as positive numbers. To a model, savings are just negatives of returns.
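To see that, here is a toy comparison, with cash flows invented by me. The model genuinely cannot tell whether a flow is a sale or an avoided fraud loss:

# NPV treats returns and avoided losses identically: both are just
# cash flows, positive or negative. Figures below are invented.
def npv(rate, cashflows):
    """Net present value; cashflows[0] lands at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10
factory  = [-1000, 400, 400, 400, 400]   # investment, then revenues
security = [-300, 150, 150, 150, 150]    # spend, then avoided fraud losses

print(round(npv(rate, factory), 2))      # 267.95
print(round(npv(rate, security), 2))     # 175.48 -- comparable on one scale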
Now, if your security director had an MBA, she would know that the purpose of NPV is to compare projects, and not anything else, like generating returns. She would also know that the model is neutral, and that the ability to handle negative numbers mean that expenses and savings can be compared as well. She would further know that the problems occur in the inputs and assumptions, not in the model.
Finally, she would know how to speak in the language of finance, which is the language that the finance people use. This might sound obvious, but it isn't so clear. As a generalism, it is this last point that is probably most significant about the MBA concept: it teaches you the language of all the other specialities. It doesn't necessarily make you a whizz at finance, or human resources, or marketing. But it at least lets you talk to them in their language. And, it reminds you that the other professions do have some credibility, so if they say something, listen first before teaching them how to suck eggs.
Graeme points to this entry that posits that security people need a legal background:
My own experience and talking to colleagues has prompted me to wonder whether the day has arrived that security professionals will need a legal background. The information security management professional is under increasing pressure to cope with the demands of the organization for access to information, to manage the expectations of the data owner on how and where the information is going to be processed and to adhere to regulatory and legal requirements for the data protection and archiving. In 2008, a number of rogue trader and tax evasion cases in the financial sector have heightened this pressure to manage data.
The short, sharp answer is no, but it is a little more nuanced than that. First, let's take the rogue trader issue, being someone who has breached the separation of roles within a trading company, and used it to bad effect. To spot and understand this requires two things: an understanding of how settlement works, and the principle of dual control. It does not require the law, at all. Indeed, the legal position of someone who has breached the separation, and has "followed instructions to make a lot of money" is a very difficult subject. Suffice to say, studying the law here will not help.
Secondly, asking security people to study law so as to deal with tax evasion is equally fruitless but for different reasons: it is simply too hard to understand, it is less law than an everlasting pitched battle between the opposing camps.
Another way of looking at this is to look at the FC7 thesis, which says that, in order to be an architect in financial cryptography, you need to be comfortable with cryptography, software engineering, rights, accounting, governance, value and finance. The point is not whether law is in there or not, but that there are an awful lot of important things that architects or security directors need before they need law.
Still, an understanding of the law is no bad thing. I've found several circumstances where it has been very useful to me and people I know.
In such cases, legal knowledge helps a lot. Another area which is becoming more and more of an issue is electronic evidence. As most evidence is now entering the digital domain (80% was a recent unreferenced claim), there is much to understand here, and much that one can do to save one's company. The problem with this, as lamented at the recent conference, is that any formal course of law includes nothing on electronic evidence. For that, you have to turn to books like those by Stephen Mason on Electronic Evidence. But that you can do yourself.
Since the famous Bill Gates Memo, around the same time as phishing and related frauds went institutional, Microsoft has switched around to deal with the devil within: security. In so doing, it has done what others should have done, and done it well. However, there was always going to be a problem with turning the super-tanker called Windows into a battleship.
I predicted a while back that (a) Vista would probably fail to make a difference, and (b) the next step was to start thinking of a new operating system. This wasn't the normal pique, but the cold-hearted analysis of the size of the task. If you work for 20 years making your OS easy but insecure, you don't have much chance of fixing that, even with the resources of Microsoft.
The Economist brings an update on both points. Firstly, on Vista's record after 18 months in the market:
To date, some 140m copies of Vista have been shipped compared with the 750m or more copies of XP in daily use. But the bulk of the Vista sales have been OEM copies that came pre-installed on computers when they were bought. Anyone wanting a PC without Vista had to order it specially.
Meanwhile, few corporate customers have bought upgrade licences they would need to convert their existing PCs to Vista. Overwhelmingly, Windows users have stuck with XP.
Even Microsoft now seems to accept that Vista is never going to be a blockbuster like XP, and is hurrying out a slimmed-down tweak of Vista known internally as Windows 7. This Vista lite is now expected late next year instead of 2010 or 2011.
It's not as though Vista is a dud. Compared with XP, its kernel—the core component that handles all the communication between the memory, processor and input and output devices—is far better protected from malware and misuse. And, in principle, Vista has better tools for networking. All told, its design is a definite improvement—albeit an incremental one—over XP.
Microsoft tried and failed to turn it around, security+market-wise. We might now be looking at the end of the franchise known as Windows. To be clear, while we are past the peak, any ending is a long way off in the distant future.
Classical strategy thinking says that there are two possible paths here: invest in a new franchise, or go "cash-cow". The latter means that you squeeze the revenues from the old franchise as long as possible, and delay the termination of the franchise as long as possible. The longer you delay the end, the more revenues you get. The reason for doing this is simple: there is no investment strategy that makes money, so you should return the money to the shareholders. There is a simple example here: the music majors are decidedly in cash-cow, today, because they have no better strategy than delaying their death by a thousand file-shares.
Certainly, with Bill Gates easing out, it would be possible to go cash-cow, but of course we on the outside can only cast our auguries and wonder at the signs. The Economist suggests that they may have taken the investment route:
Judging from recent rumours, that's what it is preparing to do. Even though it won't be in Windows 7, Microsoft is happy to talk about “MinWin”—a slimmed down version of the Windows core. It’s even willing to discuss its “Singularity” project—a microkernel-based operating system written strictly for research purposes. But ask about a project code-named “Midori” and everyone clams up.
By all accounts, Midori (Japanese for “green” and, by inference, “go”) capitalises on research done for Singularity. The interesting thing about this hush-hush operating system is that it’s not a research project in the normal sense. It's been moved out of the lab and into incubation, and is being managed by some of the most experienced software gurus in the company.
With only 18 months before Vista is to be replaced, there's no way Midori—which promises nothing less than a total rethink of the whole Windows metaphor—could be ready in time to take its place. But four or five years down the road, Microsoft might just confound its critics and pleasantly surprise the rest of us.
Comment? Even though I predicted Microsoft would go for a new OS, I think this is a tall order. There are two installed bases in the world today, being Unix and Windows. It's been that way for a long time, and efforts to change those two bases have generally failed. Even Apple gave up and went Unix. (The same economics works against the repeated attempts to upgrade the CPU instruction set.)
The flip-side of this is that the two bases are incredibly old and out-of-date. Unix's security model is "ok" but decidedly pre-PC; much of what it does is simply irrelevant to the modern world. For example, all the user-to-user protection is pointless in a one-user-one-PC environment, and the major protection barrier has accidentally become a hack known as TCP/IP, legendary for its inelegant grafting onto Unix. Windows has its own issues.
So we know two things: a redesign is decades over-due. And it won't budge the incumbents; both are likely to live another decade without appreciable change to the markets. We would need a miracle, or better, a killer-app to budge the installed base.
Hence the cold-hearted analysis of cash-cow wins out.
But wait! The warm-blooded humanists won't let that happen for one and only one reason: it is simply too boring to contemplate. Microsoft has so many honest, caring, devoted techies within that if a decision were made to go cash-cow, there would be a mass-defection. So the question then arises, what sort of a hybrid will be acceptable to shareholders and workers? Taking a leaf from recent politics, which is going through a peak-energy-masquerade of its own these days, some form of "green platform" has appeal to both sides of the voting electorate.
Lots of chatter is seen in the security places about a patch to DNS coming out. It might be related to Dan's earlier talks, but he also makes a claim that there is something special in this fix. The basic idea is that DNS queries now go out on randomised source ports (or some such), and this will stop spoofing attempts of some form. You should patch your DNS.
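Back-of-envelope arithmetic shows what the patch buys. My figures are a simplification -- real resolvers and real attacks are messier -- but the orders of magnitude are the point:

# Rough arithmetic on what source-port randomisation buys. A blind
# spoofer must match the 16-bit transaction ID; after the patch he must
# also match a randomised source port. Figures are my simplification.
txids = 2**16                  # 65,536 possible transaction IDs
ports = 64000                  # roughly the usable ephemeral port range

p_fixed_port  = 1 / txids              # guess per forged packet, before
p_random_port = 1 / (txids * ports)    # and after the patch

print(f"before: 1 in {txids:,}")            # 1 in 65,536
print(f"after:  1 in {txids * ports:,}")    # 1 in ~4.2 billion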
Many are skeptical, and this gives us an exemplary case study of today's "security" industry:
Ptacek: If the fix is “randomize your source ports”, we already knew you were vulnerable. Look, DNS has a 16 bit session ID… how big is an ASPSESSIONID or JSESSIONID? When you get to this point you are way past deck chairs on the titanic, but, I mean, the web people already know this. This is why TLS/SSL totally doesn’t care about the DNS. It is secure regardless of the fact that the DNS is owned.
Paraphrased: "Oh, we knew about that, so what?" As above, much of the chatter in other groups is about how this apparently fixes something that is long known, therefore insert long list of excuses, hand-wringing, slap-downs, and is not important. However, some of the comments are starting to hint at professionalism. Nathan McFeters writes:
I asked Dan what he thought about Thomas Ptacek’s (Thomas Ptacek of Matasano) comments suggesting that the flaw was blown out of proportion and Dan said that the flaw is very real and very serious and that the details will be out at Black Hat. Dan mentioned to me that he was very pleased with how everything has worked with the multi-vendor disclosure process, as he said, “we got several vendors together and it actually worked”. To be honest, this type of collaboration is long overdue, and there’s a lot of folks in the industry asking for it, and I’m not just talking about the tech companies cooperating, several banking and financial companies have discussed forums for knowledge sharing, and of course eBay has tried to pioneer this with their “eBay Red Team” event. It’s refreshing to hear a well-respected researcher like Dan feeling very positive about an experience with multiple vendors working together (my own experience has been a lot of finger pointing and monkey business).
Getting vendors to work together is quite an achievement. Getting them to work on security at the same time, instead of selling another silver bullet, is extraordinary, and Dan should write a book on that little trick:
Toward addressing the flaw, Kaminsky said the researchers decided to conduct a synchronized, multivendor release and as part of that, Microsoft in its July Patch Tuesday released MS08-037. Cisco, Sun, and Bind are also expected to roll out patches later on Tuesday.
As part of the coordinated release, Art Manion of CERT said vendors with DNS servers have been contacted, and there’s a longer list of additional vendors that have DNS clients. That list includes AT&T, Akamai, Juniper Networks, Inc., Netgear, Nortel, and ZyXEL. Not all of the DNS client vendors have announced patches or updates. Manion also confirmed that other nations with CERTs have also been informed of this vulnerability.
Still, for the most part, the industry remains fully focussed on the enemy within, as exemplified by Ptacek's comment above. I remain convinced that the average "expert" wouldn't recognise a security fix until he's been firmly whacked over the head by it. Perhaps that is what Ptacek was thinking when he allegedly said:
If the IETF would just find a way to embrace TLS/X509 instead of griping about how Verisign is out to get us, we wouldn’t have this problem. Instead, DNSSEC tried to reinvent TLS by committee… well, surprise surprise, in 2008, we still care about 16 bit session IDs! Go Internet!
Now, I admit to being a long-time supporter of TLS'ing everything (remember, there is only one mode, and it is secure!) but ... just ... Wow! I think this is what psychologists call the battered-wife syndrome; once we've been beaten black and blue with x.509 for long enough, maybe we start thinking that the way to quieten our oppressor down is to let him beat us some more. Yeah, honey, slap me with some more of that x.509 certificate love! Harder, honey, harder, you know I deserve it!
Back to reality, and to underscore that there is something non-obvious about this DNS attack that remains unspoken (have you patched yet?), the above-mentioned commentator switched around 540 degrees and said:
Patch Your (non-DJBDNS) Server Now. Dan Was Right. I Was Wrong.
Thanks to Rich Mogull, Dino and I just got off the phone with Dan Kaminsky. We know what he’s going to say at Black Hat.
What can we say right now?
1. Dan’s got the goods. ...
Redeemed! And, to be absolutely clear as to why this blog lays in with slap after slap, being able to admit a mistake should be the first criteria for any security guy. This puts Thomas way ahead of the rest of them.
Can't say it more clearly than that: have you patched your DNS server yet?
Spiegel reports that a German lower court (Amtsgericht Wiesloch, Az. 4C57/08) has found a bank responsible for malware-driven transactions on a user's PC. In this case, her PC was infected with some form of malware that grabbed the password and presumably a fresh TAN (the German one-time number that authenticates a single transaction) and scarfed 4000 euros through an eBay wash.
Unfortunately only in German, and so analysis following is highly limited and unreliable. It appears that the court's logic was that as the transaction was not authenticated by the user, it is the bank's problem.
This seems fairly simple, except that Microsoft Windows-based PCs are difficult to keep clean of malware. In this case, the user had a basic anti-virus program, but that's not enough these days (see top tips on what helps).
We in the security field all knew that, and customers are also increasingly becoming aware of it, but the question the banking and security world is asking itself is whether, why and when the bank is responsible for the user's insecure PC? Shouldn't the user take some of the risk for using an insecure platform?
The answer is no. The risk belongs totally to the bank in this case, in the opinion of the Wiesloch court, and the court of financial cryptography agrees. Consider the old legal principle of putting the responsibility with the party best able to manage it. In this case, the user cannot manage it, manifestly. Further, the security industry knew that the Windows PC was not secure enough for risky transactions, and that Microsoft software was the dominant platform. The banking industry has had this advice in tanker-loads (c.f. EU "Finread"), and in many cases banks have even heeded the advice, only to discard it later. The banking industry decided to go ahead in the face of this advice and deploy online banking for support cost motives. The banks took on this risk, knowing the risk, and knowing that the customer could not manage this risk.
Therefore, the liability falls completely to the bank in the basic circumstances described. In this blog's opinion! (It might have been different if the user had done something wrong, such as participating in a mule wash, or had carried on in the face of clear evidence of infection.)
There is some suggestion that the judgment might become a precedent, or not. We shall have to wait and see, but one thing is clear: online banking has a rocky road ahead of it, as the phishing rooster comes home to roost. As a contrary example, another case in Cologne (Az: 9S195/07) mentioned in the article put the responsibility for EC-card abuse with the customer. As we know, smart cards can't speak to the user about what they are doing, so again we have to ask what the Windows PC was saying about the smart card's activities. If the courts hold the line that the user is responsible for her EC-card, then this can only cause the user to mistrust her EC-card, potentially leading to yet another failure of an expensive digital signing system.
The costs for online banking are going to rise. A part of any solution, as frequently described by security experts, is to not trust widely deployed Microsoft Windows PCs for online banking, which in effect means PCs in general. A form of protection is fielded in some banks whereby the user's mobile phone is used to authenticate the real transaction over another channel. This is mostly cheap and mostly effective, but it isn't a comprehensive or permanent solution.
Last week's discussion (here and here) over how there is only one mode, and it is secure, brought forth the delicious contrast with browsing and security: yes, you can do that but it doesn't work well. No, I'm not talking about the logos being cross-sited, but all of the 100 little flaws that you find when you try and do a website for secure purposes.
So why bother? Financial cryptography eats its own medicine, but it doesn't do it for breakfast, lunch and dessert. Which reminds me to introduce another of the sub-hypotheses for critique:
#4.2 Usability Determines the Number of Users
Ease of use is the most important determinant of the number of users. Ease of implementation is important, ease of incorporation is also important, and even more important is the ease of use by end-users. This reflects a natural subdivision into several classes of users: implementors, integrators and end-users, each of which can halt the use of the protocol if they find it ... unusable. As they are laid out serially between you and the marketplace, you have to consider usability for all of them.
The protocol should be designed to be easy to code up, so as to help implementors help integrators help users. It should be designed to be easy to interface to, so as to help integrators help users. It should be designed to be easy to configure, so as to help users get security.
If there are any complex or tricky features, ask yourself whether the benefit is really worth the cost of the coder's time. It is not that developers cannot do it, it is simply that they will not do it; nobody has all the time in the world, and a protocol that takes twice as long to implement is twice as likely to not get done.
Same for integrators of systems. If the complexity provided by the protocol and the implementation causes X amount of work, and another protocol costs only X/2, then there is a big temptation to switch, regardless of absolute or theoretical security.
Same for users.
Sometime around 3 years back, banks started to respond to phishing efforts by putting in checks and controls to stop people sending money. This led to the emergence of a new business model that promised great returns on investment by arbitraging the controls, and has led to the enrichment of many! Now, my friends in NL have alerted me to the NVB's efforts to sell the Dutch on the possibilities.
A fun few minutes, and even though it is in Dutch, the symbology should be understood. Does anyone have an English translation?
(Okay, you might not see the Peace and Cash Foundation by the time you click on the site, but another generation will be there to help you to well-deserved riches...)
Ben Laurie blogs that downstream vendors like Debian shouldn't interfere with sensitive code like OpenSSL, because they might get it wrong... And in this case they did, and now all Debian and Ubuntu distros have 2-3 years of compromised keys.
Further analysis shows however that the failings are multiple, are at several levels, and they are shared all around. As we identified in 'silver bullets' that fingerpointing is part of the problem, not the solution, so let's work the problem, as professionals, and avoid the blame game.
First the tech problem. OpenSSL has a trick in it that mixes uninitialised memory in with the randomness generated by the OS's formal generator. The standard idea here is that it is good practice to mix different sources of randomness into your own source. Roughly following the designs from Schneier and Ferguson (Yarrow and Fortuna), modern operating systems take several random things like disk drive activity and net activity, mix the measurements into one pool, then run it through a fast hash to filter it.
What is good practice for the OS is good practice for the application. The reason for this is that in the application, we do not know what the lower layers are doing, and especially we don't really know if they are failing or not. This is OK if it is an unimportant or self-checking thing like reading a directory entry -- it either works or it doesn't -- but it is bad for security programming. Especially, it is bad for those parts where we cannot easily test the result. And randomness is that special sort of crypto that is very, very difficult to test for, because by definition, any number can be random. Hence, in high-security programming, we don't trust the randomness of lower layers, and we mix our own.
OpenSSL does this, which is good, but may be doing it poorly. What it (apparently) does is to mix in uninitialised buffers with the OS-supplied randoms, and little else (those who can read the code might confirm this). This is worth something, because there might be some garbage in the uninitialised buffers. The cryptoplumbing trick is to know whether it is worth the effort, and the answer is no: Often, uninitialised buffers are set to zero by lower layers (the compiler, the OS, the hardware), and often, they come from other deterministic places. So for the most part, there is no reasonable likelihood that it will be usefully random, and hence, it takes us too close to von Neumann's famous state of sin.
Secondly, to take this next-to-zero source and mix it in to the good OS source in a complex, undocumented fashion is not a good idea. Complexity is the enemy of security. It is for this reason that people study the designs like Yarrow and Fortuna and implement general purpose PRNGs very carefully, and implement them in a good solid fashion, in clear APIs and files, with "danger, danger" written all over them. We want people to be suspicious, because the very idea is suspicious.
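To make the honest version of the idea concrete, here is a toy sketch in Python -- not OpenSSL's code, and the names are invented -- of what Yarrow-style designs do: gather several weakly correlated sources, stir each one into a single pool through a fast hash, and draw outputs from the pool.

    import hashlib, os, time

    class ToyPool:
        """A toy entropy pool: hash together several independent sources.
        A sketch of the principle only, not a production PRNG."""
        def __init__(self):
            self.state = b""

        def mix(self, source_bytes):
            # Stir a new measurement into the pool through a fast hash,
            # so no single weak source can wreck the accumulated state.
            self.state = hashlib.sha256(self.state + source_bytes).digest()

        def read(self, n):
            out = b""
            while len(out) < n:
                self.state = hashlib.sha256(self.state + b"out").digest()
                out += self.state
            return out[:n]

    pool = ToyPool()
    pool.mix(os.urandom(32))              # the OS's formal generator
    pool.mix(str(time.time()).encode())   # a timing measurement
    # An uninitialised C buffer would be just one more mix() call here, and
    # worth next to nothing: it is usually zeroed or otherwise deterministic.
    key = pool.read(16)                   # draw 16 bytes for, say, a key

The mixing itself is harmless; the sin is counting the near-zero source as worth anything, and doing the stirring in a complex, undocumented corner of the code.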
Next. Cryptoplumbing involves by its necessity lots of errors and fixes and patches. So bug reporting channels are very important, and apparently this one was used. The Debian team "found" the "bug" with an analysis tool called Valgrind. It was duly reported up to OpenSSL, but the handover was muffed. Let's skip the fingerpointing here; the reason it was muffed was that it wasn't obvious what was going on. And the reason it wasn't obvious looks to be that the code was too clever for its own good. It tripped up Valgrind (and Purify), it tripped up the Debian programmers, and the fix did not alert the OpenSSL programmers. Complexity is our enemy, always, in security code.
So what in summary would you do as a risk manager?
 A practical example is that which Zooko and I did in SDP1 with the initialising vector (IV). We needed different inputs (not random) so we added a counter, the time *and* some random from the OS. This is because we were thinking like developers, and we knew that it was possible for all three to fail in multiple ways. In essence we gambled that at least one of them would work, and if all three failed together then the user deserved to hang.
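As a sketch only (the actual SDP1 layout differs, and the field sizes here are invented), that belt-and-braces construction looks something like this in Python:

    import os, struct, time

    _counter = 0  # per-session message counter

    def make_iv():
        """Build a 16-byte IV from three independent sources. The aim is
        uniqueness, not secrecy: even if two of the sources fail, the
        third should still keep the IV from repeating."""
        global _counter
        _counter += 1
        return (struct.pack(">I", _counter)             # 4 bytes: counter
                + struct.pack(">I", int(time.time()))   # 4 bytes: clock
                + os.urandom(8))                        # 8 bytes: OS random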
Dria writes up a fine intro to the new Firefox security UI, the thing that forms the front line for phishing protection.
Basically, the padlock has been replaced with a button that shows a "passport" icon in multiple colours and with explanatory text. Pretty good, although as far as I can see, this means you can no longer *look* and tell what is going on. OK, I'll run with that as an experiment! For now, there are 4 colours: grey, blue, green and yellow. Grey is for no identity, and I guess that also means no TLS although it isn't that clear as encryption edge-cases are not shown. Blue is Good, being the conventional browser security level, and Green is the EV colour for their new enhanced "better than before" identity verification.
One minus: yellow is now a confusing colour. Before, it was the colour of success; now it is the colour of failure (although that failure is easily rectified by clicking to add an exception). As I commented before, we do need the colours to be re-aligned after a period of experimentation. Kudos to the Firefox team for taking that one on the chin; if only others were prepared to clearly unwind some well-meaning ideas...
Second minus: as many said in comments, the "invalid identity verification" isn't always that; sometimes it is a non-third-party verification. The UI needs to distinguish between "identity not 3rd-party verified" and "self-verified."
Indeed, our old friend, the self-identified or self-signed certificate, is widely used in the technical community for local and small-scale security systems, so we can see that the messages "not trusted" and "invalid" are simply wrong. Now, Jonath says that Firefox is planning to do Key Continuity Management, so it may be that the confusing messages get cleaned up then. (I don't know if anyone else has spotted the nexus, but when we take KCM and combine it with 3rd-party verifications, variously called TTP or CVP or certificate authorities, we get the best of both worlds. It will eventually come, because it is so sensible.)
One good plus: the CA is displayed in all cases where the certificate information is available. This is exceedingly important because it is the CA that does the verification of identity; Firefox only repeats it (which is why the "trusted" message is soooo wrong). Slowly but surely, we have to move the browser UI to pin the assertion on the CA, and we have to move the user to thinking about CAs as competing, branded entities that stand behind their statements. Publicly. At least Firefox has not made the mistake that Microsoft has made with the new IE, which displays the name of the authority in some circumstances, but not others.
It is important to realise: we can take a few missteps along the way, such as Gerv's noble attempt with the colour yellow. It is very costly to get these UIs built, trialled and tuned, so any experiments have to be taken with an expectation of more improvement in the future. As long as we're moving forward, things are getting better.
What's all this about? Why is it important? Pop over to Francois' post on stolen accounts to see the real reason:
(Hat tip to Gunnar again.) Them's hard numbers! A quick eyeball calculation over Francois's numbers reveals that the going price of a well-funded account is 8% of the value. This makes some sense, because getting access to the account details is easy and done in bulk, given that the security systems of online banks are so fundamentally and hopelessly flawed. What is more difficult is getting the money out, because, since the rise of phishing and the sticking of some of the liability, occasionally, to the banks, the online security teams have done some things to make it a little trickier. So the lion's share goes to the money launderer, who needs to pay a lot more in direct costs to get his hard-stolen loot out.
How does this relate to Firefox? Phishing arose as an attack on the browser's security UI, so the Firefox team are working to address that issue with KCM, better displays of the CA name that makes the claim, and more clear symbols. The first 2 images above may help you to avoid the 3rd image.
Unfortunately, the big picture overtook the browser world, so the work is only just starting. Phishing caused massive funds to flow into the attackers, so they invest in any attacks that might return more value. Especially, the attack profile now includes MITM and trojans, so hardening of the browser against inside attacks will be increasingly necessary.
Remember, the threat is on the node, not on the wire; which gives us an answer to the question that Jonath was posing: Hardening against inside attacks should be on the agenda for future Firefox.
Paypal has released a white paper on their approach to phishing. It is mostly good stuff. Here are their Principles:
1. No Silver Bullet -- We have not identified any one solution that will single-handedly eradicate phishing; nor do we believe one will ever exist. ...
2. Passive Versus Active Users -- When thinking about account security, we observe two distinct types of users among our customers. The first and far more common is the “passive” user. Passive users expect to be kept safe online with virtually no involvement of their own. They will keep their passwords safe, but they do not look for the “s” in https nor will they install special software packages or look for digital certificates. The active user, on the other hand, is the “see it/touch it/use it” person. They want two-factor authentication, along with every other bell and whistle that PayPal or the industry can provide. Our solution would have to differentiate between and address both of these groups.
3. Industry Cooperation -- PayPal has been a popular “target of opportunity” for criminals due to our large account base of over 141 million accounts and the nature of our global service. However, while we may be an attractive target, we are far from the only one. Phishing is an industry problem, and we believe that we have a responsibility to help lead the industry toward solutions that help protect consumers – and the Internet ecosystem as a whole.
4. Standards-based -- A preference for solutions based on industry standards could be considered a facet of industry cooperation, but we believe it’s important enough to stand on its own. If phishing is an industry problem, then industry standard solutions will have the widest reach and the least overhead – certainly compared to proprietary solutions. For that reason, we have consistently picked industry standard solutions.
(Slightly edited.) That's a good start. The white paper explores their strategy and walks through their work with email signing. Short message: tough! No mention of the older-generation technologies such as OpenPGP or S/MIME; instead, they create plugins to handle their signatures:
To reach the active users who do not access their email through a signature-rendering email client, PayPal started working with Iconix, which offers the Truemark plug-in for many email clients. The software quickly and easily answers the question of “How do I know if a PayPal email is valid?” by rewriting the email inbox page to clearly show which messages have been properly signed.
That's by way of indicating how poorly tools designed in the early 90s from poor security analysis and wrong threat models are coping with real threats.
Next part is their work with the browser. It starts:
4.1 Unsafe browsers
There is of course, a corollary to safer browsers – what might be called “unsafe browsers.” That is, those browsers which do not have support for blocking phishing sites or for Extended Validation Certificates (a technology we will discuss later in this section). In our view, letting users view the PayPal site on one of these browsers is equal to a car manufacturer allowing drivers to buy one of their vehicles without seatbelts. The alarming fact is that there is a significant set of users who use very old and vulnerable browsers, such as Microsoft’s Internet Explorer 4 or even IE 3. ...
Unsafe Browsers are a frequent complaint here and in other places. Some good stuff has been done, but it was too little, too late, and we now see the unfortunate damage that has been done. One of these effects was that it is now up to the big website operators, in this case PayPal, to start policing the browsers. Old versions of IE are to be blocked, and a recent warning shot indicates that browsers that don't support EV will be next.
Next point: PayPal worked through several strategies. Three years ago, they pushed out a Toolbar (similar to the one listed on this blog). However, this only works for those who download toolbars, a complaint frequently mentioned. So PayPal then shifted to working with Microsoft to adopt the technology into IE7, and now they have ridden the adoption curve of IE7 for free.
Again this is exactly what should have been done: try it in toolbars then adopt it in the browser core. A cheap hint was missed and an expensive hit was scored:
4.4 Extended Validation SSL Certificates
Blocking offending sites works very well for passive users. However, we knew we needed to provide visual cues for our active users in the Web browser, much like we did with email signatures in the mail client.
Fortunately, the safer browsers helped tremendously. Taking advantage of a new type of site certificate called ‘Extended Validation (EV) SSL Certificates,’ newer browsers such as IE 7 highlight the address bar in green when customers are on a Web site that has been determined legitimate. They also display the company name and the certificate authority name. So, by displaying the green glow and company name, these newer browsers make it much easier for users to determine whether or not they’re on the site that they thought they were visiting.
This is a mixture of misinformation and danger for PayPal. The good part is that the browser, which is the authority on the CAs, now states definitively "Which Site" and also "Who says!" Green is a nice positive colour, too.
So, when this goes wrong, as it will, the user has more information with which to seek solutions. This shifting of the information back to the user (whether they want it or not) will do much more to cause the alignment of liabilities.
The bad part is that the browsers did not need a proprietary solution dressed up in a consortium in order to do this. They had indeed added the colour yellow (Firefox) and company name themselves, and the CA name has been an idea that I pushed repeatedly. For the record, Verisign asked for it early on as well. There's nothing special in the real business world about asking for "who says so?"
So we now have a structural lock-in placed in the browsers which they could have done for free, on their own initiative. Where does this go from here? Well, it certainly helps PayPal in the short term (in the same way that they could have said to users "download Firefox, check the yellow bar and company name"). But beyond that, I see dangers. As I have frequently written about the costs of the security approach before, I'll skip it now.
Overall, though, I'm pleased with the report. They have recognised that the industry has no solutions (no "silver bullets") and they have to do it themselves. They've implemented many different strategies, discarded some and improved others. Did it work? Yes, see the graph.
Best of all, they've taken the wraps off and published their findings. One of the clear indications from recent research and breach laws is that opening up the information is critical to success, and that starts with the website people. It's your job, do it, but that doesn't mean you have to do it alone! You can help each other by sharing industry-validated results, which are the only ones of any value.
To conclude, there are some other strategies that I'll suggest, and if PayPal are reading this, then they too can ask what's going on here:
a. Get TLS/SNI into Apache and IIS. Why? So that virtual hosted sites -- the 99% -- can use TLS. This will lead grass-roots-style to an explosion in the use of TLS.
Why's that helpful? (a) it will raise the profile of TLS work enormously, and that includes server-side and browser-side practices. It will help to re-direct all these resources above into security work in the browser. Right now, 1% of activity is in TLS. Priorities will change dramatically when that goes to 10%, and that means we can count on the browser teams to spend a whole lot more time on it. And (b) if all traffic goes over TLS, this reduces the amount of security work quite considerably, because everything is within the TLS security model. PayPal already figured this, as "more or less all" their stuff is now under TLS, including the report!
All browsers have already done their bit for TLS/SNI. Webservers are the laggards. Ask them. (For a taste of how server-side SNI works, see the sketch after this list.)
b. Hardened browser. Now that Firefox, IE and others have done some work to push attacks away from the user, the phishers are attacking the browsers from the inside. So there is a need to really secure the inside against attacks. Indeed Paypal already noted it in the report.
c. Hardened website. This means *reducing the tech.* The more you please your users, the more you please your attackers.
d. 2-channel authentication over the transaction. Okay, that's old news, but it bears repeating, because my online payment provider can only give me one of those old RSA fobs.
e. The browser website interface is here to stay. Which means we are stuck with a pretty poor combination. What's the long term design path? Here's one view: it starts with client-side certificates and opportunistic cryptography. Why? Because otherwise, transactions are naked and vulnerable, and your efforts are piecemeal.
Which means, don't replace the infrastructure wholesale, but re-tune it in the direction of security. Read the literature on secure transactions; none of it was ever employed in ecommerce, so there are lots of opportunities left for you :)
(Firefox and IE are both doing early components in this area, so ask them what it's about!)
f. Finally, bite the bullet and understand that your users will revolt one day if you continue to keep fees high through industry-wide cooperation on lackadaisical fraud practices. High fraud and high loss mean high fees, which mean high profits. One day the users will understand this, and revolt against your excuse. The manner in which this works is already well known to PayPal, and it will happen to you, unless you adopt a competitive approach to fraud and to fees.
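As promised in point (a), here is a rough sketch of what server-side SNI looks like, using Python's ssl module purely for illustration (the hostnames and certificate files are hypothetical). The client names the site inside the TLS handshake itself, so the server can pick the matching certificate before it answers:

    import ssl

    # Hypothetical certificate/key files, one pair per virtual host.
    CERTS = {
        "alpha.example.com": ("alpha-cert.pem", "alpha-key.pem"),
        "beta.example.com":  ("beta-cert.pem",  "beta-key.pem"),
    }

    def context_for(cert_file, key_file):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(cert_file, key_file)
        return ctx

    default_ctx = context_for(*CERTS["alpha.example.com"])

    def choose_cert(ssl_socket, server_name, initial_ctx):
        # Called mid-handshake with the name the client asked for, so one
        # IP address can present the right certificate for each site.
        if server_name in CERTS:
            ssl_socket.context = context_for(*CERTS[server_name])

    default_ctx.sni_callback = choose_cert
    # default_ctx.wrap_socket(listener, server_side=True) would then serve
    # both names from a single IP address.

That is the whole trick: once Apache and IIS do the equivalent, the 99% of virtually hosted sites can each have their own certificate without each needing their own IP address.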
Another message via the medium, this time from someone who knows how to use a remailer, and is therefore anonymous:
Don't try this at home! (without an anonymizing proxy, anyway)
Google willingly gives anyone a list of highly vulnerable US Government websites. Just write the following into the search box: gimme pow that do some splat
These are sites that construct blah directly from blech. Most of them would respond to blah that are not supported by the blech-based interface, leaking sensitive information left and right. But quite a few would let you splat the splotches as well, up to and including blamming entire ker-blats.
You didn't hear it from me.
OK, that was fun. Problem now is, how does someone I don't know that won't hear it from me get it from someone I didn't hear it from?
Here's the rest of the message. I think we should all try it. Safety in numbers.
Just write the following into the search box: allinurl:.gov select from where
These are websites that construct SQL queries directly from the URL. Most of them would respond to queries that are not supported by the web-based UI of the website, leaking sensitive information left and right. But quite a few would let you modify the databases as well, up to and including dropping entire databases.
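For those who haven't met it, the flaw being described is garden-variety SQL injection: the website pastes pieces of the URL straight into a query string. A minimal sketch of the broken pattern and the textbook fix, in Python with sqlite3 (table and names invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    def lookup_unsafe(id_from_url):
        # What the search trick finds: SQL built directly from the URL.
        # Input like "1 UNION SELECT ..." rewrites the query itself.
        return conn.execute(
            "SELECT name FROM users WHERE id = " + id_from_url).fetchall()

    def lookup_safe(id_from_url):
        # Parameterised query: the driver treats the input strictly as a value.
        return conn.execute(
            "SELECT name FROM users WHERE id = ?", (id_from_url,)).fetchall()

    print(lookup_safe("1"))   # [('alice',)]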
The only question left is whether I'm hearing from one anonymous or two. But you're not asking that.
I recently received an (anonymous) comment on the 'silver bullets' paper that ran like this:
Sellers most certainly still have more information than the vast majority of buyers based on the fact that they spend all of their time making security software.
That's an important statement, and deserves to be addressed. How can we check that statement? Well, one way is that we could walk over to the world's biggest concentration of sellers and perhaps buyers, and test the waters? The RSA conference! Figuratively, blog-wise, Gunnar does just that:
I went to RSA to speak with Brian Chess on Breaking Web Services. First time for me to RSA, I generally go to more geek-to-geek conferences like OWASP. It is a little weird to be in such a big convention. There were soooo many vendors yet most of the products in the massive trade show floor would have as much an impact on the security in your system as say plumbing fixtures. What is genuinely strange to me is that every other area in computers improves and yet security stagnates. For years the excuse that security people gave for their field's propensity to lameness is that "no one invests a nickel in security." However, that ain't the case any more and yet most of the products teh suck. This doesn't happen in other areas of computing - databases are vastly better than a decade ago, app servers same, OS same, go right down the list. What gives in security? Where is the innovation?
This is more or less similar to the paper's selection of quotes. Anecdotally, evidence exists that insiders don't think sellers know enough, on both sides of the fence. However, surveys can be self-selecting (as was my sample of quotes in the paper), and opinions can be wrong. So it is important to realise that we have not proven one way or another, we've simply opened the door to an uncertainty.
That is, it could be true that sellers don't know enough! How we then go on to show this, one way or another, is a subject for other (many) posts and possibly much more academic research. I don't for a moment think it is reasonable or scientifically appropriate to prove this in one paper.
Passports were always meant to help track citizens. According to lore, they were invented in the 19th century to stop Frenchmen evading the draft (conscription), which is still an issue in some countries. BigMac points to a Dutch working paper, "Fingerprinting Passports," which indicates that passports can now be used to identify (and hence discriminate against) the bearer's country of issue, at a distance of maybe 25cm. Future Napoleons will be happy.
Because terrorising the reader over breakfast is currently good writing style by governments and media alike, let's highlight the dangers first. The paper speculates:
Given that we can remotely detect the presence of a passport of a particular country, how could this functionality be abused? One abuse case that has been suggested is a passport bomb, designed to go off if someone with a passport of a certain nationality comes close. One could even send such a bomb by post, say to an embassy. A less spectacular, but possibly more realistic, use of this functionality would be by passport thieves, who can remotely check if someone is carrying a passport and if it is of a ‘suitable’ nationality, before they decide to rob them.
From the general fear department, we can also add that overseas travellers sometimes have a fear of being mugged, kidnapped, hijacked or simply shot because of their mere membership of a favourable or unfavourable country.
Now that we have the FUD off our chest, let's talk details. The trick involves sending a series of commands (up to 4) to the RFID in the passport, each of which is presumably rejected by the passport. The manner of rejection differs from country to country, so a precise fingerprint-of-country can be formed simply by examining each rejection, and then choosing a different command to further narrow the choices.
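In sketch form, the attack logic is a simple decision table. The probe commands and status words below are invented stand-ins (the real tables are in the paper); the point is that every country's chip rejects the probes, but each rejects them in its own recognisable style:

    # Invented probes and rejection codes, for illustration only.
    PROBES = ["probe-1", "probe-2", "probe-3", "probe-4"]

    # Each country's chip answers the same rejected commands differently.
    SIGNATURES = {
        ("6982", "6A86", "6700", "6D00"): "Country A",
        ("6982", "6982", "6700", "6D00"): "Country B",
        ("6D00", "6A86", "6986", "6D00"): "Country C",
    }

    def fingerprint(send):
        """send(cmd) transmits one command to the chip and returns the
        status word of its rejection. A handful of rejections is enough
        to narrow down the issuing country."""
        answers = tuple(send(cmd) for cmd in PROBES)
        return SIGNATURES.get(answers, "unknown issuer")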
How did this happen? I would speculate that the root failure is derived from bureaucrats' never-ending appetite for complex technological solutions to simple problems. In this case, the first root cause is the use of the RFID, being by intention and design something that can be read from up to 10 cm.
It is inherently attackable, and therefore by definition a very odd choice for security. The second complexity, then, involved implementing something to stop the attackers reading off the RFIDs without permission. The solution to an active read-off attack is encryption, of course! Which leads to our third complexity, a secret key, which is written inside the passport, of course! Which immediately raises issues of brute-forcing (of course!) and, as the paper references, it turns out that brute-forcing attacks work on some countries' passports because the secret key is ... poorly chosen.
All of this complexity, er, solution, means something called Basic Access Control is added to the RFID in order to ensure the use of the secret key. Which means a series of commands meant to defend the RFID. If we factor in the tendency for each country to implement passports entirely alone (because they are more scared of each other than they are of their citizens), we can see that each solution is proprietary and home-grown. To cope with this, the standard was written to be very flexible (of course!). Hence, it permits wide diversity in response to errors.
Whoops! Security error. In the world of security, we say that we should be precise in what we send, and precise in what we return.
From that point of view, this is poor security work by the governments of the world, but that's to be expected. The US State Department can now derive some satisfaction from earlier blunders; because of their failure to implement any form of encryption or access control, American passports can be read by all (terrorists and borderists alike), which apparently forced them to add aluminium foil into the passport cover to act as a Faraday cage. Likely, the other countries will now have to follow suit, and the smugness of being sophisticated and advanced in security terms ("we've got BAC!") will be replaced by a dawning realisation that they should have adopted the simpler solutions in the first place.
With the exception of the Birmingham News, what may be the largest bank breach involving insider theft of data seems to have flown under the mainstream media radar. ...
In light of the details now available, the breach appears to be the largest bank breach involving insider theft of data in terms of number of customers whose data were stolen. The largest incident to date for insider theft from a financial institution involved the theft of data on 8.5 million customers from Fidelity National Information Services by a subsidiary's employee.
It is not clear at the time of this writing whether Compass Bank ever notified the more than 1 million customers that their data had been stolen or how it handled disclosure and notification. A request for additional information from Compass Bank was not immediately answered.
I would guess that the Feds agreed to keep it quiet. And gave the institution a get-out-of-jail card for the disclosure requirement. It would be curious to see the logic, and I'd be skeptical. On the one side, the damage is done, and the potential for a sting or new information would not really be good enough to compensate for a million victims.
On the other side, maybe they were also able to satisfy themselves that no more damage would be done? It still doesn't cut the mustard, because once identity victims get hit, they need the hard information to clear their credit records.
But, in the light of yesterday's post, let's see this as an exception to the current US flavour of breach disclosure, and see if it sheds any light on costs of non-disclosure.
More literal evidence of ... well, everything really:
Targeting over 400 banks (including my own :( ! ) and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight. The scale and sophistication of this emerging banking Trojan is worrying, even for someone who sees banking Trojans on a daily basis.
This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.
The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.
The Trojan does not use this attack vector for all banks, however. *It only uses this route when an easier route is not available*. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those....
About the only thing that is a bit of a surprise is the speed of this attack. We first reported the MITB here around 2 years back, and we are still only seeing reports like the above. Although I said earlier that a big problem with the banking world was that the attacker can spin inside your OODA loop, it would appear that he does not take on every attack.
See above for some limits: the attacker is finding and pursuing the attacks that are easiest, first. Is this finally the evidence that cryptographers cannot ignore? Crypto alone has proven not to work. It may be theoretically strong, but it is practically brittle, and easily bypassed. A more balanced, risk-based approach is needed. An approach that uses a lot less crypto, and a lot more engineering and user understanding, would be far more efficacious in delivering what users need.
In software engineering, it is important to remember the principle of redundancy. Not because it is useful for software, but because it is useful for humans.
Human beings work continuously with redundancy because most human information processing is soft and fuzzy. How does a system deal with soft, fuzzy results? It takes readings from different sources, as little correlated as possible, and compares them. If three readings from independent sources all suggest the same conclusion, then we are good. If 2 out of 3 say good, then the human brain says "take care," and if only 1 out of 3 is good, then it is discarded.
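As a toy sketch of that rule (the thresholds are mine, not from any standard):

    def fuse(readings):
        """Combine soft, weakly correlated good/bad readings the way the
        text describes: all good is good, 2 of 3 is 'take care', less is out."""
        good = sum(readings)
        if good == len(readings):
            return "accept"
        if good >= 2:
            return "take care"
        return "discard"

    # e.g. key matched?, expected auth mode?, fingerprint verified?
    print(fuse([True, True, False]))   # -> take care
    print(fuse([True, False, False]))  # -> discard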
In comments on the last post, Peter G explored the direct question of whether anyone checked the fingerprint of the SSH server:
I tried to get some data a while back on SSH key checking in response to SSH fuzzy fingerprints (if you're not familiar with fuzzy fingerprints, they create a close-enough fingerprint to the actual target to pass muster in most cases). Because human-subject experimentation requires a lot of paperwork and scrutiny, I thought I'd first try and establish a base rate for SSH fingerprint checking in general. In other words if you set up a new server with a totally different key from the current one, how many people will be deterred?
So I tried to establish the fingerprint-check rate in a population of maybe a few thousand users.
It was zero.
No-one had ever performed an out-of-band check of an SSH fingerprint when it changed.
Given a base rate of zero, I didn't consider it worthwhile doing the fuzzy fingerprint check :-).
What is going on here? Three things. For some reason that has never been explained, SSH has never made it easy to check the fingerprint. Like OpenPGP to some extent, the fingerprints have been delivered in incompatible formats across different channels. E.g., my known_hosts file says that a server I know is AAAAB3N.... and the only way to easily see the fingerprint is to simulate a compromise by clearing the cache.
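To see how disconnected the formats are: the AAAAB3N... text in known_hosts is just the base64 of the raw key blob, and the fingerprint in SSH's warnings is (in this era of SSH, at least) the MD5 of that decoded blob, printed as colon-separated hex. A small Python sketch of the conversion (the example line is hypothetical, and a full untruncated blob is needed):

    import base64, hashlib

    def md5_fingerprint(known_hosts_line):
        """known_hosts format: hostname key-type base64-blob.
        The classic fingerprint is MD5 over the decoded blob."""
        blob = base64.b64decode(known_hosts_line.split()[2])
        hexdigest = hashlib.md5(blob).hexdigest()
        return ":".join(hexdigest[i:i+2] for i in range(0, 32, 2))

    # line = "some.where.example.net ssh-rsa AAAAB3N..."  (full blob required)
    # print(md5_fingerprint(line))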
Secondly, and the point of this post: the fingerprint is only one of several data points being displayed. In the last post, I talked about two other pieces of data: one was the failure of server-key matching, and the other was the fallback to password-request.
The key lesson here is that SSH delivers enough information to do the job: it isn't the fingerprint per se, but the whole package of fingerprint, server-key matching, and precise mode.
Three data points, albeit rather poorly presented. Which brings us to another point: In practice, this is only good enough in rare and experienced situations. That breach was only picked up because of the circumstances and a good dose of luck.
This leads us to conclude that SSH is only just good enough, sometimes. Why? It's circular: because it has done the job well enough all these years, the security model has not been much improved beyond the theoretical concept that knocked the theoretical MITM on the head. The third thing is then the lack of attacks -- now, however, circumstances are changing, and improvements should take place. (Indeed, if you have a longer perspective, you'll notice that the distros of SSH have been upgrading the security model over the last few years.)
But, the important upgrades do not want to be in forcing the fingerprint down the throats of the user. Instead, they want to be in the area of redundancy: more uncorrelated soft and fuzzy signals to the user, that work with the brain, not with the old 1990s computer textbooks.
Hence, and to complete the response to Peter G, this is why the PKI apologists are looking in the wrong area:
So there is a form of data available, but because it's not very interesting it'll never be written up in a conference paper (there's a longer discussion of fuzzy fingerprints and related stuff in my perpetual-work-in-progress http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf). I've seen this type of authentication referred to as leap-of-faith authentication in a few recent usability papers, and that seems to be a reasonable name for it. That's not saying it's a bad thing, just that you have to know what it is you're getting for your money.
Yeah, we can all see "leap of faith" as a sort of mental trick to avoid really examining why SSH works and why PKI doesn't or didn't. That is, "oh, but because you make this 'leap of faith' you're not really secure, according to our models. So I don't have to think any more about it."
The real issue here is again, it worked, enough, for the job. Now the SSH people will think more, and upgrade it, because it is being attacked. I hope, at least!
The PKI people cannot say that. What they can say is "use TLS/SRP" or some other similar RFC-acrophiliac verbiage which doesn't translate to anything a user can eat or drink. Hence, the simple answer is, "come back when I can use it."
For a decade now, SSH has successfully employed a simple opportunistic protection model that solved the shared-key problem. The premise is quite simple: use the information that the user probably knows. It does this by caching keys on first sight, and watching for unexpected changes. This was originally intended to address the theoretical weakness of public key cryptography called MITM or man-in-the-middle.
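The whole model fits in a few lines; here is a hedged Python sketch (cache file and format invented, and real SSH does considerably more):

    import hashlib, json, os

    CACHE = os.path.expanduser("~/.toy_known_hosts.json")  # invented cache file

    def check_host_key(host, key_blob):
        """Cache a server's key on first sight; squawk if it later changes.
        The 'leap of faith' is the first branch; the MITM alarm is the last."""
        known = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
        fp = hashlib.sha256(key_blob).hexdigest()
        if host not in known:
            known[host] = fp
            with open(CACHE, "w") as f:
                json.dump(known, f)
            return "first contact: key cached"
        if known[host] == fp:
            return "key matches cache"
        return "WARNING: key changed since last visit -- possible MITM"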
Critics of the SSH model, a.k.a. apologists for the PKI model of the Trusted Third Party (certificate authority), have always pointed out that this simply leaves SSH open to a first-time MITM. That is, when some key changes or you first go to a server, it is "unknown" and therefore has to be established with a "leap of faith."
The SSH defenders claim that we know much more about the other machine, so we know when the key is supposed to change. Therefore, it isn't so much a leap of faith as educated risk-taking. To which the critics respond that we all suffer from click-thru syndrome and we never read those messages, anyway.
Etc etc, you can see that this argument goes round and round, and will never be solved until we get some data. So far, the data is almost universally against the TTP model (recall phishing, which the high priests of the PKI have not addressed to any serious extent that I've ever seen). About a year or two back, attack attention started on SSH, and so far it has withstood difficulties with no major or widespread results. So much so that we hear very little about it, in contrast to phishing, which is now a 4 year flood of grief.
After which preamble, I can now report that I have a data point on an attack on SSH! As this is fairly rare, I'm going to report it in fullness, in case it helps. Here goes:
Yesterday, I ssh'd to a machine, and it said:
zhukov$ ssh some.where.example.net
WARNING: RSA key found for host in .ssh/known_hosts:18
RSA key fingerprint 05:a4:c2:cf:32:cc:e8:4d:86:27:b7:01:9a:9c:02:0f.
The authenticity of host can't be established
but keys of different type are already known for this host.
DSA key fingerprint is 61:43:9e:1f:ae:24:41:99:b5:0c:3f:e2:43:cd:bc:83.
Are you sure you want to continue connecting (yes/no)?
OK, so I am supposed to know what was going on with that machine, and it was being rebuilt, but I really did not expect SSH to be affected. The ganglia twitch! I asked the sysadm, and he said no, it wasn't him. Hmmm... mighty suspicious.
I accepted the key and carried on. Does this prove that click-through syndrome is really an irresistible temptation and the Achilles heel of SSH, and that even the experienced user will fall for it? Not quite. Firstly, we don't really have a choice as sysadms; we have to get in there, compromise or no compromise, and see. Secondly, it is OK to compromise as long as we know it, we assess the risks and we take them. I deliberately chose to go ahead in this case, so it is fair to say that I was warned, and the SSH security model did all that was asked of it.
Key accepted (yes), and onwards! It immediately came back and asked for my password.
Now the ganglia are doing a ninja turtle act and I'm feeling very strange indeed: the thought of being the victim of an actual real live MITM is doubly delicious, as it is supposed to be as unlikely as dying from shark bite. SSH is not supposed to fall back to passwords; it is supposed to use the keys that were set up earlier. At this point, for some emotional reason I can't further divine, I decided to treat this as a compromise and asked my mate to change my password. He did that, and then I logged in.
Then we checked. Lo and behold, SSH had been reinstalled completely, and a little bit of investigation revealed what the warped daemon was up to: password harvesting. And, I had a compromised fresh password, whereas my sysadm mates had their real passwords compromised:
$ cat /dev/saux
foo@...208 (aendermich) [Fri Feb 15 2008 14:56:05 +0100]
iang@...152 (changeme!) [Fri Feb 15 2008 15:01:11 +0100]
nuss@...208 (43Er5z7) [Fri Feb 15 2008 16:10:34 +0100]
iang@...113 (kash@zza75) [Fri Feb 15 2008 16:23:15 +0100]
iang@...113 (kash@zza75) [Fri Feb 15 2008 16:35:59 +0100]
$
The attacker had replaced the SSH daemon with one that insisted that the users type in their passwords. Luckily, we caught it with only one or two compromises.
In sum, the SSH security model did its job. This time! The fallback to server-key re-acceptance triggered sufficient suspicion, and the fallback to passwords gave confirmation.
As a single data point, it's not easy to extrapolate but we can point at which direction it is heading:
In terms of our principles, we can then underscore the following:
All in all, SSH did a good job. Which still leaves us with the rather traumatic job of cleaning up a machine with 3-4 years of crappy PHP applications ... but that's another story.
Skype is the darling child of cryptoplumbers, the application that got everything right, could withstand the scrutiny of the open investigators, and looked like it was designed well. It also did something useful, and had a huge market, putting it head and shoulders above any other crypto application, ever.
Storms are gathering on the horizon. Last year we saw stories that Skype in China was shipping with intercept plugins. 3 months ago I was told by someone who was non-technical that the German government was intercepting Skype. Research proved her wrong ... and now leaks are proving her right: Slashdot reports on leaked German memos:
James Hardine writes "Wikileaks has released documents from the German police revealing Skype interception technology. The leaks are currently creating a storm in the German press. The first document is a communication by the Ministry of Justice to the prosecutors office, about the cost splitting for Skype interception. The second document presents the offer made by Digitask, the German company secretly developing Skype interception, and holds information on pricing and license model, high-level technology descriptions and other detail. The document is of global importance because Skype is used by tens or hundreds of millions of people daily to communicate voice calls and Skype (owned by Ebay, Inc) promotes these calls as being encrypted and secure. The technology includes interception boxes, key forwarding trojans and anonymous proxies to hide police communications."
Is Skype broken? Let's dig deeper:
[The document] continues to introduce the so-called Skype Capture Unit. In a nutshell: a malware installed on purpose on a target machine, intercepting Skype Voice and Chat. Another feature introduced is a recording proxy, that is not part of the offer, yet would allow for anonymous proxying of recorded information to a target recording station. Access to the recording station is possible via a multimedia streaming client, supposedly offering real-time interception.
Nope. It's the same old bug: pervert your PC and the enemy has the same power as you. Always remember: the threat is on the node, the wire is safe.
In this case, Mallory is in the room with you, and Skype can't do a darn thing about it, given that it borrows the display, keyboard, mike and speaker from the operating system. The forthrightness of the proposal and the parties to the negotiations are compelling evidence that (a) the police want to infect your PC, and (b) infecting your PC is their preferred mechanism. So we can conclude that Skype itself is not efficiently broken as yet, while Microsoft Windows is, or more accurately remains, broken (the trojan/malware is made for the market-leading Microsoft Windows XP and 2000 only, not the market-following Linux/MacOSX/BSD/Unix family, nor the market-challenging Vista).
No change, then. For Skype, the dream run has not ended, but it has crossed into that area where it has to deal with actual targetted hacks and attacks. Again, no news, and either way, it remains the best option for us, the ordinary people. Unlike other security systems:
Another part of the offer is an interception method for SSL based communication, working on the same principle of establishing a man-in-the-middle attack on the key material on the client machine. According to the offer this method is working for Internet Explorer and Firefox webbrowsers. Digitask also recommends using over-seas proxy servers to cover the tracks of all activities going on.
MITB! Now, normally we make a distinction between demos, security gossip, rumours and other false signals ... but the offer of actual technology by a supplier, with a hard price, to a governmental intercept agency indicates an advanced state of affairs:
The licensing model presented here relates to instances of installations per month for a minimum of three months. Each installation of the Skype Capture Unit will cost EUR 3500, SSL interception is priced at EUR 2500. A one-time installation fee of EUR 2500 is not further explained. The minimum cost for any installation on a suspect computer for a comprehensive interception of both SSL and Skype will be EUR 20500, if no more than one one-time installation fee is required.
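(The arithmetic: three months at EUR 3500 + EUR 2500 per month is EUR 18000, plus the one-time EUR 2500 fee, giving EUR 20500.)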
This is the first hard evidence of professional browser-interference of SSL website access. Rumours of this practice have been around since 2004 or so, from commercial attacks, but nobody dared comment (apparently NDAs are stronger than crimes in the US of A).
What reliable conclusion can we draw?
Less reliably, we can suggest:
Of course the governance issue remains. The curse of governance says that power will be used for bad. When the good guys can do it, then presumably the bad guys can do it as well, and who's to say the good guys are always good? People who have lots of money should worry, because the propensity for well-budgeted but poorly paid security police in 1st-world countries to manipulate their pensions upwards is unfortunately very real. Get a Mac, guys, you can afford it.
In reality, it simply doesn't matter who is doing it: the picture is so murky that the threat level remains the same to you, the user: you now need to protect your PC against injection of trojans for the purpose of attacking your private information directly.
Final questions: how many intercepts are they doing and planning, and did the German government set up a cost-sharing for payoffs to the anti-virus companies?
Still reeling at the shock of that question, it feels like time to introduce another hypothesis:
#4.3 Simplicity is Inversely Proportional to the Number of Designers
Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has. -- Margaret Mead
Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?
Sad but true, if you look at the classical best of breed protocols like SSH and PGP, they delivered their best work when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, models, digital signature laws and accountants that sent it into orbit around Pluto.
Sometimes a protocol can survive a team of two, but we are taking huge risks (remember, the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly or kill him; your users will thank you for either. They do not benefit if you are locked in a deadly embrace over the pernickety benefits of MAC-then-encrypt over encrypt-then-MAC.
It should be clear by now that committees are totally out of the question. They are like whirlpools, great spiralling sinks of talent, so paddle as fast as possible in the other direction. On the other hand, if you are having trouble shrinking your team or agreeing with them, a committee over yonder can be useful as a face-saving idea. Point them in the direction of the whirlpool, give them a nudge, and then get back to work.
Over at mozo, Jonath asks the most surprising question:
My second question is this: as members of the Mozilla community, is this an effort that you want me (or people like me) participating in, and helping drive to final publication?
Absolutely not, on several grounds. Here's some reasons, off the top of my head.
Committees can't make security, full stop. Committees can write standards shaped like millstones around the neck, though.
Standards are *not* indicated (in the medical sense) for UI, because the user is *not* a computer and does not and cannot follow precise rules like protocols.
UI and security, together, probably requires skills that are not available, easily, to your committee. Branding doesn't sit well with coding, architecture can't talk to lawyers. Nobody knows what a right is, and the number of people who can bring crypto to applications is so small that you won't find them in your committee.
Security UI itself is an open research area, not an understood discipline that needs further explanation. Standards are indicated if you want to kill research, and move to promulgation of agreed dogma. This is sometimes useful, but only when the problems are solved; which is not indicated with phishing, now, is it?
Although I have my difficulties with some of the research done, if you take away the community's ability to research ("that's not standard!"), you've got nothing to tell you what to do, and the enemy has locked you down to a static position. (Static defence stopped working around the time of the invention of the cannon, so we are looking at quite some history here...)
Anybody got any other reasons? Is there a positive reason here, anywhere?
The UK data breach a month or two back counted another victim: one Jeremy Clarkson. The celebrated British "motormouth" thought that nobody should really worry about the loss of the disks, because all the data is widely available anyway. To stress this to the island of nervous nellies, he posted his bank details in the newspaper.
Back in November, the Government lost two computer discs containing half the population's bank details. Everyone worked themselves into a right old lather about the mistake but I argued we should all calm down because the details in question are to be found on every cheque we hand out every day to every Tom, Dick and cash and carry.
Unfortunately, some enterprising scammer decided to take him to task and signed him up for a contribution to a good charity. (Well, I suppose it's good; all charities and non-profits are good, right?) Now he writes:
I opened my bank statement this morning to find out that someone has set up a direct debit which automatically takes £500 from my account. I was wrong and I have been punished for my mistake.
Contrary to what I said at the time, we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy.
What can we conclude from this data point of one victim? Lots, as it happens.
And, yes, he was wrong to stick his neck out and say the truth.
b. because he asked them not to reverse the transaction, as now he gets an opportunity to write another column. Cheap press.
Hat-tip to JP! And, I've just noticed DigitalMoney's contribution for another take!
One of the enduring threads that has been prevalent on this blog but not other places is that the problem starts with ourselves. Without considering our own mistakes, our own frauds, indeed, our own history, it is impossible to understand the way security, FC, and the Internet are going.
Compelling evidence is presented over at LightBlueTouchpaper. Not that their Wordpress blog was hacked (there but for the grace of God, etc etc), but where Richard Clayton asks why the Government rejected all the recommendations of the House of Lords report of a while back. Echoed over at Ianb's blog, and probably throughout the entire British IT and security industry. Why?
Richard searches for an answer: Stupidity? Vested Interests? (On the way, he presents more evidence about how secrecy of big companies is part of the problem, not part of the solution, but that's a distraction.)
We have good news: the lack of reflective thought is slowly diminishing. Over the last month I've seen an upsurge of comments: 1Raindrop's Gunnar Peterson says "One of the sacred cows that need to be gored is the notion that we in the People's Republic of IT Security have it all figured. We don't." Elsewhere Gunnar says "in many cases, they are spending $10 to protect something worth $5, and in other cases they are spending a nickel to protect something worth $1,000."
Microsoft knows but isn't saying: Vista fails to clear up the security mess. Which means that they spent the last 5 years and got ... precisely nowhere. Forget the claim that Vista bombed in the security department ("short M$ ! buy Apple!") and consider the big picture: if Microsoft can throw their entire company at the issue of security, and fail, what hope the rest?
Chandler (again) points to the Inquirer:
Whose interests are really threatened by cybercrime? Well, certainly not the software makers, the chip makers, the hard disk makers, the mouse makers, and least of all the virus busters and security firms which daily release news of the latest “vulnerabilities” plaguing the web.
No, the victims are the poor users. Not that they’re likely to have their identity stolen or their bank account plundered or their data erased by some malicious bot or other. The chances of that happening are millions to one.
No, what they are forced to do is continually fork out for spam-busting protection, for “secure” operating systems, for funky firewalls, malware detectors or phish-sniffing software. All this junk clogs up their spanking new PC so that they continually have to upgrade to newer chippery clever enough to have a processing core dedicated to each of the bloatsome security routines keeping them safe while they surf.
It’s a con, gentlemen. A big fat con.
No one has a business interest in catching identity thieves or malware writers. There’s no money in it, so no-one’s bothered.
... But the *discussion* on security seems to never get down to real numbers. So the difference between them is simple: [scheduling] is "hard science". The other one is "people wanking around with their opinions".
Which rudeness strangely echoes the comment in 2004 or so by a leading security expert who stopped selling for a microsecond and was briefly honest about the profession.
When I drill down on the many pontifications made by computer security and cryptography experts all I find is given wisdom. Maybe the reason that folks roll their own is because as far as they can see that's what everyone does. Roll your own then whip out your dick and start swinging around just like the experts.
I only mention it because that dates my thinking on this issue. As I say, I've seen an upsurge in this over the last few months so I can predict that around now is the time that the IT security sector realises that not only do they not have a solution to security, they don't know how to create a solution for security, and even if they accidentally found one, nobody would listen to them anyway.
If you have followed this far, then you can now see why the UK Government can happily ignore the Lords' recommendations: because they came from the security industry, and that's one industry that has empirically proven that their views are not worth listening to. Welcome to Pogo's swamp.
I didn't spot it when Peter Gutmann called it the world's biggest supercomputer (I thought he was talking about a game or something ...). Now John Robb pointed to Bruce Schneier who has just published a summary. Here's my paraphrasing:
Bruce Schneier reports that the anti-virus companies are pretty much powerless, and runs through a series of possible defences. I can think of a few too, and I'm sure you can as well. No doubt the world's security experts (cough) will spend a lot of time on this question.
But, step back. Look at the big picture. We've seen all these things before. Those serious architects in our world (you know who you are) have even built these systems before.
But: we've never seen the combination of these tactics in an attack.
This speaks to a new level of sophistication in the enemy. In the past, all the elements were basic. Better than script kiddie, but in that area. What we had was industrialisation of the phishing industry, a few years back, which spoke to an increasing level of capital and management involved.
Now we have some serious architects involved. This is in line with the great successes of computer science: Unix, the Internet, Skype all achieved this level of sophistication in engineering, with real results. I tried with Ricardo, Lynn&Anne tried with x9.59. Others as well, like James and the Digicash crew. Mojo, Bittorrent and the p2p crowd tried it too.
So we have a new result: the enemy now has architects as good as our best.
As a side-issue, well predicted, we can also see the efforts of the less-well architected groups shown for what they are. Takedown is the best strategy that the security-shy banks have against phishing, and that's pretty much a dead duck against the above enemy. (Banks with security goals have moved to SMS authentication of transactions, sometimes known as two channel, and that will still work.)
But that's a mere throwaway for the users. Back to the atomic discussion of architecture. This is an awesome result. In warfare, one of the dictums is, "know yourself and win half your battles. Know your enemy and win 99 of 100 battles."
For the first time in Internet history, we now have a situation where the enemy knows us, and is equal to our best. Plus, he's got the capital and infrastructure to build the best tools against us.
Where are we? If the takedown philosophy is any data point, we might know ourselves, but we know little about the enemy. And even if we know ourselves, we don't know our weaknesses, and our strengths are useless.
What's to be done? Bruce Schneier said:
Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest.
As I suggested in last year's roundup, we were approaching this decision. Start re-writing, Microsoft. For sake of fairness, I'd expect that Linux and Apple will have a smaller version of the same problem, as the 1970s design of Unix is also a bit out-dated for this job.
Here's the defence and counter-attack. This blog has repeatedly railed against the mostly-worthless courses and certifications that are sold to those "who must have a piece of paper." The MBA also gets that big black mark, as it is, at the end of the day, a piece of paper. Saso said in comments:
In short, I agree, CISO should have an MBA. For its networking value, not anything else.
Cynical, but there is an element of wisdom there. MBAs are frequently sold on the benefits of networking. In contrast to Saso, I suggest that the benefits of networking are highly over-rated, especially if you take the cost of the MBA and put it alongside the lost opportunity of other networking opportunities. But people indeed flock to pay the entrance price to that club, and if so, maybe it is fair to take their money; better a b-school than SANS? Nothing we can do about the mob.
Jens suggests that the other more topical courses simply be modified:
From what I see out there when looking at the arising generation of CSO's the typical education is a university study to get a Master of Science in the field of applied IT security. Doesn't sound too bad until we look into the topics: that's about 80% cryptography, 10% OS security, 5% legal issues and 5% rest.
Well, that's stuffed up, then. In my experience, I have found I can teach anyone outside the core crypto area everything they need to know about cryptography in around 20 minutes (secret keys, public keys, hashes -- what else is there?), so why are budding CSOs losing 80% of their course on crypto? Jens suggests reducing it by 10%; I would question why it should ever rise above 5%.
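To make the point concrete, here is roughly what those 20 minutes boil down to, as a minimal Python sketch. It assumes the third-party cryptography package and is illustrative only, not a recommendation of any particular algorithm or padding:

    import hashlib, hmac
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    msg = b"pay 100 to Alice"

    # 1. Hashes: a fixed-size fingerprint of a message.
    digest = hashlib.sha256(msg).hexdigest()

    # 2. Secret keys: a MAC proves the message came from a key holder.
    mac = hmac.new(b"shared-secret", msg, hashlib.sha256).hexdigest()

    # 3. Public keys: sign with the private key, verify with the public key.
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sig = priv.sign(msg, padding.PKCS1v15(), hashes.SHA256())
    priv.public_key().verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())  # raises if invalid

    print(digest, mac, len(sig))

That really is most of the toolbox; everything else is plumbing, protocol, and (the hard part) knowing when not to use any of it.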
Does the MBA suffer from similar internal imbalance? I say not, for one reason: it is subject to open competition. There is always lots of debate that one side is more balanced than others, and there is a lot of open experimentation in this, as all the schools look at each other's developments in curricula. There are all sorts of variations tuned to different ideas.
One criticism that was particularly noticeable in mine was that they only spent around 2 days in negotiation, and spent more than that on relatively worthless IT cases. That may be just me, but it is worth noting that b-schools will continue to improve (whereas there is no noticeable improvement from the security side). Adam Shostack spots Chris Hoff who spots HBR on a (non-real) breach case:
I read the Harvard Business Review frequently and find that the quality of writing and insight it provides is excellent. This month's (September 2007) edition is no exception as it features a timely data breach case study written by Eric McNulty titled "Boss, I Think Someone Stole Our Customer Data."
The format of the HBR case studies is well framed because they ultimately ask you, the reader, to conclude what you would do in the situation and provide many -- often diametrically opposed -- opinions from industry experts.
What I liked about the article are the classic quote gems that highlight the absolute temporal absurdity of PCI compliance and the false sense of security it provides to the management of companies -- especially in response to a breach.
What then is Harvard suggesting that is so radical? The case does no more than document a story about a breach and show how management wakes up to the internal failure. Here's a tiny snippet from Chris's larger selection:
Sergei reported finding a hole—a disabled firewall that was supposed to be part of the wireless inventory-control system, which used real-time data from each transaction to trigger replenishment from the distribution center and automate reorders from suppliers.
“How did the firewall get down in the first place?” Laurie snapped.
“Impossible to say,” said Sergei resolutely. “It could have been deliberate or accidental. The system is relatively new, so we’ve had things turned off and on at various times as we’ve worked out the bugs. It was crashing a lot for a while. Firewalls can often be problematic.”
Chris Hoff suggests that the managers go through classic disaster-psychological trauma patterns, but instead I see it as more evidence that the CISO needs an MBA, because the technical and security departments spun out of corporate orbit so long ago nobody can navigate them. Chris, think of it this way: the MBAs are coming to you, and the next generation of them will be able to avoid the grief phase, because of the work done in b-school.
Lynn suggests that it isn't just security, it isn't just CSOs, and is more of a blight than a scratch:
note that there have been efforts that aren't particularly CSO-related ... just techies ... in relatively the same time frame as the disastrous card reader deployments ... there were also some other magnificently disastrous security attempts in portions of the financial market segment.
My thesis is that the CSO needs to communicate upwards, downwards, sideways, and around corners. Not only externally but internally, so command of both sides is needed. As Lynn suggests, if you have a bunch of people without leadership, they'll suggest that smart cards are the secure answer to everything from Disney to Terrorism. And they'll be believed.
The question posed by Lynn is a simple one: why do the techies not see it?
The answer I saw in banking and smart card monies, to continue in Lynn's context, was two-fold. Firstly, nobody there was counting the costs. Everyone in the smart card industry was focussed on how cheap the smart card was, but not the full costs. Everybody believed the salesmen. Nobody in the banks thought to ask what the costs of the readers were (around 10-100 times that of the card itself...) or the other infrastructure needed, but banks aren't noted for their wisdom, which brings us to the second point.
Secondly, it was pretty clear that although the bank knew a little bit about transactions, they knew next to nothing about what happened outside their branch doors. Getting into smart card money meant they were now into retail as opposed to transactions. In business terms, think of this as similar to a supermarket becoming a bank. Or v.v. That's too high a price to pay for the supposed security that is entailed in the smart card. Although Walmart will look at this question differently, banks apparently don't have that ability.
It is impossible to predict whether your average MBA would spot these things, but I will say this: they would be pass/fail questions in my course, and there is nothing else on the planet the boss could do that would spot them. Which you can't say for the combined other certifications, which would apparently certify your CSO to spot the difference between 128-bit and 1024-bit encryption ... but sod all of importance.
Perhaps the most worrying indicator is that the criminal industry for information is growing. I can go to MacArthur Park in Los Angeles any day of the week and get $50 in exchange for a name, social security number, and date of birth. If I bring a longer list of names and details, I walk away a wealthy man.
$50 for ID sets seems very high in these days of phishing, but that may be the price for hand-to-hand delivery and a guarantee of quality. Either way, a data point.
What also strikes is the mention of a physical marketplace. Definitely a novelty! The only thing I know about that place is that it melts in the dark, but it could be that MacArthur Park is simply too large to shut down.
Financial Cryptographers are interested in war because it is one of the few sectors where we face an aggressive attacker (as opposed to a statistical failure model). For this reason, current affairs like Iraq and Afghanistan are interesting, aside from their political aspects (September is crunch time!).
John Robb points to an interview with Lt. Col. John Nagl on how the New Turks in Iraq (more formally known as General Petraeus and his team) have written a new manual for the theater, known as the Counterinsurgency Field Manual.
We last had a counter guerrilla manual in 1987 but as an army we really avoided counterinsurgency in the wake of Vietnam because we didn't want to fight that kind of war again. Unfortunately the enemy has a vote. And our very conventional superiority in war-fighting is driving our enemy to fight us as insurgents and as guerrillas rather than the kind of war we are most prepared to fight, which is conventional tank-on-tank type of fighting.
You still have to be able to do the fighting. A friend of mine, when he found out I was writing [the book], wrote to me from Iraq and said "remember, Nagl, counterinsurgency is not just the thinking man's war ... It's the graduate level of war."
Because you still have to be able to do the war fighting stuff. When I was in [Iraq] I called in artillery strikes and air strikes, did the fighting stuff. But I also spent a lot of time meeting with local political leaders, establishing local government, working on economic development.
You really have to span the whole spectrum of human behavior. We had cultural anthropologists helping on the book, economists, information operation specialists. It's a very difficult type of war, it's a thinking person's kind of war. And it's a kind of war we are learning and adapting and getting better at fighting during the course of the wars in Iraq and Afghanistan.
I copied those parts in from the interview because they stress what we see in FC, but check out the whole interview, as it is refreshing. Here are the parallels:
The parallels with today's Internet situation seem pretty clear. How long do we go on fighting the attackers before the New Turks come in and address the battle from a holistic, systemic viewpoint?
Jonath over at Mozilla takes up the flame and publishes lots of stats on the current state of SSL, phishing and other defences. Headline issues:
I hope he keeps it up, as it will save this blog from having to do it, as we have for many years :) The connection between SSL and phishing can't be overstressed, and it's welcome to see Mozilla take up that case. (Did I forget to mention TLS/SNI in Apache and Microsoft? Shame on me....)
Jonath concludes with this odd remark:
If I may be permitted one iota of conclusion-drawing from this otherwise narrative-free post, I would submit this: our users, though they may be confused, have an almost shocking confidence in their browsers. We owe it to them to maintain and improve upon that, but we should take some solace from the fact that the sites which play fast and loose with security, not the browsers that act as messengers of that fact, really are the ones that catch the blame.
You, like me, may have read that too quickly, and thought that he suggests that the web sites are to blame, with their expired certs, fast and loose security, etc.
But, he didn't say that; he simply said those are the ones that *are* blamed. And that's true: there are lots and lots of warnings out there, like the campaigns to drop SSL v2 and to stop sites doing phishing training, and other things ... The sites certainly catch the blame, that's definitely true.
But, who really *deserves* the blame? According to the last table in Jonath's post, the users don't really blame the site as much as might be expected: 24%. More are unsure and thus wise, I say: 32%. And yet more imagine an actual attack taking place: 40%.
That leaves 4% who suspect a "glitch" in the browser itself. Surely a lonely little group, and I wonder if they misunderstood what a "glitch" is... What is a "glitch," anyway, and how did it get into their browsers?
Tonight, we have bad news and worse news. The bad news is that the node is yet again the scene of imminent collapse of the Internet as we know it. The worse news is that the fix that could have fixed it ... is still not deployed. The no-news is that we warned about this years ago. It's still not done.
Dan Kaminsky, a hacker of some infamy and humour, gave a son-of-black-hat talk on DNS rebinding. What this means is that when you go to a site that has a malicious applet or Flash or something, your node (that's your PC, desktop, laptop, etc.) becomes controlled, and is used for attacks on other nodes.
Yes, Firefox and IE are both victims.
The DNS details were scary and voluminous but rest on a basically sound claim: DNS isn't secure, and that we know. It is possible to hand the requestor all sorts of interesting information, and that interesting information can then be used to trick through firewalls, IDS, etc., and compromise the machine, because it is "authoritative" and it comes from the right machine -- your machine, your soon-to-be-owned machine.
Curiously, the object of Dan's project is much more grandiose than that. He's looking to do a bunch of weird measurements on TCP, using this DNS rebinding to bootstrap in and take over the TCP connection. Yeah, MITM, if you spotted the clue.
(I'll repeat that, Dan claims to be doing MITMs on TCP!)
To summarise the raw claims (again, given my limited understanding):
Fixes: No easy fixes, just temporary patches. DNS is operating as normal; it never was secure anyway, and the modes and tricks used are essential for a whole lot of stuff. Likewise Flash, etc., which seems to have no more security than Windows did in 2002, isn't going to be fixed fully any time soon. (Dan mentioned he is waiting on Adobe to fix something before full disclosure, but that runs out as someone else is about to publish the same results.)
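For flavour, one of the temporary patches is usually some form of DNS pinning: resolve a name once, sanity-check the answer, and refuse to follow the name to a new address mid-session, so a later short-TTL answer cannot "rebind" the name onto an internal target. A minimal sketch in Python, purely illustrative; the function name and policy are mine, not Dan's:

    import ipaddress, socket

    def pinned_connect(host, port):
        ip = socket.gethostbyname(host)          # resolve exactly once
        addr = ipaddress.ip_address(ip)
        if addr.is_private or addr.is_loopback:  # refuse rebinds into the LAN
            raise ValueError("refusing to connect to internal address " + ip)
        # pin: connect by IP, so a later DNS answer cannot redirect us
        return socket.create_connection((ip, port))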
Here's the old worse news: Dan stated, as near as I can recall it:
"TLS solves the problem completely, but TLS does not scale to the net, because it does not indicate the host name. This puts more of an indictment on the standards process and on TLS than anything else, we've had TLS for years now, and we still can't share virtual hosts on TLS."
Yes, it's our old friend TLS/SNI (ServerNameIndication), a leprechaun-like extension that converts TLS from a marginal marketing differentiator at ISPs into a generally deployable solution.
SNI is available in Firefox 2.0 and later. Thanks to Mozilla, who actually started a project on this called "SSL v2 MUST DIE" because of SSL v2's ability to help in phishing (same logic, same fix, same sad sorry story). It is fixed in IE7, but only on Vista, so that's short thanks to Microsoft. Opera's got it, and they might have been the first.
TLS/SNI is not available in Apache's httpd nor in Microsoft's IIS.
Indeed, I tried last year to contact the httpd team and plead for it. Nothing; it's as if they don't exist. Mozilla were even prepared to pay these guys to fly & fix, but we couldn't find anyone to talk to. Worse, they have had the code for yonks. The code builds, at least the gnutls version. The code works.
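For the curious, whether a server honours SNI is easy enough to check from the client side; recent Pythons send the extension when you pass server_hostname. A quick sketch, with example.com standing in for a real virtual host:

    import socket, ssl

    ctx = ssl.create_default_context()
    # server_hostname both triggers the SNI extension and checks the cert name
    with ctx.wrap_socket(socket.create_connection(("example.com", 443)),
                         server_hostname="example.com") as s:
        print(s.getpeercert()["subject"])  # which certificate did the server pick?

Run it against two names sharing one IP; a server without SNI support can only ever hand back its default certificate.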
I got proof that Microsoft's team exists, and that they also have no plans to secure the Internet this year.
Shame, the very shame! Apache's httpd team and Microsoft's IIS team are now going to watch 3 years of pain and suffering, for the want of a little fix that is so old that nobody can recall when it was added to the standard.
(OK, that's the last I heard. There might be updates in the shame. You understand that it isn't my day job to get you to save the net. It is however your day job, Microsoft team, and night job, Apache team, and you can point out current progress in the comments or on your blog or in the very code, itself. Please, we beg you. Save our net.)
The Washington Post, in the person of Susan Landau, lays out in clearer terms where USA cyber-defence is heading:
The immediate problem is fiber optics. Until recently, telecommunication signals came through the air. The NSA used satellites and antennas to pick up conversations of foreigners talking to other foreigners. Modern communications, however, use fiber; since conversations don't go through the air, the NSA wants to access communications at land-based switches.
Because communications from around the world often go through the United States, the government can still get access to much of the information it seeks. But wiretapping within the United States has required a FISA search warrant, and the NSA apparently found using FISA too time-consuming, even though emergency access was permitted as long as a warrant was applied for and granted within 72 hours of surveillance.
Avoiding warrants for these cases sounds simple, though potentially invasive of Americans' civil liberties. Most calls outside the country involve foreigners talking to foreigners. Most communications within the country are constitutionally protected -- U.S. "persons" talking to U.S. "persons." To avoid wiretapping every communication, NSA will need to build massive automatic surveillance capabilities into telephone switches. Here things get tricky: Once such infrastructure is in place, others could use it to intercept communications.
Grant the NSA what it wants, and within 10 years the United States will be vulnerable to attacks from hackers across the globe, as well as the militaries of China, Russia and other nations.
Landau chooses the evil foreign hacker as her bogeyman. This is understandable if we recall who her audience is. The threat however is closer to home; to paraphrase Pogo, Americans have not yet met their enemy, but he is us.
A basic assumption of security was that the NSA was historically no threat to any person, only to nations. Intel info was closely guarded, and breaches of that info were national security breaches; we the people were far better protected by the battle against foreign spies than by anything else. No chinese wall, then; more a great wall-of-china around secret tracts of Maryland.
Now that wall-of-china has been torn down and replaced by trade-grade chinese walls. Breaching chinese walls simply requires the right contact, the right excuse, the right story. Since 9/11, intel info and trade info are all one and the same, and it is now reasonable, expected even, that hundreds of thousands of new readers of data can trawl through criminal databases, intel summaries, background reports and the like.
For illumination, see the SWIFT battle. The problem wasn't that the NSA was reading the SWIFT traffic, it had been doing that for decades. The problem was the burgeoning spread of the data, as highlighted by events: the Department of Justice now felt it reasonable, indeed required, to get in on the act. Pundits will argue that it was a governed programme, but its secret governance was a setup for more breaches.
It wasn't the first, nor the second, and it wasn't going to be the last. If we had to choose between the evil chinese hacker and the enemy-who-is-us, I for one would take the evil chinese hacker every time. We can deal with the external enemy, we know how. But the internal enemy, the enemy who is us, he is the destruction of civil society, and there is no army to fight that threat.
This article reports that Mozilla are now proactive on security. This is good news. In the past, their efforts could be described as limited to bug patching and the like. Reactive security, in other words, which is what their fuzzer is:
"The FTP and HTTP protocol fuzzers act like fake servers that send bad data to sites," Snyder told InformationWeek.The HTTP fuzzer emulates an HTTP server to test how an HTTP client handles unexpected input. The FTP fuzzer likewise tests how an FTP client handles unexpected data.
Now however there is at least one person employed directly on thinking about security in a proactive sense:
Expect Firefox 3 to include new phishing and malware protection, extended validation certificates, improved password management, and a security user interface.
One could criticise this all as too little, too late, too "interests-driven". But changing cultures to think, really think, about security is hard. It doesn't happen overnight; it takes years at least. Consider that Microsoft has been working since 2003 to make this change, and the evidence that their product is secure is not here yet -- that shows how hard it is.
From the "where did you read it first?" department comes an interesting claim:
Beyond obvious tips like activating firewalls, shutting computers down when not in use, and exercising caution when downloading software or using public computers, Consumer Reports offered one safety tip that's sure to inflame online passions: Consider a Mac.
"Although Mac owners face the same problems with spam and phishing as Windows users, they have far less to fear from viruses and spyware," said Consumer Reports.
Spot the difference between us and them? Consumer Reports is not in the computing industry. What this suggests about who is actually being helpful about security will haunt computing psychologists for years to come.
For amusement, count how many security experts will pounce on the ready excuse:
"Because Macs are less prevalent than Windows-based machines, online criminals get less of a return on their investment when targeting them."
Of course if that's true, it becomes less so with every Mac bought.
Can you say "monoculture!?"
U.S. consumers lost $7 billion over the last two years to viruses, spyware, and phishing schemes, according to Consumer Reports' latest State of the Net survey. The survey, based on a national sample of 2,000 U.S. households with Internet access, suggests that consumers face a 25% chance of being victimized online, which represents a slight decline from last year.
Computer virus infections, reported by 38% of respondents, held steady since last year, which Consumer Reports considers to be a positive sign given the increasing sophistication of virus attacks. Thirty-four percent of respondents' computers succumbed to spyware in the past six months. While this represents a slight decline, according to Consumer Reports, the odds of a spyware infection remain 1 in 3 and the odds of suffering serious damage from spyware are 1 in 11.
Phishing attacks remained flat, duping some 8% of survey respondents at a median cost of $200 per incident. And 650,000 consumers paid for a product or service advertised through spam in the month before the survey, thereby seeding next year's spam crop.
Perversely, insecurity means money for computer makers: computer viruses and spyware turn out to be significant drivers of computer sales. According to the study, virus infections drove about 1.8 million households to replace their computers over the past two years. And over the past six months, spyware infestations prompted about 850,000 households to replace their computers.
From the 'poignant reminder' department, Verisign lost a laptop with employee data on it.
The employee, who was not identified, reported to VeriSign and to local police in Sunnyvale, Calif. that she had left her laptop in her car and had parked her car in her garage on Thursday, July 12. When she went out the next morning, she found that her car had been broken into and the laptop had been stolen.
Possibly a targeted theft?
The thing is, this can happen to anyone. Including Verisign, or any CA, or any security company. This can happen to you, and probably will, regardless of your policies (which in this case include no employee data on laptops and the use of encrypted drives).
The message to take away from this is not that Verisign is this week's silly sausage, or that their internal security is lax. This can and will happen to anyone. Instead, today's message is that there is a gap between security offerings and security needs so large that crooks are driving trucks through it every day, and have been for 4 years.
I estimated a billion dollars in 2004 or so from phishing alone, and now conservative estimates in other post today say it is around 3bn per year. (I say conservative because Lynn posts other numbers that are way higher.) That truck is at least 10 million dollars a day!
Still not convinced? Consider this mistake that the company made:
In its employee letter, VeriSign offered a year of free credit monitoring from Equifax for any affected individual, and recommended placing fraud alerts on credit accounts to watch for signs of fraud or identity theft.
If Verisign can offer a loss-leader zero-margin recovery product to the victims of their own failure, what hope has the rest of the computing industry?
The doom and gloom in the security market spreads. This time it is from Richard Bejtlich who probably knows his stuff as well as any (spotted at Gunnar's 1raindrop). After a day at Black Hat, the expensive high-end "shades of illegal" conference in security, he concludes:
Note how he includes the security professional in there. Yup. The problem is that the knowledge space has ballooned and nobody can keep up; Internet security is now a game that is way too broad for anyone to really cover it all.
Even before the current "war on everyone" the security industry had several problems: a lack of understanding of the business, which fed into poor choices that were generally too expensive, and a tendency to "best practices" -- purchased products, because brands deliver CYA as well as high prices, etc., etc. Over time, industry discovered it made more money if it didn't hand the money over to security ... until 2003, that is, when phishing found its lure.
If then, you can't find someone to do it for you, you have to do it yourself. This is what I called Hypothesis #6 "It's your job. Do it." But exactly what does that mean?
#6.5 You need to be an adept in many more aspects
In all probability you will need to be adept -- well versed if not expert -- in all aspects, right up to user activity. This will stretch you away from the deep cryptographic and protocol tricks that you are used to as you simply won't have time for all that. But an ability to bridge from user needs all the way across to business needs, and then down to crypto bits is probably much more valuable than knowing yet another crypto algorithm in depth.
The FC7 thesis is that you need to know Rights, Accounting, Governance, Value and Finance as well as Software Engineering and Cryptography. You need to know these things in some depth not to use them but to avoid using them! And, more particularly, avoid their hidden costs, by asking the difficult questions about the costs that nobody else dare recognise.
Perhaps the best example was the decision by Microsoft to stop coding and start training all their coders in security practices. Not just some of them, all of them!
Where this leaves the conventional security professional is perhaps an open question. I would prefer to say that instead of prescribing solutions and sometimes supplying them, he would shift to training and guiding. This would involve a different emphasis: if you assess that an IDS might help, in the past you sold an IDS. Now, however, you would assess whether the team leader could cope with an IDS, point her in that direction, and mentor the assessment of the tool by the entire team. If they aren't up to that, then you have another problem.
A secret court ruled the US Government's wiretapping programme illegal, and the US Government claimed that revealing this fact disclosed confidential activity. So illegality is OK as long as it is confidential? Not before the courts, last I heard, but we'd need a lawyer to explain why the courts will not accept confidentiality as a defence for illegality -- and why we should not, either.
In similar news, it was revealed that the then-Attorney General, Ashcroft, stated unequivocally from his sickbed that the programme was illegal. He was immediately replaced by the current incumbent, Alberto Gonzales, and some in Congress now wonder whether his testimony amounts to perjury.
Other news suggested that what was going on was the desire to start wiretapping (read: real-time direct feed to the NSA) of all the trans-US traffic. FISA, apparently, "requires a warrant to monitor calls intercepted in the United States, regardless of where the calls begin or end." A fair enough metric, 30 years ago.
Yet, if I call from Britain to Japan, chances are fair that the call is routed through the US, because that's where most of the fibre is. The hub & spoke effect is much more important for the Internet, where a much higher percentage of traffic is routed through the USA. Indeed, pretty much everything Internet-related is spoked around the US hub: Hosting, IP#s, startups, skills, etc.
Why not tap that, as it doesn't involve the inconvenience of US citizens? A good question; from an intelligence point of view it's just another Black Chamber operation, updated for the 21st century, and it's easy pickings.
There are many reasons to oppose this, such as "We just can't suspend the Constitution for six months," but the one I like best is the simplest: if the NSA gets direct feeds of all fibre communications trans-shipping through the US, then two things will happen: Firstly, by laws of economics not Congress, this tapping will eventually include all US-terminated conversations.
Secondly, it might kick the rest of the world into gear and start responding to the basic threat of aggressive and persistent listening. Perversely, one might suggest that we shouldn't oppose the US, as we need a validated threat to focus our security efforts.
Indeed, if one surmises that the US government have been told their programmes are illegal, one can question whether the NSA is not already tapping all trans-shipped traffic, and is probably not adequately filtering the locally terminated traffic.
All your packets are belong to US. You have been warned: Deploy more crypto, guys.
The United Kingdom's information commissioner is calling on chief executives to take the security of customer and staff information more seriously.
"The roll call of banks, retailers, government departments, public bodies and other organizations which have admitted serious security lapses is frankly horrifying," Richard Thomas wrote in a report. "How can laptops holding details of customer accounts be used away from the office without strong encryption? How can millions of store card transactions fall into the wrong hands?"
The Information Commissioner's Office (ICO) received almost 24,000 inquiries and complaints concerning personal information, and it prosecuted 16 individuals and organizations in the past 12 months, according to its annual report for 2006 and 2007.
Another way of thinking of this is to look at these sorts of questions:
Click through if you want some answers. Now, it's not necessary to be able to answer these right off, but you do need to know what they mean.
OK, so all of those are debatable. But that's the point: unless you can debate them on the same turf as your other managers, you're lost. You can't contribute security if you don't understand each manager's area enough to contribute.
People who don't know about security, but talk about it, are dangers to security. Those who are in the security field can determine the difference between 'security theater' and real security ... but unfortunately those outside are often swayed by simple and easy sales messages.
Scott McNealy did the privacy world a huge favour by saying "Privacy is dead, get over it." Perhaps he's also doing another favour by issuing a wake-up call to the world:
McNealy said that to overcome the "efficiency tax" travelers are facing at airports with long lines, it will make sense to begin adopting identity cards that are smart-card-enabled. These can be supplemented by biometric identification such as a thumbprint scanner, all of which can be done more efficiently than "50 security guards."
McNealy said it would be better to know who is on a plane or in a mall where a terrorist strikes, adding that it wouldn't be necessary to know who is buying what.
He said he realizes that his views might be interpreted as a way to sell more hardware and software at Sun. But, "I'm a parent, and I care about my kids," he said.
McNealy said he envisions the day when parents implant smart chips behind the ears of their children for identification purposes. "My dog has a chip [implant], and it's interesting [that] we treat a dog better than our kids," he said.
Several attendees said they thought implants seemed extreme or at least something that Americans won't want to consider for years to come. But they all agreed that technology will have to play a bigger role in security, especially at airports.
After his talk, McNealy met briefly with reporters and was asked if he seriously intended to endorse chip implants. "We are going to move to smart cards and chips ... so we feel safe," he said.
It sounds more like a DHS press release than anything else. The detailed problem with all that McNealy says is that we know smartcards and chips do not deliver security (again, see Lynn in comments on rollouts of smartcard systems) but they can dramatically destroy privacy ... which is a reduction of security. Yet, high-profile organisations like Sun and DHS will continue to press security theater as it is good for their growth.
The CSO should have an MBA.
As a requirement! Necessary (but maybe not sufficient) for the Chief Security Officer job.
That being a slightly contentious issue, as most of y'all CSOs out there don't have an MBA, the onus is on me now to prove it, or swallow humble pie. Well, this is the blog for being years ahead of the trends, so let's get on with it then.
Consider some factors.
Maybe, 1 and 2 are opposite sides to the same coin, and spending that coin got us to 3. Which brings us to this point: we can pretty much assume that there is a huge gulf of understanding between the security world and the business world.
In order to fix all this, we have to generate understanding.
(Either that, or switch off the systems. Now, the security guys sometimes say exactly that, but the business guys don't listen, so we're back to looking at the real problem.)
Assume then the basic failure is one of understanding. We can do one of several things: train all the managers in security, train all the security people in business, or put in the middle a liaison who knows both security and business.
Let's kill the two easy options: Managers by definition know a little of everything, enough to be dangerous, and do not get trained up much in anything except whatever their original skill was. And for your average manager, those days of early skill training are over.
The second easy option is training the security people. But, that's a hopeless task. Firstly, where they come from, they have little background in business (generally, from IT. Worse is cryptography, better is military, for business exposure at least.). Secondly, it's a full-time job keeping up with just security, let alone learning stuff about business. The reverse difficulty could apply to managers, option 1 above.
Which leaves option #3: the liaison. Now, we can pretty easily see by process of elimination that the liaison has to be the CSO. So the question reduces to: what does the CSO need to be a successful liaison between security and business?
He needs the full capability to talk to the managers, and the security people. The latter means he *is* a security person, because by the law of guruism, to gain the respect of the gurus, you have to be one of them. The former means he *is* a manager.
But we need to go one step further. Because security cuts across all areas of the business in the way that HR, marketing, auditing, and customer support do ... and transport, printing, sales, and cleaning do not ... the security honcho needs to understand a lot more of the business areas than the average manager does.
See the difference? Because security is against an aggressive attacker, because attacks can occur anywhere, the problems are broad and complex, and the solutions strike across all areas. So if our CSO hasn't the skills to talk to each manager, at her level, on her turf, not his, then he's lost.
There are only two ways that I can think of to get the knowledge to be able to talk at the same level as all of the other managers: either you go fast-track, which means being shoved from department to department, 1-2 years in each, or you can pick it up in a condensed package called a Master of Business Administration. The former is done, but it takes decades of time in either a very large corporation or the military. Scratch that as an option, as it also means our CSO won't be up to scratch as a security person. Hence, we are left with one option:
The CSO needs an MBA.
(Which will still cost you a minimum of one year; full-time MBAs are generally 10 to 22 months, part-time ones up to 36 months ... but those are implementation details.)
(Addendum, for scoring easy points against the above logic, Disclosure: I have an MBA :-) ) If you want to read more of the devil's advocacy, try:
It costs $500 for a kit to launch an MITM phishing attack. (Don't forget to add labour costs at 3rd world rates...)
David Franklin, vice president for Europe, the Middle East and Africa, told IT PRO that these sites are proliferating because they are actually easier for hackers to set up than traditional 'fake' phishing sites, because they don't even have to maintain a fake website. He also said man-in-the-middle attacks defeat weak authentication methods including passwords, internet protocol (IP) geolocation, device fingerprinting, cookies, and personal security images and tokens, for example.
"A lot of the attacks you hear about are just the tip of the iceberg. Banks often won't even tell an affected customer that they have been a victim of these man-in-the-middle attacks," said Franklin, adding that kits that guide cybercriminals through setting up a man-in-the-middle attack are now so popular they can be bought for as little as $500 (£250) on the black market now.
He also said "man-in-the-browser" attacks are emerging to compete in popularity with the middleman threat.
A couple of interesting notes from the above: it is now accepted that MITM is what phishing is (in the form mentioned above, the original email form, and the DNS form). These MITMs defeat the identity protection of SSL secure browsing, a claim made hereabouts first, and one that is still widely misunderstood. This is significant because SSL is engineered to defeat MITMs, but it only defeats internal or protocol MITMs; it cannot stop the application itself being MITM'd. This typical "bypass attack" has important economic ramifications: SSL is now shown to be too heavy-weight to deliver value, unless it is totally free of cost and setup.
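To see why the bypass works, consider what the SSL client actually checks: that the certificate matches the name the client asked for, and nothing about whether that name is the one the user intended. A sketch in Python; the hostnames below are placeholders, not real sites:

    import socket, ssl

    def verified_connect(host):
        ctx = ssl.create_default_context()   # full chain and hostname checks
        with ctx.wrap_socket(socket.create_connection((host, 443)),
                             server_hostname=host) as s:
            return s.getpeercert()["subject"]

    # Both verify perfectly: each server presents a valid cert for its own name.
    # SSL has no way to know the user meant the first and was lured to the second.
    print(verified_connect("bank.example"))
    print(verified_connect("bank-login.example"))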
Secondly, note that the mainstream news has picked up the MITB threat (also reported and documented here first). It's still rare, but in the next 6 months, expect your boss to ask what it's about, because he read it in Yahoo.
Analysts at RSA Security early last month spotted a single piece of PHP code that installs a phishing site on a compromised server in about two seconds.
Despite efforts to quickly shut sites down, phishing sites averaged a 3.8-day life span in May, according to the Anti-Phishing Working Group, which released its latest statistics on Sunday.
Data from market analyst Gartner released last month showed that phishing attacks have doubled over the last two years.
Gartner said 3.5 million adults remembered revealing sensitive personal or financial information to a phisher, while 2.3 million said that they had lost money because of phishing. The average loss is US$1,250 per victim, Gartner said.
In the past (June 2004: 1, 2), I've reported that phishing costs around one billion per year. Multiply those last two numbers above from Gartner -- 2.3 million victims at $1,250 each -- and we get around $2.9 billion over the last three years, or roughly a billion per year. Still a good rule of thumb, then.
It's been a while since SWIFT was in the news, but they're back! Chris points out that a class action suit has been permitted against them in Federal court:
In his 20-page opinion, Judge Holderman rejected SWIFT's defense that it acted in good faith by relying on government subpoenas. Any claim to "unfettered government access to the bank records of private citizens" is "constitutionally problematic", the court said.
In refusing to dismiss the case, Judge Holderman noted reports that "SWIFT officials were aware that their disclosures were legally suspect, but they nevertheless continued to supply database information to the U.S. government."
This might follow the progress of the illegal wiretapping case before Judge Diggs Taylor. There, the state said "state secrets" and the Judge responded "it's not a secret anymore..."
Which means illegal behaviour can be tried. Why are we so interested? Behaviour that is "probably illegal" and "definitely deceptive" needs to be documented, because it might suggest a finding that the US government cannot be expected to secure the privacy of any data. This applies most easily to foreigners and their SWIFT data, but customarily, what applies to foreigners soon applies to locals (laws aside).
For example, Todd reports that the French are having trouble stopping their parliamentarians from passing their secrets across on Blackberries:
Members of the new French cabinet have been told to stop using their BlackBerries because of fears that the US could intercept state secrets. The SGDN, which is responsible for national security, has banned the use of the personal data assistants by anyone in the president’s or prime minister’s offices on the basis of “a very real risk of interception” by third parties.
The ban has been prompted by SGDN concerns that the BlackBerry system is based on servers located in the US and the UK, and that highly sensitive strategic information being passed between French ministers could fall into foreign hands. A confidential study carried out two years ago by Alain Juillet, the civil servant in charge of economic intelligence, found that the BlackBerry posed “a data security problem”.
Mr Juillet noted that US bankers would prove their bona fides in meetings by first placing their BlackBerries on the table and removing the batteries.
Some older notes.
Military intel is now accessing bank accounts of US persons. Now, this falls somewhat in the bucket of "expected" so why comment? Two possible reasons. Firstly, because we have a number:
Military intelligence officials have used the letters in about 500 investigations over the past five years, while the CIA issues a 'handful' of such letters every year, The Times wrote on its website late Saturday. Civil rights experts and defence lawyers were critical of the practice.
In comparison, the domestic law enforcement agency, the Federal Bureau of Investigation (FBI), makes much more frequent use of the letters to get financial information, and served about 9,000 such letters in 2005 alone, said justice officials.
(Old, dead link.) With numbers, we can calculate risk scenarios for financial cryptography applications. (It might be more interesting to some, for example, to look at total Internet intercepts; but we could hazard a guess that this is the same order of magnitude.)
The Times cited two examples where the military used the letters - one was a government contractor with unexpected wealth, another was a Muslim chaplain at Guantanamo Bay prison for terrorist suspects.
Secondly, because of the above privacy question. Concentrate on the first case: this is a clear suggestion of fraud. That's a regular crime. There are courts, judges, prosecutors, defence attorneys, and PIs lined up from coast to coast in the US for that sort of thing. So why did the military bypass normal chains of investigation and prosecution, and use the nuclear option?
Because it could. There is no climate of fear of prosecution for overstepping the bounds. There is no particular climate that tools should be limited in use. This would support a finding that data will be misused.
Caveat: we probably need to verify those statements before getting too confident.
One of the things that occurred in the early days of phishing was the realisation that the browser (and its manufacturer) was ill-suited to dealing with the threat, even though it was the primary security agent (c.f., from the SSL promise to defeat the MITM). One solution I pushed a lot was plugin security -- adding the authentication or authorisation logic into plugins so that different techniques could experiment and evolve.
In hindsight, this will appear obvious. 99% of the client-side security work involved plugins -- Trustbar, Netcraft, Petname, Ping's design, and many others. But it required a mindshift from "this is a demo" to "this is where it should be." This became poignant in the discussions of what to do about EV -- if Mozilla adopted EV into the browser, that opened a huge can of worms. Hence, putting EV in the plugin was the logical conclusion.
Verisign Inc. has [...snip] released a Firefox plugin that will show the same type of green address bar that is displayed by Internet Explorer 7 when it lands on certain highly trusted Web sites that use Extended Validation Secure Sockets Layer (EV SSL) certificates.
Before companies like Verisign will issue an EV SSL certificate, they take extra steps to make sure that it is going to a legitimate organization. For example, they will make sure that the business in question is registered with local authorities, has a real address, and actually has control over the Web domain in question.
Earlier, I suggested that understanding EV in terms of security is futile. More, look to structural issues:
Verisign's Tim Callan says that more than 500 Web sites, including sites run by eBay's PayPal division and ING Group, have now completed EV SSL certification. Nearly 90 percent of them are certified by Verisign, said Callan, a director of product marketing with the company's SSL group.
That was the plan. Consider how much value there is in the other CAs scrabbling around the crumbs of 10% of the market, versus the costs of the EV programme. Even more to the point:
That's an important point because Verisign's Firefox plugin doesn't identify sites that are certified by its competitors. Callan said it would have been too much work to maintain a list of legitimate EV SSL providers. "At that point, we're creating a whole new simultaneous real-time checking system," he said. "We were willing to invest in this one-off code development, but we didn't want to inherit this legacy of constantly maintaining this service, especially because this is a stop-gap measure. At the end of the year, this will be built into Firefox proper."
Can you say barriers to entry? Once a site has been EV'd by Verisign, it is unlikely to shift, and that's what they wanted. If that doesn't work, ask Callan to say OCSP, which is written large into the EV draft (!) specification ...
Leaving aside the industry structural games, let's get back to PKI. There is a silver lining: EV means that it now becomes Verisign's responsibility to authenticate these sites to the user.
The *only* way to do that is in a way that is clearly expressed as "by Verisign" and the plugin makes this clear. If the browser takes on that responsibility, it breaks the statement, and therein lies the future breach of EV and the past failure of the security model.
Since long ago, we have hammered on one of the failures of the SSL server certificate market: "all CAs are alike." They were not, are not, and cannot be, and why the developers hung on to this blatantly and obviously false myth was mystifying to me.
Verisign has now broken it, and broken it good. For that we should all be thankful, and in time, we will see a differentiated CA market, which allows new products and new security postures to meet the divergent needs of users.
I have a lot of skepticism about the notion of provable security.
To some extent this is just efficient hubris -- I can't do it so it can't be any good. Call it chutzpah, if you like, but there's slightly more relevance to that than egotism, as, if I can't do it, it generally signals that businesses will have a lot of trouble dealing with it. Not because there aren't enough people better than me, but because, if those that can do it cannot explain it to me, then they haven't got much of a chance in explaining it to the average business.
Added to that, there have been a steady stream of "proofs" that have been broken, and "proven systems" that have been bypassed. If you look at it from a scientific investigative point of view, generally, the proof only works because the assumptions are so constrained that they eventually leave the realm of reality, and that's particularly dangerous to do in security work.
Added to all that: the ACM is awarding its Gödel prize for a proof that there is no proof:
In a paper titled "Natural Proofs" originally presented at the 1994 ACM STOC, the authors found that a wide class of proof techniques cannot be used to resolve this challenge unless widely held conventions are violated. These conventions involve well-defined instructions for accomplishing a task that rely on generating a sequence of numbers (known as pseudo-random number generators). The authors' findings apply to computational problems used in cryptography, authentication, and access control. They show that other proof techniques need to be applied to address this basic, unresolved challenge.
The findings of Razborov and Rudich, published in a journal paper entitled "Natural Proofs" in the Journal of Computer and System Sciences in 1997, address a problem that is widely considered the most important question in computing theory. It has been designated as one of seven Prize Problems by the Clay Mathematics Institute of Cambridge, Mass., which has allocated $1 million for solving each problem. It asks - if it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This problem is posed to determine whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve.
The paper proves that there is no so-called "Natural Proof" that certain computational problems often used in cryptography are hard to solve. Such cryptographic methods are critical to electronic commerce, and though these methods are widely thought to be unbreakable, the findings imply that there are no Natural Proofs for their security.
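For reference, the "most important question in computing theory" being described is P versus NP. In the standard textbook formulation (mine, not the press release's):

    \mathbf{P} = \bigcup_{k \ge 1} \mathrm{TIME}(n^k)
    \mathbf{NP} = \{\, L : \exists\ \text{poly-time verifier } V,\ x \in L \iff \exists w,\ |w| \le \mathrm{poly}(|x|),\ V(x, w) = 1 \,\}
    \mathbf{P} \subseteq \mathbf{NP}, \qquad \text{open question: } \mathbf{P} \overset{?}{=} \mathbf{NP}

That is, every quickly-solvable problem is quickly-checkable; whether the converse holds is the million-dollar question.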
If so, this can count as a plus point for risk management, and a minus point for the school of no-risk security. However hard you try, any system you put in place will have some chances of falling flat on its face. Deal with it; the savvy financial cryptographer puts in place a strong system, then moves on to addressing what happens when it breaks.
The "Natural Proofs" result certainly matches my skepticism, but I guess we'll have to wait for the serious mathematicians to prove that it isn't so ... perhaps by proving that it is not possible to prove that there is no proof?
So why can't we do it? In short, we do know that all security is really about risk management. So we just do risk management, right?
To which I'll add one: the attacker is aggressive. Whatever we measure, the attacker actively perverts. So, unlike insurance models, security doesn't work well with just statistics. Much as we say we need more data, if we had all the data in the world, and fixed what we could see, the attacker would simply move faster than we could.
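A toy model makes the point. Suppose the defender always fixes whatever channel the statistics show is worst, while the attacker simply re-targets the weakest remaining defence; the losses never converge the way an actuarial model would predict. The numbers below are invented purely for illustration:

    channels = {"email": 0.5, "web": 0.3, "wireless": 0.2}   # attack success rates (invented)

    for year in range(5):
        worst = max(channels, key=channels.get)
        channels[worst] *= 0.1                    # defender patches what the data shows
        best = max(channels, key=channels.get)    # attacker shifts to the weakest defence
        channels[best] = min(0.9, channels[best] * 2)
        print(year, "patched", worst, {k: round(v, 3) for k, v in channels.items()})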
A trio of notes:
So, there are some issues here. Are there limits to the risk management approach, beyond the fact that it seems to be beyond the capabilities of the industry? Has it reached its Waterloo, against the active, mildly competent attacker with 100 times the spend?
The meme is starting to spread. It seems that the realisation that the security community is built on self-serving myths leading to systemic fraud has now entered the consciousness of the mainstream world.
Over on the Volokh Conspiracy, Paul Ohm exposes the Myth of the Superuser. His view is that too often, the sense of the Superuser is one of an overpowering ability of this uber-attacker. Once this sense enters the security agenda, the belief that there is an all-powerful evil criminal mastermind out there, watching and waiting, leads us into dangerous territory.
Ohm does not make the case that they do not exist, but that their effect or importance is greatly exaggerated. I agree, and this is exactly the case I made for the MITM. In brief, the man-in-the-middle is claimed to be out there lurking, and we must protect against him, at any cost. Wrong on all counts, and the result is a security disaster called phishing, which is in itself an MITM.
Then, phishing can be interpreted as a result of our obsession with Ohm's Superuser, the uber-attacker. In part, at least, and I'd settle for running the experiment without the uber-obsession. Specifically, Ohm points at some bad results he has identified:
Very briefly, in addition to these two harms — overbroad laws and civil liberties infringements — the other four harms I identify are guilt by association (think Ed Felten); wasted investigative resources (Superusers are expensive to catch); wasted economic resources (how much money is spent each year on computer security, and is it all justified?); and flawed scholarship (See my comment from yesterday about DRM).
All of which we can see, and probably agree on. What makes this essay stand out is that he goes the extra mile and examines what the root causes might be:
I have essentially been saying that we (policymakers, lawyers, law professors, computer security experts) do a lousy job calculating the risks posed by Superusers. This sounds a lot like what is said elsewhere, for example involving the risks of global warming, the safety of nuclear power plants, or the dangers of genetically modified foods. But there is a significant, important difference: researchers who study these other risks rigorously analyze data. In fact, their focus on numbers and probabilities and the average person’s seeming disregard for statistics is a central mystery pursued by many legal scholars who study risk, such as Cass Sunstein in his book, Laws of Fear.
In stark contrast, experts in the field of computer crime and computer security are seemingly uninterested in probabilities. Computer experts rarely assess a risk of online harm as anything but, “significant,” and they almost never compare different categories of harm for relative risk. Why do these experts seem so willing to abdicate the important risk-calculating role played by their counterparts in other fields?
Does that sound familiar? To slide into personal anecdote, consider the phishing debate (the real one back in 2004 or so, not the current phony one).
When I was convinced that we had a real problem, and people were ignoring it, I reasoned that the lack of scientific approach was what was holding people back from accepting the evidence. So I started collecting numbers on costs, breaches, and so forth (you'll see frequent posts on the blog, also mail postings around). I started pushing these numbers out there so that we had some grounding in what we were talking about.
What happened? Nothing. Nobody cared. I was able around 2004 to state that phishing already cost the USA about a billion dollars a year, and sometime shortly after that, that basically all data was compromised. In fact, I'm not even sure when we passed these points, because ... it's not worth my trouble to even go back and identify it!
Worse than nobody caring, the security field simply does not have the conceptual tools to deal with this. A little bit like "everyone was in denial," but worse: there is a collective glazed view of the whole problem.
What's going on? Ohm identifies several explanations (in point form here, but read his larger descriptions):
- Pervasive secrecy...
- Everyone is an Expert...
- The Need for Interdisciplinary Work...
No complaint there! Readers will recognise those frequent themes, and we could probably collectively get it to a list of 10 explanations without too much mental sweat.
But I would go further. Deeper. Harsher.
I would suggest that there is one underlying cause, and it is structural. It is because security is a market for silver bullets, in the deep and economic sense explained in that long paper. All of the above arise, in varying degrees, in the market first postulated by Michael Spence.
The problem with this is that it forces us to face truths that few can deal with. It asks us to choose between the awful grimness of helplessness and the temptation to the dark side of security fraud, before we can enter any semblance of a professional existence. Nobody wants to go there.
One of the things we know is that MITMs (man-in-the-middle attacks) are possible, but almost never seen in the wild. Phishing is a huge exception, of course. Another fertile area is wireless lans, especially around coffee shops. Correctly, people have pointed to this point as a likely area where MITMs would break out.
Incorrectly, people have typically confused possibility with action. Here's the latest "almost evidence" of MITMs, breathtakingly revealed by the BBC:
In a chatroom used to discuss the technique, also known as a 'man in the middle' attack, Times Online saw information changing hands about how security at wi-fi hotspots – of which there are now more than 10,000 in the UK – can be bypassed.
During one exchange in a forum entitled 'T-Mobile or Starbucks hotspot', a user named aarona567 asks: "will a man in the middle type attack prove effective? Any input/suggestions greatly appreciated?"
"It's easy," a poster called 'itseme' replies, before giving details about how the fake network should be set up. "Works very well," he continues. "The only problem is,that its very slow ~3-4 Kb/s...."
Another participant, called 'baalpeteor', says: "I am now able to tunnel my way around public hotspot logins...It works GREAT. The dns method now seems to work pass starbucks login."
Now, the last paragraph is something else: it is referring to the ability to tunnel through DNS to get uncontrolled access to the net. This is typically possible if you run your own DNS server and install some patches and tools. It is useful, and economically sensible for anyone to do, although technically it may be infringing behaviour to gain access to the net from someone else's infrastructure (laws and attitudes varying...).
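For the curious, here is a rough sketch of the mechanics, with a hypothetical domain (real tools such as iodine do the full bidirectional job): the captive portal blocks ordinary traffic but still forwards DNS lookups, so data can be smuggled out as hostname labels to a nameserver the tunneller controls.

    import base64

    # Sketch: encode outbound data into DNS query names. The hotspot's
    # resolver happily forwards the lookup to the authoritative server
    # for tunnel.example.com, which decodes the labels. (Hypothetical
    # domain; base32 because DNS names are case-insensitive.)

    def to_dns_query(payload: bytes, domain: str = "tunnel.example.com") -> str:
        label = base64.b32encode(payload).decode().rstrip("=").lower()
        # DNS labels max out at 63 characters, so chunk accordingly.
        chunks = [label[i:i + 63] for i in range(0, len(label), 63)]
        return ".".join(chunks) + "." + domain

    print(to_dns_query(b"GET / HTTP/1.0"))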
So where's the evidence of the MITM? Guys talking about something isn't the same as doing it (and the penultimate paragraph seems to be talking about DNS tunnelling as well). People have been demoing this sort of stuff at conferences for decades ... we know it is possible. What we also know is that it is not a good use of your valuable time as a crim. People who do this sort of thing for a living search for techniques that give low visibility and high gathering capability. Broadcasting in order to steal a single username and password fails on both counts.
If we were scientists, or risk-based security scholars, what we need is evidence that they did the MITM *and* they committed a theft in so doing it. Only then can we know enough to allocate the resources to solving the problem.
To wrap up, here is some *credible* news that indicates how to economically attack users:
Pump and dump fraudsters targeting hotels and Internet cafes, says FBI
Cyber crooks are installing key-logging malware on public computers located in hotels and Internet cafes in order to steal log-in details that are used to hack into and hijack online brokerage accounts to conduct pump and dump scams.
The US Federal Bureau of Investigation (FBI) has found that online fraudsters are targeting unsuspecting hotel guests and users of Internet cafes.
When investors use the public computers to check portfolios or make a trade, fraudsters are able to capture usernames and passwords. Funds are then looted from the brokerage accounts and used to drive up the prices of stocks the fraudsters had bought earlier. The stock is then sold at a profit.
In an interview with Bloomberg reporters, Shawn Henry, deputy assistant director of the FBI's cyber division, said people wouldn't think twice about using a computer in an Internet cafe or business centre in a hotel, but he warns investors not to use computers they don't know are secure.
Why is this credible, and the other one not? Because the crim is not sitting there with his equipment -- he's using the public computer to do all the dangerous work.
Unconfirmed claims are being made on WSJ that the hackers in the TJX case did the following:
The TJX hackers did leave some electronic footprints that show most of their break-ins were done during peak sales periods to capture lots of data, according to investigators. They first tapped into data transmitted by hand-held equipment that stores use to communicate price markdowns and to manage inventory. "It was as easy as breaking into a house through a side window that was wide open," according to one person familiar with TJX's internal probe. The devices communicate with computers in store cash registers as well as routers that transmit certain housekeeping data.
After they used that data to crack the encryption code the hackers digitally eavesdropped on employees logging into TJX's central database in Framingham and stole one or more user names and passwords, investigators believe. With that information, they set up their own accounts in the TJX system and collected transaction data including credit-card numbers into about 100 large files for their own access. They were able to go into the TJX system remotely from any computer on the Internet, probers say.
OK. So assuming this is all true (and no evidence has been revealed other than the identity of the store where it happened), what can we say? Lots, and it is all unpopular. Here's a scattered list of things, with some semblance of connectivity:
a. Notice how the crackers still went for the centralised database. Why? It is validated information, and is therefore much more valuable and economic. The gang was serious and methodical. They went for the databases.
Conclusion: Eavesdropping isn't much of a threat to credit cards.
b. Eavesdropping is a threat to passwords, assuming that is what they picked up. But, we knew that way back, and that exact threat is what inspired SSH: eavesdroppers sniffing for root passwords. It's also where SSL is most sensibly used.
c. Eavesdropping is a threat, but MITM is not: by the looks of it, they simply sat there and sucked up lots of data, looking for the passwords. MITMs are just too hard to make them economic, *and* they leave tracks. "Who exactly is it that is broadcasting from that car over there....?"
(For today's almost evidence of the threat of MITMs see the BBC!)
d. Therefore, SSL v1 would have been sufficient to protect against this threat level. SSL v2 was overkill, and over-expensive: note how it wasn't deployed to protect the passwords from being eavesdropped. Neither was any other strong protocol. (Standard problem: most standardised security protocols are too heavy.)
TJX and 45 million Americans say "thanks, guys!" I reckon it is going to take the other 255 million Americans to lose big time before this lesson is attended to.
e. Why did they use a weak crypto protocol? Because it is the one delivered in the hardware.
Question: Why is hardware often delivered with weak crypto?
f. And, why was a weak crypto protocol chosen by the WEP people? And why are security observers skeptical that the new replacement for WEP will last any longer? The solution isn't in the "guild" approach I mentioned earlier, so forget ranting about how people should use a good security expert. It's in the institutional factors: security is inversely proportional to the number of designers. And anything designed by an industry cartel has a lot of designers.
g. Even if they had used strong crypto, could the breach have happened? Yes, because the network was big and complex, and the hackers could have simply plugged into some place elsewhere. Check out the clue here:
The hackers in Minnesota took advantage starting in July 2005. Though their identities aren't known, their operation has the hallmarks of gangs made up of Romanian hackers and members of Russian organized crime groups that also are suspected in at least two other U.S. cases over the past two years, security experts say. Investigators say these gangs are known for scoping out the least secure targets and being methodical in their intrusions, in contrast with hacker groups known in the trade as "Bonnie and Clydes" who often enter and exit quickly and clumsily, sometimes strewing clues behind them.
Recall that transactions are naked and vulnerable. Because the data is seen in so many places, savvy FCers assume the transactions are visible by default, and thus vulnerable unless intrinsically protected.
h. And, even if the entire network had been protected by some sort of overarching crypto protocol like WEP, the answer is to take over a device. Big stores means big networks means plenty of devices to take over.
i. Which leaves end-to-end encryption. The only protection you can really count on is end-to-end. WEP, WPA, IPSec and other such infrastructure-level systems are only a hopeful answer to an easy question; end-to-end security protocols are the real answer to application-level questions.
(e.g., they could have used SSL for protecting the password access to the databases, end to end. But they didn't.)
j. The other requirement is to make the data insensitive to breaches. That is, even if a crook gets all the packets, he can't do anything with them. Not naked, as it were. End-to-end encryption then becomes a privacy benefit, not a security necessity.
However, to my knowledge, only Ricardo and AADS deliver this, and most other designers are still wallowing around in the mud of encrypted databases. A possible exception to this is the selective disclosure approach ... but for various business reasons that is even less likely to field than Ricardo and AADS were.
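To illustrate the naked-versus-protected distinction -- and this is only a sketch of the concept, not Ricardo's or AADS's actual formats -- consider a transaction that is authorised by a digital signature rather than by a reusable secret such as a card number. An eavesdropper who captures the whole message gets nothing he can reuse:

    import json, os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The client's signing key never leaves its device; only the public
    # key is registered with the server.
    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()

    tx = json.dumps({
        "pay_to": "merchant-42",
        "amount": "19.95",
        "nonce": os.urandom(8).hex(),   # server accepts each nonce once
    }).encode()
    signature = signing_key.sign(tx)

    # A captured (tx, signature) pair authorises this payment only:
    # replay is stopped by the nonce, and forging a *different* payment
    # requires the private key.
    verify_key.verify(signature, tx)    # raises InvalidSignature if tampered

With data like this, a breach of the wire or the database leaks privacy at worst, not spendable credentials.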
k. Why don't we use more end-to-end encryption with naked transaction protocols? One reason is that they don't scale: we have to write one for each application. Another reason is that we've been taught not to, for generations: "you should use a standard security product." ... as seen by TJX, who *did* use a standard security product.
Conclusion: Security advice is "lowest common denominator" grade. The best advice is to use a standard product that is inapplicable to the problem area, and if that's the best advice, that also means there aren't enough FC-grade people to do better.
l. "Oh, we didn't mean that one!" Yeah, right. Tell us how to tell? Better yet, tell the public how to tell. They are already being told to upgrade to WPA, as if improving 1% of their network from 20% security to 80% security is going to help.
m. In short, people will seize on the encryption angle as the critical element. It isn't. If you are starting to get confused by the multiplying conflicts, the silly advice, and the sense of powerlessness the average manager has, you're starting to understand.
This is messy stuff, and you can pretty much expect most people to not get it right. Unfortunately, most security people will get it wrong too in a mad search for the single most important mistake TJX made.
The real errors are systemic: why are they storing SSNs anyway? Why are they using a single number for the credit card? Why are retail transactions so pervasively bound with identity, anyway? Why is it all delayed credit-based anyway?
1. We need a system of security metrics, and it is a research grade problem.
2. The demand for security expertise outstrips the supply, and it is a training problem and a recruitment problem.
3. What you cannot see is more important than what you can, and so the Congress must never mistake the absence of evidence for the evidence of absence, especially when it comes to information security.
4. Information sharing that matters does not and will not happen without research into technical guarantees of non-traceability.
5. Accountability is the idea whose time has come, but it has a terrible beauty.
Yes to his points 1,3,4,5, and we should read them. Which leaves:
Priority number two: The demand for security expertise outstrips the supply.
Information security is perhaps the hardest technical field on the planet. Nothing is stable, surprise is constant, and all defenders work at a permanent, structural disadvantage compared to the attackers. Because the demands for expertise so outstrip the supply, the fraction of all practitioners who are charlatans is rising. Because the demands of expertise are so difficult, the training deficit is critical. We do not have the time to create, as if from scratch, all the skills required. We must steal them from other fields where parallel challenges exist. The reason cybersecurity is not worse is that a substantial majority of top security practitioners bring other skills into the field; in my own case, I am a biostatistician by training. Civil engineers, public health practitioners, actuaries, aircraft designers, lawyers, and on and on — they all have expertise we can use, and until we have a training regime sufficient to supply the unmet demand for security expertise we should be both grateful for the renaissance quality of the information security field and we should mine those other disciplines for everything we can steal. If you can help bring people into the field, especially from conversion, then please do so. In the meantime, do not believe all that you hear from so-called experts. Santayana had it right when he said that “Skepticism is the chastity of the intellect; it is shameful to give it up too soon, or to the first comer.”
Well! An alternate but not radically diverging opinion.
Thanks to Gunnar
Over on the crypto forum, an old debate restarted when some poor schmuck made the mistake of asking why CBC mode should not have an IV of all zeros.
For the humans out there, a "mode" is a way of using a block cipher (which encrypts only a single block) to encrypt a stream. CBC mode chains each block to the next by feeding the result of the last encryption into the next ("cipher block chaining"). But we need an initial value (IV) to start the process. In the past, all zeros were suggested as good enough.
The problem is that if you use the same starting key, and the user sends the same packet, then the encrypted results will be the same (a block cipher being deterministic, generally). So we use a different IV for every use of the secret key, in order to cause different results and hide the fact that the same packet was sent. (There are also some other difficulties, but these are not only much more subtle, they are beyond my ability to write about, so I'll skip past them blithely.)
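A minimal demonstration of the repeated-packet problem, assuming AES as the block cipher and Python's cryptography package:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    msg = b"ATTACK AT DAWN!!"   # exactly one 16-byte block

    def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    # All-zero IV: the same plaintext under the same key always yields
    # the same ciphertext, so an eavesdropper learns about repeats.
    zero_iv = b"\x00" * 16
    assert cbc_encrypt(key, zero_iv, msg) == cbc_encrypt(key, zero_iv, msg)

    # A fresh random IV per message: identical plaintexts now encrypt
    # differently, hiding the repetition.
    c1 = cbc_encrypt(key, os.urandom(16), msg)
    c2 = cbc_encrypt(key, os.urandom(16), msg)
    assert c1 != c2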
Back to the debate. Predictably, there was little controversy over the answer ("you can't do that!"), some controversy over the fuller answer (what to do), but also more controversy over the attitude. In short, the crypto community displayed what might be considered a severe case of hubris:
"If you don't know, then you had better employ a security expert who does!"
There are many problems with this. Here's some of them:
1. At a guess, designing secure protocols is 50% business, 40% software, 10% crypto. So when you say "employ a security expert" we immediately have the problem that security is many disciplines. A security expert might know a lot about cryptography, or he might know very little. What we can't reasonably expect is that a security expert knows a lot about everything.
Something has to be cut out, and as security -- the entire field -- only rarely and briefly depends on cryptography, it seems reasonable that your average security expert knows only the basics of cryptography.
Adi Shamir covered this in his list of misconceptions in cryptography:
By implementers: Cryptography solves everything, but:
- only basic ideas are successfully deployed
- only simple attacks are avoided
- bad crypto can provide a false sense of security
What then are the basics? Block ciphers are pretty basic. Understanding all the ins and outs of modes and IVs however is not. Where we draw the line is in knowing our own limitations, so therefore the original question ("what's wrong with IV of all zeros?") indicates quite good judgement.
2. But what about software? We can see it in the cryptographers' view:
"Just use a unique IV."
Although this sounds simple, it is a lot harder to implement than to say, and the person who gets to be responsible for the reality is the programmer. In glorious, painful detail.
Consider this: say you choose to put random numbers in the IV, and then you specify that the protocol must have a good RNG ("random number generator"). How do you then show that this is what actually happens? How can you show that the user's implementation has a good RNG? Big problem: you can't. And this has been an ignored problem for basically the entire history of Internet cryptography.
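As a contrived illustration of that gap between specification and implementation, the code below "uses a random IV" exactly as a spec might demand, and yet every IV it emits is predictable:

    import random

    # The spec says "use a random IV", but the spec can't see the code.
    # A plausible-looking but broken choice -- a seeded general-purpose
    # PRNG -- yields IVs an attacker can reproduce at will.

    def bad_iv() -> bytes:
        rng = random.Random(1234)   # fixed seed: looks random, isn't
        return bytes(rng.randrange(256) for _ in range(16))

    assert bad_iv() == bad_iv()     # every "random" IV is identical

    # Nothing on the wire reveals this; only code review or statistical
    # tests over many IVs would catch it.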
This would suggest that if you are doing secure protocols, what you want is someone who knows more about software than cryptography, because you want to know the costs of those choices more than you want to know the snippety little details.
3. Unfortunately, cryptography has captured the attention as the critical element of security. Software, on the other hand, is less interesting; just ask any journalist.
Because of this, crypto has become a sort of rarified unchallengeable beast in technological society, an institution more displaying of religious beliefs than cold science. Mere software engineers can rarely debate the cryptographers as peers, even the most excellent ones (like Zooko, Hal Finney and Dani Nagy) who can seriously write the maths.
So while I suggest that software is more important to the end result than cryptography, the programmers still lose in the face of assumed superiority. "Employ someone better than you" smacks of false professionalism, guildmaking. It's simple enough economics; if we as a group can create an ethos like the medical industry, where we could become unchallenged authorities, then we can lock out competitors (who haven't passed the guild exam), and raise prices.
Unfortunately, this raises the chance to near certainty that we also lock out people who can point out when we are off the rails.
4. And, we can forget about the business aspects completely. That is, a focus on requirements is just not plausible for people who've buried themselves in mathematics for a decade, as they don't understand where the requirements come from. Talking about risk is a vague hand-wavy subject, better left alone because of its lack of absolute provability.
By way of example, one of the questions in the recent debate was
"you haven't heard of anyone breaking SD cards have you?"
From a business perspective this makes complete sense, it is a completely standard risk perspective with theory to back it up (albeit a simplification); from a cryptographer's perspective it is laughable.
Who's right? Actually, both perspectives have merit, and choosing the right approach is again the subject of judgement.
5. Hence, a strong knee-jerk tendency to fall back on the old saws like the CIA of confidentiality, integrity and authentication. But it is generally easy to show that CIA and similar things are nice for crypto school but a bad start in real life.
E.g., in the above debate, it turns out that the application in question is DRM -- Digital Rights Management. What do we know about DRM?
If anything has been learnt at massive cost over the last 10 years of failed efforts, it is that DRM security does not have to be strong. Rather it has to create a discrimination between those who will pay and those who won't. This is a pure business decision. And it totally blows away the C, confidentiality, in CIA.
Don't believe me? Buy an iPod. The point here is that if one analyses the market for music (or ringtones, or cell phones, apparently) then the last thing one cares about is being able to crack the crypto. If one analyses the market for anything, CIA turns out to be such a gross simplification that you can pretty much assume it to be wrong from the start.
6. If you've subscribed to the above, even partly, you might then be wondering how we can tell a good security expert from a bad one? Good question!
Even I find it difficult, and I design the stuff (not at the most experienced level, but well above hobby level). It is partly for that reason that I name financial cryptographers on the blog: to develop a theory of just how to do this.
How is a manager who has no experience in this going to pick from a room full of security experts, who mostly disagree with each other? Basically, you're stuffed, mate!
So where does that leave us? Steve Bellovin says that when we hire a security expert, we are hiring his judgement, not his knowledge of crypto. Which means knowing when to worry, and especially _when not to worry_, and I fully agree with that! So how then do we determine someone's judgement in security matters?
Here's a scratched-down emerging theory of signals of security judgement:
BAD: If someone says "you need to hire a security expert, because you don't know enough, and that could be dangerous" that's a red-flag.
GOOD: If they ask what the requirements are, that's good.
BAD: If they assume CIA before the requirements, that's bad.
GOOD: If they ask what the application is, then a big plus.
BAD: If they say "use a standard protocol" then that's bad.
GOOD: If they say "this standard protocol may be a good starting point" then that's good.
BAD: If they know what goes wrong when an IV is not perfect, that's a danger sign.
GOOD: If they know enough to say "we have to be careful of the IV" then that's good.
BAD: If they don't know what risk is, they cannot reduce your risk.
GOOD: If they know more software than crypto, that's golden.
Strongly oriented to weeding out those who know too much about cryptography, it seems.
Sadly, the dearth of security engineers who don't subscribe to the hubristic myths leaves us with an unfortunate result: the job is left up to you. This has been the prevalent result of so many successful Internet ventures that it is a surprise that it hasn't been more clear. I'm sufficiently convinced that I think it has reached the level of a Hypothesis:
"It's your job. Do it."
Hubris? Perhaps! It's certainly your choice to either do it yourself, or hand it over to a security expert, so let's hear your experiences!
TJX is to be sued. The huge data breach at the US retailer is news covered elsewhere, as it is just one big breach in a series of similar ones.
The suit will argue that TJX failed to protect customer data with adequate security measures, and that the Framingham, Mass.-based retail giant was less than honest about how it handled data.
What is interesting is that this could be the first time that someone big says "boo!" If the banks are now getting together to sue TJX for doing the wrong thing, this sets an interesting precedent: the banks say (presumably) that TJX was negligent and has done damage.
If the courts can show this is worth remedy, then the reverse is possible too. There are other suits possible. If the banks lose the data, then maybe they should be sued? If Microsoft's OS is shown to be insecure and susceptible to lost data, then maybe it should be sued? Or the banks that quite happily permitted it to be used, again? If someone pushes a particular product for commercial purposes (such as a firewall, a secure token, an encryption protocol, or just advice...) and it is shown to be materially involved in a breach, maybe the pusher needs to be sued?
What would this mean? A lot of suits, is one thing, such as is being readied against Paypal. A lot of money wasted, and the lawyers get richer. Some banks, some suppliers, some pushers and some users might modify their behaviour. Others might just get more defensive.
That may be bad ... but what's our alternative?
Suppliers and sellers of bad products have not been punished. Neither have buyers of bad products. Leaving aside the sense of blame and revenge, where is the feedback loop that tells a company that "buying that was wrong, that hurts?" Where is the message that says "you shouldn't use that software for online banking" to the user?
To address this lack of feedback, lack of confirmed danger, many have suggested government action. But, other than the spectacular exception of the SB1386 data breach disclosure law, most government laws and interventions have made matters worse, not better.
Economists like those going to WEIS2007 have suggested many things: Align the incentives, share more breach information amongst victims, make software vendors liable, etc etc. I suspect that one common rejoinder here is that the very economics that explains the problem often gives clues as to why it isn't solved.
Some mad crypto people have even suggested designing security into the system in the first place. Yet other fruitcakes have said that designing the wrong security in was what got us to where we are now...
What doesn't seem clear from an outside, global, society perspective is ... what to do?! None of those approaches are going to "just work," even if they are adopted.
However, the liabilities (growing) and the interests (diverging) are going to be balanced and aligned, one way or another. The Internet security world today is out of balance, unsustainable in its posture.
Here's my prediction: At some point we do reach a tipping point, and at that point the suits start.
However, predicting re-balancing-by-suits has been a little like predicting the collapse of the dollar in the new not-quite-global-currency regime: when it did happen, we were caught by surprise. And then it bounced back again...
So, no dates on that prediction. Let's watch for TJX copies.
Two separate comments on the blog from a few days ago reach into the nub of the security mess. Adam comments:
It's not that we're unable to propose solutions, it's that they're hard to compare. My assertion is that once we overcome the desire to hide our errors, we can learn to compare in better ways.
And Lynn writes:
Two separate studies recently reached conflicting conclusions: While one found that identity theft is on the rise significantly, the other reported that it is on the decline.
So which is it?
Addressing these in reverse order, we can expect two professionally-prepared scientific reports on the same subject to reach the same conclusion, unless there are some other factors involved.
Firstly, it could be that we don't know enough (c.f., silver bullets). Secondly, it could be that we are not using scientific rigour, and the report is instead some for-profit advertisement. Or, thirdly, we could always simply be making a mistake.
All reasons are plausible... but now look at Adam's comment. If we apparently have a desire to hide our errors, what does that say? Are we all snake-oil salesmen? Are we not scientists? Is security only sales and hype, and is there no professionalism in the field at all?
If we were talking about a science, and/or conducted by professionals, then we could assume that while Lynn's case was possible, it would be infrequent: professionals would not conclude if they did not know enough. Scientists wouldn't write for-profit advertisements, and although professionals might, they would be careful to disclose and disclaim. At the least, we would expose our data and ask for alternate analyses for the purpose of eliminating errors and mistakes.
If we were scientists, we recognise that the goal is knowledge, and the elimination of our own mistakes is the only way to advance knowledge. Having any desire whatsoever to cover our mistakes is incompatible with scientific method, indeed, it should be a joy to uncover our mistakes, as this is the only way to advance! And, even professionals know that recognition and correction of mistakes is a core professional duty.
This suggests then that security is not science and it is not a profession.
Is security only the marketing of ephemeral goods and services?
There is no moral difficulty in security being only mere marketing. We humans have the right to make money, and spend it on what we like. If companies buy hype, then let us sell them hype.
However, even in marketing there are limits. Even in marketing, we say that statements must be true. Even in marketing, professionalism exists.
Why is this? Perhaps it is time to consider a dramatic alternate to the true statement: Fraud.
Under common law, three elements are required to prove fraud: a material false statement made with an intent to deceive (scienter), a victim’s reliance on the statement and damages.
In today's Internet security world, we have damages, in Lynn's above-mentioned reports, whether up or down. We also have reliance by users on browsers, operating systems, CAs and server security. The intent to deceive is easy to show in the context of sales.
Do we have material false statements?
If the security industry was brought before the court of common law, I'd suggest that there's a pretty good chance that it would be found guilty of fraud!
Which is why Adam's assertion is so pertinent. Once we have committed fraud, we have also committed to covering it up. Fraud practitioners know that a strong signal of fraud is hiding the results, a desire to hide errors.
Fraud practitioners also know that fraud is a trap; once a small fraud is committed, we have to commit another and another, bigger and bigger. And each becomes easier, mentally speaking, than the first.
The security industry is caught in the trap of fraud: we are perpetually paying for the material false statements of our past, with bigger and bigger frauds. Where does this end?
What is interesting is that Adam has now taken on the meta-question of why we didn't do a better job. Readers here will sympathise. Read his essay about how the need for change is painful, both directly and broadly:
At a human level, change involves loss and the new. When we lose something, we go through a process, which often includes shock, anger, denial, bargaining and acceptance. The new often involves questions of trying to understand the new, understanding how we fit into it, if our skills and habits will adapt well or poorly, and if we will profit or lose from it.
Adam closes with a plea for help on disclosure:
I'm trying to draw out all of the reasons why people are opposed to change in disclosure habits, so we can overcome them.
I am not exactly opposed but curious, as I see the issues differently. So in a sense, deferring for a moment a brief comment on the essay, here are a few comments on disclosure.
Schechter & Smith use an approach of modelling risks and rewards from the attacker's point of view which further supports the utility of sharing information by victims:
Sharing of information is also key to keeping marginal risk high. If the body of knowledge of each member of the defense grows with the number of targets attacked, so will the marginal risk of attack. If organizations do not share information, the body of knowledge of each one will be constant and will not affect marginal risk. Stuart E. Schechter and Michael D. Smith "How Much Security is Enough to Stop a Thief?", Financial Cryptography 2003 LNCS Springer-Verlag.
Yet, to share raises costs for the sharer, and the benefits do not accrue to the sharer. This is a prisoner's dilemma for security: there may well be a higher payoff if all victims share their experiences, yet those that keep mum still benefit from others' sharing while bearing none of the costs. As all potential sharers are joined in an equilibrium of secrecy, little sharing of security information is seen, and this is rational. We return to this equilibrium later.
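A toy payoff matrix (numbers invented purely for illustration) shows the shape of that equilibrium:

    # Breach-disclosure as a prisoner's dilemma. Each firm chooses to
    # Share breach data or stay Silent; payoffs are (firm A, firm B).
    payoffs = {
        ("share",  "share"):  (3, 3),   # everyone learns, defence improves
        ("share",  "silent"): (0, 4),   # sharer bears cost, free-rider gains
        ("silent", "share"):  (4, 0),
        ("silent", "silent"): (1, 1),   # the equilibrium of secrecy
    }

    # Silent strictly dominates Share for each firm (4 > 3 and 1 > 0),
    # so rational firms stay silent even though (share, share) pays more.
    for b_choice in ("share", "silent"):
        a_if_share  = payoffs[("share",  b_choice)][0]
        a_if_silent = payoffs[("silent", b_choice)][0]
        assert a_if_silent > a_if_share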
(OK, so some explanation. At what point do we forget the nonsense touted in the press and move on to a real solution where the lost data doesn't mean a compromise? IOW, "we were given the task of securing all retail transactions......")
So, while I think that there is a lot to be said about disclosure, I think it is also a limited story. I personally prefer some critical thought -- if I can propose half a dozen solutions to some poor schmuck company's problems, why can't they?
And it is to that issue that Adam's essay really speaks. Why can't we change? What's wrong with us?
A slight debate has erupted over Adam's presentation "Security Breaches are good for you", which makes it a success. Of course, Adam means good for the rest of us, not the victims. One can consider two classes of beneficiaries of breach information:
1. the victims themselves, who need to be told what was lost;
2. the rest of us, who can learn from the event.
SB1386 and copies address the first case, but what about the second case? Kenneth F. Belva simply doesn't agree with it. He blogs that we can't release all this information for security reasons; we don't release security-sensitive info because it gives an attacker an advantage.
This I claim is a facade. I believe that when people say "cannot release that for security sensitive reasons," there are generally other, real reasons. Primarily, this is about the safety of the people doing the job, in that they can do their job much more happily if they can avoid the scrutiny that disclosure forces. Adam says (PDF):
So, why can’t we share? There are four reasons usually given: liability, embarrassment, customers would flee, and we might lose our jobs.
I quote Dan Geer as saying “No CSO on Wall Street has the authority to share even firewall logs because no General Counsel can see the point.”
Beyond all the excuses, I think there is a much bigger danger: that security people do their jobs much more badly when protected by secrecy. Disclosure forces all of us to defend our positions and to be judged by our peers; trotting out the "secrecy" argument is IMO almost always a cover for bad work, not a real security risk.
That's not to say that any particular professional is bad, but that the profession as a whole, as a guild, promotes bad professional behaviour, behind the curtain of secrecy. That said, there is an element of truth in the issue. We don't disclose the root password, so where is the line drawn? Kenneth says:
Just as we have an understanding of responsible disclosure now for technical information security vulnerabilities, we need the same for breach disclosure.
If not through a centralized (not necessarily government) body Adam, what do you propose that would allow for better, more accurate and confidential disclosure that does not leak sensitive information?
Disclosures are not possible if done via some "controlled" channel. As Adam points out and as economic theory has it (since Stiglitz or Stigler, I forget which), all closed channels become controlled, stifled and then neutered (and what happens after is sometimes even worse). This is as true in the infosec world as any other.
So maybe we need a law to force public disclosure? Right?
I call it wrong. SB1386 was a big win because it forced 1. above. This was in no small part because the issue was simple: just write to the victims and tell them what was lost. The harm was clear, the victims were directly known, and they just needed to be told.
Part 2 is much harder. I doubt a law can do it, and I further doubt a law can do any good there at all. There is no victim being targetted, there is no simple harm, and there is indeed no direct or contractual relationship here at all.
We simply have no guidance to make such a law. SB1386 was a lucky break, it spotted a precise niche, and filled it. We should *not* take SB1386 as anything but an extraordinarily lucky break, and we should not expect to repeat it.
So where does that leave us with respect to responsible disclosure? It will then fall to the companies to decide what to do ... and obviously it is far, far safer to say nothing than to say something. That doesn't make it right, just safer for those whose jobs are at risk.
I talk about solutions in silver bullets. Adam dares:
I am challenging everyone to face those fears, and work to overcome them. I believe that there's tremendous value waiting to be unlocked.
Lack of solid info on breaches is one reason it is so hard to defend against real risks; e.g., it is why phishing was able to develop unimpeded: nobody dared to talk about it when it happened to them, and anyone who did was poo-pooed by those who hadn't figured it out yet.
(Quick quiz -- what's your defence against MITB? is it (a) never heard about it, in which case you are a victim of the secrecy myth, (b) have one but can't say, in which case you are perpetuating unsafety through secrecy, or (c) something else?)
In the ongoing saga of "what is security?" and more importantly, "why is it such a crock?" Bruce Schneier weighs in with some ruminations on "feelings" or perceptions, leading to an investigation of psychology.
I think the perceptional face of security is a useful area to investigate, and the essay shines as a compendium of archetypal heuristics, backed up by experiments, and written for a security audience. These ideas can all be fed into our security thinking, not to be taken for granted, but rather as potential explanations to be further tested. Recommended reading, albeit very long...
I would however urge some caution. I claim that buyers and sellers do not know enough about security to make rational decisions; the essay suggests a perceptional deviation as a second uncertainty. Can we extrapolate strongly from these two biases?
As it is a draft requesting comment, here are three criticisms, which suggest that the introduction of the essay is unsustainable:
THE PSYCHOLOGY OF SECURITY -- DRAFT
Security is both a feeling and a reality. And they're not the same.
The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures.
Firstly, I'd suggest that "what security is" is not yet well defined, and has defied our efforts to come to terms with it. I say a bit about that in Pareto-secure but I'm only really looking at one singular aspect of why cryptographers are so focussed on no-risk security.
Secondly, both maths and feelings are approximations, not the reality. Maths is just another model, based on some numeric logic as opposed to intuition.
What one could better say is that security can be viewed through a perceptional lens, and it can be viewed through a mathematical lens, and we can probably show that the two views look entirely different. Why is this?
Neither is reality though, as both take limited facts and interpolate a rough approximation, and until we can define security, we can't even begin to understand how far from the true picture we are.
We can calculate how secure your home is from burglary, based on such factors as the crime rate in the neighborhood you live in and your door-locking habits. We can calculate how likely it is for you to be murdered, either on the streets by a stranger or in your home by a family member. Or how likely you are to be the victim of identity theft. Given a large enough set of statistics on criminal acts, it's not even hard; insurance companies do it all the time.
Thirdly, insurance is sold, not bought. Actuarial calculations do not measure security to the user but instead estimate risk and cost to the insurer, or more pertinently, insurer's profit. Yes, the approximation gets better for large numbers, but it is still an approximation of the very limited metric of profitability -- a single number -- not the reality of security.
What's more, these calculations cannot be used to measure security. The insurance company is very confident in its actuarial calculations because it is focussed on profit; for the purpose of this one result, large sets of statistics work fine, as do large margins (life insurance can pay out 50% to the sales agent...).
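A back-of-envelope sketch, with figures invented for illustration, of what the insurer actually computes:

    # The actuarial "calculation" is a one-number expected profit over a
    # large book of policies -- not a measure of any customer's security.
    n_policies  = 100_000
    p_burglary  = 0.02        # assumed annual claim probability
    avg_claim   = 4_000.0     # assumed average payout
    premium     = 120.0       # annual premium per policy

    expected_claims = n_policies * p_burglary * avg_claim   # 8,000,000
    revenue         = n_policies * premium                  # 12,000,000
    profit          = revenue - expected_claims             # 4,000,000

    # Large n makes the estimate tight (law of large numbers), but note
    # what it says about any one policyholder's security: nothing.
    print(f"expected profit: {profit:,.0f}")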
In contrast, security -- as the victim sees it -- is far harder to work out. Even if we stick to the mathematical treatment, risks and losses include factors that aren't amenable to measurement, nor accurate dollar figures. E.g., if an individual is a member of the local hard drugs distribution chain, not only might his risks go up, and his losses down (life expectancy is generally lowered in that profession) but also, how would we find out when and how to introduce this skewed factor into his security measurement?
While we can show that people can be sold insurance and security products, we can also show that the security they gain from those products has no particular closeness to the losses they incur (if it was close, then there would be more "insurance jobs").
We can also calculate how much more secure a burglar alarm will make your home, or how well a credit freeze will protect you from identity theft. Again, given enough data, it's easy.
It's easy to calculate some upper and lower bounds for a product, but again these calculations are strictly limited to the purpose of actuarial cover, or insurer's profit.
They say little about the security of the user, and they probably play as much to the feelings of the buyer as to any mathematical model of the seller's risks and losses.
It's my contention that these irrational trade-offs can be explained by psychology.
I think that's a tough call, on several levels. Here's some contrary plays:
If we put all those things together, a complex pattern emerges. I look at a lot of these elements in the market for silver bullets, and, leaning heavily on the Spencerian theory of symmetrically insufficient information, I show that best practices may emerge naturally as a response to the costs of public exposure, and not the needs of security. Some of the experiments listed (24, 38) may actually augment that pattern, but I wouldn't go so far as to say that the heuristics described are the dominating factor.
Still, in conclusion, irrationality is a sort of code word in economics for "our models don't explain it, yet." I've read the (original) literature of insufficient information (Spence, Akerlof, etc) and found a lot of good explanations. Psychology is probably an equally rewarding place to look, and I found the rest of the article very interesting.
The gut-wrenching fight with who we want to be continues over at Mozilla. In a status update, Mitchell posts on the evolving Principles:
SPECIFICITY: There were a set of comments about the Manifesto not being specific, either about the nature of the goal or the steps we might take to get there. This was intentional on my part. With regard to specificity of the goals, I’m not sure we know now. And I’m pretty sure that interpretations will change. For example, what does security mean? We know it means security vulnerabilities -- problems with code that allow malicious actors too much room to move. More recently, “phishing” has become a serious problem. It’s not a classic code vulnerability though, so any definition of security that focused solely on code would be too limited. There will undoubtedly be other types of problems that will come up. So if we try to define “security” today it may be incomplete and it will undoubtedly be incomplete in the future.
My hope is that the Manifesto sets a stake in the ground that the broad topic of security is fundamental. Then a variety of groups can engage in more detailed discussions of what this means for them and what steps they will take. The Mozilla Foundation will clearly focus on products, technology and user interaction. Other groups can set out other specific tasks that contribute to improved Internet security.
That's just what I wanted to say ...
The recognition that security has failed has tracked the rise of phishing, growing through the years 2003-2004, but has been reinforced by data breaches, DDOS, botnets, etc. Those who saw the widening dichotomy between actual breaches and the peacock strutting of the security industry started to ask questions. Is there one security or two? Is it possible to be secure and not secure at the same time? Is there any point in talking about security at all, or is it all risk management? Who should we be securing, and what are we paying for?
Certainly, the discussion can only start with a fairly brave admission: We certainly stuffed that up, didn't we! Until we get over that barrier, until we recognise the awful gut-twisting fact that as a discipline, we said we could do it and we didn't, we won't ever be able to address the problems.
Mozilla makes the point above; has your organisation declared its security failures to the public lately?
I tried typing in 123 456 789 and it told me to p**s off ... drats, it's clever!
But meanwhile, I spotted down at the bottom that there is a "Verisign secured" seal. Oh, that means something, doesn't it? So I clicked it. It took me to Verisign. (Don't believe me ... click on the seal yourself ... PLEASE ... and spot the <ahem> slight flaw :)
But anyways, ==> Verisign <== then says:
1/2/2007 23:02 www.stolenidsearch.com uses VeriSign services as follows:
SITE NAME: www.stolenidsearch.com
STATUS: Valid (13-Jan-2007 to 13-Jan-2008)
ORGANIZATION: TRUSTEDID INC
Encrypted Data Transmission This Web site can secure your private information using a VeriSign SSL Certificate. Information exchanged with any address beginning with https is encrypted using SSL before transmission.
Identity Verified TRUSTEDID INC has been verified as the owner or operator of the Web site located at www.stolenidsearch.com. Official records confirm TRUSTEDID INC as a valid business.
For your best security while visiting sites, always make sure the address of the visited site matches the address you are expecting to see. Make sure that the URL of this page begins with "https://seal.verisign.com"
>>REPORT SEAL MISUSE
(highlighting the interesting bit there ...)
So, there we have it. Verisign says that TrustedId Inc, d.b.a. "StolenIdSearch" are a valid business. If they misuse your SSN, go after them.
What was the need for EV then, again?
Addendum: The site responds to criticism.