July 23, 2014

on trust, Trust, trusted, trustworthy and other words of power

What follows is the clearest exposition of the doublethink surrounding the word 'trust' that I've seen so far. This post by Jerry Leichter on the Cryptography list doesn't actually solve the definitional issue, but it does map out the minefield nicely. Trustworthy?

On Jul 20, 2014, at 1:16 PM, Miles Fidelman <...> wrote:
>> On 19/07/2014 20:26 pm, Dave Horsfall wrote:
>>>
>>> A trustworthy system is one that you *can* trust;
>>> a trusted system is one that you *have* to trust.
>>
> Well, if we change the words a little, the government
> world has always made the distinction between:
> - certification (tested), and,
> - accreditation (formally approved)

The words really are the problem. While "trustworthy" is pretty unambiguous, "trusted" is widely used to mean two different things: We've placed trust in it in the past (and continue to do so), for whatever reasons; or as a synonym for trustworthy. The ambiguity is present even in English, and grows from the inherent difficulty of knowing whether trust is properly placed: "He's a trusted friend" (i.e., he's trustworthy); "I was devastated when my trusted friend cheated me" (I guess he was never trustworthy to begin with).

In security lingo, we use "trusted system" as a noun phrase - one that was unlikely to arise in earlier discourse - with the *meaning* that the system is trustworthy.

Bruce Schneier has quoted a definition from some contact in the spook world: A trusted system (or, presumably, person) is one that can break your security. What's interesting about this definition is that it's like an operational definition in physics: It completely removes elements about belief and certification and motivation and focuses solely on capability. This is an essential aspect that we don't usually capture.

When normal English words fail to capture technical distinctions adequately, the typical response is to develop a technical vocabulary that *does* capture the distinctions. Sometimes the technical vocabulary simply re-purposes common existing English words; sometimes it either makes up its own words, or uses obscure real words - or perhaps words from a different language. The former leads to no end of problems for those who are not in the field - consider "work" or "energy" in physics. The latter causes those not in the field to believe those in it are being deliberately obscurantist. But for those actually in the field, once consensus is reached, either approach works fine.

The security field is one where precise definitions are *essential*. Often, the hardest part in developing some particular secure property is pinning down precisely what the property *is*! We haven't done that for the notions surrounding "trust", where, to summarize, we have at least three:

1. A property of a sub-system that a containing system assumes as part of its design process ("trusted");
2. A property the sub-system *actually provides* ("trustworthy");
3. A property of a sub-system which, if not attained, causes actual security problems in the containing system (spook definition of "trusted").

As far as I can see, none of these imply any of the others. The distinction between 1 and 3 roughly parallels a distinction in software engineering between problems in the way code is written, and problems that can actually cause externally visible failures. BTW, the software engineering community hasn't quite settled on distinct technical words for these either - bugs versus faults versus errors versus latent faults versus whatever. To this day, careful papers will define these terms up front, since everyone uses them differently.

-- Jerry

Posted by iang at 05:05 AM | Comments (0) | TrackBack

May 25, 2014

How much damage does one hacker do? FBI provides some estimates.

John Young points to some information on a conviction settlement for a hacker caught participating in LulzSec, a term the FBI explains as:

“Lulz” is shorthand for a common abbreviation used in Internet communications – LOL – or “laughing out loud.” As explained on LulzSec’s website, LulzSec.com, the group’s unofficial motto was “Laughing at your security since 2011.”

Aside from the human interest aspects of the story [0], the FBI calculates some damages (blue page 8, edited to drop non-damages estimates):

In the PSR, Probation correctly calculates that the defendant’s base offense level is 7 pursuant to U.S.S.G. §2B1.1(a)(1) and correctly applies a 22-level enhancement in light of a loss amount between $20 million and $50 million 4; a 6-level enhancement given that the offense involved more than 250 victims; ...

_____
4 This loss figure includes damages caused not only by hacks in which Monsegur personally and directly participated, but also damages from hacks perpetrated by Monsegur’s co-conspirators in which he did not directly participate. Monsegur’s actions personally and directly caused between $1,000,000 and $2,500,000 in damages. ...


That last range of $1m to $2.5m in damages is interesting, and can be contrasted with his 10 direct victims (listed on blue pages 5-6), exploited over a 1-year period.

One could surmise that this isn't an optimal solution. E.g., hypothetically, if the 10 victims had each paid a tenth of their losses, they'd have raised a salary of $100-250k (a tenth of the $1m-2.5m total), put the perp to productive work, and we'd all be in net profit [1].

Obviously this solution didn't emerge efficiently in society, due to information problems. LulzEconSec, anyone?

__________
[0] this post originally appeared on the Cryptography list.
[1] Additional comments on the 'profit' side, blue page 13:

"Although difficult to quantify, it is likely that Monsegur’s actions prevented at least millions of dollars in loss to these victims."
and blue page 16:
"Through Monsegur’s cooperation, the FBI was able to thwart or mitigate at least 300 separate hacks. The amount of loss prevented by Monsegur’s actions is difficult to fully quantify, but even a conservative estimate would yield a loss prevention figure in the millions of dollars."

Posted by iang at 07:19 AM | Comments (3) | TrackBack

May 19, 2014

How to make scientifically verifiable randomness to generate EC curves -- the Hamlet variation on CAcert's root ceremony

It occurs to me that we could modify the CAcert process of verifiably creating random seeds to make it also scientifically verifiable, after the event. (See last post if this makes no sense.)

Instead of bringing a non-deterministic scheme, each participant could bring a deterministic scheme which is hitherto secret. E.g., instead of me using my laptop's webcam, I could use a Gutenberg copy of Hamlet, which I first declare at the event itself.

Another participant could use Treasure Island, a third could use Cien años de soledad.

As nobody knew what each other participant was going to declare, and the honest players amongst us each made a best-efforts guess at a new, statically consistent tome, we can be sure that if there is at least one honest, non-conspiring party, then the result is random.

And now verifiable post facto because we know the inputs.
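As a sketch of the combination step (my own construction, not necessarily the actual CAcert ceremony procedure): hash each declared text separately, then hash the digests together, so that any single honest, unpredictable input randomises the final seed.

    import hashlib

    def combined_seed(texts):
        # Hash each declared tome separately, sort the digests so the
        # seed doesn't depend on declaration order, then hash them all
        # together into one combined seed.
        digests = sorted(hashlib.sha256(t.encode()).digest() for t in texts)
        h = hashlib.sha256()
        for d in digests:
            h.update(d)
        return h.hexdigest()

    # Hypothetical declarations made at the event:
    print(combined_seed(["HAMLET ... full Gutenberg text ...",
                         "TREASURE ISLAND ... full text ...",
                         "CIEN AÑOS DE SOLEDAD ... full text ..."]))

Anyone can re-run the computation after the event, because the inputs are now public.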

Does this work? Does it meet all the requirements? I'm not sure because I haven't had time to think about it. Thoughts?

Posted by iang at 10:19 AM | Comments (1) | TrackBack

May 02, 2014

How many SSL MITMs are there? Here's a number: 0.2% !!!

Whenever we get into the SSL debate, there's always an aspect of glass-half-full or glass-half-empty. Is the SSL system doing the job, and we're safe? Or were we all safe anyway, and it wasn't needed? Or?

Here's a paper that suggests the third choice: that maybe SSL isn't doing the job at all, to a disturbingly high number: 0.2% of connections are MITMed.

Analyzing Forged SSL Certificates in the Wild -- Huang, Rice, Ellingsen, Jackson -- https://www.linshunghuang.com/papers/mitm.pdf

Abstract—The SSL man-in-the-middle attack uses forged SSL certificates to intercept encrypted connections between clients and servers. However, due to a lack of reliable indicators, it is still unclear how commonplace these attacks occur in the wild. In this work, we have designed and implemented a method to detect the occurrence of SSL man-in-the-middle attack on a top global website, Facebook. Over 3 million real-world SSL connections to this website were analyzed. Our results indicate that 0.2% of the SSL connections analyzed were tampered with forged SSL certificates, most of them related to antivirus software and corporate-scale content filters. We have also identified some SSL connections intercepted by malware. Limitations of the method and possible defenses to such attacks are also discussed.

Now, that may mean only a few, statistically, but if we think about the dangers of MITMing, it's always been the case that MITMing would only be used under fairly narrow circumstances, because it can in theory be spotted. Therefore this is quite a high number; it means that MITMing is basically quite easy to do.

After eliminating known causes such as anti-virus scanning, corporate inspection and so forth, the number drops by an order of magnitude. But that still leaves some 500-1000 suspicious MITMs spotted in a sample of 3.5m.
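To put rough numbers on it: 0.2% of 3.5 million connections is about 7,000 forged-certificate connections; discounting the benign causes by an order of magnitude gives the 500-1000 above, or very roughly 1 connection in every 5,000 MITMed by parties unknown.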

H/t to Jason and Mikko's tweet.

Posted by iang at 06:59 PM | Comments (1) | TrackBack

April 06, 2014

The evil of cryptographic choice (2) -- how your Ps and Qs were mined by the NSA

One of the excuses touted for the Dual_EC debacle was that the magical P & Q numbers that were chosen by secret process were supposed to be defaults. Anyone was at liberty to change them.

Epic fail! It turns out that this might have been just that, a liberty, a hope, a dream. From last week's paper on attacking Dual_EC:

"We implemented each of the attacks against TLS libraries described above to validate that they work as described. Since we do not know the relationship between the NIST- specified points P and Q, we generated our own point Q′ by first generating a random value e ←R {0,1,...,n−1} where n is the order of P, and set Q′ = eP. This gives our trapdoor value d ≡ e−1 (mod n) such that dQ′ = P. (Our random e and its corresponding d are given in the Appendix.) We then modified each of the libraries to use our point Q′ and captured network traces using the libraries. We ran our attacks against these traces to simulate a passive network attacker.

In the new paper that measures how hard it was to crack open TLS when corrupted by Dual_EC, the authors changed the Qs to match the P delivered, so as to attack the code. Each of the four libraries they had was in binary form, and it appears that each had to be hard-modified in binary in order to mind its own Ps and Qs.

So did (a) the library implementors forget that issue? or (b) NIST/FIPS in its approval process fail to stress the need for users to mind their Ps and Qs? or (c) the NSA knew all along that this would be a fixed quantity in every library, derived from the standard, which was pre-derived from their exhaustive internal search for a special friendly pair? In other words:

"We would like to stress that anybody who knows the back door for the NIST-specified points can run the same attack on the fielded BSAFE and SChannel implementations without reverse engineering.

Defaults, options, choice of any form have always been known to be bad for users, great for attackers and a downright nuisance for developers. Here, the libraries did the right thing by eliminating the chance for users to change those numbers. Unfortunately they, NIST and all points thereafter took the originals without question. Doh!

Posted by iang at 07:32 PM | Comments (0) | TrackBack

April 01, 2014

The IETF's Security Area post-NSA - what is the systemic problem?

In the light of yesterday's newly revealed attack by the NSA on Internet standards, what are the systemic problems here, if any?

I think we can question the way the IETF is approaching security. It has taken a lot of thinking on my part to identify the flaw(s), and not a few rants, with many aggressive defences and counterattacks from defenders of the faith. Where my thinking is today is this:

First the good news. The IETF's Working Group concept is far better at developing general standards than anything we've seen so far (by this I mean ISO, national committees, industry cartels and whathaveyou). However, it still suffers from two shortfalls.

1. the Working Group system is more or less easily captured by the players with the largest budget. If one views standards as the property of the largest players, then this is not a problem. If OTOH one views the Internet as a shared resource of billions, designed to serve those billions back for their efforts, the WG method is a recipe for disenfranchisement. Perhaps apropos, spotted on the TLS list by Peter Gutmann:

Documenting use cases is an unnecessary distraction from doing actual work. You'll note that our charter does not say "enumerate applications that want to use TLS".

I think reasonable people can debate and disagree on the question of whether the WG model disenfranchises the users, because even though a company can out-manoeuvre the open Internet through sheer persistence and money, we can at least see it happen. In this, the IETF stands in violent sunlight compared to that travesty of mouldy dark closets, CABForum, which shut users out while industry insiders prepared the base documents in secrecy.

I'll take the IETF any day, except when...

2. the Working Group system is less able to defend itself from a byzantine attack. By this I mean the security concept of an attack from someone who doesn't follow the rules, and breaks them in ways meant to break your model and assumptions. We can suspect byzantine behaviour in the fingered ID:

The United States Department of Defense has requested a TLS mode which allows the use of longer public randomness values for use with high security level cipher suites like those specified in Suite B [I-D.rescorla-tls-suiteb]. The rationale for this as stated by DoD is that the public randomness for each side should be at least twice as long as the security level for cryptographic parity, which makes the 224 bits of randomness provided by the current TLS random values insufficient.

Assuming the story as told so far, the US DoD should have added "and our friends at the NSA asked us to do this so they could crack your infected TLS wide open in real time."

Such byzantine behaviour maybe isn't a problem when the industry players are, for example, subject to open observation, as best behaviour can be forced, and honesty at some level is necessary for long-term reputation. But it likely is a problem where the attacker is accustomed to that other world: lies, deception, fraud, extortion or any of a number of other tricks which are the tools of trade of the spies.

Which points directly at the NSA. Spooks being spooks, every spy novel you've ever read will attest to the deception and rule breaking. So where is this a problem? Well, only in the one area they are interested in: security.

Which is irony itself, as security is the field where byzantine behaviour is our meat and drink. Would the Working Group concept pass muster in an IETF security WG? Whether it does or not depends on whether you think it can defend against the byzantine attack. Likely it will pass by fiat, because of the loyalty of those involved; I have been one of those WG stalwarts for a period, so I do see the dilemma. But in the cold hard light of sunlight, who is comfortable supporting a WG that is assisted by NSA employees who will apply all available SIGINT and HUMINT capabilities?

Can we agree or disagree on this? Is there room for reasonable debate amongst peers? I refer you now to these words:

On September 5, 2013, the New York Times [18], the Guardian [2] and ProPublica [12] reported the existence of a secret National Security Agency SIGINT Enabling Project with the mission to “actively [engage] the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products’ designs.” The revealed source documents describe a US $250 million/year program designed to “make [systems] exploitable through SIGINT collection” by inserting vulnerabilities, collecting target network data, and influencing policies, standards and specifications for commercial public key technologies. Named targets include protocols for “TLS/SSL, https (e.g. webmail), SSH, encrypted chat, VPNs and encrypted VOIP.”
The documents also make specific reference to a set of pseudorandom number generator (PRNG) algorithms adopted as part of the National Institute of Standards and Technology (NIST) Special Publication 800-90 [17] in 2006, and also standardized as part of ISO 18031 [11]. These standards include an algorithm called the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC). As a result of these revelations, NIST reopened the public comment period for SP 800-90.

And as previously written here: the NSA has conducted a long-term programme to breach the standards-based crypto of the net.

As evidence of this claim, we now have *two attacks*, being clear attempts to trash the security of TLS and friends, and we have their own admission of intent to breach. In their own words. There is no shortage of circumstantial evidence that NSA people have pushed, steered and nudged the WGs into making bad decisions.

I therefore suggest we have the evidence to take to a jury. Obviously we won't be allowed to do that, so we have to do the next best thing: use our collective wisdom and make the call in the public court of Internet opinion.

My vote is -- guilty.

One single piece of evidence wasn't enough. Two was enough to believe, but alternate explanations sounded plausible to some. But we now have three solid bodies of evidence. Redundancy. Triangulation. Conclusion. Guilty.

Where it leaves us is in difficulties. We can try to avoid all this stuff by, e.g., avoiding American crypto, but it is a bit broader than that. Yes, they attacked and broke some elements of American crypto (and you know what I'm expecting to fall next). But they also broke the standards process, and that had even more effect on the world.

It has to be said that the IETF security area is now under a cloud. Not only do they need to analyse things back in time to see where it went wrong, but they also need some concept to stop it happening in the future.

The first step however is to actually see the clouds, and admit that rain might be coming soon. The security AD may live in interesting times; can I offer my umbrella?

Posted by iang at 11:56 PM | Comments (0) | TrackBack

March 31, 2014

NSA caught again -- deliberate weakening of TLS revealed!?

In a scandal that now earns that legal term of art, "slam-dunk", there is news of a new weakness introduced into the TLS suite by the NSA:

We also discovered evidence of the implementation in the RSA BSAFE products of a non-standard TLS extension called “Extended Random.” This extension, co-written at the request of the National Security Agency, allows a client to request longer TLS random nonces from the server, a feature that, if enabled, would speed up the Dual EC attack by a factor of up to 65,000. In addition, the use of this extension allows for attacks on Dual EC instances configured with P-384 and P-521 elliptic curves, something that is not apparently possible in standard TLS.

This extension to TLS was introduced 3 distinct times through the open IETF Internet-Draft process, twice by an NSA employee together with a well-known TLS specialist, and once by another. The way the extension works is that it increases the quantity of random numbers fed into the cleartext negotiation phase of the protocol. If the attacker has a heads-up on those random numbers, that makes his task of divining the state of the PRNG a lot easier. Indeed, the extension definition states more or less exactly that:

4.1. Threats to TLS

When this extension is in use it increases the amount of data that an attacker can inject into the PRF. This potentially would allow an attacker who had partially compromised the PRF greater scope for influencing the output.
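Why would more cleartext randomness speed up the attack? Dual_EC emits the x-coordinate of an internal curve point with the top 16 bits truncated, so an attacker holding the trapdoor still has to search the missing bits to recover the internal state. Extra contiguous output in the handshake lets each candidate state be confirmed or discarded immediately, which is presumably where the "factor of up to 65,000" comes from: 2^16 = 65,536.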

The use of Dual_EC, the previously fingered dodgy standard, makes this possible. Which gives us two compromises of the standards process that, when combined, magically work together.

Our analysis strongly suggests that, from an attacker’s perspective, backdooring a PRNG should be combined not merely with influencing implementations to use the PRNG but also with influencing other details that secretly improve the exploitability of the PRNG.

Red faces all round.

Posted by iang at 06:12 PM | Comments (0) | TrackBack

March 15, 2014

Update on password management -- how to choose good ones

Spotted in the Cryptogram is something called "the Schneier Method."

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like "This little piggy went to market" might become "tlpWENT2m". That nine-character password won't be in anyone's dictionary. Of course, don't use this one, because I've written about it. Choose your own sentence -- something personal.

Here are some examples:

WIw7,mstmsritt... When I was seven, my sister threw my stuffed rabbit in the toilet.

Wow...doestcst Wow, does that couch smell terrible.

Ltime@go-inag~faaa! Long time ago in a galaxy not far away at all.

uTVM,TPw55:utvm,tpwstillsecure Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks to modify that sentence into a password to create a lengthy password.
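As a toy illustration of the transform (my own sketch, not Schneier's): take the first letter of each word and mix in a couple of ad hoc substitutions. The catch, of course, is that any fixed mechanical rule ends up in the crackers' dictionaries too, which is why the personal, idiosyncratic tricks matter.

    def sentence_to_password(sentence):
        # First letter of each word, plus a few ad hoc substitutions.
        # A *fixed* rule like this is itself guessable -- personalise it.
        subs = {"to": "2", "for": "4", "and": "&"}
        return "".join(subs.get(w.lower(), w[0]) for w in sentence.split())

    print(sentence_to_password("This little piggy went to market"))  # Tlpw2m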

This is something which I've also recently taken to using more and more, but I still *write passwords down*.

This isn't a complete solution, as we still have various threats such as losing the paper, forgetting the phrase, or being Miranda'd as we cross the border.

The task here is to evolve to a system where we are reducing our risks, not increasing them. On the whole we need to improve our password-creation ability quite dramatically if password crunching is a threat to us personally, and that seems to be the case as more and more sites fall to the NSA-preferred syndrome of systemic security ineptness.

Posted by iang at 08:25 AM | Comments (0) | TrackBack

February 10, 2014

Bitcoin Verification Latency -- MtGox hit by market timing attack, squeezed between the water of impatience and the rock of transactional atomicity

Fresh on the heels of our release of "Bitcoin Verification Latency -- The Achilles Heel for Time Sensitive Transactions" it seems that Mt.Gox has been hit by exactly that - a market timing attack based on latency. In their own words:

Non-technical Explanation:

A bug in the bitcoin software makes it possible for someone to use the Bitcoin network to alter transaction details to make it seem like a sending of bitcoins to a bitcoin wallet did not occur when in fact it did occur. Since the transaction appears as if it has not proceeded correctly, the bitcoins may be resent. MtGox is working with the Bitcoin core development team and others to mitigate this issue.

Technical Explanation:

Bitcoin transactions are subject to a design issue that has been largely ignored, while known to at least a part of the Bitcoin core developers and mentioned on the BitcoinTalk forums. This defect, known as "transaction malleability" makes it possible for a third party to alter the hash of any freshly issued transaction without invalidating the signature, hence resulting in a similar transaction under a different hash. Of course only one of the two transactions can be validated. However, if the party who altered the transaction is fast enough, for example with a direct connection to different mining pools, or has even a small amount of mining power, it can easily cause the transaction hash alteration to be committed to the blockchain.

The bitcoin api "sendtoaddress" broadly used to send bitcoins to a given bitcoin address will return a transaction hash as a way to track the transaction's insertion in the blockchain.
Most wallet and exchange services will keep a record of this said hash in order to be able to respond to users should they inquire about their transaction. It is likely that these services will assume the transaction was not sent if it doesn't appear in the blockchain with the original hash and have currently no means to recognize the alternative transactions as theirs in an efficient way.

This means that an individual could request bitcoins from an exchange or wallet service, alter the resulting transaction's hash before inclusion in the blockchain, then contact the issuing service while claiming the transaction did not proceed. If the alteration fails, the user can simply send the bitcoins back and try again until successful.

Which all means what? Well, it seems that while waiting on a transaction to pop out of the block chain, one can rely on a token to track it. And so can one's counterparty. Except, this token was not exactly constructed on a security basis, and the initiator of the transaction can break it, leading to two naive views of the transaction. Which leads to some game-playing.

Let's be very clear here. There are three components to this break: Latency, impatience, and a bad token. Latency is the underlying physical problem, also known as the coordination problem or the two-generals problem. At a deeper level, as latency on a network is a physical certainty limited by the speed of light, there is always an open window of opportunity for trouble when two parties are trying to agree on anything.

In fast payment systems, that window isn't a problem for humans (as opposed to algos), as good payment systems clear in less than a second, sometimes known as real time. But not so in Bitcoin, where the latency runs from 5 minutes up to 120, depending on your assumptions, which leaves an unacceptable gap between the completion of the transaction and the users' expectations. Hence the second component: impatience.

The 'solution' to the settlement-impatience problem then is the hash token that substitutes as a final (triple entry) evidentiary receipt until the block-chain settles. This hash or token used in Bitcoin is broken, in that it is not cryptographically reliable as a token identifying the eventual settled payment.
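As a toy sketch of why (illustrative encoding, not real Bitcoin serialisation): ECDSA famously accepts both (r, s) and (r, n-s) as valid signatures over the same data, and Bitcoin computes the identifier over all the bytes, signature included.

    import hashlib

    def txid(tx_bytes):
        # Bitcoin identifies a transaction by double-SHA256 over *all*
        # of its bytes -- the malleable signature encoding included.
        return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).hexdigest()

    # Two encodings of the *same* payment; a third party can re-encode
    # the signature without knowing the key, and the "id" changes.
    original = b"inputs|outputs|sig(r,s)"
    mutated  = b"inputs|outputs|sig(r,n-s)"
    assert txid(original) != txid(mutated)

The payment is the same, the signature still verifies, but the token the sender recorded no longer matches what eventually settles.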

Obviously, the immediate solution is to fix the hash, which is what Mt.Gox is asking Bitcoin dev team to do. But this assumes that the solution is in fact a solution. It is not. It's a hack, and a dangerous one. Let's go back to the definition of payments, again assuming the latency of coordination.

A payment is initiated by the controller of an account. That payment is like a cheque (or check) that is sent out. It is then intermediated by the system. Which produces the transaction.

But as we all know with cheques, a controller can produce multiple cheques. So a cheque is more like a promise that can be broken. And as we all know with people, relying on the cheque alone isn't reliable enough in and of itself, so the system must resolve the abuses. That fundamental understanding in place, here's what Bitcoin Foundation's Gavin Andresen said about Mt.Gox:

The issues that Mt. Gox has been experiencing are due to an unfortunate interaction between Mt. Gox’s implementation of their highly customized wallet software, their customer support procedures, and their unpreparedness for transaction malleability, a technical detail that allows changes to the way transactions are identified.

Transaction malleability has been known about since 2011. In simplest of terms, it is a small window where transaction ID’s can be “renamed” before being confirmed in the blockchain. This is something that cannot be corrected overnight. Therefore, any company dealing with Bitcoin transactions and have coded their own wallet software should responsibly prepare for this possibility and include in their software a way to validate transaction ID’s. Otherwise, it can result in Bitcoin loss and headache for everyone involved.

Ah. Oops. So it is a known problem. So one could make a case that Mt.Gox should have dealt with it, as a known bug.

But note the language above... Transaction malleability? That is a contradiction in terms. A transaction isn't malleable; the very definition of a transaction is that it is atomic: it is, or it isn't. ACID, for those who recall the CS classes: Atomic, consistent, isolated, durable.

Very simply put, that which is put into the beginning of the block chain calculation cycle /is not a transaction/, whereas that which comes out is, assuming some handwavy number of 10-minute cycles, such as 6. Therefore, the identifier of which they speak cannot be a transaction identifier, by definition. It must be an identifier to ... something else!

What's happening here then is more likely a case of cognitive dissonance, leading to a regrettable and unintended deception. Read Mt.Gox's description above, again, and the reliance on the word becomes clearer. Users have learned to demand transactions because we techies taught them that transactions are reliable, by definition; Bitcoin provides the word but not the act.

So the first part of the fix is to change the words back to ones with reliable meanings. You can't simply undefine a term that has been known for 40 years, and expect the user community to follow.

(To be clear, I'm not suggesting what the terms should be. In my work, I simply call what goes in a 'Payment', and what comes out a 'Receipt'. The latter Receipt is equated to the transaction, and in my lesson on triple entry, I often end with a flourish: The Receipt is the Transaction. Which has more poetry if you've experienced transactional pain before, and you've read the whole thing. We all have our dreams :)

That still leaves the impatience problem.

Note that this will also affect any other crypto-currency using the same transaction scheme as Bitcoin.

Conclusion
To put things in perspective, it's important to remember that Bitcoin is a very new technology and still very much in its early stages. What MtGox and the Bitcoin community have experienced in the past year has been an incredible and exciting challenge, and there is still much to do to further improve.

When we did our early work in this, we recognised that the market timing attack comes from the implicit misunderstanding of how latency interferes with transactions, and how impatience interferes with both of them. So in our protocols, there is no 'token' that is available to track a pending transaction. This was a deliberate, early design decision, and indeed the servers still just dump and ignore anything they don't understand in order to force the clients away from leaning on unreliable crutches.

It's also the flip side of the triple-entry receipt -- its existence is the full evidence, hence, the receipt is the transaction. Once you have the receipt, you're golden, if not, you're in the mud.
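In sketch form (my construction, not a shipping protocol): the identifier is taken over the issuer's signed, settled record, so no third party can perturb it before it exists.

    import hashlib, hmac

    ISSUER_KEY = b"issuer-signing-key"   # stand-in for a real signing key

    def issue_receipt(payment):
        # Only the settled, issuer-authenticated record gets an id;
        # an HMAC stands in here for the issuer's digital signature.
        tag = hmac.new(ISSUER_KEY, payment, hashlib.sha256).digest()
        receipt = payment + b"|" + tag
        return receipt, hashlib.sha256(receipt).hexdigest()

Until the receipt exists, there is nothing to track; once it exists, its hash is stable. The receipt is the transaction.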

But Bitcoin had a rather extraordinary problem -- the distribution of its consensus on the transaction amongst any large group of nodes that wanted to play. Which inherently made transactional mechanics and latency issues blow out. This is a high price to pay, and only history is going to tell us whether the price is too high or affordable.

Posted by iang at 07:36 AM | Comments (1) | TrackBack

December 30, 2013

MITMs conducted by the NSA - 50% success rate

One of the complaints against the SSL obesity security model was that all the blabber of x.509/CAs was there to protect against the MITM (man-in-the-middle) attack. But where was this elusive beast?

Now we have evidence. In the recent Der Spiegel article about the NSA's hacking catalogue, it is laid out pretty comprehensively:

A Race Between Servers

Once TAO teams have gathered sufficient data on their targets' habits, they can shift into attack mode, programming the QUANTUM systems to perform this work in a largely automated way. If a data packet featuring the email address or cookie of a target passes through a cable or router monitored by the NSA, the system sounds the alarm. It determines what website the target person is trying to access and then activates one of the intelligence service's covert servers, known by the codename FOXACID.

This NSA server coerces the user into connecting to NSA covert systems rather than the intended sites. In the case of Belgacom engineers, instead of reaching the LinkedIn page they were actually trying to visit, they were also directed to FOXACID servers housed on NSA networks. Undetected by the user, the manipulated page transferred malware already custom tailored to match security holes on the target person's computer.

The technique can literally be a race between servers, one that is described in internal intelligence agency jargon with phrases like: "Wait for client to initiate new connection," "Shoot!" and "Hope to beat server-to-client response." Like any competition, at times the covert network's surveillance tools are "too slow to win the race." Often enough, though, they are effective. Implants with QUANTUMINSERT, especially when used in conjunction with LinkedIn, now have a success rate of over 50 percent, according to one internal document.

We've seen some indication that wireless is used for MITMs, but it is a difficult attack, as it requires physical presence. Phishing is an MITM, and has been in widespread use, but like the apocryphal line from Star Wars, these MITMs "aren't the droids you're looking for." Or so say the security experts behind web encryption standards.

This one is the droid we're looking for. A major victim is identified, serious assets are listed, secondary victims, procedures, codenames, the whole works. This is an automated, industrial-scale attack, something that breaches the normal conceptual boundaries of what an MITM looks like. We can no longer assume that MITMs are too expensive for mass use. Their economic applicability is presumably enabled by the shadow network the NSA operates, capable of attacking the nodes in ours:

The insert method and other variants of QUANTUM are closely linked to a shadow network operated by the NSA alongside the Internet, with its own, well-hidden infrastructure comprised of "covert" routers and servers. It appears the NSA also incorporates routers and servers from non-NSA networks into its covert network by infecting these networks with "implants" that then allow the government hackers to control the computers remotely.

Tantalising stuff for your inner geek! So it seems we now do need protection against the MITM, in the form of the NSA. For real work, and also for Facebook, LinkedIn and other entertainment sites, because of their universality as an attack vector. But will SSL provide that? In the short term and for easier cases, yes. But not completely, because most set-ups are ill-equipped to deal with attacks at an aggressive level. Until the browser starts mapping the cert to the identity expected, something we've been requesting for a decade now, it just won't provide much defence.

Certificate pinning is coming, but so is Christmas, DNSSec, IPv6 and my guaranteed anti-unicorn pill. By the time certificate pinning gets here, the NSA will likely have exfiltrated every important site's keys or bought off the right CA so it doesn't matter anyway.

One question remains: is this a risk? to us?

In the old Security World, we always said we don't consider the NSA a risk to us, because they never reveal the data (unless we're terrorists or drug dealers or commies or Iranians, in which case we know we're fair game).

That no longer holds true. The NSA shares data with every major agency in the USA that has an interest. They crossed the line that cannot be crossed, and the rot of ML seizure corruption, economic espionage and competitive intervention means that the NSA is now as much a threat to everyone as any other attacker.

Every business that has a competitor in the USA. Every department that has a negotiation with a federal agency. Every individual that has ever criticised the status quo on the Internet. We're all at risk, now.

Oh, to live in interesting times.

Posted by iang at 01:39 AM | Comments (0) | TrackBack

December 24, 2013

MITB defences of dual channel -- the end of a good run?

Back in 2006 Philipp Gühring penned the story of what had been discovered in European banks, in what has now become a landmark paper in banking security:

A new threat is emerging that attacks browsers by means of trojan horses. The new breed of trojan horses can modify the transactions on-the-fly, as they are formed in browsers, and still display the user's intended transaction to her. Structurally they are a man-in-the-middle attack between the user and the security mechanisms of the browser. Distinct from Phishing attacks which rely upon similar but fraudulent websites, these new attacks cannot be detected by the user at all, as they use real services, the user is correctly logged-in as normal, and there is no difference to be seen.

This was quite scary. The European banks had successfully migrated their user bases across to the online platform and were well on the way to reducing branch numbers. Fantastic cost reductions... But:

The WYSIWYG concept of the browser is successfully broken. No advanced authentication method (PIN, TAN, iTAN, Client certificates, Secure-ID, SmartCards, Class3 Readers, OTP, ...) can defend against these attacks, because the attacks are working on the transaction level, not on the authentication level. PKI and other security measures are simply bypassed, and are therefore rendered obsolete.

If they saw any reduction in the use of web banking, and the load shift back to branch, they were in a world of pain -- capacity had shrunk.

The conclusion that the European banks came to, once they'd got over their initial fears, was that phones could be used to do SMS transaction authorisations. This system was rolled out over the next couple of years, and it more or less took the edge off the MITB.

Now comes news from NSS Labs' Ken Baylor that the malware authors have developed two-channel attacks:

On the positive side, there has been little innovation in the functionality of mobile financial malware in the last 24 months, and the iOS platform appears secure; however, further analysis reveals that there are now multiple mobile malware suites capable of defeating bank multifactor authentication. With 99 percent of new mobile malware targeting Android, attacks on this platform are unprecedented both in their number and their impact. The lack of iOS malware is likely related to the low availability of iOS malware developers in the ex-Soviet Republic.

While banks remain slow to evolve their mobile security strategies, they will find the cyber criminals are several steps ahead of them.

Malware now tries to mount an attack on both channels. This occurs as follows:

Zeus and other MITB trojans have used social engineering to bypass this process. When a user on an infected PC authenticates to a banking site using SMS authentication, the user is greeted by a webinject, similar to Figure 1. The webinject requires the installation of new software on the user’s mobile device; this software is in fact malware.

ZitMo malware intercepts SMS TANs from the bank. Once greeted by the webinject on a Zeus-infected PC, the user enrolls by entering a phone number. A “security update” link is sent to the phone, and ZitMo installs when the link is clicked. Any bank SMS messages are redirected to a cyber criminal’s phone (all other SMS messages will be delivered as normal).

We knew at the time that this could occur, but it seemed unlikely. (I say 'we' to mean that I was mostly an observer; at the time I was in Vienna and was at the periphery of some of these groups. However, my lack of German made any contributions rather erratic.)

Unlikely because, on the one hand, it seemed an inordinate amount of complexity, and on the other, there wasn't enough of a target. What changed? The market has shifted hugely over to mobile use as opposed to web use. The Americans have been a bit slower, but now they're on a roll:

According to the Pew Research Center,1 mobile banking usage has jumped from 24 percent of the US adult population in April 2012 to 35 percent in May 2013. Banks have encouraged this move toward mobile banking. Most banks began offering mobile services with a simple redirect to a mobile site (with limited functionality) upon detection of smartphone HTTP headers; others created mobile apps with HTML wrappers for a better user experience and more functionality. As yet, only a few have built secure native apps for each platform.

Many banks believe that mobile devices are a secure secondary method of authentication. To authenticate the widest number of people who have phones (rather than just smartphones), many built their second factor authentication solutions on one of the most widely available (although insecure) protocols: short message services (SMS). As banks believed an SMS-authenticated customer was more secure than a PC-based user, they enabled the former to carry out riskier transactions. Realizing the rewards awaiting those able to circumvent SMS authentication, criminals quickly developed mobile malware.

So the convenient second channel of the phone has actually switched places: it's the primary channel, it's life, and the laptop is relegated to the at-office, at-home work slave. The model has been turned upside down, and in the things that fell out of the pockets, the security also took a tumble.

Closing with Ken Baylor's recommendations:

NSS Labs Recommendations
  • Understand and account for current mobile malware strategies when developing mobile banking apps.
  • Do not rely on SMS-based authentication; it has been thoroughly compromised.
  • Retire HTML wrapper mobile banking apps and replace them with secure native mobile apps where feasible. These apps should include a combination of hardened browsers, certificate-based identification, unique install keys, in-app encryption, geolocation, and device fingerprinting.


Hey! I guess that's the business I'm in now. We've successfully ported the 'mature' Ricardo platform to Android: No flaky browsers in sight, and our auth strategy is a real strategy, not that old certificate-based snake-oil. Obviously, public keys and in-app encryption.

Geolocation and device fingerprinting I am yet to add. But that's easy enough later on. I guess I should post on all this some time, if anyone is interested...

Posted by iang at 03:40 AM | Comments (1) | TrackBack

November 10, 2013

The NSA will shape the worldwide commercial cryptography market to make it more tractable to...

In the long-running saga of the Snowden revelations, another fact is confirmed by Ashkan Soltani. It's the last point on this slide, showing some nice redaction minimisation.

In words:

(U) The CCP expects this Project to accomplish the following in FY 2013:
  • ...
  • (TS//SI//NF) Shape the worldwide commercial cryptography marketplace to make it more tractable to advanced cryptanalytic capabilities being developed by NSA/CSS. [CCP_00090]

Confirmed: the NSA manipulates the commercial providers of cryptography to make it easier to crack their product. When I said, avoid American-influenced cryptography, I wasn't joking: the Consolidated Cryptologic Program (CCP) is consolidating access to your crypto.

Addition: John Young forwarded me the original documents (Guardian and NYT) and their blanket introduction makes it entirely clear:

(TS//SI//NF) The SIGINT Enabling Project actively engages the US and foreign IT industries to covertly influence and/or overtly leverage their commercial products' designs. These design changes make the systems in question exploitable through SIGINT collection (e.g., Endpoint, MidPoint, etc.) with foreknowledge of the modification. ....

Note also that the classification for the goal above differs in that it is NF -- No Foreigners -- whereas most of the other goals listed are REL TO USA, FVEY which means the goals can be shared with the Five Eyes Intelligence Community (USA, UK, Canada, Australia, New Zealand).

The more secret it is, the more clearly important is this goal. The only other goal with this level of secrecy was the one suggesting an actual target of sensitivity -- fair enough. More confirmation:

(U) Base resources in this project are used to:
  • (TS//SI//REL TO USA, FVEY) Insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets.
  • ...

and in goals 4, 5:

  • (TS//SI//REL TO USA, FVEY) Complete enabling for [XXXXXX] encryption chips used in Virtual Private Network and Web encryption devices. [CCP_00009].
  • (TS//SI//REL TO USA, FVEY) Make gains in enabling decryption and Computer Network Exploitation (CNE) access to fourth generation/Long Term Evolution (4GL/LTE) networks via enabling. [CCP_00009]

Obviously, we're interested in the [XXXXXX] above. But the big picture is complete: the NSA wants backdoor access to every chip used for encryption in VPNs, wireless routers and the cell network.

This is no small thing. There should be no doubt now that the NSA actively looks to seek backdoors in any interesting cryptographic tool. Therefore, the NSA is numbered amongst the threats, and so are your cryptographic providers, if they are within reach of the NSA.

Granted that other countries might behave the same way. But the NSA has the resources, the will, the market domination (consider Microsoft's CAPI, Java's Cryptography Engine, Cisco & Juniper on routing, FIPS effect on SSL, etc) and now the track record to make this a more serious threat.

Posted by iang at 06:48 AM | Comments (0) | TrackBack

November 01, 2013

NSA v. the geeks v. google -- a picture is worth a thousand cribs

Dave Cohen says: "I wonder if I have what it takes to make presentations at the NSA."

H/t to Jeroen. So I wonder if the Second World Cryptowars are really on?

Our Mission

To bring the world our unique end-to-end encrypted protocol and architecture that is the 'next-generation' of private and secure email. As founding partners of The Dark Mail Alliance, both Silent Circle and Lavabit will work to bring other members into the alliance, assist them in implementing the new protocol and jointly work to proliferate the worlds first end-to-end encrypted 'Email 3.0' throughout the world's email providers. Our goal is to open source the protocol and architecture and help others implement this new technology to address privacy concerns against surveillance and back door threats of any kind.

Could be. In the context of the new google sniffing revelations, it may now be clearer how the NSA is accessing all of the data of all of the majors. What do we think about the NSA? Some aren't happy, like Kenton Varda:

If the NSA is indeed tapping communications from Google's inter-datacenter links then they are almost certainly using the open source protobuf release (i.e. my code) to help interpret the data (since it's almost all in protobuf format). Fuck you, NSA.

What about google? Some outrage from the same source:

I had to admit I was shocked by one thing: I'm amazed Google is transmitting unencrypted data between datacenters.

is met with Varda's comment:

We're (I think) talking about Google-owned fiber between Google-owned endpoints, not shared with anyone, and definitely not the public internet. Physically tapping fiber without being detected is pretty difficult and a well-funded state-sponsored entity is probably the only one that could do it.

Ah. So google did some risk analysis and thought this was one they could pass on. Google's bad. A bit of research turns up this, from BlackHat in 2003:

  • Commercially available taps are readily available that produce an insertion loss of 3 dB which cost less than $1000!
  • Taps currently in use by state-sponsored military and intelligence organizations have insertion losses as low as 0.5 dB!
  • That document indicates 2001 published accounts of NSA tapping fibre, and I found somewhere a hint that it was first publicly revealed in 1999. I'm pretty sure we knew about the USS Jimmy Carter back then, although my memory fades...
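(For scale, my own arithmetic rather than the presentation's: a 3 dB insertion loss means roughly half the optical power goes missing, since 10^(-3/10) ≈ 0.50 remains; a 0.5 dB loss takes only about 11%, which is far harder to notice on a live link.)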

So maybe Google thought it hard to tap fibre, but actually we've known for over a decade that this is not so. Google's bad; they are indeed negligent. Jeroen van Gelderen says:

Correct me if I'm wrong but you promise that "[w]e restrict access to personal information to Google employees, contractors and agents who need to know that information in order to process it for us, and who are subject to strict contractual confidentiality obligations and may be disciplined or terminated if they fail to meet these obligations."

Indeed, as a matter of degree, I would say google are grossly negligent: the care that they show for physical security at their data centers, and all the care that they purport in other security matters, was clearly not shown once the fiber left their house.

Meanwhile, given the nature of the NSA's operations, some might ask (as Jeroen does):

Now that you have been caught being utterly negligent in protecting customer data, to the point of blatantly violating your own privacy policies, can you please tell us which of your senior security people were responsible for downplaying the risk of your thousands of miles of unsecured, easily accessible fibers being tapped? Have they been fired yet?

Chances of that one being answered are pretty slim. I can imagine Facebook being pretty relaxed about this. I can sort of see Apple dropping the ball on this. I'm not going to spare any time with Microsoft, who've been on the contract teat since time immemorial.

But google? That had security street cred? Time to call a spade a spade: if google are not analysing and revealing how they came to miss these known and easy threats, then how do we know they aren't conspirators?

Posted by iang at 01:34 PM | Comments (1) | TrackBack

October 31, 2013

Why the NSA loves the one-security-model HTTPS fanaticism of the Internet

Of all the things I have written about the traps in the HTTPS model for security, this one diagram lays it out so well, I'm left in the dirt. Presented with little comment:

The National Security Agency has secretly broken into the main communications links that connect Yahoo and Google data centers around the world, according to documents obtained from former NSA contractor Edward Snowden and interviews with knowledgeable officials.

By tapping those links, the agency has positioned itself to collect at will from hundreds of millions of user accounts, many of them belonging to Americans. The NSA does not keep everything it collects, but it keeps a lot.

According to a top-secret accounting dated Jan. 9, 2013, the NSA’s acquisitions directorate sends millions of records every day from internal Yahoo and Google networks to data warehouses at the agency’s headquarters at Fort Meade, Md. In the preceding 30 days, the report said, field collectors had processed and sent back 181,280,466 new records — including “metadata,” which would indicate who sent or received e-mails and when, as well as content such as text, audio and video.

...

Posted by iang at 04:54 AM | Comments (0) | TrackBack

October 29, 2013

Confirmed: the US DoJ will not put the bankers in jail, no matter how deep the fraud

I've often asked the question why no-one went to jail for the frauds of the financial crisis, and now the US government has answered it: they are complicit in the cover-up, which means that the financial rot has infected the Department of Justice as well. Bill Black writes about the recent Bank of America verdict:

The author of the most brilliantly comedic statement ever written about the crisis is Landon Thomas, Jr. He does not bury the lead. Everything worth reading is in the first sentence, and it should trigger belly laughs nationwide.

“Bank of America, one of the nation’s largest banks, was found liable on Wednesday of having sold defective mortgages, a jury decision that will be seen as a victory for the government in its aggressive effort to hold banks accountable for their role in the housing crisis.”

“The government,” as a statement of fact so indisputable that it requires neither citation nor reasoning, has been engaged in an “aggressive effort to hold banks accountable for their role in the housing crisis.” Yes, we have not seen such an aggressive effort since Captain Renault told Rick in the movie Casablanca that he was “shocked” to discover that there was gambling going on (just before being handed his gambling “winnings” which were really a bribe).

There are four clues in the sentence I quoted that indicate that the author knows he’s putting us on, but they are subtle. First, the case was a civil case. “The government’s” “aggressive effort to hold banks accountable” has produced – zero convictions of the elite Wall Street officers and banks whose frauds drove the crisis. Thomas, of course, knows this and his use of the word “aggressive” mocks the Department of Justice (DOJ) propaganda. The jurors found that BoA (through its officers) committed an orgy of fraud in order to enrich those officers. That is a criminal act. Prosecutors who are far from “aggressive” prosecute elite frauds criminally because they know it is essential to deter fraud and safeguard our financial system. The DOJ refused to prosecute the frauds led by senior BoA officers. The journalist’s riff is so funny because he portrays DOJ’s refusal to prosecute frauds led by elite BoA officers as “aggressive.” Show the NYT article to friends you have who are Brits and who claim that Americans are incapable of irony. The article’s lead sentence refutes that claim for all time.

The twin loan origination fraud epidemics (liar’s loans and appraisal fraud) and the epidemic of fraudulent sales of the fraudulently originated mortgages to the secondary market would each – separately – constitute the most destructive frauds in history. These three epidemics of accounting control fraud by loan originators hyper-inflated the real estate bubble and drove our financial crisis and the Great Recession. By way of contrast, the S&L debacle was less than 1/70 the magnitude of fraud and losses than the current crisis, yet we obtained over 1,000 felony convictions in cases DOJ designated as “major.” If DOJ is “aggressive” in this crisis what word would be necessary to describe our approach?

Read on for the details of how Bill Black forms his conclusion.

Posted by iang at 05:27 AM | Comments (0) | TrackBack

September 05, 2013

The OODA thought cycle of the security world is around a decade -- Silent Circle releases a Secure Chat that deletes messages

According to the record, I first started talking publicly about this problem, it seems, in 2004, 9 years ago, in a post exchange with Bill Stewart:

Bill Stewart wrote:
> I don't understand the threat model here. The usual models are ...
> - Recipient's Computer Disk automatically backed up to optical storage at night
> - No sense subpoenaing cyphertext when you can subpoena plaintext.

In terms of threats actually seen in the real world
leading to costs, etc, I would have thought that the
subpoena / civil / criminal case would be the largest.
...

In summary, one of the largest threats to real people out there is that things said in the haste of the moment come back to haunt them. So I wanted a crypto-chat system that caused the messages to disappear:

At 07:54 AM 9/17/2004, Ian Grigg wrote:
>Ahhhh, now if one could implement a message that self-
>destructed on the recipient's machine, that would
>start to improve security against the above outlined
>threat.

Think about the Arthur Andersen 'document destruction policy' memo, or as Bill goes on to list:

    That's been done, by "Disappearing Inc". www.disappearing.com/ says they're now owned by Omniva. ... The system obviously doesn't stop the recipient from screen-scraping the message (don't remember if it supported cut&paste), but it's designed for the Ollie North problem
    "What do you mean the email system backs up all messages
    on optical disk? I thought I deleted the evidence!"
    or the business equivalent (anti-trust suit wants all your correspondence from the last 17 years.)

    OK, so those are bad guys, and why would we want to sell our services to them? Hopefully, they are too small a market to make a profit (and if not, we're in more trouble than we thought...).

    No, I'm really thinking about the ordinary people, not the Ollie Norths of the world. The headline example here is the messy divorce, where your soon-to-be-ex (SBTX) drags out everything you said romantically and in frustration over secure chat from 10 years ago. It's a real problem, and companies have tried to solve it:

    However, what gets interesting is when the sparks of anger, not romance, fly:

    If a couple breaks up, one of them may disconnect the service and all the data will be deleted.

    But none of these have the credibility of the security industry. They're all like Kim Dotcom efforts, which aim at serious problems, but fail to get to the real end.

    Now, thankfully, a company with serious security credibility has released a solution:

    WASHINGTON, D.C. – September 3, 2013 – Silent Circle, the global encrypted communications firm revolutionizing mobile device security for organizations and individuals alike, today announced the availability of its Silent Text secure messaging and file transfer app for Android devices via Google Play. With the addition of Silent Text for Android, Silent Circle's apps and services offer unmatched privacy protection by routing encrypted calls, messages and attachments exclusively between Silent Circle users' iOS and Android devices without logging metadata associated with subscribers' communications.

    Silent Text for Android's features include:

    • Burn Notice feature allows you to have any messages you send self-destruct after a time delay

    • ...

    [Jon Callas:] "Beyond strong encryption, our apps give users important, additional privacy controls, such as Silent Text's ability to wipe messages and files from a recipient's device with a 'Burn Notice.'"

    Fantastic! We may not like the rest of Silent Circle's products or solutions, but finally we have serious cryptoplumbers deploying a really needed feature. Back to me, back to 2004:

    As this threat is real, persistent and growing in popularity, the obsession of perfectly covering more crypto-savvy threats seems .. unbalanced?

    Which leaves me wondering why it took so long to get the attention of the serious industry. The above quote measures the OODA loop of security threat thinking at around 9 years, call it a decade, as opposed to the quicker in-protocol threats. Which scarily matches the time it took to deploy TLS/SNI. And the time it is taking from phishing threat identification (around 2003) to understanding that HTTPS Everywhere was part of the solution (2005) to deployment of HTTPS Everywhere (2012++).

    Why does the security industry clutch so fiercely to its quaint old notions of CIA as received wisdom, and security done because we can do it? A partial answer is that we are simply bad at risk. Our society, and by extension the so-called security industry, cannot handle risk management, and instead chases headline threats and bogeymen, as Bruce Schneier laments:

    We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

    He's hitting most of the bases there: risks from and to people, rather than dusty cold-war cryptographic textbooks; the technician's desire to eliminate risks 'perfectly' and ignore those he can't deal with; the adulation given to exotic technical solutions /versus/ the avoidance of risk as opportunity.

    There is more: our inability to feel and measure risk accurately means we are susceptible to the dollar-incentivised snake-oil salesmen. Which leads to liability and responsibility problems in what is termed the agency problem: we used to say that nobody ever got fired for buying IBM. During the 1990s, nobody ever got fired for implementing SSL.

    In the 2000s, nobody ever got fired for increasing the budget for Homeland Security.

    In the 2010s it seems, nobody will ever get fired for cyberwarfare. We continue, as a society, to create more risks for ourselves in the name of countering threats.

    Posted by iang at 05:02 AM | Comments (2) | TrackBack

    July 30, 2013

    The NSA is lying again -- how STOOPID are we?

    In the on-going tits-for-tat between the White House and the world (Western cyberwarriors versus Chinese cyberspies; Obama and the will-he-won't-he scramble of his forces to intercept a 29-year-old hacker who is-flying-isn't-flying; the ongoing search for the undeniable Iranian casus belli; the secret cells in Apple and Google that are too secret to be found but no longer secret enough to be denied), one does wonder...

    Who can we believe on anything? Here's a data point. This must be the loudest YOU-ARE-STOOPID response I have ever seen from a government agency to its own populace:

    The National Security Agency lacks the technology to conduct a keyword search of its employees’ emails, even as it collects data on every U.S. phone call and monitors online communications of suspected terrorists, according to NSA’s freedom of information officer.

    “There’s no central method to search an email at this time with the way our records are set up, unfortunately,” Cindy Blacker told a reporter at the nonprofit news website ProPublica.

    Ms. Blacker said the agency’s email system is “a little antiquated and archaic,” the website reported Tuesday.

    One word: counterintelligence. The NSA is a spy agency. It has a department that is mandated to look at all its people for deviation from the cause. I don't know what it's called, but more than likely there are actually several departments with this brief. And they can definitely read your email. In bulk, in minutiae, and in ways we civilians can't even conceive.

    It is standard practice at most large organizations — not to mention a standard feature of most commercially available email systems — to be able to do bulk searches of employees’ email as part of internal investigations, discovery in legal cases or compliance exercises.
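
    For a sense of the absurdity: a keyword search over a mail archive is a first-week scripting exercise. A minimal sketch, assuming a maildir-style store (the path and keyword are taken from the command line; a real compliance tool would decode MIME properly and index rather than scan):

    import mailbox, sys

    def search(maildir_path, keyword):
        # naive scan: render each message to text and grep it
        for key, msg in mailbox.Maildir(maildir_path).items():
            if keyword.lower() in str(msg).lower():
                print(key, msg.get("From"), msg.get("Subject"))

    if __name__ == "__main__":
        search(sys.argv[1], sys.argv[2])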

    The claim that the NSA cannot look at its own email system is either a) a declaration of materially aiding the enemy by not completing its necessary and understood role of counterintelligence (in which case it should be tried in a military court, being wartime, right?), or b) a downright lie to a stupid public.

    I'm inclined to think it's the second (which leaves a fascinating panoply of civilian charges). In which case, one wonders just how STOOPID the people governing the NSA are. Here's another data point:

    The numbers tell the story — in votes and dollars. On Wednesday, the House voted 217 to 205 not to rein in the NSA’s phone-spying dragnet. It turns out that those 217 “no” voters received twice as much campaign financing from the defense and intelligence industry as the 205 “yes” voters.

    .... House members who voted to continue the massive phone-call-metadata spy program, on average, raked in 122 percent more money from defense contractors than those who voted to dismantle it.

    .... Lawmakers who voted to continue the NSA dragnet-surveillance program averaged $41,635 from the pot, whereas House members who voted to repeal authority averaged $18,765.

    So one must revise one's opinion lightly in the face of overwhelming financial evidence: Members of Congress are financially savvy, anything but stupid.

    Which makes the voting public...

    Posted by iang at 10:02 AM | Comments (6) | TrackBack

    July 11, 2013

    The failure of cyber defence - the mindset is against it

    I have sometimes uttered the theory that the NSA is more or less responsible for the failure of the defensive arts on the net. Here is some circumstantial evidence, gleaned from an interview with someone allegedly employed to hack foreigners' computers:

    Grimes: What do you wish we, as in America, could do better hacking-wise?

    Cyber warrior: I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinated teams trying to exploit stuff. But we don't have any large teams that I know of for defending ourselves. In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now.

    My main thesis is that the NSA has erred on the side of destroying the open society's capability of defence (recall interference with PGP, GSM, IETF, cryptography, secure browsing, etc). We are bad at it in the aggregate because our attempts to do better are frustrated in oh so many ways.

    This above claim suggests two things. Firstly, they only know, or think, to Attack! whatever the problem. Secondly, due to a mindset of offence, the spooks in the aggregate will be unsuited to any mission to assist the defence side. And will be widely perceived to be untrustworthy.

    Hence, any discussions of the dangerous state of civilian defences will only be used as an excuse to boost attack capabilities. Thus making the problem worse.

    For amusement, here are some other snippets:

    Grimes: What happened after you got hired?

    Cyber warrior: I immediately went to work. Basically they sent me a list of software they needed me to hack. I would hack the software and create buffer overflow exploits. I was pretty good at this. There wasn't a piece of software I couldn't break. It's not hard. Most of the software written in the world has a bug every three to five lines of code. It isn't like you have to be a supergenius to find bugs.

    But I quickly went from writing individual buffer overflows to being assigned to make better fuzzers. You and I have talked about this before. The fuzzers were far faster at finding bugs than I was. What they didn't do well is recognize the difference between a bug and an exploitable bug or recognize an exploitable bug from one that could be weaponized or widely used. My first few years all I did was write better fuzzing modules.

    Grimes: How many exploits does your unit have access to?

    Cyber warrior: Literally tens of thousands -- it's more than that. We have tens of thousands of ready-to-use bugs in single applications, single operating systems.

    Grimes: Is most of it zero-days?

    Cyber warrior: It's all zero-days. Literally, if you can name the software or the controller, we have ways to exploit it. There is no software that isn't easily crackable. In the last few years, every publicly known and patched bug makes almost no impact on us. They aren't scratching the surface.

    Posted by iang at 04:32 AM | Comments (4) | TrackBack

    June 19, 2013

    On casting the first cyber-stone, USA declares cyberwar. Everyone loses.

    Following on from revelations of the USA's unilateral act of cyberwar otherwise known as Stuxnet, it is now apparent to all but the most self-serving of Washington lobbyists that Iran has used its defeat to learn from and launch the same weapons. VanityFair has the story:

    On the hidden battlefields of history’s first known cyber-war, the casualties are piling up. In the U.S., many banks have been hit, and the telecommunications industry seriously damaged, likely in retaliation for several major attacks on Iran. Washington and Tehran are ramping up their cyber-arsenals, built on a black-market digital arms bazaar, enmeshing such high-tech giants as Microsoft, Google, and Apple.

    The headline victim of the proxy war is the Saudis' state-run oil company, Saudi Aramco:

    The data on three-quarters of the machines on the main computer network of Saudi aramco had been destroyed. Hackers who identified themselves as Islamic and called themselves the Cutting Sword of Justice executed a full wipe of the hard drives of 30,000 aramco personal computers. For good measure, as a kind of calling card, the hackers lit up the screen of each machine they wiped with a single image, of an American flag on fire.

    Which makes the American decisions all the more curious.

    For the U.S., Stuxnet was both a victory and a defeat. The operation displayed a chillingly effective capability, but the fact that Stuxnet escaped and became public was a problem.

    How did they think this would not get out? What part of 'virus' and 'anti-virus industry' did they not understand?

    Last June, David E. Sanger confirmed and expanded on the basic elements of the Stuxnet conjecture in a New York Times story, the week before publication of his book Confront and Conceal. The White House refused to confirm or deny Sanger’s account but condemned its disclosure of classified information, and the F.B.I. and Justice Department opened a criminal investigation of the leak, which is still ongoing.

    In Washingtonspeak, that means they did it. Wired and NYT confirm:

    Despite headlines around the globe, officials in Washington have never openly acknowledged that the US was behind the attack. It wasn’t until 2012 that anonymous sources within the Obama administration took credit for it in interviews with The New York Times. [snip...] Citing anonymous Obama administration officials, The New York Times reported that the malware began replicating itself and migrating to computers in other countries. [snip...] In 2006, the Department of Defense gave the go-ahead to the NSA to begin work on targeting these centrifuges, according to The New York Times.

    We now have enough evidence to decide, beyond reasonable doubt, that the USA and Israel launched a first-strike cyber attack against a country they were not at war with. It succeeded, and the damages are credibly documented.

    Back to the VanityFair article: what made the White House think that the tiger whose tail they tweaked wouldn't be unleashed?

    Sanger, for his part, said that when he reviewed his story with Obama-administration officials, they did not ask him to keep silent. According to a former White House official, in the aftermath of the Stuxnet revelations “there must have been a U.S.-government review process that said, This wasn’t supposed to happen. Why did this happen? What mistakes were made, and should we really be doing this cyber-warfare stuff? And if we’re going to do the cyber-warfare stuff again, how do we make sure (a) that the entire world doesn’t find out about it, and (b) that the whole world does not fucking collect our source code?”

    None of it makes sense unless we assume that Washington DC is simply disconnected from the reality of the art of cyber-security. It gets worse:

    One of the most innovative features of all this malware—and, to many, the most disturbing—was found in Flame, the Stuxnet precursor. Flame spread, among other ways, and in some computer networks, by disguising itself as Windows Update. Flame tricked its victim computers into accepting software that appeared to come from Microsoft but actually did not. Windows Update had never previously been used as camouflage in this malicious way. By using Windows Update as cover for malware infection, Flame’s creators set an insidious precedent. If speculation that the U.S. government did deploy Flame is accurate, then the U.S. also damaged the reliability and integrity of a system that lies at the core of the Internet and therefore of the global economy.

    And Microsoft is now a co-conspirator:

    Microsoft Corp. (MSFT), the world’s largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.

    Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn’t ask and can’t be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.

    Frank Shaw, a spokesman for Microsoft, said those releases occur in cooperation with multiple agencies and are designed to give government “an early start” on risk assessment and mitigation.

    In an e-mailed statement, Shaw said there are “several programs” through which such information is passed to the government, and named two which are public, run by Microsoft and for defensive purposes.

    Notice the discord between those positions. Microsoft will now be vulnerable to civil suits around the world for instances where bugs were disclosed and then used against victims. Why is this? Why is the cozy cognitive dissonance of the Americans worthless elsewhere? Simple: immunity granted by the US Government only works in the USA. Spying, cyber attacks, and conspiracy to destroy state equipment are illegal elsewhere.

    And:

    For at least a decade, Western governments—among them the U.S., France, and Israel—have been buying “bugs” (flaws in computer programs that make breaches possible) as well as exploits (programs that perform jobs such as espionage or theft) not only from defense contractors but also from individual hackers. The sellers in this market tell stories that suggest scenes from spy novels. One country’s intelligence service creates cyber-security front companies, flies hackers in for fake job interviews, and buys their bugs and exploits to add to its stockpile. Software flaws now form the foundation of almost every government’s cyber-operations, thanks in large part to the same black market—the cyber-arms bazaar—where hacktivists and criminals buy and sell them. ...

    In the U.S., the escalating bug-and-exploit trade has created a strange relationship between government and industry. The U.S. government now spends significant amounts of time and money developing or acquiring the ability to exploit weaknesses in the products of some of America’s own leading technology companies, such as Apple, Google, and Microsoft. In other words: to sabotage American enemies, the U.S. is, in a sense, sabotaging its own companies.

    It's another variant on the old biblical story of the 30 pieces of silver. The US government is, by practice and policy, undermining trust in its own industry.

    So where does this go? As I have oft mentioned, as long as the intelligence information collected stayed in the community, the act of spying represented not much of a threat to the people. But that is far different to aggressive first-strike attacks, and it is far different to industrial espionage run by the state:

    Thousands of technology, finance and manufacturing companies are working closely with U.S. national security agencies, providing sensitive information and in return receiving benefits that include access to classified intelligence, four people familiar with the process said.

    These programs, whose participants are known as trusted partners, extend far beyond what was revealed by Edward Snowden, a computer technician who did work for the National Security Agency. The role of private companies has come under intense scrutiny since his disclosure this month that the NSA is collecting millions of U.S. residents’ telephone records and the computer communications of foreigners from Google Inc (GOOG). and other Internet companies under court order.

    ...

    Along with the NSA, the Central Intelligence Agency (0112917D), the Federal Bureau of Investigation and branches of the U.S. military have agreements with such companies to gather data that might seem innocuous but could be highly useful in the hands of U.S. intelligence or cyber warfare units, according to the people, who have either worked for the government or are in companies that have these accords.

    Pure intelligence for state purposes is no longer a plausible claim. Cyberwar is unleashed; Pandora's box is opened. The agency has crossed the commercial barriers, swapping product for information. With thousands of companies.

    Back to the attack on Iran. Last week I was reading _The Rommel Papers_, the posthumous memoirs of the late great WWII general Erwin Rommel. In passing, he opined that when the 'terrorists' struck at his military operations in North Africa, the best strategy was to ignore them. He did, and nothing much happened. Specifically, he eschewed the civilian reprisals so popular in films and novels, and he did not do much or anything to chase up who might be responsible. Beyond normal police operations, presumably.

    The USA's strategy for pinpricks is the reverse of Rommel's. And attacks on the Iranians, it turns out, do elicit a response:

    Sure enough, in August 2012 a devastating virus was unleashed on Saudi Aramco, the giant Saudi state-owned energy company. The malware infected 30,000 computers, erasing three-quarters of the company’s stored data, destroying everything from documents to email to spreadsheets and leaving in their place an image of a burning American flag, according to The New York Times. Just days later, another large cyberattack hit RasGas, the giant Qatari natural gas company. Then a series of denial-of-service attacks took America’s largest financial institutions offline. Experts blamed all of this activity on Iran, which had created its own cyber command in the wake of the US-led attacks.

    So, let's inventory the interests here.

    For uninvolved government agencies, main-street USA, banks, and commercial industry there and in allied countries, this is a total negative: they will bear the cost.

    For the NSA (and here I mean the NSA/CIA/DoD/Mossad group), there is no plausible harm. The NSA carries no cost. Meanwhile, the NSA maintains and grows its already huge capability to collect huge amounts of boring data. And to launch pre-emptive strikes against Iran's centrifuges. And the program to sign up most of the USA's security industry in the war against everyone has yielded thousands of sign-ups, thus tarring the entirety of the USA with the same brush.

    And indeed, for the NSA, the responses by Iran -- probably or arguably justifiable and "legal" under the international laws of war -- represent an opportunity to further justify its own growth. It's all upside for them:

    Inside the government, [General Alexander] is regarded with a mixture of respect and fear, not unlike J. Edgar Hoover, another security figure whose tenure spanned multiple presidencies. “We jokingly referred to him as Emperor Alexander—with good cause, because whatever Keith wants, Keith gets,” says one former senior CIA official who agreed to speak on condition of anonymity. “We would sit back literally in awe of what he was able to get from Congress, from the White House, and at the expense of everybody else.”

    What about Iran? Well, it has made it clear that regime change -- which is what the USA really wants -- is not on the agenda. As it perceives that the USA won't stop, it has few options but to defend. Which regime would be likely to back down when it knows there is no accommodation on the other side?

    Iran is a specialist in asymmetric attacks (as it has to be), so we can predict that its Stuxnet inspiration has already resulted in responses, and will result in more. Meanwhile, the USA postures that a cyber attack is cause for going physical, and the USA has never been known to back down in the face of a public lashing.

    All signs point to escalation. Which plays into the NSA's hands:

    The cat-and-mouse game could escalate. “It’s a trajectory,” says James Lewis, a cyber­security expert at the Center for Strategic and International Studies. “The general consensus is that a cyber response alone is pretty worthless. And nobody wants a real war.” Under international law, Iran may have the right to self-defense when hit with destructive cyberattacks. William Lynn, deputy secretary of defense, laid claim to the prerogative of self-defense when he outlined the Pentagon’s cyber operations strategy. “The United States reserves the right,” he said, “under the laws of armed conflict, to respond to serious cyberattacks with a proportional and justified military response at the time and place of our choosing.” Leon Panetta, the former CIA chief who had helped launch the Stuxnet offensive, would later point to Iran’s retaliation as a troubling harbinger. “The collective result of these kinds of attacks could be a cyber Pearl Harbor,” he warned in October 2012, toward the end of his tenure as defense secretary, “an attack that would cause physical destruction and the loss of life.” If Stuxnet was the proof of concept, it also proved that one successful cyberattack begets another. For Alexander, this offered the perfect justification for expanding his empire.

    Conclusion? The NSA are not going to be brought to heel. Congress will remain ineffective, shy of governance and innocent of the war it has signed off on.

    The cyber divisions are going to have their day in the field. And the businesses of the USA and its public allies are going to carry the cost of a hot cyberwar.

    The U.S. banking leadership is said to be extremely unhappy at being stuck with the cost of remediation—which in the case of one specific bank amounts to well over $10 million. The banks view such costs as, effectively, an unlegislated tax in support of U.S. covert activities against Iran. The banks “want help turning [the DDoS] off, and the U.S. government is really struggling with how to do that. It’s all brand-new ground,” says a former national-security official. And banks are not the only organizations that are paying the price. As its waves of attacks continue, Qassam has targeted more banks (not only in the U.S., but also in Europe and Asia) as well as brokerages, credit-card companies, and D.N.S. servers that are part of the Internet’s physical backbone.

    And, it's going to go kinetic.

    ...at a time when the distinction between cyberwarfare and conventional warfare is beginning to blur. A recent Pentagon report made that point in dramatic terms. It recommended possible deterrents to a cyberattack on the US. Among the options: launching nuclear weapons.

    Not a smart situation. When you look at the causes of this, there isn't even a plausible casus belli here. It's more like boys with too-big toys playing a first-person shoot-em-up video game, where they carry none of the costs of the shots.

    But it gets worse. Having caused this entire war to come, the biggest boy with the biggest toy says:

    Alexander runs the nation’s cyberwar efforts, an empire he has built over the past eight years by insisting that the US’s inherent vulnerability to digital attacks requires him to amass more and more authority over the data zipping around the globe. In his telling, the threat is so mind-bogglingly huge that the nation has little option but to eventually put the entire civilian Internet under his protection, requiring tweets and emails to pass through his filters, and putting the kill switch under the government’s forefinger. “What we see is an increasing level of activity on the networks,” he said at a recent security conference in Canada. “I am concerned that this is going to break a threshold where the private sector can no longer handle it and the government is going to have to step in.”

    !

    Posted by iang at 09:30 AM | Comments (1) | TrackBack

    May 16, 2013

    All Your Skype Are Belong To Us

    It's confirmed -- Skype is revealing traffic to Microsoft.

    A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:

    65.52.100.214 - - [30/Apr/2013:19:28:32 +0200]
    "HEAD /.../login.html?user=tbtest&password=geheim HTTP/1.1"

    [Utrace map: "The access is coming from systems which clearly belong to Microsoft." Source: Utrace]

    They too had received visits to each of the HTTPS URLs transmitted over Skype from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for a private cloud-based file-sharing service.

    Now, the boys & girls at Heise are switched-on, unlike their counterparts on the eastern side of the pond. Notwithstanding, Adam Back of hashcash fame has confirmed the basics: URLs he sent to me over Skype were picked up and probed by Microsoft.
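
    Reproducing the test is within anyone's reach. A minimal sketch of the canary, assuming a host you control listening on port 8080 (the original test used HTTPS URLs; TLS termination is left to a reverse proxy here for brevity):

    import secrets
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Mint an unguessable path; paste http://<your-host>:8080/<token> into the
    # chat system under test, then watch who comes knocking.
    TOKEN = secrets.token_hex(16)
    print("canary path: /" + TOKEN)

    class Canary(BaseHTTPRequestHandler):
        def _log_probe(self):
            hit = "HIT" if self.path.strip("/") == TOKEN else "miss"
            print(hit, self.client_address[0], self.command, self.path)
            self.send_response(404)   # give the prober nothing useful back
            self.end_headers()
        do_GET = _log_probe
        do_HEAD = _log_probe

    HTTPServer(("", 8080), Canary).serve_forever()

    Any request for the token path that you didn't make yourself is your answer.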

    What's going on? Microsoft commented:

    In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy:

    "Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links."

    A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites.

    Which means Microsoft can scan ALL messages to ANYONE. Which means they are likely fed into Echelon, either already, or just as soon as someone in the NSA calls in some favours. 10 minutes later they'll be realtimed to support, and from thence to datamining, because they're pissed that Google's beating the hell out of Microsoft on the Nasdaq.

    Game over?

    Or exaggeration? It's all just fine and dandy, as long as all the NSA is interested in is matching the URLs to jihadist websites. I don't care so much for the towelheads. But, from the manual of citizen control comes this warning:

    First they came for the jihadists,
    and I didn't speak out because I wasn't a jihadist.

    Then they came for the cypherpunks,
    and I didn't speak out because I wasn't a cypherpunk.

    Then they came for the bloggers,
    and I didn't speak out because I wasn't a blogger.

    Then they came for me,
    and there was no one left to speak for me.


    Skype, game over.

    Posted by iang at 02:25 PM | Comments (5) | TrackBack

    May 06, 2013

    What makes financial cryptography the absolutely most fun field to be in?

    Quotes that struck me as on-point: Chris Skinner says of SEPA, the Single Euro Payments Area:

    One of the key issues is that when SEPA was envisaged and designed, counterparty credit risk was not top of the agenda; post-Lehman Brothers crash and it is.

    What a delight! Oh, to design a payment system without counterparty risk ... Next thing they'll be suggesting payments without theft!

    Meanwhile Dan Kaminsky says in delicious counterpoint, commenting on Bitcoin:

    But the core technology actually works, and has continued to work, to a degree not everyone predicted. Time to enjoy being wrong. What the heck is going on here?

    First of all, yes. Money changes things.

    A lot of the slop that permeates most software is much less likely to be present when the developer is aware that, yes, a single misplaced character really could End The World. The reality of most software development is that the consequences of failure are simply nonexistent. Software tends not to kill people and so we accept incredibly fast innovation loops because the consequences are tolerable and the results are astonishing.

    BitCoin was simply developed under a different reality.

    The stakes weren’t obscured, and the problem wasn’t someone else’s.

    They didn’t ignore the engineering reality, they absorbed it and innovated ridiculously

    Welcome to financial cryptography -- that domain where things matter. It is this specialness, that one's code actually matters, that makes it worthwhile.

    Meanwhile, from the department of lolz, comes Apple with a new patent -- filed at least.

    The basic idea, described in a patent application “Ad-hoc cash dispensing network” is pretty simple. Create a cash dispensing server at Apple’s datacenter, to which iPhones, iPads and Macs can connect via a specialized app. Need some quick cash right now and there’s no ATM around? Launch the Cash app, and tell it how much do you need. The app picks up your location, and sends the request for cash to nearby iPhone users. When someone agrees to front you $20, his location is shown to you on the map. You go to that person, pick up the bill and confirm the transaction on your iPhone. $20 plus a small service fee is deducted from your iTunes account and deposited to the guy who gave you the cash.

    The good thing about being an FCer is that you can design that one over beers, and have a good belly laugh for the same price. I don't know how to put it gently, but hey guys, don't do that for real, ok?!

    All by way of saying, financial cryptography is where it's at!

    Posted by iang at 03:20 PM | Comments (1) | TrackBack

    March 05, 2013

    How to use PGP to verify that an email is authentic

    Ironic, but xkcd nails it -- at least one can draw the picture. What instruction would one draw for secure browsing these days?

    Posted by iang at 06:26 PM | Comments (1) | TrackBack

    January 05, 2013

    Yet another CA snafu

    In the closing days of 2012, another CA was caught out making mistakes:

    2012.5 -- A CA issued 2 intermediate roots to two separate customers on 8th August 2011 [Mozilla mail/Mert Özarar]. The process that allowed this to happen was discovered later on and fixed, and one of the intermediates was revoked. On 6th December 2012, the remaining intermediate was placed into an MITM context and used to issue an unauthorised certificate for *.google.com [DarkReading]. These certificates were detected by Google Chrome's pinning feature, a recent addition. "The unauthorized Google.com certificate was generated under the *.EGO.GOV.TR certificate authority and was being used to man-in-the-middle traffic on the *.EGO.GOV.TR network" [Wired]. Actions: vendors revoked the intermediates [Microsoft, Google, Mozilla]. Damages: Google will revoke Extended Validation status on the CA in January's distro, and Mozilla froze a new root of the CA that was pending inclusion.

    I collect these stories for a CA risk history, which can be useful in risk analysis.

    Beyond that, what is there to say? It looks like this CA made a mistake that let some certs slip out. It caught one of them later, not the other. The owner/holder of the cert at some point tried something different, including an MITM. One can see the coverup proceeding from there...
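
    Pinning, the feature that caught this one, is conceptually tiny. A minimal sketch, assuming you already know out-of-band which fingerprints you trust for a host (Chrome pins SPKI hashes of keys in the chain; this pins the leaf certificate itself, for brevity):

    import hashlib, socket, ssl

    # Hex SHA-256 fingerprints we trust for this host. The value below is a
    # placeholder -- fill it in out-of-band, e.g. recorded on first contact.
    PINS = {"0000000000000000000000000000000000000000000000000000000000000000"}

    def leaf_fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)   # leaf cert, DER-encoded
        return hashlib.sha256(der).hexdigest()

    def check_pin(host):
        fp = leaf_fingerprint(host)
        if fp not in PINS:
            # a valid CA signature is NOT enough -- that is the whole point
            raise ssl.SSLError("pin mismatch for %s: %s" % (host, fp))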

    Mistakes happen. This so far is somewhat distinct from issuing root certs for the deliberate practice of MITMing. It is also distinct from the overall risk equation, which says that because every CA can issue certs for your browser, only one compromise is needed, and then all CAs are compromised. That is, the system is brittle.

    But what is now clear is that the trend that started in 2011 is confirmed in 2012 - we have 5+ incidents in each year. For many reasons, the CA business has reached a plateau of aggressive attention. It can now consider itself under attack, after 15 or so years of peace.

    Posted by iang at 04:14 PM | Comments (3) | TrackBack

    November 22, 2012

    Facebook goes HTTPS-always - victory after a long hard decade

    In news that might bemuse, Facebook is in the process of turning on SSL for all time. In this it is following Google and others, who in turn are following yet others, including the EFF and Mozilla.

    And those, in turn, are following Tyler, Amir, Ahmad and yours truly.

    We have been pushing for the use of all-authenticated web pages for around 8 years now. The reason is complicated, and it has *nothing to do with wifi*, but it's OK to use that excuse if it is easier to explain. It is really all about phishing, which causes an MITM against a web-user (SSL or not). The reason is this: if we have SSL always on, then we can rely on a whole bunch of other protections to lock in the user: pinning and client certificates spring to mind, but also never forget that the CA was supposed to show the user she was on her own bank, not somewhere else.

    But, without SSL always on, solutions were complicated, impossible, or easily tricked. So a deep analysis concluded, back in the mid 2000s, that we had to move the net across to all-SSL -- only SSL, for any site with user interaction. (Which since then has become all of them -- remember that surfing in a basic read-only mode was possible in those days...)
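
    The server-side half of "only SSL" is mechanically trivial, which makes the delay all the more telling. A minimal sketch, hostnames illustrative (a real deployment would also send an HSTS header on the HTTPS side, so browsers stop asking over plain HTTP at all):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToTLS(BaseHTTPRequestHandler):
        # answer every plain-HTTP request with a permanent redirect to HTTPS
        def do_GET(self):
            host = self.headers.get("Host", "example.com")
            self.send_response(301)
            self.send_header("Location", "https://" + host + self.path)
            self.end_headers()
        do_HEAD = do_GET

    HTTPServer(("", 80), RedirectToTLS).serve_forever()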

    A project was born. Slowly, TLS/SNI was advanced. Browsers experimented with new/old SSL display ideas. All browsers upgraded to SSL v3 and then to TLS. Servers followed suit, s.l.o.w.l.y.... SSL v2 got turned off, painfully. Various projects sprang up to report on SSL weaknesses, although they don't report on the absence of SSL, the greatest weakness of them all... OK, small baby steps, let's not rush it. Indeed - the reason my long-suffering readers have to deal with this site in SSL is because of that project. We eat my dogfood.

    And, finally, some leaders started doing more widespread SSL.

    ( For those old timers who remember - this is how it was supposed to be. SSL was supposed to be always on. But back in 1995, it was discovered to be too expensive, so the business folks split the website and broke the security model (again!). Now, there is no such excuse, and Google reports somewhere that there was no hit to its performance. 15 years later :) )

    This is good news - we have reached a major milestone. I'll leave you with this one thought.

    This response all started with phishing. Which started in 2001, and got really going by 2003. Now, if we call Facebook the midpoint of the response ("before FB, you were early, after, you're a laggard!"), we can conclude that the Internet's security lifecycle, or the OODA loop, is a decade long.

    This observation I especially leave there for those thinking about starting a little cyber war.

    Posted by iang at 10:29 AM | Comments (0) | TrackBack

    November 21, 2012

    Some One Thing you know, you have, you are

    Reading the FinServices' tongue-in-cheek prediction that "we should all be using Biometrics," I was struck by an old security aphorism:

    Something you know, something you have, something you are.

    The idea being that each of these was an independent system, so if we had a weak system in each domain, we could construct a strong system by redundantly combining all three. It wasn't perfect -- it was a classical strength-through-redundancy design -- but you could be forgiven for thinking it was the holy grail, it was repeated so often by security people.
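
    By way of example, the modern "something you have" leg is typically a one-time code computed on a device, per RFC 6238 (TOTP); combined with a password it gives two of the three. A minimal sketch -- illustrative only; a real system would use a salted password KDF, not bare SHA-256:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC-SHA1 over the current 30-second counter
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def two_factor_ok(password: str, code: str, pw_hash: bytes, secret_b32: str) -> bool:
        know = hmac.compare_digest(hashlib.sha256(password.encode()).digest(), pw_hash)
        have = hmac.compare_digest(code, totp(secret_b32))
        return know and have   # both independent factors must pass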

    Meanwhile, life has moved on. And it has moved on to the point where we now have a convergence of these things into one:

    the mobile phone

    The mobile (or cell, or handy, as it is known) is decidedly something you have - and we can imagine Bluetooth protocols to authenticate in a wireless context. We have SMS, RFID and NFC for those who like acronyms.

    It is also something you know. The individual knows how to fire up and run the apps on her phone. More so than anyone else - smartphones these days have lots of personality for our users to relish in. It is just an application design question to best show that this woman knows her own phone and others do not. Trivial solutions are left to the reader.

    Finally -- the phone is something you are. If you don't follow that, you've been in a cave for the last decade. Start your catchup by buying an iPhone and asking your 13 year old to install some apps. Watch that movie _The Social Network_. Install the facebook app, perhaps related to that movie.

    The mobile phone is something you know, have and are. It will become the one security Thing of the future. This one Thing has some good aspects, and some bad aspects too. For one, if you lose the One, you're screwed. Even with the obvious downsides, users will choose this one option, and as systems providers, we might as well bind with them over it.

    With further apologies to J.R.R. Tolkien,

    One Thing to rule them all,
    One Thing to find them,
    One Thing to bring them all
    and in the darkness bind them.
    Posted by iang at 11:29 AM | Comments (1) | TrackBack

    October 21, 2012

    Planet SSL: mostly harmless

    One of the essential requirements of any system is that it actually has to work for people, and work enough of the time to make a positive difference. Unfortunately, with security systems this is easily confused, because attacks can be either rare or hidden. In such an environment, we find persistent emphasis on strong branding more than on proven security out in the field.

    SSL has frequently been claimed to be the world's most successful -- and most complicated -- security system, mostly because everything in SSL is oriented to relying on certificates, which are their own market-leading complication. It has therefore been suggested (here and in many other places) that SSL's protection is somewhere between mostly harmless and mildly annoying but useful. Here's more evidence along those lines:

    "Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security"

    ...The most common approach to protect data during communication on the Android platform is to use the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocols. To evaluate the state of SSL use in Android apps, we downloaded 13,500 popular free apps from Google’s Play Market and studied their properties with respect to the usage of SSL. In particular, we analyzed the apps’ vulnerabilities against Man-in-the-Middle (MITM) attacks due to the inadequate or incorrect use of SSL.

    Some headlines, paraphrased:

    • 8.0% of the apps (examined automatically) contained SSL/TLS code that is potentially vulnerable to MITM attacks.
    • 41% of apps selected for manual audit exhibited various forms of SSL/TLS misuse that allowed us to successfully launch MITM attacks...
    • Half of the users questioned were not able to correctly judge whether their browser session was protected by SSL/TLS or not.
    • 73.6% of apps could have used HTTPS instead of HTTP with minimal effort, by adding a single character to the target URLs.
    • 17.28% accepted any hostname or any certificate! (see the sketch after this list)
    • 56% of banks "protected" by one generic banking app were breached by MITM attack.
    • One browser accepts any certificate...
    • One anti-virus product relied on SSL and not its own signature system, and was therefore vulnerable to being tricked into deleting itself :P
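
    That 17.28% line deserves dwelling on. The study was Android/Java, but the anti-pattern is universal; in Python terms it is the difference between the following two calls (hostname illustrative):

    import requests

    # BROKEN: accept any certificate from anyone; an active MITM is invisible.
    requests.get("https://bank.example.com/login", verify=False)

    # Right: validate the chain and hostname (the default). Better still, pin.
    requests.get("https://bank.example.com/login", verify=True)

    The broken form ships because it makes the certificate errors go away during development, and nothing visibly fails afterwards.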

    With numbers like these, we can pretty much conclude that SSL is unreliable in that domain - no real user can even come close to relying on its presence.

    The essential cause of this is that the secure browsing architecture is too complicated to work. It relies on too many "and that guy has to do all this" exceptions. Worse, the Internet browsing paradigm requires that the system work flawlessly or not at all, which conundrum this paper perhaps reveals as: flawlessly hidden and probably not working either.

    This is especially the case in mobile work, where fast time cycles and compact development contexts conspire to make security less of a leading requirement. Critics will complain that the app developers should fix their code, and that SSL works fine when it is properly deployed. Sure, that's true, but since proper deployment consistently doesn't happen, the SSL architecture is at fault for its low rate of reliable deployment. If a security architecture cannot be reliably deployed in adverse circumstances such as mobile, why bother?

    So where do we rate SSL? Mostly harmless, a tax on the Internet world, or a barrier to a better system?

    Posted by iang at 04:18 AM | Comments (2) | TrackBack

    August 26, 2012

    Use another browser - Kaspersky follows suit

    You saw it here first :) Kaspersky has dipped into the payments market with a thing called Safe Money:

    A new offering found in Kaspersky Internet Security is Safe Money, Kaspersky Lab's unique technology designed to protect the user's money when shopping and banking online. To keep your cash safe, Kaspersky Internet Security's Safe Money will:
      1. Automatically activate when visiting most common payment services (PayPal, etc.), banking websites, and you can easily add your own bank or shopping websites to the list.
      2. *Isolate your payment operations in a special web browser to ensure your transactions aren't monitored*.
      3. Verify the authenticity of the banking or payment website itself, to ensure the site isn't compromised by malware, or a fake website designed to look authentic.
      4. Evaluate the security status of your computer, and warn about any existing threats that should be addressed prior to making payments.
      5. Provide an onscreen Virtual Keyboard when entering your credit card or payment information so you can enter your information with mouse-clicks. This will activate a special program to prevent malware from logging any keystrokes on your physical keyboard.
      ...

    The #2 tip is the same as the one that's been on this website for years - use one browser for common stuff and another for your online payments. This simple trick builds a good solid firewall between your money and the crooks' hands. What's more, it's easy enough to teach grandma and grandson.

    (For those lost for clue: download Chrome and Firefox. I advise using Safari for banking, Firefox for routine stuff and Chrome for Google stuff. Whatever you do, keep that banking browser closed and locked down until it is time to bring it up, switch to privacy mode, and type in the URL by hand.)

    Aside from Kaspersky's thumbs-up for the #2, what else can we divine? If Kaspersky, one of the more respected anti-virus providers, has decided to dip its toe into payments protection, this might be a signal that phishing and malware are not diminishing. Or, at least, that the perception has not diminished.

    Out there in user-land, people don't really trust browsers to do their security, and since GFC-1 they don't really trust banks either. (They've never ever trusted CAs.) This doesn't mean they can voice, explain or argue their mistrust, but it does mean that they feel the need - it is this perception that Kaspersky hopes to sell into.

    Chances are, it's a good pick, if only because we're all going to die before any of these providers deals with their cognitive dissonance on trust. Good luck Kaspersky, and hopefully you won't succumb to it either.

    Posted by iang at 07:34 AM | Comments (7) | TrackBack

    June 20, 2012

    Banks will take responsibility for online fraud

    Several cases in the USA over online theft via bank-account hackery are being resolved. Here's one:

    Village View Escrow Inc., which in March 2010 lost nearly $400,000 after its online bank account with Professional Business Bank was taken over by hackers, has reached a settlement with the bank for an undisclosed amount, says Michelle Marsico, Village View's owner and president.

    As a result of the settlement, Village View recovered more than the full amount of the funds that had been fraudulently taken from the account, plus interest, the company says in a statement.

    And two more:

    Two similar cases, PATCO Construction Inc. vs. Ocean Bank and Experi-Metal Inc. vs. Comerica Bank, raised questions about liability and reasonable security, yet each resulted in a different verdict.

    In 2010, PATCO sued Ocean Bank for the more than $500,000 it lost in May 2009, after its commercial bank account with Ocean Bank was taken over. PATCO argued that Ocean Bank was not complying with existing FFIEC requirements for multifactor authentication when it relied solely on log-in and password credentials to verify transactions.

    Last year, a District Court magistrate found the bank met legal requirements for multifactor authentication and dismissed the suit.

    In December 2009, EMI sued Comerica after more than $550,000 in fraudulent wire transfers left EMI's account.

    In the EMI ruling, the court found that Comerica should have identified and disallowed the fraudulent transactions, based on EMI's history, which had been limited to transactions with a select group of domestic entities. The court also noted that Comerica's knowledge of phishing attempts aimed at its clients should have caused the bank to be more cautious.

    In the ruling, the court required Comerica to reimburse EMI for the more than $560,000 it lost after the bank approved the fraudulent wire transfers.

    Here's how it happens. There will be many of these. Many of the victims will sue. Many of the cases will lose.

    Those that lose are irrelevant. Those that win will set the scene. Eventually some precedent will be found, either at law or at reputation, that will allow people to trust banks again. Some more commentary.

    The reason for the inevitability of this result is simple: society and banks both agree that we don't need banks unless the money is safe.

    Online banking isn't safe. It behoves the banks to make it safe. We're in the phase where the courts of law and public opinion are working to get that result.

    Posted by iang at 04:42 PM | Comments (2) | TrackBack

    June 16, 2012

    The Equity Debate

    I don't normally copy directly from others, but the following post by Bruce Schneier provides an introduction to one of the most important topics in Information Security. I have tried long and hard to write about it, but the topic is messy, controversial and hard evidence is thin. Here goes...

    Bruce Schneier writes in Forbes about the booming vulnerabilities market:

    All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the "equities issue," and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone's communications more secure, or they could exploit the flaw to eavesdrop on others -- while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I've heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

    It's probably worth reading the rest of the article too - I only took the one para talking about the Equities Debate.

    What's it about? Well, in a nutshell, the intelligence community debated long and hard about whether to allow the world's infosec infrastructure to be vulnerable, so as to assist spying. Or not.... Is there such a stark choice? The answer to this is a bit difficult to prove, but I'm going to put my money on YES: for the NSA it is either/or. The reason for this is that when the NSA goes for vulnerabilities, there are all sorts of flow-on effects:

    Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once these sorts of research slow down, the increasing ignorance hurts us all.

    I remember this from my early formative years in University - security work was considered a bad direction to head. As you got into it you found all of these restrictions and regulations. It just had a bad taste, the brighter people were bright enough to go elsewhere.

    In general, it seems that the NSA has erred on the side of "YES! let them be weak," and we are now counting the cost. If you look back through the last couple of decades, the mantra is very clear: security is an afterthought. That's in large part because almost nobody coming out of the training camps is steeped in it. We got away with this for a decade or two when the Internet was in its benign phase - 1990s, spam, etc.

    But that's all changed now. Chickens now come home to roost. For one example, if you look at the timeline of CA attacks over the last decade, there is a noticeable spike in 2011. For another, look at Stuxnet and Flame as cyberweapons of inspiration.

    Which brings costs to everyone.

    I personally think the Equity Issue within the NSA is perhaps the single most important information security influence, ever. Their mission is twofold: to protect and to listen. By their choosing vulnerability over protection, we have all suffered. We are now in the cost-amortisation phase; for the next decade we will suffer a non-benign Internet risk environment.

    Next time you read of the US government banging the cyberwar drum in order to rustle up budget for cyberwarriors, ask them whether they've re-thought the equity issue, and why we would provide funds for a problem they created in the first place.

    Posted by iang at 06:50 AM | Comments (1) | TrackBack

    March 20, 2012

    More context on why context undermines threat modelling...

    Lynn was there at the beginning of SSL, and sees where I'm going with this. He writes in Comments:

    > I have a similar jaundiced view of a lot of the threat model stuff in the mid-late 90s ... Lots of parties had the solution (PKI) and they were using the facade of threat models to narrowly focus in portion of problem needing PKI as solution (they had the answer, now they needed to find the question). The industry was floating business plans on wall street $20B/annum for annual $100 digital certificates for individuals. We had been called in to help wordsmith the cal. state electronic signature legislation ... and the industry was heavily lobbying that the legislation mandate (annual, individual) digital certificates.


    Etc, etc. Yes, this is the thing that offends - there is a flaw in pure threat modelling, and that flaw was big enough to drive an industry through. We're still paying the dead-weight costs, and will be for some time.

    The question is whether the flaw in threat modelling can be repaired by patches like "oh, you should consider the business" or whether we need to recognise that nobody ever does. Or can. Or gets paid to.

    In which case we need a change in metaphor. A new paradigm, as they say in the industry.

    I'm thinking the latter. Hence, Risk Analysis.

    Posted by iang at 10:41 AM | Comments (0) | TrackBack

    March 12, 2012

    Measuring the OODA loop of security thinking -- Can you say - firewalls & SSL?

    So, you want to know where the leading thinkers are in security today?

    Coviello called for the industry to rally together to take the following actions:
    -- Change how we think about security. The security industry must stop thinking linearly, "...blindly adding new controls on top of failed models. We need to recognize, once and for all, that perimeter-based defenses and signature-based technologies are past their freshness dates, and acknowledge that our networks will be penetrated. We should no longer be surprised by this," Coviello said.

    Can you say, firewalls & SSL? It's so long ago that Gunnar published this metaphor that I can't even remember when. But here's his firewalls & SSL infosec debt clock, starting in 1995.

    Posted by iang at 09:17 PM | Comments (3) | TrackBack

    February 28, 2012

    Serious about user security? Plonk down a million in prizes....

    Google offers $1 million reward to hackers who exploit Chrome
    By Dan Goodin | Published February 27, 2012 8:30 PM

    Google has pledged cash prizes totaling $1 million to people who successfully hack its Chrome browser at next week's CanSecWest security conference.

    Google will reward winning contestants with prizes of $60,000, $40,000, and $20,000 depending on the severity of the exploits they demonstrate on Windows 7 machines running the browser. Members of the company's security team announced the Pwnium contest on their blog on Monday. There is no splitting of winnings, and prizes will be awarded on a first-come-first-served basis until the $1 million threshold is reached.

    Now in its sixth year, the Pwn2Own contest at the same CanSecWest conference awards valuable prizes to those who remotely commandeer computers by exploiting vulnerabilities in fully patched browsers and other Internet software. At last year's competition, Internet Explorer and Safari were both toppled but no one even attempted an exploit against Chrome (despite Google offering an additional $20,000 beyond the $15,000 provided by contest organizer Tipping Point).

    Chrome is currently the only browser eligible for Pwn2Own never to be brought down. One reason repeatedly cited by contestants for its lack of attention is the difficulty of bypassing Google's security sandbox.

    "While we’re proud of Chrome’s leading track record in past competitions, the fact is that not receiving exploits means that it’s harder to learn and improve," wrote Chris Evans and Justin Schuh, members of the Google Chrome security team. "To maximize our chances of receiving exploits this year, we’ve upped the ante. We will directly sponsor up to $1 million worth of rewards."

    In the same blog post, the researchers said Google was withdrawing as a sponsor of the Pwn2Own contest after discovering rule changes allowing hackers to collect prizes without always revealing the full details of the vulnerabilities to browser makers.

    "Specifically, they do not have to reveal the sandbox escape component of their exploit," a Google spokeswoman wrote in an email to Ars. "Sandbox escapes are very dangerous bugs so it is not in the best interests of user safety to have these kept secret. The whitehat community needs to fix them and study them. Our ultimate goal here is to make the web safer."
    In a tweet, Aaron Portnoy, one of the Pwn2Own organizers, took issue with Google's characterization that the rules had changed and said that the contest has never required the disclosure of sandbox escapes.

    Ars will have full coverage of Pwn2Own, which commences on Wednesday, March 7.

    Posted by iang at 08:01 PM | Comments (0) | TrackBack

    February 18, 2012

    one week later - chewing on the last morsel of Trust in the PKI business

    After a week of fairly strong deliberations, Mozilla has sent out a message to all CAs to clarify that MITM activity is not acceptable.

    It would seem that Trustwave might slip through without losing their spot in the root list of major vendors. The reason for this is a combination of: up-front disclosure, a short timeframe within which the subCA was issued and used (at this stage limited to 2011), and the principle of wiser heads prevailing.

    That's my assessment at least.

    My hope is that this has set the scene. The next discovery will be fatal for that CA. The only way forward for a CA that has, at any time in the past, issued an MITM-enabled subCA would be the following:

    + up-front disclosure to the public. By that I mean, not privately to Mozilla or other vendors. That won't be good enough. Nobody trusts the secret channels anymore.
    + in the event that this is still going on, a *fast* plan, agreed with and committed to the vendors, to withdraw completely any of these MITM sub-CAs or similar arrangements. By that I mean *with prejudice* to any customers - breaching contract if necessary.

    Any deviation means termination of the root. Guys, you got one free pass at this, and Trustwave used it up. The jaws of Trust are hungry for your response.

    That is what I'll be looking for at Mozilla. Unfortunately there is no forum for Google and others, so Mozilla still remains the bellwether for trust in CAs in general.

    That's not a compliment; it's more a description of how little trust there is. If there is a desire to create some, that's possibly where we'll see the signs.

    Posted by iang at 10:53 PM | Comments (1) | TrackBack

    The Convergence of PKI

    Last week's post on the jaws of Trust sparked a bit of interest, and Chris asks what I think about Convergence in comments. I listened to this talk by Moxie Marlinspike, and it is entertaining.

    The 'new idea' is not difficult. The idea of Convergence is for independent operators (like CAcert or FSFE or FSF) to run servers that cache certificates from sites. Then, when a user browser comes across a new certificate, instead of accepting the fiat declaration from the CA, it gets a "second opinion" from one of these caching sites.

    Convergence is best seen as conceptually extending or varying the SSH or TOFU model that has already been tried in browsers through CertPatrol, Trustbar, Petnames and the like.

    In the Trust-on-first-use model, we can make a pretty good judgement call that the first time a user comes to a site, she is at low risk. It is only later, as her relationship with the site develops (think online banking), that her risk rises.

    This works because the likelihood of an attack is inversely related to its cost. One single MITM might cost X, two might cost X+delta, and so on, ever more costly. In two ways: firstly, maintaining the MITM over time against Alice drives costs up far more dramatically than linear additions of a small delta. In this sense, MITMs are like DoS attacks - they are easier to mount for brief periods. Secondly, because the attacker doesn't know of Alice's relationships beforehand, he has to cast a very broad net, so a lot of MITMs are needed to find the minnow that becomes the whale.

    First-use-caching or TOFU works then because it forces the attacker into an uneconomic position - the easy attacks are worthless.
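
    For concreteness, here's a minimal sketch of first-use caching, in the style of SSH's known_hosts file. Everything in it - the store, the names - is illustrative, not any browser's actual mechanism:

        # Sketch of trust-on-first-use (TOFU) certificate pinning, in the
        # style of SSH's known_hosts. Storage format and names are
        # illustrative assumptions, not any browser's real mechanism.
        import hashlib
        import json
        import os

        PIN_STORE = os.path.expanduser("~/.cert_pins.json")  # hypothetical cache

        def fingerprint(cert_der: bytes) -> str:
            """Hash the certificate so it can be compared across visits."""
            return hashlib.sha256(cert_der).hexdigest()

        def check_tofu(host: str, cert_der: bytes) -> str:
            pins = {}
            if os.path.exists(PIN_STORE):
                with open(PIN_STORE) as f:
                    pins = json.load(f)
            fp = fingerprint(cert_der)
            if host not in pins:
                # First visit: low risk, so pin it and proceed.
                pins[host] = fp
                with open(PIN_STORE, "w") as f:
                    json.dump(pins, f)
                return "pinned-on-first-use"
            if pins[host] == fp:
                return "matches-pin"  # same cert as last time, as expected
            return "mismatch - possible MITM (or a legitimate cert rollover)"

    The obvious gaps are the very first visit, and the mismatch case, which can't tell a rollover from an attack.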

    Convergence then extends that model by using someone else's cache, thus further boxing the attacker in. With a fully developed Convergence network in place, we can see that the attacker has to conduct what amounts to a perfect MITM closer to the site than any caching server (at least at the threat modelling level).

    Which in effect means he owns the site at least at the router level, and if that is true, then he's probably already inside and prefers more sophisticated breaches than mucking around with MITMs.

    Thus, the very model of a successful mitigation -- this is a great risk for users to accept if only they were given the chance! It's pretty much ideal on paper.
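
    On paper, indeed - in sketch form, the whole second opinion is a fingerprint comparison across network vantage points. The quorum policy below is my assumption for illustration, not Convergence's actual wire protocol:

        # Sketch of a Convergence-style "second opinion": compare the cert
        # fingerprint I was shown against what independent notaries observe
        # over their own network paths. Quorum policy is an assumption.
        from collections import Counter

        def second_opinion(my_fp: str, notary_fps: list[str], quorum: int = 2) -> bool:
            """Accept the cert only if enough notaries saw the same one."""
            return Counter(notary_fps)[my_fp] >= quorum

        # A local MITM feeds me a forged cert, but notaries reaching the
        # site from elsewhere still see the real one, so the forgery fails:
        assert not second_opinion("forged-fp", ["real-fp", "real-fp", "real-fp"])
        assert second_opinion("real-fp", ["real-fp", "real-fp", "real-fp"])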

    Now move from paper threat modelling to *the business*. We can ask several questions. Is this better than the fiat or authority model of CAs which is in place now? Well, maybe. Assuming a fully developed network, Convergence is probably in the ballpark. A serious attacker can mount several false nodes, something that was seen in peer-to-peer networks. But a serious attacker can take over a CA, something we saw in 2011.

    Another question is, is it cheaper? Yes, definitely. It means that the entire middle ground of "white label" HTTPS certs, as Mozilla now shows them, can use Convergence and get approximately the same protection. No need to muck around with CAs. High-end merchants will still go for EV because of the branding effect sold to them by vendors.

    A final question is whether it will work in the economic sense - is this going to take off? Well, I wish Moxie luck, and I wish it well, but I have my reservations.

    Like so many other developments - and I wish I could take the time to lay out all the tall pioneers who provided the high view for each succeeding innovation - where these ideas fall short is that they do not mesh well with the current economic structure of the market.

    In particular, one facet of the new market strikes me as overtaking events: the über-CA. In this concept, we re-model the world such that the vendors are the CAs, and the current crop are pushed down (or up) to become sub-CAs. E.g., imagine that Mozilla now creates a root cert and signs individually each root in their root list, and thus turns it into a sub-root list. That's easy enough, although highly offensive to some.

    Without thinking of the other ramifications too much, now add Convergence to the über-CA model. If the über-CA has taken on the responsibility, and manages the process end to end, it can also do the Convergence thing in-house. That is, it can maintain its own set of servers, do the crawling, do the responding. Indeed, we already know how to do the crawling part; most vendors have had a go at it, just for in-house research.

    Why do I think this is relevant? One word - Google. If the Convergence idea is good (and I do think it is) then Google will have already looked at it, and will have already decided how to do it more efficiently. Google have already taken more steps towards über-CA with their decision to rewire the certificate flow. Time for a bad haiku.

    Google sites are pinned now / All your 'vokes are b'long to us / Cache your certs too, soon.

    And who is the world's expert at rhyming data?

    Which all goes to say that Convergence may be a good idea, a great one even, but it is being overtaken by other developments. To put it pithily, the market is converging in another direction. 1-2 years ago, maybe, yes, while Google was still working on the browser at the standards level. Now Google are changing the way things are done, and this idea will fall out easily in their development.

    (For what it is worth, Google are just as likely to make their servers available for other browsers to use anyway, so they could just "run" the Convergence network. Who knows. The google talks to no-one, until it is done, and often not even then.)

    Posted by iang at 07:21 PM | Comments (2) | TrackBack

    February 09, 2012

    PKI and SSL - the jaws of trust snap shut

    As we all know, it's a rite of passage in the security industry to study the SSL business of certificates, and discover that all's not well in the state of Denmark. But the business of CAs and PKI rolled on regardless, seemingly because no threat ever challenged it. Because there was no risk, the system successfully dealt with the threats it had set itself. Which is itself elegant proof that academic critiques and demonstrations and phishing and so forth are not real attacks and can be ignored entirely...

    Until 2011.

    Last year, we crossed the Rubicon for the SSL business -- and by extension certificates, secure browsing, CAs and the like -- with a series of real attacks against CAs. Examples include the DigiNotar affair, the Iranian affair (attacks on around 5 CAs), and also the lesser known attack a few months back where certificates may have been forged and may have been used in an APT and may have... a lot of things. Nobody's saying.

    Either way, the scene is set. The pattern has emerged, the Rubicon is crossed, and it gets worse from here on in. A clear and present danger, perhaps? In California, they'd be singing "let's party like it's 2003," the year that SB1386 slid past our resistance and set the scene for an industry debacle in 2005.

    But for us long term observers, no party. There will now be a steady series of these shocks, and journalists will write of our brave new world - security but no security.

    With one big difference. Unlike the SB1386 breach party, where we can rely on companies not going away (even as our data does), the security system of SSL and certificates is somewhat optional. Companies can and do expose their data in different ways. We can and do invent new systems to secure or mitigate the damage. So while SB1386 didn't threaten the industry so much as briskly kick it around, this is different.

    At an attacks level, we've crossed a line, but at a wider systems level, we stand on the line.

    And that line is a cliff.

    Which brings us to this week's news. A CA called Trustwave has just admitted to selling a sub-root for the explicit purpose of MITM'ing. Read about that elsewhere.



    Now, we've known that MITMing for fun and profit was going on for a long time. Mozilla's community first learnt of it in the mid 2000s, as it was finalising its policy on CAs (a ground-breaking work that I was happy to be involved with). At that time, accusations were circulating that unnamed companies had listed their roots for the explicit purpose of doing MITMs on unwitting victims. Which raised the hairs, eyebrows and hackles of not a few of us. These accusations have been repeated from time to time, but in each case the "insiders" begged off on the excuse: we cannot break NDA or reputation.

    Each time, then, the industry players were likewise able to fob it off. Hard evidence? None. Therefore it doesn't exist, was the industry's response. We knew as individuals, yet as an industry we knew not.

    We are all agreed it does exist and it doesn't. We all have jobs to preserve, and will practice cognitive dissonance to the very end.

    Of course this situation couldn't last, because a secret of this magnitude never survives. In this case, the company that sold the MITM sub-root, Trustwave, has looked at 2011, and realised the profit from that one sub-root isn't worth the risk of the DigiNotar experience (bankruptcy). Their decision is to 'fess up now and take it on the chin, because later may be too late.

    Which leads to a dilemma, and we the players have divided, one after the other, onto each side of that dilemma:

    To drop the Trustwave root, or not?



    That is the question. First, the case for the defence: On the one hand, we applaud the honesty of a CA coming forward and cleaning house. It's pretty clear that we need our CAs to do this. Otherwise we're not going to get anywhere with this Trust thing. We need to encourage the CAs to work within the system.

    Further, if we damage a CA, we damage customers. The cost in lost business is traumatic, and the list of US government agencies that depend on this CA has suddenly become impressive. Just like DigiNotar, it seems, whose fall spread a wave of mistrust through the government IT departments of the Netherlands. Also, we have to keep an eye on (say) a bigger, more public-facing CA going down in the aftermath - and the damage to all its customers. And the next, etc.

    Is lost business more important than simple faith in those silly certificates? I think lost business is much more important - revenue, jobs, money flowing to keep all the different parts of the economy going: these are our most important assets. Ask any politician in the USA or Europe or China; this is their number one problem!

    Finally, it is pretty clear and accepted that the business purpose to which the sub-root was put was known and tolerated. Although it is uncomfortable to spy on one's employees, it is just business. Organisations own their data systems, have the responsibility to police them, and have advised their people that this is what they are going to do. SSL included, if necessary.

    This view has it that Trustwave has done the right thing. Therefore, pass. And the more positive proponents suggest an amnesty, after which period there is summary execution for the sin - root removal from the list distributed by the browsers. It's important not to cause disruption.



    Now the case for the Prosecution! On the other hand, damn spot: the CA clearly broke their promise. Out!

    Three ways did they breach the trust. Firstly, it is expressed in the Mozilla policy, and presumably in others', that certificates are only issued to people who own/control their domains. This is no light or optional thing -- we rely on the policy because CAs and Mozilla and other vendors and auditors all routinely practice secrecy in this business.

    We *must rely on the policy* because they deny us the right to rely on anything else!

    Secondly, it is what the public believe in; it is the expectation of any purchaser or user of the product, written or not. It is a simple message, and brooks no complicated exceptions. Either your connection to your online bank is secure, and nobody else can see it, *including your employer or IT department*. Or not.

    Try explaining this exception to your grandmother, if the words do not work for you.

    Finally, the raison d'être: it is the purpose, even the entire goal, of the certificate design to do exactly the opposite. The reason we have CAs like Trustwave is to stop the MITM. If they don't stop the MITM, then *we don't need the heavyweight certificate system*, we don't need CAs, and we don't need Mozilla's root list or that of any other vendor.

    We can do security much more cost-effectively if we drop the 100% always-on absolutist MITM protection.

    Given this breach of trust, what else can we trust in? Can we trust their promises that the purpose was maintained? That the cert never left the building? That secret traffic wasn't vectored in? That HSMs are worth something and audits ensure all is well in Denmark?

    This rather being the problem with trust: lie once, lose it.



    There being two views presented, it has to be said that both views are valid. The players are lining up on either side of the line, but they probably aren't so well aware of where this is going.

    Only one view is going to win out. Only one side wins this fight.

    And in so doing, in winning, the winner sows the seeds of his own destruction.

    Because whichever side you religiously take, if you look at the counter-argument to your preferred position, your thesis crumbles for its fallacies.

    The jaws of trust just snapped shut on the players who played too long, too hard, too profitably.

    Like the financial system. We are no longer worried about the bankruptcy of one or two banks, or a few defaults by some fly specks on the map of Europe. We are now looking at a change that will ripple out and remove what vestiges of purpose and faith were left in PKI. We are now looking at all the other areas of the business that will be affected; ones that bought into the promise even though they knew they shouldn't have.

    Like the financial system, a place of uncanny similarity, each new shock makes us wonder and question. Wasn't all this supposed to be solved? Where are the experts? Where is the trust?

    We're about to find out the timeless meaning of Caveat Emptor.

    Posted by iang at 10:54 PM | Comments (7) | TrackBack

    January 29, 2012

    Why Threat Modelling fails in practice

    I've long realised that threat modelling isn't quite it.

    There's some malignancy in the way the Internet IT Security community approached security in the 1990s that became a cancer in our protocols in the 2000s. Eventually I worked out that the problem with the aphorism What's Your Threat Model (WYTM?) was the absence of a necessary first step - the business model - which lack permitted threat modelling to be de-linked from humanity without anyone noticing.

    But I still wasn't quite there, it still felt like wise old men telling me "learn these steps, swallow these pills, don't ask for wisdom."

    In my recent risk management work, it has suddenly become clearer. Taking from notes and paraphrasing, let me talk about threats versus risks, before getting to modelling.

    A threat is something that threatens, something that can cause harm, in the abstract sense. For example, a bomb could be a threat. So could an MITM, an eavesdropper, or a sniper.

    But, separating the abstract from the particular, a bomb does not necessarily cause a problem unless there is a connection to us. Literally, it has to be capable of doing us harm, in a direct sense. For this reason, the methodologists say:

    Risk = Threat * Harm

    Any random bomb can't hurt me, approximately, but a bomb close to me can. With a direct possibility of harm to us, a threat becomes a risk. The methodologists also say:

    Risk = Consequences * Likelihood

    That connection or context of likely consequences to us suddenly makes it real, as well as hurtful.

    A bomb, then, is a threat; but, to a high degree of reliability, just any bomb presents no risk to anyone in particular. A bomb under my car is now a risk! To move from threats to risks, we need to include places, times, agents, intents, chances of success, possible failures ... *victims* ... all the rest needed to turn the abstract scariness into direct painful harm.
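
    To see the arithmetic, a toy calculation (all numbers invented purely for illustration):

        # Toy numbers only, illustrating Risk = Consequences * Likelihood.
        # The same threat scores wildly different risks once context fixes
        # the likelihood of harm *to us*.
        def risk(consequences: float, likelihood: float) -> float:
            return consequences * likelihood

        bomb = 1_000_000               # harm if it goes off next to me (made-up units)
        print(risk(bomb, 0.000000001)) # "any random bomb": 0.001 - negligible
        print(risk(bomb, 0.5))         # "a bomb under my car": 500000.0 - act now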

    We need to make it personal.

    To turn the threatening but abstract bomb from a threat to a risk, consider a plane, one which you might have a particular affinity to because you're on it or it is coming your way:

    ⇒ people dying
    ⇒ financial damage to plane
    ⇒ reputational damage to airline
    ⇒ collateral damage to other assets
    ⇒ economic damage caused by restrictions
    ⇒ war, military raids and other state-level responses

    Lots of risks! Speaking of bombs as planes: I knew someone booked on a plane that ended up in a tower -- except she was late. She sat on the tarmac for hours in the following plane.... The lovely lady called Dolly who cleaned my house had a sister who should have been cleaning a Pentagon office block, but for some reason ... not that day. Another person I knew was destined to go for coffee at ground zero, but woke up late. Oh, and his cousin was a fireman who didn't come home that day.

    Which is perhaps to say, that day, those risks got a lot more personal.

    We all have our very close stories to tell, but the point here is that risks are personal, threats are just theories.

    Let us now turn that around and consider *threat modelling*. By its nature, threat modelling deals only with threats, not risks, and it cannot therefore reach out to its users on a direct, harmful level. Threat modelling is by definition limited to theoretical, abstract concerns. It stops before it gets practical, real, personal.

    Maybe this all amounts to no more than a lot of fuss about semantics?

    To see if it matters, let's look at some examples: If we look at that old saw, SSL, we see rhyme. The threat modelling done for SSL took the rather abstract notions of CIA -- confidentiality, integrity and authenticity -- and ended up inverse-pyramiding on a rather too-perfect threat of MITM -- Man-in-the-Middle.

    We can also see, through the lens of threat analysis versus risk analysis, that the designers' explicit choice to create a protocol that protects *any* connection left them unable to do any risk analysis at all; protecting particular assets, such as the credit cards named in the advertising blurb, was therefore conveniently not part of the analysis (which we knew, because any risk analysis of credit cards reveals different results).

    Threat modelling therefore reveals itself to be theoretically sound but not necessarily helpful. It is then no surprise that SSL performed perfectly against its chosen threats, but did little to fend off the risks that users face. Indeed, arguably, as much as it might have stopped some risks, it helped other risks to proceed in natural evolution. Because SSL dealt perfectly with all its chosen threats, it ended up providing a false sense of security against harm-incurring risks (remember SSL & Firewalls?).

    OK, that's an old story, and maybe completely and boringly familiar to everyone else? What about the rest? What do we do to fix it?

    The challenge might then be to take Internet protocol design from the very plastic, perfect but random tendency of threat modelling and move it forward to the more haptic, consequences-directed chaos of risk modelling.

    Or, in other words, we've got to stop conflating threats with risks.

    Critics can rush forth and grumble, and let me be the first: Moving to risk modelling is going to be hard, as any Internet protocol at least at the RFC level is generally designed to be deployed across an extremely broad base of applications and users.

    Remember IPSec? Do you feel the beat? This might be the reason why we say that only end-to-end security cuts the mustard: end-to-end implies an application, and the application draws in the users, permitting us to do real risk modelling.

    It might then be impossible to do security at the level of an Internet-wide, application-free security protocol, a criticism that isn't new to the IETF. Recall the old ISO layer 5, sometimes called "the security layer"?

    But this doesn't stop the conclusion: threat modelling will always fail in practice, because by definition, threat modelling stops before practice. The place where users are being exposed and harmed can only be investigated by getting personal - including your users in your model. Threat modelling does not go that far: it does not consider the risks against any particular set of users who will be harmed by those risks in full flight. Threat modelling stops at the theoretical, and must by the law of ignorance fail in the practical.

    Risks are where harm is done to users. Risk modelling therefore is the only standard of interest to users.

    Posted by iang at 02:02 PM | Comments (6) | TrackBack

    January 21, 2012

    for bright times for CISOs ... turn on the light!

    Along the lines of "The CSO should have an MBA" we have some progress:

    City University London will offer a new postgraduate course from September, designed to help information security and risk professionals progress to managerial roles.

    The new Masters in Information Security and Risk (MISR) aims to address a gap in the market for IT professionals that can “speak business”. One half of the course is devoted to security strategy, risk management and security architecture, while the other half focuses on putting this into a business-oriented context.

    “Talking to people out in the industry, particularly if you go to the financial sector, they say one real problem is they don't have a security person who they can take along to the board meeting without having to act as an interpreter; so we're putting together a programme to address this need,” explained course director Professor Kevin Jones, speaking at the Infosecurity Europe Press Conference in London yesterday.

    Posted by iang at 04:48 AM | Comments (0) | TrackBack

    December 08, 2011

    Two-channel breached: a milestone in threat evaluation, and a floor on monetary value

    Readers will know we first published the account of "Man in the Browser" by Philipp Gühring way back when, and followed it up with news that the way forward was dual-channel transaction signing. In short, this meant the bank sending an SMS to your handy mobile phone with the transaction details, and a check code to enter if you wanted the transaction to go through.
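
    In sketch form, the bank's side might look something like this. The essential property is that the code is bound to the transaction details the user confirms, so a man-in-the-browser can't reuse it for a different payee; the HMAC construction and truncation are my assumptions for illustration, not any bank's actual scheme:

        # Sketch of dual-channel transaction signing: the bank binds a short
        # one-time code to the transaction details and sends it over the
        # second channel (SMS); the user types it back into the browser.
        # Construction and truncation are illustrative assumptions only.
        import hashlib
        import hmac
        import secrets

        SERVER_KEY = secrets.token_bytes(32)  # held by the bank

        def make_code(payee: str, amount_cents: int, nonce: bytes) -> str:
            msg = f"{payee}|{amount_cents}|{nonce.hex()}".encode()
            mac = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
            return mac[:6]  # short enough to read out of an SMS

        # Bank side: generate, then SMS "Pay ACME Corp $450.00? Code: 3f9a1c"
        nonce = secrets.token_bytes(8)
        code_sent = make_code("ACME Corp", 45_000, nonce)

        # On submission, recompute over the details as displayed to the user.
        # A trojan that silently swaps the payee produces a mismatch:
        assert hmac.compare_digest(code_sent, make_code("ACME Corp", 45_000, nonce))
        assert not hmac.compare_digest(code_sent, make_code("Mallory", 45_000, nonce))

    The design choice that matters is binding the code to what the user confirms, not merely to the session.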

    On the face of it, pretty secure. But at the back of our minds, we knew that this was just an increase in difficulty: a crook could seek to control both channels. And so it comes to pass:

    In the days leading up to the fraud being committed, [Craig] had received two strange phone calls. One came through to his office two-to-three days earlier, claiming to be a representative of the Australian Tax Office, asking if he worked at the company. Another went through to his home number when he was at work. The caller claimed to be a client seeking his mobile phone number for an urgent job; his daughter gave out the number without hesitation.

    The fraudsters used this information to make a call to Craig’s mobile phone provider, Vodafone Australia, asking for his phone number to be “ported” to a new device.

    As the port request was processed, the criminals sent an SMS to Craig purporting to be from Vodafone. The message said that Vodafone was experiencing network difficulties and that he would likely experience problems with reception for the next 24 hours. This bought the criminals time to commit the fraud.

    The unintended consequence of the phone being used for transaction signing is that the phone is now worth maybe as much as the fraud you can pull off. Assuming the crooks have already cracked the password for the bank account (something probably picked up on a market for pennies), the crooks are now ready to spend substantial amounts of time to crack the phone. In this case:

    Within 30 minutes of the port being completed, and with a verification code in hand, the attackers were spending the $45,000 at an electronics retailer.

    Thankfully, the abnormally large transaction raised a red flag within the fraud unit of the Commonwealth Bank before any more damage could be done. The team tried – unsuccessfully – to call Craig on his mobile. After several attempts to contact him, Craig’s bank account was frozen. The fraud unit eventually reached him on a landline.

    So what happens now that the crooks walked with $45k of juicy electronics (probably convertible to cash at 50-70% off face over eBay)?

    As is standard practice for online banking fraud in Australia, the Commonwealth Bank has absorbed the hit for its customer and put $45,000 back into Craig's account.

    A NSW Police detective contacted Craig on September 15 to ensure the bank had followed through with its promise to reinstate the $45,000. With this condition satisfied, the case was suspended on September 29 pending the bank asking the police to proceed with the matter any further.

    One local police investigator told SC that in his long career, a bank has only asked for a suspended online fraud case to be investigated once. The vast majority of cases remain suspended. Further, SC Magazine was told that the police would, in any case, have to weigh up whether it has the adequate resources to investigate frauds involving such small amounts of money.

    No attempt was made at a local police level to escalate the Craig matter to the NSW Police Fraud and Cybercrime squad, for the same reasons.

    In a paper I wrote in 2008, I stated that for some value below X, the police wouldn't lift a finger. The Prosecutor has too much important work to do! What we have here is a very definite floor, below which Internet systems that transmit and protect value are unable to rely on external resources such as the law. Reading more:

    But the Commonwealth Bank claims it has forwarded evidence to the NSW and Federal Police forces that could have been used to prosecute the offenders.

    The bank’s fraud squad – which had identified the suspect transactions within minutes of the fraud being committed - was able to track down where the criminals spent the stolen money.

    A spokesman for the bank said it “dealt with both Federal and State (NSW) Police regarding the incident” and that “both authorities were advised on the availability of CCTV footage” of the offenders spending their ill-gotten gains.

    “The Bank was advised by one of the authorities that the offender had left the country – reducing the likelihood of further action by that authority,” the spokesperson said.

    This number goes up dramatically once we cross a border. In that paper I suggested $25k; here we have a reported number of $45k.

    Why is that important? Because, some systems have implicit guarantees that go like "we do blah and blah and blah, and then you go to the police and all your problems are solved!" Sorry, not if it is too small, where small is surprisingly large. Any such system that handwaves you to the police without clearly indicating the floor of interest ... is probably worthless.

    So when would you trust a system that backstopped to the police? I'll stick my neck out and say, if it is beyond your borders, and you're risking >> $100k, then you might get some help. Otherwise, don't bet your money on it.

    Posted by iang at 04:34 PM | Comments (5) | TrackBack

    October 26, 2011

    Phishing doesn't really happen? It's too small to measure?

    Two Microsoft researchers have published a paper pouring scorn on claims cyber crime causes massive losses in America. They say it’s just too rare for anyone to be able to calculate such a figure.

    Dinei Florencio and Cormac Herley argue that samples used in the alarming research we get to hear about tend to contain a few victims who say they lost a lot of money. The researchers then extrapolate that to the rest of the population, which gives a big total loss estimate – in one case of a trillion dollars per year.

    But if these victims are unrepresentative of the population, or exaggerate their losses, they can really skew the results. Florencio and Herley point out that one person or company claiming a $50,000 loss in a sample of 1,000 would, when extrapolated, produce a $10 billion loss for America as a whole. So if that loss is not representative of the pattern across the whole country, your total could be $10 billion too high.

    Having read the paper, the above is about right. And sufficient description, as the paper goes on for pages and pages making the same point.
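
    Still, the skew mechanism is worth seeing in numbers. A sketch of the extrapolation, with the population figure as my round assumption:

        # The extrapolation trick: scale the mean survey loss up to the
        # population. One unrepresentative $50,000 claim in a sample of
        # 1,000 adds $50 a head, i.e. ~$10bn across ~200m adults -
        # Florencio and Herley's point exactly.
        US_ADULTS = 200_000_000  # round assumption for illustration
        SAMPLE = 1_000

        losses = [0] * (SAMPLE - 1) + [50_000]  # everyone else reports zero
        mean_loss = sum(losses) / SAMPLE        # $50 per respondent
        estimate = mean_loss * US_ADULTS
        print(f"${estimate:,.0f}")              # $10,000,000,000 from one claim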

    Now, I've also been skeptical of the phishing surveys. So, for a long time, I've just stuck to the number of "about a billion a year." And waited for someone to challenge me on it :) Most of the surveys seemed to head in that direction, and what we would hope for would be more useful numbers.

    So far, Florencio and Herley aren't providing those numbers. The closest I've seen is the FBI-sponsored report that derives from reported fraud rather than surveys. Which seems to point in the direction of 10 billion a year for all identity-related consumer frauds, plus a somewhat handwavy claim that there is a ratio of 10:1 between all fraud and Internet-related fraud.

    I wouldn't be surprised if the number was really 100 million. But that's still a big number. It's still bigger than the income of Mozilla, maker of the 2nd browser by user numbers. It's still bigger than the budget of the Anti-Phishing Working Group, an industry-sponsored private thinktank. And CABForum, another industry-only group.

    So who benefits from inflated figures? The media, because of the scare stories, and the public and private security organisations and businesses who provide cyber security. The above parliamentary report indicated that in 2009 Australian businesses spent between $1.37 and $1.95 billion in computer security measures. So on the report’s figures, cyber crime produces far more income for those fighting it than those committing it.

    Good question from the SMH. The answer is that it isn't in any player's interest to provide better figures. If so (and we can see support in the Silver Bullets structure), what is Florencio and Herley's intent in popping the balloon? They may be academically correct in trying to deflate the security market's obsession with measurable numbers, but without some harder numbers of their own, one wonders what's the point?

    What is the real number? Florencio and Herley leave us dangling at that point. Are they setting up to provide those figures one day? Without that forthcoming, I fear the paper is destined to be just more media fodder, as shown in its salacious title. Iow, pointless.

    Hopefully numbers are coming. In an industry steeped in Numerology and Silver Bullets, facts and hard numbers are important. Until then, your rough number is as good as mine -- a billion.

    Posted by iang at 05:05 PM | Comments (2) | TrackBack

    October 23, 2011

    HTTPS everywhere: Google, we salute you!

    Google radically expanded Tuesday its use of bank-level security that prevents Wi-Fi hackers and rogue ISPs from spying on your searches.

    Starting Tuesday, logged-in Google users searching from Google’s homepage will be using https://google.com, not http://google.com — even if they simply type google.com into their browsers. The change to encrypted search will happen over several weeks, the company said in a blog post Tuesday.


    We have known for a long time that the answer to web insecurity is this: There is only one mode, and it is secure.

    (I use the royal we here!)

    This is evident in breaches led by phishing, as the users can't see the difference between HTTP and HTTPS. The only solution at several levels is to get rid of HTTP. Entirely!

    Simply put, we need SSL everywhere.
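
    Operationally, "one mode only" is cheap to state. A minimal sketch using the Python standard library - the hostname is a placeholder, and note that the HSTS header telling browsers to refuse HTTP in future must be sent on the HTTPS side, since browsers ignore it over plain HTTP:

        # Minimal sketch of "there is only one mode": the HTTP port does
        # nothing except redirect every request to HTTPS. Hostname is a
        # placeholder; binding port 80 may need privileges (try 8080 to test).
        from http.server import BaseHTTPRequestHandler, HTTPServer

        SITE = "www.example.com"  # placeholder

        class RedirectToHTTPS(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(301)  # permanent: this site has one mode only
                self.send_header("Location", f"https://{SITE}{self.path}")
                self.end_headers()
            do_HEAD = do_GET  # treat HEAD the same way

        if __name__ == "__main__":
            HTTPServer(("", 80), RedirectToHTTPS).serve_forever()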

    Google are seemingly the only big corporate that have understood and taken this message to heart.

    Google has been a leader in adding SSL support to cloud services. Gmail is now encrypted by default, as is the company’s new social network, Google+. Facebook and Microsoft’s Hotmail make SSL an option a user must choose, while Yahoo Mail has no encryption option, beyond its initial sign-in screen.

    EFF and CAcert are small organisations doing it as and when we can... Together, security-conscious organisations are slowly migrating all their sites to SSL and HTTPS.

    It will probably take a decade. Might as well start now -- where's your organisation's commitment to security? Amazon, Twitter, Yahoo? Facebook!

    Posted by iang at 05:24 AM | Comments (2) | TrackBack

    October 13, 2011

    Founders of SSL call game over?

    RSA's Coviello declares the new threat environment:

    "Organisations are defending themselves with the information security equivalent of the Maginot Line as their adversaries easily outflank perimeter defences," Coviello added. "People are the new perimeter contending with zero-day malware delivered through spear-phishing attacks that are invisible to traditional perimeter-based security defences such as antivirus and intrusion detection systems." ®

    The recent spate of attacks does not tell us that the defences are weak - that is something we've known for some time. E.g., from 20th April, 2003, "The Maginot Web" said it. Yawn. Taher Elgamal, the guy who did the crypto in SSL at Netscape back in 1994, puts it this way:

    How about those certificate authority breaches against Comodo and that wiped out DigiNotar?

    It's a combination of PKI and trust models and all that kind of stuff. If there is a business in the world that I can go to and get a digital certificate that says my name is Tim Greene then that business is broken, because I'm not Tim Greene, but I've got a certificate that says this is my name. This is a broken process in the sense that we allowed a business that is broken to get into a trusted circle. The reality is there will always be crooks, somebody will always want to make money in the wrong way. It will continue to happen until the end of time.

    Is there a better way than certificate authorities?

    The fact that browsers were designed with built-in root keys is unfortunate. That is the wrong thing, but it's very difficult to change that. We should have separated who is trusted from the vendor. ...

    What the recent rash of exploits signals is that the attackers are now lined up and deployed against our weak defences:

    Coviello said one of the ironies of the attack was that it validated trends in the market that had prompted RSA to buy network forensics and threat analysis firm NetWitness just before the attack.

    This is another unfortunate hypothesis in the market for silver bullets: we need real attacks to tell us real security news. OK, now we've got it. Message heard, loud and clear. So, what to do? Coviello goes on:

    Security programs need to evolve to be risk-based and agile rather than "conventional" reactive security, he argued.

    "The existing perimeter is not enough, which is why we bought NetWitness. The NetWitness technology allowed us to determine damage and carry out remediation very quickly," Coviello said.


    The existing perimeter was an old idea - one static defence, and the attacker would walk up, hit it with his head, and go elsewhere in confusion. Build it strong, went the logic, and give the attacker a big headache! ... but the people in charge at the time were steeped in the school of cryptology and computer science, and consequently lacked the essential visibility over the human aspects of security to understand how limiting this concept was, and how the attacker was blessed with sight and an ability to walk around.

    Risk management throws out the old binary approach completely. To some extent, it has arrived just in time, as a methodology. But to a large extent, the marketplace hasn't moved. Like deer in headlights, the big institutions watch the trucks approach, looking at each other for a solution.

    Which is what makes these comments by RSA and Taher Elgamal significant. More than others, these people built the old SSL infrastructure. When the people who built it call game over, it's time to pay attention.

    Posted by iang at 10:31 AM | Comments (5) | TrackBack

    August 01, 2011

    A tale of phishers, CAs, vendors and losers: I've come to eat your babies!

    We've long documented the failure of PKI and secure browsing to be an effective solution to security needs. Now comes spectacular proof: sites engaged in carding, which is the trading of stolen credit card information, have always protected their trading sites with SSL certs of the self-signed variety. According to brief searching on 'a bunch of generic "sell CVV", "fullz", "dumps" ones,' conducted informally by Peter Gutmann, some of the CAs are now issuing certificates to carders.

    This amounts to a new juvenile culinary phase in the target-rich economy of cyber-crime:

    Phisher: I've come to eat your babies!
    CA: Oh, yes, you'll be needing a certificate for that, $200 thanks!

    Although not scientifically conducted or verified, both Mozilla and the CA concerned indicated that the criminals can have their certs and eat them too. As long as they follow the same conditions as everyone else, that's fine.

    Except, it's not. Firstly, it's against the law in almost all places to aid & abet a criminal. As Blackstone put it (ex Wikipedia):

    "AN accessory after the fact may be, where a person, knowing a felony to have been committed, receives, relieves, comforts, or assists the felon.17 Therefore, to make an accessory ex post facto, it is in the first place requisite that he knows of the felony committed.18 In the next place, he must receive, relieve, comfort, or assist him. And, generally, any assistance whatever given to a felon, to hinder his being apprehended, tried, or suffering punishment, makes the assistor an accessory. As furnishing him with a horse to escape his pursuers, money or victuals to support him, a house or other shelter to conceal him, or open force and violence to rescue or protect him."

    The point here made by Blackstone, and translated into the laws of many lands, is that the assistance given need not be specific; it is broad. If we are in some sense assisting in the commission of the crime, then we are accessories. For which there are penalties.

    And, these penalties are as if we were the criminals. For those who didn't follow the legal blah blah above, the simple thing is this: it's against the law. Go to jail, do not pass go, do not collect $200.

    Secondly, consider the security diaspora. Users were hoping that browsers such as Firefox, IE, etc., would protect them from phishing. The vendors' position, policy-wise and code-wise, is that their security mechanism to protect users from phishing is to provide PKI certificates, which might evidence some level of verification of your counter-party. This reduces down to a single statement: if you are ripped off by someone who uses a cert against you, you might know something about them.

    This protection is (a) ineffective against phishing, which is shown frequently every time a phisher downgrades the HTTPS to HTTP, (b) now shared & available equally with phishers themselves to assist them in their crime, and who now apparently feel that (c) the protection offered in encryption against their enemies outweighs the legal threat of some identity being revealed to their enemies. Users lose.

    In open list on Mozilla's forum, the CA concerned saw no reason to address the situation. Other CAs also seem to issue to equally dodgy sites, so it's not about one CA. The general feeling in the CA-dominated sector is that identity within the cert is sufficient reason to establish a reasonable security protection, notwithstanding that history, logic, evidence and now the phishers themselves show such a claim is about as reasonable and efficacious as selling recipes for marinated babies.

    It seems Peter and I stand alone. In some exasperation, I consulted with Mozilla directly. To little end; Mozilla also believe that the phishing community are deserving of certificates, they simply have to ask a CA to be invited into our trusted inner-nursery. I've no doubt that the other vendors will believe and maintain the same, an approach to legal issues sometimes known as the ostrich strategy. The only difference here being that Mozilla maintains an open forum, so it can be embarrassed into a private response, whereas CAs and other vendors can be embarrassed into silence.

    Posted by iang at 06:13 PM | Comments (0) | TrackBack

    June 14, 2011

    BitCoin - the bad news

    Predictions based on theory have been presented. Not good. BitCoin will fundamentally be a bad money because it will be too volatile, according to the laws of economics. You can't price goods in a volatile unit. Rated for speculation only, bubble territory, and quite reasonably likened to a Ponzi scheme (if not exactly a Ponzi scheme, with a nod to Patrick). Sorry folks, the laws of economics know no bribes, favours, demands.

    And so theory comes to practice:

    Bitcoin slump follows senators’ threats - Correlation or causation?

    By Richard Chirgwin.

    As any investment adviser will tell you, it’s a bad idea to put all your eggs in one basket. And if Rick Falkvinge was telling the truth when he said all his savings were now in Bitcoin, he’s been taken down by a third in a day.

    Following last week’s call by US senators for an investigation into virtual crypto-currency Bitcoin, its value is slumping.

    According to DailyTech, Bitcoins last Friday suffered more than 30 percent depreciation in value – a considerable fall given that the currency’s architecture is designed to inflate its value over time.

    The slump could reflect a change in attitude among holders of Bitcoins: if the currency were to become less attractive to pay for illegal drugs, then dealers and their customers would dump their Bitcoins in favour of some other medium.

    Being dumped by PayPal won’t have helped either. As DailyTech pointed out, PayPal has a policy against virtual currencies, so by enforcing the policy, PayPal has made it harder to trade Bitcoins.

    The threat of regulation may also have sent shivers down Bitcoin-holders’ spines. The easiest regulatory action – although requiring international cooperation – would be to regulate, shut down or tax Bitcoin exchanges such as the now-famous Mt Gox. However, a sufficient slump may well have the same effect as a crackdown: whether Mt Gox is a single speculator or a group of traders, it’s unlikely to have the kind of backing (or even, perhaps, the hedging) that enables “real” currency traders to survive sharp swings in value.
    ......

    Then the problem of a bubble in digital cash is compounded by theft:


    I just got hacked - any help is welcome!

    Hi everyone. I am totally devastated today. I just woke up to see a very large chunk of my bitcoin balance gone to the following address:

    1KPTdMb6p7H3YCwsyFqrEmKGmsHqe1Q3jg

    Transaction date: 6/13/2011 12:52 (EST)

    I feel like killing myself now. This get me so f'ing pissed off. If only the wallet file was encrypted on the HD. I do feel like this is my fault somehow for now moving that money to a separate non windows computer. I backed up my wallet.dat file religiously and encrypted it but that does not do me much good when someone or some trojan or something has direct access to my computer somehow.

    The dude lost 25,000 BTC, which at recent "valuations" on the exchange ($10 to $30 per BTC) I calculate as $250,000 to $750,000.

    Yeah. When we were building digital cash systems, we were *very conscious* that the weak link in the entire process was the user's PC. We did two things: we chose nymous, directly accounted transactions, so that we could freeze the whole thing if we needed to, and we intended to go for secure platforms for large values. Those secure platforms are now cheaply available (they weren't then).

    BitCoin went too fast. Now people are learning the lessons.

    (Note the above link is no longer working. Luckily I had it cached. I'm not attesting to its accuracy, just its relevance to the debate.)


    Also: For those who can stomach more bad news...
    1. Bitcoin and tulip bulbs
    2. Is BitCoin a triple entry system?
    3. BitCoin - the bad news

    Posted by iang at 10:51 AM | Comments (1) | TrackBack

    June 07, 2011

    RSA Pawned - Black Queen runs amok behind US lines of defence

    What to learn from the RSA SecureID breach?

    RSA has finally admitted publicly that the March breach into its systems has resulted in the compromise of their SecurID two-factor authentication tokens.

    Which points to:

    In a letter to customers Monday, the EMC Corp. unit openly acknowledged for the first time that intruders had breached its security systems at defense contractor Lockheed Martin Corp. using data stolen from RSA.

    It's a targeted attack across multiple avenues. This is a big shift in the attack profile, and it is perhaps the first serious evidence of the concept of Advanced Persistent Threats (APTs).

    What went wrong at the institutional level? Perhaps something like this:

    • A low-threat environment in the 1990s
    • led to the success of the low-threat SecurID token
    • (based on a non-diversified model that sourced back to a single company),
    • which, peace in our time, translated to a lack of desire to evolve in the 2000s,
    • and the industry grew to love "best practices with a vengeance", as everyone from finance to defence relied on the same approach,
    • and on domination of secure tokens by one brand-name supplier.
    • Meanwhile, we watched the evolution of attack scenarios, rolling on through the phishing-and-breaches pincer movement of the early 2000s up to the APTs of now,
    • while any thought & leadership in the security industry withered and died.

    So, with a breach in the single point of failure, we are looking at an industry-wide replacement of all 40 million SecurID tokens.

    Which presumably will be a fascinating exercise, and one from which we should be able to learn a lot. It isn't often that we see a SPOF event, and it's a chance to learn just what impact a single point of failure has:

    The admission comes in the wake of cyber intrusions into the networks of three US military contractors: Lockheed Martin, L-3 Communications and Northrop Grumman - one of them confirmed by the company, others hinted at by internal warnings and an unusual domain name and password reset process

    But one would also be somewhat irresponsible not to ask what happens next. Simply replacing the SecurID fobs and resetting the secret sauce at RSA does not seem to satisfy as *a solution*, although we can understand that a short-term hack might be needed.

    Chief (Information) Security Officers everywhere will probably be thinking that we need a little more re-thinking of the old 1990s models. Good luck, guys! You'll probably need a few more breaches to wake up the CEOs, so you can get the backing you need to go beyond "best practices" and start doing the job seriously.


    In contrast, the very next post discusses where we're at when we fail to meet "best practices!"

    Posted by iang at 11:45 AM | Comments (4) | TrackBack

    May 20, 2011

    Hold the press! Corporates say that SSL is too slow to be used all the time?

    Google researchers say they've devised a way to significantly reduce the time it takes websites to establish encrypted connections with end-user browsers, a breakthrough that could make it less painful for many services to offer the security feature. ....

    The finding should come as welcome news to those concerned about online privacy. With the notable exceptions of Twitter, Facebook, and a handful of Google services, many websites send the vast majority of traffic over unencrypted channels, making it easy for governments, administrators, and Wi-Fi hotspot providers to snoop or even modify potentially sensitive communications while in transit. Companies such as eBay have said it's too costly to offer always-on encryption.

    The Firesheep extension introduced last year for the Firefox browser drove home just how menacing the risk of unencrypted websites can be.

    Is this a case of NIST taketh away what Google granteth?

    Posted by iang at 12:25 PM | Comments (3) | TrackBack

    March 30, 2011

    Revising Top Tip #2 - Use another browser!

    It's been a long time since I wrote up my Security Top Tips, and things have changed a bit since then. Here's an update. (You can see the top tips about half way down on the right menu block of the front page.)

    Since then, browsing threats have got a fair bit worse. Although browsers have done some work to improve things, their overall efforts have not really made any impact on the threats. Worse, we are now seeing MITBs being experimented with, and many more attackers getting in on the act.

    To cope with this heightened risk to our personal node, I experimented a lot with using private browsing, cache clearing, separate accounts and so forth, and finally hit on a fairly easy method: Use another browser.

    That is, use something other than the browser that one uses for day-to-day browsing. I use Firefox for general stuff, and for a long time I've been worried that it doesn't really do enough in the battle for my user security. Safari is also loaded on my machine (thanks to Top Tip #1). I don't really like using it, as its interface is a little bit weaker than Firefox's (especially the SSL visibility) ... but in this role it does very well.

    So for some time now, for all my online banking and similar usage, I have been using Safari. These are my actions:

    • I start Safari up
    • click on Safari / Private Browsing
    • use google or memory to find the bank
    • inspect the URL and head in.
    • After my banking session I shut down Safari.

    I don't use bookmarks, because that's an easy place for a trojan to look (I'm not entirely sure of that technique, but it seems like an obvious hint).

    "Use another browser" creates an ideal barrier between a browsing browser and a security browser, and Safari works pretty well in that role. It's like an air gap, or an app gap, if you like. If you are on Microsoft, you could do the same thing using IE and Firefox, or you could download Chrome.

    I've also tested it on my family ... and it is by far the easiest thing to tell them. They get it! Assume your browsing habits are risky, and don't infect your banking. This works well because my family share their computers with kids, and the kids have all been instructed not to use Safari. They get it too! They don't follow the logic, but they do follow the tool name.

    What says the popular vote? Try it and let me know. I'd be interested to hear of any cross-browser threats, as well :)


    A couple of footnotes: Firstly, belated apologies - to anyone who's been tricked by the old Top Tip - for taking so long to write this one up. Secondly, I've dropped the Petnames / Trustbar Top Tip because it isn't really suitable for the mass of users (e.g., my mother), and these fine security tools never really attracted the attention of the browser-powers-that-be, so they died away as hobby efforts tend to do. Maybe the replacement would be "Turn on Private Browsing"?

    Posted by iang at 01:54 AM | Comments (8) | TrackBack

    March 04, 2011

    more on HTTPS everywhere -- US Senator writes to websites?!

    Normally I'd worry when a representative of the US people asked sites to adopt more security ... but here goes:

    Senator Charles Schumer is calling on major websites in the United States to make their pages more secure to protect those connecting from Wi-Fi hotspots, various media outlets are reporting.

    In a letter sent to Amazon, Twitter, Yahoo, and others, the Senator, a Democrat representing New York, asked the websites to switch to more secure HTTPS pages in order to help prevent people accessing the Internet from public connections in places like restaurants and bookstores from being targeted by hackers and identity thieves.

    "As the operator of one of the world's most popular websites, you provide a valuable service to Internet users across America," Schumer wrote, according to Tony Bradley of PCWorld. "With the privilege of serving millions of U.S. citizens, however, comes the responsibility to protect them while they are on your site."

    "Free Wi-Fi networks provide hackers, identity thieves and spammers alike with a smorgasbord of opportunities to steal private user information like passwords, usernames, and credit card information," the Senator added. "The quickest and easiest way to shut down this one-stop shop for identity theft is for major websites to switch to secure HTTPS web addresses instead of the less secure HTTP protocol, which has become a welcome mat for would be hackers."

    According to a Reuters report on Sunday, Schumer also called standard HTTP protocol "a welcome mat for would-be hackers" and said that programs were available that made it simple to hack into another person's computer and swipe private information--unless the secure protocol was used.

    ...

    In this case, it's an overall plus. HTTPS everywhere, please!

    This is one way in which we can stop a lot of trouble. Somewhat depressing that it has taken this long to filter through, but with friends like NIST, one shouldn't be overly surprised if the golden goose of HTTPS security is cooked to a cinder beforehand.
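
    The Senator's ask is also technically small. A hedged sketch of the server side, as Python WSGI middleware (the redirect status and the HSTS header are standard; the rest is illustrative, and a real deployment would also handle query strings, ports and proxies):

        def force_https(app):
            """WSGI middleware: bounce plain-HTTP requests to HTTPS, and tell
            browsers to stay there via Strict-Transport-Security."""
            def wrapper(environ, start_response):
                if environ.get("wsgi.url_scheme") != "https":
                    url = ("https://" + environ.get("HTTP_HOST", "")
                           + environ.get("PATH_INFO", "/"))
                    start_response("301 Moved Permanently", [("Location", url)])
                    return [b""]
                def with_hsts(status, headers):
                    headers.append(("Strict-Transport-Security",
                                    "max-age=31536000"))
                    return start_response(status, headers)
                return app(environ, with_hsts)
            return wrapper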

    Posted by iang at 02:36 AM | Comments (3) | TrackBack

    November 15, 2010

    The Great Cyberheist

    The biggest this and the bestest that is mostly a waste of time, but once a year it is good to see just how big some of the numbers are. Jim sent in this NY Times article by James Verini, just to show that breaches cost serious money:

    According to Attorney General Eric Holder, who last month presented an award to Peretti and the prosecutors and Secret Service agents who brought Gonzalez down, Gonzalez cost TJX, Heartland and the other victimized companies more than $400 million in reimbursements and forensic and legal fees. At last count, at least 500 banks were affected by the Heartland breach.

    $400 million in costs caused by one small group, or one attacker, and those costs aren't fully counted or known as yet.

    But the extent of the damage is unknown. “The majority of the stuff I hacked was never brought into public light,” Toey told me. One of the imprisoned hackers told me there “were major chains and big hacks that would dwarf TJX. I’m just waiting for them to indict us for the rest of them.” Online fraud is still rampant in the United States, but statistics show a major drop in 2009 from previous years, when Gonzalez was active.

    What to make of this? It may well be that one single guy / group caused the lion's share of the breach fraud we saw in the wake of SB1386. Do we breathe a sigh of relief that he's gone for good (20 years?) ... or do we wonder at the basic nature of the attacks used to get in?

    The attacks were fairly well described in the article. They were all against apparently PCI compliance-complete institutions. Lots of them. They range from the ho-hum of breaching the secured perimeter through WiFi, right up to the slightly yawnsome SQL injection.
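
    For anyone who hasn't seen the yawnsome one up close, a sketch in Python with sqlite3 (table and payload invented for the purpose) shows how little it takes, and how little it takes to stop it:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE cards (pan TEXT, holder TEXT)")
        user_input = "x' OR '1'='1"   # classic injection payload

        # Vulnerable: attacker-controlled string is spliced into the SQL,
        # so the WHERE clause now matches every row in the table.
        conn.execute(
            f"SELECT pan FROM cards WHERE holder = '{user_input}'")

        # Safe: a parameterized query; the input is never parsed as SQL.
        conn.execute(
            "SELECT pan FROM cards WHERE holder = ?", (user_input,))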

    Here's my bet: the ease of this overall approach and the lack of really good security alternatives (firewalls & SSL, anyone?) means there will be a pause, and then the professionals will move in. And they won't be caught, because they'll move faster than the Feds. Gonzalez was a static target; he wasn't leaving the country. The new professionals will know their OODA.

    Read the entire article, and make your own bet :)

    Posted by iang at 04:51 AM | Comments (0) | TrackBack

    October 24, 2010

    perception versus action, when should Apple act?

    Clive throws some security complaints against Apple in comments. He's got a point, but what is that point, exactly? The issues raised have little to do with Apple, per se, as they are all generic and familiar in some sense.

    Aside from the futility of trying to damage Apple's teflon brand, I guess, for me at least the issue here is when & if the perception, costs and activities of Apple's security begin to cross.

    We saw this with Microsoft. Throughout the 1990s, they made hay, the sun shone. Security wasn't an issue, the users lapped it all up, paid the cost, asked for more.

    So Microsoft did the "right thing" by their shareholders and ignored it. Having seen how much money they made for their shareholders in those days, it is hard to argue they did the "wrong thing". Indeed, this is what I tried to establish in that counter-cultural rant called GP -- that there is an economic rationale to delaying security until we can identify a real enemy.

    Early 2000s, the scene started to change. We saw the first signs of phishing in 2001 (if we were watching) and the rising tide of costs to users was starting to feed back into Microsoft's top slot. Hence Bill Gates's famous "I want to spend another dime and spin the company" memo in January 2002, followed by his declaration that "phishing is our problem, not the user's fault."

    But it didn't work. Even though we saw a massive internal change, and lots more attention on security, the Vista thing didn't work out. And the titanic of Microsoft perception slowly inched itself up onto an iceberg and spent half a decade sliding under the waters.

    For Microsoft, action started in 2002, and changed the company somewhat. By the mid 2000s, activity on security was impressive. But perception was already underwater, irreversibly sinking.

    Now we're all looking at Apple. *Only* because they have a bigger market cap than Microsoft. Note that this blog has promoted Macs for a better security experience for many years, and anyone who took that advice won big-time. But now the media has swung its laser-sharp eyeballs across to look at the one that snuck below their impenetrable radar and cheekily stole the crown of publicity from Microsoft. The game's up, the media tell us!

    I speak in jest of course; market perception is what the users think, not what the media says [1]. Possibly we will see market perception of Apple's security begin to diminish, as user costs begin to rise. The user-cost argument is still solidly profitable on the Mac's balance sheet, so that'll take some time. Meanwhile, there is little sign that Apple themselves are acting within to improve their impending security nightmare [2].

    The interesting question for the long term, for the business-minded, is when should Apple begin to act? And how?


    [1] Watching the media for security perception shifts is like relying on astrology for stock market picks. Using one belief system to answer the question of another belief system is only advisable for hobbyists with time on their hands and money to lose.

    [2] just as a postscript, this question is made all the more interesting because, unlike Microsoft, Apple never signals its intentions in advance. And after the move, the script is so well stage-managed that we can't rely on it. So we may never know the answer. Which makes the job of investor in Apple quite a ... perceptionally challenging one :)

    Posted by iang at 01:30 AM | Comments (1) | TrackBack

    October 05, 2010

    Cryptographic Numerology - our number is up

    Chit-chat around the coffeerooms of crypto-plumbers is disturbed by NIST's campaign to have all the CAs switch up to 2048 bit roots:

    On 30/09/10 5:17 PM, Kevin W. Wall wrote:
    > Thor Lancelot Simon wrote:
    > See below, which includes a handy pointer to the Microsoft and Mozilla policy statements "requiring" CAs to cease signing anything shorter than 2048 bits.
    <...snip...>
    > These certificates (the end-site ones) have lifetimes of about 3 years maximum. Who here thinks 1280 bit keys will be factored by 2014? *Sigh*.
    No one that I know of (unless the NSA folks are hiding their quantum computers from us :). But you can blame this one on NIST, not Microsoft or Mozilla. They are pushing the CAs to make this happen and I think 2014 is one of the important cutoff dates, such as the date that the CAs have to stop issuing certs with 1024-bit keys.

    I can dig up the NIST URL once I get back to work, assuming anyone actually cares.


    The world of cryptology has always been plagued by numerology.

    Not so much in the tearooms of the pure mathematicians, but all other areas: programming, management, provisioning, etc. It is I think a desperation in the un-endowed to understand something, anything of the topic.

    E.g., I might have no clue how RSA works but I can understand that 2048 has to be twice as good as 1024, right? When I hear it is even better than twice, I'm overjoyed!

    This desperation to be able to talk about it is partly due to having to be part of the business (write some code, buy a cert, make a security decision, sell a product) and partly a sense of helplessness when faced with apparently expert and confident advice. It's not an unfounded fear; experts use their familiarity with the concepts to also peddle other things which are frequently bogus or hopeful or self-serving, so the ignorance leads to bad choices being made.

    Those that aren't in the know are powerless, and shown to be powerless.

    When something simple comes along and fills that void, people grasp onto it and won't let go. Like numbers. As long as they can compare 1024 to 2048, they have a safety blanket that allows them to ignore all the other words. As long as I can do my due diligence as a manager (ensure that all my keys are 2048) I'm golden. I've done my part, prove me wrong! Now do your part!


    This is a very interesting problem [1]. Cryptographic numerology diverts attention from the difficult to the trivial. A similar effect happens with absolute security, which we might call "divine cryptography." Managers become obsessed with perfection in one thing, to the extent that they will ignore flaws in another thing. Also, standards, which we might call "beliefs cryptography" for their ability to construct a paper cathedral within which there is room for us all, and our flock, to pray safely inside.

    We know divinity doesn't exist, but people demand it. We know that religions war all the time, and those within a religion will discriminate against others, to the loss of us all. We know all this, but we don't; cognitive dissonance makes us so much happier, it should be a drug.


    It was into this desperate aching void that the seminal paper by Lenstra and Verheul stepped in to put a framework on the numbers [2]. On the surface, it solved the problem of cross-domain number comparison, e.g., 512 bit RSA compared to 256 bit AES, which had always confused the managers. And to be fair, this observation was a long time coming in the cryptographic world, too, which makes L&V's paper a milestone.
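
    For the curious, the flavour of that comparison can be sketched from the standard GNFS work-factor heuristic. A rough cut in Python, with the o(1) and constant factors ignored, so expect it to run a few bits high of the published tables:

        import math

        def gnfs_bits(modulus_bits: int) -> float:
            """Rough symmetric-equivalent strength of an RSA modulus, from
            the GNFS heuristic exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)),
            with c = (64/9)^(1/3). Indicative only."""
            ln_n = modulus_bits * math.log(2)
            work = ((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))
            return work / math.log(2)   # convert nats of work into bits

        for k in (512, 1024, 2048):
            print(f"RSA-{k}: roughly {gnfs_bits(k):.0f}-bit work factor")

    The point the numbers make: doubling the modulus doesn't double the strength; it adds around 30 bits of work factor, a factor of about a billion, which is rather better than "twice as good" and rather harder to intuit without the formula.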

    Cryptographic Numerology's star has been on the ascent ever since that paper: As well as solving the cipher-public-key-hash numeric comparison trap, numerology is now graced with academic respectability.

    This made it irresistible to large institutions which are required to keep their facade of advice up. NIST, like all the other agencies, followed, but NIST has a couple of powerful forces on it. Firstly, NIST is slightly special, in ways that other agencies represented in keylength.com only wish to be special. NIST, as pushed by the NSA, is protecting primarily US government resources:

    This document has been developed by the National Institute of Standards and Technology (NIST) in furtherance of its statutory responsibilities under the Federal Information Security Management Act (FISMA) of 2002, Public Law 107-347. NIST is responsible for developing standards and guidelines, including minimum requirements, for providing adequate information security for all agency operations and assets, but such standards and guidelines shall not apply to national security systems.

    That's US not us. It's not even protecting USA industry. NIST is explicitly targeted by law to protect the various multitude of government agencies that make up the beast we know as the Government of the United States of America. That gives it unquestionable credibility.

    And, as has been noticed a few times, Mars is on the ascendancy: *Cyberwarfare* is the second special force. Whatever one thinks of the mess called cyberwarfare (equity disaster, stuxnet, cryptographic astrology, etc) we can probably agree that if anyone bad is thinking in terms of cracking 1024 bit keys, they'll likely be another nation-state taking aim against the USG agencies. c.f., stuxnet, which is emerging as a state v. state adventure. USG, or one of USG's opposing states, is probably the leading place on the planet that would face a serious 1024 bit threat if one were to emerge.

    Hence, NIST is plausibly right in imposing 2048-bit RSA keys into its security model. And they are not bad in the work they do, for their client [3]. Numerology and astrology are in alignment today, if your client is from Washington DC.

    However, real or fantastical, this is a threat model that simply doesn't apply to the rest of the world. The sad sad fact is that NIST's threat model belongs to them, to US, not to us. All of us adopting the NIST security model is like a Taurus following the advice in the Aries section of today's paper. It's not right, however wise it sounds. And if applied without thought, it may reduce our security, not improve it:


    Writes Thor:
    > At 1024 bits, it is not. But you are looking
    > at a factor of *9* increase in computational
    > cost when you go immediately to 2048 bits. At
    > that point, the bottleneck for many applications
    > shifts, particularly those ...
    > Also,...
    > ...and suddenly...
    >
    > This too will hinder the deployment of "SSL everywhere",...

    When US industry follows NIST, and when worldwide industry follows US industry, and when open source Internet follows industry, we have a classic text-book case of adopting someone else's threat, security and business models without knowing it.

    Keep in mind, our threat model doesn't include crunching 1024s. At all, any time, nobody's ever bothered to crunch 512 in anger, against the commercial or private world. So we're pretty darn safe at 1024. But our threat model does include

    *attacks on poor security user interfaces in online banking*

    That's a clear and present danger. And one of the key, silent, killer causes of that is the sheer rarity of HTTPS. If we can move the industry to "HTTPS everywhere" then we can make a significant difference. To our security.

    On the other hand, we can shift to 2048, kill the move to "HTTPS everywhere", and save the US Government from losing sleep over the cyberwarfare it created for itself (c.f., the equity failure).

    And that's what's going to happen. Cryptographic Numerology is on a roll, NIST's dice are loaded, our number is up. We have breached the law of unintended consequences, and we are going to be reducing the security of the Internet because of it. Thanks, NIST! Thanks, Mozilla, thanks, Microsoft.



    [1] As well as this area, others have looked at how to make the bounty of cryptography more safely available to non-cognoscenti. I especially push the aphorisms of Adi Shamir and Kerckhoffs. And, add my own meagre efforts in Hypotheses and Pareto-secure.

    [2] For detailed work and references on Lenstra & Verheul's paper, see http://www.keylength.com/ which includes calculators of many of the various efforts. It's a good paper. They can't be criticised for it in the terms of this post; it's the law of unintended consequences again.

    [3] Also, other work by NIST to standardise the PRNG (pseudo-random-number-generator) has to be applauded. The subtlety of what they have done is only becoming apparent after much argumentation: they've unravelled the unprovable entropy problem by unplugging it from the equation.

    But they've gone a step further than the earlier leading work by Ferguson and Schneier and the various quiet cryptoplumbers, by turning the PRNG into a deterministic algorithm. Indeed, we can now see something special: NIST has turned the PRNG into a reverse-cycle message digest. Entropy is now the MD's document, and the pseudo-randomness is the cryptographically-secure hash that spills out of the algorithm.

    Hey Presto! The PRNG is now the black box that provides the one-way expansion of the document. It's not the reverse-cycle air conditioning of the message digest that is exciting here, it's the fact that it is now a new class of algorithms. It can be specified, parameterised, and most importantly for cryptographic algorithms, given test data to prove the coding is correct.

    (I use the term reverse-cycle in the sense of air-conditioning. I should also stress that this work took several generations to get to where it is today; including private efforts by many programmers to make sense of PRNGs and entropy by creating various application designs, and a couple of papers by Ferguson and Schneier. But it is the black-boxification by NIST that took the critical step that I'm lauding today.)
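
    To make the black-box point concrete: a toy, in Python. This is only a sketch in the spirit of a hash-based DRBG, not NIST's specified algorithm (the real thing adds counters in specific places, reseeding, personalisation strings and health tests); the point is that, once seeded, it is fully deterministic and therefore testable against fixed vectors:

        import hashlib

        class ToyHashDRBG:
            """Sketch of a deterministic random bit generator built from a
            message digest. Seed material in, pseudo-random stream out."""
            def __init__(self, entropy: bytes):
                self.state = hashlib.sha256(entropy).digest()
                self.counter = 0

            def generate(self, n: int) -> bytes:
                out = b""
                while len(out) < n:
                    self.counter += 1
                    out += hashlib.sha256(
                        self.state + self.counter.to_bytes(8, "big")).digest()
                self.state = hashlib.sha256(b"update" + self.state).digest()
                return out[:n]

        drbg = ToyHashDRBG(b"fixed test entropy")
        print(drbg.generate(16).hex())   # identical on every run: testable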

    Posted by iang at 10:55 AM | Comments (1) | TrackBack

    August 24, 2010

    What would the auditor say to this?

    Iran's Bushehr nuclear power plant in Bushehr Port:

    "An error is seen on a computer screen of Bushehr nuclear power plant's map in the Bushehr Port on the Persian Gulf, 1,000 kms south of Tehran, Iran on February 25, 2009. Iranian officials said the long-awaited power plant was expected to become operational last fall but its construction was plagued by several setbacks, including difficulties in procuring its remaining equipment and the necessary uranium fuel. (UPI Photo/Mohammad Kheirkhah)"


    Compliant? Minor problem? Slight discordance? Conspiracy theory?

    (spotted by Steve Bellovin)

    Posted by iang at 05:53 AM | Comments (2) | TrackBack

    August 12, 2010

    memes in infosec II - War! Infosec is WAR!

    Another metaphor (I) that has gained popularity is that Infosec security is much like war. There are some reasons for this: there is an aggressive attacker out there who is trying to defeat you. Which tends to muck up a lot of statistical error-based thinking in IT, a lot of business process, and as well, most economic models (e.g., asymmetric information assumes a simple two-party model). Another reason is the current beltway push for an essential cyberwarfare divisional budget, although I'd hasten to say that this is not a good reason, just a reason. Which is to say, it's all blather, FUD, and one-upmanship against the Chinese, same as it ever was with Eisenhower's nemesis.

    Having said that, infosec isn't like war in many ways. And knowing when and why and how is not a trivial thing. So, drawing from military writings is not without dangers. Consider these laments about applying Sun Tzu's The Art of War to infosec from Steve Tornio and Brian Martin:

    In "The Art of War," Sun Tzu's writing addressed a variety of military tactics, very few of which can truly be extrapolated into modern InfoSec practices. The parts that do apply aren't terribly groundbreaking and may actually conflict with other tenets when artificially applied to InfoSec. Rather than accept that Tzu's work is not relevant to modern day Infosec, people tend to force analogies and stretch comparisons to his work. These big leaps are professionals whoring themselves just to get in what seems like a cool reference and wise quote.

    "The art of war teaches us to rely not on the likelihood of the enemy's not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable." - The Art of War

    Sun Tzu's Art of War is not for literal quoting and thence a mad rush to build the tool. Art of War was written from the context of a successful general talking to another hopeful general on the general topic of building an army for a set piece nation-to-nation confrontation. It was also very short.

    Art of War tends to interlace high level principles with low level examples, and dance very quickly through most of its lessons. Hence it was very easy to misinterpret, and equally easy to "whore oneself for a cool & wise quote."

    However, Sun Tzu still stands tall in the face of such disrespect, as it says things like know yourself FIRST, and know the enemy SECOND, which the above essay actually agreed with. And, as if it needs to be said, knowing the enemy does not imply knowing their names, locations, genders, and proclivities:

    Do you know your enemy? If you answer 'yes' to that question, you already lost the battle and the war. If you know some of your enemies, you are well on your way to understanding why Tzu's teachings haven't been relevant to InfoSec for over two decades. Do you want to know your enemy? Fine, here you go. your enemy may be any or all of the following:

    • 12 y/o student in Ohio learning computers in middle school
    • 13 y/o home-schooled girl getting bored with social networks
    • 15 y/o kid in Brazil that joined a defacement group
    • ...

    Of course, Sun Tzu also didn't know the sordid details of every soldier's desires; "knowing" isn't biblical, it's capable. Or rather, knowing their capabilities; that can be done, and we call it risk management. As Jeffrey Carr said:

    The reason why you don't know how to assign or even begin to think about attribution is because you are too consumed by the minutia of your profession. ... The only reason why some (OK, many) InfoSec engineers haven't put 2+2 together is that their entire industry has been built around providing automated solutions at the microcosmic level. When that's all you've got, you're right - you'll never be able to claim victory.

    Right. Almost all InfoSec engineers are hired to protect existing installations. The solution is almost always boxed into the defensive, siege mentality described above, because the alternative, as Dan Geer apparently said, looks like this:

    When you are losing a game that you cannot afford to lose, change the rules. The central rule today has been to have a shield for every arrow. But you can't carry enough shields and you can run faster with fewer anyhow.

    The advanced persistent threat, which is to say the offense that enjoys a permanent advantage and is already funding its R&D out of revenue, will win as long as you try to block what he does. You have to change the rules. You have to block his success from even being possible, not exchange volleys of ever better tools designed in response to his. You have to concentrate on outcomes, you have to pre-empt, you have to be your own intelligence agency, you have to instrument your enterprise, you have to instrument your data.

    But, at a corporate level, that's simply not allowed. Great ideas, but only the achievable strategy is useful, the rest is fantasy. You can't walk into any company or government department and change the rules of infosec -- that means rebuilding the apps. You can't even get any institution to agree that their apps are insecure; or, you can get silent agreement by embarrassing them in the press, along with being fired!

    I speak from pretty good experience of building secure apps, and of looking at other institutional or enterprise apps and packages. The difference is huge. It's the difference between defeating Japan and defeating Vietnam. One was a decision of maximisation, the other of minimisation. It's the difference between engineering and marketing; one is solid physics, the other is facade, faith, FUD, bribes.

    It's the difference between setting up a world-beating sigint division, and fixing your own sigsec. The first is a science, and responds well by adding money and people. Think Manhattan, Bletchley Park. The second is a societal norm, and responds only to methods generally classed by the defenders as crimes against humanity and applications. Slavery, colonialism, discrimination, the great firewall of China, if you really believe in stopping these things, then you are heading for war with your own people.

    Which might all lead the grumpy anti-Sun Tzu crowd to say, "told you so! This war is unwinnable." Well, not quite. The trick is to decide what winning is; to impose your will on the battleground. This is indeed what strategy is: to impose one's own definition of the battleground on the enemy, and be right about it, which is partly what Dan Geer is getting at when he says "change the rules." A more nuanced view would be: to set the rules that win for you; and to make them the rules you play by.

    And, this is pretty easily answered: for a company, winning means profits. As long as your company can conduct its strategy in the face of affordable losses, then it's winning. Think credit cards, which sacrifice a few hundred basis points for the greater good. It really doesn't matter how much of a loss is made, as long as the customer pays for it and leaves a healthy profit over.

    Relevance to Sun Tzu? The parable of the Emperor's Concubines!

    In summary, it is fair to say that Sun Tzu is one of those texts that are easy to bandy around, but rather hard to interpret. Same as infosec, really, so it is no surprise we see it in that world. Also, war is a very complicated business, and Art of War was really written for that messy discipline ... so it takes rather more than a passing familiarity with both to relate them beyond the level of simple metaphor.

    And, as we know, metaphors and analogues are descriptive tools, not proofs. Proving them wrong proves nothing more than you're now at least an adolescent.

    Finally, even war isn't much like war these days. If one factors in the last decade, there is a clear pattern of unilateral decisions, casus belli at a price, futile targets, and effervescent gains. Indeed, infosec looks more like the low intensity, mission-shy wars in the Eastern theaters than either of them look like Sun Tzu's campaigns.

    memes in infosec I - Eve and Mallory are missing, presumed dead

    Posted by iang at 04:34 PM | Comments (1) | TrackBack

    August 11, 2010

    Hacking the Apple, when where how... and whether we care why?

    One of the things that has been pretty much standard in infosec is that the risks earnt (costs incurred!) from owning a Mac have been dramatically lower. I do it, and save, and so do a lot of my peers & friends. I don't collect stats, but here's a comment from Dan Geer from 2005:

    Amongst the cognoscenti, you can see this: at security conferences of all sorts you’ll find perhaps 30% of the assembled laptops are Mac OS X, and of the remaining Intel boxes, perhaps 50% (or 35% overall) are Linux variants. In other words, while security conferences are bad places to use a password in the clear over a wireless channel, there is approximately zero chance of cascade failure amongst the participants.

    I recommend it on the blog front page as the number 1 security tip of all:

    #1 buy a mac.

    Why this is the case is of course a really interesting question. Is it because Macs are inherently more secure, in themselves? The answer seems to be No, not in themselves. We've seen enough evidence to suggest, at an anecdotal level, that when put into a fair fight, the Macs don't do any better than the competition. (Sometimes they do worse, and the competition ensures those results are broadcast widely :)

    However, it is still the case that while the security in the Macs isn't great, the result for the user is better -- the costs resulting from breaches, installs, virus slow-downs, etc, remain lower [1]. Which would imply the threats are lower, recalling the old mantra of:

    Business model ⇒ threat model ⇒ security model

    Now, why is the threat (model) lower? It isn't because the attackers are fans. They generally want money, and money is neutral.

    One theory that might explain it is the notion of monoculture.

    This idea was captured a while back by Dan Geer and friends in a paper that claimed that Microsoft's dominance threatened the national security of the USA. It certainly threatened someone, as Dan lost his job the day the paper was released [2].

    In brief, monoculture argues that when one platform gains an ascendency to dominate the market, then we enter a situation of particular vulnerability to that platform. It becomes efficient for all economically-motivated attackers to concentrate their efforts on that one dominant platform and ignore the rest.
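
    The economic core of that argument fits in a few lines. A toy sketch, with invented market shares and the (strong) assumption that an exploit costs the same to develop on every platform:

        # Toy model of monoculture economics: a profit-motivated attacker
        # puts effort where the expected victims-per-exploit are highest.
        shares = {"dominant": 0.90, "minority": 0.05, "niche": 0.05}
        cost_per_exploit = 1.0          # same on every platform (assumption)

        payoff = {p: s / cost_per_exploit for p, s in shares.items()}
        target = max(payoff, key=payoff.get)
        print(f"the rational attacker targets: {target}")   # -> dominant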

    In a sense, this is an application of the Religion v. Darwin argument to computer security. Darwin argued that diversity was good for the species as a whole, because singular threats would wipe out singular species. The monoculture critique can also be seen as analogous to Capitalism v. Communism, where the former advances through creative destruction, and the latter stagnates through despotic ignorance.

    A lot of us (including me) looked at the monoculture argument and thought it ... simplistic and hopeful. Yet, the idea hangs on ... so the question shifts for us slower skeptics to how to prove it [3]?

    Apple is quietly wrestling with a security conundrum. How the company handles it could dictate the pace at which cybercriminals accelerate attacks on iPhones and iPads.

    Apple is hustling to issue a patch for a milestone security flaw that makes it possible to remotely hack - or jailbreak - iOS, the operating system for iPhones, iPads and iPod Touch.

    Apple's new problem is perhaps an early sign of good evidence for the theory. Here we have Apple struggling with hacks on its mobile platform (iPads, iPods, iPhones) and facing a threat which it seemingly hasn't faced on the Macs [4].

    The differentiating factor -- other than the tech stuff -- is that Apple is leading in the mobile market.

    IPhones, in particular, have become a pop culture icon in the U.S., and now the iPad has grabbed the spotlight. "The more popular these devices become, the more likely they are to get the attention of attackers," says Joshua Talbot, intelligence manager at Symantec Security Response.

    Not dominating like Microsoft used to enjoy, but presenting enough of a nose above the parapet to get a shot taken. Meanwhile, Macs remain stubbornly stuck at a reported 5% of market share in the computer field, regardless of the security advice [5]. And nothing much happens to them.

    If market leadership continues to accrue to Apple in the iP* mobile sector, as the market expects, and if security woes continue as well, I'd count that as good evidence [6].


    [1] #1 security tip remains good: buy a Mac, not because of the security but because of the threats. Smart users don't care so much why, they just want to benefit this year, this decade, while they can.

    [2] Perhaps because Dan lost his job, he gets fuller attention. The full cite would be like: Daniel Geer, Rebecca Bace, Peter Gutmann, Perry Metzger, Charles P. Pfleeger, John S. Quarterman, Bruce Schneier, "CyberInsecurity: The Cost of Monopoly How the Dominance of Microsoft's Products Poses a Risk to Security." Preserved by the inestimable cryptome.org, a forerunner of the now infamous wikileaks.org.

    [3] Proof in the sense of scientific method is not possible, because we can't run the experiment. This is economics, not science; we can't run the experiment like real scientists. What we have to do is perhaps pseudo-scientific method: we predict, we wait, and we observe.

    [4] On the other hand, maybe the party is about to end for Macs. News just in:

    Security vendor M86 Security says it's discovered that a U.K.-based bank has suffered almost $900,000 (675,000 Euros) in fraudulent bank-funds transfers due to the ZeuS Trojan malware that has been targeting the institution.

    Bradley Anstis, vice president of technology strategy at M86 Security, said the security firm uncovered the situation in late July while tracking how one ZeuS botnet had been specifically going after the U.K.-based bank and its customers. The botnet included a few hundred thousand PCs and even about 3,000 Apple Macs, and managed to steal funds from about 3,000 customer accounts through unauthorized transfers equivalent to roughly $892,755.

    Ouch!

    [5] I don't believe the 5% market share claim ... I harbour a suspicion that this is some very cunning PR trick in under-reporting by Apple, so as to fly below the radar. If so, I think it's well past its sell-by date since Apple reached the same market cap as Microsoft...

    [6] What is curious is that I'll bet most of Wall Street, and practically all of government, notwithstanding the "national security" argument, continue to keep clear of Macs. For those of us who know the trick, this is good. It is good for our security nation if the governments do not invest in Macs, and keep the monoculture effect positive. Perverse, but who am I to argue with the wisdom in cyber-security circles?

    Posted by iang at 09:30 AM | Comments (1) | TrackBack

    August 05, 2010

    Are we spending too little on security? Or are we spending too much??

    Luther Martin asks this open question:


    Ian,

    I have a quick question for you based on some recent discussions. Here's the background.

    The first was with a former co-worker who works for the VC division of a large commercial bank. He tells me that his bank really isn't interested in investing in security companies. Why? Apparently for each $100 of credit card transactions there's about $4 of loss due to bad debt and only about $0.10 of loss due to fraud. So if you're making investments, it's clear where you should put your money.

    Next, I was talking with a guy who runs a large credit card processing business. He was complaining about having to spend an extra $6 million on fraud reduction while his annual losses due to fraud are only about $250K.

    Finally, I was also talking to some people from a government agency who were proud of the fact that they had reduced losses due to security incidents in their division by $2 million last year. The only problem is that they actually spent $10 million to do this.

    So the question is this: are we not spending enough on security or are we spending too much, but on the wrong things?

    Luther
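
    Running the letter's numbers, just to see their shape (figures as given above):

        # Back-of-the-envelope on Luther's three examples.
        bad_debt, fraud = 4.00, 0.10          # $ lost per $100 of transactions
        print(f"bad debt runs {bad_debt / fraud:.0f}x the fraud loss")

        spend, losses = 6_000_000, 250_000    # processor: spend vs annual fraud
        print(f"the processor spends {spend / losses:.0f}x its annual losses")

        spend, saved = 10_000_000, 2_000_000  # agency: spend vs losses avoided
        print(f"the agency is ${spend - saved:,} worse off for its trouble")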

    Posted by iang at 10:38 PM | Comments (6) | TrackBack

    August 01, 2010

    memes in infosec I - Eve and Mallory are missing, presumed dead

    Things I've seen that are encouraging. Bruce Schneier in Q&A:

    Q: We've also seen Secure Sockets Layer (SSL) come under attack, and some experts are saying it is useless. Do you agree?

    A: I'm not convinced that SSL has a problem. After all, you don't have to use it. If I log-on to Amazon without SSL the company will still take my money. The problem SSL solves is the man-in-the-middle attack with someone eavesdropping on the line. But I'm not convinced that's the most serious problem. If someone wants your financial data they'll hack the server holding it, rather than deal with SSL.

    Right. The essence is that SSL solves the "easy" part of the problem, and leaves open the biggest part. Before the proponents of SSL say, "not our problem," remember that AADS did solve it, as did SOX and a whole bunch of other things. It's called end-to-end, and is well known as being the only worthwhile security. Indeed, I'd say it was simply responsible engineering, except for the fact that it isn't widely practiced.

    OK, so this is old news, from around March, but it is worth declaring sanity:

    Q: But doesn't SSL give consumers confidence to shop online, and thus spur e-commerce?

    A: Well up to a point, but if you wanted to give consumers confidence you could just put a big red button on the site saying 'You're safe'. SSL doesn't matter. It's all in the database. We've got the threat the wrong way round. It's not someone eavesdropping on Eve that's the problem, it's someone hacking Eve's endpoint.

    Which is to say, if you are going to do anything to fix the problem, you have to look at the end-points. The only time you should look at the protocol, and the certificates, is how well they are protecting the end-points. Meanwhile, the SSL field continues to be one for security researchers to make headlines over. It's BlackHat time again:

    "The point is that SSL just doesn't do what people think it does," says Hansen, an security researcher with SecTheory who often goes by the name RSnake. Hansen split his dumptruck of Web-browsing bugs into three categories of severity: About half are low-level threats, 10 or so are medium, and two are critical. One example...

    Many observers in the security world have known this for a while, and everyone else has felt increasingly frustrated and despondent about the promise:

    There has been speculation that an organization with sufficient power would be able to get a valid certificate from one of the 170+ certificate authorities (CAs) that are installed by default in the typical browser and could then avoid this alert ....

    But how many CAs does the average Internet user actually need? Fourteen! Let me explain. For the past two weeks I have been using Firefox on Windows with a reduced set of CAs. I disabled ALL of them in the browser and re-enabled them one by one as necessary during my normal usage....


    On the one hand, SSL is the brand of security. On the other hand, it isn't the delivery of security; it simply isn't deployed in secure browsing to provide the user security that was advertised: you are on the site you think you are on. Only as we moved from a benign world to a fraud world, around 2003-2005, has this been shown to matter. Bruce goes on:

    Q: So is encryption the wrong approach to take?

    A: This kind of issue isn't an authentication problem, it's a data problem. People are recognising this now, and seeing that encryption may not be the answer. We took a World War II mindset to the internet and it doesn't work that well. We thought encryption would be the answer, but it wasn't. It doesn't solve the problem of someone looking over your shoulder to steal your data.

    Indeed. Note that comment about the World War II mindset. It is the case that the entire 1990s generation of security engineers were taught from the military text book. The military assumes its nodes -- its soldiers, its computers -- are safe. And, it so happens, that when armies fight armies, they do real-life active MITMs against each other to gain local advantage. There are cases of this happening, and oddly enough, they'll even do it to civilians if they think they can (ask Greenpeace). And the economics is sane, sensible stuff, if we bothered to think about it: in war, the wire is the threat, the nodes are safe.

    However, adopting "the wire" as the weakness, with Mallory as the Man-In-The-Middle and Eve as the Eavesdropper as "the threat" on the Internet, was a mistake. Even in the early 1990s, we knew that the node was the problem. Firstly, ever since the PC, nodes in commercial computing have been controlled by (dumb) users, not professionals (soldiers). Who download shit from the net, not operate trusted military assets. Secondly, observation of known threats told us where the problems lay: floppy viruses were very popular, and phone-line attacks were about spoofing and gaining entry to an end-point. Nobody was bothering with "the wire," nobody was talking about snooping and spying and listening [*].

    The military model was the precise reverse of the Internet's reality.

    To conclude. There is no doubt about this in security circles: the SSL threat model was all wrong, and consequently the product was deployed badly.

    Where the doubt lies is in how long it will take the software providers to realise that their world is upside down. It can probably only happen when everyone with credibility stands up and says it is so. For this, the posts shown here are very welcome. Let's hear more!


    [*] This is not entirely true. There is one celebrated case of an epidemic of eavesdropping over ethernets, which was passwords being exchanged over telnet and rsh connections. A case-study in appropriate use of security models follows...

    PS: Memes II - War! Infosec is WAR!

    Posted by iang at 04:33 PM | Comments (3) | TrackBack

    July 29, 2010

    The difference between 0 breaches and 0+delta breaches

    Seen on the net, by Dan Geer:

    The design goal for any security system is that the number of failures is small but non-zero, i.e., N>0. If the number of failures is zero, there is no way to disambiguate good luck from spending too much. Calibration requires differing outcomes.

    I've been trying for years to figure out a nice way to describe the difference between 0 failures, and some small number N>0 like 1 or 2 or 10 in a population of a million.

    Dan might have said it above: If the number of failures is zero, there is no way to disambiguate good luck from spending too much.

    Has he nailed it? It's certainly a lot tighter than my long efforts ... Once we get that key piece of information down, we can move on. As he does:

    Regulatory compliance, on the other hand, stipulates N==0 failures and is thus neither calibratable nor cost effective. Whether the cure is worse than the disease is an exercise for the reader.

    An insight! For regulatory compliance, I'd substitute public compliance, which includes all the media attention and reputation attacks.
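
    Geer's point is easy to see with a toy Poisson model: a year of zero failures is consistent both with a well-calibrated small rate and with gross overspend, which is exactly the ambiguity he names. A sketch:

        from math import exp

        # P(zero failures in a year) under a Poisson model with true rate lam.
        # Zero observations can't distinguish a tiny rate from an over-
        # engineered system: even lam = 3 shows zero about 5% of the time.
        for lam in (0.1, 1.0, 3.0):
            print(f"true rate {lam}/yr -> P(N=0) = {exp(-lam):.2f}")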

    Posted by iang at 12:29 AM | Comments (6) | TrackBack

    May 28, 2010

    questioning infosec -- don't buy into professionalism, certifications, and other silver bullets

    Gunnar posts on the continuing sad saga of infosec:

    There's been a lot of threads recently about infosec certification, education and training. I believe in training for infosec, I have trained several thousand people myself. Greater knowledge, professionalism and skills definitely help, but are not enough by themselves.

    We saw in the case of the Great Recession and in Enron where the skilled, certified accounting and rating professions totally sold out and blessed bogus accounting practices and non-existent earnings.

    Right. And this is an area where the predictions of economics are spot on. In Akerlof's seminal paper "the Market for Lemons," he predicts that the asymmetry of information can be helped by institutions. In the economics sense, institutions are non-trading, non-2-party market contractual arrangements of long standing to get stuff happening. Professionalism, training, certifications, etc., are all slap-bang in the recommendations.

    So why don't they help? There's a simple answer: we aren't in the market for lemons! There's one key flaw: Lemons postulates that the seller knows and the buyer doesn't, and that simply doesn't apply to infosec. (Criteria #1) In the market for security, the seller knows about his tool, but he doesn't know whether it is fit for the buyer. In contrast, the salesman in Akerlof's market assumed correctly that a car was good for the buyer, so the problem really was sharing the secret information from the seller to the buyer. Used car warranties did that, by forcing the seller to reveal his real pricing.

    The buyer doesn't really know what he wants, and the seller has no better clue. Indeed, it may be that the buyer has more of a clue, at least sometimes. So professionalism, certification, training and warranties aren't going to be the answer.

    Another way of looking at this is that in infosec, in common with all security markets (think defence, crime) there is a third party: the attacker. This is the party that really knows, so knowledge-based solutions without clear incorporation of the aggressor's knowledge aren't going to work. This is why buying the next generation stealth fighter is not really helpful when your attacker is a freedom fighter in an Asian hell-hole with an IED. But it's a lot more exciting to talk about.

    Which leads me to one controversial claim. If we can't get useful information from the seller, then the answer is, you've got to find it by yourself. It's your job, do it. And that's really what we mean by professionalism -- knowing when you can outsource something, and knowing when you can't.

    That's controversial because legions of infosec product suppliers will think they're out of a job, but that's not quite true. It just requires a shift in thinking, and a willingness to think about the buyer's welfare, not just his wallet. How do we improve the ability of the client to do their job? Which leads right back to education: it is possible to teach better security practices. It's also possible to teach better risk practices. And, it can be done on an organisation-wide basis. Indeed, this is one of the processes that Microsoft took in trying to escape their security nightmare: get rid of the security architecture silos and turn the security groups into education groups [1].

    So, from this claim, why the flip into a conundrum: why aren't certifications the answer? It's because certifications /are an institution/ and institutions are captured by one party or another. Usually, the sellers. Again a well-known prediction from economics: institutions to protect the buyer are generally captured by the seller in time (if not at their creation). I think this was by Stiglitz or Stigler, pointing to finance market regulation, again.

    A supplier of certifications needs friends in industry, which means they need to also sell the product of industry. It's hard to make friends selling contrarian advice, it is far more profitable selling middle-of-the-road advice about your partners [2]. "Let's start with SSL + firewalls ..." Nobody's going to say boo, just pass go, just collect the fees. In contrast:

    In short, the biggest problem in infosec is integration. Education around security engineering for integration would be most welcome.

    That's tough, from an institutional point of view.



    [1] Of course, even for Microsoft, bettering their internal capabilities was no silver bullet. They did get better, and it is viewed now that their latest products are more secure. FWIW. But, they still lost pole position last week, as Apple pipped Microsoft to become the world's biggest tech organisation, by market cap. Security played its part in that, and it is something of a rather stellar prediction that it still remains better /for your security/ to work with a Mac, because apparent Mac market shares are still low enough to earn a monoculture bounty for Apple users. Microsoft, keep trying, some are noticing, but no cigar as yet :)

    [2] E.g., I came across a certification and professional code of conduct that required you to sign up as promoting /best practices/. Yet, best practices are lowest-common-denominator, they are the set of uncontroversial products. We're automatically on the back foot, because we're encouraging an organisation to lower its own standards to best practices, and comply with whatever list someone finds off the net, and stop right there. Hopeless!

    Posted by iang at 10:16 PM | Comments (1) | TrackBack

    April 13, 2010

    Ruminations on the State of the Security Nation

    In an influential paper, Prof Ross Anderson proposes that the _Market for Lemons_ is a good fit for infosec. I disagree, because that market is predicated on the seller being informed, and the buyer not. I suggest the sellers are equally ill-informed, leading to the Market for Silver Bullets.

    Microsoft and RSA have just published some commissioned research by Forrester that provides some information, but it doesn't help to separate the positions:

    CISOs do not know how effective their security controls actually are. Regardless of information asset value, spending, or number of incidents observed, nearly every company rated its security controls to be equally effective — even though the number and cost of incidents varied widely. Even enterprises with a high number of incidents are still likely to imagine that their programs are “very effective.” We concluded that most enterprises do not actually know whether their data security programs work or not.

    Buyers remain uninformed, something we both agree on. Curiously, it isn't an entirely good match for Akerlof, as the buyer of a Lemon is uninformed before, and regrettably over-informed afterwards. No such luck for the CISO.

    Which leaves me with an empirical problem: how to show that the sellers are uninformed? I provide some anecdotes in that paper, but we would need more to settle the prediction.

    It should be possible to design an experiment to reveal this. For example, and drawing on the above logic, if a researcher were to ask similar questions of both the buyer and the seller, and could show a lack of correlation between the suppliers' claims and the incident rate, that would say something.
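
    The shape of the test is simple enough. A sketch with invented survey data (statistics.correlation needs Python 3.10 or later); a near-zero r across real respondents would support the sellers-are-uninformed position:

        from statistics import correlation

        claims    = [4, 5, 4, 5, 3, 4]   # sellers' self-rated effectiveness, 1-5
        incidents = [5, 2, 8, 5, 2, 8]   # buyers' incident counts, last year

        r = correlation(claims, incidents)
        print(f"r = {r:.2f}")   # r = 0.00 here: claims carry no information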

    The immediate problem of course is, who would do this? Microsoft and RSA aren't going to, as they are sell-side, and their research was obviously focussed on their needs. Which means, it might be entirely accurate, but might not be entirely complete; they aren't likely to want to clearly measure their own performance.

    And, if there is one issue that is extremely prevalent in the world of information security, it is the lack of powerful and independent buy-side institutions who might be tempted to do independent research on the information base of the sellers.

    Oh well. Moving on to the other conclusions:

    • Secrets comprise two-thirds of the value of firms’ information portfolios.
    • Compliance, not security, drives security budgets.
    • Firms focus on preventing accidents, but theft is where the money is.
    • The more valuable a firm’s information, the more incidents it will have.

    The second and third were also predicted in that paper.

    The last is hopefully common sense, but unfortunately as someone used to say, common sense isn't so common. Which brings me to Matt Blaze's rather good analysis of the threats to the net in 1995, as an afterword to Applied Cryptography, the seminal red book by Bruce Schneier:

    One of the most dangerous aspects of cryptology (and, by extension, of this book), is that you can almost measure it. Knowledge of key lengths, factoring methods, and cryptanalytic techniques make it possible to estimate (in the absence of a real theory of cipher design) the "work factor" required to break a particular cipher. It's all too tempting to misuse these estimates as if they were overall security metrics for the systems in which they are used. The real world offers the attacker a richer menu of options than mere cryptanalysis; often more worrisome are protocol attacks, Trojan horses, viruses, electromagnetic monitoring, physical compromise, blackmail and intimidation of key holders, operating system bugs, application program bugs, hardware bugs, user errors, physical eavesdropping, social engineering, and dumpster diving, to name just a few.

    Right. It must not have escaped anyone by now that the influence of cryptography has been huge, but the success in security has been minimal. Cryptography has not really been shown to secure our interactions that much, having missed the target as many times as it might have hit it. And with the rise of phishing and breaches and MITBs and trojans and so forth, we are now in the presence of evidence that the institutions of strong cryptography have cemented us into a sort of Maginot line mentality. So it may be doing more harm than good, although such a claim would need a lot of research to give it some weight.

    I tried to research this a little in Pareto-secure, in which I asked why the measurements of crypto-algorithms received such slavish attention, to the exclusion of so much else? I found an answer, at least, and it was a positive, helpful answer. But the far bigger question remains: what about all the things we can't measure with a bit-ruler?

    Matt Blaze listed 10 things in 1996:

    1. The sorry state of software.
    2. Ineffective protection against denial-of-service attacks.
    3. No place to store secrets.
    4. Poor random number generation.
    5. Weak passphrases.
    6. Mismatched trust.
    7. Poorly understood protocol and service interactions.
    8. Unrealistic threat and risks assessment.
    9. Interfaces that make security expensive and special.
    10. No broad-based demand for security.

    In 2010, today more or less, he said "not much has changed." We live in a world where if MD5 is shown to be a bit flaky because it has fewer bits than SHA1, the vigilantes of the net launch pogroms on the poor thing, committees of bitreaucrats write up how MD5-must-die, and the media breathlessly runs around claiming the sky will fall in. Even though none of this is true, and there is no attack possible at the moment, and when the attack is possible, it is still so unlikely that we can ignore it ... and even if it does happen, the damages will be next to zero.

    Meanwhile, if you ask for a user-interface change, because the failure of the user-interface to identify false end-points has directly led to billions of dollars of damages, you can pretty much forget any discussion. For some reason, bit-strength dominates dollars-losses, in every conversation.

    I used to be one of those who gnashed my teeth at the sheer success of Applied Cryptography, and the consequent generation of crypto-amateurs who believed that the bit count was the beginning and end of all. But that's unfair, as I never got as far as reading the afterword, and the message is there. It looks like Bruce made a slight error, and should have made Matt Blaze's contribution the foreword, not the afterword.

    A one-word error in the editorial algorithm! I must write to the committee...

    Afterword: rumour has it that the 3rd edition of Applied Cryptography is nearing publication.

    Posted by iang at 02:25 AM | Comments (1) | TrackBack

    March 29, 2010

    Pushing the CA into taking responsibility for the MITM

    This ArsTechnica article explores what happens when the CA-supplied certificate is used as an MITM over some SSL connection to protect online-banking or similar. In the secure-lite model that emerged after the real-estate wars of the mid-1990s, consumers were told to click on their tiny padlock to check the cert:

    Now, a careful observer might be able to detect this. Amazon's certificate, for example, should be issued by VeriSign. If it suddenly changed to be signed by Etisalat (the UAE's national phone company), this could be noticed by someone clicking the padlock to view the detailed certificate information. But few people do this in practice, and even fewer people know who should be issuing the certificates for a given organization.

    Right, so where does this go? Well, people don't notice because they can't. Put the CA on the chrome and people will notice. What then?

    A switch in CA is a very significant event. Jane Public might not be able to do much, but if a customer of Verisign's was MITM'd by a cert from Etisalat, this is something that affects Verisign. We might reasonably expect Verisign to be interested in that. As it affects the chrome, and as customers might get annoyed, we might even expect Verisign to treat this as an attack on their good reputation.
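
    Noticing the switch is technically easy, which is rather the point. A sketch of the check using Python's ssl module; the hostname and the pinned issuer value below are illustrative, not an assertion about who actually signs for anyone today:

        import socket
        import ssl

        def issuer_org(host: str, port: int = 443) -> str:
            """Return the organizationName of the CA that issued host's cert."""
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port)) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            rdns = dict(kv for rdn in cert["issuer"] for kv in rdn)
            return rdns.get("organizationName", "<unknown>")

        pinned = {"www.example-bank.com": "VeriSign, Inc."}   # illustrative pin
        host = "www.example-bank.com"
        seen = issuer_org(host)
        if seen != pinned.get(host):
            print(f"WARNING: issuer for {host} is now {seen!r} -- CA switch?")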

    And that's why putting the brand of the CA onto the chrome is so important: it's the only real way to bring pressure to bear on a CA to get it to lift its game. Security, reputation, sales. These things are all on the line when there is a handle to grasp by the public.

    When the public has no handle on what is going on, the deal falls back into the shadows. No security there; in the shadows we find audit, contracts, outsourcing. Got a problem? Shrug. It doesn't affect our sales.

    So, what happens when a CA MITM's its own customer?

    Even this is limited; if VeriSign issued the original certificate as well as the compelled certificate, no one would be any the wiser. The researchers have devised a Firefox plug-in that should be released shortly that will attempt to detect particularly unusual situations (such as a US-based company with a China-issued certificate), but this is far from sufficient.

    Arguably, this is not an MITM, because the CA is the authority (not the subscriber) ... but exotic legal arguments aside, we clearly don't want it. When it goes on, what we need is software like whitelisting, like Conspiracy, and like the other ideas floating around, to detect it.

    And, we need the CA-on-the-chrome idea so that the responsibility aspect is established. CAs shouldn't be able to MITM other CAs. If we can establish that, with teeth, then the CA-against-itself case is far easier to deal with.

    Posted by iang at 11:20 PM | Comments (7) | TrackBack

    March 24, 2010

    Why the browsers must change their old SSL security (?) model

    In a paper Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL, by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years: by accepting an entire root list of 100s of CAs, there is no barrier to any one of them going a little rogue.

    Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims about a CA that also works in the lawful-intercept business, covered here on FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:

    The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.

    (my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:

    Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.

    Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that can be used against a CA in Korea, and indeed all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.

    So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there, that the chance of getting away with a little interception is too enticing? Square law says that with, say, 100 CAs, the chances are 100 squared, or 10,000 times the chance of any one intercept. Having reached that number, the temptation is resisted in all but 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words and in numbers.
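    For the sceptical, the scratchy maths can at least be made concrete; a toy model (my framing, not the paper's) of why the exposure grows with the square of the root list:

        # Toy model: any one of n root CAs can be compelled to issue a cert
        # that attacks the customers of any other CA on the list, so the
        # (rogue CA, victim CA) pairs grow roughly with the square of n.
        n = 100                        # assumed size of the browser root list
        pairs = n * (n - 1)            # ordered pairs of distinct CAs
        print(pairs)                   # 9900 -- call it 10,000 opportunities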

    One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.

    Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:

    Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.

    They love Skype, which gives them everything they need without asking them anything. That should be motive enough to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well in excess of needs there. Browsing, on the other hand, is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid-1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.

    In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:

    3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in [6].

    I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.

    For Mozilla, the reason is a simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource the big security questions to others (known as the "standards groups", such as the IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, no matter whether they would want to or not. So it only needs a mildly argumentative approach on behalf of the others, and Mozilla is restrained from its apparent power.

    What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom-line improvement from the CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the many groups within Microsoft to move.

    Unlike Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However it seems to fit; all of their security work has been directed to market interests. Hence for example their work in identity & authentication (.net, infocard, etc) was all directed at creating the platform for capturing the future market.

    What is odd is that all CAs agree that they want their logo on the browser real estate. Big and small. So one would think that there was a unified approach to this, and that it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.

    But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.

    Which all leads to a new group of people trying to solve the problem. The authors present their model as this:

    The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
    A hash of the certificate.
    The country of the issuing CA.
    The name of the CA.
    The country of the website.
    The name of the website.
    The entire chain of trust up to the root CA.

    When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
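    In code terms, the policy described above boils down to a short cache-and-compare. A sketch of the paper's stated rules (the data structures are my assumptions, not the authors' code):

        import hashlib

        # cache maps hostname -> (cert_hash, ca_name, ca_country)
        cache = {}

        def certlock_verdict(host, cert_der, ca_name, ca_country):
            cert_hash = hashlib.sha256(cert_der).hexdigest()
            prior = cache.get(host)
            if prior is not None and prior[0] != cert_hash:
                _, prior_name, prior_country = prior
                if ca_name != prior_name and ca_country != prior_country:
                    return "warn"      # cross-country CA change: show the warning
            cache[host] = (cert_hash, ca_name, ca_country)
            return "load"              # first visit, unchanged cert, or same CA/country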

    This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.

    But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actual shipped code and garnered market share are likely to change the security position of users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned that is already assumed not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.

    From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.

    Posted by iang at 07:52 PM | Comments (1) | TrackBack

    February 10, 2010

    EV's green cert is breached (of course) (SNAFU)

    Reading up on something or other (Ivan Ristić), I stumbled on this EV breach by Adrian Dimcev:

    Say you go to https://addons.mozilla.org and download a popular extension. Maybe NoScript. The download location appears to be over HTTPS. ... (lots of HTTP/HTTPS blah blah) ... Today I had some time, so I’ve decided to play a little with this. ...

    Well, Wladimir Palant is now the author of NoScript, and of course, I’m downloading it over HTTPS.

    Boom! Mozilla's website uses an EV ("extended validation") certificate to present to the user that this is a high security site, or something. However, it provided the downloads over HTTP, and this was easily manipulable. Hence, NoScript (which I use) was hit by an MITM, and this might explain why Robert Hansen said that the real author of NoScript, Giorgio Maone, was the 9th most dangerous person on the planet.

    What's going on here? Well, several things. The promise that the web site makes when it displays the green EV certificate is just that: a promise. The essence of the EV promise is that it is somehow of a stronger quality than say raw unencrypted HTTP.

    EV checks identity to the Nth degree, which is why it is called "extended validation." Checking the identity is useful, but only insofar as it is matched by a balance in overall security, so the difference between ordinary certs and EV is mostly irrelevant. In simple security model terms, that's not where the threats are (or ever were), so they are winding up the wrong knob. In economics terms, EV raises the cost barrier, which is classical price discrimination, and this results in a colourful signal, not a metric or measure. The marketing aligns the signal of green to security, but the attack easily splits security from promise. Worse, the more they wind the knob, the more they drift off topic, as per the silver bullets hypothesis.

    If you want to put it in financial or payments context, EV should be like PCI, not like KYC.

    But it isn't, it's the gold-card of KYC. So, how easy is it to breach the site itself and render the promise a joke? Well, this is the annoying part. Security practices in the HTTP world today are so woeful that even if you know a lot, and even if you try hard, a site is terrifically hard to secure. Basically, complexity is the enemy of security, and complexity is too high in the world of websites & browsers.

    So CABForum, the promoters of EV, fell into the trap of tightening the identity knob up so much, to increase the promise of security ... but didn't look at all at the real security equation on which it is dependent. So, it is a strong brand over a weak product, it's green paint over wood rot. Worse for them, they won't ever rectify this, because the solutions are out of their reach: they cannot weaken the identity promise without causing the bureaucrats to attack them, and they cannot improve the security of the foundation without destroying the business model.

    Which brings us to the real security question. What's the easiest fix? It is:

    there is only one mode, and it is secure.

    Now, hypothetically, we could see a possibility of saving EV if the CABForum contracts were adjusted such that they enforce this dictum. In practical measures, it would be a restriction that an EV-protected website would only communicate over EV, there would be no downgrade possible. Not to Blue HTTPS, not to white HTTP. This would include all downloads, all javascript, all mashups, all off-site blah blah.
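    As a sketch of what "no downgrade possible" means in practice, here's a hedged example (function and names are mine) that flags every resource on a secure page that slips out of the secure mode:

        from urllib.parse import urlparse

        def downgrade_violations(page_url, resource_urls):
            # On a secure page, every download, script and mashup must also be
            # secure; anything else is the gap the attacker drives through.
            if urlparse(page_url).scheme != "https":
                return []              # the rule only bites on secure pages
            return [u for u in resource_urls if urlparse(u).scheme != "https"]

        # downgrade_violations("https://addons.example.org/page",
        #     ["https://cdn.example.org/a.js", "http://mirror.example.org/x.xpi"])
        # -> ["http://mirror.example.org/x.xpi"]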

    And that would be the first problem with securing the promise: EV is sold as a simple identity advertisement, it does not enter into the security business at all. So this would make a radical change in the sales process, and it's not clear it would survive the transition.

    Secondly, CABForum is a cartel of people who are in the sales business. It deals in security-talk, not security-work. So as a practical matter, the institution is not staffed with the people who are going to pick up this ball and run with it. It has a history of missing the target and claiming victory (c.f., phishing), so what would change its path now?

    That's not to say that EV is all bad, but here is an example of how CABForum succeeded and failed, as illustrative of their difficulties:

    EV achieved one thing, in that it put the brand name of the CA on the agenda. HTTPS security at the identity level is worthless unless the brand of the CA is shown. I haven't seen a Microsoft browser in a while, but the mock-ups showed the brand of the CA, so this gives the consumer a chance to avoid the obvious switcheroo from known brand to strange foreign thing, and recent breaches of various low-end commercial CAs indicate there isn't much consistency across the brands. (Or fake injection attacks, if one is worried about MITB.)

    So it is essential to any sense of security that the identity of who made the security statement be known to consumers; and CABForum tried to make this happen. Here's one well-known (branded) CA's display of the complete security statement.

    But Mozilla was a hold-out. (It is a mystery why Mozilla fights against the complete security statement. I have a feeling it is more of the same problem that infects CABForum, Microsoft and the Internet in general: monoculture. Mozilla is staffed by developers who apparently think that brands are TV-adverts or road-side billboards of no value to their users, and that Mozilla's mission is to protect their flock against another mass-media rape of consciousness & humanity by the evil globalised corporations ... rather than appreciating the brand as a handle to security statements that mesh into an overall security model.)

    Which puts the example to the test: although CABForum was capable of winding the identity knob up to 11, and beyond, they were not capable of adjusting the guidelines to change the security model, and force the brand of the CA to be shown on the browser, in some sense. In these terms, what has now happened is that Mozilla is embarrassed, but zero fallout lands on the CA. It was the CA that should have solved the problem in the first place, because the CA is in the sell-security business, Mozilla is not. So the CA should have adjusted contracts to control the environment in which the green could be displayed. Oops.

    Where do we go from here? Well, we'll probably see a lot of embarrassing attacks on EV ... the brand of EV will wobble, but still sell (mostly because consumers don't believe it anyway, because they know all security marketing is mostly talk [or, do they?], and corporates will try any advertising strategy, because most of them are unprovable anyway). But, gradually the embarrassments will add up to sufficient pressure to push the CABForum to entering the security business. (e.g., PCI not KYC.)

    The big question is, how long will this take, and will it actually do any good? If it takes 5-10 years, which I would predict, it will likely be overtaken by events, just as happened with the first round of security world versus phishing. It seems that we are in that old military story where our unfortunate user-soldiers are being shot up on the beaches while the last-war generals are still arguing as to what paint to put on the rusty ships. SNAFU, "situation normal, all futzed up," or in normal language, same as it ever was.

    Posted by iang at 12:23 PM | Comments (7) | TrackBack

    January 23, 2010

    load up on Steel, and shoot it out! PCI and the market for silver bullets

    Unusually, someone in the press wrote up a real controversy, called The Great PCI Security Debate of 2010. Exciting stuff! In this case, a bunch of security people including Bill Brenner, Ben Rothke, Anton Chuvakin and other famous names are arguing whether PCI is good or bad. The short story is that we are in the market for silver bullets, and this article nicely lays out the evidence:

    Let's just look at the security market: I did a market survey when I was at IBM and there were about 70 different security product technologies, not even counting services. How many of those are required by PCI? It's a tiny subset. No one invests in all 70.

    In this market, external factors are the driving force:

    But the truth is, when someone determined they had to do something about targeted attacks or data loss prevention for intellectual property, they had a pilot and a budget but their bosses told them to cut it. The reason was, "I might get hacked, but I will get fined." That's a direct quote from a CIO and it's very logical and business focused. But instead of securing their highest-risk priority they're doing the thing that they'll get fined for not doing.

    We don't "do security," rather, we avoid exposure to fingerpointing and other embarrassments. By way of hypotheses in the market for silver bullets, we then find ourselves seeking to reduce the exposure to those external costs; this causes the evolution of some form of best practices which is an agreed set that simply ensures you are not isolated by difference. In the case in point, this best practices is PCI.

    In other words, security by herding, compliance-seeking behaviour:

    One of the things I see within organizations is that there's a hurry-up-and-wait mentality. An organization will push really hard to get compliant. Then, the day the auditor walks out the door they say, "Thank goodness. Now I can wait until next year." So when we talk about compliance driving the wrong mindset, I think the wrong mindset was there to begin with.

    It's a difficult proposition to say we're doing compliance instead of security when what I see is they're doing compliance because someone told them to, whereas if no one told them to they'd do nothing. It's like telling your kids to do their homework. If you don't tell them to do the homework they're going to play outside all day.

    This is rational; we simply save more money doing that. What to do about it? If one is a consultant, one can sell more services:

    There is security outside of PCI and if we as security counselors aren't encouraging customers to look outside PCI then we ourselves are failing the industry because we're not encouraging them to look to good security as opposed to just good PCI compliance. The idea that they fear the auditor and not the attacker really bothers me.

    Which is of course rational for the adviser, but not rational for the buyer because more security likely reduces profits in this market. If on the other hand we are trying to make the market more efficient (generally a good goal, as this means it reduces the costs to all players) then the goal is simple: move the market for silver bullets into a market for lemons or limes.

    That's easy to say, very hard to do. There's at least one guy who doesn't want that to happen: the attacker. Furthermore, depending on your view of the perversion of incentives in the payment industry, fraud is good for profits because it enables building of margin. Our security adviser has the same perverse incentive: the more fraud, the more jobs. Indeed, everyone is positive about it, except the user, and they get the bill, not the vote.

    I see a bright future for PCI. To put it in literary terms:

    Ben Rothke: Dan Geer is the Shakespeare of information security, but at the end of the day people are reading Danielle Steel, not Shakespeare.

    In the market for silver bullets, you don't need to talk like Shakespeare. Load up on bullets of Steel, or as many other mangled metaphors as you can cram in, and you're good to shoot it out with the rest of 'em.

    Posted by iang at 08:23 AM | Comments (0) | TrackBack

    December 05, 2009

    Phishing numbers

    From a couple of sources posted by Lynn:

    • a single run only hits 0.0005 percent of users;
    • 1% of customers will follow the phishing links;
    • 0.5% of customers fall for phishing schemes and compromise their online banking information;
    • the monetary losses could range between $2.4 million and $9.4 million annually per one million online banking clients;
    • on average ... approximately 832 a year ... reached users' inboxes;
    • costs estimated at up to $9.4 million per year per million users;
    • based on data collected from "3 million e-banking users who are customers of 10 sizeable U.S. and European banks."

    The primary source was a survey run by an anti-phishing software vendor, so caveats apply. Still interesting!

    For more meat on the bigger picture, see this article: Ending the PCI Blame Game. Which reads like a compressed version of this blog! Perhaps, finally, the thing that is staring the financial operators in the face has started to hit home, and they are really ready to sound the alarm.

    Posted by iang at 06:35 PM | Comments (1) | TrackBack

    November 26, 2009

    Breaches not disclosed as much as we had hoped

    One of the brief positive spots in the last decade was the California bill that forced data breaches to be disclosed to affected customers. It took a while, but in 2005 the flood gates opened. Now reports the FBI:

    "Of the thousands of cases that we've investigated, the public knows about a handful," said Shawn Henry, assistant director for the Federal Bureau of Investigation's Cyber Division. "There are million-dollar cases that nobody knows about."

    That seems to point at a super-iceberg. To some extent this is expected, because companies will search out new methods to bypass the intent of the disclosure laws. And also there is the underlying economics. As has been pointed out by many (or perhaps not many but at least me) the reputation damage probably dwarfs the actual or measurable direct losses to the company and its customers.

    Companies that are victims of cybercrime are reluctant to come forward out of fear the publicity will hurt their reputations, scare away customers and hurt profits. Sometimes they don't report the crimes to the FBI at all. In other cases they wait so long that it is tough to track down evidence.

    So, avoidance of disclosure is the strategy for all properly managed companies, because they are required to manage the assets of their shareholders to the best interests of the shareholders. If you want a more dedicated treatment leading to this conclusion, have a look at "the market for silver bullets" paper.

    Meanwhile, the FBI reports that the big companies have improved their security somewhat, so the attackers have turned to smaller companies. And:

    They also target corporate executives and other wealthy public figures who it is relatively easy to pursue using public records. The FBI pursues such cases, though they are rarely made public.

    Huh. And this outstanding coordinated attack:

    A similar approach was used in a scheme that defrauded the Royal Bank of Scotland's (RBS.L: Quote, Profile, Research, Stock Buzz) RBS WorldPay of more than $9 million. A group, which included people from Estonia, Russia and Moldova, has been indicted for compromising the data encryption used by RBS WorldPay, one of the leading payment processing businesses globally.

    The ring was accused of hacking data for payroll debit cards, which enable employees to withdraw their salaries from automated teller machines. More than $9 million was withdrawn in less than 12 hours from more than 2,100 ATMs around the world, the Justice Department has said.

    2,100 ATMs! worldwide! That leaves that USA gang looking somewhat kindergarten, with its mere 50 ATMs across a few cities. No doubt about it, we're now talking serious networked crime, and I'm not referring to the Internet but to the network of collaborating economic agents.

    Compromising the data encryption, even. Anyone know the specs? These are important numbers. Did I miss this story, or does it prove the FBI's point?

    Posted by iang at 01:23 PM | Comments (0) | TrackBack

    October 19, 2009

    Denial of Service is the greatest bug of most security systems

    I've had a rather troubling rash of blog comment failures recently. Not on FC, which seems to be ok ("to me"), but everywhere else. At about four failures in the last couple of days, I'm starting to get annoyed. I like to think that my time in writing blog comments for other blogs is valuable, and sometimes I think for many minutes about the best way to bring a point home.

    But more than half the time, my comment is rejected. The problem is, on the one hand, overly sophisticated comment boxes that rely on exotica like javascript and SSO through some place or other ... and spam, on the other.

    These things have destroyed the credibility of the blog world. If you recall, there was a time when people used blogs for _conversations_. Now, most blogs are self-serving promotion tools. Trackbacks are dead, so the conversational reward is gone, and comments are slow. You have to be dedicated to want to follow a blog and put a comment on there, or stupid enough to think your comment matters, and you'll keep fighting the bl**dy javascript box.

    The one case where I know clearly "it's not just me" is John Robb's blog. This was a *fantastic* blog where there was great conversation, until a year or two back. It went from dozens to a couple in one hit by turning on whatever flavour of the month was available in the blog system. I've not been able to comment there since, and I'm not alone.

    This is denial of service. To all of us. And this denial of service is the greatest evidence of the failure of Internet security. Yet it is easy, theoretically easy, to avoid. Here, it is avoided by the simplest of tricks; maybe one spam per month comes my way, but if I got spam like others get spam, I'd stop doing the blog. Again, denial of service.

    Over on CAcert.org's blog they recently implemented client certs. I'm not 100% convinced that this will eliminate comment spam, but I'm 99.9% convinced. And it is easy to use, and it also (more or less) eliminates that terrible thing called access control, which was delivering another denial of service: the people who could write weren't trusted to write, because the access control system said they had to be access-controlled. Gone, all gone.

    According to the blog post on it:

    The CAcert-Blog is now fully X509 enabled. From never visited the site before and using a named certificate you can, with one click (log in), register for the site and have author status ready to write your own contribution.

    Sounds like a good idea, right? So why don't most people do this? Because they can't. Mostly they can't because they do not have a client certificate. And if they don't have one, there isn't any point in the site owner asking for it. Chicken & egg?

    But actually there is another reason why people don't have a client certificate: it is because of all sorts of mumbo jumbo brought up by the SSL / PKIX people, chief amongst which is a claim that we need to know who you are before we can entrust you with a client certificate ... which I will now show to be a fallacy. The reason client certificates work is this:

    If you only have a WoT unnamed certificate you can write your article and it will be spam controlled by the PR people (aka editors).

    If you had a contributor account and haven’t posted anything yet you have been downgraded to a subscriber (no comment or write a post access) with all the other spammers. The good news is once you log in with a certificate you get upgraded to the correct status just as if you’d registered.

    We don't actually need to know who you are. We only need to know that you are not a spammer, and that you are going to write a good article for us. Both of these are more or less equivalent, if you think about it; they are a logical parallel to the CAPTCHA or Turing test. And we can prove this easily and economically and efficiently: write an article, and you're in.

    Or, in certificate terms, we don't need to know who you are, we only need to know you are the same person as last time, when you were good.
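    In code, the "same person as last time" test needs nothing more than a table keyed on the certificate's fingerprint. A minimal sketch (the statuses and names are my assumptions, not CAcert's code):

        import hashlib

        standing = {}                  # cert fingerprint -> earned status

        def fingerprint(cert_der):
            return hashlib.sha256(cert_der).hexdigest()

        def classify(cert_der):
            # Not "who are you?" but "are you who you were, when you were good?"
            fp = fingerprint(cert_der)
            if fp not in standing:
                standing[fp] = "moderated"   # first sighting: editors check it
            return standing[fp]

        def promote(cert_der):
            # Write one good article and you're in.
            standing[fingerprint(cert_der)] = "author"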

    This works. It is an undeniable benefit:

    There is no password authentication any more. The time taken to make sure both behaved reliably was not possible in the time the admins had available.

    That's two more pluses right there: no admin de-spamming time lost to us and general society (when there were about 290 in the wordpress click-delete queue) and we get rid of those bl**dy passwords, so another denial of service killed.

    Why isn't this more available? The problem comes down to an inherent belief that the above doesn't work. Which is of course a complete nonsense. 2 weeks later, zero comment spam, and I know this will carry on being reliable because the time taken to get a zero-name client certificate (free, it's just your time involved!) is well in excess of the trick required to comment on this blog.

    No matter the *results*, because of the belief that "last-time-good-time" tests are not valuable, the feature of using client certs is not effectively available in the browser. That which I speak of here is so simple to code up it can actually be tricked from any website to happen (which is how CAs get it into your browser in the first place, some simple code that causes your browser to do it all). It is basically the creation of a certificate key pair within the browser, with a no-name in it. Commonly called the self-signed certificate or SSC, these things can be put into the browser in about 5 seconds, automatically, on startup or on absence or whenever. If you recall that aphorism:

    There is only one mode, and it is secure.

    And contrast it to SSL, we can see what went wrong: there is an *option* of using a client cert, which is a completely insane choice. The choice of making the client certificate optional within SSL is a decision not only to allow insecurity in the mode, but also a decision to promote insecurity, by practically eliminating the use of client certs (see the chicken & egg problem).
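    Yet the thing being denied is trivial. A hedged sketch of the five-second, no-name creation, using the Python cryptography library (the subject name and validity period are my choices):

        from datetime import datetime, timedelta
        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa

        def make_noname_ssc():
            # A key pair plus a certificate signed by itself: no name, no CA.
            key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
            name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"no-name")])
            cert = (x509.CertificateBuilder()
                    .subject_name(name)
                    .issuer_name(name)     # self-signed: issuer == subject
                    .public_key(key.public_key())
                    .serial_number(x509.random_serial_number())
                    .not_valid_before(datetime.utcnow())
                    .not_valid_after(datetime.utcnow() + timedelta(days=365))
                    .sign(key, hashes.SHA256()))
            return key, cert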

    And this is where SSL and the PKIX deliver their greatest harm. It denies simple cryptographic security to a wide audience, in order to deliver ... something else, which it turns out isn't as secure as hoped because everyone selects the wrong option. The denial of service attack is dominating, it's at the level of 99% and beyond: how many blogs do you know that have trouble with comments? How many use SSL at all?

    So next time someone asks you, why these effing passwords are causing so much grief in your support department, ask them why they haven't implemented client certs? Or, why the spam problem is draining your life and destroying your social network? Client certs solve that problem.

    SSL security is like Bismarck's sausages: "making laws is like making sausages, you don't want to watch them being made." The difference is, at least Bismarck got a sausage!

    Footnote: you're probably going to argue that SSCs will be adopted by the spammer's brigade once there is widespread use of this trick. Think for a minute before you post that comment; the answer is right there in front of your nose! Also, you are probably going to mention all these other limitations of the solution. Think for another minute and consider this claim: almost all of the real limitations exist because the solution isn't much used. Again, chicken & egg, see "usage". Or maybe you'll argue that we don't need it now we have OpenID. That's specious, because we don't actually have OpenID as yet (some few do, not all), and also, the presence of one technology rarely argues against another not being needed; only marketing argues like that.

    Posted by iang at 10:47 AM | Comments (6) | TrackBack

    October 01, 2009

    Man-in-the-Browser goes to court

    Stephen Mason reports that MITB is in court:

    A gang of internet fraudsters used a sophisticated virus to con members of the public into parting with their banking details and stealing £600,000, a court heard today.

    Once the 'malicious software' had infected their computers, it waited until users logged on to their accounts, checked there was enough money in them and then insinuated itself into cash transfer procedures.

    (also on El Reg.) This breaches the 2-factor authentication system commonly in use because (a) the trojan controls the user's PC, and (b) the authentication scheme that was commonly pushed out over the last decade or so only authenticates the user, not the transaction. So as the trojan now controls the PC, it is the user. And the real user happily authenticates himself, and the trojan, and the trojan's transactions, and the trojan even lies about it!

    Numbers, more than ordinarily reliable because they have been heard in court:

    'In fact as a result of this Trojan virus fraud very many people - 138 customers - were affected in this way with some £600,000 being fraudulently transferred.

    'Some of that money, £140,000, was recouped by NatWest after they became aware of this scam.'

    This is called Man-in-the-browser, which is a subtle reference to SSL's vaunted protection against the Man-in-the-middle. Unfortunately several things went wrong in this area of security: Adi's 3rd law of security says the attacker always bypasses; one of my unnumbered aphorisms has it that the node is always the threat, never the wire; and finally, the extraordinary success of SSL in the mindspace war blocked any attempts to fix the essential problems. SSL is so secure that nobody dare challenge browser security.

    The MITB was first reported in March 2006 and sent a wave of fear through the leading European banks. If customers lost trust in online banking, this would turn their support / branch employment numbers on their heads. So they rapidly (for banks) developed a counter-attack by moving their confirmation process over to the SMS channel of users' phones. The Man-in-the-browser cannot leap across that air-gap, and so the MITB is more or less defeated.
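    The trick that makes the SMS channel work is binding the confirmation to the transaction, not merely to the user. A rough sketch of my own, not any bank's actual scheme:

        import hmac, hashlib

        def confirmation_code(shared_key, payee, amount_cents, nonce):
            # The code commits to payee and amount, so a trojan that rewrites
            # the transfer in the browser invalidates the code the user reads
            # off the SMS on the uninfected phone.
            msg = ("%s|%d|%s" % (payee, amount_cents, nonce)).encode()
            return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()[:6]

        # The bank's SMS reads: "Pay ACME Ltd 250.00? code a3f41b" -- the user
        # types the code into the browser only if the details on the phone match.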

    European banks tend to be proactive when it comes to security, and hence their losses are minuscule. Reported recently was something like €400k for a smaller country (7 million?) for an entire year, across all banks. This one case in the UK is double that, reflecting that British and USA banks are reactive on security. Although they knew about it, they ignored it.

    This could be called the "prove-it" school of security, and it has merit. As we saw with SSL, there never really was much of a threat on the wire; and when it came to the node, we were pretty much defenceless (although a lot of that comes down to one factor: Microsoft Windows). So when faced with FUD from the crypto / security industry, it is very very hard to separate real dangers from made-up ones. I felt it was serious; others thought I was spreading FUD! Hence Philipp Gühring's paper Concepts against Man-in-the-Browser Attacks, and the episode formed fascinating evidence for the market for silver bullets. The concept is now proven right in practice, but it didn't turn out how we predicted.

    What is also interesting is that we now have a good cycle timeline: March 2006 is when the threat first crossed our radars. September 2009 it is in the British courts.

    Postscript. More numbers from today's MITB:

    A next-generation Trojan recently discovered pilfering online bank accounts around the world kicks it up a notch by avoiding any behavior that would trigger a fraud alert and forging the victim's bank statement to cover its tracks.

    The so-called URLZone Trojan doesn't just dupe users into giving up their online banking credentials like most banking Trojans do: Instead, it calls back to its command and control server for specific instructions on exactly how much to steal from the victim's bank account without raising any suspicion, and to which money mule account to send it the money. Then it forges the victim's on-screen bank statements so the person and bank don't see the unauthorized transaction.

    Researchers from Finjan found the sophisticated attack, in which the cybercriminals stole around 200,000 euro per day during a period of 22 days in August from several online European bank customers, many of whom were based in Germany....

    "The Trojan was smart enough to be able to look at the [victim's] bank balance," says Yuval Ben-Itzhak, CTO of Finjan... Finjan found the attackers had lured about 90,000 potential victims to their sites, and successfully infected about 6,400 of them. ...URLZone ensures the transactions are subtle: "The balance must be positive, and they set a minimum and maximum amount" based on the victim's balance, Ben-Itzhak says. That ensures the bank's anti-fraud system doesn't trigger an alert, he says.

    And the malware is making the decisions -- and alterations to the bank statement -- in real time, he says. In one case, the attackers stole 8,576 euro, but the Trojan forged a screen that showed the transferred amount as 53.94 euro. The only way the victim would discover the discrepancy is if he logged into his account from an uninfected machine.

    Posted by iang at 09:26 AM | Comments (1) | TrackBack

    September 02, 2009

    Robert Garigue and Charlemagne as a model of infosec

    Gunnar reports that someone called Robert Garigue died last month. This person I knew not, but his model resonates. Sound bites only from Gunnar's post:

    "It's the End of the CISO As We Know It (And I Feel Fine)"...

    ...First, they miss the opportunity to look at security as a business enabler. Dr. Garigue pointed out that because cars have brakes, we can drive faster. Security as a business enabler should absolutely be the starting point for enterprise information security programs.
    ...

    Secondly, if your security model reflects some CYA abstraction of reality instead of reality itself your security model is flawed. I explored this endemic myopia...

    This rhymes with: "what's your business model?" The bit lacking from most orientations is the enabler: why are we here in the first place? It's not to show the most elegant protocol for achieving C-I-A (confidentiality, integrity, authenticity), but to promote the business.

    How do we do that? Well, most technologists don't understand the business, let alone speak its language. And the business folks can't speak the techno-crypto blah blah either, so the blame is fairly shared. Dr. Garigue points us to Charlemagne as a better model:

    King of the Franks and Holy Roman Emperor; conqueror of the Lombards and Saxons (742-814) - reunited much of Europe after the Dark Ages.

    He set up other schools, opening them to peasant boys as well as nobles. Charlemagne never stopped studying. He brought an English monk, Alcuin, and other scholars to his court - encouraging the development of a standard script.

    He set up money standards to encourage commerce, tried to build a Rhine-Danube canal, and urged better farming methods. He especially worked to spread education and Christianity in every class of people.

    He relied on Counts, Margraves and Missi Domini to help him.

    Margraves - Guard the frontier districts of the empire. Margraves retained, within their own jurisdictions, the authority of dukes in the feudal arm of the empire.

    Missi Domini - Messengers of the King.

    In other words, the role of the security person is to enable others to learn, not to do, nor to critique, nor to design. In more specific terms, the goal is to bring the team to a better standard, and a better mix of security and business. Garigue's mandate for IT security?

    Knowledge of risky things is of strategic value

    How to know today tomorrow’s unknown ?

    How to structure information security processes in an organization so as to identify and address the NEXT categories of risks ?

    Curious, isn't it! But if we think about how reactive most security thinking is these days, one has to wonder where we would ever get the chance to fight tomorrow's war, today?

    Posted by iang at 10:45 PM | Comments (1) | TrackBack

    July 15, 2009

    trouble in PKI land

    The CA and PKI business is busy this week. CAcert, a community Certification Authority, has a special general meeting to resolve the trauma of the collapse of their audit process. Depending on who you ask, my resignation as auditor was either the symptom or the cause.

    In my opinion, the process wasn't working, so now I'm switching to the other side of the tracks. I'll work to get the audit done from the inside. Whether it will be faster or easier this way is difficult to say, we only get to run the experiment once.

    Meanwhile, Mike Zusman and Alex Sotirov are claiming to have breached the EV green bar thing used by some higher end websites. No details available yet, it's the normal tease before a BlabHat style presentation by academics. Rumour has it that they've exploited weaknesses in the browsers. Some details emerging:

    With control of the DNS for the access point, the attackers can establish their machines as men-in-the-middle, monitoring what victims logged into the access point are up to. They can let victims connect to EV SSL sites - turning the address bars green. Subsequently, they can redirect the connection to a DV SSL sessions under a certificates they have gotten illicitly, but the browser will still show the green bar.

    Ah that old chestnut: if you slice your site down the middle and do security on the left and no or lesser security on the right, guess where the attacker comes in? Not the left or the right, but up the middle, between the two. He exploits the gap. Which is why elsewhere, we say "there is only one mode and it is secure."

    Aside from that, this is an interesting data point. It might be considered that this is proof that the process is working (following the GP theory), or it might be proof that the process is broken (following the sleeping-dogs-lie model of security).

    Although EV represents a good documentation of what the USA/Canada region (not Europe) would subscribe to as "best practices," it fails in some disappointing ways. And in some ways it has made matters worse. Here's one: because the closed proprietary group CA/B Forum didn't really agree to fix the real problems, those real problems are still there. As Extended Validation has held itself up as a sort of gold standard, this means that attackers now have something fun to focus on. We all knew that SSL was sort of facade-ware in the real security game, and didn't bother to mention it. But now that the bigger CAs have bought into the marketing campaign, they'll get a steady stream of attention from academics and press.

    I would guess less so from real attackers, because there are easier pickings elsewhere, but maybe I'm wrong:

    "From May to June 2009 the total number of fraudulent website URLs using VeriSign SSL certificates represented 26% of all SSL certificate attacks, while the previous six months presented only a single occurrence," Raza wrote on the Symantec Security blogs.

    ... MarkMonitor found more than 7,300 domains exploited four top U.S. and international bank brands with 16% of them registered since September 2008.
    .... But in the latest spate of phishing attempts, the SSL certificates were legitimate because "they matched the URL of the fake pages that were mimicking the target brands," Raza wrote.

    VeriSign Inc., which sells SSL certificates, points out that SSL certificate fraud currently represents a tiny percentage of overall phishing attacks. Only two domains, and two VeriSign certificates were compromised in the attacks identified by Symantec, which targeted seven different brands.

    "This activity falls well within the normal variability you would see on a very infrequent occurrence," said Tim Callan, a product marketing executive for VeriSign's SSL business unit. "If these were the results of a coin flip, with heads yielding 1 and tails yielding 0, we wouldn't be surprised to see this sequence at all, and certainly wouldn't conclude that there's any upward trend towards heads coming up on the coin."

    Well, we hope that nobody's head is flipped in an unsurprising fashion....

    It remains to be seen whether this makes any difference. I must admit, I check the green bar on my browser when online-banking, but annoyingly it makes me click to see who signed it. For real users, Firefox says that it is the website, and this is wrong and annoying, but Mozilla has not shown itself adept at understanding the legal and business side of security. I've heard Safari has been fixed up so probably time to try that again and report sometime.

    Then, over to Germany, where a snafu with an HSM ("hardware security module") caused a root key to be lost (also in German). Over in the crypto lists, there are PKI opponents pointing out how this means it doesn't work, and there are PKI proponents pointing out how they should have employed better consultants. Both sides are right of course, so what to conclude?

    Test runs with Germany's first-generation electronic health cards and doctors' "health professional cards" have suffered a serious setback. After the failure of a hardware security module (HSM) holding the private keys for the root Certificate Authority (root CA) for the first-generation cards, it emerged that the data had not been backed up. Consequently, if additional new cards are required for field testing, all of the cards previously produced for the tests will have to be replaced, because a new root CA will have to be generated. ... Besides its use in authentication, the root CA is also important for card withdrawal (the revocation service).

    The first thing to realise was that this was a test rollout and not the real thing. So the test discovered a major weakness; in that sense it is successful, albeit highly embarrassing because it reached the press.

    The second thing is the HSM issue. As we know, PKI is constructed as a hierarchy, or a tree. At the root of the tree is the root key of course. If this breaks, everything else collapses.
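    A sketch makes the fragility visible; the chain check below is illustrative only (did_sign is a hypothetical signature-verification method, not a real API):

        def chain_is_valid(chain, trusted_roots):
            # Walk leaf -> root; every link hangs off the one above it,
            # which is why a broken root takes the whole tree down with it.
            for child, parent in zip(chain, chain[1:]):
                if not parent.did_sign(child):   # hypothetical verify step
                    return False
            return chain[-1] in trusted_roots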

    Hence there is a terrible fear of the root breaking. This feeds into the wishes of suppliers of high security modules, who make hardware that protect the root from being stolen. But, in this case, the HSM broke, and there was no backup. So a protection for one fear -- theft -- resulted in a vulnerability to another fear -- data loss.

    A moment's thought and we realise that the HSM has to have a backup. Which has to be at least as good as the HSM. Which means we then have some rather cute conundrums, based on the Alice in Wonderland concept of having one single root except we need multiple single roots... In practice, how do we create the root inside the HSM (for security protection) and get it to another HSM (for recovery protection)?
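    Stripped of the hardware, the shape of the backup problem is just key wrapping; an illustrative sketch only (real HSMs do this inside the metal, under split-knowledge transport keys):

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def wrap_root_key(root_key, transport_key):
            # Encrypt the root under a transport key so it can move to the
            # backup HSM -- which must now be guarded as well as the first.
            nonce = os.urandom(12)
            return nonce + AESGCM(transport_key).encrypt(nonce, root_key, b"root-backup")

        def unwrap_root_key(blob, transport_key):
            return AESGCM(transport_key).decrypt(blob[:12], blob[12:], b"root-backup")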

    Serious engineers and architects will be reaching for one word: BRITTLE! And so it is. Yes, it is possible to do this, but only by breaking the hierarchical principle of PKI itself. It is hard to break fundamental principles, and the result is that PKI will always be brittle, the implementations will always have contradictions that are swept under the carpet by the managers, auditors and salesmen. The PKI design is simply not real world engineering, and the only thing that keeps it going is the institutional deadly embrace of governments, standards committees, developers and security companies.

    Not the market demand. But, not all has been bad in the PKI world. Actually, since the bottoming out of the dotcom collapse, certs have been on the uptake, and market demand is present albeit not anything beyond compliance-driven. Here comes a minor item of success:

    VeriSign, Inc. [SNIP] today reported it has topped the 1 billion mark for daily Online Certificate Status Protocol (OCSP) checks.

    [SNIP] A key link in the online security chain, OCSP offers the most timely and efficient way for Web browsers to determine whether a Secure Sockets Layer (SSL) or user certificate is still valid or has been revoked. Generally, when a browser initiates an SSL session, OCSP servers receive a query to check to see if the certificate in use is valid. Likewise, when a user initiates actions such as smartcard logon, VPN access or Web authentication, OCSP servers check the validity of the user certificate that is presented. OSCP servers are operated by Certificate Authorities, and VeriSign is the world's leading Certificate Authority.

    [SNIP] VeriSign is the EV SSL Certificate provider of choice for more than 10,000 Internet domain names, representing 74 percent of the entire EV SSL Certificate market worldwide.

    (In the above, I've snipped the self-serving marketing and one blatant misrepresentation.)

    Certificates are static statements. They can be revoked, but the old design of downloading complete lists of all revocations was not really workable (some CAs ship megabyte-sized lists). We now have a new thing whereby if you are in possession of a certificate, you can do an online check of its status, called OCSP.
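    Mechanically the check is simple; a hedged sketch with the Python cryptography library plus requests (the responder URL and PEM inputs are assumptions):

        import requests
        from cryptography import x509
        from cryptography.x509 import ocsp
        from cryptography.hazmat.primitives import hashes, serialization

        def certificate_status(cert_pem, issuer_pem, responder_url):
            # Build a one-certificate OCSP request and POST it to the CA's
            # responder; the answer is GOOD, REVOKED or UNKNOWN.
            cert = x509.load_pem_x509_certificate(cert_pem)
            issuer = x509.load_pem_x509_certificate(issuer_pem)
            req = (ocsp.OCSPRequestBuilder()
                   .add_certificate(cert, issuer, hashes.SHA1())
                   .build())
            resp = requests.post(responder_url,
                                 data=req.public_bytes(serialization.Encoding.DER),
                                 headers={"Content-Type": "application/ocsp-request"})
            return ocsp.load_der_ocsp_response(resp.content).certificate_status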

    The fundamental problem with this, and the reason why it took the industry so long to get around to making revocation a real-time thing, is that once you have that architecture in place, you no longer need certificates. If you know the website, you simply go to a trusted provider and get the public key. The problem with this approach is that it doesn't allow the CA business to sell certificates to web site owners. As it lacks any business model for CAs, the CAs will fight it tooth & nail.

    Just another conundrum from the office of security Kafkaism.

    Here's another one, this time from the world of code signing. The idea is that updates and plugins can be sent to you with a digital signature. This means variously that the code is good and won't hurt you, or someone knows who the attacker is, and you can't hurt him. Whatever it means, developers put great store in the apparent ability of the digital signature to protect themselves from something or other.

    But it doesn't work with Blackberry users. Allegedly, a Blackberry provider sent a signed code update to all users in United Arab Emirates:

    Yesterday it was reported by various media outlets that a recent BlackBerry software update from Etisalat (a UAE-based carrier) contained spyware that would intercept emails and text messages and send copies to a central Etisalat server. We decided to take a look to find out more.

    ...
    Whenever a message is received on the device, the Recv class first inspects it to determine if it contains an embedded command — more on this later. If not, it UTF-8 encodes the message, GZIPs it, AES encrypts it using a static key (”EtisalatIsAProviderForBlackBerry”), and Base64 encodes the result. It then adds this bundle to a transmit queue. The main app polls this queue every five seconds using a Timer, and when there are items in the queue to transmit, it calls this function to forward the message to a hardcoded server via HTTP (see below). The call to http.sendData() simply constructs the POST request and sends it over the wire with the proper headers.

    Oops! A signed spyware from the provider that copies all your private email and sends it to a server. Sounds simple, but there's a gotcha...

    The most alarming part about this whole situation is that people only noticed the malware because it was draining their batteries. The server receiving the initial registration packets (i.e. “Here I am, software is installed!”) got overloaded. Devices kept trying to connect every five seconds to empty the outbound message queue, thereby causing a battery drain. Some people were reporting on official BlackBerry forums that their batteries were being depleted from full charge in as little as half an hour.

    So, even though the spyware provider had a way to turn it on and off:

    It doesn’t seem to execute arbitrary commands, just packages up device information such as IMEI, IMSI, phone number, etc. and sends it back to the central server, the same way it does for received messages. It also provides a way to remotely enable/disable the spyware itself using the commands “start” and “stop”.

    There was something wrong with the design, and everyone's blackberry went mad. Two points: if you want to spy on your own customers, be careful, and test it. Get quality engineers on to that part, because you are perverting a brittle design, and that is tricky stuff.

    Second point. If you want to control a large portion of the population who has these devices, the centralised hierarchy of PKI and its one root to bind them all principle would seem to be perfectly designed. Nobody can control it except the center, which puts you in charge. In this case, the center can use its powerful code-signing abilities to deliver whatever you trust to it. (You trust what it tells you to trust, of course.)

    Which has led some wits to label the CAs as centralised vulnerability partners. Which is odd, because some organisations that should know better than to outsource the keys to their security continue to do so.

    But who cares, as long as the work flows for the consultants, the committees, the HSM providers and the CAs?

    Posted by iang at 07:13 AM | Comments (7) | TrackBack

    March 12, 2009

    We don't fear no black swan!

    Over on EC, Adam does a presentation on his new book, co-authored with Andrew Stewart. AFAICS, the basic message in the book is "security sucks, we better start again." Right, no argument there.

    Curiously, he's also experimenting with Twitter in the presentations as a "silent" form of interaction (more). It is rather poignant to mix twitter and security, but I generally like these experiments. The grey hairs don't understand this new stuff, and they have to find out somehow. Somehow and sometime; the only question is whether we are the dinosaurs or the mammals.

    Reading through the quotes (standard stuff) I came across this one, unattributed:

    I was pretty dismissive of "Black Swan" hype. I stand by that, and don't think we should allow fear of a black swan out there somewhere to prevent us from studying white ones and generalizing about what we can see.

    OK, we just saw the Black Swan over on the finance scene, where Wall Street is now turning into Rubble Alley. Why not on the net? Black swans are a name for those areas where our numbers are garbage and our formulas have an occasional tendency (say 1%) to blow up.

    Here's why there are no black swans on the net, for my money: there is no unified approach to security. Indeed, there isn't much of anything in security. There are tiny fights by tiny schools, but these go unadopted by the majority. Although there are a million certs out there, they play no real part in the security models of the users. A million OpenPGP keys are used for collecting signatures, not for securing data. Although there are hundreds of millions of lines of security code out there, now including fresh new Vista!, they are mostly ignored or bypassed or turned off, or lost to any other of the many Kerckhoffsian modes of failure.

    The vast majority of the net is insecure. We ain't got no security, we don't fear no black swan. We're about as low as we can get. If I look at the most successful security product of all time, Skype, it's showing around 10 million users right now. Facebook, Myspace, youtube, google, you name them, they *all* do an order of magnitude better than Skype's ugly duckling waddle.

    Why is the state of security so dire? Well, we could ask the DHS. Now, these guys are probably authoritative in at least a negative sense, because they actually were supposed to secure the USA government infrastructure. Here's what one guy says, thanks to Todd who passed this on:

    The official in charge of coordinating the U.S. government's cybersecurity operations has quit, saying the expanding control of the National Security Agency over the nation's computer security efforts poses "threats to our democratic processes."

    "Even from a security standpoint," Rod Beckstrom, the head of the Department of Homeland Security's National Cyber Security Center, told United Press International, "it is unwise to hand over the security of all government networks to a single organization."

    "If our founding fathers were taking part in this debate (about the future organization of the government's cybersecurity activities) there is no doubt in my mind they would support a separation of security powers among different (government) organizations, in line with their commitment to checks and balances."

    In a letter to Homeland Security Secretary Janet Napolitano last week, Beckstrom said the NSA "dominates most national cyber efforts" and "effectively controls DHS cyber efforts through detailees, technology insertions and the proposed move" of the NCSC to an NSA facility at the agency's Fort Meade, Md., headquarters.

    It's called "the equity debate" for reasons obscure. Basically, the mission of the NSA is to breach our security. The theory has it that the NSA did this (partly) by ensuring that our security -- the security of the entire net -- was flaky enough for them to get in. Now we all pay the price, as the somewhat slower but more incisive criminal economy takes its tax.

    Quite how we get from that NSA mission to where we are now is a rather long walk, and to be fair, the evidence is a bit scattered and tenuous. Unsurprisingly, the above resignation does not quite "spill the beans," thus preserving the beanholder's good name. But it is certainly good to see someone come out and say: these guys are ruining the party for all of us.

    Posted by iang at 07:00 PM | Comments (4) | TrackBack

    January 16, 2009

    What's missing in security: business

    Those of us who are impacted by the world of security suffer under a sort of love-hate relationship with the word; so much of it shapes how we build applications, yet so much of what is labelled security out there in the rest of the world is utter garbage.

    So we tend to spend a lot of our time reverse-engineering popular security thought and finding the security bugs in it. I think I've found another one. Consider this very concise and clear description from Frank Stajano, who has published a draft book section seeking comments:

    The viewpoint we shall adopt here, which I believe is the only one leading to robust system security engineering, is that security is essentially risk management. In the context of an adversarial situation, and from the viewpoint of the defender, we identify assets (things you want to protect, e.g. the collection of magazines under your bed), threats (bad things that might happen, e.g. someone stealing those magazines), vulnerabilities (weaknesses that might facilitate the occurrence of a threat, e.g. the fact that you rarely close the bedroom window when you go out), attacks (ways a threat can be made to happen, e.g. coming in through the open window and stealing the magazines—as well as, for good measure, that nice new four-wheel suitcase of yours to carry them away with) and risks (the expected loss caused by each attack, corresponding to the value of the asset involved times the probability that the attack will occur). Then we identify suitable safeguards (a priori defences, e.g. welding steel bars across the window to prevent break-ins) and countermeasures (a posteriori defences, e.g. welding steel bars to the window after a break-in has actually occurred, or calling the police). Finally, we implement the defences that are still worth implementing after evaluating their effectiveness and comparing their (certain) cost with the (uncertain) risk they mitigate.

    (my emphases.) That's a good description of how the classical security world sees it. We start by asking, "What's your threat model?" Then out of that we build a security model to deal with those threats. The security model then incorporates some knowledge of risks to manage the tradeoffs.
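
    As a toy illustration of the arithmetic in that description (all figures invented), consider the magazine collection:

        asset_value = 10_000     # the magazine collection, generously valued
        p_attack    = 0.05       # estimated annual probability of the theft
        risk        = asset_value * p_attack        # expected loss: 500 a year

        safeguard_cost = 300     # certain annual cost of the steel bars
        residual_p     = 0.01    # attack probability with the bars in place
        residual_risk  = asset_value * residual_p   # 100 a year

        # implement the defence only if its certain cost plus the remaining
        # risk beats the uncertain risk of doing nothing: 300 + 100 < 500
        worth_it = safeguard_cost + residual_risk < risk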

    The bit that's missing is the business. Instead of asking "What's your threat model?" as the first question, it should be "What's your business model?" Security asks that last, and only partly, through questions like "what are the risks?"

    Calling security "risk management" is then a sort of nod to the point that security has a purpose within business; by focussing on some risks, the security modellers can preserve their existing model while tying it to the business. But it is still backwards; it still seeks to add risks at the end, and will still result in "security" being just the annoying monkey on the back.

    Instead, the first question should be "What's your business model?"

    This unfortunately opens Pandora's box, because that implies that we can understand a business model. Assuming it is the case that your CISO understands a business model, it does rather imply that the only security we should be pushing is that which is from within. From inside the business, that is. The job of the security people is not therefore to teach and build security models, but to improve the abilities of the business people to incorporate good security as they are doing their business.

    Which perhaps brings us full circle to the popular claim that the best security is that which is built in from the beginning.

    Posted by iang at 03:51 AM | Comments (4) | TrackBack

    December 07, 2008

    Security is a subset of Reliability

    From the "articles I wish I'd written" department, Chandler points to an article by Daniels Geer & Conway on all the ways security is really a subset of reliability. Of course!

    I think this is why the best engineers who've done great security things start from the top; from the customer, the product, the market. They know that in order to secure something, they had better know what the something is before even attempting to add a cryptosec layer over it.

    Which is to say, security cannot be a separate discipline. It can be a separate theory, a bit like statics is a theory from civil engineering, or triage is a part of medicine. You might study it in University, but you don't get a job in it; every practitioner needs some basic security. If you are a specialist in security, your job is more or less to teach it to practitioners. The alternate is to ask the practitioners to teach you about the product, which doesn't seem sensible.

    Posted by iang at 07:12 PM | Comments (1) | TrackBack

    Unwinding secrecy -- how to do it?

    The next question on unwinding secrecy is how to actually do it. It isn't as trivial as it sounds. Perhaps this is because the concept of "need-to-know" is so well embedded in the systems and managerial DNA that it takes a long time to root it out.

    At LISA I was asked how to do this; but I don't have much of an answer. Here's what I have observed:

    • Do a little at a time.
    • Pick a small area and start re-organising it. Choose an area where there is lots of frustration and lots of people to help. Open it up by doing something like a wiki, and work the information. It will take a lot of work and pushing by yourself, mostly because people won't know what you are doing or why (even if you tell them).
    • What is needed is a success. That is, a previously secret area is opened up, and as a result, good work gets done that was otherwise inhibited. People need to see the end-to-end journey in order to appreciate the message. (And, obviously, it should be clear at the end of it that you don't need the secrecy as much as you thought.)
    • Whenever some story comes out about a successful opening of secrecy, spread it around. The story probably isn't relevant to your organisation, but it gets people thinking about the concept. E.g., that which I posted recently was done to get people thinking. Another from Chandler.
    • Whenever there is a success on openness inside your organisation, help to make this a showcase (here are three). Take the story and spread it around; explain how the openness made it possible.
    • When some decision comes up about "and this must be kept secret," discuss it. Challenge it, make it prove itself. Remind people that we are an open organisation and there is benefit in treating all as open as possible.
    • Get a top-level decision that "we are open." Make it broad, make it serious, and incorporate the exceptions. "No, we really are open; all of our processes are open except when a specific exception is argued for, and that must be documented and open!" Once this is done, from top-level, you can remind people in any discussion. This might take years to get, so have a copy of a resolution in your back pocket for a moment when suddenly, the board is faced with it, and minded to pass a broad, sweeping decision.
    • Use phrases like "security-by-obscurity." Normally, I am not a fan of these as they are very often wrongly used; so-called security-by-obscurity often tans the behinds of supposed open standards models. But it is a useful catchphrase if it causes the listener to challenge the obscure security benefits of secrecy.
    • Create an opening protocol. Here's an idea I have seen: when someone comes across a secret document (generally after much discussion ...) that should not have been kept secret, let them engage in the Opening-Up Protocol without further ado. Instead of grumbling or asking, put the ball in their court. Flip it around, and make the default open:
      "I can't see why document X is secret, it seems wrong. Therefore, in 1 month, I intend to publish it. If there is any real reason, let me know before then."
      This protocol avoids the endless discussions as to why and whether; a toy model in code follows below.
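
    A toy model of that protocol, in Python; the names and the one-month window are illustrative only:

        from datetime import date, timedelta

        def opening_up(announced: date, objections: list, window_days: int = 30) -> str:
            deadline = announced + timedelta(days=window_days)
            if objections:
                return "stays secret: " + "; ".join(objections)  # a real reason arrived
            if date.today() < deadline:
                return "objection window open until " + deadline.isoformat()
            return "publish"   # nobody gave a real reason in time; the default is open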

    Well, that's what I have thought about so far. I am sure there is more.

    Posted by iang at 01:24 PM | Comments (0) | TrackBack

    November 20, 2008

    Unwinding secrecy -- busting the covert attack

    Have a read of this. Quick summary: Altimo thinks Telenor may be using espionage tactics to cause problems.

    Altimo alleges the interception of emails and tapping of telephone calls, surveillance of executives and shareholders, and payments to journalists to write damaging articles.

    So instead of getting its knickers in a knot (court case or whatever) Altimo simply writes to Telenor and suggests that this is going on, and asks for confirmation that they know nothing about it, do not endorse it, etc.

    Who ya bluffin?

    ...Andrei Kosogov, Altimo's chairman, wrote an open letter to Telenor's chairman, Harald Norvik, asking him to explain what Telenor's role has been and "what activity your agents have directed at Altimo". He said that he was "reluctant to believe" that Mr Norvik or his colleagues would have sanctioned any of the activities complained of.

    .... Mr Kosogov said he first wrote to Telenor in October asking if the company knew of the alleged campaign, but received no reply. In yesterday's letter to Mr Norvik, Mr Kosogov writes: "We would welcome your reassurance that Telenor's future dealings with Altimo will be conducted within a legal and ethical framework."

    Think about it: This open disclosure locks down Telenor completely. It draws a firm line in time, and also gives Telenor a face-saving way to back out of any "exuberance" it might have previously "endorsed." If Telenor does not take this chance to stop the activity, it will be negligent. If it is later found out that Telenor's board of directors knew, then it becomes a slam-dunk in court. And, if Telenor is indeed innocent of any action, it engages them in the fight to chase down the perpetrator. The bluff is called, as it were.

    This is good use of game theory. Note also that the Advisory Board of Altimo includes some high-powered people:

    Evidence of an alleged campaign was contained in documents sent to each member of Altimo's advisory board some time before October. The board is chaired by ex-GCHQ director Sir Francis Richards, and includes Lord Hurd, a former UK Foreign Secretary, and Sir Julian Horn-Smith, a founder of Vodafone.

    We could speculate that those players -- the spooks and mandarins -- know how powerful open disclosure is in locking down the options of nefarious players. A salutary lesson!

    Posted by iang at 06:25 PM | Comments (1) | TrackBack

    November 19, 2008

    Unwinding secrecy -- how far?

    One of the things that I've gradually come to believe in is that secrecy in anything is more likely to be a danger to you and yours than a help. The reasons for this are many, but include:

    • hard to get anything done
    • your attacker laughs!
    • ideal cover for laziness, a mess or incompetence

    There are no good reasons for secrecy, only less bad ones. If we accept that proposition, and start unwinding the secrecy so common in organisations today, there appear to be two questions: how far to open up, and how do we do it?

    How far to open up appears to be a personal-organisational issue, and perhaps the easiest thing to do is to look at some examples. I've seen three in recent days which I'd like to share.

    First the Intelligence agencies: in the USA, they are now winding back the concept of "need-to-know" and replacing it with "responsibility-to-share".


    Implementing Intellipedia Within a "Need to Know" Culture

    Sean Dennehy, Chief of Intellipedia Development, Directorate of Intelligence, U.S. Central Intelligence Agency

    Sean will share the technical and cultural changes underway at the CIA involving the adoption of wikis, blogs, and social bookmarking tools. In 2005, Dr. Calvin Andrus published The Wiki and The Blog: Toward a Complex Adaptive Intelligence Community. Three years later, a vibrant and rapidly growing community has transformed how the CIA aggregates, communicates, and organizes intelligence information. These tools are being used to improve information sharing across the U.S. intelligence community by moving information out of traditional channels.

    The way they are doing this is to run a community-wide suite of social network tools: blogs, wikis, youtube-copies, etc. Access is controlled at the session level by username/password/TLS, and at the person level by sponsoring. The latter means that even contractors can be sponsored in to access the tools, and all sorts of people in the field can contribute directly to the collection of information.

    The big problem with this switch is that intelligence information is controlled not only by "need to know" but also in horizontal layers. For the sake of this discussion, there are three: TOP SECRET / SECRET / UNCLASSIFIED-CONTROLLED. The intel community's solution is to run three separate networks in parallel, one for each layer, and to control access to each of these. So in effect, contractors might be easily sponsored into the lowest level, but less likely into the others.

    What happens in practice? The best coverage is found in the network that has the largest number of people, which of course is the lowest, the UNCLASSIFIED-CONTROLLED network. So, regardless of the intention, most of the good stuff is found in there, and where higher-layer stuff adds value, little pointers are embedded showing how to find it.

    In a nutshell, the result is that anyone who is "in" can see most everything, and modify everything. Anyone who is "out" cannot. Hence, a spectacular success if the mission was to share; it seems so obvious that one wonders why they didn't do it before.

    As it turns out, the second example is quite similar: Google. A couple of chaps from there explained to me around the dinner table that the process is basically this: everyone inside google can talk about any project to any other insider. But, one should not talk about projects to outsiders (presumably there are some exceptions). It seems that SEC (Securities and Exchange Commission in USA) provisions for a public corporation lead to some sensitivity, and rather than try and stop the internal discussion, google chose to make it very simple and draw a boundary at the obvious place.

    The third example is CAcert. In order to deal with various issues, the Board chose to go totally open last year. This means that all the decisions, all the strategies, all the processes should be published and discussable by all. Some things aren't out there yet, but they should be; if an exception is needed, it must be argued and put into policies.

    The curious thing is why CAcert did not choose to set a boundary at some point, like google and the intelligence agencies. Unlike google, there is no regulator to say "you must not reveal inside info of financial import." Unlike the CIA, CAcert is not engaging in a war with an enemy where the bad guys might be tipped off to some secret mission.

    However, CAcert does have other problems, and it has one that tips the balance in favour of total disclosure: the presence of valuable and tempting privacy assets. These seem to attract a steady stream of interested parties, and some of these parties are after private gain. I have now counted 4 attempts to do this in my time related to CAcert, and although each had its interesting differences, each in its own way sought to employ CAcert's natural secrecy to its own advantage. From a commercial perspective, this was fairly obvious, as the interested parties sought to keep their negotiations confidential; this allowed them to pursue the sales process and sell the insiders without wiser heads putting a stop to it. To the extent that there are incentives for various agencies to insert different agendas into the inner core, the CA needs a way to manage that process.

    How to defend against that? Well, one way is to let the enemy of your enemy know who we are talking to. Let's take a benign example which happened (sort of): a USB security stick manufacturer might want to ship extra stuff like CAcert's roots on the stick. Does he want the negotiations to be private because other competitors might deal for equal access, or does he want them private because wiser heads will figure out that he is really after CAcert's customer list? CAcert might care more about one than the other, but both are threats to someone. As the managers aren't smart enough to see every angle, every time, they need help. One defence is many eyeballs, and this is something that CAcert does have available to it. Perhaps if sufficient info about an emerging deal is published, the rest of the community can figure it out. Perhaps, if the enemy's enemy notices what is going on, he can explain the tactic.

    A more poignant example might be someone seeking to pervert the systems and get some false certificates issued. In order to deal with those, CAcert's evolving Security Manual says all conflicts of interest have to be declared broadly and in advance, so that we can all mull over them and watch for how these might be a problem. This serves up a dilemma to the secret attacker: either keep private and lie, and risk exposure later on, or tell all upfront and lose the element of surprise.

    This method, if adopted, would involve sacrifices. It means that any agency that is looking to impact the systems is encouraged to open up, and this really puts the finger on them: are they trying to help us or themselves? Also, it means that all people in critical roles might have to sacrifice their privacy. This latter sacrifice, if made, is to preserve the privacy of others, and it is the greater for it.

    Posted by iang at 05:16 PM | Comments (0) | TrackBack

    October 19, 2008

    What happened in security over the last 10 years?

    I keep having the same discussion in various places, and keep coming back to this eloquent description of where we are:

    Gunnar's full blog post is here ... it includes some other things, but nothing quite so poignant.

    Web Security hasn't moved since 1995. Oh well.

    Posted by iang at 09:19 PM | Comments (3) | TrackBack

    October 06, 2008

    Browser Security UI: the horns of the dilemma

    One of the dilemmas that the browser security UI people have is that they have to deal with two different groups at the same time. One is the people who can work with the browser, and the other is those who blindly click when told to. The security system known as secure browsing seems to be designed for both groups at the same time, thus leading to bad results. For example, Dan Kaminsky counted another scalp when he found, back in April, that ISPs are doing MITMs on their customers:

    The rub comes when a user is asking for a nonexistent subdomain of a real website, such as http://webmale.google.com, where the subdomain webmale doesn't exist (unlike, say, mail in mail.google.com). In this case, the Earthlink/Barefruit ads appear in the browser, while the title bar suggests that it's the official Google site.

    As a result, all those subdomains are only as secure as Barefruit's servers, which turned out to be not very secure at all. Barefruit neglected basic web programming techniques, making its servers vulnerable to a malicious JavaScript attack. That meant hackers could have crafted special links to unused subdomains of legitimate websites that, when visited, would serve any content the attacker wanted.

    The hacker could, for example, send spam e-mails to Earthlink subscribers with a link to a webpage on money.paypal.com. Visiting that link would take the victim to the hacker's site, and it would look as though they were on a real PayPal page.

    That's a subtle attack, one which the techies can understand but the ordinary users cannot. Here's a simpler one (hat-tip to Duane), a straight phish:

    Dear Wilmington Trust Banking Member,

    Due to the high number of fraud attempts and phishing scams, it has been decided to implement EV SSL Certification on this Internet Banking website.

    The use of EV SSL certification works with high security Web browsers to clearly identify whether the site belongs to the company or is another site imitating that company’s site.

    It has been introduced to protect our clients against phishing and other online fraudulent activities. Since most Internet related crimes rely on false identity, WTDirect went through a rigorous validation process that meets the Extended Validation guidelines.

    Please Update your account to the new EV SSL certification by Clicking here.

    Please enter your User ID and Password and then click Go.

    (Failure to verify account details correctly will lead to account suspension)

    This is a phish email seen in the wild. We here -- the techies -- all know what's wrong with this attack, but can you explain it to your grandma? What is being attacked here is the brand of EV rather than the technology. In effect, the more ads that relate EV to security in a simplistic fashion, the better this attack works.

    To counter this, the banks have to promote better practices amongst their clients, and they have to bring the user into the protocol. But they are not being helped, it seems.

    On the one hand, a commonly held belief among security developers is that users cannot be bothered with security and ignore any attempts, so it is best not to bother them with it. Much as I like the mantra of "there is only one mode, and it is secure," it isn't going to work in the web case unless we do the impossible: drop HTTP and make HTTPS mandatory, solve the browser universality issue (note chrome's efforts), and unify the security models end-to-end. As the group of Internet people who can follow that is vanishingly small, this is a non-starter.

    On the other not-so-helpful hand, the pushers of certificates often find themselves caught between the horns of pushing more product (at a higher price) and providing the basic needs of the customers. I recently came across two cases at both extremes of the market: at the unpaid end of the market, the ability of a company to conveniently populate 50,000 desktops is, it is claimed, more important than meeting expectations about verification of contents. The problem with this approach is that if people see strong expectations on one side, and casual promises on another equivalent side, the entire brand suffers.

    And, at the expensive EV extreme of the market, the desire to sell a product can sometimes lead to exaggerated claims, as in a recent advertisement in the German paper press (hat-tip to Philipp), which makes perhaps over-broad or over-high claims. Translated from German to English:

    The latest and best in terms of online security. And also the greens.

    Visible security for your site from a company where your customers trust.

    It is quite simple: a green address bar means that your site is secure.

    These claims are easy to make, and they may help sell certs. But they also help the above phishing attack, as they clearly make simple connections between security and EV.

    "EV means security, gotta get me some of that! Click!" FTR, check the translator.

    What would be somewhat more productive than sweeping marketing claims is to decide what it is that the green thing really means.

    I have heard one easy answer: EV means whatever the CA/Browser Forum Guidelines says it means. OK, but not so fast: I do not recall that it said anywhere that your site has to be secure! I admit to not having read it for a while, but the above advert seems entirely at odds with a certificate claim.

    Further, because that document is long and involved, something clearer and shorter would be useful if there are any pretensions of saying anything to customers.

    If you don't want to say anything to customers, then the Guidelines are fine. But then, what is the point of certs?

    The above is just about EV, but that's just a foil for the wider problem: no CA makes this easy, as yet. I believe it to be an important task for certificate authorities to come up with a simple claim that they can make to users. Something clear, something a user can take to court.

    Indeed, it would be good for all CAs and all browsers to have their claims clearly presented to users, as the users are being told that there is a benefit. To them. Frequently, for years and years now. And now they are at risk, both of losing that benefit and because of that benefit.

    What is that benefit? Can you show it? Can you state it? Can you present it to your grandma? And, will you stand behind her, in court, and explain it to the judge?

    Posted by iang at 05:27 AM | Comments (2) | TrackBack

    September 20, 2008

    Builders v. Breakers

    Gunnar lauds a post on why there are few architects in the security world:

    Superb post by Mark on what I think is the biggest problem we have in security. One thing you learn in consulting is that no matter what anyone tells you when you start a project about what problem you are trying to solve, it is always a people problem. The single biggest problem in security is too many breakers not enough builders. Please understand I am not saying that breakers are not useful, we need them, and we need them to continue to get better so we can build more resilient systems. But the industry is about 90% breaking and 10% building and that's plain bad.

    It’s still predominantly made up of an army of skilled hackers focused on better ways to break systems apart and find new ways to exploit vulnerabilities than “security architects” who are designing secure components, protocols and ultimately secure systems.

    Hear hear! And why is this? One easy answer: breaking something is a solid metric. It's either broken or not, in general. Any journo can understand it.

    On the other hand, building gives too diffuse a signal. There is no easy number, there is no binary result. It takes a business focus over decades to understand that one architecture delivers more profits for users and corporates alike than another, and by then the architects have moved on, so even then the result may not be clear.

    Let's take an old example. Back around 1996, a couple of bored uni students cracked Netscape's secure browsing. The first time was by crunching the 40 bit crypto using the idle lab computers, and the second time was by predicting the less-than-random numbers injected into the protocol. These students were lauded in the press for having done something grand.
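
    The second flaw is easy to re-create. A toy sketch in Python, not Netscape's actual code: seed a deterministic generator from guessable quantities, and the search space collapses to something a lab of idle computers barely notices.

        import random, time, os

        def session_key(seed: int) -> int:
            rng = random.Random(seed)      # fully determined by the seed
            return rng.getrandbits(128)

        # the victim seeds from guessable values: time of day and process id
        now = int(time.time())
        victim_key = session_key(now * 100_000 + os.getpid() % 32_768)

        # the attacker enumerates the plausible seed space: a few seconds of
        # clock skew times 32k pids is about 200,000 guesses, not 2^128
        for t in range(now - 5, now + 1):
            for pid in range(32_768):
                if session_key(t * 100_000 + pid) == victim_key:
                    print("seed recovered:", t, pid)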

    They then went on to much harder tasks, and were almost never heard of again. What were those tasks? Building secure and usable systems. One of them tried to build a secure communications platform; another is still trying to build a secure computing platform. So far, neither has succeeded, but these are much harder problems.

    The true heroes back in the mid-1990s were the Netscape engineers who got something going and delivered it to the public, not the kids who scratched the paint off it. The breaches mentioned above were jokes, and bad ones at that, because they distracted attention from what was really being built. Case in point: even today, if we had twice as much 40 bit crypto as we do 128 bit crypto, we'd probably be twice as secure, because the attack of relevance simply isn't the bored uni student, it is the patient phisher.

    If you recall names in this story, recall them for what they tried to build, and not for what they broke.

    Posted by iang at 05:59 AM | Comments (8) | TrackBack

    September 18, 2008

    Macs for security (now, with new improved NSA hardening tips!)

    Frequent browsers will recall that the top tip number 1 for every user out there is to buy a Mac. That's for several reasons:

    • the security engineering is solid, based on a long history of around 15 years of security programming tradition in Unix
    • Apple have also maintained a security tradition, from well before OSX
    • it remains a "smaller market share" so benefits from the monoculture bounty

    Now there is another reason: hardening tips from the NSA (or here with disclaimers).

    Well, this isn't exactly a reason, but more a bonus (likely there are hardening tips for other popular operating systems as well). However, it is a useful resource for those people who really want more than a standard user install, without the compromises!

    (Note, many of the hardening tips are beyond normal users, so seek experienced advice before following them slavishly.)

    Posted by iang at 12:11 PM | Comments (2) | TrackBack

    September 03, 2008

    Yet more evidence: your CISO needs an MBA

    I have in the past presented the strawman that your CISO needs an MBA. Nobody has yet succeeded in knocking it down, and it is proving surprisingly resilient. Yet more evidence comes from Bruce Schneier's blog post of yesterday:

    Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

    It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

    It's a good idea in theory, but it's mostly bunk in practice.

    Bunk is wrong. Let's drill down. It works this way: NPV (net present value) and ROI (its lesser cousin) are mathematical tools for choosing between alternate projects. Keep the notion of comparison tightly in your mind.

    The tools measure the money going in versus the money going out in a neutral way. They are entirely neutral between projects because NPV is just mathematics, and the same mathematics is used for each project. (See the top part of Richard's post.)

    Obviously, any result from the model depends totally on the inputs, so a great deal of care and theory is needed to supply proper inputs. And it is here that security projects have trouble, in that we don't have a good view as to how to predict attack costs. To be clear, there is no controversy about the inputs being a big problem.

    But, assuming we have the theory, the process and the inputs, we can, again in principle, measure fairly across all projects.

    That's how it works. As you can see above, we do not make a distinction between investment, savings, costs, returns or profits. Why not? Because the NPV model and the numbers don't, either.
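
    A minimal sketch of that point in Python: the model is the same arithmetic whether the cash flows are revenues or avoided losses (figures invented):

        def npv(rate, cashflows):
            # cashflows[0] is the up-front amount (usually negative)
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        factory  = npv(0.10, [-1000, 400, 400, 400])  # investment producing returns
        firewall = npv(0.10, [-1000, 400, 400, 400])  # expense producing savings

        # identical numbers, identical NPV: the model cannot tell them apart
        assert factory == firewall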

    What then goes wrong with security people when they say ROI doesn't apply to security?

    Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.

    Or, or here:

    The bottom line is that security saves money; it does not create money.

    It seems that they seize on the words investment and returns, etc., and realise that the words differ from costs and savings. In conceptual or balance-sheet terms, they do differ, but here's the catch: to the models of NPV and ROI, it's all the same. In this sense, we could say that the title of ROI is a misnomer, or that there are several meanings to the word "investment" and you've seized on the wrong one.

    If you are good at maths, consider it simply as a model that deals equally well with negative numbers as with positive ones. To the model, savings are just negatives of returns.

    Now, if your security director had an MBA, she would know that the purpose of NPV is to compare projects, and not anything else, like generating returns. She would also know that the model is neutral, and that the ability to handle negative numbers means that expenses and savings can be compared as well. She would further know that the problems occur in the inputs and assumptions, not in the model.

    Finally, she would know how to speak in the language of finance, which is the language that the finance people use. This might sound obvious, but it isn't so clear. As a generalisation, it is this last point that is probably most significant about the MBA concept: it teaches you the language of all the other specialities. It doesn't necessarily make you a whizz at finance, or human resources, or marketing. But it at least lets you talk to them in their language. And, it reminds you that the other professions do have some credibility, so if they say something, listen first before teaching them how to suck eggs.

    Posted by iang at 10:09 AM | Comments (2) | TrackBack

    August 25, 2008

    Should a security professional have a legal background?

    Graeme points to this entry that posits that security people need a legal background:

    My own experience and talking to colleagues has prompted me to wonder whether the day has arrived that security professionals will need a legal background. The information security management professional is under increasing pressure to cope with the demands of the organization for access to information, to manage the expectations of the data owner on how and where the information is going to be processed and to adhere to regulatory and legal requirements for the data protection and archiving. In 2008, a number of rogue trader and tax evasion cases in the financial sector have heightened this pressure to manage data.

    The short, sharp answer is no, but it is a little more nuanced than that. First, let's take the rogue trader issue, being someone who has breached the separation of roles within a trading company, and used it to bad effect. To spot and understand this requires two things: an understanding of how settlement works, and the principle of dual control. It does not require the law, at all. Indeed, the legal position of someone who has breached the separation, and has "followed instructions to make a lot of money" is a very difficult subject. Suffice to say, studying the law here will not help.

    Secondly, asking security people to study law so as to deal with tax evasion is equally fruitless, but for different reasons: it is simply too hard to understand; it is less law than an everlasting pitched battle between opposing camps.

    Another way of looking at this is to look at the FC7 thesis, which says that, in order to be an architect in financial cryptography, you need to be comfortable with cryptography, software engineering, rights, accounting, governance, value and finance. The point is not whether law is in there or not, but that there are an awful lot of important things that architects or security directors need before they need law.

    Still, an understanding of the law is no bad thing. I've found several circumstances where it has been very useful to me and people I know:

    • Contract law underpins the Ricardian contract.
    • Dispute resolution underpins the arbitration systems used in sensitive communities (such as WebMoney and CAcert).
    • The ICANN dispute system is best run by someone experienced, who realises that touching domain registries can do grave harm. In the alternate, a jurist looking at the system will not come to that conclusion at all.

    In these cases, knowledge of the law helps a lot. Another area which is becoming more and more of an issue is that of electronic evidence. As most evidence is now entering the digital domain (80%, by one recent unreferenced claim), there is much to understand here, and much that one can do to save one's company. The problem with this, as lamented at the recent conference, is that any formal course of law includes nothing on electronic evidence. For that, you have to turn to books like those by Stephen Mason on Electronic Evidence. But that you can do yourself.

    Posted by iang at 03:38 PM | Comments (3) | TrackBack

    July 11, 2008

    wheretofore Vista? Microsoft moves to deal with the end of the Windows franchise

    Since the famous Bill Gates Memo, around the same time as phishing and related frauds went institutional, Microsoft has switched around to deal with the devil within: security. In so doing, it has done what others should have done, and done it well. However, there was always going to be a problem with turning the super-tanker called Windows into a battleship.

    I predicted a while back that (a) Vista would probably fail to make a difference, and (b) the next step was to start thinking of a new operating system. This wasn't the normal pique, but the cold-hearted analysis of the size of the task. If you work for 20 years making your OS easy but insecure, you don't have much chance of fixing that, even with the resources of Microsoft.

    The Economist brings an update on both points. Firstly, on Vista's record after 18 months in the market:

    To date, some 140m copies of Vista have been shipped compared with the 750m or more copies of XP in daily use. But the bulk of the Vista sales have been OEM copies that came pre-installed on computers when they were bought. Anyone wanting a PC without Vista had to order it specially.

    Meanwhile, few corporate customers have bought upgrade licences they would need to convert their existing PCs to Vista. Overwhelmingly, Windows users have stuck with XP.

    Even Microsoft now seems to accept that Vista is never going to be a blockbuster like XP, and is hurrying out a slimmed-down tweak of Vista known internally as Windows 7. This Vista lite is now expected late next year instead of 2010 or 2011.

    It's not as though Vista is a dud. Compared with XP, its kernel—the core component that handles all the communication between the memory, processor and input and output devices—is far better protected from malware and misuse. And, in principle, Vista has better tools for networking. All told, its design is a definite improvement—albeit an incremental one—over XP.

    Microsoft tried and failed to turn it around, security+market-wise. We might now be looking at the end of the franchise known as Windows. To be clear, while we are past the peak, any ending is a long way off in the distant future.

    Classical strategy thinking says that there are two possible paths here: invest in a new franchise, or go "cash-cow". The latter means that you squeeze the revenues from the old franchise as long as possible, and delay the termination of the franchise as long as possible. The longer you delay the end, the more revenues you get. The reason for doing this is simple: there is no investment strategy that makes money, so you should return the money to the shareholders. There is a simple example here: the music majors are decidedly in cash-cow, today, because they have no better strategy than delaying their death by a thousand file-shares.

    Certainly, with Bill Gates easing out, it would be possible to go cash-cow, but of course, we on the outside can only cast our auguries and wonder at the signs. The Economist suggests that they may have taken the investment route:

    Judging from recent rumours, that's what it is preparing to do. Even though it won't be in Windows 7, Microsoft is happy to talk about “MinWin”—a slimmed down version of the Windows core. It’s even willing to discuss its “Singularity” project—a microkernel-based operating system written strictly for research purposes. But ask about a project code-named “Midori” and everyone clams up.

    By all accounts, Midori (Japanese for “green” and, by inference, “go”) capitalises on research done for Singularity. The interesting thing about this hush-hush operating system is that it’s not a research project in the normal sense. It's been moved out of the lab and into incubation, and is being managed by some of the most experienced software gurus in the company.

    With only 18 months before Vista is to be replaced, there's no way Midori—which promises nothing less than a total rethink of the whole Windows metaphor—could be ready in time to take its place. But four or five years down the road, Microsoft might just confound its critics and pleasantly surprise the rest of us.

    Comment? Even though I predicted Microsoft would go for a new OS, I think this is a tall order. There are two installed bases in the world today, being Unix and Windows. It's been that way for a long time, and efforts to change those two bases have generally failed. Even Apple gave up and went Unix. (The same economics works against the repeated attempts to upgrade the CPU instruction set.)

    The flip-side of this is that the two bases are incredibly old and out-of-date. Unix's security model is "ok" but decidedly pre-PC; much of what it does is simply irrelevant to the modern world. For example, all the user-to-user protection is pointless in a one-user-one-PC environment, and the major protection barrier has accidentally become a hack known as TCP/IP, legendary for its inelegant grafting onto Unix. Windows has its own issues.

    So we know two things: a redesign is decades overdue, and it won't budge the incumbents; both are likely to live another decade without appreciable change to the markets. We would need a miracle, or better, a killer-app to budge the installed base.

    Hence, on the cold-hearted analysis, cash-cow wins out.

    But wait! The warm-blooded humanists won't let that happen for one and only one reason: it is simply too boring to contemplate. Microsoft has so many honest, caring, devoted techies within that if a decision were made to go cash-cow, there would be a mass-defection. So the question then arises, what sort of a hybrid will be acceptable to shareholders and workers? Taking a leaf from recent politics, which is going through a peak-energy-masquerade of its own these days, some form of "green platform" has appeal to both sides of the voting electorate.

    Posted by iang at 09:26 AM | Comments (2) | TrackBack

    July 10, 2008

    DNS rebinding attack/patch: the germination of professional security cooperation?

    Lots of chatter is seen in the security places about a patch to DNS coming out. It might be related to Dan's earlier talks, but he also makes a claim that there is something special in this fix. The basic idea is that DNS replies are now on randomised ports (or some such) and this will stop spoofing attempts of some form. You should patch your DNS.
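
    Back-of-envelope arithmetic on why the port randomisation helps, with illustrative numbers: a blind spoofer must now match the source port as well as the 16-bit query ID, so the guess space grows by the port entropy.

        txid_space = 2 ** 16    # the 16-bit DNS transaction ID
        port_space = 2 ** 14    # roughly 16k usable ephemeral ports, give or take

        print("ID only:    1 in", txid_space)               # 1 in 65,536
        print("ID + port:  1 in", txid_space * port_space)  # 1 in ~10^9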

    Many are skeptical, and this gives us an exemplary case study of today's "security" industry:

    Ptacek: If the fix is “randomize your source ports”, we already knew you were vulnerable. Look, DNS has a 16 bit session ID… how big is an ASPSESSIONID or JSESSIONID? When you get to this point you are way past deck chairs on the titanic, but, I mean, the web people already know this. This is why TLS/SSL totally doesn’t care about the DNS. It is secure regardless of the fact that the DNS is owned.

    Paraphrased: "Oh, we knew about that, so what?" As above, much of the chatter in other groups is about how this apparently fixes something that is long known, therefore insert long list of excuses, hand-wringing, slap-downs, and is not important. However, some of the comments are starting to hint at professionalism. Nathan McFeters writes:

    I asked Dan what he thought about Thomas Ptacek’s (Thomas Ptacek of Matasano) comments suggesting that the flaw was blown out of proportion and Dan said that the flaw is very real and very serious and that the details will be out at Black Hat. Dan mentioned to me that he was very pleased with how everything has worked with the multi-vendor disclosure process, as he said, “we got several vendors together and it actually worked”. To be honest, this type of collaboration is long overdue, and there’s a lot of folks in the industry asking for it, and I’m not just talking about the tech companies cooperating, several banking and financial companies have discussed forums for knowledge sharing, and of course eBay has tried to pioneer this with their “eBay Red Team” event. It’s refreshing to hear a well-respected researcher like Dan feeling very positive about an experience with multiple vendors working together (my own experience has been a lot of finger pointing and monkey business).

    Getting vendors to work together is quite an achievement. Getting them to work on security at the same time, instead of selling another silver bullet, is extraordinary, and Dan should write a book on that little trick:

    Toward addressing the flaw, Kaminsky said the researchers decided to conduct a synchronized, multivendor release and as part of that, Microsoft in its July Patch Tuesday released MS08-037. Cisco, Sun, and Bind are also expected to roll out patches later on Tuesday.

    As part of the coordinated release, Art Manion of CERT said vendors with DNS servers have been contacted, and there’s a longer list of additional vendors that have DNS clients. That list includes AT&T, Akamai, Juniper Networks, Inc., Netgear, Nortel, and ZyXEL. Not all of the DNS client vendors have announced patches or updates. Manion also confirmed that other nations with CERTs have also been informed of this vulnerability.

    Still, for the most part, the industry remains fully focussed on the enemy within, as exemplified by Ptacek's comment above. I remain convinced that the average "expert" wouldn't recognise a security fix until he's been firmly whacked over the head by it. Perhaps that is what Ptacek was thinking when he allegedly said:

    If the IETF would just find a way to embrace TLS/X509 instead of griping about how Verisign is out to get us we wouldn’t have this problem. Instead, DNSSEC tried to reinvent TLS by committee… well, surprise surprise, in 2008, we still care about 16 bit session IDs! Go Internet!

    Now, I admit to being a long-time supporter of TLS'ing everything (remember, there is only one mode, and it is secure!) but ... just ... Wow! I think this is what psychologists call the battered-wife syndrome; once we've been beaten black and blue with x.509 for long enough, maybe we start thinking that the way to quieten our oppressor down is to let him beat us some more. Yeah, honey, slap me with some more of that x.509 certificate love! Harder, honey, harder, you know I deserve it!

    Back to reality, and to underscore that there is something non-obvious about this DNS attack that remains unspoken (have you patched yet?), the above-mentioned commentator switched around 540 degrees and said:

    Patch Your (non-DJBDNS) Server Now. Dan Was Right. I Was Wrong.

    Thanks to Rich Mogull, Dino and I just got off the phone with Dan Kaminsky. We know what he’s going to say at Black Hat.

    What can we say right now?

    1. Dan’s got the goods. ...

    Redeemed! And, to be absolutely clear as to why this blog lays in with slap after slap: being able to admit a mistake should be the first criterion for any security guy. This puts Thomas way ahead of the rest of them.

    Can't say it more clearly than that: have you patched your DNS server yet?

    Posted by iang at 09:30 AM | Comments (4) | TrackBack

    July 06, 2008

    German court finds Bank responsible for malwared PC

    Spiegel reports that a German lower court ("Amtsgerichts Wiesloch (Az4C57/08)") has found a bank responsible for malware-driven transactions on a user's PC. In this case, her PC was infected with some form of malware that grabbed the password and presumably a fresh TAN (German one-time number to authenticate a single transaction) and scarfed 4000 euros through an eBay wash.

    Unfortunately the report is only in German, so the following analysis is highly limited and unreliable. It appears that the court's logic was that as the transaction was not authenticated by the user, it is the bank's problem.

    This seems fairly simple, except that Microsoft Windows-based PCs are difficult to keep clean of malware. In this case, the user had a basic anti-virus program, but that's not enough these days (see top tips on what helps).

    We in the security field all knew that, and customers are also increasingly becoming aware of it, but the question the banking and security world is asking itself is whether, why and when the bank is responsible for the user's insecure PC? Shouldn't the user take some of the risk for using an insecure platform?

    The answer is no. The risk belongs totally to the bank in this case, in the opinion of the Wiesloch court, and the court of financial cryptography agrees. Consider the old legal principle of putting the responsibility with the party best able to manage it. In this case, the user cannot manage it, manifestly. Further, the security industry knew that the Windows PC was not secure enough for risky transactions, and that Microsoft software was the dominant platform. The banking industry has had this advice in tanker-loads (c.f. EU "Finread"), and in many cases banks have even heeded the advice, only to discard it later. The banking industry decided to go ahead in the face of this advice and deploy online banking for support cost motives. The banks took on this risk, knowing the risk, and knowing that the customer could not manage this risk.

    Therefore, the liability falls completely to the bank in the basic circumstances described. In this blog's opinion! (It might have been different if the user had done something wrong, such as participating in a mule wash, or had carried on in the face of clear evidence of infection.)

    There is some suggestion that the judgment might become a precedent, or not. We shall have to wait and see, but one thing is clear: online banking has a rocky road ahead of it, as the phishing rooster comes home to roost. For a contrary example, another case in Cologne (Az: 9S195/07) mentioned in the article put the responsibility for EC-card abuse with the customer. As we know, smart cards can't speak to the user about what they are doing, so again we have to ask what the Windows PC was saying about the smart card's activities. If the courts hold the line that the user is responsible for her EC-card, then this can only cause the user to mistrust her EC-card, potentially leading to yet another failure of an expensive digital signing system.

    The costs for online banking are going to rise. A part of any solution, as frequently described by security experts, is to not trust widely deployed Microsoft Windows PCs for online banking, which in effect means PCs in general. A form of protection is fielded in some banks whereby the user's mobile phone is used to authenticate the real transaction over another channel. This is mostly cheap and mostly effective, but it isn't a comprehensive or permanent solution.
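
    A sketch of that out-of-band idea in Python; the names and message format are illustrative, not any bank's actual system. The point is that the code is bound to the payee and amount shown on the phone, so PC malware cannot silently redirect the payment:

        import hmac, hashlib, secrets

        def make_challenge(account_key: bytes, payee: str, amount: str):
            nonce = secrets.token_hex(4)
            msg = f"{payee}|{amount}|{nonce}".encode()
            code = hmac.new(account_key, msg, hashlib.sha256).hexdigest()[:6]
            sms = f"Pay {amount} to {payee}? Code: {code}"   # sent to the phone
            return sms, code

        def confirm(expected: str, typed: str) -> bool:
            # the bank executes the transaction only on a matching code
            return hmac.compare_digest(expected, typed)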

    Posted by iang at 08:28 AM | Comments (3) | TrackBack

    June 22, 2008

    H4.2 -- Usability Determines the Number of Users

    Last week's discussion (here and here) over how there is only one mode, and it is secure, brought forth the delicious contrast with browsing and security: yes, you can do that but it doesn't work well. No, I'm not talking about the logos being cross-sited, but all of the 100 little flaws that you find when you try and do a website for secure purposes.

    So why bother? Financial cryptography eats its own medicine, but it doesn't do it for breakfast, lunch and dessert. Which reminds me to introduce another of the sub-hypotheses for critique:

    #4.2 Usability Determines the Number of Users

    Ease of use is the most important determinant of the number of users. Ease of implementation is important, ease of incorporation is also important, and even more important is ease of use by end-users. This reflects a natural subdivision into several classes of users: implementors, integrators and end-users, each class of which can halt the use of the protocol if they find it ... unusable. As they are laid out serially between you and the marketplace, you have to consider usability for all of them.

    The protocol should be designed to be easy to code up, so as to help implementors help integrators help users. It should be designed to be easy to interface to, so as to help integrators help users. It should be designed to be easy to configure, so as to help users get security.

    If there are any complex or tricky features, ask yourself whether the benefit is really worth the cost of the coders' time. It is not that developers cannot do it, it is simply that they will not do it; nobody has all the time in the world, and a protocol that takes twice as long to implement is twice as likely to not get done.

    Same for integrators of systems. If the complexity of the protocol and the implementation causes X amount of work, and another protocol costs only X/2, then there is a big temptation to switch. Regardless of absolute or theoretical security.

    Same for users.

    Posted by iang at 08:13 AM | Comments (3) | TrackBack

    June 06, 2008

    The Dutch show us how to make money: Peace and Cash Foundation

    Sometime around 3 years back, banks started to respond to phishing efforts by putting in checks and controls to stop people sending money. This led to the emergence of a new business model that promised great returns on investment by arbitraging the controls, and has led to the enrichment of many! Now, my friends in NL have alerted me to the NVB's efforts to sell the Dutch on the possibilities.

    Welcome to the Peace and Cash Foundation!

    A fun few minutes, and even though it is in Dutch, the symbology should be understood. Does anyone have an English translation?

    (Okay, you might not see the Peace and Cash Foundation by the time you click on the site, but another generation will be there to help you to well-deserved riches...)

    Posted by iang at 01:00 PM | Comments (2) | TrackBack

    May 14, 2008

    Case study in risk management: Debian's patch to OpenSSL

    Ben Laurie blogs that downstream vendors like Debian shouldn't interfere with sensitive code like OpenSSL. Because they might get it wrong... And, in this case they did, and now all Debian + Ubuntu distros have 2-3 years of compromised keys. 1, 2.

    Further analysis shows, however, that the failings are multiple, at several levels, and shared all around. As we identified in 'silver bullets', fingerpointing is part of the problem, not the solution, so let's work the problem, as professionals, and avoid the blame game.

    First, the tech problem. OpenSSL has a trick in it that mixes uninitialised memory in with the randomness generated by the OS's formal generator. The standard idea here is that it is good practice to mix different sources of randomness into your own source. Roughly following the designs from Schneier and Ferguson (Yarrow and Fortuna), modern operating systems take several random things like disk drive activity and net activity, mix the measurements into one pool, then run it through a fast hash to filter it.

    What is good practice for the OS is good practice for the application. The reason for this is that in the application, we do not know what the lower layers are doing, and especially we don't really know if they are failing or not. This is OK if it is an unimportant or self-checking thing like reading a directory entry -- it either works or it doesn't -- but it is bad for security programming. Especially, it is bad for those parts where we cannot easily test the result. And, randomness is that special sort of crypto that is very very difficult to test for, because by definition, any number can be random. Hence, in high-security programming, we don't trust the randomness of lower layers, and we mix our own [1].
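
    For illustration only -- this is emphatically not OpenSSL's code -- the mix-your-own pattern might look something like this in Python: several independent sources hashed into one pool, so the output is no weaker than the strongest of them:

        import hashlib, os, time

        def mixed_random(n: int) -> bytes:
            """Mix independent sources through a hash; the result is at
            least as strong as the strongest single source."""
            pool = hashlib.sha256()
            pool.update(os.urandom(32))                     # the OS's formal generator
            pool.update(time.time_ns().to_bytes(8, "big"))  # a little timing jitter
            pool.update(str(os.getpid()).encode())          # process-unique, not secret
            out, counter = b"", 0
            while len(out) < n:                             # stretch the pool to n bytes
                out += hashlib.sha256(pool.digest() + counter.to_bytes(4, "big")).digest()
                counter += 1
            return out[:n]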

    OpenSSL does this, which is good, but may be doing it poorly. What it (apparently) does is to mix in uninitialised buffers with the OS-supplied randoms, and little else (those who can read the code might confirm this). This is worth something, because there might be some garbage in the uninitialised buffers. The cryptoplumbing trick is to know whether it is worth the effort, and the answer is no: Often, uninitialised buffers are set to zero by lower layers (the compiler, the OS, the hardware), and often, they come from other deterministic places. So for the most part, there is no reasonable likelihood that it will be usefully random, and hence, it takes us too close to von Neumann's famous state of sin.

    Secondly, to take this next-to-zero source and mix it into the good OS source in a complex, undocumented fashion is not a good idea. Complexity is the enemy of security. It is for this reason that people study designs like Yarrow and Fortuna and implement general-purpose PRNGs very carefully, in a good solid fashion, in clear APIs and files, with "danger, danger" written all over them. We want people to be suspicious, because the very idea is suspicious.

    Next. Cryptoplumbing of necessity involves lots of errors and fixes and patches. So, bug reporting channels are very important, and apparently this one was used. The Debian team "found" the "bug" with an analysis tool called Valgrind. It was duly reported up to OpenSSL, but the handover was muffed. Let's skip the fingerpointing here; the reason it was muffed was that it wasn't obvious what was going on. And the reason it wasn't obvious, it appears, is that the code was too clever for its own good. It tripped up Valgrind (and Purify), it tripped up the Debian programmers, and the fix did not alert the OpenSSL programmers. Complexity is our enemy, always, in security code.

    So what in summary would you do as a risk manager?

    1. Listen for signs of complexity and cuteness. Clobber them when you see them.
    2. Know which areas of crypto are very hard, and which are "simple". Basically, public keys and random generation are both devilish areas. Block ciphers and hashes are not because they can be properly tested. Tune your alarm bells for sensitivity to the hard areas.
    3. If your team has to do something tricky (and we know that randomness is very tricky) then encourage them to do it clearly and openly. Firewall it into its own area: create a special interface, put it in a separate file, and paint "danger, danger" all over it. KISS, which means Keep the Interface Stupidly Simple.
    4. If you are dealing in high-security areas, remember that only application security is good enough. Relying on other layers to secure you is only good for medium-level security, as you are susceptible to divide and conquer.
    5. Do not distribute your own fixes to someone else's distro. Do an application work-around, and notify upstream. (Or, consider 4. above)
    6. These problems will always occur. Tech breaks, get used to it. Hence, a good high-security design always considers what happens when each component fails in its promise. Defence in depth. Systems that fail catastrophically with the failure of one component aren't good systems.
    7. Teach your team to work on the problem, not the people. Discourage fingerpointing; shifting the blame is part of the problem, not the solution. Everyone involved is likely smart, so the issues are likely complex, not superficial (if it was that easy, we would have done it).
    8. Do not believe in your own superiority. Do not believe in the superiority of others. Worry about people on your team who believe in their own superiority. Get help and work it together. Take your best shot, and then ...
    9. If you've made a mistake, own it. That helps others to concentrate on the problem. A lot. In fact, it even helps to falsely own other people's problems, because the important thing is the result.

    [1] A practical example is what Zooko and I did in SDP1 with the initialising vector (IV). We needed different inputs (not random), so we added a counter, the time *and* some random from the OS. This is because we were thinking like developers, and we knew that it was possible for all three to fail in multiple ways. In essence we gambled that at least one of them would work, and if all three failed together, then the user deserved to hang.
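
    A sketch in the same spirit (illustrative only, not SDP1's actual wire format):

        import os, struct, time

        _counter = 0

        def next_iv() -> bytes:
            """16-byte IV from three independent sources; all three must
            fail together before an IV can repeat."""
            global _counter
            _counter += 1
            return (struct.pack(">I", _counter & 0xFFFFFFFF)  # source 1: counter
                    + struct.pack(">Q", int(time.time()))     # source 2: time
                    + os.urandom(4))                          # source 3: OS random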

    Posted by iang at 05:57 AM | Comments (8) | TrackBack

    May 11, 2008

    Phishing faceoff - Firefox 3 v. the market for your account

    Dria writes up a fine intro to the new Firefox security UI, the thing that forms the front line for phishing protection.

    Basically, the padlock has been replaced with a button that shows a "passport" icon in multiple colours and with explanatory text. Pretty good, although as far as I can see, this means you can no longer *look* and tell what is going on. OK, I'll run with that as an experiment! For now, there are 4 colours: grey, blue, green and yellow. Grey is for no identity, and I guess that also means no TLS although it isn't that clear as encryption edge-cases are not shown. Blue is Good, being the conventional browser security level, and Green is the EV colour for their new enhanced "better than before" identity verification.

    One minus: yellow is now a confusing colour. Before, it was the colour of success; now it is the colour of failure (although that failure is easily rectified by clicking to add an exception). As I commented before, we do need the colours to be re-aligned after a period of experimentation. Kudos to the Firefox team for taking that one on the chin; if only others were prepared to clearly unwind some well-meaning ideas...

    Second minus: as many said in comments, the "invalid identity verification" isn't always that; sometimes it is a non-third-party verification. A distinction needs to be drawn between "identity not 3rd-party verified" and "self-verified."

    Indeed, our old friend, the self-identified or self-signed certificate, is widely used in the technical community for local and small-scale security systems, so we can see that the messages "not trusted" and "invalid" are simply wrong. Now, Jonath says that Firefox is planning to do Key Continuity Management, so it may be that the confusing messages get cleaned up then. (I don't know if anyone else has spotted the nexus, but when we take KCM and combine it with 3rd-party verifications, variously called TTP or CVP or certificate authorities, then we get the best of both worlds. It will eventually come, because it is so sensible.)

    One good plus: the CA is displayed in all cases where the certificate information is available. This is exceedingly important because it is the CA that does the verification of identity; Firefox only repeats it (which is why the "trusted" message is soooo wrong). Slowly but surely, we have to move the browser UI to pin the assertion on the CA, and we have to move the user to thinking about CAs as competing, branded entities that stand behind their statements. Publicly. At least Firefox has not made the mistake that Microsoft has made with the new IE, which displays the name of the authority in some circumstances but not others.

    It is important to realise: we can take a few missteps along the way, such as Gerv's noble attempt with the colour yellow. It is very costly to get these UIs built, trialled and tuned, so any experiments have to be taken with an expectation of more improvement in the future. As long as we're moving forward, things are getting better.

    What's all this about? Why is it important? Pop over to Francois' post on stolen accounts to see the real reason:

    (Hat tip to Gunnar again.) Them's hard numbers! A quick eyeball calculation over Francois's numbers reveals that the going price of a well-funded account is 8% of its value. This makes some sense: getting access to the account details is easy and done in bulk, given that the security systems of online banks are so fundamentally and hopelessly flawed. What is more difficult is getting the money out, because, since the rise of phishing, and the sticking of some of the liability, occasionally, to the banks, the online security teams have done some things to make it a little trickier. So the lion's share goes to the money launderer, who needs to pay a lot more in direct costs to get his hard-stolen loot out.

    How does this relate to Firefox? Phishing arose as an attack on the browser's security UI, so the Firefox team are working to address that issue with KCM, better displays of the CA name that makes the claim, and more clear symbols. The first 2 images above may help you to avoid the 3rd image.

    Unfortunately, the big picture overtook the browser world, so the work is only just starting. Phishing caused massive funds to flow into the attackers, so they invest in any attacks that might return more value. Especially, the attack profile now includes MITM and trojans, so hardening of the browser against inside attacks will be increasingly necessary.

    Remember, the threat is on the node, not on the wire; which gives us an answer to the question that Jonath was posing: Hardening against inside attacks should be on the agenda for future Firefox.

    Posted by iang at 12:44 PM | Comments (0) | TrackBack

    April 22, 2008

    Paypal -- Practical Approaches to Phishing -- open white paper

    Paypal has released a white paper on their approach to phishing. It is mostly good stuff. Here are their Principles:

    1. No Silver Bullet -- We have not identified any one solution that will single-handedly eradicate phishing; nor do we believe one will ever exist. ...

    2. Passive Versus Active Users -- When thinking about account security, we observe two distinct types of users among our customers. The first and far more common is the “passive” user. Passive users expect to be kept safe online with virtually no involvement of their own. They will keep their passwords safe, but they do not look for the “s” in https nor will they install special software packages or look for digital certificates. The active user, on the other hand, is the “see it/touch it/use it” person. They want two-factor authentication, along with every other bell and whistle that PayPal or the industry can provide. Our solution would have to differentiate between and address both of these groups.

    3. Industry Cooperation -- PayPal has been a popular “target of opportunity” for criminals due to our large account base of over 141 million accounts and the nature of our global service. However, while we may be an attractive target, we are far from the only one. Phishing is an industry problem, and we believe that we have a responsibility to help lead the industry toward solutions that help protect consumers – and the Internet ecosystem as a whole.

    4. Standards-based -- A preference for solutions based on industry standards could be considered a facet of industry cooperation, but we believe it’s important enough to stand on its own. If phishing is an industry problem, then industry standard solutions will have the widest reach and the least overhead – certainly compared to proprietary solutions. For that reason, we have consistently picked industry standard solutions.

    (Slightly edited.) That's a good start. The white paper explores their strategy and walks through their work with email signing. Short message: tough! No mention of the older-generation technologies such as OpenPGP or S/MIME; instead, they create plugins to handle their signatures:

    To reach the active users who do not access their email through a signature-rendering email client, PayPal started working with Iconix, which offers the Truemark plug-in for many email clients. The software quickly and easily answers the question of “How do I know if a PayPal email is valid?” by rewriting the email inbox page to clearly show which messages have been properly signed.

    That's by way of indicating how poorly the tools of the early 90s, designed from poor security analysis and wrong threat models, are coping with real threats.

    Next part is their work with the browser. It starts:

    4.1 Unsafe browsers

    There is of course, a corollary to safer browsers – what might be called “unsafe browsers.” That is, those browsers which do not have support for blocking phishing sites or for Extended Validation Certificates (a technology we will discuss later in this section). In our view, letting users view the PayPal site on one of these browsers is equal to a car manufacturer allowing drivers to buy one of their vehicles without seatbelts. The alarming fact is that there is a significant set of users who use very old and vulnerable browsers, such as Microsoft’s Internet Explorer 4 or even IE 3. ...

    Unsafe Browsers are a frequent complaint here and in other places. Some good stuff has been done, but it was too little, too late, and we now see the unfortunate damage that has been done. One of these effects was that it is now up to the big website operators, in this case PayPal, to start policing the browsers. Old versions of IE are to be blocked, and a recent warning shot indicates that browsers that don't support EV will be next.

    Next point: Paypal worked through several strategies. Three years ago, they pushed out a Toolbar (similar to the one listed on this blog). However, this only works for those who download toolbars, a complaint frequently mentioned. So Paypal then shifted to working with Microsoft to adopt the technology into IE7, and now they have ridden the adoption curve of IE7 for free.

    Again this is exactly what should have been done: try it in toolbars then adopt it in the browser core. A cheap hint was missed and an expensive hit was scored:

    4.4 Extended Validation SSL Certificates

    Blocking offending sites works very well for passive users. However, we knew we needed to provide visual cues for our active users in the Web browser, much like we did with email signatures in the mail client.
    Fortunately, the safer browsers helped tremendously. Taking advantage of a new type of site certificate called ‘Extended Validation (EV) SSL Certificates,’ newer browsers such as IE 7 highlight the address bar in green when customers are on a Web site that has been determined legitimate. They also display the company name and the certificate authority name. So, by displaying the green glow and company name, these newer browsers make it much easier for users to determine whether or not they’re on the site that they thought they were visiting.

    This is a mixture of misinformation and danger for PayPal. The good part is that the browser, which is the authority on the CAs, now states definitively "Which Site" and also "Who says!" Green is a nice positive colour, too.

    So, when this goes wrong, as it will, the user has more information with which to seek solutions. This shifting of the information back to the user (whether they want it or not) will do much more to cause the alignment of liabilities.

    The bad part is that the browsers did not need a proprietary solution dressed up in a consortium in order to do this. They had indeed added the colour yellow (Firefox) and company name themselves, and the CA name has been an idea that I pushed repeatedly. For the record, Verisign asked for it early on as well. There's nothing special in the real business world about asking for "who says so?"

    So we now have a structural lock-in placed in the browsers which they could have done for free, on their own initiative. Where does this go from here? Well, it certainly helps PayPal in the short term (in the same way that they could have said to users "download Firefox, check the yellow bar and company name"). But beyond that, I see dangers. As I have frequently written about the costs of the security approach before, I'll skip it now.

    Overall, though, I'm pleased with the report. They have recognised that the industry has no solutions (no "silver bullets") and they have to do it themselves. They've implemented many different strategies, discarded some and improved others. Did it work? Yes, see the graph.

    Best of all, they've taken the wraps off and published their findings. One of the clear indications from recent research and breach laws is that opening up the information is critical to success, and that starts with the website people. It's your job, do it, but that doesn't mean you have to do it alone! You can help each other by sharing industry-validated results, which are the only ones of any value.

    To conclude, there are some other strategies that I'll suggest, and if PayPal are reading this, then they too can ask what's going on here:

    a. Get TLS/SNI into Apache and IIS. Why? So that virtual hosted sites -- the 99% -- can use TLS. This will lead grass-roots-style to an explosion in the use of TLS.

    Why's that helpful? First, it will raise the profile of TLS work enormously, and that includes server-side and browser-side practices. It will help to re-direct all the resources above into security work in the browser. Right now, 1% of activity is in TLS. Priorities will change dramatically when that goes to 10%, and that means we can count on the browser teams to spend a whole lot more time on it. And second, if all traffic goes over TLS, this reduces the amount of security work quite considerably, because everything is within the TLS security model. Paypal has already figured this out, as "more or less all" their stuff is now under TLS, including the report!

    All browsers have already done their bit for TLS/SNI. Webservers are the laggards. Ask them.
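
    Once the webservers catch up, the payoff looks something like this sketch (Apache-style mod_ssl directives, hostnames invented): many TLS sites on one IP address, each with its own certificate, which is exactly what SNI permits:

        Listen 443

        <VirtualHost *:443>
            ServerName alice.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/alice.example.com.crt
            SSLCertificateKeyFile /etc/ssl/alice.example.com.key
        </VirtualHost>

        # A second certificate on the *same* IP and port -- impossible
        # without SNI, routine with it.
        <VirtualHost *:443>
            ServerName bob.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/bob.example.com.crt
            SSLCertificateKeyFile /etc/ssl/bob.example.com.key
        </VirtualHost>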

    b. Hardened browser. Now that Firefox, IE and others have done some work to push attacks away from the user, the phishers are attacking the browsers from the inside. So there is a need to really secure the inside against attacks. Indeed Paypal already noted it in the report.

    c. Hardened website. This means *reducing the tech.* The more you please your users, the more you please your attackers.

    d. 2-channel authentication over the transaction. Okay, that's old news, but it bears repeating, because my online payment provider can only give me one of those old RSA fobs.

    e. The browser-website interface is here to stay. Which means we are stuck with a pretty poor combination. What's the long-term design path? Here's one view: it starts with client-side certificates and opportunistic cryptography. Why? Because otherwise, transactions are naked and vulnerable, and your efforts are piecemeal.

    Which means, don't replace the infrastructure wholesale, but re-tune it in the direction of security. Read the literature on secure transactions; none of it was ever employed in ecommerce, so there are lots of opportunities left for you :)

    (Firefox and IE are both doing early components in this area, so ask them what it's about!)

    f. Finally, bite the bullet and understand that your users will revolt one day if you continue to keep fees high through industry-wide cooperation on lackadaisical fraud practices. High fraud and high loss mean high fees, which mean high profits. One day the users will understand this, and revolt against your excuse. The way this works is already well known to PayPal, and it will happen to you, unless you adopt a competitive approach to fraud and to fees.

    Posted by iang at 02:27 PM | Comments (3) | TrackBack

    April 20, 2008

    Fair Disclosure via blogs? Anyone listening to Pow, Splat, Blech?

    Another message via the medium, this time from someone who knows how to use a remailer, and is therefore anonymous:

    Don't try this at home! (without an anonymizing proxy, anyway)

    Google willingly gives anyone a list of highly vulnerable US Government websites. Just write the following into the search box:

    gimme pow that do some splat

    These are sites that construct blah directly from blech. Most of them would respond to blah that are not supported by the blech-based interface, leaking sensitive information left and right. But quite a few would let you splat the splotches as well, up to and including blamming entire ker-blats.

    You didn't hear it from me.

    OK, that was fun. Problem now is, how does someone I don't know that won't hear it from me get it from someone I didn't hear it from?


    Late addition: Now that anon has leaked the sensitive command, I realise this is what I first saw in Perilocity's post on WTF. Breach news is worse than politics: a week is ancient history. I can do no better than those guys. Still, to save a click, the basic thing is that you can see the SELECT command in the HTML of the website, and then use that example to craft your own. Here's a story of some sick public servants whom Oklahoma felt the need to share with us all:

    Here's the rest of the message. I think we should all try it. Safety in numbers.

    Just write the following into the search box:
    allinurl:.gov select from where

    These are websites that construct SQL queries directly from the URL. Most of
    them would respond to queries that are not supported by the web-based UI of
    the website, leaking sensitive information left and right. But quite a few
    would let you modify the databases as well, up to and including dropping
    entire tables.

    The only question left is whether I'm not hearing from one anonymous or two? But you're not asking that.
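
    For completeness, the flaw and its textbook fix, sketched in Python with sqlite3 standing in for whatever those sites actually run:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE inmates (id INTEGER, name TEXT)")

        def lookup_unsafe(user_input: str):
            # The broken pattern: splicing the URL parameter into the SQL text.
            # An input like "0 UNION SELECT ..." walks straight in.
            return conn.execute(
                "SELECT name FROM inmates WHERE id = " + user_input).fetchall()

        def lookup_safe(user_input: str):
            # The fix: a parameterised query; the input is data, never SQL.
            return conn.execute(
                "SELECT name FROM inmates WHERE id = ?", (user_input,)).fetchall()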

    Posted by iang at 12:38 PM | Comments (2) | TrackBack

    April 16, 2008

    Proving that you know something about security...

    I recently received an (anonymous) comment on the 'silver bullets' paper that ran like this:

    Sellers most certainly still have more information than the vast majority of buyers based on the fact that they spend all of their time making security software.

    That's an important statement, and deserves to be addressed. How can we check that statement? Well, one way is that we could walk over to the world's biggest concentration of sellers and perhaps buyers, and test the waters? The RSA conference! Figuratively, blog-wise, Gunnar does just that:

    I went to RSA to speak with Brian Chess on Breaking Web Services. First time for me to RSA, I generally go to more geek-to-geek conferences like OWASP. It is a little weird to be in such a big convention. There were soooo many vendors yet most of the products in the massive trade show floor would have as much an impact on the security in your system as say plumbing fixtures. What is genuinely strange to me is that every other area in computers improves and yet security stagnates. For years the excuse that security people gave for their field's propensity to lameness is that "no one invests a nickel in security." However, that ain't the case any more and yet most of the products teh suck. This doesn't happen in other areas of computing - databases are vastly better than a decade ago, app servers same, OS same, go right down the list. What gives in security? Where is the innovation?

    This is more or less similar to the paper's selection of quotes. Anecdotally, evidence exists that insiders don't think sellers know enough, on both sides of the fence. However, surveys can be self-selecting (as was my sample of quotes in the paper), and opinions can be wrong. So it is important to realise that we have not proven it one way or the other; we've simply opened the door to an uncertainty.

    That is, it could be true that sellers don't know enough! How we then go on to show this, one way or another, is a subject for other (many) posts and possibly much more academic research. I don't for a moment think it reasonable or scientifically appropriate to prove this in one paper.

    Posted by iang at 07:01 AM | Comments (1) | TrackBack

    April 09, 2008

    another way to track their citizens

    Passports were always meant to help track citizens. According to lore, they were invented in the 19th century to stop Frenchmen evading the draft (conscription), which is still an issue in some countries. BigMac points to a Dutch working paper, "Fingerprinting Passports," which indicates that passports can now be used to identify the bearer's country of issue at a distance of maybe 25cm. Future Napoleons will be happy.

    Because terrorising the reader over breakfast is currently good writing style by governments and media alike, let's highlight the dangers first. The paper speculates:

    Given that we can remotely detect the presence of a passport of a particular country, how could this functionality be abused? One abuse case that has been suggested is a passport bomb, designed to go off if someone with a passport of a certain nationality comes close. One could even send such a bomb by post, say to an embassy. A less spectacular, but possibly more realistic, use of this functionality would be by passport thieves, who can remotely check if someone is carrying a passport and if it is of a ‘suitable’ nationality, before they decide to rob them.

    From the general fear department, we can also add that overseas travellers sometimes have a fear of being mugged, kidnapped, hijacked or simply shot because of their mere membership of a favourable or unfavourable country.

    Now that we have the FUD off our chest, let's talk details. The trick involves sending a series of commands (up to 4) to the RFID in the passport, each of which are presumably rejected by the passport. The manner of rejection differs from country to country, so a precise fingerprint-of-country can be formed simply by examining each rejection, and then choosing a different command to further narrow the choices.
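
    A sketch of the decision procedure (Python; the commands and status words are invented placeholders, not the paper's data):

        PROBES = ["CMD_A", "CMD_B", "CMD_C", "CMD_D"]

        KNOWN_REJECTIONS = {
            ("6D00", "6982", "6A80", "6700"): "Country X",
            ("6D00", "6986", "6A80", "6700"): "Country Y",
            ("6E00", "6982", "6A86", "6700"): "Country Z",
        }

        def fingerprint(send) -> str:
            """`send` transmits one command to the chip and returns its error
            status.  A handful of rejections is enough to tell countries apart."""
            signature = tuple(send(cmd) for cmd in PROBES)
            return KNOWN_REJECTIONS.get(signature, "unknown")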

    How did this happen? I would speculate that the root failure is derived from bureaucrats' never-ending appetite for complex technological solutions to simple problems. In this case, the first root cause is the use of the RFID, being by intention and design something that can be read from up to 10 cm.

    It is inherently attackable, and therefore by definition a very odd choice for security. The second complexity, then, involved implementing something to stop the attackers reading off the RFIDs without permission. The solution to an active read-off attack is encryption, of course! Which leads to our third complexity, a secret key, which is written inside the passport, of course! Which immediately raises issues of brute-forcing (of course!), and, as the paper notes, brute-force attacks work on some countries' passports because the secret key is ... poorly chosen.

    All of this complexity, er, solution, means something called Basic Access Control is added to the RFID in order to ensure the use of the secret key. Which means a series of commands meant to defend the RFID. If we factor in the tendency for each country to implement passports entirely alone (because they are more scared of each other than they are of their citizens), we can see that each solution is proprietary and home-grown. To cope with this, the standard was written to be very flexible (of course!). Hence, it permits wide diversity in response to errors.

    Whoops! Security error. In the world of security, we say that one should be precise in what one sends, and precise in what one returns.

    From that point of view, this is poor security work by the governments of the world, but that's to be expected. The US State Department can now derive some satisfaction from earlier blunders; because of their failure to implement any form of encryption or access control, American passports can be read by all (terrorists and borderists alike), which apparently forced them to add aluminium foil into the passport cover to act as a Faraday cage. Likely, the other countries will now have to follow suit, and the smugness of being sophisticated and advanced in security terms ("we've got BAC!") will be replaced by a dawning realisation that they should have adopted the simpler solutions in the first place.

    Posted by iang at 03:33 AM | Comments (3) | TrackBack

    March 25, 2008

    Pogo reports: big(gest) bank breach was covered up?

    An anomaly surfaces on the breach scene. Lynn reports in comments, via Dark Reading, pointing to Pogo:

    With the exception of the Birmingham News, what may be the largest bank breach involving insider theft of data seems to have flown under the mainstream media radar. ...

    In light of the details now available, the breach appears to be the largest bank breach involving insider theft of data in terms of number of customers whose data were stolen. The largest incident to date for insider theft from a financial institution involved the theft of data on 8.5 million customers from Fidelity National Information Services by a subsidiary's employee.

    It is not clear at the time of this writing whether Compass Bank ever notified the more than 1 million customers that their data had been stolen or how it handled disclosure and notification. A request for additional information from Compass Bank was not immediately answered.

    I would guess that the Feds agreed to keep it quiet. And gave the institution a get-out-of-jail card for the disclosure requirement. It would be curious to see the logic, and I'd be skeptical. On the one side, the damage is done, and the potential for a sting or new information would not really be good enough to compensate for a million victims.

    On the other side, maybe they were also able to satisfy themselves that no more damage would be done? It still doesn't cut the mustard, because once identity victims get hit, they need the hard information to clear their credit records.

    But, in the light of yesterday's post, let's see this as an exception to the current US flavour of breach disclosure, and see if it sheds any light on costs of non-disclosure.

    Posted by iang at 08:11 AM | Comments (4) | TrackBack

    March 13, 2008

    Trojan with Everything, To Go!

    more literal evidence of ... well, everything really:

    Targeting over 400 banks (including my own :( ! ) and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight. The scale and sophistication of this emerging banking Trojan is worrying, even for someone who sees banking Trojans on a daily basis.

    This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.

    The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.

    The Trojan does not use this attack vector for all banks, however. *It only uses this route when an easier route is not available*. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those....

    (spotted by JPM) MITB, MITM, two-factor as silver bullets for online banks, the node is insecure, etc etc.

    About the only thing that is a bit of a surprise is the speed of this attack. We first reported the MITB here around 2 years back, and we are still only seeing reports like the above. Although I said earlier that a big problem with the banking world was that the attacker can spin inside your OODA loop, it would appear that he does not take on every attack.

    See above for some limits: the attacker is finding and pursuing the attacks that are easiest, first. Is this finally the evidence that cryptographers cannot ignore? Crypto alone has proven not to work. It may be theoretically strong, but it is practically brittle, and easily bypassed. A more balanced, risk-based approach is needed. An approach that uses a lot less crypto, and a lot more engineering and user understanding, would be far more efficacious in delivering what users need.

    Posted by iang at 07:18 AM | Comments (2) | TrackBack

    February 20, 2008

    Principle of Redundancy

    In software engineering, it is important to remember the principle of redundancy. Not because it is useful for software, but because it is useful for humans.

    Human beings work continuously with redundancy because most human information processing is soft and fuzzy. How does a system deal with soft, fuzzy results? It takes readings from different sources, as little correlated as possible, and compares them. If three readings from independent sources all suggest the same conclusion, then we are good. If 2 out of 3 say good, then the human brain says "take care," and if only 1 out of 3 is good, then it is discarded.
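
    Written down as the trivial algorithm it is (a sketch, nobody's production code):

        def decide(readings: list) -> str:
            """Combine independent, uncorrelated boolean signals the way a
            brain does: unanimity is good, a majority is 'take care', and
            a minority is discarded."""
            good = sum(bool(r) for r in readings)
            if good == len(readings):
                return "good"
            if good * 2 > len(readings):
                return "take care"
            return "discard"

        # decide([True, True, True])   -> "good"
        # decide([True, True, False])  -> "take care"
        # decide([True, False, False]) -> "discard"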

    In comments on the last post, Peter G explored the direct question of whether anyone checked the fingerprint of the SSH server:

    I tried to get some data a while back on SSH key checking in response to SSH fuzzy fingerprints (if you're not familiar with fuzzy fingerprints, they create a close-enough fingerprint to the actual target to pass muster in most cases). Because human-subject experimentation requires a lot of paperwork and scrutiny, I thought I'd first try and establish a base rate for SSH fingerprint checking in general. In other words if you set up a new server with a totally different key from the current one, how many people will be deterred?

    So I tried to establish the fingerprint-check rate in a population of maybe a few thousand users.

    It was zero.

    No-one had ever performed an out-of-band check of an SSH fingerprint when it changed.

    Given a base rate of zero, I didn't consider it worthwhile doing the fuzzy fingerprint check :-).

    What is going on here? Three things. For some reason that has never been explained, SSH has never made it easy to check the fingerprint. Like OpenPGP to some extent, the fingerprints have been delivered in incompatible formats across different channels. E.g., my known_hosts file says that a server I know is AAAAB3N.... and the only way to easily see the fingerprint is to simulate a compromise by clearing the cache.
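
    (For what it's worth, recent versions of OpenSSH can print the cached fingerprints directly, though the feature is hardly advertised and behaviour varies by version:

        $ ssh-keygen -l -f ~/.ssh/known_hosts      # fingerprints of the cached host keys
        $ ssh-keygen -F some.where.example.net     # find one host's cached entry

    Hardly usable security, but better than simulating a compromise.)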

    Secondly, and the point of this post: the fingerprint is only one of the data points being displayed. In the last post, I talked about two other pieces of data: one was the failure of server-key matching, and the other was the fallback to password-request.

    The key lesson here is that SSH delivers enough information to do the job: it isn't the fingerprint per se, but the whole package of fingerprint, server-key matching, and precise mode.

    Three data points, albeit rather poorly presented. Which brings us to another point: In practice, this is only good enough in rare and experienced situations. That breach was only picked up because of the circumstances and a good dose of luck.

    This leads us to conclude that SSH is only just good enough, sometimes. Why? Because it has only ever needed to be just good enough: since it has done the job well enough all these years, the security model has not been much improved beyond the original concept that knocked the theoretical MITM on the head. The third thing, then, is the lack of attacks -- now, however, circumstances are changing, and improvements should take place. (Indeed, if you have a longer perspective, you'll notice that the distros of SSH have been upgrading the security model over the last few years.)

    But the important upgrades should not be about forcing the fingerprint down the throats of users. Instead, they should be in the area of redundancy: more uncorrelated, soft and fuzzy signals to the user, signals that work with the brain, not with the old 1990s computer textbooks.

    Hence, and to complete the response to Peter G, this is why the PKI apologists are looking in the wrong area:

    So there is a form of data available, but because it's not very interesting it'll never be written up in a conference paper (there's a longer discussion of fuzzy fingerprints and related stuff in my perpetual-work-in-progress http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf). I've seen this type of authentication referred to as leap-of-faith authentication in a few recent usability papers, and that seems to be a reasonable name for it. That's not saying it's a bad thing, just that you have to know what it is you're getting for your money.


    Yeah, we can all see "leap of faith" as a sort of mental trick to avoid really examining why SSH works and why PKI doesn't or didn't. That is, "oh, but because you make this 'leap of faith' you're not really secure, according to our models. So I don't have to think any more about it."

    The real issue here is again, it worked, enough, for the job. Now the SSH people will think more, and upgrade it, because it is being attacked. I hope, at least!

    The PKI people cannot say that. What they can say is "use TLS/SRP" or some other similar RFC-acrophiliac verbiage which doesn't translate to anything a user can eat or drink. Hence, the simple answer is, "come back when I can use it."

    Posted by iang at 02:48 PM | Comments (1) | TrackBack

    February 17, 2008

    Say it ain't so? MITM protection on SSH shows its paces...

    For a decade now, SSH has successfully employed a simple opportunistic protection model that solved the shared-key problem. The premise is quite simple: use the information that the user probably knows. It does this by caching keys on first sight, and watching for unexpected changes. This was originally intended to address the theoretical weakness of public key cryptography called MITM or man-in-the-middle.

    Critics of the SSH model, a.k.a. apologists for the PKI model of the Trusted Third Party (certificate authority), have always pointed out that this simply leaves SSH open to a first-time MITM. That is, when some key changes or you first go to a server, it is "unknown" and therefore has to be established with a "leap of faith."

    The SSH defenders claim that we know much more about the other machine, so we know when the key is supposed to change. Therefore, it isn't so much a leap of faith as educated risk-taking. To which the critics respond that we all suffer from click-thru syndrome and we never read those messages, anyway.

    Etc etc, you can see that this argument goes round and round, and will never be solved until we get some data. So far, the data is almost universally against the TTP model (recall phishing, which the high priests of the PKI have not addressed to any serious extent that I've ever seen). About a year or two back, attack attention started on SSH, and so far it has withstood difficulties with no major or widespread results. So much so that we hear very little about it, in contrast to phishing, which is now a 4 year flood of grief.

    After which preamble, I can now report that I have a data point on an attack on SSH! As this is fairly rare, I'm going to report it in fullness, in case it helps. Here goes:

    Yesterday, I ssh'd to a machine, and it said:

    zhukov$ ssh some.where.example.net
    WARNING: RSA key found for host in .ssh/known_hosts:18
    RSA key fingerprint 05:a4:c2:cf:32:cc:e8:4d:86:27:b7:01:9a:9c:02:0f.
    The authenticity of host can't be established but keys
    of different type are already known for this host.
    DSA key fingerprint is 61:43:9e:1f:ae:24:41:99:b5:0c:3f:e2:43:cd:bc:83.
    Are you sure you want to continue connecting (yes/no)?
    

    OK, so I am supposed to know what was going on with that machine, and it was being rebuilt, but I really did not expect SSH to be affected. The ganglia twitch! I asked the sysadm, and he said no, it wasn't him. Hmmm... mighty suspicious.

    I accepted the key and carried on. Does this prove that click-through syndrome is really an irresistible temptation and the Achilles heel of SSH, and that even the experienced user will fall for it? Not quite. Firstly, we don't really have a choice as sysadms; we have to get in there, compromise or no compromise, and see. Secondly, it is OK to compromise as long as we know it, assess the risks and take them. I deliberately chose to go ahead in this case, so it is fair to say that I was warned, and the SSH security model did all that was asked of it.

    Key accepted (yes), and onwards! It immediately came back and said:

    iang@somewhere's password:

    Now the ganglia are doing a ninja turtle act and I'm feeling very strange indeed: The apparent thought of being the victim of an actual real live MITM is doubly delicious, as it is supposed to be as unlikely as dying from shark bite. SSH is not supposed to fall back to passwords, it is supposed to use the keys that were set up earlier. At this point, for some emotional reason I can't further divine, I decided to treat this as a compromise and asked my mate to change my password. He did that, and then I logged in.

    Then we checked. Lo and behold, SSH had been reinstalled completely, and a little bit of investigation revealed what the warped daemon was up to: password harvesting. And, I had a compromised fresh password, whereas my sysadm mates had their real passwords compromised:

    $ cat /dev/saux
    foo@...208 (aendermich) [Fri Feb 15 2008 14:56:05 +0100]
    iang@...152 (changeme!) [Fri Feb 15 2008 15:01:11 +0100]
    nuss@...208 (43Er5z7) [Fri Feb 15 2008 16:10:34 +0100]
    iang@...113 (kash@zza75) [Fri Feb 15 2008 16:23:15 +0100]
    iang@...113 (kash@zza75) [Fri Feb 15 2008 16:35:59 +0100]
    $

    The attacker had replaced the SSH daemon with one that insisted that the users type in their passwords. Luckily, we caught it with only one or two compromises.

    In sum, the SSH security model did its job. This time! The fallback to server-key re-acceptance triggered sufficient suspicion, and the fallback to passwords gave confirmation.

    As a single data point, it's not easy to extrapolate but we can point at which direction it is heading:

    • The model works better than its absence would, for this environment and this threat.
    • This was a node threat (the machine was apparently hacked via dodgy PHP and last week's Linux kernel root exploit).
    • The SSH model was originally intended to counter an MITM threat, not a node threat.
    • Because SSH prefers keys to passwords (machines being more reliable than humans), my password was protected by the default usage.
    • Then, as a side-effect, or by easy extension, the SSH model also protects against a security-mode switch.
    • It would have worked for a real MITM, but only just, as there would only have been the one warning.
    • But frankly, I don't care: the compromise of the node was far more serious,
    • and we know that MITM is the least cost-effective breach of all. There is a high chance of visibility and it is very expensive to run.
    • If we can seduce even a small proportion of breach attacks across to MITM work, then we have done a valuable thing indeed.

    In terms of our principles, we can then underscore the following:

    • We are still a long way away from seeing any good data on intercept over-the-wire MITMs. Remember: the threat is on the node. The wire is (relatively) secure.
    • In this current context, SSH's feature to accept passwords, and fall back from key-auth to password-auth, is a weakness. If the password mode had been disabled, then an entire area of attack possibilities would have been evaded (see the config sketch after this list). Remember: There is only one mode, and it is secure.
    • The use of the information known to me saved me in this case. This is a good example of how to use the principle of Divide and Conquer. I call this process "bootstrapping relationships into key exchanges" and it is widely used outside the formal security industry.
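
    The config for that evasion is a couple of lines in sshd_config on a stock OpenSSH; clients then never expect a password prompt, so a daemon that asks for one is itself the alarm:

        # /etc/ssh/sshd_config -- close off the password fallback entirely
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        PubkeyAuthentication yes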

    All in all, SSH did a good job. Which still leaves us with the rather traumatic job of cleaning up a machine with 3-4 years of crappy PHP applications ... but that's another story.



    For those wondering what to do about today's breach, it seems so far:

    • Turn all PHP to secure settings. Throw out all old PHP apps that can't cope.
    • Find an update for your Linux kernel quickly.
    • Watch out for SSH replacements and password harvesting.
    • Prefer SSH keys over passwords. The compromises can be more easily cleaned up by re-generating and re-setting the keys, they don't leapfrog so easily, and they aren't so susceptible to what is sometimes called "social engineering" attacks.

    Posted by iang at 04:26 PM | Comments (7) | TrackBack

    January 29, 2008

    Rumours of Skype + SSL breaches: same old story (MITB)

    Skype is the darling child of cryptoplumbers: the application that got everything right, could withstand the scrutiny of the open investigators, and looked like it was designed well. It also did something useful, and had a huge market, putting it head and shoulders above any other crypto application, ever.

    Storms are gathering on the horizon. Last year we saw stories that Skype in China was shipping with intercept plugins. Three months ago I was told by someone non-technical that the German government was intercepting Skype. Research proved her wrong ... and now leaks are proving her right: Slashdot reports on leaked German memos:

    James Hardine writes "Wikileaks has released documents from the German police revealing Skype interception technology. The leaks are currently creating a storm in the German press. The first document is a communication by the Ministry of Justice to the prosecutors office, about the cost splitting for Skype interception. The second document presents the offer made by Digitask, the German company secretly developing Skype interception, and holds information on pricing and license model, high-level technology descriptions and other detail. The document is of global importance because Skype is used by tens or hundreds of millions of people daily to communicate voice calls and Skype (owned by Ebay, Inc) promotes these calls as being encrypted and secure. The technology includes interception boxes, key forwarding trojans and anonymous proxies to hide police communications."

    Is Skype broken? Let's dig deeper:

    [The document] continues to introduce the so-called Skype Capture Unit. In a nutshell: a malware installed on purpose on a target machine, intercepting Skype Voice and Chat. Another feature introduced is a recording proxy, that is not part of the offer, yet would allow for anonymous proxying of recorded information to a target recording station. Access to the recording station is possible via a multimedia streaming client, supposedly offering real-time interception.

    Nope. It's the same old bug: pervert your PC and the enemy has the same power as you. Always remember: the threat is on the node, the wire is safe.

    In this case, Mallory is in the room with you, and Skype can't do a darn thing about it, given that it borrows the display, keyboard, mike and speaker from the operating system. The forthrightness of the proposal and the parties to the negotiations is compelling evidence that (a) the police want to infect your PC, and (b) infecting your PC is their preferred mechanism. So we can conclude that Skype itself is not efficiently broken as yet, while Microsoft Windows is, or more accurately remains, broken (the trojan/malware is made for the market-leading Microsoft Windows XP and 2000 only, not for the market-following Linux/MacOSX/BSD/Unix family, nor the market-challenging Vista).

    No change, then. For Skype, the dream run has not ended, but it has crossed into that area where it has to deal with actual targetted hacks and attacks. Again, no news, and either way, it remains the best option for us, the ordinary people. Unlike other security systems:

    Another part of the offer is an interception method for SSL based communication, working on the same principle of establishing a man-in-the-middle attack on the key material on the client machine. According to the offer this method is working for Internet Explorer and Firefox webbrowsers. Digitask also recommends using over-seas proxy servers to cover the tracks of all activities going on.

    MITB! Now, normally we make a distinction between demos, security gossip, rumours and other false signals ... but the offer of actual technology by a supplier, with a hard price, to a governmental intercept agency indicates an advanced state of affairs:

    The licensing model presented here relates to instances of installations per month for a minimum of three months. Each installation of the Skype Capture Unit will cost EUR 3500, SSL interception is priced at EUR 2500. A one-time installation fee of EUR 2500 is not further explained. The minimum cost for any installation on a suspect computer for a comprehensive interception of both SSL and Skype will be EUR 20500, if no more than one one-time installation fee are required.
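
    (Presumably the arithmetic behind that EUR 20500 runs: three months at EUR 3500 + EUR 2500 per month, i.e. 3 x 6000 = EUR 18000, plus the one-time EUR 2500 installation fee.)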

    This is the first hard evidence of professional browser-interference with SSL website access. Rumours of this practice have been around since 2004 or so, from commercial attacks, but nobody dared comment (apparently NDAs are stronger than crimes in the US of A).

    What reliable conclusion can we draw?

    • the cost of an intercept is EUR 2500 and climbing.
    • the "delivery time" taken is a month or so, perhaps indicating the need to probe and inject into Windows.
    • MacOSX and Linux are safe for now, due to small market share and better security focus
    • Vista is safe today, for an unknown brew of market share, newness and "added security" reasons.
    • Skype itself is fine. So install your Skype on a Mac (if human) or a Linux box (if a hardcore techie).

    Less reliably, we can suggest:

    • All major police forces in rich countries will have access to this technology.
    • Major commercial attackers will have access, as well as major criminal attackers.
    • Presumably the desire of the police here is to not interfere with ordinary people's online banking, which they now could do, because most banking systems are still stuck on dual factor (memo to my bank: your super-duper advanced dual-factor system is truly breached by the MITB).
    • Nor, presumably, do they care about your reading of this blog nor wikileaks, both being available in cleartext as well. Which means the plan to install *TLS/SSL everywhere to protect all browsing* is still a good plan, and is only held up by the slowness at Apache and Microsoft. (Guys, one million phishing victims every year beg you to hurry up.)
    • Police are more interested in breaching the online chat of various bad guys. So, SSL email and chat forums, Skype chat and voice.

    Of course the governance issue remains. The curse of governance says that power will be used for bad. When the good guys can do it, then presumably the bad guys can do it as well, and who's to say the good guys are always good? People who have lots of money should worry, because the propensity for well-budgeted but poorly paid security police in 1st-world countries to manipulate their pensions upwards is unfortunately very real. Get a Mac, guys, you can afford it.

    In reality, it simply doesn't matter who is doing it: the picture is so murky that the threat level remains the same to you, the user: you now need to protect your PC against injection of trojans for the purpose of attacking your private information directly.

    Final questions: how many intercepts are they doing and planning, and did the German government set up a cost-sharing for payoffs to the anti-virus companies?

    Posted by iang at 05:46 PM | Comments (2) | TrackBack

    January 11, 2008

    #4.2 Simplicity is Inversely Proportional to the Number of Designers

    Still reeling from the shock of that question, it feels like time to introduce another hypothesis:

    #4.2 Simplicity is Inversely Proportional to the Number of Designers
    Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has. -- Margaret Mead

    Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?
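
    (There is arithmetic behind the quip: n designers make n(n-1)/2 pairs that must agree, so the negotiation overhead does grow roughly as the square of the team size -- Brooks's law, wearing crypto clothing.)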

    Sad but true, if you look at the classical best of breed protocols like SSH and PGP, they delivered their best work when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, models, digital signature laws and accountants that sent it into orbit around Pluto.

    Sometimes a protocol can survive a team of two, but we are taking huge risks (remember, the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly, or kill him; your users will thank you for either. They do not benefit if you are locked in a deadly embrace over the pernickety benefits of MAC-then-encrypt over encrypt-then-MAC.

    It should be clear by now that committees are totally out of the question. They are like whirlpools, great spiralling sinks of talent, so paddle as fast as possible in the other direction. On the other hand, if you are having trouble shrinking your team or agreeing with them, a committee over yonder can be useful as a face-saving idea. Point them in the direction of the whirlpool, give them a nudge, and then get back to work.

    Posted by iang at 02:35 PM | Comments (3) | TrackBack

    What good are standards?

    Over at mozo, Jonath asks the most surprising question:


    My second question is this: as members of the Mozilla community, is this an effort that you want me (or people like me) participating in, and helping drive to final publication?

    Absolutely not, on several grounds. Here are some reasons, off the top of my head.

    Committees can't make security, full stop. Committees can write standards shaped like millstones around the neck, though.

    Standards are *not* indicated (in the medical sense) for UI, because the user is *not* a computer and does not and cannot follow precise rules like protocols.

    UI and security, together, probably require skills that are not easily available to your committee. Branding doesn't sit well with coding; architecture can't talk to lawyers. Nobody knows what a right is, and the number of people who can bring crypto to applications is so small that you won't find them in your committee.

    Security UI itself is an open research area, not an understood discipline that merely needs explaining. Standards are indicated if you want to kill research and move to promulgation of agreed dogma. This is sometimes useful, but only when the problems are solved; which is not indicated with phishing, now, is it?

    Although I have my difficulties with some of the research done, if you take away the ability to research from the community ("that's not standard!"), you've got nothing to tell you what to do, and the enemy has locked you down to a static position. (Static defence stopped around the time of the invention of the cannon, so we are looking at quite some history here...)

    Anybody got any other reasons? Is there a positive reason here, anywhere?

    Posted by iang at 02:22 PM | Comments (0) | TrackBack

    January 08, 2008

    UK data breach counts another coup!

    The UK data breach a month or two back counted another victim: one Jeremy Clarkson. The celebrated British "motormouth" thought that nobody should really worry about the loss of the disks, because all the data is widely available anyway. To stress this to the island of nervous nellies, he posted his bank details in the newspaper.

    Back in November, the Government lost two computer discs containing half the population's bank details. Everyone worked themselves into a right old lather about the mistake but I argued we should all calm down because the details in question are to be found on every cheque we hand out every day to every Tom, Dick and cash and carry.

    Unfortunately, some scammer decided to take him to task and signed him up for a contribution to a good charity. (Well, I suppose it's good; all charities and non-profits are good, right?) Now he writes:

    I opened my bank statement this morning to find out that someone has set up a direct debit which automatically takes £500 from my account. I was wrong and I have been punished for my mistake.

    Contrary to what I said at the time, we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy.

    What can we conclude from this data point of one victim? Lots, as it happens.

    1. The *indirect* nature of this victimhood continues to support the thesis that security is a market for silver bullets. That is, the market is about FUD, not security in any objective sense.
    2. (writing for the non-Brit audience here,) Jeremy Clarkson is a comedian. Comments from comedians will do more to set the agenda on security than any 10 incumbents (I hesitate to use more conventional terms). There has to be some pithy business phrase about this, like, when your market is defined by comedians, it's time for the, um, incumbents to change jobs.
    3. Of course, he's right on both counts. Yes, there is nothing much to worry about, individually, because (a) the disks are lost, not stolen, and (b) the data is probably shared so willingly that anyone who wants it already has it. (The political question of whether you could trust the UK government to tie its security shoelaces is an entirely other matter...)

      And, yes, he was wrong to stick his neck out and speak the truth.


    4. So why didn't the bank simply reverse the transaction? I'll leave that briefly as an exercise to the reader, there being two good reasons that I can think of, after the click.



    a. because he gave implied permission for the transactions by posting his details, and he breached implied terms of service!

    b. because he asked them not to reverse the transaction, as now he gets an opportunity to write another column. Cheap press.

    Hat-tip to JP! And, I've just noticed DigitalMoney's contribution for another take!

    Posted by iang at 04:13 AM | Comments (2) | TrackBack

    October 31, 2007

    Entire UK security industry is sent to Pogo's Swamp

    One of the enduring threads prevalent on this blog, but not in other places, is that the problem starts with ourselves. Without considering our own mistakes, our own frauds, indeed our own history, it is impossible to understand the way security, FC, and the Internet are going.

    Compelling evidence is presented over at LightBlueTouchpaper. Not that their Wordpress blog was hacked (there but for the grace of God, etc etc), but where Richard Clayton asks why the Government rejected all the recommendations of the House of Lords report of a while back. Echoed over at Ianb's blog, and probably throughout the entire British IT and security industry. Why?

    Richard searches for an answer: Stupidity? Vested Interests? (On the way, he presents more evidence about how secrecy of big companies is part of the problem, not part of the solution, but that's a distraction.)

    We have good news: the lack of reflective thought is slowly diminishing. Over the last month I've seen an upsurge of comments: 1Raindrop's Gunnar Peterson says "One of the sacred cows that need to be gored is the notion that we in the People's Republic of IT Security have it all figured. We don't." Elsewhere Gunnar says "in many cases, they are spending $10 to protect something worth $5, and in other cases they are spending a nickel to protect something worth $1,000."

    Microsoft knows but isn't saying: Vista fails to clear up the security mess. Which means that they spent the last 5 years and got ... precisely nowhere. Forget the claim that Vista bombed in the security department ("short M$ ! buy Apple!") and consider the big picture: if Microsoft can throw their entire company at the issue of security, and fail, what hope the rest?

    Chandler (again) points to the Inquirer:

    Whose interests are really threatened by cybercrime? Well, certainly not the software makers, the chip makers, the hard disk makers, the mouse makers, and least of all the virus busters and security firms which daily release news of the latest “vulnerabilities” plaguing the web.

    No, the victims are the poor users. Not that they’re likely to have their identity stolen or their bank account plundered or their data erased by some malicious bot or other. The chances of that happening are millions to one.

    No, what they are forced to do is continually fork out for spam-busting protection, for “secure” operating systems, for funky firewalls, malware detectors or phish-sniffing software. All this junk clogs up their spanking new PC so that they continually have to upgrade to newer chippery clever enough to have a processing core dedicated to each of the bloatsome security routines keeping them safe while they surf.

    It’s a con, gentlemen. A big fat con.

    No one has a business interest in catching identity thieves or malware writers. There’s no money in it, so no-one’s bothered.

    Chandler then goes on to identify where the solution isn't, but let's not get distracted by that today. Some people, including John Q, pointed to Linus, who said:

    ... But the *discussion* on security seems to never get down to real numbers. So the difference between them is simple: [scheduling] is "hard science". The other one is "people wanking around with their opinions".

    Which rudeness strangely echoes the comment in 2004 or so by a leading security expert who stopped selling for a microsecond and was briefly honest about the profession.

    When I drill down on the many pontifications made by computer security and cryptography experts all I find is given wisdom. Maybe the reason that folks roll their own is because as far as they can see that's what everyone does. Roll your own then whip out your dick and start swinging around just like the experts.

    I only mention it because that dates my thinking on this issue. As I say, I've seen an upsurge in this over the last few months so I can predict that around now is the time that the IT security sector realises that not only do they not have a solution to security, they don't know how to create a solution for security, and even if they accidentally found one, nobody would listen to them anyway.

    If you have followed this far, then you can now see why the UK Government can happily ignore the Lords' recommendations: because they came from the security industry, and that's one industry that has empirically proven that their views are not worth listening to. Welcome to Pogo's swamp.

    Posted by iang at 07:39 AM | Comments (0) | TrackBack

    October 05, 2007

    Storm Worm signals major new shift: a Sophisticated Enemy

    I didn't spot it when Peter Gutmann called it the world's biggest supercomputer (I thought he was talking about a game or something ...). Now John Robb pointed to Bruce Schneier who has just published a summary. Here's my paraphrasing:

    • Patience ...
    • Separation of Roles ...
    • Redundant Roles ...
    • No damage to host ...
    • p2p communications to control nodes ...
    • morphing of standard signatures (DNS, code) ...
    • probing (in military recon terms) past standard defences ...
    • knowledge of the victim's weaknesses ...
    • suppression of the enemy's recon ...

    Bruce Schneier reports that the anti-virus companies are pretty much powerless, and runs through a series of possible defences. I can think of a few too, and I'm sure you can as well. No doubt the world's security experts (cough) will spend a lot of time on this question.

    But, step back. Look at the big picture. We've seen all these things before. Those serious architects in our world (you know who you are) have even built these systems before.

    But: we've never seen the combination of these tactics in an attack.

    This speaks to a new level of sophistication in the enemy. In the past, all the elements were basic. Better than script kiddie, but in that area. What we had was industrialisation of the phishing industry, a few years back, which spoke to an increasing level of capital and management involved.

    Now we have some serious architects involved. This is in line with the great successes of computer science: Unix, the Internet and Skype all achieved this level of sophistication in engineering, with real results. I tried with Ricardo; Lynn & Anne tried with x9.59. Others as well, like James and the Digicash crew. Mojo, Bittorrent and the p2p crowd tried it too.

    So we have a new result: the enemy now has architects as good as our best.

    As a side-issue, well predicted, we can also see the efforts of the less-well-architected groups shown for what they are. Takedown is the best strategy that the security-shy banks have against phishing, and that's pretty much a dead duck against the above enemy. (Banks with security goals have moved to SMS authentication of transactions, sometimes known as two channel, and that will still work; see the sketch below.)
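    For readers who haven't met two channel: the transaction details travel to the user over a second path that the man-in-the-browser cannot touch, so the user confirms what the bank actually received, not what the browser displayed. A minimal sketch in Python, where send_sms is a hypothetical stand-in for a real SMS gateway:

        import hmac
        import secrets

        def send_sms(phone: str, message: str) -> None:
            # Hypothetical stand-in for a real SMS gateway call.
            print(f"SMS to {phone}: {message}")

        def start_transaction(payee: str, amount: str, phone: str) -> str:
            # One-time code, bound to this transaction only.
            code = f"{secrets.randbelow(1_000_000):06d}"
            # Second channel: the user sees the payee and amount the bank
            # actually received, even if the browser lied on screen.
            send_sms(phone, f"Pay {payee} {amount}? Reply code {code} to confirm.")
            return code

        def confirm_transaction(expected: str, entered: str) -> bool:
            # Constant-time compare; any mismatch aborts the transaction.
            return hmac.compare_digest(expected, entered)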

    But that's a mere throwaway for the users. Back to the atomic discussion of architecture. This is an awesome result. In warfare, one of the dictums is, "know yourself and win half your battles. Know your enemy and win 99 of 100 battles."

    For the first time in Internet history, we now have a situation where the enemy knows us, and is equal to our best. Plus, he's got the capital and infrastructure to build the best tools against us.

    Where are we? If the takedown philosophy is any data point, we might know ourselves, but we know little about the enemy. And even if we know ourselves, we don't know our weaknesses, and our strengths are useless.

    What's to be done? Bruce Schneier said:

    Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest.

    As I suggested in last year's roundup, we were approaching this decision. Start re-writing, Microsoft. For the sake of fairness, I'd expect that Linux and Apple will have a smaller version of the same problem, as the 1970s design of Unix is also a bit outdated for this job.

    Posted by iang at 07:07 AM | Comments (3) | TrackBack

    August 28, 2007

    On the downside of the MBA-equipped CSO...

    There is always the downside to any silver bullet. Last month I proposed that the MBA is the silver bullet that the security industry needs, and this caused a little storm of protest.

    Here's the defence and counter-attack. This blog has repeatedly railed against the mostly-worthless courses and certifications that are sold to those "who must have a piece of paper." The MBA also gets that big black mark, as it is, at the end of the day, a piece of paper. Saso said in comments:

    In short, I agree, CISO should have an MBA. For its networking value, not anything else.

    Cynical, but there is an element of wisdom there. MBAs are frequently sold on the benefits of networking. In contrast to Saso, I suggest that the benefits of networking are highly over-rated, especially if you take the cost of the MBA and put it alongside the lost opportunity of other networking avenues. But people indeed flock to pay the entrance price to that club, and if so, maybe it is fair to take their money; better a b-school than SANS? Nothing we can do about the mob.

    Jens suggests that the other more topical courses simply be modified:

    From what I see out there when looking at the arising generation of CSO's the typical education is a university study to get a Master of Science in the field of applied IT security. Doesn't sound too bad until we look into the topics: that's about 80% cryptography, 10% OS security, 5% legal issues and 5% rest.

    Well, that's stuffed up, then. In my experience, I have found I can teach anyone outside the core crypto area everything they need to know about cryptography in around 20 minutes (secret keys, public keys, hashes, what else is there? See the sketch below), so why are budding CSOs losing 80% on crypto? Jens suggests reducing it by 10%; I would question why it should ever rise above 5%.
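    To put my money where my mouth is, here is essentially the whole vocabulary in a dozen lines of Python: a hash, a secret-key MAC, and a public-key signature. hashlib and hmac are standard library; Ed25519 comes from the cryptography package. The message is, of course, illustrative.

        import hashlib
        import hmac
        import os
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        msg = b"pay the bearer 100 units"

        # Hashes: a fingerprint of the message, no key involved.
        digest = hashlib.sha256(msg).hexdigest()

        # Secret keys: both sides share a key; the MAC proves integrity.
        shared_key = os.urandom(32)
        mac = hmac.new(shared_key, msg, hashlib.sha256).hexdigest()

        # Public keys: sign with the private half, verify with the public half.
        signer = Ed25519PrivateKey.generate()
        signature = signer.sign(msg)
        signer.public_key().verify(signature, msg)  # raises InvalidSignature if forged

    That's the 20 minutes, give or take the war stories. Everything else is engineering.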

    Does the MBA suffer from similar internal imbalance? I say not, for one reason: it is subject to open competition. There is always lots of debate about whether one school is more balanced than another, and there is a lot of open experimentation in this, as all the schools look at each other's developments in curricula. There are all sorts of variations tuned to different ideas.

    One criticism that was particularly noticeable in mine was that we only spent around 2 days on negotiation, and more than that on relatively worthless IT cases. That may be just me, but it is worth noting that b-schools will continue to improve (whereas there is no noticeable improvement from the security side). Adam Shostack spots Chris Hoff, who spots HBR on a (non-real) breach case:

    I read the Harvard Business Review frequently and find that the quality of writing and insight it provides is excellent. This month's (September 2007) edition is no exception as it features a timely data breach case study written by Eric McNulty titled "Boss, I think Someone Stole Our Customer Data."

    The format of the HBR case studies are well framed because they ultimately ask you, the reader, to conclude what you would do in the situation and provide many -- often diametrically opposed -- opinions from industry experts.
    ...
    What I liked about the article are the classic quote gems that highlight the absolute temporal absurdity of PCI compliance and the false sense of security it provides to the management of companies -- especially in response to a breach.

    What then is Harvard suggesting that is so radical? The case does no more than document a story about a breach and show how management wakes up to the internal failure. Here's a tiny snippet from Chris's larger selection:

    Sergei reported finding a hole—a disabled firewall that was supposed to be part of the wireless inventory-control system, which used real-time data from each transaction to trigger replenishment from the distribution center and automate reorders from suppliers.

    “How did the firewall get down in the first place?” Laurie snapped.

    “Impossible to say,” said Sergei resolutely. “It could have been deliberate or accidental. The system is relatively new, so we’ve had things turned off and on at various times as we’ve worked out the bugs. It was crashing a lot for a while. Firewalls can often be problematic.”

    Chris Hoff suggests that the managers go through classic disaster-psychological trauma patterns, but instead I see it as more evidence that the CISO needs an MBA, because the technical and security departments spun out of corporate orbit so long ago nobody can navigate them. Chris, think of it this way: the MBAs are coming to you, and the next generation of them will be able to avoid the grief phase, because of the work done in b-school.

    Lynn suggests that it isn't just security, it isn't just CSOs, and is more of a blight than a scratch:

    note that there have been efforts that aren't particularly CSO-related ... just techies ... in relatively the same time frame as the disastrous card reader deployments ... there were also some magnificent other disastrous security attempts in portions of the financial market segment.

    My thesis is that the CSO needs to communicate upwards, downwards, sideways, and around corners. Not only external but internal, so domination of both sides is needed. As Lynn suggests, it is granted that if you have a bunch of people without leadership, they'll suggest that smart cards are the secure answer to everything from Disney to Terrorism. And they'll be believed.

    The question posed by Lynn is a simple one: why do the techies not see it?

    The answer I saw in banking and smart card monies, to continue in Lynn's context, was two-fold. Firstly, nobody there was counting the costs. Everyone in the smart card industry was focussed on how cheap the smart card was, but not the full costs. Everybody believed the salesmen. Nobody in the banks thought to ask what the costs of the readers were (around 10-100 times that of the card itself...) or the other infrastructure needed, but banks aren't noted for their wisdom, which brings us to the second point.

    Secondly, it was pretty clear that although the bank knew a little bit about transactions, they knew next to nothing about what happened outside their branch doors. Getting into smart card money meant they were now into retail as opposed to transactions. In business terms, think of this as similar to a supermarket becoming a bank, or vice versa. That's too high a price to pay for the supposed security that is entailed in the smart card. Although Walmart will look at this question differently, banks apparently don't have that ability.

    It is impossible to predict whether your average MBA would spot these things, but I will say this: They would be pass/fails in my course, and there would not be anything else on the planet that the boss could do to spot them. Which you can't say for the combined other certifications, which would apparently certify your CSO to spot the difference between 128 bit and 1024 bit encryption ... but sod all of importance.

    Posted by iang at 09:16 AM | Comments (1) | TrackBack

    Threatwatch: US-SSNs melt for $50 in MacArthur Park

    So, having read the HBR case I just wrote about ("nobody else reads the original material they quote, why should I?"), I discovered this numbers gem on the very last page:

    Perhaps the most worrying indicator is that the criminal industry for information is growing. I can go to MacArthur Park in Los Angeles any day of the week and get $50 in exchange for a name, social security number, and date of birth. If I bring a longer list of names and details, I walk away a wealthy man.

    $50 for ID sets seems very high in these days of phishing, but that may be the price for hand-to-hand delivery and a guarantee of quality. Either way, a data point.

    What also strikes me is the mention of a physical marketplace. Definitely a novelty! The only thing I know about that place is that it melts in the dark, but it could be that MacArthur Park is simply too large to shut down.

    Posted by iang at 06:20 AM | Comments (0) | TrackBack

    August 27, 2007

    Learning from Iraq and Failure

    Financial Cryptographers are interested in war because it is one of the few sectors where we face an aggressive attacker (as opposed to a statistical failure model). For this reason, current affairs like Iraq and Afghanistan are interesting, aside from their political aspects (September is crunch time!).

    John Robb points to an interview with Lt. Col. John Nagl on how the New Turks in Iraq (more formally known as General Petraeus and his team) have written a new manual for the theater, known as the Counterinsurgency Field Manual.

    We last had a counter guerrilla manual in 1987 but as an army we really avoided counterinsurgency in the wake of Vietnam because we didn't want to fight that kind of war again. Unfortunately the enemy has a vote. And our very conventional superiority in war-fighting is driving our enemy to fight us as insurgents and as guerrillas rather than the kind of war we are most prepared to fight, which is conventional tank-on-tank type of fighting.

    ...

    You still have to be able to do the fighting. A friend of mine, when he found out I was writing [the book], wrote to me from Iraq and said

    "remember, Nagl, counterinsurgency is not just the thinking man's war ... It's the graduate level of war."

    Because you still have to be able to do the war fighting stuff. When I was in [Iraq] I called in artillery strikes and air strikes, did the fighting stuff. But I also spent a lot of time meeting with local political leaders, establishing local government, working on economic development.

    You really have to span the whole spectrum of human behavior. We had cultural anthropologists helping on the book, economists, information operation specialists. It's a very difficult type of war, it's a thinking person's kind of war. And it's a kind of war we are learning and adapting and getting better at fighting during the course of the wars in Iraq and Afghanistan.

    I copied those parts in from the interview because they stressed what we see in FC, but check out the interview as it is refreshing. Here are the parallels:

    • The Gung-ho warriors enter the field.
    • And are defeated.
    • Institutions are not able to respond to the new threats until they have shown themselves incapable of forcing old threat models on the enemy.
    • The battle is won, or at least fought, with brains, not brawn.
    • Still, the "warfighting" or general security stuff never goes away.
    • When we are dealing with an asymmetric or "new" attack, multiple disciplines enter into the discussion to analyse the balance between fighting and other strategies.
    • The new strategy emerges, but only after the losses to both our ground forces and our old generals.

    The parallels with today's Internet situation seem pretty clear. How long do we go on fighting the attackers before the New Turks come in and address the battle from a holistic, systemic viewpoint?

    Posted by iang at 07:11 PM | Comments (0) | TrackBack

    August 23, 2007

    Threatwatch: Numbers on phishing, who's to blame, the unbearable loneliness of 4%

    Jonath over at Mozilla takes up the flame and publishes lots of stats on the current state of SSL, phishing and other defences. Headline issues:

    • Number of SSL sites: 600,000 from Netcraft
    • Cost of phishing to US: $2.1 billion.
    • Number of expired certs: 18%
    • Number of users who blame a glitch in the browser for popups: 4%

    I hope he keeps it up, as it will save this blog from having to do it, as it has for many years :) The connection between SSL and phishing can't be overstressed, and it's welcome to see Mozilla take up that case. (Did I forget to mention TLS/SNI in Apache and Microsoft? Shame on me....)

    Jonath concludes with this odd remark:

    If I may be permitted one iota of conclusion-drawing from this otherwise narrative-free post, I would submit this: our users, though they may be confused, have an almost shocking confidence in their browsers. We owe it to them to maintain and improve upon that, but we should take some solace from the fact that the sites which play fast and loose with security, not the browsers that act as messengers of that fact, really are the ones that catch the blame.

    You, like me, may have read that too quickly, and thought that he suggests that the web sites are to blame, with their expired certs, fast and loose security, etc.

    But he didn't say that; he simply said those are the ones that *are* blamed. And that's true: there are lots and lots of warnings out there, like campaigns to drop SSL v2 and stop sites doing phishing training and other things ... The sites certainly catch the blame, that's definitely true.

    But, who really *deserves* the blame? According to the last table in Jonath's post, the users don't really blame the site as much as might be expected: 24%. More are unsure and thus wise, I say: 32%. And yet more imagine an actual attack taking place: 40%.

    That leaves 4% who suspect a "glitch" in the browser itself. Surely one lonely little group there, I wonder if they misunderstood what a "glitch" is... What is a "glitch," anyway, and how did it get into their browsers?

    Posted by iang at 09:06 AM | Comments (0) | TrackBack

    August 16, 2007

    DNS Rebinding, and the drumroll of SHAME for MICROSOFT and APACHE

    Tonight, we have bad news and worse news. The bad news is that the node is yet again the scene of imminent collapse of the Internet as we know it. The worse news is that the fix that could have fixed it ... is still not deployed. The no-news is that we warned about this years ago. It's still not done.

    Dan Kaminsky, a hacker of some infamy and humour, gave a son-of-black-hat talk on DNS Rebinding. What this means is that when you go to a site that has a malicious applet or Flash or something, your node (that's your PC, desktop, laptop, etc.) becomes controlled, and is used for attacks on other nodes.

    Now, I don't fully understand the deal ... and details were difficult to follow ... but it is something to do with weird things with DNS that allow a malicious site to download bad code into your applet/flash/javascript weakened browser. Then, that code literally takes over and turns your node -- your PC -- into an internal attack-dog under someone else's whistle. Dan uses the example of the printer down the hall, but in finance circles this is the internal derivatives accounting system down the hall, already smoking from too much recent attention.

    Yes, Firefox and IE are both victims.

    The DNS details were scary and voluminous but rest on a basically sound claim: DNS isn't secure, and that we know. It is possible to hand the requestor all sorts of interesting information, and that interesting information can then be used to trick through firewalls, IDS, etc, and compromise the machine, because it is "authoritative" and it comes from the right machine, your machine, your soon-to-be-owned machine.
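    As best I understand it, the core trick is small enough to sketch. The attacker's resolver answers the first lookup with the attacker's own server and a tiny TTL; moments later the already-loaded script looks up the same hostname and gets an internal address. Same origin as far as the browser is concerned, different owner. A conceptual Python sketch only, with no real DNS wire format, and the addresses are illustrative:

        # Conceptual sketch of a rebinding resolver's decision logic.
        seen = set()

        def resolve(hostname: str, client_ip: str) -> dict:
            if (hostname, client_ip) not in seen:
                # First query: serve the attacker's page under this hostname,
                # with a TTL so short the browser must soon ask again.
                seen.add((hostname, client_ip))
                return {"A": "203.0.113.66", "TTL": 1}  # attacker's server
            # Later queries: rebind the same hostname to an internal address.
            # The script's requests now go behind the victim's firewall.
            return {"A": "10.0.0.23", "TTL": 1}  # e.g. the printer down the hall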

    Curiously, the object of Dan's project is much more grandiose than that. He's looking to do a bunch of weird measurements on TCP, using this DNS rebinding to bootstrap in and take over the TCP connection. Yeah, MITM, if you spotted the clue.

    (I'll repeat that, Dan claims to be doing MITMs on TCP!)

    To summarise the raw claims (again, given my limited understanding):

    • attack bypasses all firewalls,
    • can be used to own your router,
    • bypasses IDS
    • your browser needs to strike malicious Flash, Java applets or javascript in some variants (needs sockets; Firefox delivers sockets through javascript??)
    • Works on the 2 major browsers
    • A simplified version has been seen in the wild, by bad guys
    • No DNS fix, but there are some short term patches?

    Fixes: No easy fixes, just temporary patches. DNS is operating as normal, it never was secure anyway, and the modes and tricks used are essential for a whole lot of stuff. Likewise, Flash etc., which seems to have no more security than Windows did in 2002, isn't going to be fixed fully any time soon. (Dan mentioned he is waiting on Adobe to fix something before full disclosure, but that runs out as someone else is about to publish the same results.)

    • The systemic fix: There is only one mode, and it's secure. Ok, I just had to add that in...
    • The practical fix: Go TLS, everywhere.
    • Timeframe: 3 years.
    • Excuse: read on...

    Here's the old worse news: Dan stated, as near as I can recall it:

    "TLS solves the problem completely, but TLS does not scale to the net, because it does not indicate the host name. This puts more of an indictment on the standards process and on TLS than anything else, we've had TLS for years now, and we still can't share virtual hosts on TLS."

    Yes, it's our old friend TLS/SNI (ServerNameIndication), a leprechaun-like extension that converts TLS from a marginal marketing differentiator at ISPs into a generally deployable solution.
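    What the extension does is simple: the client names the host it wants inside the TLS hello, so the server can choose the matching certificate before the handshake completes, which is exactly what name-based virtual hosting needs. A sketch of the server side using the SNI callback in modern Python's ssl module; hostnames and contexts here are illustrative:

        import ssl

        # One TLS context, with its own certificate, per virtual host.
        vhosts = {
            "alice.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
            "bob.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
        }
        # In real use, each context loads its own keypair:
        #   vhosts["alice.example"].load_cert_chain("alice.crt", "alice.key")

        listener_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

        def pick_certificate(sock, server_name, original_ctx):
            # server_name arrives in the client hello, before any certificate
            # is sent, so we can swap in the right virtual host's context.
            if server_name in vhosts:
                sock.context = vhosts[server_name]

        listener_ctx.set_servername_callback(pick_certificate)

    One callback, one certificate per name, and shared hosting works. That is the whole leprechaun.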

    SNI is available in Firefox, 2.0 and later. Thanks to Mozilla; they actually started a project on this called SSL v2 MUST DIE because of its ability to help in phishing (same logic, same fix, same sad sorry story). It is fixed in IE7, but only on Vista, so that's short thanks to Microsoft. Opera's got it, and they might have been the first.

    Yet...

    TLS/SNI is not available in Apache's httpd nor in Microsoft's IIS.

    Indeed, I tried last year to contact the httpd team and plead for it. Nothing; it's as if they don't exist. Mozilla were even prepared to pay these guys to fly & fix, but we couldn't find anyone to talk to. Worse, they have had the code for yonks. The code builds, at least the gnutls version. The code works.

    I got proof that Microsoft's team exists, and that they also have no plans to secure the Internet this year.

    Shame, the very shame! Apache's httpd team and Microsoft's IIS team are now going to watch 3 years of pain and suffering, for the want of a little fix that is so old that nobody can recall when it was added to the standard.

    (OK, that's the last I heard. There might be updates in the shame. You understand that it isn't my day job to get you to save the net. It is however your day job, Microsoft team, and night job, Apache team, and you can point out current progress in the comments or on your blog or in the very code, itself. Please, we beg you. Save our net.)

    Addendum: a rebinding patch! and some more from the department of snails.

    Posted by iang at 06:59 PM | Comments (5) | TrackBack

    August 10, 2007

    Susan Landau on threats to the USA: don't forget Pogo

    The Washington Post, in the person of Susan Landau, lays out in clearer terms where USA cyber-defence is heading:

    The immediate problem is fiber optics. Until recently, telecommunication signals came through the air. The NSA used satellites and antennas to pick up conversations of foreigners talking to other foreigners. Modern communications, however, use fiber; since conversations don't go through the air, the NSA wants to access communications at land-based switches.

    Because communications from around the world often go through the United States, the government can still get access to much of the information it seeks. But wiretapping within the United States has required a FISA search warrant, and the NSA apparently found using FISA too time-consuming, even though emergency access was permitted as long as a warrant was applied for and granted within 72 hours of surveillance.

    Avoiding warrants for these cases sounds simple, though potentially invasive of Americans' civil liberties. Most calls outside the country involve foreigners talking to foreigners. Most communications within the country are constitutionally protected -- U.S. "persons" talking to U.S. "persons." To avoid wiretapping every communication, NSA will need to build massive automatic surveillance capabilities into telephone switches. Here things get tricky: Once such infrastructure is in place, others could use it to intercept communications.

    Grant the NSA what it wants, and within 10 years the United States will be vulnerable to attacks from hackers across the globe, as well as the militaries of China, Russia and other nations.

    Landau chooses the evil foreign hacker as her bogeyman. This is understandable if we recall who her audience is. The threat however is closer to home; to paraphrase Pogo, Americans have not yet met their enemy, but he is you.

    A basic assumption of security was that the NSA was historically no threat to any person, only to nations. Intel info was closely guarded, and breaches of this info were national security breaches; we the people were far better protected by the battle against foreign spies than anything else. No Chinese wall, then, more a great wall-of-China around secret tracts of Maryland.

    Now that wall-of-China has been torn down and replaced by trade-grade Chinese walls. Breaching Chinese walls simply requires the right contact, the right excuse, the right story. Since 9/11, intel info and trade info are one and the same, and it is now reasonable, expected even, that hundreds of thousands of new readers of data can trawl through criminal databases, intel summaries, background reports and the like.

    For illumination, see the SWIFT battle. The problem wasn't that the NSA was reading the SWIFT traffic; it had been doing that for decades. The problem was the burgeoning spread of the data, as highlighted by events: the Department of Justice now felt it reasonable, indeed required, to get in on the act. Pundits will argue that it was a governed programme, but its secret governance was a setup for more breaches.

    It wasn't the first, nor the second, and it wasn't going to be the last. If we had to choose between the evil chinese hacker and the enemy-who-is-us, I for one would take the evil chinese hacker every time. We can deal with the external enemy, we know how. But the internal enemy, the enemy who is us, he is the destruction of civil society, and there is no army to fight that threat.

    Posted by iang at 10:56 AM | Comments (1) | TrackBack

    August 09, 2007

    Mozilla gets proactive about browser security?

    This article reports that Mozilla are now proactive on security. This is good news. In the past, their efforts could be described as limited to bug patching and the like. Reactive security, in other words, which is what their fuzzer is:

    Mozilla has been using an open-source application security testing tool, known as a fuzzer, for JavaScript to detect and fix dozens of security bugs in Firefox, Mozilla director of ecosystem development Window Snyder said Thursday at the Black Hat USA 2007 conference in Las Vegas. The JavaScript fuzzer found 280 bugs in Firefox, 27 of which were exploitable.

    Now Mozilla is making that JavaScript fuzzer available to anyone who wants to use it, and it'll be followed later this year by fuzzers for the HTTP and FTP protocols.

    "The FTP and HTTP protocol fuzzers act like fake servers that send bad data to sites," Snyder told InformationWeek.The HTTP fuzzer emulates an HTTP server to test how an HTTP client handles unexpected input. The FTP fuzzer likewise tests how an FTP client handles unexpected data.

    Now, however, there is at least one person employed directly to think about security in a proactive sense:

    Expect Firefox 3 to include new phishing and malware protection, extended validation certificates, improved password management, and a security user interface.

    One could criticise this all as too little, too late, too "interests-driven". But changing cultures to think, really think, about security is hard. It doesn't happen overnight; it takes years at least. That Microsoft has been working since 2003 to make this change, and the evidence that their product is secure is not here yet, shows how hard it is.

    Posted by iang at 08:05 AM | Comments (0) | TrackBack

    Shock of new Security Advice: "Consider a Mac!"

    From the where did you read it first? department here comes an interesting claim:

    Beyond obvious tips like activating firewalls, shutting computers down when not in use, and exercising caution when downloading software or using public computers, Consumer Reports offered one safety tip that's sure to inflame online passions: Consider a Mac.

    "Although Mac owners face the same problems with spam and phishing as Windows users, they have far less to fear from viruses and spyware," said Consumer Reports.

    Spot the difference between us and them? Consumer Reports is not in the computing industry. What this suggests about who is actually helpful about security will haunt computing psychologists for years to come.

    For amusement, count how many security experts will pounce on the ready excuse:

    "Because Macs are less prevalent than Windows-based machines, online criminals get less of a return on their investment when targeting them."

    Of course if that's true, it becomes less so with every Mac bought.

    Can you say "monoculture!?"



    The report itself from Consumer Reports seems to be for subscribers only. For our ThreatWatch series, the article has many juicy numbers:

    U.S. consumers lost $7 billion over the last two years to viruses, spyware, and phishing schemes, according to Consumer Reports' latest State of the Net survey. The survey, based on a national sample of 2,000 U.S. households with Internet access, suggests that consumers face a 25% chance of being victimized online, which represents a slight decline from last year.

    Computer virus infections, reported by 38% of respondents, held steady since last year, which Consumer Reports considers to be a positive sign given the increasing sophistication of virus attacks. Thirty-four percent of respondents' computers succumbed to spyware in the past six months. While this represents a slight decline, according to Consumer Reports, the odds of a spyware infection remain 1 in 3 and the odds of suffering serious damage from spyware are 1 in 11.

    Phishing attacks remained flat, duping some 8% of survey respondents at a median cost of $200 per incident. And 650,000 consumers paid for a product or service advertised through spam in the month before the survey, thereby seeding next year's spam crop.

    Perversely, insecurity means money for computer makers: computer viruses and spyware turn out to be significant drivers of computer sales. According to the study, virus infections drove about 1.8 million households to replace their computers over the past two years. And over the past six months, spyware infestations prompted about 850,000 households to replace their computers.

    Posted by iang at 07:36 AM | Comments (0) | TrackBack

    Verisign reminder of what data security really means

    From the 'poignant reminder' department, Verisign lost a laptop with employee data on it.

    The employee, who was not identified, reported to VeriSign and to local police in Sunnyvale, Calif. that she had left her laptop in her car and had parked her car in her garage on Thursday, July 12. When she went out the next morning, she found that her car had been broken into and the laptop had been stolen.

    Possibly a targeted theft?

    The thing is, this can happen to anyone. Including Verisign, or any CA, or any security company. This can happen to you, and probably will, regardless of your policies (which in this case include no employee data on laptops, and encrypted drives).

    The message to take away from this is not that Verisign is this week's silly sausage, or that their internal security is lax. This can and will happen to anyone. Instead, today's message is that there is a gap between security offerings and security needs so large that crooks are driving trucks through it every day, and have been for 4 years.

    I estimated a billion dollars in 2004 or so from phishing alone, and now conservative estimates in another post today say it is around 3bn per year. (I say conservative because Lynn posts other numbers that are way higher.) That truck is at least 10 million dollars a day!

    Still not convinced? Consider this mistake that the company made:

    In its employee letter, VeriSign offered a year of free credit monitoring from Equifax for any affected individual, and recommended placing fraud alerts on credit accounts to watch for signs of fraud or identity theft.

    If Verisign can offer a loss-leader zero-margin recovery product to the victims of their own failure, what hope has the rest of the computing industry?

    Posted by iang at 07:23 AM | Comments (0) | TrackBack