Stefan posted a bunch of materials on a phone based ecash system.
On Identity theft, America's cartoonists are striking back. Click here and then send me your credit card number....
On the HCI thread of how users view web security, Chris points out that "Simson Garfinkel's dissertation is worth looking at in this context." This relates to the earlier two papers on what users think on web security.
Scott reports ``Visa International has published a white paper titled "Financial Flows and Supply Chain Efficiency" (sorry, in PDF) authored by Professor Warren H. Hausman of Stanford University.'' It's interesting if somewhat self-serving, and feeds into the whole "the message is the payment" thread.
Stefan via Adam pointed me to a new blog on risks called Not Bad For a Cubicle. I shall pretend to know what that means, especially as the blogger in question claims knowledge of FC ... but meanwhile, the author takes issue with the persistent but poor usage of the word security, where 'risks' should be preferred. This makes a lot of sense. Maybe I should change all uses of the word over?
Because it's more secure becomes ... because it's less risky! Nice. But, wait! That would mean I'd have to change the name of my new paper over to Pareto-risk-free ... Hmm, let's think about this some more.
A security model generally includes the human in the loop; totally automated security models are generally disruptable. As we move into our new era of persistent attacks on web browsers, research on human failure is very useful. Ping points to two papers done around 2001 on what web browsing security means to users.
Users' Conceptions of Web Security explored how users treat the browser's security display. Unsurprisingly, non-technical users showed poor recognition of spoof pages that displayed a false padlock/key icon. Perhaps more surprisingly, users derived a different view of security: they interpreted security as meaning it was safe to put their information in. Worse, they tended to derive this sense of safety as much from the presence of a form as from the padlock:
"That at least has the indication of a secure connection; I mean, it's obviously asking for a social security number and a password."
Which is clearly wrong. But consider the distance between what is correct - the web security model protects the information on the wire - and what it is that is relevant to the user. For the user, they want to keep their information safe from harm, any harm, and they will assume that any signal sent will be to that end.
But as the threat to information is almost entirely at the end node, and not on the wire, users are faced with an odd choice: interpret the signal according to the harms they know about, which is wrong, or ignore the signal because it is irrelevant to those harms, which is unexpected.
It is therefore understandable that users misinterpret the signals they are given.
The second paper, Users' Conceptions of Risks and Harms on the Web, is also worth reading, but it was conducted in a period of relative peace. It would be very interesting to see the same study conducted again, now that we are in a state of perpetual attack.
In the ongoing thread of Adam's question - how do we signal good security - it's important to also list signals of bad security. CoCo writes that Tegam, a French anti-virus maker, has secured a conviction against a security researcher for reverse engineering and publishing weaknesses.
This seems to be a signal that Tegam has bad security. If they had good security, then why would they care what a security researcher said? They could just show him to be wrong. Or fix it.
There are two other possibilities. Firstly, Tegam has bad security, and they know it. This is the most likely, and their aggressive focus on preserving the revenue base would perhaps lead them to prefer suppression of future research into the product. CoCo points to a claim that Tegam accused the researcher of being a terrorist in a French advertisement, which indicates an attempt to disguise the suppression and validate it in the minds of their buying public. The advertisement is in French, and Google translates it into quixotic English. Tegam responds that this article makes their case, but comments by flacks do no such thing. However, the response makes for interesting reading and may balance their case.
Alternatively (secondly), they just don't know. And I don't think we need to prove that "don't know" is equivalent to "insecure."
CoCo also comments on how the chilling effect will raise insecurity in general. But if enough companies decline to pursue avenues of prosecution, this might balance out in our favour: we might then end up with a new signal of those that prosecute and those that do not.
Texas Instruments recently signalled a desire for good security in the RFID breach, as well as an understanding of the risks to the user. Tegam has signalled the reverse. Are they saying that their product has known weaknesses, and they wish to hide these from the users? You be the judge, and while you're at it, ponder which side of this fence your own company sits on.
I'm jacking into net in some random office in downtown Vienna, and I'm introduced to the payment-system-in-a-jar. Paper and tokens and IOUs thrown in a big vase serves to manage coordination on an office wide scale of coffee, beer and juice. For my talk on community currencies I thought this would make a great example of a payment system on a local basis, so I lifted the entire thing, and carried the 40cm high jar, money, tokens, and paper included to the presentation.
This payment system (as I presented) can be stolen. It can be broken. Nice good Internet ones don't have that problem. It was a nice example, it worked and my audience enjoyed the huge jar of purloined coffee money. But as I walked back to the office I wondered whether they'd mind me purloining their payment system.
I needn't have worried. There was a party in progress, the local technical community was in a happy mood. As I pulled the huge jar out of my laptop bag, luckily unbroken, there were smiles and laughter, and I had to explain what I wanted it for.
And then, as I was explaining, I detected a complete lack of interest... with Austrian lingo and one word sneaking through repeatedly: NSA. After some confusion, I found out that I was at the post-success party of the group that had just data mined the NSA.
How this happened was gathered in scattered conversations slipped between explanations of payment systems and crypto cert systems. People had signed up for a semi-secret mailing list, and when the archives were put online, they'd been downloaded. Now they're up online in some fashion, and there is discussion on what to do next. The next phases were explained ... but in some sense this was subject to change, so I'll skip that part.
It looks like the NSA made a few mistakes in migrating internal forums to external availability. That's not a bad thing in itself, but they left a lot of internal stuff in the archives. Also, it looks to me like the stories that are being discussed represent a really bad use of secrecy - the sort of political manoeuvring that was discovered on the lists should not have been secret, but subject to public review. It is, after all, the money of the taxpayer that is being abused in this debate.
The one story I did hear was a bureaucratic fight among the FBI, NSA and the Brits over who gets to set the biometrics standard. According to the mail list, the FBI is based on fingerprints so they want that. The NSA loves voice recognition, so that's their baby. But the Brits are all hot on iris recognition and they have the world wide patent.
Good one guys - this is the sort of debate that really needs to be conducted in the open, not under secrecy. We follow with interest, and now, I must go use the local payment system again to mine some more beers.
Triage is one thing, security is another. Last week's ground-shifting news was widely ignored in the press. Scanning a bunch of links, the closest I found to any acknowledgement of what Microsoft announced is this:
In announcing the plan, Gates acknowledged something that many outside the company had been arguing for some time--that the browser itself has become a security risk. "Browsing is definitely a point of vulnerability," Gates said.
Yet there was no discussion of what that actually meant. Still, to his sole credit, author Steven Musil admitted he didn't follow what Microsoft were up to. The rest of the media speculated on compatibility, Firefox as a competitor, and Microsoft's pay-me-don't-pay-me plans for anti-virus services, which I guess is easier to understand as there are competitors who can explain how they're not scared.
So what does this mean? Microsoft has zero, zip, nada credibility in security.
...earlier this week the chairman of the World's Most Important Software Company looked an auditorium full of IT security professionals in the eye and solemnly assured them that "security is the most important thing we're doing."
And this time he really means it.
That, of course, is the problem: IT pros have heard this from Bill Gates and Microsoft many times before ...
Whatever they say is not only discounted, it's even reversed in the minds of the press. Even when they get one right, it is assumed there must have been another reason! The above article goes on to say:
Indeed, it's no accident that Microsoft is mounting another security PR blitz now, for the company is trying to reverse the steady loss of IE's browser market share to Mozilla's Firefox 1.0.
Microsoft is now the proud owner of a negative reputation in security.
Which leads to the following strategy: actions, not words. Every word said from now until the problem is solved will just generate wheel-spinning for no productivity, at a minimum (and notwithstanding Gartner's need to sell those same words on). The only way that Microsoft can change their reputation for insecurity is to actually change their product to be secure. And then be patient.
Microsoft should shut up and do some security. Which isn't entirely impossible. If it is a browser v. browser question, it is not as if the competition has an insurmountable lead in security. Yes, Firefox has a reputation for security, but showing that objectively is difficult: its brand is indistinguishable from "hasn't got a large enough market share to be worth attacking as yet."
"This is a work in progress," Wilcox says. "The best thing for Microsoft to do is simply not talk about what it's going to do with the browser."
IEEE Security & Privacy magazine has a special on _Economics of Information Security_ this month. Best bet is to simply read the editor's intro.
There are two on economics of disclosure, a theme touched upon recently:
Two I've selected for later reading are:
This is because they speak to a current theme - how to model information in attacks.
Gervase Markham has written "a plan for scams," a series of steps for different module owners to start defending. First up, the browser, and the list will be fairly agreeable to FCers: Make everything SSL, create a history of access by SSL, notify when on a new site! I like the addition of a heuristics bar (note that Thunderbird already does this).
Meanwhile, Mozilla Foundation has decided to pull IDNs - the international domain names that were victimised by the Shmoo exploit. How they reached this decision wasn't clear, as it was taken on insider's lists, and minutes aren't released (I was informed). But Gervase announced the decision on his blog and the security group, and the responses ran hot.
I don't care about IDNs - that's just me - but apparently some do. Axel points to Paul Hoffman, an author of IDN, who pointed out that he had balanced solutions for IDN spoofing. Like him, I'm more interested in the process, and I'm also thinking of the big security risks to come, and the meta-risks. IDN is a storm in a teacup, as it poses no real risk beyond what we already have (and no, the digits 0,1 in domains have not been turned off).
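For the curious, the class of attack at issue is the homograph: an IDN label that mixes visually confusable characters from different Unicode scripts. A common anti-spoofing heuristic is simply to flag mixed-script labels. Here is a minimal sketch of that idea; the function names are mine, and a real implementation would use the full Unicode script data rather than character-name prefixes:

```python
import unicodedata

def script_of(ch):
    # Rough script classifier via Unicode character names; real code
    # would consult the Unicode Scripts.txt property instead.
    name = unicodedata.name(ch, "")
    for script in ("CYRILLIC", "GREEK", "LATIN"):
        if name.startswith(script):
            return script
    return "OTHER"

def looks_like_homograph(label):
    """Flag a domain label that mixes confusable scripts."""
    scripts = {script_of(c) for c in label if c.isalpha()}
    scripts.discard("OTHER")
    return len(scripts) > 1

# 'paypal' with a Cyrillic 'a' (U+0430) smuggled in among Latin letters
print(looks_like_homograph("p\u0430ypal"))  # True
print(looks_like_homograph("paypal"))       # False
```

This is roughly the shape of the balance Hoffman argued for: block the confusable mixtures rather than throwing out IDN wholesale.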
Referring this back to Frank Hecker's essay on the foundation of a disclosure policy does not help, because the disclosure was already done in this case. But at the end he talks about how disclosure arguments fell into three classes:
Literacy: “What are the words?” Numeracy: “What are the numbers?” Ecolacy: “And then what?”
"To that end [Frank suggests] to those studying the “economics of disclosure” that we also have to study the “politics of disclosure” and the “ecology of disclosure” as well."
Food for thought! On a final note, a new development has occurred in certs: a CA in Europe has issued certs with the critical bit set. What this means is that software lacking the code to handle that extension is (nominally) meant to reject the cert. And Mozilla's crypto module follows the letter of the RFC in this.
IE and Opera do not, it seems (see #17 in bugzilla), and I'd have to say, they have good arguments for rejecting the RFC and not the cert. Too long to go into tonight, but think of the crit ("critical bit") as an option on a future attack. Also, think of the game play that can go on. We shall see, and coincidentally, this leads straight back into phishing, because it is asking the browser to ... display stuff about the cert to the user!
What stuff? In this case, the value of the liability in Euros. Now, you can't get more FC than that - it drags in about 6 layers of our stack, which leaves me with a real problem allocating a category to this post!
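The rule Mozilla is following to the letter (RFC 3280's extension processing) can be sketched in a few lines. The function and the unrecognised OID below are illustrative only, not Mozilla's actual code:

```python
# OIDs this (hypothetical) implementation knows how to process
KNOWN_OIDS = {
    "2.5.29.15",  # keyUsage
    "2.5.29.19",  # basicConstraints
}

def check_extensions(extensions):
    """RFC 3280-style rule: a cert carrying a critical extension the
    implementation does not recognise must be rejected outright.
    Per the post, IE and Opera skip this rule and accept anyway."""
    for oid, critical in extensions:
        if critical and oid not in KNOWN_OIDS:
            return False  # strict, Mozilla-style rejection
    return True

# A cert carrying an unrecognised critical extension (made-up OID)
exts = [("2.5.29.19", True), ("1.2.3.4.5", True)]
print(check_extensions(exts))  # False
```

The game-play possibility is visible right there: whoever controls the set of "known" OIDs controls which certs survive, which is why the crit reads like an option on a future attack.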
The Linux community just set up a new way to report security bugs. In the debate, it transpires that Linus Torvalds laid down a firm position of "no delay on fix disclosures." Having had a look at some security systems lately, I'm inclined to agree. It may be that delays make sense to let vendors catch up. That's the reason most often quoted, and it makes complete logical sense. But that's not what happens.
The process then gets hijacked for other agendas, and security gets obscured. Thinking about it, I'm now viewing the notion of security being discussed behind closed doors with some suspicion; it's just not clear how security by obscure committees divides its time between "ignoring the squeaky wheels" and "creating space to fix the bugs." I've sent messages on important security issues to several closed security groups in the last 6 months, and universally they've been ignored.
So, zero days' delay on disclosure is my current benchmark. Don't like it? Use a closed product. Especially considering that 90% of the actual implementations out there never get patched in any useful time anyway...
To paraphrase Adi, "[security] is bypassed not attacked."
How to address Internet security in an open source world is a simmering topic. Frank Hecker has documented his view of the Mozilla Full Disclosure debate that led to their current security policy. He makes the point that the parties are different: with open source there are many vendors, which complicates the notion of disclosure. Further, the bug fixers can be anyone, following the many-eyeballs theory. This then devolves into a search for a policy where anyone can be an insider; Mozilla's current policy is that result, and we are very fortunate to have the story recorded.
Meanwhile, Adam points at an attempt by Microsoft to slow down open disclosure of exploits. In this case they are attacking the release of source code for exploits; Adam responds that this is perhaps more in the interests of defenders than attackers. My view: it looks less dramatic if treated as gameplay by Microsoft. The short-term end goal is to get the patches out there, and Microsoft have succumbed to the easy blame opportunity to create a sense of urgency.
Adam found a "top 18 security papers" list. My suggestion is to add Adi Shamir's recent Turing Award Lecture to the list. I recorded the important slides here, and at least once a week I find myself copying one or other of the components from it for posting somewhere on the net. And I'm writing an entire paper on just one line...
Adam reports the list is already up to 28; perhaps what the list keeper needs is some sort of market-determined mechanism. Perhaps every security blog could trackback to their top 3 selections, and thus create a voting circle? (Hmmm, scanning the list, I see some that I wouldn't vote for, so how about some negative votes as well?)
To be fair, I'm not sure I've read any of them, which either augurs badly for me or badly for the list :-) Which brings up another point. If someone is going to promote some paper or other, far*&%$ake put the URL of the HTML up there... If it ain't in HTML it can't reach an audience, and it can't then be in any top 18. So there! That's it from me this week.
Axel points to a rather good article on Unintended Consequences with lots of good examples for the security thinker. If there is one cause to put one's finger on, it is this: the attacker is smart, and can be expected to think about how to attack your system. Once you think like an attacker, you have a chance. If not, forget it.
Notwithstanding that minor omission, here's the rather nice FC example, that of the mysterious $100 superbills.
Back in the 1970s, long before the revolution that would eventually topple him from power, the Shah of Iran was one of America's best friends (he was a dictator who brutally repressed his people, but he was anti-communist, and that made him OK in our book). Wanting to help out a good friend, the United States government agreed to sell Iran the very same intaglio presses used to print American currency so that the Shah could print his own high quality money for his country. Soon enough, the Shah was the proud owner of some of the best money printing machines in the world, and beautiful Iranian Rials proceeded to flow off the presses.
All things must come to an end, and the Shah was forced to flee Iran in 1979 when the Ayatollah Khomeini's rebellion brought theocratic rule to Iran. Everyone reading this undoubtedly knows the terrible events that followed: students took American embassy workers hostage for over a year as Iran declared America to be the "Great Satan," while evidence of US complicity in the Shah's oppression of his people became obvious, leading to a break in relations between the two countries that continues to worsen to this day.
During the early 90s, counterfeit $100 bills began to flood the Mideast, eventually spreading around the world. Known as "superbills" or "superdollars" by the US Treasury due to the astounding quality of the forgeries, these $100 bills became a tremendous headache not only for the US and its economy, but also for people all over the world that depend on the surety of American money. Several culprits have been suggested as responsible for the superbills, including North Korea and Syria, but many observers think the real culprit is the most obvious suspect: an Iranian government deeply hostile to the United States ... and even worse, an Iranian government possessing the very same printing presses used to create American money.
If you've ever wondered just why American currency was redesigned in the 1990s, now you know. In the 1970s, the US rewarded an ally with a special machine; in the 1990s, the US had to change its money because that ally was no longer an ally, and that special machine was now a weapon used to attack the US's money supply, where it really hurts. As an example of the law of unintended consequences, it's powerful, and it illustrates one of the main results of that law: that those unintended consequences can really bite back when you least expect them.
Read the rest... Unintended Consequences.
In a New Scientist article, the mainstream popular press is starting to take notice that the big Wi-Fi standards have awful crypto. But there are some signs that the remedy is being pondered - I'll go out on a limb and predict that within a year, opportunistic cryptography will be all the rage. (links: 1, 2, 3, 4, 5)
(Quick explanation - opportunistic cryptography is where you generate what you need to talk to the other party on the fly, and don't accept any assumptions that it isn't good enough. That is, you take on a small risk of a theoretical attack up front, in order to reach cryptographic security quickly and cheaply. The alternate, no-risk cryptography, has failed as a model because its expense means people don't deploy it. Hence, it may be no-risk, but it also doesn't deliver security.)
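The core of the idea can be sketched as an unauthenticated ephemeral Diffie-Hellman exchange. Everything below is illustrative only - the tiny toy group especially is nothing like a production parameter set - but it shows the shape of "generate what you need on the fly":

```python
import secrets

# Toy group parameters: a Mersenne prime keeps the arithmetic readable.
# Real deployments use standardised 2048-bit MODP groups; do NOT use
# a group this small for anything but illustration.
P = 2**127 - 1
G = 3

def keypair():
    """Fresh ephemeral keypair, invented at connection time."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each side generates its key material on the fly: no CA, no
# pre-shared secret, no infrastructure. A passive eavesdropper
# learns nothing; the small risk accepted up front is an active
# man-in-the-middle on first contact (the SSH leap of faith).
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b  # both ends derive the same session key
```

That single accepted risk is exactly the trade the no-risk model refuses, and why the no-risk model ends up delivering less security in practice.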
Here's what has been seen in the article:
Security experts say that the solution lies in educating people about the risks involved in going wireless, and making the software to protect them easier to use. "Blaming the consumer is wrong. Computers are too complex for the average person to secure. It's the fault of the network, the operating system and the software vendors," says California-based cryptographer Bruce Schneier in the US. "Products need to be secure out of the box," he says.
Skipping the contradiction between "educating people" and "blaming the consumer", it is encouraging to see security people pushing for "secure out of the box." Keys should be generated opportunistically and on install, the SSH model (an SSH blog?). If more is wanted, then the expert can arrange that, but there is little point in asking an average user to go through that process. They won't.
Schneier is pessimistic. "When convenience and features are in opposition to security, security generally loses. As wireless networks become more common, security will get worse."
Schneier is unduly pessimistic. The mistake in the above logic is to treat the opposition between convenience and security as an inviolable assumption. The devil is in those assumptions, and as Modadugu and Rescorla said recently:
"Considering the complexity of modern security protocols and the current state of proof techniques, it is rarely possible to completely prove the security of a protocol without making at least some unrealistic assumptions about the attack model."
(Apologies, but it's buried in a PDF. Post.) That's a green shoot, right there! Adi Shamir says that absolutely secure systems do not exist, so as soon as we get over that false assumption that we can arrange things perfectly, we can start to work out what benefits us most, in an imperfect world.
There's no reason why security and convenience can't walk hand in hand. In the 90s, security was miscast as needing to be perfect regardless of convenience. This simply resulted in lost sales and thus much less security. Better to think of security as what we can offer in alignment with convenience - how much security can we deliver for our convenience dollar? A lot, as it turns out.
According to those that think WiKID thoughts, yes. Quoting a paper by Campbell et al, a 5% drop in stock price can be measured when confidentiality is breached. Adam demurs, thinking the market is unconcerned about the breaches of confidentiality per se; rather, it is concerned about a) loss of customers or b) lawsuits.
I demur over both! I don't think the market cares about any of those things.
In this case, I think the market is responding to the unknown. In other words, fear. It has long been observed that once a cost is understood, it becomes factored in, and I guess that's what is happening with DDOS and defacements/viruses/worms. But large scale breaches of confidentiality are a new thing. Previously buried, they are now surfaced, and are new and scary to the market.
And the California law makes them even scarier, forcing the companies into the unknown of future litigation. But, I think once these attacks have run their course in the public mind, they will stop causing any market reaction. That isn't to say that the attacks stop, or the breaches in confidentiality stop, but the market will be so used to them that they will be ignored.
Otherwise I have a problem with a 5% drop in value. How is it that confidentiality is worth 5% of a company? If that were the case, companies like DigiCash and Zero-Knowledge would have scored big time, but we know they didn't. Confidentiality just isn't worth that much, ITMO (in the market's opinion).
The full details:
"The economic cost of publicly announced information security breaches: empirical evidence from the stock market," Katherine Campbell, Lawrence A. Gordon, Martin P. Loeb and Lei Zhou Accounting and Information Assurance, Robert H. Smith School of Business, University of Maryland, 2003.
Abstract: This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.
Also over on Ross Anderson's Econ & Security page there are these:
Two papers, "Economic Consequences of Sharing Security Information" (by Esther Gal-Or and Anindya Ghose) and "An Economics Perspective on the Sharing of Information Related to Security Breaches" (by Larry Gordon), analyse the incentives that firms have to share information on security breaches within the context of the ISACs set up recently by the US government. Theoretical tools developed to model trade associations and research joint ventures can be applied to work out optimal membership fees and other incentives. There are interesting results on the type of firms that benefit, and questions as to whether the associations act as social planners or joint profit maximisers.
Which leads to "How Much Security is Enough to Stop a Thief?," Stuart Schechter and Michael Smith, FC03 .
Through a long chain of blogs (evidence that users care about phishing, at least: gemal.dk, MozIne, LWN, Addict) comes news that Thunderbird is also to have click-thru protection. The hero of the day is one Scott MacGregor. Easiest just to read his bug report and gfx:
Get a phishing detector going for Thunderbird. I'm sure it can be improved quite a bit but this starts to catch some of the more obvious scams.
When the user clicks on a URL that we think is a phishing URL, he now gets prompted before we open it. Handles two cases so far. Hopefully we can add more as we figure out how. The host name of the actual URL is an IP address. The link text is a URL whose host name does not match the host name of the actual URL. I added support for a silentMode so later on we can hopefully walk an existing message DOM and call into this routine on each link element in the DOM. This would allow us to insert an email scam warning bar in the message window down the road.
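The two heuristics in that bug report can be sketched as follows. The function names are hypothetical and this is Python rather than Thunderbird's actual code, but the logic is the same:

```python
import re
from urllib.parse import urlparse

def is_ip(host):
    """Dotted-quad check: the first heuristic in the bug report."""
    return re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host or "") is not None

def looks_phishy(href, link_text):
    """Sketch of the two Thunderbird heuristics:
    1. the real target's host name is a bare IP address;
    2. the link text is itself a URL whose host name does not
       match the real target's host name."""
    target_host = urlparse(href).hostname
    if is_ip(target_host):
        return True
    if link_text.startswith(("http://", "https://")):
        text_host = urlparse(link_text).hostname
        if text_host and text_host != target_host:
            return True
    return False

print(looks_phishy("http://192.0.2.7/login", "Your Bank"))                   # True
print(looks_phishy("http://evil.example/", "http://www.bank.example/"))      # True
print(looks_phishy("http://www.bank.example/", "http://www.bank.example/"))  # False
```

Simple as they are, these two checks catch the bulk of the obvious scams, which is precisely the bug report's claim.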
That's good stuff! It is similar to the fix that JPM reported a couple of days ago. Momentum is building to fix the tools, so we might soon start to see work in browsers - that which is being attacked - to address phishing. So far, Firefox has made a small start with a yellow SSL bar and the SSL domain name on the bottom right. More will follow, especially as the fixes outside the browser force phishers towards more "correct" URLs and SSL attacks.
I've been focussed on a big project that finally came together last night, so am now able to relax a little and post. Adam picked up on this comment on hapless Salman Rushdie, still suffering from his maybe-fatwa. Which led to a link on the Big Lie and this definition:
"All this was inspired by the principle - which is quite true in itself - that in the big lie there is always a certain force of credibility; because the broad masses of a nation are always more easily corrupted in the deeper strata of their emotional nature than consciously or voluntarily; and thus in the primitive simplicity of their minds they more readily fall victims to the big lie than the small lie, since they themselves often tell small lies in little matters but would be ashamed to resort to large-scale falsehoods. It would never come into their heads to fabricate colossal untruths, and they would not believe that others could have the impudence to distort the truth so infamously."
Today's pop quiz is: Who wrote that?
I'll let in one little hint: he was one of the great orators of the 20th century. If you're the impatient sort that can't handle a little suspense, you can click on the WikiPedia link to see, but let's analyse his theory first.
Back to his big lie. The concept is breathtaking in its arrogance, but it's also difficult to deny. I'm sure you can think of a few in politics, right now, but this is an FC forum, so let's think like that. I can think of two cases where the big lie has occurred.
The first was in the security of a payment system I worked with back in the 90s. It was totally secure, as everyone agreed. Yet it wasn't, and watching that unravel led to fascinating observations as the organisation had to face up to its deepest secrets being revealed to the world. (In this case, some bright upstart from California had patented the secrets, which should give you enough of a clue...).
The second big lie was the secure browsing system. SSL in browsers, in other words. It was supposed to be secure, but the security started unravelling a few years back as phishing started to get going. Before that, I'd been poking at it and unwinding some of the assumptions in order to show it wasn't secure. It was a hobby, back then, as what we do in the security world is hone our skills by taking apart someone else's system.
To little avail. And I now wonder if what I was facing was the big lie? A community of Internet security people had created the belief that it was secure. And this enabled them to ignore any particular challenge to that security. Hence if, by way of example, we pointed out that, say, a breach on any certificate authority would cause all CAs to be breached, this was easily fobbed off onto some area of the intricate web (e.g., CAs are audited, therefore...).
Now, that area could also easily be shown to be weak as well, but by that time people had lost interest in our arguments. They had done their job, perhaps, or they simply relied on other people to assure them that those other areas were safe. My own view is that when one steps outside the discipline, all subtlety disappears and the truth becomes, well, gospel. (Auditing makes companies safe, right? That's what Sarbanes-Oxley and Basel II and all that is about!)
Our orator from the past goes on to say:
"Even though the facts which prove this to be so may be brought clearly to their minds, they will still doubt and waver and will continue to think that there may be some other explanation. For the grossly impudent lie always leaves traces behind it, even after it has been nailed down, a fact which is known to all expert liars in this world and to all who conspire together in the art of lying."
Now, he conveniently pins the blame on a conspiracy of expert liars, which we'll leave for the moment. But notice how, even as the lie "leaves traces behind it," the power of the mind turns to searching for the explanation that keeps it "true." And so it is with phishing and the web browser's security against a spoofed site. Even as phishing reaches and enjoys institutional scope, the basic facts of the matter - it's an attack on the secure browser - are ignored.
There must be some other explanation! If we were to say that the browser should identify the site, and it doesn't, then that would mean that secure browsing isn't secure, and that can't be right, can it? There must be some other explanation... and all of the associations and cartels and standards organisations and committees are rushing around in ever enlarging circles proposing server software, secure hardware tokens, user education, and bigger fines.
The big lie is an extraordinarily powerful thing. In closing, I'll post the last part of that extract, which might alert you to the author. Call it clue #2. But keep an open mind as to what he is saying, because I'll challenge you on it!
"These people know only too well how to use falsehood for the basest purposes. From time immemorial, however, the Jews have known better than any others how falsehood and calumny can be exploited. Is not their very existence founded on one great lie, namely, that they are a religious community, where as in reality they are a race? And what a race! One of the greatest thinkers that mankind has produced has branded the Jews for all time with a statement which is profoundly and exactly true. Schopenhauer called the Jew 'The Great Master of Lies.' Those who do not realize the truth of that statement, or do not wish to believe it, will never be able to lend a hand in helping Truth to prevail."
Now, we all know that isn't true. Or do we? Just exactly how did our orator create such a fascinating big lie, and how many people do you know that can unravel the above and work out what he did?
Here's what I think he did. Firstly, he described the big lie. Then, he attributed the big lie to his targeted victims. In that way, he hid the fact that he himself was creating another big lie set squarely against the first one.
So our hapless citizen has to not only unravel one big lie, but two big lies. Not only that, but the first big lie has probably been around for yonks, and just has to be true, right?
Offering a defence to Adolf Hitler's inspiration is tough. (Yes, it was he, writing in Mein Kampf, if you haven't already guessed it. WikiPedia.) Two big lies do not a big truth make? Nice, pithy, and will not be understood by our 99% target population. It takes a big lie to defeat a big lie?
A puzzler to be sure. For now, I'll leave you with the big thought that it's time for a big coffee.
All the blogs (1, 2, 3) are buzzing about the T-Mobile cracker. 21-year-old Nicolas Jacobsen hacked into the phone company's database and lifted identity information for some 400 customers, and also scarfed up photos taken by various phone users. He sold these and presumably made some money. He was at it for at least 6 months, and was picked up in an international sweep that netted 28 people.
No doubt the celebrity photos were embarrassing, but what was cuter was that he also lifted documents from the Secret Service and attempted to sell them on IRC chat rooms!
One would suppose that he would find himself in hot water. Consider the young guy who tried to steal a few credit cards from a hardware store by parking outside and using his laptop to wirelessly hack in and install a trojan. He didn't succeed in stealing anything, as they caught him beforehand. Even then, the maximum he was looking at was 6 credit card numbers. Clearly a kid mucking around and hoping to strike it lucky; this was no real criminal.
He got 12 years. That's 2 years for every credit card he failed to steal.
If proportionality means anything, Jacobsen is never ever going to see sunlight again. So where are we now? Well, the case is being kept secret, and the Secret Service claim they can't talk about it. This is a complete break with tradition, as normally the prosecution will organise a press circus in order to boost their ratings. It's also somewhat at odds with the press release they put out on the other 19 guys they picked up.
The answer is probably that which "a source" offers: "the Secret Service, the source says, has offered to put the hacker to work, pleading him out to a single felony, then enlisting him to catch other computer criminals in the same manner in which he himself was caught. The source says that Jacobsen, facing the prospect of prison time, is favorably considering the offer."
Which is fine, except the hardware shop hacker also helped the hardware store to fix up their network and still got 12 years. The way I read this message is that proportionality - the punishment matching the crime - is out the window, and if you are going to hack, make sure you hack the people who will come after you to the point of ridicule.
Those of you who shudder over my aggressive adoration of "security by obscurity" will cheer the article in the Register that reveals the latest on-camera bloopers.
It seems that thousands of webcams (little cameras for PCs) install and open up webservers by default. Now, this is a fine thing to do if you can keep your webserver "hidden" from view. (That's what we mean by security by obscurity!) But recall that google and/or others have been shipping spyware tools that capture secret URLs from chat sessions and email sessions, and then forward them to search engines! Well, it was only a matter of time before someone figured out a way to search google for all those secret cameras ...
Suddenly, the age-old trick of using a secret webserver or URL to distribute a private document no longer works. Whoops. Security by obscurity just flipped that trick on its head.
But, let's not throw out the baby with the bathwater. Anyone using that trick should have known that they were taking a risk. Now we know the risk is dramatically enhanced by spyware snaffling secret URLs. So, stop doing it. But, while it lasted, it was a good trick, and it saved lots of people lots of costs.
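For what it's worth, the strength of the trick rests entirely on the URL being unguessable. Here's a minimal sketch of minting such a token, assuming a Unix-like system with /dev/urandom (the function name is my own invention):

```c
#include <stdio.h>

/* Mint an unguessable hex token for a "secret" URL by reading
 * nbytes of randomness from /dev/urandom (Unix-like systems).
 * out must hold at least 2*nbytes+1 chars. Returns 0 on success. */
int make_url_token(char *out, size_t nbytes)
{
    FILE *f = fopen("/dev/urandom", "rb");
    unsigned char raw[64];
    size_t i;

    if (f == NULL || nbytes > sizeof raw)
        return -1;
    if (fread(raw, 1, nbytes, f) != nbytes) {
        fclose(f);
        return -1;
    }
    fclose(f);
    for (i = 0; i < nbytes; i++)
        sprintf(out + 2 * i, "%02x", raw[i]);
    return 0;
}
```

With 16 bytes (128 bits) the token is infeasible to guess; the trouble, as above, is that the URL leaks in transit, not that it gets guessed.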
Oh, for the victims - those companies shipping the webserver camera setups that are insecure by default - well, you deserve to be embarrassed. And the people spied upon by the bloggers ... consider the greater good of teaching us how to secure our world as your compensation. And let's hope you weren't doing anything too embarrassing.
A tech survey by accountants gives some interesting tips on security. The reason it is credible is that the authors aren't from our industry, so they can be expected to approach this without the normal baggage of some security product to sell. Of course their own product is for sale, but that's easy to factor out in this case.
Security is still the Number One concern that accountants are seeing out there. That makes sense. It accords with everything we've seen about the phishing and identity theft explosions over the last couple of years.
Second is electronic document management. Why now? This issue has been around for yonks, and businesses have been basically doing the paperless office as and when they could. My guess is that things like Sarbanes-Oxley, Basel II and various less well-known regulatory attacks on governance have pushed this to the fore. Now, if you haven't got your documents under control (whatever that means) you have a big risk on your hands.
Third is Data Integration. This echoes what I've seen in finance circles of late; they have gone through a phase of automating everything with every system under the sun. Now, they're faced with tying them all together. The companies selling product at the moment are those with tools to ease the tying of things together. But so far, the customers are not exactly enticed, with many dreading yet another cycle based on the current web services hype.
Spam has slipped to Fourth in the rankings of the "biggest concerns". The article tries to hint at this as a general easing of the problem, but I'd suggest caution: there are far too many ways in which this can be misinterpreted. For example, the huge increase in security concerns over the last year has probably simply overshadowed spam, to the extent that spam may well have doubled and we'd not have cared. Identity Theft is now on the agenda, and that puts the spam into context. One's a nuisance and the other's a theft. Internet security experts may be bemused, but users and accountants can tell the difference.
For the rest, read on...
Information Security Once Again Tops AICPA Tech List
Jan. 3, 2005 (SmartPros) For the third consecutive year, information
security is the country's number one technology concern, according to the
results of the 2005 Top Technologies survey of the American Institute of
Certified Public Accountants.
The survey, conducted annually since 1990, seeks to determine the 10 most
important technology issues for the coming year. There were more than 300
participants in the 2005 survey, a 30 percent increase over the previous year.
Interestingly, spam technology -- an issue closely associated with
information security -- apparently has lost some currency. It made its debut
on the 2004 list at number two. On the new list, it falls to number four.
"Because our work and personal lives are now inextricably linked to
information systems, security will always be top of mind," said Roman
Kepczyk, CPA/CITP, Chair of the AICPA's Information Technology Executive
Committee. Commenting on spam technology's lower placement on the list, he
said, "We've seen major improvements to filtering systems, which have
allowed us to bring spam under greater control. This most likely is the
reason that spam technology doesn't command the importance it did in the past."
A different issue closely allied with information security -- electronic
data management, or the paperless office -- moved up to second place. It was
number three last year.
There are two debuts on the Top Technologies list: authentication
technologies and storage technologies. Another issue, learning and training
competency, reappears at number 10 after an absence of three years.
The following are the 2005 Top 10 Technologies:
1.. Information Security: The hardware, software, processes and procedures
in place to protect an organization's information systems from internal and external threats.
2.. Electronic Document Management (paperless or less-paper office): The
process of capturing, indexing, storing, retrieving, searching and managing
documents electronically. Formats include PDF, digital and image store
3.. Data Integration: The ability to update one field and have it
automatically synchronize between multiple databases, such as the
automatic/seamless transfer of client information between all systems. In
this instance, only the data flows across systems from platform to platform
or application to application. Data integration also involves the
application-neutral exchange of information. For example, the increased use
of XBRL (eXtensible Business Reporting Language) by companies worldwide
provides for the seamless exchange and aggregation of financial data to meet
the needs of different user groups using different applications to read,
present and analyze data.
4.. Spam Technology: The use of technology to reduce or eliminate unwanted
e-mail commonly known as Spam.
5.. Disaster Recovery: The development, monitoring and updating of the
process by which organizations plan for continuity of their business in the
event of a loss of business information resources through theft,
virus/malware infestation, weather damage, accidents or other malicious
destruction. Disaster recovery includes business continuation, contingency
planning and disk recovery technologies and processes.
6.. Collaboration and Messaging Applications: Applications that allow
users to communicate electronically, including e-mail, voicemail, universal
messaging, instant messaging, e-mailed voice messages and digital faxing.
Examples include a computer conference using the keyboard (a keyboard chat)
over the Internet between two or more people.
7.. Wireless Technologies: The transfer of voice or data from one machine
to another via the airwaves and without physical connectivity. Examples
include cellular, satellite, infrared, Bluetooth, WiFi, 3G, 2-way paging,
CDMA, Wireless/WiMax and others.
8.. Authentication Technologies (new): The hardware, software, processes
and procedures to protect a person's privacy and identity from internal and
external threats, including digital identity, privacy and biometric technologies.
9.. Storage Technologies (new): Storage area networks (SAN) include mass
storage, CD-recordable, DVD, data compression, near field recording,
electronic document storage and network attached storage (NAS), as well as
small personal storage devices like USB drives.
10.. Learning and Training Competency (End Users): The methodology and
curriculum by which personnel learn to understand and use technology. This
includes measuring competency, learning plans to increase the knowledge of
individuals, and hiring and retaining qualified personnel with career
opportunities that retain the stars.
Also, each year the AICPA Top Technologies Task Force prepares a "watch
list" of five emerging technologies [...]
Axel's blog points to a storm in a teacup over at a professional association called the Computer Security Institute. It seems that they invited Frank Abagnale to keynote at their conference. Abagnale, if you recall, is the infamous fraudster portrayed in the movie Catch me if you can.
Many of the other speakers kicked up a fuss. It seems they had ethical qualms about speaking at a conference where the 'enemy' was also presenting. Much debate ensued, alleges Axel, about forgiveness, holier-than-thou attitudes and cashing in on notoriety.
I have a different perspective, based on Sun Tzu's famous aphorism (often misattributed to Clausewitz). He said something to the effect of "Know yourself and you will win half your battles. Know your enemy and you will win 99 battles out of a hundred." Those speakers who complained or withdrew have cast themselves as limited to the first group, the self-knowers, and revealed themselves as reliable only to win every second battle.
Still, even practitioners of narrow horizons should not be above learning from those who see further. So why is there such paranoia about dealing only with the honest side in the security industry? This is the never-ending white-hat versus black-hat debate. I think the answer can be found in guildthink.
People who are truly great at what they do can afford to be magnanimous about the achievements of others, even those they fight. But most are not like that; they are continually trapped in a sort of middle-level, process-oriented tier, implementing that which the truly great have invented. As such, they are always on the defensive against attacks on their capabilities, because they are unable to deal at the level where they can cope with change and revolution.
This leads the professional tiers to always be on the lookout for ways to create "us" and "them." Creating a professional association is one way, or a guild, to use the historical term.
Someone like Frank Abagnale - a truly gifted fraudster - has the ability to make them look like fools. Thus, he scares them. The natural response to this is to search out rational and defensible ways to keep him and his ilk on the outside, in order to protect the delicate balance of trade. For that reason, it is convenient to pretend to be morally and ethically opposed to dealing with those that are convicted. What they are really saying is that his ability to show up the members for what they are - middle ranking professionals - is against their economic interests.
In essence, all professionals do this, and it should come as no surprise. All associations of professionals spend a lot of their time enhancing the credibility of their members and the dangers of doing business with those outside the association. So much so that you won't find any association - medical, accounting, engineering, or security - that will admit that this is all normal competitive behaviour. (A quick check of the CSI site confirms that they sell training, and they had a cyberterrorism panel. Say no more...)
So more kudos to the CSI for breaking out of the mold of us and them! It seems that common sense won over and Frank attended. He can be seen here in a photo op, confirming his ability to charm the ladies, and giving "us" yet another excuse to exclude him from our limited opportunities with "them" !
Adam continues to grind away at his problem: how to signal good security. It's a good question, as we know that the market for security is highly inefficient, some would say dysfunctional. E.g., we perceive that many security products are good but ignored, and others are bad but extraordinarily popular; despite repeated evidence of breaches, users flock to them en masse, with lemming-like behaviour.
I think a real part of this is that the underlying question of just what security really is remains unstudied. So, what is security? Or, in more formal economics terms, what is the product that is sold in the market for security?
This is not such an easy question from an economist's point of view. It's a bit like the market for lemons, which was thought to be just anomalous and weird until some bright economist sat down and studied it. AFAIK, nobody's studied the market for security, although I admit to only having asked one economist, and his answer was "there's no definition for *that* product that I know of!"
Let's give it a go. Here's the basic issue: security as a product lacks good testability. That is, when you purchase your standard security product, there is no easy way to show that it achieves its core goal, which is to secure you against the threat.
Well, actually, that's not quite correct; there are obviously two sorts of security products, those that are testable and those that are not. Consider a gate that is meant to guard against dogs. You can install this in a fence, then watch the rabid canines try and beat against the gate. With a certain amount of confidence you can determine that the gate is secure against dogs.
But, now consider a burglar alarm. You can also install it with about the same degree of effort. You can conduct the basic workability tests, same as a gate. One opens and goes click on closing; the other sets and resets, with beeping.
But there the comparison gets into trouble, as once you've shown the burglar alarm to work, you still have no real way of determining that it achieves its goal. How do you know it stops burglars?
The threat that is being addressed cannot be easily simulated. Yes, you can pretend to be a burglar, but non-burglars are pretty poor at that. Whereas one doesn't need to be a dog to pretend to be a dog, and do so well enough to test a gate.
What then is one supposed to do? Hire a burglar? Well, let's try that: put an ad in the paper, or more digitally, hang around IRC and learn some NuWordz. And your test burglar gets in and ... does what? If he's a real burglar, he might tell you or he might just take the stuff. Or, both, it's not unreasonable to imagine a real burglar telling you *and* coming back a month later...
Or he fails to get in. What does that tell you? Only that *that* burglar can't get in! Or that he's lying.
Let's summarise. The market for security has a defining characteristic: the threat generally cannot be simulated well enough to test the product, and real attackers cannot be reliably engaged to test it either.
Perhaps some examples might help. Consider a security product such as Microsoft Windows Operating System. Clearly they write it as well as they can, and then test it as much as they can afford. Yet, it always ships with bugs in it, and in time those bugs are exploited. So their testing - their simulated threats - is unsatisfactory. And their ability to arrange testing by real threats is limited by the inefficient market for blackhats (another topic in itself, but one beyond today's scope).
Closer to (my) home, let's look at crypto protocols as a security product. We can see that it is fairly close as well: The simulated threat is the review by analysts, the open source cryptologists and cryptoplumbers that pore through the code and specs looking for weaknesses. Yet, it's expensive to purchase review of crypto, which is why so many people go open source and hope that someone finds it interesting enough. And, even when you can attract someone to review your code, it is never ever a complete review. It's just what they had time for; no amount of money buys a complete review of everything that is possible.
And, if we were to have any luck in finding a real attacker, then it would only be by deploying the protocol in vast numbers of implementations or in a few implementations of such value that it would be worth his time to try and attack it. So, after crossing that barrier, we are probably rather ill-suited to watching for his arrival as a threat, simply due to the time and effort already undertaken to get that far. (E.g., the protocol designers are long since transferred to other duties.) And almost by default, the energy spent in cracking our protocol is an investment that can only be recouped by aggressive acquisition of assets on the breach.
(Protocol design has always been known to have highly asymmetric characteristics in security. It is for this reason that the last few years have shown a big interest in provability of security statements. But this is a relatively young art; if it is anything like the provability of coding that I did at University it can be summarised as "showing great potential" for many decades to come.)
Having established these characteristics, a whole bunch of questions are raised. What then can we predict about the market for Lemmings? (Or is it the market for Pied Pipers?) If we cannot determine its efficacy as a product, why is it that we continue to buy? What is it that we can do to make this market respond more ... responsibly? And finally, we might actually get a chance to address Adam's original question, to wit: how do we go about signalling security, anyway?
Lucky we have a year ahead of us to muse on these issues.
In a show of remarkable adeptness, Netcraft have released an anti-phishing plugin for IE. Firefox is coming, so they say. This was exciting enough to make it on Slashdot, as David at Mozilla pointed out to me.
There are now dozens of plugins floating around designed to address phishing. (If that doesn't say this is a browser issue, I don't know what will. Yes, the phish are growing wings and trialling cell phones, pagers and any other thing they can get at, but the main casting action is still a browser game.) The trustbar one is my favourite, although it doesn't work on my Firefox.
So, what about Netcraft? Well, it's quite inspired. Netcraft have this big database of all the webservers in existence, and quite a few that are not. The plugin simply pops on over to the Netcraft database and asks for the vital stats on that website.
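In spirit, the plugin's check is nothing more than a keyed lookup against a table of known sites. A toy sketch (the table, hostnames and dates below are entirely invented, and Netcraft's real database is of course vastly richer):

```c
#include <string.h>

/* A toy site-reputation table, standing in for the kind of
 * database Netcraft holds. All entries here are hypothetical. */
struct site_info {
    const char *host;
    int         first_seen_year;  /* how long the site has been known */
    const char *country;
};

static const struct site_info known_sites[] = {
    { "example-bank.com", 1996, "US" },
    { "examp1e-bank.com", 2005, "??" },  /* lookalike, newly registered */
};

/* Return the record for a host, or NULL if it is unknown. */
const struct site_info *lookup(const char *host)
{
    size_t i;
    for (i = 0; i < sizeof known_sites / sizeof known_sites[0]; i++)
        if (strcmp(known_sites[i].host, host) == 0)
            return &known_sites[i];
    return NULL;
}
```

A plugin would then warn on sites that are unknown, or known only for a few weeks - which is exactly the profile of a phishing site.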
Well, hey ho! Why didn't we think of that?
There's a very good reason why not. Several in fact. Firstly, this puts Netcraft into your browser in an important position; if they succeed at this, then they have an entrée into users' hearts and minds. That means some sort of advertising revenue model, etc etc, as clearly permitted in their licence. Or worse, like their own little spyware programs which may or may not be permitted under their Privacy clause.
(So one reason we didn't think of that is because we all hate advertising models ... just so we're clear on that point!)
But more interesting is that Netcraft is a player in the security industry. At least, they are a collector of CA and SSL statistics, and their reports sell for mighty big bucks. So one might expect them to pay attention to those suggestions that supported the SSL industry, like the ones that I frequently ... propose.
But, no. What they have done is completely bypassed the SSL security model and crafted a new one based on a database of known information. If one has followed the CA security debate, it bears a stunning similarity to the notions of what we'd do if we were attempting to fix the model. It's the endgame: to fix the revocation problem you add online checking which means you don't need the CAs any more.
Boom. If Netcraft succeeds in this approach (and there is no reason why others can't copy it!) then we don't need CAs any more. Well, that's not quite true, what this implies is that Netcraft just became a CA. But, they are a CA according to their rules, not those historical artifacts popularised by accounting entities such as WebTrust.
So it's another way to become a CA: give away the service for free, acquire the user base, and figure out how to charge for it later. A classic dotcom boom strategy, right? Bypass the browser policy completely because it is struggling under the weight of the WebTrust legacy, and the security wood can't be seen for the policy trees.
(Now, some will be scratching their heads about the apparent lack of a cert in the plugin. Don't worry, that's an implementation detail. They can add that later, for now they offer a free certificate service with no cert. Think of the upgrade potential here. The important thing is to see if this works as a *business* model first.)
So this takes aim at the very group that they sell reports to. Of course, the people who want to buy reports on certificate use are the CAs, and their various suppliers of CA toolkits.
That's why it's a significant event. (And another reason why we didn't think of it!)
Netcraft have obviously worked out several things: the CAs are powerless to do anything about phishing, and that's a much bigger revenue stream than a few boring reports. Further, the security model is stagnant at best and a crock at worst, so why not try something new? And, the browser manufacturers aren't playing their part, with nary a one admitting that the problem is in their patch. So their users are also vulnerable to a takeover by someone with some marketing and security sense.
Well done Netcraft, is all I can say! Which is to say that I have no idea whether the plugin itself will work as advertised. But the concept, now, that's grand!
Recently, it's become fashionable to write an article on how to protect yourself from all the malware, phishing, spyware, viruses, spam, espionage and bad disk drives out there. Here's some: [IBM], [Schneier], [GetLuky].
Unfortunately, most of them go over the heads of ordinary users, and many of them challenge even experienced users! So I've been keeping my eye out for succinct tips, the sort for car owners who don't know what an oil change is. I have two which I've posted here before, being Buy a Mac and download FireFox. Both good things, but I feel the lack of any good tip for phishing; there just isn't a good way to deal with that yet.
There they are, sitting in a box in the right of the blog.
- Buy a Mac - Uses BSD as its secure operating system...
- Download FireFox - Re-engineered for security...
- Check name of site - written on bottom right of FireFox, next to padlock...
- Write Passwords Down - In a safe place...
People do ask me from time to time what to do. I feel mightily embarrassed because I have no Windows machine, but I also find myself empathising with ordinary users who ask what it means to upgrade the software! So my tips are designed for people who know not what SP2 means.
Let me know your suggestions, but be warned: they'd better be very very simple. Coz that's all that counts for the user.
Cypherpunk asks a) why has phishing gone beyond "don't click that link" and b) why can't we educate the users?
A lot of what I wrote in The Year of the Snail is apropos to that first question. In economic terms, we would say that Phishing is now institutionalised. In more general parlance, and in criminal terms, it would be better expressed as organised crime. Phishing is now a factory approach, if you like, with lots of different phases, and different actors all working together. Which is to say that it is now very serious, it's not a simple net bug like viruses or spam, and that generally means telling people to avoid it will be inadequate.
We can look at the second question much more scientifically. The notion of teaching people not to click has been tried for so long now that we have a lot of experience just how effective the approach of 'user education' is. For example, see the research by Ye and Smith and also Herzberg and Gbara, who tested users in user interface security questions. Bottom line: education is worse than useless.
Users defy every effort to be educated. They use common sense and their own eyes: and they click a link that has been sent to them. If they didn't do that, then we wouldn't have all these darn viruses and all this phishing! But viruses spread, and users get phished, so we know that they don't follow any instructions that we might give them.
So why does this silly notion of user education persist? Why is every security expert out there recommending that 'users be educated' with not the least blush of embarrassment at the inadequacy of their words?
I think it's a case of complexity, feedback and some fairly normal cognitive dissonance. It tends to work like this: a security expert obviously receives his training from some place, which we'll call received wisdom. Let's call him Trent, because he is trusted. He then goes out and employs this wisdom on users. Our user, Alice, hears the words of "don't click that link" and because of the presence of Trent, our trusted teacher, she decides to follow this advice.
Then, Alice goes out into the world and ... well, does productive work, something us Internet geeks know very little about. In her office every day she dutifully does not click, until she notices two things. Firstly, everyone else is clicking away like mad, and indeed sending lots of Word documents and photos of kids and those corny jokes that swill around the office environment.
And, secondly, she notices that nobody else seems to suffer. So she starts clicking and enjoying the life that Microsoft taught her: this stuff is good, click here to see my special message. It all becomes a blur and some time later she has totally forgotten *why* she shouldn't click, and cannot work out what the problem is anyway.
(Then of course a virus sweeps the whole office into the seas ...)
So what's going on here? Several factors: the advice arrives only as received wisdom from a trusted teacher, there is no day-to-day feedback confirming it, and everyone around Alice contradicts it with apparent impunity.
Hence, cognitive dissonance. In this case, the security industry has an unfounded view that education is a critical component of a security system. Out in the real world, though, that doesn't happen. Not only doesn't the education happen, but when it does happen, it isn't effective.
Perhaps a better way to look at this is to use Microsoft as a barometer. What they do is generally what the user asks for. The user wants to click on mail coming in, so that's what Microsoft gives them, regardless of the wider consequences.
And, the user does not want to be educated, so eventually, Microsoft took away that awful bloody paperclip. Which leaves us with the lesson of inbuilt, intuitive, as-delivered security. If you want a system to be secure, you have to build it so that it is intuitively so to the user. Each obvious action should be secure. And you have to deliver it so that it operates out of the box, securely. (Mozilla have recently made some important steps in this direction by establishing a policy of delivery to the average user. It's a first welcome step which will eventually lead them to delivering a secure browser.)
If these steps aren't taken, then it doesn't help to say to the user, don't click there. Which brings me to the last point: why is user education *worse* than useless? Well, every time a so-called security expert calls for the users to be educated, he is avoiding the real problems, and he is shifting the blame away from the software to the users. In this sense, he is the problem, and until we can get him out of the way, we can't start thinking of the solutions.
Over on Adam's blog he has a developing theme on 'security signalling.' He asks whether a code-checking program like RATS would signal to the world that a product is a good secure product? It's an important question, and if you need a reason, consider this: when/if Microsoft gets done rewriting its current poisoned chalice of an operating system, how is it going to tell the world that it's done the job?
Last night I had occasion to feel the wrath of such a check, so can now respond with at least one sample point. The story starts earlier in the week, when I reinstalled my laptop with FreeBSD 5.3. This was quite a massive change for me, up from 4.9 which had slowly been dying under the imposition of too many forward-compatible ports. It also of course retriggered a reinstall of languages, but this should have been no trouble as I already had jdk1.4.2 installed, and that was still the current. (Well, I say no trouble ... as Java is "unsupported" on FreeBSD, mostly because of Sun's control freak policies creating a "write once, run twice" environment.)
Anyway, my code went crazy (*). A minor change in the compiler checking brought out a squillion errors. Four hours later, and I'd edited about 100 files and changed about 500 lines of code. My eyes were glazed, my brain frizzled and the only cognitive capability I had left was to open beers and dispose of them.
Now, this morning, I can look at the effect, hopefully in the cold hard light of a northern winter's sunny day. At least for another hour.
It's security code (hard crypto payments) so am I more secure? No. Probably actually less secure, because the changes were so many and so trivial that the robot masquerading as me made them without thinking; just trying to get the darn thing to compile so I could get back to my life.
So one answer to whether the RATS proposal could make any difference is that it could make things worse: If thrown at a project, the rush to get the RATS thing to come out clean could cause more harm than good.
Which is a once-off effect or a singularity. But what if you didn't have a singularity, and you instead just had "good coding checks" all the time?
Well. This is just like the old days of C, where some shops used Lint and others didn't. (Lint was an old tool for cleaning out "fluff" from your C code.) Unfortunately there wasn't enough of a real discernible difference, that I ever saw, to conclude that a Lint-using shop was more secure.
What one could tell is that the Lint-using shop had some coding practices in place. Were they good practices? Sometimes, yes. Maybe. On the whole Lint did good stuff, but it also did some stupid things. The net result was that you either used Lint or you were careful. Not using Lint signalled either that you were careful and knew more, or that you weren't careful and knew less; using Lint signalled that you didn't know enough to be careful, but at least you knew that!
We could debate for years on which is better.
As an example of this, at a tender age, I rewrote the infamous strcpy(3) set of routines to eliminate buffer overflows. Doing so cost me a day or two of coding. But from there on in, I never had a buffer overflow, and my code was easy to audit. Massive benefit, and I preferred that to using Lint, simply because *I* knew what I was doing was much safer.
But how to convince the world of that? I don't know ... still an open question. But, I'm glad Adam has brought up the question, and I have the chance to say "that won't work," because the answer is probably worth an extra percentage point on Microsoft's market cap, in a couple of years, maybe.
* The code is WebFunds which is about 100kloc (kilo-lines-of-code) chock full of hard crypto, RTGS payments and various other financial cryptography applications like secure net storage and secure chat.
Back in the early 90s, some Bright Spark had the idea that if a certificate authority could sign a certificate, and this certificate could be used to secure logins, payments, emails and ... well, everything, then obviously everyone would want one. And everyone was a lot of people, even back in the days before mainstream Internet.
There was a slight problem, though. The certificate wasn't really the only way to do things. In fact, it was one of the poorer ways to do things, because of that pesky complexity argument. But this didn't present an insurmountable challenge to our Mr. B.S., as crypto and security are devilish complex things, *however* you do them.
All that was needed was a threat to hang the hat of certificate security on, and the rest could be written into history. This bogeyman turned out to be the wicked thief of credit cards, who would conduct a thing of great evil called a Man-In-The-Middle attack on poor innocent Internet consumers. This threat (cut to images of virginal shoppers tied to the rails before the oncoming train of rapacious gangsters) made the whole lot hang together, cohesively enough to fool all but the most skeptical security experts.
And so PKI was born. Public Key Infrastructure involved lots of crypto, lots of complexity, lots of software, and of course, oodles and oodles of certs, all for sale. Boom! Literally, at least in stock market terms, as certificate sellers went through the Internet IPO roof.
Their mission was to sell certs or die in the attempt, and they did. By the end of the dotcom boom, all but one of them were smoking carcasses. The one that survived cleverly used its stock market money to buy some businesses with real cash flow. But even it danced with stock prices that were 1-2% of its peak. Now it's up in the 10% range.
Unfortunately, even though the PKI companies died exotic and flashy deaths, the mission did not. Now, one insider has crawled out from the ashes to write an anonymous article that isn't exactly waving a white flag. As he reveals his inner dreams, it's stunning to realise that this insider *still* believes that PKI can do it, even when he admits that the mission was to sell certs. How pervasive a marketing myth is that?
Anyways, here's the link. For those students of Internet security, you be the judge: how much of the following makes sense when you consider their mission was to sell certs? How much of it makes sense if you take away that mission?
Wherein a very patient CSO hatches a plan to revive a technology thought to be dead
I recently noticed a curious phenomenon. Public Key Infrastructure, once rumored to be dead, is making a comeback. Several high-profile institutions are now deploying a technology that I assumed had been extinct since the dot-bomb era. It's sort of technology's version of the coelacanth. This was a fish that was assumed to have been extinct for hundreds of thousands of years and then-bam!-one turns up in a fisherman's net off the coast of Madagascar.
I admit I have a certain fondness for Public Key Infrastructure, or PKI as it is commonly known-at least that is the three-letter version. PKI is commonly described using choice four-letter words as well. That's because it came into favor-and just as ingloriously fell out of it-with the boom of the '90s.
I should know, because I cut my security teeth on the bleeding edge of PKI. In 1992, I took a position as the director of electronic commerce with a company that sought to deploy a global certificate authority (CA) that would issue the digital certificates used to process PKI. Under our plan, all other CAs would be subordinate to us, and we would sit atop a giant pyramid scheme raking in monopoly profits by charging pennies on all the billions of e-commerce transactions around the world.
The only problem was that other PKI companies were busy scheming with their own plans to take over the e-commerce world. While we were plotting against each other, we forgot to actually deploy the technology. After a few years of hand waving, PowerPoint presentations and whiteboard discussions, investors began demanding that we start earning our keep by making a profit. Silly realists!
Over at EmergentChaos, Adam asked what happens when "the Snail" gets 10x worse? I need several cups of coffee to work that one out! My first impressions were that ... well, it gets worse, dunnit! which is just an excuse for not thinking about the question.
OK, so gallons of coffee and a week later, what is the natural break on the shift in the security marketplace? This is a systems theory (or "systemics" as it is known) question. Hereafter follows a rant on where it might go.
(Unfortunately, it's a bit shambolic. Sorry about that.)
A lot of ordinary users (right now) are investigating ways to limit their involvement with Windows due to repeated disasters with their PCs. This is the first limiting factor on the damage: as people stop using PCs on a casual basis, they switch to using them on a "must use" basis.
(Downloading Firefox is the easy fix and I'll say no more about it.) Some of those - retail users - will switch to Macs, and we can guess that Mac might well double its market share over the next couple of years. A lot of others - development users and poorer/developing countries - will switch to the open source Unix alternates like Linux/BSD. So those guys will have a few good years of steady growth too.
Microsoft will withdraw from the weaker marketplaces. So we have already seen them pull out of supporting older versions, and we will see them back off from trying to fight Firefox too hard (they can always win that back later on). But it will maintain its core. It will fight tooth and nail to protect two things: the Office products, and the basic windows platform.
To do that, the bottom line is that they probably need to rewrite large chunks of their stuff. Hence the need to withdraw from marginal areas, so as to concentrate efforts on protecting that which is core. So we'll see a period characterised by no growth or negative growth by Microsoft, during which the alternates will reach a stable, significant percentage. But Microsoft will come back, and this time with a more secure platform. My guess is that it will take them 2 years, but that's because everything of that size takes that long.
(Note that this negative market growth will be accompanied by an increase in revenues for Microsoft as companies are forced to upgrade to the latest releases in order to maintain some semblance of security. This is the perversity known as the cash cow: as the life cycle ends, the cash goes up.)
I'd go out on a limb here and predict that in 2 years, Microsoft will still control about half of the desktop market, down from about 90% today.
There are alternates outside the "PC" mold. More people will move to PDAs/cellular/mobile phones for smaller apps like contact and communications. Pushing this move also is the effect we've all wondered about for a decade now: spam. As spam grows and grows, email becomes worse and worse. Already there is a generation of Internet users that simply do not use email: the teenagers. They are chat users and phone users.
It's no longer the grannies who don't use email, it is now the middle aged tech groupies (us) who are feeling more and more isolated. Email is dying. Or, at least, it is going the way of the telegram, that slow clunky way in which we send rare messages like birthday, wedding and funeral notices. People who sell email-based product rarely agree with this, but I see it on every wall that has writing on it.
But, I hear you say, chat and phones are also subject to all of the same attacks that are going to do Microsoft and the Internet so much damage! Yes, it's true, they are subject to those attacks, but they are not going to be damaged in the same way. There are two reasons for this.
Chat users are much much more comfortable with many many identities. In the world of instant messaging, Nyms are king and queen and all the other members of the royal family at the same time. The same goes for the mobile phone world; there has been a seismic shift in that world over to prepaid billing, which also means that an identity that is duff or a phone that is duff can simply be disposed of, and a new one set up. Some people I know go through phones and SIMs on a monthly basis.
Further, unlike email, there are multiple competing systems for both the phone platform and the IM platform, so we have a competition of technologies. We never had that in email, because we had one standard and nobody really cared to compete; but this time, as hackers hit, different technologies can experiment with different solutions to the cracks in different ways. The one that wins will attract a few percentage points of market share until the solution is copied. So the result of this is that the much lauded standardisation of email and the lack of competition in its basic technical operability is one of the things that will eventually kill it off.
In summary so far; email is dying, chat is king, queen, and anyone you want to be, and your mobile/cellular is your pre-paid primary communications and management device.
What else? Well, those who want email will have to pay *more* for it, because they will be the shrinking few who consume all the bandwidth with their spam. Also, the p2p space will save us from the identity crisis by inventing the next wave of commerce based on the nym. Which means that we can write off the Hollywood block buster for now.
Shambolic, isn't it!
 "Scammers Exploit DomainKeys Anti-phishing Weapon"
 "Will 2005 be the year of the unanswered e-mail message?"
In the military classroom, we teach 4 phases of war, one of which is "Withdrawal" (the others are Attack, Advance, Defence). One of the reasons for withdrawing is that the terrain cannot be defended, and thus we withdraw to terrain that can be defended. It's all fairly common sense stuff, but we are up against an inbuilt fear in all politicians and not a few soldiers that withdrawal is retreat and retreat is failure.
Sometimes it is necessary to give up ground. Microsoft are now in that position. They are overextended on platforms, and need to back away from support of all older versions of the OS, if they are to have any chance of fielding a secure OS in the next couple of years. More evidence of this withdrawal is now coming to light:
These two articles discuss the withdrawal of support from various products, for security reasons. For Microsoft, the defensible terrain is Windows XP. Or, at least, that's their strategy!
In line with my last post about using payment systems to stupidly commit crimes, here's what's happening over in the hacker world. In brief, some thief is trying to sell some Cisco source code he has stolen, and decided to use e-gold to get the payout. Oops. Even though e-gold has a reputation for being a den of scammers, any given payment can be traced from go to whoa. All you have to do is convince the Issuer to do that, and in this case, e-gold has a widely known policy of accepting any court order for such work.
The sad thing about these sorts of crooks and crimes is that we have to wait until they've evolved by self destruction to find out the really interesting ways to crack a payment system.
E-gold Tracks Cisco Code Thief
November 5, 2004 By Michael Myser
The electronic currency site that the Source Code Club said it will use to accept payment for Cisco Systems Inc.'s firewall source code is confident it can track down the perpetrators.
Dr. Douglas Jackson, chairman of E-gold Ltd., which runs www.e-gold.com, said the company is already monitoring accounts it believes belong to the Source Code Club, and there has been no activity to date.
"We've got a pretty good shot at getting them in our system," said Jackson, adding that the company formally investigates 70 to 80 criminal activities a year and has been able to determine the true identity of users in every case.
On Monday, a member of the Source Code Club posted on a Usenet group that the group is selling the PIX 6.3.1 firewall firmware for $24,000, and buyers can purchase anonymously using e-mail, PGP keys and e-gold.com, which doesn't confirm identities of its users.
"Bad guys think they can cover their tracks in our system, but they discover otherwise when it comes to an actual investigation," said Jackson.
The purpose of the e-gold system, which is based on 1.86 metric tons of gold worth the equivalent of roughly $25 million, is to guarantee immediate payment, avoid market fluctuations and defaults, and ease transactions across borders and currencies. There is no credit line, and payments can only be made if covered by the amount in the account. Like the Federal Reserve, there is a finite value in the system. There are currently 1.5 million accounts at e-gold.com, 175,000 of those Jackson considers "active."
To have value, or e-gold, in an account, users must receive a payment in e-gold. Often, new account holders will pay cash to existing account holders in return for e-gold. Or, in the case of SCC, they will receive payment for a service.
The only way to cash out of the system is to pay another party for a service or cash trade, which Jackson said creates an increasingly traceable web of activity.
He did offer a caveat, however: "There is always the risk that they are clever enough to figure out an angle for offloading their e-gold in a way that leads to a dead end, but that tends to be much more difficult than most bad guys think."
This is all assuming the SCC actually receives a payment, or even has the source code in the first place.
It's the ultimate buyer beware-the code could be made up, tampered with or may not exist. And because the transaction through e-gold is instantaneous and guaranteed, there is no way for the buyer to back out.
Dave Hawkins, technical support engineer with Radware Inc. in Mahwah, N.J., believes SCC is merely executing a publicity stunt.
"If they had such real code, it's more likely they would have sold it in underground forums to legitimate hackers rather than broadcasting the sale on Usenet," he said. "Anyone who did have the actual code would probably keep it secret, examining it to build private exploits. By selling it, it could find its way into the public, and all those juicy vulnerabilities [would] vanish in the next version."
"There's really no way to tell if this is legitimate," said Russ Cooper, senior scientist with security firm TruSecure Corp. of Herndon, Va. Cooper, however, believes there may be a market for it nonetheless. By posting publicly, SCC is able to get the attention of criminal entities they otherwise might not reach.
"It's advertising from one extortion team to another extortion team," he said. "These DDOS [distributed denial of service] extortionists, who are trying to get betting sites, no doubt would like to have more ways to do that."
"Internet scams cannot be thwarted by placing the burden on users to defend themselves at all times. Beleaguered users need protection, and the technology must change to provide this."
Sacrilege! Infamy! How can this rebel break ranks to suggest anything other than selling more crypto and certs and solutions to the users?
Yet, others agree. Cory Doctorow says Nielsen is cranky, but educating the users is not going to solve security issues, and "our tools conspire against us to make us less secure...." Mitch Wagner agrees, saying that "much security is also too complicated for most users to understand."
And they all three agree on Nielsen's first recommendation:
"Encrypt all information at all times, except when it's displayed on the screen. In particular, never send plaintext email or other information across the Internet: anything that leaves your machine should be encrypted."
Welcome to the movement.
It's a probing question. In fact, it goes right to the heart of security's dysfunctionalism. In fact, I don't think I can answer the question. But, glutton for punishment that I am, here's some thoughts.
Signalling that "our stuff is secure" is fairly routine. As Adam suggests, we write blogs and thus establish a reputation that could be blackened if our efforts were not secure. Also, we participate in security forums, and pontificate on matters deep and cryptographic. We write papers, and we write stuff that we claim is secure. We publish our code in open source form. (Some say that's an essential signal, but it only makes a difference if anybody reads it with the view to checking the security. In practice, that simply doesn't happen often enough to matter in security terms, but at least we took the risk.)
All that amounts to us saying we grow peaches, nothing more. Then there are standards. I've employed OpenPGP for this purpose, primarily, but we've also used X.509. Also, it's fairly routine to signal our security by stating our algorithms. We use SHA1, triple DES, DSA, RSA, and I'm now moving over to AES. All wonderful acronyms that few understand, but many know that they are the "safe" ones.
Listing algorithms also points out the paucity of that signal: it still leaves aside how well you use them! For imponderable example, DES used in "ECB mode" achieves one result, whereas in "CBC mode" it achieves a different result. How many know the difference? It's not a great signal if it is that easy to confuse.
So the next level of signalling is to use packages of algorithms. The most famous of these are PGP for email, SSL for browsing, and SSH for Unix administration. How strong are these? Again, it seems to come down to "when used wisely, they are good." Which doesn't imply that the use of them is in any way wise, and doesn't imply that their choice leads to security.
SSL in particular seems to have become a watchword for security, so much so that I can pretty much guarantee that I can start an argument by saying "I don't use SSL because it doesn't add anything to our security model." From my point of view, I'm signalling that I have thought about security, but from the listener's point of view, only a pagan would so defile the brand of SSL.
Brand is very important, and can be a very powerful signal. We all wish we could be the one big name in peach valley, but only a few companies or things have the brand of security. SSL is one, as above. IBM is another. Other companies would like to have it (Microsoft, Verisign, Sun) but for one reason or another they have failed to establish that particular brand.
So what is left? It would appear that there are few positive signals that work, if only because any positive signal that arises gets quickly swamped by the masses of companies lining up for placebo security sales. Yes, everyone knows enough to say "we do AES, we recommend SSL, and we can partner with IBM." So these are not good signals as they are too easy to copy.
Then there are negative signals: I haven't been hacked yet. But this again is hard to prove. How do we know that you haven't been? How do you know? I know one particular company that ran around the world telling everyone that they were the toppest around in security, and all the other security people knew nothing. (Even I was fooled.) Then they were hacked, apparently lost half a mil in gold, and it turned out that the only security was in the minds of the founders. But they kept that bit quiet, so everyone still thinks they are secure...
"I've been audited as unhackable" might be a security signal. But, again, audit companies can be purchased to say whatever is desired; I know of a popular company that secures the planet with its software (or, would like to) that did exactly that - bought an audit that said it was secure. So that's another dead signal.
What's left may well be that of "I'm being attacked." That is, right now, there's a hacker trying to crack my security. And I haven't lost out yet.
That might seem like sucking on a lemon to see if it is sour, but play the game for a moment. If instead of keeping quiet about the hack attacks, I reported the daily crack attempts, and the losses experienced (zero for now), that indicates that some smart cookie has not yet managed to break my security. If I keep reporting that, every day or every month, then when I do get hacked - when my wonderful security product gets trashed and my digital dollars are winging it to digital Brazil - I'm faced with a choice:
Tell the truth, stop reporting, or lie.
If I stop reporting my hacks, it will be noticed by my no longer adoring public. Worse, if I lie, there will be at least two people who know it, and probably many more before the day is out. And my security product won't last if I've been shown to lie about its security.
Telling the truth is the only decent result of that game, and that then forces me to deal with my own negative signal. Which results in a positive signal - I get bad results and I deal with them. The alternates become signals that something is wrong, so any way out, sucking on the lemon will eventually result in a signal as to how secure my product is.
Reading the new SANS list of top 20 vulnerabilities leaves one distinctly uncomfortable. It's not that it is conveniently sliced into top 10s for Unix and Microsoft Windows; I see that as a practical issue when so much of the world is split so diametrically.
The bias is what bothers. The Windows side is written with excruciating care to avoid pointing any blame at Microsoft. For example, wherever possible, general cases are introduced with lists of competing products, before concentrating on how it afflicts Microsoft product in particular. Also, the word Microsoft appears with only positive connotations: You have this Microsoft security tool, whereas you have a buggy Windows application.
One would think that such a bias is just a reflection of SANS' use of institutions and vendors as the source of its security info. For example, "p2p file sharing" is now alleged to be a "vulnerability" which has to be a reflection of the FBI responding to the RIAA over falling sales of CD music.
But what did strike me as totally weird was that phishing wasn't mentioned!
Huh? Surely there can't be a security person on the planet who hasn't heard of phishing and realised that it's one of the top serious issues? Why would SANS not list it as a vulnerability? Is the FBI too busy worrying about Hollywood's bottom lines to concentrate on theft from banks and other payment operators?
The answer is, I think, that the list only includes stuff for which there is a solution. Looking at the website confirms that SANS sells solutions. Scads of them, in fact. Well, it can't sell a solution for phishing because ... there isn't a solution to be sold. Not yet, at least.
Which is to say that the list is misnamed, it's the top 20 solutions we can sell you: SANS says they are "The Trusted Source for Computer Security Training, Certification and Research" and it's unlikely that they can instill that trust in their customers if they teach about a vulnerability they can't also solve.
No doubt they are working on one, as are hundreds of other security vendors. But it does leave one wondering how we go about securing the net when security itself is coopted to other agendas.
It was an impossible task anyway, and more kudos to Amit Yoran for resigning. News that he has quit the so-called "cybersecurity czar" position in the US means that one more person is now available to do good security work out in the private sector.
When it comes to securing cyberspace, we can pretty much guarantee that the less the government (any, you pick) does the better. They will always be behind the game, and always subject to massive pressure from large companies selling snake oil. Security is a game where you only know when you fail, which makes it strongly susceptible to brand, hype, finger pointing and other scams.
There is one thing that the government (specifically, the US federal government this time) could have done to seriously improve our chances of a secure net, and that was to get out of crypto regulation. There was no movement on that issue, so crypto remains in this sort of half-bad half-good limbo area of weakened regulatory controls (open software crypto is .. free, but not the rest). The result of the January 2000 easing was as planned (yes, this is documented strategy): it knocked the stuffing out of the free community's independent push, while still leaving real product skipping crypto because of the costs.
IMNSHO the reason we have phishing, rampant hacking, malware, and countless other plagues is because the US government decided back in days of post-WWII euphoria that people didn't need crypto. Think about it: we built the net, now why can't we secure it?
For about 60 years or more, any large company getting into crypto has had to deal with .. difficulties. (Don't believe me? Ask Sun why they ship Java in "crippled mode.") This is called "barriers to entry," which results in a small group of large companies arising to dominate the field, which further sets the scene for expensive junk masquerading as security.
In the absence of barriers to entry, we'd expect knowledge dispersed and acted upon in a regular fashion just like the rest of the net intellectual capital. Yet, any specialist has to run the gauntlet of .. issues of integrity. Work on free stuff and starve, or join a large company and find yourself polishing hypeware with snake oil.
Of course it's not as bad as I make out. But neither is it as good as some claim it. Fact is, crypto is not deployed like relational databases, networking protocols, virtual machine languages or any of the other 100 or so wonderful and complex technologies we developed, mastered and deployed in the free world known as the Internet. And there's no good reason for that, only bad reasons: US government policy remains anti-crypto, which means US government policy is to not have a secure Internet.
POSTED: 11:32 AM EDT October 1, 2004
WASHINGTON -- The government's cybersecurity chief has abruptly resigned after one year with the Department of Homeland Security, confiding to industry colleagues his frustration over what he considers a lack of attention paid to computer security issues within the agency.
Amit Yoran, a former software executive from Symantec Corp., informed the White House about his plans to quit as director of the National Cyber Security Division and made his resignation effective at the end of Thursday, effectively giving a single day's notice of his intention to leave.
Yoran said Friday he "felt the timing was right to pursue other opportunities." It was unclear immediately who might succeed him even temporarily. Yoran's deputy is Donald "Andy" Purdy, a former senior adviser to the White House on cybersecurity issues.
Yoran has privately described frustrations in recent months to colleagues in the technology industry, according to lobbyists who recounted these conversations on condition they not be identified because the talks were personal.
As cybersecurity chief, Yoran and his division - with an $80 million budget and 60 employees - were responsible for carrying out dozens of recommendations in the Bush administration's "National Strategy to Secure Cyberspace," a set of proposals to better protect computer networks.
Yoran's position as a director -- at least three steps beneath Homeland Security Secretary Tom Ridge -- has irritated the technology industry and even some lawmakers. They have pressed unsuccessfully in recent months to elevate Yoran's role to that of an assistant secretary, which could mean broader authority and more money for cybersecurity issues.
"Amit's decision to step down is unfortunate and certainly will set back efforts until more leadership is demonstrated by the Department of Homeland Security to solve this problem," said Paul Kurtz, a former cybersecurity official on the White House National Security Council and now head of the Washington-based Cyber Security Industry Alliance, a trade group.
Under Yoran, Homeland Security established an ambitious new cyber alert system, which sends urgent e-mails to subscribers about major virus outbreaks and other Internet attacks as they occur, along with detailed instructions to help computer users protect themselves.
It also mapped the government's universe of connected electronic devices, the first step toward scanning them systematically for weaknesses that could be exploited by hackers or foreign governments. And it began routinely identifying U.S. computers and networks that were victims of break-ins.
Yoran effectively replaced a position once held by Richard Clarke, a special adviser to President Bush, and Howard Schmidt, who succeeded Clarke but left government during the formation of the Department of Homeland Security to work as chief security officer at eBay Inc.
Yoran cofounded Riptech Inc. of Alexandria, Va., in March 1998, which monitored government and corporate computers around the world with an elaborate sensor network to protect against attacks. He sold the firm in July 2002 to Symantec for $145 million and stayed on as vice president for managed security services.
Copyright 2004 by The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Addendum: 2004.10.07: The administration's rapid response: Cybersecurity expert Howard Schmidt returning to DHS
In the "war on phishing", which has yet to be declared, there is little good news. Phishing continues to increase, identity theft is swamping police departments, and obscure efforts by the RIAA to assert that CD pirating is now linked to the financing of terrorism grab the headlines. Here's a good article on the victims, and the woe that befalls the common man of the net, while waiting for something to be done about it.
Meantime, what to do? Phishing won't stop until the browser manufacturers - Microsoft, Mozilla, Konqueror, Opera - accept that it's an attack on the browser. The flood of viruses on Microsoft's installed base won't change any time soon either, as underscored by the SP2 message: Microsoft has shown there is no easy patch for a fundamentally broken system.
Don't hold your breath, it will take years. In the meantime, the only thing I can think of for the embattled ordinary user is this: buy a Mac and download Firefox. That won't stop the phishing, but at least they are sufficiently inured against viruses that you won't have to worry about that threat.
Invasion of the identity snatchers
By Kelly Martin, SecurityFocus (kel at securityfocus.com)
Published Friday 24th September 2004 11:32 GMT
Last year I was the victim of identity theft, a sobering reality in today's world. An unscrupulous criminal managed to social engineer his way past the formidable security checks and balances provided by my credit card company, my bank, and one of my investment accounts. He methodically researched my background and personal information until he could successfully impersonate me, and then subsequently set forth to change the mailing addresses of my most important financial statements.
It was a harrowing experience, and one worth explaining in the context of the online world. Numerous visits to the local police and the Canadian RCMP revealed some rather surprising things: identity theft is already so common that there are entire units within law enforcement that deal with this issue every day. They have toll-free numbers, websites and documents that clearly define their incident response procedures. But the reality is, law enforcement will respond to these issues just as you might expect: with phone calls, in-person interviews, and some traditional detective work. It's still very much an analog world around us.
The other thing that became crystal clear during the process of regaining my own identity is this: for as capable as they may be, law enforcement is woefully ill-equipped to track down identity theft that starts online. As a security professional with a healthy dose of paranoia, I was confident that my online identity had not been compromised - a more traditional approach had been used. But with the sophistication of today's viruses, millions of others cannot say the same thing.
While not all identity theft starts online, the fact is that online identity theft is now incredibly easy to do. The same methodical, traditional approach that was used to steal my identity by placing phone calls is being sped up, improved upon, and made ever more lethal by first attacking the victim online. Your banking and credit card information can come later.
We all know how commonplace these technologies already are: keyloggers, Trojans with remote-control capabilities and even webcam control, and backdoors that give access to all your files. There are millions of these installed on infected machines all over the world, lurking in the shadows.
Ever do your taxes on your home computer? All it takes is one Social Insurance Number (or Social Security Number in America), plus some really basic personal information, and you're sunk. Every nugget of information can be worth its weight in gold if, for example, that online banking password that was just logged enables someone to change your address and then, a month later, take out a loan in your name.
The rise of phishing scams over the past two years alludes to this growing menace: your personal information, especially your banking and credit card information, has significant value to a criminal. No surprise there.
Working in the security field, many of us know people who are regularly infected with viruses, worms, Trojans. When it gets bad enough, they reformat and reinstall. I can't count the number of times I've heard people tell me that they're not overly concerned, as they believe that the (often, minimal) personal information on their computer is not inherently valuable. They've clearly never had their personal information put to ill use.
As I was reading the new Threat Report from Symantec, which documents historical virus trends, only the biggest numbers jumped out at me. The average time from vulnerability to exploit is now just 5.8 days. Some 40 per cent of Fortune 100 companies had been infected with worms over a period of six months. There were 4,496 new Microsoft Windows viruses discovered in six months, or an average of 24 new viruses every day. Basically, the epidemic is out of control.
With a few exceptions, however, the most popular and most prominent viruses and worms are not the ones that will be used to steal your identity. It's that carefully crafted email, or that feature-rich and bloated Trojan, that will be used in covert attempts.
Perhaps a suitable solution to the epidemic is a rather old one, and one that I employ myself: encryption of all the personal data that is deemed valuable. I'm not talking about your pictures of Aunt Tilly or your music archive - I'm referring to your tax returns, your financial information, your bill payments, etc. This approach still won't avoid the keyloggers or that remote control Trojan that's sitting on your drive, but it does help to avoid new surprises and mistaken clicks.
And to those users out there whom we deal with every day, who still say there's nothing important on their computer that requires them to care about today's worms, Trojans, viruses, and so on: the day their own information is stolen and used against them is growing ever nearer.
Copyright © 2004, SecurityFocus (http://www.securityfocus.com/)
The DDOS (distributed denial of service) attack is now a mainstream threat. It's always been a threat, but in some sense or other, it's been possible to pass it off. Either it happens to "those guys" and we aren't affected. Or, the nuisance factor of some hacker launching a DOS on our ISP is dealt with by waiting.
Or we do what the security folks do, which is to say we can't do anything about it, so we won't. Ignoring it is the security standard. I've been guilty of it myself, on both sides of the argument: Do X because it would help with DOS, to which the response is, forget it, it won't stop the DOS.
But DOS has changed. First to DDOS - distributed denial of service. And now, the application of DDOS has become over the last year or two so well institutionalised that it is a frequent extortion tool.
Authorize.com, a processor of credit cards, suffered a sustained extortion attack last week. The company has taken a blow as merchants have deserted in droves. Of course, they need to, because they need their payments to keep their own businesses alive. This signals a shift in Internet attacks to a systemic phase - an attack on a payment processor is an attack on all its customers as well.
Hunting around for some experience, Gordon of KatzGlobal.com gave this list of motives for DDOS:
Great list, Gordon! He reports that extortion attacks are 1 in 100 of the normal, and I'm not sure whether to be happier or more worried.
So what to do about DDOS? Well, the first thing that has to be addressed is the security mantra of "we can't do anything about it, so we'll ignore it." I think there are a few things to do.
Firstly, change the objective of security efforts from "stop it" to "not make it worse." That is, a security protocol, when employed, should work as well as the insecure alternative when under DOS conditions. And thus, the user of the security protocol should not feel the need to drop the security and go for the insecure alternative.
Perhaps better put as: a security protocol should be DOS-neutral.
Connection-oriented security protocols have this drawback - SSL and SSH both add delay to the opening of their secure connection from client to server. Packet-oriented or request-response protocols should not, if they are capable of launching with all the crypto included in one packet. For example, OpenPGP mail sends one mail message, which is exactly the same as if it were a cleartext mail. (It's not even necessarily bigger, as OpenPGP mails are compressed.) OpenPGP is thus DOS-neutral.
This makes sense, as connection-oriented protocols are bad for security and reliability. About the best thing you can do if you are stuck with a connection-oriented architecture is to turn it immediately into a message-based architecture, so as to minimise the cost. And, from a DOS pov, it seems that this would bring about a more level playing field.
A second thing to look at is to be DNS-neutral. This means not being slavishly dependent on DNS to convert one's domain names (like www.financialcryptography.com) into the IP number (like 18.104.22.168). Old timers will point out that this still leaves one open to an IP-number-based attack, but that's just the escapism we are trying to avoid. Let's close up the holes and look at what's left.
Finally, I suspect there is merit in make-work strategies in order to further level the playing field. Think of hashcash. Here, a client does some terribly boring and long calculation to find a hash of a special, rare form - say, one beginning with a run of zero bits. Because secure message digests look random, finding a hash with such structure is lots of work, while checking one costs almost nothing.
So how do we use this? Well, one way to flood a service is to send in lots of bogus requests. Each of these requests has to be processed fully, and if you are familiar with the way servers are moving, you'll understand the power needed for each request. More crypto, more database requests, that sort of thing. It's easy to flood them by generating junk requests, which is far easier to do than to generate a real request.
The solution then may be to put guards further out, and insist that the client's work matches or exceeds the server's load. Under DOS conditions, the guards far out on the net look at packets and return errors if the work factor is not sufficient. The error returned can include the current to-the-minute standard. A valid client with one honest request can spend a minute calculating. A DDOS attack will then have to do much the same - each minute, using the new work factor parameters.
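The guard scheme above can be sketched in a few lines. This is a minimal illustration of the idea, not any deployed protocol; the nonce, the difficulty figure, and the request string are all invented for the example.

```python
import hashlib
import itertools

def make_challenge(difficulty_bits):
    # Guard side: announce the current work factor (raised under attack).
    return {"nonce": "server-nonce-123", "bits": difficulty_bits}

def solve(challenge, payload):
    # Client side: grind counters until the hash has the required number
    # of leading zero bits. Expensive on average: ~2^bits hash attempts.
    target = 1 << (256 - challenge["bits"])
    for counter in itertools.count():
        digest = hashlib.sha256(
            f'{challenge["nonce"]}:{payload}:{counter}'.encode()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return counter

def verify(challenge, payload, counter):
    # Guard side: a single hash to check - cheap for the guard,
    # costly for the client, which is the asymmetry we want.
    digest = hashlib.sha256(
        f'{challenge["nonce"]}:{payload}:{counter}'.encode()
    ).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - challenge["bits"]))

challenge = make_challenge(difficulty_bits=16)   # ~65,000 hashes on average
counter = solve(challenge, "GET /checkout")
assert verify(challenge, "GET /checkout", counter)
```

A flooder must now pay the same per-request price as an honest client, and the guard can ratchet `difficulty_bits` up minute by minute while under attack.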
Will it work? Who knows - until we try it. The reason I think of this is that it is the way many of our systems are already structured - a busy tight center, and a lot of smaller lazy guards out front. Think Akamai, think smart firewalls, think SSL boxes.
The essence though is to start thinking about it. It's time to change the mantra - DOS should be considered in the security equation, if only at the standards of neutrality. Just because we can't stop DOS, it is no longer acceptable to say we ignore it.
WebTrust is the organisation that sets auditing standards for certificate authorities. Its motto, "It's a matter of trust," is of course the marketing message they want you to absorb, and subject to skepticism. How deliciously ironic, then, that when you go to their site and click on Contact, you get redirected to another domain that uses the wrong certificate!
http://www.cpawebtrust.org/ is the immoral re-user of WebTrust's certificate. It's a presumption that the second domain belongs to the same organisation (the American Institute of Certified Public Accountants, AICPA), but the information in whois doesn't really clear that up due to conflicts and bad names.
What have WebTrust discovered? That certificates are messy, and are thus costly. This little cert mess is going to cost them a few thousand to sort out, in admin time, sign-off, etc etc. Luckily, they know how to do this, because they're in the business of auditing CAs, but they might also stop to consider that this cost is being asked of millions of small businesses, and this might be why certificate use is so low.
There's a big debate going on in the US and Canada about who is going to pay for Internet wiretapping. In case you hadn't been keeping up, Internet wiretapping *is* coming. The inevitability of it is underscored by the last-ditch efforts of the ISPs to refer to older Supreme Court rulings that the cost should be picked up by those requiring the wiretap. I.e., it's established in US law that the cops should pay for each wiretap.
I got twigged to a new issue by an article that said:
"To make wiretapping possible, Internet phone companies have to buy equipment and software as well as hire technicians, or contract with VeriSign or one of its competitors. The costs could run into the millions of dollars, depending on the size of the Internet phone company and the number of government requests."
What caught me by surprise was the mention of VeriSign. So I looked, and it seems they *are indeed* in the business of subpoena compliance. I know most won't believe me, given their public image as a trusted ecommerce player, so here's the full page:
NetDiscovery Service for CALEA Compliance
Complete Lawful Intercept Service
VeriSign's NetDiscovery service provides telecom network operators, cable operators, and Internet service providers with a streamlined service to help meet requirements for assisting government agencies with lawful interception and subpoena requests for subscriber records. NetDiscovery is the premier turnkey service for provisioning, access, delivery, and collection of call information from operators to law enforcement agencies (LEAs).
Reduce Operating Expenses
Compliance also requires companies to maintain extensive records and respond to government requests for information. The NetDiscovery service converts content into required formats and delivers the data directly to LEA facilities. Streamlined administrative services handle the provisioning of lawful interception services and manage system upgrades.
One Connection to LEAs
Compliance may require substantial capital investment in network elements and security to support multiple intercepts and numerous law enforcement agencies (LEAs). One connection to VeriSign provides provisioning, access, and delivery of call information from carriers to LEAs.
Industry Expertise for Continued Compliance
VeriSign works with government agencies and LEAs to stay up-to-date with applicable requirements. NetDiscovery customers benefit from quick implementation and consistent compliance through a single provider.
CALEA is the name of the bill that mandates law enforcement agency (LEA) access to telcos - each access should carry a cost. The cops don't want to pay for it, and neither do the suppliers. Not to mention, nobody really wants to do this. So in steps VeriSign with a managed service to handle wiretaps, eavesdropping, and other compliance tasks as directed under subpoena. On first blush, very convenient!
Here's where the reality meter goes into overdrive. VeriSign is also the company that sells about half of the net's SSL certificates for "secure ecommerce." These SSL certificates are what presumptively protect connections between consumers and merchants. It is claimed that a certificate that is signed by a certificate authority (CA) can protect against the man-in-the-middle (MITM) attack and also domain name spoofing. In security reality, this is arguable - they haven't done much of a job against phishing so far, and their protection against some other MITMs is somewhere between academic and theoretical.
A further irony is that VeriSign also runs the domain name system for the .com and the .net domains. So, indeed, they do have a hand in the business of domain name spoofing; the trivial ease of mounting this attack has in many ways influenced the net's security architecture by raising domain spoofing to something that has to be protected against. But so far nothing much serious has come of that.
But getting back to the topic of the MITM protection afforded by those expensive VeriSign certificates. The point here is that, on the one hand, VeriSign is offering protection from snooping, and on the other hand, is offering to facilitate the process of snooping.
The fox guarding the chicken coop?
Nobody can argue the synergies that come from the engineering aspects of such a mix: we engineers have to know how to attack it in order to defend it. This is partly the origin of the term "hacker," being one who has to crack into machines ... so he can learn to defend.
But there are no such synergies in governance, nor I fear in marketing. Can you say "conflict of interest?" What is one to make of a company that on the one hand offers you a "trustworthy" protection against attack, and on the other hand offers a service to a most likely attacker?
Marketing types, SSL security apologists and other friends of VeriSign will all leap to their defence here and say that no such thing is possible. Or even if it were, there are safeguards. Hold on to that thought for a moment, and let's walk through it.
How to MITM the CA-signed Cert, in one easy lesson
Discussions on the cryptography list recently brought up the rather stunning observation that the Certificate Authority (CA) can always issue a forged certificate and there is no way to stop this. Most attack models on the CA had assumed an external threat; few consider the insider threat. And fair enough, why would the CA want to issue a bogus cert?
In fact the whole point of the PKI exercise was that the CA is trusted. All of the assumptions within secure browsing point at needing a trusted third party to intermediate between two participants (consumer and merchant), so the CA was designed by definition to be that trusted party.
Until we get to VeriSign's compliance division, that is. Here, VeriSign's role is to facilitate the "provisioning of lawful interception services" with its customers, ISPs amongst them. Such services might be invoked from a subpoena to listen to the traffic of some poor Alice, even if said traffic is encrypted.
Now, we know that VeriSign can issue a certificate for any one of their customers. So if Alice is protected by a VeriSign cert, it is an easy technical matter for VeriSign, pursuant to subpoena or other court order, to issue a new cert that allows them to man-in-the-middle the naive and trusting Alice.
It gets better, or worse, depending on your point of view. Due to a bug in the PKI (the public key infrastructure based on x.509 keys that manages keys for SSL), all CAs are equally trusted. That is, there is no firewall between one certificate authority and another, so VeriSign can issue a cert to MITM *any* other CA-issued cert, and every browser will accept it without saying boo.
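That bug can be seen in a toy model of the browser's trust decision. This is not real X.509 validation - the certificates here are plain dictionaries and the CA names are invented - but it shows the structural point: acceptance depends only on the issuer being somewhere in the root store, and nothing ties a domain to the CA that first vouched for it.

```python
# Toy model of browser certificate acceptance (not real X.509).
# Every root in the store is equally trusted for every domain.
TRUSTED_ROOTS = {"HonestCA", "BigCommercialCA"}

def browser_accepts(cert):
    # The only check: was the cert signed by *some* trusted root?
    return cert["issuer"] in TRUSTED_ROOTS

# Alice's genuine certificate, issued by HonestCA.
real_cert = {"domain": "mail.example.org",
             "key": "alice-key",
             "issuer": "HonestCA"}

# A second root, compelled by court order, issues a parallel certificate
# for the same domain with a key the interceptor controls.
mitm_cert = {"domain": "mail.example.org",
             "key": "intercept-key",
             "issuer": "BigCommercialCA"}

# The browser cannot tell the difference: both validate identically.
assert browser_accepts(real_cert)
assert browser_accepts(mitm_cert)
```

Any defence against this has to come from outside the model - pinning a domain to its issuing CA, or alerting the user when the issuer changes - and the browsers of the day did neither.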
Technically, VeriSign has the skills, they have the root certificate, and now they are in the right place. MITM never got any easier. Conceivably, under orders from the court, VeriSign would now be willing to conduct an MITM against its own customers and its own certs, in every place that it has a contract for LEA compliance.
Governance? What Governance?
All that remains is the question of whether VeriSign would do such a thing. The answer is almost certainly yes. Normally, one would say that the user's contract, the code of practice, and the WebTrust audit would prevent such a thing. After all, that was the point of all the governance and contracts and signing laws that VeriSign wrote back in the mid 90s - to make the CA into a trusted third party.
But, a court order trumps all that. Judges strike down contract clauses, and in the English common law and the UCC, which is presumably what VeriSign operates under, a judge can strike out clauses in the law or even strike down an entire law.
Further, the normal way to protect against overzealous insiders or conflicts of interest is to split the parties: one company issues the certs, and another breaches them. Clearly, the first company works for its clients and has a vested interest in protecting them. Such a CA will go to the judge and argue against a cert being breached, if it wants to keep selling its wares.
Yet, in VeriSign's case, it's also the agent for the ISP / telco - and they are the ones who get it in the neck. They are paying a darn sight more money to VeriSign to make this subpoena thing go away than Alice ever paid for her cert. So it comes down to "big ISP compliance contract" versus "one tiny little cert for a dirtbag who's probably a terrorist."
The subpoena wins all ways, well assisted by economics. If the company is so ordered, it will comply, because it is its stated goal and mission to comply, and it's paid more to comply than to not comply.
All that's left, then, is to trust in the fairness of the American juridical system. Surely such a fight of conscience would be publicly viewed in the courts? Nope. All parties except the victim are agreed on the need to keep the interception secret. VeriSign is protected in its conflict of interest by the judge's order of silence on the parties. And if you've been following the news about PATRIOT 1,2, National Security Letters, watchlists, no-fly lists, suspension of habeas corpus, the Plame affair, the JTTF's political investigations and all the rest, you'll agree there isn't much hope there.
What are we to do about it?
Then, what's VeriSign doing issuing certs? What's it doing claiming that users can trust it? And more apropos, do we care?
It's pretty clear that all three of the functions mentioned today are real functions in the Internet market place. They will continue, regardless of our personal distaste. It's just as clear that a world of Internet wire-tapping is a reality.
The real conflict of interest here is in a seller of certs also being a prime contractor for easy breachings of certs. As it's the same company, and as both functions are free market functions, this is strictly an issue for the market to resolve. If conflict of interest means anything to you, and you require your certs to be issued by a party you can trust, then buy from a supplier that doesn't also work with LEAs under contract.
At least then, when the subpoena hits, your cert signer will be working for you, and you alone, and may help by fighting the subpoena. That's what is meant by "conflict of interest."
I certainly wouldn't recommend that we cry for the government to fix this. If you look at the history of these players, you can make a pretty fair case that government intervention is what got us here in the first place. So, no rulings from the Department of Commerce or the FCC, please, no antitrust law suits, and definitely no Star Chamber hearings!
Yet, there are things that can be done. One thing falls under the rubric of regulation: ICANN controls the top level domain names, including .net and .com, which are currently contracted to VeriSign. At least, ICANN claims titular control, and it fights against VeriSign, the Department of Commerce, various other big players, and a squillion lobbyists in exercising that control.
It would seem that if conflict of interest counts for anything, removing the root server contracts from VeriSign would indicate displeasure at such a breach of confidence. Technically, this makes sense: since when did we expect DNS to be anything but a straightforward service to convert domain names into numbers? The notion that the company now has a vested interest in engaging in DNS spoofing raises a can of worms that I suspect even ICANN didn't expect. Being paid to spoof doesn't seem like it would be on the list of suitable synergies for a manager of root servers.
Alternatively, VeriSign could voluntarily divest one or other of the snooping / anti-snooping businesses. The anti-snooping business would then be a potential choice to run the DNS roots, reflecting their natural alignment of interests.
 This only makes sense. If the cops didn't pay, they'd have no brake on their activity, and they would abuse the privilege extended by the law and the courts.
 Ken Belson, Wiretapping on the Net: Who pays? New York Times, http://www.iht.com/articles/535224.htm
 Check the great statistics over at SecuritySpace.com.
 In brief, I know of these MITMs: phishing, click-thru-syndrome, CA-substitution. The last has never been exploited, to my knowledge, as most attacks bypass certificates, and attack the secure browsing system at the browser without presenting an SSL certificate.
 D. Atkins, R. Austein, Threat Analysis of the Domain Name System (DNS), RFC 3833.
 There was the famous demonstration by some guy trying to get into the DNS business.
 Most likely? 'fraid so. The MITM is extraordinarily rare - so rare that it is unmeasurable and to all practical intents and purposes, not a practical threat. But, as we shall see, this raises the prospects of a real threat.
 VeriSign, op cit.
 I'm skipping here the details of who Alice is, etc., as they are not relevant. For the sake of the exercise, consider a secure web mail interface that is hosted in another country.
 Is the all-CAs-are-equal bug written up anywhere?
 There is an important point which I'm skipping here, that the MITM is way too hard under ordinary Internet circumstances to be a threat. For more on that, see Who's afraid of Mallory Wolf?.
 This is what is happening in the cases of RIAA versus the ISPs.
 Just this week: VeriSign to fight on after ICANN suit dismissed
U.S. Federal District Court Dismisses VeriSign's Anti-Trust Claim Against ICANN with Prejudice and the Ruling from the Court.
Today: VeriSign suing ICANN again
All forms of security are about cost/benefit and risk analysis. But people have trouble with the notion that something is only secure up to a certain point. So suppliers often pretend that their product is totally secure, which leads to interesting schisms between the security department and the marketing department.
Secrecy is one essential tool in covering up the yawning gulf between the public's need to believe in absolute security, and the supplier's need to deliver a real product. Quite often, anything to do with security is kept secret. This is claimed to deliver more protection, but that protection, known as "security by obscurity," can lead to a false sense of security.
In my experience, another effect often occurs: institutional cognitive dissonance surrounding the myth of absolute security leads to security paralysis. Not only is the system secure, by fiat, but any attempt to point out the flaws is treated as somewhere between an affront and a crime. Then, when the break occurs, regardless of the many unheeded warnings, widespread shock spreads rapidly as beliefs are shattered.
Anyway, getting to the point: banks and other FIs rarely reveal how much security is built in, using real numbers. Below, the article reveals a dollar number for an attack on a PIN Entry Device (PED). For those in a hurry, skip down to the emboldened sections, halfway down.
Addendum: This article, Getting Naked for Big Brother, amply makes this point.
Behold the modern automated teller machine, a tiny mechanical fortress in a world of soft targets. But even with all those video cameras, audit trails, and steel reinforced cash vaults, wily thieves armed with social engineering techniques and street technology are still making bank. Now the financial industry is working to close one more chink in the ATM's armor: the humble PIN pad.
Last year Visa International formally launched a 50-point security certification process for "PIN entry devices" (PEDs) on ATMs that accept Visa. The review is exhaustive: an independent laboratory opens up the PED and probes its innards; it examines the manufacturing process that produced the device; and it attacks the PED as an adversary might, monitoring it, for example, to ensure that no one can identify which buttons are being pressed by sound or electromagnetic emission. "If we are testing a product that is essentially compliant, we typically figure it's about a four week process," says Ken Kolstad, director of operations at California-based InfoGard, one of three certification labs approved by Visa International worldwide.
If that seems like a lot of trouble over a numeric keypad, you haven't cracked open an ATM lately. The modern PED is a physically and logically self contained tamper-resistant unit that encrypts a PIN within milliseconds of its entry, and within centimeters of the customer's fingertips. The plaintext PIN never leaves the unit, never travels over the bank network, isn't even available to the ATM's processor: malicious code running on a fully compromised Windows-based ATM machine might be able to access the cash dispenser and spit out twenties, but in theory it couldn't obtain a customer's unencrypted ATM code.
The credit card companies have played a large role in advancing the state of this obscure art. In addition to Visa's certification program, MasterCard has set a 1 April 2005 deadline for ATMs that accept its card to switch their PIN encryption from DES to the more secure Triple DES algorithm (some large networks negotiated a more lenient deadline of December 2005). But despite these efforts, the financial sector continues to suffer massive losses to increasingly sophisticated ATM fraud artists, who take home some $50m a year in the U.S. alone, according to estimates by the Electronic Funds Transfer Association (EFTA). To make these mega withdrawals, swindlers have developed a variety of methods for cloning or stealing victims' ATM and credit cards.
Some techniques are low-tech. In one scam that Visa says is on the rise, a thief inserts a specially-constructed sleeve in an ATM's card reader that physically captures the customer's card. The con artist then lingers near the machine and watches as the frustrated cardholder tries to get his card back by entering his PIN. When the customer walks away, the crook removes the sleeve with the card in it, and makes a withdrawal.
At the more sophisticated end, police in Hong Kong and Brazil have found ATMs affixed with a hidden magstripe reader attached to the mouth of the machine's real reader, expertly designed to look like part of the machine. The rogue reader skims each customer's card as it slides in. To get the PIN for the card, swindlers have used a wireless pinhole camera hidden in a pamphlet holder and trained on the PED, or fake PIN pads affixed over the real thing that store the keystrokes without interfering with the ATM's normal operation. "They'll create a phony card later and use that PIN," says Kurt Helwig, executive director of the EFTA. "They're getting pretty sophisticated on the hardware side, which is where the problem has been."
Visa's certification requirements try to address that hardware assisted fraud. Under the company's standards, each PED must provide "a means to deter the visual observation of PIN values as they are being entered by the cardholder". And the devices must be sufficiently resistant to physical penetration so that opening one up and bugging it would either cause obvious external damage, cost a thief at least $25,000, or require that the crook take the PIN pad home with him for at least 10 hours to carry out the modification.
"There are some mechanisms in place that help protect against some of these attacks... but there's no absolute security," says InfoGard's Kolstad. "We're doing the best we can to protect against it."
That balancing approach - accounting for the costs of cracking security, instead of aspiring to be unbreakable - runs the length and breadth of Visa's PED security standards. Under one requirement, any electronics utilizing the encryption key must be confined to a single integrated circuit with a geometry of one micron or less, or be encased in Stycast epoxy. Another requirement posits an attacker with a stolen PED, a cloned ATM card, and knowledge of the ciphertext PIN for that card. To be compliant, the PED must contain some mechanism to prevent this notional villain from brute forcing the PIN with an array of computer-controlled solenoid fingers programmed to try all possible codes while monitoring the output of the PED for the known ciphertext.
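To make that notional villain concrete, here is a minimal sketch of the brute-force search a compliant PED must somehow frustrate (for instance by rate-limiting or tamper response). This is my illustration only: a hash stands in for the PED's real DES/3DES PIN-block encryption, and the key and PIN are hypothetical.

```python
import hashlib

# hashlib stands in for the PED's real PIN encryption; key and PIN are
# hypothetical values for the sketch.
PED_KEY = b"hypothetical-ped-key"

def ped_encrypt(pin):
    """What the attacker observes on the PED's output for a given PIN entry."""
    return hashlib.sha256(PED_KEY + pin.encode()).digest()

def solenoid_brute_force(known_ciphertext):
    """The 'solenoid fingers' attack: try every four-digit PIN,
    watching the PED's output for the known ciphertext."""
    for candidate in range(10_000):
        pin = "%04d" % candidate
        if ped_encrypt(pin) == known_ciphertext:
            return pin
    return None

observed = ped_encrypt("4071")         # the ciphertext the attacker already holds
print(solenoid_brute_force(observed))  # → 4071
```

With only 10,000 possible codes, an unprotected PED falls in seconds, which is why the standard demands an explicit countermeasure rather than relying on the key space.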
"In fact, these things are quite reasonable," says Hansup Kwon, CEO of Tranax Technologies, an ATM company that submitted three PEDs for approval to InfoGard. Before its PIN pads could be certified, Tranax had the change the design of the keycaps to eliminate nooks and crannies in which someone might hide a device capable of intercepting a cardholder's keystrokes. "We had to make the keypad completely visible from the outside, so if somebody attacks in between, it's complete visible," says Kwon.
Where Visa went wrong, Kwon says, is in setting an unrealistic timetable for certification. When Visa launched the independent testing program last November, it set a 1 July deadline: any ATMs rolling into service after that date would have to have laboratory certified PIN pads, or they simply couldn't accept Visa cards.
That put equipment makers in a tight spot, says Kwon. "It's almost a six months long process... If you make any design modification, it takes a minimum of three months or more to implement these changes," he says. "So there was not enough time to implement these things to meet the Visa deadline."
Visa International's official position is that they gave manufacturers plenty of time - 1 July saw 31 manufacturers with 105 PIN pads listed on the company's webpage of approved PEDs. But in late June, with the deadline less than a week away, Visa suddenly dropped the certification deadline altogether. "I think what we realized was that it was important to work with the other industry players," says spokesperson Sabine Middlemass.
Visa says it's now working with rival MasterCard to develop an industry wide standard before setting a new deadline for mandatory compliance. In the meantime, the company is encouraging vendors to submit their PIN pads for certification under the old requirements anyway, voluntarily, for the sake of security.
Copyright © 2004, SecurityFocus (http://www.securityfocus.com/)
Ever since the BA crash in the early 90s, when an engine failed on takeoff and the pilots, confused by their instruments, shut down the wrong one, mobile phones have been banned on British aircraft, and other countries more or less followed suit. Cell phones (mobiles, as they are called in many countries) were blamed initially, and as some say, it's guilty until proven innocent in air safety.
Now there is talk of allowing them again. They should never have been banned in the first place. Here's why.
(As a security engineer, it's often instructive to reverse-engineer the security decisions of other people's systems. Security is like economics: we don't get to try out our hypotheses except in real life. So we have to practice where we can. Here is a security-based analysis on whether it's safe to fly and dial.)
In security, we need a valid threat. Imagined threats are a waste of time and money. Once we identify and validate the threat (normally, by the damage it does) we create a regime to protect against it. Then, we conduct some sort of test to show that the protection works. Otherwise, we are again wasting our time and money. We would be negligent, as it were, because we are wasting the client's money and potentially worse if we get it wrong.
Now consider pocket phones. It's pretty easy to see they are an imagined threat - there is no validated case. But skip that part and consider the protection - banning mobile phones.
Does it work? Hell no. If you have a 747 full of people, what is the statistical likelihood of people leaving their phone on accidentally? Quite significant, really. Enough that there is going to be a continual, ever present threat of transmissions. Inescapably, mobile phones are on when the plane takes off and lands - through sheer accidental activity.
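The arithmetic is easy to sketch. The passenger count and the per-passenger forgetfulness rate below are my illustrative assumptions, not measured figures:

```python
# Probability that at least one phone is accidentally left on, assuming a
# full 747 (~400 passengers) and a hypothetical 1% chance per passenger of
# forgetting to switch off. Both inputs are illustrative assumptions.
n_passengers = 400
p_left_on = 0.01

p_at_least_one = 1 - (1 - p_left_on) ** n_passengers
print("%.1f%%" % (100 * p_at_least_one))   # → 98.2%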
In real safety systems, asking people not to do it is stupid. If it has to be stopped, it has to be stopped proactively. Which means one of three things:
If planes are vulnerable, then the operators have to respond. As they haven't responded, we can easily conclude that the planes are not vulnerable. If it turns out that they are vulnerable, then instead of the warnings being justified as some might have it, we have a different situation:
The operators would be negligent. Grossly and criminally, probably, as if a plane were to go down through cell phone interference, saying "but we said 'turn it off'" simply doesn't cut the mustard.
So, presumably, planes are not vulnerable to cell phones.
PS: so why did operators ban phones? Two reasons that I know of. In the US, there were complaints that the fast moving phones were confusing the cells. Also, the imminent roll-out of in-flight phones in many airlines was known to be a dead duck if passengers could use their cellphones...
 To talk or not to talk, Rob Bamforth
 Miracles and Wonders By Alan Cabal
 This extraordinarily flawed security analysis leaves one gaping... but it does show that if a cellphone is blasting away 30cm from flight deck equipment, there might be a problem.
Almost forgotten in the financial world, but e-gold, the innovative digital gold currency issuer based in Florida, USA (and nominally in Nevis, East Caribbean), was one of the biggest early targets for phishing. Because of their hard money policy, stolen e-gold has always been as highly prized by crooks as by its fan base of libertarians and gold enthusiasts.
Now it seems that they may have had success in stopping phishing and keylogging attacks; anecdotal reports indicate that their AccSent program has delivered the goods. The company rarely announces anything these days, but the talk among the merchants and exchangers is that there's been relative peace since May. Before that, again anecdotally, losses seemed to be in the "thousands per day" mark, which racks up about a million over a year. Small beer for a major financial institution, but much more serious for e-gold which has on the order of $10 million in float.
From the feelings of merchants, it seems to have been somewhere between totally successful and stunningly successful. Nobody's prepared to state what proportion has been eliminated, but around 90% success rate is how I'd characterise it. Here's how it works, roughly:
"AccSent monitors account access attempts and issues a one-time PIN challenge to those coming from IP address ranges or browsers that differ from the last authorized account access. The AccSent advantage is that e-gold Users need not take any action - or even understand what an IP address or a phishing attack is - to immediately benefit from this innovative new feature. However, as powerful as AccSent is, the best protection against phishing and other criminal attacks is user education."
If it stomps phishing and keylogging dead for e-gold, is this a universal solution? I don't think so. As welcome as it is, I suspect all this has done is pushed the phishers over to greener pastures - mainstream banks. If every financial institution were to implement this, then the phishers would just get more sophisticated.
But in the meantime, this is a programme well worth emulating as even if it makes it just hard enough to push the victims down the street to the next muggins, that's welcome. This is the equivalent of putting deadlocks on your doors. The point is not to make your house impenetrable, but to make it harder than your neighbour's house.
It's also welcome in that any defence allows the people who have to deal with this get to grips with phishing and keylogging attacks in a concrete manner. Until now, there's been precious little but hot air. Concrete benefits lead to understanding and better benefits. Hot air just leads to balloons.
 Earliest report of a phishing attack on the company is 2001.
 See the May Scale, the essential Internet monetarist's guide showing e-gold at #3.
 e-gold's Account Sentinel is described here: http://e-gold.com/accsent.html
 Compare this with the guesstimates of around a billion for mainstream phishing losses:
 A news snippet here: http://e-gold.com/news.html
Will Kamishlian has written an essay on the question I posed on the Internet security community last week: "why is the community being ignored?" It's a good essay, definitely worth reading for those looking for the "big" perspective on the Internet. Especially great is the tie-in to previous innovative booms and consequent failures in quality. Does it answer the question? You be the judge!
Here's the problem of personal information vulnerability as I see it, from the lowest to the highest level.
1. User Level
The problem of phishing stems from the rendering of HTML from within an email client. I started to receive phish email long before I knew about the problem. I ignored these email messages because the URL in the email did not match that of my bank, etc. This would not be the case were I using a client that rendered messages in HTML.
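The giveaway the author relied on - the visible URL not matching the real destination - is exactly what HTML rendering hides. Here is a rough sketch of the check, my illustration rather than any mail client's actual logic:

```python
# Flag links whose visible text names one domain while the underlying
# href points at another -- the classic phishing giveaway.
from html.parser import HTMLParser
from urllib.parse import urlparse

class PhishLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None       # href of the anchor we are currently inside
        self.text = ""         # visible text accumulated for that anchor
        self.suspect = []      # (shown domain, actual domain) mismatches

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = urlparse(self.text.strip()).netloc or self.text.strip()
            actual = urlparse(self.href).netloc
            if "." in shown and shown != actual:
                self.suspect.append((shown, actual))
            self.href = None

# A hypothetical phish: the text claims the bank, the href goes elsewhere.
email_body = '<a href="http://evil.example.net/login">http://www.mybank.com</a>'
finder = PhishLinkFinder()
finder.feed(email_body)
print(finder.suspect)   # → [('www.mybank.com', 'evil.example.net')]
```

A text-mode client makes this mismatch visible for free; an HTML-rendering client would have to run a check like this deliberately.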
2. Software Provider Level
Users want their Internet experience to be seamless and user-friendly. Therefore, software providers are going to continue to add new features and functionality to their email clients, browsers, etc. in a quest to provide an *easy* experience. Therefore, problem #1 is not going away. In fact, it will get worse. As new features are added, so too will new vulnerabilities. Software providers will -- as they have in the past -- patch these vulnerabilities piecemeal.
3. Industry Level
At the industry level, a widespread disagreement will remain about how to pro-actively protect the user. Individual software providers will resist any intervention that may potentially limit the features that they can provide to users. Therefore, problem # 2 is not going away.
At the moment, the problem could be solved at the industry level. As the article notes, there is no dearth of security experts or advice. The professionals exist. What is needed is a consortium that can agree on an extremely basic security model from which all security aspects can be devolved -- from models to specifications. The model must be so basic that no provider can argue with its validity. Extensions to this basic model would provide lower-level models from which specifications and certifications could be devolved.
4. Societal Level
Users want more features, and providers want to satisfy users; however, in doing so, users are getting harmed. In the long term, there are four possible scenarios for this state of affairs:
* Over time, users will accept the harm and adopt the technology
* Industry will adopt universal technology to prevent harm
* Industry will create a third party to ensure safe guards
* Government will step in to introduce safe guards
By long term, I refer to a state of maturation for electronic commerce over the Internet.
Ian's article caught my eye because I am a student of history, and have been tracking the Internet boom against other booms in the past, such as that of the growth of the railroad industry, from its inception to its maturity.
The growth of the Internet mirrors the birth, boom and maturity of several industries. These come quickly to mind: the railroad, electrical, airline and automotive industries.
In each of these industries, consumers initially accepted a relatively high risk of potential harm as a cost of doing business. As each of these industries matured, consumer pressure produced improvements in consumer safety. Note that the author refers to these industries as they grew in the United States.
Regarding the current state of affairs for electronic commerce, we can draw lessons from the means by which each of the above industries responded to the pressures for consumer safety. Each of the above responded in a unique manner to societal pressures for consumer safety.
Major railroad disasters caught the public attention (much as airline disasters do now) during the 1860's and 1870's. Over time, industry players universally adopted safety technology -- primarily the telegraph and air brakes -- that did much to improve consumer safety. The final piece of railroad consumer safety was put in place when the industry convened to adopt standard time -- an institution that lives with us today. The universal adoption of new technology and of standard time was possible because by the 1870's there were a few major players, which could impose their will on the entire industry, and which could foresee that consumer safety improvements would lead to increased revenues.
The electrical industry responded in a manner much different from that of the railroad industry. Unlike the railroad industry, there were no headlines of failures leading to the deaths of tens of people at a time. On the other hand, adoption of electricity in the home was slowed by the fact that electricity represented a very new and unknown (to the consumer) technology, so that electrical accidents -- in the industry's infancy -- were accorded the horror of the unknown.
During the electrical industry's infancy, major players realized that they could achieve widespread adoption by ensuring consumer safety. At the same time, insurance companies were often bearing the cost for electrical catastrophes. The insurance companies, in conjunction with the electrical industry, created the Underwriters' Laboratory -- another institution that lives with us today. Thus, a third party was created with the single goal of providing consumer safety.
Consumer safety in the airline industry progressed in a way dissimilar to both of the above. During the 1930's, the public was horrified by news accounts of airline disasters (much as the public had been horrified by train disasters decades earlier). The difference between the 1930's and the 1870's is that by the 1930's, consumers had adopted the notion that the government could and should enact legislation to ensure consumer safety. As a result, societal pressures built to the point that politicians quickly stepped in to provide consumer safety, and the forerunner to the Federal Aviation Administration (FAA) was formed. The FAA now influences airline passenger safety on a worldwide level.
The automotive industry comes last on this list because it was forced to adopt consumer safety measures long after this industry had matured. The reasons for this are that unlike the railroad and airline industries, catastrophes did not make front page news, and unlike the electrical industry, the technology did not represent the unknown -- automobiles were an advancement on the horse and buggy. Until the 1960's, consumers accepted a relatively low level of safety because safety was viewed as a consumer issue.
Safety in the automotive industry changed in the 1960's, due in large part to the efforts of Ralph Nader. Nader used statistics to demonstrate the lack of concern on the part of the automotive industry. While individual car crashes did not make front page news, Nader's statistics of thousands of preventable deaths did make news. Auto makers did not initially respond to the societal pressure that Nader created, and as a result, the government stepped in. The Department of Transportation began to promulgate automotive safety regulation.
During the 1970's class action lawsuits brought a new type of pressure on the automotive industry. Huge payouts in cases, such as the Ford Pinto suit, brought us a new breed of lawyer adept at winning large sums from juries sympathetic to descriptions of death and bodily harm caused by negligence on the part of manufacturers. I do not have a strong opinion on what the net effect that these lawsuits have had on consumer safety; however, I suspect that these lawsuits have increased costs to the consumer in greater measure than they have improved overall consumer safety. Class action lawsuits are won when the defendant is found *provably* negligent. The lesson to the industry is to not be caught *provably* negligent.
The Electronic Commerce Industry
Where does all of this history leave us? We can find relevant historical conditions from each of these industries, and from these conditions, we can plan for the future of electronic commerce. From the foregoing, we can accept that consumers expect the government to step in quickly whenever an industry is viewed as negligent with regard to consumer safety (as in the airline and automotive industries).
The infancy of the electronic commerce industry is similar to that of the electrical industry in that the Internet has an aspect of the unknown, although unlike the electrical industry, failures and accidents in the electronic commerce industry do not lead to death or injury. Nevertheless, we can expect that consumer adoption of electronic commerce will be slowed until consumers are reassured that their safety is protected by a technology they do not understand.
Unlike the railroad and airline industries, failures in electronic commerce do not usually make front page news. On the other hand, politicians and interest groups are beginning to weigh in with statistics -- at a governmental level. We can expect that there will be pressure on the government to regulate this industry. Witness the quick passage of legislation designed to prevent spam.
Like the railroad industry, there are a few major players (that provide electronic commerce software) that could move to self-regulate the entire industry. To simplify the current state of affairs, Microsoft will not adhere to standards that it either does not control or that may limit its ability to offer new features. Other players are loath to adhere to standards that Microsoft controls. Therefore, we cannot expect that the major players in the software industry will move to self-regulate *unless*, as was the case with the railroad industry, the major players come to believe that cooperation would lead to higher revenues for all participants.
Unlike the railroad industry, it is unlikely that a massive improvement in consumer safety could result from universal adoption of a few key pieces of technology. Electronic commerce, like the airline industry, has too many points of potential failure for a simple widespread solution. Therefore, we cannot expect technology to come to the rescue.
The interesting thing about the electrical industry is that it was insurers who moved to form the UL because insurers paid the costs of electrical catastrophes. At the moment, the costs of electronic commerce failures are being borne by consumers and a wide variety of providers (banks, retailers, etc.). The lesson here is that an industry bearing the costs of failure in another industry can act in concert to compel improvements in consumer safety.
Coming lastly to the automotive industry, we can see a parallel in that consumer safety in electronic commerce is largely viewed as a cost of doing business. Most consumers recognize that risks exist, however unknowable, yet this is accepted as the cost of conducting business online. Electronic commerce failures do not make front page news; however, we can expect that consumer interest groups and politicians will be making headlines with statistics of people harmed by electronic commerce. Perhaps, the electronic commerce industry will come under fire from lawyers who can easily identify large groups of consumers *harmed* by rich software development companies.
From the foregoing, we can see that consumer adoption of electronic commerce will be hampered until consumers perceive that a higher level of safety is provided. We can expect no silver bullet in terms of technology. We can expect -- absent credible efforts by the industry to self-regulate -- that politicians will come under increasing pressure to regulate electronic commerce. The software industry powers will work to thwart that pressure; however, they may be unsuccessful -- especially when one considers the power wielded by the big three automakers during the 1960's.
The question is, will the industry move to self-regulate before government moves in? In my opinion, the best hope for self-regulation would be in parallel industries -- especially banking. I believe it is unlikely that software providers will commonly agree that improved consumer safety would lead to revenue growth for all. On the other hand industries, such as banking, are bearing an increasing share of costs for failures in electronic commerce, and those that bear the costs are likely to move in concert -- as did insurers by forming the UL. If pressure were brought to bear, I think that these adjacent industries might bring the best results in terms of self-regulation.
So, we are left rolling our own, until either government or adjacent industries step in to create standards for consumer safety regarding electronic commerce. Our only hope for preventing onerous government regulation lies in convincing these adjacent industries that by acting in concert, they can reduce their costs by improving consumer safety on the Internet.
A question I posed on the cryptography mailing list: The phishing thing has now reached the mainstream, epidemic proportions that were feared and predicted in this list over the last year or two. Many of the "solution providers" are bailing in with ill-thought-out tools, presumably in the hope of cashing in on a buying splurge, and hoping to turn the result into lucrative cash flows.
Sorry, the question's embedded further down...
In other news, Verisign just bailed in with a service offering. This is quite cunning, as they have offered the service primarily as a spam protection service, with a nod to phishing. In this way they have something, a toe in the water, but they avoid the embarrassing questions about whatever happened to the last security solution they sold.
Meanwhile, the security field has been deathly silent. (I recently had someone from the security industry authoritatively tell me phishing wasn't a problem ... because the local plod said he couldn't find any!)
Here's my question - is anyone in the security field of any sort of repute being asked about phishing, consulted about solutions, contracted to build? Anything?
Or, are security professionals as a body being totally ignored in the first major financial attack that belongs totally to the Internet?
What I'm thinking of here is Scott's warning of last year:
Subject: Re: Maybe It's Snake Oil All the Way Down
At 08:32 PM 5/31/03 -0400, Scott wrote:
...
> When I drill down on the many pontifications made by computer
> security and cryptography experts all I find is given wisdom. Maybe
> the reason that folks roll their own is because as far as they can see
> that's what everyone does. Roll your own then whip out your dick and
> start swinging around just like the experts.
I think we have that situation. For the first time we are facing a real, difficult security problem. And the security experts have shot their wad.
 Lynn Wheeler's links below if anyone is interested:
VeriSign Joins The Fight Against Online Fraud
 sorry, the original email I couldn't find, but here's the thread, rooted at:
Anecdotally, it seems that now Europe has a world class currency, it's attracted world class forgers. Perhaps catching the Issuer by surprise, the ECB and its satellites are facing significant efforts at injecting false currency. Oddly enough, the Euro note is very nice, and hard to forge. Which makes the claim that only the special note departments in the central banks can tell the forgeries quite a surprise.
Still, it is tiny amounts. I've heard estimates of 30% of the dollar issue washing around the poorer regions is forged, so it doesn't seem as though the gnomes in Frankfurt have much to complain about as yet.
And, it has to be taken as an accolade of sorts. If a currency is good, it is worth forging. Something we discovered in the digital cash world was that attacks against the system don't start until the issuer has moved about a million worth in float, and has a thousand or so users. Until then, the crooks haven't got the mass to be able to hide in; with some of these smaller systems, "everyone knows everyone", and that's a bit limiting if you are trying to fence some value through the market makers.
As well as the FT review, in a further sign that phishing is on track to being a serious threat to the Internet, Google yesterday covered phishing on the front page. 37 articles in one day didn't make a top story, but all signs are pointing to increasing paranoia. If you "search news" then you get about 932 stories.
It's mainstream news. Which is in itself an indictment of the failure of Internet security, a field that continues to reject phishing as a threat.
Let's recap. There are about three lines of potential defense. The user's mailer, the user's browser, and the user herself.
(We can pretty much rule out the server, because that's bypassed by this MITM; some joy would be experienced using IP number tracking, but that can be bypassed and it may be more trouble than it's worth.... We can also pretty much rule out authentication, as scammers that steal hundreds of thousands have no trouble stealing keys. Also in the bit bucket are the various strong URL schemes, which would help, but only when they have reached critical mass. No hope there.)
In turn, here's what they can do against phishing:
The user's mailer can only do so much here - it's job is to take emails, and that's what it does. It has no way of knowing that an email is a phishing attack, especially if the email carefully copies a real one. Bayesian filters might help here, but they are also testable, and they can be beaten by the committed attacker - which a phisher is. Spam tech is not the right way of thinking, because if one spam slips through, we don't mind. In contrast, if a phish slips through, we care a lot (especially if we believe the 5% hit rate.).
Likewise, the user is not so savvy. Most of the users are going to have trouble picking the difference between a real email and a fake one, or a real site and a fake one. It doesn't help that even real emails and sites have lots of subtle problems with them that will cause confusion. So I'd suggest that relying on the user as a way to deal with this is a loser. The more checking the better, and the more knowledge the better, but this isn't going to address the problem.
This leaves the browser. Luckily, in any important relationship, the browser knows, or can know, some things about that relationship. How many times visited, what things done there, etc etc. All the browser has to do then is to track a little more information, and make the user aware of that information.
But to do that, the browsers must change. They've got to change in 3 of these 4 ways:
1. cache certificate use statistics and other information, on a certificate and URL basis. Some browsers already cache some info - this is no big deal.
2. display the vital statistics of the connection in a chrome (protected) area - especially the number of visits. This we call the branding box. This represents a big change to the browser security model, but, for various reasons, all users of browsers are going to benefit.
3. accept self-signed certs as *normal* security, again displayed in the chrome. This is essential to get people used to seeing many more cert-protected sessions, so that the above 2 parts can start to work.
4. servers should bootstrap as a normal default behaviour using an on-demand generated self-signed cert. Only when it is routine for certs to be in existence for *any* important website, will the browser be able to reliably track a persistency and the user start to understand the importance of keeping an eye on that persistency.
It's not a key/padlock, it's a number. Hey presto, we *can* teach the user what a number means - it's the number of times that you visited BankofAmerica.com, or FunkofAmericas.co.mm or wheresover you are heading. If that doesn't seem right, don't enter in any information.
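A minimal sketch of points 1 and 2 above; the storage format and the display wording are my inventions:

```python
visits = {}   # (domain, certificate fingerprint) -> visit count

def record_visit(domain, cert_fingerprint):
    """Point 1: cache per-site, per-certificate usage statistics."""
    key = (domain, cert_fingerprint)
    visits[key] = visits.get(key, 0) + 1
    return visits[key]

def branding_box(domain, cert_fingerprint):
    """Point 2: the chrome-area display -- a visit count, not a padlock."""
    n = visits.get((domain, cert_fingerprint), 0)
    if n == 0:
        return "NEW SITE: you have never visited %s with this certificate" % domain
    return "%s: visited %d times with this certificate" % (domain, n)

# A hundred visits to the real bank, then a spoof with a different cert:
for _ in range(100):
    record_visit("bankofamerica.com", "cert-A")
print(branding_box("bankofamerica.com", "cert-A"))
print(branding_box("funkofamericas.co.mm", "cert-X"))   # spoof shows NEW SITE
```

The spoof site cannot fake the visit history, because the history lives in the user's own browser, keyed by the certificate actually presented.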
These are pretty much simple changes, and the best news is that Ye & Smith's "Trusted Paths for Browsers" showed that this was totally plausible.
Today, the Financial Times leads its InfoTech review with phishing. The FT has new stats: Brightmail reports 25 unique phishing scams per day. Average amount shelled out for 62m emails by corporates that suffer: $500,000. And, 2.4bn emails seen by Brightmail per month - with a claim that they handle 20% of the world's mail. Let's work those figures...
That means 12bn emails per month are scams. If 62m emails cause costs of half a million, then that works out at $0.008 per email. 144bn emails per year makes for ... $1.152 billion paid out every year.
In other words, each phishing email is generating losses to business of a penny. Black indeed - nobody has been able to show such a profit model from email, so we can pretty much guarantee that the flood is only just beginning.
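Working those figures through explicitly (the inputs are the FT/Brightmail numbers quoted above):

```python
# Reworking the FT/Brightmail figures quoted above.
brightmail_scams_per_month = 2.4e9   # scam emails Brightmail sees monthly
brightmail_share = 0.20              # its claimed share of the world's mail
cost_per_62m_emails = 500_000        # corporate payout per 62m scam emails

world_scams_per_month = brightmail_scams_per_month / brightmail_share
cost_per_email = cost_per_62m_emails / 62e6
annual_loss = world_scams_per_month * 12 * cost_per_email

print("%.0fbn scam emails per month" % (world_scams_per_month / 1e9))  # → 12bn
print("$%.4f per email" % cost_per_email)                              # → $0.0081
print("$%.2fbn per year" % (annual_loss / 1e9))                        # → $1.16bn
```

(The article's $1.152 billion comes from rounding the per-email cost down to $0.008 before multiplying by 144bn.)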
(The rest of the article included lots of cartels trying to peddle solutions, and a mention that the IETF thinks email authentication might help. Fat chance of that, but it did mention one worrying development - phishers are starting to use viral techniques to infect the user's PC with key loggers. That's a very worrying development - as there is no way a program can defeat something that is permitted to invade by the Microsoft operating system.)
 The Financial Times, London, 23rd June 2004,
"Gone phishing," FT-IT Review.
 Compare and contrast this 1 billion dollar loss to the $5bn claimed by NYT last week:
"Phishing an epidemic, Browsers still snoozing"
"While it's difficult to pin down an exact dollar amount lost when identity thieves strike such institutions, Jones said 20 cases that have been proposed for federal prosecution involve $300,000 to $1 million in losses each."
This matches the amount reported in the Texas phishing case, although it refers to identity theft, not phishing (yes, they are not the same).
A study by Gartner Research [L04] found that about two million users gave such information to spoofed web sites, and that "Direct losses from identity theft fraud against phishing attack victims -- including new-account, checking account and credit card account fraud" cost U.S. banks and credit card issuers about $1.2 billion last year.
[L04] Avivah Litan, Phishing Attack Victims Likely Targets for Identity Theft, Gartner FirstTake, FT-22-8873, Gartner Research, 4 May 2004
In our world, we are very conscious of the natural order of life. First comes physics. From physics is derived economics, or the natural costing of things. And finally, law cleans up, adding things like dispute resolution (us techies call them edge cases). Notwithstanding that this ordering has been proven over any millennium you care to pick, sometimes, more often than the idea merits, people ask for lawmakers to ignore the reality of life and to regulate certain inevitable behaviours into some sort of limbo of illegal normality.
Such it is with Internet telephony, also known as VOIP (voice over IP): Internet machines are totally uncontrollable, at a basic fundamental physical level. It was designed that way, and the original architects forgot to include the flag for legislative override. Further, Internet machines can do voice communications.
From those two initial assumptions, we can pretty much conclude that any attempt to regulate VOIP will fail. And that all the perks now enjoyed by large dominating parties of power (e.g., governments) will fade away.
Yet the US Department of Justice is asking Congress to regulate wire taps on VoIP. This looks like a repeat of the crypto wars of the 90s. Almost lost, and definitely bungled by the FBI, the crypto wars were won by the NSA and its rather incredible sense of knowing when to ease off a bit.
Reading the article below, one of two by Declan McCullagh, it is at least hopeful to see that Congressmen are starting to express bipartisan skepticism towards the nonsense dished up in the name of terrorism. Why don't the DoJ and its primary arm, the FBI, "get it"? It's unclear, but when a police force decides that protecting the faltering revenues of Hollywood is its 3rd biggest priority, another unwinnable battle against physics, one can only expect more Keystone Kops action in the future.
Here's the pop quiz of the week - what technologies can be thought to "aid, abet, induce, counsel or procure" violation of copyright? There is no prize for photocopying, laser printers, cassette recorders, CD and DVD burners, PCs, software, cameras, phones, typewriters ... Surely we can come up with something that is an innovative inducer of the #3 crime - copyright violation?
 Declan McCullagh, Feds: VoIP a potential haven for terrorists
 http://www.financialcryptography.com/mt/archives/000072.html does record where:
FBI weighs into anti-piracy fight
 Declan McCullagh, Antipiracy bill targets technology
WASHINGTON--The U.S. Department of Justice on Wednesday lashed out at Internet telephony, saying the fast-growing technology could foster "drug trafficking, organized crime and terrorism."
Laura Parsky, a deputy assistant attorney general in the Justice Department, told a Senate panel that law enforcement bodies are deeply worried about their ability to wiretap conversations that use voice over Internet Protocol (VoIP) services.
"I am here to underscore how very important it is that this type of telephone service not become a haven for criminals, terrorists and spies," Parsky said. "Access to telephone service, regardless of how it is transmitted, is a highly valuable law enforcement tool."
Police have been able to conduct Internet wiretaps for at least a decade, and the FBI's controversial Carnivore (also called DCS1000) system was designed to facilitate online surveillance. But Parsky said that discerning "what the specific (VoIP) protocols are and how law enforcement can extract just the specific information" are difficult problems that could be solved by Congress requiring all VoIP providers to build in backdoors for police surveillance.
The Bush administration's request was met with some skepticism from members of the Senate Commerce committee, who suggested that it was too soon to impose such weighty regulations on the fledgling VoIP industry. Such rules already apply to old-fashioned telephone networks, thanks to a 1994 law called the Communications Assistance for Law Enforcement Act (CALEA).
"What you need to do is convince us first on a bipartisan basis that there's a problem here," said Sen. Ron Wyden, D-Ore. "I would like to hear specific examples of what you can't do now and where the law falls short. You're looking now for a remedy for a problem that has not been documented."
Wednesday's hearing was the first to focus on a bill called the VoIP Regulatory Freedom Act, sponsored by Sen. John Sununu, R-N.H. It would ban state governments from regulating or taxing VoIP connections. It also says that VoIP companies that connect to the public telephone network may be required to follow CALEA rules, which would make it easier for agencies to wiretap such phone calls.
The Justice Department's objection to the bill is twofold: Its wording leaves too much discretion with the Federal Communications Commission, Parsky argued, and it does not impose wiretapping requirements on Internet-only VoIP networks that do not touch the existing phone network, such as Pulver.com's Free World Dialup.
"It is even more critical today than (when CALEA was enacted in 1994) that advances in communications technology not provide a haven for criminal activity and an undetectable means of death and destruction," Parsky said.
Sen. Frank Lautenberg, D-N.J., wondered if it was too early to order VoIP firms to be wiretap-friendly by extending CALEA's rules. "Are we premature in trying to tie all of this down?" he asked. "The technology shift is so rapid and so vast."
The Senate's action comes as the FCC considers a request submitted in March by the FBI. If the request is approved, all broadband Internet providers--including companies using cable and digital subscriber line technology--will be required to rewire their networks to support easy wiretapping by police.
Wednesday's hearing also touched on which regulations covering 911 and "universal service" should apply to VoIP providers. The Sununu bill would require the FCC to levy universal service fees on Internet phone calls, with the proceeds to be redirected to provide discounted analog phone service to low-income and rural American households.
One point of contention was whether states and counties could levy taxes on VoIP connections to support services such as 911 emergency calling. Because of that concern, "I would not support the bill as drafted and I hope we would not mark up legislation at this point," said Sen. Byron Dorgan, D-N.D.
Sen. Conrad Burns, R-Mont., added: "The marketplace does not always provide for critical services such as emergency response, particularly in rural America. We must give Americans the peace of mind they deserve."
Some VoIP companies, however, have announced plans to support 911 calling. In addition, Internet-based phone networks have the potential to offer far more useful information about people who make an emergency call than analog systems do.
By Declan McCullagh Staff Writer, CNET News.com
A forthcoming bill in the U.S. Senate would, if passed, dramatically reshape copyright law by prohibiting file-trading networks and some consumer electronics devices on the grounds that they could be used for unlawful purposes.
A bill called the Induce Act is scheduled to come before the Senate sometime next week. If passed, it would make whoever "aids, abets, induces (or) counsels" copyright violations liable for those violations.
The proposal, called the Induce Act, says "whoever intentionally induces any violation" of copyright law would be legally liable for those violations, a prohibition that would effectively ban file-swapping networks like Kazaa and Morpheus. In the draft bill seen by CNET News.com, inducement is defined as "aids, abets, induces, counsels, or procures" and can be punished with civil fines and, in some circumstances, lengthy prison terms.
The bill represents the latest legislative attempt by influential copyright holders to address what they view as the growing threat of peer-to-peer networks rife with pirated music, movies and software. As file-swapping networks grow in popularity, copyright lobbyists are becoming increasingly creative in their legal responses, which include proposals for Justice Department lawsuits against infringers and action at the state level.
Originally, the Induce Act was scheduled to be introduced Thursday by Sen. Orrin Hatch, R-Utah, but the Senate Judiciary Committee confirmed at the end of the day that the bill had been delayed. A representative of Senate Majority Leader Bill Frist, a probable co-sponsor of the legislation, said the Induce Act would be introduced "sometime next week," a delay that one technology lobbyist attributed to opposition to the measure.
Though the Induce Act is not yet public, critics are already attacking it as an unjustified expansion of copyright law that seeks to regulate new technologies out of existence.
"They're trying to make it legally risky to introduce technologies that could be used for copyright infringement," said Jessica Litman, a professor at Wayne State University who specializes in copyright law. "That's why it's worded so broadly."
Litman said that under the Induce Act, products like ReplayTV, peer-to-peer networks and even the humble VCR could be outlawed because they can potentially be used to infringe copyrights. Web sites such as Tucows that host peer-to-peer clients like the Morpheus software are also at risk for "inducing" infringement, Litman warned.
Jonathan Lamy, a spokesman for the Recording Industry Association of America, declined to comment until the proposal was officially introduced.
"It's simple and it's deadly," said Philip Corwin, a lobbyist for Sharman Networks, which distributes the Kazaa client. "If you make a product that has dual uses, infringing and not infringing, and you know there's infringement, you're liable."
The Induce Act stands for "Inducement Devolves into Unlawful Child Exploitation Act," a reference to Capitol Hill's frequently stated concern that file-trading networks are a source of unlawful pornography. Hatch is a conservative Mormon who has denounced pornography in the past and who suggested last year that copyright holders should be allowed to remotely destroy the computers of music pirates.
Foes of the Induce Act said that it would effectively overturn the Supreme Court's 1984 decision in the Sony Corp. v. Universal City Studios case, often referred to as the "Betamax" lawsuit. In that 5-4 opinion, the majority said VCRs were legal to sell because they were "capable of substantial noninfringing uses." But the majority stressed that Congress had the power to enact a law that would lead to a different outcome.
"At a minimum (the Induce Act) invites a re-examination of Betamax," said Jeff Joseph, vice president for communications at the Consumer Electronics Association. "It's designed to have this fuzzy feel around protecting children from pornography, but it's pretty clearly a backdoor way to eliminate and make illegal peer-to-peer services. Our concern is that you're attacking the technology."
Sometimes you see a business pattern like this: A company or government doesn't do its job. So up springs a service sector to fill the gap. After a while, everyone starts to think that's the natural state of affairs, but the canny business types know that these service providers are very vulnerable.
Such is the case here. When Microsoft started talking about providing its own anti-virus product, shares in Symantec and presumably other anti-virus (AV) producers started sliding.
Hey ho, where's the news? Microsoft delivers a product that is weak on the security side, and strong on the user side. Now they've started working on security, and the turnaround has been painful and noisy, if not exactly measurable in progress terms.
Any investor in Symantec or any similar anti-virus producer must have been able to connect the dots in Bill Gates' famous memo from security to viruses. Of course the mission would include reducing the vulnerability to viruses, and of course there would be some serious thought given to two possibilities: writing the code such that viruses aren't a threat (why do we need to say this?) and creating an in-house AV product so the synergies could be exploited.
The latter is a stop-gap measure. But so is Symantec's anti-virus division: it's only there because Microsoft don't write good code. The day they stop this egregious practice is the day that we no longer need Symantec to take $100 out of our pocket for Microsoft's laziness.
Of course, the state of understanding in the security industry is woefully underscored by the claim that Microsoft needs to sell it separately rather than bundle it. Since when is it anti-trust to deliver a secure product? And who in Microsoft legal hasn't worked out that if they sell on the one hand a broken product, and on the other a fix for the same, the class action attorneys are going to skip breakfast to get that one filed?
Symantec Wobbled by Microsoft Threat
By Ronna Abramson TheStreet.com Staff Reporter 6/16/2004 2:47 PM EDT
Shares of Symantec (SYMC:Nasdaq - news - research) continued their recent slide Wednesday amid more definitive news that Microsoft (MSFT:Nasdaq - news - research) plans to enter its thriving antivirus software market.
Shares of security titan Symantec were recently down $1.60, or 3.8%, to $40.82. The stock has shed about 7% since Friday's close, while the Nasdaq Composite has inched down less than 0.5% during the same period.
Shares of security rival Network Associates (NET:NYSE - news - research), another beneficiary of virus outbreaks, have declined 2.6% since Friday. Shares were recently down 20 cents, or 1.2%, at $16.72. Microsoft stock was recently off 17 cents, or 0.6%, to $27.24.
Thanks to the anti-virus market, Symantec defied the economic downturn as other tech names were struggling. In April, the company's stock hit an all-time intraday high of $50.88.
But Symantec shares began to tumble Monday after wire services wrote that Mike Nash, chief of Microsoft's security business unit, said at a dinner with reporters that the world's largest software maker is developing its own anti-virus products that will compete against Symantec and Network Associates.
"We're still planning to offer our own [antivirus] product," Reuters quoted Nash as saying.
Those comments came just two weeks after another Microsoft executive, Rich Kaplan, corporate vice president of security business and technology marketing, said the company was still undecided about what it would do with antivirus technology acquired last year from a Romanian security firm.
Kaplan was not available for comment Wednesday. Instead, Amy Carroll, director of Microsoft's security business and technology unit, confirmed Nash's comments and Microsoft's intentions to enter the antivirus space in a telephone interview Wednesday.
"What Mike said is not new," Carroll said. "Our plan is to offer an AV [antivirus] solution." The company has not yet announced a timeline for when the antivirus product will debut or details on exactly what shape it will take.
Microsoft plans to offer an antivirus product or service for a fee, but will not bundle it with its ubiquitous Windows operating system, she added.
Analysts have suggested that such bundling would undoubtedly raise antitrust eyebrows, given that Microsoft has been the subject of suits both in the U.S. and Europe, where regulators are still fighting the software behemoth over the bundling of its media player with Windows.
Observers have offered plenty of reasons why they believed Microsoft would not enter the field. Chris Bonavico, a portfolio manager with Transamerica Investment Management, has suggested one reason Microsoft will not jump into the space is because security is a services business that requires around-the-clock responses to new attacks, and Microsoft isn't a services company.
But Carroll said Wednesday that Microsoft already has a team called the Microsoft Security Response Center that monitors potential security threats to customers 24 hours a day, seven days a week.
Meanwhile, others have suggested consumers and enterprises will stick with third-party antivirus vendors even if Microsoft launches its own competing product, especially given Microsoft's spotty security record to date.
"In the consumer market, they [customers] may not be as savvy," said Tony Ursillo, an analyst with Loomis, Sayles & Co., which holds Symantec shares. But "I don't know if at least corporate customers will want to buy a product that patches the holes of another product sold by the same company."
"I think it will be tricky" for Microsoft, Ursillo added.
Phishing, the sending of spoof emails to trick you into revealing your browser login passphrase, now seems to be the #1 threat to Internet users. A dubious award, indeed. An article in the New York Times claims that online identity theft caused damages of $5 billion worldwide, while spamming cost a mere $3.5 billion. That was last year, and phishing was just a sniffle then. Expect something like $20-30 billion this year from phishing, as now we're facing an epidemic.
That article also mentions that 5-20% of these emails work: the victim clicks through using that nice browser-email feature and enters their hot details into the bogus password-harvesting site.
Reported elsewhere today:
"Nearly 2 million Americans have had their checking accounts raided by criminals in the past 12 months, according to a soon-to-be released survey by market research group Gartner. Consumers reported an average loss per incident of $1,200, pushing total losses higher than $2 billion for the year."
"Gartner researcher Avivah Litan blamed online banking for most of the problem."
A recent phishing case in a Texas court awarded something like $200 in damages per victim. That's a court case, so the figures have some credibility. The FTC reports an average loss of about $5,300 per victim for all identity theft.
So we are clearly into the many, many millions of dollars of damage. It is not out of the question that we are reaching for the billion dollar mark, especially if we add in the associated costs. The FTC reported about $53bn of losses last year; while most identity theft is non-Internet related, it only needs to be 10% of that total to match the NYT's figure of $5bn, above.
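The back-of-the-envelope arithmetic is easy to check, using only the figures quoted above:

```python
# Sanity check on the loss figures quoted above (all numbers from the text).

ftc_total_id_theft = 53e9   # FTC: ~$53bn total identity theft losses last year
nyt_online_figure = 5e9     # NYT/mi2g: ~$5bn online identity theft worldwide

# What share of total identity theft would need to be online
# to match the NYT figure? Comes out just under 10%.
online_share = nyt_online_figure / ftc_total_id_theft
print(f"online share needed: {online_share:.0%}")

# Gartner: ~2 million victims at ~$1,200 average loss per incident,
# consistent with the "higher than $2 billion" figure quoted earlier.
gartner_total = 2_000_000 * 1_200
print(f"Gartner implied total: ${gartner_total / 1e9:.1f}bn")
```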
Let's get our skepticism up front here: I don't believe these figures. Firstly, they are reported quite loosely, with no backing; there is no pretence at academic seriousness. Secondly, they seem to derive from a bunch of vested-interest players, such as mi2g.com and APWG.org (both being peddlers of some sort of solution). Oh, and here's another group.
We know that there is a shortage of reliable figures to go on. Having said that, even if the figures are way off, we still have a conclusion:
This is real money, folks!
Wake up time! This is actual fraud, money being taken from real average Internet users, people who download the email programs and browsers and web servers that many of us worked on and used. Forget the hypothetical attacks postulated by Internet security experts a decade or so back. Those attacks were not real, they were a figment of some academic's imagination.
The security model built in to standard browsing is broken. Get over it, guys.
Every year since 2003, Americans will lose more money to this fraud than was ever paid out to CAs for the certificates meant to protect them from the MITM. By the end of this year, Americans will have been defrauded, I predict, to the same extent as Verisign's peak in market cap.
The secure browser falls to the MITM three ways that I know of - phishing, click thru syndrome, and substitute CA.
The new (since 2002) phishing attack is a classical MITM that breaches secure browsing. This attack convinces some users to go to an MITM site. It's what happens when a false sense of security goes on too long. The fact that secure browsing falls to the MITM is unfortunate, but what's really sad is that the Internet security community declines to accept the failure of the SSL/CA/secure browsing model.
Until this failure is recognised, there is no hope of moving on. What needs to be done is to realise that the browser is the front line of defence for the user: it's the only agent that knows anything about the user's browsing activity, and it's the only agent that can tell a new spoof site from an old, well-used and thus trusted site.
Internet security people need to wind the clock forward by about a decade and start thinking about how to protect ordinary Internet users from the billions of dollars being taken from their pockets. Or not, as the case may be. But, in the meantime, let's just accept that the browser has a security model worth diddly squat. And start thinking about how to fix it.
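To make that concrete, here's a minimal sketch of what the browser as front line of defence could look like: a trust-on-first-use site history that flags never-before-seen sites and changed certificates, instead of relying solely on the CA model. All class and method names here are hypothetical, not any real browser's API.

```python
# Sketch: the browser remembers which sites it has seen and which certificate
# each one presented, and surfaces that history to the user. A spoof site
# shows up as "NEW SITE", however plausible its padlock looks.

from datetime import date

class SiteHistory:
    def __init__(self):
        # domain -> (cert_fingerprint, first_seen, visit_count)
        self._seen = {}

    def check(self, domain, cert_fingerprint):
        """Return a risk signal the browser chrome could display."""
        record = self._seen.get(domain)
        if record is None:
            self._seen[domain] = (cert_fingerprint, date.today(), 1)
            return "NEW SITE - never visited before"
        fp, first_seen, visits = record
        if fp != cert_fingerprint:
            return "WARNING - certificate changed since last visit"
        self._seen[domain] = (fp, first_seen, visits + 1)
        return f"known site - visited {visits + 1} times since {first_seen}"

history = SiteHistory()
print(history.check("bank.example", "ab:cd"))  # new site, flagged
print(history.check("bank.example", "ab:cd"))  # known, trust accumulates
print(history.check("bank.example", "ff:00"))  # cert changed, flagged
```

The point is not the ten lines of code; it's that this signal is only available to the browser, because only the browser sees the user's whole browsing history.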
 New York Times, Online crime engenders a new hero: cybersleuth, 11th June 2004. (below)
 Epidemic is defined as more than one article on an Internet fraud in Lynn Wheeler's daily list.
florida cartel stole identities of 1100 cardholders
Survey: 2 million bank accounts robbed
"Cost of Phishing - Case in Texas"
FTC Releases Survey of Identity Theft in U.S.
Large companies form group to fight "phishing"
 Identity Theft - the American Disease
Online crime engenders a new hero: cybersleuth
Christopher S. Stewart NYT
Friday, June 11, 2004
A lot of perfectly respectable small businesses are raking in money from online crime.

From identity theft to bogus stock sales to counterfeit prescription drugs, crime is rife on the Web. But what has become the Wild West for savvy cybercriminals has also developed into a major business opportunity for security firms.

The number of security companies that patrol the shady corners of the virtual world is small but growing.

"As more and more crime is committed on the Internet, there will be growth of these services," said Rich Mogull, research director for information security and risk at Gartner, a technology-market research firm in Stamford, Connecticut.

ICG, a Princeton, New Jersey, company founded in 1997, has grown to 35 employees and projected revenue of $7 million this year from eight employees and $1.5 million in revenue just four years ago, said Michael Allison, its founder and chief executive.

ICG, which is licensed as a private investigator in New Jersey, tracks down online troublemakers for major corporations around the world, targeting spammers and disgruntled former employees as well as scam artists, using both technology and more traditional cat-and-mouse tactics.

"It's exciting getting into the hunt," said Allison, a 45-year-old British expatriate. "You never know what you're going to find. And when you identify and finally catch someone, it's a real rush."

According to Mi2g, a computer security firm, online identity theft last year cost businesses and consumers more than $5 billion worldwide, while spamming drained $3.5 billion from corporate coffers. And those numbers are climbing.

"The Internet was never designed to be secure," said Alan Brill, senior managing director at Kroll Ontrack, a technology services provider that was set up in 1985 by Kroll Associates, an international security company based in New York. "There are no guarantees."

Kroll has seven crime laboratories around the world and is opening two more in the United States because of the growing demand for such work.

ICG clients, many of whom Allison will not identify because of privacy agreements, include pharmaceutical companies, lawyers, financial institutions, Internet service providers, digital entertainment groups and others.

One of the few cases that ICG can talk about is a spamming problem that happened a few years ago at Ericsson, the Swedish telecommunications company. Hundreds of thousands of e-mail messages promoting a telephone-sex service inundated its servers hourly, crippling the system.

"They kept trying to filter it out," said Jeffrey Bedser, ICG chief operating officer. "But the spam kept on morphing and getting around the filters."

Bedser and his team plugged the spam message into search engines and located other places on the Web where it appeared. Some e-mail addresses turned up, which led to a defunct e-fax Web site. And that Web site had in its registry the name of the spammer, who turned out to be a middle-aged man living in the Georgetown section of Washington.

Several weeks later, the man was sued. He ultimately agreed to a $100,000 civil settlement, though he didn't go away, Bedser said.

"The guy sent me an e-mail that said, 'I know who you are and where you are,'" Bedser recalled. "He also signed me up for all kinds of spam and I ended up getting flooded with e-mail for sex and drugs for the next year."

Allison says ICG's detective work is, for the most part, unglamorous, involving mostly sitting in front of computers and "looking for ones and zeros." Still, there are some private-eye moments. Computer forensic work, for instance, takes investigators to corporate offices all over America, sometimes in the dead of night.

Searching through the hard drives of suspects - always with a company lawyer or executive present - the investigators hunt for "vampire data," or old e-mails and documents that the computer users thought they had deleted long ago.

In some cases, investigators have to be a little bit sneaky themselves. Once, an ICG staffer befriended a suspect in a "pump-and-dump" scheme - in which swindlers heavily promote a little-known stock to get the price up, then sell their holdings at artificially high prices - by chatting with him electronically on a chess Web site.

The Internet boom almost guarantees an unending supply of cybercriminals. "They're like mushrooms," Allison said.

Right now, the most crowded fields of criminal activity are the digital theft of music and movies, illegal prescription-drug sales and "phishers," identity thieves who pose as representatives of financial institutions and send out fake e-mails to people asking for their account information. The Anti-Phishing Working Group, an industry association, estimates that 5 percent to 20 percent of recipients respond to these phony e-mails.

In 2003, 215,000 cases of identity theft were reported to the Federal Trade Commission, an increase of 33 percent from the year before.

This bad news for consumers is a growth opportunity for ICG. "The bad guys will always be out there," Allison said. "But we're getting better and better. And we're catching up quickly."

The New York Times
The White House administration has apparently defied the US Congress and kept the controversial "Total Information Awareness" going as a secret project. A politics journal called Capitol Hill Blue has exposed what it claims is the TIA project operating with no change.
Whether this is all true, or just another anti-Bush story by Democrat apologists in the leadup to the election, is all open to question. Republican apologists can now chime in on cue. While they are doing that, here are some of the impressive claims of monitoring of the US citizen's habits:
If you're looking for the fire, that's an awful lot of smoke. What does all this mean? Well, for one, it should start to put pressure on the open source crypto community to start loosening up. Pretty much all of that can be covered using free and easy techniques that have otherwise been eschewed for lack of serious threat models. I speak of course of using opportunistic cryptography to get protection deployed as widely as possible.
This could take us back to the halcyon days of the 90s, when the open source community fought with the dark side to deploy crypto to all. A much more noble battle than today's windmill tilting against credit card thieves and other corporately inspired woftams. We didn't, it seems, succeed in protecting many people, as crypto remains widely undeployed, and where it is deployed, it is of uncertain utility. But we're older and wiser now. Maybe it's time for another go.
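The opportunistic idea is simpler than it sounds: unauthenticated Diffie-Hellman gives two parties a shared secret with no CAs, no certificates and no user dialogues. It won't stop an active MITM, but it defeats exactly the passive, dragnet-style wiretapping described above. A toy sketch (the Mersenne prime here is chosen for brevity, not a vetted DH group):

```python
# Toy sketch of opportunistic (unauthenticated) Diffie-Hellman key agreement.
# Both sides exchange public values in the clear and derive the same secret;
# a passive eavesdropper learns nothing useful. Parameters are illustrative
# only - real deployments use vetted groups.

import secrets

p = 2**127 - 1                       # a Mersenne prime; demo only, NOT for real use
g = 3                                # generator for the demo

a = secrets.randbelow(p - 3) + 2     # Alice's ephemeral private value
b = secrets.randbelow(p - 3) + 2     # Bob's ephemeral private value

A = pow(g, a, p)                     # Alice sends this in the clear
B = pow(g, b, p)                     # Bob sends this in the clear

k_alice = pow(B, a, p)               # Alice's view of the shared secret
k_bob = pow(A, b, p)                 # Bob's view of the shared secret

assert k_alice == k_bob              # same key, zero infrastructure
print("shared secret established")
```

That it needed no keys, no registration and no permission is the whole point: it can be switched on by default, everywhere.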
What Price Freedom?
How Big Brother Is Watching, Listening and Misusing Information About You
By TERESA HAMPTON & DOUG THOMPSON
Jun 8, 2004, 08:19
You're on your way to work in the morning and place a call on your wireless phone. As your call is relayed by the wireless tower, it is also relayed by another series of towers to a microwave antenna on top of Mount Weather between Leesburg and Winchester, Virginia and then beamed to another antenna on top of an office building in Arlington where it is recorded on a computer hard drive.
The computer also records your phone's digital serial number, which is used to identify you through your wireless company phone bill, which the Defense Advanced Research Projects Agency already has on record as part of your permanent file.
A series of sophisticated computer programs listens to your phone conversation and looks for "keywords" that suggest suspicious activity. If it picks up those words, an investigative file is opened and sent to the Department of Homeland Security.
Congratulations. Big Brother has just identified you as a potential threat to the security of the United States because you might have used words like "take out" (as in taking someone out when you were in fact talking about ordering takeout for lunch) or "D-Day" (as in deadline for some nefarious activity when you were talking about going to the new World War II Memorial to recognize the 60th anniversary of D-Day).
If you are lucky, an investigator at DHS will look at the entire conversation in context and delete the file. Or he or she may keep the file open even if they realize the use of words was innocent. Or they may decide you are, indeed, a threat and set up more investigation, including a wiretap on your home and office phones, around-the-clock surveillance and much closer looks at your life.
Welcome to America, 2004, where the actions of more than 150 million citizens are monitored 24/7 by the TIA, the Terrorist Information Awareness (originally called Total Information Awareness) program of DARPA, DHS and the Department of Justice.
Although Congress cut off funding for TIA last year, the Bush Administration ordered the program moved into the Pentagon's "black bag" budget, which is neither authorized nor reviewed by the Hill. DARPA also increased the use of private contractors to get around privacy laws that would restrict activities by federal employees.
Six months of interviews with security consultants, former DARPA employees, privacy experts and contractors who worked on the TIA facility at 3701 Fairfax Drive in Arlington reveal a massive snooping operation that is capable of gathering - in real time - vast amounts of information on the day to day activities of ordinary Americans.
Going on a trip? TIA knows where you are going because your train, plane or hotel reservations are forwarded automatically to the DARPA computers. Driving? Every time you use a credit card to purchase gas, a record of that transaction is sent to TIA which can track your movements across town or across the country.
Use a computerized transmitter to pay tolls? TIA is notified every time that transmitter passes through a toll booth. Likewise, that lunch you paid for with your VISA becomes part of your permanent file, along with your credit report, medical records, driving record and even your TV viewing habits.
Subscribers to the DirecTV satellite TV service should know - but probably don't - that every pay-per-view movie they order is reported to TIA as is any program they record using a TIVO recording system. If they order an adult film from any of DirecTV's three SpiceTV channels, that information goes to TIA and is, as a matter of policy, forwarded to the Department of Justice's special task force on pornography.
"We have a police state far beyond anything George Orwell imagined in his book 1984," says privacy expert Susan Morrissey. "The everyday lives of virtually every American are under scrutiny 24-hours-a-day by the government."
Paul Hawken, owner of the data information mining company Groxis, agrees, saying the government is spending more time watching ordinary Americans than chasing terrorists and the bad news is that they aren't very good at it.
"It's the Three Stooges go to data mining school," says Hawken. "Even worse, DARPA is depending on second-rate companies to provide them with the technology, which only increases the chances for errors."
One such company is Torch Concepts. DARPA provided the company with flight information on five million passengers who flew Jet Blue Airlines in 2002 and 2003. Torch then matched that information with social security numbers, credit and other personal information in the TIA databases to build a prototype passenger profiling system.
Jet Blue executives were livid when they learned how their passenger information, which they must provide the government under the USA Patriot Act, was used and when it was presented at a technology conference with the title: Homeland Security - Airline Passenger Risk Assessment.
Privacy Expert Bill Scannell didn't buy Jet Blue's anger.
"JetBlue has assaulted the privacy of 5 million of its customers," said Scannell. "Anyone who flew should be aware and very scared that there is a dossier on them."
But information from TIA will be used by the DHS as a major part of the proposed CAPPS II airline passenger monitoring system. That system, when fully in place, will determine whether or not any American is allowed to get on an airplane.
JetBlue requested the report be destroyed and the passenger data be purged from the TIA computers but TIA refuses to disclose the status of either the report or the data.
Although exact statistics are classified, security experts say the U.S. Government has paid out millions of dollars in out-of-court settlements to Americans who have been wrongly accused, illegally detained or harassed because of mistakes made by TIA. Those who accept settlements also have to sign a non-disclosure agreement and won't discuss their cases.
Hawken refused to do business with DARPA, saying TIA was both unethical and illegal.
"We got a lot of e-mails from companies - even conservative ones - saying, 'Thank you. Finally someone won't do something for money,'" he adds.
Those who refuse to work with TIA include specialists from the super-secret National Security Agency in Fort Meade, MD. TIA uses NSA's technology to listen in on wireless phone calls as well as the agency's list of key words and phrases to identify potential terrorist activity.
"I know NSA employees who have quit rather than cooperate with DARPA," Hawken says. "NSA's mandate is to track the activities of foreign enemies of this nation, not Americans."
© Copyright 2004 Capitol Hill Blue
Over in the UK, Bob Hettinga reports on an article in the Observer about how the EU legislators are preparing to mandate software and hardware to reject images of banknotes. Ya gotta hand it to the Europeans, they love fixing things with directives. Here's the technique:
"The software relies on features built into leading currencies. Latest banknotes contain a pattern of five tiny circles. On the £20 note, they're disguised as a musical notation, on the euro they appear in a constellation of stars; on the new $20 note, the pattern is hidden in the zeros of a background pattern. Imaging software or devices detect the pattern and refuse to deal with the image."
I think this is a great idea. I think we should all adopt this DRM technique for our imagery, and use the 5 circle pattern to stop people copying our logos, our holiday snaps, and our bedroom pictures posted on girlfriend swap sites.
The best part is that, as the pattern is part of the asset base of the governments, we the people already own it.
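For flavour, here's a toy sketch of how constellation-style detection might work. The real CBCDG algorithm is unpublished, so this is purely illustrative: it checks whether five detected circle centres match a reference pattern regardless of scale, rotation, or position, by comparing normalised pairwise distances. The reference coordinates are invented.

```python
import itertools
import math

# Invented reference constellation (the real five-circle geometry is secret).
REFERENCE = [(0.0, 0.0), (1.0, 0.2), (0.4, 1.1), (1.3, 1.0), (0.7, 0.6)]

def _signature(points):
    """Sorted pairwise distances, normalised by the largest one.

    Invariant under translation, rotation, and uniform scaling."""
    dists = sorted(math.dist(a, b) for a, b in itertools.combinations(points, 2))
    longest = dists[-1]
    return [d / longest for d in dists]

def matches_constellation(centres, tolerance=0.02):
    """True if five circle centres fit the reference pattern at any scale."""
    if len(centres) != 5:
        return False
    ref, got = _signature(REFERENCE), _signature(centres)
    return all(abs(r - g) <= tolerance for r, g in zip(ref, got))
```

A scanner or graphics package taking this approach would run the check over candidate circle centres found in the image and refuse to proceed on a match.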
Banks win EU support for software blocks to tackle the cottage counterfeiters
Tony Thompson, crime correspondent - Observer
Sunday June 6, 2004
Computer and software manufacturers are to be forced to introduce new security measures to make it impossible for their products to be used to copy banknotes.
The move, to be drafted into European Union legislation by the year end, follows a surge in counterfeit currency produced using laser printers, home scanners and graphics software. Imaging software and printers have become so powerful and affordable that production of fake banknotes has become a booming cottage industry.
Though counterfeiters are usually unable to source the specialist paper on which genuine banknotes are printed, many are being mixed in with genuine notes in high volume batches. The copies are often good enough to fool vending machines. By using a fake £20 note to purchase a £2 rail fare, the criminal can take away £18 in genuine change.
Although the Bank of England refuses to issue figures for the number of counterfeit notes in circulation and insists they represent a negligible fraction of notes issued, it also admits fakes are on the increase.
Anti-counterfeiting software developed by the Central Bank Counterfeit Deterrence Group, an organisation of 27 leading world banks including the Bank of England, has been distributed free of charge to computer and software manufacturers since the beginning of the year. At present use of the software is voluntary though several companies have incorporated it into their products.
The latest version of Adobe Photoshop, a popular graphics package, generates an error message if the user attempts to scan banknotes of main currencies. A number of printer manufacturers have also incorporated the software so that only an inch or so of a banknote will reproduce, to be followed by the web address of a site displaying regulations governing the reproduction of money.
The software relies on features built into leading currencies. Latest banknotes contain a pattern of five tiny circles. On the £20 note, they're disguised as a musical notation, on the euro they appear in a constellation of stars; on the new $20 note, the pattern is hidden in the zeros of a background pattern. Imaging software or devices detect the pattern and refuse to deal with the image.
Certain colour copiers now come loaded with software that detects when a banknote has been placed on the glass, and refuses to make a copy or produces a blank sheet.
Researchers at Hewlett Packard are to introduce technology that would allow printers to detect colours similar to those used in currency. The printer will automatically alter the colour so that the difference between the final product and a genuine banknote will be easily detectable by the naked eye.
Adobe acted after it emerged that several counterfeiting gangs had used Photoshop to manipulate and enhance images. The security feature, which is not mentioned in any product documentation, has outraged users who say it could interfere with genuine artistic projects. There were also concerns that the software would automatically report duplication attempts to the software company or police via the internet.
A spokesman for the National Criminal Intelligence Service said criminals traditionally used offset lithographic printing for counterfeiting. 'Developments in electrostatic photocopying equipment, together with advances in computer and reprographic technology, have led to a rise in the proportion of counterfeit notes produced in a domestic environment. The use of this technology generally results in a lower quality counterfeit, although this varies according to the skill of the counterfeiter and the equipment and techniques used.'
Although some countries, most notably America, allow reproduction of banknotes for artistic purposes if they are either significantly larger or smaller than the real thing, in the UK it is a criminal offence to reproduce 'on any substance whatsoever, and whether or not on the correct scale', any part of any Bank of England banknote.
Guardian Unlimited © Guardian Newspapers Limited 2004
Identity theft is a uniquely American problem. It reflects the massive - in comparison to other countries - use of data and credit to manage Americans' lives. Other countries would do well to watch the American experience, as "what happens there, comes here." Here are two articles: one on the modus operandi of the identity thief, and one on the positive side of massive data collection.
First up, the identity thief. He's not an individual, he's a gang, or more like a farm. Your identity is simply a crop to process. Surprisingly, it appears that garbage collected from the streets (Americans call it trash) is still the seed material. Further, the database nation's targeting characteristics work for the thief, as he doesn't need to "qualify" the victim any; if you receive lots of wonderful finance deals, he wants your business too.
Once sufficient information is collected (bounties are paid per paper), it becomes a process of using PCs and innocent address authorities to weasel one's way into the prime spot. For example, your mail is redirected to the farm, the right mails are extracted, and your proper mail is conveniently re-delivered - the classic MITM. We all know paper identity is worthless for real security, but it is still surprising to see how easily the crop can be brought in to harvest.
[Addendum: Lynn Wheeler reports that a new study by Professor Judith Collins of Michigan State University reveals up to 70% of identity theft starts with employee insider theft [1.b]. This study, as reported by MSNBC, directly challenges the above article.]
Next up, a surprisingly thoughtful article on how data collection delivers real value - cost savings - to American society. The surprise is in the author, Declan McCullagh, who had previously been thought of as a bit of a Barbie for his salacious use of gossip in the paparazzi tech press. The content is good but very long.
The real use of information is to make informed choices - not to offer the wrong thing. Historically, this evolved as networks of traders that shared information. To counteract fraud that arose, traders kept blacklists and excluded no-gooders. A dealer exposed as misusing his position of power stood to lose a lot, as Adam Smith argued, far more indeed than the gain on any one transaction.
In the large, merchants with businesses exposed to public scrutiny, or to American-style suits, can be trusted to deal fairly. Indeed, McCullagh claims, the US websites are delivering approximately the same results in privacy protection as those in Europe. Free market wins again over centralised regulations.
Yet there is one area where things are going to pot. The company known as the US government, a sprawling, complex interlinking of huge numbers of databases, is above any consumer scrutiny and thus above any pressure for fair dealings. Indeed, we've known for some years that the policing agencies did an end-run around Congress' prohibition on databases by outsourcing to the private sector. The FBI's new purchase of your data from ChoicePoint is "so secret that even the contract number may not be disclosed." This routine dishonesty and disrespect doesn't even raise an eyebrow anymore.
Where do we go from here? As suggested, the challenge is to enjoy the benefits of massive data conglomeration without losing the benefits of privacy and freedom. It'll be tough - the technological solutions to identity fraud offered by financial cryptographers have not succeeded in gaining traction, probably because they are so asymmetric, and deployment is so complicated as to rule out easy wins. Even the fairly mild SSL systems the net community put in place in the '90s have been rampantly bypassed by phishing-based identity attacks, which doesn't leave much hope that financial cryptographers will ever succeed in privacy protection.
What is perhaps surprising is that we have in recent years redesigned our strong privacy systems to add optional identity tokens - for highly regulated markets such as securities trading. The designs haven't been tested in full, but it does seem as though it is possible to build systems that are both identity-strong and privacy-strong. In fact, the result seems to be stronger than either approach alone.
But it remains clear that deployment against an uninterested public is a hard issue. Every company selling privacy to my knowledge has failed. Don't hold your breath, or your faith, and keep an eye on how this so-far American disease spreads to other countries.
 Mike Lee & Brian Hitchen, "Identity Theft - The Real Cause,"
[1.b] Bob Sullivan, "Study: ID theft usually an inside job,"
 Declan McCullagh, 'The upside of "zero privacy,"'
 Adam Smith, "Lecture on the Influence of Commerce on Manners," 1766.
 I write about the embarrassment known as secure browsing here:
 The methods for this are ... not publishable just yet, embarrassingly.
Over on eWeek.com, an Internet magazine, a blog entry of mine seems to have hit home, and caused a response. Peter Coffee has written an article, "Report Takes Software Processes to Task," that starts with "I feel as if I could get an entire year's worth of columns, or perhaps even build my next career, out of the material in a Task Force Report..." Promising stuff!
He then goes on to draw a couple of reasonable points from the report (how unprofessional security professionals are..., how security is multi-disciplinary...) and then ruins his promising start by launching an ad hominem attack. Read it, it is mind bogglingly silly.
I won't respond, other than to point out that real security professionals do not do the ad hominem ("against the man") as it distracts from the real debate of security. As he rightly intimated, security is substantially complex. As he apparently missed, this makes security very vulnerable to the sort of $50 million pork barrel projects that look good in a report, but miss the point of the complexity. And, Mr Coffee definitely missed that doing the ad hominem thing signalled that someone was upset at their pork being spiked. Sorry about that!
Comments of any form are welcome, although I admit to being surprised at this one. Especially, if Mr Coffee would like to take up his claim to spend a year reading and benefitting from the report, I'll respond on the security aspects he raises.
 Ian Grigg, "cybersecurity FUD," 05th April, 2004,
 Peter Coffee, "Report Takes Software Processes to Task," 22nd April, 2004,
 National Cyber Security Partnership, "Security Across the Software Development Life Cycle,"
 Ian Grigg, "Financial Cryptography in 7 Layers," 4th Financial Cryptography Conference, 2000,
For those interested in the intersection of security and economics, Ross Anderson's page has a wealth of links.
"Do we spend enough on keeping `hackers' out of our computer systems? Do we not spend enough? Or do we spend too much? For that matter, do we spend too little on the police and the army, or too much? And do we spend our security budgets on the right things?"
"The economics of security is a hot and rapidly growing field of research. More and more people are coming to realise that security failures are often due to perverse incentives rather than to the lack of suitable technical protection mechanisms. (Indeed, the former often explain the latter.) While much recent research has been on `cyberspace' security issues - from hacking through fraud to copyright policy - it is expanding to throw light on `everyday' security issues at one end, and to provide new insights and new problems for theoretical computer scientists and `normal' economists at the other. In the commercial world, as in the world of diplomacy, there can be complex linkages between security arguments and economic ends."
"This page provides links..."
In what is rapidly becoming an Internet soap opera, an alleged writer of the Sasser virus, 18 year old Sven Jaschan from Germany, was fingered under the bounty program initiated by Microsoft a few months back. As predicted, with $250,000 in prize money, an immediate question faces Microsoft: are the informants in on the game?
Microsoft insists that "the informant had no connection to the virus writer's work, and say they wouldn't pay a reward to anyone who had helped author the computer virus." Others are skeptical, both of the incident and the benefit of the program .
Says one person: "In the last 15 years we've had 30 or 40 arrests of these people worldwide, and yet we still get 15 more of these (viruses) every week." The power of perception remains foremost here as all reporters routinely ignore the underlying structural weaknesses in the Microsoft platform that is being hit by virus after virus. Perhaps that story is stale.
The German authorities released the author immediately, when they discovered that his intentions may have been honourable. He was just helping his Mom, the papers say, and he deserves a medal, not prison:
'Despite the damage to millions of computers, one leading German newspaper said in a page one commentary Monday there was a strange sense of national pride that a German student had outwitted the world's best computer experts. "Many of the (German) journalists who traveled to the province could not help but harbor clandestine admiration for the effectiveness of the worm," Die Welt daily wrote.'
American virus company NAI immediately responded with a call for new laws:
'Jimmy Kuo, a research fellow with antivirus software maker NAI .... said that additional laws may be necessary to dissuade virus writers from releasing their programs onto the Internet. "We would hope that there could be laws that would prohibit the posting of malicious code," Kuo said. "Sasser was partially written by some malicious code that was downloaded by the Internet." '
They had their chance in 1945. But there is good news - at least Microsoft announced a few years ago that security is its goal. I see no evidence in the browser market that they are serious, but I suppose we'll know more in 15 more years.
Addendum: It seems that a week later ("Police probe Sasser informant"), the informant was already on the way to losing his bounty. Question is, what happens now? What's the point in informing on a virus writer if your life gets turned upside down on the suspicion that you are in cahoots? Safer to go find some other line of work...
The Feb issue of Nilson Report reports stats from the antiphishing.org WG. New for me at least, is some light thrown on Tumbleweed, the company behind the WG, which as suspected is casting itself as a solution to phishing.
"Email Signatures [quoteth Nilson]. Tumbleweed is developing a method of using digital signature issued by a trusted Certificate Authority (CA) to sign emails. This type of technology, also being pursued by AOL, Microsoft, and Yahoo, would help thwart phishing scams. While crooks who own legitimate sounding domain names (such as Visa.customerservice.com) could still sign their messages, an alert would arrive with the email if the signature had not been issued by a CA. The larger problem with signing emails could come down the line as phishers migrate to other methods of luring victims. Some have already started using instant messaging. Next could be mobile messaging, banner ads, and sites that would turn up readily in a Google search. Beefing up law enforcement is another option, but with more and more phishers operating globally, it can take up to a week to ferret them out and shut them down."
Well, Nilson picked up the obvious, so no need to dwell on it here. It then goes on to talk about Passmark, which I slammed in Phishing - and now the "solutions providers".
What are we supposed to conclude from this parade of aspiring security beauties? One solution provider hasn't thought it through at all, and the other seems to be "just using CA-signed certs" - the very technology that is being perverted in the first place. Neither, it seems, has thought it through.
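The certificate point is worth making concrete: a CA-signed cert for Visa.customerservice.com can be perfectly valid - it just certifies the wrong thing. What matters is whether the signer's registrable domain matches the brand being claimed, which "was it issued by a CA?" never checks. A toy sketch (naive on purpose; real code needs the public-suffix list):

```python
# Toy sketch of the weakness Nilson notes: verification must compare
# the registrable domain, not merely accept any CA-issued signature.

def registrable_domain(host):
    """Naive registrable domain: the last two labels of the hostname."""
    return ".".join(host.lower().split(".")[-2:])

def signer_matches_brand(signer_host, brand_domain):
    """True only if the signing host actually belongs to the brand."""
    return registrable_domain(signer_host) == registrable_domain(brand_domain)
```

Under this check, Visa.customerservice.com reduces to customerservice.com, which is not visa.com, CA signature or no CA signature.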
Is there no security company out there that does security? It is rather boring repeating the solution so I won't, today.
Beepcard has developed Comdot™, a self-powered electronic card that performs wireless authentication without using a card reader. The card transmits a user identification code to a PC, cell phone, or regular phone, enabling online authentication and physical presence in online transactions.
Comdot supports payment card legacy systems, such as magnetic stripe readers, smart chips and embossing. It can be implemented as a standard credit card, a membership card, or a gift certificate, and works both on the Internet and in the offline world.
The Comdot system will come as welcome relief to any system provider struggling to increase security rapidly on a mass scale, and to do so unobtrusively. Comdot™ is the ideal solution to the "reader" problem that has plagued mass deployment of smart cards. Indeed, these sound-based communications cards reach most transaction arenas that until now have been relegated to a status that the financial services world has always regarded as "card-not-present." Also for healthcare organizations, transportation and communications networks and corporate computing systems, Comdot™ cards offer an important leap forward as an authentication scheme that is both secure and convenient.
The "Reader-Free" Revolution
How do we do it? By using "clientless" architecture and by creating an active, rather than passive, card device:
Clientless architecture. Any standard home computer can talk to Comdot cards, as soon as the card software is installed. The sub-100k card communications software applet can be embedded in any service provider web page or e-wallet system or can reside within any other software that is permanently resident on a user's computer. Either way, installation is simple and neat. The web-based version installs automatically on the user's computer. The resident version comes with a wizard that installs onto the user's computer in seconds.
Comdot turns every PC or phone into a secure point of sale, enabling secure Internet shopping, banking, and financial account services. Comdot and accompanying software provide online value in several core operations, such as:
Launch. One-click launch of web browser and direction to the card issuer's online services. One-click launch of e-wallets, online account services, or other value-added Internet services.
Authenticate. Online authentication of users. The proliferation of Internet banking, stock portfolios, and application service providers of all sorts increases the need for online user authentication. Comdot is a low-cost, physical, first-factor user authentication device that replaces vulnerable and easy-to-forget passwords.
Transact. Unprecedented physical presence in online transactions. The Beepcard card authenticates cardholders to their payment card issuers and e-merchants, greatly reducing the problem of on-line fraud. Because the presence of a Comdot card in transactions can be proven, cardholders shop online without fear of credit card theft. The result: increased consumer trust in e-commerce. The presence of Comdot technology in an online transaction reduces the likelihood of purchase dispute and repudiation.
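Beepcard hasn't published the Comdot protocol, so the following is purely a hedged sketch of what a card of this shape could do: emit an authentication code derived from a per-card secret and a counter, which the issuer, holding the same secret, recomputes and compares. All names and parameters here are invented.

```python
import hashlib
import hmac

def auth_code(card_secret: bytes, counter: int) -> str:
    """Code the card would emit: HMAC of a monotonic counter under the
    per-card secret, truncated for transmission over a narrow channel."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(card_secret, msg, hashlib.sha256).hexdigest()[:8]

def issuer_verify(card_secret: bytes, counter: int, presented: str) -> bool:
    """Issuer-side check; constant-time compare avoids timing leaks."""
    return hmac.compare_digest(auth_code(card_secret, counter), presented)
```

Because each code depends on a counter, a replayed recording fails the next verification - which is the whole point of "physical presence" claims for such cards.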
One bright spot in the aforementioned report on cyber security is the section on security modelling. I had looked at this a few weeks back and found ... very little in the way of methodology and guidance on how to do this as a process. The sections extracted below confirm that there isn't much out there, as well as listing what steps are known, and provide some references. FTR.
 Cybersecurity FUD, FC Blog entry, 5th April 2004, http://www.financialcryptography.com/mt/archives/000107.html
Security Across the Software Development Lifecycle Task Force, _Improving Security Across the Software Development LifeCycle_, 1st April, 2004. Appendix B: Processes to Produce Secure Software, "Practices for Producing Secure Software," pp. 21-25. http://www.cyberpartnership.org/SDLCFULL.pdf
 Browser Threat Model, FC Blog entry, 26th February 2004. http://www.financialcryptography.com/mt/archives/000078.html
While principles alone are not sufficient for secure software development, principles can help guide secure software development practices. Some of the earliest secure software development principles were proposed by Saltzer and Schroeder in 1974 [Saltzer]. These eight principles apply today as well and are repeated verbatim here:
1. Economy of mechanism: Keep the design as simple and small as possible.
2. Fail-safe defaults: Base access decisions on permission rather than exclusion.
3. Complete mediation: Every access to every object must be checked for authority.
4. Open design: The design should not be secret.
5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
8. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
Later work by Peter Neumann [Neumann], John Viega and Gary McGraw [Viega], and the Open Web Application Security Project (http://www.owasp.org) builds on these basic security principles, but the essence remains the same and has stood the test of time.
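Principle 2, fail-safe defaults, is the easiest to state and perhaps the easiest to get wrong in practice. A minimal sketch of what default-deny access checking looks like (the grants table and names are invented for illustration):

```python
# Fail-safe defaults: access decisions are based on explicit permission,
# and anything not listed is denied. An omission in the table therefore
# fails safe (no access) rather than failing open.

GRANTS = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Default deny: a missing (user, resource) entry means no access."""
    return action in GRANTS.get((user, resource), set())
```

Note how the principle and the code line up: the dangerous design is the one that enumerates denials and permits everything else.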
Threat modeling is a security analysis methodology that can be used to identify risks, and guide subsequent design, coding, and testing decisions. The methodology is mainly used in the earliest phases of a project, using specifications, architectural views, data flow diagrams, activity diagrams, etc. But it can also be used with detailed design documents and code. Threat modeling addresses those threats with the potential of causing the maximum damage to an application.
Overall, threat modeling involves identifying the key assets of an application, decomposing the application, identifying and categorizing the threats to each asset or component, rating the threats based on a risk ranking, and then developing threat mitigation strategies that are then implemented in designs, code, and test cases. Microsoft has defined a structured method for threat modeling along these lines [Howard 2002].
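To make the "rate the threats" step concrete: one risk-ranking scheme from the same Microsoft lineage [Howard 2002] is DREAD (Damage, Reproducibility, Exploitability, Affected users, Discoverability), each rated 1-10 and averaged. A sketch, with the example threats and scores invented:

```python
# Sketch of the threat-rating step using a DREAD-style score.
# The threats and their ratings below are invented for illustration.

threats = [
    {"name": "SQL injection in login form", "D": 9, "R": 9, "E": 7, "A": 9, "Di": 8},
    {"name": "verbose error pages leak paths", "D": 3, "R": 10, "E": 9, "A": 5, "Di": 9},
    {"name": "session token theft over HTTP", "D": 8, "R": 6, "E": 5, "A": 8, "Di": 6},
]

def dread_score(t):
    """Average of the five DREAD components, each rated 1-10."""
    return (t["D"] + t["R"] + t["E"] + t["A"] + t["Di"]) / 5

# Mitigation strategies are then developed for the highest-ranked threats first.
ranked = sorted(threats, key=dread_score, reverse=True)
```

The ranking, not the absolute number, is what feeds the subsequent design, coding, and testing decisions.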
Other structured methods for threat modeling are available as well [Schneier].
Although some anecdotal evidence exists for the effectiveness of threat modeling in reducing security vulnerabilities, no empirical evidence is readily available.
Attack trees characterize system security when faced with varying attacks. The use of attack trees for characterizing system security is based partially on Nancy Leveson's work with "fault trees" in software safety [Leveson]. Attack trees model the decision-making process of attackers. Attacks against a system are represented in a tree structure. The root of the tree represents the potential goal of an attacker (for example, to steal a credit card number). The nodes in the tree represent actions the attacker takes, and each path in the tree represents a unique attack to achieve the goal of the attacker.
Attack trees can be used to answer questions such as: What is the easiest attack? The cheapest attack? The attack that causes the most damage? The hardest attack to detect? Attack trees are used for risk analysis, to answer questions about the system's security, to capture security knowledge in a reusable way, and to design, implement, and test countermeasures to attacks [Viega] [Schneier] [Moore].
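The "cheapest attack" question falls straight out of the tree structure: OR nodes take the minimum over their children, AND nodes the sum. A minimal sketch, with the goal (stealing a credit card number, as in the report's example) and the costs invented:

```python
# Minimal attack-tree evaluation: leaves are concrete attacker actions
# with a cost; OR nodes need any one child, AND nodes need every child.

def cheapest(node):
    """Cheapest total cost to achieve the node's goal."""
    if "cost" in node:                      # leaf: a concrete action
        return node["cost"]
    child_costs = [cheapest(c) for c in node["children"]]
    return min(child_costs) if node["type"] == "OR" else sum(child_costs)

steal_card_number = {
    "type": "OR",
    "children": [
        {"cost": 5000},                     # bribe an insider
        {"type": "AND", "children": [       # phish the victim, then
            {"cost": 100},                  #   send the lure
            {"cost": 600},                  #   defeat the one-time code
        ]},
    ],
}
```

Swapping min/sum for max, or for probabilities, answers the other questions (most damaging, hardest to detect) over the same tree.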
Just as with Threat Modeling, there is anecdotal evidence of the benefits of using Attack Trees, but no empirical evidence is readily available.
Hoglund and McGraw have identified forty-nine attack patterns that can guide design, implementation, and testing [Hoglund]. These soon-to-be-published patterns include:
1. Make the Client Invisible
2. Target Programs That Write to Privileged OS Resources
3. Use a User-Supplied Configuration File to Run Commands That Elevate Privilege
4. Make Use of Configuration File Search Paths
5. Direct Access to Executable Files
6. Embedding Scripts within Scripts
7. Leverage Executable Code in Nonexecutable Files
8. Argument Injection
9. Command Delimiters
10. Multiple Parsers and Double Escapes
11. User-Supplied Variable Passed to File System Calls
12. Postfix NULL Terminator
13. Postfix, Null Terminate, and Backslash
14. Relative Path Traversal
15. Client-Controlled Environment Variables
16. User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth)
17. Session ID, Resource ID, and Blind Trust
18. Analog In-Band Switching Signals (aka "Blue Boxing")
19. Attack Pattern Fragment: Manipulating Terminal Devices
20. Simple Script Injection
21. Embedding Script in Nonscript Elements
22. XSS in HTTP Headers
23. HTTP Query Strings
24. User-Controlled Filename
25. Passing Local Filenames to Functions That Expect a URL
26. Meta-characters in E-mail Header
27. File System Function Injection, Content Based
28. Client-side Injection, Buffer Overflow
29. Cause Web Server Misclassification
30. Alternate Encoding the Leading Ghost Characters
31. Using Slashes in Alternate Encoding
32. Using Escaped Slashes in Alternate Encoding
33. Unicode Encoding
34. UTF-8 Encoding
35. URL Encoding
36. Alternative IP Addresses
37. Slashes and URL Encoding Combined
38. Web Logs
39. Overflow Binary Resource File
40. Overflow Variables and Tags
41. Overflow Symbolic Links
42. MIME Conversion
43. HTTP Cookies
44. Filter Failure through Buffer Overflow
45. Buffer Overflow with Environment Variables
46. Buffer Overflow in an API Call
47. Buffer Overflow in Local Command-Line Utilities
48. Parameter Expansion
49. String Format Overflow in syslog()
These attack patterns can be used to discover potential security defects.
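Pattern 14, relative path traversal, is a good example of turning such a pattern into a concrete test: probe a file-serving path with ../ sequences and check whether the naively joined path resolves outside the document root. A sketch (the docroot and helper are invented for illustration):

```python
import posixpath

# Attack pattern 14 (relative path traversal) as a concrete check:
# does a naive join of the docroot and a user-supplied path resolve
# outside the docroot once ".." segments are normalised?

DOCROOT = "/var/www/html"

def escapes_docroot(requested: str) -> bool:
    """True if the requested path would escape DOCROOT after resolution."""
    resolved = posixpath.normpath(posixpath.join(DOCROOT, requested))
    return resolved != DOCROOT and not resolved.startswith(DOCROOT + "/")
```

A request for "../../etc/passwd" normalises to /var/etc/passwd, outside the docroot; a test suite built from the forty-nine patterns would throw exactly this kind of input at every file-handling entry point.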
[Saltzer] Saltzer, Jerry, and Mike Schroeder, "The Protection of Information in Computer Systems", Proceedings of the IEEE. Vol. 63, No. 9 (September 1975), pp. 1278-1308. Available on-line at http://cap-lore.com/CapTheory/ProtInf/.
[Neumann] Neumann, Peter, Principles Assuredly Trustworthy Composable Architectures: (Emerging Draft of the) Final Report, December 2003
[Viega] Viega, John, and Gary McGraw. Building Secure Software: How to Avoid Security Problems the Right Way, Reading, MA: Addison Wesley, 2001.
[Howard 2002] Howard, Michael, and David C. LeBlanc. Writing Secure Code, 2nd edition, Microsoft Press, 2002
[Schneier] Schneier, Bruce. Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons (2000)
[Leveson] Leveson, Nancy G. Safeware: System Safety and Computers, Addison-Wesley, 1995.
[Moore 1999] Moore, Geoffrey A., Inside the Tornado : Marketing Strategies from Silicon Valley's Cutting Edge. HarperBusiness; Reprint edition July 1, 1999.
[Moore 2002] Moore, Geoffrey A. Crossing the Chasm. Harper Business, 2002.
[Hoglund] Hoglund, Greg, and Gary McGraw. Exploiting Software: How to Break Code. Addison-Wesley, 2004.
Story on how the "free email leads to spam" equation is being changed with massive private litigation:
Internet giant AOL has ratcheted up the war against unsolicited e-mail with a publicity-grabbing coup - an online raffle of a spammer's seized Porsche.
AOL won the car - a $47,000 Boxster S - as part of a court settlement against an unnamed e-mailer last year.
"We'll take cars, houses, boats - whatever we can find and get a hold of," said AOL's Randall Boe.
According to Mr Boe, the Porsche's previous owner made more than $1m by sending junk e-mail.
Hitting them where it hurts
AOL is one of the noisiest opponents of the evasive spam trade, and this month joined forces with Microsoft, Yahoo and Earthlink to sue hundreds of spammers.
Seizure of property is becoming a major tactic in these lawsuits, since guilty spammers often protest their inability to pay large fines.
The Porsche-owning spammer, whose identity remains confidential, was one of a group sued last year for having sent 1 billion junk messages to AOL members, pitching pornography, college degrees, cable TV descramblers and other products.
Mr Boe said the Porsche was seized mainly for its symbolic value, as the obvious fruit of an illegal trade.
The Porsche sweepstake lasts until 8 April, and will be open only to those who were AOL members when it was first announced.
Story from BBC NEWS:
Published: 2004/03/30 07:20:09 GMT
© BBC MMIV
A good article on the tracking of terror cells, drawing from some weaknesses in cell commsec. The article appears, and purports, to be complete, only because the methods described have already been rendered useless: a new weapon, a new defence. Anti-terror battles are like that; this shows how much more effective police-style investigation is against terrorism than a military posture.
Terror network was tracked by cellphone chips
Don Van Natta Jr. and Desmond Butler/NYT
Thursday, March 4, 2004
How cellphones helped track global terror web
LONDON The terrorism investigation code-named Mont Blanc began almost by accident in April 2002, when authorities intercepted a cellphone call that lasted less than a minute and involved not a single word of conversation.
Investigators, suspicious that the call was a signal between terrorists, followed the trail first to one terror suspect, then to others, and eventually to terror cells on three continents.
What tied them together was a computer chip smaller than a fingernail. But before the investigation wound down in recent weeks, its global net caught dozens of suspected Qaeda members and disrupted at least three planned attacks in Saudi Arabia and Indonesia, according to counterterrorism and intelligence officials in Europe and the United States.
The investigation helped narrow the search for one of the most wanted men in the world, Khalid Shaikh Mohammed, who is accused of being the mastermind of the Sept. 11 attacks, according to three intelligence officials based in Europe. The U.S. authorities arrested Mohammed in Pakistan last March.
For two years, investigators now say, they were able to track the conversations and movements of several Qaeda leaders and dozens of operatives after determining that the suspects favored a particular brand of cellphone chip. The chips carry prepaid minutes and allow phone use around the world.
Investigators said they believed that the chips, made by Swisscom of Switzerland, were popular with terrorists because they could buy the chips without giving their names.
"They thought these phones protected their anonymity, but they didn't," said a senior intelligence official based in Europe. Even without personal information, the authorities were able to conduct routine monitoring of phone conversations.
A half-dozen senior officials in the United States and Europe agreed to talk in detail about the previously undisclosed investigation because, they said, it was completed. They also said they had strong indications that terror suspects, alert to the phones' vulnerability, had largely abandoned them for important communications and instead were using e-mail, Internet phone calls and hand-delivered messages.
"This was one of the most effective tools we had to locate Al Qaeda," said a senior counterterrorism official in Europe.
The officials called the operation one of the most successful investigations since Sept. 11, 2001, and an example of unusual cooperation between agencies in different countries. Led by the Swiss, the investigation involved agents from more than a dozen countries, including the United States, Pakistan, Saudi Arabia, Germany, Britain and Italy.
In 2002, the German authorities broke up a cell after monitoring calls by Abu Musab al-Zarqawi, who has been linked by some top U.S. officials to Al Qaeda, in which he could be heard ordering attacks on Jewish targets in Germany. Since then, investigators say, Zarqawi has been more cautious.
"If you beat terrorists over the head enough, they learn," said Colonel Nick Pratt, a counterterrorism expert and professor at the George C. Marshall European Center for Security Studies in Garmisch-Partenkirchen, Germany. "They are smart."
Officials say that on the rare occasions when operatives still use mobile phones, they keep the calls brief and use code words.
"They know we are on to them and they keep evolving and using new methods, and we keep finding ways to make life miserable for them," said a senior Saudi official. "In many ways, it's like a cat-and-mouse game."
Some Qaeda lieutenants used cellphones only to arrange a conversation on a more secure telephone. It was one such brief cellphone call that set off the Mont Blanc investigation.
The call was placed on April 11, 2002, by Christian Ganczarski, a 36-year-old Polish-born German Muslim who the German authorities suspected was a member of Al Qaeda. From Germany, Ganczarski called Khalid Shaikh Mohammed, said to be Al Qaeda's military commander, who was running operations at the time from a safe house in Karachi, Pakistan, according to two officials involved in the investigation.
The two men did not speak during the call, counterterrorism officials said. Instead, the call was intended to alert Mohammed of a Qaeda suicide bombing mission at a synagogue in Tunisia, which took place that day, according to two senior officials. The attack killed 21 people, mostly German tourists.
Through electronic surveillance, the German authorities traced the call to Mohammed's Swisscom cellphone, but at first they did not know it belonged to him. Two weeks after the Tunisian bombing, the German police searched Ganczarski's house and found a log of his many numbers, including one in Pakistan that was eventually traced to Mohammed. The German police had been monitoring Ganczarski because he had been seen in the company of militants at a mosque in Duisburg, and last June the French police arrested him in Paris.
Mohammed's cellphone number, and many others, were given to the Swiss authorities for further investigation. By checking Swisscom's records, Swiss officials discovered that many other Qaeda suspects used the Swisscom chips, known as Subscriber Identity Module, or SIM cards, which allow phones to connect to cellular networks.
For months the Swiss, working closely with counterparts in the United States and Pakistan, used this information in an effort to track Mohammed's movements inside Pakistan. By monitoring the cellphone traffic, they were able to get a fix on Mohammed, but the investigators did not know his specific location, officials said.
Once Swiss agents had established that Mohammed was in Karachi, the U.S. and Pakistani security services took over the hunt with the aid of technology at the U.S. National Security Agency, said two senior European intelligence officials. But it took months for them to actually find Mohammed "because he wasn't always using that phone," an official said. "He had many, many other phones."
Mohammed was a victim of his own sloppiness, said a senior European intelligence official. He was meticulous about changing cellphones, but apparently he kept using the same SIM card.
In the end, the authorities were led directly to Mohammed by a CIA spy, the director of central intelligence, George Tenet, said in a speech last month. A senior U.S. intelligence official said this week that the capture of Mohammed "was entirely the result of excellent human operations."
When Swiss and other European officials heard that U.S. agents had captured Mohammed last March, "we opened a big bottle of Champagne," a senior intelligence official said.
Among Mohammed's belongings, the authorities seized computers, cellphones and a personal phone book that contained hundreds of numbers. Tracing those numbers led investigators to as many as 6,000 phone numbers, which amounted to a virtual road map of Al Qaeda's operations, officials said.
The authorities noticed that many of Mohammed's communications were with operatives in Indonesia and Saudi Arabia. Last April, using the phone numbers, officials in Jakarta broke up a terror cell connected to Mohammed, officials said.
After the suicide bombings of three housing compounds in Riyadh, Saudi Arabia, on May 12, the Saudi authorities used the phone numbers to track down two "live sleeper cells." Some members were killed in shootouts with the authorities; others were arrested.
Meanwhile, the Swiss had used Mohammed's phone list to begin monitoring the communications and activities of nearly two dozen of his associates. "Huge resources were devoted to this," a senior official said. "Many countries were constantly doing surveillance, monitoring the chatter."
Investigators were particularly alarmed by one call they overheard last June. The message: "The big guy is coming. He will be here soon."
An official familiar with the calls said, "We did not know who he was, but there was a lot of chatter." Whoever "the big guy" was, the authorities had his number. A Swisscom chip was in the phone.
"Then we waited and waited, and we were increasingly anxious and worried because we didn't know who it was or what he had intended to do," an official said.
But in July, the man believed to be "the big guy," Abdullah Oweis, who was born in Saudi Arabia, was arrested in Qatar. "He is one of those people able to move within Western societies and to help the mujahedeen, who have lesser experience," an official said. "He was at the very center of the Al Qaeda hierarchy. He was a major facilitator."
In January, the operation led to the arrests of eight people accused of being members of a Qaeda logistical cell in Switzerland.
Some are suspected of helping with the suicide bombings of the housing compounds in Riyadh, which killed 35 people, including eight Americans.
Later, the European authorities discovered that Mohammed had contacted a company in Geneva that sells Swisscom phone cards. Investigators said he ordered the cards in bulk.
The New York Times
Copyright © 2003 The International Herald Tribune
What surprises me is that no-one is asking why we think the government can do a better job with centralised security than the rest of us can do by ourselves. Whoops! Spoke too soon - Bruce Schneier writes about exactly that in
Security Risks of Centralization.
In discussions with Brill, he regularly said things like: "It's obviously better to do something than nothing." Actually, it's not obvious. Replacing several decentralized security systems with a single centralized security system can actually reduce the overall security, even though the new system is more secure than the systems being replaced.
An example will make this clear. I'll postulate piles of money secured by individual systems. The systems are characterized by the cost to break them. A $100 pile of money secured by a $200 system is secure, since it's not worth the cost to break. A $100 pile of money secured by a $50 system is insecure, since an attacker can make $50 profit by breaking the security and stealing the money.
Here's my example. There are 10 $100 piles, each secured by individual $200 security systems. They're all secure. There are another 10 $100 piles, each secured by individual $50 systems. They're all insecure.
Clearly something must be done.
One suggestion is to replace all the individual security systems by a single centralized system. The new system is much better than the ones being replaced; it's a $500 system.
Unfortunately, the new system won't provide more security. Under the old systems, 10 piles of money could be stolen at a cost of $50 per pile; an attacker would realize a total profit of $500. Under the new system, we have 20 $100 piles all secured by a single $500 system. An attacker now has an incentive to break that more-secure system, since he can steal $2000 by spending $500 -- a profit of $1500.
The problem is centralization. When individual security systems are combined in one centralized system, the incentive to break that new system is generally higher. Even though the centralized system may be harder to break than any of the individual systems, if it is easier to break than ALL of the individual systems, it may result in less security overall.
There is a security benefit to decentralized security.
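Schneier's arithmetic can be checked in a few lines. This sketch is my own illustration of the numbers in his example, not code from the article:

```python
def profit(pile_values, system_cost):
    """Attacker's profit from breaking one system guarding these piles."""
    return sum(pile_values) - system_cost

# Decentralized: ten $100 piles behind $200 systems (not worth breaking)
# and ten $100 piles behind $50 systems ($50 profit each).
decentralized = sum(max(0, profit([100], cost))
                    for cost in [200] * 10 + [50] * 10)

# Centralized: all twenty $100 piles behind a single $500 system.
centralized = max(0, profit([100] * 20, 500))

print(decentralized)  # 500
print(centralized)    # 1500
```

The centralized design triples the attacker's take even though its single system is "better" than any it replaced.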
Bruce Schneier's Crypto-Gram pointed at the controversial new program that encourages US companies to share their vulnerability information with the Department of Homeland Security.
It's a bit long to post, but it's well worth reading if one is interested in public & critical infrastructure protection. The bottom line: the new legal protection will probably cause more trouble than it's worth, and may make things more insecure:
By Kevin Poulsen, SecurityFocus Feb 20 2004 6:08PM
A long-anticipated program meant to encourage companies to provide the federal government with confidential information about vulnerabilities in critical systems took effect Friday, but critics worry that it may do more harm than good.
The so-called Protected Critical Infrastructure Information (PCII) program allows corporations who run key elements of U.S. infrastructure -- energy firms, telecommunications carriers, financial institutions, etc. -- to submit details about their physical and cyber vulnerabilities to a newly-formed office within the Department of Homeland Security, with legally-binding assurances that the information will not be used against them or released to the public.
The program implements controversial legislation that bounced around Capitol Hill for years before Congress passed it in the wake of the September 11 attacks as part of the Homeland Security Act of 2002. Security agencies have long sought information about vulnerabilities and likely attack points in critical infrastructures, but have found the private sector reluctant to share, for fear that sensitive or embarrassing information would be released through the Freedom of Information Act (FOIA).
As of Friday, federal law now protects that vulnerability information from disclosure through FOIA, and makes it illegal for government workers to leak it, provided companies follow certain procedures and submit the data to the new PCII office.
I just received my first well-written, properly spelt Nigerian fraud letter - and they've moved to Britain! Of course, it is the same old same old concept. Text is below if you are even the slightest bit interested.
My name is Becky J. Harding, I am a senior partner in the firm of Midland Consulting Limited: Private Investigators and Security Consultants. We are conducting a standard process investigation on behalf of HSBC, the International Banking Conglomerate.
This investigation involves a client who shares the same surname with you and also the circumstances surrounding investments made by this client at HSBC Republic, the Private Banking arm of HSBC. The HSBC Private Banking client died intestate and nominated no successor in title over the investments made with the bank. The essence of this communication with you is to request you provide us information/comments on any or all of the four issues:
1-Are you aware of any relative/relation who shares your same name whose last known contact address was Brussels Belgium?
2-Are you aware of any investment of considerable value made by such a person at the Private Banking Division of HSBC Bank PLC?
3-Born on the 1st of October 1941
4-Can you establish beyond reasonable doubt your eligibility to assume status of successor in title to the deceased?
It is pertinent that you inform us ASAP whether or not you are familiar with this personality that we may put an end to this communication with you and our inquiries surrounding this personality.
You must appreciate that we are constrained from providing you with more detailed information at this point. Please respond to this mail as soon as possible to afford us the opportunity to close this investigation.
Thank you for accommodating our enquiry.
Becky J. Harding.
For: Midland Consulting Limited.
PayPal Probed for Anti-Fraud Efforts
Monday March 8, 11:51 am ET
WASHINGTON (Reuters) - Federal and state investigators are examining whether online payment service PayPal violated consumer-protection laws in its fight against online fraud, parent company eBay Inc. (NasdaqNM:EBAY - News) said on Monday.
PayPal sometimes freezes customer accounts while it investigates suspicious transactions, a practice that has generated complaints to consumer-protection authorities, the online auctioneer said in its annual report.
"As a result of customer complaints, PayPal has ... received inquiries regarding its restriction and disclosure practices from the Federal Trade Commission and the attorneys general of a number of states," the report said.
"If PayPal's processes are found to violate federal or state law on consumer protection and unfair business practices, it could be subject to an enforcement action or fines."
An FTC spokeswoman declined initial comment.
PayPal handled more than $12.2 billion in transactions in 2003 and has 40 million customer accounts, according to the annual report.
The rate of fraudulent PayPal transactions is less than one-half of one percent, eBay has said.
An eBay spokesman was not immediately available for comment.
© 2004 Reuters
A working group on anti-phishing was formed late last year, and now publishes the first attempts (that I have seen) at hard statistics on the epidemic in their monthly Phishing Attack Trends Report.
The report has one salient number: 176 unique phishing attacks in January, up from 116 the previous month. Another document listed on that site (FTC's National and State Trends in Fraud and Identity Theft) showed a figure of $200 million lost in Internet related fraud over the year 2003.
Worth a read, if looking for hard info on phishing. As the WG has just been formed, and stats only go back a few months, it's too early to tell whether this is the collection ramping up, or we are in the middle of an explosion. The FTC's $200m doesn't drill down any further, so phishing will be some small percentage of that number.
(I'm not sure why the WG publishes in PDF, instead of HTML. It seems that they are reaching to bureaucrats and marketeers rather than techies, which is a worrying sign.)
A "solution" to Phishing called PassMarks has been proposed.
The solution claims that the site should present an individualised image, the PassMark, to each account on login. Unfortunately, this won't work.
Phishing involves interposing the attacking site as a spoof between the client's browser and the target site. The spoofing site simply copies the registration details across to the main website, and waits for the PassMark image to come back. And then copies the image back to the user's browser.
The flaw in their analysis may have been that they didn't realise that almost all phishing totally bypasses SSL. Of course. Or, maybe they didn't realise that the attackers are intelligent, and modify their approach to cope with the system under attack. Easy mistake to make, one supposes.
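A toy model of the relay makes the flaw concrete. All class names and the image filename below are my own illustration, not any real PassMark deployment:

```python
class RealSite:
    """Stand-in for the genuine site that shows a PassMark after step one."""
    def __init__(self, passmarks):
        self.passmarks = passmarks

    def passmark_for(self, username):
        return self.passmarks.get(username)


class SpoofSite:
    """Phishing proxy: relays the victim's username to the real site and
    serves the genuine PassMark back, so the page still looks right."""
    def __init__(self, target):
        self.target = target
        self.captured = []

    def login_step_one(self, username):
        self.captured.append(username)             # attacker keeps a copy
        return self.target.passmark_for(username)  # relayed genuine image


real = RealSite({"alice": "green-turtle.png"})
spoof = SpoofSite(real)
print(spoof.login_step_one("alice"))  # green-turtle.png - the real image
print(spoof.captured)                 # ['alice']
```

The victim sees their correct PassMark and proceeds to type the password, which the proxy also captures; the per-account image adds nothing against a live man-in-the-middle.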
The analysis we've done still stands - what is needed to secure browsing against (current day, validated) threats like phishing is to modify the existing underutilised SSL infrastructure in some fairly minor ways.
The analysis is long, complex and not written down in full. You can see something of the story on the SSL page.
PrisonPlanet reports that RFIDs are being used in new US notes! Read on for suprising results.
Is this a spoof or the birth of a new urban legend? If you have nothing better to do, read slashdot's opinion, which amongst much chit chat suggests that this is not RFIDs:
RFID Tags in New US Notes Explode When You Try to Microwave Them
Adapted from a letter sent to Henry Makow Ph.D.
Want to share an event with you, that we experienced this evening.. Dave had over $1000 dollars in his back pocket (in his wallet). New twenties were the lion share of the bills in his wallet. We walked into a truck stop/travel plaza and they have those new electronic monitors that are supposed to say if you are stealing something. But through every monitor, Dave set it off. He did not have anything to purchase in his hands or pockets. After numerous times of setting off these monitors, a person approached Dave with a 'wand' to swipe why he was setting off the monitors.
Believe it or not, it was his 'wallet'. That is according to the minimum wage employees working at the truck stop! We then walked across the street to a store and purchased aluminum foil. We then wrapped our cash in foil and went thru the same monitors. No monitor went off.
We could have left it at that, but we have also paid attention to the European Union and the 'rfid' tracking devices placed in their money, and the blatant bragging of Walmart and many corporations of using 'rfid' electronics on every marketable item by the year 2005.
Dave and I have brainstormed the fact that most items can be 'microwaved' to fry the 'rfid' chip, thus elimination of tracking by our government.
So we chose to 'microwave' our cash, over $1000 in twenties in a stack, not spread out on a carasoul. Do you know what exploded on American money?? The right eye of Andrew Jackson on the new twenty, every bill was uniform in it's burning... Isnt that interesting?
Now we have to take all of our bills to the bank and have them replaced, cause they are now 'burnt'.
We will now be wrapping all of our larger bills in foil on a regular basis.
What we resent is the fact that the government or a corporation can track our 'cash'. Credit purchases and check purchases have been tracked for years, but cash was not traceble until now...
Dave and Denise
All security models call for a threat model; it is one of the key inputs or factors in the construction of the security model. Secure browsing - SSL / HTTPS - lacked this critical analysis, and recent work over on the Mozilla Browser Project is calling for the rectification of this. Here's my attempt at a threat model for secure browsing, in draft.
Comments welcome. One thing - I've not found any doco on how a threat model is written out, so I'm in the dark a bit. But, ignorance is no excuse for not trying...
"London, UK - 19 February 2004, 17:30 GMT - A study by the mi2g Intelligence Unit reveals that the world's safest and most secure online server Operating System (OS) is proving to be the Open Source family of BSD (Berkley Software Distribution) and the Mac OS X based on Darwin. The study also reveals that Linux has become the most breached online server OS in the government and non-government spheres for the first time, while the number of successful hacker attacks against Microsoft Windows based servers have fallen consistently for the last ten months."
To read the rest, you have to buy the report, but ...
You can see more in the article below, from last year:
By JACK KAPICA Globe and Mail Update
Friday, Sep. 12, 2003
Linux, not Microsoft Windows, remains the most-attacked operating system, a British security company reports.
During August, 67 per cent of all successful and verifiable digital attacks against on-line servers targeted Linux, followed by Microsoft Windows at 23.2 per cent. A total of 12,892 Linux on-line servers running e-business and information sites were successfully breached in that month, followed by 4,626 Windows servers, according to the report.
Just 360 - less than 2 per cent - of BSD Unix servers were successfully breached in August.
The data comes from the London-based mi2g Intelligence Unit, which has been collecting data on overt digital attacks since 1995 and verifying them. Its database has tracked more than 280,000 overt digital attacks and 7,900 hacker groups.
Linux remained the most attacked operating system on-line during the past year, with 51 per cent of all successful overt digital attacks.
Microsoft Windows servers belonging to governments, however, were the most attacked (51.4 per cent) followed by Linux (14.3 per cent) in August.
The economic damage from the attacks, in lost productivity and recovery costs, fell below average in August, to $707-million (U.S.).
The overall economic damage in August from overt and covert attacks as well as viruses and worms stood at an all-time high of $28.2-billion.
The Sobig and MSBlast malware that afflict Microsoft platforms contributed significantly to the record estimate.
"The proliferation of Linux within the on-line server community coupled with inadequate knowledge of how to keep that environment secure when running vulnerable third-party applications is contributing to a consistently higher proportion of compromised Linux servers," mi2g chairman D.K. Matai said.
"Microsoft deserves credit for having reduced the proportion of successful on-line hacker attacks perpetrated against Windows servers."
"The May figures for manual and semi-automated hacking attacks - 18,847 - against online servers worldwide show signs of stabilisation in comparison to each of the three previous months. At present rates, the projected number of overt digital attacks carried out by hackers against online servers in 2004 will be only 2% up on the previous year and would stand at around 220,000. If this trend continues, it will mark the slowest growth rate for manual and semi-automated hacking attacks against online servers according to records that date back to 1995. This confirms that the dominant threat to the global digital eco-system is coming from malware as opposed to direct hacking attacks."
Just to get a feel for this, it is worth clicking and waiting for the photos to download! Check the quality of the workmanship on the hidden camera.
News reports from a couple of weeks back indicate that a worm called Dumaru-Y installs a keylogger that listens for e-gold password and account numbers.
This is significant in that this might be the first time that viruses are specifically targeting the DGCs with an attack on the user's dynamic activity. (MiMail just recently targeted both e-gold and PayPal users with more conventional spoofs.)
e-gold is a special favourite with scammers and thieves for three reasons: its payments are RTGS, there is a deep market in independent exchange, and e-gold won't provide much help without a court order. Also, it is by far the largest by volume of transactions, which provides cover for theft.
This has been thought about for a long time. In fact, one issuer of gold, eBullion, has had a hardware password token in place for a long time. Others like Pecunix have tried to set up a subsetting password approach, where only a portion of the password is revealed every time.
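The subsetting idea can be sketched in a few lines. This is my illustration of a generic partial-password scheme, not Pecunix's actual protocol:

```python
import random

def make_challenge(password_length, reveal=3, rng=random):
    """Server picks a random subset of character positions to ask for."""
    return sorted(rng.sample(range(password_length), reveal))

def check_response(password, positions, supplied):
    """A keylogger that captures one login sees only these few characters,
    and the next login will ask for a different subset."""
    return [password[i] for i in positions] == list(supplied)

password = "correct-horse"
rng = random.Random(7)   # fixed seed so the sketch is reproducible
positions = make_challenge(len(password), rng=rng)
answer = [password[i] for i in positions]
print(check_response(password, positions, answer))  # True
print(check_response(password, positions, "xyz"))   # False
```

The point is that a single captured login never discloses the whole secret, raising the number of observations an attacker needs.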
European banks delivered hardware tokens routinely to thwart such threats. This may have been prudent, but it also saddled these systems with excessive costs; the price of the eBullion crypto token was thought to be too high for most users.
Using viruses is a new tactic, but not an unexpected one. As with all wars, look for an escalation of tactics, and commensurate and matching improvements in security.
In a rare burst of journalistic research, the Economist has a good article on the state of viruses and similar security threats.
It downplays terrorism, up-plays phishing, agrees that Microsoft is a monoculture, but disagrees with any conclusions promoted.
Even better, the Economist goes on to be quite perceptive about the future of anonymity, pseudonymity, and how to protect privacy. It's almost as if they did their homework! Well done, and definitely worth reading.
Microsoft's new bounty program has all the elements of a classic movie script. In Sergio Leone's 3rd spaghetti western, The Man with No Name makes good money as a bounty hunter. Is this another chance for him?
Microsoft's theory is that they can stick up a few wanted posters, and thus rid the world of these Ugly virus writers. Law Enforcement Officers with angel eyes will see this as a great opportunity. Microsoft has all this cash, and the LEOs need a helping hand. Surely with the right incentives, they can file the report to the CyberSecurity Czar in Washington that the tips are flooding in?
Wait for the tips, and go catch the Uglies. (And pay out the bounty.) Nothing could be simpler than that. Wait for more tips, and go catch more Uglies. And...
Wait a minute! In the film, Tuco (the Ugly) and The Man with No Name (the Good) are in cahoots! Somehow, Tuco always gets away, and the two of them split the bounty. Again and again... It's a great game.
Make no mistake, $250,000 in Confederate Gold just changed the incentive structure for your average virus writer. Up until now, writing viruses was just for fun. A challenge. Or a way to take down your hated anti-spam site. Some way for the frustrated ex-soviet nuclear scientist to stretch his talents. Or a way to poke a little fun at Microsoft.
Now, there is financial incentive. With one wanted poster, Microsoft has turned virus writing into a profitable business. All Blondie has to do is write a virus, blow away a few million user installations, and then convince Tuco to sit still for a while in a Yankee jail.
The Man with No Name may just ride again!
Merchants who *really* rely on their web site being secure are those that take instructions for the delivery of value over them. It's a given that they have to work very hard to secure their websites, and it is instructive to watch their efforts.
The cutting edge in making web sites secure is occurring in the gold community and presumably the PayPal community (I don't really follow the latter). AFAIK, this has been the case since the late 90's; before that, some of the European banks were doing heavy duty stuff with expensive tokens.
e-gold have a sort of graphical number that displays and has to be entered in by hand. This works against bots, but of course, the bot writers have conquered it somehow. e-gold are of course the recurrent victim of the spoofers, and it is not clear why they have not taken more serious steps to protect themselves against attacks on their system.
eBullion sell an expensive hardware token that I have heard stops attacks cold, but suffers from poor take-up because of its cost.
Goldmoney relies on client certs, which also seem to suffer from poor take-up, probably more to do with their clumsiness, due to the early, uncertain support in the browser and in the protocol. Also, Goldmoney has structured themselves to be an unattractive target for attackers, using governance and marketing techniques, so I expect them to be the last to experience real tests of their security.
Another small player called Pecunix allows you to integrate your PGP key into your account, and confirm your nymity using PGP signatures. At least one other player had decided to try smart cards.
Now a company called NetPay.TV - I have no idea about them, really - have started a service that sends out a 6-digit PIN over the SMS messaging features of the GSM network for the user to type into the website.
It's highly innovative and great security to use a completely different network to communicate with the user and confirm their nymity. On the face of it, it would seem to pretty much knock a hole into the incessant, boring and mind-bogglingly simple attacks against the recommended SSL web site approach.
What remains to be seen is if users are prepared to pay 15c each time for the SMS message. In Europe, SMS messaging is the rage, so there won't be much of a problem there, I suspect.
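In outline, the scheme is just a one-time PIN delivered over a second channel. Here is a minimal sketch with the SMS gateway stubbed out as a callback; all names are my own illustration, not NetPay.TV's actual API:

```python
import hmac
import secrets

def new_pin():
    """Six-digit one-time PIN, zero-padded."""
    return f"{secrets.randbelow(10**6):06d}"

class SmsOtpSession:
    def __init__(self, send_sms):
        self.pin = new_pin()
        send_sms(self.pin)  # delivered out-of-band over the GSM network

    def verify(self, typed):
        # constant-time comparison, out of habit
        return hmac.compare_digest(self.pin, typed)

outbox = []                       # stub standing in for the SMS gateway
session = SmsOtpSession(outbox.append)
print(session.verify(outbox[0]))  # True: the user types the PIN from the SMS
```

The security comes from the second channel: a phisher who captures the web password still lacks the PIN sent to the user's phone for that session.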
What's interesting here is that we are seeing the market for security evolve and bypass the rather broken model that was invented by Netscape back in '94 or so. In the absence of structured, institutional, or mandated approaches, we now have half a dozen distinct approaches to web site application security.
As each of the programmes is voluntary, we have a fair and honest market test of the security results.
Here's one, if it can be seen:
Hopefully that doesn't let you into my account! It's curious, if you change the numbers in the above URL, you get a similar drawing, but it is wrong...
 All companies are .com, unless otherwise noted.
 As well as the activity on the gold side, there are the adventures of PayPal with its pairs of tiny payments made to users' conventional bank accounts.
I just thought of an attack against NetPay.TV, but I'll keep quiet so as not to spoil anyone else's fun.
It is now clear that the U.S. Department of Homeland Security need rely on no-one to advise them on computer security risks to the homeland. The binary choice of Microsoft as either a) good or b) bad has now become a unary choice of a) good. At least, by all gainfully employed security experts.
So, why waste the taxpayer's money in asking anyone?
This may make USG purchasing decisions easy, but the expulsion of Dan Geer was rather ham fisted, and will haunt Microsoft in the private sector for some time.
IBM used to pull this trick, back in the good old days of pre-net (I'm talking the 70's and 80's here...). Then, if you went up against the IBM purchasing decision, you knew your job was on the line.
Everyone in the industry knew what "nobody ever got fired..." meant. It didn't only mean that your job was safe if you bought IBM, it also meant that you could be receiving your pink slip for challenging the decision.
Thankfully, those days are long gone, and IBM has real competitors to protect each and every purchasing IT decision maker against the manipulations of a dominating provider. Yet, Microsoft seems to have blundered into this situation without realising the dangers. It has handed its competitors a no-risk sales argument, as they will never let anyone forget that Microsoft wields immense power - distorting, damaging, and blind power that can do as much harm to the purchaser as it can do good.
Not to mention AtStake, who will probably sink into the mire of the old party game: "remember AtStake?" What on earth are people going to say when they hear that AtStake has been hired to work on securing the next generation of Aegis cruisers or the new Total Awareness Solution?
"Oh, we'll be safe until someone gets fired..."
"At least anyone who's fired can get a job with Al Qaeda..."
The good news is that if they do go down, at least the employees will have the added benefit of being fired by @Stake.
I wonder how long it will be before people have forgotten the true pedigree of the phrase "nobody ever got fired for buying IBM?"
Why is there no layer for Security in FC?
(Actually, I get this from time to time: "Why no X?!?" It takes a while to develop the answer for each one. This one is about security, but I've also been asked about Law and Economics.)
Security is all-pervasive. It is not an add-on: it is a requirement built in from the beginning, and it pervades all modules.
Thus, it is not a layer. It applies to all of them, although Security will be more present in the lower layers.
Well, perhaps that is not true. It could be said that Security divides into internal and external threats, and the lower layers are more normally about external threats. The Accounting and Governance layers are more normally concerned with the insider threat.
Superficially, security appears to be lower in the stack. But, a true security person recognises that an internal threat is more damning, more dangerous, and more frequent in reality than an external threat. In fact, real security work is often more about insider threats than outsider threats.
So, it's not even possible to be vaguely narrow about Security. Even the upper layers, Finance and Value, are critical, as you can't do much security until you understand the application you are protecting and its concomitant values.