Readers will know we first published the account of "Man in the Browser" by Philipp Güring way back when, and followed it up with news that the way forward was dual-channel transaction signing. In short, this meant the bank sending an SMS to your handy mobile phone with the transaction details, and a check code to enter if you wanted the transaction to go through.
On the face of it, pretty secure. But at the back of our minds, we knew that this was just an increase in difficulty: a crook could seek to control both channels. And so it comes to pass:
In the days leading up to the fraud being committed, [Craig] had received two strange phone calls. One came through to his office two-to-three days earlier, claiming to be a representative of the Australian Tax Office, asking if he worked at the company. Another went through to his home number when he was at work. The caller claimed to be a client seeking his mobile phone number for an urgent job; his daughter gave out the number without hesitation.
The fraudsters used this information to make a call to Craig’s mobile phone provider, Vodafone Australia, asking for his phone number to be “ported” to a new device.
As the port request was processed, the criminals sent an SMS to Craig purporting to be from Vodafone. The message said that Vodafone was experiencing network difficulties and that he would likely experience problems with reception for the next 24 hours. This bought the criminals time to commit the fraud.
The unintended consequence of the phone being used for transaction signing is that the phone is now worth maybe as much as the fraud you can pull off. Assuming the crooks have already cracked the password for the bank account (something probably picked up on a market for pennies), the crooks are now ready to spend substantial amounts of time to crack the phone. In this case:
Within 30 minutes of the port being completed, and with a verification code in hand, the attackers were spending the $45,000 at an electronics retailer.
Thankfully, the abnormally large transaction raised a red flag within the fraud unit of the Commonwealth Bank before any more damage could be done. The team tried – unsuccessfully – to call Craig on his mobile. After several attempts to contact him, Craig’s bank account was frozen. The fraud unit eventually reached him on a landline.
So what happens now that the crooks walked away with $45k of juicy electronics (probably convertible to cash at 50-70% off face value over eBay)?
As is standard practice for online banking fraud in Australia, the Commonwealth Bank has absorbed the hit for its customer and put $45,000 back into Craig's account.
A NSW Police detective contacted Craig on September 15 to ensure the bank had followed through with its promise to reinstate the $45,000. With this condition satisfied, the case was suspended on September 29 pending the bank asking the police to proceed with the matter any further.
One local police investigator told SC that in his long career, a bank has only asked for a suspended online fraud case to be investigated once. The vast majority of cases remain suspended. Further, SC Magazine was told that the police would, in any case, have to weigh up whether it has the adequate resources to investigate frauds involving such small amounts of money.
No attempt was made at a local police level to escalate the Craig matter to the NSW Police Fraud and Cybercrime squad, for the same reasons.
In a paper I wrote in 2008, I stated that for some value below X, police won't lift a finger. The Prosecutor has too much important work to do! What we have here is a very definite floor below which Internet systems that transmit and protect value are unable to rely on external resources such as the law. Reading more:
But the Commonwealth Bank claims it has forwarded evidence to the NSW and Federal Police forces that could have been used to prosecute the offenders.
The bank’s fraud squad – which had identified the suspect transactions within minutes of the fraud being committed - was able to track down where the criminals spent the stolen money.
A spokesman for the bank said it “dealt with both Federal and State (NSW) Police regarding the incident” and that “both authorities were advised on the availability of CCTV footage” of the offenders spending their ill-gotten gains.
“The Bank was advised by one of the authorities that the offender had left the country – reducing the likelihood of further action by that authority,” the spokesperson said.
This number goes up dramatically once we cross a border. In that paper I suggested $25k; here we have a reported number of $45k.
Why is that important? Because, some systems have implicit guarantees that go like "we do blah and blah and blah, and then you go to the police and all your problems are solved!" Sorry, not if it is too small, where small is surprisingly large. Any such system that handwaves you to the police without clearly indicating the floor of interest ... is probably worthless.
So when would you trust a system that backstopped to the police? I'll stick my neck out and say, if it is beyond your borders, and you're risking >> $100k, then you might get some help. Otherwise, don't bet your money on it.
Two Microsoft researchers have published a paper pouring scorn on claims cyber crime causes massive losses in America. They say it’s just too rare for anyone to be able to calculate such a figure.
Dinei Florencio and Cormac Herley argue that samples used in the alarming research we get to hear about tend to contain a few victims who say they lost a lot of money. The researchers then extrapolate that to the rest of the population, which gives a big total loss estimate – in one case of a trillion dollars per year.
But if these victims are unrepresentative of the population, or exaggerate their losses, they can really skew the results. Florencio and Herley point out that one person or company claiming a $50,000 loss in a sample of 1,000 would, when extrapolated, produce a $10 billion loss for America as a whole. So if that loss is not representative of the pattern across the whole country, your total could be $10 billion too high.
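The extrapolation arithmetic Florencio and Herley criticise can be sketched in a few lines. The figures here are the ones quoted above: a survey of 1,000 respondents with one $50,000 claim, scaled to an assumed population of roughly 200 million adults.

```python
# One outlier in a small survey, extrapolated to the whole population.
sample_size = 1_000
reported_losses = [50_000] + [0] * (sample_size - 1)  # one big claim, 999 zeros
population = 200_000_000                              # assumed US adult population

mean_loss = sum(reported_losses) / sample_size        # $50 per respondent
national_estimate = mean_loss * population            # $10 billion

print(f"${national_estimate:,.0f}")                   # $10,000,000,000
```

The whole national figure rests on a single respondent, which is exactly the skew the paper complains about.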
Having read the paper, the above is about right. And sufficient description, as the paper goes on for pages and pages making the same point.
Now, I've also been skeptical of the phishing surveys. So, for a long time, I've just stuck to the number of "about a billion a year." And waited for someone to challenge me on it :) Most of the surveys seemed to head in that direction, and what we would hope for would be more useful numbers.
So far, Florencio and Herley aren't providing those numbers. The closest I've seen is the FBI-sponsored report that derives from reported fraud rather than surveys. Which seems to plumb in the direction of 10 billion a year for all identity-related consumer frauds, and a sort of handwavy claim that there is a ratio of 10:1 between all fraud and Internet-related fraud.
I wouldn't be surprised if the number was really 100 million. But that's still a big number. It's still bigger than the income of Mozilla, which is the 2nd browser by numbers. It's still bigger than the budget of the Anti-Phishing Working Group, an industry-sponsored private thinktank. And CABForum, another industry-only group.
So who benefits from inflated figures? The media, because of the scare stories, and the public and private security organisations and businesses who provide cyber security. The above parliamentary report indicated that in 2009 Australian businesses spent between $1.37 and $1.95 billion in computer security measures. So on the report’s figures, cyber crime produces far more income for those fighting it than those committing it.
Good question from the SMH. The answer is that it isn't in any player's interest to provide better figures. If so (and we can see support from the Silver Bullets structure) what is Florencio and Herley's intent in popping the balloon? They may be academically correct in trying to deflate the security market's obsession with measurable numbers, but without some harder numbers of their own, one wonders what's the point?
What is the real number? Florencio and Herley leave us dangling at that point. Are they setting up to provide those figures one day? Without that forthcoming, I fear the paper is destined to be just more media fodder, as shown in its salacious title. IOW, pointless.
Hopefully numbers are coming. In an industry steeped in Numerology and Silver Bullets, facts and hard numbers are important. Until then, your rough number is as good as mine -- a billion.
More grist for the mill -- where are we on the security debate? Here's a data point.
In May 2009, PATCO, a construction company based in Maine, had its account taken over by cyberthieves, after malware hijacked online banking log-in and password credentials for the commercial account PATCO held with Ocean Bank. ....
There are two ways to look at this: the contractual view, and the responsible party view. The first view holds that contracts describe the arrangement, and parties govern themselves. The second holds that the more responsible party is required to be <ahem> more responsible. PATCO decided to ask for the second:
A magistrate has recommended that a U.S. District Court in Maine deny a motion for a jury trial in an ACH fraud case filed by a commercial customer against its former bank. According to the order, which must still be reviewed by the presiding judge, the bank fulfilled its contractual obligations for security and authentication through its requirement for log-in and password credentials. ....
At issue for PATCO is whether banks should be held responsible when commercial accounts, like PATCO's, are drained because of fraudulent ACH and wire transfers approved by the bank. How much security should banks and credit unions reasonably be required to apply to the commercial accounts they manage?
"Obviously, the major issue is the banks are saying this is the depositors' problem," Patterson says, "but the folks that are losing money through ACH fraud don't have enough sophistication to stop this."
David Navetta, an attorney who specializes in IT security and privacy, says the magistrate's recommendation, if accepted by the judge, could set an interesting legal precedent about the security banks are expected to provide. And unless PATCO disputes the order, Navetta says it's unlikely the judge will overrule the magistrate's findings. PATCO has between 14 and 21 days to respond.
"Many security law commentators, myself included, have long held that *reasonable security does not mean bullet-proof security*, and that companies need not be at the cutting edge of security to avoid liability," Navetta says. "The court explicitly recognizes this concept, and I think that is a good thing: For once, the law and the security world agree on a key concept."
My emphasis added, and it is an important point: security doesn't mean absolute security, it means reasonable security. Which, from the very meaning of the word, means stopping when the costs outweigh the benefits.
But that is not the point that is really addressed. The question is (a) how we determine what is acceptable (not reasonable), and (b) if the customer loses out when acceptable wasn't reasonable, is there any come-back?
In the disposition, the court notes that Ocean Bank's security could have been better. "It is apparent, in the light of hindsight, that the Bank's security procedures in May 2009 were not optimal," the order states. "The Bank would have more effectively harnessed the power of its risk- profiling system if it had conducted manual reviews in response to red flag information instead of merely causing the system to trigger challenge questions."
But since *PATCO agreed to the bank's security methods when it signed the contract*, the court suggests then that PATCO considered the bank's methods to be reasonable, Navetta says. The law also does not require banks to implement the "best" security measures when it comes to protecting commercial accounts, he adds.
So, we can conclude that "reasonable" to the bank meant putting in place risk-profiling systems. Which it then bungled (allegedly). However, the standard of security was as agreed in the contract, *reasonable or not*.
That is, *reasonable security* doesn't enter into it. More on that, as the observers try and mold this into a "best practices" view:
"Patco in effect demands that Ocean Bank have adopted the best security procedures then available," the order states. "As the Bank observes, that is not the law."
(Where it says "best" read "best practices" which is lowest common denominator, a rather different thing to best. In particular, the case is talking about SecureId tokens and the like.)
Patterson argues that Ocean Bank was not complying with the Federal Financial Institutions Examination Council's requirement for multifactor authentication when it relied solely on log-in and password credentials to verify transactions. Navetta agrees, but the court in this order does not.
"The court took a fairly literal approach to its analysis and bought the bank's argument that the scheme being used was multifactor, as described in the [FFIEC] guidance," Navetta says. "The analysis on what constitutes multifactor and whether some multifactor schemes [out of band; physical token] are better than others was discussed, and, to some degree, the court acknowledged that the bank's security could have been better. Even so, it was technically multifactor, as described in the FFIEC guidance, in the court's opinion, and "the best" was not necessary."
Navetta says the court's view of multifactor does not jibe with common industry understanding. Most industry experts, he says, would not consider Ocean Bank's authentication practices in 2009 to be true multifactor. "Obviously, the 'something you have' factor did not fully work if hackers were able to remotely log into the bank using their own computer," he says. "I think that PATCO's argument was the additional factors were meaningless since the challenge question was always asked anyway, and apparently answering it correctly worked even if one of the factors failed. In other words, it appears that PATCO was arguing that the net result of the other two factors failing was going back to a single factor."
This problem has been known for a long time. When the "best practices" approach is used, as in this FFIEC example, there is a list of things you do. You do them, and you're done. You are encouraged to (a) not do any better, and (b) cheat. The trick employed above, to interpret the term "multi-factor" in a literal fashion, rather than using the security industry's customary (and more expensive) definition, has been known for a long long time.
It's all part of the "best practices" approach, and the court may have been wise to avoid further endorsing it. There is now more competition in security practices, says this court, and you'll find it in your contract.
Name collisions are such fun! The apocryphal story of Spartacus, the slave-turned-revolutionary, has him being saved by all his slave-warriors standing up saying, "I am Spartacus!" The poor Roman leaders had little grip on the situation, as they couldn't recall any biometrics for the guy.
A short time ago, I wandered into an office to get some service, and a nice lady grabbed me at the door, entered my name and eventually queued me to a service desk. The poor woman at the computer couldn't work it out though, as having entered my first name into the computer, all the details were wrong. It was finally resolved when an attendant two desks away asked why she'd opened up his client's file ... who had the same first name. Apparently, my name is subject to TLCs, three-letter collisions...
Luckily, that was easy to solve. My 4-letter name reduces to around 2 people in our field. Hopefully, this is all resolved in a seminal paper on the subject:
Global Names Considered Harmful by Mark Miller, Mark Miller, and Mark Miller
As reported by Bill Frantz, that's the paper.
Meanwhile, if you want to purchase one of these fabulous global names, here are some prices spotted a while back. As reported by Dave Birch somewhere (I only copied this one line before the link went south):
Authorities said Lominy charged $1,600 to $2,000 for a state driver's license...
And (this time with the link still working):
The gangs are setting up fake-ID factories using printers bought at high street shops. The Met has shut at least 20 “factories” in the last 18 months and believes more than 30,000 fake identities are in circulation.
Police examined 12,000 of them and established they were behind a racket worth £14 million. One £750 printer was withdrawn from sale at PC World after detectives revealed it could produce replicas of the proposed new ID card and EU driving licences.
Mr Mawer added: “There are people with dual identities, one real and one for committing crime.” He revealed that specialist printers capable of making convincing ID documents such as EU driving licences could be bought for £750, though others cost £5,000.
I'm currently trying to get my drivers licence back, and so far it has cost $465, with more costs to come. At some point, the above deal looks good, and they throw in a free printer! A steal :)
Speaking of "Identity" I also watched a British drama of that name tonight. In true form, it's "gritty police drama," which is to say, a copy of a dozen other shows which make a habit of implausible scripts woven around too many cross-overs.
In a paper, *Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL*, by Christopher Soghoian and Sid Stamm, there is a reasonably good layout of the problem that browsers face in delivering their "one-model-suits-all" security model. It is more or less what we've understood all these years: by accepting an entire root list of hundreds of CAs, there is no barrier to any one of them going a little rogue.
Of course, it is easy to raise the hypothetical of the rogue CA, and even to show compelling evidence of business models (they cover much the same claims with a CA that also works in the lawful intercept business that was covered here in FC many years ago). Beyond theoretical or probable evidence, it seems the authors have stumbled on some evidence that it is happening:
The company’s CEO, Victor Oppelman confirmed, in a conversation with the author at the company’s booth, the claims made in their marketing materials: That government customers have compelled CAs into issuing certificates for use in surveillance operations. While Mr Oppelman would not reveal which governments have purchased the 5-series device, he did confirm that it has been sold both domestically and to foreign customers.
(my emphasis.) This has been a lurking problem underlying all CAs since the beginning. The flip side of the trusted-third-party concept ("TTP") is the centralised-vulnerability-party or "CVP". That is, you may have been told you "trust" your TTP, but in reality, you are totally vulnerable to it. E.g., from the famous Blackberry "official spyware" case:
Nevertheless, hundreds of millions of people around the world, most of whom have never heard of Etisalat, unknowingly depend upon a company that has intentionally delivered spyware to its own paying customers, to protect their own communications security.
Which becomes worse when the browsers insist, not without good reason, that the root list is hidden from the consumer. The problem that occurs here is that the compelled CA problem multiplies to the square of the number of roots: if a CA in (say) Ecuador is compelled to deliver a rogue cert, then that can be used against a CA in Korea, and indeed all the other CAs. A brief examination of the ways in which CAs work, and browsers interact with CAs, leads one to the unfortunate conclusion that nobody in the CAs, and nobody in the browsers, can do a darn thing about it.
So it then falls to a question of statistics: at what point do we believe that there are so many CAs in there, that the chance of getting away with a little interception is too enticing? Square law says that the chances are say 100 CAs squared, or 10,000 times the chance of any one intercept. As we've reached that number, this indicates that the temptation to resist intercept is good for all except 0.01% of circumstances. OK, pretty scratchy maths, but it does indicate that the temptation is a small but not infinitesimal number. A risk exists, in words, and in numbers.
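The square-law argument above can be put in a few lines of arithmetic. The per-pair intercept probability here is a pure assumption for illustration; the point is that any one of the N root CAs can issue a rogue certificate against the customers of any other, so the exposure scales with N × N pairs, not N.

```python
# Back-of-envelope sketch of the "square law" of compelled-CA risk.
n_cas = 100               # roots in a typical browser list
p_single = 1e-6           # assumed chance of any one compelled intercept (illustrative)

attack_pairs = n_cas * n_cas               # 10,000 issuer/victim combinations
aggregate_chance = attack_pairs * p_single # ~10,000x the single-pair risk

print(attack_pairs, aggregate_chance)
```

Scratchy maths, as the text says, but it shows why adding roots multiplies rather than merely adds exposure.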
One CA can hide amongst the crowd, but there is a little bit of a fix to open up that crowd. This fix is to simply show the user the CA brand, to put faces on the crowd. Think of the above, and while it doesn't solve the underlying weakness of the CVP, it does mean that the mathematics of squared vulnerability collapses. Once a user sees their CA has changed, or has a chance of seeing it, hiding amongst the crowd of CAs is no longer as easy.
Why then do browsers resist this fix? There is one good reason, which is that consumers really don't care and don't want to care. In more particular terms, they do not want to be bothered by security models, and the security displays in the past have never worked out. Gerv puts it this way in comments:
Security UI comes at a cost - a cost in complexity of UI and of message, and in potential user confusion. We should only present users with UI which enables them to make meaningful decisions based on information they have.
They love Skype, which gives them everything they need without asking them anything. Which therefore should be reasonable enough motive to follow those lessons, but the context is different. Skype is in the chat & voice market, and the security model it has chosen is well-excessive to needs there. Browsing on the other hand is in the credit-card shopping and Internet online banking market, and the security model imposed by the mid 1990s evolution of uncontrollable forces has now broken before the onslaught of phishing & friends.
In other words, for browsing, the writing is on the wall. Why then don't they move? In a perceptive footnote, the authors also ponder this conundrum:
3. The browser vendors wield considerable theoretical power over each CA. Any CA no longer trusted by the major browsers will have an impossible time attracting or retaining clients, as visitors to those clients’ websites will be greeted by a scary browser warning each time they attempt to establish a secure connection. Nevertheless, the browser vendors appear loathe to actually drop CAs that engage in inappropriate behavior — a rather lengthy list of bad CA practices that have not resulted in the CAs being dropped by one browser vendor can be seen in .
I have observed this for a long time now, predicting phishing until it became the flood of fraud. The answer is, to my mind, a complicated one which I can only paraphrase.
For Mozilla, the reason is a simple lack of security capability at the *architectural* and *governance* levels. Indeed, it should be noticed that this lack of capability is their policy, as they deliberately and explicitly outsource big security questions to others (known as the "standards groups" such as IETF's RFC committees). As they have little of the capability, they aren't in a good position to use the power, no matter whether they would want to or not. So, it only needs a mildly argumentative approach on behalf of the others, and Mozilla is restrained from its apparent power.
What then of Microsoft? Well, they certainly have the capability, but they have other fish to fry. They aren't fussed about the power because it doesn't bring them anything of use. As a corporation, they are strictly interested in shareholders' profits (by law and by custom), and as nobody can show them a bottom-line improvement from the CA & cert business, no interest is generated. And without that interest, it is practically impossible to get the various many groups within Microsoft to move.
Unlike Mozilla, my view of Microsoft is much more "external", based on many observations that have never been confirmed internally. However it seems to fit; all of their security work has been directed to market interests. Hence for example their work in identity & authentication (.net, infocard, etc) was all directed at creating the platform for capturing the future market.
What is odd is that all CAs agree that they want their logo on their browser real estate. Big and small. So one would think that there was a unified approach to this, and it would eventually win the day; the browser wins for advancing security, the CAs win because their brand investments now make sense. The consumer wins for both reasons. Indeed, early recommendations from the CABForum, a closed group of CAs and browsers, had these fixes in there.
But these ideas keep running up against resistance, and none of the resistance makes any sense. And that is probably the best way to think of it: the browsers don't have a logical model for where to go for security, so anything leaps the bar when the level is set to zero.
Which all leads to a new group of people trying to solve the problem. The authors present their model as this:
The Firefox browser already retains history data for all visited websites. We have simply modified the browser to cause it to retain slightly more information. Thus, for each new SSL protected website that the user visits, a Certlock enabled browser also caches the following additional certificate information:
A hash of the certificate.
The country of the issuing CA.
The name of the CA.
The country of the website.
The name of the website.
The entire chain of trust up to the root CA.
When a user re-visits a SSL protected website, Certlock first calculates the hash of the site’s certificate and compares it to the stored hash from previous visits. If it hasn’t changed, the page is loaded without warning. If the certificate has changed, the CAs that issued the old and new certificates are compared. If the CAs are the same, or from the same country, the page is loaded without any warning. If, on the other hand, the CAs’ countries differ, then the user will see a warning (See Figure 3).
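The decision logic described above can be sketched in a few lines. The cache structure and function names here are illustrative, not from the paper's code, and the check is simplified to compare issuing-CA countries only.

```python
# Sketch of the CertLock-style check: warn only when a site's certificate
# changes AND the new issuing CA is from a different country.
import hashlib

cache = {}  # hostname -> (cert_hash, issuing_ca_country)

def check_site(hostname: str, cert_der: bytes, ca_country: str) -> str:
    cert_hash = hashlib.sha256(cert_der).hexdigest()
    cached = cache.get(hostname)
    if cached is None:
        cache[hostname] = (cert_hash, ca_country)
        return "load"                      # first visit: cache and proceed
    old_hash, old_country = cached
    if cert_hash == old_hash:
        return "load"                      # unchanged certificate
    if ca_country == old_country:
        cache[hostname] = (cert_hash, ca_country)
        return "load"                      # new cert, same-country CA: accept
    return "warn"                          # CA country changed: warn the user
```

Note the design choice: a routine certificate rollover within the same jurisdiction stays silent, so the warning only fires on the cross-border substitution the paper worries about.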
This isn't new. The authors credit recent work, but no further back than a year or two. Which I find sad because the important work done by TrustBar and Petnames is pretty much forgotten.
But it is encouraging that the security models are battling it out, because it gets people thinking, and challenging their assumptions. Only actual produced code, and garnered market share, is likely to change the security benefits for users. So while we can criticise the country approach (it assumes a sort of magical touch of law within the countries concerned that is already assumed not to exist, by dint of us being here in the first place), the country "proxy" is much better than nothing, and it gets us closer to the real information: the CA.
From a market for security pov, it is an interesting period. The first attempts around 2004-2006 in this area failed. This time, the resurgence seems to have a little more steam, and possibly now is a better time. In 2004-2006 the threat was seen as more or less theoretical by the hoi polloi. Now however we've got governments interested, consumers sick of it, and the entire military-industrial complex obsessed with it (both in participating and fighting). So perhaps the newcomers can ride this wave of FUD in, where previous attempts drowned far from the shore.
Let's talk about why we want Identity. There appear to be two popular reasons why Identity is useful. One is as a handle for the customer experience, so that our dear user can return day after day and maintain her context.
The other is as a vector of punishment. If something goes wrong, we can punish our user, no longer dear.
It's a sad indictment of security, but it does seem as if the state of the security nation is that we cannot design, build and roll-out secure and safe systems. Abuse is likely, even certain, sometimes relished: it is almost a business requirement for a system of value to prove itself by having the value stolen. Following the inevitable security disaster, the business strategy switches smoothly to seeking who to blame, dumping the liability and covering up the dirt.
Users have a very different perspective. Users are well aware of the upsides and downsides, they know well: Identity is for good and for bad.
Indeed, one of the persistent fears of users is that an identity system will be used to hurt them. Steal their soul, breach their privacy, hold them to unreasonable terms, ultimately hunt them down and hurt them, these are some of the thoughts that invasive systems bring to the mind of our dear user.
This is the bad side of identity: the individual and the system are "in dispute," it's man against the machine, Jane against Justice. Unlike the usage case of "identity-as-a-handle," which seems to be relatively well developed in theory and documentation, the "identity-as-punishment" metaphor seems woefully inadequate. It is little talked about, it is the domain of lawyers and investigators, police and journalists. It's not the domain of technologists. Outside the odd and forgettable area of law, disputes are a non-subject, and not covered at all where I believe it is required the most: marketing, design, systems building, customer relations, costs analysis.
Indeed, disputes are taboo for any business.
Yet, this is unsustainable. I like to think of good Internet (or similar) systems as an inverted pyramid. On the top, the mesa, is the place where users build their value. It needs to be flat and stable. Efficient, and able to expand horizontally without costs. Hopefully it won't shift around a lot.
Dig slightly down, and we find the dirty business of user support. Here, the business faces the death of a thousand tiny support cuts. Each trivial, cheap and ignorable, except in the aggregate. Below them, deeper down, are the 100 interesting support issues. Deeper still, the 10 or so really serious red alerts. Of which one becomes a real dispute.
The robustness of the pyramid is based on the relationship between the dispute at the bottom, the support activity in the middle, and the top, as it expands horizontally for business and for profit.
Your growth potential is teetering on this one thing: the dispute at the apex of the pyramid. And, if you are interested in privacy, this is the front line, for a perverse reason: this is where it is most lost. Support and full-blown disputes are the front line of privacy and security. Events in this area are destroyers of trust, they are the bane of marketing, the nightmare of PR.
Which brings up some interesting questions. If support is such a destroyer of trust, why is it an afterthought in so many systems? If the dispute is such a business disaster, why is resolution not covered at all? Or hidden, taboo? Or, why do businesses think that their dispute resolution process starts with their customers' identity handles? And ends with the lawyers?
Here's a thought: If badly-handled support and dispute events are leaks of privacy, destroyers of trust, maybe well-handled events are builders of trust? Preservers of privacy?
If that is plausible, if it is possible that good support and good dispute handling build good trust ... maybe a business objective is to shift the process: support designed up front, disputes surfaced, all of it open? A mature and trusted provider might say: we love our disputes, we promote them. Come one, come all. That's how we show we care!
An immature and untrusted provider will say: we have no disputes, we don't need them. We ask you, the user, to believe in our promise.
The principle that the business hums along on top of an inverted pyramid, that rests ultimately on a small powerful but brittle apex, is likely to cause some scratching of technophiliac heads. So let me close the circle, and bring it back to the Identity topic.
If you do this, if you design the dispute mechanism as a fully cross-discipline business process for the benefit of all, not only will trust go positive and privacy become aligned, you will get an extra bonus. A carefully constructed dispute resolution method frees up the identity system, as the latter no longer has to do double duty as the user handle *and* the facade of punishment. Your identity system can simply concentrate on the user's experience. The dark clouds of fear disappear, and the technology has a chance to work how the techies said it would.
We can pretty much de-link the entire identity-as-handles from the identity-as-punishment concept. Doing that removes the fear from the user's mind, because she can now analyse the dispute mechanism on its merits. It also means that the Identity system can be written only for its technical and usability merits, something that we always wanted to do but never could, quite.
(This is the rough transcript of a talk I gave at the Identity & Privacy conference in London a couple of weeks ago. The concept was first introduced at LexCybernetoria, initially tried by WebMoney, partly explored in digital gold currencies, and finally built in CAcert's Arbitration project.)
A canonical question in cryptography was about how much money you could put over a digital signature, and a proposed attack would often end, "and then Granny loses her house!" It might be seen as a sort of reminder that the crypto only went so far, and needed to be backed by institutional support for a lot of things.
And now comes Darren with news that Granny is losing her house, proverbially at least. In a somewhat imprecise article (written by a lawyer?) in the Times:
... The ingenuity of the heists carried out ranges from “selling” property they do not own to “buying” property at inflated valuations and making off with the difference.
Critical to many of these scams is the use of stolen identities. According to many solicitors specialising in the field, the key context for the problem was the dash into deregulation and e-commerce earlier this decade.
“There was a view throughout the profession that the abolition of documents of title and reliance upon electronic records would contribute to fraud. And so it has proved,” Samson says. “All this information is open to view through the internet so a fraudster can see exactly who owns a property, assume his or her identity and then sell it.”
While this may sound absurd for owner-occupied homes, it is all too easy, for example, with vacant properties. “What’s more the rightful owner won’t even know that it has happened,” he adds.
So the basic fraud appears to be: find a property that is not cared for by its owner. Assume the owner's identity. Sell it. Or,
To put the hat on what seems a complete botch-up by lawmakers and regulators, the effect of the Land Registration Act 2002 was that the fraudulent purchasers are given a legal title to their “purchase”. “If the fraudster succeeds in having title registered in his name he can mortgage the property,” Samson says. “The true owner may be able to have the transfer to the fraudster reversed by rectification but he will still take the property subject to the mortgage.”
buy it! Now, within that article, there is no shortage of solicitors saying "we told you so!" But the real systemic causes of this fraud will need more digging. We can guess what the first cause is: identity theft. That is, a high level of dependency on the fictitious notion of identity as a protector of security. Yes, that will always get you, and it will likely take another decade before the British populace lose their current faith in identity.
The second cause, however, is more subtle. As Eliana Morandi pointed out in a 2007 article, "The role of the notary in real estate conveyancing" (Digital Evidence and Electronic Signature Law Review, 2007), problems like this do not happen in continental Europe. What's the difference? Whereas the English common law system requires each party to have independent representation, the continental system requires one party, the notary, to secure the entire deed for both buyer and seller, and to take full responsibility, so issues such as this are solved easily:
In cases where, for example, a lender whose mortgage is being paid off has no lawyer, the conveyancer may face claims for having not fully observed the Land Registry’s practice guide. And instead of the Land Registry paying compensation, it will look to the solicitors to reimburse the victims.
Warren Gordon, of Olswang, who sits on the Law Society’s conveyancing and land law committee, protests that it is unrealistic to expect solicitors to do a comprehensive check on someone who is not their client. “It’s unfair to put all the risk on the solicitor, including asking him or her to sign off on the identity of someone he or she does not act for,” he says.
Meanwhile, Paul Marsh, president of the Law Society, points the finger instead at the bankers who are providing fraudsters with the funds to perpetrate their dodgy deals. “At the top end we see vast bonuses being paid to bankers at board level for what turn out to be disastrous investments, while at the grass roots local bankers are under pressure to make loans — to sell money — without even the most basic procedures in place to prevent fraud,” he says. “The banks are refusing to take responsibility for this because they know that they can pin it on the solicitors.”
The bottom line of course is which system is more efficient in the long run. The European Notary may charge more money for the perfect transaction. If the English solicitors can undercut that price, and reduce the fraud such that the result is still better, it is a good deal. Which is it? The abstract to Morandi's article gives a clue:
The role of the notary in real estate conveyancing
Eliana Morandi sets out the role of the civil law notary in the context of real estate conveyancing, illustrating how more effective and less costly it is when undertaken by civil law notaries.
(Unfortunately my copy has conveyed itself into hiding.) If fraud rises in Britain, we will need changes. Now, we've seen with the rise of identity fraud in the USA that there has been zero incentive for the players to change the way identity is used, so we can predict that the Brits will not change the registry practice. Also, the likelihood of the solicitors giving up their lucrative representational practice is pretty low.
However the complicated notarial-versus-solicitorial-versus-identity-versus-registry war pans out in the long run, it seems that solicitors are going to have to bear increased responsibility to check the identity of their counterparty. Perhaps they should pop into the Identity and Privacy forum, 14th-15th May, over in London's Charing Cross Hotel? Probably a bargain if it saves them from granny's wrath.
Chris Walsh spotted this:
Details of how police in the Irish Republic finally caught up with the country's most reckless driver have emerged, the Irish Times reports.
He had been wanted from counties Cork to Cavan after racking up scores of speeding tickets and parking fines.
However, each time the serial offender was stopped he managed to evade justice by giving a different address.
But then his cover was blown.
It was discovered that the man every member of the Irish police's rank and file had been looking for - a Mr Prawo Jazdy - wasn't exactly the sort of prized villain whose apprehension leads to an officer winning an award.
In fact he wasn't even human.
"Prawo Jazdy is actually the Polish for driving licence and not the first and surname on the licence," read a letter from June 2007 from an officer working within the Garda's traffic division.
"Having noticed this, I decided to check and see how many times officers have made this mistake.
"It is quite embarrassing to see that the system has created Prawo Jazdy as a person with over 50 identities."