June 23, 2004

Phishing II - Front Page News

As well as the FT review, and in a further sign that phishing is on track to becoming a serious threat to the Internet, Google yesterday carried phishing on its news front page. 37 articles in one day didn't make it the top story, but all signs point to increasing paranoia. If you "search news" you get about 932 stories.

It's mainstream news. Which is in itself an indictment of the failure of Internet security, a field that continues to reject phishing as a threat.

Let's recap. There are roughly three lines of potential defense: the user's mailer, the user's browser, and the user herself.

(We can pretty much rule out the server, because it's bypassed by this MITM; some joy might be had with IP number tracking, but that can be bypassed too and may be more trouble than it's worth.... We can also pretty much rule out authentication, as scammers who steal hundreds of thousands have no trouble stealing keys. Also in the bit bucket are the various strong URL schemes, which would help, but only once they reached critical mass. No hope there.)

In turn, here's what they can do against phishing:

The user's mailer can only do so much here - its job is to take emails, and that's what it does. It has no way of knowing that an email is a phishing attack, especially if the email carefully copies a real one. Bayesian filters might help, but they are also testable, and they can be beaten by a committed attacker - which a phisher is. Spam tech is not the right way of thinking, because if one spam slips through, we don't mind. In contrast, if a phish slips through, we care a lot (especially if we believe the 5% hit rate).
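To see why that's a weak line of defense, here's a toy sketch of the sort of word-based Bayesian scoring a mailer might apply. The training snippets, the smoothing and the wording are all invented for illustration - the point is that a phish which copies a real bank mail word-for-word scores exactly like the real thing.

```python
# Toy naive-Bayes phish scorer -- illustrative only, not a real filter.
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def train(phish_mails, good_mails):
    phish, good = Counter(), Counter()
    for m in phish_mails:
        phish.update(tokens(m))
    for m in good_mails:
        good.update(tokens(m))
    return phish, good

def phish_probability(mail, phish, good):
    # Combine per-word evidence in log space, with +1 smoothing.
    score = 0.0
    for w in set(tokens(mail)):
        p = (phish[w] + 1) / (sum(phish.values()) + 2)
        g = (good[w] + 1) / (sum(good.values()) + 2)
        score += math.log(p) - math.log(g)
    return 1 / (1 + math.exp(-score))   # squash to 0..1

phish, good = train(["verify your account suspended click here"],
                    ["your monthly statement is ready"])
print(phish_probability("please verify your account", phish, good))
```

A committed phisher can feed candidate mails through exactly this kind of filter until one passes, which is why spam-style defenses don't settle the matter.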

Likewise, the user is not so savvy. Most users are going to have trouble telling the difference between a real email and a fake one, or a real site and a fake one. It doesn't help that even real emails and sites have plenty of subtle problems that cause confusion. So I'd suggest that relying on the user to deal with this is a loser. The more checking the better, and the more knowledge the better, but this isn't going to address the problem.

This leaves the browser. Luckily, in any important relationship, the browser knows, or can know, some things about that relationship: how many times the site has been visited, what was done there, and so on. All the browser has to do is track a little more of that information and make the user aware of it.
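To make that concrete, here is a rough sketch of the bookkeeping involved: a small cache keyed on host and certificate fingerprint, bumped on every secure connection. The class name, storage format and wording are all mine, invented for illustration - no browser works exactly this way.

```python
# Sketch of the per-site relationship cache a browser could keep.
import hashlib
import json
import time

class BrandingBox:
    def __init__(self, path="site_history.json"):
        self.path = path
        try:
            with open(path) as f:
                self.sites = json.load(f)
        except FileNotFoundError:
            self.sites = {}

    def record_visit(self, host, cert_der):
        # Key on host + certificate fingerprint, so a different cert is a different site.
        fp = hashlib.sha256(cert_der).hexdigest()
        key = f"{host}|{fp}"
        entry = self.sites.setdefault(key, {"visits": 0, "first_seen": time.time()})
        entry["visits"] += 1
        entry["last_seen"] = time.time()
        with open(self.path, "w") as f:
            json.dump(self.sites, f)
        return entry

    def chrome_text(self, host, cert_der):
        # What the protected "branding box" would display to the user.
        fp = hashlib.sha256(cert_der).hexdigest()
        entry = self.sites.get(f"{host}|{fp}")
        if entry is None:
            return f"{host}: never seen before -- treat with suspicion"
        first = time.strftime("%Y-%m-%d", time.localtime(entry["first_seen"]))
        return f"{host}: visited {entry['visits']} times since {first}"
```

A phishing site presenting a different certificate lands on a different key and so shows up as never seen before - which is exactly the signal the user needs.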

But to do that, the browsers must change. They've got to change in 3 of these 4 ways:

1. cache certificate use statistics and other information, on a certificate and URL basis. Some browsers already cache some info - this is no big deal.
2. display the vital statistics of the connection in a chrome (protected) area - especially the number of visits, as sketched above. This we call the branding box. It represents a big change to the browser security model, but, for various reasons, all users of browsers stand to benefit.
3. accept self-signed certs as *normal* security, again displayed in the chrome. This is essential to get people used to seeing many more cert-protected sessions, so that the above 2 parts can start to work.
4. servers should bootstrap, as normal default behaviour, using an on-demand generated self-signed cert (a sketch of this follows the list). Only when it is routine for certs to exist for *any* important website will the browser be able to reliably track persistence, and the user start to understand the importance of keeping an eye on that persistence.
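On point 4, generating a throwaway cert at first startup is only a few lines of code. Here is a sketch using the Python cryptography package; the filenames, the hostname and the one-year lifetime are arbitrary choices of mine, and nothing here is tied to any particular web server.

```python
# Sketch: generate a self-signed cert at first startup, so every
# server has *some* cert for the browser to track.
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def bootstrap_cert(hostname, days=365):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                    # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=days))
        .sign(key, hashes.SHA256())
    )
    with open("key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption()))
    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    return cert

bootstrap_cert("example.com")
```

The cert proves nothing about identity - it doesn't need to. Its job is simply to exist, so that the browser has something persistent to count visits against.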

It's not a key/padlock, it's a number. Hey presto, we *can* teach the user what a number means - it's the number of times you've visited BankofAmerica.com, or FunkofAmericas.co.mm, or wheresoever you are heading. If that doesn't seem right, don't enter any information.

These are pretty simple changes, and the best news is that Ye & Smith's "Trusted Paths for Browsers" showed that this is entirely plausible.

Posted by iang at June 23, 2004 10:59 AM