March 13, 2005

What users think about web security

A security model generally includes the human in the loop; totally automated security models are generally disruptable. As we move into our new era of persistent attacks on web browsers, research on human failure is very useful. Ping points to two papers, done around 2001, on what web browsing security means to users.

Users' Conceptions of Web Security (by Friedman, Hurley, Howe, Felten, Nissenbaum) explored how users treat the browser's security display. Unsurprisingly, non-technical users were poor at recognising spoof pages that displayed a false padlock/key icon. Perhaps more surprisingly, users held a different view of security: they interpreted a "secure" page as meaning it was safe to enter their information. Worse, they tended to derive this impression as much from the presence of the form itself as from the padlock:

"That at least has the indication of a secure connection; I mean, it's obviously asking for a social security number and a password."

Which is clearly wrong. But consider the distance between what is correct - the web security model protects the information on the wire - and what is relevant to the user. Users want to keep their information safe from harm, any harm, and they will assume that any signal sent to them serves that end.

But as the threat to information is almost entirely at the end node, and not on the wire, users face an odd choice: interpret the signal according to the harms they know about, which is wrong, or ignore the signal because it is irrelevant to those harms. Neither outcome is what the signal's designers expected.
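
To make the gap concrete, here is a minimal sketch (mine, not drawn from the papers) using Python's standard ssl module. It reports the two things the padlock actually attests to - an encrypted channel and a certificate matching the hostname - and says nothing about what the site does with your data once it reaches the end node. The target host is purely illustrative.

    import socket
    import ssl

    # Connect the way a browser would and report what the padlock certifies:
    # an encrypted channel, and a certificate whose name matches the host.
    # Nothing here speaks to how the data is handled at the end node.
    def what_the_padlock_means(hostname, port=443):
        context = ssl.create_default_context()   # verifies the chain and the hostname
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cipher_name, version, _ = tls.cipher()
                cert = tls.getpeercert()
                print("protected on the wire by:", cipher_name, version)
                print("certificate issued to:", dict(x[0] for x in cert["subject"]))

    what_the_padlock_means("www.example.com")   # hypothetical host, for illustration only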

It is therefore understandable that users misinterpret the signals they are given.

The second paper, Users' Conceptions of Risks and Harms on the Web (by Friedman, Nissenbaum, Hurley, Howe, Felten), is also worth reading, but it was conducted in a period of relative peace. It would be very interesting to see the same study conducted again, now that we are in a state of perpetual attack.

Posted by iang at March 13, 2005 10:56 AM | TrackBack
Comments

Maybe people should be taught that it is not safe out there? Could it be that stealing is a more cost-effective area to concentrate on than valid service offerings? I suggest the study of users' stupidity is something most of us non-technical folks require, since we have never designed a system, we only use them. People need to understand that designers of systems are good and bad, and that just because they look like professors on TV (i.e., Bill Gates the nerd) does not mean they are not able to be evil and untrustworthy. I suggest an educational effort designed around teaching children that even if a person looks trustworthy they may not be; from there we can teach them that a form does not mean it's safe to input mommy and daddy's credit card information.

The next step is to educate people on telling a lie now and again, to be embedded into official documents, so that they can trace the leakage of their private information. A good example is changing the middle initial and making sure to note who got which initial. When questioned by the internal security police, make sure they state the initial, and then you can derive which database they obtained your information from. The next step is to change the credit card information every three months, so that even if the information is stolen it cannot be used as frequently.

Id is something you choose to share, but does it have to be a person? Well, probably not. I suggest everyone create a corporation and multiple ids for use in credit transactions online. All children in the US must get a Social Security number soon after birth, so why not a Tax Payer Id and a State Corporation to boot, perhaps several corporate Ids, active and inactive? This creation of a moving target makes it harder to steal the id.

So this juggling id game will of course create the job of the id agent. I want that job, to become the id agent to the stars. Poor Paris Hilton, having her information stolen like that; if she had someone to encrypt it and change the pass phrase she would be safe today, along with other famous folks. I want the job of ID juggler. My marketing will be: you're nobody till Jimbo knows who you are and are not. The consumer society is a snack tray for the consumers of the society.

Posted by: Jimbo at March 13, 2005 11:43 AM

Simson Garfinkel's dissertation is worth looking at in this context.

http://www.simson.net/thesis/

Posted by: Chris Walsh at March 14, 2005 12:35 PM

Are users stupid? Or is the flaw in designs of systems that assume humans are capable of being reliable sources of entropy, and that human trust behaviors change fundamentally when computer-assisted?

Disciplines have served society well since the University of Paris developed from the various schools around Notre Dame. But it seems to me that the structure of knowledge plays a role in this chronic problem of security. Chronic security risk is one symptom of a society that is structurally incapable of handling the massive flow of information we have developed. (Another is political discourse, but let's not go there.)

In the future, will we have four years of study with a major such as Trust and Computer Science, or Chinese History in Business? A degree in trust is no more radical than a degree in business, after all. What will a degree in computer science look like in twenty years? Or business, since MIS is coming out of so many B-school cores?

Posted by: Jean Camp at March 16, 2005 05:17 PM

> Are users stupid?

Of course, coming from a technical background I always assumed so. But times change.

Users haven't the time or knowledge to analyse risks. So they take the easiest path they see. It might be pushing the wrong button, or it might be giving up on the tool altogether.

This randomness drives techies mad, but I find it quite instructive. The core underlying issue is that the user hasn't the time or wisdom to work out the correct choice, but is still driven to make a choice. How then do users choose, when the tool doesn't tell them and they don't know? A big question, and one I currently call "the market for silver bullets."

Since understanding that users are right and techies are wrong, I've adopted a bit of a strategy of always being dumb like a user. If a tool doesn't work for me, I move on; I don't spend the time to learn how it should be used. This pays off in two ways: it filters out, more efficiently than I otherwise could, the tools that will never make it, and it teaches me more about tool design, which I then feed back into my own efforts.

Such an attitude would never get one into B-school, but maybe that's not important?

Posted by: Iang at March 16, 2005 05:29 PM

> This randomness drives techies mad

But we already know that not all randomness is equal. Economists (OK, some of them) know that people don't behave in a perfectly rational fashion, but we can study them to understand how their internal processes drive them to be systematically irrational.

We do know that lots of people are aware of security issues, but they use images (professional design for a website, the same graphic scheme as the real bank) as proxies for authentication. We told them to pick difficult passwords, so they wrote them down. We told them to stop doing that, so they use one password for all sites.

Watching users who are *trying* to be secure will show us how things like cognitive laziness can produce models for further technical and economic development.

Posted by: allan friedman at March 18, 2005 12:48 PM

Yes, and by far the biggest temptation in the technical world is to say that users are dumb (impolite) or more education of users is needed (polite). But this is wrong.

My experience of users has always shown that they in general have a very rational way of making their choices. I might not like their logic, but logic it is. So our task as HCI and systems designers is to figure out how to guide them along a logic that is both secure and easy for them to follow.

Usage of logos and images is key to that, as you point out - hence my frequent rantings about how we need to introduce brand into web browsing. Brand is simply a security message compressed into a picture. Who knows, one day we might see it; I gather Opera now shows the CA's brand on the chrome, but I have no Opera so can't check.
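
As a rough illustration (a sketch of my own, not anything Opera actually does), here is how one might pull the issuing CA's name out of a server's certificate with Python's standard ssl module - the raw material a browser would need if it wanted to paint the CA's brand onto the chrome. The host named is purely illustrative.

    import socket
    import ssl

    # Fetch a server's certificate and return the name of the CA that issued it -
    # the piece of information a browser could compress into a brand image
    # displayed next to the padlock.
    def ca_brand(hostname, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                issuer = dict(x[0] for x in tls.getpeercert()["issuer"])
                return issuer.get("organizationName") or issuer.get("commonName", "unknown CA")

    print(ca_brand("www.example.com"))   # hypothetical host; prints something like the CA's trade name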

Posted by: Iang at March 18, 2005 05:45 PM