June 12, 2004

WYTM - who are you?

The notion of WYTM ("what's your threat model?") is the starting point for a lot of analysis in the cryptographic world. Accepted practice has it that if you haven't done the WYTM properly, the results will be bad or useless.

Briefly, the logic goes, if you don't understand the threats that you are trying to secure against, then you will likely end up building something that doesn't cover those threats. Because security is hard, it's complex, and it is so darn easy to get drawn down the wrong path. We saw this in secure browsing, and the results are unfortunate [1].

So we need a methodology to make sure the end result justifies the work put in. And that starts with WYTM: identify your threats, then categorise and measure their risks & costs. Then, when we construct the security model, we decide which risks we can afford to cover, and which ones we can't.

But some systems seem to defy that standard security modelling. Take, for example, email.

It is difficult to identify a successful email security system; it's one of those areas that seems so obvious it should be easy, but email to date remains deplorably unprotected.

It could well be that the fault for this lies in WYTM. The problem is rather blatant when pointed out: who is the "you" in "what's your threat model?"

Who is it we are talking about when we decide to protect someone? All computing systems include lots of users, but most of them generally have something in common: they work in a firm, or they are users of some corporate system, for example. At least, when going to the effort of building a crypto system, there is generally enough commonality to construct a viable model of who the target user is.

This doesn't work for email - "who" changes from person to person, it includes nearly a billion parties, and many more combinations. What's more, the only commonality seems to be that they are humans, and even that assumption is a little weak given all the automatic senders of email.

The guys who wrote the original email system, and indeed the Internet, cottoned on to one crucial point here: there is no permission needed and no desire to impose the sort of structure that would have made profiling of the user plausible.

No profile of "who" may mean no WYTM.

For example, when Juan sends sexy email to Juanita, it has a very different threat model from when legal counsel sends email to a defendant. Consider contracts flying between trading partners. Or soon-to-be-divorcees searching for new partners via dating agencies.

Or ... the list goes on. There is literally no one threat model that can deal with this, as the uses are so different, and some of the requirements are diametrically opposed: legal wants the emails escrowed and authenticated, whereas the dating women consider their anonymity and untraceability valuable.

Seen in this light, there isn't one single threat model. Which means either we develop the threat model to end all threat models (!) or we have lots of different threat models.

It is clearly intractable to develop one single threat model, and it quite possibly is intractable to develop a set of them.

So what is a poor cryptoplumber to do? Well, in the words of Sherlock Holmes, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth."

For some systems, WYTM is simply inadequate. What then must we do? Design without WYTM.

With no threat model to hand, we need a new measure to judge whether to include a given protection or not. About the only answer that I can come up with is based on economic criteria.

Nobody is paying us to build email security, so whatever is good and free should be used. This is opportunistic cryptography: protect what you can for free, or for cheap, and worry about the rest later.

In email terms, this would involve:

* tunnelling over SSH to SMTP servers,
* implementing STARTTLS with self-signed certs,
* implementing PGP-enabled gateways like the "Universal" product and its many forerunners,
* and more...

By definition, there are gaps. But with opportunistic cryptography, we don't so much worry about that, because what we are doing is free anyway.
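The STARTTLS item above can be sketched in a few lines of Python. This is a minimal illustration, not a hardened mailer: the `send_opportunistic()` helper and the `upgrade_policy()` rule are names invented here, and the certificate checks are deliberately disabled to match the "self-signed certs are fine" stance of opportunistic cryptography.

```python
import smtplib
import ssl

def upgrade_policy(extensions):
    """Opportunistic rule: encrypt when offered, send anyway when not."""
    return "tls" if "starttls" in extensions else "plain"

def send_opportunistic(host, sender, recipient, message, port=25):
    """Send mail, upgrading the hop to TLS if the server advertises STARTTLS."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.ehlo()
        if upgrade_policy(set(smtp.esmtp_features)) == "tls":
            # Self-signed certs are acceptable here: the goal is encryption
            # against passive eavesdroppers, not server authentication.
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            smtp.starttls(context=ctx)
            smtp.ehlo()
        # Either way, the mail goes out -- the protection was free when available.
        smtp.sendmail(sender, recipient, message)
```

Note the design choice: a server without STARTTLS gets plaintext rather than a refusal, which is exactly the gap the paragraph above accepts.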

[1] WYTM?

Posted by iang at June 12, 2004 05:51 AM

Is there case work on threat models that have failed? Perhaps such case work is not generally available because it is kept secret, and that is what's lacking in our ability to model the design of a secure system.

I suggest that this case work information is a viable currency, and that if owned it would not be spent and exchanged willy-nilly with all those who ask or look for it. A case in point might be the rumours of a Chinese Cyber Warfare development entity. While many have suggested its existence, I have found no proof of any of its projects or attacks.

Does that mean I cannot understand the threat they present? I gather yes is the answer, if I'm the one who they would attack. Logically there is no entity that will be attacked by the Chinese Cyber Warfare team because no attack has been detected.

Email and the profiles of its users are a strange thing indeed. So while the Mafia and the CIA might find email unusable due to the electronic record, it might be fine for Mr. White Collar Criminal, who does not expect any monitoring, so email makes sense. Is the user one who might expect to be investigated? What might the user be investigated for, and by whom? What is the risk versus reward for the activity being conducted using email?

All these questions need to be asked to judge the cost of protection. If it is a question of habit versus proper assessment for the user, then that itself is a threat model. Habits dictate the behaviour, rather than rational analysis of the user's situation and purpose of usage. The best way to play the security threat model is the creation of FUD. FUD will break the habits and might spark some rational thought as to proper usage, given the terms and conditions the user establishes.

I know people who are so concerned they never return an email.

Posted by: Jim at June 12, 2004 08:10 AM

I now think of it as "What're Your Threats, Mate ?!"

Posted by: JPMay at June 12, 2004 12:11 PM

LOL! That's a classic solution - if we aren't clear what the target audience is, let's conveniently settle on our mates, and hope nobody notices. I mean, what are mates for?

Posted by: Iang at June 12, 2004 06:20 PM

Again I want to go back to the idea of a signing device that can receive and display any arbitrary bit of text on a small screen, allow users to attach any of several response codes, and sign and return it to the sender. The response codes may be obvious things like (1=sent, 2=acknowledged receipt, 3=accepted, 4=denied, 5=other) http://ledgerism.net/STR.htm
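That response-code scheme can be sketched in Python. This is a toy illustration under stated assumptions: a real device would use a public-key signature, so the HMAC below is only a stand-in, and the `sign_response()`/`verify_response()` names and the message layout are invented here, not taken from the STR proposal.

```python
import hashlib
import hmac
import json

# The response codes proposed in the comment above.
RESPONSE_CODES = {1: "sent", 2: "acknowledged receipt",
                  3: "accepted", 4: "denied", 5: "other"}

def sign_response(device_key, document, code):
    """Attach a response code to the displayed text and 'sign' the pair.

    HMAC-SHA256 stands in for the device's real signature primitive.
    """
    if code not in RESPONSE_CODES:
        raise ValueError("unknown response code")
    payload = json.dumps({"doc": document, "code": code}, sort_keys=True)
    tag = hmac.new(device_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_response(device_key, msg):
    """The sender checks the returned message really came from the device."""
    expected = hmac.new(device_key, msg["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])
```

Because the payload is fixed-form and tiny, the software surface stays small, which is the point of the comment's "small and simple enough to write software without holes".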

The following could be out of scope and/or unnecessary in P2P commerce: protection against DoS for either the initiator or the respondent, and how either party verifies the counterparty's public key.

There has to be a fundamental principle, "Let the reader of the signed document beware" - "caveat anagnostes" or maybe "caveat perlego" http://www.nd.edu/~archives/latin.htm The alternative is big brother confirming everything for us and imposing the whole range of costs and snooping and control, or abandoning electronic networks for another generation.

So, the threat models are almost completely limited to physical or network attacks that get inside the handheld device and mess up the data or steal the key(s). These risks should be modest if the device is simple. The MeT device had two areas: an area for application software, and the security element, which held the keys and the low-level software objects for signing etc.

I believe the world is ready for a signing device, based on something like webfunds with chat, and a fixed document size for the signed, encrypted message payload. The payload would be any simple thing like XML-X that was small and simple enough to write software without holes.


Posted by: Todd at June 12, 2004 06:22 PM