October 06, 2006

Why security training is really important (and it ain't anything to do with security!)

Lynn mentioned in comments yesterday:

I guess I have to admit to being on a roll.

:-) Lynn grasped the nexus between the tea-room and the systems room yesterday:

One of the big issues is inadequate design and/or assumptions ... in part, failing to assume that the operating environment is an extremely hostile environment with an enormous number of bad things that can happen.

What I didn't stress was the reason why security training is so important -- more important than your average CSO realizes. Lynn spots it above: reliability.

The reason we benefit from teaching security (think Fight Club here, not American football) is that it clearly teaches how to build reliable systems. The problem addressed here is that unreliable systems fall foul of statistical enemies, and those enemies are weak, few, and far between. But when you get to big systems and lots of transactions, they become significant, and systems without reliability die the death of a thousand cuts.

Security training solves this because it takes the statistical enemy up several notches and makes it apparent and dangerous even in small environments. And, once a mind is tuned to thinking of the attack of the aggressor, dealing with the statistical failure is easy, it's just an accidental case of what an aggressor could do.

I would even assert that the enormous amounts of money spent attempting to patch an inadequate implementation can be orders of magnitude larger than the cost of doing it right in the first place.

This is the conventional wisdom of the security industry -- and I disagree. Not because it doesn't make sense, or because it isn't true (it makes sense! and it's true!) but because time and time again, we've tried it and it has failed.

The security industry is full of examples where we've spent huge amounts of money on up-front "adequate security," and it's been wasted. It is not full of examples where we've spent huge amounts of money up front, and it's paid off...

Partly, the conventional security industry wisdom fails because it is far too easy for us to hang it all out in the tea-room and make like we actually know what we are talking about in security. It's simply too easy to blather such received wisdom. In the market for silver bullets, we simply don't know, and we share that absence of knowledge with phrases and images that lose meaning through repetition. In such a market, we end up selling the wrong product for a big price -- payment up front, please!

We are better off -- I assert -- saving our money until the wrong product shows itself to be wrong. Sell the wrong product by all means, but sell it cheaply. Live life a little dangerously, and let a few frauds happen. Ride the GP curve up and learn from your attackers.

But of course, we don't really disagree, as Lynn immediately goes on to say:

Some of this is security proportional to risk ... where it is also fundamental that what may be at risk is correctly identified.

Right.

To close with reference to yesterday's post: Security talk also easily impresses the managerial class, and this is another reason why we need "hackers" to "hack", to use today's unfortunate lingo. A breach of security, rendered before our very servers, speaks for itself, in terms that cut through the sales talk of the silver bullet sellers. A breach of security is a hard fact that can be fed into the above risk analysis, in a world where Spencarian signals abound.

Posted by iang at October 6, 2006 02:35 PM | TrackBack
Comments

note that some of the security issues were extremely well understood in the 60s & 70s ... where systems designed for commercial timesharing operation had some fundamental integrity operations built into the basic operation ... and had few of the vulnerabilities seen in many of the more modern systems.

i've frequently contended that it involves many of the original design and environment assumptions. many of the more modern systems came from a genre of stand-alone, unconnected, personal systems sitting on somebody's kitchen table. there was little reason to design in countermeasures to network-based hostile attacks. many of these systems also developed a large base of applications that depended on taking over the complete system for operation (like the game market). later some of these systems were adapted to closed networks for group collaboration ... again not a basically hostile environment requiring any attack countermeasures.

it was when a large number of these systems started being attached to open, hostile networking environments that a lot of the problems started showing up ... since none of the original design assumptions included operating in such an environment.

i would contend that it is somewhat analogous to taking one of the original horseless carriages and placing it on the track in the middle of an indy 500 race.

as to the frequent failures of many of the upfront, designed-in security efforts ... one could claim that there was inadequate understanding of the threat and fundamental security principles. of course this can also be attributed to not getting any educational grounding in threats and security.

the counterexample is that many of the systems from the 60s and 70s designed for commercial timesharing (where there is an assumption that different users would attack each other, given the chance) ... have had a much, much lower rate of vulnerabilities.

I was involved in cp67 and vm370 ... which originated on the 4th floor of 545 tech sq and were used extensively by commercial timesharing service bureaus
http://www.garlic.com/~lynn/subtopic.html#545tech

and multics was done on the 5th floor of the same bldg.
multics security reference:
http://www.multicians.org/security.html

until recently, buffer overflows were a dominant form of network attack
http://www.garlic.com/~lynn/subintegrity.html#overflow

which has somewhat given way to networking attacks using various forms of automatic scripting vulnerabilities.

however, a paper from a couple years ago cited the lack of any buffer overflow vulnerabilities in multics
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation

from above:

2.2 Security as Standard Product Feature
2.3 No Buffer Overflows
2.4 Minimizing Complexity

...

This Research Report consists of two invited papers for the Classic Papers section of the 18th Annual Computer Security Applications Conference (ACSAC) to be held 9-13 December 2002 in Las Vegas, NV. The papers will be available on the web after the conference at
http://www.acsac.org/

The first paper, Thirty Years Later: Lessons from the Multics Security Evaluation, is a commentary on the second paper, discussing the implications of the second paper's results on contemporary computer security issues. Copyright will be transferred on the first paper.

The second paper, Multics Security Evaluation: Vulnerability Analysis is a reprint of a US Air Force report, first published in 1974. It is a government document, approved for public release, distribution unlimited, and is not subject to copyright. This reprint does not include the original computer listings. They can be found at
http://csrc.nist.gov/publications/history/karg74.pdf

... snip ...

Posted by: Lynn Wheeler at October 6, 2006 11:01 AM

I have recently participated in a small conference on industrial esp..., sorry, business intelligence, where a sysadmin-type guy gave a presentation about how important various security practices (from "conventional wisdom") are and how they have all failed him in practice (especially user training). Yes, you read that right. I couldn't resist giggling.
Now, when I told him about these things we have discussed so much: about learning from your adversaries instead of second-guessing them in the tea-room, and about thinking about the motivations and rational behavior of participants in the security process (why do people stick passwords on their monitors?), he was profoundly surprised. He said I had shown him an entirely new facet of the problem of security.
So, I guess, there are still quite a few security professionals around for whom security is just synonymous with excessive paranoia. And your post sheds some light on why there is a demand for them: because by fighting against the Rational Adversary they manage to defeat Blind Chance.
Chalk one up for rational behavior that is not necessarily conscious. In other words, for people doing the right thing for all the wrong reasons... :-)

Posted by: Daniel A. Nagy at October 7, 2006 05:55 PM

when we did detailed failure mode analysis for tcp/ip stack early in ha/cmp product development (late 80s)
http://www.garlic.com/~lynn/subtopic.html#hacmp

it wasn't necessarily human adversaries ... it was just how could things fail. one of the items we noticed then (among lots of others) was the vulnerability to buffer overflows ... both in the way the code was written as well as deficiencies in the C language programming environment. part of this was having worked with tcp/ip stacks written in other languages that rarely, if ever, experienced buffer overflows.
http://www.garlic.com/~lynn/subintegrity.html#overflow

this was subsequently borne out by the multics study published much later
http://www.garlic.com/~lynn/aadsm25.htm#40 Why security training is really important (and it ain't anything to do with security!)

when we were asked to consult with this small client/server startup that wanted to do payment transactions on their server ... one of the things we did was a failure mode matrix for the payment gateway ... and we needed a method of handling each possible failure ... regardless of whether it involved a human attacker or not. they had this stuff called SSL and it has since come to be called electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

when we started on the x9.59 financial standard electronic retail payment protocol ... the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments.

one of the frequent failure modes that was identified was the skimming/harvesting of account numbers by numerous methods ... including data breaches (much of the identity theft that has been in the news involves this).
http://www.garlic.com/~lynn/subintegrity.html#harvest

rather than attempting to totally eliminate all possible such data breaches (including those that might involve insiders) ... x9.59 changed the paradigm ... and made such data breaches useless to the attackers.

part of the problem was the diametrically opposing objectives for account numbers ... on one hand they needed to be readily available for scores of different business processes ... and on the other hand they needed to be kept confidential and never revealed
http://www.garlic.com/~lynn/subintegrity.html#secrets

in order to prevent fraudulent transactions
http://www.garlic.com/~lynn/subintegrity.html#fraud

x9.59 changed the paradigm so that the account number can't be used for fraudulent transactions.
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

and therefore eliminated (much of the) data breach problem as a security issue ... it didn't do anything to eliminate data breaches ... it just eliminated any benefit those data breaches provided to the attacker.

this is somewhat the security proportional to risk scenario
http://www.garlic.com/~lynn/2001h.html#61

in the above, it didn't eliminate the possibility of such breaches ... it just eliminated any risk that might be associated with such breaches.

for some drift ... a more recent security proportional to risk thread
http://www.garlic.com/~lynn/2006s.html#4
http://www.garlic.com/~lynn/2006s.html#5
http://www.garlic.com/~lynn/2006s.html#9
http://www.garlic.com/~lynn/2006s.html#10

Posted by: Lynn Wheeler at October 7, 2006 09:29 PM

in combination with changing the paradigm for x9.59
http://www.garlic.com/~lynn/aadsm25.htm#41 Why security training is really important (and it ain't anything to do with security!)

we started looking at a generalized authentication mechanism that would replace pin/password ... which became the aads chip strawman in the 1998 timeframe
http://www.garlic.com/~lynn/x959.html#aadsstraw

now if you had a hardware token that never divulged its private key for authentication ... it would be "something you have" authentication ... and it would have to be physically obtained in order to do some fraud. it wasn't susceptible to the "yes card" type exploits because it didn't use static data
http://www.garlic.com/~lynn/subintegrity.html#yescard

and, in the case of x9.59 transactions ... the actual transaction was signed and authenticated ... so it wouldn't be vulnerable to any kind of mitm-attacks
http://www.garlic.com/~lynn/subintegrity.html#mitm

that might still occur with cards doing dynamic data ... but authenticating separately from doing the transaction.

so from the 3-factor authentication model
http://www.garlic.com/~lynn/subintegrity.html#3factor

* something you have
* something you know
* something you are

so a card represents "something you have" authentication. now, normally, multi-factor authentication is assumed to be more secure when the different factors have independent vulnerabilities or failure modes. in the card case, pin/password is nominally a countermeasure to lost/stolen card. however, lack of careful design can result in the "yes card" exploit ... negating any assumption about independent failure modes.

so there is the problem of people having to deal with scores (or possibly hundreds) of passwords, leading to the password post-it note scenario. in theory, card authentication could replace all the pin/passwords. however, in the existing institutional-centric model, that eventually leads to each of the scores (or hundreds) of passwords for a person being replaced with a card. this, in itself, is an untenable solution ... but if each card then has a different pin/password ... the person has to write the appropriate pin/password on each card (again negating any security assumptions related to multi-factor authentication).

so one of the efforts in the aads chip strawman was looking at what might be necessary to switch from an institutional-centric model to a person-centric model ... where a person might have a single (or extremely few) hardware tokens ... that they could register every place there was a requirement for authentication (potentially resulting in a person having only a single hardware token and a single pin to remember)

misc. past posts mentioning what enablers would be needed to transition to a person-centric authentication infrastructure
http://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
http://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
http://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
http://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm20.htm#6 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm20.htm#36 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/aadsm21.htm#2 Another entry in the internet security hall of shame
http://www.garlic.com/~lynn/aadsm22.htm#12 thoughts on one time pads
http://www.garlic.com/~lynn/aadsm24.htm#49 Crypto to defend chip IP: snake oil or good idea?
http://www.garlic.com/~lynn/aadsm24.htm#52 Crypto to defend chip IP: snake oil or good idea?
http://www.garlic.com/~lynn/aadsm25.htm#7 Crypto to defend chip IP: snake oil or good idea?
http://www.garlic.com/~lynn/aadsm8.htm#softpki16 DNSSEC (RE: Software for PKI)
http://www.garlic.com/~lynn/2003e.html#22 MP cost effectiveness
http://www.garlic.com/~lynn/2003e.html#31 MP cost effectiveness
http://www.garlic.com/~lynn/2003o.html#9 Bank security question (newbie question)
http://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004q.html#0 Single User: Password or Certificate
http://www.garlic.com/~lynn/2005g.html#8 On smartcards and card readers
http://www.garlic.com/~lynn/2005g.html#47 Maximum RAM and ROM for smartcards
http://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
http://www.garlic.com/~lynn/2005m.html#37 public key authentication
http://www.garlic.com/~lynn/2005p.html#6 Innovative password security
http://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
http://www.garlic.com/~lynn/2005r.html#25 PCI audit compliance
http://www.garlic.com/~lynn/2005r.html#31 Symbols vs letters as passphrase?
http://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product
http://www.garlic.com/~lynn/2005u.html#26 RSA SecurID product
http://www.garlic.com/~lynn/2006d.html#41 Caller ID "spoofing"
http://www.garlic.com/~lynn/2006o.html#20 Gen 2 EPC Protocol Approved as ISO 18000-6C
http://www.garlic.com/~lynn/2006p.html#32 OT - hand-held security
http://www.garlic.com/~lynn/2006q.html#3 Device Authentication - The answer to attacks lauched using stolen passwords?

Posted by: Lynn Wheeler at October 7, 2006 10:01 PM