May 27, 2005

Loss Expectancy in NPV calculations

Mr WiKiD does some hypothetical calculations to show that using some security solutions can cost more than they benefit, since everything you add to your systems increases your chances of loss.

Is that right? Yes, any defence comes with its own weaknesses. Further, defences combine to be stronger in some places and weaker in others.

Building security systems is like tuning speaker systems (or exhaust mufflers on cars). Every time we change something, the sound waves move around, so some strengths combine to give us better bass at some frequencies, but worse bass at others. If we are happy to accept the combination of waves at some frequencies and are suppressing the others through, say, heavy damping, then that's fine. If not, we end up with a woefully imbalanced sound.

Security is mostly done like that, but without worrying about where the weaknesses combine. That's probably because the set of ears listening to security isn't ours; it's the attackers'.

So, getting back to WiKiD's assumptions, he added the Average Annual Loss Expectancy (AALE) into his NPV (Net Present Value) calculations, and because this particular security tool had a relatively high breach rate, the project went strongly negative.
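To make the arithmetic concrete, here is a minimal sketch of that sort of calculation. The figures are invented for illustration (they are not WiKiD's actual numbers), and the structure is just the textbook form: annual loss expectancy folded into the project's yearly cash flows before discounting.

```python
# Hypothetical sketch of folding a loss expectancy into an NPV calculation.
# All figures below are invented illustration values, not WiKiD's.

def npv(cash_flows, rate):
    """Net present value of per-year cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# ALE = Single Loss Expectancy x Annualized Rate of Occurrence
sle = 200_000        # cost per breach of the tool itself (assumed)
aro = 0.15           # expected breaches per year (assumed)
ale = sle * aro      # 30,000/year expected loss from the tool's own breach rate

benefit = 50_000     # annual loss the tool prevents (assumed)
upkeep  = 10_000     # annual operating cost (assumed)
capex   = 60_000     # up-front purchase cost (assumed)

# Year 0: purchase; years 1-5: benefit, minus upkeep, minus the tool's own ALE
flows = [-capex] + [benefit - upkeep - ale] * 5
print(round(npv(flows, 0.10), 2))  # -22092.13 — the project goes negative
```

Even with the tool delivering its claimed benefit, a high enough breach rate on the tool itself drags the project below zero, which is exactly the effect described above.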

Bummer. But he homes in on an interesting thought: we need much more information on what the Loss Expectancy is for every security tool. I muse on that hypothetically in a draft paper, and having written up the consequences of such issues, I can see an immediate problem: how are we going to list and measure the Loss Expectancy product by product, or even category by category?

Posted by iang at May 27, 2005 10:03 PM | TrackBack

You find this sort of stuff also under parameterized risk management ... especially in insurance industry (or self-insured operations) attempting to calculate costs against all possible risk exposures. talk to some corporate risk managers ... they view life thru these sort of glasses all the time.

a couple past mentions on the thread between information security and risk management:

and then security proportional to risk:

a somewhat related scenario was where a risk manager (related to calculating insurance premiums) comments on certificate-based infrastructures vis-a-vis certificateless, online-based infrastructures.

the issue was how many places could a subject use their certificate in value related transactions ... if the certification authority was standing behind such a certificate (using the letter of credit scenario from the sailing ship days) ... what might be the avg. value at risk and how many such events might there be creating risk exposure to the certifying institution.

the counter example is the current online payment card scenario ... where they maintain an open-to-buy for the subjects they certify ... and every value transaction aggregates against the subject's open-to-buy. at all times, the certifying institution has real-time management of the subject's open-to-buy (and therefore the actual risk exposure to the institution). the open-to-buy ceiling (aka credit limit) may undergo numerous adjustments as the risk taking institution updates its expected liabilities.
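[editor's sketch: the open-to-buy mechanism described above can be reduced to a few lines. names and figures here are illustrative, not any institution's actual scheme.]

```python
# Illustrative sketch of real-time open-to-buy management: every value
# transaction aggregates against the subject's ceiling, so the certifying
# institution's exposure is bounded and known at all times.

class OpenToBuy:
    def __init__(self, credit_limit):
        self.credit_limit = credit_limit   # risk ceiling set by the institution
        self.outstanding = 0               # value aggregated so far

    def authorize(self, amount):
        """Approve a value transaction only if it fits the remaining open-to-buy."""
        if self.outstanding + amount > self.credit_limit:
            return False                   # exposure would exceed the ceiling
        self.outstanding += amount
        return True

    def adjust_limit(self, new_limit):
        """Institution revises the ceiling as expected liabilities change."""
        self.credit_limit = new_limit

acct = OpenToBuy(1_000)
print(acct.authorize(600))   # True: 600 of 1000 now outstanding
print(acct.authorize(500))   # False: would push exposure past the ceiling
```

contrast with the offline certificate case, where the certifying institution has no running total of how many places the certificate is being relied on.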

Posted by: Lynn Wheeler at May 27, 2005 07:47 PM

oops that should have been "aepay3.htm" instead of "aadsm3.htm" ... i.e.

... while i'm posting ... i might also point out that the situation has some similarities to performance bottlenecks ... the old adage ... that when you "eliminate" one ... there is always another waiting right behind

I had done a lot of kernel performance rewrites as an undergraduate (sometimes getting a 100:1 improvement in cpu cycles), inventing page replacement algorithms and scheduling algorithms ... which were being shipped in commercial products ... recent post that is somewhat related to security issues (as an undergraduate being responsible for resource management algorithms ... it was therefore my responsibility to address low-bandwidth information security leakage issues)

in the latter half of the 70s, i started pointing out the significant shift in systems performance thruput from being cpu and memory constrained to being i/o constrained. i made some claim that the relative system disk thruput had declined by a factor of ten over a period of years.

the disk division didn't like it and assigned their performance modeling organization to refute the claim. after a couple months, the group came back and said that i had slightly understated the problem.

memory and cpu had increased by approx. a factor of 50 while disk arm performance had increased by a factor of 3-5 (resulting in a relative system thruput decline of at least 10 times).
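[editor's note: the arithmetic behind the "at least 10 times" figure, using the ratios as stated; the midpoint of the 3-5 range is my assumption.]

```python
# Back-of-envelope check of the ratios in the text above.
cpu_mem_gain = 50          # cpu/memory improvement, approx. (from the text)
disk_gain_range = (3, 5)   # disk arm improvement range (from the text)

# Relative decline = how far disk fell behind the rest of the system.
worst = cpu_mem_gain / disk_gain_range[0]   # ~16.7x if disk only gained 3x
best  = cpu_mem_gain / disk_gain_range[1]   # 10.0x even if disk gained 5x
print(best, worst)  # 10.0 16.666... — "at least 10 times" holds across the range
```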

Posted by: Lynn Wheeler at May 27, 2005 11:25 PM