April 06, 2004

Security Modelling

One bright spot in the aforementioned report on cyber security is the section on security modelling [1] [2]. I had looked at this a few weeks back and found ... very little in the way of methodology and guidance on how to do this as a process [3]. The sections extracted below confirm that there isn't much out there, list what steps are known, and provide some references. FTR.

[1] Cybersecurity FUD, FC Blog entry, 5th April 2004, http://www.financialcryptography.com/mt/archives/000107.html
[2] Security Across the Software Development Lifecycle Task Force, _Improving Security Across the Software Development LifeCycle_, 1st April 2004, Appendix B: "Processes to Produce Secure Software," section "Practices for Producing Secure Software," pp. 21-25. http://www.cyberpartnership.org/SDLCFULL.pdf
[3] Browser Threat Model, FC Blog entry, 26th February 2004. http://www.financialcryptography.com/mt/archives/000078.html



Principles of Secure Software Development

While principles alone are not sufficient for secure software development, principles can help guide secure software development practices. Some of the earliest secure software development principles were proposed by Saltzer and Schroeder in 1975 [Saltzer]. These eight principles still apply today and are repeated verbatim here:

1. Economy of mechanism: Keep the design as simple and small as possible.
2. Fail-safe defaults: Base access decisions on permission rather than exclusion.
3. Complete mediation: Every access to every object must be checked for authority.
4. Open design: The design should not be secret.
5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
8. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.

Later work by Peter Neumann [Neumann], John Viega and Gary McGraw [Viega], and the Open Web Application Security Project (http://www.owasp.org) builds on these basic security principles, but the essence remains the same and has stood the test of time.
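
A couple of these principles translate directly into code. What follows is a minimal, hypothetical Python sketch (mine, not from the report or from Saltzer and Schroeder) of fail-safe defaults, complete mediation, and least privilege: access is denied unless explicitly granted, every read passes through the same check, and a principal carries only the permissions it needs.

    from dataclasses import dataclass, field

    @dataclass
    class Principal:
        name: str
        # Least privilege: a principal starts with no permissions and is granted
        # only the specific (action, resource) pairs it actually needs.
        permissions: set = field(default_factory=set)

    def is_allowed(principal, action, resource):
        # Fail-safe defaults: deny unless an explicit permission exists.
        return (action, resource) in principal.permissions

    def read_record(principal, record_id):
        # Complete mediation: every access to the object goes through the check.
        if not is_allowed(principal, "read", "record:" + record_id):
            raise PermissionError(principal.name + " may not read record " + record_id)
        return "contents of record " + record_id   # stand-in for the real lookup

    clerk = Principal("clerk", permissions={("read", "record:42")})
    print(read_record(clerk, "42"))   # allowed by an explicit grant
    # read_record(clerk, "43")        # would raise PermissionError: denied by default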

Threat Modeling

Threat modeling is a security analysis methodology that can be used to identify risks and guide subsequent design, coding, and testing decisions. The methodology is mainly used in the earliest phases of a project, working from specifications, architectural views, data flow diagrams, activity diagrams, and the like, but it can also be applied to detailed design documents and code. Threat modeling concentrates on the threats with the greatest potential to damage an application.

Overall, threat modeling involves identifying the key assets of an application, decomposing the application, identifying and categorizing the threats to each asset or component, rating the threats based on a risk ranking, and then developing threat mitigation strategies that are implemented in designs, code, and test cases. Microsoft has defined a structured method for threat modeling, consisting of the following steps [Howard 2002]; a rough scoring sketch follows the list.

  • Identify assets
  • Create an architecture overview
  • Decompose the application
  • Identify the threats
  • Categorize the threats using the STRIDE model (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege)
  • Rank the threats using the DREAD categories (Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability)
  • Develop threat mitigation strategies for the highest ranking threats
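
As a hypothetical illustration of the categorize-and-rank steps (mine, not from [Howard 2002]; the 1-10 scale per DREAD axis is only a common convention), a threat list might be recorded and sorted like this:

    from dataclasses import dataclass

    @dataclass
    class Threat:
        description: str
        stride: str        # one of the six STRIDE categories
        # The five DREAD ratings, here on a 1-10 scale per axis.
        damage: int
        reproducibility: int
        exploitability: int
        affected_users: int
        discoverability: int

        def dread_score(self):
            # Simple average of the five ratings; higher scores are mitigated first.
            return (self.damage + self.reproducibility + self.exploitability
                    + self.affected_users + self.discoverability) / 5.0

    threats = [
        Threat("SQL injection in login form", "Elevation of privilege", 9, 9, 8, 10, 8),
        Threat("Stack trace leaks internal paths", "Information disclosure", 3, 10, 7, 5, 9),
    ]

    # Mitigation effort goes to the highest-ranking threats first.
    for th in sorted(threats, key=lambda th: th.dread_score(), reverse=True):
        print("%.1f  [%s] %s" % (th.dread_score(), th.stride, th.description))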

Other structured methods for threat modeling are available as well [Schneier].

Although some anecdotal evidence exists for the effectiveness of threat modeling in reducing security vulnerabilities, no empirical evidence is readily available.

Attack Trees

Attack trees characterize system security in the face of varying attacks. The use of attack trees for characterizing system security is based partially on Nancy Leveson's work with "fault trees" in software safety [Leveson]. Attack trees model the decision-making process of attackers. Attacks against a system are represented in a tree structure. The root of the tree represents the potential goal of an attacker (for example, to steal a credit card number). The nodes in the tree represent actions the attacker takes, and each path in the tree represents a unique attack to achieve the goal of the attacker.

Attack trees can be used to answer questions such as: What is the easiest attack? The cheapest attack? The attack that causes the most damage? The hardest attack to detect? Attack trees are used for risk analysis, to answer questions about the system's security, to capture security knowledge in a reusable way, and to design, implement, and test countermeasures to attacks [Viega] [Schneier] [Moore].
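
To make the "cheapest attack" question concrete, here is a minimal, hypothetical Python sketch (mine, not from [Schneier] or the report) of an attack tree whose leaves carry estimated attacker costs: OR nodes take the minimum over their children, AND nodes sum theirs, so evaluating the root gives the cost of the cheapest path to the goal.

    from dataclasses import dataclass, field

    @dataclass
    class AttackNode:
        label: str
        kind: str = "LEAF"          # "LEAF", "OR", or "AND"
        cost: float = 0.0           # estimated attacker cost; leaves only
        children: list = field(default_factory=list)

        def cheapest(self):
            # OR: the attacker picks the cheapest child; AND: all children are needed.
            if self.kind == "LEAF":
                return self.cost
            costs = [child.cheapest() for child in self.children]
            return min(costs) if self.kind == "OR" else sum(costs)

    # Root goal taken from the text: steal a credit card number.
    root = AttackNode("steal a credit card number", "OR", children=[
        AttackNode("phish the cardholder", cost=100),
        AttackNode("compromise the merchant database", "AND", children=[
            AttackNode("find an SQL injection flaw", cost=500),
            AttackNode("exfiltrate records unnoticed", cost=300),
        ]),
    ])

    print(root.cheapest())   # 100: phishing is the cheapest attack path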

Just as with threat modeling, there is anecdotal evidence of the benefits of using attack trees, but no empirical evidence is readily available.

Attack Patterns

Hoglund and McGraw have identified forty-nine attack patterns that can guide design, implementation, and testing [Hoglund]. These soon-to-be-published patterns include:

    1. Make the Client Invisible
    2. Target Programs That Write to Privileged OS Resources
    3. Use a User-Supplied Configuration File to Run Commands That Elevate Privilege
    4. Make Use of Configuration File Search Paths
    5. Direct Access to Executable Files
    6. Embedding Scripts within Scripts
    7. Leverage Executable Code in Nonexecutable Files
    8. Argument Injection
    9. Command Delimiters
    10. Multiple Parsers and Double Escapes
    11. User-Supplied Variable Passed to File System Calls
    12. Postfix NULL Terminator
    13. Postfix, Null Terminate, and Backslash
    14. Relative Path Traversal
    15. Client-Controlled Environment Variables
    16. User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth)
    17. Session ID, Resource ID, and Blind Trust
    18. Analog In-Band Switching Signals (aka "Blue Boxing")
    19. Attack Pattern Fragment: Manipulating Terminal Devices
    20. Simple Script Injection
    21. Embedding Script in Nonscript Elements
    22. XSS in HTTP Headers
    23. HTTP Query Strings
    24. User-Controlled Filename
    25. Passing Local Filenames to Functions That Expect a URL
    26. Meta-characters in E-mail Header
    27. File System Function Injection, Content Based
    28. Client-side Injection, Buffer Overflow
    29. Cause Web Server Misclassification
    30. Alternate Encoding the Leading Ghost Characters
    31. Using Slashes in Alternate Encoding
    32. Using Escaped Slashes in Alternate Encoding
    33. Unicode Encoding
    34. UTF-8 Encoding
    35. URL Encoding
    36. Alternative IP Addresses
    37. Slashes and URL Encoding Combined
    38. Web Logs
    39. Overflow Binary Resource File
    40. Overflow Variables and Tags
    41. Overflow Symbolic Links
    42. MIME Conversion
    43. HTTP Cookies
    44. Filter Failure through Buffer Overflow
    45. Buffer Overflow with Environment Variables
    46. Buffer Overflow in an API Call
    47. Buffer Overflow in Local Command-Line Utilities
    48. Parameter Expansion
    49. String Format Overflow in syslog()

These attack patterns can be used to discover potential security defects.
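
Several of these patterns translate directly into defensive checks and test inputs. As a hypothetical Python sketch (mine, not from [Hoglund]), pattern 14, Relative Path Traversal, might drive both a guard in the code and a handful of probe inputs for testing; the base directory and function names below are illustrative only.

    import os

    BASE_DIR = "/srv/app/uploads"

    def safe_open(user_supplied_name):
        # Defensive check: resolve the path and refuse anything that escapes BASE_DIR.
        candidate = os.path.realpath(os.path.join(BASE_DIR, user_supplied_name))
        if not candidate.startswith(BASE_DIR + os.sep):
            raise ValueError("path traversal rejected: %r" % user_supplied_name)
        return open(candidate, "rb")

    # Attack-pattern-driven test inputs: every probe here should be rejected.
    probes = ["../../etc/passwd", "foo/../../secret", "/etc/passwd"]
    for probe in probes:
        try:
            safe_open(probe)
            print("VULNERABLE to %r" % probe)
        except ValueError:
            print("rejected %r" % probe)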

References

[Saltzer] Saltzer, Jerome H., and Michael D. Schroeder, "The Protection of Information in Computer Systems," Proceedings of the IEEE, Vol. 63, No. 9 (September 1975), pp. 1278-1308. Available online at http://cap-lore.com/CapTheory/ProtInf/.
[Neumann] Neumann, Peter G., Principled Assuredly Trustworthy Composable Architectures: (Emerging Draft of the) Final Report, December 2003.
[Viega] Viega, John, and Gary McGraw, Building Secure Software: How to Avoid Security Problems the Right Way, Addison-Wesley, 2001.
[Howard 2002] Howard, Michael, and David C. LeBlanc, Writing Secure Code, 2nd edition, Microsoft Press, 2002.
[Schneier] Schneier, Bruce, Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons, 2000.
[Leveson] Leveson, Nancy G., Safeware: System Safety and Computers, Addison-Wesley, 1995.
[Moore 1999] Moore, Geoffrey A., Inside the Tornado: Marketing Strategies from Silicon Valley's Cutting Edge, HarperBusiness, reprint edition, 1999.
[Moore 2002] Moore, Geoffrey A., Crossing the Chasm, Harper Business, 2002.
[Hoglund] Hoglund, Greg, and Gary McGraw, Exploiting Software: How to Break Code, Addison-Wesley, 2004.

Posted by iang at April 6, 2004 07:54 AM