Sunday, March 15, 2009

CSSLP Essay 1 - Secure Software Testing

Below is my first essay for the CSSLP certification. The subject area is Secure Software Testing.

Secure Software Testing
Testing for security is different from conventional testing. A conventional bug is a feature not working as specified, whereas security bugs are often hidden in functionality that was never specified at all. To find security bugs the tester needs to look outside the specification and ask, "What does the software also do?" This is illustrated in a Venn diagram by Thompson and Whittaker [1].


Fault Injection
One way of performing security testing is to use a proxy layer between the operating system and the application. System and library calls are monitored and manipulated through this proxy layer to test application behavior. This technique is often called black-box fault injection: black-box because the tester doesn't inspect the inner workings of the application, and fault injection because the application is forced into a faulty state. This kind of testing is good for exercising exception handling, an area where security bugs typically hide.
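As a rough illustration of the idea (not of the proxy-layer tooling itself), the following Python sketch forces a library call to fail in order to exercise an exception handling path. The function read_config and the configuration path are hypothetical stand-ins for the code under test.

import builtins
from unittest.mock import patch

# Hypothetical function under test: reads a configuration file and is
# expected to fall back to a safe default if the file cannot be read.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""  # safe default instead of crashing

# Fault injection: intercept the open() library call and force it to fail,
# then observe how the code under test handles the injected fault.
def test_read_config_handles_io_fault():
    with patch.object(builtins, "open", side_effect=OSError("injected fault")):
        result = read_config("/etc/myapp.conf")
    assert result == "", "exception handling failed under injected I/O fault"

if __name__ == "__main__":
    test_read_config_handles_io_fault()
    print("fault injection test passed")

The value of the proxy-layer approach is that faults can be injected without modifying the application, but the pass/fail logic is the same as above: the application must degrade gracefully when a call it depends on misbehaves.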

Fuzzing
Another security testing technique is fuzzing. Input validation is one of the most important intrusion prevention techniques, and fuzzing tests the input validation. Fuzzing was invented and investigated by Miller et al. in 1990 [2]. Miller's fuzzing has two important characteristics:
  1. Input to the application under test is random, and
  2. If the application crashes, the test fails; all other cases are considered a pass.
This means that fuzzing quite easily lends itself to automation.

Fuzzing isn't specifically targeted at security, but rather at reliability and robustness. Still, an application crash caused by a stream of random input is often a symptom of a security bug such as a buffer overflow.
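Sketched in Python under the two characteristics above, an automated Miller-style fuzz loop might look like the following. The target command "cat" is only a placeholder for the utility under test; on POSIX systems a negative return code means the process was killed by a signal, which is counted as a crash.

import random
import subprocess

# Minimal Miller-style fuzzer: feed streams of random bytes to a
# command-line utility and treat only a crash as a failed test.
def fuzz(target_cmd, iterations=100, max_len=10000):
    crashes = []
    for i in range(iterations):
        data = bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))
        proc = subprocess.run(target_cmd, input=data,
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        # A negative return code means the process was killed by a signal
        # (for example SIGSEGV) -- the "crash" outcome that fails the test.
        if proc.returncode < 0:
            crashes.append((i, -proc.returncode, data))
    return crashes

if __name__ == "__main__":
    # "cat" is only a stand-in; point this at the utility under test.
    for case, signum, data in fuzz(["cat"]):
        print("iteration %d: killed by signal %d, input length %d"
              % (case, signum, len(data)))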

Static Source Code Analysis
Testing involves running the code under test. Closely related, however, is static source code analysis (static analysis for short), where the application isn't executed but rather evaluated symbolically. Static analysis is the foundation of optimizing compilers, where the source code is analyzed to extract its properties and its actual effect on input and output, for instance, "Can the value of variable a ever be assigned to variable b?" As long as the effect of the code is preserved, the optimizer can reorganize CPU instructions in any way it finds more efficient.

This kind of symbolic analysis of what the code does can also be used for security analysis. The code can either be checked for the absence of bad coding practice or for conformance to good coding practice, as discussed in our paper [3].
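The dependence graph approach in [3] is beyond the scope of a short example, but a toy illustration of the first variant, checking for the absence of bad coding practice, can be sketched in Python using its ast module. The list of "dangerous" calls is purely illustrative, not a complete security policy.

import ast

# Calls considered bad practice in this toy checker; illustrative only.
DANGEROUS_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

def call_name(node):
    # Return a dotted name such as 'os.system' for a call node, if possible.
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return "%s.%s" % (func.value.id, func.attr)
    return None

def check_source(source, filename="<string>"):
    # Parse the source without executing it and report every dangerous call.
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call) and call_name(node) in DANGEROUS_CALLS:
            findings.append((filename, node.lineno, call_name(node)))
    return findings

if __name__ == "__main__":
    sample = "import os\nos.system('rm -rf ' + user_input)\n"
    for fname, line, name in check_source(sample, "sample.py"):
        print("%s:%d: call to %s is considered bad practice" % (fname, line, name))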

Testing and Analysis Tools Don’t Fix Bugs
An important note on both security testing and code analysis is that they only detect potential security bugs. The developer still needs to investigate whether it is an actual security bug, how severe it is, and how to fix it. Tools for testing and analysis don’t fix the bugs, they merely report them. Therefore it’s important that the tools used can explain why something is considered a security bug and how such a bug can be fixed.


References
[1] H. H. Thompson and J. A. Whittaker, "Testing for Software Security," Dr. Dobb's Journal, 27(11):24-32, November 2002.

[2] B. P. Miller, L. Fredriksen, and B. So, "An Empirical Study of the Reliability of UNIX Utilities," Communications of the ACM, 33(12), December 1990.

[3] J. Wilander and P. Fåk, "Pattern Matching Security Properties of Code using Dependence Graphs," in Proceedings of the 1st International Workshop on Code Based Software Security Assessments (CoBaSSA 2005), Pittsburgh, Pennsylvania, USA, pages 5-8, November 7, 2005.

Submitted by John Wilander (john.wilander@omegapoint.se) in partial fulfillment of the CSSLP experience assessment requirements.
