AppSecEU08 Scanstud - Evaluating static analysis tools
In late 2007 and early 2008, the Siemens CERT and the security group at the University of Hamburg (SVS) jointly conducted a project to evaluate the capabilities of commercial static analysis tools with respect to finding security vulnerabilities in source code.
For this purpose, a mature evaluation methodology was developed which allows:
- Automatic test execution and evaluation
- Easy and reliable test-case creation
- Deterministic correlation between individual test cases and the respective tool responses
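The deterministic correlation between seeded test cases and tool responses might be sketched as follows. This is a minimal illustration, not the project's actual harness: the report format (file/line pairs), the test-case IDs, and the file names are all hypothetical assumptions.

```python
# Hedged sketch: match a scanner's reported findings against the known
# locations of seeded flaws, yielding true positives, false negatives,
# and false positives. Formats and names are illustrative assumptions.

def correlate(expected, findings):
    """expected: {test_case_id: (file, line)} of seeded flaws.
    findings: list of (file, line) locations reported by the tool.
    Returns (true_positive_ids, false_negative_ids, false_positive_locs)."""
    reported = set(findings)
    tp = {tc for tc, loc in expected.items() if loc in reported}
    fn = set(expected) - tp
    fp = reported - set(expected.values())
    return tp, fn, fp

# Hypothetical example: two seeded flaws, one found, plus a spurious report.
expected = {"TC-001": ("tc001.c", 12), "TC-002": ("tc002.c", 7)}
findings = [("tc001.c", 12), ("lib.c", 3)]
tp, fn, fp = correlate(expected, findings)
```

Because each test case contains exactly one seeded flaw at a known location, a tool response either hits that location or it does not, which is what makes the correlation deterministic.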
The talk will present our methodology, our approach to creating suitable test cases, and our experiences with the actual evaluation.
Note: We won't present the precise results of the evaluation, as we do not consider the actual outcome particularly valuable. The result of such an evaluation is always only a snapshot of evidence that ages quickly (becoming invalid with the next version of the respective tools). However, we will share general information about our results (overall performance of the tools, the average ratio of false negatives to false positives, differences between C and Java analysis, anecdotes, and trivia).
The talk will be presented by one or more of the following individuals: