While the growth of web applications has been extraordinary, no structured methodology exists for evaluating static code scanners. We introduce a structured methodology for evaluating static code scanners, with four core areas of emphasis: compatibility, vulnerability detection, reporting functionality, and usability. Our methodology enables development teams to compare multiple tools using the same processes and against the same test bed. We leverage WebGoat, a well-accepted deliberately vulnerable application, so that all vulnerabilities are explicitly known and false positives and false negatives can be measured as part of the evaluation. The methodology can be applied to any static code scanner and, when followed correctly, scores every tool with the same criteria and scoring processes. We completed a case study evaluating two static source code scanners to illustrate the methodology and to verify that the appropriate criteria and scoring mechanisms are present.