Code Review Introduction


Introduction
Code review is probably the single most effective technique for identifying security flaws. When used together with automated tools and manual penetration testing, code review can significantly increase the cost-effectiveness of an application security verification effort.

This document does not prescribe a process for performing a security code review. Rather, this guide focuses on the mechanics of reviewing code for certain vulnerabilities, and provides limited guidance on how the effort should be structured and executed. OWASP intends to develop a more detailed process in a future version of this guide.

Manual security code review provides insight into the “real risk” associated with insecure code. This is the single most important value from a manual approach. A human reviewer can understand the context for certain coding practices, and make a serious risk estimate that accounts for both the likelihood of attack and the business impact of a breach.

Why Does Code Have Vulnerabilities?
MITRE has catalogued almost 700 different kinds of software weaknesses in its Common Weakness Enumeration (CWE) project. These are all different ways that software developers can make mistakes that lead to insecurity. Every one of these weaknesses is subtle, and many are seriously tricky. Software developers are not taught about these weaknesses in school, and most do not receive any training on the job about these problems.

These problems have become so important in recent years because we continue to increase connectivity and to add technologies and protocols at a shocking rate. Our ability to invent technology has seriously outstripped our ability to secure it. Many of the technologies in use today simply have not received any security scrutiny.

There are many reasons why businesses are not spending the appropriate amount of time on security. Ultimately, these reasons stem from an underlying problem in the software market. Because software is essentially a black box, it is extremely difficult to tell the difference between good code and insecure code. Without this visibility, buyers won’t pay more for secure code, and vendors would be foolish to spend extra effort to produce secure code.

One goal for this project is to help software buyers gain visibility into the security of software and start to effect change in the software market.

Nevertheless, we still frequently get pushback when we advocate for security code review. Here are some of the (unjustified) excuses that we hear for not putting more effort into security:

 * “We never get hacked (that I know of), we don’t need security.”
 * “We have a firewall that protects our applications.”
 * “We trust our employees not to attack our applications.”

Over the last 10 years, the team involved with the OWASP Code Review Project has performed thousands of application reviews and found that every single application has had serious vulnerabilities. If you haven’t reviewed your code for security holes, the likelihood that your application has problems is virtually 100%. Still, there are many organizations that choose not to know about the security of their code. To them, we offer Rumsfeld’s cryptic explanation of what we actually know. If you’re making informed decisions to take risk in your enterprise, we fully support you. However, if you don’t even know what risks you are taking, you are being irresponsible both to your shareholders and your customers.

What is Secure Code Review?
Secure code review is the process of auditing the code of an application on a line-by-line basis for its security quality. Code review is a way of ensuring that the application is developed in an appropriate fashion so as to be “self-defending” in its given environment.

Secure code review is a method of assuring that application developers are following secure development techniques. A general rule of thumb is that a penetration test should not discover any additional application vulnerabilities relating to the developed code after the application has undergone a proper secure code review.

Secure code review is a manual process. It is labor-intensive and not very scalable, but it is accurate when performed by skilled humans; tools, so far, are not.

Tools can be used to perform this task, but they always need human verification. Tools do not understand context, which is the keystone of secure code review. Tools are good at assessing large amounts of code and pointing out possible issues, but a person needs to verify every single result and weed out the false positives and the false negatives.
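To make this concrete, here is a minimal, hypothetical sketch (in Python, using SQLite; the function names are illustrative, not from any tool) of the kind of finding a tool raises and a human confirms: a scanner can flag string-built SQL, but only a reviewer can confirm whether the input is actually attacker-controlled, and whether a parameterized variant is safe in context.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # A scanner flags this line: untrusted input concatenated into SQL.
    # A reviewer must confirm the data is attacker-controlled (true positive).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()

def find_user_safe(conn, username):
    # Parameterized query: the driver keeps data out of the SQL grammar.
    # If a tool still flags this, a reviewer can dismiss it as a false positive.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchone()
```

Feeding the classic payload `x' OR '1'='1` to the first function returns a row it should not; the second returns nothing. Distinguishing those two outcomes is exactly the verification step the tool cannot do alone.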

There are many source code review tool vendors, but none has created a “silver bullet” at a reasonable cost. Vendor tools that are effective can cost upwards of $60,000 USD per developer seat.

Code review can be broken down into a number of discrete phases:


 1. Discovery (pre-transaction analysis).
 2. Transactional analysis.
 3. Post-transaction analysis.
 4. Procedure peer review.
 5. Reporting & presentation.

Laying the groundwork
(Ideally the reviewer should be involved in the design phase of the application, but this is not always possible so we assume the reviewer was not.)

There are two scenarios to look at:


 1. The security consultant was involved since project inception and has guided and helped integrate security into the SDLC.
 2. The security consultant is brought into the project near its end, is presented with a mountain of code, and has no insight into the design, functionality, or business requirements.

So we’ve got 100K lines of code for secure code inspection; how do we handle this?

The most important step is collaboration with developers. Obtaining information from the developers is the most efficient method for performing an accurate code review.

Performing code review can feel like an audit, and everybody hates being audited. The way to approach this is to create an atmosphere of collaboration between the reviewer, the development team, and any stakeholders. Portraying the image of an advisor, not a policeman, is very important if you wish to get full cooperation from the development team.

“Help me help you” is the approach and ideal that needs to be communicated.

Discovery: Gathering the information
As mentioned above, talking to developers is arguably the most accurate and the quickest way of gaining insight into the application.

A culture of collaboration between the security analyst and the development team is important to establish.

Design documents, business requirements, functional specifications, and any other documentation relating to the code being reviewed can be helpful.

Before we start:

The reviewer(s) need to be familiar with:
 1. Code: the language used, and the features and issues of that language from a security perspective; the issues one needs to look out for, and best practices from a security and performance perspective.
 2. Context: they need to be familiar with the application being reviewed. All security is in the context of what we are trying to secure. Recommending military-standard security mechanisms for an application that vends apples would be overkill and out of context. What type of data is being manipulated or processed, and what would the damage to the company be if this data were compromised? Context is the “Holy Grail” of secure code inspection and risk assessment…we’ll see more later.
 3. Audience: from 2 (above), we need to know the users of the application: is it externally facing, or internal to “trusted” users? Does this application talk to other entities (machines/services)? Do humans use this application?
 4. Importance: the availability of the application is also important. Would the enterprise be greatly affected if the application were “bounced” or shut down for a significant (or even insignificant) amount of time?

Context, Context, Context
The context in which the application is intended to operate is a very important issue in establishing potential risk.

Defining context should provide us with the following information:
 * Establish the importance of the application to the enterprise.
 * Establish the boundaries of the application context.
 * Establish the trust relationships between entities.
 * Establish potential threats and possible countermeasures.

So we can establish something akin to a threat model, taking into account where our application sits, what it is expected to do, and who uses it.

Simple questions like:

“What type of data/asset is contained in the application, and how sensitive is it?”

This is a keystone to security and assessing possible risk to the application. How desirable is the information? What effect would it have on the enterprise if the information were compromised in any way?

“Is the application internal or external facing?”, “Who uses the application; are they trusted users?”

This can give a false sense of security, as attacks by internal/trusted users take place more often than is acknowledged. It does give us context that the application should be limited to a finite number of identified users, but it’s not a guarantee that these users will all behave properly.

“Where does the application host sit?”

Users should not be allowed past the DMZ into the LAN without being authenticated. Internal users also need to be authenticated. No authentication = no accountability and a weak audit trail.

If there are internal and external users, what are the differences from a security standpoint? How do we identify one from the other? How does authorisation work?
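The “no authentication = no accountability” point can be sketched in code. The following is a hedged illustration only (the request shape, function names, and logger are all hypothetical, not a prescribed design): a guard that refuses anonymous requests and writes an audit record tying every action to an identity.

```python
import functools
import logging

audit_log = logging.getLogger("audit")

def require_auth(handler):
    """Reject unauthenticated requests and record who invoked what."""
    @functools.wraps(handler)
    def wrapper(request):
        user = request.get("user")  # hypothetical request shape: a dict
        if user is None:
            # No identity means no accountability: refuse and log the refusal.
            audit_log.warning("denied anonymous call to %s", handler.__name__)
            raise PermissionError("authentication required")
        # Audit trail: every successful action is attributable to a user.
        audit_log.info("%s invoked %s", user, handler.__name__)
        return handler(request)
    return wrapper

@require_auth
def view_account(request):
    return "balance for " + request["user"]
```

A reviewer checking this question looks for exactly such a choke point: one place where identity is enforced and recorded, rather than per-page ad-hoc checks.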

“How important is this application to the enterprise?”

Is the application of minor significance, or a Tier A / mission-critical application without which the enterprise would fail? Any good web application development policy would have additional requirements for applications of differing importance to the enterprise. It would be the analyst’s job to ensure the policy was followed from a code perspective as well.

A useful approach is to present the team with a checklist, which asks the relevant questions pertaining to any web application.

The Checklist
Defining a generic checklist, which can be filled out by the development team, is of high value as the checklist asks the correct questions in order to give us context. The checklist should cover the “Usual Suspects” in application security such as:


 * Authentication
 * Authorization
 * Data Validation (another “holy grail”)
 * Session Management
 * Logging
 * Error Handling
 * Cryptography
 * Topology (where is this application in the network context).
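Because data validation recurs in almost every review, it is worth sketching what “good” looks like when the checklist reaches that item. A minimal, hedged example (the username rule here is a hypothetical policy, not a standard) of allow-list validation, which accepts only what matches an explicit pattern and rejects everything else:

```python
import re

# Hypothetical policy: usernames are 3-20 letters, digits, or underscores.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(value: str) -> bool:
    # Allow-list check: anything outside the pattern is rejected,
    # including quotes, semicolons, and other injection metacharacters.
    return USERNAME_RE.fullmatch(value) is not None
```

When reviewing against the checklist, the reviewer asks whether each input is validated this way (allow-list) rather than by stripping known-bad characters, which is far easier to bypass.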