Code Review Introduction

OWASP Code Review Guide Table of Contents

Preface
This document is not a “How to perform a secure code review” walkthrough but rather a guide on how to perform a successful review. Knowing the mechanics of code inspection is half the battle, but I’m afraid people are the other half.

A proper code review will not only identify vulnerabilities, but will also assess which vulnerabilities are at the greatest risk of exploitation.

This document describes how to make the most of a secure code review.

Introduction
The only way to develop secure software, and to keep it secure into the future, is to make security part of the design. When cars are designed, safety is considered, and safety is now a big selling point for people buying a new car; “How safe is it?” is a question a potential buyer may ask. Look also at the advertising referring to the “star” safety rating a brand/model of car has.

Unfortunately, the software industry is not as evolved, and hence people still buy software without paying any regard to the security aspect of the application.

Every day more and more vulnerabilities are discovered in popular applications which we all know and use, and even use for private transactions over the web.

This document is not written from a purist’s point of view. You may not agree with everything in it, but experience shows it is rare that we have the luxury of being purists in the real world.

Many forces in the business world do not see the value in spending a proportion of the budget on security and factoring some security into the project timeline.

The usual one-liners we hear in the wilderness:

“We never get hacked (that I know of), we don’t need security” 

“We never get hacked, we got a firewall”.

''Question: “How much does security cost?” Answer: “How much does no security cost?”''

"Not to know is bad; not to wish to know is worse."

Code inspection is a fairly low-level approach to securing code but is very effective.

The Basics: What we know we don’t know and what we know we know.
"...we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know." - Donald Rumsfeld.

What is Secure Code Review?
Secure code review is the process of auditing the code of an application on a line-by-line basis for its security quality. Code review is a way of ensuring that the application is developed in an appropriate fashion, so as to be “self-defending” in its given environment.

Secure code review is a method of assuring that application developers are following secure development techniques. A general rule of thumb is that a penetration test should not discover any additional application vulnerabilities relating to the developed code after the application has undergone a proper secure code review.

Secure code review is a manual process. It is labor intensive and not very scalable, but it is accurate if performed by humans (and, so far, not by tools alone).

Tools can be used to perform this task, but they always need human verification. Tools do not understand context, which is the keystone of secure code review. Tools are good at assessing large amounts of code and pointing out possible issues, but a person needs to verify every single result, weeding out false positives and watching for false negatives.

There are many source code review tool vendors. None have created a “silver bullet” at a reasonable cost; vendor tools that are effective cost upwards of $60,000 USD per developer seat.
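To make the importance of context concrete, here is a hedged, hypothetical sketch (the table and function names are invented for illustration): a scanner will typically flag any string-built SQL query, but only a human reviewer can confirm which findings are exploitable and recommend the parameterized fix.

```python
# Hypothetical finding from an automated scan: string-built SQL.
# A reviewer must confirm whether untrusted input actually reaches the query.
import sqlite3

def find_user_unsafe(conn, username):
    # True positive: untrusted 'username' is concatenated straight into SQL,
    # so input such as "' OR '1'='1" alters the query's logic.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The fix a reviewer would recommend: a parameterized query, where the
    # driver treats 'username' strictly as data, never as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
    payload = "' OR '1'='1"
    print(len(find_user_unsafe(conn, payload)))  # injection returns every row
    print(len(find_user_safe(conn, payload)))    # parameterized query returns none
```

A tool would flag both functions identically as “dynamic SQL”; the reviewer’s job is the triage that the tool cannot do.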

Code review can be broken down into a number of discrete phases.


 * 1) Discovery (Pre Transaction analysis).
 * 2) Transactional analysis.
 * 3) Post transaction analysis.
 * 4) Procedure peer review.
 * 5) Reporting & Presentation.

Laying the groundwork
(Ideally the reviewer should be involved in the design phase of the application, but this is not always possible so we assume the reviewer was not.)

There are two scenarios to look at.


 * 1) The security consultant was involved since project inception and has guided and helped integrate security into the SDLC.
 * 2) The security consultant is brought into the project near its end, is presented with a mountain of code, and has no insight into the design, functionality, or business requirements.

So we’ve got 100K lines of code for secure code inspection; how do we handle this?

The most important step is collaboration with developers. Obtaining information from the developers is the most efficient method for performing an accurate code review.

Performing code review can feel like an audit, and everybody hates being audited. The way to approach this is to create an atmosphere of collaboration between the reviewer, the development team and any stakeholders. Portraying an image of an advisor and not a policeman is very important if you wish to get full co-operation from the development team.

“Help me help you” is the approach and ideal that needs to be communicated.

Discovery: Gathering the information
As mentioned above, talking to developers is arguably the most accurate and the quickest way of gaining insight into the application.

A culture of collaboration between the security analyst and the development team is important to establish.

Design documents, business requirements, functional specifications, and any other documentation relating to the code being reviewed can be helpful.

Before we start:

The reviewer(s) need to be familiar with:
 * 1) Code: The language used, the features and issues of that language from a security perspective. The issues one needs to look out for and best practices from a security and performance perspective.
 * 2) Context: They need to be familiar with the application being reviewed. All security is in the context of what we are trying to secure. Recommending military-grade security mechanisms for an application that vends apples would be overkill, and out of context. What type of data is being manipulated or processed, and what would the damage to the company be if this data were compromised? Context is the “Holy Grail” of secure code inspection and risk assessment…we’ll see more later.
 * 3) Audience: From 2) above, we need to know the users of the application: is it externally facing or internal to “trusted” users? Does this application talk to other entities (machines/services)? Do humans use this application?
 * 4) Importance: The availability of the application is also important. Would the enterprise be affected in any great way if the application were “bounced” or shut down for a significant (or even insignificant) amount of time?

Context, Context, Context
The context in which the application is intended to operate is a very important issue in establishing potential risk.

Defining context should provide us with the following information:
 * Establish the importance of the application to the enterprise.
 * Establish the boundaries of the application context.
 * Establish the trust relationships between entities.
 * Establish potential threats and possible countermeasures.

So we can establish something akin to a threat model, taking into account where our application sits, what it is expected to do, and who uses it.

Simple questions like:

“What type of data or asset does the application contain, and how sensitive is it?”

This is a keystone to security and assessing possible risk to the application. How desirable is the information? What effect would it have on the enterprise if the information were compromised in any way?

“Is the application internal or external facing?”, “Who uses the application; are they trusted users?”

This can give a false sense of security, as attacks by internal/trusted users take place more often than is acknowledged. It does give us the context that the application should be limited to a finite number of identified users, but it’s no guarantee that these users will all behave properly.

“Where does the application host sit?”

Users should not be allowed past the DMZ into the LAN without being authenticated. Internal users also need to be authenticated. No authentication = no accountability and a weak audit trail.

If there are internal and external users, what are the differences from a security standpoint? How do we identify one from another? How does authorisation work?

“How important is this application to the enterprise?”

Is the application of minor significance, or a Tier A / mission-critical application without which the enterprise would fail? Any good web application development policy would have additional requirements for applications of differing importance to the enterprise. It would be the analyst’s job to ensure the policy is also followed from a code perspective.

A useful approach is to present the team with a checklist which asks the relevant questions pertaining to any web application.

The Checklist
Defining a generic checklist which can be filled out by the development team is of high value if the checklist asks the correct questions in order to give us context. The checklist should cover the “usual suspects” in application security, such as:


 * Authentication
 * Authorization
 * Data Validation (another “holy grail”)
 * Session management
 * Logging
 * Error handling
 * Cryptography
 * Topology (where this application sits in the network context).

An example can be found here: [[Media:DesignReviewChecklist.doc‎]]. The checklist is a good barometer for the level of security the developers have attempted or thought of.
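As an illustration of the “Data Validation” item above, here is a minimal sketch (the account-ID format is an invented assumption) of the allow-list style of input validation a reviewer hopes to find in the code:

```python
# Hypothetical allow-list validation: accept only input matching the
# expected format, rather than trying to block known-bad patterns.
import re

# Assumed format for this example: two uppercase letters followed by six digits.
ACCOUNT_ID = re.compile(r"[A-Z]{2}[0-9]{6}")

def is_valid_account_id(value: str) -> bool:
    """Return True only if 'value' matches the expected account-ID format."""
    return ACCOUNT_ID.fullmatch(value) is not None
```

During the review, the analyst checks that every piece of untrusted input passes through a validation routine like this before it reaches queries, file paths, or output.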