Attack Surface Analysis Cheat Sheet

Revision as of 12:14, 27 August 2012 by Jim Bird



Introduction

First draft work in progress - very rough

This article describes a simple and pragmatic way of doing Attack Surface Analysis and managing an application's Attack Surface.

Attack Surface Analysis is about mapping out what parts of a system need to be reviewed and tested for security vulnerabilities. The point of Attack Surface Analysis is to understand the risk areas in an application, to make developers and security specialists aware of the Attack Surface, to find ways of minimizing it, and to notice when and how it changes and what this means from a risk perspective.

Attack Surface Analysis helps you to:

  1. identify what you need to review/test for security vulnerabilities
  2. identify high risk areas of code that require defense-in-depth protection
  3. identify when you’ve changed the attack surface and need to do some kind of threat assessment

Defining the Attack Surface of an Application

The Attack Surface is how an attacker can get into a system, and how data can get out of it.

The Attack Surface of an application is:

  1. the sum of all paths for data/commands into and out of the application, and the code that protects these paths (including resource connection and authentication, authorization, activity logging, data validation and encoding); and
  2. all confidential and sensitive data used in the application, including secrets and keys, critical business data and PII, and
  3. the code that protects these data (including encryption and checksums, access auditing, and data integrity and operational security controls).

Group each type of attack point into buckets based on risk (Internet-facing or internal-facing), purpose, implementation, design and technology. You can then count the number of attack points of each type, choose some representative cases for each type, and focus your review/assessment on those cases.
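As a rough sketch of this bucketing and counting (the endpoint inventory, exposure labels and bucket names below are invented for the example, not a standard taxonomy):

```python
from collections import Counter

# Hypothetical endpoint inventory: (endpoint, exposure, type) tuples.
endpoints = [
    ("GET /login",        "internet", "auth form"),
    ("POST /login",       "internet", "auth form"),
    ("POST /search",      "internet", "inquiry form"),
    ("GET /admin/users",  "internal", "admin interface"),
    ("POST /api/orders",  "internet", "custom API"),
]

# Count attack points per (exposure, type) bucket.
buckets = Counter((exposure, kind) for _, exposure, kind in endpoints)

for (exposure, kind), count in sorted(buckets.items()):
    print(f"{exposure:8} {kind:16} {count}")
```

Review effort can then be budgeted per bucket (e.g. deep review of one representative auth form, lighter checks on the rest) rather than per endpoint.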

You overlay this model with the different types of users - roles, privilege levels - that can use the system. Complexity increases with the number of different types of users. But it is important to focus especially on the two extremes: unauthenticated, anonymous users and highly privileged admin users.

With this approach, you don't need to understand every endpoint in order to understand the Attack Surface and the potential risk profile of a system. Instead, you can count the general type of endpoints and the number of points of each type. With this you can budget what it will take to assess risk at scale, and you can tell when the risk profile of an application has significantly changed.

Understanding, mapping and defining the Attack Surface

You can start by capturing the Attack Surface baseline in a picture and notes. Spend a few hours reviewing design and architecture documents from an attack surface perspective.

Review the architecture and identify the different points of entry/exit:

  • UI forms and fields
  • APIs
  • Files
  • Databases
  • ….?

To make it manageable, break this into different types based on function and technology:

  • Login/authentication code and entry points
  • Admin interfaces
  • Inquiry forms
  • Data entry/CRUD forms
  • Shopping/business flow forms…

You can map the attack surface by scanning the system from the outside or from the inside: spidering/crawling the application (outside), or code analysis (inside).

For web apps you can use a tool like Arachni or Skipfish or w3af or one of the many commercial dynamic testing and vulnerability scanning tools or services to crawl your app and map the attack surface – at least the part of the system that is accessible over the web. Or better, get an appsec expert to review the application and pen test it so that you understand the attack surface and real vulnerabilities.
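As a minimal illustration of the kind of information a crawler collects, the sketch below uses Python's standard-library `html.parser` to pull the forms and input field names out of one page. This is only a toy to show the idea; real scanners like Arachni or Skipfish follow links, handle sessions, and do far more.

```python
from html.parser import HTMLParser

class FormMapper(HTMLParser):
    """Collect form actions and their input field names from one HTML page."""
    def __init__(self):
        super().__init__()
        self.forms = []  # list of {"action": ..., "inputs": [...]}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append({"action": attrs.get("action", ""), "inputs": []})
        elif tag == "input" and self.forms:
            self.forms[-1]["inputs"].append(attrs.get("name", ""))

page = """
<form action="/login" method="post">
  <input name="user"><input name="password">
</form>
"""
mapper = FormMapper()
mapper.feed(page)
print(mapper.forms)  # one form: /login, with user and password fields
```

Each discovered form/field is an entry point to add to the attack surface inventory.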

The attack surface model will be rough and incomplete to start, especially if you haven’t done any security work on the system before. Use what you have and fill in the holes as the team makes changes to the attack surface. But how do you know when you are changing the attack surface?

Measuring and Assessing the Attack Surface

You need to understand and identify risks and to know when the application's risk profile changes.

Once you have a map of the attack surface, identify the high risk areas. Focus on remote entry points – interfaces with outside systems and to the Internet – and especially where the system allows anonymous, public access.

  • Network-facing, especially internet-facing code
  • Web forms – rich text input fields (hard to validate)
  • Files from outside of the network
  • Backwards compatible interfaces with other systems – old protocols, sometimes old code and libraries, hard to maintain and test multiple versions
  • Custom APIs – protocols etc – likely to have mistakes in design and implementation
  • Security code: anything to do with crypto, authentication and session management

This is where you are most exposed to attack. Then understand what compensating controls you have in place – operational controls like network firewalls and application firewalls, and intrusion detection or prevention systems – to help protect your app.
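The high-risk criteria above can be turned into a simple triage sketch. The flag names below are assumptions made up for the example; the idea is just to count how many high-risk criteria each entry point matches so the most exposed ones rise to the top.

```python
# Flags corresponding to the high-risk criteria listed above (illustrative names).
HIGH_RISK_FLAGS = {
    "internet_facing", "anonymous_access", "rich_text_input",
    "external_files", "legacy_protocol", "custom_protocol", "security_code",
}

def risk_score(entry_point_flags):
    """Count how many high-risk criteria an entry point matches."""
    return len(HIGH_RISK_FLAGS & set(entry_point_flags))

upload = {"internet_facing", "anonymous_access", "external_files"}
report = {"internal", "authenticated"}

print(risk_score(upload))  # 3: review this one first
print(risk_score(report))  # 0
```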

The attack surface can be counted and measured – Microsoft and SAP, for example, have both done work on measuring attack surfaces.

Michael Howard says that different applications have different ways to measure the attack surface, and that this should be done build to build so you can track it over time:

  • Operating systems: open ports, services/daemons
  • Databases: stored procedures
  • Productivity/office software: macros, renderers for different file formats (converters)
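Tracking these counts build to build can be sketched as a simple diff of per-category counts. The categories and numbers below are invented for the example; any change in the counts is a signal to ask what changed and why.

```python
def attack_surface_delta(previous, current):
    """Report which attack surface counts changed between two builds."""
    changes = {}
    for category in set(previous) | set(current):
        before, after = previous.get(category, 0), current.get(category, 0)
        if before != after:
            changes[category] = after - before
    return changes

build_41 = {"open ports": 3, "services": 12, "stored procedures": 40}
build_42 = {"open ports": 4, "services": 12, "stored procedures": 38}
print(attack_surface_delta(build_41, build_42))
# open ports +1, stored procedures -2: investigate the new port
```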

Managing the Attack Surface

Once you have a baseline understanding of the Attack Surface, you can use it to incrementally identify and manage risks going forward as you make changes to the application. Ask yourself:

  • What has changed?
  • What are you doing differently? (technology, new approach, ….)
  • What holes could you have opened?

If you add another HTML form of a known type, following the same design and using the same technology, you will know how much security testing and review it needs. If you add a new web services API, or a file that can be uploaded from the Internet, each of these changes has a different risk profile - see if the change fits in an existing bucket, and whether the existing controls/protections apply. If you're adding something that doesn't fall into an existing bucket, you have to go through a more thorough risk assessment.
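This triage decision can be sketched as a simple routing rule (the bucket names are made up for the example): changes that fit a known bucket inherit its existing review plan, anything else is escalated.

```python
# Buckets with established controls and known review effort (illustrative).
known_buckets = {"html form", "rest api", "admin page"}

def triage(change_type):
    """Route a change: reuse an existing review plan, or escalate."""
    if change_type in known_buckets:
        return f"known bucket '{change_type}': apply existing controls/review"
    return f"new bucket '{change_type}': full threat/risk assessment needed"

print(triage("html form"))
print(triage("file upload"))  # not a known bucket: escalate
```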

As you add new user types or roles or privilege levels, you do the same kind of analysis and risk assessment. Overlay the type of access across the data and functions and look for problems and inconsistencies. It's important to understand the access model for the application, whether it is positive (access is deny by default) or negative (access is allow by default). In a positive access model, any mistakes in defining what data or functions are permitted to a new user type or role are easy to see. In a negative access model, you have to be much more careful to ensure that a user does not get access to data or functions that they should not be permitted to use.
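The positive (deny-by-default) model can be illustrated with a minimal permission check; the roles, resources and actions below are invented for the example. Note how an unknown role or missing grant simply falls through to deny, so a forgotten grant fails safe.

```python
# Positive access model: only explicitly granted (role, resource) actions pass.
permissions = {
    ("clerk",   "orders"): {"read"},
    ("manager", "orders"): {"read", "write"},
}

def allowed(role, resource, action):
    """Deny unless the grant for (role, resource) explicitly includes action."""
    return action in permissions.get((role, resource), set())

print(allowed("manager", "orders", "write"))  # True: explicitly granted
print(allowed("clerk",   "orders", "write"))  # False: no write grant
print(allowed("intern",  "orders", "read"))   # False: unknown role, deny by default
```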

There are two different ways that you can manage changes and risks:

  • Negative: threat modeling – what holes could you have opened? Look at possible attacks and vulnerabilities…
  • Positive: a checklist approach based on design fundamentals/guidelines, such as Microsoft's security design guidelines – authentication, authorization, data validation, error handling, configuration management, sensitive data/privacy, cryptography, auditing and logging

This kind of threat or risk assessment can be done periodically, or as part of design work in serial / phased / spiral / waterfall development projects, or continuously and incrementally in Agile / iterative development.

Normally, an application's Attack Surface will increase over time as you add more interfaces and user types and integrate with other systems. You should also look for ways to minimize the Attack Surface when you can: by simplifying the model (for example, reducing the number of user levels), by turning off features and interfaces that aren't being used, by introducing operational controls such as a Web Application Firewall (WAF), and by not storing confidential data that you don't have to keep.