__NOTOC__
 
<div style="width:100%;height:160px;border:0,margin:0;overflow: hidden;">[[File:Cheatsheets-header.jpg|link=]]</div>
 
  
{| style="padding: 0;margin:0;margin-top:10px;text-align:left;" |-
| valign="top" style="border-right: 1px dotted gray;padding-right:25px;" |

The Cheat Sheet Series project has been moved to [https://github.com/OWASP/CheatSheetSeries GitHub]!

Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}'''

__TOC__{{TOC hidden}}

Please visit [https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Abuse_Case_Cheat_Sheet.md Abuse Case Cheat Sheet] to see the latest version of the cheat sheet.
 
 
= Introduction =
 
 
 
Often when the security level of an application is mentioned in requirements, the following ''expressions'' are encountered:
 
* ''The application must be secure''.
 
* ''The application must defend against all attacks targeting this category of application''.
 
* ''The application must defend against attacks from the OWASP TOP 10''.
 
* ...
 
 
 
These security requirements are too generic to be useful to a development team...
 
 
 
To build a secure application, from a pragmatic point of view, it is important to identify the attacks against which the application must defend, according to its business and technical context.
 
 
 
= Objective =
 
 
 
The objective of this cheat sheet is to explain what an '''Abuse Case''' is, why abuse cases are important when considering the security of an application and, finally, to propose a pragmatic approach to build a list of abuse cases and track them for every feature planned to be implemented as part of an application, whatever project methodology is used (waterfall or agile).
 
 
 
'''Important note about this Cheat Sheet:'''
 
<pre>
 
The main objective is to provide a pragmatic approach allowing a company or a project team to start building and handling a list of abuse cases, and then customize the proposed elements to its own context/culture in order to, finally, build its own method.



This cheat sheet can be seen as a getting-started tutorial.</pre>
 
 
 
= Context & approach =
 
 
 
== Why clearly identify the attacks? ==
 
 
 
Clearly identifying the attacks against which the application must defend is essential in order to enable the following steps in a project or sprint:
 
* Evaluate the business risk for each of the identified attacks in order to perform a selection according to the business risk and the project/sprint budget.
 
* Derive security requirements and add them into the project specification or sprint's user stories acceptance criteria.
 
* Estimate the overhead to provision in the initial project/sprint budget that will be necessary to implement the countermeasures.
 
* Allow the project team to define the countermeasures and decide where they should be positioned (network, infrastructure, code...).
 
 
 
 
 
== Notion of Abuse Case ==
 
 
 
In order to help build the list of attacks, the notion of '''Abuse Case''' exists.
 
 
 
An '''Abuse Case''' can be defined as:
 
 
 
<pre>A way to use a feature that was not expected by the implementer, allowing an attacker to influence the feature or outcome of use of the feature based on the attacker action (or input).</pre>
 
 
 
Synopsys defines an '''Abuse Case''' as follows:
 
 
 
<pre>Misuse and abuse cases describe how users misuse or exploit the weaknesses of controls in software features to attack an application.
 
This can lead to tangible business impact when business functionalities, which may bring in revenue or provide a positive user experience, are directly attacked.
 
Abuse cases can also be an effective way to drive security requirements that lead to proper protection of these critical business use cases.</pre>
 
 
 
Synopsys source: https://www.synopsys.com/blogs/software-security/abuse-cases-can-drive-security-requirements/
 
 
 
Another definition of Abuse Case by Cigital: https://cigital.com/papers/download/misuse-bp.pdf
 
 
 
== How to define the list of Abuse Cases? ==
 
 
 
There are many different ways to define the list of abuse cases for a feature (that can be mapped to a user story in agile mode).
 
 
 
 
 
The project [[OWASP_SAMM_Project|OWASP Open SAMM]] proposes the following approach in the ''Activity A'' of the Security Practice ''Threat Assessment'' for the Maturity level 2:
 
 
 
<pre>
 
Further considering the threats to the organization, conduct a more formal analysis to determine potential misuse or abuse of functionality. Typically, this process begins with identification of normal usage scenarios, e.g. use-case diagrams if available.
 
 
 
If a formal abuse-case technique isn’t used, generate a set of abuse-cases for each scenario by starting with a statement of normal usage and brainstorming ways in which the statement might be negated, in whole or in part. The simplest way to get started is to insert the word “no” or “not” into the usage statement in as many ways as possible, typically around nouns and verbs. Each usage scenario should generate several possible abuse-case statements.
 
 
 
Further elaborate the abuse-case statements to include any application-specific concerns based on the business function of the software. The ultimate goal is for the completed set of abuse statements to form a model for usage patterns that should be disallowed by the software. If desired, these abuse cases can be combined with existing threat models.
 
 
 
After initial creation, abuse-case models should be updated for active projects during the design phase. For existing projects, new requirements should be analyzed for potential abuse, and existing projects should opportunistically build abuse-cases for established functionality where practical.
 
</pre>
 
 
 
Open SAMM source: [[SAMM_-_Threat_Assessment_-_2|Threat Assessment Level 2 Activity A]]
 
 
 
Another, more hands-on and collaborative, way to build the list is the following:
 
 
 
Hold a workshop that includes people with the following profiles:

* '''Business analyst''': Will be the key business people who will describe each feature from a business point of view.

* '''Risk analyst''': Will be the company's risk personnel who will evaluate the business risk of a proposed attack (depending on the company, this is sometimes the '''Business analyst''').

* '''Offensive profile (pentester or application security specialist with an offensive mindset)''': Will play the ''attacker'' and propose all the attacks that can be performed on the business feature presented. If the company does not have this profile, it is possible to bring in an external specialist (pentester or AppSec consultant from a security firm). If possible, include two offensive profiles (e.g. 1 pentester + 1 AppSec) in order to increase the number of possible attacks that will be identified and considered.

* '''Technical leaders of the projects''': Will be the project technical people and will allow technical exchange about the attacks and countermeasures identified during the workshop.

* '''Quality assurance analyst or functional tester''': Personnel that may have a good sense of how the application/functionality is intended to work (positive testing) and what things cause it to fail (failure cases).
 
 
 
 
 
During this workshop (duration will depend on the size of the feature list, but 4 hours is a good start) all business features that will be part of the project or the sprint will be processed. The output of the workshop will be a list of attacks (abuse cases) for all business features. All abuse cases will have a risk rating that will allow for filtering and prioritization.
 
 
 
It is important to take into account both '''Technical''' and '''Business''' abuse cases and to mark them accordingly.
 
 
 
''Example:''
 
 
 
* Technical flagged abuse case: Add Cross Site Scripting injection into a comment input field.
 
* Business flagged abuse case: Ability to arbitrarily modify the price of an article in an online shop before placing an order, causing the user to pay a lower amount for the wanted article.
 
 
 
== When to define the list of Abuse Cases? ==
 
 
 
On an agile project, the definition workshop must be held after the meeting in which the User Stories are assigned to a Sprint.
 
 
 
On a waterfall project, the definition workshop must be held once the business features to implement are identified and known by the business.
 
 
 
 
 
Whatever the project methodology used (agile or waterfall), the abuse cases selected to be addressed must become security requirements in each feature specification section (waterfall) or User Story acceptance criteria (agile) in order to allow additional cost/effort evaluation and the identification and implementation of the countermeasures.
 
 
 
 
 
Each abuse case must have a unique identifier in order to allow tracking of its handling throughout the whole project/sprint; details about this point are given in the proposal section.
 
 
 
An example of unique ID can be '''ABUSE_CASE_001'''.
 
 
 
 
 
The following schema provides an overview of the chaining of the different steps involved (from left to right):
 
 
 
[[File:ABUSE_CASE_CS_CHAINING_SCHEMA.png|center]]
 
 
 
= Proposition =
 
 
 
The proposal will use the workshop explained in the previous section and will focus on its output.
 
 
 
== Step 1: Preparation of the workshop ==
 
 
 
First, even if it seems obvious, the key business people must be sure to know, understand and be able to explain the business features that will be processed during the workshop.
 
 
 
Secondly, create a new Microsoft Excel file (you can also use Google Sheets or any other similar software) with the following sheets:
 
* '''FEATURES'''

** Will contain a table with the list of business features planned for the workshop.

* '''ABUSE CASES'''

** Will contain a table with all the abuse cases identified during the workshop.

* '''COUNTERMEASURES'''

** Will contain a table with the list of possible countermeasures (short description) imagined for the identified abuse cases.

** This sheet is not mandatory, but it can be useful to know whether, for a given abuse case, a fix is easy to implement, which can then influence the risk rating.

** A countermeasure can be identified by the AppSec profile during the workshop, because an AppSec specialist must be able to perform attacks but also to build defenses (this is not always the case for the pentester profile, which generally focuses on the attack side; this is why the pentester + AppSec combination is very efficient for a 360-degree view).
 
 
 
 
 
This is the representation of each sheet, along with an example of the content that will be filled in during the workshop:
 
 
 
''FEATURES'' sheet:
 
 
 
{| class="wikitable"
 
! style="text-align: center; font-weight:bold;" | Feature unique ID
 
! style="text-align: center; font-weight:bold;" | Feature name
 
! style="text-align: center; font-weight:bold;" | Feature short description
 
|-
 
| FEATURE_001
 
| DocumentUploadFeature
 
| Allow user to upload document along a message
 
|}
 
 
 
 
 
''COUNTERMEASURES'' sheet:
 
 
 
{| class="wikitable"
 
! style="text-align: center; font-weight:bold;" | Countermeasure unique ID
 
! style="text-align: center; font-weight:bold;" | Countermeasure short description
 
! style="text-align: center; font-weight:bold;" | Countermeasure help/hint
 
|-
 
| DEFENSE_001
 
| Validate the uploaded file by loading it into a parser
 
| Use advice from the OWASP Cheat Sheet about file upload
 
|}
 
 
 
 
 
''ABUSE CASES'' sheet:
 
 
 
{| class="wikitable"
 
! style="text-align: center; font-weight:bold;" | Abuse case unique ID
 
! style="text-align: center; font-weight:bold;" | Feature ID impacted
 
! style="text-align: center; font-weight:bold;" | Abuse case's attack description
 
! style="text-align: center; font-weight:bold;" | Attack referential ID (if applicable)
 
! style="text-align: center; font-weight:bold;" | CVSS V3 risk rating (score)
 
! style="text-align: center; font-weight:bold;" | CVSS V3 string
 
! style="text-align: center; font-weight:bold;" | Kind of abuse case
 
! style="text-align: center; font-weight:bold;" | Countermeasure ID applicable
 
! style="text-align: center; font-weight:bold;" | Handling decision (To Address or Risk Accepted)
 
|-
 
| ABUSE_CASE_001
 
| FEATURE_001
 
| Upload Office file with malicious macro in charge of dropping a malware
 
| CAPEC-17
 
| HIGH (7.7)
 
| CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:C/C:N/I:H/A:H
 
| Technical
 
| DEFENSE_001
 
| To Address
 
|}
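
The three sheets above can be bootstrapped with a short script. Below is a minimal sketch that uses one CSV file per sheet (CSV stands in for the Excel/Google Sheets file so the example stays dependency-free; the file and folder names are illustrative assumptions):

```python
import csv
from pathlib import Path

# One entry per workshop sheet: header row plus the example row
# from the tables above. File names are illustrative.
SHEETS = {
    "FEATURES.csv": [
        ["Feature unique ID", "Feature name", "Feature short description"],
        ["FEATURE_001", "DocumentUploadFeature",
         "Allow user to upload document along a message"],
    ],
    "COUNTERMEASURES.csv": [
        ["Countermeasure unique ID", "Countermeasure short description",
         "Countermeasure help/hint"],
        ["DEFENSE_001", "Validate the uploaded file by loading it into a parser",
         "Use advice from the OWASP Cheat Sheet about file upload"],
    ],
    "ABUSE_CASES.csv": [
        ["Abuse case unique ID", "Feature ID impacted",
         "Abuse case's attack description", "Attack referential ID",
         "CVSS V3 risk rating (score)", "CVSS V3 string",
         "Kind of abuse case", "Countermeasure ID applicable",
         "Handling decision"],
        ["ABUSE_CASE_001", "FEATURE_001",
         "Upload Office file with malicious macro in charge of dropping a malware",
         "CAPEC-17", "HIGH (7.7)",
         "CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:C/C:N/I:H/A:H",
         "Technical", "DEFENSE_001", "To Address"],
    ],
}

def create_workbook(folder):
    """Write each sheet to its own CSV file inside 'folder'."""
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    for name, rows in SHEETS.items():
        with open(folder / name, "w", newline="", encoding="utf-8") as f:
            csv.writer(f).writerows(rows)

if __name__ == "__main__":
    create_workbook("abuse_case_workshop")
```

The same column layout can then be pasted into Excel or Google Sheets if a spreadsheet format is preferred for the workshop itself.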
 
 
 
== Step 2: During the workshop ==
 
 
 
Use the Excel file to review all the features.
 
 
 
For each feature, follow this flow:
 
# The key business people explain the current feature from a business point of view.

# The offensive profiles propose and explain a set of attacks that they can perform against the feature.

# For each attack proposed:

## The offensive profiles propose a countermeasure and a preferred set-up location (infrastructure, network, code, design...).

## The technical key people of the project give feedback about the feasibility of the proposed countermeasure.

## The offensive profiles use the CVSS v3 calculator to determine a risk rating: https://www.first.org/cvss/calculator/3.0

## The risk key people accept/increase/decrease the rating to obtain a final rating that matches the real business impact for the company.

# The business, risk and technical key people find a consensus and filter the list of abuse cases for the current feature, keeping the ones that must be addressed and flagging them accordingly in the ''ABUSE CASES'' sheet ('''if the risk is accepted then add a comment to explain why''').

# Pass to the next feature...
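
The CVSS v3 rating step can also be cross-checked offline. Below is a minimal sketch of the CVSS v3.0 base-score formula (metric weights and the "round up to one decimal" rule come from the public CVSS v3.0 specification; only the base metrics are handled), applied to the example abuse case from the sheet:

```python
import math

# Metric weights from the CVSS v3.0 specification (base metrics only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {  # Privileges Required: the weight depends on Scope
    "U": {"N": 0.85, "L": 0.62, "H": 0.27},  # Scope unchanged
    "C": {"N": 0.85, "L": 0.68, "H": 0.50},  # Scope changed
}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def base_score(vector):
    """Compute the CVSS v3.0 base score from a vector string."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    exploitability = (8.22 * AV[m["AV"]] * AC[m["AC"]]
                      * PR[m["S"]][m["PR"]] * UI[m["UI"]])
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    if m["S"] == "C":  # Scope changed
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    score = min((1.08 if m["S"] == "C" else 1.0)
                * (impact + exploitability), 10)
    return math.ceil(score * 10) / 10  # "round up" to one decimal

if __name__ == "__main__":
    # The abuse case from the example sheet: prints 7.7 (HIGH)
    print(base_score("CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:C/C:N/I:H/A:H"))
```

On the example vector this yields the same HIGH (7.7) rating as the online calculator.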
 
 
 
 
 
If the presence of offensive profiles is not possible, then you can use the following attack references/guides to identify the attacks applicable to your features:
 
* '''OWASP Automated Threats to Web Applications''': https://www.owasp.org/index.php/OWASP_Automated_Threats_to_Web_Applications
 
* '''OWASP Testing Guide''': https://www.owasp.org/index.php/OWASP_Testing_Guide_v4_Table_of_Contents
 
* '''OWASP Mobile Testing Guide''': https://github.com/OWASP/owasp-mstg
 
* '''Common Attack Pattern Enumeration and Classification (CAPEC)''': https://capec.mitre.org/
 
 
 
 
 
Important note on attacks and countermeasure knowledge base:
 
<pre>
 
Over time and across projects, you will obtain your own dictionary of attacks and countermeasures that are applicable to the kind of applications in your business domain.

This dictionary will speed up subsequent workshops in a significant way.

To promote the creation of this dictionary, you can, at the end of the project/sprint, gather the list of attacks and countermeasures identified in a central location (wiki, database, file...) that will be used during the next workshops in combination with the input of the offensive profiles.</pre>
 
 
 
== Step 3: After the workshop ==
 
 
 
At this stage, the Excel file contains the list of all the abuse cases that must be handled and, potentially, how to handle them, depending on the capacity to find countermeasures.
 
 
 
Now, there are 2 remaining tasks:

# The key business people must update the specification of each feature (waterfall) or the User Story of each feature (agile) to include the associated abuse cases as Security Requirements (waterfall) or Acceptance Criteria (agile).

# The technical key people must evaluate the overhead in terms of charge/effort needed to implement the countermeasures.
 
 
 
== Step 4: During implementation - Abuse cases handling tracking ==
 
 
 
In order to track the handling of all the abuse cases kept in the selection, the following approach can be used:
 
 
 
If one or several abuse cases are handled at:
 
* '''Design, Infrastructure or Network level'''
 
** Put a marker in the documentation or schema to indicate that ''This design/network/infrastructure takes into account the abuse cases ABUSE_CASE_001, ABUSE_CASE_002, ABUSE_CASE_xxx''.
 
* '''Code level'''
 
** Put a special comment in the classes/scripts/modules to indicate that ''This class/module/script takes into account the abuse cases ABUSE_CASE_001, ABUSE_CASE_002, ABUSE_CASE_xxx''.

** A dedicated annotation like <code>@AbuseCase(ids={"ABUSE_CASE_001","ABUSE_CASE_002"})</code> can be used to facilitate tracking and allow identification in integrated development environments.
 
 
 
This way, it becomes possible (via some minor scripting) to identify where the abuse cases are addressed.
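
As a sketch of that minor scripting, assuming the markers above are present in comments or <code>@AbuseCase</code> annotations, a short script can map each abuse case ID to the files that reference it (the set of file extensions scanned is an illustrative assumption):

```python
import re
from collections import defaultdict
from pathlib import Path

# Abuse case IDs follow the ABUSE_CASE_001 naming convention
# proposed earlier in this cheat sheet.
ID_PATTERN = re.compile(r"ABUSE_CASE_\d{3}")

def find_abuse_case_markers(root, extensions=(".java", ".py", ".js", ".md")):
    """Map each abuse case ID to the set of files that reference it."""
    coverage = defaultdict(set)
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            text = path.read_text(encoding="utf-8", errors="ignore")
            for case_id in ID_PATTERN.findall(text):
                coverage[case_id].add(str(path))
    return coverage

# Example usage on a project source tree:
# for case_id, files in sorted(find_abuse_case_markers("src").items()):
#     print(f"{case_id}: addressed in {len(files)} file(s)")
```

Comparing the IDs found by such a scan against the ''ABUSE CASES'' sheet quickly shows which selected abuse cases have no marker anywhere yet.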
 
 
 
== Step 5: During implementation - Abuse cases handling validation ==
 
 
 
As abuse cases are defined, it is possible to put in place automated or manual validations to ensure that:
 
* All the selected abuse cases are handled.
 
* Each abuse case is correctly handled.
 
 
 
 
 
Validations can be of the following kinds:
 
 
 
* Automated (run regularly at commit time, daily or weekly in the Continuous Integration jobs of the project):
 
** Custom audit rules in Static Application Security Testing (SAST) or Dynamic Application Security Testing (DAST) tools.
 
** Dedicated unit, integration or functional security oriented tests.
 
** ...
 
* Manual:
 
** Security code review between project peers during the design or the implementation.
 
** Provide the list of all the addressed abuse cases to the pentesters so that they validate the protection efficiency for each abuse case during an intrusion test against the application (the pentesters will validate that the identified attacks are no longer effective and will also try to find other possible attacks).
 
** ...
 
 
 
Adding automated tests also allows the team to verify that countermeasures against the abuse cases are still effective/in place during the maintenance or bug-fixing phases of a project (and prevents their accidental removal/disabling). It is also useful when a Continuous Delivery approach is used (https://continuousdelivery.com/), to ensure that all abuse case protections are in place before opening access to the application.
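
A minimal sketch of such a dedicated security-oriented test, tied to the example abuse case ABUSE_CASE_001: here <code>validate_upload()</code> is a hypothetical stand-in for the project's real upload validation (DEFENSE_001); only the traceability pattern between the test and the abuse case ID matters:

```python
import unittest

# Hypothetical toy rule standing in for DEFENSE_001: reject
# macro-enabled Office file extensions on upload.
MACRO_ENABLED_EXTENSIONS = {".docm", ".xlsm", ".pptm"}

def validate_upload(filename):
    """Return True if the uploaded file name is accepted."""
    return not any(filename.lower().endswith(ext)
                   for ext in MACRO_ENABLED_EXTENSIONS)

class AbuseCase001Test(unittest.TestCase):
    """Covers ABUSE_CASE_001 / DEFENSE_001 (see the ABUSE CASES sheet)."""

    def test_macro_enabled_document_is_rejected(self):
        self.assertFalse(validate_upload("invoice.docm"))

    def test_plain_document_is_accepted(self):
        self.assertTrue(validate_upload("invoice.docx"))

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(AbuseCase001Test)
    unittest.TextTestRunner().run(suite)
```

Running such tests in the Continuous Integration jobs makes any regression on an abuse case countermeasure visible immediately.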
 
 
 
= Sources of the schemas =
 
 
 
All the schemas have been created using the https://www.draw.io/ site and exported as PNG images for integration into this article.
 
 
 
All the XML descriptor files for each schema are available below (using the XML descriptor, the schema can be modified on the DRAW.IO site):
 
 
 
[[Media:ABUSE_CASE_CS_SCHEMA.zip|Schemas descriptors archive]]
 
 
 
= Authors and Primary Editors =
 
 
 
Dominique Righetto - dominique.righetto@owasp.org
 
 
 
= Other Cheatsheets =
 
 
 
{{Cheatsheet_Navigation_Body}}
 
 
 
|}
 
 
 
[[Category:Cheatsheets]]
 
